6645742 | <filename>scripts/er_unmatched_test.py<gh_stars>0
#%% [markdown]
# # A density-based test
# Here, we compare the two unmatched networks by treating each as an Erdos-Renyi network
# and simply comparing their estimated densities.
#%% [markdown]
# ## The Erdos-Renyi (ER) model
# The [**Erdos-Renyi (ER) model**
# ](https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model)
# is one of the simplest network models. This model treats
# the probability of each potential edge in the network occurring as the same. In
# other words, an edge between any two nodes is equally likely.
#
# ```{admonition} Math
# Let $n$ be the number of nodes. We say that for all $(i, j), i \neq j$, with $i$ and
# $j$ both running from $1$ to $n$, the probability of the edge $(i, j)$ occurring is:
# $$ P[A_{ij} = 1] = p_{ij} = p $$
# where $p$ is the global connection probability.
# Each element of the adjacency matrix $A$ is then sampled independently according to a
# [Bernoulli distribution](https://en.wikipedia.org/wiki/Bernoulli_distribution):
# $$ A_{ij} \sim Bernoulli(p) $$
# For a network modeled as described above, we say it is distributed
# $$ A \sim ER(n, p) $$
# ```
# Thus, for this model, the only parameter of interest is the global connection
# probability, $p$. This is sometimes also referred to as the **network density**.
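As a quick illustration (a sketch, not part of the original analysis), a directed ER network can be sampled and its density recovered; the node count and connection probability below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 0.1  # arbitrary illustration values

# sample each off-diagonal entry of A independently as Bernoulli(p)
A = (rng.random((n, n)) < p).astype(int)
np.fill_diagonal(A, 0)  # no self-loops: only pairs (i, j) with i != j

# the density estimate is the fraction of possible (directed) edges present
p_hat = A.sum() / (n * (n - 1))
print(p_hat)
```

With $n(n-1) = 39{,}800$ independent Bernoulli trials, the estimate lands very close to the true $p$.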
#%% [markdown]
# ## Testing under the ER model
# In order to compare two networks $A^{(L)}$ and $A^{(R)}$ under this model, we
# simply need to compute these network densities ($p^{(L)}$ and $p^{(R)}$), and then
# run a statistical test to see if these densities are significantly different.
# ```{admonition} Math
# Under this
# model, the total number of edges $m$ comes from a $Binomial(n(n-1), p)$ distribution,
# where $n$ is the number of nodes. This is because the number of edges is the sum of
# independent Bernoulli trials with the same probability. If $m^{(L)}$ is the number of
# edges on the left
# hemisphere, and $m^{(R)}$ is the number of edges on the right, then we have:
# $$m^{(L)} \sim Binomial(n^{(L)}(n^{(L)} - 1), p^{(L)})$$
# and independently,
# $$m^{(R)} \sim Binomial(n^{(R)}(n^{(R)} - 1), p^{(R)})$$
# To compare the two networks, we are just interested in a comparison of $p^{(L)}$ vs.
# $p^{(R)}$. Formally, we are testing:
# $$H_0: p^{(L)} = p^{(R)}, \quad H_a: p^{(L)} \neq p^{(R)}$$
# Fortunately, the problem of testing for equal proportions is well studied.
# In our case, we will use Fisher's Exact test to run this test for the null and
# alternative hypotheses above.
# ```
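A minimal sketch of the test itself, using made-up edge counts rather than the hemisphere data: the 2x2 contingency table pits edges against non-edges for each network, and Fisher's exact test compares the two proportions.

```python
from scipy.stats import fisher_exact

# hypothetical counts, for illustration only
m_left, possible_left = 1200, 50000
m_right, possible_right = 1300, 51000

# rows: networks; columns: edges present vs. edges absent
table = [[m_left, possible_left - m_left],
         [m_right, possible_right - m_right]]
stat, pvalue = fisher_exact(table, alternative="two-sided")
print(stat, pvalue)
```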
#%%
from pkg.utils import set_warnings
set_warnings()
import datetime
import time
import matplotlib.pyplot as plt
import numpy as np
from myst_nb import glue as default_glue
from pkg.data import load_network_palette, load_node_palette, load_unmatched
from pkg.io import savefig
from pkg.plot import set_theme
from pkg.stats import erdos_renyi_test
from scipy.stats import binom
from statsmodels.stats.proportion import proportion_confint
DISPLAY_FIGS = False
FILENAME = "er_unmatched_test"
def gluefig(name, fig, **kwargs):
savefig(name, foldername=FILENAME, **kwargs)
glue(name, fig, prefix="fig")
if not DISPLAY_FIGS:
plt.close()
def glue(name, var, prefix=None):
savename = f"{FILENAME}-{name}"
if prefix is not None:
savename = prefix + ":" + savename
default_glue(savename, var, display=False)
t0 = time.time()
set_theme(font_scale=1.25)
network_palette, NETWORK_KEY = load_network_palette()
node_palette, NODE_KEY = load_node_palette()
left_adj, left_nodes = load_unmatched("left")
right_adj, right_nodes = load_unmatched("right")
#%%
n_nodes_left = left_adj.shape[0]
n_nodes_right = right_adj.shape[0]
n_possible_left = n_nodes_left ** 2 - n_nodes_left
n_possible_right = n_nodes_right ** 2 - n_nodes_right
glue("n_possible_left", n_possible_left)
glue("n_possible_right", n_possible_right)
n_edges_left = np.count_nonzero(left_adj)
n_edges_right = np.count_nonzero(right_adj)
density_left = n_edges_left / n_possible_left
density_right = n_edges_right / n_possible_right
glue("density_left", density_left)
glue("density_right", density_right)
left_binom = binom(n_possible_left, density_left)
right_binom = binom(n_possible_right, density_right)
#%%
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
ax.bar(0, density_left, color=network_palette["Left"])
ax.bar(1, density_right, color=network_palette["Right"])
coverage = 0.99
coverage_percentage = coverage * 100
glue("coverage_percentage", coverage_percentage)
left_lower, left_upper = proportion_confint(
n_edges_left, n_possible_left, alpha=1 - coverage, method="beta"
)
right_lower, right_upper = proportion_confint(
n_edges_right, n_possible_right, alpha=1 - coverage, method="beta"
)
ax.plot([0, 0], [left_lower, left_upper], color="black", linewidth=4)
ax.plot([1, 1], [right_lower, right_upper], color="black", linewidth=4)
ax.set(
xlabel="Network",
xticks=[0, 1],
xticklabels=["Left", "Right"],
ylabel=r"Estimated density ($\hat{p}$)",
)
gluefig("er_density", fig)
#%%
stat, pvalue, _ = erdos_renyi_test(left_adj, right_adj)
glue("pvalue", pvalue)
#%% [markdown]
# ## Reject bilateral symmetry under the ER model
#%% [markdown]
# ```{glue:figure} fig:er_unmatched_test-er_density
# :name: "fig:er_unmatched_test-er_density"
#
# Comparison of estimated densities for the left and right hemisphere networks. The
# estimated density (probability of any edge across the entire network), $\hat{p}$, for
# the left
# hemisphere is ~{glue:text}`er_unmatched_test-density_left:0.3f`, while for the right
# it is
# ~{glue:text}`er_unmatched_test-density_right:0.3f`. Black lines denote
# {glue:text}`er_unmatched_test-coverage_percentage`**%**
# confidence intervals for this estimated parameter $\hat{p}$. The p-value for testing
# the null hypothesis that these densities are the same is
# {glue:text}`er_unmatched_test-pvalue:0.3g` (two
# sided Fisher's exact test).
# ```
#%% [markdown]
# {numref}`Figure {number} <fig:er_unmatched_test-er_density>` shows the comparison of
# the network densities between the left and right hemisphere induced subgraphs. We see
# that the density on the left is ~{glue:text}`er_unmatched_test-density_left:0.3f`, and
# on the right it is ~{glue:text}`er_unmatched_test-density_right:0.3f`. To determine
# whether this is a difference likely to be observed by chance under the ER model,
# we ran a two-sided Fisher's exact test, which tests whether the success probabilities
# between two independent binomials are significantly different. This test yields a
# p-value of {glue:text}`er_unmatched_test-pvalue:0.3g`, suggesting that we have strong
# evidence to reject this version of our hypothesis of bilateral symmetry. We note that
# while the difference between estimated densities is not massive, this low p-value
# results from the large sample size for this comparison: there are
# {glue:text}`er_unmatched_test-n_possible_left:,.0f` and
# {glue:text}`er_unmatched_test-n_possible_right:,.0f` potential edges on the left and
# right, respectively.
#
# To our knowledge, when neuroscientists have considered the question of bilateral
# symmetry, they have not meant such a simple comparison of proportions. In many ways,
# the ER model is too simple to be an interesting description of connectome structure.
# However, we note that *even the simplest network model* yields a significant
# difference between brain hemispheres for this organism. It is unclear whether this
# difference in densities is biological (e.g. a result of slightly differing rates of
# development for this individual), an artifact of how the data was collected (e.g.
# technological limitations causing slightly lower reconstruction rates on the left
# hemisphere), or something else entirely. Still, the ER test results also provide
# important considerations for other tests. Almost any network statistic (e.g.
# clustering coefficient, number of triangles, etc), as well as many of the model-based
# parameters we will consider in this paper, are strongly related to the network
# density. Thus, if the densities are different, it is likely that tests based on any
# of these other test statistics will also reject the null hypothesis. We will therefore
# need ways of telling whether an observed difference for these other tests could be
# explained by this difference in density alone.
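As an illustration of this point (a simulation sketch, not from the original analysis): two ER networks whose densities differ only slightly will typically also differ in a density-dependent statistic, such as the number of directed 3-cycles.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_er(n, p):
    # directed ER sample: independent Bernoulli(p) off-diagonal entries
    a = (rng.random((n, n)) < p).astype(int)
    np.fill_diagonal(a, 0)
    return a

def three_cycles(a):
    # trace(A^3) counts each directed 3-cycle once per starting node
    return int(np.trace(a @ a @ a)) // 3

a_denser = sample_er(300, 0.020)   # illustrative densities, not the real data
a_sparser = sample_er(300, 0.016)
print(three_cycles(a_denser), three_cycles(a_sparser))
```

The expected 3-cycle count scales with $p^3$, so even a modest density gap propagates into this statistic.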
#%%
elapsed = time.time() - t0
delta = datetime.timedelta(seconds=elapsed)
| StarcoderdataPython |
3396154 | import openmc_source_plotter as osp
import openmc
# initialises a new source object
my_source = openmc.Source()
# sets the location of the source to x=0 y=0 z=0
my_source.space = openmc.stats.Point((0, 0, 0))
# sets the direction to isotropic
my_source.angle = openmc.stats.Isotropic()
# sets the energy distribution to 100% 14MeV neutrons
my_source.energy = openmc.stats.Discrete([14e6], [1])
# makes an initial_source.h5 file with details of the particles
initial_source_filename = osp.create_initial_particles(
source=my_source,
number_of_particles=10,
openmc_exec="/home/jshim/miniconda3/envs/openmc_0_11_0/bin/openmc",
)
# gets the particle coordinates, energy and direction
data = osp.get_particle_data(initial_source_filename)
print(data)
| StarcoderdataPython |
1920799 | from django.contrib import admin
from .models import GamebeaterProfile, GameOwnership
admin.site.register(GamebeaterProfile)
admin.site.register(GameOwnership) | StarcoderdataPython |
102295 | import sys, json, os, subprocess as sp, getpass, logging, argparse
from pathlib import Path
import paramiko
# silence deprecation warnings
# https://github.com/paramiko/paramiko/issues/1386
import warnings
warnings.filterwarnings(action='ignore',module='.*paramiko.*')
logger = logging.getLogger()
logger.setLevel(logging.INFO)
PORT = 8022
CLIENT_KEY = paramiko.RSAKey(filename=Path(os.environ["HOME"], ".ssh", "pshaw_client"))
SERVER_KEY = paramiko.RSAKey(filename=Path(os.environ["HOME"], ".ssh", "pshaw_server"))
SUBSYSTEM = "pshaw"
def recv(pipe):
string = pipe.readline()
#logger.info("recv %s", string)
return json.loads(string)
def send(pipe, object):
string = json.dumps(object)
#logger.info("send %s", string)
pipe.write(string + "\n")
def get_password(realm):
client = paramiko.SSHClient()
client.get_host_keys().add("[localhost]:%s" % PORT, "ssh-rsa", SERVER_KEY)
client.connect("localhost", pkey=CLIENT_KEY, port=PORT)
transport = client.get_transport()
# using channel as a context manager causes it to be closed afterwards, which seems to conflict
# with the close on client or pipe.
#with transport.open_channel("session") as channel:
channel = transport.open_channel("session")
if True:
logger.info("invoking subsystem")
channel.invoke_subsystem("pshaw")
with channel.makefile("rw") as pipe:
send(pipe, realm)
password = recv(pipe)
if password is None:
password = getpass.getpass(prompt="%s password (will be stored in pshawd): " % realm)
send(pipe, password)
client.close()
return password
def main():
parser = argparse.ArgumentParser(description="ssh with password persistence.")
parser.add_argument("realm", help="A name to associate with the password so it can later be retrieved under that name.")
args, command = parser.parse_known_args()
password = get_password(args.realm)
rpipe, wpipe = os.pipe()
os.set_inheritable(rpipe, True)
os.set_blocking(wpipe, False)
os.write(wpipe, password.encode("ascii"))
os.close(wpipe)
os.execvp("sshpass", ["sshpass", "-d%i" % rpipe] + command)
if __name__ == "__main__":
main()
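The file-descriptor handoff at the end of `main()` can be sketched in isolation: the password is written into an inheritable pipe whose read end would be consumed by `sshpass -d<fd>` in the exec'd child; here we simply read it back in-process to show the mechanics.

```python
import os

rpipe, wpipe = os.pipe()
os.set_inheritable(rpipe, True)  # the exec'd child must inherit the read end
os.write(wpipe, b"secret")       # placeholder password, not a real credential
os.close(wpipe)                  # close the write end so the reader sees EOF

# in the real script this descriptor is handed to `sshpass -d%i`;
# here we read it back ourselves
data = os.read(rpipe, 1024)
os.close(rpipe)
print(data)
```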
| StarcoderdataPython |
398201 | <filename>license.py
import sys
import datetime
if len(sys.argv) != 3:
    print('Usage: license.py version_num filename', file=sys.stderr)
    sys.exit(1)
version = sys.argv[1]
year = datetime.date.today().year
s = """\
LICENSE AGREEMENT FOR MATPLOTLIB %(version)s
--------------------------------------
1. This LICENSE AGREEMENT is between <NAME> ("JDH"), and the
Individual or Organization ("Licensee") accessing and otherwise using
matplotlib software in source or binary form and its associated
documentation.
2. Subject to the terms and conditions of this License Agreement, JDH
hereby grants Licensee a nonexclusive, royalty-free, world-wide license
to reproduce, analyze, test, perform and/or display publicly, prepare
derivative works, distribute, and otherwise use matplotlib %(version)s
alone or in any derivative version, provided, however, that JDH's
License Agreement and JDH's notice of copyright, i.e., "Copyright (c)
2002-%(year)d <NAME>; All Rights Reserved" are retained in
matplotlib %(version)s alone or in any derivative version prepared by
Licensee.
3. In the event Licensee prepares a derivative work that is based on or
incorporates matplotlib %(version)s or any part thereof, and wants to
make the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to matplotlib %(version)s.
4. JDH is making matplotlib %(version)s available to Licensee on an "AS
IS" basis. JDH MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, JDH MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF MATPLOTLIB %(version)s
WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
5. JDH SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF MATPLOTLIB
%(version)s FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR
LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING
MATPLOTLIB %(version)s, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF
THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between JDH and
Licensee. This License Agreement does not grant permission to use JDH
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.
8. By copying, installing or otherwise using matplotlib %(version)s,
Licensee agrees to be bound by the terms and conditions of this License
Agreement.
""" % locals()
with open(sys.argv[2], 'w') as f:
    f.write(s)
| StarcoderdataPython |
337947 | <filename>example.py<gh_stars>100-1000
from pretrained.cifar100 import cifar100
model = cifar100(model='resnet18')
model.test()
# model.test('....test')
# model.train('...train) | StarcoderdataPython |
3295042 | """LinePredictor class"""
from typing import Tuple, Union
import numpy as np
from text_recognizer.models import LineModelCtc
from text_recognizer.datasets import EmnistLinesDataset
import text_recognizer.util as util
class LinePredictor:
"""Given an image of a line of handwritten text, recognizes text contents."""
def __init__(self, dataset_cls=EmnistLinesDataset):
self.model = LineModelCtc(dataset_cls=dataset_cls)
self.model.load_weights()
def predict(self, image_or_filename: Union[np.ndarray, str]) -> Tuple[str, float]:
"""Predict on a single image."""
if isinstance(image_or_filename, str):
image = util.read_image(image_or_filename, grayscale=True)
else:
image = image_or_filename
return self.model.predict_on_image(image)
def evaluate(self, dataset):
"""Evaluate on a dataset."""
return self.model.evaluate(dataset.x_test, dataset.y_test)
| StarcoderdataPython |
11328255 | #!/usr/bin/env python
""" for block as a family example """
import ecf as ecflow
from ecf import (Family, Task, Inlimit, Label, Limit, Repeat, Edit)
PARAMS = ["u", "v", "t", "r", "q", "w"]
def process():
""" provide leaf task """
return Task("process")
def family_for():
""" for family """
return (
Family("for").add(
process(),
Repeat(kind="integer", name="STEP", start=1, end=240, step=3)),
Family("loop").add(
process(),
Repeat("PARAM", PARAMS, kind="string")),
Family("parallel").add(
Limit("lim", 2), Inlimit("lim"),
[Family(param).add(
Edit(PARAM=param),
process().add(
Label("info", param)))
for param in PARAMS]),
Family("explode").add(
Limit("lim", 2),
Inlimit("lim"),
# LIST COMPREHENSION:
[Task("t%d" % num) for num in range(1, 5 + 1)]))
| StarcoderdataPython |
3572766 | import utils
def main():
"""Main."""
hex1 = '1c0111001f010100061a024b53535009181c'
hex2 = '686974207468652062756c6c277320657965'
hex3 = '746865206b696420646f6e277420706c6179'
xor = utils.xor(bytearray.fromhex(hex1), bytearray.fromhex(hex2))
assert bytes(xor).hex() == hex3
print('p2 ok')
if __name__ == '__main__':
main()
| StarcoderdataPython |
3261411 | with open("./datas/lsjz/005161.csv", encoding="utf-8") as f:
print(len(f.readlines())) | StarcoderdataPython |
9771969 | import bs4 as bs
import pickle
import requests
import datetime as dt
import os
import pandas as pd
import pandas_datareader.data as web
import matplotlib.pyplot as plt
from matplotlib import style
#from mplfinance import candlestick_ohlc
from mplfinance.original_flavor import candlestick_ohlc
import matplotlib.dates as mdates
import seaborn as sb
df = pd.read_csv('stock_details/AMZN.csv', index_col=0, parse_dates=True)
def plotdata():
# This snippet will help us to pick the Adjusted Close column of each stock other than our target stock
# which is AMZN, rename the column as the ticker and merge it in our feature set.
# It will produce a feature set like this. The Date is the index and corresponding to the Date,
# each ticker’s “Adj Close” value. Now, We will see there are a few empty columns initially.
# This is because these companies didn’t start to participate in the stock market back in 2010.
# This will give us a feature set of 200 columns containing 199 company’s values and the Date.
# Now, let’s focus on our target stock, AMZN.
# We will start by visualizing each of the given column values for the target stock.
# Now, let’s visualize, our stock using the candlestick notation. I am using Pandas version 0.24.2 for this.
# There may be an issue as in the current versions this module is depreciated.
df_ohlc = df['Adj Close'].resample('10D').ohlc()
# print(df_ohlc.head())
df_volume = df['Volume'].resample('10D').sum()
df_ohlc.reset_index(inplace=True)
df_ohlc['Date'] = df_ohlc['Date'].map(mdates.date2num)
ax1 = plt.subplot2grid((6, 1), (0, 0), rowspan=5, colspan=1)
ax2 = plt.subplot2grid((6, 1), (5, 0), rowspan=1, colspan=1, sharex=ax1)
ax1.xaxis_date()
candlestick_ohlc(ax1, df_ohlc.values, width=2, colorup='g')
ax2.fill_between(df_volume.index.map(mdates.date2num), df_volume.values, 0)
plt.show()
plotdata()
def featuredata():
# Now, let’s devise some features that will help us to predict our target.
# We will calculate the 50-day moving average.
# This characteristic is used by a lot of traders for predictions.
# New column "Moving_av" is added to the dataframe
df['Moving_av'] = df['Adj Close'].rolling(window=50, min_periods=0).mean()
# print(df.head())
df['Moving_av'].plot()
# Now, we will try to obtain two more features, Rate of increase in volume and rate of increase in Adjusted Close for our stock
i = 1
rate_increase_in_vol = [0]
rate_increase_in_adj_close = [0]
while i < len(df):
rate_increase_in_vol.append(df.iloc[i]['Volume'] - df.iloc[i - 1]['Volume'])
rate_increase_in_adj_close.append(df.iloc[i]['Adj Close'] - df.iloc[i - 1]['Adj Close'])
i += 1
df['Increase_in_vol'] = rate_increase_in_vol
df['Increase_in_adj_close'] = rate_increase_in_adj_close
df.to_csv("dataset_target_2.csv", index=False)
# print(df.head())
# df['Increase_in_vol'].plot()
# df['Increase_in_adj_close'].plot()
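The two feature constructions above (rolling mean, and row-over-row increase) can be sketched on a tiny synthetic series; `diff()` reproduces what the explicit while-loop computes.

```python
import pandas as pd

close = pd.Series([10.0, 11.0, 12.0, 11.5, 13.0])  # synthetic prices, not AMZN data

# the script uses window=50; window=3 here so the effect is visible on 5 points
moving_av = close.rolling(window=3, min_periods=0).mean()

# rate of increase: difference from the previous row, 0.0 for the first row
increase = close.diff().fillna(0.0)

print(moving_av.tolist())
print(increase.tolist())
```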
def mergedata():
featuredata()
# Now, our feature file for our target stock is ready.
# Now, we merge both these feature files to make the main feature set.
df1 = pd.read_csv('dataset_target_2.csv')
df3 = pd.read_csv('stock_details/AMZN.csv')
df2 = pd.read_csv('Dataset_temp.csv')
Dates = []
i = 0
while i < len(df3):
Dates.append(df3.iloc[i]['Date'])
i += 1
df_new = df1.join(df2, how='outer')
df_new.fillna(0.0)
df_new['Date'] = Dates
df_new.to_csv('Dataset_main.csv', index=False)
# print(df2.head())
# print(df_new.head())
mergedata() | StarcoderdataPython |
4841713 | <filename>transform_binary_payload/src-payload-decoders/python/dragino_lwl01.py<gh_stars>1-10
import base64
def dict_from_payload(base64_input: str, fport: int = None):
""" Decodes a base64-encoded binary payload into JSON.
Parameters
----------
base64_input : str
Base64-encoded binary payload
fport: int
FPort as provided in the metadata. Please note the fport is optional and can have value "None", if not provided by the LNS or invoking function.
If fport is None and the binary decoder cannot proceed because of that, it should raise an exception.
Returns
-------
JSON object with key/value pairs of decoded attributes
"""
bytes = base64.b64decode(base64_input)
value = (bytes[0] << 8 | bytes[1]) & 0x3FFF
battery = value / 1000
water_leak_status = 0
if bytes[0] & 0x40:
    water_leak_status = 1
door_open_status = 0
if bytes[0] & 0x80:
    door_open_status = 1
mod = bytes[2]
if mod == 1:
open_times = bytes[3] << 16 | bytes[4] << 8 | bytes[5]
open_duration = bytes[6] << 16 | bytes[7] << 8 | bytes[8]
result = {
"mod": mod,
"battery": battery,
"door_open_status": door_open_status,
"open_times": open_times,
"open_duration": open_duration
}
return result
if mod == 2:
leak_times = bytes[3] << 16 | bytes[4] << 8 | bytes[5]
leak_duration = bytes[6] << 16 | bytes[7] << 8 | bytes[8]
result = {
"mod": mod,
"battery": battery,
"leak_times": leak_times,
"leak_duration": leak_duration
}
return result
result = {
    "battery": battery,
    "mod": mod
}
return result | StarcoderdataPython |
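The bit-unpacking in the decoder above can be checked in isolation with a hand-built mod-1 payload (these bytes are invented for illustration, not captured from a real device):

```python
import base64

# battery = 3000 mV, mod = 1 (door sensor), open_times = 5, open_duration = 30
payload = bytes([0x0B, 0xB8, 0x01, 0x00, 0x00, 0x05, 0x00, 0x00, 0x1E])
encoded = base64.b64encode(payload).decode()

# same unpacking arithmetic as the decoder
raw = base64.b64decode(encoded)
battery = ((raw[0] << 8 | raw[1]) & 0x3FFF) / 1000
mod = raw[2]
open_times = raw[3] << 16 | raw[4] << 8 | raw[5]
open_duration = raw[6] << 16 | raw[7] << 8 | raw[8]
print(battery, mod, open_times, open_duration)
```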
3368013 | """
collections.Counter()
A counter is a container that stores elements as dictionary keys, and their counts are stored as dictionary values.
Sample Code
>>> from collections import Counter
>>>
>>> myList = [1,1,2,3,4,5,3,2,3,4,2,1,2,3]
>>> print Counter(myList)
Counter({2: 4, 3: 4, 1: 3, 4: 2, 5: 1})
>>>
>>> print Counter(myList).items()
[(1, 3), (2, 4), (3, 4), (4, 2), (5, 1)]
>>>
>>> print Counter(myList).keys()
[1, 2, 3, 4, 5]
>>>
>>> print Counter(myList).values()
[3, 4, 4, 2, 1]
Task
is a shoe shop owner. His shop has X number of shoes.
He has a list containing the size of each shoe he has in his shop.
There are N number of customers who are willing to pay a given amount of money only if they get the shoe of their desired size.
Your task is to compute how much money was earned.
Input Format
The first line contains X, the number of shoes.
The second line contains the space separated list of all the shoe sizes in the shop.
The third line contains N, the number of customers.
The next N lines contain the space separated values of the shoe size desired by the customer and the price of the shoe.
Constraints
Output Format
Print the amount of money earned by .
Sample Input
10
2 3 4 5 6 8 7 6 5 18
6
6 55
6 45
6 55
4 40
18 60
10 50
Sample Output
200
Explanation
Customer 1: Purchased size 6 shoe for $55.
Customer 2: Purchased size 6 shoe for $45.
Customer 3: Size 6 no longer available, so no purchase.
Customer 4: Purchased size 4 shoe for $40.
Customer 5: Purchased size 18 shoe for $60.
Customer 6: Size 10 not available, so no purchase.
"""
import collections
if __name__ == "__main__":
X = int(input())
shoe_size = collections.Counter(map(int, input().split()))
N = int(input())
assert 0 < X < 10**3 and 0 < N < 10**3
# assert all(2 < i < 20 for i in size) == True
profit = 0
for i in range(N):
size, price = map(int, input().split())
assert 2 < size < 20
assert 20 < price < 100
if size in shoe_size and shoe_size[size] > 0:
profit += price
shoe_size[size] -= 1
print(profit) | StarcoderdataPython |
1755005 | <reponame>StevenHuang2020/ML
#python3
#steven 04/03/2020
import matplotlib.pyplot as plt
import numpy as np
gAlpha = 100
def setPlot():
plt.xticks(np.linspace(0,10,15))
plt.yticks(np.linspace(-4000,4000,15))
def plotXY(x,y):
plt.plot(x,y)
#plt.show()
def getTreeBranch(startPt,slope):
drawLen = 1
x = np.linspace(startPt[0],startPt[0] + drawLen,10)
b = startPt[1]-slope*startPt[0]
y = slope*x + b
ptLast = [x[-1],y[-1]]
plotXY(x,y)
#print('---------------------------start:,',startPt,'last:',ptLast)
return ptLast
def tree(N, startPt, slope, level=1):
level += 1
if N > 0:
lSlope = slope+gAlpha #slope*(1+gAlpha)
rSlope = slope-gAlpha #slope*(1-gAlpha)
#print('gLevel=',gLevel,'slope=',slope,'lSlope=',lSlope,'rSlope=',rSlope,startPt)
ptLeft = getTreeBranch(startPt, lSlope)
tree(N-1,ptLeft,lSlope)
ptRight = getTreeBranch(startPt, rSlope)
tree(N-1,ptRight,rSlope,level)
else:
return
def main():
startPt = [0,0]
slope = 0
N=8
#print(sys.getrecursionlimit())
pt = getTreeBranch(startPt,slope)
tree(N,pt,slope)
setPlot()
plt.show()
if __name__ == "__main__":
main()
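The branch geometry in `getTreeBranch` reduces to drawing a unit-wide segment of the line through a point with a given slope; a standalone check of that arithmetic (start point and slope below are arbitrary):

```python
import numpy as np

start = (2.0, 3.0)   # arbitrary start point
slope = 100.0        # comparable to the gAlpha-sized slopes in the script

x = np.linspace(start[0], start[0] + 1, 10)  # drawLen = 1
b = start[1] - slope * start[0]              # intercept so the line passes through start
y = slope * x + b

print(y[0], y[-1])  # 3.0 103.0: starts at the start point, rises by slope over one unit
```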
| StarcoderdataPython |
6567628 | import unittest
from pathlib import Path
from pangtreebuild.pangenome import graph
from pangtreebuild.pangenome.builders import maf2poagraph
from pangtreebuild.pangenome.parameters import msa
from pangtreebuild.tools import pathtools
def nid(x): return graph.NodeID(x)
def bid(x): return graph.BlockID(x)
class Maf2poagraphTests(unittest.TestCase):
def setUp(self):
metadata_path = Path(__file__).parent.joinpath("../seq_metadata.csv").resolve()
self.metadatacsv = msa.MetadataCSV(pathtools.get_file_content_stringio(metadata_path), metadata_path)
self.maf_files_dir = Path(__file__).parent.joinpath("maf_files").resolve()
def test_1_messy_sequences(self):
maf_path = self.maf_files_dir.joinpath("test_1_messy_sequences.maf")
expected_nodes = [
graph.Node(node_id=nid(0), base=graph.Base('A'), aligned_to=None, block_id=bid(0)),
graph.Node(node_id=nid(1), base=graph.Base('A'), aligned_to=nid(2), block_id=bid(0)),
graph.Node(node_id=nid(2), base=graph.Base('C'), aligned_to=nid(1), block_id=bid(0)),
graph.Node(node_id=nid(3), base=graph.Base('T'), aligned_to=None, block_id=bid(0)),
graph.Node(node_id=nid(4), base=graph.Base('C'), aligned_to=nid(5), block_id=bid(0)),
graph.Node(node_id=nid(5), base=graph.Base('G'), aligned_to=nid(4), block_id=bid(0)),
graph.Node(node_id=nid(6), base=graph.Base('A'), aligned_to=None, block_id=bid(1)),
graph.Node(node_id=nid(7), base=graph.Base('C'), aligned_to=None, block_id=bid(1)),
graph.Node(node_id=nid(8), base=graph.Base('G'), aligned_to=None, block_id=bid(1)),
graph.Node(node_id=nid(9), base=graph.Base('C'), aligned_to=nid(10), block_id=bid(2)),
graph.Node(node_id=nid(10), base=graph.Base('G'), aligned_to=nid(9), block_id=bid(2)),
graph.Node(node_id=nid(11), base=graph.Base('T'), aligned_to=None, block_id=bid(2)),
graph.Node(node_id=nid(12), base=graph.Base('C'), aligned_to=None, block_id=bid(2)),
graph.Node(node_id=nid(13), base=graph.Base('C'), aligned_to=None, block_id=bid(2)),
graph.Node(node_id=nid(14), base=graph.Base('A'), aligned_to=None, block_id=bid(2)),
]
expected_sequences = {
msa.SequenceID('seq0'):
graph.Sequence(msa.SequenceID('seq0'),
[graph.SeqPath([*map(nid, [1, 3, 4, 6, 8, 9, 11, 12])])],
graph.SequenceMetadata({'group': '1'})),
msa.SequenceID('seq1'):
graph.Sequence(msa.SequenceID('seq1'),
[graph.SeqPath([*map(nid, [2, 3, 4, 10, 11, 12, 13, 14])])],
graph.SequenceMetadata({'group': '1'})),
msa.SequenceID('seq2'):
graph.Sequence(msa.SequenceID('seq2'),
[graph.SeqPath([*map(nid, [0, 2, 5, 6, 7, 10, 11, 12, 14])])],
graph.SequenceMetadata({'group': '2'})),
msa.SequenceID('seq3'):
graph.Sequence(msa.SequenceID('seq3'),
[],
graph.SequenceMetadata({'group': '2'}))
}
actual_nodes, actual_sequences = maf2poagraph.get_poagraph(msa.Maf(pathtools.get_file_content_stringio(maf_path),
maf_path),
self.metadatacsv)
self.assertEqual(expected_nodes, actual_nodes)
self.assertEqual(expected_sequences, actual_sequences)
if __name__ == '__main__':
unittest.main()
| StarcoderdataPython |
1941243 | from unittest import TestCase
from slackbot.owners import CarOwners
import os
TEST_CSV = os.path.join(os.path.dirname(__file__), 'dummy_owners.csv')
class TestCarOwners(TestCase):
@classmethod
def setUpClass(cls):
# Make a new file each time
with open(TEST_CSV, 'w') as f:
f.writelines(['"kenteken","slackid","name"\n',
'"AA123Z","U12345","<NAME>"\n'])
cls.car_owners = CarOwners(csv_path=TEST_CSV)
@classmethod
def tearDownClass(cls):
os.remove(TEST_CSV)
def test_lookup_success(self):
result = self.car_owners.lookup('AA123Z')
assert result == {'slackid': 'U12345', 'name': '<NAME>'}
def test_lookup_invalid(self):
with self.assertRaises(AssertionError) as e:
self.car_owners.lookup('TOO-LONG')
assert str(e.exception) == 'Length of the licence plate must be 6 (without any dashes)'
def test_lookup_not_found(self):
result = self.car_owners.lookup('BB123B')
self.assertIsNone(result)
def test_tag_invalid_licence_plate(self):
with self.assertRaises(AssertionError) as e:
self.car_owners.tag('U123456', 'TOOLONG')
assert str(e.exception) == 'Length of the licence plate must be 6 (without any dashes)'
def test_tag_and_untag(self):
new_plate = 'CC333C'
self.car_owners.tag(new_plate, slackid='U123456')
# New entry should be found, and persisted:
assert self.car_owners.lookup(new_plate) == {'slackid': 'U123456', 'name': None}
with open(TEST_CSV) as f:
data = f.readlines()
line = data[-1].strip() # strip off the new-line char
nr_rows = len(data)
assert line == '"CC333C","U123456",""'
# Lets remove this entry:
self.car_owners.untag('U123456', new_plate)
# should not be there anymore (lookup + data-file)
nr_rows_after_untag = sum(1 for _ in open(TEST_CSV))
assert nr_rows_after_untag == (nr_rows - 1)
self.assertIsNone(self.car_owners.lookup(new_plate))
# Should not crash if it does not exists
self.car_owners.untag('U123456', new_plate)
| StarcoderdataPython |
7392 | import django_filters
from django.forms import TextInput
from src.accounts.models import User
from src.application.models import Quiz, StudentGrade
class UserFilter(django_filters.FilterSet):
username = django_filters.CharFilter(widget=TextInput(attrs={'placeholder': 'username'}), lookup_expr='icontains')
first_name = django_filters.CharFilter(widget=TextInput(attrs={'placeholder': 'first name'}), lookup_expr='icontains')
last_name = django_filters.CharFilter(widget=TextInput(attrs={'placeholder': 'last name'}), lookup_expr='icontains')
email = django_filters.CharFilter(widget=TextInput(attrs={'placeholder': 'email'}), lookup_expr='icontains')
class Meta:
model = User
fields = {
'is_active': ['exact']
}
| StarcoderdataPython |
149991 | <filename>apps/core/views.py
from django.views.generic import TemplateView
from directory.models import Organisation
from resources.models import Resource
from .mixins import ResourcesViewMixin
class HomeView(TemplateView, ResourcesViewMixin):
template_name = 'core/home.html'
def get_context_data(self, **kwargs):
context = super().get_context_data()
context['carousel_resources'] = Resource.get_carousel_resources(self.request.user)
context['latest_resource'] = Resource.get_latest(self.request.user)
most_tried_resource = Resource.get_most_tried(self.request.user).first()
context['most_tried'] = most_tried_resource
kwargs = {'user': self.request.user}
if most_tried_resource:
kwargs.update({'exclude': most_tried_resource.id})
context['most_liked'] = Resource.get_most_liked(**kwargs).first()
context['most_published'] = Organisation.get_most_published_this_week()
return context
class SearchView(TemplateView, ResourcesViewMixin):
template_name = 'core/search.html'
| StarcoderdataPython |
9688080 | <filename>otcextensions/tests/unit/osclient/vpc/v2/test_peering.py<gh_stars>1-10
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import mock
from unittest.mock import call
from osc_lib import exceptions
from otcextensions.osclient.vpc.v2 import peering
from otcextensions.tests.unit.osclient.vpc.v2 import fakes
from openstackclient.tests.unit import utils as tests_utils
class TestListVpcPeerings(fakes.TestVpc):
objects = fakes.FakeVpcPeering.create_multiple(3)
column_list_headers = (
'Id',
'Name',
'Status',
'Local Router Id',
'Peer Router Id',
'Peer Project Id',
)
columns = (
'id',
'name',
'status',
'local_router_id',
'peer_router_id',
'peer_project_id'
)
data = []
for s in objects:
data.append(
(s.id, s.name, s.status, s.local_vpc_info['vpc_id'],
s.peer_vpc_info['vpc_id'], s.peer_vpc_info['tenant_id']))
def setUp(self):
super(TestListVpcPeerings, self).setUp()
self.cmd = peering.ListVpcPeerings(self.app, None)
self.client.peerings = mock.Mock()
self.client.api_mock = self.client.peerings
def test_list(self):
arglist = []
verifylist = []
# Verify cm is triggered with default parameters
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# Set the response
self.client.api_mock.side_effect = [self.objects]
# Trigger the action
columns, data = self.cmd.take_action(parsed_args)
self.client.api_mock.assert_called_with()
self.assertEqual(self.column_list_headers, columns)
self.assertEqual(self.data, list(data))
def test_list_args(self):
arglist = [
'--limit', '1',
'--marker', '2',
'--id', '3',
'--name', '4',
'--project-id', '5',
'--router-id', '6',
'--status', 'ACTIVE'
]
verifylist = [
('limit', 1),
('marker', '2'),
('id', '3'),
('name', '4'),
('project_id', '5'),
('router_id', '6'),
('status', 'ACTIVE'),
]
# Verify cm is triggered with default parameters
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# Set the response
self.client.api_mock.side_effect = [self.objects]
# Trigger the action
columns, data = self.cmd.take_action(parsed_args)
self.client.api_mock.assert_called_with(
limit=1,
marker='2',
id='3',
name='4',
project_id='5',
router_id='6',
status='ACTIVE',
)
class TestCreateVpcPeering(fakes.TestVpc):
_data = fakes.FakeVpcPeering.create_one()
columns = (
'id',
'name',
'local_vpc_info',
'peer_vpc_info',
'description',
'created_at',
'updated_at',
'status'
)
data = fakes.gen_data(_data, columns)
def setUp(self):
super(TestCreateVpcPeering, self).setUp()
self.cmd = peering.CreateVpcPeering(self.app, None)
self.client.create_peering = mock.Mock(
return_value=fakes.FakeVpcPeering.create_one())
self.client.get_project_id = mock.Mock(
return_value='test-local-project-uuid')
def test_create_different_project(self):
arglist = [
'test-peering',
'--local-router-id', 'test-local-router-uuid',
'--peer-router-id', 'test-peer-router-uuid',
'--peer-project-id', 'test-peer-project-uuid',
'--description', 'test-peering',
]
verifylist = [
('name', 'test-peering'),
('local_router_id', 'test-local-router-uuid'),
('peer_router_id', 'test-peer-router-uuid'),
('peer_project_id', 'test-peer-project-uuid'),
('description', 'test-peering'),
]
        # Verify cm is triggered with default parameters
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# Trigger the action
columns, data = self.cmd.take_action(parsed_args)
attrs = {
'name': 'test-peering',
'request_vpc_info': {
'vpc_id': 'test-local-router-uuid',
'tenant_id': 'test-local-project-uuid'
},
'accept_vpc_info': {
'vpc_id': 'test-peer-router-uuid',
'tenant_id': 'test-peer-project-uuid'
},
'description': 'test-peering'
}
self.client.create_peering.assert_called_with(**attrs)
self.assertEqual(self.columns, columns)
def test_create_same_project(self):
arglist = [
'test-peering',
'--local-router-id', 'test-local-router-uuid',
'--peer-router-id', 'test-peer-router-uuid',
'--description', 'test-peering',
]
verifylist = [
('name', 'test-peering'),
('local_router_id', 'test-local-router-uuid'),
('peer_router_id', 'test-peer-router-uuid'),
('description', 'test-peering'),
]
        # Verify cm is triggered with default parameters
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# Trigger the action
columns, data = self.cmd.take_action(parsed_args)
attrs = {
'name': 'test-peering',
'request_vpc_info': {
'vpc_id': 'test-local-router-uuid'
},
'accept_vpc_info': {
'vpc_id': 'test-peer-router-uuid'
},
'description': 'test-peering'
}
self.client.create_peering.assert_called_with(**attrs)
self.assertEqual(self.columns, columns)
class TestUpdateVpcPeering(fakes.TestVpc):
_data = fakes.FakeVpcPeering.create_one()
columns = (
'id',
'name',
'local_vpc_info',
'peer_vpc_info',
'description',
'created_at',
'updated_at',
'status'
)
data = fakes.gen_data(_data, columns)
def setUp(self):
super(TestUpdateVpcPeering, self).setUp()
self.cmd = peering.UpdateVpcPeering(self.app, None)
self.client.find_peering = mock.Mock(return_value=self._data)
self.client.update_peering = mock.Mock(return_value=self._data)
def test_update(self):
arglist = [
self._data.name,
'--name', 'test-peering-updated',
'--description', 'vpc peering updated',
]
verifylist = [
('peering', self._data.name),
('name', 'test-peering-updated'),
('description', 'vpc peering updated'),
]
        # Verify cm is triggered with default parameters
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# Trigger the action
columns, data = self.cmd.take_action(parsed_args)
self.client.find_peering.assert_called_with(self._data.name)
self.client.update_peering.assert_called_with(
self._data.id,
name='test-peering-updated',
description='vpc peering updated'
)
self.assertEqual(self.columns, columns)
class TestShowVpcPeering(fakes.TestVpc):
_data = fakes.FakeVpcPeering.create_one()
columns = (
'id',
'name',
'local_vpc_info',
'peer_vpc_info',
'description',
'created_at',
'updated_at',
'status'
)
data = fakes.gen_data(_data, columns)
def setUp(self):
super(TestShowVpcPeering, self).setUp()
self.cmd = peering.ShowVpcPeering(self.app, None)
self.client.find_peering = mock.Mock(return_value=self._data)
def test_show_no_options(self):
arglist = []
verifylist = []
# Testing that a call without the required argument will fail and
        # throw a "ParserException"
self.assertRaises(tests_utils.ParserException,
self.check_parser, self.cmd, arglist, verifylist)
def test_show(self):
arglist = [
self._data.id,
]
verifylist = [
('peering', self._data.id),
]
# Verify cm is triggered with default parameters
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# Trigger the action
columns, data = self.cmd.take_action(parsed_args)
self.client.find_peering.assert_called_with(self._data.id)
self.assertEqual(self.columns, columns)
self.assertEqual(self.data, data)
def test_show_non_existent(self):
arglist = [
'unexist_vpc_peering',
]
verifylist = [
('peering', 'unexist_vpc_peering'),
]
# Verify cm is triggered with default parameters
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
find_mock_result = exceptions.CommandError('Resource Not Found')
self.client.find_peering = (
mock.Mock(side_effect=find_mock_result)
)
# Trigger the action
try:
self.cmd.take_action(parsed_args)
except Exception as e:
self.assertEqual('Resource Not Found', str(e))
self.client.find_peering.assert_called_with('unexist_vpc_peering')
class TestSetVpcPeering(fakes.TestVpc):
_data = fakes.FakeVpcPeering.create_one()
columns = (
'id',
'name',
'local_vpc_info',
'peer_vpc_info',
'description',
'created_at',
'updated_at',
'status'
)
data = fakes.gen_data(_data, columns)
def setUp(self):
super(TestSetVpcPeering, self).setUp()
self.cmd = peering.SetVpcPeering(self.app, None)
self.client.find_peering = mock.Mock(return_value=self._data)
self.client.set_peering = mock.Mock(return_value=self._data)
def test_set(self):
arglist = [
self._data.name,
'--accept'
]
verifylist = [
('peering', self._data.name),
('accept', True),
]
# Verify cm is triggered with default parameters
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
# Trigger the action
columns, data = self.cmd.take_action(parsed_args)
self.client.find_peering.assert_called_with(self._data.name)
self.client.set_peering.assert_called_with(self._data.id, 'accept')
self.assertEqual(self.columns, columns)
self.assertEqual(self.data, data)
class TestDeleteVpcPeering(fakes.TestVpc):
_data = fakes.FakeVpcPeering.create_multiple(2)
def setUp(self):
super(TestDeleteVpcPeering, self).setUp()
self.client.delete_peering = mock.Mock(return_value=None)
# Get the command object to test
self.cmd = peering.DeleteVpcPeering(self.app, None)
def test_delete(self):
arglist = [
self._data[0].name,
]
verifylist = [
('peering', [self._data[0].name]),
]
# Verify cm is triggered with default parameters
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
self.client.find_peering = (
mock.Mock(return_value=self._data[0])
)
# Trigger the action
result = self.cmd.take_action(parsed_args)
self.client.delete_peering.assert_called_with(self._data[0].id)
self.assertIsNone(result)
def test_multiple_delete(self):
arglist = []
for data in self._data:
arglist.append(data.name)
verifylist = [
('peering', arglist),
]
# Verify cm is triggered with default parameters
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
find_mock_result = self._data
self.client.find_peering = (
mock.Mock(side_effect=find_mock_result)
)
# Trigger the action
result = self.cmd.take_action(parsed_args)
calls = []
for data in self._data:
calls.append(call(data.id))
self.client.delete_peering.assert_has_calls(calls)
self.assertIsNone(result)
def test_multiple_delete_with_exception(self):
arglist = [
self._data[0].name,
'unexist_vpc_peering',
]
verifylist = [
('peering', arglist),
]
# Verify cm is triggered with default parameters
parsed_args = self.check_parser(self.cmd, arglist, verifylist)
find_mock_result = [self._data[0], exceptions.CommandError]
self.client.find_peering = (
mock.Mock(side_effect=find_mock_result)
)
# Trigger the action
try:
self.cmd.take_action(parsed_args)
except Exception as e:
self.assertEqual(
'1 of 2 VPC peering(s) failed to delete.', str(e))
self.client.find_peering.assert_any_call(self._data[0].name)
self.client.find_peering.assert_any_call('unexist_vpc_peering')
self.client.delete_peering.assert_called_once_with(self._data[0].id)
#!/usr/bin/env python3
import numpy as np
from argparse import ArgumentParser
from math import inf
from os import fstat
from sklearn.linear_model import LinearRegression
from tqdm import tqdm, trange
# Argparse initializations
argument_parser = ArgumentParser(description = 'Molecular Dynamics Residence Time')
argument_parser.add_argument('data_file', type = str, help = 'Data file')
argument_parser.add_argument('dump_file', type = str, help = 'Dump file')
argument_parser.add_argument('adsorbent_atom_id_start', type = int, help = 'Adsorbent atom id start (inclusive)')
argument_parser.add_argument('adsorbent_atom_id_end', type = int, help = 'Adsorbent atom id end (inclusive)')
argument_parser.add_argument('adsorbate_atom_id_start', type = int, help = 'Adsorbate atom id start (inclusive)')
argument_parser.add_argument('adsorbate_atom_id_end', type = int, help = 'Adsorbate atom id end (inclusive)')
args = argument_parser.parse_args()
data_file = args.data_file
dump_file = args.dump_file
adsorbent_atom_id_start = args.adsorbent_atom_id_start
adsorbent_atom_id_end = args.adsorbent_atom_id_end
adsorbate_atom_id_start = args.adsorbate_atom_id_start
adsorbate_atom_id_end = args.adsorbate_atom_id_end
# Some helper functions
def is_adsorbent_atom(atom_id):
return atom_id >= adsorbent_atom_id_start and atom_id <= adsorbent_atom_id_end
def is_adsorbate_atom(atom_id):
return atom_id >= adsorbate_atom_id_start and atom_id <= adsorbate_atom_id_end
def squared_distance(coords1, coords2):
dist = 0
for i in range(3):
dist += (coords1[i] - coords2[i]) ** 2
return dist
# Helper class to find average coordinates of a set of molecules
class AverageCoords:
def __init__(self, coords):
self.coords = coords
self.num = 1
def add_contribution(self, coords):
for i in range(len(self.coords)):
self.coords[i] = (self.coords[i] * self.num + coords[i]) / (self.num + 1)
self.num += 1
# Initializations using the data file
adsorbate_mols_set = set()
atom_id_to_mol_id = {}
with open(data_file, newline = '') as datafile:
for _ in range(2):
datafile.readline()
num_atoms, _ = datafile.readline().split()
num_atoms = int(num_atoms)
for line in datafile:
if line == 'Atoms\n':
break
datafile.readline()
for _ in trange(num_atoms, desc = 'Processing data file'):
line = datafile.readline()
if line == '\n':
break
atom_id, mol_id, _, _, _, _, _ = line.split()
atom_id = int(atom_id)
mol_id = int(mol_id)
atom_id_to_mol_id[atom_id] = mol_id
if is_adsorbate_atom(atom_id):
adsorbate_mols_set.add(mol_id)
# Initializations using the dump file
num_timesteps = 0
previous_timestep = -1
time_delta = -1
adsorbed_mols_sets = []
with open(dump_file, newline = '') as dumpfile, tqdm(total = fstat(dumpfile.fileno()).st_size, desc = 'Processing dump file') as pbar:
while dumpfile.readline():
num_timesteps += 1
if previous_timestep == -1:
previous_timestep = int(dumpfile.readline().strip())
elif time_delta == -1:
time_delta = int(dumpfile.readline().strip()) - previous_timestep
previous_timestep += time_delta
else:
assert int(dumpfile.readline().strip()) == previous_timestep + time_delta, 'The difference between each pair of consecutive timesteps in the dump file should be same'
previous_timestep += time_delta
dumpfile.readline()
num_atoms = int(dumpfile.readline().strip())
for _ in range(5):
dumpfile.readline()
adsorbent_atoms_coords = []
adsorbate_mols_avg_coords = {}
adsorbed_mols_sets.append(set())
for i in range(num_atoms):
coords = [0] * 3
atom_id, _, coords[0], coords[1], coords[2], _, _, _ = dumpfile.readline().split()
atom_id = int(atom_id)
for j in range(3):
coords[j] = float(coords[j])
if is_adsorbent_atom(atom_id):
adsorbent_atoms_coords.append(coords)
if is_adsorbate_atom(atom_id):
mol_id = atom_id_to_mol_id[atom_id]
if mol_id in adsorbate_mols_avg_coords:
adsorbate_mols_avg_coords[mol_id].add_contribution(coords)
else:
adsorbate_mols_avg_coords[mol_id] = AverageCoords(coords)
for adsorbent_coords in adsorbent_atoms_coords:
min_dist = inf
closest_adsorbate_mol_id = -1
for adsorbate_mol_id, adsorbate_avg_coords in adsorbate_mols_avg_coords.items():
dist = squared_distance(adsorbent_coords, adsorbate_avg_coords.coords)
if dist < min_dist:
min_dist = dist
closest_adsorbate_mol_id = adsorbate_mol_id
adsorbed_mols_sets[-1].add(closest_adsorbate_mol_id)
pbar.update(dumpfile.tell() - pbar.n)
# Residence time calculation
num = [0] * num_timesteps
auto_correlation_avgs = [0] * num_timesteps
for t0_index in trange(num_timesteps, desc = 'Computing res. time '):
mols_continuously_remained_adsorbed = adsorbate_mols_set.copy()
auto_correlation = []
for t0_plus_t_index in range(t0_index, num_timesteps):
new_adsorbed_set = set()
for adsorbed_mol_id in adsorbed_mols_sets[t0_plus_t_index]:
if adsorbed_mol_id in mols_continuously_remained_adsorbed:
new_adsorbed_set.add(adsorbed_mol_id)
mols_continuously_remained_adsorbed = new_adsorbed_set
auto_correlation.append(len(mols_continuously_remained_adsorbed))
for i in range(len(auto_correlation)):
auto_correlation_avgs[i] += auto_correlation[i]
for i in range(1, num_timesteps + 1):
auto_correlation_avgs[num_timesteps - i] /= i
# "auto_correlation_avgs[-1] < 1" condition helps decrease the skewness of the graph
# Removing skewness seems difficult, so currently using a heuristic
while auto_correlation_avgs and auto_correlation_avgs[-1] < 1:
auto_correlation_avgs.pop()
auto_correlation_avgs = np.log(np.array(auto_correlation_avgs)).reshape(-1, 1)
time_deltas = np.array([i * time_delta for i in range(len(auto_correlation_avgs))]).reshape(-1, 1)
print("Calculated residence time (in timesteps):", -1 / LinearRegression().fit(time_deltas, auto_correlation_avgs).coef_[0, 0])
# Copyright (c) 2021 MIT
#
# Permission to use, copy, modify, and distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR(S) DISCLAIM ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL AUTHORS BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
import time
import signal
import sys, os
import subprocess
import json
import xmlrpc.server
import xmlrpc.client
import re
import threading
from os.path import expanduser
from argparse import ArgumentParser, REMAINDER
from typing import Optional, IO, List, Any
from jobDescription import TrainingJob
import grpc
import runtime_pb2
import runtime_pb2_grpc
# import examples.vgg as vgg # TODO: this is used for debugging. Remove this later.
extra_args = [] # unparsed arguments stored here are forwarded to runtimes
HAS_EXCEPTION = False
def excepthook(args):
global HAS_EXCEPTION
print("In excepthook", args)
HAS_EXCEPTION = True
threading.excepthook = excepthook
def waitthreads(threadList):
for thread in threadList:
while thread.is_alive() and not HAS_EXCEPTION:
time.sleep(0.1)
if HAS_EXCEPTION:
sys.exit(-1)
thread.join()
def discover_gpu_numa():
from subprocess import check_output
gpus = check_output("nvidia-smi -x -q | grep \"gpu id\"", shell=True).decode("utf-8").splitlines()
try:
has_numactl = os.system("numactl ls > /dev/null 2>&1") == 0
except:
has_numactl = False
if not has_numactl:
return [-1] * len(gpus)
nodes = []
for g in gpus:
gid = g.split("\"")[1][4:].lower()
node = check_output(f"cat /sys/bus/pci/devices/{gid}/numa_node", shell=True).decode("utf-8").strip()
nodes.append(int(node))
return nodes
class CppRuntimeProxy:
def __init__(self, addressWithPort: str):
self.channel = grpc.insecure_channel(addressWithPort) # ex) 'localhost:50051'
self.stub = runtime_pb2_grpc.RuntimeStub(self.channel)
def scheduleTraining(self, name, jobInJson, dataDir, tensorTagsInJson, jobRankToGlobalRankInJson, jobParamsInJson):
response = self.stub.ScheduleTraining(runtime_pb2.ScheduleTrainingRequest(
name=name, job_in_json=jobInJson, data_dir=dataDir,
tensor_tags_in_json=tensorTagsInJson,
job_rank_to_global_rank_in_json=jobRankToGlobalRankInJson, job_meta_params_in_json=jobParamsInJson))
print("received: " + response.message)
def poke(self):
response = self.stub.Poke(runtime_pb2.Empty())
# print("received: " + response.message)
def shutdown(self):
response = self.stub.Shutdown(runtime_pb2.Empty())
print("received: " + response.message)
def initCommBackend(self):
# response = self.stub.(runtime_pb2.Empty())
# print("received: " + response.message)
pass
# print("initCommBackend() not implemented")
def initCommNCCL(self, message, msgType, groupId, members):
response = self.stub.InitCommNCCL(runtime_pb2.InitCommNCCLMsg(
message=message, msg_type=msgType, group_id=groupId, members=members))
print("received: " + response.message)
        return response.group_id
def initCommGRPC(self, rankToIpMap):
rankToIpMapInJson = json.dumps(rankToIpMap)
print("In initCommGRPC, rankToIpMapInJson: " + rankToIpMapInJson)
response = self.stub.InitCommGRPC(runtime_pb2.InitCommGRPCRequest(
rank_to_ip_map_in_json = rankToIpMapInJson
))
print("received: " + response.message)
def initCommGroups(self, jobName, commGroupsInJson):
print("initCommGroups not implemented")
class Location:
def __init__(self, address: str, port: int, device: int, userId: str, sshKeyPath: str, isCpp: bool):
self.address = address
self.port = port
self.device = device
self.userId = userId
self.sshKeyPath = sshKeyPath
self.serverId = None
self.proxy = None
self.isCpp = isCpp
self.is_local = address == "127.0.0.1"
self.process = None
self.numa_node = -1
def getProxy(self, maxRetry = 180):
if self.proxy != None:
# print("getProxy() returned from cached proxy value.")
return self.proxy
# Python runtime
retryGap = 1
retryCount = 0
while retryCount < maxRetry and not HAS_EXCEPTION:
try:
if self.isCpp: # CPP runtime
self.proxy = CppRuntimeProxy("%s:%d"%(self.address, self.port))
# print("cppProxy created for %s:%d"%(self.address, self.port))
else:
self.proxy = xmlrpc.client.ServerProxy("http://%s:%d/"%(self.address, self.port))
self.proxy.poke()
return self.proxy
except (ConnectionRefusedError, grpc.RpcError): # ConnectionRefusedError is for xmlrpc.
print("Cannot connect to %s:%d. Will retry in %d sec." %
(self.address, self.port, retryGap))
time.sleep(retryGap)
# retryGap += 2 # exponential back off.
retryCount += 1
        assert False, "Couldn't connect to %s:%d after %d retries" % (self.address, self.port, maxRetry)
def downloadFile(self, remotePath: str, localPath: str):
assert not self.is_local
print(" Downloading %s to %s at %s" % (remotePath, localPath, self.address))
kwargs = dict()
kwargs['stderr'] = subprocess.STDOUT
# sh_command = ['mkdir', '-p', localPath]
# subprocess.check_call(sh_command, **kwargs)
sh_command = ['scp', '-i', self.sshKeyPath, '%s@%s:%s' % (self.userId, self.address, remotePath), localPath]
subprocess.check_call(sh_command, **kwargs)
def uploadFile(self, localFilePath, remotePath):
assert not self.is_local
print(" Uploading %s to %s at %s" % (localFilePath, remotePath, self.address))
kwargs = dict()
# kwargs['shell'] = True
kwargs['stderr'] = subprocess.STDOUT
sh_command = ['scp', '-i', self.sshKeyPath, localFilePath, '%s@%s:%s' % (self.userId, self.address, remotePath)]
subprocess.check_call(sh_command, **kwargs)
def rsh(self, command):
kwargs = dict()
kwargs['stderr'] = subprocess.STDOUT
# sh_command = ['ssh', '-v', '-i', '~/.ssh/ulma-sjp.pem', 'ubuntu@%s' % self, '%s' % command]
if self.is_local:
sh_command = command
kwargs["shell"] = True
else:
sh_command = ['ssh', '-i', self.sshKeyPath, '-o', 'StrictHostKeyChecking=no', '%s@%s' % (self.userId, self.address), '%s' % command]
try:
subprocess.check_call(sh_command, **kwargs)
except subprocess.CalledProcessError as e:
output = e.output
exit(1)
return
def __monitor(self):
self.process.wait()
sys.exit(0)
def rshAsync(self, command, **kwargs):
print("Sending cmd: %s" % command)
if self.is_local:
sh_command = command
kwargs["shell"] = True
else:
sh_command = ['ssh', '-i', self.sshKeyPath, '-o StrictHostKeyChecking=no', '%s@%s' % (self.userId, self.address),
'%s' % command]
self.process = subprocess.Popen(sh_command, **kwargs)
t = threading.Thread(target=Location.__monitor, args=(self,), daemon=True)
t.start()
return self.process
def upSync(self, localPath, remotePath):
if self.is_local:
assert False
return
try:
subprocess.check_call(['rsync', '-e', 'ssh -i %s -o StrictHostKeyChecking=no' % self.sshKeyPath,
'-rh', "--exclude=*__pycache__", localPath, "%s@%s:%s" % (self.userId, self.address, remotePath)],
stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
output = e.output
exit(1)
class ClusterCoordinator(xmlrpc.server.SimpleXMLRPCServer):
""" GPU cluster coordinator. It accepts training jobs from clients and schedule them to runtimes. """
def __init__(self, addrToBind: str, portToBind: int, locations: List[Location], workDir: str, be_batch_size: int):
super(ClusterCoordinator, self).__init__((addrToBind, portToBind))
self.myAddr = addrToBind
self.myPort = portToBind
self.locations = locations
self.workDir = workDir
self.processes = [] # from subprocess calls used for launching runtime.
self.nextTagStartOffset = 1
self.be_batch_size = be_batch_size
self.commGroups = set()
self.ongoingJobs = {} # Dict of contexts of ongoing jobs. Indexed by job name.
f = open("runtimeResult.data", "w")
f.close()
def _dispatch(self, method, params):
""" Custom dispatcher for XML-RPC server. """
try:
# We are forcing the 'export_' prefix on methods that are
# callable through XML-RPC for security.
func = getattr(self, 'export_' + method)
except AttributeError:
raise Exception('method "%s" is not supported' % method)
else:
return func(*params)
######################################################
## RPC handlers
######################################################
def export_poke(self):
return 'Returned from poke at %s' % self.myAddr
def export_scheduleTraining(self, jobName: str, trainingJobInJSON: str, runbe):
job = TrainingJob("test", None, None, 0, 0, "")
job.loadJSON(trainingJobInJSON)
print("received job")
gpusUsed = job.getGpusUsed()
moduleDescList = [job.dumpSingleRunnableModule(rank) for rank in range(gpusUsed)]
tensorTags = self.buildCommTensorTags(moduleDescList)
tensorTagsInJson = json.dumps(tensorTags)
for rank in range(gpusUsed):
with open(f"/tmp/rank{rank}.json", "wb") as f:
f.write(bytes(moduleDescList[rank].encode("utf-8")))
commSets = self.buildNeededCommGroups(moduleDescList)
for s in commSets:
self.initCommBackendAll("nccl", s)
jobRankToGlobalRank = list(range(gpusUsed))
jobRankToGlobalRankInJson = json.dumps(jobRankToGlobalRank)
# TODO: should pick locations that doesn't have other priority job scheduled.
if len(self.locations) < gpusUsed:
return "Not enough servers available. %d gpus available while %d needed" % (len(self.locations), gpusUsed)
jobParams = {
"run_with_be": runbe,
"nr_gpus": gpusUsed,
"cifar_training": "cifar" in jobName,
"lossfn": "CrossEntropyLoss" if "gpt2" in jobName else "NLL",
"autocast": True,
}
jobParamsInJson = json.dumps(jobParams)
threadList = []
def requestScheduleTraining(proxy, jobInJson):
proxy.scheduleTraining(jobName, jobInJson, "SYNTHETIC", tensorTagsInJson, jobRankToGlobalRankInJson, jobParamsInJson)
for rank in range(gpusUsed):
location = self.locations[rank]
moduleDesc = moduleDescList[rank]
thread = threading.Thread(name='reqScheTrain%d'%rank, target=requestScheduleTraining, args=(location.getProxy(), moduleDesc))
threadList.append(thread)
thread.start()
waitthreads(threadList)
self.ongoingJobs[jobName] = {"iterTime": 0, "gpuMsec": 0, "gpusUsed": gpusUsed, "gpusFinished": 0, "globalBatchSize": job.globalBatchSize}
self.ongoingJobs[jobName].update({"beImagesPerIter": 0.0, "idleMsPerIter": 0.0})
# for rank in range(gpusUsed):
# location = self.locations[rank]
# moduleDesc = moduleDescList[rank] # job.dumpSingleRunnableModule(rank)
# print(location.getProxy().scheduleTraining(jobName, moduleDesc, "SYNTHETIC", tensorTagsInJson, jobRankToGlobalRankInJson))
return 'done'
def export_notifyTrainingFinished(self, runtimeAddress: str, name: str, beImagesPerIter: float, idleMsPerIter: float, remainingJobCount: int, fpTime: float, bpTime: float, iterTime: float):
print("Training for %s is completed at %s. (%d jobs are remaining) fp: %3.1f bp: %3.1f iterTime: %3.1f" % (name, runtimeAddress, remainingJobCount, fpTime, bpTime, iterTime))
iterTime /= 1000
self.ongoingJobs[name]["iterTime"] = max(self.ongoingJobs[name]["iterTime"], iterTime)
self.ongoingJobs[name]["gpuMsec"] += (fpTime + bpTime) / 1000
self.ongoingJobs[name]["gpusFinished"] += 1
self.ongoingJobs[name]["beImagesPerIter"] += beImagesPerIter
self.ongoingJobs[name]["idleMsPerIter"] += idleMsPerIter
if self.ongoingJobs[name]["gpusFinished"] == self.ongoingJobs[name]["gpusUsed"]:
toprints = [
"{globalBatchSize:2}", "{gpusUsed:2}", "{iterTime:4.1f}",
"{gpuMsec:4.1f}", "{beImagesPerIter:3.1f}",
"{idleMsPerIter:3.1f}"
]
print("Training for {} is completed entirely.".format(name))
cols = ["GlobalBatchSize", "GpusUsed", "IterTime", "GpuMsec", "BeImagesPerIter", "IdleMsPerIter"]
print(" " + " ".join(cols))
dataline = " " + " ".join(toprints).format(**self.ongoingJobs[name])
print(dataline)
f = open("runtimeResult.data", "a")
f.write(dataline + "\n")
f.close()
return 'done'
def export_addGpuNode(self):
print("NOT YET IMPLEMENTED.")
######################################################
## Internal helper methods
######################################################
def buildCommTensorTags(self, moduleDescList):
# TODO: need tag allocator that can recycle tags.
tag = 0
tensorTags = {}
for moduleDesc in moduleDescList:
spec = json.loads(moduleDesc)
for ldsc in spec["layers"]:
if "xfers" in ldsc: # either sender or receiver need to assign tag.
for item in ldsc["xfers"]:
tensorTags[item["name"]] = tag
tag += item["prop"]["xferSamples"]
tensorTags[item["name"] + "_back"] = tag
tag += item["prop"]["xferSamples"]
return tensorTags
def buildNeededCommGroups(self, moduleDescList):
groups = set()
desc = json.loads(moduleDescList[0])
for l in desc['layers']:
activeset = tuple(sorted(l['gpuAssignment']))
if len(activeset) > 1:
groups.add(activeset)
return list(groups)
######################################################
## Runtime cluster management
######################################################
def installPackages(self):
""" Install required software at each runtime server """
pipPackages = ["torch", "jsonpickle", "torchvision"]
# "pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html"]
for location in self.locations:
for pipPackage in pipPackages:
location.rsh("pip install %s" % pipPackage)
def launchRuntimeAll(self, c10dBackend: str, profile: bool, cppRuntime: bool, manualLaunch: bool):
""" Launch runtime at all remote locations. Also registers the sighandler
that cleanly shuts down all remote runtime servers.
"""
# Using the absolute path for compatibility with C++ runtime.
logdir = args.logdir
if not logdir:
logdir = os.getcwd() + "/logs/"
upSyncedAddrs = set()
for i, location in enumerate(self.locations):
if (location.address not in upSyncedAddrs):
# TODO: skip if location's addr is same as the current node.
# location.upSync(".", self.workDir)
upSyncedAddrs.add(location.address)
# pass master ip and port.
stdoutFp = open(f"{logdir}/runtime%d.out"%i, "a", buffering=1)
stderrFp = open(f"{logdir}/runtime%d.err"%i, "a", buffering=1)
nsysPrefix = ""
if "--cuda_profile" in extra_args:# and location.device == 0: # Only run 1 nsys per host.
nsysPrefix = "nsys profile -f true -o net%d -c cudaProfilerApi -t cuda,nvtx --export sqlite " % i # -s none
if manualLaunch:
print("Skipping ssh launching runtime. Must have launched them manually.")
elif cppRuntime:
if location.numa_node >= 0:
numacmd = "numactl -N{nn} -m{nn}".format(nn=location.numa_node)
else:
numacmd = ""
self.processes.append(location.rshAsync(
f"CUDA_VISIBLE_DEVICES={location.device} {numacmd} {nsysPrefix} {self.workDir}/csrc/build/runtime" + \
" --myAddr %s:%d --device 0 --c10dBackend %s --rank %d --worldSize %d --logdir %s --be_batch_size %d %s" % \
(location.address, location.port, c10dBackend, i, len(self.locations), logdir, self.be_batch_size, " ".join(extra_args)) #+ \
, stdout=stdoutFp, stderr=stderrFp))
else:
self.processes.append(location.rshAsync(
# nsysPrefix + "python3 " + self.workDir + "runtime.py" + \
"source ~/.profile; " + nsysPrefix + "python3 " + self.workDir + "runtime.py" + \
" --coordinatorAddr %s:%d --myAddr %s:%d --device %d --c10dBackend %s --rank %d --worldSize %d --be_batch_size %d %s" % \
(self.myAddr, self.myPort, location.address, location.port, location.device, c10dBackend, i, len(self.locations), self.be_batch_size, "--profile" if profile else "") #+ \
, stdout=stdoutFp, stderr=stderrFp))
sig_names = {2: "SIGINT", 15: "SIGTERM"}
last_return_code = None
def sigkill_handler(signum, frame):
print("signum:%d Trying to shutdown all runtime." % signum)
self.shutdownRuntimeAll()
# self.waitForRuntimeAll()
for process in self.processes:
print(f"Killing subprocess {process.pid}")
try:
process.terminate()
# process.kill()
except Exception:
pass
if last_return_code is not None:
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
if signum in sig_names:
print(f"Main process received {sig_names[signum]}, exiting")
sys.exit(1)
signal.signal(signal.SIGINT, sigkill_handler)
# signal.signal(signal.SIGTERM, sigkill_handler)
time.sleep(2) ## + (15 if profile else 0))
for location in self.locations:
proxy = location.getProxy()
proxy.poke()
def shutdownRuntimeAll(self):
""" Ask all remote runtime servers to stop. Returns after all servers ack the shutdown request. """
for location in self.locations:
try:
proxy = location.getProxy(maxRetry=1)
if proxy is not None:
print(proxy.shutdown())
# print(location.getProxy(maxRetry=1).shutdown())
except xmlrpc.client.Fault:
print("pipe broken while shutting down %s" % location.address)
except grpc.RpcError:
print("GRPC error while shutting down %s" % location.address)
def initCommBackendAll(self, c10dBackend, commGroupSet):
assert(sorted(commGroupSet) == list(commGroupSet))
if tuple(commGroupSet) in self.commGroups:
return
self.commGroups.add(tuple(commGroupSet))
if c10dBackend == "nccl":
group_id = self.locations[commGroupSet[0]].getProxy().initCommNCCL("Generate comm group ID", 0, bytes(128), list(commGroupSet))
threadList = []
def requestInitCommBackend(proxy):
# print(proxy.initCommBackend())
if c10dBackend == "grpc":
print(proxy.initCommGRPC(rankToIpMap))
if c10dBackend == "nccl":
proxy.initCommNCCL("Join comm group", 1, group_id, list(commGroupSet))
for i in commGroupSet:
location = self.locations[i]
thread = threading.Thread(name='init_comm%d'%i, target=requestInitCommBackend, args=(location.getProxy(),))
thread.start()
threadList.append(thread)
waitthreads(threadList)
def initCommGroupsAll(self, jobName: str, commGrpDict: dict, jobRankToGlobalRank: list):
""" A helper function that will ask all runtimes to create new c10d comm groups.
Used while scheduling a new training job. This method should be invoked before
scheduling a new training job to any runtime that will participate in training.
"""
commGrpDictWithGlobalRanks = {}
for grpName in commGrpDict:
grpRanks = commGrpDict[grpName]
globalGrpRanks = [jobRankToGlobalRank[rank] for rank in grpRanks]
commGrpDictWithGlobalRanks[grpName] = globalGrpRanks
commGrpDictWithGlobalRanksInJson = json.dumps(commGrpDictWithGlobalRanks)
threadList = []
def requestInitCommGroups(proxy, jobName, commGroupsInJson):
# print(proxy.initCommGroups(jobName, commGroupsInJson))
proxy.initCommGroups(jobName, commGroupsInJson)
for i, location in enumerate(self.locations):
thread = threading.Thread(name='init_commGroups%d'%i, target=requestInitCommGroups,
args=(location.getProxy(), jobName, commGrpDictWithGlobalRanksInJson,))
thread.start()
threadList.append(thread)
waitthreads(threadList)
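`initCommBackendAll` and `initCommGroupsAll` both fan one RPC out to every location on its own thread, then join them all (`waitthreads`). A stripped-down sketch of that fan-out/join pattern, with a stand-in request function:

```python
import threading

def fan_out(targets, request):
    # Launch one worker thread per target, mirroring how
    # initCommGroupsAll issues proxy.initCommGroups per location.
    results = []
    lock = threading.Lock()

    def worker(target):
        reply = request(target)
        with lock:
            results.append(reply)

    threads = [threading.Thread(target=worker, args=(t,)) for t in targets]
    for t in threads:
        t.start()
    for t in threads:  # the equivalent of waitthreads(threadList)
        t.join()
    return results

print(sorted(fan_out([1, 2, 3], lambda t: t * 10)))  # [10, 20, 30]
```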
def waitForRuntimeAll(self):
""" Waits until all runtime processes terminate. Development use only. """
# TODO: replace this method with xmlrpc server event loop.
print("Waiting for ssh process to terminate.")
for p in self.processes:
p.wait()
####################################################################################
## Initial launch scripts
####################################################################################
def parse_args():
"""
Helper function parsing the command line options
@retval ArgumentParser
"""
parser = ArgumentParser(description="ClusterCoordinator initial launch "
"script that will spawn up "
"multiple distributed processes")
# Optional arguments for the launch helper
parser.add_argument("--addrToBind", type=str, default="localhost:12340",
help="IP:port to listen for requests to the cluster coordinator")
parser.add_argument("--c10dBackend", type=str, default="nccl",
help="pytorch c10d communication backend. Type either nccl or gloo")
parser.add_argument("--logLevel", type=int, default=1,
help="Logging level. 0: verbose, 1: Info, 2: Error") # NOT YET IMPLEMENTED.
parser.add_argument("--pathToConfig", type=str, default="clusterConfig.json",
help="The full path to the cluster configuration files")
parser.add_argument('--install', default=False, action='store_true',
help="When this option is set, it will install required pip packages to all servers")
parser.add_argument('--profile', default=False, action='store_true',
help="To launch runtimes with Nsight Systems profiling.")
parser.add_argument("--be_batch_size", type=int, default=0,
help="Launch runtimes with the given BE (best-effort) batch size.")
parser.add_argument('--cpp', default=False, action='store_true',
help="To launch CPP version runtimes.")
parser.add_argument('--manualLaunch', default=False, action='store_true',
help="Do not launch runtimes automatically. Primarily for using gdb on runtime processes.")
parser.add_argument("--logdir", type=str, default="", help="Full path of log directory")
# For installing nsys.. (with other cuda toolkit..)
# wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
# sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
# sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
# sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /"
# sudo apt-get update
# sudo apt-get -y install cuda
return parser.parse_known_args()
def main():
global args, extra_args
args, extra_args = parse_args()
# clusterConfig = json.load(open(args.pathToConfig))
global rankToIpMap
rankToIpMap = {}
commGrpRanksWorld = []
locations = []
# for serverConfig in clusterConfig["serverList"]:
# print("Found %s" % str(serverConfig))
port = 11250
gpus = discover_gpu_numa()
for i, node in enumerate(gpus):
rankToIpMap[str(len(locations))] = f"127.0.0.1:{port}"
commGrpRanksWorld.append(len(locations))
loc = Location("127.0.0.1", port, i, None, None, args.cpp)
loc.numa_node = node
locations.append(loc)
port += 1
addrToBindCombo = re.split('[-:]', args.addrToBind)
addrToBind = addrToBindCombo[0]
portToBind = int(addrToBindCombo[1])
coordinator = ClusterCoordinator(addrToBind, portToBind, locations, os.getcwd(), args.be_batch_size)
if args.install:
coordinator.installPackages()
# Just make sure there's no previously left runtimes.
# CPP runtimes seem to terminate appropriately. So, there's no need to shutdown leftovers.
if not args.cpp:
print("Cleaning up potentially leftover runtime servers from previous experiment.")
coordinator.shutdownRuntimeAll()
time.sleep(10)
coordinator.launchRuntimeAll(args.c10dBackend, profile=args.profile, cppRuntime=args.cpp, manualLaunch=args.manualLaunch)
print("All runtime nodes are up and running. Now, initializing communication backend..")
coordinator.initCommBackendAll(args.c10dBackend, commGrpRanksWorld)
print("Communication backends are ready at all locations.")
print("Now, cluster is ready to accept training jobs.")
sys.stdout.flush()
coordinator.timeout = 1
while not HAS_EXCEPTION:
coordinator.handle_request()
time.sleep(5)
if __name__ == "__main__":
main()
| StarcoderdataPython |
173360 | <filename>medium/python/c0355_777_swap-adjacent-in-lr-string/00_leetcode_0355.py
# DRUNKWATER TEMPLATE(add description and prototypes)
# Question Title and Description on leetcode.com
# Function Declaration and Function Prototypes on leetcode.com
#777. Swap Adjacent in LR String
#In a string composed of 'L', 'R', and 'X' characters, like "RXXLRXRXL", a move consists of either replacing one occurrence of "XL" with "LX", or replacing one occurrence of "RX" with "XR". Given the starting string start and the ending string end, return True if and only if there exists a sequence of moves to transform one string to the other.
#Example:
#Input: start = "RXXLRXRXL", end = "XRLXXRRLX"
#Output: True
#Explanation:
#We can transform start to end following these steps:
#RXXLRXRXL ->
#XRXLRXRXL ->
#XRLXRXRXL ->
#XRLXXRRXL ->
#XRLXXRRLX
#Note:
#1 <= len(start) = len(end) <= 10000.
#Both start and end will only consist of characters in {'L', 'R', 'X'}.
#class Solution(object):
# def canTransform(self, start, end):
# """
# :type start: str
# :type end: str
# :rtype: bool
# """
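Since the template above only sketches the signature, here is one possible solution for the problem described: because "XL" -> "LX" only moves an 'L' left and "RX" -> "XR" only moves an 'R' right, the non-'X' characters must appear in the same order in both strings, with each 'L' at a position that does not increase and each 'R' at a position that does not decrease.

```python
def can_transform(start, end):
    # Collect the non-'X' characters with their indices.
    s = [(c, i) for i, c in enumerate(start) if c != 'X']
    e = [(c, i) for i, c in enumerate(end) if c != 'X']
    if len(s) != len(e):
        return False
    for (c1, i1), (c2, i2) in zip(s, e):
        if c1 != c2:
            return False
        # 'L' may only move left, 'R' may only move right.
        if c1 == 'L' and i1 < i2:
            return False
        if c1 == 'R' and i1 > i2:
            return False
    return True

print(can_transform("RXXLRXRXL", "XRLXXRRLX"))  # True, per the example above
```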
# Time Is Money | StarcoderdataPython |
8040410 | <reponame>filmceadmin/VideoSesDonustur<filename>bot/translation.py
class Translation(object):
    DOWNLOAD_PROGRESS = "`█`"
    UPLOAD_PROGRESS = "`░`"
    START_TEXT = """Merhaba {0}\nŞimdi bana sesi olmayan bir video gönder. 👉 @hextr"""
    PROGRESS = """`
Yüzde : {0}%
Tamamlanan: {1}
Toplam: {2}
Hız: {3}/s
Süre: {4}
`"""
| StarcoderdataPython |
8127625 | <filename>mayhem/datatypes/common.py<gh_stars>0
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# mayhem/datatypes/structure.py
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of the project nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# 'AS IS' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
import _ctypes
import collections
import ctypes

_function_cache = {}
_function_cache_entry = collections.namedtuple('FunctionCacheEntry', ('restype', 'argtypes', 'flags'))

class MayhemCFuncPtr(_ctypes.CFuncPtr):
    _argtypes_ = ()
    _restype_ = None
    _flags_ = 0

    @property
    def address(self):
        return ctypes.cast(self, ctypes.c_void_p).value

    def duplicate(self, other):
        if callable(other):
            if isinstance(other, ctypes._CFuncPtr):
                other = ctypes.cast(other, ctypes.c_void_p).value
        elif not isinstance(other, int):
            other = ctypes.cast(other, ctypes.c_void_p).value
        return self.__class__(other)

    @classmethod
    def new(cls, name, restype=None, argtypes=None, flags=0):
        new = type(name, (cls,), {
            '_argtypes_': argtypes,
            '_restype_': restype,
            '_flags_': flags
        })
        return new

class MayhemStructure(ctypes.Structure):
    pass

# defined here so it can use the function cache
def _WINFUNCTYPE(restype, *argtypes, use_errno=False, use_last_error=False):
    flags = _ctypes.FUNCFLAG_STDCALL
    if use_errno:
        flags |= _ctypes.FUNCFLAG_USE_ERRNO
    if use_last_error:
        flags |= _ctypes.FUNCFLAG_USE_LASTERROR
    cache_entry = _function_cache_entry(restype=restype, argtypes=argtypes, flags=flags)
    function = _function_cache.get(cache_entry)
    if function is not None:
        return function
    FunctionType = MayhemCFuncPtr.new('CFunctionType', **cache_entry._asdict())
    _function_cache[cache_entry] = FunctionType
    return FunctionType
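`_WINFUNCTYPE` memoizes the generated function-pointer types, keyed by a namedtuple of `(restype, argtypes, flags)`, so repeated calls with the same signature return the same type object. The caching idiom in isolation, with a stand-in for the type construction:

```python
import collections

CacheKey = collections.namedtuple('CacheKey', ('restype', 'argtypes', 'flags'))
_cache = {}

def make_type(restype, argtypes, flags=0):
    key = CacheKey(restype=restype, argtypes=tuple(argtypes), flags=flags)
    cached = _cache.get(key)
    if cached is not None:
        return cached
    # Stand-in for MayhemCFuncPtr.new: build the expensive object once.
    built = ('functype', key)
    _cache[key] = built
    return built

a = make_type('int', ['int', 'int'])
b = make_type('int', ['int', 'int'])
print(a is b)  # True: the second call hits the cache
```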
| StarcoderdataPython |
1723535 | <reponame>saurabh896/python-1
from PIL import Image, ImageDraw, ImageFont

im = Image.open("pic.jpg")
x, y = im.size
# The font size must be an integer; x / 3 would be a float on Python 3.
font = ImageFont.truetype("verdana.ttf", x // 3)
dr = ImageDraw.Draw(im)
dr.text((3 * x // 4, 0), font=font, text="4", fill="#FF0000")
im.save("picture_has_number.jpg")
| StarcoderdataPython |
from collections.abc import Iterable

from virtualbox.exceptions import NoSuchIndex

# Wrappers
def restrictRange(min, max, keyword):
    def decorator(function):
        def check(*args, **kwargs):
            if min <= kwargs[keyword] < max:
                return function(*args, **kwargs)
            raise NoSuchIndex()
        return check
    return decorator

# array related
def shiftArray(array, num):
    num %= len(array)
    return array[num:] + array[:num]

def rshiftArray(array, num):
    num %= len(array)
    return array[len(array) - num:] + array[:len(array) - num]

def loop(x, maxNum):
    if x < 0:
        return maxNum + x
    return x

def flatmap(x):
    # Flatten nested iterables element by element. The original recursed on
    # `x` itself instead of each element `i`, which recursed forever.
    return sum([flatmap(i) if isinstance(i, Iterable) else [i] for i in x], [])

def fill(list, to):
    return list + [None] * (to - len(list))

def inAny(what, inWhat):
    return any(map(lambda x: x in inWhat, what))
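A few usage examples of the list helpers above, re-declared here so the sketch is self-contained:

```python
def shiftArray(array, num):
    # Rotate left by num positions.
    num %= len(array)
    return array[num:] + array[:num]

def loop(x, maxNum):
    # Wrap a negative index around, list-style.
    return maxNum + x if x < 0 else x

def fill(lst, to):
    # Pad a list with None up to the requested length.
    return lst + [None] * (to - len(lst))

def inAny(what, inWhat):
    # True if any element of `what` occurs in `inWhat`.
    return any(x in inWhat for x in what)

print(shiftArray([1, 2, 3, 4], 1))  # [2, 3, 4, 1]
print(loop(-1, 10))                 # 9
print(fill([1, 2], 4))              # [1, 2, None, None]
print(inAny("abc", "xbz"))          # True ('b' is in "xbz")
```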
| StarcoderdataPython |
5072777 | from typing import *
import argparse
import sys
import os.path
import dataclasses
import json
@dataclasses.dataclass
class DurationEvent:
name: str
timestep: float
duration: float
@staticmethod
def from_json(jobj: Mapping):
return DurationEvent(jobj['name'], jobj['ts'], jobj['dur'])
@dataclasses.dataclass
class InstantEvent:
name: str
timestep: float
@staticmethod
def from_json(jobj: Mapping):
return InstantEvent(jobj['name'], jobj['ts'])
def main(argv: list[str]=sys.argv) -> Optional[int]:
prog = argv[0]
args = argv[1:]
description = 'Analyzes profiler results'
usage = f'{prog} [profile]'
profile_default = os.path.join('data', 'profile', 'results.json')
profile_help = f'JSON profile data (default={profile_default})'
parser = argparse.ArgumentParser(prog=prog, description=description, usage=usage)
parser.add_argument('profile', nargs='?', type=str, default=profile_default, help=profile_help)
parsed_args = parser.parse_args(args)
if not os.path.isfile(parsed_args.profile):
print(f'Error: profile file "{parsed_args.profile}" not found')
return 1
print(f'Loading "{parsed_args.profile}"')
with open(parsed_args.profile, 'r') as f:
try:
profile_json = json.load(f)
except json.JSONDecodeError as e:
print(f'JSON Error: {e.msg}')
return 1
analyze_profile(profile_json)
def analyze_profile(profile_json: Mapping):
profile = profile_json['traceEvents'][1:]
functions: list[DurationEvent] = []
scopes: list[DurationEvent] = []
threads: list[DurationEvent] = []
events: list[InstantEvent] = []
for trace in filter(lambda t: 'ph' in t and 'cat' in t, profile):
match trace['ph']:
case 'X':
match trace['cat']:
case 'function' : functions.append(DurationEvent.from_json(trace))
case 'scope' : scopes.append(DurationEvent.from_json(trace))
case 'thread' : threads.append(DurationEvent.from_json(trace))
case 'i':
match trace['cat']:
case 'event' : events.append(InstantEvent.from_json(trace))
case 's':
pass
case 'f':
pass
functionstats: dict[str, float] = add_durations(functions)
scopestats: dict[str, float] = add_durations(scopes)
threadstats: dict[str, float] = add_durations(threads)
print('Profiled functions:')
print_stats(functionstats)
print()
print('Profiled scopes:')
print_stats(scopestats)
print()
print('Profiled threads:')
print_stats(threadstats)
# Averate time per call
# Events - frame time
def add_durations(events: list[DurationEvent]) -> dict[str, float]:
result: dict[str, float] = {}
for event in events:
if event.name not in result:
result[event.name] = event.duration
else:
result[event.name] += event.duration
return { k: result[k] for k in sorted(list(result.keys()), key=lambda name: result[name], reverse=True) }
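A tiny sanity check of the aggregation `add_durations` performs: sum durations per event name, then order names by total duration, descending. Re-sketched here so it runs on its own:

```python
import dataclasses

@dataclasses.dataclass
class Event:
    name: str
    duration: float

def total_by_name(events):
    # Same idea as add_durations above: accumulate per-name totals,
    # then sort the keys by total duration, largest first.
    totals = {}
    for e in events:
        totals[e.name] = totals.get(e.name, 0.0) + e.duration
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

stats = total_by_name([Event("draw", 5.0), Event("update", 2.0), Event("draw", 4.0)])
print(stats)  # {'draw': 9.0, 'update': 2.0}
```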
def print_stats(stats: dict[str, float]):
for i, name in enumerate(sorted(list(stats.keys()), key=lambda name: stats[name], reverse=True)[:10], 1):
duration = stats[name]
print(f'{i}. {name}: {duration / 1000:.2f}ms')
if __name__ == '__main__':
sys.exit(main(sys.argv))
| StarcoderdataPython |
3221228 | #!/usr/bin/env python3
# Note: this fragment was missing its Flask app setup and imports; the
# names below (including the `config` helper module) follow the usual
# psycopg2-tutorial pattern and are assumptions.
from flask import Flask, jsonify, request
import psycopg2
import psycopg2.extras

from config import config

app = Flask(__name__)

@app.route('/api/v1/insert_weight', methods=['GET'])
def insert_weight_v1():
    conn = None
    try:
        params = config()
        conn = psycopg2.connect(**params)
        cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
        # NOTE: interpolating request args straight into SQL is open to
        # SQL injection; a parameterized query is preferable.
        cur.execute("""INSERT INTO weights (child_id, weight, weight_date) VALUES ({}, {}, {});""".format(
            request.args['id'], request.args['weight'], request.args['date']))
        conn.commit()
        return jsonify([{'code': 201}])
    except (Exception, psycopg2.DatabaseError) as error:
        return str(error)
    finally:
        if conn is not None:
            conn.close()
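The `.format()` interpolation in the route above is vulnerable to SQL injection. The safer pattern is to pass values separately from the SQL text; a sketch of the idea with stdlib `sqlite3` (psycopg2 uses `%s` placeholders the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE weights (child_id INTEGER, weight REAL, weight_date TEXT)")

# The values travel separately from the SQL text, so user input
# cannot change the statement's structure.
conn.execute(
    "INSERT INTO weights (child_id, weight, weight_date) VALUES (?, ?, ?)",
    (1, 12.5, "2020-01-01"),
)
conn.commit()

row = conn.execute("SELECT child_id, weight FROM weights").fetchone()
print(row)  # (1, 12.5)
```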
| StarcoderdataPython |
6463465 | <reponame>pragneshrana/Reinforcement-Learning
from gym import Env
from gym.envs.registration import register
from gym.utils import seeding
from gym import spaces
import numpy as np
from math import sqrt
class chakra(Env):
    metadata = {
        'render.modes': ['human', 'rgb_array'],
        'video.frames_per_second': 50
    }

    def __init__(self):
        '''
        Initializing state and action space.
        '''
        self.action_space = spaces.Box(low=-1, high=1, shape=(2,))  # 2d action space
        self.observation_space = spaces.Box(low=-1, high=1, shape=(2,))
        self.seed()
        self.viewer = None
        self.state = None  # state
        self.done = False
        self.goal = np.array([0, 0])

    def seed(self, seed=None):
        '''
        Random seed generator.
        It is a repeatable random number generator: given the same seed it
        reproduces the same sequence of states.
        '''
        self.np_random, seed = seeding.np_random(seed)
        return [seed]

    def step(self, action):
        '''
        Based on the action taken, return the next state, reward, done flag
        and info.
        '''
        # Action is clipped (bounded) to [-0.025, 0.025]
        action = np.clip(action, -0.025, 0.025)
        # If the agent is already at the goal, keep the state and return zero reward
        if self.state[0] == self.goal[0] and self.state[1] == self.goal[1]:
            return np.array(self.state), 0.0, self.done, None
        else:
            # Otherwise move to the new state and return the obtained reward
            next_state = self.change_state_by_action(action)
            reward = self.pick_reward(next_state)
            self.state = next_state
            return np.array(self.state), reward, self.done, None

    def change_state_by_action(self, action):
        '''
        Apply the chosen action to the current state and return the new state.
        '''
        next_state = self.state + action
        # If the new state falls outside the environment, reset the environment
        if next_state[0] > 1 or next_state[0] < -1 or next_state[1] > 1 or next_state[1] < -1:
            return self.reset()
        else:
            return np.array(next_state)

    def pick_reward(self, next_state):
        '''
        Return the reward for the next state:
        0 if the state is the goal, otherwise the negative distance to the goal.
        '''
        if next_state[0] == self.goal[0] and next_state[1] == self.goal[1]:
            self.done = True
            return 0
        else:
            dist = np.sqrt(next_state[0]**2 + next_state[1]**2)
            return -dist

    def reset(self):
        '''
        Resample the start state until it lies sufficiently far from the goal.
        '''
        while True:
            self.state = self.np_random.uniform(low=-1, high=1, size=(2,))
            if np.linalg.norm(self.state) > 0.9:
                break
        return np.array(self.state)

    # method for rendering
    def render(self, mode='human', close=False):
        if close:
            if self.viewer is not None:
                self.viewer.close()
                self.viewer = None
            return
        screen_width = 800
        screen_height = 800
        if self.viewer is None:
            from gym.envs.classic_control import rendering
            self.viewer = rendering.Viewer(screen_width, screen_height)
            agent = rendering.make_circle(
                min(screen_height, screen_width) * 0.03)
            origin = rendering.make_circle(
                min(screen_height, screen_width) * 0.03)
            trans = rendering.Transform(translation=(0, 0))
            agent.add_attr(trans)
            self.trans = trans
            agent.set_color(1, 0, 0)
            origin.set_color(0, 0, 0)
            origin.add_attr(rendering.Transform(
                translation=(screen_width // 2, screen_height // 2)))
            self.viewer.add_geom(agent)
            self.viewer.add_geom(origin)
        # self.trans.set_translation(0, 0)
        self.trans.set_translation(
            (self.state[0] + 1) / 2 * screen_width,
            (self.state[1] + 1) / 2 * screen_height,
        )
        return self.viewer.render(return_rgb_array=mode == 'rgb_array')

register(
    'chakra-v0',
    entry_point='rlpa2.chakra:chakra',
    max_episode_steps=4000,
)
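The step arithmetic of the environment above is simple enough to check by hand: clip the action to ±0.025, add it to the state, and reward with the negative Euclidean distance to the origin. A pure-Python sketch of just that arithmetic (hypothetical helper, no gym or numpy dependency):

```python
import math

def step(state, action, limit=0.025):
    # Clip each action component, as np.clip(action, -0.025, 0.025) does.
    clipped = [max(-limit, min(limit, a)) for a in action]
    nxt = [s + a for s, a in zip(state, clipped)]
    # Reward is the negative Euclidean distance to the goal at the origin.
    reward = -math.sqrt(nxt[0] ** 2 + nxt[1] ** 2)
    return nxt, reward

state, reward = step([0.5, 0.0], [1.0, 0.0])  # the action is clipped to 0.025
print(state)   # [0.525, 0.0]
print(reward)  # -0.525
```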
| StarcoderdataPython |
1986956 | <filename>scripts/insert-copyright-header.py
#!/usr/bin/env python
# Copyright (c) 2014 Azavea.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os, argparse, sys, re
license = """Copyright (c) 2014 %s.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License."""
max_width = max(map(lambda x: len(x), license.split('\n')))
def c_style_comment(text):
commented = [ ]
commented.append("/*")
for line in text.split("\n"):
commented.append(" * " + line)
commented.append(" */")
return '\n'.join(commented)
def python_comment(text):
commented = [ ]
for line in text.split("\n"):
commented.append("# " + line)
return '\n'.join(commented) + '\n'
re_c = re.compile(r'(\/\*\*\*.*?Copyright \(c\).*\*\*\*\/)', re.MULTILINE | re.DOTALL )
re_py = re.compile(r'((#.*?Copyright \(c\).*(?:\n|\r\n?))(#.*?(\n|\r\n?))*)', re.MULTILINE)
# Key: Regex for full path.
# Value: Tuple of (comment regex, comment function, comment regex, copyright company)
config = {
r"\.\/spark.*?\.scala$" : (re_c, c_style_comment, "DigitalGlobe"),
r".*\.scala$" : (re_c, c_style_comment, "Azavea"),
r".*\.py$" : (re_py, python_comment, "Azavea")
}
re_c = re.compile(r'^((\/\*\*\*.*?(?:\n|\r\n?))(\*.*?Copyright \(c\).*?(?:\n|\r\n?))(\*.*?(?:\n|\r\n?))(.*?\*\*\*\/(?:\n|\r\n?)))', re.MULTILINE)
re_py = re.compile(r'^((#.*?Copyright \(c\).*(?:\n|\r\n?))(#.*(?:\n|\r\n?))*)', re.MULTILINE)
def insert_header(path, pattern, comment):
f = open(path, "r")
text = f.read()
f.close()
(sub_text, n) = pattern.subn(comment, text, count = 1)
if n == 0:
sub_text = comment + "\n\n" + text
print "\tadded"
else:
print "\tupdated"
f = open(path, "w")
f.write(sub_text)
f.close()
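`insert_header` relies on `re.subn` returning the replacement count to decide between updating an existing header in place and prepending a new one. The mechanism in miniature, with a simplified pattern:

```python
import re

HEADER = "# Copyright (c) 2014 Example."
pattern = re.compile(r'^#.*Copyright \(c\).*$', re.MULTILINE)

def with_header(text):
    # count=1 replaces at most one existing header; n tells us whether
    # anything matched.
    new_text, n = pattern.subn(HEADER, text, count=1)
    if n == 0:
        # No existing header matched: prepend one (the "added" branch).
        return HEADER + "\n\n" + text
    # An existing header was replaced in place (the "updated" branch).
    return new_text

print(with_header("print('hi')").startswith(HEADER))                        # True
print(with_header("# Copyright (c) 2001 Old.\nprint('hi')").count("2014"))  # 1
```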
def handle(dir, file):
path = os.path.join(dir, file)
for (pattern, (re_matcher, f_comment, owner)) in config.iteritems():
m = re.match(pattern, path)
if m:
print(path)
license_comment = f_comment(license % owner)
insert_header(path, re_matcher, license_comment)
break
def main():
#Start walking from CWD and apply rules defined in config
for root, dirs, files in os.walk("."):
for f in files:
handle(root, f)
if __name__ == "__main__":
main()
| StarcoderdataPython |
1959603 | from django.contrib import admin
from .models import Member, Post, Comment, CreditCard, Filter, Image
# Register your models here.
admin.site.register(Member)
admin.site.register(Post)
admin.site.register(Comment)
admin.site.register(CreditCard)
admin.site.register(Filter)
admin.site.register(Image)
| StarcoderdataPython |
11228234 | import os
import sys
from inspect import signature
from os.path import abspath, dirname, relpath
import pandas as pd
from aaai20.utils import debug_print, run_script
VERBOSITY = 0
def main(csv_fname, cmd_idx):
"""
Run single command from csv file that specifies many commands.
The command that should be run corresponds to a row in the .csv file
with all commands. The specific command that should be run is indicated
by the row idx.
Parameters
----------
csv_fname: str
Filename of the csv containing all commands
cmd_idx: int
Index of row that corresponds to command to be run.
Returns
-------
"""
assert isinstance(cmd_idx, int)
assert isinstance(csv_fname, str)
# Extract command
df = pd.read_csv(csv_fname, index_col=0)
head_tuple = tuple(df.columns)
data_tuple = tuple(df.iloc[cmd_idx])
param_dict = {k: v for k, v in zip(head_tuple, data_tuple)}
msg = """
param_dict: {}
""".format(param_dict)
debug_print(msg,V=VERBOSITY)
# Run command
sig = signature(run_script)
ba = sig.bind(**param_dict)
run_script(*ba.args, **ba.kwargs)
return
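The `signature(run_script).bind(**param_dict)` step above turns a dict of CSV columns into the positional/keyword arguments the target function expects. A self-contained illustration with a stand-in function (the real `run_script` lives in `aaai20.utils`):

```python
from inspect import signature

def run_script(script, dataset, folds=2):
    # Stand-in for aaai20.utils.run_script.
    return (script, dataset, folds)

# One CSV row, already converted to a {column: value} dict.
param_dict = {'script': 'train.py', 'dataset': 'iris', 'folds': 5}

# bind() validates the names against the signature and orders the values.
ba = signature(run_script).bind(**param_dict)
print(run_script(*ba.args, **ba.kwargs))  # ('train.py', 'iris', 5)
```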
if __name__ == '__main__':
# Extract parameters
csv_fname_outer_scope = sys.argv[1]
cmd_idx_outer_scope = int(sys.argv[2])
# Run main
main(csv_fname_outer_scope, cmd_idx_outer_scope)
| StarcoderdataPython |
3347221 | # -*- coding: utf-8 -*-
# Generated by Django 1.11.8 on 2017-12-25 14:41
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('app', '0004_auto_20170105_1426'),
]
operations = [
migrations.AlterField(
model_name='cfauser',
name='funder_name',
field=models.CharField(blank=True, default='', max_length=256),
),
migrations.AlterField(
model_name='cfauser',
name='osa_email',
field=models.EmailField(blank=True, help_text='The email address for contacting OSA when an app is funded.', max_length=254, null=True, verbose_name='OSA Contact Email'),
),
migrations.AlterField(
model_name='cfauser',
name='user',
field=models.OneToOneField(help_text='You must first create a user before adding them to the CFA.', on_delete=django.db.models.deletion.CASCADE, related_name='profile', to=settings.AUTH_USER_MODEL),
),
migrations.AlterField(
model_name='cfauser',
name='user_type',
field=models.CharField(choices=[('R', 'REQUESTER'), ('F', 'FUNDER')], max_length=1),
),
migrations.AlterField(
model_name='eligibilityanswer',
name='answer',
field=models.CharField(choices=[('Y', 'YES'), ('N', 'NO')], max_length=1),
),
migrations.AlterField(
model_name='event',
name='status',
field=models.CharField(choices=[('S', 'SAVED'), ('B', 'SUBMITTED'), ('F', 'FUNDED'), ('W', 'FOLLOWUP'), ('O', 'OVER')], max_length=1),
),
migrations.AlterField(
model_name='funderconstraint',
name='answer',
field=models.CharField(choices=[('Y', 'YES'), ('N', 'NO')], max_length=1),
),
migrations.AlterField(
model_name='item',
name='category',
field=models.CharField(choices=[('H', 'Honoraria/Services'), ('E', 'Equipment/Supplies'), ('F', 'Food/Drinks'), ('S', 'Facilities/Security'), ('T', 'Travel/Conference'), ('P', 'Photocopies/Printing/Publicity'), ('O', 'Other')], max_length=1),
),
]
| StarcoderdataPython |
3353524 | from __future__ import absolute_import, print_function, unicode_literals
from flask import (
Flask,
make_response,
jsonify,
request,
render_template,
send_from_directory,
abort,
redirect,
)
from flask_cors import CORS
from JavPy.functions import Functions
import json
import os
from JavPy.utils.requester import spawn
import JavPy.utils.config as config
import JavPy.utils.buggyauth as auth
from copy import deepcopy
base_path = "/".join(os.path.abspath(__file__).replace("\\", "/").split("/")[:-3])
web_dist_path = base_path + "/app/web/dist"
app = Flask(__name__, template_folder=web_dist_path)
CORS(app, resources=r"/*")
@app.before_first_request
def before_first_request():
pass
@app.before_request
def before_request():
if request.full_path == "/auth_by_password?":
return
if not auth.check_request(request):
abort(400)
@app.route("/auth_by_password", methods=["POST"])
def auth_by_password():
params = json.loads(request.data.decode("utf-8"))
print(params)
if auth.check_password(params["password"]):
cookie = auth.generate_cookie(request)
return cookie
else:
return make_response("auth failed"), 400
@app.route("/get_config", methods=["POST"])
def get_config():
cfg = deepcopy(config.Config.config)
if "password" in cfg:
del cfg["password"]
return json.dumps(cfg)
@app.route("/update_config", methods=["POST"])
def update_config():
data = json.loads(request.data.decode("utf-8"))
if data["password"]:
config.Config.set_config("password", data["password"])
config.Config.set_config("ip-blacklist", data["ipBlacklist"])
config.Config.set_config("ip-whitelist", data["ipWhitelist"])
config.Config.save_config()
try:
import importlib
_reload = importlib.reload
except (ImportError, AttributeError):
_reload = reload
_reload(config)
_reload(auth)
return ""
@app.route("/")
def index():
return render_template("index.html")
@app.route("/<path:path>")
def send_static(path):
if not os.path.exists(web_dist_path + "/" + path):
return render_template("index.html")
else:
return send_from_directory(web_dist_path, path)
@app.route("/search_by_code", methods=["POST"])
def search_by_code():
params = json.loads(request.data.decode("utf-8"))
print(params)
res = {"videos": None, "other": None}
if params["code"]:
try:
res["videos"] = [Functions.search_by_code(params["code"]).to_dict()]
rsp = jsonify(res)
except AttributeError:
rsp = make_response("")
else:
rsp = make_response("")
rsp.headers["Access-Control-Allow-Origin"] = "*"
return rsp
@app.route("/search_by_actress", methods=["POST"])
def search_by_actress():
params = json.loads(request.data.decode("utf-8"))
print(params)
actress = params["actress"]
history_name = params["history_name"] == "true"
briefs, names = spawn(
Functions.search_by_actress, actress, None, history_name
).wait_for_result()
res = {
"videos": [brief.to_dict() for brief in briefs],
"other": {"history_names": names},
}
rsp = jsonify(res)
rsp.headers["Access-Control-Allow-Origin"] = "*"
return rsp
@app.route("/new", methods=["POST"])
def new():
params = json.loads(request.data.decode("utf-8"))
print(params)
if "up_to" in params:
res = Functions.get_newly_released(params["up_to"], False)
elif "page" in params:
res = Functions.get_newly_released(False, params["page"])
else:
res = Functions.get_newly_released(30, False)
if res:
res = [x.to_dict() for x in res]
rsp = jsonify(res)
rsp.headers["Access-Control-Allow-Origin"] = "*"
return rsp
@app.route("/search_magnet_by_code", methods=["POST"])
def search_magnet_by_code():
params = json.loads(request.data.decode("utf-8"))
print(params)
res = []
if params["code"]:
res = Functions.get_magnet(params["code"])
if res:
res = [x.to_dict() for x in res]
rsp = jsonify(res)
rsp.headers["Access-Control-Allow-Origin"] = "*"
return rsp
@app.route("/get_tags", methods=["POST"])
def get_tags():
params = json.loads(request.data.decode("utf-8"))
print(params)
res = Functions.get_tags()
rsp = jsonify(res)
rsp.headers["Access-Control-Allow-Origin"] = "*"
return rsp
@app.route("/actress_info", methods=["POST"])
def actress_info():
params = json.loads(request.data.decode("utf-8"))
print(params)
res = Functions.get_actress_info(params["actress"])
rsp = jsonify(res.to_dict())
print(res)
rsp.headers["Access-Control-Allow-Origin"] = "*"
return rsp
| StarcoderdataPython |
11364940 | <reponame>gaborbernat/toxn
import logging
import os
from contextlib import contextmanager
from pathlib import Path
from typing import Any, Dict, Generator, List, Tuple

from toxn.config.models.task.base import TaskConfig
from toxn.config.models.venv import Install
from toxn.util import Loggers

def install_params(batch_name: str, packages: List[str], config: TaskConfig,
                   develop: bool = False) -> Install:
    return Install(batch_name, packages, config.install_command, develop)

class TaskLogging(logging.LoggerAdapter):
    """
    Adapter that prepends the task name from the extra dict's 'task' key,
    in brackets, to every log message.
    """

    def process(self, msg: str, kwargs: Any) -> Tuple[str, Dict[str, Any]]:
        task = self.extra.get('task')  # type: ignore
        if task is None:
            task_info = ''
        else:
            task_info = f'[{task}] '
        return f"{task_info}{msg}", kwargs

@contextmanager
def change_dir(to_dir: Path, logger: Loggers) -> Generator[None, None, None]:
    cwd = Path(os.getcwd())
    if cwd != to_dir:
        logger.debug('change cwd to %r', to_dir)
        os.chdir(str(to_dir))
    try:
        yield
    finally:
        if cwd != to_dir:
            logger.debug('change cwd to %r', to_dir)
            os.chdir(str(cwd))
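A self-contained sketch of the context-manager pattern `change_dir` uses: enter a directory on the way in, and always restore the previous one in `finally`, even if the body raises. Logging is dropped here to keep the sketch minimal:

```python
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def change_dir(to_dir):
    cwd = os.getcwd()
    os.chdir(to_dir)
    try:
        yield
    finally:
        # Restored even if the with-body raises.
        os.chdir(cwd)

before = os.getcwd()
with tempfile.TemporaryDirectory() as tmp:
    with change_dir(tmp):
        inside = os.getcwd()  # now inside the temporary directory
print(os.path.samefile(os.getcwd(), before))  # True: back where we started
```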
| StarcoderdataPython |
11343512 | <gh_stars>0
#capitalize function --- first letter in uppercase
s1 = "indian brave soldiers..."
print(s1) # -- CA1
print(s1.capitalize()) # -- CA2
s2 = 'India'
print(s2.capitalize()) # -- CA3
s3 = '31 is my age'
print(s3.capitalize()) # -- CA4
| StarcoderdataPython |
5064298 | from cms.app_base import CMSAppConfig
from djangocms_versioning.datastructures import VersionableItem
from .models import App2PostContent, App2TitleContent
class CMSApp1Config(CMSAppConfig):
    djangocms_moderation_enabled = True
    djangocms_versioning_enabled = True
    moderated_models = (App2PostContent, App2TitleContent)
    versioning = [
        VersionableItem(
            content_model=App2PostContent,
            grouper_field_name="post",
            copy_function=lambda x: x,
        ),
        VersionableItem(
            content_model=App2TitleContent,
            grouper_field_name="title",
            copy_function=lambda x: x,
        ),
    ]
| StarcoderdataPython |
6447526 | <reponame>dikien/Machine-Learning-Newspaper
# -*- coding: UTF-8 -*-
from time import time
from step3_feature_engineering import preprocess_2
from sklearn.svm import SVC
# sklearn.cross_validation and sklearn.grid_search were removed in modern
# scikit-learn; model_selection provides the same estimators.
from sklearn.model_selection import cross_val_score, KFold, GridSearchCV
features, labels, vectorizer, selector, le = preprocess_2("pkl/article_2_people.pkl", "pkl/lable_2_people.pkl")
# Constructing the k-fold cross validation iterator (k=10)
cv = KFold(n_splits=10,  # number of folds the dataset is divided into
           shuffle=True,
           random_state=123)
t0 = time()
parameters = {'kernel': ('linear', 'rbf'), 'C': [1, 10, 100, 1000]}
clf = GridSearchCV(SVC(), parameters)
clf.fit(features, labels)
# print(cross_val_score(clf, features, labels, cv=cv, scoring='accuracy'))
print("escape time : ", round(time() - t0, 3), "s")
print("best score is %s" % clf.best_score_)
print("best parameter is %s" % clf.best_params_)
print(clf.cv_results_)
'''
escape time : 365.156 s
best score is 0.839794303797
best parameter is {'kernel': 'linear', 'C': 1}
[mean: 0.83979, std: 0.01271, params: {'kernel': 'linear', 'C': 1}, mean: 0.52097, std: 0.00029, params: {'kernel': 'rbf', 'C': 1}, mean: 0.81487, std: 0.02442, params: {'kernel': 'linear', 'C': 10}, mean: 0.52097, std: 0.00029, params: {'kernel': 'rbf', 'C': 10}, mean: 0.76622, std: 0.01725, params: {'kernel': 'linear', 'C': 100}, mean: 0.52136, std: 0.00048, params: {'kernel': 'rbf', 'C': 100}, mean: 0.73299, std: 0.00730, params: {'kernel': 'linear', 'C': 1000}, mean: 0.83900, std: 0.01419, params: {'kernel': 'rbf', 'C': 1000}]
'''
'''
# unit version
from time import time
from step3_vectorize_text import preprocess
from sklearn.svm import SVC
import numpy as np
from sklearn.metrics import accuracy_score
import collections
features_train, features_test, labels_train, labels_test = preprocess()
t0 = time()
clf = SVC(kernel='rbf', C=10000.0)
clf.fit(features_train, labels_train)
print "training time:", round(time()-t0, 3), "s"
t0 = time()
y_pred = clf.predict(features_test)
print "predicting time:", round(time()-t0, 3), "s"
print accuracy_score(labels_test, y_pred, normalize=True)
counter=collections.Counter(y_pred)
print counter
#########################################################
'''
| StarcoderdataPython |
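The `GridSearchCV` call above exhaustively scores every parameter combination. A dependency-free sketch of that exhaustive search (the scoring function and toy scores below are hypothetical, loosely echoing the results printed in the transcript):

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustively score every parameter combination and return the best.

    param_grid: dict mapping parameter name -> list of candidate values.
    score_fn:   callable(params_dict) -> float (higher is better).
    """
    names = sorted(param_grid)
    best_params, best_score = None, float('-inf')
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy scorer: pretend the linear kernel with C=1 scores highest.
toy_scores = {('linear', 1): 0.84, ('linear', 10): 0.81,
              ('rbf', 1): 0.52, ('rbf', 10): 0.52}
best, score = grid_search(
    {'kernel': ['linear', 'rbf'], 'C': [1, 10]},
    lambda p: toy_scores[(p['kernel'], p['C'])])
```

Real grid search additionally cross-validates each candidate rather than scoring it once, but the enumeration logic is the same.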
328714 | <filename>migrations/versions/5f3ea30d645d_initial_migration.py
"""Initial Migration
Revision ID: 5<PASSWORD>d
Revises: <PASSWORD>
Create Date: 2019-12-02 18:49:02.495171
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '<PASSWORD>'
down_revision = '<PASSWORD>'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('comments', sa.Column('time', sa.DateTime(), nullable=True))
op.add_column('posts', sa.Column('timestamp', sa.DateTime(), nullable=True))
op.create_index(op.f('ix_posts_timestamp'), 'posts', ['timestamp'], unique=False)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_index(op.f('ix_posts_timestamp'), table_name='posts')
op.drop_column('posts', 'timestamp')
op.drop_column('comments', 'time')
# ### end Alembic commands ###
| StarcoderdataPython |
9652237 | # ____ #
####################-=|NOTE|=-######################
#U SHOULD INPUT YOUR E-MAIL AND PASSWORD TO <PASSWORD>#
#THIS IS FOR NORMAL DISCORD USERS... #
#Programmed By Cylops. (Lyceion) #
####################################################
#Licensed By MIT's Casual License...
import asyncio
import discord
from discord.ext.commands import Bot
from discord.ext import commands
client = discord.Client()
@client.event
async def on_ready():
print("Cylops' Core Is Online \[T]/")
print("\n Name: %s || ID: %s"%(client.user.name, client.user.id))
channelID = input("Please Input A Channel ID: ")
Break_Variable = "ext-msg"
while True:
msg = input("\nMessage To Send Server:")
aaa = discord.Embed(color=0xff8000)
aaa.add_field(name="Sent From Cylops' Core", value=msg, inline=False)
aaa.set_footer(text="-|- Programmed By Cylops -|-")
aaa.set_thumbnail(url="https://i.hizliresim.com/nlYg1M.jpg")
main_channel = channelID
await client.send_message(discord.Object(id=main_channel), embed=aaa)
print("Message Has Been Sent!")
client.run('E-MAIL', 'PASS', bot=False) | StarcoderdataPython |
11309182 | <reponame>WarwickAnimeSoc/aniMango
# Generated by Django 2.2.13 on 2020-09-15 16:14
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('stream', '0001_initial'),
]
operations = [
migrations.AddField(
model_name='viewcounter',
name='name',
field=models.CharField(default='count', max_length=10),
preserve_default=False,
),
]
| StarcoderdataPython |
5142502 | import torch
import torch.nn as nn
from ..model import BaseMuZeroNet
class MuZeroNet(BaseMuZeroNet):
def __init__(self, input_size, action_space_n, reward_support_size, value_support_size,
inverse_value_transform, inverse_reward_transform):
super(MuZeroNet, self).__init__(inverse_value_transform, inverse_reward_transform)
self.input_size = input_size
self.hx_size = 8
self._representation = nn.Sequential(nn.Linear(input_size, self.hx_size),
nn.Tanh())
self._dynamics_state = nn.Sequential(nn.Linear(self.hx_size + action_space_n, 16),
nn.Tanh(),
nn.Linear(16, self.hx_size),
nn.Tanh())
self._dynamics_reward = nn.Sequential(nn.Linear(self.hx_size + action_space_n, 16),
nn.LeakyReLU(),
nn.Linear(16, reward_support_size))
self._prediction_actor = nn.Sequential(nn.Linear(self.hx_size, 16),
nn.LeakyReLU(),
nn.Linear(16, action_space_n))
self._prediction_value = nn.Sequential(nn.Linear(self.hx_size, 16),
nn.LeakyReLU(),
nn.Linear(16, value_support_size))
self.action_space_n = action_space_n
self._prediction_value[-1].weight.data.fill_(0)
self._prediction_value[-1].bias.data.fill_(0)
self._dynamics_reward[-1].weight.data.fill_(0)
self._dynamics_reward[-1].bias.data.fill_(0)
def prediction(self, state):
actor_logit = self._prediction_actor(state)
value = self._prediction_value(state)
return actor_logit, value
def representation(self, obs_history):
return self._representation(obs_history)
def dynamics(self, state, action):
assert len(state.shape) == 2
assert action.shape[1] == 1
action_one_hot = torch.zeros(size=(action.shape[0], self.action_space_n),
dtype=torch.float32, device=action.device)
action_one_hot.scatter_(1, action, 1.0)
x = torch.cat((state, action_one_hot), dim=1)
next_state = self._dynamics_state(x)
reward = self._dynamics_reward(x)
return next_state, reward
    def features_size(self):
        # Note: relies on a `features` attribute that is not defined in this
        # class; a subclass or the base network must provide it.
        return self.features(torch.zeros(self.input_size)).view(1, -1).size(1)
class Flatten(nn.Module):
    def forward(self, input):
        # Note: view(-1) also flattens the batch dimension; a batch-preserving
        # flatten would be input.view(input.size(0), -1).
        return input.view(-1)
| StarcoderdataPython |
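The `dynamics` method above one-hot encodes integer actions with `scatter_` before concatenating them to the state. A dependency-free sketch of that encoding step:

```python
def one_hot(actions, n_actions):
    """Pure-Python version of the scatter_-based one-hot encoding used in
    MuZeroNet.dynamics: each integer action becomes a 0/1 row vector."""
    return [[1.0 if a == i else 0.0 for i in range(n_actions)]
            for a in actions]

# A batch of two actions (2 and 0) over a four-action space.
encoded = one_hot([2, 0], 4)
```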
1937792 | <reponame>NeonOcean/Environment
from _collections import defaultdict
from _weakrefset import WeakSet
from collections import namedtuple
from alarms import add_alarm_real_time, cancel_alarm, add_alarm
from clock import interval_in_real_seconds
from indexed_manager import CallbackTypes
from routing.route_enums import RouteEventType
from sims4.callback_utils import CallableList
from sims4.service_manager import Service
from sims4.tuning.tunable import TunableRealSecond
import services
import sims4.geometry
import sims4.log
import sims4.math
logger = sims4.log.Logger('Broadcaster', default_owner='epanero')
class BroadcasterService(Service):
INTERVAL = TunableRealSecond(description='\n The time between broadcaster pulses. A lower number will impact\n performance.\n ', default=5)
DEFAULT_QUADTREE_RADIUS = 0.1
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._alarm_handle = None
self._processing_task = None
self._on_update_callbacks = CallableList()
self._pending_broadcasters = []
self._active_broadcasters = []
self._cluster_requests = {}
self._object_cache = None
self._object_cache_tags = None
self._pending_update = False
self._quadtrees = defaultdict(sims4.geometry.QuadTree)
def create_update_alarm(self):
self._alarm_handle = add_alarm(self, interval_in_real_seconds(self.INTERVAL), self._on_update, repeating=True, use_sleep_time=False)
def start(self):
self.create_update_alarm()
object_manager = services.object_manager()
object_manager.register_callback(CallbackTypes.ON_OBJECT_LOCATION_CHANGED, self._update_object_cache)
object_manager.register_callback(CallbackTypes.ON_OBJECT_ADD, self._update_object_cache)
services.current_zone().wall_contour_update_callbacks.append(self._on_wall_contours_changed)
def stop(self):
if self._alarm_handle is not None:
cancel_alarm(self._alarm_handle)
self._alarm_handle = None
if self._processing_task is not None:
self._processing_task.stop()
self._processing_task = None
object_manager = services.object_manager()
object_manager.unregister_callback(CallbackTypes.ON_OBJECT_LOCATION_CHANGED, self._update_object_cache)
object_manager.unregister_callback(CallbackTypes.ON_OBJECT_ADD, self._update_object_cache)
services.current_zone().wall_contour_update_callbacks.remove(self._on_wall_contours_changed)
def add_broadcaster(self, broadcaster):
if broadcaster not in self._pending_broadcasters:
self._pending_broadcasters.append(broadcaster)
if broadcaster.immediate:
self._pending_update = True
self._on_update_callbacks()
def remove_broadcaster(self, broadcaster):
if broadcaster in self._pending_broadcasters:
self._pending_broadcasters.remove(broadcaster)
if broadcaster in self._active_broadcasters:
self._remove_from_cluster_request(broadcaster)
self._remove_broadcaster_from_quadtree(broadcaster)
self._active_broadcasters.remove(broadcaster)
broadcaster.on_removed()
self._on_update_callbacks()
def _activate_pending_broadcasters(self):
for broadcaster in self._pending_broadcasters:
self._active_broadcasters.append(broadcaster)
self.update_cluster_request(broadcaster)
self._update_object_cache()
self._pending_broadcasters.clear()
def _add_broadcaster_to_quadtree(self, broadcaster):
self._remove_broadcaster_from_quadtree(broadcaster)
broadcaster_quadtree = self._quadtrees[broadcaster.routing_surface.secondary_id]
broadcaster_bounds = sims4.geometry.QtCircle(sims4.math.Vector2(broadcaster.position.x, broadcaster.position.z), self.DEFAULT_QUADTREE_RADIUS)
broadcaster_quadtree.insert(broadcaster, broadcaster_bounds)
return broadcaster_quadtree
def _remove_broadcaster_from_quadtree(self, broadcaster):
broadcaster_quadtree = broadcaster.quadtree
if broadcaster_quadtree is not None:
broadcaster_quadtree.remove(broadcaster)
def update_cluster_request(self, broadcaster):
if broadcaster not in self._active_broadcasters:
return
clustering_request = broadcaster.get_clustering()
if clustering_request is None:
return
self._remove_from_cluster_request(broadcaster)
cluster_request_key = (type(broadcaster), broadcaster.routing_surface.secondary_id)
if cluster_request_key in self._cluster_requests:
cluster_request = self._cluster_requests[cluster_request_key]
cluster_request.set_object_dirty(broadcaster)
else:
cluster_quadtree = self._quadtrees[broadcaster.routing_surface.secondary_id]
cluster_request = clustering_request(lambda : self._get_broadcasters_for_cluster_request_gen(*cluster_request_key), quadtree=cluster_quadtree)
self._cluster_requests[cluster_request_key] = cluster_request
quadtree = self._add_broadcaster_to_quadtree(broadcaster)
broadcaster.on_added_to_quadtree_and_cluster_request(quadtree, cluster_request)
def _remove_from_cluster_request(self, broadcaster):
cluster_request = broadcaster.cluster_request
if cluster_request is not None:
cluster_request.set_object_dirty(broadcaster)
def _is_valid_cache_object(self, obj):
if obj.is_sim:
return False
elif self._object_cache_tags:
object_tags = obj.get_tags()
if object_tags & self._object_cache_tags:
return True
else:
return False
return True
def get_object_cache_info(self):
return (self._object_cache, self._object_cache_tags)
def _generate_object_cache(self):
self._object_cache = WeakSet(obj for obj in services.object_manager().valid_objects() if self._is_valid_cache_object(obj))
def _update_object_cache(self, obj=None):
if obj is None:
self._object_cache = None
self._object_cache_tags = None
return
if self._object_cache is not None and self._is_valid_cache_object(obj):
self._object_cache.add(obj)
def _is_valid_broadcaster(self, broadcaster):
broadcasting_object = broadcaster.broadcasting_object
if broadcasting_object is None or not broadcasting_object.visible_to_client:
return False
if broadcasting_object.is_in_inventory():
return False
elif broadcasting_object.parent is not None and broadcasting_object.parent.is_sim:
return False
return True
def _get_broadcasters_for_cluster_request_gen(self, broadcaster_type, broadcaster_level):
for broadcaster in self._active_broadcasters:
if broadcaster.guid == broadcaster_type.guid:
if broadcaster.should_cluster():
if broadcaster.routing_surface.secondary_id == broadcaster_level:
yield broadcaster
def get_broadcasters_debug_gen(self):
for cluster_request in self._cluster_requests.values():
for cluster in cluster_request.get_clusters_gen():
broadcaster_iter = cluster.objects_gen()
yield next(broadcaster_iter)
yield from cluster_request.get_rejects()
for broadcaster in self._active_broadcasters:
if not broadcaster.should_cluster():
if self._is_valid_broadcaster(broadcaster):
yield broadcaster
def get_broadcasters_gen(self):
for (cluster_request_key, cluster_request) in self._cluster_requests.items():
is_cluster_dirty = cluster_request.is_dirty()
for broadcaster in self._get_broadcasters_for_cluster_request_gen(*cluster_request_key):
if self._is_valid_broadcaster(broadcaster):
broadcaster.regenerate_constraint()
for cluster in cluster_request.get_clusters_gen():
linkable_broadcasters_iter = (b for b in cluster.objects_gen() if self._is_valid_broadcaster(b))
master_broadcaster = next(linkable_broadcasters_iter, None)
if master_broadcaster is None:
continue
master_broadcaster.set_linked_broadcasters(linkable_broadcasters_iter)
yield master_broadcaster
yield from (b for b in cluster_request.get_rejects() if self._is_valid_broadcaster(b))
for broadcaster in self._active_broadcasters:
if not broadcaster.should_cluster():
if self._is_valid_broadcaster(broadcaster):
yield broadcaster
PathSegmentData = namedtuple('PathSegmentData', ('prev_pos', 'cur_pos', 'segment_vec', 'segment_mag_sq', 'segment_normal'))
def get_broadcasters_along_route_gen(self, sim, path, start_time=0, end_time=0):
path_segment_datas = {}
start_index = max(0, path.node_at_time(start_time).index - 1)
end_index = min(len(path) - 1, path.node_at_time(end_time).index)
for broadcaster in self.get_broadcasters_gen():
if broadcaster.route_events:
if not broadcaster.can_affect(sim):
continue
constraint = broadcaster.get_constraint()
geometry = constraint.geometry
if geometry is None:
continue
polygon = geometry.polygon
if polygon is None:
continue
if not constraint.valid:
continue
constraint_pos = polygon.centroid()
constraint_radius_sq = polygon.radius()
constraint_radius_sq = constraint_radius_sq*constraint_radius_sq
for index in range(end_index, start_index, -1):
prev_index = index - 1
prev_node = path.nodes[prev_index]
if not constraint.is_routing_surface_valid(prev_node.routing_surface_id):
continue
segment_key = (prev_index, index)
segment_data = path_segment_datas.get(segment_key, None)
if segment_data is None:
cur_node = path.nodes[index]
cur_pos = sims4.math.Vector3(*cur_node.position)
prev_pos = sims4.math.Vector3(*prev_node.position)
segment_vec = cur_pos - prev_pos
segment_vec.y = 0
segment_mag_sq = segment_vec.magnitude_2d_squared()
if sims4.math.almost_equal_sq(segment_mag_sq, 0):
segment_normal = None
else:
segment_normal = segment_vec/sims4.math.sqrt(segment_mag_sq)
segment_data = BroadcasterService.PathSegmentData(prev_pos, cur_pos, segment_vec, segment_mag_sq, segment_normal)
path_segment_datas[segment_key] = segment_data
else:
(prev_pos, cur_pos, segment_vec, segment_mag_sq, segment_normal) = segment_data
if segment_normal is None:
constraint_vec = constraint_pos - prev_pos
constraint_dist_sq = constraint_vec.magnitude_2d_squared()
if constraint_radius_sq < constraint_dist_sq:
continue
else:
constraint_vec = constraint_pos - prev_pos
constraint_vec.y = 0
contraint_proj = constraint_vec - segment_normal*sims4.math.vector_dot_2d(constraint_vec, segment_normal)
if constraint_radius_sq < contraint_proj.magnitude_2d_squared():
continue
for (transform, _, time) in path.get_location_data_along_segment_gen(prev_index, index):
if not geometry.test_transform(transform):
continue
yield (time, broadcaster)
break
break
def get_pending_broadcasters_gen(self):
yield from self._pending_broadcasters
def _get_all_objects_gen(self):
is_any_broadcaster_allowing_objects = True if self._object_cache else False
if not is_any_broadcaster_allowing_objects:
for broadcaster in self._active_broadcasters:
(allow_objects, allow_objects_tags) = broadcaster.allow_objects.is_affecting_objects()
if allow_objects:
is_any_broadcaster_allowing_objects = True
if allow_objects_tags is None:
self._object_cache_tags = None
break
else:
if self._object_cache_tags is None:
self._object_cache_tags = set()
self._object_cache_tags |= allow_objects_tags
if is_any_broadcaster_allowing_objects:
if self._object_cache is None:
self._generate_object_cache()
yield from list(self._object_cache)
else:
self._object_cache = None
self._object_cache_tags = None
yield from services.sim_info_manager().instanced_sims_gen()
def register_callback(self, callback):
if callback not in self._on_update_callbacks:
self._on_update_callbacks.append(callback)
def unregister_callback(self, callback):
if callback in self._on_update_callbacks:
self._on_update_callbacks.remove(callback)
def _on_update(self, _):
self._pending_update = True
def _on_wall_contours_changed(self, *_, **__):
self._update_object_cache()
def provide_route_events(self, route_event_context, sim, path, failed_types=None, start_time=0, end_time=0, **kwargs):
for (time, broadcaster) in self.get_broadcasters_along_route_gen(sim, path, start_time=start_time, end_time=end_time):
resolver = broadcaster.get_resolver(sim)
for route_event in broadcaster.route_events:
if not failed_types is None:
pass
if not route_event_context.route_event_already_scheduled(route_event, provider=broadcaster) and route_event.test(resolver):
route_event_context.add_route_event(RouteEventType.BROADCASTER, route_event(time=time, provider=broadcaster, provider_required=True))
def update(self):
if self._pending_update:
self._pending_update = False
self._update()
def _is_location_affected(self, constraint, transform, routing_surface):
if constraint.geometry is not None and not constraint.geometry.test_transform(transform):
return False
elif not constraint.is_routing_surface_valid(routing_surface):
return False
return True
def update_broadcasters_one_shot(self, broadcasters):
for obj in self._get_all_objects_gen():
object_transform = None
routing_surface = obj.routing_surface
for broadcaster in broadcasters:
if broadcaster.can_affect(obj):
constraint = broadcaster.get_constraint()
if not constraint.valid:
continue
if object_transform is None:
parent = obj.parent
if parent is None:
object_transform = obj.transform
else:
object_transform = parent.transform
if self._is_location_affected(constraint, object_transform, routing_surface):
broadcaster.apply_broadcaster_effect(obj)
broadcaster.remove_broadcaster_effect(obj)
if not obj.valid_for_distribution:
break
def _update(self):
try:
self._activate_pending_broadcasters()
current_broadcasters = set(self.get_broadcasters_gen())
for obj in self._get_all_objects_gen():
object_transform = None
is_affected = False
for broadcaster in current_broadcasters:
if broadcaster.can_affect(obj):
constraint = broadcaster.get_constraint()
if not constraint.valid:
continue
if object_transform is None:
parent = obj.parent
if parent is None:
object_transform = obj.transform
else:
object_transform = parent.transform
if self._is_location_affected(constraint, object_transform, obj.routing_surface):
broadcaster.apply_broadcaster_effect(obj)
if not obj.valid_for_distribution:
is_affected = False
break
is_affected = True
if not is_affected:
if self._object_cache is not None:
self._object_cache.discard(obj)
for broadcaster in current_broadcasters:
broadcaster.on_processed()
finally:
self._on_update_callbacks()
class BroadcasterRealTimeService(BroadcasterService):
def create_update_alarm(self):
self._alarm_handle = add_alarm_real_time(self, interval_in_real_seconds(self.INTERVAL), self._on_update, repeating=True, use_sleep_time=False)
| StarcoderdataPython |
252396 | <filename>seth/crypto.py
import struct
import hashlib
import subprocess
import re
from binascii import hexlify, unhexlify
from seth.args import args, hexdump
from seth.consts import TERM_PRIV_KEY
class RC4(object):
def __init__(self, key):
x = 0
self.sbox = list(range(256))
for i in range(256):
x = (x + self.sbox[i] + key[i % len(key)]) % 256
self.sbox[i], self.sbox[x] = self.sbox[x], self.sbox[i]
self.i = self.j = 0
self.encrypted_packets = 0
def decrypt(self, data):
if self.encrypted_packets >= 4096:
self.update_key()
out = []
for char in data:
self.i = (self.i + 1) % 256
self.j = (self.j + self.sbox[self.i]) % 256
self.sbox[self.i], self.sbox[self.j] = self.sbox[self.j], self.sbox[self.i]
out.append(char ^ self.sbox[(self.sbox[self.i] + self.sbox[self.j]) % 256])
self.encrypted_packets += 1
return bytes(bytearray(out))
def update_key(self):
print("Updating session keys")
pad1 = b"\x36"*40
pad2 = b"\x5c"*48
# TODO finish this
def reencrypt_client_random(crypto, bytes):
"""Replace the original encrypted client random (encrypted with OUR
public key) with the client random encrypted with the original public
key"""
reenc_client_rand = rsa_encrypt(crypto["client_rand"],
crypto["pubkey"]) + b"\x00"*8
result = bytes.replace(crypto["enc_client_rand"],
reenc_client_rand)
return result
def generate_rsa_key(keysize):
p = subprocess.Popen(
["openssl", "genrsa", str(keysize)],
stdout=subprocess.PIPE,
stderr=subprocess.DEVNULL
)
key_pipe = subprocess.Popen(
["openssl", "rsa", "-noout", "-text"],
stdin=p.stdout,
stdout=subprocess.PIPE
)
p.stdout.close()
output = key_pipe.communicate()[0]
# parse the text output
key = None
result = {}
for line in output.split(b'\n'):
field = line.split(b':')[:2]
if len(field) == 2 and field[0] in [
b'modulus',
b'privateExponent',
b'publicExponent'
]:
key = field[0].decode()
result[key] = field[1]
elif not line[:1] == b" ":
key = None
if line[:4] == b" "*4 and key in result:
result[key] += line[4:]
for f in ["modulus", "privateExponent"]:
b = result[f].replace(b':', b'')
b = unhexlify(b)
result[f] = int.from_bytes(b, "big")
m = re.match(b'.* ([0-9]+) ', result['publicExponent'])
result['publicExponent'] = int(m.groups(1)[0])
return result
def rsa_encrypt(bytes, key):
r = int.from_bytes(bytes, "little")
e = key["publicExponent"]
n = key["modulus"]
c = pow(r, e, n)
return c.to_bytes(2048, "little").rstrip(b"\x00")
def rsa_decrypt(bytes, key):
s = int.from_bytes(bytes, "little")
d = key["privateExponent"]
n = key["modulus"]
m = pow(s, d, n)
return m.to_bytes(2048, "little").rstrip(b"\x00")
def is_fast_path(bytes):
if len(bytes) <= 1: return False
return bytes[0] % 4 == 0 and bytes[1] in [len(bytes), 0x80]
def decrypt(bytes, From="Client"):
cleartext = b""
if is_fast_path(bytes):
is_encrypted = (bytes[0] >> 7 == 1)
has_opt_length = (bytes[1] >= 0x80)
offset = 2
if has_opt_length:
offset += 1
if is_encrypted:
offset += 8
cleartext = rc4_decrypt(bytes[offset:], From=From)
else: # slow path
offset = 13
if len(bytes) <= 15: return bytes
if bytes[offset] >= 0x80: offset += 1
offset += 1
security_flags = struct.unpack('<H', bytes[offset:offset+2])[0]
is_encrypted = (security_flags & 0x0008)
if is_encrypted:
offset += 12
cleartext = rc4_decrypt(bytes[offset:], From=From)
if not cleartext == b"":
if args.debug:
print("Cleartext: ")
hexdump(cleartext)
return bytes[:offset] + cleartext
else:
return bytes
def sym_encryption_enabled(crypto):
if "client_rand" in crypto:
return (not crypto["client_rand"] == b"")
else:
return False
def generate_session_keys(crypto):
# Ch. 5.3.5.1
def salted_hash(s, i):
sha1 = hashlib.sha1()
sha1.update(i + s + crypto["client_rand"] +
crypto["server_rand"])
md5 = hashlib.md5()
md5.update(s + sha1.digest())
return md5.digest()
def final_hash(k):
md5 = hashlib.md5()
md5.update(k + crypto["client_rand"] +
crypto["server_rand"])
return md5.digest()
# Non-Fips, 128bit key
pre_master_secret = (crypto["client_rand"][:24] +
crypto["server_rand"][:24])
master_secret = (salted_hash(pre_master_secret, b"A") +
salted_hash(pre_master_secret, b"BB") +
salted_hash(pre_master_secret, b"CCC"))
session_key_blob = (salted_hash(master_secret, b"X") +
salted_hash(master_secret, b"YY") +
salted_hash(master_secret, b"ZZZ"))
mac_key, server_encrypt_key, server_decrypt_key = [
session_key_blob[i*16:(i+1)*16] for i in range(3)
]
server_encrypt_key = final_hash(server_encrypt_key)
server_decrypt_key = final_hash(server_decrypt_key)
client_encrypt_key = server_decrypt_key
client_decrypt_key = server_encrypt_key
crypto["mac_key"] = mac_key
crypto["server_encrypt_key"] = server_encrypt_key
crypto["server_decrypt_key"] = server_decrypt_key
crypto["client_encrypt_key"] = client_encrypt_key
crypto["client_decrypt_key"] = client_decrypt_key
# TODO handle shorter keys than 128 bit
print("Session keys generated")
init_rc4_sbox(crypto)
def init_rc4_sbox(crypto):
print("Initializing RC4 s-box")
# TODO: get rid of global variables
global RC4_CLIENT
global RC4_SERVER
RC4_CLIENT = RC4(crypto["server_decrypt_key"])
RC4_SERVER = RC4(crypto["client_decrypt_key"])
def rc4_decrypt(data, From="Client"):
if From == "Client":
return RC4_CLIENT.decrypt(data)
else:
return RC4_SERVER.decrypt(data)
def sign_certificate(cert, sign_len):
"""Signs the certificate with the private key"""
m = hashlib.md5()
m.update(cert)
m = m.digest() + b"\x00" + b"\xff"*45 + b"\x01"
m = int.from_bytes(m, "little")
d = int.from_bytes(TERM_PRIV_KEY["d"], "little")
n = int.from_bytes(TERM_PRIV_KEY["n"], "little")
s = pow(m, d, n)
return s.to_bytes(sign_len, "little")
| StarcoderdataPython |
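The `RC4` class above maintains per-session cipher state and key updates. Because RC4 is a keystream XOR, the same routine both encrypts and decrypts; a minimal standalone sketch of that symmetry (key and plaintext below are arbitrary illustrations):

```python
def rc4_keystream_xor(key, data):
    """Minimal RC4 (cf. the RC4 class above, minus key updates): key-schedule
    the s-box, then XOR the data against the generated keystream."""
    sbox = list(range(256))
    x = 0
    for i in range(256):
        x = (x + sbox[i] + key[i % len(key)]) % 256
        sbox[i], sbox[x] = sbox[x], sbox[i]
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + sbox[i]) % 256
        sbox[i], sbox[j] = sbox[j], sbox[i]
        out.append(byte ^ sbox[(sbox[i] + sbox[j]) % 256])
    return bytes(out)

# Applying the function twice with the same key round-trips the data.
ct = rc4_keystream_xor(b'session-key', b'client random')
pt = rc4_keystream_xor(b'session-key', ct)
```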
1636718 | import pandas as pd
dados = pd.read_csv('/home/laumzav/PycharmProjects/Python_study/AlCuOn/Pandas/aluguel.csv', sep=';')
dados.head(10)
dados.dtypes
tipos = list(dados['Tipo'].drop_duplicates())
residencial = ['Quitinete', 'Casa', 'Apartamento', 'Casa de Condomínio', 'Flat', 'Casa de Vila', 'Loft']
dados['Residencial'] = dados['Tipo'].isin(residencial) | StarcoderdataPython |
4833288 | # global
import abc
from typing import Optional, Union
# local
import ivy
class ArrayWithLosses(abc.ABC):
def cross_entropy(
self: ivy.Array,
pred: Union[ivy.Array, ivy.NativeArray],
axis: Optional[int] = -1,
epsilon: Optional[float] = 1e-7,
*,
out: Optional[ivy.Array] = None
) -> ivy.Array:
return ivy.cross_entropy(self._data, pred, axis=axis, epsilon=epsilon, out=out)
def binary_cross_entropy(
self: ivy.Array,
pred: Union[ivy.Array, ivy.NativeArray],
epsilon: Optional[float] = 1e-7,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
return ivy.binary_cross_entropy(self._data, pred, epsilon=epsilon, out=out)
def sparse_cross_entropy(
self: ivy.Array,
pred: Union[ivy.Array, ivy.NativeArray],
axis: Optional[int] = -1,
epsilon: Optional[float] = 1e-7,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
return ivy.sparse_cross_entropy(
self._data, pred, axis=axis, epsilon=epsilon, out=out
)
| StarcoderdataPython |
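The wrappers above delegate to ivy's loss functions, which clip predictions by `epsilon` before taking logs. A scalar, dependency-free sketch of that clipped cross-entropy (a simplified illustration, not ivy's exact implementation):

```python
import math

def cross_entropy(true_dist, pred_dist, epsilon=1e-7):
    """-sum(t * log(clip(p, epsilon, 1 - epsilon))) over one distribution;
    clipping guards against log(0) for degenerate predictions."""
    return -sum(t * math.log(min(max(p, epsilon), 1 - epsilon))
                for t, p in zip(true_dist, pred_dist))

# One-hot target on class 1; the loss reduces to -log(predicted prob).
loss = cross_entropy([0.0, 1.0, 0.0], [0.1, 0.8, 0.1])
```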
1839262 | ###################################################################################################################
#
# A P I B L U E P R I N T
#
# THIS IS THE BUSINESS LOGIC BLUEPRINT PACKAGED
#
###################################################################################################################
from flask import Blueprint, g, url_for
from ..auth import auth
from ..decorators import rate_limit
from ..errors import ValidationError, bad_request
api = Blueprint('api', __name__)
# This function will present a JSON showing endpoints accessible via API
def get_catalog():
return {
'callsigns_url': url_for('api.get_callsigns', _external=True)
}
# Basic error handling
@api.errorhandler(ValidationError)
def validation_error(e):
return bad_request(str(e))
# Basic HTTP error handling - Error 400 Bad Request
@api.errorhandler(400)
def bad_request_error(e):
return bad_request('invalid request')
# Basic Rate-Limiting. A before_request is called to force Rate-Limit from now onwards
@api.before_request
@auth.login_required
@rate_limit(limit=5, period=15)
def before_request():
pass
# Here we take care of HTTP headers after all internal processing and before send a response to the client
@api.after_request
def after_request(response):
if hasattr(g, 'headers'):
response.headers.extend(g.headers)
return response
# We do this import sentence here at the end of the file in order TO AVOID CIRCULAR DEPENDENCIES IN FLASK
# DO NOT PAY ATTENTION TO THIS FACT
from api.v_1_0.SelfCare import callsigns
| StarcoderdataPython |
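The blueprint above applies `rate_limit(limit=5, period=15)` in its `before_request` hook. A hedged, dependency-free sketch of a fixed-window limiter like the one imported from `..decorators` (the error type and window strategy here are assumptions, not the project's actual implementation):

```python
import time
from functools import wraps

def rate_limit(limit, period, clock=time.monotonic):
    """Allow at most `limit` calls per `period` seconds; raise otherwise."""
    def decorator(fn):
        window = {'start': None, 'count': 0}
        @wraps(fn)
        def wrapper(*args, **kwargs):
            now = clock()
            # Reset the window once `period` seconds have elapsed.
            if window['start'] is None or now - window['start'] >= period:
                window['start'], window['count'] = now, 0
            if window['count'] >= limit:
                raise RuntimeError('rate limit exceeded')
            window['count'] += 1
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@rate_limit(limit=2, period=60)
def handler():
    return 'ok'

calls = [handler(), handler()]
try:
    handler()
    blocked = False
except RuntimeError:
    blocked = True
```

A production limiter would key the counter per client and return an HTTP 429 response rather than raising, but the windowed-count logic is the same.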
1714411 | <reponame>Eszti/tuw-nlp
import datetime
import json
import sys
import traceback
import graphviz
import stanza
from flask import Flask, request
from graphviz import Source
import networkx as nx
from networkx.readwrite import json_graph
from tuw_nlp.grammar.text_to_4lang import TextTo4lang
from tuw_nlp.graph.fourlang import FourLang
from tuw_nlp.graph.utils import graph_to_pn
from tuw_nlp.text.pipeline import CachedStanzaPipeline, CustomStanzaPipeline
HOST = 'localhost'
PORT = 5006
app = Flask(__name__)
nlp = stanza.Pipeline('en')
nlp_de = CustomStanzaPipeline(processors='tokenize,pos,lemma,depparse')
text_to_4lang_en = TextTo4lang("en", "en_nlp_cache")
text_to_4lang_de = TextTo4lang("de", "de_nlp_cache")
# echo '0 Die Gebäudehöhe darf 6,5 m nicht überschreiten.' | python brise_nlp/plandok/get_attributes.py
def visualize(sentence):
dot = graphviz.Digraph()
dot.node("0", "ROOT", shape="box")
for token in sentence.tokens:
for word in token.words:
dot.node(str(word.id), word.text)
dot.edge(str(word.head), str(word.id),
label=word.deprel)
return dot
@app.route('/build', methods=['POST'])
def build():
ret_value = {"result": {"errors": None, "graph": None, "ud": None}}
data = request.get_json()
if len(data) == 0 or not data["text"]:
print("No input text found")
ret_value["result"]["errors"] = "No input text found"
sys.stdout.flush()
return json.dumps(ret_value)
print("Text to process: {0}".format(data))
try:
lang = data["lang"] if "lang" in data else "en"
text = data["text"]
substitute = data["method"] == "substitute"
depth = data["depth"]
append_zero_graph = data["append"]
if lang == "en":
fl_graphs = list(text_to_4lang_en(text))
g = fl_graphs[0]
for n in fl_graphs[1:]:
g = nx.compose(g, n)
fl = FourLang(g, 0)
if int(depth):
text_to_4lang_en.expand(
fl, depth=int(depth), substitute=substitute)
sen = nlp(text).sentences[0]
elif lang == "de":
fl_graphs = list(text_to_4lang_de(text))
g = fl_graphs[0]
for n in fl_graphs[1:]:
g = nx.compose(g, n)
fl = FourLang(g, 0)
if int(depth):
text_to_4lang_de.expand(
fl, depth=int(depth), substitute=substitute)
sen = nlp_de(text).sentences[0]
ret_value["result"]["ud"] = visualize(sen).source
if fl:
if append_zero_graph:
fl.append_zero_paths()
ret_value["result"]["graph"] = fl.to_dot()
except Exception as e:
traceback.print_exc()
ret_value["result"]["errors"] = str(e)
print("Returning: {0}".format(ret_value))
sys.stdout.flush()
return json.dumps(ret_value)
@app.route('/get_definition', methods=['POST'])
def get_definition():
ret_value = {"result": {"errors": None, "def": None}}
data = request.get_json()
if len(data) == 0 or not data["text"]:
print("No input text found")
ret_value["result"]["errors"] = "No input text found"
sys.stdout.flush()
return json.dumps(ret_value)
print("Text to process: {0}".format(data))
try:
text = data["text"]
lang = data["lang"]
if lang == "de":
definition = text_to_4lang_de.lexicon.get_definition(text)
else:
definition = text_to_4lang_en.lexicon.get_definition(text)
if definition:
ret_value["result"]["def"] = definition
except Exception as e:
traceback.print_exc()
ret_value["result"]["errors"] = str(e)
print("Returning: {0}".format(ret_value))
sys.stdout.flush()
return json.dumps(ret_value)
if __name__ == '__main__':
app.run(debug=True, host=HOST, port=PORT)
# Repository: imperial-genomics-facility/IGFPortal
import typing
import logging
from typing import Tuple, Any
from .. import db
from ..models import RawSeqrun, SampleSheetModel
def fetch_samplesheet_for_seqrun(seqrun_id: str) -> Any:
try:
result = \
db.session.\
query(
SampleSheetModel.samplesheet_tag,
SampleSheetModel.csv_data).\
join(RawSeqrun, RawSeqrun.samplesheet_id==SampleSheetModel.samplesheet_id).\
filter(RawSeqrun.raw_seqrun_igf_id==seqrun_id).\
filter(SampleSheetModel.status=='PASS').\
filter(SampleSheetModel.validation_time >= SampleSheetModel.update_time).\
one_or_none()
return result
except Exception as e:
raise ValueError("Failed to fetch samplesheet for seqrun, error: {0}".format(e))
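The query above only accepts samplesheets whose status is `'PASS'` and whose validation is at least as recent as the last edit. The same freshness rule, applied to plain dictionaries (all rows here are made up, no database required):

```python
# Hypothetical samplesheet rows mimicking the columns the query filters on.
rows = [
    {"status": "PASS", "validation_time": 2, "update_time": 1, "tag": "fresh"},
    {"status": "PASS", "validation_time": 1, "update_time": 2, "tag": "stale"},
    {"status": "FAIL", "validation_time": 3, "update_time": 1, "tag": "failed"},
]
# Keep only sheets that passed validation AFTER their latest edit.
valid = [r["tag"] for r in rows
         if r["status"] == "PASS" and r["validation_time"] >= r["update_time"]]
```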
def check_and_filter_raw_seqruns_after_checking_samplesheet(
raw_seqrun_igf_ids: list) -> \
Tuple[list, list]:
try:
id_list = list()
run_list = list()
results = \
db.session.\
query(RawSeqrun.raw_seqrun_id, RawSeqrun.raw_seqrun_igf_id).\
join(SampleSheetModel, SampleSheetModel.samplesheet_id==RawSeqrun.samplesheet_id).\
filter(SampleSheetModel.status=='PASS').\
filter(SampleSheetModel.validation_time >= SampleSheetModel.update_time).\
filter(RawSeqrun.raw_seqrun_igf_id.in_(raw_seqrun_igf_ids)).\
all()
id_list = [
i[0] if isinstance(i, tuple) else i
for i in results]
run_list = [
i[1] if isinstance(i, tuple) else i
for i in results]
return id_list, run_list
except Exception as e:
raise ValueError("Failed to filter seqruns, error: {0}".format(e))
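The two comprehensions above split each `(raw_seqrun_id, raw_seqrun_igf_id)` row tuple into parallel lists; on plain tuples (hypothetical data) they behave like:

```python
# Hypothetical query rows: (raw_seqrun_id, raw_seqrun_igf_id) tuples.
results = [(1, "RUN_A"), (2, "RUN_B"), (3, "RUN_C")]
# First element per row -> internal ids, second element -> IGF run ids.
id_list = [i[0] if isinstance(i, tuple) else i for i in results]
run_list = [i[1] if isinstance(i, tuple) else i for i in results]
```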
def change_raw_run_status(
run_list: list,
status: str) -> None:
try:
db.session.\
query(RawSeqrun).\
filter(RawSeqrun.raw_seqrun_igf_id.in_(run_list)).\
update({'status': status}, synchronize_session='fetch')
db.session.commit()
except Exception as e:
db.session.rollback()
raise ValueError("Failed to change raw run status, error: {0}".format(e))
6643035 | """
Inference script.
"""
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
import argparse
import json
import os
from StreamManagerApi import MxDataInput
from StreamManagerApi import StreamManagerApi
SUPPORT_IMG_SUFFIX = (".jpg", ".JPG", ".jpeg", ".JPEG")
current_path = os.path.abspath(os.path.dirname(__file__))
parser = argparse.ArgumentParser(
description="SSD Ghost infer example.",
fromfile_prefix_chars="@",
)
parser.add_argument(
"--pipeline_path",
type=str,
help="mxManufacture pipeline file path",
default=os.path.join(current_path, "../conf/ssd_ghost.pipeline"),
)
parser.add_argument(
"--stream_name",
type=str,
help="Infer stream name in the pipeline config file",
default="detection",
)
parser.add_argument(
"--img_path",
type=str,
help="Image pathname, can be a image file or image directory",
default=os.path.join(current_path, "../coco/val2017"),
)
parser.add_argument(
"--res_path",
type=str,
help="Directory to store the inferred result",
required=False,
)
args = parser.parse_args()
def infer():
"""Infer images by DVPP + OM. """
pipeline_path = args.pipeline_path
stream_name = args.stream_name.encode()
img_path = os.path.abspath(args.img_path)
res_dir_name = args.res_path
stream_manager_api = StreamManagerApi()
ret = stream_manager_api.InitManager()
if ret != 0:
print("Failed to init Stream manager, ret=%s" % str(ret))
exit()
# create streams by pipeline config file
with open(pipeline_path, "rb") as f:
pipeline_str = f.read()
ret = stream_manager_api.CreateMultipleStreams(pipeline_str)
if ret != 0:
print("Failed to create Stream, ret=%s" % str(ret))
exit()
in_plugin_id = 0
# Construct the input of the stream
data_input = MxDataInput()
if os.path.isfile(img_path) and img_path.endswith(SUPPORT_IMG_SUFFIX):
file_list = [os.path.abspath(img_path)]
else:
file_list = os.listdir(img_path)
file_list = [
os.path.join(img_path, img)
for img in file_list
if img.endswith(SUPPORT_IMG_SUFFIX)
]
if not res_dir_name:
res_dir_name = os.path.join(".", "infer_res")
print(f"res_dir_name={res_dir_name}")
os.makedirs(res_dir_name, exist_ok=True)
pic_infer_dict_list = []
for file_name in file_list:
with open(file_name, "rb") as f:
img_data = f.read()
if not img_data:
print(f"read empty data from img:{file_name}")
continue
data_input.data = img_data
unique_id = stream_manager_api.SendDataWithUniqueId(
stream_name, in_plugin_id, data_input
)
if unique_id < 0:
print("Failed to send data to stream.")
exit()
infer_result = stream_manager_api.GetResultWithUniqueId(
stream_name, unique_id, 3000
)
if infer_result.errorCode != 0:
print(
"GetResultWithUniqueId error. errorCode=%d, errorMsg=%s"
% (infer_result.errorCode, infer_result.data.decode())
)
exit()
pic_infer_dict_list.extend(
parse_img_infer_result(file_name, infer_result)
)
print(f"Inferred image:{file_name} success!")
with open(os.path.join(res_dir_name, "det_result.json"), "w") as fw:
fw.write(json.dumps(pic_infer_dict_list))
stream_manager_api.DestroyAllStreams()
def parse_img_infer_result(file_name, infer_result):
"""
Inference result
"""
obj_list_obj = json.loads(infer_result.data.decode())
if not obj_list_obj:
return []
obj_list = obj_list_obj['MxpiObject']
det_obj_list = []
for o in obj_list:
x0, y0, x1, y1 = (
o.get("x0"),
o.get("y0"),
o.get("x1"),
o.get("y1"),
)
bbox_for_map = [x0, y0, x1 - x0, y1 - y0]
score = o.get("classVec")[0].get("confidence")
category_id = o.get("classVec")[0].get("classId")
img_fname_without_suffix = os.path.basename(file_name).split(".")[0]
image_id = img_fname_without_suffix
det_obj_list.append(
dict(
image_id=image_id,
bbox=bbox_for_map,
category_id=category_id,
score=score,
)
)
return det_obj_list
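The bounding-box conversion above turns corner coordinates `(x0, y0, x1, y1)` into COCO-style `[x, y, width, height]`; a minimal sketch with a made-up detector object in the same shape the parser consumes:

```python
# Hypothetical MxpiObject entry, matching the keys read by the parser above.
o = {"x0": 10.0, "y0": 20.0, "x1": 110.0, "y1": 70.0,
     "classVec": [{"confidence": 0.9, "classId": 3}]}
x0, y0, x1, y1 = o["x0"], o["y0"], o["x1"], o["y1"]
bbox_for_map = [x0, y0, x1 - x0, y1 - y0]  # COCO-style [x, y, width, height]
score = o["classVec"][0]["confidence"]
category_id = o["classVec"][0]["classId"]
```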
if __name__ == "__main__":
infer()
from pymongo import MongoClient
client = MongoClient("localhost", 27017)
gobDb = client["GoBDB"]
jobCollection = gobDb["Jobs"]
categories = jobCollection.aggregate([
{
'$project': {
'category': '$attributes.category_name',
'title': '$attributes.title'
}
}
])
print(list(categories))
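The `$project` stage in the aggregation above only reshapes each document; an equivalent transformation on plain dictionaries (hypothetical sample documents, no running MongoDB needed) looks like:

```python
# Hypothetical job documents in the shape the pipeline above expects.
docs = [
    {"_id": 1, "attributes": {"category_name": "IT", "title": "Developer"}},
    {"_id": 2, "attributes": {"category_name": "Sales", "title": "Account Rep"}},
]
# $project keeps _id by default and builds the two renamed fields.
projected = [
    {"_id": d["_id"],
     "category": d["attributes"]["category_name"],
     "title": d["attributes"]["title"]}
    for d in docs
]
```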
import sys
environment = sys.argv[1]
region_name = sys.argv[2]
app_code = sys.argv[3]
version = sys.argv[4]
def regionCode(region_name):
return f"{region_name[0:1]}{region_name[-1:]}"
def resourceEnv(app_code, version, env):
return f"{app_code}-env{version}-{env}"
def resourceEnvCompact(app_code, version, env):
return f"{app_code}env{version}{env}"
def outputVariable(output_name, value):
print(f"::set-output name={output_name}::{value}")
key_vault_name = f"kv-{resourceEnv(app_code, version, environment)}"
registry_name = f"acr{resourceEnvCompact(app_code, version, environment)}"
registry_group_name = f"rg-{resourceEnv(app_code, version, environment)}"
outputVariable("key_vault_name", key_vault_name)
outputVariable("registry_name", registry_name)
outputVariable("registry_group_name", registry_group_name)
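A stand-alone sanity check of the naming scheme above, with hypothetical inputs (the real values come from `sys.argv`):

```python
# Re-stated naming helpers, snake_cased, for an isolated check.
def region_code(region_name):
    # First plus last character, e.g. "westeurope" -> "we".
    return f"{region_name[0:1]}{region_name[-1:]}"

def resource_env(app_code, version, env):
    return f"{app_code}-env{version}-{env}"

def resource_env_compact(app_code, version, env):
    return f"{app_code}env{version}{env}"

# Derived resource names, mirroring the script above with made-up arguments.
key_vault_name = f"kv-{resource_env('myapp', '1', 'dev')}"
registry_name = f"acr{resource_env_compact('myapp', '1', 'dev')}"
```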
# Repository: microsoft/SpeedyRec
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import os
import time
import logging
import random
import torch
from tqdm.auto import tqdm
import torch.optim as optim
import sys, traceback
from torch.nn.parallel import DistributedDataParallel as DDP
import torch.distributed as dist
import torch.multiprocessing as mp
from pathlib import Path
import numpy as np
from operator import itemgetter
from LanguageModels.SpeedyModel import SpeedyModelForRec
from LanguageModels.configuration_tnlrv3 import TuringNLRv3Config
from .utils import (setuplogging, init_process, cleanup_process, warmup_linear, init_config, dump_args, get_device,
get_barrier, only_on_main_process, check_args_environment)
from .streaming import get_files
from .dataloader import DataLoaderTrain
from .preprocess import read_news, check_preprocess_result
from .speedyfeed import SpeedyFeed
def ddp_train(args):
'''
Distributed training
'''
# os.environ['MASTER_ADDR'] = 'localhost'
# os.environ['MASTER_PORT'] = '12355'
setuplogging()
Path(args.model_dir).mkdir(parents=True, exist_ok=True)
args = check_args_environment(args)
logging.info('-----------start train------------')
if args.world_size > 1:
global_cache = mp.Manager().dict()
news_idx_incache = mp.Manager().dict()
global_prefetch_step = mp.Manager().list([0] * args.world_size)
global_prefetch_step2 = mp.Manager().list([0] * args.world_size)
data_files = mp.Manager().list([])
end = mp.Manager().Value('b', False)
mp.spawn(train,
args=(args, global_cache, news_idx_incache, global_prefetch_step, global_prefetch_step2, end, data_files),
nprocs=args.world_size,
join=True)
else:
global_cache = mp.Manager().dict()
news_idx_incache = mp.Manager().dict()
global_prefetch_step = mp.Manager().list([0] * args.world_size)
global_prefetch_step2 = mp.Manager().list([0] * args.world_size)
data_files = mp.Manager().list([])
end = mp.Manager().Value('b', False)
train(0, args, global_cache, news_idx_incache, global_prefetch_step, global_prefetch_step2, end, data_files, dist_training=False)
def train(local_rank,
args,
cache,
news_idx_incache,
prefetch_step,
prefetch_step2,
end,
data_files,
dist_training=True):
'''
Args:
local_rank(int): the rank of current process
args: parameters
cache(shared list): global shared cache, the vec will be storaged in it as numpy.array
news_idx_incache(shared dict): {news_id:(index in cache, encoded step)}
prefetch_step(shared list): sync the dataloaders
end(shared bool): If it is True, stop all data processes
data_files(shared list): the paths of train data, storaged in a shared list
'''
setuplogging()
try:
if dist_training:
init_process(local_rank, args.world_size)
device = get_device()
barrier = get_barrier(dist_training)
logging.info('loading model: {}'.format(args.bert_model))
args, config = init_config(args, TuringNLRv3Config)
if args.pretrained_model_path != 'None':
bert_model = SpeedyModelForRec.from_pretrained(
args.pretrained_model_path,
from_tf=bool('.ckpt' in args.pretrained_model_path),
config=config)
else:
bert_model = SpeedyModelForRec(config)
if args.freeze_bert:
logging.info('Freeze the parameters of {}'.format(args.bert_model))
for param in bert_model.parameters():
param.requires_grad = False
# choose which block trainable
for index, layer in enumerate(bert_model.bert.encoder.layer):
if index in args.finetune_blocks:
logging.info(f"finetune block {index}")
for param in layer.parameters():
param.requires_grad = True
root_data_dir = os.path.join(args.root_data_dir,'traindata')
with only_on_main_process(local_rank, barrier) as need:
if need:
check_preprocess_result(args,root_data_dir)
logging.info('finish the preprocess of docfeatures')
news_features, category_dict, subcategory_dict = read_news(args,root_data_dir)
logging.info('news_num:{}'.format(len(news_features)))
#init the news_idx_incache and data_paths
with only_on_main_process(local_rank, barrier) as need:
if need:
temp_cache = {}
temp_cache_state = {}
for idx, news in enumerate(news_features.keys()):
temp_cache_state[news] = [idx, -args.max_step_in_cache]
temp_cache[idx] = [0.0]*args.news_dim
if len(temp_cache) == 100000:
news_idx_incache.update(temp_cache_state)
cache.update(temp_cache)
temp_cache_state = {}
temp_cache = {}
news_idx_incache.update(temp_cache_state)
cache.update(temp_cache)
data_paths = get_files(dirname=os.path.join(args.root_data_dir,'traindata'),
filename_pat=args.filename_pat)
data_paths.sort()
dump_args(args)
model = SpeedyFeed(args, bert_model, len(category_dict),
len(subcategory_dict))
model = model.to(device)
if dist_training:
ddp_model = DDP(model,
device_ids=[local_rank],
output_device=local_rank,
find_unused_parameters=True)
else:
ddp_model = model
rest_param = filter(
lambda x: id(x) not in list(map(id, bert_model.parameters())),
ddp_model.parameters())
optimizer = optim.Adam([{
'params': bert_model.parameters(),
'lr': args.pretrain_lr * warmup_linear(args, 1)
}, {
'params': rest_param,
'lr': args.lr * warmup_linear(args, 1)
}])
logging.info('Training...')
start_time = time.time()
global_step = 0
for ep in range(args.epochs):
with only_on_main_process(local_rank, barrier) as need:
# data_files.clear()
if need:
while len(data_files) > 0:
data_files.pop()
data_files.extend(data_paths)
random.seed(ep)
random.shuffle(data_files)
dataloader = DataLoaderTrain(
args=args,
data_files=data_files,
news_idx_incache=news_idx_incache,
prefetch_step=prefetch_step,
prefetch_step2=prefetch_step2,
end=end,
local_rank=local_rank,
world_size=args.world_size,
news_features=news_features,
enable_prefetch=args.enable_prefetch,
enable_prefetch_stream=args.enable_prefetch_stream,
global_step=global_step)
loss = 0.0
usernum = 0
for cnt, batch in tqdm(enumerate(dataloader)):
address_cache, update_cache, batch = batch
usernum += batch[-3].size(0)
global_step += 1
if args.enable_gpu:
segments, token_masks, seg_masks, key_position, fre_cnt, elements, batch_hist, batch_mask, batch_negs = (
x.cuda(non_blocking=True) if x is not None else x
for x in batch)
else:
segments, token_masks, seg_masks, key_position, fre_cnt, elements, batch_hist, batch_mask, batch_negs = batch
#Get news vecs from cache.
if address_cache is not None:
cache_vec = itemgetter(*address_cache)(cache)
cache_vec = torch.FloatTensor(
cache_vec).cuda(device=device, non_blocking=True)
if len(cache_vec.shape)==1:
cache_vec = cache_vec.unsqueeze(0)
else:
cache_vec = None
bz_loss,encode_vecs = ddp_model(segments,token_masks, seg_masks, elements, cache_vec,
batch_hist, batch_mask, batch_negs,
key_position, fre_cnt)
loss += bz_loss.item()
optimizer.zero_grad()
bz_loss.backward()
optimizer.step()
#update the cache
if args.max_step_in_cache > 0:
encode_vecs = encode_vecs.detach().cpu().numpy()
temp_cache = {}
for inx, vec in zip(update_cache, encode_vecs):
temp_cache[inx] = vec
cache.update(temp_cache)
optimizer.param_groups[0]['lr'] = args.pretrain_lr*warmup_linear(args,global_step+1) #* world_size
optimizer.param_groups[1]['lr'] = args.lr*warmup_linear(args,global_step+1) #* world_size
if global_step % args.log_steps == 0:
logging.info(
'[{}] cost_time:{} step:{}, usernum: {}, train_loss: {:.5f}, lr:{}, pretrain_lr:{}'.format(
local_rank, time.time() - start_time, global_step, usernum, loss / args.log_steps,
optimizer.param_groups[1]['lr'], optimizer.param_groups[0]['lr']))
loss = 0.0
# save model minibatch
if local_rank == 0 and global_step % args.save_steps == 0:
ckpt_path = os.path.join(args.model_dir, f'{args.savename}-epoch-{ep + 1}-{global_step}.pt')
torch.save(
{
'model_state_dict': model.state_dict(),
'optimizer': optimizer.state_dict(),
'category_dict': category_dict,
'subcategory_dict': subcategory_dict
}, ckpt_path)
logging.info(f"Model saved to {ckpt_path}")
logging.info('epoch:{}, usernum:{}, time:{}'.format(ep + 1, usernum, time.time() - start_time))
# save model after an epoch
if local_rank == 0:
ckpt_path = os.path.join(args.model_dir, '{}-epoch-{}.pt'.format(args.savename, ep + 1))
torch.save(
{
'model_state_dict': model.state_dict(),
'optimizer': optimizer.state_dict(),
'category_dict': category_dict,
'subcategory_dict': subcategory_dict
}, ckpt_path)
logging.info(f"Model saved to {ckpt_path}")
if dist_training:
cleanup_process()
except:
error_type, error_value, error_trace = sys.exc_info()
traceback.print_tb(error_trace)
logging.info(error_value)
raise
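The cache-update pattern in `train` above (collecting re-encoded vectors into a local `temp_cache` before a single `cache.update`) amortizes the cost of writing to the shared manager dict. A sketch without multiprocessing, with made-up vectors:

```python
# Shared cache keyed by news index; values are plain lists so they could live
# in a multiprocessing.Manager().dict() (all values here are made up).
cache = {0: [0.0, 0.0], 1: [0.0, 0.0], 2: [0.0, 0.0]}
update_cache = [0, 2]                   # indices whose vectors were re-encoded
encode_vecs = [[0.1, 0.2], [0.3, 0.4]]  # freshly encoded news vectors
temp_cache = {}
for inx, vec in zip(update_cache, encode_vecs):
    temp_cache[inx] = vec
cache.update(temp_cache)                # one batched update, not many writes
```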
1746827 | """Deep Pictorial Gaze architecture."""
from typing import Dict
import numpy as np
import scipy
import tensorflow as tf
from core import BaseDataSource, BaseModel
from datasources import UnityEyes
import util.gaze
class DPG(BaseModel):
"""Deep Pictorial Gaze architecture as introduced in [Park et al. ECCV'18]."""
def __init__(self, tensorflow_session=None, first_layer_stride=2, num_modules=3,
num_feature_maps=32, growth_rate=8, extra_tags=None, **kwargs):
"""Specify DPG-specific parameters."""
self._hg_first_layer_stride = first_layer_stride
self._hg_num_modules = num_modules
self._hg_num_feature_maps = num_feature_maps
self._dn_growth_rate = growth_rate
# Avoid a mutable default argument; fall back to an empty list.
self._extra_tags = extra_tags if extra_tags is not None else []
# Call parent class constructor
super().__init__(tensorflow_session, **kwargs)
_hg_first_layer_stride = 2
_hg_num_modules = 3
_hg_num_feature_maps = 32
_hg_num_residual_blocks = 1
_hg_num_gazemaps = 2
_dn_growth_rate = 8
_dn_compression_factor = 0.5
_dn_num_layers_per_block = (4, 4, 4, 4)
_dn_num_dense_blocks = len(_dn_num_layers_per_block)
@property
def identifier(self):
"""Identifier for model based on data sources and parameters."""
first_data_source = next(iter(self._train_data.values()))
input_tensors = first_data_source.output_tensors
if self._data_format == 'NHWC':
_, eh, ew, _ = input_tensors['eye'].shape.as_list()
else:
_, _, eh, ew = input_tensors['eye'].shape.as_list()
return 'DPG_i%dx%d_f%dx%d_n%d_m%d_k%d_%s' % (
ew, eh,
int(ew / self._hg_first_layer_stride),
int(eh / self._hg_first_layer_stride),
self._hg_num_feature_maps, self._hg_num_modules,
self._dn_growth_rate,
'-'.join(self._extra_tags) if len(self._extra_tags) > 0 else '',
)
def train_loop_pre(self, current_step):
"""Run this at beginning of training loop."""
# Step learning rate decay
multiplier = np.power(0.1, int(current_step / 10000))
self._tensorflow_session.run(self.assign_learning_rate_multiplier, feed_dict={
self.learning_rate_multiplier_placeholder: multiplier,
})
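The step decay in `train_loop_pre` above multiplies the base learning rate by 0.1 every 10 000 steps; the schedule itself is a one-liner, sketched here without TensorFlow:

```python
def lr_multiplier(current_step, drop=0.1, every=10000):
    # Matches np.power(0.1, int(current_step / 10000)) for non-negative steps.
    return drop ** (current_step // every)

print(lr_multiplier(0))      # 1.0
print(lr_multiplier(9999))   # 1.0
print(lr_multiplier(10000))  # 0.1
```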
_column_of_ones = None
_column_of_zeros = None
def _augment_training_images(self, images, mode):
if mode == 'test':
return images
with tf.variable_scope('augment'):
if self._data_format == 'NCHW':
images = tf.transpose(images, perm=[0, 2, 3, 1])
n, h, w, _ = images.shape.as_list()
if self._column_of_ones is None:
self._column_of_ones = tf.ones((n, 1))
self._column_of_zeros = tf.zeros((n, 1))
transforms = tf.concat([
self._column_of_ones,
self._column_of_zeros,
tf.truncated_normal((n, 1), mean=0, stddev=.05*w),
self._column_of_zeros,
self._column_of_ones,
tf.truncated_normal((n, 1), mean=0, stddev=.05*h),
self._column_of_zeros,
self._column_of_zeros,
], axis=1)
images = tf.contrib.image.transform(images, transforms, interpolation='BILINEAR')
if self._data_format == 'NCHW':
images = tf.transpose(images, perm=[0, 3, 1, 2])
return images
def build_model(self, data_sources: Dict[str, BaseDataSource], mode: str):
"""Build model."""
data_source = next(iter(data_sources.values()))
input_tensors = data_source.output_tensors
x = input_tensors['eye']
y1 = input_tensors['gazemaps'] if 'gazemaps' in input_tensors else None
y2 = input_tensors['gaze'] if 'gaze' in input_tensors else None
with tf.variable_scope('input_data'):
# self.summary.feature_maps('eyes', x, data_format=self._data_format_longer)
if y1 is not None:
self.summary.feature_maps('gazemaps', y1, data_format=self._data_format_longer)
outputs = {}
loss_terms = {}
metrics = {}
# Lightly augment training data
x = self._augment_training_images(x, mode)
with tf.variable_scope('hourglass'):
# Prepare for Hourglass by downscaling via conv
with tf.variable_scope('pre'):
n = self._hg_num_feature_maps
x = self._apply_conv(x, num_features=n, kernel_size=7,
stride=self._hg_first_layer_stride)
x = tf.nn.relu(self._apply_bn(x))
x = self._build_residual_block(x, n, 2*n, name='res1')
x = self._build_residual_block(x, 2*n, n, name='res2')
# Hourglass blocks
x_prev = x
gmap = None
for i in range(self._hg_num_modules):
with tf.variable_scope('hg_%d' % (i + 1)):
x = self._build_hourglass(x, steps_to_go=4, num_features=self._hg_num_feature_maps)
x, gmap = self._build_hourglass_after(
x_prev, x, do_merge=(i < (self._hg_num_modules - 1)),
)
x_prev = x
if y1 is not None:
# Cross-entropy loss
metrics['gazemaps_ce'] = -tf.reduce_mean(tf.reduce_sum(
y1 * tf.log(tf.clip_by_value(gmap, 1e-10, 1.0)), # avoid NaN
axis=[1, 2, 3]))
# metrics['gazemaps_ce'] = tf.losses.softmax_cross_entropy(
# tf.reshape(y1, (self._batch_size, -1)),
# tf.reshape(gmap, (self._batch_size, -1)),
# loss_collection=None,
# )
x = gmap
outputs['gazemaps'] = gmap
self.summary.feature_maps('bottleneck', gmap, data_format=self._data_format_longer)
with tf.variable_scope('densenet'):
# DenseNet blocks to regress to gaze
for i in range(self._dn_num_dense_blocks):
with tf.variable_scope('block%d' % (i + 1)):
x = self._apply_dense_block(x,
num_layers=self._dn_num_layers_per_block[i])
if i == self._dn_num_dense_blocks - 1:
break
with tf.variable_scope('trans%d' % (i + 1)):
x = self._apply_transition_layer(x)
# Global average pooling
with tf.variable_scope('post'):
x = self._apply_bn(x)
x = tf.nn.relu(x)
if self._data_format == 'NCHW':
x = tf.reduce_mean(x, axis=[2, 3])
else:
x = tf.reduce_mean(x, axis=[1, 2])
x = tf.contrib.layers.flatten(x)
# Output layer
with tf.variable_scope('output'):
x = self._apply_fc(x, 2)
outputs['gaze'] = x
if y2 is not None:
metrics['gaze_mse'] = tf.reduce_mean(tf.squared_difference(x, y2))
metrics['gaze_ang'] = util.gaze.tensorflow_angular_error_from_pitchyaw(y2, x)
# Combine two loss terms
if y1 is not None and y2 is not None:
loss_terms['combined_loss'] = 1e-5*metrics['gazemaps_ce'] + metrics['gaze_mse']
# Define outputs
return outputs, loss_terms, metrics
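The combined loss above, `1e-5 * gazemaps_ce + gaze_mse`, down-weights the gazemap cross-entropy so the gaze regression error dominates. A plain-Python sketch with made-up loss values:

```python
def combined_loss(gazemaps_ce, gaze_mse, ce_weight=1e-5):
    # Cross-entropy on gazemaps is down-weighted; gaze MSE dominates.
    return ce_weight * gazemaps_ce + gaze_mse

# Even a large cross-entropy term contributes little to the total.
loss = combined_loss(1000.0, 0.5)
```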
def _apply_conv(self, tensor, num_features, kernel_size=3, stride=1):
return tf.layers.conv2d(
tensor,
num_features,
kernel_size=kernel_size,
strides=stride,
padding='SAME',
kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.01),
kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-4),
bias_initializer=tf.zeros_initializer(),
data_format=self._data_format_longer,
name='conv',
)
def _apply_fc(self, tensor, num_outputs):
return tf.layers.dense(
tensor,
num_outputs,
use_bias=True,
kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.01),
kernel_regularizer=tf.contrib.layers.l2_regularizer(1e-4),
bias_initializer=tf.zeros_initializer(),
name='fc',
)
def _apply_pool(self, tensor, kernel_size=3, stride=2):
tensor = tf.layers.max_pooling2d(
tensor,
pool_size=kernel_size,
strides=stride,
padding='SAME',
data_format=self._data_format_longer,
name='pool',
)
return tensor
def _apply_bn(self, tensor):
return tf.contrib.layers.batch_norm(
tensor,
scale=True,
center=True,
is_training=self.use_batch_statistics,
trainable=True,
data_format=self._data_format,
updates_collections=None,
)
def _build_residual_block(self, x, num_in, num_out, name='res_block'):
with tf.variable_scope(name):
half_num_out = max(int(num_out/2), 1)
c = x
with tf.variable_scope('conv1'):
c = tf.nn.relu(self._apply_bn(c))
c = self._apply_conv(c, num_features=half_num_out, kernel_size=1, stride=1)
with tf.variable_scope('conv2'):
c = tf.nn.relu(self._apply_bn(c))
c = self._apply_conv(c, num_features=half_num_out, kernel_size=3, stride=1)
with tf.variable_scope('conv3'):
c = tf.nn.relu(self._apply_bn(c))
c = self._apply_conv(c, num_features=num_out, kernel_size=1, stride=1)
with tf.variable_scope('skip'):
if num_in == num_out:
s = tf.identity(x)
else:
s = self._apply_conv(x, num_features=num_out, kernel_size=1, stride=1)
x = c + s
return x
def _build_hourglass(self, x, steps_to_go, num_features, depth=1):
with tf.variable_scope('depth%d' % depth):
# Upper branch
up1 = x
for i in range(self._hg_num_residual_blocks):
up1 = self._build_residual_block(up1, num_features, num_features,
name='up1_%d' % (i + 1))
# Lower branch
low1 = self._apply_pool(x, kernel_size=2, stride=2)
for i in range(self._hg_num_residual_blocks):
low1 = self._build_residual_block(low1, num_features, num_features,
name='low1_%d' % (i + 1))
# Recursive
low2 = None
if steps_to_go > 1:
low2 = self._build_hourglass(low1, steps_to_go - 1, num_features, depth=depth+1)
else:
low2 = low1
for i in range(self._hg_num_residual_blocks):
low2 = self._build_residual_block(low2, num_features, num_features,
name='low2_%d' % (i + 1))
# Additional residual blocks
low3 = low2
for i in range(self._hg_num_residual_blocks):
low3 = self._build_residual_block(low3, num_features, num_features,
name='low3_%d' % (i + 1))
# Upsample
if self._data_format == 'NCHW': # convert to NHWC
low3 = tf.transpose(low3, (0, 2, 3, 1))
up2 = tf.image.resize_bilinear(
low3,
up1.shape[1:3] if self._data_format == 'NHWC' else up1.shape[2:4],
align_corners=True,
)
if self._data_format == 'NCHW': # convert back from NHWC
up2 = tf.transpose(up2, (0, 3, 1, 2))
return up1 + up2
def _build_hourglass_after(self, x_prev, x_now, do_merge=True):
with tf.variable_scope('after'):
for j in range(self._hg_num_residual_blocks):
x_now = self._build_residual_block(x_now, self._hg_num_feature_maps,
self._hg_num_feature_maps,
name='after_hg_%d' % (j + 1))
x_now = self._apply_conv(x_now, self._hg_num_feature_maps, kernel_size=1, stride=1)
x_now = self._apply_bn(x_now)
x_now = tf.nn.relu(x_now)
with tf.variable_scope('gmap'):
gmap = self._apply_conv(x_now, self._hg_num_gazemaps, kernel_size=1, stride=1)
x_next = x_now
if do_merge:
with tf.variable_scope('merge'):
with tf.variable_scope('gmap'):
x_gmaps = self._apply_conv(gmap, self._hg_num_feature_maps, kernel_size=1, stride=1)
with tf.variable_scope('x'):
x_now = self._apply_conv(x_now, self._hg_num_feature_maps, kernel_size=1, stride=1)
x_next += x_prev + x_gmaps
# Perform softmax on gazemaps
if self._data_format == 'NCHW':
n, c, h, w = gmap.shape.as_list()
gmap = tf.reshape(gmap, (n, -1))
gmap = tf.nn.softmax(gmap)
gmap = tf.reshape(gmap, (n, c, h, w))
else:
n, h, w, c = gmap.shape.as_list()
gmap = tf.transpose(gmap, perm=[0, 3, 1, 2])
gmap = tf.reshape(gmap, (n, -1))
gmap = tf.nn.softmax(gmap)
gmap = tf.reshape(gmap, (n, c, h, w))
gmap = tf.transpose(gmap, perm=[0, 2, 3, 1])
return x_next, gmap
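The block above flattens each sample's gazemaps to one vector, applies softmax, and reshapes back, i.e. a softmax over all spatial positions and channels jointly. The core operation, sketched in plain Python on a made-up flattened map:

```python
import math

def softmax(values):
    # Subtract the max for numerical stability, as softmax implementations do.
    m = max(values)
    exps = [math.exp(v - m) for v in values]
    s = sum(exps)
    return [e / s for e in exps]

# One sample's gazemap flattened to a vector (values are made up), mirroring
# the tf.reshape -> tf.nn.softmax -> tf.reshape sequence above.
flat = [0.5, 1.5, -0.2, 3.0]
probs = softmax(flat)
```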
def _apply_dense_block(self, x, num_layers):
assert isinstance(num_layers, int) and num_layers > 0
c_index = 1 if self._data_format == 'NCHW' else 3
x_prev = x
for i in range(num_layers):
with tf.variable_scope('layer%d' % (i + 1)):
n = x.shape.as_list()[c_index]
with tf.variable_scope('bottleneck'):
x = self._apply_composite_function(x,
num_features=min(n, 4*self._dn_growth_rate),
kernel_size=1)
with tf.variable_scope('composite'):
x = self._apply_composite_function(x, num_features=self._dn_growth_rate,
kernel_size=3)
if self._data_format == 'NCHW':
x = tf.concat([x, x_prev], axis=1)
else:
x = tf.concat([x, x_prev], axis=-1)
x_prev = x
return x
def _apply_transition_layer(self, x):
c_index = 1 if self._data_format == 'NCHW' else 3
x = self._apply_composite_function(
x, num_features=int(self._dn_compression_factor * x.shape.as_list()[c_index]),
kernel_size=1)
x = tf.layers.average_pooling2d(x, pool_size=2, strides=2, padding='valid',
data_format=self._data_format_longer)
return x
def _apply_composite_function(self, x, num_features=_dn_growth_rate, kernel_size=3):
x = self._apply_bn(x)
x = tf.nn.relu(x)
x = self._apply_conv(x, num_features=num_features, kernel_size=kernel_size, stride=1)
return x
# Repository: CONABIO-audio/irekua-dev-tools
import click
from irekua_dev_tools.utils import load_config
from irekua_dev_tools.utils import get_working_directory
from irekua_dev_tools.utils import load_environment_variables
from irekua_dev_tools.utils import load_repository_info
from . import git
from . import dev
from . import config
from . import db
from .extra import clean
@click.group()
@click.pass_context
@click.option('--config-file', '-c', 'config_file', type=click.Path())
@click.option('--target', '-t', type=click.Path(exists=True))
@click.option('--default-config', '-dc', 'default_config', is_flag=True)
def cli(ctx, config_file, target, default_config):
config = load_config(path=config_file, aux_config=not default_config)
repository_info = load_repository_info(
method=config['repositories']['method'],
repository_file=config['repositories']['repository_file'])
load_environment_variables(config)
ctx.ensure_object(dict)
ctx.obj['config'] = config
ctx.obj['repository_info'] = repository_info
if target is None:
target = get_working_directory(ctx.obj['config'])
ctx.obj['target'] = target
cli.add_command(dev.cli)
cli.add_command(git.cli)
cli.add_command(config.cli)
cli.add_command(db.cli)
cli.add_command(clean)
# -*- coding: utf-8 -*-
# Copyright (c) 2019, Aptitude technologie and contributors
# For license information, please see license.txt
from __future__ import unicode_literals
import frappe
from frappe.model.document import Document
from frappe import _
from frappe.model.rename_doc import rename_doc
from shei.shei.doctype.task_template.task_template import reorder_tasks_after_update
from shei.shei.doctype.task_template.task_template import get_all_task_template_from_sub_type
class TaskSubject(Document):
def autoname(self):
self.name = self.generate_doc_name(self.sub_type, self.task_desc)
def generate_doc_name(self, sub_type, task_desc):
return ("{0}-{1}".format(sub_type, task_desc)).upper()
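The naming rule in `generate_doc_name` is simply subtype and description joined by a hyphen and upper-cased; restated as a free function with hypothetical inputs:

```python
def generate_doc_name(sub_type, task_desc):
    # Same formatting rule as the method above.
    return "{0}-{1}".format(sub_type, task_desc).upper()

print(generate_doc_name("install", "site survey"))  # INSTALL-SITE SURVEY
```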
def get_old_data(self, name):
'''Fetch the data from database for the given Task Subject name'''
old_values = dict()
old_values['task_desc'] = frappe.db.get_value('Task Subject', name, 'task_desc')
old_values['task_subtype'] = frappe.db.get_value('Task Subject', name, 'sub_type')
old_values['name'] = frappe.db.get_value('Task Subject', name, 'name')
old_values['disabled'] = frappe.db.get_value('Task Subject', name, 'disabled')
return old_values
def create_new_task_subject(self, name, disabled, task_desc, sub_type):
task = frappe.new_doc('Task Subject')
task.flags.ignore_permissions = True
task.update({'disabled': disabled, 'task_desc': task_desc, 'sub_type': sub_type}).save()
def create_tast_template(self, sub_type, task_desc):
task_template = frappe.new_doc('Task Template')
task_template.flags.ignore_permissions = True
last_task_order_template = get_all_task_template_from_sub_type(sub_type)
if not last_task_order_template:
last_task_order = 0
else:
last_task_order = last_task_order_template[-1].task_order
task_template.update({'task_subject': self.generate_doc_name(sub_type, task_desc), 'task_order': last_task_order + 1}).save()
def update_data(self, name, disabled, task_desc, sub_type):
new_name = self.name
if not frappe.db.exists('Task Subject', name):
self.create_new_task_subject(name, disabled, task_desc, sub_type)
self.create_tast_template(sub_type, task_desc)
else:
new_name = self.renamed_all_tasks(task_desc, name, disabled)
return new_name
def renamed_all_tasks(self, task_desc, name, disabled):
'''Verify, create and/or rename everything related to task subject'''
new_name = self.name
old_values = self.get_old_data(self.name)
need_renaming = False
if old_values['task_desc'].upper() != self.task_desc.upper(): # if the description has changed, we need to rename the doc
new_name = self.generate_doc_name(self.sub_type, self.task_desc)
frappe.rename_doc(doctype=self.doctype, old=self.name, new=new_name, ignore_permissions=True)
need_renaming = True
doc = frappe.get_doc('Task Subject', new_name)
doc.flags.ignore_permissions = True
doc.update({'disabled': disabled, 'task_desc': task_desc}).save()
self.update_task_template(old_values['name'], new_name, need_renaming)
if need_renaming:
self.update_projects_tasks(old_values['name'], new_name)
self.update_tasks(old_values['name'], new_name)
return new_name
def update_task_template(self, old_name, new_name, need_renaming):
'''In case a task was disabled or renamed, update the task template accordingly'''
reorder_tasks_after_update(self.sub_type)
if need_renaming:
frappe.rename_doc(doctype='Task Template', old=old_name, new=new_name, ignore_permissions=True)
#Entry Point : Update Task button
def update_task_info(self):
new_doc_name = self.update_data(self.name, self.disabled, self.task_desc, self.sub_type)
self.update_task_progression_range_order(new_doc_name, self.sub_type, self.disabled)
self.validate_task_progression_range(self.sub_type)
frappe.msgprint(_("Tasks have been updated"))
def validate_task_progression_range(self, sub_type):
# since we have 5 dots on the customer portal, the limit is set to 5.
    # The current code doesn't allow more or fewer task progression ranges: shei.template.pages.project.html
    nb_tpr = len(frappe.db.get_all('Task Progression Range', {'sub_type':sub_type}, 'name'))
    if nb_tpr < 5:
        frappe.msgprint(_("You need to create {0} more Task Progression Range(s) for this subtype. All clients with a project associated with this subtype will see an error when consulting the Customer Portal").format(5 - nb_tpr))
def update_projects_tasks(self, old_name, new_name):
'''Update all tasks where title = old_name'''
project_tasks = frappe.db.get_all('Project Task', { 'parenttype': 'Project', 'title':old_name }, ['title', 'name'])
for task in project_tasks:
t = frappe.get_doc('Project Task', task['name'])
t.flags.ignore_permissions = True
t.update({'title': new_name}).save()
def update_tasks(self, old_name, new_name):
'''Update all task where task_subject = old_name'''
tasks = frappe.db.get_all('Task', {'subject': old_name}, ['subject', 'name'])
for task in tasks:
t = frappe.get_doc('Task', task['name'])
t.flags.ignore_permissions = True
t.update({'subject': new_name}).save()
def update_task_progression_range_order(self, new_name, sub_type, disabled):
'''Update the task subject name or task order if Task Subject have been disabled'''
if frappe.db.exists('Task Progression Range', {'sub_type': sub_type, 'task_subject': new_name}):
tpr = frappe.get_doc('Task Progression Range', {'sub_type': sub_type, 'task_subject': new_name})
tpr.flags.ignore_permissions = True
have_replacement = self.task_have_replacement(sub_type, tpr.task_order)
if disabled and not have_replacement: #ie the task is the last one or the last in the list. We want to have the previous or next one
previous_task = self.get_previous_task_template(tpr.task_order, sub_type)
tpr.update({'task_subject': previous_task['name'], 'task_order': previous_task['task_order']}).save()
        elif disabled and have_replacement: #The task order will remain the same, but we need to update the task subject
task_names = frappe.db.get_all('Task Template', {'task_order': tpr.task_order}, 'task_subject')
for task in task_names:
                #we have a list of task templates with the right task order.
                #Now we need to find the Task Subject with the right subtype
                replacement_task_name = frappe.db.get_value('Task Subject', {'name': task.task_subject, 'sub_type': sub_type}, 'name')
if replacement_task_name:
tpr.update({'task_subject': replacement_task_name}).save()
def task_have_replacement(self, sub_type, tpr_task_order):
    '''Check if the current task has another task to replace it'''
tasks_template = get_all_task_template_from_sub_type(sub_type)
if tasks_template and tasks_template[-1].task_order == tpr_task_order: #Need to know if there's a task to replace the current one
return True
else:
return False
def get_previous_task_template(self, task_order, sub_type):
'''Return the previous Task Template based on the Task Order and subtype'''
if task_order == 1:
replacement_tasks = frappe.db.get_all('Task Template', {'task_order': task_order + 1}, ['name', 'task_order'])
else:
replacement_tasks = frappe.db.get_all('Task Template', {'task_order': task_order - 1}, ['name', 'task_order'])
if not replacement_tasks: #ie it was the last task
return
for task in replacement_tasks:
        #since the task templates don't have the subtype, we need to go through all the task subjects
        #and find the one with the same task subject name and the right sub_type
if frappe.db.exists('Task Subject', {'name':task.name, 'sub_type':sub_type}):
return frappe.db.get_value('Task Template', {'task_subject':task.name}, '*', as_dict=True)
import logging
from kismetclient.utils import csv
from kismetclient.exceptions import ServerError
log = logging.getLogger(__name__)
def kismet(client, version, starttime, servername, dumpfiles, uid):
""" Handle server startup string. """
log.info('Server: ' +
' '.join([version, starttime, servername, dumpfiles, uid]))
def capability(client, CAPABILITY, capabilities):
""" Register a server's default protocol capabilities. """
client.protocols[CAPABILITY] = csv(capabilities)
def protocols(client, protocols):
""" Enumerate protocol capabilities so they can be registered. """
for protocol in csv(protocols):
client.cmd('CAPABILITY', protocol)
def ack(client, cmdid, text):
""" Handle ack messages in response to commands. """
# Simply remove from the in_progress queue
client.in_progress.pop(cmdid)
def error(client, cmdid, text):
""" Handle error messages in response to commands. """
cmd = client.in_progress.pop(cmdid)
raise ServerError(cmd, text)
def print_fields(client, **fields):
""" A generic handler which prints all the fields. """
for k, v in fields.items():
print('%s: %s' % (k, v))
    print('-' * 80)
from . import RequestOptions
from . import Sort
class Pager(object):
"""
    Generator that takes an endpoint (top-level endpoints with `.get`) and lazily loads items from Server.
Supports all `RequestOptions` including starting on any page. Also used by models to load sub-models
(users in a group, views in a workbook, etc) by passing a different endpoint.
Will loop over anything that returns (List[ModelItem], PaginationItem).
"""
def __init__(self, endpoint, request_opts=None):
if hasattr(endpoint, 'get'):
            # The simplest case is to take an Endpoint and call its get
self._endpoint = endpoint.get
elif callable(endpoint):
# but if they pass a callable then use that instead (used internally)
self._endpoint = endpoint
else:
# Didn't get something we can page over
raise ValueError("Pager needs a server endpoint to page through.")
self._options = request_opts
# If we have options we could be starting on any page, backfill the count
if self._options:
self._count = ((self._options.pagenumber - 1) * self._options.pagesize)
else:
self._count = 0
self._options = RequestOptions()
def __iter__(self):
# Fetch the first page
current_item_list, last_pagination_item = self._endpoint(self._options)
if last_pagination_item.total_available is None:
# This endpoint does not support pagination, drain the list and return
while current_item_list:
yield current_item_list.pop(0)
return
# Get the rest on demand as a generator
while self._count < last_pagination_item.total_available:
if len(current_item_list) == 0:
current_item_list, last_pagination_item = self._load_next_page(last_pagination_item)
try:
yield current_item_list.pop(0)
self._count += 1
except IndexError:
                # The total count on Server changed while fetching; exit gracefully
return
def _load_next_page(self, last_pagination_item):
next_page = last_pagination_item.page_number + 1
opts = RequestOptions(pagenumber=next_page, pagesize=last_pagination_item.page_size)
if self._options is not None:
opts.sort, opts.filter = self._options.sort, self._options.filter
current_item_list, last_pagination_item = self._endpoint(opts)
return current_item_list, last_pagination_item
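The on-demand paging logic in `__iter__`/`_load_next_page` above can be sketched without a live server. `fake_endpoint` and `page_all` below are hypothetical stand-ins for an endpoint and the generator, not part of this module:

```python
DATA = list(range(25))

def fake_endpoint(page_number, page_size):
    """Return one page of DATA plus the total count, mimicking (items, pagination)."""
    start = (page_number - 1) * page_size
    return DATA[start:start + page_size], len(DATA)

def page_all(endpoint, page_size=10):
    """Lazily yield every item, fetching the next page only when the current one is drained."""
    count, page = 0, 1
    items, total = endpoint(page, page_size)
    while count < total:
        if not items:
            page += 1
            items, total = endpoint(page, page_size)
        yield items.pop(0)
        count += 1

assert list(page_all(fake_endpoint)) == list(range(25))
```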
# he134543/XLake
import pandas as pd
import numpy as np
from glob import glob
def Net_Loading(inflows_path, outflows_path):
# inflows_path = 'Inflows'
g_inflow_path = inflows_path + '/*.csv'
g_outflow_path = outflows_path + '/*.csv'
inflows_names = glob(g_inflow_path)
outflows_names = glob(g_outflow_path)
inflows = [i for i in range(len(inflows_names))]
outflows = [i for i in range(len(outflows_names))]
for i in range(len(inflows_names)):
inflows[i] = pd.read_csv(inflows_names[i], encoding='utf-8', index_col=0, parse_dates = True)
for j in range(len(outflows_names)):
outflows[j] = pd.read_csv(outflows_names[j], encoding = 'utf-8', index_col=0, parse_dates = True)
def cal_loading(WQ, inflows, outflows): #input the parameter name,'TP','TKN','chla'
W_in = 0
for inflow in inflows:
W_in = inflow['Flow'] * inflow[WQ] + W_in
W_out = 0
for outflow in outflows:
            W_out = outflow['Flow'] * outflow[WQ] + W_out  # accumulate across all outflows
W = W_in - W_out
return W
W_tkn = cal_loading('TKN', inflows, outflows)
W_tp = cal_loading('TP', inflows, outflows)
W_chl = cal_loading('chla', inflows, outflows)
    return W_tkn, W_tp, W_chl
import sys
from typing import List, Union
import numpy as np
import _robustats
def weighted_median(
x: Union[List[float], np.ndarray],
weights: Union[List[float], np.ndarray]
) -> float:
"""Calculate the weighted median of an array with related weights.
For arrays with an even number of elements, this function calculates the
lower weighted median.
Args:
x: List or Numpy array.
        weights: List or Numpy array of weights related to 'x'.
Returns:
Weighted median.
Examples:
>>> weighted_median(x=[1., 2., 3.], weights=[1., 1., 1.])
2.0
>>> weighted_median(x=[1., 2., 3.], weights=[2., 1., 1.])
2.0
>>> weighted_median(x=[1., 2., 3.], weights=[3., 1., 1.])
1.0
>>> weighted_median(x=[1., 2.], weights=[1., 1.])
1.0
"""
return _robustats.weighted_median(x, weights)
def medcouple(x: Union[List[float], np.ndarray]) -> float:
"""Calculate the medcouple of a list of numbers.
Args:
x: List or Numpy array.
Returns:
Medcouple.
Examples:
>>> medcouple(x=[1., 2., 3.])
0.0
>>> medcouple(x=[1., 2., 3., 4., 5., 6.])
0.0
>>> medcouple(x=[1., 2., 2., 2., 3., 4., 5., 6.])
1.0
>>> medcouple(x=[0.2, 0.17, 0.08, 0.16, 0.88, 0.86, 0.09, 0.54, 0.27])
0.7
"""
if type(x) == list:
epsilon1 = sys.float_info.epsilon
epsilon2 = sys.float_info.min
elif type(x) == np.ndarray:
epsilon1 = np.finfo(x.dtype).eps
epsilon2 = np.finfo(x.dtype).min
else:
raise ValueError(
"Wrong function argument: array type not supported; please use a "
"Python list or a Numpy array."
)
return _robustats.medcouple(x, epsilon1, epsilon2)
def mode(x: Union[List[float], np.ndarray]) -> float:
"""Calculate the mode of a list of numbers.
Args:
x: List or Numpy array.
Returns:
Mode.
Examples:
>>> mode(x=[1., 2., 3., 4., 5.])
2.0
>>> mode(x=[1., 2., 3., 3., 4., 5.])
3.0
>>> mode(x=[1., 2., 2., 3., 3., 3., 4., 4., 5.])
3.0
>>> mode(x=[1., 2., 3., 3., 3., 4., 4., 4., 4., 5.])
3.0
>>> mode(x=[1., 2., 3., 3., 3., 4., 4., 4., 4., 4., 5., 6., 7.])
4.0
"""
return _robustats.mode(x)
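As a plain-Python cross-check of the lower-weighted-median convention above (this is not the `_robustats` C extension; tie cases where the cumulative weight hits exactly half the total may be resolved differently by the extension):

```python
def weighted_median_py(x, weights):
    """Pure-Python lower weighted median: sort by value, then return the first
    element whose cumulative weight reaches half of the total weight."""
    pairs = sorted(zip(x, weights))
    total = float(sum(weights))
    cum = 0.0
    for value, w in pairs:
        cum += w
        if cum >= total / 2:
            return value

assert weighted_median_py([1., 2., 3.], [1., 1., 1.]) == 2.0
assert weighted_median_py([1., 2., 3.], [3., 1., 1.]) == 1.0
assert weighted_median_py([1., 2.], [1., 1.]) == 1.0  # lower median for even totals
```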
# magic_method.py
"""class People:
def __init__(self, name, age):
self.name = name
self.age = age
def __del__(self):
print("deleted")
person = People("james", 25)
"""
class Square:
def __init__(self, side1, side2):
self.side1 = side1
self.side2 = side2
def __add__(self, other):
return Square(self.side1 + other.side1, self.side2 + other.side2)
def __repr__(self):
return f"side 1: {self.side1} side2: {self.side2} result {self.side1 + self.side2}"
square1 = Square(50, 20)
square2 = Square(50, 30)
result = square1 + square2
print(result)
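Because `__add__` above returns a new `Square`, additions chain naturally. A self-contained sketch (re-declaring a minimal `Square` so it runs on its own):

```python
class Square:
    """Minimal copy of the Square class above, kept standalone for this check."""
    def __init__(self, side1, side2):
        self.side1 = side1
        self.side2 = side2
    def __add__(self, other):
        # Returning a new Square (not mutating self) is what makes chaining work.
        return Square(self.side1 + other.side1, self.side2 + other.side2)

s = Square(1, 2) + Square(3, 4) + Square(5, 6)
assert (s.side1, s.side2) == (9, 12)
```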
import jiminy.api
class ModelCreator(jiminy.api.Creator):
"""Polygonal geometry"""
name = "modelDefault"
label = "Model"
family = "tailcoat.model"
icon = "cubes"
"""
Custom component to integrate with OctoPrint.
For more details about this component, please refer to
https://github.com/fredrikbaberg/octoprint_hass
"""
import os
from datetime import timedelta
import logging
import voluptuous as vol
from homeassistant import config_entries
import homeassistant.helpers.config_validation as cv
from homeassistant.helpers import discovery
from homeassistant.util import Throttle
from sampleclient.client import Client
from integrationhelper.const import CC_STARTUP_VERSION
from .octoprint_rest_api import OctoPrint
from .const import (
CONF_BINARY_SENSOR,
CONF_ENABLED,
CONF_NAME,
CONF_PASSWORD,
CONF_SENSOR,
CONF_SWITCH,
CONF_USERNAME,
DEFAULT_NAME,
DOMAIN_DATA,
DOMAIN,
ISSUE_URL,
PLATFORMS,
REQUIRED_FILES,
VERSION,
)
MIN_TIME_BETWEEN_UPDATES = timedelta(seconds=30)
_LOGGER = logging.getLogger(__name__)
BINARY_SENSOR_SCHEMA = vol.Schema(
{
vol.Optional(CONF_ENABLED, default=True): cv.boolean,
vol.Optional(CONF_NAME, default=DEFAULT_NAME): cv.string,
}
)
SENSOR_SCHEMA = vol.Schema(
{
vol.Optional(CONF_ENABLED, default=True): cv.boolean,
vol.Optional(CONF_NAME, default=DEFAULT_NAME): cv.string,
}
)
SWITCH_SCHEMA = vol.Schema(
{
vol.Optional(CONF_ENABLED, default=True): cv.boolean,
vol.Optional(CONF_NAME, default=DEFAULT_NAME): cv.string,
}
)
CONFIG_SCHEMA = vol.Schema(
{
DOMAIN: vol.Schema(
{
vol.Optional(CONF_USERNAME): cv.string,
vol.Optional(CONF_PASSWORD): cv.string,
vol.Optional(CONF_BINARY_SENSOR): vol.All(
cv.ensure_list, [BINARY_SENSOR_SCHEMA]
),
vol.Optional(CONF_SENSOR): vol.All(cv.ensure_list, [SENSOR_SCHEMA]),
vol.Optional(CONF_SWITCH): vol.All(cv.ensure_list, [SWITCH_SCHEMA]),
}
)
},
extra=vol.ALLOW_EXTRA,
)
async def async_setup(hass, config):
"""Set up this component using YAML."""
if config.get(DOMAIN) is None:
# We get here if the integration is set up using config flow
return True
# Print startup message
_LOGGER.info(
CC_STARTUP_VERSION.format(name=DOMAIN, version=VERSION, issue_link=ISSUE_URL)
)
# Check that all required files are present
file_check = await check_files(hass)
if not file_check:
return False
# Create DATA dict
hass.data[DOMAIN_DATA] = {}
# Get "global" configuration.
username = config[DOMAIN].get(CONF_USERNAME)
password = config[DOMAIN].get(CONF_PASSWORD)
# Configure the client.
# OP = OctoPrint(host, port)
client = Client(username, password)
hass.data[DOMAIN_DATA]["client"] = OctoprintData(hass, client)
# Load platforms
for platform in PLATFORMS:
# Get platform specific configuration
platform_config = config[DOMAIN].get(platform, {})
# If platform is not enabled, skip.
if not platform_config:
continue
for entry in platform_config:
entry_config = entry
# If entry is not enabled, skip.
if not entry_config[CONF_ENABLED]:
continue
hass.async_create_task(
discovery.async_load_platform(
hass, platform, DOMAIN, entry_config, config
)
)
hass.async_create_task(
hass.config_entries.flow.async_init(
DOMAIN, context={"source": config_entries.SOURCE_IMPORT}, data={}
)
)
return True
async def async_setup_entry(hass, config_entry):
"""Set up this integration using UI."""
_LOGGER.info("TEST: async_setup_entry\n%s", config_entry.data)
conf = hass.data.get(DOMAIN_DATA)
if config_entry.source == config_entries.SOURCE_IMPORT:
_LOGGER.info("TEST: Import")
if conf is None:
hass.async_create_task(
hass.config_entries.async_remove(config_entry.entry_id)
)
return False
# Print startup message
_LOGGER.info(
CC_STARTUP_VERSION.format(name=DOMAIN, version=VERSION, issue_link=ISSUE_URL)
)
# Check that all required files are present
file_check = await check_files(hass)
if not file_check:
return False
# Create DATA dict
hass.data[DOMAIN_DATA] = {}
# Get "global" configuration.
# username = config_entry.data.get(CONF_USERNAME)
# password = config_entry.data.get(CONF_PASSWORD)
host = '127.0.0.1'
port = 5000
# Configure the client.
# client = Client(username, password)
client = OctoPrint(host, port)
hass.data[DOMAIN_DATA]["client"] = OctoprintData(hass, client)
# Add binary_sensor
hass.async_add_job(
hass.config_entries.async_forward_entry_setup(config_entry, "binary_sensor")
)
# Add sensor
hass.async_add_job(
hass.config_entries.async_forward_entry_setup(config_entry, "sensor")
)
# Add switch
hass.async_add_job(
hass.config_entries.async_forward_entry_setup(config_entry, "switch")
)
return True
class OctoprintData:
"""This class handle communication and stores the data."""
def __init__(self, hass, client):
"""Initialize the class."""
self.hass = hass
self.client = client
@Throttle(MIN_TIME_BETWEEN_UPDATES)
async def update_data(self):
"""Update data."""
# This is where the main logic to update platform data goes.
try:
data = self.client.get_data()
self.hass.data[DOMAIN_DATA]["data"] = data
except Exception as error: # pylint: disable=broad-except
_LOGGER.error("Could not update data - %s", error)
async def check_files(hass):
"""Return bool that indicates if all files are present."""
# Verify that the user downloaded all files.
base = f"{hass.config.path()}/custom_components/{DOMAIN}/"
missing = []
for file in REQUIRED_FILES:
fullpath = "{}{}".format(base, file)
if not os.path.exists(fullpath):
missing.append(file)
if missing:
_LOGGER.critical("The following files are missing: %s", str(missing))
returnvalue = False
else:
returnvalue = True
return returnvalue
async def async_remove_entry(hass, config_entry):
"""Handle removal of an entry."""
try:
await hass.config_entries.async_forward_entry_unload(
config_entry, "binary_sensor"
)
_LOGGER.info(
"Successfully removed binary_sensor from the Octoprint integration"
)
except ValueError:
pass
try:
await hass.config_entries.async_forward_entry_unload(config_entry, "sensor")
_LOGGER.info("Successfully removed sensor from the Octoprint integration")
except ValueError:
pass
try:
await hass.config_entries.async_forward_entry_unload(config_entry, "switch")
_LOGGER.info("Successfully removed switch from the Octoprint integration")
except ValueError:
pass
# tallywiesenberg/dating-data-dividend
from flask_login import UserMixin
from werkzeug.security import generate_password_hash, check_password_hash
from .extensions import db
class User(UserMixin, db.Model):
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(80), unique=True, nullable=False)
password = db.Column(db.String(200), nullable=False)
address = db.Column(db.String(44), unique=True, nullable=False)
left_swipes_given = db.Column(db.Integer, nullable=False)
right_swipes_given = db.Column(db.Integer, nullable=False)
matches = db.Column(db.Integer, nullable=False)
bio = db.Column(db.String(240))
time_logged = db.Column(db.Float, nullable=False)
gender = db.Column(db.String(10), nullable=False)
gender_preference = db.Column(db.String(10), nullable=False)
def __repr__(self):
return f'{self.id} -- {self.username} -- {self.password} -- {self.address}'
def set_password(self, password):
'''Create hashed password'''
self.password = generate_password_hash(password, method='sha256')
def check_password(self, password):
return check_password_hash(self.password, password)
def set_address(self, address):
'''Set address from placeholder to real-deal after connecting with metamask'''
self.address = address
# class UserData(db.Model):
# id = db.Column(db.Integer, primary_key=True)
# left_swipes_given = db.Column(db.Integer, nullable=False)
# right_swipes_given = db.Column(db.Integer, nullable=False)
# matches = db.Column(db.Integer, nullable=False)
# bio = db.Column(db.String(240))
# path_to_photos = db.Column(db.String(20), nullable=False)
# user_id = db.Column(db.String(44),
# # db.ForeignKey('user_login.id'),
# nullable=False)
class Swipe(db.Model):
id = db.Column(db.Integer, primary_key=True)
timestamp = db.Column(db.DateTime, nullable=False)
decision = db.Column(db.Boolean, nullable=False)
#The user that swipes
front_user = db.Column(db.String(44),
# db.ForeignKey('user_login.id'),
nullable=False)
#The user that is swiped upon
back_user = db.Column(db.String(44),
# db.ForeignKey('user_login.id'),
nullable=False)
    match = db.Column(db.Boolean, nullable=False)
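`set_password`/`check_password` above delegate to Werkzeug's helpers. A stdlib-only sketch of the same salt-then-hash-then-verify pattern (the PBKDF2 parameters here are illustrative, not what Werkzeug's `sha256` method actually uses):

```python
import hashlib
import hmac
import os

def hash_password(password, *, iterations=100_000):
    """Return 'salthex$digesthex' using PBKDF2-HMAC-SHA256 with a random salt."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return salt.hex() + '$' + digest.hex()

def verify_password(stored, password, *, iterations=100_000):
    """Recompute the digest with the stored salt and compare in constant time."""
    salt_hex, digest_hex = stored.split('$')
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(),
                                    bytes.fromhex(salt_hex), iterations)
    return hmac.compare_digest(candidate.hex(), digest_hex)

stored = hash_password('s3cret')
assert verify_password(stored, 's3cret')
assert not verify_password(stored, 'wrong')
```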
import numpy as np
import math as m
def Jacobian(dh,rho):
T=ConstructTransform(dh)
t=ExtractOrients(T[-1][0:3,0:3])
zs=[np.dot(t,np.array([0,0,1,1]))[0,0:3] for t in T]
On=np.dot(T[-1],np.array([0,0,0,1]))
Os=[(On-np.dot(T[i],np.array([0,0,0,1])))[0,0:3] for i in range(len(T))]
j=np.zeros([6,len(zs)])
for l in range(len(zs)):
j[:,l]=np.concatenate((np.cross(zs[l][0,0:3],Os[l][0,0:3]).reshape([3,1]),zs[l][0,0:3].reshape([3,1])),axis=0).reshape([6]) if rho[l]==1 else np.concatenate((zs[l][0,0:3].reshape([3,1]),np.zeros([3,1])),axis=0).reshape([6])
return np.matrix(j)
def ConstructTransform(dh):
T=[np.identity(4)]
for l in dh:
T+=[np.dot( np.matrix(T[-1]),np.matrix([[m.cos(l[3]),-m.sin(l[3])*m.cos(l[0]),m.sin(l[3])*m.sin(l[0]), l[2]*m.cos(l[3])],\
[m.sin(l[3]), m.cos(l[3])*m.cos(l[0]), -m.cos(l[3])*m.sin(l[0]), l[2]*m.sin(l[3])],\
[0, m.sin(l[0]),m.cos(l[0]),l[1]],\
[0,0,0,1]]))]
T.pop(0)
return T
def ExtractOrients(R):
t1=m.atan2(R[2,1],R[2,2])
t2=m.atan2(-R[2,0],m.sqrt(R[2,1]**2+R[2,2]**2))
t3=m.atan2(R[1,0],R[0,0])
    return np.array([t1,t2,t3])
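The 4x4 matrix assembled inside `ConstructTransform` above is the standard DH homogeneous transform. A dependency-free sketch of a single link's transform, useful for sanity-checking the entries:

```python
import math

def dh_transform(alpha, d, a, theta):
    """Standard DH homogeneous transform, matching the matrix built in
    ConstructTransform above (row-major 4x4 nested lists)."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    ct, st = math.cos(theta), math.sin(theta)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0, sa,       ca,      d],
            [0.0, 0.0,      0.0,     1.0]]

# With alpha = d = a = 0 and theta = 90 degrees, the result is a pure
# rotation about z: the upper-left block should be [[0, -1], [1, 0]].
T = dh_transform(0.0, 0.0, 0.0, math.pi / 2)
assert abs(T[0][1] + 1.0) < 1e-12 and abs(T[1][0] - 1.0) < 1e-12
```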
# Copyright 2021 The NetKet Authors - All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
from netket.utils.types import DType
from netket.hilbert import AbstractHilbert
class AbstractOperator(abc.ABC):
"""Abstract class for quantum Operators. This class prototypes the methods
needed by a class satisfying the Operator concept.
"""
_hilbert: AbstractHilbert
r"""The hilbert space associated to this operator."""
def __init__(self, hilbert: AbstractHilbert):
self._hilbert = hilbert
@property
def hilbert(self) -> AbstractHilbert:
r"""The hilbert space associated to this operator."""
return self._hilbert
@property
def size(self) -> int:
r"""The total number number of local degrees of freedom."""
return self._hilbert.size
@property
def is_hermitian(self) -> bool:
"""Returns true if this operator is hermitian."""
return False
@property
def H(self) -> "AbstractOperator":
"""Returns the Conjugate-Transposed operator"""
if self.is_hermitian:
return self
from ._lazy import Adjoint
return Adjoint(self)
@property
def T(self) -> "AbstractOperator":
"""Returns the transposed operator"""
return self.transpose()
@property
@abc.abstractmethod
def dtype(self) -> DType:
"""The dtype of the operator's matrix elements ⟨σ|Ô|σ'⟩."""
raise NotImplementedError
def collect(self) -> "AbstractOperator":
"""
        Returns a guaranteed concrete instance of an operator.
        As some operations on operators return lazy wrappers (such as transpose,
        hermitian conjugate...), this is used to obtain a guaranteed non-lazy
operator.
"""
return self
def transpose(self, *, concrete=False) -> "AbstractOperator":
"""Returns the transpose of this operator.
Args:
concrete: if True returns a concrete operator and not a lazy wrapper
Returns:
if concrete is not True, self or a lazy wrapper; the
transposed operator otherwise
"""
if not concrete:
from ._lazy import Transpose
return Transpose(self)
else:
raise NotImplementedError
def conjugate(self, *, concrete=False) -> "AbstractOperator":
"""Returns the complex-conjugate of this operator.
Args:
concrete: if True returns a concrete operator and not a lazy wrapper
Returns:
if concrete is not True, self or a lazy wrapper; the
complex-conjugated operator otherwise
"""
raise NotImplementedError
def conj(self, *, concrete=False) -> "AbstractOperator":
        return self.conjugate(concrete=concrete)
def __repr__(self):
return f"{type(self).__name__}(hilbert={self.hilbert})"
"""merge things
Revision ID: 20d6b579456d
Revises: 5d2bb32c6e04, 9<PASSWORD>
Create Date: 2022-02-02 12:33:13.933500
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '<KEY>'
down_revision = ('5d2bb32c6e04', '<PASSWORD>')
branch_labels = None
depends_on = None
def upgrade():
pass
def downgrade():
pass
import requests
import base64
import hmac
import time
api_key = "XAS6NTiO2rTbE0Gl2W3nvQ"
secret_key = "<KEY>"
api_url = "http://localhost:5003/api?"
command = "list"
secret_key = str.encode(secret_key)
timestamp = str(int(time.time()))
msg = "apikey=" + api_key + "&" + "command=" + command + "&" + "time=" + timestamp
msg = str.encode(msg)
sign = hmac.new(secret_key,msg,digestmod="sha256")
sign = sign.hexdigest()
sign = str.encode(sign)
sign_encoded = base64.b64encode(sign)
msg = msg.decode()
sign_encoded = sign_encoded.decode()
msg = msg + "&" + "sign=" + sign_encoded
url = api_url + msg
print(url)
print(requests.get(url).text)
input("")
# rpurnama/pycodes
import time
import getpass
import sys
print """
db d8b db d88888b db .o88b. .d88b. .88b d88. d88888b
88 I8I 88 88' 88 d8P Y8 .8P Y8. 88'YbdP`88 88'
88 I8I 88 88ooooo 88 8P 88 88 88 88 88 88ooooo
Y8 I8I 88 88ooooo 88 8b 88 88 88 88 88 88ooooo
`8b d8'8b d8' 88. 88booo. Y8b d8 `8b d8' 88 88 88 88.
`8b8' `8d8' Y88888P Y88888P `Y88P' `Y88P' YP YP YP Y88888P
\t================================================================
\tProgrammed by rpurnama for personal usage. This program is still
\tunder development, it isn't for commercial purpose. So, please
\tuse with your own risk. (c) RPOERNAMA Corporation.
"""
id_login = raw_input('User ID: ')
#id_login = "Please Enter Your Identification!"
pass_login = getpass.getpass('Password: ')
#print(pass_login)
def Jarvis(feedback):
if feedback != '':
if "who are you" in feedback:
print("My name is Jarvis, nice to see you!")
elif "exit" in feedback:
print("System Closed. Good Bye!")
sys.exit(0)
else:
print("I don\'t understand you")
else:
pass
time.sleep(1)
while True:
inpdata = raw_input('What\'s your command: ')
    Jarvis(inpdata)
# -*- coding: utf-8 -*-
# snapshottest: v1 - https://goo.gl/zC4yUc
from __future__ import unicode_literals
from snapshottest import GenericRepr, Snapshot
snapshots = Snapshot()
snapshots['test_args 1'] = [
(
'trace_func',
(
GenericRepr('sentinel.frame'),
'call',
None
),
{
}
),
(
'trace_func',
(
GenericRepr('sentinel.frame'),
'line',
None
),
{
}
)
]
# src/tw-mapgen.py
import tkinter as tk
from create_layered import create_layered
from pathlib import Path
config = [
('filename', str(Path.cwd().joinpath('newmap.map')), str),
('base map size', '300', int),
('block length (max tunnel size)', '20', int),
('min wall thickness (per side)', '1', int), # on each side
('max wall thickness (per side)', '5', int), # on each side
('wall thickness change probability', '0.15', float),
    ('obstacle grow length', '11', int), # has to be less than sqrt(0.5) * blocklen - 2
('obstacle size', '5', int),
('obstacle side switch probability', '0.8', float),
('obstacle direction change probability', '0.4', float),
('obstacle freeze probability', '0.8', float),
('block wall (game layer)', '1', int),
('block corner (game layer)', '1', int),
('block obstacle (game layer)', '1', int),
('block freeze (game layer)', '9', int),
    ('directions (0:left, 1:up, 2:right, 3:down)', '2,2,2,3,3,3,2,1,1,1,2,2,3,3,3,2,1,1,1,2,2,2,2', lambda x: list(map(int, x.split(','))) if x.strip() else None) # directions to build along
]
# window
window = tk.Tk()
window.title('Teeworlds Map Generator')
# window.geometry("1400x1200")
# header
tk.Label(text='enter settings below and hit generate').pack()
# inputs
frame = tk.Frame()
entries = []
for i, (text, default, t) in enumerate(config):
label = tk.Label(text=text, master=frame)
label.grid(row=i, column=0, padx=10, pady=10, sticky='e')
entry = tk.Entry(master=frame)
entry.insert(tk.END, default)
entry.grid(row=i, column=1, padx=10, pady=10, sticky='ew')
entries.append(entry)
frame.columnconfigure(0,weight=0)
frame.columnconfigure(1,weight=1)
frame.pack(fill=tk.BOTH, expand=True)
# generate
status_label = None
def generate(*args):
try:
create_layered(*[t(x.get()) for x, (text, default, t) in zip(entries, config)])
result = 'success!'
except Exception as e:
result = f'error: {e}'
print(result)
status_label['text'] = result
button = tk.Button(text="generate", command=generate)
button.pack()
status_label = tk.Label()
status_label.pack()
# mainloop
window.mainloop()
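Each config row above is a `(label, default string, converter)` tuple, and `generate` applies `converter(entry.get())` per row. A minimal sketch of that conversion step, with the directions converter guarded so an empty entry yields `None`:

```python
# Hypothetical trimmed-down config table in the same (label, default, converter) shape.
config = [
    ('base map size', '300', int),
    ('wall thickness change probability', '0.15', float),
    ('directions', '2,2,3',
     lambda x: list(map(int, x.split(','))) if x.strip() else None),
]

# Convert each default string to its typed value, as generate() does for entries.
values = [conv(default) for _, default, conv in config]
assert values == [300, 0.15, [2, 2, 3]]
assert config[2][2]('  ') is None  # blank directions entry -> None, not a crash
```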
from random import choice
from typing import Any, Dict
from django.http.request import HttpRequest
from django.http.response import HttpResponse, Http404
from django.shortcuts import redirect, render
from django.views.generic.base import ContextMixin, View
from django.views.generic.detail import DetailView
from django.views.generic.edit import DeleteView
from django.views.generic.list import ListView
from .models import Product, Rating, RatingItem
def random_redirect(request):
to = ('series/', 'book/', 'film/', 'game/')
return redirect(choice(to))
class ProductListView(ContextMixin, View):
template_name = 'myList/myList.html'
context_object_name = 'list'
def get(self, request: HttpRequest, *args: Any, **kwargs: Any) -> HttpResponse:
product = kwargs.get('product')
        verbosed = {'film': 'films', 'game': 'games', 'series': 'series', 'book': 'books'}
if product not in verbosed:
raise Http404
context = self.get_context_data(
            title=f'List of {verbosed[product]}',
dict_queryset = Product.objects.separated_by_status(product),
)
return render(request, self.template_name, context)
class ProductDetail(DetailView):
model = Product
context_object_name = 'product'
template_name = 'myList/productDetail.html'
slug_url_kwarg = 'slug'
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
context.update(self.get_object().rating.avarage_rating_score)
return context
class RatingListView(ListView):
model = Rating
context_object_name = 'ratingItems'
template_name = 'myList/productDetailRating.html'
slug_url_kwarg = 'slug'
class ProductDelete(DeleteView):
model = Product
template_name = 'myList/productDelete.html'
context_object_name = 'product'
success_url = '/myList/'
""" Sphere simulation and position tools
Geometry simulation:
* :py:func:`simulate_spheres_in_sphere`: Simulate a random sphere packing of hard
spheres of identical radius inside a larger sphere
Neighbor counting:
* :py:func:`nearest_neighbors`: Calculate the distance to the nth closest point to
a given set of points
* :py:func:`count_neighbors`: Calculate the number of points within a radial neighborhood
Point manipulation tools:
* :py:func:`split_red_green`: Split a point list into red/green with a given
probability distribution
* :py:func:`mask_points`: Subset point lists based on a mask
* :py:func:`concat_points`: Concatenate point lists
"""
# Imports
from typing import Tuple, List
# 3rd party
import numpy as np
from sklearn.neighbors import BallTree
# Our own imports
from . import _simulation
from .consts import (
NUM_RED, NUM_GREEN, AGGREGATE_RADIUS, SAME_CELL_RADIUS, NEIGHBOR_RADIUS,
)
# Neighbor counting
def nearest_neighbors(red_points: np.ndarray,
green_points: np.ndarray,
num_closest: int = 1) -> np.ndarray:
""" Find the closest green point to a red point
:param red_points:
The n x 3 array of red points
:param green_points:
The m x 3 array of green points
:param num_closest:
The nth closest point to return
:returns:
        A length-n array of distances to the num_closest-th nearest green point for each red point
"""
red_points = np.stack(red_points, axis=1)
green_points = np.stack(green_points, axis=1)
tree = BallTree(green_points)
return tree.query(red_points, k=num_closest, return_distance=True)[0][:, num_closest-1]
def count_neighbors(red_points: np.ndarray,
green_points: np.ndarray,
radius: float = NEIGHBOR_RADIUS) -> np.ndarray:
""" Count the number of neighbors within a radius
:param ndarray red_points:
The n x 3 array of red points
:param ndarray green_points:
The m x 3 array of green points
:param float radius:
The radius within which a point is a neighbor
:returns:
        A length-n array of counts of green points near each red point
"""
red_points = np.stack(red_points, axis=1)
green_points = np.stack(green_points, axis=1)
tree = BallTree(green_points)
return tree.query_radius(red_points, r=radius, count_only=True)
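A brute-force, standard-library cross-check of the `count_neighbors` logic above (points are plain `(x, y, z)` tuples here; boundary handling at exactly `radius` may differ from `BallTree.query_radius`):

```python
import math

def count_neighbors_bruteforce(red, green, radius):
    """For each red point, count green points within `radius` (inclusive)."""
    counts = []
    for r in red:
        counts.append(sum(1 for g in green if math.dist(r, g) <= radius))
    return counts

red = [(0.0, 0.0, 0.0)]
green = [(0.5, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 0.9, 0.0)]
# Two of the three green points lie within distance 1.0 of the red point.
assert count_neighbors_bruteforce(red, green, 1.0) == [2]
```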
# Point manipulation tools
def mask_points(points: List[np.ndarray],
mask: np.ndarray) -> Tuple[np.ndarray]:
""" Mask off the points
:param List[ndarray] points:
List of 1D point arrays to mask
:param ndarray mask:
Mask for those arrays
:returns:
The same set of points, but masked
"""
points = np.stack(points, axis=1)
points = points[mask, :]
return points[:, 0], points[:, 1], points[:, 2]
def concat_points(*args) -> Tuple[np.ndarray]:
""" Concatenate all the points
:param \\*args:
List of ndarray tuples to concatenate
:returns:
An x, y, z tuple of all the points
"""
final_x = []
final_y = []
final_z = []
for (x, y, z) in args:
if x.ndim == 0:
assert y.ndim == 0
assert z.ndim == 0
continue
assert x.shape[0] == y.shape[0]
assert x.shape[0] == z.shape[0]
final_x.append(x)
final_y.append(y)
final_z.append(z)
return (np.concatenate(final_x),
np.concatenate(final_y),
np.concatenate(final_z))
def split_red_green(all_points: Tuple[np.ndarray],
num_red: int = NUM_RED,
num_green: int = NUM_GREEN,
udist: str = 'uniform') -> Tuple[Tuple[np.ndarray]]:
""" Split into red and green cells
:param Tuple[ndarray] all_points:
The list of coordinates to split into red and green
:param int num_red:
The number of points to assign to red
:param int num_green:
The number of points to assign to green
:param str udist:
Distribution for the red points
:returns:
A tuple of (red, green) points
"""
x, y, z = all_points
all_radii = np.sqrt(x**2 + y**2 + z**2)
all_indices = np.arange(all_radii.shape[0])
# Various distributions
if udist == 'uniform':
all_prob = np.ones_like(all_radii)
all_prob = all_prob / np.sum(all_prob)
elif udist == 'left_triangle':
all_prob = np.max(all_radii) - all_radii
all_prob = all_prob / np.sum(all_prob)
elif udist == 'right_triangle':
all_prob = all_radii / np.sum(all_radii)
elif udist == 'inside':
sorted_indexes = np.argsort(all_radii)
all_mask = np.zeros_like(all_radii)
all_mask[sorted_indexes[:num_red]] = 1
all_prob = all_mask / np.sum(all_mask)
elif udist == 'outside':
sorted_indexes = np.argsort(all_radii)
all_mask = np.zeros_like(all_radii)
all_mask[sorted_indexes[-num_red:]] = 1
all_prob = all_mask / np.sum(all_mask)
else:
raise ValueError(f'Unknown distribution: {udist}')
# Choose red cells with the probability given by the distribution
red_indices = np.random.choice(all_indices, size=(num_red, ), p=all_prob, replace=False)
# Now choose green cells as the remainder
    green_mask = np.ones_like(all_prob, dtype=bool)  # np.bool was removed from NumPy
green_mask[red_indices] = False
green_indices = all_indices[green_mask]
# Split the coordinate masks
red_points = (x[red_indices], y[red_indices], z[red_indices])
green_points = (x[green_indices], y[green_indices], z[green_indices])
print(f'Got {red_points[0].shape[0]} red points')
print(f'Got {green_points[0].shape[0]} green points')
return red_points, green_points
# Shape functions
def simulate_spheres_in_sphere(num_particles: int,
particle_radius: float = SAME_CELL_RADIUS,
sphere_radius: float = AGGREGATE_RADIUS,
rnd=np.random,
umin: float = 0.0,
umax: float = 1.0,
udist: str = 'uniform') -> Tuple[np.ndarray]:
""" Simulate a set of spheres packed in a sphere
    :param int num_particles:
        The number of particles to draw inside the sphere
    :param float particle_radius:
        The radius of the spherical particles to pack
    :param float sphere_radius:
        Radius of the sphere to pack into
    :param RandomState rnd:
        The random number generator
    :param float umin:
        Lower bound of the bias parameter (0 for center biased)
    :param float umax:
        Upper bound of the bias parameter (1 for edge biased)
:param str udist:
The distribution for the parameter
:returns:
x, y, z coordinates for points in the sphere
"""
return _simulation.simulate_spheres_in_sphere(
num_particles, particle_radius, sphere_radius, umin, umax, udist)
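The core of `split_red_green` above is a weighted draw without replacement followed by a boolean complement mask. A minimal standalone numpy sketch of that pattern (function and variable names here are illustrative, not this module's API):

```python
import numpy as np

def weighted_split(n_points, n_red, prob, seed=0):
    """Pick n_red indices with probability weights `prob`; the rest are green."""
    rng = np.random.default_rng(seed)
    indices = np.arange(n_points)
    # Weighted sample without replacement, like np.random.choice(..., p=all_prob)
    red = rng.choice(indices, size=n_red, p=prob, replace=False)
    # Green is simply the complement of the red selection
    green_mask = np.ones(n_points, dtype=bool)
    green_mask[red] = False
    return red, indices[green_mask]

prob = np.ones(10) / 10          # uniform weights, like udist='uniform'
red, green = weighted_split(10, 3, prob)
```

The other `udist` branches only change how `prob` is built; the split mechanics stay the same.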
| StarcoderdataPython |
15068 | import json
import logging
LOGGER = logging.getLogger(__name__)
class SynchronousConsumer:
    # NOTE: the original snippet omitted the enclosing class statement; the
    # class name here is assumed, and the attributes used below
    # (correlation_id_reference, callback_method, channel, acknowledge_message,
    # start_consuming) are expected to be defined elsewhere in the consumer.
    def start(self):
        self.start_consuming()
    def on_message(self, channel, method, properties, body):
        """
        Invoked by pika when a message is delivered from the AMQP broker. The
        channel is passed for convenience. The basic_deliver object that
        is passed in carries the exchange, routing key, delivery tag and
        a redelivered flag for the message. The properties passed in is an
        instance of BasicProperties with the message properties and the body
        is the message that was sent.
        :param channel: The channel object.
        :type channel: pika.channel.Channel
        :param method: basic_deliver method.
        :type method: pika.Spec.Basic.Deliver
        :param properties: The properties.
        :type properties: pika.Spec.BasicProperties
        :param body: The message body.
        :type body: bytes
        """
        try:
            LOGGER.debug('Message received (correlation_id=%s)',
                         properties.correlation_id)
            if properties.correlation_id == self.correlation_id_reference:
                self.callback_method(json.loads(body), properties)
                self.acknowledge_message(method.delivery_tag)
                self.channel.stop_consuming()
        except Exception:
            LOGGER.exception("Synchronous callback method exception:") | StarcoderdataPython |
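The handler above implements a common RPC-over-AMQP pattern: consume replies until one arrives whose correlation id matches the outstanding request, then acknowledge it and stop consuming. The filtering logic in isolation, as a broker-free sketch (hypothetical names, no pika dependency):

```python
import json

def match_reply(messages, correlation_id):
    """Return the decoded body of the first message whose correlation id matches."""
    for props, body in messages:
        if props.get("correlation_id") == correlation_id:
            # A real consumer would also ack the delivery and stop consuming here
            return json.loads(body)
    return None

msgs = [({"correlation_id": "a"}, '{"x": 1}'),
        ({"correlation_id": "b"}, '{"x": 2}')]
reply = match_reply(msgs, "b")
```

Non-matching messages are simply skipped (and, in the snippet above, left unacknowledged).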
1741332 | #!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import print_function
__author__ = "bl"
import logging, sys, unittest, os
from silvaengine_utility import Utility
from dotenv import load_dotenv
load_dotenv()
setting = {
"region_name": os.getenv("region_name"),
"aws_access_key_id": os.getenv("aws_access_key_id"),
"aws_secret_access_key": os.getenv("aws_secret_access_key"),
}
sys.path.insert(0, "/var/www/projects/silvaengine_resouces")
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logger = logging.getLogger()
from silvaengine_resource import Resource
class SilvaEngineResourceTest(unittest.TestCase):
def setUp(self):
self.resource = Resource(logger, **setting)
logger.info("Initiate SilvaEngineResourceTest ...")
def tearDown(self):
        logger.info("Destroy SilvaEngineResourceTest ...")
@unittest.skip("demonstrating skipping")
def test_add_resource(self):
logger.info(
self.resource.add_resource(
[
"analytics_engine",
"user_engine",
"shipping_quote_engine",
"seller_engine",
"factory_engine",
]
)
)
# @unittest.skip("demonstrating skipping")
def test_graphql_get_resource(self):
variables = {
"limit": 10,
"lastEvaluatedKey": {},
}
query = """
query resources($limit: Int!, $lastEvaluatedKey: JSON) {
resources(limit: $limit, lastEvaluatedKey: $lastEvaluatedKey) {
items {
resourceId
service
moduleName
className
function
label
status
createdAt
updatedAt
updatedBy
operations {
query {
label
action
}
mutation {
label
action
}
}
}
lastEvaluatedKey
}
}
"""
# variables = {
# "limit": 1,
# "lastEvaluatedKey": Utility.json_dumps(
# {
# "service": {"S": "subscription_management"},
# "resource_id": {"S": "053429072013b1fc6eeac9555cd4618b"},
# }
# ),
# }
payload = {"query": query, "variables": variables}
response = self.resource.resource_graphql(**payload)
logger.info(response)
if __name__ == "__main__":
unittest.main()
| StarcoderdataPython |
3436984 | <filename>workshop/forms.py
import datetime
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Layout, Fieldset, ButtonHolder, Submit
from django import forms
from django.contrib.auth.forms import AuthenticationForm
from .models import Customer, Employee, Vehicle, Task, Invoice
# TODO: make timezone aware
YEARS = tuple((x,x) for x in range(datetime.date.today().year, 1950, -1))
class CustomerForm(forms.ModelForm):
class Meta:
model = Customer
fields = ['name', 'address', 'city', 'state', 'zip', 'phone']
widgets = {
'address': forms.TextInput()
}
class TaskForm(forms.ModelForm):
class Meta:
model = Task
fields = ['customer', 'vehicle', 'description', 'amount', 'employee']
def __init__(self, *args, q={}, **kwargs):
super(TaskForm, self).__init__(*args, **kwargs)
self.fields['vehicle'] = forms.ModelChoiceField(
queryset=Vehicle.objects.filter(user=q['user'])
)
self.fields['customer'] = forms.ModelChoiceField(
queryset=Customer.objects.filter(user=q['user'])
)
self.fields['employee'] = forms.ModelChoiceField(
queryset=Employee.objects.filter(user=q['user'])
)
class EmployeeForm(forms.ModelForm):
class Meta:
model = Employee
exclude = ['user']
widgets = {
'notes': forms.Textarea()
}
def __init__(self, *args, **kwargs):
self.helper = FormHelper()
self.helper.layout = Layout(
Fieldset(
'Add employee',
'name',
'cost_per_hour',
'phone',
'notes'
),
ButtonHolder(
Submit('submit', 'Submit', css_class="button white"),
)
)
super(EmployeeForm, self).__init__(*args, **kwargs)
class VehicleForm(forms.ModelForm):
class Meta:
model = Vehicle
exclude = ['user']
widgets = {
'year': forms.Select(choices=YEARS),
}
class InvoiceForm(forms.ModelForm):
class Meta:
model = Invoice
fields = ['customer', 'tasks',]
widgets = {
'customer': forms.HiddenInput(),
'tasks': forms.CheckboxSelectMultiple(),
}
def __init__(self, *args, q={}, **kwargs):
super(InvoiceForm, self).__init__(*args, **kwargs)
if q:
self.fields['tasks'].queryset = Task.objects.filter(user=q['user'], customer=q['customer'], invoiced=0)
self.initial['customer'] = Customer.objects.get(user=q['user'], id=q['customer']).id
class CustomAuthenticationForm(AuthenticationForm):
# username = forms.CharField(widget=forms.TextInput(attrs={'placeholder':'username'}))
username = forms.CharField(widget=forms.TextInput())
password = forms.CharField(widget=forms.PasswordInput())
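The `YEARS` tuple at the top of this module follows Django's choices convention: an iterable of `(stored value, display label)` pairs. A standalone check of what that generator expression produces, using a fixed year instead of `datetime.date.today()`:

```python
def year_choices(current_year, first_year=1950):
    """Descending (value, label) pairs, mirroring the YEARS expression above."""
    return tuple((x, x) for x in range(current_year, first_year, -1))

YEARS = year_choices(2020)   # 2020 is an example year, not the module's value
```

Each pair uses the year itself as both the stored value and the label shown in the `forms.Select` widget.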
| StarcoderdataPython |
4878023 | import cv2
import numpy as np
# Remove sweat pores
def remove_pore(im, pore_size_max):
    """
    :param im: image to remove sweat pores from
    :param pore_size_max: maximum pore area (empirical value)
    :return: processed image
    """
    # cv2.RETR_EXTERNAL: only detect the outer contours
    # cv2.CHAIN_APPROX_NONE: store all contour points
    # [-2:] keeps this compatible with OpenCV 3 (3 return values)
    # and OpenCV 4 (2 return values)
    contours, hierarchy = cv2.findContours(im, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2:]
    for i in range(len(contours)):
        area = cv2.contourArea(contours[i])
        if area <= pore_size_max:
            cv2.drawContours(im, [contours[i]], 0, 0, -1)
    return im
def preprocess(num):
    """
    Process an image: convert the color image to a binary image,
    remove sweat pores, and clean up details
    :param num: number of the image to process
    :return: the processed image and its number
    """
    # Read in
    im = cv2.imread('./im/' + str(num) + '.bmp')
    # Convert to grayscale
    im_gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    # Median filter: remove salt-and-pepper noise
    im_median = cv2.medianBlur(im_gray, 5)
    # Gaussian filter: remove Gaussian noise
    im_gauss = cv2.GaussianBlur(im_median, (3, 3), 0)
    # Binarize with Otsu's method
    ret, im_thresh = cv2.threshold(im_gauss, 0, 255, cv2.THRESH_OTSU)
    # Remove sweat pores
    im_rp1 = remove_pore(im=im_thresh, pore_size_max=36)
    # Morphological closing
    closing = cv2.morphologyEx(im_rp1, cv2.MORPH_CLOSE, kernel=np.ones((3, 3), np.uint8), iterations=1)
    im_final = closing
    return im_final, num
def img_write(image, num):
    """
    Write the processed image to disk
    . param image: processed image
    . param num: image number
    . return: success message
    """
    cv2.imwrite('./im_process/result_' + str(num) + '.bmp', image)
    return "Picture {} .".format(num)
if __name__ == '__main__':
img_list = [7, 17, 27]
for img in img_list:
final, num = preprocess(img)
img_write(final, num=num)
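The `medianBlur` step in the pipeline above is what removes salt-and-pepper noise: each pixel is replaced by the median of its neighborhood, so isolated impulse pixels vanish. For intuition, a tiny pure-numpy median filter (slow, illustration only, not a cv2 replacement):

```python
import numpy as np

def median_filter(img, k=3):
    """Naive k x k median filter with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

noisy = np.zeros((5, 5), dtype=np.uint8)
noisy[2, 2] = 255          # a single "salt" pixel
clean = median_filter(noisy)
```

The lone bright pixel is outvoted by its eight zero-valued neighbors, so the filtered image is all zeros.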
| StarcoderdataPython |
8975 | <reponame>thongnbui/MIDS_251_project
#!/usr/bin/python
import json
import argparse
from influxdb import InfluxDBClient
parser = argparse.ArgumentParser(description = 'pull data for softlayer queue' )
parser.add_argument( 'measurement' , help = 'measurement001' )
args = parser.parse_args()
client_influxdb = InfluxDBClient('172.16.31.10', '8086', 'cricket', 'cricket', 'cricket_data')
query = 'SELECT "data_center", "device", "value" FROM "cricket_data"."cricket_retention".'+args.measurement+' WHERE time > now() - 10m order by time'
result = client_influxdb.query(query)
for r in result:
i = 0
for data_center, device, value, time in r:
        print(args.measurement, r[i][data_center], r[i][device], r[i][time], r[i][value], sep='\t')
i += 1
| StarcoderdataPython |
3564323 | <reponame>gregmbi/polyaxon
#!/usr/bin/python
#
# Copyright 2018-2020 Polyaxon, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List
import ujson
from polyaxon import settings, types
from polyaxon.agents.spawners.async_spawner import AsyncSpawner
from polyaxon.containers.containers import get_default_notification_container
from polyaxon.lifecycle import V1StatusCondition
from polyaxon.logger import logger
from polyaxon.polyaxonfile import OperationSpecification
from polyaxon.polyflow import V1IO, V1Component, V1Operation, V1Plugins, V1Termination
from polyaxon.polyflow.run import V1Notifier
from polyaxon.polypod import compiler
async def notify_run(
namespace: str,
owner: str,
project: str,
run_uuid: str,
run_name: str,
condition: V1StatusCondition,
connections: List[str],
):
spawner = AsyncSpawner(namespace=namespace)
await spawner.k8s_manager.setup()
for connection in connections:
connection_type = settings.AGENT_CONFIG.notification_connections_by_names.get(
connection
)
if not connection_type:
logger.warning(
"Could not create notification using connection {}, "
"the connection was not found or not set correctly.".format(
connection_type
)
)
continue
operation = V1Operation(
params={
"kind": connection_type.kind,
"owner": owner,
"project": project,
"run_uuid": run_uuid,
"run_name": run_name,
"condition": ujson.dumps(condition.to_dict()),
},
termination=V1Termination(max_retries=3),
component=V1Component(
name="slack-notification",
plugins=V1Plugins(
auth=False,
collect_logs=False,
collect_artifacts=False,
collect_resources=False,
sync_statuses=False,
),
inputs=[
V1IO(name="kind", iotype=types.STR, is_optional=False),
V1IO(name="owner", iotype=types.STR, is_optional=False),
V1IO(name="project", iotype=types.STR, is_optional=False),
V1IO(name="run_uuid", iotype=types.STR, is_optional=False),
V1IO(name="run_name", iotype=types.STR, is_optional=True),
V1IO(name="condition", iotype=types.STR, is_optional=True),
V1IO(name="connection", iotype=types.STR, is_optional=True),
],
run=V1Notifier(
connections=[connection],
container=get_default_notification_container(),
),
),
)
compiled_operation = OperationSpecification.compile_operation(operation)
resource = compiler.make(
owner_name=owner,
project_name=project,
project_uuid=project,
run_uuid=run_uuid,
run_name=run_name,
run_path=run_uuid,
compiled_operation=compiled_operation,
params=operation.params,
)
await spawner.create(
run_uuid=run_uuid,
run_kind=compiled_operation.get_run_kind(),
resource=resource,
)
| StarcoderdataPython |
199406 | # Generated by Django 3.0.5 on 2020-05-25 04:20
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('part', '0042_auto_20200518_0900'),
('stock', '0042_auto_20200523_0121'),
]
operations = [
migrations.AlterField(
model_name='stockitem',
name='part',
field=models.ForeignKey(help_text='Base part', limit_choices_to={'active': True, 'virtual': False}, on_delete=django.db.models.deletion.CASCADE, related_name='stock_items', to='part.Part', verbose_name='Base Part'),
),
]
| StarcoderdataPython |
1637123 | """ Higher-level types for interacting with Myria relations """
from dateutil.parser import parse
try:  # Python 2
    from itertools import izip
except ImportError:  # Python 3: izip was removed, zip is already lazy
    izip = zip
try:  # Python 2
    basestring
except NameError:  # Python 3
    basestring = str
from myria import MyriaConnection, MyriaError
from myria.schema import MyriaSchema
class MyriaRelation(object):
""" Represents a relation in the Myria system """
DefaultConnection = MyriaConnection(hostname='localhost', port=8753)
def __init__(self, relation, connection=DefaultConnection, schema=None):
""" Attach to an existing Myria relation, or create a new one
relation: the name of the relation. One of:
* qualified components: {'userName': 'public',
'programName': 'adhoc',
'relationName': 'my_relation'}
* qualified name: 'public:adhoc:my_relation'
* unqualified name: 'my_relation' (assume public:adhoc)
Keyword arguments:
connection: attach to a specific Myria API endpoint
schema: for a relation that does not yet exist, specify its schema
"""
self.name = relation if isinstance(relation, basestring) \
else relation.name
self.components = self._get_name_components(self.name)
self.connection = connection
self.qualified_name = self._get_qualified_name(self.components)
self._schema = None
self._metadata = None
# If the relation is already persisted, any schema parameter
# must match the persisted version.
if schema is not None and self.is_persisted and self.schema != schema:
raise ValueError('Stored relation schema does not match '
'that specified as schema parameter.')
elif schema is not None:
self._schema = schema
def to_dict(self):
""" Download this relation as JSON """
return self.connection.download_dataset(self.qualified_name) \
if self.is_persisted else []
@property
def schema(self):
""" The schema of the relation """
if self._schema is None:
self._schema = MyriaSchema(json=self.metadata['schema'])
return self._schema
@property
def created_date(self):
""" The creation date for this relation """
return parse(self.metadata['created'])
def __len__(self):
""" The number of tuples in the relation """
return int(self.metadata['numTuples'])
@property
def metadata(self):
""" A JSON dictionary of relation metadata """
if self._metadata is None:
self._metadata = self.connection.dataset(self.qualified_name)
return self._metadata
@property
def is_persisted(self):
""" Does the relation exist in the Myria database? """
try:
return bool(self.metadata)
except MyriaError:
return False
@staticmethod
def _get_name(qualified_name):
""" Stringify a list of name components into a valid Myria name """
return ':'.join([qualified_name['userName'],
qualified_name['programName'],
qualified_name['relationName']])
@staticmethod
def _get_name_components(name):
""" Parse a Myria relation name into a list of components """
components = name.split(':')
default_components = ['public', 'adhoc'][:max(3 - len(components), 0)]
return default_components + components[:3]
@staticmethod
def _get_qualified_name(name_or_components):
""" Generate a Myria relation dictionary from a string or list """
if isinstance(name_or_components, basestring):
return MyriaRelation._get_qualified_name(
MyriaRelation._get_name_components(name_or_components))
else:
return dict(izip(('userName', 'programName', 'relationName'),
name_or_components[:3]))
| StarcoderdataPython |
6415659 | <filename>pman/pman/core.py
import imp
import fnmatch
import os
import shutil
import subprocess
import sys
import time
from collections import OrderedDict
import configparser
from . import toml
from . import creationutils
class PManException(Exception):
pass
class NoConfigError(PManException):
pass
class CouldNotFindPythonError(PManException):
pass
class BuildError(PManException):
pass
class FrozenEnvironmentError(PManException):
def __init__(self):
super().__init__("Operation not supported in frozen applications")
_CONFIG_DEFAULTS = OrderedDict([
('general', OrderedDict([
('name', 'Game'),
('renderer', 'none'),
('material_mode', 'legacy'),
('physics_engine', 'builtin'),
])),
('build', OrderedDict([
('asset_dir', 'assets/'),
('export_dir', '.built_assets/'),
('ignore_patterns', []),
('converters', ['native2bam']),
])),
('run', OrderedDict([
('main_file', 'main.py'),
('auto_build', True),
('auto_save', True),
])),
])
_USER_CONFIG_DEFAULTS = OrderedDict([
('python', OrderedDict([
('path', ''),
('in_venv', False),
])),
])
def _update_conf(config):
if 'general' in config:
if 'render_plugin' in config['general']:
if '/' in config['general']['render_plugin']:
# Convert from path to module
renderplugin = config['general']['render_plugin']
rppath = get_abs_path(config, renderplugin)
maindir = os.path.dirname(get_abs_path(config, config['run']['main_file']))
rppath = os.path.splitext(os.path.relpath(rppath, maindir))[0]
module_parts = rppath.split(os.sep)
modname = '.'.join(module_parts)
config['general']['render_plugin'] = modname
config['general']['renderer'] = config['general']['render_plugin']
del config['general']['render_plugin']
if 'converter_hooks' in config['general']:
config['general']['converters'] = [
'blend2bam' if i == 'pman.hooks.converter_blend_bam' else i
for i in config['general']['converter_hooks']
]
            del config['general']['converter_hooks']
def _get_config(startdir, conf_name, defaults):
try:
if startdir is None:
startdir = os.getcwd()
except FileNotFoundError:
# The project folder was deleted on us
raise NoConfigError("Could not find config file")
dirs = os.path.abspath(startdir).split(os.sep)
while dirs:
cdir = os.sep.join(dirs)
if cdir.strip() and conf_name in os.listdir(cdir):
configpath = os.path.join(cdir, conf_name)
confdict = toml.load(configpath)
confdict = {
k: dict(defaults.get(k, {}), **confdict.get(k, {}))
for k in set(defaults.keys()) | set(confdict.keys())
}
confdict['internal'] = {
'projectdir': os.path.dirname(configpath),
}
_update_conf(confdict)
return confdict
dirs.pop()
# No config found
raise NoConfigError("Could not find config file")
def get_config(startdir=None):
return _get_config(startdir, '.pman', _CONFIG_DEFAULTS)
def config_exists(startdir=None):
try:
get_config(startdir)
have_config = True
except NoConfigError:
have_config = False
return have_config
def get_user_config(startdir=None):
try:
conf = _get_config(startdir, '.pman.user', _USER_CONFIG_DEFAULTS)
except NoConfigError:
# No user config, just create one
config = get_config(startdir)
file_path = os.path.join(config['internal']['projectdir'], '.pman.user')
print("Creating user config at {}".format(file_path))
open(file_path, 'w').close()
conf = _get_config(startdir, '.pman.user', _USER_CONFIG_DEFAULTS)
confpy = conf['python']['path']
if not confpy:
# Try to find a Python program to default to
try:
pyprog = get_python_program()
pyloc = shutil.which(pyprog)
conf['python']['path'] = pyloc
activate_this_loc = os.path.join(os.path.dirname(pyloc), 'activate_this.py')
conf['python']['in_venv'] = os.path.exists(activate_this_loc)
write_user_config(conf)
except CouldNotFindPythonError:
pass
return conf
def _write_config(config, conf_name):
writecfg = config.copy()
del writecfg['internal']
with open(os.path.join(config['internal']['projectdir'], conf_name), 'w') as f:
toml.dump(writecfg, f)
def write_config(config):
_write_config(config, '.pman')
def write_user_config(user_config):
_write_config(user_config, '.pman.user')
def is_frozen():
return imp.is_frozen(__name__)
def create_project(projectdir='.', extras=None):
if is_frozen():
raise FrozenEnvironmentError()
if not os.path.exists(projectdir):
os.makedirs(projectdir)
confpath = os.path.join(projectdir, '.pman')
if os.path.exists(confpath):
print("Updating project in {}".format(projectdir))
else:
print("Creating new project in {}".format(projectdir))
if not os.path.exists(confpath):
# Touch config file to make sure it is present
with open(confpath, 'a') as _:
pass
config = get_config(projectdir)
write_config(config)
user_config = get_user_config(projectdir)
creationutils.create_dirs(projectdir, (
config['build']['asset_dir'],
'tests',
))
templatedir = creationutils.get_template_dir()
creationutils.copy_template_files(projectdir, templatedir, (
('main.py', config['run']['main_file']),
('settings.prc', 'settings.prc'),
('requirements.txt', 'requirements.txt'),
('setup.py', 'setup.py'),
('setup.cfg', 'setup.cfg'),
('pylintrc', '.pylintrc'),
('test_imports.py', 'tests/test_imports.py'),
))
if extras:
import pkg_resources
entrypoints = {
entrypoint.name: entrypoint.load()
for entrypoint in pkg_resources.iter_entry_points('pman.creation_extras')
}
for extra in extras:
if extra not in entrypoints:
print('Could not find creation extra: {}'.format(extra))
continue
entrypoints[extra](projectdir, config, user_config)
def get_abs_path(config, path):
return PMan(config=config).get_abs_path(path)
def get_rel_path(config, path):
return PMan(config=config).get_rel_path(path)
def get_python_program(config=None):
python_programs = [
'ppython',
'python3',
'python',
'python2',
]
if config is not None:
user_config = get_user_config(config['internal']['projectdir'])
confpy = user_config['python']['path']
if confpy:
python_programs.insert(0, confpy)
# Check to see if there is a version of Python that can import panda3d
for pyprog in python_programs:
args = [
pyprog,
'-c',
'import panda3d.core; import direct',
]
with open(os.devnull, 'w') as f:
try:
retcode = subprocess.call(args, stderr=f)
except FileNotFoundError:
retcode = 1
if retcode == 0:
return pyprog
# We couldn't find a python program to run
raise CouldNotFindPythonError('Could not find a Python version with Panda3D installed')
def in_venv():
return (
hasattr(sys, 'real_prefix') or
(hasattr(sys, 'base_prefix') and sys.base_prefix != sys.prefix)
)
def run_program(config, args, use_venv=True, cwd=None):
user_config = get_user_config(config['internal']['projectdir'])
if use_venv and user_config['python']['in_venv']:
actv_this_loc = os.path.join(
os.path.dirname(user_config['python']['path']),
'activate_this.py'
)
args = [
'python',
os.path.join(os.path.dirname(__file__), 'venvwrapper.py'),
actv_this_loc,
] + args
subprocess.call(args, cwd=cwd)
def run_script(config, args, use_venv=True, cwd=None):
user_config = get_user_config(config['internal']['projectdir'])
if use_venv and user_config['python']['in_venv']:
pyprog = 'python'
else:
pyprog = get_python_program(config)
run_program(config, [pyprog] + args, use_venv=use_venv, cwd=cwd)
def build(config=None):
PMan(config=config).build()
def run(config=None):
PMan(config=config).run()
def dist(config=None, build_installers=True, platforms=None):
PMan(config=config).dist(build_installers, platforms)
def clean(config=None):
PMan(config=config).clean()
def create_renderer(base, config=None):
if not is_frozen():
if config is None:
config = get_config()
sys.path.append(get_abs_path(config, config['build']['export_dir']))
import pman_renderer #pylint: disable=import-error
return pman_renderer.get_renderer()(base)
_RENDER_STUB = """
import functools
def get_renderer():
modname = '{}'
attrs = {}
module = __import__(modname, fromlist=['__name__'], level=0)
return functools.reduce(getattr, attrs, module)
"""
RENDER_STUB_NAME = 'pman_renderer.py'
def converter_copy(_config, _user_config, srcdir, dstdir, assets):
for asset in assets:
src = asset
dst = src.replace(srcdir, dstdir)
# print('Copying file from "{}" to "{}"'.format(src, dst))
if not os.path.exists(os.path.dirname(dst)):
os.makedirs(os.path.dirname(dst))
shutil.copyfile(src, dst)
class PMan(object):
def __init__(self, config=None, config_startdir=None):
if config:
self.config = config
self.user_config = get_user_config(config['internal']['projectdir'])
else:
self.config = get_config(config_startdir)
self.user_config = get_user_config(config_startdir)
if is_frozen():
self.converters = []
else:
import pkg_resources
self.converters = [
entry_point.load()
for entry_point in pkg_resources.iter_entry_points('pman.converters')
if entry_point.name in self.config['build']['converters']
]
def get_abs_path(self, path):
return os.path.join(
self.config['internal']['projectdir'],
path
)
def get_rel_path(self, path):
return os.path.relpath(path, self.config['internal']['projectdir'])
def build(self):
import pkg_resources
if is_frozen():
raise FrozenEnvironmentError()
if hasattr(time, 'perf_counter'):
#pylint:disable=no-member
stime = time.perf_counter()
else:
stime = time.time()
print("Starting build")
srcdir = self.get_abs_path(self.config['build']['asset_dir'])
dstdir = self.get_abs_path(self.config['build']['export_dir'])
if not os.path.exists(srcdir):
print("warning: could not find asset directory: {}".format(srcdir))
return
if not os.path.exists(dstdir):
print("Creating asset export directory at {}".format(dstdir))
os.makedirs(dstdir)
print("Read assets from: {}".format(srcdir))
print("Export them to: {}".format(dstdir))
ignore_patterns = self.config['build']['ignore_patterns']
print("Ignoring file patterns: {}".format(ignore_patterns))
# Gather files and group by extension
ext_asset_map = {}
ext_dst_map = {}
ext_converter_map = {}
for converter in self.converters:
ext_dst_map.update(converter.ext_dst_map)
for ext in converter.supported_exts:
ext_converter_map[ext] = converter
for root, _dirs, files in os.walk(srcdir):
for asset in files:
src = os.path.join(root, asset)
dst = src.replace(srcdir, dstdir)
ignore_pattern = None
for pattern in ignore_patterns:
if fnmatch.fnmatch(asset, pattern):
ignore_pattern = pattern
break
if ignore_pattern is not None:
print('Skip building file {} that matched ignore pattern {}'.format(asset, ignore_pattern))
continue
ext = '.' + asset.split('.', 1)[1]
if ext in ext_dst_map:
dst = dst.replace(ext, ext_dst_map[ext])
if os.path.exists(dst) and os.stat(src).st_mtime <= os.stat(dst).st_mtime:
print('Skip building up-to-date file: {}'.format(dst))
continue
if ext not in ext_asset_map:
ext_asset_map[ext] = []
print('Adding {} to conversion list to satisfy {}'.format(src, dst))
ext_asset_map[ext].append(os.path.join(root, asset))
# Find which extensions have hooks available
convert_hooks = []
for ext, converter in ext_converter_map.items():
if ext in ext_asset_map:
convert_hooks.append((converter, ext_asset_map[ext]))
del ext_asset_map[ext]
# Copy what is left
for ext in ext_asset_map:
converter_copy(self.config, self.user_config, srcdir, dstdir, ext_asset_map[ext])
# Now run hooks that non-converted assets are in place (copied)
for convert_hook in convert_hooks:
convert_hook[0](self.config, self.user_config, srcdir, dstdir, convert_hook[1])
# Write out stub importer so we do not need pkg_resources at runtime
renderername = self.config['general']['renderer']
if not renderername:
renderername = _CONFIG_DEFAULTS['general']['renderer']
for entry_point in pkg_resources.iter_entry_points('pman.renderers'):
if entry_point.name == renderername:
renderer_entry_point = entry_point
break
else:
raise BuildError('Could not find renderer for {0}'.format(renderername))
renderer_stub_path = os.path.join(dstdir, RENDER_STUB_NAME)
print('Writing renderer stub to {}'.format(renderer_stub_path))
with open(renderer_stub_path, 'w') as renderer_stub_file:
renderer_stub_file.write(_RENDER_STUB.format(
renderer_entry_point.module_name,
repr(renderer_entry_point.attrs)
))
if hasattr(time, 'perf_counter'):
#pylint:disable=no-member
etime = time.perf_counter()
else:
etime = time.time()
print("Build took {:.4f}s".format(etime - stime))
def run(self):
if is_frozen():
raise FrozenEnvironmentError()
mainfile = self.get_abs_path(self.config['run']['main_file'])
print("Running main file: {}".format(mainfile))
args = [mainfile]
#print("Args: {}".format(args))
run_script(self.config, args, cwd=self.config['internal']['projectdir'])
def dist(self, build_installers=True, platforms=None):
if is_frozen():
raise FrozenEnvironmentError()
args = [
'setup.py',
]
if build_installers:
args += ['bdist_apps']
else:
args += ['build_apps']
if platforms is not None:
args += ['-p', '{}'.format(','.join(platforms))]
run_script(self.config, args, cwd=self.config['internal']['projectdir'])
def clean(self):
if is_frozen():
raise FrozenEnvironmentError()
shutil.rmtree(self.get_abs_path(self.config['build']['export_dir']), ignore_errors=True)
shutil.rmtree(self.get_abs_path('build'), ignore_errors=True)
shutil.rmtree(self.get_abs_path('dist'), ignore_errors=True)
| StarcoderdataPython |
6640131 | <gh_stars>0
#!/usr/bin/env python3
import os
import argparse
import time
import sys
from threading import Thread, Lock
global txns
global clients
clients = 100
txns = 100
# ==================== Notes and information ====================
# This script will run multiple instances (threaded) of the Perf_Add_nyms.py script or the Perf_get_nyms.py script. The
# command line parameters for each script are different and can be set from this script without modifying Add_nyms or
# Get_nyms scripts.
# The settings for Perf runner are 'clients' and 'txns'. Clients is the number of threads (or client machines) to use,
# the txns indicates how many transactions will run per client (thread). These settings are specific to Perf_runner.py
#
# The command line for both performance scripts is created in the 'command' variable found below. The default setting
# for Perf_Add_nyms.py uses the -n and -s parameters to specify the number of threads and clients to use. The value
# from clients is iterated through and uses 'i' to track which iteration is processing.
# The default vaiables for the Add_nyms script will be used. If any of the default settings for Add_nyms or Get_nyms
# needs to be modified, add the changes here to the perf runner by modifying the 'command' variable.
# ================================================================
# Example:
# Run Perf_Add_nyms.py: python3.6 Perf_runner.py -a
# Run Perf_gert_nyms.py using 3 clients (threads) - by setting clients to 3: python3.6 Perf_runner.py -g
parser = argparse.ArgumentParser(description='This script will create multiple threads of the Perf_Add_nyms.py or '
                                             'the Perf_get_nyms.py.')
parser.add_argument('-a', help='Use this parameter to start Perf_Add_nyms.py', action='store_true',
                    default=False, required=False)
parser.add_argument('-g', help='Use this parameter to start Perf_get_nyms.py', action='store_true',
                    default=False, required=False)
# parser.print_help()
results = parser.parse_args()

if results.a:
    results.a = 'Perf_Add_nyms.py'
if results.g:
    results.g = 'Perf_get_nyms.py'
def run_test(i, lock):
    print("This is a test : " + repr(results.g))
    print("This is a test : " + repr(results.a))
    if results.a:
        # The value for -n is the 'txns' variable at the top of this script
        command = 'python3 ' + results.a + ' -n ' + str(txns) + ' -s ' + repr(i)
    elif results.g:
        # The default values for -d -t and -g in get_nym will be used
        command = 'python3 ' + results.g + ' -s ' + repr(clients) + ' -d nym_files'
    else:
        print("\n\nPlease specify a script to use or run Perf_runner.py -h for additional information")
        sys.exit(1)
    with lock:
        print("Starting thread {}".format(i))
    # Run the command
    # print(command)
    os.system(command)
    with lock:
        print("Thread {} stopped".format(i))
# Create threads
lock = Lock()

# Start time
# timeBegin = datetime.now()
overmind_start_time = time.time()

# Create the requested number of clients (threads)
threads = [Thread(target=run_test, args=(i, lock)) for i in range(clients)]

# Start threads
for x in threads:
    x.start()

# Wait for all threads to finish
for x in threads:
    x.join()
# Total time: split the elapsed seconds into h/m/s with divmod
# (the original arithmetic double-counted hours and minutes)
totalTime = time.time() - overmind_start_time
hours, rem = divmod(int(totalTime), 3600)
minutes, seconds = divmod(rem, 60)
ttl_txns = clients * txns
ttl_seconds = int((hours * 3600) + (minutes * 60) + seconds)
try:
    txns_per_second = int(ttl_txns / ttl_seconds)
except Exception as E:
    txns_per_second = None
    print("Test run time too short to compute transactions per second: ", E)
print("\n ----------- Total time to run the test: %dh:%dm:%ds" % (hours, minutes, seconds) + " -----------")
print("\n Clients = " + str(clients))
print("\n Transaction per client = " + str(txns))
print("\n Total transactions requested = " + str(ttl_txns))
print("\n Estimated transactions per second = " + str(txns_per_second))
tm = time.strftime("%d-%m-%Y_%H-%M-%S")
with open("test_results_time_" + tm + ".log", "w") as outfile:
    outfile.write("\n ----------- Total time to run the test: %dh:%dm:%ds" % (hours, minutes, seconds) + " -----------\n")
    outfile.write("\n Clients = " + str(clients))
    outfile.write("\n Transaction per client = " + str(txns))
    outfile.write("\n Total transactions requested = " + str(ttl_txns))
    outfile.write("\n Estimated transactions per second = " + str(txns_per_second))
| StarcoderdataPython |
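The hour/minute/second arithmetic at the end of Perf_runner.py is easiest to get right with `divmod`. A small sketch of the corrected split (the `format_duration` name is an assumption, not from the script):

```python
def format_duration(total_seconds):
    # Split an elapsed wall-clock duration into h/m/s with divmod,
    # avoiding the double-counting of naive hour/minute arithmetic.
    hours, rem = divmod(int(total_seconds), 3600)
    minutes, seconds = divmod(rem, 60)
    return "%dh:%dm:%ds" % (hours, minutes, seconds)


print(format_duration(3725))  # 1 hour, 2 minutes, 5 seconds
```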
6444750 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# || ____ _ __
# +------+ / __ )(_) /_______________ _____ ___
# | 0xBC | / __ / / __/ ___/ ___/ __ `/_ / / _ \
# +------+ / /_/ / / /_/ /__/ / / /_/ / / /_/ __/
# || || /_____/_/\__/\___/_/ \__,_/ /___/\___/
#
# Copyright (C) 2014 Bitcraze AB
#
# Crazyflie Nano Quadcopter Client
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
# 02110-1301, USA.
"""
Mux for giving control to one device (slave/student) for all axis (roll/pitch/
yaw/thrust) with the ability to take over all of them from a second device
(master/teacher).
"""
import logging
from .takeoverselectivemux import TakeOverSelectiveMux
__author__ = 'Bitcraze AB'
__all__ = ['TakeOverMux']
logger = logging.getLogger(__name__)
class TakeOverMux(TakeOverSelectiveMux):
    def __init__(self, *args):
        super(TakeOverMux, self).__init__(*args)
        self.name = "Teacher (RPYT)"
        self._muxing = {
            self._master: ("estop", "alt1", "alt2", "assistedControl", "exit"),
            self._slave: ("roll", "pitch", "yaw", "thrust")
        }
| StarcoderdataPython |
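The `_muxing` table above maps each device to the axes it controls. A minimal sketch of how such a table can be queried (plain strings stand in for the master/slave device objects; this is an illustrative assumption, not the Crazyflie client API):

```python
# Hypothetical stand-in for the take-over muxing table above.
muxing = {
    "master": ("estop", "alt1", "alt2", "assistedControl", "exit"),
    "slave": ("roll", "pitch", "yaw", "thrust"),
}


def device_for(axis):
    """Return which device owns a given control axis, or None if unmapped."""
    for device, axes in muxing.items():
        if axis in axes:
            return device
    return None
```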
3597593 | from setuptools import setup, find_packages
setup(name="naomi_bot", packages=find_packages(exclude=("naomi-bot-env",)))  # exclude expects an iterable, so a one-element tuple, not a bare string | StarcoderdataPython |
11248953 | import numpy as np
import bandits_lab.algorithms as algs
import bandits_lab.bandit_definitions as bands
import sim_utilities as sim
"""
Code for the experiments in the paper Adaptation to the range in K-armed bandits.
"""
np.random.seed(0)
K = 10
T = 100000
n_tests = 100
"""
Definitions of the problems considered.
"""
scales = [0.01, 0.1, 1, 10]
ups = np.ones(K)
lows = np.zeros(K)
ups[0] = 1.2
M = 0
means = (ups + lows) / 2
# variances = 0.01 * np.ones(K) # low variance
variances = 0.25 * np.ones(K) # high variance
band_list = [
    bands.SymTruncatedGaussian(K, s * lows, s * ups, s * means, s * s * variances)
    for s in scales
]
N_tests = [n_tests for _ in band_list]

"""
List of algorithms. These algorithms are implemented in the bandits_lab package.
"""
r = 1.2
alg_list = [
    algs.UCB_a(K, sig=1e-3 * r, label=r"UCB $\sigma = 10^{-3} \times 1.2$"),
    algs.UCB_a(K, sig=0.01 * r, label=r"UCB $\sigma = 0.01 \times 1.2$"),
    algs.UCB_a(K, sig=0.1 * r, label=r"UCB $\sigma = 0.1 \times 1.2 $"),
    algs.UCB_a(K, sig=1 * r, label=r"UCB $\sigma = 1 \times 1.2$"),
    algs.UCB_a(K, sig=10 * r, label=r"UCB $\sigma = 10 \times 1.2$"),
    # algs.UCB_a(K, sig=100 * r, label=r"UCB $\sigma = 100 \times 1.2 $"),
    # algs.Exp3(K, M=M, label="Exp3 M=0"),
    # algs.Exp3(K, M=1.2, label="Exp3 M=1.2"),
    # algs.AdaHedgeExp3(K, M=M, label="AHB without extra exploration M=0"),
    # algs.AdaHedgeExp3(K, M=1.1, label="AHB without extra exploration M=1.2"),
    algs.MaxUCB(K, sig_init=0, label="Range estimating UCB"),
    algs.AdaHedgeExp3ExtraExp(K, label="AHB with extra exploration"),
    algs.UCB_a(K, sig=0, label="FTL"),
    algs.RandomPlay(K, label="Random Play"),
    # algs.FastAdaFTRLTsallis(
    #     K, M=M, sym=False, proxy=False, label="Bandit AdaFTRL Tsallis"
    # ),
    # algs.FastFTRLTsallis(K, M=M, label=r"FTRL Tsallis $\eta_t=t^{-1/2}$"),
]
data_dict = {
    "name": "<NAME>",
    "short_name": "multi_scale_2_trunc_gauss_hi_var",
    "T": T,
    "N_tests": N_tests,
    "band_list": band_list,
    "alg_list": alg_list,
    "results": None,
    "scales": scales,
    "seed": 0,
    "folder": "data_saves/range_adaptation/",
}
print("T :", T)
print("means list : ", means)
print("scales : ", scales)
sim.launch(data_dict, n_jobs=4, checkpoints=True)
print("Done")
| StarcoderdataPython |
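The `UCB_a` algorithms above come from the `bandits_lab` package, whose internals are not shown here. As a hedged illustration of the generic UCB index that the experiment sweeps `sig` over, here is a minimal arm-selection sketch (not the package's actual implementation; names are assumptions):

```python
import math


def ucb_pick(counts, sums, t, sig=1.0):
    """Pick the arm maximizing empirical mean + sig * sqrt(2 ln t / n).

    counts[i] is how often arm i was played, sums[i] its cumulative reward.
    Unplayed arms are tried first.
    """
    best, best_idx = None, 0
    for i, (n, s) in enumerate(zip(counts, sums)):
        if n == 0:
            return i  # explore every arm at least once
        idx = s / n + sig * math.sqrt(2.0 * math.log(t) / n)
        if best is None or idx > best:
            best, best_idx = idx, i
    return best_idx
```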
1991125 | #encoding:utf-8
import requests
import json
count_all = 0
hit = 0
fout = open('request.out_ik_balance','w')
dict_query_all = dict()
dict_recall_hit = dict()
dict_predict_all = dict()
jiexi_count = 0
fei_chui_lei = ['qa','baike','chat']
count_vertical = 0
count_non_vertical = 0
pre_count_vertical = 0
pre_count_non_vertical = 0
vertical_hit = 0
non_vertical_hit = 0
# string = 'http://10.142.98.128:8080/search?query=今天这个歌好听吗'
#
# github_url = string
# data = {}
# r = requests.post(github_url, data=data)
#
# ans = json.loads(r.text)
# print (ans['answer'])
with open('domain_test_no_o2o_0620.tsv', 'r') as f:
    for line in f.readlines():
        count_all += 1
        l = line.strip().split('\t')
        line_query, line_domain = l[0], l[1]
        if line_domain not in fei_chui_lei:
            count_vertical += 1
        else:
            count_non_vertical += 1
        # per-domain counts for the test set
        if line_domain not in dict_query_all:
            dict_query_all[line_domain] = 1
        else:
            dict_query_all[line_domain] += 1
        string = 'http://10.142.98.128:8080/search?query=' + str(line_query)
        github_url = string
        data = {}
        r = requests.post(github_url, data=data)
        ans = json.loads(r.text)
        ans = ans['answer']
        my_dict = dict()
        try:
            for query in ans:
                domain = query['fieldsJson'].split('\"')[7]
                if domain not in my_dict:
                    my_dict[domain] = 1
                else:
                    my_dict[domain] += 1
            # jiexi_count: number of responses parsed successfully ('jiexi' = parse)
            jiexi_count += 1
            # Find the domain that appears most often; on ties, take the one
            # that appears first in the response.
            dict_sorted = sorted(my_dict.items(), key=lambda a: a[1], reverse=True)
            if len(dict_sorted) >= 2:
                if dict_sorted[0][1] != dict_sorted[1][1]:
                    predict_domain = dict_sorted[0][0]
                else:
                    to_be_see = []
                    for cell in dict_sorted:
                        if cell[1] == dict_sorted[0][1]:
                            to_be_see.append(cell[0])
                    for query in ans:
                        domain = query['fieldsJson'].split('\"')[7]
                        if domain in to_be_see:
                            predict_domain = domain
                            break
            else:
                predict_domain = dict_sorted[0][0]
            # per-domain counts for the predictions
            if predict_domain not in dict_predict_all:
                dict_predict_all[predict_domain] = 1
            else:
                dict_predict_all[predict_domain] += 1
            if predict_domain not in fei_chui_lei:
                pre_count_vertical += 1
            else:
                pre_count_non_vertical += 1
            # counts of correct predictions
            if predict_domain == line_domain:
                hit += 1
                if predict_domain not in dict_recall_hit:
                    dict_recall_hit[predict_domain] = 1
                else:
                    dict_recall_hit[predict_domain] += 1
                # both predicted and gold domains are non-vertical
                if predict_domain in fei_chui_lei and line_domain in fei_chui_lei:
                    non_vertical_hit += 1
                # both are vertical (neither is non-vertical)
                if predict_domain not in fei_chui_lei and line_domain not in fei_chui_lei:
                    vertical_hit += 1
        except Exception:
            print(ans)
            print(str(line_query))
            print('exception')
print('28 domain:', hit / count_all)
fout.write('28 domain correct/all_query:' + '\t' + str(hit / count_all) + '\n')
for key in dict_query_all:
    # .get(..., 0) guards against domains that were never predicted (correctly)
    hit_k = dict_recall_hit.get(key, 0)
    pred_k = dict_predict_all.get(key, 0)
    P = hit_k / pred_k if pred_k else 0.0
    R = hit_k / dict_query_all[key]
    F1 = 2 * P * R / (P + R) if (P + R) else 0.0
    count = dict_query_all[key]
    print('domain: ', key, 'count: ', count, 'recall: ', R, 'precision: ', P, 'F1: ', F1)
    fout.write('\t'.join(['domain: ', str(key), 'count: ', str(count), 'recall: ', str(R), 'precision: ', str(P), 'F1: ', str(F1)]) + '\n')
v_R = vertical_hit / count_vertical
v_P = vertical_hit / pre_count_vertical
v_F1 = 2 * v_P * v_R / (v_P + v_R)
non_v_R = non_vertical_hit / count_non_vertical
non_v_P = non_vertical_hit / pre_count_non_vertical
non_F1 = 2 * non_v_P * non_v_R / (non_v_P + non_v_R)
print('vertical:', 'recall:', v_R, 'precision:', v_P)
print('non_vertical:', 'recall:', non_v_R, 'precision:', non_v_P)
fout.write('\t'.join(['vertical:', 'recall:', str(v_R), 'precision:', str(v_P), 'v_F1', str(v_F1)]) + '\n')
fout.write('\t'.join(['non_vertical:', 'recall:', str(non_v_R), 'precision:', str(non_v_P), 'non_v_F1', str(non_F1)]) + '\n')
vertical_acc = (non_vertical_hit + vertical_hit) / (count_vertical + count_non_vertical)
fout.write('\t'.join(['non_vertical_hit,vertical_hit,count_vertical,count_non_vertical', str(non_vertical_hit), str(vertical_hit), str(count_vertical), str(count_non_vertical)]) + '\n')
fout.write('jiexi_count : ' + '\t' + str(jiexi_count) + '\n')
fout.write('\t'.join(['vertical_acc', str(vertical_acc)]) + '\n')
fout.close() | StarcoderdataPython |
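The per-domain metrics in the evaluation script above reduce to precision, recall, and F1 from three counts. A small guarded sketch (the `prf1` helper is illustrative, not part of the script):

```python
def prf1(tp, predicted, actual):
    """Precision/recall/F1 from true positives, predicted count, gold count,
    with zero-division guards."""
    p = tp / predicted if predicted else 0.0
    r = tp / actual if actual else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1
```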
#!/usr/bin/python
# 1. Compute the sum of a row
# 2. Compute the sum of a column
# 3. Compute the sums of the two diagonals


def somme_ligne(tab, n):
    som = 0
    for i in range(len(tab[0])):
        som += tab[n][i]
    return som


def somme_colonne(tab, n):
    som = 0
    for i in range(len(tab)):
        som += tab[i][n]
    return som


def somme_diagonale(tab):
    som = 0
    som1 = 0
    for i in range(len(tab)):
        som += tab[i][i]
        som1 += tab[i][len(tab) - 1 - i]  # was tab[i][4 - i]: avoid hard-coding a 5x5 matrix
    return som, som1


tab = [[2, 4, 6, 8, 10] for i in range(5)]
# Sum of the elements of the second row
print('The sum of the elements of row 2 is', somme_ligne(tab, 1))
# Sum of the elements of the fourth column
print('The sum of the elements of column 4 is', somme_colonne(tab, 3))
# Sums of the two diagonals
som, som1 = somme_diagonale(tab)
print("The sum of the elements of one diagonal is",
      som, "and that of the other is", som1)
| StarcoderdataPython |
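The three loops above can be written compactly with generator expressions; a sketch of both diagonal sums without hard-coding the matrix size (the `diagonal_sums` name is an assumption):

```python
def diagonal_sums(tab):
    """Return (main diagonal sum, anti-diagonal sum) for a square matrix."""
    n = len(tab)
    main = sum(tab[i][i] for i in range(n))
    anti = sum(tab[i][n - 1 - i] for i in range(n))
    return main, anti
```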
# -*- coding: utf-8 -*-
"""
Created on Thu Dec 10 14:49:27 2020

@author: celine.gross
"""
from collections import Counter

with open('input_day10.txt', 'r') as file:
    puzzle = [int(x.strip()) for x in file]
print(puzzle[:10])


# Part 1
def count_jolt_differences(data, start=0, end=+3):
    data.extend([start, max(data) + int(end)])
    data.sort()
    jolt_differences = [data[i + 1] - data[i] for i, number in enumerate(data[:-1])]
    return Counter(jolt_differences)


def run(data):
    counts = count_jolt_differences(data)
    return counts[1] * counts[3]


print(run(puzzle))

# Part 2
print(len(puzzle)) | StarcoderdataPython |
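Part 1 above multiplies the counts of 1-jolt and 3-jolt differences across the sorted adapter chain (outlet at 0, device at max + 3). A self-contained restatement without mutating the input list (the `jolt_product` helper is illustrative):

```python
from collections import Counter


def jolt_product(adapters):
    """Count adjacent differences in the sorted chain and multiply
    the number of 1-jolt gaps by the number of 3-jolt gaps."""
    chain = sorted(adapters + [0, max(adapters) + 3])
    diffs = Counter(b - a for a, b in zip(chain, chain[1:]))
    return diffs[1] * diffs[3]
```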
import torch
import torch.nn as nn
from torch.cuda.amp import GradScaler, autocast
from torchattacks.attack import Attack


class FastBIM(Attack):
    def __init__(self, model, eps=4/255, alpha=1/255, steps=0):
        super().__init__("FastBIM", model)
        self.eps = eps
        self.alpha = alpha
        if steps == 0:
            self.steps = int(min(eps*255 + 4, 1.25*eps*255))
        else:
            self.steps = steps
        self._supported_mode = ['default', 'targeted']
        self.scaler = GradScaler()

    def forward(self, images, labels):
        r"""
        Overridden.
        """
        images = images.clone().detach().to(self.device)
        labels = labels.clone().detach().to(self.device)

        if self._targeted:
            target_labels = self._get_target_label(images, labels)

        loss = nn.CrossEntropyLoss()
        ori_images = images.clone().detach()

        for _ in range(self.steps):
            images.requires_grad = True
            # Accelerating forward propagation
            with autocast():
                outputs = self.model(images)
                # Calculate loss
                if self._targeted:
                    cost = -loss(outputs, target_labels)
                else:
                    cost = loss(outputs, labels)

            # Update adversarial images with gradient scaler applied
            scaled_loss = self.scaler.scale(cost)
            grad = torch.autograd.grad(scaled_loss, images,
                                       retain_graph=False,
                                       create_graph=False)[0]

            adv_images = images + self.alpha*grad.sign()
            # Project back into the eps-ball around ori_images and into [0, 1]
            a = torch.clamp(ori_images - self.eps, min=0)
            b = (adv_images >= a).float()*adv_images \
                + (adv_images < a).float()*a
            c = (b > ori_images+self.eps).float()*(ori_images+self.eps) \
                + (b <= ori_images + self.eps).float()*b
            images = torch.clamp(c, max=1).detach()

        return images | StarcoderdataPython |
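The three-stage clamp in `FastBIM.forward` keeps each pixel inside the eps-ball around the original image and inside [0, 1]. A scalar sketch of the same projection (illustrative only, not the tensor code above):

```python
def clip_step(adv, ori, eps):
    """Elementwise equivalent of the three-stage clamp:
    keep adv within [max(ori - eps, 0), min(ori + eps, 1)]."""
    lo = max(ori - eps, 0.0)
    hi = ori + eps
    return min(max(adv, lo), hi, 1.0)
```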
11282704 | <gh_stars>0
''' /processes '''

from flask_restful_swagger_2 import Resource, swagger
from flask_restful.reqparse import RequestParser
from flask_restful.utils import cors
from flask import request

from . import rpc
from .src.auth import auth
from .src.response import *
from .src.request import ModelRequestParser
from .src.cors import CORS
from .src.parameters import process_body, process_id, qname
from .src.models import process_description


class ProcessApi(Resource):
    __res_parser = ResponseParser()
    __req_parser = ModelRequestParser(process_description)
    __req_q_parser = ModelRequestParser([qname], location="args")

    @cors.crossdomain(
        origin=["*"],
        methods=["GET", "POST"],
        headers=["Authorization", "Content-Type"],
        credentials=True)
    @swagger.doc(CORS().__parse__())
    def options(self):
        return self.__res_parser.code(200)

    @cors.crossdomain(
        origin=["*"],
        methods=["GET", "POST"],
        headers=["Authorization", "Content-Type"],
        credentials=True)
    @auth(admin=True)
    @swagger.doc({
        "tags": ["Process Discovery"],
        "description": "Registers a new process.",
        "parameters": [process_body],
        "security": [{"Bearer": []}],
        "responses": {
            "200": OK("Details of the created process.").__parse__(),
            "400": BadRequest().__parse__(),
            "401": Unauthorized().__parse__(),
            "403": Forbidden().__parse__(),
            "500": InternalServerError().__parse__(),
            "501": NotImplemented().__parse__(),
            "503": ServiceUnavailable().__parse__()
        }
    })
    def post(self, user_id):
        try:
            args = self.__req_parser.parse_args()
            rpc_response = rpc.processes.create_process(user_id, args)
            if rpc_response["status"] == "error":
                raise self.__res_parser.map_exceptions(rpc_response, user_id)
            return self.__res_parser.data(200, rpc_response["data"])
        except Exception as exc:
            return self.__res_parser.error(exc)

    @cors.crossdomain(
        origin=["*"],
        methods=["GET", "POST"],
        headers=["Authorization", "Content-Type"],
        credentials=True)
    @auth()
    @swagger.doc({
        "tags": ["Process Discovery"],
        "description": "Returns processes supported by the back-end.",
        "parameters": [qname],
        "security": [{"Bearer": []}],
        "responses": {
            "200": OK("An array of EO processes including their unique identifiers and a description.").__parse__(),
            "401": Unauthorized().__parse__(),
            "403": Forbidden().__parse__(),
            "500": InternalServerError().__parse__(),
            "501": NotImplemented().__parse__(),
            "503": ServiceUnavailable().__parse__()
        }
    })
    def get(self, user_id):
        try:
            args = self.__req_q_parser.parse_args()
            rpc_response = rpc.processes.get_all_processes(args["qname"])
            if rpc_response["status"] == "error":
                raise self.__res_parser.map_exceptions(rpc_response, user_id)
            return self.__res_parser.data(200, rpc_response["data"])
        except Exception as exc:
            return self.__res_parser.error(exc)


class ProcessDetailApi(Resource):
    __res_parser = ResponseParser()

    @cors.crossdomain(
        origin=["*"],
        methods=["GET"],
        headers=["Authorization", "Content-Type"],
        credentials=True)
    @swagger.doc(CORS().__parse__([process_id]))
    def options(self):
        return self.__res_parser.code(200)

    @cors.crossdomain(
        origin=["*"],
        methods=["GET"],
        headers=["Authorization", "Content-Type"],
        credentials=True)
    @auth()
    @swagger.doc({
        "tags": ["Process Discovery"],
        "description": "Returns further information on a given EO process available at the back-end.",
        "parameters": [process_id],
        "security": [{"Bearer": []}],
        "responses": {
            "200": OK("JSON object with metadata of the EO process.").__parse__(),
            "401": Unauthorized().__parse__(),
            "403": Forbidden().__parse__(),
            "404": NotFound().__parse__(),
            "500": InternalServerError().__parse__(),
            "501": NotImplemented().__parse__(),
            "503": ServiceUnavailable().__parse__()
        }
    })
    def get(self, user_id, process_id):
        try:
            rpc_response = rpc.processes.get_process(process_id)
            if rpc_response["status"] == "error":
                raise self.__res_parser.map_exceptions(rpc_response, user_id)
            return self.__res_parser.data(200, rpc_response["data"])
        except Exception as exc:
            return self.__res_parser.error(exc)
| StarcoderdataPython |
5160041 | <gh_stars>0
import os

from casual.make.platform.platform_unix import CommonUNIX
from casual.make.platform.registry import RegisterPlatform


@RegisterPlatform("linux")
class Linux(CommonUNIX):

    def pre_make(self):
        path = os.path.dirname(os.path.realpath(__file__))
        print
        print '#'
        print '# Common stuff'
        print 'include ' + path + '/../common.mk'
        print
        print '# include static platform specific'
        print 'include ' + path + '/static.mk'
        print
| StarcoderdataPython |
9659234 | <gh_stars>10-100
"""
This module provides functions for generating dashboard tags. These functions are simple wrappers on top of the lxml `ElementMaker` API.
These functions can be combined with additional HTML tags to produce the required XML to define dashboards.
>>> import lxml.etree
>>> from argusclient.dashboardtags import E, DASHBOARD, CHART, TITLE, METRIC
>>> h1 = E.h1
>>> hr = E.hr
>>> dashboard = DASHBOARD(h1("Test Dashboard"), hr(), CHART(TITLE("hdara.test"), METRIC("-1d:-0d:test.scope:test.metric:sum", name="hdara.test.metric"), name="Chart"))
>>> print lxml.etree.tostring(dashboard, pretty_print=True)
<ag-dashboard>
<h1>Test Dashboard</h1>
<hr/>
<ag-chart name="Chart">
<ag-option name="title.text" value="hdara.test"/>
<ag-metric name="hdara.test.metric">-1d:-0d:test.scope:test.metric:sum</ag-metric>
</ag-chart>
</ag-dashboard>
>>> print lxml.etree.tostring(dashboard, method="html")
<ag-dashboard><h1>Test Dashboard</h1><hr/><ag-chart name="Chart"><ag-option name="title.text" value="hdara.test"/><ag-metric name="hdara.test.metric">-1d:-0d:test.scope:test.metric:sum</ag-metric></ag-chart></ag-dashboard>
Argus cant't handle auto-closed XML tags, so using "html" `method` is recommended.
"""
#
# Copyright (c) 2016, salesforce.com, inc.
# All rights reserved.
# Licensed under the BSD 3-Clause license.
# For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause
#
import lxml.builder
#: Use this to create additional XML/HTML tags, e.g., `E.h1` will create the `<h1>` tag
E = lxml.builder.ElementMaker()
_DASHBOARD = getattr(E, "ag-dashboard")
_DATE = getattr(E, "ag-date")
_TEXT = getattr(E, "ag-text")
_SUBMIT = getattr(E, "ag-submit")
_CHART = getattr(E, "ag-chart")
_OPTION = getattr(E, "ag-option")
_METRIC = getattr(E, "ag-metric")
_FLAGS = getattr(E, "ag-flags")
_TABULAR = getattr(E, "ag-table")
_STATUS_INDICATOR = getattr(E, "ag-status-indicator")
def DASHBOARD(*args, **kwargs):
    """ Generates an `ag-dashboard` tag. """
    return _DASHBOARD(*args, **kwargs)

def DATE(*args, **kwargs):
    """ Generates an `ag-date` tag. """
    return _DATE(*args, **kwargs)

def TEXT(*args, **kwargs):
    """ Generates an `ag-text` tag. """
    return _TEXT(*args, **kwargs)

def SUBMIT(*args, **kwargs):
    """ Generates an `ag-submit` tag. """
    return _SUBMIT(*args, **kwargs)

def CHART(*args, **kwargs):
    """ Generates an `ag-chart` tag. """
    return _CHART(*args, **kwargs)

def OPTION(*args, **kwargs):
    """ Generates an `ag-option` tag. """
    return _OPTION(*args, **kwargs)

def METRIC(*args, **kwargs):
    """ Generates an `ag-metric` tag. """
    return _METRIC(*args, **kwargs)

def FLAGS(*args, **kwargs):
    """ Generates an `ag-flags` tag. """
    return _FLAGS(*args, **kwargs)

def TABULAR(*args, **kwargs):
    """ Generates an `ag-table` tag. """
    return _TABULAR(*args, **kwargs)

def START_DATE(name="start", label="Start Date", default="-1d"):
    """ Generates a `ag-date` tag with sensible defaults for `name`, `label` and `default` for specifying a start date. """
    return DATE(type="datetime", name=name, label=label, default=default)

def END_DATE(name="end", label="End Date", default="-0d"):
    """ Generates a `ag-date` tag with sensible defaults for `name`, `label` and `default` for specifying an end date. """
    return DATE(type="datetime", name=name, label=label, default=default)

def TEXT_BOX(name, label=None, default=None):
    """ Generates a `ag-text` tag with sensible defaults for `type`, `name`, `label` and `default` for specifying a text field. """
    return TEXT(type="text", name=name, label=label or name.capitalize(), default=default or "")

def TITLE(title):
    """ Generates a `ag-option` tag with the specified `title`. """
    return OPTION(name="title.text", value=title)

def SUB_TITLE(subTitle):
    """ Generates a `ag-option` tag with the specified `subtitle`. """
    return OPTION(name="subtitle.text", value=subTitle)

def YMIN(value):
    """ Generates a `ag-option` tag with the specified yaxis.min value. """
    return OPTION(name="yaxis.min", value=value)

def YMAX(value):
    """ Generates a `ag-option` tag with the specified yaxis.max value. """
    return OPTION(name="yaxis.max", value=value)

def XMIN(value):
    """ Generates a `ag-option` tag with the specified xaxis.min value. """
    return OPTION(name="xaxis.min", value=value)

def XMAX(value):
    """ Generates a `ag-option` tag with the specified xaxis.max value. """
    return OPTION(name="xaxis.max", value=value)

def AREA_CHART(*args, **kwargs):
    """ Generates an `ag-chart` tag with `type='stackarea'`. """
    return _CHART(type='stackarea', *args, **kwargs)

def STATUS_INDICATOR(*args, **kwargs):
    """ Generates an `ag-status-indicator` tag with the passed in `name`, `hi`, `low` and METRIC attributes. """
    return _STATUS_INDICATOR(*args, **kwargs)
| StarcoderdataPython |
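The `ElementMaker` wrappers above build `ag-*` elements with lxml. The same element shape can be sketched with the standard library's `xml.etree.ElementTree`, avoiding the lxml dependency (illustrative only — note the docstring's caveat that Argus prefers the lxml "html" serialization without auto-closed tags, which ElementTree does not replicate):

```python
import xml.etree.ElementTree as ET


def option(name, value):
    """Build an <ag-option name=... value=...> element."""
    el = ET.Element("ag-option")
    el.set("name", name)
    el.set("value", value)
    return el


chart = ET.Element("ag-chart", name="Chart")
chart.append(option("title.text", "hdara.test"))
xml = ET.tostring(chart, encoding="unicode")
print(xml)
```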
"""Transform Gas Demands.

From:
    ID,DT,Usage
    1565,33501,0

To:
    id    datetime             demand
    1565  2009-12-02 00:30:00  0
"""
from pathlib import Path

import dask.dataframe as dd
from dask.diagnostics import ProgressBar


def _read_raw_txt_files(dirpath: Path) -> dd.DataFrame:
    # annotation fixed: the caller passes a Path and a single dask DataFrame is returned
    filepaths = list(dirpath.glob("GasDataWeek*"))
    return dd.read_csv(
        filepaths,
        sep=",",
        header=0,
        dtype={"ID": "int16", "DT": "string", "Usage": "float32"},
        engine="c",
    )


def _slice_timeid_column(ddf: dd.DataFrame) -> dd.DataFrame:
    ddf["day"] = ddf["DT"].str.slice(0, 3).astype("int16")
    ddf["halfhourly_id"] = ddf["DT"].str.slice(3, 5).astype("int8")
    return ddf.drop(columns=["DT"])


def _convert_dayid_to_datetime(ddf: dd.DataFrame) -> dd.DataFrame:
    ddf["datetime"] = (
        dd.to_datetime(
            ddf["day"],
            origin="01/01/2009",
            unit="D",
        )
        + dd.to_timedelta(ddf["halfhourly_id"] / 2, unit="h")
    )
    return ddf.drop(columns=["day", "halfhourly_id"])


def clean_gas_demands(input_dirpath, output_dirpath="gas_demands"):
    demand_raw = _read_raw_txt_files(
        Path(input_dirpath) / "CER Gas Revised October 2012" / "CER_Gas_Data"
    )
    demand_with_times = _slice_timeid_column(demand_raw)
    demand_with_datetimes = _convert_dayid_to_datetime(demand_with_times)
    print("Cleaning Gas Demands...")
    with ProgressBar():
        demand_with_datetimes.to_parquet(output_dirpath)
| StarcoderdataPython |
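The `DT` code above decodes as a three-digit day index from the 2009-01-01 origin plus a two-digit half-hour slot, matching the docstring example (`1565,33501` → 2009-12-02 00:30:00). A stdlib sketch of the same decoding for a single code (the `decode_dt` helper is illustrative, not part of the dask pipeline):

```python
from datetime import datetime, timedelta


def decode_dt(dt_code, origin=datetime(2009, 1, 1)):
    """'33501' -> day 335 after the origin, half-hour slot 01 -> 00:30."""
    day = int(dt_code[:3])
    halfhour = int(dt_code[3:5])
    return origin + timedelta(days=day, hours=halfhour / 2)


print(decode_dt("33501"))  # matches the docstring's 2009-12-02 00:30:00
```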
6612965 | <gh_stars>10-100
#!/usr/bin/env python
# -*- coding: ASCII -*-
"""
:Author: <NAME>
:Contact: <EMAIL>
:Date: *27.03.2008
"""
import math
class EDict:
    def __init__(self):
        self._internal_dict = {}
        self._sorted_keys = []
        self._is_sync = True

    def __len__(self):
        return len(self._internal_dict.keys())

    def __getitem__(self, key):
        return self._internal_dict[key]

    def __repr__(self):
        """Return string representation of a EDict."""
        return repr(self._internal_dict)

    # __str__ is the same as __repr__
    __str__ = __repr__

    def clear(self):
        self._is_sync = True
        self._internal_dict = {}
        self._sorted_keys = []

    def set(self, key, value):
        if key == None:
            raise TypeError, "EDict does not allow None keys"
        if not(self._internal_dict.has_key(key)):
            self._is_sync = False
        self._internal_dict[key] = value
        return True

    def get(self, key):
        if not(self._internal_dict.has_key(key)):
            return None
        else:
            return self._internal_dict[key]

    def get_keys(self): return self._internal_dict.keys()
    def get_values(self): return self._internal_dict.values()
    def get_items(self): return self._internal_dict.items()
    def iterkeys(self): return self._internal_dict.iterkeys()
    def itervalues(self): return self._internal_dict.itervalues()
    def iteritems(self): return self._internal_dict.iteritems()
    def has_key(self, elem): return self._internal_dict.has_key(elem)

    def _make_sync(self):
        self._sorted_keys = self._internal_dict.keys()
        self._sorted_keys.sort()
        self._is_sync = True

    def get_smaller(self, key):
        if not(self._is_sync): self._make_sync()
        cur_len = len(self._sorted_keys)
        if cur_len > 0:
            if not(key <= self._sorted_keys[0]):
                cur_pos = -1
                forlast = -1
                new_pos = cur_len/2
                dist = max(int(round(cur_len/4.0)), 1)
                while (cur_pos != new_pos) and (new_pos != forlast):
                    forlast = cur_pos
                    cur_pos = new_pos
                    #print cur_pos, dist
                    if key > self._sorted_keys[cur_pos]:
                        new_pos = cur_pos+dist
                        if (new_pos >= cur_len): new_pos = cur_len-1
                    elif key < self._sorted_keys[cur_pos]:
                        new_pos = cur_pos-dist
                        if (new_pos < 0): new_pos = 0
                    else:
                        new_pos = cur_pos
                    dist = max(int(dist/2.0), 1)
                if (cur_pos+1 < cur_len) and (cur_pos > 0):
                    if (key == self._sorted_keys[cur_pos]):
                        return (self._sorted_keys[cur_pos-1], self._internal_dict[self._sorted_keys[cur_pos-1]])
                    elif ((key <= self._sorted_keys[cur_pos+1]) and (key > self._sorted_keys[cur_pos])):
                        return (self._sorted_keys[cur_pos], self._internal_dict[self._sorted_keys[cur_pos]])
                    elif ((key <= self._sorted_keys[cur_pos]) and (key > self._sorted_keys[cur_pos-1])):
                        return (self._sorted_keys[cur_pos-1], self._internal_dict[self._sorted_keys[cur_pos-1]])
                    else:
                        print "get_smaller: SHOULD NOT HAPPEN!", cur_pos, "max:", cur_len
                elif ((cur_pos == 0) and (key > self._sorted_keys[cur_pos])):
                    return (self._sorted_keys[cur_pos], self._internal_dict[self._sorted_keys[cur_pos]])
                elif ((cur_pos+1 == cur_len) and (key > self._sorted_keys[cur_pos])):
                    return (self._sorted_keys[cur_pos], self._internal_dict[self._sorted_keys[cur_pos]])
                elif ((cur_pos+1 == cur_len) and (key > self._sorted_keys[cur_pos-1])):
                    return (self._sorted_keys[cur_pos-1], self._internal_dict[self._sorted_keys[cur_pos-1]])
        return (None, None)

    def get_smaller_equal(self, key):
        if self.has_key(key):
            return key, self._internal_dict[key]
        else:
            return self.get_smaller(key)

    def get_larger(self, key):
        if not(self._is_sync): self._make_sync()
        cur_len = len(self._sorted_keys)
        if cur_len > 0:
            if not(key >= self._sorted_keys[-1]):
                cur_pos = -1
                forlast = -1
                new_pos = cur_len/2
                dist = max(int(round(cur_len/4.0)), 1)
                while (cur_pos != new_pos) and (new_pos != forlast):
                    forlast = cur_pos
                    cur_pos = new_pos
                    #print cur_pos, dist
                    if key > self._sorted_keys[cur_pos]:
                        new_pos = cur_pos+dist
                        if (new_pos >= cur_len): new_pos = cur_len-1
                    elif key < self._sorted_keys[cur_pos]:
                        new_pos = cur_pos-dist
                        if (new_pos < 0): new_pos = 0
                    else:
                        new_pos = cur_pos
                    dist = max(int(dist/2.0), 1)
                if (cur_pos+1 < cur_len) and (cur_pos > 0):
                    if (key == self._sorted_keys[cur_pos]):
                        return (self._sorted_keys[cur_pos+1], self._internal_dict[self._sorted_keys[cur_pos+1]])
                    elif ((key < self._sorted_keys[cur_pos+1]) and (key >= self._sorted_keys[cur_pos])):
                        return (self._sorted_keys[cur_pos+1], self._internal_dict[self._sorted_keys[cur_pos+1]])
                    elif ((key < self._sorted_keys[cur_pos]) and (key >= self._sorted_keys[cur_pos-1])):
                        return (self._sorted_keys[cur_pos], self._internal_dict[self._sorted_keys[cur_pos]])
                    else:
                        print "get_larger: SHOULD NOT HAPPEN!", cur_pos, "max:", cur_len
                elif ((cur_pos == 0) and (key < self._sorted_keys[cur_pos])):
                    return (self._sorted_keys[cur_pos], self._internal_dict[self._sorted_keys[cur_pos]])
                elif ((cur_pos == 0) and (key < self._sorted_keys[cur_pos+1])):
                    return (self._sorted_keys[cur_pos+1], self._internal_dict[self._sorted_keys[cur_pos+1]])
                elif ((cur_pos+1 == cur_len) and (key < self._sorted_keys[cur_pos])):
                    return (self._sorted_keys[cur_pos], self._internal_dict[self._sorted_keys[cur_pos]])
        return (None, None)

    def get_larger_equal(self, key):
        if self.has_key(key):
            return key, self._internal_dict[key]
        else:
            return self.get_larger(key)
if __name__ == '__main__':
import os
convtbl = EDict()
convtbl.set(149.449379,"99")
convtbl.set(149.399739,"96")
convtbl.set(4.393534,"23")
convtbl.set(3,"15")
convtbl.set(0,"5")
convtbl.set(-1.933954,"0.007")
convtbl.set(-2.118821,"0.005")
convtbl.set(-2.388300,"0.003")
convtbl.set(-2.874310,"0.001")
convtbl.set(-226.479313,"0")
print 150,convtbl.get_larger_equal(150)
print 149.4,convtbl.get_larger_equal(149.4)
print 3,convtbl.get_larger_equal(3)
print -2.3,convtbl.get_larger_equal(-2.3)
print -2.4,convtbl.get_larger_equal(-2.4)
print -2.8,convtbl.get_larger_equal(-2.8)
print -2.9,convtbl.get_larger_equal(-2.9)
print -227,convtbl.get_larger_equal(-227)
print "###############"
convtbl = EDict()
convtbl.set(149.449379,"99")
convtbl.set(149.399739,"96")
convtbl.set(4.393534,"23")
convtbl.set(3,"15")
convtbl.set(-1.933954,"0.007")
convtbl.set(-2.018751,"0.006")
convtbl.set(-2.118821,"0.005")
convtbl.set(-2.239310,"0.004")
convtbl.set(-2.388300,"0.003")
convtbl.set(-2.580755,"0.002")
convtbl.set(-2.874310,"0.001")
convtbl.set(-226.479313,"0")
    print(150, convtbl.get_larger_equal(150))
    print(149.4, convtbl.get_larger_equal(149.4))
    print(3, convtbl.get_larger_equal(3))
    print(-2.3, convtbl.get_larger_equal(-2.3))
    print(-2.4, convtbl.get_larger_equal(-2.4))
    print(-2.6, convtbl.get_larger_equal(-2.6))
    print(-2.8, convtbl.get_larger_equal(-2.8))
    print(-2.9, convtbl.get_larger_equal(-2.9))
    print(-227, convtbl.get_larger_equal(-227))
    print("###############")
table = os.environ['CADD'] + "/whole_genome/conversion_tbl_cave/conversion_table_ext.tsv"
maxValue,minValue = None,None
convtbl = EDict()
if os.path.exists(table):
infile = open(table)
for line in infile:
fields = line.split()
if len(fields) == 2:
val = float(fields[1])
convtbl.set(val,fields[0])
                if maxValue is None or val > maxValue: maxValue = val
                if minValue is None or val < minValue: minValue = val
infile.close()
#convtbl.set(-220.0,"0")
    print(150, convtbl.get_larger_equal(150))
    print(149.4, convtbl.get_larger_equal(149.4))
    print(3, convtbl.get_larger_equal(3))
    print(-2.3, convtbl.get_larger_equal(-2.3))
    print(-2.4, convtbl.get_larger_equal(-2.4))
    print(-2.6, convtbl.get_larger_equal(-2.6))
    print(-2.8, convtbl.get_larger_equal(-2.8))
    print(-2.9, convtbl.get_larger_equal(-2.9))
    print(-227, convtbl.get_larger_equal(-227))
    print("###")
    print(len(convtbl))
    print("###")
    count = 0
    for key, value in sorted(convtbl.items()):
        print(key, value)
        count += 1
        if count > 10: break
# ****************************************************************************
# contains: rule.split_inside_bracket -> see ^method-description
#           rule.or_split -> split 'or'-separated pattern elements
# author: @Hakim-Beldjoudi (hbFree) NOV 2019
# ****************************************************************************
# comma split inside bracket params without taking params
# that are inside a param's bracket as to_split_elements
# ex: inside= "im_0[0, ab, ...], pp_1[a, b, c, ...], ..." -> ['im_0[0, ab, ...]', 'pp_1[a, b, c, ...]', ...]
# !would generate unwanted split if we do inside.split(',')
def split_inside_bracket(self, inside):
splitted = []
el = ''
inside_bracket = 0
for letter in inside:
if letter == '[':
inside_bracket = inside_bracket + 1
if letter == ']':
inside_bracket = inside_bracket - 1
if letter == ',' and inside_bracket == 0:
splitted.append(el)
el = ''
else:
el = el + letter
        if el:
            splitted.append(el)
        return splitted
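The depth-counting idea above can be checked in isolation with a standalone sketch (function name and sample input are illustrative):

```python
def bracket_aware_split(text):
    # Split on commas, but only at bracket depth 0, so commas inside
    # [...] stay attached to their element.
    parts, current, depth = [], "", 0
    for ch in text:
        if ch == "[":
            depth += 1
        elif ch == "]":
            depth -= 1
        if ch == "," and depth == 0:
            parts.append(current)
            current = ""
        else:
            current += ch
    if current:
        parts.append(current)
    return parts

print(bracket_aware_split("im_0[0, ab], pp_1[a, b]"))
# ['im_0[0, ab]', ' pp_1[a, b]']
```

A naive `text.split(',')` would cut inside the brackets, which is exactly the unwanted split the comment above warns about.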
    # example: |pp|or|v0| -> ['pp', 'v0']
def or_split(self, model_el):
accum = ''
reslt = []
i = 0
while i < len(model_el) - 1:
            if model_el[i:i + 2] == 'or':
                accum = accum.strip()
                reslt.append(accum)
                accum = ''
                i = i + 2  # advance past both characters of 'or'
            else:
                accum = accum + model_el[i]
                i = i + 1
        if i < len(model_el):
            accum = accum + model_el[i:]
        reslt.append(accum.strip())
return reslt
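As a standalone sketch, the 'or' scan looks like this; note the two-character advance after a match, so the 'r' is not carried into the next element (names are illustrative):

```python
def or_split(text):
    # Split text on the two-character token 'or', trimming whitespace
    # around each resulting element.
    parts, current, i = [], "", 0
    while i < len(text) - 1:
        if text[i:i + 2] == "or":
            parts.append(current.strip())
            current = ""
            i += 2  # skip both 'o' and 'r'
        else:
            current += text[i]
            i += 1
    if i < len(text):
        current += text[i]
    parts.append(current.strip())
    return parts

print(or_split("pp or v0"))  # ['pp', 'v0']
```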
# data/example/split_words_tags.py
__author__ = "<NAME>"
from pathlib import Path
import numpy as np
def senttodoc():
file_path = 'wnut17/'
for n in ['train','dev','test']:
words = []
tags = []
sents = []
sent = ''
sent_tags = ''
with Path(file_path + n).open() as f:
for line in f:
ls = line.split('\t')
word, tag = ls[0], ls[-1]
            if word == '\n' or word == '':
sents.append(sent)
tags.append(sent_tags)
sent = ''
sent_tags = ''
else:
if sent != '':
sent = sent + ' ' + word
sent_tags = sent_tags + ' ' + tag.rstrip()
else:
sent += word
sent_tags += tag.rstrip()
with Path(file_path+n+'.words').open('w') as f:
for sent in sents:
f.write(sent + '\n')
with Path(file_path + n + '.tags').open('w') as f:
for tag in tags:
f.write(tag + '\n')
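The grouping logic in `senttodoc` (a blank line ends a sentence, otherwise the word and tag are appended) can be sketched as a pure function over lines, which is easier to test than the file-reading loop (function name and sample data are illustrative):

```python
def group_sentences(lines):
    # Group tab-separated "word<TAB>tag" lines into space-joined sentence
    # and tag strings; a blank line closes the current sentence.
    sents, tags, words, wtags = [], [], [], []
    for line in lines:
        parts = line.rstrip("\n").split("\t")
        if parts[0] == "":
            if words:
                sents.append(" ".join(words))
                tags.append(" ".join(wtags))
                words, wtags = [], []
        else:
            words.append(parts[0])
            wtags.append(parts[-1])
    if words:  # flush a trailing sentence with no final blank line
        sents.append(" ".join(words))
        tags.append(" ".join(wtags))
    return sents, tags

s, t = group_sentences(["EU\tB-ORG\n", "rejects\tO\n", "\n", "Peter\tB-PER\n"])
print(s)  # ['EU rejects', 'Peter']
print(t)  # ['B-ORG O', 'B-PER']
```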
def wordtodoc():
file_path = '/home/saad/tf_ner/data/example/'
for n in ['train','dev','test']:
words = []
tags = []
with Path(file_path+n).open() as f:
for line in f:
ls = line.split(' ')
word, tag = ls[0],ls[-1]
words.append(word)
tags.append(tag.rstrip())
with Path(file_path+n+'.words').open('w') as f:
for word in words:
            word = word.strip()
            if word == '':
f.write('')
else:
f.write(word+'\n')
with Path(file_path + n + '.tags').open('w') as f:
for tag in tags:
if tag == '':
f.write('')
else:
f.write(tag + '\n')
if __name__ == '__main__':
# Load vocab file
senttodoc()
import enum
from datetime import datetime, date
from ..time import Frequency
_aggregation_lookup = {}
class Aggregation(enum.Enum):
"""
Supported aggregations in the API. Includes simple implementations of
the different aggregation types for learning purposes.
"""
#: Calculate the mean value
AVERAGE = (lambda items: sum(items) / len(items),)
#: Calculate the sum of all values
SUM = (sum,)
#: Find the minimum value
MIN = (min,)
#: Find the maximum value
MAX = (max,)
def __init__(self, func):
self._func = func
_aggregation_lookup[self.tag.lower()] = self
@property
def tag(self):
"""
Get the tag for this aggregation type.
:return: The aggregation tag (name)
:rtype: str
"""
return self.name
def aggregate(self, iterable):
"""
Perform an aggregation on any iterable of numbers (such as lists,
tuples, generators etc.).
:param iterable: Any iterable of numbers
:type iterable: iterable
:return: An aggregate
:rtype: float
"""
return self._func(iterable)
def __str__(self):
return self.name
def __repr__(self):
return self.name
@staticmethod
def is_valid_tag(tag):
"""
Check whether an aggregation tag exists or not.
:param tag: An aggregation tag
:type tag: str
:return: True if it exists, otherwise False
:rtype: bool
"""
return tag.lower() in _aggregation_lookup
@staticmethod
def by_tag(tag):
"""
Look up aggregation by tag.
:param tag: An aggregation tag
:type tag: str
:return: The aggregation for the given tag
:rtype: Aggregation
"""
return _aggregation_lookup[tag.lower()]
_filter_lookup = {}
class Filter(enum.Enum):
"""
Supported filters in the API.
Includes simple implementations of the filters for learning
purposes.
Note: The API automatically separates futures peak and offpeak by
looking at the selected frequency for aggregations:
    * For weekly, monthly, quarterly and yearly frequency, the futures
peak and offpeak are used.
* For daily or higher frequencies, the standard peak and offpeak
are used.
"""
#: All hours
BASE = (
lambda dt: True,
lambda dt: True
)
#: Peak hours
PEAK = (
lambda dt: 8 <= dt.hour <= 19,
lambda dt: dt.isoweekday() <= 5 and 8 <= dt.hour <= 19
)
#: Offpeak hours
OFFPEAK = (
        lambda dt: not Filter.PEAK.is_in_filter(dt),
        lambda dt: not Filter.PEAK.is_in_future_filter(dt),
)
#: Monday–Friday
WORKDAYS = (
lambda dt: dt.isoweekday() <= 5,
lambda dt: dt.isoweekday() <= 5
)
#: Saturday and Sunday
WEEKEND = (
lambda dt: dt.isoweekday() > 5,
lambda dt: dt.isoweekday() > 5
)
def __init__(self, filter_func, future_filter_func):
self._filter_func = filter_func
self._future_filter_func = future_filter_func
_filter_lookup[self.tag.lower()] = self
@property
def tag(self):
"""
The filter tag (name)
"""
return self.name
def get_filter_function(self, frequency):
"""
Given a frequency, return the appropriate filter function:
        * For weekly, monthly, quarterly and yearly frequency, the futures
peak and offpeak function is used.
* For daily or higher frequencies, the standard peak and offpeak
function is used.
:param frequency: The resulting frequency of an aggregation
:type frequency: Frequency
:return: A filter function
:rtype: function
"""
assert isinstance(frequency, Frequency), "Not a Frequency provided"
if frequency >= Frequency.P1D:
# For daily, hourly or higher frequencies
return self._filter_func
else:
# For futures resolutions (weekly, monthly, quarterly, yearly)
return self._future_filter_func
def is_in_filter(self, datetime_obj):
"""
Check whether or not a datetime object is in a filter.
:param datetime_obj: A date-time
:type datetime_obj: datetime
:return: True if the date-time object falls into filter, otherwise False
:rtype: bool
"""
assert isinstance(datetime_obj, datetime), "Not a datetime object"
return self._filter_func(datetime_obj)
def is_in_future_filter(self, datetime_obj):
"""
Check whether or not a datetime object is in a filter for
futures-contracts (weekly, monthly, quarterly, yearly).
:param datetime_obj: A date-time
:type datetime_obj: datetime
:return: True if the date-time object falls into filter, otherwise False
:rtype: bool
"""
assert isinstance(datetime_obj, (datetime, date)), (
"Not a datetime or date object"
)
return self._future_filter_func(datetime_obj)
def __str__(self):
return self.name
def __repr__(self):
return self.name
@staticmethod
def is_valid_tag(tag):
"""
Check whether a filter tag exists or not.
:param tag: A filter tag
:type tag: str
:return: True if it exists, otherwise False
:rtype: bool
"""
return tag.lower() in _filter_lookup
@staticmethod
def by_tag(tag):
"""
Look up a filter by tag.
:param tag: A filter tag
:type tag: str
:return: The filter for the given tag
:rtype: Filter
"""
return _filter_lookup[tag.lower()]
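The peak/offpeak predicates reduce to simple hour and weekday checks; pulled out as plain functions they behave like this (sample dates are illustrative; 2022-01-01 was a Saturday):

```python
from datetime import datetime

def peak(dt):
    # Standard peak: hours 08-19, any day (used for daily/hourly data).
    return 8 <= dt.hour <= 19

def future_peak(dt):
    # Futures peak: hours 08-19, Monday-Friday only.
    return dt.isoweekday() <= 5 and 8 <= dt.hour <= 19

print(peak(datetime(2022, 1, 1, 12)))         # True  (Saturday noon)
print(future_peak(datetime(2022, 1, 1, 12)))  # False (weekend excluded)
print(future_peak(datetime(2022, 1, 3, 12)))  # True  (Monday noon)
```

`get_filter_function` above picks between the two based on the aggregation frequency, which is why both variants exist on every member.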
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.autograd import Variable
import random, math, os, time
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from utils.adamw import AdamW
from utils.cyclic_scheduler import CyclicLRWithRestarts
from utils.early_stopping import EarlyStopping
from utils.prepare_WL import test_pm25_single_station
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
# set the random seeds for reproducibility
SEED = 1234
random.seed(SEED)
torch.manual_seed(SEED)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
########## Support
def init_weights(m):
for name, param in m.named_parameters():
if 'weight' in name:
nn.init.normal_(param.data, mean=0, std=0.01)
else:
nn.init.constant_(param.data, 0)
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
def numpy_to_tvar(x):
return Variable(torch.from_numpy(x).type(torch.FloatTensor).to(device))
def series_to_superviesed(x_timeseries,
y_timeseries,
n_memory_step,
n_forcast_step,
split=None):
'''
x_timeseries: input time series data, numpy array, (time_step, features)
y_timeseries: target time series data, numpy array, (time_step, features)
    n_memory_step: number of memory steps in supervised learning, int
    n_forcast_step: number of forecast steps in supervised learning, int
split: portion of data to be used as train set, float, e.g. 0.8
'''
assert len(x_timeseries.shape
) == 2, 'x_timeseries must be shape of (time_step, features)'
assert len(y_timeseries.shape
) == 2, 'y_timeseries must be shape of (time_step, features)'
input_step, input_feature = x_timeseries.shape
output_step, output_feature = y_timeseries.shape
assert input_step == output_step, 'number of time_step of x_timeseries and y_timeseries are not consistent!'
n_RNN_sample = input_step - n_forcast_step - n_memory_step + 1
RNN_x = np.zeros((n_RNN_sample, n_memory_step, input_feature))
RNN_y = np.zeros((n_RNN_sample, n_forcast_step, output_feature))
for n in range(n_RNN_sample):
RNN_x[n, :, :] = x_timeseries[n:n + n_memory_step, :]
RNN_y[n, :, :] = y_timeseries[n + n_memory_step:n + n_memory_step +
n_forcast_step, :]
if split != None:
assert (split <= 0.9) & (split >= 0.1), 'split not in reasonable range'
return RNN_x[:int(split * len(RNN_x))], RNN_y[:int(split * len(RNN_x))], \
RNN_x[int(split * len(RNN_x)) + 1:], RNN_y[int(split * len(RNN_x)) + 1:]
else:
return RNN_x, RNN_y, None, None
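A quick shape check for `series_to_superviesed`: with 10 time steps, a memory of 3 and a forecast horizon of 2, the number of supervised samples is 10 - 3 - 2 + 1 = 6. The windowing can be sketched standalone (a simplified mirror, no train/test split):

```python
import numpy as np

def to_supervised(x, y, n_memory, n_forecast):
    # Simplified mirror of series_to_superviesed's windowing (no split):
    # each sample pairs n_memory input steps with the next n_forecast targets.
    n = x.shape[0] - n_memory - n_forecast + 1
    rx = np.stack([x[i:i + n_memory] for i in range(n)])
    ry = np.stack([y[i + n_memory:i + n_memory + n_forecast] for i in range(n)])
    return rx, ry

rx, ry = to_supervised(np.zeros((10, 2)), np.zeros((10, 1)), 3, 2)
print(rx.shape, ry.shape)  # (6, 3, 2) (6, 2, 1)
```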
########### Model
class Encoder(nn.Module):
def __init__(self, input_dim, enc_hid_dim, dec_hid_dim, enc_layers,
dec_layers, dropout_p):
super(Encoder, self).__init__()
self.input_dim = input_dim
self.enc_hid_dim = enc_hid_dim
self.dec_hid_dim = dec_hid_dim
self.enc_layers = enc_layers
self.dec_layers = dec_layers
self.dropout_p = dropout_p
self.input_linear = nn.Linear(self.input_dim, self.enc_hid_dim)
self.lstm = nn.LSTM(input_size=self.enc_hid_dim,
hidden_size=self.enc_hid_dim,
num_layers=self.enc_layers,
bidirectional=True)
self.output_linear = nn.Linear(self.enc_hid_dim * 2, self.dec_hid_dim)
self.dropout = nn.Dropout(self.dropout_p)
def forward(self, input):
embedded = self.dropout(torch.tanh(self.input_linear(input)))
outputs, (hidden, cell) = self.lstm(embedded)
hidden = torch.tanh(
self.output_linear(
torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1)))
# for different number of decoder layers
hidden = hidden.repeat(self.dec_layers, 1, 1)
return outputs, (hidden, hidden)
class Global_Attention(nn.Module):
def __init__(self, enc_hid_dim, dec_hid_dim):
super(Global_Attention, self).__init__()
self.enc_hid_dim = enc_hid_dim
self.dec_hid_dim = dec_hid_dim
self.attn = nn.Linear(self.enc_hid_dim * 2 + self.dec_hid_dim,
self.dec_hid_dim)
self.v = nn.Parameter(torch.rand(self.dec_hid_dim))
def forward(self, hidden, encoder_outputs):
batch_size = encoder_outputs.shape[1]
src_len = encoder_outputs.shape[0]
# only pick up last layer hidden
hidden = torch.unbind(hidden, dim=0)[0]
hidden = hidden.unsqueeze(1).repeat(1, src_len, 1)
encoder_outputs = encoder_outputs.permute(1, 0, 2)
energy = torch.tanh(
self.attn(torch.cat((hidden, encoder_outputs), dim=2)))
energy = energy.permute(0, 2, 1)
v = self.v.repeat(batch_size, 1).unsqueeze(1)
attention = torch.bmm(v, energy).squeeze(1)
return F.softmax(attention, dim=1)
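The tensor shapes in `Global_Attention.forward` are the tricky part. The same arithmetic can be walked through in numpy, with the two learned layers (`self.attn` and `v`) replaced by simple stand-ins — this sketches shapes only, not learned behavior:

```python
import numpy as np

# hidden: (batch, dec_hid); encoder_outputs: (src_len, batch, 2*enc_hid)
batch, src_len, enc_hid, dec_hid = 4, 7, 25, 25
hidden = np.random.rand(batch, dec_hid)                 # last decoder layer
enc_out = np.random.rand(src_len, batch, 2 * enc_hid)   # bidirectional outputs

h = np.repeat(hidden[:, None, :], src_len, axis=1)      # (batch, src_len, dec_hid)
eo = enc_out.transpose(1, 0, 2)                         # (batch, src_len, 2*enc_hid)
energy = np.tanh(np.concatenate([h, eo], axis=2))       # stand-in for self.attn
scores = energy.sum(axis=2)                             # stand-in for v . energy
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax
print(attn.shape, np.allclose(attn.sum(axis=1), 1.0))   # (4, 7) True
```

The end result is one weight per source position per batch element, summing to 1, which is what `torch.bmm(a, encoder_outputs)` then uses to form the weighted context in the decoder.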
class Decoder(nn.Module):
def __init__(self, output_dim, enc_hid_dim, dec_hid_dim, dec_layers,
dropout_p, attention):
super(Decoder, self).__init__()
self.enc_hid_dim = enc_hid_dim
self.dec_hid_dim = dec_hid_dim
self.output_dim = output_dim
self.dec_layers = dec_layers
self.dropout_p = dropout_p
self.attention = attention
self.input_dec = nn.Linear(self.output_dim, self.dec_hid_dim)
self.lstm = nn.LSTM(input_size=self.enc_hid_dim * 2 + self.dec_hid_dim,
hidden_size=self.dec_hid_dim,
num_layers=self.dec_layers)
self.out = nn.Linear(
self.enc_hid_dim * 2 + self.dec_hid_dim + self.dec_hid_dim,
self.output_dim)
self.dropout = nn.Dropout(self.dropout_p)
def forward(self, input, hidden, cell, encoder_outputs):
input = input.unsqueeze(0)
input = torch.unsqueeze(input, 2)
embedded = self.dropout(torch.tanh(self.input_dec(input)))
a = self.attention(hidden, encoder_outputs)
a = a.unsqueeze(1)
encoder_outputs = encoder_outputs.permute(1, 0, 2)
weighted = torch.bmm(a, encoder_outputs)
weighted = weighted.permute(1, 0, 2)
lstm_input = torch.cat((embedded, weighted), dim=2)
output, (hidden, cell) = self.lstm(lstm_input, (hidden, cell))
input_dec = embedded.squeeze(0)
output = output.squeeze(0)
weighted = weighted.squeeze(0)
output = self.out(torch.cat((output, weighted, input_dec), dim=1))
return output.squeeze(1), (hidden, cell), a
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super(Seq2Seq, self).__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
def forward(self, src, trg, teacher_forcing_ratio=0.5):
batch_size = src.shape[1]
max_len = trg.shape[0]
outputs = torch.zeros(max_len, batch_size,
self.decoder.output_dim).to(self.device)
decoder_attn = torch.zeros(max_len, src.shape[0]).to(self.device)
encoder_outputs, (hidden, cell) = self.encoder(src)
        # only use the initial y value
output = src[-1, :, 0]
for t in range(0, max_len):
output, (hidden,
cell), attn_weight = self.decoder(output, hidden, cell,
encoder_outputs)
outputs[t] = output.unsqueeze(1)
teacher_force = random.random() < teacher_forcing_ratio
output = (trg[t].view(-1) if teacher_force else output)
return outputs
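The teacher-forcing step at the end of `Seq2Seq.forward` is just a per-step coin flip between the ground truth and the model's own prediction. In isolation (names are illustrative):

```python
import random

def next_decoder_input(prediction, target, ratio, rng):
    # With probability `ratio`, feed the ground truth back in (teacher
    # forcing); otherwise feed the model's previous prediction.
    return target if rng.random() < ratio else prediction

rng = random.Random(0)
inputs = [next_decoder_input("pred", "truth", 0.5, rng) for _ in range(1000)]
frac_truth = inputs.count("truth") / len(inputs)
print(0.4 < frac_truth < 0.6)
```

Passing `teacher_forcing_ratio=0` (as `evaluate_iteration` does below) disables forcing entirely, so evaluation always runs on the model's own predictions.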
def train(model, optimizer, criterion, X_train, y_train):
iter_per_epoch = int(np.ceil(X_train.shape[0] * 1. / BATCH_SIZE))
iter_losses = np.zeros(EPOCHS * iter_per_epoch)
n_iter = 0
perm_idx = np.random.permutation(X_train.shape[0])
# train for each batch
for t_i in range(0, X_train.shape[0], BATCH_SIZE):
batch_idx = perm_idx[t_i:(t_i + BATCH_SIZE)]
x_train_batch = np.take(X_train, batch_idx, axis=0)
y_train_batch = np.take(y_train, batch_idx, axis=0)
loss = train_iteration(model, optimizer, criterion, CLIP, WD,
x_train_batch, y_train_batch)
if t_i % 50 == 0:
print('batch_loss:{}'.format(loss))
iter_losses[t_i // BATCH_SIZE] = loss
n_iter += 1
return np.mean(iter_losses[range(0, iter_per_epoch)])
def train_iteration(model, optimizer, criterion, clip, wd, X_train, y_train):
model.train()
optimizer.zero_grad()
X_train = np.transpose(X_train, [1, 0, 2])
y_train = np.transpose(y_train, [1, 0, 2])
X_train_tensor = numpy_to_tvar(X_train)
y_train_tensor = numpy_to_tvar(y_train)
output = model(X_train_tensor, y_train_tensor)
output = output.view(-1)
y_train_tensor = y_train_tensor.view(-1)
loss = criterion(output, y_train_tensor)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
scheduler.batch_step()
return loss.item()
### evaluate
def evaluate(model, criterion, X_test, y_test, scaler_x, scaler_y):
epoch_loss = 0
iter_per_epoch = int(np.ceil(X_test.shape[0] * 1. / BATCH_SIZE))
iter_losses = np.zeros(EPOCHS * iter_per_epoch)
# other loss: MAE RMSLE
iter_multiloss = [
np.zeros(EPOCHS * iter_per_epoch),
np.zeros(EPOCHS * iter_per_epoch),
np.zeros(EPOCHS * iter_per_epoch)
]
perm_idx = np.random.permutation(X_test.shape[0])
n_iter = 0
with torch.no_grad():
for t_i in range(0, X_test.shape[0], BATCH_SIZE):
batch_idx = perm_idx[t_i:(t_i + BATCH_SIZE)]
x_test_batch = np.take(X_test, batch_idx, axis=0)
y_test_batch = np.take(y_test, batch_idx, axis=0)
loss, mae, rmsle, rmse = evaluate_iteration(
model, criterion, x_test_batch, y_test_batch, scaler_x,
scaler_y)
iter_losses[t_i // BATCH_SIZE] = loss
iter_multiloss[0][t_i // BATCH_SIZE] = mae
iter_multiloss[1][t_i // BATCH_SIZE] = rmsle
iter_multiloss[2][t_i // BATCH_SIZE] = rmse
# writer.add_scalars('Val_loss', {'val_loss': iter_losses[t_i // BATCH_SIZE]},
# n_iter)
n_iter += 1
return np.mean(iter_losses[range(0, iter_per_epoch)]), np.mean(
iter_multiloss[0][range(0, iter_per_epoch)]), np.mean(
iter_multiloss[1][range(0, iter_per_epoch)]), np.mean(
iter_multiloss[2][range(0, iter_per_epoch)])
def evaluate_iteration(model, criterion, x_test, y_test, scaler_x, scaler_y):
model.eval()
x_test = np.transpose(x_test, [1, 0, 2])
y_test = np.transpose(y_test, [1, 0, 2])
x_test_tensor = numpy_to_tvar(x_test)
y_test_tensor = numpy_to_tvar(y_test)
output = model(x_test_tensor, y_test_tensor, 0)
output = output.view(-1)
y_test_tensor = y_test_tensor.view(-1)
loss = criterion(output, y_test_tensor)
# metric
output_numpy = output.cpu().data.numpy()
y_test_numpy = y_test_tensor.cpu().data.numpy()
output_numpy = scaler_y.inverse_transform(output_numpy.reshape(-1, 1))
y_test_numpy = scaler_y.inverse_transform(y_test_numpy.reshape(-1, 1))
loss_mae = mean_absolute_error(y_test_numpy, output_numpy)
loss_RMSLE = RMSLE(y_test_numpy, output_numpy)
loss_RMSE = np.sqrt(mean_squared_error(y_test_numpy, output_numpy))
return loss.item(), loss_mae, loss_RMSLE, loss_RMSE
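`RMSLE` is called in `evaluate_iteration` but not defined in this excerpt; a common definition (an assumption here, not confirmed by the source) is the RMSE of `log1p`-transformed values:

```python
import numpy as np

def RMSLE(y_true, y_pred):
    # Root mean squared logarithmic error; log1p keeps zero values
    # well-defined (log1p(0) == 0).
    return np.sqrt(np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2))

print(RMSLE(np.array([1.0, 2.0]), np.array([1.0, 2.0])))  # 0.0
```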
if __name__ == "__main__":
INPUT_DIM = 1
OUTPUT_DIM = 1
ENC_HID_DIM = 25
DEC_HID_DIM = 25
ENC_DROPOUT = 0.1
DEC_DROPOUT = 0.1
ECN_Layers = 2
DEC_Layers = 2
LR = 0.001 # learning rate
WD = 0.1 # weight decay
CLIP = 1
EPOCHS = 1000
BATCH_SIZE = 100
(x_train, y_train, x_train_len,
x_train_before_len), (x_test, y_test, x_test_len, x_test_before_len), (
scaler_x, scaler_y) = test_pm25_single_station()
print('\nsize of x_train, y_train, x_test, y_test:')
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)
# time series to image
# Model
glob_attn = Global_Attention(ENC_HID_DIM, DEC_HID_DIM)
enc = Encoder(INPUT_DIM, ENC_HID_DIM, DEC_HID_DIM, ECN_Layers, DEC_Layers,
ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, ENC_HID_DIM, DEC_HID_DIM, DEC_Layers,
DEC_DROPOUT, glob_attn)
model = Seq2Seq(enc, dec, device).to(device)
model.apply(init_weights)
print(model)
print(f'The model has {count_parameters(model):,} trainable parameters')
optimizer = AdamW(model.parameters(), lr=1e-3, weight_decay=1e-5)
scheduler = CyclicLRWithRestarts(optimizer,
BATCH_SIZE,
68673,
restart_period=5,
t_mult=1.2,
policy="cosine")
criterion = nn.MSELoss()
# Early Stopping
# initialize the early_stopping object
# early stopping patience; how long to wait after last time validation loss improved.
patience = 10
early_stopping = EarlyStopping(patience=patience, verbose=True)
best_valid_loss = float('inf')
for epoch in range(EPOCHS):
train_epoch_losses = np.zeros(EPOCHS)
evaluate_epoch_losses = np.zeros(EPOCHS)
print('Epoch:', epoch)
scheduler.step()
start_time = time.time()
train_loss = train(model, optimizer, criterion, x_train, y_train)
valid_loss, _, _, _ = evaluate(model, criterion, x_test, y_test,
scaler_x, scaler_y)
end_time = time.time()
train_epoch_losses[epoch] = train_loss
evaluate_epoch_losses[epoch] = valid_loss
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
        # early_stopping needs the validation loss to check if it has decreased,
# and if it has, it will make a checkpoint of the current model
early_stopping(valid_loss, model)
if early_stopping.early_stop:
print("Early stopping")
break
print(f'Epoch: {epoch + 1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(
f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}'
)
print(
f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}'
)
# # prediction
#
# #
# model.load_state_dict(torch.load('checkpoint.pt',map_location='cpu'))
#
# test_loss, test_mae, test_rmsle, test_rmse = evaluate(model, criterion, x_test, y_test, scaler_x, scaler_y)
#
# # plt.show()
#
# print(f'| Test Loss: {test_loss:.4f} | Test PPL: {math.exp(test_loss):7.4f} |')
# print(f'| MAE: {test_mae:.4f} | Test PPL: {math.exp(test_mae):7.4f} |')
# print(f'| RMSLE: {test_rmsle:.4f} | Test PPL: {math.exp(test_rmsle):7.4f} |')
# print(f'| RMSE: {test_rmse:.4f} | Test PPL: {math.exp(test_rmse):7.4f} |')
# bytepl/tracardi
from pprint import pprint
from tracardi.domain.event_metadata import EventMetadata, EventTime
from tracardi.service.notation.dot_accessor import DotAccessor
from tracardi.domain.context import Context
from tracardi.domain.event import Event
from tracardi.domain.flow import Flow, FlowSchema
from tracardi.domain.profile import Profile
from tracardi.domain.session import Session, SessionMetadata
from tracardi.domain.resource import Resource
from tracardi.process_engine.tql.parser import Parser
from tracardi.process_engine.tql.transformer.expr_transformer import ExprTransformer
if __name__ == "__main__":
data = {
"n": 1,
"a": {
"b": 1,
"c": [1, 2, 3],
"d": {"aa": 1},
"e": "test",
'f': 1,
'g': True,
'h': None,
'i': "2021-01-10"
}
}
p = Parser(Parser.read('grammar/uql_expr.lark'), start='expr')
# t = p.parse("a.b=1 and (a.c == 2 or a.c == [1,2,3])")
# t = p.parse("datetime(a.i) between datetime(\"2020-01-01\") and datetime(\"2022-01-01\")")
# t = p.parse("a.d.aa between 2 and 1")
# t = p.parse("a.e == \"test\"")
# t = p.parse("a.b == a.f")
# t = p.parse("a.g == TRUE")
# t = p.parse("a.h == null")
# t = p.parse("profile@id == \"1\"")
# t = p.parse("<EMAIL> exists")
# t = p.parse("<EMAIL> == 1")
t = p.parse("payload@n == 1")
# pprint(t)
profile = Profile(id="1")
session = Session(id="2", metadata=SessionMetadata())
payload = data
resource = Resource(id="3", type="event")
context = Context()
event = Event(metadata=EventMetadata(time=EventTime()),
id="event-id", type="type", source=resource, context=context, profile=profile, session=session)
flow = Flow(id="flow-id", name="flow", wf_schema=FlowSchema(version="0.6.0"))
dot = DotAccessor(profile, session, payload, event, flow)
query = ExprTransformer(dot=dot).transform(t)
pprint(query)