| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
1,500
| 72,167,624
|
TF Agent taking the same action for all test states after training in Reinforcement Learning
|
<p>I am trying to create a custom PyEnvironment to make an agent learn the optimum hour to send a notification to users, based on the rewards received from clicks on the notifications sent in the previous 7 days.</p>
<p>After training is complete, the agent always picks the same hour as its action, even when different states are fed in.</p>
<p><strong>State structure</strong>: we consider the user's hour-response pairs for the last 7 days.</p>
<pre><code>Eg : 7,0,14,1,5,0,3,0,23,0,9,1,12,0 [Reward 0 at 7th hour, Reward 1 at 14th hour and so on]
</code></pre>
<p><strong>Hyperparameters:</strong></p>
<pre><code>duration = 1 # @param {type:"integer"}
discount = 0.99 # @param {type:"number"}
state_size = 7 # @param {type:"integer"}
num_episodes = 1 # @param {type:"integer"}
learning_rate = 1e-5 # @param {type:"number"}
num_eval_episodes = 100 # @param {type:"integer"}
replay_buffer_max_length = 100000 # @param {type:"integer"}
initial_collect_steps = 100 # @param {type:"integer"}
batch_size = 10 # @param {type:"integer"}
num_iterations = 1000 # @param {type:"integer"}
collect_steps_per_iteration = 1 # @param {type:"integer"}
log_interval = 500 # @param {type:"integer"}
eval_interval = 500 # @param {type:"integer"}
</code></pre>
<p><strong>Below is the code of NotifEnv class:</strong></p>
<pre><code>class NotifEnv(py_environment.PyEnvironment):

    def __init__(self, duration, discount, state_size):
        self._action_spec = array_spec.BoundedArraySpec(
            shape=(), dtype=np.int32, minimum=0, maximum=23, name='action')
        self._state_spec = array_spec.BoundedArraySpec(
            shape=(2*state_size, ), dtype=np.float32, minimum=0.0, maximum=1.0, name='state')
        self._discount_spec = array_spec.BoundedArraySpec(
            shape=(), dtype=np.float32, minimum=0.0, maximum=1.0, name='discount')
        self.step_count = 0
        self.duration = duration
        self.state_size = state_size
        self.discount = discount
        self._episode_ended = False
        self.state = np.array([0]*2*self.state_size, dtype=np.float32)

    def observation_spec(self):
        """Return state_spec."""
        return self._state_spec

    def action_spec(self):
        """Return action_spec."""
        return self._action_spec

    def _reset(self):
        """Return initial_time_step."""
        self.state = np.array([0]*2*self.state_size, dtype=np.float32)
        self._episode_ended = False
        self.step_count = 0
        return ts.restart(np.array(self.state, dtype=np.float32))

    def _step(self, action):
        """Apply action and return new time_step."""
        if action < 0 or action > 23:
            raise ValueError('`action` should be between 0 and 23.')
        self.step_count += 1
        curr_reward = get_reward(action)
        self.state[:-2] = self.state[2:]
        self.state[-2:] = [action+1, curr_reward]
        if self.step_count >= duration:
            self.episode_ended = True
            self.step_count = 0
            return ts.termination(np.array(self.state, dtype=np.float32), reward=curr_reward)
        else:
            return ts.transition(np.array(self.state, dtype=np.float32), discount=self.discount, reward=curr_reward)
</code></pre>
<p><strong>Reward function (a reward is only given if the hour of the day is < 12; otherwise the agent is penalised):</strong></p>
<pre><code>def get_reward(action):
    if action < 12:
        return 1
    else:
        return -0.3
</code></pre>
<p><strong>Model Architecture :</strong></p>
<pre><code>train_env = tf_py_environment.TFPyEnvironment(NotifEnv(duration=duration, discount=discount, state_size=state_size))
eval_env = tf_py_environment.TFPyEnvironment(NotifEnv(duration=duration, discount=discount, state_size=state_size))
def compute_avg_return(environment, policy, num_episodes=10):
    total_return = 0.0
    for _ in range(num_episodes):
        time_step = environment.reset()
        episode_return = 0.0
        while not time_step.is_last():
            action_step = policy.action(time_step)
            time_step = environment.step(action_step.action)
            episode_return += time_step.reward
        total_return += episode_return
    avg_return = total_return / num_episodes
    return avg_return.numpy()[0]
</code></pre>
<p><strong>Training :</strong></p>
<pre><code>fc_layer_params = (6,)
action_tensor_spec = tensor_spec.from_spec(env.action_spec())
num_actions = 24
# Define a helper function to create Dense layers configured with the right
# activation and kernel initializer.
def dense_layer(num_units):
    return tf.keras.layers.Dense(
        num_units,
        activation=tf.keras.activations.relu,
        kernel_initializer=tf.keras.initializers.VarianceScaling(
            scale=2.0, mode='fan_in', distribution='truncated_normal'))
# QNetwork consists of a sequence of Dense layers followed by a dense layer
# with `num_actions` units to generate one q_value per available action as
# its output.
dense_layers = [dense_layer(num_units) for num_units in fc_layer_params]
q_values_layer = tf.keras.layers.Dense(
num_actions,
activation=None,
kernel_initializer=tf.keras.initializers.RandomUniform(
minval=-0.03, maxval=0.03),
bias_initializer=tf.keras.initializers.Constant(-0.2))
q_net = sequential.Sequential(dense_layers + [q_values_layer])
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
train_step_counter = tf.Variable(0)
print("train_step_counter = ",train_step_counter)
agent = dqn_agent.DqnAgent(
train_env.time_step_spec(),
train_env.action_spec(),
q_network=q_net,
optimizer=optimizer,
td_errors_loss_fn=common.element_wise_squared_loss,
train_step_counter=train_step_counter)
agent.initialize()
</code></pre>
<p><strong>Initialising Replay Buffer :</strong></p>
<pre><code>replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
    data_spec=agent.collect_data_spec,
    batch_size=train_env.batch_size,
    max_length=20)  # replay_buffer_max_length

def collect_step(environment, policy, buffer):
    time_step = environment.current_time_step()
    print("time_step = "+str(time_step))
    action_step = policy.action(time_step)
    print("action_step = "+str(action_step))
    next_time_step = environment.step(action_step.action)
    print("next_time_step = "+str(next_time_step))
    traj = trajectory.from_transition(time_step, action_step, next_time_step)
    print("traj = "+str(traj))
    print("\n \n")
    # Add trajectory to the replay buffer
    buffer.add_batch(traj)

def collect_data(env, policy, buffer, steps):
    for _ in range(steps):
        print("NUM : "+str(_))
        collect_step(env, policy, buffer)

collect_data(train_env, random_policy, replay_buffer, initial_collect_steps)
</code></pre>
<p><strong>Training Run :</strong></p>
<pre><code>iterator = iter(dataset)
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = [avg_return]
print("log_interval = ",log_interval)
print("eval_interval = ",eval_interval)
print("returns = ",returns)
for _ in range(num_iterations):

    # Collect a few steps using collect_policy and save to the replay buffer.
    # collect_data(train_env, agent.collect_policy, replay_buffer, collect_steps_per_iteration)

    # Sample a batch of data from the buffer and update the agent's network.
    experience, unused_info = next(iterator)
    # print("x = "+str(x))
    train_loss = agent.train(experience).loss

    step = agent.train_step_counter.numpy()

    if step % log_interval == 0:
        print('step = {0}: loss = {1}'.format(step, train_loss))
        print(np.max(q_net.get_weights()[0]))

    if step % eval_interval == 0:
        avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
        print('step = {0}: Average Return = {1}'.format(step, avg_return))
        returns.append(avg_return)
</code></pre>
|
<p>I think that 1000 training steps could be one of the reasons the agent is stuck picking a single action. RL agents often need hundreds of thousands of iterations, and this number can be even higher depending on the task and on some neural network hyperparameters, like the learning rate.</p>
<p>I suggest you increase <code>num_iterations</code> by <strong>a lot</strong>, and then check whether something improves.</p>
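<p>For example (just a sketch; the exact value is an assumption and depends on your task and time budget):</p>
<pre><code>num_iterations = 100000  # instead of 1000; training will take correspondingly longer
</code></pre>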
|
tensorflow2.0|reinforcement-learning|dqn|tf-agent
| 0
|
1,501
| 50,518,569
|
How to backprop through a model that predicts the weights for another in Tensorflow
|
<p>I am currently trying to train a model (hypernetwork) that can predict the weights for another model (main network) such that the main network's cross-entropy loss decreases. However, when I use tf.assign to assign the new weights to the network, it does not allow backpropagation into the hypernetwork, thus rendering the system non-differentiable. I have tested whether my weights are properly updated, and they seem to be, since subtracting the initial weights from the updated ones gives a non-zero sum.</p>
<p>This is a minimal sample of what I am trying to achieve.</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.contrib.layers import softmax
def random_addition(variables):
    addition_update_ops = []
    for variable in variables:
        update = tf.assign(variable, variable+tf.random_normal(shape=variable.get_shape()))
        addition_update_ops.append(update)
    return addition_update_ops

def network_predicted_addition(variables, network_preds):
    addition_update_ops = []
    for idx, variable in enumerate(variables):
        if idx == 0:
            print(variable)
            update = tf.assign(variable, variable + network_preds[idx])
            addition_update_ops.append(update)
    return addition_update_ops

def dense_weight_update_net(inputs, reuse):
    with tf.variable_scope("weight_net", reuse=reuse):
        output = tf.layers.conv2d(inputs=inputs, kernel_size=(3, 3), filters=16, strides=(1, 1),
                                  activation=tf.nn.leaky_relu, name="conv_layer_0", padding="SAME")
        output = tf.reduce_mean(output, axis=[0, 1, 2])
        output = tf.reshape(output, shape=(1, output.get_shape()[0]))
        output = tf.layers.dense(output, units=(16*3*3*3))
        output = tf.reshape(output, shape=(3, 3, 3, 16))
    return output

def conv_net(inputs, reuse):
    with tf.variable_scope("conv_net", reuse=reuse):
        output = tf.layers.conv2d(inputs=inputs, kernel_size=(3, 3), filters=16, strides=(1, 1),
                                  activation=tf.nn.leaky_relu, name="conv_layer_0", padding="SAME")
        output = tf.reduce_mean(output, axis=[1, 2])
        output = tf.layers.dense(output, units=2)
        output = softmax(output)
    return output
input_x_0 = tf.zeros(shape=(32, 32, 32, 3))
target_y_0 = tf.zeros(shape=(32), dtype=tf.int32)
input_x_1 = tf.ones(shape=(32, 32, 32, 3))
target_y_1 = tf.ones(shape=(32), dtype=tf.int32)
input_x = tf.concat([input_x_0, input_x_1], axis=0)
target_y = tf.concat([target_y_0, target_y_1], axis=0)
output_0 = conv_net(inputs=input_x, reuse=False)
target_y = tf.one_hot(target_y, 2)
crossentropy_loss_0 = tf.losses.softmax_cross_entropy(onehot_labels=target_y, logits=output_0)
conv_net_parameters = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="conv_net")
weight_net_parameters = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="weight_net")
print(conv_net_parameters)
weight_updates = dense_weight_update_net(inputs=input_x, reuse=False)
#updates_0 = random_addition(conv_net_parameters)
updates_1 = network_predicted_addition(conv_net_parameters, network_preds=[weight_updates])
with tf.control_dependencies(updates_1):
    output_1 = conv_net(inputs=input_x, reuse=True)
    crossentropy_loss_1 = tf.losses.softmax_cross_entropy(onehot_labels=target_y, logits=output_1)
    check_sum = tf.reduce_sum(tf.abs(output_0 - output_1))

c_opt = tf.train.AdamOptimizer(beta1=0.9, learning_rate=0.001)
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)  # Needed for correct batch norm usage
with tf.control_dependencies(update_ops):  # Needed for correct batch norm usage
    train_variables = weight_net_parameters  # + conv_net_parameters
    c_error_opt_op = c_opt.minimize(crossentropy_loss_1,
                                    var_list=train_variables,
                                    colocate_gradients_with_ops=True)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    init = sess.run(init)
    loss_list_0 = []
    loss_list_1 = []
    for i in range(1000):
        _, checksum, crossentropy_0, crossentropy_1 = sess.run([c_error_opt_op, check_sum, crossentropy_loss_0,
                                                                crossentropy_loss_1])
        loss_list_0.append(crossentropy_0)
        loss_list_1.append(crossentropy_1)
        print(checksum, np.mean(loss_list_0), np.mean(loss_list_1))
</code></pre>
<p>Does anyone know how I can get tensorflow to compute the gradients for this? Thank you.</p>
|
<p>In this case your weights aren't variables, they are computed tensors based on the hypernetwork. All you really have is one network during training. If I understand you correctly you are then proposing to discard the hypernetwork and be able to use just the main network to perform predictions.</p>
<p>If this is the case then you can either save the weight values manually and reload them as constants, or you could use <code>tf.cond</code> and <code>tf.assign</code> to assign them as you are doing during training, but use <code>tf.cond</code> to choose to use the variable or the computed tensor depending on whether you're doing training or inference. </p>
<p>During training you will need to use the computed tensor from the hypernetwork in order to enable backprop.</p>
<hr>
<p>Example from the comments: <code>w</code> is the weight you'll use. You can assign a variable during training to keep track of it, but then use <code>tf.cond</code> to either use the variable (during inference) or the computed value from the hypernetwork (during training). In this example you need to pass in a boolean placeholder <code>is_training_placeholder</code> to indicate whether you're running training or inference.</p>
<pre><code>tf.assign(w_variable, w_from_hypernetwork)
w = tf.cond(is_training_placeholder, true_fn=lambda: w_from_hypernetwork, false_fn=lambda: w_variable)
</code></pre>
|
tensorflow|machine-learning|backpropagation|tensorflow-gradient
| 0
|
1,502
| 45,563,438
|
hybrid of max pooling and average pooling
|
<p>While tweaking a deep convolutional net using Keras (with the TensorFlow backend) I would like to try out a hybrid between <code>MaxPooling2D</code> and <code>AveragePooling2D</code>, because both strategies seem to improve two different aspects regarding my objective.</p>
<p>I'm thinking about something like this:</p>
<pre><code> -------
|8 | 1|
x = ---+---
|1 | 6|
-------
average_pooling(x) -> 4
max_pooling(x) -> 8
hybrid_pooling(x, alpha_max=0.0) -> 4
hybrid_pooling(x, alpha_max=0.25) -> 5
hybrid_pooling(x, alpha_max=0.5) -> 6
hybrid_pooling(x, alpha_max=0.75) -> 7
hybrid_pooling(x, alpha_max=1.0) -> 8
</code></pre>
<p>Or as an equation:</p>
<pre><code>hybrid_pooling(x, alpha_max) =
alpha_max * max_pooling(x) + (1 - alpha_max) * average_pooling(x)
</code></pre>
<p>Since it looks like such a thing is not provided off the shelf, how can it be implemented in an efficient way?</p>
|
<p>I now use a different solution for combining both pooling variations.</p>
<ul>
<li>give the tensor to both pooling functions</li>
<li>concatenate the results</li>
<li>use a small conv layer to learn how to combine</li>
</ul>
<p>This approach, of course, has a higher computational cost but is also more flexible.
The conv layer after the concatenation can learn to simply blend the two pooling results with an alpha, but it can also end up using different alphas for different features and of course - as conv layers do - combine the pooled features in a completely new way.</p>
<p>The code (Keras functional API) looks as follows:</p>
<pre><code>import numpy as np
from tensorflow.keras.layers import Input, MaxPooling2D, Conv2D
from tensorflow.keras.layers import Concatenate, AveragePooling2D
from tensorflow.keras.models import Model
# implementation of the described custom pooling layer
def hybrid_pool_layer(pool_size=(2,2)):
    def apply(x):
        return Conv2D(int(x.shape[-1]), (1, 1))(
            Concatenate()([
                MaxPooling2D(pool_size)(x),
                AveragePooling2D(pool_size)(x)]))
    return apply
# usage example
inputs = Input(shape=(256, 256, 3))
x = inputs
x = Conv2D(8, (3, 3))(x)
x = hybrid_pool_layer((2,2))(x)
model = Model(inputs=inputs, outputs=x)
model.compile(loss='categorical_crossentropy', optimizer='nadam')
</code></pre>
<p>Surely one could also leave out the <code>Conv2D</code> and just return the concatenation of the two poolings and let the next layer do the merging work. But the implementation above makes sure that the tensor resulting from this hybrid pooling has the shape one would also expect from a normal single pooling operation.</p>
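<p>For completeness, a minimal sketch of that concatenation-only variant (a hypothetical <code>concat_pool_layer</code> helper, reusing the imports from the snippet above; note the channel dimension doubles, which the following layer has to accept):</p>
<pre><code>def concat_pool_layer(pool_size=(2, 2)):
    def apply(x):
        # output has twice the channels of x, since both pooling results are stacked
        return Concatenate()([
            MaxPooling2D(pool_size)(x),
            AveragePooling2D(pool_size)(x)])
    return apply
</code></pre>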
|
tensorflow|keras|deep-learning|max-pooling|spatial-pooling
| 5
|
1,503
| 45,291,810
|
Finding a key value pair in a dictionary that contains a specific value in python
|
<p>In python 3 and python 2, is there a way to get the key value pair in a dictionary that contains a specific value? E.g. here is the dictionary:</p>
<pre><code>dict_a = {'key_1': [23, 'ab', 'cd'], 'key_2': [12, 'aa', 'hg']}
</code></pre>
<p>How do I get the key value pair where 'cd' is present in the value? I tried using <code>itervalues()</code>, but that does not seem to work.</p>
|
<p>You can use a simple dictionary comprehension to check if <code>cd</code> is in the value of each key, value pair:</p>
<pre><code>>>> dict_a = {'key_1': [23, 'ab', 'cd'], 'key_2': [12, 'aa', 'hg']}
>>> {k: v for k, v in dict_a.items() if 'cd' in v}
{'key_1': [23, 'ab', 'cd']}
</code></pre>
<p>This can be generalized by extracting the logic into a function:</p>
<pre><code>>>> def filter_dict(d, key):
    return {k: v for k, v in d.items() if key in v}
>>> dict_a = {'key_1': [23, 'ab', 'cd'], 'key_2': [12, 'aa', 'hg']}
>>> filter_dict(dict_a, 'cd')
{'key_1': [23, 'ab', 'cd']}
>>>
</code></pre>
|
python|pandas|dictionary
| 1
|
1,504
| 45,409,148
|
Python: Looping through Pandas DataFrame to match string in a list
|
<p>My question is regarding a Pandas DataFrame and a list of e-mail addresses. The simplified dataframe (called 'df') looks like this:</p>
<pre><code> Name Address Email
0 Bush Apple Street
1 Volt Orange Street
2 Smith Kiwi Street
</code></pre>
<p>The simplified list of e-mail addresses looks like this:</p>
<pre><code>list_of_emails = ['johnsmith@gmail.com', 'judyvolt@hotmail.com', 'bush@yahoo.com']
</code></pre>
<p>Is it possible to loop through the dataframe to check whether a last name is (part of) an e-mail address AND then add that e-mail address to the dataframe?
The following code does not work, unfortunately, because of line 2 I think:</p>
<pre><code>for index, row in df.iterrows():
if row['Name'] in x for x in list_of_emails:
df['Email'][index] = x
</code></pre>
<p>Your help is very much appreciated!</p>
|
<p>Generally you should consider using <code>iterrows</code> as a last resort only.</p>
<p>Consider this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Name': ['Smith', 'Volt', 'Bush']})
list_of_emails = ['johnsmith@gmail.com', 'judyvolt@hotmail.com', 'bush@yahoo.com']
def foo(name):
    for email in list_of_emails:
        if name.lower() in email:
            return email
df['Email'] = df['Name'].apply(foo)
print(df)
# Name Email
# 0 Smith johnsmith@gmail.com
# 1 Volt judyvolt@hotmail.com
# 2 Bush bush@yahoo.com
</code></pre>
|
python|python-3.x|pandas
| 4
|
1,505
| 62,530,749
|
type dependence of 'backward ' in pytorch
|
<pre class="lang-py prettyprint-override"><code>x = torch.ones(1, requires_grad=True)
print(x)
y = x + 2.
print(y)
y.backward()
print(x.grad)
</code></pre>
<p>Result:</p>
<pre class="lang-py prettyprint-override"><code>tensor([1.], requires_grad=True)
tensor([3.], grad_fn=<AddBackward0>)
tensor([1.])
</code></pre>
<p>Here there is no problem. However, if I change the type, I get</p>
<pre class="lang-py prettyprint-override"><code>x = torch.ones(1, requires_grad=True)
x = x.double()
print(x)
y = x + 2.
y = y.double()
print(y)
y.backward()
print(x.grad)
</code></pre>
<p>Result:</p>
<pre class="lang-py prettyprint-override"><code>tensor([1.], dtype=torch.float64, grad_fn=<CopyBackwards>)
tensor([3.], dtype=torch.float64, grad_fn=<AddBackward0>)
None
/usr/local/lib/python3.6/dist-packages/torch/tensor.py:746: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
warnings.warn("The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad "
</code></pre>
<p>=============================</p>
<p>What is the difference? Is there any restriction of 'type' on 'backward'?</p>
|
<p>You are looking at the wrong tensor ;)</p>
<p>By calling <code>x = x.double()</code> you create a new tensor, so when you call <code>x.grad</code> later on, you call it on the <code>double()</code> version of <code>x</code>, which isn't your original <code>x</code> anymore.</p>
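<p>A minimal sketch of two ways around it (either keep reading the gradient from the original leaf tensor, or explicitly ask the non-leaf tensor to retain its gradient):</p>
<pre class="lang-py prettyprint-override"><code>import torch

x = torch.ones(1, requires_grad=True)   # leaf tensor
x_d = x.double()                        # new, non-leaf tensor
y = x_d + 2.
y.backward()
print(x.grad)                           # tensor([1.]) -- gradients accumulate on the leaf

# or: keep the gradient on the converted tensor itself
x = torch.ones(1, requires_grad=True).double()
x.retain_grad()
y = x + 2.
y.backward()
print(x.grad)                           # tensor([1.], dtype=torch.float64)
</code></pre>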
|
pytorch
| 0
|
1,506
| 62,856,279
|
How to add values/ labels over each marker in lineplot in Python Seaborn?
|
<p>I have a dataframe consists of the range of time, stock and the counts of text as the columns . The dataframe looks like this</p>
<pre><code> Time Stock Text
0 00:00 - 01:00 BBNI 371
1 00:00 - 01:00 BBRI 675
2 00:00 - 01:00 BBTN 136
3 00:00 - 01:00 BMRI 860
4 01:00 - 02:00 BBNI 936
5 01:00 - 02:00 BBRI 1325
6 01:00 - 02:00 BBTN 316
7 01:00 - 02:00 BMRI 1630
</code></pre>
<p>I want to draw a lineplot with the codes below :</p>
<pre><code>df=tweetdatacom.groupby(["Time","Stock"])[["Text"]].count().reset_index()
plt.figure(figsize=(10,5))
line=sn.lineplot(x="Time", y="Text",hue="Stock",palette=["green","orange","red","blue"],marker="o",data=df)
plt.xticks(size=5,rotation=45, horizontalalignment='right',fontweight='light',fontsize='large')
plt.xlabel('Time Interval',size=12)
plt.ylabel('Total Tweets',size=12)
</code></pre>
<p>This is the result that I get with the codes :
<a href="https://i.stack.imgur.com/tQX5s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tQX5s.png" alt="enter image description here" /></a></p>
<p>Now, I want to put the value for each marker on the plot, how can I do that?</p>
<p>Thank you very much</p>
|
<p>Loop through the number of data with <code>ax.text</code>. I'm creating this only with the data presented to me, so I'm omitting some of your processing.</p>
<pre><code>import pandas as pd
import numpy as np
import io
data = '''
Time Stock Text
0 "00:00 - 01:00" BBNI 371
1 "00:00 - 01:00" BBRI 675
2 "00:00 - 01:00" BBTN 136
3 "00:00 - 01:00" BMRI 860
4 "01:00 - 02:00" BBNI 936
5 "01:00 - 02:00" BBRI 1325
6 "01:00 - 02:00" BBTN 316
7 "01:00 - 02:00" BMRI 1630
'''
df = pd.read_csv(io.StringIO(data), sep='\s+')
import seaborn as sn
import matplotlib.pyplot as plt
# df=tweetdatacom.groupby(["Time","Stock"])[["Text"]].count().reset_index()
plt.figure(figsize=(10,5))
palette = ["green","orange","red","blue"]
line=sn.lineplot(x="Time", y="Text",hue="Stock",palette=palette, marker="o", data=df)
plt.xticks(size=5,rotation=45, horizontalalignment='right',fontweight='light',fontsize='large')
plt.xlabel('Time Interval',size=12)
plt.ylabel('Total Tweets',size=12)
for item, color in zip(df.groupby('Stock'),palette):
for x,y,m in item[1][['Time','Text','Text']].values:
# print(x,y,m)
plt.text(x,y,m,color=color)
</code></pre>
<p><a href="https://i.stack.imgur.com/xfu9H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xfu9H.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib|seaborn|line-plot
| 2
|
1,507
| 54,553,032
|
Incomplete shape error when try to export inference in tensor flow object detection api
|
<p>I trained a custom object detector down to a loss of 2.x. When I try to export it, I am getting the following error.</p>
<p>I came across this : <a href="https://stackoverflow.com/questions/52002256/parsing-inputs-incomplete-shape-error-while-exporting-the-inference-graph-i">'Parsing Inputs... Incomplete shape' error while exporting the inference graph in Tensorflow</a></p>
<p>But I am sure the label is the same: I ran it multiple times.</p>
<pre><code>python3 export_inference_graph.py --input_type image_tensor --pipeline_config_path training/inception_v2.config --trained_checkpoint_prefix training/model.ckpt-688 --output_directory trained-inference-graphs/output_inference_graph_v1
</code></pre>
<p>I'm getting the error :</p>
<p>114 ops no flops stats due to incomplete shapes.
Parsing Inputs...
Incomplete shape.</p>
<p>For complete report please check : <a href="https://pastebin.com/mGSBDgJC" rel="nofollow noreferrer">https://pastebin.com/mGSBDgJC</a></p>
<p>Edit: the model does detect objects when evaluated with <code>python3 eval.py --logtostderr --pipeline_config_path=/home/ic/Documents/objectExtraction/workspace/training_demo/training/inception_v2.config --checkpoint_dir=/home/ic/Documents/objectExtraction/workspace/training_demo/training --eval_dir=/home/ic/Documents/objectExtraction/workspace/training_demo/eval</code></p>
|
<p>This happens because you didn't specify the input resolution with the <code>input_shape</code> argument. It simply means the exporter cannot compute how many FLOPs an operation will take, since it doesn't know what the input resolution will be. You can still use the exported graph for inference without problems.</p>
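<p>For example, the shape can be passed explicitly when exporting (a sketch based on the command from the question; the 300x300 resolution is an assumption, adjust it to your model, and -1 can be used for dimensions that should remain unknown):</p>
<pre><code>python3 export_inference_graph.py --input_type image_tensor \
    --pipeline_config_path training/inception_v2.config \
    --trained_checkpoint_prefix training/model.ckpt-688 \
    --output_directory trained-inference-graphs/output_inference_graph_v1 \
    --input_shape 1,300,300,3
</code></pre>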
|
python-3.x|tensorflow|object-detection|object-detection-api
| 1
|
1,508
| 54,426,748
|
Pandas multi index to datetime
|
<p>How do I convert the following multiindex to datetime object and set that as the new index?</p>
<pre><code>df_gauge_mean.index
Out[376]:
MultiIndex(levels=[[2018, 2019], [1, 12], [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]],
labels=[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]],
names=[u'Year', u'Month', u'Day'])
</code></pre>
<p>One way I can think of doing it would be to make new columns for the 3 index levels and then use pd.to_datetime(["Year", "Month", "Day"]). Is there a more direct way to do this? The same question is asked elsewhere in another way (<code>https://stackoverflow.com/questions/26505140/convert-pandas-multi-index-to-pandas-timestamp</code>), but no one has answered it yet.</p>
|
<p>Since you did not post your MultiIndex in a reproducible form, first we need to reconstruct it:</p>
<pre><code>s="MultiIndex(levels=[[2018, 2019], [1, 12], [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]],labels=[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]], names=[u'Year', u'Month', u'Day'])"
idx=eval(s, {}, {'MultiIndex': pd.MultiIndex})
pd.to_datetime(idx.to_frame())
Out[761]:
Year Month Day
2018 12 1 2018-12-01
2 2018-12-02
3 2018-12-03
4 2018-12-04
5 2018-12-05
6 2018-12-06
</code></pre>
|
python|pandas|multi-index
| 4
|
1,509
| 73,839,762
|
Error when using `numpy.random.normal()` with Numba
|
<p>I'm exploring Numba a bit to optimize some signal processing code. According to Numba's documentation, functions from the <code>numpy.random</code> module are well supported by the just-in-time compiler. However, when I run</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numba import jit
@jit(nopython=True)
def numba():
    noise = np.random.normal(size=100)
    # ...

if __name__ == "__main__":
    numba()
</code></pre>
<p>I get the following error:</p>
<pre><code>Traceback (most recent call last):
File ".../test.py", line 89, in <module>
numba()
File ".../venv/lib/python3.9/site-packages/numba/core/dispatcher.py", line 468, in _compile_for_args
error_rewrite(e, 'typing')
File ".../venv/lib/python3.9/site-packages/numba/core/dispatcher.py", line 409, in error_rewrite
raise e.with_traceback(None)
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<built-in method normal of numpy.random.mtrand.RandomState object at 0x104355740>) found for signature:
>>> normal(size=Literal[int](100000))
There are 4 candidate implementations:
- Of which 4 did not match due to:
Overload in function '_OverloadWrapper._build.<locals>.ol_generated': File: numba/core/overload_glue.py: Line 129.
With argument(s): '(size=int64)':
Rejected as the implementation raised a specific error:
TypingError: unsupported call signature
raised from .../venv/lib/python3.9/site-packages/numba/core/typing/templates.py:439
During: resolving callee type: Function(<built-in method normal of numpy.random.mtrand.RandomState object at 0x104355740>)
During: typing of call at .../test.py (65)
File "test.py", line 65:
def numba():
noise = np.random.normal(size=SIZE)
^
</code></pre>
<p>Am I doing something obviously silly?</p>
<p>Thanks a lot!</p>
|
<p>If you check the <a href="https://numba.pydata.org/numba-doc/dev/reference/numpysupported.html#distributions" rel="nofollow noreferrer">current state of the documentation</a>, the <code>size</code> argument is not supported yet.</p>
<p>Since Numba compiles this to machine code, the loop below is equivalent in terms of speed.</p>
<pre class="lang-py prettyprint-override"><code>@jit(nopython=True)
def numba():
    noise = np.empty(100, dtype=np.float64)
    for i in range(100):
        noise[i] = np.random.normal()
    return noise
</code></pre>
<p><strong>Edit:</strong> the numba version is actually twice as fast ... probably because it doesn't parse inputs.</p>
|
python|numpy|numba
| 1
|
1,510
| 73,756,529
|
How do I find the value of a row of a column with the value of a variable?
|
<p>Doing it this way and printing the value of <code>fila_x</code> prints the value of row 1 of a CSV file:</p>
<pre><code>fila = 1
vueltas = 0
fila_x = ejes_df.loc[ejes_df['Variable'].isin([1])]
</code></pre>
<p>But when I want to select the row through the value of a variable (<code>fila = 1</code>), it does not seem to work; when printing, only the name of the variable <code>fila_x</code> appears:</p>
<pre><code>fila = 1
vueltas = 0
fila_x = ejes_df.loc[ejes_df['Variable'].isin([fila])]
</code></pre>
<p>What could I do to make it work? Is there another way to do it in which the value of the variable can be used to indicate the row?
To read the csv file I am using pandas.</p>
|
<p>OK, I found the error. I think it was because, when changing the value of the row, I was only running that line of code instead of also running the line where the variable <code>fila</code> was given the value 1; that is why the function did not find the value.</p>
|
python|pandas|csv
| 0
|
1,511
| 73,810,774
|
Pandas: filtering rows based on condition
|
<p>Existing Dataframe :</p>
<pre><code>Id Date Status
A 26-01-2022 begin
A 26-01-2022 failed
A 27-01-2022 begin
A 27-01-2022 in-process
A 27-01-2022 success
B 01-02-2022 in-process
B 01-02-2022 success
B 02-02-2022 begin
B 02-02-2022 failed
</code></pre>
<p>Expected Dataframe :</p>
<pre><code>Id Date Status
A 27-01-2022 begin
A 27-01-2022 in-process
A 27-01-2022 success
B 01-02-2022 in-process
B 01-02-2022 success
</code></pre>
<p>I need to keep all the activity of an Id where the status for that Id on that particular date ends with <strong>success</strong>.</p>
<p>I tried grouping by Id and Date, but got stuck on dropping the rows that don't qualify.</p>
|
<p>Use <code>transform('last')</code>, which broadcasts the last <code>Status</code> of each <code>(Id, Date)</code> group to every row of the group, so the boolean mask keeps exactly the groups whose final status is <code>success</code>:</p>
<pre><code>df[df.groupby(['Id', 'Date']).Status.transform('last') == 'success']
</code></pre>
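<p>A quick check with the sample data from the question (a minimal sketch):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    'Id':     ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'],
    'Date':   ['26-01-2022', '26-01-2022', '27-01-2022', '27-01-2022', '27-01-2022',
               '01-02-2022', '01-02-2022', '02-02-2022', '02-02-2022'],
    'Status': ['begin', 'failed', 'begin', 'in-process', 'success',
               'in-process', 'success', 'begin', 'failed'],
})

out = df[df.groupby(['Id', 'Date']).Status.transform('last') == 'success']
print(out)
# keeps only the 27-01-2022 rows for A and the 01-02-2022 rows for B
</code></pre>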
|
python|pandas|dataframe
| 1
|
1,512
| 73,807,871
|
How to update a column value in a dataframe from another column, keeping the original value when the other contains NaN
|
<p>I have a dataframe like this:</p>
<pre><code>Id date sales sales_new
291 2022-03-01 10 15
292 2022-04-01 12 16
293 2022-05-01 9 0
294 2022-06-01 13 20
295 2022-07-01 10 nan
296 2022-08-01 12 nan
</code></pre>
<p>I would like to replace the values if they exist.</p>
<p>Desired outcome:</p>
<pre><code>Id date sales
291 2022-03-01 15
292 2022-04-01 16
293 2022-05-01 0
294 2022-06-01 20
295 2022-07-01 10
296 2022-08-01 12
</code></pre>
|
<p>You can use <code>update</code> to do that:</p>
<pre><code>df['sales'].update(df['sales_new'])
df = df.drop(columns='sales_new')
</code></pre>
<p><strong>Result</strong></p>
<pre><code> Id date sales
0 291 2022-03-01 15
1 292 2022-04-01 16
2 293 2022-05-01 0
3 294 2022-06-01 20
4 295 2022-07-01 10
5 296 2022-08-01 12
</code></pre>
|
python|pandas|dataframe
| 1
|
1,513
| 71,310,298
|
getting vertices and face as numpy array from a stl file using trimesh
|
<p>I have an STL file, and I now need to read the vertices and face values of that STL file using trimesh.</p>
<pre><code> myobj = trimesh.load_mesh("file.stl", enable_post_processing=True, solid=True)
myobj.faces #gives me ndarray of faces
</code></pre>
<p>How do I read the vertices from <code>myobj</code>?</p>
|
<pre><code>myobj.vertices
</code></pre>
<p><a href="https://trimsh.org/trimesh.base.html?#trimesh.base.Trimesh.vertices" rel="nofollow noreferrer">Here's the document</a></p>
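<p>Since the question asks for plain NumPy arrays: <code>myobj.vertices</code> and <code>myobj.faces</code> are already array-like, and (as a minimal sketch) they can be converted explicitly with <code>np.asarray</code>:</p>
<pre><code>import numpy as np
import trimesh

myobj = trimesh.load_mesh("file.stl", enable_post_processing=True, solid=True)
vertices = np.asarray(myobj.vertices)  # shape (n_vertices, 3), float coordinates
faces = np.asarray(myobj.faces)        # shape (n_faces, 3), integer vertex indices
</code></pre>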
|
python|trimesh|numpy-stl
| 0
|
1,514
| 52,242,527
|
Tensorflow Eager Execution on Colab
|
<p>I'm trying to enable eager execution with Tensorflow on a Colab notebook but I get an error message:</p>
<pre><code>import tensorflow as tf
import tensorflow.contrib.eager as tfe
tf.enable_eager_execution()
</code></pre>
<blockquote>
<p><em>ValueError: tf.enable_eager_execution must be called at program startup</em></p>
</blockquote>
<p>I tried clicking "reset runtime" in order to run tf.enable_eager_execution at startup, but the error is raised again.</p>
<p>Opening a different notebook didn't help.</p>
<p>What can I do in Colab to allow eager execution after this error is raised?</p>
<p>Thanks.</p>
|
<p>This was a bug on our side, fix is now live, and the original code snippet should be working again. See <a href="https://github.com/googlecolab/colabtools/issues/262" rel="noreferrer">Tensorflow eager mode dose not work when GPU is enabled in google colab · Issue #262 · googlecolab/colabtools</a>.</p>
<p>(NB: I'm on the Google Colaboratory team.)</p>
|
python|tensorflow|google-colaboratory|eager
| 7
|
1,515
| 60,614,574
|
TensorFlow sees a black screen as an object for 99%
|
<p>I have the problem that TensorFlow detects a black screen as an object.
I have trained it to recognise only two classes, but I want to see an N/A label if the input is neither of the classes.</p>
<p><a href="https://i.stack.imgur.com/Bg9om.png" rel="nofollow noreferrer">Result</a></p>
<p>Hope you guys have any idea how to fix this.</p>
|
<p>I don't know what code you're using, but I assume your classes are truth or lying from faces. Two suggestions:</p>
<ul>
<li><p>It may be that you have divided your image data by 255 twice at some point. </p></li>
<li><p>Make sure you include a third class which is "not a face". I assume you're passing your convolutional neural network model round each pixel of an image. It can't classify a table as lying or telling the truth, so make sure a "not considered" class is predicted for things that aren't faces.</p></li>
</ul>
|
tensorflow|artificial-intelligence
| 0
|
1,516
| 60,653,617
|
No module named 'uff'
|
<p><strong>Goal:</strong>
To convert a TensorFlow .pb model to TensorRT.</p>
<p><strong>System specs:</strong>
Ubuntu 18.04,
CUDA 10.0,
TensorRT 5.1</p>
<p>Even Installed <code>sudo apt-get install uff-converter-tf</code></p>
<p>Error obtained when trying to import <code>uff</code> in Python:</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'uff'</p>
</blockquote>
|
<p>Installing <code>uff-converter-tf</code> is not enough, you will need to install Python UFF wheel to be able to use it. <a href="https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html" rel="noreferrer">Here</a> you will find complete installation guide.</p>
<p>If using Python 2.7:</p>
<pre><code>pip2 install tensorrt-*-cp27-none-linux_x86_64.whl
</code></pre>
<p>If using Python 3.x:</p>
<pre><code>pip3 install tensorrt-*-cp3x-none-linux_x86_64.whl
</code></pre>
<p>You will find these wheels with rest of TensorRT binaries that were resulted from unpacking <code>TensorRT-${version}.${os}.${arch}-gnu.${cuda}.${cudnn}.tar.gz</code>. </p>
<p>As always, it's best if installed in virtual environment. </p>
|
python|tensorflow|tensorrt|uff
| 5
|
1,517
| 72,763,382
|
How to find most similar string values in a dataframe?
|
<p>I am computing the similarity between a search sentence and the sentences of my documents using sentence embeddings, looping through all the documents' embedded sentences to find the best matches for the search string. I also want to display the document name in the output along with the similarity result, but I am not sure how to extract that information from the dataframe for the sentence that ends up in the output. I have tried the index method, but it does not show me the correct document name.</p>
<p>Please advise how I can get the document name in the output along with the sentence.</p>
<p>My data frame looks like this:</p>
<pre><code>Document name Document sentences in tokens
Doc 1 [Sentence 1, sentence 2, sentence 3]
Doc 2 [Sentence 1, sentence 2, sentence 3]
</code></pre>
<p>I have used the following code to find the top 10 matches with the search string.</p>
<pre><code>from itertools import chain
docs_sent_tokens=list(chain.from_iterable(main_df['Sentence_Tokenize_rules']))
docs_name=main_df['Document name']
results=[]
#set the threshold value to get the similarity result accordingly
threshold=0
#embedding all the documents and find the similarity between search text and all the tokenize sentences
for docs_sent_token in docs_sent_tokens:
    #To find the document name
    for index in main_df.index:
        doc_name= main_df['Document name'][index]
    sentence_embeddings = model.encode(docs_sent_token)
    sim_score1 = cosine_sim(search_sentence_embeddings, sentence_embeddings)
    if sim_score1 > threshold:
        results.append((
            docs_sent_token,
            sim_score1,
            doc_name
        ))
#printing the top 10 matching result in dataframe format
df=pd.DataFrame(results, columns=['Matching Sentence','Similarity Score','Docuemnt name'])
# sorting in descending order based on the similarity score
df.sort_values("Similarity Score", ascending = False, inplace = True)
#change the value of n to see more results
df.head(n=10)
</code></pre>
<p>Output should be like this:</p>
<pre><code>Matching sentence similarity score document name
Sentence 12 0.80 doc 1
sentence 15 0.69 doc 3
</code></pre>
|
<p>Here is an example of how you can do it using Python standard library <a href="https://docs.python.org/3/library/difflib.html" rel="nofollow noreferrer">difflib</a> module, which provides helpers for computing deltas.</p>
<p>Given the following toy dataframe and search sentence:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(
{
"document": ["doc 1", "doc 2"],
"sentences": [
["lore ipsum", "magna carta", "upside down"],
["tempus fugit", "memento mori", "lora ipsom"],
],
}
)
search_sentence = "lor ipsum"
</code></pre>
<p>Define a helper function to compare sentences similarity:</p>
<pre class="lang-py prettyprint-override"><code>def ratio(a, b):
return round(SequenceMatcher(None, a, b).ratio(), 2)
</code></pre>
<p>And then:</p>
<pre class="lang-py prettyprint-override"><code># Use Python instead of Pandas
df = df.to_dict(orient="list")
# Init empty dictionary
results = {"Matching sentence": [], "similarity score": [], "document name": []}
# Iterate to compare
for (doc, sentences) in zip(df["document"], df["sentences"]):
    for i, sentence in enumerate(sentences):
        results["Matching sentence"].append(f"Sentence {i+1}")
        results["similarity score"].append(ratio(search_sentence, sentence))
        results["document name"].append(doc)
</code></pre>
<p>Finally:</p>
<pre class="lang-py prettyprint-override"><code>new_df = (
pd.DataFrame(results)
.sort_values(by="similarity score", ascending=False)
.reset_index(drop=True)
)
print(new_df)
# Ouptut
Matching sentence similarity score document name
0 Sentence 1 0.95 doc 1
1 Sentence 3 0.84 doc 2
2 Sentence 2 0.29 doc 2
3 Sentence 3 0.20 doc 1
4 Sentence 1 0.19 doc 2
5 Sentence 2 0.10 doc 1
</code></pre>
|
pandas|search|nlp
| 0
|
1,518
| 72,721,266
|
AnnData: Adding unaligned observations annotations
|
<p>I have an AnnData (<code>adata</code>) and a pandas Dataframe (<code>df</code>). I would like left-join <code>adata.obs</code> and <code>df</code> on their index. How do I go about doing this? I tried using <code>pd.merge</code> and <code>adata.obs.join(df)</code> with no success. Both result in all the columns containing <code>NaN</code>. This shouldn't be the case since they share nearly all the same index values.</p>
|
<p>You are on the right track with merge - the following works as expected:</p>
<pre class="lang-py prettyprint-override"><code>adata.obs = adata.obs.merge(how='left',right=df, left_index=True, right_index=True)
</code></pre>
<p>Note: Mismatch of dtypes (e.g. <code>str</code> in <code>adata.obs.index</code> and <code>int</code> in <code>df.index</code>) is a common source of errors.</p>
|
python|pandas
| 0
|
1,519
| 72,721,237
|
EarlyStopping using epoch level metrics in Pytorch Lightning
|
<p>I have written a classifier in pytorch lightning, and I'd like to plot training/validation loss and accuracy against the epoch number, rather than the step number, which seems to be the default behaviour. I can do this using the following code (using tensorboard as the logger:</p>
<pre><code>def validation_epoch_end(self, outputs):
    val_loss = torch.stack([x['loss'] for x in outputs]).mean()
    val_acc = torch.stack([x['accuracy'] for x in outputs]).mean()
    self.logger.experiment.add_scalar(
        "Loss/Validation",
        val_loss,
        self.current_epoch)
    self.logger.experiment.add_scalar(
        "Accuracy/Validation",
        val_acc,
        self.current_epoch)
</code></pre>
<p>I'd now like to use the metric <code>Accuracy/Validation</code> in the <code>EarlyStopping</code> callback:</p>
<pre><code>trainer = pl.Trainer(
default_root_dir=root_dir,
callbacks=[EarlyStopping(monitor='Accuracy/Validation', mode='max', patience=5)],
max_epochs=100,
logger=tb_logger
)
</code></pre>
<p>This doesn't work, as it can't find <code>Accuracy/Validation</code>. The only way I've been able to work around this is to also call <code>self.log</code> in the validation step:</p>
<pre><code>def validation_step(self, batch, batch_idx):
    loss, acc = self.forward(batch, mode='val')
    self.log(
        "val_loss",
        loss,
        on_step=False,
        on_epoch=True,
    )
    self.log(
        "val_accuracy",
        acc,
        on_step=False,
        on_epoch=True,
    )
    return {'loss': loss, "accuracy": acc}
</code></pre>
<p>I can now use <code>val_accuracy</code> in the <code>EarlyStopping</code> callback, however this now duplicates the plot I want but with the step number on the x-axis. Is there any way around this?</p>
|
<p>If you want to log several values you should use <code>log_dict</code> instead of <code>log</code>. Like this:</p>
<pre class="lang-py prettyprint-override"><code>metrics = {
'val_loss': loss,
'val_accuracy': acc
}
self.log_dict(metrics, on_step=False, on_epoch=True)
</code></pre>
|
tensorboard|pytorch-lightning
| 0
|
1,520
| 59,724,691
|
Finding the longest sequence of dates in a dataframe
|
<p>I'd like to know how to find the longest unbroken sequence of dates (formatted as <code>2016-11-27</code>) in a <code>publish_date</code> column (dates are not the index, though I suppose they could be).</p>
<p>There are a number of stack overflow questions which are similar, but AFAICT all proposed answers return the <em>size</em> of the longest sequence, which is not what I'm after. </p>
<p>I want to know e.g. that the stretch from <code>2017-01-01</code> to <code>2017-06-01</code> had no missing dates and was the longest such streak. </p>
|
<p>Here is an example of how you can do this:</p>
<pre><code>import pandas as pd
import datetime
# initialize data
data = {'a': [1,2,3,4,5,6,7],
'date': ['2017-01-01', '2017-01-03', '2017-01-05', '2017-01-06', '2017-01-07', '2017-01-09', '2017-01-31']}
df = pd.DataFrame(data)
df['date'] = pd.to_datetime(df['date'])
# create mask that indicates sequential pair of days (except the first date)
df['mask'] = 1
df.loc[df['date'] - datetime.timedelta(days=1) == df['date'].shift(),'mask'] = 0
# convert mask to numbers - each sequence have its own number
df['mask'] = df['mask'].cumsum()
# find largest sequence number and get this sequence
res = df.loc[df['mask'] == df['mask'].value_counts().idxmax(), 'date']
# extract min and max dates if you need
min_date = res.min()
max_date = res.max()
# print result
print('min_date: {}'.format(min_date))
print('max_date: {}'.format(max_date))
print('result:')
print(res)
</code></pre>
<p>The result will be:</p>
<pre><code>min_date: 2017-01-05 00:00:00
max_date: 2017-01-07 00:00:00
result:
2 2017-01-05
3 2017-01-06
4 2017-01-07
</code></pre>
|
pandas|dataframe
| 3
|
1,521
| 59,640,574
|
how to vectorize the scatter-matmul operation
|
<p>I have many matrices <code>w1</code>, <code>w2</code>, <code>w3...wn</code> with shapes (<code>k*n1</code>, <code>k*n2</code>, <code>k*n3...k*nn</code>) and <code>x1</code>, <code>x2</code>, <code>x3...xn</code> with shapes (<code>n1*m</code>, <code>n2*m</code>, <code>n3*m...nn*m</code>).<br>
I want to get <code>w1@x1</code>, <code>w2@x2</code>, <code>w3@x3</code> ... respectively.</p>
<p>The resulting matrix is multiple <code>k*m</code> matrices and can be concatenated into a large matrix with shape <code>(k*n)*m</code>.</p>
<p>Multiplying them one by one will be slow. How can I vectorize this operation?</p>
<p>Note: The input can be a <code>k*(n1+n2+n3+...+nn)</code> matrix and a <code>(n1+n2+n3+...+nn)*m</code> matrix, and we may use a batch index to indicate those submatrices.</p>
<p>This operation is related to the scatter operations implemented in <a href="https://github.com/rusty1s/pytorch_scatter" rel="nofollow noreferrer"><code>pytorch_scatter</code></a>, so I refer it as "<code>scatter_matmul</code>".</p>
|
<p>You can vectorize your operation by creating a large block-diagonal matrix <code>W</code> of shape <code>n*k</code>x<code>(n1+..+nn)</code> where the <code>w_i</code> matrices are the blocks on the diagonal. Then you can vertically stack all <code>x</code> matrices into an <code>X</code> matrix of shape <code>(n1+..+nn)</code>x<code>m</code>. Multiplying the block diagonal <code>W</code> with the vertical stack of all <code>x</code> matrices, <code>X</code>: </p>
<pre><code>Y = W @ X
</code></pre>
<p>results with <code>Y</code> of shape <code>(k*n)</code>x<code>m</code> which is exactly the concatenated large matrix you are seeking.</p>
<p>If the shape of the block diagonal matrix <code>W</code> is too large to fit into memory, you may consider making <code>W</code> <a href="https://pytorch.org/docs/stable/sparse.html" rel="nofollow noreferrer">sparse</a> and compute the product using <a href="https://pytorch.org/docs/stable/sparse.html#torch.sparse.mm" rel="nofollow noreferrer"><code>torch.sparse.mm</code></a>.</p>
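<p>A minimal sketch of the dense variant (assuming a recent PyTorch that provides <code>torch.block_diag</code>; the sizes below are made-up, and for large problems the sparse route mentioned above is preferable):</p>
<pre><code>import torch

k, m = 4, 5
ns = [2, 3, 4]                                  # n1, n2, n3
ws = [torch.randn(k, n) for n in ns]            # each w_i is k x n_i
xs = [torch.randn(n, m) for n in ns]            # each x_i is n_i x m

W = torch.block_diag(*ws)                       # (k * len(ns)) x sum(ns)
X = torch.cat(xs, dim=0)                        # sum(ns) x m
Y = W @ X                                       # (k * len(ns)) x m

# same result as multiplying block by block and stacking
Y_ref = torch.cat([w @ x for w, x in zip(ws, xs)], dim=0)
assert torch.allclose(Y, Y_ref)
</code></pre>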
|
python|tensorflow|deep-learning|pytorch|vectorization
| 2
|
1,522
| 57,935,290
|
Selecting rows in a DataFrame based on Time of the day?
|
<p>I have a DataFrame with <code>Datetime</code> as my <code>index_col</code>. This data is for 14 days. I want to create two different DataFrames based on the time of day: data between 6 am and 18 pm of all days as one df and data between 18 pm and 6 am of all days in another df. How to extract?</p>
<p>Below is my Dataframe screenshot</p>
<p><a href="https://i.stack.imgur.com/qYncw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qYncw.png" alt="enter image description here"></a></p>
|
<p>try using </p>
<p>for df1</p>
<pre><code>df1=df.between_time('06:00', '18:00')
</code></pre>
<p>for df2</p>
<pre><code>df2=df.between_time('18:00', '06:00')
</code></pre>
<p>more about it <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.between_time.html" rel="nofollow noreferrer">here</a></p>
|
python|pandas
| 4
|
1,523
| 54,773,169
|
How to plot specific value counts?
|
<pre><code>Col 1 Col 2 Col 3
0 No 2901
Yes 639
1 No 1858
Yes 415
2 No 1366
Yes 252
3 No 1236
Yes 277
</code></pre>
<p>Say I have the following data. Essentially I'd like to create a two-line graph with Col 1 on x and the counts for Yes/No (Col 3) on y. I'm having trouble doing this with the data in this shape. Would it be easier to just create 2 separate dataframes of Yes and No and then plot?</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer">df.pivot</a> to make <code>Col 2</code> values column names, and then simply use plot:</p>
<pre><code>df.pivot(index="Col 1", columns="Col 2", values="Col 3").plot()
</code></pre>
<p>The pivot is needed since pandas draws a separate line per column.</p>
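<p>A minimal, self-contained sketch using the sample numbers from the question:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    'Col 1': [0, 0, 1, 1, 2, 2, 3, 3],
    'Col 2': ['No', 'Yes'] * 4,
    'Col 3': [2901, 639, 1858, 415, 1366, 252, 1236, 277],
})

# one line per unique value in "Col 2"
df.pivot(index='Col 1', columns='Col 2', values='Col 3').plot()
plt.show()
</code></pre>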
|
pandas|matplotlib
| 0
|
1,524
| 54,839,391
|
Select the first row of each group after 'groupby()' and 'value_counts() function
|
<p>I have a data set named <code>new_data_set</code> which looks like this:</p>
<p><a href="https://i.stack.imgur.com/n2o22.png" rel="nofollow noreferrer">Image</a></p>
<p>I want to find the genre that occurs the maximum number of times for each year.</p>
<p>So I did this: </p>
<pre><code>new_data_set.groupby('release_year')['genre'].apply(lambda x: x.value_counts())
</code></pre>
<p>And the result of it looks like this:<a href="https://i.stack.imgur.com/0Glkw.png" rel="nofollow noreferrer">result</a></p>
<p>Now I am in need to fetch the first row from each group to get the answer. So the result should look like this:</p>
<pre><code>1960 Drama
1961 Drama
.
.
</code></pre>
<p>How should I do this?</p>
|
<p>Add <code>index[0]</code> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a>:</p>
<pre><code>new_data_set = pd.DataFrame({
'release_year':[2004,2005,2004,2005,2005,2004],
'genre':list('aaabbb')
})
df = (new_data_set.groupby('release_year')['genre']
.apply(lambda x: x.value_counts().index[0])
.reset_index()
)
print (df)
release_year genre
0 2004 a
1 2005 b
</code></pre>
|
pandas|jupyter-notebook|data-science|data-analysis
| 4
|
1,525
| 54,779,775
|
Issues including Eigen for simple C++ TensorFlow Lite test program
|
<p>I compiled the library for the C++ API for TensorFlow Lite (r1.97) using the script <code>${TENSORFLOW_ROOT}/tensorflow/lite/tools/make/build_rpi_lib.sh</code> following the steps suggested at this official <a href="https://www.tensorflow.org/lite/rpi" rel="nofollow noreferrer">page</a> (Native Compiling, downloading the necessary libraries), where <code>${TENSORFLOW_ROOT}</code> is the root folder where I cloned the repository.</p>
<p>I am trying to compile this simple <code>test.cpp</code> program:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <memory>
#include "tensorflow/lite/interpreter.h"
int main(void)
{
std::unique_ptr<tflite::Interpreter> interpreter(new tflite::Interpreter);
}
</code></pre>
<p>using the command:</p>
<pre><code>gcc-6 test.cpp -I${TENSORFLOW_ROOT} -I${TENSORFLOW_ROOT}/tensorflow/contrib/makefile/downloads/eigen -I${TENSORFLOW_ROOT}/tensorflow/contrib/makefile/downloads/protobuf/src -I${TENSORFLOW_ROOT}/tensorflow/contrib/makefile/downloads -L${TENSORFLOW_ROOT}/tensorflow/lite/tools/make/gen/rpi_armv7l/lib -lstdc++ -ldl -ltensorflow-lite
</code></pre>
<p>The list of includes was suggested in the <a href="https://www.tensorflow.org/lite/tfmobile/linking_libs" rel="nofollow noreferrer">Integrating TensorFlow libraries</a> page (specifically from the section iOS). Compilation fails with the following error related to the inclusion of Eigen:</p>
<pre><code>${TENSORFLOW_ROOT}/third_party/eigen3/unsupported/Eigen/CXX11/Tensor:1:42: fatal error: unsupported/Eigen/CXX11/Tensor: No such file or directory
#include "unsupported/Eigen/CXX11/Tensor"
</code></pre>
<p>I found several links where an apparently similar problem is discussed (such as this <a href="https://stackoverflow.com/questions/46731174/unsupported-eigen-cxx11-tensor-no-such-file-or-directory-while-working-with-t?rq=1">one</a>), but the proposed solutions involve using references to the TensorFlow python package which is something that is not possible in my case (and it feels quite patchy - I am not considering using python for this project).</p>
<p>I also tried using a different include path to Eigen (e.g. <code>${TENSORFLOW_ROOT}/third_party/eigen3</code>):</p>
<pre><code>gcc-6 test.cpp -I${TENSORFLOW_ROOT} -I${TENSORFLOW_ROOT}/third_party/eigen3 -I${TENSORFLOW_ROOT}/tensorflow/contrib/makefile/downloads/protobuf/src -I${TENSORFLOW_ROOT}/tensorflow/contrib/makefile/downloads -L${TENSORFLOW_ROOT}/tensorflow/lite/tools/make/gen/rpi_armv7l/lib -lstdc++ -ldl -ltensorflow-lite
</code></pre>
<p>and also this causes Eigen related compilation errors of this sort:</p>
<pre><code>...
${TENSORFLOW_ROOT}/third_party/eigen3/unsupported/Eigen/CXX11/Tensor:1:42: error: #include nested too deeply
#include "unsupported/Eigen/CXX11/Tensor"
...
${TENSORFLOW_ROOT}/third_party/eigen3/Eigen/Core:1:22: error: #include nested too deeply
#include "Eigen/Core"
...
</code></pre>
<p>Any suggestions on how to solve this issue? What is the right set of include paths?</p>
|
<p>Turns out I was including the wrong folder. Instead of <code>${TENSORFLOW_ROOT}/tensorflow/contrib/makefile/downloads/eigen</code> or <code>${TENSORFLOW_ROOT}/third_party/eigen3</code>, the right one is <code>${TFLITE_ROOT}/tensorflow/lite/tools/make/downloads/eigen</code>.</p>
<p>I am still puzzled by the number of <code>eigen</code> folders inside the repository:</p>
<pre><code>find . -name "eigen*" -type d
./third_party/eigen3
./tensorflow/lite/tools/make/downloads/eigen
</code></pre>
|
c++|include-path|eigen3|tensorflow-lite
| 0
|
1,526
| 49,476,516
|
Why is dummy '0' row created after pd.read_csv()?
|
<p>Why is there an additional row named '0' after read_csv?</p>
<p>In the following code, I saved df1 to a csv file and read it back. However, there is an additional row named '0'. How can I avoid it?</p>
<pre><code>d1 = {'a' : 1, 'b' : 2, 'c' : 3}
df1=pd.Series(d1)
print('\ndf1:'); print(df1)
> df1:
> a 1
> b 2
> c 3
> dtype: int64
df1.to_csv("df1.csv")
df1=pd.read_csv("df1.csv", index_col=0, header=None)
print('\ndf1:'); print(df1)
> df1:
> 1
> 0 <<<<---- ????
> a 1
> b 2
> c 3
</code></pre>
|
<p>That is not a dummy row - that is the name of your index column.</p>
<p>You can check it by running:</p>
<pre><code>> df.index.name
0
</code></pre>
<p>You can change it by setting it too:</p>
<pre><code>> df.index.name = "my_index"
1
my_index
a 1
b 2
c 3
</code></pre>
|
python|pandas
| 1
|
1,527
| 49,512,706
|
Pandas: Average values in DataFrame column if string in adjacent column contains substring from another DataFrame
|
<p>I've been stuck on this for a while! I've got these two pandas dataframes:</p>
<pre><code>import pandas as pd
color_scores = pd.DataFrame({'score': [12.4, 9.8, 7.4, 2.6, 14.8],
'colors': ['blue, red, green', 'blue, purple, orange',
'blue, pink, yellow', 'purple, pink, orange',
'yellow, pink, green']})
color_avgs = pd.DataFrame({'colors': [
'blue',
'red',
'green',
'purple',
'orange',
'pink',
'yellow',
]})
</code></pre>
<p>What I'm trying to do is create a second column in <code>color_avgs</code> which would be the average of the values in <code>color_scores['score']</code> if the string in <code>color_scores['colors]</code> contains the substring/color from <code>color_avgs['colors']</code>.</p>
<p>I know how I can do this manually for each color (below). However, I don't know how to loop over all the colors listed in <code>color_avgs['colors']</code>, and add the result to a new column (<code>color_avgs['average']</code>).</p>
<pre><code>color_scores.loc[color_scores['colors'].str.contains('blue'), 'score'].mean()
9.866666666666667
</code></pre>
<p>Thanks in advance!</p>
|
<p>I think need:</p>
<pre><code>from collections import Counter
c1, c2 = Counter(), Counter()
for row in color_scores.itertuples():
    for i in row[1].split(', '):
        c1[i] += row[2]
        c2[i] += 1
s = pd.Series(c1).div(pd.Series(c2))
print (s)
blue 9.866667
green 13.600000
orange 6.200000
pink 8.266667
purple 6.200000
red 12.400000
yellow 11.100000
dtype: float64
color_avgs['new'] = color_avgs['colors'].map(s)
print (color_avgs)
colors new
0 blue 9.866667
1 red 12.400000
2 green 13.600000
3 purple 6.200000
4 orange 6.200000
5 pink 8.266667
6 yellow 11.100000
</code></pre>
<p><strong>Explanation</strong>:</p>
<ol>
<li>Loop with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.itertuples.html" rel="nofollow noreferrer"><code>itertuples</code></a> and, for each row, add the split <code>colors</code> values to two Counters - one summing the scores, one counting the occurrences</li>
<li>Create a Series from each Counter and divide them to get the <code>mean</code></li>
<li>Finally, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a> the result to a new column</li>
</ol>
<p>Pandas only solution:</p>
<pre><code>s = (color_scores.set_index('score')['colors']
.str.split(', ', expand=True)
.stack()
.reset_index(name='a')
.groupby('a')['score'].mean())
color_avgs['new'] = color_avgs['colors'].map(s)
print (color_avgs)
colors new
0 blue 9.866667
1 red 12.400000
2 green 13.600000
3 purple 6.200000
4 orange 6.200000
5 pink 8.266667
6 yellow 11.100000
</code></pre>
<p><strong>Explanation</strong>:</p>
<ol>
<li>First <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>split</code></a> the values into a <code>DataFrame</code></li>
<li>Reshape with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>stack</code></a></li>
<li>Aggregate the mean per group</li>
<li>Finally, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a> the result to a new column</li>
</ol>
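<p>On pandas 0.25+ the same idea can be written more compactly with <code>explode</code> (a sketch, not part of the solutions above):</p>
<pre><code>s = (color_scores.assign(colors=color_scores['colors'].str.split(', '))
                 .explode('colors')
                 .groupby('colors')['score'].mean())
color_avgs['new'] = color_avgs['colors'].map(s)
</code></pre>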
|
python|pandas|aggregate
| 1
|
1,528
| 73,503,682
|
how to save space training
|
<p>I have written an intent classification program. It is first trained with training data and then tested with test data. The training process takes a few seconds. What is the best way to save the result of that training, so that it does not have to be repeated with every call? Is it enough to save train_X and train_y, or does the model have to be saved somehow?</p>
<pre><code>import numpy as np
import pandas as pd
import os
import spacy
import csv
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
# read data from csv
def read_data(path):
with open(path, 'r') as csvfile:
readcsv = csv.reader(csvfile, delimiter=',')
labels = []
sentences = []
for row in readcsv:
label = row[0]
sentence = row[1]
labels.append(label)
sentences.append(sentence)
return sentences, labels
# Loading Test Data
sentences_test, labels_test = read_data('./a_test.csv')
# print out the first two rows
print(sentences_test[:2], '\n')
print(labels_test[:2])
# Loading Training Data
sentences_train, labels_train = read_data('./a_train.csv')
# Load the spacy model: nlp
nlp = spacy.load('en_core_web_lg')
embedding_dim = nlp.vocab.vectors_length
print(embedding_dim)
def encode_sentences(sentences):
# Calculate number of sentences
n_sentences = len(sentences)
print('Length :-', n_sentences)
X = np.zeros((n_sentences, embedding_dim))
# y = np.zeros((n_sentences, embedding_dim))
# Iterate over the sentences
for idx, sentence in enumerate(sentences):
# Pass each sentence to the nlp object to create a document
doc = nlp(sentence)
# Save the document's .vector attribute to the corresponding row in
# X
X[idx, :] = doc.vector
return X
train_X = encode_sentences(sentences_train)
test_X = encode_sentences(sentences_test)
# every label gets his own number
def label_encoding(labels):
# Calculate the length of labels
n_labels = len(labels)
print('Number of labels :-', n_labels)
le = LabelEncoder()
y = le.fit_transform(labels)
print(y[:100])
print('Length of y :- ', y.shape)
return y
train_y = label_encoding(labels_train)
test_y = label_encoding(labels_test)
df1 = pd.read_csv('./a_train.csv', delimiter=',')
df1.dataframeName = 'a_train.csv'
nRow, nCol = df1.shape
print(f'There are {nRow} rows and {nCol} columns')
df1.sample(10)
df1.describe()
# X_train and y_train was given.
def svc_training(X, y):
# Create a support vector classifier
clf = SVC(C=1)
# Fit the classifier using the training data
clf.fit(X, y)
return clf
model = svc_training(train_X, train_y)
print(model.predict(train_X))
# Validation Step
def svc_validation(model, X, y):
# Predict the labels of the test set
y_pred = model.predict(X)
# Count the number of correct predictions
n_correct = 0
for i in range(len(y)):
if y_pred[i] == y[i]:
n_correct += 1
print("Predicted {0} correctly out of {1} training examples".format(n_correct, len(y)))
#svc_validation(model, train_X, train_y)
#svc_validation(model, test_X, test_y)
</code></pre>
|
<p><code>spaCy</code> has methods to write any given model <a href="https://spacy.io/usage/saving-loading" rel="nofollow noreferrer">to and from disk</a>.</p>
<p>Use <code>model.to_disk(path)</code> to store the model on your hard drive, then <code>model.from_disk(path)</code> to retrieve it. Let me know if this answers your question.</p>
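<p>For the scikit-learn classifier itself (the <code>SVC</code> returned by <code>svc_training</code>), spaCy's methods do not apply; a common approach - my own suggestion, not part of the answer above, and the file name is arbitrary - is <code>joblib</code>:</p>
<pre><code>import joblib

joblib.dump(model, "intent_svc.joblib")     # after training

model = joblib.load("intent_svc.joblib")    # on the next run, instead of svc_training()
</code></pre>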
|
python|pandas|spacy
| 1
|
1,529
| 73,416,907
|
model.save: Tried to export a function which references 'untracked' resource even though the tensor should be tracked
|
<h1>The Problem</h1>
<p>I am getting this error from my code (see minimal repro code below):</p>
<pre><code>AssertionError: Tried to export a function which references 'untracked' resource Tensor("1310:0", shape=(), dtype=resource). TensorFlow objects (e.g. tf.Variable) captured by functions must be 'tracked' by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly. See the information below:
Function name = b'__inference_signature_wrapper_1328'
Captured Tensor = <ResourceHandle(name="Resource-3-at-0x231d96d6250", device="/job:localhost/replica:0/task:0/device:GPU:0", container="Anonymous", type="class tensorflow::Var", dtype and shapes : "[ DType enum: 1, Shape: [9,9,3,1] ]")>
Trackable referencing this tensor = <tf.Variable 'StylePredictionModelDummy/dummy_conv/kernel:0' shape=(9, 9, 3, 1) dtype=float32>
</code></pre>
<h1>Thoughts</h1>
<p>I do not understand why this particular tensor (<code>StylePredictionModelDummy/dummy_conv/kernel:0</code>) is not tracked. As you can see my model (<code>style_transfer_model</code>) is created with the Tensorflow functional API and the untracked tensor in question is instantiated as part of <code>StylePredictionModelDummy.__init__()</code> in line 92 and <em>assigned as a class property</em> to <code>self.feature_extractor</code> which is what the error recommends to do to track the value.</p>
<p>The outputs of <code>StylePredictionModelDummy</code> (line 93) are split up (lines 100-105) and routed to the <code>ConditionalInstanceNormalization</code> (line 71) layers as <code>scale</code> and <code>bias</code> inside of the <code>expand</code> layers (line 109). So it also can't be that the tensor is simply unused, because it <em>is</em> used.</p>
<h1>Minimal Reproduction Code</h1>
<p>It is not as minimal as it could be but I could not make it smaller without breaking it.
Here is a link to a google collab where the error can be reproduced: <a href="https://colab.research.google.com/drive/1HFq2k12IGO6bsnbd8WWvg3yci6g12BD0?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1HFq2k12IGO6bsnbd8WWvg3yci6g12BD0?usp=sharing</a></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import tensorflow as tf
image_shape = (None, 960 // 4, 1920 //4, 3)
class StylePredictionModelDummy(tf.keras.Model):
feature_extractor = None
def __init__(self, num_top_parameters, name="StylePredictionModelDummy"):
super().__init__(name=name)
self.feature_extractor = tf.keras.layers.Conv2D(1, 9, 5, padding='same', name="dummy_conv")
self.style_norm_predictor = tf.keras.layers.Dense(
num_top_parameters,
activation=tf.keras.activations.softmax,
name="style_norm_predictor")
def call(self, inputs, training=None, mask=None):
x = self.feature_extractor(inputs)
x = tf.keras.layers.GlobalAveragePooling2D(name="avg_pool")(x)
x = self.style_norm_predictor(x)
return x
class ConditionalInstanceNormalization(tf.keras.layers.Layer):
"""Instance Normalization Layer (https://arxiv.org/abs/1607.08022)."""
def __init__(self, num_feature_maps, epsilon=1e-5):
super().__init__(name="ConditionalInstanceNormalization")
self.epsilon = epsilon
self.num_feature_maps = num_feature_maps
def call(self, x, **kwargs):
inputs = x
x = inputs['inputs']
scale = inputs['scale']
bias = inputs['bias']
x = x * scale + bias
return x
def get_config(self):
return {
"epsilon": self.epsilon,
"num_feature_maps": self.num_feature_maps,
}
def expand(input_shape, filters, size, strides, name) -> tf.keras.Model:
name = f"expand_{name}"
content_inputs = tf.keras.layers.Input(shape=input_shape)
scale = tf.keras.layers.Input(shape=(1, 1, filters))
bias = tf.keras.layers.Input(shape=(1, 1, filters))
inputs = {
"inputs": content_inputs,
"scale": scale,
"bias": bias,
}
result = tf.keras.layers.Conv2DTranspose(
filters=filters, kernel_size=size, strides=strides, padding='same',
name=f"{name}_conv")(content_inputs)
instance_norm_params = {
"inputs": result,
"scale": scale,
"bias": bias,
}
result = ConditionalInstanceNormalization(filters)(instance_norm_params)
result = tf.keras.layers.ReLU()(result)
return tf.keras.Model(inputs, result, name=name)
def make_style_transfer_model(input_shape,
name="StyleTransferModel"):
decoder_layer_specs = [
{"filters": 64, "size": 3, "strides": 2},
{"filters": 32, "size": 3, "strides": 2},
]
inputs = {
'content': tf.keras.layers.Input(shape=input_shape['content'][1:]),
'style': tf.keras.layers.Input(shape=input_shape['style'][1:]),
}
content_input, style_input = (inputs['content'], inputs['style'])
num_style_parameters = sum(map(lambda spec: spec['filters'] * 2, decoder_layer_specs))
style_predictor = StylePredictionModelDummy(num_style_parameters)
style_params = style_predictor(style_input)
x = content_input
input_filters = input_shape['content'][-1]
style_norm_param_lower_bound = 0
for i, decoder_layer_spec in enumerate(decoder_layer_specs):
style_norm_scale_upper_bound = style_norm_param_lower_bound + decoder_layer_spec["filters"]
style_norm_offset_upper_bound = style_norm_scale_upper_bound + decoder_layer_spec["filters"]
scale = style_params[:, style_norm_param_lower_bound:style_norm_scale_upper_bound]
offset = style_params[:, style_norm_scale_upper_bound:style_norm_offset_upper_bound]
scale, offset = tf.expand_dims(scale, -2, name="expand_scale_0"), tf.expand_dims(offset, -2, name="expand_offset_0")
scale, offset = tf.expand_dims(scale, -2, name="expand_scale_1"), tf.expand_dims(offset, -2, name="expand_offset_1")
style_norm_param_lower_bound = style_norm_offset_upper_bound
expand_layer_input_shape = (input_shape['content'][1] * 2 ** i, input_shape['content'][2] * 2 ** i, input_filters)
expand_layer = expand(input_shape=expand_layer_input_shape, name=i,
filters=decoder_layer_spec["filters"],
size=decoder_layer_spec["size"],
strides=decoder_layer_spec["strides"])
x = expand_layer({
"inputs": x,
"scale": scale,
"bias": offset,
})
input_filters = decoder_layer_spec["filters"]
model = tf.keras.Model(inputs=inputs, outputs=x, name=name)
return model
input_shape = {'content': image_shape, 'style': image_shape}
output_shape = image_shape
style_transfer_model = make_style_transfer_model(input_shape)
element = {
'content': tf.convert_to_tensor(np.zeros((1, image_shape[1], image_shape[2], 3))),
'style': tf.convert_to_tensor(np.zeros((1, image_shape[1], image_shape[2], 3))),
}
# call once to build model
style_transfer_model(element)
style_transfer_model.save(filepath="%TEMP%/model", include_optimizer=False, save_format='tf')
</code></pre>
<h1>The question</h1>
<p>Why is my tensor not being referenced?</p>
|
<p>I managed to boil the minimum example down even more and found out what the problem was. Here is the new minimum repro code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import tensorflow as tf
class DummyModel(tf.keras.Model):
feature_extractor = None
def __init__(self, name="StylePredictionModelDummy"):
super().__init__(name=name)
self.feature_extractor = tf.keras.layers.Conv2D(1, 9, 5, padding='same', name="dummy_conv")
def call(self, inputs, training=None, mask=None):
x = self.feature_extractor(inputs)
return x
image_shape = (None, 960//4, 1920//4, 3)
model = DummyModel()
element = tf.convert_to_tensor(np.zeros((1, image_shape[1], image_shape[2], 3)))
# call once to build model
result = model(element)
model.save(filepath="%TEMP%/model", include_optimizer=False, save_format='tf')
</code></pre>
<p>The issue is the class member <code>feature_extractor = None</code> in line 5 of <code>DummyModel</code>, outside of <code>__init__</code>. I used a class-level (static) attribute instead of a normal instance attribute, and this seems to break TensorFlow's reference tracking. <strong>Removing that static member fixed it.</strong>
See the python docs on class objects for details: <a href="https://docs.python.org/3/tutorial/classes.html#class-objects" rel="nofollow noreferrer">https://docs.python.org/3/tutorial/classes.html#class-objects</a></p>
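<p>For reference, a sketch of the fixed minimal model - identical to the repro above, just without the class-level attribute:</p>
<pre class="lang-py prettyprint-override"><code>class DummyModel(tf.keras.Model):
    def __init__(self, name="StylePredictionModelDummy"):
        super().__init__(name=name)
        # assigning only inside __init__ lets Keras track the layer's variables
        self.feature_extractor = tf.keras.layers.Conv2D(1, 9, 5, padding='same', name="dummy_conv")

    def call(self, inputs, training=None, mask=None):
        return self.feature_extractor(inputs)
</code></pre>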
|
python|tensorflow|keras
| 1
|
1,530
| 60,231,072
|
Is there a way to convert list of string formatted dictionary to a dataframe in Python?
|
<p>I am practicing how to use beautifulsoup and currently in a pickle as I can't convert the results to a dataframe. Hope to get your help.</p>
<p>In this example, the page I want to scrape can be obtained using the following:</p>
<pre><code>from bs4 import BeautifulSoup
import requests
import pandas as pd
page = requests.get("https://store.moncler.com/en-ca/women/autumn-winter/view-all-outerwear?tp=72010&ds_rl=1243188&gclid=EAIaIQobChMIpfDj9bjP5wIVlJOzCh0-9ghJEAAYASAAEgLuSfD_BwE&gclsrc=aw.ds", verify = False)
soup = BeautifulSoup(page.content, 'html.parser')
</code></pre>
<p>I have managed to isolate to the product section using the following code</p>
<pre><code>test_class = []
for section_tag in soup.find_all('section', class_='search__products__shelf search__products__shelf--moncler'):
for test in section_tag.find_all('article'):
test_class.append(test.get('data-ytos-track-product-data'))
</code></pre>
<p>The result of this is a list of <strong>string-formatted</strong> dictionary which looks like the following:</p>
<blockquote>
<p>['{"product_position":0,"product_title":"TREPORT","product_brand":"MONCLER","product_category":"3074457345616676837/3074457345616676843","product_micro_category":"Outerwear","product_micro_category_id":"3074457345616676843","product_macro_category":"OUTERWEAR","product_macro_category_id":"3074457345616676837","product_color_id":"Dark
blue","product_color":"Dark
blue","product_price":0.0,"product_discountedPrice":2530.0,"product_price_tf":"0","product_discountedPrice_tf":"2126.05","product_id":"1890828705323513","product_variant_id":"1890828705323514","list":"searchresult","product_quantity":1,"product_coupon":"","product_cod8":null,"product_cod10":null,"product_legacy_macro_id":"1012","product_legacy_micro_id":"1446","product_is_in_stock":true,"is_rsi_product":false,"rsi_product_tracking_url":""}',
'{"product_position":1,"product_title":"RIMAC","product_brand":"MONCLER","product_category":"3074457345616676837/3074457345616676854","product_micro_category":"Bomber
Jacket","product_micro_category_id":"3074457345616676854","product_macro_category":"OUTERWEAR","product_macro_category_id":"3074457345616676837","product_color_id":"Dark
blue","product_color":"Dark
blue","product_price":0.0,"product_discountedPrice":2340.0,"product_price_tf":"0","product_discountedPrice_tf":"1966.39","product_id":"5549023491788128","product_variant_id":"5549023491788129","list":"searchresult","product_quantity":1,"product_coupon":"","product_cod8":null,"product_cod10":null,"product_legacy_macro_id":"1012","product_legacy_micro_id":"4715","product_is_in_stock":true,"is_rsi_product":false,"rsi_product_tracking_url":""}',</p>
</blockquote>
<p>My question is how to convert the result to a pandas dataframe from a list of <strong>string formatted</strong> dictionary like that?</p>
<p>I have tried to use the code below to start with</p>
<pre><code>import ast
ast.literal_eval(test_class[1])
</code></pre>
<p>but to no avail (it gives me below error code).</p>
<blockquote>
<p>ValueError: malformed node or string: <_ast.Name object at
0x000001985A976748></p>
</blockquote>
<p>The end result should store each key of the dictionary into columns in a Dataframe (ie. 'product_position','product_title','product_brand',etc)</p>
<p>Any help / guidance would be much appreciated.</p>
<p>Thanks.</p>
|
<p>Looks like the question really is about how to parse a string, not how to do something with pandas.</p>
<p>The list you have seems to contain plain valid JSON strings. You can convert them to Python dicts using <code>json.loads()</code> from the standard library. Of course, if some strings are malformed that's another story - you'll have to look up how to handle malformed JSON.</p>
<p>After getting a list of Python dicts, turning them into a DataFrame is trivial.</p>
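<p>A minimal sketch of that (assuming every string in <code>test_class</code> is valid JSON):</p>
<pre><code>import json
import pandas as pd

records = [json.loads(s) for s in test_class]
df = pd.DataFrame(records)   # one column per key: product_position, product_title, ...
</code></pre>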
|
python|pandas|list|dictionary|beautifulsoup
| 1
|
1,531
| 60,031,038
|
Searching for a word within a dataframe column
|
<p>I have the following datadrame:</p>
<pre><code> import pandas as pd
df = pd.DataFrame({'Id_email': [1, 2, 3, 4],
'Word': ['_ SENSOR 12', 'new_SEN041', 'engine', 'sens 12'],
'Date': ['2018-01-05', '2018-01-06', '2017-01-06', '2018-01-05']})
print(df)
</code></pre>
<p>I would like to scroll through the 'Word' column looking for derivatives of the word Sensor.</p>
<p>If I find it, I want to fill a new column 'Type' with Sensor_Type; if I don't find it, I want to fill the corresponding line with Other.</p>
<p>I tried to implement it as follows (this code is wrong):</p>
<pre><code> df['Type'] = 'Other'
for i in range(0, len(df)):
if(re.search('\\SEN\\b', df['Word'].iloc[i], re.IGNORECASE) or
re.search('\\sen\\b', df['Word'].iloc[i], re.IGNORECASE)):
df['Type'].iloc[i] == 'Sensor_Type'
else:
df['Type'].iloc[i] == 'Other'
</code></pre>
<p>My (wrong) output is as follows:</p>
<pre><code>Id_email Word Date_end Type
1 _ SENSOR 12 2018-01-05 Other
2 new_SEN041 2018-01-06 Other
3 engine 2017-01-06 Other
4 sens 12 2018-01-05 Other
</code></pre>
<p>But, I would like the output to be like this:</p>
<pre><code>Id_email Word Date_end Type
1 _ SENSOR 12 2018-01-05 Sensor_Type
2 new_SEN041 2018-01-06 Sensor_Type
3 engine 2017-01-06 Other
4 sens 12 2018-01-05 Sensor_Type
</code></pre>
|
<p>Use pandas <code>str.contains</code> with <code>case=False</code> - this matches both sen and SEN:</p>
<pre><code>import numpy as np

df.assign(Type = lambda x: np.where(x.Word.str.contains(r'SEN', case=False),
'Sensor_Type','Other'))
Id_email Word Date Type
0 1 _ SENSOR 12 2018-01-05 Sensor_Type
1 2 new_SEN041 2018-01-06 Sensor_Type
2 3 engine 2017-01-06 Other
3 4 sens 12 2018-01-05 Sensor_Type
</code></pre>
|
python|pandas|dataframe
| 3
|
1,532
| 65,459,399
|
Bounding Box regression using Keras transfer learning gives 0% accuracy. The output layer with Sigmoid activation only outputs 0 or 1
|
<p>I am trying to create an object localization model to detect license plate in an image of a car. I used VGG16 model and excluded the top layer to add my own dense layers, with the final layer having 4 nodes and sigmoid activation to get (xmin, ymin, xmax, ymax).</p>
<p>I used the functions provided by keras to read image, and resize it to (224, 244, 3), and also used preprocess_input() function to process the input. I also tried to manually process the image by resizing with padding to maintain proportion, and normalize the input by dividing by 255.</p>
<p>Nothing seems to work when I train. I get 0% train and test accuracy. Below is my code for this model.</p>
<pre><code>def get_custom(output_size, optimizer, loss):
vgg = VGG16(weights="imagenet", include_top=False, input_tensor=Input(shape=IMG_DIMS))
vgg.trainable = False
flatten = vgg.output
flatten = Flatten()(flatten)
bboxHead = Dense(128, activation="relu")(flatten)
bboxHead = Dense(32, activation="relu")(bboxHead)
bboxHead = Dense(output_size, activation="sigmoid")(bboxHead)
model = Model(inputs=vgg.input, outputs=bboxHead)
model.compile(loss=loss, optimizer=optimizer, metrics=['accuracy'])
return model
</code></pre>
<p>X and y were of shapes (616, 224, 224, 3) and (616, 4) respectively. I divided the coordinates by the length of the respective sides so each value in y is in range (0,1).</p>
<p>I'll link my python notebook below from github so you can see the full code. I am using google colab to train the model.
<a href="https://github.com/gauthamramesh3110/image_processing_scripts/blob/main/License_Plate_Detection.ipynb" rel="nofollow noreferrer">https://github.com/gauthamramesh3110/image_processing_scripts/blob/main/License_Plate_Detection.ipynb</a></p>
<p>Thanks in advance. I am really in need of help here.</p>
|
<p>If you're doing an object localization task then you shouldn't be using <code>'accuracy'</code> as your metric, because the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model#compile" rel="nofollow noreferrer">docs of compile()</a> say:</p>
<blockquote>
<p>When you pass the strings 'accuracy' or 'acc', we convert this to one
of tf.keras.metrics.BinaryAccuracy,
tf.keras.metrics.CategoricalAccuracy,
tf.keras.metrics.SparseCategoricalAccuracy based on the loss function
used and the model output shape</p>
</blockquote>
<p>You should use <a href="https://www.tensorflow.org/api_docs/python/tf/keras/metrics/MeanAbsoluteError" rel="nofollow noreferrer">tf.keras.metrics.MeanAbsoluteError</a>, <a href="https://keras.io/examples/vision/retinanet/#computing-pairwise-intersection-over-union-iou" rel="nofollow noreferrer">IoU</a> (Intersection over Union) or <a href="https://blog.paperspace.com/mean-average-precision/" rel="nofollow noreferrer">mAP</a> (Mean Average Precision) instead.</p>
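<p>A minimal sketch of that change inside <code>get_custom</code> (the loss and optimizer stay whatever you pass in):</p>
<pre><code>import tensorflow as tf

model.compile(loss=loss, optimizer=optimizer,
              metrics=[tf.keras.metrics.MeanAbsoluteError()])
</code></pre>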
|
tensorflow|machine-learning|keras|deep-learning|object-detection
| 0
|
1,533
| 65,234,067
|
How to reshape BatchDataset class tensor?
|
<p>I'm unable to reshape a tensor loaded from my own custom dataset. As shown below, <code>ds_train</code> has a batch size of 8 and I want to reshape it to <code>len(ds_train),128*128</code> so that I can feed the batches to my Keras autoencoder model. I'm new to TF and couldn't find solutions online, thus posting here.</p>
<pre><code>ds_train = tf.keras.preprocessing.image_dataset_from_directory(
directory=healthy_path,
labels="inferred",
label_mode=None,
color_mode="grayscale",
batch_size=8,
image_size=(128, 128),
shuffle=True,
seed=123,
validation_split=0.05,
subset="training",)
</code></pre>
<p>Similarly my model is based on TF2 Functional API as Follows:</p>
<pre><code>inputs = keras.Input(shape=(128*128))
norm = layers.experimental.preprocessing.Rescaling(1./255)(inputs)
encode = layers.Dense(14, activation='relu', name='encode')(norm)
coded = layers.Dense(3, activation='relu', name='coded')(encode)
decode = layers.Dense(14, activation='relu', name='decode')(coded)
decoded = layers.Dense(128*128, activation='sigmoid', name='decoded')(decode)
</code></pre>
<p>My attempt at reshaping</p>
<pre><code>ds_train = tf.reshape(ds_train, shape=[-1])
ds_validation = tf.reshape(ds_train, shape=[-1])
#AUTOTUNE = tf.data.experimental.AUTOTUNE
#ds_train = ds_train.cache().prefetch(buffer_size=AUTOTUNE)
#ds_validation = ds_validation.cache().prefetch(buffer_size=AUTOTUNE)
</code></pre>
<p>Error :</p>
<pre><code>ValueError: Attempt to convert a value (<BatchDataset shapes: (None, 128, 128, 1), types: tf.float32>) with an unsupported type (<class 'tensorflow.python.data.ops.dataset_ops.BatchDataset'>) to a Tensor.
</code></pre>
<p>Entire Error Callstack:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-17-764a960c83e5> in <module>
----> 1 ds_train = tf.reshape(ds_train, shape=[-1])
2 ds_validation = tf.reshape(ds_train, shape=[-1])
3 #AUTOTUNE = tf.data.experimental.AUTOTUNE
4 #ds_train = ds_train.cache().prefetch(buffer_size=AUTOTUNE)
5 #ds_validation = ds_validation.cache().prefetch(buffer_size=AUTOTUNE)
C:\Anaconda3\lib\site-packages\tensorflow\python\util\dispatch.py in wrapper(*args, **kwargs)
199 """Call target, and fall back on dispatchers if there is a TypeError."""
200 try:
--> 201 return target(*args, **kwargs)
202 except (TypeError, ValueError):
203 # Note: convert_to_eager_tensor currently raises a ValueError, not a
C:\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py in reshape(tensor, shape, name)
193 A `Tensor`. Has the same type as `tensor`.
194 """
--> 195 result = gen_array_ops.reshape(tensor, shape, name)
196 tensor_util.maybe_set_static_shape(result, shape)
197 return result
C:\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py in reshape(tensor, shape, name)
8227 try:
8228 return reshape_eager_fallback(
-> 8229 tensor, shape, name=name, ctx=_ctx)
8230 except _core._SymbolicException:
8231 pass # Add nodes to the TensorFlow graph.
C:\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py in reshape_eager_fallback(tensor, shape, name, ctx)
8247
8248 def reshape_eager_fallback(tensor, shape, name, ctx):
-> 8249 _attr_T, (tensor,) = _execute.args_to_matching_eager([tensor], ctx)
8250 _attr_Tshape, (shape,) = _execute.args_to_matching_eager([shape], ctx, _dtypes.int32)
8251 _inputs_flat = [tensor, shape]
C:\Anaconda3\lib\site-packages\tensorflow\python\eager\execute.py in args_to_matching_eager(l, ctx, default_dtype)
261 ret.append(
262 ops.convert_to_tensor(
--> 263 t, dtype, preferred_dtype=default_dtype, ctx=ctx))
264 if dtype is None:
265 dtype = ret[-1].dtype
C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
1497
1498 if ret is None:
-> 1499 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
1500
1501 if ret is NotImplemented:
C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref)
336 as_ref=False):
337 _ = as_ref
--> 338 return constant(v, dtype=dtype, name=name)
339
340
C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in constant(value, dtype, shape, name)
262 """
263 return _constant_impl(value, dtype, shape, name, verify_shape=False,
--> 264 allow_broadcast=True)
265
266
C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
273 with trace.Trace("tf.constant"):
274 return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
--> 275 return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
276
277 g = ops.get_default_graph()
C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
298 def _constant_eager_impl(ctx, value, dtype, shape, verify_shape):
299 """Implementation of eager constant."""
--> 300 t = convert_to_eager_tensor(value, ctx, dtype)
301 if shape is None:
302 return t
C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
96 dtype = dtypes.as_dtype(dtype).as_datatype_enum
97 ctx.ensure_initialized()
---> 98 return ops.EagerTensor(value, ctx.device_name, dtype)
99
100
</code></pre>
|
<p>Try changing the shape inside the neural net:</p>
<pre><code>inputs = keras.Input(shape=(128, 128, 1))
flat = keras.layers.Flatten()(inputs)
</code></pre>
<p>This would work:</p>
<pre><code>import numpy as np
import tensorflow as tf
x = np.random.rand(10, 128, 128, 1).astype(np.float32)
inputs = tf.keras.Input(shape=(128, 128, 1))
flat = tf.keras.layers.Flatten()(inputs)
encode = tf.keras.layers.Dense(14, activation='relu', name='encode')(flat)
coded = tf.keras.layers.Dense(3, activation='relu', name='coded')(encode)
decode = tf.keras.layers.Dense(14, activation='relu', name='decode')(coded)
decoded =tf.keras.layers.Dense(128*128, activation='sigmoid', name='decoded')(decode)
model = tf.keras.Model(inputs=inputs, outputs=decoded)
model.build(input_shape=x.shape) # remove this, it's just for demonstrating
model(x) # remove this, it's just for demonstrating
</code></pre>
<pre><code><tf.Tensor: shape=(10, 16384), dtype=float32, numpy=
array([[0.50187236, 0.4986383 , 0.50084716, ..., 0.4998364 , 0.50000435,
0.4999416 ],
[0.5020216 , 0.4985297 , 0.5009147 , ..., 0.4998234 , 0.5000047 ,
0.49993694],
[0.50179213, 0.49869663, 0.50081086, ..., 0.49984342, 0.5000042 ,
0.4999441 ],
...,
[0.5021732 , 0.49841946, 0.50098324, ..., 0.49981016, 0.50000507,
0.49993217],
[0.50205255, 0.49843505, 0.5009038 , ..., 0.49979147, 0.4999932 ,
0.49991176],
[0.50192004, 0.49860355, 0.50086874, ..., 0.49983227, 0.5000045 ,
0.4999401 ]], dtype=float32)>
</code></pre>
<p>Note that I removed the rescaling layer because I don't have it in my TensorFlow version; you can put it right back.</p>
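<p>Alternatively (my own suggestion, not part of the answer above), you could keep the <code>Input(shape=(128*128,))</code> model and flatten each batch inside the <code>tf.data</code> pipeline instead, pairing inputs with themselves for the autoencoder:</p>
<pre><code>def flatten_pair(x):
    flat = tf.reshape(x, (-1, 128 * 128))   # (batch, 16384)
    return flat, flat                       # autoencoder: input and target are the same

ds_train = ds_train.map(flatten_pair)
</code></pre>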
|
python|tensorflow2.0
| 0
|
1,534
| 65,092,243
|
Python - import csv file and group numbers by column
|
<p>My problem is simple: I have <a href="https://gofile.io/d/O6e9wS" rel="nofollow noreferrer">this</a> csv file.
I use Python 3. This file represents the number of new covid cases per country every day. What I want to do is obtain the global number of cases day by day, regardless of the origin country. What is the fastest and simplest way to do this?</p>
<p>Thank you</p>
|
<p>You could make a dictionary with dates as the key and cases as the value.</p>
<pre><code>from datetime import datetime
cases_by_day = {}
with open("cases.csv") as f:
f.readline()
for line in f:
elements = line.split(",")
date = datetime.strptime(elements[0], "%d/%m/%Y")
cases_by_day[date] = cases_by_day.get(date, 0) + int(elements[3])
</code></pre>
<p>This is easily expandable to add deaths and other data per day.</p>
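<p>Since the question is also tagged pandas, the equivalent looks roughly like this (a sketch - I'm assuming the date and new-cases columns sit in the same positions used in the loop above):</p>
<pre><code>import pandas as pd

df = pd.read_csv("cases.csv", parse_dates=[0], dayfirst=True)
date_col, cases_col = df.columns[0], df.columns[3]   # same positions as in the loop above
global_cases = df.groupby(date_col)[cases_col].sum()
</code></pre>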
|
python-3.x|pandas|graph|dataset|data-mining
| 1
|
1,535
| 50,040,989
|
Pandas group by on list of dictionaries
|
<p>I have data like below, and I am trying to group it by dayname and hour.</p>
<pre><code> [
{
"avg": 52,
"hour": 9,
"dayname": "Friday"
},
{
"avg": 1,
"hour": 10,
"dayname": "Friday"
},
{
"avg": 12,
"hour": 11,
"dayname": "Friday"
},
{
"avg": 3,
"hour": 12,
"dayname": "Friday"
},
{
"avg": 12,
"hour": 09,
"dayname": "Saturday"
},
{
"avg": 30,
"hour": 10,
"dayname": "Saturday"
},
{
"avg": 66,
"hour": 11,
"dayname": "Saturday"
},
{
"avg": 45,
"hour": 12,
"dayname": "Saturday"
}
]
</code></pre>
<p>I want the final OP:</p>
<pre><code>hour Friday Saturday
9 52 12
10 1 30
11 12 66
12 3 45
</code></pre>
<p>Here is my code tried :</p>
<pre><code> cur = mysql.connection.cursor()
sql = "select avg(value) avg, hour, dayname from table;"
cur.execute(sql)
row_headers = [x[0] for x in cur.description] #this will extract row headers
rv = cur.fetchall()
json_result = []
for result in rv:
json_result.append(dict(zip(row_headers, result)))
# resultfromdb= json.dumps(json_result)
finalresult = #how to get the expected op
return finalresult
</code></pre>
<p>From there, how can I group and get the final result using pandas?</p>
|
<p>You can feed a list of dictionaries directly into <code>pandas</code> and then manipulate:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(lst)
res = df.pivot_table(index='hour', columns='dayname', values='avg', aggfunc=np.sum)\
.reset_index()
res.columns.name = ''
print(res)
hour Friday Saturday
0 9 52 12
1 10 1 30
2 11 12 66
3 12 3 45
</code></pre>
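<p>If the endpoint needs to return JSON rather than a DataFrame (a small sketch, building on the question's code):</p>
<pre><code>finalresult = res.to_dict(orient='records')
# [{'hour': 9, 'Friday': 52.0, 'Saturday': 12.0}, ...]
</code></pre>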
|
python|python-3.x|pandas|pandas-groupby
| 1
|
1,536
| 50,128,438
|
How to specify column type in pandas dataframe
|
<p>After executing this line</p>
<pre><code>data['numbers'] = data.apply(lambda row : [1] * len(row.text), axis=1)
</code></pre>
<p>The column 'numbers' is not a list as I expect it to be, but instead it's of type object which can't be indexed and I get IndexError.</p>
<p>What I want as result is a column with 'numbers' where each line has as much ones as the length of the corresponding text in the line.</p>
<p>How can I fix it?</p>
|
<p>The <code>dtype</code> of columns holding <code>string</code>s, <code>dict</code>s, <code>list</code>s, <code>set</code>s or <code>tuple</code>s is always <code>object</code>; to check the actual element <code>type</code>, use:</p>
<pre><code>data = pd.DataFrame({'text':['aaas','as']}, index=[10,12])
data['numbers'] = data.apply(lambda row : [1] * len(row.text), axis=1)
print (data['numbers'].apply(type))
10    <class 'list'>
12    <class 'list'>
Name: numbers, dtype: object
#check scalar
print (type(data.loc[10, 'numbers']))
<class 'list'>
</code></pre>
<p>If you want to check the <code>length</code>s:</p>
<pre><code>print (len(data.iloc[0, data.columns.get_loc('numbers')]))
4
data['lens'] = data['numbers'].str.len()
print (data)
text numbers lens
10 aaas [1, 1, 1, 1] 4
12 as [1, 1] 2
</code></pre>
|
python|list|pandas|dataframe|apply
| 2
|
1,537
| 63,920,846
|
Create a 2-D Array From a Group of 1-D Arrays of Different Lengths in Python
|
<p>I have 2 1-D arrays that I have combined into a single 1-D array and would like to combine them into a 2-D array with 3 columns consisting of the two arrays and the newly created combined array. Ultimately, the objective is to plot all three 1-D arrays on a single chart using Plotly. The values are datetime but I will use integers here for the sake of simplicity.</p>
<pre><code>import numpy as np
a = np.array([1,3,4,5,7,9])
b = np.array([2,4,6,8])
c = np.array([1,2,3,4,5,6,7,8,9])
# The created array should be 9 rows and 3 columns that looks like:
abc = np.array([[1,0,1],[0,2,2],[3,0,3],[4,4,4],[5,0,5],[0,6,6],[7,0,7],[0,8,8],[9,0,9]])
</code></pre>
<p>Essentially, array abc is the c column repeated 3 times with zeros where there are missing values for a or b. I would prefer to do this in Numpy but am open to alternatives as well. In addition, the zeros don't have to be present and can be substituted with NaN, Null, etc. The questions I've reviewed seem to suggest that there is no way to combine arrays of different lengths but I'm certain there must be a way of combining the arrays by extending the shorter ones using indexing. I'm just having trouble getting from here to there. Any help would be greatly appreciated.</p>
|
<p>Pure numpy approach:</p>
<pre><code>import numpy as np
a = np.array([1,3,4,5,7,9])
b = np.array([2,4,6,8])
c = np.array([1,2,3,4,5,6,7,8,9])
abc = np.zeros((10, 3))   # one row per possible value 1-9, plus an unused row 0
# change to a loop, if you like
abc[a, 0] = a   # fancy indexing: each value is written into the row equal to itself
abc[b, 1] = b
abc[c, 2] = c
print(abc[1:])  # drop the unused row 0
</code></pre>
<p>prints:</p>
<pre><code>[[1. 0. 1.]
[0. 2. 2.]
[3. 0. 3.]
[4. 4. 4.]
[5. 0. 5.]
[0. 6. 6.]
[7. 0. 7.]
[0. 8. 8.]
[9. 0. 9.]]
</code></pre>
|
arrays|python-3.x|numpy
| 2
|
1,538
| 64,129,567
|
Python/Pandas - Checking the list of values in pandas column for a condition
|
<p>I have a Python program which processes multiple files. Each file has a customer id column; some files have 8-digit customer ids and some have 9-digit customer ids, and the formatting depends on the value in the city column. I need to do this -</p>
<pre><code>if column city value is in ['Chicago', 'Newyork', 'Milwaukee', 'Dallas']:
input_file['customer_id'] = input_file['customer_id'].str[0:2] + '-' + input_file['customer_id'].str[2:8]
Else:
input_file['customer_id'] = 'K' + '-' + input_file['customer_id'].str[0:3] + '-' + input_file['customer_id'].str[3:8]
</code></pre>
<p>Input</p>
<pre><code>id cusotmerid city
1 89898988 Dallas
2 898989889 Houston
</code></pre>
<p>Output</p>
<pre><code>id cusotmerid city
1 89-898988 Dallas
2 K-898-989889 Houston
</code></pre>
<p>How can I achieve it?</p>
|
<p>Check with <code>np.where</code></p>
<pre><code>import numpy as np

l = ['Chicago', 'Newyork', 'Milwaukee', 'Dallas']
s1 = df['customer_id'].str[0:2] + '-' + df['customer_id'].str[2:8]
s2 = 'K' + '-' + df['customer_id'].str[0:3] + '-' + df['customer_id'].str[3:8]
df['cusotmerid'] = np.where(df['city'].isin(l), s1, s2)
</code></pre>
|
python|pandas
| 1
|
1,539
| 46,919,525
|
Custom Dummy Coding in Pandas
|
<p>I have a dataframe with event data. I have two columns: Primary and Secondary. The Primary and Secondary columns both contain lists of tags (e.g., ['Fun event', 'Dance party']).</p>
<pre><code> primary secondary combined
['booze', 'party'] ['singing', 'dance'] ['booze', 'party', 'singing', 'dance']
['concert'] ['booze', 'vocals'] ['concert', 'booze', 'vocals']
</code></pre>
<p>I want to dummy code the data so that primary columns have a 1 code, non-observed columns have a 0, and values in the secondary column have a .5 value. Like so:</p>
<pre><code>combined booze party singing dance concert vocals
['booze', 'party', 'singing', 'dance'] 1 1 .5 .5 0 0
['concert', 'booze', 'vocals'] .5 0 0 0 1 .5
</code></pre>
|
<p>Here's one approach that works by transforming the <code>primary</code> and <code>secondary</code> columns' values into columns on the dataframe:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
'primary': [['booze', 'party'], ['concert']],
'secondary': [['singing', 'dance'], ['booze', 'vocals']],
})
# create primary and secondary indicator columns
iprim = df.primary.apply(lambda v: pd.Series([1] * len(v), index=v))
isec = df.secondary.apply(lambda v: pd.Series([.5] * len(v), index=v))
# join with primary, then update from secondary columns
df = df.join(iprim).join(isec, rsuffix='_')
df.drop([c for c in df.columns if c.endswith('_')], axis=1, inplace=True)
df.update(isec)
df.fillna(0)   # fillna returns a new frame - assign the result if you need to keep it
</code></pre>
<p>=></p>
<pre><code> primary secondary booze concert party dance singing vocals
0 [booze, party] [singing, dance] 1.0 0.0 1.0 0.5 0.5 0.0
1 [concert] [booze, vocals] 0.5 1.0 0.0 0.0 0.0 0.5
</code></pre>
<p>Note the second <code>.join()</code> uses rsuffix to add columns that were already in <code>primary</code>, whereas <code>.update()</code> is used to overwrite values in the primary columns. <code>.drop()</code> removes these columns. Rearrange to prefer primary over secondary.</p>
|
python|pandas|dummy-variable
| 1
|
1,540
| 62,916,521
|
Keras LSTM to Pytorch
|
<p>I am using the following code to apply a sequential LSTM to time-series data with one value. It works fine with Keras. I am wondering how I could do the same using PyTorch?</p>
<pre><code>import tensorflow
from tensorflow.keras import optimizers
from tensorflow.keras import losses
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Input, Dropout, Embedding, LSTM
from tensorflow.keras.optimizers import RMSprop, Adam, Nadam
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.callbacks import TensorBoard
# training_dataset.shape = (303, 24, 1)
time_steps = 24
metric = 'mean_absolute_error'
model = Sequential()
model.add(LSTM(units=32, activation='tanh', input_shape=(time_steps, 1), return_sequences=True))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='mean_absolute_error', metrics=[metric])
print(model.summary())
batch_size=32
epochs=20
model.fit(x=training_dataset, y=training_dataset,
batch_size=batch_size, epochs=epochs,
verbose=1, validation_data=(training_dataset, training_dataset),
callbacks=[TensorBoard(log_dir='../logs/{0}'.format(tensorlog))])
testing_pred = model.predict(x=testing_dataset)
</code></pre>
|
<p>You can check the pytorch documentation for that: <a href="https://pytorch.org/docs/master/generated/torch.nn.LSTM.html" rel="nofollow noreferrer">https://pytorch.org/docs/master/generated/torch.nn.LSTM.html</a></p>
<p>the simplest code is the following:</p>
<pre><code>import torch
import torch.nn as nn
from torch.optim import Adam

class LSTMModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.out = nn.Linear(32, 1)    # Dense(1) head
    def forward(self, x):
        seq, _ = self.lstm(x)          # (batch, time_steps, 32), like return_sequences=True
        return torch.sigmoid(self.out(seq))

model = LSTMModel()
opt = Adam(model.parameters())
loss_func = nn.L1Loss()                # mean absolute error, matching the Keras loss
for (x, y) in dataloader:              # dataloader yields (batch, time_steps, 1) tensors
    opt.zero_grad()
    loss = loss_func(model(x), y)
    loss.backward()
    opt.step()
</code></pre>
|
pytorch
| 0
|
1,541
| 62,899,927
|
ValueError: Dataframe must have columns "ds" and "y" with the dates and values respectively
|
<p>I have created my data source in the ds and y format and am still receiving the error above... please see the code below.</p>
<pre><code> import pandas as pd
import numpy as np
import pystan
from fbprophet import Prophet
import matplotlib.pyplot as plt
plt.style.use("fivethirtyeight")
df = pd.read_csv(r'C:\Users\sussmanbl\Desktop\fb prophet ar historical2.csv')
df['ds'] = pd.to_datetime(df['ds'])
print(df.head())
df.dtypes
plt.figure(figsize=(12,8))
plt.plot(df.set_index('ds'))
plt.legend(['AR'])
m1 = Prophet(weekly_seasonality=True)
m1 = Prophet(daily_seasonality=True)
m1.fit(df)
future = m1.make_future_dataframe(periods=90)
future.tail().T
forecast = m1.predict(future)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
fig1 = m1.plot(forecast)
fig2 = m1.plot_components(forecast)
runfile('C:/Users/sussmanbl/Desktop/Modelling/FB Prophet AR Historical.py',
wdir='C:/Users/sussmanbl/Desktop/Modelling')
Reloaded modules: stanfit4anon_model_ad32c37d592cdbc572cbc332ac6d2ee2_4431954053790800620
ds y
0 2017-01-03 10
1 2017-01-04 39
2 2017-01-05 19
3 2017-01-06 12
4 2017-01-09 11
</code></pre>
<h2>Error:</h2>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "", line 1, in
runfile('C:/Users/sussmanbl/Desktop/Modelling/FB Prophet AR Historical.py',
wdir='C:/Users/sussmanbl/Desktop/Modelling')
File "C:\Users\sussmanbl.conda\envs\stan_env\lib\site-
packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)
File "C:\Users\sussmanbl.conda\envs\stan_env\lib\site-
packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/sussmanbl/Desktop/Modelling/FB Prophet AR Historical.py", line 17, in
m1.fit(df)
File "C:\Users\sussmanbl.conda\envs\stan_env\lib\site-packages\fbprophet\forecaster.py", line
1082, in fit
'Dataframe must have columns "ds" and "y" with the dates and '
ValueError: Dataframe must have columns "ds" and "y" with the dates and values respectively.
</code></pre>
<pre><code> df.dtypes
Out[46]:
ds datetime64[ns]
y int64
dtype: object
</code></pre>
<p>I have formatted the source data to be in ds and y format. The dates are in the proper format, as are the values. I'm not sure what the code or the source data is missing that is causing the value error to come up.</p>
|
<p>I had the same issue; turns out I had <code>ds</code> and <code>y</code> capitalized. Once I made them lowercase in the <code>.csv</code> file, it all worked as it should.</p>
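<p>If editing the CSV is inconvenient, the same fix can be applied in pandas after reading (my own sketch, not part of the answer above):</p>
<pre><code>df.columns = df.columns.str.lower()   # e.g. 'Ds'/'Y' become 'ds'/'y'
m1.fit(df)
</code></pre>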
|
python|pandas
| 0
|
1,542
| 63,047,597
|
Substituting values in place of variables in pandas dataframe
|
<p>I have a dataframe as follows:</p>
<pre><code> df:
Name Age Type Result
Ra 35 adult $name is an $type of $age years
Ro 12 child $name behaves like $type as he is $age years old
<+100 rows>
</code></pre>
<p>Each statement in the Result column is different. All I need to do is populate the variables with the proper values.</p>
<p>I am aware of multiple options like formatted strings and the string format method, but I am not able to understand how to implement them in a dataframe scenario like the one above:</p>
<pre><code>Option 1: f"Shepherd {name} is {age} years old."
Option 2: "Shepherd {} is {} years old.".format(name, age)
</code></pre>
<p>Can anyone help with the usage?</p>
|
<p><a href="https://docs.python.org/3/library/stdtypes.html?highlight=format_map#str.format_map" rel="nofollow noreferrer">format_map</a> takes a dictionary to replace the placeholders with the actual value.</p>
<p>The placeholders to use in our case is the column names.</p>
<p><a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html?highlight=dataframe%20apply#pandas.DataFrame.apply" rel="nofollow noreferrer">DataFrame.apply</a> along <code>axis=1</code> uses the <code>format_map</code> functionality for every row of the dataframe.</p>
<p>Make your csv like this:</p>
<pre><code>Name Age Type Result
Ra 35 adult {Name} is an {Type} of {Age} years
Ro 12 child {Name} behaves like {Type} as he is {Age} years old
</code></pre>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_csv('data.csv', sep='\t')
df['Result'] = df.apply(lambda row: row['Result'].format_map(row), axis=1)
print(df)
</code></pre>
<p>Output</p>
<pre><code> Name Age Type Result
0 Ra 35 adult Ra is an adult of 35 years
1 Ro 12 child Ro behaves like child as he is 12 years old
</code></pre>
<p>If you're not in a position to change the data, the code below will produce the same result as well.</p>
<pre class="lang-py prettyprint-override"><code>def str_substitute(row):
return row['Result']\
.replace('$name', row['Name'])\
.replace('$type', row['Type'])\
.replace('$age', str(row['Age']))
df['Result'] = df.apply(str_substitute, axis=1)
</code></pre>
|
python|pandas|dataframe
| 1
|
1,543
| 63,103,706
|
MTCNN does not use GPU on first detection but does on following detections
|
<p>I have tensorflow 2.0-gpu installed. I am doing face detection using MTCNN. On the first call to detect the face it takes 3.86 seconds. On the next call it takes only .049 seconds. I suspect it is not using the GPU on the first call but that it does on the second call. I know MTCNN does import tensorflow but I do not understand why the GPU is not used on the first call. Code is below.</p>
<pre><code>import time
from mtcnn import MTCNN
import cv2
#********first run of image detection - note resulting process time- think not using gpu
detector = MTCNN()
img_file=r'c:\Temp\people\storage\1.jpg'
img = cv2.imread(img_file, cv2.COLOR_BGR2RGB)
start=time.time()
detector.detect_faces(img)
stop=time.time()
duration = stop-start
print(duration)
# rerun image detection on the same image - note duration much less must be using gpu
start=time.time()
detector.detect_faces(img)
stop=time.time()
duration = stop-start
print(duration)
Using TensorFlow backend.
3.8625590801239014
0.049181222915649414
</code></pre>
|
<p>It does use the GPU on the first call. The main overhead is in the allocation of the model's parameters and the creation of the computation graph in memory. You can use a small "dummy" image first (it doesn't have to be full-size) to allow the ops to be formed and the variables to be placed on the GPU, then continue with the actual images.</p>
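<p>A minimal sketch of that warm-up (the dummy size below is an arbitrary choice; anything the detector accepts works):</p>
<pre><code>import numpy as np

# warm-up: a tiny black frame lets TF build the graph and allocate on the GPU
detector.detect_faces(np.zeros((160, 160, 3), dtype=np.uint8))

start = time.time()
detector.detect_faces(img)   # now measured without the one-off setup cost
print(time.time() - start)
</code></pre>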
|
python|tensorflow2.0
| 2
|
1,544
| 67,950,502
|
Selecting values from tensor based on an index tensor
|
<p>I have two matrices. Matrix A contains some values and matrix B contains indices. The shapes of matrix A and B are (batch, values) and (batch, indices), respectively.</p>
<p>My goal is to select values from matrix A based on indices of matrix B along the batch dimension.</p>
<p>For example:</p>
<pre><code># Matrix A
<tf.Tensor: shape=(2, 5), dtype=float32, numpy=
array([[0., 1., 2., 3., 4.],
[5., 6., 7., 8., 9.]], dtype=float32)>
# Matrix B
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[0, 1],
[1, 2]], dtype=int32)>
# Expected Result
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[0., 1.],
       [6., 7.]], dtype=float32)>
</code></pre>
<p>How can I achieve this in Tensorflow?</p>
<p>Many thanks in advance!</p>
|
<p>You can achieve this with the <a href="https://www.tensorflow.org/api_docs/python/tf/gather" rel="nofollow noreferrer"><code>tf.gather</code></a> function.</p>
<pre><code>mat_a = tf.constant([[0., 1., 2., 3., 4.],
[5., 6., 7., 8., 9.]])
mat_b = tf.constant([[0, 1], [1, 2]])
out = tf.gather(mat_a, mat_b, batch_dims=1)
out.numpy()
array([[0., 1.],
[6., 7.]], dtype=float32)
</code></pre>
|
tensorflow|tensorflow2.0
| 1
|
1,545
| 67,862,535
|
Add calculated column to df2 for every row entry in df2 pandas dataframe
|
<p>I have 2 dataframes</p>
<pre><code>df1 = pd.DataFrame({'X':[1,2,3,4,5],'Y':[1,2,3,4,5],'Point':[1,2,3,4,5]})
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">X</th>
<th style="text-align: center;">Y</th>
<th style="text-align: center;">Point</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2</td>
<td style="text-align: center;">2</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">3</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">4</td>
<td style="text-align: center;">4</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: center;">5</td>
<td style="text-align: center;">5</td>
</tr>
</tbody>
</table>
</div>
<pre><code>df2 = pd.DataFrame({'X':[1,2,3,4,5,6,7,8],'Y':[1,2,3,4,5,6,7,8]})
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">X</th>
<th style="text-align: center;">Y</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">3</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">4</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: center;">5</td>
</tr>
<tr>
<td style="text-align: left;">6</td>
<td style="text-align: center;">6</td>
</tr>
<tr>
<td style="text-align: left;">7</td>
<td style="text-align: center;">7</td>
</tr>
<tr>
<td style="text-align: left;">8</td>
<td style="text-align: center;">8</td>
</tr>
</tbody>
</table>
</div>
<p>I need to add a new column into df2 for every row entry in df1 and populate with the following equation</p>
<pre><code>=sqrt((abs(df1.X-df2.X)**2)+(abs(df1.Y-df2.Y)**2))
</code></pre>
<p>Each new column title should correspond to the Point column entry in df1.
What's the best way to do this?</p>
<p>The result should look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">X</th>
<th style="text-align: center;">Y</th>
<th style="text-align: center;">Point1</th>
<th style="text-align: center;">Point2</th>
<th style="text-align: center;">Point3</th>
<th style="text-align: center;">Point4</th>
<th style="text-align: center;">Point5</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1.41</td>
<td style="text-align: center;">2.83</td>
<td style="text-align: center;">4.24</td>
<td style="text-align: center;">5.66</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2</td>
<td style="text-align: center;">1.41</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1.41</td>
<td style="text-align: center;">2.83</td>
<td style="text-align: center;">4.24</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">3</td>
<td style="text-align: center;">2.83</td>
<td style="text-align: center;">1.41</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1.41</td>
<td style="text-align: center;">2.83</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">4</td>
<td style="text-align: center;">4.24</td>
<td style="text-align: center;">2.83</td>
<td style="text-align: center;">1.41</td>
<td style="text-align: center;">0</td>
<td style="text-align: center;">1.41</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: center;">5</td>
<td style="text-align: center;">5.66</td>
<td style="text-align: center;">4.24</td>
<td style="text-align: center;">2.83</td>
<td style="text-align: center;">1.41</td>
<td style="text-align: center;">0</td>
</tr>
<tr>
<td style="text-align: left;">6</td>
<td style="text-align: center;">6</td>
<td style="text-align: center;">7.07</td>
<td style="text-align: center;">5.66</td>
<td style="text-align: center;">4.24</td>
<td style="text-align: center;">2.83</td>
<td style="text-align: center;">1.41</td>
</tr>
<tr>
<td style="text-align: left;">7</td>
<td style="text-align: center;">7</td>
<td style="text-align: center;">8.49</td>
<td style="text-align: center;">7.07</td>
<td style="text-align: center;">5.66</td>
<td style="text-align: center;">4.24</td>
<td style="text-align: center;">2.83</td>
</tr>
<tr>
<td style="text-align: left;">8</td>
<td style="text-align: center;">8</td>
<td style="text-align: center;">9.90</td>
<td style="text-align: center;">8.49</td>
<td style="text-align: center;">7.07</td>
<td style="text-align: center;">5.66</td>
<td style="text-align: center;">4.24</td>
</tr>
</tbody>
</table>
</div>
|
<p>One possibility is to use the <code>cdist</code> function from <code>scipy</code>.</p>
<pre><code>from scipy.spatial.distance import cdist
output = pd.DataFrame(data=cdist(df2, df1[["X", "Y"]], 'euclidean'),
index=pd.MultiIndex.from_frame(df2[["X","Y"]]),
columns=[f"Point{i+1}" for i in range(5)]).reset_index()
>>> output
X Y Point1 Point2 Point3 Point4 Point5
0 1 1 0.000000 1.414214 2.828427 4.242641 5.656854
1 2 2 1.414214 0.000000 1.414214 2.828427 4.242641
2 3 3 2.828427 1.414214 0.000000 1.414214 2.828427
3 4 4 4.242641 2.828427 1.414214 0.000000 1.414214
4 5 5 5.656854 4.242641 2.828427 1.414214 0.000000
5 6 6 7.071068 5.656854 4.242641 2.828427 1.414214
6 7 7 8.485281 7.071068 5.656854 4.242641 2.828427
7 8 8 9.899495 8.485281 7.071068 5.656854 4.242641
</code></pre>
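<p>If you prefer to stay in plain numpy (an alternative sketch, not part of the answer above), broadcasting gives the same matrix without scipy:</p>
<pre><code>import numpy as np

a = df1[["X", "Y"]].to_numpy()   # (5, 2)
b = df2[["X", "Y"]].to_numpy()   # (8, 2)
dist = np.sqrt(((b[:, None, :] - a[None, :, :]) ** 2).sum(axis=-1))   # (8, 5)
out = df2.join(pd.DataFrame(dist, columns=[f"Point{int(p)}" for p in df1["Point"]]))
</code></pre>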
|
python|pandas
| 0
|
1,546
| 61,222,503
|
How to remove duplicate from rows and convert its value to column in pandas
|
<p>I have a lot of data in the following format.</p>
<pre><code>Person_ID Person_value
1 usr:value1
1 val:value2
2 usr:value1
2 val:value2
3 usr:value1
3 val:value2
4 usr:value1
4 val:value2
</code></pre>
<p>But I want result like this:</p>
<pre><code>Person_ID Person_value Person_value2
1 Usr:value1 val:value2
2 Usr:value1 val:value2
3 Usr:value1 val:value2
4 Usr:value1 val:value2
</code></pre>
<p>OR Like this::</p>
<pre><code>Person_ID Person_value
1 Usr:value1 // val:value2
2 Usr:value1 // val:value2
3 Usr:value1 // val:value2
4 Usr:value1 // val:value2
</code></pre>
<p>These values are what cause the apparent duplicates, and keeping both values is very important.</p>
|
<p>IIUC, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a> to build a helper column, then <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a>:</p>
<pre><code>df1 = (df.assign(a=df.groupby('Person_ID').cumcount().add(1))
.pivot('Person_ID','a','Person_value')
.add_prefix('Person_value'))
print (df1)
a Person_value1 Person_value2
Person_ID
A usr:value1 val:value2
B usr:value1 val:value2
C usr:value1 val:value2
D usr:value1 val:value2
</code></pre>
<p>Or aggregate <code>join</code>:</p>
<pre><code>df1 = (df.groupby('Person_ID')
.agg(' // '.join)
.reset_index())
print (df1)
Person_ID Person_value
0 A usr:value1 // val:value2
1 B usr:value1 // val:value2
2 C usr:value1 // val:value2
3 D usr:value1 // val:value2
</code></pre>
<p>If it is necessary to split rows by the part of <code>Person_value</code> before the <code>:</code> before pivoting:</p>
<pre><code>df1 = (df.assign(a=df['Person_value'].str.split(':').str[0])
.pivot('Person_ID','a','Person_value'))
print (df1)
a usr val
Person_ID
A usr:value1 val:value2
B usr:value1 val:value2
C usr:value1 val:value2
D usr:value1 val:value2
</code></pre>
|
python|python-3.x|pandas|amortized-analysis
| 3
|
1,547
| 61,495,877
|
How to replace or swap all values (largest with smallest) in python?
|
<p>I want to swap all the values of my data frame. The largest value must be replaced with the smallest value (i.e. 7 with 1, 6 with 2, 5 with 3, 4 with 4, 3 with 5, and so on).</p>
<pre><code>import numpy as np
import pandas as pd
import io
data = '''
Values
6
1
3
7
5
2
4
1
4
7
2
5
'''
df = pd.read_csv(io.StringIO(data))
</code></pre>
<p><strong>Trial</strong></p>
<p>First I want to get all the unique values from my data.</p>
<pre><code>df1=df.Values.unique()
print(df1)
[6 1 3 7 5 2 4]
</code></pre>
<p>I have sorted it in ascending order:</p>
<pre><code>sorted1 = list(np.sort(df1))
print(sorted1)
[1, 2, 3, 4, 5, 6, 7]
</code></pre>
<p>Then I reverse-sorted the list:</p>
<pre><code>rev_sorted = list(reversed(sorted1))
print(rev_sorted)
[7, 6, 5, 4, 3, 2, 1]
</code></pre>
<p>Now I need to replace the max. value with min. value and so on in my main data set (df). The old values can be replaced or a new column might be added.</p>
<p><strong>Expected Output</strong>:</p>
<pre><code>Values,New_Values
6,2
1,7
3,5
7,1
5,3
2,6
4,4
1,7
4,4
7,1
2,6
5,3
</code></pre>
|
<p>Here's a vectorized one -</p>
<pre><code>In [51]: m,n = np.unique(df['Values'], return_inverse=True)
In [52]: df['New_Values'] = m[n.max()-n]
In [53]: df
Out[53]:
Values New_Values
0 6 2
1 1 7
2 3 5
3 7 1
4 5 3
5 2 6
6 4 4
7 1 7
8 4 4
9 7 1
10 2 6
11 5 3
</code></pre>
<p>Translating to pandas with <code>pandas.factorize</code> -</p>
<pre><code>m,n = pd.factorize(df.Values, sort=True)
df['New_Values'] = n[m.max()-m]
</code></pre>
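<p>If you prefer to stay closer to your own attempt, the two sorted lists you already built can be zipped into a lookup table; a small sketch reusing <code>sorted1</code> and <code>rev_sorted</code> from the question:</p>
<pre><code># map each value to its mirror image in the sorted order
mapping = dict(zip(sorted1, rev_sorted))   # {1: 7, 2: 6, ..., 7: 1}
df['New_Values'] = df['Values'].map(mapping)
</code></pre>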
|
python|pandas|numpy
| 3
|
1,548
| 61,535,852
|
Lazy evaluations of numpy.einsum to avoid storing intermediate large dimensional arrays in memory
|
<p>Imagine that I have integers, <code>n,q</code> and vectors/arrays with these dimensions:</p>
<pre><code>import numpy as np
n = 100
q = 102
A = np.random.normal(size=(n,n))
B = np.random.normal(size=(q, ))
C = np.einsum("i, jk -> ijk", B, A)
D = np.einsum('ijk, ikj -> k', C, C)
</code></pre>
<p>which is working fine if all intermediate arrays fit in memory.</p>
<p>Now assume that I can store in memory arrays of size <code>(n,n)</code>, <code>(q,n)</code> but not any three dimensional arrays such as with shape <code>(n,n,q)</code>. I cannot store in memory array <code>C</code> above. Instead, to compute <code>D</code>,</p>
<pre><code>D1 = np.einsum('i, jk, i, kj -> k', B, A, B, A, optimize='optimal')
</code></pre>
<p>works fine, and <code>np.einsum</code> is typically smart enough to find an <code>einsum_path</code> so that no 3d array is ever constructed. Great!</p>
<p>Now let's complicate things slightly:</p>
<pre><code>C = np.einsum("i, jk -> ijk", B, A) # as before
Y2 = np.random.normal(size=(n, ))
Z2 = np.random.normal(size=(q, n))
C2 = np.einsum("j, ik -> ijk", Y2, Z2)
E = np.einsum('ijk, ikj -> k', C+C2, C+C2)
</code></pre>
<p>Here I cannot find a reasonable way (reasonable, as in short/readable code) to construct <code>E</code> without constructing intermediate 3d arrays such as C and C2. </p>
<p>Questions:</p>
<ol>
<li>is there a <code>np.einsum</code> one liner that would construct <code>E</code>, without constructing the intermediate 3d arrays C and C2?<br>
The following appears to work by expanding into four terms, but is rather impractical compared to the hypothetical API in question 2...</li>
</ol>
<pre><code>E_CC = np.einsum('i, jk, i, kj -> k', B, A, B, A, optimize='optimal') # as D before
E_C2C2 = np.einsum('j, ik, k, ij -> k', Y2, Z2, Y2, Z2, optimize='optimal')
E_CC2 = np.einsum('i, jk, k, ij -> k', B, A, Y2, Z2, optimize='optimal')
E_C2C = np.einsum('j, ik, i, kj -> k', Y2, Z2, B, A, optimize='optimal')
E_new = E_CC + E_C2C2 + E_CC2 + E_C2C
np.isclose(E_new, E) # all True!
</code></pre>
<ol start="2">
<li>Is there a ''lazy'' version of <code>np.einsum</code> that would wait before the final call to find an optimal <code>einsum_path</code> throughout the composition of several lazy einsum, including sums as in the above example? For instance, with an hypothetical <code>einsum_lazy</code>, the following would construct <code>E</code> without storing a 3d array (such as C or C2) in memory:</li>
</ol>
<pre><code>C = np.einsum_lazy("i, jk -> ijk", B, A) # nothing has been computed yet!
C2 = np.einsum_lazy("j, ik -> ijk", Y2, Z2) # nothing has been computed yet!
E = np.einsum('ijk, ikj -> k', C+C2, C+C2) # expand the sums and uses optimal einsum_path to compute E
</code></pre>
|
<p>This is a really fascinating question - as @s-m-e mentioned, numpy does not offer lazy <code>einsum</code> computation, but it does offer a lower level function called <a href="https://numpy.org/doc/stable/reference/generated/numpy.einsum_path.html#numpy.einsum_path" rel="nofollow noreferrer">np.einsum_path</a>, which <code>np.einsum</code> uses to actually find the optimal contractions.</p>
<p>What if you did this:</p>
<pre><code>C_path = np.einsum_path("i, jk -> ijk", B, A)[0]
C2_path = np.einsum_path("j, ik -> ijk", Y2, Z2)[0]
CC2_path = C_path + C2_path[1:]
</code></pre>
<p>And somehow used the path in a final computation? The biggest issue here is that you're summing C and C2, and elementwise addition is not currently supported by <code>einsum</code>, so it's hard to optimize that.</p>
<p>Take a look at <a href="https://stackoverflow.com/a/53177651/11771447">@Eelco Hoogendoorn</a>'s response to a similar question: maybe breaking it up into smaller computations isn't such a bad idea :)</p>
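<p>As a side note, the third-party <code>opt_einsum</code> package lets you pre-compile a contraction once and reuse it, which keeps the path search out of any loop. It does not remove the need to expand the sum into the four terms by hand, so treat the following only as a sketch (it assumes <code>opt_einsum</code> is installed and reuses <code>B</code> and <code>A</code> from the question):</p>
<pre><code>import opt_einsum as oe

# build the contraction once; only the operand shapes are needed here
expr = oe.contract_expression('i,jk,i,kj->k', B.shape, A.shape, B.shape, A.shape)
E_CC = expr(B, A, B, A)  # with an optimised path this avoids the (q, n, n) intermediate
</code></pre>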
|
numpy|multidimensional-array|lazy-evaluation|numpy-einsum
| 1
|
1,549
| 65,694,416
|
Can a tf-agents environment be defined with an unobservable exogenous state?
|
<p>I apologize in advance for the question in the title not being very clear. I'm trying to train a reinforcement learning policy using tf-agents in which there exists some unobservable stochastic variable that affects the state.</p>
<p>For example, consider the standard CartPole problem, but we add wind where the velocity changes over time. I don't want to train an agent that relies on having observed the wind velocity at each step; I instead want the wind to affect the position and angular velocity of the pole, and the agent to learn to adapt just as it would in the wind-free environment. In this example however, we would need the wind velocity at the current time to be correlated with the wind velocity at the previous time e.g. we wouldn't want the wind velocity to change from 10m/s at time t to -10m/s at time t+1.</p>
<p>The problem I'm trying to solve is how to track the state of the exogenous variable without making it part of the observation spec that gets fed into the neural network when training the agent. Any guidance would be appreciated.</p>
|
<p>Yes, that is no problem at all. Your environment object (a subclass of <code>PyEnvironment</code> or <code>TFEnvironment</code>) can do whatever you want within it. The <code>observation_spec</code> requirement is only related to the TimeStep that you output in the <code>step</code> and <code>reset</code> methods (more precisely in your implementation of the <code>_step</code> and <code>_reset</code> abstract methods).</p>
<p>Your environment, however, is completely free to have any additional attributes that you might want (like parameters to control wind generation) and any number of additional methods you like (like methods to generate the wind at this timestep according to <code>self._wind_hyper_params</code>). A quick schematic of what your code could look like is below:</p>
<pre><code>class MyCartPole(PyEnvironment):
    def __init__(self, ..., wind_HP):
        ...  # self._observation_spec and _action_spec can be left unchanged
        self._wind_hyper_params = wind_HP
        self._wind_velocity = 0
        self._state = ...

    def _update_wind_velocity(self):
        self._wind_velocity = ...

    def factor_in_wind(self):
        self._state = ...  # update according to wind

    def _step(self, action):
        ...  # self._state update computations
        self._update_wind_velocity()
        self.factor_in_wind()
        observations = self._state_to_observations()
        ...
</code></pre>
|
tensorflow|reinforcement-learning|tensorflow-agents
| 2
|
1,550
| 65,696,049
|
How to set date index manually and fill zeros for missing rows in python panda dataframe
|
<p>I have a dataset given below and I have a parameter that takes current date:</p>
<pre><code>product_name serial_number date sum
"A" "12" "2020-01-01" 150
"A" "12" "2020-01-02" 350
"A" "12" "2020-01-05" 550
"A" "12" "2020-01-10" 1500
</code></pre>
<p>As an example, please take the current_date as "2020-01-15". I am trying to set the index manually from the current_date, "2020-01-15", to the min date in the given dataset ("2020-01-01") and output it as a dataframe that fills missing dates with zeros:</p>
<pre><code>product_name serial_number date sum
"A" "12" "2020-01-01" 150
"A" "12" "2020-01-02" 350
"A" "12" "2020-01-03" 0
"A" "12" "2020-01-04" 0
"A" "12" "2020-01-05" 550
"A" "12" "2020-01-06" 0
"A" "12" "2020-01-07" 0
"A" "12" "2020-01-08" 0
"A" "12" "2020-01-09" 0
"A" "12" "2020-01-10" 1500
"A" "12" "2020-01-11" 0
"A" "12" "2020-01-12" 0
"A" "12" "2020-01-13" 0
"A" "12" "2020-01-14" 0
"A" "12" "2020-01-15" 0
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html" rel="nofollow noreferrer"><code>date_range</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a>:</p>
<pre><code>current_date = '2020-01-15'
#if need dynamically set today
#current_date = pd.to_datetime('today')
r = pd.date_range(df['date'].min(), current_date, name='date')
cols = ['product_name','serial_number']
df = (df.pivot(cols, 'date', 'sum')
.reindex(r, axis=1, fill_value=0)
.stack()
.reset_index(name='sum'))
product_name serial_number date sum
0 A 12 2020-01-01 150
1 A 12 2020-01-02 350
2 A 12 2020-01-03 0
3 A 12 2020-01-04 0
4 A 12 2020-01-05 550
5 A 12 2020-01-06 0
6 A 12 2020-01-07 0
7 A 12 2020-01-08 0
8 A 12 2020-01-09 0
9 A 12 2020-01-10 1500
10 A 12 2020-01-11 0
11 A 12 2020-01-12 0
12 A 12 2020-01-13 0
13 A 12 2020-01-14 0
14 A 12 2020-01-15 0
</code></pre>
<p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.from_product.html" rel="nofollow noreferrer"><code>MultiIndex.from_product</code></a>:</p>
<pre><code>current_date = '2020-01-15'
#if need dynamically set today
#current_date = pd.to_datetime('today')
r = pd.date_range(df['date'].min(), current_date)
mux = pd.MultiIndex.from_product([df['product_name'].unique(),
df['serial_number'].unique(),
r], names=['product_name','serial_number','date'])
df = (df.set_index(['product_name','serial_number', 'date'])
.reindex(mux, fill_value=0)
.reset_index())
</code></pre>
<p>For a more dynamic solution, the unique values are collected in a list comprehension:</p>
<pre><code>current_date = '2020-01-15'
#if need dynamically set today
#current_date = pd.to_datetime('today')
r = pd.date_range(df['date'].min(), current_date)
cols = ['product_name','serial_number']
uniq = [df[x].unique() for x in cols]
mux = pd.MultiIndex.from_product(uniq+[r], names= cols + ['date'])
df = (df.set_index(['product_name','serial_number', 'date'])
.reindex(mux, fill_value=0)
.reset_index())
</code></pre>
<hr />
<pre><code>print (df)
product_name serial_number date sum
0 A 12 2020-01-01 150
1 A 12 2020-01-02 350
2 A 12 2020-01-03 0
3 A 12 2020-01-04 0
4 A 12 2020-01-05 550
5 A 12 2020-01-06 0
6 A 12 2020-01-07 0
7 A 12 2020-01-08 0
8 A 12 2020-01-09 0
9 A 12 2020-01-10 1500
10 A 12 2020-01-11 0
11 A 12 2020-01-12 0
12 A 12 2020-01-13 0
13 A 12 2020-01-14 0
14 A 12 2020-01-15 0
</code></pre>
|
python|pandas|dataframe|time-series
| 1
|
1,551
| 63,558,043
|
get min and max values of several values in a df
|
<p>I have this df :</p>
<pre><code>df=pd.DataFrame({'stop_i':['stop_0','stop_0','stop_0','stop_1','stop_1','stop_0','stop_0'],'time':[0,10,15,50,60,195,205]})
</code></pre>
<p>Each line corresponds to the <code>time</code> (in seconds) where the bus was at the <code>stop_i</code>.</p>
<p>First, I want to count how many times the bus was at <code>stop_i</code>, where a new visit is counted when there are at most <code>180 seconds</code> between the last time it was seen and the first time it is seen again. The result would be <code>{'stop_0' : 2,'stop_1': 1}</code>: for <code>stop_0</code>, the first visit ends at <code>15s</code> and the bus appears again at <code>195s</code>, and since <code>195-15<=180</code> it counts as 2, while <code>stop_1</code> appears only once.</p>
<p>Secondly, I want to get this dict: <code>{'stop_0' : [[0,15],[195,205]], 'stop_1': [[50,60]]}</code>, containing the min and the max value of the time when the bus was at <code>stop_i</code>.</p>
<p>Is there a way to do that with pandas to avoid a loop through the df ?</p>
<p>Thanks !</p>
|
<p>No looping</p>
<ol>
<li>generate a new column that is the set of times that bus is at a stop (assumes index is sequential)</li>
<li>from this get first and last times. then construct a list of first / last times. Plus calcs for > 180s. This logic seems odd. stop_1 only has one visit so count of 1 for > 180s is forced</li>
<li>finally get dictionaries you want.</li>
</ol>
<pre><code>df=pd.DataFrame({'stop_i':['stop_0','stop_0','stop_0','stop_1','stop_1','stop_0','stop_0'],'time':[0,10,15,50,60,195,205]})
dfp =(df
# group when a bus is at a stop
.assign(
grp=lambda dfa: np.where(dfa["stop_i"].shift()!=dfa["stop_i"], dfa.index, np.nan)
)
.assign(
grp=lambda dfa: dfa["grp"].fillna(method="ffill")
)
# within group get fisrt and last time it's at stop
.groupby(["stop_i","grp"]).agg({"time":["first","last"]})
.reset_index()
# based on expected output... in reality there is only 1 time bus is between stops
    # > 180 seconds. stop_1 only has one visit so it cannot be > 180s
.assign(
combi=lambda dfa: dfa.apply(lambda r: [r[("time","first")], r[("time","last")]] , axis=1),
stopchng=lambda dfa: dfa[("stop_i")]!=dfa[("stop_i")].shift(),
timediff=lambda dfa: dfa[("time","first")] - dfa[("time","last")].shift(),
)
)
# first requirement... which seems wrong
d1 = (dfp.loc[(dfp[("timediff")]>=180) | dfp[("stopchng")], ]
.groupby("stop_i")["stop_i"].count()
.to_frame().T.reset_index(drop="True")
.to_dict(orient="records")
)
# second requirement
d2 = (dfp.groupby("stop_i")["combi"].agg(lambda s: list(s))
.to_frame().T.reset_index(drop=True)
.to_dict(orient="records")
)
print(d1, d2)
</code></pre>
<p><strong>output</strong></p>
<pre><code>[{'stop_0': 2, 'stop_1': 1}] [{'stop_0': [[0, 15], [195, 205]], 'stop_1': [[50, 60]]}]
</code></pre>
|
python|python-3.x|pandas|list|dictionary
| 2
|
1,552
| 53,649,050
|
Creating separate columns based on string value in python
|
<p>I am quite new but I am working with a dataframe that looks like the below:</p>
<pre><code>ID Tag
123 Asset Class > Equity
123 Inquiry Type > Demonstration
123 Inquiry Topic > Holdings
456 Asset Class > Fixed Income
456 Inquiry Type > BF
456 Inquiry Topic > VaR
</code></pre>
<p>What I am trying to do is change this into a dataframe which looks like the below:</p>
<pre><code>ID Asset Type Inquiry Type Inquiry Topic
123 Equity Demonstration Holdings
456 Fixed Income Bf VaR
</code></pre>
<p>Sorry if this is quite simple; however, I am having an issue with this manipulation. I have tried <code>.melt</code>, but it does not seem to accomplish what I am trying to do. </p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>split</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a> and selecting splitted lists by indexing:</p>
<pre><code>s = df['Tag'].str.split(' > ')
df = (pd.pivot(index=df['ID'], columns=s.str[0], values=s.str[1])
.reset_index()
.rename_axis(None, 1))
print (df)
ID Asset Class Inquiry Topic Inquiry Type
0 123 Equity Holdings Demonstration
1 456 Fixed Income VaR BF
</code></pre>
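<p>If your pandas version complains about calling <code>pd.pivot</code> with bare Series like this, the same idea can be written with the <code>DataFrame.pivot</code> method; a hedged equivalent:</p>
<pre><code>s = df['Tag'].str.split(' > ')
df_out = (df.assign(key=s.str[0], val=s.str[1])
            .pivot(index='ID', columns='key', values='val')
            .reset_index()
            .rename_axis(None, axis=1))
</code></pre>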
|
pandas|dataframe
| 1
|
1,553
| 55,550,605
|
What is it that's being passed to transform function of Series?
|
<p>I'm using the <code>transform</code> function of <code>Series</code>, but something got me confused.</p>
<p>I have searched the Pandas documentation and googled, but couldn't find the answer.</p>
<p>When I use <code>np.sum</code>, the result is:</p>
<pre><code>s = Series(range(7))
s.transform(lambda x:x + np.sum(x))
0 2
1 4
2 6
3 8
4 10
5 12
6 14
Name: A, dtype: int64
</code></pre>
<p>So, I think <code>x</code> is an element of the <code>Series</code>. But when I use <code>x.sum</code>, the result is:</p>
<pre><code>s.transform(lambda x:x + x.sum())
0 29
1 30
2 31
3 32
4 33
5 34
6 35
Name: A, dtype: int64
</code></pre>
<p><code>x</code> looks like a Series. And when <code>s</code> is a DataFrame, I get the same result.<br>
I am confused. Can anyone help me answer this question? Thanks a lot.</p>
|
<p>I read the source code and found that the <code>transform</code> function depends on the <code>aggregate</code> function. It will try a regular apply first:</p>
<pre><code>def aggregate(self, func, axis=0, *args, **kwargs):
    # Validate the axis parameter
    self._get_axis_number(axis)
    result, how = self._aggregate(func, *args, **kwargs)
    if result is None:
        # we can be called from an inner function which
        # passes this meta-data
        kwargs.pop('_axis', None)
        kwargs.pop('_level', None)
        # try a regular apply, this evaluates lambdas
        # row-by-row; however if the lambda is expected a Series
        # expression, e.g.: lambda x: x-x.quantile(0.25)
        # this will fail, so we can try a vectorized evaluation
        # we cannot FIRST try the vectorized evaluation, because
        # then .agg and .apply would have different semantics if the
        # operation is actually defined on the Series, e.g. str
        try:
            result = self.apply(func, *args, **kwargs)
        except (ValueError, AttributeError, TypeError):
            result = func(self, *args, **kwargs)
    return result
</code></pre>
<p>So, first it will pass a scalar to the user-defined function. The <code>transform</code> function will invoke <code>s.apply(lambda x: x + x.sum())</code>; this raises an <code>AttributeError</code>, and then it will pass the whole Series to the user-defined function. For example:</p>
<pre><code>def func(x):
    print(type(x))
    print(x)
    return x + x.sum()
s.transform(func)
<class 'int'>
1
<class 'pandas.core.series.Series'>
0 1
1 2
2 3
3 4
4 5
5 6
6 7
Name: A, dtype: int64
0 29
1 30
2 31
3 32
4 33
5 34
6 35
Name: A, dtype: int64
</code></pre>
|
pandas|transform|series
| 0
|
1,554
| 55,501,746
|
Type Error when multiplying slope by a list of X
|
<p><strong>Below is the problem I am running into:</strong></p>
<p><em>Linear Regression - Given 16 pairs of prices (as dependent variable) and
corresponding demands (as independent variable), use the linear regression tool to estimate the best fitting
linear line.</em></p>
<pre><code>Price Demand
127 3420
134 3400
136 3250
139 3410
140 3190
141 3250
148 2860
149 2830
151 3160
154 2820
155 2780
157 2900
159 2810
167 2580
168 2520
171 2430
</code></pre>
<p><strong>Here is my code:</strong></p>
<pre><code>from pylab import *
from numpy import *
from scipy.stats import *
x = [3420, 3400, 3250, 3410, 3190, 3250, 2860, 2830, 3160, 2820, 2780, 2900, 2810, 2580, 2520, 2430]
np.asarray(x,dtype= np.float64)
y = [127, 134, 136 ,139, 140, 141, 148, 149, 151, 154, 155, 157, 159, 167, 168, 171]
np.asarray(y, dtype= np.float64)
slope,intercept,r_value,p_value,slope_std_error = stats.linregress(x,y)
y_modeled = x*slope+intercept
plot(x,y,'ob',markersize=2)
plot(x,y_modeled,'-r',linewidth=1)
show()
</code></pre>
<p><strong>Here is the error I get:</strong></p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-48-0a0274c24b19>", line 13, in <module>
y_modeled = x*slope+intercept
TypeError: can't multiply sequence by non-int of type 'numpy.float64'
</code></pre>
|
<p>You did not convert the Python lists to numpy arrays here:</p>
<pre><code>x = [3420, 3400, 3250, 3410, 3190, 3250, 2860, 2830, 3160, 2820, 2780, 2900, 2810, 2580, 2520, 2430]
np.asarray(x,dtype= np.float64)
</code></pre>
<p><code>np.asarray</code> returns a numpy array, but does not modify the original. You can do this instead:</p>
<pre><code>x = [3420, 3400, 3250, 3410, 3190, 3250, 2860, 2830, 3160, 2820, 2780, 2900, 2810, 2580, 2520, 2430]
x = np.asarray(x, dtype=np.float64)
</code></pre>
<p>There is a big difference in how numpy array multiplication works compared to how Python list multiplication works. See here:</p>
<pre><code>>>> 3 * np.array([1, 2, 3])
array([3, 6, 9])
>>> 3 * [1, 2, 3]
[1, 2, 3, 1, 2, 3, 1, 2, 3]
</code></pre>
<p>In your case, you tried to do the latter (list multiplication), but you multiplied with a float, which cannot work and that is what the error said.</p>
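<p>For completeness, a corrected version of the whole script might look like the sketch below (explicit imports instead of star imports; the data is unchanged):</p>
<pre><code>import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

x = np.asarray([3420, 3400, 3250, 3410, 3190, 3250, 2860, 2830,
                3160, 2820, 2780, 2900, 2810, 2580, 2520, 2430], dtype=np.float64)
y = np.asarray([127, 134, 136, 139, 140, 141, 148, 149,
                151, 154, 155, 157, 159, 167, 168, 171], dtype=np.float64)

slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
y_modeled = x * slope + intercept   # now valid: ndarray * float

plt.plot(x, y, 'ob', markersize=2)
plt.plot(x, y_modeled, '-r', linewidth=1)
plt.show()
</code></pre>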
|
python|python-2.7|numpy|linear-regression
| 1
|
1,555
| 66,885,879
|
Ranking and extracting the data from a column of a dataframe?
|
<p>I have a file with ranked information:</p>
<p>df1</p>
<pre><code>Type Rank
frameshift 1
stop_gained 2
stop_lost 3
splice_region_variant 4
splice_acceptor_variant 5
splice_donor_variant 6
missense_variant 7
coding_sequence_variant 8
intron_variant 9
NMD_transcript_variant 10
non_coding 11
</code></pre>
<p>and another file containing values separated with <code>,</code></p>
<p>df2['Variants']</p>
<pre><code>A|intron_variant&NMD_transcript_variant|MODIFIER|23||,A|intron_variant&non_coding|MODIFIER|||,A|intron_variant&non_coding|MODIFIER|||
G|missense_variant&splice_region_variant|HIGH|85||,A|intron_variant&non_coding|MODIFIER|||,A|intron_variant&non_coding|MODIFIER|||
G|missense_variant|MODERATE|23||,G|frameshift&intron_variant|HIGH|||,G|intron_variant&non_coding|MODIFIER|||,G|frameshift&missense_variant|HIGH|42||
G|missense_variant|MODERATE|23||,G|intron_variant|MODIFIER|||,G|intron_variant&non_coding|MODIFIER|||,G|stop_gained&splice_region_variant|HIGH|||
G|missense_variant|MODERATE|23||
G|missense_variant&stop_lost|HIGH|12||
</code></pre>
<p>I want to extract the data from <code>df2['Variants']</code> based on the rank order given in <code>df1</code>. One complication in the data is that values are sometimes combined with <code>&</code>, as in <code>frameshift&intron_variant</code>. In such cases, I want to split the data on <code>&</code> and consider each part by its rank. Likewise, I want to extract the values from the data as:</p>
<pre><code>Extracted Ranked
A|intron_variant&NMD_transcript_variant|MODIFIER|23|| intron_variant
G|missense_variant&splice_region_variant|HIGH|85|| splice_region_variant
G|frameshift&intron_variant|HIGH||| frameshift
G|stop_gained&splice_region_variant|HIGH||| stop_gained
G|missense_variant|MODERATE|23|| missense_variant
G|missense_variant&stop_lost|HIGH|12|| stop_lost
</code></pre>
<p>I was able to split the values on <code>&</code> using the code given <a href="https://stackoverflow.com/questions/65951583/how-to-replace-the-column-of-dataframe-based-on-priority-order/65952228">here</a>, but I am unable to extract the highest-ranked variants from <code>df1</code> when multiple values are separated by a comma <code>,</code>.</p>
<p>Thanks</p>
|
<p>You can change the list comprehension to first split by <code>|</code> (or <code>,</code>) and then by <code>&</code>:</p>
<pre><code>d = df1.set_index('Type')['Rank'].to_dict()
max1 = df1['Rank'].max()+1
def f(x):
    d1 = {z: d.get(z, max1) for y in x for y in x.split('|') for z in y.split('&')}
    #https://stackoverflow.com/a/280156/2901002
    return min(d1, key=d1.get)
df2['Anno_prio'] = df2['Variants'].apply(f)
print (df2)
Variants Anno_prio
0 A|intron_variant&NMD_transcript_variant|MODIFI... intron_variant
1 G|missense_variant&splice_region_variant|HIGH|... splice_region_variant
2 G|missense_variant|MODERATE|23||,G|frameshift&... frameshift
3 G|missense_variant|MODERATE|23||,G|intron_vari... stop_gained
4 G|missense_variant|MODERATE|23|| missense_variant
5 G|missense_variant&stop_lost|HIGH|12|| stop_lost
</code></pre>
<p>EDIT:</p>
<p>To also get the values split by <code>,</code>, I prefer a pandas solution, splitting first by <code>,</code> and then by <code>&</code> or <code>|</code>:</p>
<pre><code>d = df1.set_index('Type')['Rank'].to_dict()
df = (df2.assign(Extracted = df2['Variants'].str.split(','))
.explode('Extracted')
.assign(Ranked = lambda x: x['Extracted'].str.split('&|\|'))
.explode('Ranked')
.assign(Rank = lambda x: x['Ranked'].map(d))
.sort_values('Rank')
)
df = df[~df.index.duplicated()].sort_index()
print (df)
Variants Extracted Ranked Rank
0 K|a&b|MOD|,K|d|LOW|,K|a&e|MOD| K|a&b|MOD| a 1.0
1 J|c&d&a|MOD|,J|b&c&d|MOD| J|c&d&a|MOD| a 1.0
2 H|b&c|HIGH|,H|b| H|b&c|HIGH| b 2.0
</code></pre>
|
python|pandas
| 2
|
1,556
| 66,860,702
|
Dataframe doesn't show all columns
|
<p>I am working in Python with the compas-scores dataset of 47 columns and 11757 rows (<a href="https://github.com/propublica/compas-analysis" rel="nofollow noreferrer">https://github.com/propublica/compas-analysis</a>).</p>
<p>I've noticed that when I display the dataset, it doesn't show me all the columns. Instead I have a column in the middle with three dots [...] (see the example in the screenshot). The import statement for pandas was: <code>import pandas as pd</code></p>
<p>Is there a way to show all the 47 columns?</p>
<p><a href="https://i.stack.imgur.com/gKjwb.png" rel="nofollow noreferrer">compas-score screenshot</a></p>
<p>I would appreciate your help.</p>
<p><strong>SOLUTION:</strong></p>
<p>I had to allow all the columns to be displayed by adding one line of code. This solution worked for me:</p>
<p><code>pd.options.display.max_columns = None</code></p>
|
<p>assuming your import statement is <code>import pandas as pd</code></p>
<p>I'd suggest trying one of the below:</p>
<pre><code>pd.options.display.max_columns = None
</code></pre>
<p>or</p>
<pre><code>pd.set_option('display.max_columns', None)
</code></pre>
<p>same can be used for <code>max_rows</code>:</p>
<pre><code>pd.set_option('display.max_rows', None)
pd.options.display.max_rows = None
</code></pre>
|
python|pandas|dataframe
| 2
|
1,557
| 68,105,972
|
pandas pivot based on unique values and criteria
|
<p>I have this dataframe:</p>
<pre><code>df_in = pd.DataFrame({'id': ['123', '123', '123', '123', '123', '456'],
'ven_group': ['a', 'a', 'a', 'b', 'f', 'f'],
'date': ['1/1/21', '2/1/21', '3/1/21', '1/1/21', '1/1/21', '1/1/21']
})
</code></pre>
<p><a href="https://i.stack.imgur.com/4rmyc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4rmyc.png" alt="enter image description here" /></a></p>
<p>I have the following criteria (this is the list of ven_group values that I need):</p>
<pre><code>ven_group_li = ['a', 'b', 'c']
</code></pre>
<p>This is the output I need:</p>
<p><a href="https://i.stack.imgur.com/l2SRL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l2SRL.png" alt="enter image description here" /></a></p>
<p>Basically it's a pivot table with every unique id as a row and, for every ven_group name that is in ven_group_li, the min and max date for that ven_group; if the ven_group name is not in the list, the row is filled with NaN.</p>
<p>I tried this, but I don't know how to modify it to include my ven_group requirement and have min, max dates:</p>
<pre><code>df_out1 = df_in.groupby('id')['ven_group'].apply(lambda x: pd.DataFrame(x.unique()).T).reset_index(level=1, drop=True)
</code></pre>
|
<p>One Way:</p>
<pre><code>unique_id = df.id.unique()
ven_group_li = ['a', 'b', 'c']
df = df[df.ven_group.isin(ven_group_li)]
df1 = df.groupby(['id', 'ven_group']).agg(
[min, max]).reset_index(-1).groupby(level=0).agg(list)
df1.columns = ['name', 'max', 'min']
df2 = pd.concat(
[df1[c].apply(pd.Series).add_prefix("ven_" + c + "_") for c in df1], axis=1
)
df2 = df2[sorted(df2.columns, key=lambda x: x.split('_')[-1])].reindex(unique_id)
</code></pre>
<h4>OUTPUT:</h4>
<pre><code> ven_name_0 ven_max_0 ven_min_0 ven_name_1 ven_max_1 ven_min_1
id
123 a 1/1/21 3/1/21 b 1/1/21 1/1/21
456 NaN NaN NaN NaN NaN NaN
</code></pre>
|
python|pandas|group-by|pivot-table|apply
| 1
|
1,558
| 59,441,896
|
Using NLTK to tokeniz sentences to words using pandas
|
<p>I'm trying to tokenize sentences from a CSV file into words, but my loop is not moving on to the next sentence; it only processes the first column. Any idea where the mistake is?
This is how my CSV file looks: <a href="https://i.stack.imgur.com/juWSL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/juWSL.png" alt="enter image description here"></a></p>
<pre><code>import re
import string
import pandas as pd
text=pd.read_csv("data.csv")
from nltk.tokenize import word_tokenize
tokenized_docs=[word_tokenize(doc) for doc in text]
x=re.compile('[%s]' % re.escape(string.punctuation))
tokenized_docs_no_punctuation = []
</code></pre>
<p>the output I'm getting is like this</p>
<p><a href="https://i.stack.imgur.com/OD7yg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OD7yg.png" alt="enter image description here"></a></p>
<p>I expected it to loop over all sentences, not just one.</p>
|
<p>You just need to change the code to grab the sentences:</p>
<pre><code>import re
import string
import pandas as pd
text=pd.read_csv("out157.txt", sep="|")
from nltk.tokenize import word_tokenize
tokenized_docs=[word_tokenize(doc) for doc in text['SENTENCES']]
x=re.compile('[%s]' % re.escape(string.punctuation))
tokenized_docs_no_punctuation = []
</code></pre>
|
python|pandas|dataframe|nltk
| 3
|
1,559
| 59,457,670
|
Best case to use tensorflow
|
<p>I followed all the steps mentioned in the article:</p>
<p><a href="https://stackabuse.com/tensorflow-2-0-solving-classification-and-regression-problems/" rel="nofollow noreferrer">https://stackabuse.com/tensorflow-2-0-solving-classification-and-regression-problems/</a></p>
<p>Then I compared the results with linear regression and found that its error (68) is lower than that of the TensorFlow model (84).</p>
<pre><code>from sklearn.linear_model import LinearRegression
logreg_clf = LinearRegression()
logreg_clf.fit(X_train, y_train)
pred = logreg_clf.predict(X_test)
print(np.sqrt(mean_squared_error(y_test, pred)))
</code></pre>
<p>Does this mean that if I have a large dataset, I will get better results than linear regression?
In what situations should I be using TensorFlow?</p>
|
<p>Answering your first question: neural networks are notorious for <a href="https://en.wikipedia.org/wiki/Overfitting" rel="nofollow noreferrer">overfitting</a> on smaller datasets, and here you are comparing the performance of a simple linear regression model with a neural network with two hidden layers on the testing data set, so it's not very surprising to see the MLP model fall behind the linear regression model (assuming that you are working with a relatively small dataset). Larger datasets will definitely help neural networks learn more accurate parameters and generalize well. </p>
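<p>If you do stay with the neural network on a small dataset, regularisation and early stopping usually narrow that gap. A minimal sketch (it assumes TensorFlow 2.x / <code>tf.keras</code> and the <code>X_train</code>/<code>y_train</code> arrays produced by the tutorial's preprocessing):</p>
<pre><code>import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dropout(0.2),   # dropout as a simple regulariser
    tf.keras.layers.Dense(1)        # single output for regression
])
model.compile(optimizer='adam', loss='mse')

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=20, restore_best_weights=True)

# X_train / y_train are assumed to come from the tutorial's preprocessing
model.fit(X_train, y_train,
          validation_split=0.2,     # hold out part of the training data
          epochs=500, callbacks=[early_stop], verbose=0)
</code></pre>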
<p>Now coming to your second question: TensorFlow is basically a library for building deep learning models, so whenever you are working on a deep learning problem like image recognition, natural language processing, etc., you need massive computational power and will be processing a ton of data to train your models. This is where TensorFlow comes in handy: it offers GPU support, which significantly speeds up a training process that would otherwise be practically impossible. Moreover, if you are building a product that has to be deployed in a production environment, you can make use of <a href="https://www.tensorflow.org/tfx/guide/serving" rel="nofollow noreferrer">TensorFlow Serving</a>, which helps you take your models much closer to the customers.</p>
<p>Hope this helps!</p>
|
tensorflow|scikit-learn|regression|linear-regression
| 2
|
1,560
| 57,087,585
|
How do I get this output?
|
<p>I have a very large <code>Pandas DataFrame</code>, a part of which looks like this: </p>
<pre><code> Rule_Name Rule_Seq_No Condition Expression Type
Rule P 1 ID 19909 Action
Rule P 1 Type A Condition
Rule P 1 System B Condition
Rule P 2 ID 19608 Action
Rule P 2 Type A Condition
Rule P 2 System C Condition
Rule S 1 ID 19909 Action
Rule S 1 Type A Condition
Rule S 1 System M Condition
Rule S 2 ID 19608 Action
Rule S 2 Type C Condition
Rule S 2 System F Condition
</code></pre>
<p>This table contains some rules with sequence numbers. </p>
<p>I tried using different functions such as <code>MERGE</code>, <code>GROUP BY</code>, <code>APPLY</code> but I am not getting the desired output. </p>
<p>The expected output should be something like this: </p>
<pre><code> Rule_Name Rule_Seq_No Condition Action
Rule P 1 (Type=A)and(System=B) 19909
Rule P 2 (Type=A)and(System=C) 19608
Rule S 1 (Type=A)and(System=M) 19909
Rule S 2 (Type=A)and(System=F) 19608
</code></pre>
<p>For the same rule and the same sequence number and where the <code>TYPE</code> is <code>Condition</code>, I want to merge the rows. And, where the <code>TYPE</code> is <code>ACTION</code>, it should show in a separate column.</p>
|
<p>Use:</p>
<pre><code>df1 = (df.assign(Condition = '(' + df['Condition'] + '=' + df['Expression'] + ')')
.groupby(['Rule_Name','Rule_Seq_No','Type'])
.agg({'Condition': 'and'.join, 'Expression':'first'})
.unstack()
.drop([('Condition','Action'), ('Expression','Condition')], axis=1)
.droplevel(axis=1, level=0)
.reset_index()
.rename_axis(None, axis=1))
print (df1)
Rule_Name Rule_Seq_No Condition Action
0 Rule P 1 (Type=A)and(System=B) 19909
1 Rule P 2 (Type=A)and(System=C) 19608
2 Rule S 1 (Type=A)and(System=M) 19909
3 Rule S 2 (Type=C)and(System=F) 19608
</code></pre>
<p><strong>Explanation</strong>:</p>
<ol>
<li>Join columns <code>Condition</code> and <code>Expression</code> with <code>=</code> and add <code>()</code></li>
<li>Aggregate by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.agg.html" rel="nofollow noreferrer"><code>GroupBy.agg</code></a> with <code>join</code> and <code>first</code></li>
<li>Reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>DataFrame.unstack</code></a></li>
<li>Remove unnecessary columns by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html" rel="nofollow noreferrer"><code>DataFrame.drop</code></a> with tuples, because <code>MultiIndex</code></li>
<li>Remove top level of <code>MultiIndex</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.droplevel.html" rel="nofollow noreferrer"><code>DataFrame.droplevel</code></a></li>
<li>Data cleaning by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>DataFrame.reset_index</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename_axis.html" rel="nofollow noreferrer"><code>DataFrame.rename_axis</code></a></li>
</ol>
<p>EDIT:</p>
<p>Solution for older pandas versions (below 0.24) with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.droplevel.html" rel="nofollow noreferrer"><code>Index.droplevel</code></a>:</p>
<pre><code>df1 = (df.assign(Condition = '(' + df['Condition'] + '=' + df['Expression'] + ')')
.groupby(['Rule_Name','Rule_Seq_No','Type'])
.agg({'Condition': 'and'.join, 'Expression':'first'})
.unstack()
.drop([('Condition','Action'), ('Expression','Condition')], axis=1))
df1.columns = df1.columns.droplevel(level=0)
df1 = df1.reset_index().rename_axis(None, axis=1)
print (df1)
Rule_Name Rule_Seq_No Condition Action
0 Rule P 1 (Type=A)and(System=B) 19909
1 Rule P 2 (Type=A)and(System=C) 19608
2 Rule S 1 (Type=A)and(System=M) 19909
3 Rule S 2 (Type=C)and(System=F) 19608
</code></pre>
|
python-3.x|pandas|dataframe
| 5
|
1,561
| 57,037,742
|
How to data clean in groups
|
<p>I have an extremely long dataframe with a lot of data which I have to clean so that I can proceed with data visualization. There are several things I have in mind that need to be done, and I can do each of them to a certain extent, but I don't know how, or whether it's even possible, to do them together. </p>
<p>This is what I have to do:</p>
<ol>
<li>Find the highest arrival count every year and see if the mode of transport is by air, sea or land.</li>
</ol>
<pre><code> period arv_count Mode of arrival
0 2013-01 984350 Air
1 2013-01 129074 Sea
2 2013-01 178294 Land
3 2013-02 916372 Air
4 2013-02 125634 Sea
5 2013-02 179359 Land
6 2013-03 1026312 Air
7 2013-03 143194 Sea
8 2013-03 199385 Land
... ... ... ...
78 2015-03 940077 Air
79 2015-03 133632 Sea
80 2015-03 127939 Land
81 2015-04 939370 Air
82 2015-04 118120 Sea
83 2015-04 151134 Land
84 2015-05 945080 Air
85 2015-05 123136 Sea
86 2015-05 154620 Land
87 2015-06 930642 Air
88 2015-06 115631 Sea
89 2015-06 138474 Land
</code></pre>
<p>This is an example of what the data looks like. I don't know if it's necessary but I have created another column just for year like so:</p>
<pre><code>def year_extract(year):
    return year.split('-')[0].strip()
df1 = pd.DataFrame(df['period'])
df1 = df1.rename(columns={'period':'Year'})
df1 = df1['Year'].apply(year_extract)
df1 = pd.DataFrame(df1)
df = pd.merge(df, df1, left_index= True, right_index= True)
</code></pre>
<p>I know how to use groupby and I know how to find a maximum, but I don't know if it is possible to find the maximum within a group, like finding the highest arrival count in 2013, 2014, 2015, etc.</p>
<p>The data above is the total arrival count for all countries based on the mode of transport and period, but the original data also had hundreds of additional rows in which the region and country are stated, which I dropped because I don't know how to use or clean them. It looks like this:</p>
<pre><code>period region country moa arv_count
2013-01 Total Total Air 984350
2013-01 Total Total Sea 129074
2013-01 Total Total Land 178294
2013-02 Total Total Air 916372
... ... ... ... ...
2015-12 AMERICAS USA Land 2698
2015-12 AMERICAS Canada Land 924
2013-01 ASIA China Air 136643
2013-01 ASIA India Air 55369
2013-01 ASIA Japan Air 51178
</code></pre>
<p>I would also like to make use of the region data if it is possible. I am hoping to create a clustered column chart with the 7 regions as the x axis and arrival count as the y axis, with each region showing the arrival count via land, sea and air, but I feel like there is too much excess data that I don't know how to deal with right now.</p>
<p>For example, I don't know how to deal with the period and the country, because all I need is the total arrival count for land, sea and air based on region and year, regardless of country and month.</p>
|
<p>I used this dataframe to test the code (the one in your question):</p>
<pre><code>df = pd.DataFrame([['2013-01', 'Total', 'Total', 'Air', 984350],
['2013-01', 'Total', 'Total', 'Sea', 129074],
['2013-01', 'Total', 'Total', 'Land', 178294],
['2013-02', 'Total', 'Total', 'Air', 916372],
['2015-12', 'AMERICAS', 'USA', 'Land', 2698],
['2015-12', 'AMERICAS', 'Canada', 'Land', 924],
['2013-01', 'ASIA', 'China', 'Air', 136643],
['2013-01', 'ASIA', 'India', 'Air', 55369],
['2013-01', 'ASIA', 'Japan', 'Air', 51178]],
columns = ['period', 'region', 'country', 'moa', 'arv_count'])
</code></pre>
<p>Here is the code to get the sum of arrival counts, by year, region and type (sea, land air):</p>
<p>First add a 'year' column:</p>
<pre><code>df['year'] = pd.to_datetime(df['period']).dt.year
</code></pre>
<p>Then group by (year, region, moa) and sum arv_count in each group:</p>
<pre><code>df.groupby(['region', 'year', 'moa']).arv_count.sum()
</code></pre>
<p>Here is the output:</p>
<pre><code>region year moa
AMERICAS 2015 Land 3622
ASIA 2013 Air 243190
Total 2013 Air 1900722
Land 178294
Sea 129074
</code></pre>
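<p>For the first part of your question (the highest arrival count per year and its mode of transport), a small sketch on the same frame, restricted to the 'Total' summary rows:</p>
<pre><code>totals = df[df['region'] == 'Total']
best = totals.loc[totals.groupby('year')['arv_count'].idxmax(),
                  ['year', 'moa', 'arv_count']]
print(best)
</code></pre>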
<p>I hope this is what you were looking for!</p>
|
python|pandas|data-cleaning
| 1
|
1,562
| 45,953,287
|
Check values of the keys in dictionary and construct matrix for keys only using Python
|
<p>I have a dictionary as mentioned below.</p>
<pre><code>mydictionary = {'colours': ['red', 'pink', 'blue', 'green', 'yellow'],
                'animals': ['cat', 'rat', 'dog', 'goat'],
                'vehicles': ['car', 'jeep', 'van', 'bus', 'lorry']}
</code></pre>
<p>I also have a matrix for each value of the keys as given in the examples below (if that value is in the essay I read I make it 1, otherwise 0).</p>
<pre><code> red, pink, blue, green, yellow, cat, rat, ... bus, lorry
Essay1: 1 0 0 1 1 0 0 1 0
Essay2: 0 1 0 0 1 0 1 0 0
</code></pre>
<p>Hence, this looks like a list of lists.</p>
<pre><code>[[1,0,0,1,1,0,0,1,0], [0,1,0,0,1,0,1,0,0]]
</code></pre>
<p>Now, by using 'mydictionary', I want to transform the above matrix as follows (that is, if one or more values of a key is 1, I mark the key as 1, else 0).</p>
<pre><code> Colours, Animals, Vehicles
Essay1: 1 0 1
Essay2: 1 1 0
</code></pre>
<p>The above mentioned matrix can be written as a list of lists as below.</p>
<pre><code>[[1,0,1], [1,1,0]]
</code></pre>
<p>I am new to pandas; hence, I am interested in knowing whether this is possible using pandas dataframes.</p>
|
<pre><code>m = {v: k for k, l in mydictionary.items() for v in l}
df.groupby(df.columns.map(m.get), 1).sum().clip(0, 1)
animals colours vehicles
Essay1 0 1 1
Essay2 1 1 0
</code></pre>
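<p>A self-contained sketch of the same idea, with the dictionary quoted as strings and made-up 0/1 values for the 14 columns (the question only shows 9 of them), in case it makes the mapping easier to follow:</p>
<pre><code>import pandas as pd

mydictionary = {'colours': ['red', 'pink', 'blue', 'green', 'yellow'],
                'animals': ['cat', 'rat', 'dog', 'goat'],
                'vehicles': ['car', 'jeep', 'van', 'bus', 'lorry']}
cols = [v for l in mydictionary.values() for v in l]

# the 0/1 values below are made up for illustration
df = pd.DataFrame([[1, 0, 0, 1, 1,  0, 0, 0, 0,  0, 0, 0, 1, 0],
                   [0, 1, 0, 0, 1,  0, 1, 0, 0,  0, 0, 0, 0, 0]],
                  index=['Essay1', 'Essay2'], columns=cols)

m = {v: k for k, l in mydictionary.items() for v in l}
print(df.groupby(df.columns.map(m.get), axis=1).sum().clip(0, 1))
# if your pandas version rejects axis=1 grouping, group the transpose instead:
# print(df.T.groupby(m.get).sum().clip(0, 1).T)
</code></pre>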
|
python|pandas
| 3
|
1,563
| 46,144,429
|
Tensorflow Extract Indices Not Equal to Zero
|
<p>I want to return a dense tensor of the non-zero indices for each row. For example, given the tensor:</p>
<pre><code>[0,1,1]
[1,0,0]
[0,0,1]
[0,1,0]
</code></pre>
<p>Should return</p>
<pre><code>[1,2]
[0]
[2]
[1]
</code></pre>
<p>I can get the indices using tf.where(), but I do not know how to combine the results based on the first index. For example:</p>
<pre><code>graph = tf.Graph()
with graph.as_default():
    data = tf.constant([[0,1,1],[1,0,0],[0,0,1],[0,1,0]])
    indices = tf.where(tf.not_equal(data,0))
sess = tf.InteractiveSession(graph=graph)
sess.run(tf.local_variables_initializer())
print(sess.run([indices]))
</code></pre>
<p>The above code returns:</p>
<pre><code>[array([[0, 1],
[0, 2],
[1, 0],
[2, 2],
[3, 1]])]
</code></pre>
<p>However, I would like to combine the results based on the first column of these indices. Can anybody suggest a way to do this? </p>
<p><strong>UPDATE</strong></p>
<p>Trying to get this to work for a larger number of dimensions and running into an error. If I run the code below on the matrix</p>
<pre><code>sess = tf.InteractiveSession()
a = tf.constant([[0, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 1]])
row_counts = tf.reduce_sum(a, axis=1)
max_padding = tf.reduce_max(row_counts)
extra_padding = max_padding - row_counts
extra_padding_col = tf.expand_dims(extra_padding, 1)
range_row = tf.expand_dims(tf.range(max_padding), 0)
padding_array = tf.cast(tf.tile(range_row, [9, 1])<extra_padding_col, tf.int32)
b = tf.concat([a, padding_array], axis=1)
result = tf.map_fn(lambda x: tf.cast(tf.where(tf.not_equal(x, 0)), tf.int32), b)
result = tf.where(result<=max_padding, result, -1*tf.ones_like(result)) # replace with -1's
result = tf.reshape(result, (int(result.get_shape()[0]), max_padding))
result.eval()
</code></pre>
<p>Then I will get too many -1's so the solution seems to not quite be there:</p>
<pre><code>[[ 1, 2],
[ 2, -1],
[-1, -1],
[-1, -1],
[-1, -1],
[-1, -1],
[-1, -1],
[-1, -1],
[ 0, -1]]
</code></pre>
|
<p>Notice that in your example, the output is not a matrix but a jagged array. Jagged arrays have limited support in TensorFlow (through TensorArray), so it's more convenient to deal with rectangular arrays. You could pad each row with -1's to make the output rectangular</p>
<p>Suppose your output was already rectangular, without padding you could use <code>map_fn</code> as follows</p>
<pre><code>tf.reset_default_graph()
sess = tf.InteractiveSession()
a = tf.constant([[0,1,1],[1,1,0],[1,0,1],[1,1,0]])
# cast needed because map_fn likes to keep same dtype, but tf.where returns int64
result = tf.map_fn(lambda x: tf.cast(tf.where(tf.not_equal(x, 0)), tf.int32), a)
# remove extra level of nesting
sess.run(tf.reshape(result, (4, 2)))
</code></pre>
<p>Output is</p>
<pre><code>array([[1, 2],
[0, 1],
[0, 2],
[0, 1]], dtype=int32)
</code></pre>
<p>When padding is needed, you could do something like this</p>
<pre><code>sess = tf.InteractiveSession()
a = tf.constant([[0, 1, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 1]])
row_counts = tf.reduce_sum(a, axis=1)
max_padding = tf.reduce_max(row_counts)
max_index = int(a.get_shape()[1])
extra_padding = max_padding - row_counts
extra_padding_col = tf.expand_dims(extra_padding, 1)
range_row = tf.expand_dims(tf.range(max_padding), 0)
num_rows = tf.squeeze(tf.shape(a)[0])
padding_array = tf.cast(tf.tile(range_row, [num_rows, 1])<extra_padding_col, tf.int32)
b = tf.concat([a, padding_array], axis=1)
result = tf.map_fn(lambda x: tf.cast(tf.where(tf.not_equal(x, 0)), tf.int32), b)
result = tf.where(result<max_index, result, -1*tf.ones_like(result)) # replace with -1's
result = tf.reshape(result, (int(result.get_shape()[0]), max_padding))
result.eval()
</code></pre>
<p>This should produce</p>
<pre><code>array([[ 1, 2],
[ 2, -1],
[ 4, -1],
[ 5, 6],
[ 6, -1],
[ 7, 9],
[ 8, -1],
[ 9, -1],
[ 0, 9]], dtype=int32)
</code></pre>
|
python|tensorflow
| 1
|
1,564
| 66,437,545
|
Converting column type 'datetime64[ns]' to datetime in Python3
|
<p>I would like to perform a comparison between two dates in Python 3 (one from a pandas dataframe, the other one calculated). I would like to filter the pandas dataframe where the values in 'Publication_date' are less than or equal to today's date and greater than the date 10 years ago.</p>
<p>The pandas df looks like this:</p>
<pre><code> PMID Publication_date
0 31611796 2019-09-27
1 33348808 2020-12-17
2 12089324 2002-06-27
3 31028872 2019-04-25
4 26805781 2016-01-21
</code></pre>
<p>I am doing the comparison as shown below.</p>
<pre><code>df[(df['Publication_date']> datetime.date.today() - datetime.timedelta(days=3650)) &
(df['Publication_date']<= datetime.date.today())]
</code></pre>
<p>The above date filter, when applied to the df, should not return Row:3 of the df.</p>
<p>'Publication_date' column has type 'string'. I converted it to date using below line in my script.</p>
<pre><code>df_phenotype['publication_date']= pd.to_datetime(df_phenotype['publication_date'])
</code></pre>
<p>But it changes the column type to 'datetime64[ns]', which makes the comparison between 'datetime64[ns]' and datetime incompatible.</p>
<p>How can I perform this comparison?</p>
<p>Any help is highly appreciated.</p>
|
<p>You can use only pandas for working with datetimes - <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Timestamp.floor.html" rel="nofollow noreferrer"><code>Timestamp.floor</code></a> is for remove times from datetimes (set times to <code>00:00:00</code>):</p>
<pre><code>df['Publication_date']= pd.to_datetime(df['Publication_date'])
today = pd.to_datetime('now').floor('d')
df1 = df[(df['Publication_date']> today - pd.Timedelta(days=3650)) &
(df['Publication_date']<= today)]
</code></pre>
<p>Also you can use <code>10 years</code> offset:</p>
<pre><code>today = pd.to_datetime('now').floor('d')
df1 = df[(df['Publication_date']> today - pd.offsets.DateOffset(years=10)) &
(df['Publication_date']<= today)]
</code></pre>
<hr />
<pre><code>print (df1)
PMID Publication_date
0 31611796 2019-09-27
1 33348808 2020-12-17
3 31028872 2019-04-25
4 26805781 2016-01-21
</code></pre>
|
python|pandas|dataframe|datetime
| 1
|
1,565
| 66,688,647
|
Fill tensor with another tensor where mask is true
|
<p>I need to insert elements of tensor <em>new</em> into a tensor <em>old</em> with a certain probability; let's say it is 0.8 for simplicity.
Essentially this is what masked_fill would do, but it only fills with a single scalar value rather than with another tensor.
Currently I am doing</p>
<pre><code> prob = torch.rand(trgs.shape, dtype=torch.float32).to(trgs.device)
mask = prob < 0.8
dim1, dim2, dim3, dim4 = new.shape
for a in range(dim1):
    for b in range(dim2):
        for c in range(dim3):
            for d in range(dim4):
                old[a][b][c][d] = old[a][b][c][d] if mask[a][b][c][d] else new[a][b][c][d]
</code></pre>
<p>which is awful. I would like something like</p>
<pre><code> prob = torch.rand(trgs.shape, dtype=torch.float32).to(trgs.device)
mask = prob < 0.8
old = trgs.multidimensional_masked_fill(mask, new)
</code></pre>
|
<p>I am not sure what some of your objects are, but this should get you what you need in short order:</p>
<p><code>old</code> is the your existing data.</p>
<p><code>mask</code> is the mask you generated with probability p</p>
<p><code>new</code> is the new tensor that has elements you want to insert.</p>
<pre><code># torch.where
result = old.where(mask, new)
</code></pre>
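<p>A quick self-contained check of the idea (shapes here are just illustrative; <code>old.where(mask, new)</code> is equivalent to <code>torch.where(mask, old, new)</code>):</p>
<pre><code>import torch

old = torch.zeros(2, 3, 4, 5)
new = torch.ones(2, 3, 4, 5)

prob = torch.rand(old.shape)
mask = prob < 0.8                       # True with probability ~0.8 -> keep `old` there

result = torch.where(mask, old, new)    # elementwise: old where mask is True, else new
print(result.shape)                     # torch.Size([2, 3, 4, 5])
print(result.eq(new).float().mean())    # roughly 0.2 of the entries come from `new`
</code></pre>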
|
python|pytorch|tensor
| 2
|
1,566
| 57,337,931
|
Tensorflow : DLL load failed: A dynamic link library (DLL) initialization routine failed
|
<p>I am setting up a Captcha solver with TensorFlow object detection and I get this error:</p>
<pre>
DLL load failed: A dynamic link library (DLL) initialization routine failed.
</pre>
<p>It is on a Windows Server with Python 3.7.3 and TensorFlow 1.14.0.
I am not using tensorflow-gpu, but I still get this error:</p>
<pre><code>import tensorflow as tf
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
Failed to load the native TensorFlow runtime.
</code></pre>
<p>I did everything and installed all of the required libraries.</p>
<p>I saw the other questions on this theme on Stack Overflow, but all of them were about solving it for tensorflow-gpu.</p>
|
<p><strong>TensorFlow release binaries version 1.6 and higher are prebuilt with AVX instruction sets.</strong><br />
Therefore, on any CPU that does not have these instruction sets, either the CPU or the GPU version of TF will fail to load.</p>
<p>Apparently, your CPU model does not support AVX instruction sets. You can still use TensorFlow with the alternatives given below:</p>
<ul>
<li><p>Try <a href="https://colab.sandbox.google.com/drive/1P2fbpa3OrOJjwOITnFwwMWC-VLpyCb88" rel="nofollow noreferrer">Google Colab</a> to use TensorFlow.<br />
The easiest way to use TF will be
to switch to <a href="https://colab.sandbox.google.com/drive/1P2fbpa3OrOJjwOITnFwwMWC-VLpyCb88" rel="nofollow noreferrer">google colab</a>. You get pre-installed latest stable TF
version. Also you can use pip install to install any other preferred
TF version.<br />
It has an added advantage since you can easily switch
to different hardware accelerators (cpu, gpu, tpu) as per the task.
All you need is a good internet connection and you are all set.</p>
</li>
<li><p>Try to build TF from sources by changing CPU optimization flags.</p>
</li>
</ul>
|
python|tensorflow|machine-learning
| 3
|
1,567
| 72,868,893
|
Apply function to dataframe column based on combinations of values from other columns
|
<p>I have a dataframe that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Region</th>
<th>Country</th>
<th>Product</th>
<th>Year</th>
<th>Price</th>
</tr>
</thead>
<tbody>
<tr>
<td>Africa</td>
<td>South Africa</td>
<td>ABC</td>
<td>2016</td>
<td>500</td>
</tr>
<tr>
<td>Africa</td>
<td>South Africa</td>
<td>ABC</td>
<td>2017</td>
<td>400</td>
</tr>
<tr>
<td>Africa</td>
<td>South Africa</td>
<td>ABC</td>
<td>2018</td>
<td>15</td>
</tr>
<tr>
<td>Africa</td>
<td>South Africa</td>
<td>ABC</td>
<td>2019</td>
<td>450</td>
</tr>
<tr>
<td>Africa</td>
<td>Uganda</td>
<td>ABC</td>
<td>2016</td>
<td>750</td>
</tr>
<tr>
<td>Africa</td>
<td>Uganda</td>
<td>ABC</td>
<td>2017</td>
<td>670</td>
</tr>
<tr>
<td>Africa</td>
<td>Uganda</td>
<td>ABC</td>
<td>2018</td>
<td>1300</td>
</tr>
<tr>
<td>Africa</td>
<td>Uganda</td>
<td>ABC</td>
<td>2019</td>
<td>890</td>
</tr>
<tr>
<td>Asia</td>
<td>Japan</td>
<td>DEF</td>
<td>2016</td>
<td>500</td>
</tr>
<tr>
<td>Asia</td>
<td>Japan</td>
<td>DEF</td>
<td>2017</td>
<td>420</td>
</tr>
<tr>
<td>Asia</td>
<td>Japan</td>
<td>DEF</td>
<td>2018</td>
<td>415</td>
</tr>
<tr>
<td>Asia</td>
<td>Japan</td>
<td>DEF</td>
<td>2019</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<pre><code>data = {'Region': ['Africa','Africa','Africa','Africa','Africa','Africa','Africa','Africa','Asia','Asia','Asia','Asia'],
'Country': ['South Africa','South Africa','South Africa','South Africa','Uganda','Uganda','Uganda','Uganda','Japan','Japan','Japan','Japan'],
'Product': ['ABC','ABC','ABC','ABC','XYZ','XYZ','XYZ','XYZ','DEF','DEF','DEF','DEF'],
'Year': [2016, 2017, 2018, 2019,2016, 2017, 2018, 2019,2016, 2017, 2018, 2019],
'Price': [500, 400, 15,450,750,670,1300,890,500,420,415,0]}
df = pd.DataFrame(data)
</code></pre>
<p>I want to calculate the Interquartile Range to identify outliers and extract the index positions of potential outliers.</p>
<p>I created a function; however, I am having trouble applying the function to the <code>Price</code> column based on combinations of the <code>Region</code> and <code>Product</code> columns.</p>
<p>My function is below:</p>
<pre><code>def tukeys_method(df, variable, iterable1, iterable2):
    itr1 = df[iterable1].unique() #create list of unique values for iterable 1
    itr2 = df[iterable2].unique() #create list of unique values for iterable 2
    for (i,j) in zip(itr1, itr2):
        #Takes two parameters: dataframe & variable of interest as string
        q1 = df.groupby([iterable1,iterable2])[variable].quantile(0.25) #calculate quantiles
        q3 = df.groupby([iterable1,iterable2])[variable].quantile(0.75) #calculate quantiles
        iqr = q3-q1
        inner_fence = 1.5*iqr
        outer_fence = 3*iqr
        #inner fence lower and upper end
        inner_fence_le = q1-inner_fence
        inner_fence_ue = q3+inner_fence
        #outer fence lower and upper end
        outer_fence_le = q1-outer_fence
        outer_fence_ue = q3+outer_fence
        outliers_prob = []
        outliers_poss = []
        for index, x in enumerate(df.groupby([iterable1,iterable2])[variable]):
            if x <= outer_fence_le or x >= outer_fence_ue:
                outliers_prob.append(index)
        for index, x in enumerate(df.groupby([iterable1,iterable2])[variable]):
            if x <= inner_fence_le or x >= inner_fence_ue:
                outliers_poss.append(index)
    return outliers_prob, outliers_poss
probable_outliers_tm, possible_outliers_tm = tukeys_method(df, "Price",'Region','Product')
</code></pre>
<p>I get the following error when I run the function:</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (570,) (2,)
</code></pre>
<p>Does anyone know what I need to do to fix this?</p>
|
<p>I managed to figure it out; if anyone is interested, the solution is below:</p>
<pre><code># Identify outliers using Tukey's method.
import itertools

def outliers_tukey(df, variable, iterable1, iterable2):
    outliers_prob = []
    outliers_poss = []
    for (i,j) in itertools.product(df[iterable1].unique(), df[iterable2].unique()):
        #Takes two parameters: dataframe & variable of interest as string
        q1 = df.loc[(df[iterable1]==i) & (df[iterable2]==j)][variable].quantile(0.25)
        q3 = df.loc[(df[iterable1]==i) & (df[iterable2]==j)][variable].quantile(0.75)
        iqr = q3-q1
        inner_fence = 1.5*iqr
        outer_fence = 3*iqr
        #inner fence lower and upper end
        inner_fence_le = q1-inner_fence
        inner_fence_ue = q3+inner_fence
        #outer fence lower and upper end
        outer_fence_le = q1-outer_fence
        outer_fence_ue = q3+outer_fence
        for index, x in enumerate(df[variable]):
            if x <= outer_fence_le or x >= outer_fence_ue:
                outliers_prob.append(index)
        for index, x in enumerate(df[variable]):
            if x <= inner_fence_le or x >= inner_fence_ue:
                outliers_poss.append(index)
    return outliers_prob, outliers_poss

probable_outliers_tm, possible_outliers_tm = outliers_tukey(df, "Price",'Region','Product')
</code></pre>
|
python|pandas|function|numpy|scipy
| 0
|
1,568
| 72,868,981
|
How to fetch elements between x and y from tf.data.TFRecordDataset in Tensorflow
|
<p>The <code>test_dataset</code> is defined as:</p>
<pre><code>test_dataset = tf.data.TFRecordDataset([test_tfrecords])
test_dataset = test_dataset.map(map_f)
test_dataset = test_dataset.repeat(1)
test_dataset = test_dataset.batch(1)
</code></pre>
<p>For fetching the first 100 elements:</p>
<pre><code>for test in test_dataset.take(100):
pass
</code></pre>
<p>But how to fetch elements in <code>index range</code> between 50 and 150?</p>
<pre><code>for test in test_dataset.take([50-150]):
pass
</code></pre>
|
<p>Try <code>tf.data.Dataset.skip</code>:</p>
<pre><code>for test in test_dataset.skip(50).take(100):
pass
</code></pre>
|
python|tensorflow|tensorflow-datasets
| 1
|
1,569
| 70,666,759
|
How to update one dataframe based on the value of other dataframe?
|
<p>Hi I have 2 dataframes in which I have to update the values based on the other dataframe-</p>
<p>Example:
df1:</p>
<pre><code>Region Sub_Region Status Reason
LATAM CRM Success
LATAM Genesys Failed
ASPAC CRM Success
ASPAC Genesys Success
</code></pre>
<p>df2:</p>
<pre><code>Region Sub_Region Max_Load_Date
LATAM CRM 2021-08-15
LATAM Genesys 2021-09-10
ASPAC CRM 2021-10-11
ASPAC Genesys 2021-10-15
</code></pre>
<p>In Final Output:</p>
<pre><code>Region Sub_Region Status Reason Max_Load_Date
LATAM CRM Success 2021-08-15
LATAM Genesys Failed
ASPAC CRM Success 2021-10-11
ASPAC Genesys Success 2021-10-15
</code></pre>
<p>Only those rows for which status = 'Success' should be updated.</p>
|
<p>You can <code>merge</code> and <code>mask</code>:</p>
<pre><code>(df1.merge(df2, on=['Region', 'Sub_Region'])
.assign(Max_Load_Date=lambda d: d['Max_Load_Date'].mask(d['Status']=='Failed', ''))
)
</code></pre>
<p>Merging will ensure the correct data is mapped to the correct row in case both dataframes are not sorted in the exact same order or have missing rows.</p>
<p>output:</p>
<pre><code> Region Sub_Region Status Max_Load_Date
0 LATAM CRM Success 2021-08-15
1 LATAM Genesys Failed
2 ASPAC CRM Success 2021-10-11
3 ASPAC Genesys Success 2021-10-15
</code></pre>
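<p>As a side note (a sketch, not part of the original answer): if there can ever be statuses other than <code>Success</code> and <code>Failed</code>, it may be safer to mask everything that is not <code>Success</code>, which matches the stated rule directly:</p>
<pre><code>out = df1.merge(df2, on=['Region', 'Sub_Region'])
out['Max_Load_Date'] = out['Max_Load_Date'].mask(out['Status'] != 'Success', '')
</code></pre>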
|
python|python-3.x|pandas|dataframe
| 0
|
1,570
| 70,419,906
|
How to group by date and apply a formula to each group?
|
<p>I have the following <code>df</code>:</p>
<pre><code> DateTime Var1
0 2021-08-01 10:00:00 115.0
1 2021-08-01 11:00:00 99.0
2 2021-08-01 12:00:00 155.0
3 2021-08-01 13:00:00 73.0
4 2021-08-01 14:00:00 44.0
5 2021-08-02 10:00:00 112.0
6 2021-08-02 11:00:00 100.0
7 2021-08-02 12:00:00 150.0
8 2021-08-02 13:00:00 70.0
9 2021-08-02 14:00:00 45.0
</code></pre>
<p>I need to group the data by date (not date and time), and apply <code>my_group</code> formula to each date.
This is how I did it:</p>
<pre><code>def my_group(group):
if len(group) == 0:
return np.nan
return group["Var1"]/len(group)
result = (
df[["DateTime","Var1"]]
.assign(date=lambda x: x["DateTime"].dt.date)
.groupby("date")
.apply(my_group)
.reset_index()
)
result.head()
</code></pre>
<p>But instead of grouping by date, the records seem to group differently, because I see the same date duplicated in the <code>result</code> (the <code>Var1</code> values come from my original <code>df</code>):</p>
<pre><code> date level_1 Var1
0 2021-08-01 0 0.016767
1 2021-08-01 1 0.014398
</code></pre>
|
<p>Output is expected, because:</p>
<pre><code>return group["Var1"]/len(group)
</code></pre>
<p>this returns a Series with one value per row, just like the original DataFrame, which is why the same date shows up multiple times in the result.</p>
<p>You need an aggregation instead, e.g. <code>sum</code>:</p>
<pre><code>return group["Var1"].sum()/len(group)
</code></pre>
<p>which is the same as:</p>
<pre><code>return group["Var1"].mean()
</code></pre>
<p>I think the length of a group is never <code>0</code> here, so the solution can be simplified to:</p>
<pre><code>result = (
df.groupby(df["DateTime"].dt.date.rename('date'))["Var1"].mean().reset_index()
)
print (result)
date Var1
0 2021-08-01 97.2
1 2021-08-02 95.4
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Grouper.html" rel="nofollow noreferrer"><code>Grouper</code></a>:</p>
<pre><code>result = (
df.groupby(pd.Grouper(freq='d', key="DateTime"))["Var1"]
.mean()
.rename_axis('date')
.reset_index()
)
</code></pre>
|
python|pandas
| 2
|
1,571
| 70,602,796
|
Pytorch GPU memory keeps increasing with every batch
|
<p>I'm training a CNN model on images. Initially, I was training on image patches of size <code>(256, 256)</code> and everything was fine. Then I changed my dataloader to load full HD images <code>(1080, 1920)</code> and I was cropping the images after some processing. In this case, the GPU memory keeps increasing with every batch. Why is this happening?</p>
<p>PS: While tracking losses, I'm doing <code>loss.detach().item()</code> so that loss is not retained in the graph.</p>
|
<p>As suggested <a href="https://discuss.pytorch.org/t/gpu-memory-consumption-increases-while-training/2770/3?u=nagabhushansn95" rel="nofollow noreferrer">here</a>, deleting the input, output and loss data helped.</p>
<p>Additionally, I had the data as a dictionary. Just deleting the dictionary isn't sufficient. I had to iterate over the dict elements and delete all of them.</p>
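<p>A minimal sketch of that cleanup pattern (the model, criterion and loader names below are hypothetical, not taken from the original code):</p>
<pre><code>running_loss = 0.0
for data in dataloader:            # data is a dict of tensors in my case
    inputs = data['frame'].cuda()
    targets = data['target'].cuda()

    outputs = model(inputs)
    loss = criterion(outputs, targets)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    running_loss += loss.detach().item()

    # drop every reference held by the dict, then the tensors themselves,
    # so the graph and activations can be freed each iteration
    for key in list(data.keys()):
        del data[key]
    del data, inputs, targets, outputs, loss
</code></pre>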
|
python|memory-leaks|pytorch
| 0
|
1,572
| 70,408,145
|
Select a column without "losing" a dimension
|
<p>Suppose I execute the following code</p>
<pre><code>W = tf.random.uniform(shape = (100,1), minval = 0, maxval = 1)
Z = tf.random.uniform(shape = (100,1), minval = 0, maxval = 1)
X = tf.concat([W,Z],1)
</code></pre>
<p>The shape of <code>X[:,1]</code> would be <code>[100,]</code>. That is, <code>X[:,1].shape</code> would yield <code>[100,]</code>. If I want to select the second column of <code>X</code> and want the resulting array to have shape <code>[100,1]</code>, what should I do? I looked at <code>tf.slice</code> but I'm not sure if it's helpful.</p>
|
<p>Maybe just use <code>tf.newaxis</code> for your use case:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
W = tf.random.uniform(shape = (100,1), minval = 0, maxval = 1)
Z = tf.random.uniform(shape = (100,1), minval = 0, maxval = 1)
X = tf.concat([W,Z],1)
print(X[:, 1, tf.newaxis].shape)
# (100, 1)
</code></pre>
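<p>Slicing with a range instead of a single index also keeps the axis, if you prefer not to add one back:</p>
<pre class="lang-py prettyprint-override"><code>print(X[:, 1:2].shape)
# (100, 1)
</code></pre>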
|
python|tensorflow
| 1
|
1,573
| 35,951,712
|
Numpy Install on Mac OSX
|
<p>I have been trying to install numpy on my Mac OSX for the last 2 days. I have installed Homebrew as well, and ran the command-</p>
<pre><code>pip install numpy
</code></pre>
<p>which gives me the following output-</p>
<pre><code>Requirement already satisfied (use --upgrade to upgrade): numpy in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages
</code></pre>
<p>then I ran the command-</p>
<pre><code>import numpy as np
</code></pre>
<p>which gives me </p>
<pre><code>-bash: import: command not found
</code></pre>
<p>Any help is highly appreciated. Thanks! </p>
|
<p>If you run <code>import numpy</code> from the command line, you will get an error:</p>
<pre><code>$ import numpy as np
-bash: import: command not found
</code></pre>
<p>You need to start Python, and then type the import statement within the Python interpreter:</p>
<pre><code>$ python
Python 3.5.1 |Continuum Analytics, Inc.| (default, Dec 7 2015, 11:24:55)
[GCC 4.2.1 (Apple Inc. build 5577)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> np.arange(5)
array([0, 1, 2, 3, 4])
</code></pre>
|
macos|python-2.7|numpy
| 2
|
1,574
| 37,384,627
|
How to show Cartesian system in polar plot in python?
|
<p>Here I tried to add the polar plot on top of the Cartesian grid, but what I got instead was 2 separate figures (one polar, the other Cartesian). I want the polar figure to be embedded in the Cartesian plot. Also, I have reused some previously available code as I am new to matplotlib.</p>
<pre><code>from pylab import *
import matplotlib.pyplot as plt
x = [0,10,-3,-10]
y = [0,10,1,-10]
color=['w','w','w','w']
fig = plt.figure()
ax1 = fig.add_subplot(111)
scatter(x,y, s=100 ,marker='.', c=color,edgecolor='w')
circle1=plt.Circle((0,0),5,color='r',fill=False)
circle_min=plt.Circle((0,0),4.5,color='g',fill=False)
circle_max=plt.Circle((0,0),5.445,color='b',fill=False)
fig = plt.gcf()
fig.gca().add_artist(circle1)
fig.gca().add_artist(circle_min)
fig.gca().add_artist(circle_max)
left,right = ax1.get_xlim()
low,high = ax1.get_ylim()
arrow( left, 0, right -left, 0, length_includes_head = True, head_width = 0.15 )
arrow( 0, low, 0, high-low, length_includes_head = True, head_width = 0.15 )
grid()
fig = plt.figure()
ax2 = fig.add_subplot(111)
scatter(x,y, s=100 ,marker='.', c=color,edgecolor='w')
circle2=plt.Circle((0,0),5,color='r',fill=False)
circle_min=plt.Circle((0,0),4.5,color='g',fill=False)
circle_max=plt.Circle((0,0),5.445,color='b',fill=False)
fig = plt.gcf()
fig.gca().add_artist(circle2)
fig.gca().add_artist(circle_min)
fig.gca().add_artist(circle_max)
left,right = ax2.get_xlim()
low,high = ax2.get_ylim()
arrow( left, 0, right -left, 0, length_includes_head = True, head_width = 0.15 )
arrow( 0, low, 0, high-low, length_includes_head = True, head_width = 0.15 )
import numpy as np
import matplotlib.pyplot as plt
theta = np.linspace(-np.pi, np.pi, 100)
r1 = 1 - np.sin(3*theta)
r2 = 1 + np.cos(theta)
ax = plt.subplot(111, polar=True, # add subplot in polar coordinates
axisbg='Azure') # background colour
ax.set_rmax(2.2) # r maximum value
ax.grid(True) # add the grid
ax.plot(theta, r1,
color='Tomato', # line colour
ls='--', # line style
lw=3, # line width
label='a 3-fold curve') # label
ax.plot(theta, r2,
color='purple',
linewidth=3,
ls = '-',
label = 'a cardioid')
ax.legend(loc="lower right") # legend location
titlefont = {
'family' : 'serif',
'color' : 'black',
'weight' : 'bold',
'size' : 16,
}
ax.set_title("A plot in polar coordinates", # title
va='bottom', # some space below the title
fontdict = titlefont # set the font properties
)
grid()
show()
#I am getting a separate Cartesian image + a polar image while what I need is both the things in a single image
</code></pre>
|
<p>I am not used to <code>matplotlib</code>, but I reduced your code to its minimum to better understand it and make it less redundant. Look at what I get:</p>
<pre><code>import pylab
import matplotlib.pyplot as plt
import numpy as np
#########################################
x = [0,10,-3,-10]
y = [0,10,1,-10]
color=['w','w','w','w']
theta = np.linspace(-np.pi, np.pi, 100)
#########################################
pylab.scatter(x,y, s=100 ,marker='.', c=color,edgecolor='w')
plt.gcf().gca().add_artist(plt.Circle((0,0),5,color='r',fill=False))
plt.gcf().gca().add_artist(plt.Circle((0,0),4.5,color='g',fill=False))
plt.gcf().gca().add_artist(plt.Circle((0,0),5.445,color='b',fill=False))
plt.figure().add_subplot(111)
ax = plt.subplot(111, polar=True,axisbg='Azure')
ax.plot(theta, 1 - np.sin(3*theta),color='Tomato',ls='--',lw=3,label='a 3-fold curve')
ax.plot(theta, 1 + np.cos(theta),color='purple',linewidth=3,ls = '-',label = 'a cardioid')
pylab.show()
</code></pre>
<p>It is nearly the same result...</p>
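<p>If the goal really is one single figure with the polar curves drawn over the Cartesian grid, one possible sketch (not tested against your exact data) is to add a second, transparent polar axes on top of the Cartesian one:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(-np.pi, np.pi, 100)

fig, ax_cart = plt.subplots()
ax_cart.set_aspect('equal')
ax_cart.set_xlim(-2.5, 2.5)
ax_cart.set_ylim(-2.5, 2.5)
ax_cart.grid(True)

# overlay a polar axes on the same region, with a transparent background
pos = ax_cart.get_position()
ax_polar = fig.add_axes([pos.x0, pos.y0, pos.width, pos.height],
                        polar=True, frameon=False)
ax_polar.patch.set_alpha(0)
ax_polar.plot(theta, 1 - np.sin(3*theta), color='Tomato', ls='--', lw=3)
ax_polar.plot(theta, 1 + np.cos(theta), color='purple', lw=3)
ax_polar.set_rmax(2.2)

plt.show()
</code></pre>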
|
python|numpy|matplotlib
| 2
|
1,575
| 37,257,737
|
merging dataframes only by certain columns
|
<p>I try to merge dataframes by index and only take certain columns to the result.</p>
<pre><code>result = pd.concat([self.retailer_categories_probes_df['euclidean_distance'], self.retailers_categories_df['euclidean_distance']])
</code></pre>
<p>But in the result I only get the 'euclidean_distance' from the first table.
Any idea what is wrong?</p>
<p>Also, how can I give names to the destination columns?</p>
|
<p>I think you may need <code>axis=1</code>:</p>
<pre><code>result = pd.concat([self.retailer_categories_probes_df['euclidean_distance'], self.retailers_categories_df['euclidean_distance']], axis=1)
</code></pre>
<p>See <code>pd.concat()</code> <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow">docs</a></p>
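<p>To give names to the resulting columns you can also pass <code>keys</code> (the names below are just placeholders):</p>
<pre><code>result = pd.concat(
    [self.retailer_categories_probes_df['euclidean_distance'],
     self.retailers_categories_df['euclidean_distance']],
    axis=1, keys=['probes_distance', 'categories_distance'])
</code></pre>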
|
python|pandas|dataframe
| 1
|
1,576
| 37,424,988
|
Using apply to write to a global Pandas dataframe
|
<p>I am trying to create a new dataframe that has values added to it if a condition from another dataframe is met: whenever a column is not empty, a function should write the preceding 90 rows to the new dataframe. However, I am getting a host of errors with the current code, as I am confused about what value is being returned after applying the function.</p>
<p>I have written a code that uses a for loop, however it requires a significant time to execute. I was hoping to write the new code in just numpy and pandas.</p>
<pre><code>df_regression = pd.DataFrame()
def func(x):
global df_regression
a = x.name
b = x.name - 89
df_writetoreg = df6_mod[b:a]
df_regression = df_regression.append(df_writetoreg)
return df_writetoreg
df6_mod[pd.notnull(df6_mod['Characters1'])][['Characters1']].apply(lambda x: func(x), axis=1)
</code></pre>
<p>****EDIT*****</p>
<p>Example:</p>
<p>DataFrame A</p>
<pre><code> Example Value Characters1
A 10 NA
A 20 NA
A 30 1
A 15 NA
A 10 NA
B 10 NA
B 20 NA
B 30 NA
B 15 1
B 10 NA
</code></pre>
<p>Therefore in this case, after applying the function, I want to create a dataframe B that will have all values preceding the "1" value found in the Characters1 column, where instead of 3 values you have 90.</p>
<p>DataFrame B</p>
<pre><code> Example Value Characters1
A 10 NA
A 20 NA
A 30 1
B 20 NA
B 30 NA
B 15 1
</code></pre>
|
<p>The question is slightly unclear, but here is what I think you need:</p>
<pre><code>In[1] import pandas as pd
df = pd.DataFrame({"Example": ["A","A","A","A","A","B","B","B","B","B"],
"Value": [10,20,30,15,10,10,20,30,15,10], "Characters1": [np.nan, np.nan, 1, np.nan,np.nan,np.nan,np.nan,np.nan,1, np.nan]})
In[2] df[df.Characters1.bfill(limit=2) == 1] # Replace limit=2 with 99 for your case
Out[2]:
Characters1 Example Value
0 NaN A 10
1 NaN A 20
2 1.0 A 30
6 NaN B 20
7 NaN B 30
8 1.0 B 15
</code></pre>
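<p>The trick here is that <code>bfill(limit=...)</code> back-fills the marker value over the rows immediately preceding it (at most <code>limit</code> of them), so the intermediate series looks roughly like this before the comparison with <code>1</code>:</p>
<pre><code>In[3] df.Characters1.bfill(limit=2)
Out[3]:
0    1.0
1    1.0
2    1.0
3    NaN
4    NaN
5    NaN
6    1.0
7    1.0
8    1.0
9    NaN
Name: Characters1, dtype: float64
</code></pre>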
|
python|numpy|pandas
| 1
|
1,577
| 42,096,798
|
Using gensim Scipy2Corpus without materializing sparse matrix in memory
|
<p>I have three NumPy arrays saved to disk in <code>.npy</code> format, together totaling about 40 GB (representing text count data from a very large document set). The three arrays represent the <code>data</code>, <code>indices</code>, and <code>indptr</code> attributes of a <a href="https://docs.scipy.org/doc/scipy-0.17.0/reference/generated/scipy.sparse.csc_matrix.html" rel="nofollow noreferrer"><code>scipy.sparse.csc_matrix</code></a> sparse matrix.</p>
<p>I want to use distributed gensim LsiModel on this data, specifically with the <a href="https://radimrehurek.com/gensim/matutils.html#gensim.matutils.Scipy2Corpus" rel="nofollow noreferrer"><code>gensim.matutils.Scipy2Corpus</code></a> function to provide the corpus iterator from the underlying sparse matrix.</p>
<p>However, I do not want to materialize the whole matrix in memory. Instead, how can I tell gensim about the underlying disk data and have gensim stream from disk into the csc matrix as needed to distribute chunks to the worker processes? If I understand correctly, this is what <code>Scipy2Corpus</code> and the <a href="https://radimrehurek.com/gensim/dist_lsi.html" rel="nofollow noreferrer">distributed LsiModel examples</a> claim to do, but instead they require materializing the arrays into a csc matrix ahead of time.</p>
<p>I have tried loading each of the underlying arrays with <code>mmap_mode='r'</code>, but the function that constructs the materialized csc matrix, <code>scipy.sparse.csc_matrix</code> will pull all of the data in regardless.</p>
|
<p>I made a sparse matrix:</p>
<pre><code>In [207]: A=sparse.random(100,100,.1,'csr')
In [208]: A
Out[208]:
<100x100 sparse matrix of type '<class 'numpy.float64'>'
with 1000 stored elements in Compressed Sparse Row format>
</code></pre>
<p>Made copies of its attributes, and saved them</p>
<pre><code>In [209]: D,I,J = A.data.copy(),A.indptr.copy(),A.indices.copy()
In [236]: np.save('sparse_d',D)
In [237]: np.save('sparse_I',I)
In [238]: np.save('sparse_J',J)
</code></pre>
<p>If I reload them as <code>mmap</code>, I can make a new matrix:</p>
<pre><code>In [240]: D1=np.load('sparse_d.npy', mmap_mode='r')
In [241]: I1=np.load('sparse_I.npy', mmap_mode='r')
In [242]: J1=np.load('sparse_J.npy', mmap_mode='r')
In [243]: Amm=sparse.csr_matrix((D1,J1,I1),A.shape)
</code></pre>
<p>But the attributes of this new array are in memory copies of the files:</p>
<pre><code>In [246]: type(D1)
Out[246]: numpy.core.memmap.memmap
In [247]: type(Amm.data)
Out[247]: numpy.ndarray
</code></pre>
<p>So a sparse matrix made from <code>mmap</code> arrays loads all the data (as claimed in the original post).</p>
<pre><code>In [252]: A.indptr
Out[252]:
array([ 0, 10, 20, 29, 36, 46, 60, 66, 74, 85, 93,
104, 117, 128, 139, 147, 156, 165, 172, 188, 201, 211, ...
</code></pre>
<p>I can get a slice of <code>A</code> with</p>
<pre><code>In [254]: A37=A[3:7,:]
In [255]: A37
Out[255]:
<4x100 sparse matrix of type '<class 'numpy.float64'>'
with 37 stored elements in Compressed Sparse Row format>
</code></pre>
<p>This is a copy of <code>A</code>, not a view (as it would have been for a dense array).</p>
<p>The relevant <code>indptr</code> values for these rows is</p>
<pre><code>In [256]: I1[3:8]
Out[256]: memmap([29, 36, 46, 60, 66])
</code></pre>
<p>66-29 is 37, the number of nonzero values in <code>A37</code>.</p>
<p>I could make a sparse matrix from slices of the <code>mmap</code> arrays:</p>
<pre><code>In [259]: Amm37=sparse.csr_matrix((D1[29:66],J1[29:66],I1[3:8]-I1[3]), shape=(4, 100))
In [260]: Amm37
Out[260]:
<4x100 sparse matrix of type '<class 'numpy.float64'>'
with 37 stored elements in Compressed Sparse Row format>
In [261]: np.allclose(A37.A, Amm37.A)
Out[261]: True
</code></pre>
<p>While the attributes of <code>Amm37</code> are in memory, the <code>D1</code> and <code>J1</code> inputs to the function are <code>mmap</code> slices:</p>
<pre><code>In [262]: D1[29:66]
Out[262]:
memmap([ 0.08874032, 0.18952295, 0.43034343, 0.83725733, 0.61073925,
0.9178207 , 0.03644072, 0.32206696, 0.90945298, 0.20585155,
.... 0.43905333])
In [263]: I1[3:8]-I1[3]
Out[263]: array([ 0, 7, 17, 31, 37])
</code></pre>
<p>So yes, I can recreate a row slice of the original <code>A</code> from the saved attributes, without loading whole the mmap arrays. All I need is selected slices from the 3 arrays. Similarly I could select columns from a <code>csc</code> format array.</p>
<p>But it would be very difficult to pull a column from <code>csr</code> attributes without loading them entirely. The desired <code>data</code> and <code>indices</code> values will be scattered through out the stored arrays. Converting <code>csr</code> attributes to <code>csc</code> requires creating the full <code>csr</code> matrix and using its <code>tocsc()</code> method. </p>
|
python|numpy|scipy|sparse-matrix|gensim
| 0
|
1,578
| 8,136,273
|
Wrapping C++ and CUDA in Python
|
<p>I want to create an interface for a numerical library consisting of both OOP C++ (boost)
and CUDA C code, in Python. There is already an existing MATLAB interface, but it contains
a lot of mex.h dependencies. </p>
<p>How can this be done as painlessly as possible? </p>
|
<p>Here are a couple of links to look at. Could people who've used any of these please comment?</p>
<pre><code># day status packagename version homepage summary
2011-02-03 4 "scikits.cuda" 0.03 http://github.com/lebedov/scikits.cuda/
Python interface to GPU-powered libraries
2010-10-27 0 "KappaCUDA" 1.5.0 http://psilambda.com
Module to give easy access to NVIDIA CUDA from Python using the Kappa Library.
2010-10-16 5 "pycuda" 0.94.2 http://mathema.tician.de/software/pycuda
Python wrapper for Nvidia CUDA
2010-07-01 4 "PyGouda" 1.0 http://pypi.python.org/pypi/pycuda
The EasyCheese of GPU programming
</code></pre>
|
numpy|cython|ctags|mex
| 2
|
1,579
| 37,978,624
|
How to export array in csv or txt in Python
|
<p>I'm trying to export an array to a txt or csv file. I've been trying with numpy but I always get some error like
<code>TypeError: Mismatch between array dtype ('<U14') and format specifier ('%.18e')</code></p>
<p>Here is my code without numpy, which works great, but I need help with the part about how to export the result.</p>
<pre><code>peoples = []
for content in driver.find_elements_by_class_name('x234'):
people = content.find_element_by_xpath('.//div[@class="zstrim"]').text
if people != "Django" and people != "Rooky" :
pass
peoples.append([people, 1, datetime.now().strftime("%d/%m/%y %H:%M")])
print(peoples)
</code></pre>
<p>Really need some help with this.</p>
|
<p>Looks like you are doing something like:</p>
<pre><code>In [1339]: peoples=[]
In [1340]: for _ in range(3):
......: peoples.append([234, datetime.datetime.now().strftime("%d/%m/%y %H:%M")])
......:
In [1341]: peoples
Out[1341]: [[234, '22/06/16 14:57'], [234, '22/06/16 14:57'], [234, '22/06/16 14:57']]
</code></pre>
<p><code>peoples</code> is an array (or here, a list of lists) that contains, among other things, formatted dates.</p>
<pre><code>In [1342]: np.savetxt('test.txt',peoples)
...
TypeError: Mismatch between array dtype ('<U14') and format specifier ('%.18e %.18e')
</code></pre>
<p>Since I didn't specify <code>fmt</code>, it constructed a default one consisting of two <code>%.18e</code> fields. That's great for general formatting of numbers, but the data includes 14-character strings ('U14' - unicode in Python 3).</p>
<p>If I tell it to use <code>%s</code>, the generic string format, I get:</p>
<pre><code>In [1346]: np.savetxt('test.txt',peoples, fmt='%s', delimiter=',')
In [1347]: cat test.txt
234,22/06/16 14:57
234,22/06/16 14:57
234,22/06/16 14:57
</code></pre>
<p>Not ideal, but still it works. <code>fmt='%20s'</code> would be better.</p>
<p>I glossed over another nuance. <code>peoples</code> is a list of lists. <code>np.savetxt</code> works with arrays, so it first turns that into an array with:</p>
<pre><code>In [1360]: np.array(peoples)
Out[1360]:
array([['234', '22/06/16 14:57'],
['234', '22/06/16 14:57'],
['234', '22/06/16 14:57']],
dtype='<U14')
</code></pre>
<p>But this turns both columns into <code>U14</code> strings. So I have to format both columns with <code>%s</code>. I can't use a numeric format on the first. What I need to do first is make a structured array with a numeric field(s) and a string field. I know how to do that, but I won't get into the details now.</p>
<p>As per comments, it could be simpler to format each <code>peoples</code> line as a complete string, and write that to a file.</p>
<pre><code>In [1378]: with open('test.txt','w') as f:
for _ in range(3):
f.write('%10d,%20s\n'%(234, datetime.datetime.now().strftime("%d/%m/%y %H:%M")))
......:
In [1379]: cat test.txt
234, 22/06/16 15:18
234, 22/06/16 15:18
234, 22/06/16 15:18
</code></pre>
|
python|arrays|csv|numpy
| 1
|
1,580
| 31,466,769
|
Add column of empty lists to DataFrame
|
<p>Similar to this question <a href="https://stackoverflow.com/questions/16327055/how-to-add-an-empty-column-to-a-dataframe">How to add an empty column to a dataframe?</a>, I am interested in knowing the best way to add a column of empty lists to a DataFrame.</p>
<p>What I am trying to do is basically initialize a column and then, as I iterate over the rows and process some of them, add a filled list in this new column to replace the initialized value.</p>
<p>For example, if below is my initial DataFrame:</p>
<pre><code>df = pd.DataFrame({'a': [1,2,3], 'b': [5,6,7]}) # Sample DataFrame
>>> df
a b
0 1 5
1 2 6
2 3 7
</code></pre>
<p>Then I want to ultimately end up with something like this, where each row has been processed separately (sample results shown):</p>
<pre><code>>>> df
a b c
0 1 5 [5, 6]
1 2 6 [9, 0]
2 3 7 [1, 2, 3]
</code></pre>
<p>Of course, if I try to initialize like <code>df['e'] = []</code> as I would with any other constant, it thinks I am trying to add a sequence of items with length 0, and hence fails.</p>
<p>If I try initializing a new column as <code>None</code> or <code>NaN</code>, I run in to the following issues when trying to assign a list to a location.</p>
<pre><code>df['d'] = None
>>> df
a b d
0 1 5 None
1 2 6 None
2 3 7 None
</code></pre>
<p>Issue 1 (it would be perfect if I can get this approach to work! Maybe something trivial I am missing):</p>
<pre><code>>>> df.loc[0,'d'] = [1,3]
...
ValueError: Must have equal len keys and value when setting with an iterable
</code></pre>
<p>Issue 2 (this one works, but not without a warning because it is not guaranteed to work as intended):</p>
<pre><code>>>> df['d'][0] = [1,3]
C:\Python27\Scripts\ipython:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
</code></pre>
<p>Hence I resort to initializing with empty lists and extending them as needed. There are a couple of methods I can think of to initialize this way, but is there a more straightforward way?</p>
<p>Method 1:</p>
<pre><code>df['empty_lists1'] = [list() for x in range(len(df.index))]
>>> df
a b empty_lists1
0 1 5 []
1 2 6 []
2 3 7 []
</code></pre>
<p>Method 2:</p>
<pre><code> df['empty_lists2'] = df.apply(lambda x: [], axis=1)
>>> df
a b empty_lists1 empty_lists2
0 1 5 [] []
1 2 6 [] []
2 3 7 [] []
</code></pre>
<p><strong>Summary of questions:</strong></p>
<p>Is there any minor syntax change that can be addressed in Issue 1 that can allow a list to be assigned to a <code>None</code>/<code>NaN</code> initialized field?</p>
<p>If not, then what is the best way to initialize a new column with empty lists?</p>
|
<p>One more way is to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.empty.html" rel="noreferrer"><code>np.empty</code></a>:</p>
<pre><code>df['empty_list'] = np.empty((len(df), 0)).tolist()
</code></pre>
<hr>
<p>You could also knock off <code>.index</code> in your "Method 1" when trying to find <code>len</code> of <code>df</code>.</p>
<pre><code>df['empty_list'] = [[] for _ in range(len(df))]
</code></pre>
<hr>
<p>Turns out, <code>np.empty</code> is faster...</p>
<pre><code>In [1]: import pandas as pd
In [2]: df = pd.DataFrame(pd.np.random.rand(1000000, 5))
In [3]: timeit df['empty1'] = pd.np.empty((len(df), 0)).tolist()
10 loops, best of 3: 127 ms per loop
In [4]: timeit df['empty2'] = [[] for _ in range(len(df))]
10 loops, best of 3: 193 ms per loop
In [5]: timeit df['empty3'] = df.apply(lambda x: [], axis=1)
1 loops, best of 3: 5.89 s per loop
</code></pre>
|
python|pandas
| 68
|
1,581
| 64,358,754
|
regex code, how to solve some data entry error
|
<p>I have two dataframe</p>
<pre><code>df1
name
ADAM, HAFIZ M
ABAD, FARLEY J
CORDDED, NANCY C
BOMBSHAD, WANG D
df2
JOSEPH W. HOLUBKA
WANG E. JONATHAN
CUCU F. LIU,
WANG C. DANA,
LANDY F. JON
</code></pre>
<p>I am hoping to extract the first name from each dataframe. For df1, I need the "first name" portion after the ","; for the second df, the leading first name is what I want.</p>
<p>so the returned df is</p>
<pre><code>df1
HAFIZ
FARLEY
NANCY
WANG
df2
JOSEPH
WANG
CUCU
WANG
LANDY
</code></pre>
<p>my current code is</p>
<pre><code> df['name'].str.upper().apply(lambda name:re.search(r'\w+(?!.*,)',name).group())
</code></pre>
<p>This regex works for both dataframes; however, I just realized my data has an entry error. In df2, Liu and Dana have a "," at the end, which causes the regex to stop working.</p>
<p>The error is that <code>group()</code> is not an attribute (because <code>re.search</code> returns <code>None</code> for those rows).</p>
<p>Is there any way I could fix this code? The regex should work for both dataframes.</p>
|
<p>You can use</p>
<pre><code>(^(?=[^,]*,?$)[\w'-]+|(?<=, )[\w'-]+)
</code></pre>
<p>See the <a href="https://regex101.com/r/WbfbU8/2/" rel="nofollow noreferrer">regex demo</a>. This pattern allows matching a name at the initial position in the string if there is a trailing comma in the string.</p>
<p>Use it in Pandas with <code>Series.str.extract</code> vectorized method:</p>
<pre class="lang-py prettyprint-override"><code>df['first name'] = df['name'].str.upper().str.extract(r"(^(?=[^,]*,?$)[\w'-]+|(?<=, )[\w'-]+)", expand=False)
</code></pre>
<p><strong>Regex details</strong></p>
<ul>
<li><code>^(?=[^,]*,?$)[\w'-]+</code> - one or more word, <code>'</code> and <code>-</code> chars (<code>[\w'-]+</code>) at the start of the string (<code>^</code>) if the string has no commas but may end with an optional comma (<code>(?=[^,]*,?$)</code>)</li>
<li><code>|</code> - or</li>
<li><code>(?<=, )[\w'-]+</code> - one or more word, <code>'</code> and <code>-</code> chars chars preceded with comma + space.</li>
</ul>
|
python|regex|pandas
| 1
|
1,582
| 64,518,556
|
Can you use a context manager to write a pandas DataFrame to sqlite
|
<p>When writing a pandas DataFrame to sql, would it be possible to use the with statement in the following way?</p>
<pre><code>import sqlite3
import pandas as pd
with sqlite3.connect('database.db') as conn:
df = pd.read_sql("SELECT * FROM table", conn)
# add change to db
df.to_sql('table', conn, if_exists='replace', index=False)
</code></pre>
|
<p>From the <a href="https://docs.python.org/3/library/sqlite3.html#using-the-connection-as-a-context-manager" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p>Connection objects can be used as context managers that automatically commit or rollback transactions. In the event of an exception, the transaction is rolled back; otherwise, the transaction is committed</p>
</blockquote>
<h3>Example:</h3>
<pre><code>import sqlite3
import pandas as pd
con = sqlite3.connect("database.db")
# Return the result as a list
with con:
result = con.execute("SELECT * FROM table").fetchall()
# Pass as engine to pandas
with con:
df = pd.read_sql("SELECT * FROM table", con)
# Close connection
con.close()
</code></pre>
<h2>Edit</h2>
<p>As @Parfait has rightly pointed out, the context manager handles transactions made by the connection, thus the <code>con</code> object should be closed manually.</p>
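<p>If you want both the transaction handling and the close in a single <code>with</code> statement, one possible sketch is <code>contextlib.closing</code>:</p>
<pre><code>from contextlib import closing
import sqlite3
import pandas as pd

# closing() closes the connection on exit, while the inner `con` context
# manager still commits/rolls back the transaction.
with closing(sqlite3.connect("database.db")) as con, con:
    df = pd.read_sql("SELECT * FROM table", con)
    df.to_sql("table", con, if_exists="replace", index=False)
</code></pre>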
|
python|sql|pandas
| 2
|
1,583
| 47,916,288
|
Python DataFrame: replace or combine selected values into main DataFrame
|
<p>I have two pandas DataFrames as below. They contain strings and np.nan values.
df =</p>
<pre><code> A B C D E F
0 aaa abx fwe dcs NaN gsx
1 bbb daf dxs fsx NaN ewe
2 ccc NaN NaN NaN NaN dfw
3 ddd NaN NaN asc NaN NaN
4 eee NaN NaN cse NaN NaN
5 fff NaN NaN wer xer NaN
</code></pre>
<p>df_result = </p>
<pre><code> A C E F
2 sfa NaN NaN wes
4 web NaN NaN NaN
5 NaN wwc wew NaN
</code></pre>
<p>What I want is to copy the entire df_result DataFrame into the df DataFrame at the corresponding columns and index, so my output would be:</p>
<pre><code> A B C D E F
0 aaa abx fwe dcs NaN gsx
1 bbb dxs fsx fsx NaN ewe
2 sfa NaN NaN NaN NaN wes
3 wen NaN NaN asc NaN NaN
4 web NaN NaN cse NaN NaN
5 NaN NaN wwc wer wew NaN
</code></pre>
<p>So basically I want to copy the exact values of df_result to df even though there are np.nan values, like A:5 (changed from fff to NaN). Also, I need to keep the order of columns as it is. Please let me know an efficient way to do this. Thank you!</p>
|
<pre><code>import numpy as np

# update() skips NaN values in the frame you pass in, so first encode
# df_result's real NaNs as the string 'NaN' so they overwrite df as well,
# then convert them back to np.nan afterwards.
df.update(df_result.fillna('NaN'))
df = df.replace('NaN', np.nan)
df
Out[501]:
A B C D E F
0 aaa abx fwe dcs NaN gsx
1 bbb daf dxs fsx NaN ewe
2 sfa NaN NaN NaN NaN wes
3 ddd NaN NaN asc NaN NaN
4 web NaN NaN cse NaN NaN
5 NaN NaN wwc wer wew NaN
</code></pre>
|
python|pandas|dataframe|replace|combiners
| 3
|
1,584
| 47,670,098
|
slicing error in numpy array
|
<p>I am trying to run the following code </p>
<pre><code>fs = 1000
data = np.loadtxt("trainingdataset.txt", delimiter=",")
data1 = data[:,2]
data2 = data1.astype(int)
X,Y = data2['521']
</code></pre>
<p>but it gets me the following error</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\hadeer.elziaat\Desktop\testspec.py", line 58, in <module>
X,Y = data2['521']
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
</code></pre>
<p>my dataset</p>
<pre><code>1,4,6,10
2,100,125,10
3,100,7216,254
4,100,527,263
5,100,954,13
6,100,954,23
</code></pre>
|
<p>You say you want all the data in the 3rd column.</p>
<p>I assume you also want some kind of x-axis, since you are attempting to do <code>X, Y = ...</code>. How about using the first column for that? Then your code would be:</p>
<pre><code>import numpy as np
data = np.loadtxt("trainingdataset.txt", delimiter=',', dtype='int')
x = data[:, 0]
y = data[:, 2]
</code></pre>
<p>What remains unclear from your question is why you tried to index your <code>data</code> with <code>521</code> - which failed because you cannot use strings as indices on plain arrays.</p>
|
python-3.x|numpy|matplotlib|signal-processing
| 0
|
1,585
| 47,893,677
|
Why are numpy functions so slow on pandas series / dataframes?
|
<p>Consider a small MWE, taken from <a href="https://stackoverflow.com/q/47881048/4909087">another question</a>:</p>
<pre><code>DateTime Data
2017-11-21 18:54:31 1
2017-11-22 02:26:48 2
2017-11-22 10:19:44 3
2017-11-22 15:11:28 6
2017-11-22 23:21:58 7
2017-11-28 14:28:28 28
2017-11-28 14:36:40 0
2017-11-28 14:59:48 1
</code></pre>
<p>The goal is to clip all values with an upper bound of 1. My answer uses <code>np.clip</code>, which works fine.</p>
<pre><code>np.clip(df.Data, a_min=None, a_max=1)
array([1, 1, 1, 1, 1, 1, 0, 1])
</code></pre>
<p>Or, </p>
<pre><code>np.clip(df.Data.values, a_min=None, a_max=1)
array([1, 1, 1, 1, 1, 1, 0, 1])
</code></pre>
<p>Both of which return the same answer. My question is about the relative performance of these two methods. Consider - </p>
<pre><code>df = pd.concat([df]*1000).reset_index(drop=True)
%timeit np.clip(df.Data, a_min=None, a_max=1)
1000 loops, best of 3: 270 µs per loop
%timeit np.clip(df.Data.values, a_min=None, a_max=1)
10000 loops, best of 3: 23.4 µs per loop
</code></pre>
<p>Why is there such a massive difference between the two, just by calling <code>values</code> on the latter? In other words...</p>
<p><strong>Why are numpy functions so slow on pandas objects?</strong></p>
|
<p>Yes, it seems like <code>np.clip</code> is a lot slower on <code>pandas.Series</code> than on <code>numpy.ndarray</code>s. That's correct but it's actually (at least asymptotically) not that bad. 8000 elements is still in the regime where constant factors are major contributors in the runtime. I think this is a very important aspect to the question, so I'm visualizing this (borrowing from <a href="https://stackoverflow.com/a/44468758/5393381">another answer</a>):</p>
<pre><code># Setup
import pandas as pd
import numpy as np
def on_series(s):
return np.clip(s, a_min=None, a_max=1)
def on_values_of_series(s):
return np.clip(s.values, a_min=None, a_max=1)
# Timing setup
timings = {on_series: [], on_values_of_series: []}
sizes = [2**i for i in range(1, 26, 2)]
# Timing
for size in sizes:
func_input = pd.Series(np.random.randint(0, 30, size=size))
for func in timings:
res = %timeit -o func(func_input)
timings[func].append(res)
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
fig, (ax1, ax2) = plt.subplots(1, 2)
for func in timings:
ax1.plot(sizes,
[time.best for time in timings[func]],
label=str(func.__name__))
ax1.set_xscale('log')
ax1.set_yscale('log')
ax1.set_xlabel('size')
ax1.set_ylabel('time [seconds]')
ax1.grid(which='both')
ax1.legend()
baseline = on_values_of_series # choose one function as baseline
for func in timings:
ax2.plot(sizes,
[time.best / ref.best for time, ref in zip(timings[func], timings[baseline])],
label=str(func.__name__))
ax2.set_yscale('log')
ax2.set_xscale('log')
ax2.set_xlabel('size')
ax2.set_ylabel('time relative to {}'.format(baseline.__name__))
ax2.grid(which='both')
ax2.legend()
plt.tight_layout()
</code></pre>
<p><a href="https://i.stack.imgur.com/NXQXu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NXQXu.png" alt="enter image description here" /></a></p>
<p>It's a log-log plot because I think this shows the important features more clearly. For example it shows that <code>np.clip</code> on a <code>numpy.ndarray</code> is faster but it also has a much smaller constant factor in that case. The difference for large arrays is only ~3! That's still a big difference but way less than the difference on small arrays.</p>
<p>However, that's still not an answer to the question where the time difference comes from.</p>
<p>The solution is actually quite simple: <code>np.clip</code> delegates to the <code>clip</code> <strong>method</strong> of the first argument:</p>
<pre><code>>>> np.clip??
Source:
def clip(a, a_min, a_max, out=None):
"""
...
"""
return _wrapfunc(a, 'clip', a_min, a_max, out=out)
>>> np.core.fromnumeric._wrapfunc??
Source:
def _wrapfunc(obj, method, *args, **kwds):
try:
return getattr(obj, method)(*args, **kwds)
# ...
except (AttributeError, TypeError):
return _wrapit(obj, method, *args, **kwds)
</code></pre>
<p>The <code>getattr</code> line of the <code>_wrapfunc</code> function is the important line here, because <code>np.ndarray.clip</code> and <code>pd.Series.clip</code> are different methods, yes, <strong>completely different methods</strong>:</p>
<pre><code>>>> np.ndarray.clip
<method 'clip' of 'numpy.ndarray' objects>
>>> pd.Series.clip
<function pandas.core.generic.NDFrame.clip>
</code></pre>
<p>Unfortunately <code>np.ndarray.clip</code> is a C function, so it's hard to profile, whereas <code>pd.Series.clip</code> is a regular Python function and easy to profile. Let's use a Series of 5000 integers here:</p>
<pre><code>s = pd.Series(np.random.randint(0, 100, 5000))
</code></pre>
<p>For the <code>np.clip</code> on the <code>values</code> I get the following line-profiling:</p>
<pre><code>%load_ext line_profiler
%lprun -f np.clip -f np.core.fromnumeric._wrapfunc np.clip(s.values, a_min=None, a_max=1)
Timer unit: 4.10256e-07 s
Total time: 2.25641e-05 s
File: numpy\core\fromnumeric.py
Function: clip at line 1673
Line # Hits Time Per Hit % Time Line Contents
==============================================================
1673 def clip(a, a_min, a_max, out=None):
1674 """
...
1726 """
1727 1 55 55.0 100.0 return _wrapfunc(a, 'clip', a_min, a_max, out=out)
Total time: 1.51795e-05 s
File: numpy\core\fromnumeric.py
Function: _wrapfunc at line 55
Line # Hits Time Per Hit % Time Line Contents
==============================================================
55 def _wrapfunc(obj, method, *args, **kwds):
56 1 2 2.0 5.4 try:
57 1 35 35.0 94.6 return getattr(obj, method)(*args, **kwds)
58
59 # An AttributeError occurs if the object does not have
60 # such a method in its class.
61
62 # A TypeError occurs if the object does have such a method
63 # in its class, but its signature is not identical to that
64 # of NumPy's. This situation has occurred in the case of
65 # a downstream library like 'pandas'.
66 except (AttributeError, TypeError):
67 return _wrapit(obj, method, *args, **kwds)
</code></pre>
<p>But for <code>np.clip</code> on the <code>Series</code> I get a totally different profiling result:</p>
<pre><code>%lprun -f np.clip -f np.core.fromnumeric._wrapfunc -f pd.Series.clip -f pd.Series._clip_with_scalar np.clip(s, a_min=None, a_max=1)
Timer unit: 4.10256e-07 s
Total time: 0.000823794 s
File: numpy\core\fromnumeric.py
Function: clip at line 1673
Line # Hits Time Per Hit % Time Line Contents
==============================================================
1673 def clip(a, a_min, a_max, out=None):
1674 """
...
1726 """
1727 1 2008 2008.0 100.0 return _wrapfunc(a, 'clip', a_min, a_max, out=out)
Total time: 0.00081846 s
File: numpy\core\fromnumeric.py
Function: _wrapfunc at line 55
Line # Hits Time Per Hit % Time Line Contents
==============================================================
55 def _wrapfunc(obj, method, *args, **kwds):
56 1 2 2.0 0.1 try:
57 1 1993 1993.0 99.9 return getattr(obj, method)(*args, **kwds)
58
59 # An AttributeError occurs if the object does not have
60 # such a method in its class.
61
62 # A TypeError occurs if the object does have such a method
63 # in its class, but its signature is not identical to that
64 # of NumPy's. This situation has occurred in the case of
65 # a downstream library like 'pandas'.
66 except (AttributeError, TypeError):
67 return _wrapit(obj, method, *args, **kwds)
Total time: 0.000804922 s
File: pandas\core\generic.py
Function: clip at line 4969
Line # Hits Time Per Hit % Time Line Contents
==============================================================
4969 def clip(self, lower=None, upper=None, axis=None, inplace=False,
4970 *args, **kwargs):
4971 """
...
5021 """
5022 1 12 12.0 0.6 if isinstance(self, ABCPanel):
5023 raise NotImplementedError("clip is not supported yet for panels")
5024
5025 1 10 10.0 0.5 inplace = validate_bool_kwarg(inplace, 'inplace')
5026
5027 1 69 69.0 3.5 axis = nv.validate_clip_with_axis(axis, args, kwargs)
5028
5029 # GH 17276
5030 # numpy doesn't like NaN as a clip value
5031 # so ignore
5032 1 158 158.0 8.1 if np.any(pd.isnull(lower)):
5033 1 3 3.0 0.2 lower = None
5034 1 26 26.0 1.3 if np.any(pd.isnull(upper)):
5035 upper = None
5036
5037 # GH 2747 (arguments were reversed)
5038 1 1 1.0 0.1 if lower is not None and upper is not None:
5039 if is_scalar(lower) and is_scalar(upper):
5040 lower, upper = min(lower, upper), max(lower, upper)
5041
5042 # fast-path for scalars
5043 1 1 1.0 0.1 if ((lower is None or (is_scalar(lower) and is_number(lower))) and
5044 1 28 28.0 1.4 (upper is None or (is_scalar(upper) and is_number(upper)))):
5045 1 1654 1654.0 84.3 return self._clip_with_scalar(lower, upper, inplace=inplace)
5046
5047 result = self
5048 if lower is not None:
5049 result = result.clip_lower(lower, axis, inplace=inplace)
5050 if upper is not None:
5051 if inplace:
5052 result = self
5053 result = result.clip_upper(upper, axis, inplace=inplace)
5054
5055 return result
Total time: 0.000662153 s
File: pandas\core\generic.py
Function: _clip_with_scalar at line 4920
Line # Hits Time Per Hit % Time Line Contents
==============================================================
4920 def _clip_with_scalar(self, lower, upper, inplace=False):
4921 1 2 2.0 0.1 if ((lower is not None and np.any(isna(lower))) or
4922 1 25 25.0 1.5 (upper is not None and np.any(isna(upper)))):
4923 raise ValueError("Cannot use an NA value as a clip threshold")
4924
4925 1 22 22.0 1.4 result = self.values
4926 1 571 571.0 35.4 mask = isna(result)
4927
4928 1 95 95.0 5.9 with np.errstate(all='ignore'):
4929 1 1 1.0 0.1 if upper is not None:
4930 1 141 141.0 8.7 result = np.where(result >= upper, upper, result)
4931 1 33 33.0 2.0 if lower is not None:
4932 result = np.where(result <= lower, lower, result)
4933 1 73 73.0 4.5 if np.any(mask):
4934 result[mask] = np.nan
4935
4936 1 90 90.0 5.6 axes_dict = self._construct_axes_dict()
4937 1 558 558.0 34.6 result = self._constructor(result, **axes_dict).__finalize__(self)
4938
4939 1 2 2.0 0.1 if inplace:
4940 self._update_inplace(result)
4941 else:
4942 1 1 1.0 0.1 return result
</code></pre>
<p>I stopped going into the subroutines at that point because it already highlights where <code>pd.Series.clip</code> does much more work than <code>np.ndarray.clip</code>. Just compare the total time of the <code>np.clip</code> call on the <code>values</code> (55 timer units) to one of the first checks in the <code>pandas.Series.clip</code> method, the <code>if np.any(pd.isnull(lower))</code> (158 timer units). At that point the pandas method hasn't even started clipping and it already takes 3 times longer.</p>
<p>However several of these "overheads" become insignificant when the array is big:</p>
<pre><code>s = pd.Series(np.random.randint(0, 100, 1000000))
%lprun -f np.clip -f np.core.fromnumeric._wrapfunc -f pd.Series.clip -f pd.Series._clip_with_scalar np.clip(s, a_min=None, a_max=1)
Timer unit: 4.10256e-07 s
Total time: 0.00593476 s
File: numpy\core\fromnumeric.py
Function: clip at line 1673
Line # Hits Time Per Hit % Time Line Contents
==============================================================
1673 def clip(a, a_min, a_max, out=None):
1674 """
...
1726 """
1727 1 14466 14466.0 100.0 return _wrapfunc(a, 'clip', a_min, a_max, out=out)
Total time: 0.00592779 s
File: numpy\core\fromnumeric.py
Function: _wrapfunc at line 55
Line # Hits Time Per Hit % Time Line Contents
==============================================================
55 def _wrapfunc(obj, method, *args, **kwds):
56 1 1 1.0 0.0 try:
57 1 14448 14448.0 100.0 return getattr(obj, method)(*args, **kwds)
58
59 # An AttributeError occurs if the object does not have
60 # such a method in its class.
61
62 # A TypeError occurs if the object does have such a method
63 # in its class, but its signature is not identical to that
64 # of NumPy's. This situation has occurred in the case of
65 # a downstream library like 'pandas'.
66 except (AttributeError, TypeError):
67 return _wrapit(obj, method, *args, **kwds)
Total time: 0.00591302 s
File: pandas\core\generic.py
Function: clip at line 4969
Line # Hits Time Per Hit % Time Line Contents
==============================================================
4969 def clip(self, lower=None, upper=None, axis=None, inplace=False,
4970 *args, **kwargs):
4971 """
...
5021 """
5022 1 17 17.0 0.1 if isinstance(self, ABCPanel):
5023 raise NotImplementedError("clip is not supported yet for panels")
5024
5025 1 14 14.0 0.1 inplace = validate_bool_kwarg(inplace, 'inplace')
5026
5027 1 97 97.0 0.7 axis = nv.validate_clip_with_axis(axis, args, kwargs)
5028
5029 # GH 17276
5030 # numpy doesn't like NaN as a clip value
5031 # so ignore
5032 1 125 125.0 0.9 if np.any(pd.isnull(lower)):
5033 1 2 2.0 0.0 lower = None
5034 1 30 30.0 0.2 if np.any(pd.isnull(upper)):
5035 upper = None
5036
5037 # GH 2747 (arguments were reversed)
5038 1 2 2.0 0.0 if lower is not None and upper is not None:
5039 if is_scalar(lower) and is_scalar(upper):
5040 lower, upper = min(lower, upper), max(lower, upper)
5041
5042 # fast-path for scalars
5043 1 2 2.0 0.0 if ((lower is None or (is_scalar(lower) and is_number(lower))) and
5044 1 32 32.0 0.2 (upper is None or (is_scalar(upper) and is_number(upper)))):
5045 1 14092 14092.0 97.8 return self._clip_with_scalar(lower, upper, inplace=inplace)
5046
5047 result = self
5048 if lower is not None:
5049 result = result.clip_lower(lower, axis, inplace=inplace)
5050 if upper is not None:
5051 if inplace:
5052 result = self
5053 result = result.clip_upper(upper, axis, inplace=inplace)
5054
5055 return result
Total time: 0.00575753 s
File: pandas\core\generic.py
Function: _clip_with_scalar at line 4920
Line # Hits Time Per Hit % Time Line Contents
==============================================================
4920 def _clip_with_scalar(self, lower, upper, inplace=False):
4921 1 2 2.0 0.0 if ((lower is not None and np.any(isna(lower))) or
4922 1 28 28.0 0.2 (upper is not None and np.any(isna(upper)))):
4923 raise ValueError("Cannot use an NA value as a clip threshold")
4924
4925 1 120 120.0 0.9 result = self.values
4926 1 3525 3525.0 25.1 mask = isna(result)
4927
4928 1 86 86.0 0.6 with np.errstate(all='ignore'):
4929 1 2 2.0 0.0 if upper is not None:
4930 1 9314 9314.0 66.4 result = np.where(result >= upper, upper, result)
4931 1 61 61.0 0.4 if lower is not None:
4932 result = np.where(result <= lower, lower, result)
4933 1 283 283.0 2.0 if np.any(mask):
4934 result[mask] = np.nan
4935
4936 1 78 78.0 0.6 axes_dict = self._construct_axes_dict()
4937 1 532 532.0 3.8 result = self._constructor(result, **axes_dict).__finalize__(self)
4938
4939 1 2 2.0 0.0 if inplace:
4940 self._update_inplace(result)
4941 else:
4942 1 1 1.0 0.0 return result
</code></pre>
<p>There are still multiple function calls, for example <code>isna</code> and <code>np.where</code>, that take a significant amount of time, but overall this is at least comparable to the <code>np.ndarray.clip</code> time (that's in the regime where the timing difference is ~3 on my computer).</p>
<p>The takeaway should probably be:</p>
<ul>
<li>Many NumPy functions just delegate to a method of the object passed in, so there can be huge differences when you pass in different objects.</li>
<li>Profiling, especially line-profiling, can be a great tool to find the places where the performance difference comes from.</li>
<li>Always make sure to test differently sized objects in such cases. You could be comparing constant factors that probably don't matter except if you process lots of small arrays.</li>
</ul>
<p>Used versions:</p>
<pre><code>Python 3.6.3 64-bit on Windows 10
Numpy 1.13.3
Pandas 0.21.1
</code></pre>
|
python|performance|pandas|numpy
| 50
|
1,586
| 48,965,791
|
One value for each df Column group
|
<pre><code> A B
0 2002-01-16 10
1 2002-01-16 7
2 2002-01-16 2
3 2002-01-16 8
4 2002-01-16 5
5 2002-01-17 54
6 2002-01-17 6
7 2002-01-17 2
</code></pre>
<p>I want to add a <em>C column</em> which contains the <strong>first <em>Column B value</em> for each <em>Column A date group</em></strong>. The output might be:</p>
<pre><code> A B C
0 2002-01-16 10 10
1 2002-01-16 7 10
2 2002-01-16 2 10
3 2002-01-16 8 10
4 2002-01-16 5 10
5 2002-01-17 54 54
6 2002-01-17 6 54
7 2002-01-17 2 54
</code></pre>
<p>I've tested with:</p>
<pre><code>df["C"] = df.values[0][1]
</code></pre>
<p>But it doesn't change the value for each <em>Column A date group</em>.</p>
<p>Thank you.</p>
|
<p>You can groupby column A, then use <code>.transform('first')</code> on column B to generate a series that has the first value of the group for all items in the group, eg:</p>
<pre><code>df.loc[:, 'C'] = df.groupby('A').B.transform('first')
</code></pre>
<p>This'll make your example frame be:</p>
<pre><code> A B C
0 2002-01-16 10 10
1 2002-01-16 7 10
2 2002-01-16 2 10
3 2002-01-16 8 10
4 2002-01-16 5 10
5 2002-01-17 54 54
6 2002-01-17 6 54
7 2002-01-17 2 54
</code></pre>
|
python|pandas
| 3
|
1,587
| 48,935,200
|
Pandas DF referencing the same slice twice in the same computation
|
<p>I have a huge data set to process and I am trying to optimize the most costly line, processing wise.</p>
<p>I use a df with 3 columns, A, B and C.
I have 2 values, a and b, which are used to update the value of C in a subset of the df.</p>
<p>Before I continue, let me define a textual substitution to increase readability:</p>
<pre><code>filter(_X) -> df.loc[df['A'] < a, _X]
</code></pre>
<p>Every time I type "filter", please substitute it with the text on the right (applying the correct argument in place of the parameter _X - think C/C++ macros).
The line of code in question is:</p>
<pre><code>filter('C') += a * np.minimum(filter('B'), b)
</code></pre>
<p>What I'm not sure about is if python will process "filter" twice when evaluating the expression, or if it will use a "reference" (a-la C++) and only do it once.
In the former case, is there a way for me to rewrite the expression in a way to avoid the double execution of the code of "filter"?</p>
<p>Moreover, if you have suggestions on how to rewrite the "filter" itself, I'd be happy to test them.</p>
<p>EDIT:
Expanded version of the code:</p>
<pre><code>df.loc[df['A'] < a, 'C'] += a * np.minimum(df.loc[df['A'] < a, 'B'], b)
</code></pre>
|
<p>If I understand correctly, you may not need to "filter twice" after the <code>+=</code>. See my example below:</p>
<pre><code>np.random.seed(5)
df = pd.DataFrame(np.random.randint(0,100,size=(4, 4)), columns=list('ABCD'))
A B C D
0 99 78 61 16
1 73 8 62 27
2 30 80 7 76
3 15 53 80 27
</code></pre>
<p>Now if you wanted to add the values of the minimum of columns <code>C</code> and <code>D</code> to the current value of <code>B</code> that would simply be: <code>df.loc[df['A'] < 80, 'B'] += np.minimum(df['C'], df['D'])</code></p>
<pre><code> A B C D
0 99 78.0 61 16
1 73 35.0 62 27 #<--- meets condition 8+27=35
2 30 87.0 7 76 #<--- meets condition 80+7=87
3 15 80.0 80 27 #<--- meets condition 53+27=80
</code></pre>
<p>Notice how, when <code>A</code> < 80, the <code>B</code> value changes by whichever value in <code>C</code> or <code>D</code> is smaller. One thing to note is that <code>B</code> turns into a float. Not sure why. </p>
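<p>Regarding the double evaluation: Python does not cache repeated sub-expressions, so in your expanded line <code>df['A'] < a</code> is computed once for the left-hand indexer and once more inside <code>np.minimum</code>. Computing the mask a single time avoids that:</p>
<pre><code>mask = df['A'] < a
df.loc[mask, 'C'] += a * np.minimum(df.loc[mask, 'B'], b)
</code></pre>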
|
python|pandas
| 1
|
1,588
| 49,341,568
|
Pandas Series argument function memoization
|
<p>I want to memoize a function with mutable parameters (Pandas Series objects). Is there any way to accomplish this?</p>
<p>Here's a simple Fibonacci example, the parameter is a Pandas Series where the first element represents the sequence's index.</p>
<p>Example:</p>
<pre><code>from functools import lru_cache
@lru_cache(maxsize=None)
def fib(n):
if n.iloc[0] == 1 or n.iloc[0] == 2:
return 1
min1 = n.copy()
min1.iloc[0] -=1
min2 = n.copy()
min2.iloc[0] -= 2
return fib(min1) + fib(min2)
</code></pre>
<p>Call function:</p>
<pre><code>fib(pd.Series([15,0]))
</code></pre>
<p>Result:</p>
<pre><code>TypeError: 'Series' objects are mutable, thus they cannot be hashed
</code></pre>
<p>The intended use is more complex, so I posted this useless but simple example.</p>
|
<p>Several options:</p>
<ul>
<li>Convert the mutable objects to something immutable such as a string or tuple (see the sketch after this list).</li>
<li>Create a hash of the mutable objects and use that as the memo dict key. Risk of hash clashes.</li>
<li>Create an immutable subclass which implements the __hash__() function.</li>
</ul>
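<p>A minimal sketch of the first option, using the Fibonacci example (the helper name is made up): the cached function works on a plain tuple, and a thin wrapper converts the Series before delegating.</p>
<pre><code>from functools import lru_cache

import pandas as pd

@lru_cache(maxsize=None)
def _fib_cached(n_tuple):
    # n_tuple is an ordinary (hashable) tuple, e.g. (15, 0)
    n = n_tuple[0]
    if n == 1 or n == 2:
        return 1
    rest = n_tuple[1:]
    return _fib_cached((n - 1,) + rest) + _fib_cached((n - 2,) + rest)

def fib(n):
    # n is a pandas Series; convert it to a tuple so lru_cache can hash it
    return _fib_cached(tuple(n))

print(fib(pd.Series([15, 0])))  # 610
</code></pre>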
|
python|pandas|memoization|mutable
| 1
|
1,589
| 58,633,886
|
Highlighting Values in Pandas
|
<p>Good Evening all,
I have made a pandas DataFrame from an Excel spreadsheet. I am trying to highlight the names in this list that have logged in at 9:01:00 etc., i.e. anyone who has logged in past the hour or half hour by 1 minute or more, but excluding those that have logged in early, e.g. 07:59:00 or 07:29:00. In the sample below these are the entries with * around the time. I am a complete amateur coder so I apologise. If things could be put in the simplest form without assuming a great degree of knowledge I would very much appreciate it. Also, if this is incredibly complex or impossible I also apologise.</p>
<pre><code> Name Login\nTime
0 ITo 07:59:09
1 Ann 07:59:13
2 Darryll 07:59:24
3 Darren 07:59:31
4 FlorR 07:59:42
5 Colm 07:59:56
6 NatashaBr 07:59:59
7 AlexRobe 07:59:59
8 JonathanSinn 08:00:02
9 BrendanJo 08:00:04
10 DanielCov 08:00:15
11 RW 08:00:17
12 SaraHerrma 08:00:26
13 RobertStew 08:00:37
14 JasonBal *08:04:36*
17 KevinAll 08:59:52
18 JFo 09:00:05
19 LiviaHarr 09:00:22
20 Patrick *09:01:36*
24 SianDi 09:30:32
25 AlisonBri 09:59:27
26 MMulholl 10:00:02
27 TiffanyThom 10:00:07
29 GeorgeEdw 11:00:00
30 JackSha 11:00:50
31 UsmanA 11:59:46
32 LewisBrad 12:02:30
34 RyanmacCor 12:59:20
35 GerardMcphil 12:59:56
36 TanjaN 13:00:07
37 MartinRichar 13:30:08
38 MarkBellin 13:30:20
39 KyranSpur 13:30:24
40 RichRam 13:58:53
41 OctavioSan 14:30:10
42 CharlesS 16:45:07
43 DanielHoll 16:50:55
44 ThomasHoll 16:59:45
45 RosieFl 16:59:56
46 CiaranMur 17:00:01
47 LouiseDa 17:29:29
48 WilliamAi 17:30:02
</code></pre>
|
<p>You can have a look at the Pandas styling options. It has an applymap function which helps you to color code specific columns based on conditions of your choice. </p>
<p>The documentation (<a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html</a>) has some examples; you can go through these and decide how you would want to highlight the column values. </p>
<p>Assuming you have a DataFrame called df, you can define your own styling function func() and apply it to your DataFrame. </p>
<pre><code>df.style.apply(func)
</code></pre>
<p>You can define your styling function based on the examples in the documentation. Let me know if you have any more questions. </p>
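<p>For example, here is a rough sketch of such a function, assuming the login times are 'HH:MM:SS' strings in a column called, say, 'Login Time', and guessing the "a few minutes past the hour or half hour" rule from the starred rows - adjust it to taste:</p>
<pre><code>import pandas as pd

def highlight_late(val):
    minute = pd.to_datetime(val, format='%H:%M:%S').minute
    late = minute % 30 in (1, 2, 3, 4)   # 1-4 min past the hour or half hour
    return 'background-color: yellow' if late else ''

styled = df.style.applymap(highlight_late, subset=['Login Time'])
styled            # shows the highlighted table in a notebook
</code></pre>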
|
python|python-3.x|pandas
| 0
|
1,590
| 70,178,582
|
Why does the loss function always return zero after the first epoch?
|
<p>Why is the loss function always printing zero after the first epoch?</p>
<p>I suspect it's because of <code>loss = loss_fn(outputs, torch.max(labels, 1)[1])</code>.</p>
<p>But if I use <code>loss = loss_fn(outputs, labels)</code>, I get the error:</p>
<pre><code>RuntimeError: 0D or 1D target tensor expected, multi-target not supported
</code></pre>
<pre><code>nepochs = 5
losses = np.zeros(nepochs)
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.Adam(modell.parameters(), lr = 0.001)
for epoch in range(nepochs):
running_loss = 0.0
n = 0
for data in train_loader:
#single batch
if(n == 1):
break;
inputs, labels = data
optimizer.zero_grad()
outputs = modell(inputs)
#loss = loss_fn(outputs, labels)
loss = loss_fn(outputs, torch.max(labels, 1)[1])
loss.backward()
optimizer.step()
running_loss += loss.item()
n += 1
losses[epoch] = running_loss / n
print(f"epoch: {epoch+1} loss: {losses[epoch] : .3f}")
</code></pre>
<p>The model is:</p>
<pre><code>class Classifier(nn.Module):
    def __init__(self, labels=10):
        super(Classifier, self).__init__()
        self.fc = nn.Linear(3 * 64 * 64, labels)

    def forward(self, x):
        out = x.reshape(x.size(0), -1)
        out = self.fc(out)
        return out
</code></pre>
<p>Any idea?</p>
<p>The labels are a 64-element tensor like this:</p>
<pre><code>tensor([[7],[1],[ 2],[3],[ 2],[9],[9],[8],[9],[8],[ 1],[7],[9],[2],[ 5],[1],[3],[3],[8],[3],[7],[1],[7],[9],[8],[ 8],[3],[7],[ 5],[ 1],[7],[3],[2],[1],[ 3],[3],[2],[0],[3],[4],[0],[7],[1],[ 8],[4],[1],[ 5],[ 3],[4],[3],[ 4],[8],[4],[1],[ 9],[7],[3],[ 2],[ 6],[4],[ 8],[3],[ 7],[3]])
</code></pre>
|
<p>With <code>nn.CrossEntropyLoss</code> the usual call is <code>loss = loss_fn(outputs, labels)</code>, where <code>outputs</code> are the raw logits of shape <code>(batch, num_classes)</code> and <code>labels</code> is a 1D tensor of class indices of shape <code>(batch,)</code>. <code>torch.max</code> is normally only used afterwards, to turn logits into predicted classes:</p>
<pre><code>_ , predicted = torch.max(outputs, 1)
</code></pre>
<blockquote>
<p><code>torch.max()</code> returns a namedtuple <code>(values, indices)</code> where values is
the maximum value of each row of the <code>input</code> tensor in the given
dimension <code>dim</code>. And <code>indices</code> is the index location of each maximum value found (<code>argmax</code>).</p>
</blockquote>
<p>In your code snippet <code>labels</code> has shape <code>(64, 1)</code>, so <code>torch.max(labels, 1)[1]</code> takes the argmax over a dimension of size 1 and therefore returns all zeros. Every target becomes class 0, the model trivially learns to always predict class 0 on the single batch you train on, and the printed loss drops to 0.000 after the first epoch. Keep <code>outputs</code> as the raw logits and flatten the labels instead:</p>
<pre><code>loss = loss_fn(outputs, labels.squeeze(1))   # or labels.view(-1)
</code></pre>
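<p>A quick way to see the shape problem (a minimal sketch with a small made-up batch shaped like the question's labels):</p>
<pre><code>import torch

labels = torch.tensor([[7], [1], [2], [3]])   # shape (4, 1), like the (64, 1) tensor above
print(torch.max(labels, 1)[1])                # tensor([0, 0, 0, 0]) -- argmax over a size-1 dim
print(labels.squeeze(1))                      # tensor([7, 1, 2, 3]) -- the class indices you want
</code></pre>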
|
python|pytorch|conv-neural-network
| 1
|
1,591
| 56,249,665
|
Pandas Merge returns empty dataframe
|
<p>I'm loading two dataframes from two different CSVs and trying to join them, but for some reason <code>pd.merge</code> is not joining the data and returns an empty dataframe. I have tried changing the data types, but still nothing. Appreciate any help.</p>
<p>Sample <code>mon_apps</code> dataframe:</p>
<pre><code>app_cnts | year | month
16634 | 2018 | 8
9636 | 2019 | 2
17402 | 2017 | 8
17472 | 2017 | 11
15689 | 2018 | 6
</code></pre>
<p>Sample <code>mon_spend</code> dataframe:</p>
<pre><code>channel | month | year | spend
FB | 1 | 2017 | 0
FB | 2 | 2017 | 0
FB | 3 | 2017 | 0
FB | 4 | 2017 | 0
FB | 5 | 2017 | 0
</code></pre>
<p>I change the data types (just to be sure it's not the issue) like so:</p>
<pre><code>mon_spend[['month', 'year', 'spend']] = mon_spend[['month', 'year', 'spend']].astype(np.int64)
mon_spend['channel'] = mon_spend['channel'].astype(str)
mon_apps = mon_apps.astype(np.int64)
</code></pre>
<p>I check the datatypes:</p>
<pre><code>mon_spend
channel object
month int64
year int64
spend int64
dtype: object
mon_apps
app_cnts int64
year int64
month int64
dtype: object
</code></pre>
<p>I join using <code>pd.merge</code> like so:</p>
<pre><code>pd.merge(mon_apps[['app_cnts', 'year', 'month']], mon_spend, left_on = ["year", "month"], right_on = ["year", "month"])
</code></pre>
<p>Appreciate any help. Thanks.</p>
<p><strong><em>More data</em></strong></p>
<pre><code>channel month year spend
FB 2017 1 0
FB 2017 2 0
FB 2017 3 0
FB 2017 4 0
FB 2017 5 0
FB 2017 6 0
FB 2017 7 52514
FB 2017 8 10198
FB 2017 9 25408
FB 2017 10 31333
FB 2017 11 128071
FB 2017 12 95160
FB 2018 1 5001
FB 2018 2 17929
FB 2018 3 84548
FB 2018 4 16414
FB 2018 5 28282
FB 2018 6 38430
FB 2018 7 58757
FB 2018 8 120722
FB 2018 9 143766
FB 2018 10 68400
FB 2018 11 66984
FB 2018 12 58228
</code></pre>
<p><em>Some more Info</em></p>
<pre><code>print (mon_spend[["year", "month"]].sort_values(["year", "month"]).drop_duplicates().values.tolist())
[[2017, 1], [2017, 2], [2017, 3], [2017, 4], [2017, 5]]
print (mon_apps[["year", "month"]].sort_values(["year", "month"]).drop_duplicates().values.tolist())
[[2017, 8], [2017, 11], [2018, 6], [2018, 8], [2019, 2]]
[[1, 2017], [1, 2018], [1, 2019], [2, 2017], [2, 2018], [2, 2019], [3, 2017], [3, 2018], [4, 2017], [4, 2018], [5, 2017], [5, 2018], [6, 2017], [6, 2018], [7, 2017], [7, 2018], [8, 2017], [8, 2018], [9, 2017], [9, 2018], [10, 2017], [10, 2018], [11, 2017], [11, 2018], [12, 2017], [12, 2018]]
[[2017, 1], [2017, 2], [2017, 3], [2017, 4], [2017, 5], [2017, 6], [2017, 7], [2017, 8], [2017, 9], [2017, 10], [2017, 11], [2017, 12], [2018, 1], [2018, 2], [2018, 3], [2018, 4], [2018, 5], [2018, 6], [2018, 7], [2018, 8], [2018, 9], [2018, 10], [2018, 11], [2018, 12], [2019, 1], [2019, 2], [2019, 3]]
Out[18]:
[[2017, 8], [2017, 11], [2018, 6], [2018, 8], [2019, 2]]
</code></pre>
|
<p>The month and year columns are swapped in one of the files you are merging. Try swapping the keys while merging, or rename them before the merge:</p>
<pre><code>mon_apps.merge(mon_spend, left_on=["year", "month"], right_on=['month', 'year'])
</code></pre>
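<p>A minimal sketch of the rename approach (assuming it is <code>mon_spend</code> whose <code>month</code> and <code>year</code> headers are the wrong way around):</p>
<pre><code># swap the two column labels, then merge on the now-consistent keys
mon_spend = mon_spend.rename(columns={'month': 'year', 'year': 'month'})
merged = pd.merge(mon_apps[['app_cnts', 'year', 'month']], mon_spend, on=['year', 'month'])
</code></pre>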
|
python-3.x|pandas
| 3
|
1,592
| 56,150,863
|
Pandas: load a table into a dataframe with read_sql - `con` parameter and table name
|
<p>I'm trying to import an SQL database into a Python pandas dataframe, and I am getting a syntax error. I am a newbie here, so the issue is probably very simple.</p>
<p>After downloading sqlite sample chinook.db from <a href="http://www.sqlitetutorial.net/sqlite-sample-database/" rel="nofollow noreferrer">http://www.sqlitetutorial.net/sqlite-sample-database/</a>
and reading pandas documentation, I tried to load it into a pandas dataframe with</p>
<pre><code>import pandas as pd
import sqlite3
conn = sqlite3.connect('chinook.db')
df = pd.read_sql('albums', conn)
</code></pre>
<p>where <code>'albums'</code> is a table of 'chinook.db' gathered with sqlite3 from command line.</p>
<p>The result is:</p>
<pre><code>...
DatabaseError: Execution failed on sql 'albums': near "albums": syntax error
</code></pre>
<p>I tried variations of the above code to import in an ipython session the tables of the database for exploratory data analysis, with no success.
What am I doing wrong? Is there a documentation/tutorial for newbies with some examples around?</p>
<p>Thanks in advance for your help!</p>
|
<p>Found it!</p>
<p>An example of db connection with SQLAlchemy can be found here:
<a href="https://www.codementor.io/sagaragarwal94/building-a-basic-restful-api-in-python-58k02xsiq" rel="nofollow noreferrer">https://www.codementor.io/sagaragarwal94/building-a-basic-restful-api-in-python-58k02xsiq</a></p>
<pre><code>import pandas as pd
from sqlalchemy import create_engine
db_connect = create_engine('sqlite:///chinook.db')
df = pd.read_sql('albums', con=db_connect)
print(df)
</code></pre>
<p>As suggested by @Anky_91, <code>pd.read_sql_table</code> also works, since <code>read_sql</code> wraps it.</p>
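<p>For example (a minimal sketch, reusing the same engine as above):</p>
<pre><code>df = pd.read_sql_table('albums', con=db_connect)
</code></pre>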
<p>The issue was the connection: to read a bare table name, the connection has to be made with SQLAlchemy and not with sqlite3. (With a plain sqlite3 connection you would need a full query instead, e.g. <code>pd.read_sql('SELECT * FROM albums', conn)</code>.)</p>
<p>Thanks</p>
|
python|pandas|sqlite
| 0
|
1,593
| 56,381,631
|
"index 1 is out of bounds for axis 0 with size 1" in python
|
<p>I seem to be having an indexing problem? I do not know how to interpret this error... :/ I think it has to do with how I initialized u.</p>
<p>I have this 3x3 G matrix that I created using the variable u (a vector, x - y). I just made a zero matrix for now bc I'm not quite sure how to code it yet, there are lots of partials and norms involved haha. x_j = (x_1 (j), x_2 (j), x_3 (j)) and y_j = (y_1 (j), y_2 (j), y_3 (j)). x and y are nx3 vectors. alpha_j is a 3x3 matrix. The A matrix is block diagonal matrix of size 3nx3n. I am having trouble with the W matrix (size 3nx3n, where the (i,j)th block is the 3x3 matrix given by alpha_i*G_[ij]*alpha_j).</p>
<pre><code>import numpy as np
import numpy.matlib

def G(u):
u1 = u[0]
u2 = u[1]
u3 = u[2]
g = np.array([[0,0,0],[0,0,0],[0,0,0]],complex)
return g
def W(x, y, k, alpha, A):
# initialization
n = x.shape[0] # the number of x vextors
result = np.zeros([3*n,3*n],complex)
u = np.matlib.zeros((n, 3)) # u = x - y
print(u)
num_in_blocks = n
# variables
a_i = alpha_j(alpha, A)
a_j = alpha_j(alpha, A)
for i in range(0, 2):
x1 = x[i] # each row of x
y1 = y[i] # each row of y
for j in range(0, n-1):
u[i][j] = x1[j] - y1[j] # each row of x minus each row of y
if i != j:
block_result = a_i * G((u[i][j]), k) * a_j
for k in range(num_in_blocks):
for l in range(num_in_blocks):
result[3*i + k, 3*j + l] = block_result[i, j]
return result
def alpha_j(a, A):
alph = np.array([[0,0,0],[0,0,0],[0,0,0]],complex)
n = A.shape[0]
rho = np.random.rand(n,1)
for i in range(0, n-1):
for j in range(0, n-1):
alph[i,j] = (rho[i] * a * A[i,j])
return alph
#------------------------------------------------------------------
# random case
def x(n):
return np.random.randint(100, size=(n, 3))
def y(n):
return np.random.randint(100, size=(n, 3))
# SYSTEM PARAMETERS
theta = 0 # can range from [0, 2pi)
chi = 10 + 1j
lam = 0.5 # microns (values between .4-.7)
k = (2 * np.pi)/lam # 1/microns
V_0 = (0.05)**3 # microns^3
K = k * np.array([[0], [np.sin(theta)], [np.cos(theta)]])
alpha = (V_0 * 3 * chi)/(chi + 3)
A = np.matlib.identity(3)
#------------------------------------------------------------------
# TEST FUNCTIONS
w = W(x(3), y(3), k, alpha, A)
print(w)
</code></pre>
<p>I keep getting the error "invalid index to scalar variable." at the line u1 = u[0].</p>
|
<p><code>np.matlib</code> makes a <code>np.matrix</code>, a subclass of <code>np.ndarray</code>. It's supposed to give a MATLAB feel, and (nearly) always produces a 2d array. Its use in new code is being discouraged.</p>
<pre><code>In [42]: U = np.matrix(np.arange(9).reshape(3,3))
In [43]: U
Out[43]:
matrix([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
</code></pre>
<p>Indexing with [0] picks the first row, but returns a 2d matrix.</p>
<pre><code>In [44]: U[0]
Out[44]: matrix([[0, 1, 2]])
In [45]: U[0].shape
Out[45]: (1, 3)
</code></pre>
<p>Adding another [1] still indexes the first dimension (which is now size 1):</p>
<pre><code>In [46]: U[0][1]
---------------------------------------------------------------------------
IndexError: index 1 is out of bounds for axis 0 with size 1
</code></pre>
<p>Normally we index numpy arrays with a composite index: </p>
<pre><code>In [47]: U[0,1]
Out[47]: 1
</code></pre>
<p>If we make an <code>ndarray</code> instead:</p>
<pre><code>In [48]: U = np.arange(9).reshape(3,3)
In [49]: U
Out[49]:
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
In [50]: U[0]
Out[50]: array([0, 1, 2]) # 1d
In [51]: U[0][1] # works,
Out[51]: 1
In [52]: U[0,1] # still preferable
Out[52]: 1
</code></pre>
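<p>Applied to the code in the question, that means either indexing <code>u</code> with a composite index or, simpler, creating it as a regular array (a sketch, assuming <code>u</code> only needs to hold plain numbers):</p>
<pre><code>u = np.zeros((n, 3))       # instead of np.matlib.zeros((n, 3))
u[i, j] = x1[j] - y1[j]    # instead of u[i][j], which fails on np.matrix rows
</code></pre>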
|
python|numpy|indexing
| 0
|
1,594
| 55,820,138
|
Pandas reset_index() is not working after grouping by and aggregating by multiple methods
|
<p>I have a pandas DataFrame with 2 grouping columns and 3 numeric columns.
I am grouping the data like this:</p>
<pre><code>df = df.groupby(['date_week', 'uniqeid']).agg({
'completes':['sum', 'median', 'var', 'min', 'max']
,'dcount_visitors': ['sum', 'median', 'var', 'min', 'max']
,'dcount_visitor_groups': ['sum', 'median', 'var', 'min', 'max']
})
</code></pre>
<p>The result is the expected multi-level index: </p>
<pre><code>MultiIndex(levels=[['completes', 'dcount_visitors', 'dcount_subscriptions', 'dcount_visitor_groups', 'date_week'], ['sum', 'median', 'var', 'min', 'max', '']],
labels=[[4, 3, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2], [5, 5, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4]])
</code></pre>
<p>Usually I flatten a multi-index like this: </p>
<pre><code>df2 = df2.reset_index(drop=True)
</code></pre>
<p>However, when I inspect the columns I still get a multi-index.
I have tried including the <code>as_index=False</code> in my groupby function but that doesn't work either. </p>
<p>Interestingly, this process works as expected if I only use 1 numeric column with one aggregation. </p>
<pre><code>u = nunits.groupby(['account', 'week_date', 'accountid', 'full_account_name','SegmentName'], as_index=False).agg({'ConsumptionUnit': 'sum'})
Index(['account', 'week_date', 'accountid', 'full_account_name', 'SegmentName',
'ConsumptionUnit'],
dtype='object')
</code></pre>
<p>Any tips or recommendations would be appreciated.</p>
|
<p>(I realize it's a bit against the norm to "accept" your own answer, but I wanted to save folks time in responding to a question that was already resolved.) </p>
<p>@Efran: I did, and it was a 2 level multi-index.
@Bugbeeb: Good call on identifying the level. The 5 on the labels was throwing me off. </p>
<p>I was able to hunt down an answer: as of pandas 0.24.0 you can use <code>.to_flat_index</code>.
I had been using 0.23.0, so I wasn't finding this option in that version's documentation. </p>
<p>An example of how to use this can be found <a href="https://stackoverflow.com/questions/14507794/pandas-how-to-flatten-a-hierarchical-index-in-columns/55757002#55757002">here</a></p>
<p>After <code>df.columns = df.columns.to_flat_index()</code>,
the resulting index looks like this: </p>
<pre><code>Index([ 'date_week',
'TPID',
('completes', 'sum'),
('completes', 'median'),
('completes', 'var'),
('completes', 'min'),
('completes', 'max'),
('dcount_visitors_with_events', 'sum'),
('dcount_visitors_with_events', 'median'),
('dcount_visitors_with_events', 'var'),
('dcount_visitors_with_events', 'min'),
('dcount_visitors_with_events', 'max'),
('dcount_id_groups', 'sum'),
('dcount_id_groups', 'median'),
('dcount_id_groups', 'var'),
('dcount_id_groups', 'min'),
('dcount_id_groups', 'max')],
dtype='object')
</code></pre>
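<p>If you prefer plain string column names instead of tuples, one small follow-up step joins each tuple (a sketch, assuming an underscore separator is acceptable):</p>
<pre><code>df.columns = ['_'.join(part for part in col if part) if isinstance(col, tuple) else col
              for col in df.columns]
# e.g. ('completes', 'sum') becomes 'completes_sum', while 'date_week' stays as-is
</code></pre>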
<p>Hope this helps other folks and thanks for quick replies.
This community is great!</p>
|
python|pandas|feature-engineering
| 2
|
1,595
| 65,007,457
|
I have a numpy file that is an array of percentages, how do I turn this to a yes/no database where only values higher than 0.3 are a yes?
|
<p>I have an array of 19000 numbers from 0 to 1 that represent percentages and I would like to have a database of yes and no where yes is every number above 0.3 and no every number below 0.3.</p>
<p>Something that should look like this where original is my initial file</p>
<p>original = [0.2499320, 0.456484, 0.324824 ... 0.231, 0.3213]</p>
<p>modified = [no, yes, yes... no, yes]</p>
<p>Is there a way to do this? Thank you in advance</p>
|
<p><a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer">numpy.where</a> will do the job.</p>
<pre><code>import numpy as np
arr = np.random.rand(10)
print(arr)
print(np.where(arr>=0.3, 'Yes', 'No'))
# [0.23556319 0.1173074 0.2673033 0.05552573 0.79930567 0.33443317
# 0.88644862 0.89914459 0.64551288 0.60601345]
# ['No' 'No' 'No' 'No' 'Yes' 'Yes' 'Yes' 'Yes' 'Yes' 'Yes']
</code></pre>
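<p>Since the values live in a numpy file, the full round trip might look like this (a minimal sketch, with <code>percentages.npy</code> as a hypothetical file name produced by <code>np.save</code>):</p>
<pre><code>original = np.load('percentages.npy')          # the ~19000 values between 0 and 1
modified = np.where(original > 0.3, 'yes', 'no')
np.save('modified.npy', modified)               # or keep working with it in memory
</code></pre>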
|
python|arrays|python-3.x|numpy
| 1
|
1,596
| 64,934,798
|
How to do time series backward resampling e.g. 5 business days starting on the last data date?
|
<p>I would like to compute weekly returns but starting from the end date backwards. This is my initial attempt to implement it using pandas:</p>
<pre><code>import pandas as pd
import numpy as np
from pandas.tseries.offsets import BDay
index = pd.date_range(start='2020-09-13', end='2020-10-13', freq=BDay())
index_len = len(index)
dfw = pd.DataFrame(data=np.arange(start=1, stop=1+(index_len-1)*0.002, step=0.002),
index=index,
columns=['col1'])
def weekly_ret(x):
if x.size > 0:
print(f"range is {x.index[0]} - {x.index[-1]}")
return (x.iloc[-1] - x.iloc[0]) / x.iloc[0]
else:
return np.nan
dfw = dfw.resample(rule='5B').apply(weekly_ret)
print(dfw)
</code></pre>
<p>then I get the following output but this is not what I want:</p>
<pre><code>range is 2020-09-14 00:00:00 - 2020-09-18 00:00:00
range is 2020-09-21 00:00:00 - 2020-09-25 00:00:00
range is 2020-09-28 00:00:00 - 2020-10-02 00:00:00
range is 2020-10-05 00:00:00 - 2020-10-09 00:00:00
range is 2020-10-12 00:00:00 - 2020-10-13 00:00:00
col1
2020-09-14 0.008000
2020-09-21 0.007921
2020-09-28 0.007843
2020-10-05 0.007767
2020-10-12 0.001923
</code></pre>
<p>I would like it to start from <code>2020-10-13</code> backwards so that the last range would be:</p>
<pre><code>range is 2020-10-07 00:00:00 - 2020-10-13 00:00:00
</code></pre>
<p>instead of:</p>
<pre><code>range is 2020-10-12 00:00:00 - 2020-10-13 00:00:00
</code></pre>
<p>What I have tried so far:</p>
<ol>
<li>Inverting the dataframe with <code>dfw = dfw.reindex(index=dfw.index[::-1])</code></li>
<li>The step #1 above plus having the rule to be <code>-5B</code>, this results in an error.</li>
<li>Using the origin parameter for the resample function but this has no effect on the order of the computation i.e. <code>origin=dfw.index[-1]</code></li>
<li>The step #1 above plus computing per number of rows on the inverted dataframe <code>dfw = dfw.rolling(5).apply(weekly_ret)[::5]</code> but here I get a NaN for the first (last) interval and this solution is somewhat also wasteful.</li>
</ol>
<p><strong>UPDATE</strong>: this would be the desired output; notice the last return covers the week counted backwards from the last day in the index:</p>
<pre><code>range is 2020-09-16 00:00:00 - 2020-09-22 00:00:00 = 0.007968127490039847
range is 2020-09-23 00:00:00 - 2020-09-29 00:00:00 = 0.00788954635108482
range is 2020-09-30 00:00:00 - 2020-10-06 00:00:00 = 0.007812500000000007
range is 2020-10-07 00:00:00 - 2020-10-13 00:00:00 = 0.00773694390715668
col1
2020-09-22 0.007968
2020-09-29 0.007890
2020-10-06 0.007813
2020-10-13 0.007737 i.e. (1.042 - 1.034)/1.034
</code></pre>
|
<p>So what you're looking for are <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#anchored-offsets" rel="nofollow noreferrer">anchored offsets</a>, i.e. resampling the DataFrame on a weekly basis, starting on <em>the same weekday</em> that your last index is on. In your case, <code>2020-10-13</code> is a Tuesday, i.e. you want to use the rule <code>W-TUE</code>. I'd suggest using a lookup dictionary to translate the <code>.weekday()</code> number (e.g. <code>Tuesday == 1</code>) into the respective rule. Then you only need to apply your function over the <code>.resample()</code>:</p>
<pre><code>rule_lookup={
0:'W-MON',
1:'W-TUE',
2:'W-WED',
3:'W-THU',
4:'W-FRI',
5:'W-SAT',
6:'W-SUN'
}
# get the proper rule which ends on the last date in the index
rule = rule_lookup[dfw.index[-1].weekday()]
print(f"=> resampling using rule: {rule}")
dfw = dfw.resample(rule=rule).apply(weekly_ret)
print(dfw)
</code></pre>
<p>yields:</p>
<pre><code>=> resampling using rule: W-TUE
range is 2020-09-14 00:00:00 - 2020-09-15 00:00:00
range is 2020-09-16 00:00:00 - 2020-09-22 00:00:00
range is 2020-09-23 00:00:00 - 2020-09-29 00:00:00
range is 2020-09-30 00:00:00 - 2020-10-06 00:00:00
range is 2020-10-07 00:00:00 - 2020-10-13 00:00:00
col1
2020-09-15 0.002000
2020-09-22 0.007968
2020-09-29 0.007890
2020-10-06 0.007813
2020-10-13 0.007737
</code></pre>
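<p>Instead of the lookup dictionary, the same rule string can also be built directly from the last timestamp (a small sketch, relying on <code>strftime('%a')</code> returning the English three-letter weekday abbreviation):</p>
<pre><code>rule = 'W-' + dfw.index[-1].strftime('%a').upper()   # e.g. 'W-TUE' for 2020-10-13
</code></pre>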
|
python|pandas|pandas-resample
| 1
|
1,597
| 64,980,681
|
How to list the index and columns names together for a given dataframe?
|
<p>How do I list the column names along with their index positions from a DataFrame in Python?</p>
<p>The code below gives only the index numbers of the column names, but I need to list the index numbers together with the column names for a large dataset with many columns.</p>
<pre><code>column_names = ['a', 'b', 'c']
[df.columns.get_loc(col) for col in column_names]
</code></pre>
|
<p>You can create a <code>map</code> like this:</p>
<pre><code>In [3359]: col_map = {df.columns.get_loc(c):c for c in df.columns}
In [3360]: col_map
Out[3360]: {0: 'id', 1: 'count', 2: 'colors'}
</code></pre>
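<p>Since column positions are simply 0, 1, 2, ... in order, an even shorter equivalent (a sketch that builds the same kind of mapping) is:</p>
<pre><code>col_map = dict(enumerate(df.columns))
# {0: 'id', 1: 'count', 2: 'colors'} for the same df as above
</code></pre>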
|
python|python-3.x|pandas|dataframe|indexing
| 2
|
1,598
| 39,819,323
|
How to do logical operation between DataFrame and Series?
|
<p>Suppose I have a bool <code>DataFrame df</code> and a bool <code>Series x</code> with the same index, and I want to do a logical operation between <code>df</code> and <code>x</code> per column. Is there any short and fast way, like <code>DataFrame.sub</code>, compared to using <code>DataFrame.apply</code>?</p>
<pre><code>In [31]: df
Out[31]:
x y z u
A False False True True
B True True True True
C True False False False
In [32]: x
Out[32]:
A True
B False
C True
dtype: bool
In [33]: r = df.apply(lambda col: col & x) # Any other way ??
In [34]: r
Out[34]:
x y z u
A False False True True
B False False False False
C True False False False
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mul.html" rel="noreferrer"><code>mul</code></a>, but you need to cast to <code>int</code> and then back to <code>bool</code>, because multiplying <code>bool</code> columns directly raises the <code>UserWarning</code> shown further down:</p>
<pre><code>print (df.astype(int).mul(x.values, axis=0).astype(bool))
x y z u
A False False True True
B False False False False
C True False False False
</code></pre>
<p>Similar solution:</p>
<pre><code>print (df.mul(x.astype(int), axis=0).astype(bool))
x y z u
A False False True True
B False False False False
C True False False False
</code></pre>
<hr>
<pre><code>print (df.mul(x.values, axis=0))
x y z u
A False False True True
B False False False False
C True False False False
</code></pre>
<blockquote>
<p>C:\Anaconda3\lib\site-packages\pandas\computation\expressions.py:181: UserWarning: evaluating in Python space because the '*' operator is not supported by numexpr for the bool dtype, use '&' instead
unsupported[op_str]))</p>
</blockquote>
<p>Another <code>numpy</code> solution with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.logical_and.html" rel="noreferrer"><code>np.logical_and</code></a>:</p>
<pre><code>print (pd.DataFrame(np.logical_and(df.values, x.values[:, None]),
index=df.index,
columns=df.columns))
x y z u
A False False True True
B False False False False
C True False False False
</code></pre>
|
python|pandas|dataframe|boolean|logical-operators
| 7
|
1,599
| 39,622,059
|
Slice numpy array to make it the desired shape
|
<p>Surprisingly, I couldn't find the answer anywhere on the internet. I have an n-dimensional numpy array, e.g. this 2-D np array:</p>
<pre><code>array([['34.5500000', '36.9000000', '37.3200000', '37.6700000'],
['41.7900000', '44.8000000', '48.2600000', '46.1800000'],
['36.1200000', '37.1500000', '39.3100000', '38.1000000'],
['82.1000000', '82.0900000', '76.0200000', '77.7000000'],
['48.0100000', '51.2500000', '51.1700000', '52.5000000', '55.2500000'],
['39.7500000', '39.5000000', '36.8100000', '37.2500000']], dtype=object)
</code></pre>
<p>As you can see, the 5th row consists of 5 elements and <strong>I want to make the 5th element disappear</strong>, using something like this:</p>
<pre><code>np.slice(MyArray, [6,4])
</code></pre>
<p>[6,4] is a shape. I really DO NOT want to iterate through dimensions and cut them. I tried the <code>resize</code> method, but it returns nothing!</p>
|
<p>This is not a 2d array. It is a 1d array whose elements are objects, in this case some 4-element lists and one 5-element one. And these lists contain strings.</p>
<pre><code>In [577]: np.array([['34.5500000', '36.9000000', '37.3200000', '37.6700000'],
...: ['41.7900000', '44.8000000', '48.2600000', '46.1800000'],
...: ['36.1200000', '37.1500000', '39.3100000', '38.1000000'],
...: ['82.1000000', '82.0900000', '76.0200000', '77.7000000'],
...: ['48.0100000', '51.2500000', '51.1700000', '52.5000000', '55.25
...: 00000'],
...: ['39.7500000', '39.5000000', '36.8100000', '37.2500000']], dtyp
...: e=object)
Out[577]:
array([['34.5500000', '36.9000000', '37.3200000', '37.6700000'],
['41.7900000', '44.8000000', '48.2600000', '46.1800000'],
['36.1200000', '37.1500000', '39.3100000', '38.1000000'],
['82.1000000', '82.0900000', '76.0200000', '77.7000000'],
['48.0100000', '51.2500000', '51.1700000', '52.5000000', '55.2500000'],
['39.7500000', '39.5000000', '36.8100000', '37.2500000']], dtype=object)
In [578]: MyArray=_
In [579]: MyArray.shape
Out[579]: (6,)
In [580]: MyArray[0]
Out[580]: ['34.5500000', '36.9000000', '37.3200000', '37.6700000']
In [581]: MyArray[5]
Out[581]: ['39.7500000', '39.5000000', '36.8100000', '37.2500000']
In [582]: MyArray[4]
Out[582]: ['48.0100000', '51.2500000', '51.1700000', '52.5000000', '55.2500000']
In [583]:
</code></pre>
<p>To <code>slice</code> this you need to iterate on the elements of the array</p>
<pre><code>In [584]: [d[:4] for d in MyArray]
Out[584]:
[['34.5500000', '36.9000000', '37.3200000', '37.6700000'],
['41.7900000', '44.8000000', '48.2600000', '46.1800000'],
['36.1200000', '37.1500000', '39.3100000', '38.1000000'],
['82.1000000', '82.0900000', '76.0200000', '77.7000000'],
['48.0100000', '51.2500000', '51.1700000', '52.5000000'],
['39.7500000', '39.5000000', '36.8100000', '37.2500000']]
</code></pre>
<p>Now with all the sublists the same length, <code>np.array</code> will create a 2d array:</p>
<pre><code>In [585]: np.array(_)
Out[585]:
array([['34.5500000', '36.9000000', '37.3200000', '37.6700000'],
['41.7900000', '44.8000000', '48.2600000', '46.1800000'],
['36.1200000', '37.1500000', '39.3100000', '38.1000000'],
['82.1000000', '82.0900000', '76.0200000', '77.7000000'],
['48.0100000', '51.2500000', '51.1700000', '52.5000000'],
['39.7500000', '39.5000000', '36.8100000', '37.2500000']],
dtype='<U10')
</code></pre>
<p>Still strings, though</p>
<pre><code>In [586]: np.array(__,dtype=float)
Out[586]:
array([[ 34.55, 36.9 , 37.32, 37.67],
[ 41.79, 44.8 , 48.26, 46.18],
[ 36.12, 37.15, 39.31, 38.1 ],
[ 82.1 , 82.09, 76.02, 77.7 ],
[ 48.01, 51.25, 51.17, 52.5 ],
[ 39.75, 39.5 , 36.81, 37.25]])
</code></pre>
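<p>The same conversion can be done in one step, combining the slicing and the dtype cast:</p>
<pre><code>np.array([row[:4] for row in MyArray], dtype=float)   # same result as Out[586] above
</code></pre>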
|
python|arrays|numpy
| 1
|