| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, length 15 to 150) | question (string, length 37 to 64.2k) | answer (string, length 37 to 44.1k) | tags (string, length 5 to 106) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
373,800
| 59,272,234
|
AttributeError: module 'tensorflow' has no attribute 'app': error
|
<p>I am currently following this <a href="https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10" rel="nofollow noreferrer">tutorial</a> at section 4. When I run the command to generate the TF Records, it returns a traceback error for the <code>generate_tfrecord.py</code> file. The first error was for:</p>
<pre><code>flags = tf.compat.v1.flags
flags.DEFINE_string('csv_input', '', 'Path to the CSV input')
flags.DEFINE_string('image_dir', '', 'Path to the image directory')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
FLAGS = flags.FLAGS
</code></pre>
<p>I simply fixed it by adding <code>.compat.v1</code>, because I am using TF 2.0.</p>
<p>The next error I got was on the last line:</p>
<pre><code>if __name__ == '__main__':
    tf.app.run()
</code></pre>
<p>It returned:</p>
<pre><code>Traceback (most recent call last):
File "generate_tfrecord.py", line 101, in <module>
tf.app.run()
AttributeError: module 'tensorflow' has no attribute 'app'
</code></pre>
<p>Any help would be greatly appreciated!
-Cheers</p>
|
<p>Or you can simply add
<code>import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()</code> instead of <code>import tensorflow as tf</code></p>
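<p>For example, the top and bottom of <code>generate_tfrecord.py</code> could then look like this (a minimal sketch of just the changed lines; the rest of the script stays as-is, and <code>tf.app.run()</code> now resolves through the v1 namespace, so the <code>AttributeError</code> goes away):</p>
<pre><code>import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

flags = tf.flags
flags.DEFINE_string('csv_input', '', 'Path to the CSV input')
flags.DEFINE_string('image_dir', '', 'Path to the image directory')
flags.DEFINE_string('output_path', '', 'Path to output TFRecord')
FLAGS = flags.FLAGS

if __name__ == '__main__':
    tf.app.run()
</code></pre>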
|
python|tensorflow|machine-learning|artificial-intelligence
| 1
|
373,801
| 59,305,940
|
@tf.function is slowing down training step
|
<p>I am using the following tf.function decorated training step:</p>
<pre><code>@tf.function
def train_step(inputs, labels):
    with tf.GradientTape(persistent=True) as tape:
        predictions = model([X, F], training=True)
        losses = [l_f(tf.expand_dims(labels[:, i], axis=-1), predictions[i])
                  for i, l_f in enumerate(loss_functions)]
    gradients = [tape.gradient(l, model.trainable_variables) for l in losses]
    for g in gradients:
        grads = [gg if gg is not None else tf.zeros_like(model.trainable_variables[i], dtype=tf.float32)
                 for i, gg in enumerate(g)]
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
    del tape
    return losses

def weighted_loss(weights):
    @tf.function
    def loss_func(labels, predictions):
        min_class_filter = tfk.backend.greater(labels, 0.5)
        y_min = tf.boolean_mask(labels, min_class_filter)
        y_max = tf.boolean_mask(labels, tf.math.logical_not(min_class_filter))
        y_pred_min = tf.boolean_mask(predictions, min_class_filter)
        y_pred_max = tf.boolean_mask(predictions, tf.math.logical_not(min_class_filter))
        loss_min_class = tfk.backend.mean(tfk.backend.binary_crossentropy(y_min, y_pred_min))
        loss_max_class = tfk.backend.mean(tfk.backend.binary_crossentropy(y_max, y_pred_max))
        loss_all = tfk.backend.mean(tfk.backend.binary_crossentropy(labels, predictions))
        return weights[0]*loss_min_class + weights[1]*loss_max_class + weights[2]*loss_all
    return loss_func

loss_functions = [weighted_loss(w) for w in target_weights]
</code></pre>
<p>It's a little quirky, but basically my network has multiple outputs, so there are cases where a gradient of None for certain weights is correct; I replace those gradients with zeros. I also calculate the loss at each of these outputs separately and propagate each of them at every step.</p>
<p>When I run this as written, it takes an extremely long time (10min+) to run a single training step, and I see the following message in the logs:</p>
<pre><code>E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] function_operator failed: Invalid argument: Input 0 of node model/LSTM_forward_0/zeros_like was passed int32 from model/LSTM_forward_0/StatefulPartitionedCall:9 incompatible with expected variant.
</code></pre>
<p>When I remove the @tf.function decorator, it runs in about 10% of the time, and I do not see this log warning. Is this warning a red herring or does it legitimately point to an issue created by adding @tf.function?</p>
<p>Additional Details:</p>
<ul>
<li>TF 2.0</li>
<li>GPU enabled and available </li>
<li>CUDA 10.1</li>
<li>GPU utilization is 0% in both cases, but that isn't caused by the data feed maxing out the CPU: when I generate training data outside of the training loop, reading from TFRecords with sufficient prefetch and limited augmentation is essentially instantaneous</li>
<li>dtype of inputs, labels, gradients and all model.trainable_variables are all tf.float32</li>
</ul>
|
<p>From what I read, <code>tf.function</code> should not include any assignment to the graph vars for it to run smoothly.</p>
<p>In a training step, you are changing the weights of the model, thus violating this. </p>
<p>I'm not sure this is the reason, but you can try to leave <code>tf.function</code> only in the loss function, but not in the training step. </p>
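<p>A minimal sketch of that suggestion, reusing the code from the question unchanged except for dropping the decorator (so <code>@tf.function</code> stays only on <code>loss_func</code>):</p>
<pre><code># no @tf.function here -- the step runs eagerly
def train_step(inputs, labels):
    with tf.GradientTape(persistent=True) as tape:
        predictions = model([X, F], training=True)
        losses = [l_f(tf.expand_dims(labels[:, i], axis=-1), predictions[i])
                  for i, l_f in enumerate(loss_functions)]
    gradients = [tape.gradient(l, model.trainable_variables) for l in losses]
    for g in gradients:
        grads = [gg if gg is not None else tf.zeros_like(model.trainable_variables[i], dtype=tf.float32)
                 for i, gg in enumerate(g)]
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
    del tape
    return losses
</code></pre>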
|
python|tensorflow|keras
| 0
|
373,802
| 59,269,250
|
TensorFlow 2.0, error in keras.applications (as_list() is not defined on an unknown TensorShape)
|
<p>There are several questions on SO with this error:</p>
<pre><code>ValueError: as_list() is not defined on an unknown TensorShape.
</code></pre>
<p>and a few relevant issues on git as well: <a href="https://github.com/tensorflow/tensorflow/issues/26305" rel="nofollow noreferrer">1</a>, <a href="https://github.com/tensorflow/tensorflow/issues/24824" rel="nofollow noreferrer">2</a>. </p>
<p>However, I haven't found a consistent answer to why this message appears, or a solution for my specific issue. This entire pipeline used to work with <code>tf2.0.0-alpha</code>; now, after installing with Conda (<code>conda install tensorflow=2.0 python=3.6</code>), the pipeline is broken.</p>
<p>In short terms, I use a generator to return image data to the <code>tf.data.Dataset.from_generator()</code> method. This works fine, until I try to call the <code>model.fit()</code> method, which results in the following error.</p>
<pre><code>Train for 750 steps, validate for 100 steps
Epoch 1/5
1/750 [..............................] - ETA: 10sTraceback (most recent call last):
File "/usr/local/anaconda3/envs/tf/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/tmo/Projects/casa/image/src/train.py", line 148, in <module>
Trainer().train_vgg16()
File "/Users/tmo/Projects/casa/image/src/train.py", line 142, in train_vgg16
validation_steps=100)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 728, in fit
use_multiprocessing=use_multiprocessing)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 324, in fit
total_epochs=epochs)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 123, in run_one_epoch
batch_outs = execution_function(iterator)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 86, in execution_function
distributed_function(input_fn))
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 457, in __call__
result = self._call(*args, **kwds)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 503, in _call
self._initialize(args, kwds, add_initializers_to=initializer_map)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 408, in _initialize
*args, **kwds))
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1848, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 2150, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 2041, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/framework/func_graph.py", line 915, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py", line 358, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 66, in distributed_function
model, input_iterator, mode)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 112, in _prepare_feed_values
inputs, targets, sample_weights = _get_input_from_iterator(inputs)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 149, in _get_input_from_iterator
distribution_strategy_context.get_strategy(), x, y, sample_weights)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/distribute/distributed_training_utils.py", line 308, in validate_distributed_dataset_inputs
x_values_list = validate_per_replica_inputs(distribution_strategy, x)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/distribute/distributed_training_utils.py", line 356, in validate_per_replica_inputs
validate_all_tensor_shapes(x, x_values)
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/distribute/distributed_training_utils.py", line 373, in validate_all_tensor_shapes
x_shape = x_values[0].shape.as_list()
File "/usr/local/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_shape.py", line 1171, in as_list
raise ValueError("as_list() is not defined on an unknown TensorShape.")
ValueError: as_list() is not defined on an unknown TensorShape.
</code></pre>
<p>This is the code that loads and reshapes each image:</p>
<pre><code>def preprocess_image(self, image):
    """
    """
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, self.hw)
    image /= 255.0  # normalize to [0,1] range
    image.set_shape([224, 224, 3])
    return image
</code></pre>
<p>which is applied to a generator (e.g. <code>training_generator</code>) that runs through a list of images and yields the preprocessed result:</p>
<pre><code>def make_ts_dataset(self):
    AUTOTUNE = tf.data.experimental.AUTOTUNE
    BATCH_SIZE = 32
    image_count_training = len(self.X_train)
    image_count_validation = len(self.X_test)

    training_generator = GetTensor(hw=self.hw, train=True).make_tensor
    training_image_ds = tf.data.Dataset.from_generator(training_generator, tf.float32, [224, 224, 3])
    training_price_ds = tf.data.Dataset.from_tensor_slices(tf.cast(self.y_train, tf.float32))

    validation_generator = GetTensor(hw=self.hw, test=True).make_tensor
    validation_image_ds = tf.data.Dataset.from_generator(validation_generator, tf.float32, [224, 224, 3])
    validation_price_ds = tf.data.Dataset.from_tensor_slices(tf.cast(self.y_test, tf.float32))

    training_ds = tf.data.Dataset.zip((training_image_ds, training_price_ds))
    validation_ds = tf.data.Dataset.zip((validation_image_ds, validation_price_ds))

    training_ds = training_ds.shuffle(buffer_size=int(round(image_count_training)))
    training_ds = training_ds.repeat()
    training_ds = training_ds.batch(BATCH_SIZE)
    training_ds = training_ds.prefetch(buffer_size=AUTOTUNE)

    validation_ds = validation_ds.shuffle(buffer_size=int(round(image_count_validation)))
    validation_ds = validation_ds.repeat()
    validation_ds = validation_ds.batch(BATCH_SIZE)
    validation_ds = validation_ds.prefetch(buffer_size=AUTOTUNE)

    for image_batch, label_batch in training_ds.take(1):
        print(label_batch.shape, image_batch.shape)

    return training_ds, validation_ds
</code></pre>
<p>At all points, the shapes look correct, i.e. <code>(32,) (32, 224, 224, 3)</code></p>
<p>I am initialising the weights with trained weights from <code>VGG16</code></p>
<pre><code>def train_vgg16(self):
    training_ds, validation_ds = Trainer.make_ts_dataset(self)

    base_vgg = keras.applications.vgg16.VGG16(include_top=False,
                                              weights='imagenet',
                                              input_shape=(224, 224, 3))
    base_vgg.trainable = False
    print(base_vgg.summary())

    vgg_with_base = keras.Sequential([
        base_vgg,
        tf.keras.layers.GlobalMaxPooling2D(),
        tf.keras.layers.Dense(1024, activation=tf.nn.relu),
        tf.keras.layers.Dense(1024, activation=tf.nn.relu),
        tf.keras.layers.Dense(512, activation=tf.nn.relu),
        tf.keras.layers.Dense(1)])
    print(base_vgg.summary())

    vgg_with_base.compile(optimizer='adam',
                          loss='mse',
                          metrics=['mape'])

    vgg_with_base.fit(training_ds,
                      epochs=5,
                      validation_data=validation_ds,
                      steps_per_epoch=750,
                      validation_steps=100)
</code></pre>
<p>However, training never initiates because <code>x_shape = x_values[0].shape.as_list()</code> fails. </p>
<p><strong>Edit</strong> (11/12/19):</p>
<p>After some troubleshooting, I found that the error is initiated in the <code>keras.applications</code> layer. </p>
<pre><code>base_vgg = keras.applications.vgg16.VGG16(include_top=False,
                                          weights='imagenet',
                                          input_shape=(224, 224, 3))
</code></pre>
<p>Removing <code>base_vgg</code> from the model and initialising training works fine. </p>
|
<p>I fixed this error by applying <code>tf.cast(label, tf.float32)</code> after importing the masks.</p>
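<p>As a sketch of where such a cast could go in a pipeline like the one in the question (illustrative only; <code>cast_pair</code> is a hypothetical helper, and the dataset names follow the question's code):</p>
<pre><code>def cast_pair(image, label):
    # force both tensors to a known dtype before batching
    return tf.cast(image, tf.float32), tf.cast(label, tf.float32)

training_ds = training_ds.map(cast_pair)
validation_ds = validation_ds.map(cast_pair)
</code></pre>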
|
tensorflow|keras|deep-learning|tensorflow2.0|tf.keras
| 0
|
373,803
| 59,242,835
|
CTC: blank must be in label range
|
<h1>summary</h1>
<p>I'm adding alphabets to captcha recognition, but PyTorch's CTC does not seem to work properly when alphabets are added.</p>
<h1>What I've tried</h1>
<p>At first, I modified <code>BLANK_LABEL</code> to 62 since there are 62 labels (0-9, a-z, A-Z), but it gives me the runtime error <code>blank must be in label range</code>. I also tried <code>BLANK_LABEL=0</code> and then assigning 1~63 as non-blank labels, but it outputs NaN as loss.</p>
<h1>The code</h1>
<p>This is the colab link for the current version of my code: <a href="https://drive.google.com/file/d/1JeiH8hU-C1yBPxrExzmwRaT3EzC9HDdN/view?usp=sharing" rel="nofollow noreferrer">here</a></p>
<p>below are just core parts of the code.</p>
<p>Constants:</p>
<pre class="lang-py prettyprint-override"><code>DATASET_PATH = "/home/ik1ne/Downloads/numbers"
MODEL_PATH = "/home/ik1ne/Downloads"
BATCH_SIZE = 50
TRAIN_BATCHES = 180
TEST_BATCHES = 20
TOTAL_BATCHES = TRAIN_BATCHES+TEST_BATCHES
TOTAL_DATASET = BATCH_SIZE*TOTAL_BATCHES
BLANK_LABEL = 63
</code></pre>
<p>dataset generation:</p>
<pre class="lang-py prettyprint-override"><code>!pip install captcha
from captcha.image import ImageCaptcha

import itertools
import os
import random
import string

if not os.path.exists(DATASET_PATH):
    os.makedirs(DATASET_PATH)

characters = "0123456789" + string.ascii_lowercase + string.ascii_uppercase

while len(list(Path(DATASET_PATH).glob('*'))) < TOTAL_BATCHES:
    captcha_str = "".join(random.choice(characters) for x in range(6))
    if captcha_str in list(Path(DATASET_PATH).glob('*')):
        continue
    ImageCaptcha().write(captcha_str, f"{DATASET_PATH}/{captcha_str}.png")
</code></pre>
<p>dataset:</p>
<pre class="lang-py prettyprint-override"><code>def convert_strseq_to_numseq(s):
    for c in s:
        if c >= '0' and c <= '9':
            return int(c)
        elif c >= 'a' and c <= 'z':
            return ord(c) - ord('a') + 10
        else:
            return ord(c) - ord('A') + 36

class CaptchaDataset(Dataset):
    """CAPTCHA dataset."""

    def __init__(self, root_dir, transform=None):
        self.root_dir = root_dir
        self.image_paths = list(Path(root_dir).glob('*'))
        self.transform = transform

    def __getitem__(self, index):
        image = Image.open(self.image_paths[index])
        if self.transform:
            image = self.transform(image)
        label_sequence = [convert_strseq_to_numseq(c) for c in self.image_paths[index].stem]
        return (image, torch.tensor(label_sequence))

    def __len__(self):
        return len(self.image_paths)
</code></pre>
<p>model:</p>
<pre class="lang-py prettyprint-override"><code>class StackedLSTM(nn.Module):
    def __init__(self, input_size=60, output_size=11, hidden_size=512, num_layers=2):
        super(StackedLSTM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.dropout = nn.Dropout()
        self.fc = nn.Linear(hidden_size, output_size)
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers)

    def forward(self, inputs, hidden):
        batch_size, seq_len, input_size = inputs.shape
        outputs, hidden = self.lstm(inputs, hidden)
        outputs = self.dropout(outputs)
        outputs = torch.stack([self.fc(outputs[i]) for i in range(width)])
        outputs = F.log_softmax(outputs, dim=2)
        return outputs, hidden

    def init_hidden(self, batch_size):
        weight = next(self.parameters()).data
        return (weight.new(self.num_layers, batch_size, self.hidden_size).zero_(),
                weight.new(self.num_layers, batch_size, self.hidden_size).zero_())

net = StackedLSTM().to(device)
</code></pre>
<p>training:</p>
<pre class="lang-py prettyprint-override"><code>net.train()  # set network to training phase
epochs = 30

# for each pass of the training dataset
for epoch in range(epochs):
    train_loss, train_correct, train_total = 0, 0, 0
    h = net.init_hidden(BATCH_SIZE)

    # for each batch of training examples
    for batch_index, (inputs, targets) in enumerate(train_dataloader):
        inputs, targets = inputs.to(device), targets.to(device)
        h = tuple([each.data for each in h])

        BATCH_SIZE, channels, height, width = inputs.shape
        # reshape inputs: NxCxHxW -> WxNx(HxC)
        inputs = (inputs
                  .permute(3, 0, 2, 1)
                  .contiguous()
                  .view((width, BATCH_SIZE, -1)))

        optimizer.zero_grad()  # zero the parameter gradients
        outputs, h = net(inputs, h)  # forward pass

        # compare output with ground truth
        input_lengths = torch.IntTensor(BATCH_SIZE).fill_(width)
        target_lengths = torch.IntTensor([len(t) for t in targets])
        loss = criterion(outputs, targets, input_lengths, target_lengths)

        loss.backward()  # backpropagation
        nn.utils.clip_grad_norm_(net.parameters(), 10)  # clip gradients
        optimizer.step()  # update network weights

        # record statistics
        prob, max_index = torch.max(outputs, dim=2)
        train_loss += loss.item()
        train_total += len(targets)

        for i in range(BATCH_SIZE):
            raw_pred = list(max_index[:, i].cpu().numpy())
            pred = [c for c, _ in groupby(raw_pred) if c != BLANK_LABEL]
            target = list(targets[i].cpu().numpy())
            if pred == target:
                train_correct += 1

        # print statistics every 10 batches
        if (batch_index + 1) % 10 == 0:
            print(f'Epoch {epoch + 1}/{epochs}, ' +
                  f'Batch {batch_index + 1}/{len(train_dataloader)}, ' +
                  f'Train Loss: {(train_loss/1):.5f}, ' +
                  f'Train Accuracy: {(train_correct/train_total):.5f}')
            train_loss, train_correct, train_total = 0, 0, 0
</code></pre>
|
<p>This error occurs when the index of the blank is larger than the total number of classes, which equals <code>number of chars + blank</code>. Moreover, the index starts from <code>0</code>, not <code>1</code>, so if you have <code>62</code> characters in total, their indices should be <code>0-61</code> and the index of the blank should be <code>62</code> instead of <code>63</code>. (Or you can set the blank as <code>0</code> and the other characters to <code>1-62</code>.)</p>
<p>You should also check the shape of the output tensor: it should have shape <code>[T, B, C]</code>, where <code>T</code> is the sequence length, <code>B</code> is the batch size, and <code>C</code> is the number of classes. Remember to include the blank in the class count, or you will hit the same problem.</p>
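<p>A minimal sketch of a consistent setup, assuming 62 characters (digits plus lower- and upper-case letters) and reusing the <code>StackedLSTM</code> class and <code>device</code> from the question:</p>
<pre><code>import torch.nn as nn

NUM_CHARS = 62                # character classes use indices 0-61
BLANK_LABEL = NUM_CHARS       # blank gets index 62

criterion = nn.CTCLoss(blank=BLANK_LABEL)

# the per-timestep log-probabilities must then cover C = 63 classes,
# i.e. an output of shape [T, B, 63]
net = StackedLSTM(output_size=NUM_CHARS + 1).to(device)
</code></pre>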
|
python|pytorch|ctc
| 1
|
373,804
| 59,111,486
|
Plot groupby of groupby pandas
|
<p>The data is a time series, with many member ids associated with many categories: </p>
<pre><code>data_df = pd.DataFrame({'Date': ['2018-09-14 00:00:22',
'2018-09-14 00:01:46',
'2018-09-14 00:01:56',
'2018-09-14 00:01:57',
'2018-09-14 00:01:58',
'2018-09-14 00:02:05'],
'category': [1, 1, 1, 2, 2, 2],
'member': ['bob', 'joe', 'jim', 'sally', 'jane', 'doe'],
'data': ['23', '20', '20', '11', '16', '62']})
</code></pre>
<p>There are about 50 categories with 30 members, each with around 1000 datapoints. </p>
<p>I am trying to make one plot per category. </p>
<p>By subsetting each category then plotting via:</p>
<pre><code>fig, ax = plt.subplots(figsize=(8,6))
for i, g in category.groupby(['member']):
    g.plot(y='data', ax=ax, label=str(i))
plt.show()
</code></pre>
<p>This works fine for a single category; however, when I try to use a for loop to repeat this for each category, it does not work:</p>
<pre><code>tests = pd.DataFrame()
for category in categories:
    tests = df.loc[df['category'] == category]
    for test in tests:
        fig, ax = plt.subplots(figsize=(8,6))
        for i, g in category.groupby(['member']):
            g.plot(y='data', ax=ax, label=str(i))
        plt.show()
</code></pre>
<p>yields an "AttributeError: 'str' object has no attribute 'groupby'" error.</p>
<p>What I would like is a loop that spits out one graph per category, with all the members' data plotted on each graph.</p>
|
<p>Creating your dataframe</p>
<pre><code>import pandas as pd
data_df = pd.DataFrame({'Date': ['2018-09-14 00:00:22',
'2018-09-14 00:01:46',
'2018-09-14 00:01:56',
'2018-09-14 00:01:57',
'2018-09-14 00:01:58',
'2018-09-14 00:02:05'],
'category': [1, 1, 1, 2, 2, 2],
'member': ['bob', 'joe', 'jim', 'sally', 'jane', 'doe'],
'data': ['23', '20', '20', '11', '16', '62']})
</code></pre>
<p>then <strong>[EDIT after comments]</strong></p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np

subplots_n = np.unique(data_df['category']).size
subplots_x = np.round(np.sqrt(subplots_n)).astype(int)
subplots_y = np.ceil(np.sqrt(subplots_n)).astype(int)

for i, category in enumerate(data_df.groupby('category')):
    category_df = pd.DataFrame(category[1])
    x = [str(x) for x in category_df['member']]
    y = [float(x) for x in category_df['data']]
    plt.subplot(subplots_x, subplots_y, i+1)
    plt.plot(x, y)
    plt.title("Category {}".format(category_df['category'].values[0]))

plt.tight_layout()
plt.show()
</code></pre>
<p>yields to</p>
<p><a href="https://i.stack.imgur.com/lQ1Zb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lQ1Zb.png" alt="pandas groupby subplots"></a></p>
<p>Please note that this nicely takes care also of bigger groups like</p>
<pre class="lang-py prettyprint-override"><code>data_df2 = pd.DataFrame({'category': [1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5, 5, 5],
'member': ['bob', 'joe', 'jim', 'sally', 'jane', 'doe', 'ric', 'mat', 'pip', 'zoe', 'qui', 'quo', 'qua'],
'data': ['23', '20', '20', '11', '16', '62', '34', '27', '12', '7', '9', '13', '7']})
</code></pre>
<p><a href="https://i.stack.imgur.com/rwoHb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rwoHb.png" alt="pandas groupby subplots"></a></p>
|
python|pandas|for-loop|matplotlib|pandas-groupby
| 1
|
373,805
| 59,080,681
|
Forward Propagate RNN using Pytorch
|
<p>I am trying to create an RNN forward pass method that can take a variable input, hidden, and output size and create the rnn cells needed. To me, it seems like I am passing the correct variables to self.rnn_cell -- the input values of x and the previous hidden layer. However, the error I receive is included below. </p>
<p>I have also tried using x[i] and x[:,i,i] (as suggested by my professor) to no avail. I am confused and just looking for guidance as to whether or not I am doing the right thing here. My prof suggested that since I keep receiving errors, I should restart the kernel in jupyter notebook and rerun code. I have, and I receive the same errors...</p>
<p>Please let me know if you need additional context. </p>
<pre><code>class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.rnn_cell = nn.RNNCell(input_size, hidden_size)
        self.fc = nn.Linear(hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, x):
        """
        x: size [seq_length, 1, input_size]
        """
        h = torch.zeros(x.size(1), self.hidden_size)
        for i in range(x.size(0)):
            ### START YOUR CODE ###
            h = self.rnn_cell(x[:,:,i], h)
            ### END YOUR CODE ###

        ### START YOUR CODE ###
        # Hint: first call fc, then call softmax
        out = self.softmax(self.fc(self.hidden_size, h.size(0)))
        ### END YOUR CODE ###
        return out

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
</code></pre>
|
<p>I am not an expert at RNNs but giving it a try.</p>
<pre><code>class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.rnn_cell = nn.RNN(input_size, hidden_size)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        """
        x: size [seq_length, 1, input_size]
        """
        h = torch.zeros(1, x.size(1), self.hidden_size)  # (num_layers, batch, hidden_size)
        ### START YOUR CODE ###
        out, hidden = self.rnn_cell(x, h)
        ### END YOUR CODE ###

        ### START YOUR CODE ###
        # Hint: first call fc, then call softmax
        out = out.contiguous().view(-1, self.hidden_size)  # reshape the output to fit the FC layer
        out = self.fc(out)
        out = F.softmax(out, dim=1)
        ### END YOUR CODE ###
        return out
</code></pre>
<p>Please try running this and let me know in case of errors or any doubts (I cannot ask you for details since I can't comment right now).</p>
<p>If you got any idea from my answer, do support.</p>
|
python|machine-learning|pytorch|recurrent-neural-network
| 1
|
373,806
| 59,222,218
|
Does the backend matter for savefig
|
<p>I have a script which plots some pandas data, and then either shows the plot interactively with <code>plt.show()</code>, or saves it to a file with <code>plt.savefig(args.out)</code>.</p>
<pre><code>import matplotlib.pyplot as plt

# set up the dataframe here
ax = df.plot.line(x=0, title=args.title, figsize=(12,8), grid=True, **kwargs)

if args.out:
    vprint("Saving figure to ", args.out, "...")
    plt.savefig(args.out)
else:
    vprint("Showing interactive plot...")
    plt.show()
</code></pre>
<p>The question is, does the default matplotlib backend matter for the scenario where I save to a file with <code>savefig</code>? It definitely matters in the other case since it's used to display the interactive plot, but if I call <code>savefig</code> is another backend used entirely?</p>
|
<p>When showing a figure, the backend obviously matters, because it provides two things: </p>
<ul>
<li>The renderer to draw the image</li>
<li>The GUI within which the image is shown.</li>
</ul>
<p>When saving a figure, only the former matters. However, matplotlib provides a multitude of export formats. In the end, the chosen backend determines what to do when a figure is saved and, in most cases, will use one of the existing non-interactive backends to produce the output file.</p>
<p>Some examples:</p>
<p><code>TkAgg</code> will use the tkinter GUI to show a figure. For saving a png figure, it will fall back to the basic <code>Agg</code> backend to produce the png file. For saving an svg file, it will fall back to the <code>svg</code> backend; for saving a <code>pdf</code>, it will fall back to the <code>pdf</code> backend, etc.</p>
<p><code>TkCairo</code> will use the tkinter GUI to show a figure. For saving a png figure, it will fall back to the basic <code>Cairo</code> backend to produce the png file. For the rest, same as above.</p>
<p><code>Qt5Agg</code> will use the PyQt GUI to show a figure. For png it will fall back to <code>Agg</code>; for the others, same as above.</p>
<p>Similar for other backends.</p>
|
python-3.x|pandas|matplotlib
| 2
|
373,807
| 59,201,907
|
Overfitting on image classification
|
<p>I'm working on image classification problem of sign language digits dataset with 10 categories (numbers from 0 to 10). My models are highly overfitting for some reason, even though I tried simple ones (like 1 Conv Layer), classical ResNet50 and even state-of-art NASNetMobile.</p>
<p>Images are colored and 100x100 in size. I tried tuning the learning rate, but it doesn't help much, although decreasing the batch size results in an earlier increase of val accuracy.</p>
<p>I applied augmentation to the images and it didn't help either: my train accuracy can hit 1.0 while val accuracy can't get higher than 0.6.</p>
<p>I looked at the data and it seems to load just fine. Distribution of classes in validation set is fair too. I have 2062 images in total.</p>
<p>When I change my loss to <code>binary_crossentropy</code> it seems to give better results for both train accuracy and val accuracy, but that doesn't seem to be right.</p>
<p>I don't understand what's wrong, could you please help me find out what I'm missing? Thank you.</p>
<p>Here's a link to my notebook: <a href="https://colab.research.google.com/drive/1ujAU0Mj-nFlHbxmdNOM7RfQSz4FYsLth" rel="nofollow noreferrer">click</a></p>
|
<p>This is going to be a very interesting answer. There's so many things you need to pay attention to when looking at a problem. Fortunately, there's a methodology (might be vague, but still a methodology).</p>
<p><strong>TLDR</strong>: Start your journey at the data, not the model.</p>
<h2>Analysing the data</h2>
<p>First let's look at your data? </p>
<p><a href="https://i.stack.imgur.com/DKhFj.png" rel="noreferrer"><img src="https://i.stack.imgur.com/DKhFj.png" alt="enter image description here"></a></p>
<p>You have 10 classes. Each image is <code>(100,100)</code>. And there are only 2062 images. There's your first problem. There's very little data compared to a standard image classification problem. Therefore, you need to make sure that your data is easy to learn from without sacrificing generalizability of the data (i.e. so that it can do well on the validation/test sets). How do we do that?</p>
<ul>
<li>Understand your data</li>
<li>Normalize your data</li>
<li>Reduce the number of features</li>
</ul>
<p>Understanding data is a recurring theme in the other sections. So I won't have a separate section for that.</p>
<h3>Normalizing your data</h3>
<p>Here's the first problem. You are rescaling your data to be between [0,1]. But you can do so much better by standardizing your data (i.e. <code>(x - mean(x))/std(x)</code>). Here's how you do that.</p>
<pre><code>def create_datagen():
    return tf.keras.preprocessing.image.ImageDataGenerator(
        samplewise_center=True,
        samplewise_std_normalization=True,
        horizontal_flip=False,
        rotation_range=30,
        shear_range=0.2,
        validation_split=VALIDATION_SPLIT)
</code></pre>
<p>Another thing you might notice is I've set <code>horizontal_flip=False</code>. This brings me back to the first point. You have to make a judgement call to see what augmentation techniques might make sense.</p>
<ul>
<li>Brightness/ Shear - Seems okay</li>
<li>Cropping/resizing - Seems okay</li>
<li>Horizontal/Vertical flip - This is not something I'd try at the beginning. If someone shows you a hand sign in two different horizontal orientations, you might have trouble understanding some signs.</li>
</ul>
<h3>Reducing the number of features</h3>
<p>This is very important. You don't have that much data, and you want to make sure you get the most out of it. The data has the original size of <code>(100,100)</code>. You can do well with a significantly smaller image (I have tried <code>(64,64)</code>, but you might be able to go even lower). So please <strong>reduce the size of the images whenever you can</strong>.</p>
<p>Next, it doesn't matter if you see a sign in RGB or grayscale; you can still recognize the sign. But grayscale cuts the amount of input data per image by 66% compared to RGB. So <strong>use fewer color channels whenever you can</strong>.</p>
<p>This is how you do these,</p>
<pre><code>def create_flow(datagen, subset, directory, hflip=False):
    return datagen.flow_from_directory(
        directory=directory,
        target_size=(64, 64),
        color_mode='grayscale',
        batch_size=BATCH_SIZE,
        class_mode='categorical',
        subset=subset,
        shuffle=True
    )
</code></pre>
<p>So again to reiterate, you need to spend time understanding data before you go ahead with a model. This is a bare minimal list for this problem. Feel free to try other things as well.</p>
<h2>Creating the model</h2>
<p>So, here's the changes I did to the model.</p>
<ul>
<li>Added <code>padding='same'</code> to all the convolutional layers. If you don't, it defaults to <code>padding='valid'</code>, which results in an automatic dimensionality reduction. This means the deeper you go, the smaller your output is going to be. You can see that the model you had ends with a final convolution output of size <code>(3,3)</code>, which is probably too small for the dense layer to make sense of. So pay attention to what the dense layer is getting.</li>
<li>Reduced the kernel size - the kernel size is directly related to the number of parameters, so to reduce the chances of overfitting to your small dataset, go with a smaller kernel size whenever possible.</li>
<li>Removed dropout from convolutional layers - This is something I did as a precaution. Personally, I don't know if dropout works with convolution layers as well as with Dense layers. So I don't want to have an unknown complexity in my model at the beginning.</li>
<li>Removed the last convolutional layer - reducing the parameters in the model to reduce the chances of overfitting.</li>
</ul>
<h3>About the optimizer</h3>
<p>After you do these changes, you don't need to change the learning rate of <code>Adam</code>. <code>Adam</code> works pretty well without any tuning. So that's a worry you can leave for later.</p>
<h3>About the batch size</h3>
<p>You were using a batch size of 8, which is not even big enough to contain a single image for each class in a batch. Try to set this to a higher value; I set it to <code>32</code>. Whenever you can, try to increase the batch size, maybe not to very large values, but up to around 128 should be fine for this problem.</p>
<pre><code>model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Convolution2D(8, (5, 5), activation='relu', input_shape=(64, 64, 1), padding='same'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Convolution2D(16, (3, 3), activation='relu', padding='same'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Convolution2D(32, (3, 3), activation='relu', padding='same'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
model.summary()
</code></pre>
<h2>Final result</h2>
<p>By doing some pre-meditation before jumping to making a model I achieved significantly better results than what you have.</p>
<h3>Your result</h3>
<pre><code>Epoch 1/10
233/233 [==============================] - 37s 159ms/step - loss: 2.6027 - categorical_accuracy: 0.2218 - val_loss: 2.7203 - val_categorical_accuracy: 0.1000
Epoch 2/10
233/233 [==============================] - 37s 159ms/step - loss: 1.8627 - categorical_accuracy: 0.3711 - val_loss: 2.8415 - val_categorical_accuracy: 0.1450
Epoch 3/10
233/233 [==============================] - 37s 159ms/step - loss: 1.5608 - categorical_accuracy: 0.4689 - val_loss: 2.7879 - val_categorical_accuracy: 0.1750
Epoch 4/10
233/233 [==============================] - 37s 158ms/step - loss: 1.3778 - categorical_accuracy: 0.5145 - val_loss: 2.9411 - val_categorical_accuracy: 0.1450
Epoch 5/10
233/233 [==============================] - 38s 161ms/step - loss: 1.1507 - categorical_accuracy: 0.6090 - val_loss: 2.5648 - val_categorical_accuracy: 0.1650
Epoch 6/10
233/233 [==============================] - 38s 163ms/step - loss: 1.1377 - categorical_accuracy: 0.6042 - val_loss: 2.5416 - val_categorical_accuracy: 0.1850
Epoch 7/10
233/233 [==============================] - 37s 160ms/step - loss: 1.0224 - categorical_accuracy: 0.6472 - val_loss: 2.3338 - val_categorical_accuracy: 0.2450
Epoch 8/10
233/233 [==============================] - 37s 158ms/step - loss: 0.9198 - categorical_accuracy: 0.6788 - val_loss: 2.2660 - val_categorical_accuracy: 0.2450
Epoch 9/10
233/233 [==============================] - 37s 160ms/step - loss: 0.8494 - categorical_accuracy: 0.7111 - val_loss: 2.4924 - val_categorical_accuracy: 0.2150
Epoch 10/10
233/233 [==============================] - 37s 161ms/step - loss: 0.7699 - categorical_accuracy: 0.7417 - val_loss: 1.9339 - val_categorical_accuracy: 0.3450
</code></pre>
<h3>My result</h3>
<pre><code>Epoch 1/10
59/59 [==============================] - 14s 240ms/step - loss: 1.8182 - categorical_accuracy: 0.3625 - val_loss: 2.1800 - val_categorical_accuracy: 0.1600
Epoch 2/10
59/59 [==============================] - 13s 228ms/step - loss: 1.1982 - categorical_accuracy: 0.5843 - val_loss: 2.2777 - val_categorical_accuracy: 0.1350
Epoch 3/10
59/59 [==============================] - 13s 228ms/step - loss: 0.9460 - categorical_accuracy: 0.6676 - val_loss: 2.5666 - val_categorical_accuracy: 0.1400
Epoch 4/10
59/59 [==============================] - 13s 226ms/step - loss: 0.7066 - categorical_accuracy: 0.7465 - val_loss: 2.3700 - val_categorical_accuracy: 0.2500
Epoch 5/10
59/59 [==============================] - 13s 227ms/step - loss: 0.5875 - categorical_accuracy: 0.8008 - val_loss: 2.0166 - val_categorical_accuracy: 0.3150
Epoch 6/10
59/59 [==============================] - 13s 228ms/step - loss: 0.4681 - categorical_accuracy: 0.8416 - val_loss: 1.4043 - val_categorical_accuracy: 0.4400
Epoch 7/10
59/59 [==============================] - 13s 228ms/step - loss: 0.4367 - categorical_accuracy: 0.8518 - val_loss: 1.7028 - val_categorical_accuracy: 0.4300
Epoch 8/10
59/59 [==============================] - 13s 226ms/step - loss: 0.3823 - categorical_accuracy: 0.8711 - val_loss: 1.3747 - val_categorical_accuracy: 0.5600
Epoch 9/10
59/59 [==============================] - 13s 227ms/step - loss: 0.3802 - categorical_accuracy: 0.8663 - val_loss: 1.0967 - val_categorical_accuracy: 0.6000
Epoch 10/10
59/59 [==============================] - 13s 227ms/step - loss: 0.3585 - categorical_accuracy: 0.8818 - val_loss: 1.0768 - val_categorical_accuracy: 0.5950
</code></pre>
<p><strong>Note</strong>: This is a minimal effort I put. You can increase your accuracy further by augmenting data, optimizing the model structure, choosing the right batch size etc.</p>
|
python|tensorflow|keras|deep-learning|computer-vision
| 8
|
373,808
| 59,156,753
|
How to extract features from existing data based on time and date
|
<p>I am wondering how I can extract the total transaction count per user, as well as the monthly, weekly, daily, hourly, and ten-minute total transaction counts, and the average over the six periods above, using the following formula. The file is attached as <a href="https://drive.google.com/open?id=1nW_sLFj2sSspV2fADo-7PKjK-xMiaZ8E" rel="nofollow noreferrer">csv</a> as shown below:</p>
<p>(total number of transactions) / (timestamp of latest transaction - timestamp of earliest transaction)</p>
<pre><code> user date value
29 2011-06-13 20:34:38 77609248
29 2011-06-12 14:22:36 184677003
29 2011-06-12 14:22:36 2397489
30 2013-11-19 08:35:43 2790480
30 2013-11-07 05:45:14 46873751
30 2013-11-07 05:45:14 100000000
37 2011-11-28 05:46:50 1000000
37 2011-11-03 08:17:27 1000000
37 2011-10-31 00:57:44 10000000
38 2013-11-26 03:49:44 1000031
38 2013-11-26 03:49:44 1000021
38 2013-11-26 03:49:44 1000012
39 2013-06-09 05:49:04 176875806
39 2013-03-22 18:25:34 8000
40 2013-11-08 13:53:44 1068051
40 2013-11-07 13:41:01 1014938
40 2013-09-06 17:23:35 1024979
</code></pre>
<p>I tried to write this code</p>
<pre><code>df['date'] = pd.to_datetime(df["date"].dt.strftime('%Y-%m-%d %H:%M:%S'))
d = {'user': ['size'], 'value': ['mean', 'sum', 'min', 'max'], 'date': ['min', 'max', 'count'], 'date': lambda x: x.max() - x.min()}
res = df.groupby('user').agg(d)
res.to_csv(r'out2.csv', sep='\t', )
</code></pre>
<p>But the output of the date column is not as expected; I got this output:</p>
<pre><code>user count value_mean total_value value_min value_max first_last_date
29 3 88227913.33333333 264683740 2397489 184677003 1 days 06:12:02.000000000
30 3 49888077.0 149664231 2790480 100000000 12 days 02:50:29.000000000
37 3 4000000.0 12000000 1000000 10000000 28 days 04:49:06.000000000
38 3 1000021.3333333334 3000064 1000012 1000031 0 days 00:00:00.000000000
39 2 88441903.0 176883806 8000 176875806 78 days 11:23:30.000000000
40 3 1035989.3333333334 3107968 1014938 1068051 62 days 20:30:09.000000000
</code></pre>
|
<p>You can use this:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
asd = df.set_index('date').groupby('user').resample('1M')
df_result = pd.DataFrame({'mean':asd.mean()['value'], 'max':asd.max()['value'], 'sum':asd.sum()['value']})
df_result
mean max sum
user date
29 2011-06-30 8.822791e+07 184677003.0 264683740
30 2013-11-30 4.988808e+07 100000000.0 149664231
37 2011-10-31 1.000000e+07 10000000.0 10000000
2011-11-30 1.000000e+06 1000000.0 2000000
38 2013-11-30 1.000021e+06 1000031.0 3000064
39 2013-03-31 8.000000e+03 8000.0 8000
2013-04-30 NaN NaN 0
2013-05-31 NaN NaN 0
2013-06-30 1.768758e+08 176875806.0 176875806
40 2013-09-30 1.024979e+06 1024979.0 1024979
2013-10-31 NaN NaN 0
2013-11-30 1.041494e+06 1068051.0 2082989
</code></pre>
<p>Sadly, you can't use aggregate with different functions on the groupby-resample to get all the results at once.</p>
|
python|pandas
| 0
|
373,809
| 59,334,422
|
Creating new column based on condition and extracting respective value from other column. Pandas Dataframe
|
<p>I am relatively new to this field and am working with a data set to find meaningful insights into customer behavior. My <code>dataset</code> looks like:</p>
<pre><code>   customerId  week  first_trip_week  rides
0         156    44               36      2
1         164    44               38      6
2         224    42               36      5
3         224    43               36      4
4         224    44               36      5
</code></pre>
<p>What I want to do is create new columns <code>week 44</code>, <code>week 43</code>, and <code>week 42</code>, and fill them, for each customer id, with the values from the <code>rides</code> column. This is in the hope that I can eventually also make <code>customerId</code> my index and get a per-week breakdown. Help would be greatly appreciated!</p>
<p>Thank you!! </p>
|
<p>If I'm understanding you correctly, you want to create new columns in the same dataframe for weeks 44, 43, and 42 with the correct values for each customerId and NaN for those that don't have it. If your original dataframe has all the user data, I would first filter for dataframes that have the correct week number</p>
<pre><code>week42DF = dataset.loc[dataset['week']==42,['customerId','rides']].rename(columns={'rides':'week42Rides'})
</code></pre>
<p>getting only the rides and customerId and renaming the former here to make things a little easier for us. Then left join the old dataframe and the new one on customerId</p>
<pre><code>dataset = pd.merge(dataset,week42DF,how='left',on='customerId')
</code></pre>
<p>The users that are missing from week42DF will have NaN in the <code>week42Rides</code> column in the merged dataset, which you can then replace with zeros using the <code>.fillna(0)</code> method. Do this for each week you require, as in the sketch below.</p>
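<p>A sketch of how that loop could look, assuming each customer has at most one row per week (the <code>f'week{wk}Rides'</code> column names are illustrative):</p>
<pre><code>for wk in [42, 43, 44]:
    wk_df = dataset.loc[dataset['week'] == wk, ['customerId', 'rides']] \
                   .rename(columns={'rides': f'week{wk}Rides'})
    dataset = pd.merge(dataset, wk_df, how='left', on='customerId')
    dataset[f'week{wk}Rides'] = dataset[f'week{wk}Rides'].fillna(0)
</code></pre>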
<p>See Pandas' documentation on <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer">merge</a> and the more general <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer">concatenate</a> for more info.</p>
|
pandas
| 0
|
373,810
| 59,262,364
|
Iterating / Slicing over a dataframe and performing mathematical calculations
|
<p><a href="https://i.stack.imgur.com/oi8XC.jpg" rel="nofollow noreferrer">Dataframe image</a></p>
<p>The operation I intend to perform: whenever there is a '2' in column 3, take that entry's column 1 value, subtract the column 1 value of the previous entry, and then multiply the result by a constant integer (say 5).
For example, from the image we have a '2' in column 3 at 6:00 and the column 1 value for that entry is 0.011333; take the previous column 1 entry, which is 0.008583, and compute
(0.011333 - 0.008583) * 5.
I want to perform this every time a '2' appears in column 3 of the dataframe. Please help; I am not able to get the right code to perform the above operation.</p>
|
<p>Perhaps this <a href="https://stackoverflow.com/questions/23664877/pandas-equivalent-of-oracle-lead-lag-function">question</a> will help you</p>
<p>I think in a SQL way: basically you make a new column that is filled with the value from the row above it.</p>
<pre><code>df['column1_lagged'] = df['column 1'].shift(1)
</code></pre>
<p>Then you create another column that do the calculation</p>
<pre><code>constant = 5
df['calculation'] = (df['column 1'] - df['column1_lagged'])*constant
</code></pre>
<p>After that you just slice the dataframe to your condition (column 3 with '2's)</p>
<pre><code>condition = df['column 3'] == 2
df[condition]
</code></pre>
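<p>The three steps can also be collapsed into a single expression with <code>diff()</code>, which subtracts the previous row's value directly (a sketch equivalent to the above):</p>
<pre><code>constant = 5
result = (df['column 1'].diff() * constant)[df['column 3'] == 2]
</code></pre>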
|
python|pandas|dataframe
| 0
|
373,811
| 59,408,554
|
Applying lambda function in two data frames in the same time
|
<p>I have two data frames and I want to apply a single lambda function to both of them at the same time. The values are strings.</p>
<pre><code>df1:
A B C
0 1 1 2
1 2 0 0
2 1 2 2
3 1.5 1 3
df2:
A B
0 3 1
1 4 5
2 2.7 2
3 3.1 4
</code></pre>
<p>Some Thing like :</p>
<pre><code>df1["A"] = df1["A"].apply(lambda x: float(x))
df2["A"] = df2["A"].apply(lambda x: float(x))
</code></pre>
<p>In only one line. I heard I could use something like:</p>
<pre><code>map(lambda x: x.query(float(x)), [df1, df2])
</code></pre>
<p>But it returns a map object and I do not know what to do with that.
Thanks.</p>
|
<p>I believe you need to convert the output to a list of DataFrames with the <code>astype</code> function:</p>
<pre><code>dfs = list(map(lambda x: x.astype(float), [df1, df2]))
</code></pre>
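<p>You can then unpack the result straight back into the two names, for example:</p>
<pre><code>df1, df2 = map(lambda x: x.astype(float), [df1, df2])
</code></pre>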
|
python|pandas
| 2
|
373,812
| 59,341,819
|
Sample every nth minute in a minute based datetime column in python
|
<p>How do I select every 5th-minute row in a dataframe? If the 5th minute is missing, then the 4th or 3rd would do.</p>
<p>I DO NOT WANT MEAN OR ANY AGGREGATE</p>
<p>I have tried:</p>
<pre><code>df.groupby(pd.TimeGrouper('5Min'))['AUDUSD'].mean()
df.resample('5min', how=np.var).head()
</code></pre>
<p>but neither produces the desired results.</p>
<p>My Input:</p>
<pre><code> DATETIME AUDUSD
DATETIME
2019-06-07 00:01:00 2019.06.07 00:01 0.69740
2019-06-07 00:02:00 2019.06.07 00:02 0.69742
2019-06-07 00:03:00 2019.06.07 00:03 0.69742
2019-06-07 00:04:00 2019.06.07 00:04 0.69742
2019-06-07 00:05:00 2019.06.07 00:05 0.69739
2019-06-07 00:06:00 2019.06.07 00:06 0.69740
2019-06-07 00:07:00 2019.06.07 00:07 0.69739
2019-06-07 00:08:00 2019.06.07 00:08 0.69740
2019-06-07 00:11:00 2019.06.07 00:11 0.69741
2019-06-07 00:12:00 2019.06.07 00:12 0.69741
2019-06-07 00:13:00 2019.06.07 00:13 0.69740
2019-06-07 00:14:00 2019.06.07 00:14 0.69740
2019-06-07 00:15:00 2019.06.07 00:15 0.69754
2019-06-07 00:16:00 2019.06.07 00:16 0.69749
2019-06-07 00:17:00 2019.06.07 00:17 0.69752
2019-06-07 00:18:00 2019.06.07 00:18 0.69753
2019-06-07 00:19:00 2019.06.07 00:19 0.69758
2019-06-07 00:20:00 2019.06.07 00:20 0.69763
2019-06-07 00:21:00 2019.06.07 00:21 0.69764
2019-06-07 00:23:00 2019.06.07 00:23 0.69765
2019-06-07 00:28:00 2019.06.07 00:28 0.69763
</code></pre>
<p>Desired Output:</p>
<pre><code> DATETIME AUDUSD
DATETIME
2019-06-07 00:05:00 2019.06.07 00:05 0.69739
2019-06-07 00:10:00 2019.06.07 00:08 0.69740
2019-06-07 00:15:00 2019.06.07 00:15 0.69754
2019-06-07 00:20:00 2019.06.07 00:20 0.69763
2019-06-07 00:25:00 2019.06.07 00:23 0.69765
2019-06-07 00:30:00 2019.06.07 00:28 0.69763
</code></pre>
|
<p>This works for me, except I used <code>first</code> as I don't know what method you're using:</p>
<pre><code>df.set_index(pd.DatetimeIndex(df['DATETIME']))
df.set_index(pd.DatetimeIndex(df['DATETIME'])).resample("5T").agg('first')
Out[2649]:
DATETIME AUDUSD
DATETIME
2019-06-07 00:00:00 2019.06.07 00:01 0.69740
2019-06-07 00:05:00 2019.06.07 00:05 0.69739
2019-06-07 00:10:00 2019.06.07 00:11 0.69741
2019-06-07 00:15:00 2019.06.07 00:15 0.69754
2019-06-07 00:20:00 2019.06.07 00:20 0.69763
2019-06-07 00:25:00 2019.06.07 00:28 0.69763
</code></pre>
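<p>If you want the last available row of each 5-minute window instead, labelled at the right edge (which seems closer to the desired output, where the 00:08 row appears under 00:10:00), the same pattern with <code>agg('last')</code> and right-closed bins may work:</p>
<pre><code>df.set_index(pd.DatetimeIndex(df['DATETIME'])) \
  .resample('5T', label='right', closed='right').agg('last').dropna()
</code></pre>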
|
python|pandas|pandas-groupby
| 4
|
373,813
| 59,117,360
|
How to reshape an array of arrays in Python using Numpy
|
<p>As you can see below I have created three arrays that contain different random numbers:</p>
<pre><code>np.random.seed(200)
Array1 = np.random.randn(300)
Array2 = Array1 + np.random.randn(300) * 2
Array3 = Array1 + np.random.randn(300) * 2
data = np.array([Array1, Array2 , Array3])
#data.reshape(data, (Array3, Array1)
mydf = pd.DataFrame(data)
mydf.tail()
</code></pre>
<p>My objective is to build a DataFrame with those three arrays. Each array should show its values in a different column. The DataFrame should have three columns and the index. My problem with the above code is that the DataFrame is built horizontally instead of vertically. The DataFrame looks like this:</p>
<p><a href="https://i.stack.imgur.com/SZn4V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SZn4V.png" alt="enter image description here"></a></p>
<p>I have tried to use the reshape function to reshape the numpy array called ”data” but I couldn’t make it work. Any help would be more than welcome. Thanks!</p>
|
<p>You can use <code>.T</code> to transpose either the data <code>data = np.array([Array1, Array2 , Array3]).T</code> or the dataframe <code>mydf = pd.DataFrame(data).T</code>. </p>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code> 0 1 2
295 -0.126758 1.697413 0.399351
296 0.548405 1.402154 -4.396156
297 -1.063243 0.279774 -0.636649
298 -0.678952 -2.061554 0.244339
299 -0.527970 -0.290680 -0.930381
</code></pre>
<p>Or build a 2D array right away</p>
<pre><code>arr = np.random.randn(300, 3)
arr[:, 1:] *= 2
mydf = pd.DataFrame(arr)
</code></pre>
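<p>With either approach, you can also pass column names explicitly instead of the default <code>0, 1, 2</code>, for example:</p>
<pre><code>mydf = pd.DataFrame(data.T, columns=['Array1', 'Array2', 'Array3'])
</code></pre>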
|
python|pandas|numpy
| 6
|
373,814
| 59,339,005
|
elementwise comparison failed; returning scalar, but in the future will perform elementwise comparison
|
<pre><code>n1data = pcatrain_data[train_labels[0, :] == i, :]
n2data = pcatrain_data[train_labels[0, :] == j, :]
</code></pre>
<p>The shape of <code>pcatrain_data</code> is (14395, 40) and the shape of <code>train_labels</code> is (1, 14395).</p>
<p>It is my understanding that <code>train_labels[0, :] == i</code> will return a boolean array of size 14395, with True where the label is equal to i.</p>
<p>And since <code>pcatrain_data</code> has 14395 rows, it shouldn't cause any errors.</p>
<p>This is the code that is causing the issue. I am trying to get all the columns from specific rows of <code>pcatrain_data</code>; I want the rows where <code>train_labels[0, :] == i</code>.</p>
<p>I do not know why this error occurs, as I have done this before and it worked, both in Python 3.</p>
<p>The error is "FutureWarning: elementwise comparison failed; returning scalar, but in the future will perform elementwise comparison".</p>
|
<p>You need to write:</p>
<pre><code>import warnings
import numpy as np
warnings.simplefilter(action='ignore', category=FutureWarning)
</code></pre>
<p>Then the warning will disappear.</p>
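<p>For context, NumPy typically emits this warning when an elementwise <code>==</code> comparison cannot be performed sensibly, for example when the array holds strings and the other operand is an integer; in that case it silently returns a single <code>False</code> today. A tiny sketch that reproduces it:</p>
<pre><code>import numpy as np

labels = np.array(['1', '2', '3'])  # strings, not ints
mask = labels == 1                  # FutureWarning; evaluates to a scalar False
</code></pre>
<p>So it is also worth checking that <code>train_labels</code> and <code>i</code> really have compatible dtypes before suppressing the warning.</p>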
|
python|python-3.x|numpy
| -1
|
373,815
| 59,091,544
|
In python3: strange behaviour of list(iterables)
|
<p>I have a specific question regarding the behaviour of iterables in python. My iterable is a custom built Dataset class in pytorch:</p>
<pre><code>import torch
from torch.utils.data import Dataset

class datasetTest(Dataset):
    def __init__(self, X):
        self.X = X

    def __len__(self):
        return len(self.X)

    def __getitem__(self, x):
        print('***********')
        print('getitem x = ', x)
        print('###########')
        y = self.X[x]
        print('getitem y = ', y)
        return y
</code></pre>
<p>The weird behaviour now appears when I initialize a specific instance of that datasetTest class. Depending on what data structure I pass as the argument X, it behaves differently when I call list(datasetTestInstance). In particular, when passing a torch.tensor as the argument there is no problem; however, when passing a dict as the argument it will throw a KeyError. The reason for this is that list(iterable) does not just call i=0, ..., len(iterable)-1; it calls i=0, ..., len(iterable). That is, it iterates up to (and including) the index equal to the length of the iterable. Obviously, this index is not defined in any Python data structure, as the last element always has the index len(datastructure)-1 and not len(datastructure). If X is a torch.tensor or a list, no error is raised, even though I think there should be an error. It will still call <code>__getitem__</code> even for the (non-existent) element with index len(datasetTestInstance), but it will not compute y = self.X[len(datasetTestInstance)]. Does anyone know if PyTorch handles this somehow gracefully internally?</p>
<p>When passing a dict as data it will throw an error in the last iteration, when x=len(datasetTestInstance). This is actually the expected behaviour I guess. But why does this only happen for a dict and not for a list or torch.tensor?</p>
<pre><code>if __name__ == "__main__":
    a = datasetTest(torch.randn(5, 2))
    print(len(a))
    print('++++++++++++')
    for i in range(len(a)):
        print(i)
        print(a[i])
    print('++++++++++++')
    print(list(a))
    print('++++++++++++')

    b = datasetTest({0: 12, 1: 35, 2: 99, 3: 27, 4: 33})
    print(len(b))
    print('++++++++++++')
    for i in range(len(b)):
        print(i)
        print(b[i])
    print('++++++++++++')
    print(list(b))
</code></pre>
<p>You could try out that snippet of code if you want to understand better what I have observed.</p>
<p>My questions are:</p>
<p>1.) Why does list(iterable) iterate up to (and including) len(iterable)? A for loop doesn't do that.</p>
<p>2.) In the case of a torch.tensor or a list passed as data X: why does it not throw an error even when calling the <code>__getitem__</code> method for the index len(datasetTestInstance), which should actually be out of range since it is not defined as an index in the tensor/list? Or, in other words, when having reached the index len(datasetTestInstance) and then going into the <code>__getitem__</code> method, what happens exactly? It obviously doesn't make the call 'y = self.X[x]' anymore (otherwise there would be an IndexError), but it DOES enter the <code>__getitem__</code> method, which I can see because it prints the index x from within the method. So what happens in that method? And why does it behave differently depending on whether it has a torch.tensor/list or a dict?</p>
|
<p>A bunch of useful links:</p>
<ol>
<li><a href="https://docs.python.org/3/reference/datamodel.html#emulating-container-types" rel="nofollow noreferrer">[Python 3.Docs]: Data model - Emulating container types</a></li>
<li><a href="https://docs.python.org/3/library/stdtypes.html#iterator-types" rel="nofollow noreferrer">[Python 3.Docs]: Built-in Types - Iterator Types</a></li>
<li><a href="https://docs.python.org/3/library/functions.html#iter" rel="nofollow noreferrer">[Python 3.Docs]: Built-in Functions - <strong>iter</strong>(<em>object[, sentinel]</em>)</a></li>
<li><a href="https://stackoverflow.com/questions/41474829/why-does-list-ask-about-len">[SO]: Why does list ask about __len__?</a> (all answers)</li>
</ol>
<p>The key point is that the <em>list</em> constructor uses the (iterable) argument's <em>__len__</em> (if provided) to calculate the new container length, <strong>but then iterates over it</strong> (via the iterator protocol).</p>
<p>Your example worked in that manner (it iterated all keys and failed on the one equal to the dictionary length) because of a <strong>terrible coincidence</strong> (remember that <em>dict</em> supports the iterator protocol, and that happens over its keys (which is a sequence)):</p>
<ul>
<li><strong>Your dictionary only had <em>int</em> keys</strong> (and more)</li>
<li><strong>Their values are the same as their indexes</strong> (in the sequence)</li>
</ul>
<p>Changing any condition expressed by the above 2 bullets, would make the actual error more eloquent.</p>
<p>Both objects (<em>dict</em> and <em>list</em> (of <em>tensor</em>s)) support the iterator protocol. In order to make things work, you should wrap it in your <em>Dataset</em> class and tweak the mapping-type one a bit (to work with values instead of keys). <br>The code (the <em>key_func</em> related parts) is a bit complex, but only so that it is easily configurable (if you want to change something, for <em>demo</em> purposes).</p>
<p><em>code00.py</em>:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3

import sys

import torch
from torch.utils.data import Dataset
from random import randint


class SimpleDataset(Dataset):
    def __init__(self, x):
        self.__iter = None
        self.x = x

    def __len__(self):
        print(" __len__()")
        return len(self.x)

    def __getitem__(self, key):
        print(" __getitem__({0:}({1:s}))".format(key, key.__class__.__name__))
        try:
            val = self.x[key]
            print(" {0:}".format(val))
            return val
        except:
            print(" exc")
            raise  #IndexError

    def __iter__(self):
        print(" __iter__()")
        self.__iter = iter(self.x)
        return self

    def __next__(self):
        print(" __next__()")
        if self.__iter is None:
            raise StopIteration
        val = next(self.__iter)
        if isinstance(self.x, (dict,)):  # Special handling for dictionaries
            val = self.x[val]
        return val


def key_transformer(int_key):
    return str(int_key)  # You could `return int_key` to see that it also works on your original example


def dataset_example(inner, key_func=None):
    if key_func is None:
        key_func = lambda x: x
    print("\nInner object: {0:}".format(inner))
    sd = SimpleDataset(inner)
    print("Dataset length: {0:d}".format(len(sd)))
    print("\nIterating (old fashion way):")
    for i in range(len(sd)):
        print(" {0:}: {1:}".format(key_func(i), sd[key_func(i)]))
    print("\nIterating (Python (iterator protocol) way):")
    for element in sd:
        print(" {0:}".format(element))
    print("\nTry building the list:")
    l = list(sd)
    print(" List: {0:}\n".format(l))


def main():
    dict_size = 2
    for inner, func in [
        (torch.randn(2, 2), None),
        ({key_transformer(i): randint(0, 100) for i in reversed(range(dict_size))}, key_transformer),  # Reversed the key order (since Python 3.7, dicts are ordered), to test int keys
    ]:
        dataset_example(inner, key_func=func)


if __name__ == "__main__":
    print("Python {0:s} {1:d}bit on {2:s}\n".format(" ".join(item.strip() for item in sys.version.split("\n")), 64 if sys.maxsize > 0x100000000 else 32, sys.platform))
    main()
    print("\nDone.")
</code></pre>
<p><strong>Output</strong>:</p>
<blockquote>
<pre class="lang-bat prettyprint-override"><code>[cfati@CFATI-5510-0:e:\Work\Dev\StackOverflow\q059091544]> "e:\Work\Dev\VEnvs\py_064_03.07.03_test0\Scripts\python.exe" code00.py
Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 22:22:05) [MSC v.1916 64 bit (AMD64)] 64bit on win32
Inner object: tensor([[ 0.6626, 0.1107],
[-0.1118, 0.6177]])
__len__()
Dataset length: 2
Iterating (old fashion way):
__len__()
__getitem__(0(int))
tensor([0.6626, 0.1107])
0: tensor([0.6626, 0.1107])
__getitem__(1(int))
tensor([-0.1118, 0.6177])
1: tensor([-0.1118, 0.6177])
Iterating (Python (iterator protocol) way):
__iter__()
__next__()
tensor([0.6626, 0.1107])
__next__()
tensor([-0.1118, 0.6177])
__next__()
Try building the list:
__iter__()
__len__()
__next__()
__next__()
__next__()
List: [tensor([0.6626, 0.1107]), tensor([-0.1118, 0.6177])]
Inner object: {'1': 86, '0': 25}
__len__()
Dataset length: 2
Iterating (old fashion way):
__len__()
__getitem__(0(str))
25
0: 25
__getitem__(1(str))
86
1: 86
Iterating (Python (iterator protocol) way):
__iter__()
__next__()
86
__next__()
25
__next__()
Try building the list:
__iter__()
__len__()
__next__()
__next__()
__next__()
List: [86, 25]
Done.
</code></pre>
</blockquote>
<p>You might also want to check <a href="https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html" rel="nofollow noreferrer">[PyTorch]: SOURCE CODE FOR TORCH.UTILS.DATA.DATASET</a> (<em>IterableDataset</em>).</p>
|
python|list|dictionary|pytorch|iterable
| 1
|
373,816
| 59,292,524
|
index 5 is out of bounds for axis 1 with size 5
|
<p>Hi, I have the following code, which produces an out-of-bounds error:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('50_Startups.csv')
x = dataset.iloc[:,:-1].values
y = dataset.iloc[:,4].values
from sklearn.preprocessing import LabelEncoder , OneHotEncoder
labelencoder_x = LabelEncoder()
x[:,3] = labelencoder_x.fit_transform(x[:,3])
onehotencoder = OneHotEncoder(categorical_features = [3])
x = onehotencoder.fit_transform(x).toarray()
x = x[:,1:]
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2,random_state=0)
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor = regressor.fit(x_train,y_train)
y_pred = regressor.predict(x_test)
import statsmodels.formula.api as sm
x = np.append(arr = np.ones((50,1)).astype(int) , values = x , axis = 1)
x_opt = x[:,[0,1,2,3,4,5]]
regressor_ols = sm.OLS(endog= y , exog = x_opt).fit()
regressor_ols.summary()
x_opt = x[:,[0,1,2,3,4,5]]
regressor_ols = sm.OLS(endog= y , exog = x_opt).fit()
</code></pre>
<p><strong>The full error displayed is:</strong></p>
<pre><code>x_opt = x[:,[0,1,2,3,4,5]]
Traceback (most recent call last):
File "<ipython-input-45-62cb7e2f326e>", line 1, in <module>
x_opt = x[:,[0,1,2,3,4,5]]
IndexError: index 5 is out of bounds for axis 1 with size 5
</code></pre>
<p>How can I solve this error?</p>
|
<p>Well, an array of size 5 has its last index at 4. </p>
<p>An array always starts at index 0, not 1.</p>
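<p>A minimal sketch of how to check and fix it, continuing with the variables from the question and assuming <code>x</code> really ended up with only 5 columns in your session (for example because the <code>np.append</code> line that adds the intercept column was not re-run):</p>
<pre><code>print(x.shape)                      # e.g. (50, 5) -> valid column indices are 0..4

# Either select only columns that actually exist ...
x_opt = x[:, list(range(x.shape[1]))]

# ... or make sure the column of ones was appended first, so that index 5 exists:
x = np.append(arr=np.ones((50, 1)).astype(int), values=x, axis=1)
x_opt = x[:, [0, 1, 2, 3, 4, 5]]
</code></pre>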
|
python|pandas|numpy
| 1
|
373,817
| 59,154,530
|
Why is matplotlib plotting so much slower than pd.DataFrame.plot()?
|
<p>Hello dear Community,</p>
<p>I haven't found anything similar during my search and hope I haven't overlooked anything. I have the following issue:</p>
<p>I have a big dataset whichs shape is 1352x121797 (1353 samples and 121797 time points). Now I have clustered these and would like to generate one plot for each cluster in which every time series for this cluster is plotted.</p>
<p>However, when using the matplotlib syntax it is like super extremely slow (and I'm not exactly sure where that comes from). Even after 5-10 minutes it hasn't finished.</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
fig, ax = plt.subplots()
for index, values in subset_cluster.iterrows(): # One Cluster subset, dataframe of shape (11x121797)
ax.plot(values)
fig.savefig('test.png')
</code></pre>
<p>Even when inserting a <strong>break</strong> after <code>ax.plot(values)</code> it still doesn't finish. I'm using Spyder and thought that it might be due to Spyder always rendering the plot inline in the console.</p>
<p>However, when simply using the pandas method of the Series <code>values.plot()</code> instead of <code>ax.plot(values)</code> the plot appears and is saved in like 1-2 seconds.</p>
<p>As I need the customization options of matplotlib for standardizing all the plots and making them look a little prettier, I would love to use the matplotlib syntax. Does anyone have any ideas?</p>
<p>Thanks in advance</p>
<p>Edit: while trying things out a little, it seems that the rendering is the time-consuming part. When run with the backend <code>matplotlib.use('Agg')</code>, the plot command runs through quicker (if using <code>plt.plot()</code> instead of <code>ax.plot()</code>), but <code>plt.savefig()</code> then takes forever. Still, it should finish in a reasonable amount of time, right? Even for 121xxx data points.</p>
|
<p>Posting as an answer as it may help the OP or someone else: I had the same problem and found out that it was because the data I was using as the x-axis had <code>object</code> dtype, while the y-axis data was <code>float64</code>. After explicitly converting the object data to DateTime, plotting with matplotlib went as fast as pandas' <code>df.plot()</code>. I guess that pandas does a better job at understanding the data type when plotting.</p>
<p>OP, you might want to check whether the values you are plotting have the right dtype, or whether, like me, you had some problems when loading the dataframe from file.</p>
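<p>A minimal sketch of that check, reusing the <code>subset_cluster</code> frame from the question (the conversion lines are only needed if the dtypes really come back as <code>object</code>):</p>
<pre><code>import pandas as pd

# Columns of dtype `object` (date strings, or numbers stored as text) are a
# common cause of very slow matplotlib rendering, as described above.
print(subset_cluster.dtypes.value_counts())

# Convert explicitly before handing the rows to ax.plot(), e.g. for numbers ...
subset_cluster = subset_cluster.apply(pd.to_numeric, errors="coerce")
# ... or, if the values are timestamps stored as strings:
# subset_cluster = subset_cluster.apply(pd.to_datetime)
</code></pre>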
|
python|pandas|matplotlib|plot
| 2
|
373,818
| 59,426,684
|
How to append rows from one Dataframe to another having different column structure
|
<p>I have two Excel files: the first one has 34 columns and the second one has 19 columns. The first one contains all of these 19 columns, so if I add some empty columns to the second I can get the structure of the first file. I want to append rows from the 2nd file to the first file.</p>
<p>I added empty columns to get the same structure as the first one, but when I tried to merge both data frames I got:
ValueError: Plan shapes are not aligned (maybe because those empty columns don't have names).</p>
<pre><code>merged = pd.read_excel(r'D:\Incident in detail.xlsx')
resolved = pd.read_excel(r'D:\Data set\Resolved Incident.xlsx')
for i in range(13,26):
resolved.insert(i,"","",allow_duplicates=True)
resolved.insert(33,"","", allow_duplicates=True)
resolved_planning = resolved[resolved['Priority'] == '5 - Planning']
merged.append(resolved_planning, ignore_index = True, sort = False)
merged.to_excel(r'D:\test.xlsx', index = False)
</code></pre>
<p>I also tried using itertuples() but didn't get anything on how to append a list as a row into a data frame.</p>
<p>Edit1:</p>
<p>There is a library dplyr in R in which there is a method bind_rows() which can append rows by column name <a href="https://stackoverflow.com/questions/37585517/adding-rows-to-a-dataframe-based-on-column-names-and-add-na-to-empty-columns">example</a></p>
|
<p>I don't know if it is efficient or not but this is what I did:</p>
<pre><code>new_res = pd.DataFrame(data = resolved_planning.values, columns = merged.columns)
new_merged=merged.append(new_res, ignore_index = True, sort = False)
</code></pre>
<p>I made a new data frame (new_res) with the first data frame's header and the second data frame's (resolved_planning) values, and then appended the two data frames.</p>
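<p>A related option: <code>pd.concat</code> aligns on column names by itself (similar to R's <code>bind_rows()</code> mentioned in the question), so the empty placeholder columns are not needed; columns missing from one frame are simply filled with NaN:</p>
<pre><code>import pandas as pd

merged = pd.read_excel(r'D:\Incident in detail.xlsx')
resolved = pd.read_excel(r'D:\Data set\Resolved Incident.xlsx')

resolved_planning = resolved[resolved['Priority'] == '5 - Planning']

# Rows are matched to columns by name; columns that only exist in `merged`
# become NaN for the appended rows.
combined = pd.concat([merged, resolved_planning], ignore_index=True, sort=False)
combined.to_excel(r'D:\test.xlsx', index=False)
</code></pre>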
|
python|excel|pandas|numpy
| 0
|
373,819
| 59,075,406
|
Can't plot in jupyter notebook
|
<p>I have a jupyter notebook script and a part of it looks like that:</p>
<pre><code>import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
matplotlib.style.use('ggplot')
matplotlib.rc('text', usetex=True)
import scipy.stats
df = pd.read_csv("./titanic-train.csv")
ax = df.Age.hist(bins=20)
ax.set_xlabel("age / years")
ax.set_ylabel("frequency");
</code></pre>
<p>After that I get an error and it won't print the histogram.
I am using Python 3.7.3 installed with Anaconda.
From running latex in the cmd window on Windows 10 I get: </p>
<blockquote>
<p>This is pdfTeX, Version 3.14159265-2.6-1.40.20 (MiKTeX 2.9.7250
64-bit)</p>
</blockquote>
<p>and here is the Error Message that I get from running the dataframe.hist():</p>
<pre><code>Error in callback <function install_repl_displayhook.<locals>.post_execute at 0x0000029D82BD2950> (for post_execute):
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\matplotlib\texmanager.py in _run_checked_subprocess(self, command, tex)
303 cwd=self.texcache,
--> 304 stderr=subprocess.STDOUT)
305 except FileNotFoundError as exc:
~\Anaconda3\lib\subprocess.py in check_output(timeout, *popenargs, **kwargs)
394 return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
--> 395 **kwargs).stdout
396
~\Anaconda3\lib\subprocess.py in run(input, capture_output, timeout, check, *popenargs, **kwargs)
471
--> 472 with Popen(*popenargs, **kwargs) as process:
473 try:
~\Anaconda3\lib\subprocess.py in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, encoding, errors, text)
774 errread, errwrite,
--> 775 restore_signals, start_new_session)
776 except:
~\Anaconda3\lib\subprocess.py in _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_start_new_session)
1177 os.fspath(cwd) if cwd is not None else None,
-> 1178 startupinfo)
1179 finally:
FileNotFoundError: [WinError 2] The system cannot find the file specified
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\matplotlib\pyplot.py in post_execute()
107 def post_execute():
108 if matplotlib.is_interactive():
--> 109 draw_all()
110
111 # IPython >= 2
~\Anaconda3\lib\site-packages\matplotlib\_pylab_helpers.py in draw_all(cls, force)
126 for f_mgr in cls.get_all_fig_managers():
127 if force or f_mgr.canvas.figure.stale:
--> 128 f_mgr.canvas.draw_idle()
129
130 atexit.register(Gcf.destroy_all)
~\Anaconda3\lib\site-packages\matplotlib\backend_bases.py in draw_idle(self, *args, **kwargs)
1905 if not self._is_idle_drawing:
1906 with self._idle_draw_cntx():
-> 1907 self.draw(*args, **kwargs)
1908
1909 def draw_cursor(self, event):
~\Anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py in draw(self)
386 self.renderer = self.get_renderer(cleared=True)
387 with RendererAgg.lock:
--> 388 self.figure.draw(self.renderer)
389 # A GUI class may be need to update a window using this draw, so
390 # don't forget to call the superclass.
~\Anaconda3\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
36 renderer.start_filter()
37
---> 38 return draw(artist, renderer, *args, **kwargs)
39 finally:
40 if artist.get_agg_filter() is not None:
~\Anaconda3\lib\site-packages\matplotlib\figure.py in draw(self, renderer)
1707 self.patch.draw(renderer)
1708 mimage._draw_list_compositing_images(
-> 1709 renderer, self, artists, self.suppressComposite)
1710
1711 renderer.close_group('figure')
~\Anaconda3\lib\site-packages\matplotlib\image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
133 if not_composite or not has_images:
134 for a in artists:
--> 135 a.draw(renderer)
136 else:
137 # Composite any adjacent images together
~\Anaconda3\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
36 renderer.start_filter()
37
---> 38 return draw(artist, renderer, *args, **kwargs)
39 finally:
40 if artist.get_agg_filter() is not None:
~\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in draw(self, renderer, inframe)
2643 renderer.stop_rasterizing()
2644
-> 2645 mimage._draw_list_compositing_images(renderer, self, artists)
2646
2647 renderer.close_group('axes')
~\Anaconda3\lib\site-packages\matplotlib\image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
133 if not_composite or not has_images:
134 for a in artists:
--> 135 a.draw(renderer)
136 else:
137 # Composite any adjacent images together
~\Anaconda3\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
36 renderer.start_filter()
37
---> 38 return draw(artist, renderer, *args, **kwargs)
39 finally:
40 if artist.get_agg_filter() is not None:
~\Anaconda3\lib\site-packages\matplotlib\axis.py in draw(self, renderer, *args, **kwargs)
1204 ticks_to_draw = self._update_ticks()
1205 ticklabelBoxes, ticklabelBoxes2 = self._get_tick_bboxes(ticks_to_draw,
-> 1206 renderer)
1207
1208 for tick in ticks_to_draw:
~\Anaconda3\lib\site-packages\matplotlib\axis.py in _get_tick_bboxes(self, ticks, renderer)
1149 """Return lists of bboxes for ticks' label1's and label2's."""
1150 return ([tick.label1.get_window_extent(renderer)
-> 1151 for tick in ticks if tick.label1.get_visible()],
1152 [tick.label2.get_window_extent(renderer)
1153 for tick in ticks if tick.label2.get_visible()])
~\Anaconda3\lib\site-packages\matplotlib\axis.py in <listcomp>(.0)
1149 """Return lists of bboxes for ticks' label1's and label2's."""
1150 return ([tick.label1.get_window_extent(renderer)
-> 1151 for tick in ticks if tick.label1.get_visible()],
1152 [tick.label2.get_window_extent(renderer)
1153 for tick in ticks if tick.label2.get_visible()])
~\Anaconda3\lib\site-packages\matplotlib\text.py in get_window_extent(self, renderer, dpi)
888 raise RuntimeError('Cannot get window extent w/o renderer')
889
--> 890 bbox, info, descent = self._get_layout(self._renderer)
891 x, y = self.get_unitless_position()
892 x, y = self.get_transform().transform_point((x, y))
~\Anaconda3\lib\site-packages\matplotlib\text.py in _get_layout(self, renderer)
289 _, lp_h, lp_d = renderer.get_text_width_height_descent(
290 "lp", self._fontproperties,
--> 291 ismath="TeX" if self.get_usetex() else False)
292 min_dy = (lp_h - lp_d) * self._linespacing
293
~\Anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py in get_text_width_height_descent(self, s, prop, ismath)
199 fontsize = prop.get_size_in_points()
200 w, h, d = texmanager.get_text_width_height_descent(
--> 201 s, fontsize, renderer=self)
202 return w, h, d
203
~\Anaconda3\lib\site-packages\matplotlib\texmanager.py in get_text_width_height_descent(self, tex, fontsize, renderer)
446 else:
447 # use dviread. It sometimes returns a wrong descent.
--> 448 dvifile = self.make_dvi(tex, fontsize)
449 with dviread.Dvi(dvifile, 72 * dpi_fraction) as dvi:
450 page, = dvi
~\Anaconda3\lib\site-packages\matplotlib\texmanager.py in make_dvi(self, tex, fontsize)
336 self._run_checked_subprocess(
337 ["latex", "-interaction=nonstopmode", "--halt-on-error",
--> 338 texfile], tex)
339 for fname in glob.glob(basefile + '*'):
340 if not fname.endswith(('dvi', 'tex')):
~\Anaconda3\lib\site-packages\matplotlib\texmanager.py in _run_checked_subprocess(self, command, tex)
306 raise RuntimeError(
307 'Failed to process string with tex because {} could not be '
--> 308 'found'.format(command[0])) from exc
309 except subprocess.CalledProcessError as exc:
310 raise RuntimeError(
RuntimeError: Failed to process string with tex because latex could not be found
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\matplotlib\texmanager.py in _run_checked_subprocess(self, command, tex)
303 cwd=self.texcache,
--> 304 stderr=subprocess.STDOUT)
305 except FileNotFoundError as exc:
~\Anaconda3\lib\subprocess.py in check_output(timeout, *popenargs, **kwargs)
394 return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
--> 395 **kwargs).stdout
396
~\Anaconda3\lib\subprocess.py in run(input, capture_output, timeout, check, *popenargs, **kwargs)
471
--> 472 with Popen(*popenargs, **kwargs) as process:
473 try:
~\Anaconda3\lib\subprocess.py in __init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, encoding, errors, text)
774 errread, errwrite,
--> 775 restore_signals, start_new_session)
776 except:
~\Anaconda3\lib\subprocess.py in _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_start_new_session)
1177 os.fspath(cwd) if cwd is not None else None,
-> 1178 startupinfo)
1179 finally:
FileNotFoundError: [WinError 2] The system cannot find the file specified
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\IPython\core\formatters.py in __call__(self, obj)
339 pass
340 else:
--> 341 return printer(obj)
342 # Finally look for special method names
343 method = get_real_method(obj, self.print_method)
~\Anaconda3\lib\site-packages\IPython\core\pylabtools.py in <lambda>(fig)
242
243 if 'png' in formats:
--> 244 png_formatter.for_type(Figure, lambda fig: print_figure(fig, 'png', **kwargs))
245 if 'retina' in formats or 'png2x' in formats:
246 png_formatter.for_type(Figure, lambda fig: retina_figure(fig, **kwargs))
~\Anaconda3\lib\site-packages\IPython\core\pylabtools.py in print_figure(fig, fmt, bbox_inches, **kwargs)
126
127 bytes_io = BytesIO()
--> 128 fig.canvas.print_figure(bytes_io, **kw)
129 data = bytes_io.getvalue()
130 if fmt == 'svg':
~\Anaconda3\lib\site-packages\matplotlib\backend_bases.py in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, **kwargs)
2054 orientation=orientation,
2055 dryrun=True,
-> 2056 **kwargs)
2057 renderer = self.figure._cachedRenderer
2058 bbox_artists = kwargs.pop("bbox_extra_artists", None)
~\Anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py in print_png(self, filename_or_obj, metadata, pil_kwargs, *args, **kwargs)
525
526 else:
--> 527 FigureCanvasAgg.draw(self)
528 renderer = self.get_renderer()
529 with cbook._setattr_cm(renderer, dpi=self.figure.dpi), \
~\Anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py in draw(self)
386 self.renderer = self.get_renderer(cleared=True)
387 with RendererAgg.lock:
--> 388 self.figure.draw(self.renderer)
389 # A GUI class may be need to update a window using this draw, so
390 # don't forget to call the superclass.
~\Anaconda3\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
36 renderer.start_filter()
37
---> 38 return draw(artist, renderer, *args, **kwargs)
39 finally:
40 if artist.get_agg_filter() is not None:
~\Anaconda3\lib\site-packages\matplotlib\figure.py in draw(self, renderer)
1707 self.patch.draw(renderer)
1708 mimage._draw_list_compositing_images(
-> 1709 renderer, self, artists, self.suppressComposite)
1710
1711 renderer.close_group('figure')
~\Anaconda3\lib\site-packages\matplotlib\image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
133 if not_composite or not has_images:
134 for a in artists:
--> 135 a.draw(renderer)
136 else:
137 # Composite any adjacent images together
~\Anaconda3\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
36 renderer.start_filter()
37
---> 38 return draw(artist, renderer, *args, **kwargs)
39 finally:
40 if artist.get_agg_filter() is not None:
~\Anaconda3\lib\site-packages\matplotlib\axes\_base.py in draw(self, renderer, inframe)
2643 renderer.stop_rasterizing()
2644
-> 2645 mimage._draw_list_compositing_images(renderer, self, artists)
2646
2647 renderer.close_group('axes')
~\Anaconda3\lib\site-packages\matplotlib\image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
133 if not_composite or not has_images:
134 for a in artists:
--> 135 a.draw(renderer)
136 else:
137 # Composite any adjacent images together
~\Anaconda3\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
36 renderer.start_filter()
37
---> 38 return draw(artist, renderer, *args, **kwargs)
39 finally:
40 if artist.get_agg_filter() is not None:
~\Anaconda3\lib\site-packages\matplotlib\axis.py in draw(self, renderer, *args, **kwargs)
1204 ticks_to_draw = self._update_ticks()
1205 ticklabelBoxes, ticklabelBoxes2 = self._get_tick_bboxes(ticks_to_draw,
-> 1206 renderer)
1207
1208 for tick in ticks_to_draw:
~\Anaconda3\lib\site-packages\matplotlib\axis.py in _get_tick_bboxes(self, ticks, renderer)
1149 """Return lists of bboxes for ticks' label1's and label2's."""
1150 return ([tick.label1.get_window_extent(renderer)
-> 1151 for tick in ticks if tick.label1.get_visible()],
1152 [tick.label2.get_window_extent(renderer)
1153 for tick in ticks if tick.label2.get_visible()])
~\Anaconda3\lib\site-packages\matplotlib\axis.py in <listcomp>(.0)
1149 """Return lists of bboxes for ticks' label1's and label2's."""
1150 return ([tick.label1.get_window_extent(renderer)
-> 1151 for tick in ticks if tick.label1.get_visible()],
1152 [tick.label2.get_window_extent(renderer)
1153 for tick in ticks if tick.label2.get_visible()])
~\Anaconda3\lib\site-packages\matplotlib\text.py in get_window_extent(self, renderer, dpi)
888 raise RuntimeError('Cannot get window extent w/o renderer')
889
--> 890 bbox, info, descent = self._get_layout(self._renderer)
891 x, y = self.get_unitless_position()
892 x, y = self.get_transform().transform_point((x, y))
~\Anaconda3\lib\site-packages\matplotlib\text.py in _get_layout(self, renderer)
289 _, lp_h, lp_d = renderer.get_text_width_height_descent(
290 "lp", self._fontproperties,
--> 291 ismath="TeX" if self.get_usetex() else False)
292 min_dy = (lp_h - lp_d) * self._linespacing
293
~\Anaconda3\lib\site-packages\matplotlib\backends\backend_agg.py in get_text_width_height_descent(self, s, prop, ismath)
199 fontsize = prop.get_size_in_points()
200 w, h, d = texmanager.get_text_width_height_descent(
--> 201 s, fontsize, renderer=self)
202 return w, h, d
203
~\Anaconda3\lib\site-packages\matplotlib\texmanager.py in get_text_width_height_descent(self, tex, fontsize, renderer)
446 else:
447 # use dviread. It sometimes returns a wrong descent.
--> 448 dvifile = self.make_dvi(tex, fontsize)
449 with dviread.Dvi(dvifile, 72 * dpi_fraction) as dvi:
450 page, = dvi
~\Anaconda3\lib\site-packages\matplotlib\texmanager.py in make_dvi(self, tex, fontsize)
336 self._run_checked_subprocess(
337 ["latex", "-interaction=nonstopmode", "--halt-on-error",
--> 338 texfile], tex)
339 for fname in glob.glob(basefile + '*'):
340 if not fname.endswith(('dvi', 'tex')):
~\Anaconda3\lib\site-packages\matplotlib\texmanager.py in _run_checked_subprocess(self, command, tex)
306 raise RuntimeError(
307 'Failed to process string with tex because {} could not be '
--> 308 'found'.format(command[0])) from exc
309 except subprocess.CalledProcessError as exc:
310 raise RuntimeError(
RuntimeError: Failed to process string with tex because latex could not be found
<Figure size 432x288 with 1 Axes>
</code></pre>
<p><strong>Edit 1:</strong></p>
<p>I have tried running the same code on my home PC, which also runs Windows 10 with the same Python and LaTeX versions, and there the code does not throw an error and plots everything as it is supposed to.
I still need it to work on the laptop where I have the issue, but I just wanted to state that it surprisingly works on my other PC, which seems to have the same setup.</p>
|
<p>Okay, so I found a solution to my problem.</p>
<p>First I uninstalled Anaconda and also deleted the <code>.anaconda3</code> folder in my user directory.
I then reinstalled as normal and ran my script. This time I got two external windows asking to install packages for LaTeX (I didn't get them last time).</p>
<p>BUT I am also running BitDefender, and when I wanted to install them my antivirus software blocked the download as unsafe without telling me.
I deactivated BitDefender and tried to reproduce the situation where the LaTeX windows open, so I let my script run again and the two windows appeared again.
I now have the packages installed, and matplotlib and everything else is plotting just fine now.</p>
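<p>If you run into this again, a quick sanity check (independent of the antivirus) is whether Python can actually see the <code>latex</code> executable that matplotlib shells out to when <code>usetex=True</code>; as a stop-gap you can also turn <code>usetex</code> off:</p>
<pre><code>import shutil
print(shutil.which("latex"))     # None -> latex is not on PATH, so usetex=True must fail

# Temporary workaround while LaTeX is broken: fall back to matplotlib's own mathtext
import matplotlib
matplotlib.rc('text', usetex=False)
</code></pre>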
<p><strong>TL;DR:</strong>
For anyone running into the same problem: it could be that your antivirus software is preventing the needed LaTeX packages from being installed, and therefore you don't have the files needed to plot.</p>
|
python|pandas|jupyter-notebook|latex
| 0
|
373,820
| 59,317,543
|
Expected object or value while reading the .json file in Python
|
<p>I am trying to read the .json file in python.
Here is my python code:</p>
<pre><code>import pandas as pd
df_idf = pd.read_json('/home/lazzydevs/Data/datajs.json',lines = True)
print("Schema:\n\n",df_idf.dtypes)
print("Number of questions,columns=",df_idf.shape)
</code></pre>
<p>I also checked my json file and it's a valid file.
Here is my .json file:</p>
<pre><code>[{
"id": "4821394",
"title": "Serializing a private struct - Can it be done?",
"body": "\u003cp\u003eI have a public class that contains a private struct. The struct contains properties (mostly string) that I want to serialize. When I attempt to serialize the struct and stream it to disk, using XmlSerializer, I get an error saying only public types can be serialized. I don't need, and don't want, this struct to be public. Is there a way I can serialize it and keep it private?\u003c/p\u003e",
"answer_count": "1",
"comment_count": "0",
"creation_date": "2011-01-27 20:19:13.563 UTC",
"last_activity_date": "2011-01-27 20:21:37.59 UTC",
"last_editor_display_name": "",
"owner_display_name": "",
"owner_user_id": "163534",
"post_type_id": "1",
"score": "0",
"tags": "c#|serialization|xml-serialization",
"view_count": "296"
},{
"id": "3367882",
"title": "How do I prevent floated-right content from overlapping main content?",
"body": "\u003cp\u003eI have the following HTML:\u003c/p\u003e\n\n\u003cpre\u003e\u003ccode\u003e\u0026lt;td class='a'\u0026gt;\n \u0026lt;img src='/images/some_icon.png' alt='Some Icon' /\u0026gt;\n \u0026lt;span\u0026gt;Some content that's waaaaaaaaay too long to fit in the allotted space, but which can get cut off.\u0026lt;/span\u0026gt;\n\u0026lt;/td\u0026gt;\n\u003c/code\u003e\u003c/pre\u003e\n\n\u003cp\u003eIt should display as follows:\u003c/p\u003e\n\n\u003cpre\u003e\u003ccode\u003e[Some content that's wa [ICON]]\n\u003c/code\u003e\u003c/pre\u003e\n\n\u003cp\u003eI have the following CSS:\u003c/p\u003e\n\n\u003cpre\u003e\u003ccode\u003etd.a span {\n overflow: hidden;\n white-space: nowrap;\n z-index: 1;\n}\n\ntd.a img {\n display: block;\n float: right;\n z-index: 2;\n}\n\u003c/code\u003e\u003c/pre\u003e\n\n\u003cp\u003eWhen I resize the browser to cut off the text, it cuts off at the edge of the \u003ccode\u003e\u0026lt;td\u0026gt;\u003c/code\u003e rather than before the \u003ccode\u003e\u0026lt;img\u0026gt;\u003c/code\u003e, which leaves the \u003ccode\u003e\u0026lt;img\u0026gt;\u003c/code\u003e overlapping the \u003ccode\u003e\u0026lt;span\u0026gt;\u003c/code\u003e content. I've tried various \u003ccode\u003epadding\u003c/code\u003e and \u003ccode\u003emargin\u003c/code\u003es, but nothing seemed to work. Is this possible?\u003c/p\u003e\n\n\u003cp\u003eNB: It's \u003cem\u003every\u003c/em\u003e difficult to add a \u003ccode\u003e\u0026lt;td\u0026gt;\u003c/code\u003e that just contains the \u003ccode\u003e\u0026lt;img\u0026gt;\u003c/code\u003e here. If it were easy, I'd just do that :)\u003c/p\u003e",
"accepted_answer_id": "3367943",
"answer_count": "2",
"comment_count": "2",
"creation_date": "2010-07-30 00:01:50.9 UTC",
"favorite_count": "0",
"last_activity_date": "2012-05-10 14:16:05.143 UTC",
"last_edit_date": "2012-05-10 14:16:05.143 UTC",
"last_editor_display_name": "",
"last_editor_user_id": "44390",
"owner_display_name": "",
"owner_user_id": "1190",
"post_type_id": "1",
"score": "2",
"tags": "css|overflow|css-float|crop",
"view_count": "4121"
}]
</code></pre>
<p>Now I am trying to read the json file in Python, but it keeps showing an error:</p>
<pre><code>Traceback (most recent call last):
File "/home/lazzydevs/Desktop/tfstack.py", line 4, in <module>
df_idf = pd.read_json('/home/lazzydevs/Data/datajs.json',lines = True)
File "/home/lazzydevs/.local/lib/python3.7/site-packages/pandas/io/json/_json.py", line 592, in read_json
result = json_reader.read()
File "/home/lazzydevs/.local/lib/python3.7/site-packages/pandas/io/json/_json.py", line 715, in read
obj = self._get_object_parser(self._combine_lines(data.split("\n")))
File "/home/lazzydevs/.local/lib/python3.7/site-packages/pandas/io/json/_json.py", line 739, in _get_object_parser
obj = FrameParser(json, **kwargs).parse()
File "/home/lazzydevs/.local/lib/python3.7/site-packages/pandas/io/json/_json.py", line 849, in parse
self._parse_no_numpy()
File "/home/lazzydevs/.local/lib/python3.7/site-packages/pandas/io/json/_json.py", line 1093, in _parse_no_numpy
loads(json, precise_float=self.precise_float), dtype=None
ValueError: Expected object or value
</code></pre>
<p>I checked so many posts but nothing worked... I don't know what the problem is.</p>
|
<p>The following piece of code seems to work on my machine. Your file is one JSON array, not one JSON object per line, so <code>lines=True</code> (which expects the JSON Lines format) has to be dropped:</p>
<pre><code>import pandas as pd
df_idf = pd.read_json('/home/lazzydevs/Data/datajs.json')
print("Schema:\n\n",df_idf.dtypes)
print("Number of questions,columns=",df_idf.shape)
</code></pre>
|
python|json|pandas
| 1
|
373,821
| 59,297,543
|
Why do I get the 'loop of ufunc does not support argument 0 of type int' error for numpy.exp?
|
<p>I have a dataframe and I'd like to perform exponential calculation on a subset of rows in a column. I've tried three versions of code and two of them worked. But I don't understand why one version gives me the error.</p>
<pre><code>import numpy as np
</code></pre>
<p>Version 1 (working)</p>
<pre><code>np.exp(test * 1.0)
</code></pre>
<p>Version 2 (working)</p>
<pre><code>np.exp(test.to_list())
</code></pre>
<p>Version 3 (Error)</p>
<pre><code>np.exp(test)
</code></pre>
<p>It shows the error below:</p>
<pre><code>AttributeError Traceback (most recent call last)
AttributeError: 'int' object has no attribute 'exp'
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
<ipython-input-161-9d5afc93942c> in <module>()
----> 1 np.exp(pd_feature.loc[(pd_feature[col] > 0) & (pd_feature[col] < 700), col])
TypeError: loop of ufunc does not support argument 0 of type int which has no callable exp method
</code></pre>
<p>The test data is generated by:</p>
<pre><code>test = df.loc[(df['a'] > 0) & (df['a'] < 650), 'a']
</code></pre>
<p>The data in test is just:</p>
<pre><code>0 600
2 600
42 600
43 600
47 600
60 600
67 600
Name: a, dtype: Int64
</code></pre>
<p>and its data type is:</p>
<pre><code><class 'pandas.core.series.Series'>
</code></pre>
<p>However, if I try to generate a dummy dataset, it works:</p>
<pre><code>data = {'a':[600, 600, 600, 600, 600, 600, 600], 'b': ['a', 'a', 'a', 'a', 'a', 'a', 'a']}
df = pd.DataFrame(data)
np.exp(df.loc[:,'a'])
</code></pre>
<p>Any idea of why I see this error? Thank you very much.</p>
|
<p>I guess your problem occurs because the values reach NumPy as plain Python <code>int</code> objects (note the nullable <code>Int64</code> dtype of your Series), so <code>np.exp</code> falls back to calling <code>.exp()</code> on each element, which fails.</p>
<p>Try forcing the data to <code>float</code> first:</p>
<pre><code>import numpy as np

your_array = your_array.astype(float)
output = np.exp(your_array)

# OR
def exp_test(x):
    return np.exp(x.astype(float))

output = exp_test(your_array)
</code></pre>
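<p>A small sketch that reproduces and fixes the problem, assuming the column really has pandas' nullable <code>Int64</code> dtype (as the <code>dtype: Int64</code> in your output suggests):</p>
<pre><code>import numpy as np
import pandas as pd

s = pd.Series([600, 600, 600], dtype="Int64")   # pandas' nullable integer dtype

# On the pandas version from your traceback, np.exp(s) raises the
# "loop of ufunc does not support argument 0 of type int" error, because the
# values are handed to NumPy as plain Python ints with no .exp() method.
print(np.exp(s.astype(float)))                  # converting to float first works
</code></pre>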
|
python|numpy|exponential
| 36
|
373,822
| 59,396,520
|
calculate distance of two cartesian coordinates
|
<p>I am still new to Python and have so far practiced it as an automation tool to get used to it.</p>
<p>Now I want to try mathematical calculations in Python.</p>
<p>I have tried to calculate the distance between two cartesian coordinates I extracted from some hand-practice data:</p>
<p>in temp2:</p>
<blockquote>
<p>-0.329637489 3.481200000 1.740200000</p>
<p>2.389059814 1.000230000 8.653210000</p>
</blockquote>
<p>N.B the numbers are in a <strong>text file</strong>, line 1 and 2.</p>
<p>My initial thought was to assign each number to a variable and then calculate with them. However, I am struggling to assign each number to a variable.</p>
<p>or</p>
<p>Is there a more effective way to calculate the distance between the coordinates?</p>
<p>If there is I would like to know both ways.</p>
<p>THANK YOU SO MUCH IN ADVANCE!</p>
|
<p>You can load the data directly in a numpy <code>array</code> using <code>np.loadtxt</code>:</p>
<pre><code>a= np.loadtxt('../filename.txt')
a
array([[-0.32963749, 3.4812 , 1.7402 ],
[ 2.38905981, 1.00023 , 8.65321 ]])
</code></pre>
<p>Then perform operations to compute distance:</p>
<pre><code>d = np.sqrt(np.sum((a[0]-a[1])**2))
</code></pre>
<p>Let's go through this step by step:</p>
<ul>
<li><code>a[0]</code> is <code>[-0.32963749, 3.4812, 1.7402]</code>, namely the 1st row</li>
<li><code>a[1]</code> is <code>[ 2.38905981, 1.00023, 8.65321]</code>, the 2nd row</li>
<li><code>a[0] - a[1]</code> is the elementwise difference: <code>[-2.7186973, 2.48097, -6.91301]</code></li>
<li><code>(a[0]-a[1])**2</code> is the elementwise difference squared <code>[7.39131503, 6.15521214, 47.78970726]</code></li>
<li>Finally <code>np.sqrt(np.sum())</code> sums all the components together, and then take square root, hence <code>d</code> is <code>7.831745298867902</code></li>
</ul>
<p>That said, numpy already has a built-in function to compute this distance once you load the array: </p>
<pre><code>d = np.linalg.norm(a[0]-a[1])
</code></pre>
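<p>If the file later holds more than two points and you want the distance between each consecutive pair of rows, the same idea vectorises in one call:</p>
<pre><code>d_all = np.linalg.norm(np.diff(a, axis=0), axis=1)   # one distance per adjacent pair of rows
print(d_all)                                         # array([7.8317453]) for the two rows above
</code></pre>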
|
python|numpy
| 1
|
373,823
| 59,399,939
|
how to compare the sum score of men to sum score of women to get the count of countries?
|
<p>Let's say this is my data frame:</p>
<pre><code>country Edition sports Athletes Medal Gender Score
Germany 1990 Aquatics HAJOS, Alfred gold M 3
Germany 1990 Aquatics HIRSCHMANN, Otto silver M 2
Germany 1990 Aquatics DRIVAS, Dimitrios gold W 3
Germany 1990 Aquatics DRIVAS, Dimitrios silver W 2
US 2008 Athletics MALOKINIS, Ioannis gold M 1
US 2008 Athletics HAJOS, Alfred silver M 2
US 2009 Athletics CHASAPIS, Spiridon gold W 3
France 2010 Athletics CHOROPHAS, Efstathios gold W 3
France 2010 Athletics CHOROPHAS, Efstathios gold M 3
France 2010 golf HAJOS, Alfred Bronze M 1
France 2011 golf ANDREOU, Joannis silver W 2
Spain 2011 golf BURKE, Thomas gold M 3
</code></pre>
<p>I am trying to find for how many countries the sum of the men's scores is equal to the sum of the women's scores.
I have tried the following:</p>
<p><code>sum_men = df[df['Gender']=='M'].groupby('country')['Score'].sum()</code> <br>
<code>sum_women = df[df['Gender']=='W'].groupby('country')['Score'].sum()</code></p>
<p>Now I don't know how to compare these two and get the number of countries for which the sum of the men's scores equals the sum of the women's scores.</p>
<p>Can anyone please help me with this?</p>
|
<p>You can do this:</p>
<pre><code>sum_men = df[df['Gender']=='M'].groupby ('Country' )['Score'].sum().reset_index() #watch the reset_index()
sum_women = df[df['Gender']=='W'].groupby ('Country' )['Score'].sum().reset_index()
new_df = sum_men.merge(sum_women, on="Country")
new_df['diff'] = new_df['Score_x'] - new_df['Score_y']
new_df
Country Score_x Score_y diff
0 France 4 5 -1
1 Germany 5 5 0
2 US 3 3 0
print(new_df[new_df['diff']==0])
Country Score_x Score_y diff
1 Germany 5 5 0
2 US 3 3 0
</code></pre>
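<p>And since the question asks for <em>how many</em> countries, that is just the number of rows left after the filter:</p>
<pre><code>n_equal = (new_df['diff'] == 0).sum()
print(n_equal)   # 2 for the sample data (Germany and US)
</code></pre>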
|
python|pandas
| 1
|
373,824
| 59,427,330
|
How to save full list from pandas function into variable
|
<p>I was having a problem when I created a function like this: </p>
<pre><code> def printlist(x):
for column in x:
global y
print(list(x[column]))
</code></pre>
<p>I get a completely full list, as such: </p>
<blockquote>
<p>['MATCH', 'MATCH', 'MATCH'] ['MATCH', 'MATCH', 'MATCH'] ['BUY', 'BUY',
'BUY']</p>
</blockquote>
<p>However, when I try to save the list as a variable, it prints only the first list: </p>
<pre><code>def printlist(x):
for column in x:
global y
y = list(x[column])
return y
</code></pre>
<blockquote>
<p>['MATCH', 'MATCH', 'MATCH']</p>
</blockquote>
<p>Would anyone know how I would be able to save the full list as a variable? Would greatly appreciate it!!</p>
|
<p>Your version keeps only a single column because <code>y</code> is reassigned on every pass through the loop (and a <code>return</code> inside the loop would stop after the first column). Collect the columns in a list instead:</p>
<pre><code>def printlist(df):
    column_lists = []
    for column in df:
        column_lists.append(list(df[column]))
    return column_lists

all_columns = printlist(df)   # avoids shadowing the built-in `list`
</code></pre>
|
python|pandas|list|function
| 0
|
373,825
| 59,178,625
|
Error message when reading multiple text elements using selenium
|
<p>I am clicking on a certain link and would like to read all the text in a given class and return that as a row in pandas dataframe</p>
<p>This is the code I have</p>
<pre><code>page_link = 'http://beta.compuboxdata.com/fighter'
wait = WebDriverWait(cdriver,10)
wait.until(EC.visibility_of_element_located((By.ID,'s2id_autogen1'))).send_keys('Deontay Wilder')
wait.until(EC.visibility_of_element_located((By.CLASS_NAME, 'select2-result-label'))).click()
while True:
wait.until(EC.visibility_of_element_located((By.CLASS_NAME, 'view_more'))).click()
try:
element = wait.until(EC.visibility_of_element_located((By.CLASS_NAME, 'view_more')))
element.click()
except TimeoutException:
break
fights['fighters'] = wait.until(EC.find_element_by_class_name((By.CLASS_NAME,'col-xs-4 col-sm-4 col-md-4 col-lg-4'))).text
</code></pre>
<p>This, however returns the error message:</p>
<pre><code>TimeoutException: Message:
</code></pre>
<p>I have also tried using xpath but still getting the same error message:</p>
<pre><code>fights['fighters'] = wait.until(EC.find_element_by_xpath((By.CLASS_NAME,'//div[@class="col-xs-4 col-sm-4 col-md-4 col-lg-4"]/div'))).text
</code></pre>
<p>I specifically want to get this data:
<a href="https://i.stack.imgur.com/AYOD7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AYOD7.png" alt="enter image description here"></a></p>
<p>As requested this is the full traceback</p>
<pre><code><ipython-input-94-e2b36e136c00> in <module>
14 wait.until(EC.visibility_of_element_located((By.CLASS_NAME, 'select2-result-label'))).click()
15 while True:
---> 16 wait.until(EC.visibility_of_element_located((By.CLASS_NAME, 'view_more'))).click()
17 try:
18 element = wait.until(EC.visibility_of_element_located((By.CLASS_NAME, 'view_more')))
~\Anaconda3\lib\site-packages\selenium\webdriver\support\wait.py in until(self, method, message)
78 if time.time() > end_time:
79 break
---> 80 raise TimeoutException(message, screen, stacktrace)
81
82 def until_not(self, method, message=''):
TimeoutException: Message:
</code></pre>
<p>This is the source HTML
<a href="https://i.stack.imgur.com/JwZjO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JwZjO.png" alt="source"></a></p>
<p>Update</p>
<p>I tried adding a for loop to read multiple text elements, but still getting the same error</p>
<pre><code>elements = wait.until(EC.find_element_by_class_name((By.CLASS_NAME,'col-xs-4 col-sm-4 col-md-4 col-lg-4')))
for e in elements:
print(e.text)
</code></pre>
|
<p>Here is a solution. The Timeout exception will be handled here.</p>
<pre><code>page_link = 'http://beta.compuboxdata.com/fighter'
driver.get(page_link)
wait = WebDriverWait(driver, 10)
wait.until(EC.visibility_of_element_located((By.ID, 's2id_autogen1'))).send_keys('Deontay Wilder')
wait.until(EC.visibility_of_element_located((By.CLASS_NAME, 'select2-result-label'))).click()
# Click on "View More" for as long as it keeps appearing
while True:
try:
element = wait.until(EC.visibility_of_element_located((By.CLASS_NAME, 'view_more')))
print("Clicking on View More")
element.click()
except TimeoutException:
break
fighters = driver.find_elements_by_xpath("//div[@class='row row-bottom-margin-5']/div[2]")
#fighters = driver.find_elements_by_class_name("col-xs-4 col-sm-4 col-md-4 col-lg-4")
print(len(fighters))
# Print all the fighter names
for fighter in fighters:
print(fighter.text)
</code></pre>
|
python|pandas|selenium|selenium-webdriver
| 0
|
373,826
| 59,078,267
|
How to count occurances of multiple items in numpy?
|
<p>Assume the folowing numpy array:</p>
<pre><code>[1,2,3,1,2,3,1,2,3,1,2,2]
</code></pre>
<p>I want to <code>count([1,2])</code> to count all occurrences of 1 and 2, in a single run, yielding something like</p>
<pre><code>[4, 5]
</code></pre>
<p>corresponding to a <code>[1, 2]</code> input.</p>
<p>Is it supported in numpy?</p>
|
<pre><code># Setting your input to an array
array = np.array([1,2,3,1,2,3,1,2,3,1,2,2])
# Find the unique elements and get their counts
unique, counts = np.unique(array, return_counts=True)
# Setting the numbers to get counts for as a set
search = {1, 2}
# Gets the counts for the elements in search
search_counts = [counts[i] for i, x in enumerate(unique) if x in search]
</code></pre>
<p>This will output [4, 5]</p>
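<p>One small caveat: the list comprehension silently skips any searched value that never occurs in the array. If you want an explicit 0 for those, a dict lookup built from the same <code>unique</code>/<code>counts</code> output handles it:</p>
<pre><code>lookup = dict(zip(unique, counts))
search_counts = [lookup.get(v, 0) for v in [1, 2]]   # -> [4, 5]
</code></pre>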
|
python|numpy
| 1
|
373,827
| 59,276,655
|
How to sum an ND array in python based on like entries?
|
<p>Let's say I have an ND array in python represented by the following scheme:</p>
<pre><code>["Event ID", "Event Location", "Event Cost"]
data = \
[[1, 0, 500],
 [1, 0, 250],
 [1, 1, 300],
 [2, 0, 750],
 [2, 1, 400],
 [2, 1, 500]]
</code></pre>
<p>How can I collapse this array to sum up the cost for entries with the same event ID that happened in the same event location? This would give me the following array at the end:</p>
<pre><code>[[1, 0, 750],
 [1, 1, 300],
 [2, 0, 750],
 [2, 1, 900]]
</code></pre>
|
<p>This is a classic use-case for <a href="https://docs.python.org/3/library/itertools.html#itertools.groupby" rel="nofollow noreferrer">itertools.groupby</a>. Note that it only groups <em>consecutive</em> rows with the same key, so sort the data by (ID, location) first if it is not already ordered that way (your sample is):</p>
<pre class="lang-py prettyprint-override"><code>import itertools
result = [
[i, loc, sum(cost for _, _, cost in costs)]
for (i, loc), costs in itertools.groupby(data, key=lambda t: (t[0], t[1]))
]
</code></pre>
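<p>If pandas is available, the same collapse can be written with a pandas <code>groupby</code>, which does not care about row order (the column names below are only illustrative):</p>
<pre><code>import pandas as pd

df = pd.DataFrame(data, columns=["event_id", "location", "cost"])
result = (df.groupby(["event_id", "location"], as_index=False)["cost"]
            .sum()
            .values.tolist())
# [[1, 0, 750], [1, 1, 300], [2, 0, 750], [2, 1, 900]]
</code></pre>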
|
python|numpy
| 1
|
373,828
| 59,233,282
|
Jupyter Pandas - dropping items which have average over a threshold
|
<p>I have a data frame with items and their prices, something like this:
<code>
╔══════╦═════╦═══════╗
║ Item ║ Day ║ Price ║
╠══════╬═════╬═══════╣
║ A ║ 1 ║ 10 ║
║ B ║ 1 ║ 20 ║
║ C ║ 1 ║ 30 ║
║ D ║ 1 ║ 40 ║
║ A ║ 2 ║ 100 ║
║ B ║ 2 ║ 20 ║
║ C ║ 2 ║ 30 ║
║ D ║ 2 ║ 40 ║
║ A ║ 3 ║ 500 ║
║ B ║ 3 ║ 25 ║
║ C ║ 3 ║ 35 ║
║ D ║ 3 ║ 1000 ║
╚══════╩═════╩═══════╝</code></p>
<p>I want to exclude all rows from this df where the item has an average price over 200. So filtered df should look like this:
<code>
╔══════╦═════╦═══════╗
║ Item ║ Day ║ Price ║
╠══════╬═════╬═══════╣
║ B ║ 1 ║ 20 ║
║ C ║ 1 ║ 30 ║
║ B ║ 2 ║ 20 ║
║ C ║ 2 ║ 30 ║
║ B ║ 3 ║ 25 ║
║ C ║ 3 ║ 35 ║
╚══════╩═════╩═══════╝</code></p>
<p>I'm new to python and pandas, but as a first step I was thinking of something like this to get a new df of average prices: avg_prices_df = df.groupby('ItemID').Price.mean().reset_index(), and then I'm not sure how to proceed from there. I'm not even sure that first step is correct.</p>
<p>To further complicate the matter, I am using vaex to read the data in hdf5 form as I have over 400 million rows.</p>
<p>Many thanks in advance for any advice.</p>
<p>EDIT: So I got the following code working, though I am sure it is not optimised..</p>
<pre><code># create dataframe of ItemIDs and their average prices
df_item_avg_price = df.groupby(df.ItemID, agg=[vaex.agg.count('ItemID'), vaex.agg.mean('Price')])

# filter this new dataframe by average price threshold
df_item_avg_price = (df_item_avg_price[df_item_avg_price["P_r_i_c_e_mean"] <= 50000000])

# create list of ItemIDs which have average price under the threshold
items_in_price_range = df_item_avg_price['ItemID'].tolist()

# filter the original dataframe to include rows only with the items in price range
filtered_df = df[df.ItemID.isin(items_in_price_range)]
</code></pre>
<p>Any better way to do this?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> for <code>mean</code>s per groups with same size like original, so possible filter out by <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> all groups with means less like <code>200</code>:</p>
<pre><code>avg_prices_df = df[df.groupby('Item')['Price'].transform('mean') < 200]
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.filter.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.filter</code></a>:</p>
<pre><code>avg_prices_df = df.groupby('Item').filter(lambda x: x['Price'].mean() < 200)
</code></pre>
<hr>
<pre><code>print (avg_prices_df)
Item Day Price
1 B 1 20
2 C 1 30
5 B 2 20
6 C 2 30
9 B 3 25
10 C 3 35
print (df.groupby('Item')['Price'].transform('mean'))
0 203.333333
1 21.666667
2 31.666667
3 360.000000
4 203.333333
5 21.666667
6 31.666667
7 360.000000
8 203.333333
9 21.666667
10 31.666667
11 360.000000
Name: Price, dtype: float64
</code></pre>
<p>Solution for vaex:</p>
<pre><code>df_item_avg_price = df.groupby(df.ItemID).agg({'Price' : 'mean'})
df_item_avg_price = (df_item_avg_price[df_item_avg_price["Price"] <= 200])
df = df_item_avg_price.drop(['Price']).join(df, on='ItemID')
print (df)
ItemID Day Price
0 B 1 20
1 B 2 20
2 B 3 25
3 C 1 30
4 C 2 30
5 C 3 35
</code></pre>
|
python|pandas|dataframe|jupyter|vaex
| 1
|
373,829
| 59,084,208
|
parse datetime resulting in ValueError
|
<p>I tried to parse a timestamp of a CSV file (first column named "time"). The timestamp has the format: <code>01.10.2016 00:10:00</code> (<code>dd.mm.yyyy HH:MM:SS</code>)</p>
<pre><code> timestamp_parser = lambda x: pd.datetime.strptime(x, "%d.%m.%Y %H:%M:%S")
df_pi_data = pd.read_csv( "pi_daten.csv", usecols = (0,1), sep =';', thousands='.', decimal=',', names = ['time','temperature'], parse_dates=['time'], date_parser = timestamp_parser)
</code></pre>
<p>The following error occurs:</p>
<pre><code>ValueError: time data '\xef\xbb\xbftime' does not match format '%d.%m.%Y %H:%M:%S'
</code></pre>
<p>@kantal:</p>
<pre><code> time;temperature;
01.10.2016 00:00; 23,13854599;
01.10.2016 00:10; 23,24945831;
01.10.2016 00:20; 23,16853714;
</code></pre>
|
<pre><code>time;temperature
01.10.2016 00:00; 23,13854599
01.10.2016 00:10; 23,24945831
01.10.2016 00:20; 23,16853714
</code></pre>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
timestamp_parser = lambda x: pd.datetime.strptime(x, "%d.%m.%Y %H:%M")
df = pd.read_csv("test.txt", sep=";", decimal=',', \
parse_dates=['time'], date_parser = timestamp_parser)
</code></pre>
<p>Using this data and this code, it works:</p>
<pre><code> time temperature
0 2016-10-01 00:00:00 23.138546
1 2016-10-01 00:10:00 23.249458
2 2016-10-01 00:20:00 23.168537
</code></pre>
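<p>Two details explain the original error: the <code>\xef\xbb\xbf</code> in the message is a UTF-8 byte-order mark at the start of the file, and because <code>names=</code> was passed without <code>header=0</code>, the header line <code>time;temperature</code> itself was handed to the date parser. The timestamps also have no seconds, so the format must be <code>%H:%M</code>. A sketch that keeps your original call mostly intact (file path and column positions assumed as in your question):</p>
<pre><code>import pandas as pd

timestamp_parser = lambda x: pd.to_datetime(x, format="%d.%m.%Y %H:%M")

df_pi_data = pd.read_csv(
    "pi_daten.csv",
    usecols=(0, 1),
    sep=';',
    thousands='.',
    decimal=',',
    header=0,                      # the first line is the header, not data
    names=['time', 'temperature'],
    parse_dates=['time'],
    date_parser=timestamp_parser,
    encoding='utf-8-sig',          # strips the BOM (\xef\xbb\xbf)
)
</code></pre>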
|
python|pandas|parsing
| 0
|
373,830
| 59,347,111
|
Pytorch RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select
|
<p>I am training a model that takes tokenized strings which are then passed through an embedding layer and an LSTM thereafter. However, there seems to be an error in the input, as it does not pass through the embedding layer.</p>
<pre class="lang-py prettyprint-override"><code>class DrugModel(nn.Module):
def __init__(self, input_dim, output_dim, hidden_dim, drug_embed_dim,
lstm_layer, lstm_dropout, bi_lstm, linear_dropout, char_vocab_size,
char_embed_dim, char_dropout, dist_fn, learning_rate,
binary, is_mlp, weight_decay, is_graph, g_layer,
g_hidden_dim, g_out_dim, g_dropout):
super(DrugModel, self).__init__()
# Save model configs
self.drug_embed_dim = drug_embed_dim
self.lstm_layer = lstm_layer
self.char_dropout = char_dropout
self.dist_fn = dist_fn
self.binary = binary
self.is_mlp = is_mlp
self.is_graph = is_graph
self.g_layer = g_layer
self.g_dropout = g_dropout
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# For one-hot encoded SMILES
if not is_mlp:
self.char_embed = nn.Embedding(char_vocab_size, char_embed_dim,
padding_idx=0)
self.lstm = nn.LSTM(char_embed_dim, drug_embed_dim, lstm_layer,
bidirectional=False,
batch_first=True, dropout=lstm_dropout)
# Distance function
self.dist_fc = nn.Linear(drug_embed_dim, 1)
if binary:
# Binary Cross Entropy
self.criterion = lambda x, y: y*torch.log(x) + (1-y)*torch.log(1-x)
def init_lstm_h(self, batch_size):
return (Variable(torch.zeros(
self.lstm_layer*1, batch_size, self.drug_embed_dim)).cuda(),
Variable(torch.zeros(
self.lstm_layer*1, batch_size, self.drug_embed_dim)).cuda())
# Set Siamese network as basic LSTM
def siamese_sequence(self, inputs, length):
# Character embedding
inputs = inputs.long()
inputs = inputs.cuda()
self.char_embed = self.char_embed(inputs.to(self.device))
c_embed = self.char_embed(inputs)
# c_embed = F.dropout(c_embed, self.char_dropout)
maxlen = inputs.size(1)
if not self.training:
# Sort c_embed
_, sort_idx = torch.sort(length, dim=0, descending=True)
_, unsort_idx = torch.sort(sort_idx, dim=0)
maxlen = torch.max(length)
# Pack padded sequence
c_embed = c_embed.index_select(0, Variable(sort_idx).cuda())
sorted_len = length.index_select(0, sort_idx).tolist()
c_packed = pack_padded_sequence(c_embed, sorted_len, batch_first=True)
else:
c_packed = c_embed
# Run LSTM
init_lstm_h = self.init_lstm_h(inputs.size(0))
lstm_out, states = self.lstm(c_packed, init_lstm_h)
hidden = torch.transpose(states[0], 0, 1).contiguous().view(
-1, 1 * self.drug_embed_dim)
if not self.training:
# Unsort hidden states
outputs = hidden.index_select(0, Variable(unsort_idx).cuda())
else:
outputs = hidden
return outputs
def forward(self, key1, key2, targets, key1_len, key2_len, status, predict = False):
if not self.is_mlp:
output1 = self.siamese_sequence(key1, key1_len)
output2 = self.siamese_sequence(key2, key2_len)
</code></pre>
<p>After instantiating the class I get the following error when passing the input through the embedding layer:</p>
<pre><code><ipython-input-128-432fcc7a1e39> in forward(self, key1, key2, targets, key1_len, key2_len, status, predict)
129 def forward(self, key1, key2, targets, key1_len, key2_len, status, predict = False):
130 if not self.is_mlp:
--> 131 output1 = self.siamese_sequence(key1, key1_len)
132 output2 = self.siamese_sequence(key2, key2_len)
133 set_trace()
<ipython-input-128-432fcc7a1e39> in siamese_sequence(self, inputs, length)
74 inputs = inputs.cuda()
75
---> 76 self.char_embed = self.char_embed(inputs.to(self.device))
77 set_trace()
78 c_embed = self.char_embed(inputs)
~/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/miniconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~/miniconda3/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1482 # remove once script supports set_grad_enabled
1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1485
1486
RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select
</code></pre>
<p>despite the fact that the input (e.g. key1) has already been passed to cuda and has been transformed into long format:</p>
<pre><code>tensor([[25, 33, 30, ..., 0, 0, 0],
[25, 7, 7, ..., 0, 0, 0],
[25, 7, 30, ..., 0, 0, 0],
...,
[25, 7, 33, ..., 0, 0, 0],
[25, 33, 41, ..., 0, 0, 0],
[25, 33, 41, ..., 0, 0, 0]], device='cuda:0')
</code></pre>
|
<p>Setting <code>model.device</code> to cuda does not change your inner modules' devices, so <code>self.lstm</code>, <code>self.char_embed</code>, and <code>self.dist_fc</code> are all still on the CPU. The correct way of doing it is <code>DrugModel().to(device)</code>.</p>
<p>In general, it's better not to feed a <code>device</code> to your <code>model</code> and to write it in a device-agnostic way instead. To make your <code>init_lstm_h</code> function device-agnostic you can use something like <a href="https://discuss.pytorch.org/t/which-device-is-model-tensor-stored-on/4908/15" rel="noreferrer">this</a>.</p>
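<p>A minimal sketch of both points, reusing the names from the question (<code>model_kwargs</code> below is only a placeholder for the constructor arguments):</p>
<pre><code>import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# `model_kwargs` stands in for the constructor arguments from the question
model = DrugModel(**model_kwargs).to(device)   # moves char_embed, lstm and dist_fc too

# Device-agnostic replacement for init_lstm_h (no hard-coded .cuda()):
def init_lstm_h(self, batch_size):
    device = next(self.parameters()).device    # wherever the model's weights live
    shape = (self.lstm_layer * 1, batch_size, self.drug_embed_dim)
    return (torch.zeros(shape, device=device),
            torch.zeros(shape, device=device))
</code></pre>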
|
runtime-error|gpu|pytorch|embedding
| 6
|
373,831
| 59,099,762
|
Python : Reverse Geocoding to get city name and state name in pandas
|
<p>I have a large dataset with latitudes and longitudes, and I want to map the city and state onto them. The approach I was using is this:</p>
<pre><code>import pandas as pd
import reverse_geocoder as rg
import pprint
df = pd.read_csv("D:\data.csv")
</code></pre>
<pre><code>def reverseGeocode(coordinates):
result = rg.search(coordinates)
# result is a list containing ordered dictionary.
pprint.pprint(result)
# Driver function
if __name__=="__main__":
# Coordinates tuple.Can contain more than one pair.
for i in range(2):
coordinates =(df['latitude'][i],df['longitude'][i])
reverseGeocode(coordinates)
</code></pre>
<p>Output:</p>
<pre><code>[OrderedDict([('lat', '13.322'),
('lon', '75.774'),
('name', 'Chikmagalur'),
('admin1', 'Karnataka'),
('admin2', 'Chikmagalur'),
('cc', 'IN')])]
[OrderedDict([('lat', '18.083'),
('lon', '73.416'),
('name', 'Mahad'),
('admin1', 'Maharashtra'),
('admin2', 'Raigarh'),
('cc', 'IN')])]
</code></pre>
<p>What I want to do is this:</p>
<pre><code>
id latitude longitude name admin2 admin1
0 23 13.28637 75.78518
1 29 17.90387 73.43351
2 34 15.72967 74.49182
3 48 20.83830 73.26416
4 54 21.93931 75.13398
5 71 20.92673 75.32402
6 78 19.26049 73.38982
7 108 17.90468 73.43486
8 109 13.28637 75.78518
9 113 15.72934 74.49189
10 126 20.83830 73.26417
11 131 21.93930 75.13399
12 146 20.92672 75.32402
13 152 19.26049 73.38982
14 171 17.90657 73.43382
</code></pre>
<p>map name, admin1 and admin2 into my dataframe (df) alongside ["latitude","longitude"].</p>
|
<p>Although other solutions may be valid, you can find a more elegant one:</p>
<pre><code>import pandas as pd
import reverse_geocoder as rg
import pprint
df = pd.read_csv("data.csv")
def reverseGeocode(coordinates):
result = rg.search(coordinates)
return (result)
if __name__=="__main__":
# Coordinates tuple.Can contain more than one pair.
coordinates =list(zip(df['latitude'],df['longitude'])) # generates pair of (lat,long)
data = reverseGeocode(coordinates)
df['name'] = [i['name'] for i in data]
df['admin1'] = [i['admin1'] for i in data]
df['admin2'] = [i['admin2'] for i in data]
df.to_csv("data_appended.csv") # write to csv # result will be saved to data_appended.csv
</code></pre>
<p><a href="https://i.stack.imgur.com/bhAKG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bhAKG.png" alt="Output"></a></p>
|
python|pandas|mkreversegeocoder
| 1
|
373,832
| 59,085,947
|
Consume REStful API data with python => from JSON to pandas DataFrame
|
<pre><code>from flask import Flask, request
from flask_restful import Resource, Api, reqparse
#from flask_jwt import JWT, jwt_required
from pymongo import MongoClient
import pandas as pd
app = Flask(__name__)
app.secret_key = 'xxx'
api = Api(app)
class Data(Resource):
def get(self):
client = MongoClient("localhost", 27017)
db = client.test
collection = db.my_collection.find({})
df = pd.DataFrame(list(collection))
        df = df.iloc[1:5, 1:len(df.columns)]  ### drop the ID variable, otherwise it leads to an OverflowError
return df.to_json(orient="columns")
api.add_resource(Data, "/data/") # http://127.0.0.1:5000/data/
app.run(port = 5000, debug=True)
</code></pre>
<p>Hello everybody. This is my little API which runs on localhost. I just send a subset of the titanic dataset.</p>
<p>When I try to access the data in the browser I receive the following:</p>
<pre><code>"{\"PassengerId\":{\"1\":2,\"2\":3,\"3\":4,\"4\":5},\"Survived\":{\"1\":1,\"2\":1,\"3\":1,\"4\":0},\"Pclass\":{\"1\":1,\"2\":3,\"3\":1,\"4\":3},\"Name\":{\"1\":\"Cumings, Mrs. John Bradley (Florence Briggs Thayer)\",\"2\":\"Heikkinen, Miss. Laina\",\"3\":\"Futrelle, Mrs. Jacques Heath (Lily May Peel)\",\"4\":\"Allen, Mr. William Henry\"},\"Sex\":{\"1\":\"female\",\"2\":\"female\",\"3\":\"female\",\"4\":\"male\"},\"Age\":{\"1\":38.0,\"2\":26.0,\"3\":35.0,\"4\":35.0},\"SibSp\":{\"1\":1,\"2\":0,\"3\":1,\"4\":0},\"Parch\":{\"1\":0,\"2\":0,\"3\":0,\"4\":0},\"Ticket\":{\"1\":\"PC 17599\",\"2\":\"STON\\/O2. 3101282\",\"3\":\"113803\",\"4\":\"373450\"},\"Fare\":{\"1\":71.2833,\"2\":7.925,\"3\":53.1,\"4\":8.05},\"Cabin\":{\"1\":\"C85\",\"2\":null,\"3\":\"C123\",\"4\":null},\"Embarked\":{\"1\":\"C\",\"2\":\"S\",\"3\":\"S\",\"4\":\"S\"}}"
</code></pre>
<p>Now I try to consume the API and read the data into a dataframe again.
The following code results in a strange looking text Object:</p>
<pre><code>r = requests.get('http://localhost:5000/data/')
data = r.text
Out[90]: '"{\\"PassengerId\\":{\\"1\\":2,\\"2\\":3,\\"3\\":4,\\"4\\":5},\\"Survived\\":{\\"1\\":1,\\"2\\":1,\\"3\\":1,\\"4\\":0},\\"Pclass\\":{\\"1\\":1,\\"2\\":3,\\"3\\":1,\\"4\\":3},\\"Name\\":{\\"1\\":\\"Cumings, Mrs. John Bradley (Florence Briggs Thayer)\\",\\"2\\":\\"Heikkinen, Miss. Laina\\",\\"3\\":\\"Futrelle, Mrs. Jacques Heath (Lily May Peel)\\",\\"4\\":\\"Allen, Mr. William Henry\\"},\\"Sex\\":{\\"1\\":\\"female\\",\\"2\\":\\"female\\",\\"3\\":\\"female\\",\\"4\\":\\"male\\"},\\"Age\\":{\\"1\\":38.0,\\"2\\":26.0,\\"3\\":35.0,\\"4\\":35.0},\\"SibSp\\":{\\"1\\":1,\\"2\\":0,\\"3\\":1,\\"4\\":0},\\"Parch\\":{\\"1\\":0,\\"2\\":0,\\"3\\":0,\\"4\\":0},\\"Ticket\\":{\\"1\\":\\"PC 17599\\",\\"2\\":\\"STON\\\\/O2. 3101282\\",\\"3\\":\\"113803\\",\\"4\\":\\"373450\\"},\\"Fare\\":{\\"1\\":71.2833,\\"2\\":7.925,\\"3\\":53.1,\\"4\\":8.05},\\"Cabin\\":{\\"1\\":\\"C85\\",\\"2\\":null,\\"3\\":\\"C123\\",\\"4\\":null},\\"Embarked\\":{\\"1\\":\\"C\\",\\"2\\":\\"S\\",\\"3\\":\\"S\\",\\"4\\":\\"S\\"}}"\n'
</code></pre>
<p>which leads to an error when I run the following code:</p>
<pre><code>df = pd.read_json(data, orient="columns")
ValueError: DataFrame constructor not properly called!
</code></pre>
<p>This is the first time I have created an API... can you tell me at what step the error occurred and how to fix it? Thanks.</p>
|
<p>The response is double-encoded: <code>df.to_json()</code> already returns a JSON string, and Flask-RESTful serializes that string a second time, so you get JSON inside JSON. Decode it twice (or parse the inner string), for example:</p>
<pre><code>data = r.text
str_data = json.loads(data)
json_data = json.loads(str_data)
pd.DataFrame(json_data)
</code></pre>
<p>or</p>
<pre><code>data = r.text
json_data = json.loads(data)
pd.read_json(json_data)
</code></pre>
<p><a href="https://i.stack.imgur.com/zbUUO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zbUUO.png" alt="enter image description here"></a></p>
|
python|pandas|flask-restful
| 2
|
373,833
| 59,121,560
|
Error in downloading mnist data unknown url type: HTTPS
|
<p>when I try to load the mnist dataset using the code </p>
<pre><code>import tensorflow as tf
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
</code></pre>
<p>it gives me an error message saying:</p>
<pre><code>Exception: URL fetch failure on https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz: None -- unknown url type: https
</code></pre>
<p>I know that the issue is with the line <code>(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()</code> but I couldn't find anything online on how to solve it. I am using Windows 10 with Jupyter notebooks and Anaconda.</p>
|
<p>In the <code>mnist.py</code> file, change the </p>
<pre><code>origin_folder = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/'
</code></pre>
<p>to</p>
<pre><code>origin_folder = 'http://storage.googleapis.com/tensorflow/tf-keras-datasets/'
</code></pre>
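<p>If editing the library file is not an option, a sketch of an offline alternative is to download <code>mnist.npz</code> manually (e.g. in a browser) and load it directly with numpy; the key names below are the ones used in that file:</p>
<pre><code>import numpy as np

# assumes mnist.npz was downloaded manually into the working directory
with np.load('mnist.npz') as f:
    X_train, y_train = f['x_train'], f['y_train']
    X_test, y_test = f['x_test'], f['y_test']
</code></pre>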
|
python|tensorflow|keras|https|mnist
| 0
|
373,834
| 59,082,202
|
how to export data from python pandas with user defined message
|
<p>From the pandas data frame, I got the list of customer info and want to export it to an excel printing all the customer info like: </p>
<p>"This customer James"</p>
<p>"This customer Mark"</p>
<p>''''''''</p>
<p>''''''''</p>
<p>but I am getting only 1 customer in Excel instead of 10.</p>
<p>Below is the code i tried:</p>
<pre><code>info = df['TABLE'].drop_duplicates().values.tolist()

for i in info:
    excel_print = "This customer" + ' ' + i
    print(excel_print)    ## I will get all 10 customers
    for j in excel_print:
        worksheet.write(1, 1, excel_print)
</code></pre>
|
<p>If you want each customer in a separate cell then you have to change the first (row) or second (column) argument in <code>write()</code> on every iteration.</p>
<p>I use <code>enumerate()</code> to get a different row index for each name:</p>
<pre><code>import xlsxwriter
workbook = xlsxwriter.Workbook('filename.xlsx')
worksheet1 = workbook.add_worksheet()
info = ['Adam','James', 'Mark']
for x, name in enumerate(info):
worksheet1.write(x, 0, "This customer " + name)
workbook.close()
</code></pre>
<p>If you want all in one cell then you have to create string with all text and then put it</p>
<pre><code>import xlsxwriter
workbook = xlsxwriter.Workbook('filename.xlsx')
worksheet1 = workbook.add_worksheet()
info = ['Adam','James', 'Mark']
lines = []
for name in info:
lines.append("This customer " + name)
text = '\n'.join(lines)
worksheet1.write(0, 0, text)
workbook.close()
</code></pre>
|
python|pandas
| 2
|
373,835
| 59,366,048
|
Is there a way to write a custom BCE loss in pytorch?
|
<p>I am writing a custom BCE in PyTorch, but it returns -inf or nan in most cases, which is due to the log function.</p>
<pre><code>bce_loss=y_true*torch.log2(y_pred) +(one_torch-y_true)*torch.log2(one_torch-y_pred)
</code></pre>
<p>Is there a way to rewrite this? Note y_pred is a sigmoid output which is between 0 and 1.</p>
|
<p>You can clamp the predictions to keep both log terms finite.</p>
<p><code>y_pred = torch.clamp(y_pred, 1e-7, 1 - 1e-7)</code></p>
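<p>A minimal sketch of the full loss with the clamp applied (using the natural log and the usual negative sign and mean reduction):</p>
<pre><code>import torch

def custom_bce(y_pred, y_true, eps=1e-7):
    # keep predictions strictly inside (0, 1) so both log terms stay finite
    y_pred = torch.clamp(y_pred, eps, 1 - eps)
    loss = y_true * torch.log(y_pred) + (1 - y_true) * torch.log(1 - y_pred)
    return -loss.mean()
</code></pre>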
|
python-3.x|machine-learning|neural-network|computer-vision|pytorch
| 1
|
373,836
| 59,312,917
|
Conditional repetitive cumulative sums on python
|
<p>I need to write Python code that computes cumulative sums along an axis of an array, but only over the last N elements (for instance 200). The idea is that I want to add up the error deviation over the last 200 time steps of an array of 10000 steps, treating the first 200 time steps as initial conditions of zero. The whole computation represents the <strong>integration</strong> with a time step of 1, but it considers only the last 200 time steps.</p>
<p>In other words, at every position I want the cumulative sum of the last 200 elements, saved into a new array.</p>
<p>An example on a small scale for illustration</p>
<p>x=[0,1,2,3,4,5,6,7,8,9]</p>
<p>consider summation every 2 counts and not 200</p>
<p>Result</p>
<p>y=[0,0,3,5,7,9,11,13,15,17]</p>
<p>As you can see, nothing is counted for the first 2 elements because they are initialized with zero; only after 2 counts have passed do the sums start, so the result has the same 10 elements as the given array.</p>
<p>Can anybody help me with doing this in Python, if possible with numpy?</p>
|
<p>Why not use the cumulative sum to compute a sum over a running window:</p>
<pre><code>cx=x.cumsum()
y=cx[step:] - cx[:-step]
</code></pre>
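<p>A small sketch including the zero "initial conditions" for the first <code>step</code> entries, reproducing the example output:</p>
<pre><code>import numpy as np

x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
step = 2                      # 200 in the real case

cx = x.cumsum()
y = np.zeros_like(x)
y[step:] = cx[step:] - cx[:-step]
print(y)                      # [ 0  0  3  5  7  9 11 13 15 17]
</code></pre>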
|
python|numpy|anaconda
| 1
|
373,837
| 59,168,803
|
ModuleNotFoundError: No module named 'utils.datasets'
|
<p>I am using Python 3.6.8 on Windows 10<br>
I installed <strong>tensorflow, keras, and utils</strong> using <strong>pip</strong>.</p>
<p><code>pip install tensorflow</code> and it installs the version <strong>2.0.0</strong><br>
<code>pip install keras</code> and it installs the version <strong>2.3.1</strong><br>
<code>pip install utils</code> but it does not show what version I have installed.</p>
<p>This is my header:</p>
<pre><code>from keras.preprocessing import image
from PIL import Image
from keras.models import model_from_json, load_model
import numpy as np
import cv2
from datetime import datetime
import os
import random
import string
from utils.datasets import get_labels
from utils.inference import apply_offsets
from utils.inference import load_detection_model
from utils.preprocessor import preprocess_input
</code></pre>
<p>This is my error:</p>
<blockquote>
<p>from utils.datasets import get_labels<br>
ModuleNotFoundError: No module named 'utils.datasets'</p>
</blockquote>
<p>Why am I getting this error? And how do I fix it? BTW, the code was written by a previous programmer and I need to modify it, but I can't even run it. I'm not so good in Python; I'm just getting started with it.</p>
<p>All my Google search results are already purple (visited), but I can't seem to find any solutions.</p>
<p><strong>EDIT</strong></p>
<p>The suggested answer (<a href="https://stackoverflow.com/questions/42319101/importerror-no-module-named-datasets">ImportError: No module named datasets</a>) does not satisfy my needs. I am having trouble on utils module. because when I comment out the line <code>from utils.datasets import get_labels</code></p>
<p>The error is on the next line:</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'utils.inference'</p>
</blockquote>
|
<p>The <code>utils</code> module that the code you provided wants to import is part of the <a href="https://github.com/oarriaga/face_classification/tree/master/src" rel="nofollow noreferrer">oarriaga/face_classification</a> project.</p>
<p>The pip-installed <code>utils</code> module is a <strong>quite different</strong> package, so you should not have installed it via <code>pip</code>. Your code tries to import modules from that package, but it obviously has no such modules. That is why you get the error messages.</p>
<p>So what you have to do is <code>pip uninstall utils</code>, and then, if your project directory structure is complete, the above code will import the <code>face_classification</code> package's modules.</p>
|
python|tensorflow|machine-learning|keras
| 0
|
373,838
| 14,075,326
|
pandas RuntimeError in tseries/convertor when plotting
|
<p>When I execute the following statement:</p>
<pre><code>DataFrame(randn(3,1),index=[date(2012,10,1),date(2012,9,1),date(2012,8,1)],columns=['test']).plot()
</code></pre>
<p>I get the following exception:</p>
<p>File "/usr/local/lib/python2.7/dist-packages/pandas-0.10.0-py2.7-linux-x86_64.egg/pandas/tseries/converter.py", line 317, in <strong>call</strong>
(estimate, dmin, dmax, self.MAXTICKS * 2))
RuntimeError: MillisecondLocator estimated to generate 5270400 ticks from 2012-08-01 00:00:00+00:00 to 2012-10-01 00:00:00+00:00: exceeds Locator.MAXTICKS* 2 (2000)</p>
<p>Any workaround available for this bug ?</p>
|
<p>One workaround is to sort before plotting:</p>
<pre><code>df.sort().plot()
</code></pre>
<p><em>It looks like a bug, so I posted it on <a href="https://github.com/pydata/pandas/issues/2609" rel="nofollow">github</a>!</em></p>
<p>Note: this seems to plot ticks better if you use datetime rather than date:</p>
<pre><code>df1 = DataFrame(randn(3,1), index=[datetime(2012,10,1), datetime(2012,9,1), datetime(2012,8,1)], columns=['test'])
</code></pre>
|
python|plot|pandas
| 0
|
373,839
| 13,924,047
|
NumPy: 1D interpolation of a 3D array
|
<p>I'm rather new to NumPy. Anyone have an idea for making this code, especially the nested loops, more compact/efficient? BTW, dist and data are three-dimensional numpy arrays.</p>
<pre><code>def interpolate_to_distance(self,distance):
interpolated_data=np.ndarray(self.dist.shape[1:])
for j in range(interpolated_data.shape[1]):
for i in range(interpolated_data.shape[0]):
interpolated_data[i,j]=np.interp(
distance,self.dist[:,i,j],self.data[:,i,j])
return(interpolated_data)
</code></pre>
<p>Thanks!</p>
|
<p>Alright, I'll take a swag with this:</p>
<pre><code>def interpolate_to_distance(self, distance):
dshape = self.dist.shape
dist = self.dist.T.reshape(-1, dshape[-1])
data = self.data.T.reshape(-1, dshape[-1])
intdata = np.array([np.interp(distance, di, da)
for di, da in zip(dist, data)])
return intdata.reshape(dshape[0:2]).T
</code></pre>
<p>It at least removes one loop (and those nested indices), but it's not much faster than the original, ~20% faster according to <code>%timeit</code> in IPython. On the other hand, there's a lot of (probably unnecessary, ultimately) transposing and reshaping going on.</p>
<p>For the record, I wrapped it up in a dummy class and filled some 3 x 3 x 3 arrays with random numbers to test:</p>
<pre><code>import numpy as np
class TestClass(object):
def interpolate_to_distance(self, distance):
dshape = self.dist.shape
dist = self.dist.T.reshape(-1, dshape[-1])
data = self.data.T.reshape(-1, dshape[-1])
intdata = np.array([np.interp(distance, di, da)
for di, da in zip(dist, data)])
return intdata.reshape(dshape[0:2]).T
def interpolate_to_distance_old(self, distance):
interpolated_data=np.ndarray(self.dist.shape[1:])
for j in range(interpolated_data.shape[1]):
for i in range(interpolated_data.shape[0]):
interpolated_data[i,j]=np.interp(
distance,self.dist[:,i,j],self.data[:,i,j])
return(interpolated_data)
if __name__ == '__main__':
testobj = TestClass()
testobj.dist = np.random.randn(3, 3, 3)
testobj.data = np.random.randn(3, 3, 3)
distance = 0
print 'Old:\n', testobj.interpolate_to_distance_old(distance)
print 'New:\n', testobj.interpolate_to_distance(distance)
</code></pre>
<p>Which prints (for my particular set of randoms):</p>
<pre><code>Old:
[[-0.59557042 -0.42706077 0.94629049]
[ 0.55509032 -0.67808257 -0.74214045]
[ 1.03779189 -1.17605275 0.00317679]]
New:
[[-0.59557042 -0.42706077 0.94629049]
[ 0.55509032 -0.67808257 -0.74214045]
[ 1.03779189 -1.17605275 0.00317679]]
</code></pre>
<p>I also tried <code>np.vectorize(np.interp)</code> but couldn't get that to work. I suspect that would be much faster if it did work.</p>
<p>I couldn't get <code>np.fromfunction</code> to work either, as it passed (2) 3 x 3 (in this case) arrays of indices to <code>np.interp</code>, the same arrays you get from <code>np.mgrid</code>. </p>
<p>One other note: according to the docs for <code>np.interp</code>,</p>
<blockquote>
<p><code>np.interp</code> does not check that the x-coordinate sequence <code>xp</code> is increasing. If
<code>xp</code> is not increasing, the results are nonsense. A simple check for
increasingness is::</p>
<pre><code>np.all(np.diff(xp) > 0)
</code></pre>
</blockquote>
<p>Obviously, my random numbers violate the 'always increasing' rule, but you'll have to be more careful.</p>
|
python|multidimensional-array|numpy|interpolation
| 3
|
373,840
| 14,124,474
|
Using __call__ method of a class as a input to Numpy curve_fit
|
<p>I would like to use a <code>__call__</code> method of a class as an input to a Numpy curve_fit function due to my rather elaborate function and data preparation process (fitting analytical model data to some measurements). It works just fine by defining a function, but I can't get it to work with classes.</p>
<p>To recreate my problem you can run:</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit
#WORKS:
#def goal(x,a1,a2,a3,a4,a5):
# y=a1*x**4*np.sin(x)+a2*x**3+a3*x**2+a4*x+a5
# return y
# DOES NOT WORK:
class func():
def __call__(self,x,a1,a2,a3,a4,a5):
y=a1*x**4*np.sin(x)+a2*x**3+a3*x**2+a4*x+a5
return y
goal=func()
#data prepraration ***********
xdata=np.linspace(0,50,100)
ydata=goal(xdata,-2.1,-3.5,6.6,-1,2)
# ****************************
popt, pcov = curve_fit(goal, xdata, ydata)
print 'optimial parameters',popt
print 'The estimated covariance of optimial parameters',pcov
</code></pre>
<p>The error i get is:</p>
<pre><code>Traceback (most recent call last):
File "D:\...some path...\test_minimizacija.py", line 35, in <module>
popt, pcov = curve_fit(goal, xdata, ydata)
File "C:\Python26\lib\site-packages\scipy\optimize\minpack.py", line 412, in curve_fit
args, varargs, varkw, defaults = inspect.getargspec(f)
File "C:\Python26\lib\inspect.py", line 803, in getargspec
raise TypeError('arg is not a Python function')
TypeError: arg is not a Python function
</code></pre>
<p>How can I make this work? </p>
|
<p>Easy (although, not pretty), just change it to:</p>
<pre><code>popt, pcov = curve_fit(goal.__call__, xdata, ydata)
</code></pre>
<p>It's interesting that scipy forces you to pass a function object to <code>curve_fit</code> rather than an arbitrary callable ...</p>
<p>Quickly inspecting the source for <code>curve_fit</code>, it appears that another workaround might be:</p>
<pre><code>popt,pcov = curve_fit(goal, xdata, ydata, p0=[1]*5)
</code></pre>
<p>Here, <code>p0</code> is the initial guess for the fit parameters. The problem appears to be that <code>scipy</code> inspects the arguments to the function so that it knows how many parameters to use if you don't actually provide parameters as an initial guess. Here, since we have 5 parameters, my initial guess is a list of all ones of length 5. (<code>scipy</code> defaults to using ones if you don't provide a guess also).</p>
|
python|class|numpy|call|curve-fitting
| 3
|
373,841
| 14,119,892
|
Python 4D linear interpolation on a rectangular grid
|
<p>I need to interpolate temperature data linearly in 4 dimensions (latitude, longitude, altitude and time).<br>
The number of points is fairly high (360x720x50x8) and I need a fast method of computing the temperature at any point in space and time within the data bounds.</p>
<p>I have tried using <code>scipy.interpolate.LinearNDInterpolator</code> but using Qhull for triangulation is inefficient on a rectangular grid and takes hours to complete.</p>
<p>By reading this <a href="https://github.com/scipy/scipy/issues/2246" rel="noreferrer">SciPy ticket</a>, the solution seemed to be implementing a new nd interpolator using the standard <code>interp1d</code> to calculate a higher number of data points, and then use a "nearest neighbor" approach with the new dataset. </p>
<p>This, however, takes a long time again (minutes).</p>
<p>Is there a quick way of interpolating data on a rectangular grid in 4 dimensions without it taking minutes to accomplish?</p>
<p>I thought of using <code>interp1d</code> 4 times <em>without</em> calculating a higher density of points, but leaving it for the user to call with the coordinates, but I can't get my head around how to do this.</p>
<p>Otherwise would writing my own 4D interpolator specific to my needs be an option here?</p>
<p>Here's the code I've been using to test this:</p>
<p>Using <code>scipy.interpolate.LinearNDInterpolator</code>:</p>
<pre><code>import numpy as np
from scipy.interpolate import LinearNDInterpolator
lats = np.arange(-90,90.5,0.5)
lons = np.arange(-180,180,0.5)
alts = np.arange(1,1000,21.717)
time = np.arange(8)
data = np.random.rand(len(lats)*len(lons)*len(alts)*len(time)).reshape((len(lats),len(lons),len(alts),len(time)))
coords = np.zeros((len(lats),len(lons),len(alts),len(time),4))
coords[...,0] = lats.reshape((len(lats),1,1,1))
coords[...,1] = lons.reshape((1,len(lons),1,1))
coords[...,2] = alts.reshape((1,1,len(alts),1))
coords[...,3] = time.reshape((1,1,1,len(time)))
coords = coords.reshape((data.size,4))
interpolatedData = LinearNDInterpolator(coords,data)
</code></pre>
<p>Using <code>scipy.interpolate.interp1d</code>:</p>
<pre><code>import numpy as np
from scipy.interpolate import interp1d
lats = np.arange(-90,90.5,0.5)
lons = np.arange(-180,180,0.5)
alts = np.arange(1,1000,21.717)
time = np.arange(8)
data = np.random.rand(len(lats)*len(lons)*len(alts)*len(time)).reshape((len(lats),len(lons),len(alts),len(time)))
interpolatedData = np.array([None, None, None, None])
interpolatedData[0] = interp1d(lats,data,axis=0)
interpolatedData[1] = interp1d(lons,data,axis=1)
interpolatedData[2] = interp1d(alts,data,axis=2)
interpolatedData[3] = interp1d(time,data,axis=3)
</code></pre>
<p>Thank you very much for your help!</p>
|
<p>In the same ticket you have linked, there is an example implementation of what they call <em>tensor product interpolation</em>, showing the proper way to nest recursive calls to <code>interp1d</code>. This is equivalent to quadrilinear interpolation if you choose the default <code>kind='linear'</code> parameter for your <code>interp1d</code>'s.</p>
<p>While this may be good enough, this is not linear interpolation, and there will be higher order terms in the interpolation function, as this image from the <a href="http://en.wikipedia.org/wiki/Bilinear_interpolation" rel="noreferrer">wikipedia entry on bilinear interpolation</a> shows:</p>
<p><img src="https://i.stack.imgur.com/Dz61S.png" alt="enter image description here"></p>
<p>This may very well be good enough for what you are after, but there are applications where a triangulated, really piecewise linear, interpolation is preferred. If you really need this, there is an easy way of working around the slowness of qhull.</p>
<p>Once <code>LinearNDInterpolator</code> has been setup, there are two steps to coming up with an interpolated value for a given point:</p>
<ol>
<li>figure out inside which triangle (4D hypertetrahedron in your case) the point is, and</li>
<li>interpolate using the <a href="http://en.wikipedia.org/wiki/Barycentric_coordinate_system_%28mathematics%29" rel="noreferrer">barycentric coordinates</a> of the point relative to the vertices as weights.</li>
</ol>
<p>You probably do not want to mess with barycentric coordinates, so better leave that to <code>LinearNDInterpolator</code>. But you do know some things about the triangulation. Mostly that, because you have a regular grid, within each hypercube the triangulation is going to be the same. So to interpolate a single value, you could first determine in which subcube your point is, build a <code>LinearNDInterpolator</code> with the 16 vertices of that cube, and use it to interpolate your value:</p>
<pre><code>from itertools import product
def interpolator(coords, data, point) :
dims = len(point)
indices = []
sub_coords = []
for j in xrange(dims) :
idx = np.digitize([point[j]], coords[j])[0]
indices += [[idx - 1, idx]]
sub_coords += [coords[j][indices[-1]]]
indices = np.array([j for j in product(*indices)])
sub_coords = np.array([j for j in product(*sub_coords)])
sub_data = data[list(np.swapaxes(indices, 0, 1))]
li = LinearNDInterpolator(sub_coords, sub_data)
return li([point])[0]
>>> point = np.array([12.3,-4.2, 500.5, 2.5])
>>> interpolator((lats, lons, alts, time), data, point)
0.386082399091
</code></pre>
<p>This cannot work on vectorized data, since that would require storing a <code>LinearNDInterpolator</code> for every possible subcube, and even though it probably would be faster than triangulating the whole thing, it would still be very slow.</p>
|
python|numpy|scipy|interpolation
| 13
|
373,842
| 44,870,655
|
Do scipy and numpy svd or eig always return the same singular/eigen vector?
|
<p>Since the SVD decomposition is not unique (pairs of left and right singular vectors can have their sign flipped simultaneously), I was wondering to what extent the U and V matrix returned by <code>scipy.linalg.svd()</code> are 'deterministic' / always the same? </p>
<p>I tried it a few times with a random array on my machine and it seems to always return the same thing (fortunately), but could that vary across machines?</p>
|
<p>SciPy and Numpy both compute the SVD by out-sourcing to the LAPACK <code>_gesdd</code> routine. Any deterministic implementation of this routine will produce the same results every time on a given machine with a given LAPACK implementation, but as far as I know there is no guarantee that different LAPACK implementations (i.e. NETLIB vs MKL, OSX vs Windows, etc.) will use the same convention. If your application depends on some convention for resolving the sign ambiguity, it would be safest to ensure it yourself in some sort of post-processing of the singular vectors; one useful approach is given in <em>Resolving the Sign Ambiguity in the
Singular Value Decomposition</em> <a href="http://prod.sandia.gov/techlib/access-control.cgi/2007/076422.pdf" rel="nofollow noreferrer">(pdf)</a></p>
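<p>If all you need is a reproducible convention (not the full data-driven scheme from that report), a simple sketch is to force the largest-magnitude entry of each left singular vector to be positive and flip the matching right singular vector to compensate:</p>
<pre><code>import numpy as np

def fix_svd_signs(U, s, Vt):
    # U.dot(diag(s)).dot(Vt) is unchanged because each flip hits a (u_j, v_j) pair
    idx = np.argmax(np.abs(U), axis=0)
    signs = np.sign(U[idx, np.arange(U.shape[1])])
    signs[signs == 0] = 1
    return U * signs, s, Vt * signs[:, np.newaxis]

A = np.random.rand(5, 3)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U, s, Vt = fix_svd_signs(U, s, Vt)
print(np.allclose(U.dot(np.diag(s)).dot(Vt), A))   # True
</code></pre>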
|
numpy|scipy|linear-algebra|svd|eigenvector
| 3
|
373,843
| 44,943,790
|
Applying function to Pandas with GroupBy along direction of the grouping variable
|
<p>I have a cohort of N people and I computed a correlation matrix of some quantities (q1_score,...q5_score)</p>
<pre><code> df.groupby('participant_id').corr()
Out[130]:
q1_score q2_score q3_score q4_score q5_score
participant_id
11.0 q1_score 1.000000 -0.748887 -0.546893 -0.213635 -0.231169
q2_score -0.748887 1.000000 0.639649 0.324976 0.335596
q3_score -0.546893 0.639649 1.000000 0.154539 0.151233
q4_score -0.213635 0.324976 0.154539 1.000000 0.998752
q5_score -0.231169 0.335596 0.151233 0.998752 1.000000
14.0 q1_score 1.000000 -0.668781 -0.124614 -0.352075 -0.244251
q2_score -0.668781 1.000000 -0.175432 0.360183 0.184585
q3_score -0.124614 -0.175432 1.000000 -0.137993 -0.125115
q4_score -0.352075 0.360183 -0.137993 1.000000 0.968564
q5_score -0.244251 0.184585 -0.125115 0.968564 1.000000
17.0 q1_score 1.000000 -0.799223 -0.814424 -0.790587 -0.777318
q2_score -0.799223 1.000000 0.787238 0.658524 0.640786
q3_score -0.814424 0.787238 1.000000 0.702570 0.701440
q4_score -0.790587 0.658524 0.702570 1.000000 0.998996
q5_score -0.777318 0.640786 0.701440 0.998996 1.000000
18.0 q1_score 1.000000 -0.595545 -0.617691 -0.472409 -0.477523
q2_score -0.595545 1.000000 0.386705 0.148761 0.115068
q3_score -0.617691 0.386705 1.000000 0.806637 0.782345
q4_score -0.472409 0.148761 0.806637 1.000000 0.982617
q5_score -0.477523 0.115068 0.782345 0.982617 1.000000
</code></pre>
<p>I need to compute the median values of the correlations across all participants. What I mean: I need to take the corr. between item J and item K over all participants and find their median value.</p>
<p>I am sure it is one line of code, but I'm struggling to realise it (still learning pandas by examples).</p>
|
<p>Stack your data, and do another groupby:</p>
<pre><code>df.groupby('participant_id').corr().stack().groupby(level = [1,2]).median()
</code></pre>
<p>Edit: Actually, you don't need to stack if you don't want to:</p>
<pre><code>df.groupby('participant_id').corr().groupby(level = [1]).median()
</code></pre>
<p>works too.</p>
|
pandas|grouping
| 2
|
373,844
| 44,966,229
|
Unsupported type for timedelta
|
<p>I'm trying to increase dates in the pandas dataframe by values contained in the other column of the same dataframe like this</p>
<pre><code>loans['est_close_date'] = loans['dealdate'] + loans['tenor_weeks'].apply(lambda x: dt.timedelta(weeks =x))
</code></pre>
<p>but I keep getting error as:</p>
<blockquote>
<p>"unsupported type for timedelta weeks component: numpy.int64" error.</p>
</blockquote>
<p>On the other hand, construction like this</p>
<pre><code>loans['est_close_date'] = loans['dealdate'] + loans['tenor_weeks'].apply(lambda x: dt.timedelta(weeks =1)*x)
</code></pre>
<p>works just fine. I can't understand what's wrong with the first approach.
There are no missing values in the <code>tenor_weeks</code> column.</p>
<p>Thank you in advance!</p>
|
<p>Your solutions work perfectly for me; maybe you need to upgrade pandas/python.</p>
<p>I add pure pandas solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>to_timedelta</code></a>:</p>
<pre><code>rng = pd.date_range('2017-04-03', periods=10)
loans = pd.DataFrame({'dealdate': rng, 'tenor_weeks': range(1,11)})
print (loans)
dealdate tenor_weeks
0 2017-04-03 1
1 2017-04-04 2
2 2017-04-05 3
3 2017-04-06 4
4 2017-04-07 5
5 2017-04-08 6
6 2017-04-09 7
7 2017-04-10 8
8 2017-04-11 9
9 2017-04-12 10
loans['est_close_date'] = loans['dealdate'] + loans['tenor_weeks'].apply(lambda x: dt.timedelta(weeks =x))
loans['est_close_date1'] = loans['dealdate'] + loans['tenor_weeks'].apply(lambda x: dt.timedelta(weeks =1)*x)
</code></pre>
<hr>
<pre><code>loans['est_close_date2'] = loans['dealdate'] + pd.to_timedelta(loans['tenor_weeks'],unit='w')
print (loans)
dealdate tenor_weeks est_close_date est_close_date1 est_close_date2
0 2017-04-03 1 2017-04-10 2017-04-10 2017-04-10
1 2017-04-04 2 2017-04-18 2017-04-18 2017-04-18
2 2017-04-05 3 2017-04-26 2017-04-26 2017-04-26
3 2017-04-06 4 2017-05-04 2017-05-04 2017-05-04
4 2017-04-07 5 2017-05-12 2017-05-12 2017-05-12
5 2017-04-08 6 2017-05-20 2017-05-20 2017-05-20
6 2017-04-09 7 2017-05-28 2017-05-28 2017-05-28
7 2017-04-10 8 2017-06-05 2017-06-05 2017-06-05
8 2017-04-11 9 2017-06-13 2017-06-13 2017-06-13
9 2017-04-12 10 2017-06-21 2017-06-21 2017-06-21
</code></pre>
|
python|pandas|numpy
| 4
|
373,845
| 44,882,372
|
How to learn relation between field description and possible categories
|
<p>I have a relation between products that looks like this</p>
<p><a href="https://i.stack.imgur.com/lDFxF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lDFxF.png" alt="enter image description here"></a></p>
<p>Is <a href="https://www.tensorflow.org/tutorials/seq2seq" rel="nofollow noreferrer"><em>seq2seq</em></a> a correct approach so my model can learn the relation between the text in both product (left column) and category (right column), and then being able to predict future categories given a product description?</p>
|
<p><code>Seq2Seq</code> essentially has two different recurrent neural networks tied together: an encoder RNN that takes input <code>text tokens</code> and a decoder RNN that starts generating <code>text tokens</code> based on the outputs from the encoder RNN. It is a sequence-to-sequence network. But in your case, as I see it, the inputs are a sequence and the output is a category based on the inputs. You would be better off with an <code>LSTM</code> network that passes your input sequence through an <code>embedding layer</code> and then feeds the last <code>hidden state</code> of the <code>LSTM</code> to a <code>dense layer</code> to make the classification.</p>
<p>An LSTM model for your use case:</p>
<h1>Placeholders for input and output</h1>
<pre><code># input batch of text sequences of length `seq_length`
X = tf.placeholder(tf.int32, [None, seq_length], name='input')
# Output class labels
y = tf.placeholder(tf.float32, [None], name='labels')
</code></pre>
<h1>Embedding layer</h1>
<pre><code># For every word in your vocab you need a embedding vector.
# The below weights are not trainable as we will `init` with `pre-trained` embeddings. If you dont want to do that set it to True.
W = tf.get_variable(initializer=tf.random_uniform([vocab_size, embedding_size]), name='embed', trainable=False)
# Get the embedding representation of the inputs
embed = tf.nn.embedding_lookup(W, X)
</code></pre>
<h1>LSTM layer</h1>
<pre><code># Create basic LSTMCell, with number of hidden units as a input param
def lstm_cell():
return tf.contrib.rnn.BasicLSTMCell(n_hidden)
# Create a stack of LSTM cells (if you need)
stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(n_layers)])
# Create a dynamic RNN to handle the sequence of inputs
output, _ = tf.nn.dynamic_rnn(stacked_lstm, embed, dtype=tf.float32)  # feed the embedded inputs, not the raw ids
# get the output of the last hidden state
last_hidden = output[:, -1, :]
</code></pre>
<h1>Final dense layer</h1>
<pre><code># output dimension should be `n_classes`.
logits = tf.layers.dense(last_hidden, n_classes)
</code></pre>
<p>This should be your Model. </p>
|
tensorflow
| 1
|
373,846
| 44,868,306
|
How to randomly mix N arrays in numpy?
|
<p>I have a list of N numpy arrays of the same shape. I need to combine them into one array in the following way: each element of the output array should be randomly taken from the corresponding position of one of the input arrays.</p>
<p>For example, if I need to decide what value to use at position [2,0,7], I take all the values located at this position in all N input arrays. So, I get N values and I choose one of them randomly.</p>
<p>To make it a bit more complex, I would like to assign a probability to each input array, so that the probability of a value being selected depends on which input array it comes from.</p>
|
<pre><code>import numpy as np
import itertools as it
x = np.random.choice(np.arange(10), (2,3,4)) # pass probabilities with p=...
N = 10
a = [k*np.ones_like(x) for k in range(N)] # list of N arrays of same shape
y = np.empty(a[0].shape) # output array
# Generate list of all indices of arrays in a (no matter what shape, this is
# handled with *) and set elements of output array y.
for index in list(it.product(*list(np.arange(n) for n in x.shape))):
y[index] = a[x[index]][index]
# a[x[index]] is the array chosen at a given index.
# a[x[index]][index] is the element of this array at the given index.
# expected result with the choice of list a: x==y is True for all elements
</code></pre>
<p>The "more complex part" should be handled with the parameter <code>p</code> of <a href="https://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.random.choice.html" rel="nofollow noreferrer"><code>numpy.random.choice</code></a>. Anything else should be explained in the comments. With the use of <code>*</code> this should work for arbitrary shape of the arrays in <code>a</code> (I hope).</p>
|
python|arrays|numpy|random
| 0
|
373,847
| 45,179,079
|
How to use a nested dictionary with .map for a Pandas Series? pd.Series([]).map
|
<p>I'm trying to <code>map</code> certain values of a series, while keeping the others intact. In this case, I was to change <code>dmso --> dmso-2</code>, <code>naoh --> naoh-2</code>, and <code>water --> water-2</code> but I'm getting a <code>KeyError</code>.</p>
<p>First I'm doing a boolean statement to see if it's any of the ones of interest, if <code>True</code> then use this dictionary, if <code>False</code> then just return <code>x</code>. I could manually go in and change them but programming is fun and I can't figure out why this logic doesn't work.</p>
<pre><code># A sample of the series
Se_data = pd.Series({
'DMSO_S43': 'dmso',
'DMSO_S44': 'dmso',
'DOXYCYCLINE-HYCLATE_S25': 'doxycycline-hyclate',
'DOXYCYCLINE-HYCLATE_S26': 'doxycycline-hyclate'
})
# This boolean works
Se_data.map(lambda x: x in {"dmso", "naoh", "water"})
# DMSO_S43 True
# DMSO_S44 True
# DOXYCYCLINE-HYCLATE_S25 False
# DOXYCYCLINE-HYCLATE_S26 False
# This dictionary on the boolean works
Se_data.map(lambda x: {True: "control", False: x}[x in {"dmso", "naoh", "water"}])
# DMSO_S43 control
# DMSO_S44 control
# DOXYCYCLINE-HYCLATE_S25 doxycycline-hyclate
# DOXYCYCLINE-HYCLATE_S26 doxycycline-hyclate
# This nested dictionary isn't working
Se_data.map(lambda x: {
True: {"dmso": "dmso-2", "naoh": "naoh-2", "water": "water-2"}[x],
False: x
}[x in {"dmso", "naoh", "water"}])
# KeyError: 'doxycycline-hyclate'
</code></pre>
|
<p>If I understood correctly, you can simply do</p>
<pre><code>Se_data.replace({
'dmso': 'dmso-2',
'naoh': 'naoh-2',
'water': 'water-2',
})
</code></pre>
<p>which will leave all other values intact.</p>
<hr>
<p>For what it's worth, your code wasn't working because the expression</p>
<pre><code>{"dmso": "dmso-2", "naoh": "naoh-2", "water": "water-2"}[x]
</code></pre>
<p>is evaluated for <em>all</em> <code>x</code>, not just the <code>x in {"dmso", "naoh", "water"}</code>. Values in Python dictionaries aren't short-circuited or evaluated lazily like you expected. You could have done something like</p>
<pre><code>Se_data.map(lambda x: {
"dmso": "dmso-2",
"naoh": "naoh-2",
"water": "water-2"
}[x] if x in {"dmso", "naoh", "water"} else x)
</code></pre>
<p>or</p>
<pre><code>Se_data.map(lambda x: {
"dmso": "dmso-2",
"naoh": "naoh-2",
"water": "water-2"
}.get(x, x))
</code></pre>
|
python|pandas|dictionary|vector|mapping
| 1
|
373,848
| 45,154,180
|
How to dynamically freeze weights after compiling model in Keras?
|
<p>I would like to train a GAN in Keras. My final target is BEGAN, but I'm starting with the simplest one. Understanding <em>how to freeze</em> weights properly is necessary here and that's what I'm struggling with.</p>
<p>During generator training the discriminator weights must not be updated. I would like to <em>freeze</em> and <em>unfreeze</em> the discriminator alternately, so that the generator and the discriminator are trained in turn. The problem is that setting the <em>trainable</em> parameter to False on the <em>discriminator</em> model, or even on its weights, doesn't stop the model from training (and the weights from updating). On the other hand, when I compile the model after setting <em>trainable</em> to False, the weights become <em>unfreezable</em>. I can't compile the model after each iteration because that negates the idea of the whole training.</p>
<p>Because of that problem it seems that many Keras implementations are bugged, or they work because of some non-intuitive trick in an old version or something.</p>
|
<p>I tried this example code a couple of months ago and it worked:
<a href="https://github.com/fchollet/keras/blob/master/examples/mnist_acgan.py" rel="noreferrer">https://github.com/fchollet/keras/blob/master/examples/mnist_acgan.py</a></p>
<p>It's not the simplest form of GAN, but as far as I remembered, it's not too difficult to remove the classification loss and turn the model into a GAN.</p>
<p>You don't need to turn on/off the discriminator's trainable property and recompile. Simply create and compile two model objects, one with <code>trainable=True</code> (<code>discriminator</code> in the code) and another one with <code>trainable=False</code> (<code>combined</code> in the code).</p>
<p>When you're updating the discriminator, call <code>discriminator.train_on_batch()</code>. When you're updating the generator, call <code>combined.train_on_batch()</code>.</p>
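<p>A minimal sketch of that pattern (the layer sizes and architectures here are placeholders, not a working GAN):</p>
<pre><code>from keras.models import Sequential, Model
from keras.layers import Input, Dense

latent_dim, data_dim = 100, 784

generator = Sequential([Dense(128, activation='relu', input_dim=latent_dim),
                        Dense(data_dim, activation='tanh')])

discriminator = Sequential([Dense(128, activation='relu', input_dim=data_dim),
                            Dense(1, activation='sigmoid')])
# compiled while trainable: this object is used for the discriminator updates
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

# freeze D *before* building and compiling the combined model
discriminator.trainable = False
z = Input(shape=(latent_dim,))
combined = Model(z, discriminator(generator(z)))
combined.compile(optimizer='adam', loss='binary_crossentropy')

# in the training loop:
#   discriminator.train_on_batch(real_and_fake_batch, labels)   # updates D
#   combined.train_on_batch(noise_batch, all_ones_labels)       # updates only G
</code></pre>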
|
python|tensorflow|neural-network|keras|theano
| 13
|
373,849
| 45,216,258
|
Pandas: extend mask to set regions
|
<p>I have a Boolean mask in "python pandas", where I want to widen (smear?) each "True" sample. Suppose I have the following:</p>
<pre><code>import pandas
data = pandas.DataFrame({'col1': [False, True, False, False, False, False, True, False, False, False]})
</code></pre>
<p>So the "True" mask looks like:</p>
<pre><code>mask = data[data.col1 == True]
>>> mask
col1
1 True
6 True
</code></pre>
<p>I would like to have:</p>
<pre><code>>>> mask
col1
0 True
1 True
2 True
5 True
6 True
7 True
</code></pre>
<p>How can I set the neighboring indices up to a distance of (say) 1 to "True" as well? I could write a for-loop, sure, but that feels wrong.</p>
<p>Hints? Tips? Keywords?</p>
<p>Thanks!</p>
|
<p>One way you could do it is to use <code>rolling</code> and <code>max</code>:</p>
<pre><code>n=1
window = n*2 + 1
data.col1.rolling(window, center=True, min_periods=1).max().astype(bool)
</code></pre>
<p>Output:</p>
<pre><code>0 True
1 True
2 True
3 False
4 False
5 True
6 True
7 True
8 False
9 False
Name: col1, dtype: bool
print(mask)
col1
0 True
1 True
2 True
5 True
6 True
7 True
</code></pre>
|
python|pandas
| 2
|
373,850
| 45,118,448
|
Can't run prediciton because of troubles with tf.placeholder
|
<p>Apologies, I am new to TensorFlow. I am developing a simple onelayer_perceptron script that, given init parameters, trains a neural network using TensorFlow:</p>
<p>My compiler complains:</p>
<blockquote>
<p>You must feed a value for placeholder tensor 'input' with dtype float </p>
</blockquote>
<p>the error occurs here: </p>
<blockquote>
<p>input_tensor = tf.placeholder(tf.float32,[None, n_input],name="input")</p>
</blockquote>
<p>Please see what I have done so far: </p>
<p>1) I init my input values</p>
<pre><code>n_input = 10 # Number of input neurons
n_hidden_1 = 10 # Number of hidden layers
n_classes = 3 # Out layers
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'out': tf.Variable(tf.random_normal([n_hidden_1, n_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
</code></pre>
<p>2) Initializing placeholders:</p>
<pre><code>input_tensor = tf.placeholder(tf.float32, [None, n_input], name="input")
output_tensor = tf.placeholder(tf.float32, [None, n_classes], name="output")
</code></pre>
<p>3) Train the NN </p>
<pre><code># Construct model
prediction = onelayer_perceptron(input_tensor, weights, biases)
init = tf.global_variables_initializer()
</code></pre>
<p>4) This is my onelayer_perceptron function that just does typical NN calculation matmul layers and weights, add biases and activates using sigmoid </p>
<pre><code>def onelayer_perceptron(input_tensor, weights, biases):
layer_1_multiplication = tf.matmul(input_tensor, weights['h1'])
layer_1_addition = tf.add(layer_1_multiplication, biases['b1'])
layer_1_activation = tf.nn.sigmoid(layer_1_addition)
out_layer_multiplication = tf.matmul(layer_1_activation, weights['out'])
out_layer_addition = out_layer_multiplication + biases['out']
return out_layer_addition
</code></pre>
<p>5) Running my script</p>
<pre><code>with tf.Session() as sess:
sess.run(init)
i = sess.run(input_tensor)
print(i)
</code></pre>
|
<p>You are not feeding any input to the placeholder; you need to supply it via a <code>feed_dict</code> when you run the graph.</p>
<p>You should do something similar:</p>
<pre><code> out = session.run(Tensor(s)_you_want_to_evaluate, feed_dict={input_tensor: input of size [batch_size,n_input], output_tensor: output of size [batch size, classes] })
</code></pre>
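<p>With the placeholders from your script, a minimal sketch (the batch of 5 random rows is just dummy data) looks like:</p>
<pre><code>import numpy as np

with tf.Session() as sess:
    sess.run(init)
    batch_x = np.random.rand(5, n_input).astype(np.float32)   # dummy input batch
    preds = sess.run(prediction, feed_dict={input_tensor: batch_x})
    print(preds)
</code></pre>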
|
tensorflow|neural-network|perceptron
| 1
|
373,851
| 45,069,431
|
Python Pandas: Change value associated with each first day entry in every month
|
<p>I'd like to change the value associated with the first day in every month for a <code>pandas.Series</code> I have. For example, given something like this:</p>
<pre><code>Date
1984-01-03 0.992701
1984-01-04 1.003614
1984-01-17 0.994647
1984-01-18 1.007440
1984-01-27 1.006097
1984-01-30 0.991546
1984-01-31 1.002928
1984-02-01 1.009894
1984-02-02 0.996608
1984-02-03 0.996595
...
</code></pre>
<p>I'd like to change the values associated with <code>1984-01-03</code>, <code>1984-02-01</code> and so on. I've racked my brain for hours on this one and have looked around Stack Overflow a fair bit. Some solutions have come close. For example, using:</p>
<pre><code>[In]: series.groupby((m_ret.index.year, m_ret.index.month)).first()
[Out]:
Date Date
1984 1 0.992701
2 1.009894
3 1.005963
4 0.997899
5 1.000342
6 0.995429
7 0.994620
8 1.019377
9 0.993209
10 1.000992
11 1.009786
12 0.999069
1985 1 0.981220
2 1.011928
3 0.993042
4 1.015153
...
</code></pre>
<p>This is almost there, but I'm struggling to proceed further.</p>
<p>What I'd like to do is set the values associated with the first day present in each month, for every year, to 1.</p>
<p><code>series[m_ret.index.is_month_start] = 1</code> comes close, but the problem here is that <code>is_month_start</code> only selects rows where the day value is 1. However in my series, this isn't always the case as you can see. For example, the date of the first day in January is <code>1984-01-03</code>.</p>
<p><code>series.groupby(pd.TimeGrouper('BM')).nth(0)</code> doesn't appear to return the first day either, instead I get the last day:</p>
<pre><code>Date
1984-01-31 0.992701
1984-02-29 1.009894
1984-03-30 1.005963
1984-04-30 0.997899
1984-05-31 1.000342
1984-06-29 0.995429
1984-07-31 0.994620
1984-08-31 1.019377
...
</code></pre>
<p>I'm completely stumped. Your help is as always, greatly appreciated! Thank you.</p>
|
<p>One way would be to use your <code>.groupby((m_ret.index.year, m_ret.index.month))</code> idea, but use <code>idxmin</code> instead on the index itself converted into a Series:</p>
<pre><code>In [74]: s.index.to_series().groupby([s.index.year, s.index.month]).idxmin()
Out[74]:
Date Date
1984 1 1984-01-03
2 1984-02-01
Name: Date, dtype: datetime64[ns]
In [75]: start = s.index.to_series().groupby([s.index.year, s.index.month]).idxmin()
In [76]: s.loc[start] = 999
In [77]: s
Out[77]:
Date
1984-01-03 999.000000
1984-01-04 1.003614
1984-01-17 0.994647
1984-01-18 1.007440
1984-01-27 1.006097
1984-01-30 0.991546
1984-01-31 1.002928
1984-02-01 999.000000
1984-02-02 0.996608
1984-02-03 0.996595
dtype: float64
</code></pre>
|
python|pandas|datetime|dataframe
| 4
|
373,852
| 45,247,500
|
Incremental data load using pandas
|
<p>I am trying to implement incremental data import using pandas.</p>
<p>I have two dataframes: df_old (original data, loaded before) and df_new (new data, to be merged with df_old). </p>
<p>Data in df_old/df_new are unique on multiple columns (for simplicity, let's say just 2: key1 and key2). The other columns hold the data to be merged; let's say there are only 2 of them too: val1 and val2.</p>
<p>Besides these, there is one more column to be taken care of: change_id, which increases for each new entry overwriting an old one.</p>
<p>The logic of the import is pretty straightforward:</p>
<ol>
<li>if there is new key pair in df_new, it should be added (with corresponding val1/val2 values) to df_old</li>
<li><p>if there is a key pair in df_new which exists in df_old, then:</p>
<p>2a) if corresponding values in df_old and df_new are same, the old ones should be kept</p>
<p>2b) if corresponding values in df_old and df_new are different, the values in df_new should replace the old ones in df_old</p></li>
<li><p>there's no need to care about data deletion (if some data in df_old exist which are not present in df_new)</p></li>
</ol>
<p>so far, I came up with 2 different solutions:</p>
<pre><code>>>> df_old = pd.DataFrame([['A1','B2',1,2,1],['A1','A2',1,3,1],['B1','A2',1,3,1],['B1','B2',1,4,1],], columns=['key1','key2','val1','val2','change_id'])
>>> df_old.set_index(['key1','key2'], inplace=True)
>>> df_old
val1 val2 change_id
key1 key2
A1 B2 1 2 1
A2 1 3 1
B1 A2 1 3 1
B2 1 4 1
>>> df_new = pd.DataFrame([['A1','B2',2,1,2],['A1','A2',1,3,2],['C1','B2',2,1,2]], columns=['key1','key2','val1','val2','change_id'])
>>> df_new.set_index(['key1','key2'], inplace=True)
>>> df_new
val1 val2 change_id
key1 key2
A1 B2 2 1 2
A2 1 3 2
C1 B2 2 1 2
</code></pre>
<p>solution 1</p>
<pre><code># this solution groups concatenated old data with new ones, group them by keys and for each group evaluates if new data are different
def merge_new(x):
if x.shape[0] == 1:
return x.iloc[0]
else:
if x.iloc[0].loc[['val1','val2']].equals(x.iloc[1].loc[['val1','val2']]):
return x.iloc[0]
else:
return x.iloc[1]
def solution1(df_old, df_new):
merged = pd.concat([df_old, df_new])
return merged.groupby(level=['key1','key2']).apply(merge_new).reset_index()
</code></pre>
<p>solution 2</p>
<pre><code># this solution uses pd.merge to merge data + additional logic to compare merged rows and select new data
>>> def solution2(df_old, df_new):
>>> merged = pd.merge(df_old, df_new, left_index=True, right_index=True, how='outer', suffixes=('_old','_new'), indicator='ind')
>>> merged['isold'] = (merged.loc[merged['ind'] == 'both',['val1_old','val2_old']].rename(columns=lambda x: x[:-4]) == merged.loc[merged['ind'] == 'both',['val1_new','val2_new']].rename(columns=lambda x: x[:-4])).all(axis=1)
>>> merged.loc[merged['ind'] == 'right_only','isold'] = False
>>> merged['isold'] = merged['isold'].fillna(True)
>>> return pd.concat([merged[merged['isold'] == True][['val1_old','val2_old','change_id_old']].rename(columns=lambda x: x[:-4]), merged[merged['isold'] == False][['val1_new','val2_new','change_id_new']].rename(columns=lambda x: x[:-4])])
>>> solution1(df_old, df_new)
key1 key2 val1 val2 change_id
0 A1 A2 1 3 1
1 A1 B2 2 1 2
2 B1 A2 1 3 1
3 B1 B2 1 4 1
4 C1 B2 2 1 2
>>> solution2(df_old, df_new)
val1 val2 change_id
key1 key2
A1 A2 1.0 3.0 1.0
B1 A2 1.0 3.0 1.0
B2 1.0 4.0 1.0
A1 B2 2.0 1.0 2.0
C1 B2 2.0 1.0 2.0
</code></pre>
<p>Both of them work; however, I am still quite disappointed with the performance on huge dataframes.
The question is: is there some better way to do this? Any hint for a decent speed improvement will be more than welcome...</p>
<pre><code>>>> %timeit solution1(df_old, df_new)
100 loops, best of 3: 10.6 ms per loop
>>> %timeit solution2(df_old, df_new)
100 loops, best of 3: 14.7 ms per loop
</code></pre>
|
<p>Here's one way to do this that's pretty fast:</p>
<pre><code>merged = pd.concat([df_old.reset_index(), df_new.reset_index()])
merged = merged.drop_duplicates(["key1", "key2", "val1", "val2"]).drop_duplicates(["key1", "key2"], keep="last")
# 100 loops, best of 3: 1.69 ms per loop
# key1 key2 val1 val2 change_id
# 1 A1 A2 1 3 1
# 2 B1 A2 1 3 1
# 3 B1 B2 1 4 1
# 0 A1 B2 2 1 2
# 2 C1 B2 2 1 2
</code></pre>
<p>The rationale here is to concatenate all rows and simply call <code>drop_duplicates</code> twice, rather than relying on join logic to get the rows you want. The first call to <code>drop_duplicates</code> drops rows originating in <code>df_new</code> that match on both the key & value columns since the default behavior of this method is to keep the first of the duplicate rows (in this case the row from <code>df_old</code>). The second call drops duplicates that match on the key columns, but specifies that the <code>last</code> row for each set of duplicates should be kept.</p>
<p>This approach assumes that the rows are sorted on <code>change_id</code>; this is a safe assumption given the order in which the example DataFrames are concatenated. If this is a faulty assumption with your real data, however, simply call <code>.sort_values('change_id')</code> on <code>merged</code> before dropping the duplicates.</p>
|
python|pandas|merge
| 4
|
373,853
| 45,007,081
|
Convert mat file to pandas dataframe
|
<p>I want to convert <a href="https://www.dropbox.com/s/galg3ihvxklf0qi/cardio.mat?dl=0" rel="nofollow noreferrer">this file</a> to pandas dataframe.</p>
<pre><code>import pandas as pd
import scipy.io
mat = scipy.io.loadmat('cardio.mat')
cardio_df = pd.DataFrame(mat)
</code></pre>
<p>I get this error: <code>Exception: Data must be 1-dimensional</code></p>
|
<p>It seems <code>mat</code> is a dict containing <code>X</code> of shape <code>(1831, 21)</code>, <code>y</code> with shape <code>(1831, 1)</code>, and some metadata. Assuming <code>X</code> is the data and <code>y</code> are the labels for the same, you can stack them horizontally with <code>np.hstack</code> and load them into pandas:</p>
<pre><code>In [1755]: mat = scipy.io.loadmat('cardio.mat')
In [1758]: cardio_df = pd.DataFrame(np.hstack((mat['X'], mat['y'])))
In [1759]: cardio_df.head()
Out[1759]:
0 1 2 3 4 5 6 \
0 0.004912 0.693191 -0.203640 0.595322 0.353190 -0.061401 -0.278295
1 0.110729 -0.079903 -0.203640 1.268942 0.396246 -0.061401 -0.278295
2 0.216546 -0.272445 -0.203640 1.050988 0.148753 -0.061401 -0.278295
3 0.004912 0.727346 -0.203640 1.212171 -0.683598 -0.061401 -0.278295
4 -0.100905 0.363595 1.321366 1.027120 0.141359 -0.061401 -0.278295
In [1760]: cardio_df.shape
Out[1760]: (1831, 22)
</code></pre>
|
python|pandas|dataframe|mat
| 10
|
373,854
| 44,966,528
|
Pandas: astype error string to float (could not convert string to float: '7,50')
|
<p>I am trying to convert a dataframe field from string to float in Pandas.</p>
<p>This is the field:</p>
<pre><code>In:
print(merged['platnosc_total'].head(100))
Out:
0 0,00
1 4,50
2 0,00
3 0,00
4 0,00
5 4,50
6 6,10
7 7,99
8 4,00
9 7,69
10 7,50
</code></pre>
<p>Note the 7,50, in the last row, which seems to cause the error:</p>
<pre><code>In:
merged['platnosc_total'].astype(float)
Out:
ValueError: could not convert string to float: '7,50'
</code></pre>
<p>Does this mean that the rest got converted, and only the row with 7,50 is the cause?</p>
<p>How can I actually cast this field/column to float?</p>
|
<p>Need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html" rel="nofollow noreferrer"><code>replace</code></a> first:</p>
<pre><code>print (merged['platnosc_total'].replace(',','.', regex=True).astype(float))
0 0.00
1 4.50
2 0.00
3 0.00
4 0.00
5 4.50
6 6.10
7 7.99
8 4.00
Name: platnosc_total, dtype: float64
</code></pre>
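<p>Note that every value shown contains a decimal comma, so the whole column needs the replacement, not just the 7,50 row. If the column comes straight from a CSV, you can also let pandas parse the decimal comma at read time (the file name below is a placeholder):</p>
<pre><code>import pandas as pd

merged = pd.read_csv('file.csv', decimal=',')
</code></pre>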
|
python|pandas
| 4
|
373,855
| 45,248,327
|
Numpy Cyclic Broadcast of Fancy Indexing
|
<p><code>A</code> is a <code>numpy</code> array with shape <code>(6, 8)</code></p>
<p>I want:</p>
<pre><code>x_id = np.array([0, 3])
y_id = np.array([1, 3, 4, 7])
A[ [x_id, y_id] ] += 1 # this doesn't actually work.
</code></pre>
<p>Tricks like <code>::2</code> won't work because the indices do not increase regularly.</p>
<p>I don't want to use extra memory to repeat <code>[0, 3]</code> and make a new array <code>[0, 3, 0, 3]</code> because that is slow.</p>
<p>The indices for the two dimensions do not have equal length.</p>
<p>which is equivalent to:</p>
<pre><code>A[0, 1] += 1
A[3, 3] += 1
A[0, 4] += 1
A[3, 7] += 1
</code></pre>
<p>Can <code>numpy</code> do something like this?</p>
<p>Update:</p>
<p>Not sure if <code>broadcast_to</code> or <code>stride_tricks</code> is faster than nested python loops. (<a href="https://stackoverflow.com/questions/5564098/repeat-numpy-array-without-replicating-data">Repeat NumPy array without replicating data?</a>) </p>
|
<p>You can convert <code>y_id</code> to a 2d array with the 2nd dimension the same as <code>x_id</code>, and then the two indices will be automatically broadcasted due to the dimension difference:</p>
<pre><code>x_id = np.array([0, 3])
y_id = np.array([1, 3, 4, 7])
A = np.zeros((6,8))
A[x_id, y_id.reshape(-1, x_id.size)] += 1
A
array([[ 0., 1., 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0., 0., 1.],
[ 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0.]])
</code></pre>
|
python|numpy|broadcast
| 2
|
373,856
| 45,127,605
|
Numpy Reshape to obtain monthly means from data
|
<p>I'm trying to obtain monthly means from an observed precipitation data set for the period 1901-2015. The current shape of my prec variable is <code>(1380 (time), 360 (lat), 720 (lon))</code>, with 1380 being the number of months over a 115 year period. I have been informed that to calculate monthly means, the most effective way is to conduct an <code>np.reshape</code> command on the <code>prec</code> variable to split the array up into months and years. However, I am not sure what the best way to do this is. I was also wondering if there was a way in Python to select specific months of the year, as I will be producing plots for each month of the year.</p>
<p>I have been attempting to reshape the <code>prec</code> variable with the code below. However I am not sure how to do this correctly:</p>
<pre><code>#Set Source Folder
sys.path.append('../../..')
SrcFld = ("/export/silurian/array-01/obs/CRU/")
#Retrieve Data
data_path = ''
example = (str(SrcFld) + 'cru_ts4.00.1901.2015.pre.dat.nc')
Data = Dataset(example)
#Create Prec Mean Array and reshape to get monthly means
Prec_mean = np.zeros((360,720))
#Retrieve Variables
Prec = Data.variables['pre'][:]
lats = Data.variables['lat'][:]
lons = Data.variables['lon'][:]
np.reshape(Prec, ())
#Get Annual/Monthly Average
Prec_mean =np.mean(Prec,axis=0)
</code></pre>
<p>Any guidance on this issue would be appreciated.</p>
|
<p>The following snippet will first dice the precipitation array year-wise. We can then use that array to get the monthly average of precipitation.</p>
<pre class="lang-py prettyprint-override"><code>>>> prec = np.random.rand(1380,360,720)
>>> ind = np.arange(12,1380,12)
>>> yearly_split = np.array(np.split(prec, ind, axis=0))
>>> yearly_split.shape
(115, 12, 360, 720)
>>> monthly_mean = yearly_split.mean(axis=0)
>>> monthly_mean.shape
(12, 360, 720)
</code></pre>
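<p>Since <code>yearly_split</code> has shape <code>(years, months, lat, lon)</code>, selecting a single calendar month for plotting is just an index on the second axis, e.g. January:</p>
<pre class="lang-py prettyprint-override"><code>>>> january_all_years = yearly_split[:, 0]    # shape (115, 360, 720)
>>> january_mean = monthly_mean[0]            # shape (360, 720)
</code></pre>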
|
python|numpy|mean|reshape
| 0
|
373,857
| 45,188,605
|
Adding spaces between strings after sum()
|
<p>Assuming that I have the following pandas dataframe:</p>
<pre><code>>>> data = pd.DataFrame({ 'X':['a','b'], 'Y':['c','d'], 'Z':['e','f']})
X Y Z
0 a c e
1 b d f
</code></pre>
<p>The desired output is:</p>
<pre><code>0 a c e
1 b d f
</code></pre>
<p>When I run the following code, I get:</p>
<pre><code>>>> data.sum(axis=1)
0 ace
1 bdf
</code></pre>
<p>So how do I add columns of strings with space between them?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>apply</code></a> per rows by <code>axis=1</code> and <code>join</code>:</p>
<pre><code>a = data.apply(' '.join, axis=1)
print (a)
0 a c e
1 b d f
dtype: object
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.add.html" rel="nofollow noreferrer"><code>add</code></a> spaces, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html" rel="nofollow noreferrer"><code>sum</code></a> and last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.rstrip.html" rel="nofollow noreferrer"><code>str.rstrip</code></a>:</p>
<pre><code>a = data.add(' ').sum(axis=1).str.rstrip()
#same as
#a = (data + ' ').sum(axis=1).str.rstrip()
print (a)
0 a c e
1 b d f
dtype: object
</code></pre>
|
python|pandas
| 4
|
373,858
| 45,041,510
|
dictionaries with NaN key to pandas series
|
<p>I am trying to build a Pandas series by passing it a dictionary containing index and data pairs. While doing so I noticed an interesting quirk. When the dictionary contains <code>NaN</code> keys with an associated value, pandas Series retains the <code>NaN</code> key in the index but sets the corresponding value to <code>NaN</code> as well.</p>
<pre><code>import numpy as np
import pandas as pd
d = {np.nan: 3500.0, 66485174.0: 1.0}
d = pd.Series(d, dtype='float64')
</code></pre>
<p>In the example above, <code>3500.0</code> will be set to <code>NaN</code> by <code>pd.Series</code>.<br>
I am using pandas 0.20.2 with python 2.7.</p>
<p>Does anyone know why this happens? My intuition is that <code>NaN</code> is probably seen as an infinite number beyond 64-bit, hence there might be some format issues.</p>
|
<p>Issue has been fixed in pandas 0.23.3 (answered in comments).</p>
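<p>If you are stuck on an older pandas version, a possible workaround (a sketch, not tested against 0.20.2) is to bypass the dict-based constructor and pass the keys and values explicitly, so no dict-key alignment is performed:</p>
<pre><code>import numpy as np
import pandas as pd

d = {np.nan: 3500.0, 66485174.0: 1.0}
s = pd.Series(list(d.values()), index=list(d.keys()), dtype='float64')
</code></pre>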
|
python|python-2.7|pandas|dictionary|series
| 1
|
373,859
| 45,115,463
|
DynamicPartition returns single output instead of multiple
|
<p>Here is my code, which constructs a graph using the DynamicPartition operation to split the vector [1, 2, 3, 4, 5, 6] into two vectors [1, 2, 3] and [4, 5, 6] using the mask [1, 1, 1, 0, 0, 0]:</p>
<pre><code>@Test
public void dynamicPartition2() {
Graph graph = new Graph();
Output a = graph.opBuilder("Const", "a")
.setAttr("dtype", DataType.INT64)
.setAttr("value", Tensor.create(new long[]{6}, LongBuffer.wrap(new long[] {1, 2, 3, 4, 5, 6})))
.build().output(0);
Output partitions = graph.opBuilder("Const", "partitions")
.setAttr("dtype", DataType.INT32)
.setAttr("value", Tensor.create(new long[]{6}, IntBuffer.wrap(new int[] {1, 1, 1, 0, 0, 0})))
.build().output(0);
graph.opBuilder("DynamicPartition", "result")
.addInput(a)
.addInput(partitions)
.setAttr("num_partitions", 2)
.build().output(0);
try (Session s = new Session(graph)) {
List<Tensor> outputs = s.runner().fetch("result").run();
try (Tensor output = outputs.get(0)) {
LongBuffer result = LongBuffer.allocate(3);
output.writeTo(result);
assertArrayEquals("Shape", new long[]{3}, output.shape());
assertArrayEquals("Values", new long[]{4, 5, 6}, result.array());
}
//Test will fail here
try (Tensor output = outputs.get(1)) {
LongBuffer result = LongBuffer.allocate(3);
output.writeTo(result);
assertArrayEquals("Shape", new long[]{3}, output.shape());
assertArrayEquals("Values", new long[]{1, 2, 3}, result.array());
}
}
}
</code></pre>
<p>After calling <code>s.runner().fetch("result").run()</code>, a List of length 1 is returned with the value [4, 5, 6]. It seems that my graph produces only one output.</p>
<p>How do I obtain the remaining part of the split vector?</p>
|
<p>The <code>DynamicPartition</code> operation returns multiple outputs (one for each partition), but the <a href="https://www.tensorflow.org/api_docs/java/reference/org/tensorflow/Session.Runner.html#fetch(java.lang.String)" rel="nofollow noreferrer"><code>Session.Runner.fetch</code></a> call is only requesting the 0-th output.</p>
<p>The Java API lacks a bunch of convenience sugar that the Python API has, but you can do what you want by explicitly requesting all the outputs. In other words, change from:</p>
<pre><code>List<Tensor> outputs = s.runner().fetch("result").run();
</code></pre>
<p>to </p>
<pre><code>List<Tensor> outputs = s.runner().fetch("result", 0).fetch("result", 1).run();
</code></pre>
<p>Hope that helps.</p>
|
java|tensorflow
| 1
|
373,860
| 45,025,538
|
Tensorflow session.run() does not proceed
|
<p>I am running <a href="https://github.com/lordet01/segan/blob/master/train_segan.sh" rel="nofollow noreferrer">https://github.com/lordet01/segan/blob/master/train_segan.sh</a> on my machine. The code does not proceed past the line below (in model.py):</p>
<pre><code>sample_noisy, sample_wav, sample_z = self.sess.run([self.gtruth_noisy[0], self.gtruth_wavs[0], self.zs[0]])
</code></pre>
<p>As a TensorFlow beginner, it looks to me like this line just converts nodes to tensors.</p>
<p><strong>Could you suggest any possible reason why it does not proceed past the line above?</strong>
I really appreciate any comments :) </p>
<p>My computing environment is as follows :
Python 3.6.0 (installed by Anaconda), TensorFlow 1.2.1, Cuda 8.0, TitanX(x2)</p>
|
<p>As the question is old, and the repository is not up-to-date with the original master branch, the optimisation of the network is not working.</p>
<p>The important changes that were introduced are listed <a href="https://github.com/santi-pdp/segan/commits/master" rel="nofollow noreferrer">here</a>.</p>
<p>So basically the reason why your model never converges (or at least takes a long time) is that all the gradients are stored as lists.</p>
<p><strong>New additions</strong> that are relevant to your question are :</p>
<pre><code>def build_model(self, config):
all_d_grads = []
all_g_grads = []
d_opt = tf.train.RMSPropOptimizer(config.d_learning_rate) #these lines are new
g_opt = tf.train.RMSPropOptimizer(config.g_learning_rate) #these lines are new
</code></pre>
<p>NEW:</p>
<pre><code># deemphasize
c_res = de_emph(c_res, self.preemph)
#segan/data_loader method
def de_emph(y, coeff=0.95):
if coeff <= 0:
return y
x = np.zeros(y.shape[0], dtype=np.float32)
x[0] = y[0]
for n in range(1, y.shape[0], 1):
x[n] = coeff * x[n - 1] + y[n]
return x
</code></pre>
<p>OLD:</p>
<pre><code># There is no de-emphasis step here, which saves a considerable amount of time.
</code></pre>
<p>And minor changes like the whole conversion to a class with all values retained as object..etc.</p>
|
python|tensorflow
| 1
|
373,861
| 44,975,943
|
TensorFlow example but with middle layer
|
<p>I am trying to get this code to work. It may not look like it, but it comes mostly from the TensorFlow mnist example. I am trying to get three layers, though, and I've changed the input and output size. The input size is 12, the mid size is 6, and the output size is 2. This is what happens when I run this. It does not throw an error, but when I run the test option I always get 50%. When I go back to training it runs and I am sure the weights are changing. There is code for saving the model and weights, so I'm pretty confident it's not wiping out my weights every time I re-start it. The idea behind self.d_y_out is to have something that will allow me to run the model and get output for just one image. I think the problem is near the comment that says "PROBLEM??".</p>
<pre><code> self.d_keep = tf.placeholder(tf.float32)
self.d_W_2 = tf.Variable(tf.random_normal([mid_num, output_num], stddev=0.0001))
self.d_b_2 = tf.Variable(tf.random_normal([output_num], stddev=0.5))
self.d_x = tf.placeholder(tf.float32, [None, input_num])
self.d_W_1 = tf.Variable(tf.random_normal([input_num, mid_num], stddev=0.0001)) # 0.0004
self.d_b_1 = tf.Variable(tf.zeros([mid_num]))
self.d_y_ = tf.placeholder(tf.float32, [None, output_num])
self.d_x_drop = tf.nn.dropout(self.d_x, self.d_keep)
self.d_y_logits_1 = tf.matmul(self.d_x_drop, self.d_W_1) + self.d_b_1
self.d_y_mid = tf.nn.relu(self.d_y_logits_1)
self.d_y_mid_drop = tf.nn.dropout(self.d_y_mid, self.d_keep)
self.d_y_logits_2 = tf.matmul(self.d_y_mid_drop, self.d_W_2) + self.d_b_2
self.d_y_softmax = tf.nn.softmax_cross_entropy_with_logits(logits=self.d_y_logits_2, labels=self.d_y_)
self.d_cross_entropy = tf.reduce_mean(self.d_y_softmax) ## PROBLEM??
self.d_train_step = tf.train.GradientDescentOptimizer(0.001).minimize(self.d_cross_entropy) # 0.0001
# train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy) #0.5
#self.d_y_out = tf.argmax(self.d_y, 1) ## for prediction
self.d_y_out = tf.argmax(self.d_y_logits_2, 1, name="d_y_out")
if self.train :
for i in range(self.start_train,self.cursor_tot): #1000
batch_xs, batch_ys = self.get_nn_next_train(self.batchsize)
self.sess.run(self.d_train_step, feed_dict={self.d_x: batch_xs, self.d_y_: batch_ys, self.d_keep: 0.5})
if True: #mid_num > 0:
cost = self.sess.run([self.d_cross_entropy, self.d_train_step],
feed_dict={self.d_x: batch_xs, self.d_y_: batch_ys, self.d_keep: 0.5})
print cost[0], "cost"
if self.test :
d_correct_prediction = tf.equal(self.d_y_out, tf.argmax(self.d_y_,1))
#d_correct_prediction = tf.equal(tf.argmax(self.d_y , 1), tf.argmax(self.d_y_, 1))
d_accuracy = tf.reduce_mean(tf.cast(d_correct_prediction, tf.float32))
if self.use_loader : self.get_nn_next_test(self.batchsize)
print(self.sess.run([d_accuracy, self.d_cross_entropy],
feed_dict={self.d_x: self.mnist_test.images, self.d_y_: self.mnist_test.labels, self.d_keep: 1.0}))
if self.predict_dot :
for i in range(start, stop ) :
batch_0, batch_1 = self.get_nn_next_predict(self.batchsize)
if len(batch_0) > 0 :
out.extend( self.sess.run([self.d_y_out, self.d_cross_entropy],
feed_dict={self.d_x : batch_0, self.d_y_: batch_1, self.d_keep: 1.0})[0])
print "out" , len(out) , i, self.cursor_tot, out[:10],"..."
</code></pre>
<p><strong>EDIT</strong> I've edited the code in this question significantly. Much thanks to vijay m for getting me this far. Any help would be appreciated. Thanks.</p>
|
<p>The problem in this code is that you are calling <code>dropout</code> on the inputs. Yours is a single-hidden-layer network and you don't need <code>dropout</code>. Also, use a momentum-based optimizer like <code>Adam</code> to train faster. The changes I made:</p>
<pre><code>d_y_logits_1 = tf.matmul(d_x, d_W_1) + d_b_1
d_y_mid = tf.nn.relu(d_y_logits_1)
d_y_logits_2 = tf.matmul(d_y_mid, d_W_2) + d_b_2
d_y_softmax = tf.nn.softmax_cross_entropy_with_logits(logits=d_y_logits_2, labels=d_y_)
d_cross_entropy = tf.reduce_mean(d_y_softmax)
d_train_step = tf.train.AdamOptimizer(0.01).minimize(d_cross_entropy)
</code></pre>
|
python|tensorflow|mnist
| 0
|
373,862
| 45,260,736
|
how do I avoid strings being read as bytes when reading a HDF 5 file into Pandas?
|
<p>Currently, the data in the h5 file does not have the prefix 'b'. I read the h5 file with the following code, and I wonder whether there is a better way to read it so the strings come out without the 'b' prefix.</p>
<pre><code>import tables as tb
import pandas as pd
import numpy as np
import time
time0=time.time()
pth='d:/download/'
# read data
data_trading=pth+'Trading_v01.h5'
filem=tb.open_file(data_trading,mode='a',driver="H5FD_CORE")
tb_trading=filem.get_node(where='/', name='wind_data')
df=pd.DataFrame.from_records(tb_trading[:])
time1=time.time()
print('\ntime on reading data %6.3fs' %(time1-time0))
# in python3, remove prefix 'b'
df.loc[:,'Date']=[[dt.decode('utf-8')] for dt in df.loc[:,'Date']]
df.loc[:,'Code']=[[cd.decode('utf-8')] for cd in df.loc[:,'Code']]
time2=time.time()
print("\ntime on removing prefix 'b' %6.3fs" %(time2-time1))
print('\ntotal time %6.3fs' %(time2-time0))
</code></pre>
<p>the result of time</p>
<blockquote>
<p>time on reading data 1.569s</p>
<p>time on removing prefix 'b' 29.921s</p>
<p>total time 31.490s</p>
</blockquote>
<p>As you can see, removing the prefix 'b' is really time consuming.</p>
<p>I have tried to use <code>pd.read_hdf</code>, which doesn't produce the prefix 'b'.</p>
<pre><code>%time df2=pd.read_hdf(data_trading)
Wall time: 14.7 s
</code></pre>
<p>which so far is faster.</p>
<hr>
<p>Using <a href="https://stackoverflow.com/questions/40389764/how-to-translate-bytes-objects-into-literal-strings-in-pandas-dataframe-pytho/40390108#40390108">this SO answer</a> and using a vectorised <code>str.decode</code>, I can cut the conversion time to 9.1 seconds (and thus the total time less than 11 seconds):</p>
<pre><code> for key in ['Date', 'Code']:
df[key] = df[key].str.decode("utf-8")
</code></pre>
<hr>
<p>Question: is there an even more effective way to convert my bytes columns to string when reading a HDF 5 data table?</p>
|
<p>The best solution for performance is to stop trying to "remove the <code>b</code> prefix." The <code>b</code> prefix is there because your data consists of bytes, and Python 3 insists on displaying this prefix to indicate bytes in many places. Even places where it makes no sense such as the output of the built-in <code>csv</code> module.</p>
<p>But inside your own program this may not hurt anything, and in fact if you want the highest performance you may be better off leaving these columns as <code>bytes</code>. This is especially true if you're using Python 3.0 to 3.2, which always use multi-byte unicode representation (<a href="https://stackoverflow.com/questions/26079392/how-is-unicode-represented-internally-in-python">see</a>).</p>
<p>Even if you are using Python 3.3 or later, where the conversion from bytes to unicode doesn't cost you any extra space, it may still be a waste of time if you have a lot of data.</p>
<p>Finally, Pandas is not optimal if you are dealing with columns of mostly unique strings which have a somewhat consistent width. For example if you have columns of text data which are license plate numbers, all of them will fit in about 9 characters. The inefficiency arises because Pandas does not exactly have a string column type, but instead uses an <a href="https://stackoverflow.com/questions/21018654/strings-in-a-dataframe-but-dtype-is-object"><code>object</code> column type</a>, which contains pointers to strings stored separately. This is bad for CPU caches, bad for memory bandwidth, and bad for memory consumption (again, if your strings are mostly unique and of similar lengths). If your strings have highly variable widths, it may be worth it because a short string takes only its own length plus a pointer, whereas the fixed-width storage typical in NumPy and HDF5 takes the full column width for every string (even empty ones).</p>
<p>To get fast, fixed-width string columns in Python, you may consider using NumPy, which you can read via the excellent <a href="http://docs.h5py.org/en/latest/strings.html" rel="nofollow noreferrer"><code>h5py</code></a> library. This will give you a NumPy array which is a lot more similar to the underlying data stored in HDF5. It may still have the <code>b</code> prefix, because Python insists that non-unicode strings always display this prefix, but that's not necessarily something you should try to prevent.</p>
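<p>A minimal sketch of that approach, assuming the same file and node name as in the question (<code>/wind_data</code>):</p>
<pre><code>import h5py

with h5py.File('d:/download/Trading_v01.h5', 'r') as f:
    arr = f['wind_data'][:]            # structured NumPy array
# string fields such as arr['Date'] stay as fixed-width bytes (dtype 'S...'),
# which is compact and fast; decode only if and when you really need unicode:
# dates = arr['Date'].astype('U')
</code></pre>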
|
python|pandas|hdf5
| 1
|
373,863
| 45,255,442
|
Sympy: How to compute Lie derivative of matrix with respect to a vector field
|
<p>I have a system(x'=f(x)+g(x)u), such that <code>f(x) is f:R3->R3</code> and <code>g(x) is g:R3->R(3x2)</code>.
My system is</p>
<p><a href="https://i.stack.imgur.com/SpMYF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SpMYF.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/9mdwY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9mdwY.png" alt="enter image description here"></a></p>
<p>As you can see, it is a MIMO nonlinear control system and I wish to find the controllability matrix for my system. Controllability matrix in this case is formulated by<br>
C=[g [f,g] [f,[f,g]] ..],<br>
where [f,g] denotes the Lie bracket operation between f and g.
That is the reason why I need to compute the Lie derivative of a matrix with respect to a vector field and vice versa, because <code>[f,g] = (dg/dx)*f - (df/dx)*g</code>.</p>
<p>Here in my system, f is 3x1 and g is 3x2 as there are two inputs available.
And I wish to calculate the above matrix C in Python.
My system is<br>
<code>f=sm.Matrix([[x1**2],[sin(x1)+x3**2],[cos(x3)+x1**2]])</code> and<br>
<code>g=sm.Matrix([[cos(x1),0],[x1**2,x2],[0,0]])</code>.</p>
<p>My code is:</p>
<pre><code>from sympy.diffgeom import *
from sympy import sqrt,sin,cos
M = Manifold("M",3)
P = Patch("P",M)
coord = CoordSystem("coord",P,["x1","x2","x3"])
x1,x2,x3 = coord.coord_functions()
e_x1,e_x2,e_x3 = coord.base_vectors()
f = x1**2*e_x1 + (sin(x1)+x3**2)*e_x2 + (cos(x3) + x1**2)*e_x3
g = (cos(x1))*e_x1+(x1**2,x2)*e_x2 + 0*e_x3
#h1 = x1
#h2 = x2
#Lfh1 = LieDerivative(f,h1)
#Lfh2 = LieDerivative(f,h2)
#print(Lfh1)
#print(Lfh2)
Lfg = LieDerivative(f,g)
print(Lfg)
</code></pre>
<p>Why isn't my code giving me correct answer?</p>
|
<p>The only error in your code is due to the tuple used for multiple inputs. For LieDerivative in Sympy.diffgeom to work you need a vector field defined properly. </p>
<p>For single input systems, the exact code that you have works without the tuple, so, in that case, for example if you have
<code>g = (cos(x1))*e_x1+x1**2*e_x2 + 0*e_x3</code></p>
<p>(that is g(x) is 3 x 1 matrix with just the first column). Then, making the above change, you get the correct Lie derivatives.</p>
<p>For the multiple-input case (as in your question), you can simply separate the two columns into g1 and g2 and proceed as in the single-input case above. This works because, for multiple inputs, the derivative can be taken column by column
(<a href="https://i.stack.imgur.com/QPX0u.png" rel="nofollow noreferrer">see the math here</a>),</p>
<p>where g_1 and g_2 are the two columns. The final result for Lgh is a 1 x 2 matrix which you can basically get if you have the two results calculated above (Lg1h and Lg2h).</p>
<p>Code - </p>
<pre><code>from sympy.diffgeom import *
from sympy import sqrt,sin,cos
from sympy import *
M = Manifold("M",3)
P = Patch("P",M)
coord = CoordSystem("coord",P,["x1","x2","x3"])
x1,x2,x3 = coord.coord_functions()
e_x1,e_x2,e_x3 = coord.base_vectors()
f = x1**2*e_x1 + (sin(x1)+x3**2)*e_x2 + (cos(x3) + x1**2)*e_x3
g1 = (cos(x1))*e_x1+(x1**2)*e_x2 + 0*e_x3
g2 = 0*e_x1+ x2*e_x2 + 0*e_x3
h = x1
Lg1h = LieDerivative(g1,h)
Lg2h = LieDerivative(g2,h)
Lgh = [Lg1h, Lg2h]
print(Lgh)
</code></pre>
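<p>If <code>LieDerivative</code> also accepts a vector field as its second argument (in <code>sympy.diffgeom</code> the Lie derivative of a vector field along another one is their commutator), the brackets needed for the controllability matrix can be requested the same way. An untested sketch, continuing from the definitions above:</p>
<pre><code>fg1 = LieDerivative(f, g1)   # should correspond to the Lie bracket [f, g1]
fg2 = LieDerivative(f, g2)   # and [f, g2]
print(fg1)
print(fg2)
</code></pre>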
|
python|numpy|vector|sympy|derivative
| 0
|
373,864
| 56,939,854
|
nyoka AttributeError: The layer has never been called and thus has no defined input shape
|
<p>I'm trying to output a trained Tensorflow 2.0 model to PMML using the nyoka package. When I do so, it errors out. The problem seems to be different from that in <a href="https://stackoverflow.com/questions/55321864/attributeerror-the-layer-has-never-been-called-and-thus-has-no-defined-input-sh">this answer</a>,
even though the error is the same, because my model does not have a complicated creation function and does, in fact, train appropriately and transform appropriately.</p>
<pre class="lang-py prettyprint-override"><code>def create_and_train(x_training,y_training,n_cols_in,modelparams):
layers = [tf.keras.layers.Dense(n_cols_in,activation="relu"),
tf.keras.layers.Dropout(.5)]
for param in modelparams:
layers.extend([tf.keras.layers.Dense(param,activation="sigmoid"),tf.keras.layers.Dropout(.5)])
layers.append(tf.keras.layers.Dense(1,activation="sigmoid"))
model = tf.keras.models.Sequential(layers)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[tf.keras.metrics.AUC()])
model.fit(x_training, y_training, epochs = epochs)
with open("NN"+"_".join([str(m) for m in modelparams])+".pmml","w") as pmml_file:
pmml = KerasToPmml(model)
pmml.export(pmml_file)
</code></pre>
<p>Instead of a PMML file, I'm getting</p>
<pre><code>AttributeError: The layer has never been called and thus has no defined input shape.
</code></pre>
|
<p>This error comes from TensorFlow rather than Nyoka. If you can print <code>input_shape</code>, <code>output_shape</code>, or the weights for each layer, then the shapes are fully defined and you will be able to export the model with Nyoka as well.</p>
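<p>A hedged sketch of what that can look like in the question's <code>create_and_train</code>: give the first layer an explicit <code>input_shape</code> (or call <code>model.build(...)</code>) so that every layer reports a defined input/output shape before <code>KerasToPmml</code> is called. Here <code>n_cols_in</code> is the feature count from the question:</p>
<pre class="lang-py prettyprint-override"><code>layers = [tf.keras.layers.Dense(n_cols_in, activation="relu", input_shape=(n_cols_in,)),
          tf.keras.layers.Dropout(.5)]
# ... add the remaining layers as before ...
model = tf.keras.models.Sequential(layers)
model.build(input_shape=(None, n_cols_in))
for layer in model.layers:
    print(layer.name, layer.input_shape, layer.output_shape)
</code></pre>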
|
python|tensorflow|pmml
| 0
|
373,865
| 57,181,931
|
Converting dtype: period[M] to string format
|
<p>I have converted my dates into a period[M] dtype because I only need the month and year, not the day. Unfortunately I cannot plot with this format, so I now want to convert it to strings.</p>
<p>I need to group my data so I can print out some graphs by month,
but I keep getting a JSON serialization error when my data is <code>dtype: period[M]</code>,
so I want to convert it to strings.</p>
<pre><code>df['Date_Modified'] = pd.to_datetime(df['Collection_End_Date']).dt.to_period('M')
#Add a new column called Date Modified to show just month and year
df = df.groupby(["Date_Modified", "Entity"]).sum().reset_index()
#Group the data frame by the new column and then Company and sum the values
df["Date_Modified"].index = df["Date_Modified"].index.strftime('%Y-%m')
</code></pre>
<p>It returns a string of numbers, but I need it to return a string output.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.strftime.html" rel="nofollow noreferrer"><code>Series.dt.strftime</code></a> for set <code>Series</code> to strings in last step:</p>
<pre><code>df["Date_Modified"]= df["Date_Modified"].dt.strftime('%Y-%m')
</code></pre>
<p>Or set it before <code>groupby</code>, then converting to month period is not necessary:</p>
<pre><code>df['Date_Modified'] = pd.to_datetime(df['Collection_End_Date']).dt.strftime('%Y-%m')
df = df.groupby(["Date_Modified", "Entity"]).sum().reset_index()
</code></pre>
|
python|pandas|numpy
| 1
|
373,866
| 57,052,605
|
Unable to reload the config file on the fly in Tensorflow serving
|
<p>I am following the link below to create a script that reloads the config file of TensorFlow Serving on the fly.</p>
<p><a href="https://stackoverflow.com/questions/54440762/tensorflow-serving-update-model-config-add-additional-models-at-runtime/54455066#54455066">TensorFlow Serving: Update model_config (add additional models) at runtime</a></p>
<p>But, I am getting the following error</p>
<pre><code>raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
debug_error_string = "
{"created":"@1563280495.867330024","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file
_line":3381,"referenced_errors":[{"created":"@1563280495.867323165","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":453,"grpc_status":14}]}"
</code></pre>
|
<p>I resolved the issue by publishing port 8500 when starting the TensorFlow Serving Docker container and pointing the gRPC client at the same port.</p>
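<p>For reference, a hedged sketch of that setup (8500 is TensorFlow Serving's default gRPC port; 8501 is the REST port):</p>
<pre><code># start the container with the gRPC port published, e.g.
#   docker run -p 8500:8500 -t tensorflow/serving --model_config_file=/models/models.config ...
import grpc

channel = grpc.insecure_channel('localhost:8500')   # point the gRPC stub from the linked answer at this channel
</code></pre>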
|
tensorflow|tensorflow-serving
| 0
|
373,867
| 57,257,533
|
I need help to create mnist.pkl.gz from my own set of images
|
<p>I am new to python and machine learning.
I successfully tested the DBN.py examples from deeplearning. Now I want to put my own set of images into the mnist.pkl.gz format.</p>
<p>I already tried some code from a project named JPG-PNG-to-MNIST-NN-Format
on GitHub, but it gives me the IDX format.
I used some code to convert this IDX format to mnist.pkl, but I found that there should be a validation set of images, which is not present in JPG-PNG-to-MNIST-NN-Format, and my DBN.py code gives me the error "ran out of input".
I even tried
<a href="https://stackoverflow.com/questions/26107927/how-to-put-my-dataset-in-a-pkl-file-in-the-exact-format-and-data-structure-used/57256991#57256991">How to put my dataset in a .pkl file in the exact format and data structure used in "mnist.pkl.gz"?</a>
but I don't know how to prepare the *.csv labels. This is my code:</p>
<pre><code>from PIL import Image
from numpy import genfromtxt
import gzip, cPickle
from glob import glob
import numpy as np
import pandas as pd
def dir_to_dataset(glob_files, loc_train_labels=""):
print("Gonna process:\n\t %s"%glob_files)
dataset = []
for file_count, file_name in enumerate( sorted(glob(glob_files),key=len) ):
image = Image.open(file_name)
img = Image.open(file_name).convert('LA') #tograyscale
pixels = [f[0] for f in list(img.getdata())]
dataset.append(pixels)
if file_count % 1000 == 0:
print("\t %s files processed"%file_count)
# outfile = glob_files+"out"
# np.save(outfile, dataset)
if len(loc_train_labels) > 0:
df = pd.read_csv(loc_train_labels)
return np.array(dataset), np.array(df["class"])
else:
return np.array(dataset)
Data1, y1 = dir_to_dataset("train\\*.png","train.csv")
Data2, y2 = dir_to_dataset("valid\\*.png","valid.csv")
Data3, y3 = dir_to_dataset("test\\*.png","test.csv")
# Data and labels are read
train_set_x = Data1[:7717]
train_set_y = y1[:7717]
val_set_x = Data2[:1653]
val_set_y = y2[:1653]
test_set_x = Data3[:1654]
test_set_y = y3[:1654]
# Divided dataset into 3 parts. I had 6281 images.
train_set = train_set_x, train_set_y
val_set = val_set_x, val_set_y
test_set = test_set_x, val_set_y
dataset = [train_set, val_set, test_set]
f = gzip.open('mnist.pkl.gz','wb')
cPickle.dump(dataset, f, protocol=2)
f.close()
</code></pre>
<p>but I get these errors</p>
<pre><code>Gonna process:
train\*.png
Traceback (most recent call last):
File "to-mnist.py", line 27, in <module>
Data1, y1 = dir_to_dataset("train\\*.png","train.csv")
File "to-mnist.py", line 22, in dir_to_dataset
return np.array(dataset), np.array(df["class"])
File "/home/alireza/.local/lib/python2.7/site-packages/pandas/core/frame.py", line 2927, in __getitem__
indexer = self.columns.get_loc(key)
File "/home/alireza/.local/lib/python2.7/site-packages/pandas/core/indexes/base.py", line 2659, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas/_libs/index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 132, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/hashtable_class_helper.pxi", line 1601, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas/_libs/hashtable_class_helper.pxi", line 1608, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'class'
</code></pre>
<p>I think this has something to do with my *.csv files. The *.csv files are plain text documents with classes of 0 and 1 in them, something like this:</p>
<pre><code>0
0
0
0
0
0
1
1
1
1
</code></pre>
|
<p>Thank you so much for answering my question.
I made a project on GitHub and put all my data in it to create the mnist.pkl.gz dataset, for anyone who, like me, is at the beginning of deep learning.</p>
<p>you can find it here
<a href="https://github.com/tikroute/mnist.pkl.gz-dataset-creator" rel="nofollow noreferrer">https://github.com/tikroute/mnist.pkl.gz-dataset-creator</a></p>
<p>Hope it helps other students in this field :)</p>
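<p>For anyone hitting the same <code>KeyError: 'class'</code> as in the question: the label files shown have no header row, so pandas treats the first label as the column name. A small sketch of the fix, replacing the <code>read_csv</code> call inside <code>dir_to_dataset</code>:</p>
<pre><code>df = pd.read_csv(loc_train_labels, header=None, names=["class"])
return np.array(dataset), np.array(df["class"])
</code></pre>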
|
python|pandas|mnist
| 0
|
373,868
| 57,076,823
|
How to push rows of an existing dataframe to a new dataframe based on a condition?
|
<p>I currently have a dataframe with the elevation information for houses. I would like to separate this into different dataframes based on a condition. I have the following: </p>
<pre><code>minor = data[data.NAVD88 <= 5]
moderate = data[data.NAVD88 > 5] and data[data.NAVD88 < 7]
major = data[data.NAVD88 >= 7]
</code></pre>
<p>However, the moderate line doesn't seem to work, and I get the following error:</p>
<pre><code>The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>What is the correct syntax to have this work properly?</p>
|
<p>Use bitwise <em>and</em> via <code>&</code>, and because of operator precedence add <code>()</code> around each condition in the chained boolean mask:</p>
<pre><code>minor = data[data.NAVD88 <= 5]
moderate = data[(data.NAVD88 > 5) & (data.NAVD88 < 7)]
major = data[data.NAVD88 >= 7]
</code></pre>
|
python|pandas
| 0
|
373,869
| 57,150,633
|
How to change a simple network dataframe to a correlation table?
|
<p>I have a dataframe which is in this format</p>
<pre><code> from to weight
0 A D 3
1 B A 5
2 C E 6
3 A C 2
</code></pre>
<p>I wish to convert this to a correlation-type dataframe which would look like this - </p>
<pre><code> A B C D E
A 0 0 2 3 0
B 5 0 0 0 0
C 0 0 0 0 6
D 0 0 0 0 0
E 0 0 0 0 0
</code></pre>
<p>I thought a possible (read naïve) solution would be to loop over the dataframe and then assign the values to the correct cells of another dataframe by comparing the rows and columns.</p>
<p>Something similar to this:</p>
<pre><code>new_df = pd.DataFrame(columns=sorted(set(df["from"])), index=sorted(set(df["from"])))
for i in range(len(df)):
    new_df.loc[df.iloc[i, 0], df.iloc[i, 1]] = df.iloc[i, 2]
</code></pre>
<p>And that worked. However, I've read about not looping over Pandas dataframes <a href="https://stackoverflow.com/a/55557758/10949324">here</a>. </p>
<p>The primary issue is that my dataframe is larger than this - a couple thousand rows. So I wish to know if there's another solution to this since this method doesn't sit well with me in terms of being Pythonic. Possibly faster as well, since speed is a concern.</p>
|
<p>IIUC, this is a pivot with reindex:</p>
<pre><code>all_vals = sorted(set(df['from']) | set(df['to']))   # every node: ['A', 'B', 'C', 'D', 'E']

(df.pivot(index='from', columns='to', values='weight')
   .reindex(all_vals)
   .reindex(all_vals, axis=1)
   .fillna(0)
)
</code></pre>
<p>Output:</p>
<pre><code>to A B C D E
from
A 0.0 0.0 2.0 3.0 0.0
B 5.0 0.0 0.0 0.0 0.0
C 0.0 0.0 0.0 0.0 6.0
D 0.0 0.0 0.0 0.0 0.0
E 0.0 0.0 0.0 0.0 0.0
</code></pre>
|
python|pandas|dataframe
| 1
|
373,870
| 57,067,335
|
How to identify duplicate entries in pandas
|
<p>I have a dataframe as follows.</p>
<pre><code> title description
0 mmm mmm
1 mmm mmm
2 mmm mmm
3 mmm mmm
4 mmm mmm
5 mmm mmm
6 mmm mmm
7 nnn nnn
8 nnn nnn
9 lll lll
10 jjj jjj
</code></pre>
<p>I want to keep one entry and remove all other duplicate entries, while returning another dataframe that includes details of the removed entries from the above dataframe.</p>
<p>For example, the output should be;</p>
<pre><code> title description
0 mmm mmm
1 nnn nnn
2 lll lll
3 jjj jjj
</code></pre>
<p>and the details of the removed entries should be output as:</p>
<pre><code> title description count
0 mmm mmm 6
1 nnn nnn 1
</code></pre>
<p>My current code is as follows.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"title":["mmm", "mmm", "mmm", "mmm", "mmm", "mmm", "mmm", "nnn", "nnn", "lll", "jjj"], "description":["mmm", "mmm", "mmm", "mmm", "mmm", "mmm", "mmm", "nnn", "nnn", "lll", "jjj"]})
df.drop_duplicates()
</code></pre>
<p>However, it removes all the duplicates (which is not my intention).</p>
<p>Is it possible to do this in pandas in python?</p>
<p>I am happy to provide more details if needed.</p>
|
<p>This approach uses <code>duplicated</code> + <code>groupby.size</code>.</p>
<p>First question </p>
<pre><code>df[~df.duplicated()]
title description
0 mmm mmm
7 nnn nnn
9 lll lll
10 jjj jjj
</code></pre>
<p>Second question </p>
<pre><code>df[df.duplicated()].groupby(['title','description']).size()
title description
mmm mmm 6
nnn nnn 1
dtype: int64
</code></pre>
|
pandas
| 1
|
373,871
| 57,113,644
|
Create 1-column dataframe from multi-index pandas series
|
<p>I have a multi-index series like this:</p>
<pre class="lang-py prettyprint-override"><code>Year Month
2012 1 444
2 222
3 333
4 1101
</code></pre>
<p>which I want to turn into:</p>
<pre class="lang-py prettyprint-override"><code>Date Value
2012-01 444
2012-02 222
2012-03 333
2012-04 1101
</code></pre>
<p>to plot a line.</p>
<p>I have tried both <code>series.unstack(level=0)</code> and <code>series.unstack(level=1)</code>, but this creates a matrix</p>
<pre class="lang-py prettyprint-override"><code>In[1]: series.unstack(level=0)
Out[1]:
Year 2012 2013 2014 2015 2016 2017 2018
Month
1 444 ... ... ... ... ... ...
2 222 ... ... ... ... ... ...
3 333 ... ... ... ... ... ...
4 1101 ... ... ... ... ... ...
</code></pre>
<p>What am I missing?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.to_frame.html" rel="nofollow noreferrer"><code>Index.to_frame</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a>; this works if you also add a <code>Day</code> column and then reassign the result back to the index:</p>
<pre><code>s.index = pd.to_datetime(s.index.to_frame().assign(Day=1))
print (s)
2012-01-01 444
2012-02-01 222
2012-03-01 333
2012-04-01 1101
Name: a, dtype: int64
</code></pre>
<p>For one column <code>DataFrame</code> use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.to_frame.html" rel="nofollow noreferrer"><code>Series.to_frame</code></a>:</p>
<pre><code>df1 = s.to_frame('Value')
print (df1)
Value
2012-01-01 444
2012-02-01 222
2012-03-01 333
2012-04-01 1101
</code></pre>
<p>If need <code>PeriodIndex</code> add <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.to_period.html" rel="nofollow noreferrer"><code>Series.dt.to_period</code></a>:</p>
<pre><code>s.index = pd.to_datetime(s.index.to_frame().assign(Day=1)).dt.to_period('m')
print (s)
2012-01 444
2012-02 222
2012-03 333
2012-04 1101
Freq: M, Name: a, dtype: int64
df2 = s.to_frame('Value')
print (df2)
Value
2012-01 444
2012-02 222
2012-03 333
2012-04 1101
</code></pre>
|
python|pandas|series
| 2
|
373,872
| 56,915,656
|
Inefficient preprocessing in Python
|
<p>I have to do preprocessing on <em>some</em> .csv files. These .csv files are matrices of audio features from the TIMIT dataset; basically each is a matrix of #samples * 123 features.
I would like to run a sliding window over the samples.</p>
<p>I wrote this class:</p>
<pre class="lang-py prettyprint-override"><code>import glob
import pandas as pd
import numpy as np
from math import floor
from keras.utils.np_utils import to_categorical
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import MinMaxScaler
import numpy as np
import time
import datetime
import progressbar
class MyDataGenerator:
def __init__(self, path):
self.__path = path
self.__path_train = path[0]
self.__path_test = path[1]
self.__path_validation = path[2]
def generate_overlapping_chunks(self, timesteps, compact = True):
print("reading train:")
data_train = self.generate_data_frame(self.__path_train)
print("reading test:")
data_test = self.generate_data_frame(self.__path_test)
print("reading validation:")
data_validation = self.generate_data_frame(self.__path_validation)
if compact:
data_train = self.compact_class(data_train)
data_test = self.compact_class(data_test)
data_validation = self.compact_class(data_validation)
train_n, test_n, validation_n = self.min_max_scale_skl(data_train, data_test, data_validation)
print("train:")
train_data, train_label = self.generate_chunks(data_train, train_n, timesteps)
print("test:")
test_data, test_label = self.generate_chunks(data_test, test_n, timesteps)
print("validation:")
validation_data, validation_label = self.generate_chunks(data_validation, validation_n, timesteps)
train_label, test_label, validation_label = self.encode_label(train_label, test_label, validation_label)
return train_data, train_label, test_data, test_label, validation_data, validation_label
def compact_class(self, data_file):
data_file.loc[data_file['phoneme'] == 'ux', 'phoneme'] = 'uw'
data_file.loc[data_file['phoneme'] == 'axr', 'phoneme'] = 'er'
data_file.loc[data_file['phoneme'] == 'em', 'phoneme'] = 'm'
data_file.loc[data_file['phoneme'] == 'nx', 'phoneme'] = 'n'
data_file.loc[data_file['phoneme'] == 'eng', 'phoneme'] = 'ng'
data_file.loc[data_file['phoneme'] == 'hv', 'phoneme'] = 'hh'
data_file.loc[data_file['phoneme'] == 'h#', 'phoneme'] = 'sil'
data_file.loc[data_file['phoneme'] == 'pau', 'phoneme'] = 'sil'
data_file.loc[data_file['phoneme'] == 'pcl', 'phoneme'] = 'sil'
data_file.loc[data_file['phoneme'] == 'tcl', 'phoneme'] = 'sil'
data_file.loc[data_file['phoneme'] == 'kcl', 'phoneme'] = 'sil'
data_file.loc[data_file['phoneme'] == 'bcl', 'phoneme'] = 'sil'
data_file.loc[data_file['phoneme'] == 'dcl', 'phoneme'] = 'sil'
data_file.loc[data_file['phoneme'] == 'gcl', 'phoneme'] = 'sil'
data_file.loc[data_file['phoneme'] == 'epi', 'phoneme'] = 'sil'
data_file.loc[data_file['phoneme'] == 'zh', 'phoneme'] = 'sh'
data_file.loc[data_file['phoneme'] == 'en', 'phoneme'] = 'n'
data_file.loc[data_file['phoneme'] == 'el', 'phoneme'] = 'l'
data_file.loc[data_file['phoneme'] == 'ix', 'phoneme'] = 'ih'
data_file.loc[data_file['phoneme'] == 'ax', 'phoneme'] = 'ah'
data_file.loc[data_file['phoneme'] == 'ax-h', 'phoneme'] = 'ah'
data_file.loc[data_file['phoneme'] == 'ao', 'phoneme'] = 'aa'
return data_file
def generate_data_frame(self, path):
data = pd.DataFrame()
tot = len(glob.glob(path))
bar = progressbar.ProgressBar(maxval=tot, widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
i = 0
bar.start()
for file_name in glob.iglob(path):
data_file = pd.read_csv(file_name)
data = pd.concat((data, data_file))
i = i+1
bar.update(i)
bar.finish()
data = data.rename(columns={'Unnamed: 0': 'frame'})
return data
def min_max_scale_skl(self, train, test, validation):
scaler = MinMaxScaler(feature_range=(-1, 1))
scaler = scaler.fit(np.concatenate((train.iloc[:, 1:124], test.iloc[:, 1:124], validation.iloc[:, 1:124])))
return scaler.transform(train.iloc[:, 1:124]), scaler.transform(test.iloc[:, 1:124]), scaler.transform(validation.iloc[:, 1:124])
def generate_chunks(self, data, data_norm, timesteps):
label = np.empty(0)
data_np = np.empty((1, timesteps, 123))
b = range(timesteps, data.shape[0]+1)
bar = progressbar.ProgressBar(maxval=data.shape[0]-timesteps, widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
bar.start()
for i in range(0, data.shape[0]-timesteps+1):
c = ((data_norm[i:b[i]])).reshape(1, timesteps, (124-1))
data_np = np.concatenate((data_np, c))
label = np.concatenate((label, [data.iloc[i+floor(timesteps/2)]['phoneme']]))
bar.update(i)
bar.finish()
return data_np[1:], label
def encode_label(self, train, test, val):
encoder = LabelEncoder()
encoder.fit(
np.concatenate(
(train, np.concatenate((test, val)))
)
)
train_encoded_labels = encoder.transform(train)
test_encoded_labels = encoder.transform(test)
val_encoded_labels = encoder.transform(val)
return to_categorical(train_encoded_labels), to_categorical(test_encoded_labels), to_categorical(val_encoded_labels)
</code></pre>
<p>I noticed that </p>
<pre class="lang-py prettyprint-override"><code>generate_chunks(self, data, data_norm, timesteps)
</code></pre>
<p>is very slow. Last execution took me more than 40 hours on an Intel Xeon E5-1620 v3. I use Python 3.6.8 installed with Anaconda.
Any idea to boost this crappy code? </p>
|
<p>Try to divide your records into smaller chunks and then process them in parallel.
Here is a great discussion with easy examples:
<a href="https://stackoverflow.com/questions/2846653/how-to-use-threading-in-python">How to use threading in Python?</a></p>
<p>Also, there is the option to use Cython (roughly, Python compiled down to C), which can be helpful for huge loops and the like.</p>
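<p>A rough sketch of the chunk-and-parallelise idea, assuming the windowing in <code>generate_chunks</code> can be applied independently to ranges of starting rows (the helper names here are made up for illustration). Building each chunk in one go also avoids re-concatenating a growing array on every iteration, which is a large part of the cost in the original loop:</p>
<pre class="lang-py prettyprint-override"><code>from concurrent.futures import ProcessPoolExecutor
import numpy as np

def window_chunk(args):
    data_norm, start, stop, timesteps = args
    # one (timesteps, n_features) window per starting row in [start, stop)
    return np.stack([data_norm[i:i + timesteps] for i in range(start, stop)])

def parallel_windows(data_norm, timesteps, n_jobs=4):
    n_windows = data_norm.shape[0] - timesteps + 1
    bounds = np.linspace(0, n_windows, n_jobs + 1, dtype=int)
    jobs = [(data_norm, b, e, timesteps)
            for b, e in zip(bounds[:-1], bounds[1:]) if e > b]
    with ProcessPoolExecutor(max_workers=n_jobs) as pool:   # note: data_norm is pickled once per job
        parts = list(pool.map(window_chunk, jobs))
    return np.concatenate(parts)
</code></pre>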
|
python|pandas|numpy
| 1
|
373,873
| 56,979,461
|
How to use multi-gpu during inference in pytorch framework
|
<p>I am trying to run model predictions with a unet3d model built on the PyTorch framework. I am using multiple GPUs.</p>
<pre><code>import torch
import os
import torch.nn as nn
os.environ['CUDA_DEVICE_ORDER']='PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES']='0,1,2'
model = unet3d()
model = nn.DataParallel(model)
model = model.to('cuda')
result = model.forward(torch.tensor(input).to('cuda').float())
</code></pre>
<p>But the model still uses only 1 GPU (the first one) and I get memory error.</p>
<pre><code>CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 11.00 GiB total capacity; 8.43 GiB already allocated; 52.21 MiB free; 5.17 MiB cached)
</code></pre>
<p>How shoudl I use Multi-GPUs during inference phase? What is the mistake in my script above? </p>
|
<p>DataParallel handles sending the data to gpu.</p>
<pre><code>import torch
import os
import torch.nn as nn
os.environ['CUDA_DEVICE_ORDER']='PCI_BUS_ID'
os.environ['CUDA_VISIBLE_DEVICES']='0,1,2'
model = unet3d()
model = nn.DataParallel(model.cuda())
result = model.forward(torch.tensor(input).float())
</code></pre>
<p>if this doesn't work, please provide more details about <code>input</code>.</p>
<p>[EDIT]:</p>
<p>Try this:</p>
<pre><code>with torch.no_grad():
result = model(torch.tensor(input).float())
</code></pre>
|
pytorch|multi-gpu
| 3
|
373,874
| 57,188,409
|
Assigning a parameter to the GPU sets is_leaf as false
|
<p>If I create a <code>Parameter</code> in PyTorch, then it is automatically assigned as a leaf variable:</p>
<pre><code>x = torch.nn.Parameter(torch.Tensor([0.1]))
print(x.is_leaf)
</code></pre>
<p>This prints out <code>True</code>. From what I understand, if <code>x</code> is a leaf variable, then it will be updated by the optimiser.</p>
<p>But if I then assign <code>x</code> to the GPU:</p>
<pre><code>x = torch.nn.Parameter(torch.Tensor([0.1]))
x = x.cuda()
print(x.is_leaf)
</code></pre>
<p>This prints out <code>False</code>. So now I cannot assign <code>x</code> to the GPU and keep it as a leaf node.</p>
<p>Why does this happen?</p>
|
<p>The answer is in the <a href="https://pytorch.org/docs/stable/autograd.html#torch.Tensor.is_leaf" rel="nofollow noreferrer"><code>is_leaf</code></a> documentation, and here is your exact case:</p>
<pre><code>>>> b = torch.rand(10, requires_grad=True).cuda()
>>> b.is_leaf
False
# b was created by the operation that cast a cpu Tensor into a cuda Tensor
</code></pre>
<p>Citing documentation further:</p>
<blockquote>
<p>For Tensors that have <code>requires_grad</code> which is True, they will be leaf
Tensors if they were created by the user. This means that they are not
the result of an operation and so grad_fn is None.</p>
</blockquote>
<p>In your case, <code>Tensor</code> <strong>was not</strong> created by you, but was created by PyTorch's <code>cuda()</code> operation (leaf is the pre-cuda <code>b</code>).</p>
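<p>A possible workaround, if you want a parameter that lives on the GPU and stays a leaf: create the tensor on the GPU in the first place, so the <code>Parameter</code> itself is the user-created tensor rather than the result of a <code>.cuda()</code> op:</p>
<pre><code>>>> x = torch.nn.Parameter(torch.tensor([0.1], device='cuda'))
>>> x.is_leaf
True
</code></pre>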
|
gpu|pytorch|autograd
| 2
|
373,875
| 57,001,183
|
How to extract Keras layer weights as trainable parameter?
|
<p>I'm training a GAN-like model, but not exactly the same. I'm using Keras with the TensorFlow backend.</p>
<p>I have two Keras models <code>G</code> and <code>D</code>. I want to output the <strong>weights parameter</strong> of a target layer in <code>G</code>, as <strong>the input of model D</strong>, and use the result of <code>D.predict(G.weights)</code> as part of the loss function for <code>G</code>, i.e. <code>D</code> is not trainable, but the argument <code>G.weights</code> is trainable. In this way I want to further train <code>G.weights</code>.</p>
<p>I tried to use </p>
<pre class="lang-py prettyprint-override"><code>def custom_loss(ytrue, ypred):
### Something to do with ytrue and ypred
weight = self.G.get_layer('target').get_weights()
loss += self.D.predict(weight)
return loss
</code></pre>
<p>but apparently it does not work since <code>weight</code> is just a numpy array and is not trainable. </p>
<p>Is there a way to get the weights of model that is still trainable in Keras? I'm new to Keras and know very little about TensorFlow. I will be very appreciate it someone can help!</p>
|
<p>As you mention, <code>layer.get_weights()</code> will return the <em>current</em> weights of the matrix. What you want to feed for prediction is a the node in the computation graph representing such weights. You can use <code>layer.trainable_weights</code> instead, which will return two <code>tf.Variable</code> which you can feed to another layer/model.</p>
<p>Note that there is one variable for the unit to unit connections and another one for the bias. If you want to get a flattened tensor from it you could do something like:</p>
<pre class="lang-py prettyprint-override"><code>from keras import backend as K
...
ww, bias = self.G.get_layer('target').trainable_weights
flattened_weights = Flatten()(K.concat([ww, K.reshape(bias, (5, 1))], axis=1))
</code></pre>
|
python|tensorflow|keras
| 2
|
373,876
| 56,922,806
|
Denormalizing a DataFrame of company names [Part 1]
|
<p>I have a Pandas DataFrame of company names which has the following structure:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({'name' : ['Nitron', 'Pulset', 'Rotaxi'],
'postal_code' : [1410, 1020, 1310],
'previous_name1' : ['Rotory', np.NaN, 'Datec'],
'previous_name2' : [ np.NaN, 'Cmotor', np.NaN],
'previous_name3' : ['Datec', np.NaN, np.NaN]
})
print(df)
| name | postal_code | previous_name1 | previous_name2 | previous_name3 |
|--------|-------------|----------------|----------------|----------------|
| Nitron | 1410 | Rotory | NaN | Datec |
| Pulset | 1020 | NaN | Cmotor | NaN |
| Rotaxi | 1310        | Datec          | NaN            | NaN            |
</code></pre>
<p>As you'll notice, a company can have up to three previous names. </p>
<p>My goal is to "denormalize" the above table so that the new DataFrame has the following form:</p>
<pre><code>| name | postal_code |
|--------|-------------|
| Nitron | 1410 |
| Rotory | 1410 |
| Datec | 1410 |
| Pulset | 1020 |
| Cmotor | 1020 |
| Rotaxi | 1310 |
| Datec  | 1310        |
</code></pre>
<p>That is, I want to add a new row for all instances where the previous company names is non-missing and delete the previous names Series afterwards (I also want to add the <code>postal_code</code> value for each new row).</p>
<p>I'm looking for a description of the method (preferably with code or pseudocode) which will allow me to achieve the above result. </p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a> for remove misisng values and reshape, then remove second level of <code>MultiIndex</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>DataFrame.reset_index</code></a> and last convert <code>Series</code> to 2 column <code>DataFrame</code>:</p>
<pre><code>df1 = (df.set_index('postal_code')
.stack()
.reset_index(level=1, drop=True)
.reset_index(name='name'))
print (df1)
postal_code name
0 1410 Nitron
1 1410 Rotory
2 1410 Datec
3 1020 Pulset
4 1020 Cmotor
5 1310 Rotaxi
6 1310 Datec
</code></pre>
<p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.melt.html" rel="nofollow noreferrer"><code>DataFrame.melt</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html" rel="nofollow noreferrer"><code>DataFrame.dropna</code></a>, but ordering of values is different:</p>
<pre><code>df1 = (df.melt('postal_code', value_name='name')
.drop('variable', axis=1)
.dropna(subset=['name'])
.reset_index( drop=True)
)
print (df1)
postal_code name
0 1410 Nitron
1 1020 Pulset
2 1310 Rotaxi
3 1410 Rotory
4 1310 Datec
5 1020 Cmotor
6 1410 Datec
</code></pre>
<p>But possible sorting by first column:</p>
<pre><code>df1 = (df.melt('postal_code', value_name='name')
.drop('variable', axis=1)
.dropna(subset=['name'])
.sort_values('postal_code')
.reset_index( drop=True)
)
print (df1)
postal_code name
0 1020 Pulset
1 1020 Cmotor
2 1310 Rotaxi
3 1310 Datec
4 1410 Nitron
5 1410 Rotory
6 1410 Datec
</code></pre>
|
python|pandas|dataframe
| 3
|
373,877
| 56,961,893
|
Detect if any value is above zero and change it
|
<p>I have the following array:</p>
<pre><code>[(True,False,True), (False,False,False), (False,False,True)]
</code></pre>
<p>If any element contains a True then they should all be true. So the above should become:</p>
<pre><code>[(True,True,True), (False,False,False), (True,True,True)]
</code></pre>
<p>My below code attempts to do that but it simply converts all elements to True:</p>
<pre><code>a = np.array([(True,False,True), (False,False,False), (False,True,False)], dtype='bool')
aint = a.astype('int')
print(aint)
aint[aint.sum() > 0] = (1,1,1)
print(aint.astype('bool'))
</code></pre>
<p>The output is:</p>
<pre><code>[[1 0 1]
[0 0 0]
[0 1 0]]
[[ True True True]
[ True True True]
[ True True True]]
</code></pre>
|
<p>You could try <code>np.any</code>, which <a href="https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.any.html" rel="nofollow noreferrer">tests whether any array element along a given axis evaluates to True</a>.</p>
<p>Here's a quick line of code that uses a list comprehension to get your intended result.</p>
<pre><code>lst = [(True,False,True), (False,False,False), (False,False,True)]
result = [(np.any(x),) * len(x) for x in lst]
# result is [(True, True, True), (False, False, False), (True, True, True)]
</code></pre>
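<p>If you want to stay with a NumPy array (as in the question's code), the same idea can be written without a Python-level loop by indexing with a boolean row mask:</p>
<pre><code>import numpy as np

a = np.array([(True, False, True), (False, False, False), (False, False, True)])
a[a.any(axis=1)] = True   # rows that contain any True become all True
</code></pre>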
|
python|numpy
| 2
|
373,878
| 56,928,479
|
plotly line graph iterate over columns and loop trace
|
<p>Taking this post <a href="https://stackoverflow.com/questions/55809312/plot-multiple-columns-on-line-graph-using-dash-plotly">Plot multiple columns on line graph using Dash/Plotly</a> as a reference, I'm looking to plot a similar dataframe as a line graph using plotly. What I'm stuck on is how to put the trace into a loop so that each item is plotted as a different trace.</p>
<pre><code>data = {'year':['2018','2019'],
'jan':[20009,49599],
'feb':[13000,22000],
'mac':[23345,45888],
'apr':[23399,23399]}
df=pd.DataFrame(data).set_index('year')
apr feb jan mac
year
2018 23399 13000 20009 23345
2019 23399 22000 49599 45888
</code></pre>
<p>This is the closest code I've managed to get:</p>
<pre><code> month = ['January', 'February', 'March', 'April']
res=[]
for item in df.index:
res=go.Scatter(
x=month,
y=df.values.tolist(),
mode='lines',
name='df.index',
)
data=[res]
layout = dict(title = 'Budget Monthly',
xaxis= dict(title= 'month',ticklen= 5,zeroline= False)
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='line')
</code></pre>
<p><a href="https://i.stack.imgur.com/vOTof.png" rel="nofollow noreferrer">Line plot but no separate trace </a></p>
<p>Any help is very appreciated. Tq</p>
|
<p>Hope I got your idea correctly))</p>
<p>I had a similar issue some time ago and was very happy to find the cufflinks package.</p>
<p>To install cufflinks you need to execute the following commands:</p>
<pre><code>pip install cufflinks
pip install ipywidgets
</code></pre>
<p>and restart your notebook if you are using Ipython.</p>
<p>And here is the code sample to build such plots:</p>
<pre><code>import pandas as pd
import numpy as np
import plotly.offline as py
import plotly.graph_objs as go
py.init_notebook_mode()
data = {'year':['2018','2019'],
'jan':[20009,49599],
'feb':[13000,22000],
'mar':[23345,45888],
'apr':[23399,23399]}
df=pd.DataFrame(data).set_index('year')
df=df.transpose()
import cufflinks as cf
py.iplot([{
'x': df.index,
'y': df[col],
'name': col
} for col in df.columns], filename='cufflinks/multiple-lines-on-same-chart')
</code></pre>
<p><a href="https://i.stack.imgur.com/TC5h7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TC5h7.png" alt="enter image description here"></a></p>
|
python|pandas|plotly
| 0
|
373,879
| 57,160,788
|
Appending to next row in dataframe, from within a for loop
|
<p>I've created a web scraper that scrapes the Yahoo Finance Summary and Statistics page of a stock for Python programming educational purposes only. It reads from the '1stocklist.csv' in the programs directory which looks like this:</p>
<pre><code>Symbols
SNAP
KO
</code></pre>
<p>From there, it adds the new information to new columns in the dataframe as it should. There are a lot of 'for' loops in there and I'm still tweaking it as it's not grabbing some data correctly, but it's fine for now.</p>
<p>My problem is trying to save the dataframe to a new .csv file. The way it outputs right now as you'll see is something like this:</p>
<p><a href="https://i.stack.imgur.com/XDSCw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XDSCw.png" alt="wrong output" /></a></p>
<p>The SNAP row should begin with 14.02 and continue to the right, and the next row should be KO beginning with 51.39 and onward.</p>
<p>Any ideas? Just create a 1stocklist.csv file that looks like the above and try it. Thanks!</p>
<pre><code># Import dependencies
from bs4 import BeautifulSoup
import re, random, time, requests, datetime, csv
import pandas as pd
import numpy as np
# Use Pandas to read the "1stocklist.csv" file. We'll use Pandas so that we can append a 'dataframe' with new
# information we get from the Zacks site to work with in the program and output to the 'data(date).csv' file later
maindf = pd.read_csv('1stocklist.csv', skiprows=1, names=[
# The .csv header names
"Symbols"
]) #, delimiter = ',')
# Setting a time delay will help keep scraping suspicion down and server load down when scraping the Zacks site
timeDelay = random.randrange(2, 8)
# Start scraping Yahoo
print('Beginning to scrape Yahoo Finance site for information ...')
tickerlist = len(maindf['Symbols']) # for progress bar
# Create a progress counter to display how far along in the zacks rank scraping it is
zackscounter = 1
# For every ticker in the stocklist dataframe
for ticker in maindf['Symbols']:
# Print the progress
print(zackscounter, ' of ', tickerlist, ' - ', ticker) # for seeing which stock it's currently on
# The list of URL's for the stock's different pages to scrape the information from
summaryurl = 'https://ca.finance.yahoo.com/quote/' + ticker
statsurl = 'https://ca.finance.yahoo.com/quote/' + ticker + '/key-statistics'
# Define the headers to use in Beautiful Soup 4
headers = requests.utils.default_headers()
headers['User-Agent'] = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'
# Employ random time delay now before starting with the (next) ticker
time.sleep(timeDelay)
# Use Beautiful Soup 4 to get the info from the first Summary URL page
page = requests.get(summaryurl, headers=headers)
soup = BeautifulSoup(page.text, 'html.parser')
counter = 0 # used to tell which 'td' it's currently looking at
table = soup.find('div', {'id' :'quote-summary'})
for i in table.find_all('span'):
counter += 1
if counter % 2 == 0: # All Even td's are the metrics/numbers we want
data_point = i.text
#print(data_point)
maindf[column_name] = data_point # Add the data point to the right column
else: # All odd td's are the header names
column_name = i.text
#print(column_name)
# Use Beautiful Soup 4 to get the info from the second stats URL page
page = requests.get(statsurl, headers=headers)
soup = BeautifulSoup(page.text, 'html.parser')
time.sleep(timeDelay)
# Get all the data in the tables
counter = 0 # used to tell which 'td' it's currently looking at
table = soup.find('section', {'data-test' :'qsp-statistics'})
for i in table.find_all('td'):
counter += 1
if counter % 2 == 0: # All Even td's are the metrics/numbers we want
data_point = i.text
#print(data_point)
maindf[column_name] = data_point # Add the data point to the right column
else: # All odd td's are the header names
column_name = i.text
#print(column_name)
file_name = 'data_raw.csv'
if zackscounter == 1:
maindf.to_csv(file_name, index=False)
else:
maindf.to_csv(file_name, index=False, header=False, mode='a')
zackscounter += 1
continue
</code></pre>
<p>UPDATE:</p>
<p>I know it’s something to do with how I’m trying to append the dataframe to the .csv file at the end. My beginning dataframe is just one column with all the ticker symbols in it, then it’s trying to add each new column to the dataframe as the program goes along, and fills down to the bottom of the ticker list. What I’m wanting to happen is to just add the column_name header as it should, and then append the appropriate data specific to the one ticker and do that for each ticker in the “Symbols” column of my dataframe. Hope that provides some clarity to the issue?</p>
<p>I’ve tried using .loc in various ways but to no success. Thanks!</p>
|
<p><strong>UPDATE WITH ANSWER</strong></p>
<p>I was able to figure it out!</p>
<p>Basically, I changed the first dataframe that reads from the 1stocklist.csv to be its own dataframe, then created a new blank one to work with from within the first for loop. Here is the updated head that I created:</p>
<pre><code># Use Pandas to read the "1stocklist.csv" file. We'll use Pandas so that we can append a 'dataframe' with new
# information we get from the Zacks site to work with in the program and output to the 'data(date).csv' file later
opening_dataframe = pd.read_csv('1stocklist.csv', skiprows=1, names=[
# The .csv header names
"Symbols"
]) #, delimiter = ',')
# Setting a time delay will help keep scraping suspicion down and server load down when scraping the Zacks site
timeDelay = random.randrange(2, 8)
# Start scraping Yahoo
print('Beginning to scrape Yahoo Finance site for information ...')
tickerlist = len(opening_dataframe['Symbols']) # for progress bar
# Create a progress counter to display how far along in the zacks rank scraping it is
zackscounter = 1
# For every ticker in the stocklist dataframe
for ticker in opening_dataframe['Symbols']:
maindf = pd.DataFrame(columns=['Symbols'])
maindf.loc[len(maindf)] = ticker
# Print the progress
print(zackscounter, ' of ', tickerlist, ' - ', ticker) # for seeing which stock it's currently on
# The list of URL's for the stock's different pages to scrape the information from
summaryurl = 'https://ca.finance.yahoo.com/quote/' + ticker
statsurl = 'https://ca.finance.yahoo.com/quote/' + ticker + '/key-statistics'
......
......
......
</code></pre>
<p>Notice the <code>opening_dataframe = ...</code> name change, and the</p>
<pre><code>maindf = pd.DataFrame(columns=['Symbols'])
maindf.loc[len(maindf)] = ticker
</code></pre>
<p>part. I also utilize the .loc to add to the next available row in the dataframe. Hopefully this helps someone!</p>
|
python|python-3.x|pandas
| 0
|
373,880
| 56,978,793
|
Create a dataset for chord diagram and plot
|
<p>I'm trying to convert the below dataset into the right format to then plot it into a chord diagram.</p>
<pre><code> a b c d e f g h
0 1 0 0 0 0 1 0 0
1 1 0 0 0 0 0 0 0
2 1 0 1 1 1 1 1 1
3 1 0 1 1 0 1 1 1
4 1 0 0 0 0 0 0 0
5 0 1 0 0 1 1 1 1
6 1 1 0 0 1 1 1 1
7 1 1 1 1 1 1 1 1
8 1 1 0 0 1 1 0 0
9 1 1 1 0 1 0 1 0
10 1 1 1 0 1 1 0 0
11 1 0 0 0 0 1 0 0
12 1 1 1 1 1 1 1 1
13 1 1 1 1 1 1 1 1
14 0 1 1 1 1 1 1 0
</code></pre>
<p>The result would be a chord diagram showing all the possible combinations between the variables, with each stream's width being the number of times that particular combination occurs in the dataset - for example, the a + b count is 7 in the dataset above (where both are 1).</p>
|
<p>I don't know which chord diagram library would be best, but maybe I can help you a little bit:</p>
<h2>First we define our data in a pandas DataFrame</h2>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = [
[1, 0, 0, 0, 0, 1, 0, 0],
[1, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 1, 1, 1, 1, 1],
[1, 0, 1, 1, 0, 1, 1, 1],
[1, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 0, 0, 1, 1, 0, 0],
[1, 1, 1, 0, 1, 0, 1, 0],
[1, 1, 1, 0, 1, 1, 0, 0],
[1, 0, 0, 0, 0, 1, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1],
[0, 1, 1, 1, 1, 1, 1, 0]]
dataframe = pd.DataFrame(data, columns = ['a','b','c','d','e','f','g','h'])
</code></pre>
<h2>Now we implement the algorithm</h2>
<pre class="lang-py prettyprint-override"><code>def relationship(columnsList, dataframe):
    # Counts the rows in which every column in columnsList holds the same value
    # (for the combinations tested below on this 0/1 data, those are the rows where all are 1)
    result = 0
    for index, row in dataframe.iterrows():
        equal = True
        for col in range(len(columnsList) - 1):
            # As soon as one pair of neighbouring columns differs, the row no longer counts
            equal = equal and row[columnsList[col]] == row[columnsList[col + 1]]
        result += 1 if equal else 0
    return result
</code></pre>
<h2>Some Tests</h2>
<pre class="lang-py prettyprint-override"><code>>>> relationship (['a','b','d'], dataframe) # a+b+d
3
>>> relationship (['a','b','h'], dataframe) # a+b+h
4
>>> relationship (['a','b'], dataframe) # a+b
7
</code></pre>
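<p>If what the chord diagram ultimately needs is the count of rows where <em>both</em> columns are 1 for every pair at once, a short sketch (using the same <code>dataframe</code> as above) is a matrix product of the 0/1 data with itself:</p>
<pre class="lang-py prettyprint-override"><code>co_occurrence = dataframe.T.dot(dataframe)
# co_occurrence.loc[i, j] is the number of rows where columns i and j are both 1
print(co_occurrence.loc['a', 'b'])  # 7, matching the a + b count above
</code></pre>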
<p>The diagram is up to you, I hope you can find this helpful!</p>
|
python|python-3.x|pandas|numpy|holoviews
| 0
|
373,881
| 57,082,909
|
Using Percent (%) in Pandas and pyodbc SQL Server w/o Error
|
<p>I believe having <code>%</code> in my sql queries is causing issues in Python because of <code>%s</code> being used for variables. I have tried escaping the character but have had no luck so far.</p>
<pre><code>import pyodbc
import pandas as pd
conn = pyodbc.connect('...')
cursor = conn.cursor()
sql_statement = """
select ABS(CHECKSUM(NEWID()) % 2), %s
"""
s = sql_statement % (5)
df = pd.read_sql_query(s, conn)
</code></pre>
<pre><code>ValueError: unsupported format character ')' (0x29) at index 33
</code></pre>
<p><code>ABS(CHECKSUM(NEWID()) % 2)</code> is supposed to just be a way to return a random value for each row</p>
<p>This is just a simple example. Any time I try to use <code>var like '%abc%'</code> I get the same issue as above; I believe the % characters are causing issues in the python libraries.</p>
<p>Is there a way to escape these characters or to avoid this issue?</p>
|
<p>Typically, doubling the <code>%</code> makes Python's <code>%</code>-style string formatting treat it as a literal percent sign instead of a placeholder. E.g.:</p>
<pre><code>sql_statement = """
select ABS(CHECKSUM(NEWID()) %% 2), %s
"""
s = sql_statement % (5)
</code></pre>
<p>After formatting, <code>%%</code> becomes a single <code>%</code>, so the query sent to SQL Server reads <code>ABS(CHECKSUM(NEWID()) % 2)</code>.</p>
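<p>Alternatively, for filters like <code>LIKE '%abc%'</code> you can sidestep Python's <code>%</code>-formatting entirely by passing the value through the <code>params</code> argument (a sketch; <code>my_table</code> and <code>some_col</code> are placeholders):</p>
<pre><code>sql = "select * from my_table where some_col like ?"
df = pd.read_sql_query(sql, conn, params=['%abc%'])
</code></pre>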
|
python|pandas|pyodbc
| 2
|
373,882
| 57,172,827
|
How to rewrite the depth to normal map code using tensorflow keras for bath of inputs?
|
<p>Here is my python code to convert a depth map (256,256,1) into a normal map (256,256,3) for a single input. I want to rewrite the code in tensorflow keras for a batch of predicted depth maps.</p>
<pre><code>zy, zx = np.gradient(d_im)
# You may also consider using Sobel to get a joint Gaussian smoothing and differentation
# to reduce noise
zx = cv2.Sobel(d_im, cv2.CV_64F, 1, 0, ksize=5)
zy = cv2.Sobel(d_im, cv2.CV_64F, 0, 1, ksize=5)
normal = np.dstack((-zx, -zy, np.ones_like(d_im)))
n = np.linalg.norm(normal, axis=2)
normal[:, :, 0] /= n
normal[:, :, 1] /= n
normal[:, :, 2] /= n
# offset and rescale values to be in 0-255
normal += 1
normal /= 2
normal *= 255
cv2.imwrite("normal2.png", normal[:, :, ::-1])
</code></pre>
<p>Here d_im is the depth numpy array of above mentioned shape and normal2.png is the 3 channel image of normal values calculated from the depth map.</p>
|
<p>I solved it:</p>
<pre><code>def depth_to_normal(y_pred):
    # Per-pixel gradients of the depth map; each has the same shape as y_pred
    zy, zx = tf.image.image_gradients(y_pred)
    # Un-normalised normal vector (-zx, -zy, 1) for every pixel
    normal_ori = tf.concat([-zx, -zy, tf.ones_like(y_pred)], 3)
    # Norm of that vector, matching np.linalg.norm in the NumPy version
    norm = tf.sqrt(tf.square(zx) + tf.square(zy) + 1)
    normal = normal_ori / norm
    # Offset and rescale from [-1, 1] to [0, 1]
    normal += 1
    normal /= 2
    return normal
</code></pre>
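<p>A quick shape check on a batch (a sketch with random data standing in for predicted depth maps):</p>
<pre><code>depth_batch = tf.random.uniform((4, 256, 256, 1))  # pretend batch of predicted depth maps
normals = depth_to_normal(depth_batch)              # shape (4, 256, 256, 3)
</code></pre>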
|
python-3.x|tensorflow|keras
| 0
|
373,883
| 56,984,561
|
How to print a message every time a chunksize is written to the database in pandas?
|
<pre><code>engine = create_engine('postgresql://user:password@server/db')
df.to_sql('new_table', con=engine, if_exists='append', index=False, chunksize=20000)
</code></pre>
<p>I want to print a message every time a chunk has been written to the DB, so that I know the script is running successfully. How can I achieve this?</p>
|
<p>You can always chunk it up manually and process the chunks in a loop, inside which you can set up message printing or a progress bar. See an example <a href="https://stackoverflow.com/a/39495229/11477031">here</a>.</p>
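<p>A minimal sketch of that idea, reusing the <code>df</code> and <code>engine</code> from the question (the message format is just an example):</p>
<pre><code>chunksize = 20000
total_chunks = -(-len(df) // chunksize)  # ceiling division
for i, start in enumerate(range(0, len(df), chunksize), start=1):
    chunk = df.iloc[start:start + chunksize]
    chunk.to_sql('new_table', con=engine, if_exists='append', index=False)
    print(f'Wrote chunk {i} of {total_chunks} ({len(chunk)} rows)')
</code></pre>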
|
python-3.x|pandas
| 1
|
373,884
| 56,875,105
|
How to implement different ranking algorithms in tf-ranking framework?
|
<p>What is the algorithm behind TF-Ranking, and can we use the LambdaMART algorithm in TF-Ranking? Can anyone suggest some good sources to learn these?</p>
|
<p>TF Ranking provides a framework for you to implement your own algorithm. It helps you set up the components around the core of your model (input_fn, metrics and loss functions, etc.). But the scoring logic (aka scoring function) should be provided by the user.</p>
<p>You probably have already seen these, but just in case:</p>
<p><a href="https://arxiv.org/abs/1812.00073" rel="nofollow noreferrer">TF-Ranking: Scalable TensorFlow Library for Learning-to-Rank
</a></p>
<p>and</p>
<p><a href="https://arxiv.org/abs/1811.04415" rel="nofollow noreferrer">Learning Groupwise Multivariate Scoring Functions Using Deep Neural Networks</a></p>
|
tensorflow|ranking|google-ranking
| 0
|
373,885
| 56,871,473
|
Render JSON response from BigQuery using Pandas?
|
<p>I'm a Ruby dev doing a lot of data work who's decided to switch to Python. I'm enjoying making the transition so far and have been blown away by Pandas, Jupyter Notebooks etc.</p>
<p>My current task is to write a lightweight RESTful API that under the hood is running queries against Google BigQuery. </p>
<p>I have a really simple test running in Flask, which works fine, but I did have trouble rendering the BigQuery response as JSON. To get around this, I used Pandas and then converted the dataframe to JSON. While it works, this feels like an unnecessary step and I'm not even sure this is a legitimate use case for Pandas. I have also read that creating a dataframe can be slow as data volume increases. </p>
<p>Below is my little mock up test in Flask. It would be really helpful to hear from experienced Python Devs how you'd approach this and if there are any other libraries I should be looking at here.</p>
<pre><code>from flask import Flask
from google.cloud import bigquery
import pandas
app = Flask(__name__)
@app.route("/bq_test")
def bq_test():
client = bigquery.Client.from_service_account_json('/my_creds.json')
sql = """select * from `my_dataset.my_table` limit 1000"""
query_job = client.query(sql).to_dataframe()
return query_job.to_json(orient = "records")
if __name__ == "__main__":
app.run()
</code></pre>
|
<p>From the BigQuery documentation-</p>
<blockquote>
<p>BigQuery supports functions that help you retrieve data stored in
JSON-formatted strings and functions that help you transform data into
JSON-formatted strings:</p>
<p>JSON_EXTRACT or JSON_EXTRACT_SCALAR</p>
<p><code>JSON_EXTRACT(json_string_expr, json_path_string_literal)</code>, which returns JSON values as STRINGs.</p>
<p><code>JSON_EXTRACT_SCALAR(json_string_expr, json_path_string_literal)</code>, which returns scalar JSON values as STRINGs.</p>
<p><strong>Description</strong></p>
<p>The <code>json_string_expr</code> parameter must be a JSON-formatted string. ...</p>
<p>The <code>json_path_string_literal</code> parameter identifies the value or values you want to obtain from the JSON-formatted string. You construct this parameter using the <a href="https://code.google.com/p/jsonpath" rel="nofollow noreferrer">JSONPath</a> format. </p>
</blockquote>
<p><a href="https://cloud.google.com/bigquery/docs/reference/standard-sql/json_functions" rel="nofollow noreferrer">https://cloud.google.com/bigquery/docs/reference/standard-sql/json_functions</a></p>
|
python|pandas|flask|google-bigquery
| 1
|
373,886
| 57,049,675
|
How to specify number of layers in keras?
|
<p>I'm trying to define a fully connected neural network in keras using the tensorflow backend. I have some sample code but I don't know what it means.</p>
<pre><code>model = Sequential()
model.add(Dense(10, input_dim=x.shape[1], kernel_initializer='normal', activation='relu'))
model.add(Dense(50, input_dim=x.shape[1], kernel_initializer='normal', activation='relu'))
model.add(Dense(20, input_dim=x.shape[1], kernel_initializer='normal', activation='relu'))
model.add(Dense(10, input_dim=x.shape[1], kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))
model.add(Dense(y.shape[1],activation='softmax'))
</code></pre>
<p>From the above code I want to know the number of inputs to my network, the number of outputs, the number of hidden layers and the number of neurons in each layer. Also, what is the number coming after <code>model.add(Dense</code>, assuming x.shape[1]=60?
What is this network called exactly? Should I call it a fully connected network or a convolutional network?</p>
|
<p>That should be quite easy.</p>
<ol>
<li><p>To get the model's inputs and outputs, use:</p>
<pre><code>input_tensor = model.input
output_tensor = model.output
</code></pre>
<p>You can print these <code>tf.Tensor</code> objects to get the <code>shape</code> and <code>dtype</code>.</p></li>
<li><p>To fetch the layers of a model, use:</p>
<pre><code>layers = model.layers
print( layers[0].units )
</code></pre></li>
</ol>
<p>With these tricks you can easily get the input and output tensors for a model or its layer.</p>
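<p>For a quick overview of the whole network (a sketch assuming the <code>Sequential</code> model from the question, which specifies <code>input_dim</code> on its first layer so the shapes are already known):</p>
<pre><code>model.summary()  # one row per layer, with output shape and parameter count
for i, layer in enumerate(model.layers):
    print(i, layer.name, layer.units)  # every layer here is Dense, so it has a units attribute
</code></pre>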
|
tensorflow|keras|deep-learning
| 0
|
373,887
| 57,085,982
|
Error while predicting a single value using a linear regression model
|
<p>I'm a beginner making a linear regression model. When I make predictions on test sets, it works fine, but when I try to predict something for a specific value, it gives an error. In the tutorial I'm following, they don't get any errors.</p>
<pre><code>dataset = pd.read_csv('Position_Salaries.csv')
X = dataset.iloc[:, 1:2].values
y = dataset.iloc[:, 2].values
# Fitting Linear Regression to the dataset
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
# Visualising the Linear Regression results
plt.scatter(X, y, color = 'red')
plt.plot(X, lin_reg.predict(X), color = 'blue')
plt.title('Truth or Bluff (Linear Regression)')
plt.xlabel('Position level')
plt.ylabel('Salary')
plt.show()
# Predicting a new result with Linear Regression
lin_reg.predict(6.5)
</code></pre>
<pre><code>ValueError: Expected 2D array, got scalar array instead:
array=6.5.
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
</code></pre>
|
<p>According to the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.predict" rel="nofollow noreferrer">Scikit-learn documentation</a>, the input array should have shape <code>(n_samples, n_features)</code>. As such, if you want a single example with a single value, you should expect the shape of your input to be <code>(1,1)</code>.</p>
<p>This can be done by doing:</p>
<pre><code>import numpy as np
test_X = np.array(6.5).reshape(-1, 1)
lin_reg.predict(test_X)
</code></pre>
<p>You can check the shape by doing:</p>
<pre><code>test_X.shape
</code></pre>
<p>The reason for this is because the input can have many samples (i.e. you want to predict for multiple data points at once), or/and each sample can have many features. </p>
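<p>For example, to predict several position levels in one call you stack them as rows (a sketch reusing the fitted <code>lin_reg</code>):</p>
<pre><code>test_X = np.array([[6.5], [7.0], [8.5]])  # shape (3, 1): three samples, one feature each
lin_reg.predict(test_X)
</code></pre>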
<p>Note: <a href="https://docs.scipy.org/doc/numpy/user/basics.creation.html" rel="nofollow noreferrer">Numpy</a> is a Python library to support large arrays and matrices. <a href="https://scikit-learn.org/stable/install.html" rel="nofollow noreferrer">When scikit-learn is installed, Numpy should be installed as well</a>.</p>
|
machine-learning|scikit-learn|anaconda|sklearn-pandas
| 4
|
373,888
| 57,203,358
|
Copying int64 Column to a New Column and Appending Str to end of each record in Pandas DataFrame
|
<p>I have 11 columns in a DataFrame, and want to clone the first column to a new column to be the 12th column, to be named mail. The new mail column should contain all the column1 records and also add <code>str('@myexampledomain.com')</code> to it. The Column I want to copy to create the new column is the <code>"sAMAccountName"</code> column. This is from my <code>output.csv</code> file</p>
<pre><code>df.dtypes
Out[45]:
sAMAccountName int64
createHomeFolder object
description object
homeDrive object
mustChangePassword object
password object
sn object
givenName object
GradeLevel object
Schoolcode object
cn object
dtype: object
</code></pre>
<p>An example of a record from <code>sAMAccountName</code> currently is 15127. The new column records should be <code>"15127@myexampledomain.com"</code></p>
<p>I am on <code>python 3.6.3</code> and have read documentation everywhere and tried many different ways to do this but always get errors. </p>
<pre><code>import pandas as pd
df = pd.read_csv('output.csv',)
df_mail = df[df['sAMAccountName'] == 'mail'].copy()
TypeError: invalid type comparison
</code></pre>
<p>I was hoping to get the new column titled <code>"mail"</code> with all the same rows as the <code>"sAMAccountName"</code> column. Later on, I can figure out how to append the rest of the email address to the records. But right now, I can't get that new column created.</p>
|
<p>If I understand you correctly, you want to <code>copy</code> the <code>"sAMAccountName"</code> column and call it <code>"mail"</code>. Your current code:</p>
<pre><code>import pandas as pd
df = pd.read_csv('output.csv',)
df_mail = df[df['sAMAccountName'] == 'mail'].copy()
TypeError: invalid type comparison
</code></pre>
<p>Returns a <code>TypeError</code> because you are trying to use boolean masking to filter out results, but as your <code>df.dtypes</code> says, <code>sAMAccountName</code> contains <code>int64</code> elements. To create a new column that copies another, do the following:</p>
<pre><code>df['mail'] = df.sAMAccountName.copy()
</code></pre>
<p>This will add a new column (<code>mail</code>) to your existing dataframe.</p>
<p>To then convert all the elements to a <code>str</code> and add <code>"@myexampledomain.com"</code> to it, cast the <code>mail</code> column as a <code>str</code> using <code>.astype(str)</code>, concatenate, and save over itself:</p>
<pre><code>df['mail'] = df.mail.astype(str) + "@myexampledomain.com"
</code></pre>
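<p>Both steps can also be combined into a single assignment (an equivalent sketch):</p>
<pre><code>df['mail'] = df['sAMAccountName'].astype(str) + '@myexampledomain.com'
</code></pre>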
|
python-3.x|pandas|dataframe
| 0
|
373,889
| 57,125,740
|
How to replace the entry of a column with different name by recognizing a pattern?
|
<p>I have a column, let's say <code>'Match Place'</code>, with entries like <code>'MANU @ POR'</code>, <code>'MANU vs. UTA'</code>, <code>'MANU @ IND'</code>, <code>'MANU vs. GRE'</code>, etc. So each entry has 3 parts: the first is <code>MANU</code>, i.e. the 1st country code; the second is <code>@</code> or <code>vs.</code>; and the third is the 2nd country name. What I want is: if <code>'@'</code> appears in an entry, the whole entry should be changed to <code>'away'</code>, and if <code>'vs.'</code> appears, it should be changed to <code>'home'</code>. For example, 'MANU @ POR' should become <code>'away'</code> and 'MANU vs. GRE' should become <code>'home'</code>.</p>
<p>I wrote some code to do this using for/if/else, but it takes way too long to compute (my data has 30697 rows in total). Is there another way to reduce the time? My code is below; please help.</p>
<pre><code>for i in range(len(df)):
    if is_na(df['home/away'][i]) == True:
        temp = (df['home/away'][i]).split()
        if temp[1] == '@':
            df['home/away'][i] = 'away'
        else:
            df['home/away'][i] = 'home'
</code></pre>
|
<p>You can use <a href="https://www.google.com/search?q=np+select&rlz=1C1GCEU_enIN822IN823&oq=np+select&aqs=chrome..69i57j35i39j0l4.1884j0j7&sourceid=chrome&ie=UTF-8" rel="nofollow noreferrer"><code>np.select</code></a> to assign multiple conditions:</p>
<pre><code>s=df['Match Place'].str.split().str[1] #select the middle element
c1,c2=s.eq('@'),s.eq('vs.') #assign conditions
np.select([c1,c2],['away','home']) #assign this to the desired column
#array(['away', 'home', 'away', 'home'], dtype='<U11')
</code></pre>
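<p>A sketch of writing the result back into the frame (the <code>'unknown'</code> fallback is just a placeholder for rows that match neither pattern):</p>
<pre><code>df['home/away'] = np.select([c1, c2], ['away', 'home'], default='unknown')
</code></pre>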
|
python|string|pandas
| 3
|
373,890
| 56,906,887
|
How can I define variables for several columns
|
<p>I'm creating a program that returns different statistics from any file uploaded (with a certain data structure).</p>
<p>I need to write some code that lets me define variables for the columns in each file. The problem is that in some cases there are 5 columns and in others 7, 8 or more.</p>
<p>Any thoughts? Maybe with a for loop?</p>
<p>I expect the program to read all the columns and name them x1, x2, x3 and so on.</p>
|
<p>If you don't specify the names of the headers then pandas will infer them. You can change them after you read them if you like or you can force them to be what you want.</p>
<p>For instance, letting pandas infer the header names and then renaming them X1...</p>
<pre><code>df = pd.read_csv('test.csv',header=None)
df
0 1 2 3 4 #<- Header names given by pandas
0 1 2 3 4 5
df.columns = [f"X{i}" for i in range(df.shape[1])]
X0 X1 X2 X3 X4
0 1 2 3 4 5
</code></pre>
<p>Or if you want to give each column a specific name, something like</p>
<pre><code>df.columns = ['Foo', 'Bar', 'Baz', 'Biz', 'Boo']
Foo Bar Baz Biz Boo
0 1 2 3 4 5
</code></pre>
<p>Or if you prefer to ensure that all data has 8 columns regardless of what the user passed in. In this case you will get NaN in the unfilled columns</p>
<pre><code>df = pd.read_csv('test.csv',header=None,names=['X1','X2','X3','X4','X5','X6','X7','X8'])
X1 X2 X3 X4 X5 X6 X7 X8
0 1 2 3 4 5 NaN NaN NaN
</code></pre>
<p>No matter how you code it, you have columns with the names you provide or the ones pandas provides; the same (first) column can be reached under each scheme.</p>
<pre><code>df['Foo']   # with the custom names
df[0]       # with the names pandas inferred
df['X0']    # with the generated X names
</code></pre>
|
python|python-3.x|pandas
| 1
|
373,891
| 57,043,695
|
pandas - downsample a more frequent DataFrame to the frequency of a less frequent DataFrame
|
<p>I have two DataFrames that have different data measured at different frequencies, as in those csv examples:</p>
<p>df1:</p>
<pre><code>i,m1,m2,t
0,0.556529,6.863255,43564.844
1,0.5565576199999884,6.86327749999999,43564.863999999994
2,0.5565559400000003,6.8632764,43564.884
3,0.5565699799999941,6.863286799999996,43564.903999999995
4,0.5565570200000007,6.863277200000001,43564.924
5,0.5565316400000097,6.863257100000007,43564.944
...
</code></pre>
<p>df2:</p>
<pre><code>i,m3,m4,t
0,306.81162500000596,-1.2126870045404683,43564.878125
1,306.86175000000725,-1.1705838272666433,43564.928250000004
2,306.77552454544787,-1.1240195386446195,43564.97837499999
3,306.85900545454086,-1.0210345363692084,43565.0285
4,306.8354250000052,-1.0052431772666657,43565.078625
5,306.88397499999286,-0.9468344809917896,43565.12875
...
</code></pre>
<p>I would like to obtain a single df that has all the measures of both dfs at the times of the first one (which gets data less frequently).</p>
<p>I tried to do that with a for loop averaging over the df2 measures between two timestamps of df1 but it was <strong>extremely slow</strong>.</p>
|
<p>IIUC, <code>i</code> is the index column and you want to put <code>df2['t']</code> into bins and average the other columns. So you can use <code>pd.cut</code>:</p>
<pre><code>groups =pd.cut(df2.t, bins= list(df1.t) + [np.inf],
right=False,
labels=df1['t'])
# cols to copy
cols = [col for col in df2.columns if col != 't']
# groupby and get the average
new_df = (df2[cols].groupby(groups)
.mean()
.reset_index()
)
</code></pre>
<p>Then <code>new_df</code> is:</p>
<pre><code> t m3 m4
0 43564.844 NaN NaN
1 43564.864 306.811625 -1.212687
2 43564.884 NaN NaN
3 43564.904 NaN NaN
4 43564.924 306.861750 -1.170584
5 43564.944 306.838482 -1.024283
</code></pre>
<p>which you can merge with <code>df1</code> on <code>t</code>:</p>
<pre><code>df1.merge(new_df, on='t', how='left')
</code></pre>
<p>and get:</p>
<pre><code> m1 m2 t m3 m4
0 0.556529 6.863255 43564.8 NaN NaN
1 0.556558 6.863277 43564.9 306.811625 -1.212687
2 0.556556 6.863276 43564.9 NaN NaN
3 0.556570 6.863287 43564.9 NaN NaN
4 0.556557 6.863277 43564.9 306.861750 -1.170584
5 0.556532 6.863257 43564.9 306.838482 -1.024283
</code></pre>
|
python|pandas|dataframe
| 1
|
373,892
| 57,017,223
|
Pandas: Use a dataframe to index another one and fill the gaps?
|
<p>I have two dataframes. df_0 is a complete list of dates and df_1 is a generic register indexed by incomplete dates. I need to make a dataframe that has df_0’s complete dates as an index, filled with df_1’s register in the matching dates. For dates without a register entry, I just need to repeat the last date’s register data as a filler. Any ideas on how to make this?</p>
<p>Thanks in advance.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a> with parameter <code>method</code>:</p>
<pre><code>df = df_1.reindex(df_0.index, method="ffill")
</code></pre>
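<p>A small self-contained sketch of the behaviour (with made-up dates, since the question shows no data):</p>
<pre><code>import pandas as pd

df_0 = pd.DataFrame(index=pd.date_range('2019-01-01', periods=5, freq='D'))
df_1 = pd.DataFrame({'value': [10, 30]},
                    index=pd.to_datetime(['2019-01-01', '2019-01-03']))
print(df_1.reindex(df_0.index, method='ffill'))
#             value
# 2019-01-01     10
# 2019-01-02     10   <- gap filled with the last register
# 2019-01-03     30
# 2019-01-04     30
# 2019-01-05     30
</code></pre>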
|
python|pandas|indexing|match|fill
| 2
|
373,893
| 57,273,888
|
Keras vs. TensorFlow code comparison sources
|
<p>This isn't really a question that's code-specific, but I haven't been able to find any answers or resources.</p>
<p>I'm currently trying to teach myself some "pure" TensorFlow rather than just using Keras, and I felt that it would be very helpful if there were some sources where they have TensorFlow code and the equivalent Keras code side-by-side for comparison.</p>
<p>Unfortunately, most of the results I find on the Internet talk about performance-wise differences or have very simple comparison examples (e.g. "and so this is why Keras is much simpler to use"). I'm not so much interested in those details as much as I am in the code itself.</p>
<p>Does anybody know if there are any resources out there that could help with this?</p>
|
<p>Here you have two models, one in <code>Tensorflow</code> and one in <code>Keras</code>, that are roughly equivalent:</p>
<pre><code>import tensorflow as tf
import numpy as np
import pandas as pd
from keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
</code></pre>
<h1>Tensorflow</h1>
<pre><code>X = tf.placeholder(dtype=tf.float64)
Y = tf.placeholder(dtype=tf.float64)
num_hidden=128
# Build a hidden layer
W_hidden = tf.Variable(np.random.randn(784, num_hidden))
b_hidden = tf.Variable(np.random.randn(num_hidden))
p_hidden = tf.nn.sigmoid( tf.add(tf.matmul(X, W_hidden), b_hidden) )
# Build another hidden layer
W_hidden2 = tf.Variable(np.random.randn(num_hidden, num_hidden))
b_hidden2 = tf.Variable(np.random.randn(num_hidden))
p_hidden2 = tf.nn.sigmoid( tf.add(tf.matmul(p_hidden, W_hidden2), b_hidden2) )
# Build the output layer
W_output = tf.Variable(np.random.randn(num_hidden, 10))
b_output = tf.Variable(np.random.randn(10))
p_output = tf.nn.softmax( tf.add(tf.matmul(p_hidden2, W_output), b_output) )
loss = tf.reduce_mean(tf.losses.mean_squared_error(
labels=Y,predictions=p_output))
accuracy=1-tf.sqrt(loss)
minimization_op = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)
feed_dict = {
X: x_train.reshape(-1,784),
Y: pd.get_dummies(y_train)
}
with tf.Session() as session:
session.run(tf.global_variables_initializer())
for step in range(10000):
J_value = session.run(loss, feed_dict)
acc = session.run(accuracy, feed_dict)
if step % 100 == 0:
print("Step:", step, " Loss:", J_value," Accuracy:", acc)
session.run(minimization_op, feed_dict)
pred00 = session.run([p_output], feed_dict={X: x_test.reshape(-1,784)})
</code></pre>
<h1>Keras</h1>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from keras.models import Model
l = tf.keras.layers
model = tf.keras.Sequential([
l.Flatten(input_shape=(784,)),
l.Dense(128, activation='relu'),
l.Dense(128, activation='relu'),
l.Dense(10, activation='softmax')
])
model.compile(loss='categorical_crossentropy', optimizer='adam',metrics = ['accuracy'])
model.summary()
model.fit(x_train.reshape(-1,784), pd.get_dummies(y_train), epochs=15, batch_size=128, verbose=1)
</code></pre>
|
tensorflow|keras
| 4
|
373,894
| 57,223,666
|
Alternate optimization with two different optimizers in pytorch
|
<ul>
<li>I have two loss functions <code>l1</code> and <code>l2</code>, each optimized by two separate ADAM optimizers <code>opt1</code> and <code>opt2</code>.</li>
<li>The current value of my parameters is <code>x</code>.</li>
<li>I want to update <code>x</code> using <code>opt1</code> and <code>opt2</code> separately, and then "merge" the resulting new value of <code>x</code> depending on the magnitude of the gradients.</li>
</ul>
<p>Pseudocode</p>
<pre><code>grad1 = get_grad(l1)
grad2 = get_grad(l2)
n1 = norm(grad1)
n2 = norm(grad2)
x1 = opt1(grad1)
x2 = opt2(grad2)
w = n1 / (n1 + n2)
x = w*x1 + (1-w)*x2
</code></pre>
<p>How can I do it in pytorch? I am not sure how to use <code>backward()</code> and <code>step()</code>.</p>
|
<p>Following @UmangGupta's comment, I did it by initializing three copies of <code>x</code>: two for <code>x1</code> and <code>x2</code>, and one for a backup of <code>x</code>. Then I do as follows</p>
<pre class="lang-py prettyprint-override"><code>def copy(target, source):
for x, y in zip(target.parameters(), source.parameters()):
x.data.copy_(y.data)
def merge(target, source1, source2, tau):
for x, y1, y2 in zip(target.parameters(), source1.parameters(), source2.parameters()):
x.data.copy_(tau * y1.data + (1.0 - tau) * y2.data)
def grad_norm(x):
n = 0.
for p in x.parameters():
p_norm = p.grad.data.norm(2)
n += p_norm.item() ** 2
return n ** (1. / 2)
...
copy(x_backup, x)
opt1.zero_grad()
l1.backward()
n1 = grad_norm(x)
opt1.step()
copy(x1, x)
copy(x, x_backup)
# same for opt2, x2, n2
merge(x, x1, x2, n1 / (n1 + n2))
</code></pre>
<p>I'd still like to have a cleaner way, if possible (not sure if copying the value, which happens very often, makes my code slower).</p>
|
optimization|pytorch|gradient-descent
| 0
|
373,895
| 57,024,439
|
Pandas retain index ordering
|
<p>I would like to plot a graph but pandas keeps reordering my index (N).</p>
<p>I want the order to be <code>N = 50, 100, 200</code>, with three columns for each <code>N</code>, namely <code>2x2 3x3 4x4</code>.</p>
<pre><code>f1 = pd.DataFrame({"User": ["2 x 2 x 2","3 x 3 x 3", "4 x 4 x 4","2 x 2 x 2","3 x 3 x 3", "4 x 4 x 4","2 x 2 x 2","3 x 3 x 3", "4 x 4 x 4"],\
"clm2": profit_comparison[0:len(profit_comparison)],\
"N": ["N=50","N=50","N=50","N=100","N=100","N=100","N=200","N=200","N=200"]})
with PdfPages('profit(n,p).pdf') as pdf:
    ax1 = f1.pivot(index = "N", columns = "User", values = "clm2").plot.bar(edgecolor = "white")
ax1.set_ylabel("Profit")
pdf.savefig()
plt.close()
</code></pre>
|
<p>I'm guessing that the incorrect order you're seeing is 100, 200, 50. If that's the case, what's happening is that Pandas is sorting your index alphabetically.</p>
<p>In that case, you have two options: the first is to sort the index by its numeric information; you can check <a href="https://stackoverflow.com/questions/23493374/sort-dataframe-index-that-has-a-string-and-number">this question</a> to see how.</p>
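<p>A sketch of that first option with the column names from your code (sorting the pivoted index by its numeric part before plotting):</p>
<pre><code>pivoted = f1.pivot(index="N", columns="User", values="clm2")
order = sorted(pivoted.index, key=lambda s: int(s.split("=")[1]))
ax1 = pivoted.reindex(order).plot.bar(edgecolor="white")
</code></pre>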
<p>The second is to just change your data to insert a zero or blank space before 50, so it falls in line:</p>
<pre><code>profit_comparison = range (9)
f1 = pd.DataFrame (
{
"User": [
"2 x 2 x 2","3 x 3 x 3", "4 x 4 x 4",
"2 x 2 x 2","3 x 3 x 3", "4 x 4 x 4",
"2 x 2 x 2","3 x 3 x 3", "4 x 4 x 4"
],
"clm2": profit_comparison[:],
"N": [
"N= 50","N= 50","N= 50",
"N=100","N=100","N=100",
"N=200","N=200","N=200"
]
},
)
ax1 = f1.pivot(
index = "N",
columns = "User",
values = "clm2"
).plot.bar(edgecolor = "white")
ax1.set_ylabel("Profit")
</code></pre>
<p><a href="https://i.stack.imgur.com/YguKe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YguKe.png" alt="enter image description here"></a></p>
<p>Some additional notes:</p>
<ul>
<li>When within parenthesis, you don't need the backslash to continue in the following line</li>
<li>If you want to copy profit_comparison, you don't need to use <code>[:len(profit_comparison)]</code>; you can use just <code>profit_comparison[:]</code></li>
<li>If the index is still not sorted, you can use <code>sort_index()</code> on the dataframe generated by <code>pivot</code> before plotting</li>
</ul>
|
python|pandas
| 0
|
373,896
| 56,940,000
|
Trace a 3d graph with a black line where Z = 0?
|
<p>I have a functional 3d graph, but I want to make a trace line on the graph for when z = 0. </p>
<p>I tried to split up the graphs for when z>=0 and z<0 but this does not make a clear representation, as shown in code commented out. I want to trace this line in a different color. Another solution would be to have part of the graph z>=0 be one color and z<0 be another color, but I keep getting an error for this as well. </p>
<pre><code>from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np
def equation(delta=0.05):
#for F=0.5
x = np.arange(0,1,delta)
y = np.arange(2,6,delta)
X,Y = np.meshgrid(x,y)
Z = (X*Y-X-0.5*Y**2+2*0.5*Y)**2-4*(0.5*Y**2-0.5*Y)*(X-X*Y+Y-0.5*Y)
return X, Y, Z
#x = P
#y = K
fig = plt.figure()
ax = Axes3D(fig)
#set labels for graph
ax.set_xlabel('P')
ax.set_ylabel('K')
ax.set_zlabel('Z')
#set colors about and below 0
#c = (Z<=0)
#ax.plot_surface(x,y,z,c=c,cmap='coolwarm')
#ax.plot_surface(x,y,z,c= z<0)
c = z=0
x,y,z = equation(0.01)
surf=ax.plot_surface(x,y,z)
#surf=ax.plot_surface(x,y,z<0)
#surf=ax.plot_surface(x,y,z>=0)
#surf =ax.plot_surface(x,y,z, rstride=5, cstride=5)
#surf = ax.plot_trisurf(x,y,z,cmap=cm.jet,linewidth=0.1,vmin=-15, vmax=100)
#surf = ax.plot_surface(x,y,z,rstride = 5, cstride #=5,cmap=cm.RdBu,linewidth=0, antialiased=False)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
#fig.colorbar(surf, shrink= 0.5, aspect=5)
#ax.view_init(elev=25,azim=-120)
plt.show()
</code></pre>
|
<p>When highlighting the Z=0 line, keep in mind that you are no longer looking for a surface but for the curve where the surface crosses Z=0. You can find it with what Poolka suggested, which is <code>ax.contour(x,y,z,[0])</code>. I would also suggest changing the transparency (<code>alpha</code>) of the surface to make that line more visible.</p>
<p>You can also give the two regions separated at zero different colors by creating a custom colormap and centering your <code>vmin</code> and <code>vmax</code> around zero.</p>
<pre><code>from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np
from matplotlib.colors import LinearSegmentedColormap
def equation(delta=0.05):
x = np.arange(0,1,delta)
y = np.arange(2,6,delta)
X,Y = np.meshgrid(x,y)
Z = (X*Y-X-0.5*Y**2+2*0.5*Y)**2-4*(0.5*Y**2-0.5*Y)*(X-X*Y+Y-0.5*Y)
return X, Y, Z
fig = plt.figure()
ax = Axes3D(fig)
#set labels for graph
ax.set_xlabel('P')
ax.set_ylabel('K')
ax.set_zlabel('Z')
#Create custom colormap with only 2 colors
colors = ["blue","red"]
cm1 = LinearSegmentedColormap.from_list('my_list', colors, N=2)
x,y,z = equation(0.01)
surf=ax.plot_surface(x,y,z,alpha=.7,cmap=cm1,vmin=-150,vmax=150) #use custom colormap
#Use a contour plot to isolate Z=0 since it is a line and no longer a surface
ax.contour(x,y,z,[0],colors='k',linewidths=3)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/7tuwu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7tuwu.png" alt="enter image description here"></a></p>
|
python|numpy|matplotlib|graphing
| 2
|
373,897
| 57,171,410
|
Python pandas pivot table between range of dates
|
<p>I'm trying to calculate the sum of quantity for every day for each combinaton of Profile-GeographicalZone-Town with the following sample df:</p>
<pre><code>df = pd.DataFrame({
'Profile': {0: 'P014', 1: 'P014', 2: 'P012', 3: 'P012', 4: 'P012', 5: 'P012', 6: 'P012', 7: 'P012', 8: 'P012', 9: 'P012'},
'GeogaphicalZone': {0: 'NORTH', 1: 'NORTH', 2: 'NORTH', 3: 'SOUTH', 4: 'SOUTH', 5: 'SOUTH', 6: 'NORTH', 7: 'NORTH', 8: 'NORTH', 9: 'NORTH'},
'Town': {0: 'LONDON', 1: 'LONDON', 2: 'MANCHESTER', 3: 'MANCHESTER', 4: 'MANCHESTER', 5: 'MANCHESTER', 6: 'LIVERPOOL', 7: 'LIVERPOOL', 8: 'LIVERPOOL', 9: 'LONDON'},
'Quantity': {0: 8.202, 1: 8.202, 2: 8.202, 3: 60.645, 4: 60.645, 5: 60.645, 6: 90.925, 7: 162.373, 8: 45.095, 9: 78.832},
'StartDate': {0: '01/02/2019', 1: '01/01/2019', 2: '01/12/2018', 3: '01/11/2018', 4: '01/10/2018', 5: '01/09/2018', 6: '01/08/2018', 7: '01/07/2018', 8: '01/06/2018', 9: '01/05/2018'},
'EndDate': {0: '01/04/2020', 1: '01/05/2020', 2: '01/06/2020', 3: '01/07/2020', 4: '01/08/2020', 5: '01/09/2020', 6: '01/10/2020', 7: '01/11/2020', 8: '01/12/2020', 9: '01/01/2021'}
})
</code></pre>
<p><a href="https://i.stack.imgur.com/aDV25.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aDV25.png" alt="n"></a></p>
<p>The Quantity is assumed to be the same during every day between Start and End Date</p>
<p>Now my desired output is to have the sum of Quantity for every Profile-GeographicalZone-Town between the min(StartDate) and the max(EndDate) for each combination.</p>
<p>for instance, for the combination P014-NORTH-LONDON, if I only show the days of Jan/Feb 2019, I expect to have something like this:
<a href="https://i.stack.imgur.com/A8AUT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A8AUT.png" alt="enter image description here"></a></p>
<p>I think I should use a pivot table with pandas but I'm not sure how I should do with my Start/EndDate calculation.</p>
<p>I could do a first operation which would create a separate df with the quantity duplicated for all days between Start/EndDate and then apply a pivot table but I don't think this is very pythonic nor efficient. I feel there is something much suitable with pandas.</p>
<p>Is that possible ?</p>
<p>Thanks</p>
|
<p>Exploding it into daily rows makes a very long data frame, but here's how you do it:</p>
<pre><code>df = pd.DataFrame({
'Profile': {0: 'P014', 1: 'P014', 2: 'P012', 3: 'P012', 4: 'P012', 5: 'P012', 6: 'P012', 7: 'P012', 8: 'P012', 9: 'P012'},
'GeogaphicalZone': {0: 'NORTH', 1: 'NORTH', 2: 'NORTH', 3: 'SOUTH', 4: 'SOUTH', 5: 'SOUTH', 6: 'NORTH', 7: 'NORTH', 8: 'NORTH', 9: 'NORTH'},
'Town': {0: 'LONDON', 1: 'LONDON', 2: 'MANCHESTER', 3: 'MANCHESTER', 4: 'MANCHESTER', 5: 'MANCHESTER', 6: 'LIVERPOOL', 7: 'LIVERPOOL', 8: 'LIVERPOOL', 9: 'LONDON'},
'Quantity': {0: 8.202, 1: 8.202, 2: 8.202, 3: 60.645, 4: 60.645, 5: 60.645, 6: 90.925, 7: 162.373, 8: 45.095, 9: 78.832},
'StartDate': {0: '01/02/2019', 1: '01/01/2019', 2: '01/12/2018', 3: '01/11/2018', 4: '01/10/2018', 5: '01/09/2018', 6: '01/08/2018', 7: '01/07/2018', 8: '01/06/2018', 9: '01/05/2018'},
'EndDate': {0: '01/04/2020', 1: '01/05/2020', 2: '01/06/2020', 3: '01/07/2020', 4: '01/08/2020', 5: '01/09/2020', 6: '01/10/2020', 7: '01/11/2020', 8: '01/12/2020', 9: '01/01/2021'}
})
df['StartDate'] = pd.to_datetime(df['StartDate'])
df['EndDate'] = pd.to_datetime(df['EndDate'])
dates = df.apply(lambda row: pd.date_range(row['StartDate'], row['EndDate']).to_series(), axis=1) \
.stack() \
.droplevel(-1)
dates.name = 'Date'
df = df.join(dates)
</code></pre>
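<p>From the exploded daily frame, the per-day totals for each combination are then a plain groupby (a sketch continuing from the code above):</p>
<pre><code>daily = (df.groupby(['Profile', 'GeogaphicalZone', 'Town', 'Date'])['Quantity']
           .sum()
           .reset_index())
</code></pre>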
|
python|pandas|dataframe|pivot-table
| 0
|
373,898
| 57,206,434
|
How to get "reversed" time windows in pandas?
|
<p>(Note: I am pretty sure that this is not a duplicate question.)</p>
<p>I need "reversed" time windows from a pandas Dataframe. "reversed" as in I need them to have the last time index after processing them. Example:</p>
<pre><code>df = pd.DataFrame(data=[
[pd.Timestamp('2018-01-01 00:00:00'), 100],
[pd.Timestamp('2018-01-01 00:00:01'), 101],
[pd.Timestamp('2018-01-01 00:00:03'), 103],
[pd.Timestamp('2018-01-01 00:00:04'), 111]
], columns=['time', 'value']).set_index('time')
>>>
value
time
2018-01-01 00:00:00 100
2018-01-01 00:00:01 101
2018-01-01 00:00:03 103
2018-01-01 00:00:04 111
</code></pre>
<p>Normally you could just reverse the dataframe and call <code>.rolling</code> on that, but <em>pandas</em> does not like reversed time indices:</p>
<pre><code>df[::-1].rolling('2s')
>>> ValueError: index must be monotonic
</code></pre>
<p>Now, "reversed" time windows are just "forward" time windows shifted in time:</p>
<pre><code>ws = df.rolling('2s').mean()
ws.index = ws.index + pd.Timedelta(2, unit='s')
>>>
value
time
2018-01-01 00:00:02 100.0
2018-01-01 00:00:03 100.5
2018-01-01 00:00:05 103.0
2018-01-01 00:00:06 107.0
</code></pre>
<p>But due to the non uniform sampling this leads to time indices that are not aligned with the original data.</p>
<p>I have some code that works by slicing the windows manually, but that is prohibitively slow.</p>
<p>For reference, the result I would expect is:</p>
<pre><code> value
time
2018-01-01 00:00:00 100.5
2018-01-01 00:00:01 101.0
2018-01-01 00:00:03 107.0
2018-01-01 00:00:04 111.0
</code></pre>
<p>So windows with the current timestamp looking forward in time.</p>
|
<p>This is possible with a <code>reindex</code>... and then another <code>reindex</code>.</p>
<pre><code># Upsample to a regular 1-second grid, extended by the window length
u = pd.date_range(df.index.min(), df.index.max() + pd.Timedelta('2s'), freq='1s')
# Forward-fill, take the 2-sample rolling mean, shift it back one step so each
# timestamp looks forward in time, then keep only the original timestamps
df.reindex(u).ffill().rolling(2).mean().shift(-1).reindex(df.index)
</code></pre>
<p></p>
<pre><code> value
time
2018-01-01 00:00:00 100.5
2018-01-01 00:00:01 101.0
2018-01-01 00:00:03 107.0
2018-01-01 00:00:04 111.0
</code></pre>
|
python|pandas
| 2
|
373,899
| 56,999,525
|
How to mask specific values in particular column in Python?
|
<p>I have a .csv file with 5 columns and about 5000 rows. A particular column called 'summary' contains credit card numbers along with some text. It looks like this:</p>
<blockquote>
<p>hey this job needs to be done asap and pay with card# visa 5611000043310001</p>
</blockquote>
<p>I want to read this column, extract the number (maybe using a regular expression), mask its last 4 digits, and then write out the entire row as-is with the masked number to a .csv file, like this:</p>
<blockquote>
<p>hey this job needs to be done asap and pay with card# visa 561100004331****</p>
</blockquote>
<p>How can I do it?</p>
|
<p>With regex, you could do:</p>
<pre><code>import re
>> s = "hey this job needs to be done asap and pay with card# visa 5611000043310001"
>> re.sub(r"(\d{12})\d{4}",r"\1****",s)
'hey this job needs to be done asap and pay with card# visa 561100004331****'
</code></pre>
<p>So basically, <code>(\d{12})</code> matches the first 12 digits (the parentheses capture them so they can be kept), followed by 4 more digits that get replaced with stars. <code>\1</code> in the replacement string refers to that first captured group, i.e. the first 12 digits.</p>
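<p>To run this over the whole <code>summary</code> column and write the file back out (a sketch; the file names are placeholders):</p>
<pre><code>import pandas as pd

df = pd.read_csv('input.csv')
df['summary'] = df['summary'].str.replace(r'(\d{12})\d{4}', r'\1****', regex=True)
df.to_csv('masked.csv', index=False)
</code></pre>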
|
python|pandas
| 1
|