| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
8,000
| 72,064,465
|
Why can I not use f-string in the pandas assign method?
|
<p>For instance, I am trying to create new clean columns in the existing dataframe with a regex pattern applied as shown below. I get the SyntaxError that a keyword can't be an expression.</p>
<pre><code>for col in cols2:
df.assign(f"{col}_clean"=lambda df:df[col].str.replace(r"\(|\)|,", ""))
df.assign(f"{col}_clean"=lambda df:df[col].str.replace(r"\(|\)|,", ""))
^
SyntaxError: keyword can't be an expression
</code></pre>
<p>I then tried to assign a list of column names e.g.</p>
<pre><code> cols2_clean = []
for col in cols2:
clean = f"{col}_clean"
cols2_clean.append(clean)
df.assign(cols2_clean=lambda df:df[cols2].str.replace(r"\(|\)|,", ""))
</code></pre>
<p>that didn't work either, and gave <code>AttributeError: 'DataFrame' object has no attribute 'str'</code>. Is my only option to do this manually, one by one?</p>
|
<p><code>df.assign()</code> takes the column names as keyword arguments. You can't use a string literal as a keyword argument; it has to be an identifier.</p>
<p>What you can do is pass a dictionary using <code>**</code> to turn it into keyword arguments.</p>
<pre><code>df = df.assign(**{f"{col}_clean": lambda df:df[col].str.replace(r"\(|\)|,", "")})
</code></pre>
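<p>As a side note (my own extension, not part of the original answer): if you build the dictionary in a comprehension over all columns, bind the loop variable through a default argument, since otherwise every lambda would see only the last value of <code>col</code>:</p>
<pre><code>df = df.assign(**{
    f"{col}_clean": lambda df, col=col: df[col].str.replace(r"\(|\)|,", "")
    for col in cols2
})
</code></pre>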
|
python|pandas
| 4
|
8,001
| 56,703,423
|
Custom loss with conditional return value
|
<p>I want a loss function with this regularization: for each prediction, if the predicted point has norm lower than 0.9 or greater than 1, I want to apply the regularization.</p>
<p>So I wrote this:</p>
<pre><code>def custom_loss(y_true, y_pred):
ret = keras.losses.mean_squared_error(y_true, y_pred)
n = tf.norm(y_pred, axis = 1)
intern_circle_distance = n - 0.9
    return tf.where(tf.logical_and(tf.greater(intern_circle_distance, 0),
                                   tf.less(intern_circle_distance, 0.1)),
                    ret,
                    ret*2)
</code></pre>
<p>When I use this in the model.compile, an error is returned: </p>
<p>Shapes must be equal rank, but are 0 and 1 for 'loss_71/Hyper_loss/Select' (op: 'Select') with input shapes: [?], [], [].</p>
<p>I tried the loss outside Keras' environment and it seems to work.
For example this:</p>
<pre><code>a = tf.constant([[-1.0, 1.5]])
n = a - 1
K.eval(tf.where(tf.logical_and(tf.greater(n, 0),
                               tf.less(n, 2)),
                a, a*2))
</code></pre>
<p>returns the tensor [-2., 1.5].</p>
<p>Why does it work outside the Keras loss function and not inside it?
How can I make it work inside the Keras loss function?</p>
|
<p><code>keras.losses.mean_squared_error</code> gives you a scalar number, the mean of all the squared errors. If you want to change the error calculation per example, then do something like this:</p>
<pre><code>def custom_loss(y_true, y_pred):
diff = tf.squared_difference(y_true, y_pred)
n = tf.norm(y_pred, axis=1)
intern_circle_distance = n - 0.9
    diff_reg = tf.where((intern_circle_distance > 0) & (intern_circle_distance < 0.1),
                        diff, 2 * diff)
return tf.reduce_mean(diff_reg)
</code></pre>
|
python|tensorflow|keras|loss-function
| 0
|
8,002
| 56,716,948
|
DECODE_RAW the TensorSliceDataset
|
<p>I am replicating the TTS model Deep Voice 3.
The dataset is LJSpeech-1.1. I found a GitHub repo (<a href="https://github.com/Kyubyong/deepvoice3" rel="nofollow noreferrer">https://github.com/Kyubyong/deepvoice3</a>), but it was written for an earlier TensorFlow version, whereas I am using TF 2.0.
In data processing, I need to apply the decode_raw function to the output of TensorSliceDataset.
But I can't apply the decode_raw function to the output.
<strong>So, my question is how can I apply decode_raw to the output of TensorSliceDataset?</strong></p>
<p>I have converted the text into a tensor with dimension (13066,).
In the original repo, the author used tf.train.slice_input_producer.
For TF 2.0, I am using tf.data.Dataset.from_tensor_slices to convert that tensor into TensorSliceDataset.
After that, I can't apply decode_raw to TensorSliceDataset. Below is the code</p>
<pre class="lang-py prettyprint-override"><code># old TF code
texts, mels, dones, mags = tf.train.slice_input_producer([texts, mels, dones, mags], shuffle = True)
# TF 2.0 code
texts = tf.convert_to_tensor(texts)
texts = tf.data.Dataset.from_tensor_slices(texts)
texts = tf.io.decode_raw(texts, tf.int32) # (None,)
</code></pre>
|
<p>You need to apply the parse function to the dataset object via <code>map</code>.
Instead of this line</p>
<pre><code>texts = tf.io.decode_raw(texts, tf.int32) # (None,)`
</code></pre>
<p>use</p>
<pre><code>texts = texts.map(lambda x: tf.io.decode_raw(x, tf.int32))
</code></pre>
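<p>For illustration, a minimal self-contained sketch (my own example, assuming the dataset elements are raw <code>int32</code> byte strings) would be:</p>
<pre><code>import numpy as np
import tensorflow as tf

raw = [np.arange(5, dtype=np.int32).tobytes()]   # one serialized int32 vector
ds = tf.data.Dataset.from_tensor_slices(raw)
ds = ds.map(lambda x: tf.io.decode_raw(x, tf.int32))

for t in ds:
    print(t)  # tf.Tensor([0 1 2 3 4], shape=(5,), dtype=int32)
</code></pre>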
|
python|tensorflow|nlp|tensorflow2.0
| 0
|
8,003
| 66,819,359
|
Build a pytorch model wrap around another pytorch model
|
<p>Is it possible to wrap a pytorch model inside another pytorch module? I could not do it the normal way as in transfer learning (simply concatenating some more layers), because in order to get the intended value for the next 'layer', I need to wait for the last layer of the first module to generate multiple outputs (say 100) and use all those outputs to get the value for the next 'layer' (say, taking the max of those outputs). I tried to define the integrated model as something like the following:</p>
<pre><code>class integrated(nn.Module):
def __init__(self):
        super(integrated, self).__init__()
def forward(self, x):
model = VAE(
encoder_layer_sizes=args.encoder_layer_sizes,
latent_size=args.latent_size,
decoder_layer_sizes=args.decoder_layer_sizes,
conditional=args.conditional,
num_labels=10 if args.conditional else 0).to(device)
device = torch.device('cpu')
model.load_state_dict(torch.load(r'...')) # the first model is saved somewhere else beforehand
model.eval()
temp = []
for j in range(100):
x = model(x)
temp.append(x)
y=max(temp)
return y
</code></pre>
<p>The reason I would like to do that is the library I need to use requires the input itself to be a pytorch module. Otherwise I could simply leave the last part outside of the module.</p>
|
<p><strong>Yes you can definitely use a Pytorch module inside another Pytorch module.</strong> The way you are doing this in your example code is a bit unusual though, as external modules (<code>VAE</code>, in your case) are more often initialized in the <code>__init__</code> function and then saved as attributes of the main module (<code>integrated</code>). Among other things, this avoids having to reload the sub-module every time you call <code>forward</code>.</p>
<p>One other thing that looks a bit funny is your for loop over repeated invocations of <code>model(x)</code>. If there is no randomness involved in <code>model</code>'s evaluation, then you would only need a single call to <code>model(x)</code>, since all 100 calls will give the same value. So assuming there is some randomness, you should consider whether you can get the desired effect by batching together 100 copies of <code>x</code> and using a single call to <code>model</code> with this batched input. This ultimately depends on additional information about why you are calling this function multiple times on the same input, but either way, using a single batched evaluation will be a <em>lot</em> faster than using many unbatched evaluations.</p>
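<p>For concreteness, a minimal sketch of that pattern (illustrative only; the <code>VAE</code> construction and state loading are elided, and it assumes a batch size of 1 and that taking the max over the 100 outputs is what you want):</p>
<pre><code>import torch
import torch.nn as nn

class Integrated(nn.Module):
    def __init__(self, vae):
        super().__init__()
        self.vae = vae                    # build/load the sub-module once, here
        self.vae.eval()

    def forward(self, x):
        # one batched call with 100 copies of x replaces 100 separate calls
        xs = x.repeat(100, *([1] * (x.dim() - 1)))
        return self.vae(xs).max(dim=0).values
</code></pre>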
|
pytorch
| 1
|
8,004
| 47,088,775
|
How can I use three Conv1d on the three axis of my 3*n matrix in Pytorch?
|
<p>The following is my CNN. The input of it is a (3,64) matrix, I want to use three convolution kernels to process the x,y,z axis respectively.</p>
<pre><code>class Char_CNN(nn.Module):
def __init__(self):
super(Char_CNN, self).__init__()
self.convdx = nn.Conv1d(1, 12, 20)
self.convdy = nn.Conv1d(1, 12, 20)
self.convdz = nn.Conv1d(1, 12, 20)
self.fc1 = nn.Linear(540, 1024)
self.fc2 = nn.Linear(1024, 30)
self.fc3 = nn.Linear(30, 13)
def forward(self, x):
after_convd = [self.convdx(x[:, :, 0]), self.convdy(x[:, :, 1]), self.convdz(x[:, :, 2])]
after_pool = [F.max_pool1d(F.relu(value), 3) for value in after_convd]
x = torch.cat(after_pool, 1)
x = x.view(x.size(0), -1)
x = self.fc1(x)
x = self.fc2(x)
x = self.fc3(x)
x = F.softmax(x)
return x
</code></pre>
<p>But during the run of <code>loss = criterion(out, target)</code>, a RuntimeError occurs:</p>
<blockquote>
<p>RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed.</p>
</blockquote>
<p>I'm very new to pytorch, so I cannot find the mistake in my code.
Can you help me?</p>
|
<p>The convolution itself is okay. The problem was that my labels were between 1 and 13, while the correct range is 0 to 12.
After modifying them, my CNN works successfully.
But as a newcomer to PyTorch and deep learning, I suspect my convolution code could be clearer and simpler. Feel free to point out my errors!</p>
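<p>Concretely, the fix amounts to shifting the targets before computing the loss (a one-line sketch, assuming <code>target</code> holds the 1-based labels):</p>
<pre><code>loss = criterion(out, target - 1)  # map labels 1..13 into the expected 0..12
</code></pre>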
|
deep-learning|pytorch
| 0
|
8,005
| 47,402,346
|
Ranking groups based on size
|
<p>Sample Data:</p>
<pre><code>id cluster
1 3
2 3
3 3
4 3
5 1
6 1
7 2
8 2
9 2
10 4
11 4
12 5
13 6
</code></pre>
<p>What I would like to do is replace the largest cluster id with <code>0</code> and the second largest with <code>1</code> and so on and so forth. Output would be as shown below. </p>
<pre><code>id cluster
1 0
2 0
3 0
4 0
5 2
6 2
7 1
8 1
9 1
10 3
11 3
12 4
13 5
</code></pre>
<p>I'm not quite sure where to start with this. Any help would be much appreciated. </p>
|
<p>The objective is to relabel groups defined in the <code>'cluster'</code> column by the corresponding rank of that group's total value count within the column. We'll break this down into several steps:</p>
<ol>
<li>Integer factorization. Find an integer representation where each unique value in the column gets its own integer. We'll start with zero.</li>
<li>We then need the counts of each of these unique values.</li>
<li>We need to rank the unique values by their counts.</li>
<li>We assign the ranks back to the positions of the original column.</li>
</ol>
<hr>
<p><strong>Approach 1</strong><br>
Using Numpy's <code>numpy.unique</code> + <code>argsort</code> </p>
<p><strong>TL;DR</strong> </p>
<pre><code>u, i, c = np.unique(
df.cluster.values,
return_inverse=True,
return_counts=True
)
(-c).argsort()[i]
</code></pre>
<p>Turns out, <code>numpy.unique</code> performs the task of integer factorization and counting values in one go. In the process, we get unique values as well, but we don't really need those. Also, the integer factorization isn't obvious. That's because per the <code>numpy.unique</code> function, the return value we're looking for is called the <code>inverse</code>. It's called the inverse because it was intended to act as a way to get back the original array given the array of unique values. So if we let</p>
<pre><code>u, i, c = np.unique(
df.cluster.values,
return_inverse=True,
    return_counts=True
)
</code></pre>
<p>You'll see <code>i</code> looks like:</p>
<pre><code>array([2, 2, 2, 2, 0, 0, 1, 1, 1, 3, 3, 4, 5])
</code></pre>
<p>And if we did <code>u[i]</code> we get back the original <code>df.cluster.values</code></p>
<pre><code>array([3, 3, 3, 3, 1, 1, 2, 2, 2, 4, 4, 5, 6])
</code></pre>
<p>But we are going to use it as integer factorization.</p>
<p>Next, we need the counts <code>c</code></p>
<pre><code>array([2, 3, 4, 2, 1, 1])
</code></pre>
<p>I'm going to propose the use of <code>argsort</code> but it's confusing. So I'll try to show it:</p>
<pre><code>np.row_stack([c, (-c).argsort()])
array([[2, 3, 4, 2, 1, 1],
[2, 1, 0, 3, 4, 5]])
</code></pre>
<p>What <code>argsort</code> does in general is to place, in the top spot (position 0), the position to draw from in the originating array.</p>
<pre><code># position 2
# is best
# |
# v
# array([[2, 3, 4, 2, 1, 1],
# [2, 1, 0, 3, 4, 5]])
# ^
# |
# top spot
# from
# position 2
# position 1
# goes to
# pen-ultimate spot
# |
# v
# array([[2, 3, 4, 2, 1, 1],
# [2, 1, 0, 3, 4, 5]])
# ^
# |
# pen-ultimate spot
# from
# position 1
</code></pre>
<p>What this allows us to do is to slice this <code>argsort</code> result with our integer factorization to arrive at a remapping of the ranks.</p>
<pre><code># i is
# [2 2 2 2 0 0 1 1 1 3 3 4 5]
# (-c).argsort() is
# [2 1 0 3 4 5]
# argsort
# slice
# \ / This is our integer factorization
# a i
# [[0 2] <-- 0 is second position in argsort
# [0 2] <-- 0 is second position in argsort
# [0 2] <-- 0 is second position in argsort
# [0 2] <-- 0 is second position in argsort
# [2 0] <-- 2 is zeroth position in argsort
# [2 0] <-- 2 is zeroth position in argsort
# [1 1] <-- 1 is first position in argsort
# [1 1] <-- 1 is first position in argsort
# [1 1] <-- 1 is first position in argsort
# [3 3] <-- 3 is third position in argsort
# [3 3] <-- 3 is third position in argsort
# [4 4] <-- 4 is fourth position in argsort
# [5 5]] <-- 5 is fifth position in argsort
</code></pre>
<p>We can then drop it into the column with <code>pd.DataFrame.assign</code> </p>
<pre><code>u, i, c = np.unique(
df.cluster.values,
return_inverse=True,
return_counts=True
)
df.assign(cluster=(-c).argsort()[i])
id cluster
0 1 0
1 2 0
2 3 0
3 4 0
4 5 2
5 6 2
6 7 1
7 8 1
8 9 1
9 10 3
10 11 3
11 12 4
12 13 5
</code></pre>
<hr>
<p><strong>Approach 2</strong><br>
I'm going to leverage the same concepts. However, I'll use Pandas <code>pandas.factorize</code> to get integer factorization with <code>numpy.bincount</code> to count values. The reason to use this approach is because Numpy's <code>unique</code> actually sorts the values in the midst of factorizing and counting. <code>pandas.factorize</code> does not. For larger data sets, big oh is our friend as this remains <code>O(n)</code> while the Numpy approach is <code>O(nlogn)</code>.</p>
<pre><code>i, u = pd.factorize(df.cluster.values)
c = np.bincount(i)
df.assign(cluster=(-c).argsort()[i])
id cluster
0 1 0
1 2 0
2 3 0
3 4 0
4 5 2
5 6 2
6 7 1
7 8 1
8 9 1
9 10 3
10 11 3
11 12 4
12 13 5
</code></pre>
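<p>As an aside (my own addition): a shorter pandas-only sketch produces the same mapping, though ties between equally sized clusters are broken in whatever order <code>value_counts</code> returns them, rather than by sorted unique value as above:</p>
<pre><code>order = df['cluster'].value_counts().index            # clusters, largest count first
df['cluster'] = df['cluster'].map({k: r for r, k in enumerate(order)})
</code></pre>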
|
python|pandas|numpy|dataframe
| 4
|
8,006
| 68,035,205
|
Pandas dataframe: grouping by unique identifier, checking conditions, and applying 1/0 to new column if condition is met/not met
|
<p>I have a large dataset pertaining to customer churn, where every customer has a unique identifier (encoded key). The dataset is a timeseries, where every customer has one row for every month they have been a customer, so both the date and customer-identifier columns naturally contain duplicates. What I am trying to do is to add a new column (called 'churn') and set the column to 0 or 1 based on whether it is that specific customer's last month as a customer or not.</p>
<p>I have tried numerous methods to do this, but each and every one fails, either due to tracebacks or because they just don't work as intended. It should be noted that I am very new to both python and pandas, so please explain things like I'm five (lol).</p>
<p>I have tried using pandas groupby to group rows by the unique customer keys, and then checking conditions:</p>
<pre><code>df2 = df2.groupby('customerid').assign(churn = [1 if date==max(date) else 0 for date in df2['date']])
</code></pre>
<p>which gives a traceback because the DataFrameGroupBy object has no attribute <code>assign</code>.</p>
<p>I have also tried the following:</p>
<pre><code>df2.sort_values(['date']).groupby('customerid').loc[df['date'] == max('date'), 'churn'] = 1
df2.sort_values(['date']).groupby('customerid').loc[df['date'] != max('date'), 'churn'] = 0
</code></pre>
<p>which gives a similar traceback, but due to the attribute <code>loc</code>.</p>
<p>I have also tried using numpy methods, like the following:</p>
<pre><code>df2['churn'] = df2.groupby(['customerid']).np.where(df2['date'] == max('date'), 1, 0)
</code></pre>
<p>which again gives tracebacks due to the DataFrameGroupBy object.</p>
<p>and:</p>
<pre><code>df2['churn'] = np.where((df2['date']==df2['date'].max()), 1, df2['churn'])
</code></pre>
<p>which does not give tracebacks, but does not work as intended, i.e. it applies 1 to the churn column for the max date for all rows, instead of the max date for the specific customerid - which in retrospect is completely understandable since customerid is not specified anywhere.</p>
<p>Any help/tips would be appreciated!</p>
|
<p>IIUC, use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <code>max</code> to return the maximal value per group, compare it with the <code>date</code> column, and finally set the <code>1,0</code> values from the mask:</p>
<pre><code>mask = df2['date'].eq(df2.groupby('customerid')['date'].transform('max'))
df2['churn'] = np.where(mask, 1, 0)
</code></pre>
<hr />
<pre><code>df2['churn'] = mask.astype(int)
</code></pre>
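<p>For illustration (a made-up toy frame, not from the question), the effect looks like this:</p>
<pre><code>import numpy as np
import pandas as pd

df2 = pd.DataFrame({
    'customerid': ['a', 'a', 'b', 'b', 'b'],
    'date': pd.to_datetime(['2021-01-01', '2021-02-01',
                            '2021-01-01', '2021-02-01', '2021-03-01']),
})
mask = df2['date'].eq(df2.groupby('customerid')['date'].transform('max'))
df2['churn'] = np.where(mask, 1, 0)
print(df2)  # churn is 1 only on each customer's last month
</code></pre>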
|
pandas|dataframe|pandas-groupby
| 0
|
8,007
| 68,218,996
|
List index out of range error using TensorFlow
|
<p>I am Using TensorFlow to create an image classification model. I have written the following lines of code:</p>
<pre><code>import pandas as pd
import os
import tensorflow as tf
import keras
from keras.models import Sequential
from keras.layers import Dense, Conv2D , MaxPool2D , Flatten , Dropout
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import Adam
from matplotlib import pyplot as plt
import matplotlib.image as mpimg
import random
%matplotlib inline
import matplotlib.pyplot as plt
from tensorflow.keras import datasets, layers, models
import glob
from PIL import Image
</code></pre>
<p>--importing all my libraries</p>
<pre><code>#newer code
dic = {}
# assuming you have .png format files else change the same into the glob statement
train_images='/Users/FOLDER/downloads/Boneage_competition/training_dataset/Resized/'
for file in glob.glob(train_images+'/*.png'):
b_name = os.path.basename(file).split('.')[0]
dic[b_name] = mpimg.imread(file)
dic_label_match = {}
label_file = '/Users/FOLDER/downloads/train.csv'
train_labels = pd.read_csv (r'/Users/FOLDER/downloads/train.csv')
for i in range(len(train_labels)):
# given your first column is age and image no starts from 1
dic_label_match[i+1] = str(train_labels.iloc[i][0])
# you can use the below line too
# dic_label_match[i+1] = str(train_labels.iloc[i][age])
# now you have dict with keys and values
# create two lists / arrays and you can pass the same to the keram model
train_x = []
label_ = []
for val in dic:
if val in dic and val in dic_label_match:
train_x.append(dic[val])
label_.append(dic_label_match[val])
</code></pre>
<p>-- appending each image to its corresponding label</p>
<pre><code>model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(12611,300,300,1)),
tf.keras.layers.Dense(2, activation='relu'),
tf.keras.layers.Dense(2)
])
</code></pre>
<p>--Applying a model to the dataset</p>
<pre><code>loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
</code></pre>
<pre><code>model.compile(optimizer='adam',
loss=loss_fn,
metrics=['accuracy'])
</code></pre>
<p>-- compiling my model</p>
<pre><code>model.fit(train_x, label_, epochs=5)
</code></pre>
<p>Upon running this code, I am greeted with an error message on the last line. The entire message is:</p>
<pre><code>IndexError Traceback (most recent call last)
<ipython-input-24-ca24364bad96> in <module>
----> 1 model.fit(train_x, label_, epochs=5)
~/opt/anaconda3/envs/ML2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
706 self._check_call_args('fit')
707
--> 708 func = self._select_training_loop(x)
709 return func.fit(
710 self,
~/opt/anaconda3/envs/ML2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in _select_training_loop(self, inputs)
498 self._distribution_strategy)):
499 try:
--> 500 valid_adapter = data_adapter.select_data_adapter(inputs, None)
501 except ValueError as data_failure_exception:
502 valid_adapter = None
~/opt/anaconda3/envs/ML2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/data_adapter.py in select_data_adapter(x, y)
645 def select_data_adapter(x, y):
646 """Selects a data adapter than can handle a given x and y."""
--> 647 adapter_cls = [cls for cls in ALL_ADAPTER_CLS if cls.can_handle(x, y)]
648 if not adapter_cls:
649 # TODO(scottzhu): This should be a less implementation-specific error.
~/opt/anaconda3/envs/ML2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/data_adapter.py in <listcomp>(.0)
645 def select_data_adapter(x, y):
646 """Selects a data adapter than can handle a given x and y."""
--> 647 adapter_cls = [cls for cls in ALL_ADAPTER_CLS if cls.can_handle(x, y)]
648 if not adapter_cls:
649 # TODO(scottzhu): This should be a less implementation-specific error.
~/opt/anaconda3/envs/ML2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/data_adapter.py in can_handle(x, y)
451 @staticmethod
452 def can_handle(x, y=None):
--> 453 handles_x = ListsOfScalarsDataAdapter._is_list_of_scalars(x)
454 handles_y = True
455 if y is not None:
~/opt/anaconda3/envs/ML2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/data_adapter.py in _is_list_of_scalars(inp)
462 return True
463 if isinstance(inp, (list, tuple)):
--> 464 return ListsOfScalarsDataAdapter._is_list_of_scalars(inp[0])
465 return False
466
IndexError: list index out of range
</code></pre>
<p>I have tried adjusting the epoch number, as well as using other models to no avail.</p>
<p>If you have any ideas why this might be or any tips for my code, it would be greatly appreciated!</p>
|
<p>You are getting error due to the following line</p>
<pre><code>dic_label_match[i+1] = str(train_labels.iloc[i][0])
</code></pre>
<p>Here you are indexing from 1 instead of 0. So when you execute the following line</p>
<pre><code>label_.append(dic_label_match[val])
</code></pre>
<p>your <code>label_</code> has values starting from <code>1, 2, ...</code> when it should start from <code>0, 1, ...</code>.</p>
<p>Hence you can either change the line as follows</p>
<pre><code>label_.append(dic_label_match[val - 1])
</code></pre>
<p>or you can change the following line</p>
<pre><code>dic_label_match[i] = str(train_labels.iloc[i][0])
</code></pre>
|
python|tensorflow|keras|deep-learning|image-classification
| 0
|
8,008
| 59,178,619
|
During Pytorch tutorial a ModuleNotFoundError: ‘pycocotools._mask’ occurs
|
<p>Hello I’m new to Pytorch and I’ve been trying to work through this tutorial.
[<a href="https://github.com/pytorch/tutorials/blob/master/intermediate_source/torchvision_tutorial.rst]" rel="nofollow noreferrer">https://github.com/pytorch/tutorials/blob/master/intermediate_source/torchvision_tutorial.rst]</a></p>
<p>I’m using Visual Studio Code and conda-installed PyTorch. Initially the issue was that the module <code>engine</code> was missing, but this problem was solved here (along with a step I had misread):</p>
<blockquote>
<p>So how do we get the references/detection/ folders? What should we download and install? I have installed the pytorch, torchvision in my environment, but I could not find those files. Thanks</p>
</blockquote>
<p>So I downloaded and copied pycocotools into the project directory, along with the vision/detection/ .py files.
That fixed that issue, but the error it is now giving is:</p>
<pre><code>(base) C:\Users\Sean\Desktop\Project\Test\Tutorial>D:/Anaconda/python.exe c:/Users/Sean/Desktop/Project/Test/Tutorial/tv-training-code.py
Traceback (most recent call last):
File "c:/Users/Sean/Desktop/Project/Test/Tutorial/tv-training-code.py", line 13, in <module>
from engine import train_one_epoch, evaluate
File "c:\Users\Sean\Desktop\Project\Test\Tutorial\engine.py", line 8, in <module>
from coco_utils import get_coco_api_from_dataset
File "c:\Users\Sean\Desktop\Project\Test\Tutorial\coco_utils.py", line 9, in <module>
from pycocotools import mask as coco_mask
File "c:\Users\Sean\Desktop\Project\Test\Tutorial\pycocotools\mask.py", line 3, in <module>
import pycocotools._mask as _mask
ModuleNotFoundError: No module named 'pycocotools._mask'
</code></pre>
<p>I’m not quite sure what the issue here is or how to fix it, given that the _mask.pyx is present and is, I think, what is being imported. As I said, I’m not too sure what the issue is, but I would appreciate the help.</p>
<p>Python Version : 3.7.4
Pytorch: 1.2.0 (Cuda 10)</p>
|
<p>The problem is you copied the files for <code>pycocotools</code> instead of installing them. Files ending in <code>.pyx</code> are Cython source files which need to be compiled into extension modules (on Windows these would be <code>.pyd</code> files). If you do a proper installation of the package instead of a file copy, that should fix your problem.</p>
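<p>For illustration (my own suggestion, not part of the original answer): at the time, a commonly used way to get a working build on Windows was the philferriere fork of the COCO API, installed from source so the Cython extensions get compiled:</p>
<pre><code>pip install cython
pip install "git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI"
</code></pre>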
|
python|python-3.x|visual-studio-code|pytorch|torchvision
| 0
|
8,009
| 57,215,717
|
how to save a lot of variables with tf.train.Checkpoint
|
<p>I can save two variables (v1, v2) in checkpoints (<a href="https://www.tensorflow.org/beta/guide/checkpoints#manually_inspecting_checkpoints" rel="nofollow noreferrer">https://www.tensorflow.org/beta/guide/checkpoints#manually_inspecting_checkpoints</a>) in the following way. But if I have many variables (v3, v4, ...), how do I do that? If I use the same way (v1=v1, v2=v2, v3=v3, v4=v4, ...) there are too many parameters. Is there a convenient way to do it? For example, since tf.train.Checkpoint can accept a Keras object, can I put all the variables in one object?</p>
<pre><code>opt = tf.train.AdamOptimizer(0.1)
net = Net()
v1 = tf.get_variable("v1", [3], initializer = tf.zeros_initializer)
v2 = tf.get_variable("v2", [5], initializer = tf.zeros_initializer)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net,v2=v2,v1=v1)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
ckpt.restore(manager.latest_checkpoint)
if manager.latest_checkpoint:
print("Restored from {}".format(manager.latest_checkpoint))
else:
print("Initializing from scratch.")
for example in toy_dataset():
loss = train_step(net, example, opt)
ckpt.step.assign_add(1)
if int(ckpt.step) % 10 == 0:
save_path = manager.save()
print("Saved checkpoint for step {}: {}".format(int(ckpt.step), save_path))
print("loss {:1.2f}".format(loss.numpy()))
</code></pre>
|
<p>Let me rephrase your question to make sure I understand it correctly: "how to save many tf variables in a checkpoint?"</p>
<p>If so, then my answer would be to put all of those variables into a scope and access them in the manner proposed in this answer: <a href="https://stackoverflow.com/a/41642426/11708498">https://stackoverflow.com/a/41642426/11708498</a></p>
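<p>Alternatively (an assumption on my part, not from the linked answer): TF 2.x-style checkpointing also tracks plain Python containers, so you can hand <code>tf.train.Checkpoint</code> a whole dict (or list) of variables under a single keyword:</p>
<pre><code>variables = {f"v{i}": tf.Variable(tf.zeros([3]), name=f"v{i}") for i in range(1, 10)}
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net,
                           extra=variables)  # the whole dict is tracked as one attribute
</code></pre>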
|
python|tensorflow|machine-learning|deep-learning
| 1
|
8,010
| 46,079,186
|
Tensorflow error when training: Caused by op 'shuffle_batch'
|
<p>I am trying to read images and labels from a TFRecord file, and then train with these.
I know that my TFRecord file exists, and have checked that it does contain 1000 images and labels. My problem only seems to arise when I want to pipe it as input for training.
I am new to Python and TensorFlow, and not sure how to fix the problem.</p>
<p>I get the following error occurring at tf.train.shuffle_batch:</p>
<p>...</p>
<p>Caused by op 'shuffle_batch', defined at:
File "C:/AI/projects/DataGen/train.py", line 40, in
images_batch, labels_batch = tf.train.shuffle_batch([image, label], batch_size=10, capacity=1000,min_after_dequeue=2)</p>
<p>...</p>
<p>Here is my code, cobbled together from various mnist examples </p>
<pre><code>import tensorflow as tf
def read_and_decode_single_example(filename):
# first construct a queue containing a list of filenames.
    # this lets a user split up their dataset in multiple files to keep
# size down
filename_queue = tf.train.string_input_producer([filename],
num_epochs=None)
# Unlike the TFRecordWriter, the TFRecordReader is symbolic
reader = tf.TFRecordReader()
# One can read a single serialized example from a filename
# serialized_example is a Tensor of type string.
_, serialized_example = reader.read(filename_queue)
# The serialized example is converted back to actual values.
# One needs to describe the format of the objects to be returned
feature = {'image': tf.FixedLenFeature([], tf.string),
'label': tf.FixedLenFeature([], tf.int64)}
features = tf.parse_single_example(serialized_example, features=feature)
# now return the converted data
label = tf.cast(features['label'], tf.float32)
image = tf.decode_raw(features['image'], tf.float32)
image = tf.reshape(image, [28, 28, 3])
return label, image
with tf.Session() as sess:
sess.run(tf.local_variables_initializer())
sess.run(tf.global_variables_initializer())
# get single examples
label, image = read_and_decode_single_example("train.tfrecords")
image = tf.cast(image, tf.float32) / 255.
# groups examples into batches randomly
images_batch, labels_batch = tf.train.shuffle_batch([image, label], batch_size=10, capacity=1000, min_after_dequeue=2)
# The model is:
#
# Y = softmax( X * W + b)
# X: matrix for rgb images of 28x28 pixels, flattened (there are 100 images in a mini-batch)
# W: weight matrix with (28x28x3) lines and 10 columns
# b: bias vector with 10 dimensions
# +: add with broadcasting: adds the vector to each line of the matrix (numpy)
# softmax(matrix) applies softmax on each line
# softmax(line) applies an exp to each value then divides by the norm of the resulting line
# Y: output matrix with 100 lines and 10 columns
# input X: 28x28x3 RGB images
X = images_batch
# correct answers will go here
Y_ = labels_batch
# weights W[28 * 28 * 3, 10]
W = tf.Variable(tf.zeros([28 * 28 * 3, 10]))
# biases b[10]
b = tf.Variable(tf.zeros([10]))
# flatten the images into a single line of pixels
# -1 in the shape definition means "the only possible dimension that will preserve the number of elements"
XX = tf.reshape(X, [-1, 28 * 28 * 3])
# The model
Y = tf.nn.softmax(tf.matmul(XX, W) + b)
# loss function: cross-entropy = - sum( Y_i * log(Yi) )
# Y: the computed output vector
# Y_: the desired output vector
# cross-entropy
# log takes the log of each element, * multiplies the tensors element by element
# reduce_mean will add all the components in the tensor
# so here we end up with the total cross-entropy for all images in the batch
cross_entropy = -tf.reduce_mean(Y_ * tf.log(Y)) * 100.0 # normalized for batches of 100 images,
# *10 because "mean" included an unwanted division by 10
# accuracy of the trained model, between 0 (worst) and 1 (best)
correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# training, learning rate = 0.005
train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cross_entropy)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
for i in range(100 + 1):
print(i)
sess.run(train_step)
coord.request_stop()
# Wait for threads to stop
coord.join(threads)
sess.close()
</code></pre>
|
<p>I moved the initialization to just before the tf.train.start_queue_runners call, i.e. after the model is set up, and that solved the problem:</p>
<pre><code>sess.run(tf.local_variables_initializer())
sess.run(tf.global_variables_initializer())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
</code></pre>
|
python|tensorflow
| 1
|
8,011
| 51,053,396
|
Convert two tensors with the same values into each other
|
<p>I have two tensors that are the same in terms of values but different in terms of shape, like these:</p>
<pre><code>output_image1 =
[[[[3. 1.]
[2. 7.]]
[[5. 4.]
[9. 8.]]]
[[[3. 3.]
[1. 4.]]
[[6. 5.]
[7. 2.]]]]
output_image2 =
[[[[3]
[1]
[5]
[4]]
[[2]
[7]
[9]
[8]]
[[3]
[3]
[6]
[5]]
[[1]
[4]
[7]
[2]]]]
output_image1.shape = (2, 2, 2, 2)
output_image2.shape = (1, 4, 4, 1)
</code></pre>
<p>How can I change the shape of image1 into that of image2 while keeping the same values? I mean from (2, 2, 2, 2) --> (1, 4, 4, 1), with the same values as image2.</p>
|
<p>I found the answer; maybe it's helpful for others. We should use this function:</p>
<pre><code>output_image = tf.depth_to_space(
output_image,
2,
name=None,
data_format='NHWC'
)
</code></pre>
|
python|tensorflow|keras
| 0
|
8,012
| 66,363,025
|
How to join different pandas dataframes stored in a list of dictionaries, ordered by MultiIndex
|
<p>I have a set of data in a list. Each item of the list is a dictionary with a unique key and the value of the dictionary is a DataFrame that contains 6 columns + index col.</p>
<pre><code> list = [{"A": Participation Assignment Words Creativeness Innovative Great
Date
2021-01-02 95.00 75.00 75.00 79.00 100 OK
2021-01-05 83.00 83.00 83.00 80.00 100 OK
2021-01-06 98.88 78.88 77.00 77.00 34 OK
2021-01-07 77.00 77.00 77.00 77.00 150 OK
2021-01-08 79.00 79.00 70.00 70.00 99 OK
... ... ... ... ... ... ...
2021-02-18 65.67 36.67 35.59 36.88 94 OK
2021-02-19 60.94 38.00 36.94 38.72 75 OK
2021-02-22 40.00 43.80 40.80 42.71 82 OK
2021-02-23 42.00 43.81 38.99 42.29 174 OK
2021-02-24 42.00 45.00 42.00 44.17 175 OK
[1065 rows x 6 columns]}, "B": Participation Assignment Words Creativeness Innovative Great
Date
2021-01-02 95.00 75.00 75.00 79.00 100 OK
2021-01-05 83.00 83.00 83.00 80.00 100 OK
2021-01-06 98.88 78.88 77.00 77.00 340 OK
2021-01-07 77.00 77.00 77.00 77.00 150 OK
2021-01-08 79.00 79.00 70.00 70.00 93 OK
... ... ... ... ... ... ...
2021-02-18 65.67 36.67 35.59 36.88 94 OK
2021-02-19 60.94 38.00 36.94 38.72 95 OK
2021-02-22 40.00 43.80 40.80 42.71 182 OK
2021-02-23 42.00 43.81 38.99 42.29 174 OK
2021-02-24 42.00 45.00 42.00 44.17 75 OK
[1065 rows x 6 columns]}, ...]
</code></pre>
<p>What I want to do is to have a large DataFrame where the general index is the Date, the first index Column is the respective key of each dictionary and the sub index of this is</p>
<pre><code> A B
Participation Assignment Words Creativeness Innovative Great Participation Assignment Words Creativeness Innovative Great
Date
2021-01-02 95.00 75.00 75.00 79.00 100 OK 95.00 75.00 75.00 79.00 100 OK
</code></pre>
<p>Something like that. Is it possible?</p>
|
<p>First, don't use <code>list</code> as a variable name, because it shadows the Python builtin.</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a>; when passed a dictionary, it creates a <code>MultiIndex</code> from the dictionary's keys:</p>
<pre><code>df = pd.concat(L, axis=1)
</code></pre>
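<p>Since the data here is a <em>list</em> of single-key dicts, a small extra step (my assumption about the layout, with <code>L</code> as a hypothetical name for your list) merges them into one dict first so that <code>concat</code> can pick up the keys:</p>
<pre><code>merged = {k: v for d in L for k, v in d.items()}   # e.g. L = [{"A": df_a}, {"B": df_b}]
df = pd.concat(merged, axis=1)                     # keys become the top column level
</code></pre>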
|
python|pandas|dataframe
| 0
|
8,013
| 66,491,624
|
Tensorflow 2 Keras Nested Model Subclassing - Total parameters zero
|
<p>I am trying to implement a simple model subclassing inspired by the VGG network.</p>
<p>So here is the code:</p>
<pre><code>class ConvMax(tf.keras.Model):
def __init__(self, filters=4, kernel_size=3, pool_size=2, activation='relu'):
super(ConvMax, self).__init__()
self.conv = tf.keras.layers.Conv2D(filters, kernel_size, padding='same', activation=activation)
self.maxpool = tf.keras.layers.MaxPool2D((pool_size, pool_size))
def call(self, input_tensor):
x = self.conv(input_tensor)
x = self.maxpool(x)
return x
class RepeatedConvMax(tf.keras.Model):
def __init__(self, repetitions=4, filters=4, kernel_size=3, pool_size=2, activation='relu', **kwargs):
super(RepeatedConvMax, self).__init__(**kwargs)
self.repetitions = repetitions
self.filters = filters
self.kernel_size = kernel_size
self.pool_size = pool_size
self.activation = activation
# Define a repeated ConvMax
for i in range(self.repetitions):
# Define a ConvMax layer, specifying filters, kernel_size, pool_size.
vars(self)[f'convMax_{i}'] = ConvMax(self.filters, self.kernel_size, self.pool_size, self.activation)
def call(self, input_tensor):
# Connect the first layer
x = vars(self)['convMax_0'](input_tensor)
# Connect the existing layers
for i in range(1, self.repetitions):
x = vars(self)[f'convMax_{i}'](x)
# return the last layer
return x
</code></pre>
<p>But when I try to build the network and look at the summary, here is what I find:</p>
<pre><code>model_input = tf.keras.layers.Input(shape=(64,64,3,), name="input_layer")
x = RepeatedConvMax()(model_input)
model = tf.keras.Model(inputs=model_input, outputs=x)
model.summary()

Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_layer (InputLayer) [(None, 64, 64, 3)] 0
_________________________________________________________________
repeated_conv_max (RepeatedC (None, 4, 4, 4) 0
=================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<p>The total params are <strong>zero</strong>.</p>
<p>However, when I try:</p>
<pre><code>model_input = tf.keras.layers.Input(shape=(64,64,3,), name="input_layer")
x = ConvMax()(model_input)
x = ConvMax()(x)
x = ConvMax()(x)
x = ConvMax()(x)
model = tf.keras.Model(inputs=model_input, outputs=x)
model.summary()
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_layer (InputLayer) [(None, 64, 64, 3)] 0
_________________________________________________________________
conv_max (ConvMax) (None, 32, 32, 4) 112
_________________________________________________________________
conv_max_1 (ConvMax) (None, 16, 16, 4) 148
_________________________________________________________________
conv_max_2 (ConvMax) (None, 8, 8, 4) 148
_________________________________________________________________
conv_max_3 (ConvMax) (None, 4, 4, 4) 148
=================================================================
Total params: 556
Trainable params: 556
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<p>It shows the <strong>correct</strong> total params.</p>
<p>Do you know what the problem is?
Why, with two-level subclassing, is the parameter count 0?
Will it affect training?</p>
<p>Thank you...</p>
|
<p>The problem is not with keras but in the way you are initializing the layers in <code>RepeatedConvMax</code>.</p>
<p>TLDR: don't use <code>vars</code> to dynamically instantiate and retrieve attributes; use <code>setattr</code> and <code>getattr</code> instead.</p>
<p>To solve the problem, you simply have to replace <code>vars[]</code> with <code>setattr</code> and <code>getattr</code>. From my (very limited, I actually found this out right now while looking for a solution) understanding, when you call <code>vars</code> you are working on a copy of the dictionary representing your object. When you create attributes dynamically in this way, Keras is not able to add the weights to the model (why is that, I don't yet know, but I will find out and update the answer when I do).</p>
<p>If you define your class like this, everything works as expected:</p>
<pre><code>class RepeatedConvMax(tf.keras.Model):
def __init__(self, repetitions=4, filters=4, kernel_size=3, pool_size=2, activation='relu', **kwargs):
super(RepeatedConvMax, self).__init__(**kwargs)
self.repetitions = repetitions
self.filters = filters
self.kernel_size = kernel_size
self.pool_size = pool_size
self.activation = activation
# Define a repeated ConvMax
for i in range(self.repetitions):
# Define a ConvMax layer, specifying filters, kernel_size, pool_size.
setattr(self, f'convMax_{i}', ConvMax(self.filters, self.kernel_size, self.pool_size, self.activation))
def call(self, input_tensor, training=None, mask=None):
# Connect the first layer
x = getattr(self, 'convMax_0')(input_tensor)
# Connect the existing layers
for i in range(1, self.repetitions):
print(f"Layer {i}")
x = getattr(self, f'convMax_{i}')(x)
print(x)
# return the last layer
return x
</code></pre>
|
python|tensorflow|keras
| 3
|
8,014
| 66,357,644
|
Filter pandas columns based on multiple row condition
|
<p>This is an extension to my earlier question.</p>
<p><a href="https://stackoverflow.com/questions/66357241/filter-pandas-columns-based-on-row-condition">Filter pandas columns based on row condition</a></p>
<p>Now i want to have multiple conditions to filter columns.</p>
<p>Here is my data</p>
<pre><code> x1 x2 x3 ....
row1 12 3.4 5 ...
row2 1 3 4 ...
row3 True False True ...
...
</code></pre>
<p><code>df.loc[:, df.loc['row3']==True]</code> works if I just want to filter on the <code>row3</code> condition of <code>True</code>.</p>
<p>I want to filter the columns where <code>row3</code> is <code>true</code>,</p>
<p><code>and</code> I want to filter the columns where <code>row2</code> is <code>>3</code></p>
<p>So in this example only column x3 should appear.</p>
<p>I tried the following code but I get an error. I also tried adding brackets.</p>
<pre><code>df.loc[:,df.loc['row3']==True & :,df.loc['row2']>3]
</code></pre>
<p>Any ideas?</p>
|
<p>It should be:</p>
<pre><code>x = (pd.to_numeric(df.loc['row2'],'coerce').gt(3)) & (df.loc['row3']=='True')
</code></pre>
<hr />
<p><strong>x:</strong></p>
<pre><code>x1 False
x2 False
x3 True
dtype: bool
</code></pre>
<p>Then you can easily apply a filter to get the column where the value is true:</p>
<pre><code>x[x].index[0]
</code></pre>
<hr />
<p>output:</p>
<pre><code>x3
</code></pre>
<hr />
<pre><code>df.loc[:,x]
</code></pre>
|
python|pandas|filter
| 0
|
8,015
| 57,537,591
|
some coordinates that I extracted from geocoder in Python are not saving in the variable I created
|
<p>Hi, I want to save some coordinates (latitudes and longitudes) I extracted through geocodes. The problem I have is that those coordinates are not saving, and I can't seem to add them as columns to the table I generated using pandas.</p>
<p>I get this error:
AttributeError: 'NoneType' object has no attribute 'latitude'</p>
<pre><code> import pandas
from geopy.geocoders import Nominatim
df1= pandas.read_json("supermarkets.json")
nom= Nominatim(scheme= 'http')
lis= list(df1.index)
for i in lis:
l= nom.geocode(list(df1.loc[i,"Address":"Country"]))
j=[]+ [l.latitude]
k=[]+ [l.longitude]
</code></pre>
<p>I expect a way to save the coordinates and include them in my table. Thanks.</p>
|
<p>The <a href="https://geopy.readthedocs.io/en/stable/#geopy.geocoders.Nominatim.geocode" rel="nofollow noreferrer"><strong><code>nom.geocode(..)</code></strong> [geopy-doc]</a> call can return <code>None</code> if the address cannot be found, or if the query is not answered in sufficient time. This is specified in the documentation:</p>
<blockquote>
<p><strong>Return type:</strong></p>
<p><code>None</code>, <code>geopy.location.Location</code> or a <code>list</code> of them, if <code>exactly_one=False</code>.</p>
</blockquote>
<pre><code>from operator import attrgetter

locations = df.loc[:, 'Address':'Country'].apply(
    lambda r: nom.geocode(list(r)), axis=1
)
nonnull = locations.notnull()
df.loc[nonnull, 'longitude'] = locations[nonnull].apply(attrgetter('longitude'))
df.loc[nonnull, 'latitude'] = locations[nonnull].apply(attrgetter('latitude'))
</code></pre>
<p>Here we first query all locations, then check which lookups were successful, and retrieve the <code>latitude</code> and <code>longitude</code> for those locations.</p>
|
python|pandas|geocode|geopy
| 1
|
8,016
| 51,515,544
|
Tensorflow faster rcnn giving good detection but still detecting false positives with coco objects
|
<p>I have used the tensorflow API to detect the Guinness harp using the process described here - <a href="https://pythonprogramming.net/introduction-use-tensorflow-object-detection-api-tutorial/" rel="nofollow noreferrer">https://pythonprogramming.net/introduction-use-tensorflow-object-detection-api-tutorial/</a>.</p>
<p>I have mostly good results, whenever the logo is clear in the image it finds it nicely - <a href="https://i.stack.imgur.com/iBW1V.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iBW1V.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/MVmZY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MVmZY.jpg" alt="good detection"></a></p>
<p>However, after retraining from a coco checkpoint, it still detects what I think are coco objects with a very high confidence rating, e.g. people, magazines. I cannot work out why this is.</p>
<p>(see below)</p>
<p><a href="https://i.stack.imgur.com/Fu5MU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Fu5MU.jpg" alt="false positives"></a></p>
<p>I am using the faster_rcnn_inception_v2_coco.config found here - <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/faster_rcnn_inception_v2_coco.config" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/samples/configs/faster_rcnn_inception_v2_coco.config</a></p>
<p>Training for more steps does not seem to help as the total loss averages out. The above screenshots were from 10,000 training steps. I am training on a cpu. </p>
<p>I am augmenting my training images using <a href="https://github.com/aleju/imgaug" rel="nofollow noreferrer">imgaug</a>, and an example training image can be seen below ( i have included the debug bounding box around the target) - </p>
<p><a href="https://i.stack.imgur.com/6tFiB.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6tFiB.jpg" alt="enter image description here"></a></p>
<p>However, if the training images were the problem, wouldn't the graph have trouble detecting the target altogether?</p>
|
<p>I had a similar issue recently; from what I can tell, it somewhat looks like a case of <strong>underfitting</strong>. I tried multiple things to improve the results.</p>
<p>The thing that worked for me was actually <strong>augmenting data</strong> using the library <strong><a href="https://github.com/aleju/imgaug" rel="nofollow noreferrer">imgaug</a></strong>. You can augment the <em>images</em> as well as the <em>bounding boxes</em> using a simple script, try and increase the dataset by say 10/12 fold.</p>
<p>I would also suggest adding some <strong>background images</strong>, i.e. images with no object; this was recommended by a few people in the tensorflow discussions in the issues.</p>
<p>Try and train the dataset again and monitor it using tensorboard. I think you will be able to reduce the number of false positives significantly.</p>
|
tensorflow|machine-learning|object-detection-api
| 1
|
8,017
| 51,476,762
|
get the age from date column in pandas dataframe (Current Date Format : MM/DD/YYYY HH:MM)
|
<p>How can I get the age from a date column in a pandas dataframe (current date format: MM/DD/YYYY HH:MM)?
Age is expected in years.</p>
<pre><code>ID name dateofbirth
0 Raj 9/17/1966 01:37
1 Joe 11/13/1937 19:20
2 mano 1/5/1964 20:05
3 Rishi 11/13/1937 0:00
</code></pre>
<p>I am new to pandas; please suggest a possible solution.</p>
|
<p>This is one approach</p>
<pre><code>import pandas as pd
import datetime
now = datetime.datetime.now()
df['dateofbirth'] = pd.to_datetime(df['dateofbirth'], format='%m/%d/%Y %H:%M')
df["Age"] = (now.date() - df['dateofbirth']).astype('<m8[Y]')
print(df)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> ID name dateofbirth Age
0 0 Raj 1966-09-17 01:37:00 51.0
1 1 Joe 1937-11-13 19:20:00 80.0
2 2 mano 1964-01-05 20:05:00 54.0
3 3 Rishi 1937-11-13 00:00:00 80.0
</code></pre>
|
python|pandas|date
| 2
|
8,018
| 70,794,588
|
CSV - Split multiple-line cell into multiple cells
|
<p>I’m currently doing some big data work. I have an issue in a .CSV where I need to split a multiple-line single-celled chunk of text, into individual cells. The below table shows the desired output. Currently, all of the 'ingredients' are in the same cell, with each ingredient on its own new line (Stack Overflow wouldn't allow me to create new lines in the same cell).</p>
<p>I need to write a script to split this single cell of ingredients into the below output, using each new line in the cell as a delimiter. The real use case I'm using this for is much more complex - over 200 'items', and anywhere between 50-150 'ingredients' per 'item'. I'm currently doing this manually in excel with a series of text to columns & transpose pastes, but it takes approximately 2-2.5 full work days to do.</p>
<p><a href="https://drive.google.com/file/d/1WIsQwPok-EXx5sXra7z_f4qe10u8CYkP/view?usp=sharing" rel="nofollow noreferrer">Link</a> to data</p>
<p>Code below</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Item</th>
<th>Ingredients</th>
</tr>
</thead>
<tbody>
<tr>
<td>Coffee</td>
<td>Coffee beans</td>
</tr>
<tr>
<td></td>
<td>Milk</td>
</tr>
<tr>
<td></td>
<td>Sugar</td>
</tr>
<tr>
<td></td>
<td>Water</td>
</tr>
</tbody>
</table>
</div>
<pre><code>import pandas as pd
df = pd.read_csv(r'd:\Python\menu.csv', delimiter=';', header=None)
headers = ["Item", "Ingredients"]
df.columns = headers
df["Ingredients"]=df["Ingredients"].str.split("\n")
df = df.explode("Ingredients").reset_index(drop=True)
df.to_csv(r"D:\Python\output.csv")
</code></pre>
|
<p>Using your code and the linked data, change the delimiter to a comma as below.</p>
<pre><code>import pandas as pd
df = pd.read_csv('Inventory.csv', delimiter=',')
df["Software"]=df["Software"].str.split("\n")
df = df.explode("Software").reset_index(drop=True)
# Remove rows having empty string under Software column.
df = df[df['Software'].astype(bool)]
df = df.reset_index(drop=True)
df.to_csv("out_Inventory.csv")
print(df.to_string())
</code></pre>
<h4>Output</h4>
<pre><code> Hostname Software
0 ServerName1 Windows Driver Package - Amazon Inc. (AWSNVMe) SCSIAdapter (08/27/2019 1.3.2.53) [version 08/27/2019 1.3.2.53]
1 ServerName1 Airlock Digital Client [version 4.7.1.0]
2 ServerName1 AppFabric 1.1 for Windows Server [version 1.1.2106.32]
3 ServerName1 BlueStripe Collector [version 8.0.3]
...
</code></pre>
|
python|pandas|dataframe|csv|split
| 2
|
8,019
| 35,819,407
|
A Simple Network on TensorFlow
|
<p>I was trying to train a very simple model on TensorFlow. Model takes a single float as input and returns the probability of input being greater than 0. I used 1 hidden layer with 10 hidden units. Full code is shown below:</p>
<pre class="lang-python prettyprint-override"><code>import tensorflow as tf
import random
# Graph construction
x = tf.placeholder(tf.float32, shape = [None,1])
y_ = tf.placeholder(tf.float32, shape = [None,1])
W = tf.Variable(tf.random_uniform([1,10],0.,0.1))
b = tf.Variable(tf.random_uniform([10],0.,0.1))
layer1 = tf.nn.sigmoid( tf.add(tf.matmul(x,W), b) )
W1 = tf.Variable(tf.random_uniform([10,1],0.,0.1))
b1 = tf.Variable(tf.random_uniform([1],0.,0.1))
y = tf.nn.sigmoid( tf.add( tf.matmul(layer1,W1),b1) )
loss = tf.square(y - y_)
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
# Training
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
N = 1000
while N != 0:
batch = ([],[])
u = random.uniform(-10.0,+10.0)
if u >= 0.:
batch[0].append([u])
batch[1].append([1.0])
if u < 0.:
batch[0].append([u])
batch[1].append([0.0])
sess.run(train_step, feed_dict = {x : batch[0] , y_ : batch[1]} )
N -= 1
while(True):
u = raw_input("Give an x\n")
print sess.run(y, feed_dict = {x : [[u]]})
</code></pre>
<p>The problem is, I am getting terribly unrelated results. Model does not learn anything and returns irrelevant probabilities. I tried to adjust learning rate and change variable initialization, but I did not get anything useful. Do you have any suggestions?</p>
|
<p>You are computing only one probability; what you want is to have two classes:</p>
<ul>
<li>greater/equal than zero.</li>
<li>less than zero. </li>
</ul>
<p>So the output of the network will be a tensor of shape two that will contain the probabilities of each class. I renamed y_ in your example to <code>labels</code>:</p>
<pre><code>labels = tf.placeholder(tf.float32, shape = [None,2])
</code></pre>
<p>Next we compute the cross entropy between the result of the network and the expected classification. The classes for positive numbers would be <code>[1.0, 0]</code> and for negative numbers would be <code>[0.0, 1.0]</code>.
The loss function becomes:</p>
<pre><code>cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, labels)
loss = tf.reduce_mean(cross_entropy)
</code></pre>
<p>I renamed the <code>y</code> to <code>logits</code> as that is a more descriptive name. </p>
<p>Training this network for 10000 steps gives:</p>
<pre><code>Give an x
3.0
[[ 0.96353203 0.03686807]]
Give an x
200
[[ 0.97816485 0.02264325]]
Give an x
-20
[[ 0.12095013 0.87537241]]
</code></pre>
<p>Full code:</p>
<pre><code>import tensorflow as tf
import random
# Graph construction
x = tf.placeholder(tf.float32, shape = [None,1])
labels = tf.placeholder(tf.float32, shape = [None,2])
W = tf.Variable(tf.random_uniform([1,10],0.,0.1))
b = tf.Variable(tf.random_uniform([10],0.,0.1))
layer1 = tf.nn.sigmoid( tf.add(tf.matmul(x,W), b) )
W1 = tf.Variable(tf.random_uniform([10, 2],0.,0.1))
b1 = tf.Variable(tf.random_uniform([1],0.,0.1))
logits = tf.nn.sigmoid( tf.add( tf.matmul(layer1,W1),b1) )
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, labels)
loss = tf.reduce_mean(cross_entropy)
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
# Training
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
N = 1000
while N != 0:
batch = ([],[])
u = random.uniform(-10.0,+10.0)
if u >= 0.:
batch[0].append([u])
batch[1].append([1.0, 0.0])
if u < 0.:
batch[0].append([u])
batch[1].append([0.0, 1.0])
sess.run(train_step, feed_dict = {x : batch[0] , labels : batch[1]} )
N -= 1
while(True):
u = raw_input("Give an x\n")
print sess.run(logits, feed_dict = {x : [[u]]})
</code></pre>
|
machine-learning|tensorflow|deep-learning
| 2
|
8,020
| 37,313,818
|
TensorFlow: Dst tensor is not initialized
|
<p>The <code>MNIST For ML Beginners</code> tutorial is giving me an error when I run <code>print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))</code>. Everything else runs fine. </p>
<p>Error and trace:</p>
<pre class="lang-py prettyprint-override"><code>InternalErrorTraceback (most recent call last)
<ipython-input-16-219711f7d235> in <module>()
----> 1 print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
338 try:
339 result = self._run(None, fetches, feed_dict, options_ptr,
--> 340 run_metadata_ptr)
341 if run_metadata:
342 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
562 try:
563 results = self._do_run(handle, target_list, unique_fetches,
--> 564 feed_dict_string, options, run_metadata)
565 finally:
566 # The movers are no longer used. Delete them.
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
635 if handle is None:
636 return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
--> 637 target_list, options, run_metadata)
638 else:
639 return self._do_call(_prun_fn, self._session, handle, feed_dict,
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_call(self, fn, *args)
657 # pylint: disable=protected-access
658 raise errors._make_specific_exception(node_def, op, error_message,
--> 659 e.code)
660 # pylint: enable=protected-access
661
InternalError: Dst tensor is not initialized.
[[Node: _recv_Placeholder_3_0/_1007 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_312__recv_Placeholder_3_0", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
[[Node: Mean_1/_1011 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_319_Mean_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
</code></pre>
<p>I just switched to a more recent version of CUDA, so maybe this has something to do with that? Seems like this error is about copying a tensor to the GPU.</p>
<p>Stack: EC2 g2.8xlarge machine, Ubuntu 14.04 </p>
<p>UPDATE:</p>
<p><code>print(sess.run(accuracy, feed_dict={x: batch_xs, y_: batch_ys}))</code> runs fine. This leads me to suspect that the issue is that I'm trying to transfer a huge tensor to the GPU and it can't take it. Small tensors like a minibatch work just fine.</p>
<p>UPDATE 2:</p>
<p>I've figured out exactly how big the tensors have to be to cause this issue:</p>
<pre class="lang-py prettyprint-override"><code>batch_size = 7509 #Works.
print(sess.run(accuracy, feed_dict={x: mnist.test.images[0:batch_size], y_: mnist.test.labels[0:batch_size]}))
batch_size = 7510 #Doesn't work. Gets the Dst error.
print(sess.run(accuracy, feed_dict={x: mnist.test.images[0:batch_size], y_: mnist.test.labels[0:batch_size]}))
</code></pre>
|
<p>In short, this error message is generated when there is not enough memory to handle the batch size.</p>
<p>Expanding on <a href="https://stackoverflow.com/users/6416660/steven">Steven</a>'s link (I cannot post comments yet), here are a few tricks to monitor/control memory usage in Tensorflow:</p>
<ul>
<li>To monitor memory usage during runs, consider logging run metadata. You can then see the memory usage per node in your graph in Tensorboard. See the <a href="https://www.tensorflow.org/versions/r0.11/how_tos/graph_viz/index.html#runtime-statistics" rel="noreferrer">Tensorboard information page</a> for more information and an example of this.</li>
<li>By default, Tensorflow will try to allocate as much GPU memory as possible. You can change this using the GPUConfig options, so that Tensorflow will only allocate as much memory as needed. See the <a href="https://www.tensorflow.org/versions/r0.11/how_tos/using_gpu/index.html#allowing-gpu-memory-growth" rel="noreferrer">documentation</a> on this. There you also find an option that will allow you to only allocate a certain fraction of your GPU memory (I have found this to be broken sometimes though.).</li>
</ul>
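<p>As a minimal sketch of the second option (TF 1.x API; adjust to your setup), both settings go through the session config:</p>
<pre><code>import tensorflow as tf

config = tf.ConfigProto()
# grow GPU memory allocation on demand instead of grabbing it all up front
config.gpu_options.allow_growth = True
# or: cap the allocation at a fixed fraction of GPU memory
# config.gpu_options.per_process_gpu_memory_fraction = 0.5
sess = tf.Session(config=config)
</code></pre>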
|
tensorflow
| 36
|
8,021
| 37,223,327
|
Multithreading in Python script
|
<p>I have a few datasets in the form of numpy.arrays, vectors and dictionaries. Let's call them <em>D</em><sub><em>i</em></sub>, <em>i</em> = 1..4. Other than these, I have a csv file F1.csv that has only one column.</p>
<p>I have written a python code <strong><em>P</em></strong> that will read rows from F1.csv. For each row, using read operations on <em>D<sub>i</sub></em>, it will generate some data that has to be written to F2.csv. This code is working fine and giving expected results.</p>
<p>However, <em>D</em><sub><em>i</em></sub> are huge and <strong><em>P</em></strong> is using only one CPU core. How can I make P use two cores -- one core for the first half of F1.csv and the other for the second half?</p>
<p>My code is too complex to be written here, therefore I am giving a toy version:</p>
<pre><code># Code for generating D1
...
# Code for generating D2
...
# Code for generating D3
...
# Code for generating D4
...
# P starts
F1 = csv.reader(open('data/F1.csv'), delimiter='\t')
F2 = open('data/F2.csv', 'wb')
for row in F1:
toBeWritten = { ... some read operations on Di ... } #detailed code is given below in Edit 2
F2.write(toBeWritten)
# P ends
</code></pre>
<p>How can I modify the code between "<em># P starts</em>" and "<em># P ends</em>" so that the threads read disjoint rows from F1.csv, calculate <em>toBeWritten</em> independently, and then write to F2.csv?</p>
<p>I am new to Python multi-threading, so an answer that simply modifies my code to accomplish the task will be highly appreciated.</p>
<p>Edit 1:</p>
<p>Please note that the bottleneck is generating the <em>toBeWritten</em> corresponding to each row of F2. <em>D<sub>1</sub></em> is a 1.5M x 1.3M sparse matrix, read as scipy.sparse.lil_matrix. Generating <em>toBeWritten</em> involves adding some rows of this matrix. This addition is the real culprit!</p>
<p>Edit 2:</p>
<p>Actual code for generating <em>toBeWritten</em> is now included in the code below.</p>
<pre><code># D1 is a 1.5M x 1.3M sparse matrix, read as scipy.sparse.lil_matrix.
# D2 is a 1.5M x 111 matrix, read as numpy.array
# for row in F1:
user_id = row[0]
clust = D2[user_id, 110]
neighbors = D2[ D2[:, 110] == clust][:,1]
score = np.zeros(1300000)
for neigh in neighbors:
score = score + D1 [neigh, :] # the most expensive operation
toBeWritten = np.argsort(score)[:,::-1].A[0,:]
</code></pre>
|
<p>Not sure if threads can help you with your specific problem but here you go:</p>
<pre><code># Code for generating D1
...
# Code for generating D2
...
# Code for generating D3
...
# Code for generating D4
...
# P starts
import csv
import threading

with open('data/F1.csv', 'rb') as csv_file, open('data/F2.csv', 'wb') as F2:
    F1 = csv.reader(csv_file, delimiter='\t')
result = list()
def do_work(lines):
for line in lines:
toBeWritten = { ... some read operations on Di ... }
result.append(toBeWritten)
data = list(F1)
t0 = threading.Thread(target=do_work, args=(data[:len(data)/2], ))
t1 = threading.Thread(target=do_work, args=(data[len(data)/2:], ))
t0.start()
t1.start()
t0.join()
t1.join()
for line in result:
F2.write(line)
# P ends
</code></pre>
<p>You might want to try multiprocessing if threads don't help.</p>
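<p>A rough multiprocessing version (a sketch; <code>process()</code> is a hypothetical stand-in for the per-row reads on <em>D</em><sub><em>i</em></sub>, and the rows must be picklable) could look like:</p>
<pre><code>import csv
from multiprocessing import Pool

def process(line):
    # hypothetical stand-in for the read operations on Di
    return line

def do_work(lines):
    return [process(line) for line in lines]

if __name__ == '__main__':
    with open('data/F1.csv', 'rb') as csv_file:
        data = list(csv.reader(csv_file, delimiter='\t'))
    half = len(data) // 2
    pool = Pool(2)  # one worker per half of F1.csv
    results = pool.map(do_work, [data[:half], data[half:]])
    pool.close()
    pool.join()
    with open('data/F2.csv', 'wb') as F2:
        for chunk in results:
            for line in chunk:
                F2.write(str(line))
</code></pre>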
|
python|multithreading|python-2.7|numpy|scipy
| 1
|
8,022
| 42,084,688
|
pandas cut and apply: Unexpected behavior for series
|
<p>For the following dataframe I want to group w.r.t the freq column, bin the data and sum the count data for each bin.</p>
<p>Example data look like this</p>
<pre><code>df = pd.DataFrame({"freq":[1,2,3], "count": [10,25,3]})
print(df)
count freq
0 10 1
1 25 2
2 3 3
</code></pre>
<p>To cut the data, I use</p>
<pre><code>pd.cut(df.freq, bins=[0,1, np.infty])
</code></pre>
<p>with output</p>
<pre><code>0 (0, 1]
1 (1, inf]
2 (1, inf]
Name: freq, dtype: category
Categories (2, object): [(0, 1] < (1, inf]]
</code></pre>
<p>So everything works as expected. However, now I want to map the freq column of df onto the corresponding bins. I think this could be achieved with apply.
However, using apply in the following way</p>
<pre><code>df.freq.apply(lambda x: pd.cut(x, bins=[0,1, np.infty]))
</code></pre>
<p>yields a TypeError:</p>
<pre><code>TypeError: putmask() argument 1 must be numpy.ndarray, not numpy.int64
</code></pre>
<p>However, when I enforce that df.freq is a DataFrame</p>
<pre><code>pd.DataFrame(df.freq).apply(lambda x: pd.cut(x, bins=[0,1, np.infty]))
</code></pre>
<p>the expected output according to a mapping onto the bins is returned</p>
<pre><code> freq
0 (0, 1]
1 (1, inf]
2 (1, inf]
</code></pre>
<p>So why is the cast from Series to DataFrame necessary here? The TypeError hints that an integer was passed where an array was expected. However, checking the pandas.tile._bin_to_cut function I haven't seen where this behavior is coming from.</p>
<p>Any suggestions or is this intended?</p>
<p>btw. python 3.6 and pandas 0.19.2 are used</p>
|
<p>I think <code>apply</code> is not necessary here; you only need to <code>groupby</code> by the binned <code>Series</code> returned by the function <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html" rel="nofollow noreferrer"><code>cut</code></a>:</p>
<pre><code>print (type(pd.cut(df.freq, bins=[0,1, np.infty])))
<class 'pandas.core.series.Series'>
print (df.groupby(pd.cut(df.freq, bins=[0,1, np.infty]))['count'].sum().reset_index())
freq count
0 (0, 1] 10
1 (1, inf] 28
</code></pre>
<p>You can also assign output to new column:</p>
<pre><code>df['freq'] = pd.cut(df.freq, bins=[0,1, np.infty])
print (df)
count freq
0 10 (0, 1]
1 25 (1, inf]
2 3 (1, inf]
print (df.groupby('freq')['count'].sum().reset_index())
freq count
0 (0, 1] 10
1 (1, inf] 28
</code></pre>
<hr>
<pre><code>df = df.assign(freq=pd.cut(df.freq, bins=[0,1, np.infty]))
print (df)
count freq
0 10 (0, 1]
1 25 (1, inf]
2 3 (1, inf]
print (df.groupby('freq')['count'].sum().reset_index())
freq count
0 (0, 1] 10
1 (1, inf] 28
</code></pre>
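<p>As to why the <code>Series.apply</code> version fails: <code>Series.apply</code> passes one scalar at a time to the function, while <code>pd.cut</code> expects array-like input, so it trips over the bare integer (<code>DataFrame.apply</code> works because it passes whole columns). Wrapping the scalar in a list also works, if inefficiently (a sketch):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({"freq": [1, 2, 3], "count": [10, 25, 3]})
# wrap each scalar so pd.cut receives an array-like, then unwrap the single bin
binned = df['freq'].apply(lambda v: pd.cut([v], bins=[0, 1, np.inf])[0])
</code></pre>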
|
python|pandas
| 1
|
8,023
| 37,898,617
|
Having problems feeding data to tensorflow graph
|
<p>I am trying to adjust the MNIST2 problem in the tensorflow tutorial to train a neural network on my own images, but I am having problems feeding data to the graph.</p>
<p>My code:</p>
<pre><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os.path
import time
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import mnist
# Basic model parameters as external flags.
flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_integer('num_epochs', 2, 'Number of epochs to run trainer.')
flags.DEFINE_integer('batch_size', 100, 'Batch size.')
flags.DEFINE_string('train_dir', '/root/data', 'Directory with the training data.')
# Constants used for dealing with the files, matches convert_to_records.
TRAIN_FILE = 'train.tfrecords'
VALIDATION_FILE = 'validation.tfrecords'
# Set-up dos pacotes
sess = tf.InteractiveSession()
def read_and_decode(filename_queue):
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(
serialized_example,
dense_keys=['image_raw', 'label'],
# Defaults are not specified since both keys are required.
dense_types=[tf.string, tf.int64])
# Convert from a scalar string tensor (whose single string has
# length mnist.IMAGE_PIXELS) to a uint8 tensor with shape
# [mnist.IMAGE_PIXELS].
image = tf.decode_raw(features['image_raw'], tf.uint8)
image.set_shape([mnist.IMAGE_PIXELS])
# OPTIONAL: Could reshape into a 28x28 image and apply distortions
# here. Since we are not applying any distortions in this
# example, and the next step expects the image to be flattened
# into a vector, we don't bother.
# Convert from [0, 255] -> [-0.5, 0.5] floats.
image = tf.cast(image, tf.float32) * (1. / 255) - 0.5
# Convert label from a scalar uint8 tensor to an int32 scalar.
label = tf.cast(features['label'], tf.int32)
return image, label
def inputs(train, batch_size, num_epochs):
"""Reads input data num_epochs times.
Args:
train: Selects between the training (True) and validation (False) data.
batch_size: Number of examples per returned batch.
num_epochs: Number of times to read the input data, or 0/None to
train forever.
Returns:
A tuple (images, labels), where:
* images is a float tensor with shape [batch_size, 30,26,1]
in the range [-0.5, 0.5].
* labels is an int32 tensor with shape [batch_size] with the true label,
a number in the range [0, char letras).
Note that an tf.train.QueueRunner is added to the graph, which
must be run using e.g. tf.train.start_queue_runners().
"""
if not num_epochs: num_epochs = None
filename = os.path.join(FLAGS.train_dir,
TRAIN_FILE if train else VALIDATION_FILE)
with tf.name_scope('input'):
filename_queue = tf.train.string_input_producer(
[filename], num_epochs=num_epochs)
# Even when reading in multiple threads, share the filename
# queue.
image, label = read_and_decode(filename_queue)
# Shuffle the examples and collect them into batch_size batches.
# (Internally uses a RandomShuffleQueue.)
# We run this in two threads to avoid being a bottleneck.
images, sparse_labels = tf.train.shuffle_batch(
[image, label], batch_size=batch_size, num_threads=2,
capacity=1000 + 3 * batch_size,
# Ensures a minimum amount of shuffling of examples.
min_after_dequeue=1000)
return images, sparse_labels
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
#Variaveis
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 36])
#Layer 1
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1,28,28,1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
#Layer 2
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
#Densely Connected Layer
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
#Dropout - reduz overfitting
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
#Readout layer
W_fc2 = weight_variable([1024, 36])
b_fc2 = bias_variable([36])
y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
#Train and evaluate
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.initialize_all_variables())
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
for i in range(20000):
batch = inputs(train=True, batch_size=FLAGS.batch_size, num_epochs=FLAGS.num_epochs)
if i%100 == 0:
print (batch[0])
print (type(batch[0]))
print (tf.shape(batch[0],name=None))
a=np.reshape(batch[0],(100,784))
#batch[1]=np.reshape(batch[1],[1])
train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0})
print("step %d, training accuracy %g"%(i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
test = inputs(train=False, batch_size=2000)
print("test accuracy %g"%accuracy.eval(feed_dict={x: test[0], y_: test[1], keep_prob: 1.0}))
coord.join(threads)
sess.close()
</code></pre>
<p>The program outputs the following error:</p>
<pre><code>Traceback (most recent call last):
File "4_Treino_Rede_Neural.py", line 158, in <module>
train_accuracy = accuracy.eval(feed_dict={x:batch[0], y_: batch[1], keep_prob: 1.0})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 460, in eval
return _eval_using_default_session(self, feed_dict, self.graph, session)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2910, in _eval_using_default_session
return session.run(tensors, feed_dict)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 357, in run
np_val = np.array(subfeed_val, dtype=subfeed_t.dtype.as_numpy_dtype)
ValueError: setting an array element with a sequence.
</code></pre>
<p>I am not sure how to fix this issue.
Could anyone point me in the right direction?</p>
<p>Thanks
Marcelo V </p>
|
<p>You are trying to feed the <code>feed_dict</code> argument with TensorFlow tensors. TensorFlow then tries to convert these <code>tf.Tensor</code> to numpy arrays but cannot and returns your error.</p>
<p>As you use an input queue, you don't need a <code>feed_dict</code>.</p>
<p>Instead of:</p>
<pre class="lang-py prettyprint-override"><code>x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 36])
</code></pre>
<p>Just use:</p>
<pre class="lang-py prettyprint-override"><code>x, y_ = inputs(train=True, batch_size=FLAGS.batch_size, num_epochs=FLAGS.num_epochs)
</code></pre>
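<p>If you still want the dropout probability to be switchable without feeding it on every run, one option (a sketch using the TF 1.x API) is a placeholder with a default value:</p>
<pre class="lang-py prettyprint-override"><code># defaults to 1.0 (no dropout) unless explicitly fed
keep_prob = tf.placeholder_with_default(1.0, shape=())

# training step: override the default
sess.run(train_step, feed_dict={keep_prob: 0.5})
# evaluation: no feed_dict needed, the input queue supplies x and y_
print(sess.run(accuracy))
</code></pre>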
|
python|numpy|tensorflow|deep-learning
| 0
|
8,024
| 37,800,383
|
pandas dataframe(DatetimeIndex column) to spark dataframe (datetime format)
|
<p>I have a python pandas dataframe (pd_df) as follows: </p>
<pre><code> time count
0 2015-01-31 835
1 2015-02-28 1693
2 2015-03-31 2439
</code></pre>
<p>which I want to convert to a Spark dataframe (sp_df). When I tried</p>
<pre><code>sp_df = sqlContext.createDataFrame(pd_df)
</code></pre>
<p>The first column was returned in bigint format.</p>
<pre><code>time count
1422662400000000000 835
1425081600000000000 1693
</code></pre>
<p>I also tried the schema as follows but it didn't work either:</p>
<pre><code>from pyspark.sql.types import *
schema = StructType([
StructField("time", StringType(), True),
StructField("count", IntegerType(), True)])
sp_df = sqlContext.createDataFrame(pd_df, schema)
</code></pre>
<p>It gave me the error: </p>
<pre><code>DateType can not accept object 1422662400000000000L in type <type 'long'>
</code></pre>
<p>Can anyone suggest the right way to do it?</p>
|
<p>Adding here for anyone who had the problem of converting a pandas date column to a spark DateType and not TimeStamp. My df column, although it was a proper <code>dt.date</code> type column in the pandas dataframe, automatically converted to a Spark <code>TimeStamp</code> (which includes the hour 00:00:00). That was undesirable.</p>
<p>Eventually the solution was changing the way of creating the Pandas dataframe from:</p>
<pre><code>df = pd.DataFrame()
df['date'] = pd.to_datetime(['2019-01-01', '2019-02-02']).dt.date
</code></pre>
<p>to doing the same thing, but creating the pandas DataFrame using a dictionary</p>
<pre><code>d = {'date': pd.to_datetime(['2019-01-01', '2019-02-02']).dt.date}
df = pd.DataFrame(data=d)
</code></pre>
<p>The creation of the dataframe from a dictionary fixed the problem, and the converted Spark dataframe now had a date column and not a timestamp column.</p>
<p>It's worth adding that I've also tried to manually convert from Pandas to Spark by adding the mapping: <code>np.dtype('<M8[ns]'): DateType()</code></p>
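<p>If the column arrives as a Spark timestamp rather than a bigint, an explicit cast on the Spark side should also work (a sketch; the column name <code>time</code> is assumed):</p>
<pre><code>from pyspark.sql.functions import col

sp_df = sqlContext.createDataFrame(pd_df)
sp_df = sp_df.withColumn('time', col('time').cast('date'))
</code></pre>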
|
python|pandas|apache-spark|pyspark
| 2
|
8,025
| 64,509,819
|
PatsyError when using statsmodels for regression
|
<p>I'm using <code>ols</code> in <code>statsmodels</code> to run a regression. Once I run the regressions on each row of my dataframe, I want to retrieve the X variables from <code>patsy</code> thats used in those regressions. But, I get an error that I just cant seem to understand.</p>
<p><strong>Edit</strong>: I am trying to run a regression as presented in the <a href="https://stackoverflow.com/questions/24074481/fama-macbeth-regression-in-python-pandas-or-statsmodels">answer here</a>, but want to run the regression across each row of a grouped version of my dataframe <code>df</code>, where it is grouped by <code>Date</code>,<code>bal</code>, <code>dist</code>, <code>pay_hist</code>, <code>inc</code>, <code>bckts</code>. So I first group this data as described above and then try to run the regression on each row where <code>df</code> is grouped by <code>Date</code>: <code>df.groupby(['Date']).apply(ols_coef,'bal ~ C(dist) + C(pay_hist) + C(inc) + C(bckts)')</code></p>
<p>My code is as follows:</p>
<pre><code>from statsmodels.formula.api import ols
df = df.groupby([['Date','bal', 'dist', 'pay_hist', 'inc', 'bckts']])
######run regression
def ols_coef(x,formula):
return ols(formula,data=x).fit().params
gamma = df.groupby(['Date']).apply(ols_coef,'bal ~ C(dist) + C(pay_hist) + C(inc) + C(bckts)')
print('gamme is {}'.format(gamma))
########################
#####Now trying to retrieve the X variables in the regressions above
formula = 'bal ~ C(dist) + C(pay_hist) + C(inc) + C(bckts)'
data = df.groupby(['Date'])[['bckts', 'wac_dist', 'pay_hist', 'inc', 'bal']]
y,X = patsy.dmatrices(formula,data,return_type='dataframe')
################
</code></pre>
<p>I get the following error and am unsure how to solve it:</p>
<pre><code>patsy.PatsyError: Error evaluating factor: Exception: Column(s) ['bckts', 'dist', 'pay_hist', 'inc', 'bal'] already selected
bal ~ C(dist) + C(pay_hist) + C(inc) + C(bckts)
^^^^^^^^^^^
</code></pre>
|
<p>The problem is that you're passing a grouped dataframe into the <code>patsy.dmatrices</code> function. Since the grouped dataframe is iterable, you can do it in a loop like this, and store all of your X dataframes (one for each group) in a dictionary:</p>
<pre><code>import statsmodels.api as sm
import statsmodels.formula.api as smf
import numpy as np
import pandas as pd
import patsy
# Loading data
df = sm.datasets.get_rdataset("Guerry", "HistData").data
# Extracting Independent variables
formula = 'Suicides ~ Crime_parents + Infanticide'
data = df.groupby(['Region'])[['Suicides', 'Crime_parents', 'Infanticide', 'Region']]
X = {}
for name, group in data:
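    # note: Y is overwritten on every pass; keep a dict like X if you need it per group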
Y, X[name] = patsy.dmatrices(formula, group, return_type='dataframe')
print(X)
</code></pre>
|
python|pandas
| 1
|
8,026
| 64,294,776
|
python numpy.single gives different result when using out parameter
|
<p>I am trying to cast from a double precision array to single precision. To optimize on space I tried using the out argument so that numpy doesn't allocate additional space. However, the results differ between the two versions of the call:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
doubleArr = np.zeros((10000,10000), dtype=np.double)
doubleArr[0,0] = 1e-30
singleArr = np.single(doubleArr)
print ("%.40f"%singleArr[0,0])
singleArr = np.zeros((10000,10000), dtype=np.single)
np.single(doubleArr, out=singleArr)
print ("%.40f"%singleArr[0,0])
</code></pre>
<p>The results are</p>
<pre><code>0.0000000000000000000000000000010000000032
0.0000000000000000000000000000000000000000
</code></pre>
<p>Is my usage of the "out" parameter incorrect?</p>
|
<p>This mystery is resolved. When numpy.ctypes calls C routines, it creates a bunch of memory for C/Python interoperation. This memory doesn't get collected immediately, which results in a blow-up in total memory usage. The solution is to use gc.collect() when under memory pressure.</p>
|
python|numpy|casting|floating-point|precision
| 0
|
8,027
| 58,657,924
|
unable to convert number to words from Pandas series using num2words python library
|
<p>I am unable to make a new column in a pandas dataframe that converts a number to words using the Python num2words library; it works with simple int or float arguments but not with a Series.</p>
<p>This is what I have tried:</p>
<pre><code>data['words'] = data['Value'].apply(lambda row : num2words(row['Value']))
</code></pre>
<p>which raises:</p>
<pre><code>TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
----> 1 data['words'] = data['Value'].apply(lambda row : num2words(row['Value']))

~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\series.py in apply(self, func, convert_dtype, args, **kwds)
   4040         else:
   4041             values = self.astype(object).values
-> 4042         mapped = lib.map_infer(values, f, convert=convert_dtype)
   4043
   4044         if len(mapped) and isinstance(mapped[0], Series):

pandas\_libs\lib.pyx in pandas._libs.lib.map_infer()

<ipython-input> in <lambda>(row)
----> 1 data['words'] = data['Value'].apply(lambda row : num2words(row['Value']))

TypeError: 'float' object is not subscriptable
</code></pre>
|
<p>One way you can accomplish this is by defining a simple function that takes <code>row</code> as input and returns <code>num2words(row['Value'])</code>.</p>
<p>Then you can apply it on the DataFrame with <code>axis=1</code>.</p>
<pre><code>def f(row):
return num2words(row['Value'])
data['words'] = data.apply(f, axis=1)
</code></pre>
<p>Or you can apply directly to the column.</p>
<pre><code>data['words'] = data['Value'].apply(num2words)
</code></pre>
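<p>If the column may contain NaNs (which would explain the float complaint), a guarded version might look like this (a sketch):</p>
<pre><code>import pandas as pd
from num2words import num2words

data['words'] = data['Value'].apply(lambda v: num2words(v) if pd.notna(v) else None)
</code></pre>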
|
python|pandas|numpy
| 0
|
8,028
| 58,637,850
|
Clean and make readable bar graphs on Jupyter Notebook
|
<p>This might be petty, but how can I make the output of my bar graphs readable? Apparently I need to remove the <em>+ sign</em> on bar heights and also the <em>decimals</em> so that I remain with only whole numbers. Here is my data:</p>
<pre><code># intialise data of lists.
data = {'Hospital_name':['Jootrh Hospital', 'Jootrh Hospital', 'Embu Hospital', 'Embu Hospital','Bungoma Hospital', 'Bungoma Hospital', 'Keru Hospital', 'Keru Hospital'],
'periodname':["18-Jul", "18-Aug", "18-Jul", "18-Aug","18-Jul", "18-Aug", "18-Jul", "18-Aug"], 'normal deliveries':[452, 458, "NAN", 45,498, 466, "NAN", 450],
'caesarian sections':[67.0, 99.0, 13.0, 13.0,60.0, 19.0, 73.0, "NAN"], 'breach delivery':[10.0, "NAN", 13.0, 137.0,100.0, "NAN", "NAN" ,197.0],
'assisted vd':["NAN", "NAN", 1.0, 37.0,1.0, "NAN", 1.0, 37.0]}
# Create DataFrame
df = pd.DataFrame(data)
df
</code></pre>
<p>Here is my <strong>code</strong>, I am using jupyter notebook:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
grouped = df.groupby('Hospital_name')
ncols=1
nrows = int(np.ceil(grouped.ngroups/ncols))
fig, axes = plt.subplots(nrows=nrows, ncols=ncols,figsize=(10,40), constrained_layout=True)
x_offset = 0.02
y_offset = 0.02
for (key, ax) in zip(grouped.groups.keys(), axes.flatten()):
temp = grouped.get_group(key).replace("NAN",0).plot(kind='bar',ax=ax, title=key)
for bar in temp.patches:
b = bar.get_bbox()
val = "{:+.2f}".format(b.y1 + b.y0)
ax.annotate(val, ((b.x0 + b.x1)/2 + x_offset, b.y1 + y_offset))
ax.legend()
plt.show()
</code></pre>
<p>And here is my <strong>OUTPUT</strong>, looking messy:
<a href="https://i.stack.imgur.com/f2EWe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f2EWe.png" alt="Bar graphs organized in one column"></a></p>
<p>Can anyone assist in making my output readable? Note that I really need those bar-height numbers to be there, and the final result will be saved to a document, maybe a PDF. The <em>+ sign and decimals</em> can be removed.</p>
|
<p>Would something like this work for you? I know it is a lot of changes and it is not really in line with my comment, but that is the way I found. I also realise that you may need to tweak a bit to accommodate all the additional dates you have.</p>
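<p>For the narrow formatting ask alone, dropping the <code>+</code> sign and the decimals, changing the format spec in your original annotation line would already do it:</p>
<pre><code>val = "{:.0f}".format(b.y1 + b.y0)  # no sign, no decimal places
</code></pre>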
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
data = {'Name': ['Jootrh Hospital', 'Jootrh Hospital',
'Embu Hospital', 'Embu Hospital',
'Bungoma Hospital', 'Bungoma Hospital',
'Keru Hospital', 'Keru Hospital'],
'Date': ['18-Jul', '18-Aug', '18-Jul', '18-Aug', '18-Jul', '18-Aug',
'18-Jul', '18-Aug'],
'Norm_Del': [452, 458, np.nan, 45, 498, 466, np.nan, 450],
'Caesa_Sec': [67., 99., 13., 13., 60., 19., 73., np.nan],
'Br_Del': [10., np.nan, 13., 137., 100., np.nan, np.nan, 197.],
'Ass_VD': [np.nan, np.nan, 1., 37., 1., np.nan, 1., 37.]}
df = pd.DataFrame(data)
df2 = df.pivot_table(
values=['Norm_Del', 'Caesa_Sec', 'Br_Del', 'Ass_VD'],
index=['Name', 'Date'], fill_value=0)
df2.plot.bar(rot=45, figsize=(16, 8),
color=['xkcd:cerulean', 'xkcd:avocado', 'xkcd:silver',
'xkcd:purple'])
i = 0
for unused, rows in df2.iterrows():
print(rows['Norm_Del'])
plt.annotate(rows['Ass_VD'], xy=(i - 0.19, rows['Ass_VD'] + 5), rotation=0,
color='xkcd:cerulean', fontweight='semibold', ha='center')
plt.annotate(rows['Br_Del'], xy=(i - 0.06, rows['Br_Del'] + 5), rotation=0,
color='xkcd:avocado', fontweight='semibold', ha='center')
plt.annotate(rows['Caesa_Sec'], xy=(i + 0.06, rows['Caesa_Sec'] + 5),
rotation=0, color='xkcd:silver', fontweight='semibold',
ha='center')
plt.annotate(rows['Norm_Del'], xy=(i + 0.19, rows['Norm_Del'] + 5),
rotation=0, color='xkcd:purple', fontweight='semibold',
ha='center')
i += 1
plt.savefig('so.png', bbox_inches='tight')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/qq0Ft.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qq0Ft.png" alt="enter image description here"></a></p>
<p>EDIT:</p>
<pre><code>from datetime import datetime
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
data = {'Name': ['Jootrh Hospital', 'Jootrh Hospital',
'Embu Hospital', 'Embu Hospital',
'Bungoma Hospital', 'Bungoma Hospital',
'Keru Hospital', 'Keru Hospital'],
'Date': ['18-Jul', '18-Aug', '18-Jul', '18-Aug', '18-Jul', '18-Aug',
'18-Jul', '18-Aug'],
'Norm_Del': [452, 458, np.nan, 45, 498, 466, np.nan, 450],
'Caesa_Sec': [67., 99., 13., 13., 60., 19., 73., np.nan],
'Br_Del': [10., np.nan, 13., 137., 100., np.nan, np.nan, 197.],
'Ass_VD': [np.nan, np.nan, 1., 37., 1., np.nan, 1., 37.]}
df = pd.DataFrame(data)
df2 = df.pivot_table(
values=['Norm_Del', 'Caesa_Sec', 'Br_Del', 'Ass_VD'],
index=['Name', 'Date'], fill_value=0)
names = np.unique([x[0] for x in df2.index.values])
dates = sorted(np.unique([x[1] for x in df2.index.values]),
key=lambda day: datetime.strptime(day, '%d-%b'))
values = df2.columns.values
locLab = [-0.19, -0.06, 0.06, 0.19]
colors = ('xkcd:cerulean', 'xkcd:avocado', 'xkcd:silver', 'xkcd:purple')
fig, axs = plt.subplots(nrows=names.shape[0], figsize=(5 * len(dates),
4 * names.shape[0]))
i = 0
for name in names:
df2.loc[name].reindex(dates).plot.bar(
rot=0, ax=axs[i], title=name, color=colors)
j = 0
for date in dates:
k = 0
maxVal = np.amax(df2.loc[name].values)
for value in values:
val = df2.loc[name].loc[date][value]
axs[i].annotate(val, xy=(j + locLab[k], val + maxVal / 100),
color=colors[k], fontweight='semibold',
ha='center')
k += 1
j += 1
i += 1
plt.tight_layout()
plt.savefig('so.png', bbox_inches='tight')
</code></pre>
<p><a href="https://i.stack.imgur.com/OwagS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OwagS.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib|jupyter-notebook
| 2
|
8,029
| 58,607,667
|
I have written code in Python 2; now I want to execute it in Python 3 and I am getting an error
|
<pre><code>from sqlalchemy import create_engine
import pymysql
import pandas as pd
db='mysql+pymysql://developer:11111@192.168.1.11:3306/pos'
db_connection=create_engine(db)
df=pd.read_sql_table(table_name='product2', con=db_connection)
#df1=pd.read_sql_table(table_name='product', con=db_connection)
start_date = '2009-10-19 00:00:00'
end_date = '2010-10-19 23:59:59'
mask = (df['DateCreated'] > start_date) & (df['DateCreated'] <= end_date) #DateFilter
df = df.loc[mask]
t_type=(df['TherapyType']=='C')
df = df.loc[t_type]
df
</code></pre>
<p>I am getting this error:</p>
<pre><code>ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-5c4885ab0640> in <module>
1 #extract the data from the created table in which dataframe was inserted..
----> 2 from sqlalchemy import create_engine
3 import pymysql
4 import pandas as pd
5 db='mysql+pymysql://developer:devdev@192.168.1.44:3306/pos'
ModuleNotFoundError: No module named 'sqlalchemy'
</code></pre>
<p>I wrote this code in Python 2 and want to execute it in Python 3, but I am getting this error and cannot work out what I should do.</p>
|
<pre><code>sudo pip3 install sqlalchemy
sudo pip3 install PyMySQL
</code></pre>
<p>I installed <code>sqlalchemy</code> and <code>pymysql</code> using these two commands and now I'm not getting any error.</p>
|
python|pandas|dataframe
| 0
|
8,030
| 58,861,560
|
Rename column with same column name based on values in DataFrame
|
<p>I have a DataFrame which can contain columns with the same column name. Based on the value I want to rename the columns so there are no duplicates. I've tried a few things, but every time I try to iterate over the columns and rename them, all columns sharing the name get renamed. <code>df.rename(columns={df.columns[i]: 'some_name'})</code> seems to match by column name as well.</p>
<p>Let's say I have a dataframe; </p>
<pre><code>df = pd.DataFrame({"A": ["10kg"], "B": [4], "A": ["4%"]})
</code></pre>
<p>I would like to rename the column(s) named "A" based on the row value so that I get </p>
<pre><code> A B A%
0 10kg 4 4
</code></pre>
<p>I tried something like this:</p>
<pre><code>for i in range(0, len(df.columns)):
if 'A' in df.columns[i]:
if '%' in df.iloc[:,i].values[0]:
df = df.rename(columns={df.columns[i]: 'A_%'})
</code></pre>
<p>But this also renames the first column 'A'. Is there another way to rename it based on location?</p>
|
<p>Single list comprehension for new column names:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.concat([pd.DataFrame({"A": ['10kg'], "B": ['4']}),
pd.DataFrame({"A": ['4%']})], axis=1)
df.columns = [c + '_%'
if df.applymap(lambda x: '%' in x).any(axis=0).iloc[ic]
else c for ic, c in enumerate(df.columns)]
</code></pre>
<p>Edit -- better:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.concat([pd.DataFrame({"A": ['10kg'], "B": ['4']}),
pd.DataFrame({"A": ['4%']})], axis=1)
has_percentage = df.applymap(lambda x: '%' in x).any(axis=0)
df.columns = [c + '_%' if has_percentage.iloc[ic]
else c for ic, c in enumerate(df.columns)]
</code></pre>
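<p>One caveat: <code>applymap(lambda x: '%' in x)</code> assumes every cell is a string. If the frame can also hold numbers, a <code>str()</code> guard keeps it from raising (a sketch):</p>
<pre class="lang-py prettyprint-override"><code>has_percentage = df.applymap(lambda x: '%' in str(x)).any(axis=0)
</code></pre>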
|
python|pandas|dataframe
| 3
|
8,031
| 70,253,399
|
Write a scipy function without using a standard library (exponential power)
|
<p>My question might come across as stupid or overly simple, but I could not find a solution. Here is my question: I want to write the exponential power distribution function that is available in scipy, but without using scipy. How do I go about it?</p>
<p>Here are my efforts so far:</p>
<pre><code>import math
import numpy as np
def ExpPowerFun(x,b, size=1000):
distribution = b*x**(b-1)*math.exp(1+x**b-math.exp(x**b))
return distribution
</code></pre>
<p>I used this equation based on <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.expon.html" rel="nofollow noreferrer">this</a> scipy doc. To be fair, writing a function from this equation doesn't do much: as you can see, it returns only one value. I want to generate a distribution of random numbers based on scipy's exponential power distribution function without using scipy.
I have looked at the <code>exponpow_gen</code> class in the <a href="https://github.com/scipy/scipy/blob/master/scipy/stats/_continuous_distns.py" rel="nofollow noreferrer">github code</a>. However, it uses <code>scipy.special</code> (imported as <code>sc</code>), so it's of no use to me unless there is a workaround that avoids scipy.</p>
<p>I can't figure out how to go about it. Again, this might be a simple task, but I am stuck. Please help.</p>
|
<p>The simplest way to generate a random number for a given distribution is to use the inverse of its CDF: the PPF (percent point function) will give you the distribution you need when you apply it to uniformly distributed numbers.</p>
<p>For your case the PPF (taken directly from the scipy source code with some modifications) is:</p>
<pre class="lang-py prettyprint-override"><code>np.power(np.log(1-np.log(1-x)), 1.0/b)
</code></pre>
<p>hence you code should look like this</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def ExpPowerFun(b, size=1000):
x = np.random.rand(size)
return np.power(np.log(1-np.log(1-x)), 1.0/b)
import matplotlib.pyplot as plt
plt.hist(ExpPowerFun(2.7,10000),20)
plt.show()
</code></pre>
<p><strong>Edit:</strong> the uniform distribution has to be from 0 to 1, of course, since the probabilities run from 0% to 100%.
<a href="https://i.stack.imgur.com/saJJm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/saJJm.png" alt="enter image description here" /></a></p>
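<p>As a quick sanity check (a sketch), you can overlay the pdf from the question on a normalised histogram of the samples:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

b = 2.7
samples = ExpPowerFun(b, 100000)
xs = np.linspace(0.01, samples.max(), 200)
pdf = b * xs**(b - 1) * np.exp(1 + xs**b - np.exp(xs**b))
plt.hist(samples, bins=50, density=True)
plt.plot(xs, pdf)
plt.show()
</code></pre>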
|
python|numpy|scipy|statistics|exponential-distribution
| 0
|
8,032
| 70,281,456
|
How can I set the value of a Series at a specific index in a chainable style?
|
<p>I can't figure out how to set the value of a Series at a specific index in a chainable style.</p>
<p>For example, say I have the following dataframe:</p>
<pre><code>>>> df = pd.DataFrame({'a': [1,2,3], 'b': [0,0,0]})
>>> df
a b
0 1 0
1 2 0
2 3 0
</code></pre>
<p>If I want to change all the values of a column in a pipeline, I can use <code>pandas.DataFrame.assign()</code>:</p>
<pre><code>>>> df.assign(b=[4,5,6])
a b
0 1 4
1 2 5
2 3 6
</code></pre>
<p>...and then I can do other stuff with the dataframe on the same line, for example:</p>
<pre><code>>>> df.assign(b=[4,5,6]).mul(100)
a b
0 100 400
1 200 500
2 300 600
</code></pre>
<hr />
<p>But I can't do this for an individual value at a specific index in a Series.</p>
<pre><code>>>> s = df['a']
>>> s
0 1
1 2
2 3
Name: a, dtype: int64
</code></pre>
<p>I can, of course, just use a normal Python assignment operation using <code>=</code>:</p>
<pre><code>>>> s[1] = 9
>>> s
0 1
1 9
2 3
Name: a, dtype: int64
</code></pre>
<p>But the problems with that are:</p>
<ul>
<li>It's in-place, so it modifies my existing dataframe</li>
<li>Assignment statements using <code>=</code> are not allowed in Python lambda functions</li>
</ul>
<p>For example, what if I wanted to do this:</p>
<pre><code>>>> df.apply(lambda x: x['b', 0] = 13, axis=1)
File "<stdin>", line 1
df.apply(lambda x: x['b', 0] = 13, axis=1)
^
SyntaxError: expression cannot contain assignment, perhaps you meant "=="?
</code></pre>
<p>(I understand that there are better ways to handle that particular case, but this is just a made-up example.)</p>
<p><strong>How can I set the value at the specified index of a Series?</strong> I would like to be able to just to something like <code>s.set_value(idx, 'my_val')</code> and have it return the modified (copied) Series.</p>
|
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.where.html" rel="nofollow noreferrer"><code>pandas.Series.where()</code></a> to return a copy of the column with the item at the specified index replaced.</p>
<p>This is basically like using <code>.loc</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> df['b'].where(df['b'].index != 1, 13)
0 0
1 13
2 0
Name: b, dtype: int64
</code></pre>
<p>If you have an index that isn't a <code>RangeIndex</code> or that doesn't start from zero, you can call <code>reset_index()</code> before <code>where()</code>, which makes the comparison positional, mimicking the behavior of <code>.iloc</code> instead of <code>.loc</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> s = pd.Series({'a': 0, None: 0, True: 0})
>>> s
a 0
NaN 0
True 0
dtype: int64
>>> s.where(s.reset_index().index != 1, 13)
a 0
NaN 13
True 0
dtype: int64
</code></pre>
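<p><code>Series.mask()</code> is the complement of <code>where()</code>: it replaces values where the condition is <em>True</em>, so the first example can equivalently be written as:</p>
<pre class="lang-py prettyprint-override"><code>>>> df['b'].mask(df['b'].index == 1, 13)
0     0
1    13
2     0
Name: b, dtype: int64
</code></pre>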
|
python|pandas|method-chaining
| 1
|
8,033
| 70,243,914
|
Is there a way to obtain all rows/data from paginated json file into pandas dataframe
|
<p>Edited:</p>
<p>Here are my steps:</p>
<pre><code>url = "https://ted.europa.eu/api/v2.0/notices/search?&q=TD%3D%5B3%5D&reverseOrder=true&scope=3&sortField=PD"
# get data from url
response = requests.get(url)
# return the json data, and read the output dict keys
data = response.json()
data
</code></pre>
<p>I have obtained a json file from an api as such:</p>
<blockquote>
<pre><code>{'took': 205,
'total': 1703997,
'results': [{'AA': '1',
'AC': '2',
'BI': [],
'CY': 'MK',
'DI': '1046/2018',
'TY': '1'},
{'AA': '6',
'AC': '1',
'BI': [],
'CY': 'RS',
'DI': 'CODE_OTHERS',
'TY': '1'},
{'AA': '5',
'AC': '1',
'BI': [],
'CY': 'BE',
'DI': '1046/2018',
'TY': '1'},
...
</code></pre>
</blockquote>
<pre><code># read the output dict keys
data.keys()
</code></pre>
<blockquote>
<p>dict_keys(['took', 'total', 'results'])</p>
</blockquote>
<p>When I convert it to a pandas DataFrame:</p>
<pre><code># create dataframe from key of interest
df = pd.DataFrame(data["results"])
df.head()
</code></pre>
<p>This returned the dataframe as expected...</p>
<pre><code># count number of rows
len(df.index)
</code></pre>
<blockquote>
<p>1000</p>
</blockquote>
<p>However, I expected a total of 1703997. I'm still pondering how to fix this. Any idea how I can retrieve all of the rows?</p>
|
<p>You should specify the page in your API request. Unfortunately, the API Doc <a href="https://docs.ted.europa.eu/home/index.html" rel="nofollow noreferrer">doesn't seem to be available</a> yet.<br />
However according to this example on <a href="https://github.com/andreas-andersson/TED-api-tool/blob/4b937b4d3a004b2ce5a5b39e7fdd70b1dbf47303/ted.py#L62" rel="nofollow noreferrer">Github</a>, you should be able to get any page with <code>pageNum=</code>.<br />
You should be able to append all rows to <code>df</code> with the following:</p>
<pre><code>url = "https://ted.europa.eu/api/v2.0/notices/search?&q=TD%3D%5B3%5D&reverseOrder=true&scope=3&sortField=PD"
response = requests.get(url)
data = response.json()
df = pd.DataFrame()
i = 1
while "results" in data.keys():
df_page = pd.DataFrame(data["results"])
# print(df_page.head()) # uncomment to see page #i as df
df = df.append(df_page, ignore_index=True)
i+=1
response = requests.get(f'{url}&pageNum={i}')
data = response.json()
</code></pre>
<p>It could take a while until all pages are read, so be patient ;-)</p>
<h5>Edit:</h5>
<p>In your example the code will loop 1700+ times before getting the response <code>{'errorCode': 400, 'message': "The requested page number doesn't exist."}</code>. I don't know if your memory/the server can handle that much.
You might want to split your results into several chunks...</p>
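<p>On newer pandas versions, where <code>DataFrame.append</code> has been removed, the same loop can collect the pages in a list and concatenate once at the end (a sketch):</p>
<pre><code>pages = []
while "results" in data.keys():
    pages.append(pd.DataFrame(data["results"]))
    i += 1
    response = requests.get(f'{url}&pageNum={i}')
    data = response.json()
df = pd.concat(pages, ignore_index=True)
</code></pre>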
|
python|json|pandas|api|pyspark
| 0
|
8,034
| 56,228,711
|
Error on tensorflow: Shape must be rank 2 but is rank 1 for 'MatMul_25'
|
<p>I'm trying to create a conditional GAN. However, I'm stuck on why, no matter what I do, the same error appears over and over again.
Here's the code:</p>
<pre><code>image_dim = 784 #28 * 28
Y_dimension = 10
gen_hidd_dim = 256
disc_hidd_dim = 256
z_noise_dim =100 #input noise datapoint
def xavier_init(shape):
return tf.random_normal(shape = shape, stddev = 1/tf.sqrt(shape[0]/2.0))
weights = {
'disc_H' : tf.Variable(xavier_init([image_dim + Y_dimension, disc_hidd_dim])),
'disc_final' : tf.Variable(xavier_init([disc_hidd_dim, 1])),
'gen_H': tf.Variable([z_noise_dim + Y_dimension, gen_hidd_dim]),
'gen_final': tf.Variable(xavier_init([gen_hidd_dim, image_dim]))
}
bias = {
'disc_H': tf.Variable(xavier_init([disc_hidd_dim])),
'disc_final': tf.Variable(xavier_init([1])),
'gen_H': tf.Variable(xavier_init([gen_hidd_dim])),
'gen_final': tf.Variable(xavier_init([image_dim]))
}
Z_input = tf.placeholder(tf.float32, shape= [None, z_noise_dim ], name = 'input_noise')
Y_input = tf.placeholder(tf.float32, shape= [None, Y_dimension], name='Labels')
X_input = tf.placeholder(tf.float32, shape=[None, image_dim], name = 'real_input')
def Discriminator(x,y):
inputs = tf.concat(axis = 1, values = [x,y])
hidden_layer = tf.nn.relu(tf.add(tf.matmul(inputs, weights['disc_H']), bias['disc_H']))
final_layer = tf.add(tf.matmul(hidden_layer, weights['disc_final']), bias['disc_final'])
disc_output = tf.nn.sigmoid(final_layer)
return final_layer, disc_output
def Generator(x,y):
inputs = tf.concat(axis=1, values=[x,y])
hidden_layer = tf.nn.relu(tf.add(tf.matmul(tf.cast(inputs, tf.float32), tf.cast(weights['gen_H'], tf.float32)), tf.cast(bias['gen_H'],tf.float32)))
final_layer = tf.add(tf.matmul(hidden_layer, weights['gen_final']), bias['gen_final'])
gen_output = tf.nn.sigmoid(final_layer)
return gen_output
output_Gen = Generator(Z_input, Y_input)
</code></pre>
<p>Right after executing the Generator I get the following error:</p>
<pre><code> ValueError: Shape must be rank 2 but is rank 1 for 'MatMul_25' (op: 'MatMul') with input shapes: [?,110], [2].
</code></pre>
<p>What to do?</p>
|
<p>I think you just missed one call to <code>xavier_init()</code> when initialising your weights.</p>
<p>You have this:</p>
<pre><code>weights = {
'disc_H' : tf.Variable(xavier_init([image_dim + Y_dimension, disc_hidd_dim])),
'disc_final' : tf.Variable(xavier_init([disc_hidd_dim, 1])),
'gen_H': tf.Variable([z_noise_dim + Y_dimension, gen_hidd_dim]),
'gen_final': tf.Variable(xavier_init([gen_hidd_dim, image_dim]))
}
</code></pre>
<p>but I think you want this:</p>
<pre><code>weights = {
'disc_H' : tf.Variable(xavier_init([image_dim + Y_dimension, disc_hidd_dim])),
'disc_final' : tf.Variable(xavier_init([disc_hidd_dim, 1])),
'gen_H': tf.Variable(xavier_init([z_noise_dim + Y_dimension, gen_hidd_dim])),
'gen_final': tf.Variable(xavier_init([gen_hidd_dim, image_dim]))
}
</code></pre>
<p>The error message was because <code>weights['gen_H']</code> had shape <code>[2]</code> whereas you expected it to have shape <code>[110, 256]</code>. This meant that the call to <code>tf.matmul()</code> failed, because it's impossible to matrix-multiply a matrix of shape <code>[m, 110]</code> by a matrix of shape <code>[2]</code>.</p>
|
tensorflow|deep-learning
| 0
|
8,035
| 56,021,933
|
What is the fastest way to check whether a cell contains letters?
|
<p>I have a dataset with 2.6 million rows in which I have one column called <code>msgText</code>, which contains written messages.</p>
<p>Now, I want to filter out all messages that don't contain any letters. To do so I found the following code:</p>
<pre><code>dataset = dataset[dataset['msgText'].astype(str).str.contains('[A-Za-z]')]
</code></pre>
<p>However, after 16 hours the code is still running. </p>
<p>Furthermore, based on <a href="https://stackoverflow.com/questions/3437059/does-python-have-a-string-contains-substring-method">Does Python have a string 'contains' substring method?</a> I thought about creating a list of length 26, that contains all the letters in the alphabet and then check whether each cell contains that letter. But that does not seem efficient either. </p>
<p>Therefore, I am wondering if there is a faster way to find whether a cell contains letters. </p>
<hr>
<p>EDIT: The code above works pretty well. Apparently, what I had in my (slow) code was: <code>dataset['msgText'] = dataset[dataset['msgText'].astype(str).str.contains('[A-Za-z]')]</code></p>
|
<pre><code>import re

letters = re.compile('[A-Za-z]')  # str.find() matches literal text, so use a regex
dataset = dataset[dataset['msgText'].astype(str).apply(lambda x: bool(letters.search(x)))]
</code></pre>
|
python|pandas|contains
| 2
|
8,036
| 56,394,146
|
Add string prefix and suffix to dataframe column
|
<p>I want to add a string prefix and a string suffix to all values of a column of my dataframe.
At the beginning I want to add <em>r'\b</em>,
and at the end I want to add <em>\b</em>.
I tried the line below, but it doesn't give what I want.
A sample of my dataframe is:
<a href="https://i.stack.imgur.com/BStA3.png" rel="nofollow noreferrer">enter image description here</a></p>
<pre><code> df['col'] = r'r\b' + df['col'].astype(str) + r'\b'
</code></pre>
|
<p>This should be the thing you seek:</p>
<pre><code>df['col'] = '\\b' + df['col'] + '\\b'
</code></pre>
<p>The <code>r''</code> prefix is only notation marking the string as <em>raw</em>; it is not part of the string itself.</p>
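<p>Equivalently, with raw strings (the two spellings produce the same characters):</p>
<pre><code>df['col'] = r'\b' + df['col'] + r'\b'
</code></pre>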
|
python-3.x|pandas|dataframe
| 0
|
8,037
| 56,378,206
|
Why is multiplying a matrix by its inverse with numpy not producing the identity matrix?
|
<p>I'm multiplying a matrix by its inverse and not getting an identity matrix in return. My suspicion is there's an issue with floating point rounding (or a lack thereof, if the original matrix entries are just ints?). All help is appreciated.</p>
<pre><code>C = np.array([[5,5,5],[4,5,6],[7,8,9]])
print("Original matrix")
print(C)
print("Inverse matrix")
D = np.linalg.inv(C)
print(D)
print("Identity matrix")
print((C.dot(D)))
</code></pre>
<pre><code>Original matrix
[[5 5 5]
[4 5 6]
[7 8 9]]
Inverse matrix
[[-6.75539944e+14 -1.12589991e+15 1.12589991e+15]
[ 1.35107989e+15 2.25179981e+15 -2.25179981e+15]
[-6.75539944e+14 -1.12589991e+15 1.12589991e+15]]
Identity matrix
[[ 0.5 -2. 1.75]
[ 0. 0. 0.5 ]
[ 0.5 0. 2.75]]
</code></pre>
|
<p>One of the properties of matrices says that a matrix only has an inverse if its determinant is nonzero. Your matrix C has determinant zero, so it does not have an inverse. NumPy still produces a result because, due to floating-point rounding, the determinant it computes is not exactly zero but a tiny approximation of zero.</p>
<pre><code>>>> np.linalg.det(C)
4.440892098500603e-15
</code></pre>
<p>In practice, a value this small should be treated as zero, so the "inverse" NumPy returns is meaningless.</p>
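<p>A practical guard (a sketch; <code>C</code> is the matrix from the question) is to check the determinant first, or fall back to a pseudo-inverse:</p>
<pre><code>import numpy as np

if np.isclose(np.linalg.det(C), 0):
    D = np.linalg.pinv(C)  # Moore-Penrose pseudo-inverse instead of a true inverse
</code></pre>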
|
python|numpy|matrix
| 2
|
8,038
| 55,850,304
|
How to pass a Python pandas function as a variable/parameter in another function?
|
<pre><code>import pandas as pd
def func_sum(df, cols, col):
res = df[cols].groupby(col).sum()
return res
def func_count(df, cols,col):
res = df[cols].groupby(col).count()
return res
def func_ave(df, cols, col):
res = df[cols].groupby(col).mean()
return res
</code></pre>
<p>The way I did to combine these three functions is like below, which is not very elegant.</p>
<pre><code>def func(df, cols, col, method):
if method == 'sum':
return df[cols].groupby(col).sum()
if method == 'count':
return df[cols].groupby(col).count()
if method == 'mean':
return df[cols].groupby(col).mean()
</code></pre>
<p>I wonder if there is a better way to do this without using if/else statements.
How can I pass the function (sum, count, or mean) as a variable and then call the passed function inside the main 'func' function?</p>
<p>I would greatly appreciate any suggestions. </p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.core.groupby.DataFrameGroupBy.agg.html" rel="nofollow noreferrer"><code>groupby.agg</code></a>, which takes one or more functions by reference or by name.</p>
<p>For example:</p>
<pre><code>def func(df, cols, col, method):
return df[cols].groupby(col).agg(method)
func(df, cols, col, pd.Series.sum)
func(df, cols, col, 'count')
func(df, cols, col, np.mean)
</code></pre>
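<p>As a side note, <code>agg</code> also accepts a list of functions if you ever need all three at once (a sketch):</p>
<pre><code>df[cols].groupby(col).agg(['sum', 'count', 'mean'])
</code></pre>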
|
python|pandas
| 1
|
8,039
| 55,578,525
|
Exit loop in python if SQL query doesn't bring any data
|
<p>I am new to Python and have been given a task to download data from different databases (MS SQL and Teradata). The logic behind my code is as follows:</p>
<ol>
<li>The code picks up data for each vendor from an Excel file.</li>
<li>From that list it loops through all the vendors and gives out a list of documents.</li>
<li>Then I use the list downloaded in step 2 to download data from Teradata and append it to a final dataset.</li>
</ol>
<p>My question is: if the data in the second step is blank, the while loop becomes infinite. Is there any way to exit it and still execute the remaining iterations?</p>
<pre><code>import pyodbc
import pandas as pd

VendNum = pd.ExcelFile(r"C:\desktop\VendorNumber.xlsx").parse('Sheet3',
                       dtype=str)
VendNum['Vend_Num'] = VendNum['Vend_Num'].astype(str).str.pad(10,
                       side='left', fillchar='0')
fDataSet = pd.DataFrame()
MSSQLconn = pyodbc.connect(r'Driver={SQL Server Native Client 11.0};Server=Servername;Database=DBName;Trusted_Connection=yes;')
TDconn = pyodbc.connect(r"DSN=Teradata;DBCNAME=DBname;UID=User;PWD=password;", autocommit=True)

for index, row in VendNum.iterrows():
    DocNum = pd.DataFrame()
    if index > len(VendNum["Vend_Num"]):
        break
    while DocNum.size == 0:
        print("Read SQL " + row["Vend_Num"])
        DocNum = pd.read_sql_query("select Col1 from Table11 where Col2 = '" + row["Vend_Num"] + "' and Col3 = 'ABC'", MSSQLconn)
        print("Execute SQL " + row["Vend_Num"])
        if DocNum.size > 0:
            print(row["Vend_Num"])
            dataList = ""
            dfToList = DocNum['Col1'].tolist()
            for i in dfToList:
                dataList += "'" + i + "'" + ","
            dataList = dataList[0:-1]
            DataSet = pd.read_sql("Some SQl statement which works fine", TDconn)
            fDataSet = fDataSet.append(DataSet)

MSSQLconn.close()
TDconn.close()
</code></pre>
<p>The expected output is to append fDataSet with each iteration of the code, but when a blank DataFrame (DocNum) comes back, the while loop doesn't exit.</p>
|
<p>When you are using system resources you should use a context manager, so they are closed automatically:</p>
<pre class="lang-py prettyprint-override"><code>with open(...):
</code></pre>
|
python|pandas|pandasql
| 0
|
8,040
| 55,755,675
|
Plotting if data is is available at any one time for each station, single plot
|
<p>As the title suggests, I would like to plot data availability at any one time for each station. The plot can be thought of as a map or scatter plot where the station number and time are the coordinates: a vertical line is drawn where there is data (i.e. floats/integers) and white space is left where data is missing (i.e. NaNs). The temporal resolution is daily.</p>
<p>It would be similar to the plot at the end of the post, which is from the output of an R package, 'Climatol' (homogen function).</p>
<p>I would like to know if there is a similar way of plotting in Python. I preferably don't want to use the R package, as it does more than just the plot and hence would take many hours for thousands of stations' data.</p>
<p>Some sample data (daily time series) for each station would look like this:</p>
<pre><code>station1 = pd.DataFrame(pd.np.random.rand(100, 1)).set_index(pd.date_range(start = '2000/01/01', periods = 100))
station2 = pd.DataFrame(pd.np.random.rand(200, 1)).set_index(pd.date_range(start = '2000/03/01', periods = 200))
station3 = pd.DataFrame(pd.np.random.rand(300, 1)).set_index(pd.date_range(start = '2000/06/01', periods = 300))
station4 = pd.DataFrame(pd.np.random.rand(50, 1)).set_index(pd.date_range(start = '2000/09/01', periods = 50))
station5 = pd.DataFrame(pd.np.random.rand(340, 1)).set_index(pd.date_range(start = '2000/01/01', periods = 340))
</code></pre>
<p>Real sample data: <a href="https://drive.google.com/drive/folders/15PwpWIh13tyOyzFUTiE9LgrxUMm-9gh6?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/drive/folders/15PwpWIh13tyOyzFUTiE9LgrxUMm-9gh6?usp=sharing</a>.
Code to open the files for two stations:</p>
<pre><code>import pandas as pd
import numpy as np
df1 = pd.read_csv('wgenf - 2019-04-17T012724.318.genform1_proc',skiprows = 8,delimiter = ' ')
df1.drop(df1.tail(6).index,inplace=True)
df1 = df1.iloc[:,[1,3]]
df1.iloc[:,1].replace('-',np.nan,inplace=True)
df1 = df1.dropna()
df1['Date(NZST)'] = pd.to_datetime(df1.iloc[:,0],format = "%Y %m %d")
df1 = df1.set_index('Date(NZST)')
df2 = pd.read_csv('wgenf - 2019-04-17T012830.116.genform1_proc',skiprows = 8,delimiter = ' ')
df2.drop(df2.tail(6).index,inplace=True)
df2 = df2.iloc[:,[1,3]]
df2.iloc[:,1].replace('-',np.nan,inplace=True)
df2 = df2.dropna()
df2['Date(NZST)'] = pd.to_datetime(df2.iloc[:,0],format = "%Y %m %d")
df2 = df2.set_index('Date(NZST)')
</code></pre>
<p><a href="https://i.stack.imgur.com/W5Sxk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W5Sxk.png" alt="enter image description here"></a></p>
<p>Expanding Asmus's code (Answer below) for multiple stations </p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import glob as glob
start = '1900/01/01'
end = '2018/12/31'
counter = 0
filenames = glob.glob('data/temperature/*.genform1_proc')
for filename in filenames:
with open(filename, newline='') as f:
### read the csv file with pandas, using the correct tab delimiter
df1 = pd.read_csv(f,skiprows = 8,delimiter = '\t',)
df1.drop(df1.tail(8).index,inplace=True)
### replace invalid '-' with useable np.nan (not a number)
df1.replace('-',np.nan,inplace=True)
df1['Date(NZST)'] = pd.to_datetime(df1['Date(NZST)'],format = "%Y %m %d")
df1 = df1.set_index('Date(NZST)',drop=False)
### To make sure that we have data on all dates:
# create a new index, based on the old range, but daily frequency
idx = pd.date_range(start,end,freq="D")
df1=df1.reindex(idx, fill_value=np.nan)
### Make sure interesting data fields are numeric (i.e. floats)
df1["Tmax(C)"]=pd.to_numeric(df1["Tmax(C)"])
### Create masks for
# valid data: has both date and temperature
valid_mask= df1['Tmax(C)'].notnull()
### decide where to plot the line in y space,
ys=[counter for v in df1['Tmax(C)'][valid_mask].values]
plt.scatter(df1.index[valid_mask].values,ys,s=30,marker="|",color="g")
plt.show()
counter +=1
</code></pre>
<p>The code above currently produces the plot below.</p>
<p><a href="https://i.stack.imgur.com/NVXOc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NVXOc.png" alt="enter image description here"></a></p>
|
<p><strong>Updated</strong>: I have updated this answer according to the comments</p>
<p>Ok, so first of all, your input data is a bit messed up, with the delimiter actually being tabs (<code>'\t'</code>) and the first column rather ending in <code>,</code> instead. </p>
<p>Important steps:</p>
<ul>
<li>take care of cleanup first, replacing <code>,</code> with <code>\t</code>, and thus ensuring that the column headers are properly read as <code>df.keys()</code>. While you may think it's not important, try to keep things clean! :-)</li>
<li>the index column 'Date(NZST)' is kept as a column, and a new index column is created (<code>idx</code>) that contains <em>all days</em> in the given range, since some days are missing in the original data. </li>
<li>make sure that the relevant keys/columns are in their appropriate type, e.g. 'Tmax(C)' should be a float. </li>
<li>finally, you can use <code>.notnull()</code> to get only <em>valid</em> data, but make sure that <em>both</em> date and temperature are present! This is stored as <code>valid_mask</code> for ease of use</li>
</ul>
<p>In the end, I plotted the data, using green, vertical lines as markers for "valid" measurements, and the same in red for invalid data. See figure.
Now you only need to run this for all stations.
Hope this helps!</p>
<p><a href="https://i.stack.imgur.com/bNjMC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bNjMC.png" alt="sample plot of valid / invalid data"></a></p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from io import StringIO
import re
fpath='./wgenf - 2019-04-17T012537.711.genform1_proc'
### cleanup the input file
for_pd = StringIO()
with open(fpath) as fi:
for line in fi:
new_line = re.sub(r',', '\t', line.rstrip(),)
print (new_line, file=for_pd)
for_pd.seek(0)
### read the csv file with pandas, using the correct tab delimiter
df1 = pd.read_csv(for_pd,skiprows = 8,delimiter = '\t',)
df1.drop(df1.tail(6).index,inplace=True)
### replace invalid '-' with useable np.nan (not a number)
df1.replace('-',np.nan,inplace=True)
df1['Date(NZST)'] = pd.to_datetime(df1['Date(NZST)'],format = "%Y %m %d")
df1 = df1.set_index('Date(NZST)',drop=False)
### To make sure that we have data on all dates:
# create a new index, based on the old range, but daily frequency
idx = pd.date_range(df1.index.min(), df1.index.max(),freq="D")
df1=df1.reindex(idx, fill_value=np.nan)
### Make sure interesting data fields are numeric (i.e. floats)
df1["Tmax(C)"]=pd.to_numeric(df1["Tmax(C)"])
df1["Station"]=pd.to_numeric(df1["Station"])
### Create masks for
# invalid data: has no date, or no temperature
# valid data: has both date and temperature
valid_mask=( (df1['Date(NZST)'].notnull()) & (df1['Tmax(C)'].notnull()))
na_mask=( (df1['Date(NZST)'].isnull()) & (df1['Tmax(C)'].isnull()))
### Make the plot
fig,ax=plt.subplots()
### decide where to plot the line in y space, here: "1"
ys=[1 for v in df1['Station'][valid_mask].values]
### and plot the data, using a green, vertical line as marker
ax.scatter(df1.index[valid_mask].values,ys,s=10**2,marker="|",color="g")
### potentially: also plot the missing data, using a re, vertical line as marker at y=0.9
yerr=[0.9 for v in df1['Station'][na_mask].values]
ax.scatter(df1.index[na_mask].values,yerr,s=10**2,marker="|",color="r")
### set some limits on the y-axis
ax.set_ylim(0,2)
plt.show()
</code></pre>
|
python|r|pandas|matplotlib
| 1
|
8,041
| 64,729,078
|
How to generate a dataframe from lists of separate variables
|
<p>I have asked the same question here: <a href="https://stackoverflow.com/questions/64723083/convert-lists-from-separate-variables-into-a-dataframe">Convert lists from separate variables into a dataframe</a> which was closed.</p>
<p>The suggestions provided do not answer my question because what I have is not a list of lists but lists from separate variables, as below.</p>
<pre><code>a=[1.4, 1.3]
b=[0.8, 0.8]
c=[2.4, 1.6]
d=[3.6, 2.9]
e=[2.8, 2.5]
</code></pre>
<p>How can I convert the above separate lists to a pandas dataframe to get the below output</p>
<pre><code>column_1,column_2
1.4, 1.3
0.8, 0.8
2.4, 1.6
3.6, 2.9
2.8, 2.5
</code></pre>
<p><strong>[CORRECTION]</strong></p>
<p>Actually the lists that I provided above are not individual lists as previously mentioned, but rather the output of 2 functions, mean_function() and stddev_function(), which I put in a list as below:</p>
<p><code>feat = [mean_function(), stddev_function()]</code>, and the output of feat gives the lists:</p>
<pre><code>[1.4, 1.3]
[0.8, 0.8]
[2.4, 1.6]
[3.6, 2.9]
[2.8, 2.5]
</code></pre>
<p>So what I need is to first convert the sequence of lists (from variable <em><strong>feat</strong></em>) to individual lists as below and then convert them to a dataframe:</p>
<pre><code>a=[1.4, 1.3]
b=[0.8, 0.8]
c=[2.4, 1.6]
d=[3.6, 2.9]
e=[2.8, 2.5]
</code></pre>
<p>Sorry for the inaccurate details in the beginning.</p>
|
<p>Pass the lists as rows to the <code>DataFrame</code> constructor; <code>df.to_csv()</code> then reproduces the desired output:</p>
<pre><code>import pandas as pd
a=[1.4, 1.3]
b=[0.8, 0.8]
c=[2.4, 1.6]
d=[3.6, 2.9]
e=[2.8, 2.5]
rows = [a, b, c, d, e]
df = pd.DataFrame(rows, columns=['column_1', 'column_2'])
print(df.to_csv(index=False))
</code></pre>
<p>Output:</p>
<pre><code>column_1,column_2
1.4,1.3
0.8,0.8
2.4,1.6
3.6,2.9
2.8,2.5
</code></pre>
|
python|pandas|dataframe
| 2
|
8,042
| 64,770,251
|
Pandas Merge Multiple Columns
|
<p>I am struggling to merge two pandas dataframes to replicate a vlookup function using two columns as lookup value.</p>
<p>The first dataframe df has 6 columns including three columns: perf, ticker and date. The perf column is empty and this is the one I would like to see populated. The second dataframe u includes the same three columns, including values in the perf column but only for a specific date.</p>
<p>I have tried this:
df=pd.merge(df,u,how='left',on=['ticker_and_exch_code', 'date'])</p>
<p>But the result I get is a dataframe with new perf columns instead of populating the one existing perf column. Would really appreciate insights into what I am missing, thanks!</p>
<p>Vincent</p>
|
<p>If the <code>'perf'</code> column is empty in the first DataFrame, may I suggest removing it before merging the two DataFrames?</p>
<pre><code>df=pd.merge(
df.drop(columns='perf'),
u,
how='left',
on=['ticker_and_exch_code', 'date'],
)
</code></pre>
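<p>If <code>'perf'</code> were only partially empty and you wanted to keep the existing values, a sketch using merge suffixes plus <code>fillna</code> (assuming the same two key columns) could look like:</p>
<pre><code>merged = df.merge(u, on=['ticker_and_exch_code', 'date'], how='left',
                  suffixes=('', '_u'))
merged['perf'] = merged['perf'].fillna(merged['perf_u'])  # keep df's value where present
merged = merged.drop(columns='perf_u')
</code></pre>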
|
python|pandas|dataframe|merge
| 1
|
8,043
| 40,058,912
|
randomly controlling the percentage of non zero values in a matrix using python
|
<p>I am looking to create matrices with different levels of sparsity. I intend to do that by converting all the values that are nonzero in the data matrix to 1's, leaving the remaining entries as 0.</p>
<p>I was able to achieve that using the following code. But I am not sure how I would be able to randomly flip some of the 1's to 0's in the final matrix while controlling the percentage of 1's.</p>
<p>For eg:</p>
<p>the numpy.random.choice </p>
<blockquote>
<p>numpy.random.choice([0, 1], size=data_shape, p=[0.75, 0.25])</p>
</blockquote>
<p>enables us to create matrices with control over the percentage of 1's. How do I control the percentage of 1's in a similar way in the final matrix?</p>
<pre><code>import numpy as np
import scipy.sparse as sp
import numpy.ma as ma
indptr = np.array([0, 2, 3, 6])
indices = np.array([0, 2, 2, 0, 1, 2])
data = np.array([1, 2, 3, 4, 5, 6])
matrix = sp.csr_matrix((data, indices, indptr), shape=(3, 3)).toarray()
print(matrix)
mask = ma.masked_greater(matrix, 0)
print(mask)
print(mask.mask)
matrix2 = mask.mask
int_matrix = matrix2.astype(int)
print(int_matrix)
</code></pre>
<p>Output:</p>
<pre><code>Data matrix:
[[1 0 2]
[0 0 3]
[4 5 6]]
Masked matrix:
[[-- 0 --]
[0 0 --]
[-- -- --]]
Masked values:
[[ True False True]
[False False True]
[ True True True]]
Final matrix
[[1 0 1]
[0 0 1]
[1 1 1]]
</code></pre>
<p>Thanks for the help!!!</p>
|
<p>You could do something like this -</p>
<pre><code>idx = np.flatnonzero(a)
N = np.count_nonzero(a!=0) - int(round(0.25*a.size))
np.put(a,np.random.choice(idx,size=N,replace=False),0)
</code></pre>
<p><strong>Sample run</strong></p>
<p>1) Input array :</p>
<pre><code>In [259]: a
Out[259]:
array([[0, 1, 0, 1, 1],
[0, 1, 1, 0, 1],
[1, 1, 0, 0, 0],
[1, 0, 1, 1, 0]])
</code></pre>
<p>2) Get the non-zero indices :</p>
<pre><code>In [260]: idx = np.flatnonzero(a)
</code></pre>
<p>3) Get the number of non-zeros to be set as zeros :</p>
<pre><code>In [261]: N = np.count_nonzero(a!=0) - int(round(0.25*a.size))
</code></pre>
<p>4) Finally we select N randomly chosen indices from idx and set those in <code>a</code> as zeros :</p>
<pre><code>In [262]: np.put(a,np.random.choice(idx,size=N,replace=False),0)
</code></pre>
<p>5) Verify array -</p>
<pre><code>In [263]: a
Out[263]:
array([[0, 0, 0, 1, 0],
[0, 1, 0, 0, 0],
[1, 0, 0, 0, 0],
[1, 0, 0, 1, 0]])
</code></pre>
<p>6) Finally, we see the percentage of non-zeros and verify it to be <code>25%</code> :</p>
<pre><code>In [264]: np.count_nonzero(a!=0)/float(a.size)
Out[264]: 0.25
</code></pre>
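<p>The reverse direction also works: build a 0/1 mask with an <em>exact</em> fraction of ones up front (a quick sketch):</p>
<pre><code>flat = np.zeros(a.size, dtype=int)
flat[:int(round(0.25 * a.size))] = 1   # exactly 25% ones
np.random.shuffle(flat)                # scatter them randomly
mask = flat.reshape(a.shape)
</code></pre>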
|
python|python-3.x|numpy|matrix|scipy
| 0
|
8,044
| 39,966,543
|
ValueError: If using all scalar values, you must pass an index
|
<p>Take the following code:</p>
<pre class="lang-py prettyprint-override"><code>import MySQLdb as mdb
import pandas as pd
con = mdb.connect(db_host, db_user, db_pass, db_name)
query = """SELECT `TIME`.`BID-CLOSE`
FROM `EUR-USD`.`tbl_EUR-USD_1-Day`
WHERE TIME >= '2006-12-15 22:00:00' AND TIME <= '2007-01-03 22:00:00'
ORDER BY TIME ASC;"""
# Create a pandas dataframe from the SQL query
eurusd = pd.read_sql_query(query, con=con, index_col='TIME')
idx = pd.date_range('2006-12-17 22:00:00', '2007-01-03 22:00:00')
eurusd.reindex(idx, fill_value=None)
</code></pre>
<p>This gives an output of </p>
<pre><code> BID-CLOSE
2006-12-17 22:00:00 1.30971
2006-12-18 22:00:00 1.31971
2006-12-19 22:00:00 1.31721
2006-12-20 22:00:00 1.31771
2006-12-21 22:00:00 1.31411
2006-12-22 22:00:00 NaN
2006-12-23 22:00:00 NaN
2006-12-24 22:00:00 NaN
2006-12-25 22:00:00 1.30971
2006-12-26 22:00:00 1.31131
2006-12-27 22:00:00 1.31491
2006-12-28 22:00:00 1.32021
2006-12-29 22:00:00 NaN
2006-12-30 22:00:00 NaN
2006-12-31 22:00:00 1.32731
2007-01-01 22:00:00 1.32731
2007-01-02 22:00:00 1.31701
2007-01-03 22:00:00 1.30831
</code></pre>
<p>Re-Index the data</p>
<pre class="lang-py prettyprint-override"><code>eurusd = eurusd.reindex(idx, fill_value=None)
</code></pre>
<p>List of interpolate types</p>
<pre class="lang-py prettyprint-override"><code>methods = ['linear', 'quadratic', 'cubic']
</code></pre>
<p>Next line throws an Exception...</p>
<pre class="lang-py prettyprint-override"><code>pd.DataFrame({m: eurusd.interpolate(method=m) for m in methods})
</code></pre>
<pre><code>ValueError: If using all scalar values, you must pass an index
</code></pre>
<p>I am following the Interpolation section of this guide: <a href="http://pandas.pydata.org/pandas-docs/stable/missing_data.html" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/stable/missing_data.html</a>.
How do I correctly 'pass an index' in this situation?</p>
<p>Update 1</p>
<p>The output of <code>eurusd.interpolate('linear')</code></p>
<pre><code> BID-CLOSE
2006-12-17 22:00:00 1.309710
2006-12-18 22:00:00 1.319710
2006-12-19 22:00:00 1.317210
2006-12-20 22:00:00 1.317710
2006-12-21 22:00:00 1.314110
2006-12-22 22:00:00 1.313010
2006-12-23 22:00:00 1.311910
2006-12-24 22:00:00 1.310810
2006-12-25 22:00:00 1.309710
2006-12-26 22:00:00 1.311310
2006-12-27 22:00:00 1.314910
2006-12-28 22:00:00 1.320210
2006-12-29 22:00:00 1.322577
2006-12-30 22:00:00 1.324943
2006-12-31 22:00:00 1.327310
2007-01-01 22:00:00 1.327310
2007-01-02 22:00:00 1.317010
2007-01-03 22:00:00 1.308310
</code></pre>
<p>Update 2</p>
<pre><code>In[9]: pd.DataFrame({m: eurusd['BID-CLOSE'].interpolate(method=m) for m in methods})
Out[9]:
cubic linear quadratic
2006-12-17 22:00:00 1.309710 1.309710 1.309710
2006-12-18 22:00:00 1.319710 1.319710 1.319710
2006-12-19 22:00:00 1.317210 1.317210 1.317210
2006-12-20 22:00:00 1.317710 1.317710 1.317710
2006-12-21 22:00:00 1.314110 1.314110 1.314110
2006-12-22 22:00:00 1.310762 1.313010 1.307947
2006-12-23 22:00:00 1.309191 1.311910 1.305159
2006-12-24 22:00:00 1.308980 1.310810 1.305747
2006-12-25 22:00:00 1.309710 1.309710 1.309710
2006-12-26 22:00:00 1.311310 1.311310 1.311310
2006-12-27 22:00:00 1.314910 1.314910 1.314910
2006-12-28 22:00:00 1.320210 1.320210 1.320210
2006-12-29 22:00:00 1.323674 1.322577 1.321632
2006-12-30 22:00:00 1.325553 1.324943 1.323998
2006-12-31 22:00:00 1.327310 1.327310 1.327310
2007-01-01 22:00:00 1.327310 1.327310 1.327310
2007-01-02 22:00:00 1.317010 1.317010 1.317010
2007-01-03 22:00:00 1.308310 1.308310 1.308310
</code></pre>
|
<p>The problem is that when you use the <code>DataFrame</code> constructor:</p>
<pre><code>pd.DataFrame({m: eurusd.interpolate(method=m) for m in methods})
</code></pre>
<p>the value for each <code>m</code> is a <code>DataFrame</code>, which will be interpreted as a scalar value, which is admittedly confusing. This constructer expects some sort of sequence or <code>Series</code>. The following should solve the problem:</p>
<pre><code>pd.DataFrame({m: eurusd['BID-CLOSE'].interpolate(method=m) for m in methods})
</code></pre>
<p>Since subsetting on a column returns a <code>Series</code>. So, for example instead of:</p>
<pre><code>In [34]: pd.DataFrame({'linear':df.interpolate('linear')})
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-34-4b6c095c6da3> in <module>()
----> 1 pd.DataFrame({'linear':df.interpolate('linear')})
/home/juan/anaconda3/lib/python3.5/site-packages/pandas/core/frame.py in __init__(self, data, index, columns, dtype, copy)
222 dtype=dtype, copy=copy)
223 elif isinstance(data, dict):
--> 224 mgr = self._init_dict(data, index, columns, dtype=dtype)
225 elif isinstance(data, ma.MaskedArray):
226 import numpy.ma.mrecords as mrecords
/home/juan/anaconda3/lib/python3.5/site-packages/pandas/core/frame.py in _init_dict(self, data, index, columns, dtype)
358 arrays = [data[k] for k in keys]
359
--> 360 return _arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype)
361
362 def _init_ndarray(self, values, index, columns, dtype=None, copy=False):
/home/juan/anaconda3/lib/python3.5/site-packages/pandas/core/frame.py in _arrays_to_mgr(arrays, arr_names, index, columns, dtype)
5229 # figure out the index, if necessary
5230 if index is None:
-> 5231 index = extract_index(arrays)
5232 else:
5233 index = _ensure_index(index)
/home/juan/anaconda3/lib/python3.5/site-packages/pandas/core/frame.py in extract_index(data)
5268
5269 if not indexes and not raw_lengths:
-> 5270 raise ValueError('If using all scalar values, you must pass'
5271 ' an index')
5272
ValueError: If using all scalar values, you must pass an index
</code></pre>
<p>Use this instead:</p>
<pre><code>In [35]: pd.DataFrame({'linear':df['BID-CLOSE'].interpolate('linear')})
Out[35]:
linear
timestamp
2016-10-10 22:00:00 1.309710
2016-10-10 22:00:00 1.319710
2016-10-10 22:00:00 1.317210
2016-10-10 22:00:00 1.317710
2016-10-10 22:00:00 1.314110
2016-10-10 22:00:00 1.313010
2016-10-10 22:00:00 1.311910
2016-10-10 22:00:00 1.310810
2016-10-10 22:00:00 1.309710
2016-10-10 22:00:00 1.311310
2016-10-10 22:00:00 1.314910
2016-10-10 22:00:00 1.320210
2016-10-10 22:00:00 1.322577
2016-10-10 22:00:00 1.324943
2016-10-10 22:00:00 1.327310
2016-10-10 22:00:00 1.327310
2016-10-10 22:00:00 1.317010
2016-10-10 22:00:00 1.308310
</code></pre>
<p>Fair warning, though, I am getting a <code>LinAlgError: singular matrix</code> error when I try <code>'quadratic'</code> and <code>'cubic'</code> interpolation on your data. Not sure why though.</p>
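<p>As a side note, an equivalent one-liner (a quick sketch) is <code>pd.concat</code> with a dict, which uses the dict keys as column names:</p>
<pre><code>interp = pd.concat({m: eurusd['BID-CLOSE'].interpolate(method=m) for m in methods}, axis=1)
</code></pre>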
|
python|pandas|quantitative-finance
| 7
|
8,045
| 39,676,294
|
Looping over files and plotting (Python)
|
<p>My data looks like the picture below. All of my data is in .txt format and my aim is to loop over the files and plot them. The first row holds my variable names
(WL, ABS, T%), so first I need to remove it before proceeding. </p>
<pre><code>with open('Desktop/100-3.txt', 'r') as f:
data = f.read().splitlines(True)
with open('Desktop/100-3.txt', 'w') as f:
f.writelines(data[1:])
</code></pre>
<p>This is probably not necessary, but I am very new to NumPy. Basically the algorithm will be as follows:</p>
<ol>
<li>Read all the .txt files</li>
<li>Plot T% versus WL, plot ABS versus WL, save. (WL -> x variable)</li>
<li>Continue for the next file, .. (two graphs for every .txt file)</li>
<li>Then finish the loop, exit.</li>
</ol>
<p><a href="http://i.stack.imgur.com/GOvbi.jpg" rel="nofollow">data looks like this</a></p>
<p><strong>What I've tried</strong></p>
<pre><code>from numpy import loadtxt
import os
dizin = os.listdir(os.getcwd())
for i in dizin:
if i.endswith('.txt'):
data = loadtxt("??",float)
</code></pre>
|
<p>For data files like this I would prefer <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html" rel="nofollow">np.genfromtxt</a> over np.loadtxt, it has many useful options you can look up in the docs. The <a href="https://docs.python.org/3/library/glob.html" rel="nofollow">glob</a> module is also nice to iterate over directories with wildcards as filters:</p>
<pre><code>from glob import glob
import numpy as np
import matplotlib.pyplot as plt
# loop over all files in the current directory ending with .txt
for fname in glob("./*.txt"):
# read file, skip header (1 line) and unpack into 3 variables
WL, ABS, T = np.genfromtxt(fname, skip_header=1, unpack=True)
# first plot
plt.plot(WL, T)
plt.xlabel('WL')
plt.ylabel('T%')
plt.show()
plt.clf()
    # second plot: ABS versus WL
    plt.plot(WL, ABS)
plt.xlabel('WL')
plt.ylabel('ABS')
plt.show()
plt.clf()
</code></pre>
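<p>Since step 2 of your algorithm also asks to save each figure, a sketch (with hypothetical file names) is to call <code>plt.savefig</code> with a name derived from the input file, just before each <code>plt.show()</code>:</p>
<pre><code>import os
base = os.path.splitext(os.path.basename(fname))[0]
# before the first plt.show():
plt.savefig(base + '_T.png')
# ... and before the second plt.show():
plt.savefig(base + '_ABS.png')
</code></pre>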
<p>The next step would be to do some research on matplotlib to make the plots look better.</p>
<p>Please let me know if the code does not work, I'll try to fix it then. </p>
<p>EDIT: Added plt.clf() to clear the figure before creating a new one.</p>
|
python|numpy|plot
| 2
|
8,046
| 39,729,508
|
Colored 3D plot
|
<p>I found this good <a href="https://stackoverflow.com/questions/12423601/python-the-simplest-way-to-plot-3d-surface">example</a> here for plotting 3D data with Python 2.7.</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
# ======
## data:
DATA = np.array([
[-0.807237702464, 0.904373229492, 111.428744443],
[-0.802470821517, 0.832159465335, 98.572957317],
[-0.801052795982, 0.744231916692, 86.485869328],
[-0.802505546206, 0.642324228721, 75.279804677],
[-0.804158144115, 0.52882485495, 65.112895758],
[-0.806418040943, 0.405733109371, 56.1627277595],
[-0.808515314192, 0.275100227689, 48.508994388],
[-0.809879521648, 0.139140394575, 42.1027499025],
[-0.810645106092, -7.48279012695e-06, 36.8668106345],
[-0.810676720161, -0.139773175337, 32.714580273],
[-0.811308686707, -0.277276065449, 29.5977405865],
[-0.812331692291, -0.40975978382, 27.6210856615],
[-0.816075037319, -0.535615685086, 27.2420699235],
[-0.823691366944, -0.654350489595, 29.1823292975],
[-0.836688691603, -0.765630198427, 34.2275056775],
[-0.854984518665, -0.86845932028, 43.029581434],
[-0.879261949054, -0.961799684483, 55.9594146815],
[-0.740499820944, 0.901631050387, 97.0261463995],
[-0.735011699497, 0.82881933383, 84.971061395],
[-0.733021568161, 0.740454485354, 73.733621269],
[-0.732821755233, 0.638770044767, 63.3815970475],
[-0.733876941678, 0.525818698874, 54.0655910105],
[-0.735055978521, 0.403303715698, 45.90859502],
[-0.736448900325, 0.273425879041, 38.935709456],
[-0.737556181137, 0.13826504904, 33.096106049],
[-0.738278724065, -9.73058423274e-06, 28.359664343],
[-0.738507612286, -0.138781586244, 24.627237837],
[-0.738539663773, -0.275090412979, 21.857410904],
[-0.739099040189, -0.406068448513, 20.1110519655],
[-0.741152200369, -0.529726022182, 19.7019157715],
])
Xs = DATA[:,0]
Ys = DATA[:,1]
Zs = DATA[:,2]
## plot:
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
surf = ax.plot_trisurf(Xs, Ys, Zs, cmap=cm.jet, linewidth=0)
fig.colorbar(surf)
ax.xaxis.set_major_locator(MaxNLocator(5))
ax.yaxis.set_major_locator(MaxNLocator(6))
ax.zaxis.set_major_locator(MaxNLocator(5))
fig.tight_layout()
fig.savefig('3D.png')
plt.show()
</code></pre>
<p>The result is good:</p>
<p><img src="https://i.stack.imgur.com/PWIE0.png" alt="Output"></p>
<p>But would it be possible to put this 3D map "in 2D"? I want to have only the color as the indication for the Z coordinate, as if viewing this plot "from the top".
Note that the data (and hence the z coordinate) come from a measurement, not a function.</p>
<p>I have a lot of data and my computer is very slow...</p>
|
<p>As mentioned in the comment, you can use a contour. Since you are already using a triangulation, you can use <code>tricontourf</code>. See an example below.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
## data:
DATA = np.array([
[-0.807237702464, 0.904373229492, 111.428744443],
[-0.802470821517, 0.832159465335, 98.572957317],
[-0.801052795982, 0.744231916692, 86.485869328],
[-0.802505546206, 0.642324228721, 75.279804677],
[-0.804158144115, 0.52882485495, 65.112895758],
[-0.806418040943, 0.405733109371, 56.1627277595],
[-0.808515314192, 0.275100227689, 48.508994388],
[-0.809879521648, 0.139140394575, 42.1027499025],
[-0.810645106092, -7.48279012695e-06, 36.8668106345],
[-0.810676720161, -0.139773175337, 32.714580273],
[-0.811308686707, -0.277276065449, 29.5977405865],
[-0.812331692291, -0.40975978382, 27.6210856615],
[-0.816075037319, -0.535615685086, 27.2420699235],
[-0.823691366944, -0.654350489595, 29.1823292975],
[-0.836688691603, -0.765630198427, 34.2275056775],
[-0.854984518665, -0.86845932028, 43.029581434],
[-0.879261949054, -0.961799684483, 55.9594146815],
[-0.740499820944, 0.901631050387, 97.0261463995],
[-0.735011699497, 0.82881933383, 84.971061395],
[-0.733021568161, 0.740454485354, 73.733621269],
[-0.732821755233, 0.638770044767, 63.3815970475],
[-0.733876941678, 0.525818698874, 54.0655910105],
[-0.735055978521, 0.403303715698, 45.90859502],
[-0.736448900325, 0.273425879041, 38.935709456],
[-0.737556181137, 0.13826504904, 33.096106049],
[-0.738278724065, -9.73058423274e-06, 28.359664343],
[-0.738507612286, -0.138781586244, 24.627237837],
[-0.738539663773, -0.275090412979, 21.857410904],
[-0.739099040189, -0.406068448513, 20.1110519655],
[-0.741152200369, -0.529726022182, 19.7019157715],
])
Xs = DATA[:,0]
Ys = DATA[:,1]
Zs = DATA[:,2]
## plot:
fig = plt.figure()
contour = plt.tricontourf(Xs, Ys, Zs, cmap="YlGnBu_r")
fig.colorbar(contour)
fig.savefig('3D.png')
plt.show()
</code></pre>
<p>The result is</p>
<p><a href="https://i.stack.imgur.com/vFLyC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vFLyC.png" alt="enter image description here"></a></p>
|
python|numpy|matplotlib|mplot3d
| 3
|
8,047
| 39,479,919
|
How do I subtract the previous row from the current row in a pandas dataframe and apply it to every row; without using a loop?
|
<p>I am using Python3.5 and I am working with pandas. I have loaded stock data from yahoo finance and have saved the files to csv. My DataFrames load this data from the csv. This is a copy of the ten rows of the csv file that is my DataFrame</p>
<pre><code> Date Open High Low Close Volume Adj Close
1990-04-12 26.875000 26.875000 26.625 26.625 6100 250.576036
1990-04-16 26.500000 26.750000 26.375 26.750 500 251.752449
1990-04-17 26.750000 26.875000 26.750 26.875 2300 252.928863
1990-04-18 26.875000 26.875000 26.500 26.625 3500 250.576036
1990-04-19 26.500000 26.750000 26.500 26.750 700 251.752449
1990-04-20 26.750000 26.875000 26.750 26.875 2100 252.928863
1990-04-23 26.875000 26.875000 26.750 26.875 700 252.928863
1990-04-24 27.000000 27.000000 26.000 26.000 2400 244.693970
1990-04-25 25.250000 25.250000 24.875 25.125 9300 236.459076
1990-04-26 25.000000 25.250000 24.750 25.000 1200 235.282663
</code></pre>
<p>I know that I can use iloc, loc, and ix, but these will only give me specific rows and columns and will not perform the operation on every row.
For example: row one of the Open column has a value of 26.875 and the row below it has 26.50, so the price dropped 0.375. I want to capture the % increase or decrease from the previous day; to finish this example, 0.375 divided by 26.875 = a 1.4% decrease from one day to the next. I want to run this calculation on every row so I know how much it has increased or decreased from the previous day. The indexing functions I have tried are absolute, and I don't want to use a loop. Is there a way I can do this with ix, iloc, loc or another function?</p>
|
<p>you can use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.pct_change.html" rel="noreferrer">pct_change()</a> and/or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.diff.html" rel="noreferrer">diff()</a> methods</p>
<p>Demo:</p>
<pre><code>In [138]: df.Close.pct_change() * 100
Out[138]:
0 NaN
1 0.469484
2 0.467290
3 -0.930233
4 0.469484
5 0.467290
6 0.000000
7 -3.255814
8 -3.365385
9 -0.497512
Name: Close, dtype: float64
In [139]: df.Close.diff()
Out[139]:
0 NaN
1 0.125
2 0.125
3 -0.250
4 0.125
5 0.125
6 0.000
7 -0.875
8 -0.875
9 -0.125
Name: Close, dtype: float64
</code></pre>
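<p>Both methods also work on the whole <code>DataFrame</code> at once, e.g. (a quick sketch):</p>
<pre><code>df[['Open', 'High', 'Low', 'Close']].pct_change() * 100
</code></pre>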
|
python|pandas|numpy|dataframe|indexing
| 66
|
8,048
| 44,052,719
|
Slow loading of large NumPy datasets
|
<p>I notice a long loading time (~10 min) for a .npy file containing a 1D numpy array of object dtype with a length of ~10000. Each element in this array is an ordered dictionary (OrderedDict, a dictionary subclass from the collections package) with a length of ~5000. So, how can I efficiently save and load large NumPy arrays to and from disk? How are large data sets in Python traditionally handled?</p>
|
<p>Numpy will pickle embedded objects by default (which you could avoid with <code>allow_pickle=False</code>, but it sounds like you need them), and pickling is slow (see <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.save.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/generated/numpy.save.html</a>).
You may want to check pandas (see <a href="http://matthewrocklin.com/blog/work/2015/03/16/Fast-Serialization" rel="nofollow noreferrer">http://matthewrocklin.com/blog/work/2015/03/16/Fast-Serialization</a>) or come up with your own file format that avoids pickling your complex data structures.</p>
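<p>A minimal sketch of the pandas route, assuming the dictionaries share (mostly) the same keys so they map onto columns; <code>to_hdf</code> needs the PyTables package installed:</p>
<pre><code>import numpy as np
import pandas as pd

# hypothetical stand-in for the object array of OrderedDicts
records = np.array([{'a': i, 'b': 2 * i} for i in range(5)], dtype=object)

# a columnar layout avoids pickling each dict individually
df = pd.DataFrame.from_records(records.tolist())
df.to_hdf('data.h5', key='data', mode='w')

df2 = pd.read_hdf('data.h5', key='data')
</code></pre>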
|
python|arrays|numpy|ordereddictionary
| 2
|
8,049
| 44,136,451
|
Use python pandas or R, organize calendar data
|
<p>I have a data frame that records employees' attendance history and it looks like the following:</p>
<pre><code>ID Sunday Monday Tuesday Wednesday Thursday Friday Saturday
1585 NA NA NA NA NA NA NA
1585 NA S S S S H NA
1585 NA H S S NA NA NA
1585 NA S S S NA NA NA
1597 NA S S NA S NA NA
1597 NA NA NA NA NA H NA
1597 NA H S S NA NA NA
1597 NA NA NA NA NA NA NA
</code></pre>
<p>In the above sample, there are two individuals uniquely identified by ID, the following 7 columns are Saturday to Sunday that begins at say, April 1st, 2017. There are three attendance behaviours: <code>S</code> means sick leave, <code>H</code> stands for holidays and <code>NA</code> means this individual is working on that day. </p>
<p>The interest is to re-organize the sick leave absence records. For example, individual 1585 begins a sick leave on Monday, April 10th, 2017, and ends it on Wednesday, April 19th, 2017, lasting 10 days. Notice that during those 10 days there are two days of local holidays, but they would be considered as belonging to this sick leave spell. Then, this person begins a second sick leave on Monday, April 24th, 2017 and ends it on Wednesday, April 26th. </p>
<p>We also have a record for the second person, with ID 1597, again beginning on April 1st, 2017 (so for each person, the beginning and ending dates of the records are the same). This person has three absence spells: the first one begins on Monday, April 3rd, 2017 and ends the next day, April 4th. The second spell lasts only one day: it begins and ends on April 6th. The last spell begins on April 18th and ends on April 19th. </p>
<p>The desired output would be like this:</p>
<pre><code>ID Begin_date End_date Duration
1585 2017-04-10 2017-04-19 10
1585 2017-04-24 2017-04-26 3
1597 2017-04-03 2017-04-04 2
1597 2017-04-06 2017-04-06 1
1597 2017-04-18 2017-04-19 2
</code></pre>
<p>The difficulty I face is how to recognize the consecutive sick leave dates, and on top of that, during a single sick leave spell, it is possible to have different types of attendance type (holidays), but holidays are still considered to be belonged to that single sick leave spell. </p>
|
<p>Based on the idea of @Cholts, I wrote R code to generate the desired output.</p>
<pre><code>#clean the workspace
rm(list=ls(all=TRUE))
require(tidyr)
library(dplyr)
library(lubridate)
library(stringr)
ID = c(rep(1585,4),rep(1597,4))
Sun = c(rep("D",8))
Sat = c(rep("D",8))
Mon = c("Y","S","H","S","S","Y","H","Y")
Tue = c("Y","S","S","S","S","Y","S","Y")
Wed = c("Y","S","S","S","Y","Y","S","Y")
Thur = c("Y","S","Y","Y","S","Y","Y","Y")
Fri = c("Y","H","Y","Y","Y","H","Y","Y")
id_u = unique(ID)
df = data.frame(Sun,Mon,Tue,Wed,Thur,Fri,Sat)
new_df = df %>% unite(new,Sun,Mon,Tue,Wed,Thur,Fri,Sat,remove=FALSE,sep="")
vstr = new_df$new
#===========================================================
idd = c()
begin_date = c()
end_date = c()
duration = c()
n = 2
start_date = ymd('2017-04-02')
for(i in 1:n){
ps = (i-1)*4 +1
pe = (i-1)*4 + 4
indstr = paste(vstr[ps:pe],collapse = "")
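  # regex "S[SHD]*S|S": a spell starts and ends with S (sick) and may
  # contain H (holiday) or D (weekend) days in between; a lone S is a one-day spell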
loca = str_locate_all(indstr,"S[SHD]*S|S")
rn = length(loca[[1]][,1])
for (j in 1:rn){
idd = append(idd,id_u[i])
begin_date = append(begin_date,ymd(start_date+loca[[1]][j,1]-1))
end_date = append(end_date,ymd(start_date+loca[[1]][j,2]-1))
duration = append(duration,loca[[1]][j,2]-loca[[1]][j,1]+1)
}
}
final_df = data.frame(idd,begin_date,end_date,duration)
</code></pre>
<p>The output is </p>
<pre><code>> final_df
idd begin_date end_date duration
1 1585 2017-04-10 2017-04-19 10
2 1585 2017-04-24 2017-04-26 3
3 1597 2017-04-03 2017-04-04 2
4 1597 2017-04-06 2017-04-06 1
5 1597 2017-04-18 2017-04-19 2
</code></pre>
|
python|r|pandas|tidyr
| 1
|
8,050
| 69,547,137
|
How should I avoid duplicate imports when writing a package?
|
<p>I'm making a python package to run analyses with pandas, and I use pandas objects in most files in the package. How do I import those functions so they're usable in the package but don't clutter the namespace for a user? Say I have this directory structure:</p>
<pre><code>MyThing/
MyThing/
__init__.py
apis.py
MyClass.py
</code></pre>
<p>where <code>MyClass.py</code> provides a class I will instantiate to process data in memory and <code>apis.py</code> has interfaces to local and remote databases. As a demonstration, say <code>__init__.py</code> contains</p>
<pre class="lang-py prettyprint-override"><code>from MyThing.MyClass import MyClass
from MyThing.apis import DBInterface
</code></pre>
<p>the contents of <code>MyClass.py</code> are</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
    def __init__(self):
pass
</code></pre>
<p>and <code>apis.py</code> is</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
class DBInterface:
    def __init__(self):
pass
</code></pre>
<p>With complete code I expect the use case to look something like this</p>
<pre class="lang-py prettyprint-override"><code>import MyThing as mt
# get some data
interface = mt.DBInterface()
some_data = interface.query(parameters)
# load it into MyThing
instance = mt.MyClass(some_data)
# add new data from another source
instance.read(filename)
# make some fancy products
instance.magic(parameters)
# update the database
interface.update_db(instance)
</code></pre>
<p>The concern I have is that <code>dir(mt.apis)</code> shows everything I've imported, meaning I can do things like make a pandas DataFrame with <code>df = mt.apis.pd.DataFrame()</code>. Is this how it's supposed to work? Should I be using <code>import</code> differently so the namespace isn't cluttered with dependencies? Should I design the package differently so the dependencies aren't available when I import <code>MyThing</code>?</p>
|
<p>What you are doing is fine and how it's supposed to work and I wouldn't advise trying hard to hide your pandas import.</p>
<p>The solution to this <code>df = mt.apis.pd.DataFrame()</code> is: don't do that.</p>
<p>If there is a function or variable within <code>Mything.apis</code> that you don't want others to use, you can prefix it with a single underscore (eg. <code>_foo</code>). By convention this is understood to be for "internal use" and is not imported when you do <code>from Mything.apis import *</code>. See <a href="https://www.python.org/dev/peps/pep-0008/#descriptive-naming-styles" rel="nofollow noreferrer">this section of the PEP-8 style guide</a> for more information about naming conventions of this sort.</p>
<p>If you'd like to be more explicit about what things your module exports you may define them like so <code>__all__ = ['foo', 'bar']</code>. This also makes it so that if you or someone does <code>from Mything.apis import *</code> (which is generally ill-advised anyway) they will only import <code>foo</code> and <code>bar</code>, but you should treat this as a mere suggestion, just like the leading underscore convention.</p>
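<p>A minimal sketch of what <code>apis.py</code> could look like with both conventions (the underscore alias on the import is optional and purely cosmetic):</p>
<pre class="lang-py prettyprint-override"><code># MyThing/apis.py
import pandas as _pd  # leading underscore marks it as internal

__all__ = ['DBInterface']  # what `from MyThing.apis import *` exports

class DBInterface:
    def __init__(self):
        self._df = _pd.DataFrame()
</code></pre>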
|
python|pandas|dataframe
| 1
|
8,051
| 69,529,353
|
Remove rows from X_train and y_train at once
|
<p>I'm absolutely new in python, so there is a question.</p>
<p>I've split my original df into X_train, y_train, X_test, y_test.
Now I want to drop outliers from y_train (a pd.Series); therefore I need to remove the objects with the same index from X_train (a pd.DataFrame).
What is the easiest and cleanest way to do it?</p>
|
<p>Try using <code>y_train = y_train.loc[X_train_new.index]</code>, where <code>X_train_new</code> is your new <code>X_train</code> after dropping some rows/outliers.</p>
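<p>Alternatively (a sketch, assuming a simple z-score rule for outliers), you can build one boolean mask and apply it to both objects at once:</p>
<pre><code># keep rows whose target lies within 3 standard deviations of the mean
mask = (y_train - y_train.mean()).abs() &lt;= 3 * y_train.std()
X_train, y_train = X_train[mask], y_train[mask]
</code></pre>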
|
python|pandas|scikit-learn
| 0
|
8,052
| 69,604,098
|
Python dataframe transpose time series (rows to column)
|
<p>I would like to transpose (?)/ transfrom this time series:</p>
<pre><code>values = ['Date Value Value_30days_later',
'26.01.01 36 40.3',
'29.01.01 36 38.2',
'30.01.01 37.5 36.5',
'31.01.01 37.5 37.3',
'01.02.01 37 36.7',
'02.02.01 37.5 36.5',
'05.02.01 35 33',
'06.02.01 32.5 26.5',
'07.02.01 31 25.3',
'08.02.01 30.5 29',
'09.02.01 32.3 30.3']
</code></pre>
<p>To looking like this one</p>
<pre><code>values = ['Date Value_-4 Value_-3 Value_-2 Value_-1 Value_0 Value_30days_later',
'01.02.01 36 36 37.5 37.5 37 36.7',
'02.02.01 36 37.5 37.5 37 37.5 36.5',
'05.02.01 37.5 37.5 37 37.5 35 33',
'06.02.01 37.5 37 37.5 35 32.5 26.5',
'07.02.01 37 37.5 35 32.5 31 25.3',
'08.02.01 37.5 35 32.5 31 30.5 29',
'09.02.01 35 32.5 31 30.5 32.3 30.3']
</code></pre>
<p>So basically converting 5 <strong>Value</strong> rows to columns based on <strong>Date</strong> and adding <strong>Value_30days_later</strong>
to it.</p>
<p>Based on the <strong>Date</strong> in the original time series I want to have a time series with values <strong>0</strong> to <strong>-4</strong> (the last 5 dates including the current one) and the value 30 days later.</p>
<p>Is there a simple function available using e.g., pandas or numpy?</p>
|
<p>I got it now by using this function</p>
<pre><code> def create_lags():
for lags in range(0,5):
time_series['value_lag_-'+str(lag)] = time_series['value'].shift(+lag)
</code></pre>
<p>This <strong>.shift function</strong> does it for me.</p>
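<p>A quick check of what <code>shift</code> does on a small series (values taken from the question):</p>
<pre><code>import pandas as pd

s = pd.Series([36, 36, 37.5],
              index=pd.to_datetime(['2001-01-26', '2001-01-29', '2001-01-30']))
print(s.shift(1))
# 2001-01-26     NaN
# 2001-01-29    36.0
# 2001-01-30    36.0
</code></pre>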
|
python|pandas|dataframe|numpy
| 0
|
8,053
| 69,530,356
|
How to calculate a mean from a range of rows
|
<p>Using a for loop, how do I work through 5 sets of data and calculate, for example, the mean and standard deviation?</p>
<p>For example the array is</p>
<pre class="lang-py prettyprint-override"><code>data = np.array([[49, 32, 32, 8, 49],
[ 1, 29, 28, 45, 20],
[11, 40, 5, 6, 21],
[13, 45, 3, 12, 12],
[11, 6, 39, 39, 27],
[10, 34, 1, 15, 42],
[31, 27, 3, 4, 12],
[41, 14, 27, 45, 44],
[48, 37, 14, 16, 13],
[41, 9, 14, 49, 16]])
</code></pre>
<p>Shape is (10,5)</p>
<p>I need to calculate the mean and standard deviation from 5 rows at a time using a for loop.
My code only calculates the mean of each row:</p>
<pre><code>for i in range(len(data)):
mean = np.mean(data[i])
print(mean)
</code></pre>
|
<p>You can do it quite easily without a for loop, like the following:</p>
<pre><code>import numpy as np
data = np.array([[49, 32, 32, 8, 49],
[ 1, 29, 28, 45, 20],
[11, 40, 5, 6, 21],
[13, 45, 3, 12, 12],
[11, 6, 39, 39, 27],
[10, 34, 1, 15, 42],
[31, 27, 3, 4, 12],
[41, 14, 27, 45, 44],
[48, 37, 14, 16, 13],
[41, 9, 14, 49, 16]])
mean = np.mean(data, axis=0)
print("Mean:", mean)
stdev = np.std(data, axis=0, ddof=1)  # ddof=1 gives the sample STDEV; remove it for the population STDEV
print("STDEV:",stdev)
</code></pre>
<p>Output:
<a href="https://i.stack.imgur.com/ubOPs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ubOPs.png" alt="enter image description here" /></a></p>
|
python|numpy|for-loop
| 0
|
8,054
| 69,482,711
|
How to determine cycles with Pandas
|
<p>Based on sample dataframe:</p>
<pre><code>import pandas as pd
Machine = [0,0,0,0,0,0,1,1,1,1,1,0,1,1,1,0,0,0,0,0,0,0,1,1,1,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,1,1,1,0,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0]
df2 = pd.DataFrame(Machine)
</code></pre>
<p>This is mocking a machine being on and off. 0 means that it is off, and 1 means that it is on over that period of time. However, due to poor data, the machine will say it's off in the middle of an on cycle. (1,1,1,1,1,0,1,1,1) The machine is really on during this whole period and the 0 is an error. Does anyone know of an easy way to calculate the total number of on cycles this would have while ignoring instances of bad data?</p>
<p>The sample code above has 3 on cycles and 4 off cycles. What would be the best way to calculate this while ignoring random data errors in an on cycle?</p>
|
<p>It's maybe not the best way, but you can build something like:</p>
<pre><code>import numpy as np

df = df2.rename(columns={0: 'Machine'})
# compare each sample with its four nearest neighbours
df['error_n-2'] = df['Machine'].ne(df['Machine'].shift(-2))
df['error_n-1'] = df['Machine'].ne(df['Machine'].shift(-1))
df['error_n+1'] = df['Machine'].ne(df['Machine'].shift(1))
df['error_n+2'] = df['Machine'].ne(df['Machine'].shift(2))
df['nb_diff'] = df['error_n-2'] + df['error_n-1'] + df['error_n+1'] + df['error_n+2']
# apply a rule of 3 out of 4: a sample that disagrees with at least
# 3 of its 4 neighbours is a suspected data error
df['potential_error'] = np.where(df['nb_diff'] >= 3, True, False)
# clean up the helper columns
df.drop(columns=['error_n-2', 'error_n-1', 'error_n+1', 'error_n+2', 'nb_diff'], inplace=True)
# exclude potential errors (assign the result to keep it)
df = df[df['potential_error'] == False]
</code></pre>
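<p>To then count the on cycles, a sketch (assuming error bursts of at most 2 samples, as in the question's data) is to smooth with a centered rolling median and count rising edges:</p>
<pre><code>import pandas as pd

s = df2[0]
# a centered rolling median of window 5 erases dropouts of up to 2 samples
clean = s.rolling(5, center=True, min_periods=1).median()
# each 0 -> 1 transition starts a new on cycle
on_cycles = int(((clean == 1) & (clean.shift(fill_value=0) == 0)).sum())
print(on_cycles)  # 3 for the sample data
</code></pre>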
|
python|pandas
| 0
|
8,055
| 53,853,899
|
How to create a column timestamp while using apply in a pandas dataframe?
|
<p>I am applying some functions to pandas dataframe columns as:</p>
<pre><code>def foo(x):
return 1 + x
</code></pre>
<p>Then, I apply the function to a column:</p>
<pre><code>df['foo'] = df['a_col'].apply(foo)
</code></pre>
<p>How can I return a column with the amount of milliseconds that the function <code>foo</code> takes to finish? For instance:</p>
<pre><code>A time_milisecs
2 0.1
4 0.2
4 0.3
3 0.3
4 0.2
</code></pre>
<p>Where <code>A</code> is the column that contains the result of the sum.</p>
|
<p>You can use the <code>time</code> module. Given you also wish to create a new series via a calculation, you can output a sequence of tuples, then convert to a dataframe and assign back to two series.</p>
<p>Here's a demonstration:</p>
<pre><code>import time
df = pd.DataFrame({'A': [2, 4, 4, 3, 4]})
def foo(x):
tstart = time.time()
time.sleep(0.25)
tend = time.time()
return 1 + x, (tend-tstart) * 10**3
df[['B', 'B_time']] = pd.DataFrame(df['A'].apply(foo).values.tolist())
print(df)
A B B_time
0 2 3 250.014544
1 4 5 250.014305
2 4 5 250.014305
3 3 4 250.014305
4 4 5 250.014067
</code></pre>
<p>With Python 3.7+, you can use <code>time.perf_counter_ns</code> for wall-clock time, or <a href="https://docs.python.org/3/library/time.html#time.process_time_ns" rel="nofollow noreferrer"><code>time.process_time_ns</code></a> for CPU time only; both measure time in nanoseconds.</p>
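<p>For example (a quick sketch):</p>
<pre><code>tstart = time.perf_counter_ns()
time.sleep(0.25)
elapsed_ms = (time.perf_counter_ns() - tstart) / 10**6
</code></pre>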
|
python|python-3.x|pandas|time
| 2
|
8,056
| 54,211,955
|
Build JSON object from pandas dataframe
|
<p>I'm trying to format a pandas dataframe:</p>
<pre><code>> year mileage model manufacturer power fuel_type price
> 0 2011 184000 c-klasa Mercedes-Benz 161 diesel 114340
> 1 2013 102000 v40 Volvo 130 diesel 80511
> 2 2014 191000 scenic Renault 85 diesel 57613
> 3 1996 210000 vectra Opel 85 benzin 6278
> 4 2005 258000 tucson Hyundai 83 diesel 41363
> 5 2007 325000 astra Opel 74 diesel 26590
> 6 2002 200000 megane Renault 79 plin 16988
> 7 2011 191000 touran VW 77 diesel 62783
> 8 2007 210000 118 BMW 105 diesel 44318
> 9 2012 104000 3 Mazda 85 diesel 63522
> 10 2011 68000 c3 Citroen 54 benzin 44318
> 11 1993 200000 ax Citroen 37 diesel 43467
> 12 2011 142000 twingo Renault 55 benzin 28068
> 13 2005 280000 320 BMW 120 diesel 28068
</code></pre>
<p>output to fit JSON object requirements.
Here's my code:</p>
<pre><code>for model, car in carsDF.groupby('manufacturer'):
print("{\"",model,":\"[\"",'","'.join(car['model'].unique()),"\"]},")
</code></pre>
<p>which yields:</p>
<pre><code>> {" Alfa Romeo
> :"["156","159","146","147","giulietta","gt","33","mito","166","145","brera","sprint","spider","155","ostalo
> "]}, {" Aston Martin :"[" vantage "]},...
</code></pre>
<p>This is OK except for the spaces that show up each time I use the escape char "\".</p>
<p>How do I create the JSON object without them?
Is there any better way to generate a JSON object for a case like this?</p>
|
<p>I believe you need to create a Series of <code>unique</code> values with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.SeriesGroupBy.unique.html" rel="nofollow noreferrer"><code>SeriesGroupBy.unique</code></a> and then convert it to JSON with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.to_json.html" rel="nofollow noreferrer"><code>Series.to_json</code></a>:</p>
<pre><code>j = carsDF.groupby('manufacturer')['model'].unique().to_json()
print (j)
{
"BMW": ["118", "320"],
"Citroen": ["c3", "ax"],
"Hyundai": ["tucson"],
"Mazda": ["3"],
"Mercedes-Benz": ["c-klasa"],
"Opel": ["vectra", "astra"],
"Renault": ["scenic", "megane", "twingo"],
"VW": ["touran"],
"Volvo": ["v40"]
}
</code></pre>
<p>If you want each JSON separately, the solution is to create <code>dictionaries</code> and convert them to <code>json</code>:</p>
<pre><code>import json
for model, car in carsDF.groupby('manufacturer'):
print (json.dumps({model: car['model'].unique().tolist()}))
{"BMW": ["118", "320"]}
{"Citroen": ["c3", "ax"]}
{"Hyundai": ["tucson"]}
{"Mazda": ["3"]}
{"Mercedes-Benz": ["c-klasa"]}
{"Opel": ["vectra", "astra"]}
{"Renault": ["scenic", "megane", "twingo"]}
{"VW": ["touran"]}
{"Volvo": ["v40"]}
</code></pre>
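<p>To write straight to a file instead of returning a string, pass a path (a quick sketch):</p>
<pre><code>carsDF.groupby('manufacturer')['model'].unique().to_json('models.json')
</code></pre>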
|
python|json|python-3.x|pandas|dictionary
| 1
|
8,057
| 54,198,577
|
ValueError: Cannot feed value of shape (1, 4, 84, 84) for Tensor 'Placeholder:0', which has shape '(?, 84, 84, 4)'
|
<p>I am running a DQN to learn to play Atari games, and am training it on GPU. I noticed that the 'data_format' for my model was NHWC (which is slower than NCHW for GPU training). I changed the data_format to NCHW but it gave this error;</p>
<pre><code>ValueError: Cannot feed value of shape (1, 4, 84, 84) for Tensor 'Placeholder:0', which has shape '(?, 84, 84, 4)'
</code></pre>
<p>This is my code;</p>
<pre><code>from __future__ import division, print_function, unicode_literals
from functools import reduce
# Handle arguments (before slow imports so --help can be fast)
import argparse
parser = argparse.ArgumentParser(
description="Train a DQN net.")
parser.add_argument("-n", "--number-steps", type=int, default=1000000, #4000000 CHANGED,
help="total number of training steps")
parser.add_argument("-l", "--learn-iterations", type=int, default=4,
help="number of game iterations between each training step")
parser.add_argument("-s", "--save-steps", type=int, default=1000,
help="number of training steps between saving checkpoints")
parser.add_argument("-c", "--copy-steps", type=int, default=10000,
help="number of training steps between copies of online DQN to target DQN")
parser.add_argument("-r", "--render", action="store_true", default=False,
help="render the game during training or testing")
parser.add_argument("-p", "--path", default="model",
help="path of the checkpoint file")
parser.add_argument("-m", "--model_fname", default="model.ckpt",
help="name of the checkpoint file")
parser.add_argument("-t", "--test", action="store_true", default=False,
help="test (no learning and minimal epsilon)")
parser.add_argument("-tg", "--test_games", type=int, default=20,
help="How many games to test across (no learning and minimal epsilon)")
parser.add_argument("-v", "--verbosity", action="count", default=0,
help="increase output verbosity")
args = parser.parse_args()
from collections import deque
import gym
import numpy as np
import os
import tensorflow as tf
import random
import sys
import time
import sys
from history import History
from replay_memory import ReplayMemory
from utils import rgb2gray, imresize
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
env = gym.make("Breakout-v0")
done = True # env needs to be reset
# First let's build the two DQNs (online & target)
input_height = 84
input_width = 84
input_channels = history_length = 4
conv_n_maps = [32, 64, 64]
conv_kernel_sizes = [(8,8), (4,4), (3,3)]
conv_strides = [4, 2, 1]
conv_paddings = ["VALID"] * 3
conv_activation = [tf.nn.relu] * 3
n_hidden = 256 #512 CHANGED
hidden_activation = tf.nn.relu
n_outputs = env.action_space.n # 9 discrete actions are available (for MsPacman at least)
initializer = tf.truncated_normal_initializer(0, 0.02)
tf.device("/gpu:0")
# Deep-Q network
def q_network(X_state, name):
prev_layer = X_state
with tf.variable_scope(name) as scope:
for n_maps, kernel_size, strides, padding, activation in zip(
conv_n_maps, conv_kernel_sizes, conv_strides,
conv_paddings, conv_activation):
prev_layer = tf.layers.conv2d(
prev_layer, filters=n_maps, kernel_size=kernel_size,
strides=strides, padding=padding, activation=activation,
kernel_initializer=initializer)
prev_layer_shape = prev_layer.get_shape().as_list()
n_hidden_in = reduce(lambda x, y: x * y, prev_layer_shape[1:])
last_conv_layer_flat = tf.reshape(prev_layer, shape=[-1, n_hidden_in])
hidden = tf.layers.dense(last_conv_layer_flat, n_hidden,
activation=hidden_activation,
kernel_initializer=initializer)
outputs = tf.layers.dense(hidden, n_outputs,
kernel_initializer=initializer)
trainable_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
scope=scope.name)
trainable_vars_by_name = {var.name[len(scope.name):]: var
for var in trainable_vars}
return outputs, trainable_vars_by_name
# Place holder for input
X_state = tf.placeholder(tf.float32, shape=[None, input_height, input_width,
input_channels])
# Create two Deep-Q networks
online_q_values, online_vars = q_network(X_state, name="q_networks/online")
target_q_values, target_vars = q_network(X_state, name="q_networks/target")
# We need an operation to copy the online DQN to the target DQN
copy_ops = [target_var.assign(online_vars[var_name])
for var_name, target_var in target_vars.items()]
copy_online_to_target = tf.group(*copy_ops)
# Parameters for optimizer
learning_rate = 0.001 #0.00025 #seems low? CHANGED
learning_rate_minimum = 0.001 #0.00025 CHANGED
learning_rate_decay = 0.96
learning_rate_decay_step = 50000
momentum = 0.95
# Huber loss
def clipped_error(x):
try:
return tf.select(tf.abs(x) < 1.0, 0.5 * tf.square(x), tf.abs(x) - 0.5)
except:
return tf.where(tf.abs(x) < 1.0, 0.5 * tf.square(x), tf.abs(x) - 0.5)
# Initialize optimizer for training
with tf.variable_scope("train"):
X_action = tf.placeholder(tf.int32, shape=[None]) # Action based on Q-value from Online network
y = tf.placeholder(tf.float32, shape=[None]) # Q-value from Target network
q_value = tf.reduce_sum(online_q_values * tf.one_hot(X_action, n_outputs),
axis=1)
delta = y - q_value
loss = tf.reduce_mean(clipped_error(delta))
global_step = tf.Variable(0, trainable=False, name='global_step')
learning_rate_step = tf.placeholder('int64', None, name='learning_rate_step')
learning_rate_op = tf.maximum(
learning_rate_minimum,
tf.train.exponential_decay(
learning_rate,
learning_rate_step,
learning_rate_decay_step,
learning_rate_decay,
staircase=True
)
)
training_op = tf.train.RMSPropOptimizer(
learning_rate_op, momentum=momentum, epsilon=0.01
).minimize(loss, global_step=global_step)
# Summary for Tensorboard
summary_steps = 100
with tf.variable_scope('summary'):
summary_tags = ['average.reward']
summary_placeholders = {}
summary_ops = {}
for tag in summary_tags:
summary_placeholders[tag] = tf.placeholder('float32', None, name=tag.replace(' ', '_'))
summary_ops[tag] = tf.summary.scalar(tag, summary_placeholders[tag])
histogram_summary_tags = ['episode.rewards', 'episode.actions']
for tag in histogram_summary_tags:
summary_placeholders[tag] = tf.placeholder('float32', None, name=tag.replace(' ', '_'))
summary_ops[tag] = tf.summary.histogram(tag, summary_placeholders[tag])
init = tf.global_variables_initializer()
saver = tf.train.Saver()
# TensorFlow - Execution phase
training_start = 10000 # start training after 10,000 game iterations
discount_rate = 0.99
skip_start = 90 # Skip the start of every game (it's just waiting time). -- Is this just for msPacman??
batch_size = 64 #32
iteration = 0 # game iterations
done = True # env needs to be reset
min_reward = -1.
max_reward = 1.
exp_moving_avg_reward = 0.
first_train_step = True
iterationStep = 0
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 1
# We will keep track of the max Q-Value over time and compute the mean per game
loss_val = np.infty
game_length = 0
total_max_q = 0
mean_max_q = 0.0
# Initialize history
history = History(
data_format='NCHW',
batch_size=batch_size,
history_length=history_length,
screen_height=input_height,
screen_width=input_width
)
# Initialize Replay Memory
replay_memory_size = 1000000
replay_memory = ReplayMemory(
data_format='NCHW',
batch_size=batch_size,
history_length=history_length,
screen_height=input_height,
screen_width=input_width,
memory_size=replay_memory_size,
model_dir='model'
)
# And on to the epsilon-greedy policy with decaying epsilon
eps_min = 0.1
eps_max = 1.0
test_eps = eps_min if args.test else None
eps_decay_steps = 1000000
# eps_min = 0.1
# eps_max = 1.0 if not args.test else eps_min
# eps_decay_steps = args.number_steps // 2
def epsilon_greedy(q_values, step):
epsilon = test_eps or \
(
eps_min + max(
0.,
(eps_max - eps_min) * (eps_decay_steps - max(0., step - training_start)) / eps_decay_steps
)
)
# epsilon = max(eps_min, eps_max - (eps_max-eps_min) * step/eps_decay_steps)
if np.random.rand() < epsilon:
return np.random.randint(n_outputs) # random action
else:
return np.argmax(q_values) # optimal action
def preprocess_observation(obs):
return imresize(rgb2gray(obs)/255., (input_width, input_height))
with tf.Session() as sess:
summary_writer = tf.summary.FileWriter(os.path.join(args.path, "logs"), sess.graph) #Logdir
def inject_summary(tag_dict, step):
summary_str_lists = sess.run(
[summary_ops[tag] for tag in tag_dict.keys()],
{summary_placeholders[tag]: value for tag, value in tag_dict.items()}
)
for summary_str in summary_str_lists:
summary_writer.add_summary(summary_str, step)
# Resume the training (if possible)
ckpt = tf.train.get_checkpoint_state(args.path)
if ckpt and ckpt.model_checkpoint_path:
ckpt_name = os.path.basename(ckpt.model_checkpoint_path)
fname = os.path.join(args.path, ckpt_name)
saver.restore(sess, fname)
print(" [*] Load SUCCESS: %s" % fname)
else:
init.run()
copy_online_to_target.run()
print(" [!] Load FAILED: %s" % args.path)
# ----------- Training ----------- #
if not args.test:
print("TRAINING")
exp_moving_avg_reward = 0.0
current_rewards = []
while True:
step = global_step.eval()
if step >= args.number_steps:
break
iteration += 1
if args.verbosity > 0:
print("\rIter {}, training step {}/{} ({:.1f})%, "
"loss {:5f}, exp-moving-avg reward {:5f}, "
"mean max-Q {:5f}".format(
iteration, step, args.number_steps, step * 100 / args.number_steps,
loss_val, exp_moving_avg_reward,
mean_max_q),
end=""
)
# Game over, start again
if done:
obs = env.reset()
# Randomly skip the start of each game
for skip in range(random.randint(0, skip_start - 1)):
obs, _, done, _ = env.step(0)
state = preprocess_observation(obs)
for _ in range(history_length):
history.add(state)
if args.render:
env.render() #rending training option
# Online DQN evaluates what to do
q_values = online_q_values.eval(feed_dict={X_state: [history.get()]})[0]
action = epsilon_greedy(q_values, step)
# Online DQN plays
obs, reward, done, info = env.step(action)
next_state = preprocess_observation(obs)
# Reward clipping
reward = max(min_reward, min(max_reward, reward))
# Update history
history.add(next_state)
# Let's memorize what happened
replay_memory.add(next_state, reward, action, done)
state = next_state
current_rewards.append(reward)
# Compute statistics for tracking progress (not shown in the book)
total_max_q += q_values.max()
game_length += 1
if done:
mean_max_q = total_max_q / game_length
total_max_q = 0.0
game_length = 0
if iteration < training_start or iteration % args.learn_iterations != 0:
continue # only train after warmup period and at regular intervals
# Sample memories and use the target DQN to produce the target Q-Value
X_state_val, X_action_val, rewards, X_next_state_val, terminal = \
replay_memory.sample()
next_q_values = target_q_values.eval(
feed_dict={X_state: X_next_state_val}
)
max_next_q_values = np.max(next_q_values, axis=1)
y_val = rewards + (1. - terminal) * discount_rate * max_next_q_values
# Update exponential moving average of rewards
if first_train_step:
exp_moving_avg_reward = np.mean(current_rewards)
first_train_step = False
else:
exp_moving_avg_reward = (exp_moving_avg_reward * 0.99) + (0.01 * np.mean(current_rewards))
current_rewards = []
# Train the online DQN
_, loss_val = sess.run([training_op, loss], feed_dict={
X_state: X_state_val,
X_action: X_action_val,
y: y_val,
learning_rate_step: step,
})
# Regularly inject summary
if step % summary_steps == 0:
inject_summary(
{
'average.reward': exp_moving_avg_reward
},
step
)
# Regularly copy the online DQN to the target DQN
if step % args.copy_steps == 0:
# print("Copying the weight from online DQN to target DQN ...")
copy_online_to_target.run()
# And save regularly
if step % args.save_steps == 0:
# print("Saving model ...")
saver.save(sess, os.path.join(args.path, args.model_fname), global_step=step)
# ----------- Testing ----------- #
if args.test:
print("TESTING")
test_games_played = -1 # -1 to offset the first env.reset() call
test_game_reward = []
test_game_reward_average = []
best_score = 0
sys.stdout.flush()
current_rewards = []
while args.test_games >= test_games_played:
sys.stdout.flush()
step = global_step.eval()
# Game over, start again - we've won or lost the game at this point
if done:
test_games_played += 1
print("\n# --------------------------------------------------------- #")
print("GAMES PLAYED SO FAR: {}".format(test_games_played))
print("GAME SCORE: {}".format(sum(x for x in test_game_reward if x > 0))) # Printing total score from the game (positive values only)
test_game_reward_average.append(sum(x for x in test_game_reward if x > 0))
if test_games_played > 0:
print("AVERAGE SCORE: {:5f}".format(sum(test_game_reward_average)/test_games_played)) # Printing average score across games
if sum(x for x in test_game_reward if x > 0) > best_score:
best_score = sum(x for x in test_game_reward if x > 0)
print("BEST SCORE: {}".format(best_score))
test_game_reward.clear() # clearing the list of games scores for the next game to be played
print("Game Finished. Resetting Env...")
print("# --------------------------------------------------------- #\n")
obs = env.reset()
state = preprocess_observation(obs)
for _ in range(history_length):
history.add(state)
if args.render:
env.render()
# time.sleep(0.1) # slow down the render
# Online DQN evaluates what to do
q_values = online_q_values.eval(feed_dict={X_state: [history.get()]})[0]
action = epsilon_greedy(q_values, step)
# Online DQN plays
obs, reward, done, info = env.step(action)
next_state = preprocess_observation(obs)
# Reward clipping
reward = max(min_reward, min(max_reward, reward))
# Update history
history.add(next_state)
# Let's memorize what happened
replay_memory.add(next_state, reward, action, done)
state = next_state
current_rewards.append(reward)
if args.test:
test_game_reward.append(reward)
continue #Puts it back to top of While loop
</code></pre>
<p>I think that the input placeholder <code>X_state</code> (line 89) potentially needs to be changed. However, I'm not sure how or exactly where the change needs to be made. </p>
<p>Thanks for your help.</p>
|
<p>From your code:</p>
<blockquote>
<p><code>X_state = tf.placeholder(tf.float32, shape=[None, input_height, input_width, input_channels])</code></p>
</blockquote>
<p>You're hardcoding the shape of the placeholder to be in the NHWC format. If you want to feed arrays in NCHW format, change <code>X_state</code> to <code>tf.placeholder(tf.float32, shape=[None, input_channels, input_height, input_width])</code></p>
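<p>Alternatively, if you'd rather keep the graph in NHWC, you can transpose the arrays you feed instead (a sketch; <code>batch_nchw</code> is a stand-in name for your <code>(N, C, H, W)</code> array):</p>
<pre><code>feed = np.transpose(batch_nchw, (0, 2, 3, 1))  # NCHW -> NHWC
</code></pre>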
|
python|tensorflow|deep-learning|reinforcement-learning|openai-gym
| 1
|
8,058
| 38,508,294
|
How to get the max/min value in Pandas DataFrame when nan value in it
|
<p>One column of my pandas dataframe has <code>nan</code> values, so when I want to get the max value of that column, it just returns an error. </p>
<pre><code>>>> df.iloc[:, 1].max()
'error:512'
</code></pre>
<p>How can I skip that <code>nan</code> value and get the max value of that column?</p>
|
<p>You can use <code>NumPy</code>'s <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.nanmax.html" rel="noreferrer"><code>np.nanmax</code></a> and <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.nanmin.html" rel="noreferrer"><code>np.nanmin</code></a>:</p>
<pre><code>In [28]: df
Out[28]:
A B C
0 7 NaN 8
1 3 3 5
2 8 1 7
3 3 0 3
4 8 2 7
In [29]: np.nanmax(df.iloc[:, 1].values)
Out[29]: 3.0
In [30]: np.nanmin(df.iloc[:, 1].values)
Out[30]: 0.0
</code></pre>
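<p>Note that pandas' own <code>Series.max()</code> already skips <code>NaN</code> by default, so an error string like <code>'error:512'</code> usually hints at non-numeric values in the column; a sketch of coercing them first:</p>
<pre><code>In [31]: pd.to_numeric(df.iloc[:, 1], errors='coerce').max()
Out[31]: 3.0
</code></pre>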
|
python|pandas
| 19
|
8,059
| 66,325,217
|
Upload data from DICOM files in Torchvision Model
|
<p>I'm sorry if the question is too basic, but I am just getting started with PyTorch (and Python).</p>
<p>I was trying to follow step by step the instructions here:
<a href="https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html" rel="nofollow noreferrer">https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html</a></p>
<p>However, I am working with some DICOM files that I kept in two directories (CANCER/NOCANCER). I split them with split-folders, to have the structure needed by the ImageFolder dataset (as done in the tutorial).</p>
<p>I am aware that I only need to load the pixel_arrays extracted from the DICOM files, and I wrote some helper functions to:</p>
<ol>
<li>read all paths of the .dcm files;</li>
<li>read them and extract the pixel_array;</li>
<li>do a little preprocessing.
Here are the outlines of the helper functions:</li>
</ol>
<pre><code>import os
import pydicom
import cv2
import numpy as np
def createListFiles(dirName):
print("Fetching all the files in the data directory...")
lstFilesDCM =[]
for root, dir, fileList in os.walk(dirName):
for filename in fileList:
if ".dcm" in filename.lower():
lstFilesDCM.append(os.path.join( root , filename))
return lstFilesDCM
def castHeight(list):
lstHeight = []
min_height = 0
for filenameDCM in list:
readfile = pydicom.read_file(filenameDCM)
lstHeight.append(readfile.pixel_array.shape[0])
min_height = np.min(lstHeight)
return min_height
def castWidth(list):
lstWidth = []
min_Width = 0
for filenameDCM in list:
readfile = pydicom.read_file(filenameDCM)
lstWidth.append(readfile.pixel_array.shape[1])
min_Width = np.min(lstWidth)
return min_Width
def Preproc1(listDCM):
new_height, new_width = castHeight(listDCM), castWidth(listDCM)
ConstPixelDims = (len(listDCM), int(new_height), int(new_width))
ArrayDCM = np.zeros(ConstPixelDims, dtype=np.float32)
## loop through all the DICOM files
for filenameDCM in listDCM:
## read the file
ds = pydicom.read_file(filenameDCM)
mx0 = ds.pixel_array
## Standardisation
imgb = mx0.astype('float32')
imgb_stand = (imgb - imgb.mean(axis=(0, 1), keepdims=True)) / imgb.std(axis=(0, 1), keepdims=True)
## Normalisation
imgb_norm = cv2.normalize(imgb_stand, None, 0, 1, cv2.NORM_MINMAX)
## we make sure that data is saved as a data_array as a numpy array
data = np.array(imgb_norm)
## we save it into ArrayDicom and resize it based 'ConstPixelDims'
ArrayDCM[listDCM.index(filenameDCM), :, :] = cv2.resize(data, (int(new_width), int(new_height)), interpolation = cv2.INTER_CUBIC)
return ArrayDCM
</code></pre>
<p>So, now, how do I tell the dataloader to load the data, considering the structure it's in for labelling purposes, but only after doing this extraction and preprocessing on it?
I'm referencing the "Loading data" part of the tutorial in the documentation, that goes:</p>
<pre><code># Create training and validation datasets
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']}
# Create training and validation dataloaders
dataloaders_dict = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=batch_size, shuffle=True, num_workers=4) for x in ['train', 'val']}
</code></pre>
<p>If it makes any sense, is it possible to do something on the lines of</p>
<pre><code>image_datasets = {x: datasets.ImageFolder(Preproc1(os.path.join(data_dir, x)), data_transforms[x]) for x in ['train', 'val']}
</code></pre>
<p>?</p>
<p>Also, another question I have is: is it worth doing a normalisation step in my preprocessing when the tutorial suggests using transforms.Normalize?</p>
<p>I'm really sorry this sounds so vague, I've been trying to solve this for weeks now, but I can't manage.</p>
|
<p>It sounds like you will be better off implementing your own <a href="https://stackoverflow.com/a/64574689/1714410">custom <code>Dataset</code></a>.
And indeed, I think it would be better to defer normalization and the other preprocessing steps to transformations applied just before the images are fed to the model.</p>
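<p>As a rough illustration, here is a minimal sketch (not your exact pipeline) of such a custom <code>Dataset</code>: it reads each DICOM lazily and applies the preprocessing in <code>__getitem__</code>. The class name, the label handling and the per-image standardisation below are assumptions:</p>
<pre><code>import pydicom
import torch
from torch.utils.data import Dataset

class DicomFolderDataset(Dataset):
    """Lazily reads DICOM files and applies the preprocessing per item."""
    def __init__(self, lstFilesDCM, labels, transform=None):
        self.files = lstFilesDCM      # list of .dcm paths, e.g. from createListFiles
        self.labels = labels          # one 0/1 label per file (NOCANCER/CANCER)
        self.transform = transform

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        img = pydicom.read_file(self.files[idx]).pixel_array.astype('float32')
        img = (img - img.mean()) / (img.std() + 1e-8)   # per-image standardisation
        img = torch.from_numpy(img).unsqueeze(0)        # add a channel dimension
        if self.transform is not None:
            img = self.transform(img)                   # e.g. resize to a fixed size
        return img, self.labels[idx]
</code></pre>
<p>You can then build the <code>dataloaders_dict</code> exactly as in the tutorial, passing instances of this dataset instead of <code>ImageFolder</code>.</p>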
|
python|pytorch|torchvision|pydicom|medical-imaging
| 0
|
8,060
| 66,000,662
|
How to replace a string in a column based on the value of another column?
|
<p>I have this dataframe:</p>
<pre><code>df = pd.DataFrame([['US123','1111'],\
['CA456', '2222'],\
['US123', '3333'],\
['US123','4444'], \
['CA456', '5555']], columns=['ID', 'Notes'])
df
</code></pre>
<p><a href="https://i.stack.imgur.com/mDcip.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mDcip.png" alt="enter image description here" /></a></p>
<p>The desired output is:</p>
<p><a href="https://i.stack.imgur.com/OS57q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OS57q.png" alt="enter image description here" /></a></p>
<p>What is the best way to add string <code>; CA</code> to column <code>Notes</code> when the value in column <code>ID</code> starts with <code>CA</code>? Thank you!</p>
|
<p>You may use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>, that'll change column <code>Notes</code> only where needed, based on the condition on the <code>ID</code> column</p>
<pre><code>df.loc[df['ID'].str.startswith('CA'), 'Notes'] = df['Notes'] + '; CA'
</code></pre>
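<p>With the sample frame above, this yields:</p>
<pre><code>print(df)
      ID     Notes
0  US123      1111
1  CA456  2222; CA
2  US123      3333
3  US123      4444
4  CA456  5555; CA
</code></pre>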
|
python|pandas
| 2
|
8,061
| 66,327,985
|
Combining is in and where
|
<p>How can I create new column based on the odd even flag in Pandas</p>
<p>This is my data:</p>
<pre><code>id Flag
001 1
002 2
003 3
004 4
</code></pre>
<p>I would like to have this output if flag is even number then female, if flag is odd number then male:</p>
<pre><code>id Flag Gender
001 1 Male
002 2 Female
003 3 Male
004 4 Female
</code></pre>
|
<p>Use <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> with modulo <code>2</code> for check even and odd numbers:</p>
<pre><code>df['Gender'] = np.where(df['Flag'] % 2,'Male','Female')
print (df)
id Flag Gender
0 1 1 Male
1 2 2 Female
2 3 3 Male
3 4 4 Female
</code></pre>
|
python|pandas|numpy|combinations|where-clause
| 1
|
8,062
| 66,037,882
|
Is there a way to resample price data to OHLC without trading out the DateTime Index for a RangeIndex?
|
<p>So I'm in a bit of a pickle.</p>
<p>I want to resample time and price ticker data into Open High Low Close.</p>
<p>To do that first I use <code>df['date'] = pd.to_datetime(df['date'])</code> followed by <code>df = df.set_index('date')['price'].resample('1H').ohlc()</code></p>
<p>However in the process I lose my precious df.date - it is now df.index. And they are not the same. Other indicator functions that used to work with df.date no longer work with df.index</p>
<p>Even the price data given to me does not put the 'date' on the same level as the names of other columns, implying that it is not even the name of the column or something:</p>
<pre><code> open high low close
date
2021-01-28 01:00:00 30653.553694 30653.553694 30653.553694 30653.553694
2021-01-28 02:00:00 30994.198478 30994.198478 30994.198478 30994.198478
2021-01-28 03:00:00 31274.386041 31274.386041 31274.386041 31274.386041
2021-01-28 04:00:00 31441.260678 31441.260678 31441.260678 31441.260678
2021-01-28 05:00:00 31196.750744 31196.750744 31196.750744 31196.750744
... ... ... ... ...
2021-02-03 20:00:00 36708.821125 36708.821125 36708.821125 36708.821125
2021-02-03 21:00:00 37036.271097 37036.271097 37036.271097 37036.271097
2021-02-03 22:00:00 37266.377988 37266.377988 37266.377988 37266.377988
2021-02-03 23:00:00 37262.988292 37262.988292 37262.988292 37262.988292
2021-02-04 00:00:00 37725.264554 37808.578235 37725.264554 37808.578235
</code></pre>
<p>I include my code, please help me to decipher this one. Basically I just need to have keep the df.dates as df.dates but in DateTime format while the prices given in OHLC format.</p>
<pre><code>import pandas as pd
import json
import requests
API_URL = 'https://api.coingecko.com/api/v3'
r = requests.get(API_URL + '/coins/bitcoin/market_chart?vs_currency=usd&days=7&interval=hourly')
d = r.json()
df = pd.DataFrame(d['prices'])
df.columns = ['date', 'price']
df['date'] = pd.to_datetime(df['date'], unit='ms')
df = df.set_index('date')['price'].resample('1H').ohlc()
# THIS DOES NOT WORK --> df.index = df.date
</code></pre>
|
<p>In fact, you can use the <a href="https://stackoverflow.com/questions/66002493/attributeerror-while-trying-to-resample-a-pandas-dataframe-with-datetimeindex/66002616#66002616">method2</a> I mentioned before.</p>
<pre><code># method2
df['hr'] = pd.to_datetime(df['date'].dt.strftime('%Y-%m-%d %H:00'))
dfn = df.join(df.groupby('hr')['price'].ohlc(), on='hr')
print(dfn)
</code></pre>
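<p>Alternatively, if you prefer to stick with <code>resample().ohlc()</code>, a small sketch (assuming <code>df</code> still holds the raw date/price columns): turn the <code>DatetimeIndex</code> back into a regular column afterwards, so indicator functions that expect <code>df.date</code> keep working:</p>
<pre><code>ohlc = df.set_index('date')['price'].resample('1H').ohlc()
ohlc = ohlc.reset_index()   # 'date' becomes an ordinary datetime column again
</code></pre>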
|
python|pandas|dataframe|datetime|matplotlib
| 1
|
8,063
| 66,215,723
|
Issue using a variable with an r-string in Python
|
<p>Fairly new to Python, and I've got a batch job from which I now have to start saving some extracts out to a company SharePoint site. I've searched around and cannot seem to find a solution to the issue I keep running into. I need to pass a date into the filename, and was first having issues with using a normal string. If I just type out the entire thing as a raw string, I get the output I want:</p>
<pre><code>x = r"\\mnt4793\DavWWWRoot\sites\GlobalSupply\Plastics\DataExtracts\2021-02-15_aRoute.xlsx"
print (x)
</code></pre>
<p>The output is: <em>\mnt4793\DavWWWRoot\sites\GlobalSupply\Plastics\DataExtracts\2021-02-15_aRoute.xlsx</em></p>
<p>However, if I break the string into it's parts so I can get a parameter in there, I wind up having to toss an extra double-quote on the "x" parameter to keep the code from running into a <strong>"SyntaxError: EOL while scanning string literal"</strong> error:</p>
<pre><code>x = r"\\mnt4793\DavWWWRoot\sites\GlobalSupply\Plastics\DataExtracts\""
timestamp = date_time_obj.date().strftime('%Y-%m-%d')
filename = "_aRoute.xlsx"
print (x + timestamp + filename)
</code></pre>
<p>But the output I get passes that unwanted double quote into my string: <em>\mnt4793\DavWWWRoot\sites\GlobalSupply\Plastics\DataExtracts"2021-02-15_aRoute.xlsx</em></p>
<p>The syntax I need is clearly escaping me, I'm just trying to get the path built so I can save the file itself. If it happens to matter, I'm using pandas to write the file:</p>
<pre><code>data = pandas.read_sql(sql, cnxn)
data.to_excel(string_goes_here)
</code></pre>
<p>Any help would be greatly appreciated!</p>
|
<p>Per the comment from @Matthias, as it turns out, an r-string can't end with a single backslash. The quick workaround, therefore, was:</p>
<pre><code>x = r"\\mnt4793\DavWWWRoot\sites\GlobalSupply\Plastics\DataExtracts" + "\\"
</code></pre>
<p>The comment from @sammywemmy also linked to what looks to be a much more thorough solution.</p>
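<p>For what it's worth, a small sketch of that idea using <code>pathlib</code> sidesteps the trailing-backslash issue entirely (<code>timestamp</code> is built exactly as in the question):</p>
<pre><code>from pathlib import Path

base = Path(r"\\mnt4793\DavWWWRoot\sites\GlobalSupply\Plastics\DataExtracts")
filename = f"{timestamp}_aRoute.xlsx"
data.to_excel(base / filename)
</code></pre>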
<p>Thank you both!</p>
|
python|pandas
| 0
|
8,064
| 66,235,161
|
Apply function to every two columns in dataframe and replace original columns with output
|
<p>I have a dataframe that contains X & Y data in columns like this:</p>
<pre><code>df_cols = ['x1', 'y1', 'x2', 'y2', 'x3', 'y3']
np.random.seed(365)
df = pd.DataFrame(np.random.randint(0,10,size=(10, 6)), columns=df_cols)
x1 y1 x2 y2 x3 y3
0 2 4 1 5 2 2
1 9 8 4 0 3 3
2 7 7 7 0 8 4
3 3 2 6 2 6 8
4 9 6 1 6 5 7
5 7 6 5 9 3 8
6 7 9 9 0 1 4
7 0 9 6 5 6 9
8 5 3 2 7 9 2
9 6 6 3 7 7 1
</code></pre>
<p>I need to call a function that takes one X & Y pair at a time and returns and updated X & Y pair (same length), and then either save that data to a new dataframe with the original column names, or replace the old X & Y data with the new data and keep the original column names.</p>
<p>For example, take this function below:</p>
<pre><code>def samplefunc(x, y):
x = x*y
y = x/10
return x, y
# Apply function to each x & y pair
x1, y1 = samplefunc(df.x1, df.y1)
x2, y2 = samplefunc(df.x2, df.y2)
x3, y3 = samplefunc(df.x3, df.y3)
# Save new/updated x & y pairs into new dataframe, preserving the original column names
df_updated = pd.DataFrame({'x1': x1, 'y1': y1, 'x2': x2, 'y2': y2, 'x3': x3, 'y3': y3})
# Desired result:
In [36]: df_updated
Out[36]:
x1 y1 x2 y2 x3 y3
0 8 0.8 5 0.5 4 0.4
1 72 7.2 0 0.0 9 0.9
2 49 4.9 0 0.0 32 3.2
3 6 0.6 12 1.2 48 4.8
4 54 5.4 6 0.6 35 3.5
5 42 4.2 45 4.5 24 2.4
6 63 6.3 0 0.0 4 0.4
7 0 0.0 30 3.0 54 5.4
8 15 1.5 14 1.4 18 1.8
9 36 3.6 21 2.1 7 0.7
</code></pre>
<p>But doing it this way is obviously really tedious and impossible for a huge dataset.
The similar/related questions I've found perform a simple transformation on the data rather than calling a function, or they add new columns to the dataframe instead of replacing the originals.</p>
<p>I tried to apply @PaulH's answer to my dataset, but neither of them are working as it is unclear how to actually call the function inside of either method.</p>
<pre><code># Method 1
array = np.array(my_actual_df)
df_cols = my_actual_df.columns
dist = 0.04 # a parameter I need for my function
df = (
pandas.DataFrame(array, columns=df_cols)
.rename_axis(index='idx', columns='label')
.stack()
.to_frame('value')
.reset_index()
.assign(value=lambda df: numpy.select(
[df['label'].str.startswith('x'), df['label'].str.startswith('y')],
# Call the function (not working):
[df['value'], df['value']] = samplefunc(df['value'], df['value']),
))
.pivot(index='idx', columns='label', values='value')
.loc[:, df_cols]
)
# Method 2
df = (
pandas.DataFrame(array, columns=df_cols)
.pipe(lambda df: df.set_axis(df.columns.map(lambda c: (c[0], c[1])), axis='columns'))
.rename_axis(columns=['which', 'group'])
.stack(level='group')
# Call the function (not working)
.assign(df['x'], df['y'] = samplefunc(df['x'], df['y']))
.unstack(level='group')
.pipe(lambda df: df.set_axis([''.join(c) for c in df.columns], axis='columns'))
)
</code></pre>
<p>The actual function I need to call is from Arty's answer to this question: <a href="https://stackoverflow.com/questions/64441803/resample-trajectory-to-have-equal-euclidean-distance-in-each-sample">Resample trajectory to have equal euclidean distance in each sample</a></p>
|
<p>Use slicing and apply operations on those slices.</p>
<pre><code>def samplefunc(x, y):
x = x**2
y = y/10
return x, y
arr = df.to_numpy().astype(object)
e_col = arr[:, ::2]
o_col = arr[:, 1::2]
e_col, o_col = samplefunc(e_col, o_col)
arr[:, ::2] = e_col
arr[:, 1::2] = o_col
out = pd.DataFrame(arr, columns=df.columns)
x1 y1 x2 y2 x3 y3
0 4 0.4 1 0.5 4 0.2
1 81 0.8 16 0.0 9 0.3
2 49 0.7 49 0.0 64 0.4
3 9 0.2 36 0.2 36 0.8
4 81 0.6 1 0.6 25 0.7
5 49 0.6 25 0.9 9 0.8
6 49 0.9 81 0.0 1 0.4
7 0 0.9 36 0.5 36 0.9
8 25 0.3 4 0.7 81 0.2
9 36 0.6 9 0.7 49 0.1
</code></pre>
|
python|pandas|dataframe|numpy
| 1
|
8,065
| 52,753,613
|
Grouping / Categorizing ages column
|
<p>I have a dataframe say <code>df</code>. <code>df</code> has a column <code>'Ages'</code></p>
<p><code>>>> df['Age']</code></p>
<p><a href="https://i.stack.imgur.com/pcs2l.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pcs2l.png" alt="Age Data"></a></p>
<p>I want to group this ages and create a new column something like this</p>
<pre><code>If age >= 0 & age < 2 then AgeGroup = Infant
If age >= 2 & age < 4 then AgeGroup = Toddler
If age >= 4 & age < 13 then AgeGroup = Kid
If age >= 13 & age < 20 then AgeGroup = Teen
and so on .....
</code></pre>
<p>How can I achieve this using Pandas library.</p>
<p>I tried doing this something like this</p>
<pre><code>X_train_data['AgeGroup'][ X_train_data.Age < 13 ] = 'Kid'
X_train_data['AgeGroup'][ X_train_data.Age < 3 ] = 'Toddler'
X_train_data['AgeGroup'][ X_train_data.Age < 1 ] = 'Infant'
</code></pre>
<p>but doing this i get this warning </p>
<blockquote>
<p>/Users/Anand/miniconda3/envs/learn/lib/python3.7/site-packages/ipykernel_launcher.py:3: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="noreferrer">http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy</a>
This is separate from the ipykernel package so we can avoid doing imports until
/Users/Anand/miniconda3/envs/learn/lib/python3.7/site-packages/ipykernel_launcher.py:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame</p>
</blockquote>
<p>How to avoid this warning and do it in a better way.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html" rel="noreferrer"><code>pandas.cut</code></a> with parameter <code>right=False</code> for not includes the rightmost edge of bins:</p>
<pre><code>X_train_data = pd.DataFrame({'Age':[0,2,4,13,35,-1,54]})
bins= [0,2,4,13,20,110]
labels = ['Infant','Toddler','Kid','Teen','Adult']
X_train_data['AgeGroup'] = pd.cut(X_train_data['Age'], bins=bins, labels=labels, right=False)
print (X_train_data)
Age AgeGroup
0 0 Infant
1 2 Toddler
2 4 Kid
3 13 Teen
4 35 Adult
5 -1 NaN
6 54 Adult
</code></pre>
<p>Last for replace missing value use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.cat.add_categories.html" rel="noreferrer"><code>add_categories</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="noreferrer"><code>fillna</code></a>:</p>
<pre><code>X_train_data['AgeGroup'] = (X_train_data['AgeGroup'].cat.add_categories('unknown')
                                                     .fillna('unknown'))
print (X_train_data)
Age AgeGroup
0 0 Infant
1 2 Toddler
2 4 Kid
3 13 Teen
4 35 Adult
5 -1 unknown
6 54 Adult
</code></pre>
<hr>
<pre><code>bins= [-1,0,2,4,13,20, 110]
labels = ['unknown','Infant','Toddler','Kid','Teen', 'Adult']
X_train_data['AgeGroup'] = pd.cut(X_train_data['Age'], bins=bins, labels=labels, right=False)
print (X_train_data)
Age AgeGroup
0 0 Infant
1 2 Toddler
2 4 Kid
3 13 Teen
4 35 Adult
5 -1 unknown
6 54 Adult
</code></pre>
|
python|pandas|dataframe
| 25
|
8,066
| 46,334,814
|
Replacing values in a list of Numpy arrays
|
<p>Let's say I have a list of Numpy arrays with varying shapes and need to replace all values of 255 with 1. </p>
<pre><code>A = np.array([[0,255], [0,0]])
B = np.array([[0, 255,255], [255,0,0]])
list_of_array = [A, B] # list could have many more arrays
</code></pre>
<p>Methods like <code>np.place()</code> and <code>X[X == 255] = 1</code> do not work on lists.</p>
|
<p>If you have to have a list of arrays, and want to modify the values in those arrays rather than creating new ones, then you can do it by iterating over the list.</p>
<pre><code>import numpy as np
A = np.array([[0,255], [0,0]])
B = np.array([[0, 255,255], [255,0,0]])
list_of_array = [A, B] # list could have many more arrays
for array in list_of_array:
array[array == 255] = 1
</code></pre>
|
python|numpy
| 2
|
8,067
| 46,211,631
|
Python, Cv2, numpy to indicate areas in a picture/image
|
<p>I want to specify certain areas in an image.</p>
<p>To specify 1 area I can do this:</p>
<pre><code>import cv2
import numpy as np
the_picture = cv2.imread("c:\\picture.jpg")
target_area = the_picture[300:360, 130:280]
</code></pre>
<p>The type of target_area is type 'numpy.ndarray'.</p>
<p>But a list of coordinates is a problem. I am struggling in turning a list of coordinates into the values required.</p>
<p>what I want to do is:</p>
<pre><code>the_picture = cv2.imread("c:\\picture.jpg")
list_of_areas = [
[300:360 , 130:280]
[300:360 , 440:540]
[400:460 , 0:130]
[400:460 , 250:400]
[400:460 , 560:740]
For area in list_of_areas:
the_picture(area) ### failed
</code></pre>
<p>Here are the coordinates:</p>
<pre><code> x y x1 y1
Area1 130 300 280 360
Area2 440 300 540 360
Area3 0 400 130 460
Area4 250 400 400 460
Area5 560 400 740 460
</code></pre>
<p>I tried to give a list like below but it doesn’t work. I also tried making them strings in a list and changing the square brackets into round brackets; neither worked.</p>
<pre><code>SyntaxError: invalid syntax
</code></pre>
<p>What’s the proper way to give the coordinates?</p>
|
<p>If I understood correctly what you want to do, and please correct me if I'm wrong, you could solve your problem by saving and retrieving the area coordinates as individual values, and not as pairs</p>
<pre><code>the_picture = cv2.imread("c:\\picture.jpg")
list_of_areas = [
[300, 360, 130, 280],
[300, 360, 440, 540],
[400, 460, 0, 130],
    [400, 460, 250, 400],
    [400, 460, 560, 740]]
for y, y1, x, x1 in list_of_areas:
the_picture[y:y1, x:x1]
</code></pre>
|
python|image|opencv|numpy
| 2
|
8,068
| 46,261,671
|
Use numpy setdiff1d keeping the order
|
<pre><code>a = np.array([1, 2, 3])
b = np.array([4, 2, 3, 1, 0])
c = np.setdiff1d(b, a)
print("c", c)
</code></pre>
<p>The result is <code>c [0, 4]</code> but the answer I want is <code>c [4 0]</code>.</p>
<p>How can I do that?</p>
|
<p>Get the mask of non-matches with <code>np.in1d</code> and simply boolean-index into <code>b</code> to retain the order of elements in it -</p>
<pre><code>b[~np.in1d(b,a)]
</code></pre>
<p>Sample step-by-step run -</p>
<pre><code>In [14]: a
Out[14]: array([1, 2, 3])
In [15]: b
Out[15]: array([4, 2, 3, 1, 0])
In [16]: ~np.in1d(b,a)
Out[16]: array([ True, False, False, False, True], dtype=bool)
In [17]: b[~np.in1d(b,a)]
Out[17]: array([4, 0])
</code></pre>
|
python|numpy
| 10
|
8,069
| 58,585,241
|
modify the x-axis labels in histogram plot using matplotlib
|
<p>I use the following code to plot the histogram. If I want to make the labels along the x-axis more fine-grained, how should I change the code? For instance, the current plot segments the x-axis with 0.2 as the interval; can I have 0.05 as the interval?</p>
<pre><code>import matplotlib.pyplot as plt
plt.hist(image_pixel_array,bins=25,color='g')
plt.grid(True)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/4RGRS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4RGRS.jpg" alt="enter image description here"></a></p>
|
<p>Here,</p>
<pre><code>import numpy as np

plt.xlim([-3, 3])
plt.ylim([-3, 3])
plt.yticks(np.arange(-3, 3, 0.5))
plt.xticks(np.arange(-3, 3, 0.5))
plt.show()
</code></pre>
<p>you'll get the ticks 0.5 apart over the range -3 to 3. For your histogram, just adapt the range and step, e.g. <code>plt.xticks(np.arange(0, 1.25, 0.05))</code> for a 0.05 interval.
Hope this helps.</p>
|
python|python-3.x|numpy|matplotlib|scipy
| 5
|
8,070
| 58,175,304
|
Which Normalization method, min-max or z-scaling (Zero mean unit variance), works best for deep learning?
|
<p>I have data representing relative counts (0.0-1.0), as presented in the example below, calculated with the formula </p>
<pre><code>cell value (e.g. 23) / sum of the column (e.g. 1200) = 0.01916
</code></pre>
<p>Example data </p>
<pre><code> f1 f2 f3 f5 f6 f7 f8 class
0.266 0.133 0.200 0.133 0.066 0.133 0.066 1
0.250 0.130 0.080 0.160 0.002 0.300 0.111 0
0.000 0.830 0.180 0.016 0.002 0.059 0.080 1
0.300 0.430 0.078 0.100 0.082 0.150 0.170 0
</code></pre>
<p>Before applying the deep learning algorithm, I remove features that show a high correlation. </p>
<p>I am confused about normalization: which method is correct before model generation? </p>
<ol>
<li>Use data directly because the data already scaled (0.0-1.0).</li>
<li>Perform min-max scaling (<a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html</a>) </li>
<li>Perform (<a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html</a>)</li>
</ol>
<p>Because, when I use classical supervised algorithms min-max and z-scaling improve performance. But in the case of Deep learning using "TensorFlow-GPU" I am not able to see any significant difference between the two. </p>
<p>Thank you. </p>
|
<p>z-scaling is a good idea when your data is approximately normally distributed, this can often be the case.</p>
<p>min-max scaling is the right thing to do when you expect a largely uniform distribution.</p>
<p>In short, it depends on your data and your neural network.</p>
<p>But both are sensitive to outliers, you could try median-mad scaling.</p>
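<p>For concreteness, a quick scikit-learn sketch (the random array below is just a stand-in for your feature matrix); <code>RobustScaler</code> centers on the median and scales by the IQR, a close relative of median-MAD scaling:</p>
<pre><code>import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler

X = np.random.rand(100, 7)                 # stand-in for the relative-count features

X_z   = StandardScaler().fit_transform(X)  # zero mean, unit variance
X_mm  = MinMaxScaler().fit_transform(X)    # rescale to [0, 1]
X_rob = RobustScaler().fit_transform(X)    # median/IQR, robust to outliers
</code></pre>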
<p>See also: <a href="https://stats.stackexchange.com/questions/7757/data-normalization-and-standardization-in-neural-networks">https://stats.stackexchange.com/questions/7757/data-normalization-and-standardization-in-neural-networks</a></p>
|
machine-learning|deep-learning|normalization|tensorflow-datasets
| 0
|
8,071
| 58,466,368
|
Logistic Regression - How to use model on another dataset and get probability values
|
<p>I'm making my first ML model and I need some help with using model on second dataset.</p>
<p>So I have two sets: "train_full.csv" and "test_full.csv". Both sets have the exact same structure.</p>
<p>The only difference is that in "train_full.csv" the column "target" is filled with 0s and 1s, while in "test_full.csv" this column is empty, and I want to predict these values.</p>
<p>Below you can find my model based on "train_full.csv". I have skipped the whole part of data cleaning for clarity of code:</p>
<pre class="lang-py prettyprint-override"><code>df2 = pd.read_csv("train_full.csv", sep = ';')
test_set = pd.read_csv("test_full.csv", sep = ';')
#Dataset cleaning
#my y is column named "target", and my x's are the remaining column
X_train, X_test, y_train, y_test = train_test_split(df2.drop('target',axis=1),
df2['target'], test_size=0.35,
random_state=101)
#Creating Logistic Regression Model
logmodel = LogisticRegression()
result = logmodel.fit(X_train, y_train)
#Making predictions
Predictions = logmodel.predict(X_test)
print(metrics.confusion_matrix(y_test, Predictions))
print(metrics.classification_report(y_test,Predictions)) #Accuracy: 78%
auc = metrics.roc_auc_score(y_test, y_pred_proba) #AUC: ~0.695
</code></pre>
<p>Now I want to use that model on the second data set, which I have imported in the second line of code; however, I don't need to split the dataset into training and testing subsets anymore. I want to use the model from above on the entire "test_full.csv" set. How can I do that?</p>
<p>Also, is there a way to add a column with calculated probability? So my output would be a pandas dataframe that would look like this:</p>
<pre><code>Id probability target
0 0.75 1
1 0.78 1
2 0.34 0
3 0.84 1
4 0.13 0
5 0.34 0
</code></pre>
<p>Kind regards</p>
|
<p>It is pretty simple.</p>
<p>You just need to drop the target column from the <code>test_set</code> and use
<code>logmodel.predict()</code> for classification and <code>logmodel.predict_proba()</code> for probability. Here is an example for the same =></p>
<pre><code>X_new = test_set.drop(['target'], axis=1)
</code></pre>
<p>==> below 2 lines will add columns in the <code>test_set</code> dataframe which are the prob and classification related to predictions</p>
<pre><code># predict_proba returns one column per class; keep the probability of class 1
test_set['prob'] = logmodel.predict_proba(X_new)[:, 1]
test_set['classification'] = logmodel.predict(X_new)
</code></pre>
|
python|pandas|machine-learning|scikit-learn|logistic-regression
| 0
|
8,072
| 69,236,479
|
Using python generators with lots of data
|
<p>I have a dataset consisting of 250k items that need to meet certain criteria before being added to a list/generator. To speed up the processing, I want to use generators, but I am uncertain about whether to filter the data with a function that yields the filtered sample, or if I should just return the filtered sample to a generator object, etc. I would like the final object to only include samples that met the filter criteria, but by default python will return/yield a NoneType object. I have included example filter functions, data (<em>the real problem uses strings, but for simplicity I use floats from random normal distribution</em>), and what I intend to do with the data below.</p>
<p>How should I efficiently use generators in this instance? Is it even logical/efficient to use generators for this purpose? I know that I can check to see if an element from the return function is None to exclude it from the container (list/generator), but how can I do this with the function that yields values?</p>
<pre class="lang-py prettyprint-override"><code># For random data
import numpy as np
# Functions
def filter_and_yield(item_in_data):
if item_in_data > 0.0:
yield item_in_data
def filter_and_return(item_in_data):
if item_in_data > 0.0:
return item_in_data
# Arbitrary data
num_samples = 250 * 10**3
data = np.random.normal(size=(num_samples,))
# Should I use this: generator with generator elements?
filtered_data_as_gen_with_gen_elements = (filter_and_yield(item) for item in data)
# Should I use this: list with generator elements?
filtered_data_as_lst_with_gen_elements = [filter_and_yield(item) for item in data]
# Should I use this: generator with non-generator elements?
filtered_data_as_gen_with_non_gen_elements = (
filter_and_return(item) for item in data if filter_and_return(item) is not None)
# Should I use this: list with non-generator elements?
filtered_data_as_lst_with_non_gen_elements = [
filter_and_return(item) for item in data if filter_and_return(item) is not None]
# Saving the data as csv -- note, `filtered_data` is NOT defined
# but is a place holder for whatever correct way of filtering the data is
df = pd.DataFrame({'filtered_data': filtered_data})
df.to_csv('./filtered_data.csv')
</code></pre>
|
<p>The short answer is that none of these are best. Numpy and pandas include a lot of C and Fortran code that works on hardware-level data types stored in contiguous arrays. Python objects, even low-level ones like <code>int</code> and <code>float</code>, are relatively bulky. They include the standard python object header and are allocated on the heap. And even simple operations like <code>></code> require a call to one of their methods.</p>
<p>It's better to use numpy/pandas functions and operators as much as possible. These packages have overloaded the standard python operators to work on entire sets of data in one call.</p>
<pre><code>df = pd.DataFrame({'filtered_data': data[data > 0.0]})
</code></pre>
<p>Here, <code>data > 0.0</code> created a new numpy array of true/false for the comparison. <code>data[...]</code> created a new array holding only the values of <code>data</code> that were also true.</p>
<p>Other notes</p>
<p><code>filter_and_yield</code> is a generator that will iterate 0 or 1 values. Python turned it into a generator because it has a <code>yield</code>. When it returns <code>None</code>, python turns it into a <code>StopIteration</code> exception. The consumer of this generator will not see the <code>None</code>.</p>
<p><code>(filter_and_yield(item) for item in data)</code> is a generator that returns generators. If you use it, you'll end up with dataframe column of generators.</p>
<p><code>[filter_and_yield(item) for item in data]</code> is a list of generators (because filter_and_yield is a generator). When pandas creates a column, it needs to know the column size, so it expands generators into lists the way you've done here. You can build the list yourself for pandas; it doesn't really matter, except that pandas deletes that temporary list when done, which reduces memory usage.</p>
<p><code>(filter_and_return(item) for item in data if filter_and_return(item) is not None)</code> This one works, but it's pretty slow. <code>data</code> holds a hardware-level array of floats. <code>for item in data</code> has to convert each of those into python-level floats, and then <code>filter_and_return(item)</code> is a relatively expensive function call. This could be rewritten as <code>(value for value in (filter_and_return(item) for item in data) if value is not None)</code> to halve the number of function calls.</p>
<p><code>[filter_and_return(item) for item in data if filter_and_return(item) is not None]</code> As mentioned above. its okay to do this, but delete when done to conserve memory.</p>
|
python|pandas|performance|generator
| 1
|
8,073
| 44,776,682
|
How to efficiently sum and mean 2D NumPy arrays by id?
|
<p>I have a 2d array <code>a</code> and a 1d array <code>b</code>. I want to compute the sum of rows in array <code>a</code> group by each id in <code>b</code>. For example:</p>
<pre><code>import numpy as np
a = np.array([[1,2,3],[2,3,4],[4,5,6]])
b = np.array([0,1,0])
count = len(b)
ls = list(set(b))
res = np.zeros((len(ls),a.shape[1]))
for i in ls:
res[i] = np.array([a[x] for x in range(0,count) if b[x] == i]).sum(axis=0)
print res
</code></pre>
<p>I got the printed result as:</p>
<pre><code>[[ 5. 7. 9.]
[ 2. 3. 4.]]
</code></pre>
<p>What I want to do is, since the 1st and 3rd elements of <code>b</code> are <code>0</code>, I perform <code>a[0]+a[2]</code>, which is <code>[5, 7, 9]</code> as one row of the results. Similarly, the 2nd element of <code>b</code> is <code>1</code>, so that I perform <code>a[1]</code>, which is <code>[2, 3, 4]</code> as another row of the results.</p>
<p>But it seems my implementation is quite slow for large array. Is there any better implementation?</p>
<p>I know there is a <code>bincount</code> function in <code>numpy</code>, but it seems to only support 1d arrays.
Thank you all for helping me!</p>
|
<p>The <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP" rel="nofollow noreferrer">numpy_indexed</a> package (disclaimer: I am its author) was made to address problems exactly of this kind in an efficiently vectorized and general manner:</p>
<pre><code>import numpy_indexed as npi
unique_b, mean_a = npi.group_by(b).mean(a)
</code></pre>
<p>Note that this solution is general in the sense that it provides a rich set of standard reduction function (sum, min, mean, median, argmin, and so on), axis keywords if you need to work with different axes, and also the ability to group by more complicated things than just positive integer arrays, such as the elements of multidimensional arrays of arbitrary dtype.</p>
<pre><code>import numpy_indexed as npi
# this caches the complicated O(NlogN) part of the operations
groups = npi.group_by(b)
# all these subsequent operations have the same low vectorized O(N) cost
unique_b, mean_a = groups.mean(a)
unique_b, sum_a = groups.sum(a)
unique_b, min_a = groups.min(a)
</code></pre>
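<p>For reference, a pure-NumPy sketch of the grouped sum using <code>np.add.at</code> (this assumes the ids in <code>b</code> are small non-negative integers):</p>
<pre><code>import numpy as np

a = np.array([[1, 2, 3], [2, 3, 4], [4, 5, 6]])
b = np.array([0, 1, 0])

# one output row per group id; scatter-add each row of `a` into its group
res = np.zeros((b.max() + 1, a.shape[1]), dtype=a.dtype)
np.add.at(res, b, a)
print(res)
# [[5 7 9]
#  [2 3 4]]
</code></pre>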
|
python|arrays|numpy
| 3
|
8,074
| 61,167,421
|
Split pandas list to different column and calculate the counts
|
<p>I've a pandas dataframe with a column name <code>ids</code> that contains list elements. So I want to split the <code>list</code> column to different columns.</p>
<pre><code>id partner_id ids
1 12 ["1","4","187275","187358","946475"]
2 12 ["1","191","28925","31441"]
3 16 ["1","2","293915","1573130","293918"]
4 11 ["1","13","294064","1238496"]
5 16 ["1","153339","155025","155029"]
</code></pre>
<p><strong>Desired output:</strong></p>
<pre><code>id partner_id id1 id2 id3 id4 id5
1 12 1 4 187275 187358 946475
2 12 1 191 28925 31441 NaN
3 16 1 2 293915 1573130 293918
4 11 1 13 294064 1238496 NaN
5 16 1 153339 155025 155029 NaN
</code></pre>
<p>What I've tried: </p>
<pre><code>df2 = pd.DataFrame(df.parent_path.values.tolist(), index=df.index)
</code></pre>
<p><strong>Full Code:</strong></p>
<pre><code>import pandas as pd
import numpy as np
pd.set_option('display.max_columns', 85)
pd.set_option('display.max_rows', 85)
df = pd.read_csv('../dataset/property_location_count.csv',low_memory=False)
df2 = pd.DataFrame(df.ids.values.tolist(), index=df.index)
</code></pre>
<p>But it doesn't split the columns as it does here :<a href="https://stackoverflow.com/a/35491399/1138192">https://stackoverflow.com/a/35491399/1138192</a></p>
|
<p>I think you are close: use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a> to append back to the original, <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>DataFrame.pop</code></a> to extract the column, then convert the strings to numeric if necessary, and finally rename the column names.</p>
<p>It is also necessary to convert the string representations of lists to actual lists with <code>json.loads</code>:</p>
<pre><code>import json
df = (df.join(pd.DataFrame([json.loads(x) for x in df.pop('ids')], index=df.index)
.astype(float)
.astype('Int64')
.rename(columns=lambda x: f'id{x+1}')))
print (df)
id partner_id id1 id2 id3 id4 id5
0 1 12 1 4 187275 187358 946475
1 2 12 1 191 28925 31441 NaN
2 3 16 1 2 293915 1573130 293918
3 4 11 1 13 294064 1238496 NaN
4 5 16 1 153339 155025 155029 NaN
</code></pre>
|
python|pandas
| 3
|
8,075
| 60,863,574
|
How to check if TensorFlow is using the GPU?
|
<p>I am using a Jupyter notebook for training a neural network. I chose tensorflow-gpu in the Anaconda applications, however I don't think it is using the GPU. How can I check whether it is using the GPU for processing? </p>
|
<p>You could use</p>
<pre><code>tf.config.list_physical_devices('GPU')
</code></pre>
<p>This works for TensorFlow 2.1+.
Also check the documentation found <a href="https://www.tensorflow.org/api_docs/python/tf/config/list_physical_devices" rel="nofollow noreferrer">here</a></p>
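<p>For example (the exact output depends on your setup):</p>
<pre><code>import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))
# e.g. [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
# an empty list [] means TensorFlow cannot see a GPU
</code></pre>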
|
tensorflow|deep-learning|gpu
| 2
|
8,076
| 71,778,511
|
How to count word similarity between two pandas dataframe
|
<p>Here's my first dataframe <code>df1</code></p>
<pre><code>Id Text
1 Asoy Geboy Ngebut
2 Asoy kita Geboy
3 Bersatu kita Teguh
</code></pre>
<p>Here's my second dataframe <code>df2</code></p>
<pre><code>Id Text
1 Bersatu Kita
2 Asoy Geboy Jalanan
</code></pre>
<p>Similarity Matrix, columns is <code>Id</code> from <code>df1</code>, rows is <code>Id</code> from <code>df2</code></p>
<pre><code> 1 2 3
1 0 0.33 1
2 0.66 0.66 0
</code></pre>
<p>Note:</p>
<p><code>0</code> value in (1,1) and (3,2) because no text is similar</p>
<p><code>1</code> value in (3,1) is because <code>Bersatu</code> and <code>Kita</code> (Id <code>1</code> in <code>df2</code>) are both available in Id <code>3</code> of <code>df1</code></p>
<p><code>0.33</code> is counted because 1 of 3 words is similar</p>
<p><code>0.66</code> is counted because 2 of 3 words are similar</p>
|
<p>IIUC, you need to compute a <code>set</code> intersection:</p>
<pre><code>l1 = [set(x.split()) for x in df1['Text'].str.lower()]
l2 = [set(x.split()) for x in df2['Text'].str.lower()]
pd.DataFrame([[len(s1&s2)/len(s1) for s1 in l1] for s2 in l2],
columns=df1['Id'], index=df2['Id'])
</code></pre>
<p>output:</p>
<pre><code>Id 1 2 3
Id
1 0.000000 0.333333 0.666667
2 0.666667 0.666667 0.000000
</code></pre>
<p><em>NB. Note that the condition on the denominator is not fully clear, for <code>{teguh, kita, bersatu}</code> vs <code>{kita, bersatu}</code> I count <code>2/3 = 0.666</code></em></p>
|
python|pandas|dataframe|text|cosine-similarity
| 1
|
8,077
| 71,471,838
|
Get rid of iterrows in pandas loop
|
<p>I'm trying to avoid using <code>iterrows()</code> in pandas and achieve a more performant solution. This is the code I have, where I loop through a DataFrame and for each record I need to add three more:</p>
<pre><code>import pandas as pd
fruit_data = pd.DataFrame({
'fruit': ['apple','orange','pear','orange'],
'color': ['red','orange','green','green'],
'weight': [5,6,3,4]
})
array = []
for index, row in fruit_data.iterrows():
row2 = { 'fruit_2': row['fruit'], 'sequence': 0}
array.append(row2)
for i in range(2):
row2 = { 'fruit_2': row['fruit'], 'sequence': i + 1}
array.append(row2)
print(array)
</code></pre>
<p>My real DataFrame has millions of records. Is there a way to optimize this code and NOT use <code>iterrows()</code> or <code>for</code> loops?</p>
|
<p>You could use <code>repeat</code> to repeat each fruit 3 times; then <code>groupby</code> + <code>cumcount</code> to assign <code>sequence</code> numbers; finally <code>to_dict</code> for the final output:</p>
<pre><code>tmp = fruit_data['fruit'].repeat(3).reset_index(name='fruit_2')
tmp['sequence'] = tmp.groupby('index').cumcount()
out = tmp.drop(columns='index').to_dict('records')
</code></pre>
<p>Output:</p>
<pre><code>[{'fruit_2': 'apple', 'sequence': 0},
{'fruit_2': 'apple', 'sequence': 1},
{'fruit_2': 'apple', 'sequence': 2},
{'fruit_2': 'orange', 'sequence': 0},
{'fruit_2': 'orange', 'sequence': 1},
{'fruit_2': 'orange', 'sequence': 2},
{'fruit_2': 'pear', 'sequence': 0},
{'fruit_2': 'pear', 'sequence': 1},
{'fruit_2': 'pear', 'sequence': 2},
{'fruit_2': 'orange', 'sequence': 0},
{'fruit_2': 'orange', 'sequence': 1},
{'fruit_2': 'orange', 'sequence': 2}]
</code></pre>
|
python|pandas|dataframe
| 1
|
8,078
| 42,337,793
|
Supress printing of pandas output to console
|
<p>I am using <code>topic_.set_value(each_topic, word, prob)</code> to change the value of cells in a pandas dataframe. Basically, I initialized a numpy array with a certain shape and converted it to a pandas dataframe. I am then replacing these zeros by iterating over all the columns and rows using the code above. The problem is that the number of cells is around 50,000, and every time I set a value pandas prints the array to the console. I want to suppress this behavior. Any ideas?</p>
<p><strong>EDIT</strong></p>
<p>I have two dataframes: one is <code>topic_</code>, the target dataframe, and the other is <code>tw</code>, the source dataframe. The <code>topic_</code> is a topic-by-word matrix, where each cell stores the probability of a word occurring in a particular topic. I have initialized the <code>topic_</code> dataframe to zero using <code>numpy.zeros</code>. A sample of the <code>tw</code> dataframe- </p>
<pre><code>print(tw)
topic_id word_prob_pair
0 0 [(customer, 0.061703717964), (team, 0.01724444...
1 1 [(team, 0.0260560163563), (customer, 0.0247838...
2 2 [(customer, 0.0171786268847), (footfall, 0.012...
3 3 [(team, 0.0290787264225), (product, 0.01570401...
4 4 [(team, 0.0197917953222), (data, 0.01343226630...
5 5 [(customer, 0.0263740639141), (team, 0.0251677...
6 6 [(customer, 0.0289764173735), (team, 0.0249938...
7 7 [(client, 0.0265082412402), (want, 0.016477447...
8 8 [(customer, 0.0524006965405), (team, 0.0322975...
9 9 [(generic, 0.0373422774996), (product, 0.01834...
10 10 [(customer, 0.0305256248248), (team, 0.0241559...
11 11 [(customer, 0.0198707090364), (ad, 0.018516805...
12 12 [(team, 0.0159852971954), (customer, 0.0124540...
13 13 [(team, 0.033444510469), (store, 0.01961003290...
14 14 [(team, 0.0344793243818), (customer, 0.0210975...
15 15 [(team, 0.026416114692), (customer, 0.02041691...
16 16 [(campaign, 0.0486186973667), (team, 0.0236024...
17 17 [(customer, 0.0208270072145), (branch, 0.01757...
18 18 [(team, 0.0280889397541), (customer, 0.0127932...
19 19 [(team, 0.0297011415217), (customer, 0.0216007...
</code></pre>
<p>My <code>topic_</code> dataframe is of the size of <code>num_topics</code>(which is 20) by <code>number_of_unique_words</code> (in the <code>tw</code> dataframe) </p>
<p>Following is the code I am using to replace each value in the <code>topic_</code> dataframe</p>
<pre><code>for each_topic in range(num_topics):
a = tw['word_prob_pair'].iloc[each_topic]
for word, prob in a:
topic_.set_value(each_topic, word, prob)
</code></pre>
|
<p>Just redirect the output into a variable; the interactive console echoes the return value of <code>set_value</code> only when it is left unassigned:</p>
<pre><code>>>> df.set_value(index=1,col=0,value=1)
0 1
0 0.621660 -0.400869
1 1.000000 1.585177
2 0.962754 1.725027
3 0.773112 -1.251182
4 -1.688159 2.372140
5 -0.203582 0.884673
6 -0.618678 -0.850109
>>> a=df.set_value(index=1,col=0,value=1)
>>>
</code></pre>
<p>To initialize the df, it's better to use this:</p>
<pre><code>pd.DataFrame(np.zeros_like(pd_n), index=pd_n.index, columns=pd_n.columns)
</code></pre>
|
python|pandas
| 1
|
8,079
| 69,818,184
|
Loop over column to verify condition and change cell values when need it with pandas
|
<p><a href="https://i.stack.imgur.com/Xjbla.png" rel="nofollow noreferrer">Pandas Data Frame</a>, I would like to loop over the column named 'first' to verify if is an email or not, if is an email remove it and leave that cell blank.</p>
<p>I have try</p>
<pre><code>for x in df['first']:
if '.com' in x:
x = ''
</code></pre>
|
<p>Answering the <code>pandas</code> portion of this, since that's not exactly email verification.</p>
<p>Use <code>.map</code> with a <code>lambda</code> function for this. This is the preferred way to do what you are trying to do in pandas, instead of iterating.</p>
<pre class="lang-py prettyprint-override"><code>df['email'] = df['email'].map(lambda x: x if '.com' in x else '')
</code></pre>
|
python|pandas
| 1
|
8,080
| 72,174,008
|
How to change "printed list" into dataframe with python
|
<p>I would like to convert my "printed list" into a dataframe:</p>
<p>1st: I import a list of tickers/symbols from a folder:</p>
<pre><code>import yfinance as yf

with open("/Users/AB/OD/Earnings/tickers.txt") as fh:
tick1 = fh.read().split()
</code></pre>
<p>(here is an example of ticker to be saved in the txt file:
'ABCL',
'ABST',
'ACM',
'ADAP',
'ADCT',
'ADV'
)</p>
<p>2nd: I need to get the marketCap for all these tickers; however, I don't know how to convert it into one big dataframe with headers 'ticker' and 'marketCap'.</p>
<p>This is the code I could get, but not sure how to proceed:</p>
<pre><code>import pandas as pd
from pandas_datareader import data as pdr
import yfinance as yf
test = tick1
for ticker in test:
try:
marketCap = pdr.get_quote_yahoo(ticker)['marketCap']
print(marketCap)
except:
pass
</code></pre>
<p>If the marketCap is not available, we can simply skip it; we don't require it in the data frame.</p>
|
<ol>
<li>Before loop you could create list.</li>
<li>Inside loop you could append to this list</li>
<li>After loop you could convert this list to DataFrame</li>
</ol>
<p>And converting depends on what type of data you get in <code>marketCap</code> - <code>pandas.DataFrame</code>, <code>pandas.Series</code>, <code>list</code>, <code>dictionary</code>.</p>
<p>In your code it needs <code>pandas.concat(some_list)</code></p>
<pre><code>import pandas as pd
from pandas_datareader import data as pdr
import yfinance as yf
test = ['ABCL', 'ABST', 'ACM', 'ADAP', 'ADCT', 'ADV']
# --- before loop ---
data = []
# --- loop ---
for ticker in test:
try:
marketCap = pdr.get_quote_yahoo(ticker)['marketCap']
print(marketCap)
data.append(marketCap)
except Exception as ex:
print('skip:', ticker, ex)
# --- after loop ---
serie = pd.concat(data)
df = pd.DataFrame(serie)
print(df)
</code></pre>
<p>Result:</p>
<pre><code> marketCap
ABCL 1870047360
ABST 346576800
ACM 9633939456
ADAP 246278048
ADCT 641943744
ADV 1593755008
</code></pre>
|
python|pandas|list|dataframe
| 1
|
8,081
| 50,667,577
|
Tensorflow: tf.rnn.raw_rnn - fn_loop not callable?
|
<p>I try to use a custom function for the loop_fn in a raw_rnn, but there is this weird exception:</p>
<pre><code>"raise TypeError("loop_fn must be a callable")" # Exception thrown?
</code></pre>
<p>Call:</p>
<pre><code>callable_loop_fn = loop_fn(
time=time,
previous_output=None,
previous_state=None,
previous_loop_state=None,
_W=W, _b=b,
_decoder_lengths=decoder_lengths,
_pad_step_embedded=pad_step_embedded,
_eos_step_embedded=eos_step_embedded,
_encoder_final_state=encoder_final_state)
# using the functions for the attention decoder
decoder_outputs_ta, decoder_final_state, decoder_loop_state = tf.nn.raw_rnn(decoder_cell, callable_loop_fn)
</code></pre>
<p>Definition:</p>
<pre><code>def loop_fn(time, previous_output, previous_state, previous_loop_state, _W, _b, _decoder_lengths, _pad_step_embedded, _eos_step_embedded, _encoder_final_state):
if previous_state is None:
assert previous_output is None and previous_state is None
return loop_fn_initial(_decoder_lengths, _eos_step_embedded, _encoder_final_state)
else:
return loop_fn_transition(time, previous_output, previous_state, previous_loop_state, _W, _b, _decoder_lengths, _pad_step_embedded)
</code></pre>
<p>Does someone know what this could be? I thought the function I provide is callable or did I understand something wrong?</p>
|
<p><code>callable_loop_fn</code> is not a function, therefore it is not callable. </p>
<p>Specifically, <code>callable_loop_fn</code> is the value returned by <code>loop_fn()</code> which, in turn, returns either the output of <code>loop_fn_initial()</code> or the output of <code>loop_fn_transition()</code>. Evidently, neither of these two functions returns a function, hence the exception <code>loop_fn must be a callable</code> is thrown.</p>
<p>According to <a href="https://www.tensorflow.org/api_docs/python/tf/nn/raw_rnn" rel="nofollow noreferrer">TF API</a>, you should write:</p>
<pre><code>def loop_fn(time, cell_output, cell_state, loop_state):
...
return (
elements_finished,
next_input,
next_cell_state,
emit_output,
next_loop_state
)
</code></pre>
<p>And then pass it to <code>tf.nn.raw_rnn</code>:</p>
<pre><code>raw_rnn(decoder_cell, loop_fn)
</code></pre>
<p>Note that you should respect the number and the order of the arguments that <code>loop_fn</code> expects to receive, otherwise you'll get an <code>Unexpected argument</code> error for the function <code>loop_fn</code>. Therefore, your implementation must be rearranged to take only 4 arguments; see the sketch below for one way to keep your extra tensors.</p>
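<p>If you do want to keep the extra tensors as arguments, one sketch (assuming <code>W</code>, <code>b</code>, etc. are in scope as in your snippet) is to bind them with <code>functools.partial</code>, so <code>raw_rnn</code> still sees a callable taking the expected 4 positional arguments:</p>
<pre><code>import functools

# bind the extra tensors as keyword arguments; raw_rnn will supply
# (time, cell_output, cell_state, loop_state) positionally
bound_loop_fn = functools.partial(
    loop_fn,
    _W=W, _b=b,
    _decoder_lengths=decoder_lengths,
    _pad_step_embedded=pad_step_embedded,
    _eos_step_embedded=eos_step_embedded,
    _encoder_final_state=encoder_final_state)

decoder_outputs_ta, decoder_final_state, _ = tf.nn.raw_rnn(decoder_cell, bound_loop_fn)
</code></pre>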
|
tensorflow|python-3.6|rnn|callable
| 2
|
8,082
| 50,388,396
|
error name 'dtype' is not defined
|
<p>I try to compile this code but I get this errror : </p>
<pre><code>NameError: name 'dtype' is not defined
</code></pre>
<p>Here is the python code : </p>
<pre><code># -*- coding: utf-8 -*-
from __future__ import division
import pandas as pd
import numpy as np
import re
import missingno as msno
from functools import partial
import seaborn as sns
sns.set(color_codes=True)
if dtype(data.OPP_CREATION_DATE)=="datetime64[ns]":
print("OPP_CREATION_DATE is of datetime type")
else:
print("warning: the type of OPP_CREATION_DATE is not datetime, please fix this")
</code></pre>
<p>Any idea please to help me to resolve this problem?
Thank you</p>
|
<p>You should use <code>type</code> instead of <code>dtype</code>.</p>
<p><code>type</code> is a built-in function of python -
<a href="https://docs.python.org/3/library/functions.html#type" rel="nofollow noreferrer">https://docs.python.org/3/library/functions.html#type</a></p>
<p>On the other hand, If <code>data</code> is a pandas dataframe then you can check the type of a column as follows:<br>
<code>df['colname'].dtype</code> or <code>df.colname.dtype</code></p>
|
python|numpy
| 3
|
8,083
| 50,664,839
|
Format Excel Column header for better visibility and Color
|
<p>I have gone through many posts but did not find the exact way to do the below.
Sorry for attaching a screenshot (just for better visibility) as well; I will write it out too.
Basically it looks like -</p>
<pre><code>Name_of_the_Man Address_of_Man City
Jordan NC LMN
</code></pre>
<p>Input csv looks like</p>
<p><a href="https://i.stack.imgur.com/9vh2m.png" rel="noreferrer"><img src="https://i.stack.imgur.com/9vh2m.png" alt="Available csv"></a></p>
<p>Output Needed
<a href="https://i.stack.imgur.com/6y7JD.png" rel="noreferrer"><img src="https://i.stack.imgur.com/6y7JD.png" alt="Need this"></a></p>
<p>I have this code with me that picks the csv and attach as sheet in excel.</p>
<pre><code>writer = pd.ExcelWriter('final.xlsx', engine='xlsxwriter')
for f in glob.glob(os.path.join(Path, "*.csv")):
df = pd.read_csv(f)
df.to_excel(writer, sheet_name=os.path.basename(f))
writer.save()
</code></pre>
<p>I want my sheet to have good spacing between columns and a colored column header. I have been through this link <a href="https://stackoverflow.com/questions/41339701/python-change-header-color-of-dataframe-and-save-it-to-excel-file">Python - change header color of dataframe and save it to excel file</a> but it's not serving the purpose: it colors the whole sheet rather than just the column headers.</p>
<p>Update:
Got the answer below. Also wondering if the following is possible, just a thought:
<a href="https://i.stack.imgur.com/YaMyi.png" rel="noreferrer"><img src="https://i.stack.imgur.com/YaMyi.png" alt="enter image description here"></a></p>
|
<p>You can use <a href="http://xlsxwriter.readthedocs.io/example_pandas_header_format.html" rel="noreferrer">Pandas Excel output with user defined header format</a> with <a href="https://stackoverflow.com/a/36554382/2901002">solution</a> for change width by content:</p>
<pre><code>writer = pd.ExcelWriter("file.xlsx", engine='xlsxwriter')
# Convert the dataframe to an XlsxWriter Excel object. Note that we turn off
# the default header and skip one row to allow us to insert a user defined
# header. Also remove index values by index=False
df.to_excel(writer, sheet_name='Sheet1', startrow=1, header=False, index=False)
workbook = writer.book
worksheet = writer.sheets['Sheet1']
# Add a header format.
header_format = workbook.add_format({
'bold': True,
'fg_color': '#ffcccc',
'border': 1})
for col_num, value in enumerate(df.columns.values):
worksheet.write(0, col_num, value, header_format)
column_len = df[value].astype(str).str.len().max()
# Setting the length if the column header is larger
# than the max column value length
column_len = max(column_len, len(value)) + 3
print(column_len)
# set the column length
worksheet.set_column(col_num, col_num, column_len)
# Close the Pandas Excel writer and output the Excel file.
writer.save()
</code></pre>
<p><a href="https://i.stack.imgur.com/gdoB0.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gdoB0.png" alt="pic"></a></p>
<p>Changed your solution:</p>
<pre><code>writer = pd.ExcelWriter('final.xlsx', engine='xlsxwriter')
for f in glob.glob(os.path.join(Path, "*.csv")):
df = pd.read_csv(f)
df.to_excel(writer, sheet_name=os.path.basename(f))
workbook = writer.book
worksheet = writer.sheets[os.path.basename(f)]
# Add a header format.
header_format = workbook.add_format({
'bold': True,
'fg_color': '#ffcccc',
'border': 1})
for col_num, value in enumerate(df.columns.values):
worksheet.write(0, col_num, value, header_format)
column_len = df[value].astype(str).str.len().max()
# Setting the length if the column header is larger
# than the max column value length
column_len = max(column_len, len(value)) + 3
print(column_len)
# set the column length
worksheet.set_column(col_num, col_num, column_len)
writer.save()
</code></pre>
<p>EDIT:</p>
<pre><code>import string

writer = pd.ExcelWriter("file.xlsx", engine='xlsxwriter')
#skip 2 rows
df.to_excel(writer, sheet_name='Sheet1', startrow=2, header=False, index=False)
workbook = writer.book
worksheet = writer.sheets['Sheet1']
# Add a header format.
header_format = workbook.add_format({
'bold': True,
'fg_color': '#ffcccc',
'border': 1})
#create dictionary to map column positions to Excel column letters
d = dict(zip(range(26), string.ascii_uppercase))
#print (d)
max_len = d[len(df.columns) - 1]
print (max_len)
#C
#dynamically set merged columns in first row
worksheet.merge_range('A1:' + max_len + '1', 'This Sheet is for Personal Details')
for col_num, value in enumerate(df.columns.values):
#write to second row
worksheet.write(1, col_num, value, header_format)
column_len = df[value].astype(str).str.len().max()
column_len = max(column_len, len(value)) + 3
worksheet.set_column(col_num, col_num, column_len)
# Close the Pandas Excel writer and output the Excel file.
writer.save()
</code></pre>
|
python|excel|python-3.x|pandas|csv
| 10
|
8,084
| 50,301,544
|
How to delete columns without headers in python pandas read_csv
|
<p>Currently, I have to read the CSV file and set the headers in advance. And then drop the columns which I don't want. Is there any way to do this directly?</p>
<pre><code># Current Code
columns_name = ['station', 'date', 'observation', 'value', 'other_1',
'other_2', 'other_3', 'other_4']
del_columns_name = ['other_1', 'other_2', 'other_3', 'other_4']
df =pd.read_csv('filename', names = columns_name)
df = df.drop(del_columns_name, axis=1)
</code></pre>
|
<p>I think you might even specify the indexes right away. In this case you are insterested in: <code>[0,1,2,3]</code>. Consider this example which also parses dates.</p>
<pre><code>import pandas as pd
cols = ['station', 'date', 'observation', 'value']
data = '''\
1, 2018-01-01, 1, 1, 1, 1, 1, 1
2, 2018-01-02, 2, 2, 2, 2, 2, 2'''
file = pd.compat.StringIO(data)
df = pd.read_csv(file, names=cols, usecols=[0,1,2,3], parse_dates=[1])
print(df)
</code></pre>
<p>Returns:</p>
<pre><code> station date observation value
0 1 2018-01-01 1 1
1 2 2018-01-02 2 2
</code></pre>
|
python|python-3.x|pandas|dataframe
| 2
|
8,085
| 62,664,384
|
How do I swap the duplicates in a row with a blank without affecting the corresponding rows using python?
|
<p>Let's say we have the following data on excel,</p>
<pre><code>Column1 | Column2 | Column3 | .... Column n
A | 10 | a
A | 10 | b
A | 10 | c
B | 15 | d
B | 15 | e
B | 15 | f
C | 20 | g
C | 20 | h
.
.
.
</code></pre>
<p>I would like to modify it to,</p>
<pre><code>Column1 | Column2 | Column3 | .... Column n
A | 10 | a
| | b
| | c
B | 15 | d
| | e
| | f
C | 20 | g
| | h
.
.
.
</code></pre>
<p>I tried using the drop_duplicates (from pandas) technique but it deletes other rows too.</p>
<p>I can do the task manually, but I am trying to find a way of achieving the above through using python, any thoughts?</p>
|
<p>You can first build a boolean mask of the duplicated rows with <code>duplicated()</code></p>
<p><code>dup_mask = df.duplicated(subset=['Column1', 'Column2'])</code></p>
<p>Then you can blank out the values in those rows</p>
<p><code>df.loc[dup_mask, ['Column1', 'Column2']] = ''</code></p>
<p>If you don't want blank values, as rchurt said in the comment, <code>groupby()</code> can also be a good option, leaving your data as it is.</p>
|
python|pandas|duplicates
| 1
|
8,086
| 62,728,659
|
mse loss function not compatible with regularization loss (add_loss) on hidden layer output
|
<p>I would like to code in tf.Keras a Neural Network with a couple of loss functions. One is a standard mse (mean squared error) with a factor loading, while the other is basically a regularization term on the output of a hidden layer. This second loss is added through <code>self.add_loss()</code> in a user-defined class inheriting from <code>tf.keras.layers.Layer</code>. I have a couple of questions (the first is more important though).</p>
<p><strong>1)</strong> The error I get when trying to combine the two losses together is the following:</p>
<pre><code>ValueError: Shapes must be equal rank, but are 0 and 1
From merging shape 0 with other shapes. for '{{node AddN}} = AddN[N=2, T=DT_FLOAT](loss/weighted_loss/value, model/new_layer/mul_1)' with input shapes: [], [100].
</code></pre>
<p>So it comes from the fact that the tensors which should add up to make one unique loss value have different shapes (and ranks). Still, when I try to print the losses during the training, I clearly see that the vectors returned as losses have shape <code>batch_size</code> and rank 1. Could it be that when the 2 losses are summed I have to provide them (or at least the loss of <code>add_loss</code>) as scalar? I know the mse is usually returned as a vector where each entry is the mse from one sample in the batch, hence having <code>batch_size</code> as shape. I think I tried to do the same with the "regularization" loss. Do you have an explanation for this behavio(u)r?</p>
<p>The sample code which gives me error is the following:</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input
def rate_mse(rate=1e5):
@tf.function # also needed for printing
def loss(y_true, y_pred):
tmp = rate*K.mean(K.square(y_pred - y_true), axis=-1)
# tf.print('shape %s and rank %s output in mse'%(K.shape(tmp), tf.rank(tmp)))
tf.print('shape and rank output in mse',[K.shape(tmp), tf.rank(tmp)])
tf.print('mse loss:',tmp) # print when I put tf.function
return tmp
return loss
class newLayer(tf.keras.layers.Layer):
def __init__(self, rate=5e-2, **kwargs):
super(newLayer, self).__init__(**kwargs)
self.rate = rate
# @tf.function # to be commented for NN training
def call(self, inputs):
tmp = self.rate*K.mean(inputs*inputs, axis=-1)
tf.print('shape and rank output in regularizer',[K.shape(tmp), tf.rank(tmp)])
tf.print('regularizer loss:',tmp)
self.add_loss(tmp, inputs=True)
return inputs
tot_n = 10000
xx = np.random.rand(tot_n,1)
yy = np.pi*xx
train_size = int(0.9*tot_n)
xx_train = xx[:train_size]; xx_val = xx[train_size:]
yy_train = yy[:train_size]; yy_val = yy[train_size:]
reg_layer = newLayer()
input_layer = Input(shape=(1,)) # input
hidden = Dense(20, activation='relu', input_shape=(2,))(input_layer) # hidden layer
hidden = reg_layer(hidden)
output_layer = Dense(1, activation='linear')(hidden)
model = Model(inputs=[input_layer], outputs=[output_layer])
model.compile(optimizer='Adam', loss=rate_mse(), experimental_run_tf_function=False)
#model.compile(optimizer='Adam', loss=None, experimental_run_tf_function=False)
model.fit(xx_train, yy_train, epochs=100, batch_size = 100,
validation_data=(xx_val,yy_val), verbose=1)
#new_xx = np.random.rand(10,1); new_yy = np.pi*new_xx
#model.evaluate(new_xx,new_yy)
print(model.predict(np.array([[1]])))
</code></pre>
<p><strong>2)</strong> I would also have a secondary question related to this code. I noticed that printing with <code>tf.print</code> inside the function <code>rate_mse</code> only works with <code>tf.function</code>. Similarly, the <code>call</code> method of <code>newLayer</code> is only taken into consideration if the same decorator is commented during training. Can someone explain why this is the case or reference me to a possible solution?</p>
<p>Thanks in advance to whoever can provide me help. I am currently using Tensorflow 2.2.0 and keras version is 2.3.0-tf.</p>
|
<p>I was stuck with the same problem for a few days. The "standard" loss has already been reduced to a scalar by the time it is added to the loss from <code>add_loss</code>. The only way I got it working was to include one more axis when calculating the mean, so the regularization term also reduces to a scalar and the two losses can be summed:</p>
<pre><code>tmp = self.rate*K.mean(inputs*inputs, axis=[0, -1])
</code></pre>
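<p>Applied to the <code>call</code> method from the question, only the reduction axes change: averaging over both the batch axis and the last axis collapses the regularization term to a rank-0 tensor, matching the already-reduced mse loss.</p>
<pre><code>def call(self, inputs):
    # Mean over batch (axis 0) and features (axis -1) gives a scalar.
    tmp = self.rate * K.mean(inputs * inputs, axis=[0, -1])
    self.add_loss(tmp, inputs=True)
    return inputs
</code></pre>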
|
python-3.x|neural-network|layer|tf.keras|tensorflow2.x
| 2
|
8,087
| 62,593,523
|
How to populate value in column based on condition using numpy?
|
<p>Hi, I am trying to populate a new column with a fixed value when a condition is met,
but for some rows I am also getting the value when the condition is not met.
Where am I going wrong?
I need a blank in 'type' if 'ID1' is blank, else the string 'A2A'.
The datatype of column 'ID1' is object, and converting it to string raises an error:
although we see blank rows in 'ID1', it displays "cannot perform operation on float".</p>
<p>Code:</p>
<pre><code>df1['type'] = np.where((df1['ID1'].isnull()) , np.nan,'A2A')
</code></pre>
<p>Input:</p>
<pre><code>ID1
2
3
4
</code></pre>
<p>Output:</p>
<pre><code>ID1 type
nan
A2A
A2A
2 A2A
3 A2A
nan
nan
4 A2A
A2A
</code></pre>
<p>Expected output:</p>
<pre><code>ID1 type
2 A2A
3 A2A
4 A2A
</code></pre>
|
<p>You can try this:</p>
<p>Make sure you first normalize whitespace-only strings (and NaN) to plain empty strings:</p>
<pre><code>df = pd.DataFrame([None, np.nan, '','','',2,3,'',''])
df = df.replace(r'^\s*$', '', regex=True)
df.fillna('', inplace=True)
</code></pre>
<p><strong>Option 1: You can use pandas <code>.apply</code></strong></p>
<pre><code>df["type"] = df.apply(lambda x: "A2A" if x[0] else '',axis=1)
</code></pre>
<p><strong>Option 2: without axis:</strong></p>
<pre><code>df["type"] = df[0].apply(lambda x: "A2A" if x else '')
</code></pre>
<p><strong>Option 3: You can also use <code>np.where</code>:</strong></p>
<pre><code>df["type"] = np.where(df[0], "A2A", '')
</code></pre>
<p><strong>Option 4: Convert entire column in string format and check values</strong></p>
<pre><code>df[0].apply(lambda x: "A2A" if str(x).lower().strip() not in ["none","nan",""] else '')
</code></pre>
<p>Input:</p>
<pre><code> 0
0 None
1 NaN
2
3
4
5 2
6 3
7
8
</code></pre>
<p>Output:</p>
<pre><code> 0 type
0
1
2
3
4
5 2 A2A
6 3 A2A
7
8
</code></pre>
|
python|pandas|numpy
| 1
|
8,088
| 62,794,219
|
tensorflow GPU not showing in jupyter notebook
|
<h2>In terminal</h2>
<ul>
<li>windows 10</li>
<li>using cuda 10.1</li>
<li>python 3.7.7</li>
<li>GPU GeForce GTX 1050 4GB</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>>>> import tensorflow as tf
2020-07-08 17:10:50.005569: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
>>> tf.config.list_physical_devices('GPU')
2020-07-08 17:10:55.657489: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1050 computeCapability: 6.1
coreClock: 1.493GHz coreCount: 5 deviceMemorySize: 4.00GiB deviceMemoryBandwidth: 104.43GiB/s
2020-07-08 17:10:55.701387: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
</code></pre>
<h2>In jupyter notebook</h2>
<pre class="lang-py prettyprint-override"><code>in [1]: import tensorflow as tf
tf.config.list_physical_devices('GPU')
out[1]: []
in [1]: tf.__version__
out[1]: '2.2.0'
</code></pre>
|
<p>You need to create a new kernel for this environment and then select that kernel from Jupyter Notebook:</p>
<pre class="lang-sh prettyprint-override"><code>$ conda activate env_name
$ pip install ipykernel --user
$ python -m ipykernel install --user --name env_name --display-name env_name
</code></pre>
|
python|tensorflow|jupyter-notebook
| 0
|
8,089
| 73,610,869
|
The expanded size of the tensor (1011) must match the existing size (512) at non-singleton dimension 1
|
<p>I have a trained a LayoutLMv2 model from huggingface and when I try to inference it on a single image, it gives the runtime error. The code for this is below:</p>
<pre><code>query = '/Users/vaihabsaxena/Desktop/Newfolder/labeled/Others/Two.pdf26.png'
image = Image.open(query).convert("RGB")
encoded_inputs = processor(image, return_tensors="pt").to(device)
outputs = model(**encoded_inputs)
preds = torch.softmax(outputs.logits, dim=1).tolist()[0]
pred_labels = {label:pred for label, pred in zip(label2idx.keys(), preds)}
pred_labels
</code></pre>
<p>The error comes when I do <code>model(**encoded_inputs)</code>. The <code>processor</code> is loaded directly from Hugging Face and is initialized as follows, along with the other APIs:</p>
<pre><code>feature_extractor = LayoutLMv2FeatureExtractor()
tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased")
processor = LayoutLMv2Processor(feature_extractor, tokenizer)
</code></pre>
<p>The model is defined and trained as follows:</p>
<pre><code>model = LayoutLMv2ForSequenceClassification.from_pretrained(
"microsoft/layoutlmv2-base-uncased", num_labels=len(label2idx)
)
model.to(device);
optimizer = AdamW(model.parameters(), lr=5e-5)
num_epochs = 3
for epoch in range(num_epochs):
print("Epoch:", epoch)
training_loss = 0.0
training_correct = 0
#put the model in training mode
model.train()
for batch in tqdm(train_dataloader):
outputs = model(**batch)
loss = outputs.loss
training_loss += loss.item()
predictions = outputs.logits.argmax(-1)
training_correct += (predictions == batch['labels']).float().sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()
print("Training Loss:", training_loss / batch["input_ids"].shape[0])
training_accuracy = 100 * training_correct / len(train_data)
print("Training accuracy:", training_accuracy.item())
validation_loss = 0.0
validation_correct = 0
for batch in tqdm(valid_dataloader):
outputs = model(**batch)
loss = outputs.loss
validation_loss += loss.item()
predictions = outputs.logits.argmax(-1)
validation_correct += (predictions == batch['labels']).float().sum()
print("Validation Loss:", validation_loss / batch["input_ids"].shape[0])
validation_accuracy = 100 * validation_correct / len(valid_data)
print("Validation accuracy:", validation_accuracy.item())
</code></pre>
<p>The complete error trace:</p>
<pre><code>RuntimeError Traceback (most recent call last)
/Users/vaihabsaxena/Desktop/Newfolder/pytorch.ipynb Cell 37 in <cell line: 4>()
2 image = Image.open(query).convert("RGB")
3 encoded_inputs = processor(image, return_tensors="pt").to(device)
----> 4 outputs = model(**encoded_inputs)
5 preds = torch.softmax(outputs.logits, dim=1).tolist()[0]
6 pred_labels = {label:pred for label, pred in zip(label2idx.keys(), preds)}
File ~/opt/anaconda3/envs/env_pytorch/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/anaconda3/envs/env_pytorch/lib/python3.9/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py:1071, in LayoutLMv2ForSequenceClassification.forward(self, input_ids, bbox, image, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
1061 visual_position_ids = torch.arange(0, visual_shape[1], dtype=torch.long, device=device).repeat(
1062 input_shape[0], 1
1063 )
1065 initial_image_embeddings = self.layoutlmv2._calc_img_embeddings(
1066 image=image,
1067 bbox=visual_bbox,
...
896 input_shape[0], 1
897 )
898 final_position_ids = torch.cat([position_ids, visual_position_ids], dim=1)
RuntimeError: The expanded size of the tensor (1011) must match the existing size (512) at non-singleton dimension 1. Target sizes: [1, 1011]. Tensor sizes: [1, 512]
</code></pre>
<p>I have tried setting up the tokenizer to cut off at the max length, but then it finds <code>encoded_inputs</code> to be <code>NoneType</code>, even though the image is still there. What is going wrong here?</p>
|
<p>The error message tells you that the text extracted via OCR is longer (1011 tokens) than what the underlying text model is able to handle (512 tokens). Depending on your task, you may be able to truncate your text with the tokenizer parameter <a href="https://huggingface.co/docs/transformers/main/en/model_doc/layoutlmv2#transformers.LayoutLMv2Tokenizer.__call__.truncation" rel="nofollow noreferrer">truncation</a> (the processor will pass this parameter to the tokenizer):</p>
<pre class="lang-py prettyprint-override"><code>import torch
from transformers import LayoutLMv2FeatureExtractor, LayoutLMv2Tokenizer, LayoutLMv2Processor, LayoutLMv2ForSequenceClassification
from PIL import Image, ImageDraw, ImageFont
query = "/content/Screenshot_20220905_202551.png"
image = Image.open(query).convert("RGB")
feature_extractor = LayoutLMv2FeatureExtractor()
tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased")
processor = LayoutLMv2Processor(feature_extractor, tokenizer)
model = LayoutLMv2ForSequenceClassification.from_pretrained("microsoft/layoutlmv2-base-uncased", num_labels=2)
encoded_inputs = processor(image, return_tensors="pt")
# The model will raise an error because this tensor is longer than the trained position embeddings
print(encoded_inputs["input_ids"].shape)
encoded_inputs = processor(image, return_tensors="pt", truncation=True)
print(encoded_inputs["input_ids"].shape)
outputs = model(**encoded_inputs)
preds = torch.softmax(outputs.logits, dim=1).tolist()[0]
</code></pre>
<p>Output:</p>
<pre><code>torch.Size([1, 644])
torch.Size([1, 512])
</code></pre>
<p>For this code, I used the following screenshot:
<a href="https://i.stack.imgur.com/cDWTs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cDWTs.png" alt="donut paper" /></a></p>
|
python|machine-learning|computer-vision|huggingface-transformers
| 1
|
8,090
| 73,720,055
|
Replacing Square Brackets without value in .json array/pandas dataframe
|
<p>I was wondering if there is a way to remove/replace null/empty square brackets in a JSON or pandas dataframe. I have tried replacing them after converting to string via <code>.astype(str)</code>; that works, but it converts all the JSON values into strings and I cannot process them further with the same structure. I would appreciate any solution/recommendation. Thanks...</p>
<p><a href="https://i.stack.imgur.com/NyAnI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NyAnI.png" alt="enter image description here" /></a></p>
|
<p>With the following toy dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({"col1": ["a", [1, 2, 3], [], "d"], "col2": ["e", [], "f", "g"]})
print(df)
# Output
</code></pre>
<p><a href="https://i.stack.imgur.com/21PGz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/21PGz.png" alt="enter image description here" /></a></p>
<p>Here is one way to do it:</p>
<pre class="lang-py prettyprint-override"><code>df = df.applymap(lambda x: pd.NA if isinstance(x, list) and not x else x)
print(df)
# Output
</code></pre>
<p><a href="https://i.stack.imgur.com/2Ec0m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Ec0m.png" alt="enter image description here" /></a></p>
|
json|pandas|dataframe|brackets
| 0
|
8,091
| 73,780,514
|
I am trying to define the following vector function, but keep getting an error
|
<p>Where am I going wrong? The function:</p>
<p><a href="https://i.stack.imgur.com/wNmtb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wNmtb.jpg" alt="enter image description here" /></a></p>
<p>The code I have written is as follows. Note that the second function is an attempt to integrate it using <code>solve_ivp</code>; please let me know if there are any issues there as well:</p>
<pre><code>def fun(s, rho_0, rho_1):
return lambda t, s:np.dot(np.array([0.775416, 0,0, 0.308968]).reshape(2,2), s) + np.array([rho_0,rho_1]).reshape(2,1)
def fun2(t, rho_0, rho_1):
res = solve_ivp(fun, [0, 5], y0 = [0, 0], t_eval=np.arange(0,5), args = (rho_0, rho_1), vectorized = True)
return res.y[1]
fun2(t = 0, rho_0 = 0.0099532, rho_1 = 0.001699)
</code></pre>
<p>The error I get:</p>
<pre><code>TypeError: fun() takes 3 positional arguments but 4 were given
</code></pre>
|
<p>From the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html" rel="nofollow noreferrer">documentation</a>, <code>fun</code> should take <code>t</code> and <code>y</code> as its first two arguments. Thus, you need to define <code>fun</code> as follows:</p>
<pre><code>fun(t, y, rho1, rho2)
</code></pre>
<p>That's why you're facing this error, it is passing <code>t</code>, <code>y</code>, and then the arguments specified in <code>args</code> (you've provided 2 arguments, so you end up with 4 in total including <code>t</code> and <code>y</code>).</p>
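<p>A minimal corrected sketch, keeping the matrix and parameter values from the question (without <code>vectorized=True</code>, <code>y</code> arrives as a 1-D array of length 2):</p>
<pre><code>import numpy as np
from scipy.integrate import solve_ivp

# t and y come first; rho_0 and rho_1 arrive via the args tuple.
def fun(t, y, rho_0, rho_1):
    A = np.array([[0.775416, 0.0], [0.0, 0.308968]])
    return A @ y + np.array([rho_0, rho_1])

res = solve_ivp(fun, [0, 5], y0=[0, 0], t_eval=np.arange(0, 5),
                args=(0.0099532, 0.001699))
print(res.y[1])
</code></pre>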
|
python|numpy|function|vector|ode
| 2
|
8,092
| 71,225,828
|
Python multiple separate pivot tables based on another column to separate excel files
|
<p>I am trying to produce multiple separate pivot tables for each distinct value in a different column in my df (like a different pivot table filtered by each). In the actual file there are several hundred R1's so was trying to find a way to loop over this somehow to produce them separately.</p>
<p>If possible, is there a way to then send each pivot to a separate Excel file?</p>
<pre><code>import pandas as pd
df=pd.DataFrame({'Employee':['1','2','3','4','5','6','7','8','9','10','11','12', '13', '14', '15', '16', '17', '18', '19', '20'],
'R1': ['mike', 'mike', 'mike', 'mike', 'mike', 'mike', 'mike', 'mike', 'stacey' , 'stacey', 'stacey', 'stacey', 'stacey', 'stacey', 'stacey', 'stacey', 'stacey', 'stacey', 'stacey', 'stacey'],
'R2':['bill', 'bill', 'bill', 'bill', 'bill', 'chris', 'chris', 'chris', 'jill', 'jill', 'jill', 'tom', 'tom', 'tom', 'tom', 'pete', 'pete', 'pete', 'pete', 'pete']})
df
</code></pre>
<p>So essentially 1 excel file for mike's world that has a count by employee by R2 and 1 excel for stacey's world that has a count by employee of R2 (but in the real data this would be done for the several hundred R1's)</p>
<p>thanks!</p>
<p>Mike excel
<a href="https://i.stack.imgur.com/rklNp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rklNp.jpg" alt="enter image description here" /></a></p>
<p>Stacey excel
<a href="https://i.stack.imgur.com/5Oy60.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Oy60.jpg" alt="enter image description here" /></a></p>
|
<p>While there may be prettier ways of dealing with the dataframes prior to writing, this produced the results you were looking for. It should scale to any number of R1 values, since <code>unique()</code> provides the list of distinct names within R1; the loop then builds the breakdown you need for each value and writes it to a sheet at the given filepath.</p>
<pre><code>import pandas as pd
data_jobs2=pd.DataFrame({'Employee':['1','2','3','4','5','6','7','8','9','10','11','12', '13', '14', '15', '16', '17', '18', '19', '20'],
'L2Name': ['mike', 'mike', 'mike', 'mike', 'mike', 'mike', 'mike', 'mike', 'stacey' , 'stacey', 'stacey', 'stacey', 'stacey', 'stacey', 'stacey', 'stacey', 'stacey', 'stacey', 'stacey', 'stacey'],
'L3Name':['bill', 'bill', 'bill', 'bill', 'bill', 'chris', 'chris', 'chris', 'jill', 'jill', 'jill', 'tom', 'tom', 'tom', 'tom', 'pete', 'pete', 'pete', 'pete', 'pete']})
values = data_jobs2['L2Name'].unique()
filepath = 'Your\File\Path\Here\File_name.xlsx'
writer = pd.ExcelWriter(filepath, engine='openpyxl')
for i in values:
series = data_jobs2[data_jobs2['L2Name'] == i].groupby(['L2Name','L3Name'])['Employee'].count().to_frame().reset_index()
df_to_write = series.pivot(index = 'L2Name', columns='L3Name', values = 'Employee').reset_index().replace({i : 'Count of Employee'}).rename(columns={'L2Name':''}).set_index('')
df_to_write['Grand Total'] = df_to_write.sum(1)
df_to_write.to_excel(writer, sheet_name=i)
display(df_to_write)
display(series)
writer.save()
writer.close()
</code></pre>
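<p>If you genuinely need one workbook per R1 value (as the question asks) rather than one sheet per value, a minimal variation (reusing the same pivot-building steps from the loop above) is to write each pivot to its own file:</p>
<pre><code>for i in values:
    series = data_jobs2[data_jobs2['L2Name'] == i].groupby(['L2Name', 'L3Name'])['Employee'].count().to_frame().reset_index()
    df_to_write = series.pivot(index='L2Name', columns='L3Name', values='Employee').reset_index().replace({i: 'Count of Employee'}).rename(columns={'L2Name': ''}).set_index('')
    df_to_write['Grand Total'] = df_to_write.sum(1)
    # One workbook per R1 value, named after that value.
    df_to_write.to_excel(f'Your/File/Path/Here/{i}.xlsx', sheet_name=i)
</code></pre>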
|
python|loops|pivot|pandas.excelwriter
| 1
|
8,093
| 71,442,051
|
How to identify and remove outliers from a dataframe that contains both numerical and catagorical values?
|
<p>I have a dataset and need to remove the outliers 3 standard deviations away from the mean for each numerical column. The rows which contain the outliers should then be dropped.</p>
|
<p>Here is an example of code that will do what your question asks:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame( [ 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ] * 10 + [ 1, 11, 21, 31, 41, 161, 171, 181, 191, 201 ] )
print(len(df[0]))
print(df.std()[0])
print(df[0].mean())
df_filtered = df[(df[0] - df[0].mean()).abs() < 3 * df.std()[0]]
print(len(df_filtered[0]))
</code></pre>
<p>Output:</p>
<pre><code>120
23.74747517170641
100.08333333333333
114
</code></pre>
<p>The length of the filtered dataframe is 6 less than that of the original, as 6 values are outliers beyond 3 standard deviations.</p>
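<p>To apply the same rule across every numerical column of a mixed dataframe at once (a sketch; here a row is dropped if <em>any</em> of its numeric values is an outlier):</p>
<pre><code># num holds only the numeric columns; the mask keeps rows where every
# numeric value lies within 3 standard deviations of its column mean.
num = df.select_dtypes(include='number')
mask = ((num - num.mean()).abs() < 3 * num.std()).all(axis=1)
df_filtered = df[mask]
</code></pre>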
|
python|python-3.x|pandas|dataframe
| 1
|
8,094
| 52,241,792
|
Only convert string representation of number to number in Pandas
|
<p>I have a pandas <code>Dataframe</code> and I realize when my <code>Dataframe</code> columns only have string representation of numbers then the conversion will take place, otherwise it will not. The code below I am using to convert all numbers that are in string form to numbers.</p>
<pre><code>import pandas as pd
from functools import partial
df = pd.DataFrame({0: ['3', 'r'], 1: ['1', 's']})
df = df.apply(partial(pd.to_numeric, errors='ignore'))
</code></pre>
<p>The code above will not work, because <code>'r'</code> and <code>'s'</code> are in columns. So everything will remain as strings. How can I get the code to convert <code>'3'</code> and <code>'1'</code> to numbers <code>3</code> and <code>1</code>?</p>
|
<p>As <a href="https://stackoverflow.com/questions/52241792/only-convert-string-representation-of-number-to-number-in-pandas#comment91432115_52241792">@MadPhysicist</a> states, Pandas.Series have a single <code>dtype</code>. However, that <code>dtype</code> can be <code>object</code> which means anything goes. You'll lose <strong>MANY</strong> advantages from having a numeric <code>dtype</code> but it might be what you want.</p>
<h3>Force non-numeric things to <code>NaN</code></h3>
<pre><code>df.apply(pd.to_numeric, errors='coerce')
0 1
0 3.0 1.0
1 NaN NaN
</code></pre>
<p><strong>NOTE:</strong><br>
<code>apply</code> iterates through each column and passes that column through the <code>callable</code> that was given. That means that each column got a treatment like this:</p>
<pre><code>pd.to_numeric(one_of_the_columns, errors='coerce')
</code></pre>
<p>Using <code>errors='coerce'</code> makes things numbers where it can and <code>np.nan</code> otherwise.</p>
<hr>
<h3>Use <code>dtype</code> object and throw away efficiency for... whatever it is you're trying to do</h3>
<pre><code>df = df.applymap(lambda x: pd.to_numeric(x, errors='ignore'))
df
0 1
0 3 1
1 r s
</code></pre>
<p>To validate that it actually changed <code>3</code> to a number try:</p>
<pre><code>df.applymap(type)
0 1
0 <class 'numpy.int64'> <class 'numpy.int64'>
1 <class 'str'> <class 'str'>
</code></pre>
<p><strong>NOTE:</strong><br>
<code>applymap</code> iterates through each <em>cell</em> of the dataframe and passes that cell's value through the <code>callable</code> passed. In this case, each cell was treated like:</p>
<pre><code>pd.to_numeric(one_particular_cell, errors='ignore')
</code></pre>
<p>And was turned into a number if possible otherwise left alone. </p>
<p>This is inefficient but does what you want. As Pandas tries to reconcile the damage you've done, it realizes that there are mixed types in some columns and changes the <code>dtype</code> to <code>object</code> in order to accommodate them.</p>
|
python|pandas|dataframe
| 3
|
8,095
| 52,151,673
|
Dependencies missing in current linux-64 channels when trying to install tensorflow-gpu with conda command
|
<p>Hi, I tried <code>conda install tensorflow-gpu</code> in my terminal and I get this:</p>
<pre><code>Error: Dependencies missing in current linux-64 channels:
- tensorflow -> numpy >=1.11 -> blas * mkl
- tensorflow -> numpy 1.11* -> blas * openblas
- tensorflow -> tensorflow-tensorboard -> numpy >=1.11 -> blas *
openblas
- tensorflow -> numpy 1.12* -> blas * openblas
- tensorflow -> tensorflow-base ==1.3.0 -> numpy >=1.11 -> blas * mkl
- tensorflow -> tensorflow-base ==1.3.0 -> numpy >=1.11 -> blas * openblas
- tensorflow -> tensorflow-tensorboard -> numpy >=1.11 -> blas * mkl
- tensorflow -> numpy 1.12* -> blas * mkl
- tensorflow -> numpy 1.11* -> blas * mkl
- tensorflow -> numpy >=1.11 -> blas * openblas
</code></pre>
<p>I also installed openblas afterwards, but I still get the same error. What is the issue?</p>
|
<p>As @pic0 has suggested, after doing</p>
<p><code>conda update conda</code></p>
<p>I was able to install all the needed packages.</p>
<p>If you have installed Anaconda in the default folder (for me it is <code>/home/user/anaconda</code>), you should not need to use <code>sudo</code>.</p>
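<p>For completeness, the full sequence would be:</p>
<pre><code>conda update conda
conda install tensorflow-gpu
</code></pre>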
|
python|tensorflow
| 13
|
8,096
| 52,126,936
|
Python - Read an image from URL to resize and convert image to grayscale
|
<p>I want to read an image from a URL, resize it, and convert it to grayscale. I have seen a number of examples from Stack Overflow and I tried them out. However, it never successfully converts the image to grayscale in my case. I'm not sure what went wrong here. These are what I tried.</p>
<pre><code>import matplotlib.pyplot as plt
from skimage.transform import resize
import numpy as np
from skimage import io, color
# try 1
img1 = io.imread("https://prasadpamidi.github.io/images/image2.jpg", as_grey=True)
img1 = (img1 - 255.0) / 255
img1 = resize(img1, (32, 32))
# try 2
img1 = io.imread("https://prasadpamidi.github.io/images/image2.jpg")
img1 = img1.dot([0.07, 0.72, 0.21])
img1 = (img1 - 255.0) / 255
img1 = resize(img1, (32, 32))
# try 3
img1 = color.rgb2gray(io.imread("https://prasadpamidi.github.io/images/image2.jpg"))
img1 = (img1 - 255.0) / 255
img1 = resize(img1, (32, 32))
# print images
plt.figure(figsize=(5,5))
plt.imshow(img1)
plt.show()
</code></pre>
<p>The result is more or less like the image below, so I'd like to know what I missed. Any suggestions would be appreciated.</p>
<p><a href="https://i.stack.imgur.com/TJeou.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TJeou.png" alt="enter image description here"></a></p>
|
<p>Give this code a try:</p>
<pre><code>from skimage.io import imread, imshow
from skimage.transform import resize
from skimage.util import img_as_ubyte
url = "https://prasadpamidi.github.io/images/image2.jpg"
img1 = imread(url, as_gray=True)
img2 = resize(img1, (32, 32))
img3 = img_as_ubyte(img2)
imshow(img3)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/dUind.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dUind.png" alt="grayscale image"></a></p>
<p>I'm attaching a screenshot of the variable explorer to show you that the variables have the correct shape and type.</p>
<p><a href="https://i.stack.imgur.com/EOpdh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EOpdh.png" alt="screenshot of variable explorer"></a></p>
|
python-3.x|numpy|matplotlib|scikit-image
| 2
|
8,097
| 60,560,752
|
Tensorflow Square function recognition
|
<p>I am trying to learn the basics of machine learning. I am trying to train the AI on the function 2^x:</p>
<pre><code>import tensorflow as tf
import numpy as np
from tensorflow import keras
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0], dtype=int)
ys = np.array([4.0, 8.0, 16.0, 32.0, 64.0, 128.0, 256.0, 512.0, 1024.0, 2048.0, 4096.0, 8192.0, 16384.0, 32768.0], dtype=int)
model.fit(xs, ys, epochs=1000)
print(model.predict([7.0]))
</code></pre>
<p>This should print roughly 128, but the outcome is in the 2000s and the loss is extremely high.</p>
<p>How can I optimize this neural network so it gives me a more accurate answer?</p>
<p>Thank you for your time.</p>
|
<p>When you are training a <strong>neural network</strong>, what seems easy to you might be hard for the computer. In your case you are training <strong>2 raised to the nth power</strong>: this pattern is hard to discover because the target values change <strong>drastically</strong>, and a single linear unit may or may not <strong>converge</strong> to a useful fit.</p>
<p>As for how to <strong>optimize a neural network</strong> to create a more accurate model, some ways to achieve this are to <strong>add more layers</strong>, <strong>fine-tune hyper-parameters</strong>, build <strong>more complex models</strong>, and even <strong>create your own layers, operations, and functions</strong>.</p>
<p>If you are trying to learn the basics of machine learning, there are many guides in the TensorFlow documentation to get you up to speed.</p>
<p>Try to follow the curriculum in this <a href="https://www.tensorflow.org/resources/learn-ml/basics-of-machine-learning" rel="nofollow noreferrer">link</a> for a more clear path to study. </p>
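<p>For this specific function there is also a practical trick worth knowing (a sketch, not part of the original answer): train on <code>log2</code> of the targets, so the mapping becomes the identity line, which a single Dense unit fits easily.</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow import keras

xs = np.arange(2.0, 16.0)
ys = 2.0 ** xs

model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')

# log2(2**x) = x, so the network only has to learn the identity line.
model.fit(xs, np.log2(ys), epochs=500, verbose=0)

pred = model.predict(np.array([[7.0]]))
print(2.0 ** pred)  # should be close to 128
</code></pre>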
|
python|tensorflow|neural-network
| 0
|
8,098
| 60,739,867
|
I want to remove square brackets from Python list output
|
<pre><code> df = pd.read_excel('Websites.xlsx', usecols=[3])
webs = df.dropna()
weblist = webs.values.tolist()
for count in range(0,len(weblist)):
print (weblist[count])
</code></pre>
<p>The output is:</p>
<pre><code>['TRIPADVISOR.COM']
['CHASE.COM']
['WEBMD.COM']
['WEATHER.COM']
['INDEED.COM']
['HOMEDEPOT.COM']
['CRAIGSLIST.ORG']
['BANKOFAMERICA.COM']
</code></pre>
<p>I need to convert all of these to website format, like <a href="https://www.example.com" rel="nofollow noreferrer">https://www.example.com</a>.</p>
|
<p>I think the output is a one-column <code>DataFrame</code>, so add <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.squeeze.html" rel="nofollow noreferrer"><code>DataFrame.squeeze</code></a> to get a Series, then loop and build each URL with f-strings:</p>
<pre><code>for i in webs.squeeze():
print (f'https://www.{i.lower()}')
</code></pre>
|
python|pandas|list
| 2
|
8,099
| 60,750,288
|
Invalid device id when using pytorch dataparallel!
|
<h1>Environment:</h1>
<ul>
<li>Win10 </li>
<li>Pytorch 1.3.0 </li>
<li>python3.7</li>
</ul>
<h1>Problem:</h1>
<p>I am using <code>dataparallel</code> in PyTorch to use my two 2080 Ti GPUs. The code is as below:</p>
<pre class="lang-py prettyprint-override"><code>device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Darknet(opt.model_def)
model.apply(weights_init_normal)
model = nn.DataParallel(model, device_ids=[0, 1]).to(device)
</code></pre>
<p>But when I run this code, I encounter the errors below:</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/Administrator/Desktop/PyTorch-YOLOv3-master/train.py", line 74, in <module>
model = nn.DataParallel(model, device_ids=[0, 1]).to(device)
File "C:\Users\Administrator\Anaconda3\envs\py37_torch1.3\lib\site-packages\torch\nn\parallel\data_parallel.py", line 133, in __init__
_check_balance(self.device_ids)
File "C:\Users\Administrator\Anaconda3\envs\py37_torch1.3\lib\site-packages\torch\nn\parallel\data_parallel.py", line 19, in _check_balance
dev_props = [torch.cuda.get_device_properties(i) for i in device_ids]
File "C:\Users\Administrator\Anaconda3\envs\py37_torch1.3\lib\site-packages\torch\nn\parallel\data_parallel.py", line 19, in <listcomp>
dev_props = [torch.cuda.get_device_properties(i) for i in device_ids]
File "C:\Users\Administrator\Anaconda3\envs\py37_torch1.3\lib\site-packages\torch\cuda\__init__.py", line 337, in get_device_properties
raise AssertionError("Invalid device id")
AssertionError: Invalid device id
</code></pre>
<p>When I debug into it, I find that the function <code>device_count()</code> in <code>get_device_properties()</code> returns 1, while I have 2 GPUs on my machine. And <code>torch._C._cuda_getDeviceCount()</code> returns 2 in Anaconda Prompt. What is wrong?</p>
<h1>Question:</h1>
<p>How can I solve this problem?
How can I manage to use the two GPUs via DataParallel?
Thank you, guys!</p>
|
<p>Basically, as pointed out by @ToughMind, we need to specify</p>
<pre><code>os.environ["CUDA_VISIBLE_DEVICES"] = "0, 1"
</code></pre>
<p>It depends, though, on the CUDA devices available on your machine, so if you have only one GPU it may be appropriate to put, for example,</p>
<pre><code>os.environ["CUDA_VISIBLE_DEVICES"] = "0"
</code></pre>
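<p>Note that this environment variable only takes effect if it is set before CUDA is initialized, i.e. before the first <code>torch.cuda</code> call; a sketch of the intended placement (the <code>nn.Linear</code> is a placeholder for <code>Darknet(opt.model_def)</code>):</p>
<pre><code>import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0, 1"  # must run before CUDA is touched

import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(4, 2)  # placeholder model
model = nn.DataParallel(model, device_ids=[0, 1]).to(device)
print(torch.cuda.device_count())  # should now report 2
</code></pre>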
|
python-3.x|deep-learning|pytorch
| 3
|