| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
8,500
| 73,256,301
|
I keep getting ValueError: invalid literal for int() with base 10: ''
|
<p>here is the code but every time it creates this error.
ValueError: invalid literal for int() with base 10: ''</p>
<p>Here is the main code. It's for posting a tweet</p>
<pre><code>import tweepy
import pandas
import time
import creds
latest_tweeet_id = 0
appkey = creds.appkey
appSecret = creds.appSecret
accessToken = creds.accessToken
acessSecret = creds.acessSecret

latest_tweeet_id = 0
with open('latest.txt', 'r') as f:
    latest_tweeet_id = int(f.read())
    print(latest_tweeet_id)

df = pandas.read_csv('quotes.csv', sep=';')

def tweet(msg):
    msg = msg[0:270]
    try:
        auth = tweepy.OAuth1UserHandler(creds.appkey, creds.appSecret, creds.accessToken, creds.acessSecret)
        api = tweepy.API(auth)
        try:
            api.verify_credentials()
            print("Auth Ok")
        except:
            print('Error')
        api = tweepy.API(auth, wait_on_rate_limit=True)
        api.update_status(msg)
        print(msg)
    except Exception as e:
        print(e)

for idx, rows in df.iterrows():
    if idx <= latest_tweeet_id:
        continue
    tweet(rows["QUOTE"])
    with open('latest.txt', 'w') as f:
        f.write(str(idx))
</code></pre>
<p>Here is the error:</p>
<pre><code>Traceback (most recent call last): wtflax2 (master)
File "app.py", line 15, in <module>
latest_tweeet_id = int(f.read())
ValueError: invalid literal for int() with base 10: ''
</code></pre>
<p>The latest.txt file has only one digit, 1.</p>
|
<p>The problem is that the file <code>latest.txt</code> is empty. When python reads the file, it gets the empty string, and when it tries to interpret the empty string as an integer it gives you that error.</p>
<blockquote>
<p>The latest.txt file has only one digit, 1.</p>
</blockquote>
<p>The error message specifically says that the value read from <code>latest.txt</code> is the empty string, which means the file is empty. Perhaps the problem is where you are running the program from? The <code>latest.txt</code> file that Python looks for will be in the directory that you run Python from. If you have two <code>latest.txt</code> files and you are running Python from the directory that contains the empty one, that could explain what you are seeing.</p>
<p>Or perhaps you simply forgot to save <code>latest.txt</code> after editing it? That kind of thing has happened to me several times.</p>
|
python|pandas|tweepy
| 2
|
8,501
| 73,396,823
|
How can I extract rows in Dataframe with function?
|
<p>My dataframe includes list, like this.</p>
<pre><code> a b
1 frog [1, 2, 3]
2 dog [4, 5]
3 melon [6, 7, 1]
</code></pre>
<p>I want to extract rows which b contains specific numbers, so I made this function.</p>
<pre><code>def a(_list, _tag):
    if _tag in _list:
        return True
    else:
        return False
</code></pre>
<p>I tried to use df.loc[], but it doesn't work well.
How can I write a code without iterating all of dataframe?</p>
<p>My expected output is this. If I want to find a row that contains '1' in b, output will be</p>
<pre><code> a b
1 frog [1, 2, 3]
3 melon [6, 7, 1]
</code></pre>
|
<p>Here's a way to do what your question asks:</p>
<pre class="lang-py prettyprint-override"><code>target = 1
df2 = df.explode('b').b == target
df['found'] = df2.groupby(df2.index).sum() > 0
print(df)
</code></pre>
<p>Output:</p>
<pre><code> a b found
0 frog [1, 2, 3] True
1 dog [4, 5] False
2 melon [6, 7, 1] True
</code></pre>
|
python|pandas|dataframe
| 0
|
8,502
| 35,295,741
|
Python Pandas. Creating DataFrame with Series does not preserve dtype
|
<p>I have a use-case which I thought would be quite common, and so I thought this question of mine should be easy to answer for myself, but I couldn't find the answer anywhere. Consider the following.</p>
<pre><code>df = pandas.DataFrame({"id": numpy.random.choice(range(100), 5, replace=False),
"value": numpy.random.rand(5)})
df2 = pandas.DataFrame([df["id"], df["value"]*2]).T
</code></pre>
<p>Basically I'm creating a <code>DataFrame</code>, <code>df2</code>, based on the values of an old <code>DataFrame</code>, <code>df</code>. Now if we run</p>
<pre><code>print(df.dtypes, end="\n------\n")
print(df2.dtypes)
</code></pre>
<p>we get</p>
<pre><code>id int64
value float64
dtype: object
------
id float64
value float64
dtype: object
</code></pre>
<p>You can see that the <code>dtype</code> of the first column of <code>df2</code> is <code>float64</code>, instead of <code>int64</code> as it should be, even though the <code>dtype</code> of the <code>Series</code> itself is <code>int64</code>. This behaviour is very perplexing to me, and I can't believe that it's intentional. How do I create a <code>DataFrame</code> from some <code>Series</code>s and preserve the <code>dtype</code>s of the <code>Series</code>s? In my mind it should be as easy as <code>pandas.DataFrame([s1, s2], dtypes=[int, float])</code>, but you can't do that in <code>pandas</code> for some reason.</p>
|
<p><em>Columns</em> of a DataFrame always have a single dtype. (This is because, under
the hood, Pandas stores <em>columns</em> of data which have the same dtype in blocks.)</p>
<p>When <code>pd.DataFrame</code> is passed a list of Series, it
unpacks each Series into a separate row. Since the Series have different dtypes, the columns end up with values with mixed dtypes. Pandas tries to resolve this by upgrading all values in each column to a single dtype.</p>
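<p>A small illustration of that row-unpacking, using made-up Series rather than the ones from the question:</p>
<pre><code>s1 = pd.Series([1, 2, 3], name='id')            # int64
s2 = pd.Series([0.5, 1.5, 2.5], name='value')   # float64
print(pd.DataFrame([s1, s2]))
#          0    1    2
# id     1.0  2.0  3.0
# value  0.5  1.5  2.5
# each Series became a row, so every column mixes an int with a float
# and is upcast to float64
</code></pre>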
<hr>
<p>You could define <code>df2</code> with:</p>
<pre><code>df2 = pd.DataFrame({'id': df["id"], 'value': df["value"]*2})
</code></pre>
<p>or</p>
<pre><code>df2 = df.copy()
df2['value'] *= 2
</code></pre>
<p>or</p>
<pre><code>df2 = pd.concat([df["id"], df["value"]*2], axis=1)
</code></pre>
|
python|pandas
| 4
|
8,503
| 35,176,293
|
How to sum the information at two consecutive positions in a dataframe
|
<p>I have a pandas dataframe with position,k, y. For example</p>
<pre><code>pos k y
123 0.7 0.5
124 0.4 0.1
125 0.3 0.2
126 0.4 0.1
128 0.3 0.6
130 0.4 0.9
131 0.3 0.2
</code></pre>
<p>i would like to sum the information at k and y like</p>
<pre><code>123 1.1 0.6
125 0.7 0.3
128 0.3 0.6
130 0.7 1.1
</code></pre>
<p>so the output has only the first positions and the sum of value the first and its immediate consecutive number which follows it.</p>
<p>I tried grouping by pandas </p>
<pre><code>for k,g in df.groupby(df['pos'] - np.arange(df.shape[0])):
    u=g.ix[0:,2:].sum()
</code></pre>
<p>but its groups all the consecutive numbers which I dont want </p>
<p>ALSO I NEED SOMETHING FAST AS I HAVE 2611774 ROW IN MY DATAFILE</p>
|
<p>Maybe this is faster than a loop, but it won't sum positions 123 and 124 and then 130 and 131 as I think you expect, because it sums odd positions with its consecutive like 129 and 130, 131 and 132... </p>
<pre><code>df = df.set_index('pos')
df_odd = df.loc[df.index.values % 2 == 1]
df_even = df.loc[df.index.values % 2 == 0]
df_even = df_even.set_index(df_even.index.values - 1)
df_odd.add(df_even, fill_value = 0)
</code></pre>
<p>Result:</p>
<pre><code>pos k y
123 1.1 0.6
125 0.7 0.3
127 0.3 0.6
129 0.4 0.9
131 0.3 0.2
</code></pre>
|
python|pandas
| 1
|
8,504
| 60,290,110
|
Pandas: Create a table with a “dummy variable” of another table
|
<p>Let's say I have this dataframes</p>
<p>DataFrame A (Products)</p>
<pre><code>Cod | Product | Cost | Date
-------------------------------
18 | Product01 | 3.4 | 21/04
22 | Product02 | 7.2 | 12/08
33 | Product03 | 8.4 | 17/01
55 | Product04 | 0.6 | 13/07
67 | Product05 | 1.1 | 09/09
</code></pre>
<p>DataFrame B (Operations)</p>
<pre><code>id | codoper | CodProd | valor
-------------------------------
1 | 00001 | 55 | 45000
2 | 00001 | 18 | 45000
3 | 00002 | 33 | 53000
1 | 00001 | 55 | 45000
</code></pre>
<p>The idea is obtain a "dataframe C" with the column product from "Dataframe B":</p>
<p>DataFrame C Result</p>
<pre><code>id | codoper | Product_18| Product_22| Product_33| Product_55| Product_67 |valor
----------------------------------------------------------------------------------
1 | 00001 | 1 | 0 | 0 | 1 | 0 |45000
2 | 00002 | 0 | 0 | 1 | 0 | 0 |53000
</code></pre>
<p>So far I only managed to do it from the "DataFrame B":</p>
<pre><code>pd.get_dummies(df, columns=['CodProd']).groupby(['codoper'], as_index=False).min()
</code></pre>
<p>Note: I don't have all products from Dataframe A in the Dataframe of Operations</p>
<p>thanks</p>
|
<p>You need to combine the dummies from <code>Products</code> with the dummies from <code>Operations</code>. Start by defining the output columns by using a prefix:</p>
<pre class="lang-py prettyprint-override"><code>columns = ['id', 'codoper'] + [f"Product_{cod}" for cod in A['Cod'].unique()] + ['valor']
</code></pre>
<p>Then, use get dummies as you're doing above, but use the same prefix from defing the columns. Group by <em>all columns which perfectly collinear</em>, i.e. <code>id</code>, <code>codoper</code>, and <code>valor</code>. If these aren't perfectly collinear, than you need to decide how to aggregate them to the level of <code>codoper</code>. Finally, reindex using the output columns you previously defined, filling missing values with zero.</p>
<pre class="lang-py prettyprint-override"><code>pd.get_dummies(B, columns=['CodProd'], prefix='Product').groupby(['id', 'codoper', 'valor'], as_index=False).sum().reindex(columns=columns, fill_value=0)
</code></pre>
<pre><code> id codoper Product_18 Product_22 Product_33 Product_55 Product_67 valor
0 1 00001 0 0 0 2 0 45000
1 2 00001 1 0 0 0 0 45000
2 3 00002 0 0 1 0 0 53000
</code></pre>
|
python|pandas|dataframe
| 2
|
8,505
| 60,174,899
|
Creating HDF5 compound attributes using h5py
|
<p>I'm trying to create some simple HDF5 datasets that contain attributes with a compound datatype using h5py. The goal is an attribute that has two integers. Here are two example of attributes I'd like to create.</p>
<p><a href="https://i.stack.imgur.com/EKVNq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EKVNq.png" alt="File with desired attributes"></a></p>
<p>My attempts end up with an array of two values such as</p>
<p><a href="https://i.stack.imgur.com/xlute.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xlute.png" alt="File with bad attribute"></a></p>
<p>How can I code this using h5py and get a single value that contains two integers?
Current code looks something like</p>
<pre><code>dt_type = np.dtype({"names": ["val1"],"formats": [('<i4', 2)]})
# also tried np.dtype({"names": ["val1", "val2"],"formats": [('<i4', 1), ('<i4', 1)]})
dataset.attrs.create('time.start', [('23', '3')], dtype=dt_type)
</code></pre>
<p>How can I specify the type or the attribute create to get the first example?</p>
|
<p>To make an array with <code>dt_type</code>, you have to properly nest lists and tuples:</p>
<pre><code>In [162]: arr = np.array([(['23','3'],)], dt_type)
In [163]: arr
Out[163]: array([([23, 3],)], dtype=[('val1', '<i4', (2,))])
</code></pre>
<p>This is (1,) array with a compound dtype. The dtype has 1 field, but 2 values within that field.</p>
<p>With the alternative dtype:</p>
<pre><code>In [165]: dt2 = np.dtype({"names": ["val1", "val2"],"formats": ['<i4', '<i4']})
In [166]: arr2 = np.array([('23','3',)], dt2)
In [167]: arr2
Out[167]: array([(23, 3)], dtype=[('val1', '<i4'), ('val2', '<i4')])
</code></pre>
<p>or the simplest array:</p>
<pre><code>In [168]: arr3 = np.array([23,2])
In [169]: arr3
Out[169]: array([23, 2])
</code></pre>
<p>Writing to a dataset:</p>
<pre><code>In [170]: ds.attrs.create('arr', arr)
In [172]: ds.attrs.create('arr2', arr2)
In [173]: ds.attrs.create('arr3', arr3)
</code></pre>
<p>check the fetch:</p>
<pre><code>In [175]: ds.attrs['arr']
Out[175]: array([([23, 3],)], dtype=[('val1', '<i4', (2,))])
In [176]: ds.attrs['arr2']
Out[176]: array([(23, 3)], dtype=[('val1', '<i4'), ('val2', '<i4')])
In [177]: ds.attrs['arr3']
Out[177]: array([23, 2])
</code></pre>
<p>dump:</p>
<pre><code>1203:~/mypy$ h5dump compound.h5
HDF5 "compound.h5" {
GROUP "/" {
DATASET "test" {
DATATYPE H5T_STD_I64LE
DATASPACE SIMPLE { ( 10 ) / ( 10 ) }
DATA {
(0): 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
}
ATTRIBUTE "arr" {
DATATYPE H5T_COMPOUND {
H5T_ARRAY { [2] H5T_STD_I32LE } "val1";
}
DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
DATA {
(0): {
[ 23, 3 ]
}
}
}
ATTRIBUTE "arr2" {
DATATYPE H5T_COMPOUND {
H5T_STD_I32LE "val1";
H5T_STD_I32LE "val2";
}
DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
DATA {
(0): {
23,
3
}
}
}
ATTRIBUTE "arr3" {
DATATYPE H5T_STD_I64LE
DATASPACE SIMPLE { ( 2 ) / ( 2 ) }
DATA {
(0): 23, 2
}
}
}
}
}
</code></pre>
|
python|numpy|hdf5|h5py
| 2
|
8,506
| 65,082,248
|
('Trying to update a Tensor ', <tf.Tensor: shape=(), dtype=float32, numpy=3.0>)
|
<p>I am trying to run the example shown here:</p>
<p><a href="https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Optimizer" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Optimizer</a></p>
<p>but it gives me this error:</p>
<p>i am using linux with python 3</p>
<pre><code>import tensorflow as tf
import numpy as np
var1=tf.constant(3.0)
var2=tf.constant(3.0)
# Create an optimizer with the desired parameters.
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
# `loss` is a callable that takes no argument and returns the value
# to minimize.
loss = lambda: 3 * var1 * var1 + 2 * var2 * var2
# In graph mode, returns op that minimizes the loss by updating the listed
# variables.
opt_op = opt.minimize(loss, var_list=[var1, var2])
opt_op.run()
# In eager mode, simply call minimize to update the list of variables.
opt.minimize(loss, var_list=[var1, var2])
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-1-f7fa46c26670> in <module>()
12 # In graph mode, returns op that minimizes the loss by updating the listed
13 # variables.
---> 14 opt_op = opt.minimize(loss, var_list=[var1, var2])
15 opt_op.run()
16 # In eager mode, simply call minimize to update the list of variables.
10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py in apply_grad_to_update_var(var, grad)
592 """Apply gradient to variable."""
593 if isinstance(var, ops.Tensor):
--> 594 raise NotImplementedError("Trying to update a Tensor ", var)
595
596 apply_kwargs = {}
NotImplementedError: ('Trying to update a Tensor ', <tf.Tensor: shape=(), dtype=float32, numpy=3.0>)
</code></pre>
|
<p>As suggested by @xdurch0 use tf.Variable instead tf.constant.</p>
<p>Please check the working sample code below.</p>
<pre><code>import tensorflow as tf
import numpy as np
var1=tf.Variable(3.0)
var2=tf.Variable(3.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
# `loss` is a callable that takes no argument and returns the value
# to minimize.
loss = lambda: 3 * var1 * var1 + 2 * var2 * var2
# In graph mode, returns op that minimizes the loss by updating the listed
# variables.
#opt_op = opt.minimize(loss, var_list=[var1, var2])
#opt_op.run()
# In eager mode, simply call minimize to update the list of variables.
opt.minimize(loss, var_list=[var1, var2])
opt.variables()
</code></pre>
<p>Output</p>
<pre><code><function <lambda> at 0x7efdebc7f048>
[<tf.Variable 'SGD/iter:0' shape=() dtype=int64, numpy=1>]
</code></pre>
|
python|tensorflow
| 1
|
8,507
| 65,160,023
|
Any Advice on how to make this CNN training faster?
|
<p>I have been training a Neural Network for recognizing the differences between a paper with handwriting and a paper with Drawings, My images are all in (3508, 2480) size and I'm using a CNN for the task, the problem is that it is taking ages to train, I have 30,000 data belonging to 2 classes which are separated into validation and training, so I have:</p>
<ul>
<li>13650 Images of Handwritten Paragraphs for training</li>
<li>13650 Images of Drawings for training</li>
<li>1350 Images of Drawings for validation</li>
<li>1250 Images of Drawings for validation</li>
</ul>
<p>If you want to see my architecture here it is my</p>
<p><a href="https://i.stack.imgur.com/aNbWQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aNbWQ.png" alt="summary()" /></a></p>
<p>And here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import os
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
from google.colab import drive
drive.mount('/content/drive')
</code></pre>
<pre class="lang-py prettyprint-override"><code>l0 = tf.keras.layers.Conv2D(32, (60,60), activation='relu', input_shape=(438, 310, 1), name='input')
l1 = tf.keras.layers.Dropout(.3)
l2 = tf.keras.layers.BatchNormalization()
l3 = tf.keras.layers.MaxPool2D(pool_size=(2,2),padding='same')
l12 = tf.keras.layers.Flatten()
l16 = tf.keras.layers.Dense(32, activation='relu')
l17 = tf.keras.layers.Dropout(.5)
l18 = tf.keras.layers.BatchNormalization()
l22 = tf.keras.layers.Dense(1, activation='sigmoid', name='output')
</code></pre>
<pre class="lang-py prettyprint-override"><code>from keras.preprocessing.image import ImageDataGenerator
trdata = ImageDataGenerator(rescale=1/255)
traindata = trdata.flow_from_directory("/content/drive/MyDrive/Sae/TesisProgra/DataSets/ParagraphsVsDrawings/Paste/0_Final/Training",target_size=(438, 310), color_mode="grayscale", batch_size=250)
valdata = ImageDataGenerator(rescale=1/255)
validationdata = valdata.flow_from_directory("/content/drive/MyDrive/Sae/TesisProgra/DataSets/ParagraphsVsDrawings/Paste/0_Final/Validation",target_size=(438, 310), color_mode="grayscale", batch_size=250)
</code></pre>
<pre class="lang-py prettyprint-override"><code>from keras.callbacks import ModelCheckpoint, EarlyStopping
checkpoint = ModelCheckpoint("ParagraphsVsDrawings.h5", monitor='val_accuracy', verbose=1, save_best_only=True, save_weights_only=False, save_freq='epoch', mode='auto')
history = model.fit(traindata, validation_data=validationdata, validation_steps=10,epochs=20, verbose=True, callbacks=[checkpoint])
</code></pre>
<p><strong>I´m using Google Colab PRO for the training with TPU and Big RAM options activated</strong></p>
<p>I have trained CNN before, but they trained really fast, I don´t know if it's for my images being to big maybe I could try resizing them with pillow, but I'm really lost at this point, I have been waiting 12 hours and It's still on first epoch</p>
|
<p>Your kernel size, 60 by 60, is quite big. Try 3 by 3 kernel or 5 by 5 kernel. It doesn't seem that image size is the problem since you are resizing from (3508, 2480) to (438, 310).</p>
<p>Also notice that the number of weights you have is very, very large. It is around 24 million. This is because you are flattening a (189, 125, 32) shape array and then your next layer (Dense) has 32 units, so 189 * 125 * 32 * 32 weights for that layer. That will take very, very long to train.</p>
<p>Try to add one or two more conv layers + pooling layers so that the number of weights when flattened is manageable.</p>
|
python|tensorflow|machine-learning|keras|deep-learning
| 0
|
8,508
| 65,485,322
|
How to return multiple columns using apply in Pandas dataframe
|
<p>I am trying to apply a function to a column of a Pandas dataframe, the function returns a list of tuples. This is my function:</p>
<pre><code>def myfunc(text):
    values = []
    sections = api_call(text)
    for (part1, part2, part3) in sections:
        value = (part1, part2, part3)
        values.append(value)
    return values
</code></pre>
<p>For example,</p>
<pre><code>sections=myfunc("History: Had a fever\n Allergies: No")
print(sections)
</code></pre>
<p>output:</p>
<pre><code>[('past_medical_history', 'History:', 'History: Had a fever\n '), ('allergies', 'Allergies:', 'Allergies: No')]
</code></pre>
<p>For each tuple, I would like to create a new column. For example:</p>
<p>the original dataframe looks like this:</p>
<pre><code>id text
0 History: Had a fever\n Allergies: No
1 text2
</code></pre>
<p>and after applying the function, I want the dataframe to look like this (where xxx is various text content):</p>
<pre><code>id text part1 part2 part3
0 History: Had... past_... History: History: ...
0 Allergies: No allergies Allergies: Allergies: No
1 text2 xxx xxx xxx
1 text2 xxx xxx xxx
1 text2 xxx xxx xxx
...
</code></pre>
<p>I could loop through the dataframe and generate a new dataframe but it would be really slow. I tried following code but received a ValueError. Any suggestions?</p>
<pre><code>df.apply(lambda x: pd.Series(myfunc(x['col']), index=['part1', 'part2', 'part3']), axis=1)
</code></pre>
<p>I did a little bit more research, so my question actually boils down to how to unnest a column with a list of tuples. I found the answer from this link <a href="https://stackoverflow.com/questions/44758596/split-a-list-of-tuples-in-a-column-of-dataframe-to-columns-of-a-dataframe">Split a list of tuples in a column of dataframe to columns of a dataframe</a>
helps. And here is what I did</p>
<pre><code># step1: sectionizing
df["sections"] =df["text"].apply(myfunc)
# step2: unnest the sections
part1s = []
part2s = []
part3s = []
ids = []
def create_lists(row):
    tuples = row['sections']
    id = row['id']
    for t in tuples:
        part1s.append(t[0])
        part2s.append(t[1])
        part3s.append(t[2])
        ids.append(id)

df.apply(create_lists, axis=1)
new_df = pd.DataFrame({"part1": part1s, "part2": part2s, "part3": part3s,
                       "id": ids})[["part1", "part2", 'part3', "id"]]
</code></pre>
<p>But the performance is not so good. I wonder if there is better way.</p>
|
<p>The idea here is to set up some data and a function that can be operated on this data to generate three items that we can return. Choosing split and comma-separated values seems to be quick and mirror the function you are after.</p>
<pre><code>import pandas as pd
data = { 'names' : ['x,a,c','y,er,rt','z,1,ere']}
df = pd.DataFrame(data)
</code></pre>
<p>gives</p>
<pre><code> names
0 x,a,c
1 y,er,rt
2 z,1,ere
</code></pre>
<p>now</p>
<pre><code>def myfunc(text):
    sections=text.split(',')
    return sections
df[['part1', 'part2', 'part3']] = df['names'].apply(myfunc)
</code></pre>
<p>will give</p>
<pre><code> names part1 part2 part3
0 x,a,c x y z
1 y,er,rt a er 1
2 z,1,ere c rt ere
</code></pre>
<p>Which is probably not what you want, however</p>
<pre><code>df['part1'] ,df['part2'], df['part3'] = zip(*df['names'].apply(myfunc))
</code></pre>
<p>gives</p>
<pre><code> names part1 part2 part3
0 x,a,c x a c
1 y,er,rt y er rt
2 z,1,ere z 1 ere
</code></pre>
<p>which is probably what you want.</p>
|
pandas|apply
| 1
|
8,509
| 49,806,961
|
Matplotlib 2.02 plotting within a for loop
|
<p>I am having trouble with two things on a plot I am generating within a for loop, my code loads some data in, fits it to a function using curve_fit and then plots measured data and the fit on the same plot for 5 different sets of measured y value (the measured data is represent by empty circle markers and fit by a solid line as the same color as the marker)</p>
<p>Firstly I am struggling to reduce the linewidth of the fit (solid line) however much I reduce the float value of linewidth, I can increase the size just not decrease it by the value displayed in the output below. Secondly I would like the legend to display only circle markers not circles with lines through - I cannot seem to get this to work, any ideas?</p>
<p>Here is my code and attached is the output plot and data file on google drive share link (for some reason it's cutting off long lines of text on this post):</p>
<pre><code>import scipy
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
#define vogel-fulcher-tamman (VFT) function
def vft(x,sigma_0,temp_vf,D):
    return np.log(sigma_0)-((D*temp_vf)/(x-temp_vf))
#load and sort data
data=np.genfromtxt('data file',skip_header=3)
temp=data[:,0]
inverse_temp=data[:,1]
dc_conduct=np.log10(data[:,2:11])
only_adam=dc_conduct[:,4:9]
colors = ['b','g','r','c','m']
labels = ['50mg 2-adam','300mg 2-adam','100 mg 2-adam','150 mg 2-adam','250mg 2-adam']

for i in range(0,len(only_adam)):
    #fit VTF function
    y=only_adam[:,i]
    popt, pcov = curve_fit(vft,temp,y)
    #plotting
    plt.plot(inverse_temp,y,color=colors[i],marker='o',markerfacecolor='none',label=labels[i])
    plt.plot(inverse_temp,vft(temp, *popt),linewidth=0.00001,linestyle='-',color=colors[i])
plt.ylabel("Ionic Conductivity [Scm**2/mol]")
plt.xlabel("1000 / [T(K)]")
plt.axis('tight')
plt.legend(loc='lower left')
</code></pre>
<p><img src="https://i.stack.imgur.com/OOpGY.png" alt="enter image description here"></p>
|
<ul>
<li>You are looping over the rows of <code>only_adam</code>, but index the columns of that array with the loop variable <code>i</code>. This does not make sense and leads to the error shown. </li>
<li>The plot that shows the data points has lines in it. Those are the lines shown. You cannot make them smaller by decreasing the other plot's linewidth. Instead you need to set the linestyle of that plot off, e.g. <code>plot(..., ls="")</code></li>
</ul>
|
python|python-3.x|numpy|matplotlib|scipy
| 1
|
8,510
| 50,064,719
|
Testing if value is contained in Pandas Series with mixed types
|
<p>I have a pandas series, for example: <code>x = pandas.Series([-1,20,"test"])</code>.</p>
<p>Now I would like to test if -1 is contained in <code>x</code> without looping over the whole series. I could transform the whole series to string and then test if <code>"-1" in x</code> but sometimes I have -1.0 and sometime -1 and so on, so this is not a good choice.</p>
<p>Is there another possibility to approach this?</p>
|
<p>What about </p>
<pre><code>x.isin([-1])
</code></pre>
<p>output:</p>
<pre><code>0 True
1 False
2 False
dtype: bool
</code></pre>
<p>Or if you want to have a count of how many instances:</p>
<pre><code>x.isin([-1]).sum()
</code></pre>
<p>Output:</p>
<pre><code>1
</code></pre>
|
python|pandas
| 2
|
8,511
| 49,883,862
|
The .apply() method is not working
|
<p>So <code>friends</code> is a column with a list in each instance such as <code>df['friends][0] = [id1, id2, ..., idn]</code>. I'm trying to count the number of friends in a separate column such as <code>df['friend_counts'][0] = n</code>. </p>
<p>I did the following. I've used this code in other datasets, but for some reason it's taking forever and the dataset is only 300,000 instances.</p>
<pre><code>df_user['friend_counts'] = df_user['friends'].apply(lambda x: len(df_user.friends[x]))
</code></pre>
<p>Also, for some reason this following code creates a <code>season</code> column but is not populated, i.e. it's all just blank spaces. This is troublesome since I did this exact same code for every other dataset. Did they change the <code>.apply()</code> method?</p>
<pre><code>#Convert 'date' to a date time object
df_reviews["date"] = pd.to_datetime(df_reviews["date"])
#Splitting up 'release_date' -> 'release_weekday', 'release_month', 'release_year'
df_reviews["weekday"] = df_reviews["date"].dt.weekday_name
df_reviews["month"] = df_reviews["date"].dt.month
df_reviews["year"] = df_reviews["date"].dt.year

### Helper function
def season_converter(month_name):
    """ Returns the season a particular month is in """
    season = ""
    #Winter
    if month_name in ['Jan', 'Feb', 'Dec']:
        season = "Winter"
    #Spring
    if month_name in ['Mar', 'Apr', 'May']:
        season = "Spring"
    #Summer
    if month_name in ['Jun', 'Jul', 'Aug']:
        season = "Summer"
    #Fall
    if month_name in ['Sep', 'Oct', 'Nov']:
        season = "Fall"
    #Other
    if month_name == "NA":
        season = "NA"
    return season

#Create a new column that holds seasonal information
df_reviews['season'] = df_reviews['month'].apply(lambda x: season_converter(x))
</code></pre>
|
<p>I suggest use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a> by <code>dictionary</code> for improve performance:</p>
<pre><code>d = {1:'Winter', 2:'Winter', 12:'Winter', 3: 'Spring', .... np.nan:'NA', 'NA':'NA'}
df_reviews['season'] = df_reviews['month'].map(d)
</code></pre>
<p>Another solution if is possible use numeric seasons:</p>
<pre><code>df_reviews['season'] = (df_reviews['month'] % 12 + 3) // 3
</code></pre>
|
python-3.x|pandas|apply
| 0
|
8,512
| 50,139,043
|
How to combine/merge Columns within the same Dataframe in Pandas?
|
<p>I have a data frame similar to this: </p>
<pre><code> 0 1 2 3 4 5
0 1001 1 176 REMAINING US SOUTH
1 1002 1 176 REMAINING US SOUTH
</code></pre>
<p>What I would like to do is to combine columns 3,4, and 5 to create on column that has all of the data in columns 3,4, and 5. </p>
<p>Desired output:</p>
<pre><code> 0 1 2 3
0 1001 1 176 REMAINING US SOUTH
1 1002 1 176 REMAINING US SOUTH
</code></pre>
<p>I've already tried </p>
<pre><code>hbadef['6'] = hbadef[['3', '4', '5']].apply(lambda x: ''.join(x), axis=1)
</code></pre>
<p>and that didn't work out. </p>
<p>Here is the stacktrace when I implement</p>
<pre><code> hbadef['3'] = hbadef['3'] + ' ' + hbadef['4'] + ' ' + hbadef['5']
</code></pre>
<p>Stacktrace:</p>
<pre><code>TypeError Traceback (most recent call last)
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()
TypeError: an integer is required
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2524 try:
-> 2525 return self._engine.get_loc(key)
2526 except KeyError:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
KeyError: '3'
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.Int64HashTable.get_item()
TypeError: an integer is required
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
<ipython-input-62-2da6c35d6e89> in <module>()
----> 1 hbadef['3'] = hbadef['3'] + ' ' + hbadef['4'] + ' ' + hbadef['5']
2 # hbadef.drop(['4', '5'], axis=1)
3 # hbadef.columns = ['MKTcode', 'Region']
4
5 # pd.concat(
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
2137 return self._getitem_multilevel(key)
2138 else:
-> 2139 return self._getitem_column(key)
2140
2141 def _getitem_column(self, key):
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\frame.py in _getitem_column(self, key)
2144 # get column
2145 if self.columns.is_unique:
-> 2146 return self._get_item_cache(key)
2147
2148 # duplicate columns & possible reduce dimensionality
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\generic.py in _get_item_cache(self, item)
1840 res = cache.get(item)
1841 if res is None:
-> 1842 values = self._data.get(item)
1843 res = self._box_item_values(item, values)
1844 cache[item] = res
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\internals.py in get(self, item, fastpath)
3841
3842 if not isna(item):
-> 3843 loc = self.items.get_loc(item)
3844 else:
3845 indexer = np.arange(len(self.items))[isna(self.items)]
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
2525 return self._engine.get_loc(key)
2526 except KeyError:
-> 2527 return self._engine.get_loc(self._maybe_cast_indexer(key))
2528
2529 indexer = self.get_indexer([key], method=method, tolerance=tolerance)
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
KeyError: '3'
</code></pre>
<p>I've tried removing the NaN values, but I get a similar result. I am perplexed as to why such a simple function is not working properly. </p>
<p>I'll be accepting an answer so that we can sorta "close" this question. Both of the answers are acceptable and solve the problem, the problem that I'm running into is likely an application error that I will have to solve independently from this question. </p>
|
<p>Use <code>concat</code> + <code>agg</code> </p>
<pre><code>pd.concat(
[df.iloc[:, :3], df.iloc[:, 3:].agg(' '.join, axis=1)],
axis=1,
ignore_index=True
)
0 1 2 3
0 1001 1 176 REMAINING US SOUTH
1 1002 1 176 REMAINING US SOUTH
</code></pre>
|
python|pandas|dataframe
| 1
|
8,513
| 50,184,845
|
Prevent my RAM memory from reaching 100%
|
<p>I have a very simple python script that reads a CSV file and sorts the rows according to the timestamps. However, the file is large enough (16 GB) that its reading uses ram memory completely. When it reaches 100% (i.e. 64 GB RAM memory), my system completely freezes, and I am forced to restart my computer.</p>
<p>Here is the code:</p>
<pre><code>import pandas as pd
from time import time
filename = 'AKER_OB.csv'
start_ = time()
file_ = pd.read_csv(filename)
end_ = time()
duration = end_ - start_
print("The duration to load that file : {}".format(duration))
file_.to_datetime(df['TimeStamps'], format="%Y-%m-%d %H:%M:%S").sort_values()
</code></pre>
<p>Head of <code>AKER_OB.csv</code> :</p>
<pre><code>TimeStamp,Bid1,BidSize1,Bid2,BidSize2,Bid3,BidSize3,Bid4,BidSize4,Bid5,BidSize5,Bid6,BidSize6,Bid7,BidSize7,Bid8,BidSize8,Bid9,BidSize9,Bid10,BidSize10,Bid11,BidSize11,Bid12,BidSize12,Bid13,BidSize13,Bid14,BidSize14,Bid15,BidSize15,Bid16,BidSize16,Bid17,BidSize17,Bid18,BidSize18,Bid19,BidSize19,Bid20,BidSize20,Ask1,AskSize1,Ask2,AskSize2,Ask3,AskSize3,Ask4,AskSize4,Ask5,AskSize5,Ask6,AskSize6,Ask7,AskSize7,Ask8,AskSize8,Ask9,AskSize9,Ask10,AskSize10,Ask11,AskSize11,Ask12,AskSize12,Ask13,AskSize13,Ask14,AskSize14,Ask15,AskSize15,Ask16,AskSize16,Ask17,AskSize17,Ask18,AskSize18,Ask19,AskSize19,Ask20,AskSize20
2016-10-08 00:00:00,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:01,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:02,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:03,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:04,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:05,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:06,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:07,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:08,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
</code></pre>
<p>What is the correct way to fix this problem? A full answer with code snippet will be appreciated. </p>
|
<p>Essentially, you have to implement your own out-of-memory sorting.</p>
<ol>
<li><p>Split your file in two or more pieces with <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer">Pandas CSV chunker</a>, sort each piece (one piece at a time!), save it into a separate CSV file, and free the memory with <code>del</code>.</p></li>
<li><p>Merge the sorted files by opening all of the saved pre-sorted files with CSV chunkers, combining the rows from the chunks, as needed, and appending the sorted rows to the output file.</p></li>
</ol>
|
python|pandas|memory|ram
| 1
|
8,514
| 63,837,400
|
Facebook messenger analysis using jupyter Notebook
|
<p>I downloaded Facebook messenger data and I'm trying to analyze it.
So my goal is to know the number of occurrences of a word in all messages.
I converted the JSON file to a pandas Dataframe, and I have a column that contains all the messages.
I converted the messages column to a list and I tried to use NLTK to count the words.</p>
<pre><code>from nltk.tokenize import word_tokenize
from nltk.probability import FreqDist
mylist = df['content'].tolist()
tok_words = [word_tokenize(i) for i in mylist]
</code></pre>
<p>but the problem is when I tried to use FreqDist it shows an error "TypeError: unhashable type: 'list'"</p>
<pre><code>fdist = FreqDist(tok_words)
fdist.most_common(5)
</code></pre>
<p>and when tried this it works for me</p>
<pre><code>fdist = FreqDist(tok_words[0])
fdist.most_common(5)
</code></pre>
<p>I want to use FreqDist to all of the list not only to the first index.</p>
|
<p>Try:</p>
<pre><code>fdist = FreqDist()
for words in tok_words:
fdist.update(FreqDist(words))
fdist.most_common(5)
</code></pre>
|
pandas|nlp|nltk|tokenize
| 0
|
8,515
| 64,025,109
|
fill values after condition with NaN
|
<p>I have a df like this:</p>
<pre><code>df = pd.DataFrame(
[
['A', 1],
['A', 1],
['A', 1],
['B', 2],
['B', 0],
['A', 0],
['A', 1],
['B', 1],
['B', 0]
], columns = ['key', 'val'])
df
</code></pre>
<p>print:</p>
<pre><code> key val
0 A 1
1 A 1
2 A 1
3 B 2
4 B 0
5 A 0
6 A 1
7 B 1
8 B 0
</code></pre>
<p>I want to fill the rows after 2 in the val column (in the example all values in the val column from row 3 to 8 are replaced with nan).</p>
<p>I tried this:</p>
<pre><code>df['val'] = np.where(df['val'].shift(-1) == 2, np.nan, df['val'])
</code></pre>
<p>and iterating over rows like this:</p>
<pre><code>for row in df.iterrows():
    df['val'] = np.where(df['val'].shift(-1) == 2, np.nan, df['val'])
</code></pre>
<p>but cant get it to fill nan forward.</p>
|
<p>You can use <code>boolean indexing</code> with <code>cummax</code> to fill <code>nan</code> values:</p>
<pre><code>df.loc[df['val'].eq(2).cummax(), 'val'] = np.nan
</code></pre>
<p>Alternatively you can also use <code>Series.mask</code>:</p>
<pre><code>df['val'] = df['val'].mask(lambda x: x.eq(2).cummax())
</code></pre>
<hr />
<pre><code> key val
0 A 1.0
1 A 1.0
2 A 1.0
3 B NaN
4 B NaN
5 A NaN
6 A NaN
7 B NaN
8 B NaN
</code></pre>
|
python-3.x|pandas|numpy
| 6
|
8,516
| 63,980,283
|
Merging MORE THAN two dataframes with pd.merge()
|
<p>Im trying to merge 4 csv files using pd.merge() based on a specific column ('filename'). I read that merge only works for two dataframes, and to instead try and merge the first two, then the 3rd, and then the 4th, in successive steps. This has ultimately worked, with the following code:</p>
<pre><code>combine = pd.merge(file1, file2, on='filename', how='inner')
combine1 = pd.merge(combine, file3, on='filename', how='inner')
combine2 = pd.merge(combine1, file4, on='filename', how='inner')
</code></pre>
<p>producing the following result:</p>
<pre><code>filename, count_x, count_y, count_x, count_y
M116_13331848_13109013422677.jpg, 21, 11, 18, 16
M116_13331848_13109013387678.jpg, 21, 13, 13, 18
M116_13331848_13109013329679.jpg, 19, 15, 16, 15
M116_13331848_13109013424677.jpg, 18, 13, 16, 15
M116_13331848_13109013385678.jpg, 17, 12, 15, 13
</code></pre>
<p>As you can see, the process has generated confusing headers on the columns. So, I tried using the suffixes parameter to control these headers. However, this only works for the first pd.merge() command, and not the second/third. Heres my script in its entirety:</p>
<p><a href="https://i.stack.imgur.com/bi1to.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bi1to.png" alt="SCRIPT" /></a></p>
<p>How can I attribute my own headers to each of the columns in the combined df?</p>
<p>Thank you,
R</p>
|
<p>May be you can use parameter <code>suffixes</code> in merge to control column names. From the <a href="https://pandas.pydata.org/pandas-docs/version/0.25.2/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer">pandas merge documentation</a>:</p>
<blockquote>
<p>Merge DataFrames df1 and df2 with specified left and right suffixes
appended to any overlapping columns.</p>
</blockquote>
<p>In above, something like:</p>
<pre><code>combine = pd.merge(file1, file2, on='filename', how='inner', suffixes=('_file1', '_file2'))
</code></pre>
<p>and similar on other <code>merge</code> too. That way you can know where the count came from while merging.</p>
<p>Example:</p>
<pre><code># Creating Dataframes
df1 = pd.DataFrame({'col1': ['foo', 'bar', 'baz'], 'count': [1, 2, 3]})
df2 = pd.DataFrame({'col1': ['foo', 'bar', 'baz'], 'count': [5, 6, 7]})
</code></pre>
<p>df1:</p>
<pre><code> col1 count
0 foo 1
1 bar 2
2 baz 3
</code></pre>
<p>df2:</p>
<pre><code> col1 count
0 foo 5
1 bar 6
2 baz 7
</code></pre>
<p>Merging</p>
<pre><code>pd.merge(df1, df2, on='col1', suffixes=('_df1', '_df2'))
</code></pre>
<p>Result:</p>
<pre><code> col1 count_df1 count_df2
0 foo 1 5
1 bar 2 6
2 baz 3 7
</code></pre>
<h1>Update</h1>
<p>Given you have four dataframes, may be you can try:</p>
<pre><code># Combine two of them
combine1 = pd.merge(file1, file2, on='filename', how='inner', suffixes=('_file1', '_file2'))
# Combine other two
combine2 = pd.merge(file3, file4, on='filename', how='inner', suffixes=('_file3', '_file4'))
# Now combine the combined dataframes
combine = pd.merge(combine1, combine2, on='filename', how='inner')
</code></pre>
|
python|python-3.x|pandas|merge|jupyter-notebook
| 1
|
8,517
| 47,043,682
|
extending the notion of missing values in pandas
|
<p>Suppose I have third party data which has some goofy convention for missing values (in my particular application, they use '-'). Is there some elegant way to tell pandas to extend its notion of missing value?</p>
|
<p>I think the best way to handle this would be by replacing '-' with np.nan.</p>
<pre><code>df.replace('-', np.nan)
</code></pre>
|
python|pandas
| 2
|
8,518
| 32,701,939
|
pandas resample with function that returns an array
|
<p>The <code>pd.resample</code> function accepts any function that goes from an array to a number as its <code>how</code> keyword argument (although that's not in the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html" rel="nofollow">docs</a>). So the following example works well</p>
<pre><code>#!/usr/bin/python
import numpy as np
import pandas as pd
dates = pd.date_range('20130101', periods=60)
df = pd.DataFrame(np.random.randn(60,4), index=dates, columns=list('ABCD'))
print df.resample('10D', how=np.std, axis=0)
</code></pre>
<p>However, is there a way of doing the same thing with a function that returns an array? For example, if I try <code>df.resample('10D', how=np.fft.rfft, axis=0)</code> pandas will exit with:</p>
<pre><code>Exception: Data must be 1-dimensional
</code></pre>
<p>Now, is there a way of using a function such as <code>rfft</code> with the offset string functionality (e.g. if I wanted the fft of each "10Min" block in my data)?</p>
<p>I know I can probably do this with groupby or separating the dataframe first, but since pandas' offset string is so easy to use (it is especially useful in my area of real data analysis) I was wondering how one could do it and not lose its functionality.</p>
<p><em>EDIT</em></p>
<p>If I try <code>df.groupby(pd.TimeGrouper('10D')).apply(np.fft.rfft, axis=0)</code> it gives me this error:</p>
<pre><code>TypeError: cannot concatenate a non-NDFrame object
</code></pre>
|
<p>Because that fft function changes the shape of the input you can't just apply it directly. Here would be one way to wrap it.</p>
<pre><code>In [331]: def wrap_fft(df):
...: return pd.DataFrame({c:np.fft.rfft(df[c]) for c in df})
In [332]: df.groupby(pd.TimeGrouper('10D')).apply(wrap_fft)
Out[332]:
A \
2013-01-01 0 (0.54057835524+0j)
1 (3.58718639626-2.07316200855j)
2 (1.31007762632+1.22430332479j)
3 (4.36758085029-0.236242884113j)
4 (-0.0546232575249+2.11668684871j)
5 (1.55071284264+0j)
2013-01-11 0 (4.11929430037+0j)
1 (-0.93001545894-2.65804406349j)
2 (1.20206318744-1.43815460311j)
3 (1.24340282215-4.38679576432j)
4 (-0.582004943723-0.943867990404j)
5 (-1.81316546447+0j)
2013-01-21 0 (-1.49246511083+0j)
1 (-1.15010974637+0.527648266336j)
2 (-2.5428259911+2.36604684921j)
3 (-2.76468733089+0.860053921011j)
4 (-1.41328489201-0.36756122307j)
5 (-3.13773122523+0j)
.........
</code></pre>
|
python|arrays|pandas
| 1
|
8,519
| 32,640,438
|
Pandas: rolling std deviation/mean by trading days
|
<p>I am trying to extract the rolling std deviation and mean on trading data by using <code>rolling_*</code> functions of <code>pandas</code>. </p>
<p>My data looks like:</p>
<pre><code>Tick Trading_day Trade_price
VOD 2013-1-2 30.23
VOD 2013-1-2 30.33
VOD 2013-1-2 30.24
VOD 2013-1-5 31.23
VOD 2013-1-5 30.23
VOD 2013-1-6 30.23
VOD 2013-1-7 30.23
VOD 2013-1-8 30.23
VOD 2013-1-9 30.23
... ....... .....
RBS 2013-1-2 15.23
... ....... .....
</code></pre>
<p>Basically, I want to work out mean price and standard deviation of price by <strong>each stock</strong> based on <strong>(-3, +3) trading days</strong>.</p>
<p>Please note there are two <strong>tricky things</strong> here:</p>
<ol>
<li><p><strong>There are various number of trades</strong> in each trading day (frequent trades in liquid day).</p></li>
<li><p>Those are trading days (<strong>not calendar days</strong>), therefore they are not in sequence.</p></li>
</ol>
<p>My ideal output is </p>
<pre><code>Tick Trading_day mean_price std_price
VOD 2013-1-2 30.23 0.13
VOD 2013-1-5 30.11 0.09
VOD 2013-1-6 30.24 0.15
... ..... ....... .....
RBS 2013-1-2 15.23 0.19
</code></pre>
<p>Anyone has idea ? Thanks in advance !</p>
|
<p>Here is the data I'm using in this example:</p>
<pre><code>df = pd.DataFrame({'Tick': ['VOD'] * 7 + ['RBS'] * 2,
'Trade_price': [30.23, 30.24, 31.23, 30.23, 30.23, 30.23, 30.23, 14.11, 15.23],
'Trading_day': ['1/2/13', '1/2/13', '1/5/13', '1/5/13', '1/6/13', '1/7/13', '1/8/13', '1/2/13', '1/5/13']})
</code></pre>
<p>First, let's use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow"><code>.to_datetime()</code></a> and make your date column Pandas timestamps if they are not already.</p>
<pre><code>df['Trading_day'] = pd.to_datetime(df.Trading_day)
</code></pre>
<p>Next, well group and transform the data so that we take the mean price for each ticker on any given day and that dates are unique in the index:</p>
<pre><code>df = df.groupby(['Trading_day', 'Tick']).Trade_price.mean().unstack()
>>> df
Tick RBS VOD
Trading_day
2013-01-02 14.11 30.235
2013-01-05 15.23 30.730
2013-01-06 NaN 30.230
2013-01-07 NaN 30.230
2013-01-08 NaN 30.230
</code></pre>
<p>Now, you want to "work out mean price and standard deviation of price by each stock based on (-3, +3) trading days.". One way to do this is to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.rolling_mean.html" rel="nofollow"><code>pd.rolling_mean()</code></a> and center the results. Given the limited data set, I am using a 3 day centered window (i.e. the prior day, current day and following day). You will want to use a 7 day window to get +/- 3 trading days.</p>
<pre><code>>>> pd.rolling_mean(df, 3, center=True)
Tick RBS VOD
Trading_day
2013-01-02 NaN NaN
2013-01-05 NaN 30.398333
2013-01-06 NaN 30.396667
2013-01-07 NaN 30.230000
2013-01-08 NaN NaN
</code></pre>
<p>And to get a rolling standard deviation, just use <code>pd.rolling_std()</code>.</p>
|
python|pandas|trading
| 3
|
8,520
| 38,523,920
|
Variable shift in Pandas
|
<p>having two columns A and B in a dataframe: </p>
<pre><code> A B
0 1 6
1 2 7
2 1 8
3 2 9
4 1 10
</code></pre>
<p>I would like to create a column C. C must have values of B shifted by value of A:</p>
<pre><code> A B C
0 1 6 NaN
1 2 7 NaN
2 1 8 7
3 2 9 7
4 1 10 9
</code></pre>
<p>The command:</p>
<pre><code>df['C'] = df['B'].shift(df['A'])
</code></pre>
<p>does not work.
Do you have any other ideas?</p>
|
<p>I'd use help from <code>numpy</code> to avoid the <code>apply</code></p>
<pre><code>l = np.arange(len(df)) - df.A.values
df['C'] = np.where(l >=0, df.B.values[l], np.nan)
df
A B C
0 1 6 NaN
1 2 7 NaN
2 1 8 7.0
3 2 9 7.0
4 1 10 9.0
</code></pre>
<hr>
<p><strong><em>simple time test</em></strong> </p>
<p><a href="https://i.stack.imgur.com/f7E0V.png" rel="noreferrer"><img src="https://i.stack.imgur.com/f7E0V.png" alt="enter image description here"></a></p>
|
python|pandas|shift
| 5
|
8,521
| 63,220,132
|
SQLAlchemy Insert to MySQL DB UnicodeEncodeError for cyrlic data
|
<p>I use Python.SQLAlchemy with MySQL Database.
All code bellow normal work for latin symbols in data, but not work for cyrilic:</p>
<blockquote>
<p>UnicodeEncodeError: 'charmap' codec can't encode characters in
position 0-17: character maps to </p>
</blockquote>
<p>I added "encoding='utf8', convert_unicode=True" in engine constructor but is nothing to change</p>
<p>Charset/Collation in MySQL for Table: utf8 / utf8-bin</p>
<p>Code:</p>
<h1>Database connect</h1>
<pre><code> def DB_alchemy(self, category, db="mysql://user:pass@localhost/all_gid_2"):
    self.sql_engine = sql.create_engine(db, echo=True, encoding='utf8', convert_unicode=True)
    metadata = sql.MetaData(self.sql_engine)
    sql_tbl_name_products = category+'_products'
    sql_tbl_name_class = category + '_classes'
    self.tbl_products = sql.Table(sql_tbl_name_products, metadata, autoload=True)
    self.tbl_classes = sql.Table(sql_tbl_name_class, metadata, autoload=True)
    self.connection = self.sql_engine.connect()
</code></pre>
<p>....</p>
<h1>Insert</h1>
<pre><code>def Insert_df_to_SQL(self, df, tbl):
    dict_insert = df.to_dict(orient='records')
    insert_qry = tbl.insert()
    self.connection.execute(insert_qry, dict_insert)
</code></pre>
<p>The echo of SQLAlchemy engine in cyrrilic data:</p>
<blockquote>
<p>2020-08-02 22:07:05,839 INFO sqlalchemy.engine.base.Engine INSERT INTO
<code>Nb_classes</code> (type, class_subtype, text, explanation, name) VALUES
(%s, %s, %s, %s, %s) 2020-08-02 22:07:05,840 INFO
sqlalchemy.engine.base.Engine ('CL', 'Производительность',
'Расширенная функциональность', 'Стандартный процессор, внешняя
графика начального уровня, мультимедиа', 'CL_discret_lite') 2020-08-02
22:07:05,840 INFO sqlalchemy.engine.base.Engine ROLLBACK</p>
</blockquote>
<pre><code> Traceback (most recent call last):
File "C:/Users/shulya403/Shulya403_works/all_gid_2/Database/db_insert_pd.py", line 259, in <module>
FillDB.Classes_to_SQL(df_new=FillDB.df_Classes.head(3))
File "C:/Users/shulya403/Shulya403_works/all_gid_2/Database/db_insert_pd.py", line 241, in Classes_to_SQL
self.Insert_df_to_SQL(df_select, self.tbl_classes)
File "C:/Users/shulya403/Shulya403_works/all_gid_2/Database/db_insert_pd.py", line 166, in Insert_df_to_SQL
self.connection.execute(insert_qry, dict_insert)
File "C:\Users\shulya403\Shulya403_works\all_gid_2\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1014, in execute
return meth(self, multiparams, params)
File "C:\Users\shulya403\Shulya403_works\all_gid_2\venv\lib\site-packages\sqlalchemy\sql\elements.py", line 298, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "C:\Users\shulya403\Shulya403_works\all_gid_2\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1133, in _execute_clauseelement
distilled_params,
File "C:\Users\shulya403\Shulya403_works\all_gid_2\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1318, in _execute_context
e, statement, parameters, cursor, context
File "C:\Users\shulya403\Shulya403_works\all_gid_2\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1515, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File "C:\Users\shulya403\Shulya403_works\all_gid_2\venv\lib\site-packages\sqlalchemy\util\compat.py", line 178, in raise_
raise exception
File "C:\Users\shulya403\Shulya403_works\all_gid_2\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1278, in _execute_context
cursor, statement, parameters, context
File "C:\Users\shulya403\Shulya403_works\all_gid_2\venv\lib\site-packages\sqlalchemy\engine\default.py", line 593, in do_execute
cursor.execute(statement, parameters)
File "C:\Users\shulya403\Shulya403_works\all_gid_2\venv\lib\site-packages\MySQLdb\cursors.py", line 199, in execute
args = tuple(map(db.literal, args))
File "C:\Users\shulya403\Shulya403_works\all_gid_2\venv\lib\site-packages\MySQLdb\connections.py", line 280, in literal
s = self.string_literal(o.encode(self.encoding))
File "C:\Users\shulya403\AppData\Local\Continuum\anaconda3\lib\encodings\cp1252.py", line 12, in encode
return codecs.charmap_encode(input,errors,encoding_table)
UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-17: character maps to <undefined>
</code></pre>
<p>What can i do?</p>
|
<blockquote>
<p>Try adding <code>?charset=utf8mb4</code> to the end of your connection URI. – Gord Thompson</p>
</blockquote>
|
mysql|python-3.x|pandas|encoding|sqlalchemy
| 6
|
8,522
| 63,095,949
|
Error when trying to read multiple .csv files in Jupyter Notebook using python
|
<p>I am given a file that contains 1000 .csv files(data0,data1,data2..........,data999) and I need to read all those files. So, I tried it on my own.
This was my approach: read data0.csv and perform transpose on it and then loop it through all the data*.csv files and then append them. But I was getting an error. Could someone help me out?
Reading data0.csv file and transposing it:</p>
<pre><code>df = pd.read_csv('data0.csv')
print (df.head(10))
df_temp = df
df_main = df_temp.transpose()
df_main
new_df = [df_main]
for i in range(1000):
filename = "data%d.csv"%i
df_s = pd.read_csv(filename)
new_df= pd.concat([df_s])
new_df[1]
</code></pre>
<p><a href="https://i.stack.imgur.com/Wp6Nr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wp6Nr.png" alt="enter image description here" /></a></p>
<p>looping through 1000 files, transposing and concating:</p>
<p><a href="https://i.stack.imgur.com/OXCsF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OXCsF.png" alt="enter image description here" /></a></p>
<p>after transposing and appending all the 1000 csv files I should be getting 1000 rows x 150 columns. But I am not getting that.</p>
|
<p>I couldn't test this, because you did not provide an example of your file as text. Please try to provide a <a href="https://stackoverflow.com/help/minimal-reproducible-example">minimal reproducible example</a> next time.</p>
<p>My solution is a minor variation of <a href="https://stackoverflow.com/questions/20906474/import-multiple-csv-files-into-pandas-and-concatenate-into-one-dataframe">this SO post</a> mentioned by @Ranika Nisal.</p>
<pre><code>dfs = [pd.read_csv(f'data{i}.csv') for i in range(1000)]
df = pd.concat(dfs, axis=0, ignore_index=True)
</code></pre>
<p>Your solution did not generate a list of dataframes which is required for pd.concat() to work. Also, you tried to access the second dataframe with <code>new_df[1]</code> but there was only one element in your list. That's the reason why you've received a <code>KeyError</code>.</p>
|
python|pandas|csv|jupyter-notebook
| 0
|
8,523
| 63,058,307
|
Tensorflow Dataset Mask Sequence for Evaluation
|
<h3>Problem:</h3>
<p>Given variable-length inputs, the accuracy metric is incorrect because short vectors are padded to longer ones.</p>
<p>Using the <code>Masking</code> layer from <code>keras</code> solves this, by applying a mask to all zero values, but because my sequence contains zeros naturally, and super long (50,000 tokens) this slows down training by 50 times!</p>
<h3>Details</h3>
<p>I have a dataset represented in <code>tf.data.Dataset</code> which contains 3 properties per example:</p>
<ol>
<li><code>src</code> - an input sequence</li>
<li><code>tgt</code> - class ID output sequence</li>
<li><code>tokens</code> number of tokens in the sequence</li>
</ol>
<p>And I would like to train a sequence tagging model.</p>
<p>In order to use it with Keras, I understand I need to only have an <code>x</code> and <code>y</code> in my dataset, so I map:</p>
<pre class="lang-py prettyprint-override"><code>dataset = dataset.map(lambda d: (d['src'], d['tgt']))
</code></pre>
<p>And I pass to a keras model:</p>
<pre class="lang-py prettyprint-override"><code>model = tf.keras.Sequential([
    tf.keras.layers.LSTM(hidden_size, return_sequences=True),
    tf.keras.layers.Dense(2),
    tf.keras.layers.Activation(activations.softmax)
])
</code></pre>
<p>Is there a way to apply a mask as part of the dataset pipeline, in graph mode? (the mask is <code>tf.sequence_mask(datum['tokens'])</code>)</p>
<h3>Alternative solution</h3>
<p>Alternatively, there is no issue with passing an unmasked sequence, if I could pass in my dataset the number of <code>tokens</code> as well, and create my own evaluation metric that applies the mask there.</p>
<p>I couldn't find how to pass a dataset with 3 items rather than 2; the Keras sequential model doesn't seem to allow that.</p>
|
<p>Hope this helps. You can see the dataset carries a third element in addition to the usual input/output pair, and the model produces more than one output, so you can pass extra information in and out of the neural network. I hope you'll find the customizable nature of this example useful.</p>
<pre><code>import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras import Model
from sklearn.datasets import load_iris
from functools import partial
tf.keras.backend.set_floatx('float64')
iris, target = load_iris(return_X_y=True)
X = iris[:, :3]
y = iris[:, 3]
z = target
onehot = partial(tf.one_hot, depth=3)
dataset = tf.data.Dataset.from_tensor_slices((X, y, z)).shuffle(150)
train_ds = dataset.take(120).shuffle(10).\
batch(8).map(lambda a, b, c: (a, b, onehot(c)))
test_ds = dataset.skip(120).take(30).shuffle(10).\
batch(8).map(lambda a, b, c: (a, b, onehot(c)))
next(iter(train_ds))
class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.d0 = Dense(64, activation='relu')
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(1)
        self.d3 = Dense(3)

    def call(self, x, training=None, **kwargs):
        x = self.d0(x)
        x = self.d1(x)
        a = self.d2(x)
        b = self.d3(x)
        return a, b
model = MyModel()
loss_obj_reg = tf.keras.losses.MeanAbsoluteError()
loss_obj_cat = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
loss_reg_train = tf.keras.metrics.Mean(name='regression loss')
loss_cat_train = tf.keras.metrics.Mean(name='categorical loss')
loss_reg_test = tf.keras.metrics.Mean(name='regression loss')
loss_cat_test = tf.keras.metrics.Mean(name='categorical loss')
train_acc = tf.keras.metrics.CategoricalAccuracy()
test_acc = tf.keras.metrics.CategoricalAccuracy()
@tf.function
def train_step(inputs, y_reg, y_cat):
    with tf.GradientTape() as tape:
        pred_reg, pred_cat = model(inputs, training=True)
        reg_loss = loss_obj_reg(y_reg, pred_reg)
        cat_loss = loss_obj_cat(y_cat, pred_cat)
    gradients = tape.gradient([reg_loss, cat_loss], model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    loss_reg_train(reg_loss)
    loss_cat_train(cat_loss)
    train_acc(y_cat, pred_cat)

@tf.function
def test_step(inputs, y_reg, y_cat):
    pred_reg, pred_cat = model(inputs, training=False)
    reg_loss = loss_obj_reg(y_reg, pred_reg)
    cat_loss = loss_obj_cat(y_cat, pred_cat)
    loss_reg_test(reg_loss)
    loss_cat_test(cat_loss)
    test_acc(y_cat, pred_cat)

for epoch in range(250):
    loss_reg_train.reset_states()
    loss_cat_train.reset_states()
    loss_reg_test.reset_states()
    loss_cat_test.reset_states()
    train_acc.reset_states()
    test_acc.reset_states()
    for xx, yy, zz in train_ds:
        train_step(xx, yy, zz)
    for xx, yy, zz in test_ds:
        test_step(xx, yy, zz)
    template = 'Epoch {:3} ' \
               'MAE {:5.3f} TMAE {:5.3f} ' \
               'Entr {:5.3f} TEntr {:5.3f} ' \
               'Acc {:7.2%} TAcc {:7.2%}'
    print(template.format(epoch+1,
                          loss_reg_train.result(),
                          loss_reg_test.result(),
                          loss_cat_train.result(),
                          loss_cat_test.result(),
                          train_acc.result(),
                          test_acc.result()))
</code></pre>
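<p>For completeness: if you would rather keep the stock <code>model.fit</code> from your question, <code>fit</code> also accepts a <code>tf.data.Dataset</code> that yields <code>(x, y, sample_weight)</code> triples. The weights are applied to the loss, and to any metrics you list under <code>weighted_metrics</code> in <code>compile</code>, which is exactly what you need to keep padded positions out of the accuracy. A minimal sketch (assuming a fixed padded length; <code>MAX_LEN</code> is a placeholder, not something from your pipeline):</p>
<pre><code>import tensorflow as tf

MAX_LEN = 50000  # hypothetical padded sequence length

def add_mask(d):
    # 1.0 for real tokens, 0.0 for padding positions
    weights = tf.cast(tf.sequence_mask(d['tokens'], MAX_LEN), tf.float32)
    return d['src'], d['tgt'], weights

masked_dataset = dataset.map(add_mask)
# model.compile(..., weighted_metrics=['sparse_categorical_accuracy'])
# model.fit(masked_dataset, ...)
</code></pre>
<p>Depending on your TensorFlow version you may additionally need <code>sample_weight_mode='temporal'</code> in <code>compile</code> for per-timestep weights.</p>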
<p>Let me know if you need any kind of information.</p>
|
python|tensorflow|keras
| 1
|
8,524
| 67,994,364
|
Is there a convenient way to get class-specific Average Precision scores for a TensorFlow Object Detection model?
|
<p>The TensorFlow Object Detection API has a very convenient way to get performance metrics for trained models (described in their tutorial <a href="https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html#evaluating-the-model-optional" rel="nofollow noreferrer">here</a>). Unfortunately, the Average Precision scores for each class are not provided.</p>
<p>Does anyone know of a convenient way to get class-specific AP scores for TensorFlow Object Detection models without having to write your own script to do it?</p>
<p>The <a href="https://stackoverflow.com/questions/59227059/how-to-evaluate-tensorflow-object-detection-api-on-only-one-particular-class">same question</a> was asked in 2019 but was not answered.</p>
<p>Thanks!</p>
|
<p>After some more searching, I found a couple solutions.</p>
<p><strong>1. Use a different evaluation configuration</strong></p>
<p>Simply change the <code>metrics_set</code> value in the <code>*.config</code> file for your model to <code>"pascal_voc_detection_metrics"</code>.</p>
<p>The TensorFlow Object Detection API supports a variety of evaluation metrics, detailed in the documentation <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/evaluation_protocols.md" rel="nofollow noreferrer">here</a>. The PASCAL VOC 2010 detection metric gives AP scores for each class.</p>
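<p>For reference, the change is a one-line edit inside the <code>eval_config</code> block of your pipeline <code>*.config</code> file. Only the relevant fragment is sketched here; the rest of your config stays as it is:</p>
<pre><code>eval_config: {
  metrics_set: "pascal_voc_detection_metrics"
}
</code></pre>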
<p><strong>2. Edit the <code>cocoeval.py</code> file in your pycocotools package</strong></p>
<p>This method involves pasting 8 lines of code into the <code>cocoeval.py</code> file. It is well explained and documented in <a href="https://stackoverflow.com/questions/56247323/coco-api-evaluation-for-subset-of-classes">this</a> post.</p>
|
python|tensorflow|object-detection-api
| 1
|
8,525
| 41,515,877
|
How to set index values in a MultiIndex pandas DataFrame?
|
<p>I have a MultiIndex pandas DataFrame and need to change the level 0 index <em>values</em>. The current DataFrame has a MultiIndex called <code>(test, time)</code>.</p>
<p>How do I set the string "Test 4" (in the top level of the index) to be some other string?</p>
<p>For extra points, many of my DataFrames will have more than one value in the <code>test</code> level of the index. Can I use a list of names to rename more than one at a time?</p>
<p>Thank you for any help.</p>
<p><a href="https://i.stack.imgur.com/zV1sJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zV1sJ.png" alt="enter image description here"></a></p>
<p>(edited:) My data is:</p>
<pre><code>test time force displacement minutes event
'Test 4' 0.0000 .1202 0 0.00000 False
'Test 4' 0.0012 .0901 0 0.00002 False
'Test 4' 0.0018 .0901 0 0.00003 False
'Test 4' 0.0030 .0901 0 0.00004 False
'Test 4' 0.0042 .0901 0 0.00005 False
'Test 5' 0.0000 .1203 0 0.00000 False
'Test 5' 0.0012 .0901 0 0.00002 False
</code></pre>
<p>with a multi index set to <code>['test', 'time']</code></p>
|
<p>You can pass a dictionary to <code>rename</code> to rename specific values in any index.</p>
<pre><code>df.rename(index={'Test4':'something else'})
</code></pre>
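<p>For the extra-points part — renaming more than one value in the <code>test</code> level at once — the same dictionary can simply carry several entries (the labels below are the ones from your sample data; the new names are arbitrary):</p>
<pre><code>df = df.rename(index={'Test 4': 'Experiment A', 'Test 5': 'Experiment B'})
</code></pre>
<p>In newer pandas versions you can also pass <code>level='test'</code> to <code>rename</code> to make sure only that level of the MultiIndex is touched.</p>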
|
python|pandas
| 6
|
8,526
| 41,275,921
|
NaNs when extracting no. of days between two dates in pandas
|
<p>I have a dataframe that contains the columns company_id, seniority, join_date and quit_date. I am trying to extract the number of days between join date and quit date. However, I get NaNs.</p>
<p>If I drop off all the columns in the dataframe except for quit date and join date and run the same code again, I get what I expect. However with all the columns, I get NaNs. </p>
<p>Here's my code:</p>
<pre><code>df['join_date'] = pd.to_datetime(df['join_date'])
df['quit_date'] = pd.to_datetime(df['quit_date'])
df['days'] = df['quit_date'] - df['join_date']
df['days'] = df['days'].astype(str)
df1 = pd.DataFrame(df.days.str.split(' ').tolist(), columns = ['days', 'unwanted', 'stamp'])
df['numberdays'] = df1['days']
</code></pre>
<p>This is what I get:</p>
<pre><code>days numberdays
585 days 00:00:00 NaN
340 days 00:00:00 NaN
</code></pre>
<p>I want 585 from the 'days' column in the 'numberdays' column. Similarly for every such row. </p>
<p>Can someone help me with this?</p>
<p>Thank you!</p>
|
<p>Instead of converting to string, extract the number of days from the timedelta value using the <code>dt</code> accessor.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'join_date': ['2014-03-24', '2013-04-29', '2014-10-13'],
'quit_date':['2015-10-30', '2014-04-04', '']})
df['join_date'] = pd.to_datetime(df['join_date'])
df['quit_date'] = pd.to_datetime(df['quit_date'])
df['days'] = df['quit_date'] - df['join_date']
df['number_of_days'] = df['days'].dt.days
</code></pre>
<p>@Mohammad Yusuf Ghazi points out that <code>dt.day</code> is needed instead of <code>dt.days</code> when working with datetime data rather than timedelta data.</p>
|
python|pandas
| 2
|
8,527
| 27,686,240
|
Calculate Mahalanobis distance using NumPy only
|
<p>I am looking for a NumPy way of calculating the Mahalanobis distance between two numpy arrays (x and y).
The following code can correctly calculate the same using the cdist function of SciPy. Since this function calculates an unnecessary matrix in my case, I want a more direct way of calculating it using NumPy only.</p>
<pre><code>import numpy as np
from scipy.spatial.distance import cdist
x = np.array([[[1,2,3,4,5],
[5,6,7,8,5],
[5,6,7,8,5]],
[[11,22,23,24,5],
[25,26,27,28,5],
[5,6,7,8,5]]])
i,j,k = x.shape
xx = x.reshape(i,j*k).T
y = np.array([[[31,32,33,34,5],
[35,36,37,38,5],
[5,6,7,8,5]],
[[41,42,43,44,5],
[45,46,47,48,5],
[5,6,7,8,5]]])
yy = y.reshape(i,j*k).T
results = cdist(xx,yy,'mahalanobis')
results = np.diag(results)
print results
[ 2.28765854 2.75165028 2.75165028 2.75165028 0. 2.75165028
2.75165028 2.75165028 2.75165028 0. 0. 0. 0.
0. 0. ]
</code></pre>
<p>My trial:</p>
<pre><code>VI = np.linalg.inv(np.cov(xx,yy))
print np.sqrt(np.dot(np.dot((xx-yy),VI),(xx-yy).T))
</code></pre>
<p>Could anybody correct this method?</p>
<p>Here is formula for it:</p>
<p><a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.distance.mahalanobis.html#scipy.spatial.distance.mahalanobis" rel="noreferrer">http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.distance.mahalanobis.html#scipy.spatial.distance.mahalanobis</a></p>
|
<p>I think your problem lies in the construction of your covariance matrix. Try:</p>
<pre><code>X = np.vstack([xx,yy])
V = np.cov(X.T)
VI = np.linalg.inv(V)
print np.diag(np.sqrt(np.dot(np.dot((xx-yy),VI),(xx-yy).T)))
</code></pre>
<p>Output:</p>
<pre><code>[ 2.28765854 2.75165028 2.75165028 2.75165028 0. 2.75165028
2.75165028 2.75165028 2.75165028 0. 0. 0. 0.
0. 0. ]
</code></pre>
<p>To do this without the intermediate array implicitly created here, you might have to sacrifice a C loop for a Python one:</p>
<pre><code>A = np.dot((xx-yy),VI)
B = (xx-yy).T
n = A.shape[0]
D = np.empty(n)
for i in range(n):
    D[i] = np.sqrt(np.sum(A[i] * B[:,i]))
</code></pre>
<p>EDIT: actually, with <code>np.einsum</code> voodoo you can remove the Python loop and speed it up a lot (on my system, from 84.3 µs to 2.9 µs):</p>
<pre><code>D = np.sqrt(np.einsum('ij,ji->i', A, B))
</code></pre>
<p>EDIT: As @Warren Weckesser points out, <code>einsum</code> can be used to do away with the intermediate <code>A</code> and <code>B</code> arrays too:</p>
<pre><code>delta = xx - yy
D = np.sqrt(np.einsum('nj,jk,nk->n', delta, VI, delta))
</code></pre>
|
python|numpy
| 24
|
8,528
| 61,531,470
|
pd.Series.to_list() changing dtype
|
<p>When I am programming on colab, I keep running into this issue:</p>
<p>Here is my df:</p>
<pre><code>0 1
0 [2.7436598593417045e-05, 3.731542193080655e-05]
1 [8.279973504084787e-05, 2.145002145002145e-05]
2 [0.00022534319714215346, 0.0002031172259231674]
3 [3.239841667031943e-05, 2.7771297808289177e-05]
4 [0.00011311134356928321, 9.428422928088026e-05]
</code></pre>
<p>I want to get the data from df[1] into a list of lists so I can feed it into my model. To do so, I run:</p>
<pre><code>df[1].to_list()
</code></pre>
<p>and i get:</p>
<pre><code>['[2.7436598593417045e-05, 3.731542193080655e-05]',
'[8.279973504084787e-05, 2.145002145002145e-05]',
'[0.00022534319714215346, 0.00020311722592316746]',
'[3.239841667031943e-05, 2.7771297808289177e-05]',
'[0.00011311134356928321, 9.428422928088026e-05]']
</code></pre>
<p>which is a list of strings which I cannot use to feed into the model. I use this code all the time locally and it works fine, but on colab I get this result. Any ideas? The result I want is:</p>
<pre><code>[[2.7436598593417045e-05, 3.731542193080655e-05],
[8.279973504084787e-05, 2.145002145002145e-05],
[0.00022534319714215346, 0.00020311722592316746],
[3.239841667031943e-05, 2.7771297808289177e-05],
[0.00011311134356928321, 9.428422928088026e-05]]
</code></pre>
|
<p>Try <code>ast.literal_eval</code></p>
<pre><code>from ast import literal_eval
df[1].map(literal_eval).to_list()
[[2.7436598593417045e-05, 3.731542193080655e-05],
[8.279973504084787e-05, 2.145002145002145e-05],
[0.00022534319714215346, 0.00020311722592316746],
[3.239841667031943e-05, 2.7771297808289177e-05],
[0.00011311134356928321, 9.428422928088026e-05]]
</code></pre>
|
python|pandas|numpy|google-colaboratory
| 2
|
8,529
| 61,239,649
|
can not update Tensorflow for Conda
|
<p>I want to update Tensorflow from 1.14 to 2.1.0 but I'm not able to do it.</p>
<p>After I had installed it with command</p>
<blockquote>
<p>conda install -c anaconda tensorflow-gpu</p>
</blockquote>
<p><code>print(tensorflow.__version__)</code> shows me that I have version 1.14.0</p>
<p>The same after</p>
<blockquote>
<p>conda update tensorflow-gpu</p>
</blockquote>
<p>Even after </p>
<blockquote>
<p>conda install <a href="https://anaconda.org/anaconda/tensorflow-gpu/2.1.0/download/win-64/tensorflow-gpu-2.1.0-h0d30ee6_0.tar.bz2" rel="nofollow noreferrer">https://anaconda.org/anaconda/tensorflow-gpu/2.1.0/download/win-64/tensorflow-gpu-2.1.0-h0d30ee6_0.tar.bz2</a></p>
</blockquote>
<p>I get in command prompt:</p>
<pre><code>Downloading and Extracting Packages
tensorflow-gpu-2.1.0 | ######################################################################################## | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
C:\Windows\system32>set "KERAS_BACKEND="
C:\Windows\system32>python C:\Anaconda3\etc\keras\load_config.py 1>temp.txt
C:\Windows\system32>set /p KERAS_BACKEND= 0<temp.txt
C:\Windows\system32>del temp.txt
C:\Windows\system32>python -c "import keras" 1>nul 2>&1
C:\Windows\system32>if errorlevel 1 (
ver 1>nul
set "KERAS_BACKEND=theano"
python -c "import keras" 1>nul 2>&1
)
</code></pre>
<p>but I'm still on 1.14.0</p>
|
<p>You can update Tensorflow by typing this command to the Anaconda Prompt. </p>
<pre><code>conda install -c conda-forge tensorflow=2.1.0
</code></pre>
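<p>After the install finishes, you can confirm which version the interpreter actually picks up:</p>
<pre><code>import tensorflow as tf
print(tf.__version__)  # should now show 2.1.0
</code></pre>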
<p>Hope this works!</p>
|
python-3.x|tensorflow|conda
| 1
|
8,530
| 61,215,270
|
input_shape with image_generator in Tensorflow
|
<p>I'm trying to use this approach in TensorFlow 2.X to load a large dataset that does not fit in memory.</p>
<p>I have a folder with X sub-folders that contains images. Each sub-folder is a class.</p>
<pre><code>\dataset
-\class1
-img1_1.jpg
-img1_2.jpg
-...
-\classe2
-img2_1.jpg
-img2_2.jpg
-...
</code></pre>
<p>I create my data generator from my folder like this:</p>
<pre><code>train_data_gen = image_generator.flow_from_directory(directory="path\\to\\dataset",
batch_size=100,
shuffle=True,
target_size=(100, 100), # Image H x W
classes=list(CLASS_NAMES)) # list of folder/class names ["class1", "class2", ...., "classX"]
</code></pre>
<blockquote>
<p>Found 629 images belonging to 2 classes.</p>
</blockquote>
<p>I've made a smaller dataset to test the pipeline. Only 629 images in 2 classes.
Now I can create a dummy model like this:</p>
<pre><code>model = tf.keras.Sequential()
model.add(Dense(1, activation=activation, input_shape=(100, 100, 3))) # only 1 layer of 1 neuron
model.add(Dense(2)) # 2classes
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=['categorical_accuracy'])
</code></pre>
<p>Once compiled, I try to fit this dummy model:</p>
<pre><code>STEPS_PER_EPOCH = np.ceil(image_count / batch_size) # 629 / 100
model.fit_generator(generator=train_data_gen , steps_per_epoch=STEPS_PER_EPOCH, epochs=2, verbose=1)
1/7 [===>..........................] - ETA: 2s - loss: 1.1921e-07 - categorical_accuracy: 0.9948
2/7 [=======>......................] - ETA: 1s - loss: 1.1921e-07 - categorical_accuracy: 0.5124
3/7 [===========>..................] - ETA: 0s - loss: 1.1921e-07 - categorical_accuracy: 0.3449
4/7 [================>.............] - ETA: 0s - loss: 1.1921e-07 - categorical_accuracy: 0.2662
5/7 [====================>.........] - ETA: 0s - loss: 1.1921e-07 - categorical_accuracy: 0.2130
6/7 [========================>.....] - ETA: 0s - loss: 1.1921e-07 - categorical_accuracy: 0.1808
</code></pre>
<blockquote>
<p>2020-04-14 20:39:48.629203: W tensorflow/core/framework/op_kernel.cc:1610] Invalid argument: ValueError: <code>generator</code> yielded an element of shape (29, 100, 100, 3) where an element of shape (100, 100, 100, 3) was expected.</p>
</blockquote>
<p>From what I understand, the last batch doesn't have the same shape as the previous batches, so it crashes. I've tried to specify a <code>batch_input_shape</code>.</p>
<pre><code>model.add(Dense(1, activation=activation, batch_input_shape=(None, 100, 100, 3)))
</code></pre>
<p>I've found <a href="https://stackoverflow.com/a/52126464/5462743">here</a> that I should put <code>None</code> to not specify the number of elements in the batch so it can be dynamic. But no success.</p>
<p>Edit: From the comment I had 2 mistakes:</p>
<ul>
<li>The output shape was bad. I missed the flatten layer in the model.</li>
<li>The previous link does work with the correction of the flatten layer</li>
<li>I omitted some code: I actually feed <code>fit_generator</code> with a <code>tf.data.Dataset.from_generator</code>, but I had shown an <code>image_generator.flow_from_directory</code> here.</li>
</ul>
<p>Here is the final code:</p>
<pre><code>train_data_gen = image_generator.flow_from_directory(directory="path\\to\\dataset",
batch_size=1000,
shuffle=True,
target_size=(100, 100),
classes=list(CLASS_NAMES))
train_dataset = tf.data.Dataset.from_generator(
lambda: train_data_gen,
output_types=(tf.float32, tf.float32),
output_shapes=([None, x, y, 3],
[None, len(CLASS_NAMES)]))
model = tf.keras.Sequential()
model.add(Flatten(batch_input_shape=(None, 100, 100, 3)))
model.add(Dense(1, activation=activation))
model.add(Dense(2))
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=['categorical_accuracy'])
STEPS_PER_EPOCH = np.ceil(image_count / batch_size) # 629 / 100
model.fit_generator(generator=train_data_gen , steps_per_epoch=STEPS_PER_EPOCH, epochs=2, verbose=1)
</code></pre>
|
<p>For the benefit of the community, here I am explaining how to use <code>image_generator</code> in TensorFlow with input shape <code>(100, 100, 3)</code>, using the <code>dogs vs cats</code> dataset.</p>
<p>If we haven't chosen the right batch size, there is a chance the model gets stuck right after the first epoch, hence I am starting my explanation with <code>how to choose batch_size?</code></p>
<p>We generally choose the <code>batch size</code> to be a <code>power of 2</code>; this is because optimized matrix operation libraries work most effectively with such sizes. This is further elaborated in <a href="https://arxiv.org/abs/1303.2314" rel="nofollow noreferrer">this</a> research paper.</p>
<p>Check out <a href="https://mydeeplearningnb.wordpress.com/2019/02/23/convnet-for-classification-of-cifar-10/" rel="nofollow noreferrer">this</a> blog, which describes how to choose the right <code>batch size</code> by comparing the effects of different batch sizes on the <code>accuracy</code> of the CIFAR-10 dataset.</p>
<p>Here is the end to end working code with outputs</p>
<pre><code>import os
import numpy as np
from tensorflow.keras import layers
import pandas as pd
from tensorflow.keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from tensorflow.keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from tensorflow.keras.models import Sequential
from tensorflow.keras import regularizers, optimizers
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import backend as K
K.set_image_data_format('channels_last')
train_dir = '/content/drive/My Drive/Dogs_Vs_Cats/train'
test_dir = '/content/drive/My Drive/Dogs_Vs_Cats/test'
img_width, img_height = 100, 100
input_shape = img_width, img_height, 3
train_samples = 2000
test_samples = 1000
epochs = 30
batch_size = 32
train_datagen = ImageDataGenerator(
rescale = 1. /255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(
rescale = 1. /255)
train_data = train_datagen.flow_from_directory(
train_dir,
target_size = (img_width, img_height),
batch_size = batch_size,
class_mode = 'binary')
test_data = test_datagen.flow_from_directory(
test_dir,
target_size = (img_width, img_height),
batch_size = batch_size,
class_mode = 'binary')
model = Sequential()
model.add(Conv2D(32, (7, 7), strides = (1, 1), input_shape = input_shape))
model.add(BatchNormalization(axis = 3))
model.add(Activation('relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (7, 7), strides = (1, 1)))
model.add(BatchNormalization(axis = 3))
model.add(Activation('relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(loss = 'binary_crossentropy',
optimizer = 'rmsprop',
metrics = ['accuracy'])
model.fit_generator(
train_data,
steps_per_epoch = train_samples//batch_size,
epochs = epochs,
validation_data = test_data,
verbose = 1,
validation_steps = test_samples//batch_size)
</code></pre>
<p>Output:</p>
<pre><code>Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Model: "sequential_5"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_8 (Conv2D) (None, 94, 94, 32) 4736
_________________________________________________________________
batch_normalization_8 (Batch (None, 94, 94, 32) 128
_________________________________________________________________
activation_8 (Activation) (None, 94, 94, 32) 0
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 47, 47, 32) 0
_________________________________________________________________
conv2d_9 (Conv2D) (None, 41, 41, 64) 100416
_________________________________________________________________
batch_normalization_9 (Batch (None, 41, 41, 64) 256
_________________________________________________________________
activation_9 (Activation) (None, 41, 41, 64) 0
_________________________________________________________________
max_pooling2d_9 (MaxPooling2 (None, 20, 20, 64) 0
_________________________________________________________________
flatten_4 (Flatten) (None, 25600) 0
_________________________________________________________________
dense_11 (Dense) (None, 64) 1638464
_________________________________________________________________
dropout_4 (Dropout) (None, 64) 0
_________________________________________________________________
dense_12 (Dense) (None, 1) 65
=================================================================
Total params: 1,744,065
Trainable params: 1,743,873
Non-trainable params: 192
_________________________________________________________________
Epoch 1/30
62/62 [==============================] - 14s 225ms/step - loss: 1.8307 - accuracy: 0.4853 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 2/30
62/62 [==============================] - 14s 226ms/step - loss: 0.7085 - accuracy: 0.4832 - val_loss: 0.6931 - val_accuracy: 0.5010
Epoch 3/30
62/62 [==============================] - 14s 218ms/step - loss: 0.6955 - accuracy: 0.5300 - val_loss: 0.6894 - val_accuracy: 0.5292
Epoch 4/30
62/62 [==============================] - 14s 221ms/step - loss: 0.6938 - accuracy: 0.5407 - val_loss: 0.7309 - val_accuracy: 0.5262
Epoch 5/30
62/62 [==============================] - 14s 218ms/step - loss: 0.6860 - accuracy: 0.5498 - val_loss: 0.6776 - val_accuracy: 0.5665
Epoch 6/30
62/62 [==============================] - 13s 216ms/step - loss: 0.7027 - accuracy: 0.5407 - val_loss: 0.6895 - val_accuracy: 0.5101
Epoch 7/30
62/62 [==============================] - 13s 216ms/step - loss: 0.6852 - accuracy: 0.5528 - val_loss: 0.6567 - val_accuracy: 0.5887
Epoch 8/30
62/62 [==============================] - 13s 217ms/step - loss: 0.6772 - accuracy: 0.5427 - val_loss: 0.6643 - val_accuracy: 0.5847
Epoch 9/30
62/62 [==============================] - 13s 217ms/step - loss: 0.6709 - accuracy: 0.5534 - val_loss: 0.6623 - val_accuracy: 0.5887
Epoch 10/30
62/62 [==============================] - 14s 219ms/step - loss: 0.6579 - accuracy: 0.5711 - val_loss: 0.6614 - val_accuracy: 0.6058
Epoch 11/30
62/62 [==============================] - 13s 218ms/step - loss: 0.6591 - accuracy: 0.5625 - val_loss: 0.6594 - val_accuracy: 0.5454
Epoch 12/30
62/62 [==============================] - 13s 216ms/step - loss: 0.6419 - accuracy: 0.5767 - val_loss: 1.1041 - val_accuracy: 0.5161
Epoch 13/30
62/62 [==============================] - 13s 215ms/step - loss: 0.6479 - accuracy: 0.5783 - val_loss: 0.6441 - val_accuracy: 0.5837
Epoch 14/30
62/62 [==============================] - 13s 216ms/step - loss: 0.6373 - accuracy: 0.5899 - val_loss: 0.6427 - val_accuracy: 0.6310
Epoch 15/30
62/62 [==============================] - 13s 215ms/step - loss: 0.6203 - accuracy: 0.6133 - val_loss: 0.7390 - val_accuracy: 0.6220
Epoch 16/30
62/62 [==============================] - 13s 217ms/step - loss: 0.6277 - accuracy: 0.6362 - val_loss: 0.6649 - val_accuracy: 0.5786
Epoch 17/30
62/62 [==============================] - 13s 215ms/step - loss: 0.6155 - accuracy: 0.6316 - val_loss: 0.9823 - val_accuracy: 0.5484
Epoch 18/30
62/62 [==============================] - 14s 222ms/step - loss: 0.6056 - accuracy: 0.6408 - val_loss: 0.6333 - val_accuracy: 0.6048
Epoch 19/30
62/62 [==============================] - 14s 218ms/step - loss: 0.6025 - accuracy: 0.6529 - val_loss: 0.6514 - val_accuracy: 0.6442
Epoch 20/30
62/62 [==============================] - 13s 215ms/step - loss: 0.6149 - accuracy: 0.6423 - val_loss: 0.6373 - val_accuracy: 0.6048
Epoch 21/30
62/62 [==============================] - 13s 215ms/step - loss: 0.6030 - accuracy: 0.6519 - val_loss: 0.6086 - val_accuracy: 0.6573
Epoch 22/30
62/62 [==============================] - 13s 217ms/step - loss: 0.5936 - accuracy: 0.6865 - val_loss: 1.0677 - val_accuracy: 0.5605
Epoch 23/30
62/62 [==============================] - 13s 214ms/step - loss: 0.5964 - accuracy: 0.6728 - val_loss: 0.7927 - val_accuracy: 0.5877
Epoch 24/30
62/62 [==============================] - 13s 215ms/step - loss: 0.5866 - accuracy: 0.6707 - val_loss: 0.6116 - val_accuracy: 0.6421
Epoch 25/30
62/62 [==============================] - 13s 214ms/step - loss: 0.5933 - accuracy: 0.6662 - val_loss: 0.8282 - val_accuracy: 0.6048
Epoch 26/30
62/62 [==============================] - 13s 214ms/step - loss: 0.5705 - accuracy: 0.6885 - val_loss: 0.5806 - val_accuracy: 0.6966
Epoch 27/30
62/62 [==============================] - 14s 218ms/step - loss: 0.5709 - accuracy: 0.7017 - val_loss: 1.2404 - val_accuracy: 0.5333
Epoch 28/30
62/62 [==============================] - 13s 216ms/step - loss: 0.5691 - accuracy: 0.7104 - val_loss: 0.6136 - val_accuracy: 0.6442
Epoch 29/30
62/62 [==============================] - 13s 215ms/step - loss: 0.5627 - accuracy: 0.7048 - val_loss: 0.6936 - val_accuracy: 0.6613
Epoch 30/30
62/62 [==============================] - 13s 214ms/step - loss: 0.5714 - accuracy: 0.6941 - val_loss: 0.5872 - val_accuracy: 0.6825
</code></pre>
|
tensorflow2.0|tensorflow-datasets
| 0
|
8,531
| 61,289,172
|
Pandas drop subset of dataframe
|
<p>Assume we have <code>df</code> and <code>df_drop</code>:</p>
<pre><code>df = pd.DataFrame({'A': [1,2,3], 'B': [1,1,1]})
df_drop = df[df.A==df.B]
</code></pre>
<p>I want to delete <code>df_drop</code> from <code>df</code> without using the explicit conditions used when creating <code>df_drop</code>. I.e. I'm not after the solution <code>df[df.A!=df.B]</code>, but would like to, basically, take <code>df</code> minus <code>df_drop</code> somehow. Hope this is clear enough; otherwise happy to elaborate!</p>
|
<p>You can <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a> both dataframes setting <code>indicator=True</code> and drop those columns where the indicator column is <code>both</code>:</p>
<pre><code>out = pd.merge(df,df_drop, how='outer', indicator=True)
out[out._merge.ne('both')].drop('_merge',1)
A B
1 2 1
2 3 1
</code></pre>
<hr>
<p>Or as jon clements points out, if checking by index is enough, you could simply use:</p>
<pre><code>df.drop(df_drop.index)
</code></pre>
|
python|pandas|dataframe
| 3
|
8,532
| 68,669,726
|
How to create rows in a pandas dataframe that are averages of other rows?
|
<p>Take a dataframe like this one:</p>
<pre><code>import pandas as pd
info = {'Year': [2010, 2010, 2010, 2010, 2015, 2015, 2015, 2015],
'Country': ['USA', 'Mexico', 'Canada', 'China', 'USA', 'Mexico', 'Canada', 'China'],
'AgeAvg': [40, 44, 45, 49, 45, 46, 50, 52],
'HeightAvg': [68, 65, 67, 68, 69, 70, 64, 67]}
df = pd.DataFrame(data=info)
df
Year Country AgeAvg HeightAvg
0 2010 USA 40 68
1 2010 Mexico 44 65
2 2010 Canada 45 67
3 2010 China 49 68
4 2015 USA 45 69
5 2015 Mexico 46 70
6 2015 Canada 50 64
7 2015 China 52 67
</code></pre>
<p>I want to add rows for 2011, 2012, 2013, and 2014. These rows will follow the same Countries, and have a smoothed average of the variables. For example, 2011 USA Age will be 41, 2012 USA age 42, 2013 USA age 43, 2014 USA age 44. This way the age will span from 2010 to 2015. I would also like to do this for all variables (like height in this case), not just age. Is there a way to do this in Python with Pandas?</p>
|
<p>Use <code>pd.MultiIndex.from_product</code> to reindex your dataframe and interpolate values:</p>
<pre><code>mi = pd.MultiIndex.from_product([df['Country'].unique(),
range(df.Year.min(), df.Year.max()+1)])
out = df.set_index(['Country', 'Year']).reindex(mi)
out = out.groupby(level=0).apply(lambda x: x.interpolate())
</code></pre>
<pre><code>>>> out
AgeAvg HeightAvg
USA 2010 40.0 68.0
2011 41.0 68.2
2012 42.0 68.4
2013 43.0 68.6
2014 44.0 68.8
2015 45.0 69.0
Mexico 2010 44.0 65.0
2011 44.4 66.0
2012 44.8 67.0
2013 45.2 68.0
2014 45.6 69.0
2015 46.0 70.0
Canada 2010 45.0 67.0
2011 46.0 66.4
2012 47.0 65.8
2013 48.0 65.2
2014 49.0 64.6
2015 50.0 64.0
China 2010 49.0 68.0
2011 49.6 67.8
2012 50.2 67.6
2013 50.8 67.4
2014 51.4 67.2
2015 52.0 67.0
</code></pre>
<p>You can swap levels if you prefer <code>Year</code> first.</p>
<pre><code>out = out.swaplevel().sort_index()
</code></pre>
|
python|pandas|dataframe
| 1
|
8,533
| 68,861,665
|
Pandas add dataframe to another row-wise by columns setting columns not available in the other as "nan"
|
<p>Say we have two dataframes, A with columns a,b,c and B with columns a,b,d and some values</p>
<p>A =</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>c</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<td>7</td>
<td>8</td>
<td>9</td>
</tr>
</tbody>
</table>
</div>
<p>and B =</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>d</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>6</td>
</tr>
<tr>
<td>7</td>
<td>8</td>
<td>9</td>
</tr>
</tbody>
</table>
</div>
<p>Is there a pandas function with can combine the two so that
C = f(A,B) =</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>c</th>
<th>d</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
<td>nan</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>6</td>
<td>nan</td>
</tr>
<tr>
<td>7</td>
<td>8</td>
<td>9</td>
<td>nan</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>nan</td>
<td>3</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>nan</td>
<td>6</td>
</tr>
<tr>
<td>7</td>
<td>8</td>
<td>nan</td>
<td>9</td>
</tr>
</tbody>
</table>
</div>
<p>In other words, the columns that exist in one dataframe but not the other should be set to 'nan' in the other when adding the rows, but still add rows values on the columns common to both. I've tried join, concat and merge, but it seems that they don't work in this way, or I've used them wrong. Anyone have suggestions?</p>
|
<p>Use <code>pd.concat([A, B], axis=0, ignore_index=True)</code></p>
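<p>A small reproduction with the values from the tables above, showing the columns missing from one frame coming out as NaN:</p>
<pre><code>import pandas as pd

A = pd.DataFrame({'a': [1, 4, 7], 'b': [2, 5, 8], 'c': [3, 6, 9]})
B = pd.DataFrame({'a': [1, 4, 7], 'b': [2, 5, 8], 'd': [3, 6, 9]})

C = pd.concat([A, B], axis=0, ignore_index=True)
print(C)
#    a  b    c    d
# 0  1  2  3.0  NaN
# 1  4  5  6.0  NaN
# 2  7  8  9.0  NaN
# 3  1  2  NaN  3.0
# 4  4  5  NaN  6.0
# 5  7  8  NaN  9.0
</code></pre>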
|
python|pandas|dataframe
| 1
|
8,534
| 68,690,917
|
Colour intensity is changing when stacking numpy image arrays
|
<p>I am loading an image from <a href="https://github.com/Jimut123/simply_junk/blob/main/image.nii.gz?raw=true" rel="nofollow noreferrer">here</a>, which is saved as .nii.gz. The image opens fine (with a dimension of (497x497)), and when displayed using matplotlib, it shows with correct intensities, as shown below:</p>
<p><a href="https://i.stack.imgur.com/gJ5ar.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gJ5ar.png" alt="kidney image 1" /></a></p>
<p>When we try to make it a 3-channel image by stacking the numpy array and then plot it, the intensity changes, as shown below:</p>
<p><a href="https://i.stack.imgur.com/h0sjb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h0sjb.png" alt="kidney image 2" /></a></p>
<p>Any idea how to resolve this? The minimal code to reproduce the problem in google colaboratory is present here:</p>
<pre><code>import cv2
import glob
import numpy as np
import nibabel as nib
import skimage.io as io
from nibabel import nifti1
import matplotlib.pyplot as plt
! wget "https://github.com/Jimut123/simply_junk/blob/main/image.nii.gz?raw=true" -O image.nii.gz
image_file_name = "image.nii.gz"
print(image_file_name)
epi_img = nib.load(image_file_name)
epi_img_data = epi_img.get_fdata()
epi_img_data = epi_img_data/epi_img_data.max()
# epi_img_data = epi_img_data[..., np.newaxis]
plt.imshow(epi_img_data)
plt.show()
total_mask = np.stack((epi_img_data,)*3, axis=-1)
plt.imshow(total_mask,cmap='gray')
plt.show()
</code></pre>
<p>Like for example this is a 3-channel image (RGB):</p>
<p><a href="https://i.stack.imgur.com/UcxeT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UcxeT.png" alt="enter image description here" /></a></p>
<p>But this looks exactly like its grayscale version. For the image above, when stacking numpy arrays, I cannot make the result look similar to the grayscale one.</p>
|
<p>If you scale it as Cristolph and Johan described, the 3-channel plot becomes identical:</p>
<pre class="lang-py prettyprint-override"><code>epi_img_data -= epi_img_data.min()
epi_img_data /= epi_img_data.max()
total_mask = np.stack((epi_img_data,)*3, axis=-1)
plt.imshow(total_mask)
</code></pre>
<p><a href="https://i.stack.imgur.com/xhpu2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xhpu2.png" alt="1- vs 3-channel" /></a></p>
|
python|numpy|opencv|matplotlib
| 2
|
8,535
| 68,529,258
|
RTX 3070 compatibility with Pytorch
|
<blockquote>
<p>NVIDIA GeForce RTX 3070 with CUDA capability sm_86 is not compatible
with the current PyTorch installation. The current PyTorch install
supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.</p>
</blockquote>
<p>So I'm currently trying to train a neural network but I'm getting this issue. It seems that the GPU model I have is not compatible with the version of PyTorch that I have.</p>
<p>The output of my nvcc -V is:</p>
<pre><code>Cuda compilation tools, release 11.4, V11.4.48
Build cuda_11.4.r11.4/compiler.30033411_0
</code></pre>
<p>and my PyTorch version is 1.9.0.</p>
<p>I've tried changing the CUDA version from what was initially 10 to 11.4 and there was no change. Any help would be hugely appreciated.</p>
|
<p>It might be because you have installed a torch package with cuda==10.* (e.g. <code>torch==1.9.0+cu102</code>) . I'd suggest trying:</p>
<pre><code>pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
</code></pre>
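<p>Once that build is installed, a quick sanity check (standard PyTorch calls, nothing specific to your setup) confirms the wheel sees the card:</p>
<pre><code>import torch
print(torch.__version__)            # should end in +cu111
print(torch.cuda.is_available())    # True once the sm_86 GPU is supported
print(torch.cuda.get_device_name(0))
</code></pre>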
|
deep-learning|pytorch|gpu|nvidia
| 5
|
8,536
| 36,523,861
|
Can pandas SparseSeries store values in the float16 dtype?
|
<p>The reason why I want to use a smaller data type in the sparse pandas containers is to reduce memory usage. This is relevant when working with data that originally uses bool (e.g. from <code>to_dummies</code>) or small numeric dtypes (e.g. int8), which are all converted to float64 in sparse containers.</p>
<h2>DataFrame creation</h2>
<p>The provided example uses a modest 20k x 145 dataframe. In practice I'm working with dataframes in the order of 1e6 x 5e3.</p>
<pre><code>In []: bool_df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 19849 entries, 0 to 19848
Columns: 145 entries, topic.party_nl.p.pvda to topic.sub_cat_Reizen
dtypes: bool(145)
memory usage: 2.7 MB
In []: bool_df.memory_usage(index=False).sum()
Out[]: 2878105
In []: bool_df.values.itemsize
Out[]: 1
</code></pre>
<p>A sparse version of this dataframe needs less memory, but is still much larger than needed, given the original dtype.</p>
<pre><code>In []: sparse_df = bool_df.to_sparse(fill_value=False)
In []: sparse_df.info()
<class 'pandas.sparse.frame.SparseDataFrame'>
RangeIndex: 19849 entries, 0 to 19848
Columns: 145 entries, topic.party_nl.p.pvda to topic.sub_cat_Reizen
dtypes: float64(145)
memory usage: 1.1 MB
In []: sparse_df.memory_usage(index=False).sum()
Out[]: 1143456
In []: sparse_df.values.itemsize
Out[]: 8
</code></pre>
<p>Even though this data is fairly sparse, the dtype conversion from bool to float64 causes non-fill values to take up 8x more space.</p>
<pre><code>In []: sparse_df.memory_usage(index=False).describe()
Out[]:
count 145.000000
mean 7885.903448
std 17343.762402
min 8.000000
25% 640.000000
50% 1888.000000
75% 4440.000000
max 84688.000000
</code></pre>
<p>Given the sparsity of the data, one would hope for a more drastic reduction in memory size:</p>
<pre><code>In []: sparse_df.density
Out[]: 0.04966184346992205
</code></pre>
<h2>Memory footprint of underlying storage</h2>
<p>The columns of <code>SparseDataFrame</code> are <code>SparseSeries</code>, which use <code>SparseArray</code> as a wrapper for the underlying <code>numpy.ndarray</code> storage. The number of bytes that are used by the sparse dataframe can (also) be computed directly from these ndarrays:</p>
<pre><code>In []: col64_nbytes = [
.....: sparse_df[col].values.sp_values.nbytes
.....: for col in sparse_df
.....: ]
In []: sum(col64_nbytes)
Out[]: 1143456
</code></pre>
<p>The ndarrays can be converted to use smaller floats, which allows one to calculate how much memory the dataframe would need when using e.g. float16s. This would result in a 4x smaller dataframe, as one might expect.</p>
<pre><code>In []: col16_nbytes = [
.....: sparse_df[col].values.sp_values.astype('float16').nbytes
.....: for col in sparse_df
.....: ]
In []: sum(col16_nbytes)
Out[]: 285864
</code></pre>
<p>By using the more appropriate dtype, the memory usage can be reduced to 10% of the dense version, whereas the float64 sparse dataframe reduces to 40%. For my data, this could make the difference between needing 20 GB and 5 GB of available memory.</p>
<pre><code>In []: sum(col64_nbytes) / bool_df.memory_usage(index=False).sum()
Out[]: 0.3972947477593764
In []: sum(col16_nbytes) / bool_df.memory_usage(index=False).sum()
Out[]: 0.0993236869398441
</code></pre>
<h2>Issue</h2>
<p>Unfortunately, dtype conversion of sparse containers has not been implemented in pandas:</p>
<pre><code>In []: sparse_df.astype('float16')
---------------------------------------------------
[...]/pandas/sparse/frame.py in astype(self, dtype)
245
246 def astype(self, dtype):
--> 247 raise NotImplementedError
248
249 def copy(self, deep=True):
NotImplementedError:
</code></pre>
<p>How can the <code>SparseSeries</code> in a <code>SparseDataFrame</code> be converted to use the <code>numpy.float16</code> data type, or another dtype that uses fewer than 64 bytes per item, instead of the default <code>numpy.float64</code>?</p>
|
<p>The <code>SparseArray</code> constructor can be used to convert its underlying <code>ndarray</code>'s dtype. To convert all sparse series in a dataframe, one can iterate over the df's series, convert their arrays, and replace the series with converted versions.</p>
<pre><code>import pandas as pd
import numpy as np

def convert_sparse_series_dtype(sparse_series, dtype):
    dtype = np.dtype(dtype)
    if 'float' not in str(dtype):
        raise TypeError('Sparse containers only support float dtypes')

    sparse_array = sparse_series.values
    converted_sp_array = pd.SparseArray(sparse_array, dtype=dtype)
    converted_sp_series = pd.SparseSeries(converted_sp_array)
    return converted_sp_series

def convert_sparse_columns_dtype(sparse_dataframe, dtype):
    for col_name in sparse_dataframe:
        if isinstance(sparse_dataframe[col_name], pd.SparseSeries):
            sparse_dataframe.loc[:, col_name] = convert_sparse_series_dtype(
                sparse_dataframe[col_name], dtype
            )
</code></pre>
<p>This achieves the stated purpose of reducing the sparse dataframe's memory footprint:</p>
<pre><code>In []: sparse_df.info()
<class 'pandas.sparse.frame.SparseDataFrame'>
RangeIndex: 19849 entries, 0 to 19848
Columns: 145 entries, topic.party_nl.p.pvda to topic.sub_cat_Reizen
dtypes: float64(145)
memory usage: 1.1 MB
In []: convert_sparse_columns_dtype(sparse_df, 'float16')
In []: sparse_df.info()
<class 'pandas.sparse.frame.SparseDataFrame'>
RangeIndex: 19849 entries, 0 to 19848
Columns: 145 entries, topic.party_nl.p.pvda to topic.sub_cat_Reizen
dtypes: float16(145)
memory usage: 279.2 KB
In []: bool_df.equals(sparse_df.to_dense().astype('bool'))
Out[]: True
</code></pre>
<p>It is, however, a somewhat lousy solution, because the converted dataframe behaves unpredictably when it interacts with other dataframes. For instance, when converted sparse dataframes are concatenated with other dataframes, all contained series become dense series. This is not the case for unconverted sparse dataframes. They remain sparse series in the resulting dataframe.</p>
|
python|pandas|sparse-matrix|type-conversion
| 1
|
8,537
| 53,124,999
|
mark attendance of the user in dataframe
|
<p>I have a dataframe in which a user's daily entry and exit is noted, but the user comes at different time each day, for example below is the input user data</p>
<pre><code>Date UserID Intime Outtime
2018-06-29 73456 2018-06-29 07:30:54 2018-06-29 15:30:13
2018-06-28 73456 2018-06-28 08:29:23 2018-06-28 17:28:31
2018-06-27 73456 2018-06-27 11:26:02 2018-06-27 19:30:09
2018-06-26 73456 2018-06-26 14:20:42 2018-06-26 23:25:38
2018-06-25 73456 2018-06-25 07:31:19 2018-06-25 16:24:26
</code></pre>
<p>I need to maintain an hourly record of this user, so in a separate sheet I have the date and time on an hourly basis. I need to add <code>1</code> in the user field for each hour the user is in. For example, the output data:</p>
<pre><code>Hours User
2018-06-29 0:00:00
2018-06-29 1:00:00
2018-06-29 2:00:00
2018-06-29 3:00:00
2018-06-29 4:00:00
2018-06-29 5:00:00
2018-06-29 6:00:00
2018-06-29 7:00:00 1
2018-06-29 8:00:00 1
2018-06-29 9:00:00 1
2018-06-29 10:00:00 1
2018-06-29 11:00:00 1
2018-06-29 12:00:00 1
2018-06-29 13:00:00 1
2018-06-29 14:00:00 1
2018-06-29 15:00:00 1
2018-06-29 16:00:00
2018-06-29 17:00:00
</code></pre>
<p>I am able to create the hour column but not able to mark attendance for the hours the user is in.</p>
<p>Any help will be highly appreciated. Thank you ! </p>
|
<p>Try this:</p>
<p>Build the hourly set</p>
<pre><code>s = pd.date_range(df1.index[0], df1.index[-1]+pd.DateOffset(1), freq='H')
idx = pd.period_range(df1.index[0], df1.index[-1]+pd.DateOffset(1), freq='H')
idx = idx[:-1]
</code></pre>
<p>Find when the index is inside the range of <code>Intime</code> and <code>Outtime</code></p>
<pre><code>sol = [int((s[i] >= df1.iloc[j,1] - pd.DateOffset(hours=1)) & (s[i] <= df1.iloc[j,2])) for j in range(len(df1)) for i in range(len(idx))]
</code></pre>
<p>Use numpy to reshape the list into a friendlier format</p>
<pre><code>sol2 = np.array(sol)
sol3 = np.reshape(sol2, (s.shape[0]-1,len(df1)),order = 'F')
</code></pre>
<p>Build the desired series</p>
<pre><code>ans = pd.Series(np.amax(sol3, axis=1),idx.values)
</code></pre>
<p>display the results</p>
<pre><code>print(ans)
</code></pre>
<p>output (for last day):</p>
<pre><code>2018-06-29 00:00 0
2018-06-29 01:00 0
2018-06-29 02:00 0
2018-06-29 03:00 0
2018-06-29 04:00 0
2018-06-29 05:00 0
2018-06-29 06:00 0
2018-06-29 07:00 1
2018-06-29 08:00 1
2018-06-29 09:00 1
2018-06-29 10:00 1
2018-06-29 11:00 1
2018-06-29 12:00 1
2018-06-29 13:00 1
2018-06-29 14:00 1
2018-06-29 15:00 1
2018-06-29 16:00 0
2018-06-29 17:00 0
2018-06-29 18:00 0
2018-06-29 19:00 0
2018-06-29 20:00 0
2018-06-29 21:00 0
</code></pre>
|
python|pandas|dataframe
| 2
|
8,538
| 53,333,131
|
Cannot Import Pandas in Python3.6.5
|
<p>I have installed pandas successfully in Terminal using the command: sudo pip3 install pandas. The installation information is shown as below:</p>
<pre><code>Requirement already up-to-date: pandas in ./.local/lib/python3.6/site-
packages (0.23.4)
Requirement already satisfied, skipping upgrade: python-dateutil>=2.5.0 in
/usr/local/lib/python3.6/site-packages (from pandas) (2.7.3)
Requirement already satisfied, skipping upgrade: numpy>=1.9.0 in
/usr/local/lib/python3.6/site-packages (from pandas) (1.14.3)
Requirement already satisfied, skipping upgrade: pytz>=2011k in
/usr/local/lib/python3.6/site-packages (from pandas) (2018.4)
Requirement already satisfied, skipping upgrade: six>=1.5 in
/usr/local/lib/python3.6/site-packages (from python-dateutil>=2.5.0->pandas)
(1.11.0)
</code></pre>
<p>When I import pandas in Python 3.6 or 2.7, the result is the same error: No module named pandas.
According to
<a href="https://stackoverflow.com/questions/33839960/why-cant-import-pandas-after-installed-successfully">why can't import pandas after installed successfully?</a></p>
<p>I have typed the following commands in Terminal but they do not work.</p>
<pre><code>export PYTHONPATH=$PYTHONPATH:/usr/local/python3.6/lib/python3.6/site-packages
export PYTHONPATH=$PYTHONPATH:./.local/lib/python3.6/site-packages
export PYTHONPATH=$PYTHONPATH:/usr/local/python3.6/lib/site-packages
</code></pre>
<p>What should I do? Thanks in advance.</p>
|
<p>Are you using Jupyter Notebook? What can happen is that you have a virtual environment that has no Jupyter Notebook installed. When you launch it from your current environment, it falls back to the base environment and searches for it there. If that search is successful, Jupyter will open, but in a different environment that may not have pandas.</p>
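<p>One quick way to confirm which interpreter and search path the failing session actually uses (nothing here is specific to your setup, it is just the standard library):</p>
<pre><code>import sys
print(sys.executable)  # the Python binary this session runs on
print(sys.path)        # where this interpreter looks for packages
</code></pre>
<p>If <code>sys.executable</code> is not the Python 3.6 you installed pandas into with <code>sudo pip3</code>, that mismatch would explain the missing module.</p>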
|
python|pandas
| 0
|
8,539
| 53,337,769
|
How to optimize below code to run faster, size of my dataframe is almost 100,000 data points
|
<pre><code>def encoder(expiry_dt, expiry1, expiry2, expiry3):
    if expiry_dt == expiry1:
        return 1
    if expiry_dt == expiry2:
        return 2
    if expiry_dt == expiry3:
        return 3
FINAL['Expiry_encodings'] = FINAL.apply(lambda row: '{0}_{1}_{2}_{3}_{4}'.format(row['SYMBOL'],row['INSTRUMENT'],row['STRIKE_PR'],row['OPTION_TYP'], encoder(row['EXPIRY_DT'],
row['Expiry1'],
row['Expiry2'],
row['Expiry3'])), axis =1)
</code></pre>
<p>The code runs fine but it's too slow; is there any alternative to achieve this in less time?</p>
<p><a href="https://i.stack.imgur.com/oAstX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oAstX.png" alt="Sample Dataframe"></a></p>
|
<p>Give the following a try:</p>
<pre><code>FINAL['expiry_number'] = '0'
for c in '321':
    FINAL.loc[FINAL['EXPIRY_DT'] == FINAL['Expiry'+c], 'expiry_number'] = c
FINAL['Expiry_encodings'] = FINAL['SYMBOL'].astype(str) + '_' + \
FINAL['INSTRUMENT'].astype(str) + '_' + FINAL['STRIKE_PR'].astype(str) + \
'_' + FINAL['OPTION_TYP'].astype(str) + '_' + FINAL['expiry_number']
</code></pre>
<p>This avoids the three <code>if</code> statements, has a default value (<code>'0'</code>) if none of the if statements evaluates to <code>True</code>, and avoids all the string formatting; above that, it also avoids the <code>apply</code> method with a <code>lambda</code>.</p>
<p>Note on the <code>'321'</code> order: this reflects the order in which the if-chain in the original code section is evaluated: <code>'Expiry3'</code> has the lowest priority, and in my code given here, it is first overridden by #2 and then by #1. The original if-chain would shortcut at #1, given that the highest priority. For example, if <code>'Expiry1'</code> and <code>'Expiry3'</code> have the same value (equal to <code>'EXPIRY_DT'</code>), the assigned value is <code>1</code>, not <code>3</code>.</p>
|
python|pandas|numpy|dataframe
| 3
|
8,540
| 53,090,114
|
Concatenate 5 unpickle dictionaries without overwriting (data is from CIFAR-10)
|
<pre><code>def unpickle(file):
    import pickle
    with open(file, 'rb') as fo:
        dict = pickle.load(fo, encoding='bytes')
    return dict
dict1 = unpickle(data_dir1)
dict2 = unpickle(data_dir2)
dict3 = unpickle(data_dir3)
dict4 = unpickle(data_dir4)
dict5 = unpickle(data_dir5)
</code></pre>
<p>Data Format (from CIFAR-10):</p>
<p>Loaded in this way, each of the batch files contains a dictionary with the following elements:</p>
<blockquote>
<p>data -- a 10000x3072 numpy array of uint8s. Each row of the array stores a 32x32 colour image. The first 1024 entries contain the red channel values, the next 1024 the green, and the final 1024 the blue. The image is stored in row-major order, so that the first 32 entries of the array are the red channel values of the first row of the image.</p>
</blockquote>
<p>My goal is to put together all the numpy arrays that are stored in the dictionaries into one big array (without overwriting anything).</p>
|
<p><code>np.concatenate((dict1, dict2, dict3, dict4, dict5), axis=0)</code> should work</p>
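<p>Since each unpickled batch is a dictionary rather than an array, you will most likely want to concatenate the arrays stored under the data key — a sketch assuming the standard CIFAR-10 key names (they are <code>bytes</code> because of <code>encoding='bytes'</code>):</p>
<pre><code>import numpy as np

batches = [dict1, dict2, dict3, dict4, dict5]
all_data = np.concatenate([d[b'data'] for d in batches], axis=0)      # shape (50000, 3072)
all_labels = np.concatenate([d[b'labels'] for d in batches], axis=0)  # shape (50000,)
</code></pre>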
|
python|numpy
| 0
|
8,541
| 65,732,637
|
Loop a function and bind together Pandas dataframe
|
<p>I have a function that connects to an SQL database and fetch data from a table. I aim to loop that function over an iterator to make the same query on different tables. The function works, but the for loop below do not return anything. I'm new to Python and I'm sure I miss something fundamental here.</p>
<p>Example Code-</p>
<pre><code>def func(years):
    conn = pyodbc.connect()
    sql_query = """ SELECT TOP 100 X
                    FROM table_""" + years
    df = pd.DataFrame()
    if len(df) == 0:
        df = pd.read_sql(sql_query, conn)
        df['year'] = years
    else:
        df_temp = df.copy()
        temp = pd.read_sql_query(sql_query, conn)
        temp['year'] = years
        df = temp.append(df_temp)
    return df

loop = ['2017', '2018']
for year in loop:
    func(year)
</code></pre>
|
<p>Your loop calls the function but never keeps the dataframe it returns. You might also want to consider <code>pd.concat</code> so you can collect all the results into a single dataframe you can assign to a variable.</p>
<p>So something like this would work and is cleaner:</p>
<pre><code>def select_top(year):
    conn = pyodbc.connect()
    sql_query = "SELECT TOP 100 X FROM table_{}".format(year)
    df = pd.read_sql(sql_query, conn)
    df['year'] = year
    return df
years = ['2017', '2018']
data = pd.concat([select_top(year) for year in years])
</code></pre>
<p>It will loop over the years and concatenates all your dataframes into 1.</p>
|
python|pandas|for-loop
| 1
|
8,542
| 65,516,014
|
Python Pandas: Create a nested function to create new cols in final output
|
<p>I am trying to figure out how to create a function that triggers inner nested functions in order to create my desired output.
I wish to output a new dataframe with new columns based on different conditions.</p>
<p>snippet of code:</p>
<pre><code>def func1(df):
    alist = []
    error = {}
    for i in df.index:
        if pd.isna(df.at[i, "col_name"]) == True:
            alist.append(i)
    df[alist, 'new_col'] = "some msg"
    return df

def func2(df):
    alist = []
    error = {}
    for i in df.index:
        if pd.isna(df.at[i, "other_col_name"]) == True and pd.isna(df.at[i, "other_col_name2"]) == False:
            alist.append(i)
    df[alist, 'new_col2'] = "some msg2"
    return df

def func3(df):
    func1(df)
    func2(df)
    return df  # the output with the newly added columns new_col and new_col2
</code></pre>
<p>Thanks for the help !</p>
<p>I am also receiving lots of SettingWithCopyWarning messages; I am hoping there is a better way to create my new columns too.</p>
|
<p>You're setting the new column weirdly</p>
<pre><code>df[alist, 'new_col'] = 'some_msg'
</code></pre>
<p>Just set it directly</p>
<pre><code>def func1(df):
    alist = []
    error = {}
    for i in df.index:
        if pd.isna(df.at[i, "col_name"]) == True:
            alist.append(i)
    df['new_col'] = i
    return df
</code></pre>
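<p>If the aim is simply to flag rows where a column is missing, a vectorized boolean mask (sketched with the column names from your question) avoids the Python loop entirely and, as long as <code>df</code> is not itself a slice of another frame, also avoids the SettingWithCopyWarning:</p>
<pre><code>def func1(df):
    df.loc[df['col_name'].isna(), 'new_col'] = 'some msg'
    return df

def func2(df):
    mask = df['other_col_name'].isna() & df['other_col_name2'].notna()
    df.loc[mask, 'new_col2'] = 'some msg2'
    return df

def func3(df):
    return func2(func1(df))
</code></pre>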
|
python|pandas|dataframe
| 0
|
8,543
| 21,320,405
|
How to write a pandas Series to CSV as a row, not as a column?
|
<p>I need to write a <code>pandas.Series</code> object to a CSV file as a row, not as a column. Simply doing</p>
<pre><code>the_series.to_csv( 'file.csv' )
</code></pre>
<p>gives me a file like this:</p>
<pre><code>record_id,2013-02-07
column_a,7.0
column_b,5.0
column_c,6.0
</code></pre>
<p>What I need instead is this:</p>
<pre><code>record_id,column_a,column_b,column_c
2013-02-07,7.0,5.0,6.0
</code></pre>
<p>This needs to work with pandas 0.10, so using <code>the_series.to_frame().transpose()</code> is not an option.</p>
<p>Is there a simple way to either transpose the Series, or otherwise get it written as a row?</p>
<p>Thanks!</p>
|
<p>You can just use the DataFrame constructor (rather than to_frame):</p>
<pre><code>In [11]: pd.DataFrame(s).T
Out[11]:
record_id column_a column_b column_c
2013-02-07 7 5 6
</code></pre>
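<p>To get it onto disk in the row layout from the question, one option (using <code>index_label</code> so the first column header comes out as <code>record_id</code>) is:</p>
<pre><code>pd.DataFrame(the_series).T.to_csv('file.csv', index_label='record_id')
</code></pre>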
|
python|csv|pandas
| 17
|
8,544
| 2,572,916
|
Numpy ‘smart’ symmetric matrix
|
<p>Is there a smart and space-efficient symmetric matrix in numpy which automatically (and transparently) fills the position at <code>[j][i]</code> when <code>[i][j]</code> is written to?</p>
<pre><code>import numpy
a = numpy.symmetric((3, 3))
a[0][1] = 1
a[1][0] == a[0][1]
# True
print(a)
# [[0 1 0], [1 0 0], [0 0 0]]
assert numpy.all(a == a.T) # for any symmetric matrix
</code></pre>
<p>An automatic Hermitian would also be nice, although I won’t need that at the time of writing.</p>
|
<p>If you can afford to symmetrize the matrix just before doing calculations, the following should be reasonably fast:</p>
<pre><code>def symmetrize(a):
    """
    Return a symmetrized version of NumPy array a.

    Values 0 are replaced by the array value at the symmetric
    position (with respect to the diagonal), i.e. if a_ij = 0,
    then the returned array a' is such that a'_ij = a_ji.

    Diagonal values are left untouched.

    a -- square NumPy array, such that a_ij = 0 or a_ji = 0,
    for i != j.
    """
    return a + a.T - numpy.diag(a.diagonal())
</code></pre>
<p>This works under reasonable assumptions (such as not doing both <code>a[0, 1] = 42</code> and the contradictory <code>a[1, 0] = 123</code> before running <code>symmetrize</code>).</p>
<p>If you really need a transparent symmetrization, you might consider subclassing numpy.ndarray and simply redefining <code>__setitem__</code>:</p>
<pre><code>class SymNDArray(numpy.ndarray):
    """
    NumPy array subclass for symmetric matrices.

    A SymNDArray arr is such that doing arr[i,j] = value
    automatically does arr[j,i] = value, so that array
    updates remain symmetrical.
    """
    def __setitem__(self, (i, j), value):
        super(SymNDArray, self).__setitem__((i, j), value)
        super(SymNDArray, self).__setitem__((j, i), value)

def symarray(input_array):
    """
    Return a symmetrized version of the array-like input_array.

    The returned array has class SymNDArray. Further assignments to the array
    are thus automatically symmetrized.
    """
    return symmetrize(numpy.asarray(input_array)).view(SymNDArray)

# Example:
a = symarray(numpy.zeros((3, 3)))
a[0, 1] = 42
print a  # a[1, 0] == 42 too!
</code></pre>
<p>(or the equivalent with matrices instead of arrays, depending on your needs). This approach even handles more complicated assignments, like <code>a[:, 1] = -1</code>, which correctly sets <code>a[1, :]</code> elements.</p>
<p>Note that Python 3 removed the possibility of writing <code>def …(…, (i, j),…)</code>, so the code has to be slightly adapted before running with Python 3: <code>def __setitem__(self, indexes, value): (i, j) = indexes</code>…</p>
|
python|matrix|numpy
| 93
|
8,545
| 63,487,363
|
Error when trying to multiply a variable?
|
<p>I'm trying to multiply a variable to output a weighted value as follows:</p>
<pre><code>import numpy as np
import pandas as pd
data_2017_18.income1_weight = data_2017_18.income1 * data_2017_18.survey_weight
</code></pre>
<p>I'm receiving the following error message:</p>
<p>TypeError: Object with dtype category cannot perform the numpy op multiply</p>
<p>I've tried to change the data_2017_18.income1 to an integer as follows:</p>
<pre><code>int(data_2017_18.income1)
</code></pre>
<p>But I'm getting this error:</p>
<p>TypeError: cannot convert the series to <class 'int'></p>
<p>Any suggestions, please?</p>
<p>Many thanks</p>
|
<p>Try <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.astype.html" rel="nofollow noreferrer"><code>Series.astype</code></a>:</p>
<pre><code>data_2017_18.income1_weight = data_2017_18.income1.astype(float) * data_2017_18.survey_weight
</code></pre>
|
python|pandas
| 2
|
8,546
| 63,483,588
|
Python - how to get table id
|
<p>The HTML is shown as below. How can i get the id="table_1880381"?</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><div class="match" id="table_1880381"><div class="m_info"><div class="game" style="background-color:#669900;"><a href="http://info.win007.com/cn/subleague.aspx?sclassid=140" target="_blank">墨西聯</a><br/><span id="mt_1880381">7-31 02:30</span> </div><div class="time" id="time_1880381"><font color="red">完</font></div><div class="home" id="home_1880381"><div class="teamInfo"><div class="match" id="table_1851236"><div class="m_info"><div class="game" style="background-color:#64ba1e;"><a href="http://info.win007.com/cn/subleague.aspx?sclassid=772" target="_blank">烏茲超</a><br><span id="mt_1851236">8-18 20:00</span> </div><div class="time" id="time_1851236"><font color="red">完</font></div><div class="home" id="home_1851236"><div class="teamInfo"> <a href="javascript:Panlu(1851236)"><b>克孜勒庫姆</b></a>(中)[14]</div></code></pre>
</div>
</div>
</p>
<pre><code> container3 = soup.findAll("div", {"match": "id"})
print(container3)
</code></pre>
<p>When I run the above code, the result is []. What I want is 1880381.
Is there a mistake in the code?</p>
|
<p>Your html is confusing, but there are a number of ways to get to where you want to go.</p>
<p>Try something like:</p>
<pre><code>soup.select_one('div.home').attrs['id'].split('_')[1]
</code></pre>
<p>or</p>
<pre><code>soup.select_one('div.time').attrs['id'].split('_')[1]
</code></pre>
<p>Output, in either case:</p>
<pre><code>'1880381'
</code></pre>
<p>For your modified html:</p>
<pre><code>for item in soup.select('div.home'):
print(item['id'].split('_')[1])
</code></pre>
<p>Output:</p>
<pre><code>1880381
1851236
</code></pre>
|
python-3.x|pandas|beautifulsoup
| 1
|
8,547
| 21,430,938
|
RGB to XYZ in Scikit-Image
|
<p>I am trying to convert an image from RGB to XYZ using scikit-image. I found out that there are some differences depending the input type:</p>
<pre><code>from numpy import array,uint8
import skimage.color
rgb = array([array([[56,79,132],[255,100,70]])])
i1 = skimage.color.rgb2xyz(rgb)#rgb.dtype ->dtype('int32')
i2 = skimage.color.rgb2xyz(rgb.astype(uint8))
i3 = skimage.color.rgb2xyz(rgb.astype(float))
print i1[0,1,:]
print i2[0,1,:]
print i3[0,1,:]
</code></pre>
<p>This is the output:</p>
<pre><code>[ 5.55183419e-09 4.73226247e-09 3.02426596e-09]
[ 0.46907236 0.3082294 0.09272133]
[ 240644.54537677 153080.21825017 39214.47581034]
</code></pre>
<p>The cause of the differences is the function <code>img_to_float</code> which is used inside <code>rgb2xyz</code> (see <a href="https://stackoverflow.com/questions/21429261/array-conversion-using-scikit-image-from-integer-to-float">this question</a>). </p>
<p>But I am wondering: What is the correct way to use <code>rgb2xyz</code>? </p>
<p>Regarding <a href="https://stackoverflow.com/questions/17764744/why-is-there-such-a-difference-between-rgb-to-xyz-color-conversions">this question</a> there are multiple solutions, depending on the formula, but again: what is the correct image type that is <strong>required</strong> by <code>rgb2xyz</code>? It seems to be <code>uint8</code>, but why? Thanks!</p>
|
<p>The following code should be self explanatory, but floating point values should have a range in <code>(0, 1)</code>, and integer type have their full range mapped to <code>(0, 1)</code> (for unsigned types) or <code>(-1, 1)</code> (for signed types):</p>
<pre><code>>>> from numpy import int32
>>> skimage.color.rgb2xyz((rgb / 255 * (2**31 - 1)).astype(int32))
array([[[ 0.08590003, 0.08097913, 0.2293394 ],
[ 0.46907236, 0.3082294 , 0.09272133]]])
>>> skimage.color.rgb2xyz(rgb.astype(uint8))
array([[[ 0.08590003, 0.08097913, 0.2293394 ],
[ 0.46907236, 0.3082294 , 0.09272133]]])
>>> skimage.color.rgb2xyz(rgb.astype(float) / 255)
array([[[ 0.08590003, 0.08097913, 0.2293394 ],
[ 0.46907236, 0.3082294 , 0.09272133]]])
</code></pre>
|
python|image-processing|numpy|colors|scikit-image
| 0
|
8,548
| 21,828,202
|
Fast inverse and transpose matrix in Python
|
<p>I have a large matrix <code>A</code> of shape <code>(n, n, 3, 3)</code> with <code>n</code> is about <code>5000</code>. Now I want find the inverse and transpose of matrix <code>A</code>:</p>
<pre><code>import numpy as np
A = np.random.rand(1000, 1000, 3, 3)
identity = np.identity(3, dtype=A.dtype)
Ainv = np.zeros_like(A)
Atrans = np.zeros_like(A)
for i in range(1000):
for j in range(1000):
Ainv[i, j] = np.linalg.solve(A[i, j], identity)
Atrans[i, j] = np.transpose(A[i, j])
</code></pre>
<p>Is there a faster, more efficient way to do this?</p>
|
<p>This is taken from a project of mine, where I also do vectorized linear algebra on many 3x3 matrices.</p>
<p>Note that there is only a loop over 3, not a loop over n, so the code is vectorized in the important dimensions. I don't want to vouch for how this compares, performance-wise, to a C/numba extension doing the same thing; such an extension is likely to be substantially faster still, but at least this blows the loops over n out of the water.</p>
<pre><code>def adjoint(A):
"""compute inverse without division by det; ...xv3xc3 input, or array of matrices assumed"""
AI = np.empty_like(A)
for i in xrange(3):
AI[...,i,:] = np.cross(A[...,i-2,:], A[...,i-1,:])
return AI
def inverse_transpose(A):
"""
efficiently compute the inverse-transpose for stack of 3x3 matrices
"""
I = adjoint(A)
det = dot(I, A).mean(axis=-1)
return I / det[...,None,None]
def inverse(A):
"""inverse of a stack of 3x3 matrices"""
return np.swapaxes( inverse_transpose(A), -1,-2)
def dot(A, B):
"""dot arrays of vecs; contract over last indices"""
return np.einsum('...i,...i->...', A, B)
A = np.random.rand(2,2,3,3)
I = inverse(A)
print np.einsum('...ij,...jk',A,I)
</code></pre>
|
python|numpy|transpose|matrix-inverse
| 7
|
8,549
| 21,600,219
|
how can i remove multiple rows with different labels in one command in pandas?
|
<p>I have a pandas dataframe that looks like the one below, and I want to drop several labels.</p>
<p>What works fine is:</p>
<pre><code>df = df[df['label'] != 'A']
</code></pre>
<p>or:</p>
<pre><code>df = df[(df['label'] != 'A') & (df['label'] != 'B')]
</code></pre>
<p>However, I have many labels that I want to drop, so I am looking for a command that works like:</p>
<pre><code>df = df[df['label'] != ['A','B']]
</code></pre>
<p>But this doesn't work</p>
<p>How can I drop, or the reverse select, several rows with different labels in one command? I am using pandas only for a few weeks now but can't find an answer to this problem.</p>
<pre><code> label value1 value2 value3
0 A 63.923445 109.688995 -0.692308
1 A 42.488038 87.081340 -0.692308
2 A 45.167464 91.267943 -0.504808
3 A 48.229665 88.755981 -1.485577
4 B 78.010000 180.100000 3.710000
5 B 87.833333 183.800000 2.225000
6 B 93.820000 181.980000 3.460000
7 B 110.836667 221.806667 2.833333
8 C 48.750000 127.450000 NaN
9 C 43.950000 103.100000 NaN
10 C NaN 74.970000 NaN
11 D 27.800000 89.250000 3.550000
12 D 28.000000 92.080000 3.530000
13 E 61.400000 99.300000 NaN
14 E 95.600000 257.000000 NaN
15 E 49.800000 145.000000 NaN
16 G 64.710000 136.000000 1.160000
</code></pre>
|
<p>try this:</p>
<pre><code>import numpy as np
df = df[np.logical_not(df['label'].isin(['A','B']))]
</code></pre>
<p>or</p>
<pre><code>df = df[- df['label'].isin(['A', 'B'])]
</code></pre>
<p>see <a href="https://stackoverflow.com/questions/14057007/remove-rows-not-isinx">Remove rows not .isin('X')</a></p>
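<p>A hedged note for newer pandas versions, where the unary minus on a boolean Series is no longer supported: the idiomatic negation operator is <code>~</code>:</p>
<pre><code>df = df[~df['label'].isin(['A', 'B'])]
</code></pre>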
|
python|pandas
| 0
|
8,550
| 24,841,306
|
Python - Sum 4D Array
|
<p>Given a <code>4D</code> array <code>M: (m, n, r, r)</code>, how can I sum all the <code>m * n</code> inner matrices (of shape <code>(r, r)</code>) to get a new matrix of shape <code>(r * r)</code>?</p>
<p>For example, </p>
<pre><code> M [[[[ 4, 1],
[ 2, 1]],
[[ 8, 2],
[ 4, 2]]],
[[[ 8, 2],
[ 4, 2]],
[[ 12, 3],
[ 6, 3]]]]
</code></pre>
<p>I expect the result should be </p>
<pre><code> [[32, 8],
[16, 8]]
</code></pre>
|
<p>You could use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow">einsum</a>:</p>
<pre><code>In [21]: np.einsum('ijkl->kl', M)
Out[21]:
array([[32, 8],
[16, 8]])
</code></pre>
<hr>
<p>Other options include reshaping the first two axes into one axis, and then calling <code>sum</code>:</p>
<pre><code>In [24]: M.reshape(-1, 2, 2).sum(axis=0)
Out[24]:
array([[32, 8],
[16, 8]])
</code></pre>
<p>or calling the sum method twice:</p>
<pre><code>In [26]: M.sum(axis=0).sum(axis=0)
Out[26]:
array([[32, 8],
[16, 8]])
</code></pre>
<p>But using <code>np.einsum</code> is faster:</p>
<pre><code>In [22]: %timeit np.einsum('ijkl->kl', M)
100000 loops, best of 3: 2.42 µs per loop
In [25]: %timeit M.reshape(-1, 2, 2).sum(axis=0)
100000 loops, best of 3: 5.69 µs per loop
In [43]: %timeit np.sum(M, axis=(0,1))
100000 loops, best of 3: 6.08 µs per loop
In [33]: %timeit sum(sum(M))
100000 loops, best of 3: 8.18 µs per loop
In [27]: %timeit M.sum(axis=0).sum(axis=0)
100000 loops, best of 3: 9.83 µs per loop
</code></pre>
<hr>
<p>Caveat: timeit benchmarks can vary significantly due to many factors (OS, NumPy version, NumPy libraries, hardware, etc). The relative performance of various methods can sometimes also depend on the size of M. So it pays to do your own benchmarks on an M which is closer to your actual use case.</p>
<p>For example, for slightly larger arrays <code>M</code>, calling the <code>sum</code> method twice may be fastest:</p>
<pre><code>In [34]: M = np.random.random((100,100,2,2))
In [37]: %timeit M.sum(axis=0).sum(axis=0)
10000 loops, best of 3: 59.9 µs per loop
In [39]: %timeit np.einsum('ijkl->kl', M)
10000 loops, best of 3: 99 µs per loop
In [40]: %timeit np.sum(M, axis=(0,1))
10000 loops, best of 3: 182 µs per loop
In [36]: %timeit M.reshape(-1, 2, 2).sum(axis=0)
10000 loops, best of 3: 184 µs per loop
In [38]: %timeit sum(sum(M))
1000 loops, best of 3: 202 µs per loop
</code></pre>
|
python|arrays|numpy
| 5
|
8,551
| 24,462,725
|
How can I do element-wise arithmetic on Numpy matrices?
|
<p>I am using Numpy's <code>matlib</code> style matrices for a particular algorithm. This means that the multiplication operator <code>*</code> performs the equivalent of an <code>ndarray</code>'s <code>dot()</code>:</p>
<pre><code>>>> import numpy.matlib as nm
>>> a = nm.asmatrix([[1,1,1],[1,1,1],[1,1,1]])
>>> b = nm.asmatrix([[1,0,0],[0,1,0],[0,0,1]])
>>> a * b
matrix([[1, 1, 1],
[1, 1, 1],
[1, 1, 1]])
</code></pre>
<p>Is there a method to perform element-wise arithmetic, like the <code>*</code> operator does on <code>ndarray</code>s?</p>
|
<p>You could use <code>np.multiply</code>:</p>
<pre><code>>>> a = np.matrix(np.random.rand(3,3))
>>> b = np.matrix(np.random.rand(3,3))
>>> a * b
matrix([[ 1.29029129, 0.53126365, 2.12109815],
[ 0.99370991, 0.55737572, 1.9167072 ],
[ 0.76268194, 0.43509462, 1.48640178]])
>>> np.asarray(a) * np.asarray(b)
array([[ 0.67445535, 0.12609799, 0.7051103 ],
[ 0.00131878, 0.42079486, 0.5223201 ],
[ 0.65558303, 0.03020335, 0.16753354]])
>>> np.multiply(a, b)
matrix([[ 0.67445535, 0.12609799, 0.7051103 ],
[ 0.00131878, 0.42079486, 0.5223201 ],
[ 0.65558303, 0.03020335, 0.16753354]])
</code></pre>
<p>It's a little unusual to want to perform elementwise multiplication on the same object you're performing matrix multiplication on, but you probably already know that. :^) It might be worthwhile seeing if your algorithm has a nice <code>np.einsum</code> description, though.</p>
|
python-3.x|numpy
| 1
|
8,552
| 30,111,884
|
data type "country" not understood
|
<p>I am getting the following error for my code: data type "country" not understood. I am relatively new to Python and am basically trying to learn how to work with .csv files. I'm using Python 3.4 and the Canopy editor. I was trying to set the data types of the csv columns to strings and floats, but as soon as I try to assign a string type to the first data column (the column is headed by the word "country") I get the error. I am trying to assign country to the "a200" type, which I believe can be a string. What am I doing wrong here? Please be clear, as I am new.</p>
<p>The code is this:</p>
<pre><code>import csv
import numpy
def open_with_csv(filename):
data = []
with open(filename) as csvin:
file_reader = csv.reader(csvin, delimiter = ',')
for line in file_reader:
data.append(line)
return data
data_from_csv = open_with_csv('C:\Users\user\Desktop\MDR-TB_burden_estimates_2015-05-07.csv')
print (data_from_csv)
FIELDNAMES = ['country', 'iso2', 'iso3', 'iso_numeric', 'year', 'source_mdr_new', 'source_drs_coverage_new', 'source_drs_year_new', 'e_new_mdr_pcnt', 'e_new_mdr_pcnt_lo', 'e_new_mdr_pcnt_hi', 'e_new_mdr_num', 'e_new_mdr_num_lo', 'e_new_mdr_num_hi', 'source_mdr_ret', 'source_drs_coverage_ret', 'source_drs_year_ret', 'e_ret_mdr_pcnt', 'e_ret_mdr_pcnt_lo', 'e_ret_mdr_pcnt_hi', 'e_ret_mdr_num', 'e_ret_mdr_num_lo', 'e_ret_mdr_num_hi', 'e_mdr_num', 'e_mdr_num_lo', 'e_mdr_num_hi']
print (FIELDNAMES)
DATATYPES = [('country','a200'), ('iso2'), ('iso3'), ('iso_numeric'), ('year'), ('source_mdr_new'), ('source_drs_coverage_new'), ('source_drs_year_new'), ('e_new_mdr_pcnt'), ('e_new_mdr_pcnt_lo'), ('e_new_mdr_pcnt_hi'), ('e_new_mdr_num'), ('e_new_mdr_num_lo'), ('e_new_mdr_num_hi'), ('source_mdr_ret'), ('source_drs_coverage_ret'), ('source_drs_year_ret'), ('e_ret_mdr_pcnt'), ('e_ret_mdr_pcnt_lo'), ('e_ret_mdr_pcnt_hi'), ('e_ret_mdr_num'), ('e_ret_mdr_num_lo'), ('e_ret_mdr_num_hi'), ('e_mdr_num'), ('e_mdr_num_lo'), ('e_mdr_num_hi')]
def load_data(filename, d=','):
my_csv = numpy.genfromtxt(filename, delimiter=d, skip_header=1, invalid_raise=False, names= FIELDNAMES, dtype = DATATYPES)
return my_csv
my_csv = load_data('C:\Users\user\Desktop\MDR-TB_burden_estimates_2015-05-07.csv')
</code></pre>
|
<p>Looks like the arguments you are passing to numpy.genfromtxt are incorrectly formatted.</p>
<p>If you want to pass a value to both the names and dtype arguments, then you need to specify dtype as a comma-separated string: "a200, i4, etc..."</p>
<p>Alternatively you can pass a list of tuple ("name", "type") pairs and not specify names argument.</p>
<p>You can look here for examples:
<a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html</a></p>
|
python|csv|numpy
| 2
|
8,553
| 53,755,046
|
numpy large integer failed
|
<p>I recently work on some project Euler problems</p>
<h1>Smallest multiple</h1>
<h2>Problem 5</h2>
<p>2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.</p>
<p>What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?</p>
<p>I wrote my code it works great</p>
<pre><code>def factor_finder(n, j=2):
factor_list = []
if n == 2:
return [2]
elif n == 3:
return [3]
else:
while n >= j * 2:
while n % j == 0:
n = int(n / j)
factor_list.append(j)
j += 1
if n > 1:
factor_list.append(n)
return factor_list
def smallest_multiples(n):
from functools import reduce
factor_list = []
final_list = []
for i in range(2, n + 1):
factor_list += factor_finder(i)
# print(factor_list)
for i in set(factor_list):
l1 = []
l2 = []
for j in factor_list:
if j == i:
l1.append(j)
else:
if len(l1) > len(l2):
l2 = l1
l1 = []
else:
l1 = []
# print(l2)
final_list += l2
# print(final_list)
return (
np.array(final_list).cumprod()[-1],
reduce((lambda x, y: x * y), final_list),
)
</code></pre>
<p>The result is:</p>
<p>%time</p>
<p>smallest_multiples(1000)</p>
<p>CPU times: user 5 µs, sys: 0 ns, total: 5 µs
Wall time: 32.4 µs</p>
<p>(-4008056434385126912,
7128865274665093053166384155714272920668358861885893040452001991154324087581111499476444151913871586911717817019575256512980264067621009251465871004305131072686268143200196609974862745937188343705015434452523739745298963145674982128236956232823794011068809262317708861979540791247754558049326475737829923352751796735248042463638051137034331214781746850878453485678021888075373249921995672056932029099390891687487672697950931603520000)</p>
<p>My question is why numpy.cumprod() fails to get the right number. I thought numpy was the go-to numerical tool. Can somebody give me some idea?</p>
|
<p>The problem is that the number reached a size that can no longer be represented by NumPy's fixed-width integers (Python's own ints have arbitrary precision, but NumPy's do not). If you look <a href="https://docs.scipy.org/doc/numpy/user/basics.types.html" rel="nofollow noreferrer">here</a>, you'll see that int64 maxes out at around 19 digits (i.e. 2^63 - 1, from 63 value bits plus a sign bit) and then overflows. NumPy is based on C, which uses fixed-precision arithmetic for much faster computation, with the trade-off that it is limited by the 64-bit integer and will overflow. Some functions in NumPy even guard against this by converting to floats, which have a far larger range (though not more significant digits).</p>
<p>If you tell numpy to use "object" as your datatype, there is a significant time penalty but it'll let you use the arbitrary-precision that you're used to in Python. For your code, it would look like: </p>
<pre><code>return (
np.cumprod(final_list, dtype="object")[-1],
reduce((lambda x, y: x * y), final_list),
)
</code></pre>
<p><a href="https://mortada.net/can-integer-operations-overflow-in-python.html" rel="nofollow noreferrer">More about overflow in numpy.</a></p>
|
python|numpy|biginteger
| 1
|
8,554
| 53,435,807
|
Validate Pandas dataframe column based on hierarchy
|
<p>I have a dataframe like this </p>
<pre><code>df1 = pd.DataFrame({'Site': ["S1", "S2", "S3", "S4", "S5", "S6","S7","S8","S9"],
'Sitelink': [" ","S1","S2","S6","S4"," ","S8"," ","S7"],
'level': ["R", "T", "P", "T", "P", "R","T","R","P"],
'Weight':["55","55","55","85","85","80","150","190","200"]})
</code></pre>
<p>column 'Site' will be always unique</p>
<p>column 'Sitelink' captures the next lower level site to each Site</p>
<p>column 'level' has 3 values- R, T, P where the hierarchy is R < T < P.</p>
<p>column 'Weight' can be any value.</p>
<p>The output should satisfy the condition that weight of a higher level site should be always lesser than or equal to lower level site. Expected result dataframe should be like</p>
<p><a href="https://i.stack.imgur.com/ZeVEn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZeVEn.png" alt="Output Dataframe"></a></p>
<p>I'm trying to loop the dataframe and compare each site with next level. Is there a better approach to do this?</p>
|
<p>If I understand correctly, you would like to check, for each row, whether the weight of the site is lower than or equal to the weight of the site marked as the <em>Sitelink</em>.</p>
<p>The code for a single row would than be:</p>
<pre><code>def is_error(row):
if row['Sitelink'] == " ":
return 'No Error'
site_link = df.loc[df['Site'] == row['Sitelink']]
if int(row['Weight']) <= int(site_link['Weight']):
return 'No Error'
else:
return 'Higher than lower'
</code></pre>
<p>Therefore we could apply this line for each row using the <code>apply</code> function:</p>
<pre><code>df['Error'] = df.apply(is_error, axis=1)
</code></pre>
|
pandas|dataframe
| 0
|
8,555
| 53,545,350
|
Do I need a neural network or graph database like neo4j for suggestion engine?
|
<p>I am building a simple recommendation/suggestion engine for a demo application which maintains a list of people. For each person, it keeps track of their food habits with the following preferences:</p>
<ol>
<li>Diet type: Vegetarian/non-vegetarian/vegan</li>
<li>Cuisine likings: Indian, Mexican, Italian, etc. (a person can like more than one)</li>
<li>Type of meals with exact time: Breakfast, lunch, supper, dinner</li>
<li>Specialized diets: Keto, Blood-group type, Atkins, etc.</li>
<li>Favorite vegetables: Spinach, Broccoli, etc.</li>
<li>Food allergy requirements</li>
<li>Location - City, area, street, etc.</li>
</ol>
<p>Once this data is available to the system, I need to build a simple suggestion engine - </p>
<ol>
<li>For any person chosen, suggest 10 other people who are most compatible in terms of food habits.</li>
<li>For a group of people chosen (say x, max 5), suggest x + 10 (here 15) people from the system in such a way that each person in the chosen group has compatible habit with at least one other person. Order of people in a group doesn't matter.</li>
</ol>
<p>My understanding here is that I don't need future predictions on unknown data and thus really no need for machine learning. All I need is suggestions based on statistical compatibility of existing data set. The rules are mostly based on relationships that people have with their food habits.</p>
<p>Is my understanding correct? Is this problem entirely solvable with graph database like Neo4j? Or do I really to build a Neural model for this using Tensorflow?</p>
|
<p>The easiest way to know which one to choose is to answer this more specific question. Is the data open world or <a href="https://en.wikipedia.org/wiki/Closed-world_assumption" rel="nofollow noreferrer">closed world</a>. </p>
<p>If you are an original <a href="https://en.wikipedia.org/wiki/The_Corbomite_Maneuver#Plot" rel="nofollow noreferrer">Star Trek</a> fan then think of it like how Spock would inference and needs all of the data before being able to answer (<a href="https://en.wikipedia.org/wiki/Closed-world_assumption" rel="nofollow noreferrer">Closed world</a>) or Captain Kirk who doesn't need all of the information before answering (Open World).</p>
<p>Seeing that you have all of the data needed before giving a solution I would say that you have a closed world problem and this should use Neo4j. </p>
<p>However being a fan of Logic programming, my personal preference would be to use either Prolog, specifically <a href="http://www.swi-prolog.org/" rel="nofollow noreferrer">SWI-Prolog</a> or <a href="https://www.mercurylang.org/" rel="nofollow noreferrer">Mercury</a>.</p>
|
tensorflow|machine-learning|neo4j
| 3
|
8,556
| 19,964,546
|
Pandas fuzzy merge/match name column, with duplicates
|
<p>I have two dataframes currently, one for <code>donors</code> and one for <code>fundraisers</code>. I'm trying to find if any <code>fundraisers</code> also gave donations, and if so, copy some of that information into my <code>fundraiser</code> dataset (donor name, email and their first donation). Problems with my data are:</p>
<ol>
<li>I need to match by name and email, but a user might have slightly different names (ex 'Kat' and 'Kathy').</li>
<li>Duplicate names for <code>donors</code> and <code>fundraisers</code>:
<ul>
<li>2a) With donors I can get unique name/email combinations since I just care about the first donation date </li>
<li>2b) With fundraisers though I need to keep both rows and not lose data like the date.</li>
</ul></li>
</ol>
<p>Sample code I have right now:</p>
<pre><code>import pandas as pd
import datetime
from fuzzywuzzy import fuzz
import difflib
donors = pd.DataFrame({"name": pd.Series(["John Doe","John Doe","Tom Smith","Jane Doe","Jane Doe","Kat test"]), "Email": pd.Series(['a@a.ca','a@a.ca','b@b.ca','c@c.ca','something@a.ca','d@d.ca']),"Date": (["27/03/2013 10:00:00 AM","1/03/2013 10:39:00 AM","2/03/2013 10:39:00 AM","3/03/2013 10:39:00 AM","4/03/2013 10:39:00 AM","27/03/2013 10:39:00 AM"])})
fundraisers = pd.DataFrame({"name": pd.Series(["John Doe","John Doe","Kathy test","Tes Ester", "Jane Doe"]),"Email": pd.Series(['a@a.ca','a@a.ca','d@d.ca','asdf@asdf.ca','something@a.ca']),"Date": pd.Series(["2/03/2013 10:39:00 AM","27/03/2013 11:39:00 AM","3/03/2013 10:39:00 AM","4/03/2013 10:40:00 AM","27/03/2013 10:39:00 AM"])})
donors["Date"] = pd.to_datetime(donors["Date"], dayfirst=True)
fundraisers["Date"] = pd.to_datetime(donors["Date"], dayfirst=True)
donors["code"] = donors.apply(lambda row: str(row['name'])+' '+str(row['Email']), axis=1)
idx = donors.groupby('code')["Date"].transform(min) == donors['Date']
donors = donors[idx].reset_index().drop('index',1)
</code></pre>
<p>So this leaves me with the first donation by each donor (assuming anyone with the exact same name and email is the same person). </p>
<p>Ideally I want my <code>fundraisers</code> dataset to look like:</p>
<pre><code>Date Email name Donor Name Donor Email Donor Date
2013-03-27 10:00:00 a@a.ca John Doe John Doe a@a.ca 2013-03-27 10:00:00
2013-01-03 10:39:00 a@a.ca John Doe John Doe a@a.ca 2013-03-27 10:00:00
2013-02-03 10:39:00 d@d.ca Kathy test Kat test d@d.ca 2013-03-27 10:39:00
2013-03-03 10:39:00 asdf@asdf.ca Tes Ester
2013-04-03 10:39:00 something@a.ca Jane Doe Jane Doe something@a.ca 2013-04-03 10:39:00
</code></pre>
<ul>
<li><p>I tried following this thread: <a href="https://stackoverflow.com/questions/13636848/is-it-possible-to-do-fuzzy-match-merge-with-python-pandas">is it possible to do fuzzy match merge with python pandas?</a> but keep getting index out of range errors (guessing it doesn't like the duplicated names in fundraisers) :( So any ideas how I can match/merge these datasets? </p></li>
<li><p>doing it with for loops (which works but is super slow and I feel there has to be a better way)</p></li>
</ul>
<p>Code:</p>
<pre><code>fundraisers["donor name"] = ""
fundraisers["donor email"] = ""
fundraisers["donor date"] = ""
for donindex in range(len(donors.index)):
max = 75
for funindex in range(len(fundraisers.index)):
aname = donors["name"][donindex]
comp = fundraisers["name"][funindex]
ratio = fuzz.ratio(aname, comp)
if ratio > max:
if (donors["Email"][donindex] == fundraisers["Email"][funindex]):
ratio *= 2
max = ratio
fundraisers["donor name"][funindex] = aname
fundraisers["donor email"][funindex] = donors["Email"][donindex]
fundraisers["donor date"][funindex] = donors["Date"][donindex]
</code></pre>
|
<p>Here's a bit more pythonic (in my view), working (on your example) code, without explicit loops:</p>
<pre class="lang-py prettyprint-override"><code>def get_donors(row):
d = donors.apply(lambda x: fuzz.ratio(x['name'], row['name']) * 2 if row['Email'] == x['Email'] else 1, axis=1)
d = d[d >= 75]
if len(d) == 0:
v = ['']*3
else:
v = donors.ix[d.idxmax(), ['name','Email','Date']].values
return pd.Series(v, index=['donor name', 'donor email', 'donor date'])
pd.concat((fundraisers, fundraisers.apply(get_donors, axis=1)), axis=1)
</code></pre>
<p>Output:</p>
<pre><code> Date Email name donor name donor email donor date
0 2013-03-27 10:00:00 a@a.ca John Doe John Doe a@a.ca 2013-03-01 10:39:00
1 2013-03-01 10:39:00 a@a.ca John Doe John Doe a@a.ca 2013-03-01 10:39:00
2 2013-03-02 10:39:00 d@d.ca Kathy test Kat test d@d.ca 2013-03-27 10:39:00
3 2013-03-03 10:39:00 asdf@asdf.ca Tes Ester
4 2013-03-04 10:39:00 something@a.ca Jane Doe Jane Doe something@a.ca 2013-03-04 10:39:00
</code></pre>
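<p>A hedged note: <code>DataFrame.ix</code> has since been removed from pandas, so on recent versions the label-based lookup inside <code>get_donors</code> would be written with <code>.loc</code> instead:</p>
<pre><code>v = donors.loc[d.idxmax(), ['name', 'Email', 'Date']].values
</code></pre>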
|
python|pandas|dataframe|fuzzywuzzy|fuzzy-comparison
| 4
|
8,557
| 20,115,312
|
python pandas applying for loop and groupby function
|
<p>I am new to Python and I am not familiar with iterating using the groupby function in pandas.
I modified the code below and it works fine for creating a pandas dataframe.</p>
<pre><code>i=['J,Smith,200 G Ct,',
'E,Johnson,200 G Ct,',
'A,Johnson,200 G Ct,',
'M,Simpson,63 F Wy,',
'L,Diablo,60 N Blvd,',
'H,Simpson,63 F Wy,',
'B,Simpson,63 F Wy,']
dbn=[]
dba=[]
for z,g in groupby(
sorted([l.split(',')for l in i],
key=lambda x:x[1:]),
lambda x:x[2:]
):
l=list(g);r=len(l);Address=','.join(z);o=l[0]
if r>2:
dbn.append('The '+o[1]+" Family,")
dba.append(Address)
elif r>1:
dbn.append(o[0]+" and "+l[1][0]+", "+o[1]+",")
dba.append(Address)
else:
dbn.append(o[0]+" "+o[1])
# print','.join(o),
dba.append(Address)
Hdf=pd.DataFrame({'Address':dba,'Name':dbn})
print Hdf
Address Name
0 60 N Blvd, L Diablo
1 200 G Ct, E and A, Johnson,
2 63 F Wy, The Simpson Family,
3 200 G Ct, J Smith
</code></pre>
<p>How would I modify the for loop to yield the same results if I am using a pandas dataframe instead of raw csv data?</p>
<pre><code>df=pd.DataFrame({'Name':['J','E','A','M','L','H','B'],
'Lastname':['Smith','Johnson','Johnson','Simpson','Diablo','Simpson','Simpson'],
'Address':['200 G Ct','200 G Ct','200 G Ct','63 F Wy','60 N Blvd','63 F Wy','63 F Wy']})
</code></pre>
|
<h3>Version with loop/generator:</h3>
<p>First, we create helper function and group data by <code>Lastname, Address</code>:</p>
<pre><code>def helper(k, g):
r = len(g)
address, lastname = k
if r > 2:
lastname = 'The {} Family'.format(lastname)
elif r > 1:
lastname = ' and '.join(g['Name']) + ', ' + lastname
else:
lastname = g['Name'].squeeze() + ' ' + lastname
return (address, lastname)
grouped = df.groupby(['Address', 'Lastname'])
</code></pre>
<p>Then create generator with helper function applied to each group:</p>
<pre><code>vals = (helper(k, g) for k, g in grouped)
</code></pre>
<p>And then create resulting DataFrame from it:</p>
<pre><code>pd.DataFrame(vals, columns=['Address','Name'])
Address Name
0 200 G Ct E and A, Johnson
1 200 G Ct J Smith
2 60 N Blvd L Diablo
3 63 F Wy The Simpson Family
</code></pre>
<h3>More vectorized version:</h3>
<p>Group data by <code>Lastname, Address</code> and then generate new DataFrame with length of group and string contains two first names concatenated:</p>
<pre><code>grouped = df.groupby(['Address', 'Lastname'])
res = grouped.apply(lambda x: pd.Series({'Len': len(x), 'Names': ' and '.join(x['Name'][:2])})).reset_index()
Address Lastname Len Names
0 200 G Ct Johnson 2 E and A
1 200 G Ct Smith 1 J
2 60 N Blvd Diablo 1 L
3 63 F Wy Simpson 3 M and H
</code></pre>
<p>Now just apply usual pandas transformations and delete unneseccary columns:</p>
<pre><code>res.ix[res['Len'] > 2, 'Lastname'] = 'The ' + res['Lastname'] + ' Family'
res.ix[res['Len'] == 2, 'Lastname'] = res['Names'] + ', ' + res['Lastname']
res.ix[res['Len'] < 2, 'Lastname'] = res['Names'] + ' ' + res['Lastname']
del res['Len']
del res['Names']
Address Lastname
0 200 G Ct E and A, Johnson
1 200 G Ct J Smith
2 60 N Blvd L Diablo
3 63 F Wy The Simpson Family
</code></pre>
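<p>A hedged note for newer pandas, where <code>.ix</code> no longer exists: the three conditional assignments above would use <code>.loc</code> instead, for example:</p>
<pre><code>res.loc[res['Len'] > 2, 'Lastname'] = 'The ' + res['Lastname'] + ' Family'
res.loc[res['Len'] == 2, 'Lastname'] = res['Names'] + ', ' + res['Lastname']
res.loc[res['Len'] < 2, 'Lastname'] = res['Names'] + ' ' + res['Lastname']
</code></pre>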
|
python|pandas|iteration|grouping
| 1
|
8,558
| 15,626,375
|
Python/Numpy - Cross Product of Matching Rows in Two Arrays
|
<p>What is the best way to take the cross product of each corresponding row between two arrays? For example:</p>
<pre><code>a = 20x3 array
b = 20x3 array
c = 20x3 array = some_cross_function(a, b) where:
c[0] = np.cross(a[0], b[0])
c[1] = np.cross(a[1], b[1])
c[2] = np.cross(a[2], b[2])
...etc...
</code></pre>
<p>I know this can be done with a simple python loop or using numpy's apply_along_axis, but I'm wondering if there is any good way to do this entirely within the underlying C code of numpy. I currently use a simple loop, but this is by far the slowest part of my code (my actual arrays are tens of thousands of rows long).</p>
|
<p>I'm probably going to have to delete this answer in a few minutes when I realize my mistake, but doesn't the obvious thing work?</p>
<pre><code>>>> a = np.random.random((20,3))
>>> b = np.random.random((20,3))
>>> c = np.cross(a,b)
>>> c[0], np.cross(a[0], b[0])
(array([-0.02469147, 0.52341148, -0.65514102]), array([-0.02469147, 0.52341148, -0.65514102]))
>>> c[1], np.cross(a[1], b[1])
(array([-0.0733347 , -0.32691093, 0.40987079]), array([-0.0733347 , -0.32691093, 0.40987079]))
>>> all((c[i] == np.cross(a[i], b[i])).all() for i in range(len(c)))
True
</code></pre>
|
python|numpy|cross-product
| 6
|
8,559
| 12,347,797
|
How can I produce a nice output of a numpy matrix?
|
<p>I currently have the following snippet:</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8 -*-
import numpy
from numpy import linalg
A = [[1,2,47,11],[3,2,8,15],[0,0,3,1],[0,0,8,1]]
S = [[113,49,2,283],[-113,0,3,359],[0,5,0,6],[0,20,0,12]]
A = numpy.matrix(A)
S = numpy.matrix(S)
numpy.set_printoptions(precision=2, suppress=True, linewidth=120)
print("S^{-1} * A * S")
print(linalg.inv(S) * A * S)
</code></pre>
<p>which produces this output:</p>
<p><img src="https://i.stack.imgur.com/QQQjK.png" alt="python output"></p>
<p><strong>Is there a standard way to produce an output similar to the following? How can I get this output?</strong></p>
<pre><code>[[ -1 -0.33 0 0]
[ 0 1 0 0]
[ 0 -648 4 0]
[ 0 6.67 0 5]]
</code></pre>
<p>What's different?</p>
<ul>
<li>At least two spaces between the last character of column <code>i</code> and the first character of column <code>i+1</code>, but it might be more if more is needed (NumPy output makes two spaces)</li>
<li>the dots are aligned (They <em>are</em> aligned, but the font setting of <code>BetterPythonConsole</code> messes it up)</li>
<li>No <code>-0</code> but <code>0</code></li>
<li>No <code>0.</code> but <code>0</code></li>
</ul>
<p><strong>edit</strong>: It seems as if the Python Console, which gets started with gEdits <a href="https://live.gnome.org/Gedit/Plugins/BetterPythonConsole" rel="noreferrer">BetterPythonConsole</a> plugin does something different than Python, when I start it from terminal.</p>
<p>This is the output as text of the script above</p>
<pre><code>moose@pc07:~/Desktop$ python matrixScript.py
S^{-1} * A * S
[[ -1. -0.33 0. -0. ]
[ 0. -1. -0. 0. ]
[ 0. -648. 4. -0. ]
[ 0. 6.67 0. 5. ]]
</code></pre>
<p>With prettyprint:</p>
<pre><code>S^{-1} * A * S
matrix([[ -1. , -0.33, 0. , -0. ],
[ 0. , -1. , -0. , 0. ],
[ 0. , -648. , 4. , -0. ],
[ 0. , 6.67, 0. , 5. ]])
</code></pre>
<p>This is defenitely worse, but it was worth a try.</p>
|
<p>If you use numpy 1.8.x you can customize formatting with the <code>formatter</code> parameter.
For example, setting:</p>
<pre><code>numpy.set_printoptions(formatter={'float': lambda x: 'float: ' + str(x)})
</code></pre>
<p>All floats would be printed like <code>float: 3.0</code>, or <code>float: 12.6666666666</code>.</p>
<p>Unfortunately I still have numpy 1.6.1 installed and this option is not provided,
so I'm not able to use it to get your desired output.</p>
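<p>As a hedged sketch with a newer numpy (one that supports the <code>formatter</code> option), a small custom function gets close to the desired output by printing exact zeros (including <code>-0</code>) as <code>0</code> and dropping the trailing <code>.</code> from whole numbers. Column alignment, however, is not enforced by this simple formatter, since entries end up with different widths:</p>
<pre><code>import numpy

def fmt(x):
    if x == 0:              # also catches -0.0, since -0.0 == 0
        return '0'
    if x == int(x):
        return str(int(x))
    return '%.2f' % x

numpy.set_printoptions(formatter={'float': fmt})
print(linalg.inv(S) * A * S)
</code></pre>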
|
python|numpy|formatting|gedit
| 4
|
8,560
| 71,996,147
|
How to make a boolean array by checking if the items in an array is in a list?
|
<p>I'm trying to find every item in an numpy array <code>arr</code> that's also in an arbitrary list <code>lst</code> and replace them, but while <code>arr > 0</code> will generate a boolean array for easy masking, <code>arr in lst</code> only works with all() or any() which isn't what I need.</p>
<p>Example input: array <code>(1, 2, 3, 4, 5)</code>, list <code>[2, 4, 6, 8]</code></p>
<p>Output: array <code>(1, 0, 3, 0, 5)</code></p>
<p>I managed to get the same result with for loops:</p>
<pre><code>for i in range(len(arr)):
if arr[i] in lst:
arr[i] = 0
</code></pre>
<p>Just wondering if there are other ways to do it that set arrays apart from lists.</p>
|
<p>You can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.isin.html" rel="nofollow noreferrer"><code>numpy.isin</code></a>:</p>
<pre><code>a = np.array((1, 2, 3, 4, 5))
lst = [2, 4, 6, 8]
a[np.isin(a, lst)] = 0
</code></pre>
<p>Gives you an <code>a</code> of:</p>
<pre><code>array([1, 0, 3, 0, 5])
</code></pre>
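<p>A hedged addition (not in the original answer): if you would rather keep <code>a</code> unchanged and build a new array, <code>np.where</code> can do the replacement out of place:</p>
<pre><code>result = np.where(np.isin(a, lst), 0, a)
# array([1, 0, 3, 0, 5]); a itself is untouched
</code></pre>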
|
python|arrays|list|numpy|boolean
| 2
|
8,561
| 71,860,723
|
Dynamically update pandas NaN based on field type in django model
|
<p>I need to save data from csv file to django models.
The data comes from external api so I have no control on its structure.
In my schema, I allowed all fields to be nullable.</p>
<p>This is my script</p>
<pre><code> text = f"{path}/report.csv"
df = pd.read_csv(text)
row_iter = df.iterrows()
for index, row in row_iter:
rows = {key.replace("-", "_"): row.pop(key) for key in row.keys()}
# print(f"rows {rows}")
# default_values = {
# "amazon-order-id",merchant-order-id,purchase-date,last-updated-date,order-status,fulfillment-channel,sales-channel,order-channel,ship-service-level,product-name,sku,asin,item-status,quantity,currency,item-price,item-tax,shipping-price,shipping-tax,gift-wrap-price,gift-wrap-tax,item-promotion-discount,ship-promotion-discount,ship-city,ship-state,ship-postal-code,ship-country,promotion-ids,is-business-order,purchase-order-number,price-designation,is-iba,order-invoice-type
# }
sb, created = Order.objects.update_or_create(
sellingpartner_customer=c,
amazon_order_id=rows["amazon_order_id"],
sku=rows["sku"],
asin=rows["asin"],
defaults={**rows},
)
</code></pre>
<p>However, since some of the csv fields has empty values, pandas will replace it with <code>NaN</code> value, this is where django returns an error</p>
<pre><code> django.db.utils.OperationalError: (1054, "Unknown column 'NaN' in 'field list'")
</code></pre>
<p>I tried replace empty values as empty string("")</p>
<pre><code> df.fillna("", inplace=True)
</code></pre>
<p>But django will return an error for fields that are not charfields</p>
<pre><code> django.core.exceptions.ValidationError: ['“” value must be a decimal number.']
</code></pre>
<p>My question is: how do you handle empty values from a csv file in pandas so that, for example, an empty value is replaced with boolean False for Django boolean fields, <code>0</code> for empty decimal fields, just a blank string for empty charfields, and so on?</p>
|
<p>Nullable Django models won't take <code>np.nan</code> or other Pandas-compatible <code>not-a-number</code> objects. It expects taking <code>None</code> as in stock Python. When you have <code>nan</code> values, before you save them to Django, just replace them with <code>None</code> to avoid the validation error.</p>
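<p>A hedged sketch of that replacement step, done on the whole DataFrame right after reading the CSV (casting to object keeps the <code>None</code> values from being coerced back to <code>NaN</code> in numeric columns):</p>
<pre><code>import pandas as pd

df = pd.read_csv(text)
df = df.astype(object).where(pd.notnull(df), None)
</code></pre>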
|
python|django|pandas
| 0
|
8,562
| 55,563,411
|
How to resample a column by id
|
<p>I have a dataset like:</p>
<pre><code>id date value
1 16-12-1 9
1 16-12-1 8
1 17-1-1 18
2 17-3-4 19
2 17-3-4 20
1 17-4-3 21
2 17-7-13 12
3 17-8-9 12
2 17-9-12 11
1 17-11-12 19
3 17-11-12 21
</code></pre>
<p>The only structure above is that the rows are sorted by date.</p>
<p>What I want to do is, group by id and resample the dates, so that each id has the same number values. A monthly, weekly or daily resampling would suffice.</p>
<p>My final dataset (at yearly resampling) would look like:</p>
<pre><code>id interval value
1 16-12-1 - 17-12-1 75
2 16-12-1 - 17-12-1 62
3 16-12-1 - 17-12-1 33
</code></pre>
<p>How to implement this? Will this work (since I do not have the seconds in the date field, i.e. it is not a standard pandas datetime object)?</p>
<pre><code>dataframe.groupby(id).resample('year')
</code></pre>
<p>Is there any faster way to do this?</p>
|
<p>Weekly sum by id:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'], format='%y-%m-%d')
df = df.set_index('date')
df.groupby('id').resample('W')['value'].agg('sum').loc[lambda x: x>0]
</code></pre>
<p>Output:</p>
<pre><code>id date
1 2016-12-04 17
2017-01-01 18
2017-04-09 21
2017-11-12 19
2 2017-03-05 39
2017-07-16 12
2017-09-17 11
3 2017-08-13 12
2017-11-12 21
Name: value, dtype: int64
</code></pre>
|
python|pandas|dataframe|time-series|pandas-groupby
| 3
|
8,563
| 56,831,607
|
Can tf.keras.layers.xx be used independently from tf.keras.Sequential or Model?
|
<p>In Tensorflow, many functions from some modules have been deprecated. Those from <code>tf.keras.layers</code> have been recommended. The <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow noreferrer">tutorials</a> provide examples of the usage of them by associating them with either <code>tf.keras.Sequential (Sequential)</code> or <code>tf.keras.Model (Model)</code>. I want to know whether it's possible to use some classes in <code>tf.keras.layers (e.g., Dense, Conv1D, etc.)</code> without using <code>Sequential</code> or <code>Model</code>. </p>
<p>Previously the following was used in my code:</p>
<pre><code>gru = tf.contrib.rnn.GRUCell(d)
states, output = tf.nn.dynamic_rnn(gru, inputs)
</code></pre>
<p>As both <code>tf.contrib.rnn</code> and <code>tf.nn.dynamic_rnn</code> have been deprecated, I want to know if I can replace them with the following commands without adding <code>Sequential</code> or <code>Model</code> in the code.</p>
<pre><code>gru = tf.keras.layers.GRUCell(d)
states, output = keras.layers.RNN(gru)(inputs)
</code></pre>
|
<blockquote>
<p>I want to know whether it's possible to use some classes in tf.keras.layers (e.g., Dense, Conv1D, etc.) without using Sequential or Model.</p>
</blockquote>
<p>Yes, sure it is. We can just 'call' the layer directly by doing something like this, for example:</p>
<pre class="lang-py prettyprint-override"><code>layer_example = tf.keras.layers.Dense(2,input_shape=(-1,24))
example_tensor = tf.random.uniform(shape=(2, 24))
layer_example(example_tensor)
</code></pre>
<p>Note that we first create an instance of the layer class with <code>tf.keras.layers.Dense</code> then later <em>call</em> that instance with <code>layer_example(example_tensor)</code></p>
<p>We can also (of course) extend this to the GRU example. Something like this:</p>
<pre><code>example_tensor = tf.random.uniform(shape=(2, 24, 12))
gru = tf.keras.layers.GRUCell(2)
states, output = tf.keras.layers.RNN(gru)(example_tensor)
</code></pre>
|
tensorflow|tf.keras
| 0
|
8,564
| 56,737,647
|
How to map lists of values to categorical vector
|
<p>I'm trying to do some clustering on a dataset of videos based on duration. I have a dictionary in which keys are user IDs, and values are a list of float (videos duration), 1 float per video created by the user.</p>
<p>Example :</p>
<pre><code>videos_per_user = {
63: [15.011667, 21.823333, 29.981667, 10.341667, 14.928333, 16.555, 29.976667],
64: [5.463333, 14.345, 5.571667, 18.848333]
}
</code></pre>
<p>Important note : these lists are not of the same length.</p>
<p>What I'm trying to do is to transform this dict to a pandas Dataframe, based on a reference vector (bins) so I can have a vector for each user that contains the number of videos for each category.</p>
<p>I've created my categorical vector as follows :
<code>bins = pd.Series(np.arange(start=0,stop=35,step=5))</code></p>
<p>I've tried to use <code>pd.cut(videos_per_user, bins=bins, right=True)</code> but I get the corresponding category for each duration while I'm trying to get something like : <code>[0,0,2,2,3,0]</code></p>
<p>Any ideas? I didn't find similar situations on the web, but maybe that's because I don't really know how to formulate my problem correctly.</p>
<p>To conclude, I'd like to create a vector of length 6 (6 categories) for each user in my dict with the number of videos with the corresponding duration.</p>
|
<h3><code>searchsorted</code> and <code>bincount</code></h3>
<pre><code>b = np.arange(5, 30, 5)
# array([ 5, 10, 15, 20, 25])
</code></pre>
<p><strong>PLEASE NOTE:</strong> The <code>minlength</code> is what guarantees that all arrays will be of the same length. However, it needs to be set at the actual number of categories you expect to have. This can change if your actual set up isn't exactly as described in the question.</p>
<pre><code>pd.DataFrame({
user: np.bincount(b.searchsorted(durations), minlength=len(b) + 1)
for user, durations in videos_per_user.items()
})
63 64
0 0 0
1 0 2
2 2 1
3 2 1
4 1 0
5 2 0
</code></pre>
<hr>
<h3><code>value_counts</code> and <code>cut</code></h3>
<pre><code>pd.DataFrame({
user: pd.value_counts(pd.cut(durations, bins))
for user, durations in videos_per_user.items()
})
63 64
(0, 5] 0 0
(5, 10] 0 2
(10, 15] 2 1
(15, 20] 2 1
(20, 25] 1 0
(25, 30] 2 0
</code></pre>
|
python|pandas
| 1
|
8,565
| 56,761,609
|
Classification of time series of variable lengths using 1D CNN in tensorflow
|
<p>I have a dataset consisting of time series of different lengths. For instance, consider this</p>
<pre><code>ts1 = np.random.rand(230, 4)
ts2 = np.random.rand(12309, 4)
</code></pre>
<p>I have 200 sequences in the form of list of arrays </p>
<pre><code>input_x = [ts1, ts2, ..., ts200]
</code></pre>
<p>These time series have labels 1 if good and 0 if not. Hence my labels will be something like</p>
<pre><code>labels = [0, 0, 1, 0, 1, ....]
</code></pre>
<p>I am building a keras model as follows: </p>
<pre><code>model = keras.Sequential([
keras.layers.Conv1D(64, 3, activation='relu', input_shape=(None, 4)),
keras.layers.MaxPool1D(3),
keras.layers.Conv1D(160, 10, activation='relu'),
keras.layers.GlobalAveragePooling1D(),
keras.layers.Dropout(0.5),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(2, activation='softmax')
</code></pre>
<p>])</p>
<p>The 4 in the input shape of the first convolution layer corresponds to the number of columns in each time series which is constant (think of it as having 4 sensors returning measurements for different operations). The objective is to classify if a time series is good or bad (0 or 1) however I am unable to figure out how to train this using keras. </p>
<p>Running this line </p>
<pre><code>model.fit(input_x, labels, epochs=5, batch_size=1)
</code></pre>
<p>Returns an error </p>
<pre><code>Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 200 arrays
</code></pre>
<p>Even using np.array(input_x) gives an error. How can I train this model with sequences of variable lengths? I know padding is an option but that's not what I am looking for. Also, I don't want to use an RNN with a sliding window. I am really looking into a solution with 1D CNN that works with sequences of variable lengths. Any help would be so much appreciated! </p>
|
<p>When working with a time series you want to define the input to the NN as <code>(batch_size, sequence_length, features)</code>.</p>
<p>Which corresponds to a <code>input_shape=(sequence_length, 4,)</code> in your case. You will have to decide upon a maximum sequence length that you will process for the purposes of training and generating predictions.</p>
<p>The inputs to the NN also need to be in the shape (batch_size, sequence_length, features).</p>
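<p>As a hedged sketch (not part of the answer above): because the first layer was declared with <code>input_shape=(None, 4)</code>, the network itself accepts any sequence length, so one workaround is to feed one sequence per batch, each reshaped to <code>(1, length, 4)</code>. This assumes the model is compiled with a loss that accepts integer labels, e.g. <code>sparse_categorical_crossentropy</code>:</p>
<pre><code>import numpy as np

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

for epoch in range(5):
    for ts, label in zip(input_x, labels):
        x = ts[np.newaxis, ...]      # shape (1, length, 4)
        y = np.array([label])        # integer class label
        model.train_on_batch(x, y)
</code></pre>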
|
python|tensorflow|keras
| 1
|
8,566
| 56,795,389
|
When saving to csv using pandas, I get two same databases instead of two separate ones
|
<p>I am doing some exercise as part of a GIS and Python Course that I am undertaking individually through Git. The exercise is analysis of weather data from two weather stations. Their IDs are spelled USAF and have codes: 29980 and 28450. I have created a "selected" dataframe from the existing one and from that one, I need to select all rows into a variable called kumpula where the USAF code is 29980 and to a variable Rovaniemi where the USAF code is 28450. </p>
<p>I have done this:</p>
<pre><code>kumpula = selected.loc[selected['USAF']==29980]
rovaniemi = selected.loc[selected['USAF']==28450]
</code></pre>
<p>That is good. Now, I need to save the kumpula and rovaniemi DataFrame into 'Kumpula_temps_May_Aug_2017.csv' and 'Rovaniemi_temps_May_Aug_2017.csv'. I also need to separate with comma and use only 2 decimals in the floating point number. </p>
<p>Here is my code:</p>
<pre><code>kumpula = "Kumpula_temps_May_Aug_2017.csv"
selected.to_csv(kumpula, sep=',', float_format="%2f")
rovaniemi = "Rovaniemi_temps_May_Aug_2017.csv"
selected.to_csv(rovaniemi, sep=',', float_format="%2f")
</code></pre>
<p>This code should work, but both files end up the same: they both contain the Rovaniemi data (USAF code 28450). Am I somehow overwriting "Kumpula_temps_May_Aug_2017.csv"?</p>
|
<p>You need to modify your code as </p>
<pre><code>kumpula_df = selected.loc[selected['USAF']==29980]
rovaniemi_df = selected.loc[selected['USAF']==28450]
kumpula_df.to_csv("Kumpula.csv", sep=',', float_format="%2f")
rovaniemi_df.to_csv("rovaniemi.csv", sep=',', float_format="%2f")
</code></pre>
<p>Hope this helps!</p>
|
python|pandas|csv|export-to-csv
| 1
|
8,567
| 25,514,903
|
Pythonian structure?
|
<p>I'm someone that is semi-well versed in MATLAB, but am trying to move those skills to Python in hopes of future job prospects. For example, when performing machine learning, I enjoy MATLAB because I can make my function clean looking as below for example:</p>
<pre><code> main.m
------------
prescreen_fn(directory,threshold) %a prescreen function that is run
plot_prescreen_hits(directory) %plot and print prescreen hits
extract_features(directory,fft_size) %extract features from prescreen hit locations
generate_train_test(directory) %parse training and testing data
SVM_train_test(directory) %perform SVM training and testing
-----------
</code></pre>
<p>Well, you get the point. It's nice to have a clean main function whereby it's easy to pass off variables defined by a user, etc. </p>
<p>The problem is I don't know the best way to do this in python. I've read all over stack exchange how it's bad to call python script from other scripts and even then passing of variables is difficult. Also, I'm not wanting to have some massive script where I define lots of python code above and then call them below in the same script. </p>
<p>Apologies if this is very vague, but the general structure of how python "should" look is confusing me</p>
<p>Thanks</p>
|
<p>Your question may get closed as being off-topic or too broad, but I think it's a good question if rephrased as "what's the python equivalent of this code". </p>
<p>Generally speaking, this is something that a lot of folks coming from matlab get confused by. In python, things are separated into "namespaces" and you need to explicitly import functions/variables/etc from other files. </p>
<h2>Common high-level structure of code</h2>
<hr>
<p>In matlab (if I remember correctly), you can't have functions in the same file with "bare" statements. In python you can. However, you can't call a function before it has been defined. </p>
<p>In other words, you can do:</p>
<pre><code>def foo():
print 'bar'
foo()
</code></pre>
<p>but not:</p>
<pre><code>foo()
def foo():
print 'bar'
</code></pre>
<p>Therefore, because you typically want the "outline-level" code at the top of the file, it's common to put it into a function and then call that function at the bottom after the other functions have been defined. Typically, you'd call this function <code>main</code>, but you're free to name it whatever you'd like.</p>
<p>As a quick example:</p>
<pre><code>def main():
directory = load_data()
threshold, fft_size = 10, 1000
prescreen_fn(directory,threshold)
plot_prescreen_hits(directory)
extract_features(directory,fft_size)
generate_train_test(directory)
SVM_train_test(directory)
def prescreen_fn(directory, threshold):
"""A prescreen function that is run. Ideally this would be a
more informative docstring."""
pass
def plot_prescreen_hits(directory):
pass
def extract_features(directory,fft_size):
pass
def generate_train_test(directory):
pass
def SVM_train_test(directory):
pass
def load_data():
pass
if __name__ == '__main__':
main()
</code></pre>
<p>The last part probably looks a bit confusing. What that says is basically "execute the code in this block only if this file is run directly. If we're just importing functions from it, don't run anything yet." (There are a lot of explanations of this, e.g. <a href="https://stackoverflow.com/questions/419163/what-does-if-name-main-do">What does if __name__ == "__main__": do?</a>)</p>
<p>If you wanted, you could just do:</p>
<pre><code>def main():
...
def other_things():
...
main()
</code></pre>
<p>If you just run the file, you'll get the same result. The difference is in what happens when we import this code from somewhere else. (In the first example, <code>main</code> wouldn't be called while in the second it would.)</p>
<h2>Calling functions in other files</h2>
<hr>
<p>As things grow, you might decide to split some of that into separate files. For example, we might put some of the functions in a file called <code>data.py</code> and others in a file called <code>model.py</code>. We can then import functions from these files into another file where the "pipeline" is built up (we might even call this one <code>main.py</code>, or maybe something more descriptive).</p>
<p>Unlike matlab, we need to explicitly <code>import</code> these files. I won't go into the details here, but import basically tries to find a file or package (directory with a specific structure) with the specified name first in "library" locations and then in the same directory as the file being run (the preference order changed in 2.7 - local files used to supersede library files). </p>
<p>In the example below, <code>import data</code> will import functions and variables in the file <code>"data.py"</code> (and the same for <code>import model</code>). The functions, etc in that file are in a "namespace" called <code>data</code>, so we'll need to refer to them that way. (Note that you can do <code>from data import *</code> to bring them into the global namespace, but you really, really should avoid that unless you're in an interactive shell.)</p>
<pre><code>import data
import model
directory = data.load_data()
threshold, fft_size = 10, 1000
data.prescreen_fn(directory, threshold)
data.plot_prescreen_hits(directory)
data.extract_features(directory, fft_size)
model.generate_train_test(directory)
model.SVM_train_test(directory)
</code></pre>
<p>Notice that I didn't bother wrapping this one into a <code>main</code> function. We certainly could have. The reason I didn't do that here is that you presumably wouldn't ever want to import something from this short "main.py" file. Therefore we don't need to run things behind an <code>if __name__ == '__main__':</code> conditional. </p>
<p>Hopefully these examples help clarify things a bit.</p>
|
python|matlab|numpy
| 5
|
8,568
| 25,992,795
|
Fastest way to create an array in Python
|
<p>I want to create a 3D array in Python, filled with -1.</p>
<p>I tested these methods:</p>
<pre><code>import numpy as np
l = 200
b = 100
h = 30
%timeit grid = [[[-1 for x in range(l)] for y in range(b)] for z in range(h)]
1 loops, best of 3: 458 ms per loop
%timeit grid = -1 * np.ones((l, b, h), dtype=np.int)
10 loops, best of 3: 35.5 ms per loop
%timeit grid = np.zeros((l, b, h), dtype=np.int) - 1
10 loops, best of 3: 31.7 ms per loop
%timeit grid = -1.0 * np.ones((l, b, h), dtype=np.float32)
10 loops, best of 3: 42.1 ms per loop
%%timeit
grid = np.empty((l,b,h))
grid.fill(-1.0)
100 loops, best of 3: 13.7 ms per loop
</code></pre>
<p>So obviously, the last one is the fastest. Does anybody has an even faster method or at least less memory intensive? Because it runs on a RaspberryPi.</p>
|
<p>The only thing I can think to add is that any of these methods will be faster with the <code>dtype</code> argument chosen to take up as little memory as possible.</p>
<p>Assuming you need no more space that <code>int8</code>, the method suggested by @RutgerKassies in the comments took this long on my system:</p>
<pre><code>%timeit grid = np.full((l, b, h), -1, dtype=np.int8)
1000 loops, best of 3: 286 µs per loop
</code></pre>
<p>For comparison, not specifying <code>dtype</code> (defaulting to <code>int32</code>) took about 10 times longer with the same method:</p>
<pre><code>%timeit grid = np.full((l, b, h), -1)
100 loops, best of 3: 3.61 ms per loop
</code></pre>
<p>Your fastest method was about as fast as <code>np.full</code> (sometimes beating it):</p>
<pre><code>%%timeit
grid = np.empty((l,b,h))
grid.fill(-1)
100 loops, best of 3: 3.51 ms per loop
</code></pre>
<p>or, with <code>dtype</code> specified as <code>int8</code>,</p>
<pre><code>1000 loops, best of 3: 255 µs per loop
</code></pre>
<p><strong>Edit</strong>: This is probably cheating, but, well...</p>
<pre><code>%timeit grid = np.lib.stride_tricks.as_strided(np.array(-1, dtype=np.int8), (l, b, h), (0, 0, 0))
100000 loops, best of 3: 12.4 us per loop
</code></pre>
<p>All that's happening here is that we begin with an array of length one, <code>np.array([-1])</code>, and then fiddle with the stride lengths so that <code>grid</code> <em>looks</em> exactly like an array with the required dimensions.</p>
<p>If you need an actual array, you can use <code>grid = grid.copy()</code>; this makes the creation of the <code>grid</code> array about as fast as the quickest approaches suggested elsewhere on this page.</p>
|
python|numpy
| 2
|
8,569
| 66,898,158
|
Matplotlib barh yticklabels not displayed correctly
|
<p>I have the below code implemented to save a <code>barh</code> type plot as a .png file.</p>
<pre><code> fig, ax = plt.subplots()
width=0.5
ind = np.arange(len(df['Count'])) # the x locations for the groups
    ax.barh(ind, df['Count'], 0.8, color="blue")
ax.set_yticks(ind+width/2)
ax.set_yticklabels(df['UserName'], minor=False,fontsize=6)
for i, v in enumerate(df['Count']):
ax.text(v + 4, i + .1, str(v), color='blue')
plt.xlabel("Count", size=10)
plt.ylabel("User", size=10)
plt.title("Distribution", size=12)
plt.savefig('count.png')
</code></pre>
<p>The code works fine but sometimes the Yticklabels are either not shown or get truncated.</p>
<p>Ex1:</p>
<p><a href="https://i.stack.imgur.com/MRvNk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MRvNk.png" alt="enter image description here" /></a></p>
<p>Ex2:</p>
<p><a href="https://i.stack.imgur.com/WlrI2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WlrI2.png" alt="enter image description here" /></a></p>
<p>Ex3:</p>
<p><a href="https://i.stack.imgur.com/oDRZk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oDRZk.png" alt="enter image description here" /></a></p>
|
<p>In this line:
<code>ax.text(v + 4, i + .1, str(v), color='blue')</code>, you use x, y (the position of the text) and add a constant 4; this constant will have different consequences in different plots.</p>
<p>You can try this instead:</p>
<pre class="lang-py prettyprint-override"><code>ax.text(v + v/4, i + .1, str(v), color='blue')
</code></pre>
<p>Now it is related to the value that you are using.</p>
<p>Or you can do:</p>
<pre class="lang-py prettyprint-override"><code> v_add = (df['Count'].max())/10
for i, v in enumerate(df['Count']):
ax.text(v + v_add, i + .1, str(v), color='blue')
</code></pre>
<p>Play with these options; change the 4 or the 10 in these examples until you get something reasonable.</p>
<p>Another option is the following (although might not be very accurate)</p>
<pre class="lang-py prettyprint-override"><code> fig, ax = plt.subplots()
width=0.5
ind = np.arange(len(df['Count'])) # the x locations for the groups
    ax.barh(ind, df['Count'], 0.8, color="blue")
ax.set_yticks(ind+width/2)
ax.set_yticklabels(df['UserName'], minor=False,fontsize=6)
max_loc = ax.get_xlim()[1]
for i, v in enumerate(df['Count']):
ax.text(max_loc,i + .1 , str(v), color='blue')
    ax.set_xlim(0, max_loc + max_loc/15)
plt.xlabel("Count", size=10)
plt.ylabel("User", size=10)
plt.title("Distribution", size=12)
</code></pre>
|
python|pandas|dataframe|matplotlib
| 1
|
8,570
| 66,950,811
|
Plot 3d vectors and points on the same plot in python?
|
<p>I have the following matrix:</p>
<pre><code> X = np.array([[1,2],[3,4],[5,6]])
</code></pre>
<p>and the following vectors</p>
<pre><code>Vecs = np.array([[ 0.70710678, 0.70710678, 0. ],
[ 0. , 0. , 1. ],
[-0.70710678, 0.70710678, 0. ]])
</code></pre>
<p>I want to draw the vectors <code>Vecs[:,0]</code>, <code>Vecs[:,1]</code>, <code>Vecs[:,2]</code> (the rows of the matrix) on the same plot as the points [1,3,5] and [2,4,6]. The vectors should be orthogonal if possible to draw a 90 degree label, but it's not necessary.</p>
|
<p>You can use <code>scatter</code> for points and <code>quiver</code> for vectors + experiment with different parameters of <code>view_init</code> to find the best angle:</p>
<pre><code>fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.view_init(15, 35)
ax.scatter(xs=X[0], ys=X[1], zs=X[2], color='crimson')
ax.quiver(0, 0, 0, Vecs[:,0], Vecs[:,1], Vecs[:,2], color='steelblue')
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/nn79M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nn79M.png" alt="picture" /></a></p>
|
python|numpy|matplotlib|plot|vector
| 0
|
8,571
| 67,023,233
|
How do I filter out rows based on another data frame in Python?
|
<p>So I need to filter out rows from one data frame using another dataframe as a condition for it.</p>
<p>df1:</p>
<pre><code>system code
AIII-01 423
CIII-04 123
LV-02 142
</code></pre>
<p>df2:</p>
<pre><code>StatusMessage Event
123 Gearbox warm up
</code></pre>
<p>So for this example I need to remove the rows that has the code 423 and 142.</p>
<p>How do I do that?</p>
|
<p>Plug and play script for you. If this doesn't work on your regular code, check to make sure you have the same types in the same columns.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df1 = pd.DataFrame(
{"system": ["AIII", "CIII", "LV"], "Code": [423, 123, 142]}
)
df2 = pd.DataFrame(
{"StatusMessage": [123], "Event": ["Gearbox warm up"]}
)
### This is what you need
df1 = df1[df1.Code.isin(df2.StatusMessage.unique())]
print(df1)
</code></pre>
|
python|pandas|dataframe|spyder
| 2
|
8,572
| 67,037,249
|
What is "DataFrame object has no attribute 'ix'" error?
|
<p>I am just trying to run a simple code which is:</p>
<pre><code>from stocker.stocker import Stocker
microsoft = Stocker(ticker='MSFT')
techm = Stocker(ticker='TECHM', exchange='NSE')
</code></pre>
<p>And I get this error:
<a href="https://i.stack.imgur.com/sGtEx.png" rel="nofollow noreferrer">Error Snippet</a></p>
<p><strong>Note:</strong> I am a beginner at coding so please be simple in your answers.</p>
|
<p>Accessing rows in pandas DataFrames using <code>.ix</code> was deprecated a long time ago and has since been removed; <code>.loc</code> and <code>.iloc</code> are the replacements. Check whether a more recent version of the <code>stocker</code> module is available than the one you have installed.</p>
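<p>If you end up patching the library code yourself, the usual replacement pattern is roughly as follows (a sketch with made-up labels):</p>
<pre><code># df.ix['row_label', 'col_label']  ->  df.loc['row_label', 'col_label']   # label-based
# df.ix[0, 1]                      ->  df.iloc[0, 1]                      # position-based
</code></pre>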
|
python|pandas|dataframe
| 0
|
8,573
| 67,038,186
|
How can i create a column from 2 related columns of lists in python?
|
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>sampleID</th>
<th>testnames</th>
<th>results</th>
</tr>
</thead>
<tbody>
<tr>
<td>23939332</td>
<td>[32131,34343,35566]</td>
<td>[NEGATIVE,0.234,3.331]</td>
</tr>
<tr>
<td>32332323</td>
<td>[34343,96958,39550,88088]</td>
<td>[0,312,0.008,0.1,0.2]</td>
</tr>
</tbody>
</table>
</div>
<p>The table above is what I have, and the one below is what I want to achieve:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>sampleID</th>
<th>32131</th>
<th>34343</th>
<th>39550</th>
<th>88088</th>
<th>96985</th>
<th>35566</th>
</tr>
</thead>
<tbody>
<tr>
<td>23939332</td>
<td>NEGATIVE</td>
<td>0.234</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>3.331</td>
</tr>
<tr>
<td>32332323</td>
<td>NaN</td>
<td>0,312</td>
<td>0.1</td>
<td>0.2</td>
<td>0.008</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
<p>So I need to create columns of unique values from the <code>testnames</code> column and fill the cells with the corresponding values from the <code>results</code> column.</p>
<p>Considering this is as a sample from a very large dataset (table).</p>
|
<p>Here is a commented solution:</p>
<pre class="lang-py prettyprint-override"><code>(df.set_index(['sampleID']) # keep sampleID out of the expansion
.apply(pd.Series.explode) # expand testnames and results
.reset_index() # reset the index
.groupby(['sampleID', 'testnames']) #
.first() # set the expected shape
.unstack()) #
</code></pre>
<p>It gives the result you expected, though with a different column order:</p>
<pre><code> results
testnames 32131 34343 35566 39550 88088 96958
sampleID
23939332 NEGATIVE 0.234 3.331 NaN NaN NaN
32332323 NaN 0.312 NaN 0.1 0.2 0.008
</code></pre>
<p>Let's see how it does on generated data:</p>
<pre class="lang-py prettyprint-override"><code>def build_df(n_samples, n_tests_per_sample, n_test_types):
df = pd.DataFrame(columns=['sampleID', 'testnames', 'results'])
test_types = np.random.choice(range(0,100000), size=n_test_types, replace=False)
for i in range(n_samples):
testnames = list(np.random.choice(test_types,size=n_tests_per_sample))
results = list(np.random.random(size=n_tests_per_sample))
df = df.append({'sampleID': i, 'testnames':testnames, 'results':results}, ignore_index=True)
return df
def reshape(df):
df2 = (df.set_index(['sampleID']) # keep the sampleID out of the expansion
.apply(pd.Series.explode) # expand testnames and results
.reset_index() # reset the index
.groupby(['sampleID', 'testnames']) #
.first() # set the expected shape
.unstack())
return df2
%time df = build_df(60000, 10, 100)
# Wall time: 9min 48s (yes, it was ugly)
%time df2 = reshape(df)
# Wall time: 1.01 s
</code></pre>
<p><code>reshape()</code> breaks when <code>n_test_types</code> becomes too large, with <code>ValueError: Unstacked DataFrame is too big, causing int32 overflow</code>.</p>
|
pandas|dataframe
| 4
|
8,574
| 67,118,189
|
Unable to load pre-trained model checkpoint with TensorFlow Object Detection API
|
<p>Similar to this question:</p>
<p><a href="https://stackoverflow.com/questions/49507040/where-can-i-find-model-ckpt-in-faster-rcnn-resnet50-coco-model">Where can I find model.ckpt in faster_rcnn_resnet50_coco model?</a> (this solution doesn't work for me)</p>
<p>I have downloaded the <code>ssd_resnet152_v1_fpn_1024x1024_coco17_tpu-8</code> with the intention of using it as a starting point. I am using the sample model configuration associated with that model in the TF model zoo.</p>
<p>I am only changing the num classes and paths for tuning, training and eval.</p>
<p>With:</p>
<pre><code>fine_tune_checkpoint: "C:\\Users\\Peter\\Desktop\\Adv-ML-Project\\models\\research\\object_detection\\test_data\\checkpoint\\model.ckpt"
</code></pre>
<p>I get:</p>
<pre><code>tensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for C:\Users\Pierre\Desktop\Adv-ML-Project\models\research\object_detection\test_data\checkpoint\model.ckpt
</code></pre>
<p>With:</p>
<pre><code>fine_tune_checkpoint: "C:\\Users\\Peter\\Desktop\\Adv-ML-Project\\models\\research\\object_detection\\test_data\\checkpoint\\ckpt-0.*"
</code></pre>
<p>I get:</p>
<pre><code>tensorflow.python.framework.errors_impl.DataLossError: Unable to open table file C:\Users\Pierre\Desktop\Adv-ML-Project\models\research\object_detection\test_data\checkpoint\ckpt-0.data-00000-of-00001: Data loss: not an sstable (bad mag
ic number): perhaps your file is in a different file format and you need to use a different restore operator?
</code></pre>
<p>I'm currently using absolute paths because it's easiest, but if it's a problem I can re-organize my project structure.</p>
<p><a href="https://i.stack.imgur.com/UQr81.png" rel="nofollow noreferrer">Checkpoint Folder</a></p>
<p>The official documentation from <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_training_and_evaluation.md" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_training_and_evaluation.md</a>
says to do something like</p>
<pre><code>fine_tune_checkpoint: a path prefix to the pre-existing checkpoint (ie:"/usr/home/username/checkpoint/model.ckpt-#####").
</code></pre>
<p>Is there something I am doing wrong here? I am running this with the following command (also from documentation):</p>
<pre><code>python object_detection/model_main_tf2.py \
--pipeline_config_path="C:\\Users\Pierre\\Desktop\\Adv-ML-Project\\models\\my_model\\my_model.config" \
--model_dir="C:\\Users\\Pierre\\Desktop\\Adv-ML-Project\\models\\my_model\\training" \
--alsologtostderr
</code></pre>
|
<p>Try changing the <code>fine_tune_checkpoint</code> path in the config file to something like <code>path_to_folder/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0</code></p>
<p>And in your training command, set the <code>model_dir</code> flag to just point to the model directory, don't include <code>training</code>, kind of like <code>--model_dir=<path_to>/ssd_resnet152_v1_fpn_1024x1024_coco17_tpu-8</code></p>
<p><a href="https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html#configure-the-training-pipeline" rel="nofollow noreferrer">Source</a></p>
<p>Just change the backslashes to forward slashes, since you're on Windows.</p>
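<p>For reference, the relevant line in the pipeline config would then look something like the following (the path is just an illustration based on your folder layout, adjust it to your own):</p>
<pre><code>fine_tune_checkpoint: "C:/Users/Pierre/Desktop/Adv-ML-Project/models/research/object_detection/test_data/checkpoint/ckpt-0"
</code></pre>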
|
python|tensorflow|machine-learning|image-segmentation|object-detection-api
| 1
|
8,575
| 66,842,670
|
Fix mismatching x-ticks
|
<p>I currently have two Pandas DataFrames that I would like to layer on top of another.</p>
<pre><code># Creating plot
fig, ax = plt.subplots(figsize=[16, 9])
ax.scatter(italy_df.columns, italy_df.loc['Record high °C'], label = 'Record high °C', color = '#FF0000')
ax.scatter(italy_df.columns, italy_df.loc['Record low °C'], label = 'Record low °C', color = '#006BD1')
ax.plot(italy_df.columns, italy_df.loc['Daily mean °C'], label = 'Daily mean °C', color='#2F4858')
# Setting red fill between max and min scatter
ax.fill_between(range(len(italy_df.T.index)),
italy_df.loc['Record high °C'], italy_df.loc['Daily mean °C'],
facecolor='#FF0000', alpha=0.1, interpolate=False)
# Setting blue fill between min scatter and the min scatter
ax.fill_between(range(len(italy_df.T.index)),
italy_df.loc['Daily mean °C'], italy_df.loc['Record low °C'],
facecolor='#006BD1', alpha=0.1, interpolate=False)
locations = np.arange(0, 12)
labels = pd.date_range('2020-01-01', '2020-12-31', freq='MS').month_name()
plt.xticks(locations, labels, rotation=45)
ax2 = ax.twinx()
ax2.plot(covid_positive_2020)
locations = np.arange(0, 305, 30)
labels = pd.date_range('2020-02-01', covid_positive_2020.index.max(), freq='MS').month_name()
plt.show()
</code></pre>
<p>The following code produces this output:</p>
<p><a href="https://i.stack.imgur.com/5mTfi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5mTfi.png" alt="enter image description here" /></a></p>
<p>As you may see, there is a mismatch of xticks. The <code>covid_positive_2020</code> DataFrame has dates ranging from February to December, whereas my <code>italy_df</code> plots an average temperature from January to December.</p>
<p>What I'm trying to achieve is combine both plots where the <code>covid_positive_2020</code> would start from February and expand till end of December.</p>
<p>Thank you in advance! </p>
<p>The final result would look something like this:</p>
<p><a href="https://i.stack.imgur.com/pGHRM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pGHRM.png" alt="enter image description here" /></a></p>
|
<p>Found an option to achieve my desired result:</p>
<p>Using <code>ax.twiny().twinx()</code> I was able to create a secondary x and y axis, on top of which I started plotting.</p>
<pre><code># Creating plot
fig, ax = plt.subplots(figsize=[16, 9])
ticks = np.arange(0, 366)
plt.xticks(ticks)
x = pd.to_datetime(covid_positive_2020.index.min())
y = pd.date_range('01/01/2020', x)
ln1 = ax.plot(np.arange(len(y)-1, 366), covid_positive_2020, label='COVID cases 2020', color='darkorange')
ln2 = ax.plot(covid_positive_2021, label='COVID cases 2021', color='purple')
ax.set_ylabel('Number of new cases')
plt.gca().axes.get_xaxis().set_visible(False)
ax2 = ax.twiny().twinx()
ax2.scatter(italy_df.columns, italy_df.loc['Record high °C'], label = 'Record high °C', color = '#FF0000')
ax2.scatter(italy_df.columns, italy_df.loc['Record low °C'], label = 'Record low °C', color = '#006BD1')
ln3 = ax2.plot(italy_df.columns, italy_df.loc['Daily mean °C'], label = 'Daily mean °C', color='#2F4858')
ax2.set_ylabel('Temperature $^{\circ}$C')
# Setting red fill between max and min scatter
ax2.fill_between(range(len(italy_df.T.index)),
italy_df.loc['Record high °C'], italy_df.loc['Daily mean °C'],
facecolor='#FF0000', alpha=0.1, interpolate=False)
# Setting blue fill between min scatter and the min scatter
ax2.fill_between(range(len(italy_df.T.index)),
italy_df.loc['Daily mean °C'], italy_df.loc['Record low °C'],
facecolor='#006BD1', alpha=0.1, interpolate=False)
locations = np.arange(0, 12)
labels = pd.date_range('2020-01-01', '2020-12-31', freq='MS').month_name()
plt.xticks(locations, labels, rotation=45)
ax2 = plt.gca()
lns = ln1 + ln2 + ln3
legends = [l.get_label() for l in lns]
ax2.legend(lns, legends, loc=0)
plt.rcParams["axes.grid"] = False
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/zOhYp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zOhYp.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib
| 0
|
8,576
| 47,192,793
|
Image manipulation, using openCV and numpy. Trying to return an image without the color red
|
<p>I am trying to take in an image and check pixel by pixel whether there is any red in it. </p>
<p>If there is, it'll replace it with white. Once it runs through every pixel, it'll return a new image with white instead of red. </p>
<p>The following are my attempts: </p>
<pre><code>import cv2
import numpy as np
def take_out_red():
'''
Takes an image, and returns the same image with red replaced
by white
------------------------------------------------------------
Postconditions:
returns
new_img (no red)
'''
img = cv2.imread('red.png',1)
#Reads the image#
new_img = img
#Makes a copy of the image
for i in range(499):
for y in range(499):
if np.all(img[i,y]) == np.all([0,0,253]):
#[0,0,253] is a red pixel
#Supposed to check if the particular pixel is red
new_img[i,y] == [255,255,255]
#if it is red, it'll replace that pixel in the new_image
#with a white pixel
return cv2.imshow('image',new_img)
#returns the new_image with no red
</code></pre>
<p>Any help would be highly appreciated, thank you so much in advance. </p>
|
<p>When you have <code>OpenCV</code> or <code>numpy</code> at your service, you probably don't need to write doubly nested <code>for</code> loops, which are neither clean nor efficient. Both libraries have very efficient routines to iterate over an n-D array and apply basic operations such as checking equality. </p>
<p>You can use the <code>cv2.inRange()</code> method to segment the red color from the input image and then use <code>numpy</code> to replace the color using the mask obtained from <code>cv2.inRange()</code>, as:</p>
<pre><code>import cv2
import numpy as np
img = cv2.imread("./sample_img.png")
red_range_lower = np.array([0, 0, 240])
red_range_upper = np.array([0, 0, 255])
replacement_color = np.array([255, 0, 0])
red_mask = cv2.inRange(img, red_range_lower, red_range_upper)
img[red_mask == 255] = replacement_color
cv2.imwrite("./output.png", img)
</code></pre>
<p>Input:</p>
<p><a href="https://i.stack.imgur.com/iWHfq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iWHfq.png" alt="enter image description here"></a></p>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/t03C4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t03C4.png" alt="enter image description here"></a></p>
|
python|numpy|opencv|image-manipulation
| 1
|
8,577
| 47,395,944
|
SettingWithCopyWarning & Hidden Chaining
|
<p>I'm getting the SettingWithCopyWarning that suggests that I may have a chaining problem.</p>
<blockquote>
<p>SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead</p>
</blockquote>
<p>I've read about this at length, but cannot seem to find the right solution for my use case. Here's a great article on the topic: <a href="https://www.dataquest.io/blog/settingwithcopywarning/" rel="nofollow noreferrer">Understanding SettingwithCopyWarning in pandas</a></p>
<p>However, I'm still not quite sure how to proceed. Here are three variations of the same line of code that produce the ending result, but all throw up that same error.</p>
<p>Variations:</p>
<ol>
<li><p><code>X[subindex + '_DE'] = X[subindex + '_DE'].clip(lower=0, upper=200, axis=0)</code></p></li>
<li><p><code>X.loc[:, subindex + '_DE'] = np.clip(df.loc[:, subindex + '_DE'], 0, 200)</code></p></li>
<li><p><code>X.loc[:, subindex + '_DE'] = X.loc[:, subindex + '_DE'].clip(lower=0, upper=200, axis=0)</code></p></li>
</ol>
<p>End goal: Simply clip (truncate) any values in column [subindex + '_DE'] that extend beyond the lower (0) and upper (200) limits.</p>
<p>I'm not sure how to proceed. A little guidance would be helpful. Thank you in advance.</p>
<p><strong>Helpful background information:</strong></p>
<p>X is a pandas dataframe of float64 data arranged in 20 columns (features) x 6514 rows (observations).</p>
<p>Here's some data to work with:</p>
<pre><code>> print(X[subindex + '_DE'].head(180))
> date
> 1999-12-31 33.6584
> 2000-01-01 33.6584
> 2000-01-02 33.6584
> 2000-01-03 33.6584
> ... ...
> 2000-06-25 32.6530
> 2000-06-26 32.6530
> 2000-06-27 32.6530
> Name: NYEPLC_DE, Length: 180, dtype: float64
</code></pre>
|
<p>Found the answer.</p>
<p>In the <code>def</code>, I set <code>X = df[]</code> (a dataframe). If I simply add <code>.copy()</code> to <code>df[]</code>, then the warning goes away.</p>
<p>E.g. <code>X = df[['column1', 'column2']].copy()</code></p>
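<p>A minimal sketch of the pattern (using the column name from the question; the exact clip call is just for illustration):</p>
<pre><code>X = df[['NYEPLC_DE']].copy()   # .copy() makes X an independent DataFrame, not a view of df
X['NYEPLC_DE'] = X['NYEPLC_DE'].clip(lower=0, upper=200)   # no SettingWithCopyWarning now
</code></pre>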
<p>Benjamin Pryke's article above is very good...</p>
|
python|pandas|dataframe
| 1
|
8,578
| 47,292,475
|
Python inaccurate curve fit
|
<pre><code>Temp k(T)
298 6.66E-63
300 1.48E-62
350 3.58E-55
400 1.25E-49
450 2.57E-45
500 7.30E-42
550 4.90E-39
600 1.12E-36
650 1.11E-34
700 5.72E-33
750 1.75E-31
800 3.49E-30
850 4.92E-29
900 5.17E-28
950 4.24E-25
1000 2.83E-26
</code></pre>
<p>Above is the given kinetic data; I am trying to fit this data and plot it.</p>
<h1>Curvefitting</h1>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
import pandas as pd
plt.style.use('ggplot')
#Generate data
df=pd.read_excel('py_curvefit.xlsx')
T=df.Temp #xdata
def reacKine(T,A,n,Ea):
return A*((T/298)**n)*np.exp(-Ea/(0.008314*T))
kt=df['k(T)'] #ydata
#rectifying an erroneous value
kt[14]=4.24*10**(-27)
popt,pcov=curve_fit(reacKine,T,kt)
A,n,Ea=popt
plt.plot(T,np.log(kt),'g-',label='given data')
plt.plot(T,np.log(reacKine(T,*popt)),'ro',label='fit')
plt.xlabel('Temperature [K]')
plt.ylabel('log of reaction coefficient')
plt.legend(loc='best')
plt.show()
</code></pre>
<p>It says optimal parameters were not found for the function. How do I rectify this? I am hoping to see a good fit. Is it because of the exponential term?</p>
|
<p>This is a sensitive problem (as is typical when exponentials are involved). For a problem like this, it is important to have a pretty good initial guess for the parameters.</p>
<p>If you experiment with the parameters, you'll find that <code>A</code> has to be very small. The default initial guess that is used by <code>curve_fit</code> for all the parameters is 1, and 1 is far too big for <code>A</code>. If I use 1e-10 for the initial guess for <code>A</code></p>
<pre><code>popt, pcov = curve_fit(reacKine, T, kt, p0=(1e-10, 1, 1))
</code></pre>
<p>I get the following error from <code>curve_fit</code>:</p>
<pre><code>RuntimeError: Optimal parameters not found: Number of calls to function has reached maxfev = 800.
</code></pre>
<p>So let's increase <code>maxfev</code> to, say, <code>2000</code>:</p>
<pre><code>popt, pcov = curve_fit(reacKine, T, kt, p0=(1e-10, 1, 1), maxfev=2000)
</code></pre>
<p>I got the same error. When I increased it to <code>100000</code>, the function succeeded.</p>
<p>Here's a script that includes the updated call to <code>curve_fit</code>, followed by the plot generated by the script.</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
T = np.array([298, 300, 350, 400, 450, 500, 550, 600, 650, 700, 750, 800,
850, 900, 950, 1000])
kt = np.array([6.66e-63, 1.48e-62, 3.58e-55, 1.25e-49, 2.57e-45, 7.30e-42,
4.90e-39, 1.12e-36, 1.11e-34, 5.72e-33, 1.75e-31, 3.49e-30,
4.92e-29, 5.17e-28, 4.24e-27, 2.83e-26])
def reacKine(T,A,n,Ea):
return A*((T/298)**n)*np.exp(-Ea/(0.008314*T))
popt, pcov = curve_fit(reacKine, T, kt, p0=(1e-10, 1, 1), maxfev=100000)
plt.plot(T, kt, '.', label='data')
tt = np.linspace(T[0], T[-1], 160)
kk = reacKine(tt, *popt)
semilogy = True
if semilogy:
plt.semilogy(tt, kk, 'k-', alpha=0.3, label='fit')
results_xy = (700, 1e-45)
else:
plt.plot(tt, kk, 'k-', alpha=0.3, label='fit')
results_xy = (300, 1.5e-26)
plt.annotate(xy=results_xy,
s=('Fit Results:\n $A\,$ = %.4g\n $n\,$ = %.4g\n $E_{a}$ = %.4g' %
tuple(popt)))
plt.xlabel('T')
plt.ylabel('k(T)')
plt.legend(framealpha=1, shadow=True)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/g5SBI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g5SBI.png" alt="plot"></a></p>
<hr>
<p>P.S. @MNewville might be able to suggest a better way to do this fit using <a href="http://lmfit.github.io/lmfit-py/" rel="nofollow noreferrer">lmfit</a>.</p>
|
python-3.x|pandas|scipy|curve-fitting
| 3
|
8,579
| 68,392,447
|
How to create n copies of every row on dataframe based value in list Python
|
<p>I have following dataframe</p>
<pre><code>| Domain | Description
| test.com | some string
.....
</code></pre>
<p>I have a text augmentation algorithm which accepts a string as input and returns a list with <code>10</code> modified strings. Let's call this function <code>def augmentation(text)</code>.</p>
<p>For every row in the dataframe, I want to apply augmentation to the <code>Description</code> column, create 10 copies of each row, and pass the values from the <code>augmentation</code> function to <code>Description</code>.
The expected result should look something like:</p>
<pre><code>| Domain | Description
| test.com | some string
| test.com | smoe string
| test.com | moe sring
| test.com | some sring
..... and so on.
</code></pre>
|
<p>I came up with the following:</p>
<pre><code>df['Description'] = df['Description'].apply(augmentation)
result = df.assign(Description=df['Description']).explode('Description').reset_index(drop=True)
</code></pre>
|
python|pandas
| 0
|
8,580
| 1,322,380
|
gotchas where Numpy differs from straight python?
|
<p>Folks,</p>
<p>is there a collection of gotchas where Numpy differs from python,
points that have puzzled and cost time ?</p>
<blockquote>
<p>"The horror of that moment I shall
never never forget !"<br>
"You will, though," the Queen said, "if you don't
make a memorandum of it."</p>
</blockquote>
<p>For example, NaNs are always trouble, anywhere.
If you can explain this without running it, give yourself a point --</p>
<pre><code>from numpy import array, NaN, isnan
pynan = float("nan")
print pynan is pynan, pynan is NaN, NaN is NaN
a = (0, pynan)
print a, a[1] is pynan, any([aa is pynan for aa in a])
a = array(( 0, NaN ))
print a, a[1] is NaN, isnan( a[1] )
</code></pre>
<p>(I'm not knocking numpy, lots of good work there, just think a FAQ or Wiki of gotchas would be useful.)</p>
<p>Edit: I was hoping to collect half a dozen gotchas (surprises for people learning Numpy).<br>
Then, if there are common gotchas or, better, common explanations,
we could talk about adding them to a community Wiki (where ?)
It doesn't look like we have enough so far.</p>
|
<p>Because <code>__eq__</code> does not return a bool, using numpy arrays in any kind of container prevents equality testing without a container-specific workaround.</p>
<p>Example:</p>
<pre><code>>>> import numpy
>>> a = numpy.array(range(3))
>>> b = numpy.array(range(3))
>>> a == b
array([ True, True, True], dtype=bool)
>>> x = (a, 'banana')
>>> y = (b, 'banana')
>>> x == y
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>This is a horrible problem. For example, you cannot write unittests for containers which use <code>TestCase.assertEqual()</code> and must instead write custom comparison functions. Suppose we write a work-around function <code>special_eq_for_numpy_and_tuples</code>. Now we can do this in a unittest:</p>
<pre><code>x = (array1, 'deserialized')
y = (array2, 'deserialized')
self.failUnless( special_eq_for_numpy_and_tuples(x, y) )
</code></pre>
<p>Now we must do this for every container type we might use to store numpy arrays. Furthermore, <code>__eq__</code> might return a bool rather than an array of bools:</p>
<pre><code>>>> a = numpy.array(range(3))
>>> b = numpy.array(range(5))
>>> a == b
False
</code></pre>
<p>Now each of our container-specific equality comparison functions must also handle that special case.</p>
<p>Maybe we can patch over this wart with a subclass?</p>
<pre><code>>>> class SaneEqualityArray (numpy.ndarray):
... def __eq__(self, other):
... return isinstance(other, SaneEqualityArray) and self.shape == other.shape and (numpy.ndarray.__eq__(self, other)).all()
...
>>> a = SaneEqualityArray( (2, 3) )
>>> a.fill(7)
>>> b = SaneEqualityArray( (2, 3) )
>>> b.fill(7)
>>> a == b
True
>>> x = (a, 'banana')
>>> y = (b, 'banana')
>>> x == y
True
>>> c = SaneEqualityArray( (7, 7) )
>>> c.fill(7)
>>> a == c
False
</code></pre>
<p>That seems to do the right thing. The class should also explicitly export elementwise comparison, since that is often useful.</p>
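<p>A small sketch of what explicitly exporting the elementwise comparison might look like (this is my own addition, not part of the original class):</p>
<pre><code>class SaneEqualityArray (numpy.ndarray):
    def __eq__(self, other):
        return isinstance(other, SaneEqualityArray) and self.shape == other.shape and (numpy.ndarray.__eq__(self, other)).all()
    def elementwise_eq(self, other):
        # the plain ndarray comparison, returning an array of bools
        return numpy.ndarray.__eq__(self, other)
</code></pre>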
|
python|numpy
| 25
|
8,581
| 59,418,746
|
Pandas convert JSON string to Dataframe - Python
|
<p>I have a JSON string that needs to be converted to a dataframe with the desired column names.</p>
<pre><code>my_json = {'2017-01-03': {'open': 214.86,
'high': 220.33,
'low': 210.96,
'close': 216.99,
'volume': 5923254},
'2017-12-29': {'open': 316.18,
'high': 316.41,
'low': 310.0,
'close': 311.35,
'volume': 3777155}}
</code></pre>
<p>Using the code below doesn't give the format I want:</p>
<pre><code>pd.DataFrame.from_dict(json_normalize(my_json), orient='columns')
</code></pre>
<p><a href="https://i.stack.imgur.com/S9sCP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S9sCP.png" alt="enter image description here"></a></p>
<p>my expected format is below</p>
<p><a href="https://i.stack.imgur.com/PtvLx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PtvLx.png" alt="enter image description here"></a></p>
<p>I'm not sure how to do it.</p>
|
<p>You can also do it this way to get the exact format:</p>
<pre><code>pd.DataFrame(my_json).T.rename_axis(columns='Date')
Date open high low close volume
2017-01-03 214.86 220.33 210.96 216.99 5923254.0
2017-12-29 316.18 316.41 310.00 311.35 3777155.0
</code></pre>
<p>You can also read directly from the data to get the format with the missing date:</p>
<pre><code>pd.DataFrame.from_dict(my_json, orient='index').rename_axis(columns='Date')
Date open high low close volume
2017-01-03 214.86 220.33 210.96 216.99 5923254
2017-12-29 316.18 316.41 310.00 311.35 3777155
</code></pre>
|
python|json|pandas
| 3
|
8,582
| 59,141,366
|
Unable to complete this question due to syntax error in the Python code for tensorflow?
|
<p>I have to return the values as a tuple. Basically, there are two errors here: firstly, the 'return' is outside of the function; secondly, the result is not returned as a tuple.</p>
<pre><code>def train_mnist():
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if logs.get('acc') > 0.99:
print ('\nReached 99% accuracy so cancelling training!')
self.model.stop_training = True
mnist = tf.keras.datasets.mnist
((x_train, y_train), (x_test, y_test)) = mnist.load_data(path=path)
(x_train, x_test) = (x_train / 255.0, x_test / 255.0)
callbacks = myCallback()
model = \
tf.keras.models.Sequential([tf.keras.layers.Flatten(input_shape=(28,
28)), tf.keras.layers.Dense(512,
activation=tf.nn.relu),
tf.keras.layers.Dense(10,
activation=tf.nn.softmax)])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=10,
callbacks=[callbacks])
return (history.epoch, history.history['acc'][-1])
</code></pre>
|
<p>The issue was with the indentation and getting the accuracy from the <code>model log</code>. </p>
<p>I have modified your code as below and got the intended output. </p>
<pre><code>def train_mnist():
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
if logs["accuracy"] > 0.99:
print ('\nReached 99% accuracy so cancelling training!')
self.model.stop_training = True
mnist = tf.keras.datasets.mnist
((x_train, y_train), (x_test, y_test)) = mnist.load_data()
(x_train, x_test) = (x_train / 255.0, x_test / 255.0)
callbacks = myCallback()
model = \
tf.keras.models.Sequential([tf.keras.layers.Flatten(input_shape=(28,
28)), tf.keras.layers.Dense(512,
activation=tf.nn.relu),
tf.keras.layers.Dense(10,
activation=tf.nn.softmax)])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=10,
callbacks=[callbacks])
return (history.epoch, history.history['accuracy'][-1])
</code></pre>
<p><strong>Output:</strong> </p>
<pre><code>Epoch 1/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.2026 - accuracy: 0.9392
Epoch 2/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0799 - accuracy: 0.9755
Epoch 3/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0521 - accuracy: 0.9839
Epoch 4/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0353 - accuracy: 0.9894
Epoch 5/10
1867/1875 [============================>.] - ETA: 0s - loss: 0.0278 - accuracy: 0.9910
Reached 99% accuracy so cancelling training!
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0278 - accuracy: 0.9910
([0, 1, 2, 3, 4], 0.9909833073616028)
</code></pre>
|
python|tensorflow|machine-learning|syntax-error
| 0
|
8,583
| 59,156,167
|
Parallelize in Cython without GIL
|
<p>I'm trying to compute some columns of a <code>numpy</code> array, operating on python objects (<code>numpy</code> array) in a for loop using a <code>cdef</code> function.</p>
<p><strong>I would like to do it in parallel.</strong> But I'm not sure how. </p>
<p>Here is a toy example, one <code>def</code> function calls a <code>cdef</code> function in a for loop using <strong><code>prange</code></strong>, which is not allowed because <code>np.ndarray</code> is a python object. In my real problem, one matrix and one vector are the arguments of the <code>cdef</code> function, and some <code>numpy</code> matrix operations are performed, like <code>np.linalg.pinv()</code> (which I guess is actually the bottleneck).</p>
<pre><code>%%cython
import numpy as np
cimport numpy as np
from cython.parallel import prange
from c_functions import estimate_coef_linear_regression
DTYPE = np.float
ctypedef np.float_t DTYPE_t
def transpose_example(np.ndarray[DTYPE_t, ndim=2] data):
"""
Transposes a matrix. It does each row independently and parallel
"""
cdef Py_ssize_t n = data.shape[0]
cdef Py_ssize_t t = data.shape[1]
cdef np.ndarray[DTYPE_t, ndim = 2] results = np.zeros((t, n))
cdef Py_ssize_t i
for i in prange(n, nogil=True):
results[i, :] = transpose_vector(data[:, i])
return results
cdef transpose_vector(np.ndarray[DTYPE_t, ndim=1] vector):
"""
transposes a np vector
"""
return vector.transpose()
a = np.random.rand(100, 20)
transpose_example(a)
</code></pre>
<p>outputs </p>
<pre><code>Converting to Python object not allowed without gil
</code></pre>
<p><strong>What would be the best way to do this in parallel?</strong></p>
|
<p>You can pass typed memoryview slices (<code>cdef transpose_vector(DTYPE_t[:] vector)</code>) around without the GIL - it's one of the key advantages of the newer typed memoryview syntax over <code>np.ndarray</code>.</p>
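<p>As a minimal sketch of the kind of function that can run inside <code>prange</code> (this is not a fix for the transpose example, just an illustration of the pattern):</p>
<pre><code>cdef void scale_vector(DTYPE_t[:] vec, DTYPE_t factor) nogil:
    # element-wise work on a typed memoryview needs no Python objects, so no GIL
    cdef Py_ssize_t j
    for j in range(vec.shape[0]):
        vec[j] = vec[j] * factor
</code></pre>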
<p>However,</p>
<ul>
<li>You can't call Numpy member functions (like transpose) on memoryviews, unless you cast back to a Numpy array (<code>np.asarray(vector)</code>). This requires the GIL.</li>
<li>Calling any kind of Python function (e.g. <code>transpose</code>) is going to require the GIL. This can be done inside a <code>with gil:</code> block, but when that block is almost your entire loop that becomes pretty pointless.</li>
<li>You don't specify a return type for <code>transpose_vector</code>, and so it'll default to <code>object</code>, which requires the GIL. You could specify a Cython return type, but I suspect even returning a memoryview slice may require some reference counting somewhere.</li>
<li>Be careful not to have multiple threads overwriting the same data in your passed memoryview slice.</li>
</ul>
<p>In summary: memoryview slices, but bear in mind you're quite limited in what you can do without the GIL. Your current example just isn't parallelizable (but this may be mostly because it's a toy example).</p>
|
python|numpy|parallel-processing|cython|hpc
| 2
|
8,584
| 44,917,675
|
How to delete column name
|
<p>I want to delete just the column names (x, y, z) and use only the data.</p>
<pre><code>In [68]: df
Out[68]:
x y z
0 1 0 1
1 2 0 0
2 2 1 1
3 2 0 1
4 2 1 0
</code></pre>
<p>I want to print result to same as below.</p>
<pre><code>Out[68]:
0 1 0 1
1 2 0 0
2 2 1 1
3 2 0 1
4 2 1 0
</code></pre>
<p>Is it possible? How can I do this?</p>
|
<p>In pandas, column names are needed by default.</p>
<p>But if you really want to <code>'remove'</code> the column names (which is strongly not recommended, because you can end up with duplicated column names), it is possible to assign empty strings:</p>
<pre><code>df.columns = [''] * len(df.columns)
</code></pre>
<hr>
<p>But if you need to write <code>df</code> to a file without the column names and index, add the parameters <code>header=False</code> and <code>index=False</code> to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="noreferrer"><code>to_csv</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html" rel="noreferrer"><code>to_excel</code></a>.</p>
<pre><code>df.to_csv('file.csv', header=False, index=False)
df.to_excel('file.xlsx', header=False, index=False)
</code></pre>
|
python|pandas
| 36
|
8,585
| 44,957,617
|
Reshaping pandas dataframe with a column containing lists
|
<p>Let's say I have a dataframe that looks like this:</p>
<pre><code>import pandas as pd
data = [{"Name" : "Project A", "Feedback" : ['we should do x', 'went well']},
{"Name" : "Project B", "Feedback" : ['eat pop tarts', 'boo']},
{"Name" : "Project C", "Feedback" : ['bar', 'baz']}
]
df = pd.DataFrame(data)
df = df[['Name','Feedback']]
df
Name Feedback
0 Project A ['we should do x', 'went well']
1 Project B ['eat pop tarts', 'boo']
2 Project C ['bar', 'baz']
</code></pre>
<p>What I would like to do is reshape the dataframe, such that Name is the key and each element in the list of the Feedback column is a value like so:</p>
<pre><code> Name Feedback
0 Project A 'we should do x'
1 Project A 'went well'
2 Project B 'eat pop tarts'
3 Project B 'boo'
4 Project C 'bar'
5 Project C 'baz'
</code></pre>
<p>What would be an efficient way to do this? </p>
|
<p>One option is to reconstruct the data frame by flattening column <em>Feedback</em> and repeat column <em>Name</em>:</p>
<pre><code>pd.DataFrame({
'Name': df.Name.repeat(df.Feedback.str.len()),
'Feedback': [x for s in df.Feedback for x in s]
})
# Feedback Name
#0 we should do x Project A
#0 went well Project A
#1 eat pop tarts Project B
#1 boo Project B
#2 bar Project C
#2 baz Project C
</code></pre>
|
python|pandas|nltk
| 4
|
8,586
| 45,225,841
|
Pandas data slicing by column names
|
<p>I am learning Pandas and trying to understand slicing. Everything makes sense except when I try to slice using column names. My data frame looks like this:</p>
<pre><code> area pop
California 423967 38332521
Florida 170312 19552860
Illinois 149995 12882135
New York 141297 19651127
Texas 695662 26448193
</code></pre>
<p>and when I do <code>data['area':'pop']</code> I expected both columns to show since I am using explicit index and both the start and end of the slice should be inclusive, but the result is an empty dataframe.</p>
<p>I also get an empty dataframe for <code>data['area':]</code>. Why is this different from slicing with explicit indexes elsewhere?</p>
|
<p>According to <a href="https://pandas.pydata.org/pandas-docs/stable/indexing.html#slicing-ranges" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p>With DataFrame, slicing inside of [] <strong>slices the rows</strong>. This is provided largely as a convenience since it is such a common operation.</p>
</blockquote>
<p>You get an empty DataFrame because your index contains strings and it can't find values 'area' and 'pop' there. Here what you get in case of numeric index</p>
<pre><code>>> data.reset_index()['area':'pop']
TypeError: cannot do slice indexing on <class 'pandas.core.indexes.range.RangeIndex'> with these indexers [area] of <class 'str'>
</code></pre>
<p>What you want instead is</p>
<pre><code>>> data.loc[:, 'area':'pop']
</code></pre>
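<p>Note that label-based slicing with <code>.loc</code> is inclusive of both endpoints, so this returns both columns:</p>
<pre><code>>> data.loc[:, 'area':'pop']
              area       pop
California  423967  38332521
Florida     170312  19552860
...
</code></pre>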
|
python|pandas
| 6
|
8,587
| 45,107,528
|
Groupby and apply pandas vs dask
|
<p>There is something that I don't quite understand about <code>dask.dataframe</code> behavior. Let's say I want to replicate this from pandas:</p>
<pre><code>import pandas as pd
import dask.dataframe as dd
import random
s = "abcd"
lst = 10*[0]+list(range(1,6))
n = 100
df = pd.DataFrame({"col1": [random.choice(s) for i in range(n)],
"col2": [random.choice(lst) for i in range(n)]})
# I will need an hash in dask
df["hash"] = 2*df.col1
df = df[["hash","col1","col2"]]
def fun(data):
if data["col2"].mean()>1:
data["col3"]=2
else:
data["col3"]=1
return(data)
df1 = df.groupby("col1").apply(fun)
df1.head()
</code></pre>
<p>this returns</p>
<pre><code> hash col1 col2 col3
0 dd d 0 1
1 aa a 0 2
2 bb b 0 1
3 bb b 0 1
4 aa a 0 2
</code></pre>
<p>In Dask I tried</p>
<pre><code>def fun2(data):
if data["col2"].mean()>1:
return 2
else:
return 1
ddf = df.copy()
ddf.set_index("hash",inplace=True)
ddf = dd.from_pandas(ddf, npartitions=2)
gpb = ddf.groupby("col1").apply(fun2, meta=pd.Series())
</code></pre>
<p>where the groupby leads to the same result as in pandas, but I'm having a hard time merging the result into a new column while preserving the hash index.
I'd like to have the following result:</p>
<pre><code> col1 col2 col3
hash
aa a 5 2
aa a 0 2
aa a 0 2
aa a 0 2
aa a 4 2
</code></pre>
<p><strong>UPDATE</strong></p>
<p>Playing with merge I found this solution</p>
<pre><code>ddf1 = dd.merge(ddf, gpb.to_frame(),
left_on="col1",
left_index=False, right_index=True)
ddf1 = ddf1.rename(columns={0:"col3"})
</code></pre>
<p>I'm not sure how this is going to work if I have to do a groupby over several columns. Plus, it's not exactly elegant.</p>
|
<p>How about using join?</p>
<p>This is your dask code with the exception of naming the Series <code>pd.Series(name='col3')</code></p>
<pre><code>def fun2(data):
if data["col2"].mean()>1:
return 2
else:
return 1
ddf = df.copy()
ddf.set_index("hash",inplace=True)
ddf = dd.from_pandas(ddf, npartitions=2)
gpb = ddf.groupby("col1").apply(fun2, meta=pd.Series(name='col3'))
</code></pre>
<p>then the join</p>
<pre><code>ddf.join(gpb.to_frame(), on='col1')
print(ddf.compute().head())
col1 col2 col3
hash
cc c 0 2
cc c 0 2
cc c 0 2
cc c 2 2
cc c 0 2
</code></pre>
|
python|pandas|group-by|apply|dask
| 4
|
8,588
| 57,212,688
|
pandas dataframe - Get value from under certain criteria
|
<pre><code>print df
id product_id product_title search_term relevance
0 2 100001 Simpsom Strong anglebracket 3.00
1 3 100001 Simpsom Strong ibracket 2.50
2 16 100005 Delta Vero rainshowerhead 2.33
</code></pre>
<p>Let's say I have id = 3 and want the search_term associated with it (VALUE ONLY). How would I extract that?</p>
<p>I got an answer with this code:</p>
<pre><code>target = df.loc[df['id']==3, 'search_term']
print target
</code></pre>
<p>However, it returns an entire pandas Series, including the index, like:</p>
<pre><code>1 ibracket
Name: search_term, dtype: object
</code></pre>
<p>not just the value 'ibracket'.</p>
<p>I know I can get the value by doing:</p>
<pre><code>target_i = df.loc[df['id']==16, 'search_term'].index[0]
target = df ['search_term'] [target_i]
</code></pre>
<p>so I can get just the value I want. But I assume there should be a more direct way, like:</p>
<pre><code>target = df.loc[df['id']==16, 'search_term'].value
</code></pre>
<p>and get the value only directly. </p>
<p>But this doesn't work, does anyone know a solution to this? Thanks in advance.</p>
|
<p>You are doing it the long way. This works:</p>
<pre><code>search_term = df.loc[df['id'] == 3, 'search_term'].iloc[0]
</code></pre>
<p>Any Series can have 0 to many elements. <code>iloc[0]</code> gets the value of the first element in that series. For production, you should check if the series is empty first.</p>
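<p>For example, a defensive version might look like this (just a sketch):</p>
<pre><code>match = df.loc[df['id'] == 3, 'search_term']
if not match.empty:
    search_term = match.iloc[0]
else:
    search_term = None   # or raise / handle it however fits your application
</code></pre>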
|
python|python-3.x|pandas
| 0
|
8,589
| 57,072,953
|
Apply multiple operations on same columns after groupby
|
<p>I have the following <code>df</code>,</p>
<pre><code>id year_month amount
10 201901 10
10 201901 20
10 201901 30
20 201902 40
20 201902 20
</code></pre>
<p>I want to <code>groupby</code> <code>id</code> and <code>year-month</code> and then get the group size and sum of <code>amount</code>,</p>
<pre><code>df.groupby(['id', 'year_month'], as_index=False)['amount'].sum()
df.groupby(['id', 'year_month'], as_index=False).size().reset_index(name='count')
</code></pre>
<p>I am wondering how to do it at the same time in one line;</p>
<pre><code>id year_month amount count
10 201901 60 3
20 201902 60 2
</code></pre>
|
<p>Use <code>agg</code>:</p>
<pre><code>df.groupby(['id', 'year_month']).agg({'amount': ['count', 'sum']})
amount
count sum
id year_month
10 201901 3 60
20 201902 2 60
</code></pre>
<p>If you want to remove the multi-index, use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.droplevel.html" rel="nofollow noreferrer"><code>MultiIndex.droplevel</code></a>:</p>
<pre><code>s = df.groupby(['id', 'year_month']).agg({'amount': ['count', 'sum']}).rename(columns ={'sum': 'amount'})
s.columns = s.columns.droplevel(level=0)
s.reset_index()
id year_month count amount
0 10 201901 3 60
1 20 201902 2 60
</code></pre>
|
python-3.x|pandas|dataframe|pandas-groupby
| 5
|
8,590
| 57,106,574
|
Is there a Pandas/Numpy implementation of the Monty Hall problem without looping?
|
<p>This is more of a curiosity exercise...</p>
<p>If you've not heard of the Monty Hall problem, it's explained in this great <a href="https://www.youtube.com/watch?v=4Lb-6rxZxx0" rel="nofollow noreferrer">youtube video</a>.</p>
<p>I simulated it in python using numpy:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
num_games = 100000
options = np.arange(1, 4, 1)
stick_result = 0
switch_result = 0
for i in range(1, num_games + 1):
winning_door = np.random.randint(1, 4)
first_choice = np.random.randint(1, 4)
if winning_door == first_choice:
        stick_result += 1
# remove a door that isn't the winning_door or the first_choice
door_to_remove = np.random.choice(options[~np.isin(options, [winning_door, first_choice])])
options_with_one_door_removed = options[~np.isin(options, door_to_remove)]
# switch door to remaining option that isn't the first choice
second_choice_after_switch = options_with_one_door_removed[~np.isin(options_with_one_door_removed, first_choice)]
if winning_door == second_choice_after_switch:
switch_result += 1
</code></pre>
<p>Is this possible to do without a for loop though? Here's what I have so far, but I'm not sure how to do the door switching. </p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
num_games = 100000
options = np.arange(1, 4, 1)
winning_door = np.random.randint(1, 4, num_games)
first_choice = np.random.randint(1, 4, num_games)
stick_successes = (winning_door == first_choice).sum()
# remove a door that isn't the winning_door or the first_choice
door_to_remove = ???
options_with_one_door_removed = ???
# switch door to remaining option that isn't the first choice
second_choice_after_switch = ???
switch_successes = (winning_door == second_choice_after_switch).sum()
</code></pre>
<p>You have to determine which door the gameshow host removes in each instance of the game (each row of the <code>winning_door</code> & <code>first_choice</code> arrays) and then switch the <code>first_choice</code> to the other remaining door.</p>
<p>Any ideas?</p>
|
<p>Your biggest issue here is vectorizing <code>choice</code> with a mask. That could look something like:</p>
<pre class="lang-py prettyprint-override"><code>def take_masked_along_axis(arr, where, index, axis):
""" Take the index'th non-masked element along each 1d slice along axis """
assert where.dtype == bool
assert index.shape[axis] == 1
# np.searchsorted would be faster, but does not vectorize
unmasked_index = (where.cumsum(axis=axis) > index).argmax(axis=axis)
unmasked_index = np.expand_dims(unmasked_index, axis=axis) # workaround for argmax having no keepdims
return np.take_along_axis(arr, unmasked_index, axis=axis)
def random_choice_masked_along_axis(arr, where, axis):
""" Like the above, but choose the indices via a uniform random number """
assert where.dtype == bool
index = np.random.sample(arr.shape[:axis] + (1,) + arr.shape[axis+1:]) * where.sum(axis=axis, keepdims=True)
return take_masked_along_axis(arr, where, index, axis=axis)
</code></pre>
<p>Making the first part of your code something like</p>
<pre class="lang-py prettyprint-override"><code>options_broadcast = np.broadcast_to(options, (3, num_games))
removable = (options != options_broadcast) & (options != options_broadcast)
door_to_remove = random_choice_masked_along_axis(options_broadcast, where=removable, axis=0)
</code></pre>
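<p>For completeness, one possible way to finish the switching step (my own addition, not part of the answer above): since the door labels are 1, 2 and 3, the remaining door is just 6 minus the other two.</p>
<pre class="lang-py prettyprint-override"><code># the switched-to door is the one that is neither the first choice nor the removed door
second_choice_after_switch = 6 - first_choice - door_to_remove.squeeze()
switch_successes = (winning_door == second_choice_after_switch).sum()
</code></pre>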
|
python|pandas|numpy
| 2
|
8,591
| 45,805,148
|
How can I compute model metrics during training with canned estimators?
|
<p>Using Keras, one typically gets metrics (e.g. accuracy) as part of the progress bar for free. Using the example here:</p>
<p><a href="https://github.com/fchollet/keras/blob/master/examples/mnist_mlp.py" rel="nofollow noreferrer">https://github.com/fchollet/keras/blob/master/examples/mnist_mlp.py</a></p>
<p>After running e.g.</p>
<pre><code>history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
</code></pre>
<p>Keras will start fitting the model, and will show progress output with something like:</p>
<pre><code> 3584/60000 [>.............................] - ETA: 10s - loss: 0.0308 - acc: 0.9905
</code></pre>
<p>Suppose I wanted to accomplish the same thing using a TensorFlow canned estimator -- extract the current accuracy for a classifier, and display that as part of a progress bar (done by e.g. a <a href="https://www.tensorflow.org/api_docs/python/tf/train/SessionRunHook" rel="nofollow noreferrer">SessionRunHook</a>).</p>
<p>It seems like accuracy metrics aren't provided as part of the default set of operations on a graph. Is there a way I can manually add it myself with a session run hook?</p>
<p>(It looks like it's possible to add operations to the graph as part of the <code>begin()</code> hook, but I'm not sure how I can e.g. request the computation of the model accuracy there.)</p>
|
<p>accuracy is one of the default metrics in canned classifiers, but it is calculated by the Estimator.evaluate call, not by Estimator.train. You can create a for loop to do what you want:</p>
<pre><code>for ...:
    estimator.train(training_data)
    metrics = estimator.evaluate(evaluation_data)
</code></pre>
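<p>A rough sketch of that loop (<code>train_input_fn</code>, <code>eval_input_fn</code> and <code>num_epochs</code> are placeholders for your own input functions and schedule):</p>
<pre><code>for epoch in range(num_epochs):
    estimator.train(input_fn=train_input_fn)
    metrics = estimator.evaluate(input_fn=eval_input_fn)
    print('epoch {}: accuracy = {:.4f}'.format(epoch, metrics['accuracy']))
</code></pre>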
|
tensorflow
| 0
|
8,592
| 46,095,110
|
gcloud ml-engine local predict --text-instances fails with "Could not parse" error
|
<p>I'm trying to make the tensorflow boston sample (<a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/tutorials/input_fn" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/tutorials/input_fn</a>) work on google cloudml and I seem to be successful with the training, but I struggle with the subsequent predictions. </p>
<ol>
<li><p>I've tweaked the code to fit with tf.contrib.learn.Experiment and learn_runner.run(). It runs both locally and in the cloud with "gcloud ml-engine local train ..."/"gcloud ml-engine jobs submit training ...". </p></li>
<li><p>I can with the trained model run estimator.predict(input_fn=predict_input_fn)) and get meaningful predictions with the given boston_predict.csv set.</p></li>
<li><p>I can create and version the model in the cloud with "gcloud ml-engine models create ..." and "gcloud ml-engine versions create ..." </p></li>
</ol>
<p>But</p>
<ol start="4">
<li>Local predictions over "gcloud ml-engine local predict --model-dir=/export/Servo/XXX --text-instances boston_predict.csv" fails with a "InvalidArgumentError (see above for traceback): Could not parse example input <..> (Error code: 2). See below for transcript. It fails similarly with a headerless boston_predict.csv. </li>
</ol>
<p>I've looked up the expected format with "$ gcloud ml-engine local predict --help
", read <a href="https://cloud.google.com/ml-engine/docs/how-tos/troubleshooting" rel="nofollow noreferrer">https://cloud.google.com/ml-engine/docs/how-tos/troubleshooting</a>, but in general failed to find reports of my specific error via Google or Stack Exchange.</p>
<p>I'm a noob, so I'm probably erring in some basic way, but I cannot spot it.</p>
<p>All and any help is appreciated,</p>
<p>:-)</p>
<p>yarc68000. </p>
<p>-------environment----------</p>
<pre><code>(env1) $ gcloud --version
Google Cloud SDK 170.0.0
alpha 2017.03.24
beta 2017.03.24
bq 2.0.25
core 2017.09.01
datalab 20170818
gcloud
gsutil 4.27
(env1) $ python --version
Python 2.7.13 :: Anaconda 4.3.1 (64-bit)
(env1) $ conda list | grep tensorflow
tensorflow 1.3.0 <pip>
tensorflow-tensorboard 0.1.6 <pip>
</code></pre>
<p>------------execution and error : boston_predict.csv ----------</p>
<pre><code>$ gcloud ml-engine local predict --model-dir=<..>/export/Servo/1504780684 --text-instances 1709boston/boston_predict.csv
<..>
ERROR:root:Exception during running the graph: Could not parse example input, value: 'CRIM,ZN,INDUS,NOX,RM,AGE,DIS,TAX,PTRATIO'
[[Node: ParseExample/ParseExample = ParseExample[Ndense=9, Nsparse=0, Tdense=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], dense_shapes=[[1], [1], [1], [1], [1], [1], [1], [1], [1]], sparse_types=[], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_Placeholder_0_0, ParseExample/ParseExample/names, ParseExample/ParseExample/dense_keys_0, ParseExample/ParseExample/dense_keys_1, ParseExample/ParseExample/dense_keys_2, ParseExample/ParseExample/dense_keys_3, ParseExample/ParseExample/dense_keys_4, ParseExample/ParseExample/dense_keys_5, ParseExample/ParseExample/dense_keys_6, ParseExample/ParseExample/dense_keys_7, ParseExample/ParseExample/dense_keys_8, ParseExample/Const, ParseExample/Const_1, ParseExample/Const_2, ParseExample/Const_3, ParseExample/Const_4, ParseExample/Const_5, ParseExample/Const_6, ParseExample/Const_7, ParseExample/Const_8)]]
<..>
</code></pre>
<p>------- execution and error headerless boston_predict.csv ------</p>
<p>(here I try with a boston_predict.csv with the first line omitted)</p>
<pre><code>$ gcloud ml-engine local predict --model-dir=<..>/export/Servo/1504780684 --text-instances 1709boston/boston_predict_headerless.csv
<..>
ERROR:root:Exception during running the graph: Could not parse example input, value: '0.03359,75.0,2.95,0.428,7.024,15.8,5.4011,252,18.3'
[[Node: ParseExample/ParseExample = ParseExample[Ndense=9, Nsparse=0, Tdense=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], dense_shapes=[[1], [1], [1], [1], [1], [1], [1], [1], [1]], sparse_types=[], _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_Placeholder_0_0, ParseExample/ParseExample/names, ParseExample/ParseExample/dense_keys_0, ParseExample/ParseExample/dense_keys_1, ParseExample/ParseExample/dense_keys_2, ParseExample/ParseExample/dense_keys_3, ParseExample/ParseExample/dense_keys_4, ParseExample/ParseExample/dense_keys_5, ParseExample/ParseExample/dense_keys_6, ParseExample/ParseExample/dense_keys_7, ParseExample/ParseExample/dense_keys_8, ParseExample/Const, ParseExample/Const_1, ParseExample/Const_2, ParseExample/Const_3, ParseExample/Const_4, ParseExample/Const_5, ParseExample/Const_6, ParseExample/Const_7, ParseExample/Const_8)]]
<..>
</code></pre>
|
<p>There are likely two problems.</p>
<p>First, it looks as though the graph that you are exporting is expecting tf.Example protos as input, i.e. has a parse_example(...) op in it. The Boston sample does not appear to be adding that op, so I suspect that is part of your modifications.</p>
<p>Before showing the code you want for the input_fn, we need to talk about the second problem: versioning. Estimators existed in previous versions of TensorFlow under tensorflow.contrib. However, various parts have migrated into tensorflow.estimator with successive TensorFlow versions and the APIs have changed as they've moved.</p>
<p>CloudML Engine currently (as of 07 Sep 2017) only supports TF 1.0 and 1.2, so I'll provide a solution that works with 1.2. This is based on the <a href="https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/census/estimator/trainer" rel="nofollow noreferrer">census sample</a>. This is the input_fn you need in order to use CSV data, although I generally recommend exporting models that are independent of input format:</p>
<pre><code># Provides the default values (and hence the dtypes) for the various columns.
FEATURE_DEFAULTS = [[0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0], [0.0]]

def predict_input_fn(rows_string_tensor):
  # Takes a rank-1 tensor and converts it into a rank-2 tensor.
  # Example: if the data is ['csv,line,1', 'csv,line,2', ..] it becomes
  # [['csv,line,1'], ['csv,line,2']], which after parsing results in a
  # tuple of tensors: [['csv'], ['csv']], [['line'], ['line']], [[1], [2]]
  row_columns = tf.expand_dims(rows_string_tensor, -1)
  columns = tf.decode_csv(row_columns, record_defaults=FEATURE_DEFAULTS)
  # FEATURES is assumed to be your list of the nine column names, in CSV order.
  features = dict(zip(FEATURES, columns))
  return tf.contrib.learn.InputFnOps(features, None, {'csv_row': rows_string_tensor})
</code></pre>
<p>And you'll need an export strategy like this:</p>
<pre><code>saved_model_export_utils.make_export_strategy(
predict_input_fn,
exports_to_keep=1,
default_output_alternative_key=None,
)
</code></pre>
<p>which you'll pass as a list of size 1 to the constructor of <a href="https://github.com/tensorflow/tensorflow/blob/18f36927160d05b941c056f10dc7f9aecaa05e23/tensorflow/contrib/learn/python/learn/experiment.py#L141" rel="nofollow noreferrer"><code>tf.contrib.learn.Experiment</code></a>.</p>
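<p>For concreteness, a sketch of that wiring (assuming <code>saved_model_export_utils</code> is imported as in the census sample; the estimator and train/eval input_fn names are placeholders for your own):</p>
<pre><code>export_strategy = saved_model_export_utils.make_export_strategy(
    predict_input_fn,
    exports_to_keep=1,
    default_output_alternative_key=None,
)

experiment = tf.contrib.learn.Experiment(
    estimator=estimator,                  # e.g. your DNNRegressor
    train_input_fn=train_input_fn,
    eval_input_fn=eval_input_fn,
    export_strategies=[export_strategy],  # list of size 1
)
</code></pre>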
|
tensorflow|gcloud|google-cloud-ml-engine
| 0
|
8,593
| 22,980,487
|
Why is the mean smaller than the minimum, and why does this change with 64-bit floats?
|
<p>I have an input array, which is a masked array.<br>
When I check the mean, I get a nonsensical number: less than the reported minimum value!</p>
<p>So, raw array: <code>numpy.mean(A) < numpy.min(A)</code>. Note <code>A.dtype</code> returns <code>float32</code>.</p>
<p>FIX: <code>A3 = A.astype(float)</code>. A3 is still a masked array, but now the mean lies between the minimum and the maximum, so I have some faith it's correct! Now for some reason <code>A3.dtype</code> is <code>float64</code>. Why? Why did that change things, and why is the mean correct at 64-bit and wildly incorrect at 32-bit?</p>
<p>Can anyone shed any light on why I <em>needed</em> to recast the array to accurately calculate the mean? (with or without numpy, it turns out).</p>
<p>EDIT: I'm using a 64-bit system, so yes, that's why recasting changed it to 64-bit. It turns out I didn't have this problem if I subsetted the data (extracting from netCDF input using <code>netCDF4 Dataset</code>); smaller arrays did not produce this problem, so it's caused by overflow, and switching to 64-bit prevented it.<br>
So I'm still not clear on why it would have initially loaded as float32, but I guess it aims to conserve space even on a 64-bit system. The array itself is <code>1872x128x256</code>, with non-masked values around 300, which it turns out is enough to cause overflow :)</p>
|
<p>If you're working with large arrays, be aware of potential overflow problems!<br>
Changing from 32-bit to 64-bit floats in this instance avoids an (unflagged, as far as I can tell) overflow that led to the anomalous <code>mean</code> calculation.</p>
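<p>A minimal sketch of the effect (the shape and fill value mirror the description above; on recent NumPy versions pairwise summation may hide the float32 error, so the exact numbers will vary):</p>
<pre><code>import numpy as np

a = np.full((1872, 128, 256), 300.0, dtype=np.float32)   # roughly 250 MB

print(a.mean())                      # float32 accumulator: can drift badly
print(a.astype(np.float64).mean())   # 300.0
print(a.mean(dtype=np.float64))      # 300.0, without copying the whole array
</code></pre>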
|
python|arrays|numpy|floating-accuracy|floating-point-conversion
| 0
|
8,594
| 35,422,583
|
Iterate through rows of grouped pandas dataframe to create new columns
|
<p>I'm new to Python and am trying to get to grips with Pandas for data analysis.</p>
<p>I wondered if anyone can help me loop through rows of grouped data in a dataframe to create new variables.</p>
<p>Suppose I have a dataframe called data, that looks like this:</p>
<pre>
+----+-----------+--------+
| ID | YearMonth | Status |
+----+-----------+--------+
| 1 | 201506 | 0 |
| 1 | 201507 | 0 |
| 1 | 201508 | 0 |
| 1 | 201509 | 0 |
| 1 | 201510 | 0 |
| 2 | 201506 | 0 |
| 2 | 201507 | 1 |
| 2 | 201508 | 2 |
| 2 | 201509 | 3 |
| 2 | 201510 | 0 |
| 3 | 201506 | 0 |
| 3 | 201507 | 1 |
| 3 | 201508 | 2 |
| 3 | 201509 | 3 |
| 3 | 201510 | 4 |
+----+-----------+--------+
</pre>
<p>There are multiple rows for each ID, YearMonth is of the form yyyymm, and Status is the status at each YearMonth (it takes values 0 to 6).</p>
<p>I have managed to create columns showing the cumulative maximum status, and an Ever3 indicator (showing whether an ID has ever had a status of 3 or more, regardless of current status), like this:</p>
<pre><code>data['Max_Stat'] = data.groupby(['ID'])['Status'].cummax()
data['Ever3'] = np.where(data['Max_Stat'] >= 3, 1, 0)
</code></pre>
<p>What I would also like to do, is create the other columns to create metrics such as the number of times something has happened, or how long since an event. For example</p>
<blockquote>
<ul>
<li><p>Times3Plus : To show how many times the ID has had a status 3 or more at that point in time</p></li>
<li><p>Into3 : Set to Y the first time the ID has a status of 3 or more (not for subsequent times)</p></li>
</ul>
</blockquote>
<pre>
+----+-----------+--------+----------+-------+------------+-------+
| ID | YearMonth | Status | Max_Stat | Ever3 | Times3Plus | Into3 |
+----+-----------+--------+----------+-------+------------+-------+
| 1 | 201506 | 0 | 0 | 0 | 0 | |
| 1 | 201507 | 0 | 0 | 0 | 0 | |
| 1 | 201508 | 0 | 0 | 0 | 0 | |
| 1 | 201509 | 0 | 0 | 0 | 0 | |
| 1 | 201510 | 0 | 0 | 0 | 0 | |
| 2 | 201506 | 0 | 0 | 0 | 0 | |
| 2 | 201507 | 1 | 1 | 0 | 0 | |
| 2 | 201508 | 2 | 2 | 0 | 0 | |
| 2 | 201509 | 3 | 3 | 1 | 1 | Y |
| 2 | 201510 | 0 | 3 | 1 | 1 | |
| 3 | 201506 | 0 | 0 | 0 | 0 | |
| 3 | 201507 | 1 | 1 | 0 | 0 | |
| 3 | 201508 | 2 | 2 | 0 | 0 | |
| 3 | 201509 | 3 | 3 | 1 | 1 | Y |
| 3 | 201510 | 4 | 4 | 1 | 2 | |
+----+-----------+--------+----------+-------+------------+-------+
</pre>
<p>I can do this quite easily in SAS, using BY and RETAIN statements, but can't work out how to replicate this in Python.</p>
|
<p>I have managed to do this without iterating over each row, as I'm not sure what I was originally trying to do is possible. I had wanted to set up counters or indicators at group level, as is possible in SAS, and modify these row by row, e.g. something like:</p>
<pre><code>Times3Plus = 0
if row['Status'] >= 3:
    Times3Plus += 1
return Times3Plus
</code></pre>
<p>In the end, I created a binary 3Plus indicator</p>
<pre><code>data['3Plus'] = np.where(data['Status'] >= 3, 1, 0)
</code></pre>
<p>Then used groupby to summarise these to create Times3Plus at group level</p>
<pre><code>data['Times3Plus'] = data.groupby(['ID'])['3Plus'].cumsum()
</code></pre>
<p>Into3 could then be populated using a function</p>
<pre><code>def into3(row):
    if row['3Plus'] == 1 and row['Times3Plus'] == 1:  # i.e. it is the first time
        return 1

data['Into3'] = data.apply(into3, axis=1)
</code></pre>
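<p>For what it's worth, the <code>apply</code> step can also be expressed without a Python-level function (a sketch, assuming the <code>3Plus</code> and <code>Times3Plus</code> columns created above):</p>
<pre><code>data['Into3'] = np.where((data['3Plus'] == 1) & (data['Times3Plus'] == 1), 'Y', '')
</code></pre>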
|
python|pandas
| 1
|
8,595
| 28,840,121
|
Vectorizing a series of CDF samples in Python with NumPy
|
<p>I am in the process of writing a basic financial program in Python where daily expenses are read in as a table and are turned into a PDF (Probability Density Function) and eventually a CDF (Cumulative Distribution Function) that ranges from 0 to 1, using the built-in histogram capability of NumPy. I am trying to randomly sample a daily expense by comparing a random number ranging from 0 to 1 with the CDF array and an array of the CDF bin centers, and using the interp1d functionality of SciPy to determine the interpolated value. I have successfully implemented this algorithm using a for loop, but it is way too slow, and I am trying to convert it to a vectorized form. I am including an example of the code that works with a for loop and my attempt so far at vectorizing the algorithm. I would greatly appreciate any advice on how I can make the vectorized version work and increase the execution speed of the code.</p>
<p>Sample input file:</p>
<pre><code>12.00 March 01, 2014
0.00 March 02, 2014
0.00 March 03, 2014
0.00 March 04, 2014
0.00 March 05, 2014
0.00 March 06, 2014
44.50 March 07, 2014
0.00 March 08, 2014
346.55 March 09, 2014
168.18 March 10, 2014
140.82 March 11, 2014
10.83 March 12, 2014
0.00 March 13, 2014
0.00 March 14, 2014
174.00 March 15, 2014
0.00 March 16, 2014
0.00 March 17, 2014
266.53 March 18, 2014
0.00 March 19, 2014
110.00 March 20, 2014
0.00 March 21, 2014
0.00 March 22, 2014
44.50 March 23, 2014
</code></pre>
<p>for loop version of code (that works but is too slow)</p>
<pre><code>#!/usr/bin/python
import pandas as pd
import numpy as np
import random
import itertools
import scipy.interpolate
def Linear_Interpolation(rand,Array,Array_Center):
if(rand < Array[0]):
y_interp = scipy.interpolate.interp1d((0,Array[0]),(0,Array_Center[0]))
else:
y_interp = scipy.interpolate.interp1d(Array,Array_Center)
final_value = y_interp(rand)
return (final_value)
#--------- Main Program --------------------
# - Reads the file in and transforms the first column of float variables into
# an array titled MISC_DATA
File1 = '../../Input_Files/Histograms/Static/Misc.txt'
MISC_DATA = pd.read_table(File1,header=None,names = ['expense','month','day','year'],sep = '\s+')
# Creates the PDF bin heights and edges
Misc_hist, Misc_bin_edges = np.histogram(MISC_DATA['expense'],bins=60,normed=True)
# Creates the CDF bin heights
Misc = np.cumsum(Misc_hist*np.diff(Misc_bin_edges))
# Creates an array of the bin center points along the x axis
Misc_Center = (Misc_bin_edges[:-1] + Misc_bin_edges[1:])/2
iterator = range(0,100)
for cycle in iterator:
MISC_EXPENSE = Linear_Interpolation(random.random(),Misc,Misc_Center)
print MISC_EXPENSE
</code></pre>
<p>I am trying to vectorize the for loop in the manner shown below and convert the variable MISC_EXPENSE from a scalar into an array, but it is not working. It tells me that the truth value of an array with more than one element is ambiguous. I think it is referring to the fact that the array of random variables 'rand_var' has a different dimension than the arrays 'Misc' and 'Misc_Center'. Any suggestions are appreciated.</p>
<pre><code>rand_var = np.random.rand(100)
MISC_EXPENSE = Linear_Interpolation(rand_var,Misc,Misc_Center)
</code></pre>
|
<p>If I understood your example correctly, the code creates one interpolation object per random number, which is slow. However, <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html" rel="nofollow">interp1d</a> can take a vector of values to be interpolated. Also, the starting zero should be included in the CDF in any case, I assume:</p>
<pre><code>y_interp = scipy.interpolate.interp1d(
np.concatenate((np.array([0]), Misc)),
np.concatenate((np.array([0]), Misc_Center))
)
new_vals = y_interp(np.random.rand(100))
</code></pre>
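<p>Alternatively, since this is plain 1-D linear interpolation, <code>numpy.interp</code> can be applied directly to the whole vector of random numbers without constructing an <code>interp1d</code> object at all (a sketch, reusing the <code>Misc</code> and <code>Misc_Center</code> arrays from the question; results should match wherever the CDF is strictly increasing):</p>
<pre><code>new_vals = np.interp(np.random.rand(100),
                     np.concatenate(([0.0], Misc)),
                     np.concatenate(([0.0], Misc_Center)))
</code></pre>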
|
python|numpy|scipy
| 1
|
8,596
| 28,713,281
|
Binarize data frame values based upon a column value
|
<p>I have a dataframe that looks like this</p>
<pre><code>+---------+-------------+------------+------------+
| hello | val1 | val2 | val3 |
+---------+-------------+------------+------------+
| 1.024 | -10.764779 | -8.230176 | -5.689302 |
| 16 | -15.772744 | -10.794013 | -5.79148 |
| 1.024 | -18.4738 | -13.935423 | -9.392713 |
| 0.064 | -11.642506 | -9.711523 | -7.772969 |
| 1.024 | -4.185368 | -2.094441 | 0.048861 |
+---------+-------------+------------+------------+
</code></pre>
<p>Let this dataframe be <code>df</code>. This is the operation I essentially would like to do</p>
<pre><code>values = ["val1", "val2", "val3"]
for ind in df.index:
hello = df.loc[ind, "hello"]
for name in values:
df.loc[ind, name] = (df.loc[ind, name] >= hello)
</code></pre>
<p>Essentially for every row <code>i</code> and column <code>j</code>, if <code>val_j</code> is less than <code>hello_i</code>, then <code>val_j = False</code>, otherwise <code>val_j = True</code></p>
<p>This is obviously not vectorized, and with my giant version of this table, my computer is having trouble performing these alterations.</p>
<p>What's the vectorized version of the operation above?</p>
|
<p>It would be quicker to test the entire series against the hello series:</p>
<pre><code>In [268]:
val_cols = [col for col in df if 'val' in col]
for col in val_cols:
df[col] = df[col] >= df['hello']
df
Out[268]:
hello val1 val2 val3
0 1.024 False False False
1 16.000 False False False
2 1.024 False False False
3 0.064 False False False
4 1.024 False False False
</code></pre>
<p>If we compare the performance:</p>
<pre><code>In [273]:
%%timeit
val_cols = [col for col in df if 'val' in col]
for col in val_cols:
df[col] = df[col] >= df['hello']
df
1000 loops, best of 3: 630 µs per loop
In [275]:
%%timeit
column_names = [name for name in df.columns if "val" in name]
binarized = df.apply(lambda row : row[column_names] >= row["hello"], axis=1)
df[binarized.columns] = binarized
df
100 loops, best of 3: 6.17 ms per loop
</code></pre>
<p>We see that my method is about 10x faster because it is vectorised; your method essentially loops over each row.</p>
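<p>For completeness, the per-column loop can be collapsed into a single broadcasted comparison as well (a sketch, assuming the same <code>val_cols</code> list as above):</p>
<pre><code>df[val_cols] = df[val_cols].ge(df['hello'], axis=0)
</code></pre>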
|
python|pandas
| 1
|
8,597
| 33,157,297
|
How can I efficiently translate "terrain" numpy array into networkx graph?
|
<p>I have a 2d boolean numpy array A. Each element is a pixel of the map, with True corresponding to terrain and False corresponding to water. Say I want to check how many different continents I have, so I want to use networkx.number_connected_components(G).</p>
<p>I can build the graph G manually, iterating over the elements of array A and checking whether pieces of land are connected or not (pixels are considered connected only if they share a common edge, so each pixel of land can be connected to at most 4 others, and no diagonal connections are allowed).</p>
<p>But this strikes me as inefficient and unpythonic. How can I do better?</p>
|
<p>To identify and count the number of connected regions, you can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.measurements.label.html" rel="nofollow"><code>scipy.ndimage.measurements.label</code></a> (so you don't need networkx). For example,</p>
<pre><code>In [73]: x
Out[73]:
array([[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0]])
In [74]: from scipy.ndimage.measurements import label
In [75]: labeled_x, num_labels = label(x)
In [76]: num_labels
Out[76]: 8
In [77]: labeled_x
Out[77]:
array([[1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 1, 0, 2, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 2, 0, 0, 0],
[0, 0, 0, 0, 3, 0, 0, 0, 2, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 2, 2, 0, 0, 0],
[0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 5],
[0, 0, 0, 0, 4, 4, 4, 0, 0, 0, 0, 5],
[0, 0, 0, 0, 0, 4, 0, 0, 6, 6, 0, 5],
[0, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 5],
[0, 0, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 8, 0, 0, 0, 0, 0, 0]], dtype=int32)
</code></pre>
<p>(In the example, <code>x</code> is an array of 0s and 1s, but <code>label</code> also accepts a boolean array.)</p>
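<p>If you do specifically want a <code>networkx</code> graph (for example, to run other graph algorithms on the continents afterwards), a minimal sketch using <code>grid_2d_graph</code> follows; the small example array is my own:</p>
<pre><code>import networkx as nx
import numpy as np

A = np.array([[True,  True,  False],
              [False, True,  False],
              [False, False, True ]])

# Build the full 4-connected grid graph over the map, then keep only land pixels.
G = nx.grid_2d_graph(*A.shape)
land = [(int(i), int(j)) for i, j in np.argwhere(A)]
G = G.subgraph(land)

print(nx.number_connected_components(G))  # 2 "continents" in this toy map
</code></pre>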
|
python|arrays|numpy|data-structures|graph
| 2
|
8,598
| 33,174,848
|
pandas - Use datetime.time objects as a dtype
|
<p>I am reading an Excel file that has the following structure:</p>
<pre><code> A B
2015-09-05 15:05:32
2015-09-05 19:05:02
</code></pre>
<p>I am reading this file using</p>
<pre><code>df = pd.ExcelFile(filename).parse(..)
</code></pre>
<p>When I look at the <code>dtype</code>, of this DataFrame, I can see that the dates are parsed properly as <code>datetime64</code> objects, but the times are not:</p>
<pre><code>>>> df.dtypes
A datetime64[ns]
B object
</code></pre>
<p>What's odd is that, when I look at the contents of column <code>B</code>, I can see that the values are all <code>datetime.time</code> objects:</p>
<pre><code>[s for s in df['B'].tolist() if type(s) is not datetime.time]
# There are no values that are *not* datetime.time objects
</code></pre>
<p>I'd like to convert this <code>B</code> column to something that I can use more readily. For instance, I'd like to use a <code>MultiIndex</code> with first the day, and then the time (so that I can group and aggregate). Or I'd like to join the two so that I have a single column that's the full date.</p>
<p>But at this point, I'm stuck. I tried converting them to <code>datetime</code>:</p>
<pre><code>df['B'] = df['B'].astype('datetime64')
ValueError: Could not convert object to NumPy datetime
</code></pre>
<p>Any ideas?</p>
|
<p>If you just want to join the two, you could join them as strings (illustrated here with a small frame where both columns are strings):</p>
<pre><code>df = pd.DataFrame({ 'A' : ['2015-09-05', '2015-09-05'], 'B': ['15:05:32', '19:05:02']})
pd.to_datetime(df.A + ' ' + df.B)
</code></pre>
<p>Or you could use <code>datetime.datetime.combine</code> on the original columns:</p>
<pre><code>import datetime
df.apply(lambda x: datetime.datetime.combine(x.A, x.B), axis=1)
</code></pre>
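<p>Either way, the day/time <code>MultiIndex</code> you mentioned is then straightforward. A sketch, assuming the original frame where <code>A</code> is <code>datetime64</code> and <code>B</code> holds <code>datetime.time</code> objects (<code>df2</code> is just an illustrative name):</p>
<pre><code>df2 = df.set_index(['A', 'B'])   # day first, then time
df2.groupby(level='A').size()    # e.g. count rows per day
</code></pre>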
|
python|datetime|pandas|time-series
| 0
|
8,599
| 66,358,850
|
Is it possible to vertically stack two pandas dataframes while maintaining different column names?
|
<p>Seems kinda simple, but I haven't been able to find a fix for this idea on Google, YouTube, or Stack Overflow.</p>
<p>Essentially, if one pandas df looks like this:</p>
<pre><code>A | B
-----
45| 98
</code></pre>
<p>And then another pandas df looks like this:</p>
<pre><code>X | Y
------
67 | 2
</code></pre>
<p>Is there a way to structure/combine the two together like this:</p>
<pre><code>df_merged = pd.DataFrame([df1,df2])
</code></pre>
<p>And generate a complete pandas df like:</p>
<pre><code> A | B
-----
45| 98
X | Y
------
67 | 2
</code></pre>
<p>Can a row be created in the middle to separate between the top df and bottom df?</p>
<p>Is it possible?</p>
<p>Thank you</p>
|
<p>A DataFrame is a table-like data structure. It can have multiple indexes, but what's inside is still something unified, meaning that viewing it as two different tables with individual headers is contrary to the idea of what it is.</p>
<p>That said, you can make a new upper row filled with NaNs, then make the next row duplicate the column names, and then concat your two tables; a sketch of that is below. It won't be exactly the same as having two tables, but it will sort of mimic it. However, I'd strongly discourage that: DataFrames are used for calculations, not for in-line visualization, and it sounds very much like you are trying to treat them as an Excel-like tool, which combines a viewing UI with calculation capabilities. When you need to visualize a DataFrame, you need to plot it.</p>
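<p>A minimal sketch of that hack, assuming two small frames matching the question's tables (the variable names are my own):</p>
<pre><code>import numpy as np
import pandas as pd

df1 = pd.DataFrame({'A': [45], 'B': [98]})
df2 = pd.DataFrame({'X': [67], 'Y': [2]})

# A NaN separator row, then a row carrying df2's headers as data,
# then df2 itself renamed to df1's columns so everything lines up.
nan_row = pd.DataFrame([[np.nan] * len(df1.columns)], columns=df1.columns)
header_row = pd.DataFrame([list(df2.columns)], columns=df1.columns)
df2_renamed = df2.rename(columns=dict(zip(df2.columns, df1.columns)))

stacked = pd.concat([df1, nan_row, header_row, df2_renamed], ignore_index=True)
print(stacked)
</code></pre>
<p>Again, this is purely a display trick; the stacked frame's columns no longer mean anything for calculation, which is exactly why plotting (or simply keeping two frames) is the better option.</p>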
|
python|pandas
| 0
|