| Unnamed: 0 (int64, 0–378k) | id (int64, 49.9k–73.8M) | title (string, 15–150 chars) | question (string, 37–64.2k chars) | answer (string, 37–44.1k chars) | tags (string, 5–106 chars) | score (int64, -10–5.87k) |
|---|---|---|---|---|---|---|
375,200
| 53,885,404
|
Python Setting Values Without Loop
|
<p>I have a time series dataframe containing 1s and 0s (true/false). I wrote a function that loops through all rows with a value of 1 in them. Given a user-defined integer parameter called <code>n_hold</code>, I set the n rows following the initial row to 1 as well.</p>
<p>For example, in the dataframe below the loop reaches row <code>2016-08-05</code>. If <code>n_hold = 2</code>, then I set both <code>2016-08-08</code> and <code>2016-08-09</code> to 1 too:</p>
<pre><code>2016-08-03 0
2016-08-04 0
2016-08-05 1
2016-08-08 0
2016-08-09 0
2016-08-10 0
</code></pre>
<p>The resulting <code>df</code> is then:</p>
<pre><code>2016-08-03 0
2016-08-04 0
2016-08-05 1
2016-08-08 1
2016-08-09 1
2016-08-10 0
</code></pre>
<p>The problem is that this is run tens of thousands of times, and my current solution, which loops over the rows containing ones and subsets, is way too slow. I was wondering if there are any solutions to the above problem that are really fast.</p>
<p>Here is my (slow) solution, <code>x</code> is the initial signal dataframe:</p>
<pre><code>n_hold = 2
entry_sig_diff = x.diff()
entry_sig_dt = entry_sig_diff[entry_sig_diff == 1].index
final_signal = x * 0
for i in range(0, len(entry_sig_dt)):
    row_idx = entry_sig_diff.index.get_loc(entry_sig_dt[i])
    if (row_idx + n_hold) >= len(x):
        break
    final_signal[row_idx:(row_idx + n_hold + 1)] = 1
</code></pre>
|
<p>Completely changed answer, because consecutive <code>1</code> values need to be handled differently:</p>
<p><strong>Explanation</strong>:</p>
<p>The solution first keeps only the first <code>1</code> of each run by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.where.html" rel="nofollow noreferrer"><code>where</code></a> with a chained boolean mask comparing with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.ne.html" rel="nofollow noreferrer"><code>ne</code></a> (not equal, <code>!=</code>) against <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.shift.html" rel="nofollow noreferrer"><code>shift</code></a>, setting everything else to <code>NaN</code>; it then forward fills by <code>ffill</code> with the <code>limit</code> parameter and finally replaces <code>NaN</code> with <code>0</code>:</p>
<pre><code>n_hold = 2
s = x.where(x.ne(x.shift()) & (x == 1)).ffill(limit=n_hold).fillna(0, downcast='int')
</code></pre>
<p>Timings and comparing outputs:</p>
<pre><code>np.random.seed(123)
x = pd.Series(np.random.choice([0,1], p=(.8,.2), size=1000))
x1 = x.copy()
#print (x)
def orig(x):
    n_hold = 2
    entry_sig_diff = x.diff()
    entry_sig_dt = entry_sig_diff[entry_sig_diff == 1].index
    final_signal = x * 0
    for i in range(0, len(entry_sig_dt)):
        row_idx = entry_sig_diff.index.get_loc(entry_sig_dt[i])
        if (row_idx + n_hold) >= len(x):
            break
        final_signal[row_idx:(row_idx + n_hold + 1)] = 1
    return final_signal
#print (orig(x))
</code></pre>
<hr>
<pre><code>n_hold = 2
s = x.where(x.ne(x.shift()) & (x == 1)).ffill(limit=n_hold).fillna(0, downcast='int')
#print (s)
df = pd.concat([x,orig(x1), s], axis=1, keys=('input', 'orig', 'new'))
print (df.head(20))
input orig new
0 0 0 0
1 0 0 0
2 0 0 0
3 0 0 0
4 0 0 0
5 0 0 0
6 1 1 1
7 0 1 1
8 0 1 1
9 0 0 0
10 0 0 0
11 0 0 0
12 0 0 0
13 0 0 0
14 0 0 0
15 0 0 0
16 0 0 0
17 0 0 0
18 0 0 0
19 0 0 0
#check outputs
#print (s.values == orig(x).values)
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>%timeit (orig(x))
24.8 ms ± 653 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit x.where(x.ne(x.shift()) & (x == 1)).ffill(limit=n_hold).fillna(0, downcast='int')
1.36 ms ± 12.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
</code></pre>
|
python|pandas|dataframe
| 2
|
375,201
| 53,819,651
|
How to normalize column names with '.1' in the column name and not drop any other characters?
|
<p>I have a df that looks like this:</p>
<pre><code>col1_test col1_test.1
abc NaN
</code></pre>
<p>How do I drop only the <code>.1</code> while keeping all the other characters in the column name? </p>
<p>current code to drop <code>.1</code>:</p>
<pre><code>df.columns = df.columns.str.extract(r'\.?', expand=False)
</code></pre>
<p>but this is dropping the other characters in the column name like underscore. </p>
<p>New df:</p>
<pre><code>col1_test col1_test
abc NaN
</code></pre>
<p>Once this part is set, I will merge the columns using this:</p>
<pre><code>df = df.groupby(level=0, axis=1).first()
</code></pre>
|
<p>This is not recommended because it becomes difficult to index specific columns when there are duplicate headers. </p>
<p>A better solution, however, since you are trying to perform a <code>groupby</code>, would be to pass a callable:</p>
<pre><code>df
col1_test col1_test.1
0 abc NaN
df.groupby(by=lambda x: x.rsplit('.', 1)[0], axis=1).first()
col1_test
0 abc
</code></pre>
<hr>
<p>For reference, you'd remove column suffixes with <code>str.replace</code>:</p>
<pre><code>df.columns = df.columns.str.replace(r'\.\d+$', '')
</code></pre>
<p>You can also use <code>str.rsplit</code>:</p>
<pre><code>df.columns = df.columns.str.rsplit('.', 1).str[0]
df
col1_test col1_test
0 abc NaN
</code></pre>
|
python|python-3.x|pandas|dataframe
| 2
|
375,202
| 54,181,590
|
Pandas - Comparing two Dataframe and finding difference
|
<p>I have two Dataframes with some sales data as below:</p>
<p>df1:</p>
<pre><code>prod_id,sale_date,new
101,2019-01-01,101_2019-01-01
101,2019-01-02,101_2019-01-02
101,2019-01-03,101_2019-01-03
101,2019-01-04,101_2019-01-04
</code></pre>
<p>df2:</p>
<pre><code>prod_id,sale_date,new
101,2019-01-01,101_2019-01-01
101,2019-01-04,101_2019-01-04
</code></pre>
<p>I am trying to compare the above two Dataframes to find dates which are missing in df2 as compared to df1.</p>
<p>I have tried to do the below:</p>
<pre><code>final_1 = df1.merge(df2, on='new', how='outer')
</code></pre>
<p>This returns back the below Dataframe:</p>
<pre><code>prod_id_x,sale_date_x,new,prod_id_y,sale_date_y
101,2019-01-01,101_2019-01-01,,
101,2019-01-02,101_2019-01-02,,
101,2019-01-03,101_2019-01-03,,
101,2019-01-04,101_2019-01-04,,
,,101_2019-01-01,101,2019-01-01
,,101_2019-01-04,101,2019-01-04
</code></pre>
<p>This is not letting me compare these 2 Dataframes. </p>
<p>Expected Output:</p>
<pre><code>prod_id_x,sale_date_x,new
101,2019-01-02,101_2019-01-02
101,2019-01-03,101_2019-01-03
</code></pre>
|
<p>You can use <code>drop_duplicates</code></p>
<pre><code>pd.concat([df1,df2]).drop_duplicates(keep=False)
</code></pre>
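<p>For reference, a minimal sketch applying this to the frames above (assuming <code>df2</code> also has the <code>new</code> column, as its data rows suggest): with <code>keep=False</code>, every row that appears in both frames is dropped, leaving exactly the rows missing from <code>df2</code>:</p>
<pre><code>missing = pd.concat([df1, df2]).drop_duplicates(keep=False)
print(missing)
#   prod_id   sale_date             new
# 1     101  2019-01-02  101_2019-01-02
# 2     101  2019-01-03  101_2019-01-03
</code></pre>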
|
python|pandas
| 0
|
375,203
| 54,145,967
|
Not able to plot the heatmap of one column with respect to others
|
<p>With the help of the question: <a href="https://stackoverflow.com/questions/39409866/correlation-heatmap">Correlation heatmap</a>, I have tried the following: </p>
<pre><code>import pandas
import seaborn as sns

dataframe = pandas.read_csv("training.csv", header=0,index_col=0)
for a in list(['output']):
    for b in list(dataframe.columns.values):
        corr.loc[a, b] = dataframe.corr().loc[a, b]
        print(b)
print(corr)
sns.heatmap(corr['output'])
</code></pre>
<p>I got the following error: </p>
<pre><code>IndexError: Inconsistent shape between the condition and the input (got (8, 1) and (8,))
</code></pre>
<p>I do not want the full correlation heatmap of all columns against all columns. I only want the correlation of one column with respect to the others. </p>
<p>Kindly, let me know what I am missing.</p>
|
<p>You are trying to build a heatmap from <code>pd.Series</code> - this does not work. <code>pd.Series</code> is a 1D object, while <code>seaborn.heatmap()</code> is commonly used for 2D data structures. </p>
<p><code>sns.heatmap(corr[['output']])</code> - will do the job</p>
<pre><code>df = pd.DataFrame(data=[[1,2,3],[5,4,3],[5,4,12]],index=[0,1,2],columns=['A','B','C'])
df.corr().loc['A',:]
</code></pre>
<pre><code>Out[13]:
A    1.0
B    1.0
C    0.5
Name: A, dtype: float64
</code></pre>
<pre><code>sns.heatmap(df.corr().loc[['A'],:])
</code></pre>
<p><a href="https://i.stack.imgur.com/rpKZ2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rpKZ2.png" alt="enter image description here"></a></p>
|
python|python-3.x|pandas|heatmap|correlation
| 7
|
375,204
| 54,214,017
|
Converting all non-black pixels into one colour doesn't produce expected output
|
<p>I am trying to select non-black pixel and then colour them to black and the black pixels to white. I used a <a href="https://stackoverflow.com/questions/52735231/how-to-select-all-non-black-pixels-in-a-numpy-array">solution</a> provided on Stack Overflow but so far it isn't working for me.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
image = plt.imread('Perforated_carbon/faltu.png')
plt.imshow(image)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/eYhPq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eYhPq.png" alt="enter image description here"></a></p>
<pre><code>image_copy = image.copy()
black_pixels_mask = np.all(image == [0, 0, 0], axis=-1)
non_black_pixels_mask = ~black_pixels_mask
# or non_black_pixels_mask = np.any(image != [0, 0, 0], axis=-1)
image_copy[black_pixels_mask] = [255, 255, 255]
image_copy[non_black_pixels_mask] = [0, 0, 0]
plt.imshow(image_copy)
plt.show()
</code></pre>
<p><strong>This is the image I am getting currently</strong></p>
<p><a href="https://i.stack.imgur.com/CfgdJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CfgdJ.png" alt="enter image description here"></a></p>
<p><strong>What I would ideally like is this</strong></p>
<p><a href="https://i.stack.imgur.com/wXoMp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wXoMp.png" alt="enter image description here"></a></p>
<p>Additional information: </p>
<pre><code>>>> image.shape
(256, 192, 3)
>>> image.dtype
dtype('float32')
>>> import matplotlib; print(matplotlib.__version__)
2.0.0
</code></pre>
|
<p>How about changing</p>
<pre><code>black_pixels_mask = np.all(image == [0, 0, 0], axis=-1);
</code></pre>
<p>to</p>
<pre><code>black_pixels_mask = np.all(image == [0, 0, 0], axis=2)
</code></pre>
|
python|numpy|matplotlib|image-processing
| 0
|
375,205
| 54,043,484
|
Python 3.x: Create dataframe from two dictionaries
|
<p>I'm working on Python 3.x. What is to be achieved is: merge dictionaries based on keys and form a dataframe. This should make it clear:</p>
<p>What I have:</p>
<pre><code>import numpy as np
import pandas as pd
d1 = {(1, "Autumn"): np.array([2.5, 4.5, 7.5, 9.5]), (1, "Spring"): np.array([10.5, 11.7, 12.3, 15.0])}
d2 = {(1, "Autumn"): np.array([10.2, 13.3, 15.7, 18.8]), (1, "Spring"): np.array([15.6, 20, 23, 27])}
</code></pre>
<p>What I want to achieve:</p>
<pre><code>d3 = {(1, "Autumn"): pd.DataFrame([[2.5, 10.2], [4.5, 13.3], [7.5, 15.7], [9.5, 18.8]],
columns = ["d1", "d2"]), (1, "Spring"): pd.DataFrame([[10.5, 15.6], [11.7, 20],
[12.3, 23], [15.0, 27]], columns = ["d1", "d2"])}
</code></pre>
<p>P. S.: I'm actually working on a <code>RandomForestRegressor</code> example. The above dictionaries are my X and y values after the train and test data splits. What I'm trying to achieve is to get X and y side-by-side in a dataframe for plots, as in the query above. Both dictionaries have the same size, the same keys, and the same number of values for each key.</p>
|
<p>Since all keys are present in both dictionaries (according to your comment), you could iterate through the keys of one dictionary and make a dataframe from each dictionary entry for each key:</p>
<pre><code>d3 = dict()
for k in d1.keys():
    d3[k] = pd.DataFrame(np.array([d1[k],d2[k]]).T, columns=["d1","d2"])
</code></pre>
<p>Output:</p>
<pre><code>{(1, 'Autumn'):
d1 d2
0 2.5 10.2
1 4.5 13.3
2 7.5 15.7
3 9.5 18.8,
(1, 'Spring'):
d1 d2
0 10.5 15.6
1 11.7 20.0
2 12.3 23.0
3 15.0 27.0}
</code></pre>
|
python|python-3.x|pandas|dictionary
| 1
|
375,206
| 53,884,911
|
pd.read_sql_query single / double quotes formatting
|
<p>I'm using Python(Jupyter Notebook) and Postgres Database and am struggling to populate a Pandas dataframe.</p>
<p>The sql code runs fine using the query builder in pgAdmin4 which is</p>
<pre><code>SELECT "Date","Close" FROM test WHERE "Symbol" = 'AA'
</code></pre>
<p>However, I can't get this to work in my Jupyter notebook. I assume it's something to do with single quotes and double quotes, but I can't figure out what to change and have hit a wall. In the notebook I'm trying:</p>
<pre><code>df = pd.read_sql_query('SELECT "Date","Close" FROM public.test WHERE "Symbol" = AA', conn)
</code></pre>
<p>but I don't know what quotes to use around the AA (data) part of the query; if I use double quotes pandas thinks AA is a column, and if I use single quotes it breaks the string.</p>
<p>I'd really appreciate it if someone could point me in the right direction.</p>
<p>Thanks</p>
|
<p>This will work:</p>
<pre><code>df = pd.read_sql_query("SELECT Date,Close FROM public.test WHERE Symbol = 'AA'", conn)
</code></pre>
<p>SQL string literals must use single quotes, but column names don't need quotes at all.</p>
<p>If you <em>really</em> need double quotes inside sql query, then just make sure you use triple outer quotes on python string, like so:</p>
<pre><code>df = pd.read_sql_query("""SELECT "Date" FROM public.test WHERE Symbol = 'AA'""", conn)
</code></pre>
|
python|sql|postgresql|pandas|quotes
| 2
|
375,207
| 54,179,450
|
AttributeError: type object 'numpy.ndarray' has no attribute '__array_function__' on import numpy 1.15.4
|
<p>Here the minimal code not working:</p>
<pre><code>import numpy
</code></pre>
<p>Here the stack of error</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/samuele/.local/lib/python3.6/site-packages/numpy/__init__.py", line 142, in <module>
from . import core
File "/home/samuele/.local/lib/python3.6/site-packages/numpy/core/__init__.py", line 59, in <module>
from . import numeric
File "/home/samuele/.local/lib/python3.6/site-packages/numpy/core/numeric.py", line 3093, in <module>
from . import fromnumeric
File "/home/samuele/.local/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 17, in <module>
from . import _methods
File "/home/samuele/.local/lib/python3.6/site-packages/numpy/core/_methods.py", line 158, in <module>
_NDARRAY_ARRAY_FUNCTION = mu.ndarray.__array_function__
AttributeError: type object 'numpy.ndarray' has no attribute '__array_function__'
</code></pre>
<p>The version of <strong>numpy</strong> installed is <strong>1.15.4</strong>.</p>
<p>Here the list of packages installed (I don't know if this can be useful).</p>
<pre><code>blas 1.0 mkl
ca-certificates 2018.03.07 0
certifi 2018.11.29 py36_0
intel-openmp 2019.1 144
libedit 3.1.20170329 h6b74fdf_2
libffi 3.2.1 hd88cf55_4
libgcc-ng 8.2.0 hdf63c60_1
libgfortran-ng 7.3.0 hdf63c60_0
libstdcxx-ng 8.2.0 hdf63c60_1
mkl 2019.1 144
mkl_fft 1.0.10 py36ha843d7b_0
mkl_random 1.0.2 py36hd81dba3_0
ncurses 6.1 he6710b0_1
numpy 1.15.4 py36h7e9f1db_0
numpy-base 1.15.4 py36hde5b4d6_0
openssl 1.1.1a h7b6447c_0
pip 18.1 py36_0
python 3.6.8 h0371630_0
readline 7.0 h7b6447c_5
setuptools 40.6.3 py36_0
sqlite 3.26.0 h7b6447c_0
tk 8.6.8 hbc83047_0
wheel 0.32.3 py36_0
xz 5.2.4 h14c3975_4
zlib 1.2.11 h7b6447c_3
</code></pre>
<p>My python version is <strong>3.6.8</strong>.</p>
|
<p>Unfortunately, you have mixed two different installation systems. You need to clear everything that was installed when you did <code>pip3 install tensorflow</code>.</p>
<p>The easiest is to start from scratch, and only do <code>conda install tensorflow</code>.</p>
<p>The more complex version is to remove manually <code>tensorflow</code> and its dependencies and reinstall them through <code>conda</code>.</p>
<p>If you are using Anaconda, only install a package through <code>pip</code> if it's not available in the conda repositories.</p>
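<p>For reference, a sketch of the commands involved (assuming the conflicting packages were installed with <code>pip3</code> into the same Python 3.6 user environment):</p>
<pre><code>pip3 uninstall tensorflow numpy
conda install tensorflow
</code></pre>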
|
python-3.x|numpy|import|conda
| 2
|
375,208
| 54,232,801
|
Print Keras Kernel
|
<p>I have written a custom keras layer and basically set up a kernel that looks like this as an example:</p>
<pre><code>[[w1, 0, 0],
[w2, w3, 0],
[0, w4, w5]]
</code></pre>
<p>where w1,...w5 are trainable weights and the zero entries are not trainable.</p>
<p>Now, I want to confirm that everything is working correctly (i.e. whether the kernel still has the zero entries where it should after training). I could not find out how to print the kernel after training. The <code>.get_weights()</code> method just gets me the weights, but I want to print the kernel explicitly.</p>
<p>Thank you in advance</p>
|
<p>So, I was lucky and found an answer in an unrelated post. The answer is quite general:</p>
<p>For a tensor, defined as a class member of the custom layer, you need to call its evaluation method with the correct session. That is</p>
<pre><code>import keras.backend as K
# Train your model...
sess = K.get_session()
print(model.get_layer("name_of_your_layer").your_tensor.eval(session=sess))
</code></pre>
<p>As an example, to print the kernel of a dense layer after training this is</p>
<pre><code>import numpy as np
from keras.layers import Input, Dense
from keras.models import Model
import keras.backend as K
x = np.random.rand(10,3)
layer_1 = Input(shape=(x.shape[1],))
layer_2 = Dense(units=x.shape[1])(layer_1)
model = Model(inputs=layer_1, outputs=layer_2)
model.compile(optimizer="Adam", loss="MSE")
model.fit(x, x, epochs=5)
sess = K.get_session()
print(model.get_layer("dense_1").kernel.eval(session=sess))
</code></pre>
|
tensorflow|keras|keras-layer
| 2
|
375,209
| 53,931,557
|
Convert column in excel date format (DDDDD.tttt) to datetime using pandas
|
<p>I have a dataframe with multiple columns and want to convert one of them, a column of floats in Excel date format (DDDDD.tttt), to datetime. </p>
<p>At the moment the value of the columns are like this:</p>
<pre><code>42411.0
42754.0
</code></pre>
<p>So I want to convert them to:</p>
<pre><code>2016-02-11
2017-01-19
</code></pre>
|
<p>Given </p>
<pre><code># s = df['date']
s
0 42411.0
1 42754.0
Name: 0, dtype: float64
</code></pre>
<p>Convert from Excel to datetime using: </p>
<pre><code>s_int = s.astype(int)
# Correcting Excel Leap Year bug.
days = pd.to_timedelta(np.where(s_int > 59, s_int - 1, s_int), unit='D')
secs = pd.to_timedelta(
((s - s_int) * 86400.0).round().astype(int), unit='s')
pd.to_datetime('1899/12/31') + days + secs
0 2016-02-11
1 2017-01-19
dtype: datetime64[ns]
</code></pre>
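<p>For reference, a shorter sketch (assuming a pandas version that supports the <code>origin</code> argument, 0.20+): <code>to_datetime</code> accepts the Excel epoch directly, which also sidesteps the leap-year offset for post-1900 dates:</p>
<pre><code>pd.to_datetime(s, unit='D', origin='1899-12-30')
0   2016-02-11
1   2017-01-19
dtype: datetime64[ns]
</code></pre>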
<p><a href="https://stackoverflow.com/q/29387137/4909087">Reference.</a></p>
|
python|pandas|datetime|dataframe
| 1
|
375,210
| 53,961,983
|
How to hide column names while converting pandas dataframe to html using to_html
|
<p>I have a data set I am transposing with a loop that looks like this.</p>
<pre><code>x = []
for index, row in s1.iterrows():
    x = row
    tt = pd.DataFrame(x)
</code></pre>
<p>I am then using the pandas DataFrame <code>to_html</code> function to send an email for each row of the <code>s1</code> dataframe, transposed. My issue is that when I convert the series to a dataframe at the end of the loop, the printed output includes the index of each iteration of the loop. So the output looks like this. </p>
<pre><code> 0
s1.Col1 s1.val1
s1.Col2 s1.val2
s1.Col3 s1.val3
s1.Col4 s1.val4
s1.Col5 s1.val5
s1.Col6 s1.val6
1
s1.Col1 s1.val1
s1.Col2 s1.val2
s1.Col3 s1.val3
s1.Col4 s1.val4
s1.Col5 s1.val5
s1.Col6 s1.val6
</code></pre>
<p>(Sorry I couldn't figure out how to put in a table)</p>
<p>So each email has all of the columns in the format I need but an index value of 0,1,2,3,4,5 etc. which I guess is being included as a column name. All I need to do is exclude the index value that is being output. </p>
<p>I am using the below code to generate the html for the email. So the object needs to be a data frame for this to work. </p>
<pre><code> email = " {tt} "
email = email.format(tt=tt.to_html())
</code></pre>
<p>Maybe there is an easier way to this. I am basically just trying to transpose the original data set and send an email with the table for each row. Any help would be appreciated. Thanks.</p>
|
<p>I think there's an easy way, as <code>.to_html()</code> actually has a header flag.
Check out <code>tt.to_html(header=False)</code>.</p>
|
python|pandas|dataframe
| 1
|
375,211
| 54,127,301
|
How to parse a lot of txt files with pandas and somehow understand from which file each raw of the table
|
<p>I have a dataset containing the name, gender, and quantity of people with each name. There are a lot of text files (>100). Each of them has the same information with different quantity parameters, one file per year for 1880, 1881, ..., 2008.
Here is a link to make it more clear: <a href="https://github.com/wesm/pydata-book/tree/2nd-edition/datasets/babynames" rel="nofollow noreferrer">https://github.com/wesm/pydata-book/tree/2nd-edition/datasets/babynames</a>
How can I import all of these files and mark rows with the appropriate years?
So the table looks like this:</p>
<pre><code>YEAR NAME GENDER QUANTITY
1998 Marie F 2994
1996 John M 2984
1897 Molly F 54
</code></pre>
<p>The main concern is how to mark each row with the appropriate year according to the filename.</p>
<p>Here is my code for 1 file, but i need to do the same for more than 100 text files...</p>
<pre><code>import pandas as pd
df = pd.read_csv("yob1880.txt", header=None)
df["year"] = 1880 # add new column according to the file`s year
print(df)
</code></pre>
|
<p>There are two issues here:</p>
<ol>
<li>How to extract year from filename and assign to new column.</li>
<li>How to concatenate multiple dataframes.</li>
</ol>
<p>You can use string slicing and <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>pd.DataFrame.assign</code></a> for the former; <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.concat.html" rel="nofollow noreferrer"><code>pd.concat</code></a> for the latter. Assuming your filenames are of the format <code>yobXXXX.txt</code>:</p>
<pre><code>df = pd.concat(pd.read_csv(fn).assign(YEAR=int(fn[3:7])) for fn in filenames)
</code></pre>
<p>Or if you wish to ignore indices:</p>
<pre><code>df = pd.concat((pd.read_csv(fn).assign(YEAR=int(fn[3:7])) for fn in filenames),
               ignore_index=True)
</code></pre>
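<p>A minimal sketch for building <code>filenames</code> and keeping the original headerless format (the directory path and column names here are assumptions based on the linked dataset):</p>
<pre><code>import glob
import os
import pandas as pd

filenames = sorted(glob.glob('babynames/yob*.txt'))

# Use the basename so the year slice [3:7] is not thrown off by the directory prefix.
df = pd.concat(
    (pd.read_csv(fn, header=None, names=['NAME', 'GENDER', 'QUANTITY'])
       .assign(YEAR=int(os.path.basename(fn)[3:7]))
     for fn in filenames),
    ignore_index=True)
</code></pre>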
|
python|pandas
| 0
|
375,212
| 53,850,611
|
Python Pandas, deleting NaN
|
<p>So basically I am stuck on a very simple thing. For some reason when I execute this code:</p>
<pre><code>import pandas as pd
x = pd.read_csv('titanic.csv')
v = x.dropna(axis=0,how="any")
z = v[["Survived"]]
y = z.where(z == 1)
print (y)
</code></pre>
<p>It still prints values with NaN, even though I have already run dropna on the whole file and that works. I just want to print rows with value 1. I have tried many variations and I can't seem to fix it. Any ideas? </p>
<p><strong>Output</strong></p>
<p><img src="https://i.stack.imgur.com/1y1J0.png" alt="Screen Shot"></p>
<p><strong>Part of the file I am interested in</strong></p>
<p><img src="https://i.stack.imgur.com/HNwZG.png" alt="Screen Shot"></p>
|
<p>try:</p>
<pre><code>y = z.where(z == 1).dropna(subset=['Survived'])
</code></pre>
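<p><code>where</code> keeps only the values that satisfy the condition and replaces everything else with NaN, which is why the NaNs reappear after the earlier <code>dropna</code>; the chained <code>dropna</code> above removes them again. For reference, a sketch with plain boolean indexing, which avoids producing NaNs in the first place:</p>
<pre><code>y = z[z['Survived'] == 1]
</code></pre>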
|
python|pandas
| 2
|
375,213
| 53,824,848
|
Mask dataframe matching multiple conditions
|
<p>I would like mask (or assign 'NA') the value of a column in a dataframe if two conditions are met. This would be relatively straightforward if the conditions were performed row-wise, with something like:</p>
<pre><code>mask = ((df['A'] < x) & (df['B'] < y))
df.loc[mask, 'C'] = 'NA'
</code></pre>
<p>but I'm having some trouble figuring out of how to perform this task in my dataframe, which is structured more or less like:</p>
<pre><code>df = pd.DataFrame({ 'A': (188, 750, 1330, 1385, 188, 750, 810, 1330, 1385),
'B': (2, 5, 7, 2, 5, 5, 3, 7, 2),
'C': ('foo', 'foo', 'foo', 'foo', 'bar', 'bar', 'bar', 'bar', 'bar') })
A B C
0 188 2 foo
1 750 5 foo
2 1330 7 foo
3 1385 2 foo
4 188 5 bar
5 750 5 bar
6 810 3 bar
7 1330 7 bar
8 1385 2 bar
</code></pre>
<p>The values in column 'A' when <code>'C' == 'foo'</code> should also be found when <code>'C' == 'bar'</code> (something like an index), although it can have missing data in both 'foo' and 'bar'. How can I mask (or assign 'NA') the rows of column 'B' if both 'foo' and 'bar' are lower than 5 or any of them is missing? In the example above the output would be something like:</p>
<pre><code> A B C
0 188 2 foo
1 750 5 foo
2 1330 7 foo
3 1385 NA foo
4 188 5 bar
5 750 5 bar
6 810 NA bar
7 1330 7 bar
8 1385 NA bar
</code></pre>
|
<p>Here's one solution. The idea is to construct two Boolean masks, <code>m1</code> and <code>m2</code>, from two mapping series, <code>s1</code> and <code>s2</code>. Then use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.mask.html" rel="nofollow noreferrer"><code>pd.Series.mask</code></a> to mask series <code>B</code>.</p>
<pre><code># create separate mappings for foo and bar
s1 = df.loc[df['C'] == 'foo'].set_index('A')['B']
s2 = df.loc[df['C'] == 'bar'].set_index('A')['B']
# use -np.inf to cover missing mappings
m1 = df['A'].map(s1).fillna(-np.inf).lt(5)
m2 = df['A'].map(s2).fillna(-np.inf).lt(5)
df['B'] = df['B'].mask(m1 & m2)
print(df)
A B C
0 188 2.0 foo
1 750 5.0 foo
2 1330 7.0 foo
3 1385 NaN foo
4 188 5.0 bar
5 750 5.0 bar
6 810 NaN bar
7 1330 7.0 bar
8 1385 NaN bar
</code></pre>
|
python|python-3.x|pandas|dataframe|series
| 2
|
375,214
| 54,150,976
|
Dataframe assign blanket criteria if not matching
|
<p>I have a dataframe organized in the following manner for railcars.
I'd like to count by ['Railroad'], but only if it matches 'VER'. If not, I want to reassign the 'Railroad' value to 'NOT' and count by that.</p>
<p>Dataframe hierarchy:</p>
<pre><code>df1 = df.reset_index().groupby(['Homebase','FINAL ETA','Code Description','L_E', 'Railroad'])['Code Description'].size()
</code></pre>
<p>Example output:</p>
<pre><code>Homebase FINAL ETA Code Description L_E Railroad
Rail2 2018-12-06 Arrival in yard L VER 1
2019-01-04 Arrival in yard L VER 10
2019-01-08 Arrival in yard L FIL 16
2019-01-09 Arrival in yard L FIL 5
2019-01-13 Arrival in yard L PAS 1
</code></pre>
<p>Desired output:</p>
<pre><code>Homebase FINAL ETA Code Description L_E Railroad
Rail2 2018-12-06 Arrival in yard L VER 1
2019-01-04 Arrival in yard L VER 10
2019-01-08 Arrival in yard L NOT 16
2019-01-09 Arrival in yard L NOT 5
2019-01-13 Arrival in yard L NOT 1
</code></pre>
|
<p>It looks like only the railroad column is changing, try this:</p>
<pre><code>ver = (df1['Railroad'] == 'VER')
df1['Railroad'] = 'NOT'
df1.loc[ver, 'Railroad'] = 'VER'
</code></pre>
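<p>A sketch of the same idea applied to the raw frame before grouping (assuming the replacement should happen on <code>df</code> so that the counts are then aggregated under 'VER'/'NOT'):</p>
<pre><code>df['Railroad'] = df['Railroad'].where(df['Railroad'] == 'VER', 'NOT')
df1 = df.reset_index().groupby(['Homebase','FINAL ETA','Code Description','L_E', 'Railroad'])['Code Description'].size()
</code></pre>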
|
python|pandas|dataframe
| 0
|
375,215
| 38,169,487
|
Python sorting numbers in a multicolumn file
|
<p>I have a file with 4 column data, and I want to prepare a final output file which is sorted by the first column. The data file (rough.dat) looks like:</p>
<pre><code>1 2 4 9
11 2 3 5
6 5 7 4
100 6 1 2
</code></pre>
<p>The code I am using to sort by the first column is:</p>
<pre><code>with open('rough.dat','r') as f:
    lines=[line.split() for line in f]
a=sorted(lines, key=lambda x:x[0])
print a
</code></pre>
<p>The result I am getting is strange, and I think I'm doing something silly!</p>
<pre><code>[['1', '2', '4', '9'], ['100', '6', '1', '2'], ['11', '2', '3', '5'], ['6', '5', '7', '4']]
</code></pre>
<p>You may see that the first column is not sorted in ascending order; instead, the numbers starting with '1' take priority! A zero after '1', i.e. 100, takes priority over 11!</p>
|
<p>Strings are compared lexicographically (dictionary order):</p>
<pre><code>>>> '100' < '6'
True
>>> int('100') < int('6')
False
</code></pre>
<p>Converting the first item to <a href="https://docs.python.org/2/library/functions.html#int" rel="nofollow"><code>int</code></a> in key function will give you what you want.</p>
<pre><code>a = sorted(lines, key=lambda x: int(x[0]))
</code></pre>
|
python|list|sorting|numpy
| 0
|
375,216
| 38,504,907
|
Reading a .VTK polydata file and converting it into Numpy array
|
<p>I want to convert a .VTK ASCII polydata file into numpy array of just the coordinates of the points. I first tried this: <a href="https://stackoverflow.com/a/11894302">https://stackoverflow.com/a/11894302</a> but it stores a (3,3) numpy array where each entry is actually the coordinates of THREE points that make that particular cell (in this case a triangle). However, I don't want the cells, I want the coordinates of each point (without repeatition). Next I tried this: <a href="https://stackoverflow.com/a/23359921/6619666">https://stackoverflow.com/a/23359921/6619666</a> with some modifications. Here is my final code. Instead of numpy array, the values are being stored as a tuple but I am not sure if that tuple represents each point.</p>
<pre><code>import sys
import numpy
import vtk
from vtk.util.numpy_support import vtk_to_numpy
reader = vtk.vtkPolyDataReader()
reader.SetFileName('Filename.vtk')
reader.ReadAllScalarsOn()
reader.ReadAllVectorsOn()
reader.Update()
nodes_vtk_array= reader.GetOutput().GetPoints().GetData()
print nodes_vtk_array
</code></pre>
<p>Please give suggestions.</p>
|
<p>You can use <code>dataset_adapter</code> from <code>vtk.numpy_interface</code>:</p>
<pre><code>from vtk.numpy_interface import dataset_adapter as dsa
polydata = reader.GetOutput()
numpy_array_of_points = dsa.WrapDataObject(polydata).Points
</code></pre>
<p>From <a href="https://blog.kitware.com/improved-vtk-numpy-integration-part-2/" rel="noreferrer">Kitware blog</a>:</p>
<blockquote>
<p>It is possible to access PointData, CellData, FieldData, Points
(subclasses of vtkPointSet only), Polygons (vtkPolyData only) this
way.</p>
</blockquote>
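<p>For reference, a sketch using the <code>vtk_to_numpy</code> helper that the question already imports (assuming the reader has been <code>Update()</code>'d as shown); it converts the points array into a plain <code>(n_points, 3)</code> NumPy array:</p>
<pre><code>from vtk.util.numpy_support import vtk_to_numpy

points_vtk_array = reader.GetOutput().GetPoints().GetData()
numpy_points = vtk_to_numpy(points_vtk_array)  # shape (n_points, 3)
</code></pre>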
|
python|arrays|numpy|vtk
| 8
|
375,217
| 38,199,408
|
How to use dropna() to drop an item which is < 1?
|
<p>I have a DataFrame as follow:</p>
<pre><code>mydf = pd.DataFrame({'Name1':(4.2, 0.3), 'Name2':(0.2, 4.2), 'Name3':(3.3, 5.5)}, index=('Val1', 'Val2'))
</code></pre>
<p>How can I drop a column in which any item's value is < 1?</p>
|
<p>This selects columns where all elements are <code>>=1</code> (complement of any of them being smaller than 1):</p>
<pre><code>mydf.ix[:, ~(mydf<1).any()]
Out[9]:
Name3
Val1 3.3
Val2 5.5
</code></pre>
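<p>For reference, a sketch of the same selection with <code>.loc</code>, since <code>.ix</code> is deprecated in later pandas versions:</p>
<pre><code>mydf.loc[:, ~(mydf < 1).any()]
</code></pre>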
|
python|pandas
| 1
|
375,218
| 38,059,735
|
Pandas groupby with categorical and apply copies index to additional column
|
<p>Consider the following MWE with three alternative last lines:</p>
<pre><code>df = pd.DataFrame({'a': np.arange(100)*3})
(df.assign(mybins = lambda df: pd.cut(df['a'],bins=np.linspace(0,300,6)))
.groupby('mybins')
.sum()
#.apply(lambda x: x.sum())
#.apply(lambda x: x.count()/float(len(df))*100)
)
</code></pre>
<p>So I have a DataFrame with floats. I want to groupby bins of column 'a' and do some calculations. When I use the <code>.sum</code> function it works as expected, it returns the bins as index and the sum of each bin as column values. </p>
<p>Now, when I use the apply function to calculate the sums, somehow the groupby index is also cast as an additional column 'mybins' in the dataframe, and the sum is applied to both columns. So now I have a column 'a' with the sums of <code>a</code> and a column 'mybins' with lists of the bin edges times <code>sum(a)</code>. This is not what I want/expected.</p>
<p>My final goal is to use <code>apply</code> to calculate percentages, but then I get an error (unsupported operand types), so I need to fix this strange behaviour. What am I missing?</p>
|
<p>Is that what you want? Pay attention to <code>.groupby('mybins')['a']</code> (the <strong>['a']</strong>):</p>
<pre><code>In [270]: %paste
(df.assign(mybins = lambda df: pd.cut(df['a'],bins=np.linspace(0,300,6)))
.groupby('mybins')['a']
#.sum()
#.apply(lambda x: x.sum())
.apply(lambda x: x.sum()/float(len(x))*100)
)
## -- End pasted text --
Out[270]:
mybins
(0, 60] 3150.0
(60, 120] 9150.0
(120, 180] 15150.0
(180, 240] 21150.0
(240, 300] 27000.0
Name: a, dtype: float64
</code></pre>
<p>BTW you can achieve the same result in a more pandas idiomatic way:</p>
<pre><code>In [273]: %paste
(df.assign(mybins = lambda df: pd.cut(df['a'],bins=np.linspace(0,300,6)))
.groupby('mybins')
.mean() * 100
)
## -- End pasted text --
Out[273]:
a
mybins
(0, 60] 3150.0
(60, 120] 9150.0
(120, 180] 15150.0
(180, 240] 21150.0
(240, 300] 27000.0
</code></pre>
<p><strong>Explanation:</strong></p>
<p>given:</p>
<pre><code>In [33]: df
Out[33]:
s n s2 n2 n3
0 a 0.629772 a 6.297724 1
1 d 0.496197 d 4.961974 0
2 a 0.801868 a 8.018679 0
3 d 0.461914 d 4.619140 3
4 c 0.259175 c 2.591751 0
5 b 0.797740 b 7.977401 0
6 a 0.508496 a 5.084962 1
7 b 0.242306 b 2.423056 2
8 c 0.218082 c 2.180820 2
9 d 0.060125 d 0.601247 3
</code></pre>
<p>if we try to use <code>.apply()</code> for summing up the groups, we get:</p>
<pre><code>In [34]: df.groupby('s').apply(lambda x: x.sum())
Out[34]:
s n s2 n2 n3
s
a aaa 1.940136 aaa 19.401364 2
b bb 1.040046 bb 10.400456 2
c cc 0.477257 cc 4.772571 2
d ddd 1.018236 ddd 10.182361 6
</code></pre>
<p>because <code>apply()</code> will be applied on all columns, including the <code>groupby</code> column - <code>s</code> in this example</p>
<p>prove with <code>.apply(lambda x: print(x))</code> instead of <code>.apply(lambda x: x.sum())</code></p>
<pre><code>In [35]: df.groupby('s').apply(lambda x: print(x))
s n s2 n2 n3
0 a 0.629772 a 6.297724 1
2 a 0.801868 a 8.018679 0
6 a 0.508496 a 5.084962 1
s n s2 n2 n3
0 a 0.629772 a 6.297724 1
2 a 0.801868 a 8.018679 0
6 a 0.508496 a 5.084962 1
s n s2 n2 n3
5 b 0.797740 b 7.977401 0
7 b 0.242306 b 2.423056 2
s n s2 n2 n3
4 c 0.259175 c 2.591751 0
8 c 0.218082 c 2.180820 2
s n s2 n2 n3
1 d 0.496197 d 4.961974 0
3 d 0.461914 d 4.619140 3
9 d 0.060125 d 0.601247 3
Out[35]:
Empty DataFrame
Columns: []
Index: []
</code></pre>
<p>NOTE1: you see all columns including the <code>groupby</code> column</p>
<p>NOTE2: you see 5 groups instead of expected 4. <a href="https://github.com/pydata/pandas/issues/2656" rel="nofollow">With groupby, the applied function is called one extra time to see if certain optimizations can be done.</a></p>
<p>Now let's try to do it using <code>.sum()</code> function:</p>
<pre><code>In [37]: df.groupby('s').sum()
Out[37]:
n n2 n3
s
a 1.940136 19.401364 2
b 1.040046 10.400456 2
c 0.477257 4.772571 2
d 1.018236 10.182361 6
</code></pre>
<p><code>sum()</code> was smart enough to remove all non-numeric columns, and it also removes the <code>groupby</code> column when applying <code>sum</code>:</p>
<pre><code>In [38]: df.groupby('n3').sum()
Out[38]:
n n2
n3
0 2.354980 23.549805
1 1.138269 11.382686
2 0.460388 4.603876
3 0.522039 5.220387
</code></pre>
<p>we just grouped by another numeric column: <code>n3</code> and the <code>sum()</code> wasn't applied on that <code>groupby</code> column</p>
|
python|pandas
| 2
|
375,219
| 38,131,287
|
Pandas: add column with the most recent values
|
<p>I have two pandas dataframes, both index with datetime entries. The <code>df1</code> has non-unique time indices, whereas <code>df2</code> has unique ones. I would like to add a column <code>df2.a</code> to <code>df1</code> in the following way: for every row in <code>df1</code> with timestamp <code>ts</code>, <code>df1.a</code> should contain the most recent value of <code>df2.a</code> whose timestamp is less then <code>ts</code>. </p>
<p>For example, let's say that <code>df2</code> is sampled every minute, and there are rows with timestamps <code>08:00:15</code>, <code>08:00:47</code>, <code>08:02:35</code> in <code>df1</code>. In this case I would like the value from <code>df2.a[08:00:00]</code> to be used for the first two rows, and <code>df2.a[08:02:00]</code> for the third. How can I do this?</p>
|
<p>You are describing an <a href="https://stackoverflow.com/questions/12322289/kdb-like-asof-join-for-timeseries-data-in-pandas">asof-join</a>, which was just <a href="http://pandas.pydata.org/pandas-docs/version/0.19.0/whatsnew.html#whatsnew-0190-enhancements-asof-merge" rel="nofollow noreferrer">released in pandas 0.19</a>.</p>
<pre><code>pd.merge(df1, df2, left_on='ts', right_on='a')
</code></pre>
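<p>A minimal sketch of the asof merge itself with <code>pd.merge_asof</code> (assuming both frames get their timestamps into a column named <code>ts</code> via <code>reset_index()</code>, and that the most recent value strictly before <code>ts</code> is wanted):</p>
<pre><code>left = df1.reset_index().rename(columns={'index': 'ts'}).sort_values('ts')
right = df2.reset_index().rename(columns={'index': 'ts'}).sort_values('ts')

# direction defaults to 'backward'; allow_exact_matches=False enforces "strictly before"
merged = pd.merge_asof(left, right[['ts', 'a']], on='ts', allow_exact_matches=False)
</code></pre>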
|
python|datetime|pandas
| 2
|
375,220
| 38,222,126
|
tensorflow efficient way for tensor multiplication
|
<p>I have two tensors in tensorflow; the first tensor is 3-D, and the second is 2-D. I want to multiply them like this:</p>
<pre><code>x = tf.placeholder(tf.float32, shape=[sequence_length, batch_size, hidden_num])
w = tf.get_variable("w", [hidden_num, 50])
b = tf.get_variable("b", [50])
output_list = []
for step_index in range(sequence_length):
    output = tf.matmul(x[step_index, :, :], w) + b
    output_list.append(output)
output = tf.pack(output_list)
</code></pre>
<p>I use a loop to do the multiplication, but I think it is too slow. What would be the best way to make this process as simple/clean as possible?</p>
|
<p>You could use <code>batch_matmul</code>. Unfortunately it doesn't seem <code>batch_matmul</code> supports broadcasting along the batch dimension, so you have to tile your <code>w</code> matrix. This will use more memory, but all operations will stay in TensorFlow</p>
<pre><code>a = tf.ones((5, 2, 3))
b = tf.ones((3, 1))
b = tf.reshape(b, (1, 3, 1))
b = tf.tile(b, [5, 1, 1])
c = tf.batch_matmul(a, b) # use tf.matmul in TF 1.0
sess = tf.InteractiveSession()
sess.run(tf.shape(c))
</code></pre>
<p>This gives</p>
<pre><code>array([5, 2, 1], dtype=int32)
</code></pre>
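<p>An alternative sketch using the names from the question: flatten the sequence and batch dimensions, do a single 2-D <code>matmul</code>, then reshape back, which avoids tiling <code>w</code>:</p>
<pre><code>x2 = tf.reshape(x, [-1, hidden_num])                         # (seq_len*batch, hidden_num)
out = tf.matmul(x2, w) + b                                   # (seq_len*batch, 50)
output = tf.reshape(out, [sequence_length, batch_size, 50])
</code></pre>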
|
python|tensorflow|deep-learning
| 2
|
375,221
| 38,241,933
|
how to convert column names into column values in pandas - python
|
<pre><code>df=pd.DataFrame(index=['x','y'], data={'a':[1,2],'b':[3,4]})
</code></pre>
<p>how can I convert column names into values of a column? This is my desired output</p>
<pre><code> c1 c2
x 1 a
x 3 b
y 2 a
y 4 b
</code></pre>
|
<p>You can use:</p>
<pre><code>print (df.T.unstack().reset_index(level=1, name='c1')
.rename(columns={'level_1':'c2'})[['c1','c2']])
c1 c2
x 1 a
x 3 b
y 2 a
y 4 b
</code></pre>
<p>Or:</p>
<pre><code>print (df.stack().reset_index(level=1, name='c1')
.rename(columns={'level_1':'c2'})[['c1','c2']])
c1 c2
x 1 a
x 3 b
y 2 a
y 4 b
</code></pre>
|
python|pandas
| 4
|
375,222
| 38,282,413
|
How to change colors of function plots in Tensorboard?
|
<p>I'm trying to compare different learning-rate-decays using Tensorflow. Therefore I visualize the cost functions in Tensorboard ('EVENTS'-tab). My problem is that the different plots of the functions are in very similar colors making it hard to compare them. Is there any possibility to change those colors?</p>
|
<p>Just create different summary writers with different log directories for each learning rate. Then launch the tensorboard tool using:
<code>tensorboard --logdir=tag1:/path/to/summary/one,tag2:/path/to/summary/two</code></p>
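<p>For reference, a minimal sketch of the per-run writers (assuming a TF 1.x-era API; in older versions the class was <code>tf.train.SummaryWriter</code>; the log paths are assumptions):</p>
<pre><code>import tensorflow as tf

# One writer, and hence one log directory and one curve colour, per learning rate.
for lr in [0.1, 0.01, 0.001]:
    writer = tf.summary.FileWriter('/tmp/logs/lr_%g' % lr)
    # ... add the cost summaries for this run to `writer` ...
    writer.close()
</code></pre>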
|
tensorflow|tensorboard
| 9
|
375,223
| 38,309,109
|
matplotlib x-axis formatting if x-axis is pandas index
|
<p>I'm using iPython notebook's %matplotlib inline and I'm having trouble formatting my plot.</p>
<p><a href="https://i.stack.imgur.com/UsRHv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UsRHv.png" alt="Plot that needs x-axis formatting"></a></p>
<p>As you can see, my first and last data point aren't showing up the way the other data points are showing up. I'd like to have the error bars visible and have the graph be "zoomed out" a bit.</p>
<pre><code>df.plot(yerr=df['std dev'],color='b', ecolor='r')
plt.title('SpO2 Mean with Std Dev')
plt.xlabel('Time (s)')
plt.ylabel('SpO2')
</code></pre>
<p>I assume I have to use </p>
<pre><code>matplotlib.pyplot.xlim()
</code></pre>
<p>but I'm not sure how to use it properly if my x-axis is a DataFrame index composed of strings:</p>
<pre><code>index = ['-3:0','0:3','3:6','6:9','9:12','12:15','15:18','18:21','21:24']
</code></pre>
<p>Any ideas? Thanks!</p>
|
<p>You can see the usage of xlim <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.xlim" rel="nofollow noreferrer">here</a>. Basically, in this case, if you ran <code>plt.xlim()</code> you would get <code>(0.0, 8.0)</code>. As you have an index that uses text and not numbers, the values for xlim are actually just the positions of the entries in your index. So in this case you would just need to change the values by feeding in however many steps left and right you want your graph to take. For example:</p>
<pre><code>plt.xlim(-1,len(df))
</code></pre>
<p>Would change this:</p>
<p><a href="https://i.stack.imgur.com/by6vY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/by6vY.png" alt="enter image description here"></a></p>
<p>to this:</p>
<p><a href="https://i.stack.imgur.com/IMH6K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IMH6K.png" alt="enter image description here"></a></p>
<p>Hope that helps.</p>
|
python|pandas|matplotlib
| 0
|
375,224
| 38,459,793
|
numpy boolean indexing multiple conditions
|
<p>I have a two dimensional numpy array and I am using python 3.5. I am starting to learn about Boolean indexing which is way cool. I can do this with my two dimensional array, arr:</p>
<pre><code>mask = arr > 127
arr[mask] = 0
</code></pre>
<p>This works perfect but now I am trying to change this logic to use boolean indexing</p>
<pre><code>for x in range(arr.shape[0]):
    for y in range(arr.shape[1]):
        if arr[x,y] < -10:
            arr[x,y] = 0
        elif arr[x,y] < 15:
            arr[x,y] = arr[x,y] + 5
        else:
            arr[x,y] = 30
</code></pre>
<p>I tried multiple conditional operators for my indexing but I get the following error:</p>
<p><code>ValueError: boolean index array should have 1 dimension</code>. </p>
<p>I tried multiple versions to try to get this to work. Here is one try that produced the ValueError.</p>
<pre><code> arr_temp = arr.copy()
mask = arry_temp < -10
mask2 = arry_temp < 15
mask3 = mask ^ mask3
arr[mask] = 0
arr[mask3] = arry[mask3] + 5
arry[~mask2] = 30
</code></pre>
<p>I received the error on mask3. I am new to this, so I know the code above is not efficient; I am still trying to work it out.</p>
<p>Any tips would be appreciated. </p>
|
<p>This might help. Consider a numpy array of floating point values foo.</p>
<pre><code>import numpy as np
foo=np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
</code></pre>
<p>foo yields</p>
<pre><code>array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
</code></pre>
<p>This is how you get the values in foo > 0.3</p>
<pre><code>foo[np.where( foo > 0.3)]
</code></pre>
<p>yields</p>
<pre><code>array([0.4, 0.5, 0.6, 0.7, 0.8])
</code></pre>
<p>This is how to do the same with multiple conditions. In this case, values > 0.3 and less than 0.6.</p>
<pre><code>foo[np.logical_and(foo > 0.3, foo < 0.6)]
</code></pre>
<p>yields</p>
<pre><code>array([0.4, 0.5])
</code></pre>
<p><strong>Alternatively using boolean mask array</strong></p>
<pre><code>mask_1 = foo > 0.3
mask_2 = foo < 0.6
mask_3 = np.logical_and(mask_1, mask_2)
mask_3
</code></pre>
<p>Yields a boolean mask array</p>
<pre><code>array([False, False, False,  True,  True, False, False, False])
</code></pre>
<p>Which you can then use to slice the array via</p>
<pre><code>foo[mask_3]
</code></pre>
<p>Yields</p>
<pre><code>array([0.4, 0.5])
</code></pre>
|
python|numpy
| 6
|
375,225
| 38,117,672
|
Deep Neural Network : Probability issue
|
<p>I'm working on keyword spotting with a deep neural network (Multi-Layer Perceptron) and I'm facing the following issue. </p>
<p>I have to detect a keyword in a speech signal. I use the library Tensorflow and I write my code based on this <a href="https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/multilayer_perceptron.py" rel="nofollow">example</a>.</p>
<pre><code>'''
A Multilayer Perceptron implementation example using TensorFlow library.
This example is using the MNIST database of handwritten digits
(http://yann.lecun.com/exdb/mnist/)
Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
'''

# Import MINST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

import tensorflow as tf

# Parameters
learning_rate = 0.001
training_epochs = 15
batch_size = 100
display_step = 1

# Network Parameters
n_hidden_1 = 256 # 1st layer number of features
n_hidden_2 = 256 # 2nd layer number of features
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)

# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])

# Create model
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with RELU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Hidden layer with RELU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Construct model
pred = multilayer_perceptron(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

# Initializing the variables
init = tf.initialize_all_variables()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
                                                          y: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch+1), "cost=", \
                "{:.9f}".format(avg_cost)
    print "Optimization Finished!"

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print "Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels})
</code></pre>
<p>I trained my network using MFCC features with double deltas. As input, I have 2300 values, which correspond to the MFCCs of around 600ms of audio in order to add context. As output, I have 2 classes: <code>mykeyword</code> and another class, <code>filler</code>. </p>
<p>I have a 3x128 MLP with ReLU activations and softmax output layer. I have also centered and whitened my data. </p>
<p>My labels vector looks like : <code>[ 1 0 ]</code> for the keyword and <code>[ 0 1 ]</code> for the "filler". </p>
<p>I would like to get a probability at the end that represents the confidence score of detecting the keyword, so that I can use a threshold. However, after the softmax layer, I get only <code>0</code> or <code>1</code> for each example of the testing data. I don't really understand why..</p>
<p>Thank you</p>
|
<p>In the world where numbers are infinitely precise, your model would be slightly different. You would actually use <code>tf.nn.softmax</code> at the end of your model, and optimize for <code>cross_entropy</code>. However, numbers have precision, and computing gradients for cross entropy followed by a softmax during the backward pass will result in numerical instability.</p>
<p>The thing is, the combined gradient of <code>softmax -> cross_entropy</code> is very stable (it is just the output of softmax for correct predictions and <code>output - 1</code> for incorrect). Therefore, tensorflow allows you to optimize directly for <code>softmax + crossentropy</code>, in which case it uses that stable way of computing the gradient.</p>
<p>That introduces a problem -- no node in your graph actually computes the probability, the probabilities are hidden inside that objective function.</p>
<p>To address it, add an extra node to your graph:</p>
<pre><code>prob = tf.nn.softmax(pred)
</code></pre>
<p>And evaluate it. It will return you the actual probabilities.</p>
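<p>For example, inside the session from the code above (a sketch):</p>
<pre><code>probabilities = sess.run(prob, feed_dict={x: mnist.test.images})
</code></pre>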
<p>The <code>0</code> and <code>1</code> you observe are likely to be the outputs of <code>argmax</code>. <code>argmax</code> is "hard" in a sense that it just returns the largest value, not the probabilities (and the largest value conveniently matches whether you compute it before the softmax or after).</p>
|
neural-network|tensorflow|probability|deep-learning
| 0
|
375,226
| 66,175,409
|
Reformat a Pandas dataframe with a tuple in a column?
|
<p>I have a dataframe that contains a tuple column as follows.</p>
<pre><code>import pandas as pd
d = {'col1': [('A', 0), ('A', 1), ('A', 2), ('B', 0), ('B', 1), ('B', 2)], 'col2': [1, 1, 1, 2, 2, 2]}
df = pd.DataFrame(data=d)
# Split the tuple into two cols and drop the tuple col
df[['b1', 'b2']] = pd.DataFrame(df['col1'].tolist(), index=df.index)
df = df.drop(columns='col1')
print(df)
col2 b1 b2
0 1 A 0
1 1 A 1
2 1 A 2
3 2 B 0
4 2 B 1
5 2 B 2
</code></pre>
<p>What I am trying to do is reformat this dataframe in the most efficient way and generate a new one where 0, 1, 2 are the columns and A, B are the row names, so I can write it to a csv file.</p>
<pre><code> 0 1 2
A 1 1 1
B 2 2 2
</code></pre>
|
<p>So you can do <code>pivot</code> with <code>rename_axis</code></p>
<pre><code>out = df.pivot(index='b1',columns='b2',values='col2').\
rename_axis(None,axis=1).rename_axis(None)
Out[101]:
0 1 2
A 1 1 1
B 2 2 2
</code></pre>
|
python|pandas
| 2
|
375,227
| 66,258,756
|
Dropping rows based on timeseries using .loc
|
<p>I have a dataframe with time series data. The dates have been parsed.</p>
<pre><code>data_path = "file.xlsx"
data = pd.read_excel(data_path, parse_dates=['date'], index_col='date')
</code></pre>
<p>I have dates from 2008 to 2017 but I want to drop all rows from 2017. I know that I can select dates by doing this:</p>
<pre><code>data.loc['2017']
</code></pre>
<p>...which selects all data from 2017. But, how can I drop rows based on .loc? If I try the code below, it gives an error:</p>
<pre><code>data.drop(data.loc['2017'], axis=0)
</code></pre>
<blockquote>
<p>KeyError: "['centre' 'avg' 'id' 'age'] not found in axis"</p>
</blockquote>
<p>Many thanks</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>, testing the year with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DatetimeIndex.year.html" rel="nofollow noreferrer"><code>DatetimeIndex.year</code></a>, to keep all rows whose year is not <code>2017</code>:</p>
<pre><code>df = data[data.index.year != 2017]
</code></pre>
<p>Your solution:</p>
<pre><code>df = data.drop(data.loc['2017'].index)
</code></pre>
|
python|pandas|time-series
| 1
|
375,228
| 66,209,735
|
Copy values from column X+2 (two to the right of X) into column X
|
<p>I have a dataframe and one in every three columns has a name (the others are unnamed 1, 2, 3...).</p>
<p>I want values in the columns that have names to be equal to the value of two columns to the right of that.</p>
<p>I was using <code>df.columns.get_loc("X")</code> and I can use this to correctly select my desired column using <code>df.iloc[:,X]</code>,</p>
<p>but I can't do Y = X +2 on pandas to do <code>df.iloc[:,X] = df.iloc[:,Y]</code> because X is not just an integer.</p>
<p>Any ideas on how to solve this? It can be a different way to get column X to have the same values as two columns to the right of X.</p>
<p>Thanks!</p>
|
<p>This would work; change 8 to fit your columns, or use <code>len(df.columns)//3*3</code>:</p>
<pre><code>for n in range(0, 8, 3):
    df.iloc[:, n] = df.iloc[:, n+2]
</code></pre>
<p>It doesn't seem we can assign multiple columns to multiple columns at once; I'm not sure if that is possible.</p>
|
python|pandas
| 0
|
375,229
| 66,324,649
|
Can i simplify this creation of an array?
|
<p>I need to create an array like this with numpy in python:</p>
<p><a href="https://i.stack.imgur.com/5bLeP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5bLeP.png" alt="array" /></a></p>
<p>This is my current attempt:</p>
<pre><code>array = np.zeros([3, 3])
array[0, 0] = 1
array[0, 1] = 2
array[0, 2] = 3
array[1, 0] = 2
array[1, 1] = 4
array[1, 2] = 6
array[2, 0] = 3
array[2, 1] = 6
array[2, 2] = 9
print(array)
</code></pre>
<p>Is there a way to simplify this? I'm pretty sure there is. However, I don't know how. :(</p>
|
<p>definitely :) One option that is straightforward:</p>
<pre><code>arr = np.arange(1,4)
my_arr = np.vstack((arr, 2*arr, 3*arr))
</code></pre>
<p>or you can use broadcasting. <a href="https://numpy.org/doc/stable/user/basics.broadcasting.html" rel="nofollow noreferrer">Broadcasting</a> is powerful and straightforward, but can be confusing if you don't read the documentation. In this case you make a (1x3) array, a (3x1) array, then multiply them and broadcasting is used to generate a (3x3):</p>
<pre><code>my_arr = np.arange(1,4) * np.array([1,2,3]).reshape(-1,1)
</code></pre>
<p>output:</p>
<pre><code>array([[1, 2, 3],
[2, 4, 6],
[3, 6, 9]])
</code></pre>
|
arrays|python-3.x|numpy
| 2
|
375,230
| 66,215,270
|
Numpy - Searching in a 4D matrix (AKA messed-up meshgrids)
|
<p>I am sorry if a similar question has been already posted in some way, but I could not find it anywhere so far. My problem is the following:</p>
<p>Suppose I have a 4D numpy matrix like this one</p>
<pre><code>M= array([[[[0. , 0. , 0. ],
[0. , 0. , 0.01]],
[[0. , 0.01, 0. ],
[0. , 0.01, 0.01]]],
[[[0.01, 0. , 0. ],
[0.01, 0. , 0.01]],
[[0.01, 0.01, 0. ],
[0.01, 0.01, 0.01]]]])
</code></pre>
<p>Which can be seen as a 3D meshgrid, where each point in space is a triplet of values (rows / axis=3 of the matrix). I have another 2D np array, corresponding to a set of points (in this case 2):</p>
<pre><code>Points= array([[0.01, 0.01, 0.], [0., 0., 0.]])
</code></pre>
<p>I would like to look into M and find the coordinates, or indices, corresponding to those points. Something like this</p>
<pre><code>coordinates= array([[1,1,0], [0,0,0]])
</code></pre>
<p>Unfortunately I have to avoid for loops as much as possible. I am looking for an equivalent of numpy.where() for such cases.</p>
<p>Thanks!</p>
|
<p>I don't see a strictly <code>numpy</code> solution because of <a href="https://stackoverflow.com/a/14772313/5431791"><code>this</code></a>. However, you can achieve this without <code>loop</code>s:</p>
<pre><code>>>> np.vstack([*map(lambda x: np.argwhere(np.equal(M, x).all(-1)), Points)])
array([[1, 1, 0],
[0, 0, 0]], dtype=int64)
</code></pre>
<p><strong>However, loop is preferable in this case, both in terms of speed and readability</strong></p>
<pre><code>>>> np.vstack([np.argwhere(np.equal(M, p).all(-1)) for p in Points])
array([[1, 1, 0],
[0, 0, 0]], dtype=int64)
</code></pre>
<p>Or,</p>
<pre><code>>>> np.hstack([np.equal(M, p).all(-1).nonzero() for p in Points]).T
array([[1, 1, 0],
[0, 0, 0]], dtype=int64)
</code></pre>
<p>Or,</p>
<pre><code>>>> np.array([np.equal(M, p).all(-1).nonzero() for p in Points]).squeeze()
array([[1, 1, 0],
[0, 0, 0]], dtype=int64)
</code></pre>
<p><strong>Timings:</strong></p>
<pre><code>>>> %timeit np.vstack([*map(lambda x: np.argwhere(np.equal(M, x).all(-1)), Points)])
42.8 µs ± 8.44 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
>>> %timeit np.vstack([np.argwhere(np.equal(M, p).all(-1)) for p in Points])
33.3 µs ± 840 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
>>> %timeit np.hstack([np.equal(M, p).all(-1).nonzero() for p in Points]).T
23.1 µs ± 786 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
>>> %timeit np.array([np.equal(M, p).all(-1).nonzero() for p in Points]).squeeze()
15.9 µs ± 62.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
</code></pre>
|
python|numpy|parsing|array-broadcasting
| 1
|
375,231
| 66,118,334
|
Pandas parse csv column from dict into table
|
<p>My csv file:</p>
<pre><code>FILE_INFO, CATEGORY, AREA, BOX, NAME
"{'id': 1, 'width': 4032, 'height': 3024, 'file_name': 'pic1.jpeg', 'license': 0, 'flickr_url': '', 'coco_url': '', 'date_captured': 0}",PRODUCT,2247.8981,"[2283.54, 934.13, 27.37, 82.13]","{'subcategory': 'BOTTLE', 'occluded': False}"
"{'id': 2, 'width': 4032, 'height': 3024, 'file_name': 'pic2.jpeg', 'license': 0, 'flickr_url': '', 'coco_url': '', 'date_captured': 0}",PRODUCT,2450.7795,"[2239.91, 1284.21, 33.15, 73.93]","{'subcategory': 'BOTTLE', 'occluded': False}"
"{'id': 3, 'width': 4032, 'height': 3024, 'file_name': 'pic3.jpeg', 'license': 0, 'flickr_url': '', 'coco_url': '', 'date_captured': 0}",INDUSTRIAL litter,2548.956,"[2316.07, 301.5, 68.3, 37.32]","{'subcategory': 'BOTTLE', 'occluded': False}"
"{'id': 4, 'width': 4032, 'height': 3024, 'file_name': 'pic4.jpeg', 'license': 0, 'flickr_url': '', 'coco_url': '', 'date_captured': 0}",INDUSTRIAL litter,1465.0172,"[3394.37, 1083.97, 26.99, 54.28]","{'subcategory': 'PAPER', 'occluded': False}"
</code></pre>
<p>How can I parse the <strong>FILE_INFO</strong> column and keep just the <strong>file_name</strong> value, without any other information? The same goes for the <strong>NAME</strong> column, from which I only need <strong>subcategory</strong>.
The other columns are fine.</p>
|
<p>You can iterate over the values of the column in a for loop and use the <code>json</code> module to extract the data you need.</p>
<p>For example, for the <strong>FILE_INFO</strong> column you could do something like this:</p>
<pre><code>import json

file_names = []
for row in df['FILE_INFO']:
    # the column stores Python-style dict strings, so swap the quotes before parsing
    file_names.append(json.loads(row.replace("'", '"'))['file_name'])
</code></pre>
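<p>Alternatively, a more robust sketch with pandas and <code>ast.literal_eval</code> (assuming the file is saved as <code>data.csv</code>; <code>literal_eval</code> also copes with Python literals such as <code>False</code> that are not valid JSON):</p>
<pre><code>import ast
import pandas as pd

df = pd.read_csv('data.csv', skipinitialspace=True)

# Parse the stringified dicts and keep only the fields of interest
df['file_name'] = df['FILE_INFO'].apply(lambda s: ast.literal_eval(s)['file_name'])
df['subcategory'] = df['NAME'].apply(lambda s: ast.literal_eval(s)['subcategory'])

print(df[['file_name', 'CATEGORY', 'AREA', 'BOX', 'subcategory']])
</code></pre>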
|
python|pandas|csv
| 1
|
375,232
| 66,309,302
|
multiinput GAN returning error ValueError: Graph disconnected:
|
<p>Been trying to troubleshoot this all weekend. I'm hoping someone can help.</p>
<p>I have a model that took a normal array and processed it within a GAN. It worked, but once I changed it to be multi-input, I started to get:</p>
<pre><code>ValueError: Graph disconnected:
</code></pre>
<p>My original code:</p>
<pre><code># Build stacked GAN model
gan_input = Input(shape=Xtrain.shape[1])
H = generator(gan_input)
gd_input=Concatenate()([gan_input,H])
gan_V = discriminator(gd_input)
GAN = Model(gan_input, [gan_V,H])
GAN.compile(loss=['categorical_crossentropy','mse'], optimizer=opt) #Complete GAN have both loss functions
GAN.summary()
</code></pre>
<p>then I modified it for multi-input:</p>
<pre><code>gan_dataframe_input = Input(shape=Xtrain[1][:-2].shape) #new testing
numpy_input = Input(shape=Xtrain[1][-1].shape)
gan_input = layers.concatenate([gan_dataframe_input, numpy_input])
print(gan_input)
print(mergedLayer)
H = generator([gan_dataframe_input,numpy_input]) <<--two shapes being imputed
gd_input=Concatenate()([gan_input,H]) <<--merged layer + above two shapes being imputed
gan_V = discriminator(gd_input)
GAN = Model(gan_input, [gan_V,H]) <<--this line returns an error
GAN.compile(loss=['categorical_crossentropy','mse'], optimizer=opt) #Complete GAN have both loss functions
GAN.summary()
</code></pre>
<p>Stack trace:</p>
<pre><code>KerasTensor(type_spec=TensorSpec(shape=(None, 736), dtype=tf.float32, name=None), name='concatenate_28/concat:0', description="created by layer 'concatenate_28'")
KerasTensor(type_spec=TensorSpec(shape=(None, 736), dtype=tf.float32, name=None), name='concatenate_27/concat:0', description="created by layer 'concatenate_27'")
WARNING:tensorflow:Functional model inputs must come from `tf.keras.Input` (thus holding past layer metadata), they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to "model_34" was not an Input tensor, it was generated by layer concatenate_28.
Note that input tensors are instantiated via `tensor = tf.keras.Input(shape)`.
The tensor that caused the issue was: concatenate_28/concat:0
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-94-ac83091846e6> in <module>()
69 gd_input=Concatenate()([gan_input,H])
70 gan_V = discriminator(gd_input)
---> 71 GAN = Model(gan_input, [gan_V,H])
72 GAN.compile(loss=['categorical_crossentropy','mse'], optimizer=opt) #Complete GAN have both loss functions
73 GAN.summary()
4 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py in _map_graph_network(inputs, outputs)
988 'The following previous layers '
989 'were accessed without issue: ' +
--> 990 str(layers_with_complete_input))
991 for x in nest.flatten(node.outputs):
992 computable_tensors.add(id(x))
ValueError: Graph disconnected: cannot obtain value for tensor KerasTensor(type_spec=TensorSpec(shape=(None, 659), dtype=tf.float32, name='input_71'), name='input_71', description="created by layer 'input_71'") at layer "concatenate_28". The following previous layers were accessed without issue: []
</code></pre>
<p>Oddly, looking at the full stack trace after I printed data on the layers, it seems the number of items in the arrays aren't aligned? (659,) is the size of one of the inputs, whereas the other is (77,). I'm not sure what I'm doing wrong here. Any suggestions?</p>
|
<p>When you build multi-input/multi-output models, you must compile and feed the model inputs and outputs as lists, instead of concatenating them as you did. Moreover, the inputs of a model must always be <code>tf.keras.layers.Input</code>. So the correct code would be</p>
<pre><code>gan_dataframe_input = Input(shape=Xtrain[1][:-2].shape) #new testing
numpy_input = Input(shape=Xtrain[1][-1].shape)
gan_input = layers.concatenate([gan_dataframe_input, numpy_input])
print(gan_input)
print(mergedLayer)
H = generator([gan_dataframe_input,numpy_input]) <<--two shapes being imputed
gd_input=Concatenate()([gan_input,H]) <<--merged layer + above two shapes being imputed
gan_V = discriminator(gd_input)
GAN = Model([gan_dataframe_input, numpy_input ], [gan_V,H]) <<--this line is modified
GAN.compile(loss=['categorical_crossentropy','mse'], optimizer=opt) #Complete GAN have both loss functions
GAN.summary()
</code></pre>
|
python|numpy|tensorflow|keras|generative-adversarial-network
| 1
|
375,233
| 66,120,794
|
I'm Trying to sort pandas aggregation
|
<p>I'm trying to aggregate and sort data from my dataset, but I don't know how to do. Can someone help me?</p>
<pre><code>data = {'message_id': ['1', '1', '1', '1', '2', '2', '2'],
'to': ['one', 'two', 'three', 'four', 'five', 'six', 'five'],
'idt': ['1','2','3','4','5','6','5']
}
df = pd.DataFrame(data, columns = ['message_id','to','idt'])
agg_func_text = {'to': [ set], 'idt': [ set]}
df.sort_values(by=['message_id', 'to'])
df3=df.groupby(['message_id']).agg(agg_func_text)
</code></pre>
<p>as result:</p>
<pre><code>message_id to set idt set
1 {four, three, one, two} {2, 3, 1, 4}
2 {five, six} {5, 6}
</code></pre>
<p>but I would like to receive this as the result:</p>
<pre><code>message_id to set idt set
1 {one, two, three, four} {1, 2, 3, 4}
2 {five, six} {5, 6}
</code></pre>
|
<p>In Python a <code>set</code> has no defined order, so you cannot sort it or change its ordering. A possible solution is the <code>dict.fromkeys().keys()</code> trick to remove duplicates; the output is then a <code>tuple</code>-like sequence (which can be sorted and has a defined order):</p>
<pre><code>f = lambda x: dict.fromkeys(x).keys()
agg_func_text = {'to': f, 'idt': f}
#if need sorting assign back
df = df.sort_values(by=['message_id', 'idt'])
df3=df.groupby('message_id').agg(agg_func_text)
print (df3)
to idt
message_id
1 (one, two, three, four) (1, 2, 3, 4)
2 (five, six) (5, 6)
</code></pre>
|
python|pandas
| 1
|
375,234
| 66,152,428
|
Python Pandas apply qcut to grouped by level 0 of multi-index in multi-index dataframe
|
<p>I have a multi-index dataframe in pandas (date and entity_id) and for each date/entity I have observations of a number of variables (A, B ...). My goal is to create a dataframe with the same shape but where the values are replaced by their decile scores.</p>
<p>My test data looks like this:</p>
<p><a href="https://i.stack.imgur.com/LaPQU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LaPQU.png" alt="enter image description here" /></a></p>
<p><em><strong>I want to apply qcut to each column grouped by level 0 of the multi-index</strong></em> - the issue I have is creating a result Dataframe</p>
<p>This code</p>
<pre><code>def qcut_sub_index(df_with_sub_index):
# create empty return value same shape as passed dataframe
df_return=pd.DataFrame()
for date, sub_df in df_with_sub_index.groupby(level=0):
df_return=df_return.append(pd.DataFrame(pd.qcut(sub_df, 10, labels=False, duplicates='drop')))
print(df_return)
return df_return
print(df_values.apply(lambda x: qcut_sub_index(x), axis=0))
</code></pre>
<p>returns</p>
<pre><code> A
as_at_date entity_id
2008-01-27 2928 0
2932 3
3083 6
3333 9
2008-02-27 2928 3
2935 9
3333 0
3874 6
2008-03-27 2928 1
2932 2
2934 0
2936 9
2937 4
2939 9
2940 7
2943 3
2944 0
2945 8
2946 6
2947 5
2949 4
B
as_at_date entity_id
2008-01-27 2928 9
2932 6
3083 0
3333 3
2008-02-27 2928 6
2935 0
3333 3
3874 9
2008-03-27 2928 0
2932 9
2934 2
2936 8
2937 7
2939 6
2940 3
2943 1
2944 4
2945 9
2946 5
2947 4
2949 0
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-104-72ff0e6da288> in <module>
11
12
---> 13 print(df_values.apply(lambda x: qcut_sub_index(x), axis=0))
~\Anaconda3\lib\site-packages\pandas\core\frame.py in apply(self, func, axis, raw, result_type, args, **kwds)
7546 kwds=kwds,
7547 )
-> 7548 return op.get_result()
7549
7550 def applymap(self, func) -> "DataFrame":
~\Anaconda3\lib\site-packages\pandas\core\apply.py in get_result(self)
178 return self.apply_raw()
179
--> 180 return self.apply_standard()
181
182 def apply_empty_result(self):
~\Anaconda3\lib\site-packages\pandas\core\apply.py in apply_standard(self)
272
273 # wrap results
--> 274 return self.wrap_results(results, res_index)
275
276 def apply_series_generator(self) -> Tuple[ResType, "Index"]:
~\Anaconda3\lib\site-packages\pandas\core\apply.py in wrap_results(self, results, res_index)
313 # see if we can infer the results
314 if len(results) > 0 and 0 in results and is_sequence(results[0]):
--> 315 return self.wrap_results_for_axis(results, res_index)
316
317 # dict of scalars
~\Anaconda3\lib\site-packages\pandas\core\apply.py in wrap_results_for_axis(self, results, res_index)
369
370 try:
--> 371 result = self.obj._constructor(data=results)
372 except ValueError as err:
373 if "arrays must all be same length" in str(err):
~\Anaconda3\lib\site-packages\pandas\core\frame.py in __init__(self, data, index, columns, dtype, copy)
466
467 elif isinstance(data, dict):
--> 468 mgr = init_dict(data, index, columns, dtype=dtype)
469 elif isinstance(data, ma.MaskedArray):
470 import numpy.ma.mrecords as mrecords
~\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in init_dict(data, index, columns, dtype)
281 arr if not is_datetime64tz_dtype(arr) else arr.copy() for arr in arrays
282 ]
--> 283 return arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype)
284
285
~\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in arrays_to_mgr(arrays, arr_names, index, columns, dtype, verify_integrity)
76 # figure out the index, if necessary
77 if index is None:
---> 78 index = extract_index(arrays)
79 else:
80 index = ensure_index(index)
~\Anaconda3\lib\site-packages\pandas\core\internals\construction.py in extract_index(data)
385
386 if not indexes and not raw_lengths:
--> 387 raise ValueError("If using all scalar values, you must pass an index")
388
389 if have_series:
ValueError: If using all scalar values, you must pass an index
</code></pre>
<p>so something is preventing the second application of the lambda function.</p>
<p>I'd appreciate your help; thanks for taking a look.</p>
<p>P.S. If this can be done implicitly without using apply, I'd love to hear it. Thanks.</p>
|
<p>Your solution appears overcomplicated, and the terminology is non-standard: multi-indexes have <em>levels</em>, so the task is better stated as applying <code>qcut()</code> by level 0 of the multi-index (rather than in terms of sub-frames, which are not a pandas concept).</p>
<p>Bringing it all together:</p>
<ul>
<li>use <code>**kwargs</code> approach to pass arguments to <code>assign()</code> for all columns in data frame</li>
<li><code>groupby(level=0)</code> is <strong>as_of_date</strong></li>
<li><code>transform()</code> to get a row back for every entry in index</li>
</ul>
<pre><code>import datetime as dt
import numpy as np
import pandas as pd

s = 12
df = pd.DataFrame({"as_at_date":np.random.choice(pd.date_range(dt.date(2020,1,27), periods=3, freq="M"), s),
"entity_id":np.random.randint(2900, 3500, s),
"A":np.random.random(s),
"B":np.random.random(s)*(10**np.random.randint(8,10,s))
}).sort_values(["as_at_date","entity_id"])
df = df.set_index(["as_at_date","entity_id"])
df2 = df.assign(**{c:df.groupby(level=0)[c].transform(lambda x: pd.qcut(x, 10, labels=False))
for c in df.columns})
</code></pre>
<h3>df</h3>
<pre><code> A B
as_at_date entity_id
2020-01-31 2926 0.770121 2.883519e+07
2943 0.187747 1.167975e+08
2973 0.371721 3.133071e+07
3104 0.243347 4.497294e+08
3253 0.591022 7.796131e+08
3362 0.810001 6.438441e+08
2020-02-29 3185 0.690875 4.513044e+08
3304 0.311436 4.561929e+07
2020-03-31 2953 0.325846 7.770111e+08
2981 0.918461 7.594753e+08
3034 0.133053 6.767501e+08
3355 0.624519 6.318104e+07
</code></pre>
<h3>df2</h3>
<pre><code> A B
as_at_date entity_id
2020-01-31 2926 7 0
2943 0 3
2973 3 1
3104 1 5
3253 5 9
3362 9 7
2020-02-29 3185 9 9
3304 0 0
2020-03-31 2953 3 9
2981 9 6
3034 0 3
3355 6 0
</code></pre>
|
python|pandas|apply|multi-index
| 2
|
375,235
| 66,218,802
|
how to remove auto indexing in pandas dataframe?
|
<p>How can I remove the automatic indexing in a <code>pandas</code> dataframe? Dropping the index does not work.
So when I use <code>df.iloc[0:4]['col name']</code>, it always returns two columns: one with the actual data that I need and one with the auto row index. How can I get rid of the auto indexing and only return the column that I need?</p>
<p>This is what <code>iloc[0:4]['col name']</code> returns :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>id2</td>
</tr>
<tr>
<td>1</td>
<td>id2</td>
</tr>
<tr>
<td>2</td>
<td>id1</td>
</tr>
<tr>
<td>3</td>
<td>id3</td>
</tr>
</tbody>
</table>
</div>
<p>and below is what I actually want :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>id2</td>
</tr>
<tr>
<td>id2</td>
</tr>
<tr>
<td>id1</td>
</tr>
<tr>
<td>id3</td>
</tr>
</tbody>
</table>
</div>
|
<p>If you work in JupyterLab / Jupyter Notebook, use the command</p>
<pre><code>df.iloc[0:4]['col name'].to_frame().style.hide_index()
</code></pre>
<hr />
<p><em>The explanation:</em></p>
<p>Dataframes <strong>always</strong> have an index, and there is no way of how to remove it, because it is a <em>core part</em> of every dataframe.</p>
<p>(<code>df.iloc[0:4]['col name']</code> is a pandas Series, which also carries an index; convert it with <code>.to_frame()</code> to use the Styler, because only DataFrames have <code>.style</code>.)</p>
<p>You can only <em>hide</em> it in your output.</p>
<p>For example in JupyterLab (or Jupyter Notebook) you may display your dataframe (<code>df</code>) without index using the command</p>
<pre><code>df.style.hide_index() # in your case: df.iloc[0:4]['col name'].to_frame().style.hide_index()
</code></pre>
<p>Be careful - <code>df.style.hide_index()</code> is NOT a dataframe (it is a <code>Styler</code> object), so don't assign it back to <code>df</code>:</p>
<pre><code> # df = df.style.hide_index() # Don't do it, never!
</code></pre>
<hr />
<p>An example of a dataframe output — at first with the index, then without it:</p>
<p><a href="https://i.stack.imgur.com/9ffVH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9ffVH.jpg" alt="enter image description here" /></a></p>
|
python|python-3.x|pandas|dataframe
| 1
|
375,236
| 65,924,250
|
Apply a function to specific rows of a Numpy array
|
<p>Let's say I have a 2 dimensional array, a function, and a "mask" of specific rows, as below:</p>
<pre><code>my_array = np.array([[0,1],[2,3],[4,5],[6,7]])
my_mask = np.array([0,1,0,1])
my_func = lambda x: x * 2
</code></pre>
<p>How can I apply this function to the rows of the array that are true in the mask? I.e. for the example above, the result would be:</p>
<pre><code>array([[0,1],[4,6],[4,5],[12,14]])
</code></pre>
|
<p>You can use boolean indexing:</p>
<pre><code>mask = my_mask==1
my_array[mask] = my_func(my_array[mask])
</code></pre>
<p>Output:</p>
<pre><code>array([[ 0, 1],
[ 4, 6],
[ 4, 5],
[12, 14]])
</code></pre>
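<p>An equivalent one-liner with <code>np.where</code> is sketched below; note that it evaluates <code>my_func</code> on every element and then keeps only the masked rows, which is fine for cheap functions:</p>
<pre><code># broadcast the (4,) mask against the (4, 2) array row-wise
result = np.where(my_mask[:, None] == 1, my_func(my_array), my_array)
</code></pre>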
|
python|arrays|numpy
| 1
|
375,237
| 66,027,895
|
How to replace only NaN values in a column, with a specific function?
|
<p>I have a dataframe that looks like this:</p>
<pre><code>article_id title
NaN title_1
NaN title_2
NaN title_3
'202102011404103' title_4
'202102011404104' title_5
NaN title_6
</code></pre>
<p>I would like to apply something like this code to the NaN values in the article_id column:</p>
<pre><code>from datetime import datetime
date = datetime.strftime(datetime.now(), "%Y%m%d%H%M")
df['article_id'] = [int(date + str("0"*(3-len(str(i)))) + str(i)) + 1 for i, k in df.reset_index().iterrows()]
</code></pre>
<p><strong>Instead of <code>datetime.now()</code></strong> I would like to start at the 1st of January, i.e. have the variable <code>date = '202101011348'</code>, for example.</p>
<p>In the final result I would like the <code>article_id</code> values to have the same length as in rows 4 and 5 and to start from that precise date (202101011348).</p>
<p>I thought of doing this:</p>
<pre><code>df[df['article_id'].isna()]
</code></pre>
<p>And then apply the code above.</p>
<p>Expected output:</p>
<pre><code>article_id title
'202101011404106' title_1
'202101011404107' title_2
'202101011404108' title_3
'202102011404103' title_4
'202102011404104' title_5
'202101011404109' title_6
</code></pre>
<p>But how do I apply this directly to the df, only to the NaN values in the article_id column?</p>
|
<p>You can use <code>apply</code> and <code>lambda</code> to achieve your goal.</p>
<p>Here I'm applying the <code>now()</code> function to <code>NaN</code> but it can be any method you want.</p>
<pre><code>import pandas as pd
import numpy as np
from datetime import datetime
df = pd.DataFrame({
"article_id": [np.NaN, np.NaN, np.NaN, "202101011212", "202101011313"],
"title": ["title_1", "title_2", "title_3", "title_4", "title_5"]
})
|------------------------------------------|
| | article_id | title |
|---|----------------------------|---------|
| 0 | NaN | title_1 |
| 1 | NaN | title_2 |
| 2 | NaN | title_3 |
| 3 | 202101011212 | title_4 |
| 4 | 202101011313 | title_5 |
|------------------------------------------|
df["article_id"] = df3["article_id"].apply(lambda x: datetime.now() if pd.isna(x) else x)
|------------------------------------------|
| | article_id | title |
|---|----------------------------|---------|
| 0 | 2021-02-03 13:16:29.438263 | title_1 |
| 1 | 2021-02-03 13:16:29.438269 |title_2 |
| 2 | 2021-02-03 13:16:29.438270 |title_3 |
| 3 | 202101011212 |title_4 |
| 4 | 202101011313 | title_5 |
|------------------------------------------|
</code></pre>
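<p>To stay closer to the question's goal (ids built from a fixed start date and filled only where <code>article_id</code> is NaN), a small sketch along these lines could work; the zero-padded counter is an assumption modelled on the expected output:</p>
<pre><code>date = '202101011348'                      # fixed start instead of datetime.now()
mask = df['article_id'].isna()
df.loc[mask, 'article_id'] = [date + str(i).zfill(3) for i in range(1, mask.sum() + 1)]
</code></pre>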
|
python|pandas
| 2
|
375,238
| 66,219,760
|
How to remove words that contain a special character in Python
|
<p>I have a question for you! How do I remove all words containing @, such as @AmericanVirgin? When I do</p>
<pre class="lang-py prettyprint-override"><code>df['text'] = df ['text'].Str.replace('@', '') only removes @
</code></pre>
<p>This is my dataframe :</p>
<pre class="lang-none prettyprint-override"><code>weet_id airline_sentiment text Raiting
0 570306133677760513 neutral @VirginAmerica What @dhepburn said. 2
1 570301130888122368 positive @VirginAmerica plus you've added commercials t... 3
2 570301083672813571 neutral @VirginAmerica I didn't today... Must mean I n... 2
</code></pre>
<p>This is the output after the answer:
<a href="https://i.stack.imgur.com/wWXQR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wWXQR.png" alt="enter image description here" /></a></p>
|
<p>Input :</p>
<pre><code>tweet_id airline sentiment \
0 0 570306133677760513 neutral
1 1 570301130888122368 positive
2 2 570301083672813571 neutral
text Raiting \
0 @VirginAmerica What @dhepburn said 2
1 @VirginAmerica plus youve added commercials 3
2 @VirginAmerica I didn't today 0
</code></pre>
<pre><code># Using regex, removing words starting with @
dframe["text_corrected"] = dframe.text.str.replace("@\w+\s", "")
</code></pre>
<p>Output :</p>
<pre><code>tweet_id airline sentiment \
0 0 570306133677760513 neutral
1 1 570301130888122368 positive
2 2 570301083672813571 neutral
text Raiting \
0 @VirginAmerica What @dhepburn said 2
1 @VirginAmerica plus youve added commercials 3
2 @VirginAmerica I didn't today 0
text_corrected
0 What said
1 plus youve added commercials
2 I didn't today
</code></pre>
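<p>Note that in recent pandas versions <code>str.replace</code> treats the pattern as a literal string unless <code>regex=True</code> is passed explicitly, so a safer variant of the call above would be (using <code>\s*</code> so a handle at the very end of the text is removed too):</p>
<pre><code>dframe['text_corrected'] = dframe['text'].str.replace(r'@\w+\s*', '', regex=True)
</code></pre>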
|
python|pandas|dataframe|for-loop
| 0
|
375,239
| 66,314,087
|
Using Pandas groupby in a user-defined function: why can't I use aggregation functions with groupby?
|
<p>I defined the following user function:</p>
<pre><code>def group_by(df, columns):
x = df.groupby(columns).sum()
x = x.reset_index()
return x
</code></pre>
<p>It works!</p>
<p>But the following does not! :</p>
<pre><code>def group_by(df, columns, aggfunc=sum):
x = df.groupby(columns).aggfunc()
x = x.reset_index()
return x
</code></pre>
<p>I get the error:</p>
<pre><code>AttributeError: 'DataFrameGroupBy' object has no attribute 'aggfunc'
</code></pre>
<p>Any ideas why?</p>
|
<p>When you write <code>df.groupby(columns).aggfunc()</code>, you are making a method call on the <a href="https://pandas.pydata.org/docs/reference/groupby.html" rel="nofollow noreferrer">groupby object</a>; Python therefore looks for a method literally named <code>aggfunc</code> on that object. As this method does not exist, it throws an AttributeError. The fact that you have <code>aggfunc</code> as an input parameter of your function doesn't affect this lookup.</p>
<p>You can, however, use the <a href="https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.core.groupby.DataFrameGroupBy.agg.html" rel="nofollow noreferrer">.agg()</a> method, to which you can pass the local variable <code>aggfunc</code> you have defined as an input parameter.</p>
<pre><code>x = df.groupby(columns).agg(aggfunc)
</code></pre>
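<p>For illustration, a minimal sketch of the corrected function (assuming <code>aggfunc</code> is passed either as a string such as <code>'sum'</code> or as a callable):</p>
<pre><code>import pandas as pd

def group_by(df, columns, aggfunc='sum'):
    # .agg() accepts a string name or a callable, so the parameter can be forwarded
    x = df.groupby(columns).agg(aggfunc)
    return x.reset_index()

df = pd.DataFrame({'a': ['x', 'x', 'y'], 'b': [1, 2, 3]})
print(group_by(df, 'a'))           # default: sum
print(group_by(df, 'a', 'mean'))   # named aggregation
</code></pre>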
|
python|pandas|group-by
| 1
|
375,240
| 66,304,452
|
How to feed images into a CNN for binary classification
|
<p>I am trying to create a convolutional neural network that can detect whether or not a person is having a stroke, based upon a picture of their face. The images for my dataset are contained within a directory called <em>CNNImages</em>, which contains two subdirectories: <em>Strokes</em> and <em>RegularFaces</em>. Each subdirectory contains jpg images that I'm trying to feed into my neural network.</p>
<p>Following the approach used in <a href="https://towardsdatascience.com/build-your-own-convolution-neural-network-in-5-mins-4217c2cf964f" rel="nofollow noreferrer">this tutorial</a>, I have created the CNN, which works when fed with the MNIST dataset. However, I am having trouble feeding my own images into the neural network. I have been using the code shown by the <a href="https://keras.io/api/preprocessing/image/" rel="nofollow noreferrer">Keras tutorial</a> for Image data preprocessing, but it isn't working.</p>
<pre><code>import tensorflow as tf
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
import numpy as np
dataset = tf.keras.preprocessing.image_dataset_from_directory(
'C:\\Users\\Colin\\CNNImages',
labels="inferred",
label_mode="int",
class_names=None,
color_mode="rgb",
batch_size=32,
image_size=(128, 128),
shuffle=True,
seed=1,
validation_split=0.2,
subset="training",
interpolation="bilinear",
follow_links=False,
)
</code></pre>
<p>When I try to feed this dataset into my neural network using <code>(x_train, y_train), (x_test, y_test) = dataset</code>, I receive the following error:</p>
<pre><code>ValueError: too many values to unpack (expected 2)
</code></pre>
<p>I've included my attempt at a neural network below.</p>
<pre><code>batch_size = 128
num_classes = 2
epochs = 12
# input image dimensions
img_rows, img_cols = 128, 128
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = dataset
x_train = x_train.reshape(869,128,128,3)
x_test = x_test.reshape(217,128,128,3)
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=(28,28,1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
</code></pre>
<p>I believe I am importing the images incorrectly into the CNN, but am unsure of how to fix this. What would be the solution to getting the images to import correctly?</p>
<p>Edit: Below is my updated code attempt. It is unable to function, due to <code>(x_train, y_train), (x_test, y_test) = train_ds</code> returning <code>ValueError: too many values to unpack (expected 2)</code></p>
<pre><code>import tensorflow as tf
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
import numpy as np
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
'C:\\Users\\Colin\\Desktop\\CNNImages\\Training',
validation_split=None,
subset=None,
seed=123,
image_size=(128, 128),
batch_size=32)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
'C:\\Users\\Colin\\Desktop\\CNNImages\\Validation',
validation_split=None,
subset=None,
seed=123,
image_size=(128, 128),
batch_size=32)
batch_size = 128
num_classes = 2
epochs = 12
# input image dimensions
img_rows, img_cols = 128, 128
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = train_ds
x_train = x_train.reshape(869,128,128,3)
x_test = x_test.reshape(217,128,128,3)
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=(28,28,1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.fit(
train_ds,
validation_data=val_ds,
epochs=3
)
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
</code></pre>
|
<p>The <code>(x_train, y_train), (x_test, y_test) = dataset</code> part of the code raises the error because, when you use <code>tf.keras.preprocessing.image_dataset_from_directory()</code>, it returns batches of images; <strong>it does not split</strong> your data into a train set and a test set. So you need to declare them separately for train and test:</p>
<pre><code># first-approach
train_dataset = tf.keras.preprocessing.image_dataset_from_directory(train_folder, ...)
test_dataset = tf.keras.preprocessing.image_dataset_from_directory(test_folder, ...)
# second approach
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
model.fit(
train_ds,
validation_data=val_ds,
epochs=3
)
</code></pre>
|
python|tensorflow|keras|neural-network|conv-neural-network
| 0
|
375,241
| 66,175,410
|
Generating data frames, but only getting 1 row
|
<p>I wish to create a dataframe using the faker library in Python, but I am only able to get a single row and I don't understand what the issue in the code is. Here it is:</p>
<pre><code>import pandas as pd
for dat in range(int(input())):
dat = [[fake.email(),fake.phone_number(),fake.address(),fake.name(),fake.date(),fake.pyint(0,3)]]
v = pd.DataFrame(dat, columns=['Email','PhNo','Address','Name','Date','Children'])
</code></pre>
<p>If I look at the columns in <code>v</code> by printing or with the head function, I can see only 1 row.
I wish to see the full number of rows based on the user input; say if it's 3, then 3 rows should be there.</p>
|
<p>You overwrite <code>dat</code> in each loop. You need to append the new data to the existing:</p>
<pre><code>dat = []
for _ in range(int(input())):
dat.append([fake.email(), fake.phone_number(), fake.address(), fake.name(), fake.date(), fake.pyint(0,3)])
</code></pre>
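<p>Putting it together, a minimal runnable sketch (assuming the Faker library is installed; the column names mirror the question):</p>
<pre><code>import pandas as pd
from faker import Faker

fake = Faker()

dat = []
for _ in range(int(input())):
    dat.append([fake.email(), fake.phone_number(), fake.address(),
                fake.name(), fake.date(), fake.pyint(0, 3)])

# Build the frame once, after the loop, so every generated row is kept
v = pd.DataFrame(dat, columns=['Email', 'PhNo', 'Address', 'Name', 'Date', 'Children'])
print(v)
</code></pre>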
|
python|pandas|dataframe|faker
| 1
|
375,242
| 66,117,980
|
numpy: Randomly split array into 3 not equal parts
|
<p>Is there any way I can split an array into 3 unequal parts, with no duplicates? For example</p>
<p>array1 = 70% of the elements of the array</p>
<p>array2 = 10% of the elements of the array</p>
<p>array3 = 20% of the elements of the array</p>
<p>but without taking the same element twice?</p>
<p>Thank you!</p>
|
<p>If you are okay with losing the original order of the array, you can simply randomly shuffle the array, and then split the array as you desire.</p>
<pre><code>a = np.arange(100) # Example array.
split1 = int(0.7 * len(a))
split2 = int(0.8 * len(a))
np.random.shuffle(a)
p1 = a[:split1]
p2 = a[split1:split2]
p3 = a[split2:]
</code></pre>
<p>If the order of the array (let's call it <code>b</code>) must be preserved, you can do the above using <code>a = np.arange(len(b))</code>, and then:</p>
<pre><code>p1 = np.sort(p1)   # np.sort returns a sorted copy, so assign it back
p2 = np.sort(p2)
p3 = np.sort(p3)
bp1 = b[p1]
bp2 = b[p2]
bp3 = b[p3]
</code></pre>
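<p>Equivalently, a compact sketch using <code>np.random.permutation</code> and <code>np.split</code>, with the question's 70/10/20 proportions:</p>
<pre><code>import numpy as np

a = np.arange(100)  # example array
p1, p2, p3 = np.split(np.random.permutation(a),
                      [int(0.7 * len(a)), int(0.8 * len(a))])
</code></pre>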
|
python-3.x|numpy|random
| 0
|
375,243
| 66,317,262
|
Pandas: df (dataframe) is not defined
|
<p>I'm trying to load and edit a dataframe from a <code>xlsx</code> file. The file is located in the path which I defined in the variable <code>einlesen</code>. As soon as the bug is fixed, I want to delete a row and save the new dataframe in a new <code>xlsx</code> file in a specific path.</p>
<pre><code>import os
import re
import pandas as pd
import glob
import time
def setwd():
from pathlib import Path
import os
home = str(Path.home())
os.chdir(home + r'\...\...\Staffing Report\Input\...\Raw_Data')
latest = home + r'\...\...\Staffing Report\Input\MyScheduling\Raw_Data'
folders = next(os.walk(latest))[1]
creation_times = [(folder, os.path.getctime(folder)) for folder in folders]
creation_times.sort(key=lambda x: x[1])
most_recent = creation_times[-1][0]
print('test' + most_recent)
os.chdir(latest + '\\' + most_recent + '\\')
print('current cwd is: ' + os.getcwd())
save_dir = home + '\...\...\Staffing Report\Input\MyScheduling\Individual Status All\PBI\\' + 'Individual_Status.xlsx'
def rowdrop():
einlesen = os.getcwd()
print('test einlesen: ' + einlesen)
df = pd.DataFrame()
df = pd.read_excel('Individual Status.xls', sheet_name = 'Individual Status Raw Data')
df = pd.DataFrame(df)
#main
setwd()
rowdrop()
df.to_excel(save_dir, index = False)
print(df)
</code></pre>
<p>When I try to run the code, it always states:</p>
<pre><code>---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-92-060708f6b065> in <module>
2 rowdrop()
3
----> 4 df.to_excel(save_dir, index = False)
5
6 print(df)
NameError: name 'df' is not defined
</code></pre>
|
<p>You get the error because you only defined <code>df</code> inside the <code>rowdrop</code> function; variables defined inside a function can only be accessed inside that function unless you do something to change that.</p>
<p>Change your function to return the <code>df</code>:</p>
<pre><code>def rowdrop():
einlesen = os.getcwd()
print('test einlesen: ' + einlesen)
df = pd.DataFrame()
df = pd.read_excel('Individual Status.xls', sheet_name = 'Individual Status Raw Data')
df = pd.DataFrame(df)
return df
</code></pre>
<p>And assign the returned value of the function call to a variable:</p>
<pre><code>df = rowdrop()
</code></pre>
<hr />
<p>Another way, which is considered bad practice, is to use the <code>global</code> keyword to make the <code>df</code> variable global:</p>
<pre><code>def rowdrop():
global df
einlesen = os.getcwd()
print('test einlesen: ' + einlesen)
df = pd.DataFrame()
df = pd.read_excel('Individual Status.xls', sheet_name = 'Individual Status Raw Data')
df = pd.DataFrame(df)
</code></pre>
<p>With the above method, you won't need to assign the function call to a variable, but please do not use that method, see <a href="https://stackoverflow.com/q/19158339/13552470">Why are global variables evil?</a></p>
|
python|pandas|function|dataframe|nameerror
| -1
|
375,244
| 65,958,671
|
Why does my model learn with Ragged Tensors but not Dense Tensors?
|
<p>I have a string of letters that follow a "grammar." I also have boolean labels on my training set of whether the string follows "the grammar" or not. Basically, my model is trying to learn determine if a string of letters follows the rules. It's a fairly simple problem (I got it out of a textbook).</p>
<p>I am generating my dataset like this:</p>
<pre><code>def generate_dataset(size):
good_strings = [string_to_ids(generate_string(embedded_reber_grammar))
for _ in range(size // 2)]
bad_strings = [string_to_ids(generate_corrupted_string(embedded_reber_grammar))
for _ in range(size - size // 2)]
all_strings = good_strings + bad_strings
X = tf.ragged.constant(all_strings, ragged_rank=1)
# X = X.to_tensor(default_value=0)
y = np.array([[1.] for _ in range(len(good_strings))] +
[[0.] for _ in range(len(bad_strings))])
return X, y
</code></pre>
<p>Notice the line <code>X = X.to_tensor(default_value=0)</code>. If this line is commented out, my model learns just fine. However, if it is not commented out, it fails to learn and the validation set performs the same as chance (50-50).</p>
<p>Here is my actual model:</p>
<pre><code>np.random.seed(42)
tf.random.set_seed(42)
embedding_size = 5
model = keras.models.Sequential([
keras.layers.InputLayer(input_shape=[None], dtype=tf.int32, ragged=True),
keras.layers.Embedding(input_dim=len(POSSIBLE_CHARS) + 1, output_dim=embedding_size),
keras.layers.GRU(30),
keras.layers.Dense(1, activation="sigmoid")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum = 0.95, nesterov=True)
model.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=5, validation_data=(X_valid, y_valid))
</code></pre>
<p>I am using <code>0</code> as the default value for the dense tensors. The <code>strings_to_ids</code> doesn't use 0 for any of the values but instead starts at 1. Also, when I switch to using a Dense tensor I change <code>ragged=True</code> to <code>False.</code> I have no idea why using a dense tensor causes the model to fail, as I've used dense tensors before in similar exercises.</p>
<p>For additional details, see the solution from the book (<a href="https://github.com/ageron/handson-ml2/blob/master/16_nlp_with_rnns_and_attention.ipynb" rel="nofollow noreferrer">exercise 8</a>) or my own colab <a href="https://colab.research.google.com/drive/13Za1YeCO8uEiyqcaUyWKm6zErOaWwpQC?usp=sharing" rel="nofollow noreferrer">notebook</a>.</p>
|
<p>So it turns out the answer was that the shape of the dense tensor was different across the training set and validation set. This was because the longest sequence differed in length between the two sets (and likewise for the test set).</p>
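<p>A hedged sketch of one way to avoid the mismatch is to pad both sets to a shared maximum length before converting to dense tensors (the variable names follow the question's code):</p>
<pre><code># Pad the ragged train/validation sets to one common width
max_len = max(int(X_train.bounding_shape(axis=1)),
              int(X_valid.bounding_shape(axis=1)))
X_train_dense = X_train.to_tensor(default_value=0, shape=[None, max_len])
X_valid_dense = X_valid.to_tensor(default_value=0, shape=[None, max_len])
</code></pre>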
|
tensorflow|machine-learning|keras|machine-learning-model
| 0
|
375,245
| 66,097,006
|
How to concatenate key values of JSON object stored in pandas dataframe cell into a string per row?
|
<h3>My question is:</h3>
<p>how to concatenate key values of JSON object stored in pandas dataframe cell into a string per row? Sorry, I feel my problem is pretty straight-forward but I cannot find a good way to phrase it.</p>
<h3>My context is:</h3>
<p>Let's say I have a pandas dataframe, df, that contains a column named "participants". The cell values are JSON objects, like this for instance:</p>
<pre><code>df['participants'][0] == df.participants[0] ==
[{'participantId': 1,
'championId': 7 },
{'participantId': 2,
'championId': 350 },
{'participantId': 3,
'championId': 266 },
{'participantId': 4,
'championId': 517 },
{'participantId': 5,
'championId': 110, },
...
...
{'participantId': 10,
'championId': 10 }]
</code></pre>
<p><code>df.participants[1]</code> would include totally different information, with the same structure. If anybody's interested, this is part of what the League of Legends RiotWatcher python API spits out for per-game data.</p>
<p>My goal is to, for each <code>participantId</code>, <em>concatenate</em> that into a single string per row in our df, such that we have a new column 'x' that contains a <strong>string</strong> <code>'7, 350, 266, 517, 110'</code> for each row depending on whatever is in the participants column.</p>
<h3>My working solutions are:</h3>
<pre><code>for i in range(0, 20): #range of however many rows we have in dataframe, assume 20
y = ''
for j in range(0, 10): #there are always ten participants
this_champion_id = str(df_d1['participants'][i][j].get('championId'))
y += ' '+this_champion_id
df_d1['x'] = y
</code></pre>
<p>(<em>Sidenote</em>: I am avoiding using lists, because I've read lists are not vectorized in pandas, which means they are slower. That's why I am using a string here.)</p>
<p>However, as my data is about 100k rows long, this feels like it's not the fastest solution, especially since I think nested for loops are slower right?</p>
<p>Would it be possible to do something like</p>
<p><code>df['x'] = [str(df_d1['participants'][key][value].get('championId') for key, value in df['participants']]</code> ?</p>
<p>I am thinking a way of using a single for loop would be by leveraging the json library, like:</p>
<pre><code>for i in range(0, 20):
x = str(pd.json_normalize(df_d1.participants[i])['championId'].values)
df['x'] = x
</code></pre>
<p>Has anybody run into something similar? Did you find a painless solution to this problem? My solutions are taking some time to run.</p>
<p>Thank you!</p>
|
<pre class="lang-py prettyprint-override"><code>In [16]: df['x'] = df['participants'].map(lambda x: ', '.join(str(i['participantId']) for i in x))
...: print(df['participants'][0])
...: print(df['x'][0])
...:
[{'participantId': 1, 'championId': 7}, {'participantId': 2, 'championId': 350}, {'participantId': 3, 'championId': 266}]
1, 2, 3
</code></pre>
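<p>If the championId string from the question's example is the actual goal, the same pattern applies with the key swapped (a small sketch, not part of the output above):</p>
<pre><code>df['x'] = df['participants'].map(lambda row: ', '.join(str(p['championId']) for p in row))
# e.g. '7, 350, 266' for the three-participant example above
</code></pre>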
|
python|json|pandas|dataframe
| 1
|
375,246
| 65,964,411
|
How to write a earlystopping function in Python
|
<p>My loss is like this:</p>
<pre class="lang-py prettyprint-override"><code>loss = np.sum(np.square(A-B))
</code></pre>
<p>How can I write a helper function that carries out "early stopping" like in Keras?</p>
<p>Purpose:</p>
<p>If the loss is rising, or does not fluctuate much, then we stop and get <code>A</code> and <code>B</code>.</p>
|
<p>I looked into the Keras sources and found the code for EarlyStopping. I made my own callback based on it:</p>
<pre><code>import warnings
from keras.callbacks import Callback

class EarlyStoppingByLossVal(Callback):
    def __init__(self, monitor='val_loss', value=0.00001, verbose=0):
        super(EarlyStoppingByLossVal, self).__init__()
        self.monitor = monitor
        self.value = value
        self.verbose = verbose

    def on_epoch_end(self, epoch, logs={}):
        current = logs.get(self.monitor)
        if current is None:
            warnings.warn("Early stopping requires %s available!" % self.monitor, RuntimeWarning)
            return
        if current < self.value:
            if self.verbose > 0:
                print("Epoch %05d: early stopping THR" % epoch)
            self.model.stop_training = True
</code></pre>
<p>And usage:</p>
<pre><code>callbacks = [
EarlyStoppingByLossVal(monitor='val_loss', value=0.00001, verbose=1),
# EarlyStopping(monitor='val_loss', patience=2, verbose=0),
ModelCheckpoint(
kfold_weights_path, monitor='val_loss', save_best_only=True,
verbose=0
)
]
model.fit(
X_train.astype('float32'), Y_train,
batch_size=batch_size, nb_epoch=nb_epoch,
shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),
callbacks=callbacks
)
</code></pre>
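<p>For the plain-NumPy setting described in the question (no Keras model involved), a minimal patience-based sketch could look like the following; <code>update_step</code> and the tolerance/patience values are hypothetical placeholders:</p>
<pre><code>import numpy as np

def fit_with_early_stopping(A, B, update_step, max_iter=1000, patience=10, min_delta=1e-6):
    """Stop when the loss has not improved by min_delta for `patience` consecutive steps."""
    best_loss = np.inf
    wait = 0
    for _ in range(max_iter):
        A, B = update_step(A, B)            # hypothetical optimisation step
        loss = np.sum(np.square(A - B))
        if best_loss - loss > min_delta:    # meaningful improvement
            best_loss, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:            # rising or flat loss -> stop
                break
    return A, B
</code></pre>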
|
python-3.x|numpy|machine-learning|neural-network
| 0
|
375,247
| 66,117,429
|
Replacing data in column with mean value of corresponding bin?
|
<p>I make bins out of my column using pandas' <code>pd.qcut()</code>. I would then like to apply smoothing, replacing each value with its bin's mean value.</p>
<p>I generate my bins with something like</p>
<pre class="lang-py prettyprint-override"><code>pd.qcut(col, 3)
</code></pre>
<p>For example, given the column values <code>[4, 8, 15, 21, 21, 24, 25, 28, 34]</code>
and the generated bins</p>
<pre><code>Bin1 [4, 15]: 4, 8, 15
Bin2 [21, 24]: 21, 21, 24
Bin3 [25, 34]: 25, 28, 34
</code></pre>
<p>I would like to replace the values with the following means</p>
<pre><code>Mean of Bin1 (4, 8, 15) = 9
Mean of Bin2 (21, 21, 24) = 22
Mean of Bin3 (25, 28, 34) = 29
</code></pre>
<p>Therefore:</p>
<pre><code>Bin1: 9, 9, 9
Bin2: 22, 22, 22
Bin3: 29, 29, 29
</code></pre>
<p>making the final dataset: <code>[9, 9, 9, 22, 22, 22, 29, 29, 29]</code></p>
<p>How can one also add a column with closest bin boundaries?</p>
<pre><code>Bin1: 4, 4, 15
Bin2: 21, 21, 24
Bin3: 25, 25, 34
</code></pre>
<p>making the final dataset: <code>[4, 4, 15, 21, 21, 24, 25, 25, 34]</code></p>
<p>very similar to <a href="https://stackoverflow.com/questions/17599429/binning-and-naming-new-columns-with-mean-of-binned-columns">this question</a> which is for R</p>
|
<p>It's exactly as you laid out, using <a href="https://stackoverflow.com/questions/12141150/from-list-of-integers-get-number-closest-to-a-given-value">this technique</a> to get the nearest boundary:</p>
<pre><code>df = pd.DataFrame({"col":[4, 8, 15, 21, 21, 24, 25, 28, 34]})
df2 = df.assign(bin=pd.qcut(df.col, 3),
colbmean=lambda dfa: dfa.groupby("bin").transform("mean"),
colbin=lambda dfa: dfa.apply(lambda r: min([r.bin.left,r.bin.right], key=lambda x: abs(x-r.col)), axis=1))
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">col</th>
<th style="text-align: left;">bin</th>
<th style="text-align: right;">colbmean</th>
<th style="text-align: right;">colbin</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">4</td>
<td style="text-align: left;">(3.999, 19.0]</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">3.999</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">8</td>
<td style="text-align: left;">(3.999, 19.0]</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">3.999</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">15</td>
<td style="text-align: left;">(3.999, 19.0]</td>
<td style="text-align: right;">9</td>
<td style="text-align: right;">19</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">21</td>
<td style="text-align: left;">(19.0, 24.333]</td>
<td style="text-align: right;">22</td>
<td style="text-align: right;">19</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: right;">21</td>
<td style="text-align: left;">(19.0, 24.333]</td>
<td style="text-align: right;">22</td>
<td style="text-align: right;">19</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td style="text-align: right;">24</td>
<td style="text-align: left;">(19.0, 24.333]</td>
<td style="text-align: right;">22</td>
<td style="text-align: right;">24.333</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td style="text-align: right;">25</td>
<td style="text-align: left;">(24.333, 34.0]</td>
<td style="text-align: right;">29</td>
<td style="text-align: right;">24.333</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td style="text-align: right;">28</td>
<td style="text-align: left;">(24.333, 34.0]</td>
<td style="text-align: right;">29</td>
<td style="text-align: right;">24.333</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td style="text-align: right;">34</td>
<td style="text-align: left;">(24.333, 34.0]</td>
<td style="text-align: right;">29</td>
<td style="text-align: right;">34</td>
</tr>
</tbody>
</table>
</div>
|
python|pandas
| 2
|
375,248
| 66,251,848
|
Dimensions Don't Match for Decoder in Tensorflow Tutorial
|
<p>I am following the Convolutional Autoencoder tutorial for tensorflow, using tensorflow 2.0 and keras, found <a href="https://www.tensorflow.org/tutorials/generative/autoencoder" rel="nofollow noreferrer">here</a>.</p>
<p>Using the provided code for building a CNN, but adding one more convolutional layer to both the encoder and decoder causes the code to break:</p>
<pre class="lang-py prettyprint-override"><code>class Denoise(Model):
def __init__(self):
super(Denoise, self).__init__()
self.encoder = tf.keras.Sequential([
layers.Input(shape=(28, 28, 1)),
layers.Conv2D(16, (3,3), activation='relu', padding='same', strides=2),
layers.Conv2D(8, (3,3), activation='relu', padding='same', strides=2),
## New Layer ##
layers.Conv2D(4, (3,3), activation='relu', padding='same', strides=2)
## --------- ##
])
self.decoder = tf.keras.Sequential([
## New Layer ##
layers.Conv2DTranspose(4, kernel_size=3, strides=2, activation='relu', padding='same'),
## --------- ##
layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
layers.Conv2D(1, kernel_size=(3,3), activation='sigmoid', padding='same')
])
def call(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
autoencoder = Denoise()
</code></pre>
<p>Running <code>autoencoder.encoder.summary()</code> and <code>autoencoder.decoder.summary()</code>, I can see this is a shape issue:</p>
<pre><code>Encoder:
Layer (type) Output Shape Param #
=================================================================
conv2d_124 (Conv2D) (None, 14, 14, 16) 160
_________________________________________________________________
conv2d_125 (Conv2D) (None, 7, 7, 8) 1160
_________________________________________________________________
conv2d_126 (Conv2D) (None, 4, 4, 4) 292
=================================================================
Total params: 1,612
Trainable params: 1,612
Non-trainable params: 0
_________________________________________________________________
Decoder:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_transpose_77 (Conv2DT (32, 8, 8, 4) 148
_________________________________________________________________
conv2d_transpose_78 (Conv2DT (32, 16, 16, 8) 296
_________________________________________________________________
conv2d_transpose_79 (Conv2DT (32, 32, 32, 16) 1168
_________________________________________________________________
conv2d_127 (Conv2D) (32, 32, 32, 1) 145
=================================================================
Total params: 1,757
Trainable params: 1,757
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<p>Why is the leading dimension on the decoding side <code>32</code>? Why wouldn't the dimensions of the incoming layer be <code>None, 4, 4, 4</code> if the inputs are passed from the encoder? How do I fix this?</p>
<p>Thank you in advance for your help with this!</p>
|
<p>Remove <code>strides=2</code> from your last encoder layer and add <code>strides=2</code> to your last decoder layer, so that the number of downsampling steps matches the number of upsampling steps and the output comes back to 28×28. (The leading 32 in the decoder summary is simply the batch size that was recorded when the model was first called on a batch of data.)</p>
<pre><code>from tensorflow.keras import layers
from tensorflow.keras import Model
class Denoise(Model):
def __init__(self):
super(Denoise, self).__init__()
self.encoder = tf.keras.Sequential([
layers.Input(shape=(28, 28, 1)),
layers.Conv2D(16, (3,3), activation='relu', padding='same', strides=2),
layers.Conv2D(8, (3,3), activation='relu', padding='same', strides=2),
## New Layer ##
layers.Conv2D(4, (3,3), activation='relu', padding='same')
## --------- ##
])
self.decoder = tf.keras.Sequential([
## New Layer ##
layers.Conv2DTranspose(4, kernel_size=3, strides=2, activation='relu', padding='same'),
## --------- ##
layers.Conv2DTranspose(8, kernel_size=3, strides=2, activation='relu', padding='same'),
layers.Conv2DTranspose(16, kernel_size=3, strides=2, activation='relu', padding='same'),
layers.Conv2D(1, kernel_size=(3,3), activation='sigmoid', padding='same', strides=2)
])
def call(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
autoencoder = Denoise()
autoencoder.build(input_shape=(1, 28, 28, 1))
autoencoder.summary()
</code></pre>
|
python|tensorflow|keras|autoencoder|dimensions
| 1
|
375,249
| 65,997,499
|
Python statsmodel outpput and Excel/Google Sheet output doesn't match
|
<p>I have a small dataset and, for some reason, the output doesn't match Excel's.</p>
<p>Here's what I did. I have two columns:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Miles Traveled</th>
<th>Travel Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>89</td>
<td>7.0</td>
</tr>
<tr>
<td>66</td>
<td>5.4</td>
</tr>
<tr>
<td>78</td>
<td>6.6</td>
</tr>
<tr>
<td>111</td>
<td>7.4</td>
</tr>
<tr>
<td>44</td>
<td>4.8</td>
</tr>
<tr>
<td>77</td>
<td>6.4</td>
</tr>
<tr>
<td>80</td>
<td>7.0</td>
</tr>
<tr>
<td>66</td>
<td>5.6</td>
</tr>
<tr>
<td>109</td>
<td>7.3</td>
</tr>
<tr>
<td>76</td>
<td>6.4</td>
</tr>
</tbody>
</table>
</div>
<p>This is the output I get on Google Sheet:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Slope</th>
<th>Intercept</th>
</tr>
</thead>
<tbody>
<tr>
<td>Coefficient</td>
<td>0.04025678079</td>
<td>3.185560249</td>
</tr>
<tr>
<td>Standard Error</td>
<td>0.005706415564</td>
<td>0.4669507938</td>
</tr>
<tr>
<td>R Squared, Standard Error</td>
<td>0.8615153295</td>
<td>0.3423088398</td>
</tr>
<tr>
<td>F Stat</td>
<td>49.76812677</td>
<td>8</td>
</tr>
<tr>
<td>Regression SS / Residual SS</td>
<td>5.831597265</td>
<td>0.9374027345</td>
</tr>
</tbody>
</table>
</div>
<p>This output also matches with excel output.</p>
<p>However, when I do the following on statsmodel:</p>
<pre><code>import statsmodels.api as sm

milesTraveled = [89.0, 66.0, 78.0, 111.0, 44.0, 77.0, 80.0, 66.0, 109.0, 76.0]
travelTime = [7.0, 5.4, 6.6, 7.4, 4.8, 6.4, 7.0, 5.6, 7.3, 6.4]
model = sm.OLS(travelTime, milesTraveled).fit()
print(model.summary())
</code></pre>
<p>I get the following:</p>
<pre><code> OLS Regression Results
=======================================================================================
Dep. Variable: Travel Time R-squared (uncentered): 0.985
Model: OLS Adj. R-squared (uncentered): 0.983
Method: Least Squares F-statistic: 575.6
Date: Mon, 01 Feb 2021 Prob (F-statistic): 1.82e-09
Time: 10:18:44 Log-Likelihood: -11.951
No. Observations: 10 AIC: 25.90
Df Residuals: 9 BIC: 26.20
Df Model: 1
Covariance Type: nonrobust
==================================================================================
coef std err t P>|t| [0.025 0.975]
----------------------------------------------------------------------------------
Miles Traveled 0.0781 0.003 23.991 0.000 0.071 0.085
==============================================================================
Omnibus: 2.179 Durbin-Watson: 2.654
Prob(Omnibus): 0.336 Jarque-Bera (JB): 1.033
Skew: -0.777 Prob(JB): 0.597
Kurtosis: 2.741 Cond. No. 1.00
==============================================================================
</code></pre>
<p>As you can see, the values for standard error, R squared etc. don't match Google Sheets/Excel at all. What am I doing wrong? What can I do to get a result summary that exactly matches Google Sheets/Excel?</p>
|
<p>By default, the <code>OLS</code> class doesn't include the constant term in the linear model. You can use <code>sm.add_constant</code> to create the appropriate <code>exog</code> argument for <code>OLS</code>:</p>
<pre><code>In [36]: milesTraveled = [89.0, 66.0, 78.0, 111.0, 44.0, 77.0, 80.0, 66.0, 109.0, 76.0]
In [37]: travelTime = [7.0, 5.4, 6.6, 7.4, 4.8, 6.4, 7.0, 5.6, 7.3, 6.4]
In [38]: X = sm.add_constant(milesTraveled)
In [39]: model = sm.OLS(travelTime, X).fit()
In [40]: print(model.summary())
/Users/warren/a2020.11/lib/python3.8/site-packages/scipy/stats/stats.py:1603: UserWarning: kurtosistest only valid for n>=20 ... continuing anyway, n=10
warnings.warn("kurtosistest only valid for n>=20 ... continuing "
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.862
Model: OLS Adj. R-squared: 0.844
Method: Least Squares F-statistic: 49.77
Date: Mon, 01 Feb 2021 Prob (F-statistic): 0.000107
Time: 13:04:53 Log-Likelihood: -2.3532
No. Observations: 10 AIC: 8.706
Df Residuals: 8 BIC: 9.312
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 3.1856 0.467 6.822 0.000 2.109 4.262
x1 0.0403 0.006 7.055 0.000 0.027 0.053
==============================================================================
Omnibus: 0.542 Durbin-Watson: 2.608
Prob(Omnibus): 0.763 Jarque-Bera (JB): 0.554
Skew: 0.370 Prob(JB): 0.758
Kurtosis: 2.115 Cond. No. 353.
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
</code></pre>
|
python|pandas|numpy|statsmodels
| 0
|
375,250
| 66,164,487
|
How to use Pandas split for retaining both parts of column?
|
<p>I have a file with columns such as:</p>
<pre><code> A B C
f>g f=313/g=6535 1:123456
r>t r=2/t=7020 1:56789
g>f g=2/f=6764 1:65555
t>r t=5337/r=677 1:115675
</code></pre>
<p>and I am struggling with splitting them. I need not only to split them, but also to keep both parts of the split column.</p>
<p>For third column, I tried this syntax</p>
<pre><code>df['name_1'] = df['C'].str.split(':')[0]
df['name_2'] = df['C'].str.split(':')[1]
</code></pre>
<p>But I still get a <code>ValueError</code>.</p>
<p>I have no more ideas left; what is wrong?
I checked previous questions, but no thread seems to answer this problem.</p>
<p>Thank you!</p>
|
<p>You can try something like</p>
<pre><code>df[['name_1', 'name_2']] = df['C'].str.split(':', expand=True)
</code></pre>
<p>Which results in what you want</p>
<pre><code> A B C name_1 name_2
0 f>g f=313/g=6535 1:123456 1 123456
1 r>t r=2/t=7020 1:56789 1 56789
2 g>f g=2/f=6764 1:65555 1 65555
3 t>r t=5337/r=677 1:115678 1 115678
</code></pre>
|
python-3.x|pandas
| 0
|
375,251
| 66,042,243
|
Pandas extract number with a decimal separator after $ from a string
|
<p>Although there are several <a href="https://stackoverflow.com/questions/61897051/pandas-extract-number-with-decimals-from-string">similar questions</a> to this, I am still not able to solve my issue.</p>
<p>I have a pandas column from a poker game and want to analyze the pot size from it; therefore I need to extract the number (with a <code>.</code> decimal separator) after a <code>$</code>. The column looks like this:</p>
<pre><code>Action
Player (8, 5) won the $5.40 main pot with a Straight
...
Player (A, 2) won the $21.00 main pot with a flush
...
</code></pre>
<p>When I run <code>df['number'] = df['action'].str.extract('([0-9][,.]*[0-9]*)')</code>,
it doesn't give me the expected outcome. The outcome should be:</p>
<pre><code>number
5.40
...
21.00
</code></pre>
|
<p>You can use</p>
<pre class="lang-py prettyprint-override"><code>>>> import pandas as pd
>>> df = pd.DataFrame({'action':['Player (8, 5) won the $5.40 main pot with a Straight','Player (A, 2) won the $21.00 main pot with a flush']})
>>> df['action'].str.extract(r'\$(\d+(?:[,.]\d+)*)', expand=False)
0 5.40
1 21.00
Name: Action, dtype: object
</code></pre>
<p>The <code>\$(\d+(?:[,.]\d+)*)</code> pattern matches a literal <code>$</code> symbol, and then captures into Group 1 any one or more digits and then zero or more sequences of a <code>,</code> or <code>.</code> and then one or more digits.</p>
<p>See the <a href="https://regex101.com/r/80hlH5/2" rel="nofollow noreferrer">regex demo</a>.</p>
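<p>If a numeric column is needed afterwards, the captured string can be converted in a follow-up step (a sketch; stripping the comma assumes it is only used as a thousands separator):</p>
<pre><code>df['number'] = (df['action'].str.extract(r'\$(\d+(?:[,.]\d+)*)', expand=False)
                            .str.replace(',', '', regex=False)
                            .astype(float))
</code></pre>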
|
python|regex|pandas
| 2
|
375,252
| 66,254,073
|
Creating percent of total column in pandas
|
<p>I can't seem to figure out how to add a % of total column for each state and year in pandas.</p>
<p>my data looks like this</p>
<pre><code>year type Arizona_total Utah_total California_total Colorado_total
2018 Total 163,176 90,344 343,343 32,343
2018 bio. 272 270 234. 2343
</code></pre>
<p>The data then continues for each major back to 1990. For each state, I want to create a column that gives each value as a percent of that state's total majors in that year.</p>
<p>I am a newbie and I apologize for the simple / poorly worded question.</p>
|
<p>Try this.</p>
<pre><code># use .loc; the old .ix indexer has been removed from recent pandas versions
total = np.sum(df.loc[:, 'Arizona_total':].values)
df['percent'] = df.loc[:, 'Arizona_total':].sum(axis=1) / total * 100
df
</code></pre>
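<p>If instead you need each state's value as a percentage of that state's "Total" row for the same year (which is how the question reads), here is a rough sketch, assuming the <code>*_total</code> columns are already numeric (the thousands separators would have to be stripped first):</p>
<pre><code>state_cols = ['Arizona_total', 'Utah_total', 'California_total', 'Colorado_total']
totals = df.loc[df['type'] == 'Total'].set_index('year')[state_cols]      # one "Total" row per year
pct = df[state_cols].div(totals.reindex(df['year']).to_numpy()) * 100     # percent of that year's total
</code></pre>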
|
python|pandas
| 0
|
375,253
| 66,114,826
|
How can I append a data frame to another data frame?
|
<p>I have the following python pandas data frame:</p>
<pre><code>master = pd.DataFrame(columns = ['Development id', 'Development Name', 'Integrated Development', 'Developer id', 'Developer', 'Ultimate Developer id', 'Ultimate Developer', 'Development Type', 'Sub Development Type', 'Joint-Venture', 'Year Completed', 'Latitude', 'Longitude', 'No of Floors', 'No of Rooms', 'No of Units/Residences/Lots', 'Gross Floor Area (SQM)', 'Gross Leasable Area', 'Lot Area (SQM)', 'Region', 'City/Municipality', 'CBD', 'Parking vehicles', 'Min. Price per SQM 2020', 'Max. Price per SQM 2020', 'Monthly Min. Rent per SQM 2020', 'Monthly Min Rent per SQM 2020'])
for i in my_list:
df = pd.read_excel('Template For Developer Footprint.xlsx', sheet_name=i)
temporary = (df.loc[(df['Development Type'] == 'Hotel & Resort') & df['Development id'].isnull()])
master.append(temporary)
display(temporary)
</code></pre>
<p>Even though I use <code>append</code>, the <code>master</code> dataframe stays empty.</p>
|
<p>In fact, you can concat all sheets of the xlsx file together, then filter on the condition.</p>
<pre><code>xl = pd.ExcelFile('Template For Developer Footprint.xlsx')
df = pd.concat([xl.parse(sheet_name)
for sheet_name in xl.sheet_names ],
ignore_index=True)
cond = (df['Development Type'] == 'Hotel & Resort') & (df['Development id'].isnull())
cols = ['Development id', 'Development Name', 'Integrated Development', 'Developer id', 'Developer', 'Ultimate Developer id', 'Ultimate Developer', 'Development Type', 'Sub Development Type', 'Joint-Venture', 'Year Completed', 'Latitude', 'Longitude', 'No of Floors', 'No of Rooms', 'No of Units/Residences/Lots', 'Gross Floor Area (SQM)', 'Gross Leasable Area', 'Lot Area (SQM)', 'Region', 'City/Municipality', 'CBD', 'Parking vehicles', 'Min. Price per SQM 2020', 'Max. Price per SQM 2020', 'Monthly Min. Rent per SQM 2020', 'Monthly Min Rent per SQM 2020']
dfn = df.loc[cond, cols].copy()
</code></pre>
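<p>As a side note on why the original loop ended up with an empty dataframe: <code>DataFrame.append</code> is not an in-place operation, it returns a new frame, so the result has to be reassigned, e.g.</p>
<pre><code>master = master.append(temporary)   # or collect the pieces in a list and pd.concat them once at the end
</code></pre>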
|
python|pandas|append
| 0
|
375,254
| 66,090,211
|
how to merge/concat/join 2 dataframes with a non-unique multi-index to reconcile the content?
|
<p>I have the 2 dataframes below, from 2 sources; the 3 white columns are indexes. These are from 2 reports about historical trades. The trades can only be compared when the 3 columns "Trade date", "Exchange Instrument" and "Prompt date" are the same. "Trade date" matters because the trades were reported in chronological order, and the futures contracts are only the same when "Exchange Instrument" and "Prompt date" are the same.</p>
<p><a href="https://i.stack.imgur.com/CofP9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CofP9.png" alt="df1" /></a></p>
<p><a href="https://i.stack.imgur.com/YzX5n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YzX5n.png" alt="df2" /></a></p>
<p>I simply want to merge the 2 dfs so that there will be 7 columns with the same 3 indexes. It may be challenging that the values are not unique for the same index in both reports: for example, for the 1st of August, the CMX Cu contract with prompt 2020-03-01 has had 3 trades at different prices.</p>
<p>I tried concat and merge, but never get the desired df. For example, while trying</p>
<pre><code>df_complete= pd.concat([df_ctrm_timelined, df_broker_timelined],axis=1)
</code></pre>
<p>I get</p>
<pre><code>ValueError: cannot handle a non-unique multi-index!
</code></pre>
<p>If you need the raw data, these are the first 10 rows of both dfs; the 2 dfs have the same number of rows.</p>
<p>df_broker_timelined[:10]</p>
<pre><code> broker_Lots broker_TradePrice \
Trade Date Exchange Instrument PromptDate
2019-08-01 CMX Cu 2020-03-01 -1 2.6840
2020-03-01 -1 2.6865
2020-03-01 -2 2.6870
2019-09-01 1 2.6640
2019-09-01 1 2.6665
2019-09-01 2 2.6670
LME Al 2019-10-16 6 1777.5000
2019-11-01 -3 1779.0000
2019-11-01 -1 1779.0000
2019-11-01 -2 1779.0000
broker_Quantity
Trade Date Exchange Instrument PromptDate
2019-08-01 CMX Cu 2020-03-01 -25000
2020-03-01 -25000
2020-03-01 -50000
2019-09-01 25000
2019-09-01 25000
2019-09-01 50000
LME Al 2019-10-16 150
2019-11-01 -75
2019-11-01 -25
2019-11-01 -50
</code></pre>
<p>df_ctrm_timelined[:10]</p>
<pre><code> ctrm_TradePrice ctrm_Lots \
Trade Date Exchange Instrument PromptDate
2019-08-01 CMX Cu 2019-09-30 2.6640 1
2019-09-30 2.6665 1
2019-09-30 2.6670 2
2020-03-31 2.6840 -1
2020-03-31 2.6865 -1
2020-03-31 2.6870 -2
LME Al 2019-10-16 1777.5000 6
2019-11-01 1792.5000 3
2019-11-01 1792.5000 3
2019-11-01 1781.5000 -6
ctrm_Quantity Strategy
Trade Date Exchange Instrument PromptDate
2019-08-01 CMX Cu 2019-09-30 25000 Strategy 1
2019-09-30 25000 Strategy 1
2019-09-30 50000 Strategy 1
2020-03-31 -25000 Strategy 1
2020-03-31 -25000 Strategy 1
2020-03-31 -50000 Strategy 1
LME Al 2019-10-16 150 Strategy 2
2019-11-01 75 Strategy 2
2019-11-01 75 Strategy 2
2019-11-01 -150 Strategy 2
</code></pre>
|
<p>Dealing with non-unique indexing:</p>
<pre><code>from seaborn import load_dataset
#Create one dataframe with unique indexes, set multiindex
df = load_dataset('tips')
df = df.set_index(['day', 'time', 'sex', 'smoker'])
#Create a unique label per inner most index
df = df.set_index(df.groupby(level=[0,1,2,3]).cumcount(), append=True)
#Create second dataframe
df2 = df * 2
#Use join
df.join(df2, lsuffix='_2')
</code></pre>
<p>Output:</p>
<pre><code> total_bill_2 tip_2 size_2 total_bill tip size
day time sex smoker
Sun Dinner Female No 0 16.99 1.01 2 33.98 2.02 4
Male No 0 10.34 1.66 3 20.68 3.32 6
1 21.01 3.50 3 42.02 7.00 6
2 23.68 3.31 2 47.36 6.62 4
Female No 1 24.59 3.61 4 49.18 7.22 8
... ... ... ... ... ... ...
Sat Dinner Male No 30 29.03 5.92 3 58.06 11.84 6
Female Yes 14 27.18 2.00 2 54.36 4.00 4
Male Yes 26 22.67 2.00 2 45.34 4.00 4
No 31 17.82 1.75 2 35.64 3.50 4
Thur Dinner Female No 0 18.78 3.00 2 37.56 6.00 4
[244 rows x 6 columns]
</code></pre>
<hr />
<p>Here's an example using the "tips" dataset:</p>
<pre><code>from seaborn import load_dataset
#Create one dataframe with unique indexes, set multiindex
df = load_dataset('tips')
df = df.set_index(['day', 'time', 'sex', 'smoker'])
df = df.groupby(level=[0,1,2,3]).first().dropna(how='all')
#Create second dataframe
df2 = df * 2
#Use join
df.join(df2, lsuffix='_2')
</code></pre>
<p>Output:</p>
<pre><code> total_bill_2 tip_2 size_2 total_bill tip size
day time sex smoker
Thur Lunch Male Yes 19.44 3.00 2.0 38.88 6.00 4.0
No 27.20 4.00 4.0 54.40 8.00 8.0
Female Yes 19.81 4.19 2.0 39.62 8.38 4.0
No 10.07 1.83 1.0 20.14 3.66 2.0
Dinner Female No 18.78 3.00 2.0 37.56 6.00 4.0
Fri Lunch Male Yes 12.16 2.20 2.0 24.32 4.40 4.0
Female Yes 13.42 3.48 2.0 26.84 6.96 4.0
No 15.98 3.00 3.0 31.96 6.00 6.0
Dinner Male Yes 28.97 3.00 2.0 57.94 6.00 4.0
No 22.49 3.50 2.0 44.98 7.00 4.0
Female Yes 5.75 1.00 2.0 11.50 2.00 4.0
No 22.75 3.25 2.0 45.50 6.50 4.0
Sat Dinner Male Yes 38.01 3.00 4.0 76.02 6.00 8.0
No 20.65 3.35 3.0 41.30 6.70 6.0
Female Yes 3.07 1.00 1.0 6.14 2.00 2.0
No 20.29 2.75 2.0 40.58 5.50 4.0
Sun Dinner Male Yes 7.25 5.15 2.0 14.50 10.30 4.0
No 10.34 1.66 3.0 20.68 3.32 6.0
Female Yes 17.51 3.00 2.0 35.02 6.00 4.0
No 16.99 1.01 2.0 33.98 2.02 4.0
</code></pre>
|
python|pandas
| 2
|
375,255
| 66,312,962
|
Pandas DataFrame groupby, count and sum across columns
|
<p>I have a dataset like the following. It has the cumulative vehicle counts over time.</p>
<p><a href="https://i.stack.imgur.com/81Nef.png" rel="nofollow noreferrer">Image Describing the Expected Output</a></p>
<pre><code>LcounterCar,LcounterTruck,LcounterBus,LcounterMotorcycle,LcounterVan,Ltime,RcounterCar,RcounterTruck,RcounterBus,RcounterMotorcycle,RcounterVan,Rtime
1,0,0,0,0,2021-02-22 13:22:00,,,,,
2,0,0,0,0,2021-02-22 13:23:00,,,,,
3,1,0,0,0,2021-02-22 13:23:00,,,,,
4,0,0,0,0,2021-02-22 13:24:00,,,,,
5,0,0,0,0,2021-02-22 13:25:00,,,,,
6,2,0,0,0,2021-02-22 13:25:00,,,,,
,,,,,,1,0,0,0,0,2021-02-22 13:25:00
,,,,,,2,0,0,0,0,2021-02-22 13:27:00
</code></pre>
<p>I have created a Pandas dataframe and I want to groupby Ltime and Rtime and get the total number of vehicles irrespective of the class (For example total number of vehicles in the Left line (L) in a given time period and total number of vehicles in the Right line (R) in a given time period).</p>
<p>The following is what I tried</p>
<pre><code>data = pd.read_csv('output2.txt')
data['Ltime'] = pd.to_datetime(data['Ltime'].str.strip())
data['Rtime'] = pd.to_datetime(data['Rtime'].str.strip())
data.info()
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 LcounterCar 6 non-null float64
1 LcounterTruck 6 non-null float64
2 LcounterBus 6 non-null float64
3 LcounterMotorcycle 6 non-null float64
4 LcounterVan 6 non-null float64
5 Ltime 6 non-null datetime64[ns]
6 RcounterCar 2 non-null float64
7 RcounterTruck 2 non-null float64
8 RcounterBus 2 non-null float64
9 RcounterMotorcycle 2 non-null float64
10 RcounterVan 2 non-null float64
11 Rtime 2 non-null datetime64[ns]
data.groupby('Ltime')['LcounterCar'].count().reset_index()
Ltime LcounterTruck
0 2021-02-22 13:22:00 1
1 2021-02-22 13:23:00 2
2 2021-02-22 13:24:00 1
3 2021-02-22 13:25:00 2
</code></pre>
<p>However, the count is always the same. Instead, the following is my expected output.</p>
<pre><code>Ltime, count
13:22:00, 1
13:23:00, 3 (two cars and one truck)
13:24:00, 1
13:25:00, 3
Rtime, count
13:25:00, 1
13:27:00, 1
</code></pre>
|
<p>There is an inconsistency between your data and what you describe</p>
<ul>
<li>consider left & right as separate data sets</li>
<li>you describe <code>sum()</code> not <code>count()</code>, hence have used <code>sum()</code></li>
<li><code>unstack()</code> the columns so that it becomes a straightforward <code>groupby(level=1).count()</code></li>
<li>treat 0 and <strong>NaN</strong> so it's not counted</li>
<li>pull left and right together using <code>concat()</code></li>
<li>calc final values</li>
</ul>
<pre><code>import io
import numpy as np
import pandas as pd

df = pd.read_csv(io.StringIO("""LcounterCar,LcounterTruck,LcounterBus,LcounterMotorcycle,LcounterVan,Ltime,RcounterCar,RcounterTruck,RcounterBus,RcounterMotorcycle,RcounterVan,Rtime
1,0,0,0,0,2021-02-22 13:22:00,,,,,
2,0,0,0,0,2021-02-22 13:23:00,,,,,
3,1,0,0,0,2021-02-22 13:23:00,,,,,
4,0,0,0,0,2021-02-22 13:24:00,,,,,
5,0,0,0,0,2021-02-22 13:25:00,,,,,
6,2,0,0,0,2021-02-22 13:25:00,,,,,
,,,,,,1,0,0,0,0,2021-02-22 13:25:00
,,,,,,2,0,0,0,0,2021-02-22 13:27:00"""))
df.Ltime = pd.to_datetime(df.Ltime)
df.Rtime = pd.to_datetime(df.Rtime)
df2 = pd.concat([
(df.loc[:,[c for c in df.columns if c[0]==side]]
.dropna().set_index(f"{side}time")
.unstack().replace({0:np.nan}).groupby(level=1).count())
for side in list("LR")]).groupby(level=0).sum()
</code></pre>
<h3>df</h3>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">LcounterCar</th>
<th style="text-align: right;">LcounterTruck</th>
<th style="text-align: right;">LcounterBus</th>
<th style="text-align: right;">LcounterMotorcycle</th>
<th style="text-align: right;">LcounterVan</th>
<th style="text-align: left;">Ltime</th>
<th style="text-align: right;">RcounterCar</th>
<th style="text-align: right;">RcounterTruck</th>
<th style="text-align: right;">RcounterBus</th>
<th style="text-align: right;">RcounterMotorcycle</th>
<th style="text-align: right;">RcounterVan</th>
<th style="text-align: left;">Rtime</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2021-02-22 13:22:00</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: left;">NaT</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2021-02-22 13:23:00</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: left;">NaT</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2021-02-22 13:23:00</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: left;">NaT</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2021-02-22 13:24:00</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: left;">NaT</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2021-02-22 13:25:00</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: left;">NaT</td>
</tr>
<tr>
<td style="text-align: right;">5</td>
<td style="text-align: right;">6</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2021-02-22 13:25:00</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: left;">NaT</td>
</tr>
<tr>
<td style="text-align: right;">6</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: left;">NaT</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2021-02-22 13:25:00</td>
</tr>
<tr>
<td style="text-align: right;">7</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: right;">nan</td>
<td style="text-align: left;">NaT</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2021-02-22 13:27:00</td>
</tr>
</tbody>
</table>
</div><h3>df2</h3>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: right;">0</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">2021-02-22 13:22:00</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">2021-02-22 13:23:00</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: left;">2021-02-22 13:24:00</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">2021-02-22 13:25:00</td>
<td style="text-align: right;">4</td>
</tr>
<tr>
<td style="text-align: left;">2021-02-22 13:27:00</td>
<td style="text-align: right;">1</td>
</tr>
</tbody>
</table>
</div>
|
pandas|dataframe|count|pandas-groupby
| 0
|
375,256
| 65,961,765
|
Using slice/mask instead of a for-loop to find items in an array
|
<pre><code>p= np.array([[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10],
[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20]])
</code></pre>
<p>I have the above array and need to find the items that are divisible by 2 and 3 without converting it to a flat array using for-loops and then slicing/masking.</p>
<p>I was able to figure out the for-loops part and am running into some issues with the slicing/masking part.</p>
<pre><code># we know that 6, 12 and 18 are divisible by 2 and 3
# therefore we can use slicing to pull those numbers out of the array
print(p[1:2,0:1]) # slice array to return 6
print(p[2:3,1:2]) # slice array to return 12
print(p[3:4,2:3]) # slice array to return 18
m=np.ma.masked_where(((p[:, :]%2==0)&(p[:, :]%3==0)),p)
print(m)
mask=np.logical_and(p%2==0,p%3==0)
print(mask)
</code></pre>
<p>Is there a more efficient way of slicing the array to find 6, 12 and 18? Also, is there a way to make either of the two mask functions output just 6, 12, and 18? The first one shows the inverse of what I want while the other returns a Boolean answer.</p>
|
<p>You nearly had it!</p>
<pre><code>mask=np.logical_and(p%2==0,p%3==0)
</code></pre>
<p>gives you <code>True</code> where <code>p % 2 == 0 and p % 3 == 0</code>.</p>
<pre><code>mask = array([[False, False, False, False, False],
[ True, False, False, False, False],
[False, True, False, False, False],
[False, False, True, False, False]])
</code></pre>
<p>From this, you can get the values of <code>p</code> where <code>mask</code> is <code>True</code> by simply</p>
<pre><code>p[mask]
</code></pre>
<p>Which gives the output:</p>
<pre><code>array([ 6, 12, 18])
</code></pre>
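<p>As a side note, for this particular pair of divisors you could also use the fact that a number divisible by both 2 and 3 is divisible by 6 (their least common multiple), so a single condition works as well:</p>
<pre><code>p[p % 6 == 0]
# array([ 6, 12, 18])
</code></pre>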
|
python|arrays|numpy
| 1
|
375,257
| 66,246,333
|
cannot stack numpy arrays with hstack in numba
|
<p>I have one matrix <code>mat</code> of the type</p>
<pre><code>array([[0.00000000e+00, 1.98300000e+03, 1.57400000e+00, ...,
nan, nan, 2.38395652e+00],
[0.00000000e+00, 1.98400000e+03, 1.80600000e+00, ...,
nan, 1.38395652e+00, 2.29417391e+00],
[0.00000000e+00, 1.98500000e+03, 4.72400000e+00, ...,
1.38395652e+00, 1.29417391e+00, 5.68147826e+00],
...,
[9.87500000e+03, 1.99200000e+03, 1.59700000e+00, ...,
nan, nan, 4.61641176e+00],
[9.87500000e+03, 1.99300000e+03, 3.13400000e+00, ...,
nan, 3.61641176e+00, 5.45824421e+00],
[9.87500000e+03, 1.99400000e+03, 7.61900000e+00, ...,
3.61641176e+00, 4.45824421e+00, 1.05298571e+01]])
</code></pre>
<p>with dimensions (107196, 46) and one vector <code>vec</code> of the type</p>
<pre><code>array([0.23, 0., 0.28, ..., 0.99, 1.0, 0.05])
</code></pre>
<p>with dimensions (107196,).
I want to use the <code>np.hstack</code> function to append <code>vec</code> to <code>mat</code> as its last column. For this purpose, I use
<code>np.hstack((mat,vec[:,None]))</code>
Now, I want to get an njitted function which contains this operation, say:</p>
<pre><code>@jit(nopython=True)
def simul_nb(matrix, vector):
return np.hstack((matrix,vector[:,None]))
</code></pre>
<p>However, when I run <code>simul_nb(mat,vec)</code> I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-340-6c5341efa9b6>", line 1, in <module>
simul_nb(income_df,nols)
File "C:\Users\bagna\anaconda3\lib\site-packages\numba\core\dispatcher.py", line 415, in _compile_for_args
error_rewrite(e, 'typing')
File "C:\Users\bagna\anaconda3\lib\site-packages\numba\core\dispatcher.py", line 358, in error_rewrite
reraise(type(e), e, None)
File "C:\Users\bagna\anaconda3\lib\site-packages\numba\core\utils.py", line 80, in reraise
raise value.with_traceback(tb)
TypingError: No implementation of function Function(<built-in function getitem>) found for signature:
getitem(readonly array(float64, 1d, C), Tuple(slice<a:b>, none))
There are 16 candidate implementations:
- Of which 14 did not match due to:
Overload of function 'getitem': File: <numerous>: Line N/A.
With argument(s): '(readonly array(float64, 1d, C), Tuple(slice<a:b>, none))':
No match.
- Of which 2 did not match due to:
Overload in function 'GetItemBuffer.generic': File: numba\core\typing\arraydecl.py: Line 162.
With argument(s): '(readonly array(float64, 1d, C), Tuple(slice<a:b>, none))':
Rejected as the implementation raised a specific error:
TypeError: unsupported array index type none in Tuple(slice<a:b>, none)
raised from C:\Users\bagna\anaconda3\lib\site-packages\numba\core\typing\arraydecl.py:68
During: typing of intrinsic-call at <ipython-input-339-4d5037c266b7> (3)
During: typing of static-get-item at <ipython-input-339-4d5037c266b7> (3)
</code></pre>
<p>How do I make the function work?</p>
|
<p>Numba doesn't understand the <code>[:,None]</code> indexing for reshaping. Indeed, the latter is equivalent to <code>[:,np.newaxis]</code>, as you may already know, and at the present time <code>np.newaxis</code> isn't a <a href="https://numba.pydata.org/numba-doc/dev/reference/numpysupported.html" rel="nofollow noreferrer">supported numpy feature</a>, which partly explains the error message. Here, you should just use <code>vector.reshape((-1,1))</code> or <code>np.expand_dims(vector, 1)</code> instead, which should give:</p>
<pre><code>@jit(nopython=True)
def simul_nb(matrix, vector):
return np.hstack((matrix,vector.reshape((-1,1))))
>>> new_mat = simul_nb(mat, vec)
>>> new_mat.shape
(107196, 47)
</code></pre>
|
numpy|matrix|vector|numba|hstack
| 1
|
375,258
| 65,951,501
|
Merge multiple .npy files into single .npy file
|
<p>I have a folder in which I have 100+ .npy files.
The path to this folder is '/content/drive/MyDrive/lung_cancer/subset0/trainImages'.</p>
<p>This folder has the .npy files as shown in the image <a href="https://i.stack.imgur.com/0kAcu.png" rel="nofollow noreferrer">the .npy files</a></p>
<p>The shape of each of these .npy files is (3,512,512)</p>
<p>I want to combine all of these files into one single file with the name trainImages.npy so that I can train my unet model with it.</p>
<p>My unet model takes input of the shape (1,512,512).
I will load the above trainImages.npy file into imgs_train as below to pass it as input into unet model</p>
<p>imgs_train = np.load(working_path+"trainImages.npy").astype(np.float32)</p>
<p>Can someone please tell me how I can concatenate all those .npy files into one single .npy file?
Thanks.</p>
|
<p>So I found the answer by myself and I am attaching the code below if anyone needs it. Change it according to your needs.</p>
<pre><code>import os
import numpy as np

path = '/content/drive/MyDrive/lung_cancer/subset0/trainImages/'
trainImages = []
for i in os.listdir(path):        # loop over every .npy file in the folder
    data = np.load(path + i)      # each file has shape (3, 512, 512)
    trainImages.append(data)
</code></pre>
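<p>To actually end up with a single <code>trainImages.npy</code> file, the collected arrays still need to be stacked and saved. A rough sketch (the stacking axis here is an assumption and may need reshaping afterwards to match the model's (1, 512, 512) input):</p>
<pre><code>trainImages = np.concatenate(trainImages, axis=0)   # shape (3 * number_of_files, 512, 512)
np.save(path + 'trainImages.npy', trainImages)
</code></pre>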
|
numpy|numpy-ndarray
| 2
|
375,259
| 65,936,263
|
Can I train in tensorflow with separate CUDA version in anaconda environment
|
<p>I need to train a model in TensorFlow-gpu==2.3.0 which needs the CUDA version to be 10.1. But when I type 'nvidia-smi' it shows CUDA version to be 10.0.</p>
<p>I created a conda environment using, "<strong>conda create -n tf2-gpu tensorflow-gpu cudatoolkit=10.1</strong>"
after initiating training, it throws an error as <strong>tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version</strong></p>
<p>How can I train using tensorflow-gpu in conda environment with another version of CUDA? And, I still need CUDA 10.0 to be there, as it helps my other training setup.</p>
|
<p>Yes, you can create two virtual environments in Anaconda with different tensorflow versions, but <code>CUDA</code> and <code>cuDNN</code> will be installed in versions compatible with the specified <code>tensorflow-gpu</code>.</p>
<p>You can find <code>tensorflow-gpu</code> build configuration details <a href="https://www.tensorflow.org/install/source_windows#gpu" rel="nofollow noreferrer">here</a> to check supporting <code>CUDA</code> and <code>cuDNN</code> version.</p>
<p>Please check <a href="https://stackoverflow.com/a/70850191/14290681">this</a> similar issue link to create virtual environment in anaconda and to install specific tensorflow-gpu.</p>
|
python-3.x|tensorflow|nvidia
| 0
|
375,260
| 66,167,521
|
Pandas: How can I subtract an array from a column repeatedly
|
<p>I have monthly average temperature data for a certain year, let's say 1900, so the dataframe looks like:</p>
<pre><code>year | month | temp
-----+-------+-------
1900 | 1 | 18.5
1900 | 2 | 18.8
1900 | 3 | 21.4
...
1900 | 12 | 18.4
</code></pre>
<p>Then I have monthly average temperatures that go from 1900 to 2020, and I want to compare them to the 1900 baseline.</p>
<p>I extracted an array of 1900 temperatures:</p>
<pre><code>t1900 = df.loc[df.year == 1900]['temp']
</code></pre>
<p>How can I subtract (without using a loop) the monthly temperature of each month in 1900 from the corresponding month in every year?</p>
<p>If I do <code>df['delta'] = df.temp - t1900</code> it works for the first year, but for the following ones it obviously returns <code>NaN</code>, since the operation aligns element-wise on the index.</p>
<p>The result I want to have something like:</p>
<pre><code>year | month | temp | delta
-----+-------+-------+------
1900 | 1 | 18.5 | 0.0
1900 | 2 | 18.8 | 0.0
1900 | 3 | 21.4 | 0.0
...
1900 | 12 | 18.4 | 0.0
1901 | 1 | 19.0 | 0.5
1901 | 2 | 18.6 | -0.2
1901 | 3 | 22.0 | 0.6
...
2020 | 10 | 24.5 | -0.4
</code></pre>
<p>Notice that in the last year I may or may not have the whole 12 months because of missing data</p>
<p>I have already solved this with a loop and <code>iloc</code>ing, and it's okay, since I'm filtering data and have around 3,000 rows of data, but the whole dataset has many more records and I believe using loops is not optimal.</p>
<p>Is there a way to tell Pandas or maybe Numpy that this operation should be done like repeating blocks?</p>
<p>EDIT:
(I hate to show my code when I know it's horrible)</p>
<p>This produces the array of deltas, which are then appended to the original dataframe. It works and all, but it's a horrid solution.</p>
<pre><code>j = 0
delta = []
for i in range(0, len(df)):
d = df.iloc[i]['temp'] - base[j]
delta.append(d)
j += 1
if j > 11:
j = 0
</code></pre>
|
<p>First create <code>Series</code> by index from months by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a>, so possible mapping original months by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a>:</p>
<pre><code>t1900 = df.loc[df.year == 1900].set_index('month')['temp']
df['delta'] = df.temp - df['month'].map(t1900)
</code></pre>
|
python|pandas|numpy
| 2
|
375,261
| 66,075,257
|
Select columns based on exact row value matches
|
<p>I am trying to select columns of a specified integer value (24). However, my new dataframe includes all values = and > 24. I have tried converting from integer to both float and string, and it gives the same results. Writing "24" and 24 gives same result. The dataframe is loaded from a .csv file.</p>
<pre><code>data_PM1_query24 = data_PM1.query('hours == "24"' and 'averages_590_nm_minus_blank > 0.3')
data_PM1_sorted24 = data_PM1_query24.sort_values(by=['hours', 'averages_590_nm_minus_blank'])
data_PM1_sorted24
</code></pre>
<p>What am I missing here?</p>
|
<p>Please try out the below codes. I'm assuming that the data type of "hours" and "averages_590_nm_minus_blank" is float. If not float, convert them to float.</p>
<pre><code>data_PM1_query24 = data_PM1.query('hours == 24 & averages_590_nm_minus_blank > 0.3')
</code></pre>
<p>or you can also use,</p>
<pre><code>data_PM1_query24 = data_PM1[(data_PM1.hours == 24) & (data_PM1.averages_590_nm_minus_blank > 0.3)]
</code></pre>
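<p>For reference, the original query failed because Python's <code>and</code> between two non-empty string literals simply evaluates to the second string, so only the second condition was ever applied:</p>
<pre><code>>>> 'hours == "24"' and 'averages_590_nm_minus_blank > 0.3'
'averages_590_nm_minus_blank > 0.3'
</code></pre>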
<p>Hope this solves your query!</p>
|
pandas|dataframe
| 0
|
375,262
| 66,219,625
|
Cannot setup package in conda environment with Pytorch installed
|
<p>All</p>
<p>After setting up the PyTorch 1.7.1 with CUDA 11.2 on a conda virtual environment, I run <code>python setup.py install</code> it always returns me the following error message.</p>
<pre><code>Traceback (most recent call last):
File "setup.py", line 2, in <module>
from torch.utils.cpp_extension import BuildExtension, CUDAExtension
File "anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/__init__.py", line 189, in <module>
_load_global_deps()
File "anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/__init__.py", line 142, in _load_global_deps
ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
File "anaconda3/envs/pytorch/lib/python3.7/ctypes/__init__.py", line 364, in __init__
self._handle = _dlopen(self._name, mode)
OSError: anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/../../../../libcublas.so.11: symbol free_gemm_select version libcublasLt.so.11 not defined in file libcublasLt.so.11 with link time reference
</code></pre>
<p>Could anyone can help with this?
Thanks!</p>
|
<p>Finally, I find the solution by just using the <code>pip</code> from the Pytorch official website.</p>
<pre><code>pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
</code></pre>
|
anaconda|pytorch
| 2
|
375,263
| 66,084,663
|
extracting hour and minutes from a cell in pandas column
|
<p><a href="https://i.stack.imgur.com/NBVNq.png" rel="nofollow noreferrer">Example</a></p>
<p>How can I split or extract 04:38 from 04:38:00 AM in a pandas dataframe column?</p>
|
<p>Using <code>str.slice</code>:</p>
<pre class="lang-py prettyprint-override"><code>df["hm"] = df["time"].str.slice(stop=5)
</code></pre>
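<p>If the AM/PM part should be taken into account (so e.g. "04:38:00 PM" becomes "16:38"), one possible alternative is to parse the strings as times first; a sketch, assuming the values look exactly like "04:38:00 AM":</p>
<pre class="lang-py prettyprint-override"><code>df["hm"] = pd.to_datetime(df["time"], format="%I:%M:%S %p").dt.strftime("%H:%M")
</code></pre>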
|
pandas|time
| 0
|
375,264
| 66,218,036
|
Question on discrete convolution with python
|
<p>I am struggling to understand why the np.convolve method returns an N+M-1 set. I would appreciate your help.</p>
<p>Suppose I have two discrete probability distributions with values of <strong>[1,2]</strong> and <strong>[10,12]</strong> and probabilities of <strong>[.5,0.2]</strong> and <strong>[.5,0.4]</strong> respectively.</p>
<p>Using numpy's convolve function I get:</p>
<pre><code>>>In[]: np.convolve([.5,0.2],[.5,0.4])
>>Out[]: array([[0.25, 0.3 , 0.08])
</code></pre>
<p>However I don't understand why the resulting probability distribution only has 3 datapoints. To my understanding the sum of my input variables can have the following values: [11,12,13,14] so I would expect 4 datapoints to reflect the probabilities of each of these occurrences.</p>
<p>What am I missing?</p>
|
<p>I have managed to find the answer to my own question after understanding convolution a bit better. Posting it here for anyone wondering:</p>
<p>Effectively, the convolution of the two "signals" or probability functions in my example above is not correctly done as it is nowhere reflected that the events [1,2] of the first distribution and [10,12] of the second do not coincide.</p>
<p>Simply taking <code>np.convolve([.5,0.2],[.5,0.4])</code> assumes the probabilities correspond to the same events (e.g. [1,2] and [1,2]).</p>
<p>The correct approach is to bring the two series into alignment on a common x axis, here x in [1,12], as below:</p>
<pre><code>>>In[]: vector1 = [.5,0.2, 0,0,0,0,0,0,0,0,0,0]
>>In[]: vector2 = [0,0,0,0,0,0,0,0,0,.5, 0,0.4]
>>In[]: np.convolve(vector1, vector2)
>>Out[]: array([0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0.25, 0.1 ,
0.2 , 0.08, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ,
0. ])
</code></pre>
<p>which gives the correct values for 11,12,13,14</p>
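<p>An equivalent way to see it, without padding the vectors, is to form all pairwise sums and pairwise probability products and then aggregate equal sums; a small sketch:</p>
<pre><code>import numpy as np

vals1, p1 = np.array([1, 2]),  np.array([0.5, 0.2])
vals2, p2 = np.array([10, 12]), np.array([0.5, 0.4])

sums  = np.add.outer(vals1, vals2).ravel()   # [11, 13, 12, 14]
probs = np.outer(p1, p2).ravel()             # [0.25, 0.2, 0.1, 0.08]

dist = {}
for s, pr in zip(sums, probs):
    dist[s] = dist.get(s, 0.0) + pr          # {11: 0.25, 12: 0.1, 13: 0.2, 14: 0.08}
</code></pre>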
|
python|numpy|convolution
| 1
|
375,265
| 66,280,320
|
What would be the best way to convert a text file to a pandas dataframe?
|
<p>I have a text file that essentially goes.</p>
<pre><code>Number|Name|Report
58|John|John is great
John is good
I like John
[Report Ends]
</code></pre>
<p>and repeats over and over for different people.</p>
<p>I want to turn this into a dataframe like the following</p>
<pre><code>Number Name Report
58 John John is great John is good I like John [Report Ends]
</code></pre>
<p>Using the line
<code>pd.read_csv('/Path', sep="|", header=0)</code> I have gotten the correct column names, and the first row is correct up until the "Report" section. I think that the "Report" part messes everything up because it spans several lines in the text file. How should I fit the Report data into the dataframe?</p>
|
<p>With a few lines of manual parsing, you can extract the info and adapt it before reading it into your dataframe.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

with open('info.txt', 'r') as fp:
    info = fp.readlines()

df_dicts = []
cd = None
for line in info[1:]:                        # skip the header line
    line = line.replace('\n', ' ').strip()
    if '|' in line:                          # a "Number|Name|Report" line starts a new record
        cd = {}
        df_dicts.append(cd)
        cd['Number'], cd['Name'], cd['Report'] = line.split('|')
    else:                                    # continuation lines belong to the current report
        cd['Report'] += " " + line

print(pd.DataFrame(df_dicts))
</code></pre>
<p>If you have issues with the replace functions being too general, you'll have to start looking into regex.</p>
|
python|pandas|dataframe|txt
| 1
|
375,266
| 66,186,565
|
Using pandas to convert string "yes" to 1 but failed
|
<p>I'd like to convert "Yes" and "No" in the column "ServiceLevel" to "1" and "0".
This is my code:</p>
<pre><code>mydata['ServiceLevel'].replace(to_replace ='Yes',value = 1,inplace = 'True')
mydata['ServiceLevel'].replace(to_replace ='No', value = 0,inplace = 'True')
mydata['ServiceLevel'].head()
</code></pre>
<p>ValueError: For argument "inplace" expected type bool, received type str.</p>
<p>What's this mean? How to correct it?</p>
|
<p><code>inplace</code> is a boolean argument - it takes either <code>True</code> or <code>False</code>, but you passed the <strong>string</strong> <code>'True'</code> (note the quotes). Remove the quotes to get a boolean literal, and you should be fine:</p>
<pre class="lang-py prettyprint-override"><code>mydata['ServiceLevel'].replace(to_replace ='Yes', value = 1, inplace = True)
mydata['ServiceLevel'].replace(to_replace ='No', value = 0, inplace = True)
# Here ---------------------------------------------------------------^---^
</code></pre>
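<p>If the column only ever contains "Yes"/"No", one alternative is to map the values in a single step (anything not covered by the mapping would become NaN):</p>
<pre class="lang-py prettyprint-override"><code>mydata['ServiceLevel'] = mydata['ServiceLevel'].map({'Yes': 1, 'No': 0})
</code></pre>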
|
python|pandas
| 1
|
375,267
| 66,063,155
|
Multiple time range selection in Pandas Python
|
<p>I have time-series data in CSV format. I want to calculate the mean for several different selected time periods in a single run of the script, e.g. <code>01-05-2017: 30-04-2018, 01-05-2018: 30-04-2019</code> and so on. Below is sample data.</p>
<p><a href="https://i.stack.imgur.com/A5auK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A5auK.png" alt="enter image description here" /></a></p>
<p>I have a script, but it takes only one given time period, while I want to give multiple time periods as mentioned above.</p>
<pre><code>from datetime import datetime
import pandas as pd
df = pd.read_csv(r'D:\Data\RT_2015_2020.csv', index_col=[0],parse_dates=[0])
z = df['2016-05-01' : '2017-04-30']
# Want to make like this way
#z = df[['2016-05-01' : '2017-04-30'], ['2017-05-01' : '2018-04-30']]
# It will calculate the mean for the selected time period
z.mean()
</code></pre>
|
<p>If you use dates as an index, you can extract the data with the conditions included in the desired range.</p>
<pre><code>import pandas as pd
import numpy as np
import io
data = '''
Date Mean
18-05-2016 0.31
07-06-2016 0.32
17-07-2016 0.50
15-09-2016 0.62
25-10-2016 0.63
04-11-2016 0.56
24-11-2016 0.56
14-12-2016 0.22
13-01-2017 0.22
23-01-2017 0.23
12-02-2017 0.21
22-02-2017 0.21
'''
df = pd.read_csv(io.StringIO(data), delim_whitespace=True)
df['Date'] = pd.to_datetime(df['Date'])
df.set_index('Date', inplace=True)
df['2016'].head()
Mean
Date
2016-05-18 0.31
2016-07-06 0.32
2016-07-17 0.50
2016-09-15 0.62
2016-10-25 0.63
df.loc['2016-05-01':'2017-01-30']
Mean
Date
2016-05-18 0.31
2016-07-06 0.32
2016-07-17 0.50
2016-09-15 0.62
2016-10-25 0.63
2016-11-24 0.56
2016-12-14 0.22
2017-01-13 0.22
2017-01-23 0.23
df.loc['2016-05-01':'2017-01-30'].mean()
Mean 0.401111
dtype: float64
</code></pre>
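<p>With the dates as a sorted <code>DatetimeIndex</code>, several periods can then be handled in one run of the script, for example:</p>
<pre><code>periods = [('2016-05-01', '2017-04-30'), ('2017-05-01', '2018-04-30')]
means = {f'{start} to {end}': df.loc[start:end, 'Mean'].mean() for start, end in periods}
</code></pre>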
|
python|pandas|time-series
| 0
|
375,268
| 66,251,149
|
How to create a grid in matplotlib out of a 2D numpy array where the items are classes
|
<p>I'm working with a 2D NumPy array of size n x n whose items are instances of a class Square that has a state of either 1 or 0. I don't want to create a new array that contains the int values of my classes, so is there a way I can map my array to a colored grid?</p>
<pre><code>import numpy as np
from random import randrange
class Square():
def __init__(self, state, pos):
self.state = state
self.pos = pos
self.adj_sqs = []
self.optimal_sq = []
def __repr__(self):
return str(self.state)
dim = 10
grid = np.array([[Square(randrange(2), [x,y]) for y in range (dim)] for x in range(dim)])
</code></pre>
|
<p>You can either create a numeric array directly via <code>np.array([[Square(...).state for y in ...] for x in ...])</code>, or transform each element of the array of <code>Square</code>s to get its <code>state</code>:</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
import numpy as np
from random import randrange
class Square():
def __init__(self, state, pos):
self.state = state
self.pos = pos
self.adj_sqs = []
self.optimal_sq = []
def __repr__(self):
return str(self.state)
dim = 10
grid = np.array([[Square(randrange(2), [x, y]) for y in range(dim)] for x in range(dim)])
grid_np = np.array([[grid[x, y].state for y in range(dim)] for x in range(dim)])
plt.pcolor(np.arange(-0.5, dim), np.arange(-0.5, dim), grid_np, cmap=ListedColormap(['crimson', 'turquoise']))
plt.gca().set_aspect('equal') # show square as square
plt.xticks(range(dim))
plt.yticks(range(dim))
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/w2vi5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w2vi5.png" alt="example plot" /></a></p>
|
python|arrays|numpy|matplotlib|grid
| 1
|
375,269
| 66,202,758
|
Python AttributeError: 'list' object has no attribute 'to_csv'
|
<p>I'm currently encountering an error with my code, and I have no idea why. I originally thought it was because I couldn't save a csv file with hyphens in it, but that turns out not to be the case. Does anyone have any suggestions as to what might be causing the problem? My code is below:</p>
<pre><code>import pandas as pd
import requests
query_set = ["points-per-game"]
for query in query_set:
url = 'https://www.teamrankings.com/ncaa-basketball/stat/' + str(query)
html = requests.get(url).content
df_list = pd.read_html(html)
print(df_list)
df_list.to_csv(str(query) + "stat.csv", encoding="utf-8")
</code></pre>
|
<p>The function <code>pd.read_html</code> returns a list of DataFrames found in the HTML source. Use <code>df_list[0]</code> to get the DataFrame which is the first element of this list.</p>
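<p>A minimal sketch of the fix, assuming the first table on each page is the one of interest:</p>
<pre><code>df = df_list[0]
df.to_csv(str(query) + "stat.csv", encoding="utf-8", index=False)
</code></pre>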
|
python|pandas|dataframe|export-to-csv
| 1
|
375,270
| 65,965,272
|
TypeError: 'in <string>' requires string as left operand, not NoneType
|
<p>I am trying to create a simple scraper to gather basketball stats. I was able to get the info I want, however, I can't figure out how to organize it all in a table.</p>
<p>I keep getting a "TypeError: 'in <string>' requires string as left operand, not NoneType."</p>
<p>Please see my code below:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
import re
url = 'https://basketball.realgm.com/ncaa/boxscore/2021-01-29/North-Texas-at-Rice/367436'
page = requests.get(url)
soup = BeautifulSoup(page.content , 'html.parser')
#Extracting Columns
tables = soup.find('div', class_= 'boxscore-gamesummary')
columns = tables.find_all('th', class_='nosort')
#Extracting Stats
tables = soup.find('div', class_= 'boxscore-gamesummary')
stats = tables.find_all('td')
#Filling DataFrame
temp_df = pd.DataFrame(stats).transpose()
temp_df.columns = columns
final_df = pd.concat([final_df,temp_df], ignore_index=True)
final_df
</code></pre>
<p>Looking forward to hearing from someone</p>
|
<p>Pandas already has a built-in method to get a dataframe from HTML which should make things way easier here.</p>
<p><strong>Code</strong></p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
url = 'https://basketball.realgm.com/ncaa/boxscore/2021-01-29/North-Texas-at-Rice/367436'
page = requests.get(url)
soup = BeautifulSoup(page.content , 'html.parser')
tables = soup.find('div', class_= 'boxscore-gamesummary').find_all('table')
df = pd.read_html(str(tables))[0]
print(df)
</code></pre>
<p><strong>Output</strong></p>
<pre><code> Unnamed: 0 1 2 Final
0 UNT (8-5) 36 43 79
1 RU (10-7) 37 37 74
</code></pre>
|
python|pandas|web-scraping|beautifulsoup|screen-scraping
| 1
|
375,271
| 66,234,547
|
How to convert 'float64' to timestamp in pandas dataframe
|
<p>Here's my data</p>
<pre><code>id enter_time
1 1.481044e+12
2 1.486d74e+12
</code></pre>
<p>Here's my expected output</p>
<pre><code>id enter_time enter_timestamp
1 1.481044e+12 2017-07-14 08:10:03
2 1.486774e+12 2017-07-15 08:10:00
</code></pre>
<p>Note: the "enter_timestamp" values in the expected output above are not real values; they are only there to give an idea of the expected format. Below is additional information just in case it is needed for an answer.</p>
<pre><code>In : main['enter_time'].dtype
Out: dtype('float64')
</code></pre>
<p>What I already try</p>
<pre><code>import datetime
main['loan_timestamp'] = datetime.datetime.fromtimestamp(main['loan_time'])
</code></pre>
<p>and the output</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-27-cfe2d514debb> in <module>
1 import datetime
----> 2 main['loan_timestamp'] = datetime.datetime.fromtimestamp(main['loan_time'])
/opt/conda/lib/python3.8/site-packages/pandas/core/series.py in wrapper(self)
127 if len(self) == 1:
128 return converter(self.iloc[0])
--> 129 raise TypeError(f"cannot convert the series to {converter}")
130
131 wrapper.__name__ = f"__{converter.__name__}__"
TypeError: cannot convert the series to <class 'int'>
</code></pre>
|
<p>Try using pandas to_datetime() (I assumed that the character 'd' in your second input float is a typo, so I replaced it):</p>
<pre><code>import pandas as pd
df = pd.DataFrame([(1, 1.481044e+12), (2, 1.48674e+12)], columns=['id', 'enter_time'])
df['enter_timestamp'] = pd.to_datetime(df['enter_time'], unit='ms')
df
id enter_time enter_timestamp
0 1 1.481044e+12 2016-12-06 17:06:40
1 2 1.486740e+12 2017-02-10 15:20:00
</code></pre>
<p>I was assuming that the unit of enter_time is in milliseconds, as seconds or higher will result in an OutOfBoundsError (fundamental limitation of pandas timestamp).</p>
|
python|pandas|dataframe|timestamp
| 1
|
375,272
| 66,245,217
|
JSON inside column DataFrame
|
<p>I'm trying to do a <strong>bulk insert</strong> of a dataframe. My table in Postgres has a field of type <strong>JSON</strong> and I want to insert raw JSON into it, but when I try, Python changes the double quotes <code>"</code> to single quotes <code>'</code>, which technically destroys my JSON column inside the DataFrame. I'm just looking for a way to make this bulk insert work.</p>
<p>First I'm getting my data in JSON format, next I make a DataFrame for data manipulation and cleaning, and finally I want to bulk insert this DF into Postgres.</p>
<p><code>df = pd.DataFrame(response['data'])</code></p>
<p>and this is how Python transforms my JSON from
<code>{ "age_max": 44, "age_min": [20,30] }</code>
to:
<code>{ 'age_max': 44, 'age_min': [20,30] }</code></p>
|
<p>pandas has automagically converted the json into a dictionary object. You can easily convert a dictionary to json using <code>dumps</code> from the built in <code>json</code> module.</p>
<pre class="lang-py prettyprint-override"><code>import requests
from json import dumps
import pandas
import psycopg2
#sample dataset
df = pandas.DataFrame.from_dict(
{'date': {0: '2021-02-16',
1: '2021-02-15',
2: '2021-02-14',
3: '2021-02-13',
4: '2021-02-12'},
'name': {0: 'East Midlands',
1: 'East Midlands',
2: 'East Midlands',
3: 'East Midlands',
4: 'East Midlands'},
'cases': {0: {'new': 174, 'cumulative': 294582},
1: {'new': 1477, 'cumulative': 294408},
2: {'new': 899, 'cumulative': 292931},
3: {'new': 898, 'cumulative': 292032},
4: {'new': 1268, 'cumulative': 291134}}}
)
df['json'] = df['cases'].apply(dumps) #create new series running the function json.dumps against each element in the series
p = df[['date', 'name', 'json']].values.tolist() #create parameter list
con = db_connection() #replace with your db connection function or psycopg2.connect()
csr = con.cursor()
sql = """insert into corona (date, name, json) values (%s, %s, %s)"""
csr.executemany(sql, params=p)
con.commit()
con.close()
</code></pre>
|
python|sql|json|pandas|postgresql
| 0
|
375,273
| 66,128,264
|
Vectorization & ValueError, but not from "or" and "and" operators
|
<p>This <a href="https://stackoverflow.com/questions/36921951/truth-value-of-a-series-is-ambiguous-use-a-empty-a-bool-a-item-a-any-o">question and answer chain</a> do a great job explaining how to resolve ValueErrors that come up when utilizing conditionals, e.g. "or" instead of |, and "and" instead of &. But I don't see anything in that chain that resolves the problem of a "ValueError, Truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()" for vectorization when trying to use vectorization with a function that was written to take a single number as an input.</p>
<p>Specifically, Map and Apply work fine in this case, but Vectorization still throws the ValueError.</p>
<p>Code below, and can someone share how to fix this so vectorization can be used without modifying the function (or without modifying the function too much)? Thank you!</p>
<hr />
<p>Code:</p>
<pre><code># import numpy and pandas, create dataframe.
import numpy as np
import pandas as pd
x = range(1000)
df = pd.DataFrame(data = x, columns = ['Number'])
# define simple function to return True or False if number passed in is prime
def is_prime(num):
if num < 2:
return False
elif num == 2:
return True
else:
for i in range(2,num):
if num % i == 0:
return False
return True
# Call various ways of applying the function to the data frame
df['map prime'] = list(map(is_prime, df['Number']))
df['apply prime'] = df['Number'].apply(is_prime)
# look at dataframe
in: df.head()
out:    Number  map prime  apply prime
     0       0      False        False
     1       1      False        False
     2       2       True         True
     3       3       True         True
     4       4      False        False
# now try vectorizing
in: df['optimize prime'] = is_prime(df['Number'])
out: ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
|
<p>You could try out numpy's <code>vectorize</code>:</p>
<pre><code>vis_prime = np.vectorize(is_prime)
df['optimize prime'] = vis_prime(df['Number'])
</code></pre>
<p>That gives you:</p>
<pre><code> Number map prime apply prime optimize prime
0 0 False False False
1 1 False False False
2 2 True True True
3 3 True True True
4 4 False False False
</code></pre>
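<p>One caveat: the NumPy docs note that <code>np.vectorize</code> is essentially a convenience loop rather than true vectorization, so it won't be much faster than <code>map</code>/<code>apply</code>. If speed matters, the primality check itself can be rewritten with array operations; a rough sketch using trial division by every candidate divisor up to the square root of the largest value:</p>
<pre><code>def is_prime_vec(nums):
    nums = np.asarray(nums)
    result = nums >= 2
    for i in range(2, int(nums.max() ** 0.5) + 1):
        result &= (nums % i != 0) | (nums == i)
    return result

df['optimize prime'] = is_prime_vec(df['Number'])
</code></pre>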
|
python|pandas|numpy|vectorization|apply
| 2
|
375,274
| 66,123,800
|
Pandas drop nan in a specific row ('Feb-29') and shift remaining rows up
|
<p>I have a pandas dataframe containing several years of timeseries data as columns. Each starts in November and ends in the subsequent year. I'm trying to deal with NaN's in non-leap years. The structure can be recreated with something like this:</p>
<pre><code>import pandas as pd
import numpy as np
from datetime import datetime, timedelta
ndays = 151
sdates = [(datetime(2019,11,1) + timedelta(days=x)).strftime("%b-%d") for x in range(ndays)]
columns=list(range(2016,2021))
df = pd.DataFrame(np.random.randint(0,100,size=(ndays, len(columns))), index=sdates, columns=columns)
df.loc["Feb-29",2017:2019] = np.nan
df.loc["Feb-28":"Mar-01"]
Out[61]:
2016 2017 2018 2019 2020
Feb-28 36 59.0 74.0 19.0 24
Feb-29 85 NaN NaN NaN 6
Mar-01 24 75.0 49.0 99.0 82
</code></pre>
<p>What I want to do is remove the "Feb-29" NaN data only (in the non-leap years) and then shift the data in those columns up a row, leaving the leap years as-is. Something like this, with Mar-01 and subsequent rows shifted up for 2017 through 2019:</p>
<pre><code> 2016 2017 2018 2019 2020
Feb-28 36 59.0 74.0 19.0 24
Feb-29 85 75.0 49.0 99.0 6
Mar-01 24 42.0 21.0 41.0 82
</code></pre>
<p>I don't care that "Mar-01" data will be labelled as "Feb-29" as eventually I'll be replacing the string date index with an integer index.</p>
<p>Note that I didn't include this in the example but I have NaN's at the start and end of the dataframe in varying rows that I do not want to remove (i.e., I can't just remove all NaN data, I need to target "Feb-29" specifically)</p>
|
<p>It sounds like you don't actually want to shift dates up, but rather number them correctly based on the day of the year? If so, this will work:</p>
<p>First, make the DataFrame long instead of wide:</p>
<pre><code>df = pd.DataFrame(
{
"2016": {"Feb-28": 36, "Feb-29": 85, "Mar-01": 24},
"2017": {"Feb-28": 59.0, "Feb-29": None, "Mar-01": 75.0},
"2018": {"Feb-28": 74.0, "Feb-29": None, "Mar-01": 49.0},
"2019": {"Feb-28": 19.0, "Feb-29": None, "Mar-01": 99.0},
"2020": {"Feb-28": 24, "Feb-29": 6, "Mar-01": 82},
}
)
df = (
df.melt(ignore_index=False, var_name="year", value_name="value")
.reset_index()
.rename(columns={"index": "month-day"})
)
df
month-day year value
0 Feb-28 2016 36.0
1 Feb-29 2016 85.0
2 Mar-01 2016 24.0
3 Feb-28 2017 59.0
4 Feb-29 2017 NaN
5 Mar-01 2017 75.0
6 Feb-28 2018 74.0
7 Feb-29 2018 NaN
8 Mar-01 2018 49.0
9 Feb-28 2019 19.0
10 Feb-29 2019 NaN
11 Mar-01 2019 99.0
12 Feb-28 2020 24.0
13 Feb-29 2020 6.0
14 Mar-01 2020 82.0
</code></pre>
<p>Then remove rows containing an invalid date and get the day of the year for remaining days:</p>
<pre><code>df["date"] = pd.to_datetime(
df.apply(lambda row: " ".join(row[["year", "month-day"]]), axis=1), errors="coerce",
)
df = df[df["date"].notna()]
df["day_of_year"] = df["date"].dt.dayofyear
df
month-day year value date day_of_year
0 Feb-28 2016 36.0 2016-02-28 59
1 Feb-29 2016 85.0 2016-02-29 60
2 Mar-01 2016 24.0 2016-03-01 61
3 Feb-28 2017 59.0 2017-02-28 59
5 Mar-01 2017 75.0 2017-03-01 60
6 Feb-28 2018 74.0 2018-02-28 59
8 Mar-01 2018 49.0 2018-03-01 60
9 Feb-28 2019 19.0 2019-02-28 59
11 Mar-01 2019 99.0 2019-03-01 60
12 Feb-28 2020 24.0 2020-02-28 59
13 Feb-29 2020 6.0 2020-02-29 60
14 Mar-01 2020 82.0 2020-03-01 61
</code></pre>
|
python|pandas|dataframe|numpy|nan
| 1
|
375,275
| 66,060,254
|
Google Cloud Platform - AI Platform: why do I get different response body when calling API?
|
<p>I created 2 models on Google Cloud AI Platform and I am wondering why I get different response bodies when calling the REST API with Python.<br />
To be more specific:</p>
<ul>
<li>In the first case, I get 2 dictionaries (keys: "predictions" and "dense_1", the latter is the output layer name of my tensorflow model)<br />
<code>{'predictions': [{'dense_1': [9.130606807519459e-23, 4.872276949885089e-23, 0.002939987927675247, 0.957423210144043, 0.0, 7.103511528994133e-11, 6.0420668887672946e-05, 0.039576299488544464, 3.989315388447379e-12, 8.409963248741903e-32]}]}</code></li>
<li>In the second case, I get 1 dictionary (key: "predictions").<br />
<code>{'predictions': [[9.13060681e-23, 4.87227695e-23, 0.00293998793, 0.95742321, 0.0, 7.10351153e-11, 6.04206689e-05, 0.0395763, 3.98931539e-12, 8.40996325e-32]]}</code></li>
</ul>
<p>This is weird because I am using the exact same model from GCS. The only difference between those 2 models is that the second one has a region endpoint in Europe and they don't run on same machine type (but I don't think there is a link with my issue).</p>
<p>EDIT : Here is my request method. I used <code>regional_endpoint = None</code> in case 1 and <code>regional_endpoint = "europe-west1"</code> in case 2</p>
<pre><code>project_id = "my_project_id"
model_id = "my_model_id"
version_id = None # if None, default version is used
regional_endpoint = None # "europe-west1"
def predict(project, model, instances, version=None, regional_endpoint=None):
'''
Send JSON data to a deployed model for prediction.
Args:
- project (str): Project ID where the AI Platform model is deployed
- model (str): Model ID
- instances (tensor): model's expected inputs
- version (str): Optional. Version ID
- regional_endpoint (str): Optional. See https://cloud.google.com/dataflow/docs/concepts/regional-endpoints
Returns:
- dictionary of prediction results
'''
input_data_json = {"signature_name": "serving_default", "instances": instances.tolist()}
model_path = "projects/{}/models/{}".format(project_id, model_id)
if version is not None:
model_path += "/versions/{}".format(version)
if regional_endpoint is not None:
endpoint = 'https://{}-ml.googleapis.com'.format(regional_endpoint)
regional_endpoint = ClientOptions(api_endpoint=endpoint)
ml_ressource = googleapiclient.discovery.build("ml", "v1", client_options=regional_endpoint).projects()
request = ml_ressource.predict(name=model_path, body=input_data_json)
response = request.execute()
if "error" in response:
raise RuntimeError(response["error"])
return response["predictions"]
</code></pre>
<p>I get the same result using gcloud command:</p>
<pre><code>$ gcloud ai-platform predict --model=my_model_id --json-request=data.json --region=europe-west1
Using endpoint [https://europe-west1-ml.googleapis.com/]
[[5.64439188e-06, 1.11136234e-09, 4.66703168e-05, 1.34729596e-08, 2.34136132e-05, 1.52856941e-07, 0.999924064, 3.328397e-10, 3.32789263e-08, 3.37864092e-09]]
$ gcloud ai-platform predict --model=my_model_id --json-request=data.json
Using endpoint [https://ml.googleapis.com/]
DENSE_1
[5.644391876558075e-06, 1.1113623354930269e-09, 4.6670316805830225e-05, 1.3472959636828818e-08, 2.341361323487945e-05, 1.528569413267178e-07, 0.9999240636825562, 3.328397002455574e-10, 3.327892628135487e-08, 3.378640922591103e-09]
</code></pre>
|
<p>I have reproduced the same behavior.<br/>
From the list of endpoints, I have already tested the following:</p>
<ul>
<li>europe-west1</li>
<li>asia-east1</li>
<li>us-east1</li>
<li>australia-southeast1</li>
</ul>
<p>And neither of them returns the output layer’s name like the global endpoint does.</p>
<p>I have already informed the AI Platform product team of this behavior and created a <a href="https://issuetracker.google.com/issues/180485182" rel="nofollow noreferrer">public issue</a> on issuetracker to track their progress.<br/>
Therefore, I suggest that all future communication about it should be done on issuetracker.</p>
|
python|tensorflow|google-cloud-platform|google-ai-platform
| 1
|
375,276
| 66,210,021
|
Is there a way to make numpy work with Maya 2020?
|
<p>I have Python 3.9.1 with numpy 1.19.4 installed, and Maya 2020. I have installed a plug-in (SMPL, actually, from here: <a href="https://smpl.is.tue.mpg.de/downloads" rel="nofollow noreferrer">https://smpl.is.tue.mpg.de/downloads</a>); it loads without any problems, but errors when it hits the first line that actually references numpy (<code>np.array()</code>...) with the error being this:</p>
<pre><code>'module' object has no attribute 'array'
</code></pre>
<p>I suspect I may be using a version of numpy that Maya does not like. Has anybody else come across this? Any hints on how to resolve would be most welcome. Thanks!</p>
|
<p>OK, solved! Thanks @mad-physicist for nudging me towards the correct direction.</p>
<p>The issue boiled down to requiring a maya-compatible build of numpy, to be pip-installed under the specific python instance (mayapy.exe) that ships with the Maya installation.</p>
<p>The details here: <a href="https://forums.autodesk.com/t5/maya-programming/numpy-2018-2019/td-p/9349010" rel="nofollow noreferrer">https://forums.autodesk.com/t5/maya-programming/numpy-2018-2019/td-p/9349010</a></p>
<p>It is supposed to be for Maya 2018 and 2019, but it worked just fine with my 2020 installation too.</p>
|
python|numpy
| 1
|
375,277
| 66,192,477
|
Iterating through multiple rows using multiple values from nested dictionary to update data frame in python
|
<p>I created a nested dictionary to keep multiple values for each combination; example rows in the dictionary are as follows:</p>
<p><code>dict = {'A': {B: array([1,2,3,4,5,6,7,8,9,10]), C: array([array([1,2,3,4,5,6,7,8,9,10],...}}</code></p>
<p>There are multiple As, and within each, multiple arrays. Now I want to update the data frame, which has the following rows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Col 1</th>
<th>Col 2</th>
<th>Col 3</th>
<th>Col 4</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>B</td>
<td>2</td>
<td>10</td>
</tr>
<tr>
<td>A</td>
<td>C</td>
<td>3</td>
<td>10</td>
</tr>
</tbody>
</table>
</div>
<p>In this data frame, depending on the value in Col 3, I need to create rows: for example, A and B will have two rows, and each row's Col 4 is multiplied by the corresponding value from the dictionary (the first row by 1, the 2nd by 2 from the array, and so on). The output will be as follows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Col 1</th>
<th>Col 2</th>
<th>Col 3</th>
<th>Col 4</th>
<th>Col 5</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>B</td>
<td>1</td>
<td>10</td>
<td>10</td>
</tr>
<tr>
<td>A</td>
<td>B</td>
<td>1</td>
<td>10</td>
<td>20</td>
</tr>
<tr>
<td>A</td>
<td>C</td>
<td>1</td>
<td>10</td>
<td>10</td>
</tr>
<tr>
<td>A</td>
<td>C</td>
<td>1</td>
<td>10</td>
<td>20</td>
</tr>
<tr>
<td>A</td>
<td>C</td>
<td>1</td>
<td>10</td>
<td>30</td>
</tr>
</tbody>
</table>
</div>
<p>I can access all the value from array by iterating in the dictionary as follows:-</p>
<pre><code>for i in dict:
for j in dict[i]:
dict[i][j]
</code></pre>
<p>But I am then not able to iterate through each row of the data frame to multiply the value for each Col 1 and Col 2 combination to create Col 5. Please suggest the most optimal way to loop through the dataframe on Col 1 and Col 2 and use the values from the dictionary, based on the number of rows from Col 3, to multiply Col 4, considering there are multiple rows per Col 1 and Col 2 combination and the dictionary has 10 values for each combination.</p>
<p><strong>EDIT:</strong></p>
<p>Iterating through the dictionary is important, as each combination will have different values; for a simple explanation I wrote them as 1, 2, ...etc. But the dictionary is created by other code in which each combination will have different values,</p>
<p>for example it can be like</p>
<pre><code>"dict = {'A': {B: array([0.5,0.2,3,4,5,6,7,8,9,10]), C: array([array([0.9,0.6,0.2,4,5,6,7,8,9,10],...}}"
</code></pre>
<p>and in this case Col 4 of the first row of the A and B combination will be multiplied by 0.5 and the 2nd by 0.2; in the case of A and C, the first row will be multiplied by 0.9, the 2nd by 0.6 and the 3rd by 0.2.</p>
<p>Looking for help on how to iterate through those values from the dictionary and update the data frame; the dictionary has 10 values for each combination, and in the data frame each combination can have anywhere between 0 and 10 rows, so the values need to be updated accordingly.</p>
|
<h1>EDIT Ver 2: Reference Dict and pick dict index val</h1>
<p>The dictionary you created is a bit confusing. I assume you wanted to reference it the way I have shown (not an array of arrays, as shown for C). I also assume <code>B</code> and <code>C</code> are literal keys and not variables <code>B</code> and <code>C</code>.</p>
<p>I created the dictionary <code>dct</code> (<code>dict</code> is a built-in name in Python and best not shadowed), with different values to show that it picks the value, not the index.</p>
<pre><code>import pandas as pd
import numpy as np
dct = {'A': {'B': np.array([.2,.4,.6,.8,1.0,1.2,1.4,1.6,1.8,2.0]),
'C': np.array([.3,.6,.9,1.2,1.5,1.8,2.1,2.4,2.7,3.0])
}
}
c = ['Col 1','Col 2','Col 3','Col 4']
d = [['A','B',2,10], ['A','C',3,10]]
df = pd.DataFrame(d,columns=c)
#repeat the values as per times in Col 3. This will create dups in 1 and 2
df = df.loc[df.index.repeat(df['Col 3'])]
#Now groupby Col 1 and Col 2 and count the number of times we have Col 3 value
#This will give you index to reference the dictionary
df['Col 5'] = (df.groupby(['Col 1','Col 2'])['Col 3'].transform('cumcount'))
#Using the cumcount as index, pick the value from dict using keys Col 1, Col 2 and index Col 5
df['Col 5'] = df.apply(lambda x: dct[x['Col 1']][x['Col 2']][x['Col 5']],axis=1)
print (df)
</code></pre>
<p>The output of this will be:</p>
<pre><code> Col 1 Col 2 Col 3 Col 4 Col 5
0 A B 2 10 0.2
0 A B 2 10 0.4
1 A C 3 10 0.3
1 A C 3 10 0.6
1 A C 3 10 0.9
</code></pre>
<p>If you want to multiply Col 5 by the Col 4 value, it's very simple. Change the expression to the following (multiply Col 4 by the value pulled from the dictionary):</p>
<pre><code>df['Col 5'] = df.apply(lambda x: x['Col 4'] * dct[x['Col 1']][x['Col 2']][x['Col 5']],axis=1)
</code></pre>
<p>The result of this will be:</p>
<pre><code> Col 1 Col 2 Col 3 Col 4 Col 5
0 A B 2 10 2.0
0 A B 2 10 4.0
1 A C 3 10 3.0
1 A C 3 10 6.0
1 A C 3 10 9.0
</code></pre>
<h1>EDIT Ver 1: Not referencing dictionary</h1>
<p>If you are just looking to have increments of 10 in <code>Col 5</code> for each group of <code>Col 1</code> and <code>Col 2</code>, then you can do this.</p>
<pre><code>c = ['Col 1','Col 2','Col 3','Col 4']
d = [['A','B',2,10],
['A','C',3,10]]
import pandas as pd
df = pd.DataFrame(d,columns=c)
df = df.loc[df.index.repeat(df['Col 3'])]
df['Col 5'] = (df.groupby(['Col 1','Col 2'])['Col 3'].transform('cumcount')+1)*10
print (df)
</code></pre>
<p>The output of this will be:</p>
<pre><code> Col 1 Col 2 Col 3 Col 4 Col 5
0 A B 2 10 10
0 A B 2 10 20
1 A C 3 10 10
1 A C 3 10 20
1 A C 3 10 30
</code></pre>
<p>If you want Col 3 to have a value of 1, then:</p>
<pre><code>df['Col 3'] = 1
</code></pre>
<p>This will then result in:</p>
<pre><code> Col 1 Col 2 Col 3 Col 4 Col 5
0 A B 1 10 10
0 A B 1 10 20
1 A C 1 10 10
1 A C 1 10 20
1 A C 1 10 30
</code></pre>
<p>If you need it to reference the dictionary, then I need to change the code.</p>
|
python|arrays|pandas|dictionary|for-loop
| 0
|
375,278
| 65,959,870
|
Pandas Equivalent for SQL window function and rows range
|
<p>Consider the minimal example</p>
<pre><code>customer day purchase
Joe 1 5
Joe 1 10
Joe 2 5
Joe 2 5
Joe 4 10
Joe 7 5
</code></pre>
<p>In BigQuery, one would do something similar to this to get how much the customer spent in the last 2 days for every day:</p>
<pre><code>SELECT customer, day
, sum(purchase) OVER (PARTITION BY customer ORDER BY day ASC RANGE between 2 preceding and 1 preceding)
FROM table
</code></pre>
<p>What would be the equivalent in pandas? i.e., expected outcome</p>
<pre><code>customer day purchase amount_last_2d
Joe 1 5 null -- spent days [-,-]
Joe 1 10 null -- spent days [-,-]
Joe 2 5 15 -- spent days [-,1]
Joe 2 5 15 -- spent days [-,1]
Joe 4 10 10 -- spent days [2,3]
Joe 7 5 0 -- spent days [5,6]
</code></pre>
|
<p>Not sure if this is the right way to go, and it is limited since only one customer is provided; if there were different customers, I would use <code>merge</code> instead of <code>map</code>. Note also that there is an implicit assumption that the days are already in ascending order:</p>
<p>Get the purchase sum based on the groupby combination of <code>customer</code> and <code>day</code> and create a mapping between <code>day</code> and the sum:</p>
<pre><code>sum_purchase = (df.groupby(["customer", "day"])
.purchase
.sum()
.shift()
.droplevel(0))
</code></pre>
<p>Again, for multiple customers, I would not drop the <code>customer</code> index, and instead use a merge below:</p>
<p>Get a mapping of the days with the difference between the days:</p>
<pre><code>diff_2_days = (df.drop_duplicates("day")[["day"]]
.set_index("day", drop=False)
.diff()
.day)
</code></pre>
<p>Create the new column by mapping the above values to the day column, then use <code>np.where</code> to keep the summed values only where the diff is less than or equal to 2:</p>
<pre><code>(
df.assign(
diff_2_days = df.day.map(diff_2_days),
sum_purchase = df.day.map(sum_purchase),
final=lambda df: np.where(df.diff_2_days.le(2),
df.sum_purchase,
np.nan))
.drop(columns=["sum_purchase", "diff_2_days"])
)
customer day purchase final
0 Joe 1 5 NaN
1 Joe 1 10 NaN
2 Joe 2 5 15.0
3 Joe 2 5 15.0
4 Joe 4 10 10.0
5 Joe 7 5 NaN
</code></pre>
<p>Ran your code in postgres to get an idea of what range does and how it differs from rows; quite insightful. I think for windows functions, SQL got this covered and easily too.</p>
<p>SO, let me know where this falls on its face, and I'll gladly have a rejig at it.</p>
|
pandas|google-bigquery|range|window-functions
| 2
|
375,279
| 66,208,359
|
Delete rows from dataframe if column value does not exist in another dataframe
|
<p>I have two datasets, each with two columns (can be made into one column) and 1000s of rows.</p>
<pre><code>A = pd.DataFrame([['07/05/2013 08:00', 1.871287], ['07/05/2013 08:15', 1.878118], ['07/05/2013 08:30', 1.882696], ['07/05/2013 08:45', 1.891523], ['07/05/2013 09:00', 1.876457]], columns=['C', 'D'])
B = pd.DataFrame([['07/05/2013 08:00', 0.942500], ['07/05/2013 08:15', 0.959445], ['07/05/2013 09:00', 0.975362], ['07/05/2013 09:15', 0.981597], ['07/05/2013 09:30', 0.987643]], columns=['E', 'F'])
</code></pre>
<p>One column is a timestamp, the other is a measurement. I need to correlate the two measurements, but one of the datasets has extra measurements.</p>
<p>I want the code to read one row from the first (full) dataset, check if there is a corresponding measurement in the second dataset made at the same date/time, and if not then delete the entire row from the first dataset. I want to repeat this for all rows in the first dataset.</p>
<p>Visual example:
<img src="https://i.stack.imgur.com/0vnGg.png" alt="1" /></p>
<p>Would an <code>if</code> statement to check if the datetime of one df exists in the other, and keep the row if true but remove it if false, work? This seems inefficient. Could this process instead be achieved more efficiently through merges/joins, or some other way?</p>
|
<p>Your question doesn't contain enough information. So I'll try to guess and show you a toy example.
If you're using pandas, then the solution would be:</p>
<pre><code>>>> df1 = pd.DataFrame([x for x in pd.date_range('1/1/2020', '3/1/2020')], columns=['date'])
>>> df2 = pd.DataFrame([x for x in pd.date_range('2/20/2020', '3/1/2020')], columns=['date'])
>>> df1.shape
out: (61, 1)
>>> df2.shape
out: (11, 1)
>>> df1.head()
out:
date
0 2020-01-01
1 2020-01-02
2 2020-01-03
3 2020-01-04
4 2020-01-05
>>> df2.head()
out:
date
0 2020-02-20
1 2020-02-21
2 2020-02-22
3 2020-02-23
4 2020-02-24
>>> new_df = df1[df1['date'].isin(df2['date'])]
>>> new_df
out:
date
50 2020-02-20
51 2020-02-21
52 2020-02-22
53 2020-02-23
54 2020-02-24
55 2020-02-25
56 2020-02-26
57 2020-02-27
58 2020-02-28
59 2020-02-29
60 2020-03-01
>>> new_df.shape
out: (11, 1)
</code></pre>
<p>Now in the "new_df" you will have only those dates which are contained in both dataframes</p>
|
python|pandas|dataframe
| 4
|
375,280
| 66,003,985
|
How to do image recognition on nearly all black images?
|
<p>I've setup a camera in a squash club and want it to tell me if the squash court is occupied or empty. I trained it with a few hundred images of occupied and empty courts and the results are good.</p>
<p>Now the catch is sometimes the club closes early and the lights get turned off. So I basically have almost black images. I tried adding a few of these images to my "empty" squash court training set. I re-ran the image training but the new model does not predict these dark images as empty. It thinks they are occupied.</p>
<p>I next tried creating a new class called "court_closed". I put five of these dark images there and re-trained. Now the model thinks dark images are "empty". That is technically an improvement over thinking they are occupied. But why is it not predicting them as "court_closed"? Do I need to add hundreds of nearly identical dark/black images?</p>
<p>Here's an example image:</p>
<p><a href="https://i.stack.imgur.com/Ela3i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ela3i.png" alt="enter image description here" /></a></p>
|
<p>Think I figured it out by just using Imagemagick command line. I can convert the image to HSI or LAB and get the brightness (Intensity or Luminosity) from the average of the I or L channel.</p>
<pre><code>convert court1.jpg -colorspace HSI -channel b -separate +channel -scale 1x1 -format "%[fx:100*u]\n" info:
</code></pre>
<p>The result will be between 0 and 100%, with 0 being black and 100 being white.</p>
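<p>If you would rather stay in Python, a rough equivalent is to average the pixel intensities of the grayscale image (a sketch assuming Pillow and NumPy are available; the 5% cutoff is a guess to tune against your own night-time frames):</p>
<pre><code>import numpy as np
from PIL import Image

def brightness_percent(path):
    # Average intensity of the grayscale image: 0 = black, 100 = white
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    return 100.0 * gray.mean() / 255.0

# Flag very dark frames as "court_closed" before running the classifier
if brightness_percent("court1.jpg") < 5.0:
    print("court_closed")
</code></pre>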
|
tensorflow|computer-vision|image-recognition
| 0
|
375,281
| 66,104,657
|
How to fill NaN based on groupby transform without losing the column grouped by?
|
<p>I have a dataset containing heights, weights etc, and I intend to fill the NaN values with the mean value for that gender.</p>
<p>Example dataset:</p>
<pre><code> gender height weight
1 M 5 NaN
2 F 4 NaN
3 F NaN 40
4 M NaN 50
</code></pre>
<pre><code>df = df.groupby("Gender").transform(lambda x: x.fillna(x.mean()))
</code></pre>
<p>current output:</p>
<pre><code> height weight
1 5 50
2 4 40
3 4 40
4 5 50
</code></pre>
<p>Expected output:</p>
<pre><code> gender height weight
1 M 5 50
2 F 4 40
3 F 4 40
4 M 5 50
</code></pre>
<p>Unfortunately this drops the column Gender which is important later on.</p>
|
<p>How about looping through the 2 columns you want to fill and performing <code>GroupBy.transform</code>, grouping by 'gender':</p>
<pre><code>for col in ['height','weight']:
df[col] = df.groupby('gender')[col].transform(lambda x: x.fillna(x.mean()))
print(df)
gender height weight
0 M 5.0 50.0
1 F 4.0 40.0
2 F 4.0 40.0
3 M 5.0 50.0
</code></pre>
<p>If you want to fill all the numerical columns, you can get them in a <code>list</code>, and perform the same approach:</p>
<pre><code>features_to_impute = [
x for x in df.columns if df[x].dtypes != 'O' and df[x].isnull().mean() > 0
]
for col in features_to_impute:
df[col] = df.groupby('gender')[col].transform(lambda x: x.fillna(x.mean()))
</code></pre>
|
python|pandas
| 1
|
375,282
| 52,621,497
|
Pandas - group by column and transform the data to numpy array
|
<p>Having the following data frame, group A have 4 samples, B 3 samples and C 1 sample:</p>
<pre><code> group data_1 data_2
0 A 1 4
1 A 2 5
2 A 3 6
3 A 4 7
4 B 1 4
5 B 2 5
6 B 3 6
7 C 1 4
</code></pre>
<p>I would like to transform the data into a NumPy array, where each row is a group with all its samples, with zero padding for groups that have fewer samples.</p>
<p>Resulting in an array like so:</p>
<pre><code>[
[[1,4],[2,5],[3,6],[4,7]], # this is A group 4 samples
[[1,4],[2,5],[3,6],[0,0]], # this is B group 3 samples
[[1,4],[0,0],[0,0],[0,0]], # this is C group 1 sample
]
</code></pre>
|
<p>First it is necessary to add the missing values. The first solution uses <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="noreferrer"><code>unstack</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="noreferrer"><code>stack</code></a>; the counter Series is created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="noreferrer"><code>cumcount</code></a>.</p>
<p>The second solution uses <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="noreferrer"><code>reindex</code></a> with a <code>MultiIndex</code>.</p>
<p>Last, a lambda function with <code>groupby</code> converts each group to a NumPy array via <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.values.html" rel="noreferrer"><code>values</code></a> and then to lists:</p>
<pre><code>g = df.groupby('group').cumcount()
L = (df.set_index(['group',g])
.unstack(fill_value=0)
.stack().groupby(level=0)
.apply(lambda x: x.values.tolist())
.tolist())
print (L)
[[[1, 4], [2, 5], [3, 6], [4, 7]],
[[1, 4], [2, 5], [3, 6], [0, 0]],
[[1, 4], [0, 0], [0, 0], [0, 0]]]
</code></pre>
<p>Another solution:</p>
<pre><code>g = df.groupby('group').cumcount()
mux = pd.MultiIndex.from_product([df['group'].unique(), g.unique()])
L = (df.set_index(['group',g])
.reindex(mux, fill_value=0)
.groupby(level=0)['data_1','data_2']
.apply(lambda x: x.values.tolist())
.tolist()
)
</code></pre>
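<p>If an actual NumPy array is needed rather than nested Python lists, the padded result can be stacked directly, since every group now has the same number of samples (a small follow-up sketch):</p>
<pre><code>import numpy as np

arr = np.array(L)
print(arr.shape)   # (3, 4, 2) -> groups x samples x columns
</code></pre>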
|
python|pandas|pivot|grouping
| 18
|
375,283
| 52,723,136
|
Iterating by index groups in python
|
<p>I need to send VIN data to an API in groups. The VINs are grouped by their first three letters, called a <code>wmi</code>. The <code>wmi</code> is the data frame's index. I'm testing this as I go, and I cannot get just the VINs to print when trying to call by groups. The code below is the closest I got after a few hours of work just trying to get the VINs only to print.</p>
<p>In my actual code, I need to pull a group by WMI and send those VINs only to the API; then pull the next group and send its VINs only to the API. My loop is incorrect somewhere.</p>
<pre><code>#Stack exchange
import pandas as pd
#develop the data
df = pd.DataFrame(columns = ["vin"], data = ['LHJLC79U58B001633','SZC84294845693987','LFGTCKPA665700387','L8YTCKPV49Y010001',
'LJ4TCBPV27Y010217','LFGTCKPM481006270','LFGTCKPM581004253','LTBPN8J00DC003107',
'1A9LPEER3FC596536','1A9LREAR5FC596814','1A9LKEER2GC596611','1A9L0EAH9C596099',
'22A000018'])
df['wmi'] = df['vin'].str[0:3]
df.set_index('wmi', inplace = True)
for name, group in df.groupby('wmi'):
df1 = pd.DataFrame()
for i in group:
i = group.vin
print(i)
</code></pre>
|
<p>What about <code>apply</code> for grouped data?</p>
<pre><code>def do_something(df):
print(df)
df = pd.DataFrame(columns = ["vin"], data = ['LHJLC79U58B001633','SZC84294845693987',
'LFGTCKPA665700387','L8YTCKPV49Y010001',
'LJ4TCBPV27Y010217','LFGTCKPM481006270',
'LFGTCKPM581004253','LTBPN8J00DC003107',
'1A9LPEER3FC596536','1A9LREAR5FC596814',
'1A9LKEER2GC596611','1A9L0EAH9C596099',
'22A000018'])
df['wmi'] = df['vin'].str[0:3]
df.groupby('wmi').apply(do_something)
</code></pre>
<p><strong>OUT:</strong></p>
<pre><code> vin wmi
8 1A9LPEER3FC596536 1A9
9 1A9LREAR5FC596814 1A9
10 1A9LKEER2GC596611 1A9
11 1A9L0EAH9C596099 1A9
vin wmi
8 1A9LPEER3FC596536 1A9
9 1A9LREAR5FC596814 1A9
10 1A9LKEER2GC596611 1A9
11 1A9L0EAH9C596099 1A9
vin wmi
12 22A000018 22A
vin wmi
3 L8YTCKPV49Y010001 L8Y
vin wmi
2 LFGTCKPA665700387 LFG
5 LFGTCKPM481006270 LFG
6 LFGTCKPM581004253 LFG
vin wmi
0 LHJLC79U58B001633 LHJ
vin wmi
4 LJ4TCBPV27Y010217 LJ4
vin wmi
7 LTBPN8J00DC003107 LTB
vin wmi
1 SZC84294845693987 SZC
</code></pre>
|
python-3.x|pandas|iterator
| 0
|
375,284
| 52,878,460
|
How to calculate the accuracy when dealing with multi-class mutlilabel classification in tensorflow?
|
<p>I am working with FER2013Plus dataset from <a href="https://github.com/Microsoft/FERPlus" rel="nofollow noreferrer">https://github.com/Microsoft/FERPlus</a> which contains the fer2013new.csv file. This file contains labels for each image in the dataset. An example on labels could be:</p>
<p>(4, 0, 0, 2, 1, 0, 0, 3) </p>
<p>where each dimension is a different emotion. Finally, in their paper <a href="https://arxiv.org/pdf/1608.01041.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1608.01041.pdf</a>, they converted the label distribution into probabilities, so the new label would become</p>
<p>(0.5, 0, 0, 0.25, 0.125, 0, 0, 0.375)</p>
<p>In other words, the person in the image is happy with a probability of 0.5, sad with a probability of 0.25, and so on... And the sum of the probabilities is 1.</p>
<p>Now while training I used <code>tf.nn.softmax_cross_entropy_with_logits_v2</code> to calculate the loss between my predictions and the labels. Now how to compute the accuracy?</p>
<p>Any help is much appreciated!!</p>
|
<p>Here is an excerpt from the paper:</p>
<p>"We take the majority emotion as the single
emotion label, and we measure prediction accuracy against
the majority emotion."</p>
<p>They are using a discrete classification task. So you just need to take the <code>tf.argmax()</code> on your logits to get the highest probability, and then compare that with the <code>tf.argmax()</code> of the labels.</p>
<p>For example, if your label is <code>(0.5, 0, 0, 0.25, 0.125, 0, 0, 0.375)</code>, then happy is the majority emotion, so you would check if your logits had happy as the majority emotion as well.</p>
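<p>A minimal sketch of that comparison (assuming <code>logits</code> and <code>labels</code> are <code>[batch_size, num_emotions]</code> tensors, with <code>labels</code> holding the probability distributions described above):</p>
<pre><code>import tensorflow as tf

# Majority-emotion accuracy: argmax of the predictions vs. argmax of the label distribution
correct_prediction = tf.equal(tf.argmax(logits, axis=1), tf.argmax(labels, axis=1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
</code></pre>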
|
tensorflow|prediction|multilabel-classification|multiclass-classification
| 2
|
375,285
| 52,839,576
|
Cannot create a new Timestamp column in pandas based on a conditional w/np.where
|
<p>In the process of writing out a script to automate the compilation of a report, I'm trying to create a column of Timestamps based on a conditional using np.where(). The logic is as follows:</p>
<pre><code>df['StartMonth'] = np.where(
chng['Count'] == 1, pd.Timestamp(
int(year), chng['Month'].astype(int), 1), str('')
)
</code></pre>
<p>The DataFrame is a list of employees who are either considered additions or deletions, where the <code>chng['Count']</code> is used as a flag that shows +1 as an addition and -1 as a deletion. So where any employee is being added, create the <code>StartMonth</code> series where the fixed <code>year</code> variable, the <code>Month</code> of the row, and <code>1</code> are used as the basis to create the timestamp (both <code>year</code> and chng['Month'] are strings, hence casting them as integers in the conditional). The output of the function comes up as the following for each <code>True</code> row:</p>
<pre><code> Month Count StartMonth
0 1 1 1970-01-01 00-00-01.000002+00019:00:01
1 1 1 1970-01-01 00-00-01.000002+00019:00:01
2 4 1 1970-01-01 00-00-01.000002+00019:00:01
3 5 1 1970-01-01 00-00-01.000002+00019:00:01
4 10 1 1970-01-01 00-00-01.000002+00019:00:01
</code></pre>
<p>I've tried this with <code>year</code> and <code>chng['Month']</code> already cast as integers prior to the conditional and it's been the same result. The only time it "works" is when <code>chng['Month']</code> is replaced with any other arbitrary number, leading me to believe that is the issue. I have done plenty of other conditionals with <code>np.where()</code> that use values from another Series in the DataFrame (though not as the base for a Timestamp creation) without any problem, so I'm not sure what is causing this.</p>
|
<p>There are a few issues:</p>
<ol>
<li>You should use <code>pd.to_datetime</code> for <strong>vectorised</strong> conversion, rather than <code>pd.Timestamp</code>.</li>
<li><code>numpy.where</code> returns a NumPy array, which is not the same as a Pandas <code>datetime</code> series. But you can feed an array to <code>pd.to_datetime</code>.</li>
<li>You should avoid combining strings with <code>datetime</code> values in a single series. Choose one. Here, instead of <code>''</code> use <code>pd.NaT</code> to ensure your series remains <code>datetime</code>.</li>
</ol>
<p>Here's an example solution:</p>
<pre><code>year = 2018
s = str(year) + '-' + df['Month'].astype(str)
df['StartMonth'] = pd.to_datetime(np.where(df['Count'] == 1, s, pd.NaT))
print(df)
Month Count StartMonth
0 1 1 2018-01-01
1 1 1 2018-01-01
2 4 1 2018-04-01
3 5 1 2018-05-01
4 10 1 2018-10-01
</code></pre>
|
python|pandas|datetime|dataframe
| 0
|
375,286
| 52,774,098
|
How to subtract value from same month last year in pandas?
|
<p>I have the dataframe below and I need to subtract the value from the same month last year and save it in the output column:</p>
<pre><code>date value output
01-01-2012 20 null
01-02-2012 10
01-03-2012 40
01-06-2012 30
01-01-2013 20 0
01-02-2013 30 20
01-02-2014 60 30
01-03-2014 50 null
</code></pre>
|
<p>First create a <code>DatetimeIndex</code>, then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.sub.html" rel="nofollow noreferrer"><code>sub</code></a> to subtract a new <code>Series</code> produced by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.shift.html" rel="nofollow noreferrer"><code>shift</code></a> with a 12-month frequency; <code>MS</code> stands for <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases" rel="nofollow noreferrer">start of month</a>:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'], dayfirst=True)
df = df.set_index('date')
df['new'] = df['value'].sub(df['value'].shift(freq='12MS'))
print (df)
value output new
date
2012-01-01 20 NaN NaN
2012-02-01 10 NaN NaN
2012-03-01 40 NaN NaN
2012-06-01 30 NaN NaN
2013-01-01 20 0.0 0.0
2013-02-01 30 20.0 20.0
2014-02-01 60 30.0 30.0
2014-03-01 50 NaN NaN
</code></pre>
|
pandas
| 2
|
375,287
| 52,850,269
|
Dask: Drop NAs on columns?
|
<p>I have tried to apply a filter to my dask dataframe to remove columns with too many NAs:</p>
<pre><code>df.dropna(axis=1, how='all', thresh=round(len(df) * .8))
</code></pre>
<p>Unfortunately it seems that the dask <code>dropna</code> API is slightly different from that of pandas and does not accept either an <code>axis</code> or a <code>thresh</code> argument.
One partial way around it is to iterate column by column and remove those that are constant (regardless of whether they are filled with NAs or not, as I do not mind getting rid of constants):</p>
<pre><code> for col in df.columns:
if len(df[col].unique()) == 1:
new_df = df.drop(col, axis = 1)
</code></pre>
<p>But this does not let me apply a threshold. I could compute the threshold manually by adding:</p>
<pre><code>elif sum(df[col].isnull().compute()) / len(df[col]) > 0.8:
new_df = df.drop(col, axis = 1)
</code></pre>
<p>But I'm not sure calling <code>compute</code> and <code>len</code> at this point would be optimal and I would be curious to know if there are any better ways to go about this ?</p>
|
<h1>Update 10 Aug 2021:</h1>
<p>Now Dask has <code>axis</code>, <code>thresh</code>, and <code>subset</code> args that may help. The previous answer can be rewritten as:</p>
<pre><code>df.dropna(subset=columns_to_inspect, thresh=threshold_to_drop_na, axis=1)
</code></pre>
<h1>Old answer</h1>
<p>You're right, there is no way to do this by using <code>df.dropna()</code>.</p>
<p>I would suggest using this expression:
<code>df.loc[:,df.isnull().sum()<THRESHOLD]</code></p>
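<p>Spelled out with the 80% threshold from the question, a possible sketch is to materialise the per-column null fractions once and then select columns, so there is a single <code>compute()</code> call instead of one per column:</p>
<pre><code># null_frac is a small pandas Series indexed by column name
null_frac = df.isnull().mean().compute()

# keep only the columns where fewer than 80% of the values are missing
keep_cols = null_frac[null_frac < 0.8].index.tolist()
new_df = df[keep_cols]
</code></pre>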
|
python|pandas|optimization|dask
| 5
|
375,288
| 52,457,962
|
Percentage change with groupby python
|
<p>I have the following dataframe:</p>
<pre><code>Year Month Booked
0 2016 Aug 55999.0
6 2017 Aug 60862.0
1 2016 Jul 54062.0
7 2017 Jul 58417.0
2 2016 Jun 42044.0
8 2017 Jun 48767.0
3 2016 May 39676.0
9 2017 May 40986.0
4 2016 Oct 39593.0
10 2017 Oct 41439.0
5 2016 Sep 49677.0
11 2017 Sep 53969.0
</code></pre>
<p>I want to obtain the percentage change with respect to the same month from last year. I have tried the following code:</p>
<pre><code>df['pct_ch'] = df.groupby(['Month','Year'])['Booked'].pct_change()
</code></pre>
<p>but I get the following, which is not at all what I want:</p>
<pre><code>Year Month Booked pct_ch
0 2016 Aug 55999.0 NaN
6 2017 Aug 60862.0 0.086841
1 2016 Jul 54062.0 -0.111728
7 2017 Jul 58417.0 0.080556
2 2016 Jun 42044.0 -0.280278
8 2017 Jun 48767.0 0.159904
3 2016 May 39676.0 -0.186417
9 2017 May 40986.0 0.033017
4 2016 Oct 39593.0 -0.033987
10 2017 Oct 41439.0 0.046624
5 2016 Sep 49677.0 0.198798
11 2017 Sep 53969.0 0.086398
</code></pre>
|
<p>Do not <code>groupby</code> <em>Year</em>, otherwise you won't get, for instance, <code>Aug 2017</code> and <code>Aug 2016</code> together. Also, use <code>transform</code> to broadcast the results back to the original indices.</p>
<p>Try:</p>
<pre><code>df['pct_ch'] = df.groupby(['Month'])['Booked'].transform(lambda s: s.pct_change())
Year Month Booked pct_ch
0 2016 Aug 55999.0 NaN
6 2017 Aug 60862.0 0.086841
1 2016 Jul 54062.0 NaN
7 2017 Jul 58417.0 0.080556
2 2016 Jun 42044.0 NaN
8 2017 Jun 48767.0 0.159904
3 2016 May 39676.0 NaN
9 2017 May 40986.0 0.033017
4 2016 Oct 39593.0 NaN
10 2017 Oct 41439.0 0.046624
5 2016 Sep 49677.0 NaN
11 2017 Sep 53969.0 0.086398
</code></pre>
|
python|pandas
| 0
|
375,289
| 52,854,826
|
Python Pandas - Aggregation and count
|
<p>I have a dataframe (below is a super simplified version) which holds transaction data on the product bought and the device used:</p>
<pre><code>CUST_ID PRODUCT DEVICE
----------------------
1 A MOBILE
1 B TABLET
2 B LAPTOP
2 A MOBILE
3 C TABLET
3 C TABLET
</code></pre>
<p>I would like to transform it so that I have purchase frequencies for each product and device usage per single cust_id, i.e. a 3x7 dataframe:</p>
<pre><code>CUST_ID PRODUCT_A PRODUCT_B PRODUCT_C DEVICE_MOBILE DEVICE_LAPTOP DEVICE_TABLET
1 1 1 0 1 0 1
2 1 1 0 1 1 0
3 0 0 2 0 0 2
</code></pre>
<p>I tried to use the <code>.pivot_table()</code> function, but it adds indexes and duplicate columns. This is a simplified version; I would need to do this for many products and devices, so maybe a function or loop would be more efficient?</p>
|
<p>You can use <code>pd.get_dummies</code> and <code>df.groupby</code></p>
<pre><code>pd.get_dummies(df, columns=['PRODUCT','DEVICE']).groupby(['CUST_ID'], as_index=False).sum()
</code></pre>
<p>Output:</p>
<pre><code>CUST_ID PRODUCT_A PRODUCT_B PRODUCT_C DEVICE_LAPTOP DEVICE_MOBILE \
0 1 1 1 0 0 1
1 2 1 1 0 1 1
2 3 0 0 2 0 0
DEVICE_TABLET
0 1
1 0
2 2
</code></pre>
|
python|pandas|pivot-table
| 1
|
375,290
| 52,866,239
|
Getting lowest valued duplicated columns only
|
<p>I have a dataframe with 2 columns: <code>value</code> and <code>product</code>. There will be duplicated products, but with different values. What I want to do is to get all products, but remove any duplication. The condition to remove duplication will be to get the row with the lowest value and drop the rest. For example, I want something like this:</p>
<p>Before:</p>
<pre><code>product value
A 25
B 45
C 15
C 14
C 13
B 22
</code></pre>
<p>After</p>
<pre><code>product value
A 25
B 22
C 13
</code></pre>
<p>How can I make it so that only the lowest-valued rows of the duplicated products end up in the new dataframe?</p>
|
<pre><code>df.sort_values('value').groupby('product').first()
# value
#product
#A 25
#B 22
#C 13
</code></pre>
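<p>An equivalent without the sort (a sketch) is a plain groupby-min, which also keeps <code>product</code> as a regular column:</p>
<pre><code>df.groupby('product', as_index=False)['value'].min()
#  product  value
#0       A     25
#1       B     22
#2       C     13
</code></pre>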
|
python|pandas
| 2
|
375,291
| 52,743,888
|
Python arranging a list to include duplicates
|
<p>I have a list in Python that is similar to:</p>
<pre><code>x = [1,2,2,3,3,3,4,4]
</code></pre>
<p>Is there a way using pandas or some other list comprehension to make the list appear like this, similar to a queue system:</p>
<pre><code>x = [1,2,3,4,2,3,4,3]
</code></pre>
|
<p>It is possible, by using <code>cumcount</code> </p>
<pre><code>s=pd.Series(x)
s.index=s.groupby(s).cumcount()
s.sort_index()
Out[11]:
0 1
0 2
0 3
0 4
1 2
1 3
1 4
2 3
dtype: int64
</code></pre>
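<p>To get back a plain Python list like the one in the question, sort on that index and convert; a stable sort keeps the original order within each counter value:</p>
<pre><code>x = s.sort_index(kind='mergesort').tolist()
# [1, 2, 3, 4, 2, 3, 4, 3]
</code></pre>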
|
python|pandas|list|duplicates|unique
| 2
|
375,292
| 52,807,109
|
Pandas dataframe, grouping 3 columns and counting the third
|
<p>I'm trying to group a dataframe by 3 columns, date, time and article, and return an object where I have the groups of date, time and article, and the count of each article per time (hour).</p>
<p>This code does the trick with the grouping, but I can't figure out how to also get the count:</p>
<pre><code>dfs.groupby([dfs['Dato'].dt.date,dfs['Tid'].dt.hour,dfs['Varenavn']])
</code></pre>
<p>so this could be my input:</p>
<pre><code>01.01.2018 0901 Car
01.01.2018 0905 Car
01.01.2018 0945 Horse
01.01.2018 1005 Car
02.01.2018 0900 Horse
02.01.2018 0915 Horse
02.01.2018 1050 Car
02.01.2018 1055 Horse
</code></pre>
<p>Wanted output:</p>
<pre><code>01.01.2018 09-10 Car 2
Horse 1
01.01.2018 10-11 Car 1
02.01.2018 09-10 Horse 2
02.01.2018 10-11 Car 1
Horse 1
</code></pre>
<p>My overall goal is to find how many items were sold per hour per day, from a dataframe containing every sold item, at what time and at what date</p>
|
<p>Assuming columns <code>Dato</code>, <code>Tid</code>, and <code>Varenavn</code> in your OG dataframe, try this:</p>
<pre><code>df['datetime'] = df['Dato'] + str(' ') + df['Tid']
df['datetime'] = pd.to_datetime(df['datetime'], format = '%m.%d.%Y %H%M')
df.groupby([pd.Grouper(key = 'datetime', freq = 'H'), 'Varenavn'])['Varenavn'].count()
</code></pre>
<p>OUTPUT:</p>
<pre><code>datetime Varenavn
2018-01-01 09:00:00 Car 2
Horse 1
2018-01-01 10:00:00 Car 1
2018-02-01 09:00:00 Horse 2
2018-02-01 10:00:00 Car 1
Horse 1
</code></pre>
<p>...implicitly assuming that the hour in the timestamp is the start-time. You can reindex and play with the datetime to get the desired format.</p>
|
python|python-3.x|pandas|pandas-groupby
| 1
|
375,293
| 52,645,495
|
add trailing 0's to a string in a df.Column dependent on length
|
<p>Looking for a sort of chain method to apply to a df.</p>
<p>consider the following DF.</p>
<pre><code>Store
1
33
455
</code></pre>
<p>What I'm trying to do is ascertain the length and pad with 0s based on the length.</p>
<p>I've tried a simple for loop which I thought might work:</p>
<pre><code>for s in df.Store:
if s.str.len() == 3:
"0" + s
</code></pre>
<p>however this doesn't work.</p>
<p>I've also tried slicing the DF and each variable one by one</p>
<pre><code>if df.Store == df[df['Store'].str.len() == 1]:
"0" + df.Store
</code></pre>
<p>but neither work and just return a blank output.</p>
<p>I'm working with an object dtype.</p>
<p>desired output:</p>
<pre><code>Store
0001
0033
0455
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.zfill.html" rel="nofollow noreferrer"><code>Series.str.zfill</code></a>.</p>
<p>If you want to pad with <code>0</code>s up to the maximum string length, you can count the lengths with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.len.html" rel="nofollow noreferrer"><code>Series.str.len</code></a> and take the <code>max</code>:</p>
<pre><code>df['Store'] = df['Store'].astype(str)
df['Store'] = df['Store'].str.zfill(df['Store'].str.len().max())
print (df)
Store
0 001
1 033
2 455
</code></pre>
<p>If you want to pad to a fixed width instead:</p>
<pre><code>df['Store'] = df['Store'].astype(str).str.zfill(4)
print (df)
Store
0 0001
1 0033
2 0455
</code></pre>
|
python|pandas
| 2
|
375,294
| 52,821,367
|
Tensorflow Serving crashes with multiple requests simultaneously
|
<p>Tensorflow Serving crashes with multiple simultaneous requests; the error message is:</p>
<pre><code>*** Error in `tensorflow_model_server': double free or corruption (!prev): 0x00007ff474c18cc0 ***
</code></pre>
<p>I have tried batching, it doesn't work out.</p>
<p>I tried: </p>
<pre><code>sudo apt-get install libtcmalloc-minimal4
export LD_PRELOAD="/usr/lib/libtcmalloc_minimal.so.4"
</code></pre>
<p>and got a different error for the same issue.</p>
<pre><code>*** Error in `tensorflow_model_server': corrupted size vs. prev_size: 0x00007f2ee8fa7920 ***
</code></pre>
<p>Is there any easy way to solve this without using kubernetes? </p>
|
<p>Unfortunately, there isn't enough information here to debug the issue. If it's a problem with your model, then you'll likely hit it eventually no matter how you serve it.</p>
|
tensorflow-serving
| 0
|
375,295
| 52,575,075
|
How to group a date column into year and sum a spending column according to the year?
|
<p>I am trying to group my data by year and sum the spending according to the year it belongs to.</p>
<p>Here's a sample data:</p>
<pre><code>date: spend_amt:
2/1/2014 10000
2/5/2014 98
1/2/2015 5834.2
7/8/2017 561236
9/3/2017 568
28/1/2016 989895.3
</code></pre>
<p>My current code</p>
<pre><code>def yearlySpending(self):
dfspendingYearly = pd.DataFrame()
dfspendingYearly = self.dfGov.groupby(["date"])['spend_amt'].agg('sum')
dfspendingYearly.groupby(dfspendingYearly["date"].dt.year)['spend_amt'].agg(['sum'])
</code></pre>
<p>I got an error, 'KeyError: 'date''</p>
<p>Desired output</p>
<pre><code>date: spend_amt:
2014 10098
2015 5834.2
2016 989895.3
2017 561804
</code></pre>
|
<p>Your error means there is no column <code>date</code>; I guess there is an <code>index</code> called <code>date</code>:</p>
<pre><code>df.index = pd.to_datetime(df.index)
dfspendingYearly = df.groupby(df.index.year).sum().reset_index()
print (dfspendingYearly)
date spend_amt
0 2014 10098.0
1 2015 5834.2
2 2016 989895.3
3 2017 561804.0
</code></pre>
|
python-2.7|pandas|dataframe|pandas-groupby
| 0
|
375,296
| 52,784,204
|
how to create iterator.get_next() for validation set
|
<p>I am working on a project to classify medical images using a CNN model. For my project I use tensorflow; after some searching, I was finally able to use the new tensorflow input pipeline to prepare the train, validation and test sets. Here is the code:</p>
<pre><code>train_data = tf.data.Dataset.from_tensor_slices(train_images)
train_labels = tf.data.Dataset.from_tensor_slices(train_labels)
train_set = tf.data.Dataset.zip((train_data,train_labels)).shuffle(500).batch(30)
valid_data = tf.data.Dataset.from_tensor_slices(valid_images)
valid_labels = tf.data.Dataset.from_tensor_slices(valid_labels)
valid_set = tf.data.Dataset.zip((valid_data,valid_labels)).shuffle(200).batch(20)
test_data = tf.data.Dataset.from_tensor_slices(test_images)
test_labels = tf.data.Dataset.from_tensor_slices(test_labels)
test_set = tf.data.Dataset.zip((test_data, test_labels)).shuffle(200).batch(20)
# create general iterator
iterator = tf.data.Iterator.from_structure(train_set.output_types, train_set.output_shapes)
next_element = iterator.get_next()
train_init_op = iterator.make_initializer(train_set)
valid_init_op = iterator.make_initializer(valid_set)
test_init_op = iterator.make_initializer(test_set)
</code></pre>
<p>I can use <code>next_element</code> to iterate over the train set (<code>next_element[0]</code> for images and <code>next_element[1]</code> for labels). Now I want to do the same thing for the validation set (create an iterator for the validation set); can anyone give me an idea of how to do it?</p>
|
<p>You should be able to use the same <code>next_element</code> to get validation and test set. </p>
<p>For example, initialize the dataset by <code>sess.run(valid_init_op)</code> and then <code>next_element</code> generates data in the validation set. </p>
<pre><code>with tf.Session() as sess:
sess.run(train_init_op)
image_train, label_train = next_element
sess.run(valid_init_op)
image_val, label_val = next_element
sess.run(test_init_op)
image_test, label_test = next_element
</code></pre>
|
python-3.x|tensorflow
| 1
|
375,297
| 52,821,931
|
Python3 can't see opencv-python, numpy, PyQt5
|
<p>I installed opencv-python, numpy and PyQt5 using brew. Unfortunately they were installed only for Python 2, but I wanted them for Python 3. So when I am using python2 I can import those libs, but in python3 I just get an error that the module was not found.</p>
<p>When I am typing for example brew info numpy, I am getting something like this:</p>
<pre><code>numpy: stable 1.15.2 (bottled), HEAD
Package for scientific computing with Python
https://www.numpy.org/
/usr/local/Cellar/numpy/1.15.2 (967 files, 25.5MB)
  Poured from bottle on 2018-10-15 at 12:13:26
From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/numpy.rb
==> Dependencies
Build: gcc ✔
Recommended: python ✔, python@2 ✔
==> Options
--without-python Build without python support
--without-python@2 Build without python2 support
--HEAD Install HEAD version
==> Analytics
install: 33,262 (30d), 96,001 (90d), 314,869 (365d)
install_on_request: 5,934 (30d), 19,037 (90d), 56,029 (365d)
build_error: 0 (30d)
</code></pre>
<p>So as you can see, there is just python2 in "Recommended". Is there any possibility to repair this mistake and somehow link those libs to python3?</p>
<p>I am using macOS High Sierra.</p>
|
<p>Problem solved. Recently, Python.org sites stopped supporting TLS version 1.0 and 1.1. This helped:</p>
<pre><code>curl https://bootstrap.pypa.io/get-pip.py | python3
</code></pre>
|
python|python-3.x|macos|numpy|opencv
| 0
|
375,298
| 52,876,759
|
converting a python script into a function to iterate over each row
|
<p>How can I convert the below python script into a function so that I can call it over each row of a dataframe, keeping a few variables dynamic, like <strong>screen_name</strong> and <strong>domain</strong>?</p>
<pre><code> # We create a tweet list as follows:
tweets = extractor.user_timeline(screen_name="abhi98358", count=200)
data = pd.DataFrame(data=[tweet.text for tweet in tweets], columns=['Tweets'])
# We add relevant data:
data['ID'] = np.array([tweet.id for tweet in tweets])
data['Date'] = np.array([tweet.created_at for tweet in tweets])
data['text'] = np.array([tweet.text for tweet in tweets])
#data['Date'] = pd.to_datetime(data['Date'], unit='ms').dt.tz_localize('UTC').dt.tz_convert('US/Eastern')
created_time = datetime.datetime.utcnow() - datetime.timedelta(minutes=1)
data = data[(data['Date'] > created_time) & (
data['Date'] < datetime.datetime.utcnow())]
my_list = ['Maintenance', 'Scheduled', 'downtime', 'Issue', 'Voice', 'Happy',
'Problem', 'Outage', 'Service', 'Interruption', 'voice-comms', 'Downtime']
ndata = data[data['Tweets'].str.contains(
"|".join(my_list), regex=True)].reset_index(drop=True)
slack = Slacker('xoxb-34234-44232424-sdkjfksdfjksd')
#message = "test message"
slack.chat.post_message('#ops-twitter-alerts', 'domain :' +' '+ ndata['Tweets'] + '<!channel|>')
</code></pre>
<p>my data frame is like below</p>
<pre><code>inp = [{'client': 'epic', 'domain':'fnwp','twittername':'FortniteGame'},{'client': 'epic', 'domain':'fnwp','twittername':'Rainbow6Game'},{'client': 'abhi', 'domain':'abhi','twittername':'abhi98358'}]
df = pd.DataFrame(inp)
</code></pre>
<p>I want to iterate over the rows one by one: start by scraping the data and sending the Slack notification for the first row, then move on to the second row.</p>
<p>I already have gone through <a href="https://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe-in-pandas">How to iterate over rows in a DataFrame in Pandas?</a></p>
|
<p>Here you go buddy :-</p>
<pre><code>for index, row in dff.iterrows():
twt=row['twittername']
domain = row['domain']
print(twt)
print(domain)
extractor = twitter_setup()
# We create a tweet list as follows:
tweets = extractor.user_timeline(screen_name=twt, count=200)
data = pd.DataFrame(data=[tweet.text for tweet in tweets], columns=['Tweets'])
# We add relevant data:
data['ID'] = np.array([tweet.id for tweet in tweets])
data['Date'] = np.array([tweet.created_at for tweet in tweets])
data['text'] = np.array([tweet.text for tweet in tweets])
#data['Date'] = pd.to_datetime(data['Date'], unit='ms').dt.tz_localize('UTC').dt.tz_convert('US/Eastern')
created_time = datetime.datetime.utcnow() - datetime.timedelta(minutes=160)
data = data[(data['Date'] > created_time) & (data['Date'] < datetime.datetime.utcnow())]
my_list = ['Maintenance', 'Scheduled', 'downtime', 'Issue', 'Voice', 'Happy','hound',
'Problem', 'Outage', 'Service', 'Interruption', 'ready','voice-comms', 'Downtime','Patch']
ndata = data[data['Tweets'].str.contains( "|".join(my_list), regex=True)].reset_index(drop=True)
print(ndata)
if len(ndata['Tweets'])> 0:
slack.chat.post_message('#ops-twitter-alerts', domain +': '+ ndata['Tweets'] + '<!channel|>')
else:
print('hi')
</code></pre>
|
python|python-3.x|pandas|dataframe|automation
| 0
|
375,299
| 52,594,686
|
Finding the number of occurrences of a specific string in a column
|
<p>I'm trying to count the number of words that have the string: "hanger" from the column "Description". So I defined a function:</p>
<pre><code>def hanger_count(title):
if 'hanger' in title.lower().split():
return True
else:
return False
</code></pre>
<p>Which seemed to be working correctly when I tested it with a string. But when I attempted to run the function through the data column, using the function:</p>
<pre><code>ecomm['Description'].apply(hangercount)
</code></pre>
<p>I received an error back:</p>
<pre><code>AttributeError: 'float' object has no attribute 'lower'
</code></pre>
<p>I think the issue is that python is seeing some of the rows in the column as objects rather than strings; is there any way I can convert them?</p>
<p>What do you think I'm doing wrong?</p>
|
<p>You appear to have mixed data types in your column, and since <code>lower()</code> is only a method for strings, you are getting an error when pandas attempts to call the function on a numeric value (in this case a float).</p>
<p>This quick tweak might work for you:</p>
<pre><code>def hanger_count(title):
if 'hanger' in str(title).lower().split():
return True
else:
return False
</code></pre>
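<p>If you would rather skip the row-wise <code>apply</code> entirely, a vectorised sketch (with slightly different semantics: missing descriptions count as <code>False</code>, and "hanger" is matched as a whole word even next to punctuation) is:</p>
<pre><code>mask = ecomm['Description'].str.contains(r'\bhanger\b', case=False, na=False)
hanger_rows = mask.sum()
</code></pre>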
|
python|pandas|dataframe|data-analysis
| 4
|