Dataset columns: Unnamed: 0 (int64, 0 to 378k), id (int64, 49.9k to 73.8M), title (string, lengths 15 to 150), question (string, lengths 37 to 64.2k), answer (string, lengths 37 to 44.1k), tags (string, lengths 5 to 106), score (int64, -10 to 5.87k)
4,100
48,520,478
pandas replace NaT with strings in columns when trying to get timedelta object days
<p>I have the following <code>df</code>,</p> <pre><code>A B 3 days NaT NaT 1 days 4 days 3 days NaT NaT </code></pre> <p>the <code>dtype</code> of <code>A</code> and <code>B</code> is <code>timedelta64[ns]</code>. I am trying to get <code>days</code> from each <code>timedelta</code> of the two columns, so first I tried to remove all the rows where <code>A</code> and <code>B</code> both happen to be <code>NaT</code>,</p> <pre><code>daydelta = df.dropna(subset=['A', 'B'], how='all') </code></pre> <p>and then get <code>days</code> on each column value,</p> <pre><code>daydelta[['A', 'B']] = daydelta[['A', 'B']].applymap(lambda x: int(Timedelta(x).days)) </code></pre> <p>but it failed since there is no <code>days</code> attribute on <code>NaT</code>. I am wondering how to get <code>days</code> from each <code>timedelta</code> value, while replacing <code>NaT</code> with the string <code>timedelta value does not exist</code>.</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.days.html" rel="nofollow noreferrer"><code>dt.days</code></a> which working with <code>NaT</code> too:</p> <pre><code>print (df['A'].dt.days) 0 3.0 1 NaN 2 4.0 3 NaN Name: A, dtype: float64 df[['A', 'B']] = df[['A', 'B']].apply(lambda x: x.dt.days) print (df) A B 0 3.0 NaN 1 NaN 1.0 2 4.0 3.0 3 NaN NaN </code></pre>
python|python-3.x|pandas|dataframe
1
4,101
48,549,101
How to calculate cost for softmax regression with pytorch
<p>I would like to calculate the cost for the softmax regression. The cost function to calculate is given at the bottom of the page.</p> <p>For numpy I can get the cost as follows:</p> <pre><code>""" X.shape = 2,300 # floats y.shape = 300, # integers W.shape = 2,3 b.shape = 3,1 """ import numpy as np np.random.seed(100) # Data and labels X = np.random.randn(300,2) y = np.ones(300) y[0:100] = 0 y[200:300] = 2 y = y.astype(np.int) # weights and bias W = np.random.randn(2,3) b = np.random.randn(3) N = X.shape[0] scores = np.dot(X, W) + b hyp = np.exp(scores-np.max(scores, axis=0, keepdims=True)) probs = hyp / np.sum(hyp, axis = 0) logprobs = np.log(probs[range(N),y]) cost_data = -1/N * np.sum(logprobs) print("hyp.shape = {}".format(hyp.shape)) # hyp.shape = (300, 3) print(cost_data) </code></pre> <p>But when I tried torch, I could not get this. So far I have got this:</p> <pre><code>""" X.shape = 2,300 # floats y.shape = 300, # integers W.shape = 2,3 b.shape = 3,1 """ import numpy as np import torch from torch.autograd import Variable np.random.seed(100) # Data and labels X = np.random.randn(300,2) y = np.ones(300) y[0:100] = 0 y[200:300] = 2 y = y.astype(np.int) X = Variable(torch.from_numpy(X),requires_grad=True).type(torch.FloatTensor) y = Variable(torch.from_numpy(y),requires_grad=True).type(torch.LongTensor) # weights and bias W = Variable(torch.randn(2,3),requires_grad=True) b = Variable(torch.randn(3),requires_grad=True) N = X.shape[0] scores = torch.mm(X, W) + b hyp = torch.exp(scores - torch.max(scores)) probs = hyp / torch.sum(hyp) correct_probs = probs[range(N),y] # got problem HERE # logprobs = np.log(correct_probs) # cost_data = -1/N * torch.sum(logprobs) # print(cost_data) </code></pre> <p>I have a problem calculating the correct probabilities for the classes.</p> <p>How can we solve this problem and get the correct cost value?</p> <p>The cost function to calculate is given below: <a href="https://i.stack.imgur.com/BuL0k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BuL0k.png" alt="enter image description here"></a></p>
<p>Your problem is that you cannot use <code>range(N)</code> with <code>pytorch</code>; use the slice <code>0:N</code> instead:</p> <pre><code>hyp = torch.exp(scores - torch.max(scores)) probs = hyp / torch.sum(hyp) correct_probs = probs[0:N,y] # problem solved logprobs = torch.log(correct_probs) cost_data = -1/N * torch.sum(logprobs) </code></pre> <p>Another point is that your labels <code>y</code> do not require gradients; you would be better off with:</p> <pre><code>y = Variable(torch.from_numpy(y),requires_grad=False).type(torch.LongTensor) </code></pre>
python|machine-learning|deep-learning|pytorch|softmax
3
4,102
48,646,684
Pandas: conditional shift
<p>Is there a way to shift a dataframe column conditionally on two other columns? Something like:</p> <pre><code>df["cumulated_closed_value"] = df.groupby("user")['close_cumsum'].shiftWhile(df['close_time']&gt;df['open_time']) </code></pre> <p>I have figured out a way to do this but it's inefficient:</p> <p><strong>1) Load data and create the column to shift</strong></p> <pre><code>df=pd.read_csv('data.csv') df.sort_values(['user','close_time'],inplace=True) df['close_cumsum']=df.groupby('user')['value'].cumsum() df.sort_values(['user','open_time'],inplace=True) print(df) </code></pre> <p>output:</p> <pre><code> user open_time close_time value close_cumsum 0 1 2017-01-01 2017-03-01 5 18 1 1 2017-01-02 2017-02-01 6 6 2 1 2017-02-03 2017-02-05 7 13 3 1 2017-02-07 2017-04-01 3 21 4 1 2017-09-07 2017-09-11 1 22 5 2 2018-01-01 2018-02-01 15 15 6 2 2018-03-01 2018-04-01 3 18 </code></pre> <p><strong>2) Shift the column with a self-join and some filters</strong></p> <p>Self-join (this is memory inefficient): <code>df2=pd.merge(df[['user','open_time']],df[['user','close_time','close_cumsum']], on='user')</code></p> <p>Filter for 'close_time' &lt; 'open_time', then get the row with the max close_time:</p> <pre><code>df2=df2[df2['close_time']&lt;df2['open_time']] idx = df2.groupby(['user','open_time'])['close_time'].transform(max) == df2['close_time'] df2=df2[idx] </code></pre> <p><strong>3) Merge with the original dataset:</strong></p> <pre><code>df3=pd.merge(df[['user','open_time','close_time','value']],df2[['user','open_time','close_cumsum']],how='left') print(df3) </code></pre> <p>output:</p> <pre><code> user open_time close_time value close_cumsum 0 1 2017-01-01 2017-03-01 5 NaN 1 1 2017-01-02 2017-02-01 6 NaN 2 1 2017-02-03 2017-02-05 7 6.0 3 1 2017-02-07 2017-04-01 3 13.0 4 1 2017-09-07 2017-09-11 1 21.0 5 2 2018-01-01 2018-02-01 15 NaN 6 2 2018-03-01 2018-04-01 3 15.0 </code></pre> <p><strong>Is there a more pandas-like way to get the same result?</strong></p> <p><strong>Edit:</strong> I have added one data line to make the case clearer. My goal is to get the sum of all transactions closed before the opening time of the new transaction.</p>
<p>I am using a new helper column here to record the condition <code>df2['close_time']&lt;df2['open_time']</code>:</p> <pre><code>df['New']=((df.open_time-df.close_time.shift()).dt.days&gt;0).shift(-1) s=df.groupby('user').apply(lambda x : (x['value']*x['New']).cumsum().shift()).reset_index(level=0,drop=True) s.loc[~(df.New.shift()==True)]=np.nan df['Cumsum']=s df Out[1043]: user open_time close_time value New Cumsum 0 1 2017-01-01 2017-03-01 5 False NaN 1 1 2017-01-02 2017-02-01 6 True NaN 2 1 2017-02-03 2017-02-05 7 True 6 3 1 2017-02-07 2017-04-01 3 False 13 4 2 2017-01-01 2017-02-01 15 True NaN 5 2 2017-03-01 2017-04-01 3 NaN 15 </code></pre> <p>Update: since the OP updated the question (data from Gabriel A):</p> <pre><code>df['New']=df.user.map(df.groupby('user').close_time.apply(lambda x: np.array(x))) df['New1']=df.user.map(df.groupby('user').value.apply(lambda x: np.array(x))) df['New2']=[[x&gt;m for m in y] for x,y in zip(df['open_time'],df['New']) ] df['Yourtarget']=list(map(sum,df['New2']*df['New1'].values)) df.drop(['New','New1','New2'],1) Out[1376]: user open_time close_time value Yourtarget 0 1 2016-12-30 2016-12-31 1 0 1 1 2017-01-01 2017-03-01 5 1 2 1 2017-01-02 2017-02-01 6 1 3 1 2017-02-03 2017-02-05 7 7 4 1 2017-02-07 2017-04-01 3 14 5 1 2017-09-07 2017-09-11 1 22 6 2 2018-01-01 2018-02-01 15 0 7 2 2018-03-01 2018-04-01 3 15 </code></pre>
python|pandas|datetime|data-analysis
9
4,103
48,854,581
How to create a python dataframe containing the mean and standard deviation of some rows of another dataframe
<p>I have a pandas DataFrame containing some values:</p> <pre><code> id pair value subdir taylor_1e3c_1s_56C taylor 6_13 -0.398716 run1 taylor_1e3c_1s_56C taylor 6_13 -0.397820 run2 taylor_1e3c_1s_56C taylor 6_13 -0.397310 run3 taylor_1e3c_1s_56C taylor 6_13 -0.390520 run4 taylor_1e3c_1s_56C taylor 6_13 -0.377390 run5 taylor_1e3c_1s_56C taylor 8_11 -0.393604 run1 taylor_1e3c_1s_56C taylor 8_11 -0.392899 run2 taylor_1e3c_1s_56C taylor 8_11 -0.392473 run3 taylor_1e3c_1s_56C taylor 8_11 -0.389959 run4 taylor_1e3c_1s_56C taylor 8_11 -0.387946 run5 </code></pre> <p>what I would like to do is to isolate the rows that have the same index, id, and pair, compute the mean and the standard deviation over the value column, and put it all in a new dataframe. Because I have now effectively averaged over all the possible values of subdir, that column should also be removed. So the output should look something like this</p> <pre><code> id pair value error taylor_1e3c_1s_56C taylor 6_13 -0.392351 0.013213 taylor_1e3c_1s_56C taylor 8_11 -0.391376 0.016432 </code></pre> <p>How should I do it in pandas?</p> <p><a href="https://stackoverflow.com/questions/48665042/how-to-create-a-python-dataframe-containing-the-mean-of-some-rows-of-another-dat">A previous question</a> showed me how to just get the mean - but it's not clear to me how to generalise this to get the error on the mean (aka the standard deviation) as well.</p> <p>Thank you much to everyone :)</p>
<p>You could promote your index to a column and perform a single <code>groupby</code>:</p> <pre><code>import pandas as pd df = pd.DataFrame([['taylor', '6_13', -0.398716, 'run1'], ['taylor', '6_13', -0.397820, 'run2'], ['taylor', '8_11', -0.389959, 'run4'], ['taylor', '8_11', -0.387946, 'run5']], index=['taylor_1e3c_1s_56C', 'taylor_1e3c_1s_56C', 'taylor_1e3c_1s_56C', 'taylor_1e3c_1s_56C'], columns=['id', 'pair', 'value', 'subdir']) </code></pre> <p><a href="https://i.stack.imgur.com/BhtTw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BhtTw.png" alt="Original Dataframe"></a></p> <p><strong><em>Promote index to column:</em></strong></p> <pre><code>df['index'] = df.index </code></pre> <p><a href="https://i.stack.imgur.com/xudtB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xudtB.png" alt="index to column"></a></p> <p><strong><em>Perform <code>groupby</code> operations:</em></strong></p> <pre><code>new_df = df.groupby(['index', 'id', 'pair']).agg({'value': ['mean', 'std']}) </code></pre> <p><a href="https://i.stack.imgur.com/7aODU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7aODU.png" alt="Aggregated New Dataframe"></a></p>
python|pandas|dataframe|pandas-groupby
4
4,104
48,525,582
Minimizing if statements in python function
<p>I have following function, which takes values from pandas dataframe columns and supplies arguments(s0_loc,s1_loc,.. upto ..,s12_loc if and only if their respective s0,s1,s2...,s12 is not-null) to another function. Also It will check whether s1 is null or not if and only if s0 is not null... Similarly it will check whether s2 is null or not iff s0,s1 is not null.. So on.. </p> <p>Based on above criteria I have written following function. But it's lengthy function... I want to reduce the piece of code in this function. </p> <pre><code>def compare_locality(p,p_loc,s0,s0_loc,s1,s1_loc,s2,s2_loc,s3,s3_loc,s4,s4_loc,s5,s5_loc,s6,s6_loc,s7,s7_loc,s8,s8_loc,s9,s9_loc,s10,s10_loc,s11,s11_loc,s12,s12_loc): loc = [] if s0 != '' : loc.append(s0_loc) if s1 != '' : loc.append(s1_loc) if s2 != '' : loc.append(s2_loc) if s3 != '' : loc.append(s3_loc) if s4 != '' : loc.append(s4_loc) if s5 != '' : loc.append(s5_loc) if s6 != '' : loc.append(s6_loc) if s7 != '' : loc.append(s7_loc) if s8 != '' : loc.append(s8_loc) if s9 != '' : loc.append(s9_loc) if s10 != '' : loc.append(s10_loc) if s11 != '' : loc.append(s11_loc) if s12 != '' : loc.append(s12_loc) if len(loc) == 0: return '' else: return compare(p_loc,*loc) </code></pre> <p>Can I get any suggestion on how to achieve this??</p>
<p>To go with Sandeep's answer, you can build the two lists locally from the giant list of arguments:</p> <pre><code>def compare_locality(p,p_loc,s0,s0_loc,s1,s1_loc,s2,s2_loc,s3,s3_loc,s4,s4_loc,s5,s5_loc,s6,s6_loc,s7,s7_loc,s8,s8_loc,s9,s9_loc,s10,s10_loc,s11,s11_loc,s12,s12_loc): locs = [] ss = [s0, s1, s2, ..., s12] s_locs = [s0_loc, ..., s12_loc] for s, s_loc in zip(ss, s_locs): if s == '': break locs.append(s_loc) if len(locs) == 0: return '' else: return compare(p_loc,*locs) </code></pre>
python|pandas|dataframe
6
4,105
70,835,416
Why sorting a pandas column causing reordering the sub-groups?
<p>The goal of my question is to understand why this happens and if this is a defined behaviour. I need to know to design my unittests in a predictable way. I <strong>do not</strong> want or need to change that behaviour or work around it.</p> <p>Here is the initial data on the left side complete and on the right side just all <code>ID.eq(1)</code> but the order is the same as you can see in the index and the <code>val</code> column.</p> <pre><code>| | ID | val | | | ID | val | |---:|-----:|:------| |---:|-----:|:------| | 0 | 1 | A | | 0 | 1 | A | | 1 | 2 | B | | 3 | 1 | x | | 2 | 9 | C | | 4 | 1 | R | | 3 | 1 | x | | 6 | 1 | G | | 4 | 1 | R | | 9 | 1 | a | | 5 | 4 | F | | 12 | 1 | d | | 6 | 1 | G | | 13 | 1 | e | | 7 | 9 | H | | 8 | 4 | I | | 9 | 1 | a | | 10 | 2 | b | | 11 | 9 | c | | 12 | 1 | d | | 13 | 1 | e | | 14 | 4 | f | | 15 | 2 | g | | 16 | 9 | h | | 17 | 9 | i | | 18 | 4 | X | | 19 | 5 | Y | </code></pre> <p>This right table is also the result I would <strong>expected</strong> when doing the following: When I sort by <code>ID</code> the order of the rows inside the subgroups (e.g. <code>ID.eq(1)</code>) is modified. Why is it so?</p> <p>This is the <strong>unexpected</strong> result</p> <pre><code>| | ID | val | |---:|-----:|:------| | 0 | 1 | A | | 13 | 1 | e | | 12 | 1 | d | | 6 | 1 | G | | 9 | 1 | a | | 3 | 1 | x | | 4 | 1 | R | </code></pre> <p>This is a full MWE</p> <pre><code>#!/usr/bin/env python3 import pandas as pd # initial data df = pd.DataFrame( { 'ID': [1, 2, 9, 1, 1, 4, 1, 9, 4, 1, 2, 9, 1, 1, 4, 2, 9, 9, 4, 5], 'val': list('ABCxRFGHIabcdefghiXY') } ) print(df.to_markdown()) # only the group &quot;1&quot; print(df.loc[df.ID.eq(1)].to_markdown()) # sort by 'ID' df = df.sort_values('ID') # only the group &quot;1&quot; (after sorting) print(df.loc[df.ID.eq(1)].to_markdown()) </code></pre>
<p>As explained in the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a> documentation, the <a href="https://en.wikipedia.org/wiki/Sorting_algorithm#Stability" rel="nofollow noreferrer">stability of the sort</a> is not always guaranteed depending on the chosen algorithm:</p> <pre><code>kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}, default 'quicksort' Choice of sorting algorithm. See also :func:`numpy.sort` for more information. `mergesort` and `stable` are the only stable algorithms. For DataFrames, this option is only applied when sorting on a single column or label. </code></pre> <p>If you want to ensure using a stable sort:</p> <pre><code>df.sort_values('ID', kind='stable') </code></pre> <p>output:</p> <pre><code> ID val 0 1 A 3 1 x 4 1 R 6 1 G 9 1 a ... </code></pre>
pandas
3
4,106
70,910,213
pandas named aggregation without multilevel dataframe
<p>I am trying to remove the MultiIndex column levels but am unable to do so.</p> <pre><code>import pandas as pd k = pd.DataFrame([['x',2], ['y',4],['x',6]], columns=['name','value']) agg_item={'value': [('n', 'count')]} k=k[['name','value']].groupby(['name'],dropna=False).agg(agg_item).reset_index() k name value n 0 x 2 1 y 1 k.columns MultiIndex([( 'name', ''), ('value', 'n')], ) </code></pre> <p>How do I get an SQL-like table with only 'name' and 'n' columns?</p> <p>Desired output:</p> <pre><code> name n 0 x 2 1 y 1 </code></pre>
<p>You can use a named aggregation with <code>pd.NamedAgg</code> to avoid creating a MultiIndex in the first place:</p> <pre><code>n_agg = pd.NamedAgg(column='value', aggfunc='count') k = k[['name','value']].groupby(['name'],dropna=False).agg(n=n_agg).reset_index() </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; k name n 0 x 2 1 y 1 </code></pre> <p>Or, as @itthrill suggested, you can use <code>.agg(n=('value', 'count'))</code> instead of <code>pd.NamedAgg</code>.</p>
python|pandas
2
4,107
51,658,122
Keras loss is in negative and accuracy is going down, but predictions are good?
<p>I'm training a model in Keras with Tensorflow-gpu backend. Task is to detect buildings in satellite images. loss is going down(which is good) but in negative direction and accuracy is going down. But good part is, model's predictions are improving. My concern is that why loss is in negative. Moreover, why model is improving while accuracy is going down??</p> <pre><code>from tensorflow.keras.layers import Conv2D from tensorflow.keras.layers import BatchNormalization from tensorflow.keras.layers import Activation from tensorflow.keras.layers import MaxPool2D as MaxPooling2D from tensorflow.keras.layers import UpSampling2D from tensorflow.keras.layers import concatenate from tensorflow.keras.layers import Input from tensorflow.keras import Model from tensorflow.keras.optimizers import RMSprop # LAYERS inputs = Input(shape=(300, 300, 3)) # 300 down0 = Conv2D(32, (3, 3), padding='same')(inputs) down0 = BatchNormalization()(down0) down0 = Activation('relu')(down0) down0 = Conv2D(32, (3, 3), padding='same')(down0) down0 = BatchNormalization()(down0) down0 = Activation('relu')(down0) down0_pool = MaxPooling2D((2, 2), strides=(2, 2))(down0) # 150 down1 = Conv2D(64, (3, 3), padding='same')(down0_pool) down1 = BatchNormalization()(down1) down1 = Activation('relu')(down1) down1 = Conv2D(64, (3, 3), padding='same')(down1) down1 = BatchNormalization()(down1) down1 = Activation('relu')(down1) down1_pool = MaxPooling2D((2, 2), strides=(2, 2))(down1) # 75 center = Conv2D(1024, (3, 3), padding='same')(down1_pool) center = BatchNormalization()(center) center = Activation('relu')(center) center = Conv2D(1024, (3, 3), padding='same')(center) center = BatchNormalization()(center) center = Activation('relu')(center) # center up1 = UpSampling2D((2, 2))(center) up1 = concatenate([down1, up1], axis=3) up1 = Conv2D(64, (3, 3), padding='same')(up1) up1 = BatchNormalization()(up1) up1 = Activation('relu')(up1) up1 = Conv2D(64, (3, 3), padding='same')(up1) up1 = BatchNormalization()(up1) up1 = Activation('relu')(up1) up1 = Conv2D(64, (3, 3), padding='same')(up1) up1 = BatchNormalization()(up1) up1 = Activation('relu')(up1) # 150 up0 = UpSampling2D((2, 2))(up1) up0 = concatenate([down0, up0], axis=3) up0 = Conv2D(32, (3, 3), padding='same')(up0) up0 = BatchNormalization()(up0) up0 = Activation('relu')(up0) up0 = Conv2D(32, (3, 3), padding='same')(up0) up0 = BatchNormalization()(up0) up0 = Activation('relu')(up0) up0 = Conv2D(32, (3, 3), padding='same')(up0) up0 = BatchNormalization()(up0) up0 = Activation('relu')(up0) # 300x300x3 classify = Conv2D(1, (1, 1), activation='sigmoid')(up0) # 300x300x1 model = Model(inputs=inputs, outputs=classify) model.compile(optimizer=RMSprop(lr=0.0001), loss='binary_crossentropy', metrics=[dice_coeff, 'accuracy']) history = model.fit(sample_input, sample_target, batch_size=4, epochs=5) OUTPUT: Epoch 6/10 500/500 [==============================] - 76s 153ms/step - loss: -293.6920 - dice_coeff: 1.8607 - acc: 0.2653 Epoch 7/10 500/500 [==============================] - 75s 150ms/step - loss: -309.2504 - dice_coeff: 1.8730 - acc: 0.2618 Epoch 8/10 500/500 [==============================] - 75s 150ms/step - loss: -324.4123 - dice_coeff: 1.8810 - acc: 0.2659 Epoch 9/10 136/500 [=======&gt;......................] 
- ETA: 55s - loss: -329.0757 - dice_coeff: 1.8940 - acc: 0.2757 </code></pre> <p><a href="https://i.stack.imgur.com/crqP0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/crqP0.png" alt="PREDICTED"></a> Predicted</p> <p><a href="https://i.stack.imgur.com/wNJ0W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wNJ0W.png" alt="ACTUAL TARGET"></a> Target</p> <p>Where is the problem? (leave dice_coeff it's custom loss)</p>
<p>Your output is not normalized for a binary classification. (The data is also probably not normalized.)</p> <p>If you loaded an image, it's probably 0 to 255, or even 0 to 65535.</p> <p>You should normalize <code>y_train</code> (divide by <code>y_train.max()</code>) and use a <code>'sigmoid'</code> activation function at the end of your model.</p>
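A minimal sketch of that preprocessing, assuming <code>sample_input</code> and <code>sample_target</code> are 0-255 uint8 arrays as the 0-255 remark suggests (the variable names come from the question; the dummy shapes below are only placeholders):
<pre><code>
import numpy as np

# stand-ins for the real data; shapes follow the 300x300 model above
sample_input = np.random.randint(0, 256, size=(4, 300, 300, 3), dtype=np.uint8)
sample_target = np.random.randint(0, 256, size=(4, 300, 300, 1), dtype=np.uint8)

sample_input = sample_input.astype(np.float32) / 255.0            # images into [0, 1]
sample_target = (sample_target.astype(np.float32) / 255.0 > 0.5)  # masks into {0, 1}
sample_target = sample_target.astype(np.float32)

print(sample_input.max(), np.unique(sample_target))
</code></pre>
With targets in {0, 1} and the final sigmoid layer the model already has, binary_crossentropy stays non-negative and the reported accuracy becomes meaningful again.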
tensorflow|machine-learning|keras|deep-learning|conv-neural-network
5
4,108
41,746,206
Pandas search for duplicate rows in one column which have different values in another column
<p>I have a Pandas dataframe <code>df</code> for which I want to find all rows for which the value of column <code>A</code> is the same, but the value of column <code>B</code> is different, e.g.:</p> <pre><code> | A | B ---|---|--- 0 | 2 | x 1 | 2 | y </code></pre> <p>I know I can use <code>pd.concat(g for _, g in df.groupby('A') if len(g) &gt; 1)</code> to get the rows with duplicate values of <code>A</code>, but how do I add the second constraint?</p>
<p>Thinking about this, it makes sense to call <code>unique</code> on the <code>groupby</code>:</p> <pre><code>In [213]: df = pd.DataFrame({'A':2, 'B':list('xxyzz')}) df Out[213]: A B 0 2 x 1 2 x 2 2 y 3 2 z 4 2 z In [229]: df.groupby('A')['B'].apply(lambda x: x.unique()).reset_index() Out[229]: A B 0 2 [x, y, z] </code></pre>
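A complementary sketch that returns the matching rows themselves rather than the unique values per group, using <code>transform('nunique')</code> (this is not from the answer above, just one common way to express the second constraint):
<pre><code>
import pandas as pd

df = pd.DataFrame({'A': [2, 2, 2, 3, 3], 'B': list('xyzww')})

# keep rows whose A-group contains more than one distinct B value
mask = df.groupby('A')['B'].transform('nunique') > 1
print(df[mask])
#    A  B
# 0  2  x
# 1  2  y
# 2  2  z
</code></pre>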
python|pandas
6
4,109
42,114,381
Bollinger bands in Python. Where/How rm and rstd get defined in code below?
<p>I have a small problem with code snipped below. It works perfectly, but its not written by me and there is one part I do not understand. In my head I would need to return rm &amp; rstd from <code>get_rolling_mean()</code> and <code>get_rolling_std()</code>, but that is not really happening here. So my questions is: I know it works, but how it works?</p> <p>Where and how rm and rstd in <code>get_bollinger_bands(rm, rstd)</code> variables get their values from? </p> <pre><code>"""Bollinger Bands.""" import os import pandas as pd import matplotlib.pyplot as plt def symbol_to_path(symbol, base_dir="data"): """Return CSV file path given ticker symbol.""" return os.path.join(base_dir, "{}.csv".format(str(symbol))) def get_data(symbols, dates): """Read stock data (adjusted close) for given symbols from CSV files.""" df = pd.DataFrame(index=dates) if 'SPY' not in symbols: # add SPY for reference, if absent symbols.insert(0, 'SPY') for symbol in symbols: df_temp = pd.read_csv(symbol_to_path(symbol), index_col='Date', parse_dates=True, usecols=['Date', 'Adj Close'], na_values=['nan']) df_temp = df_temp.rename(columns={'Adj Close': symbol}) df = df.join(df_temp) if symbol == 'SPY': # drop dates SPY did not trade df = df.dropna(subset=["SPY"]) return df def plot_data(df, title="Stock prices"): """Plot stock prices with a custom title and meaningful axis labels.""" ax = df.plot(title=title, fontsize=12) ax.set_xlabel("Date") ax.set_ylabel("Price") plt.show() def get_rolling_mean(values, window): """Return rolling mean of given values, using specified window size.""" return pd.rolling_mean(values, window=window) def get_rolling_std(values, window): """Return rolling standard deviation of given values, using specified window size.""" return pd.rolling_std(values, window=window) def get_bollinger_bands(rm, rstd): """Return upper and lower Bollinger Bands.""" upper_band = rm + (rstd * 2) lower_band = rm - (rstd * 2) return upper_band, lower_band def test_run(): # Read data dates = pd.date_range('2012-01-01', '2012-12-31') symbols = ['SPY'] df = get_data(symbols, dates) # Compute Bollinger Bands # 1. Compute rolling mean rm_SPY = get_rolling_mean(df['SPY'], window=20) # 2. Compute rolling standard deviation rstd_SPY = get_rolling_std(df['SPY'], window=20) # 3. Compute upper and lower bands upper_band, lower_band = get_bollinger_bands(rm_SPY, rstd_SPY) # Plot raw SPY values, rolling mean and Bollinger Bands ax = df['SPY'].plot(title="Bollinger Bands", label='SPY') rm_SPY.plot(label='Rolling mean', ax=ax) upper_band.plot(label='upper band', ax=ax) lower_band.plot(label='lower band', ax=ax) # Add axis labels and legend ax.set_xlabel("Date") ax.set_ylabel("Price") ax.legend(loc='upper left') plt.show() if __name__ == "__main__": test_run() </code></pre>
<p>The <code>get_bollinger_bands</code> function gets its variables from its caller:</p> <pre><code>def get_bollinger_bands(rm, rstd): upper_band = rm + (rstd * 2) lower_band = rm - (rstd * 2) return upper_band, lower_band </code></pre> <p>The only variables used are the ones between the parentheses after the function name. This means they are supplied by the caller.</p> <pre><code>def get_rolling_mean(values, window): return pd.rolling_mean(values, window=window) def get_rolling_std(values, window): return pd.rolling_std(values, window=window) </code></pre> <p>The functions <code>get_rolling_mean</code> and <code>get_rolling_std</code> both take two inputs: the values (e.g. x = 1,2,3 and y = 2,3,4) and the window (the number of observations included in the rolling mean).</p> <p>For additional info I recommend the documentation: <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.rolling_mean.html" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.rolling_mean.html</a></p> <p>and wikipedia (rolling mean and moving average are the same thing): <a href="https://en.wikipedia.org/wiki/Moving_average" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Moving_average</a></p>
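One caveat for anyone running this script on a current pandas: the module-level <code>pd.rolling_mean</code> / <code>pd.rolling_std</code> helpers were later removed in favour of the <code>.rolling()</code> method, so the two wrapper functions would need the method form. A sketch of drop-in replacements, assuming <code>values</code> is a Series as in the script:
<pre><code>
import pandas as pd

def get_rolling_mean(values, window):
    """Return rolling mean of given values, using specified window size."""
    return values.rolling(window=window).mean()

def get_rolling_std(values, window):
    """Return rolling standard deviation of given values, using specified window size."""
    return values.rolling(window=window).std()

prices = pd.Series([10.0, 11.0, 12.0, 11.5, 13.0])
print(get_rolling_mean(prices, window=2))   # first value is NaN, then pairwise means
</code></pre>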
python|python-3.x|pandas
3
4,110
64,210,138
Define function that takes string value, searches for it in dataframe column, and returns TRUE if it is in the column and contains the word "Sales"
<p>I have to define a function called isSales() that takes the job title of an employee as a string and returns <strong>True</strong> if the job title indicates that the person works in Sales and returns <strong>False</strong> otherwise.</p> <p>For this problem, I've created a dataframe(df) from reading an excel spreadsheet called &quot;Employees&quot;. I then defined a function called employees() that takes the &quot;Employees&quot; Dataframe and sets the index to the &quot;EmployeeID&quot; column.</p> <p>So, what I need the isSales() function to do is take the argument &quot;jobtitle&quot; and then search for this &quot;jobtitle&quot; in the &quot;JobTitle&quot; column of the &quot;Employees&quot; Dataframe.</p> <p>What I have so far:</p> <pre><code>def isSales(jt): df1=load_employees(df) if df1.iloc[jt] in df1[df1[&quot;JobTitle&quot;].apply(lambda x: 'Sales' in x)]: print(&quot;True&quot;) else: print(&quot;False&quot;) </code></pre> <p>However, when I try to test this function out, the only result I get is <strong>False</strong></p> <pre><code>isSales('Sales Representative') </code></pre> <p>Returns <strong>False</strong></p> <p>So, in defining the function IsSales(), what am I doing wrong?</p> <p>I have a couple of other problems like this one where I am defining functions that search through the &quot;Employees&quot; dataframe. I think my issue is not fully understanding how to create functions that use pandas.</p>
<p>In general, when writing Pandas code, you should avoid row-by-row iteration unless you have no choice. This is a situation where you have a choice.</p> <p>Rather than calling your isSales function on each row, I would suggest using the builtin <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer">Series.str.contains()</a>.</p> <p>I would create a new column called IsSales, like this:</p> <pre class="lang-py prettyprint-override"><code>df1[&quot;IsSales&quot;] = df1[&quot;JobTitle&quot;].str.contains(&quot;Sales&quot;, regex=False) </code></pre> <p>(The <code>regex=False</code> part is because this function will interpret your search as a regex by default.)</p>
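If the assignment still requires a standalone <code>isSales()</code> function, a sketch that keeps the same idea could look like this; the small DataFrame is only a stand-in for the real Employees sheet:
<pre><code>
import pandas as pd

df1 = pd.DataFrame({'EmployeeID': [1, 2, 3],
                    'JobTitle': ['Sales Representative', 'Engineer', 'Sales Manager']})

def isSales(jobtitle):
    # True only if the title exists in the JobTitle column and mentions "Sales"
    in_column = (df1['JobTitle'] == jobtitle).any()
    return bool(in_column and 'Sales' in jobtitle)

print(isSales('Sales Representative'))  # True
print(isSales('Engineer'))              # False
</code></pre>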
python|pandas
0
4,111
64,307,480
Reading .txt file using pandas and slicing it based on some range in one column
<p>I have multiple .txt files that are full of rubbish data and only need a portion of it based on some range that changes between files. I'm still learning Python and not very experienced.</p> <p>I am using VS code 1.50 and Python 3.8.1</p> <p>Sample of my data: <a href="https://pastebin.com/kZm1spnz" rel="nofollow noreferrer">https://pastebin.com/kZm1spnz</a></p> <p>My first issue is with reading the .txt file, here is what I did at first:</p> <pre><code>import pandas as pd import os #Reading my data Data = pd.read_csv('Data_01.txt') </code></pre> <p>I don't understand why it gives an error even though the python script is in the same folder as the .txt file.</p> <p><strong>Error:</strong></p> <pre><code>--------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) &lt;ipython-input-28-436477220532&gt; in &lt;module&gt; 3 4 #Reading my data ----&gt; 5 Data = pd.read_csv(&quot;Data_01.txt&quot;, sep=&quot;\t&quot;, names=[&quot;Depth&quot;, &quot;Porosity&quot;]) ~\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\io\parsers.py in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision) 684 ) 685 --&gt; 686 return _read(filepath_or_buffer, kwds) 687 688 ~\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds) 450 451 # Create the parser. 
--&gt; 452 parser = TextFileReader(fp_or_buf, **kwds) 453 454 if chunksize or iterator: ~\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds) 934 self.options[&quot;has_index_names&quot;] = kwds[&quot;has_index_names&quot;] 935 --&gt; 936 self._make_engine(self.engine) 937 938 def close(self): ~\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine) 1166 def _make_engine(self, engine=&quot;c&quot;): 1167 if engine == &quot;c&quot;: -&gt; 1168 self._engine = CParserWrapper(self.f, **self.options) 1169 else: 1170 if engine == &quot;python&quot;: ~\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds) 1996 kwds[&quot;usecols&quot;] = self.usecols 1997 -&gt; 1998 self._reader = parsers.TextReader(src, **kwds) 1999 self.unnamed_cols = self._reader.unnamed_cols 2000 pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader.__cinit__() pandas\_libs\parsers.pyx in pandas._libs.parsers.TextReader._setup_parser_source() FileNotFoundError: [Errno 2] No such file or directory: 'Data_01.txt' </code></pre> <p>I fixed it by using the full path of my data file but I don't understand the need for the full path, as follows:</p> <pre><code>import pandas as pd import os #Reading my data Data = pd.read_csv(r&quot;C:\Users\User\Desktop\Projects\SDP\Data_01.txt&quot;, sep=&quot;\t&quot;, names=[&quot;Depth&quot;, &quot;Porosity&quot;]) </code></pre> <p>Now when slicing my data, I did not want to use Indices, i.e., &quot;iloc&quot; &amp; &quot;loc&quot;, to keep my code readable and easy to manipulate and to reapply for the other files, maybe use a for loop to sweep through them all in one run. So I tested first by using the following:</p> <pre><code>Data_result_1 = Data[Data['Depth'] &gt;= 7711] </code></pre> <p>This works, however, I wish to use an additional condition in the same line where it stops at Depth = 7786, i.e., my range. But it does not work, here is the code I wrote that failed:</p> <pre><code>Data_result_1 = Data[Data['Depth'] &gt;= 7711 and Data['Depth'] &lt;= 7786] </code></pre> <p>Is there a way to use the nested conditions without creating a new line of code, I was able to reach my desired result by it feels unnecessary and, to be frank, ugly. here is what works:</p> <pre><code>Data_result_1 = Data[Data['Depth'] &gt;= 7711 ] Data_result_1 = Data_result_1[Data_result_1['Depth'] &lt;= 7786] </code></pre>
<p>You should use <code>&amp;</code> instead of <code>and</code>. With pandas, <code>and</code> tries to evaluate the truth value of a whole Series, which is ambiguous and raises an error, while <code>&amp;</code> combines the two boolean masks element-wise; the parentheses are needed because <code>&amp;</code> binds more tightly than the comparison operators:</p> <pre><code>Data_result_1 = Data[ (Data['Depth'] &gt;= 7711) &amp; (Data['Depth'] &lt;= 7786)] </code></pre>
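For the same inclusive range, <code>Series.between</code> is a slightly shorter alternative (a minor style choice, not something the question requires); a self-contained sketch with placeholder data:
<pre><code>
import pandas as pd

Data = pd.DataFrame({'Depth': [7700.0, 7711.5, 7750.0, 7786.0, 7800.0],
                     'Porosity': [0.10, 0.12, 0.15, 0.11, 0.09]})

# between() is inclusive on both ends by default, matching >= and <=
Data_result_1 = Data[Data['Depth'].between(7711, 7786)]
print(Data_result_1)
</code></pre>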
python|python-3.x|pandas|dataframe
1
4,112
64,259,052
Adding rows to pandas dataframe with date range, created_at and today, python
<p>I have a dataframe <a href="https://i.stack.imgur.com/KNrlw.png" rel="nofollow noreferrer">dataframe</a> consisting of two columns, customer_id and a date column, created_at.</p> <p>I wish to add another row for each month the customer remains in the customer base.</p> <p>For example, if the customer_id was created during July, the dataframe would add 4 additional rows for that customer, between the range of &quot;created_at&quot; and &quot;today&quot;. For example; for customer1 I would have 9 rows, one for each month up to day, for customer2: 7 rows, and customer3: 4 rows. I was thinking of maybe something like I've copied below, with the idea of merging df with seqDates...</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame([(&quot;customer1&quot;, &quot;05-02-2020&quot;), (&quot;customer2&quot;,&quot;05-04-2020&quot;), (&quot;customer3&quot;,&quot;04-07-2020&quot;)], index=[&quot;1&quot;,&quot;2&quot;,&quot;3&quot;], columns= (&quot;customer_id&quot;,&quot;created_at&quot;)) df[&quot;created_at&quot;] = pd.to_datetime(df[&quot;created_at&quot;]) # create month expansion column start = min(df[&quot;created_at&quot;]) end = pd.to_datetime(&quot;today&quot;) seqDates = pd.date_range(start, end, freq=&quot;D&quot;) seqDates = pd.DataFrame(seqDates) columns = [&quot;created_at&quot;] </code></pre>
<p>Try this:</p> <pre><code>import pandas as pd import datetime from dateutil.relativedelta import relativedelta from dateutil import rrule, parser outList = [] operations_date = datetime.datetime.now().date() dfDict = df.to_dict(orient='records') for aDict in dfDict: created_at = aDict['created_at'] start_date = datetime.datetime.strptime(created_at, '%d-%m-%Y').date() - relativedelta(months = 1) end_date = parser.parse(str(operations_date)) date_range = list(rrule.rrule(rrule.MONTHLY, bymonthday=1, dtstart=start_date, until=end_date)) for aDate in date_range: outList.append({'customer_id' : aDict['customer_id'], 'created_at' : aDate}) df = pd.DataFrame(outList) </code></pre>
python|pandas|numpy|date|expand
1
4,113
49,103,830
CTC LossTensor is inf or nan: Tensor had Inf values?
<p>I keep encountering this error right on the first step of training (or after 300 hundred steps or so). Can anyone point out the reason why this is happening? If you're interested to about the model I used, here it is:</p> <pre><code>{ "network":[ {"layer_type": "input_layer", "name": "inputs", "shape": [-1, 168, 168, 1]}, {"layer_type": "l2_normalize", "axis": [1, 2]}, {"layer_type": "conv2d", "num_filters": 16, "kernel_size": [3, 3]}, {"layer_type": "max_pool2d", "pool_size": [2, 2]}, {"layer_type": "l2_normalize", "axis": [1, 2]}, {"layer_type": "conv2d", "num_filters": 32, "kernel_size": [3, 3]}, {"layer_type": "max_pool2d", "pool_size": [2, 2]}, {"layer_type": "l2_normalize", "axis": [1, 2]}, {"layer_type": "dropout", "keep_prob": 0.5}, {"layer_type": "conv2d", "num_filters": 64, "kernel_size": [3, 3]}, {"layer_type": "max_pool2d", "pool_size": [2, 2]}, {"layer_type": "l2_normalize", "axis": [1, 2]}, {"layer_type": "dropout", "keep_prob": 0.5}, {"layer_type": "collapse_to_rnn_dims"}, {"layer_type": "birnn", "num_hidden": 128, "cell_type": "LSTM"}, {"layer_type": "birnn", "num_hidden": 128, "cell_type": "LSTM"}, {"layer_type": "birnn", "num_hidden": 128, "cell_type": "LSTM"}, {"layer_type": "dropout", "keep_prob": 0.5} ], "output_layer": "ctc_decoder", "loss": "ctc", "metrics": ["label_error_rate"], "learning_rate": 0.001, "optimizer": "adam" } </code></pre> <p>As for the labels, I pad them first to match the length of the label of longest length. </p>
<p>It's definitely the sequence length of the input that causes the problem. Apparently, the sequence length should be a bit greater than the ground truth length.</p>
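A minimal sketch of that check, assuming a CRNN-style front end like the model above, where each of the three 2x2 max-pools halves the width that later becomes the CTC time axis (the helper name and the strict-inequality rule of thumb are assumptions for illustration, not from the question):
<pre><code>
# Hypothetical sanity check: the number of time steps fed to CTC must exceed
# the length of the (unpadded) label, or ctc_loss can return inf/NaN.
def ctc_lengths_ok(image_width, label_lengths, num_pool_layers=3):
    seq_len = image_width // (2 ** num_pool_layers)   # time steps after pooling
    return all(seq_len > length for length in label_lengths)

print(ctc_lengths_ok(168, [15, 18, 20]))  # 168 // 8 = 21 time steps -> True
print(ctc_lengths_ok(168, [21, 25]))      # label as long as the sequence -> False
</code></pre>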
tensorflow
2
4,114
49,252,574
Deleting points on grid outside of boundary condition - Python
<p>I have a grid of points (e.g. (1,1), (1,2), (1,3)...(100,99), (100,100)) that is contained in a pandas dataframe, and also exported as a .csv file.</p> <p>I then have a boundary condition, for example a circle in the centre of this grid with a diameter of 25. I want to be able to delete all the points outside of the circle, and just keep the internal ones in a new dataframe.</p> <p>I can get the boundaries of the circle, xmin, xmax, ymin, ymax, but when I delete the points with respect to this, I get a square (due to the min/max values being integers that just find the furthest points from the centre).</p> <p>Is it possible to save all the points internal to the circle? Preferably with a universal method that could be applied to ellipses, etc.</p> <p>EDIT: I've found this, which is similar: <a href="https://stackoverflow.com/questions/29330307/how-to-delete-a-set-of-meshgrid-points-inside-a-circle">How to delete a set of meshgrid points inside a circle?</a></p> <p>But it relies on the dimensions of the circle being entered in the code, so it is not universal. As I have the boundary points of the shape, I would have to calculate it, and assume it is circular (which it may not always be). Is there a way to adapt this so I could create a "filled" area using the boundary points, and then perform the boolean operation?</p>
<p>Since your question is...</p> <blockquote> <p>Is it possible to save all the points internal to the circle?</p> </blockquote> <p>Yes, it is possible, and there is a formula for this. The <code>distance_from</code> function used below is basic Euclidean distance for 2D points.</p> <pre><code>def encloses(center_of_circle, vector2D, radius_of_circle): return distance_from(center_of_circle, vector2D) &lt;= radius_of_circle </code></pre> <p>This function <code>encloses()</code> will return True if vector2D is in the circle, and False if not. Filter your pandas DataFrame with it.</p> <p>I suspect, as you got a square instead of a circle, that you were computing the formula <code>encloses()</code> for a square, and not a circle...</p>
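A self-contained sketch of how this could be wired up, assuming the grid points sit in DataFrame columns named 'x' and 'y' (those names and the generated grid are placeholders, not from the question):
<pre><code>
import numpy as np
import pandas as pd

def distance_from(center, point):
    # plain 2D Euclidean distance
    return np.hypot(point[0] - center[0], point[1] - center[1])

def encloses(center_of_circle, vector2D, radius_of_circle):
    return distance_from(center_of_circle, vector2D) <= radius_of_circle

# hypothetical 100x100 grid of points
xs, ys = np.meshgrid(np.arange(1, 101), np.arange(1, 101))
df = pd.DataFrame({'x': xs.ravel(), 'y': ys.ravel()})

center, radius = (50.5, 50.5), 12.5   # circle of diameter 25 around the grid centre
mask = df.apply(lambda row: encloses(center, (row['x'], row['y']), radius), axis=1)
inside = df[mask]                     # only the points inside (or on) the circle
print(len(inside))
</code></pre>
A fully vectorised mask such as ((df['x'] - 50.5)**2 + (df['y'] - 50.5)**2) &lt;= 12.5**2 gives the same result much faster, and dividing each squared term by a different semi-axis squared extends it to ellipses.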
python|pandas|dataframe|data-manipulation
0
4,115
49,088,759
Finding the min. value in a specific column for range of future rows with pandas/python
<p>I have the following data:</p> <pre><code>datetime price 2017-10-02 08:03:00 12877 2017-10-02 08:04:00 12877.5 2017-10-02 08:05:00 12879 2017-10-02 08:06:00 12875.5 2017-10-02 08:07:00 12875.5 2017-10-02 08:08:00 12878 2017-10-02 08:09:00 12878 2017-10-02 08:10:00 12878 2017-10-02 08:11:00 12881 2017-10-02 08:12:00 12882.5 2017-10-02 08:13:00 12884.5 2017-10-02 08:14:00 12882 2017-10-02 08:15:00 12880.5 2017-10-02 08:16:00 12881.5 2017-10-02 08:17:00 12879 2017-10-02 08:18:00 12879 2017-10-02 08:19:00 12880 2017-10-02 08:20:00 12878.5 </code></pre> <p>I am trying to find the min. price for range (the range is defined with windows_size which can be 1/2/3 etc.) of 'datetime' using:</p> <pre><code>df['MinPrice'] = df.ix[window_size:,'price'] </code></pre> <p>which gives me the price on the last row in the window or using </p> <pre><code>df['MinPrice'] = df.ix[window_size:,'price'].min() </code></pre> <p>which gives me the min value of all the column.</p> <p>Pls advice how to get the min. value of the specific rows declared by the window.</p> <p>edited: the expected result will be as follow: if the windows size is 3, i would like to get the min. value of 3 lines. so at 08:05:00 i will get 12877 and for 08:06:00 i will get 12875.5</p>
<p>Since it looks like you have 1 minute intervals, you may want to take advantage of <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html" rel="nofollow noreferrer"><code>resample</code></a>, that way you can define the window using datetime</p> <pre><code>df.resample('3T',on='datetime').min() datetime price datetime 2017-10-02 08:03:00 2017-10-02 08:03:00 12877.0 2017-10-02 08:06:00 2017-10-02 08:06:00 12875.5 2017-10-02 08:09:00 2017-10-02 08:09:00 12878.0 2017-10-02 08:12:00 2017-10-02 08:12:00 12882.0 2017-10-02 08:15:00 2017-10-02 08:15:00 12879.0 2017-10-02 08:18:00 2017-10-02 08:18:00 12878.5 </code></pre> <p>To set the values back to the initial dataframe, use transform</p> <pre><code>df['minPrice'] = df.resample('3T',on='datetime').transform('min') datetime price minPrice 0 2017-10-02 08:03:00 12877.0 12877.0 1 2017-10-02 08:04:00 12877.5 12877.0 2 2017-10-02 08:05:00 12879.0 12877.0 3 2017-10-02 08:06:00 12875.5 12875.5 4 2017-10-02 08:07:00 12875.5 12875.5 5 2017-10-02 08:08:00 12878.0 12875.5 6 2017-10-02 08:09:00 12878.0 12878.0 7 2017-10-02 08:10:00 12878.0 12878.0 8 2017-10-02 08:11:00 12881.0 12878.0 9 2017-10-02 08:12:00 12882.5 12882.0 10 2017-10-02 08:13:00 12884.5 12882.0 11 2017-10-02 08:14:00 12882.0 12882.0 12 2017-10-02 08:15:00 12880.5 12879.0 13 2017-10-02 08:16:00 12881.5 12879.0 14 2017-10-02 08:17:00 12879.0 12879.0 15 2017-10-02 08:18:00 12879.0 12878.5 16 2017-10-02 08:19:00 12880.0 12878.5 17 2017-10-02 08:20:00 12878.5 12878.5 </code></pre>
python|pandas
2
4,116
58,854,893
Python duplicate list values from index to another
<p>Let's say I have this numpy array :</p> <pre><code>first_array = [[1. 0. 0. 0. 0.] [1. 0. 0. 0. 0.] [0. 1. 0. 0. 0.] [0. 0. 1. 0. 0.] [0. 0. 1. 0. 0.] [0. 0. 1. 0. 0.] [0. 0. 0. 0. 1.]] </code></pre> <p>and I have this 2 lists : </p> <pre><code>index_start = [50, 80, 110, 120, 150, 180, 200] index_end= [70, 90, 115, 140, 170, 190, 220] </code></pre> <p>I want to create a new 2D numpy <strong>output_array</strong> where I iterate by column <strong>first_array</strong> and duplicate each value of every row from an <strong>index_start</strong> to an <strong>index_end</strong>.</p> <p><em>1st iteration -</em> For example <strong>first_array[[1,1]] = 1, index_start[1] = 50</strong> and <strong>index_end[1]=70</strong> then the first column of my output_array will have the value 1 from index <strong>50</strong> to <strong>70</strong>.</p> <p><em>2nd iteration -</em> Then <strong>first_array[[2,1]] = 1, index_start[2] = 80</strong> and <strong>index_end[2]=90</strong> then the <em>first</em> column of my output_array will have also the value 1 but from index <strong>80</strong> to <strong>90</strong>.</p> <p><em>3rd iteration -</em> <strong>first_array[[2,3]] = 1, index_start[3] = 110</strong> and <strong>index_end[3]=115</strong> then the <em>second</em> column of my output_array will have the value 1 from index <strong>110</strong> to <strong>115</strong> and so on.</p> <p>Here's what I have tried but this is giving me wrong result :</p> <pre><code>first_array = [[1, 0, 0, 0, 0,], [1, 0, 0, 0, 0,], [0, 1, 0, 0, 0,], [0, 0, 1, 0, 0,], [0, 0, 1, 0, 0,], [0, 0, 1, 0, 0,], [0, 0, 0, 0, 1,]] index_start = [50, 80, 110, 120, 150, 180, 200] index_end= [70, 90, 115, 140, 170, 190, 220] last_index = max(index_start+index_end) output_array = np.zeros((last_index, 5)) for i in range(len(index_start)): for j in range(last_index): for k in range(5): output_array[index_start[i]:index_end[i]]=first_array[i][k] </code></pre>
<p>It's working now. I forgot to add the k column index at the output_array. Here's the final working code if anyone needed this.</p> <pre><code>first_array = [[1, 0, 0, 0, 0,], [1, 0, 0, 0, 0,], [0, 1, 0, 0, 0,], [0, 0, 1, 0, 0,], [0, 0, 1, 0, 0,], [0, 0, 1, 0, 0,], [0, 0, 0, 0, 1,]] index_start = [50, 80, 110, 120, 150, 180, 200] index_end= [70, 90, 115, 140, 170, 190, 220] last_index = max(index_start+index_end) output_array = np.zeros((last_index+1, 5)) for i in range(len(index_start)): for k in range(5): output_array[index_start[i]:index_end[i]+1, k]=first_array[i][k] </code></pre>
python|numpy
2
4,117
58,680,157
Fill in the missing data using Pandas
<p>What's the best way to fill in the missing data using Pandas . I have a list of visitors where the exit time or the entry time is missing . </p> <pre><code>visitor entry exit A 16/02/2016 08:46 16/02/2016 09:01 A 16/02/2016 09:20 16/02/2016 17:24 A 17/02/2016 09:12 17/02/2016 09:42 A 17/02/2016 09:55 NaT A 17/02/2016 12:42 17/02/2016 12:56 A 17/02/2016 13:02 17/02/2016 17:32 A 17/02/2016 17:44 17/02/2016 18:24 A 18/02/2016 07:59 18/02/2016 16:40 A 18/02/2016 16:53 NaT A NaT 19/02/2016 09:11 A 19/02/2016 09:27 19/02/2016 11:26 A 19/02/2016 12:28 19/02/2016 17:12 A 20/02/2016 08:44 20/02/2016 08:58 A 20/02/2016 09:16 20/02/2016 17:21 </code></pre>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ffill.html" rel="nofollow noreferrer"><code>DataFrame.ffill</code></a> + <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.bfill.html" rel="nofollow noreferrer"><code>DataFrame.bfill</code></a> to complete with the same time of entry / exit:</p> <pre><code>df[['entry','exit']]=df[['entry','exit']].ffill(axis=1).bfill(axis=1) print(df) visitor entry exit 0 A 2016-02-16 08:46:00 2016-02-16 09:01:00 1 A 2016-02-16 09:20:00 2016-02-16 17:24:00 2 A 2016-02-17 09:12:00 2016-02-17 09:42:00 3 A 2016-02-17 09:55:00 2016-02-17 09:55:00 4 A 2016-02-17 12:42:00 2016-02-17 12:56:00 5 A 2016-02-17 13:02:00 2016-02-17 17:32:00 6 A 2016-02-17 17:44:00 2016-02-17 18:24:00 7 A 2016-02-18 07:59:00 2016-02-18 16:40:00 8 A 2016-02-18 16:53:00 2016-02-18 16:53:00 9 A 2016-02-19 09:11:00 2016-02-19 09:11:00 10 A 2016-02-19 09:27:00 2016-02-19 11:26:00 11 A 2016-02-19 12:28:00 2016-02-19 17:12:00 12 A 2016-02-20 08:44:00 2016-02-20 08:58:00 13 A 2016-02-20 09:16:00 2016-02-20 17:21:00 </code></pre> <p><strong>EDIT</strong></p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.notna.html" rel="nofollow noreferrer"><code>DataFrame.notna</code></a> + <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>DataFrame.all</code></a> to performance a <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> to filter to ros with NaT values in order to calculate the mean of the diff</p> <pre><code>#filtering valid data df_valid=df[df.notna().all(axis=1)] #Calculating diff time_dif=df_valid[['entry','exit']].diff(axis=1).exit print(time_dif) 0 00:15:00 1 08:04:00 2 00:30:00 4 00:14:00 5 04:30:00 6 00:40:00 7 08:41:00 10 01:59:00 11 04:44:00 12 00:14:00 13 08:05:00 Name: exit, dtype: timedelta64[ns] </code></pre> <hr> <pre><code>#Calculatin mean time_dif_mean=time_dif.mean() print('This is the mean of time in: ', time_dif_mean) This is the mean of time in: 0 days 03:26:54.545454 </code></pre> <hr> <p><strong>Filling mising value with the mean</strong></p> <pre><code>#roud to seconds( optional) time_dif_mean_round_second=time_dif_mean.round('s') df['entry'].fillna(df['exit']-time_dif_mean_round_second,inplace=True) df['exit'].fillna(df['entry']+time_dif_mean_round_second,inplace=True) print(df) </code></pre> <hr> <p><strong>Output:</strong></p> <pre><code> visitor entry exit 0 A 2016-02-16 08:46:00 2016-02-16 09:01:00 1 A 2016-02-16 09:20:00 2016-02-16 17:24:00 2 A 2016-02-17 09:12:00 2016-02-17 09:42:00 3 A 2016-02-17 09:55:00 2016-02-17 13:21:55 4 A 2016-02-17 12:42:00 2016-02-17 12:56:00 5 A 2016-02-17 13:02:00 2016-02-17 17:32:00 6 A 2016-02-17 17:44:00 2016-02-17 18:24:00 7 A 2016-02-18 07:59:00 2016-02-18 16:40:00 8 A 2016-02-18 16:53:00 2016-02-18 20:19:55 9 A 2016-02-19 05:44:05 2016-02-19 09:11:00 10 A 2016-02-19 09:27:00 2016-02-19 11:26:00 11 A 2016-02-19 12:28:00 2016-02-19 17:12:00 12 A 2016-02-20 08:44:00 2016-02-20 08:58:00 13 A 2016-02-20 09:16:00 2016-02-20 17:21:00 </code></pre>
python-3.x|pandas|dataset|data-science|fillna
0
4,118
58,957,367
Merge 2 dictionaries and store them in pandas dataframe where one dictionary has variable length list elements
<p>I'm iterating through some HTML divs like this with Beautiful Soup:</p> <pre><code>for div in soup.findAll('a', {'class': 'result'}): adLink = div.a.get('href') adInfo= { u'adLink':adLink, u'adThumbImg':...some code..., u'adCounty':...some code... } adFullInfo = getFullAdInfo(adLink) adInfo.update(adFullInfo) ads_CarsURL = pd.DataFrame(data=adInfo) #Create pandas DF </code></pre> <p>Where <code>getFullAdInfo</code> is a function</p> <pre><code>def getFullAdInfo { ...some code... } </code></pre> <p>which returns a dictionary that looks something like this:</p> <pre><code>{'adID': '2027007', 'adTitle': 'Ford 750 Special', 'adDatePublished': '20.11.2009', 'adTimePublished': '14:23', 'adViewed': '102', 'carPriceEUR': '600', 'carManufacturer': 'Ford'} </code></pre> <p>So in each iteration I'm getting values from the <code>adInfo</code> dict and from <code>getFullAdInfo</code>, which returns another dict, and merging them so I can have a single dictionary record. The idea is, in the end, to create a pandas dataframe.</p> <p>The error I get is:</p> <blockquote> <pre><code>ValueError: arrays must all be same length </code></pre> </blockquote> <p>I don't know why that is, since I initially defined all variables for each dictionary key and assigned an empty string to them like <code>adID=""</code> in case they are missing.</p>
<p>After you get the full ad, convert that to a 1 row dataframe, then just append that into a final dataframe. That will take care of the mismatch lengths and if there is data not available on an ad that is there for others. You'll have to work out the logic, as you haven't provided that part of your code to test. So quick example below of what I mean:</p> <pre><code>import pandas as pd data1 = {'adID': '2027007', 'adTitle': 'Ford 750 Special', 'adDatePublished': '20.11.2009', 'adTimePublished': '14:23', 'adViewed': '102', 'carPriceEUR': '600', 'carManufacturer': 'Ford'} data2 = {'adID': '20555', 'adTitle': 'Honda', 'adTimePublished': '11:23', 'adViewed': '2', 'carManufacturer': 'Honda'} # Initialize empty dataframe final_df = pd.DataFrame() # Iterate through your dictionaries, convert to 1 row dataframe and append it to your final dataframe for data in [data1, data2]: temp_df = pd.DataFrame(data, index=[0]) final_df = final_df.append(temp_df, sort=True).reset_index(drop=True) </code></pre> <p><strong>Specifically with what you provided</strong>, it will be something like:</p> <pre><code>ads_CarsURL = pd.DataFrame() for div in soup.findAll('a', {'class': 'result'}): adLink = div.a.get('href') adInfo= { u'adLink':adLink, u'adThumbImg':...some code..., u'adCounty':...some code... } adFullInfo = getFullAdInfo(adLink) adInfo.update(adFullInfo) temp_df = pd.DataFrame(adInfo, index=[0]) ads_CarsURL = final_df.append(temp_df, sort=True).reset_index(drop=True) </code></pre> <p><strong>Output:</strong></p> <pre><code>print (final_df.to_string()) adDatePublished adID adTimePublished adTitle adViewed carManufacturer carPriceEUR 0 20.11.2009 2027007 14:23 Ford 750 Special 102 Ford 600 1 NaN 20555 11:23 Honda 2 Honda NaN </code></pre>
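A small side note on the pattern above: appending one row at a time copies the whole DataFrame on every iteration. A sketch of the usual alternative, collecting the merged dicts in a list and building the frame once (same toy dicts as in the answer):
<pre><code>
import pandas as pd

data1 = {'adID': '2027007', 'adTitle': 'Ford 750 Special', 'carManufacturer': 'Ford'}
data2 = {'adID': '20555', 'adTitle': 'Honda', 'carManufacturer': 'Honda'}

rows = []
for ad in (data1, data2):      # in the scraper this would be the soup loop
    rows.append(ad)            # keep plain dicts while iterating

final_df = pd.DataFrame(rows)  # missing keys become NaN automatically
print(final_df)
</code></pre>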
python|pandas|dictionary|beautifulsoup
3
4,119
58,867,907
what is the difference between as_matrix() and to_numpy() methods?
<p>what is the difference between as_matrix() and to_numpy() methods? I know that both are used to convert pandas dataframe into numpy ndarray, but what is the difference between these 2 methods?</p>
<p>I guess <code>as_matrix()</code> was sometimes confused with NumPy matrices, which is part of why it was deprecated. What we actually get from <code>.as_matrix()</code> is a NumPy array, not a NumPy matrix. As for the difference between NumPy arrays and NumPy matrices, here is a good answer <a href="https://stackoverflow.com/questions/4151128/what-are-the-differences-between-numpy-arrays-and-matrices-which-one-should-i-u">by unutbu</a>.</p>
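A small sketch of the practical upshot: on recent pandas only <code>to_numpy()</code> exists (<code>as_matrix()</code> was deprecated and later removed), and it also accepts a few extra arguments:
<pre><code>
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.5]})

arr = df.to_numpy()                   # always a numpy.ndarray, never numpy.matrix
print(type(arr), arr.dtype)           # <class 'numpy.ndarray'> float64 (common dtype)

arr32 = df.to_numpy(dtype='float32')  # to_numpy can also control dtype and copying
# df.as_matrix() returned the same kind of ndarray but is gone from current pandas
</code></pre>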
python|pandas|list
0
4,120
58,783,496
How can I assign a name to the 'intersection' of index and columns in a pandas dataframe?
<p>If I create a dataframe and then generate a pivot table from it, a string keeps appearing in the upper left "cell" of the resulting table, like below. In this example it is the string "n":</p> <pre><code>import pandas as pd df = pd.DataFrame({'col1':['a','a','b','b','c','c'], 'col2':['str_a1','str_a2','str_b1','str_b2','str_c1','str_c2']}) df2 = df.assign(n=df.groupby('col1').cumcount()).pivot(index='col1',columns='n',values='col2').reset_index() df2 n col1 0 1 0 a str_a1 str_a2 1 b str_b1 str_b2 2 c str_c1 str_c2 </code></pre> <p>If I create the dataframe directly like below, nothing appears there. How can I include the "n" in this second option and how can I remove the "n" from the option above?</p> <pre><code>df3 = pd.DataFrame({'col1':['a','b','c'], '0':['str_a1','str_b1','str_c1'], '1':['srt_a2','str_b2','str_c2']}) df3 col1 0 1 0 a str_a1 srt_a2 1 b str_b1 str_b2 2 c str_c1 str_c2 </code></pre>
<p>I got the answer by 'looking' at the dataframe 'horizontally' instead of 'vertically'. The 'n' that I was mentioning above is not the index name, as splash58 pointed out. I must say that I used to think it was.</p> <p>Then I noticed that the 'n' is on the same line as the other column names. Therefore it must be the name of the columns index.</p> <p>In fact, if you do:</p> <pre><code>import pandas as pd df = pd.DataFrame({'col1':['a','a','b','b','c','c'], 'col2':['str_a1','str_a2','str_b1','str_b2','str_c1','str_c2']}) df2 = df.assign(n=df.groupby('col1').cumcount()).pivot(index='col1',columns='n',values='col2').reset_index() print(df2) </code></pre> <p>you get:</p> <pre><code>n col1 0 1 0 a str_a1 str_a2 1 b str_b1 str_b2 2 c str_c1 str_c2 </code></pre> <p>After this, if you do:</p> <pre><code>df2.columns.name </code></pre> <p>you get:</p> <pre><code>'n' </code></pre>
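To actually remove the 'n' (the second half of the question above), it is enough to clear that columns-index name; a short sketch:
<pre><code>
import pandas as pd

df = pd.DataFrame({'col1': ['a', 'a', 'b', 'b', 'c', 'c'],
                   'col2': ['str_a1', 'str_a2', 'str_b1', 'str_b2', 'str_c1', 'str_c2']})
df2 = (df.assign(n=df.groupby('col1').cumcount())
         .pivot(index='col1', columns='n', values='col2')
         .reset_index())

df2.columns.name = None   # drop the name of the columns index
print(df2)
#   col1       0       1
# 0    a  str_a1  str_a2
# 1    b  str_b1  str_b2
# 2    c  str_c1  str_c2
</code></pre>
Conversely, setting df3.columns.name = 'n' adds the label to the directly created frame.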
python|pandas|indexing|pivot-table|rename
0
4,121
70,351,924
How to use filter condition along with groupby in pandas
<p>I am trying to group the dataset by multiple columns with groupby while also filtering on a particular value in one of those columns. I am able to group with groupby but not able to apply the filter.</p> <p>I have tried using the code below:</p> <pre><code>df.groupby(['city','season','toss_winner','toss_decision'])['winner'].size() </code></pre> <p><strong>Actual result:</strong> it gives me details for all cities, i.e. Cape Town, Centurion and Chandigarh</p> <p><strong>Expected result:</strong> I just want the details where city is equal to 'Cape Town'</p> <p>Please check the screenshot attached.</p> <p><a href="https://i.stack.imgur.com/VqLHS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VqLHS.png" alt="enter image description here" /></a></p>
<p>Filter city=CapeTown first and then groupby:</p> <pre><code>out = (df.query(&quot;city =='Cape Town'&quot;) .groupby(['city','season','toss_winner','toss_decision'])['winner'].size()) </code></pre>
python|pandas|group-by|filtering
0
4,122
70,275,939
Return to previous tensorflow version
<p>I have been running with tensorflow 2.3.0</p> <p>A few days ago I tried installing a library with this</p> <pre><code>pip install tensorflow_decision_forests </code></pre> <p>This upgraded my tensorflow to 2.7.0 and now I'm having problems including not being able to use gpu in my training. Is there any way to revert this change?</p> <p>I tried conda list --revisions but the last revision is from before this change.</p>
<p>Running <code>pip install tensorflow==2.3.0</code> solved my problem. Sorry for not trying it earlier, but I was afraid it would make things worse.</p>
python|tensorflow|conda
1
4,123
70,178,163
How do I create new pandas dataframe by grouping multiple variables?
<p>I am having tremendous difficulty getting my data sorted. I'm at the point where I could have manually created a new .csv file in the time I have spent trying to figure this out, but I need to do this through code. I have a large dataset of baseball salaries by player going back 150 years. <a href="https://i.stack.imgur.com/HclTq.png" rel="nofollow noreferrer">This is what my dataset looks like</a>.</p> <p>I want to create a new dataframe that adds the individual player salaries for a given team for a given year, organized by team and by year. Using the following technique I have come up with this: <code>team_salaries_groupby_team = salaries.groupby(['teamID','yearID']).agg({'salary' : ['sum']})</code>, which outputs this: <a href="https://i.stack.imgur.com/lRrhj.png" rel="nofollow noreferrer">my output</a>. On screen it looks sort of like what I want, but I want a dataframe with three columns (plus an index on the left). I can't really do the sort of analysis I want to do with this output.</p> <p>Lastly, I have also tried this method: <code>new_column = salaries['teamID'] + salaries['yearID'].astype(str) salaries['teamyear'] = new_column salaries teamyear = salaries.groupby(['teamyear']).agg({'salary' : ['sum']}) print(teamyear)</code>. <a href="https://i.stack.imgur.com/OkVls.png" rel="nofollow noreferrer">Another output</a> It adds the individual player salaries per team for a given year, but now I don't know how to separate the year and put it into its own column. Help please?</p>
<p>You just need to <code>reset_index()</code></p> <p>Here is sample code :</p> <pre><code>salaries = pd.DataFrame(columns=['yearID','teamID','igID','playerID','salary']) salaries=salaries.append({'yearID':1985,'teamID':'ATL','igID':'NL','playerID':'A','salary':10000},ignore_index=True) salaries=salaries.append({'yearID':1985,'teamID':'ATL','igID':'NL','playerID':'B','salary':20000},ignore_index=True) salaries=salaries.append({'yearID':1985,'teamID':'ATL','igID':'NL','playerID':'A','salary':10000},ignore_index=True) salaries=salaries.append({'yearID':1985,'teamID':'ATL','igID':'NL','playerID':'C','salary':5000},ignore_index=True) salaries=salaries.append({'yearID':1985,'teamID':'ATL','igID':'NL','playerID':'B','salary':20000},ignore_index=True) salaries=salaries.append({'yearID':2016,'teamID':'ATL','igID':'NL','playerID':'A','salary':100000},ignore_index=True) salaries=salaries.append({'yearID':2016,'teamID':'ATL','igID':'NL','playerID':'B','salary':200000},ignore_index=True) salaries=salaries.append({'yearID':2016,'teamID':'ATL','igID':'NL','playerID':'C','salary':50000},ignore_index=True) salaries=salaries.append({'yearID':2016,'teamID':'ATL','igID':'NL','playerID':'A','salary':100000},ignore_index=True) salaries=salaries.append({'yearID':2016,'teamID':'ATL','igID':'NL','playerID':'B','salary':200000},ignore_index=True) </code></pre> <p>After that , <code>groupby</code> and <code>reset_index</code></p> <pre><code>sample_df = salaries.groupby(['teamID', 'yearID']).salary.sum().reset_index() </code></pre> <p><a href="https://i.stack.imgur.com/i4xT1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i4xT1.png" alt="Output" /></a></p> <p>Is this what you are looking for ?</p>
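<p>A slightly shorter equivalent, assuming the same <code>salaries</code> frame, is to pass <code>as_index=False</code> so the grouping keys stay as regular columns and no <code>reset_index()</code> is needed:</p> <pre><code># grouping keys remain ordinary columns, so the result already has the three columns you want
sample_df = salaries.groupby(['teamID', 'yearID'], as_index=False)['salary'].sum()
print(sample_df)
</code></pre>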
python|pandas|dataframe|sorting|pandas-groupby
1
4,124
70,282,491
Pandas dataframe too slow when using loc function to create a new dataframe (I want to flatten a dataframe)
<p>This is not the most beautiful code. And it's too slow when doing what I want it to, so I was hoping someone could tell me a faster way of doing this.</p> <p>I have a file (of about 800K+ lines) looking something like the example below. I want to flatten the file so that one userident has all the answers after it (in the example, I would like to have 3 long lines with the info). There are 8 questions per user, so this takes way too long to do this way.</p> <pre><code>userident, qustion, answer, time, answer j/n, amount, text, flag 1,FMTA DYN 01 - PERSON,13:00,'j',14,'some info',0 1,FMTA DYN 02 - PERSON,13:00,'j',14,'some info',0 1,FMTA DYN 03 - PERSON,13:00,'j',14,'some info',0 2,FMTA DYN 01 - PERSON,13:00,'j',14,'some info',0 2,FMTA DYN 02 - PERSON,13:00,'j',14,'some info',0 2,FMTA DYN 03 - PERSON,13:00,'j',14,'some info',0 3,FMTA DYN 01 - PERSON,13:00,'j',14,'some info',0 3,FMTA DYN 02 - PERSON,13:00,'j',14,'some info',0 3,FMTA DYN 03 - PERSON,13:00,'j',14,'some info',0 </code></pre> <pre><code># This is the question FORMAL_INFO = ['FMTA DYN 01 - PERSON', 'FMTA DYN 02 - PERSON','FMTA DYN 03 - PERSON','FMTA DYN 04 - PERSON','FMTA DYN 05 - PERSON','FMTA DYN 06 - PERSON','FORMÅL OG TILSIKTA ART KONTANT','FORMÅL OG TILSIKTA ART UTLAND INNB','FORMÅL OG TILSIKTA ART UTLAND UTBET'] column_names = [&quot;userident&quot;, &quot;Spørsmål1&quot;, &quot;Svar 1&quot;, &quot;tid1&quot;, &quot;svar j/n1&quot;,&quot;SVAR_BELOP1&quot;,&quot;SVAR_TEKST1&quot;,&quot;selvbetjent flag1&quot;, &quot;Spørsmål2&quot;, &quot;Svar 2&quot;, &quot;tid2&quot;, &quot;svar j/n2&quot;,&quot;SVAR_BELOP2&quot;,&quot;SVAR_TEKST2&quot;,&quot;selvbetjent flag2&quot;, &quot;Spørsmål3&quot;, &quot;Svar 3&quot;, &quot;tid3&quot;, &quot;svar j/n3&quot;,&quot;SVAR_BELOP3&quot;,&quot;SVAR_TEKST3&quot;,&quot;selvbetjent flag3&quot;, &quot;Spørsmål4&quot;, &quot;Svar 4&quot;, &quot;tid4&quot;, &quot;svar j/n4&quot;,&quot;SVAR_BELOP4&quot;,&quot;SVAR_TEKST4&quot;,&quot;selvbetjent flag4&quot;, &quot;Spørsmål5&quot;, &quot;Svar 5&quot;, &quot;tid5&quot;, &quot;svar j/n5&quot;,&quot;SVAR_BELOP5&quot;,&quot;SVAR_TEKST5&quot;,&quot;selvbetjent flag5&quot;, &quot;Spørsmål6&quot;, &quot;Svar 6&quot;, &quot;tid6&quot;, &quot;svar j/n6&quot;,&quot;SVAR_BELOP6&quot;,&quot;SVAR_TEKST6&quot;,&quot;selvbetjent flag6&quot;, &quot;Spørsmål7&quot;, &quot;Svar 7&quot;, &quot;tid7&quot;, &quot;svar j/n7&quot;,&quot;SVAR_BELOP7&quot;,&quot;SVAR_TEKST7&quot;,&quot;selvbetjent flag7&quot;, &quot;Spørsmål8&quot;, &quot;Svar 8&quot;, &quot;tid8&quot;, &quot;svar j/n8&quot;,&quot;SVAR_BELOP8&quot;,&quot;SVAR_TEKST8&quot;,&quot;selvbetjent flag8&quot;, &quot;Spørsmål9&quot;, &quot;Svar 9&quot;, &quot;tid9&quot;, &quot;svar j/n9&quot;,&quot;SVAR_BELOP9&quot;,&quot;SVAR_TEKST9&quot;,&quot;selvbetjent flag9&quot; ] DF_FORM_SVAR = pd.DataFrame(columns=column_names) #read all answers from file DF_ALLE_SVAR = pd.read_csv('all_answers.csv') # read all useridents into the userident field DF_FORM_SVAR['brukerident'] = DF_ALLE_SVAR['USERIDENTS'] </code></pre> <p>This is the code I wrote to flatten the file, but it's too slow for me, so I hope there is a faster way of doing this.</p> <pre><code># Loop through all useridents for i in range(len(DF_FORM_SVAR)): # Set the variable userident KUNDENUMMER = DF_FORM_SVAR[&quot;brukerident&quot;][i] # Loop through all 9 questions and fill them out.
for j in range(9): # Get the spesific question to fill out for this user DF_SVAR_RUNDE = DF_ALLE_SVAR.loc[DF_ALLE_SVAR['FORMAL_ART_PROD_NAVN']==FORMAL_INFO[j]] k = j+1 # check if question is filled out. if len(DF_SVAR_RUNDE['FORMAL_ART_SPORSMAL_TEKST'].loc[DF_SVAR_RUNDE['FK_BANKKUNDE']==KUNDENUMMER]): DF_FORM_SVAR['Spørsmål'+str(k)][i] = DF_SVAR_RUNDE['FORMAL_ART_SPORSMAL_TEKST'].loc[DF_SVAR_RUNDE['FK_BANKKUNDE']==KUNDENUMMER].iloc[0] DF_FORM_SVAR['tid'+str(k)][i] = DF_SVAR_RUNDE['SVAR_TID'].loc[DF_SVAR_RUNDE['FK_BANKKUNDE']==KUNDENUMMER].iloc[0] DF_FORM_SVAR['SVAR_BELOP'+str(k)][i] = DF_SVAR_RUNDE['SVAR_BELOP'].loc[DF_SVAR_RUNDE['FK_BANKKUNDE']==KUNDENUMMER].iloc[0] DF_FORM_SVAR['SVAR_TEKST'+str(k)][i] = DF_SVAR_RUNDE['SVAR_TEKST'].loc[DF_SVAR_RUNDE['FK_BANKKUNDE']==KUNDENUMMER].iloc[0] DF_FORM_SVAR['svar j/n'+str(k)][i] = DF_SVAR_RUNDE['SVAR_JN'].loc[DF_SVAR_RUNDE['FK_BANKKUNDE']==KUNDENUMMER].iloc[0] DF_FORM_SVAR['selvbetjent flag'+str(k)][i] = DF_SVAR_RUNDE['SELVBETJENT_FLG'].loc[DF_SVAR_RUNDE['FK_BANKKUNDE']==KUNDENUMMER].iloc[0] </code></pre>
<p><code>pivot</code> might do what you need</p> <pre><code>import pandas as pd import io long_df = pd.read_csv(io.StringIO( &quot;&quot;&quot; userident,qustion,time,answer j/n,amount,text,flag 1,FMTA DYN 01 - PERSON,13:00,j,14,some info,0 1,FMTA DYN 02 - PERSON,13:00,j,14,some info,0 1,FMTA DYN 03 - PERSON,13:00,j,14,some info,0 2,FMTA DYN 01 - PERSON,13:00,j,14,some info,0 2,FMTA DYN 02 - PERSON,13:00,j,14,some info,0 2,FMTA DYN 03 - PERSON,13:00,j,14,some info,0 3,FMTA DYN 01 - PERSON,13:00,j,14,some info,0 3,FMTA DYN 02 - PERSON,13:00,j,14,some info,0 3,FMTA DYN 03 - PERSON,13:00,j,14,some info,0&quot;&quot;&quot; )) wide_df = long_df.pivot( index='userident', columns='qustion', ) #collapse multiindex columns stolen from #https://stackoverflow.com/questions/14507794/pandas-how-to-flatten-a-hierarchical-index-in-columns wide_df.columns = [' '.join(col).strip() for col in wide_df.columns.values] wide_df </code></pre> <p>Output</p> <p><a href="https://i.stack.imgur.com/zlcxe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zlcxe.png" alt="enter image description here" /></a></p> <p>Then you can access the time of the first question for the 1-st person like this</p> <pre><code>wide_df.loc[1,'time FMTA DYN 01 - PERSON'] </code></pre>
python|pandas|dataframe
0
4,125
70,168,153
Python's numpy array of doubles to and from a HDF file without unnecessary conversions
<p>I have a numpy array of doubles, <code>np.array([double1, double2, double3, ... , doublen])</code>. All of the array's elements are contiguous in memory. I want to use an HDF file as a data container (save / load).</p> <p>Save is implemented as: <code>hdf.create_dataset(name='data', data=np.array([double1, double2, double3, ... , doublen]))</code></p> <p>Load is implemented as: <code>data = np.array(hdf_group['data'])</code></p> <p>How can I verify that no unnecessary conversions, like double to string to double, occur?</p>
<p>The variable dtype remains unchanged throughout the process. You can verify by checking dtype as you go. Code below demonstrates behavior. (I created variable <code>arr</code> to hold the array of np.doubles before loading to HDF5.)</p> <pre><code>double1 = np.double(1) double2 = np.double(2) double3 = np.double(3) doublen = np.double(10) with h5py.File('SO_70168153.h5','w') as hdf: arr = np.array([double1, double2, double3, doublen]) print(f&quot;np.array:\nShape: {arr.shape}, Dtype: {arr.dtype}&quot;) print(arr[:]) hdf.create_dataset(name='data', data=arr ) print(f&quot;HDF dataset:\nShape: {hdf['data'].shape}, Dtype: {hdf['data'].dtype}&quot;) print(hdf['data'][:]) data = hdf['data'][:] print(f&quot;data array:\nShape: {data.shape}, Dtype: {data.dtype}&quot;) print(data[:]) </code></pre> <p>Output from above:</p> <pre><code>np.array: Shape: (4,), Dtype: float64 [ 1. 2. 3. 10.] HDF dataset: Shape: (4,), Dtype: float64 [ 1. 2. 3. 10.] data array: Shape: (4,), Dtype: float64 [ 1. 2. 3. 10.] </code></pre>
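<p>A small round-trip check along the same lines, as a sketch that assumes the <code>arr</code> and <code>data</code> variables from the snippet above:</p> <pre><code># dtype and values survive the HDF5 round trip unchanged
assert data.dtype == arr.dtype == np.float64
assert np.array_equal(arr, data)
</code></pre>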
python|numpy|h5py|hdf
0
4,126
56,050,604
How to makeup FSNS dataset with my own image for attention OCR tensorflow model
<p>I want to apply attention-ocr to detect all digits on the number plates of cars. I've read the README.md of attention_ocr on GitHub (<a href="https://github.com/tensorflow/models/tree/master/research/attention_ocr" rel="nofollow noreferrer">https://github.com/tensorflow/models/tree/master/research/attention_ocr</a>), and also the StackOverflow page describing how to use my own image data to train the model (<a href="https://stackoverflow.com/a/44461910/743658">https://stackoverflow.com/a/44461910/743658</a>). However, I didn't find any information on how to store the annotation or label of the picture, or the format for this problem. For an object detection model, I was able to make my dataset with LabelImg, convert it into a csv file, and finally make a .tfrecord file. I want to make a .tfrecord file in the FSNS dataset format.</p> <p>Can you give me your advice on how to proceed with these training steps?</p>
<p>Please reread the <a href="https://stackoverflow.com/a/44461910/743658">mentioned answer</a> it has a section explaining how to store the annotation. It is stored in the three features <code>image/text</code>, <code>image/class</code> and <code>image/unpadded_class</code>. The <code>image/text</code> field is used for visualization, some models support unpadded sequences and use <code>image/unpadded_class</code>, while the default version relies on the text padded with null characters to have the same length stored in the feature <code>image/class</code>. Here is the excerpt to store the text annotation:</p> <pre><code>char_ids_padded, char_ids_unpadded = encode_utf8_string( text, charset, length, null_char_id) example = tf.train.Example(features=tf.train.Features( feature={ 'image/class': _int64_feature(char_ids_padded), 'image/unpadded_class': _int64_feature(char_ids_unpadded), 'image/text': _bytes_feature(text) ... } )) </code></pre>
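<p>For illustration only, here is a rough sketch of what such an encoding step could look like; the real <code>encode_utf8_string</code> lives in the attention_ocr repository, and the tiny <code>charset</code> mapping below is a made-up assumption:</p> <pre><code># hypothetical charset mapping each character to an integer id; id 0 is reserved for the null character
charset = {'a': 1, 'b': 2, 'c': 3}

def encode_text(text, charset, length, null_char_id=0):
    """Pad the character ids of `text` to a fixed `length` with the null id."""
    char_ids_unpadded = [charset[c] for c in text]
    char_ids_padded = char_ids_unpadded + [null_char_id] * (length - len(char_ids_unpadded))
    return char_ids_padded, char_ids_unpadded

print(encode_text('abc', charset, length=6))  # ([1, 2, 3, 0, 0, 0], [1, 2, 3])
</code></pre>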
tensorflow|dataset|ocr|attention-model
0
4,127
56,066,544
Creating a Clustered Bar chart with Matplotlib
<p>This is extremely trivial, so I apologize!</p> <p>I'm just getting into matplotlib and pandas and I think I'm overcomplicating it... I'm trying to create a clustered bar chart (like the one below).</p> <p><a href="https://i.stack.imgur.com/d4o3F.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d4o3F.jpg" alt="Clustered Bar Chart"></a></p> <p>The dataframe I am working with is structured like this:</p> <p><a href="https://i.stack.imgur.com/HnzGE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HnzGE.jpg" alt="Data Structure"></a></p> <p>I want to create a clustered bar chart where the x-axis is days of the week (df['Days of Week']), the y-axis is count, and the categories of what is being counted are Type A and Type B (determined by df['Type']).</p> <p>From the googling I am doing, my code is long and complicated... but I feel like this is easier than I'm making it...</p> <p>Any help appreciated!</p>
<p>Try this:</p> <pre><code>new_df = (df.groupby('Days of Week')['Type'] .value_counts() .unstack() ) new_df.plot.bar() </code></pre>
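<p>A self-contained sketch with made-up data, assuming columns named <code>'Days of Week'</code> and <code>'Type'</code> as in the question:</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    'Days of Week': ['Mon', 'Mon', 'Tue', 'Tue', 'Tue', 'Wed'],
    'Type':         ['Type A', 'Type B', 'Type A', 'Type A', 'Type B', 'Type B'],
})

# rows = day, columns = type, values = counts; missing combinations become 0
counts = df.groupby('Days of Week')['Type'].value_counts().unstack(fill_value=0)
counts.plot.bar()
plt.ylabel('count')
plt.tight_layout()
plt.show()
</code></pre>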
python|pandas|matplotlib
0
4,128
55,765,320
Incorrect output: Extracting text from pdf's, docx's, pptx's will not output on their own separate line
<p>I created a function that will open each file in a directory and extract the text from each file and output it in an excel sheet using Pandas. The indexing for each file type seems to be working just fine.However the extracted text from each file comes out next to each other in a list and not separated and next to their corresponding file.</p> <p>See bottom of script for current output and the out put I want.</p> <p>** I believe the problem lies in the loader() function which takes in a path, goes through each directory file checks the file .ext and extracts the text.</p> <p>Thank you!</p> <pre><code>import re #import PyPDF4 import pathlib from pathlib import Path import shutil from datetime import datetime import time from configparser import ConfigParser import glob import fileinput import pandas as pd import os from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter from pdfminer.converter import TextConverter from pdfminer.layout import LAParams from pdfminer.pdfpage import PDFPage from io import StringIO import docx2txt from pptx import Presentation import more_itertools as mit p = Path('C:/Users/XXXX/Desktop/test') txt_files = list(p.rglob('*txt')) PDF_files = list(p.rglob('*pdf')) csv_files = list(p.rglob('*csv')) docx_files = list(p.rglob('*docx')) pptx_files = list(p.rglob('*pptx')) #excel_files = list(p.rglob('xls')) def pdf_to_text(x): # PDFMiner rsrcmgr = PDFResourceManager() sio = StringIO() codec = 'utf-8' laparams = LAParams() device = TextConverter(rsrcmgr, sio, codec=codec, laparams=laparams) interpreter = PDFPageInterpreter(rsrcmgr, device) # Extract text fp = open(x, 'rb') for page in PDFPage.get_pages(fp): interpreter.process_page(page) fp.close() # Get text from StringIO text = sio.getvalue() # Cleanup device.close() sio.close() return text #------------------------------------------------------------------------------- def loader(path): with open(str(path.resolve()),"r",encoding = "ISO-8859-1") as f: docx_out,pptx_out,pdf_out = [],[],[] if path.suffix == ".pdf": for name1 in PDF_files: pdf_out.append(pdf_to_text(name1)) return pdf_out elif path.suffix == ".docx": for name2 in docx_files: docx_out.append(docx2txt.process(name2)) return docx_out elif path.suffix == ".pptx": for file in pptx_files: prs = Presentation(file) for slide in prs.slides: for shape in slide.shapes: if not shape.has_text_frame: continue for paragraph in shape.text_frame.paragraphs: for run in paragraph.runs: pptx_out.append(run.text) return pptx_out else: return f.readlines() print(pdf_out) def file_generator(): files = txt_files+PDF_files+csv_files+docx_files+pptx_files for item in files: yield { "path": item, "name": item.name[0:], "created": time.ctime(item.stat().st_ctime), "modified": time.ctime(item.stat().st_mtime), "content": loader(item) } def to_xlsx(): df = pd.DataFrame.from_dict(file_generator()) df.head() df.to_excel("tester4.xlsx") if __name__ == "__main__": to_xlsx() #------------------------------------------------------------ OUTPUT EXAMPLE current output: content ["content_test1","content_test2"] test1.pdf ["content_test1","content_test2"] test2.pdf What I want: ["content_test1"] test1.pdf ["content_test2"] test2.pdf </code></pre>
<p>The appends called for each filetype list look like they are adding the contents of each file to the end of the list pertaining to that filetype. If you want to keep the contents of each individual file separate, I'd recommend creating a separate <a href="https://stackoverflow.com/questions/1024847/add-new-keys-to-a-dictionary">dict</a> for each filetype, which then holds an individual list for each file processed. Taking the PDFs as an example:</p> <pre><code>def loader(path): with open(str(path.resolve()),"r",encoding = "ISO-8859-1") as f: docx_out,pptx_out,pdf_out = {},{},{} if path.suffix == ".pdf": for name1 in PDF_files: name1_contents = [] name1_contents.append(pdf_to_text(name1)) pdf_out[name1] = name1_contents return pdf_out </code></pre> <p>To then print out your results in a similar way as you have been:</p> <pre><code>for name, contents in pdf_out.items(): print(name, contents) </code></pre>
python|pandas|anaconda|pdfminer|pathlib
1
4,129
64,750,931
Compare grouped minimum of one column to a group of timestamps in pandas
<p>I have the following dataframe (extract only for one value of id3):</p> <pre><code>id1 id2 id3 id4 id5 id6 status id7 max_snsr_ts max_ts_fs k 292 346 1041 656 578 5780 on 53 10/21/2020 23:59 10/22/2020 23:30 48 292 346 1041 657 708 7080 on 53 10/21/2020 23:59 10/22/2020 23:30 48 292 346 1041 658 579 5790 on 53 10/19/2020 23:59 10/22/2020 23:30 48 292 346 1041 657 708 5780 on 53 10/21/2020 23:59 10/23/2020 23:30 96 292 346 1041 658 579 7080 on 53 10/19/2020 23:59 10/23/2020 23:30 96 292 346 1041 656 578 5790 on 53 10/21/2020 23:59 10/23/2020 23:30 96 </code></pre> <p>I am trying to group by id3, select the minimum of the max_ts column and then compare that with the max_ts_fs for every group of id3 and k. Based on the result I would like to add a boolean as a separate column.</p> <p>I was trying to do as follows:</p> <pre><code>joined_h_raw_fs['new_col'] = np.where(joined_h_raw_fs.groupby(['id3'])['max_snsr_ts'].min().min() &gt; joined_h_raw_fs.groupby(['id3', 'k'])['max_ts_fs'] , True, False) </code></pre> <p>Expecting to get:</p> <pre><code>id1 id2 id3 id4 id5 id6 status id7 max_snsr_ts max_ts_fs k new_col 292 346 1041 656 578 5780 on 53 10/21/2020 23:59 10/22/2020 23:30 48 FALSE 292 346 1041 657 708 7080 on 53 10/21/2020 23:59 10/22/2020 23:30 48 FALSE 292 346 1041 658 579 5790 on 53 10/19/2020 23:59 10/22/2020 23:30 48 FALSE 292 346 1041 657 708 5780 on 53 10/21/2020 23:59 10/23/2020 23:30 96 FALSE 292 346 1041 658 579 7080 on 53 10/19/2020 23:59 10/23/2020 23:30 96 FALSE 292 346 1041 656 578 5790 on 53 10/21/2020 23:59 10/23/2020 23:30 96 FALSE </code></pre> <p>But I am getting the following error:</p> <pre><code>... last 1 frames repeated, from the frame below ... pandas/_libs/tslibs/c_timestamp.pyx in pandas._libs.tslibs.c_timestamp._Timestamp.__richcmp__() RecursionError: maximum recursion depth exceeded in comparison </code></pre> <p>I am still not very good in pandas as I am transitioning from dplyr.</p> <p>Could someone point what I am doing wrong?</p> <p>BR</p>
<p>If you want to compare against the original columns, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a>: it returns a Series of the same length as the original, filled with the aggregated values, so <code>np.where</code> is not necessary here.</p> <pre><code>s1 = joined_h_raw_fs.groupby(['id3'])['max_snsr_ts'].transform('min') s2 = joined_h_raw_fs.groupby(['id3', 'k'])['max_ts_fs'].transform('min') joined_h_raw_fs['new_col'] = s1 &gt; s2 </code></pre>
python|pandas|pandas-groupby
0
4,130
64,790,864
Problem with new_row = dict.fromkeys(df, 0) statement
<p><a href="https://i.stack.imgur.com/xhqY9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xhqY9.png" alt="2nd name should be under name category" /></a></p> <p>Here is the code snippet:</p> <pre><code>df = pd.read_csv(&quot;data.csv&quot;) new_row = dict.fromkeys(df, 0) new_row[df.columns[0]] = filename new_row[df.columns[326]] = &quot;Scanware&quot; df = df.append(new_row, ignore_index=True) </code></pre> <p>It is somehow creating a separate column to store the new filename, and the same happens for column 326. Secondly, I want to know how to remove the index numbers (the column on the left side) from the CSV file.</p> <p>UPDATE: The problem was solved by passing the index = False parameter to the df.to_csv function.</p> <pre><code>df.to_csv('data.csv', index = False) </code></pre>
<p>Why append the new DataFrame, you can just assign to the original DF columns:</p> <pre><code>df[df.columns[0]] = filename df[df.columns[326]] = &quot;Scanware&quot; </code></pre> <p>And while exporting the DataFrame to CSV you can add <code>index=False</code> argument to the <code>.to_csv()</code></p>
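<p>Putting both points together, a short sketch that assumes <code>filename</code> is already defined as in the question:</p> <pre><code>df[df.columns[0]] = filename
df[df.columns[326]] = "Scanware"

# index=False keeps the row numbers out of the exported file
df.to_csv('data.csv', index=False)
</code></pre>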
python|pandas
0
4,131
64,817,970
list index out of range when crawling and adjusting data
<p>I am trying to crawl data from a list of url (1st loop) . And in each url (2nd loop), I want to adjust the product_reviews['reviews'] ( list) by adding more data. Here is my code :</p> <pre><code>import requests import pandas as pd df = pd.read_excel(r'C:\ids.xlsx') ids = df['ids'].values.tolist() link = 'https://www.real.de/product/%s/' url_test = 'https://www.real.de/pdp-test/api/v1/%s/product-attributes/?offset=0&amp;limit=500' url_test1 = 'https://www.real.de/pdp-test/api/v1/%s/product-reviews/?offset=0&amp;limit=500' for i in ids: product_id = requests.get(url_test %i).json() product_reviews = requests.get(url_test1 %i).json() for x in range(0,len(product_reviews['reviews']),1): product_reviews['reviews'][x]['variantAttributes'].append(str(int(100*float(product_reviews['reviews'][x]['variantAttributes'][1]['label'].replace(&quot; m&quot;,&quot;&quot;).replace(&quot;,&quot;,&quot;.&quot;))))) product_reviews['reviews'][x]['variantAttributes'].append(str(int(100*float(product_reviews['reviews'][x]['variantAttributes'][0]['label'].replace(&quot; m&quot;,&quot;&quot;).replace(&quot;,&quot;,&quot;.&quot;))))) product_reviews['reviews'][x]['size']= str(int(100*float(product_reviews['reviews'][x]['variantAttributes'][1]['label'].replace(&quot; m&quot;,&quot;&quot;).replace(&quot;,&quot;,&quot;.&quot;))))+ 'x' + str(int(100*float(product_reviews['reviews'][x]['variantAttributes'][0]['label'].replace(&quot; m&quot;,&quot;&quot;).replace(&quot;,&quot;,&quot;.&quot;)))) product_reviews['reviews'][x]['url'] = link %i product_reviews['reviews'][x]['ean'] = product_id['defaultAttributes'][0]['values'][0]['text'] product_reviews['reviews'][x]['TotalReviewperParent'] = product_reviews['totalReviews'] df = pd.DataFrame(product_reviews['reviews']) df.to_excel( r'C:\new\str(i).xlsx', index=False) </code></pre> <p>However when I run this code, it returns error :</p> <blockquote> <p>line 24, in product_reviews['reviews'][x]['variantAttributes'].append(str(int(100*float(product_reviews['reviews'][x]['variantAttributes'][1]['label'].replace(&quot; m&quot;,&quot;&quot;).replace(&quot;,&quot;,&quot;.&quot;)))))</p> </blockquote> <blockquote> <p>IndexError: list index out of range</p> </blockquote> <p>When I run the 2nd loop for 1 url, it runs fine, however when I put 2nd loop inside 1st loop, it returns error. What is the solution for it ? And my code seems so monkey. Do you know how to improve my code so it can be shorter ?</p>
<p>Please, in the future, try to create a <a href="https://stackoverflow.com/help/minimal-reproducible-example">Minimal, Reproducible Example</a>. We don't have access to your 'ids.xlsx' so we can't verify if the problem is with a specific id in your list or a general problem.</p> <p>Taking a random id, <code>338661983</code>, and using the following code:</p> <pre><code>import requests link = 'https://www.real.de/product/%s/' url_attributes = 'https://www.real.de/pdp-test/api/v1/%s/product-attributes/?offset=0&amp;limit=500' url_reviews = 'https://www.real.de/pdp-test/api/v1/%s/product-reviews/?offset=0&amp;limit=500' ids = [338661983] for i in ids: product_id = requests.get(url_attributes % i).json() product_reviews = requests.get(url_reviews % i).json() for review in product_reviews['reviews']: print(review) break </code></pre> <p>I get the following output:</p> <pre><code>{'reviewId': 1119427, 'title': 'Klasse!', 'date': '11.11.2020', 'rating': 5, 'isVerifiedPurchase': True, 'text': 'Originale Switch, schnelle Lieferung. Alles Top ', 'variantAttributes': [], 'author': 'hm-1511917085', 'datePublished': '2020-11-11T20:09:41+01:00'} </code></pre> <p>Notice that <code>variantAttributes</code> is an empty list. You get an IndexError because you try to take the element at position 1 of that empty list in:</p> <pre><code>review['variantAttributes'][1]['label'].replace(&quot; m&quot;,&quot;&quot;).replace(&quot;,&quot;,&quot;.&quot;) </code></pre>
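<p>One way to make the loop robust against that, assuming the same field names as above, is to skip (or default) reviews whose <code>variantAttributes</code> list is too short before indexing into it:</p> <pre><code>for review in product_reviews['reviews']:
    attrs = review.get('variantAttributes', [])
    if len(attrs) &lt; 2:
        # nothing to convert for this review; skip it (or fill in default values instead)
        continue
    # ... the existing size / url / ean handling goes here ...
</code></pre>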
python|pandas|web-scraping|request|web-crawler
1
4,132
65,042,321
BeautifulSoup find.all() web scraping returns empty
<p>When trying to scrape multiple pages of this website, I get no content in return. I usually check to make sure all the lists I'm creating are of equal length, but all are coming back as <code>len = 0</code>.</p> <p>I've used similar code to scrape other websites, so why does this code not work correctly?</p> <p>Some solutions I've tried, but haven't worked for my purposes: <code>requests.Session()</code> solutions as suggested in <a href="https://stackoverflow.com/questions/34573605/python-requests-get-returns-an-empty-string">this answer</a>, <code>.json</code> as <a href="https://stackoverflow.com/questions/16573332/jsondecodeerror-expecting-value-line-1-column-1-char-0">suggested here.</a></p> <pre><code>import requests from requests import get from bs4 import BeautifulSoup import pandas as pd from time import sleep from random import randint from googletrans import Translator translator = Translator() rg = [] ctr_n = [] ctr = [] yr = [] mn = [] sub = [] cst_n = [] cst = [] mag = [] pty_n = [] pty = [] can = [] pev1 = [] vot1 = [] vv1 = [] ivv1 = [] to1 = [] cv1 = [] cvs1 = [] pv1 = [] pvs1 = [] pev2 = [] vot2 = [] vv2 = [] ivv2 = [] to2 = [] cv2 = [] cvs2 =[] pv2 = [] pvs2 = [] seat = [] no_info = [] manual = [] START_PAGE = 1 END_PAGE = 42 for page in range(START_PAGE, END_PAGE + 1): page = requests.get(&quot;https://sejmsenat2019.pkw.gov.pl/sejmsenat2019/en/wyniki/sejm/okr/&quot; + str(page)) page.encoding = page.apparent_encoding if not page: pass else: soup = BeautifulSoup(page.text, 'html.parser') tbody = soup.find_all('table', class_='table table-borderd table-striped table-hover dataTable no-footer clickable right2 right4') sleep(randint(2,10)) for container in tbody: col1 = container.find_all('tr', {'data-id':'26079'}) for info in col1: col_1 = info.find_all('td') for data in col_1: party = data[0] party_trans = translator.translate(party) pty_n.append(party_trans) pvotes = data[1] pv1.append(pvotes) pshare = data[2] pvs1.append(pshare) mandates = data[3] seat.append(mandates) col2 = container.find_all('tr', {'data-id':'26075'}) for info in col2: col_2 = info.find_all('td') for data in col_2: party2 = data[0] party_trans2 = translator.translate(party2) pty_n.append(party_trans2) pvotes2 = data[1] pv1.append(pvotes2) pshare2 = data[2] pvs1.append(pshare2) mandates2 = data[3] seat.append(mandates2) col3 = container.find_all('tr', {'data-id':'26063'}) for info in col3: col_3 = info.find_all('td') for data in col_3: party3 = data[0].text party_trans3 = translator.translate(party3) pty_n.extend(party_trans3) pvotes3 = data[1].text pv1.extend(pvotes3) pshare3 = data[2].text pvs1.extend(pshare3) mandates3 = data[3].text seat.extend(mandates3) col4 = container.find_all('tr', {'data-id':'26091'}) for info in col4: col_4 = info.find_all('td',recursive=True) for data in col_4: party4 = data[0] party_trans4 = translator.translate(party4) pty_n.extend(party_trans4) pvotes4 = data[1] pv1.extend(pvotes4) pshare4 = data[2] pvs1.extend(pshare4) mandates4 = data[3] seat.extend(mandates4) col5 = container.find_all('tr', {'data-id':'26073'}) for info in col5: col_5 = info.find_all('td') for data in col_5: party5 = data[0] party_trans5 = translator.translate(party5) pty_n.extend(party_trans5) pvotes5 = data[1] pv1.extend(pvotes5) pshare5 = data[2] pvs1.extend(pshare5) mandates5 = data[3] seat.extend(mandates5) col6 = container.find_all('tr', {'data-id':'26080'}) for info in col6: col_6 = info.find_all('td') for data in col_6: party6 = data[0] party_trans6 = translator.translate(party6) 
pty_n.extend(party_trans6) pvotes6 = data[1] pv1.extend(pvotes6) pshare6 = data[2] pvs1.extend(pshare6) mandates6 = data[3] seat.extend(mandates6) #### TOTAL VOTES #### tfoot = soup.find_all('tfoot') for data in tfoot: fvote = data.find_all('td') for info in fvote: votefinal = info.find(text=True).get_text() fvoteindiv = [votefinal] fvotelist = fvoteindiv * (len(pty_n) - len(vot1)) vot1.extend(fvotelist) #### CONSTITUENCY NAMES #### constit = soup.find_all('a', class_='btn btn-link last') for data in constit: names = data.get_text() names_clean = names.replace(&quot;Sejum Constituency no.&quot;,&quot;&quot;) names_clean2 = names_clean.replace(&quot;[&quot;,&quot;&quot;) names_clean3 = names_clean2.replace(&quot;]&quot;,&quot;&quot;) namesfinal = names_clean3.split()[1] constitindiv = [namesfinal] constitlist = constitindiv * (len(pty_n) - len(cst_n)) cst_n.extend(constitlist) #### UNSCRAPABLE INFO #### region = 'Europe' reg2 = [region] reglist = reg2 * (len(pty_n) - len(rg)) rg.extend(reglist) country = 'Poland' ctr2 = [country] ctrlist = ctr2 * (len(pty_n) - len(ctr_n)) ctr_n.extend(ctrlist) year = '2019' yr2 = [year] yrlist = yr2 * (len(pty_n) - len(yr)) yr.extend(yrlist) month = '10' mo2 = [month] molist = mo2 * (len(pty_n) - len(mn)) mn.extend(molist) codes = '' codes2 = [codes] codeslist = codes2 * (len(pty_n) - len(manual)) manual.extend(codeslist) noinfo = '-990' noinfo2 = [noinfo] noinfolist = noinfo2 * (len(pty_n) - len(no_info)) no_info.extend(noinfolist) print(len(rg), len(pty_n), len(pv1), len(pvs1), len(no_info), len(vot1), len(cst_n)) poland19 = pd.DataFrame({ 'rg' : rg, 'ctr_n' : ctr_n, 'ctr': manual, 'yr' : yr, 'mn' : mn, 'sub' : manual, 'cst_n': cst_n, 'cst' : manual, 'mag': manual, 'pty_n': pty_n, 'pty': manual, 'can': can, 'pev1': no_info, 'vot1': vot1, 'vv1': vot1, 'ivv1': no_info, 'to1': no_info, 'cv1': no_info, 'cvs1': no_info, 'pv1': cv1, 'pvs1': cvs1, 'pev2': no_info, 'vot2': no_info, 'vv2': no_info, 'ivv2': no_info, 'to2': no_info, 'cv2': no_info, 'cvs2': no_info, 'pv2' : no_info, 'pvs2' : no_info, 'seat' : manual }) print(poland19) poland19.to_csv('poland_19.csv') </code></pre>
<p>As commented, you probably need to use Selenium. You could replace the requests lib and its request statements with something like this:</p> <pre><code>from selenium import webdriver wd = webdriver.Chrome('pathToChromeDriver') # or any other Browser driver wd.get(url) # instead of requests.get() soup = BeautifulSoup(wd.page_source, 'html.parser') </code></pre> <p>You need to follow the instructions for installing and using the selenium lib at this link: <a href="https://selenium-python.readthedocs.io/" rel="nofollow noreferrer">https://selenium-python.readthedocs.io/</a></p> <p>Note: I tested your code with selenium and I was able to get the table that you were looking for, but selecting it with class_=... does not work for some reason. Instead, browsing the scraped data, I found that the table has an id attribute. So maybe also try this instead:</p> <pre><code>tbody = soup.find_all('table', id=&quot;DataTables_Table_0&quot;) </code></pre> <p>Again, this only works when the pages are fetched with the selenium lib rather than requests. Hope that was helpful :) Cheers</p>
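<p>A minimal sketch of how the page loop could look with Selenium in place of requests; the driver path is a placeholder, and an explicit wait may be needed if the table is rendered by JavaScript:</p> <pre><code>from bs4 import BeautifulSoup
from selenium import webdriver

wd = webdriver.Chrome('pathToChromeDriver')   # placeholder path to your chromedriver
for page in range(START_PAGE, END_PAGE + 1):
    wd.get("https://sejmsenat2019.pkw.gov.pl/sejmsenat2019/en/wyniki/sejm/okr/" + str(page))
    soup = BeautifulSoup(wd.page_source, 'html.parser')
    tables = soup.find_all('table', id="DataTables_Table_0")
    # ... parse the tables exactly as before ...
wd.quit()
</code></pre>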
python|pandas|dataframe|web-scraping|beautifulsoup
1
4,133
40,215,510
ValueError: Unknown label type: array while using Decision Tree Classifier and using a custom dataset
<p>Given below is my code</p> <pre><code>dataset = np.genfromtxt('train_py.csv', dtype=float, delimiter=",") X_train, X_test, y_train, y_test = train_test_split(dataset[:,:-1],dataset[:,-1], test_size=0.2,random_state=0) model = tree.DecisionTreeClassifier(criterion='gini') #y_train = y_train.tolist() #X_train = X_train.tolist() model.fit(X_train, y_train) model.score(X_train, y_train) predicted= model.predict(x_test) </code></pre> <p>I am trying to use the decision Tree classifier on a custom dataset imported using the numpy library. But I get a ValueError which is given below when I try to fit the model.I tried using both numpy arrays and non numpy arrays such as lists but still dont seem to figure out what is causing the error. Any help appreciated. </p> <pre><code> Traceback (most recent call last): File "tree.py", line 19, in &lt;module&gt; model.fit(X_train, y_train) File "/usr/local/lib/python2.7/dist-packages/sklearn/tree/tree.py", line 177, in fit check_classification_targets(y) File "/usr/local/lib/python2.7/dist-packages/sklearn/utils/multiclass.py", line 173, in check_classification_targets raise ValueError("Unknown label type: %r" % y) ValueError: Unknown label type: array([[ 252.3352],....&lt;until end of array&gt; </code></pre>
<p>scikit-learn expects you to pass something label-like to a classifier: integers, strings, etc. Floats are not a typical encoding of a finite label space; they are treated as continuous targets and used for regression.</p> <p>docs: <a href="http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier.fit" rel="noreferrer">fit</a>: X_train, the training input samples. Internally, it will be converted to dtype=np.float32 and, if a sparse matrix is provided, to a sparse csc_matrix.</p>
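<p>If the values in the last column are genuinely discrete class codes that merely got stored as floats, one common fix, sketched under that assumption, is to cast the targets before fitting:</p> <pre><code># cast labels to integers so scikit-learn treats them as classes, not continuous values
y_train = y_train.astype(int)
y_test = y_test.astype(int)

model.fit(X_train, y_train)
</code></pre> <p>If they are actually continuous quantities, a regressor such as <code>tree.DecisionTreeRegressor</code> is the appropriate model instead.</p>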
python|numpy|scikit-learn|decision-tree
8
4,134
41,058,760
Subtract a different reference value for each group of rows in pandas
<p>I searched and found this answer which is close but I can't quite see how to apply it to my own situation, as my reference values are not stored within the same dataframe. </p> <p><a href="https://stackoverflow.com/questions/30258974/subtracting-group-specific-value-from-rows-in-pandas?noredirect=1&amp;lq=1">Subtracting group specific value from rows in pandas</a></p> <p>I have a data frame as follows, I want to subtract a different reference value from the "Isotropic Shift" column depending on which Nucleus is present (in this case C and H but in principle any value from the periodic table is possible):</p> <pre><code>REF_H = 30 REF_C = 180 df Atom Number Nucleus Isotropic Shift 0 1 C 49.3721 1 2 C 52.9650 2 3 C 36.3443 3 4 C 50.8163 4 5 C 50.0493 5 6 C 49.7985 6 7 H 24.0772 7 8 H 23.7986 8 9 H 24.2922 9 10 H 24.1632 10 11 H 24.1572 11 12 C 102.9401 </code></pre> <p>So I would like this to return a delta column where the value is the corresponding Ref_H or Ref_C value minus the isotropic shift:</p> <pre><code>modifieddf.tail(2) Atom Number Nucleus Isotropic Shift Delta 10 11 H 24.1572 5.8428 11 12 C 102.9401 77.0599 </code></pre> <p>So far the best I've come up with is this:</p> <pre><code>def generateHandC(df): h = df[df['Nucleus'] == 'H'] h['delta'] = REF_H - h['Isotropic Shift'] c = df[df['Nucleus'] == 'C'] c['delta'] = REF_C - c['Isotropic Shift'] return h, c generateHandC(df) Output: ( Atom Number Nucleus Isotropic Shift delta 6 7 H 24.0772 5.9228 7 8 H 23.7986 6.2014 8 9 H 24.2922 5.7078 9 10 H 24.1632 5.8368 10 11 H 24.1572 5.8428 14 15 H 28.3212 1.6788 15 16 H 28.0110 1.9890 17 18 H 29.2324 0.7676 18 19 H 26.7298 3.2702, Atom Number Nucleus Isotropic Shift delta 0 1 C 49.3721 130.6279 1 2 C 52.9650 127.0350 2 3 C 36.3443 143.6557 3 4 C 50.8163 129.1837 4 5 C 50.0493 129.9507 5 6 C 49.7985 130.2015 11 12 C 102.9401 77.0599 13 14 C 122.3188 57.6812) </code></pre> <p>But this is definitely not optimal, it returns the data frame as a list and throws me a <code>SettingWithCopyWarning</code>. Ideally I want to return the original dataframe plus an extra column for the delta values. Thanks!</p>
<p>You can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a> column <code>Nucleus</code> by <code>dict</code> and then substract by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.sub.html" rel="nofollow noreferrer"><code>sub</code></a>:</p> <pre><code>REF_H = 30 REF_C = 180 d = {'C': REF_C, 'H':REF_H} df['Delta'] = df.Nucleus.map(d).sub(df['Isotropic Shift']) print (df) Atom Number Nucleus Isotropic Shift Delta 0 0 1 C 49.3721 130.6279 1 1 2 C 52.9650 127.0350 2 2 3 C 36.3443 143.6557 3 3 4 C 50.8163 129.1837 4 4 5 C 50.0493 129.9507 5 5 6 C 49.7985 130.2015 6 6 7 H 24.0772 5.9228 7 7 8 H 23.7986 6.2014 8 8 9 H 24.2922 5.7078 9 9 10 H 24.1632 5.8368 10 10 11 H 24.1572 5.8428 11 11 12 C 102.9401 77.0599 </code></pre>
python|pandas|dataframe
3
4,135
53,815,562
How to use TensorFlow tf.print with non capital p?
<p>I have some TensorFlow code in a custom loss function.</p> <p>I'm using <code>tf.Print(node, [debug1, debug2], "print my debugs: ")</code></p> <p>It works fine but TF says <code>tf.Print</code> is deprecated and will be removed once I update TensorFlow and that I should be using <code>tf.**p**rint()</code>, with small p.</p> <p>I've tried using <code>tf.print</code> the same way I would <code>tf.Print()</code> but it's not working. Once I fit my model in Keras, I get an error. Unlike <code>tf.Print</code>, <code>tf.print</code> seems to take in anything <code>**kwargs</code>, so what am I supposed to give it? And unlike <code>tf.Print</code>, it does not seem to return something that I can inject into the computational graph.</p> <p>It's really difficult to search because all the information online is about <code>tf.Print()</code>.</p> <p>Can someone explain how to use <code>tf.print()</code>?</p> <p>Edit: Example code</p> <pre class="lang-py prettyprint-override"><code>def custom_loss(y_true, y_pred): loss = K.mean(...) print_no_op = tf.Print(loss, [loss, y_true, y_true.shape], "Debug output: ") return print_no_op model.compile(loss=custom_loss) </code></pre>
<p>Both the documentation of <a href="https://www.tensorflow.org/api_docs/python/tf/print" rel="noreferrer"><code>tf.print</code></a> and <a href="https://www.tensorflow.org/api_docs/python/tf/Print" rel="noreferrer"><code>tf.Print</code></a> mention that <code>tf.print</code> returns an operation with no output, so it cannot be evaluated to any value. The syntax of <code>tf.print</code> is meant to be more similar to Python's builtin <a href="https://docs.python.org/3/library/functions.html#print" rel="noreferrer"><code>print</code></a>. In your case, you could use it as follows:</p> <pre><code>def custom_loss(y_true, y_pred): loss = K.mean(...) print_op = tf.print("Debug output:", loss, y_true, y_true.shape) with tf.control_dependencies([print_op]): return K.identity(loss) </code></pre> <p>Here <a href="https://keras.io/backend/#identity" rel="noreferrer"><code>K.identity</code></a> creates a new tensor identical to <code>loss</code> but with a control dependency to <code>print_op</code>, so evaluating it will force executing the printing operation. Note that Keras also offers <a href="https://keras.io/backend/#print_tensor" rel="noreferrer"><code>K.print_tensor</code></a>, although it is less flexible than <code>tf.print</code>.</p>
python|tensorflow|keras
7
4,136
53,878,992
Dict of two non-header rows in pandas
<p>I have a dataframe that I need to keep without a header, and have the header on the first row. What would be the best way to create a <code>dict</code> of those two rows. For example:</p> <pre><code>df.loc[0:1] </code></pre> <p>Currently I would do something like:</p> <pre><code>dict(zip(df.loc[0].tolist(), df.loc[1].tolist())) </code></pre> <p>But was hoping that perhaps <code>pandas</code> had a simpler way to do that.</p>
<p>Use <code>header=None</code> in the <code>read_csv</code> function. Like so:</p> <pre><code>df = pd.read_csv(path_to_file,header = None) </code></pre> <p>Check the docs <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer">read_csv</a>.</p> <p>Then to create a <code>dict</code>:</p> <pre><code>df.iloc[:2].to_dict() </code></pre>
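<p>Note that <code>to_dict()</code> returns a nested per-column dict. If the goal is a flat mapping of first-row values to second-row values, a short sketch:</p> <pre><code>mapping = dict(zip(df.iloc[0], df.iloc[1]))
</code></pre>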
python|pandas
0
4,137
66,221,070
is there a pandas function for evaluating values in different columns with a rolling function?
<pre><code>import pandas as pd import numpy as np # setting up the dataframe data = [ ['day 1','day 2', 2, 50], ['day 2','day 4', 2, 60], ['day 3','day 3', 1, 45], ['day 4','day 7', 2, 45], ['day 5','day 10', 3, 90], ['day 6','day 7', 3, 10], ['day 7','day 8', 2, 10] ] columns = ['invoicedate', 'paymentdate', 'clientid', 'amounts'] df = pd.DataFrame(data=data, columns=columns) </code></pre> <p>I have the above dataframe, and I want to check if the last invoice of a certain client ('clientid') was paid ('paymentdate') before a new invoice was issued ('invoicedate').</p> <p>Does anyone have a good (pandas?) solution for this? I tried something with the .rolling() function.</p>
<p>This is less technology and more business modelling</p> <ul> <li>really you are looking at accounting concepts</li> <li>an approach is to consider the data as payables and receivables</li> <li>then you can run whatever rolling functions you want after modelling using these basic accounting business concepts</li> </ul> <pre><code># setting up the dataframe data = [ ['day 1','day 2', 2, 50], ['day 2','day 4', 2, 60], ['day 3','day 3', 1, 45], ['day 4','day 7', 2, 45], ['day 5','day 10', 3, 90], ['day 6','day 7', 3, 10], ['day 7','day 8', 2, 10] ] columns = ['invoicedate', 'paymentdate', 'clientid', 'amounts'] df = pd.DataFrame(data=data, columns=columns) # make abstract dates actual dates cols = [c for c in df.columns if &quot;date&quot; in c] df.loc[:,cols] = df.loc[:,cols].applymap(lambda d: pd.to_datetime(f'{d.split(&quot; &quot;)[1]}-jan-2021')) # give it an invoice id... df = df.reset_index().rename(columns={&quot;index&quot;:&quot;invoiceid&quot;}) # make a payables / receivables structure dfpr = pd.concat([df.loc[:,[c for c in df.columns if c!=&quot;paymentdate&quot;]].rename(columns={&quot;invoicedate&quot;:&quot;date&quot;}).assign(type=&quot;pay&quot;), df.loc[:,[c for c in df.columns if c!=&quot;invoicedate&quot;]].rename(columns={&quot;paymentdate&quot;:&quot;date&quot;}).assign( type=&quot;rec&quot;,amounts=lambda dfa: dfa.amounts*-1), ]).sort_values([&quot;clientid&quot;,&quot;date&quot;]).reset_index(drop=True) # analysis - what does client owe.. dfpr.assign(rolling=dfpr.groupby([&quot;clientid&quot;]).agg({&quot;amounts&quot;:&quot;cumsum&quot;})) </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: right;"></th> <th style="text-align: right;">invoiceid</th> <th style="text-align: left;">date</th> <th style="text-align: right;">clientid</th> <th style="text-align: right;">amounts</th> <th style="text-align: left;">type</th> <th style="text-align: right;">rolling</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: right;">2</td> <td style="text-align: left;">2021-01-03 00:00:00</td> <td style="text-align: right;">1</td> <td style="text-align: right;">45</td> <td style="text-align: left;">pay</td> <td style="text-align: right;">45</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: right;">2</td> <td style="text-align: left;">2021-01-03 00:00:00</td> <td style="text-align: right;">1</td> <td style="text-align: right;">-45</td> <td style="text-align: left;">rec</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: right;">0</td> <td style="text-align: left;">2021-01-01 00:00:00</td> <td style="text-align: right;">2</td> <td style="text-align: right;">50</td> <td style="text-align: left;">pay</td> <td style="text-align: right;">50</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: right;">1</td> <td style="text-align: left;">2021-01-02 00:00:00</td> <td style="text-align: right;">2</td> <td style="text-align: right;">60</td> <td style="text-align: left;">pay</td> <td style="text-align: right;">110</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: right;">0</td> <td style="text-align: left;">2021-01-02 00:00:00</td> <td style="text-align: right;">2</td> <td style="text-align: right;">-50</td> <td style="text-align: left;">rec</td> <td style="text-align: right;">60</td> </tr> <tr> <td style="text-align: right;">5</td> <td style="text-align: 
right;">3</td> <td style="text-align: left;">2021-01-04 00:00:00</td> <td style="text-align: right;">2</td> <td style="text-align: right;">45</td> <td style="text-align: left;">pay</td> <td style="text-align: right;">105</td> </tr> <tr> <td style="text-align: right;">6</td> <td style="text-align: right;">1</td> <td style="text-align: left;">2021-01-04 00:00:00</td> <td style="text-align: right;">2</td> <td style="text-align: right;">-60</td> <td style="text-align: left;">rec</td> <td style="text-align: right;">45</td> </tr> <tr> <td style="text-align: right;">7</td> <td style="text-align: right;">6</td> <td style="text-align: left;">2021-01-07 00:00:00</td> <td style="text-align: right;">2</td> <td style="text-align: right;">10</td> <td style="text-align: left;">pay</td> <td style="text-align: right;">55</td> </tr> <tr> <td style="text-align: right;">8</td> <td style="text-align: right;">3</td> <td style="text-align: left;">2021-01-07 00:00:00</td> <td style="text-align: right;">2</td> <td style="text-align: right;">-45</td> <td style="text-align: left;">rec</td> <td style="text-align: right;">10</td> </tr> <tr> <td style="text-align: right;">9</td> <td style="text-align: right;">6</td> <td style="text-align: left;">2021-01-08 00:00:00</td> <td style="text-align: right;">2</td> <td style="text-align: right;">-10</td> <td style="text-align: left;">rec</td> <td style="text-align: right;">0</td> </tr> <tr> <td style="text-align: right;">10</td> <td style="text-align: right;">4</td> <td style="text-align: left;">2021-01-05 00:00:00</td> <td style="text-align: right;">3</td> <td style="text-align: right;">90</td> <td style="text-align: left;">pay</td> <td style="text-align: right;">90</td> </tr> <tr> <td style="text-align: right;">11</td> <td style="text-align: right;">5</td> <td style="text-align: left;">2021-01-06 00:00:00</td> <td style="text-align: right;">3</td> <td style="text-align: right;">10</td> <td style="text-align: left;">pay</td> <td style="text-align: right;">100</td> </tr> <tr> <td style="text-align: right;">12</td> <td style="text-align: right;">5</td> <td style="text-align: left;">2021-01-07 00:00:00</td> <td style="text-align: right;">3</td> <td style="text-align: right;">-10</td> <td style="text-align: left;">rec</td> <td style="text-align: right;">90</td> </tr> <tr> <td style="text-align: right;">13</td> <td style="text-align: right;">4</td> <td style="text-align: left;">2021-01-10 00:00:00</td> <td style="text-align: right;">3</td> <td style="text-align: right;">-90</td> <td style="text-align: left;">rec</td> <td style="text-align: right;">0</td> </tr> </tbody> </table> </div>
python|pandas
0
4,138
65,924,090
SimpleTransformers Error: VersionConflict: tokenizers==0.9.4? How do I fix this?
<p>I'm trying to execute the simpletransformers example from their site on google colab.</p> <p>Example:</p> <pre><code>from simpletransformers.classification import ClassificationModel, ClassificationArgs import pandas as pd import logging logging.basicConfig(level=logging.INFO) transformers_logger = logging.getLogger(&quot;transformers&quot;) transformers_logger.setLevel(logging.WARNING) # Preparing train data train_data = [ [&quot;Aragorn was the heir of Isildur&quot;, 1], [&quot;Frodo was the heir of Isildur&quot;, 0], ] train_df = pd.DataFrame(train_data) train_df.columns = [&quot;text&quot;, &quot;labels&quot;] # Preparing eval data eval_data = [ [&quot;Theoden was the king of Rohan&quot;, 1], [&quot;Merry was the king of Rohan&quot;, 0], ] eval_df = pd.DataFrame(eval_data) eval_df.columns = [&quot;text&quot;, &quot;labels&quot;] # Optional model configuration model_args = ClassificationArgs(num_train_epochs=1) # Create a ClassificationModel model = ClassificationModel( &quot;roberta&quot;, &quot;roberta-base&quot;, args=model_args ) # Train the model model.train_model(train_df) # Evaluate the model result, model_outputs, wrong_predictions = model.eval_model(eval_df) # Make predictions with the model predictions, raw_outputs = model.predict([&quot;Sam was a Wizard&quot;]) </code></pre> <p>But it gives me the following error:</p> <blockquote> <p>VersionConflict: tokenizers==0.9.4 is required for a normal functioning of this module, but found tokenizers==0.10.0. Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master</p> </blockquote> <p>I've tried <code>!pip install transformers -U</code>and even <code>!pip install tokenizers==0.9.4</code> but keeps giving the same error. I have executed this code before and it worked just fun, but now it's giving the mentioned error.</p>
<p>I am putting this here in case someone faces the same problem. I was helped by the creator himself.</p> <blockquote> <pre><code>Workaround: Install tokenizers==0.9.4 before install simpletransformers In Colab for example; !pip install tokenizers==0.9.4 !pip install simpletransformers </code></pre> <p><a href="https://github.com/ThilinaRajapakse/simpletransformers/issues/950" rel="nofollow noreferrer">https://github.com/ThilinaRajapakse/simpletransformers/issues/950</a></p> </blockquote>
tensorflow|nlp|bert-language-model|simpletransformers|sentence-transformers
3
4,139
52,482,115
Tensorflow histogram with custom bins
<p>I have two tensors - one with bin specification and the other one with observed values. I'd like to count how many values are in each bin. I know how to do this in either NumPy or bare Python, but I need to do this in <em>pure TensorFlow</em>. Is there a more sophisticated version of <code>tf.histogram_fixed_width</code> with an argument for bin specification?</p> <p>Example:</p> <pre><code># Input - 3 bins and 2 observed values bin_spec = [0, 0.5, 1, 2] values = [0.1, 1.1] # Histogram [1, 0, 1] </code></pre>
<p>This seems to work, although I consider it to be quite memory- and time-consuming. </p> <pre><code>import tensorflow as tf bins = [-1000, 1, 3, 10000] vals = [-3, 0, 2, 4, 5, 10, 12] vals = tf.constant(vals, dtype=tf.float64, name="values") bins = tf.constant(bins, dtype=tf.float64, name="bins") resh_bins = tf.reshape(bins, shape=(-1, 1), name="bins-reshaped") resh_vals = tf.reshape(vals, shape=(1, -1), name="values-reshaped") left_bin = tf.less_equal(resh_bins, resh_vals, name="left-edge") right_bin = tf.greater(resh_bins, resh_vals, name="right-edge") resu = tf.logical_and(left_bin[:-1, :], right_bin[1:, :], name="bool-bins") counts = tf.reduce_sum(tf.to_float(resu), axis=1, name="count-in-bins") with tf.Session() as sess: print(sess.run(counts)) </code></pre>
python-3.x|tensorflow
2
4,140
46,467,416
How to reshape a multi-column dataframe by index?
<p>Following from <a href="https://stackoverflow.com/questions/45677788/how-to-reshape-dataframe-if-they-have-same-index?noredirect=1#comment79889384_45677788">here</a> . The solution works for only one column. How to improve the solution for multiple columns. i.e If I have a dataframe like</p> <pre><code>df= pd.DataFrame([['a','b'],['b','c'],['c','z'],['d','b']],index=[0,0,1,1]) </code></pre> <pre> 0 1 0 a b 0 b c 1 c z 1 d b </pre> <p>How to reshape them like </p> <pre> 0 1 2 3 0 a b b c 1 c z d b </pre> <p>If df is </p> <pre> 0 1 0 a b 1 c z 1 d b </pre> <p>Then </p> <pre> 0 1 2 3 0 a b NaN NaN 1 c z d b </pre>
<p>Use <code>flatten/ravel</code></p> <pre><code>In [4401]: df.groupby(level=0).apply(lambda x: pd.Series(x.values.flatten())) Out[4401]: 0 1 2 3 0 a b b c 1 c z d b </code></pre> <p>Or, <code>stack</code></p> <pre><code>In [4413]: df.groupby(level=0).apply(lambda x: pd.Series(x.stack().values)) Out[4413]: 0 1 2 3 0 a b b c 1 c z d b </code></pre> <p>Also, with unequal indices</p> <pre><code>In [4435]: df.groupby(level=0).apply(lambda x: x.values.ravel()).apply(pd.Series) Out[4435]: 0 1 2 3 0 a b NaN NaN 1 c z d b </code></pre>
python|pandas|numpy|dataframe
3
4,141
58,576,772
How do I merge table index names using pandas in Python?
<p>I want to combine two columns and their index into a single index, but I cannot merge the table index. How could I merge the table index using pandas or raw Python code?</p> <p>I tried and got this: <a href="https://ibb.co/7nZyxCM" rel="nofollow noreferrer">https://ibb.co/7nZyxCM</a></p> <p>Here is the sample code using PrettyTable: <a href="https://ibb.co/Hh80LBJ" rel="nofollow noreferrer">https://ibb.co/Hh80LBJ</a></p> <p>What I want to get: <a href="https://ibb.co/cQWf2Rz" rel="nofollow noreferrer">https://ibb.co/cQWf2Rz</a></p>
<p>You can create the new MultiIndex (to be used for columns) e.g. from tuples:</p> <pre><code>cols = pd.MultiIndex.from_tuples([ ('August', 'Invoice'), ('August', 'Sells'), ('September', 'Invoice'), ('September', 'Sells'), ('Growth', '1'), ('Growth', '2') ]) </code></pre> <p>Then just set it as <em>columns</em> in your <em>df</em>:</p> <pre><code>df.columns = cols </code></pre>
python|pandas|prettytable
1
4,142
69,279,946
Why dataframe exports only last value of iteration to csv? (Python)
<p>I am exporting results from dataframe to CSV, but it only exports the last value of the iteration. Please check my code and let me know, where I am doing wrong. Thank you for your support.</p> <pre><code>from pyswmm import Simulation, LidGroups, Nodes from pyswmm.swmm5 import SWMMException import os import pandas as pd output_path = &quot;E:\VARS_Research\pyswmm_master\Test_Model\Test_Model_Manual&quot; output_csv_file = &quot;node_flow.csv&quot; with Simulation('Test_model_LID.inp') as sim: nodes = Nodes(sim) for step in sim: j1 = Nodes(sim)[&quot;J1&quot;] j2 = Nodes(sim)[&quot;J2&quot;] j3 = Nodes(sim)[&quot;J3&quot;] j4 = Nodes(sim)[&quot;J4&quot;] results = {j1.total_inflow, j2.total_inflow, j3.total_inflow, j4.total_inflow} sim.step_advance(300) for i in results: current_time = sim.current_time my_df = pd.DataFrame.from_dict({'j1': [j1.total_inflow], 'j2': [j2.total_inflow], 'j3': [j3.total_inflow], 'j4': [j4.total_inflow]}) my_df.to_csv(os.path.join(output_path, &quot;node_flow.csv&quot;)) </code></pre>
<p>In each iteration over <code>results</code> you write to the CSV file, overwriting the previous file. If that is not what you want, specify <code>mode='a'</code> in <code>to_csv()</code>; the default mode is <code>'w'</code>.</p> <p><a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer">Link to docs for <code>to_csv()</code></a></p>
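<p>A short sketch of appending each step instead of overwriting; it assumes the output file does not already exist before the run, so the header is written only for the first chunk:</p> <pre><code>import os

out_file = os.path.join(output_path, "node_flow.csv")
# inside the simulation loop, after building my_df for the current step:
my_df.to_csv(out_file,
             mode='a',
             header=not os.path.exists(out_file),  # write the header only once
             index=False)
</code></pre>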
python|pandas|dataframe|csv
0
4,143
69,233,364
Why does Matplotlib saved figure look weird?
<p>I am plotting a pandas data frame and saving the resulting figure using the following code. The output file looks weird when opened in an image viewer, as shown here. I don't understand why the background is not fully white.</p> <p>I am using a Linux OS called Zorin, which is a derivative of Ubuntu, if that makes a difference.</p> <p><a href="https://i.stack.imgur.com/F7Grg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F7Grg.png" alt="enter image description here" /></a></p> <pre><code> fig, ax = plt.subplots(figsize=(15, 10)) account_profit_df.plot( style=&quot;*-&quot;, figsize=(15, 10), title=&quot;test&quot;, ax=ax, ) plt.grid() fig.savefig( image_path_backtest, bbox_inches=&quot;tight&quot;, ) </code></pre>
<p>This is the default behaviour; you can set the background color using <code>fig.patch.set_facecolor('white')</code>.</p>
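<p>A minimal sketch of applying this to the saving code from the question; <code>savefig</code> also accepts a <code>facecolor</code> argument:</p> <pre><code>fig.patch.set_facecolor('white')
fig.savefig(
    image_path_backtest,
    bbox_inches="tight",
    facecolor=fig.get_facecolor(),  # keep the white background in the saved file
)
</code></pre>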
python-3.x|pandas|matplotlib
0
4,144
69,233,701
Finding the coordinates of pixels over a line in an image
<p>I have an image represented as a 2D array. I would like to get the coordinates of pixels over a line from point 1 to point 2.</p> <p>For example, let's say I have an image with size 5x4 like in the image below. And I have a line from point 1 at coordinates <code>(0, 2)</code> to point 2 at <code>(4, 1)</code>. Like the red line on the image below:</p> <p><a href="https://i.stack.imgur.com/wUkOn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wUkOn.png" alt="enter image description here" /></a></p> <p>So here I would like to get the coordinates of the blue pixels as a list like this: <code>[(0, 2), (1, 2), (2, 2), (2, 1), (3, 1), (4, 1)]</code></p> <p>How can I achieve this?</p> <p>I am using Python and numpy, but actually a solution in any language including pseudo code would be helpful. I can then try to convert it into a numpy solution.</p>
<p>You can do this with <a href="https://scikit-image.org/docs/dev/api/skimage.draw.html?highlight=line#skimage.draw.line" rel="nofollow noreferrer">scikit-image</a>:</p> <pre><code>from skimage.draw import line # Get coordinates, r=rows, c=cols of your line rr, cc = line(0,2,4,1) print(list(zip(rr,cc))) [(0, 2), (1, 2), (2, 1), (3, 1), (4, 1)] </code></pre> <p>The source code to see the implemented algorithm: <a href="https://github.com/scikit-image/scikit-image/blob/main/skimage/draw/_draw.pyx#L44" rel="nofollow noreferrer">https://github.com/scikit-image/scikit-image/blob/main/skimage/draw/_draw.pyx#L44</a></p> <p>It is an implementation of the <a href="https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm" rel="nofollow noreferrer">Bresenham's line algorithm</a></p>
python|numpy|image-processing
3
4,145
60,907,540
Use of 1W offset - Would like that offset by one week pushes current TS to begining of next week
<h1>Initial problem statement</h1> <p>Given timestamp '2020-03-24 10:00' (a Tuesday), I would like to get next week start (Monday 00:00) by use of a week DateOffset.</p> <p>I intend to understand the way DateOffset work.</p> <p>Here are my attempts, all failing so far.</p> <pre><code># ts being timestamp for Tuesday the 24th ts = pd.Timestamp('2020-03-24 10:00') # I am looking for the offset that will give me Monday the 30th 00:00 # Attempt 1 / by use of to_offset() off1 = pd.tseries.frequencies.to_offset('1W') ts1 = ts + off1 # ts1 is set to next Sunday the 29th 00:00, why this specific date? # Begining of week is Monday the 30th 00:00 ts1 &gt;&gt;&gt; Out: Timestamp('2020-03-29 00:00:00') # Attempt 2 / by use of DateOffset(weeks=1) off2 = pd.tseries.offsets.DateOffset(weeks=1) ts2 = ts + off2 # ts2 is now Tuesday the 31st 00:00 # It is not what I am looking for, but it makes sense. # This offset is shifting current date to 7 days later, ok. ts2 &gt;&gt;&gt; Out: Timestamp('2020-03-31 00:00:00') # Attempt 3 / by use of DateOffset(weekday=1) off3 = pd.tseries.offsets.DateOffset(weekday=1) ts3 = ts + off3 # This time, I cannot figure any reason why the timestamp is # simply not modified. ts3 &gt;&gt;&gt; Out: Timestamp('2020-03-24 10:00:00') </code></pre> <p>Please, has anyone any explanation for results ts1 &amp; ts3. What logic does follow the computation managed to obtain them?</p> <p>And finally, has anyone any idea how to 'replace' Timestamp value to begining of next week? (I would have thought to onbtain this result with ts3, and hoping to have the same result with ts1, but it is currently a failure).</p> <h1>Completed problem statement</h1> <p>First answer given below support use of an anchored DateOffset, which appears indeed anchoring the begining of next week to my expectation: anchoring it to Monday.</p> <p>But now, looking for consistency, if I use this same anchored offset to create a PeriodIndex, weeks appear anchored to Tuesday?!</p> <pre><code># ts being timestamp for Tuesday the 24th ts = pd.Timestamp('2020-03-24 10:00') ts_end=pd.Timestamp('2020-04-16 10:00') # Offset off1 = pd.tseries.frequencies.to_offset('W-MON') # PeriodIndex pi = pd.period_range(start=ts_start, end=ts_end, freq=off1) # Checking anchoring day of created PeriodIndex: pi[1].start_time &gt;&gt;&gt; Out: Timestamp('2020-03-31 00:00:00') </code></pre> <p>What mystery is that?</p>
<p>Please consider this code:</p> <pre><code># ts being timestamp for Tuesday the 24th ts = pd.Timestamp('2020-03-24 10:00') # use right offset to start with monday off1 = pd.tseries.frequencies.to_offset('W-MON') # add values ts1 = ts + off1 # call normalize to start at midnight ts1 = ts1.normalize() </code></pre> <p>For <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html" rel="nofollow noreferrer">'W-MON'</a> see 'Anchored offsets', for <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases" rel="nofollow noreferrer">normalize()</a> search the page for 'use normalize()'. </p> <hr> <p><strong>EDIT #1</strong> <br>Answering your question about the 'mystery' edit: use <code>pi[0].start_time</code> instead of <code>pi[1].start_time</code> to get the first element. Then you will get</p> <pre><code>pi[0].start_time &gt;&gt;2020-03-24 00:00:00 </code></pre> <p>However, I can't tell you why it generates 03-<strong>24</strong> exactly, but it is very likely that the function finds the previous date corresponding to the start of the week with the <code>weekday</code> specified in the <code>frequency</code> and then uses one week (exactly 7 days) as the frequency. This is (seemingly) backed by the fact that, if one uses</p> <pre><code>off1 = pd.tseries.frequencies.to_offset('1W-FRI') #weekday = 4 pi[0].start_time &gt;&gt; 2020-03-21 00:00:00 pi[0].end_time &gt;&gt; 2020-03-27 23:59:59.999999999 pi[1].start_time &gt;&gt; 2020-03-28 00:00:00 pi[1].end_time &gt;&gt; 2020-04-03 23:59:59.999999999 </code></pre> <p>i.e. the ranges start from Saturday. You can use the calculated desired date as the start (as shown above) and specify the frequency as <code>'1W'</code> without anchors.</p> <pre><code># ts being timestamp for Tuesday the 24th ts = pd.Timestamp('2020-03-24 10:00') # use right offset to start with monday off1 = pd.tseries.frequencies.to_offset('W-MON') # add values ts1 = ts + off1 # call normalize to start at midnight ts1 = ts1.normalize() #ts1 is 2020-03-30 00:00:00 ts_end=pd.Timestamp('2020-04-16 10:00') # PeriodIndex pi = pd.period_range(start=ts1, end=ts_end, freq= to_offset('1W')) pi[0].start_time &gt;&gt; 2020-03-30 00:00:00 pi[-1].start_time &gt;&gt; 2020-04-13 00:00:00 pi[-1].end_time &gt;&gt; 2020-04-19 23:59:59.999999999 </code></pre> <p>Hope it helps.</p>
python|pandas|timestamp
1
4,146
71,635,940
Move a set of rows of a dataframe to the beginning
<p>I want to move a set of dataframe rows to the beginning</p> <p>The indexes of the corresponding rows are these ones:</p> <pre><code>indexes = [2188, 2163, 37, 47, 36, 41, 61, 1009, 40, 39, 123, 121, 2151, 19, 2, 8, 117, 205, 204] </code></pre> <p>So:</p> <pre><code>index 204 -&gt; index 0 index 205 -&gt; index 1 . . index 2188 -&gt; index 18 </code></pre> <p>At the moment I have this code that allows to move only one row:</p> <pre><code>def move_row(index): idx = [index] + [i for i in range(len(df)) if i != index] return df.iloc[idx].reset_index(drop=True) </code></pre>
<p>Here's an approach that reindexes the DataFrame using set operations to build a new index list from <code>indexes</code>. This assumes that the indexes are unique.</p> <pre><code>import pandas as pd ## sample DataFrame d = {'col1': [0, 1, 2, 3, 4], 'col2': [4, 5, 6, 9, 5], 'col3': [7, 8, 12, 1, 11]} df = pd.DataFrame(data=d) print(df) ## col1 col2 col3 ## 0 0 4 7 ## 1 1 5 8 ## 2 2 6 12 ## 3 3 9 1 ## 4 4 5 11 indexes = [3, 1] new_idx = indexes + list(set(range(len(df))).difference(indexes)) df = df.iloc[new_idx].reset_index(drop = True) print(df) ## col1 col2 col3 ## 0 3 9 1 ## 1 1 5 8 ## 2 0 4 7 ## 3 2 6 12 ## 4 4 5 11 </code></pre>
python|pandas|dataframe
1
4,147
71,510,471
numpy.eigh and matlab give inconsistent answers for eigenvectors?
<p>I'm writing a code that diagonalizes a 4x4 hermitian matrix. It's a simple enough code but the eigenvectors given by matlab and numpy disagree wildly.</p> <p>Reproducible code:</p> <pre><code>import numpy as np # symbreak_h_bound generates a 4x4 matrix based on input kpt, then returns the eigenvals and eigenvecs w, v = symbreak_h_bound([np.pi/600, np.pi/600]) def symbreak_h_bound(kpt, m=0.1): # This function returns W, V corresponding to eigenvalues and eigenvectors of Hbound # bunch of constants, ignore. necessary to show the problem ghoverG = 1./2. kx = kpt[0] ky = kpt[1] sinkx = 2 * np.sin(2 * kx) sinky = 2 * np.sin(2 * ky) gamma = 2 * np.sin(2 * kx + 2 * np.pi * ghoverG) betax = 1j * np.exp(-2j * kx) * m betay = 1j * np.exp(-2j * ky) * m alpha = np.exp(1j * 2 * np.pi * ghoverG) # Hbound is hermitian, np.allclose(Hbound, Hbound.H) returns True Hbound = np.matrix( [ [0, sinky + betay, sinkx + betax, 0], [sinky + np.conjugate(betay), 0, 0, gamma + np.conjugate(alpha) * betax], [sinkx + np.conjugate(betax), 0, 0, sinky + betay], [0, gamma + alpha * np.conjugate(betax), sinky + np.conjugate(betay), 0], ] ) w, v = lin.eigh(Hbound) return w, v </code></pre> <p>Straightforward, but here's the issue - matlab and numpy give wildly different answers for the eigenvectors of the almost exactly same matrix. Here is the numpy output -</p> <p><a href="https://i.stack.imgur.com/kyO9q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kyO9q.png" alt="numpy output" /></a></p> <p>Here's the matlab output -</p> <p><a href="https://i.stack.imgur.com/OELaM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OELaM.png" alt="matlab output" /></a></p> <p>We can see that Hbound is almost the same as M, which is reflected in the fact that w == D1. But v != V1.</p> <p>I know that if v is an eigenvector, then $$e^{i \theta} * v$$ is also an eigenvector. However, it is still not reflective of the situation, as v[:,i] are not even exactly orthogonal to each other (upto floating point errors).</p> <p>My code relies on the continuity of eigenvectors for kpt -&gt; kpt+delta, so I was wondering if someone could suggest the reason for inconsistency between numpy and matlab here, but more importantly, if there is a way to ensure a continuous eigenvector v -&gt; v+delta_v corresponding to a slight change when moving from kpt -&gt; kpt + delta?</p>
<p>The <code>v</code> in your example is quite close to orthogonal.</p> <pre class="lang-python prettyprint-override"><code>&gt; v.H*v matrix([[ 1.00000000e+00 +0.00000000e+00j, 2.49800181e-16 +0.00000000e+00j, -1.11022302e-16 +4.16333634e-17j, -1.94289029e-16 +3.46944695e-17j], [ 2.49800181e-16 +0.00000000e+00j, 1.00000000e+00 +0.00000000e+00j, -1.11022302e-16 +8.32667268e-17j, -1.94289029e-16 +3.46944695e-17j], [ -1.11022302e-16 -4.16333634e-17j, -1.11022302e-16 -8.32667268e-17j, 1.00000000e+00 +0.00000000e+00j, 1.11022302e-16 -6.93889390e-18j], [ -1.94289029e-16 -3.46944695e-17j, -1.94289029e-16 -3.46944695e-17j, 1.11022302e-16 +6.93889390e-18j, 1.00000000e+00 +0.00000000e+00j]]) </code></pre> <p>Are you looking at <code>v.transpose()*v</code> perhaps?</p> <p>The first two eigenvectors do span the same subspace as Matlab's first two eigenvectors, and the same for the second two. There's obviously some freedom in the basis for the subspace because of the repeated eigenvalues.</p> <p>I'm not sure about the choices <code>eigh</code> makes for the representation of a subspace corresponding to a repeated eigenvalue when you perturb the problem. I doubt there's a guarantee that it'll be continuous, even if the subspace itself behaves nicely.</p>
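<p>If it helps, here is a small sketch of how one could check numerically that two different eigenvector sets describe the same degenerate subspace. The arrays <code>v_np</code> and <code>v_ml</code> are hypothetical 4x2 slices (e.g. the two columns belonging to a repeated eigenvalue, from numpy and from Matlab respectively); they are not taken from your code:</p> <pre><code>import numpy as np

# v_np: columns from np.linalg.eigh, v_ml: the corresponding Matlab columns
P_np = v_np @ v_np.conj().T   # orthogonal projector onto span(v_np)
P_ml = v_ml @ v_ml.conj().T   # orthogonal projector onto span(v_ml)

# the individual vectors may differ by a unitary mixing within the subspace,
# but the projectors agree exactly when the subspaces are the same
print(np.allclose(P_np, P_ml))
</code></pre>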
python|numpy|matlab|linear-algebra
2
4,148
69,981,281
Create count for series of values grouped by specific column in Python
<p>I have a dataset, df, where I would like to create a count for a series of values grouped by specific column in Python</p> <p><strong>Data</strong></p> <pre><code>id date type aa q1 23 hi aa q1 23 hi aa q1 23 bye aa q1 23 bye aa q2 23 hi aa q2 23 bye bb q1 23 hi </code></pre> <p>resets for every unique date and id</p> <p><strong>Desired</strong></p> <pre><code> id date type count aa q1 23 hi hi01 aa q1 23 hi hi02 aa q1 23 bye bye01 aa q1 23 bye bye02 aa q2 23 hi hi01 aa q2 23 bye bye02 bb q1 23 hi hi01 </code></pre> <p><strong>Doing</strong></p> <p>I am adding leading zeros - keep getting type error</p> <pre><code>df['count'] = df[0].str.upper() + df[1].str.zfill(2) </code></pre> <p>Any suggestion is appreciated.</p>
<p>You can use:</p> <pre><code>df['count'] = df['type'] + df.groupby([*df]).cumcount().add(1).astype(str).str.zfill(2) </code></pre> <p>Output:</p> <pre><code> id date type count 0 aa q1 23 hi hi01 1 aa q1 23 hi hi02 2 aa q1 23 bye bye01 3 aa q1 23 bye bye02 4 aa q2 23 hi hi01 5 aa q2 23 bye bye01 6 bb q1 23 hi hi01 </code></pre>
python|pandas|numpy
3
4,149
69,777,346
how to use np.diff with reference point in python
<p>I have a dataset given with time stamps.</p> <pre><code>import pandas as pd data = pd.DataFrame({'date': pd.to_datetime(['1992-01-01', '1992-02-01', '1992-03-01', '1992-04-01', '1992-05-01']), 'sales': [10, 20, 30, 40, 50], 'price': [4302, 4323, 4199, 4397, 4159]}) </code></pre> <p>I am trying to differencing them with <code>np.diff(data['price'])</code> for <code>price</code> column. However, I want to have a reference point for the first row with timestamp, <code>1992-01-01</code>. My <code>reference value is 4100</code> and I expect to have dataset given below:</p> <pre><code> date, sales, diff_price 1992-01-01, 10, 4302-4100 1992-02-01, 20, 4323-4302 1992-03-01, 30, 4199-4323 1992-04-01, 40, 4397-4199 1992-05-01, 50, 4159-4397 </code></pre> <p>Is there any easy way to do it without changing the structure of data in a pythonic way?</p>
<p>We can use the <code>prepend</code> parameter of <a href="https://numpy.org/doc/stable/reference/generated/numpy.diff.html" rel="nofollow noreferrer"><code>np.diff</code></a> to set the reference value (4100) at the beginning of the Series:</p> <pre><code>reference_value = 4100 data['diff_price'] = np.diff(data['price'], prepend=reference_value) </code></pre> <p>or we can <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.shift.html#pandas.Series.shift" rel="nofollow noreferrer"><code>Series.shift</code></a> with a <code>fill_value</code> of the reference (4100) and subtract:</p> <pre><code>reference_value = 4100 data['diff_price'] = ( data['price'] - data['price'].shift(fill_value=reference_value) ) </code></pre> <p>Either approach produces <code>data</code>:</p> <pre><code> date sales price diff_price 0 1992-01-01 10 4302 202 1 1992-02-01 20 4323 21 2 1992-03-01 30 4199 -124 3 1992-04-01 40 4397 198 4 1992-05-01 50 4159 -238 </code></pre>
python|pandas|diff|np
1
4,150
69,970,206
How can I apply an expanding window to the names of groupby results?
<p>I would like to use pandas to group a dataframe by one column, and then run an expanding window calculation on the groups. Imagine the following dataframe:</p> <pre><code>G Val A 0 A 1 A 2 B 3 B 4 C 5 C 6 C 7 </code></pre> <p>What I am looking for is a way to group the data by column <code>G</code> (resulting in groups <code>['A', 'B', 'C']</code>), and then applying a function first to the items in group <code>A</code>, then to items in groups <code>A</code> and <code>B</code>, and finally items in groups <code>A</code> to <code>C</code>.</p> <p>For example, if the function is <code>sum</code>, then the result would be</p> <pre><code>A 3 B 10 C 28 </code></pre> <p>For my problem the function that is applied needs to be able to access all original items in the dataframe, not only the aggregates from the groupby.</p> <p>For example when applying <code>mean</code>, the expected result would be</p> <pre><code>A 1 B 2 C 3.5 </code></pre> <p>A: <code>mean([0,1,2])</code>, B: <code>mean([0,1,2,3,4])</code>, C: <code>mean([0,1,2,3,4,5,6,7])</code>.</p>
<p><code>cummean</code> does not exist, so one possible solution is to aggregate <code>size</code> and <code>sum</code>, take the cumulative sum of both, and then divide to get the mean:</p> <pre><code>df1 = df.groupby('G')['Val'].agg(['size', 'sum']).cumsum() s = df1['sum'].div(df1['size']) print (s) A 1.0 B 2.0 C 3.5 dtype: float64 </code></pre> <p>If a general solution is needed, it is possible to extract the expanding groups and then apply the function in a dict comprehension like:</p> <pre><code>g = df['G'].drop_duplicates().apply(list).cumsum() s = pd.Series({x[-1]: df.loc[df['G'].isin(x), 'Val'].mean() for x in g}) print (s) A 1.0 B 2.0 C 3.5 dtype: float64 </code></pre>
pandas|pandas-groupby
2
4,151
70,013,488
Why does model training take significantly way longer when I include validation data?
<p>Obviously, I know that adding in validation data would make training take longer but the time difference I am talking here is absurd. Code:</p> <pre><code># Training def training(self, callback_bool): if callback_bool: callback_list = [] else: callback_list = [] self.history = self.model.fit(self.x_train, self.y_train, validation_data=(self.x_test, self.y_test), batch_size=1, steps_per_epoch=10, epochs=100) </code></pre> <p>The code above takes me more than 30 minutes to train even though the size of my test data is 10,000 data points. The size of my train data is 40,000 data points and when I train without validation data, I am done within seconds. Is there a way to remedy this? Why does it take this long? To boot, I am training on a gpu as well!</p>
<p>I assume validation works as intended, and you have a problem in the training process itself. You are using batch_size = 1 and steps_per_epoch = 10, which means <strong>the model will see only 10 data points during every epoch</strong>. That's why it takes only a few seconds. On the other hand, you don't use the validation_steps argument, which means the validation after every epoch will run until your validation dataset is exhausted, i.e. for 10,000 steps. Hence the difference in times. You can read more about model.fit and its arguments <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit" rel="nofollow noreferrer">in the official documentation</a>.</p> <p>If your training dataset isn't infinite, I suggest you <strong>remove the steps_per_epoch argument</strong>. If it is, pass it the value of <strong>len(x_train)//batch_size</strong> instead. That way the model will be fed every single training data point in each epoch. I assume every epoch will then take ~1.5 hours instead of the seconds you currently have. I also suggest increasing the batch_size if there is no specific reason to use a batch size of 1.</p> <p>Edited: typos</p>
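<p>A hedged sketch of what the call could look like after those changes, using the variable names from the question (the batch size of 32 is just an example value):</p> <pre><code>batch_size = 32  # illustrative; any reasonable batch size

self.history = self.model.fit(
    self.x_train, self.y_train,
    validation_data=(self.x_test, self.y_test),
    batch_size=batch_size,
    # no steps_per_epoch: with in-memory arrays Keras iterates over the
    # whole training set, i.e. roughly len(x_train) // batch_size steps
    epochs=100)
</code></pre>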
python|tensorflow|machine-learning|keras|scikit-learn
3
4,152
69,974,412
What's a good way of setting most elements of an ndarray to zero?
<p>I've got an <code>ndarray</code> with, say, 10,000 rows and 75 columns, and another one with the same number of rows and, say, 3 columns. The second one has integer values.</p> <p>I want to end up with an array of 10,000 rows and 75 columns with all the elements set to zero except the elements in each row indexed by the values in the corresponding row of the second array.</p> <p>So starting with <code>z_array</code> and <code>i_array</code>, I want to end up with <code>a_array</code></p> <pre><code>&gt;&gt;&gt; z_array array([[10, 11, 12, 13, 14, 15], [10, 11, 12, 13, 14, 15], [10, 11, 12, 13, 14, 15], [10, 11, 12, 13, 14, 15]]) &gt;&gt;&gt; i_array array([[0, 2], [3, 1], [1, 4], [2, 3]]) &gt;&gt;&gt; a_array array([[10, 0, 12, 0, 0, 0], [ 0, 11, 0, 13, 0, 0], [ 0, 11, 0, 0, 14, 0], [ 0, 0, 12, 13, 0, 0]]) </code></pre> <p>I can see two ways of approaching this: either start with an array full of zeros and copy across the relevant elements from <code>z_array</code>; or start with <code>z_array</code> and set all the irrelevant elements to zero. Note that the number of irrelevant elements is typically much, much larger than the number of relevant elements.</p> <p>Either way, is there a good way of doing the multiple assignments, or do I simply have to loop through them? Or is there a third approach?</p> <p>I'm wondering if I can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.ufunc.at.html" rel="nofollow noreferrer"><code>numpy.ufunc.at</code></a> somehow? I can see how to get a list of indexes for the relevant elements, for example</p> <pre><code>&gt;&gt;&gt; index_list = [[i, val] for (i, x) in enumerate(i_array) for val in x ] index_list [[0, 0], [0, 2], [1, 3], [1, 1], [2, 1], [2, 4], [3, 2], [3, 3]] </code></pre> <p>And there's a slightly more complex way to get them for the irrelevant elements. But these lists would be big!!</p>
<p>It seems like you are looking for something similar to <code>np.put_along_axis</code>.</p> <p>Taking the example you have there, if you run <code>np.put_along_axis(z_array, i_array, 0, axis=1)</code> you get</p> <pre><code>z_array = [[ 0 11 0 13 14 15] [10 0 12 0 14 15] [10 0 12 13 0 15] [10 11 0 0 14 15]] </code></pre> <p>The output is the opposite of what you want.</p> <p>To get what you want, first make a copy of <code>z_array</code> as <code>a_array</code>, then zero the indexed positions in <code>z_array</code>, and finally zero <code>a_array</code> wherever <code>z_array</code> is still non-zero, so that only the indexed positions keep their values.</p> <pre><code>a_array = z_array.copy() np.put_along_axis(z_array, i_array, 0, axis=1) a_array[(z_array != 0)] = 0 </code></pre> <p>This gives the output you expected:</p> <pre><code>a_array = [[10 0 12 0 0 0] [ 0 11 0 13 0 0] [ 0 11 0 0 14 0] [ 0 0 12 13 0 0]] </code></pre> <hr /> <p><code>np.put_along_axis</code> <a href="https://numpy.org/doc/stable/reference/generated/numpy.put_along_axis.html" rel="nofollow noreferrer">documentation</a></p> <p>See this <a href="https://stackoverflow.com/a/34644625/12040795">answer</a> for more options for combining matrices (<code>np.where</code>)</p>
python|arrays|numpy|numpy-ndarray
1
4,153
43,264,153
How to find the output of a trained network for a random input in tensor flow?
<p>So I am trying to write a NN that predicts whether an input number is positive or negative, so I modelled this and trained, also checked the accuracy of it. But I can not use this model, to explicitly check whether a number is positive or negative. I can only, check the accuracy, I can not use this for individual inputs like a function.</p> <p>So this is my attempt;</p> <p>This creates training data</p> <pre><code>import numpy as np import random import pickle import bitstring from collections import Counter def binary(num): f1 = bitstring.BitArray(float=num, length=32) return f1.bin def num2bin(num): return [int(x) for x in binary(num)[0:]] pos=10*np.random.rand(1000) pos_test=10*np.random.rand(1000) neg=-10*np.random.rand(1000) neg_test=-10*np.random.rand(1000) </code></pre> <p>This converts the training data to 32 bit form and labels it</p> <pre><code>def create_label_feature(pos,pos_test,ned,neg_test,test_size=0.1): featuresp=[] labelsp=[] for x in pos: featuresp +=[num2bin(x)] labelsp +=[[1,0]] featuresn=[] labelsn=[] for x in neg: featuresn +=[num2bin(x)] labelsn +=[[0,1]] featurespt=[] labelspt=[] for x in pos_test: featurespt +=[num2bin(x)] labelspt +=[[1,0]] featuresnt=[] labelsnt=[] for x in neg_test: featuresnt +=[num2bin(x)] labelsnt +=[[0,1]] test_x=featuresp+featuresn test_y=labelsp+labelsn train_x=featurespt+featuresnt train_y=labelspt+labelsnt return train_x, train_y, test_x, test_y train_x ,train_y ,test_x, test_y=create_label_feature(pos,pos_test,neg,neg_test) </code></pre> <p>This trains the NN and then tries to determine whether -5 is positive or negative</p> <pre><code>import tensorflow as tf #from tensorflow.examples.tutorials.mnist import input_data import pickle import numpy as np n_nodes_hl1 = 1500 n_nodes_hl2 = 1500 n_nodes_hl3 = 1500 n_classes = 2 batch_size = 100 hm_epochs = 10 x = tf.placeholder('float',shape=[None,32]) y = tf.placeholder('float') hidden_1_layer = {'f_fum':n_nodes_hl1, 'weight':tf.Variable(tf.random_normal([len(train_x[0]), n_nodes_hl1])), 'bias':tf.Variable(tf.random_normal([n_nodes_hl1]))} # hidden_2_layer = {'f_fum':n_nodes_hl2, # 'weight':tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])), # 'bias':tf.Variable(tf.random_normal([n_nodes_hl2]))} # hidden_3_layer = {'f_fum':n_nodes_hl3, # 'weight':tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])), # 'bias':tf.Variable(tf.random_normal([n_nodes_hl3]))} output_layer = {'f_fum':None, 'weight':tf.Variable(tf.random_normal([n_nodes_hl1, n_classes])), 'bias':tf.Variable(tf.random_normal([n_classes])),} # Nothing changes def neural_network_model(data): l1 = tf.add(tf.matmul(data,hidden_1_layer['weight']), hidden_1_layer['bias']) l1 = tf.nn.relu(l1) # l2 = tf.add(tf.matmul(l1,hidden_2_layer['weight']), hidden_2_layer['bias']) # l2 = tf.nn.relu(l2) # l3 = tf.add(tf.matmul(l2,hidden_3_layer['weight']), hidden_3_layer['bias']) # l3 = tf.nn.relu(l3) output = tf.matmul(l1,output_layer['weight']) + output_layer['bias'] return output def train_neural_network(x): prediction = neural_network_model(x) cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(logits=prediction,labels=y) ) optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost) with tf.Session() as sess: saver = tf.train.Saver() sess.run(tf.initialize_all_variables()) for epoch in range(hm_epochs): epoch_loss = 0 i=0 while i &lt; len(train_x): start = i end = i+batch_size batch_x = np.array(train_x[start:end]) batch_y = np.array(train_y[start:end]) _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: 
batch_y}) epoch_loss += c i+=batch_size print('Epoch', epoch+1, 'completed out of',hm_epochs,'loss:',epoch_loss) correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct, 'float')) print('Accuracy:',accuracy.eval({x:test_x, y:test_y})) a=num2bin(-5) a=np.reshape(a,(1,32)) # a=a[0,:] # print(sess.run(prediction, {x:np.array(num2bin(5))})) print(sess.run(prediction, {x:a})) train_neural_network(x) </code></pre> <p>The last part of the code</p> <pre><code> a=num2bin(-5) a=np.reshape(a,(1,32)) print(sess.run(prediction, {x:a})) </code></pre> <p>So I want to see wheter -5 is positive or negative, and I expected to have [0,1] as an output because that is how I labled negative numbers.</p> <p>But instead I get</p> <p>[[ -29.49657059 123.97122192]]</p> <p>So what is the problem here?</p> <h1>Edit</h1> <p>I add this following part to my code;</p> <pre><code> for k in [-9,3, 5,-8,-77,-16,54.3]: a=num2bin(k) a=np.reshape(a,(1,32)) a=sess.run(prediction, {x:a}) prediction_tensor = tf.sigmoid(a) print(sess.run(prediction_tensor)) </code></pre> <p>Then my output is</p> <pre><code>Accuracy: 0.999933 [[ 0. 0.85150468]] [[ 8.66709650e-01 8.56608536e-32]] [[ 7.24581242e-01 8.87260485e-37]] [[ 0. 0.66523373]] [[ 0. 1.]] [[ 0.00000000e+00 8.08775063e-16]] [[ 1. 0.]] </code></pre> <p>So my code gives two component output, and if the first element is bigger than the second it means that the input is positive and if not it means that it is negative.</p>
<p>You didn't sigmoid your output, that's the unscaled value, and it looks normal. The loss function is applying the sigmoid function to those values before applying cross entropy. The values you see there are the unscaled values you feed to the loss function. If you look at the numbers without applying sigmoid then all negative numbers predict 0 and approach 0 as they go more negative, and all positive numbers predict 1 and approach 1 the more they go positive. This is of course exactly what the sigmoid function is doing for you. You've trained your network to output large negative or large positive numbers when it's confident about the result.</p> <p>So, if you applied sigmoid to those values you'd get what you expected: ~[[0.00000001, 0.9999999999999999999]] or if you round, [[0,1]].</p> <p>Incidentally, you don't need 2 outputs for the binary class case, you can just use one output, it's either positive or negative. The network will probably perform ever so slightly better with the one output than the two. Not that it's going to have a hard time predicting positive vs. negative numbers. :)</p> <p>You can get the scaled (0,1) values by defining another tensor as such:</p> <pre><code>prediction_tensor = tf.sigmoid(a) </code></pre>
python|tensorflow|neural-network
0
4,154
43,396,135
Check that png image and csv file has the same name before processing them
<p>l have a dataset (5000 data) composed of images and csv files. Each image is mapped with its csv files. for instance <code>img_33e_78.png</code> is mapped with<code>img_33e_78.csv</code>. For each image l have a csv file which contains a given pixels to process. To do so l need to check that l'm processing the image with the right csv file . This is why l need to check the name of image and csv. the difference is only on <code>.png</code> and <code>.csv</code>. here is my code :</p> <pre><code>import os import glob import pandas as pd import h5py indir_images="image" os.chdir(indir_images) images_name=glob.glob("*.png") indir_csv="clean_data" os.chdir(indir_csv) csv_names=glob.glob("*.csv") for img,csv in zip(images_name,csv_names): if (image_name == csv_name) #here l need to ckeck that the image and csv file have the same name # do the processing </code></pre>
<p>I suppose I would start by making sets of your image and csv file names. I remove the file extensions because they are the real obstacle to comparing the files. This is done using a list comprehension; it could also be done using map.</p> <pre><code>image_names = set([x.rsplit('.', 1)[0] for x in glob.glob('*.png')]) csv_names = set([x.rsplit('.', 1)[0] for x in glob.glob('*.csv')]) # Alternatively using map image_names = set(map(lambda x: x.rsplit('.', 1)[0], glob.glob('*.png'))) </code></pre> <p>Then we take the intersection, which contains the names for which we know we have both files. <a href="https://docs.python.org/2/library/sets.html#set-objects" rel="nofollow noreferrer">https://docs.python.org/2/library/sets.html#set-objects</a></p> <pre><code>for name in image_names &amp; csv_names: open(name + '.png') # ... and the matching csv, etc. </code></pre> <p>That way you know you only process files that match.</p>
python|string|csv|pandas|png
1
4,155
43,141,620
ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'dict'
<p>I'm trying to insert a dataframe using the query <br></p> <pre><code>engine = create_engine('scot://pswd:xyz@ hostnumb:port/db_name') dataframe.to_sql('table_name', engine, if_exists='replace') </code></pre> <p>but one column is a dictionary and I'm unable to insert it, only the column name is getting inserted. <br> I tried to change the type of the column in <strong>postgres</strong> from text to json object. still not able to insert. <br> I tried to use <strong>json.dumps()</strong> but still facing the issue.getting an error as "dtype: object is not JSON serializable"</p>
<p>Try specifying the dtype. So in your example, you would say</p> <pre><code>dataframe.to_sql('table_name', engine, if_exists='replace',dtype = {'relevant_column':sqlalchemy.types.JSON}) </code></pre>
python|json|postgresql|pandas
5
4,156
45,559,846
How to remove deconvolution noise in style-transfer neural network
<p>Im studying style-transfer networks and right now working with this <a href="https://github.com/lengstrom/fast-style-transfer" rel="nofollow noreferrer">work</a> and here is <a href="https://github.com/lengstrom/fast-style-transfer/blob/master/src/transform.py" rel="nofollow noreferrer">network description</a>. The problem that even with adding TV loss there is still visible noise which is breaking quality of result. Can someone recommend some articles of ways of removing such noise during network training?</p> <p>Thanks</p> <p><a href="https://i.stack.imgur.com/UYbLT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UYbLT.png" alt="Example of noise"></a></p>
<p>The <code>deconvolution</code> noise is because of the uneven overlaps between the input and the kernel which creates a checkerboard-like pattern of varying magnitudes. One fix is to use <code>resize-conv</code> method as mentioned in this <a href="https://distill.pub/2016/deconv-checkerboard/" rel="nofollow noreferrer">article</a>.</p> <p><code>Resize-conv</code> replaces <code>transpose convolution</code> with <code>image scaling</code> followed by a <code>2D convolution</code>. In tensor flow, the 2 steps are: <code>tf.image.resize_images(...)</code> and <code>tf.nn.conv2d(...)</code>. Another tip from the authors is to call <code>tf.pad(...)</code> prior to the convolution method and only use <code>Nearest Neighbour</code> resize method.</p>
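<p>A minimal sketch of such a resize-conv block, written with the TF1-style ops mentioned above (the reflect padding, filter shape and initialisation are illustrative assumptions, not taken from the linked repository):</p> <pre><code>import tensorflow as tf

def resize_conv(x, out_channels, kernel_size=3, scale=2):
    # 1) upscale with nearest-neighbour resizing instead of a transpose conv
    h, w = x.get_shape().as_list()[1:3]
    x = tf.image.resize_images(x, [h * scale, w * scale],
                               method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)

    # 2) pad so the following VALID convolution keeps the spatial size
    p = kernel_size // 2
    x = tf.pad(x, [[0, 0], [p, p], [p, p], [0, 0]], mode='REFLECT')

    # 3) plain convolution: every output pixel now sees an evenly
    #    overlapping window, so no checkerboard pattern is introduced
    in_channels = x.get_shape().as_list()[-1]
    weights = tf.Variable(
        tf.truncated_normal([kernel_size, kernel_size, in_channels, out_channels],
                            stddev=0.1))
    return tf.nn.conv2d(x, weights, strides=[1, 1, 1, 1], padding='VALID')
</code></pre> <p>Swapping the transpose-convolution layers in the transform network for blocks like this is essentially the change the linked article describes.</p>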
machine-learning|tensorflow|neural-network|conv-neural-network|style-transfer
1
4,157
62,733,389
Image Segmentation Tensorflow tutorials
<p>In this <a href="https://www.tensorflow.org/tutorials/images/segmentation" rel="nofollow noreferrer">tf tutorial</a>, the U-net model has been divided into 2 parts, first contraction where they have used Mobilenet and it is not trainable. In second part, I'm not able to understand what all layers are being trained. As far as I could see, only the last layer conv2dTranspose seems trainable. Am I right?</p> <p>And if I am how could only one layer is able to do such a complex task as segmentation?</p> <p>Tutorial link: <a href="https://www.tensorflow.org/tutorials/images/segmentation" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/images/segmentation</a></p>
<p>The code for the <code>Image Segmentation Model</code>, from the <a href="https://www.tensorflow.org/tutorials/images/segmentation" rel="nofollow noreferrer">Tutorial</a> is shown below:</p> <pre><code>def unet_model(output_channels): inputs = tf.keras.layers.Input(shape=[128, 128, 3]) x = inputs # Downsampling through the model skips = down_stack(x) x = skips[-1] skips = reversed(skips[:-1]) # Upsampling and establishing the skip connections for up, skip in zip(up_stack, skips): x = up(x) concat = tf.keras.layers.Concatenate() x = concat([x, skip]) # This is the last layer of the model last = tf.keras.layers.Conv2DTranspose( output_channels, 3, strides=2, padding='same') #64x64 -&gt; 128x128 x = last(x) return tf.keras.Model(inputs=inputs, outputs=x) </code></pre> <p>First part of the Model is <code>Downsampling</code> uses not the entire <code>Mobilenet Architecture</code> but only the <code>Layers</code>,</p> <pre><code>'block_1_expand_relu', # 64x64 'block_3_expand_relu', # 32x32 'block_6_expand_relu', # 16x16 'block_13_expand_relu', # 8x8 'block_16_project' </code></pre> <p>of the Pre-Trained Model, <code>Mobilenet</code>, which are <code>non-trainable</code>.</p> <p>Second part of the Model (which is of your interest), before the layer, <code>Conv2DTranspose</code> is <code>Upsampling</code> part, which is present in the <code>list</code>,</p> <pre><code>up_stack = [ pix2pix.upsample(512, 3), # 4x4 -&gt; 8x8 pix2pix.upsample(256, 3), # 8x8 -&gt; 16x16 pix2pix.upsample(128, 3), # 16x16 -&gt; 32x32 pix2pix.upsample(64, 3), # 32x32 -&gt; 64x64 ] </code></pre> <p>It means that it is accessing a Function named <code>upsample</code> from the Module, <code>pix2pix</code>. The code for the Module, <code>pix2pix</code> is present in this <a href="https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/pix2pix/pix2pix.py" rel="nofollow noreferrer"><code>Github Link</code></a>.</p> <p>Code for the function, <code>upsample</code> is shown below:</p> <pre><code>def upsample(filters, size, norm_type='batchnorm', apply_dropout=False): &quot;&quot;&quot;Upsamples an input. Conv2DTranspose =&gt; Batchnorm =&gt; Dropout =&gt; Relu Args: filters: number of filters size: filter size norm_type: Normalization type; either 'batchnorm' or 'instancenorm'. apply_dropout: If True, adds the dropout layer Returns: Upsample Sequential Model &quot;&quot;&quot; initializer = tf.random_normal_initializer(0., 0.02) result = tf.keras.Sequential() result.add( tf.keras.layers.Conv2DTranspose(filters, size, strides=2, padding='same', kernel_initializer=initializer, use_bias=False)) if norm_type.lower() == 'batchnorm': result.add(tf.keras.layers.BatchNormalization()) elif norm_type.lower() == 'instancenorm': result.add(InstanceNormalization()) if apply_dropout: result.add(tf.keras.layers.Dropout(0.5)) result.add(tf.keras.layers.ReLU()) return result </code></pre> <p>This means that the second part of the <code>Model</code> comprises of the <code>Upsampling Layers</code>, whose functionality is defined above, with the Number of <code>Filters</code> being <code>512, 256, 128 and 64</code>.</p>
tensorflow|conv-neural-network|image-segmentation|autoencoder|unet-neural-network
1
4,158
54,370,998
How to compare current excel cell to previous excel cell in same column
<p>I am trying to write a quick function to compare the current cell in a column to the cell just above (before) it. The idea is to perform a different operation on data that is in the same column but different value. </p> <pre><code>if (df.loc() != df.loc[::-1] &amp; df1.loc() != df1.loc[:-1]): df = df.iloc() df1 = df1.iloc() </code></pre> <p>My thinking was to compare the current location to the current location -1, and if they are the same then to proceed. Otherwise do another task when the cell value changes. I am doing this to two different data frames that have the same column name in each that I am trying to read through. </p> <pre><code> Connector Pin Adj. 0 F123 1 2 6 7 1 F123 2 1 3 6 7 8 2 F123 3 2 4 7 8 9 3 F123 4 3 5 8 9 10 4 F123 5 4 9 10 5 F123 6 1 2 7 6 F123 7 1 2 3 6 8 7 F123 8 2 3 4 7 9 8 F123 9 3 4 5 8 10 9 F123 10 4 5 9 10 C137 1 2 1 11 C137 2 1 </code></pre> <p>After iterating down this table, when the Connector changes from F123 to C137 I want to clear all columns above the first C137.</p>
<p>Considering the dataframe below:</p> <pre><code>print(df) Connector Pin Adj. 0 F123 1 2 6 7 1 F123 2 1 3 6 7 8 2 F123 3 2 4 7 8 9 3 F123 4 3 5 8 9 10 4 F123 5 4 9 10 5 F123 6 1 2 7 6 F123 7 1 2 3 6 8 7 F123 8 2 3 4 7 9 8 F123 9 3 4 5 8 10 9 F123 10 4 5 9 10 C137 1 2 1 11 C137 2 1 </code></pre> <p>If you use:</p> <pre><code>df.drop(range(df.Connector.ne(df.Connector.shift()).cumsum().idxmax())) Connector Pin Adj. 10 C137 1 2 1 11 C137 2 1 </code></pre> <p>This identifies where the connector value changes and drops all rows before the last change, leaving only the rows for the final connector.</p>
python|pandas
0
4,159
71,145,865
Dataframe doesn't modify in a for loop
<p>I have this for loop and I would like to change the value of a part of a df.</p> <pre><code>for col in last_df: last_df.loc[last_df[col]!=0, col]='stop' stop=last_df[last_df[col]=='stop'][col].index[0] last_df[col].loc[:(stop-1)]='NaN' </code></pre> <p>At the end, the last_df doesn't modify, the error I receive is the following: <code>A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation:....</code></p>
<p>Try it without the <code>loc</code>:</p> <pre><code>for col in last_df: last_df[last_df[col]!=0]='stop' stop=last_df[last_df[col]=='stop'][col].index[0] last_df[col][:(stop-1)]='NaN' </code></pre> <p>The warning is an issue with views versus copies of the DataFrame; for a quick overview you can read this <a href="https://www.skytowner.com/explore/difference_between_copy_and_view_in_pandas" rel="nofollow noreferrer">link</a>.</p>
python|pandas|dataframe|for-loop|copy
2
4,160
52,137,745
How to mask some cells of a heatmap plot?
<p>I plotted the following heatmap: <a href="https://i.stack.imgur.com/p0Twi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p0Twi.png" alt="enter image description here" /></a></p> <p>using this code:</p> <pre><code>data = {'Month':['August','August','August','August','August','August','August','August','August','August','August','August', 'February','February','February','February','February','February','February','February','February','February','February','February'], 'Day':['Sunday','Monday','Tuesday','Sunday','Monday','Tuesday','Sunday','Monday','Tuesday','Sunday','Monday','Tuesday', 'Sunday','Monday','Tuesday','Sunday','Monday','Tuesday','Sunday','Monday','Tuesday','Sunday','Monday','Tuesday',], 'Temperature':[34,32,33,36,37,35,29,32,33,32,36,30, 19,22,21,17,15,14,19,20,22,20,19,18], 'WorkingHours':[0,9.5,8.5,0,9,8.5,0,10,9.5,0,8,8.5, 0,8.5,9,0,9,9,0,10,8,0,8.5,9.5]} df = pd.DataFrame(data) def associations(dataset, nominal_columns=None, mark_columns=False, theil_u=False, plot=True, return_results = False, **kwargs): &quot;&quot;&quot; Calculate the correlation/strength-of-association of features in data-set with both categorical (eda_tools) and continuous features using: - Pearson's R for continuous-continuous cases - Correlation Ratio for categorical-continuous cases - Cramer's V or Theil's U for categorical-categorical cases :param dataset: NumPy ndarray / Pandas DataFrame The data-set for which the features' correlation is computed :param nominal_columns: string / list / NumPy ndarray Names of columns of the data-set which hold categorical values. Can also be the string 'all' to state that all columns are categorical, or None (default) to state none are categorical :param mark_columns: Boolean (default: False) if True, output's columns' names will have a suffix of '(nom)' or '(con)' based on there type (eda_tools or continuous), as provided by nominal_columns :param theil_u: Boolean (default: False) In the case of categorical-categorical feaures, use Theil's U instead of Cramer's V :param plot: Boolean (default: True) If True, plot a heat-map of the correlation matrix :param return_results: Boolean (default: False) If True, the function will return a Pandas DataFrame of the computed associations :param kwargs: Arguments to be passed to used function and methods :return: Pandas DataFrame A DataFrame of the correlation/strength-of-association between all features &quot;&quot;&quot; dataset = convert(dataset, 'dataframe') columns = dataset.columns if nominal_columns is None: nominal_columns = list() elif nominal_columns == 'all': nominal_columns = columns corr = pd.DataFrame(index=columns, columns=columns) for i in range(0,len(columns)): for j in range(i,len(columns)): if i == j: corr[columns[i]][columns[j]] = 1.0 else: if columns[i] in nominal_columns: if columns[j] in nominal_columns: if theil_u: corr[columns[j]][columns[i]] = theils_u(dataset[columns[i]],dataset[columns[j]]) corr[columns[i]][columns[j]] = theils_u(dataset[columns[j]],dataset[columns[i]]) else: cell = cramers_v(dataset[columns[i]],dataset[columns[j]]) corr[columns[i]][columns[j]] = cell corr[columns[j]][columns[i]] = cell else: cell = correlation_ratio(dataset[columns[i]], dataset[columns[j]]) corr[columns[i]][columns[j]] = cell corr[columns[j]][columns[i]] = cell else: if columns[j] in nominal_columns: cell = correlation_ratio(dataset[columns[j]], dataset[columns[i]]) corr[columns[i]][columns[j]] = cell corr[columns[j]][columns[i]] = cell else: cell, _ = ss.pearsonr(dataset[columns[i]], 
dataset[columns[j]]) corr[columns[i]][columns[j]] = cell corr[columns[j]][columns[i]] = cell corr.fillna(value=np.nan, inplace=True) if mark_columns: marked_columns = ['{} (nom)'.format(col) if col in nominal_columns else '{} (con)'.format(col) for col in columns] corr.columns = marked_columns corr.index = marked_columns if plot: plt.figure(figsize=kwargs.get('figsize',None)) sns.heatmap(corr, annot=kwargs.get('annot',True), fmt=kwargs.get('fmt','.2f')) plt.show() if return_results: return corr nominal.associations(df, nominal_columns=['Month','Day']) </code></pre> <p>but I just need it to be like this: <a href="https://i.stack.imgur.com/TSNlM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TSNlM.png" alt="enter image description here" /></a></p> <p>In fact, month and day are nominal features while working hours and temperature are numeric ones. The correlation between numeric and nominal features are computed using Eta, so I want to plot it separately.</p> <p>Thanks in advance.</p>
<p>I believe need filter <code>DataFrame</code> by subset of list of columns name:</p> <p>So change:</p> <pre><code>sns.heatmap(corr, annot=kwargs.get('annot',True), fmt=kwargs.get('fmt','.2f')) </code></pre> <p>to:</p> <pre><code>c1 = ['WorkingHours','Temperature'] c2 = ['Day','Month'] sns.heatmap(corr.loc[c1, c2], annot=kwargs.get('annot',True), fmt=kwargs.get('fmt','.2f')) </code></pre>
python-3.x|pandas|data-visualization|visualization|heatmap
1
4,161
52,266,352
Converting a long list of sequence of 0's and 1's into a numpy array or pandas dataframe
<p>I have a very long list of sequences(suppose of length 16 each) consisting of 0 and 1. e.g.</p> <pre><code>s = ['0100100000010111', '1100100010010101', '1100100000010000', '0111100011110111', '1111100011010111'] </code></pre> <p>Now I want to treat each bit as a feature so I need to convert it into numpy array or pandas dataframe. In order to do that I need to comma separate all the bits present in the sequences which is impossible for big datasets.</p> <p>So what I have tried is to generate all the positions in the string:</p> <pre><code>slices = [] for j in range(len(s[0])): slices.append((j,j+1)) print(slices) [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9), (9, 10), (10, 11), (11, 12), (12, 13), (13, 14), (14, 15), (15, 16)] new = [] for i in range(len(s)): seq = s[i] for j in range(len(s[i])): ## I have tried both of these LOC but couldn't figure out ## how it could be done new.append([s[slice(*slc)] for slc in slices]) new.append(s[j:j+1]) print(new) </code></pre> <p>Expected o/p:</p> <pre><code>new = [[0,1,0,0,1,0,0,0,0,0,0,1,0,1,1,1], [1,1,0,0,1,0,0,0,1,0,0,1,0,1,0,1], [1,1,0,0,1,0,0,0,0,0,0,1,0,0,0,0], [0,1,1,1,1,0,0,0,1,1,1,1,0,1,1,1], [1,1,1,1,1,0,0,0,1,1,0,1,0,1,1,1]] </code></pre> <p>Thanks in advance!!</p>
<p>Using the <code>np.array</code> constructor and a list comprehension:</p> <pre><code>np.array([list(row) for row in s], dtype=int) </code></pre> <p></p> <pre><code>array([[0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1], [1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1], [1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1], [1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1]]) </code></pre>
python|string|python-3.x|pandas|numpy
3
4,162
52,392,953
Find and remove rows in 1 data frame that do not exist in another using python pandas
<p>I have 2 csv files of different length. I need to find and remove the rows in one file that do not exist in the other file. Is there an easy way to do this, other than looping through the 2nd file n times?</p>
<p>Assuming you load your csv files into <code>df1</code> and <code>df2</code>, you can keep only the rows of <code>df1</code> whose values also appear as a row in <code>df2</code>:</p> <pre><code>df1[df1.apply(tuple,1).isin(df2.apply(tuple,1))] </code></pre>
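<p>A small self-contained example of the idea (the file names and columns are made up for illustration):</p> <pre><code>import pandas as pd

# hypothetical input files with the same columns but different lengths
df1 = pd.read_csv('long_file.csv')
df2 = pd.read_csv('short_file.csv')

# keep only rows of df1 whose full row tuple also exists in df2
mask = df1.apply(tuple, 1).isin(df2.apply(tuple, 1))
df1_filtered = df1[mask]

df1_filtered.to_csv('filtered.csv', index=False)
</code></pre> <p>This compares whole rows; if only certain columns define a match, apply the same idea to that subset of columns instead.</p>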
pandas
1
4,163
60,450,235
Grouping data based on month wise as a column and row with user data using pandas dataframe
<p>I have few doubts with subsetting grouping the data.</p> <p>My actual data format looks like this </p> <pre><code> month userId usage_count userEmail January aabzhlxycj 2 jakiyah@academy.com January aacuvynjwq 1 jack@gmail.com December aabzhlxycj 2 jakiyah@academy.com January aailjxciyk 2 maria@gmail.com December aacuvynjwq 1 jack@gmail.com </code></pre> <p>I need to convert this above data to this format</p> <pre><code>UserId userEmail January December aabzhlxycj jakiyah@academy.com 2 2 aacuvynjwq jack@gmail.com 1 1 aailjxciyk maria@gmail.com 2 0 </code></pre> <p>Can anyone please suggest to get the data in this above format.</p>
<p>You can use a pivot table:</p> <pre><code>import pandas as pd result = pd.pivot_table(df, values="usage_count", index=["userId", "userEmail"], columns="month").fillna(0).reset_index() print(result) </code></pre> <p>Output:</p> <pre><code>month userId userEmail December January 0 aabzhlxycj jakiyah@academy.com 2.0 2.0 1 aacuvynjwq jack@gmail.com 1.0 1.0 2 aailjxciyk maria@gmail.com 0.0 2.0 </code></pre>
python|pandas|dataframe|grouping|data-manipulation
0
4,164
60,577,492
How can I get predicted the following value of stock using predict method of Tensorflow?
<p>I am wondering how to predict and get future time series data after model training. I would like to get the values after N steps. I wonder if the time series data has been properly learned and predicted. How do I do this right to get the following(next) value? I want to get the next value using <code>model.predict</code> or similar.</p> <p>I have <code>x_test</code> and <code>x_test[-1] == t</code> So, the meaning of the next value is <code>t+1, t+2, .... t+n</code>. In this example <strong>I want to get <code>t+1, t+2 ... t+n</code></strong></p> <h1>First</h1> <p>I tried using stock index data</p> <pre><code>inputs = total_data[len(total_data) - forecast - look_back:] inputs = scaler.transform(inputs) X_test = [] for i in range(look_back, inputs.shape[0]): X_test.append(inputs[i - look_back:i]) X_test = np.array(X_test) predicted = model.predict(X_test) </code></pre> <p>but the result is like below</p> <p><a href="https://i.stack.imgur.com/L6qL1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L6qL1.png" alt="enter image description here" /></a></p> <p>The results from <code>X_test[-20:]</code> and the following 20 <strong>predictions looks like same.</strong> I'm wondering if it's the correct method to train and predicted value and also if the result was correct.</p> <p><a href="https://gist.github.com/Lay4U/2e1759a0e435ff95b7a017e301db634f" rel="nofollow noreferrer">full source</a></p> <p>The method I tried first did not work correctly.</p> <h1>Second</h1> <p>I realized something is wrong, I tried using another official data so I used the time series in the <strong>Tensorflow tutorial</strong> to practice training the model.</p> <pre><code>a = y_val[-look_back:] for i in range(N-step prediction): #predict a new value n times. tmp = model.predict(a.reshape(-1, look_back, num_feature)) #predicted value a = a[1:] #remove first a = np.append(a, tmp) #insert predicted value </code></pre> <p>The results were predicted in a linear regression shape very differently from the real data.</p> <p><img src="https://imgur.com/7tenqRd.png" alt="2" /></p> <p>Output a linear regression abnormal that is independent of the real data:</p> <p><a href="https://gist.github.com/Lay4U/96e0ba8d8c251046e89eae4bc5d40510" rel="nofollow noreferrer">full source</a> (After the 25th line is my code.)</p> <p>I'm really very curious that <strong>How can I predict the following value of time series using Tensorflow predict method</strong></p> <p>I'm not wondering if this works or not theoretically. I'm just wondering how to <strong>get the following n steps using the predict method.</strong></p> <p>Thank you for reading the long question. I seek advice about your priceless opinion.</p>
<p>In the Second approach, Output is not expected, as per my understanding, because of a small mistake in the code.</p> <p>The line of code,</p> <pre><code>a = y_val[-look_back:] </code></pre> <p>should be replaced by</p> <pre><code>look_back = 20 x = x_val_uni a = x[-look_back:] a.shape </code></pre> <p>In other words, we should send <code>X Values</code> as Inputs to the Model for the Prediction, not the <code>Y Values</code>.</p> <p>However, we can compare it's predictions with Y Values, with the code,</p> <pre><code>y = y_val_uni[-20:] plt.plot(y) plt.plot(tmp) plt.show() </code></pre> <p>Which would result in the plot shown below:</p> <p><a href="https://i.stack.imgur.com/TjFSO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TjFSO.png" alt="enter image description here"></a></p> <p>Please find the Complete Working Code in this <a href="https://colab.sandbox.google.com/gist/rmothukuru/f1999818e9a35183f2c100087ce7dc7b/time_series_prediction.ipynb" rel="nofollow noreferrer">Google Colab Gist</a>.</p>
python|tensorflow|machine-learning|deep-learning|prediction
3
4,165
60,372,038
make syntax automatically from pandas table column
<p>I have the following Dataframe</p> <pre><code>NAME DDGNWW ABC 123 DEF 456 GHI 789 JKL 012 MNO 110 </code></pre> <p>Code to reproduce: </p> <pre><code>import pandas as pd df = pd.DataFrame([ ['ABC', 123], ['DEF', 456], ['GHI', 789], ['JKL', 12], ['MNO', 110] ], columns=['NAME', 'DDGNWW']) </code></pre> <p>Now I want to make SQL syntax based on DDGNWW automatically like:</p> <pre><code>( "DDGNWW" = 123 OR "DDGNWW" = 456 OR "DDGNWW" = 789 OR "DDGNWW" = 12 OR "DDGNWW" = 110 ) </code></pre>
<p>You could use:</p> <pre><code>' OR '.join(df['DDGNWW'].apply(lambda x: '"DDGNWW"={}'.format(x))) </code></pre> <p>Another way to do it, using <code>IN</code> instead of chained <code>OR</code>s:</p> <pre><code>'"DDGNWW" IN ' + str(tuple(df['DDGNWW'])) </code></pre>
python|pandas|dataframe
0
4,166
72,721,956
dropout, recurrent_dropout in LSTM layer
<p>I am training a GRU neural network and added dropout and recurrent dropout in my GRU layer but since then I can't get reproducible results every time I run the program again and I can't fix this problem even with :</p> <pre><code>recurrent_initializer=tf.keras.initializers.Orthogonal(seed=42), kernel_initializer=tf.keras.initializers.GlorotUniform(seed=42)) </code></pre> <p>in the same layer.</p> <p>This is my model:</p> <pre><code>model = tf.keras.models.Sequential() model.add(tf.keras.layers.GRU(20, activation='tanh',dropout=0.1, recurrent_dropout=0.2,recurrent_activation=&quot;sigmoid&quot;, return_sequences = False, input_shape=(train_XX.shape[1], train_XX.shape[2]), recurrent_initializer=tf.keras.initializers.Orthogonal(seed=42), kernel_initializer=tf.keras.initializers.GlorotUniform(seed=42))) model.add(tf.keras.layers.Dense(1, activation='sigmoid', kernel_initializer=tf.keras.initializers.GlorotUniform(seed=42),)) model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=False, name=&quot;binary_crossentropy&quot;,),optimizer='adam', metrics=[tf.keras.metrics.PrecisionAtRecall(0.75)] ) </code></pre>
<p>I had already set the seeds at the beginning of the programme with:</p> <pre><code>import numpy as np import tensorflow as tf import random as rn np.random.seed(1) tf.random.set_seed(2) rn.seed(3) </code></pre> <p>but adding the following before those three seed-setting lines:</p> <pre><code>import os os.environ['PYTHONHASHSEED'] = '0' os.environ['CUDA_VISIBLE_DEVICES'] = '' </code></pre> <p>resolved my problem.</p>
tensorflow|keras|seed|dropout
0
4,167
72,538,480
Create column in dataframe that divides number by days in a month
<p>I have a Panda's DataFrame with a column of months and a column that gives a total for each month. What I need to do is divide the total for each month by the number of days in that month and put it in a new column.</p> <p>So something like this</p> <pre><code>Month Total Daily Total Nov. 2019 45345 Dec. 2019 87493 Jan. 2020 45765 Feb. 2020 38756 </code></pre> <p>How do I do this? How do I know how many days are in a given month using Python? Thanks</p>
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.daysinmonth.html" rel="nofollow noreferrer">.dt.daysinmonth</a></p> <pre class="lang-py prettyprint-override"><code>df['Daily Total'] = df['Total'] / pd.to_datetime(df['Month']).dt.daysinmonth </code></pre> <pre><code>print(df) Month Total Daily Total 0 Nov. 2019 45345 1511.500000 1 Dec. 2019 87493 2822.354839 2 Jan. 2020 45765 1476.290323 3 Feb. 2020 38756 1336.413793 </code></pre>
python|pandas|date|divide
1
4,168
59,741,210
Image clustering - allocating memory on GPU
<p>I've written this code for image classification by pretrained googlenet:</p> <pre><code>gnet = models.googlenet(pretrained=True).cuda() transform = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(32), transforms.ToTensor()]) images = {} resultDist = {} i = 1 for f in glob.iglob("/data/home/student/HW3/trainData/train2014/*"): print(i) i = i + 1 image = Image.open(f) # transform, create batch and get gnet weights img_t = transform(image).cuda() batch_t = torch.unsqueeze(img_t, 0).cuda() try: gnet.eval() out = gnet(batch_t) resultDist[f[-10:-4]] = out del out except: print(img_t.shape) del img_t del batch_t image.close() torch.cuda.empty_cache() i = i + 1 torch.save(resultDist, '/data/home/student/HW3/googlenetOutput1.pkl') </code></pre> <p>I deleted all the possible tensors from the GPU after using them, but after about 8000 images from my dataset the GPU is full. I found the problem to be in:</p> <pre><code>resultDist[f[-10:-4]] = out </code></pre> <p>The dictionary taking alot of space and I can't delete it because I want to save my data to pkl file.</p>
<p>Since you're not doing backprop, wrap your whole loop in a <code>with torch.no_grad():</code> statement; otherwise a computation graph is created and intermediate results may be stored on the GPU for a later backward pass, which takes a fair amount of space. Also, you probably want to save <code>out.cpu()</code> so your results aren't left on the GPU.</p> <pre><code>... with torch.no_grad(): for f in glob.iglob("/data/home/student/HW3/trainData/train2014/*"): ... resultDist[f[-10:-4]] = out.cpu() ... torch.save(resultDist, '/data/home/student/HW3/googlenetOutput1.pkl') </code></pre>
python|image|classification|pytorch
0
4,169
59,726,576
Create different files based off value in dataframe column A and save to different existing folders based off value in dataframe column A
<ol> <li>First, I would like to create different files based off the value in dataframe column A <code>FTP_FOLDER_PATH</code></li> <li>Second, I would like to save these files to different folders depending on the value in dataframe column A 'FTP_FOLDER_PATH'. These folders already exist and do not need to be created.</li> </ol> <p>I am struggling with how to do this through looping. I have done something similar in the past for the first part, where I just create different files, but I could only figure out how to save them to one folder. I am stuck on trying to save them to multiple folders. In the code, I have included:</p> <ol> <li>the dataframe</li> <li>what I have attempted which only solves the first part of the problem and</li> <li>the desired output which all needs to go to the correct FTP folders.</li> </ol> <hr /> <pre><code>import pandas as pd import os FTP_Master_Folder = 'C:/FTP' df = pd.DataFrame({'FTP_FOLDER_PATH' : ['C:\FTP1', 'C:\FTP2', 'C:\FTP2', 'C:\FTP2', 'C:\FTP3', 'C:\FTP3'], 'NAME' : ['Jon', 'Kat', 'Kat', 'Kat', 'Joe', 'Joe'], 'CARS' : ['Honda', 'Lexus', 'Porsche', 'Saleen s7', 'Tesla', 'Tesla']}) df for i, x in df.groupby('FTP_FOLDER_PATH'): #How do I change the below line to loop through and change the directory based on the value of the 'FTP_FOLDER_PATH' os.chdir(f'{FTP_Master_Folder}') p = os.path.join(os.getcwd(), i + '.csv') x.to_csv(p, index=False) #Desired Ouput to specific FTP folder based on row of dataframe df_FTP1 = pd.DataFrame({'FTP_FOLDER_PATH' : ['C:\FTP1'], 'NAME' : ['Jon'], 'CARS' : ['Honda']}) df_FTP1 df_FTP2 = pd.DataFrame({'FTP_FOLDER_PATH' : ['C:\FTP2', 'C:\FTP2', 'C:\FTP2'], 'NAME' : ['Kat', 'Kat', 'Kat'], 'CARS' : ['Lexus', 'Porsche', 'Saleen s7']}) df_FTP2 df_FTP3 = pd.DataFrame({'FTP_FOLDER_PATH' : ['C:\FTP3', 'C:\FTP3'], 'NAME' : ['Joe', 'Joe'], 'CARS' : ['Tesla', 'Tesla']}) df_FTP3 </code></pre>
<p>I discovered a minor, basic error: I should have included <strong>/{i}</strong> in line 2. Here <code>i</code> is the subfolder of the master folder, so adding it in sends each file to its destination, which solves part two of my problem quite easily.</p> <pre><code>for i, x in df_joined.groupby('FTP_FOLDER_PATH'):
    os.chdir(f'{FTP_Master_Folder}/{i}')
    p = os.path.join(os.getcwd(), i + '.csv')
    x.to_csv(p, index=False)
</code></pre>
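<p>As a side note, the same result can be had without changing the working directory at all by building the target path directly (a sketch; it assumes each <code>FTP_FOLDER_PATH</code> value is an existing folder, as in the question):</p> <pre><code>for folder, group in df.groupby('FTP_FOLDER_PATH'):
    # write each group straight into its own folder, no os.chdir needed
    out_path = os.path.join(folder, os.path.basename(folder) + '.csv')
    group.to_csv(out_path, index=False)
</code></pre>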
python|pandas
0
4,170
59,495,151
Reading pandas from disk during concurrent process pool
<p>I've written a CLI tool to generate simulations, and I'm hoping to generate about 10k (~10 minutes) for each of the ~200 cuts of data I have. I have functions that do this fine in a for loop, but when I converted it to <code>concurrent.futures.ProcessPoolExecutor()</code> I realized that multiple processes can't read the same pandas dataframe.</p> <p>Here's the smallest example I could think of:</p> <pre class="lang-py prettyprint-override"><code>import concurrent.futures
import pandas as pd

def example():
    # This is a static table with basic information like distributions
    df = pd.read_parquet("batch/data/mappings.pq")
    # Then there's a bunch of etl, even reading in a few other static tables
    return sum(df.shape)

def main():
    results = []
    with concurrent.futures.ProcessPoolExecutor() as pool:
        futr_results = [pool.submit(example) for _ in range(100)]
        done_results = concurrent.futures.as_completed(futr_results)
        for _ in futr_results:
            results.append(next(done_results).result())
    return results

if __name__ == "__main__":
    print(main())
</code></pre> <p>Errors:</p> <pre class="lang-sh prettyprint-override"><code>&lt;jemalloc&gt;: background thread creation failed (11)
terminate called after throwing an instance of 'std::system_error'
  what():  Resource temporarily unavailable
Traceback (most recent call last):
  File "batch/testing.py", line 19, in &lt;module&gt;
    main()
  File "batch/testing.py", line 14, in main
    results.append(next(done_results).result())
  File "/home/a114383/miniconda3/envs/hailsims/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/home/a114383/miniconda3/envs/hailsims/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
</code></pre> <p>I'm hoping there's a quick and dirty way to read these (I'm guessing without reference?); otherwise it looks like I'll need to create all the parameters first instead of getting them on the fly.</p>
<p>Three things I would try:</p> <ul> <li><p>Pandas has <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_parquet.html" rel="nofollow noreferrer">an option</a> for using either PyArrow or FastParquet when reading parquet files. Try using a different one - this seems to be a bug.</p></li> <li><p>Try forcing pandas to open the file in read only mode to prevent conflicts due to the file being locked:</p></li> </ul> <pre class="lang-py prettyprint-override"><code>pd.read_parquet(open("batch/data/mappings.pq", "rb")) # Also try "r" instead of "rb", not sure if pandas expects string or binary data </code></pre> <ul> <li>Try loading the file into a StringIO/BytesIO buffer, and then handing that to pandas - this avoids any interaction with the file from pandas itself:</li> </ul> <pre class="lang-py prettyprint-override"><code>import io # either this (binary) data = io.BytesIO(open("batch/data/mappings.pq", "rb").read()) # or this (string) data = io.StringIO(open("batch/data/mappings.pq", "r").read()) pd.read_parquet(data) </code></pre>
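<p>If none of those help, another option is to read the file from disk exactly once in the parent process and hand the raw bytes to each worker, so no two processes ever touch the file at the same time (a sketch based on the example in the question):</p> <pre class="lang-py prettyprint-override"><code>import concurrent.futures
import io
import pandas as pd

def example(raw_bytes):
    # each worker builds its own frame from its own copy of the bytes
    df = pd.read_parquet(io.BytesIO(raw_bytes))
    return sum(df.shape)

def main():
    with open("batch/data/mappings.pq", "rb") as fh:
        raw_bytes = fh.read()          # single read, done in the parent
    with concurrent.futures.ProcessPoolExecutor() as pool:
        futures = [pool.submit(example, raw_bytes) for _ in range(100)]
        return [f.result() for f in concurrent.futures.as_completed(futures)]

if __name__ == "__main__":
    print(main())
</code></pre>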
python|python-3.x|pandas|concurrent.futures
2
4,171
59,677,007
I need to extract Ports for Vlan Id using Python
<pre><code>VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active    Po10, Po20, Po30, Po40, Po50
                                                Eth1/1, Eth1/2, Eth1/3, Eth1/4
                                                Eth1/5, Eth1/6, Eth1/7, Eth1/8
2    native                           active    Po10, Po20, Po30, Po40, Po50
                                                Eth1/5, Eth1/6, Eth1/13, Eth1/14
</code></pre> <p>The text file looks like the above. I need the ports for each VLAN ID in dictionary format, something like:</p> <pre><code>[ {'1':'Po10', 'Po20', 'Po30', 'Po40', 'Po50','Eth1/1', 'Eth1/2','Eth1/3',' Eth1/4','Eth1/5', 'Eth1/6', 'Eth1/7', 'Eth1/8'},{'2':'Po10', 'Po20', 'Po30', 'Po40', 'Po50','Eth1/5', 'Eth1/6', 'Eth1/13',' Eth1/14'}
</code></pre>
<p>Your expected output is not a valid dictionary. If you want a dict of lists, use:</p> <pre><code>d = df.set_index('VLAN')['Ports'].str.split(', ').to_dict()
print (d)
{1: ['Po10', 'Po20', 'Po30', 'Po40', 'Po50', 'Eth1/1', 'Eth1/2', 'Eth1/3', 'Eth1/4',
     'Eth1/5', 'Eth1/6', 'Eth1/7', 'Eth1/8'],
 2: ['Po10', 'Po20', 'Po30', 'Po40', 'Po50', 'Eth1/5', 'Eth1/6', 'Eth1/13', 'Eth1/14']}
</code></pre> <p>It is also possible to create a list of dictionaries:</p> <pre><code>d1 = [{k:x} for k, v in d.items() for x in v]
print (d1)
[{1: 'Po10'}, {1: 'Po20'}, {1: 'Po30'}, {1: 'Po40'}, {1: 'Po50'},
 {1: 'Eth1/1'}, {1: 'Eth1/2'}, {1: 'Eth1/3'}, {1: 'Eth1/4'},
 {1: 'Eth1/5'}, {1: 'Eth1/6'}, {1: 'Eth1/7'}, {1: 'Eth1/8'},
 {2: 'Po10'}, {2: 'Po20'}, {2: 'Po30'}, {2: 'Po40'}, {2: 'Po50'},
 {2: 'Eth1/5'}, {2: 'Eth1/6'}, {2: 'Eth1/13'}, {2: 'Eth1/14'}]
</code></pre> <p>Or a dict of strings:</p> <pre><code>d2 = df.set_index('VLAN')['Ports'].to_dict()
print (d2)
{1: 'Po10, Po20, Po30, Po40, Po50, Eth1/1, Eth1/2, Eth1/3, Eth1/4, Eth1/5, Eth1/6, Eth1/7, Eth1/8',
 2: 'Po10, Po20, Po30, Po40, Po50, Eth1/5, Eth1/6, Eth1/13, Eth1/14'}
</code></pre>
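<p>The above assumes the text has already been loaded into a DataFrame with <code>VLAN</code> and <code>Ports</code> columns. One way to build that frame from the raw <code>show vlan</code>-style text is sketched below (it assumes the standard layout where continuation lines only contain extra ports; the file name is hypothetical):</p> <pre><code>import pandas as pd

rows = []
with open('vlans.txt') as fh:
    for line in fh:
        stripped = line.strip()
        if not stripped or stripped.startswith(('VLAN', '----')):
            continue                      # skip the header and the separator line
        if stripped[0].isdigit():         # a new VLAN entry: id, name, status, first ports
            vlan_id, name, status, ports = stripped.split(None, 3)
            rows.append({'VLAN': int(vlan_id), 'Ports': ports})
        else:                             # continuation line: append the extra ports
            rows[-1]['Ports'] += ', ' + stripped

df = pd.DataFrame(rows)
</code></pre>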
python|python-3.x|pandas
1
4,172
59,638,464
Efficient way to get row with closest timestamp to a given datetime in pandas
<p>I have a big dataframe that contains around 7,000,000 rows of time series data that looks like this:</p> <pre><code>timestamp           | values
2019-08-01 14:53:01 | 20.0
2019-08-01 14:53:55 | 29.0
2019-08-01 14:53:58 | 22.4
...
2019-08-02 14:53:25 | 27.9
</code></pre> <p>I want to create a column that is a 1-day lagged version of the values for each row; since my timestamps don't match up perfectly, I can't use the normal <code>shift()</code> method. The result will be something like this:</p> <pre><code>timestamp           | values | lag
2019-08-01 14:53:01 | 20.0   | Nan
2019-08-01 14:53:55 | 29.0   | Nan
2019-08-01 14:53:58 | 22.4   | Nan
...
2019-08-02 14:53:25 | 27.9   | 20.0
</code></pre> <p>I found some posts about getting the closest timestamp to a given time: <a href="https://stackoverflow.com/questions/15115547/find-closest-row-of-dataframe-to-given-time-in-pandas/19974491">Find closest row of DataFrame to given time in Pandas</a>, and tried the methods; they do the job but take too long to run. Here's what I have:</p> <pre><code>def get_nearest(data, timestamp):
    index = data.index.get_loc(timestamp,"nearest")
    return data.iloc[index, 0]

df['lag'] = [get_nearest(df, dt) for dt in df.index]
</code></pre> <p>Are there any efficient ways to solve the problem?</p>
<p>Hmmmm, not sure if this will work out to be more efficient, but <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer">merge_asof</a> is an approach worth looking at, as it won't require a UDF.</p> <pre><code>df['date'] = df.timestamp.dt.date
df2 = df.copy()
df2['date'] = df2['date'] + pd.to_timedelta(1,unit ='D')
df2['timestamp'] = df2['timestamp'] + pd.to_timedelta(1,unit ='D')
pd.merge_asof(df,df2, on = 'timestamp', by = 'date', direction = 'nearest')
</code></pre> <p>The approach essentially shifts the previous day's values forward by one day and then matches each row to the nearest timestamp.</p>
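<p>A self-contained version of that idea, using the sample rows from the question (the 30-minute tolerance is an assumption; widen or narrow it to however far apart matching timestamps are allowed to drift):</p> <pre><code>import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2019-08-01 14:53:01", "2019-08-01 14:53:55",
        "2019-08-01 14:53:58", "2019-08-02 14:53:25"]),
    "values": [20.0, 29.0, 22.4, 27.9],
}).sort_values("timestamp")

shifted = df.rename(columns={"values": "lag"}).copy()
shifted["timestamp"] = shifted["timestamp"] + pd.Timedelta(days=1)

result = pd.merge_asof(df, shifted, on="timestamp",
                       direction="nearest",
                       tolerance=pd.Timedelta("30min"))
print(result)   # rows with no observation roughly one day earlier get NaN in 'lag'
</code></pre>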
python|pandas|timestamp|time-series
1
4,173
61,775,334
Testing for a value in a MultiIndex
<p>I have a pandas data frame with a large MultiIndex. I'm selecting columns from this dataframe with various metadata that is in the index, like for example </p> <pre><code>current_row = df.xs(number, level='counter', drop_level=False, axis=1) </code></pre> <p>So far, so good. However, <code>number</code> comes from a list that might contain numbers that are not contained in the <code>counter</code> level in the index, so the above obviously fails with a KeyError.</p> <p>So is there any way to test if my number exists, so that I can either continue with the number, or throw a custom error and continue with the next number?</p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#indexing-with-isin" rel="nofollow noreferrer"><code>isin</code></a> sounds like it would be what I need, but I can't get it to work on my Multiindex. </p>
<p>Tried a search with some different keywords again* and of course it's rather easily done with <code>in</code>:</p> <pre><code>if number in df.columns.get_level_values('counter'): #do stuff else: #print my custom error </code></pre> <p>found for example <a href="https://stackoverflow.com/questions/24870306/how-to-check-if-a-column-exists-in-pandas">here</a></p> <p>*I hate it when that happens. You spend way too much time on something simple, finally give in and post a stupid question, and then you have a brainfart and solve it anyways and of course it was totally simple. Oh well…</p>
pandas|multi-index
0
4,174
61,993,941
NoneType Error when trying to create new column from existing columns with Pandas on Jupyter Notebook
<p>so I recently tried to start using Jupyter notebooks, as I find they are far more convenient than me keeping lengthy comments in my code files.</p> <p>That being said to test out basic functionality I wanted to simulate moving averages. However, as the title says, I was unable to even create a new column using Pandas indexing method (which has worked elsewhere for me).</p> <p>Here is the code I used:</p> <pre><code>import pandas as pd from pandas_datareader import data as pdr import matplotlib.pyplot as plt from datetime import datetime %matplotlib inline fb = pdr.DataReader("FB","yahoo",datetime(2012,5,12),datetime(2020,5,25)) fb['MA10'] = fb['Close'].rolling(10).mean() </code></pre> <p>That last line, is what generates the error ( <code>TypeError: 'NoneType' object is not iterable</code>) which originates from me calling <code>fb['MA10']</code> since I did not run into any problems running the right hand side of the code. I am pretty stumped and would appreciate any feedback, I've posted the full Error Traceback below for whoever is interested.</p> <p><strong>EDIT</strong> I get an error just for typing <code>fb['MA10'] = fb['Close']</code> whereas just <code>fb['Close']=fb['Open']</code> does not yield a problem as both are existing columns, however I don't want to manually create a new column every time.</p> <pre><code>-------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-55-fa34c8084387&gt; in &lt;module&gt; 1 fb = pdr.DataReader("FB","yahoo",datetime(2012,5,12),datetime(2020,5,25)) ----&gt; 2 type(fb['MA10']) c:\users\robjr\appdata\local\programs\python\python38\lib\site-packages\pandas\core\frame.py in __getitem__(self, key) 2777 2778 # Do we have a slicer (on rows)? -&gt; 2779 indexer = convert_to_index_sliceable(self, key) 2780 if indexer is not None: 2781 # either we have a slice or we have a string that can be converted c:\users\robjr\appdata\local\programs\python\python38\lib\site-packages\pandas\core\indexing.py in convert_to_index_sliceable(obj, key) 2276 if idx._supports_partial_string_indexing: 2277 try: -&gt; 2278 return idx._get_string_slice(key) 2279 except (KeyError, ValueError, NotImplementedError): 2280 return None c:\users\robjr\appdata\local\programs\python\python38\lib\site-packages\pandas\core\indexes\datetimes.py in _get_string_slice(self, key, use_lhs, use_rhs) 776 def _get_string_slice(self, key: str, use_lhs: bool = True, use_rhs: bool = True): 777 freq = getattr(self, "freqstr", getattr(self, "inferred_freq", None)) --&gt; 778 _, parsed, reso = parsing.parse_time_string(key, freq) 779 loc = self._partial_date_slice(reso, parsed, use_lhs=use_lhs, use_rhs=use_rhs) 780 return loc pandas\_libs\tslibs\parsing.pyx in pandas._libs.tslibs.parsing.parse_time_string() pandas\_libs\tslibs\parsing.pyx in pandas._libs.tslibs.parsing.parse_datetime_string_with_reso() pandas\_libs\tslibs\parsing.pyx in pandas._libs.tslibs.parsing.dateutil_parse() TypeError: 'NoneType' object is not iterable </code></pre>
<p>You need to handle your missing values; try:</p> <pre><code>fb['MA10'] = fb['Close'].fillna(0).rolling(10).mean()
</code></pre>
python|pandas|typeerror|iterable|nonetype
1
4,175
57,874,436
Tensorflow Data Adapter Error: ValueError: Failed to find data adapter that can handle input
<p>While running a sentdex tutorial script of a cryptocurrency RNN, link here</p> <p><a href="https://www.youtube.com/watch?v=yWkpRdpOiPY&amp;list=PLQVvvaa0QuDfhTox0AjmQ6tvTgMBZBEXN&amp;index=11" rel="noreferrer">YouTube Tutorial: Cryptocurrency-predicting RNN Model</a>,</p> <p>but have encountered an error when attempting to train the model. My tensorflow version is 2.0.0 and I'm running python 3.6. When attempting to train the model I receive the following error:</p> <pre class="lang-py prettyprint-override"><code>File "C:\python36-64\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 734, in fit use_multiprocessing=use_multiprocessing) File "C:\python36-64\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 224, in fit distribution_strategy=strategy) File "C:\python36-64\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 497, in _process_training_inputs adapter_cls = data_adapter.select_data_adapter(x, y) File "C:\python36-64\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py", line 628, in select_data_adapter _type_name(x), _type_name(y))) ValueError: Failed to find data adapter that can handle input: &lt;class 'numpy.ndarray'&gt;, (&lt;class 'list'&gt; containing values of types {"&lt;class 'numpy.float64'&gt;"}) </code></pre> <p>Any advice would be greatly appreciated!</p>
<p>Have you checked whether your training/testing data and training/testing labels are all numpy arrays? It might be that you're mixing numpy arrays with lists. </p>
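<p>For example, something along these lines often resolves it (the variable names are placeholders for whatever you pass to <code>fit</code>):</p> <pre><code>import numpy as np

# convert plain Python lists (or lists of floats) into proper numpy arrays
train_x = np.asarray(train_x)
train_y = np.asarray(train_y)

model.fit(train_x, train_y, epochs=10, batch_size=64)
</code></pre>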
python|tensorflow|keras|lstm
89
4,176
54,994,658
How does pytorch's nn.Module register submodule?
<blockquote> <p>When I read the source code(python) of torch.nn.Module , I found the attribute <code>self._modules</code> has been used in many functions like <code>self.modules(), self.children()</code>, etc. However, I didn't find any functions updating it. So, where will the <code>self._modules</code> be updated? Furthermore, how does pytorch's <code>nn.Module</code> register submodule?</p> </blockquote> <pre><code>class Module(object): def __init__(self): self._backend = thnn_backend self._parameters = OrderedDict() self._buffers = OrderedDict() self._backward_hooks = OrderedDict() self._forward_hooks = OrderedDict() self._forward_pre_hooks = OrderedDict() self._modules = OrderedDict() self.training = True def named_modules(self, memo=None, prefix=''): if memo is None: memo = set() if self not in memo: memo.add(self) yield prefix, self for name, module in self._modules.items(): if module is None: continue submodule_prefix = prefix + ('.' if prefix else '') + name for m in module.named_modules(memo, submodule_prefix): yield m </code></pre>
<p>Add some details to Jiren Jin's answer:</p> <ul> <li><p>Layers of a net (inherited from <code>nn.Module</code>) are stored in <code>Module._modules</code>, which is initialized in <code>__construct</code>:</p> <pre class="lang-py prettyprint-override"><code>def __init__(self): self.__construct() # initialize self.training separately from the rest of the internal # state, as it is managed differently by nn.Module and ScriptModule self.training = True def __construct(self): """ Initializes internal Module state, shared by both nn.Module and ScriptModule. """ # ... self._modules = OrderedDict() </code></pre></li> <li><p><code>self._modules</code> is updated in <code>__setattr__</code>. <code>__setattr__(obj, name, value)</code> is called when <code>obj.name = value</code> is executed. For example, if one defines <code>self.conv1 = nn.Conv2d(128, 256, 3, 1, 1)</code> when initializing a net inherited from <code>nn.Module</code>, the following code from <code>nn.Module.__setattr__</code> will be executed:</p> <pre class="lang-py prettyprint-override"><code>def __setattr__(self, name, value): def remove_from(*dicts): for d in dicts: if name in d: del d[name] params = self.__dict__.get('_parameters') if isinstance(value, Parameter): # ... elif params is not None and name in params: # ... else: modules = self.__dict__.get('_modules') # equivalent to modules = self._modules if isinstance(value, Module): if modules is None: raise AttributeError( "cannot assign module before Module.__init__() call") remove_from(self.__dict__, self._parameters, self._buffers) # register the given layer (nn.Conv2d) with its name (conv1) # equivalent to self._modules['conv1'] = nn.Conv2d(128, 256, 3, 1, 1) modules[name] = value </code></pre></li> </ul> <p>Question from comments:</p> <blockquote> <p>Do you know how this works with the fact that torch lets you supply your own forward method?</p> </blockquote> <p>If one runs a forward pass of a net inherited from <code>nn.Module</code>, the <code>nn.Module.__call__</code> will be called, in which <code>self.forward</code> is called. However, one has overrided the <code>forward</code> when implementing the net.</p>
python|pytorch
4
4,177
54,854,356
Keras LSTM How to loop for predict by model.predict()
<p>I want to run an LSTM prediction 7 times: I have to get the output from model.predict() and use that output to predict again, up to 7 times.</p> <p>This is the code:</p> <pre><code>data = 0
y_pred=0
data[0] = model.predict(X_test_t)
for i in range(7):
    data[i+1] = model.predict(data[i])
    print(data)
</code></pre> <p>When I run it, it shows an error like this:</p> <pre><code> File "test_load_model.py", line 60, in &lt;module&gt;
    data[0] = model.predict(X_test_t)
TypeError: 'int' object does not support item assignment
</code></pre> <p>How do I loop the prediction?</p>
<p>You need to have a list for your data. Currently it is an int and you can't index an int. So you need to do </p> <pre><code>data =[] </code></pre> <p>And to append you do</p> <pre><code>data.append(model.predict(data[i])) </code></pre>
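<p>Put together, the feedback loop from the question could look roughly like this (a sketch; it assumes the model's output shape is compatible with its input shape, otherwise reshape the prediction before feeding it back in):</p> <pre><code>preds = []
current_input = X_test_t

for i in range(7):
    current_pred = model.predict(current_input)
    preds.append(current_pred)
    current_input = current_pred   # feed the latest prediction back in

print(preds)
</code></pre>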
python|tensorflow|keras
0
4,178
73,208,925
Consolidate column values into one column as a list with column name as key in python
<p>I have a data frame that looks like this:</p> <pre><code>id  address            city   pincode
1   here is address 1  city1  1234
2   here is address 2  city2  4321
3   here is address 3  city3  7654
4   here is address 4  city4  9876
5   here is address 5  city5  987
</code></pre> <p>What I am trying to achieve is:</p> <pre><code>id  address            city   pincode  Newcolumn
1   here is address 1  city1  1234     address:'here is address 1', city: 'city1', pincode:'1234'
2   here is address 2  city2  4321     address:'here is address 2', city: 'city2', pincode:'4321'
3   here is address 3  city3  7654     address:'here is address 3', city: 'city3', pincode:'7654'
4   here is address 4  city4  9876     address:'here is address 4', city: 'city4', pincode:'9876'
5   here is address 5  city5  987      address:'here is address 5', city: 'city5', pincode:'987'
</code></pre> <p>I have been trying to do this:</p> <pre><code>cols = df[['address', 'city', 'pincode']]
df['Newcolumn'] = df[cols].str.join()
</code></pre>
<p>Can be accomplished with <code>to_dict(orient='records')</code>:</p> <pre><code>import pandas as pd from io import StringIO # create dataframe data = StringIO(&quot;&quot;&quot;id;address;city;pincode 1;here is address 1;city1;1234 2;here is address 2;city2;4321 3;here is address 3;city3;7654 4;here is address 4;city4;9876 5;here is address 5;city5;987 &quot;&quot;&quot;) df = pd.read_csv(data, sep=';', index_col='id') # assign new column df = df.assign(Newcolumn = df.to_dict(orient='records')) </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/pNuAT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pNuAT.png" alt="DataFrame" /></a></p>
python|pandas|join|multiple-columns
1
4,179
73,256,800
Resampling a numpy array logarithmically
<p>I have a numpy array representing magnitudes of a fourier transform, and I want to resample it logarithmically:</p> <p>Lets say it was from 100hz to 10khz, and each bucket was 100hz, I want to take that discrete distribution, create a continuous distribution, and then resample that continuous distribution logarithmically (i.e. bucket 1 is 100hz to 200hz, bucket 2 is 200hz to 400hz, etc (not specifically doubling, but for any logarithmic base)).</p> <p>I can definitely conceive of a manual way to do this, but I'm absolutely sure that there is a far more pythonic way of doing this in like 2 lines (and I bet you can configure even the interpolation method (linear, logarithmic, parabolic, etc), maybe even as part of numpy).</p>
<p>A possible implementation is given below. The <code>logspace()</code> routine from numpy creates the bucket boundaries, and a random index then picks one of the resulting intervals.</p> <pre><code>import random
import numpy as np

NumberOfBuckets = 10
LogGrid = np.logspace(2.0, 4.0, num=NumberOfBuckets)  # 10^2 to 10^4
IntDraw = random.randint(0, NumberOfBuckets - 2)      # leave room for the interval's upper edge
RandomInterval = [LogGrid[IntDraw], LogGrid[IntDraw + 1]]
print(RandomInterval)
</code></pre>
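<p>Building on that <code>logspace()</code> grid, one way to actually resample a linearly-bucketed spectrum onto it is plain linear interpolation (a sketch; the magnitudes are random placeholders and the 50-point target grid is an arbitrary choice):</p> <pre><code>import numpy as np

freqs = np.arange(100, 10001, 100)        # linear buckets: 100 Hz to 10 kHz, 100 Hz wide
mags = np.random.rand(freqs.size)         # stand-in for the FFT magnitudes

log_freqs = np.logspace(np.log10(freqs[0]), np.log10(freqs[-1]), num=50)
log_mags = np.interp(log_freqs, freqs, mags)   # magnitudes resampled onto the log grid
</code></pre>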
python|numpy
0
4,180
67,264,061
How to match ID between two columns?
<p>I have, I guess a simple question but I cannot find the right answer. I have two pandas series (let's say &quot;A&quot; and &quot;B&quot;) with ID in there (string). Series A is bigger than series B. What I am looking for is a way to have a resulting dataframe with 2 columns where the matching elements are on the same row and if there is a value in A that don't exist in B, to add a NaN</p> <pre><code>A B 10368 10368 12567 NaN 13456 13456 ... ... </code></pre> <p>and so on.</p> <p>I guess the merge function in pandas can be helpful but I could not manage to make it work</p> <p>Thanks in advance</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reindex.html" rel="nofollow noreferrer"><code>Series.reindex</code></a> with helper <code>Series</code> by index from values of <code>s2</code>:</p> <pre><code>df = pd.Series(s2.to_numpy(), index=s2).reindex(s1).rename_axis('A').reset_index(name='B') </code></pre> <p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.to_frame.html" rel="nofollow noreferrer"><code>Series.to_frame</code></a> with left join in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>DataFrame.merge</code></a>:</p> <pre><code>df = s1.to_frame('A').merge(s2.to_frame('B'), left_on='A', right_on='B', how='left') </code></pre>
python|pandas
0
4,181
67,373,908
Reshaping a dataframe by splitting list to append rows
<p>What is the best way to reshape my dataframe, please (splitting each list into new rows):</p> <pre><code>df = pd.DataFrame({'A': [[1,2], [1,2], 3], 'B': ['x', 'y', 'z']})

        A  B
0  [1, 2]  x
1  [1, 2]  y
2       3  z
</code></pre> <p>to the desired output:</p> <pre><code>out = pd.DataFrame({'A': [1, 2, 1, 2, 3], 'B': ['x', 'x', 'y', 'y', 'z']})

   A  B
0  1  x
1  2  x
2  1  y
3  2  y
4  3  z
</code></pre> <p>Row order doesn't matter.</p>
<p>You can use <code>explode</code></p> <pre><code>df = df.explode('A') </code></pre> <p><strong>Output</strong></p> <pre><code> A B 0 1 x 0 2 x 1 1 y 1 2 y 2 3 z </code></pre>
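<p>If a fresh 0..n index is wanted, as in the desired output, reset it afterwards:</p> <pre><code>df = df.explode('A').reset_index(drop=True)
</code></pre>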
python|pandas|dataframe
0
4,182
67,188,443
String splitting and joining on a pandas dataframe
<p>I have a dataframe containing devices and their corresponding firmware versions (e.g. 1.7.1.3). I'm trying to shorten the firmware version to only show three numbers (e.g. 1.7.1).</p> <p>I know how to do this on a single string but how would I make it efficient for a large dataframe?</p> <pre><code>test = &quot;1.2.3.4&quot; test = test.split(&quot;.&quot;) '.'.join(test[0:-1]) </code></pre>
<pre><code>#sample dataframe: import pandas as pd df=pd.DataFrame({'data': {0: '1.2.3.4', 1: '1.2.3.9', 2: '1.2.3.8'}}) </code></pre> <p>For this you can use:</p> <pre><code>df['data']=df['data'].str.split('.').str[0:3].apply('.'.join) </code></pre> <p><strong>OR</strong></p> <pre><code>df['data']=df['data'].str[0:5] </code></pre> <p><strong>OR</strong></p> <pre><code>df['data']=df['data'].str[::-1].str.split('.',1).str[1].str[::-1] </code></pre> <p><strong>Performance:</strong></p> <p><a href="https://i.stack.imgur.com/nKgJb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nKgJb.png" alt="enter image description here" /></a></p>
python|pandas|dataframe|split
3
4,183
60,019,727
Adding legends into a Graph made using Matplotlib and Numpy (multiple plots from a txt file)
<p>I had some really good help from here when I asked a question before, so I thought I'd jump on again to get some help. Here's my code so far:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
import os
os.chdir(r"C:\Users\Chloe\Desktop")  # raw string so the backslashes aren't treated as escapes
data=np.loadtxt("tree_rings.txt")
for column in data.T[1:]:
    plt.plot(data[:,0],column)
plt.title("Growth of Tree Rings Over Time")
plt.xlabel("Year")
plt.ylabel("Size of Tree Rings (mm)")
plt.show()
</code></pre> <p>How do I go about adding legends for each line in the graph? There are 3 lines (sample 1, sample 2 and sample 3), and I'm mostly confused about how to specify which line corresponds to which set of data. The data is set out in 4 columns: the first is the year, which corresponds to the x axis so isn't plotted, and the 2nd, 3rd and 4th columns are plotted as lines on the graph. I'm really new to Python, so thanks in advance :)</p>
<p>Does your data have column names? If so you can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html" rel="nofollow noreferrer"><code>np.genfromtxt()</code></a> like this:</p> <pre><code>data = np.genfromtxt('tree_rings.csv',delimiter=',',names=True) </code></pre> <p>To answer your question though, you'd use <code>label</code> and <code>plt.legend()</code> like this:</p> <pre><code>fields = [field for field in data.dtype.fields.keys() if 'Width' in field] for field in fields: plt.plot(data['Year'],data[field],label=field) # ^^^^^ plt.title("Growth of Tree Rings Over Time") plt.xlabel("Year") plt.ylabel("Size of Tree Rings (mm)") plt.legend() plt.show() </code></pre> <p>Result: <a href="https://i.stack.imgur.com/SifPa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SifPa.png" alt="enter image description here"></a></p>
python|numpy|matplotlib|legend
0
4,184
59,915,944
Apply qcut for complete dataframe
<p>I want to replace the column values with bin numbers based on quantiles but for each and every column present in the dataframe.</p> <p>I know how to do this with qcut and labels as its parameter for a single column, but do not know whether it can be applied for complete dataframe or not. say the dataframe looks like below..</p> <pre><code> ID CC DD EE 0 Q1 0 23 18 1 Q2 2 32 19 2 Q3 3 45 20 3 Q4 4 54 21 4 Q5 5 67 22 5 Q6 6 76 23 </code></pre> <p>The ID column should remain unchanged but the other columns should be replaced by bin numbers, like below..</p> <pre><code> ID CC DD EE 0 Q1 1 1 1 1 Q2 2 2 1 2 Q3 3 2 2 3 Q4 4 3 3 4 Q5 5 4 4 5 Q6 5 5 5 </code></pre> <p>the bin numbers I have provided here for CC, DD, EE are not exact and for understanding purpose only.</p> <p>And in the real dataset, there are more than 100 columns and 1000 rows, and I do not want to replace the 'ID' column, but all the other columns.</p> <p>How to do this?</p>
<p>You can use <code>pandas.cut()</code>:</p> <pre><code>import pandas as pd

df['CC'] = pd.cut(df['CC'], [0, 5, 10, 20])
</code></pre> <p>Similarly, you can do this for the other columns as well.</p>
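<p>If the bins should be quantile-based, as with <code>qcut</code> in the question, and applied to every column except <code>ID</code> in one pass, a sketch along these lines may help (4 quantiles and 1-based bin numbers are assumptions):</p> <pre><code>cols = df.columns.drop('ID')
df[cols] = df[cols].apply(
    lambda s: pd.qcut(s, q=4, labels=False, duplicates='drop') + 1)
</code></pre>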
python|pandas|dataframe
0
4,185
65,095,801
reindex (1,N) dimension dataframe
<pre><code>A = pandas.DataFrame({&quot;A&quot; : [1, 4], &quot;Output1&quot; : [6, 8]}).set_index([&quot;A&quot;]).fillna(0) new_A = A.reindex(pandas.MultiIndex.from_tuples([['Output1', &quot;-&quot;]]) , axis=&quot;columns&quot;) </code></pre> <p>I'm expecting to get</p> <pre><code> Output1 - A 1 6 4 8 </code></pre> <p>But instead I get</p> <pre><code> Output1 - A 1 NaN 4 NaN </code></pre> <p>Anything wrong in my code ?</p>
<p>Don't use <code>reindex</code>, which aligns the columns by names. Just reassign the columns:</p> <pre><code>A.columns = pd.MultiIndex.from_tuples([['Output1', &quot;-&quot;]]) </code></pre> <p>Output:</p> <pre><code> Output1 - A 1 6 4 8 </code></pre>
pandas|dataframe|matrix|indexing|series
1
4,186
65,345,469
How do I save the tflearn logs into tflearn.model?
<p>So basically I forgot to save my model during each training loop. How do I turn the /tmp/tflearn_logs/ subdirectory back into the model? Is there any way to recover it as a model, like:</p> <pre><code># Save a model
model.save('my_model.tflearn')
</code></pre> <p>from the event logs?</p> <p>And after that I could automatically load it with:</p> <pre><code># Load a model
model.load('my_model.tflearn')
</code></pre> <p>Here are my event logs:</p> <p><a href="https://i.stack.imgur.com/Gtj4b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gtj4b.png" alt="logs" /></a></p> <p>Thank you..</p>
<p>Never mind, there's no method to do that. The logs only store event data for visualization purposes, while the model holds what tflearn has learned; the two serve different purposes. Solution: re-add the <code>model.save('my_model.tflearn')</code> call and run the training again from zero. :)</p>
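<p>To avoid this next time, a checkpoint can be written after every training round, for example (a sketch; the paths, epoch count and variable names are placeholders):</p> <pre><code>for i in range(n_rounds):
    model.fit(X, Y, n_epoch=1, show_metric=True)
    model.save('checkpoints/my_model_round_{}.tflearn'.format(i))
</code></pre>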
tensorflow|google-colaboratory|tflearn
0
4,187
65,095,890
group by pandas dataframe and select next upcoming date in each group
<p>Same question as here: <a href="https://stackoverflow.com/questions/41525911/group-by-pandas-dataframe-and-select-latest-in-each-group">group by pandas dataframe and select latest in each group</a>, except instead of latest date, would like to get next upcoming date for each group.</p> <p>So given a dataframe sorted by date:</p> <pre><code> id product date 0 220 6647 2020-09-01 1 220 6647 2020-10-03 2 220 6647 2020-12-16 3 826 3380 2020-11-11 4 826 3380 2020-12-09 5 826 3380 2021-05-19 6 901 4555 2020-09-01 7 901 4555 2020-12-01 8 901 4555 2021-11-01 </code></pre> <p>Using todays date (2020-12-01) to determine the next upcoming date, grouping by id or product and selecting the the next upcoming date should give:</p> <pre><code> id product date 2 220 6647 2020-12-16 5 826 3380 2020-12-09 8 901 4555 2021-11-01 </code></pre>
<p>Filter the dates first, then drop duplicates:</p> <pre><code>df[df['date']&gt;'2020-12-01'].sort_values(['id','date']).drop_duplicates('id') </code></pre> <p>Output:</p> <pre><code> id product date 2 220 6647 2020-12-16 4 826 3380 2020-12-09 8 901 4555 2021-11-01 </code></pre>
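<p>If the cutoff should be whatever &quot;today&quot; is when the code runs, rather than a hard-coded string, something like this works (assuming the <code>date</code> column is datetime-typed):</p> <pre><code>today = pd.Timestamp.today().normalize()
df[df['date'] &gt; today].sort_values(['id', 'date']).drop_duplicates('id')
</code></pre>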
python|pandas|datetime|pandas-groupby
0
4,188
65,406,713
Unflatten a tensor back to an image
<p>I am working on GANs and I want to visualize the image formed.</p> <p>For this, I was trying</p> <pre><code>def show_images(image_tensor, num_images=9, size=(1, 28, 28)): image_unflat = image_tensor.detach().cpu.view(-1, *size) image_grid = make_grid(image_unflat[:num_images], nrow=3) plt.imshow(image_grid.permute(1, 2, 0).squeeze()) plt.show() </code></pre> <p>but when I am trying to <code>show_image(some_tensor)</code>, I am getting an error as</p> <pre><code>image_unflat = image_tensor.detach().cpu.view(-1, *size) AttributeError: 'builtin_function_or_method' object has no attribute 'view' </code></pre> <p>Here, the size of some_tensor is N x 784.</p>
<p>You need to call <a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor.cpu" rel="nofollow noreferrer"><code>cpu()</code></a> before broadcasting with <code>view</code>.</p> <pre><code>image_unflat = image_tensor.detach().cpu().view(-1, *size) </code></pre>
python|pytorch|tensor
1
4,189
50,092,322
get second largest value in row in selected columns in dataframe in pandas
<p>I have a dataframe with subset of it shown below. There are more columns to the right and left of the ones I am showing you </p> <pre><code>M_cols 10D_MA 30D_MA 50D_MA 100D_MA 200D_MA Max Min 2nd smallest 68.58 70.89 69.37 **68.24** 64.41 70.89 64.41 68.24 **68.32**71.00 69.47 68.50 64.49 71.00 64.49 68.32 68.57 **68.40** 69.57 71.07 64.57 71.07 64.57 68.40 </code></pre> <p>I can get the min (and max is easy as well) with the following code</p> <pre><code>df2['MIN'] = df2[['10D_MA','30D_MA','50D_MA','100D_MA','200D_MA']].max(axis=1) </code></pre> <p>But how do I get the 2nd smallest. I tried this and got the following error</p> <pre><code>df2['2nd SMALLEST'] = df2[['10D_MA','30D_MA','50D_MA','100D_MA','200D_MA']].nsmallest(2) </code></pre> <p>TypeError: nsmallest() missing 1 required positional argument: 'columns'</p> <p>Seems like this should be a simple answer but I am stuck</p>
<p>For example, say you have the following df:</p> <pre><code>df=pd.DataFrame({'V1':[1,2,3],'V2':[3,2,1],'V3':[3,4,9]})
</code></pre> <p>After picking out the columns to compare, we just need to sort the values within each row (<code>np.sort</code> sorts along the last axis by default):</p> <pre><code>sortdf=pd.DataFrame(np.sort(df[['V1','V2','V3']].values))
sortdf
Out[419]: 
   0  1  2
0  1  3  3
1  2  2  4
2  1  3  9
</code></pre> <p>1st max:</p> <pre><code>sortdf.iloc[:,-1]
Out[421]: 
0    3
1    4
2    9
Name: 2, dtype: int64
</code></pre> <p>2nd max:</p> <pre><code>sortdf.iloc[:,-2]
Out[422]: 
0    3
1    2
2    3
Name: 1, dtype: int64
</code></pre>
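<p>The same trick answers the original &quot;2nd smallest&quot; question directly: after sorting each row in ascending order, the second column holds the second-smallest value (column names copied from the question):</p> <pre><code>import numpy as np

cols = ['10D_MA', '30D_MA', '50D_MA', '100D_MA', '200D_MA']
sorted_vals = np.sort(df2[cols].values, axis=1)   # ascending within each row
df2['MIN'] = sorted_vals[:, 0]
df2['2nd SMALLEST'] = sorted_vals[:, 1]
</code></pre>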
pandas
5
4,190
50,119,131
Merge returns empty dataframe in pandas
<p>I run Python 3.6 on Windows 10.</p> <p>My code is the following:</p> <pre><code>data1 Loan_ID Gender 1 LP001003 Male 2 LP001005 Male 3 LP001006 Male 4 LP001008 Male 5 LP001011 Male data2 Loan_ID2 LoanAmount 1 LP001003 128.0 2 LP001005 66.0 3 LP001006 120.0 4 LP001008 141.0 5 LP001011 267.0 data_merged = data1.merge(right= data2, how='inner',left_on='Loan_ID', right_on = 'Loan_ID2',right_index=True, sort=False) data_merged.shape (0, 4) </code></pre>
<p>You do not need <code>right_index</code> within <code>merge</code></p> <pre><code>df1.merge(df2,left_on='Loan_ID',right_on='Loan_ID2') Out[54]: Loan_ID Gender Loan_ID2 LoanAmount 0 LP001003 Male LP001003 128.0 1 LP001005 Male LP001005 66.0 2 LP001006 Male LP001006 120.0 3 LP001008 Male LP001008 141.0 4 LP001011 Male LP001011 267.0 </code></pre>
python|pandas|merge
3
4,191
64,076,919
Pandas split a column of unequal length lists into multiple boolean columns
<p>Given a DataFrame <code>df1</code> as follows:</p> <pre><code>df1 = pd.DataFrame({ 'col1': [1,2,3,4], 'col2': [['a', 'b'], ['c'], ['a', 'd', 'b'], ['e']] }) </code></pre> <p>Which looks like:</p> <pre><code> col1 col2 0 1 [a, b] 1 2 [c] 2 3 [a, d, b] 3 4 [e] </code></pre> <p>I want to convert <code>col2</code> - a column where each cell is a list - into several columns (<code>a</code>, <code>b</code>, <code>c</code>, <code>d</code>, <code>e</code>), where the values are boolean entries defining whether that column name existed in the original list, in the given row.</p> <p>The output should follow this form:</p> <pre><code>df2 = pd.DataFrame({ 'col1': [1,2,3,4], 'a': [True, False, True, False], 'b': [True, False, True, False], 'c': [False, True, False, False], 'd': [False, False, True, False], 'e': [False, False, False, True] }) </code></pre> <p>Which looks like:</p> <pre><code> col1 a b c d e 0 1 True True False False False 1 2 False False True False False 2 3 True True False True False 3 4 False False False False True </code></pre> <p>What's a clean way to do this?</p>
<p>Alternative approach using <code>str.get_dummies</code> probably more efficient than <code>apply</code> + <code>pd.Series</code>:</p> <pre><code>df1['col2'].str.join(',').str.get_dummies(sep=',').astype(bool) </code></pre> <hr /> <pre><code> a b c d e 0 True True False False False 1 False False True False False 2 True True False True False 3 False False False False True </code></pre> <p><strong>Timings:</strong></p> <pre><code>df1.shape (40000, 2) %%timeit df1['col2'].str.join(',').str.get_dummies(sep=',').astype(bool) 286 ms ± 16.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %%timeit pd.get_dummies(df1['col2'].apply(pd.Series).stack()).sum(level=0) 9.43 s ± 499 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) </code></pre>
python|pandas
4
4,192
46,976,483
Set initial value of a tf.Variable Python/TensorFlow
<p>I have this function:</p> <pre><code>def new_weights(shape): return tf.Variable(tf.truncated_normal(shape, stddev=0.05)) </code></pre> <p>And I call it like this, for example:</p> <pre><code># shape = [filter_size, filter_size, num_filters, num_input_channels] shape = [1, 1, 8, 1] weights = new_weights(shape) </code></pre> <p>I want to initialize my weights with the following values:</p> <pre><code>weights = [1, 2, 3, 4, 5, 6, 7, 8] </code></pre> <p>And after initializing it with these values, I want it to be updated (trainable).</p> <p>How do I do this?</p>
<p>You can use the <code>assign</code> function:</p> <pre><code>shape = [1, 1, 8, 1]
weights = new_weights(shape)
ws = [1, 2, 3, 4, 5, 6, 7, 8]
ws = np.array(ws).reshape(shape)
weights = weights.assign(ws)
</code></pre>
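<p>Alternatively, if the goal is simply to start training from those exact values, they can be passed as the variable's initial value directly (a sketch; float32 is chosen to match the truncated-normal weights used elsewhere):</p> <pre><code>import numpy as np
import tensorflow as tf

shape = [1, 1, 8, 1]
init = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.float32).reshape(shape)
weights = tf.Variable(init)   # trainable by default, so it will still be updated
</code></pre>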
python|python-2.7|tensorflow
1
4,193
46,752,078
How to implement LSTM layer with multiple cells in Pytorch?
<p>I intend to implement an LSTM with 2 layers and 256 cells in each layer. I am trying to understand the PyTorch LSTM framework for the same. The variables in torch.nn.LSTM that I can edit are input_size, hidden_size, num_layers, bias, batch_first, dropout and bidirectional.</p> <p>However, how do I have multiple cells in a single layer?</p>
<p>These cells will be automatically unrolled based on your sequence size in the input. Please check out this code: </p> <pre class="lang-py prettyprint-override"><code># One cell RNN input_dim (4) -&gt; output_dim (2). sequence: 5, batch 3 # 3 batches 'hello', 'eolll', 'lleel' # rank = (3, 5, 4) inputs = Variable(torch.Tensor([[h, e, l, l, o], [e, o, l, l, l], [l, l, e, e, l]])) print("input size", inputs.size()) # input size torch.Size([3, 5, 4]) # Propagate input through RNN # Input: (batch, seq_len, input_size) when batch_first=True # B x S x I out, hidden = cell(inputs, hidden) print("out size", out.size()) # out size torch.Size([3, 5, 2]) </code></pre> <p>You can find more examples at <a href="https://github.com/hunkim/PyTorchZeroToAll/" rel="nofollow noreferrer">https://github.com/hunkim/PyTorchZeroToAll/</a>.</p>
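<p>For the configuration described in the question (2 layers with 256 units in each), the layer itself is declared like this (the input size of 128 is a placeholder for your feature dimension):</p> <pre><code>import torch.nn as nn

lstm = nn.LSTM(input_size=128, hidden_size=256, num_layers=2, batch_first=True)
</code></pre>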
deep-learning|lstm|pytorch
1
4,194
63,110,015
I don't understand Keras function "fit"
<p>When I was building a DataGenerator and trying to fit it to a model, it didn't work. So I took a look at the Keras function 'fit' directly, but I don't understand what the code below means, especially the backslash sign. May I ask what this code is for and how it works?</p> <pre><code>with self.distribute_strategy.scope(), \
     training_utils.RespectCompiledTrainableState(self):
  # Creates a `tf.data.Dataset` and handles batch and epoch iteration.
</code></pre>
<p>As far as I know, the &quot;\&quot; is just Python's <em><strong>line continuation</strong></em> character: it lets the single <code>with</code> statement span two physical lines.</p>
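<p>A minimal illustration of the same pattern (unrelated to Keras, just showing the syntax):</p> <pre><code>from contextlib import suppress

# one `with` statement written across two lines via the trailing backslash;
# it is equivalent to putting both context managers on a single line
with suppress(ValueError), \
     suppress(KeyError):
    int("not a number")   # the ValueError is silently suppressed
</code></pre>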
python|keras|tensorflow2.0|training-data|tf.keras
1
4,195
63,309,400
Get mean of lowest axis in a 3D array
<p>I have a 3D array and want to take the mean along <code>axis=0</code>. I tried to convert to a numpy array and do <code>arr.mean(axis=0)</code>, but that throws an error because the lists in <code>axis=2</code> do not have equal lengths.</p> <p>To reproduce:</p> <pre><code>arr = [[[0,1,2,3,4], [1,2,3,4,5], [2,3,4,5,6], [3,4,5,6]], [[10,11,12,13,14], [11,12,13,14,15], [12,13,14,15,16], [13,14,15,16]], [[20,21,22,23,24], [21,22,23,24,25], [22,23,24,25,26], [23,24,25,26]]] np.asarray(arr).mean(axis=0) </code></pre> <p>The result would look like this:</p> <pre><code>[[10,11,12,13,14], [11,12,13,14,15], [12,13,14,15,16], [13,14,15,16]] </code></pre>
<p>If you don't mind TensorFlow, you can do this with ragged tensors.</p> <pre><code>&gt;&gt;&gt; arr = tf.ragged.constant(arr)
&gt;&gt;&gt; tf.reduce_mean(arr, axis=0).numpy()
array([array([10., 11., 12., 13., 14.]),
       array([11., 12., 13., 14., 15.]),
       array([12., 13., 14., 15., 16.]),
       array([13., 14., 15., 16.])], dtype=object)
</code></pre> <p>UPD: without TensorFlow:</p> <pre><code>means = [list(m) for m in np.apply_along_axis(
    # preventing numpy from casting to ndarray by converting
    # to iterators and back to lists
    lambda a: (i for i in np.asarray([*a]).mean(axis=0)),
    0, arr
)]
</code></pre> <p>Here we essentially get a 2D numpy array of lists, map each column to an iterator over its mean values, and convert those iterators back to lists. Wrapping the means in iterators is what stops numpy from collapsing the intermediate result into a 2D array along the way.</p>
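<p>Since the rows that line up across the first axis do have equal lengths, a plain <code>zip</code> over that axis (using the original Python list from the question) also works and sidesteps the ragged-array issue entirely:</p> <pre><code>import numpy as np

means = [np.mean(rows, axis=0).tolist() for rows in zip(*arr)]
# [[10.0, 11.0, 12.0, 13.0, 14.0], ..., [13.0, 14.0, 15.0, 16.0]]
</code></pre>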
python|python-3.x|list|numpy
1
4,196
67,944,611
merge_asof with multiple columns and forward direction
<p>I have 2 dataframes:</p> <pre><code>q = pd.DataFrame({'ID':[700,701,701,702,703,703,702],'TX':[0,0,1,0,0,1,1],'REF':[100,120,144,100,103,105,106]})

    ID  TX  REF
0  700   0  100
1  701   0  120
2  701   1  144
3  702   0  100
4  703   0  103
5  703   1  105
6  702   1  106
</code></pre> <p>and</p> <pre><code>p = pd.DataFrame({'ID':[700,701,701,702,703,703,702,708],'REF':[100,121,149,100,108,105,106,109],'NOTE':['A','B','V','V','T','A','L','M']})

    ID  REF NOTE
0  700  100    A
1  701  121    B
2  701  149    V
3  702  100    V
4  703  108    T
5  703  105    A
6  702  106    L
7  708  109    M
</code></pre> <p>I wish to merge p with q in such a way that the IDs are equal AND the REF is an exact match OR higher.</p> <p><strong>Example 1:</strong></p> <pre><code>for p: ID=700 and REF=100
and
for q: ID=700 and REF=100
So that's a clear match!
</code></pre> <p><strong>Example 2</strong> for p:</p> <pre><code>1  701  0  120
2  701  1  144
</code></pre> <p>they would match to:</p> <pre><code>1  701  121  B
2  701  149  V
</code></pre> <p>this way:</p> <pre><code>1  701  0  120  121  B    121 is just after 120
2  701  1  144  149  V    149 comes after 144
</code></pre> <p>When I use the code below (<strong>note: I only indicate the REF, which is wrong; it should be ID AND REF</strong>):</p> <pre><code>p = p.sort_values(by=['REF'])
q = q.sort_values(by=['REF'])
pd.merge_asof(p, q, on='REF', direction='forward').sort_values(by=['ID_x','TX'])
</code></pre> <p>I get this problem:</p> <p><a href="https://i.stack.imgur.com/KPW91.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KPW91.png" alt="enter image description here" /></a></p> <p>My expected result should be something like this:</p> <pre><code>   ID  TX  REF  REF_2 NOTE
0  700   0  100    100    A
1  701   0  120    121    B
2  701   1  144    149    V
3  702   0  100    100    V
4  703   0  103    108    T
5  703   1  105    105    A
6  702   1  106    109    L
</code></pre>
<p>Does this work?</p> <pre><code>pd.merge_asof(q.sort_values(['REF', 'ID']), p.sort_values(['REF', 'ID']), on='REF', direction='forward', by='ID').sort_values('ID') </code></pre> <p>Output:</p> <pre><code> ID TX REF NOTE 0 700 0 100 A 5 701 0 120 B 6 701 1 144 V 1 702 0 100 V 4 702 1 106 L 2 703 0 103 A 3 703 1 105 A </code></pre>
python|pandas|merge
0
4,197
67,905,081
Integer Counter when value in next row changes
<p>I'm having a problem adding a &quot;counter column&quot; to my dataframe.</p> <p>I parsed values from multiple columns into a so-called &quot;merged attribute&quot; column, and now I want to create a counter that increments by 1 whenever the value of that column changes.</p> <p>I have the dataframe below; the last column is the desired column:</p> <pre><code>   Price  UNIQUE  MERGED_ATRIBUTE  COUNTER
0  52.08       1           52.081        1
1  52.08       1           52.081        1
2  52.20       1           52.210        2
3  52.20       1           52.210        2
4  52.20       1           52.210        2
5  52.20       1           52.210        2
6  52.20       1           52.210        2
7  52.20       1           52.210        2
8  70.10       1           70.110        3
</code></pre> <p>How do I achieve this?</p> <p>Thanks a lot!</p>
<p>Try the following:</p> <pre class="lang-py prettyprint-override"><code>df['COUNTER'] = df.groupby('MERGED_ATRIBUTE').ngroup() + 1 </code></pre> <p>This creates a group for each value of <code>MERGED_ATRIBUTE</code> and then uses <a href="https://pandas.pydata.org/pandas-docs/version/1.2.0/reference/api/pandas.core.groupby.GroupBy.ngroup.html" rel="nofollow noreferrer">GroupBy.ngroup</a>:</p> <blockquote> <p>Number each group from 0 to the number of groups - 1.</p> </blockquote> <p>This returns the following DataFrame:</p> <pre><code> Price UNIQUE MERGED_ATRIBUTE COUNTER 0 52.08 1 52.081 1 1 52.08 1 52.081 1 2 52.20 1 52.210 2 3 52.20 1 52.210 2 4 52.20 1 52.210 2 5 52.20 1 52.210 2 6 52.20 1 52.210 2 7 52.20 1 52.210 2 8 70.10 1 70.110 3 </code></pre> <p>Note that this attributes a unique number to each attribute, so this answer differs from that of @sophocles if the <code>MERGED_ATRIBUTE</code> is not sorted:</p> <pre><code>&gt;&gt;&gt; df2 Price UNIQUE MERGED_ATRIBUTE 0 52.08 1 52.081 1 70.10 1 70.110 2 52.08 1 52.081 &gt;&gt;&gt; df2.groupby('MERGED_ATRIBUTE').ngroup() + 1 0 1 1 2 2 1 dtype: int64 &gt;&gt;&gt; df2['MERGED_ATRIBUTE'].ne(df2['MERGED_ATRIBUTE'].shift()).cumsum() 0 1 1 2 2 3 Name: MERGED_ATRIBUTE, dtype: int64 </code></pre>
pandas|cumsum
0
4,198
67,971,947
Best approach to return multiple objs from function
<p>Let's say I have 2 classes, and I want to pass two dfs from class First to the second class. Should I return them and unpack, or use self.df and skip the return, given that I want to use those dfs in more than one function of the other class?</p> <p>I can unpack, but that only works once per program run, and I cannot unpack the same function twice.</p> <pre><code>import pandas as pd

class First:
    def func(self):
        Df = pd.read_csv('my data.csv')
        Df2 = Df.copy()
        return Df, Df2

class Second(First):
    def one(self):
        Df, Df2 = self.func()
        DF1 = ....

    def two(self):
        Df, Df2 = self.func()
        Df2 = ....
</code></pre>
<p>The &quot;best&quot; approach really depends on what you are trying to achieve and what restrictions you are dealing with (<em>and usually has a significant subjective component</em>). <br /> It doesn't seem to be clear whether flexibility or efficiency is more important from the information you have given, so here is how I would handle such a situation.</p> <p>Considering efficiency, my main concern was that every time you call func, you re-read the csv file. From my experience this is rarely necessary, but if you want it as an option, I would do it like this:</p> <pre><code>import pandas as pd class First: def __init__(self, read_now= True): if read_now: self.Df, self.Df2 = self.func() def func(self): Df = pd.read_csv('my data.csv') Df2= Df.copy() return Df,Df2 class Second(First): def one(self, reread= False): if reread: Df,Df2 = self.func() else: Df, Df2 = self.Df, self.Df2 # operation def two(self, reread= False): if reread: Df, Df2 = self.func() else: Df, Df2 = self.Df, self.Df2 # operation </code></pre> <p>By not rereading everytime, you can make it more efficient and modify the dataframe in memory, but you still have the option of easily rereading the dfs if you really need to.</p> <p>But as I said, &quot;best&quot; tends to be ambiguous, especially with limited information.</p>
python|pandas
0
4,199
68,026,764
Sum based on multiple columns with pandas groupby
<p>I want to create a new column that sums up the <em>value</em> column based on groupings of multiple columns. In this example I want to get the sum per <em>ISIN</em>, <em>date</em> and <em>portfolio</em>.</p> <pre><code>df = pd.DataFrame({&quot;ISIN&quot;: [&quot;IS123&quot;, &quot;IS123&quot;, &quot;UN123&quot;, &quot;UN123&quot;, &quot;FA123&quot;], &quot;date&quot;: [&quot;16&quot;, &quot;16&quot;, &quot;18&quot;, &quot;18&quot;, &quot;22&quot;], &quot;portfolio&quot;: [&quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;A&quot;, &quot;D&quot;], &quot;value&quot;: [400, 300, 200, 600, 500]}) </code></pre> <p>Here is the desired output. As you can see, only the first two rows &quot;satisfies&quot; the condition and both rows get the sum of <strong>700</strong>. The others will keep their respective value.</p> <pre><code>df = pd.DataFrame({&quot;ISIN&quot;: [&quot;IS123&quot;, &quot;IS123&quot;, &quot;UN123&quot;, &quot;UN123&quot;, &quot;FA123&quot;], &quot;date&quot;: [&quot;16&quot;, &quot;16&quot;, &quot;18&quot;, &quot;18&quot;, &quot;22&quot;], &quot;portfolio&quot;: [&quot;A&quot;, &quot;A&quot;, &quot;B&quot;, &quot;A&quot;, &quot;D&quot;], &quot;value&quot;: [400, 300, 200, 600, 500], &quot;Sum per ISIN, date and portfolio&quot;: [700, 700, 200, 600, 500]}) </code></pre> <p>Here is what I have tried, but I am only able to make it work with grouping on one column, for example just <em>ISIN</em>.</p> <pre><code>df[&quot;Sum per ISIN, date and portfolio&quot;] = df[&quot;value&quot;].groupby(df[&quot;ISIN&quot;, &quot;date&quot;, &quot;portfolio&quot;]).transform(&quot;sum&quot;) </code></pre>
<p>Try <code>groupby</code> on the DataFrame instead of the Series (<code>value</code>) then select the column from the grouper:</p> <pre><code>df[&quot;Sum per ISIN, date and portfolio&quot;] = ( df.groupby([&quot;ISIN&quot;, &quot;date&quot;, &quot;portfolio&quot;])[&quot;value&quot;].transform(&quot;sum&quot;) ) </code></pre> <pre class="lang-none prettyprint-override"><code> ISIN date portfolio value Sum per ISIN, date and portfolio 0 IS123 16 A 400 700 1 IS123 16 A 300 700 2 UN123 18 B 200 200 3 UN123 18 A 600 600 4 FA123 22 D 500 500 </code></pre>
python|pandas|sum|pandas-groupby
1