Dataset columns (name: dtype, value range or string length range):
Unnamed: 0: int64, 0 to 378k
id: int64, 49.9k to 73.8M
title: string, lengths 15 to 150
question: string, lengths 37 to 64.2k
answer: string, lengths 37 to 44.1k
tags: string, lengths 5 to 106
score: int64, -10 to 5.87k
9,100
66,090,385
Why parameters in pruning increase in tensorflow's tfmot
<p>I was pruning a model and came across the TensorFlow Model Optimization library. Initially we have <a href="https://i.stack.imgur.com/Ovnhq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ovnhq.png" alt="in this image my model have 20410 parameters in total" /></a></p> <p>I trained this model on a default dataset and it gave me an accuracy of 96 percent, which is good. Then I saved the model in a JSON file and its weights in an h5 file. I loaded this model into another script to prune it; after applying pruning and compiling the model I got this model summary <a href="https://i.stack.imgur.com/7h0t7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7h0t7.png" alt="enter image description here" /></a></p> <p>Although the model is pruned well and there is a significant reduction in parameters, the problem is: why did the parameters increase after applying pruning, and why, even after removing the non-trainable parameters, do the pruned and the simple model still have the same number of parameters? Can anyone explain whether this is normal or I am doing something wrong, and why this is happening? Thank you in advance to all of you :)</p>
<p>It is normal. Pruning doesn't change the original model's structure, so it is not meant to reduce the number of parameters.</p> <p>Pruning is a model optimization technique that eliminates rarely used (in other words, unnecessary) values in the weights.</p> <p>The 2nd model summary shows the parameters added for pruning. They are the non-trainable parameters. <strong>Non-trainable parameters stand for masking</strong>. In a nutshell, tensorflow adds non-trainable masks to each of the weights in the network to specify which of the weights should be pruned. The masks consist of 0s and 1s.</p>
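<p>For illustration, a minimal sketch of the usual tfmot workflow (the base <code>model</code> and the training data here are placeholders): the pruning wrapper adds the mask variables you see in the summary, and <code>strip_pruning</code> removes them again once fine-tuning is done.</p> <pre class="lang-py prettyprint-override"><code>import tensorflow_model_optimization as tfmot

# Wrapping the trained Keras model adds non-trainable mask/threshold
# variables per layer, which is why the parameter count goes up.
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(model)
pruned_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# UpdatePruningStep is required while fine-tuning the pruned model.
pruned_model.fit(x_train, y_train, epochs=2,
                 callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Removing the pruning wrappers/masks brings the summary back to the original parameter count.
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
final_model.summary()
</code></pre>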
python|tensorflow|machine-learning|deep-learning|pruning
4
9,101
66,231,771
How to access the second to last row of a csv file using python Pandas?
<p>I want to know if I can access the second to last row of this csv file. I am able to access the very last one using:</p> <pre><code>pd.DataFrame(file1.iloc[-1:,:].values) </code></pre> <p>But I want to know how I can access the one right before the last.</p> <p>Here is the code I have so far:</p> <pre><code>import pandas as pd import csv url1 = r&quot;https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/vaccinations/country_data/Austria.csv&quot; file1 = pd.read_csv(url1) df1 = pd.DataFrame(file1.iloc[:,:].values) df1 = pd.DataFrame(file1.iloc[-1:,:].values) Austria_date = df1.iloc[:,1] Austria_cum = df1.iloc[:, 4].map('{:,}'.format) if ( Austria_cum.iloc[0] == 'nan' ): </code></pre> <p>Essentially I am checking if the row at that specific col is 'nan', which is True, after which I want to get the data from the row right before the last. Please, how would this be done?</p> <p>Thank you</p>
<p>As simple as that :</p> <pre class="lang-py prettyprint-override"><code>df1.iloc[-2,:] </code></pre>
python|pandas|dataframe|csv
1
9,102
52,648,644
Filter outliers in DataFrame rows based on a recursive time-interval
<p>I have the following DataFrame <code>df</code>:</p> <pre><code>ds y 2018-10-01 00:00 1.23 2018-10-01 01:00 2.21 2018-10-01 02:00 6.40 ... ... 2018-10-02 00:00 3.21 2018-10-02 01:00 3.42 2018-10-03 02:00 2.99 ... ... </code></pre> <p>That means that I have one value for <code>y</code> per each hour. I would like to filter the rows so that the values which are not inside the 6-sigma interval (3*std, -3*std) are dropped.</p> <p>I'm able to do this for the entire DataFrame this way:</p> <pre><code>df = df[np.abs(df.y-df.y.mean()) &lt;= (3*df.y.std())] </code></pre> <p>But I would like to do this in a per-day basis.</p> <p>Please note that <code>ds</code> is a <code>datetime64[ns]</code> and <code>y</code> a <code>float64</code>.</p> <p>Also, since my ultimate goal is to exclude outliers from data, can you suggest other viable options to accomplish this?</p>
<p>Try this:</p> <pre><code>g = df.groupby(df.index.floor('D'))['y'] df[(np.abs(df.y - g.transform('mean')) &lt;= (3*g.transform('std')))] </code></pre>
python|pandas|datetime64
0
9,103
46,385,999
Transform an image to a bitmap
<p>I'm trying to create something like a bitmap for an image of a letter, but I'm not getting the desired result. It's been only a few days since I started working with images. I tried to read the image, create a numpy array of it and save the content in a file. I wrote the code below:</p> <pre><code>import numpy as np from skimage import io from skimage.transform import resize image = io.imread(image_path, as_grey=True) image = resize(image, (28, 28), mode='nearest') array = np.array(image) np.savetxt("file.txt", array, fmt="%d") </code></pre> <p>I'm trying to use images like the one in this link below:</p> <p><a href="https://imgur.com/Yodhh7I" rel="noreferrer">Letter "e"</a></p> <p>I was trying to create an array of 0's and 1's, where the 0's represent the white pixels and the 1's represent the black pixels. Then when I save the result in a file I can see the shape of the letter.</p> <p>Can anyone guide me on how to get this result?</p> <p>Thank you.</p>
<p>Check this one out:</p> <pre><code>from PIL import Image import numpy as np img = Image.open('road.jpg') ary = np.array(img) # Split the three channels r,g,b = np.split(ary,3,axis=2) r=r.reshape(-1) g=g.reshape(-1) b=b.reshape(-1) # Standard RGB to grayscale bitmap = list(map(lambda x: 0.299*x[0]+0.587*x[1]+0.114*x[2], zip(r,g,b))) bitmap = np.array(bitmap).reshape([ary.shape[0], ary.shape[1]]) bitmap = np.dot((bitmap &gt; 128).astype(float),255) im = Image.fromarray(bitmap.astype(np.uint8)) im.save('road.bmp') </code></pre> <p>The program takes an RGB image and converts it into a numpy array. It then splits it into 3 vectors, one for each channel. It uses the color vectors to create a gray vector. After that it compares each element with 128: if it is lower it writes 0 (black), else 255. The next step is to reshape and save.</p> <p><a href="https://i.stack.imgur.com/OYCN8.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/OYCN8.jpg" alt="road.jpg"></a> <a href="https://i.stack.imgur.com/cG1Wa.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cG1Wa.png" alt="road.bmp"></a></p>
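<p>A shorter alternative sketch (the file name is a placeholder): let PIL do the grayscale conversion, then threshold to a 0/1 array with 1 for dark pixels and save it as text, which is closer to what the question asks for.</p> <pre class="lang-py prettyprint-override"><code>from PIL import Image
import numpy as np

# Load, convert to grayscale and resize with PIL
img = Image.open('letter_e.png').convert('L').resize((28, 28))

# 1 where the pixel is dark (the letter), 0 where it is light (background)
bitmap = (np.array(img) &lt; 128).astype(int)

np.savetxt('file.txt', bitmap, fmt='%d')
</code></pre>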
python|numpy|computer-vision|scikit-image
14
9,104
58,305,385
Rolling mean with start date happens before
<p>This code gives me the rolling mean from 90d before up to today: <code>df.rolling('90d', on='Date')['quantity'].mean()</code>. What I want now is the mean from 90d before up to 30d before. How can I achieve that?</p>
<p>I would roll twice with <code>sum</code> and <code>count</code>:</p> <pre><code>roll90 = df.rolling('90d').quantity.agg({'sum','count'}) # you may want roll29 instead of roll30 roll30 = df.rolling('30d').quantity.agg({'sum','count'}) roll = roll90 - roll30 roll['mean'] = roll['sum']/roll['count'] </code></pre>
python|python-3.x|pandas
2
9,105
58,570,928
Check pandas for NaN and differing types
<p>In order to check whether a pandas DataFrame contains missing/nan values, one can use the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.isnull.html#targetText=Detect%20missing%20values%20for%20an,arrays%2C%20NaT%20in%20datetimelike" rel="nofollow noreferrer">isnull</a> function.</p> <pre><code>test_pandas = pd.DataFrame([[np.float(3),np.float(1),np.float(4.3)],[np.float(5.8),np.nan,[1,2,3]]],columns = ['A','B','C']) value = test_pandas.isnull().values.any() test_pandas.head() </code></pre> <p>gives</p> <pre><code> A B C 0 3.0 1.0 4.3 1 5.8 NaN [1, 2, 3] </code></pre> <p>and with</p> <pre><code>print("There exists a nan value in the dataframe: ",test_pandas.isnull().values.any()) print("Number of nan values: ",test_pandas.isnull().sum().sum()) </code></pre> <p>we find</p> <pre><code>There exists a nan value in the dataframe: True Number of nan values: 1 </code></pre> <ul> <li>How do I check in the same way whether there are values in a pandas dataframe whose type is not, e.g., a float? I would like to detect that there is an entry of type list among the float values.</li> </ul>
<p>You could use a custom function with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.applymap.html" rel="nofollow noreferrer">applymap</a>:</p> <pre><code>def isfloat(x): return isinstance(x, float) print(df.applymap(isfloat)) </code></pre> <p><strong>Output</strong></p> <pre><code> A B C 0 True True True 1 True True False </code></pre>
python-3.x|pandas
3
9,106
58,236,245
Where does Ray.Tune create the model vs implementing the perturbed hyperparameters
<p>I am new to using ray.tune. I already have my network written in a modular format and now I am trying to incorporate ray.tune, but I do not know where to initialize the model (vs updating the perturbed hyperparameters) so that the model and the weights are not re-initialized when a worker is truncated and replaced by a better performing worker.</p> <h2>Background</h2> <p>I am using the PBT scheduler of ray.tune which creates num_samples number of models (workers) each of which are initialized with a different set of sampled hyperparameters. When a model is evaluated, if it is performing poorly, it will be stopped and load the checkpoint of one of the top performing workers. Once it is loaded (this is a deep copy of the network), the hyperparameters are perturbed and then it will train until the next evaluation.<br> The MyTrainable class should have a _setup, _train, _save, and _restore function. The setup calls for a config variable and this is where the newly sampled hyperparameters are implemented. </p> <p>My question is where should be the original model be defined? I can easily implement the updated HPs in this section. But I have not seen anywhere in the documentation where I can pass a pre-defined model into the ray.tune.run function. If I keep the create_model() function in the _setup() though, it will eliminate the previously trained weights which is part of the benefit of this method.</p> <h2>Code</h2> <p>Here are the 3 functions I have: </p> <pre class="lang-py prettyprint-override"><code>self._hyperparameters(config) # redefines the self.opt options accoring to the new perturbations self.model.update_optimizer(self.opt) # redefines the optimizers using the new learning rates and the beta values for Adam self.model = create_model(self.opt) # Original function that defines the initial model and initializes the weights </code></pre>
<p><code>create_model</code> should be called in <code>_setup</code>. <code>_restore</code> will be called after <code>_setup</code>, and in <code>_restore</code> the model should be updated with the weights stored in the checkpoint.</p>
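<p>For illustration, a rough sketch with the underscore-prefixed Trainable API from the question; <code>create_model</code> and <code>_hyperparameters</code> are the question's own functions, while the training and save/load helpers here are just placeholders.</p> <pre class="lang-py prettyprint-override"><code>import os
from ray import tune

class MyTrainable(tune.Trainable):
    def _setup(self, config):
        # Build the model once per worker; only the hyperparameters come from config.
        self._hyperparameters(config)          # placeholder from the question: sets self.opt
        self.model = create_model(self.opt)    # weights are initialized here only

    def _train(self):
        return train_one_epoch(self.model)     # placeholder: returns a metrics dict

    def _save(self, checkpoint_dir):
        path = os.path.join(checkpoint_dir, 'model.pth')
        save_weights(self.model, path)         # placeholder
        return path

    def _restore(self, checkpoint_path):
        # Called after _setup when a worker is exploited/cloned: loading the copied
        # checkpoint overwrites the freshly built weights with the trained ones.
        load_weights(self.model, checkpoint_path)  # placeholder
</code></pre>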
python|pytorch|hyperparameters|ray
1
9,107
58,558,172
Python/Numpy optimisation
<p>I have a simple python class which is approx 40 lines of calculations, given thereafter with a use case exemple, that perform a simple computation (independence testing based on L2 distance between densities), and it takes a lot of time to compute with only 100 points and 100 boostrap. Here is the code and some data to test it : </p> <pre class="lang-py prettyprint-override"><code>import numpy as np class IndependenceTesting: def __init__(self,data,a,b,dim_to_test,number_of_simulation = 1000): # rescale the data : self.i = dim_to_test self.data = (data - a)/(b-a) self.d = self.data.shape[1] self.n = self.data.shape[0] self.N = number_of_simulation self.data_restricted = np.hstack((self.data[:,:(self.i-1)],self.data[:,(self.i+1):])) self.emp_cop_restricted = np.array([np.mean(np.array([np.sum(dat &lt;= u) for dat in self.data_restricted]) == self.d - 1) for u in self.data_restricted]) def simulated_dataset(self): unif = np.random.uniform(size=(self.n,1)) before = [x for x in range(self.d) if x &lt; self.i] after = [x for x in range(self.d) if x &gt; self.i] return np.hstack((self.data[:,before],unif,self.data[:,after])) def mse(self, data): emp_cop = np.array([np.mean(np.array([np.sum(dat &lt;= u) for dat in data]) == self.d) for u in self.data]) return ((emp_cop - self.emp_cop_restricted)**2).mean() def mse_distribution(self): return np.array([self.mse(self.simulated_dataset()) for i in np.arange(self.N)]) def mse_observed(self): return self.mse(self.data) def quantile(self): return np.mean(self.mse_distribution() &lt; self.mse_observed()) def p_value(self): return 1 - self.quantile() # clayton copula exemple points = np.array([[0.56129339, 0.99710045, 0.57646982], [0.12256328, 0.17201513, 0.12885428], [0.08511945, 0.11828913, 0.10965346], [0.98324131, 0.95728269, 0.92776529], [0.54921155, 0.39785825, 0.31361901], [0.99487892, 0.63916092, 0.79895483], [0.50433754, 0.56999504, 0.60091257], [0.92823054, 0.93214344, 0.89725172], [0.17751366, 0.18346635, 0.20097246], [0.51466364, 0.63436169, 0.46611089], [0.25800664, 0.28831929, 0.2903953 ], [0.20481173, 0.15871781, 0.15857803], [0.31187595, 0.24635342, 0.24171054], [0.93662273, 0.80126302, 0.90160681], [0.34507788, 0.28888433, 0.30064778], [0.81832302, 0.84296836, 0.73139211], [0.90751759, 0.74184158, 0.60553314], [0.38432821, 0.28571601, 0.22660958], [0.47439066, 0.71614234, 0.54718021], [0.19106315, 0.31102177, 0.18200903], [0.38445433, 0.53108707, 0.35387428], [0.77625631, 0.98215295, 0.7751224 ], [0.52178207, 0.60999481, 0.45028018], [0.2446548 , 0.22270593, 0.30778265], [0.62656838, 0.68516045, 0.49434858], [0.04573006, 0.03194788, 0.04361497], [0.09852491, 0.09004012, 0.08412001], [0.11361961, 0.10879038, 0.11352351], [0.86116076, 0.92607349, 0.98481143], [0.47235565, 0.89094039, 0.52014104], [0.32994434, 0.38757998, 0.48919507], [0.0052988 , 0.00701797, 0.00637456], [0.6230293 , 0.48457337, 0.73184841], [0.8039672 , 0.78400854, 0.76272398], [0.4585257 , 0.64504907, 0.42333538], [0.86565877, 0.89902376, 0.75903263], [0.96763817, 0.883972 , 0.99965508], [0.72431971, 0.86391135, 0.73501178], [0.99153281, 0.98536847, 0.93416086], [0.11746542, 0.1142617 , 0.09463402], [0.86322008, 0.79150614, 0.48112103], [0.031247 , 0.03196072, 0.02701867], [0.44120581, 0.48729271, 0.4607829 ], [0.01393345, 0.01400763, 0.01567294], [0.24365903, 0.20966226, 0.218757 ], [0.94584172, 0.94507558, 0.98623726], [0.79201305, 0.65503713, 0.79137242], [0.06040952, 0.04573984, 0.04640926], [0.5673345 , 0.27567432, 0.35234249], [0.15860006, 0.12212839, 
0.15206467], [0.00826576, 0.00407989, 0.00479213], [0.72549979, 0.70557491, 0.60543315], [0.83039818, 0.76500639, 0.89549151], [0.6844257 , 0.81317716, 0.74480599], [0.36904583, 0.41081094, 0.36072341], [0.14211919, 0.14508685, 0.11253501], [0.85139993, 0.86351303, 0.9571894 ], [0.72638876, 0.92343587, 0.67884759], [0.26816568, 0.22169953, 0.28666315], [0.04672121, 0.06183976, 0.09154045], [0.81235354, 0.61478793, 0.76379907], [0.3562006 , 0.2863009 , 0.31200338], [0.42761726, 0.40890689, 0.53401233], [0.66337324, 0.96621491, 0.86041736], [0.55199335, 0.49320256, 0.43633604], [0.80474216, 0.72338883, 0.80206245], [0.10724037, 0.11511572, 0.09207419], [0.36170945, 0.21664901, 0.20827803], [0.9831956 , 0.93518925, 0.89061586], [0.10740562, 0.10503344, 0.12320474], [0.67589713, 0.65032996, 0.69570242], [0.07020206, 0.04963921, 0.06650148], [0.4841555 , 0.68809898, 0.65333047], [0.60416479, 0.74849448, 0.90509825], [0.59250114, 0.71818894, 0.52021291], [0.64724464, 0.91296217, 0.96050912], [0.75206371, 0.83658298, 0.74361849], [0.7338096 , 0.58894243, 0.68243507], [0.63778258, 0.79158918, 0.69136578], [0.73200902, 0.91405125, 0.81908408], [0.15349378, 0.19096759, 0.18099441], [0.53616182, 0.51364115, 0.49836299], [0.60663723, 0.66756579, 0.66600087], [0.72565001, 0.84115262, 0.76362573], [0.65200849, 0.86601501, 0.80996763], [0.02593363, 0.03604641, 0.05726403], [0.39141485, 0.31616432, 0.36365569], [0.64372213, 0.53823589, 0.88647631], [0.79079997, 0.74427728, 0.67554193], [0.07105107, 0.08504079, 0.09113675], [0.82765688, 0.7680246 , 0.93645974], [0.42258547, 0.46685121, 0.46316008], [0.08749291, 0.09122353, 0.10884091], [0.93644383, 0.81629942, 0.70997887], [0.92635455, 0.95107457, 0.99150588], [0.05725108, 0.03565845, 0.03288627], [0.11064689, 0.11070949, 0.11499569], [0.93098314, 0.98552576, 0.93522353], [0.91617665, 0.8137873 , 0.71928403], [0.93477362, 0.87389527, 0.87646188]]) points[:,2] = 1 - points[:,2] points = np.concatenate((points,np.array(np.random.uniform(size=points.shape[0])).reshape((points.shape[0],1))),axis=1) p_values = [IndependenceTesting(points,np.repeat(0,4),np.repeat(1,4),dim_to_test = i,number_of_simulation= 100).p_value() for i in np.arange(points.shape[1])] print(p_values) </code></pre> <p>The line that takes the most evaluation time is probably the computation of <code>emp_cop</code> in the <code>mse</code> function. </p> <p>Do you think this code can be optimised ? I'm fairly new to python.</p> <p>Thanks ! </p>
<p>Generally, list comprehensions in python are slow and <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow noreferrer">broadcasting</a> is much more efficient. In your specific case, <code>np.array([np.sum(dat &lt;= u) for dat in data])</code> takes a lot of time and can be replaced by <code>np.sum(data &lt;= u, axis=1)</code>. In my experiments, that gives a significant speed up. You can likely further improve the performance by replacing additional list comprehensions with broadcast statements. </p> <pre><code># Before 6.9 s ± 103 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) # After 289 ms ± 19.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) </code></pre>
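<p>For example, the <code>mse</code> method could be rewritten roughly like this (same logic, with the inner comprehension replaced by a broadcasted comparison):</p> <pre class="lang-py prettyprint-override"><code>def mse(self, data):
    # np.sum(data &lt;= u, axis=1) counts matching dimensions for every row at once
    emp_cop = np.array([np.mean(np.sum(data &lt;= u, axis=1) == self.d) for u in self.data])
    # The outer loop can be removed as well, e.g.:
    # emp_cop = np.mean(np.all(data[None, :, :] &lt;= self.data[:, None, :], axis=2), axis=1)
    return ((emp_cop - self.emp_cop_restricted) ** 2).mean()
</code></pre>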
python|numpy|micro-optimization
2
9,108
69,048,791
align two pandas dataframes on values in one column, otherwise insert NA to match row number
<p>I have two pandas DataFrames (<code>df1</code>, <code>df2</code>) with a different number of rows and columns and some matching values in a specific column in each <code>df</code>, with caveats (1) there are some unique values in each <code>df</code>, and (2) there are different numbers of matching values across the DataFrames.</p> <p>Baby example:</p> <pre><code>df1 = pd.DataFrame({'id1': [1, 1, 1, 2, 2, 3, 3, 3, 3, 4, 6, 6]}) df2 = pd.DataFrame({'id2': [1, 1, 2, 2, 2, 2, 3, 4, 5], 'var1': ['B', 'B', 'W', 'W', 'W', 'W', 'H', 'B', 'A']}) </code></pre> <p>What I am seeking to do is create <code>df3</code> where <code>df2['id2']</code> is aligned/indexed to <code>df1['id1']</code>, such that:</p> <ol> <li><code>NaN</code> is added to <code>df3[id2]</code> when <code>df2[id2]</code> has fewer (or missing) matches to <code>df1[id1]</code></li> <li><code>NaN</code> is added to <code>df3[id2]</code> &amp; <code>df3[var1]</code> if <code>df1[id1]</code> exists but has no match to <code>df2[id2]</code></li> <li><code>'var1'</code> is filled in for all cases of <code>df3[var1]</code> where <code>df1[id1]</code> and <code>df2[id2]</code> match</li> <li>rows are dropped when <code>df2[id2]</code> has more matching values than <code>df1[id1]</code> (or no matches at all)</li> </ol> <p>The resulting DataFrame (<code>df3</code>) should look as follows (Notice id2 = 5 and var1 = A are gone):</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">id1</th> <th style="text-align: center;">id2</th> <th style="text-align: center;">var1</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">1</td> <td style="text-align: center;">B</td> </tr> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">1</td> <td style="text-align: center;">B</td> </tr> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">B</td> </tr> <tr> <td style="text-align: center;">2</td> <td style="text-align: center;">2</td> <td style="text-align: center;">W</td> </tr> <tr> <td style="text-align: center;">2</td> <td style="text-align: center;">2</td> <td style="text-align: center;">W</td> </tr> <tr> <td style="text-align: center;">3</td> <td style="text-align: center;">3</td> <td style="text-align: center;">H</td> </tr> <tr> <td style="text-align: center;">3</td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">H</td> </tr> <tr> <td style="text-align: center;">3</td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">H</td> </tr> <tr> <td style="text-align: center;">3</td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">H</td> </tr> <tr> <td style="text-align: center;">4</td> <td style="text-align: center;">4</td> <td style="text-align: center;">B</td> </tr> <tr> <td style="text-align: center;">6</td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">NaN</td> </tr> <tr> <td style="text-align: center;">6</td> <td style="text-align: center;">NaN</td> <td style="text-align: center;">NaN</td> </tr> </tbody> </table> </div> <p>I cannot find a combination of merge/join/concatenate/align that correctly solves this problem. 
Currently, everything I have tried stacks the rows in sequence without adding <code>NaN</code> in the proper cells/rows and instead adds all the <code>NaN</code> values at the bottom of <code>df3</code> (so <code>id1</code> and <code>id2</code> never align). Any help is greatly appreciated!</p>
<p>You can first assign a helper column for <code>id1</code> and <code>id2</code> based on <code>groupby.cumcount</code>, then merge. Finally <code>ffill</code> values of <code>var1</code> based on the group <code>id1</code></p> <pre><code>def helper(data,col): return data.groupby(col).cumcount() out = df1.assign(k = helper(df1,['id1'])).merge(df2.assign(k = helper(df2,['id2'])), left_on=['id1','k'],right_on=['id2','k'] ,how='left').drop('k',1) out['var1'] = out['id1'].map(dict(df2[['id2','var1']].drop_duplicates().to_numpy())) </code></pre> <p>Or similar but without assign as <a href="https://stackoverflow.com/users/15497888/henry-ecker">HenryEcker</a> suggests :</p> <pre><code>out = df1.merge(df2, left_on=['id1', helper(df1, ['id1'])], right_on=['id2', helper(df2, ['id2'])], how='left').drop(columns='key_1') out['var1'] = out['id1'].map(dict(df2[['id2','var1']].drop_duplicates().to_numpy())) </code></pre> <hr /> <pre><code>print(out) id1 id2 var1 0 1 1.0 B 1 1 1.0 B 2 1 NaN B 3 2 2.0 W 4 2 2.0 W 5 3 3.0 H 6 3 NaN H 7 3 NaN H 8 3 NaN H 9 4 4.0 B 10 6 NaN NaN 11 6 NaN NaN </code></pre>
python|pandas
2
9,109
68,996,893
Python Pandas Fast Way to Divide Row Value by Previous Value
<p>I want to calculate daily bond returns from clean prices based on the logarithm of the bond price in t divided by the bond price in t-1. So far, I calculate it like this:</p> <pre><code>import pandas as pd import numpy as np #create example data col1 = np.random.randint(0,10,size=10) df = pd.DataFrame() df[&quot;col1&quot;] = col1 df[&quot;result&quot;] = [0]*len(df) #slow computation for i in range(len(df)): if i == 0: df[&quot;result&quot;][i] = np.nan else: df[&quot;result&quot;][i] = np.log(df[&quot;col1&quot;][i]/df[&quot;col1&quot;][i-1]) </code></pre> <p>However, since I have a large sample this takes a lot of time to compute. Is there a way to improve the code in order to make it faster?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.shift.html" rel="nofollow noreferrer"><code>Series.shift</code></a> by <code>col1</code> column with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.div.html" rel="nofollow noreferrer"><code>Series.div</code></a> for division:</p> <pre><code>df[&quot;result1&quot;] = np.log(df[&quot;col1&quot;].div(df[&quot;col1&quot;].shift())) #alternative #df[&quot;result1&quot;] = np.log(df[&quot;col1&quot;] / df[&quot;col1&quot;].shift()) print (df) col1 result result1 0 5 NaN NaN 1 0 -inf -inf 2 3 inf inf 3 3 0.000000 0.000000 4 7 0.847298 0.847298 5 9 0.251314 0.251314 6 3 -1.098612 -1.098612 7 5 0.510826 0.510826 8 2 -0.916291 -0.916291 9 4 0.693147 0.693147 </code></pre> <p>I test both solutions:</p> <pre><code>np.random.seed(0) col1 = np.random.randint(0,10,size=10000) df = pd.DataFrame({'col1':col1}) In [128]: %timeit df[&quot;result1&quot;] = np.log(df[&quot;col1&quot;] / df[&quot;col1&quot;].shift()) 865 µs ± 139 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) In [129]: %timeit df.assign(result=lambda x: np.log(x.col1.pct_change() + 1)) 1.16 ms ± 11.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) In [130]: %timeit df[&quot;result1&quot;] = np.log(df[&quot;col1&quot;].pct_change() + 1) 1.03 ms ± 14.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) </code></pre> <hr /> <pre><code>np.random.seed(0) col1 = np.random.randint(0,10,size=100000) df = pd.DataFrame({'col1':col1}) In [132]: %timeit df[&quot;result1&quot;] = np.log(df[&quot;col1&quot;] / df[&quot;col1&quot;].shift()) 3.7 ms ± 189 µs per loop (mean ± std. dev. of 7 runs, 1 loop each) In [133]: %timeit df.assign(result=lambda x: np.log(x.col1.pct_change() + 1)) 6.31 ms ± 545 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [134]: %timeit df[&quot;result1&quot;] = np.log(df[&quot;col1&quot;].pct_change() + 1) 3.75 ms ± 269 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) </code></pre>
python|pandas|dataframe
2
9,110
69,089,597
Check pandas df2.colA for occurrences of df1.id and write (df2.colB, df2.colC) into df1.colAB
<p>I have two pandas <code>df</code> and they do not have the same length. <code>df1</code> has unique id's in column <code>id</code>. These id's occur (multiple times) in <code>df2.colA</code>. I'd like to add a list of all occurrences of <code>df1.id</code> in <code>df2.colA</code> (and another column at the matching index of <code>df1.id == df2.colA</code>) into a new column in <code>df1</code>. Either with the index of <code>df2.colA</code> of the match or additionally with other row entries of all matches.</p> <p>Example:</p> <pre><code>df1.id = [1, 2, 3, 4] df2.colA = [3, 4, 4, 2, 1, 1] df2.colB = [5, 9, 6, 5, 8, 7] </code></pre> <p>So that my operation creates something like:</p> <pre><code>df1.colAB = [ [[1,8],[1,7]], [[2,5]], [[3,5]], [[4,9],[4,6]] ] </code></pre> <p>I've tried a bunch of approaches with mapping, looping explicitly (super slow), checking with <code>isin</code> etc.</p>
<p>You could use Pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html#pandas-dataframe-apply" rel="nofollow noreferrer"><code>apply</code></a> to iterate over each row of <code>df1</code> value while creating a list with all the indices in <code>df2.colA</code>. This can be achieved by using Pandas <code>index</code> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.loc.html#pandas-dataframe-loc" rel="nofollow noreferrer"><code>loc</code></a> over the <code>df2.colB</code> to create a list with all the indices in <code>df2.colA</code> that match the row in <code>df1.id</code>. Then, within the <code>apply</code> itself use a for-loop to create the list of matched values.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd # setup df1 = pd.DataFrame({'id':[1,2,3,4]}) print(df1) df2 = pd.DataFrame({ 'colA' : [3, 4, 4, 2, 1, 1], 'colB' : [5, 9, 6, 5, 8, 7] }) print(df2) #code df1['colAB'] = df1['id'].apply(lambda row: [[row, idx] for idx in df2.loc[df2[df2.colA == row].index,'colB']]) print(df1) </code></pre> <p>Output from <em>df1</em></p> <pre class="lang-py prettyprint-override"><code> id colAB 0 1 [[1, 8], [1, 7]] 1 2 [[2, 5]] 2 3 [[3, 5]] 3 4 [[4, 9], [4, 6]] </code></pre>
python|pandas|dataframe|merge|matching
1
9,111
69,246,853
How to split comma separated cell values in a list in pandas?
<p>I hope you are doing well!</p> <p>Code:</p> <pre><code>df = pd.read_excel('Grade.xlsx') df = df.loc[df['Current year'] ==&quot;Final Year&quot;] Num = df['Available Number'].values </code></pre> <p>Sample data:</p> <pre><code>Sr no. Name Current Year City Available Number 1 joe First Year NY 125,869,589,852 2 mike Final Year MI 586 3 Ross Final Year NY 589,639,741 4 juli Second Year NY 869,253 </code></pre> <p>Now my code copied the values &quot;586&quot; (row 2) and &quot;589,639,741&quot; (row 3) into a variable. But I want to convert those values into lists (lists of integers) and then later iterate over them in a for loop.</p> <p>I want something like this:</p> <pre><code>list1 = [586] list2 = [589,639,741] </code></pre> <p>I don't know how to separate those values and convert them into lists. Can anyone here help me? I started learning pandas recently. Thanks in advance.</p>
<p>Use a loop to convert the strings into list of int:</p> <pre><code>df['Available Number'] = df['Available Number'] \ .apply(lambda x: [int(i) for i in x.split(',')]) </code></pre> <p>Output:</p> <pre><code> Sr no. Name Current Year City Available Number 0 1 joe First Year NY [125, 869, 589, 852] 1 2 mike Final Year MI [586] 2 3 Ross Final Year NY [589, 639, 741] 3 4 juli Second Year NY [869, 253] </code></pre>
python|pandas|split
-1
9,112
68,885,217
How can I add a resizing by scale layer to a model in tensorflow or keras
<p>How can I add a resize-by-scale layer to a model using tensorflow or keras (not by fixed output dimensions)? For example, I want to resize an image of shape (100, 100, 3) by an upscale factor of 2, so the output shape of that layer will be (200, 200, 3). The resizing layer should support interpolation methods like (&quot;bilinear&quot;, &quot;nearest&quot;, &quot;bicubic&quot;, &quot;area&quot;). Thank you.</p>
<p>You can't really be dynamic about image shapes within a dataset. To generate high speed execution on GPU, your images need to be fixed size.</p> <p>That said, if all your images are a certain size within a dataset, but you want your model to generalize to different datasets, each with a different image size, you can just use an upsampling layer.</p> <p>If you really need dynamic images within a dataset, and you know the largest image, you could center-pad all images until they match the correct size.</p> <p><a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/UpSampling2D" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/layers/UpSampling2D</a>.</p>
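<p>For example, a minimal sketch with a fixed upscale factor of 2 (the input shape is just an example):</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(100, 100, 3)),
    # scale factor 2 in both spatial dimensions gives (200, 200, 3)
    tf.keras.layers.UpSampling2D(size=(2, 2), interpolation='bilinear'),
])
model.summary()  # output shape: (None, 200, 200, 3)
</code></pre>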
python|tensorflow|keras|interpolation|resize-image
-1
9,113
44,662,406
Tensorflow exhausted resource
<p>I've wrote a Tensorflow program, that reads <code>128x128</code> images. The program runs kind of OK on my laptop,which I use to check if the code is ok. The 1st programm is bases on MNIST Tutorial , the 2nd ist using MNIST example for convNN. when I try to run them on GPU, I get the following error message: </p> <pre><code>ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[16384,20000] [[Node: inputLayer_1/weights/Variable/Adam_1/Assign = Assign[T=DT_FLOAT, _class=["loc:@inputLayer_1/weights/Variable"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/gpu:0"](inputLayer_1/weights/Variable/Adam_1, inputLayer_1/weights/Variable/Adam_1/Initializer/Const)]] </code></pre> <p>from what I've been reading online. I have to use batches in my Testing, and here 's how feeding is working:</p> <pre><code>........................................... batchSize = 40 img_height = 128 img_width = 128 # 1st function to read images form TF_Record def getImage(filename): # convert filenames to a queue for an input pipeline. filenameQ = tf.train.string_input_producer([filename],num_epochs=None) # object to read records recordReader = tf.TFRecordReader() # read the full set of features for a single example key, fullExample = recordReader.read(filenameQ) # parse the full example into its' component features. features = tf.parse_single_example( fullExample, features={ 'image/height': tf.FixedLenFeature([], tf.int64), 'image/width': tf.FixedLenFeature([], tf.int64), 'image/colorspace': tf.FixedLenFeature([], dtype=tf.string,default_value=''), 'image/channels': tf.FixedLenFeature([], tf.int64), 'image/class/label': tf.FixedLenFeature([],tf.int64), 'image/class/text': tf.FixedLenFeature([], dtype=tf.string,default_value=''), 'image/format': tf.FixedLenFeature([], dtype=tf.string,default_value=''), 'image/filename': tf.FixedLenFeature([], dtype=tf.string,default_value=''), 'image/encoded': tf.FixedLenFeature([], dtype=tf.string, default_value='') }) # now we are going to manipulate the label and image features label = features['image/class/label'] image_buffer = features['image/encoded'] # Decode the jpeg with tf.name_scope('decode_jpeg',[image_buffer], None): # decode image = tf.image.decode_jpeg(image_buffer, channels=3) # and convert to single precision data type image = tf.image.convert_image_dtype(image, dtype=tf.float32) # cast image into a single array, where each element corresponds to the greyscale # value of a single pixel. # the "1-.." part inverts the image, so that the background is black. image=tf.reshape(1-tf.image.rgb_to_grayscale(image),[img_height*img_width]) # re-define label as a "one-hot" vector # it will be [0,1] or [1,0] here. # This approach can easily be extended to more classes. 
label=tf.stack(tf.one_hot(label-1, numberOFclasses)) return label, image train_img,train_label = getImage(TF_Records+"/train-00000-of-00001") validation_img,validation_label=getImage(TF_Records+"/validation-00000-of-00001") # associate the "label_batch" and "image_batch" objects with a randomly selected batch--- # of labels and images respectively train_imageBatch, train_labelBatch = tf.train.shuffle_batch([train_img, train_label], batch_size=batchSize,capacity=50,min_after_dequeue=10) # and similarly for the validation data validation_imageBatch, validation_labelBatch = tf.train.shuffle_batch([validation_img, validation_label], batch_size=batchSize,capacity=50,min_after_dequeue=10) </code></pre> <p>........................................................</p> <pre><code> sess.run(tf.global_variables_initializer()) # start the threads used for reading files coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(sess=sess,coord=coord) # feeding function def feed_dict(train): if True : #img_batch, labels_batch= tf.train.shuffle_batch([train_label,train_img],batch_size=batchSize,capacity=500,min_after_dequeue=200) img_batch , labels_batch = sess.run([ train_labelBatch ,train_imageBatch]) dropoutValue = 0.7 else: # img_batch,labels_batch = tf.train.shuffle_batch([validation_label,validation_img],batch_size=batchSize,capacity=500,min_after_dequeue=200) img_batch,labels_batch = sess.run([ validation_labelBatch,validation_imageBatch]) dropoutValue = 1 return {x:img_batch,y_:labels_batch,keep_prob:dropoutValue} for i in range(max_numberofiteretion): if i%10 == 0:#Run a Test summary, acc = sess.run([merged,accuracy],feed_dict=feed_dict(False)) test_writer.add_summary(summary,i)# Save to TensorBoard else: # Training if i % 100 == 99: # Record execution stats run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE) run_metadata = tf.RunMetadata() summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True), options=run_options, run_metadata=run_metadata) train_writer.add_run_metadata(run_metadata, 'step%03d' % i) train_writer.add_summary(summary, i) print('Adding run metadata for', i) else: # Record a summary summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True)) train_writer.add_summary(summary, i) # finalise coord.request_stop() coord.join(threads) train_writer.close() test_writer.close() </code></pre> <p>..................................................</p> <p>The validation folder contained 2100 files, so yes I understand that's too much, </p> <p>I found this <a href="https://github.com/tensorflow/tensorflow/issues/136" rel="nofollow noreferrer">suggestion</a> </p> <pre><code>config = tf.ConfigProto() config.gpu_options.allocator_type = 'BFC' with tf.Session(config = config) as s:...... </code></pre> <p>but this didn't solved the issue! any idea how may I solve this ?</p>
<p>The problem seems to be that everything in the graph is done on GPU. You should use the CPU resources for preprocessing functions and the rest of the graph on GPU. So make the input processing functions like getImage() and queues to be run on CPU instead of GPU. Basically when GPU is working on tensors the CPU should be filling the input pipeline queues, so both CPU and GPU are efficiently used. This is explained in the tensorflow performance Guide :</p> <blockquote> <p>Placing preprocessing on the CPU can result in a 6X+ increase in samples/sec processed, which could lead to training in 1/6th of the time. <a href="https://www.tensorflow.org/performance/performance_guide" rel="nofollow noreferrer">https://www.tensorflow.org/performance/performance_guide</a></p> </blockquote> <p>For example you can create a function get_batch to be run on CPU like this:</p> <pre><code>def get_batch(dataset): with tf.device('/cpu:0'): 'File Name Queue' 'Get image function implementation' 'Shuffle batch to make batches' return image, labels train_imageBatch, train_labelBatch = get_batch('train_dataset') validation_imageBatch, validation_labelBatch = get_batch('valid_dataset') </code></pre> <p>Also check the below link on how to switch between testing and validation when using queues:<a href="https://stackoverflow.com/questions/41162955/tensorflow-queues-switching-between-train-and-validation-data">Tensorflow Queues - Switching between train and validation data</a>. Your code should be like:</p> <pre><code># A bool tensor to figure out whether in training loop or tesing loop _is_train = tf.placeholder(dtype=tf.bool, name='is_train') # Select train or test database based on the _is_train tensor images = tf.cond(_is_train, lambda:train_imageBatch, lambda:validation_imageBatch) labels = tf.cond(_is_train, lambda:train_labelBatch, lambda:validation_labelBatch) train_op = ... ... for step in num_steps: # each step summary, _ = sess.run([merged, train_step], fead_dict={_is_train:True} ... if (validate_step) summary, acc = sess.run([merged,accuracy],feed_dict={_is_train:False) ... </code></pre> <p>For implementation of get_batch, you can see this example from tensorflow: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/how_tos/reading_data/fully_connected_reader.py</a> .</p>
python|tensorflow
2
9,114
60,796,928
Can tensorboard display an interactive plot or 3D plot
<p>I have to visualize an interactive 3D plot on tensorboard. Can tensorboard visualize this, or is there any other way to display it on tensorboard? Thank you.</p>
<p>Yes, you can use the mesh plugin on TensorBoard. It'll allow you to create a visualization similar to those found in Three.js. You pass in the vertices, colors, and faces of the 3D data and TensorBoard will create an interactive 3D visualization. There are other options such as projections, but those are mainly used for embeddings.</p>
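<p>For example, a small sketch with the PyTorch <code>SummaryWriter</code> (the question is also tagged pytorch/tensorboardX); the tetrahedron below is dummy data:</p> <pre class="lang-py prettyprint-override"><code>import torch
from torch.utils.tensorboard import SummaryWriter

# Batched tensors: vertices (B, N, 3), per-vertex RGB colors (B, N, 3), faces (B, F, 3)
vertices = torch.tensor([[[1., 1., 1.], [-1., -1., 1.], [1., -1., -1.], [-1., 1., -1.]]])
colors = torch.tensor([[[255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 0, 255]]])
faces = torch.tensor([[[0, 2, 3], [0, 3, 1], [0, 1, 2], [1, 3, 2]]])

writer = SummaryWriter()
writer.add_mesh('my_mesh', vertices=vertices, colors=colors, faces=faces)
writer.close()
# Open the Mesh tab in TensorBoard to rotate/zoom the object interactively.
</code></pre>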
tensorflow|pytorch|tensorboard|visualize|tensorboardx
0
9,115
61,003,467
How to convert a list of different-sized tensors to a single tensor?
<p>I want to convert a list of tensors with different sizes to a single tensor. </p> <p>I tried <code>torch.stack</code>, but it shows an error.</p> <pre><code>--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) &lt;ipython-input-237-76c3ff6f157f&gt; in &lt;module&gt; ----&gt; 1 torch.stack(t) RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 5 and 6 in dimension 1 at C:\w\1\s\tmp_conda_3.7_105232\conda\conda-bld\pytorch_1579085620499\work\aten\src\TH/generic/THTensor.cpp:612 </code></pre> <p>My list of tensors:</p> <pre><code>[tensor([-0.1873, -0.6180, -0.3918, -0.5849, -0.3607]), tensor([-0.6873, -0.3918, -0.5849, -0.9768, -0.7590, -0.6707]), tensor([-0.6686, -0.7022, -0.7436, -0.8231, -0.6348, -0.4040, -0.6074, -0.6921])] </code></pre> <p>I have also tried this in a different way, instead of tensors, I used a lists of these individual tensors and tried to make a tensor out of it. That also showed an error. </p> <pre><code>list: [[-0.18729999661445618, -0.6179999709129333, -0.3917999863624573, -0.5849000215530396, -0.36070001125335693], [-0.6873000264167786, -0.3917999863624573, -0.5849000215530396, -0.9768000245094299, -0.7590000033378601, -0.6707000136375427], [-0.6686000227928162, -0.7021999955177307, -0.7436000108718872, -0.8230999708175659, -0.6348000168800354, -0.40400001406669617, -0.6074000000953674, -0.6920999884605408]] </code></pre> <p>The error:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-245-489aea87f307&gt; in &lt;module&gt; ----&gt; 1 torch.FloatTensor(t) ValueError: expected sequence of length 5 at dim 1 (got 6) </code></pre> <p>Apparently, it says, it is expecting the same length of lists if I am not wrong. </p> <p>Can anyone help here?</p>
<p>I agree with @helloswift123, you cannot stack tensors of different lengths. </p> <p>Also, @helloswift123's answer will work only when the total number of elements is divisible by the shape that you want. In this case, the total number of elements is <code>19</code> and in no case, it can be reshaped into something useful since it is a prime number.</p> <p><code>torch.cat()</code> as suggested,</p> <pre><code>data = [torch.tensor([-0.1873, -0.6180, -0.3918, -0.5849, -0.3607]), torch.tensor([-0.6873, -0.3918, -0.5849, -0.9768, -0.7590, -0.6707]), torch.tensor([-0.6686, -0.7022, -0.7436, -0.8231, -0.6348, -0.4040, -0.6074, -0.6921])] dataTensor = torch.cat(data) dataTensor.numel() </code></pre> <p>Output:</p> <pre><code>tensor([-0.1873, -0.6180, -0.3918, -0.5849, -0.3607, -0.6873, -0.3918, -0.5849, -0.9768, -0.7590, -0.6707, -0.6686, -0.7022, -0.7436, -0.8231, -0.6348, -0.4040, -0.6074, -0.6921]) 19 </code></pre> <p>Possible solution:</p> <p>This is also not a perfect solution but might solve this problem.</p> <pre><code># Have a list of tensors (which can be of different lengths) data = [torch.tensor([-0.1873, -0.6180, -0.3918, -0.5849, -0.3607]), torch.tensor([-0.6873, -0.3918, -0.5849, -0.9768, -0.7590, -0.6707]), torch.tensor([-0.6686, -0.7022, -0.7436, -0.8231, -0.6348, -0.4040, -0.6074, -0.6921])] # Determine maximum length max_len = max([x.squeeze().numel() for x in data]) # pad all tensors to have same length data = [torch.nn.functional.pad(x, pad=(0, max_len - x.numel()), mode='constant', value=0) for x in data] # stack them data = torch.stack(data) print(data) print(data.shape) </code></pre> <p>Output:</p> <pre><code>tensor([[-0.1873, -0.6180, -0.3918, -0.5849, -0.3607, 0.0000, 0.0000, 0.0000], [-0.6873, -0.3918, -0.5849, -0.9768, -0.7590, -0.6707, 0.0000, 0.0000], [-0.6686, -0.7022, -0.7436, -0.8231, -0.6348, -0.4040, -0.6074, -0.6921]]) torch.Size([3, 8]) </code></pre> <p>This will append zeros to the end of any tensor which is having fewer elements and in this case you can use <code>torch.stack()</code> as usual.</p> <p>I hope this helps!</p>
python|pytorch|tensor
1
9,116
71,716,193
Pandas: How to create a new column that adds selected columns across rows
<p>I would like to create a new column called &quot;Excess Return&quot; that is =('SPX TR'-'3M Govt'), and I want to place it to the right of the '3M Govt' column. How do I do that?</p> <p>Please call the following table &quot;df&quot;:</p> <pre><code> Date SPX TR 3M Govt Div Yield Real Dividends 0 1940-01-31 12.2531 0.0200 0.044580 10.343730 </code></pre>
<pre><code>df['Val_Diff'] = df['SPX TR'] - df['3M Govt'] </code></pre> <p>after this, reorder your columns</p> <pre><code>df = df[['Date', 'SPX TR', '3M Govt', 'Val_Diff', 'Div Yield', 'Real Dividends']] </code></pre>
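<p>If the new column should literally be named &quot;Excess Return&quot; and sit immediately to the right of <code>3M Govt</code>, a small sketch using <code>DataFrame.insert</code> (column names taken from the question):</p> <pre class="lang-py prettyprint-override"><code># Insert the new column directly after '3M Govt'
pos = df.columns.get_loc('3M Govt') + 1
df.insert(pos, 'Excess Return', df['SPX TR'] - df['3M Govt'])
</code></pre>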
pandas|dataframe|calculated-columns|subtraction
0
9,117
71,492,896
Django CeleryTask return GeoDataFrames leads to TypeError: Object of type int64 is not JSON serializable
<p>I am trying to build a web app in which I make use of celery to tackle a long running process. I need to pass to the view that called the task a couple of GeoDataFrames and an epsg projection value.</p> <pre><code>return {'Working_area_final': Working_area_final.to_json(), 'PoI_buffer_small': PoI_buffer_small.to_json(), 'Streets_gdf': Streets_gdf.to_json(), 'PoI_buffer_BIG_exp': PoI_buffer_BIG_exp.to_json(), 'projection': str(proj.to_epsg()), 'ip': ip.to_json()} </code></pre> <p>The problem occurs here. I get 'TypeError: Object of type int64 is not JSON serializable'. I can tell that before this command I manage to successfully print to the console everything that has to be passed as the result.</p> <p>EDIT: Found out that one of the GeoDataFrames failed the conversion to json. It is a GeoDataFrame retrieved from a graph of the osmnx package. Now the question switches to: why does this occur only on this GeoDataFrame?</p>
<p>Have created a MWE with what you describe. It does not fail (as expected) when serialised as JSON (actually GEOJSON). I suggest that you provide more details on</p> <ol> <li>how you are using <strong>osmnx</strong></li> <li>how you are doing CRS projection</li> </ol> <pre><code>import geopandas as gpd import osmnx as ox cities = gpd.read_file(gpd.datasets.get_path('naturalearth_cities')) p = cities.loc[cities[&quot;name&quot;].eq(&quot;Vaduz&quot;), &quot;geometry&quot;].values[0].coords[0] G = ox.graph_from_point(p[::-1], simplify=True) # get the edges, these will be linestrings gdf = ox.graph_to_gdfs(G, edges=True)[1] gdf.to_crs(gdf.estimate_utm_crs()).to_json() </code></pre>
json|django|celery|task|geopandas
0
9,118
71,520,565
Access columns of a dataframe based on column names using a 'for' loop
<p>Let's say I have a data frame consisting of column names <code>M1out1</code>, <code>M1out2</code>, ..., <code>M1out120</code>, <code>0</code>, <code>1</code>, ..., <code>120</code>.</p> <p>Is there a way in which I can access the columns based on these names using a <code>for</code> loop?</p> <p>Something like <code>for i in range(M1out1, M1out120, 1)</code>, for columns <code>M1out1</code> through <code>M1out120</code>.</p>
<p>You can create a slice of the dataframe by a range of columns, and get the list of columns names for that slice:</p> <pre><code>for col in df.loc[:, 'M1out0':'M1out15'].columns: print(col) </code></pre> <p>Output:</p> <pre><code>M1out0 M1out1 M1out2 M1out3 M1out4 M1out5 M1out6 M1out7 M1out8 M1out9 M1out10 M1out11 M1out12 M1out13 M1out14 M1out15 </code></pre> <h3>Convenience function</h3> <pre><code>def col_range(df, start=None, stop=None, step=None): return df.loc[:, start:stop:step].columns </code></pre> <p>Usage:</p> <pre><code># Loop over every 3rd column starting from M1out0 and ending with M1out20: for col in col_range(df, 'M1out0', 'M1out20', 3): print(col) </code></pre> <p>Output:</p> <pre><code>M1out0 M1out3 M1out6 M1out9 M1out12 M1out15 M1out18 </code></pre>
python|pandas
0
9,119
42,273,246
How does tensorflow implement the embedding_column?
<p>I'm learning tensorflow's wide_n_deep_tutorial these days, and I'm a little bit confused by <em>tf.contrib.layers.embedding_column</em>. I wonder how tensorflow implements the embedding column?</p> <p>For example, suppose I have a sparse input with dimension 1000 and I want to embed it into a dense feature with dimension 10. Does it hold a fully connected network with 1000*10 params and train using BP to update the params? Or does it use some other techniques like FM to map the 1000 dim vector to a 10 dim vector?</p>
<p>There are 3 combiners in the <code>embedding_column</code> function:</p> <ul> <li>"sum": do not normalize</li> <li>"mean": do l1 normalization</li> <li>"sqrtn": do l2 normalization</li> </ul> <p>See <code>tf.nn.embedding_lookup_sparse</code> for more details.</p> <p>It does not use FM to modulate/transform the dimensions.</p>
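<p>Roughly, with the contrib API from the wide-and-deep tutorial (this is only a sketch; exact argument names may vary between versions), the combiner is passed like this:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf

# A sparse (categorical) column with ~1000 ids, embedded into 10 dimensions
sparse_col = tf.contrib.layers.sparse_column_with_hash_bucket("my_feature", hash_bucket_size=1000)
emb_col = tf.contrib.layers.embedding_column(sparse_col, dimension=10, combiner="mean")

# Under the hood this is an embedding lookup over a trainable [1000, 10] weight matrix,
# combined per example roughly like tf.nn.embedding_lookup_sparse(..., combiner="mean").
</code></pre>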
tensorflow|embedding
1
9,120
69,797,734
Removing entire rows from a dataframe for which a specified column value contains null
<p>Using python and pandas I am trying to remove entire rows from a dataframe where a value in a specific column is null. I have tried the following code using a for loop:</p> <pre><code>for row in dataframe.index: if pd.isnull(dataframe['Column'][i]): dataframe.drop([i], axis=0, inplace=False) </code></pre> <p>However I got the following error:</p> <blockquote> <p>NameError: name 'i' is not defined</p> </blockquote> <p>Does anybody have a suggestion?</p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html" rel="nofollow noreferrer"><code>pandas.DataFrame.dropna</code></a>:</p> <p>In your case, it would be like this:</p> <pre><code>dataframe.dropna(subset=['Column'], inplace=True, axis=0) </code></pre>
python|pandas|dataframe|for-loop
1
9,121
69,912,650
create new DataFrame resulted from processing N rows of other DataFrame
<p>I want to process each N rows of a DataFrame separately.<br />If my data has 15 row indexed from 0 to 14 I want to process rows from index 0 to 3 , 4 to 7, 8 to 11, 12 to 15 <br /> for example let's say for each 4 rows I want the sum(A) and the mean(B)</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Index</th> <th>A</th> <th>B</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>4</td> <td>4</td> </tr> <tr> <td>1</td> <td>7</td> <td>9</td> </tr> <tr> <td>2</td> <td>9</td> <td>3</td> </tr> <tr> <td>3</td> <td>0</td> <td>4</td> </tr> <tr> <td>4</td> <td>7</td> <td>9</td> </tr> <tr> <td>5</td> <td>9</td> <td>2</td> </tr> <tr> <td>6</td> <td>3</td> <td>0</td> </tr> <tr> <td>7</td> <td>7</td> <td>4</td> </tr> <tr> <td>8</td> <td>7</td> <td>2</td> </tr> <tr> <td>9</td> <td>1</td> <td>6</td> </tr> </tbody> </table> </div> <p>The Resulted DataFrame should be</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Index</th> <th>A</th> <th>B</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>20</td> <td>5</td> </tr> <tr> <td>1</td> <td>26</td> <td>3.75</td> </tr> <tr> <td>2</td> <td>8</td> <td>4</td> </tr> </tbody> </table> </div> <p>TLDR: how to let <code>DataFrame.apply</code> takes multiple rows instead of a single row at a time</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.agg.html" rel="nofollow noreferrer"><code>GroupBy.agg</code></a> with integer division by <code>4</code> by index:</p> <pre><code>#default RangeIndex df = df.groupby(df.index // 4).agg({'A':'sum', 'B':'mean'}) #any index df = df.groupby(np.arange(len(df.index)) // 4).agg({'A':'sum', 'B':'mean'}) print (df) A B 0 20 5.00 1 26 3.75 2 8 4.00 </code></pre>
python|pandas|dataframe
1
9,122
70,016,547
How to plot each year as a line with months on the x-axis
<p>I have a question, I have the following dataframe containing multiple years and months with an total sum:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd data = {'Bedrag': [210406.49, 191369.87, 118458.65, 81682.95, 90571.61, 196374.53, 223619.85, 144773.64, 240221.67, 110666.73, 108633.49, 194808.85, 103302.85, 186419.17, 96297.53, 81404.79, 94874.21, 209520.46, 270694.15, 107448.21, 188290.77, 163761.78, 168799.28, 190937.74, 127930.28, 262299.96, 48658.0, 48027.57, 220501.67, 234570.63, 89188.45, 233270.46, 179647.23, 272358.86], 'Jaar': [2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2020, 2020, 2020, 2020, 2020, 2020, 2020, 2020, 2020, 2020, 2020, 2020, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021, 2021], 'maand': ['April', 'August', 'December', 'February', 'January', 'July', 'June', 'March', 'May', 'November', 'October', 'September', 'April', 'August', 'December', 'February', 'January', 'July', 'June', 'March', 'May', 'November', 'October', 'September', 'April', 'August', 'February', 'January', 'July', 'June', 'March', 'May', 'October', 'September']} correct_omzet = pd.DataFrame(data) # display(correct_omzet.head()) Bedrag Jaar maand 0 210406.49 2019 April 1 191369.87 2019 August 2 118458.65 2019 December 3 81682.95 2019 February 4 90571.61 2019 January </code></pre> <p>Now I would like to plot this with seaborn in a line plot to see a season sales pattern. But when I try to plot as below, the outcome only shows the 12 months.</p> <pre><code>plt.figure(figsize=(25,6)) sns.set_style(&quot;darkgrid&quot;) sns.lineplot(data=correct_omzet, x=&quot;maand&quot;, y=&quot;Bedrag&quot;, ci=None, color=&quot;green&quot;, marker='o') </code></pre> <p><img src="https://i.stack.imgur.com/XIJnQ.png" alt="Line plot 2019" /></p> <p>I've also tried to plot it without seaborn:</p> <pre><code>correct_omzet.plot.line(x='maand', rot=90, y='Bedrag', title='Omzetpatroon 2019 - 2021', colormap='Spectral', figsize=(20,8)) plt.minorticks_on() </code></pre> <p><img src="https://i.stack.imgur.com/264lv.png" alt="plot without seaborn" /></p> <p>But then I can't figure out to get all the months as x-axis titles.</p>
<ul> <li>The clearest way to make seasonal observations is to plot each year as a separate line, with the x-axis as the months</li> <li>Set the month column as ordered categorical with <a href="https://pandas.pydata.org/docs/reference/api/pandas.Categorical.html" rel="nofollow noreferrer"><code>pd.Categorical</code></a>, which will ensure the months are plotted in order. The built-in <a href="https://docs.python.org/3/library/calendar.html" rel="nofollow noreferrer"><code>calendar</code></a> package is used for the ordered <code>month_name</code> list, or create a list manually.</li> <li>Use the figure-level <a href="https://seaborn.pydata.org/generated/seaborn.relplot.html" rel="nofollow noreferrer"><code>seaborn.relplot</code></a> with <code>kind='line'</code> to plot the data and separate <code>'Jaar'</code> with the <code>hue=</code> parameter. <ul> <li>The figure-level plot is used because it has <code>height</code> and <code>aspect</code> for sizing, removing the need to create <code>fig, ax = plt.subplots(figsize=(5, 10))</code>. Otherwise use the axes-level <a href="https://seaborn.pydata.org/generated/seaborn.lineplot.html" rel="nofollow noreferrer"><code>seaborn.lineplot</code></a>.</li> </ul> </li> <li>Also see <a href="https://trenton3983.github.io/files/projects/Portland_Weather/Portland_Weather.html" rel="nofollow noreferrer">Weather Visualization for Portland, OR: 1940 - 2020</a></li> <li><strong>Tested in <code>python 3.8.12</code>, <code>pandas 1.3.4</code>, <code>matplotlib 3.4.3</code>, <code>seaborn 0.11.2</code></strong></li> </ul> <pre class="lang-py prettyprint-override"><code>import pandas as pd import seaborn as sns from calendar import month_name as mn # month list months = mn[1:] # convert the column to categorical and ordered correct_omzet.maand = pd.Categorical(correct_omzet.maand, categories=months, ordered=True) # plot the data p = sns.relplot(kind='line', data=correct_omzet, x='maand', y='Bedrag', hue='Jaar', aspect=2.5, marker='o') </code></pre> <p><a href="https://i.stack.imgur.com/jKLX8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jKLX8.png" alt="enter image description here" /></a></p>
python|pandas|dataframe|plot|seaborn
1
9,123
69,973,357
Finding a match in two files and merging the data
<p><strong>Input: file1</strong></p> <pre><code>0 1 EXE sldk EXE1 vkrk TPO dlfk EXE2 sdfs </code></pre> <p><strong>Input: file2</strong></p> <pre><code>0 1 DD=CMD asldkjfalsdkfj DD=EXE mbnwjnjcxjic DD=DMFF pklckwkflkdf DD=EXE2 okvwokmvfv DD=EXE1 ksdjfokwoekc </code></pre> <p><strong>Expected output: file1#</strong></p> <pre><code>0 1 2 EXE sldk mbnwjnjcxjic EXE1 vkrk ksdjfokwoekc TPO dlfk NaN EXE2 sdfs okvwokmvfv </code></pre> <p><strong>Result: file1#</strong></p> <pre><code>0 1 2 EXE &quot;sldk&quot; &quot;mbnwjnjcxjic&quot; EXE &quot;sldk&quot; &quot;ksdjfokwoekc&quot; EXE &quot;sldk&quot; &quot;okvwokmvfv&quot; EXE1 &quot;vkrk&quot; &quot;ksdjfokwoekc&quot; TPO &quot;dlfk&quot; &quot;NaN&quot; EXE2 &quot;sdfs&quot; &quot;okvwokmvfv&quot; </code></pre> <p><strong>Code</strong></p> <pre><code>import pandas as pd import numpy as np with open(file1, 'r') as file1, open(file2, 'r') as file1: lines_1 = file1.readlines() lines_2 = file2.readlines() lines_1_column = [x.split(',') for x in lines_1] lines_2_column = [y.split(',') for y in lines_2] df_file1 = pd.DataFrame(lines_1_column) df_file2 = pd.DataFrame(lines_2_column) extract_from_two_files = file2[0].str.extract(f'({&quot;|&quot;.join(file1[0])})', expand=False) merge_two_files = file1.merge(file2[[1]], how='left', left_on=0, right_on=extract_from_two_files) merge_two_files.columns = np.arange(len(merge_two_files.columns)) merge_two_files.to_csv(file 1#, index=False, sep=',', header=None) </code></pre> <p>I wanna get the 'Expected output: file1#', but the result is just like 'Result: file1#' with this code. I'm sorry but is there any suggestions you have?</p>
<p>You could create a temporary variable from <code>file2</code>, without the <code>DD=</code>, and merge data found in both DataFrames in the first column (<code>0</code>) using the <code>on</code> parameter.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df1 = pd.read_csv('file1.csv', sep='\s+') df2 = pd.read_csv('file2.csv', sep='\s+') df_tmp = df2.replace('DD=','', regex=True) result = pd.merge(left=df1, right=df_tmp, on=['0'], how='left') result.columns = range(len(result.columns)) print(result) </code></pre> <p>Output from <em>result</em></p> <pre class="lang-py prettyprint-override"><code> 0 1 2 0 EXE sldk mbnwjnjcxjic 1 EXE1 vkrk ksdjfokwoekc 2 TPO dlfk NaN 3 EXE2 sdfs okvwokmvfv </code></pre>
python|pandas|file|merge|extract
0
9,124
69,803,718
Keras Custom loss Penalize more when actual and prediction are on opposite sides of Zero
<p>I'm training a model to predict percentage change in prices. Both MSE and RMSE are giving me up to 99% accuracy but when I check how often both actual and prediction are pointing in the same direction <code>((actual &gt;0 and pred &gt; 0) or (actual &lt; 0 and pred &lt; 0))</code>, I get about 49%.</p> <p>Please how do I define a custom loss that penalizes opposite directions very heavily. I'd also like to add a slight penalty for when the predictions exceeds the actual in a given direction.</p> <p>So</p> <ul> <li><code>actual = 0.1 and pred = -0.05</code> should be penalized a lot more than <code>actual = 0.1 and pred = 0.05</code>,</li> <li>and <code>actual = 0.1 and pred = 0.15</code> slightly more penalty than <code>actual = 0.1 and pred = 0.05</code></li> </ul>
<p>I will leave it up to you to define your exact logic, but here is how you can implement what you want with <code>tf.cond</code>:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf y_true = [[0.1]] y_pred = [[0.05]] mse = tf.keras.losses.MeanSquaredError() def custom_loss(y_true, y_pred): penalty = 20 # actual = 0.1 and pred = -0.05 should be penalized a lot more than actual = 0.1 and pred = 0.05 loss = tf.cond(tf.logical_and(tf.greater(y_true, 0.0), tf.less(y_pred, 0.0)), lambda: mse(y_true, y_pred) * penalty, lambda: mse(y_true, y_pred) * penalty / 4) #actual = 0.1 and pred = 0.15 slightly more penalty than actual = 0.1 and pred = 0.05 loss = tf.cond(tf.greater(y_pred, y_true), lambda: loss * penalty / 2, lambda: loss * penalty / 3) return loss print(custom_loss(y_true, y_pred)) </code></pre>
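<p>To actually train with it, the function can be passed straight to <code>compile</code>, since Keras accepts any callable with the <code>(y_true, y_pred)</code> signature. A minimal sketch, where the model and the training arrays are assumed to exist already:</p> <pre class="lang-py prettyprint-override"><code># hypothetical model / data names -- substitute your own
model.compile(optimizer='adam', loss=custom_loss)
model.fit(X_train, y_train, epochs=10)
</code></pre> <p>Note that <code>tf.cond</code> expects a scalar predicate, so for batched targets you may need an element-wise formulation (e.g. with <code>tf.where</code>) instead.</p>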
python|tensorflow|keras|loss-function|mse
1
9,125
69,864,279
Python Pandas print Dataframe.describe() in default format in JupyterNotebook
<h2>Output format without print function</h2> <p>If I run a Jupyter Notebook cell with</p> <pre><code>dataframe.describe() </code></pre> <p>a pretty formatted table will be printed like this: <a href="https://i.stack.imgur.com/byrgO.png" rel="nofollow noreferrer">VSCode JupyterNotebook dataframe.describe() solo cell printing format</a></p> <h2>Output format with print function</h2> <p>If I run a cell with more than one line of code, dataframe.describe() would not print anything. Therefore I need to call</p> <pre><code>print(dataframe.describe()) </code></pre> <p>This leads to a totally different formatting though: <a href="https://i.stack.imgur.com/BQ3HV.png" rel="nofollow noreferrer">VSCode JupyterNotebook printing dataframe.describe() with print function</a></p> <p>Is there a way to print dataframe.describe() in the first format?</p>
<p>There are multiple things to say here:</p> <ol> <li><p>Jupyter Notebooks can only print out one object at a time when simply calling it by name (e.g. <code>dataframe</code>). If you want, you can use one cell per command to get the right format.</p> </li> <li><p>If you use the function <code>print</code>, it will print anything as text because print is using the function <code>to_string</code> for any object it gets. This is <code>python</code> logic - in contrast, option 1) is Jupyter-specific...</p> </li> <li><p>If you don't want to use a separate cell and still get the right formatting, there are several options, one might be this:</p> <pre class="lang-py prettyprint-override"><code>from IPython.display import display display(dataframe) </code></pre> </li> </ol>
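<p>As a quick usage sketch (the <code>dataframe</code> name is just a placeholder), several rich tables can be rendered from a single cell this way:</p> <pre class="lang-py prettyprint-override"><code>from IPython.display import display

display(dataframe.describe())  # rendered as the rich HTML table
display(dataframe.head())      # a second table from the same cell
print(dataframe.shape)         # plain-text output still works alongside
</code></pre>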
python|pandas|dataframe|printing|describe
1
9,126
72,296,566
Generating 3 columns from one with .apply on dataframe
<p>I want to extract some data from each row and turn it into new columns of an existing or new dataframe, without repeatedly doing the same <code>re.match</code> operation.</p> <p>Here's how one entry of the dataframe looks:</p> <pre><code>00:00 Someones_name: some text goes here </code></pre> <p>And I have a regex that successfully takes the 3 groups that I need:</p> <pre><code>re.match(r&quot;^(\d{2}:\d{2}) (.*): (.*)$&quot;, x) </code></pre> <p>The problem I have is how to take matched_part[1], [2], and [3] without actually matching again for every new column.</p> <p>The solution that I don't want is:</p> <pre><code>new_df['time'] = old_df['text'].apply(function1)
new_df['name'] = old_df['text'].apply(function2)
new_df['text'] = old_df['text'].apply(function3)

def function1(x):
    return re.match(r&quot;^(\d{2}:\d{2}) (.*): (.*)$&quot;, x)[1]
</code></pre>
<p>you can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.extract.html" rel="nofollow noreferrer">str.extract</a> with your pattern</p> <pre><code>df[['time','name', 'text']] = df['col1'].str.extract(r&quot;^(\d{2}:\d{2}) (.*): (.*)$&quot;) print(df) # col1 time name \ # 0 00:00 Someones_name: some text goes here 00:00 Someones_name # text # 0 some text goes here </code></pre>
python|pandas|dataframe|data-science|series
2
9,127
72,435,759
Pandas: Setting a value in a cell when multiple columns are empty
<p>I've been looking for ways to do this natively for a little while now and can't find a solution.</p> <p>I have a large dataframe where I would like to set the value in other_col to 'True' for all rows where one of a list of columns is empty.</p> <p>This works for a single column page_title:</p> <p><code>df.loc[df['page_title'].isna(), ['other_col']] = ''</code></p> <p>But not when using a list</p> <p><code>df.loc[df[['page_title','brand','name']].isna(), ['other_col']] = ''</code></p> <p>Any ideas of how I could do this without using Numpy or looping through all rows? Thanks</p>
<p>This will allow you to set which columns you want to determine if np.nan is present and set a True/False indicator</p> <pre><code>data = { 'Column1' : [1, 2, 3, np.nan], 'Column2' : [1, 2, 3, 4], 'Column3' : [1, 2, np.nan, 4] } df = pd.DataFrame(data) df['other_col'] = np.where((df['Column1'].isna()) | (df['Column2'].isna()) | (df['Column3'].isna()), True, False) df </code></pre>
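<p>If the list of columns grows, one way to avoid chaining the per-column checks (a hedged alternative, using the same hypothetical column names) is <code>isna().any(axis=1)</code>, which flags rows where any of the listed columns is missing:</p> <pre><code>cols = ['Column1', 'Column2', 'Column3']
df['other_col'] = df[cols].isna().any(axis=1)
</code></pre>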
python|pandas
0
9,128
72,415,365
Exponential moving average (EMA) with different number of observations per day but equally weighted observations
<p>I have a <code>DataFrame</code> <code>df1</code> with observations according to one specific <code>ID</code>. The number of observations per <code>ID</code> varies over time. For each <code>ID</code>, I try to calculate an exponential moving average (EMA) over 3 days. Each observation should be weighted equally within a rolling window of 3 days, regardless of the number of observations on a specific date.</p> <pre><code>df1: ID Value Date 2022-01-01 ID1 1 2022-01-01 ID2 0 2022-01-01 ID3 -1 2022-01-02 ID1 1 2022-01-02 ID3 0 2022-01-03 ID1 -1 2022-01-03 ID1 1 2022-01-04 ID1 0 2022-01-04 ID1 1 2022-01-04 ID2 1 2022-01-04 ID3 -1 2022-01-06 ID2 1 2022-01-06 ID2 1 2022-01-06 ID3 -1 </code></pre> <p>So far I constructed a simple moving average (SMA) by creating a <code>pivot</code> table with the <code>sum</code> and <code>count</code> of the values per <code>ID</code> on each date.</p> <pre><code>pivot: sum count ID ID1 ID2 ID3 ID1 ID2 ID3 Date 2022-01-01 1 0 -1 1 1 1 2022-01-02 1 0 0 1 0 1 2022-01-03 0 0 0 2 0 0 2022-01-04 1 1 -1 2 1 1 2022-01-06 0 2 -1 0 2 1 </code></pre> <p>Then I took the rolling sum over 3 days of the values and divided it by the number of observations and created SMA:</p> <pre><code>SMA: ID ID1 ID2 ID3 Date 2022-01-01 NaN NaN NaN 2022-01-02 NaN NaN NaN 2022-01-03 0.50 0.0 -0.5 2022-01-04 0.40 1.0 -0.5 2022-01-06 0.25 1.0 -1.0 </code></pre> <p>Is there a similar approach for the EMA, so that I exponentially weight each observation over the period regardless of the number of observations on the days?</p> <p>Thanks a lot and best regards!</p> <p>For reproducability:</p> <pre><code>df1 = pd.DataFrame({ 'Date':['2022-01-01', '2022-01-01', '2022-01-01', '2022-01-02', '2022-01-02', '2022-01-03', '2022-01-03', '2022-01-04', '2022-01-04', '2022-01-04', '2022-01-04', '2022-01-06', '2022-01-06', '2022-01-06'], 'ID':['ID1', 'ID2','ID3', 'ID1', 'ID3', 'ID1', 'ID1', 'ID1', 'ID1', 'ID2', 'ID3', 'ID2', 'ID2', 'ID3'], 'Value':[1, 0, -1, 1, 0, -1, 1, 0, 1, 1, -1, 1, 1, -1]}) df1 = df1.set_index('Date') pivot = df1.explode('ID').pivot_table( index='Date', columns='ID', values='Value', fill_value=0, aggfunc=['sum', 'count']) SMA = pivot.rolling(3).sum().xs('sum', axis=1, level=0).div(RollingSum.xs('count', axis=1, level=0)) </code></pre>
<p>Sorry, as I am new, I do not have the option to leave a comment. But you can try pandas <code>.ewm</code>, similarly to the SMA, right? <code>pivot.ewm(span=3, min_periods=3).mean()</code>.</p> <p>I'm not exactly sure what <code>RollingSum</code> does in your code, but try this:</p> <pre><code>EMA = pivot.ewm(span=3, min_periods=3).mean().div(RollingSum.xs('count', axis=1, level=0))
</code></pre>
python|pandas|dataframe
1
9,129
50,430,083
how to see a full image of deep neural network
<p>How is it possible to see the graphical structure of a deep neural network in Keras or TensorFlow? I built the model and looked at the output of "plot_model", but I want a visualization similar to this image.</p> <p><img src="https://i.stack.imgur.com/auqqO.png" alt="click to see the image"></p>
<p>For TensorFlow at least, you can use TensorBoard: <a href="https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard" rel="nofollow noreferrer">Tutorial and Explanation</a></p> <p>It also features <a href="https://www.tensorflow.org/programmers_guide/graph_viz" rel="nofollow noreferrer">Graph visualization</a>, which is what you are looking for. Still, not identical to your sample picture, but good enough I think.</p>
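<p>If the model is built in Keras, a minimal sketch of writing the graph for TensorBoard with the built-in callback (the <code>model</code>, <code>x_train</code>/<code>y_train</code> names and the log directory are placeholders for your own objects):</p> <pre><code>from keras.callbacks import TensorBoard

# write_graph=True stores the computation graph so TensorBoard's Graphs tab can render it
tb = TensorBoard(log_dir='./logs', write_graph=True)
model.fit(x_train, y_train, epochs=5, callbacks=[tb])

# then, from a terminal:  tensorboard --logdir=./logs
</code></pre>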
python-3.x|tensorflow|graphics|keras|deep-learning
-1
9,130
50,515,935
Python rolling sum taking data from two columns
<p>Below is part of a dataframe which consists of football game results.</p> <p>FTHG stands for "Full time home goals"</p> <p>FTAG stands for "Full time away goals"</p> <pre><code> Date HomeTeam AwayTeam FTHG FTAG FTR
14/08/93 Arsenal Coventry 0 3 A
14/08/93 Aston Villa QPR 4 1 H
16/08/93 Tottenham Arsenal 0 1 A
17/08/93 Everton Man City 1 0 H
21/08/93 QPR Southampton 2 1 H
21/08/93 Sheffield Arsenal 0 1 A
24/08/93 Arsenal Leeds 2 1 H
24/08/93 Man City Blackburn 0 2 A
28/08/93 Arsenal Everton 2 0 H
</code></pre> <p>I want to write code in Python that calculates a rolling sum (e.g. over 3 games) of the goals scored by each team, regardless of whether the team was home or away. The <code>groupby</code> method does half the job. Say "a" is a variable and "df" is the dataframe:</p> <pre><code>a = df.groupby("HomeTeam")["FTHG"].rolling(3).sum()
</code></pre> <p>The result would be something like this:</p> <pre><code> FTHG
Arsenal NaN
        NaN
        4.0
.....
</code></pre> <p>However, I would like the code to also take into account the goals scored when Arsenal was the visiting team, and to produce a new column (it should not be called FTHG) like:</p> <pre><code>Arsenal NaN
        NaN
        2
        4
        5
</code></pre> <p>Ideas will be much appreciated</p>
<p>you can combine those columns together and then apply groupby</p> <pre><code>tmp1 = df[['Date','HomeTeam', 'FTHG']] tmp2 = df[['Date','AwayTeam', 'FTAG']] tmp1.columns = ['Date','name', 'score'] tmp2.columns = ['Date','name', 'score'] tmp = pd.concat([tmp1,tmp2]) tmp.sort_values(by='Date').groupby("name")["score"].rolling(3).sum() name Arsenal 0 NaN 2 NaN 5 2.0 6 4.0 8 5.0 </code></pre>
python|pandas
0
9,131
50,358,767
get the last output for non-padded entry from tf.nn.dynamic_rnn
<p>I want to use an RNN in an all-to-one mode (only one output at the end). In TensorFlow, one can use:</p> <pre><code>lstm_cell = tf.nn.rnn_cell.LSTMCell(lstm_num_units) output, _ = tf.nn.dynamic_rnn(lstm_cell, embed, dtype=tf.float32) </code></pre> <p>Where the output contains the output at all time steps <code>[0, max_time-1]</code>, and the <code>max_time</code> is the length of the longest input in the batch.</p> <p>Now, I would like to get the last output for every input in the batch. Let me be clearer. All the implementations that I have seen on the net, use <code>output[:,-1]</code> as the last output. However, for the inputs which have been padded, this would imply that the output is from the padded input.</p> <p>Therefore, the questions:</p> <ol> <li><p>How justified it is to use just <code>output[:,-1]</code></p> </li> <li><p>Is there an easy way to select the last entry for the non-padded value in TensorFlow, which, in general, will be at different time steps for every input in the batch. Somehow, I found it a bit hard to do the necessary manipulations with the TensorFlow tensors, even when I have the original lengths of all input sequences.</p> </li> </ol>
<p>Modifying your code:</p> <pre><code>_, state = tf.nn.dynamic_rnn(lstm_cell, embed, dtype=tf.float32, sequence_length=some_placeholder) last_output = state.h </code></pre> <p>Don't forget to add a <code>sequence_length</code> parameter in the call to <code>dynamic_rnn</code> if you want the sequence lengths to vary.</p> <p>Alternatively, you could use <code>tf.gather_nd</code>.</p>
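<p>For completeness, a hedged sketch of the <code>tf.gather_nd</code> route, assuming <code>output</code> has shape <code>[batch_size, max_time, num_units]</code> and <code>seq_len</code> is an int32 tensor holding the true length of every sequence in the batch:</p> <pre><code>batch_size = tf.shape(output)[0]
# pair every example's row index with the index of its last valid time step
indices = tf.stack([tf.range(batch_size), seq_len - 1], axis=1)  # shape [batch_size, 2]
last_outputs = tf.gather_nd(output, indices)                     # shape [batch_size, num_units]
</code></pre>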
python|tensorflow
0
9,132
45,522,945
Pandas adding length column to dataframe after converting list to tuple
<p>I have two dataframes: in <code>test_df</code> the <code>product_id</code> column holds lists, whereas in the <code>product_combos</code> df it holds tuples. I changed <code>test_df</code> to tuples as well, like so:</p> <pre><code>[in] print(testing_df.head(n=5))
[out]
product_id transaction_id
001 [P01]
002 [P01, P02]
003 [P01, P02, P09]
004 [P01, P03]
005 [P01, P03, P05]

[in] print(product_combos1.head(n=5))
[out]
  product_id count length
0 (P06, P09) 36340 2
1 (P01, P05, P06, P09) 10085 4
2 (P01, P06) 36337 2
3 (P01, P09) 49897 2
4 (P02, P09) 11573 2

# Convert the lists to tuples
testing_df1 = testing_df['product_id'].apply(tuple)
</code></pre> <p>I run into problems when I now try and add the length column to <code>test_df1</code> (which calculates the number of strings in each row).</p> <p>I have tried first adding the length column and then converting to tuple, but the length column just disappears when I try this. I also did:</p> <pre><code>testing_df1['length'] = testing_df['product_id'].str.len()
</code></pre> <p>But this just adds a row of nonsense. I also tried:</p> <pre><code>testing_df1['length'] = testing_df['product_id'].apply(len)
</code></pre> <p>This doesn't seem to work either. What am I doing wrong and how can I fix it?</p>
<p>It's working fine:</p> <pre><code>df = pd.DataFrame([[1,['a','b']],[2,['a','b','c']],[3,['c','b']],[4,['b','d']],[5,['c','a']]])
</code></pre> <p>df:</p> <pre><code>   0          1
0  1     [a, b]
1  2  [a, b, c]
2  3     [c, b]
3  4     [b, d]
4  5     [c, a]
</code></pre> <pre><code>df[1] = df[1].apply(tuple)
df['length'] = df[1].apply(len)
</code></pre> <p>df:</p> <pre><code>   0          1  length
0  1     (a, b)       2
1  2  (a, b, c)       3
2  3     (c, b)       2
3  4     (b, d)       2
4  5     (c, a)       2
</code></pre>
python|pandas|dataframe
0
9,133
45,680,391
Create a dictionary by grouping by values from a dataframe column in python
<p>I have a dataframe with 7 columns, as follows:</p> <pre><code> Bank_Acct Firstname | Bank_Acct Lastname | Bank_AcctNumber | Firstname | Lastname | ID | Date1 | Date2
 B1 | Last1 | 123 | ABC | EFG | 12 | Somedate | Somedate
 B2 | Last2 | 245 | ABC | EFG | 12 | Somedate | Somedate
 B1 | Last1 | 123 | DEF | EFG | 12 | Somedate | Somedate
 B3 | Last3 | 356 | ABC | GHI | 13 | Somedate | Somedate
 B4 | Last4 | 478 | XYZ | FHJ | 13 | Somedate | Somedate
 B5 | Last5 | 599 | XYZ | DFI | 13 | Somedate | Somedate
</code></pre> <p>I want to create a dictionary with:</p> <pre><code> {ID1: (Count of Bank_Acct Firstname, Count of distinct Bank_Acct Lastname, {Bank_AcctNumber1 : ItsCount, Bank_AcctNumber2 : ItsCount}, Count of distinct Firstname, Count of distinct Lastname), ID2: (...), }
</code></pre> <p>For the above example:</p> <pre><code>{12: (2, 2, {123: 2, 245: 1}, 2, 1), 13 : (3, 3, {356: 1, 478: 1, 599: 1}, 2, 3)}
</code></pre> <p>Below is the code for that:</p> <pre><code>cols = ['Bank First Name', 'Bank Last Name', 'Bank AcctNumber', 'First Name', 'Last Name']
df1 = df.groupby('ID').apply(lambda x: tuple(x[c].nunique() for c in cols))
d = df1.to_dict()
</code></pre> <p>But the above code only gives the output as:</p> <pre><code> {12: (2, 2, 2, 2, 1), 13 : (3, 3, 3, 2, 3)}
</code></pre> <p>i.e. it gives the count of distinct bank account numbers instead of the inner dictionary.</p> <p>How to get the required dictionary instead? Thanks!!</p>
<p>You could define your columns and functions in a list</p> <pre><code>In [15]: cols = [ ...: {'col': 'Bank_Acct Firstname', 'func': pd.Series.nunique}, ...: {'col': 'Bank_Acct Lastname', 'func': pd.Series.nunique}, ...: {'col': 'Bank_AcctNumber', 'func': lambda x: x.value_counts().to_dict()}, ...: {'col': 'Firstname', 'func': pd.Series.nunique}, ...: {'col': 'Lastname', 'func': pd.Series.nunique} ...: ] In [16]: df.groupby('ID').apply(lambda x: tuple(c['func'](x[c['col']]) for c in cols)) Out[16]: ID 12 (2, 2, {123: 2, 245: 1}, 2, 1) 13 (3, 3, {356: 1, 478: 1, 599: 1}, 2, 3) dtype: object In [17]: (df.groupby('ID') .apply(lambda x: tuple(c['func'](x[c['col']]) for c in cols)) .to_dict()) Out[17]: {12: (2, 2, {123: 2, 245: 1}, 2, 1), 13: (3, 3, {356: 1, 478: 1, 599: 1}, 2, 3)} </code></pre>
python|pandas|dictionary|dataframe|group-by
2
9,134
62,633,568
Flatten JSON-response
<p>I have issues with flattening this JSON due to its ending, which I actually don't need, so I could potentially remove it (before or after flattening the JSON). I would like to do this in Python and have tried json_normalize and pandas for the export to CSV.</p> <p>What's special is that the last three items, TotalNumberOfMunicipalities, TotalCitizens and Aggregations, lie outside the part I would like to export to CSV after flattening.</p> <p>JSON:</p> <pre><code>{
  &quot;Municipalities&quot;: [
    {
      &quot;Name&quot;: &quot;Stockholm&quot;,
      &quot;NumberOfCitizens&quot;: 974073,
      &quot;Id&quot;: &quot;5203d2be-7cda-4caf-9fb5&quot;,
      &quot;Attributes&quot;: [],
      &quot;Location&quot;: {
        &quot;Lat&quot;: 59.33,
        &quot;Lon&quot;: 18.06
      },
      &quot;PoliticalGovernance&quot;: 1
    },
    {
      &quot;Name&quot;: &quot;Uppsala&quot;,
      &quot;NumerOfCitizens&quot;: 230767,
      &quot;Id&quot;: &quot;d155e5f5-b94a-4d0e-ba80&quot;,
      &quot;Attributes&quot;: [],
      &quot;Location&quot;: {
        &quot;Lat&quot;: 59.86,
        &quot;Lon&quot;: 17.64
      },
      &quot;PoliticalGovernance&quot;: 3
    }
  ],
  &quot;TotalNumberOfMunicipalities&quot;: 33,
  &quot;TotalCitizens&quot;: 4000000,
  &quot;Aggregations&quot;: {}
}
</code></pre> <p>How I would like the output to look: <a href="https://i.stack.imgur.com/NASqf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NASqf.png" alt="enter image description here" /></a></p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.json_normalize.html" rel="nofollow noreferrer"><code>pd.json_normalize</code></a>:</p> <pre><code>df = pd.json_normalize(d, 'Municipalities') print (df) Name NumberOfCitizens Id Attributes \ 0 Stockholm 974073 5203d2be-7cda-4caf-9fb5 [] 1 Uppsala 230767 d155e5f5-b94a-4d0e-ba80 [] PoliticalGovernance Location.Lat Location.Lon 0 1 59.33 18.06 1 3 59.86 17.64 </code></pre>
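<p>Because only the <code>Municipalities</code> record path is passed to <code>json_normalize</code>, the top-level <code>TotalNumberOfMunicipalities</code>, <code>TotalCitizens</code> and <code>Aggregations</code> fields are dropped automatically. Writing the flattened frame out is then one call (the file name below is just an example):</p> <pre><code>df.to_csv('municipalities.csv', index=False)
</code></pre>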
python|json|pandas|csv|flatten
2
9,135
62,679,380
How do you detect and delete infinite values from a time series in a pandas dataframe?
<p>I can't calculate the mean of a variable because it has infinite values in it, but I can't find and fix them:</p> <pre><code>perc_df[['variable']].mean() variable inf dtype: float64 </code></pre> <p>I've never dealt with infinite values before, is there an equivalent to &quot;isna()&quot; and &quot;dropna()&quot; for infinite values in pandas? If there isn't, how do I deal with these values?</p>
<p>You can try to filter out the infinite values with <a href="https://numpy.org/doc/1.18/reference/constants.html#numpy.Inf" rel="nofollow noreferrer"><code>numpy.inf</code></a>. The code is following:</p> <pre><code>import numpy as np perc_df[perc_df.variable != np.inf].variable.mean() </code></pre>
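<p>Another common approach (a sketch reusing the column name from the question) is to convert the infinities to NaN first; <code>mean()</code> skips NaN by default, and <code>isna()</code>/<code>dropna()</code> then behave as usual:</p> <pre><code>import numpy as np

perc_df['variable'] = perc_df['variable'].replace([np.inf, -np.inf], np.nan)
perc_df['variable'].mean()                      # NaN values are skipped
perc_df = perc_df.dropna(subset=['variable'])   # or drop those rows entirely
</code></pre>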
python|pandas|infinite
2
9,136
62,842,509
Does model.compile() go inside MirroredStrategy
<p>I have a network for transfer learning and want to train on two GPUs. I have just trained on one up to this point and am looking for ways to speed things up. I am getting conflicting answers about how to use it most efficiently.</p> <pre><code>strategy = tf.distribute.MirroredStrategy(devices=[&quot;/gpu:0&quot;, &quot;/gpu:1&quot;])
with strategy.scope():
    base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(200,200,3))
    x = base_model.output
    x = GlobalAveragePooling2D(name=&quot;class_pool&quot;)(x)
    x = Dense(1024, activation='relu', name=&quot;class_dense1&quot;)(x)
    types = Dense(20,activation='softmax', name='Class')(x)
    model = Model(inputs=base_model.input, outputs=[types])
</code></pre> <p>Then I set trainable layers:</p> <pre><code>for layer in model.layers[:160]:
    layer.trainable=False
for layer in model.layers[135:]:
    layer.trainable=True
</code></pre> <p>Then I compile</p> <pre><code>optimizer = Adam(learning_rate=.0000001)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics='accuracy')
</code></pre> <p>Should everything be nested inside <code>strategy.scope()</code>?</p> <p>This <a href="https://www.tensorflow.org/tutorials/distribute/keras#create_the_model" rel="nofollow noreferrer">tutorial</a> shows <code>compile</code> within but this <a href="https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_tfkerasmodelfit" rel="nofollow noreferrer">tutorial</a> shows it is outside. The first one shows it outside</p> <pre><code>mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
    model.compile(loss='mse', optimizer='sgd')
</code></pre> <p>but says this right after</p> <blockquote> <p>In this example we used MirroredStrategy so we can run this on a machine with multiple GPUs. strategy.scope() indicates to Keras which strategy to use to distribute the training. Creating models/optimizers/metrics inside this scope allows us to create distributed variables instead of regular variables. Once this is set up, you can fit your model like you would normally. MirroredStrategy takes care of replicating the model's training on the available GPUs, aggregating gradients, and more.</p> </blockquote>
<p>It does not matter where it goes because under the hood, <code>model.compile()</code> would create the optimizer, loss and accuracy metric variables under the strategy scope in use. Then you can call <code>model.fit</code>, which would also schedule a training loop under the same strategy scope.</p> <p>I would suggest further searching as my answer does not have any experimental basis to it. It's just what I think.</p>
python|tensorflow|keras|multi-gpu
0
9,137
54,522,336
How do custom input_shape for Inception V3 in Keras work?
<p>I know that the <code>input_shape</code> for Inception V3 is <code>(299,299,3)</code>. But in Keras it is possible to construct versions of Inception V3 that have custom <code>input_shape</code> if <code>include_top</code> is <code>False</code>.</p> <blockquote> <p>"input_shape: optional shape tuple, only to be specified if <code>include_top</code> is <code>False</code> (otherwise the input shape has to be <code>(299, 299, 3)</code> (with <code>'channels_last'</code> data format) or <code>(3, 299, 299)</code> (with <code>'channels_first'</code> data format). It should have exactly 3 inputs channels, and width and height should be no smaller than 75. E.g. <code>(150, 150, 3)</code> would be one valid value" - <a href="https://keras.io/applications/#inceptionv3" rel="nofollow noreferrer">https://keras.io/applications/#inceptionv3</a></p> </blockquote> <p>How is this possible and why can it only have custom input_shape if <code>include_top</code> is <code>false</code>?</p>
<p>This is possible because the model is fully convolutional. Convolutions don't care about the image size, they're "sliding filters". If you have big images, you have big outputs, if small images, small outputs. (The filters, though, have a fixed size defined by <code>kernel_size</code> and input and output filters) </p> <p>You cannot do that when you use <code>include_top</code> because this model is probably using a <code>Flatten()</code> layer followed by <code>Dense</code> layers at the end. <code>Dense</code> layers require a fixed input size (given by flatten based on the image size), otherwise it would be impossible to create trainable weights (having a variable number of weights doesn't make sense) </p>
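<p>A minimal sketch of what this looks like in practice (the 150x150 input and the ten-class head are made-up values; the custom top replaces the Flatten/Dense head that pins the input to 299x299):</p> <pre><code>from keras.applications.inception_v3 import InceptionV3
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

base = InceptionV3(include_top=False, weights='imagenet', input_shape=(150, 150, 3))
x = GlobalAveragePooling2D()(base.output)   # collapses whatever spatial size remains to a fixed-length vector
outputs = Dense(10, activation='softmax')(x)
model = Model(base.input, outputs)
</code></pre>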
python|tensorflow|machine-learning|keras|conv-neural-network
1
9,138
54,449,406
Find column values that are present for multiple months in pandas
<p>how to filter the ids which are existed into multiple months with DateTime in Pandas, Python 3 </p> <pre><code>DateTime ID 2011-01-30 08:00:59 367341093 2011-01-30 08:03:00 367341093 2011-02-01 08:03:59 367341093 2011-02-01 08:05:00 367341093 2011-03-12 08:05:00 367341093 2011-03-12 08:05:00 367341093 2011-01-15 08:05:00 367341034 2011-01-15 08:05:00 367341034 2011-01-15 08:05:00 367341012 2011-01-15 08:05:00 367341012 2011-01-15 08:05:00 367341012 2011-02-23 08:05:00 367341045 2011-02-23 08:05:00 367341045 2011-03-01 08:05:00 367341045 </code></pre> <p>result should be two ids which are in multiple months 1,2 and 3</p> <pre><code>result = [367341045, 367341093] </code></pre>
<p>You can do this with <code>groupby</code> and <code>nunique</code>:</p> <pre><code>u = df['DateTime'].dt.month.groupby(df.ID).nunique() u ID 367341012 1 367341034 1 367341045 2 367341093 3 Name: DateTime, dtype: int64 u.index[u &gt; 1] # Int64Index([367341045, 367341093], dtype='int64', name='ID') </code></pre>
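<p>To get the exact list the question asks for, convert the filtered index:</p> <pre><code>result = u.index[u &gt; 1].tolist()
# [367341045, 367341093]
</code></pre>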
python|pandas|filter|group-by|pandas-groupby
2
9,139
54,682,457
requires_grad of params is True even with torch.no_grad()
<p>I am experiencing a strange problem with PyTorch today.</p> <p>When checking network parameters in the <code>with</code> scope, I am expecting <code>requires_grad</code> to be <code>False</code>, but apparently this is not the case unless I explicitly set all params myself.</p> <p><strong>Code</strong></p> <p>Link to Net -> <a href="https://gist.github.com/rexlow/90603a0a68c187a4baedb89fe319753f" rel="nofollow noreferrer">Gist</a></p> <pre><code>net = InceptionResnetV2() with torch.no_grad(): for name, param in net.named_parameters(): print("{} {}".format(name, param.requires_grad)) </code></pre> <p>The above code will tell me all the params are still requiring grad, unless I explicitly specify <code>param.requires_grad = False</code>.</p> <p>My <code>torch</code> version: <code>1.0.1.post2</code></p>
<p><code>torch.no_grad()</code> will disable gradient information for the <em>results</em> of operations involving tensors that have their <code>requires_grad</code> set to <code>True</code>. So consider the following:</p> <pre><code>import torch net = torch.nn.Linear(4, 3) input_t = torch.randn(4) with torch.no_grad(): for name, param in net.named_parameters(): print("{} {}".format(name, param.requires_grad)) out = net(input_t) print('Output: {}'.format(out)) print('Output requires gradient: {}'.format(out.requires_grad)) print('Gradient function: {}'.format(out.grad_fn)) </code></pre> <p>This prints</p> <pre><code>weight True bias True Output: tensor([-0.3311, 1.8643, 0.2933]) Output requires gradient: False Gradient function: None </code></pre> <p>If you remove <code>with torch.no_grad()</code>, you get</p> <pre><code>weight True bias True Output: tensor([ 0.5776, -0.5493, -0.9229], grad_fn=&lt;AddBackward0&gt;) Output requires gradient: True Gradient function: &lt;AddBackward0 object at 0x7febe41e3240&gt; </code></pre> <p>Note that in both cases the module parameters have <code>requires_grad</code> set to <code>True</code>, but in the first case the <code>out</code> tensor doesn't have a gradient function associated with it whereas it does in the second case.</p>
python|neural-network|pytorch
4
9,140
54,460,092
Matching dictionaries with columns and indices in DataFrame | python
<p>I have a DataFrame with column names as in the example below and indices from 0 to 1000. The dataframe is filled with zeros.</p> <pre><code>House 1 | House 2 | House 5 | House 8 | ...
0 1 2 3 4...
</code></pre> <p>Then, I have a dictionary, e.g.:</p> <pre><code>dict_of_houses = {'House 1':[100,201,306,387,500,900],'House 2':[31,87,254,675,987],'House 5':[23,45,67,123,345,654,789,808,864,987,999],'House 8':[23,675,786,858,868,912,934]}
</code></pre> <p><em>Dictionary name edited in order not to confuse anyone later.</em></p> <p>My goal is to:</p> <ul> <li>match every dict key with the corresponding column</li> <li>match every number in the list (the dictionary value) with the index</li> <li>if there is a match of index and column, change the cell to 1</li> <li>else: leave the zero</li> </ul> <p>How would you do that?</p>
<p>You can use a <code>for</code> loop:</p> <pre><code>for house, indices in dict_of_houses.items():
    df.loc[indices, house] = 1
</code></pre>
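<p>A slightly fuller sketch, assuming the zero-filled frame from the question is built with the dictionary keys as columns and indices 0 to 1000:</p> <pre><code>import pandas as pd

df = pd.DataFrame(0, index=range(1001), columns=list(dict_of_houses))

for house, indices in dict_of_houses.items():
    df.loc[indices, house] = 1
</code></pre>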
python|pandas|dictionary|dataframe|indexing
0
9,141
54,391,166
reshaping of an nparray returned "IndexError: tuple index out of range"
<p>reshaping of an nparray returned "IndexError: tuple index out of range"</p> <p>Following "<a href="https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/" rel="nofollow noreferrer">https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/</a>" I have made a dataframe from the csv file. Then taken those values into an nparray "dataset". Scaled the dataset then divided into train and test set. Made two columns (trainX, trainY) with the values and its 1 lagged vales. Then tried to reshape trainX. </p> <pre><code>dataset = passenger_data.values dataset = dataset.astype('float32') scale = MinMaxScaler(feature_range=(0,1)) dataset = scale.fit_transform(dataset) train, test = dataset[0:train_size, :], dataset[train_size:len(dataset), :] train_size = int(len(dataset) * 0.70) train, test = dataset[0:train_size, :], dataset[train_size:len(dataset), :] def create_coloumns(dataset, lag = 1): colX, colY = [], [] for i in range(len(dataset) - lag): a = dataset[i,0] colX.append(a) for j in range(lag, len(dataset)): b = dataset[j,0] colY.append(b) return np.array(colX), np.array(colY) trainX, trainY = create_coloumns(train, 1) testX, testY = create_coloumns(test, 1) trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1])) --------------------------------------------------------------------------- IndexError Traceback (most recent call last) &lt;ipython-input-62-96b89321dd69&gt; in &lt;module&gt; 1 # trainX.shape ----&gt; 2 trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1])) 3 # testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1])) IndexError: tuple index out of range </code></pre>
<p>Unlike in MATLAB, NumPy arrays can be one-dimensional, so the shape tuple has only one entry. Your <code>trainX</code> built by <code>create_coloumns</code> is such a 1-D array, which is why <code>trainX.shape[1]</code> raises the IndexError.</p> <pre><code>a = np.array([1,2,3,4])
a.shape[0] # ok
a.shape[1] # error
</code></pre>
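<p>One possible fix, assuming lag=1 as in the tutorial being followed, is to give the 1-D arrays explicit time-step and feature axes instead of indexing a second dimension that does not exist:</p> <pre><code>trainX = np.reshape(trainX, (trainX.shape[0], 1, 1))  # (samples, time steps, features)
testX = np.reshape(testX, (testX.shape[0], 1, 1))
</code></pre>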
python|numpy
3
9,142
71,112,423
python define function that retrieves data from API and then put into dataframe
<p>I need to get data from an API and then convert it into a pandas dataframe. So far I am able to achieve that using for loops (that was efficient when I had one dataset). Now that I have several dataframes, I was wondering how I can write a function that extracts the data from the API and builds a dataframe.</p> <p>Sample code:</p> <pre><code>motion_fourth = ['sensor1_id', 'sensor2_id', 'sensor3_id']
motion_fifth = ['sensor4_id', 'sensor5_id', 'sensor6_id']

motion_results_fourth = []
motion_results_fifth = []

# Iterate over get urls and retrieve json files
for sensor in motion_fourth:
    res = requests.get(f'https://apiv2.XXXX/sensors/{sensor}/Event/history?tstart={timestamp1}&amp;tend={timestamp2}', headers=headers)
    if res.status_code == 200:
        motion_results_fourth.append(res.json())
    else:
        print(f'Request to {sensor} failed.')

# Create motion dataframes
sensor1_motion = pd.DataFrame(motion_results_fourth[0])
sensor2_motion = pd.DataFrame(motion_results_fourth[1])
sensor3_motion = pd.DataFrame(motion_results_fourth[2])
</code></pre> <p>Then, after completing this for loop and converting to a dataframe, I would need to repeat it again for motion_fifth... So my question is: how can I define a function that retrieves the API data and puts it into a dataframe for several lists of sensor IDs (aka motion_fourth, motion_fifth, etc.)?</p>
<p>Maybe you can try storing the data in dictionaries.</p> <p>In my example below, I created one dictionary with a list of sensors per motion, and one dictionary where we can store a dictionary per motion. In that nested dictionary you link the results to the sensors.</p> <p>I haven't tested it, but maybe you can try it out.</p> <pre><code>motions = {'fourth': ['sensor1_id', 'sensor2_id', 'sensor3_id'],
           'fifth': ['sensor4_id', 'sensor5_id', 'sensor6_id']}
motion_results = {'fourth': {}, 'fifth': {}}

for motion, sensors in motions.items():
    for sensor in sensors:
        res = requests.get(f'https://apiv2.XXXX/sensors/{sensor}/Event/history?tstart={timestamp1}&amp;tend={timestamp2}', headers=headers)
        if res.status_code == 200:
            motion_results[motion][sensor] = res.json()
        else:
            print(f'Request to {motion} - {sensor} failed.')
</code></pre>
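<p>If each response should then end up as its own DataFrame, a hedged follow-up (assuming every stored result is a list of records, which is what <code>pd.DataFrame</code> expects here) could be:</p> <pre><code>dataframes = {
    (motion, sensor): pd.DataFrame(records)
    for motion, sensors in motion_results.items()
    for sensor, records in sensors.items()
}
# e.g. dataframes[('fourth', 'sensor1_id')]
</code></pre>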
python|pandas|function|loops
1
9,143
52,436,394
How to convert a pandas value_counts into a python list
<p>I have a pandas series and I am taking its value count using <code>value_counts</code>.I need to get it into a list. I tried <code>to_list()</code> but got error.</p> <pre><code>df['AAA'].value_counts(sort=True).to_list() </code></pre> <p>If I run, <code>df['AAA'].value_counts(sort=True)</code> , I will get something like</p> <pre><code>3 301 2 185 7 75 4 25 5 16 Name: AAA, dtype: int64 </code></pre> <p>How to convert this into : <code>[301,185,75,25,16]</code></p>
<p>Try <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.tolist.html" rel="nofollow noreferrer"><code>tolist()</code></a> not <code>to_list()</code> (without <code>_</code>): </p> <pre><code>df['AAA'].value_counts(sort=True).tolist() </code></pre>
python|pandas|list|numpy|dataframe
3
9,144
52,273,322
Snakemake and pandas syntax
<p>I have an input file as follows:</p> <pre><code>SampleName Run Read1 Read2
A run1 test/true_data/4k_R1.fq test/true_data/4k_R2.fq
A run2 test/samples/A.fastq test/samples/A2.fastq
B run1 test/samples/B.fastq test/samples/B2.fastq
C run1 test/samples/C.fastq test/samples/C5.fastq
D
</code></pre> <p>So I am getting all indexes in an array:</p> <pre><code>sample_table = pd.read_table('samples.tsv', sep=' ', lineterminator='\n')
sample_table = sample_table.drop_duplicates(subset='SampleName', keep='first', inplace=False)
sample_table = sample_table.dropna()
sample_table.set_index('SampleName',inplace=True)
sample_ID=sample_table.index.values
</code></pre> <p>At this point <code>sample_ID=['A' 'B' 'C']</code> which is what I want. Then I want to set a variable r1 that will correspond to Read1, and r2 to Read2, of each sample.</p> <pre><code>rule all:
    input:
        expand("test/fltr/{ID_sample}.fq", ID_sample=sample_ID)

rule send_reads:
    input:
        # Tried both ways but it does not work
        r1=sample_table.loc["{ID_sample}",'Read1']
        r2=sample_table.Read2["{ID_sample}"]
    output:
        "test/fltr/{ID_sample}{input.r1}.fq"
    shell:
        "touch {output}"
</code></pre> <p>I get the error</p> <blockquote> <p>the label [{ID_sample}] is not in the [index]</p> </blockquote> <p>Is it a syntax error or a bigger mistake?</p> <p>I am just starting to use Snakemake, I thought I had understood it after the tutorial but obviously I did not.</p>
<p>A <code>lambda</code> function can be used to get that value.</p> <pre><code>input: lambda wildcards: sample_table.Read2[wildcards.ID_sample]
</code></pre> <p>Note that Snakemake input functions receive only the <code>wildcards</code> object. Also, based on your <code>rule all</code>, your <code>output</code> needs to be <code>test/fltr/{ID_sample}.fq</code>. And you have to use a comma to separate the two variables in <code>input</code>.</p>
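<p>Putting both points together, a hedged sketch of the rule (assuming <code>sample_table</code> is indexed by SampleName as in the question):</p> <pre><code>rule send_reads:
    input:
        r1=lambda wildcards: sample_table.loc[wildcards.ID_sample, 'Read1'],
        r2=lambda wildcards: sample_table.loc[wildcards.ID_sample, 'Read2']
    output:
        "test/fltr/{ID_sample}.fq"
    shell:
        "touch {output}"
</code></pre>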
pandas|snakemake
0
9,145
60,758,929
Trying to load a tflite model fails with java.io.FileNotFoundException - what am I doing wrong?
<h1>My issue</h1> <p>I'm trying to run my TensorFlow model (which manipulates images) on Android as tflite, but I keep getting <strong>java.io.FileNotFoundException</strong>.</p> <p>Don't bother reading all the Java code - it fails before it even starts, when trying to load the model:</p> <pre><code> // Initialize model try{ MappedByteBuffer tfliteModel = FileUtil.loadMappedFile(this, "model.tflite"); tflite = new Interpreter(tfliteModel); } catch (IOException e){ Log.e("tfliteException", "Error: couldn't load tflite model.", e); } </code></pre> <h1>My code</h1> <p><strong>build.gradle</strong></p> <pre><code>apply plugin: 'com.android.application' android { compileSdkVersion 29 buildToolsVersion "29.0.3" defaultConfig { applicationId "com.example.tflite" minSdkVersion 27 targetSdkVersion 29 versionCode 1 versionName "1.0" testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner" } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro' } } // tf lite aaptOptions { noCompress "tflite" } } dependencies { implementation fileTree(dir: 'libs', include: ['*.jar']) implementation 'androidx.appcompat:appcompat:1.1.0' implementation 'androidx.constraintlayout:constraintlayout:1.1.3' testImplementation 'junit:junit:4.12' androidTestImplementation 'androidx.test.ext:junit:1.1.1' androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0' // tf lite implementation 'org.tensorflow:tensorflow-lite:+' // tf lite gpu implementation 'org.tensorflow:tensorflow-lite-gpu:+' // tf lite support library implementation 'org.tensorflow:tensorflow-lite-support:+' } </code></pre> <p><strong>AndroidManifest.xml</strong></p> <pre><code>&lt;?xml version="1.0" encoding="utf-8"?&gt; &lt;manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.tflite"&gt; &lt;uses-permission android:name="android.permission.ACCESS_MEDIA_LOCATION"/&gt; &lt;application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:roundIcon="@mipmap/ic_launcher_round" android:supportsRtl="true" android:theme="@style/AppTheme"&gt; &lt;activity android:name=".MainActivity"&gt; &lt;intent-filter&gt; &lt;action android:name="android.intent.action.MAIN" /&gt; &lt;category android:name="android.intent.category.LAUNCHER" /&gt; &lt;/intent-filter&gt; &lt;/activity&gt; &lt;/application&gt; &lt;/manifest&gt; </code></pre> <p><strong>MainActivity.java</strong></p> <pre><code>package com.example.tflite; import android.Manifest; import android.content.Intent; import android.content.pm.PackageManager; import android.content.res.AssetFileDescriptor; import android.graphics.Bitmap; import android.graphics.Matrix; import android.net.Uri; import android.os.Bundle; import android.provider.MediaStore; import android.util.Log; import android.view.View; import android.widget.Button; import android.widget.ImageView; import android.widget.TextView; import android.widget.Toast; import androidx.annotation.NonNull; import androidx.appcompat.app.AppCompatActivity; import androidx.core.app.ActivityCompat; import androidx.core.content.ContextCompat; import org.tensorflow.lite.DataType; import org.tensorflow.lite.Interpreter; import org.tensorflow.lite.support.common.FileUtil; import org.tensorflow.lite.support.image.ImageProcessor; import org.tensorflow.lite.support.image.TensorImage; import org.tensorflow.lite.support.image.ops.ResizeOp; import 
org.tensorflow.lite.support.image.ops.ResizeWithCropOrPadOp; import org.tensorflow.lite.support.image.ops.Rot90Op; import org.tensorflow.lite.support.tensorbuffer.TensorBuffer; import org.tensorflow.lite.support.model.Model; import java.io.FileInputStream; import java.io.IOException; import java.nio.ByteBuffer; import java.nio.ByteOrder; import java.nio.MappedByteBuffer; import java.nio.channels.FileChannel; public class MainActivity extends AppCompatActivity { private static final int PIXEL_SIZE = 3; // constants private int REQUEST_IMAGE_SELECT = 101; private int REQUEST_START = 201; public static final int MEDIA_ACCESS_PERMISSION_CODE = 101; private int imageHeight = 0, imageWidth = 0; // RGB presets private static final int IMAGE_MEAN = 128; private static final float IMAGE_STD = 128.0f; // variables Button selectButton, startButton; ImageView imageView; TextView result; Bitmap bitmap, outputImage; Interpreter tflite; private ByteBuffer imageByteData = null; private float accuracy = 0; private int[] imageIntValues; ImageProcessor imageProcessor = new ImageProcessor.Builder() .add(new ResizeOp(100, 100, ResizeOp.ResizeMethod.BILINEAR)) .build(); TensorImage tImage = new TensorImage(DataType.UINT8); TensorBuffer probabilityBuffer = TensorBuffer.createFixedSize(new int[]{1, 1001}, DataType.UINT8); @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // init startButton = findViewById(R.id.startButton); selectButton = findViewById(R.id.selectButton); imageView = findViewById(R.id.imageView); result = findViewById(R.id.resultTextView); // Initialize model try{ MappedByteBuffer tfliteModel = FileUtil.loadMappedFile(this, "model.tflite"); tflite = new Interpreter(tfliteModel); } catch (IOException e){ Log.e("tfliteException", "Error: couldn't load tflite model.", e); } // select button selectButton.setOnClickListener( new View.OnClickListener() { @Override public void onClick(View v) { askMediaPermission(REQUEST_IMAGE_SELECT); } }); // start button startButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { float[] accuracy = doTfLite(); result.setText(Float.toString(accuracy[0])); // close interperter tflite.close(); } }); } private void askMediaPermission(int request) { if(ContextCompat.checkSelfPermission(this, Manifest.permission.ACCESS_MEDIA_LOCATION) != PackageManager.PERMISSION_GRANTED) ActivityCompat.requestPermissions(this, new String[] {Manifest.permission.ACCESS_MEDIA_LOCATION}, MEDIA_ACCESS_PERMISSION_CODE); else { selectImage(request); } } @Override public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) { if(requestCode == MEDIA_ACCESS_PERMISSION_CODE){ if(grantResults.length &gt; 0 &amp;&amp; grantResults[0] == PackageManager.PERMISSION_GRANTED){ // access granted Toast.makeText(this, "Granted!", Toast.LENGTH_SHORT).show(); } else { // access denied Toast.makeText(this, "Media Access Permissions are required.", Toast.LENGTH_SHORT).show(); } } } private void selectImage(int request) { Intent intent = new Intent(); intent.setType("image/*"); intent.setAction(Intent.ACTION_GET_CONTENT); startActivityForResult(Intent.createChooser(intent, "Select"), request); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); // image select if (requestCode == REQUEST_IMAGE_SELECT &amp;&amp; resultCode == RESULT_OK 
&amp;&amp; data != null) { Uri uri = data.getData(); try { bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), uri); // int array init imageHeight = bitmap.getHeight(); imageWidth = bitmap.getWidth(); imageIntValues = new int[imageWidth * imageHeight]; // byte array init (for loading model) imageByteData = ByteBuffer.allocateDirect(imageHeight * imageWidth * PIXEL_SIZE * 4); imageByteData.order(ByteOrder.nativeOrder()); imageView.setImageBitmap(bitmap); } catch (IOException e) { e.printStackTrace(); } } // start else if (requestCode == REQUEST_START &amp;&amp; resultCode == RESULT_OK ) { Toast.makeText(this, "Request code of START.", Toast.LENGTH_SHORT).show(); } } public float[] doTfLite(){ // Analysis code for every frame // Preprocess the image tImage.load(bitmap); tImage = imageProcessor.process(tImage); // Running inference if (tflite != null) { tflite.run(tImage.getBuffer(), probabilityBuffer.getBuffer()); } return probabilityBuffer.getFloatArray(); } //// loading model //private MappedByteBuffer loadModelFile() throws IOException { // // open the model using input stream, and memory-map it to load // AssetFileDescriptor fileDescriptor = this.getAssets().openFd("model.tflite"); // FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor()); // FileChannel fileChannel = inputStream.getChannel(); // long startOffset = fileDescriptor.getStartOffset(); // long declaredLength = fileDescriptor.getDeclaredLength(); // return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength); //} } </code></pre> <p>Any ideas?</p>
<p>I was stuck on this for 3 days, scanned the entire internet... and solved it a few hours after posting this...</p> <p><strong>I deleted my assets folder and re-created it</strong>. This solved my issue.</p> <p>I have no time to research into this, and I have no idea why this was what helped (since I've already tried to start from scratch twice), but I'm posting it for future reference (maybe also for Google's TFLite developers?).</p>
java|android|tensorflow|tensorflow-lite
4
9,146
60,708,252
Pandas two dataframes for loop: how to set value to old value + 1
<p>There are two dataframes, <strong>roads</strong> and <strong>bridges</strong>.</p> <p>If the kilometre point of a bridge falls between the chainage begin and end of a road segment, it should be checked which condition this bridge is in (this is A, B, C or D).</p> <p>If the condition is <strong>'A'</strong> (this is a string) --&gt; then in the roads df, the value under the column name <strong>'BridgeA'</strong> should become the <strong>old value + 1</strong>.</p> <p>In the end, this column will show how many bridges of condition A a road segment contains.</p> <p>The code we've written for this runs. However, it results in only zeros in the columns BridgeA, BridgeB, BridgeC and BridgeD.</p> <p>Can someone help us figure out what is going wrong?</p> <pre><code>for j in roads.index:  # loop through all the rows in the roads file
    for i in bridges.index:  # for every line in the bridges file
        if bridges.km[i] &gt; roads.Chainage_begin.loc[j] and \
           bridges.km[i] &lt; roads.Chainage_end.loc[j]:  # if the bridge falls within the road segment
            if bridges.condition[i] == 'A':
                roads.BridgeA.loc[j] =+ 1
            elif bridges.condition[i] == 'B':
                roads['BridgeB'].loc[j] =+ 1
            elif bridges.condition[i] == 'C':
                roads['BridgeC'].loc[j] =+ 1
            elif bridges.condition[i] == 'D':
                roads['BridgeD'].loc[j] =+ 1
        else:  # if it is out of range
            continue  # go on to the next row in the roads file
</code></pre>
<p>You should never use <code>df.column[row_index]</code> nor <code>df[column][row_index]</code>. It works when you read a value but is error prone when you try to set a value. The only robust syntax is</p> <pre><code>df.loc[row_index, col] = ... </code></pre> <p>So here you should use:</p> <pre><code> if bridges.condition[i] == 'A': roads.loc[j, 'BridgeA'] += 1 elif bridges.condition[i] == 'B': roads.loc[j, 'BridgeB'] += 1 elif bridges.condition[i] == 'C': roads.loc[j, 'BridgeC'] += 1 elif bridges.condition[i] == 'D': roads.loc[j, 'BridgeD'] += 1 </code></pre>
python|pandas|loops|dataframe|for-loop
0
9,147
60,374,258
How to implement a .transpose() as .T (like in numpy)
<p>Let's say I have the following class containing a Numpy array <code>a</code>.</p> <pre class="lang-py prettyprint-override"><code>class MyClass(): def __init__(self,a,b): self.a = a self.other_attributes = b def transpose(self): return MyClass(self.a.T,self.other_attributes) </code></pre> <p>Since this "transpose the data, keep the rest unchanged" method will be used quite often, I would like to implement a short-named attribute like Numpy's <code>.T</code>. My problem is that I don't know how to do it without calling <code>.transpose</code> at initialization, i. e., I only want to do the transpose when it is required, instead of saving it in another attribute. Is this possible?</p>
<p>Use a <code>property</code> to compute attributes. This can also be used to cache computed results for later use.</p> <pre><code>class MyClass(): def __init__(self, a, b): self.a = a self.other_attributes = b @property def T(self): try: return self._cached_T # attempt to read cached attribute except AttributeError: self._cached_T = self._transpose() # compute and cache return self._cached_T def _transpose(self): return MyClass(self.a.T, self.other_attributes) </code></pre> <p>Since Python 3.8, the standard library provides <a href="https://docs.python.org/3/library/functools.html#functools.cached_property" rel="nofollow noreferrer"><code>functools.cached_property</code></a> to automatically cache computed attributes.</p> <pre><code>from functools import cached_property class MyClass(): def __init__(self, a, b): self.a = a self.other_attributes = b @cached_property def T(self): return self._transpose() def _transpose(self): return MyClass(self.a.T, self.other_attributes) </code></pre>
python|numpy|class|transpose
0
9,148
72,565,029
Shape of data changing in Tensorflow dataset
<p>The shape of my data after the mapping function should be (257, 1001, 1). I asserted this condition in the function and the data passed without an issue. But when extracting a vector from the dataset, the shape comes out at (1, 257, 1001, 1). Tfds never fails to be a bloody pain.</p> <p>The code:</p> <pre><code>def read_npy_file(data): # 'data' stores the file name of the numpy binary file storing the features of a particular sound file # as a bytes string. # decode() is called on the bytes string to decode it from a bytes string to a regular string # so that it can passed as a parameter into np.load() data = np.load(data.decode()) # Shape of data is now (1, rows, columns) # Needs to be reshaped to (rows, columns, 1): data = np.reshape(data, (257, 1001, 1)) assert data.shape == (257, 1001, 1), f&quot;Shape of spectrogram is {data.shape}; should be (257, 1001, 1).&quot; return data.astype(np.float32) spectrogram_ds = tf.data.Dataset.from_tensor_slices((specgram_files, labels)) spectrogram_ds = spectrogram_ds.map( lambda file, label: tuple([tf.numpy_function(read_npy_file, [file], [tf.float32]), label]), num_parallel_calls=tf.data.AUTOTUNE) num_files = len(train_df) num_train = int(0.8 * num_files) num_val = int(0.1 * num_files) num_test = int(0.1 * num_files) spectrogram_ds = spectrogram_ds.shuffle(buffer_size=1000) specgram_train_ds = spectrogram_ds.take(num_train) specgram_test_ds = spectrogram_ds.skip(num_train) specgram_val_ds = specgram_test_ds.take(num_val) specgram_test_ds = specgram_test_ds.skip(num_val) specgram, _ = next(iter(spectrogram_ds)) # The following assertion raises an error; not the one in the read_npy_file function. assert specgram.shape == (257, 1001, 1), f&quot;Spectrogram shape is {specgram.shape}. Should be (257, 1001, 1)&quot; </code></pre> <p>I thought that the first dimension represented the batch size, which is 1, of course, before batching. But after batching by calling <code>batch(batch_size=64)</code> on the dataset, the shape of a batch was <code>(64, 1, 257, 1001, 1)</code> when it should be <code>(64, 257, 1001, 1)</code>.</p> <p>Would appreciate any help.</p>
<p>Although I still can't explain why I'm getting that output, I did find a workaround. I simply reshaped the data in another mapping like so:</p> <pre><code>def read_npy_file(data): # 'data' stores the file name of the numpy binary file storing the features of a particular sound file # as a bytes string. # decode() is called on the bytes string to decode it from a bytes string to a regular string # so that it can passed as a parameter into np.load() data = np.load(data.decode()) # Shape of data is now (1, rows, columns) # Needs to be reshaped to (rows, columns, 1): data = np.reshape(data, (257, 1001, 1)) assert data.shape == (257, 1001, 1), f&quot;Shape of spectrogram is {data.shape}; should be (257, 1001, 1).&quot; return data.astype(np.float32) specgram_ds = tf.data.Dataset.from_tensor_slices((specgram_files, one_hot_encoded_labels)) specgram_ds = specgram_ds.map( lambda file, label: tuple([tf.numpy_function(read_npy_file, [file], [tf.float32, ]), label]), num_parallel_calls=tf.data.AUTOTUNE) specgram_ds = specgram_ds.map(lambda specgram, label: tuple([tf.reshape(specgram, (257, 1001, 1)), label]), num_parallel_calls=tf.data.AUTOTUNE) </code></pre>
tensorflow|machine-learning|deep-learning|tensorflow-datasets
0
9,149
59,628,494
Merge two data frame based on specific condition in pandas
<p>I have two dataframe as shown below</p> <p>df1 - Inspector ID and assigned place</p> <p>df1: </p> <pre><code>Inspector_ID Assigned_Place 1 ['Bangalore', 'Chennai'] 2 ['Bangalore', 'Delhi', 'Chennai'] 3 ['Bangalore', 'Delhi'] 4 ['Chennai', 'Mumbai'] </code></pre> <p>df2 - Number of tickets raised by inspector in each place df2:</p> <pre><code>Inpector_ID Place Tickets 1 Bangalore 20 1 Mumbai 4 2 Bangalore 40 2 Delhi 4 3 Delhi 20 3 Mumbai 10 4 Chennai 20 4 Mumbai 8 </code></pre> <p>From the above dataframe I want to generate below data frame by .</p> <pre><code>Inpector_ID Place Tickets Assigned 1 Bangalore 20 Yes 1 Mumbai 4 No 1 Chennai 0 Yes 2 Bangalore 40 Yes 2 Delhi 4 Yes 2 Chennai 0 Yes 3 Delhi 20 Yes 3 Mumbai 10 No 3 Bangalore 0 Yes 4 Chennai 20 Yes 4 Mumbai 8 Yes </code></pre> <p>Adding more to the question</p> <p>df1 is the schedule for whole year 2019 ie same for whole month in 2019.</p> <p>df2:</p> <pre><code>Inpector_ID Place Tickets YearMonth 1 Bangalore 20 201901 1 Mumbai 4 201901 2 Bangalore 40 201901 2 Delhi 4 201901 3 Delhi 20 201901 3 Mumbai 10 201901 4 Chennai 20 201901 4 Mumbai 8 201901 1 Bangalore 20 201902 1 Mumbai 4 201902 2 Bangalore 40 201902 2 Delhi 4 201902 2 Chennai 8 201902 3 Delhi 20 201902 3 Mumbai 10 201902 4 Chennai 20 201902 4 Delhi 8 201902 </code></pre> <p>I would like to below dataframe</p> <p>Expected Output:</p> <pre><code> Inpector_ID Place Tickets YearMonth Assigned 1 Bangalore 20 201901 Yes 1 Chennai 0 201901 Yes 1 Mumbai 4 201901 No 2 Bangalore 40 201901 Yes 2 Delhi 4 201901 Yes 2 Chennai 0 201901 Yes 3 Delhi 20 201901 Yes 3 Mumbai 10 201901 No 3 Bangalore 0 201901 Yes 4 Chennai 20 201901 Yes 4 Mumbai 8 201901 Yes 1 Bangalore 20 201902 Yes 1 Mumbai 4 201902 No 1 Chennai 0 201901 Yes 2 Bangalore 40 201902 Yes 2 Delhi 4 201902 Yes 2 Chennai 8 201902 Yes 3 Delhi 20 201902 Yes 3 Mumbai 10 201902 No 3 Bangalore 0 201901 Yes 4 Chennai 20 201902 Yes 4 Delhi 8 201902 No 4 Mumbai 0 201902 Yes </code></pre>
<p>First convert column filled by lists by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>DataFrame.explode</code></a>, then <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge.html" rel="nofollow noreferrer"><code>merge</code></a> by outer join and indicator parameter and last set new column name:</p> <pre><code>df1 = df1.explode('Assigned_Place').rename(columns={'Assigned_Place':'Place'}) df = (df2.merge(df1, how='outer', indicator='Assigned') .sort_values(['Inspector_ID','Place']) .fillna({'Tickets':0}) .assign(Assigned = lambda x: np.where(x['Assigned'].eq('left_only'), 'No', 'Yes')) ) print (df) Inspector_ID Place Tickets Assigned 0 1 Bangalore 20.0 Yes 8 1 Chennai 0.0 Yes 1 1 Mumbai 4.0 No 2 2 Bangalore 40.0 Yes 9 2 Chennai 0.0 Yes 3 2 Delhi 4.0 Yes 10 3 Bangalore 0.0 Yes 4 3 Delhi 20.0 Yes 5 3 Mumbai 10.0 No 6 4 Chennai 20.0 Yes 7 4 Mumbai 8.0 Yes </code></pre> <p>EDIT: solution is similar, only is added cross join by all unique <code>YearMonth</code> values:</p> <pre><code>df1 = df1.explode('Assigned_Place').rename(columns={'Assigned_Place':'Place'}) df11 = pd.DataFrame({'YearMonth':df2['YearMonth'].unique(), 'a':1}) df1 = df1.assign(a=1).merge(df11, on='a').drop('a', 1) df = (df2.merge(df1, how='outer', indicator='Assigned') .sort_values(['Inspector_ID','Place']) .fillna({'Tickets':0}) .assign(Assigned = lambda x: np.where(x['Assigned'].eq('left_only'), 'No', 'Yes')) ) print (df) Inspector_ID Place Tickets YearMonth Assigned 0 1 Bangalore 20.0 201901 Yes 8 1 Bangalore 20.0 201902 Yes 17 1 Chennai 0.0 201901 Yes 18 1 Chennai 0.0 201902 Yes 1 1 Mumbai 4.0 201901 No 9 1 Mumbai 4.0 201902 No 2 2 Bangalore 40.0 201901 Yes 10 2 Bangalore 40.0 201902 Yes 12 2 Chennai 8.0 201902 Yes 19 2 Chennai 0.0 201901 Yes 3 2 Delhi 4.0 201901 Yes 11 2 Delhi 4.0 201902 Yes 20 3 Bangalore 0.0 201901 Yes 21 3 Bangalore 0.0 201902 Yes 4 3 Delhi 20.0 201901 Yes 13 3 Delhi 20.0 201902 Yes 5 3 Mumbai 10.0 201901 No 14 3 Mumbai 10.0 201902 No 6 4 Chennai 20.0 201901 Yes 15 4 Chennai 20.0 201902 Yes 16 4 Delhi 8.0 201902 No 7 4 Mumbai 8.0 201901 Yes 22 4 Mumbai 0.0 201902 Yes </code></pre>
pandas|merge|pandas-groupby
2
9,150
59,847,919
Failing to load native tensorflow runtime
<p>I have been using tensorflow (via Keras) to do some simple image classification tasks. It was working fine late last year (mid-december), but now I have done some package updates and get this error when attempting to run my code:</p> <pre><code>Traceback (most recent call last): File "C:\Python\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in &lt;module&gt; from tensorflow.python.pywrap_tensorflow_internal import * File "C:\Python\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in &lt;module&gt; _pywrap_tensorflow_internal = swig_import_helper() File "C:\Python\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "C:\Python\lib\imp.py", line 242, in load_module return load_dynamic(name, filename, file) File "C:\Python\lib\imp.py", line 342, in load_dynamic return _load(spec) ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "image_classify.py", line 3, in &lt;module&gt; from keras.preprocessing.image import array_to_img, img_to_array, load_img File "C:\Python\lib\site-packages\keras\__init__.py", line 3, in &lt;module&gt; from . import utils File "C:\Python\lib\site-packages\keras\utils\__init__.py", line 6, in &lt;module&gt; from . import conv_utils File "C:\Python\lib\site-packages\keras\utils\conv_utils.py", line 9, in &lt;module&gt; from .. import backend as K File "C:\Python\lib\site-packages\keras\backend\__init__.py", line 1, in &lt;module&gt; from .load_backend import epsilon File "C:\Python\lib\site-packages\keras\backend\load_backend.py", line 90, in &lt;module&gt; from .tensorflow_backend import * File "C:\Python\lib\site-packages\keras\backend\tensorflow_backend.py", line 5, in &lt;module&gt; import tensorflow as tf File "C:\Python\lib\site-packages\tensorflow\__init__.py", line 101, in &lt;module&gt; from tensorflow_core import * File "C:\Python\lib\site-packages\tensorflow_core\__init__.py", line 40, in &lt;module&gt; from tensorflow.python.tools import module_util as _module_util File "C:\Python\lib\site-packages\tensorflow\__init__.py", line 50, in __getattr__ module = self._load() File "C:\Python\lib\site-packages\tensorflow\__init__.py", line 44, in _load module = _importlib.import_module(self.__name__) File "C:\Python\lib\importlib\__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "C:\Python\lib\site-packages\tensorflow_core\python\__init__.py", line 49, in &lt;module&gt; from tensorflow.python import pywrap_tensorflow File "C:\Python\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 74, in &lt;module&gt; raise ImportError(msg) ImportError: Traceback (most recent call last): File "C:\Python\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in &lt;module&gt; from tensorflow.python.pywrap_tensorflow_internal import * File "C:\Python\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in &lt;module&gt; _pywrap_tensorflow_internal = swig_import_helper() File "C:\Python\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "C:\Python\lib\imp.py", line 242, in load_module return 
load_dynamic(name, filename, file) File "C:\Python\lib\imp.py", line 342, in load_dynamic return _load(spec) ImportError: DLL load failed: The specified module could not be found. </code></pre> <p>The natural thing to suspect is that some package update broke something, but I'm not sure which one it might be. I have the (current) most up-to-date versions of both tensorflow (2.1.0) and Keras (2.3.1). I am using a Windows 10 PC with an intel i5-7200U cpu. I am not using tensorflow-gpu because I am running this small task on a laptop without a dedicated gpu, if that helps.</p> <p>Could anyone help me figure out what changed? Thanks!</p>
<p>The common solution for this issue is to install/update the Microsoft Visual C++ 2015-2019 Redistributable (x64) from <a href="https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads" rel="nofollow noreferrer">here</a>.</p> <p>For more details, please refer to a similar question answered <a href="https://stackoverflow.com/a/61656869/13465258">here</a>.</p>
python|tensorflow|keras
0
9,151
59,718,130
What are C classes for a NLLLoss loss function in Pytorch?
<p>I'm asking about C classes for a <a href="https://pytorch.org/docs/stable/nn.html?highlight=nllloss#torch.nn.NLLLoss" rel="noreferrer">NLLLoss</a> loss function.</p> <p>The documentation states:</p> <blockquote> <p>The negative log likelihood loss. It is useful to train a classification problem with C classes.</p> </blockquote> <p>Basically everything after that point depends upon you knowing what a C class is, and I thought I knew what a C class was but the documentation doesn't make much sense to me. Especially when it describes the expected inputs of <code>(N, C) where C = number of classes</code>. That's where I'm confused, because I thought a C class refers to the <em>output</em> only. My understanding was that the C class was a one hot vector of classifications. I've often found in tutorials that the <code>NLLLoss</code> was often paired with a <code>LogSoftmax</code> to solve classification problems.</p> <p>I was expecting to use <code>NLLLoss</code> in the following example:</p> <pre class="lang-py prettyprint-override"><code># Some random training data input = torch.randn(5, requires_grad=True) print(input) # tensor([-1.3533, -1.3074, -1.7906, 0.3113, 0.7982], requires_grad=True) # Build my NN (here it's just a LogSoftmax) m = nn.LogSoftmax(dim=0) # Train my NN with the data output = m(input) print(output) # tensor([-2.8079, -2.7619, -3.2451, -1.1432, -0.6564], grad_fn=&lt;LogSoftmaxBackward&gt;) loss = nn.NLLLoss() print(loss(output, torch.tensor([1, 0, 0]))) </code></pre> <p>The above raises the following error on the last line:</p> <blockquote> <p>ValueError: Expected 2 or more dimensions (got 1)</p> </blockquote> <p>We can ignore the error, because clearly I don't understand what I'm doing. Here I'll explain my intentions of the above source code.</p> <pre class="lang-py prettyprint-override"><code>input = torch.randn(5, requires_grad=True) </code></pre> <p>Random 1D array to pair with one hot vector of <code>[1, 0, 0]</code> for training. I'm trying to do a binary bits to one hot vector of decimal numbers.</p> <pre class="lang-py prettyprint-override"><code>m = nn.LogSoftmax(dim=0) </code></pre> <p>The documentation for <code>LogSoftmax</code> says that the output will be the same shape as the input, but I've only seen examples of <code>LogSoftmax(dim=1)</code> and therefore I've been stuck trying to make this work because I can't find a relative example.</p> <pre class="lang-py prettyprint-override"><code>print(loss(output, torch.tensor([1, 0, 0]))) </code></pre> <p>So now I have the output of the NN, and I want to know the loss from my classification <code>[1, 0, 0]</code>. It doesn't really matter in this example what any of the data is. I just want a loss for a one hot vector that represents classification.</p> <p>At this point I get stuck trying to resolve errors from the loss function relating to expected output and input structures. 
I've tried using <code>view(...)</code> on the output and input to fix the shape, but that just gets me other errors.</p> <p>So this goes back to my original question and I'll show the example from the documentation to explain my confusion:</p> <pre class="lang-py prettyprint-override"><code>m = nn.LogSoftmax(dim=1) loss = nn.NLLLoss() input = torch.randn(3, 5, requires_grad=True) train = torch.tensor([1, 0, 4]) print('input', input) # input tensor([[...],[...],[...]], requires_grad=True) output = m(input) print('train', output, train) # tensor([[...],[...],[...]],grad_fn=&lt;LogSoftmaxBackward&gt;) tensor([1, 0, 4]) x = loss(output, train) </code></pre> <p>Again, we have <code>dim=1</code> on <code>LogSoftmax</code> which confuses me now, because look at the <code>input</code> data. It's a <code>3x5</code> tensor and I'm lost.</p> <p>Here's the documentation on the first input for the <code>NLLLoss</code> function:</p> <blockquote> <p>Input: (N, C)(N,C) where C = number of classes</p> </blockquote> <p>The inputs are <em>grouped</em> by the number of classes?</p> <p>So each <em>row</em> of the tensor input is associated with each <em>element</em> of the training tensor?</p> <p>If I change the second dimension of the input tensor, then <em>nothing breaks</em> and I don't understand what is going on.</p> <pre class="lang-py prettyprint-override"><code>input = torch.randn(3, 100, requires_grad=True) # 3 x 100 still works? </code></pre> <p>So I don't understand what a C class is here, and I thought a C class was a classification (like a label) and meaningful only on the outputs of the NN.</p> <p>I hope you understand my confusion, because shouldn't the shape of the inputs for the NN be independent from the shape of the one hot vector used for classification?</p> <p>Both the code examples and documentations say that the shape of the inputs is defined by the number of classifications, and I don't really understand why.</p> <p>I have tried to study the documentations and tutorials to understand what I'm missing, but after several days of not being able to get past this point I've decided to ask this question. It's been humbling because I thought this was going to be one of the easier things to learn.</p>
<p>Basically you are missing the concept of <code>batch</code>.</p> <p>Long story short, every input to the loss (and the one passed through the network) requires a <code>batch</code> dimension (i.e. how many samples are used).</p> <p>Breaking it up, step by step:</p> <h1>Your example vs documentation</h1> <p>Each step is compared to make it clearer (documentation on top, your example below).</p> <h2>Inputs</h2> <pre><code>input = torch.randn(3, 5, requires_grad=True)
input = torch.randn(5, requires_grad=True)
</code></pre> <p>In the first case (docs), input with <code>5</code> features is created and <code>3</code> samples are used. In your case there is only a <code>batch</code> dimension (<code>5</code> samples), you have no features <strong>which are required</strong>. If you meant to have one sample with <code>5</code> features you should do:</p> <pre><code>input = torch.randn(1, 5, requires_grad=True)
</code></pre> <h2>LogSoftmax</h2> <p><code>LogSoftmax</code> is done across the features dimension, you are doing it across the batch.</p> <pre><code>m = nn.LogSoftmax(dim=1) # apply over features
m = nn.LogSoftmax(dim=0) # apply over batch
</code></pre> <p>Applying it over the batch usually makes no sense, as samples are independent of each other.</p> <h2>Targets</h2> <p>As this is multiclass classification and each element in the vector represents a sample, one can pass as many numbers as one wants (as long as each is smaller than the number of classes; in the documentation example it's <code>5</code>, hence <code>[0-4]</code> is fine).</p> <pre><code>train = torch.tensor([1, 0, 4])
train = torch.tensor([1, 0, 0])
</code></pre> <p>I assume you wanted to pass a one-hot vector as the target as well. PyTorch doesn't work that way as it's <strong>memory inefficient</strong> (why store everything as one-hot encoded when you can just pinpoint exactly the class, in your case it would be <code>0</code>).</p> <p>Only outputs of the neural network are one-hot encoded in order to backpropagate error through all output nodes; it's not needed for targets.</p> <h1>Final</h1> <p><strong>You shouldn't</strong> use <code>torch.nn.LogSoftmax</code> <strong>at all</strong> for this task. Just use <code>torch.nn.Linear</code> as the last layer and use <code>torch.nn.CrossEntropyLoss</code> with your targets.</p>
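<p>To make that last point concrete, here is a minimal, self-contained sketch (the layer size, inputs and targets below are made up for illustration) of the recommended <code>torch.nn.Linear</code> + <code>torch.nn.CrossEntropyLoss</code> combination:</p> <pre><code>import torch
import torch.nn as nn

# toy setup: batch of 3 samples, 5 input features, 4 classes
model = nn.Linear(5, 4)            # last layer outputs raw logits, no LogSoftmax
criterion = nn.CrossEntropyLoss()  # applies LogSoftmax + NLLLoss internally

inputs = torch.randn(3, 5)         # (batch, features)
targets = torch.tensor([1, 0, 3])  # class indices, not one-hot, each in [0, 4)

logits = model(inputs)             # shape (3, 4) == (batch, C)
loss = criterion(logits, targets)
loss.backward()
print(loss.item())
</code></pre>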
python|machine-learning|neural-network|pytorch
5
9,152
59,644,859
AttributeError: module 'tensorflow_core.compat.v1' has no attribute 'contrib'
<pre><code>x = tf.placeholder(dtype = tf.float32, shape = [None, 28, 28])
y = tf.placeholder(dtype = tf.int32, shape = [None])
images_flat = tf.contrib.layers.flatten(x)
logits = tf.contrib.layers.fully_connected(images_flat, 62, tf.nn.relu)
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels = y, logits = logits))
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
correct_pred = tf.argmax(logits, 1)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

print("images_flat: ", images_flat)
print("logits: ", logits)
print("loss: ", loss)
print("predicted_labels: ", correct_pred)


AttributeError                            Traceback (most recent call last)
&lt;ipython-input-17-183722ce66a3&gt; in &lt;module&gt;
      1 x = tf.placeholder(dtype = tf.float32, shape = [None, 28, 28])
      2 y = tf.placeholder(dtype = tf.int32, shape = [None])
----&gt; 3 images_flat = tf.contrib.layers.flatten(x)
      4 logits = tf.contrib.layers.fully_connected(images_flat, 62, tf.nn.relu)
      5 loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels = y, logits = logits))

AttributeError: module 'tensorflow_core.compat.v1' has no attribute 'contrib'
</code></pre> <p>This is my code in a Jupyter Notebook. I just started with Python and get the error I mentioned in the headline. I would be very thankful if someone could help me with a code example to solve the problem.</p>
<p><code>tf.contrib</code> was removed from TensorFlow starting with the TensorFlow 2.0 alpha version.</p> <p>Most likely, you are already using TensorFlow 2.0.</p> <p>You can find more details here: <a href="https://github.com/tensorflow/tensorflow/releases/tag/v2.0.0-alpha0" rel="noreferrer">https://github.com/tensorflow/tensorflow/releases/tag/v2.0.0-alpha0</a></p> <p>To use a specific older version of tensorflow, use</p> <pre><code>pip install tensorflow==1.14
</code></pre> <p>or</p> <pre><code>pip install tensorflow-gpu==1.14
</code></pre>
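<p>If you would rather stay on TensorFlow 2.x than downgrade, the two <code>tf.contrib.layers</code> calls in your snippet have rough equivalents in <code>tf.keras.layers</code>. This is only a hedged sketch (not tested against your exact setup), keeping the rest of your graph-mode code via <code>tf.compat.v1</code>:</p> <pre><code>import tensorflow as tf
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(dtype=tf.float32, shape=[None, 28, 28])
y = tf.compat.v1.placeholder(dtype=tf.int32, shape=[None])

# rough stand-ins for the removed tf.contrib.layers calls
images_flat = tf.keras.layers.Flatten()(x)
logits = tf.keras.layers.Dense(62, activation=tf.nn.relu)(images_flat)

loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
</code></pre> <p>The optimizer line can then stay as <code>tf.compat.v1.train.AdamOptimizer(...)</code>.</p>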
python|tensorflow|jupyter|tensorflow2.0
11
9,153
61,637,754
Python: Overlapping date ranges into time series
<p>My data set is much larger so I have simplified it.</p> <p>I want to convert the dataframe into a time-series.</p> <p>The bit I am stuck on:</p> <p>I have overlapping date ranges, where I have a smaller date range inside a larger one, as shown by row 0 and row 1, where row 1 and row 2 are inside the date range of row 0.</p> <pre><code>df: date1 date2 reduction 0 2016-01-01 - 2016-01-05 7.0 1 2016-01-02 - 2016-01-03 5.0 2 2016-01-03 - 2016-01-04 6.0 3 2016-01-05 - 2016-01-12 10.0 </code></pre> <p>How I want the output to look:</p> <pre><code> date1 date2 reduction 0 2016-01-01 2016-01-02 7.0 1 2016-01-02 2016-01-03 5.0 2 2016-01-03 2016-01-04 6.0 3 2016-01-04 2016-01-05 7.0 4 2016-01-05 2016-01-06 10.0 5 2016-01-06 2016-01-07 10.0 6 2016-01-07 2016-01-08 10.0 7 2016-01-08 2016-01-09 10.0 8 2016-01-09 2016-01-10 10.0 9 2016-01-10 2016-01-11 10.0 10 2016-01-11 2016-01-12 10.0 </code></pre>
<p>I prepared two consecutive columns of data with minimum and maximum dates and ran updates from the original DF.</p> <pre><code>import pandas as pd
import numpy as np
import io

data='''
 date1 date2 reduction
0 2016-01-01 2016-01-05 7.0
1 2016-01-02 2016-01-03 5.0
2 2016-01-03 2016-01-04 6.0
3 2016-01-05 2016-01-12 10.0
'''
df = pd.read_csv(io.StringIO(data), sep=' ', index_col=0)

date_1 = pd.date_range(df.date1.min(), df.date2.max())
date_2 = pd.date_range(df.date1.min(), df.date2.max())
df2 = pd.DataFrame({'date1':date_1, 'date2':date_2, 'reduction':[0]*len(date_1)})
df2['date2'] = df2.date2.shift(-1)
df2.dropna(inplace=True)

# use .loc to avoid chained-assignment warnings
for i in range(len(df)):
    mask = (df2.date1 &gt;= df.date1.iloc[i]) &amp; (df2.date2 &lt;= df.date2.iloc[i])
    df2.loc[mask, 'reduction'] = df.reduction.iloc[i]

df2

    date1       date2       reduction
0   2016-01-01  2016-01-02  7
1   2016-01-02  2016-01-03  5
2   2016-01-03  2016-01-04  6
3   2016-01-04  2016-01-05  7
4   2016-01-05  2016-01-06  10
5   2016-01-06  2016-01-07  10
6   2016-01-07  2016-01-08  10
7   2016-01-08  2016-01-09  10
8   2016-01-09  2016-01-10  10
9   2016-01-10  2016-01-11  10
10  2016-01-11  2016-01-12  10
</code></pre>
python|pandas|time-series
1
9,154
55,045,752
handwriting text recognition (CNN + LSTM + CTC) RNN explanation required
<p>I am trying to understand the following code, which is in Python &amp; TensorFlow. I'm trying to implement handwriting text recognition. I am referring to the code <a href="https://github.com/githubharald/SimpleHTR/blob/master/src/Model.py" rel="nofollow noreferrer">here</a>.</p> <p>I don't understand why the RNN output is put through an "atrous_conv2d".</p> <p>This is the architecture of my model: it takes a CNN input, passes it into this RNN, and then passes it to a CTC.</p> <pre><code>def build_RNN(self, rnnIn4d):
    rnnIn3d = tf.squeeze(rnnIn4d, axis=[2])  # squeeze removes 1-sized dimensions, here it removes the 2nd index

    n_hidden = 256
    n_layers = 2
    cells = []
    for _ in range(n_layers):
        cells.append(tf.nn.rnn_cell.LSTMCell(num_units=n_hidden))

    stacked = tf.nn.rnn_cell.MultiRNNCell(cells)  # combine the 2 LSTMCell created

    # BxTxF -&gt; BxTx2H
    ((fw, bw), _) = tf.nn.bidirectional_dynamic_rnn(cell_fw=stacked, cell_bw=stacked,
                                                    inputs=rnnIn3d, dtype=rnnIn3d.dtype)

    # BxTxH + BxTxH -&gt; BxTx2H -&gt; BxTx1X2H
    concat = tf.expand_dims(tf.concat([fw, bw], 2), 2)

    # project output to chars (including blank): BxTx1x2H -&gt; BxTx1xC -&gt; BxTxC
    kernel = tf.Variable(tf.truncated_normal([1, 1, n_hidden * 2, len(self.char_list) + 1], stddev=0.1))
    rnn = tf.nn.atrous_conv2d(value=concat, filters=kernel, rate=1, padding='SAME')

    return tf.squeeze(rnn, axis=[2])
</code></pre>
<p>The input to the CTC loss layer will be of the form B x T x C, where:</p> <ul> <li>B - batch size</li> <li>T - max length of the output (twice the max word length due to the blank char)</li> <li>C - number of characters + 1 (blank char)</li> </ul> <p>The input to the atrous convolution is of shape (B x T x 1 x 2H) == (batch, height, width, channels), where H is the number of hidden units. The filter we are using is (1, 1, 2H, C) == (height, width, input channels, output channels).</p> <p>After the atrous convolution we get (B, T, 1, C), and squeezing the third axis gives the desired (B, T, C) input for CTC.</p> <p>Note: we take a transpose before we input our image to the CNN since tf is row major.</p> <p>An atrous convolution with rate 1 is the same as a normal conv layer.</p>
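<p>For intuition, an atrous convolution with <code>rate=1</code> reduces to an ordinary 1x1 convolution, so that line is just a per-timestep projection from 2H features to C character scores. A hedged, equivalent formulation in the same TF 1.x style (the sizes below are made up for illustration):</p> <pre><code>import tensorflow as tf

# hypothetical sizes: batch B=4, time steps T=32, hidden units H=256, characters C=80
concat = tf.placeholder(tf.float32, [4, 32, 1, 2 * 256])    # BxTx1x2H coming out of the BiLSTM
kernel = tf.Variable(tf.truncated_normal([1, 1, 2 * 256, 80], stddev=0.1))

# rate=1 atrous conv is the same as a 1x1 conv: per-timestep projection from 2H to C
proj = tf.nn.conv2d(concat, kernel, strides=[1, 1, 1, 1], padding='SAME')
logits = tf.squeeze(proj, axis=[2])                          # BxTxC, ready for the CTC loss
</code></pre>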
python|tensorflow|deep-learning|handwriting
1
9,155
54,772,063
Numpy next element minus previous
<p>I would like to subtract the next element from current element in a specific axis from numpy array. But, I know how to do that using a lot of loops. My question is: How to do that in the most efficient way? Maybe using numpy?</p> <p>My Python code below:</p> <pre><code>import numpy as np np.random.seed(0) myarr = np.random.rand(20, 7, 11, 151, 161) newarr = np.full((20, 6, 11, 151, 161), np.nan) for iyr in range(20): for iwk in range(6): for imb in range(11): for ilat in range(151): for ilon in range(161): newarr[iyr, iwk, imb, ilat, ilon] = myarr[iyr, iwk + 1, imb, ilat, ilon] - myarr[iyr, iwk, imb, ilat, ilon] </code></pre>
<p>There are a few good ways of doing this. If you did not care about the last element being NaN, you could use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.diff.html" rel="noreferrer"><code>np.diff</code></a></p> <pre><code>myarr = np.random.rand(20, 7, 11, 151, 161) newarr = np.diff(myarr, axis=1) </code></pre> <p>The result will have shape <code>(20, 6, 11, 151, 161)</code>.</p> <p>If you really want to keep those NaNs, I would recommend using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.empty_like.html" rel="noreferrer"><code>np.empty_like</code></a> and <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.subtract.html" rel="noreferrer"><code>np.subtract</code></a>. Allocating with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.full.html" rel="noreferrer"><code>np.full</code></a> is somewhat wasteful since you are definitely setting almost all the elements. The only NaNs you have are in the last index along the second dimension, which you can initialize very cheaply yourself:</p> <pre><code>myarr = np.random.rand(20, 7, 11, 151, 161) newarr = np.empty_like(myarr) # Don't repeat shape newarr[:, -1, ...] = np.nan np.subtract(myarr[:, 1:, ...], myarr[:, :-1, ...], out=newarr[:, :-1, ...]) </code></pre> <p>Since <code>myarr[:, 1:, ...]</code>, <code>myarr[:, :-1, ...]</code> and <code>newarr[:, :-1, ...]</code> are views, this operation avoids temporary arrays and unnecessary initialization almost completely.</p>
python|numpy
9
9,156
55,008,961
How to set different excel files to different variales from the same folder using Python?
<p>I want to load different xls files from the same folder and set them to different variables. I've done that manually but I wonder that that should be a better way to do it. </p> <p>Here is my code:</p> <pre><code>import os import pandas as pd os.chdir('/home/marlon/ShiftOne/Previsao_insumos_construcao/dados_de_sao_paulo/insumos infraestrutura/') insumos_mai2004 = pd.read_excel('Custos_Unitarios_Edificacoes_Maio2004.xls') insumos_jan2006 = pd.read_excel('Custos_Unitarios_Edificacoes_Janeiro2006.xls') insumos_jul2006 = pd.read_excel('Custos_Unitarios_Edificacoes_Julho2006.xls') insumos_jan2007 = pd.read_excel('Custos_Unitarios_Edificacoes_jan2007.xls') insumos_jul2007 = pd.read_excel('Custos_Unitarios_Edificacoes_jul2007.xls') insumos_jan2008 = pd.read_excel('custos_unitarios_edif jan 2008.xls') insumos_jul2008 = pd.read_excel('Custos_Unitarios_Edif (1).xls') insumos_jan2009 = pd.read_excel('Custos_Unitarios_Edif.xls') insumos_jul2009 = pd.read_excel('custos_unit_edif_jul_09.xls') insumos_jan2010 = pd.read_excel('Custos_Unit_EDIF_Jan_2010(1).xls') insumos_jul2010 = pd.read_excel('Custos Unit EDIF Jul 2010.xls') insumos_jan2011 = pd.read_excel('Custos Unit EDIF Jan 2011.xls') insumos_jul2011 = pd.read_excel('Custos Unit_ EDIF Julho 2011.xls') insumos_jan2012 = pd.read_excel('Custos Unit - EDIF Jan 2012.xls') insumos_jul2012 = pd.read_excel('Custos Unit EDIF Julho 2012.xls') insumos_jan2013 = pd.read_excel('Custos Unit_ EDIF Jan 2013.xls') insumos_jul2013 = pd.read_excel('Custos Unit EDIF Julho 2013.xls') insumos_set2013 = pd.read_excel('Custos Unit EDIF COM Deson SET13.xls') insumos_jan2014 = pd.read_excel('Custos Unit EDIF SEM Des Jan2014.xls') insumos_jul2014 = pd.read_excel('Custos Unit EDIF SEM Des Julho2014.xls') insumos_jan2015 = pd.read_excel('Custos Unit EDIF SEM Des Jan15.xls') insumos_jul2015 = pd.read_excel('Custos Unit_ EDIF SEM Des SET 2015.xls') insumos_jan2016 = pd.read_excel('Custos Unit EDIF SEM Des JAN 2016(1).xls') insumos_jul2016 = pd.read_excel('Custos Unit EDIF SEM Des Julho 2016(1).xls') insumos_jan2017 = pd.read_excel('Custos Unit EDIF SEM Des JAN 2017.xls') insumos_jul2017 = pd.read_excel('Custos Unit EDIF SEM Des Julho 2017.xlsx') insumos_jan2018 = pd.read_excel('Custos Unit_ EDIF SEM Des JAN 2018.xls') </code></pre>
<p>Another way of doing this might be to store the xls files in a dictionary with the desired names as the keys. For example:</p> <pre><code>my_dict = {} my_dict['insumos_mai2004'] = pd.read_excel('Custos_Unitarios_Edificacoes_Maio2004.xls') </code></pre> <p>However, this won't really fix the core problem: your naming convention isn't great.</p> <p>Consider thinking of a way to more systematically define your names to reflect your xls filename, or a way of renaming your xls files to be easily converted into a dictionary key (name).</p>
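<p>As a hedged sketch of that idea (the mapping below just reuses a few of your filenames; extend it with the rest), you can keep one dictionary from the name you want to the file it comes from and build everything in a loop:</p> <pre><code>import pandas as pd

# desired name mapped to its filename
files = {
    'insumos_mai2004': 'Custos_Unitarios_Edificacoes_Maio2004.xls',
    'insumos_jan2006': 'Custos_Unitarios_Edificacoes_Janeiro2006.xls',
    'insumos_jul2006': 'Custos_Unitarios_Edificacoes_Julho2006.xls',
    # ... remaining files
}

insumos = {name: pd.read_excel(fname) for name, fname in files.items()}

# access a single period like this
print(insumos['insumos_mai2004'].head())
</code></pre>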
python|excel|pandas|dataset|xls
0
9,157
55,022,536
Tensorflow indexing into python list during tf.while_loop
<p>I have this annoying problem and I don't know how to solve it.</p> <p>I am reading in batches of data from a CSV using a dataset reader and am wanting to gather certain columns. The reader returns a tuple of tensors and, depending on which reader I use, columns are either indexed via integer or string.</p> <p>I can easily enough do a for loop in python and slice the columns I want, however I am wanting to do this in a tf.while_loop to take advantage of parallel execution.</p> <p>This is where my issue lies - the iterator in the while loop is tensor based and I cannot use this to index into my dataset. If I try to evaluate it I get an error about the session not being the same etc etc</p> <p>How can I use a while loop (or a map function) and have the function be able to index into a python list/dict without evaluating or running the iterator tensor?</p> <p>Simple example:</p> <pre><code>some_data = [1,2,3,4,5]

x = tf.constant(0)
y = len(some_data)
c = lambda x: tf.less(x, y)
b = lambda x: some_data[x]  &lt;--- You cannot index like this!
tf.while_loop(c, b, [x])
</code></pre>
<p>Does this fit your requirement somewhat ? It does nothing apart from print the value. </p> <pre><code>import tensorflow as tf from tensorflow.python.framework import tensor_shape some_data = [11,222,33,4,5,6,7,8] def func( v ): print (some_data[v]) return some_data[v] with tf.Session() as sess: r = tf.while_loop( lambda i, v: i &lt; 4, lambda i, v: [i + 1, tf.py_func(func, [i], [tf.int32])[0]], [tf.constant(0), tf.constant(2, tf.int32)], [tensor_shape.unknown_shape(), tensor_shape.unknown_shape()]) r[1].eval() </code></pre> <p>It prints</p> <blockquote> <p>11 4 222 33</p> </blockquote> <p>The order changes everytime but I guess <code>tf.control_dependencies</code> may be useful to control that.</p>
tensorflow
1
9,158
54,728,783
How to create a dataframe from another dataframe based on GroupBy condition
<p>I don't know how I can create a dataframe based on another dataframe using a groupby condition. For example, I have a dataframe to which I apply the function:</p> <p><code>flights_df.groupby(by='DepHour')['Cancelled'].value_counts()</code></p> <p>I obtain something like this:</p> <pre><code>DepHour  Cancelled
0.0      0             20361
         1                 7
1.0      0              5857
         1                 4
2.0      0              1850
         1                 1
**3.0      0               833**
4.0      0              3389
         1                 1
5.0      0            148143
         1                24
</code></pre> <p>As can be seen, for <code>DepHour == 3.0</code> there are no cancelled flights.</p> <p>Using the same dataframe that I used to generate this output, I want to create another dataframe containing only the values where there are no cancelled flights for a DepHour. In this case, the output will be a dataframe containing only the values of <code>DepHour == 3.0</code>.</p> <p>I know that I can use a mask, but it only allows filtering the rows where <code>cancelled == 0</code> (i.e. all other values where <code>DepHour cancelled == 0</code> are included).</p> <p>Thanks and sorry for my bad English!</p>
<p>There might be a cleaner way (probably without using <code>groupby</code> twice) but this should work:</p> <pre><code>flights_df.groupby('DepHour') \
          .filter(lambda x: (x['Cancelled'].unique()==[0]).all()) \
          .groupby('DepHour')['Cancelled'].value_counts()
</code></pre>
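<p>A hedged alternative that avoids the second groupby (assuming <code>Cancelled</code> only takes the values 0 and 1, as in your output) is to mark, per <code>DepHour</code>, whether any cancellation occurred and keep only the groups where none did:</p> <pre><code># True for rows whose DepHour group contains no cancelled flight at all
no_cancel = flights_df.groupby('DepHour')['Cancelled'].transform('max') == 0
result = flights_df[no_cancel]
</code></pre>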
python|pandas|dataframe
1
9,159
49,618,242
Visualize vector value at each step with Tensorflow
<p>For debugging purposes I want to visualize the output vector of the NN at each step of the training process.</p> <p>I tried to use TensorBoard with a tf.summary.tensor_summary:</p> <pre><code>available_outputs_summary = tf.summary.tensor_summary(name='Probability of move', tensor=available_outputs)
</code></pre> <p>Which I use to write during each iteration step:</p> <pre><code>summary_str = available_outputs_summary.eval(feed_dict={X: obs})
file_writer.add_summary(summary_str, iteration)
</code></pre> <p>But in TensorBoard when I click on the required tensor I won't see my data:</p> <p><a href="https://i.stack.imgur.com/oVM6P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oVM6P.png" alt=""></a></p> <p>I know how to print every single value in the console with tf.Print but it's not convenient...</p> <p>Is there anything else I can do?</p>
<p>First, your picture is the graph visualization. I believe graph visualization is not supposed to have any summaries - it just shows you the graph.</p> <p>TensorBoard has other tabs for summaries including "scalar", "histogram", "distribution". Normally, you would look in these tabs for visualizations. However, base release of TensorBoard does not yet have a tab to visualize tensor summaries (there might be third-party plugins though).</p> <p>Depending on the kind of visualization you want for you tensor, you have the following options:</p> <ul> <li>Create interesting scalar statistics that you care about, e.g. mean, std, etc.</li> <li>Use "histogram" and/or "distribution" tabs (<a href="https://www.tensorflow.org/programmers_guide/tensorboard_histograms" rel="nofollow noreferrer">https://www.tensorflow.org/programmers_guide/tensorboard_histograms</a>).</li> <li>If your tensor is not very large and fixed size, you can create scalar summaries for each of its fields. See last answer in <a href="https://stackoverflow.com/questions/41671817/how-to-visualize-a-tensor-summary-in-tensorboard">How to visualize a tensor summary in tensorboard</a></li> </ul>
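<p>As a hedged illustration of the first two options (reusing the variable names from your snippet), you could log a few scalar statistics plus a histogram of the vector each step; these show up in the Scalars and Histograms tabs:</p> <pre><code>mean_summary = tf.summary.scalar('available_outputs/mean', tf.reduce_mean(available_outputs))
max_summary = tf.summary.scalar('available_outputs/max', tf.reduce_max(available_outputs))
hist_summary = tf.summary.histogram('available_outputs', available_outputs)
merged = tf.summary.merge([mean_summary, max_summary, hist_summary])

# inside the training loop
summary_str = merged.eval(feed_dict={X: obs})
file_writer.add_summary(summary_str, iteration)
</code></pre>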
python-3.x|tensorflow|tensorboard
1
9,160
49,560,347
Random crop and bounding boxes in tensorflow
<p>I want to add data augmentation on the WiderFace dataset and I would like to know how it is possible to randomly crop an image and only keep the bounding boxes of faces with the center inside the crop, using tensorflow?</p> <p>I have already tried to implement a solution, but I use TFRecords and the TfExampleDecoder and the shape of the input image is set to <code>[None, None, 3]</code> during the process, so there is no way to get the shape of the image and do it by myself.</p>
<p>You can get the shape, but only at runtime - when you call sess.run and actually pass in the data - that's when the shape is actually defined.</p> <p>So do the random crop manually in tensorflow: basically, you want to reimplement <code>tf.random_crop</code> so you can handle the manipulations to the bounding boxes.</p> <p>First, to get the shape, <code>tf.shape(your_tensor)[0]</code> will give you the first dimension as a tensor. Its value is only known when you actually call <code>sess.run</code>, at which point it resolves to the appropriate value. Now you can compute some random crop parameters using <code>tf.random_uniform</code> or whatever method you like. Lastly you perform the crop with <code>tf.slice</code>.</p> <p>If you want to choose whether to perform the crop or not you can use <code>tf.cond</code>.</p> <p>Between those components, you should be able to implement what you want using only tensorflow constructs. Try it out and if you get stuck along the way post the code and error you run into.</p>
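<p>A rough, untested sketch of that recipe (assuming the boxes are given in absolute pixels as (ymin, xmin, ymax, xmax) float32 values and the image is at least as large as the crop):</p> <pre><code>import tensorflow as tf

def random_crop_keep_centers(image, boxes, crop_h, crop_w):
    # image: [None, None, 3]; boxes: [N, 4] float32 (ymin, xmin, ymax, xmax) in pixels
    shape = tf.shape(image)                      # dynamic shape, known at run time
    off_y = tf.random_uniform([], 0, shape[0] - crop_h + 1, dtype=tf.int32)
    off_x = tf.random_uniform([], 0, shape[1] - crop_w + 1, dtype=tf.int32)

    crop = tf.slice(image, [off_y, off_x, 0], [crop_h, crop_w, 3])

    # keep only the boxes whose center falls inside the crop
    cy = (boxes[:, 0] + boxes[:, 2]) / 2.0 - tf.cast(off_y, tf.float32)
    cx = (boxes[:, 1] + boxes[:, 3]) / 2.0 - tf.cast(off_x, tf.float32)
    inside = (cy &gt;= 0) &amp; (cy &lt; crop_h) &amp; (cx &gt;= 0) &amp; (cx &lt; crop_w)

    kept = tf.boolean_mask(boxes, inside)
    # shift the kept boxes into the crop's coordinate frame
    offset = tf.cast(tf.stack([off_y, off_x, off_y, off_x]), tf.float32)
    return crop, kept - offset
</code></pre>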
python|tensorflow
0
9,161
73,245,840
scipy convert coo string directly to numpy matrix
<p>I already have a string in coo matrix format(row, col, value):</p> <pre><code>0 0 -1627.761282 0 1 342.811259 0 2 342.811259 0 3 171.372276 0 4 342.744553 0 5 342.744553 </code></pre> <p>Now I want to convert my string directly to numpy matrix. Currently I have to write my string to file, then create a numpy matrix from file:</p> <pre><code>from scipy.sparse import coo_matrix import numpy as np with open(&quot;Output.txt&quot;, &quot;w&quot;) as text_file: text_file.write(matrix_str) text = np.loadtxt( 'Output.txt', delimiter=' ' , dtype=str) rows,cols,data = text.T matrix = coo_matrix((data.astype(float), (rows.astype(int), cols.astype(int)))).todense() </code></pre> <p>How can I convert my string directly to numpy matrix without writing to file ? Please help</p>
<p>You could use <a href="https://docs.python.org/3/library/io.html#text-i-o" rel="nofollow noreferrer">StringIO</a> as follows.</p> <pre><code>import numpy as np
from scipy.sparse import coo_matrix
import io

with io.StringIO(matrix_str) as ss:
    rows, cols, data = np.loadtxt(ss).T

matrix = coo_matrix((data.astype(float), (rows.astype(int), cols.astype(int)))).todense()
</code></pre>
numpy|scipy|sparse-matrix
0
9,162
73,488,906
pandas: replace values at specifc index in columns of list of strings
<p>I have a dataframe as follows.</p> <pre><code>import pandas as pd
df = pd.DataFrame({&quot;first_col&quot;:[[&quot;a&quot;,&quot;b&quot;,&quot;c&quot;,&quot;d&quot;],[&quot;a&quot;,&quot;b&quot;],[&quot;a&quot;,&quot;b&quot;,&quot;c&quot;]],
                   &quot;second_col&quot;:[[&quot;the&quot;,&quot;house&quot;,&quot;is&quot;,&quot;[blue]&quot;],[&quot;the&quot;,&quot;[weather]&quot;],[&quot;[the]&quot;,&quot;class&quot;,&quot;today&quot;]]})
</code></pre> <p>I would like to replace values in the first column with the values in the second column if that value is in brackets, so I would like the following output.</p> <p>output:</p> <pre><code>     first_col          second_col
0    [a, b, c, [blue]]  [the, house, is, [blue]]
1    [a, [weather]]     [the, [weather]]
2    [[the], b, c]      [[the], class, today]
</code></pre> <p>I know how to do that for two lists as follows, but do not know how to do it for a pandas dataframe of list columns. So if I have two lists I would do:</p> <pre><code>a = [&quot;a&quot;,&quot;b&quot;,&quot;c&quot;]
b = [&quot;[the]&quot;,&quot;class&quot;,&quot;today&quot;]
for index,item in enumerate(b):
    if item.endswith(&quot;]&quot;):
        a[index] = b[index]
</code></pre> <p>So printing <code>a</code> would return: [[the], b, c]</p>
<p>You need to use a list comprehension:</p> <pre><code>df['first_col'] = [ [b if b.startswith('[') and b.endswith(']') else a for a, b in zip(A, B)] for A, B in zip(df['first_col'], df['second_col']) ] </code></pre> <p>output:</p> <pre><code> first_col second_col 0 [a, b, c, [blue]] [the, house, is, [blue]] 1 [a, [weather]] [the, [weather]] 2 [[the], b, c] [[the], class, today] </code></pre>
python|pandas|list|indexing
1
9,163
67,479,667
Calculate simple historical average using pandas
<p>I have a dataframe like as shown below</p> <pre><code>data = pd.DataFrame({'day':['1','21','41','61','81','101','121','141','161','181','201','221'],'Sale':[1.08,0.9,0.72,0.58,0.48,0.42,0.37,0.33,0.26,0.24,0.22,0.11]}) </code></pre> <p>I would like to fill the values for <code>day 241</code> by computing the average of all records till <code>day 221</code>. Similarly, I would like to compute the value for <code>day 261</code> by computing the average of all records till <code>day 241</code> and so on.</p> <p>For ex: Compute the value for <code>day n</code> by taking average of all values from <code>day 1 to day n-21</code>.</p> <p>I would like to do this upto <code>day 1001</code>.</p> <p>I tried the below but it isn't correct</p> <pre><code>df['day'] = df.iloc[:,1].rolling(window=all).mean() </code></pre> <p>How to create new rows for each day under <code>day</code> column?</p> <p>I expect my output to be like as shown below</p> <p><a href="https://i.stack.imgur.com/lPZvp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lPZvp.png" alt="enter image description here" /></a></p>
<p>It sounds like you're looking for an expanding mean:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd df = pd.DataFrame({'day': ['1', '21', '41', '61', '81', '101', '121', '141', '161', '181', '201', '221'], 'Sale': [1.08, 0.9, 0.72, 0.58, 0.48, 0.42, 0.37, 0.33, 0.26, 0.24, 0.22, 0.11]}) # Generate Some new values to_add = pd.DataFrame({'day': np.arange(241, 301, 20)}) # Add New Values To End of DataFrame new_df = pd.concat((df, to_add)).reset_index(drop=True) # Replace Values Where Sale is NaN with the expanding mean new_df['Sale'] = np.where(new_df['Sale'].isna(), new_df['Sale'].expanding().mean(), new_df['Sale']) print(new_df) </code></pre> <pre><code> day Sale 0 1 1.080000 1 21 0.900000 2 41 0.720000 3 61 0.580000 4 81 0.480000 5 101 0.420000 6 121 0.370000 7 141 0.330000 8 161 0.260000 9 181 0.240000 10 201 0.220000 11 221 0.110000 12 241 0.475833 13 261 0.475833 14 281 0.475833 </code></pre> <hr/> <p>With replacing NaNs with 1 then averaging:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd df = pd.DataFrame({'day': ['1', '21', '41', '61', '81', '101', '121', '141', '161', '181', '201', '221'], 'Sale': [1.08, 0.9, 0.72, 0.58, 0.48, 0.42, 0.37, 0.33, 0.26, 0.24, 0.22, 0.11 ]}) # Generate Some new values to_add = pd.DataFrame({'day': np.arange(241, 301, 20)}) # Add New Values To End of DataFrame new_df = pd.concat((df, to_add)).reset_index(drop=True) # Replace Values Where Sale is NaN with the expanding mean new_df['Sale'] = np.where(new_df['Sale'].isna(), new_df['Sale'].fillna(1).shift().expanding().mean(), new_df['Sale']) print(new_df) </code></pre> <pre><code> day Sale 0 1 1.080000 1 21 0.900000 2 41 0.720000 3 61 0.580000 4 81 0.480000 5 101 0.420000 6 121 0.370000 7 141 0.330000 8 161 0.260000 9 181 0.240000 10 201 0.220000 11 221 0.110000 12 241 0.475833 13 261 0.516154 14 281 0.550714 </code></pre>
python|pandas|dataframe|numpy|time-series
6
9,164
67,309,518
InvalidArgumentError on a mixed CNN
<p>I'm very new to Tensorflow/Keras and deep learning, so my apologies in advance.</p> <p>I'm creating a basic mixed convolutional neural net to classify images and metadata. I've created the following using the Keras Functional API:</p> <pre><code># Define inputs meta_inputs = tf.keras.Input(shape=(2065,)) img_inputs = tf.keras.Input(shape=(80,120,3,)) # Model 1 meta_layer1 = tf.keras.layers.Dense(64, activation='relu')(meta_inputs) meta_output_layer = tf.keras.layers.Dense(1, activation='sigmoid')(meta_layer1) # Model 2 img_conv_layer1 = tf.keras.layers.Conv2D(32, kernel_size=(3,3), padding='same', activation='relu')(img_inputs) img_pooling_layer1 = tf.keras.layers.MaxPooling2D()(img_conv_layer1) img_conv_layer2 = tf.keras.layers.Conv2D(64, kernel_size=(3,3), padding='same', activation='relu')(img_pooling_layer1) img_pooling_layer2 = tf.keras.layers.MaxPooling2D()(img_conv_layer2) img_flatten_layer = tf.keras.layers.Flatten()(img_pooling_layer2) img_dense_layer = tf.keras.layers.Dense(1024, activation='relu')(img_flatten_layer) img_output_layer = tf.keras.layers.Dense(1, activation='sigmoid')(img_dense_layer) # Merge models merged = tf.keras.layers.add([meta_output_layer, img_output_layer]) # Define functional model model = tf.keras.Model(inputs=[meta_inputs, img_inputs], outputs=merged) # Compile model auc = tf.keras.metrics.AUC(name = 'auc') model.compile('adam', loss='binary_crossentropy', metrics=[auc]) </code></pre> <p>I then proceed to fit the model:</p> <pre><code>epochs = 15 history = model.fit([meta_train, img_train], y_train, epochs=epochs, batch_size=500, validation_data=([meta_test, img_test], y_test)) </code></pre> <p>This produces an error, and I'm quite frankly not sure what to do with it:</p> <pre><code>InvalidArgumentError Traceback (most recent call last) &lt;ipython-input-11-5ec0cf9ac1d1&gt; in &lt;module&gt; 1 epochs = 15 ----&gt; 2 history = model.fit([meta_train, img_train], y_train, epochs=epochs, batch_size=500, validation_data=([meta_test, img_test], y_test)) ~/anaconda3/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 1098 _r=1): 1099 callbacks.on_train_batch_begin(step) -&gt; 1100 tmp_logs = self.train_function(iterator) 1101 if data_handler.should_sync: 1102 context.async_wait() ~/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 826 tracing_count = self.experimental_get_tracing_count() 827 with trace.Trace(self._name) as tm: --&gt; 828 result = self._call(*args, **kwds) 829 compiler = &quot;xla&quot; if self._experimental_compile else &quot;nonXla&quot; 830 new_tracing_count = self.experimental_get_tracing_count() ~/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds) 886 # Lifting succeeded, so variables are initialized and we can run the 887 # stateless function. 
--&gt; 888 return self._stateless_fn(*args, **kwds) 889 else: 890 _, _, _, filtered_flat_args = \ ~/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs) 2940 (graph_function, 2941 filtered_flat_args) = self._maybe_define_function(args, kwargs) -&gt; 2942 return graph_function._call_flat( 2943 filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access 2944 ~/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager) 1916 and executing_eagerly): 1917 # No tape is watching; skip to running the function. -&gt; 1918 return self._build_call_outputs(self._inference_function.call( 1919 ctx, args, cancellation_manager=cancellation_manager)) 1920 forward_backward = self._select_forward_and_backward_functions( ~/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/function.py in call(self, ctx, args, cancellation_manager) 553 with _InterpolateFunctionError(self): 554 if cancellation_manager is None: --&gt; 555 outputs = execute.execute( 556 str(self.signature.name), 557 num_outputs=self._num_outputs, ~/anaconda3/lib/python3.8/site-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 57 try: 58 ctx.ensure_initialized() ---&gt; 59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, 60 inputs, attrs, num_outputs) 61 except core._NotOkStatusException as e: InvalidArgumentError: assertion failed: [predictions must be &lt;= 1] [Condition x &lt;= y did not hold element-wise:] [x (model/add/add:0) = ] [[1.47704351][1.48876262][1.50816929]...] [y (Cast_4/x:0) = ] [1] [[{{node assert_less_equal/Assert/AssertGuard/else/_11/assert_less_equal/Assert/AssertGuard/Assert}}]] [Op:__inference_train_function_1331] Function call stack: train_function </code></pre> <p>Being new to Tensorflow, Keras, and deep learning in general, I'm not even sure where to begin diagnosing the issue. I don't know what <code>[predictions must be &lt;= 1] [Condition x &lt;= y did not hold element-wise:]</code> is referring to, for example: <code>y_train</code> is an array of 1s and 0s, and should hold to that assertion. I've even tried reshaping it to a (N,1)-D array, to no effect.</p> <p>Can anyone point me in the right direction on this?</p>
<p>AUC metrics need probabilities in [0,1].</p> <p>In your model this does not happen, due to the sum you do in the <code>merged</code> layer (adding two sigmoid outputs can give values up to 2). You can solve it, for example, by using an average instead of a sum:</p> <pre><code>merged = tf.keras.layers.Average()([meta_output_layer, img_output_layer])
</code></pre>
tensorflow|keras|deep-learning|conv-neural-network
0
9,165
67,434,966
why scipy.sparse.linalg.LinearOperator has different behaviors with @, np.dot and np.matmul
<p>I thought that when it comes to matrix-vector multiplication, the <code>@</code> operator and the functions <code>np.dot</code> and <code>np.matmul</code> were all 3 equivalent. They give the same result when the matrix is a <code>np.ndarray</code>:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np M = np.ones((2, 2)) a = np.arange(2) M @ a Out[11]: array([1., 1.]) np.dot(M, a) Out[12]: array([1., 1.]) np.matmul(M, a) Out[13]: array([1., 1.]) </code></pre> <p>However, they do not behave the same with scipy's LinearOperator interface</p> <pre class="lang-py prettyprint-override"><code>from scipy.sparse.linalg import aslinearoperator lM = aslinearoperator(M) lM @ a Out[15]: array([1., 1.]) np.dot(lM, a) Out[16]: array([&lt;2x2 _ScaledLinearOperator with dtype=float64&gt;, &lt;2x2 _ScaledLinearOperator with dtype=float64&gt;], dtype=object) np.matmul(lM, a) Traceback (most recent call last): File &quot;/home/luc/anaconda3/envs/ckm/lib/python3.9/site-packages/IPython/core/interactiveshell.py&quot;, line 3437, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File &quot;&lt;ipython-input-17-287f28944706&gt;&quot;, line 1, in &lt;module&gt; np.matmul(lM, a) ValueError: matmul: Input operand 0 does not have enough dimensions (has 0, gufunc core with signature (n?,k),(k,m?)-&gt;(n?,m?) requires 1) </code></pre> <p>I don't understand what is the difference between them functions.</p>
<p><code>lM</code> is not a <code>numpy</code> array, nor a subclass of one. <code>np.array(lM)</code> produces a <code>()</code>-shaped object-dtype array, which is why it doesn't work in <code>matmul</code>. <code>lM @ a</code> and <code>lM.dot(a)</code> delegate the task to <code>lM</code>'s own methods, while the others first make the erroneous conversion to <code>ndarray</code>.</p>
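<p>In practice that means sticking to the operator's own interface. A small hedged illustration:</p> <pre><code>import numpy as np
from scipy.sparse.linalg import aslinearoperator

M = np.ones((2, 2))
a = np.arange(2)
lM = aslinearoperator(M)

print(lM @ a)          # dispatches to the operator: [1. 1.]
print(lM.dot(a))       # same thing
print(lM.matvec(a))    # the explicit LinearOperator API, also [1. 1.]

# np.dot / np.matmul instead try to treat lM as an ndarray first,
# which gives the object array / ValueError seen in the question
</code></pre>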
arrays|numpy|scipy
1
9,166
65,409,958
pandas:calculate jaccard similarity for every row based on the value in multiple columns
<p>I have a dataframe that looks like the following, but with more rows. For each document in the first column there are some similar labels in the second column and some strings in the last column.</p> <pre><code>import pandas as pd
data = {'First': ['First doc', 'Second doc','Third doc','First doc', 'Second doc','Third doc' ,'First doc', 'Second doc','Third doc'],
        'second': ['First', 'Second','Third','second', 'third','first', 'third','first','second'],
        'third': [['old','far','gold','door'], ['old','view','bold','values'], ['new','view','sure','window'],['old','bored','gold','door'],
                  ['valued','this','bold','door'],['new','view','seen','shirt'], ['old','bored','blouse','door'], ['valued','this','bold','open'],
                  ['new','view','seen','win']]}
df = pd.DataFrame (data, columns = ['First','second','third'])
df
</code></pre> <p>I have stumbled upon this piece of code for jaccard similarity:</p> <pre><code>def lexical_overlap(doc1, doc2):
    words_doc1 = set(doc1)
    words_doc2 = set(doc2)
    intersection = words_doc1.intersection(words_doc2)
    union = words_doc1.union(words_doc2)
    return float(len(intersection)) / len(union) * 100
</code></pre> <p>What I would like to get as a result is for the measure to take each row of the third column as a doc, compare each pair iteratively, and output a measure with the row names from the First and second columns, so something like this for all combinations:</p> <pre><code>first doc(first) and second doc(first) are 23 percent similar
</code></pre> <p>I have already asked a <a href="https://stackoverflow.com/questions/65308769/pandascalculate-jaccard-similarity-for-every-row-based-on-the-value-in-another">similar question</a> and have tried to modify the answer, but did not have any luck with adding multiple columns.</p>
<p>ok, I figured how to do that with help from <a href="https://stackoverflow.com/questions/65308769/pandascalculate-jaccard-similarity-for-every-row-based-on-the-value-in-another">this response by Amit Amola</a> so what I did was to refine the code to get all combinations:</p> <pre><code>from itertools import combinations for val in list(combinations(range(len(df)), 2)): firstlist = df.iloc[val[0],2] secondlist = df.iloc[val[1],2] value = round(lexical_overlap(firstlist,secondlist),2) print(f&quot;{df.iloc[val[0],0] + df.iloc[val[0],1]} and {df.iloc[val[1],0]+ df.iloc[val[1],1]}'s value is: {value}&quot;) </code></pre> <p>this will return values from both first and second column</p> <pre><code>sample output: First doc first and second doc first's value is 26. </code></pre>
python|pandas|similarity
0
9,167
65,362,523
How to select a minimum validation dataset that represents all variance?
<p>I have a dataset of 2000 256 x 256 x 3 images to train a CNN model (with approximately 30 million trainable parameters) for pixel wise binary classification. Before training it I would like to divide it into validation and test set. Now, I have gone thru all the answers from <a href="https://stackoverflow.com/questions/13610074/is-there-a-rule-of-thumb-for-how-to-divide-a-dataset-into-training-and-validatio">this question</a>.</p> <p>The suggestions are like 80-20 split or random splits with training and observing performance (hit and trial type). So, my question - Is there a way/technique to choose a minimum dataset for validation and testing that represents all the variance of the total dataset? I am having an intuition like there should be a quantity (like mean) that can be measured per image and that quantity can be plotted so that some values become outliers, some do not and I can take images from these groups so that maximum representation of this variety is there.</p> <p>I have a minimum dataset constraint as I have very less data. Augmentation might be suggested but then should I go for positional or intensity based augmentations. Because from class notes, our teacher told us that max pooling layer makes the network invariant to translations/rotations. So, I am assuming positional augmentations won't do.</p>
<p>Tried and refused:</p> <p><strong>1. Feature detectors and descriptors</strong> - Detectors won't be a good approximation of an image and descriptors were long vectors. I discarded this because at this time I did not know about the desired solution. This can be rethinked upon.</p> <p><strong>2. Autoencoders</strong> - idea was to train an autoencoder, plot the encoded values in 3d space (keep encoding dimensions as 3) and check for clusters and split data from each cluster into train, test, val. This did not work as training loss did not decrease beyond a point, given the limited GPU memory of Google Colab.</p> <p>What seems to be working:</p> <p><strong>Dimensionality reduction techniques:</strong> Idea is flatten an image, fit the reduction model and transform the images into lower dimension to plot and organize them. I found that t-SNE (Neighbor graph methods) work better than PCA (Matrix factorization methods). <strong>Hence, I chose UMap (a Neighbor graph technique)</strong>.</p> <p>MWE:</p> <pre><code>import umap X=np.load(path+'/X_train.npy') # Your image dataset, shape=(number of images,height,width, channels) # an image of size h x w x c is actually a flattened array of size h*w*c X_reshaped=np.zeros((X.shape[0],X.shape[1]*X.shape[2]*X.shape[3])) for i in range(0,X.shape[0]): X_reshaped[i,:]=X[i,:,:,:].flatten() del X X_reshaped=X_reshaped/255 X_reshaped.shape reducer=umap.UMAP(n_components=3) reducer.fit(X=X_reshaped) embedding=reducer.transform(X_reshaped) embedding.shape # Clustering from sklearn import cluster kmeans=cluster.KMeans(n_clusters=4,random_state=42).fit(embedding) import plotly data = [plotly.graph_objs.Scatter3d(x=embedding[:,0], y=embedding[:,1], z=embedding[:,2], mode='markers', marker=dict(color=kmeans.labels_) ) ] plotly.offline.iplot(data) </code></pre> <p>Resulting plot:</p> <p><a href="https://i.stack.imgur.com/PpG4J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PpG4J.png" alt="plot" /></a></p> <p>And from this plot outliers can be observed, manual splitting can be done from clusters, although what should be the cluster size is a different question. :)</p>
python|tensorflow|deep-learning
0
9,168
65,320,588
Defined Nelson-Siegel function not callable in Python
<p>I am trying to replicate the Nelson-Siegel (1987) model of the Yield Curve. I am new of Python (previously working in R) and I wrote the following function:</p> <pre><code>def nelson_siegel(tau, beta1, beta2, beta3, lambda1): return ( beta1 + beta2*(1-np.exp(-tau/lambda1))/(tau/lambda1) + beta3((1-np.exp(-tau/lambda1))/(tau/lambda1)-np.exp(-tau/lambda1)) ) </code></pre> <p>which I am going to optimize with <code>scipy.optimize.least_squares</code>. However, even before going to the optimizitation I am not able to make this function callable and I do not get what it means. I just pass the parameters and I get either one of the following errors:</p> <pre><code>nelson_siegel(tau=10, beta1=0.5,beta2=0.3,beta3=0.1, lambda1=0.9) TypeError: 'float' object is not callable </code></pre> <p>or</p> <pre><code>nelson_siegel(tau=1, beta1=5,beta2=3,beta3=10, lambda1=5) TypeError: 'int' object is not callable </code></pre>
<pre><code>def nelson_siegel(tau, beta1, beta2, beta3, lambda1): return ( beta1 + beta2*(1-np.exp(-tau/lambda1))/(tau/lambda1) + beta3 * ((1-np.exp(-tau/lambda1))/(tau/lambda1)-np.exp(-tau/lambda1)) ) </code></pre> <p>You haven't used <code>*</code> operator after beta3</p>
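<p>With that fixed, a hedged sketch of the <code>least_squares</code> fit you mention (it reuses the corrected <code>nelson_siegel</code> above; the maturities, yields and starting values below are made up):</p> <pre><code>import numpy as np
from scipy.optimize import least_squares

tau_obs = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 30.0])   # hypothetical maturities (years)
y_obs = np.array([0.8, 1.0, 1.3, 1.8, 2.2, 2.6])        # hypothetical observed yields (%)

def residuals(params):
    beta1, beta2, beta3, lambda1 = params
    return nelson_siegel(tau_obs, beta1, beta2, beta3, lambda1) - y_obs

fit = least_squares(residuals, x0=[2.0, -1.0, 1.0, 1.5])
print(fit.x)   # estimated beta1, beta2, beta3, lambda1
</code></pre>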
python|function|numpy|scipy
1
9,169
65,085,091
Dask - return a dask.dataframe on map_partition call
<p>I'm wondering How can I return a dask Dataframe when I call a <code>map_partitions</code> instead of a pd.Dataframe in order to avoid memory issues.</p> <p>Input Dataframe</p> <pre><code>id | name | pet_id --------------------- 1 Charlie pet_1 2 Max pet_2 3 Buddy pet_3 4 Oscar pet_4 </code></pre> <p>expected output from <code>map_partitions</code></p> <pre><code>pet_id | name | date | is_healty ------------------------------------------ pet_1 Charlie 11-20-2018 False pet_1 Charlie 02-17-2020 True pet_1 Charlie 04-30-2020 True pet_2 Max 10-17-2020 True pet_3 Buddy 01-20-2020 True pet_3 Buddy 12-12-2020 False pet_4 Oscar 08-24-2019 True </code></pre> <p>I already did the following function and is working if I return a pd.Dataframe. But if I return a dask.dataframe an <code>*** AssertionError</code> is raised</p> <pre><code>def get_pets_appointments(df): dask_ddf = None for k, pet_id in df[&quot;pet_id&quot;].iteritems(): _resp = pets.get_pet_appointments(pet_id) # http POST call tmp_df = pd.DataFrame(_resp) if dask_ddf is None: # First iteration, initialize Dask dataframe dask_ddf = dd.from_pandas(tmp_df, npartitions=1) continue # Work with Dask dataframe in order to avoid Memory Issues dask_ddf = dd.concat([dask_ddf, tmp_df]) # this line works fine # return dask_ddf.compute() # this is raising AssertionError return dask_ddf </code></pre> <p>And I'm invoking the function as follow</p> <pre><code>pets_app_df = pets_df.map_partitions(get_pets_appointments) </code></pre>
<p>Short answer: no (sorry)</p> <p>The purpose of <code>map_partitions</code> is to act on each of the constituent pandas dataframes of a dask dataframe. The expectation is, that you will be making a new dataframe of the same number of partitions as the original. I think you are wanting to split each partition into many partitions; you <em>could</em> do this by calling <code>.repartition</code> beforehand.</p> <p>However, I am surprised:</p> <pre><code>dask_ddf = dd.from_pandas(tmp_df, npartitions=1) ... dask_ddf = dd.concat([dask_ddf, tmp_df]) </code></pre> <p>both of the dataframes you give here are in memory, so how would making them into a dask-dataframe help you? Repeatedly concatenating (actually, appending) isn't a great model.</p>
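<p>If the motivation was memory, a hedged alternative is to skip <code>map_partitions</code> entirely and build one delayed pandas frame per pet, letting Dask assemble them lazily (<code>pets.get_pet_appointments</code> is your HTTP helper and <code>pets_df</code> the input dask dataframe):</p> <pre><code>import pandas as pd
import dask
import dask.dataframe as dd

@dask.delayed
def appointments_for(pet_id):
    return pd.DataFrame(pets.get_pet_appointments(pet_id))

pet_ids = pets_df['pet_id'].unique().compute()
parts = [appointments_for(pet_id) for pet_id in pet_ids]
pets_app_df = dd.from_delayed(parts)
</code></pre>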
python-3.x|pandas|dataframe|dask|dask-dataframe
0
9,170
50,195,341
Tensorflow: is there a way to load a pretrained model without having to redefine all the variables?
<p>I'm trying to split my code into different modules, one where the model is trained, another which analyzes the weights in the model. </p> <p>When I save the model using </p> <pre><code>save_path = saver.save(sess, "checkpoints5/text8.ckpt") </code></pre> <p>It makes 4 files, ['checkpoint', 'text8.ckpt.data-00000-of-00001', 'text8.ckpt.meta', 'text8.ckpt.index']</p> <p>I tried restoring this in the separate module using this code</p> <pre><code>train_graph = tf.Graph() with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('MODEL4')) embed_mat = sess.run(embedding) </code></pre> <p>But I get this error message</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-15-deaad9b67888&gt; in &lt;module&gt;() 1 train_graph = tf.Graph() 2 with train_graph.as_default(): ----&gt; 3 saver = tf.train.Saver() 4 5 /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py in __init__(self, var_list, reshape, sharded, max_to_keep, keep_checkpoint_every_n_hours, name, restore_sequentially, saver_def, builder, defer_build, allow_empty, write_version, pad_step_number, save_relative_paths, filename) 1309 time.time() + self._keep_checkpoint_every_n_hours * 3600) 1310 elif not defer_build: -&gt; 1311 self.build() 1312 if self.saver_def: 1313 self._check_saver_def() /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py in build(self) 1318 if context.executing_eagerly(): 1319 raise RuntimeError("Use save/restore instead of build in eager mode.") -&gt; 1320 self._build(self._filename, build_save=True, build_restore=True) 1321 1322 def _build_eager(self, checkpoint_path, build_save, build_restore): /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py in _build(self, checkpoint_path, build_save, build_restore) 1343 return 1344 else: -&gt; 1345 raise ValueError("No variables to save") 1346 self._is_empty = False 1347 ValueError: No variables to save </code></pre> <p>After reading up on this issue, it seems that I need to redefine all the variables used when I trained the model. </p> <p>Is there a way to access the weights without having to redefine everything? The weights are just numbers, surely there must be a way to access them directly?</p>
<p>For just accessing variables in checkpoints, please checkout the <a href="https://www.github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/python/training/checkpoint_utils.py" rel="nofollow noreferrer"><code>checkpoint_utils</code></a> library. It provides three useful api function: <code>load_checkpoint</code>, <code>list_variables</code> and <code>load_variable</code>. I'm not sure if there is a better way but you can certainly use these functions to extract a dict of all variables in a checkpoint like this:</p> <pre><code>import tensorflow as tf ckpt = 'checkpoints5/text8.ckpt' var_dict = {name: tf.train.load_checkpoint(ckpt).get_tensor(name) for name, _ in tf.train.list_variables(ckpt)} print(var_dict) </code></pre> <p>To load a pretrained model without having to redefine all the variables, you will need more than just checkpoints. A checkpoint has only variables and it doesn't how to restore these variables, i.e. how to map them to a graph, without an actual graph (and an appropriate map). <a href="https://www.tensorflow.org/programmers_guide/saved_model#build_and_load_a_savedmodel" rel="nofollow noreferrer"><code>SavedModel</code></a> will be better for this scenario. It can save both the model <a href="https://www.tensorflow.org/api_guides/python/meta_graph" rel="nofollow noreferrer"><code>MetaGraph</code></a> and all variables. You don't have to manually redefine everything when restoring the saved model. The following code is an example using just the <a href="https://www.tensorflow.org/api_docs/python/tf/saved_model/simple_save" rel="nofollow noreferrer"><code>simple_save</code></a>.</p> <p>To save a trained model:</p> <pre><code>import tensorflow as tf x = tf.placeholder(tf.float32) y_ = tf.reshape(x, [-1, 1]) y_ = tf.layers.dense(y_, units=1) loss = tf.losses.mean_squared_error(labels=x, predictions=y_) optimizer = tf.train.GradientDescentOptimizer(0.01) train_op = optimizer.minimize(loss) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) for _ in range(10): sess.run(train_op, feed_dict={x: range(10)}) # Let's check the bias here so that we can make sure # the model we restored later on is indeed our trained model here. d_b = sess.graph.get_tensor_by_name('dense/bias:0') print(sess.run(d_b)) tf.saved_model.simple_save(sess, 'test', inputs={"x": x}, outputs={"y": y_}) </code></pre> <p>To restore the saved model:</p> <pre><code>import tensorflow as tf with tf.Session(graph=tf.Graph()) as sess: # A model saved by simple_save will be treated as a graph for inference / serving, # i.e. uses the tag tag_constants.SERVING tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], 'test') d_b = sess.graph.get_tensor_by_name('dense/bias:0') print(sess.run(d_b)) </code></pre>
tensorflow
1
9,171
49,991,670
How can I turn data imported into python from a csv file to time-series?
<p>I want to turn data imported into python through a .csv file to time-series. </p> <pre><code>GDP = pd.read_csv('GDP.csv') [87]: GDP Out[87]: GDP growth (%) 0 0.5 1 -5.2 2 -7.9 3 -9.1 4 -10.3 5 -8.8 6 -7.4 7 -10.1 8 -8.4 9 -8.7 10 -7.9 11 -4.1 </code></pre> <p>Since data imported through a .csv file are into a DataFrame format, I first tried turning them into pd.Series:</p> <pre><code>GDP2 = pd.Series(data = GDP, index = pd.date_range(start = '01-2010', end = '01-2018', freq = 'Q')) </code></pre> <p>But what I got looked like this:</p> <pre><code>GDP2 Out[90]: 2010-03-31 (G, D, P, , g, r, o, w, t, h, , (, %, )) 2010-06-30 (G, D, P, , g, r, o, w, t, h, , (, %, )) 2010-09-30 (G, D, P, , g, r, o, w, t, h, , (, %, )) 2010-12-31 (G, D, P, , g, r, o, w, t, h, , (, %, )) 2011-03-31 (G, D, P, , g, r, o, w, t, h, , (, %, )) 2011-06-30 (G, D, P, , g, r, o, w, t, h, , (, %, )) 2011-09-30 (G, D, P, , g, r, o, w, t, h, , (, %, )) 2011-12-31 (G, D, P, , g, r, o, w, t, h, , (, %, )) 2012-03-31 (G, D, P, , g, r, o, w, t, h, , (, %, )) 2012-06-30 (G, D, P, , g, r, o, w, t, h, , (, %, )) 2012-09-30 (G, D, P, , g, r, o, w, t, h, , (, %, )) 2012-12-31 (G, D, P, , g, r, o, w, t, h, , (, %, )) </code></pre> <p>The same happened when I tried to do that through a pd.DataFrame: </p> <pre><code>GDP2 = pd.DataFrame(data = GDP, index = pd.date_range(start = '01-2010', end = '01-2018', freq = 'Q')) GDP2 Out[92]: GDP growth (%) 2010-03-31 NaN 2010-06-30 NaN 2010-09-30 NaN 2010-12-31 NaN 2011-03-31 NaN 2011-06-30 NaN 2011-09-30 NaN 2011-12-31 NaN 2012-03-31 NaN 2012-06-30 NaN 2012-09-30 NaN </code></pre> <p>Or when I tried this through the use of reindex():</p> <pre><code>dates = pd.date_range(start = '01-2010', end = '01-2018', freq = 'Q') dates Out[100]: DatetimeIndex(['2010-03-31', '2010-06-30', '2010-09-30', '2010-12-31', '2011-03-31', '2011-06-30', '2011-09-30', '2011-12-31', '2012-03-31', '2012-06-30', '2012-09-30', '2012-12-31', '2013-03-31', '2013-06-30', '2013-09-30', '2013-12-31', '2014-03-31', '2014-06-30', '2014-09-30', '2014-12-31', '2015-03-31', '2015-06-30', '2015-09-30', '2015-12-31', '2016-03-31', '2016-06-30', '2016-09-30', '2016-12-31', '2017-03-31', '2017-06-30', '2017-09-30', '2017-12-31'], dtype='datetime64[ns]', freq='Q-DEC') GDP.reindex(dates) Out[101]: GDP growth (%) 2010-03-31 NaN 2010-06-30 NaN 2010-09-30 NaN 2010-12-31 NaN 2011-03-31 NaN 2011-06-30 NaN 2011-09-30 NaN 2011-12-31 NaN 2012-03-31 NaN 2012-06-30 NaN 2012-09-30 NaN 2012-12-31 NaN </code></pre> <p>I'm surely making some stupid, newbie mistake and I would really appreciate it if someone could help me out here. Cheers. </p>
<p>Use <code>set_index</code></p> <pre><code>df gdp 0 0.5 1 -5.2 2 -7.9 3 -9.1 4 -10.3 5 -8.8 6 -7.4 7 -10.1 8 -8.4 9 -8.7 10 -7.9 11 -4.1 df = df.set_index(pd.date_range(start = '01-2010', end = '01-2013',freq = 'Q')) gdp 2010-03-31 0.5 2010-06-30 -5.2 2010-09-30 -7.9 2010-12-31 -9.1 2011-03-31 -10.3 2011-06-30 -8.8 2011-09-30 -7.4 2011-12-31 -10.1 2012-03-31 -8.4 2012-06-30 -8.7 2012-09-30 -7.9 2012-12-31 -4.1 </code></pre>
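<p>The reason your original attempts produced <code>NaN</code>s is that pandas aligns on the index when you pass an existing Series/DataFrame, and your integer index (0, 1, 2, ...) has nothing in common with the new <code>DatetimeIndex</code>. As an equivalent sketch (assuming the column is named <code>'GDP growth (%)'</code>, as in your printed output), you can hand over the raw values so that no alignment happens:</p> <pre><code>dates = pd.date_range(start='01-2010', end='01-2013', freq='Q')
GDP2 = pd.Series(GDP['GDP growth (%)'].values, index=dates)
</code></pre>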
python|pandas|csv|time-series|date-range
2
9,172
50,029,927
pandas groupby multiple functions
<p>I want summarize the <code>integer_transaction</code> by <code>EMP_NAME</code>. </p> <ol> <li>why does my first command fail? How to modify it</li> <li>in case of the second command how to avoid the warning?</li> <li>Is there any way to put <code>EMP_NAME</code> in a column instead of the index</li> </ol> <p>I want output</p> <pre><code>Emp_name Count Sum a 2 1 b 1 0 import pandas as pd import numpy as np df = pd.DataFrame(data = {'EMP_NAME': ["a", "a", "b"], 'integer_transaction': [0, 1, 0]}) x=df.groupby(['EMP_NAME'])['integer_transaction'].agg({'Frequency_count': count, 'Frequency_Sum': np.sum}) x=df.groupby(['EMP_NAME'])['integer_transaction'].agg({'Frequency_count': np.size, 'Frequency_Sum': np.sum}) FutureWarning: using a dict on a Series for aggregation is deprecated and will be removed in a future version # -*- coding: utf-8 -*- </code></pre>
<p>Try</p> <pre><code> df.groupby(['EMP_NAME'])['integer_transaction'].agg(["count", "sum"]) count sum EMP_NAME a 2 1 b 1 0 </code></pre> <p>If you really want, you can rename the columns using an additional <code>.rename("count": "Frequency_count", "sum": "Frequency_sum")</code>. </p> <p>Just for reference, the following also works perfectly fine:</p> <pre><code>x=df.groupby(['EMP_NAME'])['integer_transaction'].agg({'Frequency_count': "count", 'Frequency_Sum': np.sum}) x __main__:1: FutureWarning: using a dict on a Series for aggregation is deprecated and will be removed in a future version Out[26]: Frequency_count Frequency_Sum EMP_NAME a 2 1 b 1 0 </code></pre> <p>Note how <code>count</code> is quoted. </p> <pre><code>x=df.groupby(['EMP_NAME'])['integer_transaction'].agg({'Frequency_count': np.size, 'Frequency_Sum': np.sum}) x __main__:1: FutureWarning: using a dict on a Series for aggregation is deprecated and will be removed in a future version Out[27]: Frequency_count Frequency_Sum EMP_NAME a 2 1 b 1 0 </code></pre> <p>The warnings you get just tell you that this functionality will be removed in the future, so they should probably not be used. However, they do produce the correct answer. </p> <p>To move the index to the column, try </p> <pre><code>df.groupby(['EMP_NAME'])['integer_transaction'].agg(["count", "sum"]).reset_index() EMP_NAME count sum 0 a 2 1 1 b 1 0 </code></pre>
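<p>On newer pandas versions (0.25+) you can also avoid the deprecation warning entirely with named aggregation, which gives you the custom column names directly. A small sketch:</p> <pre><code>df.groupby('EMP_NAME')['integer_transaction'].agg(
    Frequency_count='count', Frequency_Sum='sum').reset_index()

  EMP_NAME  Frequency_count  Frequency_Sum
0        a                2              1
1        b                1              0
</code></pre>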
python|pandas|group-by
5
9,173
46,681,909
Change SSD network (Single Shot Multibox Detector) for two class detection
<p>I have used this link: <a href="https://github.com/albanie/wider2pascal" rel="nofollow noreferrer">https://github.com/albanie/wider2pascal</a> to change Widerface dataset annotations to Pascalvoc format, because SSD network is originally written for PascalVoc dataset. Now we want to run SSD network. What should we do to change SSD to two class detection? The code is complicated and we are confused Which lines should be changed. </p> <p>Please help us.</p>
<p><a href="https://github.com/pierluigiferrari/ssd_keras" rel="nofollow noreferrer">This SSD implementation</a> provides tools and a tutorial on how to use a trained SSD for transfer learning on your own dataset. The solution (or at least one possible solution, the tutorial explains multiple alternatives) is to sub-sample the weight tensors of the classifier layers of the trained SSD so that their shapes fits with the number of classes in your dataset.</p> <p><a href="https://github.com/pierluigiferrari/ssd_keras/blob/master/weight_sampling_tutorial.ipynb" rel="nofollow noreferrer">Here is the tutorial</a>.</p>
python-3.x|tensorflow|deep-learning|face-detection|object-detection
0
9,174
67,837,239
Why this PyTorch regression program reaches zero loss with periodic oscillations?
<p>There is one x and one t with dimensions 3x1. I am trying to find the w (3x3) and b (3,1) so that they satisfy this equation:</p> <pre><code>t = w*x + b </code></pre> <p>The program below does oscillate. I tried to debug it with no success. Can someone else take a look? What did I miss?</p> <p><a href="https://i.stack.imgur.com/eTqeh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eTqeh.png" alt="enter image description here" /></a></p> <pre><code>class fit():
    def __init__(self, w, b):
        self.w = w
        self.b = b

    def forward(self, x):
        return torch.mm(self.w, x) + self.b

w = torch.tensor([[1., 1.1, 1.2],
                  [1., 1.1, 1.2],
                  [1., 1.1, 1.2]], requires_grad=True)

b = torch.tensor([[10.],
                  [11.],
                  [12.]], requires_grad=True)

x = torch.tensor([[1.],
                  [2.],
                  [3.]], requires_grad=False)

t = torch.tensor([[0.],
                  [0.9],
                  [0.1]], requires_grad=False)

model = fit(w, b)

alpha = 0.001
loss = []
arange = np.arange(200)
for i in arange:
    z = model.forward(x)
    l = (z - t)**2
    l = l.sum()
    loss.append(l)
    l.backward()

    model.w.data = model.w.data - alpha * model.w.grad
    model.b.data = model.b.data - alpha * model.b.grad

plt.plot(arange, loss)
</code></pre> <p>If I use the other tools (<code>torch.nn.Linear, torch.optim.SGD, torch.nn.MSELoss</code>) from PyTorch everything goes as expected.</p>
<p>You need to reset the gradient to 0 after each backprop. By default, pytorch accumulates gradients when you call <code>loss.backward()</code>.</p> <p>Replacing the last 2 instructions of your loop with the following lines should fix the issue :</p> <pre><code>with torch.no_grad(): model.w.data = model.w.data - alpha * model.w.grad model.b.data = model.b.data - alpha * model.b.grad model.w.grad.zero_() model.b.grad.zero_() </code></pre>
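<p>Equivalently, you could let an optimizer handle the parameter update and the gradient zeroing for you. A rough sketch of the same loop, reusing the tensors from your script:</p> <pre><code>optimizer = torch.optim.SGD([model.w, model.b], lr=alpha)

for i in arange:
    z = model.forward(x)
    l = ((z - t)**2).sum()
    loss.append(l.item())
    optimizer.zero_grad()   # clear gradients from the previous step
    l.backward()
    optimizer.step()        # w -= lr * w.grad, b -= lr * b.grad
</code></pre>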
pytorch|linear-regression
1
9,175
67,822,416
Extract co-occurrence data from dataframe
<p>I have something like this:</p> <pre><code> fromJobtitle toJobtitle size 0 CEO CEO 65 1 CEO Vice President 23 2 CEO Employee 56 3 Vice President CEO 112 4 Employee CEO 20 </code></pre> <p>I would like to count number of co-occurences so that it combines the double occurences (showing only how many elements there are between the 2)</p> <p>An example Output:</p> <pre><code>0 CEO Vice President 135 1 CEO Employee 76 2 CEO CEO 65 </code></pre>
<p>Try (note the column names must match your frame exactly):</p> <pre><code>df.groupby(lambda x: tuple(sorted(df.loc[x, ['fromJobtitle', 'toJobtitle']]))).sum()
</code></pre> <p>Here is the result:</p> <pre><code>                       size
(CEO, CEO)               65
(CEO, Employee)          76
(CEO, Vice President)   135
</code></pre>
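<p>If you prefer to keep the two job-title columns separate (closer to your desired output), a sketch of an equivalent approach is to sort each pair column-wise first and then group:</p> <pre><code>import numpy as np

# normalise each pair so that (A, B) and (B, A) end up identical
df[['fromJobtitle', 'toJobtitle']] = np.sort(
    df[['fromJobtitle', 'toJobtitle']].to_numpy(), axis=1)

out = (df.groupby(['fromJobtitle', 'toJobtitle'], as_index=False)['size']
         .sum()
         .sort_values('size', ascending=False))
</code></pre>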
python|pandas|dataframe|find-occurrences
2
9,176
67,869,032
Error when trying to find a string in a column
<p>I am trying to find an ID in a dataframe column.</p> <p>If it exists then the new df will be converted to the correspondent one, but it raises an error.</p> <p>Where could be the mistake?</p> <pre class="lang-py prettyprint-override"><code>df = pd.read_excel('somefile.xlsx') ids = df['ID'].to_list() for id in ids: if id in ids: df_arc = df_arc.set_index('ID') df_arc=df_arc.loc[id] else: print('It is not here') </code></pre> <p>This is the error I receive:</p> <pre class="lang-py prettyprint-override"><code>KeyError: &quot;None of ['ID'] are in the columns&quot; </code></pre>
<p><code>df[&quot;ID&quot;]</code> will try to access the <code>ID</code> column - which you <strong>don't</strong> have.</p> <p>If you are actually referring to the row index, you can simply change that statement to:</p> <pre class="lang-py prettyprint-override"><code>ids = df.index.to_list()
</code></pre>
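<p>As a side note, if the goal is only to check whether one particular id (call it <code>some_id</code>, purely as an illustration) exists in the other frame, you don't need the loop at all. A quick sketch:</p> <pre class="lang-py prettyprint-override"><code># some_id is a placeholder for whatever id you are looking up
df_arc = df_arc.set_index('ID')
if some_id in df_arc.index:
    row = df_arc.loc[some_id]
else:
    print('It is not here')
</code></pre>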
python|pandas
1
9,177
61,522,442
Input not compatible
<p>I am training the dataset for Amazon fine foods review. The code that I have written for it is showing an error as-<br> Encoder_Decoder LSTM... /opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:66: UserWarning: Update your <code>LSTM</code> call to the Keras 2 API: <code>LSTM(10, return_sequences=True, return_state=True, dropout=0.2, recurrent_dropout=0.2)</code> /opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:67: UserWarning: Update your <code>LSTM</code> call to the Keras 2 API: `LSTM(10, return_state=True, return_sequences=True, go_backwards=True, dropo</p> <p>Input 0 is incompatible with layer lstm_3: expected ndim=3, found ndim=1 </p> <p>Which part of the code exactly is it telling the error in?</p> <pre><code>import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import os for dirname, _, filenames in os.walk('/kaggle/input'): for filename in filenames: print(os.path.join(dirname, filename)) # Any results you write to the current directory are saved as output. !pip install pyrouge import re from nltk.corpus import stopwords from numpy.random import seed seed(1) from sklearn.model_selection import train_test_split as tts import logging from pyrouge.Rouge155 import Rouge155 import matplotlib.pyplot as plt import keras from keras import backend as k k.set_learning_phase(1) from keras import initializers from keras.optimizers import RMSprop from keras.models import Model from keras.layers import Dense,LSTM,Input,Activation,Add,TimeDistributed,\ Permute,Flatten,RepeatVector,merge,Lambda,Multiply,Reshape from keras.callbacks import ModelCheckpoint from keras.models import load_model logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s',\ level=logging.INFO) #model parameters batch_size = 50 num_classes = 1 epochs = 20 hidden_units = 10 learning_rate = 0.005 clip_norm = 2.0 train_data=pd.read_csv("../input/Text Summarization Dataset.csv") en_shape=np.shape(train_data["Text"][0]) de_shape=np.shape(train_data["Summary"][0]) #Helper Functions def encoder_decoder(data): print('Encoder_Decoder LSTM...') #Encoder encoder_inputs = Input(shape=en_shape) encoder_LSTM = LSTM(hidden_units,dropout_U=0.2,dropout_W=0.2,return_sequences=True,return_state=True) encoder_LSTM_rev=LSTM(hidden_units,return_state=True,return_sequences=True,dropout_U=0.05,dropout_W=0.05,go_backwards=True) encoder_outputs, state_h, state_c = encoder_LSTM(encoder_inputs) encoder_outputsR, state_hR, state_cR = encoder_LSTM_rev(encoder_inputs) state_hfinal=Add()([state_h,state_hR]) state_cfinal=Add()([state_c,state_cR]) encoder_outputs_final = Add()([encoder_outputs,encoder_outputsR]) encoder_states = [state_hfinal,state_cfinal] #decoder decoder_inputs = Input(shape=(None,de_shape[1])) decoder_LSTM = LSTM(hidden_units,return_sequences=True,dropout_U=0.2,dropout_W=0.2,return_state=True) decoder_outputs, _, _ = decoder_LSTM(decoder_inputs,initial_state=encoder_states) #Pull out XGBoost, (I mean attention) attention = TimeDistributed(Dense(1, activation = 'tanh'))(encoder_outputs_final) attention = Flatten()(attention) attention = Multiply()([decoder_outputs, attention]) attention = Activation('softmax')(attention) attention = Permute([2, 1])(attention) decoder_dense = Dense(de_shape[1],activation='softmax') decoder_outputs = decoder_dense(attention) m model= Model(inputs=[encoder_inputs,decoder_inputs], outputs=decoder_outputs) print(model.summary()) rmsprop = RMSprop(lr=learning_rate,clipnorm=clip_norm) 
model.compile(loss='categorical_crossentropy',optimizer=rmsprop,metrics=['accuracy']) x_train,x_test,y_train,y_test=tts(data["Text"],data["Summary"],test_size=0.20) history= model.fit(x=[x_train,y_train], y=y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=([x_test,y_test], y_test)) print(model.summary()) #inference mode encoder_model_inf = Model(encoder_inputs,encoder_states) decoder_state_input_H = Input(shape=(en_shape[0],)) decoder_state_input_C = Input(shape=(en_shape[0],)) decoder_state_inputs = [decoder_state_input_H, decoder_state_input_C] decoder_outputs, decoder_state_h, decoder_state_c = decoder_LSTM(decoder_inputs, initial_state=decoder_state_inputs) decoder_states = [decoder_state_h, decoder_state_c] decoder_outputs = decoder_dense(decoder_outputs) decoder_model_inf= Model([decoder_inputs]+decoder_state_inputs, [decoder_outputs]+decoder_states) scores = model.evaluate([x_test,y_test],y_test, verbose=1) print('LSTM test scores:', scores) print('\007') print(model.summary()) return model,encoder_model_inf,decoder_model_inf,history #generate summary from vectors def generateText(SentOfVecs): SentOfVecs=np.reshape(SentOfVecs,de_shape) kk="" for k in SentOfVecs: kk = kk + label_encoder.inverse_transform([argmax(k)])[0].strip()+" " #kk=kk+((getWord(k)[0]+" ") if getWord(k)[1]&gt;0.01 else "") return kk """generate summary vectors""" def summarize(article): stop_pred = False article = np.reshape(article,(1,en_shape[0],en_shape[1])) #get initial h and c values from encoder init_state_val = encoder.predict(article) target_seq = np.zeros((1,1,de_shape[1])) #target_seq =np.reshape(train_data['summaries'][k][0],(1,1,de_shape[1])) generated_summary=[] while not stop_pred: decoder_out,decoder_h,decoder_c= decoder.predict(x=[target_seq]+init_state_val) generated_summary.append(decoder_out) init_state_val= [decoder_h,decoder_c] #get most similar word and put in line to be input in next timestep #target_seq=np.reshape(model.wv[getWord(decoder_out)[0]],(1,1,emb_size_all)) target_seq=np.reshape(decoder_out,(1,1,de_shape[1])) if len(generated_summary)== de_shape[0]: stop_pred=True break return generated_summary #Plot training curves def plot_training(history): print(history.history.keys()) # "Accuracy" plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'validation'], loc='upper left') plt.show() # "Loss" plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('model loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'validation'], loc='upper left') plt.show() def saveModels(): trained_model.save("%sinit_model"%modelLocation) encoder.save("%sencoder"%modelLocation) decoder.save("%sdecoder"%modelLocation) def evaluate_summ(article): ref='' for k in wt(data['Summary'][article])[:20]: ref=ref+' '+k gen_sum = generateText(summarize(train_data["Text"][article])) print("-----------------------------------------------------") print("Original summary") print(ref) print("-----------------------------------------------------") print("Generated summary") print(gen_sum) print("-----------------------------------------------------") rouge = Rouge155() score = rouge.score_summary(ref, gen_sum) print("Rouge1 Score: ", score) #Train model and test trained_model,encoder,decoder,history = encoder_decoder(train_data) plot_training(history) evaluate_summ(10) print(generateText(summarize(train_data["article"][8]))) print(data["Summary"][8]) 
print(data["Text"][78]) </code></pre>
<p>Normally the traceback you obtain after an error will guide you to the error's origin (line of code). The traceback is usually sequential by depth, i.e. the last entry corresponds to the most shallow call that gave the error. In your case the most shallow call should be line 198, <code>trained_model,encoder,decoder,history = encoder_decoder(train_data)</code>.</p> <p>I would check the input data shapes. The error states that the layer expects an input with 3 dimensions but received one with 1 dimension.</p> <p>I see you're using Spyder; use the debugger to follow the error back to its origin.</p>
python|pandas|numpy|keras
0
9,178
61,459,111
How to apply a Pandas filter on a data frame based on entries in a column from a different data frame (no join)
<p>As an example, I have one data frame (df_1) with one column which contains some text data. The second data frame (df_2) contains some numbers. How do I check if the text contains the numbers from the second data frame?</p> <p><strong>df_1</strong></p> <pre><code> Note 0 The code to this is 1003 1 The code to this is 1004 </code></pre> <p><strong>df_2</strong></p> <pre><code> Code_Number 0 1006 1 1003 </code></pre> <p>So I want to check if the entries in [Note] from df_1 contains the entries from [Code_Number] from df_2</p> <p>I have tried using the following code: <code>df_1[df_1['Note'].str.contains(df_2['Code_Number'])]</code> and I know I cannot use a join as I do not have a key to join on. </p> <p>The final result which I am looking for after the filtering has been applied is:</p> <pre><code> Note 0 The code to this is 1003 </code></pre>
<p>Do this:</p> <pre><code>df_1.loc[df_1['Note'].apply(lambda x: any(str(number) in x for number in df_2['Code_Number']))] </code></pre>
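<p>A vectorised sketch of the same idea, in case the <code>apply</code>/<code>any</code> version gets slow on a large <code>df_1</code>, is to build a single regex pattern from <code>df_2</code> and use <code>str.contains</code> (the codes are plain digits here, so no regex escaping is needed):</p> <pre><code># pattern looks like '1006|1003'
pattern = '|'.join(df_2['Code_Number'].astype(str))
df_1[df_1['Note'].str.contains(pattern)]
</code></pre>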
python|pandas|data-analysis
1
9,179
61,302,847
Pandas: How to merge two data frames and fill NaN values using values from the second data frame
<p>I have a pandas dataframe (df1) that looks like this: </p> <pre><code>No car pl. Value Expected 1 Toyota HK 0.1 0.12 1 Toyota NY 0.2 NaN 2 Saab LOS 0.3 NaN 2 Saab UK 0.4 0.6 2 Saab HK 0.5 0.51 3 Audi NYU 0.6 NaN 3 Audi LOS 0.7 NaN 4 VW UK 0.8 NaN 5 Audi HK 0.9 NaN </code></pre> <p>And I have another dataframe (df2) that looks like this:</p> <pre><code>No pl. Expected 2 LOS 0.35 3 NYU 0.62 3 LOS 0.76 5 HK 0.91 </code></pre> <p>I would like my final dataframe to look like this:</p> <pre><code>No car pl. Value Expected 1 Toyota HK 0.1 0.12 1 Toyota NY 0.2 NaN 2 Saab LOS 0.3 0.35 2 Saab UK 0.4 0.6 2 Saab HK 0.5 0.51 3 Audi NYU 0.6 0.62 3 Audi LOS 0.7 0.76 4 VW UK 0.8 NaN 5 Audi HK 0.9 0.91 </code></pre> <p>I tried this:</p> <pre><code>df = df1.fillna(df1.merge(df2, on=['No','pl.'])) </code></pre> <p>But df1 remains unchanged in the output</p> <p>The questions that I have seen here have been of dataframes with the same shape. Is there a way to do this when the shapes are different?</p> <p>Thanks in advance!</p>
<p>Since we have two key columns where we want to match on and update our <code>df1</code> dataframe, we can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>fillna</code></a>, since <code>fillna</code> aligns in the indices:</p> <pre><code>keys = ['No', 'pl.'] df1 = df1.set_index(keys).fillna(df2.set_index(keys)).reset_index() No pl. car Value Expected 0 1 HK Toyota 0.1 0.12 1 1 NY Toyota 0.2 NaN 2 2 LOS Saab 0.3 0.35 3 2 UK Saab 0.4 0.60 4 2 HK Saab 0.5 0.51 5 3 NYU Audi 0.6 0.62 6 3 LOS Audi 0.7 0.76 7 4 UK VW 0.8 NaN 8 5 HK Audi 0.9 0.91 </code></pre> <hr> <p>Or we can use the dedicated method <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.update.html" rel="nofollow noreferrer"><code>Series.update</code></a> for this:</p> <pre><code>df1 = df1.set_index(keys) df1['Expected'].update(df2.set_index(keys)['Expected']) df1 = df1.reset_index() No pl. car Value Expected 0 1 HK Toyota 0.1 0.12 1 1 NY Toyota 0.2 NaN 2 2 LOS Saab 0.3 0.35 3 2 UK Saab 0.4 0.60 4 2 HK Saab 0.5 0.51 5 3 NYU Audi 0.6 0.62 6 3 LOS Audi 0.7 0.76 7 4 UK VW 0.8 NaN 8 5 HK Audi 0.9 0.91 </code></pre>
python|pandas|dataframe|merge|fillna
2
9,180
68,746,406
How to find the index of a given item in a list in a pandas column and extract the address from a column
<p>I have a list</p> <pre><code>In [1]:list=['AM','PM','MT'] </code></pre> <p>and i have a df like this</p> <pre><code>In [1]:d= {'Date': ['8/10/2021','8/10/2021','8/11/2021','8/11/2021','8/11/2021','8/11/2021','8/11/2021'], 'Name': [ 'John','Jason','Derek','Foley','Jason','Derek J','Derek M'], 'Notes':['John is at 234 gamer AMSTRONG AM','Jason did not come','He has 400 pens on 987 Gol power Beam PM, but he was sick','2897 Pace Terrance MT','Jason with 200gems on 2050 Bat Place,AM','Derek J at 390 Jackson Groove,PM,atleast he came','No Show' ]} In [2]:df = pd.DataFrame(d, columns = ['Date', 'Name','Notes']) Out[1]: Date Name Notes 8/10/2021 John John is at 234 gamer AMSTRONG AM 8/10/2021 Jason Jason did not come 8/11/2021 Derek He has 400 pens on 987 Gol power Beam PM, but he was sick 8/11/2021 Foley 2897 Pace Terrance MT 8/11/2021 Jason Jason with 200gems on 2050 Bat Place,AM 8/11/2021 Derek J Derek J at 390 Jackson Groove,PM,atleast he came 8/11/2021 Derek M No Show </code></pre> <p>I want to extract the Address out of the Notes section. What I have done so far is as follows:</p> <pre><code>In [1]:Bool1 = df.iloc[:, 2].str.contains(r'\b(?:{})\b'.format('|'.join(list))) In [2]:df['Yes?'] =Bool1 Out[1]: Date Name Yes? Notes 8/10/2021 John TRUE John is at 234 gamer AMSTRONG AM 8/10/2021 Jason FALSE Jason did not come 8/11/2021 Derek TRUE He has 400 pens on 987 Gol power Beam PM, but he was sick 8/11/2021 Folley TRUE 2897 Pace Terrance MT 8/11/2021 Jason TRUE Jason with 200gems on 2050 Bat Place,AM 8/11/2021 Derek J TRUE Derek J at 390 Jackson Groove,PM,atleast he came 8/11/2021 Derek M FALSE No Show </code></pre> <p>What i would like is to find the index of the characters in the list when they show up ion the column of the df and then return the 20 characters to the left of it. I do not know how to find the index of the item in list in the column in the df.</p> <p>Desired Output:</p> <pre><code>Out[1]: Date Name Yes? Address Notes 8/10/2021 John TRUE 234 gamer AMSTRONG AM John is at 234 gamer AMSTRONG AM 8/10/2021 Jason FALSE Jason did not come 8/11/2021 Derek TRUE 987 Gol power Beam PM He has 400 pens on 987 Gol power Beam PM, but he was sick 8/11/2021 Folley TRUE 2897 Pace Terrance MT 2897 Pace Terrance MT 8/11/2021 Jason TRUE 2050 Bat Place,AM Jason with 200gems on 2050 Bat Place,AM 8/11/2021 Derek J TRUE 390 Jackson Groove,PM Derek J at 390 Jackson Groove,PM,atleast he came 8/11/2021 Derek M FALSE No Show </code></pre>
<p>Maybe something like this will work?</p> <pre><code>import numpy as np
import pandas as pd

# create a simple data frame
df = pd.DataFrame(np.ones((5, 5)),
                  columns=['a', 'b', 'c', 'd', 'e'],
                  index=['1/1/2020', '2/1/2020', '3/1/2020', '4/1/2020', '5/1/2020'])

# add some values to look for
df['g'] = [6, 7, 'find me!', 9, 10]

find_this = 'find me!'
idx = df[df.eq(find_this).any(1)].index
print(idx)  # returns &quot;Index(['3/1/2020'], dtype='object')&quot;
</code></pre>
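<p>For the address-extraction part of your question specifically, a regex sketch along these lines might also work (assuming your marker list is called <code>lst</code>, and taking up to 20 characters before the marker):</p> <pre><code>pat = r'(.{0,20}\b(?:' + '|'.join(lst) + r')\b)'
df['Address'] = df['Notes'].str.extract(pat, expand=False)
</code></pre> <p>Rows whose notes contain no marker simply get <code>NaN</code> in the new column.</p>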
python|regex|pandas|dataframe|indexing
0
9,181
68,589,507
How to compare dates in 2 DFs and count number of event occurred within 30 days
<p>I have 2 dataframes:</p> <ol> <li>df1</li> </ol> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;"></th> <th style="text-align: left;">client_id</th> <th style="text-align: left;">prediction_date</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">0</td> <td style="text-align: left;">100000201</td> <td style="text-align: left;">2019-08-20</td> </tr> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">100000202</td> <td style="text-align: left;">2020-02-27</td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: left;">100000204</td> <td style="text-align: left;">2019-12-19</td> </tr> </tbody> </table> </div> <ol start="2"> <li>df2</li> </ol> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">INC Number</th> <th style="text-align: left;">Importance</th> <th style="text-align: left;">Opened</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">INC11</td> <td style="text-align: left;">minor</td> <td style="text-align: left;">2021-08-21</td> </tr> <tr> <td style="text-align: left;">INC22</td> <td style="text-align: left;">minor</td> <td style="text-align: left;">2020-03-17</td> </tr> <tr> <td style="text-align: left;">INC33</td> <td style="text-align: left;">major</td> <td style="text-align: left;">2019-12-12</td> </tr> </tbody> </table> </div> <p>The df1 is hundreds of thousands long, and df2 has only 20 rows. I would like to count how many INC numbers occurred within <strong>30 days</strong> before clients prediction_date</p> <p>No idea so far.</p>
<p>Idea is use broadcasting for compare columns converted to numpy arrays, subtract and compare if between 0 and 30 days, last <code>sum</code> for counter:</p> <pre><code>df1['prediction_date'] = pd.to_datetime(df1['prediction_date']) df2['Opened'] = pd.to_datetime(df2['Opened']) arr = (df1['prediction_date'].to_numpy() - df2['Opened'].to_numpy()[:, None]) df1['count'] = np.sum((arr &lt; pd.Timedelta(30, unit='d')) &amp; (arr &gt; pd.Timedelta(0)), axis=0) print (df1) client_id prediction_date count 0 100000201 2019-08-20 0 1 100000202 2020-02-27 0 2 100000204 2019-12-19 1 </code></pre> <p>Solution for count <code>Importance</code> values to new columns:</p> <pre><code>df1['prediction_date'] = pd.to_datetime(df1['prediction_date']) df2['Opened'] = pd.to_datetime(df2['Opened']) days = pd.Timedelta(30, unit='d') arr = (df1['prediction_date'].to_numpy() - df2['Opened'].to_numpy()[:, None]) m = (arr &lt; days) &amp; (arr &gt; pd.Timedelta(0)) df3 = pd.DataFrame(m, index=df2['Importance'], columns=df1.index) print (df3) 0 1 2 Importance minor False False False minor False False False major False False True df1['count'] = df3.sum() print (df1) client_id prediction_date count 0 100000201 2019-08-20 0 1 100000202 2020-02-27 0 2 100000204 2019-12-19 1 df1 = df1.join(df3.sum(level=0).T) print (df1) client_id prediction_date count minor major 0 100000201 2019-08-20 0 0 0 1 100000202 2020-02-27 0 0 0 2 100000204 2019-12-19 1 0 1 </code></pre>
python|pandas
2
9,182
68,538,765
Compute Weighted Profit Total for Each Item Sold
<p>I'm currently working through some sales data in Pandas and I'm attempting to find the most profitable item (We sell a range of various items), but this is weighted in order to truely find the item that generates the most profit. This may be a NumPy question. The equation looks like this:</p> <pre><code> (Individual Item Total Profit) * ((Number of that Item Sold / Total Items Sold)) </code></pre> <p>I attempted to write a function that computes the weighted avg profit made based on the above equation:</p> <pre><code> def w_avg(df, profit, item_type): total_profit = df.groupby(profit).sum(item_type) num_indiv_sold = df.groupby(item_type).count() total_all_sold = df.groupby(item_type).count().sum() return (total_profit * (num_indiv_sold / total_all_sold)) </code></pre> <p>Here's what I'm after:</p> <pre><code> Sample Input: Item Type Profit MacBook Pro 205 Macbook Air 430 Dell Inspiron 175 HP 125 Dell Inspiron 315 HP 115 MacBook Pro 410 Macbook Air 225 Dell Inspiron 135 HP 115 Computations for MacBook Pro: (205 + 410) * (2 / 10) = 123 Output: Item Type MacBook Pro 123 ... ... </code></pre> <p>Then call the function with the col names:</p> <pre><code> df.groupby('Item Type').apply(w_avg, 'Profit', 'Item Type') </code></pre> <p>This function does not do what I need it to as I'm sure you can tell. I basically need to return a col with all the item types and they're appropriate weighted profits. I wasn't sure if I needed to loop over the item types, as I was hoping my function would return all item types anyway (as most pandas functions seem to). Hopefully someone can help! Super new with Pandas. Thanks!</p>
<p>try via <code>groupby()</code> and <code>sum()</code> and <code>count()</code> method:</p> <pre><code>g=df.groupby('Item Type',sort=False) out=g['Profit'].sum()*(g['Profit'].count()/len(df)) #Since the sum of total counts is equal to the length of df so we are using len(df) #OR #g['Profit'].sum()*(g['Profit'].count()/g['Profit'].count().sum()) </code></pre> <p>output of <code>out</code>:</p> <pre><code>Item Type MacBook Pro 123.0 Macbook Air 131.0 Dell Inspiron 187.5 HP 106.5 Name: Profit, dtype: float64 </code></pre> <p><strong>Note:</strong> you can also use <code>agg()</code> method for example:</p> <pre><code>out=g['Profit'].agg('sum')*(g['Profit'].agg('count')/len(df)) </code></pre>
python|pandas|dataframe
0
9,183
68,452,598
difference between df.loc[:, columns] and df.loc[:][columns]
<p>I want to normalize some columns of a pandas data frame using <code>MinMaxScaler</code> in this way:</p> <pre><code>scaler = MinMaxScaler() numericals = [&quot;TX_TIME_SECONDS&quot;,'TX_Amount'] </code></pre> <p>while I do in this way:</p> <pre><code>df.loc[:][numericals] = scaler.fit_transform(df.loc[:][numericals]) </code></pre> <p>it's not done inplace and <code>df</code> is not changed;</p> <p>whereas, when I do in this way:</p> <pre><code>df.loc[:, numericals] = scaler.fit_transform(df.loc[:][numericals]) </code></pre> <p>the numerical columns of <code>df</code> are changed in place,</p> <p><strong>So,</strong> What's the difference between <code>df.loc[:, ~]</code> and <code>df.loc[:][~]</code></p>
<p><code>df.loc[:][numericals]</code> selects all rows and then selects columns &quot;TX_TIME_SECONDS&quot; and 'TX_Amount' of the <strong>returning object</strong>, and assigns some value to it. The problem is, the returning object might be a copy so this may not change the actual DataFrame.</p> <p>The correct way of making this assignment is using <code>df.loc[:, numericals]</code>, because with <code>.loc</code> you are guaranteed to modify the original DataFrame.</p>
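<p>A tiny sketch that makes the difference visible (column names taken from your example, values made up):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'TX_TIME_SECONDS': [1.0, 2.0], 'TX_Amount': [10.0, 20.0]})

df.loc[:]['TX_Amount'] = 0     # chained indexing: may act on a copy, df often stays unchanged
df.loc[:, 'TX_Amount'] = 0     # single .loc call: df itself is modified
</code></pre> <p>The first form is exactly the pattern that triggers pandas' <code>SettingWithCopyWarning</code>.</p>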
python|pandas|dataframe
1
9,184
53,055,027
Find the two most recent dates for each customer in Python using pandas
<p>I have a pandas dataframe with purchase date of of each customer. I want to find out most recent purchase date and second most recent purchase date of each unique customer. Here is my dataframe:</p> <pre><code> name date ab1 6/1/18 ab1 6/2/18 ab1 6/3/18 ab1 6/4/18 ab2 6/8/18 ab2 6/9/18 ab3 6/23/18 </code></pre> <p>I am expecting the following output:</p> <pre><code>name second most recent date most recent date ab1 6/3/18 6/4/18 ab2 6/8/18 6/9/18 ab3 6/23/18 6/23/18 </code></pre> <p>I know <code>data['date'].max()</code> can give the most recent purchase date but I don't have any idea how I can find the second most recent date. Any help will be highly appreciated.</p>
<p>To get the two most recent purchase date for each customer, you can first sort your dataframe in descending order by date, then groupby the name and convert the aggregated dates into individual columns. Finally just take the first two of these columns and you'll have just the two most recent purchase dates for each customer.</p> <p>Here's an example:</p> <pre><code>import pandas as pd # set up data from your example df = pd.DataFrame({ "name": ["ab1", "ab1", "ab1", "ab1", "ab2", "ab2", "ab3"], "date": ["6/1/18", "6/2/18", "6/3/18", "6/4/18", "6/8/18", "6/9/18", "6/23/18"] }) # create column of datetimes (for sorting reverse-chronologically) df["datetime"] = pd.to_datetime(df.date) # group by name and convert dates into individual columns grouped_df = df.sort_values( "datetime", ascending=False ).groupby("name")["date"].apply(list).apply(pd.Series).reset_index() # truncate and rename columns grouped_df = grouped_df[["name", 0, 1]] grouped_df.columns = ["name", "most_recent", "second_most_recent"] </code></pre> <p>With <code>grouped_df</code> like this at the end:</p> <pre><code> name most_recent second_most_recent 0 ab1 6/4/18 6/3/18 1 ab2 6/9/18 6/8/18 2 ab3 6/23/18 NaN </code></pre> <p>If you want to fill any missing <code>second_most_recent</code> values with the corresponding <code>most_recent</code> value, you can use <code>np.where</code>. Like this:</p> <pre><code>import numpy as np grouped_df["second_most_recent"] = np.where( grouped_df.second_most_recent.isna(), grouped_df.most_recent, grouped_df.second_most_recent ) </code></pre> <p>With result:</p> <pre><code> name most_recent second_most_recent 0 ab1 6/4/18 6/3/18 1 ab2 6/9/18 6/8/18 2 ab3 6/23/18 6/23/18 </code></pre>
python|pandas
5
9,185
52,949,798
python - how to delete duplicate list in each row (pandas)?
<p>I have a list contained in each row and I would like to delete duplicated element by keeping the highest value from a score. </p> <p>here is my data from data frame df1</p> <pre><code> pair score 0 [A , A ] 1.0000 1 [A , F ] 0.9990 2 [A , G ] 0.9985 3 [A , G ] 0.9975 4 [A , H ] 0.9985 5 [A , H ] 0.9990 </code></pre> <p>I would like to see the result as</p> <pre><code> pair score 0 [A , A ] 1.0000 1 [A , F ] 0.9990 2 [A , G ] 0.9985 4 [A , H ] 0.9990 </code></pre> <p>I have tried to use group by and set a score = max, but it's not working </p>
<p>First I think working with <code>list</code>s in pandas is not <a href="https://stackoverflow.com/a/52563718/2901002">good idea</a>.</p> <p>Solution working if convert lists to helper column with tuples - then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>drop_duplicates</code></a>:</p> <pre><code>df['new'] = df.pair.apply(tuple) df = df.sort_values('score', ascending=False).drop_duplicates('new') print (df) pair score new 0 [A, A] 1.0000 (A, A) 1 [A, F] 0.9990 (A, F) 5 [A, H] 0.9990 (A, H) 2 [A, G] 0.9985 (A, G) </code></pre> <p>Or to 2 new columns:</p> <pre><code>df[['a', 'b']] = pd.DataFrame(df.pair.values.tolist()) df = df.sort_values('score', ascending=False).drop_duplicates(['a', 'b']) print (df) pair score a b 0 [A, A] 1.0000 A A 1 [A, F] 0.9990 A F 5 [A, H] 0.9990 A H 2 [A, G] 0.9985 A G </code></pre>
python|pandas|list
1
9,186
53,227,491
Number of missing entries when merging DataFrames
<p>In an exercise, I was asked to merge 3 DataFrames with inner join (df1+df2+df3 = mergedDf), then in another question I was asked to tell how many entries I've lost when performing this 3-way merging.</p> <pre><code>#DataFrame1 df1 = pd.DataFrame(columns=["Goals","Medals"],data=[[5,2],[1,0],[3,1]]) df1.index = ['Argentina','Angola','Bolivia'] print(df1) Goals Medals Argentina 5 2 Angola 1 0 Bolivia 3 1 #DataFrame2 df2 = pd.DataFrame(columns=["Dates","Medals"],data=[[1,0],[2,1],[2,2]) df2.index = ['Venezuela','Africa'] print(df2) Dates Medals Venezuela 1 0 Africa 2 1 Argentina 2 2 #DataFrame3 df3 = pd.DataFrame(columns=["Players","Goals"],data=[[11,5],[11,1],[10,0]]) df3.index = ['Argentina','Australia','Belgica'] print(df3) Players Goals Argentina 11 5 Australia 11 1 Spain 10 0 #mergedDf mergedDf = pd.merge(df1,df2,how='inner',left_index=True, right_index=True) mergedDf = pd.merge(mergedDf,df3,how='inner',left_index=True, right_index=True) print(mergedDF) Goals_X Medals_X Dates Medals_Y Players Goals_Y Argentina 5 2 2 2 11 2 #Calculate number of lost entries by code </code></pre> <p>I tried to merge everything with outer join and then subtracting the mergedDf, but I don't know how to do this, can anyone help me? <a href="https://i.stack.imgur.com/vQDFD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vQDFD.png" alt="enter image description here"></a></p>
<p>I've found a simple but effective solution:</p> <h2>Merging the 3 DataFrames, inner and outer:</h2> <pre><code>df1 = Df1() df2 = Df2() df3 = Df3() inner = pd.merge(pd.merge(df1,df2,on='&lt;Common column&gt;',how='inner'),df3,on='&lt;Common column&gt;',how='inner') outer = pd.merge(pd.merge(df1,df2,on='&lt;Common column&gt;',how='outer'),df3,on='&lt;Common column&gt;',how='outer') </code></pre> <h2>Now, the number of missed entries (rows) is:</h2> <pre><code>return (len(outer)-len(inner)) </code></pre>
python|pandas|dataframe
4
9,187
52,924,546
How to convert a ndarray into opencv::Mat using Boost.Python?
<p>I am reading an image in Python and passing that numpy array to C++ using Boost.Python and receiving that in <code>ndarray</code>.</p> <p>I need to convert the same into <code>cv::Mat</code> to perform operations in OpenCV C++. </p> <p>How do I do that?</p>
<p>Finally I found the solution in the documentation:</p> <p>We have to receive the numpy array as <code>numeric::array</code> in C++ and then do the following steps to convert the numpy array into a <code>cv::Mat</code> efficiently.</p> <pre><code>void* img_arr = PyArray_DATA((PyObject*)arr.ptr());
</code></pre> <p>Then we pass this void pointer to the <code>cv::Mat</code> constructor together with the other required parameters.</p> <pre><code>Mat image(rows, cols, CV_8UC3, img_arr);
</code></pre> <ol> <li>int parameter: expects the number of rows.</li> <li>int parameter: expects the number of columns.</li> <li>Type parameter: expects the type of image.</li> <li>Void pointer parameter: expects the image data.</li> </ol> <p>And this resolves the problem!</p>
python-3.x|c++11|boost-python|mat|numpy-ndarray
1
9,188
65,898,885
Failed to construct interpreter TensorflowLite
<p>I am using:</p> <blockquote> <p>Cuda 10.1, cudnn 7.6, Tensorflow 2.3.0, keras 2.4.3</p> </blockquote> <p>When training the model I use <code>save_weights_only=True</code>, and a folder is saved including:</p> <blockquote> <p>assets, variables and saved_model.pb</p> </blockquote> <p>When I convert the model to a tflite model using:</p> <pre><code>converter = tf.lite.TFLiteConverter.from_saved_model('x/210126_Test')
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
  f.write(tflite_model)
</code></pre> <p>I get no errors of any kind, but when I try to use the model in an iPad application, it only returns the following:</p> <blockquote> <p>&quot;Exception Error: Failed to construct interpreter&quot;</p> </blockquote> <p>Has anyone experienced this or does anyone know a possible solution?</p>
<p>Problem resolved by changing the package versions:</p> <blockquote> <p>Tensorflow = 2.0.0 &amp; Keras = 2.3.1</p> </blockquote>
python-3.x|tensorflow|keras
0
9,189
65,810,901
Remove selected area from picture PILLOW
<pre><code>import numpy from PIL import Image, ImageDraw im = Image.open(&quot;image.jpg&quot;).convert(&quot;RGBA&quot;) imArray = numpy.asarray(im) polygon = [(700,150),(1200,150),(1200,450),(1000,650),(700,650)] maskIm = Image.new('L', (imArray.shape[1], imArray.shape[0]), 0) ImageDraw.Draw(maskIm).polygon(polygon, outline=1, fill=1) mask = numpy.array(maskIm) newImArray = numpy.empty(imArray.shape,dtype='uint8') newImArray[:,:,:3] = imArray[:,:,:3] newImArray[:,:,3] = mask*255 newIm = Image.fromarray(newImArray, &quot;RGBA&quot;) newIm.show() </code></pre> <p>Original Image <img src="https://i.imgur.com/1WQin5z.jpeg" alt="Original Image" /></p> <p>After this code, I'm getting this image</p> <p><img src="https://i.imgur.com/gRsKNqQ.png" alt="this image" /></p> <p>how can I just remove the selection from the picture?</p> <p>I want to do like this</p> <p><img src="https://i.imgur.com/GB58dOV.jpg" alt="image" /></p> <p>Thank you in advance for your help</p>
<p>It seems like you need to invert your mask - currently, you have 0 around your polygon and 1 inside.</p> <p>try to change <code>newImArray[:,:,3] = mask*255</code> to <code>newImArray[:,:,3] = (1-mask)*255</code></p>
python|python-3.x|numpy|python-imaging-library
1
9,190
63,399,155
Adding columns in one dataframe from calculations based on other dataframe using pandas library
<p>I have a dataframe df1 like:</p> <pre><code>  cycleName quarter  product  qty  price sell/buy
0      2020      q3     wood   10    100     sell
1      2020      q3  leather    5    200      buy
2      2020      q3     wood    2    200      buy
3      2020      q4     wood   12     40     sell
4      2020      q4  leather   12     40     sell
5      2021      q1     wood   12     80     sell
6      2021      q2  leather   12     90     sell
</code></pre> <p>And another dataframe df2 as below. It has the unique products of df1:</p> <pre><code>  product  currentValue
0    wood            20
1 leather            50
</code></pre> <p>I want to create a new column in df2, called income2020, based on calculations on the df1 data. For example, if the product is wood, income2020 is built from the df1 rows where cycleName is 2020: if sell/buy is sell, add qty * price, otherwise subtract qty * price.</p> <p>Expected output:</p> <pre><code>  product  currentValue  income2020
0    wood            20  10 * 100 - 2 * 200 + 12 * 40  (= 1080)
1 leather            50  -5 * 200 + 12 * 40            (= -520)
</code></pre> <p>I am trying to do this using pandas dataframes, which I am very new to, and I am not able to figure out how to create that column in df2 based on these conditions on df1.</p>
<p>You can map <code>sell</code> as 1 and <code>buy</code> as -1 using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>pd.Series.map</code></a>, then multiply the columns <code>qty</code>, <code>price</code> and <code>sell/buy</code> row-wise using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.prod.html" rel="nofollow noreferrer"><code>df.prod</code></a>. To keep only the <code>2020</code> <code>cycleName</code> rows use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.query.html" rel="nofollow noreferrer"><code>df.query</code></a>, then group by <code>product</code> and take the sum using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.sum.html" rel="nofollow noreferrer"><code>GroupBy.sum</code></a>.</p> <pre><code>df_2020 = df.query('cycleName == 2020')<b>.copy()</b>  # `df[df['cycleName'] == 2020].copy()`
df_2020['sell/buy'] = df_2020['sell/buy'].map({'sell':1, 'buy':-1})

df_2020[['qty', 'price', 'sell/buy']].prod(axis=1).groupby(df_2020['product']).sum()

product
leather    -520
wood       1080
dtype: int64</code></pre> <p>Note:</p> <ul> <li>Use <code>.copy</code>, else you would get a <code>SettingWithCopyWarning</code>.</li> <li>To maintain the original order use <code>sort=False</code> in <code>df.groupby</code>: <pre><code>(df_2020[['qty', 'price', 'sell/buy']].
 prod(axis=1).
 groupby(df_2020['product'], sort=False).sum()
)

product
wood       1080
leather    -520
dtype: int64
</code></pre> </li> </ul>
python|pandas|dataframe
1
9,191
53,395,329
Should Dropout masks be reused during Adversarial Training?
<p>I am implementing adversarial training with the FGSM method from <a href="https://arxiv.org/abs/1412.6572" rel="nofollow noreferrer">Explaining and Harnessing Adversarial Examples</a> using the custom loss function:</p> <p><a href="https://i.stack.imgur.com/RG7eB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RG7eB.png" alt=""></a></p> <p>Implemented in <code>tf.keras</code> using a custom loss function, it conceptually looks like this:</p> <pre class="lang-py prettyprint-override"><code>model = Sequential([ ... ]) def loss(labels, logits): # Compute the cross-entropy on the legitimate examples cross_ent = tf.losses.softmax_cross_entropy(labels, logits) # Compute the adversarial examples gradients, = tf.gradients(cross_ent, model.input) inputs_adv = tf.stop_gradient(model.input + 0.3 * tf.sign(gradients)) # Compute the cross-entropy on the adversarial examples logits_adv = model(inputs_adv) cross_ent_adv = tf.losses.softmax_cross_entropy(labels, logits_adv) return 0.5 * cross_ent + 0.5 * cross_ent_adv model.compile(optimizer='adam', loss=loss) model.fit(x_train, y_train, ...) </code></pre> <p>This works well for a simple convolutional neural network. </p> <p>During the <code>logits_adv = model(inputs_adv)</code> call, the model is called for the second time. This means, that it will use different dropout masks than in the original feed-forward pass with <code>model.inputs</code>. The <code>inputs_adv</code>, however, were created with <code>tf.gradients(cross_ent, model.input)</code>, i.e. with the dropout masks from the original feed-forward pass. This could be problematic, as allowing the model to use new dropout masks will likely dampen the effect of the adversarial batch.</p> <p>Since implementing the reusing of dropout masks in Keras would be cumbersome, I am interested in the actual effect of reusing the masks. Does it make a difference w.r.t. the test accuracy on both legitimate and adversarial examples?</p>
<p>I tried out reusing the dropout masks during the adversarial training step's feed-forward pass with a simple CNN on MNIST. I chose the same network architecture as the one used in this <a href="https://github.com/tensorflow/cleverhans/blob/master/cleverhans_tutorials/mnist_tutorial_keras_tf.py" rel="nofollow noreferrer">cleverhans tutorial</a> with an additional dropout layer before the softmax layer. </p> <p>This is the result <em>(red = reuse dropout masks, blue = naive implementation)</em>: <a href="https://i.stack.imgur.com/o9tjn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o9tjn.png" alt="enter image description here"></a></p> <p>The solid lines represent the accuracy on legitimate test examples. The dotted lines represent the accuracy on adversarial examples generated on the test set.</p> <p><strong>In conclusion</strong>, if you only use adversarial training as a regularizer in order to improve the test accuracy itself, reusing dropout masks might not be worth the effort. For the robustness against adversarial attacks, it <strong>might</strong> make a small difference. However, you would need to run further experiments on other datasets, architectures, random seeds etc. to make a more confident statement.</p> <p>To keep the figure above readable, I omitted the accuracy on adversarial test examples for the model trained without adversarial training. The values lay around 10%.</p> <p>You can find the code for this experiment in <a href="https://gist.github.com/batzner/aaed9cdb4cd1d05f85a8a88e67c57aac" rel="nofollow noreferrer">this gist</a>. With TensorFlow's eager mode, it was rather straightforward to implement storing and reusing the dropout masks.</p>
python|tensorflow|keras|neural-network|conv-neural-network
2
9,192
72,032,964
rearrange dataframe multi-level columns by inverting the levels
<p>Given a <code>pandas.DataFrame</code> with a column <code>MultiIndex</code> as follows</p> <pre><code>A B C X Y Z X Y Z X Y Z </code></pre> <p>how to rearrange the columns into this format?</p> <pre><code>X Y Z A B C A B C A B C </code></pre> <p>(I tried <code>.columns.swaplevel(0,1)</code>, but that does not yet yield the desired grouping)</p>
<p><code>df.swaplevel</code> with <code>axis=1</code> (for columns) is what you need.</p> <pre><code>&gt;&gt;&gt; df.swaplevel(0,1, axis=1) X Y Z X Y Z X Y Z A A A B B B C C C 0 x x x x x x x x x </code></pre> <p>You can use <code>sort_index</code> to sort:</p> <pre><code>&gt;&gt;&gt; df.swaplevel(0,1, axis=1).sort_index(level=0, axis=1) X Y Z A B C A B C A B C 0 x x x x x x x x x </code></pre>
python|pandas|dataframe|multi-index
3
9,193
71,816,710
Selecting random values from a pandas dataframe and adding constant to them
<p>Let's say I have a pandas dataframe. I want to randomly pick values from one of its columns and add a constant to them, in other words select random values from that column and add some constant to those entries in place. What I did is select a sample with</p> <pre><code>df['column_in_question'].sample(frac=0.2, random_state=1).values+1000
</code></pre> <p>but this command only generates an array of the sampled values and adds 1000 to them; it does not modify the dataframe, which is not the behavior I want.</p>
<p>You can get the sampled indexes and then add the value by selecting with those indexes:</p> <pre class="lang-py prettyprint-override"><code>indexes = df['column_in_question'].sample(frac=0.2, random_state=1).index
df.loc[indexes, 'column_in_question'] += 1000

# or
df['column_in_question'] = df['column_in_question'].mask(df.index.isin(indexes), df['column_in_question'].add(1000))

# or
import numpy as np
df['column_in_question'] = np.where(df.index.isin(indexes), df['column_in_question'].add(1000), df['column_in_question'])
</code></pre>
python|pandas|dataframe
1
9,194
71,875,550
Scraping non-interactable table from dynamic webpage
<p>I've seen a couple of posts with this same question but their scripts usually waits until one of the elements (buttons) is clickable. Here is the table I'm trying to scrape:</p> <p><a href="https://ropercenter.cornell.edu/presidential-approval/highslows" rel="nofollow noreferrer">https://ropercenter.cornell.edu/presidential-approval/highslows</a></p> <p>First couple of tries my code was returning all the rows except both Polling Organization columns. Without changing anything, it now only scrapes the table headers and the tbody tag (no table rows).</p> <pre><code>url = &quot;https://ropercenter.cornell.edu/presidential-approval/highslows&quot; driver = webdriver.Firefox() driver.get(url) driver.implicitly_wait(12) soup = BeautifulSoup(driver.page_source, 'lxml') table = soup.find_all('table') approvalData = pd.read_html(str(table[0])) approvalData = pd.DataFrame(approvalData[0], columns = ['President', 'Highest %', 'Polling Organization &amp; Dates H' 'Lowest %', 'Polling Organization &amp; Dates L']) </code></pre> <p>Should I use explicit wait? If so, which condition should I wait for since the dynamic table is not interactive?</p> <p>Also, why did the output of my code change after running it multiple times?</p>
<p>Maybe more cheating, but easier solution, which indeed solves your problem, but in other way, would be to take a look what frontend does (using developer tools), and discover it calls the api, which returns JSON value, so no selenium is really needed. <code>requests</code> and <code>pandas</code> are enough.</p> <pre class="lang-py prettyprint-override"><code>import requests import pandas as pd url = &quot;https://ropercenter.cornell.edu/presidential-approval/api/presidents/highlow&quot; data = requests.get(url).json() df = pd.io.json.json_normalize(data) </code></pre> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df &gt;&gt;&gt; df president.id president.active president.surname president.givenname president.shortname ... low.approve low.disapprove low.noOpinion low.sampleSize low.presidentName 0 e9c0d19b-dfe9-49cf-9939-d06a0f256e57 True Biden Joe None ... 33 53 13 1313.0 Joe Biden 1 bc9855d5-8e97-4448-b62e-1fb2865c79e6 True Trump Donald None ... 29 68 3 5360.0 Donald Trump 2 1c49881f-0f0c-4a53-9b2c-0dd6540f88e4 True Obama Barack None ... 37 57 5 1017.0 Barack Obama 3 ceda6415-5975-404d-8049-978758a7d1f8 True Bush George W. W. Bush ... 19 77 4 1100.0 George W. Bush 4 4f7344de-a7bd-4bc6-9147-87963ae51095 True Clinton Bill None ... 36 50 14 800.0 Bill Clinton 5 116721f1-f947-4c14-b0b5-d521ed5a4c8b True Bush George H.W. H.W. Bush ... 29 60 11 1001.0 George H.W. Bush 6 43720f8f-0b9f-43b0-8c0d-63da059e7a57 True Reagan Ronald None ... 35 56 9 1555.0 Ronald Reagan 7 7aa76fd3-e1bc-4e9a-b13c-463a64e0c864 True Carter Jimmy None ... 28 59 13 1542.0 Jimmy Carter 8 6255dd77-531d-46c6-bb26-627e2a4b3654 True Ford Gerald None ... 37 39 24 1519.0 Gerald Ford 9 f1a23b06-4200-41e6-b137-dd46260ac4d8 True Nixon Richard None ... 23 55 22 1589.0 Richard Nixon 10 772aabfd-289b-4f10-aaae-81a82dd3dbc6 True Johnson Lyndon B. None ... 35 52 13 1526.0 Lyndon B. Johnson 11 d849b5a8-f711-4ac9-9728-c3915e17bb6a True Kennedy John F. None ... 56 30 14 1550.0 John F. Kennedy 12 e22fd64a-cf20-4bc4-8db6-b4e71dc4483d True Eisenhower Dwight D. None ... 48 36 16 NaN Dwight D. Eisenhower 13 ab0bfa04-61da-49d1-8069-6992f6124f17 True Truman Harry S. None ... 22 65 13 NaN Harry S. Truman 14 11edf04f-9d8d-4678-976d-b9339b46705d True Roosevelt Franklin D. None ... 48 43 8 NaN Franklin D. Roosevelt [15 rows x 41 columns] &gt;&gt;&gt; df.columns Index(['president.id', 'president.active', 'president.surname', 'president.givenname', 'president.shortname', 'president.fullname', 'president.number', 'president.terms', 'president.ratings', 'president.termCount', 'president.ratingCount', 'high.id', 'high.active', 'high.organization.id', 'high.organization.active', 'high.organization.name', 'high.organization.ratingCount', 'high.pollingStart', 'high.pollingEnd', 'high.updated', 'high.president', 'high.approve', 'high.disapprove', 'high.noOpinion', 'high.sampleSize', 'high.presidentName', 'low.id', 'low.active', 'low.organization.id', 'low.organization.active', 'low.organization.name', 'low.organization.ratingCount', 'low.pollingStart', 'low.pollingEnd', 'low.updated', 'low.president', 'low.approve', 'low.disapprove', 'low.noOpinion', 'low.sampleSize', 'low.presidentName'], dtype='object') </code></pre>
python|pandas|selenium|web-scraping|geckodriver
2
9,195
55,487,921
Markevry in df.pyplot
<p>I have a problem with the <code>markevry</code> option in a pandas DataFrame plot. I want to mark the max value of every column on the plot. I tried to run this in PyCharm on Python 3. My code is as follows:</p> <pre><code>#projektII
import pandas as pd
import matplotlib.pyplot as plt

dane = pd.read_table('xxx.txt', names=('rok', 'kroliki', 'lisy', 'marchewki'))
df = pd.DataFrame(dane)
data = df[1:]
data=data.astype(float)
print(data)

markers_on = data['kroliki'].max
markers_on2 = data['lisy'].max
markers_on3 = data['marchewki'].max

ax = plt.gca()

data.plot(kind='line',x='rok',y='kroliki', color = 'blue',ax=ax, markevry = [markers_on])
data.plot(kind='line',x='rok',y='lisy', color='red', ax=ax, markevry = [markers_on2])
data.plot(kind='line',x='rok',y='marchewki',color = 'orange',ax=ax, markevry = [markers_on3])

ax.set_xlabel("rok")
ax.set_ylabel("ilosc")
plt.show()
</code></pre> <p>But I see this kind of error every time:</p> <pre><code>Traceback (most recent call last):
  File "C:/Users/X", line 18, in &lt;module&gt;
    data.plot(kind='line',x='rok',y='kroliki', color = 'blue',ax=ax, markevry = [markers_on])
  File "C:\Users\X\venv\lib\site-packages\pandas\plotting\_core.py", line 2941, in __call__
    sort_columns=sort_columns, **kwds)
  File "C:\Users\X\venv\lib\site-packages\pandas\plotting\_core.py", line 1977, in plot_frame
    **kwds)
  File "C:\Users\X\venv\lib\site-packages\pandas\plotting\_core.py", line 1804, in _plot
    plot_obj.generate()
  File "C:\Users\X\venv\lib\site-packages\pandas\plotting\_core.py", line 260, in generate
    self._make_plot()
  File "C:\Users\X\venv\lib\site-packages\pandas\plotting\_core.py", line 985, in _make_plot
    **kwds)
...
  File "C:\Users\MX\venv\lib\site-packages\matplotlib\artist.py", line 912, in _update_property
    raise AttributeError('Unknown property %s' % k)
AttributeError: Unknown property markevry
</code></pre> <p>Does anyone know what is wrong with this code? Thanks!</p>
<p>1st Change: <code>.max</code> without parentheses is the method itself, not a value. And since <code>markevery</code> expects the <em>positions</em> of the points to mark rather than the data values, take the positional index of the maximum instead:</p> <pre><code>markers_on = data['kroliki'].values.argmax()
markers_on2 = data['lisy'].values.argmax()
markers_on3 = data['marchewki'].values.argmax()
</code></pre> <p>2nd Change: the keyword itself is misspelled, which is exactly what the traceback complains about (<code>AttributeError: Unknown property markevry</code>); the matplotlib option is called <code>markevery</code>. Also pass a <code>marker</code> style, otherwise no marker is drawn at all:</p> <pre><code>data.plot(kind='line', x='rok', y='kroliki', color='blue', ax=ax, marker='o', markevery=[markers_on])
data.plot(kind='line', x='rok', y='lisy', color='red', ax=ax, marker='o', markevery=[markers_on2])
data.plot(kind='line', x='rok', y='marchewki', color='orange', ax=ax, marker='o', markevery=[markers_on3])
</code></pre> <p>I think this should work!</p>
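<p>For completeness, here is a self-contained sketch of the same idea. The data is invented (the original <code>xxx.txt</code> isn't available), so treat the numbers purely as placeholders:</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt

# made-up data standing in for xxx.txt
data = pd.DataFrame({
    'rok':     [2015, 2016, 2017, 2018],
    'kroliki': [10.0, 14.0,  9.0, 12.0],
    'lisy':    [ 3.0,  5.0,  4.0,  2.0],
})

ax = plt.gca()
for col, color in [('kroliki', 'blue'), ('lisy', 'red')]:
    pos = data[col].values.argmax()          # positional index of the maximum
    data.plot(x='rok', y=col, color=color, ax=ax,
              marker='o', markevery=[pos])   # draw a marker only at that point

ax.set_xlabel('rok')
ax.set_ylabel('ilosc')
plt.show()
</code></pre>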
python|pandas|dataframe
0
9,196
56,464,739
Sampling from tensorflow Dataset into same tensor multiple times per single session.run() call
<p>Consider the following example:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf import numpy as np X = np.arange(4).reshape(4, 1) + (np.arange(3) / 10).reshape(1, 3) batch = tf.data.Dataset.from_tensor_slices(X) \ .batch(2).make_one_shot_iterator().get_next() def foo(x): return x + 1 tensor = foo(batch) </code></pre> <p>Now, I'm looking for a way to be able to sample <code>tensor</code> multiple times per single <code>session.run()</code> call, i.e.:</p> <pre class="lang-py prettyprint-override"><code>def bar(x): return x - 1 result1 = bar(tensor) with tf.control_dependencies([result1]): op = &lt;create operation to sample from dataset into `tensor` again&gt; with tf.control_dependencies([op]): result2 = bar(tensor) sess = tf.Session() print(*sess.run([result1, result2]), sep='\n\n') </code></pre> <p>which should output:</p> <pre><code>[[0. 0.1 0.2] [1. 1.1 1.2]] [[2. 2.1 2.2] [3. 3.1 3.2]] </code></pre> <p>Is that even possible? I know one can call <code>get_next()</code> multiple times to get multiple dataset samples in <strong>different</strong> tensor objects, but can one sample into the <strong>same</strong> tensor object?</p> <p>For me the use case is such that the <code>foo</code> and <code>bar</code> parts of this code are separated, and the <code>foo</code> part doesn't know how many times the samples will be needed per run.</p> <p>P.S. I'm using tf 1.12. 1.13 is an option too, but not tf 2 though.</p>
<p>Yes, it's possible.</p> <p>A couple of insights on what you've tried so far:</p> <ol> <li>You can use the dataset iterator returned from <code>make_one_shot_iterator()</code> and call <code>get_next()</code> on it each time you need a new value from the dataset</li> <li>You can make your own function that is part of the tf graph to pass the result through <code>foo()</code></li> </ol> <p>Something like this gives the output you want (as I understand it):</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import numpy as np

X = np.arange(4).reshape(4, 1) + (np.arange(3) / 10).reshape(1, 3)

iterator = tf.data.Dataset.from_tensor_slices(X) \
    .batch(2).make_one_shot_iterator()

def foo(x):
    return x + 1

def get_tensor():
    return foo(iterator.get_next())

tensor = get_tensor()

def bar(x):
    return x - 1

result1 = bar(tensor)

with tf.control_dependencies([result1]):
    op = get_tensor()

with tf.control_dependencies([op]):
    result2 = bar(op)

sess = tf.Session()
print(*sess.run([result1, result2]), sep='\n\n')
</code></pre>
python|tensorflow|tensorflow-datasets
0
9,197
66,848,540
(tensorflow 2.4.1)how to get a ragged tensor has determinate last dimension shape?
<p>Like this:</p> <pre><code>a = [1.0, 2.0, 3.0]
b = [[a, a, a, a], [a, a, a], [a, a, a, a, a, a, a], [a, a]]
c = tf.ragged.constant(b, dtype=tf.float32)
</code></pre> <p>I got a tensor with shape <code>[4, None, None]</code>, but I expect <code>[4, None, 3]</code>.</p>
<p>According to the documentation (<a href="https://www.tensorflow.org/api_docs/python/tf/RaggedTensor#uniform_inner_dimensions_2" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/RaggedTensor#uniform_inner_dimensions_2</a>), to get a uniform inner dimension on a ragged tensor, you need to start from a multidimensional tensor of values so TensorFlow <em>knows</em> that dimension is uniform</p> <p>For your example case, you would want to start by making <code>b</code> a 16x3 matrix and then using one of the &quot;from_row_&quot; methods (e.g. <code>tf.RaggedTensor.from_row_lengths()</code>) to partition it into your ragged splits. E.g. something like</p> <pre><code>import numpy as np import tensorflow as tf a = np.array([1.0, 2.0, 3.0]) b = np.tile(a, (16,1)) c = tf.RaggedTensor.from_row_lengths(values= b, row_lengths = [4,3,7,2]) </code></pre> <p>should get you the ragged tensor you want with shape <code>[4,None,3]</code>.</p>
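<p>As a quick sanity check (the exact print format can differ a bit between TF versions), the uniform inner dimension should now show up in the static shape:</p> <pre><code>print(c.shape)          # (4, None, 3)
print(c.row_lengths())  # tf.Tensor([4 3 7 2], shape=(4,), dtype=int64)
</code></pre>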
tensorflow|ragged-tensors
2
9,198
67,065,069
Having an issue creating a separate quantile column based on a condition
<p>I am trying to create a new column that contains quantile information. The one condition I have for this new column is that I only need to produce a quantile value for rows where it equals a certain value from another column. I thought the code below would filter the data to the specific value (&quot;Below&quot;) and apply the quantile to only those records but I'm getting the quantile data for all rows.</p> <pre><code>rsh_df['pcf_q'] = rsh_df.loc[rsh_df['ind_comp'] == &quot;Below&quot;, 'pcf'].quantile(0.05) </code></pre> <p><a href="https://i.stack.imgur.com/LuHFr.png" rel="nofollow noreferrer">Image of the dataframe</a></p> <p>How can I adjust the code to explicitly apply quantiles to a certain tag?</p> <p>Thanks in advance.</p>
<p>You need to assign the value only to the rows where the condition is met; the remaining rows of the new column are left as <code>NaN</code>. Try:</p> <pre><code>rsh_df.loc[rsh_df['ind_comp'] == &quot;Below&quot;, 'pcf_q'] = (
    rsh_df
    .loc[rsh_df['ind_comp'] == &quot;Below&quot;, 'pcf']
    .quantile(0.05)
)
</code></pre>
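<p>A small illustration with made-up data (only the two columns used above) shows that just the matching rows get filled, while the rest stay <code>NaN</code>:</p> <pre><code>import pandas as pd

rsh_df = pd.DataFrame({
    'ind_comp': ['Below', 'Above', 'Below', 'Below'],
    'pcf':      [1.0, 9.0, 2.0, 3.0],
})

mask = rsh_df['ind_comp'] == 'Below'
rsh_df.loc[mask, 'pcf_q'] = rsh_df.loc[mask, 'pcf'].quantile(0.05)

print(rsh_df)
#   ind_comp  pcf  pcf_q
# 0    Below  1.0    1.1
# 1    Above  9.0    NaN
# 2    Below  2.0    1.1
# 3    Below  3.0    1.1
</code></pre>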
pandas
0
9,199
47,369,827
How to join Multi-level dataframe on values in single-level dataframe
<p>what I have so far is a normal transactional dataframe with the following columns:</p> <pre><code>store | item | year | month | day | sales </code></pre> <p>'year' can be 2015, 2016, 2017. </p> <p>With that I created a summary dataframe:</p> <pre><code>store_item_years = df.groupby( ['store','item','year'])['sales'].agg( [np.sum, np.mean, np.std, np.median, np.min, np.max]).unstack( fill_value=0) </code></pre> <p>The last one results in a Multi-Index with 2 levels, like this:</p> <pre><code> sum mean year | 2015 | 2016 | 2017 | 2015 | 2016 | 2017 | ... store | item sum1 ... ... mean1 mean2 ... | ... </code></pre> <p>Now I'd like to merge the summary table back onto the transactional one: </p> <pre><code>store | item | year | month | day | sales | + | sum+'by'+year | mean+'by'+year 2015 sum1 mean1 2016 sum2 mean2 2017 ... ... </code></pre> <p>I am trying to merge with the following:</p> <pre><code>df = pd.merge(df, store_item_years, left_on=['store', 'item', 'year'], right_on=['store', 'item', 'year'], how='left') </code></pre> <p>which results in the following error:</p> <pre><code>KeyError: 'year' </code></pre> <p>Any ideas? I am just getting my head around groupby. I haven't looked into PivotTable yet.</p> <p>Please keep in mind that the problem is simplified. The number of store_item combinations is 200+K and other groupbys having 300+ columns. But always the same principle.</p> <p>Thanks a lot in advance.</p>
<p>I think you first need to remove the <code>unstack</code> and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>join</code></a> for a left join:</p> <pre><code>store_item_years = df.groupby(
    ['store','item','year'])['sales'].agg(
    [np.sum, np.mean, np.std, np.median, np.min, np.max])

df = df.join(store_item_years, on=['store','item','year'])
</code></pre>
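<p>If you also want the new columns to carry names like the <code>sum by year</code> / <code>mean by year</code> ones from your sketch, one option (the suffix text is just my choice) is to rename the aggregate columns before joining:</p> <pre><code>store_item_years = df.groupby(
    ['store','item','year'])['sales'].agg(
    [np.sum, np.mean, np.std, np.median, np.min, np.max]).add_suffix('_by_year')

df = df.join(store_item_years, on=['store','item','year'])
# new columns: sum_by_year, mean_by_year, std_by_year, median_by_year, amin_by_year, amax_by_year
</code></pre>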
pandas|join|dataframe|merge|multi-index
2