column      dtype   range
Unnamed: 0  int64   0 to 378k
id          int64   49.9k to 73.8M
title       string  lengths 15 to 150
question    string  lengths 37 to 64.2k
answer      string  lengths 37 to 44.1k
tags        string  lengths 5 to 106
score       int64   -10 to 5.87k
6,000
48,257,800
Data filtering code in Pandas taking a lot of time to run
<p>I am executing the below code in Python. It's taking a long time to run. Is there something I am doing wrong?</p> <p>Is there a better way to do the same?</p> <pre><code>y= list(word) words = y similar = [[item[0] for item in model.wv.most_similar(word) if item[1] &gt; 0.7] for word in words] similarity_matrix = pd.DataFrame({'Root_Word': words, 'Similar_Words': similar}) similarity_matrix = similarity_matrix[['Root_Word', 'Similar_Words']] similarity_matrix['Unlist_Root']=similarity_matrix['Root_Word'].apply(lambda x: ', '.join(x)) similarity_matrix['Unlist_Similar']=similarity_matrix['Similar_Words'].apply(lambda x: ', '.join(x)) similarity_matrix=similarity_matrix.drop(['Root_Word','Similar_Words'],1) similarity_matrix.columns=['Root_Word','Similar_Words'] </code></pre>
<p>It is not possible to determine what is going on in the following line, as there is not enough data provided (I do not know what <code>model</code> is):</p> <pre><code>similar = [[item[0] for item in model.wv.most_similar(word) if item[1] &gt; 0.7] for word in words] </code></pre> <p>The second line below does not seem necessary, as you create a DataFrame <code>similarity_matrix</code> with only two columns:</p> <pre><code>similarity_matrix = pd.DataFrame({'Root_Word': words, 'Similar_Words': similar}) # This below does not do anything similarity_matrix = similarity_matrix[['Root_Word', 'Similar_Words']] </code></pre> <p>The <code>apply</code> method is not very fast. Try using vectorized methods already implemented in pandas, as shown below. <a href="https://engineering.upside.com/a-beginners-guide-to-optimizing-pandas-code-for-speed-c09ef2c6a4d6" rel="nofollow noreferrer">Here</a> is a useful link about this topic.</p> <pre><code>similarity_matrix['Unlist_Root'] = similarity_matrix['Root_Word'].apply(lambda x: ', '.join(x)) # will be faster like this: similarity_matrix['Unlist_Root'] = similarity_matrix['Root_Word'].str.join(', ') </code></pre> <p>Similarly:</p> <pre><code>similarity_matrix['Unlist_Similar'] = similarity_matrix['Similar_Words'].apply(lambda x: ', '.join(x)) # will be faster like this: similarity_matrix['Unlist_Similar'] = similarity_matrix['Similar_Words'].str.join(', ') </code></pre> <p>The rest of the code could not be made to run much faster. </p> <p>If you provided more data/info, we could help you further...</p>
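<p>Putting it all together, the whole block could shrink to a few lines (a sketch, assuming <code>model</code> is a gensim word2vec-style model and <code>word</code> is the iterable of root words from the question):</p> <pre><code>words = list(word) # keep similar words above the 0.7 similarity threshold, as in the original similar = [[w for w, sim in model.wv.most_similar(w0) if sim &gt; 0.7] for w0 in words] similarity_matrix = pd.DataFrame({'Root_Word': words, 'Similar_Words': similar}) # vectorized join instead of apply similarity_matrix['Similar_Words'] = similarity_matrix['Similar_Words'].str.join(', ') </code></pre>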
python-3.x|pandas
0
6,001
48,316,370
Get indexes with condition
<p>How do I find the positions (indexes) of all entries of search_value?</p> <pre><code>import pandas as pd import numpy as np search_value=8 lst=[5, 8, 2, 7, 8, 8, 2, 4] df=pd.DataFrame(lst) df["is_search_value"]=np.where(df[0]==search_value, True, False) print(df.head(20)) </code></pre> <p>Output:</p> <pre><code> 0 is_search_value 0 5 False 1 8 True 2 2 False 3 7 False 4 8 True 5 8 True 6 2 False 7 4 False </code></pre> <p>Desired output:</p> <pre><code>1 4 5 </code></pre> <p>If search_value is 10, then the desired output is: </p> <pre><code>None </code></pre>
<p>You can use <code>enumerate</code> in a conditional list comprehension to get the index locations.</p> <pre><code>my_list = [5, 8, 2, 7, 8, 8, 2, 4] search_value = 8 &gt;&gt;&gt; [i for i, n in enumerate(my_list) if n == search_value] [1, 4, 5] </code></pre> <p>If the search value is not in the list, then an empty list will be returned (not exactly None, but still falsey).</p> <p>Using pandas, you can use boolean indexing to get the matches, then extract the index to a list:</p> <pre><code>df[df[0] == search_value].index.tolist() </code></pre> <p>An empty list will satisfy most conditions meant for None (both evaluate to False). If you <em>really</em> need None, then use the suggestion of @cᴏʟᴅsᴘᴇᴇᴅ.</p>
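<p>If you really do need <code>None</code> rather than an empty result, one option (a minimal sketch, exploiting the fact that an empty list is falsey) is:</p> <pre><code>matches = df[df[0] == search_value].index.tolist() result = matches or None # [1, 4, 5] for 8; None for 10 </code></pre>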
python|python-3.x|pandas|numpy
3
6,002
48,668,706
How do I convert an Armadillo matrix to cube?
<p>I'm trying to recreate the following Python numpy code:</p> <pre><code>num_rows, num_cols = data.shape N = 4 data = data.reshape(N, num_rows/N, num_cols) </code></pre> <p>in C++ using Armadillo matrices and cubes. How can this be done most efficiently? I don't think the resize/reshape operations are supported directly for moving from a 2D matrix to a 3D cube.</p>
<p>The <em>fastest</em> way to construct such a cube is to use one of the <a href="http://arma.sourceforge.net/docs.html#Cube" rel="nofollow noreferrer">advanced constructors</a>. These allow you to directly create a new object from an arbitrary section of memory, even without copying any of the data. This is closest in spirit to the way NumPy does reshaping, in which a <em>view</em> of the original data is returned, rather than a copy.</p> <p>You'd use the constructor like so:</p> <pre><code>// Assuming a is an arma::mat. int N = 4; arma::cube c(a.memptr(), N, a.n_rows / N, a.n_cols, false); </code></pre> <p>This takes the memory directly from <code>a</code>, without copying, and uses it as <code>c</code>'s data.</p> <p>Of course, this is fast, but dangerous. You are responsible for guaranteeing that the pointed-to memory is valid as long as <code>c</code> is around. This means that the lifetime of <code>c</code> must be strictly nested in the lifetime of <code>a</code>. This can be hard to ensure, especially when <code>a</code> and <code>c</code> are both created on the heap.</p> <p>You can also allow <code>c</code> to <em>copy</em> <code>a</code>'s data, by leaving off the last argument or setting it to <code>true</code>. This takes more time than the no-copy constructor, but probably less than assigning each slice from <code>a</code>'s data, since this constructor does a single bulk <code>memcpy</code> of the underlying data.</p> <p>All of this is subject to the row- vs. column-major point brought up by @ewcz's answer. Make sure you know what you get when you reshape, especially if you're using the advanced constructors.</p>
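<p>As a quick illustration of the memory-order caveat from the NumPy side (a sketch; Armadillo stores data column-major, so an <code>order='F'</code> reshape mimics what its advanced constructor will see in the buffer):</p> <pre><code>import numpy as np a = np.arange(8).reshape(2, 4) # row-major (C order) by default print(a.reshape(2, 2, 2)) # C-order reshape, NumPy's default print(a.reshape(2, 2, 2, order='F')) # column-major reshape, how Armadillo would interpret the same memory </code></pre>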
python|c++|numpy|armadillo
2
6,003
48,787,340
seed=1, TensorFlow Xavier_initializer
<p>What does <code>seed=1</code> do in the following code?</p> <pre><code>W3 = tf.get_variable("W3", [L3, L2], initializer = tf.contrib.layers.xavier_initializer(seed=1)) </code></pre>
<p>It sets the random seed, so the weight values are initialized to the same values on every run. From Wikipedia: a random seed is a number (or vector) used to initialize a pseudo-random number generator.</p>
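<p>As a quick check (a minimal sketch using the equivalent Keras Glorot/Xavier initializer, since <code>tf.contrib</code> was removed in TensorFlow 2.x), two initializers built with the same seed produce identical weights:</p> <pre><code>import tensorflow as tf init_a = tf.keras.initializers.GlorotUniform(seed=1) init_b = tf.keras.initializers.GlorotUniform(seed=1) w1 = init_a(shape=(3, 2)) w2 = init_b(shape=(3, 2)) print(tf.reduce_all(w1 == w2).numpy()) # True -- same seed, same initial weights </code></pre>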
python|tensorflow
0
6,004
48,663,207
Colaboratory install Tensorflow Object Detection Api
<p>I successfully executed in Google Colaboratory a notebook for training a model and doing image recognition in TensorFlow. Now I want to start a new notebook with the <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="nofollow noreferrer">Object Detection API</a>. When I execute my code I get the following error:</p> <pre><code>ModuleNotFoundError: No module named 'object_detection' </code></pre> <p>How can I install the Object Detection API in Colaboratory? I followed the install <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/installation.md" rel="nofollow noreferrer">instructions</a>, but I can't execute:</p> <pre><code># From tensorflow/models/research/ protoc object_detection/protos/*.proto --python_out=. </code></pre>
<p>Here is an example notebook that shows the installation and configuration of the TensorFlow object detection API:</p> <p><a href="https://colab.research.google.com/drive/1kHEQK2uk35xXZ_bzMUgLkoysJIWwznYr" rel="noreferrer">https://colab.research.google.com/drive/1kHEQK2uk35xXZ_bzMUgLkoysJIWwznYr</a></p> <p>The departure from the install instructions on the site include modifying <code>sys.path</code> directly and executing <code>model_builder_test.py</code> using <code>%run</code>. The reason for these differences is that when running in Colab, you're already in a Python interpreter, so you don't need to worry about modifying the environment for a future shell invocation of <code>python</code>.</p>
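<p>For reference, the core steps in such a notebook look roughly like this (a sketch -- treat the exact paths as assumptions, since the repository layout may have changed since):</p> <pre><code>!git clone --depth 1 https://github.com/tensorflow/models !apt-get install -qq protobuf-compiler %cd models/research !protoc object_detection/protos/*.proto --python_out=. import sys sys.path.append('/content/models/research') sys.path.append('/content/models/research/slim') %run object_detection/builders/model_builder_test.py </code></pre>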
tensorflow|object-detection|google-colaboratory
5
6,005
48,711,082
Empty values in pandas -- most memory-efficient way to filter out empty values for some columns but keep empty values for one column?
<p>Using Python, I have a large file (millions of rows) that I am reading in with Pandas using pd.read_csv. My goal is to minimize the amount of memory I use as much as possible.</p> <p>Out of about 15 columns in the file, I only want to keep 6 columns. Of those 6 columns, I have different needs for the empty rows.</p> <p>Specifically, for 5 of the columns, I'd like to filter out / ignore all of the empty rows. But for 1 of the columns, I need to keep only the empty rows.</p> <p>What is the most memory-efficient way to do this?</p> <p>I guess I have two problems:</p> <p>First, looking at <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer">the documentation for Pandas read_csv</a>, it's not clear to me if there is a way to filter out empty rows. Is there a set of parameters and specifications for read_csv -- or some other method -- that I can use to filter out empty rows?</p> <p>Second, is it possible to filter out empty rows only for some columns but then keep all of the empty rows for one of my columns?</p>
<p>I would advise you use <a href="http://dask.pydata.org/en/latest/dataframe.html" rel="nofollow noreferrer"><code>dask.dataframe</code></a>. Syntax is pandas-like, but it deals with chunking and optimal memory management. Only when you need the result in memory should you translate the dataframe back to <code>pandas</code>, where of course you will need sufficient memory to hold the result in a dataframe.</p> <pre><code>import dask.dataframe as dd df = dd.read_csv('file.csv') # filtering and manipulation logic df = df.loc[....., ....] # compute &amp; return to pandas df_pandas = df.compute() </code></pre>
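<p>A sketch of what the filtering logic itself might look like (the column names <code>a</code>-<code>e</code> for the five columns that must be non-empty and <code>f</code> for the column that must be empty are hypothetical -- substitute your own; <code>usecols</code> also avoids ever loading the unwanted columns):</p> <pre><code>import dask.dataframe as dd cols = ['a', 'b', 'c', 'd', 'e', 'f'] df = dd.read_csv('file.csv', usecols=cols) # only read the 6 columns you need # keep rows where the five columns are filled and the sixth is empty mask = df[['a', 'b', 'c', 'd', 'e']].notnull().all(axis=1) &amp; df['f'].isnull() df_pandas = df[mask].compute() </code></pre>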
python|pandas|numpy|filter|nan
0
6,006
48,474,699
Marker size/alpha scaling with window size/zoom in plot/scatter
<p>When exploring data sets with many points on an xy chart, I can adjust the alpha and/or marker size to give a good quick visual impression of where the points are most densely clustered. However, when I zoom in or make the window bigger, a different alpha and/or marker size is needed to give the same visual impression.</p> <p>How can I have the alpha value and/or the marker size increase when I make the window bigger or zoom in on the data? I am thinking that if I double the window area I could double the marker size, and/or take the square root of the alpha; and the opposite for zooming. </p> <p>Note that all points have the same size and alpha. Ideally the solution would work with plot(), but if it can only be done with scatter() that would be helpful also.</p>
<p>You can achieve what you want with <code>matplotlib</code> event handling. You have to catch zoom and resize events separately. It's a bit tricky to account for both at the same time, but not impossible. Below is an example with two subplots, a line plot on the left and a scatter plot on the right. Both zooming (factor) and resizing of the figure (fig_factor) re-scale the points according to the scaling factors in figure size and x- and y- limits. As there are two limits defined -- one for the <code>x</code> and one for the <code>y</code> direction -- I used the respective minima for the two factors. If you'd rather scale with the larger factors, change the <code>min</code> to <code>max</code> in both event functions.</p> <pre><code>from matplotlib import pyplot as plt import numpy as np fig, axes = plt.subplots(nrows=1, ncols = 2) ax1,ax2 = axes fig_width = fig.get_figwidth() fig_height = fig.get_figheight() fig_factor = 1.0 ##saving some values xlim = dict() ylim = dict() lines = dict() line_sizes = dict() paths = dict() point_sizes = dict() ## a line plot x1 = np.linspace(0,np.pi,30) y1 = np.sin(x1) lines[ax1] = ax1.plot(x1, y1, 'ro', markersize = 3, alpha = 0.8) xlim[ax1] = ax1.get_xlim() ylim[ax1] = ax1.get_ylim() line_sizes[ax1] = [line.get_markersize() for line in lines[ax1]] ## a scatter plot x2 = np.random.normal(1,1,30) y2 = np.random.normal(1,1,30) paths[ax2] = ax2.scatter(x2,y2, c = 'b', s = 20, alpha = 0.6) point_sizes[ax2] = paths[ax2].get_sizes() xlim[ax2] = ax2.get_xlim() ylim[ax2] = ax2.get_ylim() def on_resize(event): global fig_factor w = fig.get_figwidth() h = fig.get_figheight() fig_factor = min(w/fig_width,h/fig_height) for ax in axes: lim_change(ax) def lim_change(ax): lx = ax.get_xlim() ly = ax.get_ylim() factor = min( (xlim[ax][1]-xlim[ax][0])/(lx[1]-lx[0]), (ylim[ax][1]-ylim[ax][0])/(ly[1]-ly[0]) ) try: for line,size in zip(lines[ax],line_sizes[ax]): line.set_markersize(size*factor*fig_factor) except KeyError: pass try: paths[ax].set_sizes([s*factor*fig_factor for s in point_sizes[ax]]) except KeyError: pass fig.canvas.mpl_connect('resize_event', on_resize) for ax in axes: ax.callbacks.connect('xlim_changed', lim_change) ax.callbacks.connect('ylim_changed', lim_change) plt.show() </code></pre> <p>The code has been tested in Python 2.7 and 3.6 with matplotlib 2.1.1.</p> <p><strong>EDIT</strong></p> <p>Motivated by the comments below and <a href="https://stackoverflow.com/a/48174228/2454357">this answer</a>, I created another solution. The main idea here is to only use one type of event, namely <code>draw_event</code>. At first the plots did not update correctly upon zooming. Also <code>ax.draw_artist()</code> followed by a <code>fig.canvas.draw_idle()</code> like in the linked answer did not really solve the problem (however, this might be platform/backend specific). Instead I added an extra call to <code>fig.canvas.draw()</code> whenever the scaling changes (the <code>if</code> statement prevents infinite loops).</p> <p>In addition, to avoid all the global variables, I wrapped everything into a class called <code>MarkerUpdater</code>. Each <code>Axes</code> instance can be registered separately to the <code>MarkerUpdater</code> instance, so you could also have several subplots in one figure, of which some are updated and some not. 
I also fixed another bug, where the points in the scatter plot scaled wrongly -- they should scale quadratically, not linearly (<a href="https://stackoverflow.com/a/47403507/2454357">see here</a>).</p> <p>Finally, as it was missing from the previous solution, I also added updating for the <code>alpha</code> value of the markers. This is not quite as straightforward as the marker size, because <code>alpha</code> values must not be larger than 1.0. For this reason, in my implementation the <code>alpha</code> value can only be decreased from the original value. Here I implemented it such that the <code>alpha</code> decreases when the figure size is decreased. Note that if no <code>alpha</code> value is provided to the plot command, the artist stores <code>None</code> as the alpha value. In this case the automatic <code>alpha</code> tuning is off.</p> <p>What should be updated in which <code>Axes</code> can be defined with the <code>features</code> keyword -- see below <code>if __name__ == '__main__':</code> for an example of how to use <code>MarkerUpdater</code>.</p> <p><strong>EDIT 2</strong></p> <p>As pointed out by @ImportanceOfBeingErnest, there was a problem with infinite recursion with my answer when using the <code>TkAgg</code> backend, and apparently problems with the figure not refreshing properly upon zooming (which I couldn't verify, so probably that was implementation dependent). Removing the <code>fig.canvas.draw()</code> and adding <code>ax.draw_artist(ax)</code> within the loop over the <code>Axes</code> instances instead fixed this issue.</p> <p><strong>EDIT 3</strong></p> <p>I updated the code to fix an ongoing issue where the figure is not updated properly upon a <code>draw_event</code>. The fix was taken from this answer, but modified to also work for several figures.</p> <p>In terms of an explanation of how the factors are obtained, the <code>MarkerUpdater</code> instance contains a <code>dict</code> that stores for each <code>Axes</code> instance the figure dimensions and the limits of the axes at the time it is added with <code>add_ax</code>. Upon a <code>draw_event</code>, which is for instance triggered when the figure is resized or the user zooms in on the data, the new (current) values for figure size and axes limits are retrieved and a scaling factor is calculated (and stored) such that zooming in and increasing the figure size makes the markers bigger. Because x- and y-dimensions may change at different rates, I use <code>min</code> to pick one of the two calculated factors and always scale against the original size of the figure.</p> <p>If you want your alpha to scale with a different function, you can easily change the lines that adjust the alpha value. 
For instance, if you want a power law instead of a linear decrease, you can write <code>path.set_alpha(alpha*facA**n)</code>, where n is the power.</p> <pre><code>from matplotlib import pyplot as plt import numpy as np ##plt.switch_backend('TkAgg') class MarkerUpdater: def __init__(self): ##for storing information about Figures and Axes self.figs = {} ##for storing timers self.timer_dict = {} def add_ax(self, ax, features=[]): ax_dict = self.figs.setdefault(ax.figure,dict()) ax_dict[ax] = { 'xlim' : ax.get_xlim(), 'ylim' : ax.get_ylim(), 'figw' : ax.figure.get_figwidth(), 'figh' : ax.figure.get_figheight(), 'scale_s' : 1.0, 'scale_a' : 1.0, 'features' : [features] if isinstance(features,str) else features, } ax.figure.canvas.mpl_connect('draw_event', self.update_axes) def update_axes(self, event): for fig,axes in self.figs.items(): if fig is event.canvas.figure: for ax, args in axes.items(): ##make sure the figure is re-drawn update = True fw = fig.get_figwidth() fh = fig.get_figheight() fac1 = min(fw/args['figw'], fh/args['figh']) xl = ax.get_xlim() yl = ax.get_ylim() fac2 = min( abs(args['xlim'][1]-args['xlim'][0])/abs(xl[1]-xl[0]), abs(args['ylim'][1]-args['ylim'][0])/abs(yl[1]-yl[0]) ) ##factor for marker size facS = (fac1*fac2)/args['scale_s'] ##factor for alpha -- limited to values smaller 1.0 facA = min(1.0,fac1*fac2)/args['scale_a'] ##updating the artists if facS != 1.0: for line in ax.lines: if 'size' in args['features']: line.set_markersize(line.get_markersize()*facS) if 'alpha' in args['features']: alpha = line.get_alpha() if alpha is not None: line.set_alpha(alpha*facA) for path in ax.collections: if 'size' in args['features']: path.set_sizes([s*facS**2 for s in path.get_sizes()]) if 'alpha' in args['features']: alpha = path.get_alpha() if alpha is not None: path.set_alpha(alpha*facA) args['scale_s'] *= facS args['scale_a'] *= facA self._redraw_later(fig) def _redraw_later(self, fig): timer = fig.canvas.new_timer(interval=10) timer.single_shot = True timer.add_callback(lambda : fig.canvas.draw_idle()) timer.start() ##stopping previous timer if fig in self.timer_dict: self.timer_dict[fig].stop() ##storing a reference to prevent garbage collection self.timer_dict[fig] = timer if __name__ == '__main__': my_updater = MarkerUpdater() ##setting up the figure fig, axes = plt.subplots(nrows = 2, ncols =2)#, figsize=(1,1)) ax1,ax2,ax3,ax4 = axes.flatten() ## a line plot x1 = np.linspace(0,np.pi,30) y1 = np.sin(x1) ax1.plot(x1, y1, 'ro', markersize = 10, alpha = 0.8) ax3.plot(x1, y1, 'ro', markersize = 10, alpha = 1) ## a scatter plot x2 = np.random.normal(1,1,30) y2 = np.random.normal(1,1,30) ax2.scatter(x2,y2, c = 'b', s = 100, alpha = 0.6) ## scatter and line plot ax4.scatter(x2,y2, c = 'b', s = 100, alpha = 0.6) ax4.plot([0,0.5,1],[0,0.5,1],'ro', markersize = 10) ##note: no alpha value! ##setting up the updater my_updater.add_ax(ax1, ['size']) ##line plot, only marker size my_updater.add_ax(ax2, ['size']) ##scatter plot, only marker size my_updater.add_ax(ax3, ['alpha']) ##line plot, only alpha my_updater.add_ax(ax4, ['size', 'alpha']) ##scatter plot, marker size and alpha plt.show() </code></pre>
pandas|matplotlib
6
6,007
48,738,112
Pandas MultiIndex Merge
<p>Suppose I have two dataframes as follows:</p> <pre><code>import pandas as pd import numpy as np index = pd.MultiIndex.from_tuples([('one', '1993-02-02'), ('one', '1994-02-03'), ('two', '1995-02-18'), ('two', '1996-03-01')]) s = pd.DataFrame(np.arange(1.0, 5.0), index=index) s.rename(columns = {0 : 'test1'}, inplace = True) s.index.set_names(['name','date'], inplace=True) index = pd.MultiIndex.from_tuples([('one', '19930630'), ('one', '19940630'), ('two', '19950630'), ('two', '19960630')]) d = pd.DataFrame(np.arange(1.0, 5.0), index=index) d.rename(columns = {0 : 'test2'}, inplace = True) d.index.set_names(['name','date'], inplace=True) </code></pre> <p>where s and d are as follows:</p> <p><a href="https://i.stack.imgur.com/kBMhs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kBMhs.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/5PgkR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5PgkR.png" alt="enter image description here"></a></p> <p>I would like to merge them based on the year index so that they appear as follows: </p> <p><a href="https://i.stack.imgur.com/c23d6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/c23d6.png" alt="enter image description here"></a></p> <p>I would greatly appreciate any help with this. </p>
<p>You can <code>reset_index</code> first, which turns the index-level merge into an ordinary column merge (which is much easier to work with):</p> <pre><code>s.reset_index().assign(key=s.index.get_level_values(1).str[:4]).merge(d.reset_index().assign(key=d.index.get_level_values(1).str[:4]),on=['name','key'],how='left').set_index(['name','date_x']).drop(['key','date_y'],1) Out[1099]: test1 test2 name date_x one 1993-02-02 1.0 1.0 1994-02-03 2.0 2.0 two 1995-02-18 3.0 3.0 1996-03-01 4.0 4.0 </code></pre>
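<p>For readability, the same operation can be unpacked into steps (a sketch assuming the frames from the question, where the year is the first four characters of each date string):</p> <pre><code>s2 = s.reset_index() d2 = d.reset_index() s2['key'] = s2['date'].str[:4] # '1993-02-02' -&gt; '1993' d2['key'] = d2['date'].str[:4] # '19930630' -&gt; '1993' out = (s2.merge(d2, on=['name', 'key'], how='left', suffixes=('', '_d')) .set_index(['name', 'date'])[['test1', 'test2']]) </code></pre>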
pandas|dataframe|concat|multi-index
1
6,008
48,521,360
Pandas - fill NaN based on the previous value of another cell
<p>I have some stocks data in a dataframe that I'm resampling, which results in some NaN values. Here's a section of the raw feed:</p> <pre><code>In [34]: feeddf Out[34]: open high low close volume date 2017-12-03 07:00:00 14.46 14.46 14.46 14.46 25000 2017-12-03 07:01:00 14.46 14.46 14.46 14.46 20917 2017-12-03 07:06:00 14.50 14.50 14.50 14.50 2000 2017-12-03 07:12:00 14.50 14.56 14.50 14.56 17000 </code></pre> <p>The feed is supposed to be minute-by-minute, but when no data is available, the row is skipped. When resampling the dataframe and aggregating for the opens, highs, lows, and closes, it looks like this:</p> <pre><code>In [35]: feeddf.resample('3Min').agg({'open': 'first', 'high': 'max', 'low': 'min', 'close': 'last'}) Out[35]: open high low close date 2017-12-03 07:00:00 14.46 14.46 14.46 14.46 2017-12-03 07:03:00 NaN NaN NaN NaN 2017-12-03 07:06:00 14.50 14.50 14.50 14.50 2017-12-03 07:09:00 NaN NaN NaN NaN 2017-12-03 07:12:00 14.50 14.56 14.50 14.56 </code></pre> <p><strong>My question</strong>: I want to forward-fill the missing data based on the last row's <code>close</code> value. <code>df.fillna(method='ffill')</code> is not helping because it fills based on the last value in the same column. Any ideas?</p>
<p>First forward-fill the last column <code>close</code>, and then <code>bfill</code> across columns:</p> <pre><code>print (df) open high low close date 2017-12-03 07:00:00 14.46 14.46 14.46 14.81 2017-12-03 07:03:00 NaN NaN NaN NaN 2017-12-03 07:06:00 14.50 14.50 14.50 14.59 2017-12-03 07:09:00 NaN NaN NaN NaN 2017-12-03 07:12:00 14.50 14.56 14.50 14.56 df['close'] = df['close'].ffill() df = df.bfill(axis=1) print (df) open high low close date 2017-12-03 07:00:00 14.46 14.46 14.46 14.81 2017-12-03 07:03:00 14.81 14.81 14.81 14.81 2017-12-03 07:06:00 14.50 14.50 14.50 14.59 2017-12-03 07:09:00 14.59 14.59 14.59 14.59 2017-12-03 07:12:00 14.50 14.56 14.50 14.56 </code></pre>
python|pandas
4
6,009
51,843,514
Filling Null values with respective mean
<p>I have a dataset as follows:</p> <pre><code>alldata.loc[:,["Age","Pclass"]].head(10) Out[24]: Age Pclass 0 22.0 3 1 38.0 1 2 26.0 3 3 35.0 1 4 35.0 3 5 NaN 3 6 54.0 1 7 2.0 3 8 27.0 3 9 14.0 2 </code></pre> <p>Now I want to fill all the null values in <code>Age</code> with the mean of all the <code>Age</code> values for that respective <code>Pclass</code> type.</p> <p>Example - In the above snippet, for the null value of <code>Age</code> for <code>Pclass = 3</code>, it takes the mean of all the ages belonging to <code>Pclass = 3</code>. Therefore the null value of <code>Age</code> is replaced with <code>22.4</code>.</p> <p>I tried some solutions using <code>groupby</code>, but they made changes only to a specific <code>Pclass</code> value and converted the rest of the fields to null. How do I achieve <code>0</code> null values in this case?</p>
<p>You can use</p> <p><strong>1]</strong> <code>transform</code> and lambda function</p> <pre><code>In [41]: df.groupby('Pclass')['Age'].transform(lambda x: x.fillna(x.mean())) Out[41]: 0 22.0 1 38.0 2 26.0 3 35.0 4 35.0 5 22.4 6 54.0 7 2.0 8 27.0 9 14.0 Name: Age, dtype: float64 </code></pre> <p>Or use </p> <p><strong>2]</strong> <code>fillna</code> over <code>mean</code></p> <pre><code>In [46]: df['Age'].fillna(df.groupby('Pclass')['Age'].transform('mean')) Out[46]: 0 22.0 1 38.0 2 26.0 3 35.0 4 35.0 5 22.4 6 54.0 7 2.0 8 27.0 9 14.0 Name: Age, dtype: float64 </code></pre> <p>Or use</p> <p><strong>3]</strong> <code>loc</code> to replace <code>null</code> values</p> <pre><code>In [47]: df.loc[df['Age'].isnull(), 'Age'] = df.groupby('Pclass')['Age'].transform('mean') In [48]: df Out[48]: Age Pclass 0 22.0 3 1 38.0 1 2 26.0 3 3 35.0 1 4 35.0 3 5 22.4 3 6 54.0 1 7 2.0 3 8 27.0 3 9 14.0 2 </code></pre>
python|pandas|kaggle
3
6,010
41,925,157
LogisticRegression: Unknown label type: 'continuous' using sklearn in python
<p>I have the following code to test some of the most popular ML algorithms of the sklearn Python library:</p> <pre><code>import numpy as np from sklearn import metrics, svm from sklearn.linear_model import LinearRegression from sklearn.linear_model import LogisticRegression from sklearn.tree import DecisionTreeClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC trainingData = np.array([ [2.3, 4.3, 2.5], [1.3, 5.2, 5.2], [3.3, 2.9, 0.8], [3.1, 4.3, 4.0] ]) trainingScores = np.array( [3.4, 7.5, 4.5, 1.6] ) predictionData = np.array([ [2.5, 2.4, 2.7], [2.7, 3.2, 1.2] ]) clf = LinearRegression() clf.fit(trainingData, trainingScores) print("LinearRegression") print(clf.predict(predictionData)) clf = svm.SVR() clf.fit(trainingData, trainingScores) print("SVR") print(clf.predict(predictionData)) clf = LogisticRegression() clf.fit(trainingData, trainingScores) print("LogisticRegression") print(clf.predict(predictionData)) clf = DecisionTreeClassifier() clf.fit(trainingData, trainingScores) print("DecisionTreeClassifier") print(clf.predict(predictionData)) clf = KNeighborsClassifier() clf.fit(trainingData, trainingScores) print("KNeighborsClassifier") print(clf.predict(predictionData)) clf = LinearDiscriminantAnalysis() clf.fit(trainingData, trainingScores) print("LinearDiscriminantAnalysis") print(clf.predict(predictionData)) clf = GaussianNB() clf.fit(trainingData, trainingScores) print("GaussianNB") print(clf.predict(predictionData)) clf = SVC() clf.fit(trainingData, trainingScores) print("SVC") print(clf.predict(predictionData)) </code></pre> <p>The first two work OK, but I got the following error in the <code>LogisticRegression</code> call:</p> <pre><code>root@ubupc1:/home/ouhma# python stack.py LinearRegression [ 15.72023529 6.46666667] SVR [ 3.95570063 4.23426243] Traceback (most recent call last): File "stack.py", line 28, in &lt;module&gt; clf.fit(trainingData, trainingScores) File "/usr/local/lib/python2.7/dist-packages/sklearn/linear_model/logistic.py", line 1174, in fit check_classification_targets(y) File "/usr/local/lib/python2.7/dist-packages/sklearn/utils/multiclass.py", line 172, in check_classification_targets raise ValueError("Unknown label type: %r" % y_type) ValueError: Unknown label type: 'continuous' </code></pre> <p>The input data is the same as in the previous calls, so what is going on here? </p> <p>And by the way, why is there a huge difference in the first predictions of the <code>LinearRegression()</code> and <code>SVR()</code> algorithms <code>(15.72 vs 3.95)</code>?</p>
<p>You are passing floats to a classifier which expects categorical values as the target vector. If you convert them to <code>int</code> they will be accepted as input (although it is questionable whether that's the right way to do it). </p> <p>It would be better to convert your training scores by using scikit's <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html" rel="noreferrer"><code>LabelEncoder</code></a> function.</p> <p>The same is true for your DecisionTree and KNeighbors classifiers.</p> <pre><code>from sklearn import preprocessing from sklearn import utils lab_enc = preprocessing.LabelEncoder() encoded = lab_enc.fit_transform(trainingScores) &gt;&gt;&gt; array([1, 3, 2, 0], dtype=int64) print(utils.multiclass.type_of_target(trainingScores)) &gt;&gt;&gt; continuous print(utils.multiclass.type_of_target(trainingScores.astype('int'))) &gt;&gt;&gt; multiclass print(utils.multiclass.type_of_target(encoded)) &gt;&gt;&gt; multiclass </code></pre>
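<p>With the encoded targets, the classifier fits without error (continuing the example above):</p> <pre><code>clf = LogisticRegression() clf.fit(trainingData, encoded) # train on the integer class labels print(clf.predict(predictionData)) </code></pre>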
python|numpy|scikit-learn
119
6,011
41,864,014
Setting pandas multiple rows with enlargement
<p>According to the <code>pandas</code> documentation, it should be possible to append non-existent rows to a <code>DataFrame</code> using <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#setting-with-enlargement" rel="nofollow noreferrer">setting with enlargement</a>, but while <em>retrieving</em> multiple missing keys works fine, <em>setting</em> multiple missing keys throws a <code>KeyError</code>:</p> <pre><code>import pandas as pd print(pd.__version__) # '0.19.2' df = pd.DataFrame([[9] * 3] * 3, index=list('ABC')) ## Show a mix of extant and missing keys: inds_e = pd.Index(list('BCDE')) print(df.loc[inds_e]) # 0 1 2 # B 9.0 9.0 9.0 # C 9.0 9.0 9.0 # D NaN NaN NaN # E NaN NaN NaN ## Assign the enlarging subset to -1: try: df.loc[inds_e] = -1 except KeyError as e: print(e) # "Index(['D', 'E'], dtype='object') not in index" </code></pre> <p>Setting multiple existent keys works just fine, and setting any one row with enlargement works fine as well:</p> <pre><code>## Assign all the non-missing keys at once: inds_nm = inds_e.intersection(df.index) df.loc[inds_nm] = -1 ## Assign the missing keys one at a time: inds_m = inds_e.difference(df.index) for ind in inds_m: df.loc[ind] = -1 print(df) # 0 1 2 # A 9 9 9 # B -1 -1 -1 # C -1 -1 -1 # D -1 -1 -1 # E -1 -1 -1 </code></pre> <p>That said, this seems horribly inelegant and inefficient. There is a <a href="https://stackoverflow.com/questions/31319888/setting-dataframe-values-with-enlargement">very similar question here</a>, but that was solved using the <code>combine_first()</code> functionality - neither the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html" rel="nofollow noreferrer"><code>combine_first()</code></a> nor the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.update.html" rel="nofollow noreferrer"><code>update()</code></a> method seems to have the same semantics as a simple assignment - in the case of <code>combine_first</code>, non-null values are not updated, and in the case of <code>update</code>, null values in the righthand side dataframe will not overwrite non-null values in the lefthand side.</p> <p>Is this a bug in <code>pandas</code>, and if not, what is the "proper" way to assign values to a mixture of extant and missing keys with enlargement on a <code>pandas</code> dataframe?</p> <p><strong>Edit</strong>: Looks like <a href="https://github.com/pandas-dev/pandas/issues/7887" rel="nofollow noreferrer">there is an issue about this from 2014</a> on the <code>pandas</code> github. The de-facto answer is apparently to use <code>df.reindex</code>, but it's not clear to me how that works when you're trying to assign a subset of all keys with enlargement.</p>
<p>Per your edit, you can assign with overlap and enlargement by using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html#pandas.DataFrame.reindex" rel="nofollow noreferrer"><code>reindex</code></a> on the union of your two indexes, followed by <code>loc</code>:</p> <pre><code># Reindex to add the missing indices (fill_value preserves integer dtype). df = df.reindex(df.index.union(inds_e), fill_value=-1) # Perform the assignment. df.loc[inds_e] = -1 </code></pre> <p>This does a few extra assignments, as the <code>loc</code> will double-fill some of the values that <code>fill_value</code> takes care of. A couple of simple timings seem to show that it's faster to double-fill than to determine the leftover locations to fill. You don't necessarily need to use <code>fill_value</code> either; I just used it in this case to preserve dtype. If you have floats instead of integers it is completely unnecessary.</p> <p>The resulting output:</p> <pre><code> 0 1 2 A 9 9 9 B -1 -1 -1 C -1 -1 -1 D -1 -1 -1 E -1 -1 -1 </code></pre> <p><strong>Timings</strong></p> <p>This does appear to be fairly efficient. Using the following setup to produce a larger example:</p> <pre><code>n = 10**5 df = pd.DataFrame(np.random.randint(1000, size=(n, 4))) inds = pd.Index(range(n//2, 3*n//2)) def root(df, inds): df = df.reindex(df.index.union(inds), fill_value=-1) df.loc[inds] = -1 return df def paul(df, inds): ## Assign all the non-missing keys at once: inds_nm = inds.intersection(df.index) df.loc[inds_nm] = -1 ## Assign the missing keys one at a time: inds_m = inds.difference(df.index) for ind in inds_m: df.loc[ind] = -1 return df </code></pre> <p>I get the following timing:</p> <pre><code>%timeit root(df.copy(), inds) 100 loops, best of 3: 16.5 ms per loop </code></pre> <p>I couldn't get your solution to run with <code>n=10**5</code>. Using <code>n=10**4</code>:</p> <pre><code>%timeit paul(df.copy(), inds) 1 loop, best of 3: 14.1 s per loop </code></pre>
python|python-3.x|pandas|dataframe
1
6,012
64,458,509
Looping through Numpy array and slicing
<p>I am working on a certain task which uses numpy. I have the following array:</p> <pre><code>A = array([[1, 2], [3, 4], [5, 6], [7, 8]]) </code></pre> <p>And I have another variable called B which is of shape (10, 10). What I want to do is basically loop through the array A and do the following:</p> <pre><code>B[1,2] B[3,4] B[5,6] B[7,8] </code></pre> <p>All of these should return a single number and not a numpy array.</p> <p>I have tried converting the array A to a list and then looping through the list to take the 2 consecutive elements. But I was wondering whether there is a better method to achieve this.</p> <p>Thank you.</p>
<p>You don't need a loop</p> <p>Setting up an example</p> <pre><code>A = np.array([[1, 2], [3, 4], [5, 6], [7, 8]]) B = np.arange(100).reshape(10,10) B </code></pre> <p>Out:</p> <pre><code>[[ 0 1 2 3 4 5 6 7 8 9] [10 11 12 13 14 15 16 17 18 19] [20 21 22 23 24 25 26 27 28 29] [30 31 32 33 34 35 36 37 38 39] [40 41 42 43 44 45 46 47 48 49] [50 51 52 53 54 55 56 57 58 59] [60 61 62 63 64 65 66 67 68 69] [70 71 72 73 74 75 76 77 78 79] [80 81 82 83 84 85 86 87 88 89] [90 91 92 93 94 95 96 97 98 99]] </code></pre> <p>You can index with <code>A</code></p> <pre><code>B[A[:,0], A[:,1]] </code></pre> <p>Out:</p> <pre><code>array([12, 34, 56, 78]) </code></pre>
arrays|python-3.x|numpy
0
6,013
64,454,928
Time Diff on vertical dataframe in Python
<p>I have a dataframe, df, that looks like this:</p> <pre><code> Date Value 10/1/2019 5 10/2/2019 10 10/3/2019 15 10/4/2019 20 10/5/2019 25 10/6/2019 30 10/7/2019 35 </code></pre> <p>I would like to calculate the delta for a period of 7 days.</p> <p>Desired output:</p> <pre><code>Date Delta 10/1/2019 30 </code></pre> <p>This is what I am doing: a user has helped me with a variation of the code below:</p> <pre><code> df['Delta']=df.iloc[0:,1].sub(df.iloc[6:,1]), Date=pd.Series (pd.date_range(pd.Timestamp('2019-10-01'), periods=7, freq='7d'))[['Delta','Date']] </code></pre> <p>Any suggestions are appreciated.</p>
<p>Let us try <code>shift</code></p> <pre><code>s = df.set_index('Date')['Value'] df['New'] = s.shift(freq = '-6 D').reindex(s.index).values df['DIFF'] = df['New'] - df['Value'] df Out[39]: Date Value New DIFF 0 2019-10-01 5 35.0 30.0 1 2019-10-02 10 NaN NaN 2 2019-10-03 15 NaN NaN 3 2019-10-04 20 NaN NaN 4 2019-10-05 25 NaN NaN 5 2019-10-06 30 NaN NaN 6 2019-10-07 35 NaN NaN </code></pre>
python|pandas|numpy
1
6,014
64,393,028
Select different columns in different rows according to another pandas Series
<p>I have a pandas Series which contains the column names that I need to collect data from:</p> <pre><code>1 col1 3 col4 4 col3 5 col5 6 col5 </code></pre> <p>And the dataframe that contains data looks like:</p> <pre><code> col1 col2 col3 col4 col5 1 data1 data2 data3 data4 data5 3 data6 data7 data8 data9 data10 4 data11 data12 data13 data14 data15 5 data16 data17 data18 data19 data20 6 data21 data22 data23 data24 data25 </code></pre> <p>The result should be like:</p> <pre><code>1 data1 3 data9 4 data13 5 data20 6 data25 </code></pre>
<p>This is <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.lookup.html" rel="nofollow noreferrer"><code>lookup</code></a>:</p> <pre><code>print (df2.lookup(df2.index, df1)) ['data1' 'data9' 'data13' 'data20' 'data25'] </code></pre>
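<p>Note that <code>DataFrame.lookup</code> is deprecated as of pandas 1.2; an equivalent using NumPy indexing (a sketch, with <code>df1</code> holding the column names and <code>df2</code> the data as above) is:</p> <pre><code>import numpy as np idx = df2.columns.get_indexer(df1) # column position for each row result = df2.to_numpy()[np.arange(len(df2)), idx] </code></pre>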
python|pandas|dataframe|logic|data-science
1
6,015
64,406,280
Select rows in a dataframe based on a condition spanning several dates
<p>I'm working with the dataframe below. I would like to apply a filter that will create a new dataframe of the filtered result set. The filtered dataset should result in a True condition if the first and last day of a three-day rolling lookback window are less than or equal to 0.5; <strong>the middle value should be excluded</strong>.</p> <pre><code>df = pd.DataFrame({ 'DT': ['2/20/2020', '2/19/2020', '2/18/2020', '2/17/2020','2/16/2020','2/15/2020','2/14/2020','2/13/2020','2/12/2020','2/11/2020'], 'LSTPX': [0.5, 1.00, .44, 1.23, 1.56, 1.89, 0.46, 1.88, 0.49, 0.44, ], }) </code></pre> <p>The result should equal:</p> <pre><code>DT LSTPX 2/20/20 True 2/14/20 True </code></pre>
<p>I made use of the fact that the source DataFrame contains <strong>consecutive</strong> dates in <strong>descending</strong> order.</p> <p>So instead of the rolling window, <em>shift</em> can be used to get <em>LSTPX</em> from the row 2 positions down from the current row:</p> <pre><code>result = df[(df.LSTPX &lt;= 0.5) &amp; (df.LSTPX.shift(-2) &lt;= 0.5)] </code></pre> <p>The result is:</p> <pre><code> DT LSTPX 0 2/20/2020 0.50 6 2/14/2020 0.46 </code></pre> <p>If you want <em>LSTPX</em> column of the result changed to <em>True</em>, then:</p> <ul> <li><code>result = result.copy()</code> - Create a copy of <em>result</em>, since for now it is a <strong>view</strong> of the original DataFrame, and any attempt to modify it would cause <em>SettingWithCopyWarning</em> message.</li> <li><code>result.LSTPX = True</code> - Replace <em>LSTPX</em> with the new value.</li> </ul> <p>If the condition concerning consecutive dates and / or sort is not met, another approach is needed:</p> <ol> <li><p>Create an auxiliary <em>Series</em> - <em>LSTPX</em> column indexed by <em>DT</em>:</p> <pre><code>wrk = df.set_index('DT').LSTPX </code></pre> </li> <li><p>Define a function to get a value from a <em>Series</em> (<em>s</em>) by index (<em>dd</em>), but if the passed index value is absent, return a default value:</p> <pre><code>def getByIdx(s, dd, defVal): return s.loc[dd] if dd in s.index else defVal </code></pre> </li> <li><p>Define a filtering function:</p> <pre><code>def myFilter(row): return (row.LSTPX &lt;= 0.5) and (getByIdx(wrk, row.DT - pd.Timedelta(2, 'd'), 1.0) &lt;= 0.5) </code></pre> </li> <li><p>Generate the result by application of this function:</p> <pre><code>result = df[df.apply(myFilter, axis=1)] </code></pre> </li> </ol>
python|pandas
2
6,016
64,310,087
How to return only the True values?
<p>I'm checking whether a word is in the values of a dataframe series, like this:</p> <pre><code>indicators['Indicator Name'].str.contains('population') </code></pre> <p>But when I run this command, my result is all values as True or False. How can I print only the True values and show all of them, since the dataframe is huge? Like this:</p> <pre><code>Indicator Code EG.CFT.ACCS.ZS True EG.ELC.ACCS.ZS True EG.ELC.ACCS.RU.ZS True EG.ELC.ACCS.UR.ZS True FX.OWN.TOTL.ZS True ... SG.VAW.NEGL.ZS False SG.VAW.REFU.ZS False SP.M15.2024.FE.ZS False SP.M18.2024.FE.ZS False SH.DYN.AIDS.FE.ZS True </code></pre>
<p>Use a lambda expression:</p> <pre><code>indicators[lambda x: x['Indicator Name'].str.contains('population')] </code></pre>
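<p>Equivalently, plain boolean indexing does the same thing (a sketch; <code>na=False</code> guards against missing values in the column):</p> <pre><code>mask = indicators['Indicator Name'].str.contains('population', na=False) print(indicators[mask]) # only the rows where the mask is True </code></pre>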
python|pandas|string|contains
0
6,017
64,472,837
My model does not complete the training process (Python, TensorFlow, Keras)
<pre><code>import sys import os import tensorflow import keras from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras import optimizers from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dropout, Flatten, Dense, Activation from tensorflow.keras.layers import Convolution2D, MaxPooling2D from tensorflow.keras import backend as K K.clear_session() datos_de_entrenamiento = './data/entrenamiento' # Folder of images datos_de_validacion = './data/validacion' # Folder of images used for validation # Initial parameters epocas = 20 # Number of times to iterate over our processing altura, longitud = 150,150 # Size at which the images will be processed, changing the size to height 100px and width 100px batch_size = 32 # Number of images sent to be processed on our computer pasos = 1000 # Number of times the information is processed in each epoch pasos_validacion = 200 # Runs 200 steps at the end of each epoch's validation to see what the algorithm is learning filtroConv1 = 32 # Filters to apply in each convolution filtroConv2 = 64 # Filters to apply in each convolution tamano_filtro1 = (3,3) # Filter size to use in the convolution tamano_filtro2 = (2,2) # Filter size to use in the convolution tamano_pool = (2,2) # Filter size to use in the max pooling clases = 2 # Number of folders containing our images lr = 0.0005 # The size of the adjustments our neural network makes to move toward an optimal solution # Image preprocessing ## We prepare our images entrenamiento_datagen = ImageDataGenerator( rescale=1. / 255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) test_datagen = ImageDataGenerator(rescale=1. / 255) entrenamiento_generador = entrenamiento_datagen.flow_from_directory( datos_de_entrenamiento, target_size=(altura, longitud), batch_size=batch_size, class_mode='categorical') validacion_generador = test_datagen.flow_from_directory( datos_de_validacion, target_size=(altura, longitud), batch_size=batch_size, class_mode='categorical') # Creating the convolutional network cnn=Sequential() cnn.add(Convolution2D(filtroConv1, tamano_filtro1, padding='same', input_shape=(altura,longitud,3), activation='relu')) cnn.add(MaxPooling2D(pool_size=tamano_pool)) cnn.add(MaxPooling2D(pool_size=tamano_pool)) cnn.add(Convolution2D(filtroConv2, tamano_filtro2, padding='same', activation='relu')) cnn.add(Flatten()) cnn.add(Dense(256,activation='relu')) cnn.add(Dropout(0.5)) cnn.add(Dense(clases, activation='softmax')) cnn.compile(loss='categorical_crossentropy', optimizer=optimizers.Adam(lr=lr), metrics=['accuracy']) cnn.fit_generator( entrenamiento_generador, steps_per_epoch=pasos, epochs=epocas, validation_data=validacion_generador, validation_steps=pasos_validacion) dir='./modelo/' if not os.path.exists(dir): os.mkdir(dir) cnn.save('./modelo/modelo.h5') cnn.save_weights('./modelo/pesos.h5') Found 460 images belonging to 2 classes. Found 460 images belonging to 2 classes. </code></pre> <p>Epoch 1/20 15/1000 [..............................] - ETA: 2:18:49 - loss: 1.5865 - accuracy: 0.5543 WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least <code>steps_per_epoch * epochs</code> batches (in this case, 20000 batches). You may need to use the repeat() function when building your dataset.</p> <p>It does not get past step 15, and then the process finishes.</p>
<p>Your <code>steps_per_epoch</code> is too large for your current dataset size; try lowering it.</p>
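<p>Concretely, the generators found 460 images and the batch size is 32, so one full pass over the data is only about 15 batches -- exactly where training stops. A sketch of the fix, assuming you want one pass over the data per epoch:</p> <pre><code>import math pasos = math.ceil(460 / batch_size) # 15 steps per epoch pasos_validacion = math.ceil(460 / batch_size) cnn.fit_generator( entrenamiento_generador, steps_per_epoch=pasos, epochs=epocas, validation_data=validacion_generador, validation_steps=pasos_validacion) </code></pre>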
python|tensorflow|keras|deep-learning|conv-neural-network
0
6,018
47,621,187
pandas - aggregating on contents of dataframe
<p>I have a pandas dataframe which looks like this:</p> <pre><code> Lane PropA PropB PropC Sample NameA R1 PASS FAIL WARN NameB R2 FAIL FAIL PASS NameC R1 WARN PASS PASS NameD R2 PASS PASS WARN </code></pre> <p>My goal is to produce a bar plot that, for each Prop, shows how many of the Samples have PASS, WARN or FAIL as their value. Thus I need to get out a dataframe where the contents are like this instead:</p> <pre><code> PASS WARN FAIL Prop PropA 3 2 1 PropB 2 5 3 PropC 2 5 1 </code></pre> <p>How do I get from the dataframe above to the one below?</p> <p>Thanks!</p>
<p>First filter only the necessary columns with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="nofollow noreferrer"><code>drop</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html" rel="nofollow noreferrer"><code>filter</code></a>, then count with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>value_counts</code></a>, and then, if necessary, replace <code>NaN</code>s with <code>0</code> using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>fillna</code></a>:</p> <pre><code>df1 = df.drop('Lane', 1).apply(lambda x: x.value_counts()).T.fillna(0).astype(int) </code></pre> <p>Or:</p> <pre><code>df1 = df.filter(like='Prop').apply(lambda x: x.value_counts()).T.fillna(0).astype(int) print (df1) FAIL PASS WARN PropA 1 2 1 PropB 2 2 0 PropC 0 2 2 </code></pre> <p>And last:</p> <pre><code>df1.plot.bar() </code></pre>
python|pandas
1
6,019
49,239,478
How to get a single value from model.predict() results
<p>I am trying to use a neural network to predict actions for a car simulator game that runs in another file. I need to get the value predicted for an action to pass into the game, but I am struggling to do this. After calling model.predict I have attempted to access the value as if from an array, but this returns an out-of-bounds error. </p> <p>I am fairly new to Python in general, never mind using Keras, but the idea I had is that the neural network would be trained on some data (saved in a CSV file) gathered by someone else playing the game. Then, when it came time for the neural network to play, I would be able to pass in the game values at each frame to generate a predicted action. I <em>think</em> I have done that but am unable to get a predicted action. Here is the neural network:</p> <pre><code> def build_nn(self): model = Sequential() model.add(Dense(12, input_dim=8, activation='relu')) model.add(Dense(8, activation='relu')) model.add(Dense(1, activation='linear')) model.compile(loss='mean_squared_error', optimizer=Adam(lr=self.learning_rate)) return model </code></pre> <p>And my code for predicting the action (stateVector wasn't accepted as-is, so I had to get the values from it):</p> <pre><code> def action(self, stateVector): a_action = stateVector['lastAction'] currentLane = stateVector['currentLane'] offRoad = stateVector['offRoad'] collision = stateVector['collision'] lane1 = stateVector['lane1Distance'] lane2 = stateVector['lane2Distance'] lane3 = stateVector['lane3Distance'] a = [currentLane, offRoad, collision, lane1, lane2, lane3, reward, a_action] act_value = self.model.predict(a) act = act_values[a_action] return act </code></pre>
<p>You're feeding your network a list of 8 elements, when it expected an iterable of 8-dimensional samples. Practically:</p> <pre><code>&gt;&gt;&gt; a = [currentLane, offRoad, collision, lane1, lane2, lane3, reward, a_action] &gt;&gt;&gt; a = np.array(a) # convert to a numpy array &gt;&gt;&gt; a = np.expand_dims(a, 0) # change shape from (8,) to (1,8) &gt;&gt;&gt; model.predict(a) # voila! </code></pre>
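<p>To then pull a single number out of the prediction rather than an array (the output has shape <code>(1, 1)</code> here: one sample, one output unit):</p> <pre><code>pred = model.predict(a) # shape (1, 1) act = float(pred[0, 0]) # plain Python float </code></pre>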
python|tensorflow|machine-learning|neural-network|keras
2
6,020
48,987,956
pandas multi-column merge on index
<p>Hello, I'm a newbie with pandas.</p> <p>For example, data for cryptocurrencies are as below:</p> <p><strong>BTC</strong></p> <pre><code>time(index) open high low close value 0 1 4 1 2 1 1 2 5 2 3 2 </code></pre> <p><strong>ETH</strong></p> <pre><code>time(index) open high low close value 1 1 1 1 1 1 </code></pre> <p>and I want to merge these data as below:</p> <p><strong>BTC X ETH</strong></p> <pre><code> BTC ETH time(index) open high low close value open high low close value 0 1 4 1 2 1 NaN NaN NaN NaN NaN 1 2 5 2 3 2 1 1 1 1 1 </code></pre> <p>Is there any way to merge them? </p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with the parameter <code>keys</code> for the first level of the <code>MultiIndex</code>:</p> <pre><code>df = pd.concat([df1, df2], keys=('BTC','ETH'), axis=1) print (df) BTC ETH open high low close value open high low close value time(index) 0 1 4 1 2 1 NaN NaN NaN NaN NaN 1 2 5 2 3 2 1.0 1.0 1.0 1.0 1.0 </code></pre>
pandas|merge
2
6,021
48,977,206
Index Error when using list comprehension on column in Pandas Dataframe
<p>I have a DataFrame with a column like this:</p> <pre><code> Data Science Score 0 303231.0 1 238632.0 2 209423.0 3 207254.0 4 206395.0</code></pre> <p>I am trying to split the elements in the series from the end, after the character that corresponds to their index+1.</p> <p>If I use <code>print()</code> with this for loop:</p> <pre><code>for x in df["Data Science Score"]: print(df["Data Science Score"][df.index[df["Data Science Score"]== x][0]].astype(str).rsplit(str(df.index[df["Data Science Score"]== x][0]+1),1)[0]) </code></pre> <p>I get the desired output:</p> <pre><code>30323 23863 20942 20725 20639 </code></pre> <p>However, if I actually want to create a new column by using the above code in a list comprehension or just using it to create a list via <code>.append()</code>, like so:</p> <pre><code>df["Data Science Score cleaned"]=[df["Data Science Score"][df.index[df["Data Science Score"]== x][0]].astype(str).rsplit(str(df.index[df["Data Science Score"]== x][0]+1),1)[0] for x in df["Data Science Score"]] </code></pre> <p>or so:</p> <pre><code>asas=[] for x in df["Data Science Score"]: asas.append(df["Data Science Score"][df.index[df["Data Science Score"]== x][0]].astype(str).rsplit(str(df.index[df["Data Science Score"]== x][0]+1),1)[0]) </code></pre> <p>I get the following error:</p> <pre><code>--------------------------------------------------------------------------- IndexError Traceback (most recent call last) &lt;ipython-input-11-4f8f3d44b86b&gt; in &lt;module&gt;() ----&gt; 1 df["Data Science Score cleaned"]=[df["Data Science Score"] [df.index[df["Data Science Score"]== x][0]].astype(str).rsplit(str(df.index[df["Data Science Score"]== x][0]+1),1)[0] for x in df["Data Science Score"]] &lt;ipython-input-11-4f8f3d44b86b&gt; in &lt;listcomp&gt;(.0) ----&gt; 1 df["Data Science Score cleaned"]=[df["Data Science Score"][df.index[df["Data Science Score"]== x][0]].astype(str).rsplit(str(df.index[df["Data Science Score"]== x][0]+1),1)[0] for x in df["Data Science Score"]] ~\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in __getitem__(self, key) 1741 1742 if is_scalar(key): -&gt; 1743 return getitem(key) 1744 1745 if isinstance(key, slice): IndexError: index 0 is out of bounds for axis 0 with size 0 </code></pre> <p>I don't really understand why it doesn't work when creating a list. Can someone explain why, and how to fix this?</p>
<p>If I understand correctly, you want to convert the value in each cell to an int of length 5.</p> <p>What about:</p> <pre><code>df['rounded_score'] = df['Data Science Score'].apply(lambda x: int(x / 10.0)) </code></pre> <p>This approach should be easier than converting to string and splitting.</p>
python-3.x|pandas|dataframe|list-comprehension
0
6,022
58,620,033
Extract column value based on another column, reading multiple files
<p>I would like to extract the values based on the labels Name, Grade, School, and Class. For example, to find Name and Grade, I would go through column 0 and take the value from the next few columns, but the value to be extracted is scattered across those columns. The same goes for School and Class. </p> <p>Refer to this: <a href="https://stackoverflow.com/questions/36684013/extract-column-value-based-on-another-column-pandas-dataframe">extract column value based on another column pandas dataframe</a></p> <p>I have multiple files:</p> <pre><code> 0 1 2 3 4 5 6 7 8 0 nan nan nan Student Registration nan nan 1 Name: nan nan John nan nan nan nan nan 2 Grade: nan 6 nan nan nan nan nan nan 3 nan nan nan School: C College nan Class: 1A </code></pre> <hr> <pre><code> 0 1 2 3 4 5 6 7 8 9 0 nan nan nan Student Registration nan nan nan 1 nan nan nan nan nan nan nan nan nan nan 2 Name: Mary nan nan nan nan nan nan nan nan 3 Grade: 7 nan nan nan nan nan nan nan nan 4 nan nan nan School: nan D College Class: nan 5A </code></pre> <p>This is my code (it raises an error):</p> <pre><code>for file in files: df = pd.read_csv(file,header=0) df['Name'] = df.loc[df[0].isin('Name')[1,2,3] df['Grade'] = df.loc[df[0].isin('Grade')[1,2,3] df['School'] = df.loc[df[3].isin('School')[4,5] df['Class'] = df.loc[df[7].isin('Class')[8,9] d.append(df) df = pd.concat(d,ignore_index=True) </code></pre> <p>This is the outcome I want (melted):</p> <pre><code>Name Grade School Class ... .... ... ... John 6 C College 1A John 6 C College 1A John 6 C College 1A John 6 C College 1A Mary 7 D College 5A Mary 7 D College 5A Mary 7 D College 5A Mary 7 D College 5A </code></pre>
<p>I think it is possible to use:</p> <pre><code>for file in files: df = pd.read_csv(file,header=0) #filter out first column and reshape - removed NaNs, convert to 1 column df df = df.iloc[1:].stack().reset_index(drop=True).to_frame('data') #compare by : m = df['data'].str.endswith(':', na=False) #shift values to new column df['val'] = df['data'].shift(-1) #filter and transpose df = df[m].set_index('data').T.rename_axis(None, axis=1) d.append(df) df = pd.concat(d,ignore_index=True) </code></pre> <p>EDIT:</p> <p>You can use:</p> <pre><code>for file in files: #if input are excel, change read_csv to read_excel df = pd.read_excel(file, header=None) df['Name'] = df.loc[df[0].eq('Name:'), [1,2,3]].dropna(axis=1).squeeze() df['Grade'] = df.loc[df[0].eq('Grade:'), [1,2,3]].dropna(axis=1).squeeze() df['School'] = df.loc[df[3].eq('School:'), [4,5]].dropna(axis=1).squeeze() df['Class'] = df.loc[df[6].eq('Class:'), [7,8]].dropna(axis=1).squeeze() print (df) </code></pre>
python|pandas|dataframe
1
6,023
58,848,252
Python: loading data from a CSV file, inserting all of it into a .db, and operating on tables
<p>I'm currently learning Python. Here is my question: I converted a .txt file to .csv and then want to insert it into a table in a database file. I have a problem with the iteration; at the bottom I'm pasting the results. How can I iterate over it? I've been struggling with this for a few days, so I don't really know how to solve the problem.</p> <p>txt file (a few rows):</p> <pre><code>id,id2,album,artysta TRMMMYQ128F932D901,SOQMMHC12AB0180CB8,Faster Pussy cat,Silent Night TRMMMKD128F425225D,SOVFVAK12A8C1350D9,Karkkiautomaatti,Tanssi vaan TRMMMRX128F93187D9,SOGTUKN12AB017F4F1,Hudson Mohawke,No One Could Ever TRMMMCH128F425532C,SOBNYVR12A8C13558C,Yerba Brava,Si Vos Querés TRMMMWA128F426B589,SOHSBXH12A8C13B0DF,Der Mystic,Tangle Of Aspens TRMMMXN128F42936A5,SOZVAPQ12A8C13B63C,David Montgomery,"Symphony No. 1 G minor ""Sinfonie Serieuse""/Allegro con energia" TRMMMLR128F1494097,SOQVRHI12A6D4FB2D7,Sasha / Turbulence,We Have Got Love TRMMMBB12903CB7D21,SOEYRFT12AB018936C,Kris Kross,2 Da Beat Ch'yall </code></pre> <p>Python: </p> <pre><code>from io import StringIO import pandas as pd import numpy as np import os import sqlite3, csv save_path = r"C:\Users\Maticz\Desktop\python" #konwerter txt -&gt; csv in_file = os.path.join(save_path, "tracks.txt") out_file = os.path.join(save_path, "Output.csv") #df = pd.read_csv(in_file, sep="&lt;SEP&gt;", engine='python') #df.to_csv(out_file, index=False) #print(df) df = pd.read_csv(r'C:\Users\Maticz\PycharmProjects\zadanie\tracks.txt', delimiter='&lt;SEP&gt;', engine='python', names=["id", "id2", "album", "artysta"]) print(df.head(5)) sv = df.to_csv(r'C:\Users\Maticz\PycharmProjects\zadanie\tracks.csv', index = None, header=True) con = sqlite3.connect("artists.db") cur = con.cursor() cur.execute("CREATE TABLE IF NOT EXISTS tabela (id TEXT, id2 TEXT, album TEXT, artysta TEXT);") with open(r'C:\Users\Maticz\PycharmProjects\zadanie\tracks.csv', 'a+') as fin: dr = pd.read_csv(fin, delimiter=',', names=["id", "id2", "album", "artysta"]) # comma is default delimiter to_db = [(i['id'], i['id2'], i['album'], i['artysta']) for i in dr] cur.executemany("INSERT INTO tabela (id, id2, album, artysta) VALUES (?, ?, ?, ?);", to_db) con.commit() cur.execute("SELECT * FROM artists") print(cur.fetchall()) con.close() </code></pre> <p>Output:</p> <pre><code> id id2 album artysta 0 TRMMMYQ128F932D901 SOQMMHC12AB0180CB8 Faster Pussy cat Silent Night 1 TRMMMKD128F425225D SOVFVAK12A8C1350D9 Karkkiautomaatti Tanssi vaan 2 TRMMMRX128F93187D9 SOGTUKN12AB017F4F1 Hudson Mohawke No One Could Ever 3 TRMMMCH128F425532C SOBNYVR12A8C13558C Yerba Brava Si Vos Querés 4 TRMMMWA128F426B589 SOHSBXH12A8C13B0DF Der Mystic Tangle Of Aspens Traceback (most recent call last): File "C:/Users/Maticz/PycharmProjects/zadanie/main.py", line 26, in &lt;module&gt; to_db = [(i['id'], i['id2'], i['album'], i['artysta']) for i in dr] File "C:/Users/Maticz/PycharmProjects/zadanie/main.py", line 26, in &lt;listcomp&gt; to_db = [(i['id'], i['id2'], i['album'], i['artysta']) for i in dr] TypeError: string indices must be integers Process finished with exit code 1 </code></pre> <p>I appreciate any help, thank you :)</p>
<p>You could simplify the operation using sqlalchemy:</p> <pre><code>from sqlalchemy import create_engine

# sqlite://&lt;nohostname&gt;/&lt;path&gt;
# where &lt;path&gt; is relative:
engine = create_engine('sqlite:///artists.db')
df.to_sql('tabela', con = engine, if_exists = 'append', chunksize=1000)
</code></pre> <p>This removes the need to write to another CSV, since you already have the data in a pandas dataframe. After this is done you can create your cursor to validate that the data has been written into the sqlite db file.</p> <p>The argument:</p> <pre><code>if_exists = 'append'
</code></pre> <p>appends the new data to the table, and even creates the table if it doesn't exist.</p> <pre><code>chunksize = 1000
</code></pre> <p>writes the records 1000 at a time (or all at once if there are fewer than 1000), committing after each chunk and saving the data to the table.</p>
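<p>A minimal validation sketch, assuming the <code>tabela</code> table created above:</p> <pre><code>import sqlite3

con = sqlite3.connect("artists.db")
cur = con.cursor()
# count the inserted rows and peek at a few of them
cur.execute("SELECT COUNT(*) FROM tabela")
print(cur.fetchone())
cur.execute("SELECT * FROM tabela LIMIT 5")
print(cur.fetchall())
con.close()
</code></pre>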
python|pandas|sqlite
0
6,024
58,853,468
How do I customize the colours in the bars using custom number set in matplotlib?
<p>I am trying to add colors to the bars according to the integer value. Let's say the values are 1 to 20: 1 should be the lightest and 20 the darkest, and no two values may share the same color. So far my attempt uses an incorrect <code>colorbar</code> method:</p>

<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

df = pd.DataFrame({'values': [17, 16, 16, 15, 15, 15, 14, 13, 13, 13]})
df.plot(kind='barh')
plt.imshow(df)
plt.colorbar()
plt.show()
</code></pre>

<p>But it gives a strange result:</p>

<p><a href="https://i.stack.imgur.com/HhuS4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HhuS4.png" alt="Weird plot"></a></p>
<p>I just realized that using <code>plt.barh</code> and passing one grayscale color string per bar provides a better plot; use:</p>

<pre><code>import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({'values': [0, 0, 0, 0, 0, 17, 16, 16, 15, 15, 15, 14, 13, 13, 13]})
df = df.sort_values(by='values').reset_index(drop=True)

s = df['values'].replace(0, df.loc[df['values'] != 0, 'values'].min())
s = s.sub(s.min())
colors = (1 - (s / s.max())).astype(str).tolist()

plt.barh(df.index, df['values'].values, color=colors)
plt.show()
</code></pre>

<p>Which gives:</p>

<p><a href="https://i.stack.imgur.com/5C86J.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5C86J.png" alt=""></a></p>
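<p>If you prefer an actual colormap over grayscale, here is a sketch; it assumes matplotlib's built-in <code>viridis</code> colormap is acceptable:</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.cm as cm

df = pd.DataFrame({'values': [17, 16, 16, 15, 15, 15, 14, 13, 13, 13]})

# map each value into [0, 1], then into an RGBA color
norm = plt.Normalize(df['values'].min(), df['values'].max())
colors = cm.viridis(norm(df['values'].values))

plt.barh(df.index, df['values'], color=colors)
plt.show()
</code></pre>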
python|pandas|matplotlib|plot|colorbar
5
6,025
70,196,691
How to split a value in a dataframe if the value contains a digit?
<p>I have a dataframe which looks like below (in reality much bigger):</p> <pre><code>df = pd.DataFrame([ [-0.531, '30 mg', 0], [1.49, '70 kg', 1], [-1.3826, 'food delivery', 2], [0.814, '80 degrees', ' '], [-0.22, ' ', 4], [-1.11, '70 grams', ' '], ], columns='Power Value Stage'.split(), index=pd.date_range('2000-01-01','2000-01-06')) </code></pre> <p>Now I'm adding a new column named <code>Unit</code> to the dataframe which actually does a split on the column <code>Value</code>. However, it seems to literally split everything even if the values don't make sense. For example values like <code>food delivery</code> don't need to be split.</p> <p>I only want the values to be split if <code>str[0]</code> is a digit <code>AND</code> if <code>str[1]</code> is &lt;= 5 characters long. I think I'm really close however I got stuck. This is my code:</p> <pre><code>df['Unit'] = df['Value'].str.split(' ').str[-1] if str[0].isdigit() and len(str[1]) is &lt;= 5 </code></pre> <p><strong>This is my desired output when I do print(df):</strong></p> <pre><code> Power Value Stage Unit 2000-01-01 -0.5310 30 mg 0 mg 2000-01-02 1.4900 70 kg 1 kg 2000-01-03 -1.3826 food delivery 2 2000-01-04 0.8140 80 degrees 2000-01-05 -0.2200 4 2000-01-06 -1.1100 70 grams grams </code></pre> <p><strong>This is my output:</strong></p> <pre><code> df['Unit'] = df['Value'].str.split(' ').str[-1] if str[0].isdigit() and len(str[1]) is &lt;= 5 ^ SyntaxError: invalid syntax </code></pre>
<p>You have multiple problems in your code. You can't simply tack an <code>if</code>/<code>else</code> onto a statement like that; only a properly formed conditional expression is valid there, and <code>is &lt;= 5</code> is not valid syntax at all. Also, you are applying pandas string methods, which return a whole Series containing the split lists, so plain Python list indexing does not apply the way you wrote it. Look at the output of <code>df['Value'].str.split(' ')</code>: it's not a list but a Series that contains a list in every row.</p>

<p>Use <code>apply</code> to apply a self-defined function to each cell of <code>df['Value']</code>. In that function, you can use a conditional expression.</p>

<pre><code>df['Unit'] = df['Value'].apply(lambda string: string.split(' ')[-1] if string[0].isdigit() and len(string.split(' ')[1]) &lt;= 5 else '')
</code></pre>
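<p>As an aside, a vectorized sketch with a regex can avoid <code>apply</code>; this assumes every value you want to match consists of exactly two tokens (a number, then a unit):</p> <pre><code># first token: digits only; second token: at most 5 non-space characters
df['Unit'] = df['Value'].str.extract(r'^\d+\s+(\S{1,5})$', expand=False).fillna('')
</code></pre>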
python|pandas|dataframe
1
6,026
56,090,087
Operation on data frame to transform rows into separate columns
<p>I have a data-frame with the following structure:</p>

<pre><code> **Email                MAC**
email_1@mail.com    AA:AA:AA:AA:A1
email_1@mail.com    AA:AA:AA:AA:A5
email_1@mail.com    PP:PP:PP:PP:P5
email_1@mail.com    PP:PP:PP:PP:P6
email_2@mail.com    AA:AA:AA:AA:A2
email_2@mail.com    AA:AA:AA:AA:A9
</code></pre>

<p>I have to reshape them into:</p>

<pre><code>**Email             MAC1            MAC2            MAC3**
email_1@mail.com    AA:AA:AA:AA:A1  AA:AA:AA:AA:A5  PP:PP:PP:PP:P5
email_2@mail.com    AA:AA:AA:AA:A2  AA:AA:AA:AA:A9  Null
</code></pre>

<p>The value PP:PP:PP:PP:P6 corresponding to email_1@mail.com has been discarded, as it exceeds the number of allowed columns (only the first 3 values are allowed).</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a> for counter column, filter by <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>, reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>DataFrame.unstack</code></a>:</p> <pre><code>N = 3 g = df.groupby('Email').cumcount().add(1) df = df[g &lt;= N] df1 = df.set_index(['Email',g[g&lt;=N]])['MAC'].unstack().add_prefix('MAC').reset_index() print (df1) Email MAC1 MAC2 MAC3 0 email_1@mail.com AA:AA:AA:AA:A1 AA:AA:AA:AA:A5 PP:PP:PP:PP:P5 1 email_2@mail.com AA:AA:AA:AA:A2 AA:AA:AA:AA:A9 NaN </code></pre>
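<p>An alternative sketch that keeps only the first 3 MACs per email with <code>head</code> and then unstacks (assuming a reasonably recent pandas):</p> <pre><code>out = (df.groupby('Email')['MAC']
         .apply(lambda s: s.head(3).reset_index(drop=True))
         .unstack()
         .rename(columns=lambda c: f'MAC{c + 1}')
         .reset_index())
</code></pre>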
python|python-3.x|numpy|dataframe
0
6,027
56,226,621
How to extract data/labels back from TensorFlow dataset
<p>there are plenty of examples of how to create and use TensorFlow datasets, e.g.</p>

<pre><code>dataset = tf.data.Dataset.from_tensor_slices((images, labels))
</code></pre>

<p>My question is how to get the data/labels back from the TF dataset in numpy form? In other words, what would be the reverse operation of the line above, i.e. I have a TF dataset and want to get back the images and labels from it.</p>
<p>In case your <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset" rel="noreferrer"><code>tf.data.Dataset</code></a> is batched, the following code will retrieve all the y labels:</p>

<pre><code>y = np.concatenate([y for x, y in ds], axis=0)
</code></pre>

<p>Quick explanation: [y for x, y in ds] is known as "list comprehension" in python. If the dataset is batched, this expression will loop through each batch, put each batch's y (a TF 1D tensor) in the list, and return it. Then, np.concatenate will take this list of 1-D tensors (implicitly casting to numpy) and stack them along the 0-axis to produce a single long vector. In summary, it is just converting a bunch of small 1-D vectors into one long vector. Note: if your y is more complex, this answer will need some minor modification.</p>
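<p>For completeness, a sketch retrieving both images and labels; this assumes TensorFlow 2.1+, where <code>Dataset.as_numpy_iterator</code> is available:</p> <pre><code>import numpy as np

xs, ys = [], []
for x, y in ds.as_numpy_iterator():   # each iteration yields one batch
    xs.append(x)
    ys.append(y)
images = np.concatenate(xs, axis=0)
labels = np.concatenate(ys, axis=0)
</code></pre>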
tensorflow|tensorflow-datasets
56
6,028
55,708,136
Does loc/iloc return a reference or a copy?
<p>I am experiencing some problems while using .loc / .iloc as part of a loop. This is a simplified version of my code:</p> <pre class="lang-py prettyprint-override"><code> INDEX=['0', '1', '2', '3', '4'] COLUMNS=['A','B','C'] df=pd.DataFrame(index=INDEX, columns=COLUMNS) i=0 while i&lt;1000: for row in INDEX: df.loc[row] = function() #breakpoint i_max = df['A'].idxmax() row_MAX=df.loc[i_max] if i == 0: row_GLOBALMAX=row_MAX elif row_MAX &gt; row_GLOBALMAX: row_GLOBALMAX=row_MAX i+=1 </code></pre> <p>basically:</p> <ol> <li><p>I initialize a dataframe with index and columns</p></li> <li><p>I populate each row of the dataframe with a for loop</p></li> <li><p>I find the index "i_max" finding the maximum value in column 'A'</p></li> <li><p>I save the row of the dataframe where the value is maximum 'row_MAX'</p></li> <li><p>The while loop iterates over steps 2 to 4 and uses a new variable row_GLOBALMAX to save the row with the highest value in row 'A'</p></li> </ol> <p>The code works as expected during the first execution of the while loop (i=0), however at the second iteration (i=1) when I stop at the indicated breakpoint I observe a problem: both 'row_MAX' and 'row_GLOBALMAX' have already changed with respect to the first iteration and have followed the values in the updated 'df' dataframe, even though I haven't yet assigned them in the second iteration.</p> <p>basically it seems like the .loc function created a pointer to a particular row of the 'df' dataframe instead of actually assigning a value in that particular moment. Is this the normal behaviour? What should I use instead of .loc?</p>
<p>I <strong>think</strong> both <code>loc</code> and <code>iloc</code> (didn't test <code>iloc</code>) will <strong>point</strong> to a specific index of the dataframe. They do not make copies of the row. </p> <p>You can use the <code>copy()</code> method on the row to solve your problem.</p> <pre><code>import pandas as pd import numpy as np INDEX=['0', '1', '2', '3', '4'] COLUMNS=['A','B','C'] df=pd.DataFrame(index=INDEX, columns=COLUMNS) np.random.seed(5) for idx in INDEX: df.loc[idx] = np.random.randint(-100, 100, 3) print("First state") a_row = df.loc["3"] a_row_cp = a_row.copy() print(df) print("---\n") print(a_row) print("\n==================================\n\n\n") for idx in INDEX: df.loc[idx] = np.random.randint(-100, 100, 3) print("Second state") print(df) print("---\n") print(a_row) print("---\n") print(a_row_cp) </code></pre>
python|pandas|dataframe
4
6,029
64,694,572
Python how to apply per-column mean on Series of series
<p>I have a series with 800 elements. Each element is itself a series with n elements, <code>800 &lt; n &lt;= 1200</code>, so the longest series <code>len</code> is 1200. I want to get a single vector with 1200 elements, where each element's value is the mean of that position across all the series. So if:</p>

<pre><code>s = ([1,2,3,4,5,6,1], [1,3,9,6], [4,4])
</code></pre>

<p>I will get:</p>

<pre><code>v = [2,3,6,5,5,6,1]
</code></pre>

<p>What would be the best way to do it?</p>
<p>Create <code>DataFrame</code> by constructor and use <code>mean</code>:</p> <pre><code>s = pd.Series(([1,2,3,4,5,6,1], [1,3,9,6], [4,4])) out = pd.DataFrame(s.tolist()).mean().tolist() print (out) [2.0, 3.0, 6.0, 5.0, 5.0, 6.0, 1.0] </code></pre>
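<p>The same result without pandas: a sketch that pads the series to equal length with NaN and averages per position:</p> <pre><code>import itertools
import numpy as np

s = ([1,2,3,4,5,6,1], [1,3,9,6], [4,4])
# rows = positions, columns = series; missing entries become NaN
arr = np.array(list(itertools.zip_longest(*s, fillvalue=np.nan)), dtype=float)
out = np.nanmean(arr, axis=1).tolist()
print(out)   # [2.0, 3.0, 6.0, 5.0, 5.0, 6.0, 1.0]
</code></pre>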
python|pandas|numpy|dataframe|series
0
6,030
64,925,753
Python: looping over folders and subfolders to read CSVs gets the file names, but read_csv returns file not found
<p>I am trying to loop over folders and subfolders to access and read CSV files before transforming them into JSON. Here is the code I am working on:</p>

<pre><code>cursor = conn.cursor()
try:
    # Specify the folder containing needed files
    folderPath = 'C:\\Users\\myUser\\Desktop\\toUpload'  # Or using input()
    fwdPath = 'C:/Users/myUser/Desktop/toUpload'

    for countries in os.listdir(folderPath):
        for sectors in os.listdir(folderPath+'\\'+countries):
            for file in os.listdir(folderPath+'\\'+countries+'\\'+sectors):
                data = pd.DataFrame()
                filename, _ext = os.path.splitext(os.path.basename(folderPath+'\\'+countries+'\\'+file))
                print(file + ' ' + filename+ ' ' + sectors + ' ' + countries)
                data = pd.read_csv(file)

    # cursor.execute('SELECT * FROM SECTORS')
    # print(list(cursor))
finally:
    cursor.close()
    conn.close()
</code></pre>

<p>The following print line returns the file, its filename without the extension, and then the sectors and countries folder names:</p>

<pre><code>print(file + ' ' + filename+ ' ' + sectors + ' ' + countries)
</code></pre>

<blockquote>
  <p>myfile.csv myfile WASHSector CTRYIrq</p>
</blockquote>

<p>Now when it comes to reading the CSV, it takes lots and lots of time, and at the end I get the following error:</p>

<blockquote>
  <p>[Errno 2] File myfile.csv does not exist</p>
</blockquote>
<p>Before reading the csv file, you should compose the whole path to the file; otherwise, pandas won't be able to read that file.</p>

<pre class="lang-py prettyprint-override"><code>import os

# ...
path = os.path.join(folderPath, countries, sectors, file)
data = pd.read_csv(path)
</code></pre>

<p>Also, instead of using three nested for loops, I recommend using the <code>os.walk</code> method. It will automatically recurse through directories:</p>

<pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; folderPath = 'C:\\Users\\myUser\\Desktop\\toUpload'
&gt;&gt;&gt; for root, _, files in os.walk(folderPath):
&gt;&gt;&gt; ...    for f in files:
&gt;&gt;&gt; ...        pd.read_csv(os.path.join(root, f))
</code></pre>
python|pandas
1
6,031
64,618,725
Which month has the highest cumulative sum in multiindex pandas
<p>I have this <code>MultiIndex</code> pandas dataframe:</p>

<pre><code>           chamber_temp
month day
1     1        0.000000
      2        0.005977
      3        0.001439
      4       -0.000119
      5        0.000514
                 ...
12    27       0.001799
      28       0.002346
      29      -0.001815
      30       0.001102
      31      -0.004189
</code></pre>

<p>What I want to get is which month has the highest cumsum(). For each month there should be 1 value giving me the cumulative sum of all the <code>day</code> values in that month; that is the part I am trying to get help with.</p>
<p>You can leverage the <code>level</code> parameter in <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.sum.html" rel="nofollow noreferrer"><code>Series.sum</code></a> when there's a <code>MultiIndex</code>, to avoid <code>groupby</code> in such cases.</p>

<pre><code>df['chamber_temp'].sum(level=0).idxmax()
</code></pre>
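<p>An equivalent sketch with <code>groupby</code>, which also works on newer pandas versions where <code>sum(level=...)</code> is deprecated:</p> <pre><code># level 0 of the MultiIndex is the month
df['chamber_temp'].groupby(level=0).sum().idxmax()
</code></pre>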
python|python-3.x|pandas|multi-index
2
6,032
64,886,170
How to extract a nested dictionary from a STRING column in Python Pandas Dataframe?
<p>There's a table where one data point of its column <code>event</code> looks like this (note: <code>event</code> is a STRING column!):</p>

<pre><code>df['event']

RETURNS:

&quot;{'eventData': {'type': 'page', 'name': &quot;WHAT'S UP&quot;}, 'eventId': '1003', 'deviceType': 'kk', 'pageUrl': '/chick 2/whats sup', 'version': '1.0.0.888-10_7_2020__4_18_30', 'sessionGUID': '1b312346a-cd26-4ce6-888-f25143030e02', 'locationid': 'locakdi-3b0c-49e3-ab64-741f07fd4cb3', 'eventDescription': 'Page Load'}&quot;
</code></pre>

<p>I'm trying to extract the nested dictionary <code>eventData</code> from the dictionary and create a new column like below:</p>

<pre><code>df['event']

RETURNS:

{'eventId': '1003', 'deviceType': 'kk', 'pageUrl': '/chick 2/whats sup', 'version': '1.0.0.888-10_7_2020__4_18_30', 'sessionGUID': '1b312346a-cd26-4ce6-888-f25143030e02', 'locationid': 'locakdi-3b0c-49e3-ab64-741f07fd4cb3', 'eventDescription': 'Page Load'}

df['eventData']

RETURNS:

{'type': 'page', 'name': &quot;WHAT'S UP&quot;}
</code></pre>

<p>How do I do this?</p>
<p>I finally got the answer from another post: <a href="https://stackoverflow.com/questions/51359783/python-flatten-multilevel-nested-json">Python flatten multilevel/nested JSON</a></p>

<p>How to use it:</p>

<pre><code>json_col = pd.DataFrame([flatten_json(x) for x in df['json_column']])
</code></pre>

<pre><code>def flatten_json(nested_json, exclude=['']):
    out = {}

    def flatten(x, name='', exclude=exclude):
        if type(x) is dict:
            for a in x:
                if a not in exclude:
                    flatten(x[a], name + a + '_')
        elif type(x) is list:
            i = 0
            for a in x:
                flatten(a, name + str(i) + '_')
                i += 1
        else:
            out[name[:-1]] = x

    flatten(nested_json)
    return out
</code></pre>
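<p>One caveat, since <code>event</code> here holds <em>string</em> representations of dicts: parse the strings first. A sketch, assuming the values are valid Python literals (single-quoted dicts):</p> <pre><code>import ast
import pandas as pd

parsed = df['event'].apply(ast.literal_eval)           # str -&gt; dict
df['eventData'] = parsed.apply(lambda d: d.get('eventData'))
flat = pd.DataFrame([flatten_json(x) for x in parsed]) # flattened columns
</code></pre>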
python|regex|pandas|dataframe|python-re
0
6,033
64,914,930
Finding duplicate row pairs irrespective of column order
<p>I have a pandas data frame and I am looking for a simple way to identify rows where the values are the same (duplicate), <strong>irrespective of the order of the columns.</strong></p> <p>For example:</p> <pre><code>df = pd.DataFrame([[1, 3], [4, 2], [3, 1], [2, 3], [2, 4], [1, 3]], columns=[&quot;a&quot;, &quot;b&quot;]) print(df) a b 0 1 3 1 4 2 2 3 1 3 2 3 4 2 4 5 1 3 </code></pre> <p>The code should be able to identify the rows (0, 2, 5), and (1, 4) as the duplicate ones respectively.</p> <p>I can't think of an efficient solution other than using a set operator to store these pairs and then finding the duplicates. Can you suggest a better method since the data frame is quite big, and thus the suggested method is very inefficient.</p>
<p>You could do this using <a href="https://numpy.org/doc/stable/reference/generated/numpy.sort.html" rel="nofollow noreferrer"><code>np.sort</code></a> on <code>axis=1</code>, then <code>groupby</code></p> <pre><code>u = pd.DataFrame(np.sort(df,axis=1),index=df.index) [tuple(g.index) for _,g in u[u.duplicated(keep=False)].groupby(list(u.columns))] </code></pre> <hr /> <pre><code>[(0, 2, 5), (1, 4)] </code></pre> <p>Or similarly:</p> <pre><code>u[u.duplicated(keep=False)].groupby(list(u.columns)).groups.values() </code></pre> <hr /> <p>Outputs:</p> <pre><code>dict_values([Int64Index([0, 2, 5], dtype='int64'), Int64Index([1, 4], dtype='int64')]) </code></pre>
python|pandas|duplicates
1
6,034
39,887,598
How do I delete rows I don't need in dataframe pandas?
<p>I want to delete certain rows, removing both the ZIPCODE and AV_LAND values of those rows. For instance, I want to delete rows 1 and 2. How would I do that? In addition, I want to reset the index once I have deleted all the rows I don't need.</p>

<pre><code>    ZIPCODE  AV_LAND
0   02108    2653506
1   02109    5559661
2   02110    11804931
3   02134    4333212
</code></pre>
<p>You can use drop:</p> <pre><code>df.drop([1, 2]).reset_index(drop=True) Out: ZIPCODE AV_LAND 0 02108 2653506 1 02134 4333212 </code></pre> <p>This is not an inplace operation so if you want to change the original DataFrame you need to assign it back: <code>df = df.drop([1, 2]).reset_index(drop=True)</code></p>
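<p>If you would rather drop by condition than by position, a sketch keeping only the rows whose ZIPCODE you want (assuming ZIPCODE is stored as strings, given the leading zeros):</p> <pre><code>keep = ~df['ZIPCODE'].isin(['02109', '02110'])
df = df[keep].reset_index(drop=True)
</code></pre>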
pandas|dataframe
1
6,035
69,363,602
Pivoting without numerical aggregation/ a numerical column
<p>I have a dataframe that looks like this</p> <pre><code>d = {'Name': ['Sally', 'Sally', 'Sally', 'James', 'James', 'James'], 'Sports': ['Tennis', 'Track &amp; field', 'Dance', 'Dance', 'MMA', 'Crosscountry']} df = pd.DataFrame(data=d) </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Name</th> <th>Sports</th> </tr> </thead> <tbody> <tr> <td>Sally</td> <td>Tennis</td> </tr> <tr> <td>Sally</td> <td>Track &amp; field</td> </tr> <tr> <td>Sally</td> <td>Dance</td> </tr> <tr> <td>James</td> <td>Dance</td> </tr> <tr> <td>James</td> <td>MMA</td> </tr> <tr> <td>James</td> <td>Crosscountry</td> </tr> </tbody> </table> </div> <p>It seems that pandas' pivot_table only allows reshaping with numerical aggregation, but I want to reshape it to wide format such that the strings are in the &quot;values&quot;:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Name</th> <th>First_sport</th> <th>Second_sport</th> <th>Third_sport</th> </tr> </thead> <tbody> <tr> <td>Sally</td> <td>Tennis</td> <td>Track &amp; field</td> <td>Dance</td> </tr> <tr> <td>James</td> <td>Dance</td> <td>MMA</td> <td>Crosscountry</td> </tr> </tbody> </table> </div> <p>Is there a method in pandas that can help me do this? Thanks!</p>
<p>You can do that, either with <code>.pivot()</code> if your column / index names are unique, or with <code>.pivot_table()</code> by providing an aggregation function that works on strings too, e.g. <code>'first'</code>.</p> <pre><code>&gt;&gt;&gt; df['Sport_num'] = 'Sport ' + df.groupby('Name').cumcount().astype(str) &gt;&gt;&gt; df Name Sports Sport_num 0 Sally Tennis Sport 0 1 Sally Track &amp; field Sport 1 2 Sally Dance Sport 2 3 James Dance Sport 0 4 James MMA Sport 1 5 James Crosscountry Sport 2 &gt;&gt;&gt; df.pivot(index='Name', values='Sports', columns='Sport_num') Sport_num Sport 0 Sport 1 Sport 2 Name James Dance MMA Crosscountry Sally Tennis Track &amp; field Dance &gt;&gt;&gt; df.pivot_table(index='Name', values='Sports', columns='Sport_num', aggfunc='first') Sport_num Sport 0 Sport 1 Sport 2 Name James Dance MMA Crosscountry Sally Tennis Track &amp; field Dance </code></pre>
python|pandas|pivot-table|reshape|melt
2
6,036
69,538,626
How to transform a time series into a two-column dataframe showing the count for each element of the time series, using Python
<p>I have data in a file that takes the form of a list of arrays: each line corresponds to an array of integers, with the first element of each array (it is a time series) corresponding to an index. Here is an example:</p>

<pre><code>1 101 103 238 156 48 78
2 238 420 156 103 26
3 220 103 154 48 101 238 156 26 420
4 26 54 43 103 156 238 48
</code></pre>

<p>There isn't the same number of elements in each line, and some elements are present in more than one line while others are not.</p>

<p>I would like, using Python, to transform the data so that I have 2 columns: the first corresponds to the list of all the integers appearing in the original dataset and the other is the count of the number of occurrences. I.e., in the example given:</p>

<pre><code>26 3
43 1
48 3
54 1
78 1
101 2
103 4
154 1
156 4
220 1
238 4
420 2
</code></pre>

<p>Could anyone please let me know how I could do that? Is there a straightforward way to do this using Pandas or Numpy, for example? Many thanks in advance!</p>
<pre><code>import pandas as pd

array1 = [1, 101, 103, 238, 156, 48, 78]
array2 = [2, 238, 420, 156, 103, 26]
array3 = [3, 220, 103, 154, 48, 101, 238, 156, 26, 420]
array4 = [4, 26, 54, 43, 103, 156, 238, 48]

# drop the leading index element of each line before counting
values = array1[1:] + array2[1:] + array3[1:] + array4[1:]
print(pd.Series(values).value_counts().sort_index())
</code></pre>
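<p>Reading straight from the file works the same way; a sketch where the filename is an assumption:</p> <pre><code>import pandas as pd

with open('series.txt') as fh:                     # hypothetical filename
    values = [int(tok) for line in fh for tok in line.split()[1:]]

counts = pd.Series(values).value_counts().sort_index()
print(counts)
</code></pre>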
python|pandas|dataframe|numpy|time-series
0
6,037
66,320,198
Keras fit with generator function always execute in the main thread
<p>How can I make a Keras model's <code>fit</code> method execute a generator in the main thread? From the docs, it looks like setting workers=0 would execute the code in the main thread.</p>

<blockquote>
  <p>workers Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1. <strong>If 0, will execute the generator on the main thread.</strong></p>
</blockquote>

<p>However when I do:</p>

<pre><code>import tensorflow as tf
import threading

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(loss = "mse", optimizer = "adam")

def gen ():
    for i in range(100):
        print(threading.current_thread())
        yield (tf.random.normal(shape=(100,1)), tf.random.normal(shape = (100,)))

model.fit(gen(), epochs = 1, workers = 0, verbose = 0, steps_per_epoch = 3)
</code></pre>

<p>I get</p>

<pre><code>&lt;_MainThread(MainThread, started 140516450817920)&gt;
&lt;_DummyThread(Dummy-5, started daemon 140514709206784)&gt;
&lt;_DummyThread(Dummy-4, started daemon 140514717599488)&gt;

&lt;tensorflow.python.keras.callbacks.History at 0x7fcc1e8a8d68&gt;
</code></pre>

<p>Which I interpret as only the first step of the iterator having been executed in the main thread.</p>

<p>In my use case this is problematic, because I need the code inside the generator to always be executed in the main thread; otherwise the program crashes.</p>
<p>It seems like @Ena was on the right track. The following code runs each iteration of the generator on the main thread. <code>workers</code> must be set to 1. If it is set to 0, then the iterations are not on the main thread.</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf import threading def gen(): for i in range(10): current_thread = threading.current_thread() is_main = current_thread is threading.main_thread() print(f&quot;is main: {is_main} | thread.name: {current_thread.name}&quot;) yield (tf.random.normal(shape=(100, 1)), tf.random.normal(shape=(100,))) model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))]) model.compile( loss=&quot;mse&quot;, optimizer=&quot;adam&quot;, ) model.fit(gen(), workers=1, use_multiprocessing=True, verbose=2) </code></pre> <p>This is the output</p> <pre><code>is main: True | thread.name: MainThread WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. is main: True | thread.name: Thread-31 is main: True | thread.name: Thread-31 is main: True | thread.name: Thread-31 is main: True | thread.name: Thread-31 is main: True | thread.name: Thread-31 is main: True | thread.name: Thread-31 is main: True | thread.name: Thread-31 is main: True | thread.name: Thread-31 is main: True | thread.name: Thread-31 10/10 - 1s - loss: 4.1511 &lt;tensorflow.python.keras.callbacks.History at 0x7fcf780ef1d0&gt; </code></pre> <p>It seems like any data that is passed to a <code>model.fit</code> call is converted to a <code>tf.data.Dataset</code>. See <a href="https://github.com/tensorflow/tensorflow/blob/290585044e0180f026cc74cce9ef3e206230194d/tensorflow/python/keras/engine/training.py#L1108-L1122" rel="nofollow noreferrer">the relevant code in <code>model.fit</code></a>. And I have a hunch that <code>tf.data.Dataset</code> is getting data in separate threads.</p> <p>There is also a warning in the output about using <code>tf.data</code> instead of multiprocessing. If one uses <code>tf.data.Dataset</code>, however, the generator runs in a separate thread.</p> <pre class="lang-py prettyprint-override"><code>dset = tf.data.Dataset.from_generator( gen, output_signature=( tf.TensorSpec(shape=(100, 1), dtype=tf.float32), tf.TensorSpec(shape=(100,), dtype=tf.float32), ), ) for _ in dset: pass </code></pre> <p>The output is</p> <pre><code>is main: False | thread.name: Dummy-8 is main: False | thread.name: Dummy-4 is main: False | thread.name: Dummy-4 is main: False | thread.name: Dummy-4 is main: False | thread.name: Dummy-4 is main: False | thread.name: Dummy-8 is main: False | thread.name: Dummy-8 is main: False | thread.name: Dummy-8 is main: False | thread.name: Dummy-8 is main: False | thread.name: Dummy-4 </code></pre> <hr /> <p>Another option is to implement a custom training loop and bypass <code>.fit()</code> entirely. See <a href="https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough#train_the_model" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough#train_the_model</a> for an example.</p>
python|tensorflow|keras
2
6,038
66,063,918
How to delete cells and cut rows find by " ̶i̶s̶i̶n̶(̶)̶ " df.mask()?
<p>I have a dataframe with random cells, for example &quot;boss&quot;. How can I delete the cells &quot;boss&quot; and all cells to their right in the same row, using df.isin()?</p>

<pre><code>x=[]
for i in range (5):
    x.append(&quot;boss&quot;)
df=pd.DataFrame(np.diagflat(x) )

    0     1     2     3     4
0   boss
1         boss
2               boss
3                     boss
4                           boss
</code></pre>

<p>I cut the rows using &quot; ̶i̶s̶i̶n̶(̶)̶ &quot; <code>df.mask</code> to get:</p>

<pre><code>mask=(df.eq('boss').cumsum(axis=1).ne(0))
df.mask(mask,&quot;Nan&quot;, inplace =True)
df

    0    1    2    3    4
0   NaN  NaN  NaN  NaN  NaN
1        NaN  NaN  NaN  NaN
2             NaN  NaN  NaN
3                  NaN  NaN
4                       NaN
</code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mask.html" rel="nofollow noreferrer"><code>DataFrame.mask</code></a> with mask:</p> <pre><code>df = df.mask(df.eq('boss').cumsum(axis=1).ne(0)) print (df) 0 1 2 3 4 0 NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN 2 NaN NaN NaN 3 NaN NaN 4 NaN </code></pre> <p><strong>Details:</strong></p> <p>Compare value <code>boss</code>:</p> <pre><code>print (df.eq('boss')) 0 1 2 3 4 0 True False False False False 1 False True False False False 2 False False True False False 3 False False False True False 4 False False False False True </code></pre> <p>Use cumulative sum per rows, so after first match get <code>1</code>, after second <code>2</code>..:</p> <pre><code>print (df.eq('boss').cumsum(axis=1)) 0 1 2 3 4 0 1 1 1 1 1 1 0 1 1 1 1 2 0 0 1 1 1 3 0 0 0 1 1 4 0 0 0 0 1 </code></pre> <p>Compare if not equal <code>0</code> and pass to <code>mask</code>:</p> <pre><code>print (df.eq('boss').cumsum(axis=1).ne(0)) 0 1 2 3 4 0 True True True True True 1 False True True True True 2 False False True True True 3 False False False True True 4 False False False False True </code></pre>
python|pandas|dataframe
1
6,039
52,633,582
(Dask) How to distribute expensive resource needed for computation?
<p>What is the best way to distribute a task across a dataset that uses a relatively expensive-to-create resource or object for the computation?</p>

<pre><code># in pandas
df = pd.read_csv(...)
foo = Foo() # expensive initialization.
result = df.apply(lambda x: foo.do(x))

# in dask?
# is it possible to scatter the foo to the workers?
client.scatter(...
</code></pre>

<p>I plan on using this with dask_jobqueue and SGECluster.</p>
<pre><code>foo = dask.delayed(Foo)() # create your expensive thing on the workers instead of locally def do(row, foo): return foo.do(row) df.apply(do, foo=foo) # include it as an explicit argument, not a closure within a lambda </code></pre>
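<p>If you would rather pre-build the object with the distributed scheduler, a sketch with futures; <code>list_of_items</code> is a placeholder for your rows or partitions:</p> <pre><code>from dask.distributed import Client

client = Client()                    # e.g. backed by your SGECluster
foo_future = client.submit(Foo)      # expensive init runs once, on a worker

def do(x, foo):
    return foo.do(x)

futures = client.map(do, list_of_items, foo=foo_future)
results = client.gather(futures)
</code></pre>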
pandas|dask|python-3.7|dask-distributed
1
6,040
52,508,359
multi label classification confusion matrix have wrong number of labels
<p>I am feeding y_test and y_pred to a confusion matrix. My data is for multi-label classification, so the row values are one-hot encodings.</p>

<p>My data has 30 labels, but after feeding into the confusion matrix, the output only has 11 rows and columns, which is confusing me. I thought I should have a 30x30 matrix.</p>

<p>Their formats are numpy arrays. (y_test and y_pred are dataframes, which I convert to numpy arrays using dataframe.values.)</p>

<p><strong>y_test.shape</strong></p>

<pre><code>(8680, 30)
</code></pre>

<p><strong>y_test</strong></p>

<pre><code>array([[1, 0, 0, ..., 0, 0, 0],
       [1, 0, 0, ..., 0, 0, 0],
       [1, 0, 0, ..., 0, 0, 0],
       ...,
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0]])
</code></pre>

<p><strong>y_pred.shape</strong></p>

<pre><code>(8680, 30)
</code></pre>

<p><strong>y_pred</strong></p>

<pre><code>array([[1, 0, 0, ..., 0, 0, 0],
       [1, 0, 0, ..., 0, 0, 0],
       [1, 0, 0, ..., 0, 0, 0],
       ...,
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0]])
</code></pre>

<p><strong>I transform them into a format usable by the confusion matrix:</strong></p>

<pre><code>y_test2 = y_test.argmax(axis=1)
y_pred2 = y_pred.argmax(axis=1)
conf_mat = confusion_matrix(y_test2, y_pred2)
</code></pre>

<p>Here is what my confusion matrix looks like:</p>

<p><strong>conf_mat.shape</strong></p>

<pre><code>(11, 11)
</code></pre>

<p><strong>conf_mat</strong></p>

<pre><code>array([[4246,   77,   13,   72,   81,    4,    6,    3,    0,    0,    4],
       [ 106, 2010,   20,   23,   21,    0,    5,    2,    0,    0,    0],
       [ 143,   41,   95,   32,   10,    3,   14,    1,    1,    1,    2],
       [ 101,    1,    0,  351,   36,    0,    0,    0,    0,    0,    0],
       [ 346,   23,    7,   10,  746,    5,    6,    4,    3,    3,    2],
       [   0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0],
       [   0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0],
       [   0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0],
       [   0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0],
       [   0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0],
       [   0,    0,    0,    0,    0,    0,    0,    0,    0,    0,    0]])
</code></pre>

<p>Why does my confusion matrix only have an 11x11 shape? Shouldn't it be 30x30?</p>
<p>I think you are not quite clear on the definition of <code>confusion_matrix</code>:</p>

<pre><code>y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]
confusion_matrix(y_true, y_pred)
array([[2, 0, 0],
       [0, 0, 1],
       [1, 0, 2]])
</code></pre>

<p>Which as a data frame is:</p>

<pre><code>pd.DataFrame(confusion_matrix(y_true, y_pred),columns=[0,1,2],index=[0,1,2])
Out[245]:
   0  1  2
0  2  0  0
1  0  0  1
2  1  0  2
</code></pre>

<p>The columns and index are the categories present in the input.</p>

<p>You have <code>(11, 11)</code>, which means only 11 distinct categories appear in your data after the <code>argmax</code>.</p>
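<p>If you want the full 30x30 matrix regardless of which classes actually occur, a sketch using the <code>labels</code> argument:</p> <pre><code>import numpy as np
from sklearn.metrics import confusion_matrix

conf_mat = confusion_matrix(y_test2, y_pred2, labels=np.arange(30))
print(conf_mat.shape)   # (30, 30)
</code></pre>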
python|pandas|numpy|confusion-matrix|multilabel-classification
1
6,041
46,382,820
Creating a tensor variable with a dynamic shape
<p>I am a newbie to tensorflow. I have a tensor <code>score</code> and I am trying to create a tensor variable with the shape <code>score.shape[0]</code>.</p>

<pre><code>score = tf.constant(np.array([[10, 0, -5], [4, 3, 0], [-3, 0, 11]]), dtype=tf.float32)
v = tf.Variable(tf.zeros(tf.shape(score)[0]))
</code></pre>

<p>But I am getting the error: <code>ValueError: Shape must be rank 1 but is rank 0 for 'zeros_2' (op: 'Fill') with input shapes: [], [].</code>. I was wondering where I am going wrong.</p>
<p>You can just use <code>score.shape[0]</code> to get the first dimension of the <code>score</code> tensor:</p> <pre><code>sess = tf.InteractiveSession() score = tf.constant(np.array([[10, 0, -5], [4, 3, 0], [-3, 0, 11]]),dtype=tf.float32) v = tf.Variable(tf.zeros(score.shape[0])) tf.global_variables_initializer().run() v.eval() # array([ 0., 0., 0.], dtype=float32) </code></pre> <p><code>tf.shape</code> returns a tensor which needs to be evaluated before you can use it: </p> <pre><code>sess = tf.InteractiveSession() score = tf.constant(np.array([[10, 0, -5], [4, 3, 0], [-3, 0, 11]]),dtype=tf.float32) v = tf.Variable(tf.zeros(tf.shape(score).eval()[0])) tf.global_variables_initializer().run() v.eval() # array([ 0., 0., 0.], dtype=float32) </code></pre>
python|tensorflow
0
6,042
46,305,622
Why does numpy array comparison return boolean array?
<p>Why does:</p> <pre><code>[3] == np.arange(10) </code></pre> <p>return:</p> <pre><code>([False, False, False, True, False, False, False, False, False, False], dtype=bool) </code></pre> <p>Instead of simply <code>False</code>?</p>
<p>Why does <code>np.arange(10)+3</code> return an array? The comparison <code>[3] == np.arange(10)</code> is treating the arguments in the same way, element by element (with broadcasting as needed).</p>

<p>If it can't broadcast and do an element-wise comparison, it returns False or an error.</p>

<pre><code>In [285]: np.arange(10)==[1,2]
/usr/local/bin/ipython3:1: DeprecationWarning: elementwise == comparison failed; this will raise an error in the future.
  #!/usr/bin/python3
Out[285]: False
</code></pre>
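<p>If what you actually want is a single boolean, compare the arrays as wholes or aggregate the element-wise result; a quick sketch:</p> <pre><code>import numpy as np

np.array_equal([3], np.arange(10))    # False: not the same array
(np.arange(10) == 3).any()            # True: 3 occurs somewhere
(np.arange(10) == 3).all()            # False: not every element is 3
</code></pre>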
python|numpy|multidimensional-array
1
6,043
46,284,172
Pandas sequentially apply function using output of previous value
<p>I want to compute the "carryover" of a series. This computes a value for each row and then adds it to the previously computed value (for the previous row). </p> <p>How do I do this in pandas?</p> <pre><code>decay = 0.5 test = pd.DataFrame(np.random.randint(1,10,12),columns = ['val']) test val 0 4 1 5 2 7 3 9 4 1 5 1 6 8 7 7 8 3 9 9 10 7 11 2 decayed = [] for i, v in test.iterrows(): if i ==0: decayed.append(v.val) continue d = decayed[i-1] + v.val*decay decayed.append(d) test['loop_decay'] = decayed test.head() val loop_decay 0 4 4.0 1 5 6.5 2 7 10.0 3 9 14.5 4 1 15.0 </code></pre>
<p>Consider a vectorized version with <code>cumsum()</code>, where you cumulatively sum (val * decay) together with the very first <em>val</em>.</p>

<p>However, you then need to subtract the very first (val * decay), since <code>cumsum()</code> includes it:</p>

<pre><code>test['loop_decay'] = test['val'].iloc[0] + (test['val']*decay).cumsum() - (test['val'].iloc[0]*decay)
</code></pre>
python|pandas|apply
2
6,044
58,227,796
Plotly Scatter Points on Maps - Removing Longitude and Latitude from text
<p>I am following this example very closely to experiment with plotting scatter points on maps and this is working perfectly: <a href="https://plot.ly/python/scatter-plots-on-maps/" rel="nofollow noreferrer">https://plot.ly/python/scatter-plots-on-maps/</a></p> <p>However, when you hover over each scatter point you will notice the text is shown along with the latitude and longitude. Is there a way to remove the two coordinates from the displayed text?</p>
<p>You should be able to set <code>hoverinfo="text"</code> to achieve this. Here is the relevant documentation page: <a href="https://plot.ly/python/hover-text-and-formatting/" rel="nofollow noreferrer">https://plot.ly/python/hover-text-and-formatting/</a></p>
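<p>A minimal sketch on a <code>scattergeo</code> trace, assuming a reasonably recent plotly:</p> <pre><code>import plotly.graph_objects as go

fig = go.Figure(go.Scattergeo(
    lon=[-75.0], lat=[45.0],
    text=['My point'],
    hoverinfo='text',     # show only the text, not lon/lat
))
fig.show()
</code></pre>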
python|pandas|plot|geolocation|plotly
4
6,045
69,040,070
How can I reconstruct original matrix from SVD components with following shapes?
<p>I am trying to reconstruct the following matrix of shape (256 x 256 x 2) from its SVD components:</p>

<pre><code>U.shape = (256, 256, 256)
s.shape = (256, 2)
vh.shape = (256, 2, 2)
</code></pre>

<p>I have already tried methods from the numpy and scipy documentation to reconstruct the original matrix, but failed multiple times; I think a 3-D matrix may have a different way of reconstruction. I am using <strong>numpy.linalg.svd</strong> for the decomposition.</p>
<p>From <strong>np.linalg.svd</strong>'s documentation:</p> <blockquote> <p>&quot;... If <code>a</code> has more than two dimensions, then broadcasting rules apply, as explained in :ref:<code>routines.linalg-broadcasting</code>. This means that SVD is working in &quot;stacked&quot; mode: it iterates over all indices of the first <code>a.ndim - 2</code> dimensions and for each combination SVD is applied to the last two indices.&quot;</p> </blockquote> <p>This means that you only need to handle the <code>s</code> matrix (or tensor in general case) to obtain the right tensor. More precisely, what you need to do is pad <code>s</code> appropriately and then take only the first 2 columns (or generally, the number of rows of <code>vh</code> which should be equal to the number of columns of the returned <code>s</code>).</p> <p>Here is a working code with example for your case:</p> <pre><code>import numpy as np mat = np.random.randn(256, 256, 2) # Your matrix of dim 256 x 256 x2 u, s, vh = np.linalg.svd(mat) # Get the decomposition # Pad the singular values' arrays, obtain diagonal matrix and take only first 2 columns: s_rep = np.apply_along_axis(lambda _s: np.diag(np.pad(_s, (0, u.shape[1]-_s.shape[0])))[:, :_s.shape[0]], 1, s) mat_reconstructed = u @ s_rep @ vh </code></pre> <p><code>mat_reconstructed</code> equals to <code>mat</code> up to precision error.</p>
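<p>An equivalent sketch without <code>apply_along_axis</code>, letting <code>einsum</code> broadcast over the stacked dimension:</p> <pre><code>k = vh.shape[-2]   # number of singular values kept per stacked matrix (2 here)
# sum over j of u[..., i, j] * s[..., j] * vh[..., j, k]  ==  U_k @ diag(s) @ Vh
mat_reconstructed = np.einsum('...ij,...j,...jk-&gt;...ik', u[..., :k], s, vh)
</code></pre>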
numpy|scipy|numpy-ndarray|svd
1
6,046
68,894,007
Apply multiple filter on column using tuple
<pre><code>data = [['A',23], ['D',50], ['C',32], ['D',21], ['D',24], ['B',20], ['C',68], ['A',52], ['A',41],[ 'D',44], ['B',29], ['B',70], ['B',33], ['C',56], ['A',72]] df = pd.DataFrame(data, columns = ['group', 'age']) </code></pre> <p>I would like to filter down to rows where <code>age</code> is equal to, or between the ranges set out in the <code>group_mask</code> for each corresponding <code>group</code></p> <pre><code>group_mask = {(20, 30): 'A', (25, 30): 'B', (65, 70): 'C', (40, 50): 'D'} </code></pre> <p>I am at a loss here as to how to proceed.</p>
<pre class="lang-py prettyprint-override"><code>df['range'] = df['group'].map({v:k for k, v in group_mask.items()}) df['in_range'] = (df['range'].str[0] &lt;= df['age']) &amp; (df['age'] &lt;= df['range'].str[1]) #filtered df = df[df['in_range']] df.drop(columns=['range', 'in_range'], inplace=True) # group age # 0 A 23 # 1 D 50 # 6 C 68 # 9 D 44 # 10 B 29 </code></pre>
python|pandas|dataframe|filter|tuples
0
6,047
68,920,781
Extracting the data from pandas dataframe
<p>I have a pandas dataframe with data in each row like the example below (in reality much bigger):</p>

<pre><code>Joel Thompson / Tracy K. Smith&lt;/h2&gt; &lt;/div&gt; &lt;div&gt; &lt;p&gt;New work (World Premiere–New York Philharmonic Commission)
</code></pre>

<p>How would I filter this so that I get results like the following to work with:</p>

<pre><code> name : Joel Thompson, Tracy K. Smith
 information : New work (World Premiere–New York Philharmonic Commission)
</code></pre>
<p>You should try to use the split function for string variables. You can do it this way:</p>

<pre><code>#Get your row in a string variable text
text = &quot;Joel Thompson / Tracy K. Smith&lt;/h2&gt;&lt;/div&gt;&lt;div&gt;&lt;p&gt;New work (World Premiere–New York Philharmonic Commission)&quot;

#Extracting the name
Names_string = text.split(&quot;&lt;/h2&gt;&quot;)[0]
Names_list = Names_string.split(&quot; / &quot;)

#Extracting information
Information = text.split(&quot;&lt;p&gt;&quot;)[-1]

result = {
    &quot;name&quot; : Names_list,
    'information' : Information
}
print(result)
</code></pre>

<p>It will display this:</p>

<blockquote>
  <p>{'name': ['Joel Thompson', 'Tracy K. Smith'], 'information': 'New work (World Premiere–New York Philharmonic Commission)'}</p>
</blockquote>

<p>You could make it a function this way:</p>

<pre><code>def getDictFromRow(text):
    #Extracting the name
    Names_string = text.split(&quot;&lt;/h2&gt;&quot;)[0]
    Names_list = Names_string.split(&quot; / &quot;)

    #Extracting information
    Information = text.split(&quot;&lt;p&gt;&quot;)[-1]

    result = {
        &quot;name&quot; : Names_list,
        'information' : Information
    }
    return result

print(getDictFromRow(&quot;Joel Thompson / Tracy K. Smith&lt;/h2&gt;&lt;/div&gt;&lt;div&gt;&lt;p&gt;New work (World Premiere–New York Philharmonic Commission)&quot;))
</code></pre>
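<p>To run this over the whole dataframe, a sketch; <code>raw</code> is an assumed name for the column holding those strings:</p> <pre><code>parsed = df['raw'].apply(getDictFromRow)
out = pd.DataFrame({
    'name': parsed.apply(lambda d: ', '.join(d['name'])),
    'information': parsed.apply(lambda d: d['information']),
})
</code></pre>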
python-3.x|pandas
0
6,048
69,178,181
Use string literal instead of header name in Pandas csv file manipulation
<p>Python 3.9.5/Pandas 1.1.3</p> <p>I use the following code to create a nested dictionary object from a csv file with headers:</p> <pre><code>import pandas as pd import json import os csv = &quot;/Users/me/file.csv&quot; csv_file = pd.read_csv(csv, sep=&quot;,&quot;, header=0, index_col=False) csv_file['org'] = csv_file[['location', 'type']].apply(lambda s: s.to_dict(), axis=1) </code></pre> <p>This creates a nested object called <code>org</code> from the data in the columns called <code>location</code> and <code>type</code>.</p> <p>Now let's say the <code>type</code> column doesn't even exist in the csv file, and I want to pass a literal string as a <code>type</code> value instead of the values from a column in the csv file. So for example, I want to create a nested object called <code>org</code> using the values from the <code>data</code> column as before, but I want to just use the string <code>foo</code> for all values of a key called <code>type</code>. How to accomplish this?</p>
<p>You could just build it <em>by hand</em>:</p> <pre><code>csv_file['org'] = csv_file['location'].apply(lambda x: {'location': x, 'type': 'foo'}) </code></pre>
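<p>Equivalently, a sketch that adds the constant column first and then reuses your original pattern:</p> <pre><code>csv_file['type'] = 'foo'   # same literal string for every row
csv_file['org'] = csv_file[['location', 'type']].apply(lambda s: s.to_dict(), axis=1)
# optional: remove the helper column afterwards
csv_file = csv_file.drop(columns='type')
</code></pre>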
python|pandas
1
6,049
69,217,935
Insert a row into a dataframe based on values in another dataframe
<p>I want to insert a row into a dataframe based on values in another dataframe. I have attempted to reproduce my problem in a simple way. I have two dataframes, df1 and df2.</p>

<pre><code>data = [['apple', 'apples'], ['orange', 'oranges'], ['banana', 'bananas'], ['kiwi', 'kiwis']]
df1 = pd.DataFrame(data, columns= ['fruit', 'fruits'])

data_1 = [['apple', 'A'], ['banana', 'B'], ['cherry', 'C'], ['durian', 'D'], ['kiwi', 'K'], ['elderberry', 'E'], ['fig', 'F'], ['orange', 'O']]
df2 = pd.DataFrame(data_1, columns= ['fruit', 'label'])
</code></pre>

<p>The dataframes look like this:</p>

<pre><code>df1:

  fruit    fruits
0 apple    apples
1 orange   oranges
2 banana   bananas
3 kiwi     kiwis

df2:

  fruit       label
0 apple       A
1 banana      B
2 cherry      C
3 durian      D
4 kiwi        K
5 elderberry  E
6 fig         F
7 orange      O
</code></pre>

<p>Now I want to combine the two dataframes based on the values in the 'fruit' column. I have written a nested for loop to achieve my result.</p>

<pre><code>line = pd.DataFrame(columns=['fruit', 'label'])
for i in range(len(df1)):
    for j in range(len(df2)):
        if df1.iloc[i]['fruit'] == df2.iloc[j]['fruit']:
            df2 = df2.append(pd.DataFrame({&quot;fruit&quot;: df1.iloc[i]['fruits'], &quot;label&quot;: df2.iloc[j]['label']}, index= [j+0.5]))
            df2 = df2.sort_index().reset_index(drop=True)
</code></pre>

<p>The result looks like this:</p>

<pre><code>df2:

   fruit       label
0  apple       A
1  apples      A
2  banana      B
3  bananas     B
4  cherry      C
5  durian      D
6  kiwi        K
7  kiwis       K
8  elderberry  E
9  fig         F
10 orange      O
11 oranges     O
</code></pre>

<p>My original datasets have close to 30,000 values. This makes the nested for loop solution that I have used very slow. Is there a faster and more efficient way to do this?</p>

<p>Thanks</p>
<p>Let's do:</p> <pre><code># get labels from df2 _df = pd.merge(df1, df2, how='left', on='fruit') # drop the old fruit column and rename fruits to fruit _df = _df.drop('fruit', axis=1) _df = _df.rename({'fruits': 'fruit'}, axis=1) # concat the 2 dataframes together df2 = pd.concat([df2, _df]) </code></pre>
python|pandas|dataframe|nested-loops
1
6,050
44,520,286
Merge dataframe resulting in Series
<p>I am working with the Texas Hospital Discharge Dataset and I am trying to determine the top 100 most frequent Principal Surgery Procedures over a period of 4 years.</p>

<p>To do this I need to go through each quarter of each year and count the procedures, but when I try to merge different quarters the result is a Series, not a DataFrame.</p>

<pre><code>top_procedures = None
for year in range(6, 10):
    for quarter in range(1, 5):
        quarter_data = pd.read_table(
            filepath_or_buffer="/path/to/texas/data/PUDF_base" + str(quarter) + "q200" + str(year) + "_tab.txt",
        )
        quarter_data = quarter_data[quarter_data["THCIC_ID"] != 999999]
        quarter_data = quarter_data[quarter_data["THCIC_ID"] != 999998]
        quarter_procedures = quarter_data["PRINC_SURG_PROC_CODE"].value_counts()
        quarter_procedures = pd.DataFrame(
            {"PRINC_SURG_PROC_CODE": quarter_procedures.index, "count": quarter_procedures.values})
        top_procedures = quarter_procedures if (top_procedures is None) else \
            top_procedures.merge(
                right=quarter_procedures,
                how="outer",
                on="PRINC_SURG_PROC_CODE"
            ).set_index(
                ["PRINC_SURG_PROC_CODE"]
            ).sum(
                axis=1
            )
</code></pre>

<p>Could you please tell me what I am doing wrong? From the <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer">documentation</a> it looks like it should return a DataFrame.</p>

<p>Cheers,</p>

<p>Dan</p>
<p>the merge will indeed return a dataframe, but in your code you are summing on axis=1 (all values in one row) after merging, which then gives you a series (since the values from all columns are summed together into one final column).</p>

<p>Hope that helps.</p>
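<p>A minimal sketch of one possible fix: keep the key column intact and sum only the two count columns after the outer merge (the <code>_x</code>/<code>_y</code> suffixes come from pandas' merge defaults):</p> <pre><code>merged = top_procedures.merge(quarter_procedures, how="outer",
                              on="PRINC_SURG_PROC_CODE").fillna(0)
merged["count"] = merged["count_x"] + merged["count_y"]
top_procedures = merged[["PRINC_SURG_PROC_CODE", "count"]]
</code></pre>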
python|pandas
1
6,051
44,482,722
Why tensorflow constant is feedable true
<p>I am learning tensorflow.</p>

<pre><code>import tensorflow as tf
print(tf.VERSION)
a = tf.placeholder(tf.float32, shape=[3])
b = tf.constant([2, 2, 2], tf.float32)
c = a + b
with tf.Session() as sess:
    print(tf.get_default_graph().is_feedable(b))
    print(sess.run(c, feed_dict={a: [3, 2, 3]}))
</code></pre>

<p>The output of this is below:</p>

<pre><code>1.0.1
True
[ 5.  4.  5.]
</code></pre>

<p>I don't understand why tensorflow is saying that the constant is feedable. It makes sense that a placeholder is feedable, but why a constant?</p>
<p>Because in TF you can also <a href="https://www.tensorflow.org/programmers_guide/reading_data" rel="nofollow noreferrer">feed values in constants and variables</a>:</p> <blockquote> <p>While you can replace any Tensor with feed data, including variables and constants, the best practice is to use a tf.placeholder node</p> </blockquote> <p>Check it for yourself:</p> <pre><code>import tensorflow as tf a = tf.constant([2, 2, 2], tf.float32) with tf.Session() as sess: print(sess.run(a, feed_dict={a: [3, 2, 3]})) </code></pre> <p>Constant <code>a</code> changed its value.</p>
python|tensorflow
2
6,052
71,717,202
Filtering in Python
<p>I have a rather simple question but couldn't find an answer to it. I have an array as follows:</p>

<pre><code>sample = np.random.uniform(0,1,1000)
</code></pre>

<p>and I would like to filter values between 0.1 and 0.13, plus values above 0.9.</p>

<pre><code>filter1 = sample[((sample &gt; 0.1) &amp; (sample &lt;0.13)) &amp; (sample &gt; 0.9)]
</code></pre>

<p>However, the code gives an empty array. I don't know if I am making a mistake with the parentheses. I'd appreciate it if you could point me in the right direction.</p>
<p>You are currently filtering for numbers that are above 0.1, below 0.13, and above 0.9; a number must meet all three criteria to meet your requirements.</p> <p>To fix this, change your second ampersand to a pipe, such that the filtering reads &quot;(numbers above 0.1 and below 0.13), or above 0.9&quot;:</p> <pre class="lang-py prettyprint-override"><code>filter1 = sample[((sample &gt; 0.1) &amp; (sample &lt;0.13)) | (sample &gt; 0.9)] </code></pre>
python|arrays|python-3.x|numpy
4
6,053
42,258,758
How to delete the rows with pattern
<p>I have a DataFrame as below. I want to delete the rows whose RegionName contains [edit]. I appreciate any help.</p>

<pre><code>       State      RegionName1
0    Alabama    Alabama[edit]
1    Alabama           Auburn
2    Alabama         Florence
3    Alabama     Jacksonville
4    Alabama       Livingston
9     Alaska     Alaska[edit]
10    Alaska        Fairbanks
11   Arizona    Arizona[edit]
12   Arizona        Flagstaff
13   Arizona            Tempe
15  Arkansas   Arkansas[edit]
16  Arkansas      Arkadelphia
18  Arkansas     Fayetteville
</code></pre>
<p>you can use <code>.str.endswith()</code> method:</p> <pre><code>In [165]: df = df.loc[~df.RegionName1.str.endswith('[edit]')] In [166]: df Out[166]: State RegionName1 1 Alabama Auburn 2 Alabama Florence 3 Alabama Jacksonville 4 Alabama Livingston 6 Alaska Fairbanks 8 Arizona Flagstaff 9 Arizona Tempe 11 Arkansas Arkadelphia 12 Arkansas Fayetteville </code></pre>
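<p>If <code>[edit]</code> could appear anywhere in the string (not only at the end), a sketch with a regex; note the brackets need escaping:</p> <pre><code>df = df.loc[~df.RegionName1.str.contains(r'\[edit\]')]
</code></pre>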
python|pandas
1
6,054
69,852,627
Compare 2 pandas.DataFrames, get differences and print only rows that changed from the first one
<p>I have 2 dataframes which I am comparing with the snippet below:</p>

<pre><code>df3 = pandas.concat([df1, df2]).drop_duplicates(keep=False)
</code></pre>

<p>It works fine: it compares both, and as an output I get the rows that are different from both of them.</p>

<p>What I would like to achieve is to compare the 2 dataframes to get the rows that differ, but as an output only get/keep the rows from the first DataFrame.</p>

<p>Is there an easy way to do this?</p>
<p>I would use <code>~isin()</code>:</p> <pre><code>df.set_index(list(df.columns), inplace=True) df2.set_index(list(df2.columns), inplace=True) df[~df.index.isin(df2.index)].reset_index() </code></pre>
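<p>An alternative sketch using merge's <code>indicator</code> flag (assuming the frames share all their columns and have no duplicate rows):</p> <pre><code>diff = df.merge(df2, how='left', indicator=True)
diff = diff.loc[diff['_merge'] == 'left_only'].drop(columns='_merge')
</code></pre>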
python|pandas
1
6,055
69,955,550
Keras model with fasttext word embedding
<p>I am trying to learn a language model that predicts the last word of a sentence given all the previous words, using keras. I would like to embed my inputs using a learned fasttext embedding model.</p>

<p>I managed to preprocess my text data and embed it using fasttext. My training data consists of sentences of 40 tokens each. I created 2 np arrays, X and y, as inputs, with y being what I want to predict.</p>

<p>X is of shape (44317, 39, 300), with 44317 the number of example sentences, 39 the number of tokens in each sentence, and 300 the dimension of the word embedding.</p>

<p>y, of shape (44317, 300), is for each example the embedding of the last token of the sentence.</p>

<p>My code for the keras model goes as follows (inspired by <a href="https://machinelearningmastery.com/how-to-develop-a-word-level-neural-language-model-in-keras/?fbclid=IwAR3NyptI5uPN_8uOP69QWKRfpTfaqG-Y1XzUB2ciN0aTr-vQDxUhnfY4Ets" rel="nofollow noreferrer">this</a>)</p>

<pre><code>#importing all the needed tensorflow.keras components
model = Sequential()
model.add(InputLayer((None, 300)))
model.add(LSTM(100, return_sequences=True))
model.add(LSTM(100))
model.add(Dense(100, activation='relu'))
model.add(Dense(300, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, batch_size=128, epochs=20)
model.save('model.h5')
</code></pre>

<p>However, the accuracy I get while training this model is extremely low (around 1.5%). I think there is some component of the keras model that I misunderstood, since if I don't embed my inputs and add an extra embedding layer instead of the InputLayer, I get an accuracy of about 60 percent.</p>

<p>My main doubt is the value of &quot;300&quot; on my second Dense layer, as I read that this should correspond to the vocabulary size of my word embedding model (which is 48000); however, if I put anything other than 300 I get a dimension error. So I understand that I'm doing something wrong, but I can't find how to fix it.</p>

<p>PS: I have also tried <code>y = to_categorical(y, num_classes=vocab_size)</code> with vocab_size the vocabulary size of my word embedding, and changing 300 to this same value in the second Dense; however, it then tries to create an array of shape (13295100, 48120) instead of what I expect: (44317, 48120).</p>
<p>If you really want to use the word vectors from <code>Fasttext</code>, you will have to incorporate them into your model using a weight matrix and <code>Embedding</code> layer. The goal of the embedding layer is to map each integer sequence representing a sentence to its corresponding 300-dimensional vector representation:</p> <pre class="lang-py prettyprint-override"><code>import gensim.downloader as api import numpy as np import tensorflow as tf def load_doc(filename): file = open(filename, 'r') text = file.read() file.close() return text fasttext = api.load(&quot;fasttext-wiki-news-subwords-300&quot;) embedding_dim = 300 in_filename = 'data.txt' doc = load_doc(in_filename) lines = doc.split('\n') tokenizer = tf.keras.preprocessing.text.Tokenizer() tokenizer.fit_on_texts(lines) text_sequences = tokenizer.texts_to_sequences(lines) text_sequences = tf.keras.preprocessing.sequence.pad_sequences(text_sequences, padding='post') vocab_size = len(tokenizer.word_index) + 1 text_sequences = np.array(text_sequences) X, y = text_sequences[:, :-1], text_sequences[:, -1] y = tf.keras.utils.to_categorical(y, num_classes=vocab_size) max_length = X.shape[1] weight_matrix = np.zeros((vocab_size, embedding_dim)) for word, i in tokenizer.word_index.items(): try: embedding_vector = fasttext[word] weight_matrix[i] = embedding_vector except KeyError: weight_matrix[i] = np.random.uniform(-5, 5, embedding_dim) sentence_input = tf.keras.layers.Input(shape=(max_length,)) x = tf.keras.layers.Embedding(vocab_size, embedding_dim, weights=[weight_matrix], input_length=max_length)(sentence_input) x = tf.keras.layers.LSTM(100, return_sequences=True)(x) x = tf.keras.layers.LSTM(100)(x) x = tf.keras.layers.Dense(100, activation='relu')(x) output = tf.keras.layers.Dense(vocab_size, activation='softmax')(x) model = tf.keras.Model(sentence_input, output) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(X, y, batch_size=5, epochs=20) </code></pre> <p>Note that I am using the dataset and preprocessing steps from the <a href="https://machinelearningmastery.com/how-to-develop-a-word-level-neural-language-model-in-keras/?fbclid=IwAR3NyptI5uPN_8uOP69QWKRfpTfaqG-Y1XzUB2ciN0aTr-vQDxUhnfY4Ets" rel="nofollow noreferrer">tutorial</a> you linked.</p>
python|tensorflow|keras|fasttext|language-model
2
6,056
43,077,893
Pandas.to_csv gives error 'ascii' codec can't encode character u'\u2013' in position 8: ordinal not in range(128)
<p>I am trying to save a pandas dataframe to csv and it fails with the error:</p>

<pre><code>df.to_csv(location, sep='|', index=False, header=True)
</code></pre>

<p>'ascii' codec can't encode character u'\u2013' in position 8: ordinal not in range(128)</p>

<p>My pandas version is:</p>

<pre><code>&gt;&gt;&gt; import pandas as pd
&gt;&gt;&gt; pd.__version__
u'0.19.2'
&gt;&gt;&gt;
</code></pre>

<p>On another machine, the same commands work. The version of pandas installed there is 0.18.1:</p>

<pre><code>&gt;&gt;&gt; import pandas as pd
&gt;&gt;&gt; pd.__version__
u'0.18.1'
&gt;&gt;&gt;
</code></pre>

<p>I understand adding encoding='utf-8' will get me through the error. However, I was wondering if there was a recent change that caused the later version of pandas to fail.</p>

<p>Thanks,</p>
<p>From <a href="https://github.com/pandas-dev/pandas/blob/v0.18.1/pandas/core/frame.py" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/blob/v0.18.1/pandas/core/frame.py</a> we find:</p>

<pre><code>formatter = fmt.CSVFormatter(self, path_or_buf,
                             line_terminator = line_terminator,
                             sep = sep,
                             encoding = encoding,
                             compression = compression,
                             quoting = quoting,
                             na_rep = na_rep,
                             float_format = float_format,
                             cols = columns,
                             header = header,
                             index = index,
                             index_label = index_label,
                             mode = mode,
                             chunksize = chunksize,
                             quotechar = quotechar,
                             engine = kwds.get(&quot;engine&quot;),
                             tupleize_cols = tupleize_cols,
                             date_format = date_format,
                             doublequote = doublequote,
                             escapechar = escapechar,
                             decimal = decimal
                             )
</code></pre>

<p>From <a href="https://github.com/pandas-dev/pandas/blob/v0.19.2/pandas/core/frame.py" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/blob/v0.19.2/pandas/core/frame.py</a> we find:</p>

<pre><code>formatter = fmt.CSVFormatter(self, path_or_buf,
                             line_terminator =line_terminator,
                             sep =sep,
                             encoding =encoding,
                             compression =compression,
                             quoting =quoting,
                             na_rep =na_rep,
                             float_format =float_format,
                             cols =columns,
                             header =header,
                             index =index,
                             index_label =index_label,
                             mode =mode,
                             chunksize =chunksize,
                             quotechar =quotechar,
                             tupleize_cols =tupleize_cols,
                             date_format =date_format,
                             doublequote =doublequote,
                             escapechar =escapechar,
                             decimal =decimal)
</code></pre>

<p>The only difference is the &quot;engine&quot; parameter... We should now dig deeper on this &quot;engine&quot; parameter :-( somewhere here: <a href="https://github.com/pandas-dev/pandas/blob/v0.18.1/pandas/formats/format.py" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/blob/v0.18.1/pandas/formats/format.py</a></p>

<p>and here: <a href="https://github.com/pandas-dev/pandas/blob/v0.19.2/pandas/formats/format.py" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/blob/v0.19.2/pandas/formats/format.py</a></p>

<p>Good luck!</p>
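<p>In the meantime, the practical workaround is the one you already mentioned; pass the encoding explicitly:</p> <pre><code>df.to_csv(location, sep='|', index=False, header=True, encoding='utf-8')
</code></pre>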
python|pandas
1
6,057
43,172,357
regarding the ValueError: If `inputs` don't all have same shape and dtype or the shape
<p>There is a program that defines the loss function as follows:</p> <pre><code>def loss(hypes, decoded_logits, labels): """Calculate the loss from the logits and the labels. Args: logits: Logits tensor, float - [batch_size, NUM_CLASSES]. labels: Labels tensor, int32 - [batch_size]. Returns: loss: Loss tensor of type float. """ logits = decoded_logits['logits'] with tf.name_scope('loss'): logits = tf.reshape(logits, (-1, 2)) shape = [logits.get_shape()[0], 2] epsilon = tf.constant(value=hypes['solver']['epsilon']) # logits = logits + epsilon labels = tf.to_float(tf.reshape(labels, (-1, 2))) softmax = tf.nn.softmax(logits) + epsilon if hypes['loss'] == 'xentropy': cross_entropy_mean = _compute_cross_entropy_mean(hypes, labels, softmax) elif hypes['loss'] == 'softF1': cross_entropy_mean = _compute_f1(hypes, labels, softmax, epsilon) elif hypes['loss'] == 'softIU': cross_entropy_mean = _compute_soft_ui(hypes, labels, softmax, epsilon) reg_loss_col = tf.GraphKeys.REGULARIZATION_LOSSES print('******'*10) print('loss type ',hypes['loss']) print('type ', type(tf.get_collection(reg_loss_col))) print( "Regression loss collection: {}".format(tf.get_collection(reg_loss_col))) print('******'*10) weight_loss = tf.add_n(tf.get_collection(reg_loss_col)) total_loss = cross_entropy_mean + weight_loss losses = {} losses['total_loss'] = total_loss losses['xentropy'] = cross_entropy_mean losses['weight_loss'] = weight_loss return losses </code></pre> <p>Running the program raises the following error message:</p> <pre><code>File "/home/ decoder/kitti_multiloss.py", line 86, in loss name='reg_loss') File "/devl /tensorflow/tf_0.12/lib/python3.4/site-packages/tensorflow/python/ops/math_ops.py", line 1827, in add_n raise ValueError("inputs must be a list of at least one Tensor with the " ValueError: inputs must be a list of at least one Tensor with the same dtype and shape </code></pre> <p>I checked the function <code>tf.add_n</code>; its implementation is as follows. My question is: how can I check the first parameter <code>tf.get_collection(reg_loss_col)</code> passed to <code>tf.add_n</code> and print its information to figure out why the error message was generated?</p> <pre><code>def add_n(inputs, name=None): """Adds all input tensors element-wise. Args: inputs: A list of `Tensor` objects, each with same shape and type. name: A name for the operation (optional). Returns: A `Tensor` of same shape and type as the elements of `inputs`. Raises: ValueError: If `inputs` don't all have same shape and dtype or the shape cannot be inferred. """ if not inputs or not isinstance(inputs, (list, tuple)): raise ValueError("inputs must be a list of at least one Tensor with the " "same dtype and shape") inputs = ops.convert_n_to_tensor_or_indexed_slices(inputs) if not all(isinstance(x, ops.Tensor) for x in inputs): raise ValueError("inputs must be a list of at least one Tensor with the " "same dtype and shape") </code></pre>
<p>As a workaround, you can replace this line with:</p> <pre><code>temp = tf.get_collection('losses') if temp == []: temp = [0] weight_loss = tf.add_n(temp, name='total_loss') </code></pre> <p>Adding a zero value won't affect the final result, but it will let the code run... what do you think?</p>
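<p>To answer the inspection part of the question, you can also just examine the collection before calling <code>tf.add_n</code>; a rough sketch (variable names are illustrative):</p> <pre><code>reg_loss_col = tf.GraphKeys.REGULARIZATION_LOSSES
reg_losses = tf.get_collection(reg_loss_col)
print(type(reg_losses), len(reg_losses))  # an empty list here explains the ValueError
for t in reg_losses:
    print(t.name, t.dtype, t.get_shape())
</code></pre> <p>If the list prints as empty, no regularizer ever added a tensor to that collection, which is exactly the case <code>tf.add_n</code> rejects.</p>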
tensorflow
0
6,058
43,457,890
Multiprocessing with GPU in keras
<p>I need to compute multiple deep models in parallel and average their results. My job runs forever after finishing computation with <code>GPU 0</code>.</p> <pre><code>def model_train(self, params): from nn_arch import nn_models X, y, gpu_no = params print("GPU NO ", gpu_no) with tf.device('/gpu:' + str(gpu_no)): model1 = nn_models.lenet5() early_callback = CustomCallback() model1.fit(X, y, batch_size=256, validation_split=0.2, callbacks=[early_callback], verbose=1, epochs=1) return model1 </code></pre> <p>And my main method is below. <code>In this case I have 2 GPUs</code></p> <pre><code>def main(self, X_train, y_train, X_test, y_test): random_buckets = self.get_random() X = [X_train[random_buckets[k]] for k in sorted(random_buckets)] y = [y_train[random_buckets[j]] for j in sorted(random_buckets)] params = zip(X, y, [0, 1]) models = pool1.map(self.model_train, params) </code></pre> <p>How do I train multiple models in parallel with Keras? (Data Parallel Approach)</p>
<p>Before compiling the model in Keras, add this line:</p> <p>model = make_parallel(model, 2)</p> <p>where 2 is the number of GPUs available.</p> <p>The make_parallel function is available in this file. Just import the file in your code and your code will be executed on multiple GPUs.</p> <p><a href="https://github.com/kuza55/keras-extras/blob/master/utils/multi_gpu.py" rel="nofollow noreferrer">https://github.com/kuza55/keras-extras/blob/master/utils/multi_gpu.py</a></p> <p>make_parallel is a simple function that:</p> <ul> <li>instantiates a copy of your model on the N GPUs you tell it to</li> <li>splits your batch into N evenly sized smaller batches</li> <li>passes each smaller batch into the corresponding model</li> <li>concatenates the outputs of the models</li> </ul>
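<p>For completeness, a minimal usage sketch (assuming the linked file has been saved locally as <code>multi_gpu.py</code>; the model builder name is illustrative):</p> <pre><code>from multi_gpu import make_parallel

model = build_lenet5()           # hypothetical function returning your single-GPU model
model = make_parallel(model, 2)  # 2 = number of available GPUs
model.compile(loss='categorical_crossentropy', optimizer='adam')
</code></pre>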
tensorflow|keras
4
6,059
43,334,937
How to build tensorflow native for android using clang toolchain?
<p>Based on this <a href="https://github.com/bazelbuild/bazel/issues/817" rel="nofollow noreferrer">bazel using clang</a> issue, one needs to add a command-line option to set the Android compiler. How does this translate to the <code>*.bzl</code> files and crosstool files in tensorflow?</p>
<p>Bazel 0.4.5 and later support Android NDK clang via the <code>--android_compiler=clang3.8</code> build flag. Note that in NDK13, clang is the default (previous was gcc) so this flag is only needed for NDK12 and prior. No bzl or crosstool files necessary (although android_ndk_repository is actually generating a crosstool file under the hood).</p>
android|tensorflow|bazel
0
6,060
50,321,673
Slicing a Data frame by checking consecutive elements
<p>I have a DF indexed by time, and one of its columns (with 2 variables) is like [x,x,y,y,x,x,x,y,y,y,y,x]. I want to slice this DF so I'll get this column without identical consecutive variables - in this example: [x,y,x,y,x] - keeping, for each value, the first row of its subsequence.</p> <p>Still trying to figure it out...</p> <p>Thanks!!</p>
<p>Assuming you have a df like below: </p> <pre><code>df=pd.DataFrame(['x','x','y','y','x','x','x','y','y','y','y','x']) </code></pre> <p>We use <code>shift</code> to check whether each value is equal to the previous one or not: </p> <pre><code>df[df[0].shift()!=df[0]] Out[142]: 0 0 x 2 y 4 x 7 y 11 x </code></pre>
python|pandas|dataframe
2
6,061
50,397,321
tensorflow Invalid symbol
<p>I am using seq2seq to train a language model on some English words not found in the dictionary. But when I train the model on a phonetic dictionary I get these warnings, and then the model won't recognize the words after training because it can't recognize these letters.</p> <pre><code>WARNING:tensorflow:Invalid symbol:A WARNING:tensorflow:Invalid symbol:O WARNING:tensorflow:Invalid symbol:O WARNING:tensorflow:Invalid symbol:C WARNING:tensorflow:Invalid symbol:I WARNING:tensorflow:Invalid symbol:A WARNING:tensorflow:Invalid symbol:O WARNING:tensorflow:Invalid symbol:O WARNING:tensorflow:Invalid symbol:C WARNING:tensorflow:Invalid symbol:C WARNING:tensorflow:Invalid symbol:I WARNING:tensorflow:Invalid symbol:E WARNING:tensorflow:Invalid symbol:C WARNING:tensorflow:Invalid symbol:O WARNING:tensorflow:Invalid symbol:I WARNING:tensorflow:Invalid symbol:I WARNING:tensorflow:Invalid symbol:E WARNING:tensorflow:Invalid symbol:E WARNING:tensorflow:Invalid symbol:E WARNING:tensorflow:Invalid symbol:O WARNING:tensorflow:Invalid symbol:O </code></pre> <p>Any help would be appreciated. Thanks in advance.</p>
<p>So I am gonna go ahead and answer my own question in case someone needs help. The problem was that I needed to add the missing letters to vocab.g2p. Edit: added CAPITAL LETTERS to vocab.g2p.</p>
python|tensorflow|speech-recognition|cmusphinx
0
6,062
50,442,156
Loading a model from tensorflow SavedModel onto mutliple GPUs
<p>Let's say someone hands me a TF SavedModel and I would like to replicate this model on the 4 GPUs I have on my machine so I can run inference in parallel on batches of data. Are there any good examples of how to do this? </p> <p>I can load a saved model in this way:</p> <pre><code>def load_model(self, saved_model_dirpath): '''Loads a model from a saved model directory - this should contain a .pb file and a variables directory''' signature_key = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY input_key = 'input' output_key = 'output' meta_graph_def = tf.saved_model.loader.load(self.sess, [tf.saved_model.tag_constants.SERVING], saved_model_dirpath) signature = meta_graph_def.signature_def input_tensor_name = signature[signature_key].inputs[input_key].name output_tensor_name = signature[signature_key].outputs[output_key].name self.input_tensor = self.sess.graph.get_tensor_by_name(input_tensor_name) self.output_tensor = self.sess.graph.get_tensor_by_name(output_tensor_name) </code></pre> <p>...but this would require that I have a handle to the session. For models that I have written myself, I would have access to the inference function and I could just call it and wrap it using <code>with tf.device()</code>, but in this case, I'm not sure how to extract the inference function out of a Saved Model. Should I load 4 separate sessions or is there a better way? I couldn't find much documentation on this, but apologies in advance if I missed something. Thanks!</p>
<p>There is no support for this use case in TensorFlow at the moment. Unfortunately, "replicating the inference function" based only on the SavedModel (which is basically the computation graph with some metadata), is a fairly complex (and brittle, if implemented) graph transformation problem.</p> <p>If you don't have access to the source code that produced this model, your best bet is to load the SavedModel 4 times into 4 separate graphs, rewriting the target device to the corresponding GPU each time. Then, run each graph/session separately.</p> <p>Note that you can invoke <code>sess.run()</code> multiple times concurrently since <code>sess.run()</code> releases the GIL for the time of actual computation. All you need is several Python threads. </p>
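<p>A rough sketch of that "load it N times" approach, with one graph and session per GPU (the <code>clear_devices</code> forwarding to <code>import_meta_graph</code> is an assumption worth verifying on your TF version, and the path is illustrative):</p> <pre><code>import tensorflow as tf

def load_on_gpu(gpu_id, saved_model_dirpath):
    graph = tf.Graph()
    sess = tf.Session(graph=graph)
    with graph.as_default(), tf.device('/gpu:{}'.format(gpu_id)):
        # clear_devices=True drops the serialized device strings so the
        # surrounding tf.device context decides placement
        tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING],
                                   saved_model_dirpath, clear_devices=True)
    return sess

sessions = [load_on_gpu(i, 'my_saved_model_dir') for i in range(4)]
</code></pre> <p>Each Python thread can then call <code>sess.run()</code> on its own session, since, as noted above, the GIL is released during the actual computation.</p>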
tensorflow|multi-gpu
1
6,063
45,552,856
prediction of MNIST hand-written digit classifier
<p>I am new to Deep Learning and am using Keras to learn it. I followed instructions at this <a href="http://machinelearningmastery.com/handwritten-digit-recognition-using-convolutional-neural-networks-python-keras/" rel="nofollow noreferrer">link</a> to build a handwritten digit recognition classifier using the MNIST dataset. It worked fine in terms of seeing comparable evaluation results. I used tensorflow as the backend of Keras.</p> <p>Now I want to read an image file with a handwritten digit and predict its digit using the same model. I think the image needs to be transformed into 28x28 dimensions with 0-255 pixel depth first? I am not sure whether my understanding is correct to begin with. If so, how can I do this transformation in Python? If my understanding is incorrect, what kind of transformation is required?</p> <p>Thank you in advance! </p>
<p>To my knowledge, you will need to turn this into a 28x28 grayscale image in order to work with this in Python. That's the same shape and scheme as the images that were used to train MNIST, and the tensors are all expecting 784 (28 * 28)-sized items, each with a value between 0-255 in their tensors as input.</p> <p>To resize an image you could use PIL or Pillow. See <a href="https://stackoverflow.com/questions/273946/how-do-i-resize-an-image-using-pil-and-maintain-its-aspect-ratio">this SO post</a> or <a href="http://pillow.readthedocs.io/en/latest/handbook/tutorial.html#create-jpeg-thumbnails" rel="nofollow noreferrer">this page in the Pillow docs</a> (linked to by Wtower in the previously mentioned post, copied here for ease of access) on resizing and keeping the aspect ratio, if that's what you want to do.</p> <p>HTH!</p> <p>Cheers,</p> <p>-Maashu</p>
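<p>A minimal sketch of that transformation with Pillow (the file name is illustrative; reshape to whatever input shape your model expects, e.g. a flat 784 vector for a dense network):</p> <pre><code>import numpy as np
from PIL import Image

img = Image.open('digit.png').convert('L')       # 'L' = 8-bit grayscale
img = img.resize((28, 28))
arr = np.asarray(img, dtype=np.float32) / 255.0  # scale 0-255 down to 0-1
arr = arr.reshape(1, 784)                        # one flattened 28*28 example
</code></pre> <p>Depending on how the model was trained (white digit on black background or the reverse), you may also need to invert the pixel values with <code>1.0 - arr</code>.</p>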
python|tensorflow|keras|mnist|handwriting-recognition
1
6,064
62,667,466
make columns rows python
<p>Hello, I have the following code:</p> <pre><code>for j in range(8): b=fran[fran.Año.isin([2020]) &amp; fran.Channel.isin(['CANAL 5'])&amp;fran.Week.isin([j])] c=b[['hour','number']] print(c) </code></pre> <p>I get the output:</p> <pre><code> |hour| number 1|12-1|3.1 2|1-3 |2.3 3|3-7 |4.6 |hour| number 4|7-11|2 1|12-1|1.2 2|1-3 |3 3|3-7 |1.1 4|7-11|5.6 ... |hour| number 1|12-1|1 2|1-3 |1.2 3|3-7 |5.4 4|7-11|2.2 </code></pre> <p>I would like help to get the following output:</p> <pre><code> | hour | number1| number2|...|numbern| 1|12-1 |3.1 | 1.2 |...| 1 2|1-3 |2.3 | 3 |...| 1.2 3|3-7 |4.6 | 1.1 |...| 5.4 4|7-11 |2 | 5.6 |...| 2.2 </code></pre>
<p>Change your code to</p> <pre><code>l=[] for j in range(8): b=fran[fran.Año.isin([2020]) &amp; fran.Channel.isin(['CANAL 5'])&amp;fran.Week.isin([j])] l.append(b[['hour','number']].set_index('hour').rename(columns={'number' : 'number' + str(j)})) </code></pre> <p>Then do <code>concat</code></p> <pre><code>df=pd.concat(l, axis=1).reset_index() </code></pre>
python|pandas
1
6,065
62,844,757
Linspace on a matrix
<p>I'd like to do something like linspace, but where I specify the corners of the matrix.</p> <p>For example:</p> <pre><code>[[-60 -2] [140 6]] </code></pre> <p>I'd like to fill out a larger matrix:</p> <pre><code>[[-60 -31 -2] [40 21 4] [140 73 6]] </code></pre>
<p>I figured out a solution:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.interpolate import griddata def interpolate(corners, n): grid_x, grid_y = np.mgrid[0:n:1, 0:n:1] points = [[0, 0], [0, n-1], [n-1, 0], [n-1, n-1]] return griddata(points, corners, (grid_x, grid_y), method='linear') corners = [-60, -2, 140, 6] interpolate(corners, 3) </code></pre> <p>And the result isn't exactly what I expected for the middle point, but it makes sense since it's interpolating diagonally:</p> <pre><code>array([[-60., -31., -2.], [ 40., -27., 2.], [140., 73., 6.]]) </code></pre>
python|numpy
1
6,066
62,719,982
multiply pandas column with a number in python
<p>I am trying to multiply the price column with integers but it is not happening.</p> <pre><code>for index,row in df.iterrows(): a=row['price'] row['price'] = a[1:] b = row['price'].split(' ')[1] </code></pre> <p>So I want to multiply by 100000 where price has 'L' in it and by 10000000 where price has 'Cr' in it. For example, the first cell has 50.0 L, so the output should be 5000000.0. I checked the dtype and the output was dtype('O').</p> <pre><code> price area type price per sq feet Address 0 50.0 L 650 1 7.69 Mankhurd 1 1.15 Cr 650 1 17.69 Chembur 2 95.0 L 642 1 14.80 Bhandup West 3 1.6 Cr 650 2 24.61 Goregaon East 5 88.0 L 570 1 15.44 Borivali East </code></pre> <p>I would appreciate the help. Thank You</p>
<p>IIUC, you can try with <code>series.str.extract</code> with <code>series.map</code> and multiplication:</p> <pre><code>d = {&quot;L&quot;:100000,&quot;Cr&quot;:10000000} pat = '|'.join(d.keys()) mapped = df['price'].str.extract('('+pat+')',expand=False).map(d) df['price'] = pd.to_numeric(df['price'].str.replace(pat,''),errors='coerce') * mapped </code></pre> <hr /> <pre><code>print(df) price area type price per sq feet Address 0 5000000.0 650 1 7.69 Mankhurd 1 11500000.0 650 1 17.69 Chembur 2 9500000.0 642 1 14.80 Bhandup West 3 16000000.0 650 2 24.61 Goregaon East 4 8800000.0 570 1 15.44 Borivali East </code></pre>
python|pandas|string|numpy|dataframe
2
6,067
54,629,993
Python - Replace NA's on Joins not working
<p>I am trying to fill the NA values with some default text values.</p> <p>Here is my df1</p> <pre><code>data = [['Alex','10'],['Bob','12'],['Clarke','13']] df1 = pd.DataFrame(data,columns=['Id','Age']) </code></pre> <p>Here is my df2</p> <pre><code>data = [['Alex','10'],['Clarke','13']] df2 = pd.DataFrame(data,columns=['Id','Age']) </code></pre> <p>Here is my df3</p> <pre><code>data = [['Alex','10']] df3 = pd.DataFrame(data,columns=['Id','Age']) </code></pre> <p>Here is my output as per this code</p> <pre><code>df4 = (pd.concat([df2.set_index('Id'), df3.set_index('Id')], axis=1).reindex(df1.Id, fill_value='IDNP').reset_index()) </code></pre> <p>All the Id's in df1 need to be present in df4. </p> <p>If an Id is not present in df2 or df3 then it gets replaced by 'IDNP'.</p> <p>This is my output as per my code,</p> <pre><code> Id Age Age 0 Alex 10 10 1 Bob IDNP IDNP 2 Clarke 13 NaN </code></pre> <p>What I want, </p> <pre><code> Id Age Age 0 Alex 10 10 1 Bob IDNP IDNP 2 Clarke 13 IDNP </code></pre> <p>Where am I going wrong in my code? </p>
<p>If you need to replace all missing values after <code>concat</code> on a list of <code>DataFrame</code>s, creating the index from <code>Id</code>, use:</p> <pre><code>dfs = [df1, df2, df3] df4 = pd.concat([x.set_index('Id') for x in dfs], axis=1).fillna('IDNP') print (df4) Age Age Age Alex 10 10 10 Bob 12 IDNP IDNP Clarke 13 13 IDNP </code></pre> <p>Your solution creates the missing value, because this is what pd.concat returns:</p> <pre><code>print ((pd.concat([df2.set_index('Id'), df3.set_index('Id')], axis=1))) Age Age Alex 10 10 Clarke 13 NaN </code></pre> <p>So it is not replaced by the <code>fill_value</code> parameter.</p> <p>A possible solution is to call <code>fillna</code>:</p> <pre><code>df4 = (pd.concat([df2.set_index('Id'), df3.set_index('Id')], axis=1) .fillna('IDNP') .reindex(df1.Id, fill_value='IDNP') .reset_index()) </code></pre>
python|pandas
1
6,068
54,255,345
Pandas highlighting excel columns with conditions using a function
<p>I have a pandas data-frame (Pre_Final_DataFrame) that I am writing to excel. </p> <p>I need to highlight a row in Excel if that corresponding row has a "No Match" value in any of the columns that start with 'Result_'.</p> <p>So, I decided to go for an array to understand which rows needed to be highlighted.</p> <p>But now, I would prefer a way to highlight using a function as it is too slow. Kindly help me with this.</p> <p>In simple words, I am writing a dataframe to excel using Pandas; it has a million records, and I want a row to be highlighted in "Yellow" only when there is a No Match value present in any one of the columns whose name starts with "Result_".</p> <p>The expected result in excel looks like below,</p> <p>Input codes to start with a dataframe:-</p> <pre><code>import pandas as pd data = { 'ColA':[1, 1], 'ColB':[1, 1], 'Result_1':['Match', 'Match'], 'ColA1':[1, 2], 'ColB1':[1, 1], 'Result_2':['No Match', 'Match'], } Pre_Final_DataFrame = pd.DataFrame(data) ResultColumns_df = Pre_Final_DataFrame.filter(like='Result_') ResultColumns_df_false =ResultColumns_df[ResultColumns_df.values == "No Match"] RequiredRows_Highlight = ResultColumns_df_false.index.tolist() writer = pd.ExcelWriter(OutputName,date_format='%YYYY-%mm-%dd',datetime_format='%YYYY-%mm-%dd') Pre_Final_DataFrame.to_excel(writer,'Sheet1',index = False) writer.save() </code></pre> <p>Output Expected:</p> <p><a href="https://i.stack.imgur.com/Vw1iy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vw1iy.png" alt="enter image description here"></a></p>
<p>We can use the <code>StyleFrame</code> package to write it to an excel sheet.</p> <pre><code>import pandas as pd from StyleFrame import StyleFrame, Styler df = pd.read_excel("Your Excel Sheet") sf = StyleFrame(df) style = Styler(bg_color='yellow') for col in df: sf.apply_style_by_indexes(sf[sf[col]== 'No Match'],styler_obj=style) sf.to_excel('test.xlsx').save() </code></pre> <p>This has helped me in getting an output excel sheet with all the rows highlighted that contain at least one column with the value <code>No Match</code>.</p> <p>Hope this helps. Cheers</p>
python|pandas
6
6,069
54,639,512
Trouble setting x_tick value with python scatter plot
<p>I have a pandas dataframe <code>avg_temp</code> with 2 columns. I want to do a scatter plot of these two columns, with the x_ticks being the index values.</p> <pre><code> High_Avg Low_Avg 2014 28.516129032258064 9.419354838709678 2015 32.193548387096776 16.516129032258064 2016 35.32258064516129 18.548387096774192 2017 39.29032258064516 24.483870967741936 2018 31.548387096774192 13.903225806451612 </code></pre> <p>I use the following code:</p> <pre><code>avg_plot=avg_temp.plot(style=['o','rx']) avg_plot.set_xticklabels(avg_temp.index) </code></pre> <p>The resultant graph looks like this:</p> <p><a href="https://i.stack.imgur.com/qAMMv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qAMMv.png" alt="enter image description here"></a></p> <p>However, I want the index values to exactly align with the scatter plot values. How do I do it?</p>
<p>Try:</p> <pre><code>import matplotlib.pyplot as plt df.plot(style=['o','rx']) _ = plt.xticks(df.index) </code></pre> <p>OR</p> <pre><code>ax = df.plot(style=['o','rx']) _ = ax.set_xticks(df.index) </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/NvWmn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NvWmn.png" alt="enter image description here"></a></p>
python|pandas|matplotlib
0
6,070
73,719,911
Create chronology column in pandas DataFrame
<p>I have a dataframe characterized by two essential columns: <em>name</em> and <em>timestamp</em>.</p> <pre><code> df = pd.DataFrame({'name':['tom','tom','tom','bert','bert','sam'], \ 'timestamp':[15,13,14,23,22,14]}) </code></pre> <p>I would like to create a third column <em>chronology</em> that checks the <em>timestamp</em> for each <em>name</em> and gives me the chronological order per <em>name</em> such that the final product looks like this:</p> <pre><code>df_final = pd.DataFrame({'name':['tom','tom','tom','bert','bert','sam'], \ 'timestamp':[15,13,14,23,22,14], \ 'chronology':[3,2,1,2,1,1]}) </code></pre> <p>I understand that I can go <code>df = df.sort_values(['name', 'timestamp'])</code> but how do I create the <em>chronology</em> column?</p>
<p>The function <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.GroupBy.rank.html" rel="nofollow noreferrer">GroupBy.rank()</a>, does exactly what you need. From the documentation:</p> <blockquote> <p><em>GroupBy.rank(method='average', ascending=True, na_option='keep', pct=False, axis=0)</em> <br><strong>Provide the rank of values within each group.</strong></p> </blockquote> <p>Try this code:</p> <pre><code>df['chronology'] = df.groupby(by=['name']).timestamp.rank().astype(int) </code></pre> <p>Result:</p> <pre><code> name timestamp chronology tom 15 3 tom 13 1 tom 14 2 bert 23 2 bert 22 1 sam 14 1 </code></pre>
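<p>One caveat worth adding: with the default <code>method='average'</code>, tied timestamps produce fractional ranks (e.g. 1.5), which the <code>astype(int)</code> cast would silently truncate into duplicate chronology values. If ties are possible in your data, <code>method='first'</code> guarantees unique integers:</p> <pre><code>df['chronology'] = df.groupby('name')['timestamp'].rank(method='first').astype(int)
</code></pre>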
python|pandas|dataframe
1
6,071
73,783,623
How to make multiple columns in one column in pandas for the data appended from a list
<p>I am scraping data from Yahoo Finance and all the data scraping is working fine. But when I want to store the appended list into an indexable dataframe, it returns a blank dataframe; however, when I store the data in a non-indexable dataframe, it stores the data.</p> <p>When I print temp I can see the data, and even if I convert temp into a dataframe it gets converted successfully. But when I run <code>financial_dir[ticker]=temp.append(soup.find('div', {'class' : &quot;D(tbrg)&quot;}).find_all('div')[i].get_text(separator='|').split('|'))</code> it does not create an indexable dataframe; it returns an empty dataframe.</p> <p>I want to create <a href="https://i.stack.imgur.com/iGtTu.png" rel="nofollow noreferrer">financial_dir like this</a>, which is callable for different stocks. For example, when I run financial_dir['INDUSINDBK.NS'] it should give the dataframe for INDUSINDBK.NS like the image. Any help will be extremely appreciated.</p> <pre><code>import requests from bs4 import BeautifulSoup import pandas as pd tickers = ['KOTAKBANK.NS','WIPRO.NS','HINDALCO.NS','RELIANCE.NS', 'INDUSINDBK.NS','HDFCLIFE.NS','TATACONSUM.NS','TITAN.NS', 'ULTRACEMCO.NS'] financial_dir = pd.DataFrame() temp = [] for ticker in tickers: url = 'https://finance.yahoo.com/quote/'+ticker+'/financials?p='+ticker page = requests.get(url, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'}) #page_content = page.content soup = BeautifulSoup(page.text, 'html.parser') a = list(range(0,2000,1)) #while IndexError(True): try: for i in a: financial_dir[ticker]=temp.append(soup.find('div', {'class' : &quot;D(tbrg)&quot;}).find_all('div')[i].get_text(separator='|').split('|')) except: pass temp data5 = pd.DataFrame(temp) financial_dir </code></pre>
<p>try this:</p> <ol> <li>create a function to return one dataframe per ticker:</li> </ol> <pre><code>def f(ticker): url = 'https://finance.yahoo.com/quote/'+ticker+'/financials?p='+ticker page = requests.get(url, headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'}) soup = BeautifulSoup(page.text, 'html.parser') ticker_header = [i.text for i in soup.find('div', {'class' : &quot;D(tbhg)&quot;}).find('div', {'class' : 'D(tbr)'}).find_all('div', {'class': 'D(ib)'})] values = [i.text for i in soup.find('div', {'class' : &quot;D(tbrg)&quot;}).find_all('div', {'class': 'Ta(c)'})] ticker_index = [i.text for i in soup.find('div', {'class' : &quot;D(tbrg)&quot;}).find_all('div', {'class': 'D(ib)'})] chunk_size = 5 list_chunked = [values[i:i + chunk_size] for i in range(0, len(values), chunk_size)] df = pd.DataFrame(list_chunked, columns=ticker_header[1:]) df_index = pd.Index(ticker_index) df = df.set_index(df_index) df['ticker'] = ticker df = df.reset_index() return df f('TATACONSUM.NS') #return dataframe index ttm 3/31/2022 3/31/2021 3/31/2020 3/31/2019 ticker 0 Total Revenue 126,653,800 123,470,100 115,832,200 95,966,000 72,093,500 TATACONSUM.NS 1 Cost of Revenue 74,531,800 73,265,100 70,742,800 55,775,900 41,540,400 TATACONSUM.NS 2 Gross Profit 52,122,000 50,205,000 45,089,400 40,190,100 30,553,100 TATACONSUM.NS 3 Operating Expense 37,051,000 35,650,800 32,199,200 29,685,700 24,003,600 TATACONSUM.NS #... f('HINDALCO.NS') #return dataframe index ttm 3/31/2022 3/31/2021 3/31/2020 3/31/2019 ticker 0 Total Revenue 2,104,160,000 1,937,560,000 1,310,090,000 1,171,400,000 1,297,455,700 HINDALCO.NS 1 Cost of Revenue 1,531,870,000 1,398,820,000 953,430,000 859,720,000 958,279,000 HINDALCO.NS 2 Gross Profit 572,290,000 538,740,000 356,660,000 311,680,000 339,176,700 HINDALCO.NS 3 Operating Expense 312,010,000 298,540,000 240,410,000 215,740,000 230,666,900 HINDALCO.NS #... </code></pre> <ol start="2"> <li>then you can save each ticker in a separate csv file and work with each one separately:</li> </ol> <pre><code>tickers = ['KOTAKBANK.NS','WIPRO.NS','HINDALCO.NS','RELIANCE.NS', 'INDUSINDBK.NS','HDFCLIFE.NS','TATACONSUM.NS','TITAN.NS', 'ULTRACEMCO.NS'] for ticker in tickers: f(ticker).to_csv(f'{ticker}.csv', index=False) </code></pre> <ol start="3"> <li>or you can put them in one dataframe:</li> </ol> <pre><code>tickers = ['KOTAKBANK.NS','WIPRO.NS','HINDALCO.NS','RELIANCE.NS', 'INDUSINDBK.NS','HDFCLIFE.NS','TATACONSUM.NS','TITAN.NS', 'ULTRACEMCO.NS'] all_dataframes = [] for ticker in tickers: print(ticker) all_dataframes.append(f(ticker)) df_all = pd.concat(all_dataframes) </code></pre> <ol start="4"> <li>and you can also pivot the dataframe you got:</li> </ol> <pre><code>df_all.pivot(index='ticker', columns='index', values=[ 'ttm', '3/31/2022', '3/31/2021', '3/31/2020', '3/31/2019',]) </code></pre>
python|pandas|dataframe
0
6,072
73,533,196
Replace A Column Value by Most Frequent Value In Group
<p>I have the following dataframe:</p> <pre><code>df = pd.DataFrame({'student': list('AAABBBCCCC'), 'city': ['LA', 'LA', 'NY', 'DC', 'NY', 'NY', 'SF', 'SF', 'LA', 'SF'], 'score': [75, 27, 31, 22, 43, 20, 26, 40, 33, 20]}) print(df) student city score 0 A LA 75 1 A LA 27 2 A NY 31 3 B DC 22 4 B NY 43 5 B NY 20 6 C SF 26 7 C SF 40 8 C LA 33 9 C SF 20 </code></pre> <p>I have to replace the <code>city</code> column based on which city appears most for that student. For example, here is the <code>value_counts()</code> of the city for each student:</p> <pre><code>df.groupby('student').city.value_counts() student city A LA 2 NY 1 B NY 2 DC 1 C SF 3 LA 1 </code></pre> <p>We can see, for the student <code>A</code>, <code>LA</code> appears most of the time. Hence, we want to replace other cities (here <code>NY</code>) with <code>LA</code>.</p> <p><strong>Desired output</strong>:</p> <pre><code> student city score 0 A LA 75 1 A LA 27 2 A LA 31 3 B NY 22 4 B NY 43 5 B NY 20 6 C SF 26 7 C SF 40 8 C SF 33 9 C SF 20 </code></pre> <p>What would be the ideal way of getting the desired output? Any suggestions would be appreciated. Thanks!</p>
<p>You can get the most frequent value with <code>mode</code>, which is a bit faster than <code>value_counts</code>. Then you can use <code>groupby().transform()</code> to broadcast the values to all the rows:</p> <pre><code># lambda x: x.value_counts().index[0] would also work df['city'] = df.groupby('student')['city'].transform(lambda x: x.mode()[0]) </code></pre> <p>Output:</p> <pre><code> student city score 0 A LA 75 1 A LA 27 2 A LA 31 3 B NY 22 4 B NY 43 5 B NY 20 6 C SF 26 7 C SF 40 8 C SF 33 9 C SF 20 </code></pre> <p><strong>Note</strong> <code>mode()</code> doesn't count <code>NaN</code> values, so if a student is missing <code>city</code> on all rows, <code>mode()</code> would throw an error.</p>
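<p>If some group could be all-<code>NaN</code> for <code>city</code> (so <code>mode()</code> comes back empty and indexing it raises), a guarded variant of the lambda is one way around it; a small sketch:</p> <pre><code>import numpy as np

df['city'] = df.groupby('student')['city'].transform(
    lambda x: x.mode().iat[0] if not x.mode().empty else np.nan
)
</code></pre>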
python|pandas|dataframe|group-by
1
6,073
71,171,188
Pandas sum column data based on summation points
<p>Based on a given dataframe, e.g.:</p> <pre><code> x y 0 1 2 1 2 4 2 3 6 3 4 8 4 5 10 5 6 12 6 7 14 </code></pre> <p>I would like to calculate the sum of the in-between values based on given summation points, e.g.:</p> <pre><code> x 0 1 1 3 2 6 3 7 </code></pre> <p>So that the final result will look like below:</p> <pre><code> x y 0 1 2 1 3 10 #sum of y values for x between (2-3) 2 6 30 #sum of y values for x between (4-6) 3 7 14 </code></pre> <p>Can you please help me with an example/idea on how I should approach this? Thank you in advance!!</p>
<p>IIUC, create a custom group and <a href="https://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.core.groupby.DataFrameGroupBy.agg.html" rel="nofollow noreferrer"><code>groupby</code>+<code>agg</code></a>.</p> <p>Note that I used a simple list for the x points, if you have a dataframe <code>df_ref</code>, use <code>x = df_ref['x'].to_list()</code></p> <pre><code>x = [1,3,6,7] df2 = (df.groupby(df['x'].isin(x).shift(fill_value=0).cumsum(), as_index=False) .agg({'x':'last', 'y': 'sum'}) ) </code></pre> <p>output:</p> <pre><code> x y 0 1 2 1 3 10 2 6 30 3 7 14 </code></pre> <p>custom group:</p> <pre><code>&gt;&gt;&gt; df['x'].isin(x).shift(fill_value=0).cumsum() 0 0 1 1 2 1 3 2 4 2 5 2 6 3 </code></pre>
python|pandas
1
6,074
71,230,991
Formatting output of series in python pandas
<p>Here is my DataFrame. This is a representation of an 8-hour day, and the many different combinations of schedules. The time is in 24hr time. Input:</p> <pre><code>solutions = problem.getSolutions() pd.options.display.max_columns = None df = pd.DataFrame(solutions) </code></pre> <p>Output:</p> <pre><code> WorkHr1 WorkHr2 WorkHr3 WorkHr4 WorkOut Lunch FreeHour Cleaning 0 13 14 15 16 11 10 9 12 1 13 14 15 16 11 10 12 9 2 13 14 15 16 11 12 10 9 3 13 14 15 16 11 12 9 10 4 13 14 15 16 12 11 10 9 .. ... ... ... ... ... ... ... ... </code></pre> <p>I can create a series using:</p> <pre><code>series1 = pd.Series(solutions[0]) print(series1) </code></pre> <p>And I get this output:</p> <pre><code>WorkHr1 13 WorkHr2 14 WorkHr3 15 WorkHr4 16 WorkOut 11 Lunch 10 FreeHour 9 Cleaning 12 </code></pre> <p>How can I switch the columns of this series so that the time is first?</p> <p>Also, is there any possible way to display the rows in order of time? Like this:</p> <pre><code> 9 FreeHour 10 Lunch 11 WorkOut 12 Cleaning 13 WorkHr1 14 WorkHr2 15 WorkHr3 16 WorkHr4 </code></pre>
<p>You can reverse it by passing its index as data and data as index to a Series constructor:</p> <pre><code>out = pd.Series(s.index, index=s).sort_index() </code></pre> <p>Output:</p> <pre><code>9 FreeHour 10 Lunch 11 WorkOut 12 Cleaning 13 WorkHr1 14 WorkHr2 15 WorkHr3 16 WorkHr4 dtype: object </code></pre>
python|pandas
2
6,075
52,214,217
Remove all columns matching a value in Numpy
<p>Let's suppose I have a matrix with a number of binary values:</p> <pre><code>matrix([[1., 1., 1., 0., 0.], [0., 0., 1., 1., 1.], [0., 0., 0., 1., 0.], [0., 0., 0., 0., 1.]]) </code></pre> <p>Using <strong>np.sum(M, 0)</strong> produces:</p> <pre><code>matrix([[1., 1., 2., 2., 2.]]) </code></pre> <p>How do I remove all of the columns from the matrix that have only the value of <strong>1</strong>?</p>
<p>It's easier to work with an array here:</p> <pre><code>M = M.A </code></pre> <p>Now using simple slicing:</p> <pre><code>M[:, np.sum(M, 0)!=1] </code></pre> <p>Output:</p> <pre><code>array([[1., 0., 0.], [1., 1., 1.], [0., 1., 0.], [0., 0., 1.]]) </code></pre>
python|numpy|matrix
2
6,076
52,201,644
Regressor Neural Network built with Keras only ever predicts one value
<p>I'm trying to build a NN with Keras and Tensorflow to predict the final chart position of a song, given a set of 5 features. </p> <p>After playing around with it for a few days I realised that although my MAE was getting lower, this was because the model had just learned to predict the mean value of my training set for all input, and this was the optimal solution. (This is illustrated in the scatter plot below)</p> <p><img src="https://i.imgur.com/1ive42r.jpg" alt="scatter plot"></p> <blockquote> <p>This is a random sample of 50 data points from my testing set vs what the network thinks they should be </p> </blockquote> <p>At first I realised this was probably because my network was too complicated. I had one input layer with shape <code>(5,)</code> and a single node in the output layer, but then 3 hidden layers with over 32 nodes each. </p> <p>I then stripped back the excess layers and moved to just a single hidden layer with a couple of nodes, as shown here: </p> <pre><code>self.model = keras.Sequential([ keras.layers.Dense(4, activation='relu', input_dim=num_features, kernel_initializer='random_uniform', bias_initializer='random_uniform' ), keras.layers.Dense(1) ]) </code></pre> <p>Training this with a gradient descent optimiser still results in exactly the same prediction being made the whole time.</p> <p>Then it occurred to me that perhaps the actual problem I'm trying to solve isn't hard enough for the network, that maybe it's linearly separable. Since this would respond better to not having a hidden layer at all, essentially just doing regular linear regression, I tried that. I changed my model to: </p> <pre><code>inp = keras.Input(shape=(num_features,)) out = keras.layers.Dense(1, activation='relu')(inp) self.model = keras.Model(inp,out) </code></pre> <p>This also changed nothing. My MAE and the predicted values are all the same. I've tried so many different things, different permutations of optimisation functions, learning rates, network configurations, and nothing can help. I'm pretty sure the data is good, but I've included a sample of it just in case.</p> <pre><code>chartposition,tagcount,dow,artistscore,timeinchart,finalpos 121,3925,5,35128,7,227 131,4453,3,85545,25,130 69,2583,4,17594,24,523 145,1165,3,292874,151,187 96,1679,5,102593,111,540 134,3494,5,1252058,37,370 6,34895,7,6824048,22,5 </code></pre> <blockquote> <p>A sample of my dataset, finalpos is the value I'm trying to predict. Dataset contains ~40,000 records, split 80/20 - training/testing</p> </blockquote> <pre><code>def __init__(self, validation_split, num_features, should_log): self.should_log = should_log self.validation_split = validation_split inp = keras.Input(shape=(num_features,)) out = keras.layers.Dense(1, activation='relu')(inp) self.model = keras.Model(inp,out) optimizer = tf.train.GradientDescentOptimizer(0.01) self.model.compile(loss='mae', optimizer=optimizer, metrics=['mae']) def train(self, data, labels, plot=False): early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=20) history = self.model.fit(data, labels, epochs=self.epochs, validation_split=self.validation_split, verbose=0, callbacks = [PrintDot(), early_stop]) if plot: self.plot_history(history) </code></pre> <blockquote> <p>All code relevant to constructing and training the network</p> </blockquote> <pre><code>def normalise_dataset(df, mini, maxi): return (df - mini)/(maxi-mini) </code></pre> <blockquote> <p>Normalisation of the input data. Both my testing and training data are normalised to the max and min of the testing set</p> </blockquote> <p><img src="https://i.imgur.com/1BLcuYU.jpg" alt="loss hidden"></p> <blockquote> <p>Graph of my loss vs validation curves with the one hidden layer network with an adam optimiser, learning rate 0.01</p> </blockquote> <p><img src="https://i.imgur.com/H2nNFIn.png" alt="loss linear"></p> <blockquote> <p>Same graph but with linear regression and a gradient descent optimiser.</p> </blockquote>
<p>So I am pretty sure that your normalization is the issue: You are not normalizing <em>by feature</em> (as is the de-facto industry standard), but <em>across all data</em>. That matters if you have two different features that have very different orders of magnitude/ranges (in your case, compare <code>timeinchart</code> with <code>artistscore</code>).</p> <p>Instead, you might want to normalize using something like scikit-learn's <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html" rel="nofollow noreferrer">StandardScaler</a>. Not only does this normalize per column (so you can pass all features at once), but it also does unit variance (which is some assumption about your data, but can potentially help, too).</p> <p>To transform your data, use something along these lines:</p> <pre><code>from sklearn.preprocessing import StandardScaler import numpy as np raw_data = np.array([[1,40], [2, 80]]) scaler = StandardScaler() processed_data = scaler.fit_transform(raw_data) # fit() calculates mean etc, transform() puts it to the new range. print(processed_data) # returns [[-1, -1], [1,1]] </code></pre> <p>Note that you have two possibilities to normalize/standardize your test data: either scale it together with your training data and then split <em>afterwards</em>, or fit the scaler only on the training data and then use <em>the same scaler</em> to transform your test data. <br/> <strong>Never fit_transform your test set separately from the training data!</strong><br/> Since you have potentially different mean/min/max values, you can end up with totally wrong predictions! In a sense, the StandardScaler is your definition of your "data source distribution", which is inherently still the same for your test set, even though they might be a subset not exactly following the same properties (due to small sample size etc.)</p> <p>Additionally, you might want to use a more advanced <a href="https://keras.io/optimizers/" rel="nofollow noreferrer">optimizer</a>, like Adam, or specify some momentum property (0.9 is a good choice in practice, as a rule of thumb) for your SGD.</p>
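<p>Concretely, the "fit on train, transform both" pattern would look something like this sketch:</p> <pre><code>from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train_raw)  # learn mean/std from training data only
X_test = scaler.transform(X_test_raw)        # reuse the same statistics on the test set
</code></pre>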
python|tensorflow|machine-learning|neural-network|keras
1
6,077
60,495,864
How to handle repeated input for a Keras layer?
<p>I have a Keras model which has two input layers. </p> <ol> <li>a tweet of shape <code>(20,300)</code>.</li> <li>five other tweets of shape <code>(5,20,300)</code>. However, this input is the same for all training examples.</li> </ol> <p>In other words, for each training step, there will be a different tweet (first input) and the same five tweets (second input). My second input, which has a shape of <code>(5,20,300)</code>, is too big to be repeated <code>num_samples</code> times and then used as an input layer to the Keras model. I need a way to use the second input inside the Keras model without repeating it <code>num_samples</code> times.</p> <p>Is there any way to handle this type of input?</p>
<p>Create a tensor with that constant input:</p> <pre><code>fixed_tweets = keras.backend.constant(the_tweets_as_numpy) </code></pre> <p>Use a regular input and a <code>tensor</code> input: </p> <pre><code>input1 = Input((20,300)) input2 = Input(tensor=fixed_tweets) </code></pre> <p>Go have fun!!</p> <p>You will probably need custom layers to handle the difference between the batch size of <code>input1</code> (any) and <code>input2</code> (5). </p>
python|tensorflow|keras|deep-learning|keras-layer
0
6,078
59,844,745
Adding Future Dates to DataFrame
<p>How do I add future dates to a data frame? The timedelta approach below only creates a shifted date in an adjacent column; it doesn't extend the frame with new rows.</p> <pre><code>import pandas as pd from datetime import timedelta df = pd.DataFrame({ 'date': ['2001-02-01','2001-02-02','2001-02-03', '2001-02-04'], 'Monthly Value': [100, 200, 300, 400] }) df["future_date"] = df["date"] + timedelta(days=4) print(df) date future_date 0 2001-02-01 00:00:00 2001-02-05 00:00:00 1 2001-02-02 00:00:00 2001-02-06 00:00:00 2 2001-02-03 00:00:00 2001-02-07 00:00:00 3 2001-02-04 00:00:00 2001-02-08 00:00:00 </code></pre> <p>Desirable dataframe:</p> <pre><code> date future_date 0 2001-02-01 00:00:00 2001-02-01 00:00:00 1 2001-02-02 00:00:00 2001-02-02 00:00:00 2 2001-02-03 00:00:00 2001-02-03 00:00:00 3 2001-02-04 00:00:00 2001-02-04 00:00:00 4 2001-02-05 00:00:00 5 2001-02-06 00:00:00 6 2001-02-07 00:00:00 7 2001-02-08 00:00:00 </code></pre>
<p>You can do the following:</p> <pre><code># set to timestamp df['date'] = pd.to_datetime(df['date']) # create a future date df ftr = (df['date'] + pd.Timedelta(4, unit='days')).to_frame() ftr['Monthly Value'] = None # join the future data df1 = pd.concat([df, ftr], ignore_index=True) date Monthly Value 0 2001-02-01 100 1 2001-02-02 200 2 2001-02-03 300 3 2001-02-04 400 4 2001-02-05 None 5 2001-02-06 None 6 2001-02-07 None 7 2001-02-08 None </code></pre>
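<p>An alternative sketch that avoids building the extra frame by hand, using <code>pd.date_range</code> plus <code>reindex</code>:</p> <pre><code>df['date'] = pd.to_datetime(df['date'])
full_range = pd.date_range(df['date'].min(), df['date'].max() + pd.Timedelta(days=4))
df1 = (df.set_index('date')
         .reindex(full_range)
         .rename_axis('date')
         .reset_index())
</code></pre> <p>The four future rows get <code>NaN</code> in <code>Monthly Value</code> automatically.</p>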
python|pandas|datetime
2
6,079
59,786,922
How to best coerce a list of numpy arrays into a pandas dataframe column?
<p>I have a list (posterior_list) of 18,000 <code>numpy arrays</code> with length 82,868. I have a dataframe (y_test) with shape <code>(82,868, 1)</code>. The arrays are posterior predicted values. I would like to append each array inside that list as a column onto the dataframe (y_test) with the end result having shape <code>(82868, 18001)</code>. </p> <p>I tried the following:</p> <pre><code>for arr in posterior_list: y_test.append(arr) </code></pre> <p>That resulted in the following error:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-144-a200df62319d&gt; in &lt;module&gt; 1 for arr in posterior: ----&gt; 2 y_test.append(arr) ~\AppData\Local\Continuum\anaconda3\envs\stan_env\lib\site-packages\pandas\core\frame.py in append(self, other, ignore_index, verify_integrity, sort) 6690 return concat(to_concat, ignore_index=ignore_index, 6691 verify_integrity=verify_integrity, -&gt; 6692 sort=sort) 6693 6694 def join(self, other, on=None, how='left', lsuffix='', rsuffix='', ~\AppData\Local\Continuum\anaconda3\envs\stan_env\lib\site-packages\pandas\core\reshape\concat.py in concat(objs, axis, join, join_axes, ignore_index, keys, levels, names, verify_integrity, sort, copy) 226 keys=keys, levels=levels, names=names, 227 verify_integrity=verify_integrity, --&gt; 228 copy=copy, sort=sort) 229 return op.get_result() 230 ~\AppData\Local\Continuum\anaconda3\envs\stan_env\lib\site-packages\pandas\core\reshape\concat.py in __init__(self, objs, axis, join, join_axes, keys, levels, names, ignore_index, verify_integrity, copy, sort) 287 ' only pd.Series, pd.DataFrame, and pd.Panel' 288 ' (deprecated) objs are valid'.format(type(obj))) --&gt; 289 raise TypeError(msg) 290 291 # consolidate TypeError: cannot concatenate object of type "&lt;class 'numpy.ndarray'&gt;"; only pd.Series, pd.DataFrame, and pd.Panel (deprecated) objs are valid </code></pre> <p>So I tried the following:</p> <pre><code>for arr in posterior_list: arr = pd.Series(arr) y_test.append(arr, ignore_index=True) </code></pre> <p><code>MemoryError: Unable to allocate array with shape (82877, 82868) and data type float64</code></p> <p>Can anyone advise on the best way to loop through my list, and append each array as a column onto my dataframe?</p>
<p>You can try</p> <pre><code>y_test.join(pd.DataFrame(posterior_list,columns=y_test.index).T) </code></pre>
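<p>Be aware of the sheer size though: an <code>(82868, 18001)</code> frame of <code>float64</code> is roughly 12 GB, which is likely what triggered the earlier <code>MemoryError</code>. Stacking into a single array and downcasting first can halve that; a sketch, assuming the arrays fit in RAM at all:</p> <pre><code>import numpy as np
import pandas as pd

mat = np.column_stack(posterior_list).astype(np.float32)  # shape (82868, 18000), ~6 GB
out = pd.concat([y_test.reset_index(drop=True), pd.DataFrame(mat)], axis=1)
</code></pre>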
python|pandas|numpy
1
6,080
61,919,647
Runtime error when optimising Theta in Logistic Regression using fmin_bfgs
<pre><code>#Get libraries import scipy.optimize as opt import numpy as np import pandas import matplotlib.pyplot as plt def plotData(): plt.scatter(X[y==1,0],X[y==1,1], marker='+', c='black',label="Admitted") plt.scatter(X[y==0,0],X[y==0,1], marker='.', c='y', label="Not Admitted") plt.xlabel("Exam 1 Score") plt.ylabel("Exam 2 Score") plt.legend(['Admitted', 'Not Admitted']) def sigmoid(z): return 1/(1 + np.exp(-1. * z)) def costFunction(theta, X, y): y = np.reshape(y, (len(y), 1)) cost = (-1./m) * ( y.T@np.log(sigmoid(X@theta)) + (1 - y).T@np.log(1 - sigmoid(X@theta))) #grad = (1/m)*(X.T@(sigmoid(X@theta) - y)) return(cost[0]) def costFunctionDer(theta, X, y): grad = (1./m)*(X.T@(sigmoid(X@theta) - y)) return np.ndarray.flatten(grad) def predict(theta, X): return (X@theta &gt;= 0).astype('int') #Load Data data = pandas.read_table('ex2data1.txt', sep=',', names=['Exam 1 Score','Exam 2 Score','Admittance'] ) X = data.iloc[:,0:2].to_numpy() y = data.iloc[:,2].to_numpy() #Plot Data plotData() #Compute Cost and Gradient #Add intercept X = np.insert(X, 0, 1, 1) m, n = X.shape #Initialise thetas theta_i = np.zeros((n,1)) cost = costFunction(theta_i, X, y) grad = costFunctionDer(theta_i, X, y) #Optimise Algorithm for theta result = opt.fmin_bfgs(costFunction, theta_i, costFunctionDer,args=(X,y), full_output = True, maxiter=400, retall=True) theta, cost_min = result[0:2] #Make a decision Boundary vv smart, take the min and max range of x # and then calculate the value of score 2 from it as theta[0] + x1theta[1] + x2theta[2] = 0 boundary_xs = np.array([np.min(X[:,1]), np.max(X[:,1])]) boundary_ys = (-1./theta[2])*(theta[0] + theta[1]*boundary_xs) plt.plot(boundary_xs,boundary_ys,'b-',label='Decision Boundary') plt.legend() #Predict p = predict(theta, X) print("Train Accuracy {}".format(100*np.sum(p==y)/len(y))) </code></pre> <p>Recently I have been learning Machine Learning through Andrew Ng's online course. I was working on exercise 2, which required us to implement Logistic Regression. Though the course is taught in Octave/Matlab, I am trying to learn it using the industry-standard Python.</p> <p>To optimise the value of theta, I tried using the function <strong>fmin_bfgs</strong> and this is giving me a runtime error. I have used the <strong>min</strong> function and the code works perfectly fine. Can someone help me find the problem? I am new to ML so I apologise for any obvious flaws.</p> <p>The error is as follows:</p> <pre><code>RuntimeWarning: divide by zero encountered in log cost = (-1./m) * ( y.T@np.log(sigmoid(X@theta)) + (1 - y).T@np.log(1 - sigmoid(X@theta))) RuntimeWarning: invalid value encountered in matmul cost = (-1./m) * ( y.T@np.log(sigmoid(X@theta)) + (1 - y).T@np.log(1 - sigmoid(X@theta))) RuntimeWarning: divide by zero encountered in log cost = (-1./m) * ( y.T@np.log(sigmoid(X@theta)) + (1 - y).T@np.log(1 - sigmoid(X@theta))) RuntimeWarning: invalid value encountered in matmul cost = (-1./m) * ( y.T@np.log(sigmoid(X@theta)) + (1 - y).T@np.log(1 - sigmoid(X@theta))) RuntimeWarning: divide by zero encountered in log cost = (-1./m) * ( y.T@np.log(sigmoid(X@theta)) + (1 - y).T@np.log(1 - sigmoid(X@theta))) RuntimeWarning: invalid value encountered in matmul cost = (-1./m) * ( y.T@np.log(sigmoid(X@theta)) + (1 - y).T@np.log(1 - sigmoid(X@theta))) </code></pre>
<p>The first RuntimeWarning gave me the clue that I needed. My sigmoid function was returning 0 for very low values of z. To fix the problem I set the lower and upper bounds manually in the sigmoid function.</p>
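<p>For anyone hitting the same thing, a small sketch of one way to bound the sigmoid output (the epsilon value is a matter of taste):</p> <pre><code>def sigmoid(z):
    s = 1.0 / (1.0 + np.exp(-1. * z))
    eps = 1e-10
    return np.clip(s, eps, 1 - eps)  # keeps log(sigmoid) and log(1 - sigmoid) finite
</code></pre> <p><code>np.exp</code> can still warn about overflow for extreme z, but the divide-by-zero in the log disappears.</p>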
python|python-3.x|numpy|scipy-optimize
0
6,081
61,975,181
Random generation of uniformly distributed points within given boundaries in 3D space (cuboid) with Python
<p><img src="https://i.stack.imgur.com/nLgqQ.png" alt="3D space"></p> <p>I am trying to generate 2000 random points in 3D cuboid space within given boundaries in python. How would one go about it?</p>
<pre class="lang-py prettyprint-override"><code>import random xrange = (-1000.0, 1000.0) yrange = (-1000.0, 1000.0) zrange = (-1000.0, 1000.0) points = [] [ points.append((random.uniform(*xrange), random.uniform(*yrange), random.uniform(*zrange))) for i in range(2000) ] print(points) </code></pre>
python|numpy|random|neural-network|cluster-computing
2
6,082
61,935,672
Converting dictionary into two-column pandas dataframe
<p>I have a dictionary in python called <code>word_counts</code> consisting of key words and values which represent the frequency in which they appear in a given text:</p> <pre><code>word_counts = {'the':2, 'cat':2, 'sat':1, 'with':1, 'other':1} </code></pre> <p>I now need to make this into a pandas DataFrame with two columns: a column named 'word' indicating the words and column named 'count' indicating the frequency. </p>
<p>You can create a dataframe from a dictionary:</p> <pre><code>df = pd.DataFrame({"word": list(word_counts.keys()), "count": list(word_counts.values())}) </code></pre>
python|pandas|dataframe|dictionary
0
6,083
57,785,386
How to convert a column from object to float?
<p>I have a csv whose columns include the results of some mathematical calculations. When I read the csv, the datatype of these columns is object. The content of the columns are numbers like this "9,180693865" (or 0).</p> <p>Now I tried the following to change the datatype:</p> <pre><code>df["column"].astype('float64') df["column"] = Erzeugung.solar_prediction.astype("float64") pd.to_numeric(df["column"]) </code></pre> <p>The error message looks like this: ValueError: Unable to parse string "9,180693865" at position 27</p> <p>Is there something else I can try?</p>
<p>Replace the <code>,</code> by a <code>.</code> using <code>str.replace</code> to fix this:</p> <pre><code>df["column"] = Erzeugung.solar_prediction.str.replace(',', '.').astype("float64") </code></pre>
python|pandas
0
6,084
54,980,385
Tensorflow dataset generator inverted colors
<p>I have a problem with the TF dataset generator. I do not know why, but when I get a picture from the dataset by running it through a session, it returns Tensors where the colors are inverted. I tried to change BGR to RGB, but this is not the problem. It is partially solved by inverting the image array (img = 1 - img), but I would prefer this problem not to occur in the first place. Does somebody know what could be the cause?</p> <pre><code>import os import glob import random import tensorflow as tf from tensorflow import Tensor class PairGenerator(object): person1 = 'img' person2 = 'person2' label = 'same_person' #def __init__(self, lfw_path='./tf_dataset/resources' + os.path.sep + 'lfw'): def __init__(self, lfw_path='/home/tom/Devel/ai-dev/tensorflow-triplet-loss/data/augmentor'): self.all_people = self.generate_all_people_dict(lfw_path) print(self.all_people.keys()) def generate_all_people_dict(self, lfw_path): # generates a dictionary between a person and all the photos of that person all_people = {} for person_folder in os.listdir(lfw_path): person_photos = glob.glob(lfw_path + os.path.sep + person_folder + os.path.sep + '*.jpg') all_people[person_folder] = person_photos return all_people def get_next_pair(self): all_people_names = list(self.all_people.keys()) while True: # draw a person at random person1 = random.choice(all_people_names) # flip a coin to decide whether we fetch a photo of the same person vs different person same_person = random.random() &gt; 0.5 if same_person: person2 = person1 else: # repeatedly pick random names until we find a different name person2 = person1 while person2 == person1: person2 = random.choice(all_people_names) person1_photo = random.choice(self.all_people[person1]) yield ({self.person1: person1_photo, self.label: same_person}) class Inputs(object): def __init__(self, img: Tensor, label: Tensor): self.img = img self.label = label def feed_input(self, input_img, input_label=None): # feed the input images that are necessary to make a prediction feed_dict = {self.img: input_img} # optionally also include the label: # if we're just making a prediction without calculating loss, that won't be necessary if input_label is not None: feed_dict[self.label] = input_label return feed_dict class Dataset(object): img_resized = 'img_resized' label = 'same_person' def __init__(self, generator=PairGenerator()): self.next_element = self.build_iterator(generator) def build_iterator(self, pair_gen: PairGenerator): batch_size = 10 prefetch_batch_buffer = 5 dataset = tf.data.Dataset.from_generator(pair_gen.get_next_pair, output_types={PairGenerator.person1: tf.string, PairGenerator.label: tf.bool}) dataset = dataset.map(self._read_image_and_resize) dataset = dataset.batch(batch_size) dataset = dataset.prefetch(prefetch_batch_buffer) iter = dataset.make_one_shot_iterator() element = iter.get_next() return Inputs(element[self.img_resized], element[PairGenerator.label]) def _read_image_and_resize(self, pair_element): target_size = [224, 224] # read images from disk img_file = tf.read_file(pair_element[PairGenerator.person1]) print("////") print(PairGenerator.person1) img = tf.image.decode_image(img_file, channels=3) # let tensorflow know that the loaded images have unknown dimensions, and 3 color channels (rgb) img.set_shape([None, None, 3]) # resize to model input size img_resized = tf.image.resize_images(img, target_size) #img_resized = tf.image.flip_up_down(img_resized) #img_resized = tf.image.rot90(img_resized) pair_element[self.img_resized] = img_resized pair_element[self.label] = tf.cast(pair_element[PairGenerator.label], tf.float32) return pair_element generator = PairGenerator() iter = generator.get_next_pair() for i in range(10): print(next(iter)) ds = Dataset(generator) import matplotlib.pyplot as plt imgplot = plt.imshow(out) imgplot = plt.imshow(1 - out) </code></pre>
<p>Ok so the solution was </p> <p>imgplot = plt.imshow(out/255)</p> <p>matplotlib expects float image data in the [0, 1] range and clips anything outside it, so the resized float images with 0-255 values were being rendered with the strange colors. Scaling by 255 fixes it.</p>
python|tensorflow|tensorflow-datasets
0
6,085
49,724,954
How are PyTorch's tensors implemented?
<p>I am building my own Tensor class in Rust, and I am trying to make it like PyTorch's implementation. </p> <p><em>What is the most efficient way to store tensors programmatically, but, specifically, in a strongly typed language like Rust?</em> <em>Are there any resources that provide good insights into how this is done?</em></p> <p>I am currently building a contiguous array, so that, given dimensions of <code>3 x 3 x 3</code>, my array would just have <code>3^3</code> elements in it, which would represent the tensor. However, this does make some of the mathematical operations and manipulations of the array harder.</p> <p>The dimension of the tensor should be dynamic, so that I could have a tensor with <code>n</code> dimensions.</p>
<h2>Contiguous array</h2> <p>The commonly used way to store such data is in a single array that is laid out as a single, contiguous block within memory. More concretely, a 3x3x3 tensor would be stored simply as a single array of 27 values, one after the other. </p> <p>The only place where the dimensions are used is to calculate the mapping between the (many) coordinates and the offset within that array. For example, to fetch the item <code>[2, 0, 0]</code> you would need to know if it is a 3x3x3 matrix, a 9x3x1 matrix, or a 27x1x1 matrix - in all cases the "storage" would be 27 items long, but the interpretation of "coordinates" would be different. If you use zero-based indexing, the calculation is trivial, but you need to know the length of each dimension.</p> <p>This does mean that resizing and similar operations may require copying the whole array, but that's ok, you trade off the performance of those (rare) operations to gain performance for the much more common operations, e.g. sequential reads.</p>
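<p>For illustration, a tiny sketch of that row-major ("C order") coordinate-to-offset mapping (written in Python for brevity; the Rust version is the same arithmetic):</p> <pre><code>def offset(coords, shape):
    # e.g. offset((2, 0, 0), (3, 3, 3)) == 2*9 + 0*3 + 0 == 18
    off = 0
    for c, d in zip(coords, shape):
        off = off * d + c
    return off
</code></pre> <p>Many implementations precompute the per-dimension multipliers as "strides", so that indexing reduces to a dot product of coordinates and strides.</p>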
python|python-3.x|rust|pytorch|tensor
6
6,086
49,673,059
How to control X's and Y's when interpolating with pandas
<p>I want to use spline interpolation so I can fill some nulls, but I cannot find a way to specify X's and Y's for Pandas. It automatically selects the index to be the X's and fills nulls for all columns that have nulls, respectively. Any ideas of how to make it work? Or do I need to use SciPy?</p> <p>I tried something like this:</p> <pre><code>df.interpolate(x='some_column1', y='some_column2', method='spline', order=1) </code></pre> <p>but I got: TypeError: _interpolate_scipy_wrapper() got multiple values for keyword argument 'y'</p> <p>I know there is an option to reset the index, but is there another way to choose which columns I want to use?</p>
<p>Pandas interpolation doesn't allow you to simultaneously specify x, y, and method. For greater control over interpolation, you might want to use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html#scipy.interpolate.interp1d" rel="nofollow noreferrer"><code>scipy.interpolate.interp1d</code></a>, which returns a callable you can then evaluate at any x positions:</p> <pre><code>from scipy.interpolate import interp1d

f = interp1d(df['some_column_1'], df['some_column_2'])
df['some_column_2'] = f(df['some_column_1'])
</code></pre> <p>Notes:</p> <ul> <li>If the DataFrame is already sorted by <code>some_column_1</code>, you can pass <code>assume_sorted=True</code> to skip the internal sort (by default, <code>interp1d</code> sorts the data itself).</li> <li>By default, <code>interp1d</code> does linear interpolation. It's possible to change this with the <code>kind</code> parameter.</li> </ul>
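<p>To actually fill the nulls, which is what the question is after, you would fit the interpolator on the known rows only and then evaluate it on the missing rows. A rough sketch, reusing the column names above and assuming the missing rows' x values fall inside the known x range:</p> <pre><code>from scipy.interpolate import interp1d

known = df['some_column_2'].notna()
f = interp1d(df.loc[known, 'some_column_1'],
             df.loc[known, 'some_column_2'],
             kind='slinear')  # spline of order 1, as in the question

df.loc[~known, 'some_column_2'] = f(df.loc[~known, 'some_column_1'])
</code></pre>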
python|python-2.7|pandas|scipy|interpolation
1
6,087
49,503,565
make a numpy array with shape and offset argument in another style
<p>I wanted to access my array both as a 3-element entity (a 3D position) and as individual elements (each of the x, y, z coordinates). After some research, I ended up doing the following.</p> <pre><code>&gt;&gt;&gt; import numpy as np
&gt;&gt;&gt; arr = np.zeros(5, dtype={'pos': (('&lt;f8', (3,)), 0),
                             'x': (('&lt;f8', 1), 0),
                             'y': (('&lt;f8', 1), 8),
                             'z': (('&lt;f8', 1), 16)})
&gt;&gt;&gt; arr["x"] = 0
&gt;&gt;&gt; arr["y"] = 1
&gt;&gt;&gt; arr["z"] = 2

# I can access the whole array by "pos"
&gt;&gt;&gt; arr["pos"]
array([[ 0.,  1.,  2.],
       [ 0.,  1.,  2.],
       [ 0.,  1.,  2.],
       [ 0.,  1.,  2.],
       [ 0.,  1.,  2.]])
</code></pre> <p>However, I've always been making arrays in this style:</p> <pre><code>&gt;&gt;&gt; arr = np.zeros(10, dtype=[("pos", "f8", (3,))])
</code></pre> <p>But I can't find a way to specify both the offset and the shape of the element at the same time in this style. Is there a way to do this?</p>
<p>In reference to the docs page, <a href="https://docs.scipy.org/doc/numpy-1.14.0/reference/arrays.dtypes.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.14.0/reference/arrays.dtypes.html</a>, you are using the fields-dictionary form, with <code>(data-type, offset)</code> values:</p> <pre><code>{'field1': ..., 'field2': ..., ...}

dt1 = {'pos': (('&lt;f8', (3,)), 0),
       'x': (('&lt;f8', 1), 0),
       'y': (('&lt;f8', 1), 8),
       'z': (('&lt;f8', 1), 16)}
</code></pre> <p>The display for the resulting <code>dtype</code> is the other dictionary format:</p> <pre><code>{'names': ..., 'formats': ..., 'offsets': ..., 'titles': ..., 'itemsize': ...}

In [15]: np.dtype(dt1)
Out[15]: dtype({'names':['x','pos','y','z'], 'formats':['&lt;f8',('&lt;f8', (3,)),'&lt;f8','&lt;f8'], 'offsets':[0,0,8,16], 'itemsize':24})

In [16]: np.dtype(dt1).fields
Out[16]:
mappingproxy({'pos': (dtype(('&lt;f8', (3,))), 0),
              'x': (dtype('float64'), 0),
              'y': (dtype('float64'), 8),
              'z': (dtype('float64'), 16)})
</code></pre> <p><code>offsets</code> aren't mentioned anywhere else on the documentation page.</p> <p>The last format is a <code>union</code> type. It's a little unclear whether that's allowed or discouraged. The examples don't seem to work. There have been some changes in how multifield indexing works, and that may have affected this.</p> <p>Let's play around with various ways of viewing the array:</p> <pre><code>In [25]: arr
Out[25]:
array([(0., [ 0. , 10. ,  0. ], 10., 0. ),
       (1., [ 1. , 11. ,  0.1], 11., 0.1),
       (2., [ 2. , 12. ,  0.2], 12., 0.2),
       (3., [ 3. , 13. ,  0.3], 13., 0.3),
       (4., [ 4. , 14. ,  0.4], 14., 0.4)],
      dtype={'names':['x','pos','y','z'], 'formats':['&lt;f8',('&lt;f8', (3,)),'&lt;f8','&lt;f8'], 'offsets':[0,0,8,16], 'itemsize':24})

In [29]: dt3 = [('x','&lt;f8'), ('y','&lt;f8'), ('z','&lt;f8')]

In [30]: np.dtype(dt3)
Out[30]: dtype([('x', '&lt;f8'), ('y', '&lt;f8'), ('z', '&lt;f8')])

In [31]: np.dtype(dt3).fields
Out[31]:
mappingproxy({'x': (dtype('float64'), 0),
              'y': (dtype('float64'), 8),
              'z': (dtype('float64'), 16)})

In [32]: arr.view(dt3)
Out[32]:
array([(0., 10., 0. ), (1., 11., 0.1), (2., 12., 0.2), (3., 13., 0.3),
       (4., 14., 0.4)],
      dtype=[('x', '&lt;f8'), ('y', '&lt;f8'), ('z', '&lt;f8')])

In [33]: arr['pos']
Out[33]:
array([[ 0. , 10. ,  0. ],
       [ 1. , 11. ,  0.1],
       [ 2. , 12. ,  0.2],
       [ 3. , 13. ,  0.3],
       [ 4. , 14. ,  0.4]])

In [35]: arr.view('f8').reshape(5,3)
Out[35]:
array([[ 0. , 10. ,  0. ],
       [ 1. , 11. ,  0.1],
       [ 2. , 12. ,  0.2],
       [ 3. , 13. ,  0.3],
       [ 4. , 14. ,  0.4]])

In [36]: dt4 = [('pos', '&lt;f8', (3,))]   # (definition filled in; implied by Out[37] below)

In [37]: arr.view(dt4)
Out[37]:
array([([ 0. , 10. ,  0. ],), ([ 1. , 11. ,  0.1],), ([ 2. , 12. ,  0.2],),
       ([ 3. , 13. ,  0.3],), ([ 4. , 14. ,  0.4],)],
      dtype=[('pos', '&lt;f8', (3,))])

In [38]: arr.view(dt4)['pos']
Out[38]:
array([[ 0. , 10. ,  0. ],
       [ 1. , 11. ,  0.1],
       [ 2. , 12. ,  0.2],
       [ 3. , 13. ,  0.3],
       [ 4. , 14. ,  0.4]])
</code></pre>
python|arrays|numpy
2
6,088
73,521,449
How do I adjust the dates of a column in pandas according to a threshhold?
<p>I have a data frame with a datetime column like so:</p> <pre><code>       dates
0 2017-09-19
1 2017-08-28
2 2017-07-13
</code></pre> <p>I want to know if there is a way to adjust the dates with this condition:</p> <ol> <li>If the day of the date is before 15, then change the date to the end of last month.</li> <li>If the day of the date is 15 or after, then change the date to the end of the current month.</li> </ol> <p>My desired output would look something like this:</p> <pre><code>       dates
0 2017-09-30
1 2017-08-31
2 2017-06-30
</code></pre>
<p>Easy with <a href="https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.tseries.offsets.MonthEnd.html" rel="nofollow noreferrer"><code>MonthEnd</code></a>.</p> <p>Let's set up the data:</p> <pre><code>dates = pd.Series({0: '2017-09-19', 1: '2017-08-28', 2: '2017-07-13'})
dates = pd.to_datetime(dates)
</code></pre> <p>Then:</p> <pre><code>from pandas.tseries.offsets import MonthEnd

pre, post = dates.dt.day &lt; 15, dates.dt.day &gt;= 15
dates.loc[pre] = dates.loc[pre] + MonthEnd(-1)
dates.loc[post] = dates.loc[post] + MonthEnd(1)
</code></pre> <p>Explanation: create the masks (<code>pre</code> and <code>post</code>) first. Then use the masks to get either the end of the previous month or the end of the current month, as appropriate.</p>
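<p>As a quick check, running this on the three sample dates above should give exactly the desired output:</p> <pre><code>&gt;&gt;&gt; dates
0   2017-09-30
1   2017-08-31
2   2017-06-30
dtype: datetime64[ns]
</code></pre>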
python|pandas|dataframe
2
6,089
73,184,848
Rearrange a 5D tensor in PyTorch
<p>I have a 5D tensor in the shape of <code>(N,C,T,H,W)</code>. I want to rearrange it using PyTorch to the form of <code>(N,T,HW,C)</code>. How can I do that?</p>
<p>Naturally you can reshape the last two dimensions of your tensor by flattening it from <code>dim=-2</code>; this will produce a shape of <code>(N,C,T,HW)</code>:</p> <pre><code>&gt;&gt;&gt; x.flatten(-2)
</code></pre> <p>Then you can permute the dimensions around:</p> <pre><code>&gt;&gt;&gt; x.flatten(-2).permute(0, 2, 3, 1)
</code></pre>
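<p>A quick sanity check of the resulting shape (a sketch with made-up dimensions):</p> <pre><code>import torch

N, C, T, H, W = 2, 3, 4, 5, 6
x = torch.randn(N, C, T, H, W)
y = x.flatten(-2).permute(0, 2, 3, 1)
print(y.shape)  # torch.Size([2, 4, 30, 3]), i.e. (N, T, H*W, C)
</code></pre>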
python|pytorch
2
6,090
73,431,883
Expand selected keys in a json pandas column
<p>I have this sample dataset:</p> <pre class="lang-py prettyprint-override"><code>the_df = pd.DataFrame(
    {'id': ['AM', 'AN', 'AP'],
     'target': [130, 60, 180],
     'moves': [[{'date': '2022-08-01', 'amount': 285.0, 'name': 'Cookie'},
                {'name': 'Rush', 'amount': 10, 'date': '2022-08-02', 'type': 'song'}],
               [{'amount': 250.5, 'date': '2022-08-01', 'source': {'data': 'bing'}}],
               []]})

the_df
   id  target                                              moves
0  AM     130  [{'date': '2022-08-01', 'amount': 285.0, 'name...
1  AN      60  [{'amount': 250.5, 'date': '2022-08-01', 'sour...
2  AP     180                                                 []
</code></pre> <p>And I want to 'expand' (or 'explode') each value in the json column, but only selecting some columns. This is the expected result:</p> <pre class="lang-py prettyprint-override"><code>   id  target        date  amount    name
0  AM     130  2022-08-01   285.0  Cookie
1  AM     130  2022-08-02    10.0    Rush
2  AN      60  2022-08-01   250.5
3  AP     180
</code></pre> <p>Firstly I tried using <code>json_normalize</code> and iterating over each row (even when the last row has no data), but I would have to know in advance how many rows I'm going to expand:</p> <pre class="lang-py prettyprint-override"><code>pd.json_normalize(the_df.moves[0])
         date  amount    name  type
0  2022-08-01   285.0  Cookie   NaN
1  2022-08-02    10.0    Rush  song

pd.json_normalize(the_df.moves[1])
   amount        date source.data
0   250.5  2022-08-01        bing

pd.json_normalize(the_df.moves[2])
</code></pre> <p>I only want the keys <code>date</code>, <code>amount</code> and <code>name</code>. So I tried this:</p> <pre class="lang-py prettyprint-override"><code>temp_df = pd.DataFrame(columns=['date', 'amount', 'name'])
for i in range(len(the_df)):
    temp_df = temp_df.append(pd.json_normalize(the_df.moves[i]))

temp_df
         date  amount    name  type source.data
0  2022-08-01   285.0  Cookie   NaN         NaN
1  2022-08-02    10.0    Rush  song         NaN
0  2022-08-01   250.5     NaN   NaN        bing
</code></pre> <p>But my data frame <code>temp_df</code> needs a reference to the original dataset <code>the_df</code> to apply a merge. Please, could you guide me to the right solution? I guess there must be a way to recall the id, or a method in pandas to do this without a for loop.</p>
<p>Here are the steps you could follow.</p> <p>(1) define <code>df</code></p> <pre><code>df = pd.DataFrame(
    {'id': ['AM', 'AN', 'AP'],
     'target': [130, 60, 180],
     'moves': [[{'date': '2022-08-01', 'amount': 285.0, 'name': 'Cookie'},
                {'name': 'Rush', 'amount': 10, 'date': '2022-08-02', 'type': 'song'}],
               [{'amount': 250.5, 'date': '2022-08-01', 'source': {'data': 'bing'}}],
               []]})
print(df)

   id  target                                                                                                                               moves
0  AM     130  [{'date': '2022-08-01', 'amount': 285.0, 'name': 'Cookie'}, {'name': 'Rush', 'amount': 10, 'date': '2022-08-02', 'type': 'song'}]
1  AN      60                                                              [{'amount': 250.5, 'date': '2022-08-01', 'source': {'data': 'bing'}}]
2  AP     180                                                                                                                                []
</code></pre> <p>(2) explode the column 'moves'</p> <pre><code>df1 = df.explode('moves', ignore_index=True)
print(df1)

   id  target                                                                  moves
0  AM     130             {'date': '2022-08-01', 'amount': 285.0, 'name': 'Cookie'}
1  AM     130  {'name': 'Rush', 'amount': 10, 'date': '2022-08-02', 'type': 'song'}
2  AN      60    {'amount': 250.5, 'date': '2022-08-01', 'source': {'data': 'bing'}}
3  AP     180                                                                   NaN
</code></pre> <p>(3) json_normalize the column 'moves'</p> <pre><code>df2 = pd.json_normalize(df1['moves'])
print(df2)

         date  amount    name  type source.data
0  2022-08-01   285.0  Cookie   NaN         NaN
1  2022-08-02    10.0    Rush  song         NaN
2  2022-08-01   250.5     NaN   NaN        bing
3         NaN     NaN     NaN   NaN         NaN
</code></pre> <p>(4) concat the two DataFrames with only the relevant columns</p> <pre><code>df3 = pd.concat([df1[['id', 'target']], df2[['date', 'amount', 'name']]], axis=1)
print(df3)

   id  target        date  amount    name
0  AM     130  2022-08-01   285.0  Cookie
1  AM     130  2022-08-02    10.0    Rush
2  AN      60  2022-08-01   250.5     NaN
3  AP     180         NaN     NaN     NaN
</code></pre>
python|pandas
2
6,091
73,296,742
How can I count list values in a dataframe?
<p>I have a dataframe that looks like this:</p> <pre><code>df = pd.DataFrame({'id': ['T01', 'T01', 'T01', 'T02', 'T02', 'T03', 'T03'],
                   'event_list': [(['a', 'b']), (['a', 'c']), (['a', 'b', 'c']),
                                  (['a']), (['a', 'b']),
                                  (['a', 'b', 'c']), (['b', 'c'])]})
</code></pre> <p>I want to group by the <code>id</code> column and count the elements inside the lists, so the desired output will look like this:</p> <pre><code>df = pd.DataFrame({'id': ['T01', 'T01', 'T01', 'T02', 'T02', 'T03', 'T03', 'T03'],
                   'event': ['a', 'b', 'c', 'a', 'b', 'a', 'b', 'c'],
                   'count': [3, 2, 2, 2, 1, 1, 2, 2]})
</code></pre>
<p>Making use of pandas' newer functions, we can combine <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer">explode</a> with <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.agg.html" rel="nofollow noreferrer">pd.NamedAgg</a>, recreating your expected output in the desired order:</p> <pre><code>df.explode('event_list').groupby(['id', 'event_list']).agg(count=pd.NamedAgg('event_list', 'count'))
</code></pre> <p>Outputting:</p> <pre><code>               count
id  event_list
T01 a              3
    b              2
    c              2
T02 a              2
    b              1
T03 a              1
    b              2
    c              2
</code></pre>
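<p>If you want the flat three-column frame from the question rather than a MultiIndex, a small extension of the same snippet resets the index and renames the exploded column:</p> <pre><code>out = (df.explode('event_list')
         .groupby(['id', 'event_list'])
         .agg(count=pd.NamedAgg('event_list', 'count'))
         .reset_index()
         .rename(columns={'event_list': 'event'}))
</code></pre>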
python|pandas|list
2
6,092
67,544,547
Python: Pandas Dataframe MultiIndex select data based on Index values gives empty result
<p>I have a pandas dataframe that has multiple indexes (latitude, longitude, and time), with the data being wind speed. I want to select based on one latitude/longitude location. When I try this, it returns an empty result. What am I doing wrong here?</p> <p>Here is part of my original dataframe:</p> <p><a href="https://i.stack.imgur.com/a0Dwn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a0Dwn.png" alt="enter image description here" /></a></p> <pre><code>df = df.query('latitude == ' + str(24.549999) + ' and longitude == ' + str(-126.870003))
df
</code></pre> <p>returns this:</p> <p><a href="https://i.stack.imgur.com/6HYvb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6HYvb.png" alt="enter image description here" /></a></p> <p>completely empty, like it couldn't find what I was looking for. What am I doing wrong here? Also, is there a way to round the index values so that, for example, latitude and longitude have two decimal places: latitude=24.55 and longitude=-126.87?</p>
<p>You are facing this problem because the columns <strong>'latitude'</strong>, <strong>'longitude'</strong> and <strong>'time'</strong> are of type string. To resolve it:</p> <pre><code>df = df.reset_index()
</code></pre> <p>Now use the <code>astype()</code> method and the <code>to_datetime()</code> method:</p> <pre><code>df[['latitude', 'longitude']] = df[['latitude', 'longitude']].astype(float)
df['time'] = pd.to_datetime(df['time'])
</code></pre> <p>Finally:</p> <pre><code>df = df.set_index(['latitude', 'longitude', 'time'])
</code></pre> <p>Now if you run your query:</p> <pre><code>df = df.query('latitude == ' + str(24.549999) + ' and longitude == ' + str(-126.870003))
</code></pre> <p>you will get your desired output.</p>
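<p>The question also asks about rounding the index to two decimal places. One way to do that (a sketch, applied while the index levels are still regular columns) is to round right after the type conversion:</p> <pre><code>df = df.reset_index()
df[['latitude', 'longitude']] = df[['latitude', 'longitude']].astype(float).round(2)
df = df.set_index(['latitude', 'longitude', 'time'])

# then query with the rounded values
df = df.query('latitude == 24.55 and longitude == -126.87')
</code></pre>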
python|pandas|dataframe|multi-index
1
6,093
60,162,118
How to get nth max correlation coefficient and its index by using numpy?
<p>I compute a correlation coefficient matrix like this (it's just an example):</p> <pre><code>a = np.array([[1, 2, 3], [4, 7, 9], [8, 7, 5]])
corr = np.corrcoef(a)
</code></pre> <p>The result is a correlation matrix.</p> <p>The question is how to get the 1st, 2nd (or nth) largest coefficient, and its index, like <code>[0, 1]</code> or <code>[2, 1]</code>?</p>
<p>Let's say you have a NumPy array and you computed the correlation coefficients like this:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

a = np.array([[1, 2, 3], [4, 7, 9], [8, 7, 5]])
corr = np.corrcoef(a)
</code></pre> <p>Now flatten the matrix and take the unique coefficients (<code>np.unique</code> also returns them sorted in ascending order):</p> <pre class="lang-py prettyprint-override"><code>flat = corr.flatten()
flat = np.unique(flat)
</code></pre> <p>The flat array looks like this:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt; array([-0.98198051, -0.95382097,  0.99339927,  1.        ])
</code></pre> <p>Now, to pick the <code>nth largest</code> element, just pick the right index from the end:</p> <pre class="lang-py prettyprint-override"><code>largest = flat[-1]
second_largest = flat[-2]
print(largest)
print(second_largest)
</code></pre> <pre><code>&gt;&gt; 1.0
&gt;&gt; 0.9933992677987828
</code></pre> <p>To find the indices of the corresponding coefficient:</p> <pre class="lang-py prettyprint-override"><code>result = np.where(corr == largest)
indices = np.array(result)
print(indices)
</code></pre> <p>This prints out the following array. So the indices where the largest coefficient occurs are (0, 0), (1, 1) and (2, 2).</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt; array([[0, 1, 2],
          [0, 1, 2]])
</code></pre>
python|numpy
2
6,094
60,140,342
Summing particular rows in a particular column
<p>I have the following data, where I want to sum only the "Total" column yearly (12 rows at once). How can I do this with <code>pandas</code>?</p> <p><a href="https://i.stack.imgur.com/OtLB4.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OtLB4.jpg" alt="enter image description here"></a></p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.year.html" rel="nofollow noreferrer"><code>Series.dt.year</code></a> for a new column:</p> <pre><code>df['datum'] = pd.to_datetime(df['datum'], dayfirst=True)
df['yearly'] = df.groupby(df['datum'].dt.year)['Total'].transform('sum')
</code></pre> <p>If you want a new DataFrame instead, aggregate with <code>sum</code>:</p> <pre><code>df1 = df.groupby(df['datum'].dt.year.rename('Year'))['Total'].sum().reset_index()
</code></pre>
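<p>A quick sketch of what the transform does, with made-up data, since the question's data is only shown as an image (assuming a <code>datum</code> column of day-first date strings and a numeric <code>Total</code> column):</p> <pre><code>df = pd.DataFrame({'datum': ['31.01.2019', '28.02.2019', '31.01.2020'],
                   'Total': [10, 20, 30]})
df['datum'] = pd.to_datetime(df['datum'], dayfirst=True)
df['yearly'] = df.groupby(df['datum'].dt.year)['Total'].transform('sum')
#        datum  Total  yearly
# 0 2019-01-31     10      30
# 1 2019-02-28     20      30
# 2 2020-01-31     30      30
</code></pre>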
python|pandas|csv
2
6,095
60,169,065
my picture won't resize with tf.image.resize_with_pad in tensorflow
<p>My original image is 600*600 px; I want to resize it to be 300*300 px.</p> <p><strong>Resize code</strong></p> <pre><code>import tensorflow as tf
import numpy as np
from tensorflow.keras.preprocessing.image import array_to_img
from tensorflow_core.python.keras.layers.image_preprocessing import ResizeMethod


def resize(image, w=300, h=300):
    image = tf.convert_to_tensor(np.asarray(image))
    size = (w, h)
    tf.image.resize_with_pad(
        image, h, w, method=ResizeMethod.BILINEAR
    )
    image = array_to_img(image)
    return image
</code></pre> <p>After I save the image, the dimensions do not change.</p> <p><strong>Save images code</strong></p> <pre><code>def write_images(images, path):
    try:
        index = 1
        for img in images:
            img.save(path + f'/{index}.jpeg')
            index += 1
    except:
        print('Error while writing images')
</code></pre>
<p>TensorFlow operations are <strong>not in place</strong>. You need to assign the result of the resize operation, as follows:</p> <pre><code>image = tf.image.resize_with_pad(
    image, h, w, method=ResizeMethod.BILINEAR
)
</code></pre>
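<p>Put into the context of the <code>resize</code> helper from the question, the fixed version would look like this (a sketch that keeps the question's imports; the result is now assigned, and the unused <code>size</code> variable is dropped):</p> <pre><code>def resize(image, w=300, h=300):
    image = tf.convert_to_tensor(np.asarray(image))
    # assign the returned tensor instead of discarding it
    image = tf.image.resize_with_pad(image, h, w, method=ResizeMethod.BILINEAR)
    return array_to_img(image)
</code></pre>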
python|python-3.x|image|tensorflow|image-processing
2
6,096
60,281,248
How can I check what value is assigned to what label while using sklearns' LabelEncoder()?
<p>I am transforming categorical data to numeric values for machine learning purposes.</p> <p>To give an example, the buying price (= "buying" variable) of a car is categorized as: "vhigh, high, med, low". To transform it into numeric values, I used:</p> <pre><code>le = preprocessing.LabelEncoder()
buying = le.fit_transform(list(data["buying"]))
</code></pre> <p>Is there a way to check how exactly Python transformed each of those labels into a numeric value, since this seems to be done randomly (e.g. vhigh = 0, high = 2)?</p>
<p>You can create an extra column in your dataframe to map the values:</p> <pre><code># Create an extra dataframe which will be used to address only the encoded values
mapping_df = data[['buying']].copy()
# Using .values is faster than using list
mapping_df['buying_encoded'] = le.fit_transform(data['buying'].values)
</code></pre> <p>Here's a full working example:</p> <pre><code>import pandas as pd
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
data = pd.DataFrame({'index': [0, 1, 2, 3, 4, 5, 6],
                     'buying': ['Luffy', 'Nami', 'Luffy', 'Franky', 'Sanji', 'Zoro', 'Luffy']})
data['buying_encoded'] = le.fit_transform(data['buying'].values)
data = data.drop_duplicates('buying').set_index('index')
print(data)
</code></pre> <p>Output:</p> <pre><code>       buying  buying_encoded
index
0       Luffy               1
1        Nami               2
3      Franky               0
4       Sanji               3
5        Zoro               4
</code></pre>
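<p>Worth knowing: the encoding is not actually random. <code>LabelEncoder</code> sorts the unique labels, so the mapping is alphabetical, and you can read it straight off the fitted encoder via its <code>classes_</code> attribute (the label at position <code>i</code> is encoded as <code>i</code>):</p> <pre><code>mapping = dict(zip(le.classes_, le.transform(le.classes_)))
print(mapping)
# For the car example this gives: {'high': 0, 'low': 1, 'med': 2, 'vhigh': 3}
</code></pre>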
python|pandas|machine-learning|scikit-learn|one-hot-encoding
3
6,097
60,111,700
How to BEST extract information from multiple dataframes based on a series of if/else conditions and matching values? (Guidance needed!)
<p>So I have three DataFrames: X, Y and Events. df_X has X coordinates, df_Y has Y coordinates, and Events_df has a list of events that have happened (the data is basketball related). You'll see how they link together by looking below:</p> <pre><code>df_Event:

Seconds Passed    Event Type         Player
1.0               Passed The Ball    Steve
2.0               Received Pass      Michael
3.0               Touch              Michael
4.0               Passed The Ball    Michael
5.0               Received The Ball  George

df_X:

Seconds Passed    Steve    Michael    George
1.0               11.43    12.33      15.33
2.0               11.45    12.46      13.22
3.0               10.99    10.33      14.33
4.0               11.34    10.36      11.22
5.0               12.43    12.22      11.78

df_Y: .... (The same as above, just with different numbers)
</code></pre> <p>I want to record patterns of events across time and then take the X, Y coordinates that correspond to the <em>Seconds Passed</em> column across the DataFrames. So, for example, if I wanted to know where a pass started and ended, I would need the following information in a new DataFrame labelled "Passes_df":</p> <pre><code>Passing Player    Receiving Player    X Coordinate PP    Y Coordinate PP    X Coordinate RP    Y Coordinate RP
Steve             Michael             11.43              ....               12.46              .....
</code></pre> <p>I know I could use the following:</p> <pre><code>Passes_df['Passing Player'] = df_Event['Player'].where(df_Event['Event'] == 'Pass').dropna()
Passes_df['Receiving Player'] = df_Event['Player'].shift(-1).where\
    ((df_Event['Event'] == 'Pass') &amp; (df_Event['Event'].shift(-1) == 'Received Pass'))
</code></pre> <p>However, this seems way too long-winded. Could I use a function that picks information from each source more fluently? Some help would be appreciated!</p>
<p>The solution needs a systematic approach, and it would change significantly if the understanding of the problem changes. Since the desired output excludes the event type 'Touch' and only compares passes with receptions, I have adopted an approach that produces exactly that output.</p> <ol> <li>The X and Y coordinate dataframes are untidy. We need to bring them into tidy form through the pd.melt function.</li> <li>Merge the event, X coordinate and Y coordinate data into a single dataframe through the pd.merge function.</li> <li>Create separate dataframes of passes and receptions.</li> <li>Since 'Seconds Passed' is a unique column, I'm assuming there is a 1-second lag between passing and receiving; therefore, remove 1 second from the receiving dataframe.</li> <li>Merge the passes dataframe with the receiving dataframe.</li> </ol> <p>(P.S.: By convention, I used pd as the alias for pandas.)</p> <h1>Step 1: Bring the data into tidy form</h1> <pre><code>tidy_x = pd.melt(df_x, id_vars='Seconds Passed', var_name='Player', value_name='X_Cordinate')
tidy_y = pd.melt(df_y, id_vars='Seconds Passed', var_name='Player', value_name='Y_Cordinate')
tidy_y['Y_Cordinate'] += 10  # Hypothetical number to distinguish the values from X.
</code></pre> <h1>Step 2: Merge event, X coordinate and Y coordinate data into a single dataframe</h1> <pre><code>df = pd.merge(df_event, tidy_x)
df = pd.merge(df, tidy_y)
</code></pre> <p>Now you have comprehensive data of events with coordinates and players.</p> <h1>Step 3: Create separate dataframes of passes and receptions</h1> <pre><code>passes = df['Event Type'].str.startswith('Pass')
df_passes = df[passes].copy()

received = df['Event Type'].str.startswith('Received')
df_received = df[received].copy()
df_received['Seconds Passed'] -= 1
</code></pre> <h1>Finally: Merge the passes dataframe with the receiving dataframe</h1> <pre><code>pd.merge(df_passes, df_received, on='Seconds Passed', suffixes=('_PP', '_RP'))
</code></pre> <h1>Result/Output:</h1> <p><a href="https://i.stack.imgur.com/6cHMF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6cHMF.png" alt="enter image description here"></a></p>
python|pandas|dataframe|where-clause
2
6,098
65,085,586
Pandas code to get the count of each values
<p>Here I'm sharing some sample data (I'm dealing with Big Data); the "counts" value varies from 1 to 3000+, sometimes more than that.</p> <p>The sample data looks like this:</p> <pre><code>ID                  counts
41 44 17 16 19 52   6
17 30 16 19         4
52 41 44 30 17 16   6
41 44 52 41 41 41   6
17 17 17 17 41      5
</code></pre> <p>I was trying to split the "ID" column into multiple columns and get the counts from those:</p> <pre><code>data = ...  # reading the csv file
split_data = data.ID.apply(lambda x: pd.Series(str(x).split(" ")))  # separating columns
</code></pre> <p>As I mentioned, I'm dealing with big data, so this method is not very effective; I'm facing problems getting the "ID" counts.</p> <p>I want to collect the total counts of each ID &amp; map it to the corresponding ID column.</p> <p>Expected output:</p> <pre><code>ID                  counts  16  17  19  30  41  44  52
41 44 17 16 19 52   6       1   1   1   0   2   0   1
17 30 16 19         4       1   1   1   1   0   0   0
52 41 44 30 17 16   6       1   1   0   1   1   1   1
41 44 52 41 41 41   6       0   0   0   0   4   1   1
17 17 17 17 41      5       0   4   0   0   1   0   0
</code></pre> <p>If you have any ideas, please let me know.</p> <p>Thank you</p>
<p>Use <code>Counter</code> to get the counts of the space-separated values, in a list comprehension:</p> <pre><code>from collections import Counter

L = [{int(k): v for k, v in Counter(x.split()).items()} for x in df['ID']]

df1 = pd.DataFrame(L, index=df.index).fillna(0).astype(int).sort_index(axis=1)
df = df.join(df1)
print(df)

                  ID  counts  16  17  19  30  41  44  52
0  41 44 17 16 19 52       6   1   1   1   0   1   1   1
1        17 30 16 19       4   1   1   1   1   0   0   0
2  52 41 44 30 17 16       6   1   1   0   1   1   1   1
3  41 44 52 41 41 41       6   0   0   0   0   4   1   1
4     17 17 17 17 41       5   0   4   0   0   1   0   0
</code></pre> <p>Another idea, but I guess slower:</p> <pre><code>df1 = df.assign(a=df['ID'].str.split()).explode('a')
df1 = df.join(pd.crosstab(df1['ID'], df1['a']), on='ID')
print(df1)

                  ID  counts  16  17  19  30  41  44  52
0  41 44 17 16 19 52       6   1   1   1   0   1   1   1
1        17 30 16 19       4   1   1   1   1   0   0   0
2  52 41 44 30 17 16       6   1   1   0   1   1   1   1
3  41 44 52 41 41 41       6   0   0   0   0   4   1   1
4     17 17 17 17 41       5   0   4   0   0   1   0   0
</code></pre>
python|python-3.x|pandas|dataframe|count
1
6,099
65,366,859
Changing values of duplicates in pandas
<p>I have a dataframe of stock prices. There is a possibility of duplicates, and hence while performing merge functions the data goes haywire. What I want is: whenever there are duplicates in any column, I want to increment them by small amounts.</p> <p>E.g. table:</p> <pre><code>|Date      | High| low|
|:--       |:---:|---:|
|1-12-2020 | 515 | 505|
|2-12-2020 | 525 | 515|
|3-12-2020 | 515 | 510|
|4-12-2020 | 530 | 505|
</code></pre> <p>In the above table we had instances of High and low repeating. Hence we will increment the duplicates by some very minuscule amount, say 0.0025.</p> <p><strong>Desired output</strong></p> <pre><code>|Date      | High    | low     |
|:--       |:---:    |---:     |
|1-12-2020 | 515     | 505     |
|2-12-2020 | 525     | 515     |
|3-12-2020 | 515.0025| 510     |
|4-12-2020 | 530     | 505.0025|
</code></pre> <p>What function should I use to solve this problem? Thanks.</p>
<p>A solution is this. Your dataframe is <code>ddf</code>:</p> <pre><code>        Date  High  low
0  1-12-2020   515  505
1  2-12-2020   525  515
2  3-12-2020   515  510
3  4-12-2020   530  505
</code></pre> <p>and doing this</p> <pre><code>mask = ddf['High'].duplicated(keep=False)
ddf.loc[mask, 'High'] += ddf.groupby('High').cumcount() * 0.0025
</code></pre> <p>returns</p> <pre><code>        Date      High  low
0  1-12-2020  515.0000  505
1  2-12-2020  525.0000  515
2  3-12-2020  515.0025  510
3  4-12-2020  530.0000  505
</code></pre> <p>The first occurrence keeps its original value (<code>cumcount</code> is 0 for it), and each later duplicate is bumped by a further 0.0025, matching the desired output.</p>
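<p>The desired output also bumps the duplicated <code>low</code> value, so in practice you would repeat the same pattern per column. A sketch:</p> <pre><code>for col in ['High', 'low']:
    mask = ddf[col].duplicated(keep=False)
    ddf.loc[mask, col] += ddf.groupby(col).cumcount() * 0.0025
</code></pre>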
python|pandas
1