Dataset column summary (from the viewer header):
- Unnamed: 0 — int64, range 0 to 378k
- id — int64, range 49.9k to 73.8M
- title — string, lengths 15 to 150
- question — string, lengths 37 to 64.2k
- answer — string, lengths 37 to 44.1k
- tags — string, lengths 5 to 106
- score — int64, range -10 to 5.87k
378,100
26,523,324
pandas AttributeError: 'unicode' object has no attribute 'view'
<p>This is a killer problem that probably has a simple solution for a pandas newbie like me:</p> <p>I'm trying to replace one record of a pandas DataFrame (df) with the latest version of that label, found in a separate DataFrame (latest_version).</p> <pre><code>df.ix[label] = latest_version.ix[label] </code></pre> <p>...
<p>So your problem here seems to be that you had a mismatch on the dtypes between the 2 dfs you were trying to assign to and from:</p> <pre><code>df dtypes: datetime64[ns](2), float64(1), object(70) </code></pre> <p>whilst </p> <pre><code>latest_version is :dtypes: float64(1), int64(1), object(71) </code></pre> <p...
python|pandas
1
378,101
39,220,929
python convolution with different dimension
<p>I'm trying to implement a convolutional neural network in Python.<br> However, when I use signal.convolve or np.convolve, it cannot do convolution on X, Y (X is 3d, Y is 2d). X are training minibatches. Y are filters. I don't want to run a for loop for every training vector like:</p> <pre><code>for i in xrange(X.shape[2...
<p>Scipy implements standard N-dimensional convolutions, so that the matrix to be convolved and the kernel are both N-dimensional.</p> <p>A quick fix would be to add an extra dimension to <code>Y</code> so that <code>Y</code> is 3-Dimensional:</p> <pre><code>result = signal.convolve(X, Y[..., None], 'valid') </code><...
python|numpy|scipy|deep-learning
5
378,102
39,359,478
Print Text Representation of Tensorflow (tf-slim) Model
<p>Is there any way to print a textual representation of a tf-slim model along the lines of what <a href="https://gist.github.com/solitaire/441a33e0eaa3c7fc959f#file-neural-net-info" rel="nofollow">nolearn offers</a>:</p> <pre><code>## Layer information name size total cap.Y cap.X cov.Y ...
<p>It is not textual representation that you seek, but maybe TensorBoard will suffice? You can visualize whole computation graph and monitor your model using this tool.</p> <p><a href="https://www.tensorflow.org/how_tos/summaries_and_tensorboard/" rel="nofollow noreferrer">https://www.tensorflow.org/how_tos/summaries_...
tensorflow|tf-slim
0
378,103
39,375,348
sum vs np.nansum weirdness while summing columns with same name on a pandas dataframe - python
<p>taking inspiration from this discussion here on SO (<a href="https://stackoverflow.com/questions/13078751/merge-columns-within-a-dataframe-that-have-the-same-name">Merge Columns within a DataFrame that have the Same Name</a>), I tried the method suggested and, while it works when using the function <code>sum()</cod...
<p>The issue is that <code>np.nansum</code> converts its input to a numpy array, so it effectively loses the column information (<code>sum</code> doesn't do this). As a result, the <code>groupby</code> doesn't get back any column information when constructing the output, so the output is just a Series of numpy arrays....
pandas|dataframe|group-by|multiple-columns
2
378,104
39,059,371
Can numpy's argsort give equal element the same rank?
<p>I want to get the rank of each element, so I use <code>argsort</code> in <code>numpy</code>:</p> <pre><code>np.argsort(np.array((1,1,1,2,2,3,3,3,3))) array([0, 1, 2, 3, 4, 5, 6, 7, 8]) </code></pre> <p>it gives equal elements different ranks; can I get the same rank for equal elements, like:</p> <pre><code>array([0, 0, 0, 3, 3, ...
<p>If you don't mind a dependency on scipy, you can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rankdata.html" rel="noreferrer"><code>scipy.stats.rankdata</code></a>, with <code>method='min'</code>:</p> <pre><code>In [14]: a Out[14]: array([1, 1, 1, 2, 2, 3, 3, 3, 3]) In [15]: from sc...
python|sorting|numpy
17
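The accepted approach above can be sketched as follows; the subtraction by 1 is an addition here, to turn scipy's 1-based ranks into the 0-based output the asker shows:

```python
import numpy as np
from scipy.stats import rankdata

a = np.array([1, 1, 1, 2, 2, 3, 3, 3, 3])

# method='min' gives every tied element the lowest rank in its group;
# ranks are 1-based, so subtract 1 to match the asker's desired output
ranks = rankdata(a, method='min') - 1
print(ranks)  # [0 0 0 3 3 5 5 5 5]
```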
378,105
39,030,366
NumPy array element not getting updated
<p>I have a NumPy array as follows:</p> <pre><code>supp = np.array([['A', '5', '0'], ['B', '3', '0'], ['C', '4', '0'], ['D', '1', '0'], ['E', '2', '0']]) </code></pre> <p>Now, I want to update the row[2] as row[1]/6. I'm using..</p> <p><code>for row in supp: row[2] = row[1].astype(int) / 6</code></p> <p>But row...
<p>The problem is that an <code>np.array</code> has only one type which is automatically assumed to be strings <code>supp.dtype == '|S1'</code> since your input contains only strings of length <code>1</code>. So numpy will automatically convert your updated inputs to strings of length <code>1</code>, <code>'0'</code>s ...
python|arrays|python-3.x|numpy
5
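A minimal sketch of the truncation the answer describes, using the array from the question (the separate float array at the end is one possible fix, not part of the original answer):

```python
import numpy as np

supp = np.array([['A', '5', '0'],
                 ['B', '3', '0'],
                 ['C', '4', '0'],
                 ['D', '1', '0'],
                 ['E', '2', '0']])
print(supp.dtype)  # fixed-width strings of length 1 ('<U1' on Python 3)

# Any value written into the array is cast to that dtype, so the quotient
# is truncated to its first character
supp[0, 2] = str(int(supp[0, 1]) / 6)
print(supp[0, 2])  # '0' -- truncated, not 0.8333...

# One fix: keep the numeric column in its own float array
vals = supp[:, 1].astype(float) / 6
print(vals[0])
```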
378,106
39,278,042
Storing pure python datetime.datetime in pandas DataFrame
<p>Since <code>matplotlib</code> doesn't support <a href="https://github.com/pydata/pandas/issues/8113" rel="nofollow noreferrer">either</a> <code>pandas.Timestamp</code> <a href="https://stackoverflow.com/questions/22048792/how-do-i-display-dates-when-plotting-in-matplotlib-pyplot">or</a> <code>numpy.datetime64</code>, 
<p>The use of <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#converting-to-python-datetimes" rel="nofollow noreferrer">to_pydatetime()</a> is correct.</p> <pre><code>In [87]: t = pd.DataFrame({'date': [pd.to_datetime('2012-12-31'), pd.to_datetime('2013-12-31')]}) In [88]: t.date.dt.to_pydatetime...
python|python-3.x|datetime|pandas
4
378,107
39,313,425
Extracting the last problem_id for every user
<p>I have a dataframe with the following columns: <code>['user_id', 'problem_id', 'timestamp']</code>. So basically who solved what and when. Clearly there are users who solved many many problems.</p> <p>I want to extract the last problem solved by every user. My first approach was to group by user_id and get the maxi...
<p>If your <code>timestamp</code> sorts naturally - ie - latest values are last, then:</p> <pre><code>df_s.sort_values('timestamp').groupby('user_id').last() </code></pre> <p>Should give you what you want as <code>groupby</code> retains the order of its input for grouping...</p>
pandas|dataframe
1
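The sort-then-take-last answer above can be sketched with made-up data (the column names come from the question; the timestamps are invented for illustration):

```python
import pandas as pd

df_s = pd.DataFrame({
    'user_id':    [1, 1, 2, 2, 2],
    'problem_id': [10, 11, 20, 21, 22],
    'timestamp':  pd.to_datetime(['2016-01-01', '2016-01-03',
                                  '2016-01-02', '2016-01-05', '2016-01-04']),
})

# Sort by time first, then take the last row per user
last = df_s.sort_values('timestamp').groupby('user_id').last()
print(last['problem_id'].to_dict())  # {1: 11, 2: 21}
```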
378,108
39,378,363
Remove nan rows in a scipy sparse matrix
<p>I am given a (normalized) sparse adjacency matrix and a list of labels for the respective matrix rows. Because some nodes have been removed by another sanitization function, there are some rows containing NaNs in the matrix. I want to find these rows and remove them <em>as well as their respective labels</em>. Here ...
<p>If I make a sample array:</p> <pre><code>In [328]: A=np.array([[1,0,0,np.nan],[0,np.nan,np.nan,0],[1,0,1,0]]) In [329]: A Out[329]: array([[ 1., 0., 0., nan], [ 0., nan, nan, 0.], [ 1., 0., 1., 0.]]) In [331]: M=sparse.lil_matrix(A) </code></pre> <p>This lil sparse matrix is store...
python|numpy|scipy|sparse-matrix|networkx
1
378,109
39,156,650
How to print or cout a tensor?
<p>I have a tensor that I would like to print for debugging</p> <pre><code>tensorflow::Tensor image_tensor; </code></pre> <p>I tried </p> <pre><code>std::cout &lt;&lt; &amp;image_tensor; </code></pre> <p>But I get something like this:</p> <pre><code>0x16fd81cf8I </code></pre>
<p>You have to use the <code>.vec</code> or <code>.flat</code> methods. Let's say the tensor holds <code>int8</code> values:</p> <pre><code>// for i in image_tensor.size ... cout &lt;&lt; i &lt;&lt; " " &lt;&lt; image_tensor.vec&lt;int8&gt;()(i) </code></pre> <p>see also this answer <a href="https://stackoverflow.com/questions/39148671/how-to-fill-a-ten...
c++|tensorflow
3
378,110
39,279,858
How to draw a graphical count table in pandas
<p>I have a dataframe df with two columns <code>customer1</code> and <code>customer2</code> which are string valued. I would like to make a square graphical representation of the count number for each pair from those two columns. </p> <p>I can do</p> <pre><code>df[['customer1', 'customer2']].value_counts() </cod...
<p><strong>UPDATE:</strong> </p> <blockquote> <p>Is it possible to sort the rows/columns so the highest count rows are at the top? In this case the order would be b, a, c</p> </blockquote> <p>IIUC you can do it this way:</p> <pre><code>In [80]: x = df.pivot_table(index='customer1',columns='customer2',ag...
python|pandas
2
378,111
19,529,402
Need some basic Pandas help -- trying to print a dataframe row by row and perform operations on the elements within specific columns of that row
<p>Basically, I have a query returning a dataframe and row by row I want to generate new queries using the elements of the row as arguments for the next query -- the example walks through a simplified version and understanding that should be sufficient!</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; df2 ...
<p>You can unroll the tuples as you iterate through the dataframe and then just print the columns you desire:</p> <pre><code>for row in df2.itertuples(): index, a, b, c = row print 'in %s say %s!'%(a,b) in colorado say go buffs! in california say go bears! in texas say go sooners! in oregon say go ducks! </c...
python|pandas
2
378,112
19,306,211
OpenCV cv2 image to PyGame image?
<pre><code>def cvimage_to_pygame(image): """Convert cvimage into a pygame image""" return pygame.image.frombuffer(image.tostring(), image.shape[:2], "RGB") </code></pre> <p>The function takes a numpy array taken from the cv2 camera. When I display the returned pyGame image on...
<p>In the <code>shape</code> field the width and height parameters are swapped. Replace the argument:</p> <pre><code>image.shape[:2] # gives you (height, width) tuple </code></pre> <p>with</p> <pre><code>image.shape[1::-1] # gives you (width, height) tuple </code></pre>
python|numpy|opencv|pygame
8
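The shape-swap above is easy to check with a dummy frame (the 480x640 size is an assumption for illustration):

```python
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # height x width x channels

print(frame.shape[:2])     # (480, 640) -- (height, width)
print(frame.shape[1::-1])  # (640, 480) -- (width, height), what pygame expects
```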
378,113
19,630,265
ValueError: Shape of passed values is (3, 27), indices imply (4, 27) # pandas DataFrame
<p>Here is my numpy array:</p> <pre><code>import numpy as np num = np.array([[ 0.17899619 0.33093259 0.2076353 0.06130814] [ 0.20392888 0.42653105 0.33325891 0.10473969] [ 0.17038247 0.19081956 0.10119709 0.09032416] [-0.10606583 -0.13680513 -0.13129103 -0.0368...
<p><strong>Update:</strong> Make sure that <code>big_array</code> has 4 columns. The shape of <code>big_array</code> does not match the shape of your sample array <code>num</code>. That's why the example code is working, but your real code not.</p> <hr> <p>I was unable to reproduce your error message. On my system (W...
python|python-3.x|numpy|pandas
8
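The mismatch the answer points at can be reproduced with random data of the right size (27 rows as in the error message, 3 data columns versus 4 labels):

```python
import numpy as np
import pandas as pd

num = np.random.randn(27, 3)   # 27 rows, 3 columns of data

# 4 labels for 3 columns reproduces the error from the title
mismatch_error = False
try:
    pd.DataFrame(num, columns=list('ABCD'))
except ValueError:
    mismatch_error = True
print(mismatch_error)  # True

# Matching the label count to the column count works
df = pd.DataFrame(num, columns=list('ABC'))
print(df.shape)  # (27, 3)
```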
378,114
19,711,999
rgb_to_hsv and backwards using python and numpy
<p>I tried to execute this code <a href="https://stackoverflow.com/a/7274986/2349589">here</a> as described in this answer. But I can't seem to get away from dividing by zero. </p> <p>I tried to copy this code from CamanJS for transforming from rgb to hsv but I get the same thing.</p> <pre><code>RuntimeWarnin...
<p>The error comes from the fact that <code>numpy.where</code> (and <code>numpy.select</code>) computes all its arguments, even if they aren't used in the output. So in your line <code>hsv[...,1] = np.where(maxc==0, 0, dif/maxc)</code>, <code>dif / maxc</code> is computed even for elements where <code>maxc == 0</code>,...
python|image|image-processing|numpy|python-imaging-library
0
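One way to avoid evaluating the division on the zero entries, as the answer suggests, is <code>np.divide</code> with a <code>where=</code> mask; the toy arrays below are invented stand-ins for the <code>dif</code> and <code>maxc</code> in the question:

```python
import numpy as np

maxc = np.array([0.0, 2.0, 4.0])
dif  = np.array([1.0, 1.0, 2.0])

# np.where(maxc == 0, 0, dif / maxc) still evaluates dif / maxc everywhere,
# warning on the zero entries; np.divide with where= skips them instead
sat = np.zeros_like(dif)
np.divide(dif, maxc, out=sat, where=(maxc != 0))
print(sat)  # [0.  0.5 0.5]
```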
378,115
19,602,864
How to plot in a specific axis with DataFrame.hist(by=) in pandas
<p>I am trying to plot several histogram groups in the same figure. Each group contains two conditions, and I am therefore using the 'by=' argument from pandas' histogram options. However, this does not work as I expected and pandas creates a new figure instead of plotting in the axis I am passing. I tried to pass four axes
<p>This is fixed as of <a href="https://github.com/pydata/pandas/pull/7736" rel="nofollow noreferrer">GH7736</a> which was merged into pandas 0.15.0</p> <p>The correct way to pass multiple plots into an existing figure is to first create all the desired axes and then pass all of them to the pandas plotting command.</p...
python|matplotlib|pandas
1
378,116
19,705,200
Multiprocessing with numpy makes Python quit unexpectedly on OSX
<p>I've run into a problem where Python quits unexpectedly when running multiprocessing with numpy. I've isolated the problem, so that I can now confirm that the multiprocessing works perfectly when running the code stated below:</p> <pre><code>import numpy as np from multiprocessing import Pool, Process import time i...
<p>I figured out a workaround to the problem. The problem occurs when Numpy is used together with BLAS before initializing a multiprocessing instance. My workaround is simply to put the Numpy code (running BLAS) into a single process and then running the multiprocessing instances afterwards. This is not a good coding s...
python|macos|numpy|multiprocessing
6
378,117
12,970,842
Python multiple search in arrays
<p><code>idtopick</code> is an array of ids </p> <pre><code> idtopick=array([50,48,12,125,3458,155,299,6,7,84,58,63,0,8,-1]) </code></pre> <p><code>idtolook</code> is another array containing the ids I'm interested in</p> <pre><code> idtolook=array([0,8,12,50]) </code></pre> <p>I would like to store in anot...
<p>You can use sorting:</p> <pre><code> sorter = np.argsort(idtopick, kind='mergesort') # you need stable sorting sorted_ids = idtopick[sorter] positions = np.searchsorted(sorted_ids, idtolook) positions = sorter[positions] </code></pre> <p>Note that it won't throw an error though if there is an <code>idtolook</c...
python|arrays|numpy
3
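The sorting recipe above, run end to end on the arrays from the question:

```python
import numpy as np

idtopick = np.array([50, 48, 12, 125, 3458, 155, 299, 6, 7, 84, 58, 63, 0, 8, -1])
idtolook = np.array([0, 8, 12, 50])

sorter = np.argsort(idtopick, kind='mergesort')  # stable sort
sorted_ids = idtopick[sorter]
positions = sorter[np.searchsorted(sorted_ids, idtolook)]
print(positions)            # indices of idtolook inside idtopick

# Every looked-up position points back at the requested id
print(idtopick[positions])  # [ 0  8 12 50]
```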
378,118
13,221,218
How to select rows within a pandas dataframe based on time only when index is date and time
<p>I have a dataframe that looks like this:</p> <pre><code>&lt;class 'pandas.core.frame.DataFrame'&gt; DatetimeIndex: 2016910 entries, 2009-01-02 04:51:00 to 2012-11-02 20:00:00 Freq: T Data columns: X1 2016910 non-null values X2 2016910 non-null values X3 2016910 non-null values X4 2016910 non-null va...
<p>A regular DatetimeIndex allows you to use the between_time method.</p> <pre><code>In [12]: data = """\ 20090102,04:51:00,89.9900,89.9900,89.9900,89.9900,100 20090102,05:36:00,90.0100,90.0100,90.0100,90.0100,200 20090102,05:44:00,90.1400,90.1400,90.1400,90.1400,100 20090102,05:50:00,90.0500,90.0500,90.0500,90.0500,500 ...
dataframe|pandas
2
378,119
12,931,569
Green to red colormap in matplotlib, centered on the median of the data
<p>In my application I'm transitioning from R to native Python (scipy + matplotlib) where possible, and one of the biggest tasks was converting from a R heatmap to a matplotlib heatmap. <a href="http://code.activestate.com/recipes/578175-hierarchical-clustering-heatmap-python/" rel="noreferrer">This post</a> guided me ...
<p>By default, matplotlib will normalise the colormap such that the maximum colormap value will be the maximum of your data. Likewise for the minimum of your data. This means that the median of the colormap (the middle value) will line up with the interpolated median of your data (interpolated if you don't have a data ...
python|numpy|matplotlib|heatmap
4
378,120
12,861,314
Interleave rows of two numpy arrays in Python
<p>I wanted to interleave the rows of two numpy arrays of the same size. I came up with this solution.</p> <pre><code># A and B are same-shaped arrays A = numpy.ones((4,3)) B = numpy.zeros_like(A) C = numpy.array(zip(A[::1], B[::1])).reshape(A.shape[0]*2, A.shape[1]) print(C) </code></pre> <p>Outputs</p> <pre><code>[[ ...
<p>It is maybe a bit clearer to do:</p> <pre><code>A = np.ones((4,3)) B = np.zeros_like(A) C = np.empty((A.shape[0]+B.shape[0],A.shape[1])) C[::2,:] = A C[1::2,:] = B </code></pre> <p>and it's probably a bit faster as well, I'm guessing.</p>
python|arrays|numpy
20
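The preallocate-and-slice answer above, spelled out in full:

```python
import numpy as np

A = np.ones((4, 3))
B = np.zeros_like(A)

# Preallocate the target and fill even/odd rows with slice assignment
C = np.empty((A.shape[0] + B.shape[0], A.shape[1]), dtype=A.dtype)
C[::2] = A
C[1::2] = B
print(C.shape)  # (8, 3)
print(C[:3])    # rows alternate: ones, zeros, ones, ...
```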
378,121
13,078,751
Combine duplicated columns within a DataFrame
<p>If I have a dataframe that has columns that include the same name, is there a way to combine the columns that have the same name with some sort of function (i.e. sum)?</p> <p>For instance with:</p> <pre><code>In [186]: df["NY-WEB01"].head() Out[186]: NY-WEB01 NY-WEB01 DateTime 2012-10-1...
<p>I believe this does what you are after:</p> <pre><code>df.groupby(lambda x:x, axis=1).sum() </code></pre> <p>Alternatively, between 3% and 15% faster depending on the length of the df:</p> <pre><code>df.groupby(df.columns, axis=1).sum() </code></pre> <p>EDIT: To extend this beyond sums, use <code>.agg()</code> (...
python|pandas|dataframe|group-by|pandas-groupby
48
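A sketch of the duplicate-column sum above with invented data; note that <code>df.groupby(..., axis=1)</code> is deprecated in recent pandas, so the transpose-based spelling below is a substitution of mine, not the wording of the original answer:

```python
import pandas as pd

# Two columns share the name 'NY-WEB01', mirroring the question
df = pd.DataFrame([[1, 2, 5], [3, 4, 6]],
                  columns=['NY-WEB01', 'NY-WEB01', 'NY-WEB02'])

# Equivalent of df.groupby(df.columns, axis=1).sum() without the
# deprecated axis=1: transpose, group on the index level, transpose back
combined = df.T.groupby(level=0).sum().T
print(combined)
#    NY-WEB01  NY-WEB02
# 0         3         5
# 1         7         6
```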
378,122
29,292,522
String manipulations using Python Pandas
<p>I have some name and ethnicity data, for example:</p> <pre><code>John Wick English Black Widow French </code></pre> <p>I then do a bit of manipulation to make the name as below</p> <pre><code>John Wick -&gt; john#wick?????????????????????????????????? Black Widow -&gt; black#widow????????????????????????????...
<p>1) Regarding the loop, I can't think of a better way than what you're already doing</p> <p>2) Try <code>frame3["name_len"] = frame3["name"].map(lambda x : len(re.findall('[a-zA-Z]', x)))</code></p>
string|python-2.7|pandas
1
378,123
28,995,937
Convert python byte string to numpy int?
<p>Is there a direct way instead of the following?</p> <pre><code>np.uint32(int.from_bytes(b'\xa3\x8eq\xb5', 'big')) </code></pre>
<p>Using <code>np.fromstring</code> for this is deprecated now. Use <code>np.frombuffer</code> instead. You can also pass in a normal numpy dtype:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np np.frombuffer(b'\xa3\x8eq\xb5', dtype=np.uint32) </code></pre>
python|numpy
8
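One detail worth spelling out next to the answer above: the asker's <code>int.from_bytes(..., 'big')</code> implies big-endian byte order, so the dtype needs an explicit byte-order flag:

```python
import numpy as np

raw = b'\xa3\x8eq\xb5'

# dtype='>u4' reads the bytes big-endian, matching int.from_bytes(raw, 'big');
# plain np.uint32 would use the machine's native (usually little) endianness
val = np.frombuffer(raw, dtype='>u4')[0]
print(val == int.from_bytes(raw, 'big'))  # True
```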
378,124
29,075,785
Assigning real and imaginary parts of a complex array from two arrays containing the two parts - Python
<p>I have two arrays, <code>field_in_k_space_REAL</code> and <code>field_in_k_space_IMAGINARY</code>, that contain, respectively, the real and imaginary parts of an array of complex numbers, <code>field_in_k_space_TOTAL</code>, which I would like to create. Why does the following assignment not work, producing the erro...
<p>The suggestions by @Ffisegydd (and by @jonsharpe in a comment) are good ones. See if they work for you.</p> <p>Here, I'll just point out that the <code>real</code> and <code>imag</code> attributes of the <em>array</em> are writeable, and the vectorized assignment works, so you can simplify your code to</p> <pre><...
python|arrays|numpy|complex-numbers
4
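The writable <code>real</code>/<code>imag</code> views mentioned in the answer can be sketched with short made-up arrays in place of the question's <code>field_in_k_space_*</code> names:

```python
import numpy as np

field_re = np.array([1.0, 2.0, 3.0])
field_im = np.array([0.5, 0.25, 0.125])

field = np.zeros(field_re.shape, dtype=complex)
field.real = field_re   # .real and .imag are writable array views
field.imag = field_im
print(field)  # [1.+0.5j 2.+0.25j 3.+0.125j]
```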
378,125
33,926,947
How to perform cumulative calculations in pandas that restart with each change in date?
<p>This is a simplified version of my data:</p> <pre><code> Date and Time Price Volume 0 2015-01-01 17:00:00.211 2030.25 342 1 2015-01-01 17:00:02.456 2030.75 203 2 2015-01-02 17:00:00.054 2031.00 182 3 2015-01-02 17:00:25.882 2031.75 249 </code></pre> <p>I would like to calculate cumulative vo...
<p>The easiest way would probably be to set 'Date and Time' as the index and then use <code>groupby</code> with <code>TimeGrouper</code> to group the dates. Then you can apply <code>cumsum()</code>:</p> <pre><code>&gt;&gt;&gt; df2 = df.set_index('Date and Time') &gt;&gt;&gt; df2['Volume'] = df2.groupby(pd.TimeGrouper(...
python|pandas|date|dataframe|cumsum
4
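The grouping idea above still works in current pandas, but <code>pd.TimeGrouper</code> has since been removed; <code>pd.Grouper(freq='D')</code> is the modern spelling (a substitution of mine), shown here on the question's sample data with an assumed <code>CumVolume</code> column name:

```python
import pandas as pd

df = pd.DataFrame({
    'Date and Time': pd.to_datetime(['2015-01-01 17:00:00.211',
                                     '2015-01-01 17:00:02.456',
                                     '2015-01-02 17:00:00.054',
                                     '2015-01-02 17:00:25.882']),
    'Price':  [2030.25, 2030.75, 2031.00, 2031.75],
    'Volume': [342, 203, 182, 249],
})

# Set the timestamp as the index, group by calendar day, then cumsum;
# the running total restarts with each new date
df2 = df.set_index('Date and Time')
df2['CumVolume'] = df2.groupby(pd.Grouper(freq='D'))['Volume'].cumsum()
print(df2['CumVolume'].tolist())  # [342, 545, 182, 431]
```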
378,126
33,942,316
Pandas: Combining and summing rows based on values from other rows
<p>In a Panda's data frame, I'd like combine all <code>'other'</code> rows from <code>col_2</code> into a one row for each value from <code>col_1</code> by assigning <code>col_3</code> the sum of all corresponding values.</p> <p><strong>EDIT - Clarification:</strong> In total, I have about 20 columns (where values in ...
<p>You can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html#pandas.DataFrame.groupby" rel="nofollow"><code>groupby</code></a> on col_1 and col_2 and call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.sum.html#pandas.core.groupby.Gr...
python-2.7|pandas
0
378,127
33,846,323
how to assign labels of one numpy array to another numpy array and group accordingly?
<p>I have the following labels </p> <pre><code>&gt;&gt;&gt; lab array([2, 2, 2, 2, 2, 3, 3, 0, 0, 0, 0, 1]) </code></pre> <p>I want to assign this label to another numpy array i.e</p> <pre><code>&gt;&gt;&gt; arr array([[81, 1, 3, 87], # 2 [ 2, 0, 1, 0], # 2 [13, 6, 0, 0], # 2 [14, 0,...
<p>If the labels of equal values are contiguous, as in your example, then you may use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.reduceat.html" rel="nofollow"><code>np.add.reduceat</code></a>:</p> <pre><code>&gt;&gt;&gt; lab array([2, 2, 2, 2, 2, 3, 3, 0, 0, 0, 0, 1]) &gt;&gt;&gt; idx = ...
python-3.x|numpy
1
378,128
33,680,750
convert (nx2) array of floats into (nx1) array of 2-tuples
<p>I have a NumPy float array</p> <pre class="lang-py prettyprint-override"><code>x = np.array([ [0.0, 1.0], [2.0, 3.0], [4.0, 5.0] ], dtype=np.float32 ) </code></pre> <p>and need to convert it into a NumPy array with a tuple dtype,</p> <pre class="lang-py prettyprint-override"><code>y = np.a...
<p>This is close:</p> <pre><code>In [122]: dt=np.dtype([('x',float,(2,))]) In [123]: y=np.zeros(x.shape[0],dtype=dt) In [124]: y Out[124]: array([([0.0, 0.0],), ([0.0, 0.0],), ([0.0, 0.0],)], dtype=[('x', '&lt;f8', (2,))]) In [125]: y['x']=x In [126]: y Out[126]: array([([0.0, 1.0],), ([2.0, 3.0],), ([4.0...
python|arrays|numpy|casting|user-defined-types
1
378,129
33,559,875
Querying dataframe based on numerical similarities between rows
<p>I have a dataframe like this:</p> <pre><code> Allotment Date NDVI_Kurtosis NDVI_Skewness 1 D 19840621 1.02 3.06 2 D 19850619 1.76 2.56 3 A 19840621 3.66 3.50 4 A 19850619 1.56 ...
<p>You can use the shift function to create new columns and then compare them to the initial columns.</p> <pre><code>import pandas as pd df=pd.read_clipboard() df['NDVI_Kurtosis_lag']=df['NDVI_Kurtosis'].shift(1).fillna(df['NDVI_Kurtosis']) df['NDVI_Skewness_lag']=df['NDVI_Skewness'].shift(1).fillna(df['NDVI...
python|pandas
0
378,130
33,794,750
replace min value to another in numpy array
<p>Let's say we have this array and I want to replace the minimum value with the number 50</p> <pre><code>import numpy as np numbers = np.arange(20) numbers[numbers.min()] = 50 </code></pre> <p>So the output is <code>[50, 1, 2, ..., 19]</code></p> <p>But now I have problems with this:</p> <pre><code>numbers = np.arange(20...
<p>You need to use <code>numpy.argmin</code> instead of <code>numpy.min</code>:</p> <pre><code>In [89]: numbers = np.arange(20).reshape(5,4) In [90]: numbers[np.arange(len(numbers)), numbers.argmin(axis=1)] = 50 In [91]: numbers Out[91]: array([[50, 1, 2, 3], [50, 5, 6, 7], [50, 9, 10, 11], ...
python|arrays|numpy|min
7
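The accepted fix above, run on the 2-D case from the question:

```python
import numpy as np

numbers = np.arange(20).reshape(5, 4)

# numbers.min() returns the minimum *value*, which is the wrong thing to
# index with; argmin(axis=1) gives the column of each row's minimum
numbers[np.arange(len(numbers)), numbers.argmin(axis=1)] = 50
print(numbers[:, 0])  # [50 50 50 50 50] -- each row's minimum sat in column 0
```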
378,131
33,728,965
difficult to run the program
<p>I'm trying to run the program below, but have run into this error.</p> <p>Traceback (most recent call last):</p> <pre><code>File "C:\Users\danil\Desktop\HK1.py", line 76, in &lt;module&gt; img1c = cv2.imread(sys.argv[1]) IndexError: list index out of range </code></pre> <p>The code I am using is described below:...
<pre><code>if __name__ == '__main__': img1c = cv2.imread(sys.argv[1]) img2c = cv2.imread(sys.argv[2]) </code></pre> <p>needs to be changed to something like</p> <pre><code>if __name__ == '__main__': if sys.argv[2:]: arg1 = sys.argv[1] arg2 = sys.argv[2] else: print('need at least 2 ar...
python|numpy
0
378,132
23,880,138
Display a 3D bar graph using transparency and multiple colors in matplotlib
<p>I have a dataframe where rows represent hours of the day and the columns represent time frequencies. The aim is to create a 3D bar chart and each column represented a different color. My dataframe is as follows </p> <pre><code>frec=pd.read_csv('tiempo.csv', parse_dates='Horas',index_col='Horas') frec.index=[date.st...
<p>Create <code>colors</code> with the following method:</p> <pre><code>values = np.linspace(0.2, 1., frec.shape[0]) cmaps = [cm.Blues, cm.Reds, cm.Greens] colors = np.hstack([c(values) for c in cmaps]).reshape(-1, 4) </code></pre> <p>Here is the output:</p> <p><img src="https://i.stack.imgur.com/I7m1a.png" alt="enter ima...
python|matplotlib|pandas
3
378,133
23,667,574
How to add two columns efficiently in Pandas DataFrame?
<p>I have quite large dataset (over 6 million rows with just a few columns). When I try to add two float64 columns (data['C'] = data.A + data.B) it gives me a memory error:</p> <pre><code>Traceback (most recent call last): File "01_processData.py", line 354, in &lt;module&gt; prepareData(snp) File "01_processD...
<p>I cannot reproduce what you are doing (as it won't hit the alignment issue as the indexes are the same).</p> <p>In master/0.14 (releasing shortly)</p> <pre><code>In [2]: df = DataFrame(np.random.randn(6000000,2),columns=['A','C'],index=pd.MultiIndex.from_product([['foo','bar'],range(3000000)])) In [3]: df.values....
python|pandas
2
378,134
23,574,614
Appending Pandas dataframe to sqlite table by primary key
<p>I want to append the Pandas dataframe to an existing table in a sqlite database called 'NewTable'. NewTable has three fields (ID, Name, Age) and ID is the primary key. My database connection:</p> <pre><code>import sqlite3 DB='&lt;path&gt;' conn = sqlite3.connect(DB) </code></pre> <p>The dataframe I want to appen...
<p>You can use SQL functionality <code>insert or replace</code></p> <pre><code>query=''' insert or replace into NewTable (ID,Name,Age) values (?,?,?) ''' conn.executemany(query, test.to_records(index=False)) conn.commit() </code></pre>
python|sqlite|pandas
11
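The upsert answer above can be sketched end to end with an in-memory database; the table layout and sample rows are invented, and <code>.tolist()</code> is added so sqlite3 receives plain Python values rather than numpy scalars:

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE NewTable (ID INTEGER PRIMARY KEY, Name TEXT, Age INTEGER)')
conn.execute("INSERT INTO NewTable VALUES (1, 'Alice', 30)")

test = pd.DataFrame({'ID': [1, 2], 'Name': ['Alicia', 'Bob'], 'Age': [31, 25]})

# "insert or replace" upserts on the primary key; .tolist() turns the
# numpy record array into plain Python tuples that sqlite3 can bind
query = 'insert or replace into NewTable (ID, Name, Age) values (?, ?, ?)'
conn.executemany(query, test.to_records(index=False).tolist())
conn.commit()

rows = conn.execute('SELECT * FROM NewTable ORDER BY ID').fetchall()
print(rows)  # [(1, 'Alicia', 31), (2, 'Bob', 25)]
```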
378,135
23,530,848
xtick max value setup with panda and matplotlib
<p>How is it possible that the last xtick would be the last value of the data?</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt a = [583, 1379, 1404, 5442, 5512, 5976, 5992, 6111, 6239, 6375, 6793, 6994, 7109, 7149, 7210, 7225, 7459, 8574, 9154, 9417, 9894, 10119, 10355, 10527, 11933, 12170, 12172, 1...
<p>Not sure exactly what you need, but you'll need to delete the line</p> <pre><code>f = plt.figure() </code></pre> <p>if you want your histogram of <code>d</code> and the title/label/xlim etc. to be in the same chart.</p>
python|matplotlib|pandas
1
378,136
23,896,336
Can't see matplotlib plots in iPython notebook
<p>I'm trying to show pandas-generated plots in iPython notebooks (running with <code>pylab=inline</code>), but these have mysteriously stopped working—I'll do something like:</p> <pre><code>In [6]: pd.Series([0,2,4,3,8]).plot() Out[6]: &lt;matplotlib.axes.AxesSubplot at 0x10e69e110&gt; &lt;matplotlib.figure.F...
<pre><code> %pylab inline </code></pre> <p>works fine in my env</p> <pre><code> %pylab inline pd.Series([0,2,4,3,8]).plot() </code></pre> <p><img src="https://i.stack.imgur.com/o71QF.png" alt="enter image description here"></p> <p>my IPython (module) versions:</p> <pre><code>Jinja2==2.7.3 MarkupSafe==0.23 backp...
macos|matplotlib|pandas|ipython|libpng
1
378,137
22,848,196
Using Pandas dataframe with FOR loops
<p>and thank you for looking.</p> <p>I am trying my hand at modifying a Python script to download a bunch of data from a website. I have decided that given the large data that will be used, I am wanting to convert the script to Pandas for this. I have this code so far.</p> <pre><code>snames = ['Index','Node_ID','No...
<p>In your pseudocode, you can do this:</p> <pre><code>dfs = [] for node in nodes:  # 4 nodes so far for index in indexes: data_url = websiteinfo + Id + sampleinformation df = smalldata.read.csv(data_url) dfs.append(df) df = pd.concat(dfs) </code></pre>
python|pandas
0
378,138
22,546,425
How to implement a Boolean search with multiple columns in pandas
<p>I have a pandas df and would like to accomplish something along these lines (in SQL terms):</p> <pre><code>SELECT * FROM df WHERE column1 = 'a' OR column2 = 'b' OR column3 = 'c' etc. </code></pre> <p>Now this works, for one column/value pair:</p> <pre><code>foo = df.loc[df['column']==value] </code></pre> <p>Howe...
<p>You need to enclose multiple conditions in braces due to operator precedence and use the bitwise and (<code>&amp;</code>) and or (<code>|</code>) operators:</p> <pre><code>foo = df[(df['column1']==value) | (df['columns2'] == 'b') | (df['column3'] == 'c')] </code></pre> <p>If you use <code>and</code> or <code>or</c...
python|pandas
115
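The operator-precedence point in the answer can be sketched with a throwaway frame (column names and values invented to match the question's SQL):

```python
import pandas as pd

df = pd.DataFrame({'column1': ['a', 'x', 'y'],
                   'column2': ['z', 'b', 'z'],
                   'column3': ['z', 'z', 'z']})

# Each comparison is parenthesised because | binds tighter than ==
foo = df[(df['column1'] == 'a') | (df['column2'] == 'b') | (df['column3'] == 'c')]
print(foo.index.tolist())  # [0, 1]
```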
378,139
15,011,935
ConfigNumParser ValueError: invalid literal for int() with base 10: ''
<p>This code is typed nearly exactly from a <a href="http://www.scipy.org/Cookbook/Reading_Custom_Text_Files_with_Pyparsing" rel="nofollow">scipy.org cookbook recipe</a> and I can't yet notice any typo's so perhaps the code is oudated? Why does this code parse the numbers correctly but fail on the KeyWord() and QuotedS...
<blockquote> <p>I can't yet notice any typo's so</p> </blockquote> <p>...ooops...</p> <pre><code>(?P&lt;float2&gt;\.\d+)? </code></pre> <p>should be</p> <pre><code>(?P&lt;float2&gt;\.\d+) </code></pre> <p>That fixed it.</p>
python|parsing|numpy|scipy|pyparsing
2
378,140
15,412,597
How can i combine several database files with numpy?
<p>I know that I can read a file with numpy with the genfromtxt command. It works like this:</p> <pre><code>data = numpy.genfromtxt('bmrbtmp',unpack=True,names=True,dtype=None) </code></pre> <p>I can plot the stuff in there easily with:</p> <pre><code>ax.plot(data['field'],data['field2'], linestyle=" ",color="re...
<p>To visit all the files in a directory, use <a href="http://docs.python.org/2/library/os.html#os.walk" rel="nofollow">os.walk</a>.</p> <p>To stack two structured numpy arrays "vertically", use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html" rel="nofollow">np.vstack</a>.</p> <p>To sav...
python|numpy|matplotlib
2
378,141
15,125,961
Start:stop slicing inconsistencies between numpy and Pandas?
<p>I am a bit surprised/confused about the following difference between numpy and Pandas</p> <pre><code>import numpy as np import pandas as pd a = np.random.randn(10,10) &gt; a[:3,0, newaxis] array([[-1.91687144], [-0.6399471 ], [-0.10005721]]) </code></pre> <p>However:</p> <pre><code>b = pd.DataFrame...
<p>This <em>is</em> documented, and it's part of <a href="http://pandas.pydata.org/pandas-docs/dev/indexing.html#indexing-advanced" rel="nofollow">Advanced Indexing</a>. The key here is that you're not using a stop index at all.</p> <p>The <code>ix</code> attribute is a special thing that lets you do various kinds of ...
python|numpy|pandas
3
378,142
13,419,822
pandas dataframe, copy by value
<p>I noticed a bug in my program and the reason it is happening is because it seems that pandas is copying by reference a pandas dataframe instead of by value. I know immutable objects will always be passed by reference but pandas dataframe is not immutable so I do not see why it is passing by reference. Can anyone pro...
<p>Python passes object references ("call by sharing"), so binding a DataFrame to a new name never copies its data; there is no "pass by value". If you want an explicit copy of a pandas object, use <code>new_frame = frame.copy()</code>.</p>
python|pandas
41
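The aliasing-versus-copying distinction above in a few lines (the tiny frame is invented for illustration):

```python
import pandas as pd

frame = pd.DataFrame({'a': [1, 2, 3]})

alias = frame             # both names refer to the same object
alias.loc[0, 'a'] = 99
print(frame.loc[0, 'a'])  # 99 -- the "copy" was just another reference

fresh = frame.copy()      # an independent copy of the data
fresh.loc[0, 'a'] = 0
print(frame.loc[0, 'a'])  # still 99
```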
378,143
29,417,763
Plot rolling mean together with data
<p>I have a DataFrame that looks something like this:</p> <pre><code>####delays: Worst case Avg case 2014-10-27 2.861433 0.953108 2014-10-28 2.899174 0.981917 2014-10-29 3.080738 1.030154 2014-10-30 2.298898 0.711107 2014-10-31 2.856278 0.998959 2014-11-01 3.118587 1.147104 ... </c...
<p>You need to tell pandas where you want to plot. By default pandas creates a new figure. </p> <p>Just modify these 2 lines:</p> <pre><code>delays.plot(x_compat=True, style='r--') rolling.plot(style='r') </code></pre> <p>by:</p> <pre><code>ax_delays = delays.plot(x_compat=True, style='--', color=["r","b"]) rolli...
python|pandas|matplotlib
7
378,144
29,468,761
Columns get lost on pandas frame join
<p>I have two datasets:</p> <ol> <li>Dataset <strong>A</strong> represents the number of fans a player of a team has in a specific year</li> <li>Dataset <strong>B</strong> represents the number of wins a team has in a specific game</li> </ol> <p>I would now like to combine both data frames and aggregate the data per ...
<p><strong>1)</strong> <code>reset_index()</code> can be used only once.</p> <pre><code>aa = a.groupby(['year', 'team']).sum() bb = b.groupby(['year', 'team']).sum() aa.join(bb).reset_index() </code></pre> <p><strong>2)</strong> Alternatively, don't create levels for <code>aa</code> and <code>bb</code> using <code>a...
python|pandas
1
378,145
29,463,967
Plotting labeled time series data pandas
<p>I am newbie in pandas. I want to plot labeled time series (daily activity) data in pandas. On horizontal (x-axis) represents time and on vertical (y-axis) represents label each activity. On the horizontal, I want a point where the time series says activity happened. My dataset looks like below:</p> <pre><code> [...
<p>This is really a matplotlib question ...</p> <p>I have not sought to replicate every feature of your example plot, but you will get the drift.</p> <p><img src="https://i.stack.imgur.com/pByr8.png" alt="Example image"></p> <p>The code for this image follows ...</p> <pre><code># --- initial data import pandas as p...
python|pandas|matplotlib
0
378,146
29,418,236
Apply where function [SQL like] on datatime Pandas
<p>I have a dataset like:</p> <pre><code> Date/Time Byte </code></pre> <p>0 &nbsp; &nbsp; &nbsp; 2015-04-02 10:44:31 &nbsp; &nbsp; &nbsp; &nbsp; 1 <br> 1 &nbsp; &nbsp; &nbsp; 2015-04-02 10:44:21 &nbsp; &nbsp; &nbsp; 10 <br> 2 &nbsp; &nbsp; &nbsp; 2015-04-02...
<p>use pd.to_datetime() on your timestamps if they are stored as strings.</p> <p>Then you can do</p> <pre><code>df[df['a_date_col'].apply(lambda x: x.hour) == 11] </code></pre> <p>Or you can use the .dt accessor:</p> <pre><code>df[df['a_date_col'].dt.hour == 11] </code></pre> <p><a href="http://pandas.pydata.org/p...
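A small runnable sketch of the <code>.dt.hour</code> approach, with made-up timestamps:

```python
import pandas as pd

df = pd.DataFrame({
    "Date/Time": ["2015-04-02 10:44:31", "2015-04-02 11:15:00",
                  "2015-04-02 11:59:59", "2015-04-02 12:00:00"],
    "Byte": [1, 10, 5, 7],
})

# Parse the strings first, then filter on the hour component.
df["Date/Time"] = pd.to_datetime(df["Date/Time"])
eleven = df[df["Date/Time"].dt.hour == 11]
print(eleven["Byte"].tolist())  # [10, 5]
```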
python|pandas
0
378,147
29,377,590
Index error - Python, Numpy, MatLab
<p>I have converted a section of MatLab code to Python using the numpy and scipy libraries. I am however stuck on the following index error;</p> <pre><code>IndexError: index 698 is out of bounds for axis 3 with size 2 </code></pre> <p>698 is the size of the time list.</p> <p>it occurs in the last section of code, on...
<p>The problem is that you are not defining <code>i</code> in the loop. <code>i</code> is being left over from the last time it was used. Specifically, it is using the last value of <code>i</code> from the previous loop. This value is <code>len(t)-1</code>, which is much larger than the length of the 3rd dimension o...
python|matlab|numpy
1
378,148
29,672,674
Python pandas: equivalent to SQL's aggregate functions?
<p>What is the pandas equivalent of something like:</p> <pre><code>select mykey, sum(Field1) as Field1, avg(Field1) as avg_field1, min(field2) as min_field2 from df group by mykey </code></pre> <p>in SQL? I understand that in pandas I can do</p> <pre><code>grouped = df.groupby('mykey') </code></pre> <p>and then</p...
<p>You can apply multiple functions to multiple fields:</p> <pre><code> f = {'Field1':'sum', 'Field2':['max','mean'], 'Field3':['min','mean','count'], 'Field4':'count' } grouped = df.groupby('mykey').agg(f) </code></pre> <p>Hope this helps! Pandas is a very powerful tool. ...
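On newer pandas (0.25+), the same query can also be written with named aggregation, which yields the SQL-style column aliases directly; a sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({"mykey": ["a", "a", "b"],
                   "Field1": [1, 2, 3],
                   "Field2": [10, 20, 30]})

# select mykey, sum(Field1), avg(Field1) as avg_field1,
#        min(Field2) as min_field2 ... group by mykey
out = df.groupby("mykey").agg(
    Field1=("Field1", "sum"),
    avg_field1=("Field1", "mean"),
    min_field2=("Field2", "min"),
).reset_index()
print(out)
```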
python|sql|pandas
2
378,149
29,404,377
Spliting dataframe in 10 equal parts and merge 9 parts after picking one at a time in loop
<p>I need to split dataframe into 10 parts then use one part as the testset and remaining 9 (merged to use as training set) , I have come up to the following code where I am able to split the dataset , and m trying to merge the remaining sets after picking one of those 10. The first iteration goes fine , but I get fol...
<p>I don't think you have to split the dataframe into 10 parts but just into 2. I use this code for splitting a dataframe into a training set and a validation set:</p> <pre><code>test_index = np.random.choice(df.index, int(len(df.index)/10), replace=False) test_df = df.loc[test_index] train_df = df.loc[~df.index.isin(test_in...
python|numpy|pandas
2
378,150
29,447,457
Covariance with a columns
<p>If I have a numpy array X with <code>X.shape=(m,n)</code> and a second column vector y with <code>y.shape=(m,1)</code>, how can I calculate the covariance of each column of X with y wihtout using a for loop? I expect the result to be of shape <code>(m,1)</code> or <code>(1,m)</code>.</p>
<p>Assuming that the output is meant to be of shape <code>(1,n)</code> i.e. a scalar each for <code>covariance</code> operation for each column of <code>A</code> with <code>B</code> and thus for <code>n</code> columns ending up with <code>n</code> such scalars, you can use two approaches here that use <code>covariance ...
python|numpy|scipy|vectorization
2
378,151
29,719,324
Fortran ordered (column-major) numpy structured array possible?
<p>I am looking for a way to more efficiently assign column of a numpy structured array.</p> <p>Example:</p> <pre><code>my_col = fn_returning_1D_array(...) </code></pre> <p>executes more than two times faster on my machine than the same assignment to the column of a structured array:</p> <pre><code>test = np.ndarra...
<p>These are not equivalent actions. One just generates an array (and assigns it to a variable, a minor action). The other generates the array and fills a column of a structured array.</p> <pre><code>my_col = fn_returning_1D_array(...) test['column1'] = fn_returning_1D_array(...) </code></pre> <p>I think a fairer c...
python|arrays|numpy|recarray|structured-array
1
378,152
29,806,936
Why is Pandas Concatenation (pandas.concat) so Memory Inefficient?
<p>I have about 30 GB of data (in a list of about 900 dataframes) that I am attempting to concatenate together. The machine I am working with is a moderately powerful Linux Box with about 256 GB of ram. However, when I try to concatenate my files I quickly run out of available ram. I have tried all sorts of workarounds...
<p>I've had performance issues concatenating a large number of DataFrames to a 'growing' DataFrame. My workaround was appending all sub DataFrames to a list, and then concatenating the list of DataFrames once processing of the sub DataFrames has been completed.</p>
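A sketch of the collect-then-concatenate pattern described above, with dummy frames:

```python
import pandas as pd

# Append the sub-frames to a list inside the loop and concatenate once at
# the end, instead of growing a DataFrame with repeated concat calls.
parts = []
for i in range(3):
    parts.append(pd.DataFrame({"a": [i]}))

result = pd.concat(parts, ignore_index=True)
print(result["a"].tolist())  # [0, 1, 2]
```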
python|numpy|pandas|ram
19
378,153
62,286,514
Is it possible to multiple updates across rows based on a query on single pandas DataFrame column
<p>I am trying to update a dataframe of country names in one go</p> <pre><code>import pandas as pd df = pd.DataFrame( {'countries': ['United States of America','United Kingdom','Republic of Korea','Netherlands']}) df </code></pre> <p>Output 1:</p> <p><img src="https://i.stack.imgur.com/7btsT.png" alt="Output 1:" /></p>...
<p>Use a dictionary with <code>pd.DataFrame.replace</code>:</p> <pre><code>dd = {'United States of America':'USA', 'United Kingdom':'UK', 'Republic of Korea':'South Korea', 'Netherlands':'Holland'} df.replace(dd) </code></pre> <p>Output:</p> <pre><code> countries 0 USA 1 UK...
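A runnable version of the above for reference:

```python
import pandas as pd

df = pd.DataFrame({"countries": ["United States of America", "United Kingdom",
                                 "Republic of Korea", "Netherlands"]})

dd = {"United States of America": "USA",
      "United Kingdom": "UK",
      "Republic of Korea": "South Korea",
      "Netherlands": "Holland"}

# replace() maps each full cell value through the dictionary.
df = df.replace(dd)
print(df["countries"].tolist())  # ['USA', 'UK', 'South Korea', 'Holland']
```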
python|pandas|dataframe
4
378,154
62,201,732
How to create a scatter plot or heatmap of spearman's correlation
<p>I have two dataframes 'A' and 'B' each of them is having 1000 values(some values are missing from each column).</p> <p>Dataframe 'A'</p> <pre><code>([-1.73731693e-03, -5.11060266e-02, 8.46153465e-02, 1.48671467e-03, 1.52286786e-01, 8.26033395e-02, 1.18621477e-01, -6.81430566e-02, 5.11196597e-02, 2.257...
<p>Check this code:</p> <pre><code>import pandas as pd import seaborn as sns import matplotlib.pyplot as plt A = [...] # insert here your list of values for A B = [...] # insert here your list of values for B df = pd.DataFrame({'A': A, 'B': B}) corr = df.corr(method = 'spearman') sns.heatmap(co...
python|pandas|matplotlib|seaborn|correlation
3
378,155
62,062,929
Flatten in simple feedforward networks
<p>I am working on the CIFAR10 dataset and came across this example in Keras, using data augmentation:</p> <p><a href="https://keras.io/examples/cifar10_cnn/" rel="nofollow noreferrer">https://keras.io/examples/cifar10_cnn/</a></p> <p>The example uses CNN. I want to implement just a simple feedforward network, not CN...
<p>You should <code>Flatten</code> your input:</p> <pre><code>model = Sequential() model.add(Flatten(input_shape=x_train.shape[1:])) model.add(Dense(layer_size, activation = "relu")) model.add(Dense(128, activation = "relu")) model.add(Dense(64, activation = "relu")) model.add(Dense(10, activation = "softmax")) mo...
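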
tensorflow|machine-learning|keras|deep-learning
0
378,156
62,077,680
How to convert values in list of strings into Pandas DataFrame
<p>I would like to convert this list of strings into a Pandas DataFrame with columns ‘Open’, ‘High’, ‘Low’, ‘Close’, ‘PeriodVolume’, OpenInterest’ and ‘Datetime’ as index. How can I extract the values and create the DataFrame? Thanks for your help!</p> <pre><code>['RequestId: , Datetime: 5/28/2020 12:00:00 AM, High: 3...
<p>You can use split() and some for loops put your data into a dictionary and then pass the dictionary to a dataframe.</p> <pre><code>import pandas as pd # First create the list containing your entries. entries = [ 'RequestId: , Datetime: 5/28/2020 12:00:00 AM, High: 323.44, Low: 315.63,' \ ' Open: 316.77, Cl...
python|pandas|string
1
378,157
62,384,998
Extract and map similar textresult with base text, convert into two columns using Pandas
<p>The following is gui result dataframe.</p> <pre><code>Item_id Similarity_Id Result 100 0 textboxerror 101 100 text_input_issue 102 0 menuitemerror 103 100 text_click_issue 104 100 text_caps_error 105 102 menu_drop_down_error 106...
<p>We can use <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.merge.html" rel="nofollow noreferrer"><code>pd.merge</code></a> to <code>left</code> merge the dataframe <code>df</code> with itself on <code>Similarity_ID</code> and <code>Item_id</code> then use <a href="https://pandas.pydata...
python|pandas|dataframe|nlp
1
378,158
62,150,852
Pandas: create columns with deviation from mean in each group
<p>Consider the following <code>DataFrame</code> in Python:</p> <pre><code>import pandas as pd df = pd.DataFrame({'id':[0]*3+[1]*3,'y':np.random.randn(6),'x':np.random.randn(6)}) </code></pre> <p>which gives</p> <pre><code> id y x 0 0 0.721757 1.595646 1 0 0.359601 1.128473 2 0 1.134922 ...
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> for processing multiple columns, subtract by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sub.html" re...
python|pandas|dataframe|pandas-groupby
2
378,159
62,263,547
Python - Numpy 3D array - concatenate issues
<p>I have a txt file with 46 entries that looks like this -</p> <pre><code>2020-05-24T10:57:12.743606#[0.0, 0.0, 0.0653934553265572, 0.0, 1.0, 0.0] 2020-05-24T10:57:12.806380#[0.0, 0.0, 0.0, 0.0, 1.0, 0.0] 2020-05-24T10:57:12.869022#[0.0, 0.0, 0.0, 0.0, 1.0, 0.0] </code></pre> <p>The first argument is a timestamp of ...
<p>Here is an implementation with dummy data</p> <pre><code>collect = [] for i in range(46): #create dummy arrays, simulate list of 3 RGB images a = [np.zeros((70,320,3)) for b in range(3)] # a[0].shape: (70,320,3) #concatenate along axis 2 b = np.concatenate(a, axis=2) # b.shape: (70,320,9)...
python|numpy
3
378,160
62,132,312
Clip or threshold a tensor using condition and zero pad the result in PyTorch
<p>let's say I have a tensor like this</p> <pre><code>w = [[0.1, 0.7, 0.7, 0.8, 0.3], [0.3, 0.2, 0.9, 0.1, 0.5], [0.1, 0.4, 0.8, 0.3, 0.4]] </code></pre> <p>Now I want to eliminate certain values base on some condition (for example greater than 0.5 or not)</p> <pre><code>w = [[0.1, 0.3], [0.3, 0.2, 0.1]...
<p>Here's one straightforward way using <em>boolean masking</em>, <a href="https://pytorch.org/docs/master/generated/torch.split.html" rel="nofollow noreferrer">tensor splitting</a>, and then eventually padding the splitted tensors using <a href="https://pytorch.org/docs/master/generated/torch.nn.utils.rnn.pad_sequence...
python|pytorch|vectorization|tensor|zero-padding
1
378,161
62,103,565
cannot import name 'CenterCrop' from 'tensorflow.keras.layers.experimental.preprocessing'
<p>I am using anaconda env. </p> <p>Python 3.7 keras : 2.3.1 tensorflow: 2.1.0</p> <p>when i want to use CenterCrop and Rescaling modules, pycharm gives me error.</p> <pre><code>from tensorflow.keras.layers.experimental.preprocessing import CenterCrop from tensorflow.keras.layers.experimental.preprocessing import Re...
<p>I've tried the import with tensorflow 2.1.0 (keras 2.2.4 by default) and it gave me the same error you are encountering.</p> <p>Using Tensorflow 2.2.0 with keras 2.3.0 works fine.</p> <p>So you just need to upgrade tensorflow.</p>
python|tensorflow|keras
1
378,162
62,161,183
pandas unique values with condition
<p>I am working with a pandas DataFrame and I need to loop trough the unique values of a column. Such columns might contain values that i dont want to loop through, for instance ''</p> <p>normally I do:</p> <pre><code>edges = [edge for edge in estados['EDGE'].unique() if edge != ''] for edge in edges: pass </cod...
<p>Filter the unwanted values out first, then call <code>unique()</code>. Note that <code>~</code> binds more tightly than <code>==</code>, so <code>~estados['EDGE'] == ''</code> negates the Series before comparing; write <code>~(estados['EDGE'] == '')</code> or simply use <code>!=</code>:</p> <pre><code>estados[estados['EDGE'] != '']['EDGE'].dropna().unique() </code></pre> <p><strong>OR</strong> Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.ne.html" rel="nofollow noreferrer"><code>.ne</code></a>:</p> <pre><code>estad...
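A sketch with an illustrative frame standing in for <code>estados</code>:

```python
import pandas as pd

estados = pd.DataFrame({"EDGE": ["a", "", "b", "a", None]})

# Drop empty strings and missing values before taking the unique set.
edges = estados.loc[estados["EDGE"].ne(""), "EDGE"].dropna().unique()
print(list(edges))  # ['a', 'b']
```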
python|pandas|filtering|unique
1
378,163
62,375,281
Input shape for CNN and LSTM
<p>I would like to train CNN + LSTM model to drive a car using CNN + LSTM using this code</p> <pre><code>import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Conv2D, MaxPooling2D, GlobalMaxPool2D, Flatten, Dropout, Dense, TimeDistributed, GRU, LSTM from tensorflow....
<p>I think you need to remove the <code>TimeDistributed()</code> wrapper around <code>vgg</code>, because what it does is add another dimension. So, you end up with 5 instead of 4, which is causing the error. Also remove the -1 in this line:</p> <pre><code>X = np.array([i[0] for i in train]).reshape(-1, 4, WIDTH, HEIG...
python|tensorflow|keras|conv-neural-network
0
378,164
62,327,956
How to fill missing values in a dataframe based on group value counts?
<p>I have a pandas DataFrame with 2 columns: Year(int) and Condition(string). In column Condition I have a nan value and I want to replace it based on information from groupby operation.</p> <pre><code>import pandas as pd import numpy as np year = [2015, 2016, 2017, 2016, 2016, 2017, 2015, 2016, 2015, 2015] cond = [...
<p>I did a little extra transformation to get <code>stat</code> as a dictionary mapping the year to its highest frequency name (credit to <a href="https://stackoverflow.com/a/29919489/13386979">this answer</a>):</p> <pre><code>In[0]: fill_dict = stat.unstack().idxmax(axis=1).to_dict() fill_dict Out[0]: {2015: 'good',...
python|pandas|dataframe|pandas-groupby|fillna
1
378,165
62,133,500
Python Panda Dataframe Error while fetching path from .config file
<p>Trying to create the data frame using (.config) file to fetch the file but getting error during creation of Dataframe from the below file </p> <p><strong>Actual file name:rgf_ltd_060520202</strong></p> <p>Sample Structure of my config fil(which is pipe seperated) :</p> <pre><code>...|/user/Doc/ABC/rgf_ltd_[0-9]*...
<p>The <code>pandas.read_csv</code> doesn't deal with regex patterns on file reading; you could use the Python <a href="https://docs.python.org/3/library/glob.html#glob.glob" rel="nofollow noreferrer">glob.glob</a> module to get a similar result with shell-style wildcards.</p> <blockquote> <p>Return a possibly-empty ...
python|regex|pandas
0
378,166
62,439,560
How to make a time index in dataframe pandas with 15 minutes spacing
<p>How to make a time index in dataframe pandas with 15 minutes spacing for 24 hours, without the date part (12\4\2020 00:15) and without doing it manually? For example, I only want 00:15 00:30 00:45.........23:45 00:00 as an index.</p>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.date_range.html" rel="nofollow noreferrer"><code>pd.date_range</code></a> to create dummy dates with your desired time frequency, then just extract them:</p> <pre><code>pd.Series(pd.date_range( '1/1/2020', '1/2/2020', freq='1...
python|pandas|dataframe|time
3
378,167
62,178,804
Pandas dropna messes up datetime index
<p>I am writing a function that processes a dataframe. Rows in this dataframe are indexed by a datetime index and there is a row per hour in the dataframe. Basically, after doing some processing, this is what I have:</p> <pre><code> inquinante temperatura precipitazioni ... umidita day_of_year...
<p>This is the automatic representation of a datetime index when all the index labels have midnight or time 00:00:00 as its time stamp.</p> <pre><code>df = pd.DataFrame({'value':np.arange(20)}, index=pd.date_range('2020-02-01', periods=20, freq='12H')) df </code></pre> <p>Output:</p> <pre><code> ...
python|pandas
2
378,168
62,275,841
How Can I Solve Error : Unsupported format, or corrupt file: Expected BOF record; found b'<table c'
<p>When I Run This Code Show This Error Unsupported format, or corrupt file: Expected BOF record; found b' <p>data : <a href="https://github.com/DevangBaroliya/DataSet/blob/master/DistrictWiseReport20200607.xlsx" rel="nofollow noreferrer">https://github.com/DevangBaroliya/DataSet/blob/master/DistrictWiseReport20200607...
<p>If you copy the data from your github, right-click cell A1 in Excel and paste special as Unicode Text and save it as an .xlsx file, you will be able to read it in. I'm not sure exactly what you are trying to do and what exactly is going wrong.</p> <p><a href="https://i.stack.imgur.com/i9Cu3.png" rel="nofollow noref...
python|pandas|data-science
0
378,169
62,396,711
tf.keras.losses.categorical_crossentropy returning wrong value
<p>I have</p> <p><code>y_true = 16</code></p> <p>and</p> <pre><code>y_pred = array([1.1868494e-08, 1.8747659e-09, 1.2777099e-11, 3.6140797e-08, 6.5852622e-11, 2.2888577e-10, 1.4515833e-09, 2.8392664e-09, 4.7054605e-10, 9.5605066e-11, 9.3647139e-13, 2.6149302e-10, 2.533...
<p>Found where the problem is</p> <p>I used <strong>softmax</strong> activation in my last layer </p> <p><code>output = Dense(NUM_CLASSES, activation='softmax')(x)</code></p> <p>But I used <code>from_logits=True</code> in <code>tf.keras.losses.categorical_crossentropy</code>, which resulted in <strong>softmax</stron...
python|tensorflow|machine-learning|keras|deep-learning
2
378,170
62,080,926
Removing outliers based on column variables or multi-index in a dataframe
<p>This is another IQR outlier question. I have a dataframe that looks something like this:</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame(np.random.randint(0,100,size=(100, 3)), columns=('red','yellow','green')) df.loc[0:49,'Season'] = 'Spring' df.loc[50:99,'Season'] = 'Fall' df.loc[0:24,'Tr...
<p>If need replace outliers by missing values use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.quantile....
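A small sketch of per-group IQR masking via <code>GroupBy.transform</code>, using made-up data (outliers become <code>NaN</code>):

```python
import pandas as pd

df = pd.DataFrame({"Season": ["Spring"] * 5 + ["Fall"] * 5,
                   "red": [1, 2, 3, 2, 100, 5, 6, 5, 6, -90]})

g = df.groupby("Season")["red"]
q1 = g.transform(lambda s: s.quantile(0.25))
q3 = g.transform(lambda s: s.quantile(0.75))
iqr = q3 - q1

# Keep values inside the 1.5*IQR fences of their own group; NaN otherwise.
inside = df["red"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
df["red_clean"] = df["red"].where(inside)
print(df)
```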
python-3.x|pandas|dataframe|multi-index|outliers
1
378,171
62,272,495
using RNN with CNN in Keras
<p>beginner question</p> <p>Using Keras I have a sequential CNN model that predicts an output of the size of [3*1] (regression) based on image(input).</p> <p>How to implement RNN in order to add the output of the model as a second input to the next step. (so that we have 2 inputs: the image and the output of the prev...
<p>The easiest method I found was to directly extend <code>Model</code>. The following code will work in TF 2.0, but may not work in older versions:</p> <pre><code>class RecurrentModel(Model): def __init__(self, num_timesteps, *args, **kwargs): self.num_timesteps = num_timesteps super().__init__(*ar...
tensorflow|keras|lstm|recurrent-neural-network|keras-layer
3
378,172
62,431,377
Adding Data to Pandas DataFrame
<p>I want to use machine learning techniques to categorise "images" of energy released in an electromagnetic calorimeter, using a keras CNN. In order to import the data I'm using a Pandas DataFrame, however the data isn't formatted in the necessary way.</p> <p>The calorimeter can be considered a 28x28 crystal square, ...
<p>Your <code>arr</code> shape is actually <code>(4,)</code> and what you want is an array of shape <code>(1, 4)</code>, if I haven't misunderstood. You could do <code>arr = np.array([[event, xi, yi, 0]])</code> to get the right shape.</p>
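For illustration (the variable names are placeholders):

```python
import numpy as np

event, xi, yi = 1, 5.0, 7.0

arr = np.array([event, xi, yi, 0])     # shape (4,)  -- one bracket pair
arr2 = np.array([[event, xi, yi, 0]])  # shape (1, 4) -- nested brackets
print(arr.shape, arr2.shape)
```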
python|pandas|numpy
1
378,173
62,301,254
Convert Column to Polygon in Python to perform Point in Polygon
<p>I have written Code to establish Point in Polygon in Python, the program uses a shapefile that I read in as the Polygons. I now have a dataframe I read in with a column containing the Polygon e.g <code>[[28.050815,-26.242253],[28.050085,-26.25938],[28.011934,-26.25888],[28.020216,-26.230127],[28.049828,-26.230704],[...
<p>taking your example above you could do the following:</p> <pre><code>list_coords = [[28.050815,-26.242253],[28.050085,-26.25938],[28.011934,-26.25888],[28.020216,-26.230127],[28.049828,-26.230704],[28.050815,-26.242253]] </code></pre> <pre><code>from shapely.geometry import Point, Polygon # Create a list of point...
python|geopandas|point-in-polygon
0
378,174
62,345,523
How to fill missing values with conditions?
<p>I have a pandas DataFrame like this:</p> <pre><code>year = [2015, 2016, 2009, 2000, 1998, 2017, 1980, 2016, 2015, 2015] mode = ["automatic", "automatic", "manual", "manual", np.nan,'automatic', np.nan, 'automatic', np.nan, np.nan] X = pd.DataFrame({'year': year, 'mode': mode}) print(X) year mode 0 2015...
<p>Similar approach to my answer on your <a href="https://stackoverflow.com/questions/62327956/how-to-fill-missing-values-in-a-dataframe-based-on-group-value-counts">other question</a>:</p> <pre><code>cond = X['year'] &lt; 2010 X['mode'] = X['mode'].fillna(cond.map({True:'manual', False: 'automatic'})) </code></pre>
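Runnable with the question's data for reference:

```python
import pandas as pd
import numpy as np

year = [2015, 2016, 2009, 2000, 1998, 2017, 1980, 2016, 2015, 2015]
mode = ["automatic", "automatic", "manual", "manual", np.nan, "automatic",
        np.nan, "automatic", np.nan, np.nan]
X = pd.DataFrame({"year": year, "mode": mode})

# Rows before 2010 get 'manual', the rest 'automatic' -- but only where NaN.
cond = X["year"] < 2010
X["mode"] = X["mode"].fillna(cond.map({True: "manual", False: "automatic"}))
print(X)
```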
python|pandas|dataframe|nan|missing-data
3
378,175
62,335,645
How to slice a panda data frame to get required results
<p>I'm working on this problem in Jupyter where I have to get the desired result. The initial DataFrame is:</p> <pre><code>print(df1) df2=pd.DataFrame({'custID':[1,2,3,4], 'cust_age':[20,35,50,85]},columns=['custID','cust_age']) print(df2) </code></pre> <p>I have managed to get my input and output to...
<p>You can get the desired output by</p> <pre><code>In[1]: df2[df2.cust_age.lt(50)].custID.values Out[2]:array([1, 2]) </code></pre> <p>or </p> <pre><code>In[1]: df2[df2.cust_age &lt; 50].custID.tolist() Out[2]:[1, 2] </code></pre> <p>depending on whether you want to get a numpy-array or a list.</p>
pandas|pandas-groupby
0
378,176
62,310,570
How to fix what dates your dataframe includes
<p>I have a dataframe whereby I'm trying to get data from today (-5) days until the end of next month.</p> <p>In the case of today this would be;</p> <pre><code>ix = pd.DatetimeIndex(start=datetime(2020, 6, 05), end=datetime(2020, 7, 31), freq='D') df.reindex(ix) </code></pre> <p>If I wanted to automate this is ther...
<p>Your code was close, I think the main issue is you are constructing a <code>DatetimeIndex</code> incorrectly: it doesn't take a <code>start</code> or <code>end</code> parameter (see <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DatetimeIndex.html" rel="nofollow noreferrer">docs</a>). Al...
python|pandas|datetime|indexing|reindex
1
378,177
62,138,342
How to generate url changing wrt date?
<p>I need to extract, unzip &amp; read data from this url (<a href="https://www1.ukp.com/content/historical/2020/MAY/cm29MAY2020bhav.csv.zip" rel="nofollow noreferrer">https://www1.ukp.com/content/historical/2020/MAY/cm29MAY2020bhav.csv.zip</a><br> ) on every working day. I manually edit the url everyday . Is there an...
<p>Use <code>date.strftime</code> to generate the URL.</p> <pre><code>&gt;&gt;&gt; from datetime import date &gt;&gt;&gt; date.today().strftime("https://www1.ukp.com/content/historical/%Y/%B/cm%d%B%Ybhav.csv.zip") 'https://www1.ukp.com/content/historical/2020/June/cm01June2020bhav.csv.zip' </code></pre> <p>If case se...
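Since the sample URL uses an upper-case abbreviated month (<code>MAY</code>), one option is to format the month separately; a sketch:

```python
from datetime import date

d = date(2020, 5, 29)  # stand-in for date.today()
mon = d.strftime("%b").upper()  # 'MAY'
url = (f"https://www1.ukp.com/content/historical/{d:%Y}/{mon}/"
       f"cm{d:%d}{mon}{d:%Y}bhav.csv.zip")
print(url)
```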
python|pandas
1
378,178
62,130,542
Converting 15 minutes interval usage data in pivot form to 60 minutes format
<p><a href="https://i.stack.imgur.com/mutL1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mutL1.png" alt="This is my 15 Minute interval file sample distributed over 24 hrs"></a></p> <p>I want to convert this to hourly format like:</p> <pre><code>df['1:00'] = df['0:15'] + df['0:30'] + df['0:45'] +...
<p>Use:</p> <pre><code>#convert non time values to MultiIndex and sorting times columns df1 = df.set_index(['Account','Date']).sort_index(axis=1) #create groups by previous values of ends with :00 g = df1.columns.str.endswith(':00')[::-1].cumsum()[::-1] #aggregate sum df2 = df1.groupby(g, axis=1).sum().add_suffix(':00...
python|pandas
1
378,179
62,278,894
Writing into Excel by using Pandas Dataframe
<p>I'm very new to programming and I'm just starting out to try. So sorry if my question comes off as too basic. I'm currently trying to extract columns of data from 1 excel sheet to be written into an output excel sheet. However, after I have extracted the column, I am unable to write it into my output sheet as I get ...
<p>Cast the column to a list:</p> <pre><code>col1 = list(sourcepath.iloc[1:345, 0]) </code></pre> <p>Then create a new dataframe from it and write that out:</p> <pre><code>df = pd.DataFrame(col1) df.to_excel(writer, 'Sheet1', index=True) </code></pre>
python|excel|pandas
0
378,180
62,177,415
How to take a substring from a column in excel using Python?
<p>I have an Excel file and I want to read a specific column in that Excel file, I do that with the following code:</p> <pre><code>import pandas as pd import xlrd file_location = input('Where is the file located? Please input the file path here. ') column = input('In what column is the code? ') code_array = pd.read_...
<p>If the digits will be 5 digits long each time, you could do a quick substring using a lambda.</p> <pre><code>code_array["number_column"] = code_array["YourColumnNameHere"].apply(lambda x: str(x)[:5]) </code></pre> <p>If it will not be the same length each time, but it will be in the same position, you can split it...
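A vectorised alternative worth knowing is the <code>.str</code> accessor; a sketch with made-up codes:

```python
import pandas as pd

code_array = pd.DataFrame({"code": ["12345-ABC", "67890-XYZ"]})

# First five characters of each value, without an explicit lambda.
code_array["number"] = code_array["code"].astype(str).str[:5]
print(code_array["number"].tolist())  # ['12345', '67890']
```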
python|excel|pandas|xlrd
1
378,181
62,333,471
In Pandas, how to return 2 data using resample('D').first()?
<h2> Questions </h2> <ul> <li>Q1. Why does the <code>float</code> data changes to <code>string</code> data when it gets put into a <code>pd.dataframe</code>? Is there any way to keep it float, rather than changing it to float afterwards with <code>.astype(float)</code>?</li> <li>Q2. How to get 2 data using <code>resam...
<p>Q1: this is because you defined the whole data as <strong>one</strong> array, which can have one type only for all data (string). Define it columnwise like so:</p> <pre><code>BTC_df = pd.DataFrame({'return': [1.05, 1.2, 0.9], 'coin': ['BTC', 'BTC', 'BTC']}, index = [datetime(2020,5,1,15), date...
python|pandas|dataframe
1
378,182
62,166,941
Create single pandas dataframe from a list of dataframes
<p>I have a list of about 25 dfs and all of the columns are the same. Although the row count is different, I am only interested in the first row of each df. How can I iterate through the list of dfs, copy the first row from each and concatenate them all into a single df?</p>
<p>Select first row by position with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>DataFrame.iloc</code></a> and <code>[[0]]</code> for one row DataFrames and join together by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/...
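A compact sketch of the whole loop with three dummy frames:

```python
import pandas as pd

# Stand-ins for the ~25 frames; all share the same columns.
dfs = [pd.DataFrame({"a": [i, i + 1], "b": [10 * i, 11 * i]}) for i in range(3)]

# iloc[[0]] keeps each first row as a one-row DataFrame, so concat works.
first_rows = pd.concat([d.iloc[[0]] for d in dfs], ignore_index=True)
print(first_rows["a"].tolist())  # [0, 1, 2]
```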
python|python-3.x|pandas
0
378,183
62,120,508
Python Tensorflow - Running model.fit multiple times without reinstantiating the model
<h2>Background</h2> <p>I am watching a <a href="https://youtu.be/tPYj3fFJGjk?t=12950" rel="nofollow noreferrer">popular YouTube crash course</a> on machine learning.</p> <p>At <a href="https://youtu.be/tPYj3fFJGjk?t=12950" rel="nofollow noreferrer">3:35:50</a>, he mentions that the model is likely overfit, so fits it...
<blockquote> <p>Since he didn't reinstantiate the model, isn't this equivalent to fitting the model with that same data, thereby continuing to overtrain it?</p> </blockquote> <p>You are correct! In order to check which number of epochs would do better in his example, he should have compiled the network again (th...
python|tensorflow|keras
4
378,184
62,470,453
Underfitting Problem in Binary Classification using Multi-Layer Perceptron
<p>I'm currently developing a supervised anomaly detection using Multi-Layer Perceptron (MLP), the goal is to classify between the benign and malicious traffics. I used the <a href="https://www.stratosphereips.org/datasets-ctu13" rel="nofollow noreferrer">CTU-13 dataset</a>, the sample of the dataset is as follows: <a ...
<p>First of all, I am pretty sure your model is actually overfitting, not underfitting. Plot only the training loss and you should see the loss fall close to 0. But as you can see in your plot the validation loss is still quite high compared to training loss. This happens because your model has way too many parameters ...
python|tensorflow|keras|neural-network|mlp
1
378,185
62,275,877
Array Slicing with step 2
<p>Have array like </p> <pre><code>arr = [1,2,3,4,5,6,7,8,9,10]. </code></pre> <p>How I can get array like this:</p> <pre><code>[1,2,5,6,9,10] </code></pre> <p><strong>take 2 elements with step 2(::2)</strong></p> <p>I try something like <code>arr[:2::2]</code>.it's not work</p>
<p><code>[:2::2]</code> is not valid Python syntax. A slice only takes 3 values - start, stop, step. You are trying to provide 4.</p> <p>Here's what you need to do:</p> <pre><code>In [233]: arr = np.arange(1,11) In [234]: arr ...
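One short way to get "take 2, skip 2" is to reshape into pairs and slice the pairs; a sketch:

```python
import numpy as np

arr = np.arange(1, 11)

# Group into pairs, keep every other pair, flatten back.
out = arr.reshape(-1, 2)[::2].ravel()
print(out.tolist())  # [1, 2, 5, 6, 9, 10]
```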
python|numpy|pytorch
2
378,186
62,341,052
TypeError: __call__() takes 2 positional arguments but 3 were given. To train Raccoon prediction model using FastRCNN through Transfer Learning
<pre><code> from torchvision.models.detection.faster_rcnn import FastRCNNPredictor from engine import train_one_epoch, evaluate import utils import torchvision.transforms as T num_epochs = 10 for epoch in range(num_epochs): train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10) lr_sc...
<p>You are using the wrong <code>Compose</code>; note that it says</p> <p><a href="https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#putting-everything-together" rel="noreferrer">https://pytorch.org/tutorials/intermediate/torchvision_tutorial....
image-processing|computer-vision|pytorch
5
378,187
62,237,696
I am getting Value Error when using if else to manipulate dataframe using pandas?
<p>This is the code I have written and df is dataframe. I am using python3 and I am new to pandas and I have tried bitwise operator as well as keywords and or</p> <pre><code>if((df['Day_Perc_Change']&gt;=-0.5) &amp; (df['Day_Perc_Change']&lt;=0.5)): df['Trend']="Slight or No Change" elif((df['Day_Perc_Change']&g...
<p>Use <code>np.where()</code>, which is vectorized and high-performing.</p> <pre><code>df['Trend'] = '' df['Trend'] = np.where((df['Day_Perc_Change']&gt;=-0.5) &amp; (df['Day_Perc_Change']&lt;=0.5), "Slight or No Change", df['Trend']) df['Trend'] = np.where((df['Day_Perc_Change']&gt;=0.5) &amp; (df['Day_Perc_Ch...
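When there are several exclusive ranges, <code>np.select</code> is a tidy alternative to chained <code>np.where</code> calls; a sketch (the extra labels here are illustrative, not from the question):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Day_Perc_Change": [0.2, 0.8, -3.0]})

# First matching condition wins; anything unmatched gets the default.
conditions = [
    df["Day_Perc_Change"].between(-0.5, 0.5),
    df["Day_Perc_Change"].between(0.5, 1.0),
]
choices = ["Slight or No Change", "Slight Positive"]
df["Trend"] = np.select(conditions, choices, default="Other")
print(df["Trend"].tolist())
```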
python|pandas
0
378,188
62,087,635
cannot select specific column after merging it with another data frame
<pre><code>unitown=pd.merge(Q1(),Q5(),how='inner',left_on=['State','RegionName'],right_index=True) </code></pre> <p>i created this new data frame called unitown after merging two data frames with index 'State' and 'RegionName'. Below is how unitown looks like: <a href="https://i.stack.imgur.com/2vIb8.png" rel="nofoll...
<p>This should resolve the issue:</p> <pre><code>df.columns = [str(col) for col in df.columns] </code></pre>
python|python-3.x|pandas|dataframe
1
378,189
62,245,518
Pandas merge creates duplicate entries
<p>I'm doing a merge of two dataframes, but when I do so, I have many duplicates entries. My code is a little bit long, so here are the example of two datasets :</p> <pre><code> df1 Season SeasonType ... PitchingWalksPerNineInnings_17 PitchingWeightedOnBasePercentage_17 GameID ... 47547 ...
<p>If you want to keep all columns, you can <code>join</code> them with a suffix, and keep only the columns which are in <code>columnsMerge</code>.</p> <pre><code>df_merged = df1.join(df2, how='right', rsuffix='_df2') columnsMerge = list(set(df1.columns).intersection(set(df2.columns))) cols = [c if any(c in s for s in columnsMerge...
python|pandas
0
378,190
62,198,953
Combine multiple Pandas series with identical column names, but different indices
<p>I have many pandas series structured more or less as follows. </p> <pre><code>s1 s2 s3 s4 Date val1 Date val1 Date val2 Date val2 Jan 10 Apr 25 Jan 14 Apr 11 Feb 11 May 18 ...
<p>One way to do is groupby <code>Date</code> and choose <code>first</code>:</p> <pre><code>(pd.concat( [s1,s2,s3,s4]) .groupby('Date', as_index=False, sort=False).first() ) </code></pre> <p>Output:</p> <pre><code> Date val1 val2 0 Jan 10 14 1 Feb 11 17 2 Mar 8 16 3 Apr 25 11 4 Ma...
python|pandas|dataframe
2
378,191
62,369,633
How to find the indices of the n largest numbers in an array that aren't 100 in Python
<p>I am trying to find the indices of an array of the n largest numbers from largest to smallest that aren't 100 in Python. I have found several different ways to find the top n maximum numbers from an array, and ways to exclude the values that are equal to 100, but not one that preserves the indices as well. This is w...
<p>First, sort the list but keep track of the original indices. In my solution below, I'm using tuples.</p> <p>Then, go backward over the sorted list, and if the value is not <strong>valueToIgnore</strong>, append the index to <em>res</em> until <em>res</em> has a length of <strong>n</strong>.</p> <pre><code>n = ...
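The steps above can be sketched as a small self-contained function (the name <code>top_n_indices</code> is illustrative, not from the original post):

```python
def top_n_indices(values, n, value_to_ignore=100):
    """Indices of the n largest entries, largest first, skipping value_to_ignore."""
    # Pair each value with its original index, then sort by value, descending.
    indexed = sorted(enumerate(values), key=lambda t: t[1], reverse=True)
    res = []
    for idx, val in indexed:
        if val != value_to_ignore:
            res.append(idx)
        if len(res) == n:
            break
    return res

print(top_n_indices([3, 100, 7, 1, 100, 9], 2))  # -> [5, 2]
```

The sort is stable, so ties keep their original relative order, and the loop stops as soon as <code>n</code> valid indices have been collected.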
python|arrays|pandas|numpy
3
378,192
62,168,928
Python How to loop sequence match Dataframes through specific columns and extra the rows
<p>I have been trying the last 2 weeks to solve this problem, and i am almost at the goal.</p> <p><strong>Case:</strong> <a href="https://i.stack.imgur.com/bPN5A.jpg" rel="nofollow noreferrer">Overall depiction of what i am trying</a></p> <ul> <li>I have 2 dataframes extracted from 2 different excel sheets for this ...
<p>This should work:</p> <pre><code>import pandas as pd import numpy as np from difflib import SequenceMatcher def similar(a, b): ratio = SequenceMatcher(None, a, b).ratio() return ratio data1 = {'Fruit': ['Apple', 'Pear', 'mango', 'Pinapple'], 'nr1': [22000, 25000, 27000, 35000], 'nr2': ...
python|excel|pandas|dataframe|sequencematcher
0
378,193
62,294,549
Convert data-00000-of-00001 file to Tensorflow Lite
<p>Is there any way to <code>convert data-00000-of-00001</code> to Tensorflow Lite model? The file structure is like this</p> <pre><code> |-semantic_model.data-00000-of-00001 |-semantic_model.index |-semantic_model.meta </code></pre>
<p><strong><em>Using TensorFlow Version: 1.15</em></strong></p> <p>The following 2 steps will convert it to a <code>.tflite</code> model.</p> <p><strong>1. Generate a TensorFlow Model for Inference (a frozen graph <code>.pb</code> file) using the <a href="https://stackoverflow.com/a/45868106/3903505">answer posted he...
python|tensorflow|tensorflow-lite
1
378,194
62,146,800
Numpy array_equal and float exact equality check
<p>I know that similar precision questions have been asked here however I am reading a code of a project that is doing an exact equality comparison among floats and is puzzling me. </p> <p>Assume that <code>x1</code> and <code>x2</code> are of type <code>numpy.ndarray</code> and of dtype <code>np.float32</code>. These...
<p>The arithmetic specified by IEEE-754 is deterministic given certain constraints discussed in its clause 11 (2008 version), including suitable rules for expression evaluation (such as unambiguous translation from expressions in a programming language to IEEE-754 operations, such as <code>a+b+c</code> must give <code>...
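A small illustration of the point, using plain Python floats (which are IEEE-754 binary64): repeating the same operations reproduces the same bits exactly, even though the values themselves are inexact.

```python
# The same sequence of IEEE-754 operations on the same inputs always
# produces the same bit pattern, so exact equality between the results
# of *identical* computations is reliable.
x1 = 0.1 * 3.0
x2 = 0.1 * 3.0
assert x1 == x2      # deterministic...
assert x1 != 0.3     # ...even though the result is not exactly 0.3

# Rearranged expressions are a different computation and need not agree:
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)  # not guaranteed True: float addition is not associative
```

This is why an exact-equality check like the one in the question can be legitimate: it tests whether two code paths performed bit-identical computations, not whether the values are "close".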
python|numpy|floating-point|precision|equality
1
378,195
62,056,668
Joining multiple dataframes with multiple common columns
<p>I have multiple dataframes like this-</p> <pre><code>df=pd.DataFrame({'a':[1,2,3],'b':[3,4,5],'c':[4,6,7]}) df2=pd.DataFrame({'a':[1,2,3],'d':[66,24,55],'c':[4,6,7]}) df3=pd.DataFrame({'a':[1,2,3],'f':[31,74,95],'c':[4,6,7]}) </code></pre> <p>I want this output-</p> <pre><code> a c 0 1 4 1 2 6 2 3 ...
<p>A combination of <a href="https://docs.python.org/3/library/functools.html#functools.reduce" rel="nofollow noreferrer">reduce</a>, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.intersection.html" rel="nofollow noreferrer">intersection</a>, <a href="https://pandas.pydata.org/pandas-...
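One way those pieces can fit together, shown as a runnable sketch on the frames from the question (the intermediate names are illustrative):

```python
from functools import reduce
import pandas as pd

df  = pd.DataFrame({'a': [1, 2, 3], 'b': [3, 4, 5],    'c': [4, 6, 7]})
df2 = pd.DataFrame({'a': [1, 2, 3], 'd': [66, 24, 55], 'c': [4, 6, 7]})
df3 = pd.DataFrame({'a': [1, 2, 3], 'f': [31, 74, 95], 'c': [4, 6, 7]})
frames = [df, df2, df3]

# Columns shared by every frame ('a' and 'c' here).
common = list(reduce(lambda i, j: i.intersection(j),
                     (f.columns for f in frames)))

# Successively inner-merge the frames restricted to the common columns.
out = reduce(lambda left, right: left.merge(right, on=common),
             (f[common] for f in frames))
print(out)
```

Because each frame is cut down to the common columns before merging, the result contains only the columns present in all inputs.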
python|python-3.x|pandas
0
378,196
62,210,075
Pandas evaluate a string ratio into a float
<p>I have the following dataframe:</p> <pre><code>Date Ratios 2009-08-23 2:1 2018-08-22 2:1 2019-10-24 2:1 2020-10-28 3:2 </code></pre> <p>I want to convert the ratios into floats, so 2:1 becomes 2/1 becomes 0.5, 3:2 becomes 0.66667.</p> <p>I used the following formula </p> <pre><code>df['Ratios'] = 1/pd...
<p>Don't use <code>pd.eval</code> on a whole <code>Series</code>, because with more than about 100 rows it raises an error, so convert each value separately:</p> <pre><code>df['Ratios'] = 1/df['Ratios'].str.replace(':','/').apply(pd.eval) </code></pre> <p>But your error also suggests there are some non-numeric values together with <code>:...
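An eval-free alternative is to split on <code>:</code> and divide numerically. This sketch assumes every entry has exactly two numeric parts and reproduces the asker's <code>1/(a/b)</code> convention:

```python
import pandas as pd

df = pd.DataFrame({'Ratios': ['2:1', '2:1', '2:1', '3:2']})

# Split "a:b" into two numeric columns, then compute b/a (== 1/(a/b)).
parts = df['Ratios'].str.split(':', expand=True).astype(float)
df['Ratios'] = parts[1] / parts[0]
print(df['Ratios'].tolist())  # 2:1 -> 0.5, 3:2 -> 0.666...
```

Avoiding <code>eval</code> entirely also sidesteps the risk of executing arbitrary expressions that happen to be in the column.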
python|pandas|eval
3
378,197
62,073,545
How do I make it so that my program reads multiple txt files and creates it into a dataframe for python?
<p>Currently I am making a program to cycle through multiple txt files and turn them into dataframes so that the data can be analysed. I have used the glob function to return a list of txt files. After that, I have created a for loop which cycles through every item in the list. Then I use the read_csv function to read ...
<p>Looks like one of your CSVs has an incorrect number of columns. It's on line 110853. You could add some test code to help troubleshoot it, like this:</p> <pre><code>import glob import pandas as pd path = '/content/gdrive/My Drive/Datapoints/*.txt' dataframes = [] for filename in glob.iglob(path): try: data =...
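A self-contained version of that try/except pattern, using in-memory buffers in place of real files so it runs anywhere (the file names are made up):

```python
import io
import pandas as pd

# Stand-ins for the real tab-separated files on disk.
files = {
    'good.txt': io.StringIO('a\tb\n1\t2\n3\t4\n'),
    'bad.txt':  io.StringIO('a\tb\n1\t2\n3\t4\t5\n'),  # extra field on last row
}

dataframes = []
for name, buf in files.items():
    try:
        dataframes.append(pd.read_csv(buf, sep='\t'))
    except pd.errors.ParserError as exc:
        # Report the offending file and keep going with the rest.
        print(f'Skipping {name}: {exc}')

print(len(dataframes))  # only the well-formed file is kept
```

The <code>ParserError</code> message includes the offending line number, which is how you pinpoint rows like line 110853 in the original traceback.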
python|pandas|file|glob
0
378,198
62,146,601
Storing multiple ndarrays to a list
<p>During each iteration of for loop some results get stored in a ndarray which look like this,</p> <pre><code>testpredict=[[1.1], [2.344], [3.00]] </code></pre> <p>I want to store the above results in a list variable during each iteration. Something like...</p> <pre><code>list[i]= testpred...
<pre><code># declared outside the iteration loop new_list = [] # inside the loop new_list.append(testpredict.tolist()) </code></pre> <p>Note that <code>list.append</code> mutates the list in place and returns <code>None</code>, so don't reassign its result. Also, 'list' is a built-in type; avoid using it as a variable name, as a best practice.</p>
python|python-3.x|pandas|numpy
0
378,199
62,278,366
How to make a stacked bar chart in python
<p>Hi guys I'm having a trouble on making a stacked bar chart here is my df</p> <pre><code>In[]top_10_medals_breakdown = pd.DataFrame() top_10_medals_breakdown = top_10_medals_breakdown.append(d) top_10_medals_breakdown Out[] Noc Medal Count 342 USA Bronze 1358 343 USA Gold 2638 344 USA Silver 1641 ...
<p>Use <code>pivot</code> to reshape the data, then stack in a fixed order with countries sorted by descending gold-medal count. Once the data is organized, it can be graphed.</p> <pre><code>import pandas as pd import numpy as np import io data = ''' Noc Medal Count 342 USA Bronze 1358 343 USA Gold 2638 344 USA Silver 1641 336 URS Bronze 689...
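The reshaping step at the heart of this can be sketched on its own; the plotting line is commented out so the snippet runs headless (the sample rows are a subset of the question's data):

```python
import io
import pandas as pd

data = '''Noc Medal Count
USA Bronze 1358
USA Gold 2638
USA Silver 1641
URS Gold 395
'''
df = pd.read_csv(io.StringIO(data), sep=r'\s+')

# One row per country, one column per medal type -- the shape a
# stacked bar chart needs (missing combinations become NaN).
pivoted = df.pivot(index='Noc', columns='Medal', values='Count')
# pivoted.plot.bar(stacked=True)  # then draw the stacked bars
print(pivoted)
```

From here, sorting the pivoted frame by the Gold column descending before plotting gives the ordering described above.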
python|pandas|matplotlib|seaborn
0