column      dtype    min    max
Unnamed: 0  int64    0      378k
id          int64    49.9k  73.8M
title       string   len 15     len 150
question    string   len 37     len 64.2k
answer      string   len 37     len 44.1k
tags        string   len 5      len 106
score       int64    -10    5.87k
378,000
70,493,098
pandas-how can I replace rows in a dataframe
<p>I am new to Python and am trying to reorder rows.</p> <p>I have a dataframe such as:</p> <pre><code>X  Y
1  a
2  d
3  c
4  a
5  b
6  e
7  a
8  b
</code></pre> <p>I have two questions:</p> <p>1- How can I swap the 2nd row with the 5th, such as:</p> <pre><code>X  Y
1  a
5  b
3  c
4  a
2  d
6  e
7  a
8  b
</code></pre> <p>2- How can I put the 6th row above the 3rd row, such as:</p> <pre><code>X  Y
1  a
2  d
6  e
3  c
4  a
5  b
7  a
8  b
</code></pre>
<p>First use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>DataFrame.iloc</code></a>. Python counts from <code>0</code>, so to select the second row use <code>1</code> and for the fifth use <code>4</code>. Converting the right-hand side with <code>to_numpy</code> strips its index, so the assignment cannot align the rows back to their original positions:</p> <pre><code>df.iloc[[1, 4]] = df.iloc[[4, 1]].to_numpy()
print (df)
   X  Y
0  1  a
1  5  b
2  3  c
3  4  a
4  2  d
5  6  e
6  7  a
7  8  b
</code></pre> <p>For the second question (starting again from the original dataframe), <code>rename</code> the index of the row to move so it carries the value of the row it should follow (here rename <code>5</code> to <code>1</code>), then sort with the stable <code>mergesort</code>, which keeps the existing row first and puts the renamed row right after it:</p> <pre><code>df = df.rename({5:1}).sort_index(kind='mergesort', ignore_index=True)
print (df)
   X  Y
0  1  a
1  2  d
2  6  e
3  3  c
4  4  a
5  5  b
6  7  a
7  8  b
</code></pre>
python|pandas|replace|rows
1
378,001
70,628,013
What do the coordinates of verts from Marching Cubes mean?
<p>I have a 3D generated voxel model of a vehicle and the coordinates of the voxels are in the vehicle reference frame. The origin is at the center of the floor. It looks like this:</p> <blockquote> <p>array([[-2.88783681, -0.79596956, 0.],<br> [-2.8752784 , -0.79596956, 0.],<br> [-2.86271998, -0.79596956, 0.],<br> ...,<br> [ 2.83880176, 0.89941685, 1.98423003],<br> [ 2.85136017, 0.89941685, 1.98423003],<br> [ 2.86391859, 0.89941685, 1.98423003]])</p> </blockquote> <p>Then I create a meshgrid of 0s and 1s:</p> <pre><code>ux = np.unique(voxels[:, 0])
uy = np.unique(voxels[:, 1])
uz = np.unique(voxels[:, 2])

X, Y, Z = np.meshgrid(ux, uy, uz)
V = np.zeros(X.shape)
N = voxels.shape[0]

for ii in range(N):
    ix = ux == voxels[ii, 0]
    iy = uy == voxels[ii, 1]
    iz = uz == voxels[ii, 2]
    V[iy, ix, iz] = 1
</code></pre> <p>Then I call the marching cubes algorithm to generate a mesh of the voxel model.</p> <pre><code>marching_cubes = measure.marching_cubes_lewiner(V, 0, spacing=(voxel_size, voxel_size, voxel_size))
verts = marching_cubes[0]
faces = marching_cubes[1]
normals = marching_cubes[2]
</code></pre> <p>When I print out the vertices, the coordinates are like this:</p> <blockquote> <p>array([[2.78852894e-18, 4.39544627e-01, 3.39077284e-01],<br> [1.25584179e-02, 4.39544627e-01, 3.26518866e-01],<br> [1.25584179e-02, 4.26986209e-01, 3.39077284e-01],<br> [1.72050325e+00, 1.26840021e+00, 2.76285194e-01],<br> [1.72050325e+00, 1.26840021e+00, 2.88843612e-01],<br> [1.72050325e+00, 1.26840021e+00, 3.01402030e-01]])</p> </blockquote> <p>In the <a href="https://scikit-image.org/docs/dev/api/skimage.measure.html?highlight=marching_cubes#skimage.measure.marching_cubes" rel="nofollow noreferrer">documentation</a> it says that verts is nothing but &quot;Spatial coordinates for V unique mesh vertices&quot;. But what do the coordinates mean? In what coordinate system are they?<br /> I plan on projecting the mesh onto the image of the vehicle I generated the voxel model from. How do I do the coordinate transformation in that case? (I've already successfully projected the voxels onto the image)</p>
<p>verts are just points in space. Essentially each vert is a corner of some triangle (usually of more than one).</p> <p>To know what the actual triangles are, look at <code>faces</code>, which will be something like:</p> <pre><code>[(v1, v2, v3), (v1, v4, v5), ...]
</code></pre> <p>Each tuple in the list holds 3 indices into verts. For example:</p> <pre><code>verts[v1], verts[v2], verts[v3]
</code></pre> <p>is a triangle in space.</p>
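<p>As for the coordinate system: with <code>spacing=(voxel_size, voxel_size, voxel_size)</code>, the vertex coordinates are measured from the first grid cell of <code>V</code>, not from the vehicle origin. A minimal sketch of mapping them back to the vehicle frame, assuming the grid axes are uniformly spaced by <code>voxel_size</code>, and remembering that <code>V</code> was filled as <code>V[iy, ix, iz]</code> (so axis 0 of the volume is y and axis 1 is x):</p> <pre><code>import numpy as np

# swap the first two vert columns back to (x, y, z) order,
# then shift by the grid origin in the vehicle frame
origin = np.array([ux.min(), uy.min(), uz.min()])
verts_vehicle = verts[:, [1, 0, 2]] + origin
</code></pre>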
python|numpy|scikit-learn|marching-cubes
0
378,002
70,396,252
Python Pandas- How do I subset time intervals into smaller ones?
<p>Let's imagine I have a timeseries dataframe of temperature sensor data that goes by 30 min intervals. How do I basically subset each 30 min interval into smaller 5 min intervals while accounting for the temperature drop between each interval?</p> <p>I imagine that doing something like this could work:</p> <ul> <li><p>30 min intervals:</p> <pre><code>interval 1: temp = 30
interval 2: temp = 25
</code></pre> </li> <li><p>5 min intervals:</p> <pre><code>interval 1: temp = 30
interval 2: temp = 29
interval 3: temp = 28
interval 4: temp = 27
interval 5: temp = 26
interval 6: temp = 25
</code></pre> </li> </ul>
<p>I would do it with a resample of the data frame to a finer time resolution (&quot;6T&quot; in this case, with T meaning minutes). This creates new rows, full of nan values, for the missing time steps; you can then fill those nans somehow, and for what you describe I think a linear interpolation is enough.</p> <p>Here is a simple example that I think matches the data you describe.</p> <pre><code>import pandas as pd

df = pd.DataFrame({&quot;temp&quot;:[30, 25, 20, 18]},
                  index = pd.date_range(&quot;2021-12-01 12:00:00&quot;, &quot;2021-12-01 13:59:00&quot;, freq = &quot;30T&quot;))

# This resample preserves your values at their original time indexes and
# creates new rows, full of nans, for the intermediate datetimes.
# The .last() just selects the value for each time step; you could also use
# max, min, or mean, as there is only one value per step so they all return the same.
df = df.resample(&quot;6T&quot;).last()

# It depends on how you want to model the change over time, but as you
# described a linear variation, a simple linear interpolation between
# values with the method interpolate will do.
df = df.interpolate()
</code></pre>
python|pandas|datetime
1
378,003
70,655,084
Convert http text response to pandas dataframe
<p>I want to convert the below text into a pandas dataframe. Is there a way I can use a pre-built or in-built Python/Pandas parser to convert it? I can write a custom parsing function, but I want to know if there is a pre-built and/or fast solution.</p> <p>In this example, the dataframe should result in two rows, one each for ABC &amp; PQR.</p> <pre><code>{
    &quot;data&quot;: [
        {
            &quot;ID&quot;: &quot;ABC&quot;,
            &quot;Col1&quot;: &quot;ABC_C1&quot;,
            &quot;Col2&quot;: &quot;ABC_C2&quot;
        },
        {
            &quot;ID&quot;: &quot;PQR&quot;,
            &quot;Col1&quot;: &quot;PQR_C1&quot;,
            &quot;Col2&quot;: &quot;PQR_C2&quot;
        }
    ]
}
</code></pre>
<p>You've listed everything you need as tags. Use <code>json.loads</code> to get a dict from the string:</p> <pre><code>import json
import pandas as pd

d = json.loads('''{
    &quot;data&quot;: [
        {
            &quot;ID&quot;: &quot;ABC&quot;,
            &quot;Col1&quot;: &quot;ABC_C1&quot;,
            &quot;Col2&quot;: &quot;ABC_C2&quot;
        },
        {
            &quot;ID&quot;: &quot;PQR&quot;,
            &quot;Col1&quot;: &quot;PQR_C1&quot;,
            &quot;Col2&quot;: &quot;PQR_C2&quot;
        }
    ]
}''')

df = pd.DataFrame(d['data'])
print(df)
</code></pre> <p>Output:</p> <pre><code>    ID    Col1    Col2
0  ABC  ABC_C1  ABC_C2
1  PQR  PQR_C1  PQR_C2
</code></pre>
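<p>Since the title mentions an HTTP text response: if the text comes straight from an HTTP call with <code>requests</code>, you can skip the manual parsing step entirely. A sketch with a hypothetical URL:</p> <pre><code>import pandas as pd
import requests

resp = requests.get('https://example.com/api')  # hypothetical endpoint
df = pd.DataFrame(resp.json()['data'])
</code></pre>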
python|json|pandas|parsing
0
378,004
70,425,398
numpy to spark error: TypeError: Can not infer schema for type: <class 'numpy.float64'>
<p>While trying to convert a numpy array into a Spark DataFrame, I receive <code>Can not infer schema for type: &lt;class 'numpy.float64'&gt;</code> error. The same thing happens with <code>numpy.int64</code> arrays.</p> <p>Example:</p> <pre><code>df = spark.createDataFrame(numpy.arange(10.)) </code></pre> <blockquote> <p>TypeError: Can not infer schema for type: &lt;class 'numpy.float64'&gt;</p> </blockquote>
<p>Alternatively, without using pandas:</p> <pre><code>df = spark.createDataFrame([(float(i),) for i in numpy.arange(10.)])
</code></pre>
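<p>For reference, the pandas route this is an alternative to could look like the sketch below (it assumes, as in the question, that <code>spark</code> is an active SparkSession; Spark can infer the schema from a pandas DataFrame with plain float64 columns):</p> <pre><code>import numpy as np
import pandas as pd

df = spark.createDataFrame(pd.DataFrame(np.arange(10.), columns=['value']))
</code></pre>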
pandas|numpy|apache-spark|pyspark
1
378,005
70,525,249
Compare file name in a dataframe to file present in a directory and then fetch row value of a different column
<p>I have a pandas dataframe which captures 2 columns - id and corresponding filename.</p> <p>I want to run a loop and compare if the filename in this dataframe is present in a specific directory. If it is present then I want to fetch the id and filename.</p> <p>I am trying the following code -</p> <pre><code>x = df[[&quot;id&quot;, &quot;filename_dir&quot;]]
directory = 'C:/users/'

for filename in os.listdir(directory):
    if filename in x[&quot;filename_dir&quot;]:
        id_1 = x[&quot;id&quot;]
        print(id_1)
</code></pre> <p>However, this is giving me the complete list of ids and not the one corresponding to the filename present in the directory. I am new to python so apologies for this basic query.</p>
<pre><code>id_1 = x[&quot;id&quot;]
</code></pre> <p>instantiates <code>id_1</code> with the whole <code>id</code> column of <code>x</code>, so the print statement prints the whole column every time it finds a matching file.</p> <p>Try</p> <pre><code>id_1 = x.id[x.filename_dir == filename]
</code></pre> <p>(note the question's column is named <code>filename_dir</code>, not <code>filename</code>).</p>
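<p>One more pitfall in the question's loop, beyond the original answer: <code>filename in x[&quot;filename_dir&quot;]</code> tests membership against the Series <em>index</em>, not its values. Putting both fixes together, a sketch:</p> <pre><code>import os

directory = 'C:/users/'
x = df[['id', 'filename_dir']]

for filename in os.listdir(directory):
    # .values makes `in` check the column's values instead of its index
    if filename in x['filename_dir'].values:
        id_1 = x.id[x.filename_dir == filename]
        print(id_1)
</code></pre>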
python|pandas
1
378,006
70,607,355
Sparse Categorical CrossEntropy causing NAN loss
<p>So, I've been trying to implement a few custom losses, and so thought I'd start off with implementing SCE loss, without using the built-in TF object. Here's the function I wrote for it.</p> <pre><code>def custom_loss(y_true, y_pred):
    print(y_true, y_pred)
    return tf.cast(tf.math.multiply(tf.experimental.numpy.log2(y_pred[y_true[0]]), -1), dtype=tf.float32)
</code></pre> <p>y_pred is the set of probabilities, and y_true is the index of the correct one. This setup should work according to all that I've read, but it returns NAN loss.</p> <p>I checked if there's a problem with the training loop, but it works perfectly with the built-in losses.</p> <p>Could someone tell me what the problem is with this code?</p>
<p>You can replicate the <code>SparseCategoricalCrossentropy()</code> loss function as follows</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf

def sparse_categorical_crossentropy(y_true, y_pred, clip=True):
    y_true = tf.convert_to_tensor(y_true, dtype=tf.int32)
    y_pred = tf.convert_to_tensor(y_pred, dtype=tf.float32)
    y_true = tf.one_hot(y_true, depth=y_pred.shape[1])
    if clip == True:
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1 - 1e-7)
    return - tf.reduce_mean(tf.math.log(y_pred[y_true == 1]))
</code></pre> <p>Note that the <code>SparseCategoricalCrossentropy()</code> loss function applies a small offset (<code>1e-7</code>) to the predicted probabilities in order to make sure that the loss values are always finite, see also <a href="https://stackoverflow.com/q/70426044/11989081">this question</a>.</p> <pre class="lang-py prettyprint-override"><code>y_true = [1, 2]
y_pred = [[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]]

print(tf.keras.losses.SparseCategoricalCrossentropy()(y_true, y_pred).numpy())
print(sparse_categorical_crossentropy(y_true, y_pred, clip=True).numpy())
print(sparse_categorical_crossentropy(y_true, y_pred, clip=False).numpy())
# 1.1769392
# 1.1769392
# 1.1769392

y_true = [1, 2]
y_pred = [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]

print(tf.keras.losses.SparseCategoricalCrossentropy()(y_true, y_pred).numpy())
print(sparse_categorical_crossentropy(y_true, y_pred, clip=True).numpy())
print(sparse_categorical_crossentropy(y_true, y_pred, clip=False).numpy())
# 8.059048
# 8.059048
# inf
</code></pre>
python|tensorflow|keras|loss-function
2
378,007
70,708,165
Apply condition on a column after groupby in pandas and then aggregate to get 2 max value
<pre><code>   data field  bcorr
0     A   cs1    0.8
1     A   cs2    0.9
2     A   cs3    0.7
3     A   pq1    0.4
4     A   pq2    0.6
5     A   pq3    0.5
6     B   cs1    0.8
7     B   cs2    0.9
8     B   cs3    0.7
9     B   pq1    0.4
10    B   pq2    0.6
11    B   pq3    0.5
</code></pre> <p>For every data <code>A</code> and <code>B</code> in the <code>data</code> column, segregate the <code>cs</code> &amp; <code>pq</code> fields from the <code>field</code> column, and then aggregate to get the 2 max values of <code>bcorr</code>.</p> <p>A sample result would look like:</p> <pre><code>   data field  bcorr
0     A   cs1    0.8
1     A   cs2    0.9
4     A   pq2    0.6
5     A   pq3    0.5
6     B   cs1    0.8
7     B   cs2    0.9
10    B   pq2    0.6
11    B   pq3    0.5
</code></pre> <p>For this, one option is to do it while creating the list of records, which obviously has high complexity.</p> <p>Second, I want to do this with a pandas dataframe, where I used <code>groupby</code> on the <code>data</code> column, then applied <code>startswith</code> to get the source <code>field</code>, and then applied <code>max</code>.</p>
<p>First, extract the common part of each field (the leading letters), then sort values (highest values go to the bottom). Finally group by the <code>data</code> column and the extracted <code>field</code> series, and keep the last two values of each group (the highest):</p> <pre><code>field = df['field'].str.extract('([^\d]+)', expand=False)
out = df.sort_values('bcorr').groupby(['data', field]).tail(2).sort_index()
print(out)

# Output
   data field  bcorr
0     A   cs1    0.8
1     A   cs2    0.9
4     A   pq2    0.6
5     A   pq3    0.5
6     B   cs1    0.8
7     B   cs2    0.9
10    B   pq2    0.6
11    B   pq3    0.5
</code></pre> <p>If the field prefix is always exactly two letters, you can use <code>df['field'].str[:2]</code> instead of <code>df['field'].str.extract(...)</code>.</p>
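<p>An equivalent spelling with <code>nlargest</code>, in case you prefer to keep the aggregation explicit (note it returns a MultiIndexed Series of <code>bcorr</code> values rather than the original rows):</p> <pre><code>df.groupby(['data', field])['bcorr'].nlargest(2)
</code></pre>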
python|pandas
0
378,008
70,514,667
Best way to remove specific words from column in pandas dataframe?
<p>I'm working with a huge set of data that I can't work with in excel so I'm using Pandas/Python, but I'm relatively new to it. I have this column of book titles that also include genres, both before and after the title. I only want the column to contain book titles, so what would be the easiest way to remove the genres?</p> <p>Here is an example of what the column contains:</p> <pre><code>Book Labels
Science Fiction | Drama | Dune
Thriller | Mystery | The Day I Died
Thriller | Razorblade Tears | Family | Drama
Comedy | How To Marry Keanu Reeves In 90 Days | Drama
...
</code></pre> <p>So above, the book titles would be Dune, The Day I Died, Razorblade Tears, and How To Marry Keanu Reeves In 90 Days, but as you can see the genres precede as well as succeed the titles.</p> <p>I was thinking I could create a list of all the genres (as there are only so many) and remove those from the column along with the &quot;|&quot; characters, but if anyone has suggestions on a simpler way to remove the genres and &quot;|&quot; key, please help me out.</p>
<p>It is an enhancement to @tdy's regex solution. The original regex <code>Family|Drama</code> matches the words &quot;Family&quot; and &quot;Drama&quot; anywhere in the string, so if a book title contains one of the words in <code>genres</code>, that word would be removed as well.</p> <p>Supposing the labels are separated by &quot; | &quot;, there are three match conditions we want to remove.</p> <ol> <li>Genre at the start of the string, e.g. <code>Drama | ...</code></li> <li>Genre in the middle, e.g. <code>... | Drama | ...</code></li> <li>Genre at the end of the string, e.g. <code>... | Drama</code></li> </ol> <p>Use the regex <code>(^|\| )(?:Family|Drama)(?=( \||$))</code> to match any of the three conditions. Note that <code>| Drama | Family</code> contains 2 overlapping matches; here I use the lookahead <code>?=( \||$)</code> so both are matched instead of only the first. See <a href="https://stackoverflow.com/questions/15301832/use-regular-expressions-to-replace-overlapping-subpatterns">[Use regular expressions to replace overlapping subpatterns]</a> for more details.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; genres = [&quot;Family&quot;, &quot;Drama&quot;]
&gt;&gt;&gt; df
#                       Book Labels
# 0      Drama | Drama 123 | Family
# 1      Drama 123 | Drama | Family
# 2      Drama | Family | Drama 123
# 3  123 Drama 123 | Family | Drama
# 4      Drama | Family | 123 Drama

&gt;&gt;&gt; re_str = &quot;(^|\| )(?:{})(?=( \||$))&quot;.format(&quot;|&quot;.join(genres))
&gt;&gt;&gt; df['Book Labels'] = df['Book Labels'].str.replace(re_str, &quot;&quot;, regex=True)
# 0      | Drama 123
# 1        Drama 123
# 2      | Drama 123
# 3    123 Drama 123
# 4      | 123 Drama

&gt;&gt;&gt; df[&quot;Book Labels&quot;] = df[&quot;Book Labels&quot;].str.strip(&quot;| &quot;)
# 0        Drama 123
# 1        Drama 123
# 2        Drama 123
# 3    123 Drama 123
# 4        123 Drama
</code></pre>
python|pandas|string|dataframe
0
378,009
70,399,224
Changing column label Python plotly?
<p>How to change column titles? The first column title should say &quot;4-Year&quot; and the 2nd column title &quot;2-Year&quot;. I tried using label={} but kept getting an error.</p> <pre><code>df = pd.read_csv('college_data.csv')

df1 = df[df.years &gt; 2]
df2 = df[df.years &lt; 3]

#CUNY College Table
fig = go.Figure(data=[go.Table(
    header=dict(values=list(df1[['college_name', 'college_name']]),
                fill_color='paleturquoise',
                font_color='gray',
                align='left',
                height = 50,
                font=dict(size=26),
                ),
    cells=dict(values=[df1.college_name, df2.college_name],
               height = 50,
               fill_color='lavender',
               align='left',
               font=dict(size=20),
               ))
])

fig.update_layout(title = &quot;CUNY Colleges&quot;,
                  width = 900, height = 1320,
                  font_family='Palanquin',
                  font=dict(size=30),
                  showlegend = False)

st.plotly_chart(fig)
</code></pre> <p><a href="https://i.stack.imgur.com/PSjxb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PSjxb.png" alt="Table" /></a></p>
<p>Change</p> <pre><code>values=list(df1[['college_name', 'college_name']]),
</code></pre> <p>to</p> <pre><code>values=[&quot;4-year&quot;, &quot;2-year&quot;],
</code></pre> <p>e.g.</p> <pre><code>fig = go.Figure(data=[go.Table(
    header=dict(values=[&quot;4-year&quot;, &quot;2-year&quot;],
    ...
</code></pre> <p>Calling <code>list</code> on a pandas <code>DataFrame</code> returns a list of the <em>column names</em> of that dataframe, so <code>list(df1[['college_name', 'college_name']])</code> is essentially identical to <code>['college_name', 'college_name']</code>.</p>
python|pandas|plotly-python|streamlit
1
378,010
70,653,419
Rename files in a folder using python
<p>I have various doc and pdf files in my folder (almost 1000 of them). I want to rename all the files. My folder structure looks like -</p> <pre><code>nikita
----------abc.doc
----------des.doc
----------jj1.pdf
</code></pre> <p>I want each name to start with NC_. For example</p> <pre><code>nikita
----------NC1_abc.doc
----------NC2_des.doc
----------NC3_jj1.pdf
</code></pre> <p>I have done the following code -</p> <pre><code>import os
import glob
import pandas as pd

os.chdir('C:\\Users\\EVM\\Nikita\\')
print(os.getcwd())

for count, f in enumerate(os.listdir()):
    f_name, f_ext = os.path.splitext(f)
    f_name = &quot;NC&quot; + str(count) + '_' + f_name
    new_name = f'{f_name}{f_ext}'
    os.rename(f, new_name)
</code></pre> <p>But my output starts with NC0, not NC1.</p> <pre><code>nikita
----------NC0_abc.doc
----------NC1_des.doc
----------NC2_jj1.pdf
</code></pre>
<p>The <code>enumerate</code> function accepts an optional second argument to declare the index you want to start with. Click <a href="https://docs.python.org/3/library/functions.html#enumerate" rel="nofollow noreferrer">here</a> for further information.</p> <p>So if you add the argument like this:</p> <pre><code>enumerate(os.listdir(), 1)
</code></pre> <p>the output will start with NC1.</p>
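<p>Applied to the loop from the question:</p> <pre><code>for count, f in enumerate(os.listdir(), 1):
    f_name, f_ext = os.path.splitext(f)
    new_name = f'NC{count}_{f_name}{f_ext}'
    os.rename(f, new_name)
</code></pre>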
python-3.x|pandas|dataframe
0
378,011
70,675,464
Creating categorical column based on multiple column values in groupby
<pre><code>print(df.groupby(['Step1', 'Step2', 'Step3']).size().reset_index(name='Freq'))

    Step1  Step2   Step3  Freq
0     6.0   17.6   28.60   135
1     7.5   22.0   35.75   255
2    10.5   30.8   50.05   129
3    12.0   35.2   57.20   369
4    13.5   39.6   64.35   249
5    15.0   44.0   71.50   246
6    16.5   48.4   78.65   246
7    18.0   52.8   85.80   369
8    21.0   61.6  100.10   375
9    22.5   66.0  107.25   249
10   25.5   74.8  121.55   123
</code></pre> <p>The 'Step1', 'Step2', 'Step3' columns are constant input values. There are 10 unique combinations of input values from these columns (shown in the groupby). I am looking to delete the individual 'Step1', 'Step2', 'Step3' columns and create a single column &quot;Step Type&quot; that has a letter that represents the unique combinations of input values from these columns.</p> <p>Desired output:</p> <pre><code>   Step Type  Freq
0          A   135
1          B   255
2          C   129
3          D   369
4          E   249
5          F   246
6          G   246
7          H   369
8          J   375
9          L   249
10         M   123
</code></pre> <p>Step Type A: Step1=6.0, Step2=17.6, Step3=28.60</p> <p>How do I do this?</p>
<p>As the combinations of the three steps are unique, I used each combination as a dictionary key for the Step Type.</p> <p>Here I pre-defined the category values, but they can be auto-generated by scanning the df if needed (see the sketch after the code).</p> <pre><code># df
   Step1  Step2  Step3  Freq
0    6.0   17.6  28.60   135
1    7.5   22.0  35.75   255
2   10.5   30.8  50.05   129
3   12.0   35.2  57.20   369
4   13.5   30.6  64.35   249
</code></pre> <pre><code>category = {
    (6.0, 17.6, 28.60): 'A',
    (7.5, 22.0, 35.75): 'B',
    (10.5, 30.8, 50.05): 'C',
    (12, 35.2, 57.20): 'D',
    (13.5, 30.6, 64.35): 'E',
}

df['Step_Type'] = df.apply(lambda row: category[(row['Step1'], row['Step2'], row['Step3'])], axis=1)
df = df[['Step_Type', 'Freq']]
print(df)

#  Step_Type  Freq
#0         A   135
#1         B   255
#2         C   129
#3         D   369
#4         E   249
</code></pre>
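<p>A sketch of auto-generating the labels instead of typing the dictionary by hand (letters are assigned in group order, so they may not match the hand-picked A to M labels from the question, and this assumes at most 26 groups):</p> <pre><code>import string

codes = df.groupby(['Step1', 'Step2', 'Step3']).ngroup()
df['Step_Type'] = codes.map(lambda i: string.ascii_uppercase[i])
</code></pre>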
python|pandas
0
378,012
70,625,453
Replacing nan values in a Pandas data frame with lists
<p>How can I replace nan or empty strings (e.g. &quot;&quot;) with zero wherever they occur, in any column? The values in a column can be a mix of lists and scalar values, as follows:</p> <pre><code>col1  col2  col3         col4
nan   Jhon  [nan, 1, 2]  ['k', 'j']
1     nan   [1, 1, 5]    3
2     &quot;&quot;    nan          nan
3     Samy  [1, 1, nan]  ['b', '']
</code></pre>
<p>You have to handle the three cases (empty string, NaN, NaN in list) separately.</p> <p>For the NaN in a list you need to loop over each occurrence and replace the elements one by one.</p> <p><em>NB. <code>applymap</code> is slow, so if you know in advance the columns to use you can subset them</em></p> <p>For the empty strings, replace them with NaN, then <code>fillna</code>.</p> <pre><code>sub = 'X'
(df.applymap(lambda x: [sub if (pd.isna(e) or e=='') else e for e in x]
             if isinstance(x, list) else x)
   .replace('', float('nan'))
   .fillna(sub)
)
</code></pre> <p>Output:</p> <pre><code>  col1  col2       col3    col4
0    X  Jhon  [X, 1, 2]  [k, j]
1  1.0     X  [1, 1, 5]       3
2  2.0     X          X       X
3  3.0  Samy  [1, 1, X]  [b, X]
</code></pre> <p>Used input:</p> <pre><code>from numpy import nan

df = pd.DataFrame({'col1': {0: nan, 1: 1.0, 2: 2.0, 3: 3.0},
                   'col2': {0: 'Jhon', 1: nan, 2: '', 3: 'Samy'},
                   'col3': {0: [nan, 1, 2], 1: [1, 1, 5], 2: nan, 3: [1, 1, nan]},
                   'col4': {0: ['k', 'j'], 1: '3', 2: nan, 3: ['b', '']}})
</code></pre>
python|pandas
0
378,013
70,584,431
How to modify a dataset
<p>I am working with a dataset like this, where the values of 'Country Name' are repeated several times, and so are the values of 'Indicator Name'.</p> <p><a href="https://i.stack.imgur.com/fPyT4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fPyT4.png" alt="enter image description here" /></a></p> <p>I want to create a new dataset whose columns look like this:</p> <pre><code>Year   CountryName  IndicatorName1  IndicatorName2 ... IndicatorNameX
2000.  USA.         value1.         value2.            valueX
2000.  Canada.      value1.         value2.            valueX
2001.  USA.         value1.         value2.            valueX
2001.  Canada.      value1.         value2.            valueX
</code></pre> <p>Is it possible to do that?</p> <p>Thanks in advance!</p>
<p>You can use <code>pivot</code> as suggested by @Chris but you can also try:</p> <pre><code>out = df.set_index(['Country Name', 'Indicator Name']).unstack('Country Name').T \
        .rename_axis(index=['Year', 'Country'], columns=None).reset_index()
print(out)

# Output
   Year Country  IndicatorName1  IndicatorName2
0  2000  France               1               3
1  2000   Italy               2               4
2  2001  France               5               7
3  2001   Italy               6               8
</code></pre> <p>Setup of a <a href="https://stackoverflow.com/q/20109391/15239951">Pandas</a> / <a href="https://stackoverflow.com/help/minimal-reproducible-example">MRE</a>:</p> <pre><code>data = {'Country Name': ['France', 'Italy', 'France', 'Italy'],
        'Indicator Name': ['IndicatorName1', 'IndicatorName1', 'IndicatorName2', 'IndicatorName2'],
        2000: [1, 2, 3, 4],
        2001: [5, 6, 7, 8]}
df = pd.DataFrame(data)
print(df)

# Output
  Country Name  Indicator Name  2000  2001
0       France  IndicatorName1     1     5
1        Italy  IndicatorName1     2     6
2       France  IndicatorName2     3     7
3        Italy  IndicatorName2     4     8
</code></pre>
python|pandas|dataframe
0
378,014
70,542,409
How to resave a csv file using pandas in Python?
<p>I have read a csv file using Pandas and I need to resave the csv file using code instead of opening the csv file and manually saving it.</p> <p>Is it possible?</p>
<p>There must be something I'm missing in the question. Why not simply:</p> <pre class="lang-py prettyprint-override"><code>df = pd.read_csv('file.csv', ...)
# any changes
df.to_csv('file.csv')
</code></pre> <p>?</p>
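<p>One detail worth knowing: by default <code>to_csv</code> writes the DataFrame index as an extra first column, so a file that is read and rewritten this way grows an &quot;Unnamed: 0&quot; column on each round trip. Passing <code>index=False</code> avoids that:</p> <pre><code>df.to_csv('file.csv', index=False)
</code></pre>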
python|pandas|csv
2
378,015
70,561,095
Replace only replacing the 1st argument
<p>I have the following code:</p> <pre><code>df['Price'] = df['Price'].replace(regex={'$': 1, '$$': 2, '$$$': 3})
df['Price'].fillna(0)
</code></pre> <p>but even if a row had &quot;$$&quot; or &quot;$$$&quot; it still replaces it with a 1.0.</p> <p>How can I make it appropriately replace $ with 1, $$ with 2, and $$$ with 3?</p>
<pre class="lang-py prettyprint-override"><code>df.Price.map({'$': 1, '$$': 2, '$$$': 3}) </code></pre>
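<p>The reason the original <code>replace</code> call misbehaves: with <code>regex=</code>, <code>$</code> is a regex metacharacter (the end-of-string anchor), so the patterns don't match the literal dollar signs the way you'd expect; you would have to escape them as <code>\$</code>, and even then <code>'\$'</code> would match inside <code>'$$'</code>. <code>map</code> sidesteps all of this with exact, whole-value matching. Values not present in the mapping become <code>NaN</code>, so the <code>fillna</code> from the question can be chained on (and note it has to be assigned back):</p> <pre><code>df['Price'] = df['Price'].map({'$': 1, '$$': 2, '$$$': 3}).fillna(0)
</code></pre>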
python|pandas
3
378,016
70,622,836
How to match string and arrange dataframe accordingly?
<p>Got input df1 and df2.</p> <p><strong>df1:</strong></p> <pre><code>Subcategory_Desc  Segment_Desc    Flow          Side       Row_no
APPLE             APPLE LOOSE     Apple Kanzi   Front      Row 1
APPLE             APPLE LOOSE     Apple Jazz    Front      Row 1
CITRUS            ORANGES LOOSE   Orange Navel  Front      Row 1
PEAR              PEARS LOOSE     Lemon         Right End  Row 1
AVOCADOS          AVOCADOS LOOSE  Avocado       Back       Row 1
TROPICAL FRUIT    KIWI FRUIT      Kiwi Gold     Back       Row 1
TROPICAL FRUIT    KIWI FRUIT      Kiwi Green    Left End   Row 1
</code></pre> <p><strong>df2:</strong></p> <pre><code>Subcategory_Desc  Segment_Desc    Flow
TROPICAL FRUIT    KIWI FRUIT      5pk Kids Kiwi
APPLE             APPLE LOOSE     Apple GoldenDel
AVOCADOS          AVOCADOS LOOSE  Avocado Tray
</code></pre> <p><strong>Scenario:</strong> The rows of dataframe <em>df2</em> should be inserted into dataframe <em>df1</em> under the following conditions:</p> <ol> <li>Look up the matching <em>Subcategory_Desc</em> and <em>Segment_Desc</em> of df2 in df1 and insert that df2 row at the end of that particular Side (Front/Back), as given in the expected output.</li> <li>The Row_no column needs to be considered as well, because the original dataset holds many Row_no values; only Row 1 is shown in the sample data.</li> </ol> <p><strong>Expected Output:</strong></p> <pre><code>Subcategory_Desc  Segment_Desc    Flow             Side       Row_no
APPLE             APPLE LOOSE     Apple Kanzi      Front      Row 1
APPLE             APPLE LOOSE     Apple Jazz       Front      Row 1
CITRUS            ORANGES LOOSE   Orange Navel     Front      Row 1
APPLE             APPLE LOOSE     Apple GoldenDel  Front      Row 1
PEAR              PEARS LOOSE     Lemon            Right End  Row 1
AVOCADOS          AVOCADOS LOOSE  Avocado          Back       Row 1
TROPICAL FRUIT    KIWI FRUIT      Kiwi Gold        Back       Row 1
TROPICAL FRUIT    KIWI FRUIT      5pk Kids Kiwi    Back       Row 1
AVOCADOS          AVOCADOS LOOSE  Avocado Tray     Back       Row 1
TROPICAL FRUIT    KIWI FRUIT      Kiwi Green       Left End   Row 1
</code></pre> <p>Not sure what simple logic can be used for this purpose.</p>
<p>So, given the following dataframes:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df1 = pd.DataFrame( { &quot;Subcategory_Desc&quot;: { 0: &quot;APPLE&quot;, 1: &quot;APPLE&quot;, 2: &quot;CITRUS&quot;, 3: &quot;PEAR&quot;, 4: &quot;AVOCADOS&quot;, 5: &quot;TROPICAL FRUIT&quot;, 6: &quot;TROPICAL FRUIT&quot;, }, &quot;Segment_Desc&quot;: { 0: &quot;APPLE LOOSE&quot;, 1: &quot;APPLE LOOSE&quot;, 2: &quot;ORANGES LOOSE&quot;, 3: &quot;PEARS LOOSE&quot;, 4: &quot;AVOCADOS LOOSE&quot;, 5: &quot;KIWI FRUIT&quot;, 6: &quot;KIWI FRUIT&quot;, }, &quot;Flow&quot;: { 0: &quot;Apple Kanzi&quot;, 1: &quot;Apple Jazz&quot;, 2: &quot;Orange Navel&quot;, 3: &quot;Lemon&quot;, 4: &quot;Avocado&quot;, 5: &quot;Kiwi Gold&quot;, 6: &quot;Kiwi Green&quot;, }, &quot;Side&quot;: { 0: &quot;Front&quot;, 1: &quot;Front&quot;, 2: &quot;Front&quot;, 3: &quot;Right_End&quot;, 4: &quot;Back&quot;, 5: &quot;Back&quot;, 6: &quot;Left_End&quot;, }, &quot;Row_no&quot;: { 0: &quot;Row 1&quot;, 1: &quot;Row 1&quot;, 2: &quot;Row 1&quot;, 3: &quot;Row 1&quot;, 4: &quot;Row 1&quot;, 5: &quot;Row 1&quot;, 6: &quot;Row 1&quot;, }, } ) df2 = pd.DataFrame( { &quot;Subcategory_Desc&quot;: {0: &quot;TROPICAL FRUIT&quot;, 1: &quot;APPLE&quot;, 2: &quot;AVOCADOS&quot;}, &quot;Segment_Desc&quot;: {0: &quot;KIWI FRUIT&quot;, 1: &quot;APPLE LOOSE&quot;, 2: &quot;AVOCADOS LOOSE&quot;}, &quot;Flow&quot;: {0: &quot;5pk Kids Kiwi&quot;, 1: &quot;Apple GoldenDel&quot;, 2: &quot;Avocado Tray&quot;}, } ) </code></pre> <p>You could try this:</p> <pre class="lang-py prettyprint-override"><code># Initialize new column df2[&quot;idx&quot;] = &quot;&quot; # Find indice of first match in df1 for _, row2 in df2.iterrows(): for i, row1 in df1.iterrows(): if i + 1 &gt;= df1.shape[0]: break if ( row1[&quot;Subcategory_Desc&quot;] == row2[&quot;Subcategory_Desc&quot;] and row1[&quot;Segment_Desc&quot;] == row2[&quot;Segment_Desc&quot;] ): row2[&quot;idx&quot;] = i df2 = df2.sort_values(by=&quot;idx&quot;).reset_index(drop=True) # Starting from previous indice, find insertion indice in df1 for i, idx in enumerate(df2[&quot;idx&quot;]): side_of_idx = df1.loc[idx, &quot;Side&quot;] df2.loc[i, &quot;pos&quot;] = df1.index[df1[&quot;Side&quot;] == side_of_idx].to_list()[-1] + 1 positions = df2[&quot;pos&quot;].astype(&quot;int&quot;).to_list() # Clean up df2 df2 = df2.drop(columns=[&quot;idx&quot;, &quot;pos&quot;]) df2[&quot;Side&quot;] = df2[&quot;Row_no&quot;] = &quot;&quot; # Iterate on df1 to insert new rows for i, pos in enumerate(positions): # Fill missing values df2.loc[i, &quot;Side&quot;] = df1.loc[pos - 1, &quot;Side&quot;] df2.loc[i, &quot;Row_no&quot;] = df1.loc[pos, &quot;Row_no&quot;] # Insert row df1 = pd.concat( [df1.iloc[:pos], pd.DataFrame([df2.iloc[i]]), df1.iloc[pos:]], ignore_index=True ).reset_index(drop=True) # Increment next position since df1 has changed if i &lt; len(positions) - 1: positions[i + 1] += 1 </code></pre> <p>And so:</p> <pre class="lang-py prettyprint-override"><code>print(df1) # Outputs Subcategory_Desc Segment_Desc Flow Side Row_no 0 APPLE APPLE LOOSE Apple Kanzi Front Row 1 1 APPLE APPLE LOOSE Apple Jazz Front Row 1 2 CITRUS ORANGES LOOSE Orange Navel Front Row 1 3 APPLE APPLE LOOSE Apple GoldenDel Front Row 1 4 PEAR PEARS LOOSE Lemon Right_End Row 1 5 AVOCADOS AVOCADOS LOOSE Avocado Back Row 1 6 TROPICAL FRUIT KIWI FRUIT Kiwi Gold Back Row 1 7 TROPICAL FRUIT KIWI FRUIT 5pk Kids Kiwi Back Row 1 8 AVOCADOS AVOCADOS LOOSE Avocado Tray Back Row 1 9 TROPICAL FRUIT KIWI FRUIT Kiwi Green 
Left_End Row 1 </code></pre>
python|pandas|dataframe|string-matching|fuzzywuzzy
1
378,017
70,412,168
Data cleaning, dictionary, inside dictionary,inside lists in CSV
<p>I'm a newbie learning data science, I've been trying to clean a data set, but I've had some hurdles on the way, the first issue I had was <a href="https://stackoverflow.com/questions/70404479/extract-data-in-a-column-from-a-csv-saved-as-a-dictionary-python-pandas"><strong>to explode a Dictionary inside a table into individual columns</strong></a> link below), thanks to user Parfait I could do it using literal_eval, then I had a problem trying to apply the same solution until I found literal_eval has issues with null values, I got rid of nulls and some bad uses of quotes.</p> <p>Now I got this, it seems that a column, which is a dictionary has not one but two values which are dictionaries themselves, I've tried to pop and del those values, but it seems the data is not considered a dictionary so I couldn't afford it.</p> <p>When running <code>df['creator'].map(eval)</code> I get the message appended below, look to the &quot;avatar&quot; and &quot;api&quot; columns, these two columns are not necessary for what I want, so I could drop them, but I have not find a way to do it.</p> <p>To be clear I just want to extract id and name columns as &quot;cre_id&quot; and &quot;cre_name&quot;, add them to the main df with prefix and deleting the rest of the column, thank you for your help.</p> <pre><code>df['creator'].map(eval) File &quot;&lt;string&gt;&quot;, line 1 {&quot;id&quot;:347819977,&quot;name&quot;:Raul CJ Montes,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/019/996/402/9de6ab427db7becb81711ce9b25e3645_original.jpg?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1517101311&amp;auto=format&amp;frame=1&amp;q=92&amp;s=c41776ee80edfa63ba4dc916b24f6f00&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/019/996/402/9de6ab427db7becb81711ce9b25e3645_original.jpg?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1517101311&amp;auto=format&amp;frame=1&amp;q=92&amp;s=6983b13a3c4e7a7a5f0b2d42f78f50dc&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/019/996/402/9de6ab427db7becb81711ce9b25e3645_original.jpg?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1517101311&amp;auto=format&amp;frame=1&amp;q=92&amp;s=bb04642f7264234e6c01c5b1b77d8c63&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/347819977&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/347819977?signature=1631849457.e135d96dc2a9edbddb71deef896c78155ed13e8b&quot;}}} ^ SyntaxError: invalid syntax </code></pre> <p>Edit: Added first ten rows of the dataset:</p> <pre><code>{0: '{&quot;id&quot;:1379875462,&quot;name&quot;:&quot;Batton 
Lash&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/006/347/706/b3908a1a23f6b9e472edcf7c934e5b0e_original.jpg?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1461382354&amp;auto=format&amp;frame=1&amp;q=92&amp;s=4d88bd2ed1e7098fcaf046321cc4be15&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/006/347/706/b3908a1a23f6b9e472edcf7c934e5b0e_original.jpg?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1461382354&amp;auto=format&amp;frame=1&amp;q=92&amp;s=664f586cef17d83dc408a6a10b0f3c4a&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/006/347/706/b3908a1a23f6b9e472edcf7c934e5b0e_original.jpg?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1461382354&amp;auto=format&amp;frame=1&amp;q=92&amp;s=fe307263e32a2385e764e3923a13179e&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/1379875462&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/1379875462?signature=1631849432.d50b79030e15111575554ecae171babad1f2925d&quot;}}}', 1: '{&quot;id&quot;:408247096,&quot;name&quot;:&quot;Scott(skoddii)&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/020/330/517/383423c1c19dfbd99534c6185eb09a6f_original.png?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1519354368&amp;auto=format&amp;frame=1&amp;q=92&amp;s=74f83e0070b20db01d5180ba214d1b5e&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/020/330/517/383423c1c19dfbd99534c6185eb09a6f_original.png?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1519354368&amp;auto=format&amp;frame=1&amp;q=92&amp;s=671b9100176dbfa63752a7a8e9cc63d0&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/020/330/517/383423c1c19dfbd99534c6185eb09a6f_original.png?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1519354368&amp;auto=format&amp;frame=1&amp;q=92&amp;s=956c6f85ffbc3fb179c260611254a2be&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/408247096&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/408247096?signature=1631849432.6cc0456d4795aea0b32f861b050212afef4387ce&quot;}}}', 2: '{&quot;id&quot;:361953386,&quot;name&quot;:&quot;Luis G. 
Batista, CPM, C.P.S.M&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/751/771/b9a11e982831d2190d68e2ea0d3a4ff0_original.jpg?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1488754184&amp;auto=format&amp;frame=1&amp;q=92&amp;s=f4dc0bbe5e7edbb35fb15c07bdb2c843&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/751/771/b9a11e982831d2190d68e2ea0d3a4ff0_original.jpg?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1488754184&amp;auto=format&amp;frame=1&amp;q=92&amp;s=9c7e202bb6491516468ec69dff66bcdd&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/751/771/b9a11e982831d2190d68e2ea0d3a4ff0_original.jpg?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1488754184&amp;auto=format&amp;frame=1&amp;q=92&amp;s=ac05f1a9827cc321ea3e8f754f19be94&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/361953386&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/361953386?signature=1631849432.7262fa85aec828a6b01ea70685ef22b0ada784ad&quot;}}}', 3: '{&quot;id&quot;:202579323,&quot;name&quot;:&quot;Brian Carmichael&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/010/482/911/12f9ff13c9a415e4e869b8036662f02c_original.jpg?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1488680236&amp;auto=format&amp;frame=1&amp;q=92&amp;s=9433c133b6bf02a45dd8ba78a0b44a46&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/010/482/911/12f9ff13c9a415e4e869b8036662f02c_original.jpg?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1488680236&amp;auto=format&amp;frame=1&amp;q=92&amp;s=900c300f2d425243c108ed4419c78793&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/010/482/911/12f9ff13c9a415e4e869b8036662f02c_original.jpg?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1488680236&amp;auto=format&amp;frame=1&amp;q=92&amp;s=55e58d426c7f41b92081ce735abac404&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/202579323&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/202579323?signature=1631849432.fb88647e78bbe87ca2646330b0d84a0237c7cc46&quot;}}}', 4: '{&quot;id&quot;:1996450690,&quot;name&quot;:&quot;Dan 
Schmeidler&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/757/606/4f4d33cc942cdfe4b95af09e43a49255_original.JPG?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1488802482&amp;auto=format&amp;frame=1&amp;q=92&amp;s=97f88d105a1bc21a72f008859b13055c&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/757/606/4f4d33cc942cdfe4b95af09e43a49255_original.JPG?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1488802482&amp;auto=format&amp;frame=1&amp;q=92&amp;s=a423f3fbf75bdb32f1c895a1f0d76bca&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/757/606/4f4d33cc942cdfe4b95af09e43a49255_original.JPG?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1488802482&amp;auto=format&amp;frame=1&amp;q=92&amp;s=49f4a2d61132d1068d3f604b03a1f8e5&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/1996450690&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/1996450690?signature=1631849432.3b51c0d212170f4228293d3133045d040c6a6285&quot;}}}', 5: '{&quot;id&quot;:903880044,&quot;name&quot;:&quot;Doug McQuilken&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/014/523/998/230d7cd9d27128f28366a7a1c4977273_original.jpg?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1479214827&amp;auto=format&amp;frame=1&amp;q=92&amp;s=84c65c201bdb46e72afeef51ad261913&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/014/523/998/230d7cd9d27128f28366a7a1c4977273_original.jpg?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1479214827&amp;auto=format&amp;frame=1&amp;q=92&amp;s=52beef6574a551f81be17acc750d4e2e&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/014/523/998/230d7cd9d27128f28366a7a1c4977273_original.jpg?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1479214827&amp;auto=format&amp;frame=1&amp;q=92&amp;s=b4bb14d2759e21e6c40d3ef9c86c1ed3&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/903880044&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/903880044?signature=1631849432.6a7dcb45d0ca2a4c5922d51a0b3f36f7972b6ac0&quot;}}}', 6: '{&quot;id&quot;:1391487766,&quot;name&quot;:&quot;Karen 
Scott&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/612/365/b1ce5bfa90d24a767547b168e3efdbef_original.JPG?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1487847709&amp;auto=format&amp;frame=1&amp;q=92&amp;s=e18d2c915b50e20cf27bb1255ad82ba9&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/612/365/b1ce5bfa90d24a767547b168e3efdbef_original.JPG?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1487847709&amp;auto=format&amp;frame=1&amp;q=92&amp;s=bd7c22cafcec49e73bea6a106976043c&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/612/365/b1ce5bfa90d24a767547b168e3efdbef_original.JPG?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1487847709&amp;auto=format&amp;frame=1&amp;q=92&amp;s=d1d5327de95dac76d4cbed7a95007de1&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/1391487766&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/1391487766?signature=1631849432.2720fa0d8a70ccfc33034287985b98c0c791a23d&quot;}}}', 7: '{&quot;id&quot;:1344116211,&quot;name&quot;:&quot;Sanjiv(Sam) Mall&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/648/502/206b8686072b528ea6fd1fe78adfcc25_original.JPG?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1488128800&amp;auto=format&amp;frame=1&amp;q=92&amp;s=fd4520798d39b777e5814219c8fe4ad2&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/648/502/206b8686072b528ea6fd1fe78adfcc25_original.JPG?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1488128800&amp;auto=format&amp;frame=1&amp;q=92&amp;s=67553420e14378664ae3555275a25d51&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/648/502/206b8686072b528ea6fd1fe78adfcc25_original.JPG?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1488128800&amp;auto=format&amp;frame=1&amp;q=92&amp;s=f08f3b4420e3ab37c4e07b4f98100dde&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/1344116211&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/1344116211?signature=1631849432.6e307780f53a56c7a6dd5493ae59f26575d9fbcb&quot;}}}', 8: '{&quot;id&quot;:2071365832,&quot;name&quot;:&quot;Christoph 
Vogelbusch&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/012/912/270/d2f18c4ec6fcb2357ab073d0e6e0aa9e_original.png?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1467291732&amp;auto=format&amp;frame=1&amp;q=92&amp;s=3b321faecc138d42f7aa249620fc342d&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/012/912/270/d2f18c4ec6fcb2357ab073d0e6e0aa9e_original.png?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1467291732&amp;auto=format&amp;frame=1&amp;q=92&amp;s=967c607450ac03547632f0865270822f&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/012/912/270/d2f18c4ec6fcb2357ab073d0e6e0aa9e_original.png?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1467291732&amp;auto=format&amp;frame=1&amp;q=92&amp;s=507442b8d2a97678675ec7c19b049e4b&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/2071365832&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/2071365832?signature=1631849432.0d05bc7a066a3748232100864f2d3a441186b289&quot;}}}', 9: '{&quot;id&quot;:850790011,&quot;name&quot;:&quot;Harun Sarac&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/673/759/79ee3faff36e0fb683f834c1f419a0fc_original.jpg?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1488440832&amp;auto=format&amp;frame=1&amp;q=92&amp;s=ab34266c1a0ce2ec4ac5e4931a606b64&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/673/759/79ee3faff36e0fb683f834c1f419a0fc_original.jpg?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1488440832&amp;auto=format&amp;frame=1&amp;q=92&amp;s=e1d62a787470490c4189bb9a72cfbacc&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/673/759/79ee3faff36e0fb683f834c1f419a0fc_original.jpg?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1488440832&amp;auto=format&amp;frame=1&amp;q=92&amp;s=28e1a25444c13592e5ccf2967ac8b8e3&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/850790011&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/850790011?signature=1631849432.3ac62ea0ee180b660968be6227e29684c54286d6&quot;}}}'} </code></pre>
<p>You have the following dataframe given by your dictionary:</p> <pre><code>data = {0: '{&quot;id&quot;:1379875462,&quot;name&quot;:&quot;Batton Lash&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/006/347/706/b3908a1a23f6b9e472edcf7c934e5b0e_original.jpg?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1461382354&amp;auto=format&amp;frame=1&amp;q=92&amp;s=4d88bd2ed1e7098fcaf046321cc4be15&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/006/347/706/b3908a1a23f6b9e472edcf7c934e5b0e_original.jpg?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1461382354&amp;auto=format&amp;frame=1&amp;q=92&amp;s=664f586cef17d83dc408a6a10b0f3c4a&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/006/347/706/b3908a1a23f6b9e472edcf7c934e5b0e_original.jpg?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1461382354&amp;auto=format&amp;frame=1&amp;q=92&amp;s=fe307263e32a2385e764e3923a13179e&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/1379875462&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/1379875462?signature=1631849432.d50b79030e15111575554ecae171babad1f2925d&quot;}}}', 1: '{&quot;id&quot;:408247096,&quot;name&quot;:&quot;Scott(skoddii)&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/020/330/517/383423c1c19dfbd99534c6185eb09a6f_original.png?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1519354368&amp;auto=format&amp;frame=1&amp;q=92&amp;s=74f83e0070b20db01d5180ba214d1b5e&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/020/330/517/383423c1c19dfbd99534c6185eb09a6f_original.png?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1519354368&amp;auto=format&amp;frame=1&amp;q=92&amp;s=671b9100176dbfa63752a7a8e9cc63d0&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/020/330/517/383423c1c19dfbd99534c6185eb09a6f_original.png?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1519354368&amp;auto=format&amp;frame=1&amp;q=92&amp;s=956c6f85ffbc3fb179c260611254a2be&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/408247096&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/408247096?signature=1631849432.6cc0456d4795aea0b32f861b050212afef4387ce&quot;}}}', 2: '{&quot;id&quot;:361953386,&quot;name&quot;:&quot;Luis G. 
Batista, CPM, C.P.S.M&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/751/771/b9a11e982831d2190d68e2ea0d3a4ff0_original.jpg?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1488754184&amp;auto=format&amp;frame=1&amp;q=92&amp;s=f4dc0bbe5e7edbb35fb15c07bdb2c843&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/751/771/b9a11e982831d2190d68e2ea0d3a4ff0_original.jpg?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1488754184&amp;auto=format&amp;frame=1&amp;q=92&amp;s=9c7e202bb6491516468ec69dff66bcdd&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/751/771/b9a11e982831d2190d68e2ea0d3a4ff0_original.jpg?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1488754184&amp;auto=format&amp;frame=1&amp;q=92&amp;s=ac05f1a9827cc321ea3e8f754f19be94&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/361953386&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/361953386?signature=1631849432.7262fa85aec828a6b01ea70685ef22b0ada784ad&quot;}}}', 3: '{&quot;id&quot;:202579323,&quot;name&quot;:&quot;Brian Carmichael&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/010/482/911/12f9ff13c9a415e4e869b8036662f02c_original.jpg?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1488680236&amp;auto=format&amp;frame=1&amp;q=92&amp;s=9433c133b6bf02a45dd8ba78a0b44a46&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/010/482/911/12f9ff13c9a415e4e869b8036662f02c_original.jpg?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1488680236&amp;auto=format&amp;frame=1&amp;q=92&amp;s=900c300f2d425243c108ed4419c78793&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/010/482/911/12f9ff13c9a415e4e869b8036662f02c_original.jpg?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1488680236&amp;auto=format&amp;frame=1&amp;q=92&amp;s=55e58d426c7f41b92081ce735abac404&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/202579323&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/202579323?signature=1631849432.fb88647e78bbe87ca2646330b0d84a0237c7cc46&quot;}}}', 4: '{&quot;id&quot;:1996450690,&quot;name&quot;:&quot;Dan 
Schmeidler&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/757/606/4f4d33cc942cdfe4b95af09e43a49255_original.JPG?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1488802482&amp;auto=format&amp;frame=1&amp;q=92&amp;s=97f88d105a1bc21a72f008859b13055c&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/757/606/4f4d33cc942cdfe4b95af09e43a49255_original.JPG?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1488802482&amp;auto=format&amp;frame=1&amp;q=92&amp;s=a423f3fbf75bdb32f1c895a1f0d76bca&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/757/606/4f4d33cc942cdfe4b95af09e43a49255_original.JPG?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1488802482&amp;auto=format&amp;frame=1&amp;q=92&amp;s=49f4a2d61132d1068d3f604b03a1f8e5&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/1996450690&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/1996450690?signature=1631849432.3b51c0d212170f4228293d3133045d040c6a6285&quot;}}}', 5: '{&quot;id&quot;:903880044,&quot;name&quot;:&quot;Doug McQuilken&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/014/523/998/230d7cd9d27128f28366a7a1c4977273_original.jpg?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1479214827&amp;auto=format&amp;frame=1&amp;q=92&amp;s=84c65c201bdb46e72afeef51ad261913&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/014/523/998/230d7cd9d27128f28366a7a1c4977273_original.jpg?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1479214827&amp;auto=format&amp;frame=1&amp;q=92&amp;s=52beef6574a551f81be17acc750d4e2e&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/014/523/998/230d7cd9d27128f28366a7a1c4977273_original.jpg?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1479214827&amp;auto=format&amp;frame=1&amp;q=92&amp;s=b4bb14d2759e21e6c40d3ef9c86c1ed3&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/903880044&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/903880044?signature=1631849432.6a7dcb45d0ca2a4c5922d51a0b3f36f7972b6ac0&quot;}}}', 6: '{&quot;id&quot;:1391487766,&quot;name&quot;:&quot;Karen 
Scott&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/612/365/b1ce5bfa90d24a767547b168e3efdbef_original.JPG?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1487847709&amp;auto=format&amp;frame=1&amp;q=92&amp;s=e18d2c915b50e20cf27bb1255ad82ba9&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/612/365/b1ce5bfa90d24a767547b168e3efdbef_original.JPG?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1487847709&amp;auto=format&amp;frame=1&amp;q=92&amp;s=bd7c22cafcec49e73bea6a106976043c&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/612/365/b1ce5bfa90d24a767547b168e3efdbef_original.JPG?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1487847709&amp;auto=format&amp;frame=1&amp;q=92&amp;s=d1d5327de95dac76d4cbed7a95007de1&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/1391487766&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/1391487766?signature=1631849432.2720fa0d8a70ccfc33034287985b98c0c791a23d&quot;}}}', 7: '{&quot;id&quot;:1344116211,&quot;name&quot;:&quot;Sanjiv(Sam) Mall&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/648/502/206b8686072b528ea6fd1fe78adfcc25_original.JPG?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1488128800&amp;auto=format&amp;frame=1&amp;q=92&amp;s=fd4520798d39b777e5814219c8fe4ad2&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/648/502/206b8686072b528ea6fd1fe78adfcc25_original.JPG?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1488128800&amp;auto=format&amp;frame=1&amp;q=92&amp;s=67553420e14378664ae3555275a25d51&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/648/502/206b8686072b528ea6fd1fe78adfcc25_original.JPG?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1488128800&amp;auto=format&amp;frame=1&amp;q=92&amp;s=f08f3b4420e3ab37c4e07b4f98100dde&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/1344116211&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/1344116211?signature=1631849432.6e307780f53a56c7a6dd5493ae59f26575d9fbcb&quot;}}}', 8: '{&quot;id&quot;:2071365832,&quot;name&quot;:&quot;Christoph 
Vogelbusch&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/012/912/270/d2f18c4ec6fcb2357ab073d0e6e0aa9e_original.png?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1467291732&amp;auto=format&amp;frame=1&amp;q=92&amp;s=3b321faecc138d42f7aa249620fc342d&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/012/912/270/d2f18c4ec6fcb2357ab073d0e6e0aa9e_original.png?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1467291732&amp;auto=format&amp;frame=1&amp;q=92&amp;s=967c607450ac03547632f0865270822f&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/012/912/270/d2f18c4ec6fcb2357ab073d0e6e0aa9e_original.png?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1467291732&amp;auto=format&amp;frame=1&amp;q=92&amp;s=507442b8d2a97678675ec7c19b049e4b&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/2071365832&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/2071365832?signature=1631849432.0d05bc7a066a3748232100864f2d3a441186b289&quot;}}}', 9: '{&quot;id&quot;:850790011,&quot;name&quot;:&quot;Harun Sarac&quot;,&quot;is_registered&quot;:None,&quot;is_email_verified&quot;:None,&quot;chosen_currency&quot;:None,&quot;is_superbacker&quot;:None,&quot;avatar&quot;:{&quot;thumb&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/673/759/79ee3faff36e0fb683f834c1f419a0fc_original.jpg?ixlib=rb-4.0.2&amp;w=40&amp;h=40&amp;fit=crop&amp;v=1488440832&amp;auto=format&amp;frame=1&amp;q=92&amp;s=ab34266c1a0ce2ec4ac5e4931a606b64&quot;,&quot;small&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/673/759/79ee3faff36e0fb683f834c1f419a0fc_original.jpg?ixlib=rb-4.0.2&amp;w=80&amp;h=80&amp;fit=crop&amp;v=1488440832&amp;auto=format&amp;frame=1&amp;q=92&amp;s=e1d62a787470490c4189bb9a72cfbacc&quot;,&quot;medium&quot;:&quot;https://ksr-ugc.imgix.net/assets/015/673/759/79ee3faff36e0fb683f834c1f419a0fc_original.jpg?ixlib=rb-4.0.2&amp;w=160&amp;h=160&amp;fit=crop&amp;v=1488440832&amp;auto=format&amp;frame=1&amp;q=92&amp;s=28e1a25444c13592e5ccf2967ac8b8e3&quot;},&quot;urls&quot;:{&quot;web&quot;:{&quot;user&quot;:&quot;https://www.kickstarter.com/profile/850790011&quot;},&quot;api&quot;:{&quot;user&quot;:&quot;https://api.kickstarter.com/v1/users/850790011?signature=1631849432.3ac62ea0ee180b660968be6227e29684c54286d6&quot;}}}'} </code></pre> <p>That is:</p> <pre><code>0 {&quot;id&quot;:1379875462,&quot;name&quot;:&quot;Batton Lash&quot;,&quot;is_regi... 1 {&quot;id&quot;:408247096,&quot;name&quot;:&quot;Scott(skoddii)&quot;,&quot;is_re... 2 {&quot;id&quot;:361953386,&quot;name&quot;:&quot;Luis G. Batista, CPM, ... 3 {&quot;id&quot;:202579323,&quot;name&quot;:&quot;Brian Carmichael&quot;,&quot;is_... 4 {&quot;id&quot;:1996450690,&quot;name&quot;:&quot;Dan Schmeidler&quot;,&quot;is_r... 5 {&quot;id&quot;:903880044,&quot;name&quot;:&quot;Doug McQuilken&quot;,&quot;is_re... 6 {&quot;id&quot;:1391487766,&quot;name&quot;:&quot;Karen Scott&quot;,&quot;is_regi... 7 {&quot;id&quot;:1344116211,&quot;name&quot;:&quot;Sanjiv(Sam) Mall&quot;,&quot;is... 8 {&quot;id&quot;:2071365832,&quot;name&quot;:&quot;Christoph Vogelbusch&quot;... 9 {&quot;id&quot;:850790011,&quot;name&quot;:&quot;Harun Sarac&quot;,&quot;is_regis... 
</code></pre> <p>What you can do is the following:</p> <pre><code>from ast import literal_eval df = pd.DataFrame(pd.Series(data)) df[0] = df[0].apply(literal_eval) df = df.join(pd.json_normalize(df[0])) </code></pre> <p>which gives you</p> <pre><code>0 {'id': 1379875462, 'name': 'Batton Lash', 'is_... 1379875462 1 {'id': 408247096, 'name': 'Scott(skoddii)', 'i... 408247096 2 {'id': 361953386, 'name': 'Luis G. Batista, CP... 361953386 3 {'id': 202579323, 'name': 'Brian Carmichael', ... 202579323 4 {'id': 1996450690, 'name': 'Dan Schmeidler', '... 1996450690 5 {'id': 903880044, 'name': 'Doug McQuilken', 'i... 903880044 6 {'id': 1391487766, 'name': 'Karen Scott', 'is_... 1391487766 7 {'id': 1344116211, 'name': 'Sanjiv(Sam) Mall',... 1344116211 8 {'id': 2071365832, 'name': 'Christoph Vogelbus... 2071365832 9 {'id': 850790011, 'name': 'Harun Sarac', 'is_r... 850790011 name is_registered is_email_verified \ 0 Batton Lash None None 1 Scott(skoddii) None None 2 Luis G. Batista, CPM, C.P.S.M None None 3 Brian Carmichael None None 4 Dan Schmeidler None None 5 Doug McQuilken None None 6 Karen Scott None None 7 Sanjiv(Sam) Mall None None 8 Christoph Vogelbusch None None 9 Harun Sarac None None chosen_currency is_superbacker \ 0 None None 1 None None 2 None None 3 None None 4 None None 5 None None 6 None None 7 None None 8 None None 9 None None avatar.thumb \ 0 https://ksr-ugc.imgix.net/assets/006/347/706/b... 1 https://ksr-ugc.imgix.net/assets/020/330/517/3... 2 https://ksr-ugc.imgix.net/assets/015/751/771/b... 3 https://ksr-ugc.imgix.net/assets/010/482/911/1... 4 https://ksr-ugc.imgix.net/assets/015/757/606/4... 5 https://ksr-ugc.imgix.net/assets/014/523/998/2... 6 https://ksr-ugc.imgix.net/assets/015/612/365/b... 7 https://ksr-ugc.imgix.net/assets/015/648/502/2... 8 https://ksr-ugc.imgix.net/assets/012/912/270/d... 9 https://ksr-ugc.imgix.net/assets/015/673/759/7... avatar.small \ 0 https://ksr-ugc.imgix.net/assets/006/347/706/b... 1 https://ksr-ugc.imgix.net/assets/020/330/517/3... 2 https://ksr-ugc.imgix.net/assets/015/751/771/b... 3 https://ksr-ugc.imgix.net/assets/010/482/911/1... 4 https://ksr-ugc.imgix.net/assets/015/757/606/4... 5 https://ksr-ugc.imgix.net/assets/014/523/998/2... 6 https://ksr-ugc.imgix.net/assets/015/612/365/b... 7 https://ksr-ugc.imgix.net/assets/015/648/502/2... 8 https://ksr-ugc.imgix.net/assets/012/912/270/d... 9 https://ksr-ugc.imgix.net/assets/015/673/759/7... avatar.medium \ 0 https://ksr-ugc.imgix.net/assets/006/347/706/b... 1 https://ksr-ugc.imgix.net/assets/020/330/517/3... 2 https://ksr-ugc.imgix.net/assets/015/751/771/b... 3 https://ksr-ugc.imgix.net/assets/010/482/911/1... 4 https://ksr-ugc.imgix.net/assets/015/757/606/4... 5 https://ksr-ugc.imgix.net/assets/014/523/998/2... 6 https://ksr-ugc.imgix.net/assets/015/612/365/b... 7 https://ksr-ugc.imgix.net/assets/015/648/502/2... 8 https://ksr-ugc.imgix.net/assets/012/912/270/d... 9 https://ksr-ugc.imgix.net/assets/015/673/759/7... urls.web.user \ 0 https://www.kickstarter.com/profile/1379875462 1 https://www.kickstarter.com/profile/408247096 2 https://www.kickstarter.com/profile/361953386 3 https://www.kickstarter.com/profile/202579323 4 https://www.kickstarter.com/profile/1996450690 5 https://www.kickstarter.com/profile/903880044 6 https://www.kickstarter.com/profile/1391487766 7 https://www.kickstarter.com/profile/1344116211 8 https://www.kickstarter.com/profile/2071365832 9 https://www.kickstarter.com/profile/850790011 urls.api.user 0 https://api.kickstarter.com/v1/users/137987546... 
1 https://api.kickstarter.com/v1/users/408247096... 2 https://api.kickstarter.com/v1/users/361953386... 3 https://api.kickstarter.com/v1/users/202579323... 4 https://api.kickstarter.com/v1/users/199645069... 5 https://api.kickstarter.com/v1/users/903880044... 6 https://api.kickstarter.com/v1/users/139148776... 7 https://api.kickstarter.com/v1/users/134411621... 8 https://api.kickstarter.com/v1/users/207136583... 9 https://api.kickstarter.com/v1/users/850790011... </code></pre>
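<p>If you only need the flattened columns afterwards, you can drop the raw dictionary column (a small follow-up sketch, assuming the code above has run):</p> <pre><code>df = df.drop(columns=[0])
</code></pre>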
python|pandas|dictionary|machine-learning|data-science
1
378,018
70,526,822
Display specific column through PANDAS
<p>I have PortalMammals_species.csv, which contains the following columns:</p> <pre><code>['record_id', 'new_code', 'oldcode', 'scientificname', 'taxa', 'commonname', 'unknown', 'rodent', 'shrubland_affiliated'] </code></pre> <p>I want to find out how many taxa are “Rodent” and display those records using pandas. I am trying this:</p> <pre><code>Taxa = df[&quot;taxa&quot;]==&quot;Rodent&quot; print(Taxa.value_counts()) </code></pre> <p>but this code gives me only the value counts: True: 28 and False: 27.</p> <p>How can I display only those records that are true?</p> <p><a href="https://i.stack.imgur.com/tOksT.png" rel="nofollow noreferrer">Example</a></p>
<p>If you want a count of 'Rodent' only while still using value_counts(), you can try:</p> <pre><code>df['taxa'][df['taxa']==&quot;Rodent&quot;].value_counts() </code></pre> <p>Another option:</p> <pre><code>df['taxa'][df['taxa']==&quot;Rodent&quot;].count() </code></pre> <p>The PortalMammals_species.csv dataset I am seeing online also has a 'rodent' column, which is a 1/0 flag for rodent; if you have that column too, you could try</p> <pre><code>df['rodent'].sum() </code></pre> <p><strong>EDIT</strong> in response to OP's comment:</p> <p>To display the 'taxa' column only, filtered for Rodent:</p> <pre><code>df['taxa'][df['taxa']==&quot;Rodent&quot;] </code></pre> <p>To display the entire df filtered for Rodent:</p> <pre><code>df[df['taxa']==&quot;Rodent&quot;] </code></pre>
python|pandas|dataset
0
378,019
70,395,804
Keras: Loss for image rotation and translation (target registration error)?
<p>My model returns 3 coordinates [x, y, angle]. I want a TRE (target registration error) similarity between 2 images. My custom loss is:</p> <pre><code>def loss(y_true, y_pred):
    s = tfa.image.rotate(images=y_true[0], angles=y_pred[0][0])
    s = tfa.image.translate(images=s, translations=y_pred[0][1:])
    return tf.reduce_sum(tf.sqrt(tf.square(s - y_true[1])))
</code></pre> <p>y_pred has shape (1, 3) -&gt; a tensor with [angle, x, y].</p> <p>y_true has shape (2, 128, 128) -&gt; y_true[0] and y_true[1] are each an image. I want to:</p> <ul> <li>rotate and translate y_true[0] to get s,</li> <li>compare s and y_true[1] with MSE.</li> </ul> <p>Is it that I can't use tfa.image.translate because it is not differentiable? How can I rotate an image inside a custom loss function? Is there a problem with the gradient?</p>
<p>I believe this will or will not work depending on the frequency content of your data. But in FFT space this might be easier: a translation in image space only changes the phase of the Fourier transform, so comparing magnitude spectra sidesteps the need for a differentiable translate op.</p>
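<p>For illustration, a minimal sketch of what such a loss could look like (the function name and the assumption of float image batches are mine, not from the question):</p> <pre><code>import tensorflow as tf

def fft_magnitude_loss(img_a, img_b):
    # magnitude spectra are invariant to translation, so no translate
    # op is needed inside the loss (rotation is NOT handled this way)
    fa = tf.signal.fft2d(tf.cast(img_a, tf.complex64))
    fb = tf.signal.fft2d(tf.cast(img_b, tf.complex64))
    return tf.reduce_mean(tf.square(tf.abs(fa) - tf.abs(fb)))
</code></pre>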
python|tensorflow|rotation|loss|image-registration
0
378,020
70,473,295
python pandas how to read csv file by block
<p>I'm trying to read a CSV file, block by block.</p> <p>CSV looks like:</p> <pre class="lang-none prettyprint-override"><code>No.,time,00:00:00,00:00:01,00:00:02,00:00:03,00:00:04,00:00:05,00:00:06,00:00:07,00:00:08,00:00:09,00:00:0A,... 1,2021/09/12 02:16,235,610,345,997,446,130,129,94,555,274,4, 2,2021/09/12 02:17,364,210,371,341,294,87,179,106,425,262,3, 1434,2021/09/12 02:28,269,135,372,262,307,73,86,93,512,283,4, 1435,2021/09/12 02:29,281,207,688,322,233,75,69,85,663,276,2, No.,time,00:00:10,00:00:11,00:00:12,00:00:13,00:00:14,00:00:15,00:00:16,00:00:17,00:00:18,00:00:19,00:00:1A,... 1,2021/09/12 02:16,255,619,200,100,453,456,4,19,56,23,4, 2,2021/09/12 02:17,368,21,37,31,24,8,19,1006,4205,2062,30, 1434,2021/09/12 02:28,2689,1835,3782,2682,307,743,256,741,52,23,6, 1435,2021/09/12 02:29,2281,2047,6848,3522,2353,755,659,885,6863,26,36, </code></pre> <p>Blocks start with <strong>No.</strong>, and data rows follow.</p> <pre><code>def run(sock, delay, zipobj): zf = zipfile.ZipFile(zipobj) for f in zf.namelist(): print(zf.filename) print(&quot;csv name: &quot;, f) df = pd.read_csv(zf.open(f), skiprows=[0,1,2,3,4,5] #,&quot;nrows=1435? (but for the next blocks?&quot;) print(df, '\n') date_pattern='%Y/%m/%d %H:%M' df['epoch'] = df.apply(lambda row: int(time.mktime(time.strptime(row.time,date_pattern))), axis=1) # create epoch as a column tuples=[] # data will be saved in a list formated_str='perf.type.serial.object.00.00.00.TOTAL_IOPS' for each_column in list(df.columns)[2:-1]: for e in zip(list(df['epoch']),list(df[each_column])): each_column=each_column.replace(&quot;X&quot;, '') #print(f&quot;perf.type.serial.LDEV.{each_column}.TOTAL_IOPS&quot;,e) tuples.append((f&quot;perf.type.serial.LDEV.{each_column}.TOTAL_IOPS&quot;,e)) package = pickle.dumps(tuples, 1) size = struct.pack('!L', len(package)) sock.sendall(size) sock.sendall(package) time.sleep(delay) </code></pre> <p>Many thanks for help,</p>
<p>Load your file with <code>pd.read_csv</code> and start a new block each time the value in the first column is <code>No.</code>. Use <code>groupby</code> to iterate over each block and create a new dataframe.</p> <pre><code>data = pd.read_csv('data.csv', header=None) dfs = [] for _, df in data.groupby(data[0].eq('No.').cumsum()): df = pd.DataFrame(df.iloc[1:].values, columns=df.iloc[0]) dfs.append(df.rename_axis(columns=None)) </code></pre> <p>Output:</p> <pre><code># First block &gt;&gt;&gt; dfs[0] No. time 00:00:00 00:00:01 00:00:02 00:00:03 00:00:04 00:00:05 00:00:06 00:00:07 00:00:08 00:00:09 00:00:0A ... 0 1 2021/09/12 02:16 235 610 345 997 446 130 129 94 555 274 4 NaN 1 2 2021/09/12 02:17 364 210 371 341 294 87 179 106 425 262 3 NaN 2 1434 2021/09/12 02:28 269 135 372 262 307 73 86 93 512 283 4 NaN 3 1435 2021/09/12 02:29 281 207 688 322 233 75 69 85 663 276 2 NaN # Second block &gt;&gt;&gt; dfs[1] No. time 00:00:10 00:00:11 00:00:12 00:00:13 00:00:14 00:00:15 00:00:16 00:00:17 00:00:18 00:00:19 00:00:1A ... 0 1 2021/09/12 02:16 255 619 200 100 453 456 4 19 56 23 4 NaN 1 2 2021/09/12 02:17 368 21 37 31 24 8 19 1006 4205 2062 30 NaN 2 1434 2021/09/12 02:28 2689 1835 3782 2682 307 743 256 741 52 23 6 NaN 3 1435 2021/09/12 02:29 2281 2047 6848 3522 2353 755 659 885 6863 26 36 NaN </code></pre> <p>and so on.</p>
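<p>One caveat: because the file is read with <code>header=None</code>, every value in the blocks comes back as a string. A possible post-processing sketch (note that <code>errors='ignore'</code>, which leaves non-numeric columns such as <code>time</code> untouched, is deprecated in newer pandas):</p> <pre><code>dfs = [d.apply(pd.to_numeric, errors='ignore') for d in dfs]
</code></pre>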
python|pandas|csv
3
378,021
70,559,780
Pandas - return value of column
<p>I have a df with categories and thresholds:</p> <pre><code>cat t1 t2 t3 t4 a 2 4 6 8 b 3 5 7 0 c 0 0 1 0 </code></pre> <p>My end goal is to return the column name given category and score. I can select a row using a cat variable:</p> <pre><code>df[df['cat'] == cat] </code></pre> <p>How do I now return the column name that is closest to the score (rounded down)? (c, 3) -&gt; t3</p>
<p>You can compute the absolute difference to your value and get the index of the minimum with <code>idxmin</code>:</p> <pre><code>value = 3 cat = 'c' (df.set_index('cat') .loc[cat] .sub(value).abs() .idxmin() ) </code></pre> <p>Output: <code>'t3'</code></p> <h4>ensuring rounded down</h4> <pre><code>value = 1 cat = 'a' out = ( df.set_index('cat') .loc[cat] .sub(value).abs() .idxmin() ) x = df.set_index('cat').loc[cat,out] out = None if value &lt; x else out print(out) </code></pre>
python|pandas
1
378,022
70,568,067
Calculate standard deviation for groups of values using Python
<p>My data looks similar to this:</p> <pre><code>index name number difference 0 AAA 10 0 1 AAA 20 10 2 BBB 1 0 3 BBB 2 1 4 CCC 5 0 5 CCC 10 5 6 CCC 10.5 0.5 </code></pre> <p>I need to calculate standard deviation for difference column based on groups of name.</p> <p>I tried</p> <pre><code>data[['difference']].groupby(['name']).agg(['mean', 'std']) </code></pre> <p>and</p> <pre><code>data[&quot;std&quot;]=(data['difference'].groupby('name').std()) </code></pre> <p>but both gave KeyError for the variable that's passed to <code>groupby()</code>. I tried to resolve it with:</p> <pre><code>data.columns = data.columns.str.strip() </code></pre> <p>but the error persists.</p> <p>Thanks in advance.</p>
<p>You can use <code>groupby(['name'])</code> on the full data frame first, and only apply the agg on the columns of interest:</p> <pre><code>data = pd.DataFrame({'name':['AAA','AAA','BBB','BBB','CCC','CCC','CCC'], 'number':[10,20,1,2,5,10,10.5], 'difference':[0,10,0,1,0,5,0.5]}) data.groupby(['name'])['difference'].agg(['mean', 'std']) </code></pre>
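<p>For reference, on the sample frame above this produces (std values rounded):</p> <pre><code>print(data.groupby(['name'])['difference'].agg(['mean', 'std']))
#           mean       std
# name
# AAA   5.000000  7.071068
# BBB   0.500000  0.707107
# CCC   1.833333  2.753785
</code></pre>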
python|pandas-groupby|aggregate|standard-deviation
2
378,023
70,528,867
How to fix "pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available" when installing Tensorflow?
<p>I was trying to install TensorFlow with Anaconda 3.9.9.</p> <p>I ran the command</p> <pre class="lang-none prettyprint-override"><code>pip install tensorflow </code></pre> <p>and there was an error saying:</p> <pre class="lang-none prettyprint-override"><code>WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available. WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(&quot;Can't connect to HTTPS URL because the SSL module is not available.&quot;)': /simple/tensorflow/ Could not fetch URL https://pypi.org/simple/tensorflow/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/tensorflow/ (Caused by SSLError(&quot;Can't connect to HTTPS URL because the SSL module is not available.&quot;)) - skipping ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none) ERROR: No matching distribution found for tensorflow </code></pre> <p>I have tried adding <code>/anaconda3</code>, <code>/anaconda3/Scripts</code> and <code>/anaconda3/library/bin</code> to the Path variable. I have also tried running the command:</p> <pre class="lang-none prettyprint-override"><code>pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org tensorflow </code></pre> <p>but nothing seems to be working.</p> <p>Did I miss anything and are there any other solution?</p>
<p>Try running the command below first (note that <code>ssl</code> itself is part of the Python standard library and cannot be installed with pip):</p> <pre><code>pip install --upgrade pip </code></pre> <p>Please create a virtual environment to install <code>TensorFlow</code> in <code>Anaconda</code>.</p> <p>Follow the commands below to install <code>TensorFlow</code> in the virtual environment:</p> <pre><code>conda create -n tf tensorflow   # create a virtual environment (tf)
conda activate tf               # activate the virtual environment
pip install tensorflow          # install TensorFlow in it
</code></pre> <p><strong>Note:</strong> You need to activate the virtual environment each time you want to use TensorFlow.</p>
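<p>After activating the environment, you can sanity-check the install with:</p> <pre><code>python -c &quot;import tensorflow as tf; print(tf.__version__)&quot;
</code></pre>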
python|tensorflow
0
378,024
70,670,965
Python Pandas How to get rid of groupings with only 1 row?
<p>In my dataset, I am trying to get the margin between two values. The code below runs perfectly if the fourth race was not included. After grouping based on a column, it seems that sometimes, there will be only 1 value, therefore, no other value to get a margin out of. I want to ignore these groupings in that case. Here is my current code:</p> <pre><code>import pandas as pd data = {'Name':['A', 'B', 'B', 'C', 'A', 'C', 'A'], 'RaceNumber': [1, 1, 2, 2, 3, 3, 4], 'PlaceWon':['First', 'Second', 'First', 'Second', 'First', 'Second', 'First'], 'TimeRanInSec':[100, 98, 66, 60, 75, 70, 75]} df = pd.DataFrame(data) print(df) def winning_margin(times): times = list(times) winner = min(times) times.remove(winner) return min(times) - winner winning_margins = df[['RaceNumber', 'TimeRanInSec']] \ .groupby('RaceNumber').agg(winning_margin) winning_margins.columns = ['margin'] winners = df.loc[df.PlaceWon == 'First', :] winners = winners.join(winning_margins, on='RaceNumber') avg_margins = winners[['Name', 'margin']].groupby('Name').mean() avg_margins </code></pre>
<p>How about returning a NaN if <code>times</code> does not have enough elements:</p> <pre><code>import numpy as np def winning_margin(times): if len(times) &lt;= 1: # New code return np.NaN # New code times = list(times) winner = min(times) times.remove(winner) return min(times) - winner </code></pre> <p>Your code runs with this change and seems to produce sensible results. You can additionally remove the NaNs later if you want, e.g. in this line:</p> <pre><code>winning_margins = df[['RaceNumber', 'TimeRanInSec']] \ .groupby('RaceNumber').agg(winning_margin).dropna() # note the addition of .dropna() </code></pre>
python-3.x|pandas|dataframe
0
378,025
70,450,017
Perform unique row operation after a groupby
<p>I am stuck on a problem where I have done all the groupby operations and got the resultant dataframe shown below, but the problem comes in the last step: calculating one additional column.</p> <p>Current dataframe:</p> <pre><code>code industry category count duration 2 Retail Mobile 4 7 3 Retail Tab 2 33 3 Health Mobile 5 103 2 Food TV 1 88 </code></pre> <p>The question: I want an additional column <code>operation</code> which calculates the ratio of the count of industry 'Retail' for the specific <code>code</code> column entry.</p> <p>For example: code <code>2</code> has 2 <code>industry</code> entries, Retail and Food, so the <code>operation</code> column should have the value <code>4/(4+1) = 0.8</code>, and similarly for code <code>3</code>, as shown below.</p> <p>O/P:</p> <pre><code>code industry category count duration operation 2 Retail Mobile 4 7 0.8 3 Retail Tab 2 33 - 3 Health Mobile 5 103 2/7 = 0.285 2 Food TV 1 88 - </code></pre> <p>Help on this as well: if I do just a groupby I will miss the <code>category</code> and <code>duration</code> information. Also, what would be a better way to represent the output df? There can be multiple industries, and <code>operation</code> is limited to just Retail.</p>
<p>I can't think of a single built-in operation, but going via a dictionary works. (And, for the benefit of the other answerers, here is the code to create the example dataframe.)</p> <pre><code>st_l = [[2,'Retail','Mobile', 4, 7], [3,'Retail', 'Tab', 2, 33], [3,'Health', 'Mobile', 5, 103], [2,'Food', 'TV', 1, 88]] df = pd.DataFrame(st_l, columns= ['code','industry','category','count','duration']) </code></pre> <p>And now my attempt:</p> <pre><code>sums = df[['code', 'count']].groupby('code').sum().to_dict()['count'] df['operation'] = df.apply(lambda x: x['count']/sums[x['code']], axis=1) </code></pre>
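<p>A possible one-expression alternative using <code>groupby</code>/<code>transform</code> (a sketch; like the code above, it fills the ratio for every row, not only the Retail ones):</p> <pre><code>df['operation'] = df['count'] / df.groupby('code')['count'].transform('sum')
</code></pre>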
python-3.x|pandas|dataframe|pandas-groupby
0
378,026
70,514,647
Applying a condition for all similar values within a column in a Pandas dataframe
<p>I have the following dataset in a pandas dataframe:</p> <pre><code>Patient_ID Image_Type ... P001 Paired P001 Paired P001 Paired P001 CBCT P002 CBCT P002 CBCT P002 CBCT P002 CBCT P002 CBCT P002 CBCT P003 CBCT ... ... </code></pre> <p>What I'm trying to do is find whether the number of datapoints for each patient (Patient_ID) is equal to the number of CBCT images taken for that patient.</p> <p>For example, for patient P002 the number of CBCT images taken is equal to the number of datapoints, while for patient P001 the number of CBCT images taken does not equal the total number of datapoints for that patient. I would like to assign this condition to a new column, where the value = 'Yes' if it is true and 'No' where it is false.</p> <p>Please let me know if you need clarifications on my question. Thanks.</p>
<p>IIUC:</p> <pre><code>df[df[&quot;Image_Type&quot;] == &quot;CBCT&quot;].groupby(&quot;Patient_ID&quot;).size() == df.groupby(&quot;Patient_ID&quot;).size() #Patient_ID #P001 False #P002 True #P003 True #dtype: bool </code></pre> <p>I'm using <code>df</code> as</p> <pre><code> Patient_ID Image_Type 0 P001 Paired 1 P001 Paired 2 P001 Paired 3 P001 CBCT 4 P002 CBCT 5 P002 CBCT 6 P002 CBCT 7 P002 CBCT 8 P002 CBCT 9 P002 CBCT 10 P003 CBCT </code></pre>
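<p>If you also want that flag as a Yes/No column on the original frame, a small follow-up sketch (the column name <code>all_cbct</code> is my invention):</p> <pre><code>cbct = df[df[&quot;Image_Type&quot;] == &quot;CBCT&quot;].groupby(&quot;Patient_ID&quot;).size()
total = df.groupby(&quot;Patient_ID&quot;).size()
match = cbct.reindex(total.index, fill_value=0).eq(total)  # True/False per patient
df[&quot;all_cbct&quot;] = df[&quot;Patient_ID&quot;].map(match.map({True: &quot;Yes&quot;, False: &quot;No&quot;}))
</code></pre>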
python|pandas
-1
378,027
70,477,609
Optimizing using only accuracy
<p>As I understand it, we optimize our model by changing the weight parameters over the iterations. The aim is to minimize the loss <strong>and maximize the accuracy</strong>.</p> <p>I don't understand why we use loss as a metric as well if we already have accuracy as a metric.</p> <p>Can we use only accuracy and drop the loss from our model? Could we also change the model weights using accuracy?</p>
<p>In short, training a neural network is all about minimizing the difference between the intended result and the produced result. That difference is known as the cost/loss. So the smaller the cost/loss, the closer the outputs are to the intended values, and hence the higher the accuracy.</p> <p>The practical reason the loss is what gets optimized is that accuracy is not differentiable: it is a step function of the weights (a prediction is either right or wrong), so it provides no useful gradient, whereas a loss such as cross-entropy changes smoothly as the weights change.</p> <p>I suggest you watch 3Blue1Brown's video series on neural networks on YouTube.</p>
python|tensorflow|keras|deep-learning|neural-network
0
378,028
70,604,882
Train a model with a task and test it with another task?
<p>I have a data-frame that consists of 3000 samples, n features, and two target columns, as follows:</p> <pre><code>mydata: id, f1, f2, ..., fn, target1, target2 01, 23, 32, ..., 44, 0 , 1 02, 10, 52, ..., 11, 1 , 2 03, 66, 15, ..., 65, 1 , 0 ... 2000, 76, 32, ..., 17, 0 , 1 </code></pre> <p>Here, I have a multi-task learning problem (I am quite new in this domain) and I want to train a model/network with <code>target1</code> and test it with <code>target2</code>.</p> <p>If we consider <code>target1</code> and <code>target2</code> as tasks, they might be related tasks but we do not know how much. So, I want to see how well we can use the model trained on task1 (<code>target1</code>) to predict task2 (<code>target2</code>).</p> <p>It seems it is not possible, since <code>target1</code> is a binary class (0 and 1) but <code>target2</code> has more than two values (0, 1 and 2). Is there any way to handle this issue?</p>
<p>This is not called Multi-Task Learning but Transfer Learning. It would be multi-task learning if you had trained your model to predict both the <code>target1</code> and <code>target2</code>.</p> <p>Yes, there are ways to handle this issue. The final layer of the model is just the classifier head that computes the final label from the previous layer. You can consider the output from the previous layer as embeddings of the datapoint and use this representation to train/fine-tune another model. You have to plug in another head, though, since you now have three classes.</p> <p>so in pseudo-code, you need something like</p> <pre><code>model = remove_last_layer(model) model.add(&lt;your new classification head outputting 3 classes&gt;) model.train() </code></pre> <p>you can then compare this approach to the baseline, where you train from scratch on <code>target2</code> to analyze the transfer learning between these two tasks.</p>
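<p>In Keras, the pseudo-code could look roughly like this (a sketch assuming <code>model</code> is a functional/Sequential model whose second-to-last layer yields the embeddings; <code>X</code>, <code>target2</code> and all names are mine):</p> <pre><code>import tensorflow as tf

# keep everything except the old 2-class head
backbone = tf.keras.Model(inputs=model.input, outputs=model.layers[-2].output)

# attach a new 3-class head and fine-tune on target2
new_head = tf.keras.layers.Dense(3, activation='softmax')(backbone.output)
model2 = tf.keras.Model(inputs=backbone.input, outputs=new_head)
model2.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
               metrics=['accuracy'])
model2.fit(X, target2, epochs=10)
</code></pre>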
python|dataframe|tensorflow|machine-learning|neural-network
0
378,029
70,470,991
how to create a stacked bar chart indicating time spent on nest per day
<p>I have some data of an owl being present in the nest box. In a previous question you helped me visualize when the owl is in the box:</p> <p><img src="https://i.stack.imgur.com/9L3JY.png" alt="owl in box" /></p> <p>In addition I created a plot of the hours per day spent in the box with the code below (probably this can be done more efficiently):</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt # raw data indicating time spent in box (each row represents start and end time) time = pd.DatetimeIndex([&quot;2021-12-01 18:08&quot;,&quot;2021-12-01 18:11&quot;, &quot;2021-12-02 05:27&quot;,&quot;2021-12-02 05:29&quot;, &quot;2021-12-02 22:40&quot;,&quot;2021-12-02 22:43&quot;, &quot;2021-12-03 19:24&quot;,&quot;2021-12-03 19:27&quot;, &quot;2021-12-06 18:04&quot;,&quot;2021-12-06 18:06&quot;, &quot;2021-12-07 05:28&quot;,&quot;2021-12-07 05:30&quot;, &quot;2021-12-10 03:05&quot;,&quot;2021-12-10 03:10&quot;, &quot;2021-12-10 07:11&quot;,&quot;2021-12-10 07:13&quot;, &quot;2021-12-10 20:40&quot;,&quot;2021-12-10 20:41&quot;, &quot;2021-12-12 19:42&quot;,&quot;2021-12-12 19:45&quot;, &quot;2021-12-13 04:13&quot;,&quot;2021-12-13 04:17&quot;, &quot;2021-12-15 04:28&quot;,&quot;2021-12-15 04:30&quot;, &quot;2021-12-15 05:21&quot;,&quot;2021-12-15 05:25&quot;, &quot;2021-12-15 17:40&quot;,&quot;2021-12-15 17:44&quot;, &quot;2021-12-15 22:31&quot;,&quot;2021-12-15 22:37&quot;, &quot;2021-12-16 04:24&quot;,&quot;2021-12-16 04:28&quot;, &quot;2021-12-16 19:58&quot;,&quot;2021-12-16 20:09&quot;, &quot;2021-12-17 17:42&quot;,&quot;2021-12-17 18:04&quot;, &quot;2021-12-17 22:19&quot;,&quot;2021-12-17 22:26&quot;, &quot;2021-12-18 05:41&quot;,&quot;2021-12-18 05:44&quot;, &quot;2021-12-19 07:40&quot;,&quot;2021-12-19 16:55&quot;, &quot;2021-12-19 20:39&quot;,&quot;2021-12-19 20:52&quot;, &quot;2021-12-19 21:56&quot;,&quot;2021-12-19 23:17&quot;, &quot;2021-12-21 04:53&quot;,&quot;2021-12-21 04:59&quot;, &quot;2021-12-21 05:37&quot;,&quot;2021-12-21 05:39&quot;, &quot;2021-12-22 08:06&quot;,&quot;2021-12-22 17:22&quot;, &quot;2021-12-22 20:04&quot;,&quot;2021-12-22 21:24&quot;, &quot;2021-12-22 21:44&quot;,&quot;2021-12-22 22:47&quot;, &quot;2021-12-23 02:20&quot;,&quot;2021-12-23 06:17&quot;, &quot;2021-12-23 08:07&quot;,&quot;2021-12-23 16:54&quot;, &quot;2021-12-23 19:36&quot;,&quot;2021-12-23 23:59:59&quot;, &quot;2021-12-24 00:00&quot;,&quot;2021-12-24 00:28&quot;, &quot;2021-12-24 07:53&quot;,&quot;2021-12-24 17:00&quot;, ]) # create dataframe with column indicating presence (1) or absence (0) time_df = pd.DataFrame(data={'present':[1,0]*int(len(time)/2)}, index=time) # calculate interval length and add to time_df time_df['interval'] = time_df.index.to_series().diff().astype('timedelta64[m]') # add column with day to time_df time_df['day'] = time.day #select only intervals where owl is present timeinbox = time_df.iloc[1::2, :] interval = timeinbox.interval day = timeinbox.day # sum multiple intervals per day interval_tot = [interval[0]] day_tot = [day[0]] for i in range(1, len(day)): if day[i] == day[i-1]: interval_tot[-1] +=interval[i] else: day_tot.append(day[i]) interval_tot.append(interval[i]) # recalculate to hours for i in range(len(interval_tot)): interval_tot[i] = interval_tot[i]/(60) plt.figure(figsize=(15, 5)) plt.grid(zorder=0) plt.bar(day_tot, interval_tot, color='g', zorder=3) plt.xlim([1,31]) plt.xlabel('day in December') plt.ylabel('hours per day in nest box') plt.xticks(np.arange(1,31,1)) plt.ylim([0, 24]) </code></pre> <p>Now I would like to combine all data in one 
plot by making a stacked bar chart, where each day is represented by a bar and each bar indicating for each of the 24*60 minutes whether the owl is present or not. Is this possible from the current data structure?</p>
<p>The data seems to have been created manually, so I have changed the format of the presented data to a small CSV with start/end columns. The approach: build a flag of 1 for every minute between each start and end time on a 1-minute index, then reindex over the full date range at 1-minute intervals so every remaining minute gets a 0 (not present). That is the data for the graph. In the plot, take the frame one day at a time, build a colour list with red for minutes in the box and green for minutes outside, and stack bars of height one. It may be necessary to consider grouping the data into hourly units.</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt from datetime import timedelta import io data = ''' start_time,end_time &quot;2021-12-01 18:08&quot;,&quot;2021-12-01 18:11&quot; &quot;2021-12-02 05:27&quot;,&quot;2021-12-02 05:29&quot; &quot;2021-12-02 22:40&quot;,&quot;2021-12-02 22:43&quot; &quot;2021-12-03 19:24&quot;,&quot;2021-12-03 19:27&quot; &quot;2021-12-06 18:04&quot;,&quot;2021-12-06 18:06&quot; &quot;2021-12-07 05:28&quot;,&quot;2021-12-07 05:30&quot; &quot;2021-12-10 03:05&quot;,&quot;2021-12-10 03:10&quot; &quot;2021-12-10 07:11&quot;,&quot;2021-12-10 07:13&quot; &quot;2021-12-10 20:40&quot;,&quot;2021-12-10 20:41&quot; &quot;2021-12-12 19:42&quot;,&quot;2021-12-12 19:45&quot; &quot;2021-12-13 04:13&quot;,&quot;2021-12-13 04:17&quot; &quot;2021-12-15 04:28&quot;,&quot;2021-12-15 04:30&quot; &quot;2021-12-15 05:21&quot;,&quot;2021-12-15 05:25&quot; &quot;2021-12-15 17:40&quot;,&quot;2021-12-15 17:44&quot; &quot;2021-12-15 22:31&quot;,&quot;2021-12-15 22:37&quot; &quot;2021-12-16 04:24&quot;,&quot;2021-12-16 04:28&quot; &quot;2021-12-16 19:58&quot;,&quot;2021-12-16 20:09&quot; &quot;2021-12-17 17:42&quot;,&quot;2021-12-17 18:04&quot; &quot;2021-12-17 22:19&quot;,&quot;2021-12-17 22:26&quot; &quot;2021-12-18 05:41&quot;,&quot;2021-12-18 05:44&quot; &quot;2021-12-19 07:40&quot;,&quot;2021-12-19 16:55&quot; &quot;2021-12-19 20:39&quot;,&quot;2021-12-19 20:52&quot; &quot;2021-12-19 21:56&quot;,&quot;2021-12-19 23:17&quot; &quot;2021-12-21 04:53&quot;,&quot;2021-12-21 04:59&quot; &quot;2021-12-21 05:37&quot;,&quot;2021-12-21 05:39&quot; &quot;2021-12-22 08:06&quot;,&quot;2021-12-22 17:22&quot; &quot;2021-12-22 20:04&quot;,&quot;2021-12-22 21:24&quot; &quot;2021-12-22 21:44&quot;,&quot;2021-12-22 22:47&quot; &quot;2021-12-23 02:20&quot;,&quot;2021-12-23 06:17&quot; &quot;2021-12-23 08:07&quot;,&quot;2021-12-23 16:54&quot; &quot;2021-12-23 19:36&quot;,&quot;2021-12-24 00:00&quot; &quot;2021-12-24 00:00&quot;,&quot;2021-12-24 00:28&quot; &quot;2021-12-24 07:53&quot;,&quot;2021-12-24 17:00&quot; ''' df = pd.read_csv(io.StringIO(data), sep=',') df['start_time'] = pd.to_datetime(df['start_time']) df['end_time'] = pd.to_datetime(df['end_time']) time_df = pd.DataFrame() for idx, row in df.iterrows(): rng = pd.date_range(row['start_time'], row['end_time']-timedelta(minutes=1), freq='1min') tmp = pd.DataFrame({'present':[1]*len(rng)}, index=rng) time_df = pd.concat([time_df, tmp]) date_add = pd.date_range(time_df.index[0].date(), time_df.index[-1].date()+timedelta(days=1), freq='1min') time_df = time_df.reindex(date_add, fill_value=0) time_df['day'] = time_df.index.day fig, ax = plt.subplots(figsize=(8,15)) ax.set_yticks(np.arange(0,1500,60)) ax.set_ylim(0,1440) ax.set_xticks(np.arange(1,25,1)) days = time_df['day'].unique() for d in days: day_df = time_df.query('day == @d') colors = [ 'r' if p == 1 else 'g' for p in day_df['present']] for i in range(len(day_df)): ax.bar(d, height=1, width=0.5, bottom=i+1, color=colors[i]) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/lsXQf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lsXQf.png" alt="enter image description here" /></a></p>
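<p>For the hourly grouping mentioned above, one possible sketch once <code>time_df</code> exists:</p> <pre><code>hourly = time_df['present'].resample('1H').sum()  # minutes in the box per hour
</code></pre>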
python|pandas|time-series|stackedbarseries
0
378,030
70,437,442
pandas rolling on specific column
<p>I'm trying something very simple, seemingly at least which is to do a rolling sum on a column of a dataframe. See minimal example below :</p> <pre><code>df = pd.DataFrame({&quot;Col1&quot;: [10, 20, 15, 30, 45], &quot;Col2&quot;: [13, 23, 18, 33, 48], &quot;Col3&quot;: [17, 27, 22, 37, 52]}) df['dt'] = pd.date_range(&quot;2020-01-01&quot;, &quot;2020-01-05&quot;) </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>index</th> <th>Col1</th> <th>Col2</th> <th>Col3</th> <th>dt.</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>10</td> <td>13</td> <td>17</td> <td>2020-01-01</td> </tr> <tr> <td>1</td> <td>20</td> <td>23</td> <td>27</td> <td>2020-01-02</td> </tr> <tr> <td>2</td> <td>15</td> <td>18</td> <td>22</td> <td>2020-01-03</td> </tr> <tr> <td>3</td> <td>30</td> <td>33</td> <td>37</td> <td>2020-01-04</td> </tr> <tr> <td>4</td> <td>45</td> <td>48</td> <td>52</td> <td>2020-01-05</td> </tr> </tbody> </table> </div> <p>If I run</p> <pre><code>df['sum2']=df['Col1'].rolling(window=&quot;3d&quot;, min_periods=2, on=df['dt']).sum() </code></pre> <p>then instead of getting what I'm hoping which is a rolling sum on column 1, I get this traceback. If I switch the index to the dt field value it works if I removed the <code>on=df['dt']</code> param. I've tried <code>on='dt'</code> also with no luck.</p> <p>This is the error message I get :</p> <pre><code>... ValueError: invalid on specified as 0 2020-01-01 1 2020-01-02 2 2020-01-03 3 2020-01-04 4 2020-01-05 Name: dt, dtype: datetime64[ns], must be a column (of DataFrame), an Index or None </code></pre> <p>Anything I'm overlooking? thanks!</p>
<p>The correct syntax is:</p> <pre><code>df['sum2'] = df.rolling(window=&quot;3d&quot;, min_periods=2, on='dt')['Col1'].sum() print(df) # Output: Col1 Col2 Col3 dt sum2 0 10 13 17 2020-01-01 NaN 1 20 23 27 2020-01-02 30.0 2 15 18 22 2020-01-03 45.0 3 30 33 37 2020-01-04 65.0 4 45 48 52 2020-01-05 90.0 </code></pre> <p>Your mistake was extracting the <code>Col1</code> column first, so the <code>dt</code> column no longer exists when <code>rolling</code>:</p> <pre><code>&gt;&gt;&gt; df['Col1'] # the column 'dt' does not exist anymore. 0 10 1 20 2 15 3 30 4 45 Name: Col1, dtype: int64 </code></pre>
python|pandas
3
378,031
70,597,991
How to add a new column with column names based on conditioned values?
<p>I have a table that contains active cases of covid per country for a period of time. The columns are the country name and dates.</p> <p>I need to find the max value of active cases per country and the corresponding date of that max value. I have created a list of max values but can't manage to create a column with the corresponding date.</p> <p>I have written the following loop, but it returns only one date (the last one - [5/2/20]):</p> <pre><code>for row in active_cases_data[column]: if row in max_cases: active_cases_data['date'] = column </code></pre> <p><a href="https://i.stack.imgur.com/q6Bpi.png" rel="nofollow noreferrer">screenshot of df and resulting column</a></p> <p>The table looks like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>country</th> <th>4/29/20</th> <th>4/30/20</th> <th>5/1/20</th> <th>5/2/20</th> </tr> </thead> <tbody> <tr> <td>Italy</td> <td>67</td> <td>105</td> <td>250</td> <td>240</td> </tr> </tbody> </table> </div> <p>I need an extra column with the date of the largest number in the row (in Italy's case it's 5/1/20 for value = 250), like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>country</th> <th>4/29/20</th> <th>4/30/20</th> <th>5/1/20</th> <th>5/2/20</th> <th>date</th> </tr> </thead> <tbody> <tr> <td>Italy</td> <td>67</td> <td>105</td> <td>250</td> <td>240</td> <td>5/1/20</td> </tr> </tbody> </table> </div>
<p>In pandas we try not to use Python loops unless we REALLY need them.</p> <p>I suppose that your dataset looks something like this:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({&quot;Country&quot;: [&quot;Poland&quot;, &quot;Ukraine&quot;, &quot;Czechia&quot;, &quot;Russia&quot;], &quot;2021.12.30&quot;: [12, 23, 43, 43], &quot;2021.12.31&quot;: [15, 25, 40, 50], &quot;2022.01.01&quot;: [18, 27, 41, 70], &quot;2022.01.02&quot;: [21, 22, 42, 90]}) # Country 2021.12.30 2021.12.31 2022.01.01 2022.01.02 #0 Poland 12 15 18 21 #1 Ukraine 23 25 27 22 #2 Czechia 43 40 41 42 #3 Russia 43 50 70 90 </code></pre> <p>Short way:</p> <p>Use idxmax() after excluding the name column:</p> <pre class="lang-py prettyprint-override"><code>df['Date'] = df.loc[:, df.columns != &quot;Country&quot;].idxmax(axis=1) # Country 2021.12.30 2021.12.31 2022.01.01 2022.01.02 Date #0 Poland 12 15 18 21 2022.01.02 #1 Ukraine 23 25 27 22 2022.01.01 #2 Czechia 43 40 41 42 2021.12.30 #3 Russia 43 50 70 90 2022.01.02 </code></pre> <p>You just have to be aware when running this line multiple times - it takes every column (except the excluded one, &quot;Country&quot;).</p> <p>Long way:</p> <p>First, I would transform the data from a wide to a long table:</p> <pre class="lang-py prettyprint-override"><code>df2 = df.melt(id_vars=&quot;Country&quot;, var_name = &quot;Date&quot;, value_name = &quot;Cases&quot;) # Country Date Cases #0 Poland 2021.12.30 12 #1 Ukraine 2021.12.30 23 #2 Czechia 2021.12.30 43 #3 Russia 2021.12.30 43 #4 Poland 2021.12.31 15 #... #15 Russia 2022.01.02 90 </code></pre> <p>With the long table we can find the needed rows in many different ways, for example:</p> <pre class="lang-py prettyprint-override"><code>df2 = df2.sort_values(by=[&quot;Country&quot;, &quot;Cases&quot;, &quot;Date&quot;], ascending=[True, False, False]) df2.groupby(&quot;Country&quot;).first().reset_index() # Country Date Cases #0 Czechia 2021.12.30 43 #1 Poland 2022.01.02 21 #2 Russia 2022.01.02 90 #3 Ukraine 2022.01.01 27 </code></pre> <p>By setting the last position of the <code>ascending</code> parameter you can control which date is used in case of a tie.</p>
python|pandas|dataframe
1
378,032
70,499,932
Multiple plots from function Matplotlib
<p>(Adjusted to suggestions) I already have a function that performs some plot:</p> <pre><code>def plot_i(Y, ax = None): if ax == None: ax = plt.gca() fig = plt.figure() ax.plot(Y) plt.close(fig) return fig </code></pre> <p>And I wish to use this to plot in a grid for n arrays. Let's assume the grid is (n // 2, 2) for simplicity and that n is even. At the moment, I came up with this:</p> <pre><code>def multi_plot(Y_arr, function): n = len(Y_arr) fig, ax = plt.subplots(n // 2, 2) for i in range(n): # assign to one axis a call of the function = plot_i that draws a plot plt.close(fig) return fig </code></pre> <p>Unfortunately, what I get if I do something like:</p> <pre><code># inside the loop plot_i(Y[:, i], ax = ax[k,j]) </code></pre> <p>is correct, but I need to close the figures each time at the end, otherwise I keep adding figures to plt. Is there any way I can avoid calling plt.close(fig) each time?</p>
<p>If I understand correctly, you are looking for something like this:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt def plot_i(Y, ax=None): if ax == None: ax = plt.gca() ax.plot(Y) return def multi_plot(Y_arr, function, n_cols=2): n = Y_arr.shape[1] fig, ax = plt.subplots(n // n_cols + (1 if n % n_cols else 0), n_cols) for i in range(n): # assign to one axis a call of the function = plot_i that draws a plot function(Y_arr[:, i], ax = ax[i//n_cols, i%n_cols]) return fig if __name__ == '__main__': x = np.linspace(0,12.6, 100) # let's create some fake data data = np.exp(-np.linspace(0,.5, 14)[np.newaxis, :] * x[:, np.newaxis]) * np.sin(x[:, np.newaxis]) fig = multi_plot(data, plot_i, 3) </code></pre> <p>Be careful when using <code>gca()</code>: it will create a new figure if there is no figure active.</p>
python|numpy|matplotlib
0
378,033
70,428,401
Is it possible in a numpy array to add rows with different lengths and then add elements to those rows in Python?
<ul> <li>Python Version: 3.7.11</li> <li>numpy Version: 1.21.2</li> </ul> <p>I want to have a numpy array, something like the one below:</p> <pre><code>[ [&quot;Hi&quot;, &quot;Anne&quot;], [&quot;How&quot;, &quot;are&quot;, &quot;you&quot;], [&quot;fine&quot;] ] </code></pre> <p>But the process of creating this numpy array is not simple; it's as follows:</p> <ul> <li><p><code># code block 1</code> At the beginning we have an empty numpy array.</p> <p><em>First loop:</em></p> </li> <li><p><code># code block 2</code> A <strong>row</strong> is added in this first loop, or</p> <p>in this loop we realize that we need a new row.</p> <p><em>A loop inside of the first loop:</em></p> </li> <li><p><code># code block 3</code> <strong>elements</strong> of that row will be added in this inner loop.</p> </li> </ul> <p>Assume that:</p> <ul> <li><p>the number of iterations is not specified, I mean:</p> <ul> <li><p>the number of columns of each row is different and</p> </li> <li><p>we don't know the number of rows that we want to add to the numpy array.</p> </li> </ul> </li> </ul> <p>Maybe the code example below will help me get my point across:</p> <pre><code>a = [[&quot;Hi&quot;, &quot;Anne&quot;], [&quot;How&quot;, &quot;are&quot;, &quot;you&quot;], [&quot;fine&quot;]] # code block 1: code for creating empty numpy array for row in a: # code block 2: code for creating empty row for element in row: # code block 3: code for appending element to that row or last row </code></pre> <p>Question:</p> <ul> <li><p>Is it possible to create a numpy array with these steps (<code>code block #1, #2, #3</code>)?</p> <p>If yes, how?</p> </li> </ul>
<p>NumPy arrays are not designed for rows of inconsistent length, so a ragged structure like this is generally bad practice. You can only do it by making the array's elements Python objects (here, lists) rather than strings in a regular 2D array. But like I said, numpy is not the way to go for this.</p> <pre><code>a = numpy.array([[&quot;Hi&quot;, &quot;Anne&quot;], [&quot;How&quot;, &quot;are&quot;, &quot;you&quot;], [&quot;fine&quot;]], dtype=object) </code></pre>
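<p>If the array really has to be built step by step inside the loops, one sketch (my own scaffolding for the three code blocks) is to build a plain Python list and convert once at the end:</p> <pre><code>import numpy as np

a = [['Hi', 'Anne'], ['How', 'are', 'you'], ['fine']]

rows = []                         # code block 1: empty container
for row in a:
    rows.append([])               # code block 2: open a new, empty row
    for element in row:
        rows[-1].append(element)  # code block 3: append to the last row

result = np.array(rows, dtype=object)  # one conversion at the very end
</code></pre>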
python|arrays|python-3.x|numpy
2
378,034
42,724,786
Why numpy argmax() not getting the index?
<p>So I'm really new to data analysis and the numpy library, and just playing around with the built-in functions.</p> <p>I have this on top of my file <code>import numpy as np</code></p> <pre><code>new_arr = np.arange(25) print new_arr.argmax() </code></pre> <p>which should print out the index of the maximum value, not the value itself. But it keeps on giving me 24. As I understand it, <code>max()</code> gives you the maximum value, while <code>argmax()</code> gives you the index of the maximum value.</p>
<p><code>np.arange</code> starts from zero (unless you give it a different start), and indexing is also 0-based. So in <code>np.arange(25)</code>, the 0th element is zero, the 1th element is 1, etc. So every element is the same number as its index. So the maximum value is 24 and its index is also 24.</p>
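<p>To see the two functions disagree, try an array that is not already sorted:</p> <pre><code>import numpy as np

arr = np.array([5, 1, 9, 3])
print(arr.max())     # 9 -- the maximum value
print(arr.argmax())  # 2 -- the index where it sits
</code></pre>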
python|python-2.7|numpy|data-science
0
378,035
42,962,215
Python - how to correctly index numpy array with other numpy arrays, similarly to MATLAB
<p>I'm trying to learn python after years of using MATLAB and this is something I'm really stuck with. I have an array, say 10 by 8. I want to find rows that have value 3 in the first column and take columns "2:" in that row. What I do is:</p> <pre><code>newArray = oldArray[np.asarray(np.where(oldArray[:,0] == 3)), 2:] </code></pre> <p>But that creates a 3-dimensional array with first dimension 1, instead of 2-dimensional array. I'm trying to achieve MATLAB equivalent of </p> <pre><code>newArray = oldArray(find(oldArray(:,1)==3),3:end); </code></pre> <p>Anyone have any thoughts on how to do that? Thank you!</p>
<p><code>Slice</code> the first column and compare against <code>3</code> to give us a mask for selecting rows. After selecting rows by indexing into the first axis/rows of a <code>2D</code> array of the input array, we need to select the columns (second axis of array). On your MATLAB code, you have <code>3:end</code>, which would translate to <code>2:</code> on NumPy. In MATLAB, you need to specify the end index, in NumPy you don't. So, it simplifies to <code>2:</code>, as compared to <code>3:end</code> on MATLAB.</p> <p>Thus, the code would be -</p> <pre><code>oldArray[oldArray[:,0]==3,2:] </code></pre> <p>Sample run -</p> <pre><code>In [352]: a Out[352]: |===============&gt;| array([[1, 0, 4, 2, 0, 1, 3, 2], [1, 0, 0, 3, 2, 3, 4, 4], [1, 2, 1, 4, 4, 0, 4, 2], [0, 2, 0, 3, 2, 2, 1, 2], [1, 2, 3, 3, 1, 0, 0, 1], [3, 4, 2, 4, 2, 0, 3, 4], &lt;== [3, 1, 1, 0, 0, 1, 2, 0], &lt;== [2, 0, 4, 3, 1, 3, 1, 1], [4, 3, 1, 3, 1, 3, 4, 4], [2, 0, 2, 0, 3, 1, 1, 1]]) In [353]: a[a[:,0]==3,2:] Out[353]: array([[2, 4, 2, 0, 3, 4], [1, 0, 0, 1, 2, 0]]) </code></pre> <hr> <p><strong>Reviewing your code -</strong></p> <p>Your code was -</p> <pre><code>In [359]: a[np.asarray(np.where(a[:,0] == 3)), 2:] Out[359]: array([[[2, 4, 2, 0, 3, 4], [1, 0, 0, 1, 2, 0]]]) </code></pre> <p>That works too, but creates a <code>3D</code> array as listed in the question.</p> <p>Dissecting into it -</p> <pre><code>In [361]: np.where(a[:,0] == 3) Out[361]: (array([5, 6]),) </code></pre> <p>We see <code>np.where</code> is a tuple of arrays, which are the row and column indices. For a slice of <code>1D</code>, you won't have both rows and columns, but just one array of indices.</p> <p>In MATLAB, <code>find</code> gives you an array of indices, so there's less confusion -</p> <pre><code>&gt;&gt; a a = 3 4 3 3 2 5 5 2 2 2 2 3 5 3 4 4 4 3 4 2 3 2 4 2 &gt;&gt; find(a(:,1)==3) ans = 1 6 </code></pre> <p>So, to get those indices, get the first array out of it -</p> <pre><code>In [362]: np.where(a[:,0] == 3)[0] Out[362]: array([5, 6]) </code></pre> <p>Use it to index into the first axis and then slice the column from <code>2</code> onwards -</p> <pre><code>In [363]: a[np.where(a[:,0] == 3)[0]] Out[363]: array([[3, 4, 2, 4, 2, 0, 3, 4], [3, 1, 1, 0, 0, 1, 2, 0]]) In [364]: a[np.where(a[:,0] == 3)[0],2:] Out[364]: array([[2, 4, 2, 0, 3, 4], [1, 0, 0, 1, 2, 0]]) </code></pre> <p>That gives you the expected output.</p> <hr> <p><strong>Word of caution</strong></p> <p>One needs to be careful while indexing into axes with masks or integers.</p> <p>In theory, the column-indexing there should be equivalent of indexing with <code>[2,3,4,5,6,7]</code> for <code>a</code> of <code>8 columns</code>.</p> <p>Let's try that -</p> <pre><code>In [370]: a[a[:,0]==3,[2,3,4,5,6,7]] .... IndexError: shape mismatch: indexing arrays could ... not be broadcast together with shapes (2,) (6,) </code></pre> <p>We are triggering <em><code>broadcastable</code></em> indexing there. The elements for indexing into the two axes are of different lengths and are not broadcastable.</p> <p>Let's verify that. 
The array for indexing into <code>rows</code> -</p> <pre><code>In [374]: a[:,0]==3 Out[374]: array([False, False, False, False, False, True, True, False, False, False], dtype=bool) </code></pre> <p>Essentially that's an array of two elements, as there are two <code>True</code> elems -</p> <pre><code>In [375]: np.where(a[:,0]==3)[0] Out[375]: array([5, 6]) </code></pre> <p>The array for indexing into columns was <code>[2,3,4,5,6,7]</code>, which was of length <code>6</code> and thus are not broadcastable against the row indices.</p> <p>To get to our desired target of selecting row IDs : <code>5,6</code> and for each of those rows select column IDs <code>2,3,4,5,6,7</code>, we could create <code>open meshes</code> with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ix_.html" rel="nofollow noreferrer"><code>np._ix</code></a> that are broadcastable, like so -</p> <pre><code>In [376]: np.ix_(a[:,0]==3, [2,3,4,5,6,7]) Out[376]: (array([[5], [6]]), array([[2, 3, 4, 5, 6, 7]])) </code></pre> <p>Finally, index into input array with those for the desired o/p -</p> <pre><code>In [377]: a[np.ix_(a[:,0]==3, [2,3,4,5,6,7])] Out[377]: array([[2, 4, 2, 0, 3, 4], [1, 0, 0, 1, 2, 0]]) </code></pre>
python|arrays|matlab|numpy|indexing
2
378,036
42,842,418
Using apply on pandas dataframe with strings without looping over series
<p>I have a pandas DataFrame filled with strings. I would like to apply a string operation to all entries, for example <code>capitalize()</code>. I know that for a series we can use <code>series.str.capitalize()</code>. I also know that I can loop over the columns of the DataFrame and do this for each column. But I want something more efficient and elegant, without looping. Thanks</p>
<p>use <code>stack</code> + <code>unstack</code><br> <code>stack</code> makes a dataframe with a single level column index into a series. You can then perform your <code>str.capitalize()</code> and <code>unstack</code> to get back your original form.</p> <pre><code>df.stack().str.capitalize().unstack() </code></pre>
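<p>A quick demonstration with made-up data:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'a': ['foo', 'bar'], 'b': ['baz', 'qux']})
print(df.stack().str.capitalize().unstack())
#      a    b
# 0  Foo  Baz
# 1  Bar  Qux
</code></pre>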
string|python-3.x|pandas|dataframe|apply
1
378,037
42,896,605
Tensorflow - About mnist.train.next_batch()
<p>When I search about mnist.train.next_batch() I found this <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/datasets/mnist.py" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/datasets/mnist.py</a></p> <p>In this code</p> <pre><code> def next_batch(self, batch_size, fake_data=False, shuffle=True): """Return the next `batch_size` examples from this data set.""" if fake_data: fake_image = [1] * 784 if self.one_hot: fake_label = [1] + [0] * 9 else: fake_label = 0 return [fake_image for _ in xrange(batch_size)], [ fake_label for _ in xrange(batch_size) ] start = self._index_in_epoch # Shuffle for the first epoch if self._epochs_completed == 0 and start == 0 and shuffle: perm0 = numpy.arange(self._num_examples) numpy.random.shuffle(perm0) self._images = self.images[perm0] self._labels = self.labels[perm0] # Go to the next epoch if start + batch_size &gt; self._num_examples: # Finished epoch self._epochs_completed += 1 # Get the rest examples in this epoch rest_num_examples = self._num_examples - start images_rest_part = self._images[start:self._num_examples] labels_rest_part = self._labels[start:self._num_examples] # Shuffle the data if shuffle: perm = numpy.arange(self._num_examples) numpy.random.shuffle(perm) self._images = self.images[perm] self._labels = self.labels[perm] # Start next epoch start = 0 self._index_in_epoch = batch_size - rest_num_examples end = self._index_in_epoch images_new_part = self._images[start:end] labels_new_part = self._labels[start:end] return numpy.concatenate((images_rest_part, images_new_part), axis=0) , numpy.concatenate((labels_rest_part, labels_new_part), axis=0) else: self._index_in_epoch += batch_size end = self._index_in_epoch return self._images[start:end], self._labels[start:end] </code></pre> <p>I know that mnist.train.next_batch(batch_size=100) means it randomly pick 100 data from MNIST dataset. Now, Here's my question</p> <ol> <li>What is shuffle=true means?</li> <li>If I set next_batch(batch_size=100,fake_data=False, shuffle=False) then it picks 100 data from the start to the end of MNIST dataset sequentially? Not randomly?</li> </ol>
<p>Re 1, when <code>shuffle=True</code> the order of examples in the data is randomized. Re 2, yes, it should respect whatever order the examples have in the numpy arrays.</p>
python|machine-learning|tensorflow|mnist
3
378,038
42,676,213
Can my numba code be faster than numpy
<p>I am new to Numba and am trying to speed up some calculations that have proved too unwieldy for numpy. The example I've given below compares a function containing a subset of my calculations using a vectorized/numpy and numba versions of the function the latter of which was also tested as pure python by commenting out the @autojit decorator. </p> <p>I find that the numba and numpy versions give similar speed ups relative to the pure python, both of which are about a factor of 10 speed improvement. The numpy version was actually slightly faster than my numba function but because of the 4D nature of this calculation I quickly run out of memory when the arrays in the numpy function are sized much larger than this toy example. </p> <p>This speed up is nice but I have often seen speed ups of >100x on the web when moving from pure python to numba.</p> <p>I would like to know if there is a general expected speed increase when moving to numba in nopython mode. I would also like to know if there are any components of my numba-ized function that would be limiting further speed increases.</p> <pre><code>import numpy as np from timeit import default_timer as timer from numba import autojit import math def vecRadCalcs(slope, skyz, solz, skya, sola): nloc = len(slope) ntime = len(solz) [lenz, lena] = skyz.shape asolz = np.tile(np.reshape(solz,[ntime,1,1,1]),[1,nloc,lenz,lena]) asola = np.tile(np.reshape(sola,[ntime,1,1,1]),[1,nloc,lenz,lena]) askyz = np.tile(np.reshape(skyz,[1,1,lenz,lena]),[ntime,nloc,1,1]) askya = np.tile(np.reshape(skya,[1,1,lenz,lena]),[ntime,nloc,1,1]) phi1 = np.cos(asolz)*np.cos(askyz) phi2 = np.sin(asolz)*np.sin(askyz)*np.cos(askya- asola) phi12 = phi1 + phi2 phi12[phi12&gt; 1.0] = 1.0 phi = np.arccos(phi12) return(phi) @autojit def RadCalcs(slope, skyz, solz, skya, sola, phi): nloc = len(slope) ntime = len(solz) pop = 0.0 [lenz, lena] = skyz.shape for iiT in range(ntime): asolz = solz[iiT] asola = sola[iiT] for iL in range(nloc): for iz in range(lenz): for ia in range(lena): askyz = skyz[iz,ia] askya = skya[iz,ia] phi1 = math.cos(asolz)*math.cos(askyz) phi2 = math.sin(asolz)*math.sin(askyz)*math.cos(askya- asola) phi12 = phi1 + phi2 if phi12 &gt; 1.0: phi12 = 1.0 phi[iz,ia] = math.acos(phi12) pop = pop + 1 return(pop) zenith_cells = 90 azim_cells = 360 nloc = 10 # nominallly ~ 700 ntim = 10 # nominallly ~ 200000 slope = np.random.rand(nloc) * 10.0 solz = np.random.rand(ntim) *np.pi/2.0 sola = np.random.rand(ntim) * 1.0*np.pi base = np.ones([zenith_cells,azim_cells]) skya = np.deg2rad(np.cumsum(base,axis=1)) skyz = np.deg2rad(np.cumsum(base,axis=0)*90/zenith_cells) phi = np.zeros(skyz.shape) start = timer() outcalc = RadCalcs(slope, skyz, solz, skya, sola, phi) stop = timer() outcalc2 = vecRadCalcs(slope, skyz, solz, skya, sola) stopvec = timer() print(outcalc) print(stop-start) print(stopvec-stop) </code></pre>
<p>On my machine running numba 0.31.0, the Numba version is 2x faster than the vectorized solution. When timing numba functions, you need to run the function more than once, because the first call includes the time to jit-compile the code in addition to the run time. Subsequent runs will not include the jitting overhead, since Numba caches the jitted code in memory. </p> <p>Also, please note that your functions are not calculating the same thing -- you want to be careful that you're comparing the same things, using something like <code>np.allclose</code> on the results. </p>
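<p>In practice that means adding a warm-up call before timing, e.g. (a sketch based on the question's own code):</p> <pre><code>RadCalcs(slope, skyz, solz, skya, sola, phi)  # warm-up: pays the jit cost

start = timer()
outcalc = RadCalcs(slope, skyz, solz, skya, sola, phi)  # times jitted code only
stop = timer()
</code></pre>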
python|numpy|numba
0
378,039
42,815,644
return streams for multiple securities in pandas
<p>Suppose I have a table which looks like this:</p> <pre><code> Ticker Date ClosingPrice 0 A 01-02-2010 11.4 1 A 01-03-2010 11.5 ... 1000 AAPL 01-02-2010 634 1001 AAPL 01-02-2010 635 </code></pre> <p>So, in other words, we have a sequence of timeseries spliced together one per ticker symbol. Now, I would like to generate a column of daily returns. If I had only one symbol, that would be very easy with the pandas <code>pct_change()</code> function, but how do I do it for multiple time series as above (I can do a sequence of groupbys, make each a dataframe, do the return computation, then splice them all together with <code>pd.concat()</code> but that does not seem optimal.</p>
<p>Use <code>groupby</code> and apply <code>pct_change</code> within each ticker:</p>

<pre><code>df.set_index(['Ticker', 'Date']).ClosingPrice.groupby(level=0).pct_change()

Ticker  Date      
A       01-02-2010         NaN
        01-03-2010    0.008772
AAPL    01-02-2010         NaN
        01-02-2010    0.001577
Name: ClosingPrice, dtype: float64
</code></pre>
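<p>A possible follow-up sketch, if you want the returns attached back onto the original frame as a column (the group-wise <code>pct_change</code> keeps the original index, so the assignment aligns):</p>

<pre><code>df['Return'] = df.groupby('Ticker')['ClosingPrice'].pct_change()
</code></pre>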
python|pandas|time-series
1
378,040
42,930,485
How can I slice 2D array in python without Numpy module?
<ol> <li>I want to slice a 2D array partially, without the numpy module, like the following example that uses numpy.</li> <li><p>I also want to know the time complexity of slicing lists with basic Python functions.</p>

<pre><code>import numpy as np

A = np.array([ [1,2,3,4,5,6,7,8] for i in range(8)])
n = len(A[0])
x = int(n/2)

TEMP = [[None]*2 for i in range(2)]

for w in range(2):
    for q in range(2):
        TEMP[w][q] = A[w*x:w*x+x,q*x:q*x+x]

for w in range(2):
    for q in range(2):
        print(TEMP[w][q])
</code></pre></li> </ol>

<p>Here is the result that I want to get:</p>

<pre><code>[[1 2 3 4]
 [1 2 3 4]
 [1 2 3 4]
 [1 2 3 4]]
[[5 6 7 8]
 [5 6 7 8]
 [5 6 7 8]
 [5 6 7 8]]
[[1 2 3 4]
 [1 2 3 4]
 [1 2 3 4]
 [1 2 3 4]]
[[5 6 7 8]
 [5 6 7 8]
 [5 6 7 8]
 [5 6 7 8]]

Process finished with exit code 0
</code></pre>
<p>For the first question:</p>

<pre><code>A = [ [1,2,3,4,5,6,7,8] for i in range(8)]
n = len(A[0])
x = int(n/2)

TEMP = [[None]*2 for i in range(2)]

for w in range(2):
    for q in range(2):
        TEMP[w][q] = [item[q * x:(q * x) + x] for item in A[w * x:(w * x) + x]]

for w in range(2):
    for q in range(2):
        print("{i}, {j}: {item}".format(i=w, j=q, item=repr(TEMP[w][q])))
</code></pre>
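<p>For the second question: slicing a plain Python list copies the selected references, so <code>lst[i:j]</code> takes O(j-i) time; the nested comprehension above is therefore O(rows × columns) in the size of the extracted block.</p>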
python|arrays|numpy
1
378,041
42,733,798
Countvectorizer having words not in data
<p>I am new to sklearn and CountVectorizer. </p> <p>Some weird behaviour is happening to me. </p> <p>Initializing the count vectorizer:</p>

<pre><code>from sklearn.feature_extraction.text import CountVectorizer

count_vect = CountVectorizer()
document_mtrx = count_vect.fit_transform(df['description'])
count_vect.vocabulary_

count_vect.vocabulary_
Out[28]: 
{u'viewscity': 36216,
 u'sizeexposed': 31584,
 u'rentalcontact': 29104,
 u'villagebldg': 36323,
</code></pre>

<p>Getting the rows which contain the word rentalcontact:</p>

<pre><code>df[df['description'].str.contains('rentalcontact')]
</code></pre>

<p>The number of rows returned is 0. Why is this the case?</p>
<p><a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html" rel="nofollow noreferrer">CountVectorizer</a> has a parameter <code>lowercase</code> which defaults to <code>True</code> - most probably that's why you can't find those values.</p> <p>So try this:</p>

<pre><code>df[df['description'].str.lower().str.contains('rentalcontact')]
#                        ^^^^^^^^
</code></pre>

<p><strong>UPDATE:</strong></p> <blockquote> <p><strong>vocabulary_ :</strong> dict</p> <p>A mapping of terms to feature <strong>indices</strong>.</p> </blockquote> <p><code>u'rentalcontact': 29104</code> - means that <code>'rentalcontact'</code> has an index <code>29104</code> in the list of features.</p> <p>I.e. <code>vectorizer.get_feature_names()[29104]</code> should return <code>'rentalcontact'</code></p>
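<p>Alternatively, a small sketch if you'd rather keep the original casing in the vocabulary: disable lowercasing when building the vectorizer.</p>

<pre><code>count_vect = CountVectorizer(lowercase=False)
document_mtrx = count_vect.fit_transform(df['description'])
</code></pre>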
pandas|scikit-learn
2
378,042
42,966,813
Pandas DataFrame to drop rows in the groupby
<p>I have a DataFrame with three columns <code>Date</code>, <code>Advertiser</code> and ID. I grouped the data first to see if the volumes of some Advertisers are too small (for example, when <code>count()</code> is less than 500). And then I want to drop those rows in the group table.</p>

<pre><code>df.groupby(['Date','Advertiser']).ID.count()
</code></pre>

<p>The result looks like this: </p>

<pre><code>Date     Advertiser
2016-01  A             50000
         B                50
         C              4000
         D             24000
2016-02  A              6800
         B              7800
         C               123
2016-03  B              1111
         E              8600
         F               500
</code></pre>

<p>I want the result to be this: </p>

<pre><code>Date     Advertiser
2016-01  A             50000
         C              4000
         D             24000
2016-02  A              6800
         B              7800
2016-03  B              1111
         E              8600
</code></pre>

<p>Follow-up question: </p>

<p>What if I want to filter out the rows in the groupby in terms of the total <code>count()</code> in the date category? For example, I want to keep only dates whose total <code>count()</code> is larger than 15000. The table I want looks like this: </p>

<pre><code>Date     Advertiser
2016-01  A             50000
         B                50
         C              4000
         D             24000
2016-02  A              6800
         B              7800
         C               123
</code></pre>
<p>You have a Series object after the <code>groupby</code>, which can be filtered based on value with a chained <em>lambda</em> filter:</p>

<pre><code>df.groupby(['Date','Advertiser']).ID.count()[lambda x: x &gt;= 500]

#Date     Advertiser
#2016-01  A             50000
#         C              4000
#         D             24000
#2016-02  A              6800
#         B              7800
#2016-03  B              1111
#         E              8600
#         F               500
</code></pre>
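<p>For the follow-up, a possible sketch: compute the per-date totals from the counts Series and keep only the dates above the threshold.</p>

<pre><code>counts = df.groupby(['Date','Advertiser']).ID.count()
totals = counts.groupby(level=0).transform('sum')  # total count per date, aligned with counts
counts[totals &gt; 15000]
</code></pre>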
python|pandas|dataframe
11
378,043
42,614,993
Back propagation algorithm gets stuck on training AND function
<p>Here is an implementation of the AND function with a single neuron using tensorflow:</p>

<pre><code>import numpy as np
import tensorflow as tf  # imports and session creation added so the snippet runs as-is

def tf_sigmoid(x):
    return 1 / (1 + tf.exp(-x))

data = [
    (0, 0),
    (0, 1),
    (1, 0),
    (1, 1),
]

labels = [
    0,
    0,
    0,
    1,
]

n_steps = 1000
learning_rate = .1

x = tf.placeholder(dtype=tf.float32, shape=[2])
y = tf.placeholder(dtype=tf.float32, shape=None)

w = tf.get_variable('W', shape=[2], initializer=tf.random_normal_initializer(), dtype=tf.float32)
b = tf.get_variable('b', shape=[], initializer=tf.random_normal_initializer(), dtype=tf.float32)

h = tf.reduce_sum(x * w) + b
output = tf_sigmoid(h)

error = tf.abs(output - y)

optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(error)

sess = tf.Session()
sess.run(tf.initialize_all_variables())

for step in range(n_steps):
    for i in np.random.permutation(range(len(data))):
        sess.run(optimizer, feed_dict={x: data[i], y: labels[i]})
</code></pre>

<p>Sometimes it works perfectly, but with some parameters it gets stuck and doesn't want to learn. For example, with these initial parameters:</p>

<pre><code>w = tf.Variable(initial_value=[-0.31199348, -0.46391705], dtype=tf.float32)
b = tf.Variable(initial_value=-1.94877, dtype=tf.float32)
</code></pre>

<p>it will hardly make any improvement in the cost function. What am I doing wrong? Maybe I should somehow adjust the initialization of the parameters?</p>
<p>Aren't you missing a <code>mean(error)</code>?</p> <p>Your problem is the particular combination of the sigmoid, the cost function, and the optimizer.</p> <p>Don't feel bad, AFAIK this exact problem stalled the <em>entire field</em> for a few years.</p> <p>Sigmoid is flat when you're far from the middle, and you're initializing it with relatively large numbers; try /1000.</p> <p>So your abs-error (or square-error) is flat too, and the GradientDescent optimizer takes steps proportional to the slope.</p> <p>Either of these should fix it:</p> <p>Use <a href="https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits" rel="nofollow noreferrer">cross-entropy</a> for the error - it's convex.</p> <p>Use a better Optimizer, like <a href="https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer" rel="nofollow noreferrer">Adam</a>, whose step size is much less dependent on the magnitude of the slope, and more on its consistency.</p> <p>Bonus: Don't roll your own sigmoid, use <a href="https://www.tensorflow.org/api_docs/python/tf/sigmoid" rel="nofollow noreferrer"><code>tf.nn.sigmoid</code></a>, you'll get a lot fewer NaN's that way.</p> <p>Have fun!</p>
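<p>A minimal sketch of the cross-entropy suggestion, reusing <code>h</code> and <code>y</code> from the question (note that this loss takes the raw logit, so the explicit sigmoid is dropped from the loss path):</p>

<pre><code>error = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=h)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(error)
</code></pre>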
python|machine-learning|tensorflow|neural-network
2
378,044
42,993,439
How to Pivot a table in csv horizontally in Python using Pandas df?
<p>I have data in this format - </p>

<pre>MonthYear   HPI     Div  State_fips
1-1993      105.45  7    5
2-1993      105.58  7    5
3-1993      106.23  7    5
4-1993      106.63  7    5
</pre>

<p>Required pivot table:</p>

<pre>State_fips  1-1993  2-1993  3-1993  4-1993
5           105.45  105.58  106.23  106.63
</pre>

<p>(pretty new to pandas)</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a>:</p>

<pre><code>df1 = df.set_index(['State_fips', 'MonthYear'])['HPI'].unstack()

MonthYear   1-1993  2-1993  3-1993  4-1993
State_fips                                
5           105.45  105.58  106.23  106.63

df1 = df.pivot(index='State_fips', columns='MonthYear', values='HPI')

MonthYear   1-1993  2-1993  3-1993  4-1993
State_fips                                
5           105.45  105.58  106.23  106.63
</code></pre>

<p>But if there are duplicates, you need to aggregate with <code>groupby</code> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow noreferrer"><code>pivot_table</code></a>; <code>mean</code> can be changed to <code>sum</code>, <code>median</code>, ...:</p>

<pre><code>print (df)
  MonthYear     HPI  Div  State_fips
0    1-1993  105.45    7           5
1    2-1993  105.58    7           5
2    3-1993  106.23    7           5
3    4-1993  100.00    7           5 &lt;-duplicates same 4-1993, 5
4    4-1993  200.00    7           5 &lt;-duplicates same 4-1993, 5

df1 = df.pivot_table(index='State_fips', columns='MonthYear', values='HPI', aggfunc='mean')

MonthYear   1-1993  2-1993  3-1993  4-1993
State_fips                                
5           105.45  105.58  106.23   150.0 &lt;- (100+200)/2 = 150

df1 = df.groupby(['State_fips', 'MonthYear'])['HPI'].mean().unstack()

MonthYear   1-1993  2-1993  3-1993  4-1993
State_fips                                
5           105.45  105.58  106.23   150.0 &lt;- (100+200)/2 = 150
</code></pre>

<hr>

<p>Last, if you need to create a column from the index and remove the columns name:</p>

<pre><code>df1 = df1.reset_index().rename_axis(None, axis=1)
print (df1)
   State_fips  1-1993  2-1993  3-1993  4-1993
0           5  105.45  105.58  106.23   150.0
</code></pre>
python|csv|pandas|dataframe|pivot
1
378,045
42,729,468
Dimensionality Reduction in Python (defining the variance threshold)
<p>Afternoon. I'm having some trouble with my script. Specifically, I'd like to keep the singular values and their corresponding eigenvectors when the sum of a subset of the eigenvalues is greater than .9 * the sum of all the eigenvalues. So far I've been able to use a for loop and the append function to create a list of tuples that represent the singular values and eigenvectors. However, when I try to nest an if statement within the for loop to meet the condition, I break it. Here's my code.</p>

<pre><code>o = np.genfromtxt (r"C:\Users\Python\Desktop\PCADUMMYDATADUMP.csv", delimiter=",")

o_m=np.matrix(o)

#We define the covariance matrix of our data accordingly. This is the mean centered data approx
#of the covariance matrix.

def covariance_matrix(x):
    #create the mean centered data matrix. this is the data matrix minus the matrix augmented from the vector that represents the column average
    m_c_d=x-np.repeat(np.mean(x, axis=0,), len(x), axis=0)
    #we compute the matrix operations here
    m_c_c=np.multiply(1/((len(m_c_d)-1)),np.transpose(m_c_d)*m_c_d)
    return m_c_c

#Define the correlation matrix for our mean adjusted data matrix

def correlation_matrix(x):
    C_M = covariance_matrix(x)
    #matrix operation is diagonal(covariance_matrix)^-1/2*(covariance_matrix)*diagonal(covariance_matrix)^-1/2
    c_m=fractional_matrix_power(np.diag(np.diag(C_M)),-1/2)*C_M*fractional_matrix_power(np.diag(np.diag(C_M)),-1/2)
    return c_m

def s_v_d(x):
    C_M=covariance_matrix(x)
    #create arrays that hold the left singular vectors(u), the right singular vectors(v), and the singular values (s)
    u,s,v=np.linalg.svd(C_M)
    #not sure if we should keep this here but this is how we can grab the eigenvalues which are the squares of the singular values
    eigenvalues=np.square(s)
    singular_array=[]
    for i in range(0,len(s)-1):
        if np.sum(singular_array,axis=1) &lt; (.9*np.sum(s)):
            singular_pairs=[s[i],v[:,i]]
            singular_array.append(singular_pairs)
        else:
            break
    return np.sum(s,axis=0)
</code></pre>

<p>Specifically, consider the for loop and if statement after <code>singular_array</code>. Thanks!</p>
<p>I think your <code>singular_array</code> with its "mixed" scalar/vector elements is a bit more than <code>np.sum</code> can handle. I'm not 100% sure, but aren't the variances the squares of the singular values? In other words, shouldn't you be using your <code>eigenvalues</code> for the decision?</p> <p>Anyway, here is a non-looping approach:</p>

<pre><code>part_sums = np.cumsum(eigenvalues)
cutoff = np.searchsorted(part_sums, 0.9 * part_sums[-1])
singular_array = list(zip(s[:cutoff], v[:, :cutoff]))
</code></pre>

<p>Change <code>eigenvalues</code> to <code>s</code> if you think it's more appropriate.</p> <p>How it works:</p> <p><code>cumsum</code> computes the running sum over <code>eigenvalues</code>. Its last element is therefore the total sum, and we need only find the place where <code>part_sums</code> crosses 90% of that. This is what <code>searchsorted</code> does for us.</p> <p>Once we have the cutoff, all that remains is applying it to the singular values and vectors and forming the pairs using <code>zip</code>.</p>
arrays|numpy|iteration|linear-algebra|python-3.6
0
378,046
42,954,655
Python pandas read dataframe from custom file format
<p>Using Python 3 and pandas 0.19.2</p> <p>I have a log file formatted this way:</p>

<pre><code>[Header1][Header2][Header3][HeaderN]
[=======][=======][=======][=======]
[Value1][Value2][Value3][ValueN]
[AnotherValue1][ValuesCanBeEmpty][][]
...
</code></pre>

<p>...which is very much like a CSV, except that each value is surrounded by <code>[</code> and <code>]</code> and there is no real delimiter. What would be the most efficient way to load that content into a pandas DataFrame?</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer"><code>read_csv</code></a> with the separator <code>][</code>, which has to be escaped by <code>\</code>. Then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html" rel="nofollow noreferrer"><code>replace</code></a> columns and values, and remove rows with all <code>NaN</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="nofollow noreferrer"><code>dropna</code></a>:</p>

<pre><code>import numpy as np
import pandas as pd
from pandas.compat import StringIO

temp=u"""[Header1][Header2][Header3][HeaderN]
[=======][=======][=======][=======]
[Value1][Value2][Value3][ValueN]
[AnotherValue1][ValuesCanBeEmpty][][]"""
#after testing replace 'StringIO(temp)' to 'filename.csv'
df = pd.read_csv(StringIO(temp), sep="\]\[", engine='python')

df.columns = df.columns.to_series().replace(['^\[', '\]$'],['',''], regex=True)
df = df.replace(['^\[', '\]$', '=', ''], ['', '', np.nan, np.nan], regex=True)
df = df.dropna(how='all')
print (df)
         Header1           Header2 Header3 HeaderN
1         Value1            Value2  Value3  ValueN
2  AnotherValue1  ValuesCanBeEmpty     NaN     NaN

print (df.columns)
Index(['Header1', 'Header2', 'Header3', 'HeaderN'], dtype='object')
</code></pre>
pandas|parsing|dataframe
1
378,047
42,913,564
Numpy C API - Using PyArray_Descr for array creation causes segfaults
<p>I'm trying to use the Numpy C API to create Numpy arrays in C++, wrapped in a utility class. Most things are working as expected, but whenever I try to create an array using one of the functions taking a <code>PyArray_Descr*</code>, the program instantly segfaults. What is the correct way to set up the <code>PyArray_Descr</code> for creation?</p> <p>An example of code which isn't working:</p> <pre><code>PyMODINIT_FUNC PyInit_pysgm() { import_array(); return PyModule_Create(&amp;pysgmmodule); } // .... static PyAry zerosLike(PyAry const&amp; array) { PyArray_Descr* descr = new PyArray_Descr; Py_INCREF(descr); // creation function steals a reference descr-&gt;type = 'H'; descr-&gt;type_num = NPY_UINT16; descr-&gt;kind = 'u'; descr-&gt;byteorder = '='; descr-&gt;alignment = alignof(std::uint16_t); descr-&gt;elsize = sizeof(std::uint16_t); std::vector&lt;npy_intp&gt; shape {array.shape().begin(), array.shape().end()}; // code segfaults after this line before entering PyAry constructor return PyAry(PyArray_Zeros(shape.size(), shape.data(), descr, 0)); } </code></pre> <p>(testing with uint16).</p> <p>I'm not setting the <code>typeobj</code> field, which may be the only problem, but I can't work out what the appropriate value of type <code>PyTypeObject</code> would be.</p> <p><strong>Edit</strong>: <a href="https://docs.scipy.org/doc/numpy-1.10.0/reference/c-api.types-and-structures.html#scalararraytypes" rel="nofollow noreferrer">This page</a> lists the ScalarArray PyTypeObject instances for different types. Adding the line</p> <pre><code>descr-&gt;typeobj = &amp;PyUShortArrType_Type; </code></pre> <p>has not solved the problem.</p>
<p>Try using </p>

<pre><code>descr = PyArray_DescrFromType(NPY_UINT16);
</code></pre>

<p>I've only recently been writing against the numpy C-API, but from what I gather the PyArray_Descr is basically the dtype from python-land. You shouldn't be building these yourself; use the FromType function if you can.</p>
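<p>A related caveat on reference counting (worth double-checking against the NumPy C-API docs for your version): <code>PyArray_DescrFromType</code> returns a new reference, and creation functions like <code>PyArray_Zeros</code> steal the descr reference they are given, so no extra <code>Py_INCREF</code> should be needed with the line above.</p>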
python|c++|numpy
4
378,048
42,888,873
Frobenius normalization implementation in tensorflow
<p>I'm a beginner in <strong>tensorflow</strong> and I want to apply <a href="http://mathworld.wolfram.com/FrobeniusNorm.html" rel="nofollow noreferrer">Frobenius normalization</a> on a tensor, but when I searched I didn't find any function related to it in tensorflow, and I couldn't implement it using tensorflow ops. I can implement it with <strong>numpy</strong> operations, but how can I do this using tensorflow ops only?</p> <p>My implementation using <strong>numpy</strong> in python:</p>

<pre><code>def Frobenius_Norm(tensor):
    x = np.power(tensor,2)
    x = np.sum(x)
    x = np.sqrt(x)
    return x
</code></pre>
<pre><code>def frobenius_norm_tf(M):
    return tf.reduce_sum(M ** 2) ** 0.5
</code></pre>
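<p>A quick sanity-check sketch against the numpy version (assuming the TF 1.x session API):</p>

<pre><code>import numpy as np
import tensorflow as tf

M = np.arange(6, dtype=np.float32).reshape(2, 3)
with tf.Session() as sess:
    print(sess.run(frobenius_norm_tf(tf.constant(M))))  # matches np.sqrt(np.sum(M ** 2))
</code></pre>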
tensorflow|normalization
2
378,049
42,800,565
using or to return two columns. Pandas
<pre><code>def continent_af():
    africa = df[df['cont'] == 'AF' or df['cont'] == 'af']
    return africa

print(continent_af())
</code></pre>

<p>So the first half of the second line returned what I wanted, but when I put the <code>or</code> operator in, I am getting an error, which reads </p> <blockquote> <p>the truth value of a series is ambiguous. use a.empty(), a.bool(), a.any(), or a.all()</p> </blockquote> <p>Any help would be much appreciated.</p>
<p>Python's <code>or</code> tries to collapse each Series into a single boolean, which is ambiguous; use the element-wise <code>|</code> operator instead (the parentheses are required because <code>|</code> binds more tightly than <code>==</code>):</p>

<pre><code>df[(df['cont'] == 'AF') | (df['cont'] == 'af')]
</code></pre>
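<p>An equivalent sketch using <code>isin</code>, which scales better if more spellings show up:</p>

<pre><code>df[df['cont'].isin(['AF', 'af'])]
</code></pre>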
python|pandas
0
378,050
42,980,112
Trouble creating/manipulating Pandas DataFrame from given list of JSON records
<p>I have json records in the file json_data. I used <code>pd.DataFrame(json_data)</code> to make a new table, <code>pd_json_data</code>, using these records.</p> <p><a href="https://i.stack.imgur.com/HuoYq.png" rel="nofollow noreferrer">pandas table pd_json_data</a></p> <p>I want to manipulate <code>pd_json_data</code> to return a new table with primary key <em>(url,hour)</em>, and then a column <em>updated</em> that contains a boolean value.</p> <p><em>hour</em> is based on the <em>number of checks</em>. For example, if <em>number of checks</em> contains 378 at row 0, the new table should have the numbers 1 through 378 in <em>hour</em>, with True in <em>updated</em> if the number in <em>hour</em> is a number in <em>positive checks</em>. </p> <p>Any ideas for how I should approach this? </p>
<h3>Updated Answer</h3> <p>Make fake data</p>

<pre><code>df = pd.DataFrame({'number of checks': [5, 10, 300, 8], 
                   'positive checks':[[1,3,10], [10,11], [9,200], [1,8,7]],
                   'url': ['a', 'b', 'c', 'd']})
</code></pre>

<p>Output</p>

<pre><code>   number of checks positive checks url
0                 5      [1, 3, 10]   a
1                10        [10, 11]   b
2               300        [9, 200]   c
3                 8       [1, 8, 7]   d
</code></pre>

<p>Iterate and create new dataframes, then concatenate</p>

<pre><code>dfs = []
for i, row in df.iterrows():
    hour = np.arange(1, row['number of checks'] + 1)
    df_cur = pd.DataFrame({'hour' : hour, 
                           'url': row['url'], 
                           'updated': np.in1d(hour, row['positive checks'])})
    dfs.append(df_cur)

df_final = pd.concat(dfs)

   hour updated url
0     1    True   a
1     2   False   a
2     3    True   a
3     4   False   a
4     5   False   a
0     1   False   b
1     2   False   b
2     3   False   b
3     4   False   b
4     5   False   b
5     6   False   b
6     7   False   b
7     8   False   b
8     9   False   b
9    10    True   b
0     1   False   c
1     2   False   c
</code></pre>

<h3>Old answer</h3> <p>Now build new dataframe</p>

<pre><code>df1 = df[['url']].copy()
df1['hour'] = df['number of checks'].map(lambda x: list(range(1, x + 1)))
df1['updated'] = df.apply(lambda x: x['number of checks'] in x['positive checks'], axis=1)
</code></pre>

<p>Output</p>

<pre><code>  url                                              hour  updated
0   a                                   [1, 2, 3, 4, 5]    False
1   b                   [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]     True
2   c  [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,...     False
3   d                          [1, 2, 3, 4, 5, 6, 7, 8]     True
</code></pre>
python|json|pandas
0
378,051
42,988,302
Pandas groupby results on the same plot
<p>I am dealing with the following data frame (only for illustration, the actual df is quite large):</p>

<pre><code>   seq      x1      y1
0    2  0.7725  0.2105
1    2  0.8098  0.3456
2    2  0.7457  0.5436
3    2  0.4168  0.7610
4    2  0.3181  0.8790
5    3  0.2092  0.5498
6    3  0.0591  0.6357
7    5  0.9937  0.5364
8    5  0.3756  0.7635
9    5  0.1661  0.8364
</code></pre>

<p>I am trying to plot a multiple line graph for the above coordinates (x as "x1" against y as "y1").</p> <p>Rows with the same "seq" form one path, and have to be plotted as one separate line; for example, all the x, y coordinates corresponding to seq = 2 belong to one line, and so on. </p> <p>I am able to plot them, but on separate graphs; I want all the lines on the same graph. I tried using <strong>subplots</strong>, but I am not getting it right. </p>

<pre><code>import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib notebook

df.groupby("seq").plot(kind = "line", x = "x1", y = "y1")
</code></pre>

<p>This creates 100's of graphs (which is equal to the number of unique seq). Suggest me a way to obtain all the lines on the same graph.</p> <p><strong>UPDATE</strong></p> <p>To resolve the above problem, I implemented the following code:</p>

<pre><code>fig, ax = plt.subplots(figsize=(12,8))
df.groupby('seq').plot(kind='line', x = "x1", y = "y1", ax = ax)
plt.title("abc")
plt.show()
</code></pre>

<p>Now, I want a way to plot the lines with specific colors. I am clustering the paths from seq = 2 and 5 in cluster 1, and the path from seq = 3 in another cluster.</p> <p>So, there are two lines under cluster 1 which I want in red, and 1 line under cluster 2 which can be green.</p> <p>How should I proceed with this?</p>
<p>You need to create the axis before plotting, like in this example:</p>

<pre><code>import pandas as pd
import matplotlib.pylab as plt
import numpy as np

# random df
df = pd.DataFrame(np.random.randint(0,10,size=(25, 3)), columns=['ProjID','Xcoord','Ycoord'])

# plot groupby results on the same canvas
fig, ax = plt.subplots(figsize=(8,6))
df.groupby('ProjID').plot(kind='line', x = "Xcoord", y = "Ycoord", ax=ax)
plt.show()
</code></pre>

<p><a href="https://i.stack.imgur.com/2dZjo.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2dZjo.png" alt="enter image description here"></a></p>
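<p>For the follow-up about cluster colors, a possible sketch (assuming the <code>seq</code>/<code>x1</code>/<code>y1</code> columns from the question): map each <code>seq</code> to its cluster's color and plot the groups one by one on the shared axis.</p>

<pre><code>colors = {2: 'red', 5: 'red', 3: 'green'}  # cluster 1 -&gt; red, cluster 2 -&gt; green
fig, ax = plt.subplots(figsize=(8,6))
for seq, grp in df.groupby('seq'):
    grp.plot(kind='line', x='x1', y='y1', ax=ax, color=colors.get(seq, 'gray'), label=str(seq))
plt.show()
</code></pre>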
python|pandas|matplotlib
9
378,052
42,748,566
pandas diff() giving 0 value for first difference, I want the actual value instead
<p>I have df:</p>

<pre><code>Hour  Energy Wh
1     4
2     6
3     9
4     15
</code></pre>

<p>I would like to add a column that shows the per hour difference. I am using this:</p>

<pre><code>df['Energy Wh/h'] = df['Energy Wh'].diff().fillna(0)
</code></pre>

<p>df1:</p>

<pre><code>Hour  Energy Wh  Energy Wh/h
1     4          0
2     6          2
3     9          3
4     15         6
</code></pre>

<p>However, the Hour 1 value is showing up as 0 in the Energy Wh/h column, whereas I would like it to show up as 4, like below:</p>

<pre><code>Hour  Energy Wh  Energy Wh/h
1     4          4
2     6          2
3     9          3
4     15         6
</code></pre>

<p>I have tried using np.where:</p>

<pre><code>df['Energy Wh/h'] = np.where(df['Hour'] == 1,
                             df['Energy Wh'].diff().fillna(df['Energy Wh']),
                             df['Energy Wh'].diff().fillna(0))
</code></pre>

<p>but I am still getting a 0 value in the Hour 1 row (df1), with no errors. How do I get the value in 'Energy Wh' for Hour 1 to be filled, instead of 0?</p>
<p>You can just <code>fillna()</code> with the original column, without using <code>np.where</code>:</p>

<pre><code>&gt;&gt;&gt; df['Energy Wh/h'] = df['Energy Wh'].diff().fillna(df['Energy Wh'])
&gt;&gt;&gt; df
      Energy Wh  Energy Wh/h
Hour                        
1             4          4.0
2             6          2.0
3             9          3.0
4            15          6.0
</code></pre>
python|pandas|numpy|dataframe
23
378,053
43,024,835
How do I read a bytearray from a CSV file using pandas?
<p>I have a <code>csv</code> file which has a column full of <code>bytearrays</code>. It looks like this:</p>

<pre><code>bytearray(b'\xf3\x90\x02\xff\xff\xff\xe0?')
bytearray(b'\xf3\x90\x02\xff\xff\xff\xe0?')
bytearray(b'\xf3\x00\x00\xff\xff\xff\xe0?')
</code></pre>

<p>and so on. I tried to read this <code>csv</code> file using <code>pandas.read_csv()</code>.</p>

<pre><code>df = pd.read_csv(filename, error_bad_lines=False)
data = df.msg
</code></pre>

<p><code>msg</code> is the name of the column with the bytearrays.</p> <p>But it doesn't look like this is a column full of bytearrays. When I pick out a column and try to print individual elements with <code>print(data[1][1])</code>, the output I get is <code>y</code>, which corresponds to index 1 of the string <code>'bytearray(...)'</code>.</p> <p>How can I import this particular column as a list of bytearrays?</p>
<p>You can pass a converter function to <code>pandas.read_csv()</code> to turn your <code>bytearray</code> strings into actual <code>bytearray</code> objects.</p> <p><strong>Code:</strong></p>

<pre><code>from ast import literal_eval

def read_byte_arrays(bytearray_string):
    if bytearray_string.startswith('bytearray(') and \
            bytearray_string.endswith(')'):
        return bytearray(literal_eval(bytearray_string[10:-1]))
    return bytearray_string
</code></pre>

<p><strong>Test Code:</strong></p>

<pre><code>from io import StringIO

data = StringIO(u'\n'.join([x.strip() for x in r"""
    data1,bytes,data2
    1,bytearray(b'\xf3\x90\x02\xff\xff\xff\xe0?'),2
    1,bytearray(b'\xf3\x90\x02\xff\xff\xff\xe0?'),2
    1,bytearray(b'\xf3\x00\x00\xff\xff\xff\xe0?'),2
""".split('\n')[1:-1]]))

df = pd.read_csv(data, converters={'bytes': read_byte_arrays})
print(df)
</code></pre>

<p><strong>Results:</strong></p>

<pre><code>   data1                                   bytes  data2
0      1  [243, 144, 2, 255, 255, 255, 224, 63]      2
1      1  [243, 144, 2, 255, 255, 255, 224, 63]      2
2      1    [243, 0, 0, 255, 255, 255, 224, 63]      2
</code></pre>
python|csv|pandas
2
378,054
42,983,698
Can't iterate through multiple pandas series
<p>So I was attempting to iterate through two series that I obtained from a Pandas DF, and I found that I could not iterate through them to return numbers less than 280.000. I realized that I could not iterate over lists this way either. Is there any way I can iterate over multiple lists, series, etc.? Thanks. Example below:</p>

<pre><code>two_series = df['GNP'], df['Population']

def numb():
    for i in two_series:
        if i &lt; 280.000:
            print(i)
</code></pre>
<p>Currently, two_series is just a tuple with two elements, each of which is a Series. So when you loop through all the elements of two_series, i is a whole Series, and you only loop twice. Comparing a whole Series against 280 gives a boolean Series, whose truth value is ambiguous, so the <code>if</code> throws an error.</p> <p>You could just concatenate the series, like this:</p>

<pre><code>two_series = df['GNP'].append(df['Population'])
</code></pre>

<p>Or you could just add a second nested loop to go through each of the items in each series:</p>

<pre><code>for i in two_series:
    for entry in i:
        if entry &lt; 280.000:
            print(entry)
</code></pre>
python|pandas
0
378,055
42,751,748
using python to project lat lon geometry to utm
<p>I have a dataframe with earthquake data called eq that has columns listing latitude and longitude. using geopandas I created a point column with the following:</p> <pre><code>from geopandas import GeoSeries, GeoDataFrame from shapely.geometry import Point s = GeoSeries([Point(x,y) for x, y in zip(df['longitude'], df['latitude'])]) eq['geometry'] = s eq.crs = {'init': 'epsg:4326', 'no_defs': True} eq </code></pre> <p>Now I have a geometry column with lat lon coordinates but I want to change the projection to UTM. Can anyone help with the transformation?</p>
<p>Latitude/longitude aren't really a projection, but sort of a default "unprojection". See <a href="http://www.georeference.org/doc/latitude_longitude_projection.htm" rel="nofollow noreferrer">this page for more details</a>, but it probably means your data uses <code>WGS84</code> or <code>epsg:4326</code>.</p> <p>Let's build a dataset and, before we do any reprojection, we'll define the <code>crs</code> as <code>epsg:4326</code></p>

<pre><code>import geopandas as gpd
import pandas as pd
from shapely.geometry import Point

df = pd.DataFrame({'id': [1, 2, 3],
                   'population' : [2, 3, 10],
                   'longitude': [-80.2, -80.11, -81.0],
                   'latitude': [11.1, 11.1345, 11.2]})

s = gpd.GeoSeries([Point(x,y) for x, y in zip(df['longitude'], df['latitude'])])

geo_df = gpd.GeoDataFrame(df[['id', 'population']], geometry=s)
# Define crs for our geodataframe:
geo_df.crs = {'init': 'epsg:4326'}
</code></pre>

<p>I'm not sure what you mean by "UTM projection". From the <a href="https://en.wikipedia.org/wiki/Universal_Transverse_Mercator_coordinate_system" rel="nofollow noreferrer">wikipedia page</a> I see there are 60 different UTM projections depending on the area of the world. You can find the appropriate <code>epsg</code> code online, but I'll just give you an example with a random <code>epsg</code> code. <a href="http://spatialreference.org/ref/epsg/wgs-84-utm-zone-33n/" rel="nofollow noreferrer">This is the one for zone 33N for example</a></p> <p>How do you do the reprojection? You can easily get this info from <a href="http://geopandas.org/projections.html" rel="nofollow noreferrer">the geopandas docs on projection</a>. It's just one line:</p>

<pre><code>geo_df = geo_df.to_crs({'init': 'epsg:3395'})
</code></pre>

<p>and the geometry isn't coded as latitude/longitude anymore:</p>

<pre><code>   id  population                                      geometry
0   1           2  POINT (-8927823.161620541 1235228.11420853)
1   2           3  POINT (-8917804.407449147 1239116.84994171)
2   3          10  POINT (-9016878.754255159 1246501.097746004)
</code></pre>
python|gis|geopandas
4
378,056
42,889,621
Converting numpy array values into integers
<p>My values are currently showing as <code>1.00e+09</code> in an array (type float64). I would like them to show <code>1000000000</code> instead. Is this possible?</p>
<p>Make a sample array</p>

<pre><code>In [206]: x=np.array([1e9, 2e10, 1e6])

In [207]: x
Out[207]: array([  1.00000000e+09,   2.00000000e+10,   1.00000000e+06])
</code></pre>

<p>We can convert to ints - except notice that the largest one is too large for the default int32</p>

<pre><code>In [208]: x.astype(int)
Out[208]: array([ 1000000000, -2147483648,     1000000])

In [212]: x.astype(np.int64)
Out[212]: array([ 1000000000, 20000000000,     1000000], dtype=int64)
</code></pre>

<p>Writing a csv with the default format (float) (this is the default format regardless of the array dtype):</p>

<pre><code>In [213]: np.savetxt('text.txt',x)
In [214]: cat text.txt
1.000000000000000000e+09
2.000000000000000000e+10
1.000000000000000000e+06
</code></pre>

<p>We can specify a format:</p>

<pre><code>In [215]: np.savetxt('text.txt',x, fmt='%d')
In [216]: cat text.txt
1000000000
20000000000
1000000
</code></pre>

<p>Potentially there are 3 issues:</p> <ul> <li>integer v float in the array itself, its <code>dtype</code></li> <li>display or print of the array</li> <li>writing the array to a csv file</li> </ul>
python|numpy
7
378,057
42,737,025
How to find minimum value every x values in an array?
<pre><code>path = ("C:/Users/Calum/AppData/Local/Programs/Python/Python35-32/Python Programs/PV Data/Monthly Data/brunel-11-2016.csv")

with open (path) as f:
    readCSV = csv.reader((islice(f, 0, 8352)), delimiter = ';')
    irrad_bru1 = []
    for row in readCSV:
        irrad1 = row[1]
        irrad_bru1.append(irrad1)
    irrad_bru1 = ['0' if float(x)&lt;0 else x for x in irrad_bru1]
    bru_arr1 = np.asarray(irrad_bru1).astype(np.float)
    rr_bru1 = -np.diff(bru_arr1)
</code></pre>

<p>I want to find the minimum value in the array rr_bru1 every 200 entries. How do I go about doing that?</p>
<p>You can use <code>np.minimum.reduceat</code>:</p>

<pre><code>np.minimum.reduceat(a, np.arange(0, len(a), 200))
</code></pre>
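<p>A minimal usage sketch: <code>reduceat</code> takes the minimum over each block <code>[0:200]</code>, <code>[200:400]</code>, ... (the last block may be shorter).</p>

<pre><code>import numpy as np

a = np.random.randn(1000)
block_mins = np.minimum.reduceat(a, np.arange(0, len(a), 200))  # one minimum per 200-entry block
</code></pre>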
python|numpy|minimum
0
378,058
43,014,269
Add multiple rows for each datetime in pandas dataframe
<pre><code>            name_col
datetime
2017-03-22       0.2
</code></pre>

<p>I want to add multiple rows till the present date (2017-03-25) so that the resulting dataframe looks like:</p>

<pre><code>            name_col
datetime
2017-03-22       0.2
2017-03-23       0.0
2017-03-24       0.0
2017-03-25       0.0
</code></pre>

<p>How do I add multiple rows for each datetime? I can get the present date as </p>

<pre><code>from datetime import datetime, timedelta, date
date.today()
</code></pre>
<p>You can also use the <code>.resample()</code> method:</p>

<pre><code>In [98]: df
Out[98]:
            name_col
datetime
2017-03-22       0.2

In [99]: df.loc[pd.to_datetime(pd.datetime.now().date())] = 0

In [100]: df
Out[100]:
            name_col
datetime
2017-03-22       0.2
2017-03-25       0.0

In [101]: df.resample('D').bfill()
Out[101]:
            name_col
datetime
2017-03-22       0.2
2017-03-23       0.0
2017-03-24       0.0
2017-03-25       0.0
</code></pre>
python|pandas|datetime
4
378,059
42,811,697
Pandas DataFrame to csv: Specifying decimal separator for mixed type
<p>I've found a somewhat strange behaviour when I create a Pandas DataFrame from lists and convert it to csv with a specific decimal separator.</p> <p>This works as expected:</p>

<pre><code>&gt;&gt;&gt; import pandas as pd
&gt;&gt;&gt; a = pd.DataFrame([['a', 0.1], ['b', 0.2]])
&gt;&gt;&gt; a
   0    1
0  a  0.1
1  b  0.2
&gt;&gt;&gt; a.to_csv(decimal=',', sep=' ')
' 0 1\n0 a 0,1\n1 b 0,2\n'
</code></pre>

<p>However, in this case the decimal separator is not set properly:</p>

<pre><code>&gt;&gt;&gt; b = pd.DataFrame([['a', 'b'], [0.1, 0.2]])
&gt;&gt;&gt; b
     0    1
0    a    b
1  0.1  0.2
&gt;&gt;&gt; b.to_csv(decimal=',', sep=' ')
' 0 1\n0 a b\n1 0.1 0.2\n'
</code></pre>

<p>When I transpose <code>b</code> in order to get a DataFrame like <code>a</code>, the decimal separator is still not properly set:</p>

<pre><code>&gt;&gt;&gt; b.T.to_csv(decimal=',', sep=' ')
' 0 1\n0 a 0.1\n1 b 0.2\n'
</code></pre>

<p>Why I am asking: In my program I have columns as individual lists (e.g. <code>col1 = ['a', 'b']</code> and <code>col2 = [0.1, 0.2]</code>, but the number and format of columns can vary) and I would like to convert them to csv with a specific decimal separator, so I'd like to have an output like</p>

<pre><code>' 0 1\n0 a 0,1\n1 b 0,2\n'
</code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.applymap.html" rel="nofollow noreferrer"><code>applymap</code></a> and cast the <code>float</code>-typed cells to <code>str</code> by checking explicitly for their type. Then replace the decimal dot (<code>.</code>) with the comma (<code>,</code>), as each cell now constitutes a string, and dump the contents to a <code>csv</code> file later.</p>

<pre><code>b.applymap(lambda x: str(x).replace(".", ",") if isinstance(x, float) else x).to_csv(sep=" ")
# ' 0 1\n0 a b\n1 0,1 0,2\n'
</code></pre>
python|csv|pandas
2
378,060
42,977,488
DataFrame of soccer results into a league table
<p>So I'm going to re-write this question having spent some time trying to crack it today, and I think I'm doing okay so far.</p> <p>I have a soccer results database with this as the head(3):</p>

<pre><code>Date        Season  home     visitor           FT   hgoal  vgoal  division  tier  totgoal  goaldif  result
1993-04-12  1992    Arsenal  Aston Villa       0-1  0      1      1         1     1        -1       A
1992-09-12  1992    Arsenal  Blackburn Rovers  0-1  0      1      1         1     1        -1       A
1992-10-03  1992    Arsenal  Chelsea           2-1  2      1      1         1     3        1        H
</code></pre>

<p>I've written this code, which works:</p>

<pre><code>def my_table(season) :
    teams = season['home'].unique().tolist()
    table = []
    for team in teams :
        home = season[season['home'] == team]['result']
        hseq = dict(zip(*np.unique(home, return_counts=True)))
        away = season[season['visitor'] == team]['result']
        aseq = dict(zip(*np.unique(away, return_counts=True)))
        team_dict = {
            "season" : season.iloc[0]['Season'],
            "team" : team,
            "home_pl" : sum(hseq.values()),
            "home_w" : hseq.get('H', 0),
            "home_d" : hseq.get('D', 0),
            "home_l" : hseq.get('A', 0),
            "home_gf" : season[season['home'] == team]['hgoal'].sum(),
            "home_ga" : season[season['home'] == team]['vgoal'].sum(),
            "home_gd" : season[season['home'] == team]['goaldif'].sum(),
            "home_pts" : hseq.get('H', 0) * 3 + hseq.get('D', 0),
            "away_pl" : sum(aseq.values()),
            "away_w" : aseq.get('A', 0),
            "away_d" : aseq.get('D', 0),
            "away_l" : aseq.get('H', 0),
            "away_gf" : season[season['visitor'] == team]['vgoal'].sum(),
            "away_ga" : season[season['visitor'] == team]['hgoal'].sum(),
            "away_gd" : (season[season['visitor'] == team]['goaldif'].sum() * -1),
            "away_pts" : aseq.get('A', 0) * 3 + aseq.get('D', 0)  # away draws count toward away points
        }
        team_dict["pl"] = team_dict["home_pl"] + team_dict['away_pl']
        team_dict["w"] = team_dict["home_w"] + team_dict['away_w']
        team_dict["d"] = team_dict["home_d"] + team_dict['away_d']
        team_dict["l"] = team_dict["home_l"] + team_dict['away_l']
        team_dict["gf"] = team_dict["home_gf"] + team_dict['away_gf']
        team_dict["ga"] = team_dict["home_ga"] + team_dict['away_ga']
        team_dict["gd"] = team_dict["home_gd"] + team_dict['away_gd']
        team_dict["pts"] = team_dict["home_pts"] + team_dict['away_pts']
        table.append(team_dict)
    return table

seasons = pl['Season'].unique().tolist()
all_tables = []
for season in seasons :
    table = my_table(pl[pl['Season'] == season])
    all_tables += table

tbl = pd.DataFrame(all_tables)
away = ['away_pl', 'away_w', 'away_d', 'away_l', 'away_gf', 'away_ga', 'away_gd', 'away_pts']
home = ['home_pl', 'home_w', 'home_d', 'home_l', 'home_gf', 'home_ga', 'home_gd', 'home_pts']
full = ['pl', 'w', 'd', 'l', 'gf', 'ga', 'gd', 'pts']
team = ['team']
tbl = tbl[['season', 'team']+home+away+full]
</code></pre>

<p>So now 'tbl' is good, and I can index it by season. But I'm having trouble making it a multi-index which is by 'season' first and then by the points total (descending), which would be equivalent to the league finishing position. To be clear, I want the index to be 1-20 (or 1-22), driven by the points total.</p> <p>Also, if anyone has any thoughts on how I've gone about building the table itself, I would love to hear them. I spent a long time trying to use various vectorized functions, which I'm told are more efficient, but couldn't get it to work and reverted to for loops.</p> <p>Thank you</p>
<p>Consider using <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.core.groupby.DataFrameGroupBy.rank.html" rel="nofollow noreferrer">GroupBy.rank</a> or <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.Series.rank.html" rel="nofollow noreferrer">Series.rank</a> to rank teams by descending <code>pts</code>. Since I can not tell if your final dataframe is at season, team, or game level, choose the appropriate ranking:</p>

<pre><code>tbl['team_rank'] = tbl.groupby(['season', 'team'])['pts'].rank(ascending=False)

tbl['team_rank'] = tbl['pts'].rank(ascending=False)
</code></pre>

<p>Then use <code>set_index</code> on the pair of fields for the multi-index, with no need for prior sorting. </p>

<pre><code>tbl = tbl.set_index(['season', 'team_rank'])
</code></pre>

<p>However, since you require multiple fields for ranking purposes, consider using a <code>reset_index</code> and then retrieving the <code>index.values</code> to get the ordered number (<code>+ 1</code> if you do not want to begin with zero):</p>

<pre><code>tbl = tbl.sort_values(['season', 'pts', 'gd', 'gf'],
                      ascending=[True, False, False, False]).reset_index(drop=True)
tbl['rank'] = tbl.index.values + 1
tbl = tbl.set_index(['season', 'rank'])
</code></pre>
python|pandas|numpy
1
378,061
43,010,072
numpy.all axis parameter misbehavior?
<p>I have the following array.</p>

<pre><code>a = np.array([[0, 5, 0, 5],
              [0, 9, 0, 9]])

&gt;&gt;&gt;a.shape
Out[72]: (2, 4)

&gt;&gt;&gt;np.all(a,axis=0)
Out[69]: array([False,  True, False,  True], dtype=bool)

&gt;&gt;&gt;np.all(a,axis=1)
Out[70]: array([False, False], dtype=bool)
</code></pre>

<p>Because axis 0 means the first axis (row-wise) in a 2D array,</p> <p>I expected that when <code>np.all(a,axis=0)</code> is given, it checks whether all elements are True or not, for every row. </p> <p>But it seems to check <strong>per column</strong>, because it gives 4 elements as output, like <code>array([False,  True, False,  True], dtype=bool)</code>.</p> <p>What am I misunderstanding about how np.all works?</p>
<p><code>axis=0</code> means to AND the elements together <strong>along</strong> axis 0, so <code>a[0, 0]</code> gets ANDed with <code>a[1, 0]</code>, <code>a[0, 1]</code> gets ANDed with <code>a[1, 1]</code>, etc. The axis specified gets collapsed.</p> <p>You're probably thinking that it takes <code>np.all(a[0])</code>, <code>np.all(a[1])</code>, etc., selecting subarrays by indexing along axis 0 and performing <code>np.all</code> on each subarray. That's the opposite of how it works; that would collapse every axis but the one specified.</p> <p>With 2D arrays, there isn't much advantage for one convention over the other, but with 3D and higher, NumPy's chosen convention is much more useful.</p>
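<p>A small demonstration, reusing <code>a</code> from the question:</p>

<pre><code>import numpy as np

a = np.array([[0, 5, 0, 5],
              [0, 9, 0, 9]])

# axis=0 is the axis that gets collapsed: column j yields bool(a[0, j]) AND bool(a[1, j])
print(np.all(a, axis=0))                      # [False  True False  True]
print(a[0].astype(bool) &amp; a[1].astype(bool))  # same result, spelled out element-wise
</code></pre>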
python|numpy|axis
3
378,062
26,961,805
Accessing years within a dataframe in Pandas
<p>I have a dataframe wherein there is a column of datetimes:</p>

<pre><code>rng = pd.date_range('1/1/2011', periods=4, freq='500D')
print(rng)
df = DataFrame(rng)
</code></pre>

<p>which looks like this:</p> <p><img src="https://i.stack.imgur.com/UlKfa.jpg" alt="dataframe"></p> <p>I would like to find the mean year from this column, which would be 2012.75 (I would later round it).</p> <p>Towards this end, I can access an individual year using</p> <p><code>df[0].iloc[0].year</code> </p> <p>which returns <code>2011</code></p> <p>...but to take a mean, I'd have to do this in a clumsy loop. Is there a way to access these years, then take a mean, which is consistent with Pandas' vectorized nature?</p>
<p>If you convert the column into a <a href="http://pandas.pydata.org/pandas-docs/version/0.15.0/generated/pandas.DatetimeIndex.html" rel="nofollow">DatetimeIndex</a>, then you can use its <code>year</code> attribute (which returns a NumPy array) and the array's <code>mean</code> method.</p>

<pre><code>In [104]: pd.DatetimeIndex(df[0]).year.mean()
Out[104]: 2012.75
</code></pre>

<p>Another way is to use the <a href="http://pandas-docs.github.io/pandas-docs-travis/whatsnew.html#dt-accessor" rel="nofollow">dt accessor</a> (new in Pandas 0.15):</p>

<pre><code>In [132]: df[0].dt.year.mean()
Out[132]: 2012.75
</code></pre>

<p>Or, if you want to do some <a href="http://docs.scipy.org/doc/numpy/reference/arrays.datetime.html" rel="nofollow">NumPy datetime64 wrangling</a>:</p>

<pre><code>In [115]: (df[0].values.astype('&lt;M8[Y]').astype('&lt;i8')+1970).mean()
Out[115]: 2012.75
</code></pre>

<hr>

<p>For all but small DataFrames, using pd.DatetimeIndex is fastest:</p>

<pre><code>In [144]: rng = pd.date_range('1/1/2011', periods=10**5, freq='500D')

In [145]: df = pd.DataFrame(rng)

In [147]: %timeit pd.DatetimeIndex(df[0]).year.mean()
100 loops, best of 3: 4.5 ms per loop

In [146]: %timeit (df[0].values.astype('&lt;M8[Y]').astype('&lt;i8')+1970).mean()
100 loops, best of 3: 5.14 ms per loop

In [148]: %timeit df[0].dt.year.mean()
100 loops, best of 3: 5.18 ms per loop
</code></pre>
python|pandas
2
378,063
27,370,643
Pandas groupby monthly + prorate
<p>I have a MultiIndex series:</p>

<pre><code>date        xcs     subdomain  count
2012-04-05  111-11  zero       10
2012-04-11  222-22  m          25
2012-04-11  111-11  zero       30
</code></pre>

<p>Basically, the first 3 columns form a unique index. I need to group by year-month + xcs + subdomain, but count needs to be summed up, divided by the number of items in that group, and multiplied by 30. Thus for the [2012-04, 111-11, zero] group from the above example, it would be (10 + 30)/2*30. I am guessing that this is identical to using the average() function for each group, but I would still need to multiply it by 30.</p> <p>Thanks!</p>
<p>One way is to do it like this:</p> <p>Setup your dummy dataframe:</p>

<pre><code>import pandas as pd

data = """date xcs subdomain count
2012-04-05 111-11 zero 10
2012-04-11 222-22 m 25
2012-04-11 111-11 zero 30"""

df = pd.read_csv(pd.io.common.StringIO(data), sep="\s+")
df['date'] = pd.to_datetime(df.date)
df.set_index(['date', 'xcs', 'subdomain'], inplace=True)
</code></pre>

<p>Groupby and apply <code>.mean</code>, multiplying by 30:</p>

<pre><code>df['value'] = (df.groupby(level=['date', 'xcs', 'subdomain']).mean() * 30).dropna()
df
</code></pre>

<p>Yielding:</p>

<pre><code>                             count  value
date       xcs    subdomain
2012-04-05 111-11 zero          10    300
2012-04-11 222-22 m             25    750
           111-11 zero          30    900
</code></pre>
pandas
1
378,064
27,132,757
Python pandas cumsum() reset after hitting max
<p>I have a pandas DataFrame with timedeltas, and a cumulative sum of those deltas in a separate column expressed in milliseconds. An example is provided below:</p>

<pre><code>Transaction_ID  Time          TimeDelta     CumSum[ms]
1               00:00:04.500  00:00:00.000  000
2               00:00:04.600  00:00:00.100  100
3               00:00:04.762  00:00:00.162  262
4               00:00:05.543  00:00:00.781  1043
5               00:00:09.567  00:00:04.024  5067
6               00:00:10.654  00:00:01.087  6154
7               00:00:14.300  00:00:03.646  9800
8               00:00:14.532  00:00:00.232  10032
9               00:00:16.500  00:00:01.968  12000
10              00:00:17.543  00:00:01.043  13043
</code></pre>

<p>I would like to be able to provide a maximum value for CumSum[ms], after which the cumulative sum would start over again at 0. For example, if the maximum value was 3000 in the above example, the results would look like so:</p>

<pre><code>Transaction_ID  Time          TimeDelta     CumSum[ms]
1               00:00:04.500  00:00:00.000  000
2               00:00:04.600  00:00:00.100  100
3               00:00:04.762  00:00:00.162  262
4               00:00:05.543  00:00:00.781  1043
5               00:00:09.567  00:00:04.024  0
6               00:00:10.654  00:00:01.087  1087
7               00:00:14.300  00:00:03.646  0
8               00:00:14.532  00:00:00.232  232
9               00:00:16.500  00:00:01.968  2200
10              00:00:17.543  00:00:01.043  0
</code></pre>

<p>I have explored using the modulo operator, but am only successful in resetting back to zero when the resulting cumsum is equal to the limit provided (i.e. a cumsum[ms] of 500 % 500 equals zero).</p> <p>Thanks in advance for any thoughts you may have, and please let me know if I can provide any more information.</p>
<p>Here's an example of how you might do this by iterating over each row in the dataframe. I created new data for the example for simplicity:</p>

<pre><code>df = pd.DataFrame({'TimeDelta': np.random.normal( 900, 60, size=100)})

print df.head()

    TimeDelta
0  971.021295
1  734.359861
2  867.000397
3  992.166539
4  853.281131
</code></pre>

<p>So let's do an accumulator loop with your desired 3000 max:</p>

<pre><code>maxvalue = 3000
lastvalue = 0
newcum = []
for row in df.iterrows():
    thisvalue =  row[1]['TimeDelta'] + lastvalue
    if thisvalue &gt; maxvalue:
        thisvalue = 0
    newcum.append( thisvalue )
    lastvalue = thisvalue
</code></pre>

<p>Then put the <code>newcum</code> list into the dataframe:</p>

<pre><code>df['newcum'] = newcum

print df.head()

    TimeDelta       newcum
0  801.977678   801.977678
1  893.296429  1695.274107
2  935.303566  2630.577673
3  850.719497     0.000000
4  951.554206   951.554206
</code></pre>
python|pandas|timedelta|cumsum
11
378,065
27,111,083
numpy slice to return last two dimensions
<p>Basically I'm looking for a function or syntax that will allow me to get the first 'slice' of the last two dimensions of an n-dimensional numpy array, with an arbitrary number of dimensions.</p> <p>I can do this, but it's too ugly to live with, and what if someone sends a 6d array in? There must be a numpy feature, like the Ellipsis, that expands to 0,0,0,... instead of :,:,:,...</p>

<pre><code>data_2d = np.ones(5**2).reshape(5,5)
data_3d = np.ones(5**3).reshape(5,5,5)
data_4d = np.ones(5**4).reshape(5,5,5,5)

def get_last2d(data):
    if data.ndim == 2:
        return data[:]
    if data.ndim == 3:
        return data[0, :]
    if data.ndim == 4:
        return data[0, 0, :]

np.array_equal(get_last2d(data_3d), get_last2d(data_4d))
</code></pre>

<p>Thanks, Colin</p>
<p>How about this,</p>

<pre><code>def get_last2d(data):
    if data.ndim &lt;= 2:
        return data
    slc = [0] * (data.ndim - 2)
    slc += [slice(None), slice(None)]
    return data[tuple(slc)]  # index with a tuple; newer NumPy no longer accepts a list here
</code></pre>
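<p>A shorter equivalent sketch, building the index tuple directly (for a 2-d array the tuple is empty, which returns the array unchanged):</p>

<pre><code>def get_last2d(data):
    return data[(0,) * (data.ndim - 2)]
</code></pre>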
python|numpy|slice
2
378,066
14,918,357
replace integers in array Python
<p>They told me to post a new question for the <a href="https://stackoverflow.com/questions/14917975/accessing-elements-in-array-python">second part of the question</a>.</p> <p>Is there some way I can replace the first 8 integers in the multidimensional array with the 8 integers of an array that I created? For example: </p>

<pre><code>import Image
import numpy as np

im = Image.open("C:\Users\Jones\Pictures\1.jpg")
pix = im.load()

array=[0, 3, 38, 13, 7, 18, 3, 715]

r, g, b = np.array(im).T

print r[0:8]
</code></pre>
<p>Try this:</p>

<pre><code>r[0, :8] = array
</code></pre>

<p>It also looks like you could use a read of the <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html" rel="nofollow">numpy docs on indexing</a>.</p>
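<p>One caveat worth checking: <code>np.array(im)</code> typically yields a <code>uint8</code> array, so an out-of-range value like <code>715</code> will not survive the assignment intact (depending on the NumPy version, it is either wrapped modulo 256 or rejected with an error).</p>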
python|numpy
5
378,067
14,907,364
pandas transform timeseries into multiple column DataFrame
<p>I have a timeseries of intraday data that looks like the below:</p>

<pre><code>ts = pd.Series(np.random.randn(60), index=pd.date_range('1/1/2000', periods=60, freq='2h'))
</code></pre>

<p>I am hoping to transform the data into a DataFrame, with the columns as each date, and rows as the time in the date.</p> <p>I have tried these, </p>

<pre><code>key = lambda x: x.date()
grouped = ts.groupby(key)
</code></pre>

<p>But how do I transform the groups into a date-columned DataFrame? Or is there any better way?</p>
<pre><code>import pandas as pd
import numpy as np

index = pd.date_range('1/1/2000', periods=60, freq='2h')
ts = pd.Series(np.random.randn(60), index = index)
key = lambda x: x.time()
groups = ts.groupby(key)
print pd.DataFrame({k:g for k,g in groups}).resample('D').T
</code></pre>

<p>out:</p>

<pre><code>          2000-01-01  2000-01-02  2000-01-03  2000-01-04  2000-01-05  2000-01-06  \
00:00:00    0.109959   -0.124291   -0.137365    0.054729   -1.305821   -1.928468
03:00:00    1.336467    0.874296    0.153490   -2.410259    0.906950    1.860385
06:00:00   -1.172638   -0.410272   -0.800962    0.568965   -0.270307   -2.046119
09:00:00   -0.707423    1.614732    0.779645   -0.571251    0.839890    0.435928
12:00:00    0.865577   -0.076702   -0.966020    0.589074    0.326276   -2.265566
15:00:00    1.845865   -1.421269   -0.141785    0.433011   -0.063286    0.129706
18:00:00   -0.054569    0.277901    0.383375   -0.546495   -0.644141   -0.207479
21:00:00    1.056536    0.031187   -1.667686   -0.270580   -0.678205    0.750386

          2000-01-07  2000-01-08
00:00:00   -0.657398   -0.630487
03:00:00    2.205280   -0.371830
06:00:00   -0.073235    0.208831
09:00:00    1.720097   -0.312353
12:00:00   -0.774391         NaN
15:00:00    0.607250         NaN
18:00:00    1.379823         NaN
21:00:00    0.959811         NaN
</code></pre>
python|pandas|dataframe|pandas-groupby
2
378,068
14,906,962
Python double free error for huge datasets
<p>I have a very simple script in Python, but for some reason I get the following error when running a large amount of data:</p>

<pre><code>*** glibc detected *** python: double free or corruption (out): 0x00002af5a00cc010 ***
</code></pre>

<p>I am used to these errors coming up in C or C++, when one tries to free memory that has already been freed. However, by my understanding of Python (and especially the way I've written the code), I really don't understand why this should happen. </p> <p>Here is the code:</p>

<pre><code>#!/usr/bin/python -tt

import sys, commands, string
import numpy as np
import scipy.io as io
from time import clock

W = io.loadmat(sys.argv[1])['W']
size = W.shape[0]
numlabels = int(sys.argv[2])
Q = np.zeros((size, numlabels), dtype=np.double)
P = np.zeros((size, numlabels), dtype=np.double)
Q += 1.0 / Q.shape[1]
nu = 0.001
mu = 0.01
start = clock()
mat = -nu + mu*(W*(np.log(Q)-1))
end = clock()
print &gt;&gt; sys.stderr, "Time taken to compute matrix: %.2f seconds"%(end-start)
</code></pre>

<p>One may ask, why declare a P and a Q numpy array? I simply do that to reflect the actual conditions (as this code is simply a segment of what I actually do, where I need a P matrix and declare it beforehand). </p> <p>I have access to a 192GB machine, and so I tested this out on a very large SciPy sparse matrix (2.2 million by 2.2 million, but very sparse, that's not the issue). The main memory is taken up by the Q, P, and mat matrices, as they are all 2.2 million by 2000 matrices (size = 2.2 million, numlabels = 2000). The peak memory goes up to 131GB, which comfortably fits in memory. While the mat matrix is being computed, I get the glibc error, and my process automatically goes into the sleep (S) state, without deallocating the 131GB it has taken up. </p> <p>Given the bizarre (for Python) error (I am not explicitly deallocating anything), and the fact that this works nicely for smaller matrix sizes (around 1.5 million by 2000), I am really not sure where to start to debug this. </p> <p>As a starting point, I have set "ulimit -s unlimited" before running, but to no avail.</p> <p>Any help or insight into numpy's behavior with really large amounts of data would be welcome. </p> <p>Note that this is NOT an out of memory error - I have 196GB, and my process reaches around 131GB and stays there for some time before giving the error below. </p> <p><strong>Update: February 16, 2013 (1:10 PM PST):</strong></p> <p>As per suggestions, I ran Python with GDB.
Interestingly, on one GDB run I forgot to set the stack size limit to "unlimited", and got the following output:</p> <pre><code>*** glibc detected *** /usr/bin/python: munmap_chunk(): invalid pointer: 0x00007fe7508a9010 *** ======= Backtrace: ========= /lib64/libc.so.6(+0x733b6)[0x7ffff6ec23b6] /usr/lib64/python2.7/site-packages/numpy/core/multiarray.so(+0x4a496)[0x7ffff69fc496] /usr/lib64/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x4e67)[0x7ffff7af48c7] /usr/lib64/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x309)[0x7ffff7af6c49] /usr/lib64/libpython2.7.so.1.0(PyEval_EvalCode+0x32)[0x7ffff7b25592] /usr/lib64/libpython2.7.so.1.0(+0xfcc61)[0x7ffff7b33c61] /usr/lib64/libpython2.7.so.1.0(PyRun_FileExFlags+0x84)[0x7ffff7b34074] /usr/lib64/libpython2.7.so.1.0(PyRun_SimpleFileExFlags+0x189)[0x7ffff7b347c9] /usr/lib64/libpython2.7.so.1.0(Py_Main+0x36c)[0x7ffff7b3e1bc] /lib64/libc.so.6(__libc_start_main+0xfd)[0x7ffff6e6dbfd] /usr/bin/python[0x4006e9] ======= Memory map: ======== 00400000-00401000 r-xp 00000000 09:01 50336181 /usr/bin/python2.7 00600000-00601000 r--p 00000000 09:01 50336181 /usr/bin/python2.7 00601000-00602000 rw-p 00001000 09:01 50336181 /usr/bin/python2.7 00602000-00e5f000 rw-p 00000000 00:00 0 [heap] 7fdf2584c000-7ffff0a66000 rw-p 00000000 00:00 0 7ffff0a66000-7ffff0a6b000 r-xp 00000000 09:01 50333916 /usr/lib64/python2.7/lib-dynload/mmap.so 7ffff0a6b000-7ffff0c6a000 ---p 00005000 09:01 50333916 /usr/lib64/python2.7/lib-dynload/mmap.so 7ffff0c6a000-7ffff0c6b000 r--p 00004000 09:01 50333916 /usr/lib64/python2.7/lib-dynload/mmap.so 7ffff0c6b000-7ffff0c6c000 rw-p 00005000 09:01 50333916 /usr/lib64/python2.7/lib-dynload/mmap.so 7ffff0c6c000-7ffff0c77000 r-xp 00000000 00:12 54138483 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/streams.so 7ffff0c77000-7ffff0e76000 ---p 0000b000 00:12 54138483 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/streams.so 7ffff0e76000-7ffff0e77000 r--p 0000a000 00:12 54138483 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/streams.so 7ffff0e77000-7ffff0e78000 rw-p 0000b000 00:12 54138483 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/streams.so 7ffff0e78000-7ffff0e79000 rw-p 00000000 00:00 0 7ffff0e79000-7ffff0e9b000 r-xp 00000000 00:12 54138481 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio5_utils.so 7ffff0e9b000-7ffff109a000 ---p 00022000 00:12 54138481 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio5_utils.so 7ffff109a000-7ffff109b000 r--p 00021000 00:12 54138481 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio5_utils.so 7ffff109b000-7ffff109f000 rw-p 00022000 00:12 54138481 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio5_utils.so 7ffff109f000-7ffff10a0000 rw-p 00000000 00:00 0 7ffff10a0000-7ffff10a5000 r-xp 00000000 09:01 50333895 /usr/lib64/python2.7/lib-dynload/zlib.so 7ffff10a5000-7ffff12a4000 ---p 00005000 09:01 50333895 /usr/lib64/python2.7/lib-dynload/zlib.so 7ffff12a4000-7ffff12a5000 r--p 00004000 09:01 50333895 /usr/lib64/python2.7/lib-dynload/zlib.so 7ffff12a5000-7ffff12a7000 rw-p 00005000 09:01 50333895 /usr/lib64/python2.7/lib-dynload/zlib.so 7ffff12a7000-7ffff12ad000 r-xp 00000000 00:12 54138491 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio_utils.so 7ffff12ad000-7ffff14ac000 ---p 00006000 00:12 54138491 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio_utils.so 7ffff14ac000-7ffff14ad000 r--p 00005000 00:12 54138491 
/home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio_utils.so 7ffff14ad000-7ffff14ae000 rw-p 00006000 00:12 54138491 /home/avneesh/.local/lib/python2.7/site-packages/scipy/io/matlab/mio_utils.so 7ffff14ae000-7ffff14b5000 r-xp 00000000 00:12 54138562 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_csgraph.so 7ffff14b5000-7ffff16b4000 ---p 00007000 00:12 54138562 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_csgraph.so 7ffff16b4000-7ffff16b5000 r--p 00006000 00:12 54138562 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_csgraph.so 7ffff16b5000-7ffff16b6000 rw-p 00007000 00:12 54138562 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_csgraph.so 7ffff16b6000-7ffff17c2000 r-xp 00000000 00:12 54138558 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_bsr.so 7ffff17c2000-7ffff19c2000 ---p 0010c000 00:12 54138558 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_bsr.so 7ffff19c2000-7ffff19c3000 r--p 0010c000 00:12 54138558 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_bsr.so 7ffff19c3000-7ffff19c6000 rw-p 0010d000 00:12 54138558 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_bsr.so 7ffff19c6000-7ffff19d5000 r-xp 00000000 00:12 54138561 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_dia.so 7ffff19d5000-7ffff1bd4000 ---p 0000f000 00:12 54138561 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_dia.so 7ffff1bd4000-7ffff1bd5000 r--p 0000e000 00:12 54138561 /home/avneesh/.local/lib/python2.7/site-packages/scipy/sparse/sparsetools/_dia.so Program received signal SIGABRT, Aborted. 0x00007ffff6e81ab5 in raise () from /lib64/libc.so.6 (gdb) bt #0 0x00007ffff6e81ab5 in raise () from /lib64/libc.so.6 #1 0x00007ffff6e82fb6 in abort () from /lib64/libc.so.6 #2 0x00007ffff6ebcdd3 in __libc_message () from /lib64/libc.so.6 #3 0x00007ffff6ec23b6 in malloc_printerr () from /lib64/libc.so.6 #4 0x00007ffff69fc496 in ?? () from /usr/lib64/python2.7/site-packages/numpy/core/multiarray.so #5 0x00007ffff7af48c7 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0 #6 0x00007ffff7af6c49 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.7.so.1.0 #7 0x00007ffff7b25592 in PyEval_EvalCode () from /usr/lib64/libpython2.7.so.1.0 #8 0x00007ffff7b33c61 in ?? () from /usr/lib64/libpython2.7.so.1.0 #9 0x00007ffff7b34074 in PyRun_FileExFlags () from /usr/lib64/libpython2.7.so.1.0 #10 0x00007ffff7b347c9 in PyRun_SimpleFileExFlags () from /usr/lib64/libpython2.7.so.1.0 #11 0x00007ffff7b3e1bc in Py_Main () from /usr/lib64/libpython2.7.so.1.0 #12 0x00007ffff6e6dbfd in __libc_start_main () from /lib64/libc.so.6 #13 0x00000000004006e9 in _start () </code></pre> <p>When I set the stack size limit to unlimited", I get the following:</p> <pre><code>*** glibc detected *** /usr/bin/python: double free or corruption (out): 0x00002abb2732c010 *** ^X^C Program received signal SIGINT, Interrupt. 
0x00002aaaab9d08fe in __lll_lock_wait_private () from /lib64/libc.so.6 (gdb) bt #0 0x00002aaaab9d08fe in __lll_lock_wait_private () from /lib64/libc.so.6 #1 0x00002aaaab969f2e in _L_lock_9927 () from /lib64/libc.so.6 #2 0x00002aaaab9682d1 in free () from /lib64/libc.so.6 #3 0x00002aaaaaabbfe2 in _dl_scope_free () from /lib64/ld-linux-x86-64.so.2 #4 0x00002aaaaaab70a4 in _dl_map_object_deps () from /lib64/ld-linux-x86-64.so.2 #5 0x00002aaaaaabcaa0 in dl_open_worker () from /lib64/ld-linux-x86-64.so.2 #6 0x00002aaaaaab85f6 in _dl_catch_error () from /lib64/ld-linux-x86-64.so.2 #7 0x00002aaaaaabc5da in _dl_open () from /lib64/ld-linux-x86-64.so.2 #8 0x00002aaaab9fb530 in do_dlopen () from /lib64/libc.so.6 #9 0x00002aaaaaab85f6 in _dl_catch_error () from /lib64/ld-linux-x86-64.so.2 #10 0x00002aaaab9fb5cf in dlerror_run () from /lib64/libc.so.6 #11 0x00002aaaab9fb637 in __libc_dlopen_mode () from /lib64/libc.so.6 #12 0x00002aaaab9d60c5 in init () from /lib64/libc.so.6 #13 0x00002aaaab080933 in pthread_once () from /lib64/libpthread.so.0 #14 0x00002aaaab9d61bc in backtrace () from /lib64/libc.so.6 #15 0x00002aaaab95dde7 in __libc_message () from /lib64/libc.so.6 #16 0x00002aaaab9633b6 in malloc_printerr () from /lib64/libc.so.6 #17 0x00002aaaab9682dc in free () from /lib64/libc.so.6 #18 0x00002aaaabef1496 in ?? () from /usr/lib64/python2.7/site-packages/numpy/core/multiarray.so #19 0x00002aaaaad888c7 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.7.so.1.0 #20 0x00002aaaaad8ac49 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.7.so.1.0 #21 0x00002aaaaadb9592 in PyEval_EvalCode () from /usr/lib64/libpython2.7.so.1.0 #22 0x00002aaaaadc7c61 in ?? () from /usr/lib64/libpython2.7.so.1.0 #23 0x00002aaaaadc8074 in PyRun_FileExFlags () from /usr/lib64/libpython2.7.so.1.0 #24 0x00002aaaaadc87c9 in PyRun_SimpleFileExFlags () from /usr/lib64/libpython2.7.so.1.0 #25 0x00002aaaaadd21bc in Py_Main () from /usr/lib64/libpython2.7.so.1.0 #26 0x00002aaaab90ebfd in __libc_start_main () from /lib64/libc.so.6 #27 0x00000000004006e9 in _start () </code></pre> <p>This makes me believe the basic issue is with the numpy multiarray core module (line #4 in the first output and line #18 in the second). I will bring it up as a bug report in both numpy and scipy just in case. </p> <p>Has anyone seen this before? </p> <p><strong>Update: February 17, 2013 (4:45 PM PST)</strong></p> <p>I found a machine that I could run the code on that had a more recent version of SciPy (0.11) and NumPy (1.7.0). Running the code straight up (without GDB) resulted in a seg fault without any output to stdout or stderr. Running again through GDB, I get the following:</p> <pre><code>Program received signal SIGSEGV, Segmentation fault. 0x00002aaaabead970 in ?? () from /lib/x86_64-linux-gnu/libc.so.6 (gdb) bt #0 0x00002aaaabead970 in ?? 
() from /lib/x86_64-linux-gnu/libc.so.6 #1 0x00002aaaac5fcd04 in PyDataMem_FREE (ptr=&lt;optimized out&gt;, $K8=&lt;optimized out&gt;) at numpy/core/src/multiarray/multiarraymodule.c:3510 #2 array_dealloc (self=0xc00ab7edbfc228fe) at numpy/core/src/multiarray/arrayobject.c:416 #3 0x0000000000498eac in PyEval_EvalFrameEx () #4 0x000000000049f1c0 in PyEval_EvalCodeEx () #5 0x00000000004a9081 in PyRun_FileExFlags () #6 0x00000000004a9311 in PyRun_SimpleFileExFlags () #7 0x00000000004aa8bd in Py_Main () #8 0x00002aaaabe4f76d in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6 #9 0x000000000041b9b1 in _start () </code></pre> <p>I understand this is not as useful as a NumPy compiled with debugging symbols; I will try doing that and post the output later. </p>
<p>After discussions on the same issue on the Numpy Github page (<a href="https://github.com/numpy/numpy/issues/2995" rel="nofollow noreferrer">https://github.com/numpy/numpy/issues/2995</a>) it has been brought to my attention that Numpy/Scipy will not support such a large number of non-zeros in the resulting sparse matrix. </p> <p>Basically, <code>W</code> is a sparse matrix, and <code>Q</code> (or <code>np.log(Q)-1</code>) is a dense matrix. When multiplying a dense matrix with a sparse one, the resulting product will also be represented in sparse matrix form (which makes a lot of sense). However, note that since I have no zero rows in my <code>W</code> matrix, the resulting product <code>W*(np.log(Q)-1)</code> will have <code>nnz &gt; 2^31</code> (2.2 million multiplied by 2000) and this exceeds the maximum number of elements in a sparse matrix in current versions of Scipy. </p> <p>At this stage, I'm not sure how else to get this to work, barring a re-implementation in another language. Perhaps it can still be done in Python, but it might be better to just write up a C++ and Eigen implementation.</p> <p>A special thanks to <a href="https://stackoverflow.com/users/108184/pv">pv.</a> for helping out on this to pinpoint the exact issue, and thanks to everyone else for the brainstorming!</p>
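<p>A quick way to anticipate this limit before attempting the multiplication is to estimate the nnz of the product up front. This is only a sketch, with the shapes assumed from the discussion above:</p>

<pre><code>import numpy as np

# Assumed shapes: W has 2.2 million rows with no zero rows, and the dense
# factor has 2000 columns, so the product is fully dense even in sparse form.
n_rows, n_cols = 2200000, 2000
expected_nnz = n_rows * n_cols

# Current Scipy stores sparse indices as 32-bit ints, so nnz must stay below 2^31.
if expected_nnz &gt; np.iinfo(np.int32).max:
    print("Product would exceed the 32-bit nnz limit: %d" % expected_nnz)
</code></pre>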
python|memory|numpy|scipy
6
378,069
14,920,903
Time difference in seconds from numpy.timedelta64
<p>How to get time difference in seconds from numpy.timedelta64 variable?</p> <pre><code>time1 = '2012-10-05 04:45:18' time2 = '2012-10-05 04:44:13' dt = np.datetime64(time1) - np.datetime64(time2) print dt 0:01:05 </code></pre> <p>I'd like to convert <code>dt</code> to number (int or float) representing time difference in seconds.</p>
<p>To get the number of seconds from a <code>numpy.timedelta64()</code> object, using the <a href="https://docs.scipy.org/doc/numpy/reference/arrays.datetime.html" rel="noreferrer"><code>numpy</code> 1.7 experimental datetime API</a>:</p> <pre><code>seconds = dt / np.timedelta64(1, 's') </code></pre>
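<p>Applied to the values from the question (a quick check; the result is a float, so wrap it in <code>int()</code> if an integer is needed):</p>

<pre><code>import numpy as np

dt = np.datetime64('2012-10-05 04:45:18') - np.datetime64('2012-10-05 04:44:13')
seconds = dt / np.timedelta64(1, 's')
print(seconds)  # 65.0
</code></pre>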
python|datetime|numpy
93
378,070
25,404,705
Calculating Percentile scores for each element with respect to its column
<p>So my NumPy array looks like this</p> <pre><code>npfinal = [[1, 3, 5, 0, 0, 0], [5, 2, 4, 0, 0, 0], [7, 7, 2, 0, 0, 0], . . . </code></pre> <p>Sample dataset I'm working with is 25k rows. </p> <p>The first 3 columns contain meaningful data, rest are placeholders for the percentiles.</p> <p>So I need the percentile of a[0][0] <strong>with respect to the entire first column</strong> in a[0][3]. So 1's percentile score wrt the column [1,5,7,...]</p> <p>My first attempt was:</p> <pre><code>import scipy.stats as ss ... numofcols = 3 for row in npfinal: for i in range(0,numofcols): row[i+numofcols] = int(round(ss.percentileofscore(npfinal[:,i], row[i]))) </code></pre> <p>But this is taking way too much time; and on a full dataset it'll be impossible.</p> <p>I'm new to the world of computing on such large datasets so any sort of help will be appreciated. </p>
<p>I found a solution that I believe works better when there are repeated values in the array:</p> <pre><code>import numpy as np from scipy import stats # some array with repeated values: M = np.array([[1, 7, 2], [5, 2, 2], [5, 7, 2]]) # calculate percentiles applying scipy rankdata to each column: percentile = np.apply_along_axis(stats.rankdata, 0, M, method='average')/len(M) </code></pre> <p>The np.argsort solution has the problem that it gives different percentiles to repetitions of the same value. For example if you had:</p> <pre><code>percentile_argsort = np.argsort(np.argsort(M, axis=0), axis=0) / float(len(M)) * 100 percentile_rankdata = np.apply_along_axis(stats.rankdata, 0, M, method='average')/len(M) </code></pre> <p>the two different approaches will output the results:</p> <pre><code>M array([[1, 7, 2], [5, 2, 2], [5, 7, 2]]) percentile_argsort array([[ 0. , 33.33333333, 0. ], [ 33.33333333, 0. , 33.33333333], [ 66.66666667, 66.66666667, 66.66666667]]) percentile_rankdata array([[ 0.33333333, 0.83333333, 0.66666667], [ 0.83333333, 0.33333333, 0.66666667], [ 0.83333333, 0.83333333, 0.66666667]]) </code></pre>
python|numpy|scipy
2
378,071
25,206,851
Aggregate over an index in pandas?
<p>How can I aggregate (sum) over an index which I intend to map to new values? Basically I have a <code>groupby</code> result by two variables where I want to groupby one variable into larger classes. The following code does this operation on <code>s</code> by mapping the first by-variable but seems too complicating:</p> <pre><code>import pandas as pd mapping={1:1, 2:1, 3:3} s=pd.Series([1]*6, index=pd.MultiIndex.from_arrays([[1,1,2,2,3,3],[1,2,1,2,1,2]])) x=s.reset_index() x["level_0"]=x.level_0.map(mapping) result=x.groupby(["level_0", "level_1"])[0].sum() </code></pre> <p>Is there a way to write this more concisely?</p>
<p>There is a <code>level=</code> option for <code>Series.sum()</code>; using it gives quite a concise way to do this.</p> <pre><code>In [69]: s.index = pd.MultiIndex.from_tuples(map(lambda x: (mapping.get(x[0]), x[1]), s.index.values)) s.sum(level=(0,1)) Out[69]: 1 1 2 2 2 3 1 1 2 1 dtype: int64 </code></pre>
pandas
3
378,072
25,119,536
Interpolating array columns with PiecewisePolynomial in scipy
<p>I'm trying to interpolate each column of a numpy array using scipy's <code>PiecewisePolynomial</code>. I know that this is possible for scipy's <code>interp1d</code> but for piecewise polynomial interpolation it does not seem to work the same way. I have the following code:</p> <pre><code>import numpy as np import scipy.interpolate as interpolate x1=np.array([1,2,3,4]) y1=np.array([[2,3,1],[4,1,6],[1,2,7],[3,1,3]]) interp=interpolate.PiecewisePolynomial(x1,y1,axis=0) x = np.array([1.2, 2.1, 3.3]) y = interp(x) </code></pre> <p>Which results in <code>y = np.array([2.6112, 4.087135, 1.78648])</code>. It seems that only the first column in <code>y1</code> was taken into account for interpolation. How can I make the method return the interpolated values of each column in <code>y1</code> at the points specified by <code>x</code>? </p>
<p>The <code>scipy.interpolate.PiecewisePolynomial</code> interprets the different columns of <code>y1</code> as the derivatives of the function to be interpolated, whereas <code>interp1d</code> interprets the columns as different functions.</p> <p>It may be that you do not actually want to use the <code>PiecewisePolynomial</code> at all, if you do not have the derivatives available. If you just want to have a smoother interpolation, then try <code>interp1d</code> with, e.g., the <code>kind='quadratic'</code> keyword argument. (See the documentation for <code>interp1d</code>.)</p> <p>Now your function looks rather interesting</p> <pre><code>import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot(111) x = np.linspace(0,5,200) ax.plot(x, interp(x)) ax.plot(x1, y1[:,0], 'o') </code></pre> <p><img src="https://i.stack.imgur.com/38U8w.png" alt="enter image description here"></p> <p>If you try the quadratic spline interpolation (using the <code>interpolate</code> alias imported in the question):</p> <pre><code>interp = interpolate.interp1d(x1, y1.T, kind='quadratic') fig = plt.figure() ax = fig.add_subplot(111) x = np.linspace(1,4,200) ip = interp(x) ax.plot(x, ip[0], 'b') ax.plot(x, ip[1], 'g') ax.plot(x, ip[2], 'r') ax.plot(x1, y1[:,0], 'bo') ax.plot(x1, y1[:,1], 'go') ax.plot(x1, y1[:,2], 'ro') </code></pre> <p>This might be closer to what you want:</p> <p><img src="https://i.stack.imgur.com/hCs0U.png" alt="enter image description here"></p>
python|arrays|numpy|scipy|interpolation
0
378,073
25,135,578
Python Pandas: drop a column from a multi-level column index?
<p>I have a multi level column table like this:</p> <pre><code> a ---+---+--- b | c | f --+---+---+--- 0 | 1 | 2 | 7 1 | 3 | 4 | 9 </code></pre> <p>How can I drop column "c" by name? to look like this:</p> <pre><code> a ---+--- b | f --+---+--- 0 | 1 | 7 1 | 3 | 9 </code></pre> <p>I tried this:</p> <pre><code>del df['c'] </code></pre> <p>but I get the following error, which makes sense:</p> <blockquote> <p>KeyError: 'Key length (1) was greater than MultiIndex lexsort depth (0)'</p> </blockquote>
<p>Solved:</p> <pre><code>df.drop('c', axis=1, level=1) </code></pre>
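<p>Note that <code>drop</code> returns a new DataFrame rather than modifying <code>df</code> in place, so assign the result back to keep it:</p>

<pre><code>df = df.drop('c', axis=1, level=1)
</code></pre>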
pandas|dataframe|multiple-columns|multi-level
44
378,074
25,211,547
optimization of some numpy/scipy code
<p>I'm trying to optimize some python code, which uses <code>scipy.optimize.root</code> for rootfinding. </p> <p>cProfile tells me that most of the time the program is evaluating the function called by <code>optimize.root</code>: e.g. for a total execution time of 80s, 58s are spent on <code>lineSphericalDist</code>, to which <code>fun</code> contributes 54s (and about 215 000 calls):</p> <pre><code>Fri Aug 8 21:09:32 2014 profile2 12796193 function calls (12617458 primitive calls) in 82.707 seconds Ordered by: cumulative time ncalls tottime percall cumtime percall filename:lineno(function) 1 0.005 0.005 82.710 82.710 BilliardsNumpyClass.py:6(&lt;module&gt;) 1 0.033 0.033 64.155 64.155 BilliardsNumpyClass.py:446(traceAll) 100 1.094 0.011 63.549 0.635 BilliardsNumpyClass.py:404(trace) 91333 7.226 0.000 58.804 0.001 BilliardsNumpyClass.py:244(lineSphericalDist) 214667 49.436 0.000 54.325 0.000 BilliardsNumpyClass.py:591(fun) ... </code></pre> <p>Here is the optimize.root call somewhere in <code>trace</code>:</p> <pre><code> ... res = optimize.root(self.lineSphericalDist, [tguess], args=(t0, a0), method='lm') ... </code></pre> <p>The function consists of some basic trigonometric functions:</p> <pre><code> def lineSphericalDist(self, tt, t0, a0): x0,y0,vnn = self.fun(t0)[0:3] beta = np.pi + t0 + a0 - vnn l = np.sin(beta - t0)/np.sin(beta - tt) x2,y2 = self.fun(tt)[0:2] return np.sqrt(x0**2+y0**2)*l-np.sqrt(x2**2+y2**2) </code></pre> <p>In the easiest case fun is:</p> <pre><code> def fun(self,t): return self.r*np.cos(t),self.r*np.sin(t),np.pi/2.,np.mod(t+np.pi/2., np.pi*2.) </code></pre> <p><strong>Is there a way to speed this up (tguess is already a pretty good starting value)?</strong> Am I doing something wrong? E.g., is it a good idea to return multiple values the way I do it in <code>fun</code>?</p>
<p>If I understand correctly, your <code>a0</code> and <code>t0</code> are not part of the optimization; you only optimize over <code>tt</code>. However, inside <code>lineSphericalDist</code>, you call self.fun(t0). You could precompute that quantity outside of lineSphericalDist, which would halve the number of calls to self.fun...</p> <p>You could also compute <code>beta</code>, <code>np.sin(beta - t0)</code>, and <code>np.sqrt(x0**2 + y0**2)</code> outside lineSphericalDist, leaving only the bits that really depend on <code>tt</code> inside lineSphericalDist.</p> <p>Lastly, why does self.fun compute 4 values if only 3 or 2 are used? This is your bottleneck function; make it compute only what's strictly necessary...</p>
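<p>A minimal sketch of that refactoring, using the names from the question: a closure precomputes everything that does not depend on <code>tt</code>, so the solver only evaluates the cheap part at each iteration.</p>

<pre><code>import numpy as np
from scipy import optimize

def make_residual(fun, t0, a0):
    # everything below depends only on t0 and a0, so compute it once
    x0, y0, vnn = fun(t0)[0:3]
    beta = np.pi + t0 + a0 - vnn
    sin_beta_t0 = np.sin(beta - t0)
    r0 = np.sqrt(x0**2 + y0**2)

    def residual(tt):
        x2, y2 = fun(tt)[0:2]
        return r0 * sin_beta_t0 / np.sin(beta - tt) - np.sqrt(x2**2 + y2**2)

    return residual

# res = optimize.root(make_residual(self.fun, t0, a0), [tguess], method='lm')
</code></pre>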
optimization|numpy|scipy
2
378,075
25,385,374
Pandas aggregate -- how to retain all columns
<p>Example dataframe:</p> <pre><code>rand = np.random.RandomState(1) df = pd.DataFrame({'A': ['group1', 'group2', 'group3'] * 2, 'B': rand.rand(6), 'C': rand.rand(6), 'D': rand.rand(6)}) </code></pre> <p>print df</p> <pre><code> A B C D 0 group1 0.417022 0.186260 0.204452 1 group2 0.720324 0.345561 0.878117 2 group3 0.000114 0.396767 0.027388 3 group1 0.302333 0.538817 0.670468 4 group2 0.146756 0.419195 0.417305 5 group3 0.092339 0.685220 0.558690 </code></pre> <p>Groupby column A</p> <pre><code>group = df.groupby('A') </code></pre> <p>Use agg to return max value for each group</p> <pre><code>max1 = group['B'].agg({'max' : np.max}) print max1 max A group1 0.417022 group2 0.720324 group3 0.092339 </code></pre> <p>But I would like to retain (or get back) the appropriate data in the other columns, C and D. This would be the remaining data for the row which contained the max value. So, the return should be:</p> <pre><code> A B C D group1 0.417022 0.186260 0.204452 group2 0.720324 0.345561 0.878117 group3 0.092339 0.685220 0.558690 </code></pre> <p>Can anybody show how to do this? Any help appreciated.</p>
<p>Two stages: first find indices, then lookup all the rows.</p> <pre><code>idx = df.groupby('A').apply(lambda x: x['B'].argmax()) idx Out[362]: A group1 0 group2 1 group3 5 df.loc[idx] Out[364]: A B C D 0 group1 0.417022 0.186260 0.204452 1 group2 0.720324 0.345561 0.878117 5 group3 0.092339 0.685220 0.558690 </code></pre>
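<p>In more recent pandas versions, <code>idxmax</code> is the explicit spelling of the same idea:</p>

<pre><code>idx = df.groupby('A')['B'].idxmax()
df.loc[idx]
</code></pre>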
python|pandas|aggregate
6
378,076
25,275,057
Pandas sort_index gives strange result after applying function to grouped DataFrame
<p>Basic setup:</p> <p>I have a <code>DataFrame</code> with a <code>MultiIndex</code> on both the rows and the columns. The second level of the column index has <code>float</code>s for values.</p> <p>I want to perform a <code>groupby</code> operation (grouping by the first level of the row index). The operation will add a few columns (also with <code>float</code>s as their labels) to each group and then return the group.</p> <p>When I get the result back from my <code>groupby</code> operation, I can't seem to get the columns to sort properly. </p> <p>Working example. First, set things up:</p> <pre><code>import pandas as pd import numpy as np np.random.seed(0) col_level_1 = ['red', 'blue'] col_level_2 = [1., 2., 3., 4.] row_level_1 = ['a', 'b'] row_level_2 = ['one', 'two'] col_idx = pd.MultiIndex.from_product([col_level_1, col_level_2], names=['color', 'numeral']) row_idx = pd.MultiIndex.from_product([row_level_1, row_level_2], names=['letter', 'number']) df = pd.DataFrame(np.random.randn(len(row_idx), len(col_idx)), index=row_idx, columns=col_idx) </code></pre> <p>Gives this <code>DataFrame</code> in <code>df</code>: <img src="https://i.stack.imgur.com/PrXWE.png" alt="enter image description here"></p> <p>Then define my group operation and apply it:</p> <pre><code>def mygrpfun(group): for f in [1.5, 2.5, 3.5]: group[('red', f)] = 'hello' group[('blue', f)] = 'world' return group result = df.groupby(level='letter').apply(mygrpfun).sort_index(axis=1) </code></pre> <p>Displaying <code>result</code> gives: <img src="https://i.stack.imgur.com/qcllp.png" alt="enter image description here"></p> <p>What's going on here? Why doesn't the 2nd level of the column index display in ascending order?</p> <p>EDIT: In terms of context:</p> <pre><code>pd.__version__ Out[28]: '0.14.0' In [29]: np.__version__ Out[29]: '1.8.1' </code></pre> <p>Any help much appreciated.</p>
<p>The returned result looks as expected: you added columns, and no particular ordering was guaranteed for those new columns.</p> <p>You could just reimpose ordering:</p> <pre><code>result = result[sorted(result.columns)] </code></pre>
python|pandas
1
378,077
25,361,828
Plotting Pandas Time Data
<p>My data is a pandas dataframe called 'T':</p> <pre><code> A B C Date 2001-11-13 30.1 2 3 2007-02-23 12.0 1 7 </code></pre> <p>The result of T.index is </p> <pre><code>&lt;class 'pandas.tseries.index.DatetimeIndex'&gt; [2001-11-13, 2007-02-23] Length: 2, Freq: None, Timezone: None </code></pre> <p>So I know that the index is a time series. But when I plot it using <code>ax.plot(T)</code> I don't get a times series on the x axis!</p> <p>I will only ever have two data points so how do I get the dates in my graph (i.e. two dates at either end of the x axis)?</p> <p><img src="https://i.stack.imgur.com/mVWcG.png" alt="Not a time series graph"></p>
<p>Use pandas' built-in plotting method:</p> <pre><code>In[211]: df2 Out[211]: A B C 1970-01-01 30.1 2 3 1980-01-01 12.0 1 7 In[212]: df2.plot() Out[212]: &lt;matplotlib.axes.AxesSubplot at 0x105224e0&gt; In[213]: plt.show() </code></pre> <p>You can access the axes object using</p> <pre><code>ax = df2.plot() </code></pre> <p><img src="https://i.stack.imgur.com/oEZtW.png" alt="enter image description here"></p>
python|matplotlib|pandas
3
378,078
25,193,522
Renumbering a 1D mesh in Python
<p>First of all, I couldn't find the answer in other questions. </p> <p>I have a numpy array of integers called ELEM; the array has three columns that indicate element number, node 1 and node 2. This is a one-dimensional mesh. What I need to do is renumber the nodes: I have the old and new node numbering tables, so the algorithm should replace every value in the ELEM array according to these tables.</p> <p>The code should look like this</p> <pre><code>old_num = np.array([2, 1, 3, 6, 5, 9, 8, 4, 7]) new_num = np.arange(1,10) ELEM = np.array([ [1, 1, 3], [2, 3, 6], [3, 1, 3], [4, 5, 6]]) </code></pre> <p>From here, for every element in the second and third columns of the ELEM array, I should replace each integer with the corresponding integer from the new_num table.</p>
<p>If you're doing a lot of these, it makes sense to encode the renumbering in a dictionary for fast lookup.</p> <pre><code>lookup_table = dict( zip( old_num, new_num ) ) # create your translation dict vect_lookup = np.vectorize( lookup_table.get ) # create a function to do the translation ELEM[:, 1:] = vect_lookup( ELEM[:, 1:] ) # Reassign the elements you want to change </code></pre> <p>np.vectorize is just there to make things nicer syntactically. All it does is allow us to map over the values of the array with our lookup_table.get function</p>
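<p>Putting it together on the arrays from the question (a quick check of the expected result):</p>

<pre><code>import numpy as np

old_num = np.array([2, 1, 3, 6, 5, 9, 8, 4, 7])
new_num = np.arange(1, 10)
ELEM = np.array([[1, 1, 3],
                 [2, 3, 6],
                 [3, 1, 3],
                 [4, 5, 6]])

lookup_table = dict(zip(old_num, new_num))
vect_lookup = np.vectorize(lookup_table.get)
ELEM[:, 1:] = vect_lookup(ELEM[:, 1:])
print(ELEM)
# [[1 2 3]
#  [2 3 4]
#  [3 2 3]
#  [4 5 4]]
</code></pre>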
python|arrays|numpy
1
378,079
25,344,895
Ordered colored plot after clustering using python
<p>I have a 1D array called data=[5 1 100 102 3 4 999 1001 5 1 2 150 180 175 898 1012]. I am using python scipy.cluster.vq to find clusters within it. There are 3 clusters in the data. After clustering when I'm trying to plot the data, there is no order in it. </p> <p>It would be great if it's possible to plot the data in the same order as it is given and color different sections belong to different groups or clusters. </p> <h2>Here is my code:</h2> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy.cluster.vq import kmeans, vq data = np.loadtxt('rawdata.csv', delimiter=' ') #----------------------kmeans------------------ centroid,_ = kmeans(data, 3) idx,_ = vq(data, centroid) x=np.linspace(0,(len(data)-1),len(data)) fig = plt.figure(1) plt.plot(x,data) plot1=plt.plot(data[idx==0],'ob') plot2=plt.plot(data[idx==1],'or') plot3=plt.plot(data[idx==2],'og') plt.show() </code></pre> <hr> <p>Here is my plot <a href="http://s29.postimg.org/9gf7noe93/figure_1.png" rel="nofollow">http://s29.postimg.org/9gf7noe93/figure_1.png</a> (The blue graph in the background is in-order, after clustering,it messed up) </p> <p>Thanks!</p> <p><strong>Update :</strong></p> <p>I wrote the following code to implement in-order colored plot after clustering,</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy.cluster.vq import kmeans, vq data = np.loadtxt('rawdata.csv', delimiter=' ') #----------------------kmeans----------------------------- centroid,_ = kmeans(data, 3) # three clusters idx,_ = vq(data, centroid) x=np.linspace(0,(len(data)-1),len(data)) fig = plt.figure(1) plt.plot(x,data) for i in range(0,(len(data)-1)): if data[i] in data[idx==0]: plt.plot(x[i],(data[i]),'ob' ) if data[i] in data[idx==1]: plt.plot(x[i],(data[i]),'or' ) if data[i] in data[idx==2]: plt.plot(x[i],(data[i]),'og' ) plt.show() </code></pre> <p>The problem with the above code is it's too slow. And my array size is over 3million. So this code will take forever to finish it's job for me. I really appreciate if someone can provide <strong>vectorized version</strong> of the above mentioned code. Thanks!</p>
<p>You can plot the clustered data points based on their distances from the cluster center, and annotate each data point with its index, to see how the points scatter based on their clustering properties: </p> <pre><code>import numpy as np import matplotlib.pyplot as plt from scipy.cluster.vq import kmeans, vq from scipy.spatial.distance import cdist data=np.array([ 5, 1, 100, 102, 3, 4, 999, 1001, 5, 1, 2, 150, 180, 175, 898, 1012]) centroid,_ = kmeans(data, 3) idx,_ = vq(data, centroid) X=data.reshape(len(data),1) Y=centroid.reshape(len(centroid),1) D_k = cdist( X, Y, metric='euclidean' ) colors = ['red', 'green', 'blue'] pId=range(0,(len(data)-1)) cIdx = [np.argmin(D) for D in D_k] dist = [np.min(D) for D in D_k] r=np.vstack((data,dist)).T fig = plt.figure() ax = fig.add_subplot(1,1,1) mark=['^','o','&gt;'] for i, ((x,y), kls) in enumerate(zip(r, cIdx)): ax.plot(r[i,0],r[i,1],color=colors[kls],marker=mark[kls]) ax.annotate(str(i), xy=(x,y), xytext=(0.5,0.5), textcoords='offset points', size=8,color=colors[kls]) ax.set_yscale('log') ax.set_xscale('log') ax.set_xlabel('Data') ax.set_ylabel('Distance') plt.show() </code></pre> <p><strong>Update</strong>:</p> <p>If you are keen on using a vectorized procedure, you can do it as follows for randomly generated data:</p> <pre><code>data=np.random.uniform(1,1000,3000) @np.vectorize def plotting(i): ax.plot(i,data[i],color=colors[cIdx[i]],marker=mark[cIdx[i]]) mark=['&gt;','o','^'] fig = plt.figure() ax = fig.add_subplot(1,1,1) plotting(range(len(data))) ax.set_xlabel('index') ax.set_ylabel('Data') plt.show() </code></pre> <p><img src="https://i.stack.imgur.com/tqTlt.png" alt="enter image description here"></p>
python|python-2.7|numpy|scipy|data-mining
1
378,080
25,368,750
Using Numpy Array to Create Unique Array
<p>Can you create a numpy array with all unique values in it?</p> <pre><code>myArray = numpy.random.random_integers(0,100,2500) myArray.shape = (50,50) </code></pre> <p>So here I have a given random 50x50 numpy array, but I could have non-unique values. Is there a way to ensure every value is unique?</p> <p>Thank you</p> <h2>Update:</h2> <p>I have created a basic function to generate a list and populate a unique integer. </p> <pre><code> dist_x = math.sqrt(math.pow((extent.XMax - extent.XMin), 2)) dist_y = math.sqrt(math.pow((extent.YMax - extent.YMin),2)) col_x = int(dist_x / 100) col_y = int(dist_y / 100) if col_x % 100 &gt; 0: col_x += 1 if col_y % 100 &gt; 0: col_y += 1 print col_x, col_y, 249*169 count = 1 a = [] for y in xrange(1, col_y + 1): row = [] for x in xrange(1, col_x + 1): row.append(count) count += 1 a.append(row) del row numpyArray = numpy.array(a) </code></pre> <p>Is there a better way to do this?</p> <p>Thanks</p>
<p>The most convenient way to get a unique random sample from a set is probably <a href="http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.random.choice.html" rel="noreferrer"><code>np.random.choice</code></a> with <code>replace=False</code>.</p> <p>For example:</p> <pre><code>import numpy as np # create a (5, 5) array containing unique integers drawn from [0, 100] uarray = np.random.choice(np.arange(0, 101), replace=False, size=(5, 5)) # check that each item occurs only once print((np.bincount(uarray.ravel()) == 1).all()) # True </code></pre> <p>If <code>replace=False</code> the set you're sampling from must, of course, be at least as big as the number of samples you're trying to draw:</p> <pre><code>np.random.choice(np.arange(0, 101), replace=False, size=(50, 50)) # ValueError: Cannot take a larger sample than population when 'replace=False' </code></pre> <hr> <p>If all you're looking for is a random permutation of the integers between 1 and the number of elements in your array, you could also use <a href="http://docs.scipy.org/doc/numpy-dev/reference/generated/numpy.random.permutation.html" rel="noreferrer"><code>np.random.permutation</code></a> like this:</p> <pre><code>nrow, ncol = 5, 5 uarray = (np.random.permutation(nrow * ncol) + 1).reshape(nrow, ncol) </code></pre>
python|arrays|numpy|unique
11
378,081
25,146,277
Pandas - Delete Rows with only NaN values
<p>I have a DataFrame containing many NaN values. <strong>I want to delete rows that contain too many NaN values; specifically: 7 or more.</strong></p> <p>I tried using the <em>dropna</em> function several ways but it seems clear that it greedily deletes columns or rows that contain <em>any</em> NaN values. </p> <p>This question (<a href="https://stackoverflow.com/questions/11881165/slice-pandas-dataframe-by-row">Slice Pandas DataFrame by Row</a>), shows me that if I can just compile a list of the rows that have too many NaN values, I can delete them all with a simple</p> <pre><code>df.drop(rows) </code></pre> <p>I know I can count non-null values using the <em>count</em> function which I could them subtract from the total and get the NaN count that way (Is there a direct way to count NaN values in a row?). But even so, I am not sure how to write a loop that goes through a DataFrame row-by-row. </p> <p>Here's some pseudo-code that I think is on the right track:</p> <pre><code>### LOOP FOR ADDRESSING EACH row: m = total - row.count() if (m &gt; 7): df.drop(row) </code></pre> <p>I am still new to Pandas so I'm very open to other ways of solving this problem; whether they're simpler or more complex.</p>
<p>Basically the way to do this is to determine the number of columns, set the minimum number of non-NaN values a row must keep, and drop the rows that don't meet this criterion. Note that <code>thresh</code> counts non-NaN values per row, so to drop rows with 7 or more NaNs you keep rows with at least <code>len(df.columns) - 6</code> non-NaN values (<code>len(df)</code> would give the number of rows, not columns):</p> <pre><code>df.dropna(thresh=len(df.columns) - 6) </code></pre> <p>See the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="nofollow noreferrer">docs</a></p>
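<p>A small worked example of the threshold logic (hypothetical data, just to illustrate the cutoff):</p>

<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(3, 10))
df.iloc[0, 0:7] = np.nan   # 7 NaNs, only 3 non-NaN values left: dropped
df.iloc[1, 0:6] = np.nan   # 6 NaNs, 4 non-NaN values left: kept

# keep rows with at least ncols - 6 non-NaN values,
# i.e. drop rows containing 7 or more NaNs
cleaned = df.dropna(thresh=len(df.columns) - 6)
</code></pre>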
python|pandas|dataframe|rows|nan
14
378,082
25,171,420
Speed up Pandas filtering
<p>I have a 37456153 rows x 3 columns Pandas dataframe consisting of the following columns: <code>[Timestamp, Span, Elevation]</code>. Each <code>Timestamp</code> value has approximately 62000 rows of <code>Span</code> and <code>Elevation</code> data, which looks like (when filtered for <code>Timestamp = 17210</code>, as an example):</p> <pre><code> Timestamp Span Elevation 94614 17210 -0.019766 36.571 94615 17210 -0.019656 36.453 94616 17210 -0.019447 36.506 94617 17210 -0.018810 36.507 94618 17210 -0.017883 36.502 ... ... ... ... 157188 17210 91.004000 33.493 157189 17210 91.005000 33.501 157190 17210 91.010000 33.497 157191 17210 91.012000 33.500 157192 17210 91.013000 33.503 </code></pre> <p>As seen above, the <code>Span</code> data is not equal spaced, which I actually need it to be. So I came up with the following code to convert it into an equal spaced format. I know the <code>start</code> and <code>end</code> locations I'd like to analyze. Then I defined a <code>delta</code> parameter as my increment. I created a numpy array called <code>mesh</code>, which holds the equal spaced <code>Span</code> data I would like to end up with. Finally, I decided the iterate over the dataframe for a given <code>TimeStamp</code> (17300 in the code) to test how fast it would work. The for loop in the code calculates average <code>Elevation</code> values for a +/- <code>0.5delta</code> range at each increment.</p> <p>My problem is: it takes 603 ms to filter through dataframe and calculate the average <code>Elevation</code> at a <strong>single</strong> iteration. For the given parameters, I have to go through 9101 iterations, resulting in approximately 1.5 hrs of computing time for this loop to end. Moreover, this is for a single <code>Timestamp</code> value, and I have 600 of them (900 hrs to do all?!).</p> <p>Is there any way that I can speed up this loop? Thanks a lot for any input!</p> <pre><code># MESH GENERATION start = 0 end = 91 delta = 0.01 mesh = np.linspace(start,end, num=(end/delta + 1)) elevation_list =[] #Loop below will take forever to run, any idea about how to optimize it?! for current_loc in mesh: average_elevation = np.average(df[(df.Timestamp == 17300) &amp; (df.Span &gt; current_loc - delta/2) &amp; (df.Span &lt; current_loc + delta/2)].Span) elevation_list.append(average_elevation) </code></pre>
<p>You can vectorize the whole thing using <code>np.searchsorted</code>. I am not much of a pandas user, but something like this should work, and runs reasonably fast on my system. Using chrisb's dummy data:</p> <pre><code>In [8]: %%timeit ...: mesh = np.linspace(start, end, num=(end/delta + 1)) ...: midpoints = (mesh[:-1] + mesh[1:]) / 2 ...: idx = np.searchsorted(midpoints, df.Span) ...: averages = np.bincount(idx, weights=df.Elevation, minlength=len(mesh)) ...: averages /= np.bincount(idx, minlength=len(mesh)) ...: 100 loops, best of 3: 5.62 ms per loop </code></pre> <p>That is about 3500x faster than your code:</p> <pre><code>In [12]: %%timeit ...: mesh = np.linspace(start, end, num=(end/delta + 1)) ...: elevation_list =[] ...: for current_loc in mesh: ...: average_elevation = np.average(df[(df.Span &gt; current_loc - delta/2) &amp; ...: (df.Span &lt; current_loc + delta/2)].Span) ...: elevation_list.append(average_elevation) ...: 1 loops, best of 3: 19.1 s per loop </code></pre> <hr> <p><strong>EDIT</strong> So how does this work? In <code>midpoints</code> we store a sorted list of the boundaries between buckets. We then do a binary search with <code>searchsorted</code> on this sorted list, and get <code>idx</code>, which basically tells us into which bucket each data point belongs. All that is left is to group all the values in each bucket. That's what <code>bincount</code> is for. Given an array of ints, it counts how many times each number comes up. Given an array of ints, and a corresponding array of <code>weights</code>, instead of adding 1 to the tally for the bucket, it adds the corresponding value in <code>weights</code>. With two calls to <code>bincount</code> you get the sum and the number of items per bucket: divide them and you get the bucket average.</p>
python|performance|optimization|numpy|pandas
6
378,083
30,592,868
Updating rows of the same index
<p>Given a DataFrame <code>df</code>:</p> <pre><code> yellowCard secondYellow redCard match_id player_id 1431183600x96x30 76921 X NaN NaN 76921 NaN X X 1431192600x162x32 71174 X NaN NaN </code></pre> <p>I would like to update duplicated rows (of the same index) resulting in:</p> <pre><code> yellowCard secondYellow redCard match_id player_id 1431183600x96x30 76921 X X X 1431192600x162x32 71174 X NaN NaN </code></pre> <p>Does <code>pandas</code> provide a library method to achieve it? </p>
<p>It looks like your df is multi-indexed on <code>match_id</code> and <code>player_id</code> so I would perform a <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html#pandas.DataFrame.groupby" rel="nofollow"><code>groupby</code></a> on the <code>match_id</code> and fill the <code>NaN</code> values twice, ffill and bfill:</p> <pre><code>In [184]: df.groupby(level=0).fillna(method='ffill').groupby(level=0).fillna(method='bfill') Out[184]: yellowCard secondYellow redCard match_id player_id 1431183600x96x30 76921 1 2 2 76921 1 2 2 1431192600x162x32 71174 3 NaN NaN </code></pre> <p>I used the following code to build the above, rather than use <code>x</code> values:</p> <pre><code>In [185]: t="""match_id player_id yellowCard secondYellow redCard 1431183600x96x30 76921 1 NaN NaN 1431183600x96x30 76921 NaN 2 2 1431192600x162x32 71174 3 NaN NaN""" df=pd.read_csv(io.StringIO(t), sep='\s+', index_col=[0,1]) df Out[185]: yellowCard secondYellow redCard match_id player_id 1431183600x96x30 76921 1 NaN NaN 76921 NaN 2 2 1431192600x162x32 71174 3 NaN NaN </code></pre> <p><strong>EDIT</strong> there is a <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.ffill.html#pandas.core.groupby.DataFrameGroupBy.ffill" rel="nofollow"><code>ffill</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.bfill.html#pandas.core.groupby.DataFrameGroupBy.bfill" rel="nofollow"><code>bfill</code></a> method for groupby objects so this simplifies to:</p> <pre><code>In [189]: df.groupby(level=0).ffill().groupby(level=0).bfill() Out[189]: yellowCard secondYellow redCard match_id player_id 1431183600x96x30 76921 1 2 2 76921 1 2 2 1431192600x162x32 71174 3 NaN NaN </code></pre> <p>You can then call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html#pandas.DataFrame.drop_duplicates" rel="nofollow"><code>drop_duplicates</code></a>:</p> <pre><code>In [190]: df.groupby(level=0).ffill().groupby(level=0).bfill().drop_duplicates() Out[190]: yellowCard secondYellow redCard match_id player_id 1431183600x96x30 76921 1 2 2 1431192600x162x32 71174 3 NaN NaN </code></pre>
python|pandas
2
378,084
30,354,425
Applying DataFrame with logic expression to another DataFrame (need Pandas wizards)
<p>I have a DataFrame <code>conditions</code> with a set of conditions that are used like an expression:</p> <pre><code> indicator logic value Discount 'ADR Premium' '&lt;' -0.5 Premium 'ADR Premium' '&gt;' 0.5 </code></pre> <p>Now I have a dataframe <code>indicators</code> with a set of values, in this case there is just one indicator <code>ADR Premium</code>:</p> <pre><code> ADR Premium 2015-04-20 15:30:00-04:00 -0.102270 2015-04-21 15:30:00-04:00 0.235315 2015-04-22 15:30:00-04:00 -0.323919 2015-04-23 15:30:00-04:00 0.546363 2015-04-24 15:30:00-04:00 -0.714143 2015-04-27 15:30:00-04:00 -0.153165 2015-04-28 15:30:00-04:00 0.878494 2015-04-29 15:30:00-04:00 0.993079 2015-04-30 15:30:00-04:00 -0.824815 2015-05-04 15:30:00-04:00 1.644784 2015-05-05 15:30:00-04:00 -0.254343 2015-05-06 15:30:00-04:00 -0.268981 2015-05-07 15:30:00-04:00 0.591411 2015-05-08 15:30:00-04:00 -0.588047 2015-05-11 15:30:00-04:00 -0.458143 2015-05-12 15:30:00-04:00 0.063643 2015-05-13 15:30:00-04:00 -0.051659 2015-05-14 15:30:00-04:00 1.474963 2015-05-15 15:30:00-04:00 -0.172429 2015-05-18 15:30:00-04:00 0.035558 </code></pre> <p>What I am hoping to achieve, is to apply the logic of <code>conditions</code> to <code>indicators</code>in order to produce a new dataframe called <code>signals</code>. To give you an idea of what I'm looking for, see below. This looks only at the first condition in <code>conditions</code> and the fifth value in <code>indicator</code> (because it evaluates to True):</p> <pre><code>signals_list = [] conditions_index = 0 indicators_index = 4 if eval( str(indicators[conditions.ix[conditions_index].indicator][indicators_index]) + conditions.ix[conditions_index].log ic + str(conditions.ix[conditions_index].value) ): signal = {'Time': indicators.ix[indicators_index].name, 'Signal': conditions.ix[conditions_index].name} signals_list.append(signal) signals = pd.DataFrame(signals_list) signals.index = signals.Time signals.drop('Time', 1) </code></pre> <p>This leaves me with <code>signals</code>:</p> <pre><code> Signal Time 2015-04-24 15:30:00-04:00 'Discount' </code></pre> <p>I would like to do this for all conditions across applicable indicators in the most efficient, Pandas-ic method. Looking forward to ideas.</p>
<p>It's hard to tell from the question, but I think you just want to classify each entry in <code>indicators</code> according to some set of conditions for that column. First I would initialise signals:</p> <pre><code>signals = pd.Series(index=indicators.index) </code></pre> <p>This will be a series of NaNs. For a given indicator name (ADR Premium in this case), logic, value and classification, you can do something like</p> <pre><code>bool_vector = indicators.eval(' '.join([indicator, logic, str(value)])) signals[bool_vector] = classification </code></pre> <p>In the example given, this would translate to</p> <pre><code>bool_vector = indicators.eval('ADR Premium &lt; -0.5') signals[bool_vector] = 'discount' </code></pre> <p>This corresponds to the first row in <code>conditions</code> and sets all rows which satisfy the condition to 'discount'. You can do the same for each row. It's hard to tell from the example, but if you have multiple columns you may want to have signals as a DataFrame. You can loop through <code>conditions</code> using </p> <pre><code>for classification, (indicator, logic, value) in conditions.iterrows(): </code></pre> <p>For a fully vectorized solution you'll need to give a fuller example. </p>
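<p>Putting the pieces together, the whole loop might look like the sketch below. Note <code>str(value)</code> guards against non-string values, and a column name containing a space (like 'ADR Premium') may need renaming before <code>eval</code> will accept it:</p>

<pre><code>signals = pd.Series(index=indicators.index)

for classification, (indicator, logic, value) in conditions.iterrows():
    expr = ' '.join([indicator, logic, str(value)])
    bool_vector = indicators.eval(expr)
    signals[bool_vector] = classification
</code></pre>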
python|pandas|expression|signal-processing
1
378,085
30,568,701
distinct contiguous blocks in pandas dataframe
<p>I have a pandas dataframe looking like this:</p> <pre><code> x1=[np.nan, 'a','a','a', np.nan,np.nan,'b','b','c',np.nan,'b','b', np.nan] ty1 = pd.DataFrame({'name':x1}) </code></pre> <p>Do you know how I can get a list of tuples containing the start and end indices of distinct contiguous blocks? For example for the dataframe above, </p> <pre><code>[(1,3), (6,7), (8,8), (10,11)]. </code></pre>
<p>You can use <code>shift</code> and <code>cumsum</code> to create 'id's for each contiguous block: </p> <pre><code>In [5]: blocks = (ty1 != ty1.shift()).cumsum() In [6]: blocks Out[6]: name 0 1 1 2 2 2 3 2 4 3 5 4 6 5 7 5 8 6 9 7 10 8 11 8 12 9 </code></pre> <p>You are only interested in those blocks that are not NaN, so filter for that:</p> <pre><code>In [7]: blocks = blocks[ty1['name'].notnull()] In [8]: blocks Out[8]: name 1 2 2 2 3 2 6 5 7 5 8 6 10 8 11 8 </code></pre> <p>And then, we can get the first and last index for each 'id':</p> <pre><code>In [10]: blocks.groupby('name').apply(lambda x: (x.index[0], x.index[-1])) Out[10]: name 2 (1, 3) 5 (6, 7) 6 (8, 8) 8 (10, 11) dtype: object </code></pre> <p>Whether this last step is necessary will depend on what you want to do with it (working with tuples as elements in dataframes is not really recommended). Maybe having the 'id's is already enough.</p>
python|pandas
11
378,086
30,643,436
copying a 24x24 image into a 28x28 array of zeros
<p>Hi, I want to copy a random 24x24 portion of a 28x28 matrix (<code>image = image.reshape(28, 28)</code>) and then insert the resulting 24x24 matrix into a 28x28 matrix of zeros.</p> <pre><code> getx = random.randint(0,4) gety = random.randint(0,4) # get a 24 x 24 tile from a random location in img blank_image = np.zeros((28,28), np.uint8) tile= image[gety:gety+24,getx:getx+24] cv2.imshow("the 24x24 Image",tile) </code></pre> <p>tile is a 24x24 ROI and works as planned</p> <pre><code> blank_image[gety:gety+24,getx:getx+24] = tile </code></pre> <p>blank_image in my example does not get updated with the values from tile</p> <p>Thanks for the help in advance</p>
<p>If you are getting an error, it might be because your np array dimensions are different. If your image is an RGB image, then your blank image should be defined as:</p> <pre><code>blank_image = np.zeros((28,28,3), np.uint8) </code></pre>
python|opencv|numpy
0
378,087
30,371,646
join two dataframe together
<pre><code> total_purchase_amt 2013-07-01 22533121 2014-08-29 214114844 2014-08-30 183547267 2014-08-31 205369438 total_purchase_amt 2014-08-31 2.016808e+08 2014-09-01 2.481354e+08 2014-09-02 2.626838e+08 2014-09-03 2.497276e+08 </code></pre> <p>having two dataframe, I want to join them together,the result is like this: the last row in first dataframe should be replaced by the first row of second dataframe.</p> <pre><code> total_purchase_amt 2013-07-01 22533121 2014-08-29 214114844 2014-08-30 183547267 2014-08-31 2.016808e+08 2014-09-01 2.481354e+08 2014-09-02 2.626838e+08 2014-09-03 2.497276e+08 </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html#pandas.DataFrame.combine_first" rel="nofollow"><code>combine_first</code></a> with the other df combining with the first df, this will perserve the values in your other df and add the missing values from the first df:</p> <pre><code>In [49]: df1.combine_first(df) Out[49]: total_purchase_amt 2013-07-01 22533121 2014-08-29 214114844 2014-08-30 183547267 2014-08-31 201680800 2014-09-01 248135400 2014-09-02 262683800 2014-09-03 249727600 </code></pre>
python|pandas
0
378,088
30,427,374
Pandas MAX formula across different grouped rows
<p>I have dataframe that looks like this:</p> <pre><code>Auction_id bid_price min_bid rank 123 5 3 1 123 4 3 2 124 3 2 1 124 1 2 2 </code></pre> <p>I'd like to create another column that returns MAX(rank 1 min_bid, rank 2 bid_price). I don't care what appears for the rank 2 column values. I'm hoping for the result to look something like this:</p> <pre><code>Auction_id bid_price min_bid rank custom_column 123 5 3 1 4 123 4 3 2 NaN/Don't care 124 3 2 1 2 124 1 2 2 NaN/Don't care </code></pre> <p>Should I be iterating through grouped auction_ids? Can someone provide the topics one would need to be familiar with to tackle this type of problem?</p>
<p>Here's an approach that does some reshaping with pivot():</p> <pre><code> Auction_id bid_price min_bid rank 0 123 5 3 1 1 123 4 3 2 2 124 3 2 1 3 124 1 2 2 </code></pre> <p>Then reshape your frame (df):</p> <pre><code>pv = df.pivot("Auction_id","rank") pv bid_price min_bid rank 1 2 1 2 Auction_id 123 5 4 3 3 124 3 1 2 2 </code></pre> <p>Adding a column to pv that contains the max. I'm using iloc to get a slice of the pv dataframe.</p> <pre><code> pv["custom_column"] = pv.iloc[:,[1,2]].max(axis=1) pv bid_price min_bid custom_column rank 1 2 1 2 Auction_id 123 5 4 3 3 4 124 3 1 2 2 2 </code></pre> <p>and then add the max to the original frame (df) by mapping to our pv frame:</p> <pre><code>df.loc[df["rank"] == 1,"custom_column"] = df["Auction_id"].map(pv["custom_column"]) df Auction_id bid_price min_bid rank custom_column 0 123 5 3 1 4 1 123 4 3 2 NaN 2 124 3 2 1 2 3 124 1 2 2 NaN </code></pre> <p>All the steps combined:</p> <pre><code>pv = df.pivot("Auction_id","rank") pv["custom_column"] = pv.iloc[:,[1,2]].max(axis=1) df.loc[df["rank"] == 1,"custom_column"] = df["Auction_id"].map(pv["custom_column"]) df Auction_id bid_price min_bid rank custom_column 0 123 5 3 1 4 1 123 4 3 2 NaN 2 124 3 2 1 2 3 124 1 2 2 NaN </code></pre>
pandas
2
378,089
30,535,516
How to check for real equality (of numpy arrays) in python?
<p>I have some function in python returning a numpy.array:</p> <pre><code>import numpy matrix = numpy.array([[0.,0.,0.,0.,0.,0.,1.,1.,1.,0.], [0.,0.,0.,1.,1.,0.,0.,1.,0.,0.]]) def some_function(): rows1, cols1 = numpy.nonzero(matrix) cols2 = numpy.array([6,7,8,3,4,7]) rows2 = numpy.array([0,0,0,1,1,1]) print numpy.array_equal(rows1, rows2) # returns True print numpy.array_equal(cols1, cols2) # returns True return (rows1, cols1) # or (rows2, cols2) </code></pre> <p>It should normally extract the indices of nonzero entries of a matrix (rows1, cols1). However, I can also extract the indices manually (rows2, cols2). The problem is that the program returns different results depending on whether the function returns <code>(rows1, cols1)</code> or <code>(rows2, cols2)</code>, although the arrays should be equal.</p> <p>I should probably add that this code is used in the context of <em>pyipopt</em>, which calls a C++ software package <em>IPOPT</em>. The problem then occurs within this package. </p> <p>Can it be that the arrays are not "completely" equal? I would say that they somehow must be, because I am not modifying anything but returning one instead of the other.</p> <p>Any idea on how to debug this problem?</p>
<p>You could check <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow">where</a> the arrays are not equal:</p> <pre><code>print(np.where(rows1 != rows2)) </code></pre> <p>But what you are doing is unclear; first, there is no <code>nonzeros</code> function in numpy, only a <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.nonzero.html" rel="nofollow">nonzero</a>, which returns a tuple of coordinates. Are you only using the one corresponding to the rows?</p>
python|c++|numpy|ipopt
1
378,090
30,515,204
time-series analysis in python
<p>I'm new to Python and I'm trying to analyze a time series. I have a Series indexed with dates, and I would like to split my time series to see e.g. how many $t$ appeared between 16 and 17, how many between 17 and 18, and so on.</p> <p>How can I do that for minutes, days, weeks, months? Basically I would like to zoom in at different time lengths.</p> <p>The ideal solution would be something like the .groupby() method, that would allow to easily see how my time series behaves in different periods.</p> <pre><code> t 2015-05-27 16:37:08 1 2015-05-27 16:37:12 1 2015-05-27 16:37:48 1 2015-05-27 16:37:49 1 2015-05-27 16:38:00 1 </code></pre>
<p>Check out <a href="http://pandas.pydata.org/" rel="nofollow">Pandas</a>. Pandas provides data structures and data analysis tools for time series and will provide exactly the kind of functionality you are looking for. Look into this page of documentation which focuses on time series: <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/timeseries.html</a></p>
python|pandas|time-series
4
378,091
30,605,261
Referencing numpy array locations within if statements
<p>I have the following section of Python:</p> <pre><code>for j in range(0,T): for x in xrange(len(index)): for y in xrange(x+1,len(index)): if index(y) == index(x): continue </code></pre> <p>For which I have been attempting to translate successfully from a MATLAB equivalent. In matlab, this operation is simple as follows:</p> <pre><code> for iter = 1:T for i = 1:length(index) for j = i+1:length(index) if index(j) == index(i) continue; end </code></pre> <p>However, when I attempt to execute my code I receive a "numpy.ndarray object is not callable" error. Why does this arise, and how would I go about writing this in a proper python manner to successfully execute?</p>
<p>Looks like <code>index</code> is an array of some sort, but when you do <code>index(y)</code> and <code>index(x)</code>, Python thinks you're trying to call a function <code>index()</code> using <code>y</code> and <code>x</code> as parameters, respectively.</p> <p>If you're trying to simply access the elements, use <code>index[x]</code> and <code>index[y]</code>.</p>
python|arrays|matlab|numpy|indexing
2
378,092
26,790,476
playing videos from file in anaconda
<p>This is my first time asking so this is a rather basic question. I'm trying to play saved videos using Anaconda on Windows, but for some reason nothing is playing. The intent is to play the current file, and then progress up to visual tracking in real time. Here is my code:</p> <pre><code>import numpy as np import cv2 cap = cv2.VideoCapture('Animal3.h264') while(cap.isOpened()): print 'opened' ret, frame = cap.read() gray = cv2.cvtColor(frame, cv2.Color_BGR2GRAY) cv2.imshow('frame', gray) if cv2.waitKey(25) &amp; 0xFF == ord('q'): print 'break' break cap.release() cv2.destroyAllWindows() print 'end' </code></pre> <p>And when I run it nothing happens. It just tells me what file I'm running out of. What am I doing wrong?</p>
<p>The main problem is that <strong>y0u 4r3 n0t c0d1ng s4f3ly</strong>: you should always test the return of functions or the validity of the parameters returned by these calls.</p> <p><strong>These are the most common reasons why <code>VideoCapture()</code> fails</strong>:</p> <ul> <li>It was unable to find the file (have you tried passing the filename with the full path?);</li> <li>It couldn't open it (do you have the proper permission/access rights?);</li> <li><a href="https://stackoverflow.com/q/8693975/176769">It cannot handle that specific video container/codec</a>.</li> </ul> <p>Anyway, here's what you should be doing to make sure the problem is in <code>VideoCapture()</code>. Note that a <code>VideoCapture</code> object is always truthy, so you must test <code>isOpened()</code> rather than the object itself:</p> <pre><code>import sys cap = cv2.VideoCapture('Animal3.h264') if not cap.isOpened(): print "!!! Failed VideoCapture: unable to open file!" sys.exit(1) </code></pre> <p>I also suggest updating the code to:</p> <pre><code>key = cv2.waitKey(25) if key == ord('q'): print 'Key q was pressed!' break </code></pre>
python|opencv|numpy|video-capture|anaconda
1
378,093
26,685,600
Pandas subtract 2 rows from same dataframe
<p>How do I subtract one row from another in the following dataframe (df):</p> <pre><code>RECL_LCC 1 2 3 RECL_LCC 35.107655 36.015210 28.877135 RECL_PI 36.961519 43.499506 19.538975 </code></pre> <p>I want to do something like:</p> <pre><code>df['Difference'] = df['RECL_LCC']-df['RECL_PI'] </code></pre> <p>but that gives: </p> <pre><code>*** KeyError: 'RECL_LCC' </code></pre>
<p>You can select rows by index value using <code>df.loc</code>:</p> <pre><code>In [98]: df.loc['Diff'] = df.loc['RECL_LCC'] - df.loc['RECL_PI'] In [99]: df Out[99]: RECL_LCC 1 2 3 RECL_LCC 35.107655 36.015210 28.877135 RECL_PI 36.961519 43.499506 19.538975 Diff -1.853864 -7.484296 9.338160 </code></pre>
python|pandas|subtraction
9
378,094
26,639,569
Slicing multiple dimensions in a ndarray
<p>How to slice <code>ndarray</code> by multiple dimensions in one line? Check the last line in the following snippet. This seems so basic yet it gives a surprise... but why?</p> <pre><code>import numpy as np # create 4 x 3 array x = np.random.rand(4, 3) # create row and column filters rows = np.array([True, False, True, False]) cols = np.array([True, False, True]) print(x[rows, :].shape == (2, 3)) # True ... OK print(x[:, cols].shape == (4, 2)) # True ... OK print(x[rows][:, cols].shape == (2, 2)) # True ... OK print(x[rows, cols].shape == (2, 2)) # False ... WHY??? </code></pre>
<p>Since <code>rows</code> and <code>cols</code> are boolean arrays, when you do:</p> <pre><code>x[rows, cols] </code></pre> <p>it is like:</p> <pre><code>x[np.where(rows)[0], np.where(cols)[0]] </code></pre> <p>which is:</p> <pre><code>x[[0, 2], [0, 2]] </code></pre> <p>taking the values at positions <code>(0, 0)</code> and <code>(2, 2)</code>. On the other hand, doing:</p> <pre><code>x[rows][:, cols] </code></pre> <p>works like:</p> <pre><code>x[[0, 2]][:, [0, 2]] </code></pre> <p>returning a shape <code>(2, 2)</code> in this example.</p>
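<p>If the goal is the <code>(2, 2)</code> cross-selection in a single step, <code>np.ix_</code> builds the right open-mesh index arrays from the boolean masks:</p>

<pre><code>import numpy as np

x = np.random.rand(4, 3)
rows = np.array([True, False, True, False])
cols = np.array([True, False, True])

print(x[np.ix_(rows, cols)].shape == (2, 2))  # True
</code></pre>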
python|arrays|numpy|slice
4
378,095
26,555,774
Efficient way to clean a csv?
<p>I am parsing and modifying large files (about a gig per month) which contain a record of every interaction. These files are sent to me by our client, so I am stuck with what they contain. I am using pandas to clean them up a bit, add some information, etc.</p> <p>I keep running into issues where, out of 1 million+ rows, 1 to 10 values in the datetime column are not a date. A value meant for another column is in the date column due to some issue with comma separation (this is from the client's query, not mine) so it might say the word 'Closed' or something.</p> <p>How do I drop these rows? I can see the ones with errors when I use df.sort('Datetime'). I just want a way to drop these quickly. </p> <p>Here are my ideas:</p> <ol> <li><p>There is a column called 'TransID' which ALWAYS begins with the letter 'H' (and it always is 9 digits) UNLESS there is an error when another column value has shifted into this column</p></li> <li><p>The date column should always have a value (notnull)</p></li> </ol> <p>Can someone help think of a way to solve this problem? (I think this date thing is the key issue because I have formulas which subtract StartDate from EndDate.. if one of those contains a word then it messes up the entire process. Maybe I can create some error exception or drop error rows?)</p>
<p>Use the H column to filter out the error rows using a boolean index and the <a href="http://pandas.pydata.org/pandas-docs/stable/basics.html#vectorized-string-methods" rel="nofollow">vectorized string methods.</a></p> <pre><code>good_rows_mask = df.TransID.str[0] == 'H' df = df[good_rows_mask] </code></pre>
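<p>To also enforce the second idea from the question (the date column must hold a real date), recent pandas versions can coerce unparseable entries to <code>NaT</code> so they can be filtered out:</p>

<pre><code># entries like 'Closed' become NaT, then get dropped
df['Datetime'] = pd.to_datetime(df['Datetime'], errors='coerce')
df = df[df['Datetime'].notnull()]
</code></pre>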
pandas
0
378,096
26,689,614
Recursively calling functions within functions in Python (trying to replicate MATLAB behaviour)
<p>In MATLAB this function (by Hao Zhang) calls itself</p> <pre><code>function r=rotmat2expmap(R) % Software provided by Hao Zhang % http://www.cs.berkeley.edu/~nhz/software/rotations r=quat2expmap(rotmat2quat(R)); </code></pre> <p>as an argument to the function </p> <pre><code>function [r]=quat2expmap(q) % Software provided by Hao Zhang % http://www.cs.berkeley.edu/~nhz/software/rotations % % function [r]=quat2expmap(q) % convert quaternion q into exponential map r % % denote the axis of rotation by unit vector r0, the angle by theta % q is of the form (cos(theta/2), r0*sin(theta/2)) % r is of the form r0*theta if (abs(norm(q)-1)&gt;1E-3) error('quat2expmap: input quaternion is not norm 1'); end sinhalftheta=norm(q(2:4)); coshalftheta=q(1); r0=q(2:4)/(norm(q(2:4))+eps); theta=2*atan2(sinhalftheta,coshalftheta); theta=mod(theta+2*pi,2*pi); %if (theta&gt;pi), theta=2*pi-theta; r0=-r0; end if (theta&gt;pi) theta=2*pi-theta; r0=-r0; end r=r0*theta; </code></pre> <p>Now if we pass a rotation matrix to the first function something along the lines of</p> <pre><code>R = 0.9940 0.0773 -0.0773 -0.0713 0.9945 0.0769 0.0828 -0.0709 0.9940 </code></pre> <p>It recursively calculates the correct result (in this case) simply:</p> <pre><code>r = -0.0741 -0.0803 -0.0745 </code></pre> <p>Alas this is in MATLAB and it works fine (the original author knew what he was doing). I have not quite managed to get the same functionality to work in Python (I am effectively translating the code), I am going wrong somewhere:</p> <pre><code>def rotmat2expmap(R): """ Convert rotation matrix to exponential mapping. Based on G.W. Taylor's MATLAB equivalent. """ r = quat2expmap(rotmat2expmap(R)) return r def quat2expmap(q): """Converts quaternion q (rotation matrix) into exponential map r. Provided by Hao Zhang and G.W. Taylor. Denote the axis of rotation by unit vector r0, the angle by theta q is of the form (cos(theta/2), r0*sin(theta/2)) r is of the form r0*theta """ if abs(np.linalg.norm(q,2)-1) &gt; 1e-3: print('quat2expmap: input quaternion is not norm 1') # Implement to simulate MATLAB like linear array structure temp = q.T.flatten() sinhalftheta = np.linalg.norm(temp[1:4],2) coshalftheta = temp[0] r0 = temp[1:4]/(np.linalg.norm(temp[1:4],2) + np.spacing(1)) theta = 2*math.atan2(sinhalftheta,coshalftheta) theta = fmod(theta+2*pi,2*pi) # Remainder after division (modulo operation) if theta &gt; pi: theta = 2*pi-theta r0 = -r0 r = r0*theta return r </code></pre> <p>If I try to run this (with the same example R) then the number of loops maxes out and the whole thing crashes. </p> <p>Anyone got any fancy ideas?</p>
<p>It seems that you have misread the original function definition. It doesn't recursively call itself; it calls <code>rotmat2quat</code> (not <code>rotmat2expmap</code>). You presumably need to implement <code>rotmat2quat</code> (see e.g., <a href="https://github.com/gwtaylor/imCRBM/blob/master/Motion/rotmat2quat.m" rel="nofollow">https://github.com/gwtaylor/imCRBM/blob/master/Motion/rotmat2quat.m</a> ).</p> <p>You are correct in how you are calling a function recursively in Python. However, <em>in any language</em>, calling a function recursively without first applying some reduction (to make the input smaller) will result in infinite recursion. This is what is happening in your Python code and why it is hitting the recursion depth limit. It is also what would happen in the MATLAB code if it were written as you originally suspected. That is, you have essentially f(R) -> f(R) -> f(R) -> f(R) -> f(R) -> ... . The input never changes before the recursive call, so each call makes another recursive call and it never ends. Hopefully this is clear.</p>
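<p>For reference, here is a minimal sketch of the corrected call chain, assuming <code>rotmat2quat</code> is ported separately (e.g. from the MATLAB source linked above):</p> <pre><code>def rotmat2expmap(R):
    """Convert a rotation matrix to an exponential map.

    Note the call to rotmat2quat, not rotmat2expmap: the rotation
    matrix is reduced to a quaternion first, so there is no
    self-recursion.
    """
    return quat2expmap(rotmat2quat(R))
</code></pre>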
python|matlab|numpy|recursion
2
378,097
26,856,793
Pandas read excel with Chinese filename
<p>I am trying to load a file that has Chinese characters in its name as a pandas dataframe.</p> <p>I've tried:</p> <pre><code>df=pd.read_excel("url/某物2008.xls")
</code></pre> <p>and</p> <pre><code>import sys
df=pd.read_excel("url/某物2008.xls", encoding=sys.getfilesystemencoding())
</code></pre> <p>But the response is something like: "no such file or directory "url/\xa1\xa92008.xls"</p> <p>I've also tried changing the names of the files using os.rename, but the filenames aren't even read properly (asking Python to just print the filenames yields only question marks or squares).</p>
<pre><code>df=pd.read_excel(u"url/某物2008.xls", encoding=sys.getfilesystemencoding())
</code></pre> <p>may work, but on Python 2 you may also need to declare the source file's encoding at the top of your script, as sketched below.</p>
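<p>For example (a sketch assuming Python 2, where this problem usually arises; the first line is the source-encoding declaration, and the <code>u</code> prefix makes the path a unicode literal):</p> <pre><code># -*- coding: utf-8 -*-
import sys
import pandas as pd

df = pd.read_excel(u"url/某物2008.xls", encoding=sys.getfilesystemencoding())
</code></pre>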
python|unicode|pandas|character-encoding
3
378,098
26,874,857
Pandas TimeSeries With duration of event
<p>I've been googling for this for a while and haven't found a proper solution. I have a time series with a couple of million rows that has a rather odd structure:</p> <pre><code>VisitorID  Time              VisitDuration
1          01.01.2014 00:01  80 seconds
2          01.01.2014 00:03  37 seconds
</code></pre> <p>I want to know how many people are on the website during a certain moment. For this I would have to transform this data into something much bigger:</p> <pre><code>Time              VisitorsPresent
01.01.2014 00:01  1
01.01.2014 00:02  1
01.01.2014 00:03  2
...
</code></pre> <p>But doing something like this seems highly inefficient. My code would be:</p> <pre><code>dates = {}
for index, row in data.iterrows():
    for i in range(0, int(row["duration"])):
        t = index + pd.DateOffset(seconds=i)
        dates[t] = dates.get(t, 0) + 1  # default 0, not 1, so the first visitor counts once
</code></pre> <p>Then I could turn this into a series and resample it:</p> <pre><code>result = pd.Series(dates)
result.resample("5min", how="mean").plot()
</code></pre> <p>Could you point me in the right direction?</p> <p>EDIT---</p> <p>Hi HYRY, here is a head():</p> <pre>   uid        join_time_UTC  duration
0    1  2014-03-07 16:58:01      2953
1    2  2014-03-07 17:13:14      1954
2    3  2014-03-07 17:47:38       223
</pre>
<p>Create some dummy data first:</p> <pre><code>import numpy as np import pandas as pd start = pd.Timestamp("2014-11-01") end = pd.Timestamp("2014-11-02") N = 100000 t = np.random.randint(start.value, end.value, N) t -= t % 1000000000 start = pd.to_datetime(np.array(t, dtype="datetime64[ns]")) duration = pd.to_timedelta(np.random.randint(100, 1000, N), unit="s") df = pd.DataFrame({"start":start, "duration":duration}) df["end"] = df.start + df.duration print df.head(5) </code></pre> <p>Here is what the data looks like:</p> <pre><code> duration start end 0 00:13:45 2014-11-01 08:10:45 2014-11-01 08:24:30 1 00:04:07 2014-11-01 23:15:49 2014-11-01 23:19:56 2 00:09:26 2014-11-01 14:04:10 2014-11-01 14:13:36 3 00:10:20 2014-11-01 19:40:45 2014-11-01 19:51:05 4 00:02:48 2014-11-01 02:25:47 2014-11-01 02:28:35 </code></pre> <p>Then do the value count:</p> <pre><code>enter_count = df.start.value_counts() exit_count = df.end.value_counts() df2 = pd.concat([enter_count, exit_count], axis=1, keys=["enter", "exit"]) df2.fillna(0, inplace=True) print df2.head(5) </code></pre> <p>here is the counts:</p> <pre><code> enter exit 2014-11-01 00:00:00 1 0 2014-11-01 00:00:02 2 0 2014-11-01 00:00:04 4 0 2014-11-01 00:00:06 2 0 2014-11-01 00:00:07 2 0 </code></pre> <p>finally resample and plot:</p> <pre><code>df2["diff"] = df2["enter"] - df2["exit"] counts = df2["diff"].resample("5min", how="sum").fillna(0).cumsum() counts.plot() </code></pre> <p>the output is:</p> <p><img src="https://i.stack.imgur.com/Fj6pn.png" alt="enter image description here"></p>
python|pandas|time-series|sampling
9
378,099
26,678,132
Python numpy subtraction no negative numbers (4-6 gives 254)
<p>I wish to subtract 2 <strong>gray</strong> human faces from each other to see the difference, but I run into the problem that subtracting e.g. [4] - [6] gives [254] instead of [-2] (or difference: [2]).</p> <pre><code>print(type(face))   #&lt;type 'numpy.ndarray'&gt;
print(face.shape)   #(270, 270)
print(type(nface))  #&lt;type 'numpy.ndarray'&gt;
print(nface.shape)  #(270, 270)

# This is what I want to do:
sface = face - nface
# or
sface = np.subtract(face, nface)
</code></pre> <p>Neither gives negative numbers; instead, values that would fall below 0 wrap around from 255.</p> <p>Output example of sface:</p> <pre><code>[[  8 255   8 ...,   0 252   3]
 [ 24  18  14 ..., 255 254 254]
 [ 12  12  12 ...,   0   2 254]
 ...,
 [245 245 251 ..., 160 163 176]
 [249 249 252 ..., 157 163 172]
 [253 251 247 ..., 155 159 173]]
</code></pre> <p><strong>My question:</strong> How do I get sface to be a numpy.ndarray (270, 270) with either negative values after subtracting <strong>or</strong> the difference between each point in face and nface? (So not numpy.setdiff1d, because this returns only 1 dimension instead of 270x270.)</p> <h1>Working</h1> <p>From the answer of @ajcr I did the following (abs() for showing the subtracted face):</p> <pre><code>face_16 = face.astype(np.int16)
nface_16 = nface.astype(np.int16)
sface_16 = np.subtract(face_16, nface_16)
sface_16 = abs(sface_16)
sface = sface_16.astype(np.uint8)  # uint8, not int8: absolute differences can reach 255
</code></pre>
<p>It sounds like the <code>dtype</code> of the array is <code>uint8</code>. All the numbers will be interpreted as integers in the range 0-255. Here, -2 is equal to 256 - 2, hence the subtraction results in 254.</p> <p>You need to recast the arrays to a <code>dtype</code> which supports negative integers, e.g. <code>int16</code> like this ...</p> <pre><code>face = face.astype(np.int16) </code></pre> <p>...and then subtract.</p>
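<p>Putting it together, a sketch assuming both arrays are <code>uint8</code> images as in the question (after taking the absolute value, every difference fits back into 0-255, so the result can safely be cast to <code>uint8</code> for display):</p> <pre><code>import numpy as np

face16 = face.astype(np.int16)    # widen so negative results survive
nface16 = nface.astype(np.int16)
diff = np.abs(face16 - nface16)   # true per-pixel difference, range 0-255
sface = diff.astype(np.uint8)     # safe cast back for display
</code></pre>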
python|numpy|grayscale|subtraction|array-difference
19