Columns: Unnamed: 0 (int64, 0 to 378k), id (int64, 49.9k to 73.8M), title (string, 15 to 150 chars), question (string, 37 to 64.2k chars), answer (string, 37 to 44.1k chars), tags (string, 5 to 106 chars), score (int64, -10 to 5.87k)
6,200
62,604,708
Adding Multiple Pandas Columns to Sparse CSR Matrix
<p>So my question is based on this <a href="https://stackoverflow.com/questions/41927781/adding-pandas-columns-to-a-sparse-matrix">question</a>.</p> <p>I have Twitter data where I extracted unigram features and a number of orthographic features such as exclamation mark, question mark, uppercase, and lowercase. I want to stack the orthographic features onto the transformed unigram features. Here is my code:</p> <pre><code>X_train, X_test, y_train, y_test = train_test_split(tweet_df[['tweets', 'exclamation', 'question', 'uppercase', 'lowercase']], tweet_df['class'], stratify=tweet_df['class'], test_size = 0.2, random_state=0) count_vect = CountVectorizer(ngram_range=(1,1)) X_train_gram = count_vect.fit_transform(X_train['tweets']) tfidf = TfidfTransformer() X_train_gram = tfidf.fit_transform(X_train_gram) X_train_gram = hstack((X_train_gram,np.array(X_train['exclamation'])[:,None])) </code></pre> <p>This worked; however, I can't find a way to incorporate the rest of the columns (question, uppercase, lowercase) into the stack in one line of code. Here are the failed tries:</p> <pre><code>X_train_gram = hstack((X_train_gram,np.array(list(X_train['exclamation'], X_train['question'], X_train['uppercase'], X_train['lowercase']))[:,None])) #list expected at most 1 arguments, got 4 X_train_gram = hstack((X_train_gram,np.array(X_train[['exclamation', 'question', 'uppercase', 'lowercase']])[:,None])) #expected dimension &lt;= 2 array or matrix X_train_gram = hstack((X_train_gram,np.array(X_train[['exclamation', 'question', 'uppercase', 'lowercase']].values)[:,None])) #expected dimension &lt;= 2 array or matrix </code></pre> <p>Any help appreciated.</p>
<p>You have problems with list syntax and <code>sparse.coo_matrix</code> creation.</p> <pre><code>np.array(X_train['exclamation'])[:,None] </code></pre> <p>A <code>Series</code> converted to an array is 1d; adding the <code>None</code> index makes it (n,1).</p> <pre><code>np.array(list(X_train['exclamation'], X_train['question'], X_train['uppercase'], X_train['lowercase']))[:,None] </code></pre> <p>That's not valid list syntax:</p> <pre><code>In [327]: list(1,2,3,4) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-327-e06d60ac583e&gt; in &lt;module&gt; ----&gt; 1 list(1,2,3,4) TypeError: list() takes at most 1 argument (4 given) </code></pre> <p>Next:</p> <pre><code>np.array(X_train[['exclamation', 'question', 'uppercase', 'lowercase']])[:,None] </code></pre> <p>With multiple columns we get a DataFrame, which makes a 2d array; add the <code>None</code>, and we get a 3d array:</p> <pre><code>In [328]: np.ones((2,3))[:,None].shape Out[328]: (2, 1, 3) </code></pre> <p>A <code>coo</code> matrix can't be made from a 3d array. Adding <code>values</code> doesn't change things; <code>np.array(dataframe)</code> is the same as <code>dataframe.values</code>.</p> <pre><code>np.array(X_train[['exclamation', 'question', 'uppercase', 'lowercase']].values)[:,None] </code></pre> <p>This has a chance of working:</p> <pre><code>hstack((X_train_gram, np.array(X_train[['exclamation', 'question', 'uppercase', 'lowercase']].values))) </code></pre> <p>though I'd suggest writing</p> <pre><code>arr = np.array(X_train[['exclamation', 'question', 'uppercase', 'lowercase']].values) M = sparse.coo_matrix(arr) sparse.hstack((X_train_gram, M)) </code></pre> <p>It's more readable, and should be easier to debug if there are problems.</p>
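<p>For completeness, a minimal runnable sketch of that last suggestion, using made-up tweet data (the column names are taken from the question): the four numeric columns are converted to a sparse matrix once and stacked in a single call.</p> <pre><code>import pandas as pd
from scipy import sparse
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

X_train = pd.DataFrame({'tweets': ['good day!', 'why me?', 'ok'],
                        'exclamation': [1, 0, 0], 'question': [0, 1, 0],
                        'uppercase': [0, 0, 1], 'lowercase': [2, 2, 1]})

counts = CountVectorizer(ngram_range=(1, 1)).fit_transform(X_train['tweets'])
X_train_gram = TfidfTransformer().fit_transform(counts)

# all four extra columns at once, as one sparse matrix
extra = sparse.csr_matrix(X_train[['exclamation', 'question', 'uppercase', 'lowercase']].values)
X_train_gram = sparse.hstack((X_train_gram, extra)).tocsr()
print(X_train_gram.shape)  # (3, number_of_unigrams + 4)
</code></pre>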
python|pandas|numpy|scipy|sparse-matrix
1
6,201
62,706,210
Create an array of matrices from 1D arrays in python
<p>Could you please help me create an array of matrices whose elements are taken from 1D arrays in Python?</p> <p>For example, here is what I am trying to do:</p> <pre><code>import numpy as np ele_1 = np.linspace(0,1,num= 50) ele_2 = np.linspace(1,2,num= 50) ele_3 = np.linspace(2,3,num= 50) ele_4 = np.linspace(3,4,num= 50) Mat_array = np.array([[ele_1,ele_2],[ele_2,ele_4]]) #This is giving me a (2, 2, 50) array </code></pre> <p><strong>Expected output:</strong></p> <pre><code>array([matrix_1,matrix_2,.....] </code></pre> <p>Here <code>matrix_i</code> is <code>array([[ele_1[i],ele_2[i]],[ele_3[i],ele_4[i]]])</code>. <code>Mat_array</code> must be a <code>(50, 2, 2)</code> array.</p> <p>I want to avoid loops, and the method should also be applicable to any n x n matrix.</p> <p>Thank you</p>
<pre><code>In [153]: ele_1 = np.linspace(0,1,num= 50) ...: ele_2 = np.linspace(1,2,num= 50) ...: ele_3 = np.linspace(2,3,num= 50) ...: ele_4 = np.linspace(3,4,num= 50) In [154]: Mat_array = np.array([[ele_1,ele_2],[ele_3,ele_4]]) # correction? In [155]: Mat_array.shape Out[155]: (2, 2, 50) </code></pre> <p><code>transpose</code> can put the 50 first:</p> <pre><code>In [156]: Mat_array.transpose(2,0,1).shape Out[156]: (50, 2, 2) In [157]: Mat_array = np.array([[ele_1,ele_2],[ele_3,ele_4]]).transpose(2,0,1) In [158]: timeit Mat_array = np.array([[ele_1,ele_2],[ele_3,ele_4]]).transpose(2,0,1) 7.56 µs ± 51 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) </code></pre> <p>An alternative with <code>np.stack</code> on a new last axis:</p> <pre><code>In [159]: res = np.stack([ele_1,ele_2,ele_3,ele_4],axis=1).reshape(-1,2,2) In [160]: np.allclose(res, Mat_array) Out[160]: True In [161]: timeit res = np.stack([ele_1,ele_2,ele_3,ele_4],axis=1).reshape(-1,2,2) 16.9 µs ± 69.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) </code></pre> <p>but it's slower.</p> <p><a href="https://stackoverflow.com/a/62706470/901925">S_Zizzle's answer</a> is slower, especially when returning an array. It iterates <code>n</code> times:</p> <pre><code>In [162]: final_array = np.array([[[ele_1[i],ele_2[i]],[ele_3[i],ele_4[i]]] for i in ran ...: ge(n)]) In [163]: np.allclose(final_array, Mat_array) Out[163]: True In [164]: timeit final_array = np.array([[[ele_1[i],ele_2[i]],[ele_3[i],ele_4[i]]] for i ...: in range(n)]) 188 µs ± 373 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) </code></pre>
python|numpy
3
6,202
62,649,647
Under what situation will np.genfromtxt read in an array of voids
<p>Trying to read a .Data file using np.genfromtxt</p> <pre><code>a = np.genfromtxt(&quot;u.data&quot;, dtype = [int, int, int, int], delimiter = '\t') </code></pre> <p>The output is an array of numpy voids. However, if I do not specify the data type, then the output is a normal array. I wonder what went wrong. I should also mention that if I do not specify the data type, numpy treats all the data automatically as float.</p>
<p>Various ways of loading a simple csv</p> <pre><code>In [148]: txt = &quot;&quot;&quot;1,2,3 ...: 4,5,6&quot;&quot;&quot; </code></pre> <p>default float:</p> <pre><code>In [149]: np.genfromtxt(txt.splitlines(), delimiter=',') Out[149]: array([[1., 2., 3.], [4., 5., 6.]]) </code></pre> <p>multiple int dtype - produces a structured array (read the docs):</p> <pre><code>In [150]: np.genfromtxt(txt.splitlines(), delimiter=',',dtype=[int,int,int]) Out[150]: array([(1, 2, 3), (4, 5, 6)], dtype=[('f0', '&lt;i8'), ('f1', '&lt;i8'), ('f2', '&lt;i8')]) </code></pre> <p>Structured with a mix of dtypes (more common case):</p> <pre><code>In [152]: np.genfromtxt(txt.splitlines(), delimiter=',',dtype=[int,float,'U10']) Out[152]: array([(1, 2., '3'), (4, 5., '6')], dtype=[('f0', '&lt;i8'), ('f1', '&lt;f8'), ('f2', '&lt;U10')]) </code></pre> <p>All integer - 2d array like the float case:</p> <pre><code>In [153]: np.genfromtxt(txt.splitlines(), delimiter=',',dtype=int) Out[153]: array([[1, 2, 3], [4, 5, 6]]) </code></pre> <p><code>genfromtxt</code> docs are a bit long, but worth reading in full!</p>
numpy|numpy-ndarray
1
6,203
62,663,877
how to count observations based on timestamp condition
<p>I have a Pandas dataframe in the following format:</p> <pre><code>id name timestamp 001 movie1 2012-05-05 19:52:04 001 movie5 2012-05-05 13:42:52 001 movie3 2012-05-04 18:29:11 002 movie8 2012-05-05 13:18:31 002 movie7 2012-05-04 09:13:28 003 movie7 2012-05-05 19:23:45 003 movie1 2012-05-04 17:00:48 004 movie11 2012-05-05 12:55:34 005 movie8 2012-05-04 15:48:25 005 movie7 2012-05-04 11:14:53 </code></pre> <p>with a few thousand rows.</p> <p>The data shows the movies watched on a video streaming platform. Id is the user id, name is the name of the movie and timestamp is the timestamp at which the movie started.</p> <p>How can I track if two movies are played consecutively (where consecutively means that the second one is played less than 2 hours from the first one?</p>
<p>You can try this, sort by user id and date, group by the user id, and find diff in hours:</p> <pre><code>df['timestamp'] = pd.to_datetime(df['timestamp']) df.sort_values(by=['id', 'timestamp'], inplace=True) df['time_diff'] = df.groupby(by=['id'])['timestamp'].diff().astype('timedelta64[h]') df['&lt;2'] = df['time_diff'] &lt;= 2 print(df) id name timestamp time_diff &lt;2 2 1 movie3 2009-05-04 18:29:11+00:00 NaN False 1 1 movie5 2009-05-05 13:42:52+00:00 19.0 False 0 1 movie1 2009-05-05 19:52:04+00:00 6.0 False 4 2 movie7 2009-05-04 09:13:28+00:00 NaN False 3 2 movie8 2009-05-05 13:18:31+00:00 28.0 False 6 3 movie1 2009-05-04 17:00:48+00:00 NaN False 5 3 movie7 2009-05-05 19:23:45+00:00 26.0 False 7 4 movie11 2009-05-05 12:55:34+00:00 NaN False 9 5 movie7 2009-05-04 11:14:53+00:00 NaN False 8 5 movie8 2009-05-04 15:48:25+00:00 4.0 False </code></pre>
python|pandas|dataframe|timestamp|pandas-groupby
2
6,204
62,570,557
Extract keywords from a dataframe column to another column
<p>I have a dataframe in the following format : <a href="https://www.kaggle.com/hsankesara/flickr-image-dataset" rel="nofollow noreferrer">link to the csv file</a></p> <pre><code> image_name caption_number caption 0 1000092795.jpg 0 Two young guys with shaggy hair look at their... 1 1000092795.jpg 1 Two young , White males are outside near many... 2 1000092795.jpg 2 Two men in green shirts are standing in a yard . 3 1000092795.jpg 3 A man in a blue shirt standing in a garden . 4 1000092795.jpg 4 Two friends enjoy time spent together . </code></pre> <p>I want to add another column <code>keywords</code> that extracts keywords using NLP keyword extraction methods.</p> <p>Here is what I tried:</p> <pre><code>df = pd.read_csv('results.csv', delimiter='|') df.columns = ['image_name', 'caption_number', 'caption'] stop_words = stopwords.words('english') def get_keywords(row): some_text = row['caption'] lowered = some_text.lower() tokens = nltk.tokenize.word_tokenize(some_text) keywords = [keyword for keyword in tokens if keyword.isalpha() and not keyword in stop_words] keywords_string = ','.join(keywords) return keywords_string df['Keywords'] = df['caption'].apply(get_keywords, axis=1) </code></pre> <p>The above returns an error: <code>get_keywords() got an unexpected keyword argument 'axis'</code></p>
<p>The reason is that the caption column had NaN values, so they need to be dropped before applying the function. Also, <code>axis</code> is only an argument of <code>DataFrame.apply</code>, not <code>Series.apply</code>, so apply the function to the whole frame:</p> <pre><code>#replace all digits in the strings with nothing df['caption'] = df['caption'].str.replace('\d+', '') #drop all the NaN values df = df.dropna() #if you need the whole row to be passed inside the function df['Keywords'] = df.apply(lambda row: get_keywords(row), axis=1) </code></pre>
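<p>Alternatively, since the function only needs the caption text, a small sketch (reusing the question's imports and <code>stop_words</code>) that rewrites it to take the string directly, so <code>Series.apply</code> works without any <code>axis</code> argument:</p> <pre><code>df = df.dropna(subset=['caption'])

def get_keywords(caption):
    tokens = nltk.tokenize.word_tokenize(caption.lower())
    keywords = [t for t in tokens if t.isalpha() and t not in stop_words]
    return ','.join(keywords)

df['Keywords'] = df['caption'].apply(get_keywords)
</code></pre>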
python|pandas|keyword
2
6,205
73,651,847
How can I create a vector of length len(x) numbered one by one from 0 to len(x)-1 in python
<p>I have len(x), and need to create a vector of that length, starting with 0, so the last element should be len(x)-1.</p> <p>How can I do this in Python?</p>
<p>I am not sure of what you mean by &quot;vector&quot;, but I guess it is either of:</p> <p><strong>A Python list:</strong></p> <pre><code>vector = [i for i in range(len(x))] </code></pre> <p>or</p> <pre><code>vector = list(range(len(x))) </code></pre> <p><strong>A numpy array:</strong></p> <pre><code>np.array(range(len(x))) </code></pre>
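<p>For the NumPy case, <code>np.arange</code> does the same thing in one step:</p> <pre><code>import numpy as np
vector = np.arange(len(x))  # array([0, 1, ..., len(x)-1])
</code></pre>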
python|numpy
0
6,206
73,811,862
Getting coordinates of the edges of the box inside the image in python
<p>I am trying to get the coordinates <code>[x, y]</code> of the edges of the box in the image attached.</p> <p>This is the image I am using to get the edge coordinates:</p> <p><img src="https://i.stack.imgur.com/ekSXa.jpg" alt="image" /></p> <p>I am finding difficulty in getting. Anybody, please help me in getting the coordinates.</p> <pre><code>image= Image.open(r&quot;C:/Users/LikithP/OneDrive - Ennoventure Inc/Documents/Projects/Gold_Bar/finding_corner_points/mask_images/enc-1.jpg&quot;) numpy_data=np.array(image) img = numpy_data[:,:,0] _, th = cv2.threshold(img, img.mean(), 255, cv2.THRESH_BINARY_INV) th = cv2.morphologyEx(th, cv2.MORPH_CLOSE, np.ones((3,3))) x1, y1 = 0, 0 y2, x2 = th.shape[:2] while np.all(th[:,x1]==255): x1 = x1+1 while np.all(th[:,x2-1]==255): x2 = x2-1 while np.all(th[y1,:]==255): y1 = y1+1 while np.all(th[y2-1,:]==255): y2 = y2-1 cv2.imwrite(&quot;image.jpg&quot;,image[y1:y2-1,x1:x2-1]) </code></pre> <p>This is giving error as <code>TypeError: 'JpegImageFile' object is not subscriptable</code></p>
<p>Try referencing <code>img</code> instead of <code>image</code>. You are initially trying to index the <code>Image</code> object rather than the actual image data which is in <code>img</code>:</p> <pre class="lang-py prettyprint-override"><code>cv2.imwrite(&quot;image.jpg&quot;, img[y1:y2-1,x1:x2-1]) </code></pre>
python|numpy
0
6,207
73,672,752
What happens with workers when they are done with their task?
<p>I have a task which I aim to parallelize with the help of the <code>joblib</code> library. The function is fairly slow when run sequentially, so I tried using parallelization paradigms to speed up the process.</p> <pre><code>with Parallel(n_jobs = -1,verbose = 100) as parallel: test = parallel(delayed(create_time_series_capacity_v4)(block_info.UnitID[i]) for i in block_info.UnitID.unique()) out_data = pd.concat([out_data,test[test.columns[1]]],axis=1 ) </code></pre> <p><code>block_info.UnitID.unique()</code> has approximately 1000 entries, and the creation of the time series takes longer for some units than for others. This leads me to think that some workers are left waiting while others are performing an intensive task. Is there a way to reuse the available processes rather than leaving them idle? I have pasted below what the code returns while being executed:</p> <pre><code>UNIT05-001 has been written UNIT04-001 has been written UNIT05-003 has been written [Parallel(n_jobs=-1)]: Done 1 tasks | elapsed: 0.2s [Parallel(n_jobs=-1)]: Done 2 out of 10 | elapsed: 0.2s remaining: 1.2s [Parallel(n_jobs=-1)]: Done 3 out of 10 | elapsed: 0.2s remaining: 0.7s UNIT05-004 has been written [Parallel(n_jobs=-1)]: Done 4 out of 10 | elapsed: 0.4s remaining: 0.7s UNIT05-002 has been written [Parallel(n_jobs=-1)]: Done 5 out of 10 | elapsed: 0.6s remaining: 0.6s UNIT02-001 has been written [Parallel(n_jobs=-1)]: Done 6 out of 10 | elapsed: 27.9s remaining: 18.5s UNIT01-001 has been written [Parallel(n_jobs=-1)]: Done 7 out of 10 | elapsed: 50.4s remaining: 21.5s </code></pre>
<p>I am not that familiar with <code>joblib</code> but I quickly perused the documentation. It appears that you are using the default &quot;multiprocessing&quot; backend that is based on Python's <code>multiprocessing.Pool</code> implementation, of which I do know a bit. This class creates a pool of processes as you would expect. Your 1000 tasks are placed on a &quot;task queue&quot; (see <strong>Chunking</strong> below). Each process in the pool is initially idle so they each remove a task from the queue and execute their respective tasks. When a process has completed executing the task, it becomes idle again and so it goes back to retrieve the next task on the queue. This continues until there are no more tasks on the queue at which point all the processes remain idle until another task is added.</p> <p>What we cannot assume <em>in general</em> is that every task takes an equal amount of time to run. But for the sake of argument let's assume that all tasks take an equal amount of time to execute. If you submit 1000 tasks to be handled by 16 processors, then after each process has executed 62 tasks (16 * 62 = 992) there will be only 8 tasks left on the task queue to be executed. In this case 8 processes will remain idle while the other 8 processes execute the final 8 tasks. But unless these tasks are very long running, you would see all 16 processes going idle and remaining that way more or less at the same time. Now let us assume that all tasks take the same amount of time except the very last task submitted, which takes 15 minutes longer to execute. Now you would expect to see 15 processes going idle more or less at the same time with the 16th process taking an extra 15 minutes before it goes idle. But if this extra long-running task were the first task submitted, again you would expect to see all the processes going and staying idle at the same time. Of course, the process that executed the very long-running task will end up processing fewer tasks than the other processes under the assumption that the other tasks take far less time to complete.</p> <p><strong>Chunking</strong></p> <p>The <code>multiprocessing.Pool</code> supports <em>chunking</em>; whether or not <code>joblib</code> uses this capability I cannot determine. But this is the way it works:</p> <p>Since reading and writing to the task queue can be rather expensive, to reduce the number of operations to the task queue the pool can batch the submitted tasks into <em>chunks</em> of a certain size. That is, instead of writing 1000 tasks to the queue one at a time, the pool might write the tasks in chunks of 16 tasks as an example of a possible chunk size. Thus, when a process becomes idle, it gets from the task queue the next chunk containing 16 tasks. The process will not become idle until all 16 tasks have been executed and only then will the process try to get the next chunk of tasks.</p> <p>So, if a chunk size of 16 were being used, there would be 62 chunks of size 16 (16 * 62 = 992 tasks) placed on the queue plus a final chunk of 8 tasks for a total of 63 chunks. After each of the 16 processes has executed 3 chunks (16 * 3 = 48 chunks), there would be 15 chunks left on the queue. So one of the processes would go idle immediately. Of the 15 processes left processing the 15 chunks, remember that one of the chunks only contains 8 tasks rather than 16. This process will go idle before the other 14, each of which will still have 8 more tasks to execute in its final chunk.</p> <p><strong>Conclusion</strong></p> <p>The above examples, based on tasks that all take the same time to run, are not very realistic but should still give you an idea of how things work.</p>
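<p>To make the chunking idea concrete, here is a small sketch using <code>multiprocessing.Pool</code> directly (the task durations are made up); joblib's <code>Parallel</code> exposes a similar knob through its <code>batch_size</code> argument, which defaults to <code>'auto'</code>:</p> <pre><code>import time
from multiprocessing import Pool

def work(i):
    # most tasks are fast, every 100th one is slow
    time.sleep(1.0 if i % 100 == 0 else 0.01)
    return i

if __name__ == '__main__':
    with Pool(processes=4) as pool:
        # chunksize=1: each worker fetches one task at a time (best load balance, more queue traffic)
        # chunksize=16: tasks travel in batches of 16 (cheaper dispatch, workers may idle at the end)
        results = pool.map(work, range(1000), chunksize=16)
</code></pre>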
python|pandas|multithreading|multiprocessing|joblib
1
6,208
71,192,005
Value replacement based on multiple conditions
<p>My pandas dataframe looks like <a href="https://i.stack.imgur.com/Eq8zX.png" rel="nofollow noreferrer">this </a>. For each row I want to replace values in Q2 to &quot;positive&quot; if the term &quot;xxpos&quot; occurs within the &quot;SNIPPET&quot; column and if the value in Q2 == 1. Also I want to replace values in Q2 to &quot;negative&quot; if the term &quot;xxneg&quot; occurs within the &quot;SNIPPET&quot; column and the value in Q2 == 1 etc.</p> <p>I tried a few things, including the following but without success: <code>df['Q2'] = np.where((&quot;xxpos&quot; in df[&quot;SNIPPET&quot;]) &amp; (df['Q2'] == 1) ,&quot;Positive&quot;, df['Q2'])</code></p> <p>What would be the easiest solution to deal with the multiple conditions?</p>
<p>You can try with the following code.</p> <pre class="lang-py prettyprint-override"><code>df.loc[(df['Q2']==1) &amp; (df['SNIPPET'].str.contains('xxpos')), 'Q2'] = 'Positive' df.loc[(df['Q2']==1) &amp; (df['SNIPPET'].str.contains('xxneg')), 'Q2'] = 'Negative' </code></pre>
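<p>If more categories are added later, <code>numpy.select</code> keeps all the conditions in one place. A sketch assuming the same column names as in the question:</p> <pre><code>import numpy as np

conditions = [
    (df['Q2'] == 1) &amp; df['SNIPPET'].str.contains('xxpos'),
    (df['Q2'] == 1) &amp; df['SNIPPET'].str.contains('xxneg'),
]
df['Q2'] = np.select(conditions, ['Positive', 'Negative'], default=df['Q2'])
</code></pre>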
python|pandas|string
0
6,209
71,342,640
Is there a way to delete an entire row and shift the cells up in xlwings?
<p>If I wanted to delete an entire row and shift the cells up is there a way to do that? Below is a snippet of my loop which is iterating through the column and clearing the contents of the cell if it doesn't match my parameters. Is there a way rather than clearing just the cell in column A I could delete the whole row and shift up?</p> <pre><code> for i in range(lastRow): i = i + 1 if sheet.range('A' + str(i)).value != 'DLQ' or 'DLR': xw.Range('A' + str(i)).clear() continue else: continue </code></pre>
<p>Use <a href="https://docs.xlwings.org/en/stable/api.html#xlwings.Range.delete" rel="nofollow noreferrer">delete()</a> and specify the row number(s) you want to delete in range():</p> <pre><code>import xlwings as xw wb = xw.Book(r&quot;test.xlsx&quot;) wb.sheets[0].range(&quot;2:2&quot;).delete() </code></pre> <p>This would delete row number 2.</p>
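<p>Applied to the loop from the question (assuming <code>lastRow</code> and <code>sheet</code> are already set up), a sketch that walks from the bottom up so that deleting a row does not shift the rows still to be checked; note the membership test also needs <code>not in (...)</code> rather than <code>!= 'DLQ' or 'DLR'</code>:</p> <pre><code>for i in range(lastRow, 0, -1):
    if sheet.range('A' + str(i)).value not in ('DLQ', 'DLR'):
        sheet.range(str(i) + ':' + str(i)).delete()
</code></pre>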
python|excel|pandas|numpy|xlwings
0
6,210
71,140,280
I'm trying to merge a small dataframe to another large one, looping through the small dataframes
<p>I am able to print the small dataframe and see it is being generated correctly, I've written it using the code below. My final result however contains just the result of the final merge, as opposed to passing over each one and merging them.</p> <p>MIK_Quantiles is the first larger dataframe, df2_t is the smaller dataframe being generated in the while loop. The dataframes are both produced correctly and the merge works, but I'm left with just the result of the very last merge. I want it to merge the current df2_t with the already merged result (df_merged) of the previous loop. I hope this makes sense!</p> <pre><code>i = 0 while i &lt; df_length - 1: cur_bound = MIK_Quantiles['bound'].iloc[i] cur_percentile = MIK_Quantiles['percentile'].iloc[i] cur_bin_low = MIK_Quantiles['auppm'].iloc[i] cur_bin_high = MIK_Quantiles['auppm'].iloc[i+1] ### Grades/Counts within bin, along with min and max df2 = df_orig['auppm'].loc[(df_orig['bound'] == cur_bound) &amp; (df_orig['auppm'] &gt;= cur_bin_low) &amp; (df_orig['auppm'] &lt; cur_bin_high)].describe() ### Add fields of interest to the output of describe for later merging together df2['bound'] = cur_bound df2['percentile'] = cur_percentile df2['bin_name'] = 'bin name' df2['bin_lower'] = cur_bin_low df2['bin_upper'] = cur_bin_high df2['temp_merger'] = str(int(df2['bound'])) + '_' + str(df2['percentile']) # Write results of describe to a CSV file and transpose columns to rows df2.to_csv('df2.csv') df2_t = pd.read_csv('df2.csv').T df2_t.columns = ['count', 'mean', 'std', 'min', '25%', '50%', '75%', 'max', 'bound', 'percentile', 'bin_name', 'bin_lower', 'bin_upper', 'temp_merger'] # Merge the results of the describe on the selected data with the table of quantile values to produce a final output df_merged = MIK_Quantiles.merge(df2_t, how = 'inner', on = ['temp_merger']) pd.merge(df_merged, df2_t) print(df_merged) i = i + 1 </code></pre>
<p>Your loop does not do anything meaningful, other than increment <code>i</code>.</p> <p>You do a merge of 2 (static) dfs (<code>MIK_Quantiles</code> and <code>df2_t</code>), and you do that <code>df_length</code> number of times. Everytime you do that (first, i-th, and last iteration of the loop), you overwrite the output variable <code>df_merged</code>.</p> <p>To keep in the output whatever has been created in the previous loop iteration, you need to concat all the created <code>df2_t</code>:</p> <ol> <li><code>df2 = pd.concat([df2, df2_t])</code> to 'append' the newly created data <code>df2_t</code> to an output dataframe <code>df2</code> during each iteration of the loop, so in the end all the data will be contained in <code>df2</code></li> </ol> <p>Then, <strong>after</strong> the loop, <code>merge</code> that one onto <code>MIK_Quantiles</code></p> <ol start="2"> <li><code>pd.merge(MIK_Quantiles, df2)</code> (not <code>df2_t</code> (!)) to merge on the previous output</li> </ol> <pre><code>df2 = pd.DataFrame([]) # initialize your output for i in range(0, df_length): df2_t = ... # read your .csv files df2 = pd.concat([df2, df2_t]) df2 = ... # do vector operations on df2 (process all of the df2_t at once) out = pd.merge(MIK_Quantiles, df2) </code></pre>
python|pandas|loops|merge
1
6,211
71,360,345
How to handle a .json file in tabular form in python?
<p>By using this code:</p> <pre><code>import pandas as pd patients_df = pd.read_json('/content/students.json',lines=True) patients_df.head() </code></pre> <p>the data are shown in tabular form look like this: <a href="https://i.stack.imgur.com/1Srhq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1Srhq.png" alt="enter image description here" /></a></p> <p>The main json file looks like this:</p> <pre><code>data = [] for line in open('/content/students.json', 'r'): data.append(json.loads(line)) </code></pre> <p><a href="https://i.stack.imgur.com/ZRXye.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZRXye.png" alt="enter image description here" /></a></p> <p>How can I get the score column of the table in an organized manner like column name <strong>Exam, Quiz, and Homework</strong></p>
<p>Possible solution could be the following:</p> <pre><code># pip install pandas import pandas as pd import json def separate_column(row): for e in row[&quot;scores&quot;]: row[e[&quot;type&quot;]] = e[&quot;score&quot;] return row with open('/content/students.json', 'r') as file: data = [json.loads(line.rstrip()) for line in file] df = pd.json_normalize(data) df = df.apply(separate_column, axis=1) df = df.drop(['scores'], axis=1) print(df) </code></pre> <p><a href="https://i.stack.imgur.com/YmXIB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YmXIB.png" alt="enter image description here" /></a></p>
python|json|python-3.x|pandas
2
6,212
52,102,805
Two dataframes broadcast together in multiple subplots
<p>I want to plot 5 different subplots, one for each year of data. For each year, I want to show the DEMs and REPs for each county. I have written the following code so far:</p> <pre><code>fig = plt.figure(figsize=(13,10)) plt.subplot(3, 2, 1) plt.bar(data=districts_2018[['DEM', 'REP']], x=districts_2018.index, height=(districts_2018[['DEM', 'REP']])) plt.subplot(3, 2, 2) plt.bar(data=districts_2017[['DEM', 'REP']], x=districts_2017.index, height=districts_2017['DEM']) plt.subplot(3, 2, 3) plt.bar(data=districts_2016[['DEM', 'REP']], x=districts_2016.index, height=districts_2016['DEM']) plt.subplot(3, 2, 4) plt.bar(data=districts_2015[['DEM', 'REP']], x=districts_2015.index, height=districts_2015['DEM']) plt.subplot(3, 2, 5) plt.bar(data=districts_2014[['DEM', 'REP']], x=districts_2014.index, height=districts_2014['DEM']) plt.tight_layout(); </code></pre> <p>However, as in the case of the first subplot (3,2,1) I get the error: <code>ValueError: shape mismatch: objects cannot be broadcast to a single shape</code>. This code works if I only set the height to <code>districts_2018['DEM']</code> but then that only shows the DEMs and not the REPs.</p>
<p>You may directly use the pandas wrapper to plot grouped bar plots. Set <code>ax</code> to the respective subplot you want the plot to appear in.</p> <pre><code>fig, axes = plt.subplots(3,2,figsize=(13,10)) for ax, df in zip(axes.flat, [districts_2018, districts_2017, ....]): df[['DEM', 'REP']].plot.bar(ax=ax) </code></pre>
python|pandas|matplotlib|plot
1
6,213
60,567,266
Reshaping Numpy array repeating some elements
<p>I have trained a NN in Keras with LSTM, so I have been using 3D tensors. Now I want to predict with a dataset and I have to insert a 3D tensor in my NN. </p> <p>(In my case I used <code>features = 2</code> and <code>lookback = 2</code>, so input elements in LSTM are <code>(batch_size, lookback, features)</code>)</p> <p>So, imagine this example:</p> <pre><code>a = np.array([[1, 2], [3, 4]]) </code></pre> <p>I need to do <code>a_2 = np.reshape(1, 2, 2)</code> to be able to insert it in the LSTM.</p> <p>But if I have a bigger test dataset, like for instance:</p> <pre><code>b = np.array([[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]) </code></pre> <p>I would need to convert it to a 3D array of this type:</p> <pre><code>b_2 = np.array([[[1, 2], [3, 4]], [[3, 4], [5, 6]], [[5, 6], [7, 8]], [[7, 8], [9, 10]]]) </code></pre> <p>so in this case I have predictions for each <code>lookback</code> with the new point. I guess this could be done with a complicated solution using many <code>for</code> loops nested, but I wonder if there is more pythonic way. Thx.</p>
<p>You are looking for sliding windows and there's <a href="https://scikit-image.org/docs/dev/api/skimage.util.html#skimage.util.view_as_windows" rel="nofollow noreferrer"><code>skimage's view_as_windows</code></a> for that -</p> <pre><code>In [46]: from skimage.util.shape import view_as_windows In [44]: features = 2; lookback = 2 In [45]: view_as_windows(b,(lookback, features))[:,0] Out[45]: array([[[ 1, 2], [ 3, 4]], [[ 3, 4], [ 5, 6]], [[ 5, 6], [ 7, 8]], [[ 7, 8], [ 9, 10]]]) </code></pre>
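<p>On NumPy 1.20 or newer, the same thing is available without scikit-image via <code>sliding_window_view</code>:</p> <pre><code>from numpy.lib.stride_tricks import sliding_window_view

b_2 = sliding_window_view(b, (2, 2))[:, 0]   # shape (4, 2, 2)
</code></pre>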
python|numpy|keras
2
6,214
72,660,393
How to convert a 2D Numpy object array containing same-length lists to a normal 3D Numpy array?
<p>I have a 2D object array of arrays of the same size, and I want to convert this to an ordinary 3D array. The array dimensions are huge, so this should preferably be done in an optimized and in-place way.</p> <p>I found a question about doing this for an 1D array containing arrays of size 1, but the solutions, and anything else I tried, are failing with the same error:</p> <p><code>ValueError: setting an array element with a sequence.</code></p> <p>The code below generates an array similar to mine, and presents two solutions I tried:</p> <pre class="lang-py prettyprint-override"><code>x = np.array([1,2,3,4]) # in my real case, this has about 5000 items arr = np.empty((3,3), dtype=object) # in my real case, this is a 400x400 array for i in range(3): for j in range(3): arr[i,j] = x # Simplest way fails: x_new = arr.astype(int) # solution for the length=1 question fails: x_new = np.stack(arr).astype(None) </code></pre> <p>How to do this successfully? Why do I even get this error?</p> <h4><em>Answer:</em></h4> <p>I figured it out at the end, and posted the solution as an answer to <a href="https://stackoverflow.com/q/30666403/5099168">&quot;How to convert an object array to a normal array in python?&quot;</a>, since it works for arbitrary dimensions including 1D: <a href="https://stackoverflow.com/a/72714279/5099168">https://stackoverflow.com/a/72714279/5099168</a>*</p>
<p>So you have a 2d object dtype array:</p> <pre><code>In [110]: arr Out[110]: array([[array([1, 2, 3, 4]), array([1, 2, 3, 4]), array([1, 2, 3, 4])], [array([1, 2, 3, 4]), array([1, 2, 3, 4]), array([1, 2, 3, 4])], [array([1, 2, 3, 4]), array([1, 2, 3, 4]), array([1, 2, 3, 4])]], dtype=object) In [111]: arr.shape Out[111]: (3, 3) </code></pre> <p>One option it convert it to a list (of lists), and make array from that. Note, you had to use the <code>np.empty</code> construct to get around the normal <code>np.array</code> action that does just this:</p> <pre><code>In [117]: arr.tolist() Out[117]: [[array([1, 2, 3, 4]), array([1, 2, 3, 4]), array([1, 2, 3, 4])], [array([1, 2, 3, 4]), array([1, 2, 3, 4]), array([1, 2, 3, 4])], [array([1, 2, 3, 4]), array([1, 2, 3, 4]), array([1, 2, 3, 4])]] In [118]: np.array(arr.tolist()) Out[118]: array([[[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]], [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]], [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]]) </code></pre> <p><code>np.stack</code> can convert a 1d array:</p> <pre><code>In [119]: np.stack(arr.ravel()) Out[119]: array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]) </code></pre> <p>which can then be reshaped.</p> <p><code>np.stack(arr)</code> because it actually does <code>np.stack(list(arr))</code>. <code>list(arr)</code> is a list of object dtype arrays - with the same shape. <code>np.array(arr[i],int)</code> has the same <code>sequence</code> error. That's why stack only works to convert 1d arrays (or lists), not 2d.</p>
python|arrays|numpy|types|casting
0
6,215
59,875,481
How to find the maximum value of a column with pandas?
<p>I have a table with 40 columns and 1500 rows. I want to find the maximum value among the 30-32nd (3 columns). How can it be done? I want to return the maximum value among these 3 columns and the index of dataframe.</p> <pre><code>print(Max_kVA_df.iloc[30:33].max()) </code></pre> <p><a href="https://i.stack.imgur.com/Wn6e4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wn6e4.png" alt="enter image description here"></a></p>
<p>Hi, you can refer to this example:</p> <pre><code>import pandas as pd df=pd.DataFrame({'col1':[1,2,3,4,5], 'col2':[4,5,6,7,8], 'col3':[2,3,4,5,7] }) print(df) #print(df.iloc[:,0:3].max()) # mention the range of columns you want; in your case change 0:3 to 30:33 (33 is excluded) ser=df.iloc[:,0:3].max() print(ser.max()) </code></pre> <p><strong>Output</strong></p> <pre><code>8 </code></pre>
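<p>Since the question also asks for the index of the maximum, a short sketch (assuming the three columns sit at positions 30-32 of the question's dataframe):</p> <pre><code>sub = Max_kVA_df.iloc[:, 30:33]   # columns 30, 31, 32
print(sub.max().max())            # overall maximum of the three columns
print(sub.stack().idxmax())       # (row index, column name) where it occurs
</code></pre>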
python|excel|pandas
1
6,216
59,522,090
Resample daily data to hourly dataframe and copy contents
<p>I have the following Dataframe:</p> <pre><code> Date Holiday 0 2018-01-01 New Year's Day 1 2018-01-15 Martin Luther King, Jr. Day 2 2018-02-19 Washington's Birthday 3 2018-05-08 Truman Day 4 2018-05-28 Memorial Day ... ... ... 58 2022-10-10 Columbus Day 59 2022-11-11 Veterans Day 60 2022-11-24 Thanksgiving 61 2022-12-25 Christmas Day 62 2022-12-26 Christmas Day (Observed) </code></pre> <p>I would like to re-sample this data frame so that it is an hourly df from a daily df (while copying the content in the holidays column to the correct date). I'd like it to look like this [Ignore the index of the table, it should be alot more numbers than this]</p> <pre><code> Timestamp Holiday 0 2018-01-01 00:00:00 New Year's Day 1 2018-01-01 01:00:00 New Year's Day 2 2018-01-01 02:00:00 New Year's Day 3 2018-01-01 03:00:00 New Year's Day 4 2018-01-01 04:00:00 New Year's Day 5 2018-01-01 05:00:00 New Year's Day ... ... ... 62 2022-12-26 20:00:00 Christmas Day (Observed) 63 2022-12-26 21:00:00 Christmas Day (Observed) 64 2022-12-26 22:00:00 Christmas Day (Observed) 65 2022-12-26 23:00:00 Christmas Day (Observed) </code></pre> <p>What's the fastest way to go about doing so? Thanks in advance.</p>
<p>How about</p> <pre><code>df.set_index("Date").resample("H").ffill().reset_index().rename( {"Date": "Timestamp"}, axis=1 ) </code></pre>
python|python-3.x|pandas
1
6,217
32,252,728
Pandas Indexing vs Copy Error
<p>I have the Data2 column in my dataframe. I am trying to create a new column ('NewCol') by applying a filter to the Data2 column. Below code works and the results of the new column is correct. But I get the below error message when running the code. How can I fix this? I would think this impacts performance.</p> <p>C:\Python27\lib\site-packages\IPython\kernel__main__.py:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame</p> <p>See the the caveats in the documentation: <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy</a></p> <pre><code># In[1]: import pandas as pd import numpy as np from pandas import DataFrame # In[2]: df = pd.DataFrame({'Date': ['2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05', '2015-05-08', '2015-05-07', '2015-05-06', '2015-05-05'], 'Sym': ['aapl', 'aapl', 'aapl', 'aapl', 'aaww', 'aaww', 'aaww', 'aaww'], 'Data2': [11, 8, 10, 15, 110, 60, 100, 40],'Data3': [5, 8, 6, 1, 50, 100, 60, 120]}) # In[4]: df['NewCol'] = '' df['NewCol'][df['Data2']&gt; 60] = 'True' df </code></pre>
<p>Try using <code>.loc</code></p> <pre><code>df.loc[df['Data2']&gt; 60, 'NewCol'] = 'True' </code></pre> <p>Pandas is very efficient in memory management. For most operations (filters) it returns reference to data already existing in memory (DataFrame). However in some cases it has to make copy and return this. Any assignment on this copy will not reflect in original DataFrame. Hence the warning.</p> <p>Also for all slicing try to use <code>.loc</code> if slicing based index values and <code>.iloc</code> for slicing based on integer locations. In some cases this is faster as explained in <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#returning-a-view-versus-a-copy" rel="nofollow">documentation</a></p> <blockquote> <p>When slicing using dfmi['one']['second']<br> ... dfmi['one'] selects the first level of the columns and returns a data frame that is singly-indexed. Then another python operation dfmi_with_one['second'] selects the series indexed by 'second' happens. This is indicated by the variable dfmi_with_one because pandas sees these operations as separate events. e.g. separate calls to <strong>getitem</strong>, so it has to treat them as linear operations, they happen one after another.</p> <p>Contrast this to df.loc[:,('one','second')] which passes a nested tuple of (slice(None),('one','second')) to a single call to <strong>getitem</strong>. This allows pandas to deal with this as a single entity. Furthermore this order of operations can be significantly faster, and allows one to index both axes if so desired.</p> </blockquote>
python|pandas
1
6,218
40,699,349
How to use a custom pandas groupby aggregation function to combine rows in a dataframe
<p>I have a dataframe with a <code>name</code> column and a <code>department</code> column. There are repeats in the <code>name</code> column that have different <code>department</code> values but all other column values are identical. I'd like to <em>flatten</em> these repeats into a single row and combine the different (unique) department values into a list. So, take first row of each group and just change the <code>department</code> value to a list of the unique <code>department</code> values in that group. So resulting dataframe should have exact same columns but no repeats in <code>name</code> column and <code>department</code> column now has lists of at least one element.</p> <p>I thought to use <code>groupby</code> and a custom aggregation function passed to <code>agg()</code> but the following just totally fails. My thinking was that my aggregation function would get each group as a dataframe and if for each dataframe group I returned a series then the output of <code>groupby.agg(flatten_departments)</code> would be a dataframe.</p> <pre><code>def flatten_departments(name_group): #I thought name_group would be a df of that group #this group is length 1 so this name doesn't actually repeat so just return same row if len(name_group) == 1: return name_group.squeeze() #turn length-1 df into a series to return, don't worry that department is a string and not a list for now else: #treat name_group like a df and get the unique departments departments = list(name_group['department'].unique()) name_ser = name_group.iloc[0,:] #take first "row" of this group name_ser['department'] = departments #replace department value with list of unique values from group return name_ser my_df = my_df.groupby(['name']).agg(flatten_departments) </code></pre> <p>This was a disaster and <code>name_group</code> is not a df but a series whose index is an index from the original df, and name is the name of some other column in the original df and value the value for that column. </p> <p>I know that I could just do a for loop over the <code>groupby</code> object as follows</p> <pre><code>list_of_ser = [] for name, gp in my_df.groupby(['name']): if len(gp) == 1: list_of_ser.append(gp.squeeze()) else: new_ser = gp.iloc[0,:] new_ser['department'] = list(gp['department'].unique()) list_of_ser.append(new_ser) new_df = pd.DataFrame(list_of_ser, columns=my_df.columns) </code></pre> <p>but I just thought that was the point of <code>agg</code>!</p> <p>Any ideas how to accomplish my goal with <code>agg</code> or if the for loop is really the correct way. If the for loop is the right way, what is the point of <code>agg</code>?</p> <p>Thank you!</p>
<pre><code>df = pd.DataFrame( dict( name=list('ABCDEFGACEF'), dept=list('xyxyzxyzyxz') ) ) df.groupby('name').dept.apply(list).reset_index() </code></pre> <p><a href="https://i.stack.imgur.com/QFP7y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QFP7y.png" alt="enter image description here"></a></p> <hr> <p><code>agg</code> could have been used like this</p> <pre><code>df.groupby('name').dept.agg(dict(dept=lambda x: list(x))).reset_index() </code></pre> <hr> <p>if you need to preserve all other columns</p> <pre><code>df = pd.DataFrame( dict( name=list('ABCDEFGACEF'), dept=list('xyxyzxyzyxz') ) ) g = df.groupby('name') pd.concat([g.dept.apply(list), g.first().drop('dept', 1)], axis=1).reset_index() </code></pre>
python|pandas|group-by
1
6,219
40,671,443
Numpy: how to return a view on a matrix A based on submatrix B
<p>Given a matrix A with dimensions axa, and B with dimensions bxb, and axa modulo bxb == 0. B is a submatrix(s) of A starting at (0,0) and tiled until the dimensions of axa is met.</p> <pre><code>A = array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) </code></pre> <p>An example of a submatrix might be:</p> <pre><code>B = array([[10, 11], [14, 15]]) </code></pre> <p>Where the number 15 is in position (1, 1) with respect to B's coordinates.</p> <p>How could I return a view on the array A, for a particular position in B? For example for position (1,1) in B, I want to get all such values from A:</p> <pre><code>C = array([[5, 7], [13, 15]]) </code></pre> <p>The reason I want a view, is that I wish to update multiple positions in A:</p> <pre><code>C = array([[5, 7],[13, 15]]) = 20 </code></pre> <p>results in </p> <pre><code>A = array([[ 0, 1, 2, 3], [ 4, 20, 6, 20], [ 8, 9, 10, 11], [12, 20, 14, 20]]) </code></pre>
<p>You can obtain this as follows:</p> <pre><code>&gt;&gt;&gt; A = np.array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11], [12, 13, 14, 15]]) &gt;&gt;&gt; A[np.ix_([1,3],[1,3])] = 20 &gt;&gt;&gt; A array([[ 0, 1, 2, 3], [ 4, 20, 6, 20], [ 8, 9, 10, 11], [12, 20, 14, 20]]) </code></pre> <p>For more info about <code>np.ix_</code> could review the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ix_.html" rel="nofollow noreferrer">NumPy documentation</a></p>
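<p>Since the question specifically asks for a <em>view</em> (so that assignments propagate back to <code>A</code>), basic slicing with a step equal to the tile size also works; unlike fancy indexing with <code>np.ix_</code>, it returns a true view:</p> <pre><code>b = 2              # size of the submatrix B
i, j = 1, 1        # position inside B
C = A[i::b, j::b]  # view of every tile's (i, j) element
C[...] = 20        # updates A in place
</code></pre>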
python|numpy|matrix
3
6,220
57,812,300
Python pandas to calculate mean of datetime of multiple columns
<p>Given an example table <code>df</code> as below, how to calculate mean date of <code>TIME1, TIME2, TIME3.</code></p> <pre><code>df['AVG_TIME'] = df[['TIME1', 'TIME2', 'TIME3']].mean(axis=1) </code></pre> <p>This returns <code>NaN</code> values</p> <pre><code>ID TIME1 TIME2 TIME3 0 2018-07-11 2018-07-09 2018-07-12 1 2018-07-12 2018-06-12 2018-07-15 2 2018-07-13 2018-06-13 2018-08-03 3 2019-09-11 2019-08-11 2019-09-01 4 2019-09-12 2019-08-12 2019-09-15 </code></pre>
<p>This could be done as follows:</p> <pre class="lang-py prettyprint-override"><code>import time import datetime import pandas as pd # build the df c = ['TIME1' , 'TIME2' , 'TIME3'] d = [['2018-07-11', '2018-07-09', '2018-07-12'], ['2018-07-12', '2018-06-12', '2018-07-15'], ['2018-07-13', '2018-06-13', '2018-08-03'], ['2019-09-11', '2019-08-11', '2019-09-01'], ['2019-09-12', '2019-08-12', '2019-09-15']] df = pd.DataFrame(d, columns=c) # conversion from dates to seconds since epoch (unix time) def to_unix(s): return time.mktime(datetime.datetime.strptime(s, "%Y-%m-%d").timetuple()) # sum the seconds since epoch, calculate average, and convert back to readable date averages = [] for index, row in df.iterrows(): unix = [to_unix(i) for i in row] average = sum(unix) / len(unix) averages.append(datetime.datetime.utcfromtimestamp(average).strftime('%Y-%m-%d')) df['averages'] = averages </code></pre>
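<p>A vectorized alternative that avoids the Python-level loop and the local-time/UTC mismatch of <code>mktime</code>/<code>utcfromtimestamp</code> (a sketch; it assumes a pandas version that allows casting datetimes to int64 nanoseconds):</p> <pre><code>cols = ['TIME1', 'TIME2', 'TIME3']
as_ns = df[cols].apply(pd.to_datetime).astype('int64')   # nanoseconds since the epoch
df['AVG_TIME'] = pd.to_datetime(as_ns.mean(axis=1).astype('int64'))
</code></pre>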
python|pandas|datetime
0
6,221
34,396,128
Resample pandas dataframe by both name and origin
<p>I have the following Pandas DataFrame object <code>df</code>. It is a train schedule listing the date of departure, scheduled time of departure, and train company.</p> <pre><code>import pandas as pd df = Year Month DayofMonth DayOfWeek DepartureTime Train Origin Datetime 1988-01-01 1988 1 1 5 1457 BritishRail Leeds 1988-01-02 1988 1 2 6 1458 DeutscheBahn Berlin 1988-01-03 1988 1 3 7 1459 SNCF Lyons 1988-01-02 1988 1 2 6 1501 BritishRail Ipswich </code></pre> <p>Now, I would like to resample this time series by listing for each week the number of times a certain rail company departed from this station by origin. </p> <p>For instance, how many British Rail trains leave this station per week? How many British Rail trains leave this station per week originating from Leeds? </p> <p>I suspected the result to be a pandas series object. </p> <p>I tried for total British Rails per week</p> <pre><code>BR_weekly = df[df['Train']=='BritishRail'].resample("W", how='sum') </code></pre> <p>but this does not give me a time series of the form</p> <pre><code>Datetime Number of trains i.e. Datetime 1988-01-03 434 1988-01-10 982 1988-01-17 989 Freq: W-SUN, dtype: int64 </code></pre> <p>How can I fix this? </p>
<p>My input data (add and change some date):</p> <pre><code>print df Year Month DayofMonth DayOfWeek DepartureTime Train \ Datetime 1988-01-01 1988 1 1 5 1457 BritishRail 1988-01-01 1988 1 1 5 1457 BritishRail 1988-01-10 1988 1 2 6 1458 DeutscheBahn 1988-01-12 1988 1 3 7 1459 SNCF 1988-01-20 1988 1 2 6 1501 BritishRail Origin Datetime 1988-01-01 Leeds 1988-01-01 Leeds 1988-01-10 Berlin 1988-01-12 Lyons 1988-01-20 Ipswich </code></pre> <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Grouper.html" rel="nofollow"><code>Grouper</code></a>and count values of column <code>Train</code>.</p> <pre><code>print df.groupby(pd.Grouper(freq='W'))['Train'].count() 1988-01-03 2 1988-01-10 1 1988-01-17 1 1988-01-24 1 Freq: W-SUN, Name: Train, dtype: int64 </code></pre> <p>Or you can select column <code>Train</code> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.resample.html" rel="nofollow"><code>resample</code></a> it by <code>count</code>:</p> <pre><code>print df['Train'].resample('W', how='count') Datetime 1988-01-03 2 1988-01-10 1 1988-01-17 1 1988-01-24 1 Freq: W-SUN, Name: Train, dtype: int64 </code></pre> <p>EDIT:</p> <p>I think you cannot use <code>sum</code>, because it concatenate strings in column <code>Train</code>:</p> <pre><code>print df.Train[df['Train'].isin(['BritishRail'])].resample("W", how='sum') Datetime 1988-01-03 BritishRailBritishRail 1988-01-10 0 1988-01-17 0 1988-01-24 BritishRail Freq: W-SUN, Name: Train, dtype: object </code></pre> <p>Select one column <code>Train</code>, where is <code>BritishRail</code> using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow"><code>isin</code></a> and resample it with <code>count</code> instead of <code>sum</code>:</p> <pre><code>print df.Train[df['Train'].isin(['BritishRail'])].resample("W", how='count') Datetime 1988-01-03 2 1988-01-10 0 1988-01-17 0 1988-01-24 1 Freq: W-SUN, Name: Train, dtype: int64 </code></pre>
python|pandas|time-series
1
6,222
36,896,172
"undefined symbol" error when importing SWIG+python module
<p>I created a *.so file for use in Python using SWIG, but when I import it I get this:</p> <pre><code>/_analyzer.so: undefined symbol: autocorellation </code></pre> <p>I did almost everything according to this instruction: <a href="https://scipy.github.io/old-wiki/pages/Cookbook/SWIG_NumPy_examples.html" rel="nofollow">https://scipy.github.io/old-wiki/pages/Cookbook/SWIG_NumPy_examples.html</a> </p> <p>my code is following:</p> <p>analyzer.h:</p> <pre><code>void autocorellation(double *in, double *out, long long n); </code></pre> <p>analyzer.cpp:</p> <pre><code>#include "analyzer.h" #include &lt;math.h&gt; #include &lt;stdlib.h&gt; #define PI 3.14159265358979323846 typedef struct { double real; double im; } Complex; void complex_multiply(Complex a,Complex b,Complex* c){ c-&gt;real = a.real * b.real - a.im * b.im; c-&gt;im = a.real * b.im + a.im * b.real; } void complex_multiply_int(int a, Complex b,Complex* c){ c-&gt;real = a * b.real; c-&gt;im = a * b.im; } void complex_sum(Complex a,Complex b,Complex* c){ c-&gt;real = a.real + b.real; c-&gt;im = a.im + b.im; } void complex_conjugate(Complex* a,Complex* b,long long n){ for(int i = 0; i &lt; n; ++i){ b[i].real = a[i].real; b[i].im = -1 * a[i].im; } } long long rev (long long num, long long lg_n) { long long res = 0; for (long long i=0; i &lt; lg_n; ++i) if (num &amp; (1 &lt;&lt; i)) res |= 1 &lt;&lt; (lg_n-1-i); return res; } void fft (Complex* a, long long n,bool invert) { long long lg_n = 0; while ((1 &lt;&lt; lg_n) &lt; n) ++lg_n; for (long long i=0; i&lt;n; ++i){ long long r= rev(i,lg_n); if (i &lt; r){ a[i].real = a[i].real + a[r].real; a[r].real = a[i].real - a[r].real; a[i].real = a[i].real - a[r].real; a[i].im = a[i].im + a[r].im; a[r].im = a[i].im - a[r].im; a[i].im = a[i].im - a[r].im; } } for (long long len=2; len&lt;=n; len &lt;&lt;= 1) { double ang = 2*PI/len * (invert ? -1 : 1); Complex wn; wn.real = cos(ang); wn.im = sin(ang); for (long long i=0; i&lt;n; i+=len) { Complex w; w.real = 1; w.im = 0; long long ll = (long long)(len * 0.5); for (long long j=0; j&lt; ll; ++j) { Complex u = a[i+j],v; complex_multiply(a[i+j+ll],w,&amp;v); complex_sum(u,v,&amp;a[i+j]); complex_multiply_int(-1,v,&amp;v); complex_sum(u,v,&amp;a[i+j+ll]); complex_multiply(w,wn,&amp;w); } } } if (invert) for (long long i=0; i&lt;n; ++i){ a[i].real /= n; a[i].im /= n; } } void autocorellation(double *in, double *out, long long n){ long long le = 1; while(n &gt; le) le *= 2; double m = 0; for(int i = 0; i &lt; n; ++i) m+=in[i]; m /= n; for(int i = 0; i &lt; n; ++i) in[i] -= m; Complex* a = (Complex*) malloc(le*sizeof(Complex)); Complex* b = (Complex*) malloc(le*sizeof(Complex)); for(long long i = 0; i &lt; n; ++i){ a[i].im = 0; a[i].real = in[i]; } for(long long i = n; i &lt; le; ++i){ a[i].im = 0; a[i].real = 0; } fft(a,le,false); complex_conjugate(a,b,le); Complex* c = (Complex*) malloc(le*sizeof(Complex)); for(long long i = 0; i &lt; le; ++i) complex_multiply(b[i],a[i],&amp;c[i]); fft(c,le,true); for(long long i = 0; i &lt; n; ++i) out[i] = (c[i].real/c[0].real); free(a); free(b); free(c); } </code></pre> <p>analyzer.i:</p> <pre><code> %module analyzer %{ #define SWIG_FILE_WITH_INIT #include "analyzer.h" %} %include "numpy.i" %init %{ import_array(); %} %apply (double* IN_ARRAY1,int DIM1) {(double *in, long long n)} %apply (double* ARGOUT_ARRAY1,int DIM1) {(double *out, long long n)} %include "analyzer.h" </code></pre> <p>setup.py:</p> <pre><code> #! 
/usr/bin/env python # System imports from distutils.core import * from distutils import sysconfig # Third-party modules - we depend on numpy for everything import numpy # Obtain the numpy include directory. This logic works across numpy versions. try: numpy_include = numpy.get_include() except AttributeError: numpy_include = numpy.get_numpy_include() # ezrange extension module _analyzer = Extension("_analyzer", ["analyzer.i","analyzer.cpp"], include_dirs = [numpy_include], ) # ezrange setup setup( name = "range function", description = "Autocorellation function evaluation", author = "Bodya", version = "1.0", ext_modules = [_analyzer] ) </code></pre>
<p>The difference between your code and the cookbook examples is that your code is C++. Therefore, you need to pass the <code>-c++</code> option to SWIG. In the construction of <code>Extension(...)</code> in setup.py, simply add <code>swig_opts=['-c++'],</code>.</p> <p>Note that distutils will still invoke the C compiler on the generated wrapper file, but this will have a <code>.cpp</code> extension, so it should be compiled correctly if the compiler is gcc or clang.</p> <p>My experience using distutils or setuptools for C++ SWIG extensions that are even slightly beyond trivial has been poor, so I invoke SWIG to generate a wrapper outside of distutils (I do it via a Makefile) and only use distutils to compile the extension from the wrapper file.</p>
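<p>Concretely, the <code>Extension</code> block in the question's setup.py would become:</p> <pre><code>_analyzer = Extension("_analyzer",
                      ["analyzer.i", "analyzer.cpp"],
                      include_dirs=[numpy_include],
                      swig_opts=['-c++'],
                      )
</code></pre>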
python|c++|numpy|swig
0
6,223
37,011,828
Pandas: delete duplicate rows
<p>I have the following df:</p> <pre><code>url='https://raw.githubusercontent.com/108michael/ms_thesis/master/crsp.dime.mpl.abbridged' zz=pd.read_csv(url) zz.head(30) date feccandid feccandcfscore.dyn pacid paccfscore cid catcode type_x di amtsum state log_diff_unemployment party type_y bills years_exp disposition billsum 0 2006 S8NV00073 0.496 C00000422 0.330 N00006619 H1100 24K D 5000 NV -0.024693 Republican rep s22-109 12 support 3 1 2006 S8NV00073 0.496 C00000422 0.330 N00006619 H1100 24K D 5000 NV -0.024693 Republican rep s22-109 12 support 3 2 2006 S8NV00073 0.496 C00000422 0.330 N00006619 H1100 24K D 5000 NV -0.024693 Republican rep s22-109 12 support 3 3 2006 S8NV00073 0.496 C00000422 0.330 N00006619 H1100 24K D 5000 NV -0.024693 Republican rep s22-109 12 support 3 4 2006 S8NV00073 0.496 C00000422 0.330 N00006619 H1100 24K D 5000 NV -0.024693 Republican rep s22-109 12 support 3 5 2006 S8NV00073 0.496 C00000422 0.330 N00006619 H1100 24K D 5000 NV -0.024693 Republican rep s22-109 12 support 3 6 2006 S8NV00073 0.496 C00375360 0.176 N00006619 H1100 24K D 4500 NV -0.024693 Republican rep s22-109 12 support 3 7 2006 S8NV00073 0.496 C00375360 0.176 N00006619 H1100 24K D 4500 NV -0.024693 Republican rep s22-109 12 support 3 8 2006 S8NV00073 0.496 C00375360 0.176 N00006619 H1100 24K D 4500 NV -0.024693 Republican rep s22-109 12 support 3 9 2006 S8NV00073 0.496 C00375360 0.176 N00006619 H1100 24K D 4500 NV -0.024693 Republican rep s22-109 12 support 3 10 2006 S8NV00073 0.496 C00375360 0.176 N00006619 H1100 24K D 4500 NV -0.024693 Republican rep s22-109 12 support 3 11 2006 S8NV00073 0.496 C00375360 0.176 N00006619 H1100 24K D 4500 NV -0.024693 Republican rep s22-109 12 support 3 12 2006 S8NV00073 0.496 C00113803 0.269 N00006619 H1130 24K D 2500 NV -0.024693 Republican rep s22-109 12 support 2 13 2006 S8NV00073 0.496 C00113803 0.269 N00006619 H1130 24K D 2500 NV -0.024693 Republican rep s22-109 12 support 2 14 2006 S8NV00073 0.496 C00249342 0.421 N00006619 H1130 24K D 5000 NV -0.024693 Republican rep s22-109 12 support 2 15 2006 S8NV00073 0.496 C00249342 0.421 N00006619 H1130 24K D 5000 NV -0.024693 Republican rep s22-109 12 support 2 </code></pre> <p>Some of the rows are complete duplicates of each other. Is there a way to delete duplicate rows?</p>
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow"><code>drop_duplicates</code></a>:</p> <pre><code>print zz.drop_duplicates() </code></pre>
python|pandas
2
6,224
49,519,954
exp() overflow error python 3
<p>I tried various solutions for below, but I still get the errors as described:</p> <pre><code>log1p(1 + math.exp(comp * -1)) </code></pre> <p>Error: <code>OverflowError: math range error</code></p> <p>So I changed it to: <code>log1p(1 + np.exp(comp * -1))</code> Now I get error : <code>RuntimeWarning: overflow encountered in exp</code></p> <p>So again based on some suggestion on previous questions asked I changed it to: <code>log1p(1 + np.exp((comp * -1), dtype=np.float256))</code></p> <p>Now my error is : <code>module 'numpy' has no attribute 'float256'</code></p> <p>Any other suggestions? Please help thanks!</p> <p><strong>EDIT:</strong> X -> Input feature array of 'N' rows and 'm' features. w -> weight vector of size 'm'</p> <pre><code> for rowIndex in range(len(X)): val1 = np.sum(np.dot(X[rowIndex], w)) val2 = y[rowIndex] comp = np.dot(val2, val1) loss = loss + log1p(1 + np.exp((comp * -1))) </code></pre>
<p>I replaced the code as below (<code>expit</code> is the logistic sigmoid from <code>scipy.special</code>):</p> <pre><code>from scipy.special import expit loss = loss - log1p(expit(val)) </code></pre> <p>Basically I rearranged my code to be able to use the expit function...</p>
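<p>Another numerically stable option, in case the quantity intended is the usual logistic loss <code>log(1 + exp(-comp))</code> rather than <code>log1p(1 + exp(-comp))</code>, is NumPy's <code>logaddexp</code>, which never materializes the huge exponential:</p> <pre><code>import numpy as np

# log(1 + exp(-comp)) == log(exp(0) + exp(-comp)), computed without overflow
loss = loss + np.logaddexp(0, -comp)
</code></pre>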
python-3.x|numpy|exp|overflowexception
0
6,225
28,200,786
How to plot scikit learn classification report?
<p>Is it possible to plot with matplotlib scikit-learn classification report?. Let's assume I print the classification report like this:</p> <pre><code>print '\n*Classification Report:\n', classification_report(y_test, predictions) confusion_matrix_graph = confusion_matrix(y_test, predictions) </code></pre> <p>and I get:</p> <pre><code>Clasification Report: precision recall f1-score support 1 0.62 1.00 0.76 66 2 0.93 0.93 0.93 40 3 0.59 0.97 0.73 67 4 0.47 0.92 0.62 272 5 1.00 0.16 0.28 413 avg / total 0.77 0.57 0.49 858 </code></pre> <p>How can I "plot" the avobe chart?.</p>
<p>Expanding on <a href="https://stackoverflow.com/users/3089523/bin">Bin</a>'s answer:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np def show_values(pc, fmt="%.2f", **kw): ''' Heatmap with text in each cell with matplotlib's pyplot Source: https://stackoverflow.com/a/25074150/395857 By HYRY ''' from itertools import izip pc.update_scalarmappable() ax = pc.get_axes() #ax = pc.axes# FOR LATEST MATPLOTLIB #Use zip BELOW IN PYTHON 3 for p, color, value in izip(pc.get_paths(), pc.get_facecolors(), pc.get_array()): x, y = p.vertices[:-2, :].mean(0) if np.all(color[:3] &gt; 0.5): color = (0.0, 0.0, 0.0) else: color = (1.0, 1.0, 1.0) ax.text(x, y, fmt % value, ha="center", va="center", color=color, **kw) def cm2inch(*tupl): ''' Specify figure size in centimeter in matplotlib Source: https://stackoverflow.com/a/22787457/395857 By gns-ank ''' inch = 2.54 if type(tupl[0]) == tuple: return tuple(i/inch for i in tupl[0]) else: return tuple(i/inch for i in tupl) def heatmap(AUC, title, xlabel, ylabel, xticklabels, yticklabels, figure_width=40, figure_height=20, correct_orientation=False, cmap='RdBu'): ''' Inspired by: - https://stackoverflow.com/a/16124677/395857 - https://stackoverflow.com/a/25074150/395857 ''' # Plot it out fig, ax = plt.subplots() #c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap='RdBu', vmin=0.0, vmax=1.0) c = ax.pcolor(AUC, edgecolors='k', linestyle= 'dashed', linewidths=0.2, cmap=cmap) # put the major ticks at the middle of each cell ax.set_yticks(np.arange(AUC.shape[0]) + 0.5, minor=False) ax.set_xticks(np.arange(AUC.shape[1]) + 0.5, minor=False) # set tick labels #ax.set_xticklabels(np.arange(1,AUC.shape[1]+1), minor=False) ax.set_xticklabels(xticklabels, minor=False) ax.set_yticklabels(yticklabels, minor=False) # set title and x/y labels plt.title(title) plt.xlabel(xlabel) plt.ylabel(ylabel) # Remove last blank column plt.xlim( (0, AUC.shape[1]) ) # Turn off all the ticks ax = plt.gca() for t in ax.xaxis.get_major_ticks(): t.tick1On = False t.tick2On = False for t in ax.yaxis.get_major_ticks(): t.tick1On = False t.tick2On = False # Add color bar plt.colorbar(c) # Add text in each cell show_values(c) # Proper orientation (origin at the top left instead of bottom left) if correct_orientation: ax.invert_yaxis() ax.xaxis.tick_top() # resize fig = plt.gcf() #fig.set_size_inches(cm2inch(40, 20)) #fig.set_size_inches(cm2inch(40*4, 20*4)) fig.set_size_inches(cm2inch(figure_width, figure_height)) def plot_classification_report(classification_report, title='Classification report ', cmap='RdBu'): ''' Plot scikit-learn classification report. 
Extension based on https://stackoverflow.com/a/31689645/395857 ''' lines = classification_report.split('\n') classes = [] plotMat = [] support = [] class_names = [] for line in lines[2 : (len(lines) - 2)]: t = line.strip().split() if len(t) &lt; 2: continue classes.append(t[0]) v = [float(x) for x in t[1: len(t) - 1]] support.append(int(t[-1])) class_names.append(t[0]) print(v) plotMat.append(v) print('plotMat: {0}'.format(plotMat)) print('support: {0}'.format(support)) xlabel = 'Metrics' ylabel = 'Classes' xticklabels = ['Precision', 'Recall', 'F1-score'] yticklabels = ['{0} ({1})'.format(class_names[idx], sup) for idx, sup in enumerate(support)] figure_width = 25 figure_height = len(class_names) + 7 correct_orientation = False heatmap(np.array(plotMat), title, xlabel, ylabel, xticklabels, yticklabels, figure_width, figure_height, correct_orientation, cmap=cmap) def main(): sampleClassificationReport = """ precision recall f1-score support Acacia 0.62 1.00 0.76 66 Blossom 0.93 0.93 0.93 40 Camellia 0.59 0.97 0.73 67 Daisy 0.47 0.92 0.62 272 Echium 1.00 0.16 0.28 413 avg / total 0.77 0.57 0.49 858""" plot_classification_report(sampleClassificationReport) plt.savefig('test_plot_classif_report.png', dpi=200, format='png', bbox_inches='tight') plt.close() if __name__ == "__main__": main() #cProfile.run('main()') # if you want to do some profiling </code></pre> <p>outputs:</p> <p><a href="https://i.stack.imgur.com/KcsBu.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KcsBu.png" alt="enter image description here"></a></p> <p>Example with more classes (~40):</p> <p><a href="https://i.stack.imgur.com/ukXdA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ukXdA.png" alt="enter image description here"></a></p>
python|numpy|matplotlib|scikit-learn
42
6,226
27,979,443
What is the real working of ndim in NumPy?
<p>Consider:</p> <pre><code>import numpy as np &gt;&gt;&gt; a=np.array([1, 2, 3, 4]) &gt;&gt;&gt; a array([1, 2, 3, 4]) &gt;&gt;&gt; a.ndim 1 </code></pre> <p>How is the dimension 1? I have given an equation of three variables, which should make it three-dimensional, but it is showing the dimension as 1. What is the logic of ndim?</p>
<p>As the <a href="http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html" rel="nofollow noreferrer">NumPy documentation</a> says, <code>numpy.ndim(a)</code> returns:</p> <blockquote> <p>The number of dimensions in <code>a</code>. Scalars are zero-dimensional</p> </blockquote> <p>E.g.:</p> <pre><code>a = np.array(111) b = np.array([1,2]) c = np.array([[1,2], [4,5]]) d = np.array([[1,2,3,], [4,5]]) print a.ndim, b.ndim, c.ndim, d.ndim #outputs: 0 1 2 1 </code></pre> <p>Note that the last array, <code>d</code>, is an array of <em>object</em> <code>dtype</code>, so its dimension is still <code>1</code> (newer NumPy versions require <code>dtype=object</code> to be given explicitly for such a ragged array).</p> <p>What you want to use could be <code>a.shape</code> (or <code>a.size</code> for a one-dimensional array):</p> <pre><code>print a.size, b.size print c.size # == 4, which is the total number of elements in the array # Outputs: 1 2 4 </code></pre> <p>The <code>.shape</code> attribute returns a <code>tuple</code>, and you can get the length of a given <em>dimension</em> by indexing it, e.g. <code>[0]</code> for the first axis:</p> <pre><code>print a.shape, b.shape, b.shape[0] () (2L,) 2 </code></pre>
python|numpy
5
6,227
28,035,236
test if list contains a number in some range
<p>Let's say I have a list <code>L=[1.1, 1.8, 4.4, 5.2]</code>. For some integer, <code>n</code>, I want to know whether <code>L</code> has a value <code>val</code> with <code>n-1&lt;val&lt;n+1</code>, and if so I want to know the index of <code>val</code>.</p> <p>The best I can do so far is to define a generator</p> <pre><code>x = (index for index,val in enumerate(L) if n-1&lt;val&lt;n+1) </code></pre> <p>and check to see whether it has an appropriate value using <code>try... except</code>. So let's assume I'm looking for the smallest n>=0 for which such a value exists... </p> <pre><code>L=[1.1, 1.8, 4.4, 5.2] n=0 while True: x = (index for index,val in enumerate(L) if n-1&lt;val&lt;n+1) try: index=next(x) break except StopIteration: n+=1 print n,index </code></pre> <blockquote> <p>1 0</p> </blockquote> <p>In reality, I'm doing a more complicated task. I'll want to be able to take an n, find the first index, and if it doesn't exist, I need to do something else.</p> <p>This doesn't seem like particularly clean code to me. Is there a better way? I feel like numpy probably has the answer, but I don't know it well enough.</p>
<p>If L is sorted, you could use <code>bisect.bisect_left</code> to find the index i for which all L[&lt; i] &lt; n &lt;= all L[>= i].</p> <p>Then</p> <pre><code>if n - L[i-1] &lt; 1.0: val = L[i-1] elif L[i] - n &lt; 1.0: val = L[i] else: val = None # no such value found </code></pre> <hr> <p><strong>Edit:</strong> Depending on your data, what you want to accomplish, and how much time you want to spend writing a clever algorithm, sorting <em>may or may not</em> be a good solution for you; and before I see too many more O(n)s waved around, I would like to point out that his actual problem seems to involve repeatedly probing for various values of n - which would pretty rapidly amortize the initial sorting overhead - and that his suggested algorithm above is actually O(n**2).</p> <p>@AntoinePelisse: by all means, let's do some profiling:</p> <pre><code>from bisect import bisect_left, bisect_right from functools import partial import matplotlib.pyplot as plt from random import randint, uniform from timeit import timeit #blues density_col_lin = [ (0.000, 0.502, 0.000, 1.000), (0.176, 0.176, 0.600, 1.000), (0.357, 0.357, 0.698, 1.000), (0.537, 0.537, 0.800, 1.000) ] # greens density_col_sor = [ (0.000, 0.502, 0.000, 1.000), (0.176, 0.600, 0.176, 1.000), (0.357, 0.698, 0.357, 1.000), (0.537, 0.800, 0.537, 1.000) ] def make_data(length, density): max_ = length / density return [uniform(0.0, max_) for _ in range(length)], max_ def linear_probe(L, max_, probes): for p in range(probes): n = randint(0, int(max_)) for index,val in enumerate(L): if n - 1.0 &lt; val &lt; n + 1.0: # return index break def sorted_probe(L, max_, probes): # initial sort sL = sorted((val,index) for index,val in enumerate(L)) for p in range(probes): n = randint(0, int(max_)) left = bisect_right(sL, (n - 1.0, max_)) right = bisect_left (sL, (n + 1.0, 0.0 ), left) if left &lt; right: index = min(sL[left:right], key=lambda s:s[1])[1] # return index def main(): densities = [0.8, 0.2, 0.08, 0.02] probes = [1, 3, 10, 30, 100] lengths = [[] for d in densities] lin_pts = [[[] for p in probes] for d in densities] sor_pts = [[[] for p in probes] for d in densities] # time each function at various data lengths, densities, and probe repetitions for d,density in enumerate(densities): for trial in range(200): print("{}-{}".format(density, trial)) # length in 10 to 5000, with log density length = int(10 ** uniform(1.0, 3.699)) L, max_ = make_data(length, density) lengths[d].append(length) for p,probe in enumerate(probes): lin = timeit(partial(linear_probe, L, max_, probe), number=5) / 5 sor = timeit(partial(sorted_probe, L, max_, probe), number=5) / 5 lin_pts[d][p].append(lin / probe) sor_pts[d][p].append(sor / probe) # plot the results plt.figure(figsize=(9.,6.)) plt.axis([0, 5000, 0, 0.004]) for d,density in enumerate(densities): xs = lengths[d] lcol = density_col_lin[d] scol = density_col_sor[d] for p,probe in enumerate(probes): plt.plot(xs, lin_pts[d][p], "o", color=lcol, markersize=4.0) plt.plot(xs, sor_pts[d][p], "o", color=scol, markersize=4.0) plt.show() if __name__ == "__main__": main() </code></pre> <p>which results in</p> <p><img src="https://i.stack.imgur.com/Ab6Oy.png" alt="enter image description here"></p> <p>x-axis is number of items in L, y-axis is amortized time per probe; green dots are sorted_probe(), blue are linear_probe().</p> <p>Conclusions:</p> <ul> <li>runtimes for both functions are remarkably linear with respect to length</li> <li>for a single probe into L, presorting is about 4 times slower than iterating</li> 
<li>the crossover point seems to be about 5 probes; for fewer than that, linear search is faster, for more, presorting is faster.</li> </ul>
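<p>A small self-contained sketch of the bisect approach described above (it assumes <code>L</code> is sorted; <code>n</code> and the tolerance of 1.0 are just the values from the question):</p> <pre><code>from bisect import bisect_left

L = [1.1, 1.8, 4.4, 5.2]   # must be sorted
n = 2
i = bisect_left(L, n)      # first position with L[i] &gt;= n
# only the neighbours of that position can satisfy n-1 &lt; L[j] &lt; n+1
candidates = [j for j in (i - 1, i) if 0 &lt;= j &lt; len(L) and abs(L[j] - n) &lt; 1.0]
index = candidates[0] if candidates else None
print(index)               # -&gt; 1
</code></pre>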
python|numpy
4
6,228
73,424,093
'poorly' organized csv file
<p>I have a CSV file that I have to do some data processing and it's a bit of a mess. It's about 20 columns long, but there are multiple datasets that are concatenated in each column. see dummy file below</p> <p>I'm trying to import each sub file into a separate pandas dataframe, but I'm not sure the best way to parse the csv other than manually hardcoding importing a certain length. any suggestions? I guess if there is some way to find where the spaces are (I could loop through the entire file and find them, and then read each block, but that doesn't seem very efficient). I have lots of csv files like this to read.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd nrows = 20 skiprows = 0 #but this only reads in the first block df = pd.read_csv(csvfile, nrows=nrows, skiprows=skiprows) </code></pre> <p>Below is a dummy example:</p> <pre class="lang-none prettyprint-override"><code>TIME,HDRA-1,HDRA-2,HDRA-3,HDRA-4 0.473934934,0.944026678,0.460177668,0.157028404,0.221362174 0.911384892,0.336694914,0.586014563,0.828339071,0.632790473 0.772652589,0.318146985,0.162987171,0.555896202,0.659099194 0.541382917,0.033706768,0.229596419,0.388057901,0.465507295 0.462815443,0.088206108,0.717132904,0.545779038,0.268174922 0.522861489,0.736462083,0.532785319,0.961993893,0.393424116 0.128671067,0.56740537,0.689995486,0.518493779,0.94916205 0.214026742,0.176948186,0.883636252,0.732258971,0.463732841 0.769415726,0.960761306,0.401863804,0.41823372,0.812081565 0.529750933,0.360314266,0.461615009,0.387516958,0.136616263 TIME,HDRB-1,HDRB-2,HDRB-3,HDRB-4 0.92264286,0.026312552,0.905839375,0.869477136,0.985560264 0.410573341,0.004825381,0.920616162,0.19473237,0.848603523 0.999293171,0.259955029,0.380094352,0.101050014,0.428047493 0.820216119,0.655118219,0.586754951,0.568492346,0.017038336 0.040384337,0.195101879,0.778631044,0.655215972,0.701596844 0.897559206,0.659759362,0.691643603,0.155601111,0.713735399 0.860188233,0.805013656,0.772153733,0.809025634,0.257632085 0.844167809,0.268060979,0.015993504,0.95131982,0.321210766 0.86288383,0.236599974,0.279435193,0.311005146,0.037592509 0.938348876,0.941851279,0.582434058,0.900348616,0.381844182 0.344351819,0.821571854,0.187962046,0.218234588,0.376122331 0.829766776,0.869014514,0.434165111,0.051749472,0.766748447 0.327865017,0.938176948,0.216764504,0.216666543,0.278110502 0.243953506,0.030809033,0.450110334,0.097976735,0.762393831 0.484856452,0.312943244,0.443236377,0.017201097,0.038786057 0.803696521,0.328088545,0.764850865,0.090543472,0.023363909 TIME,HDRB-1,HDRB-2,HDRB-3,HDRB-4 0.342418934,0.290979228,0.84201758,0.690964176,0.927385229 0.173485057,0.214049903,0.27438753,0.433904377,0.821778689 0.982816721,0.094490904,0.105895645,0.894103833,0.34362529 0.738593272,0.423470984,0.343551191,0.192169774,0.907698897 0.021809601,0.406001002,0.072701623,0.964640184,0.023427393 0.406226618,0.421944527,0.413150342,0.337243905,0.515996389 0.829989793,0.168974332,0.246064043,0.067662474,0.851182924 0.812736737,0.667154845,0.118274705,0.484017732,0.052666038 0.215947395,0.145078319,0.484063281,0.79414799,0.373845815 0.497877968,0.554808367,0.370429652,0.081553316,0.793608698 0.607612542,0.424703584,0.208995066,0.249033837,0.808169709 0.199613478,0.065853429,0.77236195,0.757789625,0.597225697 0.044167285,0.1024231,0.959682778,0.892311813,0.621810775 0.861175219,0.853442735,0.742542086,0.704287769,0.435969078 0.706544823,0.062501379,0.482065481,0.598698867,0.845585046 0.967217599,0.13127149,0.294860203,0.191045015,0.590202032 
0.031666757,0.965674812,0.177792841,0.419935921,0.895265056 TIME,HDRB-1,HDRB-2,HDRB-3,HDRB-4 0.306849588,0.177454423,0.538670939,0.602747137,0.081221293 0.729747557,0.11762043,0.409064884,0.051577964,0.666653287 0.492543468,0.097222882,0.448642979,0.130965724,0.48613413 0.0802024,0.726352481,0.457476151,0.647556514,0.033820374 0.617976299,0.934428994,0.197735831,0.765364856,0.350880707 0.07660401,0.285816636,0.276995238,0.047003343,0.770284864 0.620820688,0.700434525,0.896417099,0.652364756,0.93838793 0.364233925,0.200229902,0.648342989,0.919306736,0.897029239 0.606100716,0.203585366,0.167232701,0.523079381,0.767224301 0.616600448,0.130377791,0.554714839,0.468486555,0.582775753 0.254480861,0.933534632,0.054558237,0.948978985,0.731855548 0.620161044,0.583061202,0.457991555,0.441254272,0.657127968 0.415874646,0.408141761,0.843133575,0.40991199,0.540792744 0.254903429,0.655739954,0.977873649,0.210656057,0.072451639 0.473680525,0.298845701,0.144989283,0.998560665,0.223980961 0.30605008,0.837920854,0.450681322,0.887787908,0.793229776 0.584644405,0.423279153,0.444505314,0.686058204,0.041154856 </code></pre>
<pre><code>from io import StringIO import pandas as pd data =&quot;&quot;&quot; TIME,HDRA-1,HDRA-2,HDRA-3,HDRA-4 0.473934934,0.944026678,0.460177668,0.157028404,0.221362174 0.911384892,0.336694914,0.586014563,0.828339071,0.632790473 0.772652589,0.318146985,0.162987171,0.555896202,0.659099194 0.541382917,0.033706768,0.229596419,0.388057901,0.465507295 0.462815443,0.088206108,0.717132904,0.545779038,0.268174922 0.522861489,0.736462083,0.532785319,0.961993893,0.393424116 TIME,HDRB-1,HDRB-2,HDRB-3,HDRB-4 0.92264286,0.026312552,0.905839375,0.869477136,0.985560264 0.410573341,0.004825381,0.920616162,0.19473237,0.848603523 0.999293171,0.259955029,0.380094352,0.101050014,0.428047493 0.820216119,0.655118219,0.586754951,0.568492346,0.017038336 0.040384337,0.195101879,0.778631044,0.655215972,0.701596844 TIME,HDRB-1,HDRB-2,HDRB-3,HDRB-4 0.342418934,0.290979228,0.84201758,0.690964176,0.927385229 0.173485057,0.214049903,0.27438753,0.433904377,0.821778689 0.982816721,0.094490904,0.105895645,0.894103833,0.34362529 0.738593272,0.423470984,0.343551191,0.192169774,0.907698897 &quot;&quot;&quot; df = pd.read_csv(StringIO(data), header=None) start_marker = 'TIME' grouper = (df.iloc[:, 0] == start_marker).cumsum() groups = df.groupby(grouper) frames = [gr.T.set_index(gr.index[0]).T for _, gr in groups] </code></pre> <p><a href="https://i.stack.imgur.com/NXlE1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NXlE1.png" alt="frames" /></a></p>
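<p>The <code>grouper</code> counts how many <code>TIME</code> header rows have been seen so far, so every block between two headers gets the same group id, and each group is then re-headed with its own first row. Because the file is read with <code>header=None</code>, the values come back as strings; a possible follow-up step (a sketch, not part of the original answer) is:</p> <pre><code>frames = [f.apply(pd.to_numeric) for f in frames]  # convert each sub-frame back to numbers
</code></pre>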
pandas|csv|python-3.9
0
6,229
35,085,830
Python pandas plot time-series with gap
<p>I am trying to plot a pandas DataFrame with TimeStamp indices that has a time gap in its indices. Using pandas.plot() results in linear interpolation between the last TimeStamp of the former segment and the first TimeStamp of the next. I do not want linear interpolation, nor do I want empty space between the two date segments. Is there a way to do that?</p> <p>Suppose we have a DataFrame with TimeStamp indices:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; import matplotlib.pyplot as plt &gt;&gt;&gt; df = pd.DataFrame(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000)) &gt;&gt;&gt; df = df.cumsum() </code></pre> <p>Now let's take two time chunks of it and plot it:</p> <pre><code>&gt;&gt;&gt; df = pd.concat([df['Jan 2000':'Aug 2000'], df['Jan 2001':'Aug 2001']]) &gt;&gt;&gt; df.plot() &gt;&gt;&gt; plt.show() </code></pre> <p>The resulting plot has an interpolation line connecting the TimeStamps enclosing the gap. I cannot figure out how to upload pictures on this machine, but these pictures from <a href="https://groups.google.com/forum/#!msg/pydata/x46E9Gpac68/oO2w2TiYR4w" rel="nofollow noreferrer">Google Groups</a> show my problem (interpolated.jpg, no-interpolation.jpg and no gaps.jpg). I can recreate the first as shown above. The second is achievable by replacing all gap values with NaN (see also <a href="https://stackoverflow.com/questions/27266987/python-matplotlib-avoid-plotting-gaps">this question</a>). How can I achieve the third version, where the time gap is omitted?</p>
<p>Try:</p> <pre><code>df.plot(x=df.index.astype(str)) </code></pre> <p><a href="https://i.stack.imgur.com/1Rb14.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1Rb14.png" alt="Skip the gap"></a></p> <p>You may want to customize ticks and tick labels.</p> <p><strong>EDIT</strong></p> <p>That works for me using pandas 0.17.1 and numpy 1.10.4.</p> <p>All you really need is a way to convert the <code>DatetimeIndex</code> to another type which is not datetime-like. In order to get meaningful labels I chose <code>str</code>. If <code>x=df.index.astype(str)</code> does not work with your combination of pandas/numpy/whatever you can try other options:</p> <pre><code>df.index.to_series().dt.strftime('%Y-%m-%d') df.index.to_series().apply(lambda x: x.strftime('%Y-%m-%d')) ... </code></pre> <p>I realized that resetting the index is not necessary so I removed that part.</p>
python|pandas|plot|time-series
9
6,230
67,298,754
ImportError: Missing optional dependency 'xlrd'. Install xlrd >= 1.0.0 for Excel support Use pip or conda to install xlrd
<p>I used pandas to read excel file and then received an ImportError shown below.</p> <p>code:</p> <pre><code>pressure_2018=pd.read_excel('2018_pressures.xlsx') </code></pre> <p>Error:</p> <pre><code>ImportError: Missing optional dependency 'xlrd'. Install xlrd &gt;= 1.0.0 for Excel support Use pip or conda to install xlrd. </code></pre> <p>Then I installed xlrd on my computer using code shown below:</p> <pre><code>pip install xlrd </code></pre> <p>But I still received the same issue. In output, it always returned this ImportError. It made me feel confused and frustrated, because I have installed xlrd on my computer. Could you please give me some ideas about how to resolve this error.</p>
<p>You can install <a href="https://pypi.org/project/openpyxl/" rel="noreferrer">openpyxl</a> using <code>pip install openpyxl</code> and then try:</p> <pre><code>pd.read_excel('2018_pressures.xlsx', engine='openpyxl') </code></pre> <p>This is an alternative solution but it will work.</p>
python|pandas
11
6,231
67,426,181
How to use the result of tf.argmax to access values at the corresponding positions?
<p>I want to access values of a tensor next to their maximal values. For that, I get the locations of the maxima via <code>tf.argmax</code>, add one to it and then need to look up the values.</p> <pre><code>f0_binned = tf.random.normal([2, 1000, 360]) idx = tf.argmax(f0_binned, axis=-1) # [2, 1000] tf.gather(f0_binned, idx+1, axis=-1).shape # TensorShape([2, 1000, 2, 1000]) </code></pre> <p>I would like to get something of the same shape as <code>idx</code>, but filled with the values of the corresponding positions. I only found <code>tf.gather</code>, but I am not sure if it is the correct operation and I am using it wrong or if I need to use an entirely different operation. Does someone know how to do it?</p>
<p>You can compute the argmax and then use NumPy's advanced indexing to pick out the values you are looking for:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf import numpy as np f0_binned = tf.random.normal([2, 1000, 360]) idx = np.argmax(f0_binned, axis=-1) i, j, k = f0_binned.numpy().shape I, J = np.ogrid[:i, :j] idx_plus_one = idx + 1 idx_plus_one[np.where(idx_plus_one &gt;= k)] = k - 1 the_values = f0_binned.numpy()[I, J, idx_plus_one] </code></pre> <p>Also note that when doing <code>idx + 1</code> you have to make sure the index does not go out of bounds along the 3rd dimension; the snippet above handles this by clamping to <code>k - 1</code>.</p>
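<p>If you would rather stay entirely in TensorFlow, <code>tf.gather</code> with <code>batch_dims</code> should do the same lookup — a hedged sketch, with the index clipped so the last bin does not run past the axis:</p> <pre><code>idx = tf.argmax(f0_binned, axis=-1)                              # shape [2, 1000]
idx_plus_one = tf.minimum(idx + 1, f0_binned.shape[-1] - 1)      # clamp to the last valid bin
values = tf.gather(f0_binned, idx_plus_one, axis=2, batch_dims=2)  # shape [2, 1000]
</code></pre>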
python|tensorflow
1
6,232
67,542,315
Pandas expanding dataframe returning multiple values on apply
<p>Is there a way I can apply percentile function on multiple percentile values on an expanding dataframe.</p> <pre><code>import numpy as np import pandas as pd a = np.random.rand(1000) df = pd.DataFrame(a,columns=['Data']) val = [25,30] df['25th_Perc'] = df.expanding(min_periods=1).apply(lambda x: np.nanpercentile(x, val, interpolation='nearest'), raw=True) </code></pre> <p>The code works for one value 25 but cant work on list of values [25,30] and throws the error shown below.<a href="https://i.stack.imgur.com/4RtXH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4RtXH.png" alt="enter image description here" /></a></p>
<p>I have only a solution with <code>numpy</code>:</p> <pre class="lang-py prettyprint-override"><code>a = np.tril(df[&quot;Data&quot;].values) a[np.triu_indices(a.shape[0], k=1)] = np.nan p = np.nanpercentile(a, val, interpolation=&quot;nearest&quot;, axis=1) df[[&quot;25th_Perc&quot;, &quot;50th_Perc&quot;]] = p.T </code></pre> <p>For demo purpose only, I took a subset of your data and set percentile to <code>[25, 50]</code>:</p> <pre class="lang-py prettyprint-override"><code>vals = [25, 50] df1 = df.head(5).copy() print(df) Data 0 0.173577 1 0.559380 2 0.634297 3 0.932697 4 0.452523 </code></pre> <p>Get the lower triangle (equiv. <code>df.expanding(min_periods=1)</code>):</p> <pre class="lang-py prettyprint-override"><code>a = np.tril(df1[&quot;Data&quot;].values) print(a) array([[0.17357693, 0. , 0. , 0. , 0. ], [0.17357693, 0.55937968, 0. , 0. , 0. ], [0.17357693, 0.55937968, 0.63429673, 0. , 0. ], [0.17357693, 0.55937968, 0.63429673, 0.93269719, 0. ], [0.17357693, 0.55937968, 0.63429673, 0.93269719, 0.45252274]]) </code></pre> <p>Set <code>nan</code> the upper triangle, diagonal excluded (<code>k=1</code>) to ignore non applicable data:</p> <pre class="lang-py prettyprint-override"><code>a[np.triu_indices(a.shape[0], k=1)] = np.nan print(a) array([[0.17357693, nan, nan, nan, nan], [0.17357693, 0.55937968, nan, nan, nan], [0.17357693, 0.55937968, 0.63429673, nan, nan], [0.17357693, 0.55937968, 0.63429673, 0.93269719, nan], [0.17357693, 0.55937968, 0.63429673, 0.93269719, 0.45252274]]) </code></pre> <p>Compute percentiles:</p> <pre class="lang-py prettyprint-override"><code>p = np.nanpercentile(a, [25, 50], interpolation=&quot;nearest&quot;, axis=1) print(p) array([[0.17357693, 0.17357693, 0.17357693, 0.55937968, 0.45252274], # 25th [0.17357693, 0.17357693, 0.55937968, 0.63429673, 0.55937968]]) # 50th # Row 1 # Row 2 # Row 3 # Row 4 # Row 5 </code></pre> <p>Return to pandas:</p> <pre class="lang-py prettyprint-override"><code>df1[[&quot;25th_Perc&quot;, &quot;50th_Perc&quot;]] = p.T print(df1) Data 25th_Perc 50th_Perc 0 0.173577 0.173577 0.173577 1 0.559380 0.173577 0.173577 2 0.634297 0.173577 0.559380 3 0.932697 0.559380 0.634297 4 0.452523 0.452523 0.559380 </code></pre>
python|pandas|numpy
1
6,233
67,564,125
Item wrong length when use pandas isin to filter column
<p>I'm experienced item wrong length when use pandas isin to filter column</p> <p>Here's my code</p> <p><code>selected_raw_data = raw_data[raw_data.columns.isin(selected['Column'])].copy()</code></p> <p>Error message here</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-28-f7f86ab9946e&gt; in &lt;module&gt; ----&gt; 1 selected_raw_data = raw_data[raw_data.columns.isin(selected['Column'])].copy() /opt/conda/lib/python3.8/site-packages/pandas/core/frame.py in __getitem__(self, key) 2891 # Do we have a (boolean) 1d indexer? 2892 if com.is_bool_indexer(key): -&gt; 2893 return self._getitem_bool_array(key) 2894 2895 # We are left with two options: a single key, and a collection of keys, /opt/conda/lib/python3.8/site-packages/pandas/core/frame.py in _getitem_bool_array(self, key) 2937 ) 2938 elif len(key) != len(self.index): -&gt; 2939 raise ValueError( 2940 f&quot;Item wrong length {len(key)} instead of {len(self.index)}.&quot; 2941 ) ValueError: Item wrong length 396 instead of 362. </code></pre>
<p>Assuming selected['Column'] resolves to a list or list-like (like a series or a column of a dataframe) of column names, you can use:</p> <pre><code>raw_data[selected['Column']].copy() </code></pre> <p>to filter for the selected columns.</p>
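<p>If you do want to keep the <code>isin</code>-based filter on the column labels themselves, select along the column axis with <code>.loc</code> instead of passing the mask as a row indexer (a sketch of the original line, adjusted):</p> <pre><code>selected_raw_data = raw_data.loc[:, raw_data.columns.isin(selected['Column'])].copy()
</code></pre>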
python|pandas|dataframe
1
6,234
34,539,965
Data Frame in Panda with Time series data
<p>I just started learning pandas. I came across this:</p> <pre><code>d = date_range('1/1/2011', periods=72, freq='H') s = Series(randn(len(rng)), index=rng) </code></pre> <p>I understand what the above code does, and I tried it in IPython:</p> <pre><code>import numpy as np from numpy.random import randn import time r = date_range('1/1/2011', periods=72, freq='H') r len(r) [r[i] for i in range(len(r))] s = Series(randn(len(r)), index=r) s s.plot() df_new = DataFrame(data = s, columns=['Random Number Generated']) </code></pre> <p>Is this the correct way of creating a data frame?</p> <p>The next step given is to: return a series where the absolute difference between a number and the next number in the series is less than 0.5 </p> <p>Do I need to find the difference between each random number generated and store only the sets where the abs diff is &lt; 0.5 ? Can someone explain how I can do that in pandas?</p> <p>I also tried to plot the series as a histogram with:</p> <pre><code> df_new.diff().hist() </code></pre> <p>The graph displays the random numbers on the x axis with a y axis from 0 to 18 (which I don't understand). Can someone explain this to me as well?</p>
<p>To give you some pointers in addition to @Dthal's comments:</p> <pre><code>r = pd.date_range('1/1/2011', periods=72, freq='H') </code></pre> <p>As commented by @Dthal, you can simplify the creation of your <code>DataFrame</code> randomly sampled from the normal distribution like so:</p> <pre><code>df = pd.DataFrame(index=r, data=randn(len(r)), columns=['Random Number Generated']) </code></pre> <p>To show only <code>values</code> that differ by less than <code>0.5</code> from the preceding value:</p> <pre><code>diff = df.diff() diff[abs(diff['Random Number Generated']) &lt; 0.5] Random Number Generated 2011-01-01 02:00:00 0.061821 2011-01-01 05:00:00 0.463712 2011-01-01 09:00:00 -0.402802 2011-01-01 11:00:00 -0.000434 2011-01-01 22:00:00 0.295019 2011-01-02 03:00:00 0.215095 2011-01-02 05:00:00 0.424368 2011-01-02 08:00:00 -0.452416 2011-01-02 09:00:00 -0.474999 2011-01-02 11:00:00 0.385204 2011-01-02 12:00:00 -0.248396 2011-01-02 14:00:00 0.081890 2011-01-02 17:00:00 0.421897 2011-01-02 18:00:00 0.104898 2011-01-03 05:00:00 -0.071969 2011-01-03 15:00:00 0.101156 2011-01-03 18:00:00 -0.175296 2011-01-03 20:00:00 -0.371812 </code></pre> <p>Can simplify using <code>.dropna()</code> to get rid of the missing values.</p> <p>The <code>pandas.Series.hist()</code> <a href="http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.Series.hist.html" rel="nofollow noreferrer">docs</a> inform that the default number of <code>bins</code> is <code>10</code>, so that's number of <code>bars</code> you should expect and so it turns out in this case roughly symmetric around zero ranging roughly <code>[-4, +4]</code>.</p> <blockquote> <p>Series.hist(by=None, ax=None, grid=True, xlabelsize=None, xrot=None, ylabelsize=None, yrot=None, figsize=None, bins=10, **kwds) diff.hist()</p> </blockquote> <p><a href="https://i.stack.imgur.com/a5ECh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a5ECh.png" alt="enter image description here"></a></p>
python|pandas|matplotlib|dataframe|histogram
1
6,235
60,169,813
Python: Find roots of 2d polynomial
<p>I have a 2D numpy array C which contains the coefficients of a 2d polynomial, such that the polynomial is given by the sum over all coefficients:</p> <pre><code>c[i,j]*x^i*y^j </code></pre> <p>How can I find the roots of this 2d polynomial? It seems that numpy.roots only works for 1d polynomials.</p>
<p>This is a polynomial in two variables. In general there will be infinitely many roots (think about all the values of x and y that will yield xy=0), so an algorithm that gives you all the roots cannot exist.</p>
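<p>What you can do numerically is fix one variable and solve the resulting one-variable polynomial with <code>numpy.roots</code>; a sketch (assuming <code>C[i, j]</code> is the coefficient of <code>x^i * y^j</code> as in the question):</p> <pre><code>import numpy as np

y0 = 0.5                                   # fix y = y0
# coefficients of p(x, y0) as a polynomial in x: sum_j C[i, j] * y0**j
coeffs_x = C @ (y0 ** np.arange(C.shape[1]))
roots_x = np.roots(coeffs_x[::-1])         # np.roots expects the highest-degree coefficient first
</code></pre> <p>Repeating this over a grid of <code>y0</code> values traces out the zero curve(s) of the polynomial.</p>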
python|numpy|polynomials|equation-solving
2
6,236
49,969,484
sequence tagging task in tensorflow using bidirectional lstm
<p>I am little interested in sequence tagging for NER. I follow the code "<a href="https://github.com/monikkinom/ner-lstm/blob/master/model.py" rel="nofollow noreferrer">https://github.com/monikkinom/ner-lstm/blob/master/model.py</a>" to make my model like below:</p> <pre><code>X = tf.placeholder(tf.float32, shape=[None, timesteps , num_input]) Y = tf.placeholder("float", [None, timesteps, num_classes]) y_true = tf.reshape(tf.stack(Y), [-1, num_classes]) </code></pre> <p>the input is,<br> X: (batch_size,max_sent_length,word_embed_dim) <br>and <br> Y: (batch_size,max_sent_length,number_of_labels)</p> <p>Then I pass the value to a Bi-direction LSTM unit:</p> <pre><code>def BiRNN(x): x=tf.unstack(tf.transpose(x, perm=[1, 0, 2])) def rnn_cell(): cell = tf.nn.rnn_cell.LSTMCell(rnn_size, forget_bias=1,state_is_tuple=True) return cell fw_cell=rnn_cell() bw_cell=rnn_cell() output,_, _ = tf.nn.static_bidirectional_rnn(fw_cell, bw_cell,x, dtype=tf.float32) weight, bias = weight_and_bias(2 * rnn_size, num_classes) output = tf.reshape(tf.transpose(tf.stack(output), perm=[1, 0, 2]), [-1, 2 * rnn_size]) return (tf.matmul(output, weight) + bias) </code></pre> <p>Where, rnn_size = 128</p> <p>Then I am doing the below calculations:</p> <pre><code>logits = BiRNN(X) logits = tf.reshape(tf.stack(logits), [-1, timesteps,num_classes]) prediction = tf.reshape(logits, [-1, num_classes]) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y_true)) optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001) train_op = optimizer.minimize(cost) </code></pre> <p>I took, batch_size = 64 and 30 epochs. <br>But in my model only one label is detected every time. I am not able to point out the problem in my code. Please help.</p>
<p>Please check the dimensions of the tensors <code>y_true</code>, <code>output</code> (in both places), <code>logits</code> and <code>prediction</code>, and verify that they match your expectations.</p>
python|tensorflow|lstm|sequence-to-sequence|named-entity-recognition
0
6,237
50,114,964
Concat consecutive rows by grouping columns
<p>I am trying something in Python to concat <code>ProdID</code> based on ProdCategory. All I need is last two columns <code>MainProdConcat</code> and <code>MainProdConcat_PCOnly</code>.</p> <p>Let me know if its possible</p> <pre><code>OrderN0 ProdID ProdCategory ItemNo ProdType MainItem MainProdConcat MainProdConcat_PConly 123334 1 PC 100 Main 100 1,2,3,4,5,6 1,2,3,4 123334 2 PC 110 Option 100 1,2,3,4,5,6 1,2,3,4 123334 3 PC 120 Option 100 1,2,3,4,5,6 1,2,3,4 123334 4 PC 130 Option 100 1,2,3,4,5,6 1,2,3,4 123334 5 Accessories 140 Option 100 1,2,3,4,5,6 123334 6 Accessories 150 Option 100 1,2,3,4,5,6 123334 7 PC 200 Main 200 7,8,9,10,11 7,8,9,10 123334 8 PC 210 Option 200 7,8,9,10,11 7,8,9,10 123334 9 PC 220 Option 200 7,8,9,10,11 7,8,9,10 123334 10 PC 240 Option 200 7,8,9,10,11 7,8,9,10 123334 11 Accessories 260 Option 200 7,8,9,10,11 for index, row in df_OrderNo_WithBase.iterrows(): orderid = row['Legacy Sales Order Identifier'] dealid = row['Deal ID'] df_Master.loc[(df_Master['OrderNo'] == orderid ) &amp; (df_Master['Deal ID'] == dealid)),'ProductConcatMain'] = df_Master[(df_Master['OrderNo'] == orderid) &amp; (df_Master['Deal ID'] == dealid) ]['ProdID'].str.cat(sep=',') </code></pre>
<p>Given print(df):</p> <pre><code> OrderN0 ProdID ProdCategory ItemNo ProdType MainItem 0 123334 1 PC 100 Main 100 1 123334 2 PC 110 Option 100 2 123334 3 PC 120 Option 100 3 123334 4 PC 130 Option 100 4 123334 5 Accessories 140 Option 100 5 123334 6 Accessories 150 Option 100 6 123334 7 PC 200 Main 200 7 123334 8 PC 210 Option 200 8 123334 9 PC 220 Option 200 9 123334 10 PC 240 Option 200 10 123334 11 Accessories 260 Option 200 </code></pre> <p>Then we can use these to populate 'MainProdConcat' and 'MainProdConcat_PConly':</p> <pre><code>df['MainProdConcat_PConly'] = (df[df.ProdCategory == 'PC'] .groupby([df.ProdType.eq('Main').cumsum()])['ProdID'] .transform(lambda x: ','.join(x.astype(str)))) df['MainProdConcat'] = (df.groupby([df.ProdType.eq('Main').cumsum()])['ProdID'] .transform(lambda x: ','.join(x.astype(str)))) </code></pre> <p>Output print(df):</p> <pre><code> OrderN0 ProdID ProdCategory ItemNo ProdType MainItem MainProdConcat_PConly MainProdConcat 0 123334 1 PC 100 Main 100 1,2,3,4 1,2,3,4,5,6 1 123334 2 PC 110 Option 100 1,2,3,4 1,2,3,4,5,6 2 123334 3 PC 120 Option 100 1,2,3,4 1,2,3,4,5,6 3 123334 4 PC 130 Option 100 1,2,3,4 1,2,3,4,5,6 4 123334 5 Accessories 140 Option 100 NaN 1,2,3,4,5,6 5 123334 6 Accessories 150 Option 100 NaN 1,2,3,4,5,6 6 123334 7 PC 200 Main 200 7,8,9,10 7,8,9,10,11 7 123334 8 PC 210 Option 200 7,8,9,10 7,8,9,10,11 8 123334 9 PC 220 Option 200 7,8,9,10 7,8,9,10,11 9 123334 10 PC 240 Option 200 7,8,9,10 7,8,9,10,11 10 123334 11 Accessories 260 Option 200 NaN 7,8,9,10,11 </code></pre>
python|pandas
0
6,238
64,094,104
How to take one column out of a dataframe in python
<p>THIS PROGRAM IMPORTS A DATAFRAME AND THEN ATTEMPTS TO EXTRACT ONE COLUMN HOWEVER I RECEIVE AN EROR WHEN I TRY TO EXTRACT ONE COLUMN (THE OPEN COLUMN)</p> <p>import tensorflow as tf print(tf.<strong>version</strong>)</p> <h1>IMPORT LIBRARIES</h1> <pre><code>import pandas as pd import numpy as np import io from google.colab import files uploaded = files.upload() apple_all_stock_data = pd.read_csv('apple_all_stock_data.csv') apple_all_stock_data.head() </code></pre> <p>Date Close/Last Volume Open High Low 0 9/25/2020 $112.28 149981400 $108.43 $112.44 $107.67 1 9/24/2020 $108.22 167743300 $105.17 $110.25 $105 2 9/23/2020 $107.12 150718700 $111.62 $112.11 $106.77 3 9/22/2020 $111.81 183055400 $112.68 $112.86 $109.16 4 9/21/2020 $110.08 195713800 $104.54 $110.19 $103.10 ... ... ... ... ... ... ... 122 4/2/2020 $61.23 165933960 $60.09 $61.29 $59.23 123 4/1/2020 $60.23 176218560 $61.63 $62.18 $59.78 124 3/31/2020 $63.57 197002000 $63.90 $65.62 $63 125 3/30/2020 $63.70 167976440 $62.69 $63.88 $62.35 126 3/27/2020 $61.94 204216600 $63.19 $63.97 $61.76 127 rows × 6 columns</p> <h1>HERE I ATTEMPT TO TAKE ONE COLUMN OUT OF THE DATAFRAME BUT ERRORS RESULT</h1> <pre><code>apple_open_price = apple_all_stock_data[['Open']].values </code></pre> <p>#I GET THE ERROR BELOW WHEN I RUN THE LINE ABOVE: KeyError Traceback (most recent call last) in () ----&gt; 1 apple_open_price = apple_all_stock_data[['Open']].values</p> <p>2 frames /usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py in _validate_read_indexer(self, key, indexer, axis, raise_missing) 1638 if missing == len(indexer): 1639 axis_name = self.obj._get_axis_name(axis) -&gt; 1640 raise KeyError(f&quot;None of [{key}] are in the [{axis_name}]&quot;) 1641 1642 # We (temporarily) allow for some missing keys with .loc, except in</p> <p>KeyError: &quot;None of [Index(['Open'], dtype='object')] are in the [columns]&quot;</p> <pre><code></code></pre>
<p>You can extract the column using the <code>iloc</code> indexer, like this:</p> <pre><code>apple_open_price = apple_all_stock_data.iloc[:, 3].values </code></pre> <p>The colon selects all rows and the number after the comma selects the column, remembering that the first column is at position 0.</p> <p>In pandas there are two main ways to retrieve the data we want:</p> <ul> <li>loc</li> <li>iloc</li> </ul> <p>Both are used to select data, but they behave differently.</p> <p>The <code>loc</code> indexer is primarily label-based, but it also accepts a Boolean array. It works like this:</p> <pre><code>df.loc[&lt;rows&gt;, &lt;columns&gt;] </code></pre> <p>and it can be combined with logical conditions to get the result we want, for example:</p> <pre><code>df.loc[(df['Superstar']*100) &gt;= 10] </code></pre> <p>which, on a dataframe of athlete performance, filters the athletes whose probability of being a Superstar is greater than or equal to 10%.</p> <p>The <code>iloc</code> indexer is somewhat simpler: it selects by integer position, using single integers, arrays of integers, or slices. In short, <code>iloc</code> selects rows and columns by number.</p> <p>For more information see: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html</a></p>
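<p>Also, if the original <code>KeyError</code> is caused by stray whitespace in the header row of the CSV (a common reason for <code>'Open'</code> not being found even though it looks present — this is only a guess about the data, it is not visible in the traceback), stripping the column names may let the label-based selection work too:</p> <pre><code>apple_all_stock_data.columns = apple_all_stock_data.columns.str.strip()
apple_open_price = apple_all_stock_data[['Open']].values
</code></pre>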
python|pandas|csv|opencsv
0
6,239
64,165,643
how to find the cells which values are a string type inside a dataframe
<p>I have a dataframe. When I tried to calculate pct_change(), it shows me the error <code>TypeError: unsupported operand type(s) for /: 'str' and 'float'</code>. Then I tried to convert the type to float, and it shows me <code>ValueError: could not convert string to float</code>:</p> <pre><code>unemployment_df['Unemployment_pct_change'] = unemployment_df['Unemployment_Value'].pct_change() unemployment_df['Unemployment_Value']=unemployment_df['Unemployment_Value'].astype(float) </code></pre> <p>But I don't know where the strings are or what kind of strings they are, so I am trying to find the index of the cells whose value is a string instead of a number. How do I do that? Thanks</p>
<p>Pandas doesn't really distinguish between types more granular than <code>object</code>, so you have to inspect the individual values.</p> <pre><code>df[df['col'].apply(lambda x: isinstance(x, str))] </code></pre> <p>will give you the rows that contain strings in column <code>'col'</code>.</p> <p>You could then clean them up however you wish.</p>
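<p>Applied to the column from the question, a quick way to locate the offending cells is a sketch with <code>pd.to_numeric</code>: anything that cannot be parsed as a number gets flagged.</p> <pre><code>s = unemployment_df['Unemployment_Value']
bad = pd.to_numeric(s, errors='coerce').isna() &amp; s.notna()  # values that are not numeric
print(unemployment_df.index[bad])   # index labels of the problem cells
print(s[bad].unique())              # what those strings actually look like
</code></pre>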
python|pandas
1
6,240
64,082,656
Why can't I split the calendar year into 10-day increments correctly?
<p>I have the following code that splits the calendar year into 10-day increments in which the first ten-day increment should be a &quot;1&quot;, the next 10-day increment, &quot;2&quot;, etc.</p> <p>For some reason I only have nine &quot;1s&quot; whereas there should be ten. Could someone help me with this?</p> <pre class="lang-py prettyprint-override"><code>from datetime import timedelta, datetime datetimes = np.arange( datetime(2018,1,1), datetime(2019,1,1), timedelta(days=1) ).astype(datetime) np.array([datetime.timetuple().tm_yday//10+1 for datetime in datetimes]) </code></pre>
<p>Because <code>tm_yday</code> starts with <code>1</code> and not <code>0</code>.</p> <p>You should use this if you want to start counting from <code>1</code>:</p> <pre class="lang-py prettyprint-override"><code>from datetime import timedelta, datetime datetimes = np.arange( datetime(2018,1,1), datetime(2019,1,1), timedelta(days=1) ).astype(datetime) np.array([(datetime_.timetuple().tm_yday - 1)//10+1 for datetime_ in datetimes]) </code></pre>
python|numpy|datetime|series
1
6,241
63,979,191
Problem with changing value of multiple rows to NaN
<p>I have this DataFrame:</p> <pre><code>test = database[['WEATHER']] </code></pre> <p><img src="https://i.stack.imgur.com/VVEda.png" alt="enter image description here" /></p> <p>Some of the values of WEATHER are &quot;Unknown&quot; and &quot;Other&quot;, which don't bring much value to it so I want to change them to NaN. Thus, I try the following code:</p> <pre><code>for i in range(len(test)): if test['WEATHER'][i] == &quot;Other&quot; or test['WEATHER'][i] == &quot;Unknown&quot;: test['WEATHER'][i] = np.nan </code></pre> <p>And this error keeps appearing:</p> <p><img src="https://i.stack.imgur.com/6WcaC.png" alt="enter image description here" /></p> <p>I have been trying to correct it but I haven't found the way to.</p>
<p>Typically, you want to avoid iterating over a pandas <code>DataFrame</code>. Here is how I would do it:</p> <pre><code>&gt;&gt;&gt; df.a 0 Other 1 Unknown 2 BLAH Name: a, dtype: object &gt;&gt;&gt; df.a = np.choose(df.a.isin(['Other', 'Unknown']), [df.a, np.nan]) &gt;&gt;&gt; df.a 0 NaN 1 NaN 2 BLAH Name: a, dtype: object </code></pre> <p><code>isin()</code> checks if each value is in the predefined list <code>['Other', 'Unknown']</code> and <code>np.choose()</code> attributes a value depending on the boolean result of the call to <code>isin()</code>. The result is either the original value <code>df.a</code> or <code>np.nan</code>.</p>
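<p>With the column from the question, the same thing can also be written without <code>np.choose</code>, using <code>replace</code> or <code>mask</code> (equivalent sketches):</p> <pre><code>test['WEATHER'] = test['WEATHER'].replace(['Other', 'Unknown'], np.nan)
# or
test['WEATHER'] = test['WEATHER'].mask(test['WEATHER'].isin(['Other', 'Unknown']))
</code></pre>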
python|numpy|dataframe|for-loop|nan
0
6,242
63,900,788
Compare timestamp with datetime
<p>I have one timestamp from a dataframe and a datetime object, I want to compare them to do a select in a dataframe. My data are as followed:</p> <pre><code>print(type(datetime.datetime.now())) &lt;class 'datetime.datetime'&gt; print(type((df.created_at[0]))) &lt;class 'pandas._libs.tslibs.timestamps.Timestamp'&gt; </code></pre> <p>How can I select specific rows within that dataframe with the datetime object? as follow:</p> <pre><code>df[df.created &gt; datetime.datetime.now()] </code></pre> <p>But it returns me the following error message: <code>TypeError: Cannot compare tz-naive and tz-aware datetime-like objects</code>, any idea on how to solve that? thanks!</p>
<p>Timestamp is a timezone-aware object, while the datetime object you get from <code>datetime.datetime.now()</code> is timezone-naive since you don't specify otherwise, hence the error. You should convert so that they're either both timezone-aware or both timezone-naive.</p> <p>For example, you can call <code>datetime.datetime.now()</code> like this to make it timezone-aware (passing timezone info from timestamp object as an argument):</p> <pre><code>datetime.datetime.now(df.created_at[0].tzinfo) </code></pre>
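<p>In code, either make the comparison value tz-aware (reusing the column's own timezone) or strip the timezone from the column — a sketch:</p> <pre><code># tz-aware "now", using the timezone already attached to the column
now_aware = pd.Timestamp.now(tz=df.created_at.dt.tz)
df[df.created_at &gt; now_aware]

# or drop the timezone information from the column instead
df[df.created_at.dt.tz_localize(None) &gt; pd.Timestamp.now()]
</code></pre>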
python|pandas|datetime
2
6,243
63,871,200
Plotting the Convergence Results of scipy.optimize.differential_evolution
<p>I have two dataframes (df_1, df_2), some variables (A,B,C), a function (fun) and a global, genetic optimiser that finds the maximum value of fun for a given range of A,B,C.</p> <pre><code>from scipy.optimize import differential_evolution df_1 = pd.DataFrame({'O' : [1,2,3], 'M' : [2,8,3]}) df_2 = pd.DataFrame({'O' : [1,1,1, 2,2,2, 3,3,3], 'M' : [9,2,4, 6,7,8, 5,3,4], 'X' : [2,4,6, 4,8,7, 3,1,9], 'Y' : [3,6,1, 4,6,5, 1,0,7], 'Z' : [2,4,8, 3,5,4, 7,5,1]}) # Index df_1 = df_1.set_index('O') df_1_M = df_1.M df_1_M = df_1_M.sort_index() # Fun def fun(z, *params): A,B,C = z # Score df_2['S'] = df_2['X']*A + df_2['Y']*B + df_2['Z']*C # Top score df_Sort = df_2.sort_values(['S', 'X', 'M'], ascending=[False, True, True]) df_O = df_Sort.set_index('O') M_Top = df_O[~df_O.index.duplicated(keep='first')].M M_Top = M_Top.sort_index() # Compare the top scoring row for each O to df_1 df_1_R = df_1_M.reindex(M_Top.index) # Nan T_N_T = M_Top == df_1_R # Record the results for the given values of A,B,C df_Res = pd.DataFrame({'it_is':T_N_T}) # is this row of df_1 the same as this row of M_Top? # p_hat = TP / (TP + FP) p_hat = df_Res.sum() / len(df_Res.index) print(z) return -p_hat[0] # Bounds min_ = 0 max_ = 1 ran_ge = (min_, max_) bounds = [ran_ge,ran_ge,ran_ge] # Params params = (df_1, df_2) # DE DE = differential_evolution(fun, bounds, args=params) </code></pre> <p>It prints out [A B C] on each iteration, for example the last three rows are:</p> <pre><code>[0.04003901 0.50504249 0.56332845] [0.040039 0.5050425 0.56332845] [0.040039 0.50504249 0.56332846] </code></pre> <p>To see how it is converging, how can I <strong>plot</strong> A,B,C against iteration please?</p> <p>I tried to store A,B,C in:</p> <pre><code>df_P = pd.DataFrame({0}) </code></pre> <p>while adding to fun:</p> <pre><code>df_P.append(z) </code></pre> <p>but I got:</p> <pre><code>RuntimeError: The map-like callable must be of the form f(func, iterable), returning a sequence of numbers the same length as 'iterable' </code></pre>
<p>So I am not sure I have found the best way, but I found one. It uses the fact that lists are passed by reference: if you pass the list to the function and modify it there, the modification stays visible for the rest of the programme even though the list is never returned by the function.</p> <pre><code># Params results = [] # this list will hold our results params = (df_1, df_2, results) # add it to the params of the function # now, inside fun, append the output to the list. Instead of the mean, the distance to the origin is used here (treating your 3 values as a 3D vector) p_hat = df_Res.sum() / len(df_Res.index) distance_to_zeros = sum([e**2 for e in z]) ** 0.5 # note 0.5, not 1/2, otherwise operator precedence gives (... ** 1) / 2 results.append(distance_to_zeros) # you can also append z directly # Then after the DE call DE = differential_evolution(fun, bounds, args=params) x = range(0, len(results)) plt.scatter(x, results, alpha=0.5) plt.show() </code></pre>
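<p>Alternatively, <code>scipy.optimize.differential_evolution</code> has a <code>callback</code> hook that is called once per generation with the best parameter vector found so far, which avoids touching <code>fun</code> at all. A sketch reusing <code>fun</code>, <code>bounds</code> and <code>params</code> from the question (it records the best <code>[A, B, C]</code> per generation rather than every single evaluation):</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import differential_evolution

history = []                        # best [A, B, C] after each generation

def record(xk, convergence=None):   # signature expected by scipy's callback hook
    history.append(np.copy(xk))

DE = differential_evolution(fun, bounds, args=params, callback=record)

hist = np.array(history)
for i, name in enumerate(['A', 'B', 'C']):
    plt.plot(hist[:, i], label=name)
plt.xlabel('generation')
plt.legend()
plt.show()
</code></pre>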
python|pandas|matplotlib|genetic-algorithm|scipy-optimize
2
6,244
64,119,007
Move row up and reset index pandas dataframe
<p>I have a dataframe with the following columns. need to sortby tr_date and move the 6th index row to 1st index.</p> <pre><code>original datafarame index tr_date val_date des con cr dr bal 0 05-06-2020 05-06-2020 JH876875 NEFT 0 500 500 1 02-07-2020 02-07-2020 45546 MPS 100 0 400 2 02-07-2020 02-07-2020 45546 IMPS 20 0 380 3 22-07-2020 20-07-2020 AASADD with 200 0 -320 4 28-07-2020 15-07-2020 876876 withdr 0 300 -20 5 03-08-2020 01-08-2020 BCGFD NEFT 200 0 -220 6 02-07-2020 02-09-2020 23 man 500 0 -120 Expected output: index tr_date val_date des con cr dr bal 0 05-06-2020 05-06-2020 JH876875 NEFT 0 500 500 1 02-07-2020 02-09-2020 23 man 500 0 -120 2 02-07-2020 02-07-2020 45546 MPS 100 0 400 3 02-07-2020 02-07-2020 45546 IMPS 20 0 380 4 22-07-2020 20-07-2020 AASADD with 200 0 -320 5 28-07-2020 15-07-2020 876876 withdr 0 300 -20 6 03-08-2020 01-08-2020 BCGFD NEFT 200 0 -220 </code></pre>
<p>This code works for swapping the two rows; the <code>.copy()</code> calls make sure the right-hand side values are materialised before the assignment starts overwriting them:</p> <pre><code>df.iloc[6], df.iloc[1] = df.iloc[1].copy(), df.iloc[6].copy() </code></pre> <p>greetings Jan</p>
python|pandas|dataframe
1
6,245
46,911,163
How to get similar elements of two numpy arrays with a tolerance
<p>I would like to compare values from columns of two different numpy arrays A and B. More specifically, A contains values from a real experiment that I want to match with theoretical values that are given in the third column of B.</p> <p>There are no perfect matches and therefore I have to use a tolerance, e.g. 0.01. For each value in A, I expect 0 to 20 matches in B with respect to the selected tolerance. As a result, I would like to get those lines in B that are within the tolerance to a value in A.</p> <p>To be more specific, here an example:</p> <pre><code>A = array([[ 2.83151742e+02, a0], [ 2.83155339e+02, a1], [ 3.29241719e+02, a2], [ 3.29246229e+02, a3]]) B = array([[ 0, 0, 3.29235519e+02, ...], [ 0, 0, 3.29240819e+02, ...], [ 0, 0, 3.29241919e+02, ...], [ 0, 0, 3.29242819e+02, ...]]) </code></pre> <p>So here all values of B would match A[3,0] and A[4,0] for a tolerance of 0.02.</p> <p>My preferred result would like this with the matched value of A in C[:,0] and the difference between C[:,0] and C[:,2] in C[:,1]:</p> <pre><code>C = array([[ 3.29241719e+02, c0, 3.29235519e+02, ...], [ 3.29241719e+02, c1, 3.29240819e+02, ...], [ 3.29241719e+02, c2, 3.29241919e+02, ...], [ 3.29241719e+02, c3, 3.29242819e+02, ...] [ 3.29242819e+02, c4, 3.29235519e+02, ...], [ 3.29242819e+02, c5, 3.29240819e+02, ...], [ 3.29242819e+02, c6, 3.29241919e+02, ...], [ 3.29242819e+02, c7, 3.29242819e+02, ...]]) </code></pre> <p>Typically, A has shape (500, 2) and B has shape (300000, 11). I can solve it with for-loops, yet it takes ages.</p> <p>What would be the most efficient way for this comparison?</p>
<p>I'd imagine it would be something like</p> <pre><code>i = np.nonzero(np.isclose(A[:,:,None], B[:, 2]))[-1] </code></pre> <p><a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.isclose.html" rel="nofollow noreferrer"><code>np.isclose</code></a> accepts a few different tolerance parameters.</p> <p>The values in <code>B</code> close to the <code>A</code> values would then be <code>B[i, 2]</code></p>
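<p>To actually build the requested <code>C</code> (matched A value, difference, then the B row), both index arrays from <code>np.nonzero</code> can be used — a sketch that only compares the value column <code>A[:, 0]</code> against <code>B[:, 2]</code> and uses an absolute tolerance of 0.02 as in the example:</p> <pre><code>rows_A, rows_B = np.nonzero(np.isclose(A[:, 0, None], B[:, 2], rtol=0, atol=0.02))
C = np.column_stack([A[rows_A, 0],                 # matched value from A
                     A[rows_A, 0] - B[rows_B, 2],  # difference
                     B[rows_B]])                   # the full matching rows of B
</code></pre>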
arrays|python-3.x|performance|numpy|comparison
1
6,246
32,929,318
Is there a way to test an SQLAlchemy Connection?
<p>I'm using SQLAlchemy to connect to write a pandas DataFrame to a MySQL database. Early on in my code I create an SQLAlchemy engine:</p> <pre><code>engine = create_my_sqlalchemy_connection() </code></pre> <p>I execute some queries, do some calculations, and then try to use that same engine to write to the database a little later:</p> <pre><code>df.to_sql('my_table', engine, if_exists='append', index=False) </code></pre> <p>Sometimes this works, and sometimes the connection is lost by the time the code is ready to write to the DB, and there is an error.</p> <p>I could do a try, except and create a new connection if needed:</p> <pre><code>try: df.to_sql('my_table', engine, if_exists='append', index=False) except: engine = create_my_sqlalchemy_connection() df.to_sql('my_table', engine, if_exists='append', index=False) </code></pre> <p>However, I thought I'd reach out and see if anyone knows of a better way (e.g. if there is some SQLAlchemy method that I am unaware of for testing to see if the connection still exists).</p>
<p>You can have SQLAlchemy check for the liveness of the connection with the parameter <code>pool_pre_ping</code>: <a href="https://docs.sqlalchemy.org/en/13/core/engines.html#sqlalchemy.create_engine.params.pool_pre_ping" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/13/core/engines.html#sqlalchemy.create_engine.params.pool_pre_ping</a> </p> <blockquote> <p>if True will enable the connection pool “pre-ping” feature that tests connections for liveness upon each checkout.</p> </blockquote> <p>Simply enable it when you create your engine.</p>
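<p>A minimal sketch (the connection URL is only a placeholder, not taken from the question):</p> <pre><code>from sqlalchemy import create_engine

engine = create_engine('mysql+pymysql://user:password@host/dbname', pool_pre_ping=True)
df.to_sql('my_table', engine, if_exists='append', index=False)
</code></pre>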
python|pandas|sqlalchemy
4
6,247
38,630,716
Issue creating a Numpy NDArray from PyArray_SimpleNewFromData
<p>I try to do a python wrapper to bind some C++ functions and types to python. My issue is when I try to convert a custom matrix type to a numpy ndarray. The most convincing solution is to use <code>PyArray_SimpleNewFromData</code>.</p> <p>To test its behaviour, as I didn't manage to do what I wanted I tried to implement a simple test:</p> <pre><code>PyObject* ConvertToPython(...) { uint8_t test[10] = {12, 15, 82, 254, 10, 32, 0, 8, 127, 54}; int32_t ndims = 1; npy_intp dims[1]; dims[0] = 10; int32_t typenum = (int32_t)NPY_UBYTE; PyObject* python_object = PyArray_SimpleNewFromData(ndims, dims, typenum, (void*)test); Py_XINCREF(python_object); return python_object; } </code></pre> <p>And then I got in python these results:</p> <pre><code>type(test) = &lt;type 'numpy.ndarray'&gt; test.ndim = 1 test.dtype = uint8 test.shape = (10,) </code></pre> <p>But the values inside the array are: </p> <pre><code>test.values = [ 1 0 0 0 0 0 0 0 80 8] </code></pre> <p>I cannot figure out, what am I doing wrong ? And I am not very experienced doing a python Wrapper so any help would be appreciable !</p>
<p>I would try with an array that has been allocated by <code>malloc</code>, and then perhaps set the <code>OWNDATA</code> flag (<code>NPY_ARRAY_OWNDATA</code> in the C API) so the array frees the buffer itself and you avoid a memory leak.</p> <p>At least the garbage data can be explained if the <code>numpy.ndarray</code> instance does not copy the data but just stores a pointer to the supplied array: after the function returns, a pointer to the stack-allocated array points to memory that may change whenever the stack is reused.</p>
python|c++|numpy|wrapper
3
6,248
63,165,778
Numpy Array to Rust by ndpointer, fails in Windows (works on Linux)
<p>Objective: Pass an np.ascontiguousarray to a Rust function via ctypes. Rust makes various changes to the array in place. Process continues in Python. the code is tested an runs as expected in a Linux environment (Built in rust-cargo stable on Linux, called from Python 3.8 in Manjaro, 4.19 Kernel), but raises the error: <strong>OSError: exception: access violation reading 0xFFFFFFFFFFFFFFFF</strong> (see below for Windows build conditions)</p> <p>The (simplified) code:</p> <pre><code>#python: import ctypes from numpy.ctypeslib import ndpointer import numpy as np ext_lib_path = &quot;extlib.dll&quot; #in the windows version ext_lib = ctypes.CDLL(ext-lib-path) process_array = ext_lib.proc_array process_array.argtypes = [ ndpointer(ctypes.c_double), ctypes.c_size_t ] process_array.restype = ctypes.c_size_t #other code builds np array of 2xn float64, called src_ar c_array = np.ascontiguousarray(src_ar) result_count = process_array(c, c.size) </code></pre> <p>Actual rust function is more involved. This tiny snip is enough to prove it works in Linux, while raising the exception in Windows</p> <pre><code>//Rust: #[no_mangle] pub extern &quot;C&quot; fn proc_array(data: &amp;mut [f64], count : usize) -&gt; usize { println!(&quot;In Windows Array Test: received {} items...&quot;, count); let e = count - 1; // Next Line is where the exception is raised: println!(&quot;Start &amp; End: {:.4}, {:.4}&quot;, data[0], data[e]); data[0] += 200.0; data[e] *= 2.0; println!(&quot;Start &amp; End: {:.4}, {:.4}&quot;, data[0], data[e]); let pairs : usize = count / 2; pairs } </code></pre> <p>I know that the exception is raised in the line where it first tries to read <code>data[0]</code> (i ran also some even shorter versions of this involving also eg <code>let x :f64 = data[0]</code> to demonstrate it is the first read operation on <code>data[0]</code> that raises the exception.)</p> <p>Also known:<br /> The windows version of this is compiled under rust-cargo in windows. The behavior is the same if compiled with the windows-gnu, or the windows-msvc toolchains. In all cases: <code>print(hex(c.__array_interface__['data'][0]))</code> shows the address of <code>c_array</code> is for example <code>0x225108514c0</code>, something expected, certainly not 0xFFFFFFFFFFFFFFF (which points to the MOON, certainly nowhere in my 32GB of ram...).</p> <p>My conclusion is somehow Python in windows is passing the pointer differently than in Linux and I need to pass this pointer differently when under windows, but I have found nothing that answers this exact point in my searches.</p>
<p>Following Jmb's suggestion of <a href="http://jakegoulding.com/rust-ffi-omnibus/slice_arguments/" rel="nofollow noreferrer">http://jakegoulding.com/rust-ffi-omnibus/slice_arguments/</a> and</p> <ul> <li><a href="https://doc.rust-lang.org/std/slice/fn.from_raw_parts_mut.html" rel="nofollow noreferrer">https://doc.rust-lang.org/std/slice/fn.from_raw_parts_mut.html</a>,</li> <li>'help' responses from the Rust compiler</li> </ul> <p>the following accomplishes the stated goal of <em>Pass the np.ascontiguousarray to Rust such that it can be mutated with the changes available to the python caller</em>, with the same code serving a windows caller and a linux caller</p> <pre><code> struct Node { x : f64, y : f64, // ... (real version has additional fields used elsewhere) } #[no_mangle] pub extern &quot;C&quot; fn array_test(dptr: *mut f64, count : usize) -&gt; usize { println!(&quot;In Windows Array Test: received {} items...&quot;, count); let data : &amp;mut[f64] = unsafe { assert!(!dptr.is_null()); std::slice::from_raw_parts_mut(dptr, count) }; let pairs : usize = count / 2; // populate the structs let mut nodes : Vec&lt;Node&gt; = Vec::with_capacity(pairs); for i in (0..count).filter(|x| (x % 2 == 0)) { nodes.push(Node { x: data[i], y : data[i+1] } ); } // actual detail of the changes made to the data // not relevant to this question // write x &amp; y's back to the data buffer for i in 0..pairs { data[i * 2] = nodes[i].x; data[(i * 2) + 1] = nodes [i].y; } // placeholder return: pairs } </code></pre> <p>This tested in the two environments Win 10, plain linux (within the systems I use... sorry I do not have the ability to test this approach on other configurations just now)</p>
numpy|rust|ffi
1
6,249
67,950,144
extract headers from dataframe's column containing both headers and values
<p>I am trying to read an excel file that has a column that consists of both numerical information and headers. Below I enclose the screenshot of this excel file: <a href="https://i.stack.imgur.com/ZOuOq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZOuOq.png" alt="excel file" /></a></p> <p>As you can see, in the column &quot;Specification&quot; I have rows with numbers and rows with the string &quot;Model ...&quot;. I need to load this file and access information about every model, for example extract the number of Model 1 blue fountain pens sold in sets of 2 (cell G13 on the screenshot).</p> <p>What I tried was to load the table through <code>pd.read_excel(path)</code>, but I have no idea how to group this data into data about particular models.</p> <p>I also tried this:</p> <pre><code>data = pd.read_excel(path, header=[5,6])
data = pd.DataFrame(data, index=data['Specification'])
</code></pre> <p>Unfortunately the obtained df consisted only of NaNs.</p>
<p>Aside from reading the excel file, the main issue you'll have here is that the specification column has repeated values, so if you set it as an index and try getting <code>2</code>, it's not going to know which model to return.</p>
<p>To load the data:</p>
<pre><code>df = pd.read_excel(path, header=[5,6], sheet_name='Brand1', na_values='-')
</code></pre>
<ul>
<li><code>sheet_name</code> lets you specify the sheet, and thus make sure you get the right one</li>
<li><code>na_values</code> is interesting as it lets you remove all the values marked <code>-</code> in your sheet.</li>
</ul>
<p>Now let's try and build a better index. We'll use the first 2 columns:</p>
<pre><code>&gt;&gt;&gt; idx_cols = df[['Specification', 'Code']].droplevel(1, axis='columns')
&gt;&gt;&gt; idx_cols
  Specification   Code
0       Model 1  865.0
1             1    NaN
2             2    NaN
</code></pre>
<p><code>.droplevel()</code> lets you remove that nasty level with <code>Unnamed: </code> values that comes from the merged cells.</p>
<p>Now basically, where <code>Code</code> is present, we have the model names, otherwise we have the quantities. So let's use that to build a 2-level index:</p>
<pre><code>&gt;&gt;&gt; quantities = idx_cols['Specification'].where(idx_cols['Code'].isna(), 'Total')
&gt;&gt;&gt; models = idx_cols['Specification'].where(idx_cols['Code'].notna()).ffill()
&gt;&gt;&gt; new_idx = pd.MultiIndex.from_arrays([models, quantities])
&gt;&gt;&gt; new_idx
MultiIndex([('Model 1', 'Total'),
            ('Model 1',       1),
            ('Model 1',       2)],
           names=['Specification', 'Specification'])
</code></pre>
<p>If we didn't have <code>Code</code>, we could perhaps have used <code>idx_cols['Specification'].str.startswith('Model ')</code></p>
<p>Now we can assign this index to <code>df</code> and see either the « global » columns, or the columns per pen type:</p>
<pre><code>&gt;&gt;&gt; df.index = new_idx
&gt;&gt;&gt; all_pen_types = df[['Code', 'Total', 'Black', 'Blue']].droplevel(1, axis='columns')
&gt;&gt;&gt; all_pen_types
                              Code  Total  Black  Blue
Specification Specification
Model 1       Total          865.0     20     10    10
              1                NaN     20     10    10
              2                NaN     20     10    10
&gt;&gt;&gt; pen_type = df[['Fountain', 'Pen']]
&gt;&gt;&gt; pen_type
                            Fountain            Pen
                               Total Black Blue Total Black Blue
Specification Specification
Model 1       Total               20    10   10   NaN   NaN  NaN
              1                   20    10   10   NaN   NaN  NaN
              2                   20    10   10   NaN   NaN  NaN
</code></pre>
<p>So now your question becomes easy:</p>
<ul>
<li>blue fountain pens means column <code>('Fountain', 'Blue')</code> in <code>pen_type</code></li>
<li>Model 1 in set of 2 means <code>('Model 1', 2)</code> in the index</li>
</ul>
<pre><code>&gt;&gt;&gt; pen_type.loc[('Model 1', 2), ('Fountain', 'Blue')]
10.0
</code></pre>
python|excel|pandas|dataframe|pandas-groupby
1
6,250
68,008,609
Numpy: use array of indices to replace values in another array
<p>I have the following two bidimensional arrays:</p>
<pre><code>np.random.seed(1)
a = np.random.normal(1,11,(5,5))
b = np.random.randint(0,5,(2,2))
print(a)
print(b)
</code></pre>
<p>which yields this:</p>
<pre><code>[[ 18.867799    -5.72932055  -4.80988927 -10.80265484  10.51948392]
 [-24.31692567  20.19292941  -7.37327591   4.50943006  -1.74307413]
 [ 17.08318731 -21.6615478   -2.54658924  -3.2245979   13.47146387]
 [-11.09880394  -0.89671028  -8.6564426    1.46435121   7.41096735]
 [-11.10681095  13.59196081  10.91749793   6.52743773  10.90941544]]

[[2 1]
 [0 1]]
</code></pre>
<p>Now imagine that each row in <code>b</code> contains the indices <code>(num_row, num_column)</code> of values that I want to change in <code>a</code> to <code>0.</code>, like this:</p>
<pre><code>[[ 18.867799     0.          -4.80988927 -10.80265484  10.51948392]
 [-24.31692567  20.19292941  -7.37327591   4.50943006  -1.74307413]
 [ 17.08318731   0.          -2.54658924  -3.2245979   13.47146387]
 [-11.09880394  -0.89671028  -8.6564426    1.46435121   7.41096735]
 [-11.10681095  13.59196081  10.91749793   6.52743773  10.90941544]]
</code></pre>
<p>What expression should I use to get the previous result? Thanks.</p>
<p>Maybe not the sexiest answer, but if it doesn't have to be fast you could use a vanilla <code>for</code> loop:</p>
<pre><code>for i in b:
    a[i[0], i[1]] = 0
</code></pre>
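<p>If you do want a vectorized version, integer array indexing does the same thing in one step (this assumes <code>b</code> holds (row, column) pairs, as in the question):</p>
<pre><code>a[b[:, 0], b[:, 1]] = 0
</code></pre>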
python|numpy
0
6,251
67,963,735
Python: Is there a direct simple way to delete a row of a .csv file without the read-delete-rewrite process?
<p>I have a .csv file that includes hundreds of millions of rows (yes, big data), and I want to use Python to delete the last row of it. I do know some methods that follow the read-delete-rewrite process. For example, use the <code>pandas</code> library: <code>pd.read_csv()</code> to read it first, <code>.drop()</code> to drop the last row, and then <code>.to_csv()</code> to overwrite/rewrite the file. This works, but it is too slow, as this file includes hundreds of millions of rows ... So, is there a simple direct method that can work faster for such big data without these traditional three steps? Thanks!</p>
<p>I would not use Python at all. Just use Unix command-line tools. <a href="https://superuser.com/a/543959">Here's an example</a> using the <code>head</code> command to skip the nth last line. That being said, if you want to do anything more complex than skipping the last line, then you should put this file into a database, as the commenter above recommended. Doing anything meaningful with data this size is not feasible in Python - you need a database.</p>
python|pandas|dataframe|csv|data-processing
0
6,252
41,353,885
Theano function using individual elements of input
<p>I am trying to build a Theano function that takes a <code>T.vector</code> of Euler angles as input and returns a directional vector corresponding to those Euler angles. First, I take the sines and cosines of each element of the vector, then I arrange these into a rotation matrix. Finally, I multiply the directional vector <code>[1, 0, 0]</code> by this rotation matrix. The problem I am running into is that I can't multiply this NumPy array by the rotation matrix.</p> <p>This is my code:</p> <pre><code>import theano.tensor as T import theano import numpy as np euler_angles = T.vector('euler_angles', dtype=theano.config.floatX) origin_vec = theano.shared(np.asarray([1, 0, 0], dtype=theano.config.floatX)) sinx = T.sin(euler_angles[0]) siny = T.sin(euler_angles[1]) sinz = T.sin(euler_angles[2]) cosx = T.cos(euler_angles[0]) cosy = T.cos(euler_angles[1]) cosz = T.cos(euler_angles[2]) # Create the rotation matrix rot_matrix = np.asarray([ [cosy*cosz, -1*sinz, cosz * siny], [(sinx*siny)+(cosx*cosy*sinz), cosx*cosz, (-1*cosy*sinx)+(cosx*siny*sinz)], [(-1*cosx*siny)+(cosy*sinx*sinz), cosz*sinx, (cosx*cosy)+(sinx*siny*sinz)] ]) vector = T.dot(origin_vec, rot_matrix) get_vector = theano.function([euler_angles], vector) </code></pre> <p>The second-to-last line throws this error:</p> <pre><code>AsTensorError: ('Cannot convert [[Elemwise{mul,no_inplace}.0 Elemwise{mul,no_inplace}.0\n Elemwise{mul,no_inplace}.0]\n [Elemwise{add,no_inplace}.0 Elemwise{mul,no_inplace}.0\n Elemwise{add,no_inplace}.0]\n [Elemwise{add,no_inplace}.0 Elemwise{mul,no_inplace}.0\n Elemwise{add,no_inplace}.0]] to TensorType', &lt;type 'numpy.ndarray'&gt;) </code></pre> <p>I can't think of any way to create this rotation matrix through matrix operations on the Euler angles. How can I create this function in a format that Theano can compile?</p>
<p>use <code>theano.tensor.stacklists()</code>.</p> <pre><code>rot_matrix = T.stacklists([[...], ...]) </code></pre>
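<p>Spelled out against the question's own expressions, the end of the snippet would look something like this (a sketch reusing the sin/cos variables already defined above):</p>
<pre><code>rot_matrix = T.stacklists([
    [cosy*cosz, -1*sinz, cosz*siny],
    [(sinx*siny)+(cosx*cosy*sinz), cosx*cosz, (-1*cosy*sinx)+(cosx*siny*sinz)],
    [(-1*cosx*siny)+(cosy*sinx*sinz), cosz*sinx, (cosx*cosy)+(sinx*siny*sinz)]
])

vector = T.dot(origin_vec, rot_matrix)
get_vector = theano.function([euler_angles], vector)
</code></pre>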
python|numpy|graphics|rotation|theano
2
6,253
41,399,481
How do you decode one-hot labels in Tensorflow?
<p>Been looking, but can't seem to find any examples of how to decode or convert back to a single integer from a one-hot value in TensorFlow.</p> <p>I used <code>tf.one_hot</code> and was able to train my model but am a bit confused on how to make sense of the label after my classification. My data is being fed in via a <code>TFRecords</code> file that I created. I thought about storing a text label in the file but wasn't able to get it to work. It appeared as if <code>TFRecords</code> couldn't store text strings, or maybe I was mistaken.</p>
<p>You can find out the index of the largest element in the matrix using <a href="https://www.tensorflow.org/api_docs/python/math_ops/sequence_comparison_and_indexing#argmax" rel="noreferrer"><code>tf.argmax</code></a>. Since your one hot vector will be one dimensional and will have just one <code>1</code> and other <code>0</code>s, This will work assuming you are dealing with a single vector.</p> <pre><code>index = tf.argmax(one_hot_vector, axis=0) </code></pre> <p>For the more standard matrix of <code>batch_size * num_classes</code>, use <code>axis=1</code> to get a result of size <code>batch_size * 1</code>.</p>
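<p>As a quick sanity check, a small sketch of the round trip (in TF 2.x this runs eagerly; on TF 1.x you would need to evaluate the tensors in a session):</p>
<pre><code>import tensorflow as tf

labels = tf.constant([2, 0, 3])
one_hot = tf.one_hot(labels, depth=4)   # shape (3, 4)
decoded = tf.argmax(one_hot, axis=1)    # back to [2, 0, 3]
</code></pre>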
python|tensorflow|machine-learning|deep-learning|one-hot-encoding
25
6,254
61,252,660
How do I parallelize .apply in pandas on string?
<p>I realize this question might've been asked before, but I didn't find a solution that works specifically for strings and is relatively simple.</p> <p>I have a data frame with a zip code column, and I use a remote API to fetch details about each zip code. What I'm trying to do is parallelize the data fetching across multiple threads.</p> <p>Simple example:</p> <pre class="lang-py prettyprint-override"><code>def get_cities_by_zip_code(zip):
    resp = requests.post(geo_svc_url, json={'query': """query GetZipCodeInformation($zip: Float!) {
  zipCode(zip: $zip) {
    ....
  }
}""", 'variables': {'zip': zip}})

    return resp.json()['data']['zipCode']

def location_options(df):
    resp = get_cities_by_zip_code(df['Zip code'])
    if resp is not None:
        df['City'] = resp['preferredName']
        df['Population'] = (next(x for x in resp['places'] if x['type'] == 'city') or { 'population': 'n/a' })['population']

    return df

def make_df():
    # A function that generates the initial dataframe

df = make_df()
</code></pre> <p>Then I have to apply <code>location_options</code> on <code>df</code> in parallel. I tried a couple of solutions to achieve that. For example:</p> <ol> <li>Via <code>multiprocessing</code></li> </ol> <pre><code>num_partitions = 20 #number of partitions to split dataframe
num_cores = 8 #number of cores on your machine

def parallelize_dataframe(df, func):
    df_split = np.array_split(df, num_partitions)
    pool = Pool(num_cores)
    df = pd.concat(pool.map(func, df_split))
    pool.close()
    pool.join()
    return df

df = parallelize_dataframe(df, location_options)
</code></pre> <p>It doesn't work (this is not the full stack trace):</p> <pre><code>multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
    return list(map(*args))
TypeError: Object of type Series is not JSON serializable
</code></pre> <ol start="2"> <li><code>swifter</code> - doesn't work with strings for some reason; it runs, but with only one thread.</li> </ol> <p>Whereas the simple</p> <pre><code>df = df.apply(location_options, axis=1)
</code></pre> <p>works just fine. But it's single-threaded.</p>
<p>I may have found a solution from the related post.</p> <p><a href="https://stackoverflow.com/questions/45545110/how-do-you-parallelize-apply-on-pandas-dataframes-making-use-of-all-cores-on-o/55643414#55643414">This one</a> worked for me, while others didn't. I also had to do this: <a href="https://github.com/darkskyapp/forecast-ruby/issues/13" rel="nofollow noreferrer">https://github.com/darkskyapp/forecast-ruby/issues/13</a></p>
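<p>In case the links above go stale, here is a rough sketch of the usual pattern, adapted from the question's own code (an assumption, not a tested solution): the row-wise function has to be applied with <code>.apply(..., axis=1)</code> inside each partition, otherwise <code>location_options</code> receives a whole column (a Series), which is what triggered the &quot;Object of type Series is not JSON serializable&quot; error.</p>
<pre><code>import numpy as np
import pandas as pd
from multiprocessing import Pool

def apply_chunk(chunk):
    # run the per-row function on every row of this partition
    return chunk.apply(location_options, axis=1)

def parallelize_dataframe(df, func=apply_chunk, num_partitions=20, num_cores=8):
    chunks = np.array_split(df, num_partitions)
    with Pool(num_cores) as pool:
        return pd.concat(pool.map(func, chunks))

df = parallelize_dataframe(df)
</code></pre>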
python|pandas|dataframe
0
6,255
61,360,663
populating empty dataframe from a list in python
<p>I need to populate a dataframe from a list.</p>
<pre><code>lst=[1,"name1",10,2,"name2",2,"name2",20,3]
df=pd.DataFrame(columns=['a','b','c'])
j=0
for i in range(len(list(df.columns))-1):
     for t,v in enumerate(lst):
        col_index=j%3
        df.iloc[i,col_index]=lst[t]
        j=j+1
</code></pre>
<p>The above code is giving me an error.</p>
<p>I want df to be the following:</p>
<pre><code>a    b       c
1    name1   10
2    name2   20
3    NaN     NaN
</code></pre>
<p>I have tried this, but it is giving me the following error: IndexError: single positional indexer is out-of-bounds</p>
<p>Create a list of dictionaries <code>[{key:value, key:value}, {key:value, key:value}, {key:value, key:value}]</code></p> <p>Add this directly as a dataframe. You can also control what is added this way, by making a function and passing data to it as the dictionary is built.</p> <p>You can achieve this using itertools cycle if the rows always come in the correct order relative to the columns.</p> <p>I assume that <code>3, name3, 30</code> were intended and that the list you should have looks like this:</p> <pre><code>cols = ['a','b','c']
rows = [1, "name1", 10, 2,"name2", 20, 3, "name3", 30]
</code></pre> <p>And using the power of itertools <a href="https://docs.python.org/3/library/itertools.html#itertools.cycle" rel="nofollow noreferrer">https://docs.python.org/3/library/itertools.html#itertools.cycle</a></p> <pre><code>cycle('abc') --&gt; a b c a b c a b c a b c ...
</code></pre> <p>I think this code can help you.</p> <pre><code>import itertools
import pandas as pd

def parse_data(data):
    if data:
        pass #do something.
    return data

cols = ['a','b','c']
rows = [1, "name1", 10, 2,"name2", 20, 3, "name3", 30]

d = [] # Temp list for dataframe to hold the dictionaries of data.
e = {} # Temp dict to fill rows &amp; cols for each cycle.

for x, y in zip(itertools.cycle(cols), rows): # cycle through the cols but not the rows.
    y = parse_data(y) # do any filtering or removals here.
    if x == cols[0]: # the first col triggers the append and reset of the dictionary
        e = {x:y} # re init the temp dictionary
        d.append(e) # append to temp df list
    else:
        e.update({x:y}) # add other elements
    print(e)

print(d)
df=pd.DataFrame(d) # create dataframe
print(df)

"""
   a      b   c
   1  name1  10
   2  name2  20
   3  name3  30
"""
</code></pre>
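<p>If the flat list always arrives in complete groups matching the number of columns, a shorter sketch (an alternative, not part of the original answer) is to chunk it directly:</p>
<pre><code>import pandas as pd

cols = ['a', 'b', 'c']
rows = [1, "name1", 10, 2, "name2", 20, 3, "name3", 30]

# slice the flat list into consecutive rows of len(cols) items each
df = pd.DataFrame([rows[i:i + len(cols)] for i in range(0, len(rows), len(cols))], columns=cols)
</code></pre>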
python|pandas
1
6,256
68,513,904
Get line count of a specific column and get the value of that specific column using row number
<p>Python, pandas, reading from a csv file.</p>
<p>How do I get only the TMI value from a specific row?</p>
<p>I mean by using the ROW number and a single INDEX or COLUMN.</p>
<p>For example, get only the TMI value 17 or 20, see how many TMI values there are, and get the TMI line count.</p>
<pre><code>import pandas as pd

with open('./Essentials/test2.csv','r') as f:
    weather_df = pd.read_csv(f)
</code></pre>
<pre><code>STAT,NAME,DATE,TA,TM,TMI
test123,&quot;ASDDD&quot;,10115,23,29,17
test123,&quot;ASDDD&quot;,20115,23,29.2,20
test123,&quot;ASDDD&quot;,30115,24,29.9,20
test123,&quot;ASDDD&quot;,40115,23,26.1,13
test123,&quot;ASDDD&quot;,50115,20,23.7,18
test123,&quot;ASDDD&quot;,60115,20,24.3,13
test123,&quot;ASDDD&quot;,70115,17,22.5,13
test123,&quot;ASDDD&quot;,80115,17,22.9,12
test123,&quot;ASDDD&quot;,90115,18,23.3,13
test123,&quot;ASDDD&quot;,100115,19,13.2,13
test123,&quot;ASDDD&quot;,110115,16,21,11
test123,&quot;ASDDD&quot;,120115,19,24.5,11
test123,&quot;ASDDD&quot;,130115,18,26.5,12
test123,&quot;ASDDD&quot;,150115,18,28.1,13
,&quot;ASDDD&quot;,160115,21,28,14.2
,&quot;ASDDD&quot;,170115,18,24,
,&quot;ASDDD&quot;,180115,14,16,
,,190115,14,13,
,,200115,15,18,
</code></pre>
<p>From the csv file above I want to find, for example, that STAT has 14 rows or NAME has 17 lines, and then fetch a value by row number, e.g. take the &quot;TMI&quot; value on line 8 and put it into a variable.</p>
<p>To get the value of a specific cell:</p>
<pre><code>weather_df.at[row index, 'column name']
</code></pre>
<p>For example, the following will give you a value of <code>17</code>:</p>
<pre><code>weather_df.at[0, 'TMI']
</code></pre>
<p>To get the number of cells excluding NaN, use <code>.count()</code>:</p>
<pre><code>weather_df['TMI'].count()
</code></pre>
<p>Without specifying a column it will return the non-NaN row count for each column individually.</p>
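<p>If &quot;line 8&quot; is meant positionally rather than by index label, <code>.iloc</code> works too (a small sketch assuming the 8th data row is position 7):</p>
<pre><code>tmi_line_8 = weather_df['TMI'].iloc[7]    # 12 in the sample data
stat_count = weather_df['STAT'].count()   # 14 non-empty STAT rows
</code></pre>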
python|pandas|csv|datatable
0
6,257
68,486,220
Using tensorflow in ML, why my kernel restarts constantly?
<p>From the kernel:</p>
<pre><code>In[1]: runfile('/home/yannick/Documents/ML/MNIST-reco/neural_network_V1.py', wdir='/home/yannick/Documents/ML/MNIST-reco')
Restarting kernel...

In [1]:
</code></pre>
<p>The thing is that when I run the code, instead of working, it restarts the kernel and nothing happens. Please help me.</p>
<pre><code>import tensorflow as tf
import input_data
import matplotlib.pyplot as plt
import matplotlib.cm as cm

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

img = mnist.train.images[0]
img = img.reshape((28,28))

plt.imshow(img, cmap=cm.Greys)
plt.show()

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10]) # ex : [0 0 0 1 0 0 0 0 0 0 0]

W = tf.get_variable('weights', [784,10])
b = tf.get_variable('bias', [10])

y = tf.add(tf.matmul(x,W), b)

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_,logits=y))

train_step = tf.train.GradientDescentOptimizer(0.001).minimize(cross_entropy)

correct_pred = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

sess = tf.Session()
sess.run(tf.global_variables_initializer())

def feed_dict(is_training):
    if is_training :
        batch_x, batch_y = mnist.train.next_batch(100)
    else:
        batch_x, batch_y = mnist.test.images,mnist.test.labels
    return {x: batch_x, y_: batch_y}

for i in range(100):
    if i % 10 == 0:
        acc = sess.run(accuracy, feed_dict=feed_dict(True))
        print('étape %d: Précision du training:%f' % (i, acc))
    else :
        sess.run([train_step], feed_dict=feed_dict(True))

print('Précision Test: ', sess.run(accuracy,feed_dict=feed_dict(False)))
</code></pre>
<p>I needed to delete my tensorflow lib and reinstall it using anaconda, and then it worked again. It's just a matter of re-importing the libs in anaconda and relaunching spyder. HTH</p>
python|tensorflow
0
6,258
53,336,497
Matplotlib: Stacked area chart for all the groups
<p>I am trying to create a stacked area chart for all the groups in my data on a shared timeline x-axis. My data looks like the following:</p>
<pre><code>dataDate    name    prediction
2018-09-30  A   2.309968
2018-10-01  A   1.516652
2018-10-02  A   2.086062
2018-10-03  A   1.827490
2018-09-30  B   0.965861
2018-10-01  B   6.521989
2018-10-02  B   9.219777
2018-10-03  B   17.434451
2018-09-30  C   6.890485
2018-10-01  C   6.106187
2018-10-02  C   5.535563
2018-10-03  C   1.913100
</code></pre>
<p>And I am trying to create something like the following: <a href="https://i.stack.imgur.com/duj0K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/duj0K.png" alt="enter image description here"></a></p>
<p>The x-axis will be the time series. Please help me recreate this. Thanks</p>
<p>Say your data is stored in a dataframe named <code>df</code>. Then you can pivot the dataframe and plot it directly. Make sure your dates are actual dates, not strings.</p> <pre><code>df["dataDate"] = pd.to_datetime(df["dataDate"]) df.pivot("dataDate", "name", "prediction").plot.area(); </code></pre> <p><a href="https://i.stack.imgur.com/7qDcQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7qDcQ.png" alt="enter image description here"></a></p>
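<p>One small, version-dependent caveat (an addition): recent pandas releases expect keyword arguments for <code>pivot</code>, so the equivalent call would be:</p>
<pre><code>df.pivot(index="dataDate", columns="name", values="prediction").plot.area();
</code></pre>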
python|pandas|matplotlib|stacked-chart
4
6,259
65,554,263
How do we output a Panda dataframe via Python for use as a .csv?
<p>What's the quickest way to output a Pandas dataframe via Python for use as a .csv?</p> <p>My output is called 'dframe' and it is really simple. Here is some code for context:</p> <p><code>dframe = df.head()</code></p>
<p>You can try</p> <pre><code>dframe.to_csv(r'your path') </code></pre>
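<p>A slightly fuller sketch (the filename is just a placeholder; <code>index=False</code> is optional and simply keeps the row index out of the file):</p>
<pre><code>dframe.to_csv('output.csv', index=False)
</code></pre>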
python|pandas|output
0
6,260
65,687,227
Set values of pandas df cell based on conditions
<p>My df is as follows:</p>
<p><a href="https://i.stack.imgur.com/ff1Dp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ff1Dp.png" alt="enter image description here" /></a></p>
<p>What I want to do is, Condition: <code>Fruit Name</code> is NOT (<code>Apple or Mango</code>) and <code>Veggie Name</code> == <code>Potato</code>. Action: Set the values in <code>Veggie Color</code> and <code>Enjoy Eating</code> to <code>Unicorn</code>.</p>
<p>My code is</p>
<pre><code>df.loc[(~df[&quot;Fruit Name&quot;].isin([&quot;Apple&quot;,&quot;Mango&quot;]))&amp; (df[&quot;Veggie Name&quot;]==&quot;Potato&quot;),[&quot;Veggie Color&quot;,&quot;Enjoy Eating&quot;]]=&quot;Unicorn&quot;
</code></pre>
<p>While it does so, as follows</p>
<p><a href="https://i.stack.imgur.com/eIOWg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eIOWg.png" alt="enter image description here" /></a></p>
<p>it sets NaN in other cells</p>
<p><a href="https://i.stack.imgur.com/1uBJa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1uBJa.png" alt="enter image description here" /></a></p>
<p><strong>What am I missing?</strong></p>
<p>You didn't use the exact name of the column <code>Enjoy Eating?</code>, so it created a new column called <code>Enjoy Eating</code> with NaN as default values. Just add the question mark and it will work as expected.</p> <p><code>df.loc[(~df[&quot;Fruit Name&quot;].isin([&quot;Apple&quot;,&quot;Mango&quot;]))&amp; (df[&quot;Veggie Name&quot;]==&quot;Potato&quot;),[&quot;Veggie Color&quot;,&quot;Enjoy Eating?&quot;]]=&quot;Unicorn&quot;</code></p>
python|pandas|dataframe
0
6,261
65,629,347
Create dataframe in a "for" loop, in which a function can be applied to them
<p>The first for loop seems to work. However, when I move on to doing a groupby function on the next dataframe, something about the global variable in the for loop doesn't store the dataframes correctly. Any help would be much appreciated. Thank you</p>
<pre><code>chan_group = list(df_2017['Default Channel Grouping'].value_counts().index)

gbl = globals()
for i in chan_group:
    gbl['df_'+i] = df_2017[df_2017['Default Channel Grouping']==i]

g_chang_group = df_(Other), df_Aggregators, df_Direct, df_Display, df_Email, df_Email alerts, df_Newsletter, df_Organic Search, df_Paid Search, df_Partner referral, df_Referral, df_Retargeting, df_Social

for i in g_chan_group:
    x = i.groupby(['Month']).sum()
</code></pre>
<p>From @Nathan Furnal</p> <p><code>df_2017.groupby([&quot;Default Channel Grouping&quot;, &quot;Month&quot;]).sum()</code></p>
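<p>If you then want the monthly totals for a single channel out of that result, a small sketch (assuming one of the channel names is &quot;Direct&quot;, as in the question's list):</p>
<pre><code>monthly = df_2017.groupby(['Default Channel Grouping', 'Month']).sum()
direct_by_month = monthly.xs('Direct', level='Default Channel Grouping')
</code></pre>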
python|pandas|dataframe|for-loop|global
0
6,262
65,832,397
How can I find duplicates in a pandas data frame?
<p>I got the task to highlight all email duplicates in a pandas data frame. Is there a function for this, or a way to drop all the NON duplicates, which leaves me with a nice list of all the duplicates in the dataset?</p>
<p>The table consists of six columns:</p>
<pre><code>Email, FirstName, LastName, C_ID, A_ID, CreatedDate
a@a.com, Bill, Schneider, 123, 321, 20190502
a@a.com, Damian, Schneider, 124, 231, 20190502
b@b.com, Bill, Schneider, 164, 313, 20190503
</code></pre>
<p>I want to get rid of the last row, as the last mail is NOT a duplicate.</p>
<p>Something like this might be the solution you're looking for:</p> <pre><code>import pandas as pd series = [ ('a@a.com','Bill', 'Schneider', 123, 321, 20190502), ('a@a.com', 'Damian', 'Schneider', 124, 231, 20190502), ('b@b.com', 'Bill', 'Schneider',164, 313, 20190503) ] # Create a DataFrame object df = pd.DataFrame(series, columns=['email', 'first name', 'last name', 'C_ID', 'A_ID', 'CreatedDate']) # Find duplicate rows df_duplicates = df[df.email.duplicated()] print(df_duplicates) </code></pre>
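<p>Note that <code>duplicated()</code> with its default <code>keep='first'</code> only flags the later occurrences. If you want every row whose email appears more than once (both a@a.com rows in the example), a variant:</p>
<pre><code>df_duplicates = df[df.duplicated(subset='email', keep=False)]
</code></pre>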
python|pandas|dataframe
3
6,263
65,527,125
Creating a dict of list from pandas row?
<p>I have a weird problem. I have an index and a bunch of columns in a dataframe. I want the index to be a key and all the other columns to be in a list. Here's an example</p>
<p>df:</p>
<pre><code>                            0                           1                                                           2                                                            3
Barker Minerals Ltd
Blackout Media Corp
Booking Holdings Inc        Booking Holdings Inc        Booking Holdings Inc 4.10 04/13/2025                        BOOKING HOLDINGS INC
Baker Hughes Company        Baker Hughes Company        BAKER HUGHES A GE COMPANY LLC-3.34%-12-15-2027              BAKER HUGHES A GE COMPANY LLC-3.14%-11-7-2029
Bank of Queensland Limited  Bank of Queensland Limited  Bank of Queensland Limited FRN 10-MAY-2026 3.50% 05/10/26   Bank of Queensland Limited FRN 26-OCT-2020 1.27% 10/26/20    Bank of Queensland Limited FRN 16-NOV-2021 1.12% 11/16/21
</code></pre>
<p>If I do this command it turns everything into a list, when I want it to be a dict of lists:</p>
<pre><code>df.to_numpy().tolist()
</code></pre>
<p>I want a dict with each key mapped to a list of the values in the other columns (kind of like this):</p>
<pre><code>{
Barker Minerals Ltd:
Blackout Media Corp:
Booking Holdings Inc: [Booking Holdings Inc ,Booking Holdings Inc 4.10 04/13/2025,BOOKING HOLDINGS INC]
Baker Hughes Company: [Baker Hughes Company ,BAKER HUGHES A GE COMPANY LLC-3.34%-12-15-2027,BAKER HUGHES A GE COMPANY LLC-3.14%-11-7-2029]
Bank of Queensland Limited: [Bank of Queensland Limited ,Bank of Queensland Limited FRN 10-MAY-2026 3.50% 05/10/26,Bank of Queensland Limited FRN 26-OCT-2020 1.27% 10/26/20, Bank of Queensland Limited FRN 16-NOV-2021 1.12% 11/16/21]
}
</code></pre>
<p>Is this possible to do?</p>
<p>The easiest answer as pointed out in the comments by Michael Szczesny:</p> <pre><code>df.T.to_dict(orient=&quot;list&quot;) </code></pre> <p>The output:</p> <pre><code>{'Barker Minerals Ltd': [nan, nan, nan, nan], 'Blackout Media Corp': [nan, nan, nan, nan], 'Booking Holdings Inc': ['Booking Holdings Inc', 'Booking Holdings Inc 4.10 04/13/2025', 'BOOKING HOLDINGS INC', nan], 'Baker Hughes Company': ['Baker Hughes Company', 'BAKER HUGHES A GE COMPANY LLC-3.34%-12-15-2027', 'BAKER HUGHES A GE COMPANY LLC-3.14%-11-7-2029', nan], 'Bank of Queensland Limited': ['Bank of Queensland Limited', 'Bank of Queensland Limited FRN 10-MAY-2026 3.50% 05/10/26', 'Bank of Queensland Limited FRN 26-OCT-2020 1.27% 10/26/20', ' Bank of Queensland Limited FRN 16-NOV-2021 1.12% 11/16/21']} </code></pre> <p>Also, in case you would like to lose all the <code>nan</code>s, then the code is as follows:</p> <pre><code>df = pd.read_csv(&quot;df_to_dict.csv&quot;, index_col=0) val = df.T.to_dict(orient=&quot;list&quot;) cleaned_val = {} for i in val: cleaned_val[i] = [j for j in val[i] if str(j)!=&quot;nan&quot;] cleaned_val </code></pre> <p>The output is as follows:</p> <pre><code>{'Barker Minerals Ltd': [], 'Blackout Media Corp': [], 'Booking Holdings Inc': ['Booking Holdings Inc', 'Booking Holdings Inc 4.10 04/13/2025', 'BOOKING HOLDINGS INC'], 'Baker Hughes Company': ['Baker Hughes Company', 'BAKER HUGHES A GE COMPANY LLC-3.34%-12-15-2027', 'BAKER HUGHES A GE COMPANY LLC-3.14%-11-7-2029'], 'Bank of Queensland Limited': ['Bank of Queensland Limited', 'Bank of Queensland Limited FRN 10-MAY-2026 3.50% 05/10/26', 'Bank of Queensland Limited FRN 26-OCT-2020 1.27% 10/26/20', ' Bank of Queensland Limited FRN 16-NOV-2021 1.12% 11/16/21']} </code></pre> <p>The documentation of <code>to_dict()</code> can be accessed <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_dict.html" rel="nofollow noreferrer">here</a>.</p>
python|pandas
3
6,264
21,115,741
Average of a time related datasets in Pandas with missing values
<p>For a project I am working on I need to calculate the average price of products for shops. Every time a shop changes the price of a product, a new entry is added to the dataset. If a shop stops (temporarily or permanently) selling a product, an entry is made with the time stamp and a price value of -1. Example:</p>
<pre><code>      timestamp          shop product price
2014-01-01 10:07:32    E     4       19.99
2014-01-01 10:07:32    F     5       54.00
2014-01-02 14:41:12    A     1       28.00
2014-01-02 14:41:12    D     3      249.99
2014-01-02 15:12:38    C     1       29.99
2014-01-03 14:05:12    B     2       43.00
2014-01-05 12:21:57    F     5       49.99
2014-01-06 23:55:32    F     5       -1
2014-01-07 03:05:12    B     2       39.99
2014-01-07 11:24:49    D     3       -1
2014-01-08 11:35:33    C     2       40.99
2014-01-08 16:28:07    F     5       65.00
2014-01-12 21:41:04    E     3      199.00
</code></pre>
<p>Test cases:</p>
<ul>
<li>Shop A that has no price entry for product 1 in the time period to calculate</li>
<li>Shop B that has product 2 switch prices within the period</li>
<li>Shop C that starts selling product 2 in the period, and sells product 1 all through</li>
<li>Shop D that stops selling product 3 in the period.</li>
<li>Shop E that starts selling product 3 after the period and sells product 4 throughout</li>
<li>Shop F that, for product 5, changes price, then stops selling, then starts again with a new price, all within the period</li>
</ul>
<p>The period to find the averages is from 2014-01-05 00:00:00 to 2014-01-10 23:59:59</p>
<p>What I need to do is calculate the average price within a certain period for a certain shop, and overall. That is, the average is time-weighted (3 days at a price of 3 and 1 day at price 1 is an avg of 2.5, not 2, for those 4 days). I have two problems:</p>
<ul>
<li>The starting value can be missing. The last price change is most likely before the beginning of the time period being calculated, so I need to find a way to fill it in so that it will be used in the avg. In fact it is possible that this is the only price in the whole period.</li>
<li>Calculating with -1 will give the wrong results. The value should be ignored, and the overall time delta should be reduced by the time the product is no longer available.</li>
</ul>
<p>The expected output for the data given above is (prices rounded up to the nearest cent):</p>
<pre><code>shop product price
A    1        28.00
B    2        41.06
C    1        29.99
C    2        40.99
D    3       249.99
E    4        19.99
F    5        53.81
</code></pre>
<p>I have tried using numpy.ma to mask out the -1 values. However I have been unsuccessful in doing this as <code>isnan</code> and <code>masked_less</code> cannot handle this.</p>
<p>Any idea as to how I can achieve this?</p>
<p>Edit: Edited test data and expected results to more clearly reflect the problem</p>
<p>AFAIR, <code>pandas</code> doesn't handle masked values the <code>numpy.ma</code> way. However, it should handle <code>nans</code> when computing the mean. The simplest solution is to parse your <code>Dataframe</code> and replace your price of <code>-1.00</code> by <code>np.nan</code> with something like:</p> <pre><code>price = dataframe['price'] price[price == -1] = np.nan </code></pre>
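<p>An equivalent one-liner that avoids the chained-assignment warning (just an alternative spelling of the same idea):</p>
<pre><code>dataframe['price'] = dataframe['price'].replace(-1, np.nan)
</code></pre>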
python|numpy|pandas
0
6,265
63,636,901
Drop a row from Pandas Dataframe when any of the columns are duplicate
<p>I have a dataframe that contains answers to many questions.</p>
<p>Each row represents an answer-er and the columns are the answers to the questions given. Because people often spam these questionnaires, there are sometimes answer-ers who give the same answer many times, like ''yes good'', ''yes good''....</p>
<p>I would like to remove those rows where the same answer is repeated more than once or twice (because a single repetition could be coincidence).</p>
<p>My dataframe looks like this: the questions differ from file to file, but column 0 is always ID and all the remaining columns are questions, and their number varies.</p>
<p>ID , Question 1 , Question 2 , Question 3 , Question 4 , ...</p>
<p>Id1 , Ans. str1 ,Ans. string2 ,Ans. string3 , Ans. string4 , ...</p>
<p>Id2 , Ans. str1 ,Ans. string2 ,Ans. string3 , Ans. string4 , ...</p>
<p>Id3 , Ans. str1 ,Ans. string2 ,Ans. string3 , Ans. string4 , ...</p>
<p>Id4 , Ans. str1 ,Ans. string2 ,Ans. string3 , Ans. string4 , ...</p>
<p>What I need is to drop rows that contain the same answer to more than one question. Ideally I would like to be able to adjust the number of identical answers required for a row to be dropped, because with big questionnaires 2 answers can be the same without the respondent being a spammer. If that case is not easy, let's try to drop rows where any 2 answers are the same.</p>
<pre><code># importing pandas package
import pandas as pd

data = {'ID': ['Id1', 'Id2','Id3', 'Id4'],
        'Question 1': ['Ans. str1', 'Ans. string1','Ans. string1', 'Ans. string1'],
        'Question 2': ['Ans. str2', 'Ans. string2','Ans. string2', 'Ans. string2'],
        'Question 3': ['Ans. str3', 'Ans. string3','Ans. string3', 'Ans. string3'],
        'Question 4': ['Ans. str4', 'Ans. string4','Ans. string4', 'Ans. string4']
       }

df = pd.DataFrame(data)
</code></pre> <p><strong>output</strong></p> <pre><code>    ID    Question 1    Question 2    Question 3    Question 4
0  Id1     Ans. str1     Ans. str2     Ans. str3     Ans. str4
1  Id2  Ans. string1  Ans. string2  Ans. string3  Ans. string4
2  Id3  Ans. string1  Ans. string2  Ans. string3  Ans. string4
3  Id4  Ans. string1  Ans. string2  Ans. string3  Ans. string4
</code></pre> <p>Drop the rows whose answers are duplicated (<code>keep=False</code> removes every copy, and the ID column is excluded from the comparison):</p> <pre><code>df = df.drop_duplicates(subset=['Question 1', 'Question 2', 'Question 3', 'Question 4'], keep=False)

print(df)

    ID Question 1 Question 2 Question 3 Question 4
0  Id1  Ans. str1  Ans. str2  Ans. str3  Ans. str4
</code></pre>
python|pandas|rows|drop
0
6,266
63,467,610
Execute SQL file, return results as Pandas DataFrame
<p>I have a complex SQL Server query that I would like to execute from Python and return the results as a Pandas DataFrame.</p> <p>My database is read only so I don't have a lot of options like other answers say for making less complex queries.</p> <p><a href="https://stackoverflow.com/questions/46694359/read-external-sql-file-into-pandas-dataframe">This answer was helpful</a>, but I keep getting <code>TypeError: 'NoneType' object is not iterable</code></p> <h3>SQL Example</h3> <p>This is not the real query - just to demonstrate that I have temporary tables. I am using global temporary tables because my queries failed previously using local temp tables: <a href="https://stackoverflow.com/questions/37863125/sql-server-temp-table-not-available-in-pyodbc-code">See this question</a></p> <pre><code>SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
SET NOCOUNT ON
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

IF OBJECT_ID('tempdb..##temptable') IS NOT NULL DROP TABLE ##temptable
IF OBJECT_ID('tempdb..##results') IS NOT NULL DROP TABLE ##results

DECLARE @closing_period int = 0,
        @starting_period int = 0

Select col1, col2, col3
into ##temptable
from readonlytables

Select *
into ##results
from ##temptable

Select * from ##results
</code></pre> <h3>Execute query with pyodbc and pandas</h3> <pre><code>conn = pyodbc.connect('db connection details')

sql = open('myquery.sql', 'r')
df = pd.read_sql_query(sql.read(), conn)
sql.close()
conn.close()
</code></pre> <h3>Results - Full Stack Trace</h3> <pre><code>TypeError                                 Traceback (most recent call last)
&lt;ipython-input-38-4fcfe4123667&gt; in &lt;module&gt;
      5
      6 sql = open('sql/month_end_close_hp.sql', 'r')
----&gt; 7 df = pd.read_sql_query(sql.read(), conn)
      8 #sql.close()
      9

C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py in read_sql_query(sql, con, index_col, coerce_float, params, parse_dates, chunksize)
    330         coerce_float=coerce_float,
    331         parse_dates=parse_dates,
--&gt; 332         chunksize=chunksize,
    333     )
    334

C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py in read_query(self, sql, index_col, coerce_float, params, parse_dates, chunksize)
   1632         args = _convert_params(sql, params)
   1633         cursor = self.execute(*args)
-&gt; 1634         columns = [col_desc[0] for col_desc in cursor.description]
   1635
   1636         if chunksize is not None:

TypeError: 'NoneType' object is not iterable
</code></pre> <p>When I run the query in my database I get the expected results. If I pass the query in as a string I also get the expected results:</p> <h3>Query as String</h3> <pre><code>conn = pyodbc.connect('db connection details')

sql = '''
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
SET NOCOUNT ON
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

IF OBJECT_ID('tempdb..##temptable') IS NOT NULL DROP TABLE ##temptable
IF OBJECT_ID('tempdb..##results') IS NOT NULL DROP TABLE ##results

DECLARE @closing_period int = 0,
        @starting_period int = 0

Select col1, col2, col3
into ##temptable
from readonlytables

Select *
into ##results
from ##temptable

Select * from ##results
'''

df = pd.read_sql(sql, conn)
conn.close()
</code></pre> <p>I think it might have something to do with the single quotes inside my query?</p>
<p>I got it working.</p> <p>I had to use global variables: by replacing @ with @@, I was able to get the query working as expected.</p> <p><code>DECLARE @@closing_period int = 0, @@starting_period int = 0</code></p> <p>Update: My ODBC driver was very outdated - after updating to the latest version, I no longer needed global temp tables or variables - and the query ran significantly faster.</p>
python|sql-server|pandas|pyodbc
1
6,267
63,718,944
Tensorflow Image Processing Function
<p>Guys, I have worked through the Basic Image Classification tutorial from Tensorflow.org, but I couldn't understand the code of the <code>plot_image</code> function, because there is no explanation in the tutorial.</p>
<p>This is the code:</p>
<pre><code>def plot_image(i, predictions_array, true_label, img):
  true_label, img = true_label[i], img[i]
  plt.grid(False)
  plt.xticks([])
  plt.yticks([])

  plt.imshow(img, cmap=plt.cm.binary)

  predicted_label = np.argmax(predictions_array)
  if predicted_label == true_label:
    color = 'blue'
  else:
    color = 'red'

  plt.xlabel(&quot;{} {:2.0f}% ({})&quot;.format(class_names[predicted_label],
                                100*np.max(predictions_array),
                                class_names[true_label]),
                                color=color)
</code></pre>
<p><strong>My Question:</strong></p>
<p>How does the function determine that predictions_array is the predicted value and that true_label is the correct label? Shouldn't we say true_label = train_label[i] or predictions_array = prediction[i]?</p>
<p>How does the function determine these objects when we have not set them inside the function, as I showed?</p>
<p>Let's start with the training code (documentation inline)</p>
<pre><code># TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras

# Helper libraries
import numpy as np
import matplotlib.pyplot as plt

# load data
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Text representation of labels
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Normalize the train and test images
train_images = train_images / 255.0
test_images = test_images / 255.0

# Define the model
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# train the model
model.fit(train_images, train_labels, epochs=10)
</code></pre>
<p>As you can see, the last layer is a <code>Dense</code> layer of output size <code>10</code>. That is because we have 10 classes. To identify which class an image belongs to, we can just take the max value out of those 10 and assign that class as the prediction. But if we change these values to probabilities, we can also tell how confident the model is in making these predictions. So let's attach a softmax layer which normalizes these 10 outputs to probabilities.</p>
<pre><code>probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
print (f&quot;Input: {test_images.shape}, Output: {predictions.shape}&quot;)
</code></pre>
<p>Output:</p>
<pre><code>Input: (10000, 28, 28), Output: (10000, 10)
</code></pre>
<p>Let's print the predicted and the true label of the ith test image</p>
<pre><code>i = 0
print (f&quot;Actual Label: {test_labels[i]}, Predicted Label: {np.argmax(predictions[i])}&quot;)
</code></pre>
<p>Output:</p>
<pre><code>Actual Label: 9, Predicted Label: 9
</code></pre>
<p>Finally let's plot the ith image and label it with the predicted class and its probability. (Documentation inline)</p>
<pre><code>def plot_image(i, predictions_array, true_label, img):
  &quot;&quot;&quot;
  i: render ith image
  predictions_array: Probabilities of each class predicted by the model for the ith image
  true_label: All the actual labels
  img: All the images
  &quot;&quot;&quot;
  # Get the true label of the ith image and the ith image itself
  true_label, img = true_label[i], img[i]
  plt.grid(False)
  plt.xticks([])
  plt.yticks([])

  # Render the ith image
  plt.imshow(img, cmap=plt.cm.binary)

  # Get the class with the highest probability for the ith image
  predicted_label = np.argmax(predictions_array)
  if predicted_label == true_label:
    color = 'blue'
  else:
    color = 'red'

  plt.xlabel(&quot;{} {:2.0f}% ({})&quot;.format(class_names[predicted_label],
                                100*np.max(predictions_array),
                                class_names[true_label]),
                                color=color)
</code></pre>
<p>Finally let's call it</p>
<pre><code>plot_image(i, predictions[i], test_labels, test_images)
</code></pre>
<p><a href="https://i.stack.imgur.com/4pz99.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4pz99.png" alt="enter image description here" /></a></p>
<p>Your confusion is because of the <code>predictions_array</code> parameter. Please note that it holds the predictions made by the model for the ith test example. It has 10 values, each of which represents the probability of the image belonging to the corresponding class.</p>
python|tensorflow|keras
0
6,268
24,782,365
Looping over 1st dimension of 3D numpy array to create a smaller 3D array, via slicing
<p>This is my first post, so apologies if the formatting isn't quite right. I am writing some code for my master's dissertation, in which I am studying satellite images of sea ice near the Alaskan coast. The satellite instrument I am using has 9 cameras, so for each image/band I have 9 subdatasets, which I am trying to loop over: NIR_data is a 3D numpy array, with the following dimensions: 9,512,256. I am trying to create a new 3D array, which is a 10x10 subset of the original array, defined by the pixel coordinates [256:266,112:122]. So if I was just doing it for 1 file the code would be:</p> <pre><code>NIR_BRF = NIR_data[i][256:266,112:122]
</code></pre> <p>So, trying to loop over the first dimension of my NIR_data array, this is the closest I am getting:</p> <pre><code>for i,f in enumerate(NIR_data):
    NIR_BRF[i] = NIR_data[i][256:266,112:122]
</code></pre> <p>where NIR_BRF is a predefined, empty array measuring 9,10,10. The result is a 9,10,10 array; however, all the values in this array are identical, i.e. the loop hasn't worked. I hope I've explained this well enough; I know this shouldn't be too difficult, but I'm struggling to get my brain working properly.</p> <p>Many thanks</p> <p>Alex</p>
<p>There is no need to iterate through the 3D array here. Remember, you only need to iterate when you want to perform some operation on the individual elements of an array (perhaps after getting the shorter array); when you just want to create a subarray from an existing one, there is almost always a way to avoid iteration. Here you only have to create the subarray (Python behaves almost like a functional programming language for this), and the fix is the slicing syntax</p> <pre><code> [dim1_start:dim1_stop:dim1_step,dim2_start:dim2_stop:dim2_step,...dimn_start:dimn_stop:dimn_step]
</code></pre> <blockquote> <p>When you don't give start:stop it assumes everything, and when you don't give step it assumes a step of 1 unit.</p> </blockquote> <p>So use <code>NIR_BRF = NIR_data[:, 256:266, 112:122]</code></p>
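<p>A quick check of the resulting shape (a small sketch with random data standing in for the real satellite arrays):</p>
<pre><code>import numpy as np

NIR_data = np.random.rand(9, 512, 256)       # stand-in for the 9-camera data
NIR_BRF = NIR_data[:, 256:266, 112:122]
print(NIR_BRF.shape)                          # (9, 10, 10)
</code></pre>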
python|arrays|numpy|3d|netcdf
0
6,269
29,914,981
Fill in a numpy array without creating list
<p>I would like to create a numpy array without creating a list first. <br>At the moment I've got this: </p>
<pre><code>import pandas as pd
import numpy as np

dfa = pd.read_csv('csva.csv')
dfb = pd.read_csv('csvb.csv')

pa = np.array(dfa['location'])
pb = np.array(dfb['location'])

ra = [(pa[i+1] - pa[i]) / float(pa[i]) for i in range(9999)]
rb = [(pb[i+1] - pb[i]) / float(pb[i]) for i in range(9999)]

ra = np.array(ra)
rb = np.array(rb)
</code></pre>
<p>Is there an elegant way to do the last fill of this np array in one step, without creating the list first?</p>
<p>Thanks </p>
<p>You can calculate with vectors in numpy, without the need of lists:</p> <pre><code>ra = (pa[1:] - pa[:-1]) / pa[:-1] rb = (pb[1:] - pb[:-1]) / pb[:-1] </code></pre>
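<p>Equivalently, <code>np.diff</code> computes the successive differences for you (just another spelling of the same calculation):</p>
<pre><code>ra = np.diff(pa) / pa[:-1]
rb = np.diff(pb) / pb[:-1]
</code></pre>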
python|arrays|list|numpy
4
6,270
53,565,895
xlsx pandas write to s3 (with tabs)
<p>I have a project where I need to write dataframes to xlsx in an s3 bucket. It's quite simple to load a file from s3 with pandas: <code>df = pd.read_excel('s3://path/file.xlsx')</code></p>
<p>But writing a file to s3 gives me problems. </p>
<pre><code>    import pandas as pd

    # Create a Pandas dataframe from the data.
    df = pd.DataFrame({'Data': [10, 20, 30, 20, 15, 30, 45]})

    # Create a Pandas Excel writer using XlsxWriter as the engine.
    writer = pd.ExcelWriter('s3://path/', engine='xlsxwriter')

    df.to_excel(writer, sheet_name='Sheet1')

    writer.save()

FileNotFoundError: [Errno 2] No such file or directory: 's3://path'
</code></pre>
<p>So how can I write xlsx files to s3 with pandas, preferably with tabs?</p>
<pre><code>import io import boto3 import xlsxwriter import pandas as pd bucket = 'your-s3-bucketname' filepath = 'path/to/your/file.format' df = pd.DataFrame({'Data': [10, 20, 30, 20, 15, 30, 45]}) with io.BytesIO() as output: with pd.ExcelWriter(output, engine='xlsxwriter') as writer: df.to_excel(writer, 'sheet_name') data = output.getvalue() s3 = boto3.resource('s3') s3.Bucket(bucket).put_object(Key=filepath, Body=data) </code></pre>
pandas|amazon-web-services|amazon-s3|xlsx
5
6,271
53,569,622
Difference between tf.train.Checkpoint and tf.train.Saver
<p>I found there are different ways to save/restore models and variables in <code>Tensorflow</code>. These include:</p> <ul> <li><a href="https://www.tensorflow.org/api_docs/python/tf/saved_model/simple_save" rel="nofollow noreferrer">tf.saved_model.simple_save</a></li> <li><a href="https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint" rel="nofollow noreferrer">tf.train.Checkpoint</a></li> <li><a href="https://www.tensorflow.org/api_docs/python/tf/train/Saver" rel="nofollow noreferrer">tf.train.Saver</a></li> </ul> <p>In tensorflow's documentation, I found some differences between them:</p> <ol> <li><code>tf.saved_model</code> is a thin wrapper around <code>tf.train.Saver</code></li> <li><code>tf.train.Checkpoint</code> supports eager execution but <code>tf.train.Saver</code> does <strong>not</strong>.</li> <li><code>tf.train.Checkpoint</code> does not create a <code>.meta</code> file but can still load the graph structure (here is the big question: how can it do that?)</li> </ol> <p>How can <code>tf.train.Checkpoint</code> load the graph without a <code>.meta</code> file? Or, more generally, what is the difference between <code>tf.train.Saver</code> and <code>tf.train.Checkpoint</code>?</p>
<p>According to Tensorflow <a href="https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p><code>Checkpoint.save</code> and <code>Checkpoint.restore</code> write and read object-based checkpoints, in contrast to <code>tf.train.Saver</code> which writes and reads variable.name based checkpoints. Object-based checkpointing saves a graph of dependencies between Python objects (Layers, Optimizers, Variables, etc.) with named edges, and this graph is used to match variables when restoring a checkpoint. It can be more robust to changes in the Python program, and helps to support restore-on-create for variables when executing eagerly. <strong>Prefer <code>tf.train.Checkpoint</code> over <code>tf.train.Saver</code> for new code</strong>.</p> </blockquote>
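<p>For reference, a minimal sketch of the object-based style (assuming <code>model</code> and <code>optimizer</code> already exist; the path is just a placeholder):</p>
<pre><code>import tensorflow as tf

ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)
save_path = ckpt.save('/tmp/training_ckpts/ckpt')               # object-based checkpoint files, no .meta
ckpt.restore(tf.train.latest_checkpoint('/tmp/training_ckpts'))
</code></pre>
<p>Because the checkpoint only stores variable values keyed by the object graph, restoring requires you to rebuild the Python objects first, which is also why no <code>.meta</code> graph file is needed.</p>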
python|tensorflow|deep-learning|eager-execution
0
6,272
53,633,043
Memory efficient solution to replace invalid values in a large DataFrame?
<p>This question is a continuation of the following: <a href="https://stackoverflow.com/questions/53625099/how-to-replace-certain-rows-by-shared-column-values-in-pandas-dataframe">How to replace certain rows by shared column values in pandas DataFrame?</a></p>
<p>Let's say I have the following pandas DataFrame:</p>
<pre><code>import pandas as pd
data = [['Alex',10],['Bob',12],['Clarke',13], ['Bob', '#'], ['Bob', '#'], ['Bob', '#'], ['Clarke', '#']]
df = pd.DataFrame(data,columns=['Name','Age'], dtype=float)

     Name Age
0    Alex  10
1     Bob  12
2  Clarke  13
3     Bob   #
4     Bob   #
5     Bob   #
6  Clarke   #
</code></pre>
<p>Rows 3-6 have invalid values, the string <code>#</code>. These should be replaced by valid values, outputting:</p>
<pre><code>     Name Age
0    Alex  10
1     Bob  12
2  Clarke  13
3     Bob  12
4     Bob  12
5     Bob  12
6  Clarke  13
</code></pre>
<p>The pandas solutions discussed for replacing these values were using <code>coerce</code>, or replacing with a subset data frame:</p>
<pre><code>v = df.assign(Age=pd.to_numeric(df['Age'], errors='coerce')).dropna()
df['Age'] = df['Name'].map(v.set_index('Name').Age)
</code></pre>
<p>or </p>
<pre><code>d= df[df['Age']!='#'].set_index('Name')['Age']
df['Age']=df['Name'].replace(d)
</code></pre>
<p>The problem is that for a pandas DataFrame with millions of rows, these pandas-based solutions become very memory intensive. </p>
<p>In situations like these with pandas, what would be the most practical solution? </p>
<p>I could try to create a massive dictionary using <code>df[df['Age']!='#']</code>, with <code>Name: Age</code> as the key-value pairs. Then, iterate through the original pandas DataFrame row by row; if there is a row with Age==<code>#</code>, then replace it based on the key-value pair in the dictionary. The downside to this is that a for-loop will take forever. </p>
<p>Are there other solutions which would have better performance? </p>
<p>What if you try something a bit more memory efficient, like dictionary-based replacement instead of series-based? </p> <pre><code>mapping = dict(df.drop_duplicates('Name', keep='first').values) df['Age'] = df['Name'].map(mapping) print(df) Name Age 0 Alex 10 1 Bob 12 2 Clarke 13 3 Bob 12 4 Bob 12 5 Bob 12 6 Clarke 13 </code></pre> <p>Another alternative would be using a list comprehension:</p> <pre><code>mapping = dict(df.drop_duplicates('Name', keep='first').values) df['Age'] = [mapping.get(x, np.nan) for x in df['Name']] print(df) Name Age 0 Alex 10 1 Bob 12 2 Clarke 13 3 Bob 12 4 Bob 12 5 Bob 12 6 Clarke 13 </code></pre> <p>This should work assuming valid values in "Age" come first.</p>
python|pandas|performance|dataframe
1
6,273
19,862,686
Error in astype float32 vs float64 for integer
<p>I'm sure this is due to a lapse in my understanding in how casting between different precision of float works, but can someone explain why the value is getting cast as 3 less than its true value in 32 vs 64 bit representation?</p> <pre><code>&gt;&gt;&gt; a = np.array([83734315]) &gt;&gt;&gt; a.astype('f') array([ 83734312.], dtype=float32) &gt;&gt;&gt; a.astype('float64') array([ 83734315.]) </code></pre>
<p>A <a href="http://en.wikipedia.org/wiki/Single_precision_floating-point_format" rel="nofollow">32-bit float</a> can exactly represent about 7 decimal digits of mantissa. Your number requires more, and therefore cannot be represented exactly.</p> <p>The mechanics of what happens are as follows:</p> <p>A 32-bit float has a 24-bit mantissa. Your number requires 27 bits to be represented exactly, so the last three bits are getting truncated (set to zero). The three lowest bits of your number are <code>011</code><sub>2</sub>; these are getting set to <code>000</code><sub>2</sub>. Observe that <code>011</code><sub>2</sub> is <code>3</code><sub>10</sub>.</p>
python|python-2.7|numpy|floating-point
4
6,274
20,206,615
How can a pandas merge preserve order?
<p>I have two DataFrames in pandas, trying to merge them. But pandas keeps changing the order. I've tried setting indexes, resetting them, no matter what I do, I can't get the returned output to have the rows in the same order. Is there a trick? Note we start out with the loans order 'a,b,c' but after the merge, it's "a,c,b".</p> <pre><code>import pandas loans = [ 'a', 'b', 'c' ] states = [ 'OR', 'CA', 'OR' ] x = pandas.DataFrame({ 'loan' : loans, 'state' : states }) y = pandas.DataFrame({ 'state' : [ 'CA', 'OR' ], 'value' : [ 1, 2]}) z = x.merge(y, how='left', on='state') </code></pre> <p>But now the order is no longer the original 'a,b,c'. Any ideas? I'm using pandas version 11.</p>
<p>Hopefully someone will provide a better answer, but in case no one does, this will definitely work, so…</p> <p>Zeroth, I'm assuming you don't want to just end up sorted on <code>loan</code>, but to preserve <em>whatever</em> original order was in <code>x</code>, which may or may not have anything to do with the order of the <code>loan</code> column. (Otherwise, the problem is easier, and less interesting.)</p> <p>First, you're asking it to sort based on the join keys. As <a href="http://pandas.pydata.org/pandas-docs/dev/merging.html#database-style-dataframe-joining-merging" rel="noreferrer">the docs</a> explain, that's the default when you don't pass a <code>sort</code> argument.</p> <hr> <p>Second, if you <em>don't</em> sort based on the join keys, the rows will end up grouped together, such that two rows that merged from the same source row end up next to each other, which means you're still going to get <code>a</code>, <code>c</code>, <code>b</code>.</p> <p>You can work around this by getting the rows grouped together in the order they appear in the original <code>x</code> by just merging again with <code>x</code> (on either side, it doesn't really matter), or by reindexing based on <code>x</code> if you prefer. Like this:</p> <pre><code>x.merge(x.merge(y, how='left', on='state', sort=False)) </code></pre> <hr> <p>Alternatively, you can cram an x-index in there with <code>reset_index</code>, then just sort on that, like this:</p> <pre><code>x.reset_index().merge(y, how='left', on='state', sort=False).sort('index') </code></pre> <hr> <p>Either way obviously seems a bit wasteful, and clumsy… so, as I said, hopefully there's a better answer that I'm just not seeing at the moment. But if not, that works.</p>
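<p>One version note (an addition): <code>DataFrame.sort</code> has since been removed from pandas, so on current versions the last variant would be spelled with <code>sort_values</code>:</p>
<pre><code>x.reset_index().merge(y, how='left', on='state', sort=False).sort_values('index').drop(columns='index')
</code></pre>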
python|pandas
27
6,275
72,060,825
My code works outside the function but not inside
<p>The function below is not working anymore:</p>
<pre><code>def videos_to_watch(section): #OK
    global list_already_watched
    list_already_watched = already_watched(ID).tolist()
    if section == 111:
        list_to_watch = set(p111)-set(list_already_watched)
    elif section == 113:
        list_to_watch = set(p113)-set(list_already_watched)
    return list(list_to_watch)
</code></pre>
<p>When I debugged it, it said that <code>list_already_watched</code> was empty, but when I ran it outside the function, it showed the value that is registered in the dataframe. I tried to use <code>global</code> but I think I'm not using it in the right way. Could someone help me?</p>
<p>I'm not sure if this will help, but maybe you should initialize the variable as well. Like so:</p>
<pre><code>def videos_to_watch(section): #OK
    global list_already_watched
    list_already_watched = 0
    list_already_watched = already_watched(ID).tolist()
    if section == 111:
        list_to_watch = set(p111)-set(list_already_watched)
    elif section == 113:
        list_to_watch = set(p113)-set(list_already_watched)
    return list(list_to_watch)
</code></pre>
python|pandas|dataframe|function|global-variables
-1
6,276
71,859,126
How to generate uncorrelated samples with Numpy
<p>I'd like to generate random samples in Python, but each with their own standard deviation.</p>
<p>I thought I could use <code>np.random.normal(0, scale=np.array(standard_deviation), size=(len(np.array(standard_deviation)), number_of_simulations))</code></p>
<p>However, numpy seems not to work when I put an array for scale (which is contrary to what I understand from the documentation) and only wants a float as an argument.</p>
<p>My goal is to render an array of size (Number of standard deviations X Number of simulations) where each row is just <code>np.random.normal(0, scale=np.array(standard_deviation)[i], size=(1, number_of_simulations))</code></p>
<p>I think I could do a loop and then concatenate each result, but I don't want to do this if it isn't necessary, because I believe you lose the benefit of Numpy and Pandas by doing loops.</p>
<p>I hope I was clear, and thanks for your help!</p>
<p>The NumPy random functions <em>do</em> accept arrays, but when you also give a <code>size</code> parameter, the shapes must be compatible.</p> <p>Change this:</p> <pre><code>np.random.normal(0, scale=np.array(standard_deviation), size=(len(np.array(standard_deviation)), number_of_simulations) </code></pre> <p>to</p> <pre><code>np.random.normal(0, scale=standard_deviation, size=(number_of_simulations, len(standard_deviation))) </code></pre> <p>The result will have shape <code>(number_of_simulations, len(standard_deviation))</code>.</p> <p>Here's a concrete example in an ipython session. Note that instead of using <code>numpy.random.normal</code>, I use the newer NumPy random API, in which I create a generator called <code>rng</code> and call its <code>normal()</code> method:</p> <pre><code>In [103]: rng = np.random.default_rng() In [104]: standard_deviation = np.array([1, 5, 25]) In [105]: number_of_simulations = 6 In [106]: rng.normal(scale=standard_deviation, size=(number_of_simulations, len(standard_deviation))) Out[106]: array([[ -0.31088926, 1.95005394, -8.77983357], [ 1.80907248, 4.27082827, 31.13457498], [ -0.27178958, -12.6589072 , -31.70729135], [ 0.2848883 , 1.71198071, -23.6336055 ], [ 0.78457822, 2.78281586, 32.61089728], [ -0.7014944 , 5.47845616, 5.34276638]]) </code></pre>
python|arrays|numpy|random|standard-deviation
1
6,277
22,029,167
Creating a Multi-Index / Hierarchical DataFrame from Dictionaries
<p>Say I have the following dictionaries:</p> <pre><code>multilevel_indices = {'foo': ['A', 'B', 'C'], 'bar': ['X', 'Y'], 'baz': []} column_data_1 = {'foo': [2, 4, 5], 'bar': [2, 3], 'baz': []} </code></pre> <p>How can I create a multi-index DataFrame using these dictionaries?</p> <p>It should be something like:</p> <pre><code>index_1 index_2 column_data_1 foo A 2 B 4 C 5 bar X 2 Y 3 baz np.NaN np.NaN </code></pre> <h2>Note:</h2> <p>If <code>NaN</code> indices are not supported by Pandas, we can drop the empty entries in the dictionaries above. </p> <p>Ideally, I would like the DataFrame to capture somehow the fact that those entries are missing if possible. However, the most important thing is being able to index the dataframe using the indices in <code>multilevel_indices</code>.</p>
<p>use <code>concat</code>:</p> <pre><code>multilevel_indices = {'foo': ['A', 'B', 'C'], 'bar': ['X', 'Y'], 'baz': []} column_data_1 = {'foo': [2, 4, 5], 'bar': [2, 3], 'baz': []} pd.concat([pd.Series(column_data_1[k], index=multilevel_indices[k]) for k in multilevel_indices], keys=multilevel_indices.keys()) </code></pre> <p>Results in:</p> <pre><code>foo A 2 B 4 C 5 bar X 2 Y 3 dtype: float64 </code></pre> <p>Also, as @CT Zhu mentioned, in the definitions for <code>baz</code>, if you change <code>[]</code> to <code>[None]</code> you can keep track of those entries:</p> <pre><code>baz NaN None foo A 2 B 4 C 5 bar X 2 Y 3 dtype: object </code></pre>
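<p>If you want the result shaped like the frame in the question (named index levels plus a <code>column_data_1</code> column), the same Series can be dressed up — a small sketch:</p> <pre><code>s = pd.concat([pd.Series(column_data_1[k], index=multilevel_indices[k])
               for k in multilevel_indices],
              keys=multilevel_indices.keys())
df = s.rename_axis(['index_1', 'index_2']).to_frame('column_data_1')
</code></pre>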
python|pandas
2
6,278
55,553,062
How to count_values() for greater than column counts
<p>I am trying to determine how many rows have over 1000 counts for a specific column within my data.</p> <pre><code>police_2013 = pd.read_csv('..Data.csv')
police_2013.unit.value_counts()
</code></pre> <p>In this code, <code>police_2013.unit.value_counts()</code> gives me how many calls each unit had. I want to count the units for which that count was over 1000.</p> <p>This failed:</p> <pre><code>countif(police_2013.unit.value_counts() &gt; 1000)
unit_count_1000 = len(police_2013[police_2013['unit'] &gt; 1000])
</code></pre>
<p>I think you want this.<br> This gives you how many unique values > 1000 are in the column <code>unit</code> of your dataset</p> <pre><code>len(police_2013[police_2013['unit']&gt;1000].unit.value_counts()) </code></pre>
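<p>If instead you want the number of units whose <em>call counts</em> exceed 1000 (rather than unit identifiers greater than 1000), a short sketch:</p> <pre><code>(police_2013['unit'].value_counts() &gt; 1000).sum()
</code></pre>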
python|pandas
0
6,279
56,810,854
How does df.groupby('A').agg('min') translate to featuretools?
<p>Say I have this simple snippet of code. I will group, aggregate, and merge the dataframe:</p> <hr> <h1>Using Pandas:</h1> <hr> <h3>Data</h3> <pre><code>df = pd.DataFrame({'A': [1, 1, 2, 2], 'B': [1, 2, 3, 4], 'C': [0.3, 0.2, 1.2, -0.5]}) </code></pre> df: <pre><code> A B C 0 1 1 0.3 1 1 2 0.2 2 2 3 1.2 3 2 4 -0.5 </code></pre> <h3>Group and Aggregate</h3> <pre><code>df_result = df.groupby('A').agg('min') df_result.columns = ['groupby_A(min_'+x+')' for x in df_result.columns] </code></pre> df_result: <pre><code> groupby_A(min_B) groupby_A(min_C) A 1 1 0.2 2 3 -0.5 </code></pre> <h3>Merge</h3> <pre><code>df_new = pd.merge(df,df_result,on='A') df_new </code></pre> df_new: <pre><code> A B C groupby_A(min_B) groupby_A(min_C) 0 1 1 0.3 1 0.2 1 1 2 0.2 1 0.2 2 2 3 1.2 3 -0.5 3 2 4 -0.5 3 -0.5 </code></pre> <hr> <h1>An Attempt using featuretools:</h1> <hr> <pre><code># ---- Import the Module ---- import featuretools as ft # ---- Make the Entity Set (the set of all tables) ---- es = ft.EntitySet() # ---- Make the Entity (the table) ---- es.entity_from_dataframe(entity_id = 'df', dataframe = df) # ---- Do the Deep Feature Synthesis (group, aggregate, and merge the features) ---- feature_matrix, feature_names = ft.dfs(entityset = es, target_entity = 'df', trans_primitives = ['cum_min']) feature_matrix </code></pre> feature_matrix: <pre><code> A B C CUM_MIN(A) CUM_MIN(B) CUM_MIN(C) index 0 1 1 0.3 1 1 0.3 1 1 2 0.2 1 1 0.2 2 2 3 1.2 1 1 0.2 3 2 4 -0.5 1 1 -0.5 </code></pre> <hr> <p>How does the operation with Pandas translate into featuretools (preferably without adding another table)?</p> <p>My attempt with featuretools does not give the right output, but I believe the process that I used is somewhat correct.</p>
<p>Here is the recommended way to do it in Featuretools. You do need to create another table to make it work exactly as you want. </p> <pre><code>import featuretools as ft import pandas as pd df = pd.DataFrame({'A': [1, 1, 2, 2], 'B': [1, 2, 3, 4], 'C': [0.3, 0.2, 1.2, -0.5]}) es = ft.EntitySet() es.entity_from_dataframe(entity_id="example", index="id", make_index=True, dataframe=df) es.normalize_entity(new_entity_id="a_entity", base_entity_id="example", index="A") fm, fl = ft.dfs(target_entity="example", entityset=es, agg_primitives=["min"]) fm </code></pre> <p>this returns</p> <pre><code> A B C a_entity.MIN(example.B) a_entity.MIN(example.C) id 0 1 1 0.3 1 0.2 1 1 2 0.2 1 0.2 2 2 3 1.2 3 -0.5 3 2 4 -0.5 3 -0.5 </code></pre> <p>If you don't want to create an extra table you could try using the <code>cum_min</code> primitive which calculate the cumulative after grouping by <code>A</code></p> <pre><code>df = pd.DataFrame({'A': [1, 1, 2, 2], 'B': [1, 2, 3, 4], 'C': [0.3, 0.2, 1.2, -0.5]}) es = ft.EntitySet() es.entity_from_dataframe(entity_id="example", index="id", make_index=True, variable_types={ "A": ft.variable_types.Id }, dataframe=df,) fm, fl = ft.dfs(target_entity="example", entityset=es, groupby_trans_primitives=["cum_min"]) fm </code></pre> <p>this returns </p> <pre><code> B C A CUM_MIN(C) by A CUM_MIN(B) by A id 0 1 0.3 1 0.3 1.0 1 2 0.2 1 0.2 1.0 2 3 1.2 2 1.2 3.0 3 4 -0.5 2 -0.5 3.0 </code></pre>
python|pandas|group-by|featuretools|feature-engineering
2
6,280
56,725,660
How does the groups parameter in torch.nn.conv* influence the convolution process?
<p>I want to convolve a multichannel tensor with the same single-channel weight. I could repeat the weight along the channel dimension, but I thought there might be another way.</p> <p>I thought the groups parameter might do the job. However, I don't understand the documentation. That's why I want to ask how the groups parameter influences the convolution process.</p>
<p>Just minor tips, since I haven't really used it myself.</p> <p>The <code>groups</code> parameter splits the input channels into that many groups, and each group is convolved with its own set of filters. With <code>groups=2</code> the layer behaves like two convolutions side by side, each seeing half of the input channels and producing half of the output channels, with the results concatenated afterwards.</p> <p>The definition of <a href="https://pytorch.org/docs/stable/nn.html#conv2d" rel="nofollow noreferrer">conv2d</a> in PyTorch states that groups is 1 by default.</p> <p>If you increase <code>groups</code> all the way to the number of input channels you get a depthwise convolution, where each input channel gets its own kernels.</p> <p>The constraint is that both the in and out channel counts must be divisible by the group number.</p> <p>In TensorFlow you can read the documentation of <code>SeparableConv2D</code>, which builds on the depthwise case (groups equal to the number of input channels) followed by a pointwise 1x1 convolution.</p>
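<p>A minimal sketch (the tensor sizes below are made-up assumptions) showing how <code>groups</code> changes the weight shape, and how a depthwise call can apply one single-channel kernel to every channel separately:</p> <pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F

# Weight shape is (out_channels, in_channels // groups, kH, kW)
print(nn.Conv2d(4, 8, 3, groups=1).weight.shape)  # torch.Size([8, 4, 3, 3])
print(nn.Conv2d(4, 8, 3, groups=4).weight.shape)  # torch.Size([8, 1, 3, 3])

# Applying the same single-channel kernel to each channel independently:
x = torch.randn(1, 4, 16, 16)   # assumed input
k = torch.randn(1, 1, 3, 3)     # one single-channel kernel
out = F.conv2d(x, k.repeat(4, 1, 1, 1), groups=4)
print(out.shape)                # torch.Size([1, 4, 14, 14])
</code></pre> <p>The <code>repeat</code> is still there, but with <code>groups=4</code> each channel is filtered on its own instead of the kernels being summed over all input channels.</p>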
python|pytorch
1
6,281
56,780,825
Pandas dataframe partition by different keys in one run
<p>In SQL we can count by different keys in one go with the help of OLAP (window) functions, which improves SQL performance:</p> <pre><code>select B, C, D,
       count(A) over (partition by B, C, D order by D) as by_BCD,
       count(A) over (partition by B, C order by D) as by_BC,
       count(A) over (partition by B order by D) as by_B,
       count(A) over () as total
from table;
</code></pre> <p>Can we do the same in one pandas dataframe scan, instead of grouping the dataframe three times?</p> <pre><code>Input dataset:
A   B   C   D
1   LZ  0   1
2   LZ  0   1
3   LZ  1   1
4   LZ  1   2
5   LZ  1   2
6   SB  0   1
7   SB  0   1
8   SB  1   1
9   SB  1   2
10  SB  1   2
11  PZ  0   1

Output dataset:
A   B   C   D   by_BCD  by_BC  by_B  total
1   LZ  0   1   2       2      5     11
2   LZ  0   1   2       2      5     11
3   LZ  1   1   1       3      5     11
4   LZ  1   2   2       3      5     11
5   LZ  1   2   2       3      5     11
6   SB  0   1   2       2      5     11
7   SB  0   1   2       2      5     11
8   SB  1   1   1       3      5     11
9   SB  1   2   2       3      5     11
10  SB  1   2   2       3      5     11
11  PZ  0   1   1       1      1     11
</code></pre> <p>Here is the snippet:</p> <pre><code>d = {'A': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
     'B': ['LZ', 'LZ', 'LZ', 'LZ', 'LZ', 'SB', 'SB', 'SB', 'SB', 'SB', 'PZ'],
     'C': [0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0],
     'D': [1, 1, 1, 2, 2, 1, 1, 1, 2, 2, 1]}
df = pd.DataFrame(d)
</code></pre>
<p>In my comment above, I suggested using a <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html" rel="nofollow noreferrer">Multiindex</a>.</p> <p>My assumption was that the performance penalty arises from the implicit indexing within groupby statements.</p> <p>Creating the df as described by OP:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd

d = {'A': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
     'B': ['LZ', 'LZ', 'LZ', 'LZ', 'LZ', 'SB', 'SB', 'SB', 'SB', 'SB', 'PZ'],
     'C': [0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0],
     'D': [1, 1, 1, 2, 2, 1, 1, 1, 2, 2, 1]}
df = pd.DataFrame(d)
</code></pre> <p>Sorting and creating the MultiIndex. It may well be that sorting alone is sufficient for better <code>DataFrame.groupby</code> performance. I haven't tried.</p> <pre class="lang-py prettyprint-override"><code>indexed = df.sort_values(['B', 'C', 'D']).set_index(['B', 'C', 'D'])
</code></pre> <p>This yields:</p> <pre><code>          A
B  C D
LZ 0 1    1
     1    2
   1 1    3
     2    4
     2    5
PZ 0 1   11
SB 0 1    6
     1    7
   1 1    8
     2    9
     2   10
</code></pre> <p>Selecting counts for a single row:</p> <pre class="lang-py prettyprint-override"><code>indexed.loc['LZ', 0, 1].count()  # 2
</code></pre> <p>Grouping and counting, e.g. over 'BC':</p> <pre class="lang-py prettyprint-override"><code>indexed.groupby(['B', 'C']).count()
</code></pre> <p>yields:</p> <pre><code>      A
B  C
LZ 0  2
   1  3
PZ 0  1
SB 0  2
   1  3
</code></pre> <p>As stated, my assumptions about performance are assumptions only.</p>
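<p>For completeness, the window counts from the SQL can be reproduced with <code>groupby(...).transform</code> — a straightforward sketch, although it still groups the frame three times rather than doing a single scan:</p> <pre><code>df['by_BCD'] = df.groupby(['B', 'C', 'D'])['A'].transform('count')
df['by_BC']  = df.groupby(['B', 'C'])['A'].transform('count')
df['by_B']   = df.groupby('B')['A'].transform('count')
df['total']  = len(df)
</code></pre>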
python|pandas
0
6,282
25,790,740
Create a vector of mean substituted values based on another vector
<p>In my line of work it is not uncommon to have a continuous vector that needs to be 'discretized'. What I want to do is replace the values of a continuous variable that has been discretized by <code>cut</code> with the mean of another variable over those cut ranges.</p> <p><strong>EDIT</strong></p> <p>Furthermore, the mean (or whatever other function I want to use to generate a value) must be based on the original data and applied to new data. Imagine the situation where I calculate mean bad rates for a continuous variable on a training data set, build a model and then have to apply that same transformation logic to new data.</p> <p><strong>END EDIT</strong></p> <p>Consider the following data:</p> <pre><code>x &lt;- rnorm(100) x.disc &lt;- cut(x, c(-Inf, -2, 0, 2, Inf)) lookup &lt;- aggregate(x, list(x.disc), mean) &gt; lookup Group.1 x 1 (-Inf,-2] -2.2322429 2 (-2,0] -0.6968720 3 (0,2] 0.8671428 4 (2, Inf] 2.6696064 </code></pre> <p>What I would like to do is create a new vector, x1, where the value is equal to the x value in lookup when the original x values fall in the corresponding range. My expected output vector would look like this:</p> <pre><code>&gt; head(x) [1] -0.1867972 1.7309683 -0.1306331 1.2787303 0.8388222 -0.4449465 </code></pre> <p>Desired Output:</p> <pre><code>&gt; head(x1) [1] -0.6968720 0.8671428 -0.6968720 0.8671428 0.8671428 -0.6968720 </code></pre> <p>In <code>pandas</code> for <code>python</code> there is a group-by-apply paradigm that uses <code>transform</code> to broadcast the aggregated values back to the same dimension as the input. Is there something similar for <code>R</code>? I would like to keep it to base functions for my understanding but am not opposed to using other packages.</p>
<p>You could try:</p> <pre><code> x.disc &lt;- cut(x, c(-Inf, -2, 0, 2, Inf), labels=FALSE) lookup &lt;- aggregate(x, list(x.disc), mean) lookup$x[x.disc] </code></pre>
r|pandas
1
6,283
26,364,329
Fastest Count of Row Dependent Date Ranges
<p>I have a data set that looks like this (End_Time is 7 hours after Start_Time):</p> <pre><code> Value Start_Time End_Time 1 A 2014-10-14 05:00:00 2014-10-14 12:00:00 2 A 2014-10-14 08:00:00 2014-10-14 15:00:00 3 A 2014-10-14 14:00:00 2014-10-14 21:00:00 4 A 2014-10-14 06:00:00 2014-10-14 13:00:00 5 B 2014-10-14 05:00:00 2014-10-14 12:00:00 6 B 2014-10-14 06:00:00 2014-10-14 13:00:00 </code></pre> <p>I want to add a new column that counts the number of rows with the same Value with a Start_Time within the Start_Time and End_Time of that row. The result would look like this:</p> <pre><code> Value Start_Time End_Time Count 1 A 2014-10-14 05:00:00 2014-10-14 12:00:00 2 2 A 2014-10-14 08:00:00 2014-10-14 15:00:00 1 3 A 2014-10-14 14:00:00 2014-10-14 21:00:00 0 4 A 2014-10-14 06:00:00 2014-10-14 13:00:00 2 5 B 2014-10-14 05:00:00 2014-10-14 12:00:00 1 6 B 2014-10-14 06:00:00 2014-10-14 13:00:00 0 </code></pre> <p>Currently I have:</p> <pre><code>for i in range(0, len(df['Value'])): df['Count'][i] = df[(df['Start_Time'] &gt;= df['Start_Time'][i]) &amp; (df['Start_Time'] &lt;= df['End_Time'][i]) &amp; (df['Value'] == df['Value'][i])].shape[0] </code></pre> <p>I have a large number of rows and this turns out to be very slow and currently includes itself in the count so that every row needs to be subtracted by 1.</p> <p>Is there a faster way to do this calculation?</p> <p>Thanks!</p>
<p>In my opinion the only way you can achieve this quickly is if <code>Start_Time</code> is increasing. You could shift some of the cost to insertion time by keeping the rows ordered. With a sorted list of rows, testing whether the following ones fall within <code>[Start_Time, End_Time]</code> is easy, since as soon as you reach an element that is out of bounds you know the following elements will be too.</p> <p>If you can't keep a sorted list at insertion, then I think there is no more efficient way than to sort the list.</p>
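<p>A rough NumPy sketch of that idea (assuming <code>Start_Time</code>/<code>End_Time</code> are real datetimes, and excluding each row from its own count):</p> <pre><code>import numpy as np
import pandas as pd

def count_in_window(g):
    # g is one Value group, already sorted by Start_Time
    starts = g['Start_Time'].values
    lo = np.searchsorted(starts, starts, side='left')
    hi = np.searchsorted(starts, g['End_Time'].values, side='right')
    return pd.Series(hi - lo - 1, index=g.index)

df = df.sort_values(['Value', 'Start_Time'])
df['Count'] = df.groupby('Value', group_keys=False).apply(count_in_window)
</code></pre>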
python|pandas
0
6,284
66,892,709
Reinforcement Learning - only size-1 arrays can be converted to Python scalars - is it data problem?
<p>I'm new to pytorch and even though I was searching for this error I can't seem to understand where axactly I'm doing something wrong.</p> <p>I'm trying to run a codewith a model that trades 3 different stocks. My data is a csv file with three columns with closing prices of stocks.</p> <p>I'm trying to run this part of code</p> <pre><code>env.reset() # In case you're running this a second time with the same model, delete the gradients del model.rewards[:] del model.saved_actions[:] gamma = 0.9 log_interval = 60 def finish_episode(): R = 0 saved_actions = model.saved_actions policy_losses = [] value_losses = [] rewards = [] for r in model.rewards[::-1]: R = r + (gamma * R) rewards.insert(0, R) rewards = torch.tensor(rewards) epsilon = (torch.rand(1) / 1e4) - 5e-5 # With different architectures, I found the following standardization step sometimes # helpful, sometimes unhelpful. # rewards = (rewards - rewards.mean()) / (rewards.std(unbiased=False) + epsilon) # Alternatively, comment it out and use the following line instead: rewards += epsilon for (log_prob, value), r in zip(saved_actions, rewards): reward = torch.tensor(r - value.item()).cuda() policy_losses.append(-log_prob * reward) value_losses.append(F.smooth_l1_loss(value, torch.tensor([r]).cuda())) optimizer.zero_grad() loss = torch.stack(policy_losses).sum() + torch.stack(value_losses).sum() loss = torch.clamp(loss, -1e-5, 1e5) loss.backward() optimizer.step() del model.rewards[:] del model.saved_actions[:] running_reward = 0 for episode in range(0, 4000): state = env.reset() reward = 0 done = False msg = None while not done: action = model.act(state) state, reward, done, msg = env.step(action) model.rewards.append(reward) if done: break running_reward = running_reward * (1 - 1/log_interval) + reward * (1/log_interval) finish_episode() # Resetting the hidden state seems unnecessary - it's effectively random from the previous # episode anyway, more random than a bunch of zeros. 
# model.reset_hidden() if msg[&quot;msg&quot;] == &quot;done&quot; and env.portfolio_value() &gt; env.starting_portfolio_value * 1.1 and running_reward &gt; 500: print(&quot;Early Stopping: &quot; + str(int(reward))) break if episode % log_interval == 0: print(&quot;&quot;&quot;Episode {}: started at {:.1f}, finished at {:.1f} because {} @ t={}, \ last reward {:.1f}, running reward {:.1f}&quot;&quot;&quot;.format(episode, env.starting_portfolio_value, \ env.portfolio_value(), msg[&quot;msg&quot;], env.cur_timestep, reward, running_reward)) </code></pre> <p>But I'm getting such an error:</p> <pre><code>TypeError Traceback (most recent call last) &lt;ipython-input-91-ce955397be85&gt; in &lt;module&gt;() 45 msg = None 46 while not done: ---&gt; 47 action = model.act(state) 48 state, reward, done, msg = env.step(action) 49 model.rewards.append(reward) 1 frames &lt;ipython-input-89-f463539c7fe3&gt; in forward(self, x) 16 17 def forward(self, x): ---&gt; 18 x = torch.tensor(x).cuda() 19 x = torch.sigmoid(self.input_layer(x)) 20 x = torch.tanh(self.hidden_1(x)) TypeError: only size-1 arrays can be converted to Python scalars </code></pre> <p>This is the part of code with forward function defined</p> <pre><code>class Policy(nn.Module): def __init__(self): super(Policy, self).__init__() self.input_layer = nn.Linear(11, 128) self.hidden_1 = nn.Linear(128, 128) self.hidden_2 = nn.Linear(32,31) self.hidden_state = torch.tensor(torch.zeros(2,1,32)).cuda() self.rnn = nn.GRU(128, 32, 2) self.action_head = nn.Linear(31, 5) self.value_head = nn.Linear(31, 1) self.saved_actions = [] self.rewards = [] def reset_hidden(self): self.hidden_state = torch.tensor(torch.zeros(2,1,32)).cuda() def forward(self, x): x = torch.tensor(x).cuda() x = torch.sigmoid(self.input_layer(x)) x = torch.tanh(self.hidden_1(x)) x, self.hidden_state = self.rnn(x.view(1,-1,128), self.hidden_state.data) x = F.relu(self.hidden_2(x.squeeze())) action_scores = self.action_head(x) state_values = self.value_head(x) return F.softmax(action_scores, dim=-1), state_values def act(self, state): probs, state_value = self.forward(state) m = Categorical(probs) action = m.sample() if action == 1 and env.state[0] &lt; 1: action = torch.LongTensor([2]).squeeze().cuda() if action == 4 and env.state[1] &lt; 1: action = torch.LongTensor([2]).squeeze().cuda() if action == 6 and env.state[2] &lt; 1: action = torch.LongTensor([2]).squeeze().cuda() self.saved_actions.append((m.log_prob(action), state_value)) return action.item() </code></pre> <p>Can you please direct me where I should make changes? Is it the data I'm feeding my model with, or something different?</p> <p>Thank you so much for help</p>
<p>You are passing <code>state = env.reset()</code> to:</p> <ul> <li><code>action = model.act(state)</code></li> <li><code>probs, state_value = self.forward(state)</code></li> <li><code>x = torch.tensor(x).cuda()</code></li> </ul> <p>And hence torch is throwing an error. It expects a numeric or array type input.</p>
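<p>A hedged sketch of a possible fix — it depends on what the environment actually returns, which isn't shown here — is to coerce the state to a flat float array before building the tensor:</p> <pre><code>import numpy as np
import torch

def to_input(state):
    # only works if `state` is a flat sequence of numbers
    return torch.tensor(np.asarray(state, dtype=np.float32)).cuda()
</code></pre> <p>and call it at the top of <code>forward</code> instead of <code>torch.tensor(x).cuda()</code>. If the state contains nested arrays of different lengths, they have to be flattened or stacked into one numeric array first.</p>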
python|pytorch|reinforcement-learning
0
6,285
66,864,453
How to change legend labels in scatter matrix
<p>I have a scatter matrix that I want to change the labels for. On the right-hand, I want to change the blue color <code>1</code> to Say Mystery and the red color <code>2</code> to say Science. I also want to change the labels of each graph to label their counterpart [Spicy, Savory, and Sweet]. I tried using dict to relabel but then my charts came out wrong.</p> <p><a href="https://i.stack.imgur.com/rd0zb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rd0zb.png" alt="enter image description here" /></a></p> <pre><code>import plotly.express as px fig = px.scatter_matrix(df, dimensions=[&quot;Q12_Spicy&quot;, &quot;Q12_Sav&quot;, &quot;Q12_Sweet&quot;, ],color=&quot;Q11_Ans&quot; ) fig.show() </code></pre>
<p>You can create a new column called <code>Q11_Labels</code> that maps <code>1</code> to <code>Mystery</code> and <code>2</code> to <code>Science</code> from the <code>Q11_Ans</code> column, and pass <code>colors='Q11_Labels'</code> to the <code>px.scatter_matrix</code> function. If you still want the legend to display the original column name, you can pass a dictionary to the labels parameter of the <code>px.scatter_matrix</code> function with <code>labels={&quot;Q11_Labels&quot;:&quot;Q11_Ans&quot;}</code></p> <p>Then you can extend this dictionary to include the other column name to display name mappings as well, so that <code>[Spicy, Savory, Sweet]</code> are displayed instead of <code>[Q12_Spicy, Q12_Savory, Q12_Sweet]</code>.</p> <pre><code>import numpy as np import pandas as pd import plotly.express as px ## recreate random data with the same columns np.random.seed(42) df = pd.DataFrame( np.random.randint(0,100,size=(100, 3)), columns=[&quot;Q12_Spicy&quot;, &quot;Q12_Sav&quot;, &quot;Q12_Sweet&quot;] ) df[&quot;Q11_Ans&quot;] = np.random.randint(1,3,size=100) df[&quot;Q11_Ans&quot;] = df[&quot;Q11_Ans&quot;].astype(&quot;category&quot;) df = df.sort_values(by=&quot;Q11_Ans&quot;) ## remap the values of 1 and 2 to their meanings, then pass this as the color df[&quot;Q11_Labels&quot;] = df[&quot;Q11_Ans&quot;].map({1: &quot;Mystery&quot;, 2: &quot;Science&quot;}) ## pass a dictionary to the labels parameter fig = px.scatter_matrix(df, dimensions=[&quot;Q12_Spicy&quot;, &quot;Q12_Sav&quot;, &quot;Q12_Sweet&quot;],color=&quot;Q11_Labels&quot;, labels = {&quot;Q12_Spicy&quot;:&quot;Spicy&quot;,&quot;Q12_Sav&quot;:&quot;Savory&quot;,&quot;Q12_Sweet&quot;:&quot;Sweet&quot;, &quot;Q11_Labels&quot;:&quot;Q11_Ans&quot;} ) fig.show() </code></pre> <p><a href="https://i.stack.imgur.com/wwvy3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wwvy3.png" alt="enter image description here" /></a></p>
pandas|dataframe|plotly
0
6,286
66,806,383
Pandas Pivot Table Based on Specific Column Value
<p>I need to pivot my data in a df like shown below based on a specific date in the YYMMDD and HHMM column &quot;20180101 100&quot;. This specific date represents a new category of data with equal amounts of rows. I plan on replacing the repeating column names in the output with unique names. Suppose my data looks like this below.</p> <pre><code> YYMMDD HHMM BestGuess(kWh) 0 20180101 100 20 1 20180101 200 70 0 20201231 2100 50 1 20201231 2200 90 2 20201231 2300 70 3 20210101 000 40 4 20180101 100 5 5 20180101 200 7 6 20201231 2100 2 7 20201231 2200 3 8 20201231 2300 1 9 20210101 000 4 </code></pre> <p>I need the new df (dfpivot) to look like this:</p> <pre><code> YYMMDD HHMM BestGuess(kWh) BestGuess(kWh) 0 20180101 100 20 5 1 20180101 200 70 7 2 20201231 2100 50 2 3 20201231 2200 90 3 4 20201231 2300 70 1 5 20210101 000 40 4 </code></pre>
<p>Does this suffice?</p> <pre><code>cols = ['YYMMDD', 'HHMM'] df.set_index([*cols, df.groupby(cols).cumcount()]).unstack() BestGuess(kWh) 0 1 YYMMDD HHMM 20180101 100 20 5 200 70 7 20201231 2100 50 2 2200 90 3 2300 70 1 20210101 0 40 4 </code></pre> <p>More fully baked</p> <pre><code>cols = ['YYMMDD', 'HHMM'] temp = df.set_index([*cols, df.groupby(cols).cumcount()]).unstack() temp.columns = [f'{l0} {l1}' for l0, l1 in temp.columns] temp.reset_index() YYMMDD HHMM BestGuess(kWh) 0 BestGuess(kWh) 1 0 20180101 100 20 5 1 20180101 200 70 7 2 20201231 2100 50 2 3 20201231 2200 90 3 4 20201231 2300 70 1 5 20210101 0 40 4 </code></pre>
python|pandas|pivot
0
6,287
67,014,581
I have Dataframe in pandas with column Case Number, True value, Predicted, confidence. I need to split values accordingly with all combination shown
<p><strong>I have the dataframe given below <a href="https://i.stack.imgur.com/wR9lP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wR9lP.png" alt="enter image description here" /></a></strong></p> <p>and am expecting the result to be <a href="https://i.stack.imgur.com/eJhpP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eJhpP.png" alt="enter image description here" /></a></p> <p>Is there any way to do this in pandas? Thanks in advance.</p>
<p>You can <code>split()</code> the pipe-strings into lists, pad each row's lists to the same length, then <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.explode.html" rel="nofollow noreferrer"><strong><code>explode()</code></strong></a> the lists.</p> <p>Using toy data:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'Case':[1,2],'True':['A | B',np.nan],'Predicted':['A | B | C',np.nan],'Confidence':['45 | 23 | 90','0'],}).set_index('Case') # True Predicted Confidence # Case # 1 A | B A | B | C 45 | 23 | 90 # 2 NaN NaN 0 </code></pre> <p>1: Split the pipe-strings into lists:</p> <pre class="lang-py prettyprint-override"><code>df = df.applymap(lambda x: [] if pd.isnull(x) else str(x).split(' | ')) # True Predicted Confidence # Case # 1 [A, B] [A, B, C] [45, 23, 90] # 2 [] [] [0] </code></pre> <p>2: Pad each row's lists to the same length:</p> <pre class="lang-py prettyprint-override"><code>def pad(row): length = max([len(array) for array in row]) for array in row: array += [np.nan] * (length - len(array)) return row df = df.apply(pad, axis=1) # True Predicted Confidence # Case # 1 [A, B, nan] [A, B, C] [45, 23, 90] # 2 [nan] [nan] [0] </code></pre> <p>3: <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.explode.html" rel="nofollow noreferrer"><strong><code>explode()</code></strong></a> the lists:</p> <pre class="lang-py prettyprint-override"><code>df = df.apply(pd.Series.explode) # True Predicted Confidence # Case # 1 A A 45 # 1 B B 23 # 1 NaN C 90 # 2 NaN NaN 0 </code></pre>
python|pandas|dataframe|numpy|pandas-datareader
1
6,288
47,280,228
Tensorflow: How to create confusion matrix
<p>I am new to tensorflow, I used this tutorial: </p> <p><a href="https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/" rel="nofollow noreferrer">https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/</a>. </p> <p>I have trained the same model on new dataset which contains 3 labels. I am trying to create the confusion matrix. </p> <p>tf.confusion_matrix function is very confusing. </p> <p>Can someone please help using same code example.</p>
<p>You have 3 labels (say 0, 1, 2). Let's assume that you have a test set of size 10 and you get the tensors truth <code>[0,0,0,0,1,1,2,2,2,2]</code> and prediction <code>[2,0,0,1,1,1,2,1,2,2]</code>. Then you can do it as follows:</p> <pre><code>&gt;&gt;&gt; import tensorflow as tf
&gt;&gt;&gt; truth = [0,0,0,0,1,1,2,2,2,2]
&gt;&gt;&gt; prediction = [2,0,0,1,1,1,2,1,2,2]
&gt;&gt;&gt; cm = tf.contrib.metrics.confusion_matrix(truth, prediction)
&gt;&gt;&gt; with tf.Session() as sess:
...     sess.run(cm)
...
array([[2, 1, 1],
       [0, 2, 0],
       [0, 1, 3]], dtype=int32)
</code></pre> <p>Note the following: the result is a 3x3 matrix. The first row says that label 0 was predicted correctly 2 times, was mistaken for label 1 once, and was mistaken for label 2 once.</p>
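<p>On TensorFlow 2.x, where <code>tf.contrib</code> no longer exists, the same matrix is available as <code>tf.math.confusion_matrix</code> and can be evaluated eagerly, without a session:</p> <pre><code>import tensorflow as tf

truth = [0, 0, 0, 0, 1, 1, 2, 2, 2, 2]
prediction = [2, 0, 0, 1, 1, 1, 2, 1, 2, 2]

cm = tf.math.confusion_matrix(truth, prediction)
print(cm.numpy())
# [[2 1 1]
#  [0 2 0]
#  [0 1 3]]
</code></pre>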
python|python-2.7|tensorflow|tensorboard
3
6,289
47,131,780
Replace dots in a float column with nan in Python
<p>I have a data frame df like this</p> <pre><code>df = pd.DataFrame([ {'Name': 'Chris', 'Item Purchased': 'Sponge', 'Cost': 22.50}, {'Name': 'Kevyn', 'Item Purchased': 'Kitty Litter', 'Cost': '.........'}, {'Name': 'Filip', 'Item Purchased': 'Spoon', 'Cost': '...'}], index=['Store 1', 'Store 1', 'Store 2']) </code></pre> <p>I want to replace the missing values in 'Cost' columns to <code>np.nan</code>. So far I have tried:</p> <pre><code>df['Cost']=df['Cost'].str.replace("\.\.+", np.nan) </code></pre> <p>and </p> <pre><code>df['Cost']=re.sub('\.\.+',np.nan,df['Cost']) </code></pre> <p>but neither of them seem to work properly. Please help.</p>
<p>Use <code>DataFrame.replace</code> with the <code>regex=True</code> switch.</p> <pre><code>df = df.replace('\.+', np.nan, regex=True) df Cost Item Purchased Name Store 1 22.5 Sponge Chris Store 1 NaN Kitty Litter Kevyn Store 2 NaN Spoon Filip </code></pre> <p>The pattern <code>\.+</code> specifies one or more dots. You could also use <code>[.]+</code> as a pattern to the same effect.</p>
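<p>A closely related option, if the end goal is a numeric <code>Cost</code> column: <code>pd.to_numeric</code> with <code>errors='coerce'</code> turns anything non-numeric (including the dot strings) into <code>NaN</code> in one step:</p> <pre><code>df['Cost'] = pd.to_numeric(df['Cost'], errors='coerce')
</code></pre>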
python|pandas|dataframe|nan|missing-data
5
6,290
68,230,315
Pandas: Replace blank field only with "Na" in a specific column mixed with float objects and blank strings
<p><strong>I have this dataframe:</strong></p> <pre><code> id cars rent sale 0 123 Kia 2 1 345 Bmw 1 4 2 Mercedes 1 3 345 Ford 1 4 Audi 2 1 </code></pre> <p>I want to fill the blank field only in <strong>the column id with &quot;Na&quot;</strong> and leave the blank fields in the other columns (rent/sale). Any suggestions please?</p> <p><strong>Expected output:</strong></p> <pre><code> id cars rent sale 0 123 Kia 2 1 345 Bmw 1 4 2 Na Mercedes 1 3 345 Ford 1 4 Na Audi 2 1 </code></pre>
<p>As your <code>id</code> column is mixed with float objects and blank fields, and assume you don't want to change the float objects to strings, you can use <code>.replace()</code> with regex, as follows:</p> <pre><code>df['id'] = df['id'].replace(r'^\s*$', 'Na', regex=True) </code></pre> <p><strong>Explanation:</strong></p> <p>Regex <code>^\s*$</code> matches for zero or more white space(s) <code>\s</code> in the whole strings. Thus, it matches empty string (zero white space), one space character, two space characters, etc. It replaces with only one <code>Na</code> no matter how many white spaces matched (e.g. won't replace with <code>NaNa</code> even with 2 space characters).</p> <p><code>^</code> Start of string anchor (together with <code>$</code> to signify matching for the whole string)</p> <p><code>\s</code> White space</p> <p><code>*</code> Zero or more repetition of the character preceding it (<code>\s</code>).</p> <p><code>$</code> End of string anchor</p> <p><strong>Result:</strong></p> <pre><code>print(df) id cars rent sale 0 123 Kia 2 1 345 Bmw 1 4 2 Na Mercedes 1 3 345 Ford 1 4 Na Audi 2 1 </code></pre>
python|pandas|dataframe
0
6,291
68,159,097
Python - move specific rows of columns in csv file
<p>I'm new to Python. I have no idea the way to move specific rows of columns in csv file.</p> <p>As shown in the picture below, I would like to move columns B and C to the right (column D) where column D does not have value.</p> <p>Thanks a lot.</p> <p>desired outcome</p> <p><a href="https://i.stack.imgur.com/h2y1o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h2y1o.png" alt="enter image description here" /></a></p> <p>Code:</p> <pre class="lang-py prettyprint-override"><code>import csv import pandas as pd filename=(&quot;.csv&quot;) data=pd.read_csv(filename) </code></pre>
<p>I guess you only want to move the rows whose <code>D</code> value is not 'Nil',<br /> so I wrote a simple example for this:</p> <pre class="lang-py prettyprint-override"><code>#split Nil and non-Nil
df1 = data[data['D']=='Nil']
df2 = data[data['D']!='Nil']
#move non-Nil rows one column to the right
df2['D'] = df2['C']
df2['C'] = df2['B']
df2['B'] = [&quot;&quot; for _ in range(len(df2['B']))]
#concat the two dataframes
df_new = pd.concat([df1,df2])
df_new = df_new.sort_values(by='num')
</code></pre>
python|pandas|csv
1
6,292
68,424,959
Speed up nested for loop with NumPy
<p>I'm trying to write a package about image processing with some numpy operations. I've observe that the operations inside the nested loop are costly and want to speed it up.</p> <p>Input is an 512 by 1024 image and be preprocessing into a edge set, which is a list of (Ni,2) ndarrays for each array i.</p> <p>And next, the nested for loop code will pass edge set and do some math stuffs.</p> <pre><code>###proprocessing: img ===&gt; countour set img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) high_thresh, _ = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) lowThresh = 0.5*high_thresh b = cv2.Canny(img, lowThresh, high_thresh) edgeset, _ = cv2.findContours(b,cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE) imgH = img.shape[0] ## 512 imgW = img.shape[1] ## 1024 num_edges = len(edgeset) ## ~900 min_length_segment_vp = imgH/6 ## ~100 ### nested for loop for i in range(num_edges): if(edgeset[i].shape[0] &gt; min_length_segment_vp): #points: (N, 1, 2) ==&gt; uv: (N, 2) uv = edgeset[i].reshape(edgeset[i].shape[0], edgeset[i].shape[2]) uv = np.unique(uv, axis=0) theta = -(uv[:, 1]-imgH/2)*np.pi/imgH phi = (uv[:, 0]-imgW/2)*2*np.pi/imgW xyz = np.zeros((uv.shape[0], 3)) xyz[:, 0] = np.sin(phi) * np.cos(theta) xyz[:, 1] = np.cos(theta) * np.cos(phi) xyz[:, 2] = np.sin(theta) ##xyz: (N, 3) N=xyz.shape[0] for _ in range(10): if(xyz.shape[0] &gt; N * 0.1): bestInliers = np.array([]) bestOutliers = np.array([]) #### #### watch this out! #### for _ in range(1000): id0 = random.randint(0, xyz.shape[0]-1) id1 = random.randint(0, xyz.shape[0]-1) if(id0 == id1): continue n = np.cross(xyz[id0, :], xyz[id1, :]) n = n / np.linalg.norm(n) cosTetha = n @ xyz.T inliers = np.abs(cosTetha) &lt; threshold outliers = np.where(np.invert(inliers))[0] inliers = np.where(inliers)[0] if inliers.shape[0] &gt; bestInliers.shape[0]: bestInliers = inliers bestOutliers = outliers </code></pre> <p>What I have tried:</p> <ol> <li>I changed np.cross and np.norm into my custom cross and norm only work for shape (3,) ndarray. This gives me a from ~0.9s into ~0.3s in my i5-4460 cpu.</li> <li>I profile my code and find that now the code inside the most inner loop still cost 2/3 of time.</li> </ol> <p>What I think I can try next:</p> <ol> <li>Compile code into cython and add some cdef notation.</li> <li>Translate whole file into C++.</li> <li>Use some faster library for calculation like numexpr.</li> <li>Vectorization of the loop process (but I don't know how).</li> </ol> <p>Can I do more faster? Please give me some suggestions! Thanks!</p>
<p>The question is quite broad so I'll only give a few non-obvious tips based on my own experience.</p> <ul> <li>If you use Cython, you might want to change the <code>for</code> loops into <code>while</code> loops. I've managed to get quite big (x5) speed-ups just from this, although it may not help for all possible cases;</li> <li>Sometimes code that would be considered inefficient in regular Python, such as a nested <code>while</code> (or <code>for</code>) loop to apply a function to an array one element at a time, can be optimized by Cython to be faster than the equivalent vectorized Numpy approach;</li> <li>Find out which Numpy functions cost the most time, and write your own in a way that Cython can most easily optimise them (see above point).</li> </ul>
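<p>On the vectorization idea listed in the question (option 4), here is a rough NumPy sketch of the innermost loop: draw all candidate pairs up front and score every hypothesis with one matrix product. <code>threshold</code> is the variable from the original code; duplicate point pairs would give zero-length normals and should be filtered out in real use:</p> <pre><code>import numpy as np

def best_plane(xyz, threshold, n_iters=1000):
    n = xyz.shape[0]
    id0 = np.random.randint(0, n, size=n_iters)
    id1 = np.random.randint(0, n, size=n_iters)
    keep = id0 != id1
    id0, id1 = id0[keep], id1[keep]

    normals = np.cross(xyz[id0], xyz[id1])                      # (m, 3)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)   # unit normals
    cos_theta = normals @ xyz.T                                 # (m, N)
    inlier_mask = np.abs(cos_theta) &lt; threshold

    best = inlier_mask.sum(axis=1).argmax()
    best_inliers = np.where(inlier_mask[best])[0]
    best_outliers = np.where(~inlier_mask[best])[0]
    return best_inliers, best_outliers
</code></pre>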
python|image|numpy|loops|nested-loops
0
6,293
68,165,821
Changing size of scatter plot points by value
<p>I am trying to make a scatter plot and scale the size of points on the basis of numbers from a different list.</p> <p>So I am making a scatter plot of <code>y</code> vs <code>x</code>, but the size of each point will depend on the corresponding number from <code>s</code>. Essentially, the larger the value of the element from <code>s</code>, the bigger the point on the scatter plot.</p> <pre><code>y = [2.2, 3.1, 4.3, 4.9, 5.1]
x = [0.03, 0.3, 0.4, 0.6, 0.7]
s = [48, 134, 391, 203, 193]

fig, ax = plt.subplots()
ax.scatter(x, y)
</code></pre> <p>Any ideas?</p> <p>Thanks in advance!</p>
<p>You are so close</p> <pre><code>ax.scatter(x, y, s=s) </code></pre>
python|numpy|matplotlib
1
6,294
59,076,750
how to convert code older version of tensorflow into tensorflow 2.0
<p>how can i convert my older version of tensorflow code to newer version as CNN ,RNN ,CTC is not working in newer version. I updated tensorflow thereafter many of the function stop working properly and shows error. Some of the function are not in the package anymore. I dont have idea about how to convert it into new version of the tensorflow</p> <pre><code>from __future__ import absolute_import, division, print_function, unicode_literals import codecs import sys import numpy as np import tensorflow as tf from DataLoader import FilePaths import matplotlib.pyplot as plt class DecoderType: BestPath = 0 WordBeamSearch = 1 BeamSearch = 2 class Model: # Model Constants batchSize = 10 # 50 imgSize = (800, 64) maxTextLen = 100 def __init__(self, charList, decoderType=DecoderType.BestPath, mustRestore=False): self.charList = charList self.decoderType = decoderType self.mustRestore = mustRestore self.snapID = 0 # input image batch self.inputImgs =tf.compat.v1.placeholder(tf.float32, shape=(None, Model.imgSize[0], Model.imgSize[1])) # setup CNN, RNN and CTC self.setupCNN() self.setupRNN() self.setupCTC() # setup optimizer to train NN self.batchesTrained = 0 self.learningRate = tf.placeholder(tf.float32, shape=[]) self.optimizer = tf.train.RMSPropOptimizer(self.learningRate).minimize(self.loss) # Initialize TensorFlow (self.sess, self.saver) = self.setupTF() self.training_loss_summary = tf.summary.scalar('loss', self.loss) self.writer = tf.summary.FileWriter( './logs', self.sess.graph) # Tensorboard: Create writer self.merge = tf.summary.merge([self.training_loss_summary]) # Tensorboard: Merge def setupCNN(self): """ Create CNN layers and return output of these layers """ cnnIn4d = tf.expand_dims(input=self.inputImgs, axis=3) # First Layer: Conv (5x5) + Pool (2x2) - Output size: 400 x 32 x 64 with tf.name_scope('Conv_Pool_1'): kernel = tf.Variable( tf.random.truncated_normal([5, 5, 1, 64], stddev=0.1)) conv = tf.nn.conv2d( cnnIn4d, kernel, padding='SAME', strides=(1, 1, 1, 1)) learelu = tf.nn.leaky_relu(conv, alpha=0.01) pool = tf.nn.max_pool2d(learelu, (1, 2, 2, 1), (1, 2, 2, 1), 'VALID') # Second Layer: Conv (5x5) + Pool (1x2) - Output size: 400 x 16 x 128 with tf.name_scope('Conv_Pool_2'): kernel = tf.Variable(tf.truncated_normal( [5, 5, 64, 128], stddev=0.1)) conv = tf.nn.conv2d( pool, kernel, padding='SAME', strides=(1, 1, 1, 1)) learelu = tf.nn.leaky_relu(conv, alpha=0.01) pool = tf.nn.max_pool(learelu, (1, 1, 2, 1), (1, 1, 2, 1), 'VALID') # Third Layer: Conv (3x3) + Pool (2x2) + Simple Batch Norm - Output size: 200 x 8 x 128 with tf.name_scope('Conv_Pool_BN_3'): kernel = tf.Variable(tf.truncated_normal( [3, 3, 128, 128], stddev=0.1)) conv = tf.nn.conv2d( pool, kernel, padding='SAME', strides=(1, 1, 1, 1)) mean, variance = tf.nn.moments(conv, axes=[0]) batch_norm = tf.nn.batch_normalization( conv, mean, variance, offset=None, scale=None, variance_epsilon=0.001) learelu = tf.nn.leaky_relu(batch_norm, alpha=0.01) pool = tf.nn.max_pool(learelu, (1, 2, 2, 1), (1, 2, 2, 1), 'VALID') # Fourth Layer: Conv (3x3) - Output size: 200 x 8 x 256 with tf.name_scope('Conv_4'): kernel = tf.Variable(tf.truncated_normal( [3, 3, 128, 256], stddev=0.1)) conv = tf.nn.conv2d( pool, kernel, padding='SAME', strides=(1, 1, 1, 1)) learelu = tf.nn.leaky_relu(conv, alpha=0.01) # Fifth Layer: Conv (3x3) + Pool(2x2) - Output size: 100 x 4 x 256 with tf.name_scope('Conv_Pool_5'): kernel = tf.Variable(tf.truncated_normal( [3, 3, 256, 256], stddev=0.1)) conv = tf.nn.conv2d( learelu, kernel, padding='SAME', strides=(1, 1, 1, 
1)) learelu = tf.nn.leaky_relu(conv, alpha=0.01) pool = tf.nn.max_pool(learelu, (1, 2, 2, 1), (1, 2, 2, 1), 'VALID') # Sixth Layer: Conv (3x3) + Pool(1x2) + Simple Batch Norm - Output size: 100 x 2 x 512 with tf.name_scope('Conv_Pool_BN_6'): kernel = tf.Variable(tf.truncated_normal( [3, 3, 256, 512], stddev=0.1)) conv = tf.nn.conv2d( pool, kernel, padding='SAME', strides=(1, 1, 1, 1)) mean, variance = tf.nn.moments(conv, axes=[0]) batch_norm = tf.nn.batch_normalization( conv, mean, variance, offset=None, scale=None, variance_epsilon=0.001) learelu = tf.nn.leaky_relu(batch_norm, alpha=0.01) pool = tf.nn.max_pool(learelu, (1, 1, 2, 1), (1, 1, 2, 1), 'VALID') # Seventh Layer: Conv (3x3) + Pool (1x2) - Output size: 100 x 1 x 512 with tf.name_scope('Conv_Pool_7'): kernel = tf.Variable(tf.truncated_normal( [3, 3, 512, 512], stddev=0.1)) conv = tf.nn.conv2d( pool, kernel, padding='SAME', strides=(1, 1, 1, 1)) learelu = tf.nn.leaky_relu(conv, alpha=0.01) pool = tf.nn.max_pool(learelu, (1, 1, 2, 1), (1, 1, 2, 1), 'VALID') self.cnnOut4d = pool def setupRNN(self): """ Create RNN layers and return output of these layers """ # Collapse layer to remove dimension 100 x 1 x 512 --&gt; 100 x 512 on axis=2 rnnIn3d = tf.squeeze(self.cnnOut4d, axis=[2]) # 2 layers of LSTM cell used to build RNN numHidden = 512 cells = [tf.contrib.rnn.LSTMCell( num_units=numHidden, state_is_tuple=True, name='basic_lstm_cell') for _ in range(2)] stacked = tf.contrib.rnn.MultiRNNCell(cells, state_is_tuple=True) # Bi-directional RNN # BxTxF -&gt; BxTx2H ((forward, backward), _) = tf.nn.bidirectional_dynamic_rnn( cell_fw=stacked, cell_bw=stacked, inputs=rnnIn3d, dtype=rnnIn3d.dtype) # BxTxH + BxTxH -&gt; BxTx2H -&gt; BxTx1X2H concat = tf.expand_dims(tf.concat([forward, backward], 2), 2) # Project output to chars (including blank): BxTx1x2H -&gt; BxTx1xC -&gt; BxTxC kernel = tf.Variable(tf.truncated_normal( [1, 1, numHidden * 2, len(self.charList) + 1], stddev=0.1)) self.rnnOut3d = tf.squeeze(tf.nn.atrous_conv2d(value=concat, filters=kernel, rate=1, padding='SAME'), axis=[2]) def setupCTC(self): """ Create CTC loss and decoder and return them """ # BxTxC -&gt; TxBxC self.ctcIn3dTBC = tf.transpose(self.rnnOut3d, [1, 0, 2]) # Ground truth text as sparse tensor with tf.name_scope('CTC_Loss'): self.gtTexts = tf.SparseTensor(tf.placeholder(tf.int64, shape=[ None, 2]), tf.placeholder(tf.int32, [None]), tf.placeholder(tf.int64, [2])) # Calculate loss for batch self.seqLen = tf.placeholder(tf.int32, [None]) self.loss = tf.reduce_mean(tf.nn.ctc_loss(labels=self.gtTexts, inputs=self.ctcIn3dTBC, sequence_length=self.seqLen, ctc_merge_repeated=True, ignore_longer_outputs_than_inputs=True)) with tf.name_scope('CTC_Decoder'): # Decoder: Best path decoding or Word beam search decoding if self.decoderType == DecoderType.BestPath: self.decoder = tf.nn.ctc_greedy_decoder( inputs=self.ctcIn3dTBC, sequence_length=self.seqLen) elif self.decoderType == DecoderType.BeamSearch: self.decoder = tf.nn.ctc_beam_search_decoder(inputs=self.ctcIn3dTBC, sequence_length=self.seqLen, beam_width=50, merge_repeated=True) elif self.decoderType == DecoderType.WordBeamSearch: # Import compiled word beam search operation (see https://github.com/githubharald/CTCWordBeamSearch) word_beam_search_module = tf.load_op_library( './TFWordBeamSearch.so') # Prepare: dictionary, characters in dataset, characters forming words chars = codecs.open(FilePaths.wordCharList.txt, 'r').read() wordChars = codecs.open( FilePaths.fnWordCharList, 'r').read() corpus = 
codecs.open(FilePaths.corpus.txt, 'r').read() # # Decoder using the "NGramsForecastAndSample": restrict number of (possible) next words to at most 20 words: O(W) mode of word beam search # decoder = word_beam_search_module.word_beam_search(tf.nn.softmax(ctcIn3dTBC, dim=2), 25, 'NGramsForecastAndSample', 0.0, corpus.encode('utf8'), chars.encode('utf8'), wordChars.encode('utf8')) # Decoder using the "Words": only use dictionary, no scoring: O(1) mode of word beam search self.decoder = word_beam_search_module.word_beam_search(tf.nn.softmax( self.ctcIn3dTBC, dim=2), 25, 'Words', 0.0, corpus.encode('utf8'), chars.encode('utf8'), wordChars.encode('utf8')) # Return a CTC operation to compute the loss and CTC operation to decode the RNN output return self.loss, self.decoder def setupTF(self): """ Initialize TensorFlow """ print('Python: ' + sys.version) print('Tensorflow: ' + tf.__version__) sess = tf.Session() # Tensorflow session saver = tf.train.Saver(max_to_keep=3) # Saver saves model to file modelDir = '../model/' latestSnapshot = tf.train.latest_checkpoint(modelDir) # Is there a saved model? # If model must be restored (for inference), there must be a snapshot if self.mustRestore and not latestSnapshot: raise Exception('No saved model found in: ' + modelDir) # Load saved model if available if latestSnapshot: print('Init with stored values from ' + latestSnapshot) saver.restore(sess, latestSnapshot) else: print('Init with new values') sess.run(tf.global_variables_initializer()) return (sess, saver) def toSpare(self, texts): """ Convert ground truth texts into sparse tensor for ctc_loss """ indices = [] values = [] shape = [len(texts), 0] # Last entry must be max(labelList[i]) # Go over all texts for (batchElement, texts) in enumerate(texts): # Convert to string of label (i.e. class-ids) # print(texts) # labelStr = [] # for c in texts: # print(c, '|', end='') # labelStr.append(self.charList.index(c)) # print(' ') labelStr = [self.charList.index(c) for c in texts] # Sparse tensor must have size of max. 
label-string if len(labelStr) &gt; shape[1]: shape[1] = len(labelStr) # Put each label into sparse tensor for (i, label) in enumerate(labelStr): indices.append([batchElement, i]) values.append(label) return (indices, values, shape) def decoderOutputToText(self, ctcOutput): """ Extract texts from output of CTC decoder """ # Contains string of labels for each batch element encodedLabelStrs = [[] for i in range(Model.batchSize)] # Word beam search: label strings terminated by blank if self.decoderType == DecoderType.WordBeamSearch: blank = len(self.charList) for b in range(Model.batchSize): for label in ctcOutput[b]: if label == blank: break encodedLabelStrs[b].append(label) # TF decoders: label strings are contained in sparse tensor else: # Ctc returns tuple, first element is SparseTensor decoded = ctcOutput[0][0] # Go over all indices and save mapping: batch -&gt; values idxDict = {b : [] for b in range(Model.batchSize)} for (idx, idx2d) in enumerate(decoded.indices): label = decoded.values[idx] batchElement = idx2d[0] # index according to [b,t] encodedLabelStrs[batchElement].append(label) # Map labels to chars for all batch elements return [str().join([self.charList[c] for c in labelStr]) for labelStr in encodedLabelStrs] def trainBatch(self, batch, batchNum): """ Feed a batch into the NN to train it """ sparse = self.toSpare(batch.gtTexts) rate = 0.01 if self.batchesTrained &lt; 10 else ( 0.001 if self.batchesTrained &lt; 2750 else 0.001) evalList = [self.merge, self.optimizer, self.loss] feedDict = {self.inputImgs( batch.imgs), self.gtTexts( sparse), self.seqLen ([Model.maxTextLen] * Model.batchSize), self.learningRate( rate)} (loss_summary, _, lossVal) = self.sess.run(evalList, feedDict) # Tensorboard: Add loss_summary to writer self.writer.add_summary(loss_summary, batchNum) self.batchesTrained += 1 return lossVal def return_rnn_out(self, batch, write_on_csv=False): """Only return rnn_out prediction value without decoded""" numBatchElements = len(batch.imgs) decoded, rnnOutput = self.sess.run([self.decoder, self.ctcIn3dTBC], {self.inputImgs: batch.imgs, self.seqLen: [Model.maxTextLen] * numBatchElements}) decoded = rnnOutput print(decoded.shape) if write_on_csv: s = rnnOutput.shape b = 0 csv = '' for t in range(s[0]): for c in range(s[2]): csv += str(rnnOutput[t, b, c]) + ';' csv += '\n' open('mat_0.csv', 'w').write(csv) return decoded[:,0,:].reshape(100,80) def inferBatch(self, batch): """ Feed a batch into the NN to recognize texts """ numBatchElements = len(batch.imgs) feedDict = {self.inputImgs: batch.imgs, self.seqLen: [Model.maxTextLen] * numBatchElements} evalRes = self.sess.run([self.decoder, self.ctcIn3dTBC], feedDict) decoded = evalRes[0] # # Dump RNN output to .csv file # decoded, rnnOutput = self.sess.run([self.decoder, self.rnnOutput], { # self.inputImgs: batch.imgs, self.seqLen: [Model.maxTextLen] * Model.batchSize}) # s = rnnOutput.shape # b = 0 # csv = '' # for t in range(s[0]): # for c in range(s[2]): # csv += str(rnnOutput[t, b, c]) + ';' # csv += '\n' # open('mat_0.csv', 'w').write(csv) texts = self.decoderOutputToText(decoded) return texts def save(self): """ Save model to file """ self.snapID += 1 self.saver.save(self.sess, r'C:\Users\PycharmProjects\hand\model\snapshot', global_step=self.snapID) </code></pre>
<p>You can run tf1 code in tf2 by importing tf a bit differently:</p> <pre><code>import tensorflow.compat.v1 as tf tf.disable_v2_behavior() </code></pre> <p>For details on how to migrate your code you should look here: <a href="https://www.tensorflow.org/guide/migrate" rel="nofollow noreferrer">https://www.tensorflow.org/guide/migrate</a></p>
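<p>There is also an automated first pass: TensorFlow 2 ships the <code>tf_upgrade_v2</code> script, e.g. <code>tf_upgrade_v2 --infile Model.py --outfile Model_v2.py</code> (file names here are placeholders), which rewrites most removed <code>tf.*</code> symbols to their <code>tf.compat.v1.*</code> equivalents and reports the lines that still need manual work, such as the <code>tf.contrib</code> RNN cells used above.</p>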
python|tensorflow|machine-learning|keras|deep-learning
0
6,295
59,233,327
Creating Bin for timestamp column
<p>I am trying to create a proper bin for a timestamp interval column,</p> <p>using code such as </p> <pre><code>df['Bin'] = pd.cut(df['interval_length'], bins=pd.to_timedelta(['00:00:00','00:10:00','00:20:00','00:30:00','00:40:00','00:50:00','00:60:00'])) </code></pre> <p>The Resulting df looks like:</p> <pre><code>time_interval | bin 00:17:00 (0 days 00:10:00, 0 days 00:20:00] 01:42:00 NaN 00:15:00 (0 days 00:10:00, 0 days 00:20:00] 00:00:00 NaN 00:06:00 (0 days 00:00:00, 0 days 00:10:00] </code></pre> <p>Which is a little off as the result I want is jjust the time value and not the days and also I want the upper limit or last bin to be 60 mins or inf ( or more)</p> <p><strong>Desired Output:</strong></p> <pre><code>time_interval | bin 00:17:00 (00:10:00,00:20:00] 01:42:00 (00:60:00,inf] 00:15:00 (00:10:00,00:20:00] 00:00:00 (00:00:00,00:10:00] 00:06:00 (00:00:00,00:10:00] </code></pre> <p>Thanks for looking!</p>
<p>In pandas, <code>inf</code> does not exist for timedeltas, so the maximal timedelta value is used instead. Also, to include the lowest values, the parameter <code>include_lowest=True</code> is used. If you want the bins expressed as timedeltas:</p> <pre><code>b = pd.to_timedelta(['00:00:00','00:10:00','00:20:00', '00:30:00','00:40:00', '00:50:00','00:60:00'])
b = b.append(pd.Index([pd.Timedelta.max]))

df['Bin'] = pd.cut(df['time_interval'], include_lowest=True, bins=b)
print (df)

  time_interval                                              Bin
0      00:17:00               (0 days 00:10:00, 0 days 00:20:00]
1      01:42:00  (0 days 01:00:00, 106751 days 23:47:16.854775]
2      00:15:00               (0 days 00:10:00, 0 days 00:20:00]
3      00:00:00     (-1 days +23:59:59.999999, 0 days 00:10:00]
4      00:06:00     (-1 days +23:59:59.999999, 0 days 00:10:00]
</code></pre> <p>If you want strings instead of timedeltas, use <code>zip</code> to create the labels, with <code>'inf'</code> appended:</p> <pre><code>vals = ['00:00:00','00:10:00','00:20:00', '00:30:00','00:40:00', '00:50:00','00:60:00']
b = pd.to_timedelta(vals).append(pd.Index([pd.Timedelta.max]))
vals.append('inf')

labels = ['{}-{}'.format(i, j) for i, j in zip(vals[:-1], vals[1:])]
df['Bin'] = pd.cut(df['time_interval'], include_lowest=True, bins=b, labels=labels)
print (df)

  time_interval                Bin
0      00:17:00  00:10:00-00:20:00
1      01:42:00       00:60:00-inf
2      00:15:00  00:10:00-00:20:00
3      00:00:00  00:00:00-00:10:00
4      00:06:00  00:00:00-00:10:00
</code></pre>
python|python-3.x|pandas|data-science|bins
1
6,296
59,161,920
Python Pandas slicing with various datatypes
<p>I have a column in a dataframe with two data types, like this:</p> <pre><code>25 3037205 26 2019-09-04 19:54:57 27 2019-09-09 17:55:45 28 2019-09-16 21:40:36 29 3037206 30 2019-09-06 14:49:41 31 2019-09-11 17:17:11 32 3037207 33 2019-09-11 17:19:04 </code></pre> <p>I'm trying to slice it and build a new data frame like this:</p> <pre><code>26 3037205 2019-09-04 19:54:57 27 3037205 2019-09-09 17:55:45 28 3037205 2019-09-16 21:40:36 29 3037206 2019-09-06 14:49:41 30 3037206 2019-09-11 17:17:11 31 3037207 2019-09-11 17:19:04 </code></pre> <p>I can't find how to slice between numbers "no datetype". </p> <p>Some ideas?</p> <p>Thx!</p>
<p>Another approach:</p> <pre><code>s = pd.to_numeric(df['col1'], errors='coerce') df.assign(val=s.ffill().astype(int)).loc[s.isnull()] </code></pre> <p>Output:</p> <pre><code> col1 val 26 2019-09-04 19:54:57 3037205 27 2019-09-09 17:55:45 3037205 28 2019-09-16 21:40:36 3037205 30 2019-09-06 14:49:41 3037206 31 2019-09-11 17:17:11 3037206 33 2019-09-11 17:19:04 3037207 </code></pre>
python|pandas|dataframe|datetime|slice
4
6,297
59,072,242
Python Logistic Regression error : "TypeError: issubclass() arg 2 must be a class or tuple of classes"
<p>I'm creating a multiclass classification model with 4 possible outcomes. it worked yesterday but today, I receive the error below. I'm not very familiar with Python so any help in regards to how to fix this is appreciated.</p> <pre><code>from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(X_train) X_train_scaled = scaler.transform(X_train) X_test_scaled = scaler.transform(X_test) Logistic=LogisticRegression() logistic.fit(X_train_scaled,y_train) y_pred_log=logistic.predict(X_test_scaled) log_cm=(metrics.confusion_matrix(y_test, y_pred_log)) </code></pre> <pre><code>TypeError Traceback (most recent call last) &lt;ipython-input-213-8e436855d9cc&gt; in &lt;module&gt; 1 logistic=LogisticRegression() ----&gt; 2 logistic.fit(X_train_scaled,y_train) 3 y_pred_log=logistic.predict(X_test_scaled) 4 log_cm=(metrics.confusion_matrix(y_test, y_pred_log)) ~\Anaconda3\lib\site-packages\sklearn\linear_model\logistic.py in fit(self, X, y, sample_weight) 1491 The SAGA solver supports both float64 and float32 bit arrays. 1492 """ -&gt; 1493 solver = _check_solver(self.solver, self.penalty, self.dual) 1494 1495 if not isinstance(self.C, numbers.Number) or self.C &lt; 0: ~\Anaconda3\lib\site-packages\sklearn\linear_model\logistic.py in _check_solver(solver, penalty, dual) 430 warnings.warn("Default solver will be changed to 'lbfgs' in 0.22. " 431 "Specify a solver to silence this warning.", --&gt; 432 FutureWarning) 433 434 all_solvers = ['liblinear', 'newton-cg', 'lbfgs', 'sag', 'saga'] TypeError: issubclass() arg 2 must be a class or tuple of classes </code></pre>
<p>Seems like your solver is causing the error. Try setting the solver explicitly when you create the model:</p> <pre><code>logistic = LogisticRegression(solver='lbfgs')
</code></pre>
python|scikit-learn|logistic-regression|sklearn-pandas
0
6,298
59,398,266
Can't see the impact of drop_duplicates when used for pandas dataframe
<p>I see no change after calling pandas.drop_duplicates() on the dataframe I'm working on in Python.</p> <pre><code>df = pd.read_excel('sample_data.xlsx', index_col=0) df.drop_duplicates() </code></pre> <p><a href="https://i.stack.imgur.com/pm4UJ.png" rel="nofollow noreferrer">This is the data I'm working on</a></p>
<p>There are two issues that I can see you are having with the code:</p> <ol> <li>You are not passing a subset. By default, in panda's <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer">documentation</a>, <code>drop_duplicates()</code> will take into account <strong>all</strong> columns and delete rows that are duplicate in all these rows. If you wish to delete duplicates for a certain column or group of columns then you should use the <code>subset</code>.</li> <li>You should check the effect of the parameter <code>inplace</code> therefore <code>df = df.drop_duplicates(['col_1','col_2'])</code> </li> </ol> <p>And after taking into account these 2 items you should notice the difference. </p> <p>Here is an example:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'col_1':[1,2,3,3,1],'col_2':[1,1,3,3,1],'col_3':['a','b','c','d','a']}) print(df) col_1 col_2 col_3 0 1 1 a 1 2 1 b 2 3 3 c 3 3 3 d 4 1 1 a </code></pre> <p>If we use <code>drop_duplicates()</code> without any subset, then it will drop rows that all duplicate for all columns. This is row 0 and 4, as they are duplicates for all 3 columns. Since the default is <code>keep='first'</code> you will keep row 0 and drop 4.</p> <p>If we wish to use a subset, for instance <code>drop_duplicates(['col_1','col_2'])</code> then we can expect two groups of duplicate rows 0 and 4 (because their values for col_1 and col_2 are the same) and rows 2 and 3 because you are not taking into account <code>col_3</code>. Similarly to the first case, you will drop 4 and keep 0, drop row 3 and keep 2. This would be the output for the first case:</p> <pre><code>df.drop_duplicates(inplace=True) print(df) col_1 col_2 col_3 0 1 1 a 1 2 1 b 2 3 3 c 3 3 3 d </code></pre> <p>And this one for the second case:</p> <pre><code>df.drop_duplicates(['col_1','col_2'],inplace=True) print(df) col_1 col_2 col_3 0 1 1 a 1 2 1 b 2 3 3 c </code></pre>
python|pandas
1
6,299
59,097,688
how to delete nan values in pandas?
<p>How to delete NaN values in <code>pandas</code>? When I was to print the code to (.csv). The columns are irregular and filled with NaN values. </p> <pre><code>import pandas as pd egzersizler = [{'Hareket Adı': 'Smith Machine Shrug', 'Url': 'https://www.bodybuilding.com/exercises/smith-machine-shrug'}, {'Hareket Adı': 'Leverage Shrug', 'Url': 'https://www.bodybuilding.com/exercises/leverage-shrug'}, {'Hareket Adı': 'Standing Dumbbell Upright Row', 'Url': 'https://www.bodybuilding.com/exercises/standing-dumbbell-upright-row'}, {'Hareket Adı': 'Kettlebell Sumo High Pull', 'Url': 'https://www.bodybuilding.com/exercises/kettlebell-sumo-high-pull'}, {'Hareket Adı': 'Dumbbell Shrug', 'Url': 'https://www.bodybuilding.com/exercises/dumbbell-shrug'}, {'Hareket Adı': 'Calf-Machine Shoulder Shrug', 'Url': 'https://www.bodybuilding.com/exercises/calf-machine-shoulder-shrug'}, {'Hareket Adı': 'Barbell Shrug', 'Url': 'https://www.bodybuilding.com/exercises/barbell-shrug'}, {'Hareket Adı': 'Barbell Shrug Behind The Back', 'Url': 'https://www.bodybuilding.com/exercises/barbell-shrug-behind-the-back'}, {'Hareket Adı': 'Upright Cable Row', 'Url': 'https://www.bodybuilding.com/exercises/upright-cable-row'}, {'Hareket Adı': 'Cable Shrugs', 'Url': 'https://www.bodybuilding.com/exercises/cable-shrugs'}, {'Hareket Adı': 'Upright Row - With Bands', 'Url': 'https://www.bodybuilding.com/exercises/upright-row-with-bands'}, {'Hareket Adı': 'Smith Machine Behind the Back Shrug', 'Url': 'https://www.bodybuilding.com/exercises/smith-machine-behind-the-back-shrug'}, {'Hareket Adı': 'Smith Machine Upright Row', 'Url': 'https://www.bodybuilding.com/exercises/smith-machine-upright-row'}, {'Hareket Adı': 'Clean Shrug', 'Url': 'https://www.bodybuilding.com/exercises/clean-shrug'}, {'Hareket Adı': 'Scapular Pull-Up', 'Url': 'https://www.bodybuilding.com/exercises/scapular-pull-up'}, {'Hareket Adı': 'Snatch Shrug', 'Url': 'https://www.bodybuilding.com/exercises/snatch-shrug'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Kas Grubu': 'Traps'}, {'Ekipmanlar': 'Machine'}, {'Ekipmanlar': 'Machine'}, {'Ekipmanlar': 'Dumbbell'}, {'Ekipmanlar': 'Kettlebells'}, {'Ekipmanlar': 'Dumbbell'}, {'Ekipmanlar': 'Machine'}, {'Ekipmanlar': 'Barbell'}, {'Ekipmanlar': 'Barbell'}, {'Ekipmanlar': 'Cable'}, {'Ekipmanlar': 'Cable'}, {'Ekipmanlar': 'Bands'}, {'Ekipmanlar': 'Machine'}, {'Ekipmanlar': 'Machine'}, {'Ekipmanlar': 'Barbell'}, {'Ekipmanlar': 'None'}, {'Ekipmanlar': 'Barbell'}, {'Düzey': 'Level: Beginner'}, {'Düzey': 'Level: Beginner'}, {'Düzey': 'Level: Beginner'}, {'Düzey': 'Level: Intermediate'}, {'Düzey': 'Level: Beginner'}, {'Düzey': 'Level: Beginner'}, {'Düzey': ''}, {'Düzey': 'Level: Beginner'}, {'Düzey': 'Level: Intermediate'}, {'Düzey': 'Level: Beginner'}, {'Düzey': 'Level: Beginner'}, {'Düzey': 'Level: Beginner'}, {'Düzey': 'Level: Beginner'}, {'Düzey': 'Level: Beginner'}, {'Düzey': 'Level: Beginner'}, {'Düzey': 'Level: Intermediate'}] df=pd.DataFrame(egzersizler, columns = ['Hareket Adı','Url','Düzey','Kas Grubu','Ekipmanlar'] ) print (df) </code></pre> <p><a href="https://i.stack.imgur.com/tZYeP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tZYeP.png" alt="enter image description here"></a></p> 
<p><a href="https://i.stack.imgur.com/RYURo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RYURo.png" alt="enter image description here"></a></p>
<p>Try this, it should fix your dataframe:</p> <pre><code>ndf = df
ndf['Kas Grubu'] = ndf['Kas Grubu'].dropna().reset_index().drop(columns='index')
ndf['Ekipmanlar'] = ndf['Ekipmanlar'].dropna().reset_index().drop(columns='index')
ndf['Düzey'][54] = "Level: Unknown"
ndf['Düzey'] = ndf['Düzey'].dropna().reset_index().drop(columns='index')
ndf = ndf.dropna()
</code></pre> <p>To display the full URL, use this option:</p> <pre><code>pd.set_option('display.max_colwidth', -1)
</code></pre>
python|pandas
0