Unnamed: 0: int64, 0 to 378k
id: int64, 49.9k to 73.8M
title: string, length 15 to 150
question: string, length 37 to 64.2k
answer: string, length 37 to 44.1k
tags: string, length 5 to 106
score: int64, -10 to 5.87k
3,500
58,601,120
How to split record from a DataFrame cross pairs in pandas?
<p>I have a dataframe like this:</p> <pre><code> a b c 0 A B 1 4 B A 1 1 C D -1 3 D C 3 2 E F 3 </code></pre> <p>The '0' row and the '4' row are a pair. Based on the value of the <code>c</code> column I decide whether to remove one of the rows or both of them: if the mirrored pair has the same value in <code>c</code>, I remove one row, otherwise I remove both.</p> <pre><code> a b c 0 A B 1 2 E F 3 </code></pre> <p>I use a while loop, but my data set is huge. Any good ideas?</p>
<p>first select the non-duplicated rows using <code>np.sort</code> and <a href="https://pandas.pydata.org/pandas-docs/version/0.25/reference/api/pandas.Series.duplicated.html" rel="nofollow noreferrer"><code>Series.duplicated</code></a> (see <strong>m1</strong> detail)</p> <p>Then you can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>DataFrame.groupby</code></a> and group according to columns <code>a, b</code> (see detail <strong>g</strong>). Then perform a <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>Boolean indexing</code></a> using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>Groupby.transform</code></a> to eliminate duplicates when c does not match.:</p> <pre><code>df2=df.reset_index(drop=True) m1=~pd.DataFrame(np.sort(df2[['a','b']])).duplicated() g=m1.cumsum() m2=~df2.groupby(g,sort=False)['c'].transform(lambda x: (x.nunique()==len(x))&amp;(len(x)&gt;1)) mask=m1&amp;m2 print(mask) 0 True 1 False 2 False 3 False 4 True dtype: bool </code></pre> <hr> <pre><code>df_filtered=df2[mask] print(df_filtered) a b c 0 A B 1 4 E F 3 </code></pre> <hr> <p><strong>Details:</strong></p> <pre><code>m1 0 True 1 False 2 True 3 False 4 True dtype: bool </code></pre> <hr> <pre><code>m2 0 True 1 True 2 False 3 False 4 True dtype: bool </code></pre> <hr> <pre><code>g 0 1 1 1 2 2 3 2 4 3 dtype: int64 </code></pre>
python-3.x|pandas
1
3,501
58,208,274
Python: Array with too many Indices?
<p>I'm trying to understand the code below and what it does:</p> <pre><code>im = pilimage.open(img_path) image_array = np.array(im) imgstack = image_array[area[0]:area[1], area[2]:area[3], z_stack[0]:z_stack[1]] </code></pre> <p>I know that it opens up an image and stores it in <code>im</code> and then converts <code>im</code> into an array and stores that in <code>image_array</code>. What I don't really understand is the last part. I don't have that much experience with python syntax so can anyone help me. Thank you so much!</p>
<p>This is an advanced-ish application of the python slice notation which you can learn about here: <a href="https://stackoverflow.com/questions/509211/understanding-slice-notation">Understanding slice notation</a>. There are a couple nuances here though:</p> <ul> <li><code>area[0]:area[1]</code> is taking the <code>int</code> at these two indexes and making a slice <code>start:end</code> of the <code>image_array</code> following the normal rules.</li> <li>Only <code>numpy</code> arrays support the ability to slice n-dimensions at the same time. <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html</a></li> </ul>
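<p>To make the slicing concrete, here is a minimal sketch with made-up <code>area</code> and <code>z_stack</code> values (assuming <code>image_array</code> is a 3-D numpy array):</p> <pre><code>import numpy as np

image_array = np.arange(4 * 5 * 6).reshape(4, 5, 6)   # stand-in for the image data
area = [1, 3, 0, 4]      # hypothetical row/column bounds
z_stack = [2, 5]         # hypothetical depth bounds

# rows 1:3, columns 0:4, depth 2:5 -- one slice per dimension, all at once
imgstack = image_array[area[0]:area[1], area[2]:area[3], z_stack[0]:z_stack[1]]
print(imgstack.shape)    # (2, 4, 3)
</code></pre>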
python|arrays|image|numpy|python-imaging-library
0
3,502
44,546,086
Remove one dataframe from another with Pandas
<p>I have two dataframes of different size (<code>df1</code> and <code>df2</code>). I would like to remove from <code>df1</code> all the rows which are stored within <code>df2</code>.</p> <p>So if I have <code>df2</code> equal to:</p> <pre><code> A B 0 wer 6 1 tyu 7 </code></pre> <p>And <code>df1</code> equal to:</p> <pre><code> A B C 0 qwe 5 a 1 wer 6 s 2 wer 6 d 3 rty 9 f 4 tyu 7 g 5 tyu 7 h 6 tyu 7 j 7 iop 1 k </code></pre> <p>The final result should be like so:</p> <pre><code> A B C 0 qwe 5 a 1 rty 9 f 2 iop 1 k </code></pre> <p>I was able to achieve my goal by using a for loop, but I would like to know if there is a better, more elegant and efficient way to perform such an operation.</p> <p>Here is the code I wrote in case you need it:</p> <pre><code>import pandas as pd df1 = pd.DataFrame({'A' : ['qwe', 'wer', 'wer', 'rty', 'tyu', 'tyu', 'tyu', 'iop'], 'B' : [ 5, 6, 6, 9, 7, 7, 7, 1], 'C' : ['a' , 's', 'd', 'f', 'g', 'h', 'j', 'k']}) df2 = pd.DataFrame({'A' : ['wer', 'tyu'], 'B' : [ 6, 7]}) for i, row in df2.iterrows(): df1 = df1[(df1['A']!=row['A']) &amp; (df1['B']!=row['B'])].reset_index(drop=True) </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="noreferrer"><code>merge</code></a> with an outer join, filter with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="noreferrer"><code>query</code></a>, and finally remove the helper column with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="noreferrer"><code>drop</code></a> (the chained calls are wrapped in parentheses so the expression can span multiple lines):</p> <pre><code>df = (pd.merge(df1, df2, on=['A','B'], how='outer', indicator=True) .query("_merge != 'both'") .drop('_merge', axis=1) .reset_index(drop=True)) print (df) A B C 0 qwe 5 a 1 rty 9 f 2 iop 1 k </code></pre>
python|pandas|dataframe|compare|difference
17
3,503
61,019,326
asfreq yields unexpected results with Period dtype
<p>When upsampling a Dataframe, I would to like that new rows created are left empty.</p> <p>Considering following code:</p> <pre><code>import pandas as pd p5h = pd.period_range(start='2020-02-01 00:00', end='2020-03-04 00:00', freq='5h', name='p5h') df = pd.DataFrame({'Values' : 1}, index=p5h) </code></pre> <p>I would like to upsample to '1H' frequency, leaving new rows filled with NaN values.</p> <pre><code>import numpy as np df1h = df.asfreq('1H', method=None, how='start', fill_value = np.NaN) </code></pre> <p>But here is what I get:</p> <pre><code> df1h.head(7) Values p5h 2020-02-01 00:00 1 2020-02-01 05:00 1 2020-02-01 10:00 1 2020-02-01 15:00 1 2020-02-01 20:00 1 2020-02-02 01:00 1 2020-02-02 06:00 1 </code></pre> <p>(need for that is then to merge/join/concat this DataFrame to another one having a '1H' PeriodIndex - this merging operation cannot be achieved if PeriodIndex of both DataFrames do not share the same frequency)</p> <p>Thanks for any help! Bests</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Period.asfreq.html#pandas.Period.asfreq" rel="nofollow noreferrer"><code>asfreq()</code></a> is indeed a method for <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Period.html" rel="nofollow noreferrer"><code>Period</code></a> dtypes. Note that your index has dtype:</p> <pre><code>df.index.dtype # period[5H] </code></pre> <p>However, its functionality is slightly different, and it only takes these two parameters:</p> <blockquote> <ul> <li><p><strong>freqstr</strong> The desired frequency.</p></li> <li><p><strong>how</strong> {‘E’, ‘S’, ‘end’, ‘start’}, default ‘end’ Start or end of the timespan.</p></li> </ul> </blockquote> <hr> <p>What could be done to handle the <code>Period</code> index dtype is to use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html" rel="nofollow noreferrer"><code>resample</code></a> and just aggregate with <code>first</code>:</p> <pre><code>df.resample('1H').first() Values p5h 2020-02-01 00:00 1.0 2020-02-01 01:00 NaN 2020-02-01 02:00 NaN 2020-02-01 03:00 NaN 2020-02-01 04:00 NaN ... ... 2020-03-03 21:00 1.0 2020-03-03 22:00 NaN 2020-03-03 23:00 NaN 2020-03-04 00:00 NaN 2020-03-04 01:00 NaN </code></pre> <p>Though if you instead defined the index using <code>pd.date_range</code> you would get as expected:</p> <pre><code>p5h = pd.date_range(start='2020-02-01 00:00', end='2020-03-04 00:00', freq='5h', name='p5h') df = pd.DataFrame({'Values' : 1}, index=p5h) df.asfreq('1H') Values p5h 2020-02-01 00:00:00 1.0 2020-02-01 01:00:00 NaN 2020-02-01 02:00:00 NaN 2020-02-01 03:00:00 NaN 2020-02-01 04:00:00 NaN ... ... 2020-03-03 17:00:00 NaN 2020-03-03 18:00:00 NaN 2020-03-03 19:00:00 NaN 2020-03-03 20:00:00 NaN 2020-03-03 21:00:00 1.0 </code></pre>
python|pandas|period
3
3,504
60,848,403
Pycharm interpreter uses the wrong path
<p>I am trying to follow <a href="https://github.com/huggingface/transformers#quick-tour-of-the-fine-tuningusage-scripts" rel="nofollow noreferrer">this</a> instructions. I downloaded the Glue dataset, and I am trying to run this command</p> <pre><code>python ./examples/run_glue.py \ --model_type bert \ --model_name_or_path bert-base-uncased \ --task_name MRPC \ --do_train \ --do_eval \ --do_lower_case \ --data_dir C:/Git/RemoteDGX/MRPC/glue_data/MRPC \ --max_seq_length 128 \ --per_gpu_eval_batch_size=8 \ --per_gpu_train_batch_size=8 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir /tmp/MRPC/ </code></pre> <p>I am running the command from pycharm, so I use this configuration.</p> <p>When I press the run command:</p> <pre><code>C:\Git\PythonEnv\Scripts\python.exe C:/Git/RemoteDGX/transformers/examples/run_glue.py --model_type bert --model_name_or_path bert-base-uncased --task_name MRPC --do_train --do_eval --do_lower_case --data_dir C:/Git/RemoteDGX/MRPC/glue_data/MRPC --max_seq_length 128 --per_gpu_eval_batch_size=8 --per_gpu_train_batch_size=8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/MRPC/ </code></pre> <p>But I am getting this error:</p> <pre><code>ImportError: cannot import name 'MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING' from 'transformers' (C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\site-packages\transformers\__init__.py) </code></pre> <p>By the error, I see that the interperter is trying to find transformers in (C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\site-packages\transformers__init__.py.</p> <p>What am I doing wrong? I set the configuration accordingly</p>
<p>The problem was the version of the transformers module; the interpreter itself was set correctly.</p>
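<p>A quick way to confirm both the version that is actually being imported and where it is loaded from (a minimal check, run with the same interpreter PyCharm uses):</p> <pre><code>import transformers

# version that ends up being imported
print(transformers.__version__)
# path it is loaded from (shows which site-packages wins)
print(transformers.__file__)
</code></pre> <p>If the printed path is not the expected environment, or the version is older than the one the example scripts require, upgrading the package inside that environment should resolve the import error.</p>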
python|pycharm|huggingface-transformers
0
3,505
60,846,087
Renaming column names based on their values in Pandas
<p>I have the following data frame. </p> <pre><code>df col_1 col_2 col_3 min1 max1 target1 min2 max2 target2 </code></pre> <p>I would like to rename columns col_1, col_2, col_3 based on their value as Min, Max, and Target respectively. I would like to have something below as a result. </p> <pre><code>Min Max Target min1 max1 target1 min2 max2 target2 </code></pre> <p>Can anyone show me a simple trick to do this using Pandas python? </p>
<p>Well, I am not sure that you <strong>should</strong> rename columns this way, but here is a way you <strong>could</strong> do it. First <a href="https://stackoverflow.com/questions/41719259/how-to-remove-numbers-from-string-terms-in-a-pandas-dataframe">remove numbers from string</a> and then use the <a href="https://stackoverflow.com/questions/20950650/how-to-sort-counter-by-value-python">most common string</a>.</p> <pre><code>from collections import Counter cols = [Counter(df[col].str.replace('\d+', '')).most_common()[0][0].capitalize() for col in df.columns ] df.columns = cols </code></pre> <p>Output:</p> <pre><code> Min Max Target 0 min1 max1 target1 1 min2 max2 target2 </code></pre>
python|pandas|dataframe
2
3,506
71,692,261
Nested column names in pandas rows, trying to do an unstack type operation
<p>I have this code and dataframe</p> <pre><code>df_initial = pd.DataFrame(data = {'ref':['02','NaN','NaN','NaN','03','NaN','NaN','NaN'], 'Part_ID':['1234-1', 'Shop_Work','repair','scrap','4567-2','Shop_Work','clean','overhaul']}) </code></pre> <p><a href="https://i.stack.imgur.com/fsUVZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fsUVZ.png" alt="enter image description here" /></a></p> <p>I wish to somehow 'unstack' rows into columns, to give the following output:</p> <p><a href="https://i.stack.imgur.com/BqUCv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BqUCv.png" alt="enter image description here" /></a></p> <p>I have tried unstack but this is only for multi-index?</p>
<p>Use:</p> <pre><code>#if NaNs are string replace to missing values df_initial['ref'] = df_initial['ref'].replace('NaN', np.nan) #test missing values m = df_initial['ref'].isna() #forward filling missing values df_initial['ref'] = df_initial['ref'].ffill() #new column Shop_Work df_initial['Shop_Work'] = df_initial['Part_ID'] #replace Part_ID by mask to NaN and forward filling df_initial['Part_ID'] = df_initial['Part_ID'].mask(m).ffill() #get out Shop_Work rows df = df_initial[df_initial['Shop_Work'].ne('Shop_Work') &amp; m].reset_index(drop=True) print (df) ref Part_ID Shop_Work 0 02 1234-1 repair 1 02 1234-1 scrap 2 03 4567-2 clean 3 03 4567-2 overhaul </code></pre>
python|pandas
1
3,507
69,947,446
Sorting Algorithm according to match/mismatch
<p>The data I'm working on represent the presence or the absence of some species in 5 habitats. I want to obtain clusters according to the shared zones between them, basically i want to maximize the matches between elements for each species.<br /> This is the original dataset <img src="https://i.stack.imgur.com/H6XIa.png" alt="Initial Dataset" /></p> <p>And this is the type of sorting I'm looking for <img src="https://i.stack.imgur.com/VFO0j.png" alt="Desired Sorting" /><br /> I managed to obtain 3 groups by simply sorting on the numbers of occupied zones and then manually fixing evident errors.<br /> This very simplified algorithm only worked because occupied zones are always contiguous, and some common pattern are present.</p> <p>At first I thought this problem was somewhat similar to <a href="https://en.wikipedia.org/wiki/Sequence_alignment" rel="nofollow noreferrer">Sequence Alignment</a>, but I can't really think of any way to apply those algorythm to my case. Furthermore I'd also like to find an automated way to cluster those data into 3 groups, but this is not strictly necessary.</p> <p>The real challenge is sorting this <a href="https://i.stack.imgur.com/06aZb.png" rel="nofollow noreferrer">new Dataset</a> that contains gaps.<br /> I think that the best method should:</p> <ul> <li>Count the number of matches and mismatches between each possible couple of species</li> <li>Find the species with most matches with the first one and set them next to each other</li> <li>Repeat with the second species (the one just repositioned) excluding the check with the first species (to avoid infinite loops), and so on...</li> </ul> <pre><code>## Original df Species Zones array 0 A [0, 1, 1, 1, 1] 1 B [0, 1, 1, 1, 1] 2 C [0, 1, 1, 1, 1] 3 D [0, 1, 1, 0, 0] 4 E [0, 1, 1, 1, 1] 5 F [0, 1, 1, 1, 1] 6 G [0, 1, 1, 1, 1] 7 H [0, 1, 1, 1, 1] 8 I [1, 1, 1, 1, 1] 9 J [0, 1, 1, 1, 1] 10 K [1, 1, 1, 0, 0] 11 L [1, 1, 1, 0, 0] 12 M [1, 1, 1, 1, 1] 13 N [0, 1, 1, 1, 1] 14 O [0, 0, 1, 1, 1] 15 P [0, 1, 1, 1, 1] 16 Q [0, 0, 1, 1, 1] 17 R [1, 1, 1, 0, 0] 18 S [0, 1, 1, 1, 1] 19 T [0, 1, 1, 1, 0] 20 U [0, 1, 1, 1, 1] ## Sorted df Species Zones array 0 A [1, 1, 1, 1, 1] 1 B [1, 1, 1, 1, 1] 2 C [0, 1, 1, 1, 1] 3 D [0, 1, 1, 1, 1] 4 E [0, 1, 1, 1, 1] 5 F [0, 1, 1, 1, 1] 6 G [0, 1, 1, 1, 1] 7 H [0, 1, 1, 1, 1] 8 I [0, 1, 1, 1, 1] 9 J [0, 1, 1, 1, 1] 10 K [0, 1, 1, 1, 1] 11 L [0, 1, 1, 1, 1] 12 M [0, 1, 1, 1, 1] 13 N [0, 1, 1, 1, 1] 14 O [0, 0, 1, 1, 1] 15 P [0, 0, 1, 1, 1] 16 Q [0, 1, 1, 1, 0] 17 R [0, 1, 1, 0, 0] 18 S [1, 1, 1, 0, 0] 19 T [1, 1, 1, 0, 0] 20 U [1, 1, 1, 0, 0] ## New gapped df Species Prey 0 A [1, 1, 1, 0, 1, 0, 1, 1, 1] 1 B [1, 0, 1, 0, 1, 1, 1, 0, 1] 2 C [1, 1, 1, 0, 1, 0, 1, 0, 1] 3 D [1, 1, 1, 0, 1, 0, 1, 0, 0] 4 E [1, 1, 1, 0, 1, 0, 0, 0, 0] 5 F [1, 0, 1, 0, 1, 1, 0, 0, 0] 6 G [1, 1, 1, 1, 0, 0, 0, 0, 0] 7 H [1, 1, 1, 0, 0, 0, 0, 0, 0] 8 I [1, 0, 1, 0, 0, 0, 0, 0, 0] 9 J [0, 0, 1, 0, 0, 1, 0, 0, 0] 10 K [1, 0, 1, 0, 0, 0, 0, 0, 0] 11 L [1, 0, 1, 0, 0, 0, 0, 0, 0] 12 M [1, 0, 0, 0, 0, 0, 0, 0, 0] 13 N [0, 0, 0, 0, 0, 1, 0, 0, 0] 14 O [1, 0, 0, 0, 0, 0, 0, 0, 0] 15 P [1, 0, 0, 0, 0, 0, 0, 0, 0] 16 Q [0, 0, 0, 0, 0, 1, 0, 0, 0] 17 R [1, 0, 0, 0, 0, 0, 0, 0, 0] 18 S [0, 0, 1, 0, 0, 0, 0, 0, 0] 19 T [0, 0, 1, 0, 0, 0, 0, 0, 0] 20 U [0, 0, 1, 0, 0, 0, 0, 0, 0] </code></pre>
<p>I think I solved this problem in a simple yet appropriate way (I'm sure there may be better algorithms though). I check the number of matches between each couple of boolean vectors:</p> <pre class="lang-py prettyprint-override"><code>n=len(diet_bool_df['Prey']) match_mtx = [[0 for i in range(n)] for j in range(n)] #empty matrix nxn for i in range(n): for j in range(n): #this gives a vector containing 1 for a match and 0 for a mismatch position-wise match_vec = [int(diet_bool_df['Boolean Diet'][i][k]==diet_bool_df['Prey'][j][k]) for k in range(len(diet_bool_df['Prey'][i]))] #the sum of all 1 gives the number of matches between each couple (i,j) match_mtx[i][j] = sum(match_vec) </code></pre> <p>Since there are 9 preys there will be a maximum of 9 matches, which will indicate identical diets. This new matrix now can be actually clusterized with kmeans because data are isotropic and all assumptions should be met.</p> <pre><code>def plot_match_cluster(): fig= go.Figure() fig.add_trace(go.Heatmap( z=match_mtx, )) fig.show() </code></pre> <p><a href="https://i.stack.imgur.com/JOYcA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JOYcA.png" alt="Match Matrix as a heatmap" /></a></p> <pre><code>kmeans = sklearn.cluster.KMeans(n_clusters=3).fit_predict(match_mtx) </code></pre> <p>The new clusterized data looks like this<br /> <a href="https://i.stack.imgur.com/5I0Qd.png" rel="nofollow noreferrer">Clusterized Data</a></p> <p>NB. Interestingly enough the initial sorting I decided according to the number of preys is a good indicator in this scenario to create clusters. This is only true with 3 clusters though.</p>
python|pandas|dataframe|sorting|cluster-analysis
0
3,508
43,068,785
Speed up nested for-loops in python / going through numpy array
<p>Say I have 4 numpy arrays A,B,C,D , each the size of (256,256,1792). I want to go through each element of those arrays and do something to it, <em>but</em> I need to do it in chunks of 256x256x256-cubes.</p> <p>My code looks like this:</p> <pre><code>for l in range(7): x, y, z, t = 0,0,0,0 for m in range(a.shape[0]): for n in range(a.shape[1]): for o in range(256*l,256*(l+1)): t += D[m,n,o] * constant x += A[m,n,o] * D[m,n,o] * constant y += B[m,n,o] * D[m,n,o] * constant z += C[m,n,o] * D[m,n,o] * constant final = (x+y+z)/t doOutput(final) </code></pre> <p>The code works and outputs exactly what I want, but its awfully slow. I've read online that those kind of nested for loops should be avoided in python. What is the cleanest solution to it? (right now I'm trying to do this part of my code in C and somehow import it via Cython or other tools, but I'd love a pure python solution)</p> <p>Thanks</p> <p><strong>Add on</strong></p> <p><em>Willem Van Onsem</em>'s Solution to the first part seems to work just fine and I think I comprehend it. But now I want to modify my values before summing them. It looks like</p> <p>(within the outer l loop)</p> <pre><code>for m in range(a.shape[0]): for n in range(a.shape[1]): for o in range(256*l,256*(l+1)): R += (D[m,n,o] * constant * (A[m,n,o]**2 + B[m,n,o]**2 + C[m,n,o]**2)/t - final**2) doOutput(R) </code></pre> <p>I obviously can't just square the sum <code>x = (A[:a.shape[0],:a.shape[1],256*l:256*(l+1)]*Dsub).sum()**2*constant</code> since (A²+B²) != (A+B)² How can I redo this last for loops?</p>
<p>Since you update <code>t</code> with every element of <code>m in range(a.shape[0])</code>, <code>n in range(a.shape[1])</code> and <code>o in range(256*l,256*(l+1))</code>, you can substitute:</p> <pre><code>for m in range(a.shape[0]): for n in range(a.shape[1]): for o in range(256*l,256*(l+1)): t += D[m,n,o] </code></pre> <p>With:</p> <pre><code>t += D[:a.shape[0],:a.shape[1],256*l:256*(l+1)].sum() </code></pre> <p>The same for the other assignments. So you can rewrite your code to:</p> <pre><code>for l in range(7): Dsub = D[:a.shape[0],:a.shape[1],256*l:256*(l+1)] x = (A[:a.shape[0],:a.shape[1],256*l:256*(l+1)]*Dsub).sum()*constant y = (B[:a.shape[0],:a.shape[1],256*l:256*(l+1)]*Dsub).sum()*constant z = (C[:a.shape[0],:a.shape[1],256*l:256*(l+1)]*Dsub).sum()*constant t = Dsub.sum()*constant final = (x+y+z)/t doOutput(final) </code></pre> <p>Note that the <code>*</code> in <em>numpy</em> is the <em>element-wise</em> multiplication, <strong>not</strong> the matrix product. You can do the multiplication before the sum, but since the sum of values multiplied by a constant is equal to that constant multiplied by the sum, I think it is more efficient to keep the constant out of the loop.</p> <p>If <code>a.shape[0]</code> is equal to <code>D.shape[0]</code>, etc., you can use <code>:</code> instead of <code>:a.shape[0]</code>. Based on your question, <strong>that seems to be the case</strong>, so:</p> <pre><code># only when `a.shape[0] == D.shape[0], a.shape[1] == D.shape[1] (and so for A, B and C)` for l in range(7): Dsub = D[:,:,256*l:256*(l+1)] x = (A[:,:,256*l:256*(l+1)]*Dsub).sum()*constant y = (B[:,:,256*l:256*(l+1)]*Dsub).sum()*constant z = (C[:,:,256*l:256*(l+1)]*Dsub).sum()*constant t = Dsub.sum()*constant final = (x+y+z)/t doOutput(final) </code></pre> <p>Processing the <code>.sum()</code> on the <code>numpy</code> level will boost performance since you do not convert values back and forth, and with <code>.sum()</code> you use a <em>tight</em> loop.</p> <p><strong>EDIT</strong>:</p> <p>Your updated question does not change much. You can simply use:</p> <pre><code>m, n, *_ = a.shape lo, hi = 256*l, 256*(l+1) R = (D[:m,:n,lo:hi]*constant*(A[:m,:n,lo:hi]**2+B[:m,:n,lo:hi]**2+C[:m,:n,lo:hi]**2)/t - final**2).sum() doOutput(R) </code></pre>
python|arrays|numpy|for-loop
1
3,509
72,344,379
How can I make dictionary keys into columns of a dataframe?
<pre><code>!pip install -U LeXmo from LeXmo import LeXmo df['Dict'] = df['Content '].apply(lambda x: [LeXmo.LeXmo(x)]) </code></pre> <p>Using this snippet, I am able to generate this:</p> <p><a href="https://i.stack.imgur.com/e9Dv2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e9Dv2.png" alt="enter image description here" /></a></p> <p><strong>But my desired output is</strong></p> <p><a href="https://i.stack.imgur.com/9Fpd5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9Fpd5.png" alt="enter image description here" /></a></p> <p><em>I want to make each dictionary key of 'Dict' into a separate column for each row. How can I do this?</em></p>
<p>You can convert output from <code>LeXmo.LeXmo(x)</code> to <code>Series</code>, so it create new columns if call function in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.apply.html" rel="nofollow noreferrer"><code>Series.apply</code></a>, last append to original DataFrame by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a>:</p> <pre><code>df = df.join(df['Content '].apply(lambda x: pd.Series(LeXmo.LeXmo(x)))) </code></pre>
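<p>As a minimal sketch of what that produces, with a made-up <code>fake_lexmo</code> function standing in for <code>LeXmo.LeXmo</code> (it only needs to return a dict per text):</p> <pre><code>import pandas as pd

# hypothetical stand-in for LeXmo.LeXmo: returns a dict of scores for a text
def fake_lexmo(text):
    return {'anger': 0.1, 'joy': 0.5, 'text_len': len(text)}

df = pd.DataFrame({'Content ': ['first text', 'second text']})
df = df.join(df['Content '].apply(lambda x: pd.Series(fake_lexmo(x))))
print(df)
#       Content   anger  joy  text_len
# 0   first text    0.1  0.5        10
# 1  second text    0.1  0.5        11
</code></pre>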
python|pandas|dataframe|dictionary|nlp
1
3,510
72,259,405
pandas evaluating strings as numeric
<p>Assume df as:</p> <pre><code>data = {'duration':['1week 3day 2hour 4min 23', '2hour 4min 23sec', '2hour 4min', np.nan, '', '23sec']} df = pd.DataFrame(data) </code></pre> <p>I'm trying to calculate the duration as a sum of seconds. I replaced the values as:</p> <pre><code>df['duration'] = df['duration'].str.replace('week', '*604800+') \ .str.replace('day', '*604800+') \ .str.replace('hour', '*3600+') \ .str.replace('min', '*60+') \ .str.replace('sec', '') \ .str.replace(' ', '') </code></pre> <p>But I can't run eval functions (pd.eval, apply.eval, eval, etc.). Some cells end with a '+' sign or have other string/NaN problems. Any help?</p> <p>Ps: This is not a duplicate question.</p>
<p>You can use a regex combined with a custom function to replace weeks with 7 days and add seconds to lonely numbers (you can add other units the same way). Then convert with <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>to_timedelta</code></a>:</p> <pre><code>def change_units(m): d = {'week': (7, 'days'), '': (1, 's')} _, i, period = m.groups() factor, txt = d[period] return f'{factor*int(i)}{txt}' df['delta'] = pd.to_timedelta(df['duration'].str.replace(r'((\d)\s*(week|)\b)', change_units, regex=True)) </code></pre> <p>output:</p> <pre><code> duration delta 0 1week 3day 2hour 4min 23 10 days 02:04:23 1 2hour 4min 23sec 0 days 02:04:23 2 2hour 4min 0 days 02:04:00 3 NaN NaT 4 NaT 5 23sec 0 days 00:00:23 </code></pre> <p>Then you can benefit from the Timedelta object, for example to convert to <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.total_seconds.html" rel="nofollow noreferrer"><code>total_seconds</code></a>:</p> <pre><code>pd.to_timedelta(df['duration'].str.replace(r'((\d)\s*(week|)\b)', change_units, regex=True) ).dt.total_seconds() </code></pre> <p>output:</p> <pre><code>0 871463.0 1 7463.0 2 7440.0 3 NaN 4 NaN 5 23.0 Name: duration, dtype: float64 </code></pre>
python|pandas|eval
2
3,511
50,462,988
Split lists within dataframe column into multiple columns
<p>I have a Pandas DataFrame column with multiple lists within a list. Something like this:</p> <pre><code>df col1 0 [[1,2], [2,3]] 1 [[a,b], [4,5], [x,y]] 2 [[6,7]] </code></pre> <p>I want to split the list over multiple columns so the output should be something like:</p> <pre><code> col1 col2 col3 0 [1,2] [2,3] 1 [a,b] [4,5] [x,y] 2 [6,7] </code></pre> <p>Please help me with this. Thanks in advance </p>
<p>You can use <code>pd.Series.apply</code>:</p> <pre><code>df = pd.DataFrame({'col1': [[[1, 2], [2, 3]], [['a', 'b'], [4, 5], ['x', 'y']], [[6, 7]]]}) res = df['col1'].apply(pd.Series) print(res) 0 1 2 0 [1, 2] [2, 3] NaN 1 [a, b] [4, 5] [x, y] 2 [6, 7] NaN NaN </code></pre>
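<p>If the <code>col1</code>/<code>col2</code>/<code>col3</code> names shown in the question are wanted, the resulting integer column labels can be renamed afterwards; a minimal sketch:</p> <pre><code>res.columns = [f'col{i + 1}' for i in range(res.shape[1])]
print(res)
#      col1    col2    col3
# 0  [1, 2]  [2, 3]     NaN
# 1  [a, b]  [4, 5]  [x, y]
# 2  [6, 7]     NaN     NaN
</code></pre>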
python|pandas
7
3,512
45,694,517
Add, Delete, Edit Rows and Columns while Iterating PANDAS DATAFRAME
<p>I have a csv file with over 50,000 tweets that I open as DataFrame with Pandas</p> <pre><code>df = pd.read_csv('dataset_tweets.csv') </code></pre> <p><a href="https://i.stack.imgur.com/5Diyo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Diyo.png" alt="enter image description here"></a></p> <p>My goal is to Analyze the sentiment of the tweets, and before proceeding, I need to normalize the tweet. I have defined a function for that, and I would like to add the output as a new column of the dataframe (e.g. Text_Normalized).</p> <p>Nevertheless, I may also need to delete the row if it meets certain conditions (e.g. if the tweet is not written in English).</p> <p>How do I iterate through the dataframe, apply the "normalizer" function to the text column, delete the row if it did not meet certain criteria and eventually add a new column with the text normalized?</p>
<p>say you have some 'text normalising' function:</p> <pre><code>def normalises_text(text): .... return normalised_text </code></pre> <p>You can apply this 'row-wise' to your 'text' column and put this in a new column very simply , as follows:</p> <pre><code>df['normalised_text'] = df.text.apply(normalises_text) </code></pre> <p>to remove rows which don't fit some criteria, you need a way to define your criteria in the dataframe.</p> <p>Say you defined a function which identified whether text is English, and returns a boolean:</p> <pre><code>def is_text_english(text): .... return text_is_english </code></pre> <p>Then put this in a column as before:</p> <pre><code>df['text_is_english'] = df.text.apply(is_text_english) </code></pre> <p>Then, you could filter your dataframe as follows:</p> <pre><code>filtered_df = df[df.text_is_english] </code></pre> <p>Or, say you had a column which states the language of the tweet, you could do:</p> <pre><code>filtered_df = df[df.tweet_language == 'EN'] </code></pre> <p>The key point here is the apply function:</p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html</a></p>
python|pandas|dataframe
2
3,513
45,525,270
Multidimensional array in numpy
<p>I have an array of shape <code>(5,2)</code> which each row consist of an array of shape <code>(4,3,2)</code> and a float number.</p> <p>After I slice that array<code>[:,0]</code>, I get an array of shape <code>(5,)</code> which each element has shape of <code>(4,3,2)</code>, instead of an array of shape <code>(5,4,3,2)</code> (even if I'd use <code>np.array()</code>). </p> <p>Why?</p> <p>Edited</p> <p>Example:</p> <pre><code>a1 = np.arange(50).reshape(5, 5, 2) a2 = np.arange(50).reshape(5, 5, 2) b1 = 15.0 b2 = 25.0 h = [] h.append(np.array([a1, b1])) h.append(np.array([a2, b2])) h = np.array(h)[:,0] np.shape(h) # (2,) np.shape(h[0]) # (5, 5, 2) np.shape(h[1]) # (5, 5, 2) h = np.array(h) np.shape(h) # (2,) Why not (2, 5, 5, 2)? </code></pre>
<p>You have an array of <em>objects</em>; You can use <code>np.stack</code> to convert it to the shape you need if you are sure all the sub elements have the same shape:</p> <pre><code>np.stack(a[:,0]) </code></pre> <hr> <pre><code>a = np.array([[np.arange(24).reshape(4,3,2), 1.]]*5) a.shape # (5, 2) a[:,0].shape # (5,) a[:,0][0].shape # (4, 3, 2) np.stack(a[:,0]).shape # (5, 4, 3, 2) </code></pre>
python|numpy|multidimensional-array
2
3,514
62,585,169
Data frame with duplicates in a column map to another column
<pre><code>import pandas as pd df = pd.DataFrame({ &quot;col1&quot;: [11, 12, 13], &quot;col2&quot;: [21, 22, 23], &quot;col3&quot;: [31, 32, 33], &quot;col4&quot;: [41, 42, 43], }) </code></pre> <p>I have a Pandas data frame like <code>df</code> above, and I would like to reshape <code>df</code> to look like the following.</p> <pre><code>import pandas as pd df = pd.DataFrame({ &quot;col1&quot;: [11, 12, 13, 11, 12, 13, 11, 12, 13], &quot;col2&quot;: [21, 22, 23, 31, 32, 33, 41, 42, 43], &quot;indx&quot;: [&quot;col2&quot;, &quot;col2&quot;, &quot;col2&quot;, &quot;col3&quot;, &quot;col3&quot;, &quot;col3&quot;, &quot;col4&quot;, &quot;col4&quot;, &quot;col4&quot;] }) </code></pre> <p>I can slice up <code>df</code> and get my desired data frame, but what would be the slick, Pythonic way to do it in Pandas?</p> <p><strong>EDIT</strong></p> <p>I'm realizing that my question is more complicated that I originall realized, but not too much more (I think). Again, I have a data frame.</p> <pre><code>import pandas as pd df = pd.DataFrame({ &quot;col1&quot;: [11, 12, 13], &quot;col2&quot;: [21, 22, 23], &quot;col3&quot;: [31, 32, 33], &quot;col4&quot;: [41, 42, 43], &quot;col5&quot;: [51, 52, 53], &quot;col6&quot;: [61, 62, 63] }) </code></pre> <p>I want to do something like <code>melt</code> to get my data frame to be like this:</p> <pre><code>import pandas as pd df = pd.DataFrame({ &quot;col1&quot;: [11, 12, 13, 11, 12, 13, 11, 12, 13], &quot;colA&quot;: [21, 22, 23, 31, 32, 33, 41, 42, 43], &quot;indx&quot;: [&quot;do&quot;, &quot;do&quot;, &quot;do&quot;, &quot;re&quot;, &quot;re&quot;, &quot;re&quot;, &quot;me&quot;, &quot;me&quot;, &quot;me&quot;], &quot;col4&quot;: [41, 42, 43, 41, 42, 43, 41, 42, 43], &quot;col5&quot;: [51, 52, 53, 51, 52, 53, 51, 52, 53], &quot;col6&quot;: [61, 62, 63, 61, 62, 63, 61, 62, 63] }) </code></pre> <p>So I want to be able to set the strings to which the &quot;indx&quot; etc are set; I want to drag along several other columns the way that I drag along &quot;col1&quot;, and I want to set the name of the new &quot;col2&quot; column header.</p> <p>Thanks!</p>
<p>You need <code>melt</code>, <code>merge</code> and <code>replace</code></p> <pre><code>d = {'col2': 'do', 'col3': 're', 'col4': 'me'} df_final = (df.melt(['col1','col5','col6'], var_name=&quot;indx&quot;, value_name=&quot;colA&quot;) .merge(df[['col1','col4']], how='left').replace(d)) Out[522]: col1 col5 col6 indx colA col4 0 11 51 61 do 21 41 1 12 52 62 do 22 42 2 13 53 63 do 23 43 3 11 51 61 re 31 41 4 12 52 62 re 32 42 5 13 53 63 re 33 43 6 11 51 61 me 41 41 7 12 52 62 me 42 42 8 13 53 63 me 43 43 </code></pre> <hr /> <p>Or you may <code>rename</code> columns before <code>melt</code></p> <pre><code>d = {'col2': 'do', 'col3': 're', 'col4': 'me'} df_final = (df.rename(d, axis=1) .melt(['col1','col5','col6'], var_name=&quot;indx&quot;, value_name=&quot;colA&quot;) .merge(df[['col1','col4']], how='left')) Out[529]: col1 col5 col6 indx colA col4 0 11 51 61 do 21 41 1 12 52 62 do 22 42 2 13 53 63 do 23 43 3 11 51 61 re 31 41 4 12 52 62 re 32 42 5 13 53 63 re 33 43 6 11 51 61 me 41 41 7 12 52 62 me 42 42 8 13 53 63 me 43 43 </code></pre>
python|pandas|dataframe
2
3,515
54,689,438
Pods unschedulable error while deploying tensorflow serving model to kubernetes using GPUs
<p>I am getting two errors after deploying my object detection model for prediction using GPUs:</p> <p>1.PodUnschedulable Cannot schedule pods: Insufficient nvidia</p> <p>2.PodUnschedulable Cannot schedule pods: com/gpu.</p> <p>I have two node pools. One of them is configured to have Tesla K80 GPU and auto scaling enabled. When I deploy the serving component using a ksonnet app (described in here :<a href="https://github.com/kubeflow/examples/blob/master/object_detection/tf_serving_gpu.md#deploy-serving-component" rel="nofollow noreferrer">https://github.com/kubeflow/examples/blob/master/object_detection/tf_serving_gpu.md#deploy-serving-component</a>.</p> <p>This is the output of the <code>kubectl describe pods</code> command:</p> <pre><code> Name: xyz-v1-5c5b57cf9c-kvjxn Namespace: default Node: &lt;none&gt; Labels: app=xyz pod-template-hash=1716137957 version=v1 Annotations: &lt;none&gt; Status: Pending IP: Controlled By: ReplicaSet/xyz-v1-5c5b57cf9c Containers: aadhar: Image: tensorflow/serving:1.11.1-gpu Port: 9000/TCP Host Port: 0/TCP Command: /usr/bin/tensorflow_model_server Args: --port=9000 --model_name=xyz --model_base_path=gs://xyz_kuber_app-xyz-identification/export/ Limits: cpu: 4 memory: 4Gi nvidia.com/gpu: 1 Requests: cpu: 1 memory: 1Gi nvidia.com/gpu: 1 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro) aadhar-http-proxy: Image: gcr.io/kubeflow-images-public/tf-model-server-http-proxy:v20180606-9dfda4f2 Port: 8000/TCP Host Port: 0/TCP Command: python /usr/src/app/server.py --port=8000 --rpc_port=9000 --rpc_timeout=10.0 Limits: cpu: 1 memory: 1Gi Requests: cpu: 500m memory: 500Mi Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro) Conditions: Type Status PodScheduled False Volumes: default-token-b6dpn: Type: Secret (a volume populated by a Secret) SecretName: default-token-b6dpn Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s nvidia.com/gpu:NoSchedule Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 20m (x5 over 21m) default-scheduler 0/2 nodes are available: 1 Insufficient nvidia.com/gpu, 1 node(s) were unschedulable. Warning FailedScheduling 20m (x2 over 20m) default-scheduler 0/2 nodes are available: 1 Insufficient nvidia.com/gpu, 1 node(s) were not ready, 1 node(s) were out of disk space, 1 node(s) were unschedulable. Warning FailedScheduling 16m (x9 over 19m) default-scheduler 0/1 nodes are available: 1 Insufficient nvidia.com/gpu. Normal NotTriggerScaleUp 15m (x26 over 20m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added) Warning FailedScheduling 2m42s (x54 over 23m) default-scheduler 0/2 nodes are available: 2 Insufficient nvidia.com/gpu. 
Normal TriggeredScaleUp 13s cluster-autoscaler pod triggered scale-up: [{https://content.googleapis.com/compute/v1/projects/xyz-identification/zones/us-central1-a/instanceGroups/gke-kuberflow-xyz-pool-1-9753107b-grp 1-&gt;2 (max: 10)}] Name: mnist-deploy-gcp-b4dd579bf-sjwj7 Namespace: default Node: gke-kuberflow-xyz-default-pool-ab1fa086-w6q3/10.128.0.8 Start Time: Thu, 14 Feb 2019 14:44:08 +0530 Labels: app=xyz-object pod-template-hash=608813569 version=v1 Annotations: sidecar.istio.io/inject: Status: Running IP: 10.36.4.18 Controlled By: ReplicaSet/mnist-deploy-gcp-b4dd579bf Containers: xyz-object: Container ID: docker://921717d82b547a023034e7c8be78216493beeb55dca57f4eddb5968122e36c16 Image: tensorflow/serving:1.11.1 Image ID: docker-pullable://tensorflow/serving@sha256:a01c6475c69055c583aeda185a274942ced458d178aaeb84b4b842ae6917a0bc Ports: 9000/TCP, 8500/TCP Host Ports: 0/TCP, 0/TCP Command: /usr/bin/tensorflow_model_server Args: --port=9000 --rest_api_port=8500 --model_name=xyz-object --model_base_path=gs://xyz_kuber_app-xyz-identification/export --monitoring_config_file=/var/config/monitoring_config.txt State: Running Started: Thu, 14 Feb 2019 14:48:21 +0530 Last State: Terminated Reason: Error Exit Code: 137 Started: Thu, 14 Feb 2019 14:45:58 +0530 Finished: Thu, 14 Feb 2019 14:48:21 +0530 Ready: True Restart Count: 1 Limits: cpu: 4 memory: 4Gi Requests: cpu: 1 memory: 1Gi Liveness: tcp-socket :9000 delay=30s timeout=1s period=30s #success=1 #failure=3 Environment: GOOGLE_APPLICATION_CREDENTIALS: /secret/gcp-credentials/user-gcp-sa.json Mounts: /secret/gcp-credentials from gcp-credentials (rw) /var/config/ from config-volume (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro) Conditions: Type Status Initialized True Ready True PodScheduled True Volumes: config-volume: Type: ConfigMap (a volume populated by a ConfigMap) Name: mnist-deploy-gcp-config Optional: false gcp-credentials: Type: Secret (a volume populated by a Secret) SecretName: user-gcp-sa Optional: false default-token-b6dpn: Type: Secret (a volume populated by a Secret) SecretName: default-token-b6dpn Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: &lt;none&gt; </code></pre> <p>The output of <code>kubectl describe pods | grep gpu</code>is :</p> <pre><code> Image: tensorflow/serving:1.11.1-gpu nvidia.com/gpu: 1 nvidia.com/gpu: 1 nvidia.com/gpu:NoSchedule Warning FailedScheduling 28m (x5 over 29m) default-scheduler 0/2 nodes are available: 1 Insufficient nvidia.com/gpu, 1 node(s) were unschedulable. Warning FailedScheduling 28m (x2 over 28m) default-scheduler 0/2 nodes are available: 1 Insufficient nvidia.com/gpu, 1 node(s) were not ready, 1 node(s) were out of disk space, 1 node(s) were unschedulable. Warning FailedScheduling 24m (x9 over 27m) default-scheduler 0/1 nodes are available: 1 Insufficient nvidia.com/gpu. Warning FailedScheduling 11m (x54 over 31m) default-scheduler 0/2 nodes are available: 2 Insufficient nvidia.com/gpu. Warning FailedScheduling 48s (x23 over 6m57s) default-scheduler 0/3 nodes are available: 3 Insufficient nvidia.com/gpu. </code></pre> <p>I am new to kubernetes and am not able to understand what is going wrong here.</p> <p><strong>Update</strong>: I did have an extra pod running that I was experimenting with earlier. I shut that after @Paul Annett pointed it out but I still have the same error. 
</p> <pre><code>Name: aadhar-v1-5c5b57cf9c-q8cd8 Namespace: default Node: &lt;none&gt; Labels: app=aadhar pod-template-hash=1716137957 version=v1 Annotations: &lt;none&gt; Status: Pending IP: Controlled By: ReplicaSet/aadhar-v1-5c5b57cf9c Containers: aadhar: Image: tensorflow/serving:1.11.1-gpu Port: 9000/TCP Host Port: 0/TCP Command: /usr/bin/tensorflow_model_server Args: --port=9000 --model_name=aadhar --model_base_path=gs://xyz_kuber_app-xyz-identification/export/ Limits: cpu: 4 memory: 4Gi nvidia.com/gpu: 1 Requests: cpu: 1 memory: 1Gi nvidia.com/gpu: 1 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro) aadhar-http-proxy: Image: gcr.io/kubeflow-images-public/tf-model-server-http-proxy:v20180606-9dfda4f2 Port: 8000/TCP Host Port: 0/TCP Command: python /usr/src/app/server.py --port=8000 --rpc_port=9000 --rpc_timeout=10.0 Limits: cpu: 1 memory: 1Gi Requests: cpu: 500m memory: 500Mi Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro) Conditions: Type Status PodScheduled False Volumes: default-token-b6dpn: Type: Secret (a volume populated by a Secret) SecretName: default-token-b6dpn Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s nvidia.com/gpu:NoSchedule Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal TriggeredScaleUp 3m3s cluster-autoscaler pod triggered scale-up: [{https://content.googleapis.com/compute/v1/projects/xyz-identification/zones/us-central1-a/instanceGroups/gke-kuberflow-xyz-pool-1-9753107b-grp 0-&gt;1 (max: 10)}] Warning FailedScheduling 2m42s (x2 over 2m42s) default-scheduler 0/2 nodes are available: 1 Insufficient nvidia.com/gpu, 1 node(s) were not ready, 1 node(s) were out of disk space. Warning FailedScheduling 42s (x10 over 3m45s) default-scheduler 0/2 nodes are available: 2 Insufficient nvidia.com/gpu. </code></pre> <p><strong>Update 2</strong>: I haven't used nvidia-docker. Although, the <code>kubectl get pods -n=kube-system</code> command gives me:</p> <pre><code>NAME READY STATUS RESTARTS AGE event-exporter-v0.2.3-54f94754f4-vd9l5 2/2 Running 0 16h fluentd-gcp-scaler-6d7bbc67c5-m8gt6 1/1 Running 0 16h fluentd-gcp-v3.1.0-4wnv9 2/2 Running 0 16h fluentd-gcp-v3.1.0-r6bd5 2/2 Running 0 51m heapster-v1.5.3-75bdcc556f-8z4x8 3/3 Running 0 41m kube-dns-788979dc8f-59ftr 4/4 Running 0 16h kube-dns-788979dc8f-zrswj 4/4 Running 0 51m kube-dns-autoscaler-79b4b844b9-9xg69 1/1 Running 0 16h kube-proxy-gke-kuberflow-aadhaar-pool-1-57d75875-8f88 1/1 Running 0 16h kube-proxy-gke-kuberflow-aadhaar-pool-2-10d7e787-66n3 1/1 Running 0 51m l7-default-backend-75f847b979-2plm4 1/1 Running 0 16h metrics-server-v0.2.1-7486f5bd67-mj99g 2/2 Running 0 16h nvidia-device-plugin-daemonset-wkcqt 1/1 Running 0 16h nvidia-device-plugin-daemonset-zvzlb 1/1 Running 0 51m nvidia-driver-installer-p8qqj 0/1 Init:CrashLoopBackOff 13 51m nvidia-gpu-device-plugin-nnpx7 1/1 Running 0 51m </code></pre> <p>Looks like an issue with nvidia driver installer.</p> <p><strong>Update 3:</strong> Added nvidia driver installer log. 
Describing the pod: <code>kubectl describe pods nvidia-driver-installer-p8qqj -n=kube-system</code></p> <pre><code>Name: nvidia-driver-installer-p8qqj Namespace: kube-system Node: gke-kuberflow-aadhaar-pool-2-10d7e787-66n3/10.128.0.30 Start Time: Fri, 15 Feb 2019 11:22:42 +0530 Labels: controller-revision-hash=1137413470 k8s-app=nvidia-driver-installer name=nvidia-driver-installer pod-template-generation=1 Annotations: &lt;none&gt; Status: Pending IP: 10.36.5.4 Controlled By: DaemonSet/nvidia-driver-installer Init Containers: nvidia-driver-installer: Container ID: docker://a0b18bc13dad0d470b601ad2cafdf558a192b3a5d9ace264fd22d5b3e6130241 Image: gke-nvidia-installer:fixed Image ID: docker-pullable://gcr.io/cos-cloud/cos-gpu-installer@sha256:e7bf3b4c77ef0d43fedaf4a244bd6009e8f524d0af4828a0996559b7f5dca091 Port: &lt;none&gt; Host Port: &lt;none&gt; State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 32 Started: Fri, 15 Feb 2019 13:06:04 +0530 Finished: Fri, 15 Feb 2019 13:06:33 +0530 Ready: False Restart Count: 23 Requests: cpu: 150m Environment: &lt;none&gt; Mounts: /boot from boot (rw) /dev from dev (rw) /root from root-mount (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-n5t8z (ro) Containers: pause: Container ID: Image: gcr.io/google-containers/pause:2.0 Image ID: Port: &lt;none&gt; Host Port: &lt;none&gt; State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-n5t8z (ro) Conditions: Type Status Initialized False Ready False PodScheduled True Volumes: dev: Type: HostPath (bare host directory volume) Path: /dev HostPathType: boot: Type: HostPath (bare host directory volume) Path: /boot HostPathType: root-mount: Type: HostPath (bare host directory volume) Path: / HostPathType: default-token-n5t8z: Type: Secret (a volume populated by a Secret) SecretName: default-token-n5t8z Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/disk-pressure:NoSchedule node.kubernetes.io/memory-pressure:NoSchedule node.kubernetes.io/not-ready:NoExecute node.kubernetes.io/unreachable:NoExecute Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 3m36s (x437 over 107m) kubelet, gke-kuberflow-aadhaar-pool-2-10d7e787-66n3 Back-off restarting failed container </code></pre> <p>Error log from the pod <code>kubectl logs nvidia-driver-installer-p8qqj -n=kube-system</code> :</p> <pre><code>Error from server (BadRequest): container "pause" in pod "nvidia-driver-installer-p8qqj" is waiting to start: PodInitializing </code></pre>
<p>The issue seems to be that the resources needed to run the pod are not available. The pod contains two containers that need a minimum of 1.5Gi memory and 1.5 CPU, and a maximum of 5Gi memory and 5 CPU.</p> <p>The controller is not able to identify a node that meets the resource requirements for running the pod, hence it is not getting scheduled.</p> <p>See if you can reduce the resource limits so that they can be satisfied by one of the nodes. I also see from the logs that one of the nodes is out of disk space. Check the issues reported by <code>kubectl describe po</code> and take action on those items.</p> <pre><code> Limits: cpu: 4 memory: 4Gi nvidia.com/gpu: 1 Requests: cpu: 1 memory: 1Gi nvidia.com/gpu: 1 </code></pre> <pre><code> Limits: cpu: 1 memory: 1Gi Requests: cpu: 500m memory: 500Mi </code></pre> <p>I see the pod is using a node affinity:</p> <pre><code> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: cloud.google.com/gke-accelerator operator: Exists </code></pre> <p>Can you check that the node where the pod is deployed has the label below?</p> <pre><code>cloud.google.com/gke-accelerator </code></pre> <p>Alternatively, remove the nodeAffinity section and see if the pod gets deployed and shows as running.</p>
tensorflow|kubernetes|google-cloud-platform|nvidia|google-kubernetes-engine
1
3,516
73,722,058
Pandas groupby multiple columns with value_counts function
<p>I want to apply <code>value_counts()</code> to multiple columns and reuse the same dataframe further to add more columns. I have the following dataframe as an example.</p> <pre><code> id shop type status 0 1 mac A open 1 1 mac B close 2 1 ikea B open 3 1 ikea A open 4 1 meta A open 5 1 meta B close 6 2 meta B open 7 2 ikea B open 8 2 ikea B close 9 3 ikea A close 10 3 apple B close 11 3 apple B open 12 3 apple A open 13 4 denim A close 14 4 denim A close </code></pre> <p>I want to achieve, the groupby count of both <code>id</code> and <code>shop</code> for each <code>type</code> and <code>status</code> category as shown below.</p> <pre><code> id shop A B close open 0 1 ikea 1 1 0 2 1 1 mac 1 1 1 1 2 1 meta 1 1 1 1 3 2 ikea 0 2 1 1 4 2 meta 0 1 0 1 5 3 apple 1 2 1 2 6 3 ikea 1 0 1 0 7 4 denim 2 0 2 0 </code></pre> <p>I have tried this so far which works correctly but I don't feel that it is efficient, especially if I have more data and maybe want to use an extra two aggs functions for the same groupby. Also, the merging may not always work in some rare cases.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from functools import reduce df = pd.DataFrame({ 'id': [1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4], 'shop': ['mac', 'mac', 'ikea', 'ikea', 'meta', 'meta', 'meta', 'ikea', 'ikea', 'ikea', 'apple', 'apple', 'apple', 'denim', 'denim'], 'type': ['A', 'B', 'B', 'A', 'A', 'B', 'B', 'B', 'B', 'A', 'B', 'B', 'A', 'A', 'A'], 'status': ['open', 'close', 'open', 'open', 'open', 'close', 'open', 'open', 'close', 'close', 'close', 'open', 'open', 'close', 'close'] }) df = df.groupby(['id', 'shop']) df_type = df['type'].value_counts().unstack().reset_index() df_status = df['status'].value_counts().unstack().reset_index() df = reduce(lambda df1, df2: pd.merge(df1, df2, how='left', on=['id', 'shop']), [df_type, df_status]) </code></pre>
<p>You can do with <code>groupby()</code> and <code>value_counts</code>:</p> <pre><code>groups = df.groupby(['id','shop']) pd.concat([groups['type'].value_counts().unstack(fill_value=0), groups['status'].value_counts().unstack(fill_value=0)], axis=1).reset_index() </code></pre> <p>Or a bit more dynamic:</p> <pre><code>groups = df.groupby(['id','shop']) count_cols = ['type','status'] out = pd.concat([groups[c].value_counts().unstack(fill_value=0) for c in count_cols], axis=1).reset_index() </code></pre> <p>Or with <code>crosstab</code>:</p> <pre><code>count_cols = ['type','status'] out = pd.concat([pd.crosstab([df['id'],df['shop']], df[c]) for c in count_cols], axis=1).reset_index() </code></pre> <p>Output:</p> <pre><code> id shop A B close open 0 1 ikea 1 1 0 2 1 1 mac 1 1 1 1 2 1 meta 1 1 1 1 3 2 ikea 0 2 1 1 4 2 meta 0 1 0 1 5 3 apple 1 2 1 2 6 3 ikea 1 0 1 0 7 4 denim 2 0 2 0 </code></pre>
python|pandas|dataframe
3
3,517
71,130,161
Performing Differentiation wrt input within a keras model for use in loss
<p>Is there any layer in keras which calculates the derivative wrt input? For example if <code>x</code> is input, the first layer is say <code>f(x)</code>, then the next layer's output should be <code>f'(x)</code>. There are multiple question here about this topic but all of them involve computation of derivative outside the model. In essence, I want to create a neural network whose loss function involves both the jacobian and hessians wrt the inputs.</p> <p>I've tried the following</p> <pre><code>import keras.backend as K def create_model(): x = keras.Input(shape = (10,)) layer = Dense(1, activation = &quot;sigmoid&quot;) output = layer(x) jac = K.gradients(output, x) model = keras.Model(inputs=x, outputs=jac) return model model = create_model() X = np.random.uniform(size = (3, 10)) </code></pre> <p>This is gives the error <code>tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead.</code></p> <p>So I tried using that</p> <pre><code>def create_model2(): with tf.GradientTape() as tape: x = keras.Input(shape = (10,)) layer = Dense(1, activation = &quot;sigmoid&quot;) output = layer(x) jac = tape.gradient(output, x) model = keras.Model(inputs=x, outputs=jac) return model model = create_model2() X = np.random.uniform(size = (3, 10)) </code></pre> <p>but this tells me <code>'KerasTensor' object has no attribute '_id'</code></p> <p>Both these methods work fine outside the model. My end goal is to use the Jacobian and Hessian in the loss function, so alternative approaches would also be appreciated</p>
<p>Not sure what exactly you want to do, but maybe try a custom <code>Keras</code> layer with <code>tf.gradients</code>:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf tf.random.set_seed(111) class GradientLayer(tf.keras.layers.Layer): def __init__(self): super(GradientLayer, self).__init__() self.dense = tf.keras.layers.Dense(1, activation = &quot;sigmoid&quot;) @tf.function def call(self, inputs): outputs = self.dense(inputs) return tf.gradients(outputs, inputs) def create_model2(): gradient_layer = GradientLayer() inputs = tf.keras.layers.Input(shape = (10,)) outputs = gradient_layer(inputs) model = tf.keras.Model(inputs=inputs, outputs=outputs) return model model = create_model2() X = tf.random.uniform((3, 10)) print(model(X)) </code></pre> <pre><code>tf.Tensor( [[-0.07935508 -0.12471244 -0.0702782 -0.06729251 0.14465885 -0.0818079 -0.08996294 0.07622238 0.11422144 -0.08126545] [-0.08666676 -0.13620329 -0.07675356 -0.07349276 0.15798753 -0.08934557 -0.09825202 0.08324542 0.12474566 -0.08875315] [-0.08661086 -0.13611545 -0.07670406 -0.07344536 0.15788564 -0.08928795 -0.09818865 0.08319173 0.12466521 -0.08869591]], shape=(3, 10), dtype=float32) </code></pre>
python|tensorflow|keras|tensorflow2.0
3
3,518
71,413,160
TypeError: forward() got an unexpected keyword argument 'return_dict' BERT CLASSIFICATION HUGGINGFACE with tuning
<p>I'm stacked with this model, every day errors came to my code! Anyway I'm trying to implement a Bert Classifier to discriminate between 2 sequences classes (BINARY CLASSIFICATION), with AX hyperparameters tuning. This is all my code implemented anticipated by a sample of my datasets ( I have 3 csv, train-test-val). Thank you very much !</p> <pre><code>df_train=pd.read_csv('CLASSIFIER_train',sep=',',header=None) df_train 0 1 M A T T D R P T P D G T D A I D L T T R V R R... 1 M K K L F Q T E P L L E L F N C N E L R I I G... 0 M L V A A A V C P H P P L L I P E L A A G A A... 1 M I V A W G N S G S G L L I L I L S L A V S A... 0 M V E E G R R L A A L H P N I V V K L P T T E... 1 M G S K V S K N A L V F N V L Q A L R E G L T... 1 M P S K E T S P A E R M A R D E Y Y M R L A M... 1 M V K E Y A L E W I D G Y R E R L V K V S D A... 1 M G T A A S Q D R A A M A E A A Q R V G D S F... 0 </code></pre> <pre><code>class SequenceDataset(Dataset): def __init__(self, sequences, targets, tokenizer, max_len): self.sequences = sequences self.targets = targets self.tokenizer = tokenizer self.max_len = max_len def __len__(self): return len(self.sequences) def __getitem__(self, item): sequences = str(self.sequences[item]) target = self.targets[item] encoding = self.tokenizer.encode_plus( sequences, add_special_tokens=True, max_length=self.max_len, return_token_type_ids=False, pad_to_max_length=True, return_attention_mask=True, return_tensors='pt', ) return { 'sequences_text': sequences, 'input_ids': encoding['input_ids'].flatten(), 'attention_mask': encoding['attention_mask'].flatten(), 'targets': torch.tensor(target, dtype=torch.long) } class SequenceDataset(Dataset): def __init__(self, sequences, targets, tokenizer, max_len): self.sequences = sequences self.targets = targets self.tokenizer = tokenizer self.max_len = max_len def __len__(self): return len(self.sequences) def __getitem__(self, item): sequences = str(self.sequences[item]) target = self.targets[item] encoding = self.tokenizer.encode_plus( sequences, add_special_tokens=True, max_length=self.max_len, return_token_type_ids=False, pad_to_max_length=True, return_attention_mask=True, return_tensors='pt', ) return { 'sequences_text': sequences, 'input_ids': encoding['input_ids'].flatten(), 'attention_mask': encoding['attention_mask'].flatten(), 'targets': torch.tensor(target, dtype=torch.long) } def create_data_loader(df, tokenizer, max_len, batch_size): ds = SequenceDataset( sequences=df[0].to_numpy(), targets=df[1].to_numpy(), tokenizer=tokenizer, max_len=max_len ) return DataLoader( ds, batch_size=batch_size, num_workers=2, shuffle=True ) def net_train(net, train_data_loader, parameters, dtype, device): net.to(dtype=dtype, device=device) # Define loss and optimizer #criterion = nn.CrossEntropyLoss() criterion = nn.NLLLoss() optimizer = optim.SGD(net.parameters(), # or any optimizer you prefer lr=parameters.get(&quot;lr&quot;, 0.001), # 0.001 is used if no lr is specified momentum=parameters.get(&quot;momentum&quot;, 0.9) ) scheduler = optim.lr_scheduler.StepLR( optimizer, step_size=int(parameters.get(&quot;step_size&quot;, 30)), gamma=parameters.get(&quot;gamma&quot;, 1.0), # default is no learning rate decay ) num_epochs = parameters.get(&quot;num_epochs&quot;, 3) # Play around with epoch number # Train Network # Train Network for _ in range(num_epochs): # Your dataloader returns a dictionary # so access it as such for batch in train_data_loader: # move data to proper dtype and device labels = batch['targets'].to(device=device) attention_mask = 
batch['attention_mask'].to(device=device) input_ids = batch['input_ids'].to(device=device) #labels = labels.long() # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs,x= net(input_ids, attention_mask,return_dict=True) #outputs,x= net(input_ids,atten_mask) loss = criterion(outputs, labels) loss.backward() optimizer.step() scheduler.step() return net class BERT_Arch(nn.Module): def __init__(self, bert): super(BERT_Arch, self).__init__() self.bert = bert # dropout layer self.dropout = nn.Dropout(0.1) # relu activation function self.relu = nn.ReLU() # dense layer 1 self.fc1 = nn.Linear(1024,512) # dense layer 2 (Output layer) self.fc2 = nn.Linear(512,1) #softmax activation function self.softmax = nn.LogSoftmax(dim=1) #define the forward pass def forward(self, input_ids, attention_mask ): #pass the inputs to the model _, cls_hs = self.bert(input_ids, attention_mask,return_dict=False) x = self.fc1(cls_hs) x = self.relu(x) x = self.dropout(x) # output layer x = self.fc2(x) # apply softmax activation x = self.softmax(x) return x from transformers import AutoModel # import BERT-base pretrained model bert = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME) from transformers.models.bert.modeling_bert import BertForSequenceClassification def init_net(parameterization): model = BERT_Arch(bert) #pretrained ResNet50 # push the model to GPU model = model.to(device) # The depth of unfreezing is also a hyperparameter for param in model.parameters(): param.requires_grad = False # Freeze feature extractor return model # return untrained model def train_evaluate(parameterization): # constructing a new training data loader allows us to tune the batch size train_data_loader=create_data_loader(df_train, tokenizer, MAX_LEN, batch_size=parameterization.get(&quot;batchsize&quot;, 32)) # Get neural net untrained_net = init_net(parameterization) # train trained_net = net_train(net=untrained_net, train_data_loader=train_data_loader, parameters=parameterization, dtype=dtype, device=device) # return the accuracy of the model as it was trained in this run return evaluate( net=trained_net, data_loader=test_data_loader, dtype=dtype, device=device, ) dtype = torch.float device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') best_parameters, values, experiment, model = optimize( parameters=[ {&quot;name&quot;: &quot;lr&quot;, &quot;type&quot;: &quot;range&quot;, &quot;bounds&quot;: [1e-6, 0.4], &quot;log_scale&quot;: True}, {&quot;name&quot;: &quot;batchsize&quot;, &quot;type&quot;: &quot;range&quot;, &quot;bounds&quot;: [16, 128]}, {&quot;name&quot;: &quot;momentum&quot;, &quot;type&quot;: &quot;range&quot;, &quot;bounds&quot;: [0.0, 1.0]}, #{&quot;name&quot;: &quot;max_epoch&quot;, &quot;type&quot;: &quot;range&quot;, &quot;bounds&quot;: [1, 30]}, #{&quot;name&quot;: &quot;stepsize&quot;, &quot;type&quot;: &quot;range&quot;, &quot;bounds&quot;: [20, 40]}, ], evaluation_function=train_evaluate, objective_name='accuracy', ) print(best_parameters) means, covariances = values print(means) print(covariances) </code></pre> <pre><code> File &quot;&lt;ipython-input-61-aa60b2f44317&gt;&quot;, line 35, in net_train outputs,x= net(input_ids, attention_mask,return_dict=True) File &quot;/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py&quot;, line 1102, in _call_impl return forward_call(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'return_dict' </code></pre>
<p>In <code>net_train</code> you call:</p> <pre class="lang-py prettyprint-override"><code>outputs,x= net(input_ids, attention_mask,return_dict=True) </code></pre> <p>but your object <code>net</code> only accepts two parameters besides <code>self</code>, as defined in <code>BERT_Arch</code>:</p> <pre class="lang-py prettyprint-override"><code>class BERT_Arch(nn.Module): def __init__(self, bert): ... #define the forward pass def forward(self, input_ids, attention_mask ): </code></pre> <p>You probably want to add an additional parameter and use it in the forward pass:</p> <pre class="lang-py prettyprint-override"><code>class BERT_Arch(nn.Module): def __init__(self, bert): ... #define the forward pass def forward(self, input_ids, attention_mask, return_dict): #pass the inputs to the model _, cls_hs = self.bert(input_ids, attention_mask,return_dict=return_dict) ... return x </code></pre>
python|pytorch|huggingface-transformers|bert-language-model|ray
0
3,519
71,198,417
TensorBoard GPU profiling with Tensorflow Agents
<p>I would like to profile my GPU usage when training agents from <a href="https://github.com/tensorflow/agents" rel="nofollow noreferrer">tensorflow/agents</a>, but I cannot figure out how. Specifically I am trying to profile my GPU when running this <a href="https://github.com/tensorflow/agents/blob/master/tf_agents/agents/sac/examples/v2/train_eval.py" rel="nofollow noreferrer">example</a>.</p> <p>It seems that the TensorBoard profiler requires TensorBoard callbacks to be used like so:</p> <pre class="lang-py prettyprint-override"><code># Create a TensorBoard callback logs = &quot;logs/&quot; + datetime.now().strftime(&quot;%Y%m%d-%H%M%S&quot;) tboard_callback = tf.keras.callbacks.TensorBoard(log_dir = logs, histogram_freq = 1, profile_batch = '500,520') model.fit(ds_train, epochs=2, validation_data=ds_test, callbacks = [tboard_callback]) </code></pre> <p>However no <code>fit</code> methods are called when training a TF Agent. They are trained using a <code>train</code> method that accepts no <code>callbacks</code> argument, which can be seen <a href="https://github.com/tensorflow/agents/blob/v0.12.0/tf_agents/agents/tf_agent.py#L294-L338" rel="nofollow noreferrer">here</a>.</p> <p>Is there another way to get the TensorBoard profiler to work when training an agent from the Tensorflow Agents library?</p>
<p>You can use the <code>tf.profiler</code> module, which is what the TensorBoard callback does <a href="https://github.com/keras-team/keras/blob/v2.8.0/keras/callbacks.py#L2608-L2637" rel="nofollow noreferrer">under the hood</a>.</p> <p>The agent example you linked uses a custom training loop, one possibility would be to use the profiler this way:</p> <pre><code># to customize based on the number of steps you want to profile start_profiling_step = 50 stop_profiling_step = 100 profiling_log_dir = &quot;./profile_logs&quot; while global_step_val &lt; num_iterations: # rest of the training code # ... if global_step_val == start_profiling_step: tf.profiler.experimental.start(logdir=profiling_log_dir) if global_step_val == stop_profiling_step: tf.profiler.experimental.stop(save=True) </code></pre> <p>You can look at the <a href="https://www.tensorflow.org/api_docs/python/tf/profiler/experimental" rel="nofollow noreferrer">documentation of the <code>tf.profiler</code> module</a> for more information.</p>
tensorflow|gpu|tensorboard
1
3,520
52,447,633
Python Script, Pandas Special Characters
<p>I am using this script to geocode addresses. The script works fine, however the output file converts special characters such as <code>中央区</code> and <code>Athénée</code> to gibberish. i.e.</p> <p><code>中央区</code> -> <code>中央区</code></p> <p><code>Athénée</code> -> <code>Athénée</code></p> <p>The input file is a UTF-8 .CSV saved in MAC excel. The script is using Pandas to process data. How could I support special characters such as the above?</p> <p>The code for the full script can be found here: <a href="https://github.com/shanealynn/python_batch_geocode/blob/master/python_batch_geocoding.py" rel="nofollow noreferrer">https://github.com/shanealynn/python_batch_geocode/blob/master/python_batch_geocoding.py</a></p> <pre><code> import pandas as pd import requests import logging import time #------------------ CONFIGURATION ------------------------------- # Set your output file name here. output_filename = '/Users/_Library/Python/geobatch/res1000_output.csv' # Set your input file here input_filename = "/Users/_Library/Python/geobatch/res1000.csv" # Specify the column name in your input data that contains addresses here address_column_name = "Address" # Return Full Google Results? If True, full JSON results from Google are included in output RETURN_FULL_RESULTS = False #------------------ DATA LOADING -------------------------------- # Read the data to a Pandas Dataframe data = pd.read_csv(input_filename, encoding='utf8') addresses = data[address_column_name].tolist() # All done logger.info("Finished geocoding all addresses") # Write the full results to csv using the pandas library. pd.DataFrame(results).to_csv(output_filename, encoding='utf8') </code></pre>
<p>If I insert the line:</p> <pre><code>data['Address'] = data['Address'].map(lambda x: x.encode('unicode-escape').decode('utf-8')) </code></pre> <p>to re-encode the inputs, then the output becomes:</p> <p><code>中央区</code> -> <code>\u4e2d\u592e\u533a</code> instead of <code>中央区</code></p> <p>which is one step closer to the right direction, I presume. Could someone build on this?</p>
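<p>Building on that, here is a minimal sketch of one possible repair, assuming the garbling comes from UTF-8 bytes that were decoded as Latin-1/cp1252 somewhere along the way (that is an assumption about the input file, not something confirmed above):</p>
<pre><code>import pandas as pd

def fix_mojibake(text):
    # Try to reverse a suspected utf-8-read-as-cp1252/latin-1 round trip;
    # if that fails, return the string unchanged.
    for enc in ('cp1252', 'latin-1'):
        try:
            return text.encode(enc).decode('utf-8')
        except (UnicodeEncodeError, UnicodeDecodeError):
            continue
    return text

data = pd.DataFrame({'Address': ['Athénée', '中央区']})
data['Address'] = data['Address'].map(fix_mojibake)
print(data)  # should show Athénée and 中央区 if the assumption holds
</code></pre>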
python|pandas
0
3,521
60,599,607
ImportError: cannot import name 'UnicodeWriter' from 'pandas.io.common'
<p>I am getting this error in this line:</p> <pre><code>sample_submission.to_csv('submissions/submission.csv', index=False, float_format='%.4f') </code></pre> <p><strong>Traceback:</strong></p> <pre><code>--------------------------------------------------------------------------- ImportError Traceback (most recent call last) &lt;ipython-input-227-f01c3d31b8ad&gt; in &lt;module&gt; ---&gt; 33 sample_submission.to_csv('submissions/submission.csv', index=False, float_format='%.4f') ~/anaconda3/lib/python3.7/site-packages/pandas/core/generic.py in to_csv(self, path_or_buf, sep, na_rep, float_format, columns, header, index, index_label, mode, encoding, compression, quoting, quotechar, line_terminator, chunksize, date_format, doublequote, escapechar, decimal) 3178 when appropriate. 3179 decimal : str, default '.' -&gt; 3180 Character recognized as decimal separator. E.g. use ',' for 3181 European data. 3182 ~/anaconda3/lib/python3.7/site-packages/pandas/io/formats/csvs.py in &lt;module&gt; 21 from pandas.core.dtypes.missing import notna 22 ---&gt; 23 from pandas.io.common import ( 24 UnicodeWriter, 25 _get_handle, ImportError: cannot import name 'UnicodeWriter' from 'pandas.io.common' (/home/kriti/anaconda3/lib/python3.7/site-packages/pandas/io/common.py) </code></pre> <p>How to resolve this error?</p>
<p>What is your pandas version? This does not happen in pandas 1.0.1. Maybe try upgrading your pandas to see if you still get the error.</p>
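<p>For reference, a quick way to check the installed version and then upgrade it (the exact upgrade command depends on your environment; with Anaconda you would typically use <code>conda</code> rather than <code>pip</code>):</p>
<pre><code>import pandas as pd
print(pd.__version__)  # compare against 1.0.1, where this reportedly does not happen

# then, from a terminal:
#   pip install --upgrade pandas
# or, in an Anaconda environment:
#   conda update pandas
</code></pre>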
python|python-3.x|pandas
2
3,522
72,750,343
How convert this Pytorch loss function to Tensorflow?
<p>This Code for a paper I read had a loss function written using Pytorch, I tried to convert it as best as I could but am getting all Zero's as model predictions, so would like to ask the following:</p> <ol> <li>Are the methods I used the correct equivalent in Tensorflow?</li> <li>Why is the model predicting only Zero's?</li> </ol> <p>Here is the function:</p> <pre><code>#Pytorch class AdjMSELoss1(nn.Module): def __init__(self): super(AdjMSELoss1, self).__init__() def forward(self, outputs, labels): outputs = torch.squeeze(outputs) alpha = 2 loss = (outputs - labels)**2 adj = torch.mul(outputs, labels) adj[adj&gt;0] = 1 / alpha adj[adj&lt;0] = alpha loss = loss * adj return torch.mean(loss) #Tensorflow def custom_loss_function(outputs,labels): outputs = tf.squeeze(outputs) alpha = 2.0 loss = (outputs - labels) ** 2.0 adj = tf.math.multiply(outputs,labels) adj = tf.where(tf.greater(adj, 0.0), tf.constant(1/alpha), adj) adj = tf.where(tf.less(adj, 0.0), tf.constant(alpha), adj) loss = loss * adj return tf.reduce_mean(loss) </code></pre> <p>The function compiles correctly and is being used in the loss and metric parameters, it is outputing results in metrics logs that appear to be correct (Similar to val_loss) <em><strong>but the output of the model after running is just predicting all 0's</strong></em></p> <pre><code> model.compile( loss= custom_loss_function, optimizer=optimization, metrics = [custom_loss_function] ) </code></pre> <p><strong>MODEL</strong></p> <pre><code>#Simplified for readability model = Sequential() model.add(LSTM(32,input_shape=(SEQ_LEN,feature_number),return_sequences=True,)) model.add(Dropout(0.3)) model.add(LSTM(96, return_sequences = False)) model.add(Dropout(0.3)) model.add(Dense(1)) return model </code></pre> <p>Inputs/Features are pct_change Price for the previous SEQ_LEN days. (Given SEQ_LEN days tries to predict next day: Target)</p> <p>Outputs/Targets are the next day's price pct_change * 100 (Ex: 5 for 5%). (1 value per row)</p> <p><em><strong>Note: The model predicts normally when RMSE() is set as the loss function, as mentioned when using the custom_loss_function above it's just predicting Zero's</strong></em></p>
<p>Try this <code>custom_loss</code>:</p> <pre><code>def custom_loss(y_pred, y_true): alpha = 2.0 loss = (y_pred - y_true) ** 2.0 adj = tf.math.multiply(y_pred,y_true) adj = tf.where(tf.greater(adj, 0.0), tf.constant(1/alpha), adj) adj = tf.where(tf.less(adj, 0.0), tf.constant(alpha), adj) loss = loss * adj return tf.reduce_mean(loss) </code></pre> <p>I check with the below code and work correctly <em>(Code for creating a model for learning and predicting the sum of two variables with the <strong><code>custom_loss</code></strong>)</em>:</p> <pre><code>from keras.models import Sequential from keras.layers import Dense import tensorflow as tf import numpy as np x = np.random.rand(1000,2) y = x.sum(axis=1) y = y.reshape(-1,1) def custom_loss(y_pred, y_true): alpha = 2.0 loss = (y_pred - y_true) ** 2.0 adj = tf.math.multiply(y_pred,y_true) adj = tf.where(tf.greater(adj, 0.0), tf.constant(1/alpha), adj) adj = tf.where(tf.less(adj, 0.0), tf.constant(alpha), adj) loss = loss * adj return tf.reduce_mean(loss) model = Sequential() model.add(Dense(128, activation='relu', input_dim=2)) model.add(Dense(64, activation='relu')) model.add(Dense(32, activation='relu')) model.add(Dense(16, activation='relu')) model.add(Dense(1,)) model.compile(optimizer='adam', loss=custom_loss) model.fit(x, y, epochs=200, batch_size=16) for _ in range(10): rnd_num = np.random.randint(50, size=2)[None, :] pred_add = model.predict(rnd_num) print(f'predict sum of {rnd_num[0]} -&gt; {pred_add}') </code></pre> <p>Output:</p> <pre><code>Epoch 1/200 63/63 [==============================] - 1s 2ms/step - loss: 0.2903 Epoch 2/200 63/63 [==============================] - 0s 2ms/step - loss: 0.0084 Epoch 3/200 63/63 [==============================] - 0s 2ms/step - loss: 0.0016 ... Epoch 198/200 63/63 [==============================] - 0s 2ms/step - loss: 3.3231e-07 Epoch 199/200 63/63 [==============================] - 0s 2ms/step - loss: 5.1004e-07 Epoch 200/200 63/63 [==============================] - 0s 2ms/step - loss: 9.8688e-08 predict sum of [43 44] -&gt; [[82.81973]] predict sum of [39 13] -&gt; [[48.97299]] predict sum of [36 46] -&gt; [[78.05187]] predict sum of [46 7] -&gt; [[49.445843]] predict sum of [35 11] -&gt; [[43.311478]] predict sum of [33 1] -&gt; [[31.695848]] predict sum of [6 8] -&gt; [[13.433815]] predict sum of [14 38] -&gt; [[49.54941]] predict sum of [ 1 40] -&gt; [[39.709686]] predict sum of [10 2] -&gt; [[11.325197]] </code></pre>
python|tensorflow|deep-learning|pytorch|tensor
1
3,523
59,850,305
Pandas Plot with Bar Returns Sorted by Counts?
<p>I am using the following code to create a plot of how many nulls appear in my df, sorted by the date column:</p> <pre><code>df[df['Gas'].isnull()]['ReportDate_Time'].sort_values().value_counts().plot() </code></pre> <p><a href="https://i.stack.imgur.com/RSPrq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RSPrq.png" alt="enter image description here"></a></p> <p>This is the plot it returns which is ok but I would rather use a bar plot. However, if I pass the 'bar' argument to the plot method, I automatically get my bars sorted by total count rather than sorted by ReportDate_Time, which is what I originally wanted:</p> <pre><code>df[df['Gas'].isnull()]['ReportDate_Time'].sort_values().value_counts().plot('bar') </code></pre> <p><a href="https://i.stack.imgur.com/IYGoE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IYGoE.png" alt="enter image description here"></a></p> <p>How would I use the bars and sort by ReportDate_Time at the same time?</p>
<p>If I understand correctly, you just need to sort your dataframe by report date, <code>sort_values('Reportdate')</code></p> <p>see below for an example. </p> <pre><code>dates = pd.date_range(pd.to_datetime('2020-01-21'), pd.to_datetime('2020-02-01'),freq='D') vals = np.random.randint(0,500,size=len(dates)) df = pd.DataFrame({'ReportDate' : dates, 'count' : vals}) df.sort_values('count',inplace=True) df.reset_index(drop=True,inplace=True) </code></pre> <hr> <pre><code>print(df) ReportDate count 0 2020-01-28 135 1 2020-01-30 194 2 2020-01-21 238 3 2020-01-29 316 4 2020-01-31 325 5 2020-01-26 408 6 2020-01-23 450 7 2020-01-22 451 8 2020-01-25 452 9 2020-01-24 454 10 2020-02-01 463 11 2020-01-27 489 df.set_index('ReportDate').plot(kind='bar') </code></pre> <p><a href="https://i.stack.imgur.com/ohCsk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ohCsk.png" alt="bar plot"></a></p> <p>and with a sort:</p> <pre><code>df.sort_values('ReportDate',ascending=True).set_index('ReportDate').plot(kind='bar') </code></pre> <p><a href="https://i.stack.imgur.com/TgCaV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TgCaV.png" alt="enter image description here"></a></p>
python|pandas|plot|bar-chart
1
3,524
59,857,571
Why randn doesn't always have a mean of 0 and variance of 1?
<p>For the <a href="https://pytorch.org/docs/stable/torch.html?highlight=randn#torch.randn" rel="nofollow noreferrer">PyTorch.randn()</a> method the documentation says:</p> <blockquote> <p>Returns a tensor filled with random numbers from a normal distribution with mean <code>0</code> and variance <code>1</code> (also called the standard normal distribution).</p> </blockquote> <p>So here is an example tensor:</p> <pre><code>x = torch.randn(4,3) tensor([[-0.6569, -0.7337, -0.0028], [-0.3938, 0.3223, 0.0497], [ 0.0129, -2.7546, -2.2488], [ 1.6754, -0.1497, 1.8202]]) </code></pre> <p>When I print the mean:</p> <pre><code>x.mean() tensor(-0.2550) </code></pre> <p>When I print the standard deviation:</p> <pre><code>x.std() tensor(1.3225) </code></pre> <p>So why isn't the mean 0 and the standard deviation 1?</p> <p>Bonus question: How do I generate a random tensor that always has a mean of 0?</p>
<p>It would be a big coincidence that a finite sample of the distribution has <strong>exactly</strong> the same mean and <strong>exactly</strong> the same standard deviation. It is to be expected that the more numbers you generate, the closer the mean and deviation of the sample approach the "true" mean and deviation of the distribution.</p>
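<p>A quick way to see this (the exact numbers will differ from run to run, since the draws are random):</p>
<pre><code>import torch

for n in (12, 1_000, 1_000_000):
    x = torch.randn(n)
    # the sample mean/std drift toward 0 and 1 as the sample grows
    print(n, x.mean().item(), x.std().item())

# For the bonus question: to make the empirical mean of a given sample
# (numerically) zero, you can centre it yourself.
x = torch.randn(4, 3)
x = x - x.mean()
print(x.mean())  # ~0 up to floating point error
</code></pre>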
python|random|pytorch|tensor
5
3,525
59,786,177
Find difference between two pandas dataframes when both contains same rows but one dataframe contains it more than once
<p>I have two pandas dataframes</p> <p>df1</p> <pre><code>jon,12,NewYork jon,12,NewYork james,14,LA </code></pre> <p>df2</p> <pre><code>jon,12,NewYork james,14,LA </code></pre> <p>I want to compare them and get the difference as below</p> <p>deltaDF</p> <pre><code>jon,12,NewYork </code></pre> <p>I tried <code>pd.concat([df1,df2,df2],axis=0,sort=False).drop_duplicates(keep=False)</code> This works fine when there are not duplicates but doesn't give difference when one of the dataframe contains duplicates and other dataframe has single entry. I have also tried the solutions mentioned in <a href="https://stackoverflow.com/questions/48647534/python-pandas-find-difference-between-two-data-frames">Python Pandas - Find difference between two data frames</a> but that is also returning empty dataframe in this case</p> <h2>Similar questions</h2> <p>I think this is not a duplicate question because an answer given to <a href="https://stackoverflow.com/questions/48647534/python-pandas-find-difference-between-two-data-frames">this question</a> returning empty dataframe for the above scenario.</p> <h2>Edit</h2> <p>People are telling that this is not possible. Can we do something like this:</p> <p>Add a column giving the count of occurrence of each row</p> <p>Convert above df1 to</p> <pre><code>jon,12,NewYork,2 james,14,LA,1 </code></pre> <p>Convert above df2 to</p> <pre><code>jon,12,NewYork,1 </code></pre> <p>Now I can use all columns as index and subtract the last column.</p>
<p>You can add a new column to catch the duplicates:</p> <pre><code>df1['merge'] = df1.groupby(['0','1','2']).cumcount() df2['merge'] = df2.groupby(['0','1','2']).cumcount() pd.concat([df1,df2]).drop_duplicates(keep=False) </code></pre> <p>Afterwards you can drop the added column again, as shown in the example below.</p>
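<p>A small end-to-end sketch with the data from the question (the column names are just for illustration):</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame({'name': ['jon', 'jon', 'james'],
                    'age': [12, 12, 14],
                    'city': ['NewYork', 'NewYork', 'LA']})
df2 = pd.DataFrame({'name': ['jon', 'james'],
                    'age': [12, 14],
                    'city': ['NewYork', 'LA']})

# number the duplicates within each group, so the n-th copy of a row in df1
# only cancels against an n-th copy in df2
df1['merge'] = df1.groupby(['name', 'age', 'city']).cumcount()
df2['merge'] = df2.groupby(['name', 'age', 'city']).cumcount()

delta = pd.concat([df1, df2]).drop_duplicates(keep=False)
delta = delta.drop(columns='merge')  # drop the helper column again
print(delta)  # the extra "jon, 12, NewYork" row
</code></pre>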
python|pandas|dataframe
2
3,526
32,533,722
last value carried forward in pandas
<p>I have 20 minutes of observed data, in 5 minute bins as follows:</p> <pre><code> bin var1 var2 var3 var4 5 -76.30 71.96 557.79 0.06 10 -61.23 78.14 600.69 0.09 15 -54.36 73.63 630.71 0.03 20 -12.41 71.46 661.19 0.08 </code></pre> <p>I need to model for an hour of data by carrying the last observed value forward and get the following output:</p> <pre><code>bin var1 var2 var3 var4 5 -76.30 71.96 557.79 0.06 10 -61.23 78.14 600.69 0.03 15 -54.36 73.63 630.71 0.09 20 -12.41 71.46 661.19 0.08 25 -12.41 71.46 661.19 0.08 30 -12.41 71.46 661.19 0.08 35 -12.41 71.46 661.19 0.08 40 -12.41 71.46 661.19 0.08 45 -12.41 71.46 661.19 0.08 50 -12.41 71.46 661.19 0.08 55 -12.41 71.46 661.19 0.08 60 -12.41 71.46 661.19 0.08 </code></pre> <p>what is the best way to code this in a pandas data frame? please &amp; thanks.</p>
<p>While you can append to the DataFrame, it's a relatively inefficient operation, as each step takes a copy. <a href="http://pandas.pydata.org/pandas-docs/stable/basics.html#reindexing-and-altering-labels" rel="nofollow"><code>reindex</code></a> provides an easy way to align the data to a new index, then you can forward fill the the values with <code>fillna</code> method.</p> <pre><code>In [31]: df = df.set_index('bin') ...: df = df.reindex(range(5, 65, 5)).fillna(method='ffill') In [32]: df Out[32]: var1 var2 var3 var4 bin 5 -76.30 71.96 557.79 0.06 10 -61.23 78.14 600.69 0.09 15 -54.36 73.63 630.71 0.03 20 -12.41 71.46 661.19 0.08 25 -12.41 71.46 661.19 0.08 30 -12.41 71.46 661.19 0.08 35 -12.41 71.46 661.19 0.08 40 -12.41 71.46 661.19 0.08 45 -12.41 71.46 661.19 0.08 50 -12.41 71.46 661.19 0.08 55 -12.41 71.46 661.19 0.08 60 -12.41 71.46 661.19 0.08 </code></pre>
python|pandas
4
3,527
40,367,510
Deep learning Gradient at last but output layer is always zero
<p>I have been working on the Udacity self-driving challenge #2. Whatever changes I make to the deep network, like the learning rate or activation function, I get a zero-gradient issue while training. I have used both cross-entropy loss and MSE loss. For cross-entropy, 100 classes are used with a degree difference of 10, i.e. a radian angle of 0.17. For example, (-8.2 to -8.03) is class 0, then (-8.03 to -7.86) is class 1, and so on. </p> <p>Please find the attached screenshots. As seen, the layer before the output (fc4 in the first image) almost becomes zero, so most of the gradients above it follow almost the same pattern. I need some suggestions to eliminate this zero-gradient error.</p> <p><img src="https://i.stack.imgur.com/BgOCc.png" alt="Model_View"></p> <p><img src="https://i.stack.imgur.com/q16Qv.png" alt="Gradient_Zero_fc4_layer"></p>
<p>This seems to be a vanishing gradient problem. 1.) Have you tried ReLU? (I know you said you have tried different activation functions.) 2.) Have you tried reducing the number of layers? 3.) Are your features normalized?</p> <p>There are architectures designed to prevent this as well (e.g. <a href="https://www.quora.com/How-does-LSTM-help-prevent-the-vanishing-and-exploding-gradient-problem-in-a-recurrent-neural-network" rel="nofollow noreferrer">LSTM</a>), but I think you should be able to get by with something simple like the above. </p>
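<p>Purely as an illustration of those three suggestions (a generic Keras-style sketch with dummy data, not the poster's actual network):</p>
<pre><code>import numpy as np
import tensorflow as tf

# dummy features, standardized to zero mean / unit variance per column
X = np.random.rand(256, 10).astype('float32')
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = np.random.rand(256, 1).astype('float32')

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),  # ReLU instead of a saturating activation
    tf.keras.layers.Dense(32, activation='relu'),                     # a shallower stack
    tf.keras.layers.Dense(1)                                          # linear output head
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
</code></pre>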
tensorflow|deep-learning
0
3,528
40,660,716
numpy append slices to two dimensional array to make it three dimensional
<p>I have <code>5</code> numpy arrays with <code>shape (5,5)</code>. What I want to achieve is to combine these <code>5</code> numpy arrays to one array of shape (5,5,5). My code looks like the following but does not work:</p> <pre><code>combined = np.empty((0, 5, 5), dtype=np.uint8) for idx in range(0, 5): array = getarray(idx) # returns an array of shape (5,5) np.append(combined, img, axis=0) </code></pre> <p>I thought if I set the first axis to 0 it will append on this axis so that in the end the shape will be (5,5,5). What is wrong here?</p>
<p>I have figured it out by myself:</p> <pre><code>combined = np.empty((0, 5, 5), dtype=np.uint8) for idx in range(0, 5): array = getarray(idx) # returns an array of shape (5,5) array = array[np.newaxis, :, :] # add a leading axis so the shape becomes (1, 5, 5) combined = np.append(combined, array, axis=0) print(combined.shape) # returns (5, 5, 5) </code></pre>
python|arrays|numpy|multidimensional-array|append
1
3,529
61,640,412
How can I get multiple dataframes returned from a class function?
<p>So I have made a class that takes in a ticker symbol, and it returns a dataframe with all the price information for the dates specified. Here is the code:</p> <pre><code>import pandas as pd import numpy as np import pandas_datareader as pdr # class to get stock price class GetStockInfo(): ''' Class to retrieve stock info and returns it as a dataframe ''' def __init__(self, ticker): self.ticker = ticker.upper() def build_df(self, start_date, end_date): df = pd.DataFrame(pdr.DataReader(self.ticker, 'yahoo', start_date, end_date)) return df </code></pre> <p>Now this works perfectly, but ideally I'd like to pass in a list of symbols and have it return a separate df for each symbol. So, for example: </p> <p><code>symbols = ['aapl','googl','msft','tsla']</code></p> <p>and I'd like it to return 4 separate dfs, each named <code>aapl_df</code>, <code>msft_df</code>, etc. Is there any way to do this?</p> <p>I've tried using a for loop like so:</p> <pre><code>for i in symbols: stock = GetStockInfo(i) i_df = stock.build_df('2019-01-01', '2020-01-01') </code></pre> <p>but I'm not sure how to get it to return separate dfs.</p>
<p>You could put the dataframes in a list and return that. Or better yet, in a dictionary:</p> <pre class="lang-py prettyprint-override"><code>results = {} for i in symbols: stock = GetStockInfo(i) results[i] = stock.build_df('2019-01-01', '2020-01-01') return results </code></pre>
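<p>Each frame can then be looked up by its ticker. For example, wrapping the loop in a small helper (the name <code>build_all</code> is just illustrative; <code>GetStockInfo</code> is the class from the question):</p>
<pre><code>def build_all(symbols, start='2019-01-01', end='2020-01-01'):
    results = {}
    for i in symbols:
        stock = GetStockInfo(i)
        results[i] = stock.build_df(start, end)
    return results

dfs = build_all(['aapl', 'googl', 'msft', 'tsla'])
aapl_df = dfs['aapl']  # the frame you would have called aapl_df
msft_df = dfs['msft']
</code></pre>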
python|pandas|dataframe|oop
1
3,530
61,676,600
How to use map and lambda function to convert a column in pandas to datetime?
<p>I have a column that has time like 9:3:15 (no leading 0s) (3 mins and 15 sec past 9). I would like to plot this (no date, just time) on the x axis. So I tried the following code: </p> <pre><code>def data_vis(dayN): plt.subplots_adjust(bottom=0.2) plt.xticks( rotation=25 ) ax=plt.gca() #ax.xaxis_date() xfmt = md.DateFormatter('%H:%MM:%S') ax.xaxis.set_major_formatter(xfmt) plt.plot_date(md.date2num(dayN["Time Stamp"]),dayN["Temperature(deg C)"]) plt.show() </code></pre> <p>I got the following error:</p> <blockquote> <p>DateFormatter found a value of x=0, which is an illegal date; this usually occurs because you have not informed the axis that it is plotting dates, e.g., with ax.xaxis_date()</p> </blockquote> <p>With <code>ax.xaxis_date()</code> enabled, I got the error below: </p> <blockquote> <p>'str' object has no attribute 'toordinal' </p> </blockquote> <p>Since that column was "str", I thought of using </p> <p><code>pd.to_datetime(day2["Time Stamp"], format = '%H:%M:%S')</code></p> <p>but it results in the output below:</p> <blockquote> <p>1900-01-01 09:51:33</p> </blockquote> <p>Now I would like to try <code>datetime.datetime.strptime</code> using map and/or a lambda function for the above-mentioned column. </p> <p>Any help regarding the implementation of the map function and plotting the data would be really helpful. Also, will <code>datetime.strptime</code> help in resolving the problem? </p> <p>thanks,</p>
<p>not sure what's going wrong here; the following works fine for me:</p> <pre><code>import pandas as pd import numpy as np from matplotlib import pyplot as plt import matplotlib.dates as md # create a dummy df... dayN = pd.DataFrame({"Temperature(deg C)": np.random.rand(10)+20, "Time Stamp": ['9:3:15','9:3:16','9:3:17','9:3:18','9:3:19', '9:3:20','9:3:21','9:3:22','9:3:23','9:3:24']}) # format to datetime, don't care about the date: dayN["Time Stamp"] = pd.to_datetime(dayN["Time Stamp"], format='%H:%M:%S') plt.subplots_adjust(bottom=0.2) plt.xticks(rotation=25) ax = plt.gca() xfmt = md.DateFormatter('%H:%M:%S') ax.xaxis.set_major_formatter(xfmt) plt.plot_date(md.date2num(dayN["Time Stamp"]), dayN["Temperature(deg C)"]) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/TtJYZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TtJYZ.png" alt="enter image description here"></a></p>
pandas|datetime|matplotlib|time
0
3,531
61,935,573
Turn NaN in dataframe if condition met
<p>I have a column here that looks like this and is part of a dataframe:</p> <pre><code>df.Days_Since_Earnings Out[5]: 0 21.0 2 1.0 4 1000.0 5 500.0 6 119.0 Name: Days_Since_Earnings, Length: 76, dtype: float64 </code></pre> <p>I want to leave it as it is except I want to turn numbers above 120 to 'nan's, so it would look like this:</p> <pre><code>df.Days_Since_Earnings Out[5]: 0 21.0 2 1.0 4 nan 5 nan 6 119.0 Name: Days_Since_Earnings, Length: 76, dtype: float64 </code></pre> <p>thanks to anyone who helps!</p>
<p>You can use <code>mask</code>:</p> <pre><code>df['Days_Since_Earnings'] = df.Days_Since_Earnings.mask(df.Days_Since_Earnings &gt; 120) </code></pre> <p>or <code>where</code> with reverse condition</p> <pre><code>df['Days_Since_Earnings'] = df.Days_Since_Earnings.where(df.Days_Since_Earnings &lt;= 120) </code></pre> <p>or <code>loc</code> assignment:</p> <pre><code>df.loc[df.Days_Since_Earnings &gt; 120, 'Days_Since_Earnings'] = np.nan </code></pre>
python|pandas|dataframe|filter|filtering
2
3,532
62,017,979
Python Generate unique ranges of a specific length and categorize them
<p>I have a dataframe column which specifies how many times a user has performed an activity. eg. </p> <pre><code>&gt;&gt;&gt; df['ActivityCount'] Users ActivityCount User0 220 User1 190 User2 105 User3 109 User4 271 User5 265 ... User95 64 User96 15 User97 168 User98 251 User99 278 Name: ActivityCount, Length: 100, dtype: int32 &gt;&gt;&gt; activities = sorted(df['ActivityCount'].unique()) [9, 15, 16, 17, 20, 23, 25, 26, 28, 31, 33, 34, 36, 38, 39, 43, 49, 57, 59, 64, 65, 71, 76, 77, 78, 83, 88, 94, 95, 100, 105, 109, 110, 111, 115, 116, 117, 120, 132, 137, 138, 139, 140, 141, 144, 145, 148, 153, 155, 157, 162, 168, 177, 180, 182, 186, 190, 192, 194, 197, 203, 212, 213, 220, 223, 231, 232, 238, 240, 244, 247, 251, 255, 258, 260, 265, 268, 269, 271, 272, 276, 278, 282, 283, 285, 290] </code></pre> <p>According to their ActivityCount, I have to divide users into 5 different categories eg <code>A, B, C, D</code> and <code>E</code>. Activity Count range varies from time to time. In the above example it's approx in-between <code>(9-290)</code> (lowest and highest of the series), it could be <code>(5-500)</code> or <code>(5 to 30)</code>. In above example, I can take the max number of activities and divide it by 5 and categorize each user between the range of 58 <code>(from 290/5)</code> like <code>Range A: 0-58</code>, <code>Range B: 59-116</code>, <code>Range C: 117-174</code>...etc</p> <p>Is there any other way to achieve this using pandas or numpy, so that I can directly categorize the column in the given categories? Expected output: -</p> <pre><code>&gt;&gt;&gt; df Users ActivityCount Category/Range User0 220 D User1 190 D User2 105 B User3 109 B User4 271 E User5 265 E ... User95 64 B User96 15 A User97 168 C User98 251 E User99 278 E </code></pre>
<p>The natural way to do that would be to split the value range into 5 equal-width intervals, and then bin the data based on these intervals. Luckily, pandas allows you to easily do that: </p> <pre><code>df["category"] = pd.cut(df.Activity, 5, labels= ["a","b", "c", "d", "e"]) </code></pre> <p>The output is something like: </p> <pre><code> Activity Category 34 115 b 15 43 a 57 192 d 78 271 e 26 88 b 6 25 a 55 186 d 63 220 d 1 15 a 76 268 e </code></pre> <h2>An alternative view - clustering</h2> <p>In the above method, we've split the data into 5 bins, where the widths of the different bins are equal. An alternative, more sophisticated approach would be to split the data into 5 clusters and aim to have the data points in each cluster as similar to each other as possible. In machine learning, this is known as a clustering / classification problem. </p> <p>One classic clustering algorithm is <a href="https://en.wikipedia.org/wiki/K-means_clustering" rel="nofollow noreferrer">k-means</a>. It's typically used for data with multiple dimensions (e.g. monthly activity, age, gender, etc.). This is, therefore, a very simplistic case of clustering. </p> <p>In this case, k-means clustering can be done in the following way: </p> <pre><code>import scipy from scipy.cluster.vq import vq, kmeans, whiten df = pd.DataFrame({"Activity": l}) features = np.array([[x] for x in df.Activity]) whitened = whiten(features) codebook, distortion = kmeans(whitened, 5) code, dist = vq(whitened, codebook) df["Category"] = code </code></pre> <p>And the output looks like:</p> <pre><code> Activity Category 40 138 1 79 272 0 72 255 0 13 38 3 41 139 1 65 231 0 26 88 2 59 197 4 76 268 0 45 145 1 </code></pre> <p>A couple of notes: </p> <ul> <li>The labels of the categories are arbitrary; label '2' does not necessarily correspond to higher activity than label '1'. </li> <li>I didn't migrate the labels from 0-4 to A-E. This can easily be done using pandas' <code>map</code>. </li> </ul>
python|pandas|numpy
2
3,533
58,089,770
Using apply function Dataframe
<p>I need help correcting an error I am getting.</p> <p>I have the following dataframe:</p> <pre><code>x = [-0.75853, -0.75853, -0.75853, -0.75852] y = [-0.63435, -0.63434, -0.63435, -0.63436] z = [-0.10488, -0.10490, -0.10492, -0.10495] w = [-0.10597, -0.10597, -0.10597, -0.10596] df = pd.DataFrame([x, y, z, w], columns=['x', 'y', 'z', 'w']) </code></pre> <p>I created the following functions:</p> <pre><code>import math def roll(qw, qx, qy, qz): # x-axis rotation sinr_cosp = +2.0 * (qw * qx + qy + qz) cosr_cosp = +1.0 - 2.0 * (qx * qx + qy * qy) roll = math.atan2(sinr_cosp, cosr_cosp) return roll def pitch(qw, qx, qy, qz): # y-axis rotation sinp = +2.0 * (qw * qy - qz * qx) if(math.fabs(sinp) &gt;= 1): pitch = copysign(M_PI/2, sinp) else: pitch = math.asin(sinp) return sinp def yaw(qw, qx, qy, qz): # z-axis rotation siny_cosp = +2.0 * (qw * qz + qx * qy) cosy_cosp = +1.0 - 2.0 * (qy * qy + qz * qz) yaw = math.atan2(siny_cosp, cosy_cosp) return yaw </code></pre> <p>Finally, using Pandas apply function, I tried to associate the result with a new column:</p> <pre><code>q_w = df['w'] q_x = df['x'] q_y = df['y'] q_z = df['z'] df['row'] = df.apply(roll(q_w, q_x, q_y, q_z)) </code></pre> <p>The same error occurs when using the other functions.</p> <p>I saw an issue right here on Stack where this bug was fixed using Numpy. I believe this is not possible here because I am using functions specific to the Math package.</p> <blockquote> <p>TypeError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/pandas/core/series.py in wrapper(self) 92 raise TypeError("cannot convert the series to " ---> 93 "{0}".format(str(converter))) 94 </p> <p>TypeError: cannot convert the series to </p> <p>The above exception was the direct cause of the following exception:</p> <p>SystemError Traceback (most recent call last) 4 frames in () ----> 1 df['row'] = df.apply(roll(q_w, q_x, q_y, q_z))</p> <p> in roll(qw, qx, qy, qz) 4 sinr_cosp = +2.0 * (qw * qx + qy + qz) 5 cosr_cosp = +1.0 - 2.0 * (qx * qx + qy * qy) ----> 6 roll = math.atan2(sinr_cosp, cosr_cosp) 7 return roll 8 </p> <p>/usr/local/lib/python3.6/dist-packages/pandas/core/series.py in wrapper(self) 88 89 def wrapper(self): ---> 90 if len(self) == 1: 91 return converter(self.iloc[0]) 92 raise TypeError("cannot convert the series to "</p> <p>/usr/local/lib/python3.6/dist-packages/pandas/core/series.py in <strong>len</strong>(self) 593 Return the length of the Series. 594 """ --> 595 return len(self._data) 596 597 def view(self, dtype=None):</p> <p>/usr/local/lib/python3.6/dist-packages/pandas/core/internals/managers.py in <strong>len</strong>(self) 290 291 def <strong>len</strong>(self): --> 292 return len(self.items) 293 294 def <strong>unicode</strong>(self):</p> <p>SystemError: PyEval_EvalFrameEx returned a result with an error set</p> </blockquote>
<p>You should use <code>apply</code> like this:</p> <pre><code>df.apply(lambda x : roll(x['w'],x['x'],x['y'],x['z']),1) Out[291]: 0 -2.175472 1 -1.909103 2 -0.394163 3 -0.397885 dtype: float64 </code></pre>
python-3.x|pandas|math|apply
4
3,534
58,145,617
ValueError: could not convert string to float: ' 2,019,278 ' - potentially illegal character or format?
<p>I am having an issue with a format conversion from object to 'float64'. I cannot perform any matrix operations until this conversion occurs and in general this is a major annoyance because I am getting caught up in the weeds...</p> <p>Potentially it has to do with the inclusion of a space or a space and an apostrophe in the array in the original format.</p> <p><strong>The original array looks like this:</strong></p> <pre><code>array([[' 2,019,278 ', ' 14,569,743 ', ' 14,116,057 ', ' 30,705,078 '], [' 514,049 ', ' 3,301,330 ', ' 1,775,624 ', ' 5,591,003 '], [' 40,364 ', ' 283,894 ', ' 138,342 ', ' 462,600 '], [' 77,849 ', ' 528,504 ', ' 665,829 ', ' 1,272,182 '], [' 4,534 ', ' 39,282 ', ' 23,902 ', ' 67,718 '], [' 182,313 ', ' 795,102 ', ' 369,981 ', ' 1,347,396 '], [' 256,867 ', ' 694,895 ', ' 240,025 ', ' 1,191,787 '], [' 1,527,829 ', ' 12,690,612 ', ' 12,968,625 ', ' 27,187,066 '], [' 771,142 ', ' 2,937,748 ', ' 1,612,082 ', ' 5,320,972 ']], dtype=object) </code></pre> <p><strong>My code looks like this:</strong></p> <pre><code>a = pd.DataFrame.to_numpy(df_By_Race_Disability, dtype=None, copy=False) a = a.astype('float64') </code></pre> <p><strong>This is the error I receive:</strong></p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-18-ba9570a13fdd&gt; in &lt;module&gt; 1 a = pd.DataFrame.to_numpy(df_By_Race_Disability, dtype=None, copy=False) 2 #b = pd.DataFrame.to_numpy(df_By_Race_Total, dtype=None, copy=False) ----&gt; 3 a = a.astype('float64') 4 #b = b.astype('float64') 5 #b = np.reciprocal(b) ValueError: could not convert string to float: ' 2,019,278 ' </code></pre>
<p>Do first this:</p> <pre><code>df_temp = df_By_Race_Disability.applymap(lambda x: x.replace(',', '')) </code></pre> <p>Then you can continue with your code</p> <pre><code>a = pd.DataFrame.to_numpy(df_temp, dtype=None, copy=False) a = a.astype('float64') </code></pre>
numpy
0
3,535
57,897,080
Load custom loss with extra input in keras
<p>I have a custom loss function that takes the input to the model as one of the arguments. If I load in the same session in which I train, I can load it no problem using <a href="https://stackoverflow.com/questions/48373845/loading-model-with-custom-loss-keras">this</a> technique.</p> <pre class="lang-py prettyprint-override"><code> def custom_loss(inputs): def loss(y_true, y_pred): return ... return loss inputs = keras.layers.Input(shape=(...)) y = keras.layers.Activation('tanh')(inputs) model = keras.models.Model(inputs=inputs, outputs=y) model.compile(loss=custom_loss(inputs), optimizer='Adam') model.fit(...) model.save('mymodel.h5') load_model('mymodel.h5', custom_objects={'custom_loss': custom_loss(inputs}) </code></pre> <p>However, I run into problems when I try to load the model in a later session, because this time I don't have access to the original input tensor. If I make a new inputs placeholder, then the model expects two different sets of inputs and I error out.</p> <pre><code>inputs = keras.layers.Input(shape=(...)) load_model('mymodel.h5', custom_objects={'custom_loss': custom_loss(inputs)}) </code></pre> <p>Is there a good way to solve this problem? The issue at the end of the day is that the inputs haven't been deserialized yet so they can't be passed in to the custom objects.</p> <p>I don't want to just save the weights and create a new model with the same weights because I lose the optimizer state.</p>
<p>An alternate way is to compute the loss inside a Keras layer and pass a dummy loss function that just returns the model's output as the loss in the compile method. There are other ways of doing this. But this is the one that I prefer.</p> <pre><code>import tensorflow as tf print('Tensorflow', tf.__version__) def custom_loss(tensor): y_true, y_pred, inputs = tensor[0], tensor[1], tensor[1] loss = ... return tf.constant([0], dtype=tf.float32) def dummy_loss(y_true, y_pred): return y_pred def get_model(training=False): inputs = tf.keras.layers.Input(shape=(10,)) y = tf.keras.layers.Activation('tanh')(inputs) if training: targets = tf.keras.layers.Input(shape=(10,)) loss_layer = tf.keras.layers.Lambda(custom_loss)([targets, y, inputs]) model = tf.keras.models.Model(inputs=[inputs, targets], outputs=loss_layer) else: model = tf.keras.models.Model(inputs=inputs, outputs=y) return model model = get_model(training=True) model.compile(optimizer='sgd', loss=dummy_loss) model.save('model.h5') new_model = tf.keras.models.load_model('model.h5', custom_objects={'dummy_loss':dummy_loss}) </code></pre>
python|tensorflow|keras
3
3,536
57,875,425
Python - Pandas replacing dataframe column values - column data stored as list (i.e., '[this, that,'])
<p>I've tried a bunch of different things and I'm assuming I'm close here. I have a list of words that I generated from research abstracts courtesy of the Gensim keyword summarizer. The data is accurate, however it's stored as a list for each row and I want to get rid of the [' and '] for each row. I tried the code below and different variations, but either I get an error or the code processes but doesn't replace. I've tried: </p> <pre><code> #scenario 1 keywords = ['screened', 'model', 'health', 'volume'] df['newnlpkeywords'] = keywords df['newnlpkeywords'].replace("']", "", inplace=True) </code></pre> <p>and</p> <pre><code> #scenario 2 keywords = ['screened', 'model', 'health', 'volume'] df['newnlpkeywords'] = keywords.replace(replace("']", "") </code></pre> <p>I knew it's a noob question, but I'm trying to learn! I figure after 30 minutes of attempts, I should ask for help. Thanks!</p>
<p>Is this what you're looking for?</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import re rgx = lambda x: re.sub("']","",x) rgx = np.vectorize(rgx) df['newnlpkeywords'] = rgx(df['newnlpkeywords'].values) </code></pre> <p>The code above applies the <code>rgx</code> function to every row in <code>df['newnlpkeywords']</code> and assigns the result back to the column.</p> <p>(I know there are probably more Pythonic ways to do this; however, this is a quick fix, and I'm sure there's a tidier answer.)</p>
python|pandas|list|text|replace
1
3,537
34,080,909
Pandas : Add new column with function based on index
<p>Let's say I have a series <code>s</code>:</p> <pre><code>index_column size A 1 B 2 C 3 D 4 </code></pre> <p>I want to add a new column that contains the result of a function <code>f</code>:</p> <pre><code>def f(index_column): # do something return string </code></pre> <p>so that:</p> <pre><code>index_column size function(index_column) A 1 f(A) B 2 f(B) C 3 f(C) D 4 f(D) </code></pre> <p>Is it possible with a <code>Series</code>, or do I need to do that with a <code>DataFrame</code>?</p>
<p>Here is one way to do it with a DataFrame:</p> <pre><code>import pandas as pd def app_Z(s): """Append 'Z' onto column data""" return s+'Z' # recreate the series s = pd.Series(data=[1,2,3,4], index=['A','B','C','D'], name='Size') # create DataFrame and apply function to column 'Index' df = pd.DataFrame(s) df.reset_index(inplace=True) df.columns = ['Index', 'Size'] df['Func'] = df['Index'].apply(app_Z) df.set_index('Index', drop=True, inplace=True) print(df) Size Func Index A 1 AZ B 2 BZ C 3 CZ D 4 DZ </code></pre>
python|pandas
8
3,538
33,997,753
Calculating pairwise correlation among all columns
<p>I am working with large biological dataset.</p> <p>I want to calculate PCC(Pearson's correlation coefficient) of all 2-column combinations in my data table and save the result as DataFrame or CSV file.</p> <p>Data table is like below:columns are the name of genes, and rows are the code of dataset. The float numbers mean how much the gene is activated in the dataset.</p> <pre><code> GeneA GeneB GeneC ... DataA 1.5 2.5 3.5 ... DataB 5.5 6.5 7.5 ... DataC 8.5 8.5 8.5 ... ... </code></pre> <p>As a output, I want to build the table(DataFrame or csv file) like below, because scipy.stats.pearsonr function returns (PCC, p-value). In my example, XX and YY mean the results of pearsonr([1.5, 5.5, 8.5], [2.5, 6.5, 8.5]). In the same way, ZZ and AA mean the result of pearsonr([1.5, 5.5, 8.5], [3.5, 7.5, 8.5]). I do not need the redundant data such as GeneB_GeneA or GeneC_GeneB in my test.</p> <pre><code> PCC P-value GeneA_GeneB XX YY GeneA_GeneC ZZ AA GeneB_GeneC BB CC ... </code></pre> <p>As the number of columns and rows are many(over 100) and their names are complicated, using column names or row names will be difficult.</p> <p>It might be a simple problem for experts, I do not know how to deal with this kind of table with python and pandas library. Especially making new DataFrame and adding result seems to be very difficult.</p> <p>Sorry for my poor explanation, but I hope someone could help me.</p>
<pre><code>from pandas import * import numpy as np from libraries.settings import * from scipy.stats.stats import pearsonr import itertools </code></pre> <p>Creating random sample data:</p> <pre><code>df = DataFrame(np.random.random((5, 5)), columns=['gene_' + chr(i + ord('a')) for i in range(5)]) print(df) gene_a gene_b gene_c gene_d gene_e 0 0.471257 0.854139 0.781204 0.678567 0.697993 1 0.292909 0.046159 0.250902 0.064004 0.307537 2 0.422265 0.646988 0.084983 0.822375 0.713397 3 0.113963 0.016122 0.227566 0.206324 0.792048 4 0.357331 0.980479 0.157124 0.560889 0.973161 correlations = {} columns = df.columns.tolist() for col_a, col_b in itertools.combinations(columns, 2): correlations[col_a + '__' + col_b] = pearsonr(df.loc[:, col_a], df.loc[:, col_b]) result = DataFrame.from_dict(correlations, orient='index') result.columns = ['PCC', 'p-value'] print(result.sort_index()) PCC p-value gene_a__gene_b 0.461357 0.434142 gene_a__gene_c 0.177936 0.774646 gene_a__gene_d -0.854884 0.064896 gene_a__gene_e -0.155440 0.802887 gene_b__gene_c -0.575056 0.310455 gene_b__gene_d -0.097054 0.876621 gene_b__gene_e 0.061175 0.922159 gene_c__gene_d -0.633302 0.251381 gene_c__gene_e -0.771120 0.126836 gene_d__gene_e 0.531805 0.356315 </code></pre> <ul> <li>Get unique combinations of <code>DataFrame</code> columns using <code>itertools.combination(iterable, r)</code></li> <li>Iterate through these combinations and calculate pairwise correlations using <code>scipy.stats.stats.personr</code></li> <li>Add results (PCC and p-value tuple) to <code>dictionary</code> </li> <li>Build <code>DataFrame</code> from <code>dictionary</code></li> </ul> <p>You could then also save <code>result.to_csv()</code>. You might find it convenient to use a <code>MultiIndex</code> (two columns containing the names of each columns) instead of the created names for the pairwise correlations.</p>
python|pandas|correlation
19
3,539
37,097,207
How to Print a specific column in a 3D Matrix using numpy Python
<p>I have a problem with printing a column in a numpy 3D Matrix. Here is a simplified version of the problem:</p> <pre><code>import numpy as np Matrix = np.zeros(((10,9,3))) # Creates a 10 x 9 x 3 3D matrix Matrix[2][2][6] = 578 # I want to print Matrix[2][x][6] for x in range(9) # the purpose of this is that I want to get all the Values in Matrix[2][x][6] </code></pre> <p>Much appreciated if you guys can help me out. Thanks in advance.</p>
<p>Slicing would work:</p> <pre><code>a = np.zeros((10, 9, 3)) a[6, 2, 2] = 578 for x in a[6, :, 2]: print(x) </code></pre> <p>Output:</p> <pre><code>0.0 0.0 578.0 0.0 0.0 0.0 0.0 0.0 0.0 </code></pre>
python|numpy|matrix|multidimensional-array
3
3,540
54,757,652
Drop all rows that *do not* contain any NaNs in their columns
<p>I used to drop the rows that have a NaN value in one of their cells with this command:</p> <pre><code>pos_data = df.iloc[:,[5,6,2]].dropna() </code></pre> <p>Now I want to know how I can keep the rows with NaN and remove all other rows which do not have NaN in any of their columns. My data is a pandas DataFrame.</p> <p>Thanks.</p>
<p>Use boolean indexing, find all columns that have at least <em>one</em> NaN in their rows and use the mask to filter.</p> <pre><code>df[df.iloc[:, [5, 6, 2]].isna().any(1)] </code></pre> <p>The DeMorgan equivalent of this is:</p> <pre><code>df[~df.iloc[:, [5, 6, 2]].notna().all(1)] </code></pre> <hr> <pre><code>df = pd.DataFrame({'A': ['x', 'x', np.nan, np.nan], 'B': ['y', np.nan, 'y', 'y'], 'C': list('zzz') + [np.nan]}) df A B C 0 x y z 1 x NaN z 2 NaN y z 3 NaN y NaN </code></pre> <p>If we're only considering columns "A" and "C", then our solution will look like</p> <pre><code>df[['A', 'C']] A C 0 x z 1 x z 2 NaN z 3 NaN NaN # Check which cells are NaN df[['A', 'C']].isna() A C 0 False False 1 False False 2 True False 3 True True # Use `any` along the first axis to perform a logical OR across columns df[['A', 'C']].isna().any(axis=1) 0 False 1 False 2 True 3 True dtype: bool # Now, we filter df[df[['A', 'C']].isna().any(axis=1)] A B C 2 NaN y z 3 NaN y NaN </code></pre> <p>As mentioned, the inverse of this is using <code>notna</code> + <code>all(axis=1)</code>:</p> <pre><code>df[['A', 'C']].notna().all(1) 0 True 1 True 2 False 3 False dtype: bool # You'll notice this is the logical inverse of what we need, # so we invert using bitwise NOT `~` operator ~df[['A', 'C']].notna().all(1) 0 False 1 False 2 True 3 True dtype: bool </code></pre>
python|python-3.x|pandas|dataframe
2
3,541
54,748,478
pandas df - calculate percentage difference not change
<p>I want to calculate the percentage difference not change or just the difference between two values. </p> <p>my df:</p> <pre><code> Radisson Collection 6 Total awareness 0.440553 Very/Somewhat familiar 0.462577 Consideration 0.494652 Ever used 0.484620 </code></pre> <p>Expected output:</p> <pre><code> Radisson Collection 6 Total awareness none Very/Somewhat familiar 4.87726% Consideration 6.70163% Ever used 2.04886% </code></pre> <p>The calculation would be: </p> <pre><code>Difference of 0.440553 and 0.462577 = |0.440553 - 0.462577|/((0.440553 + 0.462577)/2) = 0.022024/0.451565 = 0.048772601950993 = 4.8772601950993% </code></pre>
<p>Take the difference with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.diff.html" rel="nofollow noreferrer"><code>diff</code></a>, convert to absolute values with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.abs.html" rel="nofollow noreferrer"><code>abs</code></a>, and divide by the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.rolling.html" rel="nofollow noreferrer"><code>rolling</code></a> <code>mean</code>:</p> <pre><code>s = df['Radisson Collection'].rolling(2).mean() df['new'] = df['Radisson Collection'].diff().abs().div(s) * 100 print (df) Radisson Collection new Total awareness 0.440553 NaN Very/Somewhat familiar 0.462577 4.877260 Consideration 0.494652 6.701636 Ever used 0.484620 2.048869 </code></pre> <p>If you need percentage strings:</p> <pre><code>df['new'] = (df['Radisson Collection'].diff().abs().div(s) * 100) .iloc[1:].round(5).astype(str) + '%' </code></pre>
python|pandas
2
3,542
49,465,836
Replacing Unicode character in pandas Dataframe column
<p>I have a problem with a pandas Dataframe that amongst other things contains the number of rooms in an apartment (type String). </p> <p>This data consists of a unicode character <strong>u"\u00BD"</strong> (<a href="https://www.fileformat.info/info/unicode/char/00bd/index.htm" rel="nofollow noreferrer">https://www.fileformat.info/info/unicode/char/00bd/index.htm</a>). </p> <p>How do i effectively replace this character with decimal values so that instead of the unicode character the data will read <code>2.5, 3.5, 4.5 etc (Still String format)</code>. </p> <p><strong>It currently looks like this:</strong> <code>2½, 3½, 4½ etc</code> And i want the values in the column to be <code>2.5, 3.5, 4.5 etc</code>.</p>
<p>You can fix your column with:</p> <pre><code>df['rooms'] = df['rooms'].str.replace("½", ".5") </code></pre> <p>To make it a float:</p> <pre><code>df['rooms'] = df['rooms'].str.replace("½", ".5").apply(float) </code></pre>
python|pandas|replace
1
3,543
73,224,541
Tensorflow Recommender - Saving large model with ScaNN index - memory bottleneck
<p>I have a relatively large TF retrieval model using the TFRS library. It uses a <a href="https://github.com/google-research/google-research/tree/master/scann" rel="nofollow noreferrer">ScaNN</a> layer for <a href="https://www.tensorflow.org/recommenders/api_docs/python/tfrs/layers/factorized_top_k/ScaNN" rel="nofollow noreferrer">indexing the recommendations</a>. I am having a system host memory issue when I try to save this model via the <a href="https://www.tensorflow.org/api_docs/python/tf/saved_model/save" rel="nofollow noreferrer">tf.saved_model.save()</a> method. I am running the official TF 2.9.1 Docker Container with TFRS on a VM in the cloud. I have 28 GB of memory to try to save the model.</p> <p><a href="https://www.tensorflow.org/recommenders/examples/quickstart" rel="nofollow noreferrer">Here is the quickstart example:</a></p> <p>Basically we create the first embedding</p> <pre><code>user_model = tf.keras.Sequential([ tf.keras.layers.StringLookup( vocabulary=unique_user_ids, mask_token=None), # We add an additional embedding to account for unknown tokens. tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension) ]) </code></pre> <p>Then create the model</p> <pre><code>class MovielensModel(tfrs.Model): def __init__(self, user_model, movie_model): super().__init__() self.movie_model: tf.keras.Model = movie_model self.user_model: tf.keras.Model = user_model self.task: tf.keras.layers.Layer = task def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -&gt; tf.Tensor: # We pick out the user features and pass them into the user model. user_embeddings = self.user_model(features[&quot;user_id&quot;]) # And pick out the movie features and pass them into the movie model, # getting embeddings back. positive_movie_embeddings = self.movie_model(features[&quot;movie_title&quot;]) # The task computes the loss and the metrics. return self.task(user_embeddings, positive_movie_embeddings) </code></pre> <p>Next we create the ScaNN indexing layer</p> <pre><code>scann_index = tfrs.layers.factorized_top_k.ScaNN(model.user_model) scann_index.index_from_dataset( tf.data.Dataset.zip((movies.batch(100), movies.batch(100).map(model.movie_model))) ) # Get recommendations. _, titles = scann_index(tf.constant([&quot;42&quot;])) print(f&quot;Recommendations for user 42: {titles[0, :3]}&quot;) </code></pre> <p>Finally the model is sent out to be saved</p> <pre><code># Export the query model. with tempfile.TemporaryDirectory() as tmp: path = os.path.join(tmp, &quot;model&quot;) # Save the index. tf.saved_model.save( index, path, options=tf.saved_model.SaveOptions(namespace_whitelist=[&quot;Scann&quot;]) ) # Load it back; can also be done in TensorFlow Serving. loaded = tf.saved_model.load(path) # Pass a user id in, get top predicted movie titles back. scores, titles = loaded([&quot;42&quot;]) print(f&quot;Recommendations: {titles[0][:3]}&quot;) </code></pre> <p>This is the problem line:</p> <pre><code> # Save the index. tf.saved_model.save( index, path, options=tf.saved_model.SaveOptions(namespace_whitelist=[&quot;Scann&quot;]) ) </code></pre> <p>I'm not sure if there is a memory leak or what, but when I train my model on 5M+ records... I can watch the host system memory spike to 100% and the process is killed. If I train on a smaller dataset... there is no problem, so I know the code is okay.</p> <p>Can anyone suggest how to get around the memory bottleneck when saving a large ScaNN retrieval model, so I can eventually load the model back in for inference?</p>
<p>I think you are saving the TF model after training was completed. You just need the saved model to get trained weights from the model.</p> <p>You can try the following code:</p> <pre><code> sku_ids = df['SKU_ID'] sku_ids_list = sku_ids.to_list() q = embedding(sku_ids, output_mode='distance_matrix') dist_mat = tf.cast(q, tf.float32) tree = scann.Scann(n_tables=scann_tables_file_name, n_clusters_per_table=scann_clusters_file_name, dimension=embedding_dimensions, space_type=dist_mat.dtype, metric_type=tf.float32, random_seed=seed, transport_dtype=tf.float32, symmetrize_query_and_dataset=True, num_neighbors_per_table=scann_tables_number_of_neighbors) q = tree.build_index(dist_mat) p = tree.run(dist_mat) model = keras.models.Sequential([ scann.Dense(1, use_bias=False, activation='linear', dtype=tf.float32), keras.layers.Activation('sigmoid') ]) model.compile( keras.optimizers.Adam(1e-3), 'binary_crossentropy', metrics=[metrics.BinaryAccuracy()]) idx = -1 number_of_epochs = 10 optimizer = keras.optimizers.Adam(1e-3) optimizer_state = None random_seed = seed callbacks = [ keras.callbacks.EarlyStopping( monitor='binary_accuracy', mode='max', patience=10, restore_best_weights=True)] batch_size = 1000 total_records = len(sku_ids) epochs = number_of_epochs epochs_completed = 0 while epochs_completed &lt; epochs: idx += 1 if idx * batch_size &gt;= total_records: idx = 0 epochs_completed += 1 optimizer_state = None print(&quot;training epoch: {}&quot;.format(idx)) q_ = tree.transform(dist_mat[idx * batch_size : (idx + 1) * batch_size]) p_ = tree.transform(dist_mat) y = p_[:, :, 0] print(&quot;callbacks: {}&quot;.format(callbacks)) print(&quot;model compile: {}&quot;.format(model.compile)) model.fit(q_, y, epochs=1, batch_size=batch_size, callbacks=callbacks, validation_split=0.2, verbose=0, shuffle=True, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_batch_size=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False) sku_ids_tensor = tf.constant(sku_ids_list, shape=[len(sku_ids_list), 1], dtype=tf.int64) print(&quot;sku_ids_tensor shape: {}&quot;.format(sku_ids_tensor.shape)) tree_tensor = tree.transform(dist_mat) print(&quot;tree_tensor shape: {}&quot;.format(tree_tensor.shape)) predictions = tf.constant(tf.sigmoid(model.predict(tree_tensor)), dtype=tf.float32) print(&quot;predictions shape: {}&quot;.format(predictions.shape)) recommendations = tf.concat([sku_ids_tensor, predictions], axis=1) print(&quot;recommendations shape: {}&quot;.format(recommendations.shape)) retrieval_user_sku_recommendations = [] for u in unique_sku_list: print(&quot;u: {}&quot;.format(u)) user_skus = sku_ids[sku_ids.isin([u])] print(&quot;user_skus: {}&quot;.format(user_skus)) user_sku_id = user_skus.index[0] print(&quot;user_sku_id: {}&quot;.format(user_sku_id)) user_sku_recommendations = recommendations[sku_ids.isin([u])] print(&quot;user_sku_recommendations: {}&quot;.format(user_sku_recommendations)) retrieval_user_sku_recommendations.append(user_sku_recommendations) retrieval_skus_df = pd.DataFrame(sku_ids_list, columns=['SKU_ID']) retrieval_skus_df['SKU_ID'] = retrieval_skus_df['SKU_ID'].astype(int) retrieval_skus_df.head() user_sku_recommendations_list = [] for sku in retrieval_skus_df['SKU_ID']: for u in unique_sku_list: print(&quot;sku: {}&quot;.format(sku)) print(&quot;u: {}&quot;.format(u)) if sku == u: user_skus = sku_ids[sku_ids.isin([sku])] user_sku_id = user_skus.index[0] user_sku_recommendations = 
recommendations[sku_ids.isin([sku])] user_sku_recommendations_list.append(user_sku_recommendations) tf.saved_model.save(model, ss_model_dir) </code></pre>
python|tensorflow
0
3,544
73,286,833
Year over Year difference and selecting maximum row in pandas
<p>I have a dataframe given as below:</p> <pre><code>ID YEAR NPS 500 2020 0 500 2021 0 500 2022 0 501 2020 32 501 2021 52 501 2022 99 503 2021 1 503 2022 4 504 2020 45 504 2021 55 504 2022 50 </code></pre> <p>I have to calculate the year over year difference as given below:</p> <pre><code>ID YEAR NPS nps_gain_yoy 500 2020 0 0 500 2021 0 0 500 2022 0 0 501 2020 32 0 501 2021 52 20 501 2022 99 47 503 2021 1 0 503 2022 4 3 504 2020 45 0 504 2021 55 10 504 2022 50 -5 </code></pre> <p>In the above output, for the starting year 2020 (or the first occurrence of an ID) nps_gain_yoy needs to be zero; then for 2021 nps_gain_yoy is the difference between the NPS of 2021 and 2020, i.e. 52-32 = 20, as shown in the output for ID 501 for year 2021, and so on. After this I need to pick the maximum difference, or maximum nps_gain_yoy, for each ID as given in the output below:</p> <pre><code>ID YEAR NPS NPS_gain_yoy 500 2022 0 0 501 2022 99 47 503 2022 4 3 504 2021 55 10 </code></pre> <p>Here 47 is the maximum NPS gain for ID 501 in year 2022; similarly 3 for ID 503 and 10 for ID 504.</p>
<p>If years are consecutive per <code>ID</code> first use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.diff.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.diff</code></a>:</p> <pre><code>df = df.sort_values(['ID','YEAR']) df['nps_gain_yoy'] = df.groupby('ID')['NPS'].diff().fillna(0) print (df) ID YEAR NPS nps_gain_yoy 0 500 2020 0 0.0 1 500 2021 0 0.0 2 500 2022 0 0.0 3 501 2020 32 0.0 4 501 2021 52 20.0 5 501 2022 99 47.0 6 503 2021 1 0.0 7 503 2022 4 3.0 8 504 2020 45 0.0 9 504 2021 55 10.0 10 504 2022 50 -5.0 </code></pre> <p>And then <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.idxmax.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.idxmax</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p> <pre><code>df1 = df.loc[df.iloc[::-1].groupby('ID')['nps_gain_yoy'].idxmax()] #alternative solution #df1 = df.sort_values(['ID','nps_gain_yoy']).drop_duplicates('ID', keep='last') print (df1) ID YEAR NPS nps_gain_yoy 2 500 2022 0 0.0 5 501 2022 99 47.0 7 503 2022 4 3.0 9 504 2021 55 10.0 </code></pre>
python|pandas|dataframe
0
3,545
73,418,067
How to search for name in dataframe
<p>For example I want to find all the people that have &quot;Abbott&quot; in their name</p> <pre><code>0 Abbing, Mr. Anthony 1 Abbott, Mr. Rossmore Edward 2 Abbott, Mrs. Stanton (Rosa Hunt) 3 Abelson, Mr. Samuel 4 Abelson, Mrs. Samuel (Hannah Wizosky) ... 886 de Mulder, Mr. Theodore 887 de Pelsmaeker, Mr. Alfons 888 del Carlo, Mr. Sebastiano 889 van Billiard, Mr. Austin Blyler 890 van Melkebeke, Mr. Philemon Name: Name, Length: 891, dtype: object </code></pre> <p>I tried <code>df.loc[name in df[&quot;Name&quot;]]</code> but it didn't work:</p> <pre><code>'False: boolean label can not be used without a boolean index' </code></pre>
<p>You can use <code>str.contains</code> with the column you are interested in searching</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; df = pd.DataFrame(data={'Name': ['Smith', 'Jones', 'Smithson']}) &gt;&gt;&gt; df Name 0 Smith 1 Jones 2 Smithson &gt;&gt;&gt; df[df['Name'].str.contains('Smith')] Name 0 Smith 2 Smithson </code></pre>
python|pandas
1
3,546
73,363,295
How can I split sub-strings to new rows of a dataframe?
<p>I have a dataframe which contains data that is entered without whitespace between words, i.e VolvoSaabVauxhall = Volvo Saab Vauxhall. I want to separate each word and insert new rows for each word that contain same data as row they originated from -</p> <pre><code>type year colour Mazda 1990 Cyan VolvoSaabVauxhall 2000 Red Lada 1980 Black </code></pre> <p>becomes</p> <pre><code>type year colour Mazda 1990 Cyan Volvo 2000 Red Saab 2000 Red Vauxhall 2000 Red Lada 1980 Black </code></pre> <p>what is the best way to achieve this without using iteration?</p>
<p>you say</p> <blockquote> <p>it's always a Capital letter preceded by a lower case letter</p> </blockquote> <p>so a regex for it is</p> <pre><code>[A-Z][^A-Z]* </code></pre> <p>capital letter ([A-Z]), followed by zero or more (*) not-capital letters (with ^).</p> <ul> <li>So we can <code>findall</code> such matches</li> <li>Assign the <code>type</code> column to be the list of those findings</li> <li>explode the type column</li> </ul> <p>So in code:</p> <pre class="lang-py prettyprint-override"><code>df.type = df.type.str.findall(r&quot;[A-Z][^A-Z]*&quot;) df = df.explode(&quot;type&quot;, ignore_index=True) </code></pre> <p>(if you want to be cool to write it in one line, lookup <code>assign</code>)</p> <p>sample run:</p> <pre class="lang-py prettyprint-override"><code>In [436]: df Out[436]: type year colour 0 Mazda 1990 Cyan 1 VolvoSaabVauxhall 2000 Red 2 Lada 1980 Black In [437]: df.type.str.findall(r&quot;[A-Z][^A-Z]*&quot;) Out[437]: 0 [Mazda] 1 [Volvo, Saab, Vauxhall] 2 [Lada] Name: type, dtype: object In [438]: df.type = df.type.str.findall(r&quot;[A-Z][^A-Z]*&quot;) In [439]: df Out[439]: type year colour 0 [Mazda] 1990 Cyan 1 [Volvo, Saab, Vauxhall] 2000 Red 2 [Lada] 1980 Black In [440]: df.explode(&quot;type&quot;, ignore_index=True) Out[440]: type year colour 0 Mazda 1990 Cyan 1 Volvo 2000 Red 2 Saab 2000 Red 3 Vauxhall 2000 Red 4 Lada 1980 Black </code></pre> <p>I passed <code>ignore_index=True</code> to <code>explode</code>; you can unpass and see what it does!</p>
python|pandas|data-wrangling
2
3,547
73,441,628
Map pandas column to numpy array
<p>I have a pandas dataframe which has a column containing indices to be mapped into a numpy array.</p> <p>df:</p> <pre><code>Date Value id 01-01-2011 99 -9999 01-02-2011 0 -9999 01-03-2011 5 4 01-01-2012 0 9 01-02-2012 1 0 01-03-2012 5 15 01-01-2013 11 -9999 01-02-2013 9 13 01-03-2013 1 22 </code></pre> <p>This is how I am mapping:</p> <pre><code>df['value_from_array'] = numpy_array[df.id.unique()] </code></pre> <p>Some of the ids are not valid indices into the array (for example -9999), so this throws an index error. Is there a way to avoid the index error and get 0 instead?</p>
<p>Looks like a merge to me.</p> <pre class="lang-py prettyprint-override"><code>numpy_array = np.random.rand(df.id.max() + 1) df2 = df.merge(pd.Series(numpy_array, name='value_from_array'), left_on=&quot;id&quot;, right_index=True, how=&quot;left&quot;) &gt;&gt;&gt; df2 Date Value id value_from_array 0 01-01-2011 99 -9999 NaN 1 01-02-2011 0 -9999 NaN 2 01-03-2011 5 4 0.575414 3 01-01-2012 0 9 0.608703 4 01-02-2012 1 0 0.720560 5 01-03-2012 5 15 0.062212 6 01-01-2013 11 -9999 NaN 7 01-02-2013 9 13 0.310980 8 01-03-2013 1 22 0.195131 </code></pre>
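<p>Since the question asks for <code>0</code> rather than an error for the missing ids, the <code>NaN</code> values left by the left merge can simply be filled afterwards — a small follow-up using the <code>df2</code> above:</p> <pre><code># ids such as -9999 have no match in the array, so the merge leaves NaN; replace those with 0
df2['value_from_array'] = df2['value_from_array'].fillna(0)
</code></pre>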
python|pandas
0
3,548
73,333,230
regex = False doesn't exist in str.split
<p>I am trying to split strings in my column using str.split: <code>str.split(&quot;. &quot;, regex = False)</code>. But when I do that, I get an error:</p> <p><code>split() got an unexpected keyword argument 'regex'</code></p> <p>Why does it happen? <code>regex</code> should exist in that method: <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.Series.str.split.html</a></p>
<p>The <code>split</code> you are calling is the built-in Python <code>str.split</code>, which has no <code>regex</code> parameter.</p> <p>The pandas version lives on the <code>.str</code> accessor of a Series, so you want to use: <code>my_series.str.split(&quot;. &quot;, regex = False)</code></p>
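<p>A minimal sketch of the difference, with a throwaway Series (note also that <code>Series.str.split</code> only gained the <code>regex</code> parameter in pandas 1.4, so an older pandas raises a similar error even through the <code>.str</code> accessor):</p> <pre><code>import pandas as pd

s = pd.Series(['a. b', 'c. d'])

# plain Python string method: has no regex parameter, raises TypeError
# 'a. b'.split('. ', regex=False)

# pandas string accessor: splits each element on the literal '. '
print(s.str.split('. ', regex=False))
</code></pre>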
python|python-3.x|pandas|string|split
1
3,549
73,195,820
Create 2d image from point cloud
<p>I am trying to project a point cloud into a 2d image as if it were a satellite image.</p> <p>I have six files I want to project and the point clouds are quite big. For the biggest one, I have <code>len(las.X) = 37_763_608</code>, <code>max(las.X) - min(las.X) = 122_124</code>, and <code>max(las.X) - min(las.X) = 273_683</code>, so sometimes when calculate the size I have an overflow error.</p> <p>My first try was this, but this was quite slow and took about 28 minutes to run.<br /> Here I added the loops with <code>k_x</code> and <code>k_y</code> because the image I got was mostly black, and I wanted to have colour everywhere. I tried looping around each point/pixel to make them 5 times bigger, but this is the slow part.</p> <p>see pictures</p> <p><a href="https://i.stack.imgur.com/TKzQP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TKzQP.png" alt="Colour version with the k padding" /></a> Colour version with the k padding <a href="https://i.stack.imgur.com/YNSJB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YNSJB.png" alt="Black and white version without the padding" /></a> Black and white version without the padding</p> <p>Ideally, I would like to have the colour from one point/pixel shift to the colour of their neighbours, so that there is a gradient between them, and no have any black leftover from me initialize the image as np.zeros</p> <pre class="lang-py prettyprint-override"><code>import laspy import numpy as np from PIL import Image import cv2 from tqdm import tqdm las = laspy.read(&quot;area1.las&quot;) def las_to_rgb(las): x, y = las.X, las.Y delta_x = max(x) - min(x) delta_y = max(y) - min(y) re_x = x - min(x) re_y = y - min(y) # las.red, green and blue are stored as 16bit r, g, b = (las.red/256).astype(np.uint8), (las.green/256).astype(np.uint8), (las.blue/256).astype(np.uint8) image = np.zeros((delta_y+1, delta_x+1, 3)) for i, val in enumerate(zip(tqdm(re_x), re_y)): for k_x in range(-5, 6): for k_y in range(-5, 6): if val[0] + k_x &lt; 0 or val[0] + k_x &gt;= delta_x + 1: k_x = 0 if val[1] + k_y &lt; 0 or val[1] + k_y &gt;= delta_y + 1: k_y = 0 image[val[1]+k_y, val[0]+k_x] = [b[i], g[i], r[i]] cv2.imwrite(&quot;test.png&quot;, image) cv2.waitKey(0) </code></pre> <p>I found how to do it faster in numpy, but it can only do one colour at a time, so I decided to loop for multiple color but I think I am doing something wrong when I change the type to <code>np.unit8</code> as python takes up to 50GB of RAM.</p> <p>With numpy:</p> <p>One colour:</p> <pre class="lang-py prettyprint-override"><code>def nu_pro(las): x, y = las.X, las.Y delta_x = max(x) - min(x) delta_y = max(y) - min(y) xs = x - min(x) ys = y - min(y) img_size = (delta_y+1, delta_x+1) # +1 for ravel_multi_index bgr = np.array([(las.blue/256).astype(np.uint8), (las.green/256).astype(np.uint8), (las.red/256).astype(np.uint8)]) coords = np.stack((ys, xs)) abs_coords = np.ravel_multi_index(coords, img_size) image = np.bincount(abs_coords, weights=color, minlength=img_size[1]*img_size[0]) image = image.reshape(img_size)) cv2.imwrite(&quot;test.png&quot;, image) cv2.waitKey(0) </code></pre> <p>For rgb</p> <pre class="lang-py prettyprint-override"><code>def nu_pro_rgb(las): x, y = las.X, las.Y delta_x = max(x) - min(x) delta_y = max(y) - min(y) xs = x - min(x) ys = y - min(y) img_size = (delta_y+1, delta_x+1) # +1 for ravel_multi_index rgb = np.array([(las.red/256).astype(np.uint8), (las.green/256).astype(np.uint8), (las.blue/256).astype(np.uint8)]) image = [] coords = 
np.stack((ys, xs)) abs_coords = np.ravel_multi_index(coords, img_size) for i, color in enumerate(tqdm(rgb)): img = np.bincount(abs_coords, weights=color, minlength=img_size[1]*img_size[0]) image.append(img.reshape(img_size)) image = np.uint8(np.array(image)) # I am probably messing up this transpose but I'll figure it out eventually im = Image.fromarray(image.T, &quot;RGB&quot;) im.save(&quot;pil.png&quot;) </code></pre> <p>Any indication would be welcome :)</p> <p><strong>EDIT</strong> for clarification about the colours.</p> <ul> <li><p>When there is overlapping, it should be the point with the highest <code>z</code> coordinates that should be displayed.</p> </li> <li><p>For the colouring, in the picture below, the points between <code>A</code> and <code>B</code> should be a colour gradient from <code>A</code> to <code>B</code>.<br /> If it is like the yellow point, then an average of the neighbouring colour (without the black if present)<br /> I hope I am making some sense.</p> </li> </ul> <p><a href="https://i.stack.imgur.com/vRYql.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vRYql.png" alt="Colouring example" /></a></p>
<p>To interpolate, there are lots of libraries.</p> <p>This uses cubic interpolation, but it only works inside the convex hull, so the points outside the convex hull are taken from the nearest neighbor.</p> <p>If you are interpolating GIS data, you may look on Krigging interpolation, which should interpolate outside the convex hull.</p> <p>This code does not check that a point with lower Z is under one with higher Z. You have to delete those points to avoid having them interpolated.</p> <pre><code>from scipy.interpolate import griddata import numpy as np import matplotlib.pyplot as plt import cv2 # create data height, width = 256, 256 # generate a random sample of 1000 (x,y) coordinates and colors x, y, z = np.random.randint(0, 256, size=(3, 1000)) color = np.random.randint(0, 256, size=(1000, 3)) # sort x,y,z by z in ascending order so the highest z is plotted over the lowest z zSort = z.argsort() x, y, z, color = x[zSort], y[zSort], z[zSort], color[zSort] # interpolation # generate a grid where the interpolation will be calculated X, Y = np.meshgrid(np.arange(width), np.arange(height)) R = griddata(np.vstack((x, y)).T, color[:, 0], (X, Y), method='cubic') Rlinear= griddata(np.vstack((x, y)).T, color[:, 0], (X, Y), method='nearest') G = griddata(np.vstack((x, y)).T, color[:, 1], (X, Y), method='cubic') Glinear= griddata(np.vstack((x, y)).T, color[:, 1], (X, Y), method='nearest') B = griddata(np.vstack((x, y)).T, color[:, 2], (X, Y), method='cubic') Blinear= griddata(np.vstack((x, y)).T, color[:, 2], (X, Y), method='nearest') #Fill empty values with nearest neighbor R[np.isnan(R)] = Rlinear[np.isnan(R)] G[np.isnan(G)] = Glinear[np.isnan(G)] B[np.isnan(B)] = Blinear[np.isnan(B)] R = R/np.max(R) G = G/np.max(G) B = B/np.max(B) interpolated = cv2.merge((R, G, B)) plt.imshow(interpolated) plt.scatter(x, y, c=color/255, marker=&quot;s&quot;,s=1) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/mPaKT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mPaKT.png" alt="enter image description here" /></a></p>
python|numpy|point-clouds
1
3,550
35,034,389
Applying a Fast Coordinate Transformation in Python
<p>I have a simple 2x2 transformation matrix, <strong>s</strong>, which encodes some linear transformation of coordinates such that <strong>X' = sX</strong>.</p> <p>I have generated a set of uniformly distributed coordinates on a grid using the <strong>np.meshgrid()</strong> function and at the moment I traverse each coordinate and apply the transformation coordinate by coordinate. Unfortunately, this is very slow for large arrays. Are there any fast ways of doing this? Thanks!</p> <pre><code>import numpy as np image_dimension = 1024 image_index = np.arange(0,image_dimension,1) xx, yy = np.meshgrid(image_index,image_index) # Pre-calculated Transformation Matrix. s = np.array([[ -2.45963439e+04, -2.54997726e-01], [ 3.55680731e-02, -2.48005486e+04]]) xx_f = xx.flatten() yy_f = yy.flatten() for x_t in range(0, image_dimension*image_dimension): # Get the current (x,y) coordinate. x_y_in = np.matrix([[xx_f[x_t]],[yy_f[x_t]]]) # Perform the transformation with x. optout = s * x_y_in # Store the new coordinate. xx_f[x_t] = np.array(optout)[0][0] yy_f[x_t] = np.array(optout)[1][0] # Reshape Output xx_t = xx_f.reshape((image_dimension, image_dimension)) yy_t = yy_f.reshape((image_dimension, image_dimension)) </code></pre>
<p>You can use the numpy <code>dot</code> function to get the dot product of your matices as:</p> <pre><code>xx_tn,yy_tn = np.dot(s,[xx.flatten(),yy.flatten()]) xx_t = xx_tn.reshape((image_dimension, image_dimension)) yy_t = yy_tn.reshape((image_dimension, image_dimension)) </code></pre> <p>Which is much faster</p>
python|numpy|grid|transform|coordinate
4
3,551
31,144,356
Numpy indexing 3-dimensional array into 2-dimensional array
<p>I have a three-dimensional array of the following structure:</p> <pre><code>x = np.array([[[1,2], [3,4]], [[5,6], [7,8]]], dtype=np.double) </code></pre> <p>Additionally, I have an index array</p> <pre><code>idx = np.array([[0,1],[1,3]], dtype=np.int) </code></pre> <p>Each row of <code>idx</code> defines the row/column indices for the placement of each sub-array along the <code>0</code> axis in <code>x</code> into a two-dimensional array <code>K</code> that is initialized as</p> <pre><code>K = np.zeros((4,4), dtype=np.double) </code></pre> <p>I would like to use fancy indexing/broadcasting to performing the indexing without a <code>for</code> loop. I currently do it this way:</p> <pre><code>for i, id in enumerate(idx): idx_grid = np.ix_(id,id) K[idx_grid] += x[i] </code></pre> <p>Such that the result is:</p> <pre><code>&gt;&gt;&gt; K = array([[ 1., 2., 0., 0.], [ 3., 9., 0., 6.], [ 0., 0., 0., 0.], [ 0., 7., 0., 8.]]) </code></pre> <p>Is this possible to do with fancy indexing?</p>
<p>Here's one alternative way. With <code>x</code>, <code>idx</code> and <code>K</code> defined as in your question:</p> <pre><code>indices = (idx[:,None] + K.shape[1]*idx).ravel('f') np.add.at(K.ravel(), indices, x.ravel()) </code></pre> <p>Then we have:</p> <pre><code>&gt;&gt;&gt; K array([[ 1., 2., 0., 0.], [ 3., 9., 0., 6.], [ 0., 0., 0., 0.], [ 0., 7., 0., 8.]]) </code></pre> <hr> <p>To perform unbuffered inplace addition on NumPy arrays you need to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.at.html" rel="nofollow"><code>np.add.at</code></a> (to avoid using <code>+=</code> in a <code>for</code> loop). </p> <p>However, it's slightly probelmatic to pass a list of 2D index arrays, and corresponding arrays to add at these indices, to <code>np.add.at</code>. This is because the function interprets these lists of arrays as higher-dimensional arrays and IndexErrors are raised.</p> <p>It's much simpler to pass in 1D arrays. You can temporarily ravel <code>K</code> and <code>x</code> to give you a 1D array of zeros and a 1D array of values to add to those zeros. The only fiddly part is constructing a corresponding 1D array of indices from <code>idx</code> at which to add the values. This can be done via broadcasting with arithmetical operators and then ravelling, as shown above.</p>
python|arrays|numpy|multidimensional-array
3
3,552
31,212,141
How to resample a df with datetime index to exactly n equally sized periods?
<p>I've got a large dataframe with a datetime index and need to resample data to exactly 10 equally sized periods.</p> <p>So far, I've tried finding the first and last dates to determine the total number of days in the data, divide that by 10 to determine the size of each period, then resample using that number of days. eg:</p> <pre><code>first = df.reset_index().timesubmit.min() last = df.reset_index().timesubmit.max() periodsize = str((last-first).days/10) + 'D' df.resample(periodsize,how='sum') </code></pre> <p>This doesn't guarantee exactly 10 periods in the df after resampling since the periodsize is a rounded down int. Using a float doesn't work in the resampling. Seems that either there's something simple that I'm missing here, or I'm attacking the problem all wrong.</p>
<p>Here is one way to ensure equal-size sub-periods by using <code>np.linspace()</code> on <code>pd.Timedelta</code> and then classifying each obs into different bins using <code>pd.cut</code>.</p> <pre><code>import pandas as pd import numpy as np # generate artificial data np.random.seed(0) df = pd.DataFrame(np.random.randn(100, 2), columns=['A', 'B'], index=pd.date_range('2015-01-01 00:00:00', periods=100, freq='8H')) Out[87]: A B 2015-01-01 00:00:00 1.7641 0.4002 2015-01-01 08:00:00 0.9787 2.2409 2015-01-01 16:00:00 1.8676 -0.9773 2015-01-02 00:00:00 0.9501 -0.1514 2015-01-02 08:00:00 -0.1032 0.4106 2015-01-02 16:00:00 0.1440 1.4543 2015-01-03 00:00:00 0.7610 0.1217 2015-01-03 08:00:00 0.4439 0.3337 2015-01-03 16:00:00 1.4941 -0.2052 2015-01-04 00:00:00 0.3131 -0.8541 2015-01-04 08:00:00 -2.5530 0.6536 2015-01-04 16:00:00 0.8644 -0.7422 2015-01-05 00:00:00 2.2698 -1.4544 2015-01-05 08:00:00 0.0458 -0.1872 2015-01-05 16:00:00 1.5328 1.4694 ... ... ... 2015-01-29 08:00:00 0.9209 0.3187 2015-01-29 16:00:00 0.8568 -0.6510 2015-01-30 00:00:00 -1.0342 0.6816 2015-01-30 08:00:00 -0.8034 -0.6895 2015-01-30 16:00:00 -0.4555 0.0175 2015-01-31 00:00:00 -0.3540 -1.3750 2015-01-31 08:00:00 -0.6436 -2.2234 2015-01-31 16:00:00 0.6252 -1.6021 2015-02-01 00:00:00 -1.1044 0.0522 2015-02-01 08:00:00 -0.7396 1.5430 2015-02-01 16:00:00 -1.2929 0.2671 2015-02-02 00:00:00 -0.0393 -1.1681 2015-02-02 08:00:00 0.5233 -0.1715 2015-02-02 16:00:00 0.7718 0.8235 2015-02-03 00:00:00 2.1632 1.3365 [100 rows x 2 columns] # cutoff points, 10 equal-size group requires 11 points # measured by timedelta 1 hour time_delta_in_hours = (df.index - df.index[0]) / pd.Timedelta('1h') n = 10 ts_cutoff = np.linspace(0, time_delta_in_hours[-1], n+1) # labels, time index time_index = df.index[0] + np.array([pd.Timedelta(str(time_delta)+'h') for time_delta in ts_cutoff]) # create a categorical reference variables df['start_time_index'] = pd.cut(time_delta_in_hours, bins=10, labels=time_index[:-1]) # for clarity, reassign labels using end-period index df['end_time_index'] = pd.cut(time_delta_in_hours, bins=10, labels=time_index[1:]) Out[89]: A B start_time_index end_time_index 2015-01-01 00:00:00 1.7641 0.4002 2015-01-01 00:00:00 2015-01-04 07:12:00 2015-01-01 08:00:00 0.9787 2.2409 2015-01-01 00:00:00 2015-01-04 07:12:00 2015-01-01 16:00:00 1.8676 -0.9773 2015-01-01 00:00:00 2015-01-04 07:12:00 2015-01-02 00:00:00 0.9501 -0.1514 2015-01-01 00:00:00 2015-01-04 07:12:00 2015-01-02 08:00:00 -0.1032 0.4106 2015-01-01 00:00:00 2015-01-04 07:12:00 2015-01-02 16:00:00 0.1440 1.4543 2015-01-01 00:00:00 2015-01-04 07:12:00 2015-01-03 00:00:00 0.7610 0.1217 2015-01-01 00:00:00 2015-01-04 07:12:00 2015-01-03 08:00:00 0.4439 0.3337 2015-01-01 00:00:00 2015-01-04 07:12:00 2015-01-03 16:00:00 1.4941 -0.2052 2015-01-01 00:00:00 2015-01-04 07:12:00 2015-01-04 00:00:00 0.3131 -0.8541 2015-01-01 00:00:00 2015-01-04 07:12:00 2015-01-04 08:00:00 -2.5530 0.6536 2015-01-04 07:12:00 2015-01-07 14:24:00 2015-01-04 16:00:00 0.8644 -0.7422 2015-01-04 07:12:00 2015-01-07 14:24:00 2015-01-05 00:00:00 2.2698 -1.4544 2015-01-04 07:12:00 2015-01-07 14:24:00 2015-01-05 08:00:00 0.0458 -0.1872 2015-01-04 07:12:00 2015-01-07 14:24:00 2015-01-05 16:00:00 1.5328 1.4694 2015-01-04 07:12:00 2015-01-07 14:24:00 ... ... ... ... ... 
2015-01-29 08:00:00 0.9209 0.3187 2015-01-27 09:36:00 2015-01-30 16:48:00 2015-01-29 16:00:00 0.8568 -0.6510 2015-01-27 09:36:00 2015-01-30 16:48:00 2015-01-30 00:00:00 -1.0342 0.6816 2015-01-27 09:36:00 2015-01-30 16:48:00 2015-01-30 08:00:00 -0.8034 -0.6895 2015-01-27 09:36:00 2015-01-30 16:48:00 2015-01-30 16:00:00 -0.4555 0.0175 2015-01-27 09:36:00 2015-01-30 16:48:00 2015-01-31 00:00:00 -0.3540 -1.3750 2015-01-30 16:48:00 2015-02-03 00:00:00 2015-01-31 08:00:00 -0.6436 -2.2234 2015-01-30 16:48:00 2015-02-03 00:00:00 2015-01-31 16:00:00 0.6252 -1.6021 2015-01-30 16:48:00 2015-02-03 00:00:00 2015-02-01 00:00:00 -1.1044 0.0522 2015-01-30 16:48:00 2015-02-03 00:00:00 2015-02-01 08:00:00 -0.7396 1.5430 2015-01-30 16:48:00 2015-02-03 00:00:00 2015-02-01 16:00:00 -1.2929 0.2671 2015-01-30 16:48:00 2015-02-03 00:00:00 2015-02-02 00:00:00 -0.0393 -1.1681 2015-01-30 16:48:00 2015-02-03 00:00:00 2015-02-02 08:00:00 0.5233 -0.1715 2015-01-30 16:48:00 2015-02-03 00:00:00 2015-02-02 16:00:00 0.7718 0.8235 2015-01-30 16:48:00 2015-02-03 00:00:00 2015-02-03 00:00:00 2.1632 1.3365 2015-01-30 16:48:00 2015-02-03 00:00:00 [100 rows x 4 columns] df.groupby('start_time_index').agg('sum') Out[90]: A B start_time_index 2015-01-01 00:00:00 8.6133 2.7734 2015-01-04 07:12:00 1.9220 -0.8069 2015-01-07 14:24:00 -8.1334 0.2318 2015-01-10 21:36:00 -2.7572 -4.2862 2015-01-14 04:48:00 1.1957 7.2285 2015-01-17 12:00:00 3.2485 6.6841 2015-01-20 19:12:00 -0.8903 2.2802 2015-01-24 02:24:00 -2.1025 1.3800 2015-01-27 09:36:00 -1.1017 1.3108 2015-01-30 16:48:00 -0.0902 -2.5178 </code></pre> <p>Another potential shorter way to do this is to specify your sampling freq as the time delta. But the problem, as shown in below, is that it delivers 11 sub-samples instead of 10. I believe the reason is that the <code>resample</code> implements a <code>left-inclusive/right-exclusive (or left-exclusive/right-inclusive)</code> sub-sampling scheme so that the very last obs at '2015-02-03 00:00:00' is considered as a separate group. If we use <code>pd.cut</code> to do it ourself, we can specify <code>include_lowest=True</code> so that it gives us exactly 10 sub-samples rather than 11.</p> <pre><code>n = 10 time_delta_str = str((df.index[-1] - df.index[0]) / (pd.Timedelta('1s') * n)) + 's' df.resample(pd.Timedelta(time_delta_str), how='sum') Out[114]: A B 2015-01-01 00:00:00 8.6133 2.7734 2015-01-04 07:12:00 1.9220 -0.8069 2015-01-07 14:24:00 -8.1334 0.2318 2015-01-10 21:36:00 -2.7572 -4.2862 2015-01-14 04:48:00 1.1957 7.2285 2015-01-17 12:00:00 3.2485 6.6841 2015-01-20 19:12:00 -0.8903 2.2802 2015-01-24 02:24:00 -2.1025 1.3800 2015-01-27 09:36:00 -1.1017 1.3108 2015-01-30 16:48:00 -2.2534 -3.8543 2015-02-03 00:00:00 2.1632 1.3365 </code></pre>
python|pandas
1
3,553
31,099,781
Apply curve_fit within a loop
<p>As indicated in the title, I have a problem with fitting a function to data within a loop. This is a groupby object out of a dataframe. This groupby object has the following structure:</p> <pre><code> f [MHz] T [K] Rs 0 400 1.75 13.472493 1 400 2.00 14.054298 2 400 2.25 14.900821 3 400 2.50 16.453007 4 400 2.75 18.050460 13 800 1.75 36.008499 14 800 2.00 37.924344 15 800 2.25 41.246962 16 800 2.50 45.780308 17 800 2.75 51.904333 26 1200 1.75 53.809458 27 1200 2.00 61.427391 28 1200 2.25 67.438682 29 1200 2.50 75.302240 30 1200 2.75 88.015202 </code></pre> <p>Now, I would like to apply a fit to each frequency group (400, 800 and 1200) and do this efficiently within a loop. The first attempt is:</p> <pre><code>i = 0 for freq, grp1 in RT1.groupby(['f [MHz]']): T[i] = grp1['T [K]'].values[condition] Rs[i]= grp1['Rs'].values[condition] popt[i], pcov[i] = curve_fit(RsT, T[i], Rs[i], p0) figure = plt.figure() grp1.plot(x = 'T [K]',y = 'Rs', color = colors[i], marker = markers[i] , ls = 'None', title = 'R(T) {f} MHz with Fit'.format(f = freq)) plt.plot(T[i], RsT(T[i], *popt[i]), label = 'fit') i += 1 </code></pre> <p>i runs from 0 to 2 to create T1, Rs1; T2, Rs2 and so on. The condition constrains the T values to a certain range and this seems to work properly. However, I could not manage to address the curve_fit routine properly for all three frequency groups - it raises the following error:</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-14-0b3b4adc65e7&gt; in &lt;module&gt;() 24 i = 0 25 for freq, grp1 in RT1.groupby(['f [MHz]']): ---&gt; 26 T[i] = grp1['T [K]'].values[condition] 27 Rs[i]= grp1['Rs'].values[condition] 28 popt[i], pcov[i] = curve_fit(RsT, T[i], Rs[i], p0) ValueError: setting an array element with a sequence. </code></pre> <p>The problem is that I fail to assign three different arrays for T and Rs, one for each group - I would like to have T1 and Rs1 with the values for 400 MHz, T2 and Rs2 with the values of 800 MHz and so on. Also, popt and pcov should be calculated three times separately (popt1, pcov1; popt2, pcov2; ...) corresponding to a separate fit on each data group. I hope that there is someone who could explain whether it is possible to apply the curve_fit routine within a loop and if so - how. Many thanks!</p>
<p>It sounds like <code>T</code> and <code>RsT</code> are NumPy arrays with a <em>non-object dtype</em>. Trying to assign an array of values to one cell of <code>T</code> raises the ValueError you are seeing:</p> <pre><code>In [117]: T = np.zeros(3) In [118]: T[0] = np.arange(10) ValueError: setting an array element with a sequence. </code></pre> <p>You could use an array of object dtype</p> <pre><code>In [119]: T = np.zeros(3, dtype='O') In [120]: T[0] = np.arange(10) In [121]: T Out[121]: array([array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), 0, 0], dtype=object) </code></pre> <p>but object arrays are not particularly fast; you would do just as well making <code>T</code> and <code>RsT</code> lists.</p>
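<p>For completeness, a minimal sketch of the list/dict approach, reusing the names from the question (<code>RT1</code>, <code>RsT</code> and <code>p0</code>; the temperature-range condition is left out for brevity). Each group keeps its own arrays and its own fit result, so nothing is forced into a single float array:</p> <pre><code>from scipy.optimize import curve_fit

T, Rs, popt, pcov = {}, {}, {}, {}
for freq, grp1 in RT1.groupby('f [MHz]'):
    # store each group's data and fit under its frequency key
    T[freq] = grp1['T [K]'].values
    Rs[freq] = grp1['Rs'].values
    popt[freq], pcov[freq] = curve_fit(RsT, T[freq], Rs[freq], p0)
</code></pre>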
python|pandas
0
3,554
67,405,300
None of ['date'] are in the columns
<p>I'm trying to plot some graphs with numpy and plotly. I'm getting the data from a csv file containing all the trends that I need. I want to plot and analyze the values contained in columns 1-80 along the timeframe (&quot;Date&quot;) Here's the script I've written to do that.</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns import plotly.express as px ztec = pd.read_csv(r'C:\Users\tashkpa\Desktop\Работа\ТЭЦ\Сырые данные ЗТЭЦ\данные для python.csv') ztec=ztec.set_index('Date') print(ztec) fig=plt.figure() ax=fig.add_subplot() sns.set_style(&quot;ticks&quot;, {'grid.linestyle': '--'}) sns.lineplot(x=ztec['Date'], y=ztec['01']) plt.grid(True, which=&quot;both&quot;, ls=&quot;--&quot;, c='gray') plt.plot </code></pre> <p>A sample of my array looks like this:</p> <pre class="lang-none prettyprint-override"><code>'Date';'01';'02';'03';'04';'05';'06';'07';'08';'09';'10';'11';'12';'13';'14';'15';'16';'17';'18';'19';'20';'21';'22';'23';'24';'25';'26';'27';'28';'29';'30';'31';'32';'33';'34';'35';'36';'37';'38';'39';'40';'41';'42';'43';'44';'45';'46';'47';'48';'49';'50';'51';'52';'53';'54';'55';'56';'57';'58';'59';'60';'61';'62';'63';'64';'65';'66';'67';'68';'69';'70';'71';'72';'73';'74';'75';'76';'77';'78';'79';'80';;;;;;;;;;;;; 01/01/2021 02:00:00;0.791;7.019;199.265;183.586;506.253;5.641;41.729;212.521;0.528;0.115;32.356;35.264;0.788;208.541;0.356;0.348;0.087;3.887;3.483;8.784;31.930;13.237;24.296;13.552;23.546;36.840;36.978;218.913;35.373;0.860;0.147;0.319;115.694;0.785;7.064;201.466;183.316;509.751;5.721;42.600;210.235;0.563;12.276;60.000;135.939;0.784;206.678;0.347;0.334;0.066;3.959;4.138;4.821;8.027;13.494;18.367;13.702;18.536;30.446;31.199;243.470;1.908;0.901;2.082;0.216;106.911;3.681;-1.484;205.884;284.659;146.202;3.681;-9.315;56.252;54.483;40195.012;39724.953;102.121;128.471;102.573;;;;;;;;;;;;; </code></pre> <p>Here's the exception that Python fires.</p> <pre class="lang-none prettyprint-override"><code>C:\Users\tashkpa\Anaconda3\python.exe C:/Users/tashkpa/PycharmProjects/pythonProject/zteccsv.py Traceback (most recent call last): File &quot;C:/Users/tashkpa/PycharmProjects/pythonProject/zteccsv.py&quot;, line 7, in &lt;module&gt; ztec=ztec.set_index('Date') File &quot;C:\Users\tashkpa\AppData\Roaming\Python\Python38\site-packages\pandas\core\frame.py&quot;, line 4724, in set_index raise KeyError(f&quot;None of {missing} are in the columns&quot;) KeyError: &quot;None of ['Date'] are in the columns&quot; </code></pre> <p>How could I bypass this error?</p>
<ul> <li>By default <code>pd.read_csv</code> use a default value for <code>sep=','</code></li> <li>In this case use <code>sep=';'</code> when using <code>pd.read_csv</code>, since the value separator is <code>;</code></li> <li>Since you have set <code>Date</code> column as index using <code>ztec=ztec.set_index('Date')</code>, you have to use the <code>ztec.index</code> as <code>x</code> when plotting <code>sns.lineplot(x=ztec.index, y=ztec['01'])</code></li> </ul> <pre><code>import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns import plotly.express as px ztec = pd.read_csv(r'C:\Users\tashkpa\Desktop\Работа\ТЭЦ\Сырые данные ЗТЭЦ\данные для python.csv', sep=';') ztec=ztec.set_index('Date') print(ztec) fig=plt.figure() ax=fig.add_subplot() sns.set_style(&quot;ticks&quot;, {'grid.linestyle': '--'}) sns.lineplot(x=ztec.index, y=ztec['01']) plt.grid(True, which=&quot;both&quot;, ls=&quot;--&quot;, c='gray') plt.plot() </code></pre>
pandas|numpy
1
3,555
67,357,062
pandas extract array to columns
<p>How can the array (the length of the array is constant for all elements in the series) be extracted into columns efficiently?</p> <pre><code>import pandas as pd d = pd.DataFrame({'foo':[1,2,3], 'bar':[[1,1,1], [2,2,2], [3,3,3]]}) d </code></pre> <p><a href="https://i.stack.imgur.com/DOf7F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DOf7F.png" alt="enter image description here" /></a></p> <p>I.e. extract the array of <code>[1,1,1]</code> into a <code>bar_0, bar_1, bar_2</code> column?</p> <p>Is there a better way than manually iterating over the indices in the array and calling <code>pandas.apply</code>?</p>
<p>How about this:</p> <pre><code>&gt;&gt;&gt; d.join(pd.DataFrame(d['bar'].to_list(), columns=['bar_1', 'bar_2', 'bar_3'])) foo bar bar_1 bar_2 bar_3 0 1 [1, 1, 1] 1 1 1 1 2 [2, 2, 2] 2 2 2 2 3 [3, 3, 3] 3 3 3 </code></pre> <p>You convert the <code>bar</code> column to list (nested list), convert it to a dataframe, and join the new dataframe with your initial dataframe.</p>
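<p>A small variation on the above, in case the <code>bar_0, bar_1, bar_2</code> naming from the question is wanted without typing the column names by hand — <code>add_prefix</code> names them automatically:</p> <pre><code>&gt;&gt;&gt; d.join(pd.DataFrame(d['bar'].to_list(), index=d.index).add_prefix('bar_'))
   foo        bar  bar_0  bar_1  bar_2
0    1  [1, 1, 1]      1      1      1
1    2  [2, 2, 2]      2      2      2
2    3  [3, 3, 3]      3      3      3
</code></pre>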
python|arrays|pandas|numpy|apply
5
3,556
67,338,354
Why does installing Tensorflow with conda ruin matplotlib?
<p>Starting from a new anaconda environment, it looks like this (summarized, not a full paste):</p> <pre><code>[myenv]$ python Python 3.8.8 ... &gt;&gt; import matplotlib.pyplot as plt &gt;&gt; [myenv]$ conda install tensorflow-gpu ... [myenv]$ python &gt;&gt; import tensorflow &gt;&gt; tensorflow.__version__ '2.4.1' &gt;&gt; import matplotlib.pyplot as plt ModuleNotFoundError: No module named 'matplotlib' &gt;&gt; [myenv]$ conda install matplotlib ... [myenv]% python &gt;&gt; import tensorflow &gt;&gt; import matplotlib.pyplot as plt &lt;tons of traceback&gt; RunTimeError: the sip module implements API v12.0 to v12.4 but the PyQt5.core module requires API v12.5 </code></pre> <p>How can I have tensorflow and matplotlib at the same time?</p>
<p>Do you have the complete traceback? Based on the error you posted, it seems like it's a dependency issue with the sip module. I'd recommend using your own virtualenv instead of Anaconda's.</p> <p>To create your own virtualenv: open a terminal, go to the directory where you want your project to live, and run:</p> <pre><code>pip install virtualenv
virtualenv venv
source venv/bin/activate
pip install tensorflow
pip install matplotlib
pip install &lt;whatever library you want&gt;
</code></pre> <p>(<code>source venv/bin/activate</code> is for Linux or Mac.)</p>
python|tensorflow|matplotlib|anaconda|conda
0
3,557
67,350,036
Transform DataFrame of Dataframe into Single DataFrame Selecting only Some Columns Python
<p>I have the next Json File:</p> <pre><code>exchangeInfo = { &quot;timezone&quot;: &quot;UTC&quot;, &quot;serverTime&quot;: 1565246363776, &quot;rateLimits&quot;: [], &quot;exchangeFilters&quot;: [], &quot;symbols&quot;: [ { &quot;symbol&quot;: &quot;ETHBTC&quot;, &quot;status&quot;: &quot;TRADING&quot;, &quot;baseAsset&quot;: &quot;ETH&quot;, &quot;baseAssetPrecision&quot;: 8, &quot;quoteAsset&quot;: &quot;BTC&quot;, &quot;quotePrecision&quot;: 8, &quot;quoteAssetPrecision&quot;: 8, &quot;baseCommissionPrecision&quot;: 8, &quot;quoteCommissionPrecision&quot;: 8, &quot;filters&quot;: [ { &quot;filterType&quot;: &quot;PRICE_FILTER&quot;, &quot;minPrice&quot;: &quot;0.00000100&quot;, &quot;maxPrice&quot;: &quot;100000.00000000&quot;, &quot;tickSize&quot;: &quot;0.00000100&quot;, }, { &quot;filterType&quot;: &quot;PERCENT_PRICE&quot;, &quot;multiplierUp&quot;: &quot;1.3000&quot;, &quot;multiplierDown&quot;: &quot;0.7000&quot;, &quot;avgPriceMins&quot;: 5, }, { &quot;filterType&quot;: &quot;LOT_SIZE&quot;, &quot;minQty&quot;: &quot;0.00100000&quot;, &quot;maxQty&quot;: &quot;100000.00000000&quot;, &quot;stepSize&quot;: &quot;0.00100000&quot;, }, ], }, ], } </code></pre> <p>And applying the next code, I transform the column &quot;filters&quot; into columns.</p> <pre><code>df = pd.json_normalize(exchangeInfo[&quot;symbols&quot;]) df = pd.concat( [ df, df.pop(&quot;filters&quot;) .apply(lambda x: dict(i for d in x for i in d.items())) .apply(pd.Series), ], axis=1, ).drop(columns=&quot;filterType&quot;) print(df) </code></pre> <p>Prints:</p> <pre><code>symbol status baseAsset baseAssetPrecision quoteAsset quotePrecision quoteAssetPrecision baseCommissionPrecision quoteCommissionPrecision minPrice maxPrice tickSize multiplierUp multiplierDown avgPriceMins minQty maxQty stepSize 0 ETHBTC TRADING ETH 8 BTC 8 8 8 8 0.00000100 100000.00000000 0.00000100 1.3000 0.7000 5 0.00100000 100000.00000000 0.00100000 </code></pre> <p>But, I would like to select only 2 of those filters, by filterType name, I would like to have &quot;PRICE_FILTER&quot; and &quot;LOT_SIZE&quot;</p>
<p>To get columns from only &quot;PRICE_FILTER&quot; and &quot;LOT_SIZE&quot; filters, try:</p> <pre><code>df = pd.json_normalize(exchangeInfo[&quot;symbols&quot;]) df = pd.concat( [ df, df.pop(&quot;filters&quot;) .apply( lambda x: dict( i for d in x for i in d.items() if d[&quot;filterType&quot;] in {&quot;PRICE_FILTER&quot;, &quot;LOT_SIZE&quot;} ) ) .apply(pd.Series), ], axis=1, ).drop(columns=&quot;filterType&quot;) print(df) </code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code> symbol status baseAsset baseAssetPrecision quoteAsset quotePrecision quoteAssetPrecision baseCommissionPrecision quoteCommissionPrecision minPrice maxPrice tickSize minQty maxQty stepSize 0 ETHBTC TRADING ETH 8 BTC 8 8 8 8 0.00000100 100000.00000000 0.00000100 0.00100000 100000.00000000 0.00100000 </code></pre>
python|pandas|dataframe|binance
2
3,558
67,341,551
Highlight data based on a condition using Python Pandas and write it back to same .xls file
<p>I have a sample dataset, I want to</p> <ol> <li>Highlight the cell and add a comment column with comment &quot;Null/Blank in 'Item' column&quot;</li> <li>Highlight the cell and a add a comment column with comment &quot;presence of '_' in 'Store' Column&quot;</li> <li>Highlight the cell and add a comment column with comment &quot;Keyword 'Pen' should not tagged to Fruits category&quot;</li> </ol> <p>Input Dataset</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">Store</th> <th style="text-align: center;">Item</th> <th style="text-align: right;">Category</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Store A</td> <td style="text-align: center;"></td> <td style="text-align: right;">Fruits</td> </tr> <tr> <td style="text-align: left;">Store A</td> <td style="text-align: center;">Apple_</td> <td style="text-align: right;">Fruits</td> </tr> <tr> <td style="text-align: left;">Store A</td> <td style="text-align: center;">Orange</td> <td style="text-align: right;">Fruits</td> </tr> <tr> <td style="text-align: left;">Store A</td> <td style="text-align: center;">Banana</td> <td style="text-align: right;">Fruits</td> </tr> <tr> <td style="text-align: left;">Store_B</td> <td style="text-align: center;">Books</td> <td style="text-align: right;">Stationary</td> </tr> <tr> <td style="text-align: left;">Store B</td> <td style="text-align: center;">Pen</td> <td style="text-align: right;">Fruits</td> </tr> <tr> <td style="text-align: left;">Store B</td> <td style="text-align: center;">Pencil</td> <td style="text-align: right;">Fruits</td> </tr> <tr> <td style="text-align: left;">Store B</td> <td style="text-align: center;">Glue</td> <td style="text-align: right;">Stationary</td> </tr> <tr> <td style="text-align: left;">Store B</td> <td style="text-align: center;">Eraser</td> <td style="text-align: right;">Stationary</td> </tr> <tr> <td style="text-align: left;">Store C</td> <td style="text-align: center;">Frozen</td> <td style="text-align: right;">Movies</td> </tr> <tr> <td style="text-align: left;">Store_C</td> <td style="text-align: center;">Titanic</td> <td style="text-align: right;">Movies</td> </tr> <tr> <td style="text-align: left;">Store C</td> <td style="text-align: center;">Iron_Man</td> <td style="text-align: right;">Movies</td> </tr> <tr> <td style="text-align: left;">Store C</td> <td style="text-align: center;"></td> <td style="text-align: right;">Movies</td> </tr> </tbody> </table> </div> <p><a href="https://i.stack.imgur.com/cFUBs.png" rel="nofollow noreferrer">Input Dataset</a></p> <p>Output Dataset</p> <p><a href="https://i.stack.imgur.com/YyVTy.png" rel="nofollow noreferrer">Output Dataset</a></p> <p>Thanks</p>
<p>Input data:</p> <pre><code>import pandas as pd import io df = pd.read_csv(io.StringIO(&quot;&quot;&quot;Store;Item;Category\nStore A;;Fruits\nStore A;Apple_;Fruits\nStore A;Orange;Fruits\nStore A;Banana;Fruits\nStore_B;Books;Stationary\nStore B;Pen;Fruits\nStore_B;Pencil;Fruits\nStore B;Glue;Stationary\nStore B;Eraser;Stationary\nStore C;Frozen;Movies\nStore_C;Titanic;Movies\nStore C;Iron_Man;Movies\nStore C;;Movies&quot;&quot;&quot;), sep=&quot;;&quot;) </code></pre> <p>I slightly modified your data to be able highlight a row twice:</p> <pre><code>&gt;&gt;&gt; df Store Item Category 0 Store A NaN Fruits 1 Store A Apple_ Fruits 2 Store A Orange Fruits 3 Store A Banana Fruits 4 Store_B Books Stationary 5 Store B Pen Fruits 6 Store_B Pencil Fruits 7 Store B Glue Stationary 8 Store B Eraser Stationary 9 Store C Frozen Movies 10 Store_C Titanic Movies 11 Store C Iron_Man Movies 12 Store C NaN Movies </code></pre> <p>Try to detect errors:</p> <pre><code>comments = {&quot;m1&quot;: &quot;Null/Blank in Item column&quot;, &quot;m2&quot;: &quot;Presence of '_' in Store column&quot;, &quot;m3&quot;: &quot;Keyword 'Pen' should not tagged to Fruits category&quot;} # Conditions m1 = (df[&quot;Item&quot;].str.len() == 0) | (df[&quot;Item&quot;].isna()) m2 = df[&quot;Store&quot;].str.contains(&quot;_&quot;) m3 = (df[&quot;Item&quot;].str.startswith(&quot;Pen&quot;)) &amp; (df[&quot;Category&quot;].str.match(&quot;Fruits&quot;)) dfm = pd.DataFrame({&quot;m1&quot;: m1, &quot;m2&quot;: m2, &quot;m3&quot;: m3}, index=df.index) df[&quot;Highlight Comments&quot;] = dfm.mul(pd.DataFrame(comments, index=df.index)) \ .apply(lambda c: ', '.join(filter(bool, c)), axis=&quot;columns&quot;) </code></pre> <pre><code>&gt;&gt;&gt; df[&quot;Highlight Comments&quot;] 0 Null/Blank in Item column 1 2 3 4 Presence of '_' in Store column 5 Keyword 'Pen' should not tagged to Fruits category 6 Presence of '_' in Store column, Keyword 'Pen' should not tagged to Fruits category 7 8 9 10 Presence of '_' in Store column 11 12 Null/Blank in Item column Name: Highlight Comments, dtype: object </code></pre> <p>Export to excel and set style to cells:</p> <pre><code>import openpyxl as xl with pd.ExcelWriter(&quot;output.xlsx&quot;, engine=&quot;openpyxl&quot;) as writer: df.to_excel(writer, index=False) ws = writer.book.active # convert indexes to excel cell coordinates (so ugly!) dfm.columns = [chr(ord('A') + i) for i, _ in enumerate(dfm.columns)] dfm.index = map(str, dfm.index + 2) highlight = xl.styles.PatternFill(fill_type=&quot;solid&quot;, start_color=&quot;ffff00&quot;, end_color=&quot;ffff00&quot;) for cell in dfm.unstack()[dfm.unstack()].index.map(''.join): ws[cell].fill = highlight </code></pre>
python|excel|pandas|dataframe
0
3,559
67,275,896
Python program freezes my computer when try to read CSV files with only 200k rows data
<p>I am a beginner in Python and I am facing a problem. I would like a user to select a CSV file to be read-in. In the case that the program cannot locate the file or handle the condition, it should default to an error.</p> <p>I have successfully implemented this solution for small file sizes (&lt; 50000 rows) but when the selected file becomes larger (e.x. &gt; 50000 rows), the program freezes.</p> <p>The following are some characteristics to consider:</p> <ol> <li>My computer has 8GB of RAM.</li> <li>The selected file was only 200k+ rows, which is not considered &quot;Big Data.&quot;</li> </ol> <p>The following is my attempt at an implementation:</p> <pre><code>def File_DATALOG(): global df_LOG try: dataloggerfile = tk.filedialog.askopenfilename(parent=root, title='Choose Logger File', filetype=((&quot;csv files&quot;, &quot;*.csv&quot;), (&quot;All Files&quot;, &quot;*.*&quot;))) if len(dataloggerfile) == 0: return None lb.insert(tk.END, dataloggerfile) if dataloggerfile[-4:] == &quot;.csv&quot;: df_LOG = pd.DataFrame(pd.read_csv(dataloggerfile)) if 'Unnamed: 1' in df_LOG.columns: df_LOG = pd.DataFrame(pd.read_csv(dataloggerfile, skiprows=5, low_memory=False)) else: df_LOG = pd.DataFrame(pd.read_excel(dataloggerfile, skiprows=5)) df_LOG.rename(columns={'Date/Time': 'DateTime'}, inplace=True) df_LOG.drop_duplicates(subset=None, keep=False, inplace=True) df_LOG['DateTime'] = df_LOG['DateTime'].apply(lambda x: insert_space(x, 19)) df_LOG['DateTime'] = pd.to_datetime(df_LOG['DateTime'], dayfirst=False, errors='coerce') df_LOG.sort_values('DateTime', inplace=True) df_LOG = df_LOG[~df_LOG.DateTime.duplicated(keep='first')] df_LOG = df_LOG.set_index('DateTime').resample('1S').pad() print(df_LOG) columnsDict['Logger'] = df_LOG.columns.to_list() except Exception as ex: tk.messagebox.showerror(title=&quot;Title&quot;, message=ex) return None </code></pre>
<p>You are trying to read the whole file at once, but, as you say, your memory isn't enough for that.</p> <p>So you need to process the data little by little, in chunks.</p> <p>Here is a good and simple example: <a href="https://stackoverflow.com/a/43286094/7285863">https://stackoverflow.com/a/43286094/7285863</a></p>
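<p>A minimal sketch of what &quot;little by little&quot; looks like with pandas, using the <code>chunksize</code> argument of <code>read_csv</code> (the file path and chunk size below are placeholders; any per-chunk work such as dropping duplicates or keeping only the needed columns goes inside the loop):</p> <pre><code>import pandas as pd

parts = []
# read the CSV in pieces instead of loading everything in one go
for chunk in pd.read_csv('datalogger.csv', chunksize=50_000):
    chunk = chunk.drop_duplicates()
    parts.append(chunk)

df = pd.concat(parts, ignore_index=True)
</code></pre>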
python|pandas|dataframe|csv|bigdata
-1
3,560
34,446,158
Dates to Durations in Pandas
<p>I feel like this should be done very easily, yet I can't figure out how. I have a <code>pandas</code> <code>DataFrame</code> with column <strong>date</strong>:</p> <pre><code>0 2012-08-21 1 2013-02-17 2 2013-02-18 3 2013-03-03 4 2013-03-04 Name: date, dtype: datetime64[ns] </code></pre> <p>I want to have a column of durations, something like:</p> <pre><code>0 0 1 180 days 2 1 day 3 13 days 4 1 day Name: date, dtype: datetime64[ns] </code></pre> <p>My attempt yields a bunch of 0 days and <code>NaT</code> instead:</p> <pre><code>&gt;&gt;&gt; df.date[1:] - df.date[:-1] 0 NaT 1 0 days 2 0 days ... </code></pre> <p>Any ideas? </p>
<p><code>Timedeltas</code> are useful here: <a href="http://pandas.pydata.org/pandas-docs/stable/timedeltas.html" rel="nofollow noreferrer">(see docs)</a></p> <blockquote> <p>Starting in v0.15.0, we introduce a new scalar type Timedelta, which is a subclass of datetime.timedelta, and behaves in a similar manner, but allows compatibility with np.timedelta64 types as well as a host of custom representation, parsing, and attributes.</p> <p>Timedeltas are differences in times, expressed in difference units, e.g. days, hours, minutes, seconds. They can be both positive and negative.</p> </blockquote> <pre><code>df 0 0 2012-08-21 1 2013-02-17 2 2013-02-18 3 2013-03-03 4 2013-03-04 </code></pre> <p>You could:</p> <pre><code>pd.to_timedelta(df) TimedeltaIndex(['0 days'], dtype='timedelta64[ns]', freq=None) 0 0 1 180 2 1 3 13 4 1 Name: 0, dtype: int64 </code></pre> <p>Alternatively, you can calculate the difference between points in time using <code>.shift()</code> (or <code>.diff()</code> as illustrated by @Andy Hayden):</p> <pre><code>res = df-df.shift() </code></pre> <p>to get:</p> <pre><code>res.fillna(0) 0 0 0 days 1 180 days 2 1 days 3 13 days 4 1 days </code></pre> <p>You can convert these from <code>timedelta64</code> <code>dtype</code> to <code>integer</code> using:</p> <pre><code>res.fillna(0).squeeze().dt.days 0 0 1 180 2 1 3 13 4 1 </code></pre>
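<p>A compact variant of the same idea with <code>diff</code>, assuming <code>df</code> still holds the <code>date</code> column from the question:</p> <pre><code>&gt;&gt;&gt; durations = df['date'].diff().fillna(pd.Timedelta(0))
&gt;&gt;&gt; durations.dt.days
0      0
1    180
2      1
3     13
4      1
Name: date, dtype: int64
</code></pre>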
python|pandas|timedelta
6
3,561
59,991,390
Pandas value_counts returning multiple lines for the same value
<p>Running into an issue with Pandas where my dataframe value_counts call is returning multiple lines for the same values. Instead of grouping all "True" values and all "False" values, it's splitting them into 4 groups.</p> <p>Here's my code:</p> <pre><code>import pandas as pd filepath=r"C:\Users\09.41.csv" df = pd.read_csv(filepath) print(df['Finished'].value_counts()) </code></pre> <p>Output:</p> <p>True 3904</p> <p>True 1877</p> <p>False 190</p> <p>False 94</p> <p>I want to be able to group all "True" and "False" responses together for analyses, but I keep getting stuck with these 4 groups instead of 2.</p> <p>Running Python 3.7.4 and the CSV is directly from a survey software (Qualtrics).</p> <p>Thanks in advance for any help!</p>
<p>Check your values' data type. Some of the rows may be strings and some of them may be bools. For example:</p> <pre><code>[True,'True','False',False,False] </code></pre> <p>If that's the case, change them all to bool and then count the values:</p> <pre><code>df.Finished.apply(lambda x: 'True' in x if type(x)!= bool else x).value_counts() </code></pre>
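<p>To confirm the mix quickly before converting, a small diagnostic you can run on the raw column (it just counts how many cells are <code>bool</code> versus <code>str</code>):</p> <pre><code>df['Finished'].map(type).value_counts()
</code></pre>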
python|python-3.x|pandas|csv
2
3,562
59,951,312
Pandas equivalent to dplyr dot
<p>I am sorry for pretty heavy explanation, but hope you will get the idea.</p> <p>I'm R user and I find tidyverse capabilities in data wrangling really powerful. But recently I have started learning Python, and in particular pandas to extend my opportunities in data analysis. Instinctively I'm trying to do things in pandas as I used to do them while I was using dplyr. </p> <p>So my question is whether any equivalent to dplyr dot while you are using method chaining in pandas.</p> <p>Here example illustrates computing of minimum value from all values that are greater than current value in test_df['data'] per each group and than the same computing but across new column.</p> <p>R's Example:</p> <pre><code>require(dplyr) require(purrr) test_df = data.frame(group = rep(c(1,2,3), each = 3), data= c(1:9)) test_df %&gt;% group_by(group) %&gt;% mutate(., min_of_max = map_dbl(data, ~data[data &gt; .x] %&gt;% min())) %&gt;% mutate(., min_of_max_2 = map_dbl(min_of_max, ~min_of_max[min_of_max &gt; .x] %&gt;% min())) </code></pre> <p>Output:</p> <pre><code># A tibble: 9 x 4 # Groups: group [3] group data min_of_max min_of_max_2 &lt;dbl&gt; &lt;int&gt; &lt;dbl&gt; &lt;dbl&gt; 1 1 1 2 3 2 1 2 3 Inf 3 1 3 Inf Inf 4 2 4 5 6 5 2 5 6 Inf 6 2 6 Inf Inf 7 3 7 8 9 8 3 8 9 Inf 9 3 9 Inf Inf </code></pre> <p>I know that dplyr doesn't even require dot, but I put it for better understanding the specific of my question </p> <p>Doing the same in Pandas</p> <p>Invalid Example:</p> <pre><code>import pandas as pd import numpy as np test_df = ( pd.DataFrame({'A': np.array([1,2,3]*3), 'B': np.array(range(1,10))}) .sort_values(by = ['A', 'B']) ) (test_df.assign(min_of_max = test_df.apply(lambda x: (test_df.B[(test_df.B &gt; x.B) &amp; (test_df.A[test_df.A == x.A])]).min(), axis = 1)) .assign(min_of_max2 = 'assume_dot_here'.apply(lambda x: (test_df.min_of_max[(test_df.min_of_max &gt; x.min_of_max) &amp; (test_df.A[test_df.A == x.A])]).min(), axis = 1))) </code></pre> <p>In this example putting dot in a second <code>.assign</code> would be great ability but it doesn't work in pandas.</p> <p>Valid Example, which ruins chain:</p> <pre><code>test_df = test_df.assign(min_of_max = test_df.apply(lambda x: (test_df.B[(test_df.B &gt; x.B) &amp; (test_df.A[test_df.A == x.A])]).min(), axis = 1)) test_df = test_df.assign(min_of_max2 = test_df.apply(lambda x : (test_df.min_of_max[(test_df.min_of_max &gt; x.min_of_max) &amp; (test_df.A[test_df.A == x.A])]).min(), axis = 1)) </code></pre> <p>Output:</p> <pre><code> A B min_of_max min_of_max2 0 1 1 4.0 7.0 3 1 4 7.0 NaN 6 1 7 NaN NaN 1 2 2 5.0 8.0 4 2 5 8.0 NaN 7 2 8 NaN NaN 2 3 3 6.0 9.0 5 3 6 9.0 NaN 8 3 9 NaN NaN </code></pre> <p>So is there any convenient way to call object from previous part of chain in second <code>.assign</code>? Since using <code>test_df.apply()</code> in second .assign will take initial test_df without computed <code>test_df['min_of_max']</code></p> <p>Sorry for somewhat unreadable code in Python, I'am still figuring out how to write more clear.</p>
<p>In Pandas, run the chain of two <code>assign</code> calls but do so in any way that does not rely on <em>original</em> data frame context such as with <code>DataFrame.apply</code> call. Below uses a list comprehension equivalent across index values:</p> <pre><code>test_df = pd.DataFrame({'group': np.repeat([1,2,3],3), 'data': np.arange(1,10)}) ( test_df.assign(min_of_max = lambda x: [np.min(x["data"].loc[(x["data"] &gt; x["data"].iloc[i]) &amp; (x["group"] == x["group"].iloc[i])] ) for i in test_df.index.values]) .assign(min_of_max_2 = lambda x: [np.min(x["min_of_max"].loc[(x["min_of_max"] &gt; x["min_of_max"].iloc[i]) &amp; (x["group"] == x["group"].iloc[i])] ) for i in test_df.index.values]) ) # group data min_of_max min_of_max_2 # 0 1 1 2.0 3.0 # 1 1 2 3.0 NaN # 2 1 3 NaN NaN # 3 2 4 5.0 6.0 # 4 2 5 6.0 NaN # 5 2 6 NaN NaN # 6 3 7 8.0 9.0 # 7 3 8 9.0 NaN # 8 3 9 NaN NaN </code></pre> <hr> <p>However, just as you can combine the assignments in <code>dplyr::mutate</code>, you can do the same by combining the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>DataFrame.assign</code></a> calls by using the <code>lambda</code> method (not to be confused with <code>lambda</code> in <code>DataFrame.apply</code>).</p> <p><strong>R</strong></p> <pre><code>test_df &lt;- data.frame(group = rep(c(1,2,3), each = 3), data = c(1:9)) test_df %&gt;% group_by(group) %&gt;% mutate(min_of_max = map_dbl(data, ~data[data &gt; .x] %&gt;% min()), min_of_max_2 = map_dbl(min_of_max, ~min_of_max[min_of_max &gt; .x] %&gt;% min())) # # A tibble: 9 x 4 # # Groups: group [3] # group data min_of_max min_of_max_2 # &lt;dbl&gt; &lt;int&gt; &lt;dbl&gt; &lt;dbl&gt; # 1 1 1 2 3 # 2 1 2 3 Inf # 3 1 3 Inf Inf # 4 2 4 5 6 # 5 2 5 6 Inf # 6 2 6 Inf Inf # 7 3 7 8 9 # 8 3 8 9 Inf # 9 3 9 Inf Inf </code></pre> <p><strong>Pandas</strong></p> <pre><code>test_df = pd.DataFrame({'group': np.repeat([1,2,3],3), 'data': np.arange(1,10)}) test_df.assign(min_of_max = lambda x: [np.min(x["data"].loc[(x["data"] &gt; x["data"].iloc[i]) &amp; (x["group"] == x["group"].iloc[i])] ) for i in test_df.index.values], min_of_max_2 = lambda x: [np.min(x["min_of_max"].loc[(x["min_of_max"] &gt; x["min_of_max"].iloc[i]) &amp; (x["group"] == x["group"].iloc[i])] ) for i in test_df.index.values]) # group data min_of_max min_of_max_2 # 0 1 1 2.0 3.0 # 1 1 2 3.0 NaN # 2 1 3 NaN NaN # 3 2 4 5.0 6.0 # 4 2 5 6.0 NaN # 5 2 6 NaN NaN # 6 3 7 8.0 9.0 # 7 3 8 9.0 NaN # 8 3 9 NaN NaN </code></pre> <hr> <p>By the way, since Pandas was arguably modeled after R many years ago by Wes McKinney (see <a href="https://www.researchgate.net/publication/265194455_pandas_a_Foundational_Python_Library_for_Data_Analysis_and_Statistics" rel="nofollow noreferrer">paper</a>), base R tends to be more translatable to Pandas. Below, <code>within</code> mirrors uses of <code>assign</code> and <code>sapply</code> mirrors list comprehension.</p> <p><strong>Base R</strong></p> <pre><code>test_df &lt;- within(test_df, { min_of_max &lt;- sapply(1:nrow(test_df), function(i) min(data[data &gt; data[i] &amp; group == group[i]])) min_of_max_2 &lt;- sapply(1:nrow(test_df), function(i) min(min_of_max[min_of_max &gt; min_of_max[i] &amp; group == group[i]])) }) test_df[c("group", "data", "min_of_max", "min_of_max_2")] # group data min_of_max min_of_max_2 # 1 1 1 2 3 # 2 1 2 3 Inf # 3 1 3 Inf Inf # 4 2 4 5 6 # 5 2 5 6 Inf # 6 2 6 Inf Inf # 7 3 7 8 9 # 8 3 8 9 Inf # 9 3 9 Inf Inf </code></pre>
python|r|pandas
2
3,563
60,171,887
Is it possible to train with tensorflow 1 using float16?
<p>Currently train keras on tensorflow model with default setting - float32.</p> <p>Post training the network is quantized: cast weights to float16. This improves performance by ~x3 while keeping the same accuracy.</p> <p>I was trying to train from start using float16 and failed miserably. I cannot find any link that explain if that is possible and if not why is it not possible.</p>
<p><a href="https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#faq-tf" rel="nofollow noreferrer">Automated Mixed Precision</a> from NVidia might be a way to go. </p> <p>From what I've gathered since <code>1.14</code> it is (was) supported in the upstream. All you would have to do is wrap your optimizer like this: </p> <pre><code>opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt) </code></pre> <p>You might also need to set specific <code>environment variable</code> from within your Python script, namely:</p> <pre><code>os.environ[‘TF_ENABLE_AUTO_MIXED_PRECISION’] = ‘1’ </code></pre> <p>Above should already employ good mixed precision training practices (e.g. loss scaling, keeping <code>float32</code> where necessary etc.).</p> <p>Good resource for this solution should be <a href="https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#faq-tf" rel="nofollow noreferrer">official NVidia's documentation</a>.</p> <p>Some other resources gathered which also might be useful (though do not seem to indicate you would have to do anything more) <a href="https://devblogs.nvidia.com/nvidia-automatic-mixed-precision-tensorflow/" rel="nofollow noreferrer">here</a>, <a href="https://medium.com/tensorflow/automatic-mixed-precision-in-tensorflow-for-faster-ai-training-on-nvidia-gpus-6033234b2540" rel="nofollow noreferrer">here</a> or <a href="https://developer.nvidia.com/automatic-mixed-precision" rel="nofollow noreferrer">here</a>.</p> <p>I would advise against manual casting as you might easily lose precision (e.g. in <code>BatchNorm</code> statistics used during inference) unless you know ins-and-outs of specific layers.</p> <p>Additionally, you might also check <code>bfloat16</code> (brain float) type from Google which has <code>exponent</code> part of <code>float32</code> (<code>8</code> bits) and smaller fraction. This allows it to keep greater range of values (e.g. when computing gradients) when compared to <code>float16</code> which allows one to avoid <code>loss scaling</code>. </p> <p>Above (<code>bfloat16</code>) should be useful mainly in TPUs, AFAIK NVidia GPU's support for it is not too great (someone correct me if I'm wrong). Some information <a href="https://cloud.google.com/tpu/docs/bfloat16" rel="nofollow noreferrer">here</a>.</p>
python|tensorflow|precision|keras-rl
2
3,564
60,329,503
Listify indices according to column groupby
<p>My question is similar to "<a href="https://stackoverflow.com/questions/22219004/grouping-rows-in-list-in-pandas-groupby">grouping rows in list in pandas groupby</a>", but what is to be listified is the index, not just another column.</p> <p>I know I can turn the index into just another column with <code>reset_index()</code>, but I spent a lot of time trying to capture the index field directly. Is there a way?</p> <p>Example:</p> <pre><code>df = pd.DataFrame({'a':['A','A','B','B','B','C'], 'b':[1,2,5,5,4,6]}) df.reset_index().groupby('a')['index'].apply(list) </code></pre> <p>Output:</p> <pre><code>A [0, 1] B [2, 3, 4] C [5] </code></pre>
<p>Use a list comprehension to iterate through the groups, and create a new dataframe</p> <pre><code>(pd.DataFrame([(name,group.index.tolist()) for name, group in df.groupby('a')], columns=['name','index']) ) name index 0 A [0, 1] 1 B [2, 3, 4] 2 C [5] </code></pre>
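<p>As a side note, the grouped index labels are also available without building anything by hand. A small sketch, assuming the same <code>df</code> as above, using the <code>GroupBy.groups</code> mapping:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'a': ['A', 'A', 'B', 'B', 'B', 'C'],
                   'b': [1, 2, 5, 5, 4, 6]})

# GroupBy.groups maps each group label to the index labels of its rows
out = pd.Series({name: list(idx) for name, idx in df.groupby('a').groups.items()})
print(out)
# A       [0, 1]
# B    [2, 3, 4]
# C          [5]
</code></pre>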
pandas|pandas-groupby
1
3,565
60,162,029
How can hide bars that does not have values in matplotlib bar plots
<p>I have a code that plots the total transactions per months. The dataset doesn't include all the months (Only from 10 to 4). Yet when I plot it, it still includes the months from 5 to 9 (with, of course, no bars). I want to hide those as they are not even part of the dataset. </p> <p>here is what I get: </p> <p><a href="https://i.stack.imgur.com/X27PE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X27PE.png" alt="enter image description here"></a></p> <p>here is my code </p> <pre><code>df_month = df.groupby('Month')['Transaction'].count() months_unique = df.Month.unique() df_month = df_month.reindex(months_unique, axis=0) # This line and the line above are to reorder the months as they are in the original dataframe (the first line orders them starting from 1. wrong) df_month = df_month.to_frame() df_month.reset_index(level=0, inplace=True) #resetting the index istead of having the month as an index. plt.figure(figsize=(20, 10)) # specify the size of the plot plt.bar(months_unique, df_month['Transaction']) plt.suptitle('Transactions over the months', fontsize=25) # Specify the suptitle of the plot plt.title('Using Data from Years October - April', fontsize=20) # Specify the title of the plot plt.xlabel('month', fontsize=20) # Specify the x label plt.ylabel('number', fontsize=20) # Specify the y label plt.setp(plt.gca().get_xticklabels(),fontsize=20) plt.setp(plt.gca().get_yticklabels(), fontsize=20) </code></pre> <p>EDIT </p> <p>How does the result of <code>df_month = df.groupby('Month')['Transaction'].count()</code> look like ?: </p> <p><a href="https://i.stack.imgur.com/J6qq6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J6qq6.png" alt="enter image description here"></a></p> <p>After using <code>to_frame</code> and <code>reset_index</code>: </p> <p><a href="https://i.stack.imgur.com/Ipr1H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ipr1H.png" alt="enter image description here"></a></p>
<p>The easiest way might be casting your <code>month</code> to <code>str</code> to avoid <code>matplotlib</code> filling up the missing numbers:</p> <pre><code>plt.bar(months_unique.astype(str), df_month['Transaction']) ... </code></pre> <p>Or simply let <code>pandas</code> handle the plotting for you instead:</p> <pre><code>df.groupby('Month')['Transaction'].count().plot(kind="bar") ... plt.show() </code></pre>
python|pandas|matplotlib
0
3,566
60,068,282
Tensorflow Object Detection AttributeError: module 'tensorflow._api.v1.compat' has no attribute 'v2'
<p>Currently running TF 1.13.2</p> <p>I'm trying to train my own object detection model. After trying to run this command:</p> <pre><code>python model_main.py --logtostderr --model_dir=training/ --pipeline_config_path=traini ng/faster_rcnn_inception_v2_pets.config </code></pre> <p>I get this error:</p> <pre><code>Traceback (most recent call last): File "model_main.py", line 26, in &lt;module&gt; from object_detection import model_lib File "C:\Users\Admin\Anaconda3\envs\object_detection\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\model_lib.py", line 28, in &lt;module&gt; from object_detection import exporter as exporter_lib File "C:\Users\Admin\Anaconda3\envs\object_detection\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\exporter.py", line 24, in &lt;module&gt; from object_detection.builders import model_builder File "C:\Users\Admin\Anaconda3\envs\object_detection\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\builders\model_builder.py", line 47, in &lt;module&gt; from object_detection.models.ssd_mobilenet_edgetpu_feature_extractor import SSDMobileNetEdgeTPUFeatureExtractor File "C:\Users\Admin\Anaconda3\envs\object_detection\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\models\ssd_mobilenet_edgetpu_feature_extractor.py", line 19, in &lt;module&gt; from object_detection.models import ssd_mobilenet_v3_feature_extractor File "C:\Users\Admin\Anaconda3\envs\object_detection\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\models\ssd_mobilenet_v3_feature_extractor.py", line 25, in &lt;module&gt; from nets.mobilenet import mobilenet File "C:\Users\Admin\Desktop\ObjectDetection\models\research\object_detection\nets\mobilenet\mobilenet.py", line 399, in &lt;module&gt; def global_pool(input_tensor, pool_op=tf.compat.v2.nn.avg_pool2d): AttributeError: module 'tensorflow._api.v1.compat' has no attribute 'v2' </code></pre> <p>I need to stay below TF 2.x because 2.x does not yet allow for training your own model (according to Gilbert Tanner's object detection tutorial). I've bounced around between TF versions and I've gotten similar errors in different versions that some module has no attribute 'v1'. In either case, I'm not sure at all how to fix this.</p> <p>Edit: Here are the site-packages for my environment:</p> <pre><code>Package Version Latest Version absl-py 0.9.0 0.8.1 astor 0.8.1 0.8.0 biwrap 0.1.6 bleach 1.5.0 3.1.0 certifi 2019.11.28 2019.11.28 gast 0.3.3 0.3.2 grpcio 1.27.0 1.16.1 h5py 2.10.0 2.10.0 html5lib 1.000000 1.0.1 keras-applications 1.0.8 1.0.8 keras-preprocessing 1.1.0 1.1.0 markdown 3.1.1 3.1.1 mock 3.0.5 3.0.5 numpy 1.18.1 1.18.1 object-detection 0.100000 pandas 1.0.0 1.0.0 pillow 7.0.0 7.0.0 pip 20.0.2 20.0.2 protobuf 3.11.3 3.11.2 pycocotools 2.000000 python 3.6.10 3.8.1 python-dateutil 2.8.1 2.8.1 pytz 2019.300000 2019.300000 setuptools 39.1.0 45.1.0 six 1.14.0 1.14.0 sqlite 3.31.1 3.31.1 tensorboard 1.9.0 2.0.0 tensorflow 1.9.0 2.0.0 tensorflow-estimator 1.13.0 2.0.0 tensorflow-plot 0.3.0 tensorflow-tensorboard 1.5.1 termcolor 1.1.0 1.1.0 vc 14.100000 14.100000 vs2015_runtime 14.16.27012 14.16.27012 werkzeug 0.16.1 0.16.1 wheel 0.34.2 0.34.2 wincertstore 0.200000 0.200000 </code></pre>
<p>EDIT: from experience, you are using a TF2-designed model in a TF1 application.</p> <p>Did you clone the models from GitHub? The repository is based on the latest TF.</p> <p>You can use the models designed for 1.13 specifically: <a href="https://github.com/tensorflow/models/archive/v1.13.0.zip" rel="nofollow noreferrer">https://github.com/tensorflow/models/archive/v1.13.0.zip</a> </p> <p>Reboot to clear the previously exported <code>PYTHONPATH</code>, then <code>cd models/models-1.13.0/research/</code> and run:</p> <pre><code>protoc object_detection/protos/*.proto --python_out=. export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim </code></pre> <p>Then <code>cd ../../..</code> and start training:</p> <pre><code>python3 github/TensorFlow-object-detection-tutorial-master/5_part\ step\ by\ step\ custom\ object\ detection/train.py --logtostderr --train_dir=training/ --pipeline_config_path=dataset/data1/faster_rcnn_inception_v2_coco_tf13.config </code></pre> <p>This solved the issue. EDIT: I have also run this TF training on GCP and on the edge (Jetson Nano); feel free to upvote if you need more backup.</p> <p><a href="https://github.com/tensorflow/models/issues/8088#issuecomment-581696751" rel="nofollow noreferrer">source</a></p>
python|windows|tensorflow
1
3,567
65,107,049
Installing packages in pycharm
<p>I'm kinda new to programming, and this is one of my first projects, a data science project.<br /> I am using PyCharm and I need to install numpy and pandas (and probably many others later).<br /> But I can't manage to download any package.</p> <p><a href="https://i.stack.imgur.com/hdUxr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hdUxr.png" alt="Screenshot" /></a></p> <p>I tried different versions of numpy and pandas, tried with the terminal, and tried to use a Virtualenv and a Conda environment.<br /> If you have any suggestion I'd be happy to hear them.</p>
<p>As discussed in the comments: PyCharm 2020.2.4 has a bug that breaks installing packages with pip from the IDE; updating to 2020.2.5 (and probably to later versions in the future) resolves the issue.</p>
python|numpy|pycharm|package
0
3,568
50,213,201
How can one initialize data for a contour plot using a function that takes one input and outputs a scalar value?
<p><strong>NOTE:</strong> The post looks longer than it ought to because of docstrings and an array consisting of 40 datetimes.</p> <p>I have some time-series data. For examples sake, let's say I have three parameters, each consisting of 40 data points: datetimes (given by <code>dts</code>), speed (given by <code>vobs</code>), and elapsed hour (given by <code>els</code>), which are combined by key into a dictionary <code>data_dict</code>.</p> <pre><code>dts = np.array(['2006/01/01 02:30:04', '2006/01/01 03:30:04', '2006/01/01 03:54:04' ,'2006/01/01 05:30:04', '2006/01/01 06:30:04', '2006/01/01 07:30:04' ,'2006/01/01 08:30:04', '2006/01/01 09:30:04', '2006/01/01 10:30:04' ,'2006/01/01 11:30:04', '2006/01/01 12:30:04', '2006/01/01 13:30:04' ,'2006/01/01 14:30:04', '2006/01/01 15:30:04', '2006/01/01 16:30:04' ,'2006/01/01 17:30:04', '2006/01/01 18:30:04', '2006/01/01 19:30:04' ,'2006/01/01 20:30:04', '2006/01/01 21:30:04', '2006/01/01 21:54:05' ,'2006/01/01 23:30:04', '2006/01/02 00:30:04', '2006/01/02 01:30:04' ,'2006/01/02 02:30:04', '2006/01/02 03:30:04', '2006/01/02 04:30:04' ,'2006/01/02 05:30:04', '2006/01/02 06:30:04', '2006/01/02 07:30:04' ,'2006/01/02 08:30:04', '2006/01/02 09:30:04', '2006/01/02 10:30:04' ,'2006/01/02 11:30:04', '2006/01/02 12:30:04', '2006/01/02 13:30:04' ,'2006/01/02 14:30:04', '2006/01/02 15:30:04', '2006/01/02 16:30:04' ,'2006/01/02 17:30:04']) vobs = np.array([158, 1, 496, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 , 1, 1, 823, 1, 1, 1, 1, 303, 1, 1, 1, 1, 253, 1, 1, 1, 408, 1 , 1, 1, 1, 321]) els = np.array([i for i in range(len(vobs))]) data_dictionary = {'datetime' : dts, 'values' : vobs, 'elapsed' : els} </code></pre> <p>I have a function that takes a dictionary as an input and outputs a single scalar value of <code>type &lt;float&gt;</code> or <code>type &lt;int&gt;</code>. The function given below is simpler than my actual use case and is given for examples sake.</p> <pre><code>def get_z(dictionary): """ This function returns a scalar value. """ return np.sum(dictionary['elapsed'] / dictionary['values']) </code></pre> <p>I would like to see how this function output changes as the time-interval changes. So, I've created a function that takes a dictionary as input and outputs a new dictionary, the array values of which are sliced at the input indices for each of the keys in the input dictionary. Note that the consecutive elapsed hours can serve as indices.</p> <pre><code>def subsect(dictionary, indices): """ This function returns a dictionary, the array values of which are sliced at the input indices. """ return {key : dictionary[key][indices] for key in list(dictionary.keys())} </code></pre> <p>To verify that the above functions work, one can run the for-loop containing the function <code>read_dictionary(...)</code> below.</p> <pre><code>def read_dictionary(dictionary): """ This function prints the input dictionary as a check. """ for key in list(dictionary.keys()): print(" .. KEY = {}\n{}\n".format(key, dictionary[key])) print("\nORIGINAL DATA DICTIONARY\n") read_dictionary(data_dictionary) # for i in range(1, 38): # mod_dictionary = subsect(data_dictionary, indices=slice(i, 39, 1)) # print("\n{}th MODIFIED DATA DICTIONARY\n".format(i)) # read_dictionary(mod_dictionary) </code></pre> <p>My issue is that I would like a contour plot. 
The x-axis will contain the lower bound of the datetime interval (the first entry of <code>mod_dictionary[i]</code>) while the y-axis will contain the upper bound of the datetime interval (the last entry of <code>mod_dictioary[i]</code>). Normally when making a contour plot, one has an array of <code>(x,y)</code> values that are made into a grid <code>(X,Y)</code> via <code>numpy.meshgrid</code>. As my actual function (not the one in the example) is not vectorized, I can use <code>X.copy().reshape(-1)</code> and reshape my result back using <code>(...).reshape(X.shape)</code>. </p> <p>My exact problem is that I do not know how I can make a grid of different parameters using a single dictionary as an input for a function that outputs a single scalar value. Is there a way to do this?</p>
<p>If I understood your idea correctly, then this should be what you need. However, I needed the following packages:</p> <pre><code>import numpy as np import matplotlib import matplotlib.pyplot as plt from matplotlib.mlab import griddata import pandas as pd </code></pre> <p>First the required values are stored in three lists. I had to change the for loop a little because in your example all upper bounds were the same, so no contour plot was possible:</p> <pre><code>lower_bounds = []; upper_bounds = []; z_values = []; for j in range(1, 30): for i in range(0,j): mod_dictionary = subsect(data_dictionary, indices=slice(i, j, 1)) lower_bounds.append(mod_dictionary['datetime'][0]) upper_bounds.append(mod_dictionary['datetime'][-1]) z_values.append(get_z(mod_dictionary)) </code></pre> <p>Then the datetime strings are converted to <code>Timestamps</code>:</p> <pre><code>lower_bounds_dt = [pd.Timestamp(date).value for date in lower_bounds] upper_bounds_dt = [pd.Timestamp(date).value for date in upper_bounds] </code></pre> <p>And the grid for the contour plot is generated:</p> <pre><code>xi = np.linspace(min(lower_bounds_dt), max(lower_bounds_dt), 100) print(xi) yi = np.linspace(min(upper_bounds_dt), max(upper_bounds_dt), 100) print(yi) </code></pre> <p>Using <code>griddata</code> the missing grid points for the <code>z</code> values are generated.</p> <pre><code>zi = griddata(lower_bounds_dt, upper_bounds_dt, z_values, xi, yi) print(zi) </code></pre> <p>Finally you can use <code>contour</code> or <code>contourf</code> to generate the contour plot: </p> <pre><code>fig1 = plt.figure(figsize=(10, 8)) ax1 = fig1.add_subplot(111) ax1.contourf(xi, yi, zi) fig1.savefig('graph.png') </code></pre> <p>As currently the generated data is only a small band (because the lower and upper bound in the for loop increase together) the result looks like this:</p> <p><a href="https://i.stack.imgur.com/ca7GG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ca7GG.png" alt="Result of contourf"></a></p> <p>You could easily change this by changing the way you span your data arrays in the for loop. Using <code>pd.to_datetime</code> you could also display the <code>x</code> and <code>y</code> axis in your preferred datetime format.</p> <p><strong>Edit:</strong> I uploaded the complete example to <a href="https://repl.it/@AxelFiedler1/DateTimeDict" rel="nofollow noreferrer">repl.it</a></p>
python-3.x|numpy|matplotlib|contour|scalar
1
3,569
49,973,180
Confused about tensorflow timeline
<p><a href="https://i.stack.imgur.com/XPj5Q.png" rel="nofollow noreferrer">enter image description here</a></p> <p>I think /device:GPU:0 and /job:localhost/replica:0/task:0/device:GPU:0 denote the same thing, but why their timelines are different? For example, the blue Conv2d of /device:GPU:0 is later than /job:localhost/replica:0/task:0/device:GPU:0. Can somebody explain this? Thanks.</p>
<p><code>stream:all</code> indicates the computation itself happening. <code>/job:localhost/replica:0/task:0/device:GPU:0</code> are just the requests being made to enqueue kernels.</p> <p>See: <a href="https://github.com/tensorflow/tensorflow/issues/1824#issuecomment-244251867" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/1824#issuecomment-244251867</a></p>
tensorflow
0
3,570
49,797,977
Pandas - Reshape
<p>Can you help me to reshape a Pandas DataFrame from this:</p> <pre><code>df = pd.DataFrame({ 'Clasif1': [np.NaN, np.NaN, 'PRE', 'POST'], 'Currency': [np.NaN, np.NaN, 'LC', 'USD'], 'Unnamed: 1': ['A','01/01/2018',1,7], 'Unnamed: 2': ['A','02/01/2018',2,8], 'Unnamed: 3': ['A','03/01/2018',3,9], 'Unnamed: 4': ['B','01/01/2018',4,10], 'Unnamed: 5': ['B','02/01/2018',5,11], 'Unnamed: 6': ['B','03/01/2018',6,12] }) </code></pre> <p><a href="https://i.stack.imgur.com/G18LT.png" rel="nofollow noreferrer">Source</a></p> <p>To this:</p> <pre><code>df_result = pd.DataFrame({ 'Clasif1': ['PRE', 'POST','PRE', 'POST','PRE', 'POST','PRE', 'POST','PRE', 'POST','PRE', 'POST'], 'Currency': ['LC', 'USD','LC', 'USD','LC', 'USD','LC', 'USD','LC', 'USD','LC', 'USD'], 'A/B': ['A','A','A','A','A','A','B','B','B','B','B','B'], 'Date': ['01/01/2018','01/01/2018','02/01/2018','02/01/2018','03/01/2018','03/01/2018','01/01/2018','01/01/2018','02/01/2018','02/01/2018','03/01/2018','03/01/2018'], 'Value': [1,7,2,8,3,9,4,10,5,11,6,12] }) </code></pre> <p><a href="https://i.stack.imgur.com/hQRxr.png" rel="nofollow noreferrer">Output</a></p> <p>The result DataFrame rows order not needs necesary to match with the expected.</p> <p>Thanks for your help,</p>
<p>This is more of a customized solution, but if you can make sure the data follows this structure, you can use <code>stack</code>: </p> <pre><code>s=df.iloc[:,2:].T.set_index([0,1]).stack() s=s.to_frame('V').reset_index(level=[0,1]) s=s.join(df.iloc[:,:2]).sort_values([0,1]) s Out[226]: 0 1 V Clasif1 Currency 2 A 01/01/2018 1 PRE LC 3 A 01/01/2018 7 POST USD 2 A 02/01/2018 2 PRE LC 3 A 02/01/2018 8 POST USD 2 A 03/01/2018 3 PRE LC 3 A 03/01/2018 9 POST USD 2 B 01/01/2018 4 PRE LC 3 B 01/01/2018 10 POST USD 2 B 02/01/2018 5 PRE LC 3 B 02/01/2018 11 POST USD 2 B 03/01/2018 6 PRE LC 3 B 03/01/2018 12 POST USD </code></pre>
python|pandas|transform|reshape
2
3,571
49,948,876
GeoPandas: How to obtain bounding boxes for every geometry in a geodataframe
<p>I am using GeoPandas in python and have a valid GeoDataframe of polygons. </p> <pre><code>0 POLYGON Z ((68.70999999999999 623.1 0, 35.71 6... 1 POLYGON Z ((221.33 645.02 0, 185.7 640.33 0, 1... 2 POLYGON Z ((150.3 650 0, 160.9 650 0, 150.58 6... </code></pre> <p>I want to obtain a new dataframe that has the bounding box coordinates for each row in the dataframe. </p> <p>Now I am getting some odd behavior for GeoPandas. </p> <p>Say I name the GeoDataFrame <code>gdf</code>, then using the code:</p> <pre><code>gdf.bounds </code></pre> <p>I get the corresponding error. I have no clue what this error is supposed to mean, since I did not pass any values into the <code>bounds</code> method--they were passed implicitly. </p> <pre><code>ValueError: Shape of passed values is (1, 110042), indices imply (4, 110042) </code></pre> <p>When I try: <code>gdf.geometry.bounds</code> I get the same <code>ValueError...</code></p> <p>However, when I do it this way, I get a valid answer:</p> <pre><code>gdf.head(10).bounds </code></pre> <p>I get </p> <pre><code> minx miny maxx maxy 0 0.00 618.15 68.71 650.00 1 169.56 640.33 221.33 650.00 2 150.30 648.64 160.90 650.00 </code></pre> <p>So <code>gdf</code> and <code>gdf.head()</code> are not any different, yet one gives me an error and one does not. Does anyone know the correct way to get the bounding boxes corresponding to each row. </p>
<p>You can also try the following:</p> <pre><code># remove empty geometry valid_geom = gdf[gdf.geometry.map(lambda z: True if not z.is_empty else False)] # get the exterior coordinates of each geometry valid_geom.geometry.map(lambda z: z.exterior.xy) # or in one line gdf[gdf.geometry.map(lambda z: True if not z.is_empty else False)].geometry.map(lambda z: z.exterior.xy) </code></pre> <p>This would result in the following output: for each row you get the exterior (x, y) coordinate arrays of the polygon. If you want the bounding box itself as (minx, miny, maxx, maxy), map <code>lambda z: z.bounds</code> instead.</p> <pre><code>0 ([346494.47052450513, 346512.1633455531, 34642... 1 ([347156.6195963654, 347140.5694171803, 347106... 2 ([347374.2493280142, 347343.280266067, 347331.... 3 ([347752.9399173185, 347732.0804000348, 347699... 4 ([352462.7065634858, 352421.82634455897, 35239... 5 ([352398.84073305037, 352366.62657852937, 3523... 6 ([351619.2911484046, 351581.3489685701, 351559... 7 ([349298.04394918215, 349284.4299869118, 34926... 8 ([349402.6562116009, 349390.3714050767, 349364... 9 ([347447.35067824554, 347427.2888365253, 34740... 10 ([351038.9227137904, 351023.75894022046, 35101... 11 ([352360.8991716495, 352311.8060843693, 352289... 12 ([348053.8637179602, 348014.5578245763, 347995... 13 ([350854.3664365387, 350802.39711500367, 35075... 14 ([350661.291738528, 350539.01532645256, 350497... 15 ([349634.9936554617, 349617.43041924713, 34959... 16 ([346588.703008323, 346576.2541223159, 346560.... 17 ([347323.7364982413, 347311.6537559405, 347289... 18 ([347592.9326738138, 347588.24603437353, 34757... 19 ([347871.4965194545, 347852.9032783319, 347846... 20 ([349503.7927385038, 349484.6946827946, 349482... 21 ([349917.505834857, 349907.19522809517, 349885... 22 ([350254.82670837734, 350243.1101097837, 35024... dtype: object </code></pre>
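<p>If the <code>ValueError</code> on the full frame is caused by empty or missing geometries (an assumption here, since the offending rows aren't shown, but it is a common reason for <code>.bounds</code> working on a <code>head()</code> slice and failing on the whole GeoDataFrame), a minimal sketch that drops them first and then takes the vectorised bounds could look like this:</p>
<pre><code>import geopandas as gpd

# hypothetical file name; replace with your own data source
gdf = gpd.read_file('polygons.shp')

# keep only rows whose geometry is neither missing nor empty
valid = gdf[gdf.geometry.notna() &amp; ~gdf.geometry.is_empty]

# one row of (minx, miny, maxx, maxy) per remaining geometry
bboxes = valid.bounds
print(bboxes.head())
</code></pre>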
geospatial|shapely|geopandas
2
3,572
63,990,844
How to generate values based on conditions for new columns?
<p>I have the following data frame:</p> <pre><code>Hotel_id Month_Year Chef_Id Chef_is_masterchef Transition 2400188 February-2018 4597566 1 0 2400188 March-2018 4597566 1 0 2400188 April-2018 4597566 1 0 2400188 May-2018 4597566 1 0 2400188 June-2018 4597566 1 0 2400188 July-2018 4597566 1 0 2400188 August-2018 4597566 1 0 2400188 September-2018 4597566 0 1 2400188 October-2018 4597566 0 0 2400188 November-2018 4597566 0 0 2400188 December-2018 4597566 0 0 2400188 January-2019 4597566 0 0 2400188 February-2019 4597566 0 0 2400188 March-2019 4597566 0 0 2400188 April-2019 4597566 0 0 2400188 May-2019 4597566 0 0 2400614 May-2015 2297544 0 0 2400614 June-2015 2297544 0 0 2400614 July-2015 2297544 0 0 2400614 August-2015 2297544 0 0 2400614 September-2015 2297544 0 0 2400614 October-2015 2297544 0 0 2400614 November-2015 2297544 0 0 2400614 December-2015 2297544 0 0 2400614 January-2016 2297544 1 1 2400614 February-2016 2297544 1 0 2400614 March-2016 2297544 1 0 3400624 May-2016 2597531 0 0 3400624 June-2016 2597531 0 0 3400624 July-2016 2597531 0 0 3400624 August-2016 2597531 1 1 2400133 February-2016 4597531 0 0 2400133 March-2016 4597531 0 0 2400133 April-2016 4597531 0 0 2400133 May-2016 4597531 0 0 2400133 June-2016 4597531 0 0 2400133 July-2016 4597531 0 0 2400133 August-2016 4597531 1 1 2400133 September-2016 4597531 1 0 2400133 October-2016 4597531 1 0 2400133 November-2016 4597531 1 0 2400133 December-2016 4597531 1 0 2400133 January-2017 4597531 1 0 2400133 February-2017 4597531 1 0 2400133 March-2017 4597531 1 0 2400133 April-2017 4597531 1 0 2400133 May-2017 4597531 1 0 </code></pre> <p>When the transition takes place from <strong>0 to 1</strong> or <strong>1 to 0</strong> in the <strong>Chef_is_Masterchef</strong> column, this transition is indicated in the <strong>Transition</strong> column as <strong>1</strong>.</p> <p>Actually, I thought of creating another column (named as &quot;<strong>Var</strong>&quot;) where the values will be filled as mentioned below for the original data frame,</p> <p><strong>Expected data frame:</strong></p> <pre><code>Hotel_id Month_Year Chef_Id Chef_is_masterchef Transition Var 2400188 February-2018 4597566 1 0 -7 2400188 March-2018 4597566 1 0 -6 2400188 April-2018 4597566 1 0 -5 2400188 May-2018 4597566 1 0 -4 2400188 June-2018 4597566 1 0 -3 2400188 July-2018 4597566 1 0 -2 2400188 August-2018 4597566 1 0 -1 2400188 September-2018 4597566 0 1 0 2400188 October-2018 4597566 0 0 1 2400188 November-2018 4597566 0 0 2 2400188 December-2018 4597566 0 0 3 2400188 January-2019 4597566 0 0 4 2400188 February-2019 4597566 0 0 5 2400188 March-2019 4597566 0 0 6 2400188 April-2019 4597566 0 0 7 2400188 May-2019 4597566 0 0 8 2400614 May-2015 2297544 0 0 -8 2400614 June-2015 2297544 0 0 -7 2400614 July-2015 2297544 0 0 -6 2400614 August-2015 2297544 0 0 -5 2400614 September-2015 2297544 0 0 -4 2400614 October-2015 2297544 0 0 -3 2400614 November-2015 2297544 0 0 -2 2400614 December-2015 2297544 0 0 -1 2400614 January-2016 2297544 1 1 0 2400614 February-2016 2297544 1 0 1 2400614 March-2016 2297544 1 0 2 3400624 May-2016 2597531 0 0 -3 3400624 June-2016 2597531 0 0 -2 3400624 July-2016 2597531 0 0 -1 3400624 August-2016 2597531 1 1 0 2400133 February-2016 4597531 0 0 -6 2400133 March-2016 4597531 0 0 -5 2400133 April-2016 4597531 0 0 -4 2400133 May-2016 4597531 0 0 -3 2400133 June-2016 4597531 0 0 -2 2400133 July-2016 4597531 0 0 -1 2400133 August-2016 4597531 1 1 0 2400133 September-2016 4597531 1 0 1 2400133 October-2016 4597531 1 0 2 2400133 November-2016 
4597531 1 0 3 2400133 December-2016 4597531 1 0 4 2400133 January-2017 4597531 1 0 5 2400133 February-2017 4597531 1 0 6 2400133 March-2017 4597531 1 0 7 2400133 April-2017 4597531 1 0 8 2400133 May-2017 4597531 1 0 9 </code></pre> <p>If observed, at the point of transition in the <strong>Var</strong> column I am giving the value as zero and for the rows before and after I am maintaining the corresponding integer values.</p> <p><strong>But after using the below code I had an issue in the Var column,</strong></p> <pre><code>s = df['Chef_is_masterchef'].eq(0).groupby(df['Chef_Id']).transform('sum') df['var'] = df.groupby('Chef_Id').cumcount().sub(s) </code></pre> <p><strong>Output from the above code</strong>:</p> <pre><code>Hotel_id Month_Year Chef_Id Chef_is_masterchef Transition Var 2400188 February-2018 4597566 1 0 -9 2400188 March-2018 4597566 1 0 -8 2400188 April-2018 4597566 1 0 -7 2400188 May-2018 4597566 1 0 -6 2400188 June-2018 4597566 1 0 -5 2400188 July-2018 4597566 1 0 -4 2400188 August-2018 4597566 1 0 -3 2400188 September-2018 4597566 0 1 -2 2400188 October-2018 4597566 0 0 -1 2400188 November-2018 4597566 0 0 0 2400188 December-2018 4597566 0 0 1 2400188 January-2019 4597566 0 0 2 2400188 February-2019 4597566 0 0 3 2400188 March-2019 4597566 0 0 4 2400188 April-2019 4597566 0 0 5 2400188 May-2019 4597566 0 0 6 2400614 May-2015 2297544 0 0 -8 2400614 June-2015 2297544 0 0 -7 2400614 July-2015 2297544 0 0 -6 2400614 August-2015 2297544 0 0 -5 2400614 September-2015 2297544 0 0 -4 2400614 October-2015 2297544 0 0 -3 2400614 November-2015 2297544 0 0 -2 2400614 December-2015 2297544 0 0 -1 2400614 January-2016 2297544 1 1 0 2400614 February-2016 2297544 1 0 1 2400614 March-2016 2297544 1 0 2 3400624 May-2016 2597531 0 0 -3 3400624 June-2016 2597531 0 0 -2 3400624 July-2016 2597531 0 0 -1 3400624 August-2016 2597531 1 1 0 2400133 February-2016 4597531 0 0 -6 2400133 March-2016 4597531 0 0 -5 2400133 April-2016 4597531 0 0 -4 2400133 May-2016 4597531 0 0 -3 2400133 June-2016 4597531 0 0 -2 2400133 July-2016 4597531 0 0 -1 2400133 August-2016 4597531 1 1 0 2400133 September-2016 4597531 1 0 1 2400133 October-2016 4597531 1 0 2 2400133 November-2016 4597531 1 0 3 2400133 December-2016 4597531 1 0 4 2400133 January-2017 4597531 1 0 5 2400133 February-2017 4597531 1 0 6 2400133 March-2017 4597531 1 0 7 2400133 April-2017 4597531 1 0 8 2400133 May-2017 4597531 1 0 9 </code></pre> <p>If Observed, for the Chef_Id = 4597566 you can see at the point of transition the value is different instead of zero in the Var column.</p> <p>This creates a problem because, at the point of transition, I have to select rows including up to 3 months before and 2 months after for each id. Also at the point of transition, I have to select rows including up to 6 months before and 5 months after for each id using the below code:</p> <pre><code>df1 = df[df['var'].between(-3, 2)] print (df1) df2 = df[df['var'].between(-6, 5)] print (df2) </code></pre> <p>So please let me know the solution.</p> <p>Thanks in advance!</p>
<p>IIUC, use <code>pandas.DataFrame.groupby.transform</code> with <code>numpy.arange</code> and <code>numpy.argmax</code>:</p> <pre><code>df[&quot;Var&quot;] = df.groupby(&quot;Chef_Id&quot;)[&quot;Transition&quot;].transform(lambda x: np.arange(x.size) - np.argmax(x)) print(df) </code></pre> <p>Output:</p> <pre><code> Hotel_id Month_Year Chef_Id Chef_is_masterchef Transition Var 0 2400188 February-2018 4597566 1 0 -7 1 2400188 March-2018 4597566 1 0 -6 2 2400188 April-2018 4597566 1 0 -5 3 2400188 May-2018 4597566 1 0 -4 4 2400188 June-2018 4597566 1 0 -3 5 2400188 July-2018 4597566 1 0 -2 6 2400188 August-2018 4597566 1 0 -1 7 2400188 September-2018 4597566 0 1 0 8 2400188 October-2018 4597566 0 0 1 9 2400188 November-2018 4597566 0 0 2 10 2400188 December-2018 4597566 0 0 3 11 2400188 January-2019 4597566 0 0 4 12 2400188 February-2019 4597566 0 0 5 13 2400188 March-2019 4597566 0 0 6 14 2400188 April-2019 4597566 0 0 7 15 2400188 May-2019 4597566 0 0 8 16 2400614 May-2015 2297544 0 0 -8 17 2400614 June-2015 2297544 0 0 -7 18 2400614 July-2015 2297544 0 0 -6 19 2400614 August-2015 2297544 0 0 -5 20 2400614 September-2015 2297544 0 0 -4 21 2400614 October-2015 2297544 0 0 -3 22 2400614 November-2015 2297544 0 0 -2 23 2400614 December-2015 2297544 0 0 -1 24 2400614 January-2016 2297544 1 1 0 25 2400614 February-2016 2297544 1 0 1 26 2400614 March-2016 2297544 1 0 2 27 3400624 May-2016 2597531 0 0 -3 28 3400624 June-2016 2597531 0 0 -2 29 3400624 July-2016 2597531 0 0 -1 30 3400624 August-2016 2597531 1 1 0 31 2400133 February-2016 4597531 0 0 -6 32 2400133 March-2016 4597531 0 0 -5 33 2400133 April-2016 4597531 0 0 -4 34 2400133 May-2016 4597531 0 0 -3 35 2400133 June-2016 4597531 0 0 -2 36 2400133 July-2016 4597531 0 0 -1 37 2400133 August-2016 4597531 1 1 0 38 2400133 September-2016 4597531 1 0 1 39 2400133 October-2016 4597531 1 0 2 40 2400133 November-2016 4597531 1 0 3 41 2400133 December-2016 4597531 1 0 4 42 2400133 January-2017 4597531 1 0 5 43 2400133 February-2017 4597531 1 0 6 44 2400133 March-2017 4597531 1 0 7 45 2400133 April-2017 4597531 1 0 8 46 2400133 May-2017 4597531 1 0 9 </code></pre>
python|pandas|dataframe|pandas-groupby
2
3,573
64,112,510
How to consolidate lots of series data to make another dataframe, using the stack function
<p>I'd like to consolidate a dataframe, but I don't know how to upload the dataframe here, so I just leave a link below.<br /> The data (the original data on my spreadsheet) that I want to consolidate is in the form of a series or dataframe.</p> <p>Part#: consists of an 8~13 character set (alphabet and numbers mixed)<br /> Description: always right below the Part#.<br /> Ref#: lots of Ref# separated by commas. At the end of the last Ref#, there is no comma.</p> <p>From my last question, someone advised me to find Part# with iloc. But I will have a lot of rows to consolidate, so I cannot afford to designate them one by one. Is there anybody who can help me achieve this?</p> <p>With the stack function, I want to stack the Ref# values, and every different Ref# has to have its own Part# and description according to the value above it in the original data.</p> <p>How can I build up Python code for this?</p> <p><a href="https://docs.google.com/spreadsheets/d/10zSKfXavaXWl1MOo_ScW60rbERUDiQtDi6p-UbrUgro/edit?usp=sharing" rel="nofollow noreferrer">https://docs.google.com/spreadsheets/d/10zSKfXavaXWl1MOo_ScW60rbERUDiQtDi6p-UbrUgro/edit?usp=sharing</a></p>
<pre><code>import pandas as pd # create dataframe df = pd.DataFrame({ &quot;a&quot; : [ 'A2C02158300', 'D REC/BAS16-03W,100V,250mA,SOD323,0s,SMD', 'D201,D206,D218,D219,D222,D302,D308,D408,', 'D409,D501,D502,D505,D506,D507,D508', 'A2C02250500', 'T BIP/PUMD3,SOT363,SMD SOLDERING', 'T209,T501,T502', 'A2C00004540', 'CY-AIIA 5.6K 1% 1/16W 0603', 'R107,R124,R125,R126,R209,R214,R255,R329,', 'R377,R404,R426', 'A2C00000243', 'ZENER DIODE(A/S)', 'Z119', 'A2C01888600', 'R LIN,10K,5%,TK200,63mW,0402', 'R101,R102,R106,R120,R184,R187,R289,R291', ',', 'R317,R347,R400,R432,R449,R450,R464,R514', ',', 'R515,R524,R615,R720,R753,R779,R780,R781', ',', 'R784,R787,R788,R789,R790' ] }) df.head(100) </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/XPU1i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XPU1i.png" alt="Output" /></a></p> <p>Then we create new dataframe with &quot;part&quot; and &quot;description&quot; columns. Part column is based on regexp, which probably should be changed (i don't know format of the part name):</p> <pre><code>df1 = pd.DataFrame({ 'part': df[df['a'].str.match('A\dC\d{8}')]['a'].tolist(), 'description': df.iloc[df[df['a'].str.match('A\dC\d{8}')].index + 1]['a'].tolist() }) df1.head() </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/asqFJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/asqFJ.png" alt="Output" /></a></p> <p>Then we create a temporary dataframe for merging of ref rows:</p> <pre><code>df2 = pd.merge(df, df1, left_on='a', right_on='part', how='left') df2.drop(df2[df2['a'].isin(df2['description'])].index, inplace=True) df2.loc[df2['a'] == df2['part'], 'a'] = '' df2['part'].fillna(method='ffill', inplace=True) df2.head() </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/s5qZ8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s5qZ8.png" alt="Output" /></a></p> <p>Then we merge ref rows:</p> <pre><code>df1 = df1.merge(df2.groupby(['part'])['a'].agg([('a', ''.join)]), on='part').rename(columns={'a': 'ref'}) df1.head() </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/22fVo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/22fVo.png" alt="Output" /></a></p> <p>And finally we explode ref column:</p> <pre><code>df1 = df1.set_index(['part', 'description']).apply(lambda x: x.str.split(',').explode()).reset_index() df1 </code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/oi4YQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oi4YQ.png" alt="Output" /></a></p>
python|pandas|stack|character|apply
1
3,574
63,816,321
Why Pytorch autograd need another vector to backward instead of computing Jacobian?
<p>To perform <code>backward</code> in Pytorch, we can use an optional parameter <code>y.backward(v)</code> to compute the Jacobian matrix multiplied by <code>v</code>:</p> <pre><code>x = torch.randn(3, requires_grad=True) y = x * 2 v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float) y.backward(v) print(x.grad) </code></pre> <p>I think that costs the same to compute the Jacobian matrix, because each node in the AD graph which is necessary to compute the Jacobian matrix is still computed. So why not Pytorch doesn't want to give us the Jacobian matrix?</p>
<p>When you call backward(), PyTorch updates the <code>grad</code> of each learnable parameter with the gradient of some loss function <code>L</code> w.r.t. that parameter. It has been designed with Gradient Descent [GD] (and its variants) in mind. Once the gradient has been computed you can update each parameter with <code>x = x - learning_rate * x.grad</code>. In the background all the intermediate values needed for the Jacobian are computed, but the full matrix is never materialized; a single backward pass only gives the vector-Jacobian product, and that is what one needs (generally) when applying GD optimization. The vector <code>[0.1, 1.0, 0.0001]</code> lets you reduce the output to a scalar so that x.grad will be a vector (and not a matrix, in case you do not reduce), and hence GD is well defined. You could, however, obtain the Jacobian using backward with one-hot vectors. For example, in this case:</p> <pre><code>x = torch.randn(3, requires_grad=True) y = x * 2 J = torch.zeros(x.shape[0],x.shape[0]) for i in range(x.shape[0]): v = torch.tensor([1 if j==i else 0 for j in range(x.shape[0])], dtype=torch.float) y.backward(v, retain_graph=True) J[:,i] = x.grad x.grad.zero_() print(J) </code></pre>
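<p>As a side note, if you really do want the full Jacobian, newer PyTorch versions (1.5+) ship a helper so you don't have to write the one-hot loop yourself. A small sketch, assuming a recent PyTorch:</p>
<pre><code>import torch

x = torch.randn(3, requires_grad=True)

# jacobian() evaluates the function and returns d(output)/d(input)
J = torch.autograd.functional.jacobian(lambda t: t * 2, x)
print(J)  # 3x3 matrix with 2s on the diagonal, since y = 2*x
</code></pre>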
python|optimization|pytorch|backpropagation|automatic-differentiation
3
3,575
46,655,518
How to one hot encode a large dataframe when multiple columns contain the same values?
<p>The title essentially captures my problem. </p> <p>I have a dataframe and multiple columns have values such as <code>[0,1]</code> and if I were to go and one hot encode the df, I'd have multiple columns with the same name. </p> <p>The tedious solution would be to manually create unique columns but I have 58 columns that are categorical so that doesn't seem very efficient. </p> <p>I'm not sure if this will be helpful, but here is the <code>head()</code> of my dataframe. </p> <pre><code>x2 x3 x4 x5 x6 x7 x8 x9 x10 x11 ... z217 z218 z219 z220 z221 z222 subject phase state output 0 0 0 1 -300.361218 0.886360 -2.590886 225.001899 0.006204 0.000037 -0.000013 ... 0.005242 0.024971 -1017.620978 -382.850838 -48.275711 -2.040336 A 3 B 0 1 0 0 1 -297.126090 0.622211 -3.960940 220.179017 0.006167 -0.000014 -0.000003 ... 0.001722 0.023595 91.229094 24.802230 1.783950 0.022620 A 3 C 0 2 0 0 1 -236.460253 0.423640 -12.656341 139.453445 0.006276 -0.000028 0.000022 ... -0.010894 -0.036318 -188.232347 -17.474861 -1.005571 -0.021628 A 3 B 0 3 0 0 1 33.411458 2.854415 -1.962432 3.208911 0.009752 -0.000273 -0.000024 ... -0.034184 -0.047734 185.122907 -549.282067 542.193381 -178.049926 A 3 A 0 4 0 0 1 -118.125214 2.009809 -3.291637 34.874176 0.007598 0.000001 -0.000022 ... 0.001963 0.004084 35.207794 -78.143166 57.084208 -13.700212 A 4 C 0 </code></pre>
<p>You are probably already using <code>pandas.get_dummies</code>? If not, this function converts categorical columns into multiple indicator columns (one hot encoding).</p> <p>There is a 'prefix' argument to this function which exists specifically for your case. This can be a list of strings (its length must be equal to the number of columns being encoded). In your case though, you can make it a dictionary wherein you map column names to prefixes. So, something like:</p> <pre><code>pd.get_dummies(df, prefix={'x3': 'x3', 'x4': 'x4'}) </code></pre> <p>This will add columns like <code>x3_0, x3_1 ... x4_0, x4_1 ...</code></p>
python|pandas|one-hot-encoding
1
3,576
46,691,873
Datashader - Can it work with timestamp on x-axis without converting it to milliseconds as shown in an example
<p>I am going through a notebook that plots time series using datashader and noticed that they have converted the time series values to 'ms' and then used these values for the x-axis.</p> <p><a href="https://anaconda.org/jbednar/tseries/notebook" rel="nofollow noreferrer">https://anaconda.org/jbednar/tseries/notebook</a></p> <p>Can I have the x-axis as datetime values while plotting time series data, or does it have to be converted to integer or float format?</p> <p>Thanks</p>
<p>Bokeh's low level, foundational representation of datetime values is "floating point milliseconds since epoch". So sending that is always an option. However, Bokeh can recognize and generally convert most common datetime data types automatically: numpy datetime arrays, Pandas datetime indices and series, python datetime objects, etc. so there is usually no need to convert to ms yourself. </p>
python|pandas|numpy|bokeh|datashader
1
3,577
32,751,077
Splitting & Moving cell in Pandas
<p>I have a slight problem with my pandas DataFrame. </p> <p>As the image shows, the first row has <code>released_date</code> as "Released 2006" while all other values for the same column have the format "Released MMM DD". </p> <p>I would like to split the first cell under <code>released_date</code> to "Released" and "2006", copy "2006" to year column and subsequently move everything by one column. Any ideas? </p> <p>Current Format: </p> <pre><code>...|**released_date**| **year** | **genre** | ... ...| Released 2006 | Arcade | Comic |... </code></pre> <p>Desired output format:</p> <pre><code>...|**released_date**| **year** | **genre** | ... ...| Released | 2006 | Arcade |... </code></pre> <p>Thanks in advance!!</p> <p><a href="https://i.stack.imgur.com/gDkfg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gDkfg.png" alt="enter image description here"></a></p> <p>Here's the code to read the file in:</p> <pre><code>import pandas as pd df = pd.read_csv("IndieGameCSV/page_1.csv", \ names=["Windows","Mac","Linux","engine","release_date","year","genre1",\ "theme","players","score_final","rating", "link" ], index_col=False) </code></pre> <p>and here is the data as seen in the image:</p> <pre><code>True, False, True,Custom Built,Released 2006,Arcade,Comic,Single Player, 10,1 v, http://indiedb.com/games/tux-climber, True, True, True,Custom Built,Released Oct 20, 2014,Role Playing,Fantasy,MMO, 7.3,45 , http://indiedb.com/games/pokemon-planet, True, True, True,Ren'py,Released May 16, 2015,Turn Based Strategy,Noire,Single Player, 9,1 v, http://indiedb.com/games/black-closet, True, True, False,ShiVa3D,Released Jan 2, 2015,First Person Shooter,Sci-Fi,Single Player, 7.8,4 v, http://indiedb.com/games/kumoon, </code></pre>
<p>You can use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.strings.StringMethods.extract.html" rel="nofollow"><code>str.extract</code></a> method to extract the year:</p> <pre><code>In [11]: df["release_date"].str.extract("(\d{4})") Out[11]: 0 2006 1 2014 2 2015 3 2015 Name: "release_date", dtype: object </code></pre> <hr> <p>If you wanted to split the DataFrame, you can also look at <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.strings.StringMethods.match.html" rel="nofollow"><code>.str.match</code></a> to check whether a column matches a regex:</p> <pre><code>In [12]: df["release_date"].str.match("Released \d{4}") Out[12]: 0 True 1 False 2 False 3 False Name: "release_date", dtype: bool </code></pre> <p>and then index the DataFrame with this mask and with its negation (<code>~</code>) to split it into the two groups.</p>
python|pandas
0
3,578
32,996,137
Selecting a subset of values in python
<p>I have a pandas dataframe, df, which contains a feature ('alpha') which is a list of letters {'A','B',...,'G'}</p> <p>I'd like to select from df all rows which belong to a subset of this feature, say {'A','B','C'}.</p> <p>What's the most 'pythonic' way to do this? </p> <p>I was thinking something along the lines of:</p> <pre><code>subset = {'A','B','C'} df1 = df[df['alpha'] == subset] </code></pre> <p>...but this generates an error: </p> <pre><code>"need more than 0 values to unpack" </code></pre>
<p>I think you want to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html#pandas.Series.isin" rel="nofollow"><code>isin</code></a> to test for membership, example:</p> <pre><code>In [79]: subset = {'a','b','c'} df = pd.DataFrame({'a':list('abasbvggcgasgfdasgcdce')}) df[df['a'].isin(subset)] Out[79]: a 0 a 1 b 2 a 4 b 8 c 10 a 15 a 18 c 20 c </code></pre>
python|pandas
1
3,579
38,769,935
Scipy Non-central Chi-Squared Random Variable
<p>Consider a sum of <code>n</code> squared iid normal random variables <code>S = sum (Z^2(mu, sig^2))</code>. According to <a href="https://mathoverflow.net/questions/89779/sum-of-squares-of-normal-distributions">this question</a>, <code>S / sig^2</code> has a <a href="https://en.wikipedia.org/wiki/Noncentral_chi-squared_distribution" rel="nofollow noreferrer">noncentral chi-squared distribution</a> with degrees of freedom = <code>n</code> and non-centrality parameter = <code>n*mu^2</code>.</p> <p>However, compare generating <code>N</code> of these variables <code>S</code> by summing squared normals with generating <code>N</code> noncentral chi-squared random variables directly using <code>scipy.ncx2</code>:</p> <pre><code>import numpy as np from scipy.stats import ncx2, chi2 import matplotlib.pyplot as plt n = 1000 # number of normals in sum N_MC = 100000 # number of trials mu = 0.05 sig = 0.3 ### Generate sums of squared normals ### Z = np.random.normal(loc=mu, scale=sig, size=(N_MC, n)) S = np.sum(Z**2, axis=1) ### Generate non-central chi2 RVs directly ### dof = n non_centrality = n*mu**2 NCX2 = sig**2 * ncx2.rvs(dof, non_centrality, size=N_MC) # NCX2 = sig**2 * chi2.rvs(dof, size=N_MC) # for mu = 0.0 ### Plot histos ### fig, ax = plt.subplots() ax.hist(S, bins=50, label='S') ax.hist(NCX2, bins=50, label='NCX2', alpha=0.7) ax.legend() plt.show() </code></pre> <p>This results in the histograms <a href="https://i.stack.imgur.com/IY9o2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IY9o2.png" alt="comparison of distros"></a></p> <p>I believe the mathematics is correct; could the discrepancy be a bug in the <code>ncx2</code> implementation? Setting <code>mu = 0</code> and using <code>scipy.chi2</code> looks much better: <a href="https://i.stack.imgur.com/gAIxh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gAIxh.png" alt="distros good"></a></p>
<p>The problem is in the second sentence of the question: <em>"<code>S / sig^2</code> has a noncentral chi-squared distribution with degrees of freedom = <code>n</code> and non-centrality parameter = <code>n*mu^2</code>."</em> That non-centrality parameter is not correct. It should be <code>n*(mu/sig)^2</code>.</p> <p>The standard definition of the noncentral chi-squared distribution is that it is the sum of the squares of normal variates that have mean mu and <em>standard deviation 1</em>. You are computing <code>S</code> using normal variates with standard deviation <code>sig</code>. Let's write that distribution as <code>N(mu, sig**2)</code>. By using the location-scale properties of the normal distribution, we have</p> <pre><code>N(mu, sig**2) = mu + sig*N(0, 1) = sig*(mu/sig + N(0,1)) = sig*N(mu/sig, 1) </code></pre> <p>So summing the squares of variates from <code>N(mu, sig**2)</code> is equivalent to summing the squares of <code>sig*N(mu/sig, 1)</code>. That gives <code>sig**2</code> times a noncentral chi-squared variate with noncentrality <code>mu/sig</code>.</p> <p>If you change the line where <code>non_centrality</code> is computed to</p> <pre><code>non_centrality = n*(mu/sig)**2 </code></pre> <p>the histograms line up as you expect.</p>
python|numpy|scipy
3
3,580
38,936,016
Keras - How are batches and epochs used in fit_generator()?
<p>I have a video of 8000 frames, and I'd like to train a Keras model on batches of 200 frames each. I have a frame generator that loops through the video frame-by-frame and accumulates the (3 x 480 x 640) frames into a numpy matrix <code>X</code> of shape <code>(200, 3, 480, 640)</code> -- (batch size, rgb, frame height, frame width) -- and yields <code>X</code> and <code>Y</code> every 200th frame:</p> <pre><code>import cv2 ... def _frameGenerator(videoPath, dataPath, batchSize): """ Yield X and Y data when the batch is filled. """ camera = cv2.VideoCapture(videoPath) width = camera.get(3) height = camera.get(4) frameCount = int(camera.get(7)) # Number of frames in the video file. truthData = _prepData(dataPath, frameCount) X = np.zeros((batchSize, 3, height, width)) Y = np.zeros((batchSize, 1)) batch = 0 for frameIdx, truth in enumerate(truthData): ret, frame = camera.read() if ret is False: continue batchIndex = frameIdx%batchSize X[batchIndex] = frame Y[batchIndex] = truth if batchIndex == 0 and frameIdx != 0: batch += 1 print "now yielding batch", batch yield X, Y </code></pre> <p>Here's how run <a href="https://keras.io/models/model/" rel="noreferrer"><code>fit_generator()</code></a>:</p> <pre><code> batchSize = 200 print "Starting training..." model.fit_generator( _frameGenerator(videoPath, dataPath, batchSize), samples_per_epoch=8000, nb_epoch=10, verbose=args.verbosity ) </code></pre> <p>My understanding is an epoch finishes when <code>samples_per_epoch</code> samples have been seen by the model, and <code>samples_per_epoch</code> = batch size * number of batches = 200 * 40. So after training for an epoch on frames 0-7999, the next epoch will start training again from frame 0. Is this correct?</p> <p>With this setup <strong>I expect 40 batches (of 200 frames each) to be passed from the generator to <code>fit_generator</code>, per epoch; this would be 8000 total frames per epoch</strong> -- i.e., <code>samples_per_epoch=8000</code>. Then for subsequent epochs, <code>fit_generator</code> would reinitialize the generator such that we begin training again from the start of the video. Yet this is not the case. <strong>After the first epoch is complete (after the model logs batches 0-24), the generator picks up where it left off. Shouldn't the new epoch start again from the beginning of the training dataset?</strong></p> <p>If there is something incorrect in my understanding of <code>fit_generator</code> please explain. I've gone through the documentation, this <a href="https://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py" rel="noreferrer">example</a>, and these <a href="https://github.com/fchollet/keras/issues/1627" rel="noreferrer">related</a> <a href="https://github.com/fchollet/keras/issues/107" rel="noreferrer">issues</a>. I'm using Keras v1.0.7 with the TensorFlow backend. This issue is also posted in the <a href="https://github.com/fchollet/keras/issues/3461" rel="noreferrer">Keras repo</a>.</p>
<blockquote> <p>After the first epoch is complete (after the model logs batches 0-24), the generator picks up where it left off</p> </blockquote> <p>This is an accurate description of what happens. If you want to reset or rewind the generator, you'll have to do this internally. Note that keras's behavior is quite useful in many situations. For example, you can end an epoch after seeing 1/2 the data then do an epoch on the other half, which would be impossible if the generator status was reset (which can be useful for monitoring the validation more closely).</p>
python|tensorflow|generator|keras
11
3,581
38,676,558
How to get the Average of a specific category via Python
<p>I was wondering how I could calculate the average of a specific category via Python? I have a csv file called demo.csv</p> <pre><code> import pandas as pd import numpy as np #loading the data into data frame X = pd.read_csv('demo.csv') </code></pre> <p>the two columns of interest are the <code>Category</code> and <code>Totals</code> column:</p> <pre><code>Category Totals estimates 2 2777 0.43 4 1003 0.26 4 3473 0.65 4 2638 0.17 1 2855 0.74 0 2196 0.13 0 2630 0.91 2 2714 0.39 3 2472 0.51 0 1090 0.12 </code></pre> <p>I'm interested in finding the average for the Totals corresponding with <code>Category</code> 2. I know how to do this on excel, I would just filter to only show category 2 and get the average(which ends up being 2745.5) but how would I code this via Python?</p>
<p>You can restrict your dataframe to the subset of rows you want (<code>Category == 2</code>) and then take the mean of the <code>Totals</code> column, as follows:</p> <pre><code>df[df['Category'] == 2]['Totals'].mean() 2745.5 </code></pre>
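<p>If you end up needing the average for every category at once (not just category 2), a small groupby sketch along the same lines, assuming the dataframe is the <code>X</code> loaded from <code>demo.csv</code> in the question:</p>
<pre><code>import pandas as pd

X = pd.read_csv('demo.csv')

# mean of Totals per Category; .loc[2] picks out category 2 (2745.5 here)
means = X.groupby('Category')['Totals'].mean()
print(means)
print(means.loc[2])
</code></pre>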
python|csv|pandas|average
3
3,582
63,055,541
Finding closest points to a multidimensional data point?
<p>I have a dataset that gives the values of some songs, ie something that looks like:</p> <pre><code> acousticness danceability energy instrumentalness key liveness loudness 0 0.223 0.780 0.72 0.111 1 0.422 0.231 1 0.4 0.644 0.88 0.555 0.5 0.66 0.555 2 0.5 0.223 0.145 0.76 0 0.144 0.567 . . . </code></pre> <p>I want to find the songs/ rows that are numerically closest together by these points. I have already reduced the dimensions by removing other highly correlated variables, so this is seemingly the lowest dimension. Does anyone know how I can do this? It likely requires a machine learning algorithm but I don't really know where to start.</p>
<p>You can use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.euclidean.html#scipy.spatial.distance.euclidean" rel="nofollow noreferrer">scipy.spital.distance.euclidean</a> to calculate the distance between two multidimetional points.</p> <p>In your case e.g.</p> <pre class="lang-py prettyprint-override"><code>from scipy.spatial import distance d = distance.euclidean([0.223, 0.780, 0.72, 0.111, 1, 0.422, 0.231], [0.4, 0.644, 0.88, 0.555, 0.5, 0.66, 0.555]) print(d) </code></pre> <p>You don't need machine learning for this.</p>
python|pandas|dataframe|dimension
0
3,583
63,166,570
Pandas: Convert a JSON column with multiple rows into multiple dataframe rows
<p>I have a dataframe with two columns: <code>countries</code> and <code>year</code>. The <code>countries</code> column is JSON in the form of:</p> <pre><code>[{'continent': 'europe', 'country': 'Yugoslavia', 'income': None, 'life_exp': None, 'population': 4687422}, {'continent': 'asia', 'country': 'United Korea (former)', 'income': None, 'life_exp': None, 'population': 13740000}, {'continent': 'asia', 'country': 'Tokelau', 'income': None, 'life_exp': None, 'population': 1009}, ... </code></pre> <p>How can I convert this dataframe into something like:</p> <pre><code>continent | country | income | life_exp | population | year ----------+---------+--------+----------+------------+------- europe | Yugos | None | None | 4600000 | 1800 asia | Korea | None ||None | 13000000 | 1800 asia | Tokelau | None | None | 1009 | 1800 </code></pre> <p>That's to split the JSON column into several rows with its corresponding columns, AND adding the year that corresponds to that row?</p> <p>I used <code>json_normalize()</code> on the column and it gives me the columns I need, but I don't know how I may add the year at the end</p> <p>EDIT: This is my original dataframe:</p> <pre><code>df = pd.read_json('data.json') print(df-head()) countries year 0 [{'continent': 'europe', 'country': 'Yugoslavi... 1800 1 [{'continent': 'europe', 'country': 'Svalbard'... 1801 2 [{'continent': 'europe', 'country': 'Svalbard'... 1802 3 [{'continent': 'asia', 'country': 'Wallis et F... 1803 4 [{'continent': 'asia', 'country': 'Wallis et F... 1804 </code></pre> <p>The countries column is a JSON with multiple rows of data, the year applies to all that data, so how can I convert it to a dataframe with all the rows and the corresponding year in each row?</p> <p>I know that if I do <code>pd.DataFrame(df.countries[0])</code> will produce the dataframe with all the countries for the first row, but I don't know how to add the year to a new column. I think a loop would do, but I also guess there must be a much more efficient way</p> <p><strong>EDIT</strong>: this loop would produce the result I need, but I think it is highly inefficient:</p> <pre><code>new_df = pd.DataFrame(columns=['continent', 'country', 'income', 'life_exp', 'population', 'year']) for i in range(len(old_df)): temp_df = pd.DataFrame(old_df.countries[i]) temp_df['year'] = old_df.year[i] new_df = new_df.append(temp_df) </code></pre> <p>There must be a better way, right?</p>
<p>You should add the <code>ignore_index=True</code> argument to the <code>explode</code> call to make sure the following <code>join</code> is not messed up.</p> <pre class="lang-python prettyprint-override"><code>df = pd.DataFrame(data).explode('countries', ignore_index=True) df = df.join(pd.json_normalize(df.pop('countries'))) print(df) </code></pre>
python|json|pandas
0
3,584
68,013,311
Converting one column of a dataframe from minute to hhmm and hh:mm format in python
<p>I have a data frame named 'df' where the column 'A' contains the time from 0 to 1440 minutes in a day. I want to add extra columns having the same time in hhmm format and hh:mm format. How can I do that in python?</p>
<p>You can first convert values to timedeltas:</p> <pre><code>df = pd.DataFrame({'A':[10,20,1440, 0]}) df['A'] = pd.to_timedelta(df['A'], unit='min') print (df) A 0 0 days 00:10:00 1 0 days 00:20:00 2 1 days 00:00:00 3 0 days 00:00:00 </code></pre> <p>For formatting, use:</p> <pre><code>def f(x): ts = x.total_seconds() hours, remainder = divmod(ts, 3600) minutes, seconds = divmod(remainder, 60) return ('{:02d}:{:02d}').format(int(hours), int(minutes)) df['A'] = pd.to_timedelta(df['A'], unit='min').apply(f) print (df) A 0 00:10 1 00:20 2 24:00 3 00:00 </code></pre>
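<p>If you would rather keep the original minute values and add the two extra columns the question asks for, here is a small sketch that works directly on the integer minutes (assuming column <code>A</code> still holds minutes, i.e. before the timedelta conversion above):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'A': [10, 20, 1440, 0]})

hh = (df['A'] // 60).astype(int)
mm = (df['A'] % 60).astype(int)

df['hhmm'] = hh.map('{:02d}'.format) + mm.map('{:02d}'.format)           # e.g. 0010
df['hh:mm'] = hh.map('{:02d}'.format) + ':' + mm.map('{:02d}'.format)   # e.g. 00:10
print(df)
#       A  hhmm  hh:mm
# 0    10  0010  00:10
# 1    20  0020  00:20
# 2  1440  2400  24:00
# 3     0  0000  00:00
</code></pre>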
python|pandas|dataframe|datetime
0
3,585
31,998,780
Removing rows based off of a value in a column (pandas)
<p>I'm trying to remove row values if the column 'Comment' has 'Bad Process' in it. </p> <pre><code> ID Name Comment 0 W12D0 Fine 1 W12D0 Bad Process 2 W12D0 What 3 W12D4 Fine 4 W12D5 Random 5 W12D5 Fine .. ... ... </code></pre> <p>Notice how the ID Name '<strong>W12D0</strong>' has 3 comments: <em>Fine, Bad Process, What</em>. Because that ID Name has 'Bad Process' corresponding to it, I want to remove all occurrences of W12D0. Essentially I'm looking for data that looks like this (w/ reindexing):</p> <pre><code> ID Name Comment 1 W12D4 Fine 2 W12D5 Random 3 W12D5 Fine .. ... ... </code></pre>
<p>You can use <code>.loc</code> to get the <code>ID Name</code> of all rows which have 'Bad Process' in the <code>Comment</code> column.</p> <p>You then use <code>.loc</code> again, but this time as a mask to filter out the bad records. The tilde (~) is a negation, so it finds rows in the dataframe where the ID Name is NOT in the list of bad records.</p> <pre><code>bad = df.loc[df.Comment.str.contains('Bad Process'), 'ID Name'] df_good = df.loc[~df['ID Name'].isin(bad)] &gt;&gt;&gt; df_good ID Name Comment 3 W12D4 Fine 4 W12D5 Random 5 W12D5 Fine </code></pre>
python|pandas|indexing|row|dataframe
2
3,586
31,838,044
Is there a way to write two (or more) dataframes to one excel spreadsheet?
<p>It would help me produce output that was a lot neater and a little more 'human-like' if I could use pandas and xlsxwriter in a way that would stack two dataframes, one on top of the other, on the same sheet of the Excel spreadsheet I am outputting.</p> <p>Please note that the data of the two dataframes is related but different, one being a summary of the other.</p> <p>Is there a neat way I can just take my dataframe and my summary dataframe and stack them on the same sheet?</p>
<p>Yes, it's very much possible.</p> <p>You will have to build the Excel file using xlsxwriter and keep track of the current cell as you write.</p> <p>Here is some pseudo code/syntax (I use xlsxwriter as the engine behind Pandas); note that <code>to_excel</code> takes the sheet <em>name</em> as its second argument, and both frames can be written to the same sheet as long as you reuse that name and only save at the end:</p> <pre><code>wb = pd.ExcelWriter(file, engine='xlsxwriter') tab = "My Tab" row, column = 9, 1 df1.to_excel(wb, tab, header=False, startrow=row, startcol=column, index=False) row += len(df1) + 2 column = 1 df2.to_excel(wb, tab, header=False, startrow=row, startcol=column, index=False) wb.save() </code></pre> <p>I am missing some parts in here, but all I really wanted to do was illustrate the point.</p> <p>I've built out a flimsy <a href="https://github.com/kennes913/ezpdreports/blob/master/report_class.py" rel="nofollow">Report Class</a> to do this for me. You can see some of my syntax in the <code>.write_tab</code> method.</p>
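<p>On newer pandas (roughly 1.x onwards, if I remember right) the writer can also be used as a context manager, which saves the file automatically and removes the need to keep track of a separate <code>save()</code> call:</p> <pre><code>with pd.ExcelWriter(file, engine='xlsxwriter') as wb:
    df1.to_excel(wb, 'My Tab', startrow=0, index=False)
    df2.to_excel(wb, 'My Tab', startrow=len(df1) + 2, index=False)
</code></pre>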
python|excel|pandas|dataframe
2
3,587
31,996,872
Getting the date of the last day of this [week/month/quarter/year]
<p>Is there any way to get the date (a <code>datetime</code>, <code>pd.Timestamp</code> or equivalent) of the last day of this [week/month/quarter/year] with <code>datetime</code>, <code>pandas</code> or other date &amp; time utils?</p>
<p>Using <code>datetime</code> only.</p> <pre><code>&gt;&gt;&gt; d = datetime.date.today() </code></pre> <p>Last day of week (this gives Saturday; use <code>days=6 - d.weekday()</code> if your week ends on Sunday):</p> <pre><code>&gt;&gt;&gt; d + datetime.timedelta(days=5 - d.weekday()) datetime.date(2015, 8, 15) </code></pre> <p>Last day of month (first day of the next month, minus one day):</p> <pre><code>&gt;&gt;&gt; datetime.date(year=d.year + d.month // 12, month=d.month % 12 + 1, day=1) - datetime.timedelta(days=1) datetime.date(2015, 8, 31) </code></pre> <p>Last day of quarter (first day of the next quarter, minus one day):</p> <pre><code>&gt;&gt;&gt; q_end = (d.month - 1) // 3 * 3 + 3 &gt;&gt;&gt; datetime.date(year=d.year + q_end // 12, month=q_end % 12 + 1, day=1) - datetime.timedelta(days=1) datetime.date(2015, 9, 30) </code></pre> <p>Last day of year:</p> <pre><code>&gt;&gt;&gt; datetime.date(year=d.year, month=12, day=31) datetime.date(2015, 12, 31) </code></pre> <p>EDIT: This is all pretty ugly and using a higher-level third-party library is probably best, unless there is a compelling reason not to (and there does not seem to be here).</p>
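<p>Since the question also allows pandas, one possibility is <code>Period.end_time</code> — each result is a <code>Timestamp</code> just before the next period starts, so call <code>.date()</code> or <code>.normalize()</code> if only the day is wanted (a sketch; check the frequency aliases against your pandas version):</p> <pre><code>import pandas as pd

now = pd.Timestamp.today()
end_of_week    = now.to_period('W').end_time   # weekly periods run Mon-Sun by default
end_of_month   = now.to_period('M').end_time
end_of_quarter = now.to_period('Q').end_time
end_of_year    = now.to_period('A').end_time   # 'A' = annual; newer pandas also accepts 'Y'
</code></pre>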
python|python-3.x|pandas
7
3,588
41,521,962
Bazel error: "No test targets were found, yet testing was requested"
<p>I have a small assignment where I used TensorFlow to create music:</p> <p><a href="https://github.com/tensorflow/magenta" rel="nofollow noreferrer">https://github.com/tensorflow/magenta</a></p> <p>When I run the code <code>--- bazel test //magenta:all</code> I get the following error:</p> <pre> WARNING: /home/admin/.cache/bazel/_bazel_admin/fb30f33370a5b97d4f9b1dde06f8f344/external/protobuf/protobuf.bzl:90:19: Variables HOST_CFG and DATA_CFG are deprecated in favor of strings "host" and "data" correspondingly. WARNING: /home/admin/.cache/bazel/_bazel_admin/fb30f33370a5b97d4f9b1dde06f8f344/external/protobuf/protobuf.bzl:96:28: Variables HOST_CFG and DATA_CFG are deprecated in favor of strings "host" and "data" correspondingly. INFO: Found 2 targets and 0 test targets... INFO: Elapsed time: 4.977s, Critical Path: 0.66s ERROR: No test targets were found, yet testing was requested.</pre>
<p>When you run</p> <pre><code>bazel test magenta:all </code></pre> <p>it means "execute all <code>*_test</code> rules defined in the file <code>magenta/BUILD</code>". When I look at that file, there are no tests defined there: <a href="https://github.com/tensorflow/magenta/blob/master/magenta/BUILD" rel="noreferrer">https://github.com/tensorflow/magenta/blob/master/magenta/BUILD</a></p> <p>You should try:</p> <pre><code>bazel test magenta/... </code></pre> <p>This translates to every target under the magenta folder, including those in sub-packages, so any <code>*_test</code> rules defined deeper in the tree will be picked up. For more information, please see: <a href="https://bazel.build/versions/master/docs/command-line-reference.html" rel="noreferrer">https://bazel.build/versions/master/docs/command-line-reference.html</a></p>
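<p>To check which test targets a wildcard would actually pick up (assuming a reasonably recent Bazel, run from the workspace root):</p> <pre><code>bazel query 'tests(//magenta/...)'
</code></pre> <p>If that query prints nothing, the wildcard test invocation will keep failing with "No test targets were found".</p>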
tensorflow|bazel|magenta
6
3,589
41,669,496
Counting in dataframe
<p>I have a dataframe that looks as follows:</p> <pre><code>A,B,C,D X1,desc,may 1, 1 X2,desc, june 5, 1 Y,desc, dec 8, 2 Y,desc, jan 4, 3 </code></pre> <p>I want to look at X1, X2, and Y, and sum the values in column D so that the dataframe looks as follows:</p> <pre><code>A,B X1,1 X2,1 Y,5 </code></pre> <p>So for all instances of X1 we sum them, same for X2 and Y. Is there a useful pandas function for this that I don't know about? I know a really bad solution where I could just extract everything into lists, check whether each value is present, sum that way and turn it back into a dataframe, but I'm not sure if there is a better method to do this all with pandas. Essentially it is like an aggregate.</p>
<p>If the column to group by is set as the index as is the case here:</p> <pre><code> B C D A X1 desc may 1 X2 desc june 1 Y desc dec 2 Y desc jan 3 </code></pre> <p>Simply use a group by index as below:</p> <pre><code>df1.groupby([df1.index]).D.sum() </code></pre> <p>Which yields the desired result:</p> <pre><code>A X1 1 X2 1 Y 5 Name: D, dtype: int64 </code></pre>
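<p>If <code>A</code> is still a regular column rather than the index, as in the question's raw data, the same aggregation can be written directly as:</p> <pre><code>df.groupby('A')['D'].sum()
</code></pre>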
pandas|dataframe|count
0
3,590
61,194,194
pandas to numpy matrix conversion drops columns
<p>I have the following data frame, but need it as a numpy array to pass into Keras. I need to preserve the month, year, shop_id and item_id columns, but the numpy array drops them and only keeps the item_category_id and avg_item_price.</p> <pre><code>month year shop_id item_id item_category_id avg_item_price 01 2013 0 32 160 147.333328 33 111 347.000000 35 40 247.000000 </code></pre> <p>And at the end it says that there are</p> <pre><code>[32920 rows x 2 columns] </code></pre>
<p>Those columns are being used as the (multi-)index of the DataFrame, which is why they disappear when you convert to a NumPy array; you have to reset them back into regular columns first:</p> <pre><code>df = df.reset_index() </code></pre>
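<p>A short follow-up sketch, assuming the goal is a NumPy array containing all six columns for Keras:</p> <pre><code>df = df.reset_index()   # month, year, shop_id, item_id become columns again
X = df.to_numpy()       # or df.values on older pandas
print(X.shape)          # now includes all columns, not just 2
</code></pre>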
pandas
1
3,591
68,734,504
Boxplot by two groups in pandas
<p>I have the following dataset:</p> <pre><code>df_plots = pd.DataFrame({'Group':['A','A','A','A','A','A','B','B','B','B','B','B'], 'Type':['X','X','X','Y','Y','Y','X','X','X','Y','Y','Y'], 'Value':[1,1.2,1.4,1.3,1.8,1.5,15,19,18,17,12,13]}) df_plots Group Type Value 0 A X 1.0 1 A X 1.2 2 A X 1.4 3 A Y 1.3 4 A Y 1.8 5 A Y 1.5 6 B X 15.0 7 B X 19.0 8 B X 18.0 9 B Y 17.0 10 B Y 12.0 11 B Y 13.0 </code></pre> <p>And I want to create boxplots per <code>Group</code> (there are two in the example) and in each plot to show by type. I have tried this:</p> <pre><code>fig, axs = plt.subplots(1,2,figsize=(8,6), sharey=False) axs = axs.flatten() for i, g in enumerate(df_plots[['Group','Type','Value']].groupby(['Group','Type'])): g[1].boxplot(ax=axs[i]) </code></pre> <ul> <li>Results in an <code>IndexError</code>, because the loop tries to create 4 plots.</li> </ul> <pre class="lang-py prettyprint-override"><code>--------------------------------------------------------------------------- IndexError Traceback (most recent call last) &lt;ipython-input-12-8e1150950024&gt; in &lt;module&gt; 3 4 for i, g in enumerate(df[['Group','Type','Value']].groupby(['Group','Type'])): ----&gt; 5 g[1].boxplot(ax=axs[i]) IndexError: index 2 is out of bounds for axis 0 with size 2 </code></pre> <p>Then I tried this:</p> <pre><code>fig, axs = plt.subplots(1,2,figsize=(8,6), sharey=False) axs = axs.flatten() for i, g in enumerate(df_plots[['Group','Type','Value']].groupby(['Group','Type'])): g[1].boxplot(ax=axs[i], by=['Group','Type']) </code></pre> <p>But no, I have the same problem. The expected result should have only two plots, and each plot have a box-and-whisker per Type. This is a sketch of this idea:</p> <p><a href="https://i.stack.imgur.com/Ue998.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ue998.png" alt="enter image description here" /></a></p> <p>Please, any help will be greatly appreciated, with this code I can control some aspects of the data that I can't with seaborn.</p>
<p>We can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.boxplot.html" rel="nofollow noreferrer"><code>groupby boxplot</code></a> to create subplots per <code>Group</code> and then separate each <code>boxplot</code> by <code>Type</code>:</p> <pre><code>fig, axes = plt.subplots(1, 2, figsize=(8, 6), sharey=False) df_plots.groupby('Group').boxplot(by='Type', ax=axes) plt.show() </code></pre> <p>Or without <code>subplots</code> by passing parameters directly through the function call:</p> <pre><code>axes = df_plots.groupby('Group').boxplot(by='Type', figsize=(8, 6), layout=(1, 2), sharey=False) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/VEVvV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VEVvV.png" alt="plot" /></a></p> <hr /> <p>Data and imports:</p> <pre><code>import pandas as pd from matplotlib import pyplot as plt df_plots = pd.DataFrame({ 'Group': ['A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B'], 'Type': ['X', 'X', 'X', 'Y', 'Y', 'Y', 'X', 'X', 'X', 'Y', 'Y', 'Y'], 'Value': [1, 1.2, 1.4, 1.3, 1.8, 1.5, 15, 19, 18, 17, 12, 13] }) </code></pre>
python|pandas|matplotlib|boxplot
4
3,592
68,514,274
Parameters for LSTM with CNN in a sequential data
<p>I am doing a classification problem with ECG data. I built an LSTM model but its accuracy is not quite good, hence I am thinking of adding a CNN: passing the data through the CNN first and then feeding the CNN output to the LSTM. However, I have noticed that CNNs are mostly used in image classification. I have sequential data with 4000 time steps. Could you please help me define the parameters of the CNN model?</p> <pre><code>Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None) </code></pre> <p>Can someone explain what the in_channels, out_channels, kernel_size and stride would be for sequence data with 4000 time steps?</p>
<p>Well, that's yours to define; it's the actual architectural decision you need to take to construct your model. The following is not a solution to your question, but it might give you some ideas.</p> <ul> <li><p>You could pass each timestep through the CNN and retrieve a sequence of feature vectors corresponding to the CNN's outputs at consecutive time steps. Your CNN input would be shaped as <code>(batch_size, channel, height, width)</code> and output something like <code>(batch_size, feature_length)</code>. Stacking the per-timestep results would give you <code>(batch_size, sequence_length, feature_length)</code> (a rough sketch of this appears below).</p> </li> <li><p>You could use a 3D convolutional layer; in that case, you can work straight away with shape <code>(batch_size, sequence_length, channel, height, width)</code>. This is much more computation-intensive, and since you are already planning on using an LSTM, it might be a little over-complex.</p> </li> </ul> <p>The number of channels, kernel sizes and number of filters in each convolutional layer aren't obvious choices. You need to decide on them based on your setup: how large your dataset is, how many classes you have, how complex the task is (if it is not a classification problem).</p> <p>My best advice is to start from a well-known CNN architecture such as VGG or ResNet and work from there. Better yet, look at the literature and see if someone else has faced this problem; you will most likely find interesting ideas that will help shape your project.</p>
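<p>To make the first bullet concrete, here is a minimal PyTorch sketch of a per-timestep CNN feeding an LSTM classifier. All sizes (channels, <code>feature_length</code>, <code>hidden_size</code>, <code>num_classes</code>) are placeholders to tune for your ECG data, and a 1-D convolution over the raw signal may suit ECG better than <code>Conv2d</code>:</p> <pre><code>import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, feature_length=64, hidden_size=128, num_classes=5):
        super().__init__()
        # Per-timestep CNN: every timestep is treated as a small "image".
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # (N, 16, 1, 1)
            nn.Flatten(),                       # (N, 16)
            nn.Linear(16, feature_length),      # (N, feature_length)
        )
        self.lstm = nn.LSTM(feature_length, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                       # x: (batch, seq_len, 1, H, W)
        b, s = x.shape[:2]
        feats = self.cnn(x.reshape(b * s, *x.shape[2:]))   # (b*s, feature_length)
        feats = feats.reshape(b, s, -1)                     # (b, s, feature_length)
        out, _ = self.lstm(feats)                           # (b, s, hidden_size)
        return self.fc(out[:, -1])                          # logits from the last timestep
</code></pre>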
parameters|pytorch|conv-neural-network|lstm
1
3,593
68,581,656
Issues with conditionals in pandas
<p>I have this <code>df</code>:</p> <pre><code> CODE DATE MONTH_DAY PPTOT SECTOR_1 0 472606FA 2001-01-01 01-01 0.0 SN 1 472606FA 2001-01-02 01-02 0.0 SN 2 472606FA 2001-01-03 01-03 0.7 SN 3 472606FA 2001-01-04 01-04 NaN SN 4 472606FA 2001-01-05 01-05 NaN SN ... ... ... ... ... 248220 47E2A75C 2021-04-26 04-26 0.0 SI 248221 47E2A75C 2021-04-27 04-27 0.0 SI 248222 47E2A75C 2021-04-28 04-28 0.0 SI 248223 47E2A75C 2021-04-29 04-29 0.0 SI 248224 47E2A75C 2021-04-30 04-30 NaN SI [248225 rows x 5 columns] </code></pre> <p>I want to apply 2 conditionals. When <code>df['PPTOT'] &lt;= 0</code> and <code>df['SECTOR_1']=='CS'</code>, <code>df['PPTOT']</code> must be <code>np.nan</code>. So i did this code:</p> <pre><code>df.loc[(df['PPTOT'] &lt;= 0 &amp; df['SECTOR_1']=='CS'), 'PPTOT'] = np.nan </code></pre> <p>But i get this error:</p> <pre><code>TypeError: unsupported operand type(s) for &amp;: 'bool' and 'str' </code></pre> <p>So i wrote the parenthesis only in <code>df['PPTOT'] &lt;= 0</code> like this:</p> <pre><code>df.loc[(df['PPTOT'] &lt;= 0) &amp; df['SECTOR_1']=='CS', 'PPTOT'] = np.nan </code></pre> <p>But i get again another error:</p> <pre><code>ValueError: unknown type str64 </code></pre> <p>How can i solve this? or maybe there is another efficient or accurate way to do this?</p> <p>Thanks in advance.</p>
<p>The <code>&amp;</code> operator has higher precedence than <code>&lt;=</code>, <code>==</code>, etc., so you have to add parentheses around the second condition as well.</p> <pre><code>df.loc[(df['PPTOT'] &lt;= 0) &amp; (df['SECTOR_1']=='CS'), 'PPTOT'] = np.nan </code></pre>
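<p>An equivalent variation that sidesteps the precedence issue entirely is to use the comparison methods, which bind like normal method calls:</p> <pre><code>df.loc[df['PPTOT'].le(0) &amp; df['SECTOR_1'].eq('CS'), 'PPTOT'] = np.nan
</code></pre>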
python|pandas
2
3,594
68,665,191
Change numeric values in rows of CSV if string in same row has certain value
<p>I have a <em><strong>csv</strong></em> that looks like below with up to <em><strong>15000 lines</strong></em>.<br> The numeric values of <code>Start</code> and <code>End</code> are between 0 and 300.<br> I am looking for a way to parse through the file, search for rows starting with <code>white</code>, then check the <code>Start</code> value and the <code>End</code> value of this row with the following conditions:</p> <ol> <li>if the value ≤ 150 then add 150</li> <li>if the value is &gt; 150 then subtract 150</li> </ol> <p>Finally, overwrite the source file with the edits.<br> I am looking for a way to realize that with bash or python. Any help is much appreciated!</p> <p>Raw Data:</p> <pre><code>Color, Start, End white, 0, 1, black, 23, 150, black, 150, 24, white, 24, 152, black, 152, 25, black, 25, 154, black, 154, 81, white, 99, 220, ... </code></pre> <p>Final Data:</p> <pre><code>Color, Start, End white, 150, 151, black, 23, 150, black, 150, 24, white, 174, 2, black, 152, 25, black, 25, 154, black, 154, 81, white, 249, 70, ... </code></pre>
<pre><code>$ awk 'BEGIN{FS=&quot; *, *&quot;; OFS=&quot;, &quot;; n=150} $1==&quot;white&quot;{ for (i=2; i&lt;NF; i++) $i+=($i&gt;n ? -n : n) } 1' file Color, Start, End white, 150, 151, black, 23, 150, black, 150, 24, white, 174, 2, black, 152, 25, black, 25, 154, black, 154, 81, white, 249, 70, </code></pre>
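<p>Since the question also mentions python, here is a rough standard-library sketch of the same transformation; it assumes the layout shown above, including the trailing comma that produces an empty last field, and it will not preserve the spaces after the commas:</p> <pre><code>import csv

with open('data.csv', newline='') as f:
    rows = list(csv.reader(f, skipinitialspace=True))

header, body = rows[0], rows[1:]
for row in body:
    if row and row[0] == 'white':
        for i in (1, 2):                       # Start and End columns
            v = int(row[i])
            row[i] = str(v - 150 if v &gt; 150 else v + 150)

with open('data.csv', 'w', newline='') as f:
    csv.writer(f).writerows([header] + body)
</code></pre>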
python|pandas|bash|csv|awk
0
3,595
68,694,047
PyTorch Lightning functools.partial error
<p>I'm using a combination of PyTorch Forecasting and PyTorch Lightning, and running into an odd error. Some code below.</p> <pre><code>batch_size = 128 train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=8) val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, num_workers=8) . . . tft = TemporalFusionTransformer.from_dataset( training, learning_rate=0.05, hidden_size=16, # biggest influence network size attention_head_size=1, dropout=0.1, hidden_continuous_size=8, output_size=7, # QuantileLoss has 7 quantiles by default loss=QuantileLoss(), log_interval=10, # log example every 10 batches reduce_on_plateau_patience=4, # reduce learning automatically ) trainer.fit( tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader ) </code></pre> <p>However, I then run into this error and I can't figure out why. Can anyone help me figure out what to do with the below error? I tried playing around with changing the syntax for the val_dataloader, but couldn't get anything to work.</p> <pre><code>Traceback (most recent call last): File &quot;/model.py&quot;, line 136, in &lt;module&gt; val_dataloaders=val_dataloader, File &quot;C:\...\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py&quot;, line 553, in fit self._run(model) File &quot;C:\...\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py&quot;, line 912, in _run self._pre_dispatch() File &quot;C:\...\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py&quot;, line 941, in _pre_dispatch self._log_hyperparams() File &quot;C:\...\venv\lib\site-packages\pytorch_lightning\trainer\trainer.py&quot;, line 970, in _log_hyperparams self.logger.save() File &quot;C:\...\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py&quot;, line 48, in wrapped_fn return fn(*args, **kwargs) File &quot;C:\...\venv\lib\site-packages\pytorch_lightning\loggers\tensorboard.py&quot;, line 249, in save save_hparams_to_yaml(hparams_file, self.hparams) File &quot;C:\...\venv\lib\site-packages\pytorch_lightning\core\saving.py&quot;, line 405, in save_hparams_to_yaml yaml.dump(v) File &quot;C:\...\venv\lib\site-packages\yaml\__init__.py&quot;, line 290, in dump return dump_all([data], stream, Dumper=Dumper, **kwds) File &quot;C:\...\venv\lib\site-packages\yaml\__init__.py&quot;, line 278, in dump_all dumper.represent(data) File &quot;C:\...\lib\site-packages\yaml\representer.py&quot;, line 27, in represent node = self.represent_data(data) File &quot;C:\...\venv\lib\site-packages\yaml\representer.py&quot;, line 52, in represent_data node = self.yaml_multi_representers[data_type](self, data) File &quot;C:\...\venv\lib\site-packages\yaml\representer.py&quot;, line 343, in represent_object 'tag:yaml.org,2002:python/object:'+function_name, state) File &quot;C:\...\venv\lib\site-packages\yaml\representer.py&quot;, line 118, in represent_mapping node_value = self.represent_data(item_value) File &quot;C:\...\venv\lib\site-packages\yaml\representer.py&quot;, line 52, in represent_data node = self.yaml_multi_representers[data_type](self, data) File &quot;C:\...\venv\lib\site-packages\yaml\representer.py&quot;, line 343, in represent_object 'tag:yaml.org,2002:python/object:'+function_name, state) File &quot;C:\...\venv\lib\site-packages\yaml\representer.py&quot;, line 118, in represent_mapping node_value = self.represent_data(item_value) File &quot;C:\...\venv\lib\site-packages\yaml\representer.py&quot;, line 52, in represent_data node = self.yaml_multi_representers[data_type](self, 
data) File &quot;C:\...\venv\lib\site-packages\yaml\representer.py&quot;, line 346, in represent_object return self.represent_sequence(tag+function_name, args) File &quot;C:\...\venv\lib\site-packages\yaml\representer.py&quot;, line 92, in represent_sequence node_item = self.represent_data(item) File &quot;C:\...\venv\lib\site-packages\yaml\representer.py&quot;, line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File &quot;C:\...\venv\lib\site-packages\yaml\representer.py&quot;, line 286, in represent_tuple return self.represent_sequence('tag:yaml.org,2002:python/tuple', data) File &quot;C:\...\venv\lib\site-packages\yaml\representer.py&quot;, line 92, in represent_sequence node_item = self.represent_data(item) File &quot;C:\...\venv\lib\site-packages\yaml\representer.py&quot;, line 52, in represent_data node = self.yaml_multi_representers[data_type](self, data) File &quot;C:\...\venv\lib\site-packages\yaml\representer.py&quot;, line 331, in represent_object if function.__name__ == '__newobj__': AttributeError: 'functools.partial' object has no attribute '__name__' Process finished with exit code 1 </code></pre>
<p>This ended up being caused by an issue with a recent pandas upgrade. Rolling back to 1.2.5 resolved the issue.</p> <pre><code>pip install --upgrade pandas==1.2.5 </code></pre> <p>Details on the problem in the link below.</p> <p><a href="https://github.com/pandas-dev/pandas/issues/42748" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/42748</a></p>
python|pytorch|pytorch-lightning
3
3,596
68,536,204
Extract value of each column and store in a list/dictionary for each index value
<p>I have a dataframe with 5 columns. I want to look through 3 of them and store in dict or list (whichever is more efficient) the values of each of the 5 columns</p> <p>Example:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>A</th> <th>B</th> <th>C</th> <th>D</th> <th>E</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>10</td> <td>20</td> <td>9</td> <td>5</td> <td>4</td> </tr> <tr> <td>2</td> <td>4</td> <td>55</td> <td>14</td> <td>5</td> <td>2</td> </tr> <tr> <td>3</td> <td>3</td> <td>3</td> <td>9</td> <td>7</td> <td>7</td> </tr> </tbody> </table> </div> <p>I would like to create three lists as such</p> <pre><code>index_1 = [10,20,4] index_2 = [4,55,2] index_3 = [3,3,7] </code></pre> <p>I have no idea how to go forward after looping through the columns</p> <pre><code>cols = ['A', 'B', 'E'] for col in cols: df[col] </code></pre>
<p>Try:</p> <pre><code>index_1, index_2, index_3 = [list(row) for row in df[[&quot;A&quot;, &quot;B&quot;, &quot;E&quot;]].values] </code></pre>
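<p>If the number of rows isn't fixed at three, a dict keyed by row position avoids unpacking into separately named variables:</p> <pre><code>rows = {f'index_{i}': list(r) for i, r in enumerate(df[['A', 'B', 'E']].values, start=1)}
# rows['index_1'] -&gt; [10, 20, 4], rows['index_2'] -&gt; [4, 55, 2], ...
</code></pre>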
python|pandas
2
3,597
68,729,622
How to assign column heading to corresponding cell values?
<p>I'm just learning Python and for my first project I am trying to re-format a excel table that I can use on GIS. The Table have many columns with x for each corresponding records. I need to assign (replace the x) with the column names and concatenate all rows separated by commas. I was told that Pandas is a very good library to accomplish this. I did started (see sample code) but I am not sure what to do next. Any help or suggestions will be greatly appreciated. Here is a visual representation of what I am trying to accomplish:</p> <p><a href="https://i.stack.imgur.com/04aqE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/04aqE.png" alt="enter image description here" /></a></p> <p>Sample Code:</p> <pre><code>import pandas as pd input_excel = r&quot;C:\Projects\... Habitat_table.xlsx&quot; # excel sheet path excel = pd.read_excel(input_excel, sheet_name = 'Species_habitat') # sheet name final_dataframe = pd.DataFrame (excel, columns=[‘Habitat_A, ‘Habitat B,C,&amp;D’, ‘Habitat_E']) # every single column name habitats = [‘Habitat_A, ‘Habitat B,C,&amp;D’, ‘Habitat_E'] for index, row in final_dataframe.iterrows(): final_string = &quot; &quot; print (final_dataframe.columns.name) for h in habitats: print(h) for c in index: if h in index.name: #checks if habitat is in column name print(h) if row[c] is not null: final_string == final_string + c.name + &quot;, &quot; print(final_string) </code></pre>
<pre><code>data_dict = { 'Species_ID': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'Habitat_A': ['X', '', 'X', 'X', '', 'X', 'X', '', 'X', ''], 'Habitat B,C,&amp;D': ['X', '', 'X', 'X', '', 'X', 'X', '', 'X', ''], 'Habitat_E': ['', 'X', '', 'X', 'X', '', 'X', 'X', '', 'X'], } df = pd.DataFrame.from_dict(data_dict) df.iloc[:, 1:] = df.iloc[:, 1:].apply(lambda x: pd.Series([y[0] if y[1] == 'X' else '' for y in x.iteritems()]), axis=1) df['All_habitats(CONCAT)'] = df.apply(lambda x: ','.join(filter(None, x[1:])), axis=1) print(df) </code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code> Species_ID Habitat_A Habitat B,C,&amp;D Habitat_E All_habitats(CONCAT) 0 1 Habitat_A Habitat B,C,&amp;D Habitat_A,Habitat B,C,&amp;D 1 2 Habitat_E Habitat_E 2 3 Habitat_A Habitat B,C,&amp;D Habitat_A,Habitat B,C,&amp;D 3 4 Habitat_A Habitat B,C,&amp;D Habitat_E Habitat_A,Habitat B,C,&amp;D,Habitat_E 4 5 Habitat_E Habitat_E 5 6 Habitat_A Habitat B,C,&amp;D Habitat_A,Habitat B,C,&amp;D 6 7 Habitat_A Habitat B,C,&amp;D Habitat_E Habitat_A,Habitat B,C,&amp;D,Habitat_E 7 8 Habitat_E Habitat_E 8 9 Habitat_A Habitat B,C,&amp;D Habitat_A,Habitat B,C,&amp;D 9 10 Habitat_E Habitat_E </code></pre> <h2>Test on 2095 rows * 19 columns from .csv (dummy data)</h2> <pre><code>import pandas as pd, time tic = time.perf_counter() df = pd.read_csv(r'c:\Users\Alex20\Documents\Habitats.csv') df.iloc[:, 1:] = df.iloc[:, 1:].apply(lambda x: pd.Series([y[0] if y[1] == 'X' else '' for y in x.iteritems()]), axis=1) df['All_habitats(CONCAT)'] = df.apply(lambda x: ','.join(filter(None, x[1:])), axis=1) print(df) print(f&quot;Processed in {time.perf_counter() - tic:0.4f} seconds&quot;) </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code> Species_ID ... All_habitats(CONCAT) 0 1 ... HabitatA,HabitatB,HabitatC,HabitatD,HabitatF,H... 1 2 ... HabitatC,HabitatG,HabitatP 2 3 ... HabitatA,HabitatB,HabitatC,HabitatE,HabitatG,H... 3 4 ... HabitatA,HabitatB,HabitatE,HabitatJ,HabitatL,H... 4 5 ... HabitatD,HabitatI,HabitatK,HabitatL,HabitatM,H... ... ... ... ... 2090 2091 ... HabitatA,HabitatB,HabitatE,HabitatF,HabitatG,H... 2091 2092 ... HabitatA,HabitatB,HabitatC,HabitatE,HabitatF,H... 2092 2093 ... HabitatB,HabitatC,HabitatD,HabitatG,HabitatH,H... 2093 2094 ... HabitatC,HabitatF,HabitatG,HabitatI,HabitatK,H... 2094 2095 ... HabitatB,HabitatE,HabitatG,HabitatI,HabitatK,H... [2095 rows x 19 columns] Processed in 0.4257 seconds </code></pre> <p><strong>.csv</strong><br /> <a href="https://i.stack.imgur.com/a1seL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a1seL.png" alt="enter image description here" /></a></p>
python|pandas
2
3,598
36,600,465
converting pandas column to np 2d array
<p>I am adapting a pandas plotting example for my own data. I have a question on converting a pandas dataframe column to the correct shape (1, 10). More specifically, the correct format is achieved with this code </p> <pre><code>z = np.random.rand(1, 10) </code></pre> <p>which produces this </p> <pre><code>array([[ 0.45671971, 0.21101451, 0.08022069, 0.80602989, 0.92816774, 0.03677719, 0.97893078, 0.97003696, 0.23232276, 0.65328171]]) </code></pre> <p>My dataframe column is created like this </p> <pre><code>y = df['col_name'].as_matrix() </code></pre> <p>which is creating this</p> <pre><code>array([218584205, 55738338, 52152386, 37920152, 35472238, 32611026, 30268255, 26709195, 25979749, 24804423], dtype=int64) </code></pre> <p>Notice it is missing the outer bracket, so the shape is (10,) rather than (1, 10). What is the correct method for converting the column to the correct form?</p>
<pre><code>y = df['col_name'].as_matrix().reshape(1, len(df['col_name'])) </code></pre> <p>Yields:</p> <pre><code>array([[218584205, 55738338, 52152386, 37920152, 35472238, 32611026, 30268255, 26709195, 25979749, 24804423]]) </code></pre>
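<p>On newer pandas, where <code>as_matrix()</code> has been removed (from 1.0 onwards, if I recall correctly), the same reshape can be done with <code>to_numpy()</code>, and <code>reshape(1, -1)</code> avoids hard-coding the length:</p> <pre><code>y = df['col_name'].to_numpy().reshape(1, -1)
</code></pre>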
python-3.x|pandas|plot|dataframe
1
3,599
53,200,550
How to extract t[i,i,:] using tensorflow?
<p>For example:</p> <pre><code>t = tf.constant(np.array([[[1,2],[3,4]],[[5,6],[7,8]]])) # I want to extract h = [[1,2],[7,8]] </code></pre> <p>How to use tensorflow to do this? Thank you!</p>
<p>You can use <code>tf.gather_nd()</code>. Each index tuple you pass indexes into the leading dimensions of the tensor and returns the corresponding slice of the remaining dimensions, so <code>(0,0)</code> picks <code>t[0,0,:]</code> and <code>(1,1)</code> picks <code>t[1,1,:]</code>. You can read more on that <a href="https://www.tensorflow.org/api_docs/python/tf/gather_nd" rel="nofollow noreferrer">here</a>.</p> <pre><code>import numpy as np import tensorflow as tf t = tf.constant(np.array([[[1,2],[3,4]],[[5,6],[7,8]]])) t_res = tf.gather_nd(t, [(0,0),(1,1)]) sess = tf.Session() h = sess.run(t_res) # &gt;&gt;&gt;[[1,2],[7,8]] </code></pre>
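<p>In TensorFlow 2.x, where eager execution is the default, no <code>Session</code> is needed:</p> <pre><code>import numpy as np
import tensorflow as tf

t = tf.constant(np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]))
h = tf.gather_nd(t, [[0, 0], [1, 1]])
print(h.numpy())   # [[1 2] [7 8]]
</code></pre>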
python|tensorflow
1