Columns (dtype, value range / string length range):
Unnamed: 0   int64    0 … 378k
id           int64    49.9k … 73.8M
title        string   length 15 … 150
question     string   length 37 … 64.2k
answer       string   length 37 … 44.1k
tags         string   length 5 … 106
score        int64    -10 … 5.87k
8,100
60,385,361
How to obtain value_count() of different data types of elements in a series with "Object" data type?
<p>Pandas assigns the dtype "object" to a series that contains a mixture of numeric and non-numeric data. Is it possible to obtain a value count of the dtypes of all the elements in a series?</p>
<p>Yes you can </p> <pre><code>pd.Series([1,'1']).map(type).value_counts() Out[65]: &lt;class 'int'&gt; 1 &lt;class 'str'&gt; 1 </code></pre>
python|pandas|numpy
2
8,101
60,368,896
Install Pytorch GPU with pre-installed CUDA and cudnn
<p>As the title suggests, I have pre-installed CUDA and cuDNN (my Tensorflow is using them). </p> <p>The version of CUDA is <strong>10.0</strong> from <code>nvcc --version</code>.</p> <p>The version of cuDNN is <strong>7.4</strong>. </p> <p>I am trying to install pytorch in a conda environment using <code>conda install pytorch torchvision cudatoolkit=10.0 -c pytorch</code>. </p> <p>However, the installed pytorch does not detect my GPU successfully. </p> <p>Does anyone know if there is a way to install the GPU version of pytorch with a specific CUDA and cuDNN version? I do not want to change the CUDA and cuDNN versions because my Tensorflow is using them. </p> <p>Any ideas would be appreciated!</p>
<p>So I solved this myself finally. The issue is that I didn't reboot my system after installing pytorch. After rebooting, <code>torch.cuda.is_available()</code> returns <code>True</code> as expected. </p>
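<p>For completeness, a minimal post-reboot sanity check might look like the following (a sketch, assuming the conda install above succeeded; this is an addition, not part of the original answer):</p> <pre><code>import torch

print(torch.__version__)          # a CUDA-enabled build is expected
print(torch.version.cuda)         # CUDA version PyTorch was built against
print(torch.cuda.is_available())  # should now print True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
</code></pre>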
installation|pytorch|conda
1
8,102
72,496,841
KNNImputer is replacing data with Nulls
<p>I was working on a project with sensitive data and stumbled upon this &quot;bug&quot; (probably something that went over my head). Recently I learned about KNNimputer from sklearn and I love its concept. <strong>However, it's replacing data with null values.</strong> I'm working on a data cleaning and modeling project, at the moment I don't have any null data but I wanted to add code to fill nulls in the case there are in the future but after running the algorithm it's replacing good data with NaN values. Am I using it wrong?</p> <h3>Here is the code:</h3> <p>The libraries I use</p> <pre><code>from sklearn.impute import KNNImputer import pandas as pd import numpy as np </code></pre> <p>Transformed categorical data as dummies</p> <pre><code>df_cleaned = pd.get_dummies(df_cleaned, columns=[&quot;A&quot;, &quot;B&quot;, &quot;C&quot;, &quot;D&quot;, &quot;E&quot;]) print(&quot;Categorical -&gt; dummies \n&quot;, df_cleaned.info(5)) </code></pre> <p>&quot;I replaced the names of the features&quot; <br /> &quot;And didn't show the 33 columns as the remaining columns have 28519 non-null uint8&quot;</p> <pre><code>Data columns (total 33 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 A 28519 non-null int64 1 B 28519 non-null int64 2 C 28519 non-null object 3 D 28519 non-null int64 4 E 28519 non-null int64 5 F 28519 non-null object 6 H 28519 non-null int64 7 I 28519 non-null object 8 J 28519 non-null uint8 9 K 28519 non-null uint8 </code></pre> <pre><code>print(&quot;looking for nulls (before) \n&quot;, df.isnull().sum()) </code></pre> <p>There are no nulls at this moment</p> <blockquote> <p>looking for nulls (before) <br /> A 0 <br /> B 0 <br /> C 0 <br /> D 0 <br /> E 0 <br /> F 0 <br /> G 0 <br /> H 0 <br /> I 0 <br /> J 0 <br /></p> </blockquote> <p><strong>Something Happens here</strong></p> <pre><code>imputer = KNNImputer(n_neighbors=5) df_immputed = pd.DataFrame(imputer.fit_transform(df_cleaned.drop(&quot;venue&quot;, axis=1)), columns=df_cleaned.drop(&quot;venue&quot;, axis=1).columns) df_cleaned = pd.concat([df_immputed, df_cleaned[&quot;venue&quot;]], axis=1) </code></pre> <pre><code>print(&quot;looking for nulls (after) \n&quot;, df.isnull().sum()) </code></pre> <p><strong>Now there are</strong></p> <blockquote> <p>looking for nulls (after) <br /> A 28 <br /> B 28 <br /> C 28 <br /> D 28 <br /> E 28 <br /> F 28 <br /> G 28 <br /> H 28 <br /> I 28 <br /> J 28 <br /></p> </blockquote> <p>What is happening? Why is it creating nulls?</p> <p>Edit:</p> <p><strong>Row affected</strong><br /> <em>The Letter_# are dummies</em><br /> <br /> Original row</p> <pre><code>A B C D E F G H_1 H_2 H_3 H_4 H_5 H_6 151 128 134110.51 681 532 593894.54 151 0 0 1 0 0 0 H_7 H_8 H_9 H_10 H_11 H_12 I_0 I_1 I_2 J_1 J_2 J_3 J_4 J_5 0 0 0 0 0 0 0 0 1 1 0 0 0 0 J_6 K_1 K_1 L_1 L_2 M 0 0 1 1 0 string value I cannot share sorry </code></pre> <p>Row with nulls after Knnimputer</p> <pre><code>A B C D E F G H_1 H_2 H_3 H_4 H_5 H_6 H_7 H_8 H_9 H_10 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN H_11 H_12 I_0 I_1 I_2 J_1 J_2 J_3 J_4 J_5 J_6 K_1 K_1 L_1 L_2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN M string value I cannot share sorry </code></pre>
<p>It's probably due to a nonstandard index of your dataframe. Check the shape of the output: if I'm right, you'll have 28 more rows than before.</p> <p>The problem arises because when you re-dataframe the numpy result of <code>fit_transform</code>, you get a standard index (0...n-1). Then <code>pd.concat</code> matches those indices against the original index in the column <code>&quot;venue&quot;</code>, taking an outer join.</p> <p>You can fix this in a number of ways; maybe the nicest is to assign the correct frame index when re-framing the imputed numpy array:</p> <pre class="lang-py prettyprint-override"><code>df_immputed = pd.DataFrame( imputer.fit_transform(df_cleaned.drop(&quot;venue&quot;, axis=1)), columns=df_cleaned.drop(&quot;venue&quot;, axis=1).columns, index=df_cleaned.index, ) </code></pre>
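<p>Another sketch of the same fix, assuming the original index does not need to be preserved, is to reset it before imputing so both pieces already share the default 0…n-1 index:</p> <pre class="lang-py prettyprint-override"><code>df_cleaned = df_cleaned.reset_index(drop=True)

imputer = KNNImputer(n_neighbors=5)
df_imputed = pd.DataFrame(
    imputer.fit_transform(df_cleaned.drop(&quot;venue&quot;, axis=1)),
    columns=df_cleaned.drop(&quot;venue&quot;, axis=1).columns,
)
df_cleaned = pd.concat([df_imputed, df_cleaned[&quot;venue&quot;]], axis=1)
</code></pre>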
python|pandas|scikit-learn|missing-data|knn
0
8,103
59,572,151
Pandas function issues - equation output incorrect
<pre><code>row['conus_days']>0 or row['conus_days1']>0:
        return row['conus_days']*8 + row['conus_days1']*12
    elif (row['Country']=='Afghanistan' or row['Country']=='Iraq' or row['Country']=='Somalia' or row['Country']=='Yemen') and row['oconus_days']>0 or row['oconus_days1']>0:
        return row['oconus_days']*12 + row['oconus_days1']*8
    elif (row['Country']=='Afghanistan' or row['Country']=='Iraq' or row['Country']=='Somalia' or row['Country']=='Yemen'):
        return row['days_in_month']*12
    elif (row['Country'] == 'Germany') and row['conus_days']>0:
        return row['conus_days']*8 + row['conus_days1']*10
    elif (row['Country'] == 'Germany'):
        return row['days_in_month']*10
    elif row['Country'] == 'Conus':
        return row['working_days']*8
    else:
        return row['working_days']*8

forecast['hours'] = forecast.apply(lambda row: get_hours(row), axis=1)
print(forecast.head())
</code></pre> <p>this is returning the following output: </p> <pre><code>       Name      EID  Start Date    End Date      Country  year  Month  \
0        xx   123456  2019-08-01  2020-01-03  Afghanistan  2020      1
1        XX  3456789  2019-09-22  2020-02-16        Conus  2020      1
2       xx.   456789  2019-12-05  2020-03-12        Conus  2020      1
3       DR.   789456  2019-09-11  2020-03-04         Iraq  2020      1
4       JR.   985756  2020-01-03  2020-05-06      Germany  2020      1

   days_in_month start_month   end_month  working_days  conus_mth  oconus_mth  \
0             31  2020-01-01  2020-01-31            21          8           1
1             31  2020-01-01  2020-01-31            21          9           2
2             31  2020-01-01  2020-01-31            21         12           3
3             31  2020-01-01  2020-01-31            21          9           3
4             31  2020-01-01  2020-01-31            21          1           5

   conus_days  conus_days1  oconus_days  oconus_days1  hours
0           0            0            2            25    224
1           0            0            0             0    168
2           0            0            0             0    168
3           0            0            0             0    372
4           1           28            0             0    344
</code></pre> <p>---output on row 4 is incorrect, this should return 288 </p>
<p>Closing each if statement in double parentheses allows each if statement to run individually and accurately.</p> <pre><code>def get_hours(row):
    if ((row['Country']=='Afghanistan' or row['Country']=='Iraq' or row['Country']=='Somalia' or row['Country']=='Yemen') and (row['conus_days']>0 or row['conus_days1']>0)):
        return row['conus_days']*8 + row['conus_days1']*12
    if ((row['Country']=='Afghanistan' or row['Country']=='Iraq' or row['Country']=='Somalia' or row['Country']=='Yemen') and row['oconus_days']>0 or row['oconus_days1']>0):
        return row['oconus_days']*12 + row['oconus_days1']*8
    if (row['Country']=='Afghanistan' or row['Country']=='Iraq' or row['Country']=='Somalia' or row['Country']=='Yemen'):
        return row['days_in_month']*12
    if ((row['Country'] == 'Germany') and row['conus_days']>0):
        return row['conus_days']*8 + row['conus_days1']*10
    if ((row['Country'] == 'Germany') and row['oconus_days']>0):
        return row['oconus_days']*10 + row['oconus_days1']*8
    if (row['Country'] == 'Germany'):
        return row['days_in_month']*10
    if (row['Country'] == 'Conus'):
        return row['working_days']*8
    else:
        return row['working_days']*8

forecast['hours'] = forecast.apply(lambda row: get_hours(row), axis=1)
print(forecast.head())
</code></pre> <pre><code>       Name      EID  Start Date    End Date      Country  year  Month  \
0       XX.   123456  2019-08-01  2020-01-03  Afghanistan  2020      1
1        xx  3456789  2019-09-22  2020-02-16        Conus  2020      1
2        Mh   456789  2019-12-05  2020-03-12        Conus  2020      1
3        DR   789456  2019-09-11  2020-03-04         Iraq  2020      1
4        JR   985756  2020-01-03  2020-05-06      Germany  2020      1

   days_in_month start_month   end_month  working_days  conus_mth  oconus_mth  \
0             31  2020-01-01  2020-01-31            21          8           1
1             31  2020-01-01  2020-01-31            21          9           2
2             31  2020-01-01  2020-01-31            21         12           3
3             31  2020-01-01  2020-01-31            21          9           3
4             31  2020-01-01  2020-01-31            21          1           5

   conus_days  conus_days1  oconus_days  oconus_days1  hours
0           0            0            2            25    224
1           0            0            0             0    168
2           0            0            0             0    168
3           0            0            0             0    372
4           1           28            0             0    288
</code></pre>
pandas|function|lambda|apply
0
8,104
59,510,800
Cannot select rows out of numpy array to perform an std
<p><code>import numpy as Np</code> I need to calculate the std on the first three rows of a NumPy array I made with <code>y = Np.random(100, size = (5, 3))</code>.</p> <p>The above produced the array I am working on. Note that I have since calculated the median of the array after having removed the 2 smallest values in the array with: <code>y=Np.delete(y, y.argmin())</code> <code>y=Np.delete(y, y.argmin())</code> <code>Np.median(y)</code></p> <p>When I call <code>y</code> now it is no longer a square matrix. It comes back all on one line, like array([48, 90, 67, 26, 53, 16, 19, 64, 51, 47, 54, 91, 36]).</p> <p>When I try to slice it and calculate a standard deviation (<code>std</code>) I get an IndexError. I think it is because this array is now a tuple. </p>
<p>As other people suggested the question format is not clear. Here what I tried:</p> <pre><code>import numpy as np y = np.random.randint(100, size = (5, 3)) y array([[65, 84, 56], [90, 44, 42], [51, 58, 9], [82, 1, 91], [96, 32, 24]]) </code></pre> <p>Now to compute <code>std</code> for each row: </p> <pre><code>y.std(axis=1) array([11.6714276 , 22.1710522 , 21.63844316, 40.47221269, 32.22145593]) </code></pre> <p>Since you just want the first 3 rows you can slice the result:</p> <pre><code>result = y.std(axis=1)[:3] result array([11.6714276 , 22.1710522 , 21.63844316]) </code></pre> <p>Alternatively you can first select/slice the 1st 3 rows and then use <code>std</code>:</p> <pre><code>y[:3].std(axis=1) array([11.6714276 , 22.1710522 , 21.63844316]) </code></pre>
python|arrays|numpy|row
0
8,105
40,408,471
Select data when specific columns have null value in pandas
<p>I have a dataframe where there are 2 date fields I want to filter and see rows when any one of the date field is null. </p> <pre><code>ID Date1 Date2 58844880 04/11/16 NaN 59745846 04/12/16 04/14/16 59743311 04/13/16 NaN 59745848 04/14/16 04/11/16 59598413 NaN NaN 59745921 04/14/16 04/14/16 59561199 04/15/16 04/15/16 NaN 04/16/16 04/16/16 59561198 NaN 04/17/16 </code></pre> <p>It should look like below</p> <pre><code>ID Date1 Date2 58844880 04/11/16 NaN 59743311 04/13/16 NaN 59598413 NaN NaN 59561198 NaN 04/17/16 </code></pre> <p>Tried the code <code>df = (df['Date1'].isnull() | df['Date1'].isnull())</code></p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="noreferrer"><code>boolean indexing</code></a>:</p> <pre><code>mask = df['Date1'].isnull() | df['Date2'].isnull() print (df[mask]) ID Date1 Date2 0 58844880.0 04/11/16 NaN 2 59743311.0 04/13/16 NaN 4 59598413.0 NaN NaN 8 59561198.0 NaN 04/17/16 </code></pre> <p><strong>Timings</strong>:</p> <pre><code>#[900000 rows x 3 columns] df = pd.concat([df]*100000).reset_index(drop=True) In [12]: %timeit (df[df['Date1'].isnull() | df['Date2'].isnull()]) 10 loops, best of 3: 89.3 ms per loop In [13]: %timeit (df[df.filter(like='Date').isnull().any(1)]) 10 loops, best of 3: 146 ms per loop </code></pre>
python|pandas
10
8,106
40,402,545
How to make a numpy array from an array of arrays?
<p>I'm experimenting in <code>ipython3</code>, where I created an array of arrays:</p> <pre><code>In [105]: counts_array Out[105]: array([array([ 17, 59, 320, ..., 1, 7, 0], dtype=uint32), array([ 30, 71, 390, ..., 12, 20, 6], dtype=uint32), array([ 7, 145, 214, ..., 4, 12, 0], dtype=uint32), array([ 23, 346, 381, ..., 15, 19, 5], dtype=uint32), array([ 51, 78, 270, ..., 3, 0, 2], dtype=uint32), array([212, 149, 511, ..., 19, 31, 8], dtype=uint32)], dtype=object) In [106]: counts_array.shape Out[106]: (6,) In [107]: counts_array[0].shape Out[107]: (1590,) </code></pre> <p>I would like to obtain a plain <code>shape=(6, 1590), dtype=uint32</code> array from this monster I created.</p> <p>How can I do that?</p>
<p>You can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.vstack.html" rel="nofollow noreferrer"><code>np.vstack</code></a> -</p> <pre><code>np.vstack(counts_array) </code></pre> <p>Another way with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.concatenate.html" rel="nofollow noreferrer"><code>np.concatenate</code></a> -</p> <pre><code>np.concatenate(counts_array).reshape(len(counts_array),-1) </code></pre> <p>Sample run -</p> <pre><code>In [23]: a Out[23]: array([array([68, 92, 84, 35, 14, 71, 55, 40, 21, 41]), array([30, 90, 52, 64, 86, 68, 61, 85, 26, 98]), array([98, 64, 23, 49, 13, 17, 52, 96, 97, 19]), array([54, 26, 25, 22, 95, 77, 20, 73, 22, 80]), array([15, 84, 91, 54, 25, 21, 37, 19, 25, 25]), array([87, 17, 49, 74, 11, 34, 27, 23, 22, 83])], dtype=object) In [24]: np.vstack(a) Out[24]: array([[68, 92, 84, 35, 14, 71, 55, 40, 21, 41], [30, 90, 52, 64, 86, 68, 61, 85, 26, 98], [98, 64, 23, 49, 13, 17, 52, 96, 97, 19], [54, 26, 25, 22, 95, 77, 20, 73, 22, 80], [15, 84, 91, 54, 25, 21, 37, 19, 25, 25], [87, 17, 49, 74, 11, 34, 27, 23, 22, 83]]) </code></pre>
python|arrays|numpy|reshape|dimensions
4
8,107
40,366,943
Get list with element's columns from Pandas DataFrame
<p>I need to have a list containing all specific element's columns for every index. For example, this DataFrame:</p> <pre><code>&gt;&gt;&gt; df 1 2 3 4 5 2016-01-27 A B B I I 2016-03-07 A C D U U 2016-04-12 H A V V V 2016-05-02 B L Y S N 2016-05-23 L N N A S </code></pre> <p>Inputting "A" I'd like to have this list as output:</p> <pre><code>[1,1,2,NaN,4] </code></pre> <p>Is there a built-in method for this?</p> <p><strong>Edit: In the original table all items in a row are unique, when editing original table to make it less "dense" to post here and I made this mistake, sorry.</strong></p>
<p>You may want to <code>melt</code> your data frame to long format and then calculate the corresponding list of columns for each input(value), After obtaining the Series as follows, it would be easy for you to query the result for any intended input:</p> <pre><code>import pandas as pd pd.melt(df).groupby('value').variable.apply(list) #value #A [1, 1, 2, 4] #B [1, 2, 3] #C [2] #D [3] #H [1] #I [4, 5] #L [1, 2] #N [2, 3, 5] #S [4, 5] #U [4, 5] #V [3, 4, 5] #Y [3] #Name: variable, dtype: object </code></pre> <p>To get the list of columns for input <code>A</code>:</p> <pre><code>result = pd.melt(df).groupby('value').variable.apply(list) result['A'] # ['1', '1', '2', '4'] </code></pre>
python|pandas|dataframe
2
8,108
40,645,498
Create sparse RDD from scipy sparse matrix
<p>I have a large sparse matrix from scipy (300k x 100k with all binary values, mostly zeros). I would like to set the rows of this matrix to be an RDD and then do some computations on those rows - evaluate a function on each row, evaluate functions on pairs of rows, etc. </p> <p>Key thing is that it's quite sparse and I don't want to explode the cluster - can I convert the rows to SparseVectors? Or perhaps convert the whole thing to SparseMatrix?</p> <p>Can you give an example where you read in a sparse array, setup rows into an RDD, and compute something from the cartesian product of those rows?</p>
<p>I had this issue recently--I think you can convert directly by constructing the SparseMatrix with the scipy csc_matrix attributes. (Borrowing from Yang Bryan)</p> <pre><code>import numpy as np import scipy.sparse as sps from pyspark.mllib.linalg import Matrices # create a sparse matrix row = np.array([0, 2, 2, 0, 1, 2]) col = np.array([0, 0, 1, 2, 2, 2]) data = np.array([1, 2, 3, 4, 5, 6]) sv = sps.csc_matrix((data, (row, col)), shape=(3, 3)) # convert to pyspark SparseMatrix sparse_matrix = Matrices.sparse(sv.shape[0],sv.shape[1],sv.indptr,sv.indices,sv.data) </code></pre>
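<p>To cover the row-wise part of the question, a rough sketch (assuming an existing <code>SparkContext</code> named <code>sc</code>, and converting to CSR first so that row slicing is cheap) could be:</p> <pre><code>from pyspark.mllib.linalg import Vectors

csr = sv.tocsr()
rows = [Vectors.sparse(csr.shape[1], csr[i].indices, csr[i].data)
        for i in range(csr.shape[0])]
rdd = sc.parallelize(rows)

# e.g. dot products over the cartesian product of rows
pair_dots = rdd.cartesian(rdd).map(lambda ab: ab[0].dot(ab[1]))
print(pair_dots.take(5))
</code></pre>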
python|numpy|apache-spark|scipy|pyspark
5
8,109
61,922,051
Pandas dataframe - convert time series with multiple elements, to a flattened dataframe with elements as columns
<p>I have a time series dataset stored in a dataframe, with multiple elements, for example stocks with their price, p/e ratio, and p/b ratio - so I have 3 rows per ticker/date. I'm wondering if there is a way to convert this, so I have one row for each ticker/date, and the price,p/e, and p/b as columns.</p> <p>Sample dataframe:</p> <pre><code>import pandas as pd dfts = pd.DataFrame({ 'date': ['2020-01-01','2020-01-01','2020-01-01', '2020-01-01','2020-01-01','2020-01-01', '2020-01-02','2020-01-02','2020-01-02', '2020-01-02','2020-01-02','2020-01-02'], 'ticker': ['AAPL','AAPL','AAPL', 'GOOGL','GOOGL','GOOGL', 'AAPL', 'AAPL', 'AAPL', 'GOOGL', 'GOOGL', 'GOOGL'], 'type': ['price','p/e','p/b', 'price','p/e','p/b', 'price','p/e','p/b', 'price','p/e','p/b'], 'value': [300,20,2, 1000,25,3, 310,21,2.1, 1110,26,2.9] }) print(dfts) </code></pre> <p>I'm looking to convert this and get a result such as:</p> <pre><code>Date Ticker Price P/E P/B 2020-01-01 AAPL 300 20 2 2020-01-02 AAPL 310 21 2.1 2020-01-01 GOOGL 1000 25 3 2020-01-02 GOOGL 1110 26 2.6 </code></pre> <p>Thanks</p>
<p>Check below lines if help you </p> <p>import pandas as pd</p> <pre><code>dfts = pd.DataFrame({ 'date': ['2020-01-01','2020-01-01','2020-01-01', '2020-01-01','2020-01-01','2020-01-01', '2020-01-02','2020-01-02','2020-01-02', '2020-01-02','2020-01-02','2020-01-02'], 'ticker': ['AAPL','AAPL','AAPL', 'GOOGL','GOOGL','GOOGL', 'AAPL', 'AAPL', 'AAPL', 'GOOGL', 'GOOGL', 'GOOGL'], 'type': ['price','p/e','p/b', 'price','p/e','p/b', 'price','p/e','p/b', 'price','p/e','p/b'], 'value': [300,20,2, 1000,25,3, 310,21,2.1, 1110,26,2.9] }) df = dfts.set_index(['date','ticker','type'])['value'].unstack().reset_index() print(df) </code></pre> <p>output frame is like this--</p> <p><a href="https://i.stack.imgur.com/k7y8J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k7y8J.png" alt="enter image description here"></a></p>
python|pandas|dataframe
0
8,110
61,619,967
Facilities to work with datetime format in R: %Y-%m-%d %H:%M:%OS-0200
<p>I have a datetime string in the format <code>27.04.2020 15:50:30.391-0700</code>. The <code>0700</code> after the dash denotes the timezone relative to GMT. </p> <p>I find this kind of annoying in <code>R</code>, as there are no known tools to work with this type of non-standard datetime format. I use R way more than python on a daily basis, yet <code>pandas</code> is just very intuitive to use for this, and <code>pandas</code> can actually do time diffs directly on this format once the values are proper python <code>datetime</code> objects. In R, as of this minute, I could not figure out a way to do it; I've experimented quite a bit with string replacing, as.character and more (to no avail). </p> <p>Can anyone prove me wrong? </p>
<p>You need to change display options to see the milliseconds. Try this one:</p> <pre><code>library(lubridate) time_string &lt;- '27.04.2020 15:50:30.391-0700' time_lubridate &lt;- dmy_hms(time_string) options(digits.secs=3) time_lubridate &gt; time_lubridate [1] "2020-04-27 22:50:30.391 UTC" </code></pre>
python|r|pandas|datetime-format
1
8,111
62,029,836
String endswith() in Python
<p>I have a pandas dataframe as below.I want to create list of columns by iterating over list called 'fields_list' and separate out lists which ends with the list in 'fields_list'</p> <pre><code>import pandas as pd import numpy as np import sys df = pd.DataFrame({'a_balance': [3,4,5,6], 'b_balance': [5,1,1,1]}) df['ah_balance'] = 0 df['a_agg_balance'] = 0 df['b_agg_balance'] = 0 df a_balance b_balance ah_balance a_agg_balance b_agg_balance 3 5 0 0 0 4 1 0 0 0 5 1 0 0 0 6 1 0 0 0 fields_list = [ ['&lt;val&gt;','_balance'],['&lt;val_class&gt;','_agg_balance']] fields_list [['&lt;val&gt;', '_balance'], ['&lt;val_class&gt;', '_agg_balance']] for i,field in fields_list: df_final= [col for col in df if col.endswith(field)] print("df_final" ,df_final) </code></pre> <p>I tried above code but when it iterates over 1st element of fields_list(i.e. '', '_balance') it also includes elements that ends with '_agg_balance' and hence I get the below result</p> <pre><code>df_final ['a_balance', 'b_balance', 'ah_balance', 'a_agg_balance', 'b_agg_balance'] df_final ['a_agg_balance', 'b_agg_balance'] </code></pre> <p>My expected output is </p> <pre><code>df_final ['a_balance', 'b_balance', 'ah_balance'] df_final ['a_agg_balance', 'b_agg_balance'] </code></pre>
<p>You can sort the suffixes you're looking at, and start with the longest one. When you find a column that matches a suffix, remove it from the set of columns you need to look at: </p> <pre><code>fields_list = [ ['&lt;val&gt;','_balance'],['&lt;val_class&gt;','_agg_balance']] sorted_list = sorted(fields_list, key=lambda x: len(x[1]), reverse = True) sorted_suffixes = [x[1] for x in sorted_list] col_list = set(df.columns) for suffix in sorted_suffixes: forecast_final_fields = [col for col in col_list if col.endswith(suffix)] col_list.difference_update(forecast_final_fields) print("forecast_final_fields" ,forecast_final_fields) </code></pre> <p>Results in </p> <pre><code>forecast_final_fields ['a_agg_balance', 'b_agg_balance'] forecast_final_fields ['ah_balance', 'a_balance', 'b_balance'] </code></pre>
python|pandas|ends-with
0
8,112
58,050,790
Pandas filtering by date column with mixed columns data types
<p>I've a pandas dataframe similar to the one below with mixed columns data types (strings, datatime, integers) what I wanted to do was filtering the rows to get the last record by date of the combination of Company and Model.</p> <p>I've searched among many filtering / groupby solution what I was able to get were the rows I needed but many columns were missing (see the groupby below). I've read about the nuisance of columns in pandas, I tried using groupby to generate a mask to use in the original dataframe but I failed. I don't know how to proceed to have the same result but with all the original columns.</p> <pre class="lang-py prettyprint-override"><code>data = {'Company': ['Mercedes', 'Fiat', 'Ferrari', 'Mercedes', 'Volkswagen'], 'Model': ['Class A', 'Punto', 'GTO', 'Class A', 'Polo'], 'User': ['Mario', 'Paolo', 'Filippo', 'Andrea', 'Giuseppe'], 'Rented on': ['2017-04-02', '2017-05-01', '2017-05-22', '2017-08-01', '2017-08-02'], 'Kms': [2200, 3000, 110, 2400, 3000] } df = pd.DataFrame(data) print df.groupby(['Company', 'Model'])['Rented on'].last().reset_index() </code></pre> <pre class="lang-py prettyprint-override"><code># What I have Company Kms Model Rented on User 0 Mercedes 2200 Class A 2017-04-02 Mario 1 Fiat 3000 Punto 2017-05-01 Paolo 2 Ferrari 110 GTO 2017-05-22 Filippo 3 Mercedes 2400 Class A 2017-08-01 Andrea 4 Volkswagen 3000 Polo 2017-08-02 Giuseppe # What I get Company Model Rented on 0 Ferrari GTO 2017-05-22 1 Fiat Punto 2017-05-01 2 Mercedes Class A 2017-08-01 3 Volkswagen Polo 2017-08-02 # What I want Company Kms Model Rented on User 0 Fiat 3000 Punto 2017-05-01 Paolo 1 Ferrari 110 GTO 2017-05-22 Filippo 2 Mercedes 2400 Class A 2017-08-01 Andrea 3 Volkswagen 3000 Polo 2017-08-02 Giuseppe </code></pre>
<p>use apply instead of last</p> <pre class="lang-py prettyprint-override"><code>data = {'Company': ['Mercedes', 'Fiat', 'Ferrari', 'Mercedes', 'Volkswagen'], 'Model': ['Class A', 'Punto', 'GTO', 'Class A', 'Polo'], 'User': ['Mario', 'Paolo', 'Filippo', 'Andrea', 'Giuseppe'], 'Rented on': ['2017-04-02', '2017-05-01', '2017-05-22', '2017-08-01', '2017-08-02'], 'Kms': [2200, 3000, 110, 2400, 3000] } df = pd.DataFrame(data) df["Rented on"]=pd.to_datetime(df["Rented on"]) result = df.groupby(['Company', 'Model']).apply(lambda x: x[x["Rented on"]==x["Rented on"].max()] ) result = result.reset_index(drop=True) display(result) </code></pre>
python|pandas
1
8,113
57,950,732
How to match rows when one row contains a string from another row?
<p>My aim is to find <code>City</code> that matches row from column <code>general_text</code>, but the match must be exact.</p> <p>I was trying to use searching <code>IN</code> but it doesn't give me expected results, so I've tried to use <code>str.contain</code> but the way I try to do it shows me an error. Any hints on how to do it properly or efficient?</p> <p>I have tried code based on <a href="https://stackoverflow.com/questions/46858780/filtering-out-rows-that-have-a-string-field-contained-in-one-of-the-rows-of-anot">Filtering out rows that have a string field contained in one of the rows of another column of strings</a></p> <pre><code>df['matched'] = df.apply(lambda x: x.City in x.general_text, axis=1) </code></pre> <p>but it gives me the result below:</p> <pre><code>data = [['palm springs john smith':'spring'], ['palm springs john smith':'palm springs'], ['palm springs john smith':'smith'], ['hamptons amagansett':'amagansett'], ['hamptons amagansett':'hampton'], ['hamptons amagansett':'gans'], ['edward riverwoods lake':'wood'], ['edward riverwoods lake':'riverwoods']] df = pd.DataFrame(data, columns = [ 'general_text':'City']) df['match'] = df.apply(lambda x: x['general_text'].str.contain( x.['City']), axis = 1) </code></pre> <p>What I would like to receive by the code above is match only this:</p> <pre><code>data = [['palm springs john smith':'palm springs'], ['hamptons amagansett':'amagansett'], ['edward riverwoods lake':'riverwoods']] </code></pre>
<p>You can use word boundaries <code>\b\b</code> for exact match:</p> <pre><code>import re f = lambda x: bool(re.search(r'\b{}\b'.format(x['City']), x['general_text'])) </code></pre> <p>Or:</p> <pre><code>f = lambda x: bool(re.findall(r'\b{}\b'.format(x['City']), x['general_text'])) df['match'] = df.apply(f, axis = 1) print (df) general_text City match 0 palm springs john smith spring False 1 palm springs john smith palm springs True 2 palm springs john smith smith True 3 hamptons amagansett amagansett True 4 hamptons amagansett hampton False 5 hamptons amagansett gans False 6 edward riverwoods lake wood False 7 edward riverwoods lake riverwoods True </code></pre>
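<p>One small robustness note (an addition beyond the answer above): if a <code>City</code> value could ever contain regex metacharacters, escaping it first keeps the match literal:</p> <pre><code>import re

f = lambda x: bool(re.search(r'\b{}\b'.format(re.escape(x['City'])), x['general_text']))
df['match'] = df.apply(f, axis=1)
</code></pre>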
python|pandas|dataframe|row|contains
3
8,114
57,876,294
Remove the rows which contain double underscore in the column
<p>I have a column called Box which contains 4000+ rows with unique variable names. Every variable in a row is differentiated by the first letter and the last number present in the string, for example A_B(1), A__B(1), C__D(3), D__F(2), AA__B(1). In this case, I want to remove all the rows whose string contains __ (a double underscore).</p> <p>Previously I did this based on the hard-coded names below, but I don't want it to be hard-coded; I want it to be generic: just remove all the rows that contain a double underscore (__).</p> <pre><code>#to_remove = ['I__ND_LD\(\d+\)', 'I__BS_ND\(\d+\)','I__LN_LN2\(\d+\)','P__ND_LN2\(\d+\)','I__XF_XF2\(\d+\)','P__ND_XF2\(\d+\)'] #eda=eda[~eda.Devices.str.contains('|'.join(to_remove), regex=True)] </code></pre> <p>Please let me know how I can use pattern matching for this.</p>
<pre><code>df.loc[~df['Box'].str.contains(r'_{2}')] </code></pre> <p>An example:</p> <pre><code>import pandas as pd df1 = pd.DataFrame({'A' : ['James', 'Mary', 'John', 'Patricia'], 'B' : [30, 37, 30, 35], 'C' : ['Robert', 'Jennifer', 'Michael', 'Linda'], 'D' : ['Drop__A', 'Keep_A', 'Dr__pB', 'Keep_B']}) df1.loc[~df1['D'].str.contains(r'_{2}')] A B C D 1 Mary 37 Jennifer Keep_A 3 Patricia 35 Linda Keep_B </code></pre>
pandas|dataframe
0
8,115
57,771,481
I have code and want it to return a single value
<p>I have a data frame which has a row called items, and I have a list called topitems. Below are some ex to it </p> <pre><code>Df.head() Item Toy Car, Toy Buses, Car Bike Barbie Lorri </code></pre> <p>My list is topitems</p> <pre><code>[Toy, Bike, Car] </code></pre> <p>Now I want another column in the data frame called Top Item.</p> <p>I have tried with set &amp; intersection but they return two matching values</p> <p>Against Toy, it returns Toy d against Toy and Car it returns Toy and car but I want it to return the only Toy</p> <pre><code>dff['topitems'] = dff.items.apply(lambda x: list(set(x).intersection(set(topitems)))) </code></pre> <p>I want the result to be like below,</p> <pre><code>Df.head() Item | Top item Toy | Toy Car, Toy | Car (note : i don't want the second value even though it's in my list) Buses, Car | Car Bike | Bike Barbie | Blank Lorri | Blank </code></pre>
<p>You can use index <code>[0]</code> to get first element from list. Or better use <code>[:1]</code> and it will not raise error when list is empty and there is no <code>[0]</code></p> <pre><code>dff['topitems'] = dff.items.apply(lambda x: list(set(x).intersection(set(topitems)))[:1]) </code></pre> <hr> <p>Example code:</p> <p><strong>EDIT:</strong> I removed <code>set()</code> in <code>intersection()</code> as suggested @rpanai in comment.</p> <pre><code>import pandas as pd dff = pd.DataFrame({'items':[ ['Toy'], ['Car', 'Toy'], ['Buses', 'Car'], ['Bike'], ['Barbie'], ['Lorri'], ]}) topitems = ['Toy', 'Bike', 'Car'] dff['topitems'] = dff['items'].apply(lambda x: list(set(x).intersection(topitems))[:1]) print(dff) </code></pre>
python|pandas|set|intersection
2
8,116
57,868,713
Dask not efficient on concatenating large pandas dataframes and gives Memory Error
<p>At first, I tried typical concatenation of pandas dataframe:</p> <pre><code>df=pd.concat([df,df_filtered2],axis=1,sort=False) </code></pre> <p>but it gave the error:</p> <pre><code>/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/compat/__init__.py:84: UserWarning: Could not import the lzma module. Your installed Python is incomplete. Attempting to use lzma compression will result in a RuntimeError. warnings.warn(msg) /home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/compat/__init__.py:84: UserWarning: Could not import the lzma module. Your installed Python is incomplete. Attempting to use lzma compression will result in a RuntimeError. warnings.warn(msg) Traceback (most recent call last): File "process_data_interpolation.py", line 435, in &lt;module&gt; df=pd.concat([df,df_filtered2],axis=1,sort=False) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 255, in concat sort=sort, File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 335, in __init__ obj._consolidate(inplace=True) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/generic.py", line 5270, in _consolidate self._consolidate_inplace() File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/generic.py", line 5252, in _consolidate_inplace self._protect_consolidate(f) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/generic.py", line 5241, in _protect_consolidate result = f() File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/generic.py", line 5250, in f self._data = self._data.consolidate() File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 932, in consolidate bm._consolidate_inplace() File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 937, in _consolidate_inplace self.blocks = tuple(_consolidate(self.blocks)) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 1913, in _consolidate list(group_blocks), dtype=dtype, _can_consolidate=_can_consolidate File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/internals/blocks.py", line 3323, in _merge_blocks new_values = new_values[argsort] numpy.core._exceptions.MemoryError: Unable to allocate array with shape (41, 156082680) and data type float64 </code></pre> <p>so I tried Dask:</p> <pre><code>df = dd.concat([df,df_filtered2],axis=1) </code></pre> <p>but it also gave me the MemoryError:</p> <pre><code>/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/compat/__init__.py:84: UserWarning: Could not import the lzma module. Your installed Python is incomplete. Attempting to use lzma compression will result in a RuntimeError. warnings.warn(msg) /home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/compat/__init__.py:84: UserWarning: Could not import the lzma module. Your installed Python is incomplete. Attempting to use lzma compression will result in a RuntimeError. 
warnings.warn(msg) Traceback (most recent call last): File "process_data_interpolation.py", line 443, in &lt;module&gt; df = dd.concat([df,df_filtered2],axis=1) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/dask/dataframe/multi.py", line 1045, in concat dfs = _maybe_from_pandas(dfs) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/dask/dataframe/core.py", line 4465, in _maybe_from_pandas for df in dfs File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/dask/dataframe/core.py", line 4465, in &lt;listcomp&gt; for df in dfs File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/dask/dataframe/io/io.py", line 209, in from_pandas for i, (start, stop) in enumerate(zip(locations[:-1], locations[1:])) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/dask/dataframe/io/io.py", line 209, in &lt;dictcomp&gt; for i, (start, stop) in enumerate(zip(locations[:-1], locations[1:])) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/indexing.py", line 1424, in __getitem__ return self._getitem_axis(maybe_callable, axis=axis) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/indexing.py", line 2137, in _getitem_axis return self._get_slice_axis(key, axis=axis) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/indexing.py", line 1308, in _get_slice_axis return self._slice(indexer, axis=axis, kind="iloc") File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/indexing.py", line 166, in _slice return self.obj._slice(obj, axis=axis, kind=kind) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/generic.py", line 3371, in _slice result = self._constructor(self._data.get_slice(slobj, axis=axis)) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 755, in get_slice bm._consolidate_inplace() File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 937, in _consolidate_inplace self.blocks = tuple(_consolidate(self.blocks)) File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/internals/managers.py", line 1913, in _consolidate list(group_blocks), dtype=dtype, _can_consolidate=_can_consolidate File "/home/user/.pyenv/versions/3.6.0/lib/python3.6/site-packages/pandas/core/internals/blocks.py", line 3323, in _merge_blocks new_values = new_values[argsort] MemoryError: Unable to allocate array with shape (41, 156082680) and data type float64 </code></pre> <p>what else can I try? I am running Python script on linux node with 128GB of RAM memory. In my case the size of one of pandas dataframe after dropping unnecesary columns and converting some columns to integer is 44.48 GB.</p>
<p>This question is answered in the Dask Best Practices documentation:</p> <p><a href="https://docs.dask.org/en/latest/best-practices.html#load-data-with-dask" rel="nofollow noreferrer">https://docs.dask.org/en/latest/best-practices.html#load-data-with-dask</a></p>
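<p>In short, the linked advice is to let Dask load the data itself instead of building 40+ GB pandas frames and wrapping them afterwards. A rough sketch (file names and formats are only placeholders, assuming the two pieces can be persisted, e.g. as Parquet):</p> <pre><code>import dask.dataframe as dd

df = dd.read_parquet("part_a.parquet")           # hypothetical paths
df_filtered2 = dd.read_parquet("part_b.parquet")

# joining on a shared index/key is safer than a positional axis=1 concat,
# which requires known, aligned divisions
out = df.merge(df_filtered2, left_index=True, right_index=True)

out.to_parquet("combined.parquet")  # results never fully materialise in RAM
</code></pre>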
python|pandas|dask
0
8,117
58,002,781
Factorize current unique values in pandas df
<p>I am assigning an integer to various groups in a <code>pandas</code> <code>df</code>. I'm currently using <code>pd.factorize</code> for this. However, I'm hoping to account for <em>current</em> values only. </p> <p>For instance, using the <code>df</code> below, a unique integer gets assigned to <code>Member</code>. This accumulates based on each unique value that appears. But I'm hoping to account for current values only. As in, if a value in <code>Member</code> does not appear again, then assign that integer to the next new value in <code>Member</code>. As C2 does not appear in the df again, I want to pass that integer to the next unique value in <code>Member</code>.</p> <pre><code>df = pd.DataFrame({ 'Period' : [1,1,1,2,2,2,3,3,3,3], 'Member' : ['C1','C2','C4','C1','C2','C4','C1','C3','C4','C5'], }) df['Area'] = (pd.factorize(df['Member'])[0] + 1) </code></pre> <p>Out:</p> <pre><code> Period Member Area 0 1 C1 1 1 1 C2 2 2 1 C4 3 3 2 C1 1 4 2 C2 2 5 2 C4 3 6 3 C1 1 7 3 C3 4 8 3 C4 3 9 3 C5 5 </code></pre> <p>Intended:</p> <pre><code> Period Member Area 0 1 C1 1 1 1 C2 2 2 1 C4 3 3 2 C1 1 4 2 C2 2 5 2 C4 3 6 3 C1 1 7 3 C3 2 8 3 C4 3 9 3 C5 4 </code></pre> <p>This output assumes <code>C1,C3,C4,C5</code> all appear in following periods</p>
<p>Below is my solution with explanation</p> <p>Steps:</p> <ul> <li>get unique members and their counts</li> <li>create list of available area code equal to length of members, sorted in reverse order so that poping gives the minimum available id</li> <li>track assigned ids to member in "areas" dictionary</li> <li>decrement count of member when id is assigned to the member</li> <li>un-assign the area assigned to member when count of member is 0 and add that to available areas so that it can be re-used to new member </li> </ul> <p><strong>NOTE</strong>: This is according to logic you explained but gives different result that you shown above</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame({ 'Period' : [1,1,1,2,2,2,2,3,3,3,3], 'Member' : ['C1','C2','C4','C1','C2','C3','C4','C1','C3','C4','C5'], }) def assign_area(df): members, counts = pd.np.unique(df.Member, return_counts=True) member_counts = dict(zip(members, counts)) areas = {} available_areas = list(range(len(members), 0, -1)) area_col = [] for member in df.Member: if member in areas: area = areas[member] else: area = available_areas.pop() areas[member] = area area_col.append(area) member_counts[member] -=1 if member_counts[member] == 0: available_areas.append(area) available_areas.sort(reverse=True) df["area"] = area_col return df assign_area(df) </code></pre>
python|pandas|dataframe
2
8,118
57,837,712
Another "ValueError: Cannot feed value of shape (30, 5) for Tensor 'Placeholder:0', which has shape '(?, 30)'"
<p>Be kind, I'm new to TensorFlow. I have found a project that trains a <strong>Policy Gradient agent</strong> to trade the stock market, trained on only the daily <strong>Close</strong> prices. And I'm interested in making it train on the <strong>Open, High, Low, and Volume</strong> features as well, so I'm attempting to add them in to the existing code. I've stripped away most of the project to leave only what's necessary for diagnosis.</p> <p>You can find my <a href="https://colab.research.google.com/drive/1U0UDHxOv_CjDCMa9ynZEyNk4I9Tkb-M5" rel="nofollow noreferrer">Colab notebook here</a>. And I have done my best to comment each section to make it easier to browse, and where I THINK the issues are, but I need someone to show me where and why.</p> <p>I'm currently getting the error:</p> <pre><code>ValueError: Cannot feed value of shape (30, 5) for Tensor 'Placeholder:0', which has shape '(?, 30)' </code></pre> <p>...which makes sense because the <code>self.X = tf.compat.v1.placeholder(tf.float32, (None, self.state_size))</code> is designed for the only feature (Close), but I'm trying to add in the other features into the state as well. The <code>state_size</code> is the <code>window_size</code> of <code>30</code> (to look back 30 rows during training). And I'm trying to change the state to include the added features. So I try to change the placeholder to <code>self.state_size,5</code>, but then I get the error:</p> <pre><code>ValueError: Cannot feed value of shape (35760, 5) for Tensor 'Placeholder:0', which has shape '(30, 5)' </code></pre> <p>...which I'm a little unclear about, but I don't think that's the issue. I know I'm trying to feed the tensor data in a shape that it's not expecting, but I don't know how to adapt this on my own. (I think) What I'm looking to do is add in those extra features into the <code>get_state</code> function so that each row is 1 window_size, and each column represents the features. Then the training should take place over the iterations.</p> <p>I've found similar questions/answers at the below links to help someone who knows more about this than me out. Most of them talk about reshaping the data at the placeholder, which I thought I had tried, but now I've just outrun my knowledge. Thanks in advance.</p> <p><a href="https://stackoverflow.com/questions/45966301/tensorflow-cannot-feed-value-of-shape-100-784-for-tensor-placeholder0/46876972">Here</a></p> <p><a href="https://stackoverflow.com/questions/44216254/tensorflow-valueerror-cannot-feed-value-of-shape-423-for-tensor-placeholde">Here</a></p> <p><a href="https://stackoverflow.com/questions/40430186/tensorflow-valueerror-cannot-feed-value-of-shape-64-64-3-for-tensor-uplace">Here</a></p> <p><a href="https://stackoverflow.com/questions/45966301/tensorflow-cannot-feed-value-of-shape-100-784-for-tensor-placeholder0">Here</a></p> <p><strong>UPDATE</strong></p> <p>Boy I'm sure struggling to figure this out on my own, but I appreciate the guidance thus far and I think I'm getting close, I just have limited knowledge of what to change. Given the answer below, I understand that my window_size/number of rows can change depending on what the window_size/lookback value is, so that would be the None part of my the placeholder, and in this particular case, I WOULD know the number of features (in this case 5) ahead of time so setting that as a static number (5) would suffice. 
So I'm trying to not have individual placeholders for each new feature I want to add.</p> <p>So trying to use the new placeholder <code>self.X = tf.compat.v1.placeholder(tf.float32, (None,5))</code> now the error I'm getting is:</p> <pre><code> InvalidArgumentError: Incompatible shapes: [3270,3] vs. [98100,3] [[{{node sub}}]] During handling of the above exception, another exception occurred: InvalidArgumentError Traceback (most recent call last) &lt;ipython-input-4-78f4afe280e1&gt; in &lt;module&gt;() 22 skip = skip) 23 ---&gt; 24 agent.train(iterations = 500, checkpoint = 10, initial_money = initial_money) &lt;ipython-input-3-1788840ff10e&gt; in train(self, iterations, checkpoint, initial_money) 135 cost, _ = self.sess.run([self.cost, self.optimizer], feed_dict={self.X:np.vstack(ep_history[:,0]), 136 self.REWARDS:ep_history[:,2], --&gt; 137 self.ACTIONS:ep_history[:,1]}) 138 139 </code></pre> <p>...so I get the printed output of the <code>ep_history</code> array to check its shape, which has a starting shape of :</p> <pre><code>array([[0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], ... [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.]]) # which is the current state (5x30) 0 10000 # which is the action and reward/starting_money array([[ 0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00], [ 0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00], .... [ 0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00, 0.000000e+00], [ 6.000060e-01, 1.669999e+00, 4.899980e-01, 2.309998e+00, -2.135710e+07]])] # which is the NEXT state </code></pre> <p>...which acts as the state (5 columns of features and 30 rows of window_size), action (0), starting_money (10,000), and next_state (again 5 x 30 with the next/future row of the dataset).</p> <p>So what seems to be the issue now? Is it the array? I apologize for the lengthy code, but I want to be thorough for those helping me and to show that I'm actually trying to understand the logic behind the fix so I can apply it later. Any further input? Would it be something to do with <code>get_state</code> function? (I've added a couple of extra comments to the colab book as I work through this). Thanks a lot.</p>
<p>You have used <code>self.X</code> as the input for the first layer. For a model, the number of rows (data points) can vary, but the number of features must be the same during training and prediction, because it determines the number of neurons on that layer. </p> <p>You can reuse the code on data with a different number of features to create a different model, but the feature count has to stay the same for any one model.</p> <pre class="lang-py prettyprint-override"><code>self.X = tf.compat.v1.placeholder(tf.float32, (None,feature_numbers)) </code></pre> <p>You must know the feature size of your input when you start training, and it can't be changed later. If you want to track other things, you need to create other variables for that, or you can pre-process your data to determine the number of features before creating the TensorFlow graph. </p>
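<p>A minimal sketch of the idea (TF 1.x-style graph code with made-up sizes, not the asker's full agent):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

window_size, n_features = 30, 5

# rows (samples / time steps) may vary; the feature count is fixed at build time
X = tf.compat.v1.placeholder(tf.float32, (None, n_features))
W = tf.compat.v1.get_variable("W", shape=(n_features, 3))
logits = tf.matmul(X, W)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    state = np.random.rand(window_size, n_features).astype(np.float32)
    print(sess.run(logits, feed_dict={X: state}).shape)  # (30, 3)
</code></pre>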
python|python-3.x|tensorflow
0
8,119
54,734,545
Indices of unique values in n-dimensional array
<p>I have a 2D Numpy array containing values from 0 to n. I want to get a list of length n, such that the i'th element of that list is an array of all the indices with value i+1 (0 is excluded).</p> <p>For example, for the input</p> <pre><code>array([[1, 0, 1], [2, 2, 0]]) </code></pre> <p>I'm expecting to get</p> <pre><code>[array([[0, 0], [0, 2]]), array([[1,0], [1,1]])] </code></pre> <p>I found this related question: <a href="https://stackoverflow.com/questions/30003068/get-a-list-of-all-indices-of-repeated-elements-in-a-numpy-array">Get a list of all indices of repeated elements in a numpy array</a> which may be helpful, but I hoped to find a more direct solution that doesn't require flattening and sorting the array and that is as efficient as possible.</p>
<p>Here's a vectorized approach, which works for arrays of an arbitrary amount of dimensions. The idea of this solution is to extend the functionality of the <code>return_index</code> method in <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html" rel="nofollow noreferrer"><code>np.unique</code></a>, and return an array of arrays, each containing the N-dimensional indices of unique values in a numpy array.</p> <p>For a more compact solution, I've defined the following function along with some explanations throughout the different steps:</p> <pre><code>def ndix_unique(x): """ Returns an N-dimensional array of indices of the unique values in x ---------- x: np.array Array with arbitrary dimensions Returns ------- - 1D-array of sorted unique values - Array of arrays. Each array contains the indices where a given value in x is found """ x_flat = x.ravel() ix_flat = np.argsort(x_flat) u, ix_u = np.unique(x_flat[ix_flat], return_index=True) ix_ndim = np.unravel_index(ix_flat, x.shape) ix_ndim = np.c_[ix_ndim] if x.ndim &gt; 1 else ix_flat return u, np.split(ix_ndim, ix_u[1:]) </code></pre> <hr> <p>Checking with the array from the question -</p> <pre><code>a = np.array([[1, 0, 1],[2, 2, 0]]) vals, ixs = ndix_unique(a) print(vals) array([0, 1, 2]) print(ixs) [array([[0, 1], [1, 2]]), array([[0, 0], [0, 2]]), array([[1, 0], [1, 1]])] </code></pre> <p>Lets try with this other case:</p> <pre><code>a = np.array([[1,1,4],[2,2,1],[3,3,1]]) vals, ixs = ndix_unique(a) print(vals) array([1, 2, 3, 4]) print(ixs) array([array([[0, 0], [0, 1], [1, 2], [2, 2]]), array([[1, 0], [1, 1]]), array([[2, 0], [2, 1]]), array([[0, 2]])], dtype=object) </code></pre> <p>For a <strong>1D</strong> array:</p> <pre><code>a = np.array([1,5,4,3,3]) vals, ixs = ndix_unique(a) print(vals) array([1, 3, 4, 5]) print(ixs) array([array([0]), array([3, 4]), array([2]), array([1])], dtype=object) </code></pre> <p>Finally another example with a <strong>3D</strong> ndarray:</p> <pre><code>a = np.array([[[1,1,2]],[[2,3,4]]]) vals, ixs = ndix_unique(a) print(vals) array([1, 2, 3, 4]) print(ixs) array([array([[0, 0, 0], [0, 0, 1]]), array([[0, 0, 2], [1, 0, 0]]), array([[1, 0, 1]]), array([[1, 0, 2]])], dtype=object) </code></pre>
python|arrays|numpy
4
8,120
28,192,810
What's the difference between ndarray.item(arg) and ndarray[arg]?
<p>I read the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.item.html" rel="nofollow">Docs</a>, but still not quite understand the difference and the use case for <code>item</code>.</p> <p>But recently I found where only <code>item</code> works:</p> <pre><code>a = np.array(100) # a has shape ()! a.item() # or a.item(0) </code></pre> <p>This is the only way to get the value out of <code>a</code>, <code>a[0]</code> doesn't work.</p>
<p><code>ndarray.item</code> allows you to interpret the array with a flat index, as opposed to using <code>[]</code> notation. This allows you to do something like this:</p> <pre><code>import numpy as np a = np.arange(16).reshape((4,4)) print(a) #[[ 0 1 2 3] # [ 4 5 6 7] # [ 8 9 10 11] # [12 13 14 15]] print(a[2,2]) # "Normal" indexing, taking the 3rd element from the 3rd row # 10 print(a.item(12)) # Take the 12th element from the array, equal to [3,0] # 12 </code></pre> <p>It also allows you to pass a tuple of indices easily, as below:</p> <pre><code>print(a.item((1,1))) # Equivalent to a[1,1] # 5 </code></pre> <p>Finally, as you mentioned in your question, it's a way to get the element of an array with <code>size = 1</code> as a Python scalar. Note that this is different to a <code>numpy</code> scalar, such that if <code>a = np.array([1.0], dtype=np.float32)</code> then <code>type(a[0]) != type(a.item(0))</code>.</p> <pre><code>b = np.array(3.14159) print(b, type(b)) # 3.14159 &lt;class 'numpy.ndarray'&gt; print(b.item(), type(b.item())) # 3.14159 &lt;class 'float'&gt; </code></pre>
python|numpy
6
8,121
28,312,374
numpy where compare arrays as a whole
<p>I have an array <code>x=np.array([[0,1,2,],[0,0,0],[3,4,0],[1,2,3]])</code>, and I want to get the index where x=[0,0,0], i.e. 1. I tried <code>np.where(x==[0,0,0])</code> resulting in <code>(array([0, 1, 1, 1, 2]), array([0, 0, 1, 2, 2]))</code>. How can I get the desired answer? </p>
<p>As @transcranial solution, you can use <code>np.all()</code> to do the job. But <code>np.all()</code> is slow, so if you apply it to a large array, speed will be your concern.</p> <p>To test for a specific value or a specific range, I would do like this.</p> <pre><code>x = np.array([[0,1,2],[0,0,0],[3,4,0],[1,2,3],[0,0,0]]) condition = (x[:,0]==0) &amp; (x[:,1]==0) &amp; (x[:,2]==0) np.where(condition) # (array([1, 4]),) </code></pre> <p>It's a bit ugly but it almost twice as fast as <code>np.all()</code> solution.</p> <pre><code>In[23]: %timeit np.where(np.all(x == [0,0,0], axis=1) == True) 100000 loops, best of 3: 6.5 µs per loop In[22]: %timeit np.where((x[:,0]==0)&amp;(x[:,1]==0)&amp;(x[:,2]==0)) 100000 loops, best of 3: 3.57 µs per loop </code></pre> <p>And you can test not only for equality but also a range.</p> <pre><code>condition = (x[:,0]&lt;3) &amp; (x[:,1]&gt;=1) &amp; (x[:,2]&gt;=0) np.where(condition) # (array([0, 3]),) </code></pre>
python|arrays|numpy
5
8,122
28,212,435
How to separate Monday-Friday from Saturday and Sunday Pandas?
<p>I'm working on a project that has data like this (I use the pandas framework with python):</p> <pre><code>days  rain
   0     1
   2     0
   3     1
   1     0
   6     1
   2     1
   1     1
   2     1
   3     0
   4     0
   5     0
</code></pre> <p>Days 0-6 are Monday-Sunday, rain 0 is a day with no rain and rain 1 is a rainy day.</p> <p>I want to separate the days into the new columns Monday-Friday, Saturday and Sunday, where the value in a row is 1 if it is that day and 0 if it is not, and the index needs to be the same as in the original file. How can I achieve that?</p>
<p>Try this:</p> <pre><code>df['Monday-Friday'] = df['days'].isin(range(5)).astype(int) df['Saturday'] = (df['days'] == 5).astype(int) df['Sunday'] = (df['days'] == 6).astype(int) </code></pre>
python|pandas
1
8,123
73,184,171
KeyError: None of [Index([(....)])] are in the columns for a list of columns generated with df.columns
<p>I have a <code>df_a</code> that contains all columns. Then I have <code>df_b</code> that contains a subset of this dataframe. I want to select the columns that are in <code>df_b</code> from <code>df_a</code>.</p> <p>Why does the following code not work?</p> <pre><code>df_a[[df_b.columns]] </code></pre> <p>It throws a KeyError <code>&quot;None of [Index([(....), (....))], dtype='object)] are in the [columns]</code>. Why?</p>
<p>Inner <code>[]</code> is redundant, you can try</p> <pre class="lang-py prettyprint-override"><code>df_a[df_b.columns] # or df_a.reindex(columns=df_b.columns) </code></pre>
python|pandas
1
8,124
73,240,427
Extract sub String from column in Pandas
<p>Given String;- <code>&quot;\NA*(0.0001,0.,NA,0.99999983,0.02) \EVENT=_Schedule185 \WGT=_WEEKS&quot;</code></p> <p>Output = <code>EVENT=_Schedule185</code></p>
<p>If you are able to get that into a dataframe then you can use this</p> <pre><code>df = pd.DataFrame({ 'Column1' : [r&quot;\NA*(0.0001,0.,NA,0.99999983,0.02) \EVENT=_Schedule185 \WGT=_WEEKS&quot;] }) df['Column1'].apply(lambda x : x.split('\\')[2]) </code></pre> <p>However, you are doing all this on an escape character so it might be a little tricky depending on how your actual data is structured. This code, however, will produce the desired results.</p>
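<p>If the goal is literally the <code>EVENT=...</code> token, a regex extract may be less brittle than splitting on backslashes (the pattern below is an assumption based on the single example shown):</p> <pre><code>df['Column1'].str.extract(r'(EVENT=\S+)')
</code></pre>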
python|pandas|jupyter-notebook
0
8,125
73,193,301
looping an object array with pandas
<p>I would like to find another way to loop in an array of objects for this I use pandas to generate my Excel file <code>response.text </code></p> <pre><code>{&quot;Header&quot;:{&quot;Time&quot;:&quot;2022-08-01T01:55:41-07:00&quot;,&quot;ReportName&quot;:&quot;TransactionListByCustomer&quot;,&quot;StartPeriod&quot;:&quot;2016-06-01&quot;,&quot;EndPeriod&quot;:&quot;2016-07-31&quot;,&quot;Currency&quot;:&quot;USD&quot;,&quot;Option&quot;:[{&quot;Name&quot;:&quot;NoReportData&quot;,&quot;Value&quot;:&quot;true&quot;}]},&quot;Columns&quot;:{&quot;Column&quot;:[{&quot;ColTitle&quot;:&quot;Date&quot;,&quot;MetaData&quot;:[{&quot;Name&quot;:&quot;ColKey&quot;,&quot;Value&quot;:&quot;tx_date&quot;}]},{&quot;ColTitle&quot;:&quot;Transaction Type&quot;,&quot;MetaData&quot;:[{&quot;Name&quot;:&quot;ColKey&quot;,&quot;Value&quot;:&quot;txn_type&quot;}]},{&quot;ColTitle&quot;:&quot;Num&quot;,&quot;MetaData&quot;:[{&quot;Name&quot;:&quot;ColKey&quot;,&quot;Value&quot;:&quot;doc_num&quot;}]},{&quot;ColTitle&quot;:&quot;Posting&quot;,&quot;MetaData&quot;:[{&quot;Name&quot;:&quot;ColKey&quot;,&quot;Value&quot;:&quot;is_no_post&quot;}]},{&quot;ColTitle&quot;:&quot;Memo/Description&quot;,&quot;MetaData&quot;:[{&quot;Name&quot;:&quot;ColKey&quot;,&quot;Value&quot;:&quot;memo&quot;}]},{&quot;ColTitle&quot;:&quot;Account&quot;,&quot;MetaData&quot;:[{&quot;Name&quot;:&quot;ColKey&quot;,&quot;Value&quot;:&quot;account_name&quot;}]},{&quot;ColTitle&quot;:&quot;Amount &quot;,&quot;MetaData&quot;:[{&quot;Name&quot;:&quot;ColKey&quot;,&quot;Value&quot;:&quot;amount&quot;}]}]},&quot;Rows&quot;:{}} </code></pre> <p>here is my code</p> <pre><code>response = requests.request(&quot;GET&quot;, url, headers=headers, data=payload) list = pd.read_json(response.text) df = pd.DataFrame({ list['Columns']['Column'][0]['ColTitle']: list['Columns']['Column'][0]['MetaData'][0]['Value'], list['Columns']['Column'][1]['ColTitle']: list['Columns']['Column'][1]['MetaData'][0]['Value'], list['Columns']['Column'][2]['ColTitle']: list['Columns']['Column'][2]['MetaData'][0]['Value'], list['Columns']['Column'][3]['ColTitle']: list['Columns']['Column'][3]['MetaData'][0]['Value'], list['Columns']['Column'][4]['ColTitle']: list['Columns']['Column'][4]['MetaData'][0]['Value'], list['Columns']['Column'][5]['ColTitle']: list['Columns']['Column'][5]['MetaData'][0]['Value'], list['Columns']['Column'][6]['ColTitle']: list['Columns']['Column'][6]['MetaData'][0]['Value'] }, index=[0]) df.to_excel('TransactionList.xlsx') </code></pre> <p>here is the result <img src="https://i.stack.imgur.com/7ZsKG.png" alt="enter image description here" /></p>
<p>You can try <code>pd.json_normalize</code></p> <pre class="lang-py prettyprint-override"><code>import json data = json.loads(response.text) df = (pd.json_normalize(data['Columns']['Column'], record_path='MetaData', meta='ColTitle') .drop(columns='Name') .set_index('ColTitle') .T) </code></pre> <pre><code>print(df) ColTitle Date Transaction Type Num Posting Memo/Description Account Amount Value tx_date txn_type doc_num is_no_post memo account_name amount </code></pre>
python|python-3.x|excel|pandas
0
8,126
73,264,675
Can't import object detection imageai (python)
<p>I installed imageai, tensorflow and keras in Python with pip.</p> <p>I typed this code</p> <pre><code>from imageai.Detection import ObjectDetection </code></pre> <p>It shows this error</p> <pre><code>ModuleNotFoundError: No module named 'keras.layers.advanced_activations' </code></pre> <p>Module versions<br /> imageai - 2.0.2<br /> keras - 2.90<br /> tensorflow - 2.9.1</p> <p>I'm running on Windows 10 Pro.</p>
<p>Try updating <strong>imageai</strong> to a newer version. The old 2.0.2 release relies on an import (<code>keras.layers.advanced_activations</code>) that no longer exists in current Keras versions, which is exactly the import that fails for you. <a href="https://pypi.org/project/imageai/2.1.6/" rel="nofollow noreferrer">Try this release</a>.</p>
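<p>For example, if you installed it with pip, something like this should pull in a newer release (restart the Python session afterwards before importing again):</p> <pre><code>pip install --upgrade imageai
</code></pre>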
python|tensorflow|keras|imageai
1
8,127
73,200,402
How to loop through dataframe column and compare dates to current
<p>Hello, I have a dataframe containing a date column. I would like to loop through these dates and compare each one to the current date to see if any entry is today. I tried converting the column to a list using the tolist() method, but it outputted not the date but rather &quot;Timestamp('2022-08-02 00:00:00')&quot;, even though my column only contains dates formatted as %Y-%m-%d, as you can see in the image.</p> <p><a href="https://i.stack.imgur.com/llwxR.png" rel="nofollow noreferrer">dataframe</a></p>
<p>Assuming that your Dataframe is called df, here's a possible way of solving your issue: <br></p> <pre><code>df.loc[df.Date == pd.Timestamp.now().date().strftime('%Y-%m-%d')] </code></pre> <p>I think it's a straightforward solution, you filter your dataframe by &quot;Date&quot; and compare to the date part of &quot;today's date&quot; while maintaining the correct format of y-m-d.</p>
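<p>Since the column apparently already holds <code>Timestamp</code> values, another option is to compare the dates directly instead of going through a formatted string — a small sketch, again assuming the frame is called <code>df</code>:</p> <pre><code>import pandas as pd

today = pd.Timestamp.now().normalize()               # today's date at midnight
todays_rows = df.loc[df['Date'].dt.normalize() == today]
</code></pre>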
python|pandas|dataframe
0
8,128
35,244,858
Count occurrences of elements of a matrix fast
<p>Let <strong>M</strong> and <strong>n</strong> be <strong>d x d</strong>- and <strong>d</strong>-dimensional numpy arrays of integers, respectively. I want to count the number of triples of the form <em>(n(i), n(j), M(i,j))</em>. As a result I want a numpy array such that each entry counts the number of occurrences of such a triple.</p> <p><em>Edit:</em> <strong>M</strong> is symmetric and I don't want to count triples with i=j.</p> <p>I'm currently using <code>itertools.product</code> (for loop over all pairs) and <code>numpy.bincount</code> to do this, but it is too slow. Is there a smarter way of doing this, probably using <code>numpy</code>?</p>
<p>Let:</p> <pre><code>M=np.random.randint(0,3,(10,10)) n=np.random.randint(0,3,10) </code></pre> <p>Make the triples and drop i=j:</p> <pre><code>x,y=np.meshgrid(n,n) a=np.dstack((x,y,M)).reshape(-1,3) au=a[a[:,0]!=a[:,1]] # i&lt;&gt;j </code></pre> <p>The problem with <code>unique</code> is that it works only on 1D arrays. A solution is to convert each row to a string: this gives cheap comparisons and is generally fast. </p> <pre><code>c=np.frombuffer(au,dtype='S12') # 12 is 3*n.itemsize _,indices,counts=np.unique(c,return_index=True,return_counts=True) result=np.vstack((counts,au[indices].T)) # count first. ## array([[1, 2, 5, 3, 4, 1, 4, 4, 3, 4, 9, 1, 3, 4, 9, 3, 4], [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2], [1, 1, 1, 2, 2, 2, 0, 0, 2, 2, 2, 0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2, 0, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]], dtype=int64) </code></pre> <p>If the integers are small like here (&lt;4), you can present the results so that <code>res[n(i),n(j),M(i,j)]</code> gives the count:</p> <pre><code>res=np.zeros((3,3,3),int) res[list(zip(*au[indices]))]=counts </code></pre>
python|numpy
1
8,129
67,318,910
ValueError: Shapes (None, None) and (None, None, None, 43) are incompatible
<p>I know that there were similar threads on this forum already, but though I checked them out I can't seem to find a solution.</p> <p>I'm trying to use a VGG model for multi-classification of images. I'm following a tutorial from a book. I use the last layer from the VGG model as my input for the last sequentail layers.</p> <p>My images are stored in a folder 'train', inside this folder there are 43 subfolders containing the images belonging to 43 classes. Each subfolder's name is a number from 0 to 42.</p> <p>I use <code>flow_from_directory()</code> function to load the images, and then finally <code>fit_generator()</code>.</p> <p>The last layer in my model is a dense layer <code>model.add(Dense(43, activation='softmax'))</code></p> <p>This is my code:</p> <pre><code>input_shape1 = (224, 224, 3) vgg = vgg16.VGG16(include_top=False, weights='imagenet', input_shape=input_shape1) output = vgg.layers[-2].output output = keras.layers.Flatten()(output) vgg_model = Model(vgg.input, output) vgg_model.trainable = False for layer in vgg_model.layers: layer.trainable = False vgg_model.summary() </code></pre> <pre><code>input_shape = vgg_model.output_shape[1] model = Sequential() model.add(InputLayer(input_shape=(input_shape,))) model.add(Dense(512, activation='relu', input_dim=input_shape)) model.add(Dropout(0.3)) model.add(Dense(512, activation='relu')) model.add(Dropout(0.3)) model.add(Dense(43, activation='softmax')) model.compile(optimizer=optimizers.RMSprop(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy']) </code></pre> <p>When I try to run the model with this line, I get the error:</p> <pre><code>epochs=100 history = model.fit_generator(train_ds, steps_per_epoch=1226, epochs=epochs, verbose=1) WARNING:tensorflow:Model was constructed with shape (None, 100352) for input KerasTensor(type_spec=TensorSpec(shape=(None, 100352), dtype=tf.float32, name='input_4'), name='input_4', description=&quot;created by layer 'input_4'&quot;), but it was called on an input with incompatible shape (None, None, None, None). ValueError: Shapes (None, None) and (None, None, None, 43) are incompatible </code></pre> <p>I really have no idea where it is coming from. I tried experimenting with input shapes but with no luck.</p> <p><strong>EDIT</strong></p> <p>This is my image generator</p> <pre><code>train_datagen = ImageDataGenerator( rescale=1./255, validation_split=0.3, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) test_datagen = ImageDataGenerator(rescale=1./255) </code></pre> <pre><code>train_ds = train_datagen.flow_from_directory( train_dir, seed=123, target_size=(224, 224), batch_size=32, class_mode='categorical') </code></pre> <p>Found 39209 images belonging to 43 classes. But I also specified validation split for this dataset.</p> <p><strong>EDIT 2</strong></p> <pre><code>vgg_model.output_shape[0] 100352 </code></pre> <p>The output shape of the model after adding the last layers is 43 though.</p> <p>Also, I tried changing the loss function to <code>sparse_categorical_crossentropy</code> and got this error:</p> <pre><code>InvalidArgumentError: Matrix size-incompatible: In[0]: [720000,3], In[1]: [8192,512] [[node sequential_8/dense_24/Tensordot/MatMul (defined at &lt;ipython-input-37-e259535ec653&gt;:2) ]] [Op:__inference_train_function_4462] </code></pre> <p>Somethig is wrong either with my model or with the way I'm loading the images, but I simply have no clue.</p> <p>I'd really appreciate your help. Thanks!</p>
<p>I actually changed my Image Generator to <code>flow_from_dataframe</code> and it worked.</p> <pre><code>train_df = train_datagen.flow_from_dataframe( traindf, y_col='ClassId', x_col='Path', directory=None, subset='training', seed=123, target_size=(150, 150), batch_size=32, class_mode='categorical') </code></pre>
python|tensorflow|machine-learning|keras|deep-learning
1
8,130
67,523,633
Python: Passed two arrays as function arguments. Expecting a series but only the last value is returned
<p>I believe there are other ways of doing this but I wish to learn why I am getting the results that I am getting.</p> <p>For added context, I am trying to learn vectorization in python and I came across tutorials that show passing the arrays is quicker than say the .apply() method.</p> <p>Aim: Compare two boolean arrays, and based on multiple conditions, return a result which should also be a series.</p> <p>However, doing the below I am only getting the value of the last combination not the series of results.</p> <pre><code>bool_1 = np.array([True,False,False]) bool_2 = np.array([False,True,False]) # Categorise the outcomes def bool_combinations(bool_array_1,bool_array_2): if bool_array_1 is True: OUTCOME = &quot;Outcome 1&quot; elif bool_array_2 is True: OUTCOME = &quot;Outcome 2&quot; else: OUTCOME = &quot;Outcome 3&quot; print(bool_array_1,bool_array_2,OUTCOME) return OUTCOME bool_combinations(bool_1,bool_2) </code></pre> <p>From the above I get the output of:</p> <pre><code>[ True False False] [False True False] Outcome 3 'Outcome 3' </code></pre> <p>I was hoping for a result which looks more like:</p> <pre><code>[ True False False] [False True False] [ 'Outcome 1' 'Outcome 2' 'Outcome 3'] [ 'Outcome 1' 'Outcome 2' 'Outcome 3'] </code></pre>
<p>You are close, but you likely want to zip the arrays together and compare element by element rather than comparing the arrays for truthiness. The reason you got &quot;Outcome 3&quot; is that:</p> <pre><code>numpy.array([...]) is True ## ---&gt; is always False </code></pre> <p>Using your code as a base, you might try:</p> <pre><code>import numpy def bool_combinations(bool_array_1,bool_array_2): OUTCOME = [] for (bool1, bool2) in zip(bool_array_1,bool_array_2): if bool1: OUTCOME.append(&quot;Outcome 1&quot;) elif bool2: OUTCOME.append(&quot;Outcome 2&quot;) else: OUTCOME.append(&quot;Outcome 3&quot;) return OUTCOME bool_1 = numpy.array([True,False,False]) bool_2 = numpy.array([False,True,False]) print(bool_combinations(bool_1,bool_2)) </code></pre> <p>That will give you:</p> <pre><code>['Outcome 1', 'Outcome 2', 'Outcome 3'] </code></pre> <p>I might personally do:</p> <pre><code>import numpy def bool_combinations(bool_array_1,bool_array_2): def _get_outcome(bool1, bool2): if bool1: return &quot;Outcome 1&quot; if bool2: return &quot;Outcome 2&quot; return &quot;Outcome 3&quot; return [_get_outcome(bool1, bool2) for bool1, bool2 in zip(bool_array_1,bool_array_2)] bool_1 = numpy.array([True,False,False]) bool_2 = numpy.array([False,True,False]) print(bool_combinations(bool_1,bool_2)) </code></pre> <p>That will also give you:</p> <pre><code>['Outcome 1', 'Outcome 2', 'Outcome 3'] </code></pre> <p>You can get golfier still, but I don't think making it shorter results in easier code to understand.</p>
python|arrays|python-3.x|pandas|numpy
0
8,131
67,486,035
Tensorflow No gradients provided for any variable
<p>I'm new to Tensorflow and Machine learning in general. I'm trying to create a model to detect brain tumor through MRIs.</p> <p>I'm splitting the data using <code>validation_split</code>. After compiling the model when when I try to fitting using the <code>.fit</code> function I get this Error. After googling I have found I might be because I'm not passing the <code>y</code> parameter when calling the <code>fit</code> function.</p> <p><strong>Code</strong>:</p> <pre class="lang-py prettyprint-override"><code>datagen = ImageDataGenerator(validation_split=0.2, rescale=1. / 255) train_generator = datagen.flow_from_directory( TRAIN_DIR, target_size=(150, 150), batch_size=32, class_mode='binary', subset='training' ) val_generator = datagen.flow_from_directory( TRAIN_DIR, target_size=(150, 150), batch_size=32, class_mode='binary', subset='validation' ) model = tf.keras.models.Sequential() model.add( tf.keras.layers.Conv2D( 16, (3, 3), activation='relu', input_shape=(150, 150, 3) ) ) model.add( tf.keras.layers.MaxPool2D(2, 2) ) ... # some more layers ... model.compile( optimizer='adam', loss=None, metrics=['accuracy'], ) print(model.summary()) Test = model.fit( train_generator, epochs=2, verbose=1, validation_data=val_generator ) </code></pre> <p>What am I doing wrong ?</p> <p><strong>Folder structure for the images</strong>:</p> <pre><code>images | ├── training │   ├── no │ ├── yes ├── testing │ ├── no │ ├── yes </code></pre> <p><strong>Exact Error Message</strong>:</p> <pre><code>ValueError: No gradients provided for any variable: ['conv2d/kernel:0', 'conv2d/bias:0', 'conv2d_1/kernel:0', 'conv2d_1/bias:0', 'conv2d_2/kernel:0', 'conv2d_2/bias:0', 'dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0']. </code></pre> <p><strong>Output of model.summary()</strong>:</p> <pre><code>Model: &quot;sequential&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 148, 148, 16) 448 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 74, 74, 16) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 72, 72, 32) 4640 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 36, 36, 32) 0 _________________________________________________________________ conv2d_2 (Conv2D) (None, 34, 34, 64) 18496 _________________________________________________________________ max_pooling2d_2 (MaxPooling2 (None, 17, 17, 64) 0 _________________________________________________________________ flatten (Flatten) (None, 18496) 0 _________________________________________________________________ dense (Dense) (None, 512) 9470464 _________________________________________________________________ dense_1 (Dense) (None, 2) 1026 ================================================================= Total params: 9,495,074 Trainable params: 9,495,074 Non-trainable params: 0 </code></pre>
<p>This is because you set the loss to <code>None</code>, no gradient is provided from the loss function back to your model. Modify</p> <pre><code>model.compile( optimizer='adam', loss=None, metrics=['accuracy'], ) </code></pre> <p>to</p> <pre><code>model.compile( optimizer='adam', loss='mse', # or some other loss metrics=['accuracy'], ) </code></pre>
python|tensorflow
1
8,132
34,504,191
Python pandas combine the second row if the first row IDs are the same
<p>We are using Python 2.7</p> <p>We have a simple table below:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'A': 'foo bar foo bar foo bar foo foo polar bear'.split(), 'B': '1 1 2 3 2 2 1 3 4 5'.split()}) print(df) </code></pre> <p>It generates </p> <pre><code> A B 0 foo 1 1 bar 1 2 foo 2 3 bar 3 4 foo 2 5 bar 2 6 foo 1 7 foo 3 </code></pre> <p>Is there any Pandas way to match the <code>ID</code> in the column <code>A</code>? For example, if the <code>ID</code> in the column <code>A</code> is the same, then concatenate the second row into a dictionary or a list. For example:</p> <pre><code>{'foo,12213','bar,132'} </code></pre> <p>Thank you!</p>
<p>You could groupby aggregate to list and join the list as below.</p> <pre><code>df Out[7]: A B 0 foo 1 1 bar 1 2 foo 2 3 bar 3 4 foo 2 5 bar 2 6 foo 1 7 foo 3 df.groupby("A")["B"].apply(list) Out[10]: A bar [1, 3, 2] foo [1, 2, 2, 1, 3] new_df = df.groupby("A")["B"].apply(list).reset_index() new_df['B'] = new_df['B'].map(lambda x: ''.join([str(i) for i in x])) A B 0 bar 132 1 foo 12213 new_df.set_index("A").to_dict() Out[34]: {'B': {'bar': '132', 'foo': '12213'}} </code></pre>
python-2.7|pandas
1
8,133
34,671,012
How to create 2 column binary numpy array from string list?
<p><strong>Input:</strong></p> <p>A string list like this: </p> <pre><code>['a', 'a', 'a', 'b', 'b', 'a', 'b'] </code></pre> <p><strong>Output I want:</strong></p> <p>A numpy array like this:</p> <pre><code>array([[ 1, 0], [ 1, 0], [ 1, 0], [ 0, 1], [ 0, 1], [ 1, 0], [ 0, 1]]) </code></pre> <p>What I tried:</p> <p>Try 1 - My starting data is actually stored in a column as a csv file. So I tried the following:</p> <pre><code>data1 = genfromtxt('csvname.csv', delimiter=',') </code></pre> <p>I did this because I thought I could manipulate the csv data into to form I want after I input it into the numpy format. However, the problem is I get all nan which is not a number. I'm not sure how else to go about this effectively because I need to do this for a large data set.</p> <p>Try 2 - The ineffective method which I was thinking of doing:</p> <p>For each element of the list, append [1,0] if a and append [0,1] if b.</p> <p>Is there a better method?</p>
<p>Using a list comprehension:</p> <p><strong>Code:</strong></p> <pre><code>import numpy lst = ['a', 'a', 'a', 'b', 'b', 'a', 'b'] numpy.array([[1, 0] if val == "a" else [0, 1] for val in lst]) </code></pre> <p><strong>Output:</strong></p> <pre><code>array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [1, 0], [0, 1]]) </code></pre> <p><strong>Note:</strong></p> <ul> <li>Rather than appending to a list/numpy array element by element, building the whole list in one comprehension is faster</li> </ul>
python|arrays|numpy
4
8,134
34,805,014
Loop through a pandas dataframe with multiple groupby functions and write to excel
<p>I currently have a script that I use to produce an excel output file, using a pandas df. I run the script 5 times- only changing the columns I groupby with- and append all 5 sheets into a 'master file' manually. I'm wondering how I can loop through my script automatically with 5 different groupby functions and simultaneously create 5 separate xlsx sheets for the output. </p> <p>These are the groupby functions that I usually paste under the '### Column Renaming, NaN Replacing &amp; DataFrame Column Additions' comment:</p> <pre><code>grouped = df.groupby(['customer_account', 'CounterPartyID']) grouped = df.groupby(['customer_account', 'CounterPartyID', 'symbol']) grouped = df.groupby(['customer_account', 'CounterPartyID', 'Providers', 'symbol']) grouped = df.groupby(['Providers', 'customer_account']) grouped = df.groupby(['Providers', 'symbol']) </code></pre> <pre><code>import pandas as pd import numpy as np import csv import time import glob import datetime import re import sys import os from dateutil import relativedelta from xlsxwriter.utility import xl_rowcol_to_cell '''This is where I find the file with the compiled data and add the needed columns to the df''' ### File Finding Stuff file_names = sorted(glob.glob(r'T:\Tom\Scripts\\' + '*_fillssideclient.csv'), reverse=True) file = file_names[0] date = os.path.basename(file)[0:8] #file = "20151215_fillssideclient.csv" ### For manual file pulls df = pd.read_csv(file) ### Column Renaming, NaN Replacing &amp; DataFrame Column Additions df.rename(columns={'provider':'Providers'}, inplace=True) df = df.replace(np.nan,'All Tags', regex=True) df['five_avg'] = df.iloc[:, 30:40].sum(axis=1).astype('int64') / 10 #Added column at end of df for 5s avg df['ten_avg'] = df.iloc[:, 30:50].sum(axis=1).astype('int64') / 20 #Added column at end of df for 10s avg df['twenty_avg'] = df.iloc[:, 30:70].sum(axis=1).astype('int64') / 40 #Added column at end of df for 20s avg #This is the primary function that I need to have my 5 'groupby' variables loop through and create 5 sheets''' ### Primary DataFrame Calculations filled_total = df['filled'].sum() order_total = grouped['filled'].count() total_tickets = grouped['filled'].sum() share = total_tickets / filled_total fill_rate = total_tickets / order_total total_size = grouped['fill_size'].sum() avg_size = total_size / total_tickets ### One Second Calculations one_toxicity = grouped.apply(lambda x: x['filled'][x['1000'] &lt; -25].sum()) / total_tickets one_average = grouped.apply(lambda x: x[x['filled'] == 1]['1000'].mean()) one_low = grouped.apply(lambda x: x[x['filled'] == 1]['1000'].quantile(.25)) one_med = grouped.apply(lambda x: x[x['filled'] == 1]['1000'].quantile(.50)) one_high = grouped.apply(lambda x: x[x['filled'] == 1]['1000'].quantile(.75)) ### Five Second Calculations #five_toxicity = grouped.apply(lambda x: x['filled'][x['5000'] &lt; -25].sum()) / total_tickets five_average = grouped.apply(lambda x: x[x['filled'] == 1]['five_avg'].mean()) five_low = grouped.apply(lambda x: x[x['filled'] == 1]['five_avg'].quantile(.25)) five_med = grouped.apply(lambda x: x[x['filled'] == 1]['five_avg'].quantile(.50)) five_high = grouped.apply(lambda x: x[x['filled'] == 1]['five_avg'].quantile(.75)) #five_std = grouped.apply(lambda x: x[x['filled'] == 1]['five_avg'].std()) ### Ten Second Calculations #ten_toxicity = grouped.apply(lambda x: x['filled'][x['10000'] &lt; -25].sum()) / total_tickets ten_average = grouped.apply(lambda x: x[x['filled'] == 1]['ten_avg'].mean()) ten_low = grouped.apply(lambda x: 
x[x['filled'] == 1]['ten_avg'].quantile(.25)) ten_med = grouped.apply(lambda x: x[x['filled'] == 1]['ten_avg'].quantile(.50)) ten_high = grouped.apply(lambda x: x[x['filled'] == 1]['ten_avg'].quantile(.75)) #ten_std = grouped.apply(lambda x: x[x['filled'] == 1]['ten_avg'].std()) ### Twenty Second Calculations #twenty_toxicity = grouped.apply(lambda x: x['filled'][x['20000'] &lt; -50].sum()) / total_tickets twenty_avg = grouped.apply(lambda x: x[x['filled'] == 1]['twenty_avg'].mean()) twenty_low = grouped.apply(lambda x: x[x['filled'] == 1]['twenty_avg'].quantile(.25)) twenty_med = grouped.apply(lambda x: x[x['filled'] == 1]['twenty_avg'].quantile(.50)) twenty_high = grouped.apply(lambda x: x[x['filled'] == 1]['twenty_avg'].quantile(.75)) #twenty_std = grouped.apply(lambda x: x[x['filled'] == 1]['twenty_avg'].std()) ### Column Formatting #comma_fmt = workbook.add_format({'num_format': '#,##0'}) #money_fmt = workbook.add_format({'num_format': '$#,##0.000'}) #percent_fmt = workbook.add_format({'num_format': '0.0%'}) #Still need to figure out how to customize column width, column format and conditional formatting''' list_of_lists = [ ['Trades', total_tickets], ['Share %', share], ['Fill Rate', fill_rate], ['Total Size', total_size], ['Avg Size', avg_size], ['1s Toxic', one_toxicity], ['1s Avg', one_average], ['1s 25th', one_low], ['1s 50th', one_med], ['1s 75th', one_high], ['5s Avg', five_average], ['5s 25th', five_low], ['5s 50th', five_med], ['5s 75th', five_high], ['10s Avg', ten_average], ['10s 25th', ten_low], ['10s 50th', ten_med], ['10s 75th', ten_high], ['20s Avg', twenty_avg], ['20s 25th', twenty_low], ['20s 50th', twenty_med], ['20s 75th', twenty_high] ] result = pd.concat([lst[1] for lst in list_of_lists], axis=1) result.columns = [lst[0] for lst in list_of_lists] result = result[result.Trades &gt; 0] # Removes results that are less than 1...use '!= 0' to remove only 0 trades # This is where I find the output location, declare my 'groupby' variables and execute the script writer = pd.ExcelWriter(date + '_counterparty_monthly.xlsx', engine='xlsxwriter') result.to_excel(writer, sheet_name='All Trades') workbook = writer.book worksheet = writer.sheets['All Trades'] worksheet.set_zoom(80) #Worksheet and Print Options worksheet.hide_gridlines(2) worksheet.fit_to_pages(1, 1) writer.save() </code></pre>
<p>IIUC you can add list of columns and then use for loop. Last you can add number to sheet name:</p> <pre><code>col = [['customer_account', 'CounterPartyID'], ['customer_account', 'CounterPartyID', 'symbol'], ['customer_account', 'CounterPartyID', 'Providers', 'symbol'], ['Providers', 'customer_account'], ['Providers', 'symbol']] for i, col in enumerate(col): print col print i #grouped = df.groupby(col) sheetname = 'All Trades-' + str(i) print sheetname #['customer_account', 'CounterPartyID'] #0 #All Trades-0 #['customer_account', 'CounterPartyID', 'symbol'] #1 #All Trades-1 #['customer_account', 'CounterPartyID', 'Providers', 'symbol'] #2 #All Trades-2 #['Providers', 'customer_account'] #3 #All Trades-3 #['Providers', 'symbol'] #4 #All Trades-4 </code></pre> <p>And in row 133 use variable <code>sheetname</code>:</p> <pre><code>#add sheet name result.to_excel(writer, sheet_name=sheetname) workbook = writer.book #add sheet name worksheet = writer.sheets[sheetname] worksheet.set_zoom(80) </code></pre> <p>And you can open and save excel file only once:</p> <pre><code># This is where I find the output location, declare my 'groupby' variables and execute the script writer = pd.ExcelWriter(date + '_counterparty_monthly.xlsx', engine='xlsxwriter') for i, col in enumerate(col): #print col #print i grouped = df.groupby(col) . . . #Worksheet and Print Options worksheet.hide_gridlines(2) worksheet.fit_to_pages(1, 1) writer.save() </code></pre> <p>All together:</p> <pre><code>import pandas as pd import numpy as np import csv import time import glob import datetime import re import sys import os from dateutil import relativedelta from xlsxwriter.utility import xl_rowcol_to_cell '''This is where I find the file with the compiled data and add the needed columns to the df''' ### File Finding Stuff file_names = sorted(glob.glob(r'T:\Tom\Scripts\\' + '*_fillssideclient.csv'), reverse=True) file = file_names[0] date = os.path.basename(file)[0:8] #file = "20151215_fillssideclient.csv" ### For manual file pulls df = pd.read_csv(file) ### Column Renaming, NaN Replacing &amp; DataFrame Column Additions df.rename(columns={'provider':'Providers'}, inplace=True) df = df.replace(np.nan,'All Tags', regex=True) df['five_avg'] = df.iloc[:, 30:40].sum(axis=1).astype('int64') / 10 #Added column at end of df for 5s avg df['ten_avg'] = df.iloc[:, 30:50].sum(axis=1).astype('int64') / 20 #Added column at end of df for 10s avg df['twenty_avg'] = df.iloc[:, 30:70].sum(axis=1).astype('int64') / 40 #Added column at end of df for 20s avg col = [['customer_account', 'CounterPartyID'], ['customer_account', 'CounterPartyID', 'symbol'], ['customer_account', 'CounterPartyID', 'Providers', 'symbol'], ['Providers', 'customer_account'], ['Providers', 'symbol']] # This is where I find the output location, declare my 'groupby' variables and execute the script writer = pd.ExcelWriter(date + '_counterparty_monthly.xlsx', engine='xlsxwriter') for i, col in enumerate(col): #print col #print i grouped = df.groupby(col) sheetname = 'All Trades-' + str(i) #print sheetname #This is the primary function that I need to have my 5 'groupby' variables loop through and create 5 sheets''' ### Primary DataFrame Calculations filled_total = df['filled'].sum() order_total = grouped['filled'].count() total_tickets = grouped['filled'].sum() share = total_tickets / filled_total fill_rate = total_tickets / order_total total_size = grouped['fill_size'].sum() avg_size = total_size / total_tickets ### One Second Calculations one_toxicity = grouped.apply(lambda x: 
x['filled'][x['1000'] &lt; -25].sum()) / total_tickets one_average = grouped.apply(lambda x: x[x['filled'] == 1]['1000'].mean()) one_low = grouped.apply(lambda x: x[x['filled'] == 1]['1000'].quantile(.25)) one_med = grouped.apply(lambda x: x[x['filled'] == 1]['1000'].quantile(.50)) one_high = grouped.apply(lambda x: x[x['filled'] == 1]['1000'].quantile(.75)) ### Five Second Calculations #five_toxicity = grouped.apply(lambda x: x['filled'][x['5000'] &lt; -25].sum()) / total_tickets five_average = grouped.apply(lambda x: x[x['filled'] == 1]['five_avg'].mean()) five_low = grouped.apply(lambda x: x[x['filled'] == 1]['five_avg'].quantile(.25)) five_med = grouped.apply(lambda x: x[x['filled'] == 1]['five_avg'].quantile(.50)) five_high = grouped.apply(lambda x: x[x['filled'] == 1]['five_avg'].quantile(.75)) #five_std = grouped.apply(lambda x: x[x['filled'] == 1]['five_avg'].std()) ### Ten Second Calculations #ten_toxicity = grouped.apply(lambda x: x['filled'][x['10000'] &lt; -25].sum()) / total_tickets ten_average = grouped.apply(lambda x: x[x['filled'] == 1]['ten_avg'].mean()) ten_low = grouped.apply(lambda x: x[x['filled'] == 1]['ten_avg'].quantile(.25)) ten_med = grouped.apply(lambda x: x[x['filled'] == 1]['ten_avg'].quantile(.50)) ten_high = grouped.apply(lambda x: x[x['filled'] == 1]['ten_avg'].quantile(.75)) #ten_std = grouped.apply(lambda x: x[x['filled'] == 1]['ten_avg'].std()) ### Twenty Second Calculations #twenty_toxicity = grouped.apply(lambda x: x['filled'][x['20000'] &lt; -50].sum()) / total_tickets twenty_avg = grouped.apply(lambda x: x[x['filled'] == 1]['twenty_avg'].mean()) twenty_low = grouped.apply(lambda x: x[x['filled'] == 1]['twenty_avg'].quantile(.25)) twenty_med = grouped.apply(lambda x: x[x['filled'] == 1]['twenty_avg'].quantile(.50)) twenty_high = grouped.apply(lambda x: x[x['filled'] == 1]['twenty_avg'].quantile(.75)) #twenty_std = grouped.apply(lambda x: x[x['filled'] == 1]['twenty_avg'].std()) ### Column Formatting #comma_fmt = workbook.add_format({'num_format': '#,##0'}) #money_fmt = workbook.add_format({'num_format': '$#,##0.000'}) #percent_fmt = workbook.add_format({'num_format': '0.0%'}) #Still need to figure out how to customize column width, column format and conditional formatting''' list_of_lists = [ ['Trades', total_tickets], ['Share %', share], ['Fill Rate', fill_rate], ['Total Size', total_size], ['Avg Size', avg_size], ['1s Toxic', one_toxicity], ['1s Avg', one_average], ['1s 25th', one_low], ['1s 50th', one_med], ['1s 75th', one_high], ['5s Avg', five_average], ['5s 25th', five_low], ['5s 50th', five_med], ['5s 75th', five_high], ['10s Avg', ten_average], ['10s 25th', ten_low], ['10s 50th', ten_med], ['10s 75th', ten_high], ['20s Avg', twenty_avg], ['20s 25th', twenty_low], ['20s 50th', twenty_med], ['20s 75th', twenty_high] ] result = pd.concat([lst[1] for lst in list_of_lists], axis=1) result.columns = [lst[0] for lst in list_of_lists] result = result[result.Trades &gt; 0] # Removes results that are less than 1...use '!= 0' to remove only 0 trades result.to_excel(writer, sheet_name=sheetname) workbook = writer.book #add sheet name worksheet = writer.sheets[sheetname] worksheet.set_zoom(80) #Worksheet and Print Options worksheet.hide_gridlines(2) worksheet.fit_to_pages(1, 1) writer.save() </code></pre>
python|pandas|dataframe|xlsxwriter
2
8,135
34,676,926
Generate 1d numpy with chunks of random length
<p>I need to generate 1D array where repeated sequences of integers are separated by a random number of zeros.</p> <p>So far I am using next code for this:</p> <pre><code>from random import normalvariate regular_sequence = np.array([1,2,3,4,5], dtype=np.int) n_iter = 10 lag_mean = 10 # mean length of zeros sequence lag_sd = 1 # standard deviation of zeros sequence length # Sequence of lags lengths lag_seq = [int(round(normalvariate(lag_mean, lag_sd))) for x in range(n_iter)] # Generate list of concatenated zeros and regular sequences seq = [np.concatenate((np.zeros(x, dtype=np.int), regular_sequence)) for x in lag_seq] seq = np.concatenate(seq) </code></pre> <p>It works but looks very slow when I need a lot of long sequences. So, how can I optimize it? </p>
<p>You can pre-compute indices where repeated <code>regular_sequence</code> elements are to be put and then set those with <code>regular_sequence</code> in a vectorized manner. For pre-computing those indices, one can use <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.cumsum.html" rel="nofollow"><code>np.cumsum</code></a> to get the start of each such <em>chunk</em> of <code>regular_sequence</code> and then add a continuous set of integers extending to the size of <code>regular_sequence</code> to get all indices that are to be updated. Thus, the implementation would look something like this -</p> <pre><code># Size of regular_sequence N = regular_sequence.size # Use cumsum to pre-compute start of every occurance of regular_sequence offset_arr = np.cumsum(lag_seq) idx = np.arange(offset_arr.size)*N + offset_arr # Setup output array out = np.zeros(idx.max() + N,dtype=regular_sequence.dtype) # Broadcast the start indices to include entire length of regular_sequence # to get all positions where regular_sequence elements are to be set np.put(out,idx[:,None] + np.arange(N),regular_sequence) </code></pre> <hr> <p>Runtime tests -</p> <pre><code>def original_app(lag_seq, regular_sequence): seq = [np.concatenate((np.zeros(x, dtype=np.int), regular_sequence)) for x in lag_seq] return np.concatenate(seq) def vectorized_app(lag_seq, regular_sequence): N = regular_sequence.size offset_arr = np.cumsum(lag_seq) idx = np.arange(offset_arr.size)*N + offset_arr out = np.zeros(idx.max() + N,dtype=regular_sequence.dtype) np.put(out,idx[:,None] + np.arange(N),regular_sequence) return out In [64]: # Setup inputs ...: regular_sequence = np.array([1,2,3,4,5], dtype=np.int) ...: n_iter = 1000 ...: lag_mean = 10 # mean length of zeros sequence ...: lag_sd = 1 # standard deviation of zeros sequence length ...: ...: # Sequence of lags lengths ...: lag_seq = [int(round(normalvariate(lag_mean, lag_sd))) for x in range(n_iter)] ...: In [65]: out1 = original_app(lag_seq, regular_sequence) In [66]: out2 = vectorized_app(lag_seq, regular_sequence) In [67]: %timeit original_app(lag_seq, regular_sequence) 100 loops, best of 3: 4.28 ms per loop In [68]: %timeit vectorized_app(lag_seq, regular_sequence) 1000 loops, best of 3: 294 µs per loop </code></pre>
python|arrays|numpy
4
8,136
60,165,547
Adding columns to dataframe that depends on a existing column and its qcut bin values
<p>I have a dataframe that looks like below. dataframe1 = </p> <pre><code>Ind ID T1 T2 T3 T4 T5 0 Q1 100 121 43 56 78 1 Q2 23 43 56 76 87 2 Q3 345 56 76 78 98 3 Q4 21 32 34 45 56 4 Q5 45 654 567 78 90 5 Q6 123 32 45 56 67 6 Q7 23 24 25 26 27 7 Q8 32 33 34 35 36 8 Q9 123 124 125 126 127 9 Q10 56 56 56 56 56 10 Q11 76 77 78 79 80 11 Q12 87 87 87 87 87 12 Q13 90 90 90 90 90 13 Q14 43 44 45 46 47 14 Q15 23 24 25 26 27 15 Q16 51 52 53 54 55 16 Q17 67 67 67 67 67 17 Q18 87 87 87 87 87 18 Q19 90 91 92 93 94 19 Q20 23 24 25 26 27 </code></pre> <p>Now,I have applied qcut to column 'T1' to get bins by using - </p> <pre><code>pd.qcut(data_data['T1'].rank(method = 'first'),10,labels = list(range(1,11))) </code></pre> <p>that gives me.</p> <pre><code>0 9 1 1 2 10 3 1 4 4 5 9 6 2 7 3 8 10 9 5 10 6 11 7 12 8 13 4 14 2 15 5 16 6 17 7 18 8 19 3 </code></pre> <p>Now, I want to get the mean of all bin 5 values, so that I can add another column in dataframe1 named 'T1_FOLD' that is simply the ((individual 'T1' values) - (that mean of bin 5 values)).</p> <p>How can I do that??</p>
<p>Filter column <code>T1</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>, get <code>mean</code>s and use for subtracting of column <code>T1</code>:</p> <pre><code>s = pd.qcut(data_data['T1'].rank(method = 'first'),10,labels = list(range(1,11))) data_data['T1_FOLD'] = data_data['T1'] - data_data.loc[s == 5, 'T1'].mean() print (data_data) ID T1 T2 T3 T4 T5 T1_FOLD 0 Q1 100 121 43 56 78 46.5 1 Q2 23 43 56 76 87 -30.5 2 Q3 345 56 76 78 98 291.5 3 Q4 21 32 34 45 56 -32.5 4 Q5 45 654 567 78 90 -8.5 5 Q6 123 32 45 56 67 69.5 6 Q7 23 24 25 26 27 -30.5 7 Q8 32 33 34 35 36 -21.5 8 Q9 123 124 125 126 127 69.5 9 Q10 56 56 56 56 56 2.5 10 Q11 76 77 78 79 80 22.5 11 Q12 87 87 87 87 87 33.5 12 Q13 90 90 90 90 90 36.5 13 Q14 43 44 45 46 47 -10.5 14 Q15 23 24 25 26 27 -30.5 15 Q16 51 52 53 54 55 -2.5 16 Q17 67 67 67 67 67 13.5 17 Q18 87 87 87 87 87 33.5 18 Q19 90 91 92 93 94 36.5 19 Q20 23 24 25 26 27 -30.5 </code></pre>
python|pandas|dataframe
0
8,137
60,015,596
pandas dataframe drop a row of data with same value
<p>I have a data set like below and I want to drop the row of data with same value:</p> <p><a href="https://i.stack.imgur.com/TDkPJ.png" rel="nofollow noreferrer">enter image description here</a></p> <p>I think I can check the value of all rows, if all are duplicate then drop it, or I can specify a row with specific time (12:30 in this case), but I don't know how to code it...</p> <p>I tried the following and try to drop just one line but fail..</p> <p>df.drop['2020-01-29 12:30']</p> <p>Anyone could give me a push? Thanks in advance!</p>
<p>Hi let me know if this works for you or not,</p> <p>Just For example I have created the data frame</p> <pre><code>import pandas as pd data1={'A':[1,2,3,43], 'B':[11,22,3,53], 'C':[21,23,3,433], 'D':[131,223,3,54]} df=pd.DataFrame(data1) df.index.names=['index'] print(df) </code></pre> <p><strong>DataFrame</strong></p> <pre><code> A B C D index 0 1 11 21 131 1 2 22 23 223 2 3 3 3 3 3 43 53 433 54 ind=df[df['A']+df['B'] == df['C']+df['D']].index # get the index where values are similar. Here i have done the addition of the values from first two columns and same with next two columns, if both sums are equal then get the index. df.drop(ind,inplace=True) #drop row (ind=2) and save the dataframe print(df) </code></pre> <p><strong>Final output</strong></p> <pre><code> A B C D index 0 1 11 21 131 1 2 22 23 223 3 43 53 433 54 </code></pre> <p>Note: index 2 row is removed.</p>
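<p>A more general way to find rows where every column holds the same value — which avoids the corner case where two different pairs of columns just happen to have equal sums — is to count the distinct values per row:</p> <pre><code>ind = df[df.nunique(axis=1) == 1].index   # rows with only one distinct value across all columns
df.drop(ind, inplace=True)
</code></pre>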
python|pandas
0
8,138
65,080,599
Export dataframe to excel file using xlsxwriter
<p>I have dataframes as output and I need to export to excel file. I can use pandas for the task but I need the output to be the worksheet from right to left direction. I have searched and didn't find any clue regarding using the pandas to change the direction .. I have found the package xlsxwriter do that</p> <pre><code>import xlsxwriter workbook = xlsxwriter.Workbook('output.xlsx') worksheet1 = workbook.add_worksheet() format_right_to_left = workbook.add_format({'reading_order': 2}) worksheet1.set_column('A:A', 20) worksheet1.right_to_left() worksheet1.write(new_df) workbook.close() </code></pre> <p>But I don't know how to export the dataframe using this approach ..</p> <p>snapshot to clarify the directions: <a href="https://i.stack.imgur.com/IFMvf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IFMvf.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/OPFkp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OPFkp.png" alt="enter image description here" /></a></p> <p>** I have used multiple lines as for format point</p> <pre><code>myformat = workbook.add_format() myformat.set_reading_order(2) myformat.set_align('center') myformat.set_align('vcenter') </code></pre> <p>Is it possible to make such lines shorter using dictionary ..for example?</p>
<p>You can do this:</p> <pre><code>import xlsxwriter writer = pd.ExcelWriter('pandas_excel.xlsx', engine='xlsxwriter') df.to_excel(writer, sheet_name='Sheet1') # Assuming you already have a `df` workbook = writer.book worksheet = writer.sheets['Sheet1'] format_right_to_left = workbook.add_format({'reading_order': 2}) worksheet.right_to_left() writer.save() </code></pre>
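<p>As for your follow-up question: yes, <code>add_format()</code> accepts a dictionary of properties, so the three separate <code>set_*</code> calls can be collapsed into one call:</p> <pre><code>myformat = workbook.add_format({
    'reading_order': 2,
    'align': 'center',
    'valign': 'vcenter',
})
</code></pre> <p>The resulting format can then be passed to <code>worksheet.set_column()</code> or to individual <code>write()</code> calls as usual.</p>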
python|pandas|xlsxwriter
1
8,139
65,361,279
Pandas Key error when reading from the file?
<p>I have a problem. This is my program. I use this program to find properly A and T parameters which give me properly exponential curve.</p> <pre><code>#otwieranie pliku import pandas as pd data = pd.read_csv(&quot;mgdg&quot;, sep = &quot; &quot;) #przypisanie do zmiennych t -czas, C - C t = data[&quot;czas&quot;] C = data[&quot;C&quot;] #jak wyglada zaleznosc C od t? import matplotlib.pyplot as plt plt.plot(t,C) plt.show() #funkcja do optymalizacji import numpy as np from scipy.optimize import curve_fit def func(t, A1,A2,A3,T1, T2, T3, T4): return A1 * np.exp(-t/T1) + A2 * np.exp(-t/T2) + A3 * np.exp(-t/T3) +(1-A1-A2-A3) * np.exp(-t/T4) #sama optymalizacja, p0 zawiera parametry początkowe params, params_covariance = curve_fit(func, t,C , p0=np.asarray([0.23,0.40,0.1,253,8,4600,1400])) #do P zapisuje zaokrąglone parametry P = [round(x,2) for x in params] #jak sie dopasowalo? plt.plot(t,C) plt.plot(t,func(t,P[0],P[1], P[2], P[3], P[4], P[5], P[6]), c = &quot;red&quot;) plt.show() </code></pre> <p>This is a fragment from my file</p> <pre><code>0 1 1 0.756897 2 0.712127 3 0.679612 4 0.653257 5 0.630961 6 0.611496 7 0.594308 8 0.578927 9 0.564992 10 0.552246 </code></pre> <p>This is my error</p> <pre><code>(anaconda_env) jakub@jakub-Z370-HD3P:~/czasy_wiazan/Adrian$ python exp.py Traceback (most recent call last): File &quot;/home/jakub/anaconda3/envs/anaconda_env/lib/python3.9/site-packages/pandas/core/indexes/base.py&quot;, line 2898, in get_loc return self._engine.get_loc(casted_key) File &quot;pandas/_libs/index.pyx&quot;, line 70, in pandas._libs.index.IndexEngine.get_loc File &quot;pandas/_libs/index.pyx&quot;, line 101, in pandas._libs.index.IndexEngine.get_loc File &quot;pandas/_libs/hashtable_class_helper.pxi&quot;, line 1675, in pandas._libs.hashtable.PyObjectHashTable.get_item File &quot;pandas/_libs/hashtable_class_helper.pxi&quot;, line 1683, in pandas._libs.hashtable.PyObjectHashTable.get_item KeyError: 'czas' The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/home/jakub/czasy_wiazan/Adrian/exp.py&quot;, line 5, in &lt;module&gt; t = data[&quot;czas&quot;] File &quot;/home/jakub/anaconda3/envs/anaconda_env/lib/python3.9/site-packages/pandas/core/frame.py&quot;, line 2906, in __getitem__ indexer = self.columns.get_loc(key) File &quot;/home/jakub/anaconda3/envs/anaconda_env/lib/python3.9/site-packages/pandas/core/indexes/base.py&quot;, line 2900, in get_loc raise KeyError(key) from err KeyError: 'czas' </code></pre> <p>Why does this error occur? I'm using Anaconda, Python 3.9, pandas, numpy and matplotlib. What should I change?</p>
<p>It appears that you don't have a column named czas. One way to check your column names is to type <code>list(data)</code>. You should see a list of all of your column names. If you need to rename columns in your DataFrame, see this link: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename.html</a></p>
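<p>If the file really looks like the fragment you posted — two whitespace-separated columns and no header row — then <code>sep = &quot; &quot;</code> plus the default header handling would explain the missing column, because the first data row would be taken as the header. A sketch of what I mean (the column names here are just the ones your script expects):</p> <pre><code>import pandas as pd

data = pd.read_csv(&quot;mgdg&quot;, sep=r&quot;\s+&quot;, header=None, names=[&quot;czas&quot;, &quot;C&quot;])
print(list(data))    # confirm which columns pandas actually created
print(data.head())
</code></pre>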
python|pandas
1
8,140
65,123,183
GPU not recognized after building tensorflow==1.15.4 from source with CUDA 10.2
<p>For some code that I want to replicate, I need to install <code>tensorflow==1.15.4</code> with GPU support. Unfortunately, the pre-built binary is <a href="https://www.tensorflow.org/install/source#gpu" rel="nofollow noreferrer">compiled with CUDA 10.0</a>, but I have CUDA 10.2 on my system.</p> <p>Thus, I wanted to install it from source and build it myself. I've followed <a href="https://www.tensorflow.org/install/source" rel="nofollow noreferrer">these official instructions</a>. During <code>configure</code> I selected always the default value except for <code>Do you wish to build TensorFlow with CUDA support? [y/N]:</code> which I answered with <code>Y</code>. I used the following build command:</p> <pre class="lang-sh prettyprint-override"><code>bazel build --config=v1 --config=cuda //tensorflow/tools/pip_package:build_pip_package </code></pre> <p>I think the <code>--config=cuda</code> is redundant here, but I included it anyway to make sure.</p> <p>I initially encountered an error during build, which I could resolve with <a href="https://github.com/tensorflow/tensorflow/issues/34429#issuecomment-557408498" rel="nofollow noreferrer">this</a>. After that, the compilation completed successfully.</p> <p>To my surprise, running the following snippet after the installation indicates, that my GPU is still not available to use with <code>tensorflow</code>.</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf tf.test.is_built_with_cuda() # True tf.test.is_gpu_available() # False </code></pre> <p>Can someone can tell me what I'm doing wrong here?</p>
<p>The <a href="https://github.com/CompVis/adaptive-style-transfer#training" rel="nofollow noreferrer">code I was trying to replicate</a> had <code>CUDA_VISIBLE_DEVICES=1</code> set as an environment variable. Out of inexperience with Tensorflow I also set this without understanding what it meant.</p> <p>Since I have only a single GPU, i.e. index 0, my GPU was not recognized. Thus, this had nothing to do with the build, which worked as intended.</p>
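<p>For anyone hitting the same thing: you can correct (or simply unset) the variable before importing TensorFlow and check again, e.g.:</p> <pre><code>import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'   # expose GPU index 0; alternatively remove the variable entirely

import tensorflow as tf
print(tf.test.is_gpu_available())
</code></pre>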
tensorflow|build|gpu
0
8,141
49,883,048
Should I use individual vocabulary files for each tensorflow categorical column?
<p>Is there a reason to use a different vocab list for each feature column rather than giving every feature column the same "global" vocab list?</p> <p>For instance, let's say I was building a DNN with Tensorflow's DNNClassifier estimator to determine whether a cat is "awesome" or "lame".</p> <p>Each feature column is a categorical_column_with_vocabulary_file wrapped in an indicator_column. </p> <p>Column 1 might be "Birth Month" with options "January", "February", etc.</p> <p>Column 2 is "Coloration" with options "Calico" or "Tabby".</p> <p>Column 3 is "Likes Cheese" with options "Yes" or "No".</p> <p>I make "global_vocab_list.txt" a list of every month as well as:</p> <p>Calico</p> <p>Tabby</p> <p>Yes </p> <p>No</p> <p>And use that same list as the vocab file for every feature column.</p> <p>Will Tensorflow give me meaningfully different results if instead I pass "month_vocab_list.txt" to the "Birth Month" feature column, "coloration_vocab_list.txt" to the "Coloration" feature column, and "yes_no_vocab.txt" to the "Likes Cheese" feature column? Would there perhaps be a performance increase with one or the other?</p>
<p>I think you should use some individual files. According to the Tensorflow <a href="https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file" rel="nofollow noreferrer">documentation</a>, in <code>categorical_column_with_vocabulary_file</code>, there are no args capable of what you described. </p> <blockquote> <ul> <li><code>vocabulary_file</code>: The vocabulary file name.</li> <li><code>vocabulary_size</code>: Number of the elements in the vocabulary. This must be no greater than length of vocabulary_file, if less than length, later values are ignored. If None, it is set to the length of vocabulary_file.</li> <li><code>num_oov_buckets</code>: Non-negative integer, the number of out-of-vocabulary buckets. All out-of-vocabulary inputs will be assigned IDs in the range [vocabulary_size, vocabulary_size+num_oov_buckets) based on a hash of the input value. A positive num_oov_buckets can not be specified with default_value.</li> <li><code>default_value</code>: The integer ID value to return for out-of-vocabulary feature values, defaults to -1. This can not be specified with a positive num_oov_buckets.</li> </ul> </blockquote>
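<p>A rough sketch of what separate files per column would look like (the <code>key</code> values and file names below are just taken from your example and must match your own input features):</p> <pre><code>import tensorflow as tf

birth_month = tf.feature_column.indicator_column(
    tf.feature_column.categorical_column_with_vocabulary_file(
        key='birth_month', vocabulary_file='month_vocab_list.txt'))

coloration = tf.feature_column.indicator_column(
    tf.feature_column.categorical_column_with_vocabulary_file(
        key='coloration', vocabulary_file='coloration_vocab_list.txt'))

likes_cheese = tf.feature_column.indicator_column(
    tf.feature_column.categorical_column_with_vocabulary_file(
        key='likes_cheese', vocabulary_file='yes_no_vocab.txt'))

feature_columns = [birth_month, coloration, likes_cheese]
</code></pre> <p>With one global file, every indicator column would be as wide as the whole global vocabulary, so the irrelevant entries (e.g. the month names for the yes/no column) would mostly just waste dimensions.</p>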
tensorflow|tensorflow-estimator
0
8,142
50,100,941
Extract values from List to Pandas DF
<p>I have a python list as below,</p> <pre><code>list_fs = ['drwxrwx--- - uname 0 2017-08-25 12:10 hdfs://filepath=2011-01-31 16%3A06%3A09.0', 'drwxrwx--- - uname 0 2017-08-29 14:12 hdfs://filepath=2011-02-28 10%3A00%3A00', 'drwxrwx--- - uname 0 2017-08-29 14:20 hdfs://filepath=2011-03-31 10%3A00%3A00', 'drwxrwx--- - uname 0 2017-08-29 14:32 hdfs://filepath=2011-04-30 10%3A00%3A00', 'drwxrwx--- - uname 0 2018-02-20 13:57 hdfs://filepath=2011-05-31 08%3A00%3A00', 'drwxrwx--- - uname 0 2017-08-29 15:02 hdfs://filepath=2011-05-31 10%3A00%3A00', 'drwxrwx--- - uname 0 2017-08-29 15:06 hdfs://filepath=2011-06-30 10%3A00%3A00', 'drwxrwx--- - uname 0 2017-08-31 10:38 hdfs://filepath=2011-07-31 10%3A00%3A00', 'drwxrwx--- - uname 0 2017-08-31 10:42 hdfs://filepath=2011-08-31 10%3A00%3A00', 'drwxrwx--- - uname 0 2017-08-31 11:08 hdfs://filepath=2011-09-30 10%3A00%3A00', 'drwxrwx--- - uname 0 2017-08-31 11:11 hdfs://filepath=2011-10-31 10%3A00%3A00', 'drwxrwx--- - uname 0 2017-08-31 11:15 hdfs://filepath=2011-11-30 10%3A00%3A00', 'drwxrwx--- - uname 0 2017-08-31 11:16 hdfs://filepath=2011-12-31 10%3A00%3A00'] </code></pre> <p>I need to extract the timestamp and filepath into a pandas dataframe. The timestamp column needs to be in timestamp datatype and As below.</p> <p><a href="https://i.stack.imgur.com/I0vBl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I0vBl.png" alt="enter image description here"></a></p> <p>What is the best way to do this?</p>
<pre><code>import re

import pandas as pd

df = pd.DataFrame(list_fs)
# pull out the "YYYY-MM-DD HH:MM" part and the hdfs path from each entry
df['Timestamp_ordered'] = [re.findall(r'\d+-\d+-\d+ \d+:\d+', i)[0] for i in list_fs]
df['FilePath'] = [re.findall(r'hdfs:.*', i)[0] for i in list_fs]
# the timestamp column is wanted as a real timestamp dtype, so convert the extracted strings
df['Timestamp_ordered'] = pd.to_datetime(df['Timestamp_ordered'])
df = df[['Timestamp_ordered', 'FilePath']].sort_values('Timestamp_ordered')
</code></pre>
python|pandas|dataframe
2
8,143
49,923,145
pandas: records with lists to separate rows
<p>I have a Python Pandas DataFrame like this (UCSC schema for NCBI RefSeq):</p> <pre><code>chrom exonStart exonEnds name chr1 100,200,300 110,210,310 gen1 chr1 500,700 600,800 gen2 chr2 50,60,70,80 55,65,75,85 gen3 </code></pre> <p>and I'd like to pair values from exonStarts and exonEnds and put them as separate rows (keeping the rest of corresponding information):</p> <pre><code>chrom exonStart exonEnds name chr1 100 110 gen1 chr1 200 210 gen1 chr1 300 310 gen1 chr1 500 600 gen2 chr1 700 800 gen2 chr2 50 55 gen3 chr2 60 65 gen3 chr2 70 75 gen3 chr2 80 85 gen3 </code></pre> <p>I was thinking to use combinations of python/pandas functions as:</p> <blockquote> <p>zip, split, melt, concat</p> </blockquote> <p>but somehow it doesn't work for me</p>
<p>Use a <code>zip</code> and <code>split</code> within a comprehension</p> <pre><code>pd.DataFrame([ [c, s, e, n] for c, S, E, n in df.itertuples(index=False) for s, e in zip(S.split(','), E.split(',')) ], columns=df.columns) chrom exonStart exonEnds name 0 chr1 100 110 gen1 1 chr1 200 210 gen1 2 chr1 300 310 gen1 3 chr1 500 600 gen2 4 chr1 700 800 gen2 5 chr2 50 55 gen3 6 chr2 60 65 gen3 7 chr2 70 75 gen3 8 chr2 80 85 gen3 </code></pre>
python|pandas|numpy|dataframe|split
4
8,144
64,165,877
How to turn a list of values into column names and inputted as a variable in a pandas dataframe?
<p>I want to turn a list of values (defined as modernization_area) into column headers. For example, the modernization_area outputs: A, B, C, D and the want the function to loop through each area by generating columns A, B, C, and D. The variable would ideally replace 'modernization_area' in the last line, but python is not accepting that as a variable.</p> <pre><code>modernization_list = pd.DataFrame(keyword_table['Modernization_Area'].unique().tolist()) modernization_list.columns = ['Modernization_Area'] x = range(len(modernization_list['Modernization_Area'].unique().tolist())) for i in x: modernization_area = modernization_list._get_value(i, 'Modernization_Area') keyword_subset = keyword_table[keyword_table.Modernization_Area == modernization_area] keywords = keyword_subset['Keyword'].tolist() report_table['a'] = report_table.award_description.str.findall('({0})'.format('|'.join(keywords), flags=re.IGNORECASE) </code></pre>
<p>It is not easy to help you because your question is lacking a lot of information. I am assuming hipotheticals <code>keyword_table</code> and <code>report_table</code>. Actually, I don't know if I really got what you truly want. But I hope this piece of code could help:</p> <p>Block of assumptions:</p> <pre><code>supposed_keyword_table = pd.DataFrame({'Keyword': ['word1', 'word2', 'word3', 'word4', 'word5', 'word6', 'word7'], 'Modernization Area': ['A', 'B', 'C', 'D', 'A', 'B', 'D']}) supposed_report_table = pd.DataFrame({'Modernization Area': ['A', 'B', 'C', 'D'], 'Some Value': [1, 2, 3, 4]}) supposed_keyword_table Keyword Modernization Area 0 word1 A 1 word2 B 2 word3 C 3 word4 D 4 word5 A 5 word6 B 6 word7 D supposed_report_table Modernization Area Some Value 0 A 1 1 B 2 2 C 3 3 D 4 </code></pre> <p>Now, after assumptions, here is what you can do:</p> <pre><code>keyword_table_by_mod_area = supposed_keyword_table.groupby(['Modernization Area'])['Keyword'].apply(lambda x: '|'.join(x)) supposed_report_table = pd.merge(supposed_report_table, keyword_table_by_mod_area, on='Modernization Area', how='left') supposed_report_table Modernization Area Some Value Keyword 0 A 1 word1|word5 1 B 2 word2|word6 2 C 3 word3 3 D 4 word4|word7 </code></pre>
python|pandas
0
8,145
46,796,662
Filter CSV File Program using Pandas and Python
<p>I currently have a task that involves downloading a CSV master file, removing any lines where column A - Column B &lt;= 0, and where Column C equals a given phrase. I'm looking to create a program that will:</p> <ul> <li>Import a CSV File</li> <li>Remove all lines where Column A - Column B &lt;= 0</li> <li>Ask for input to filter on Column C for one or more phrases</li> <li>Export the CSV into a new file</li> </ul> <p>So far, I have determined that the best way to do this is to use Pandas' dataframe functionality, as I've used it previously to perform other operations on CSV files:</p> <pre><code>import pandas as pd file = read_csv("sourcefile.csv") file['NewColumn'] = file['A'] - file['B'] file = file[file.NewColumn &gt; 0] columns = ['ColumnsIWantToRemove'] file.drop(columns, inplace=True, axis=1) phrases = input('What phrases are you filtering for? ') file = file[file.C = phrases] file.to_csv('export.csv')</code></pre> <p>My question is, how do I filter Column C for multiple phrases? I want the program to take one or more phrases and only show rows where Column C's value equals one of those values. Any guidance would be amazing. Thank you!!</p>
<p>I would just ask for input to be comma separated:</p> <pre><code>phrases = phrases.split(",") file = file[file.C.isin(phrases)] </code></pre>
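<p>If the user might type spaces after the commas (e.g. <code>phrase one, phrase two</code>), it may be worth stripping them before filtering:</p> <pre><code>phrases = [p.strip() for p in phrases.split(",")]
file = file[file.C.isin(phrases)]
</code></pre>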
python|pandas|csv|dataframe
1
8,146
46,955,022
Plot certain information from dataframe
<p>I have a data frame holding Crime information with total crime values for the past 12 months. When I plot the data frame I get a graph showing lines for every <code>crimeType</code>. </p> <p>Is there any way I can plot a graph for each specific <code>crimeType</code> over the course of the year? </p> <p>This plots my entire graph. </p> <pre><code>crimeMonthDf.plot().legend(loc='center left', bbox_to_anchor=(1,0.5)) </code></pre> <p>Some <code>crimeType</code> values are <code>Anti-social behaviour</code> and <code>Robbery</code>. </p> <p>If more info is needed, ask. </p>
<p>Perhaps select each crime type and plot it as a separate series, i.e. (assuming a long-format frame with a <code>crimeType</code> column and a numeric value column, here called <code>total</code> as a placeholder):</p> <pre><code>import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for ctype in crimeMonthDf['crimeType'].unique():
    subset = crimeMonthDf.loc[crimeMonthDf['crimeType'] == ctype]
    ax.plot(subset.index, subset['total'], label=ctype)
ax.legend()
plt.show()
</code></pre>
python|pandas|dataframe
0
8,147
46,655,551
changing specific columns to that of values in the list in Pandas DataFrame
<p>Fairly new to coding and python.</p> <p>My DataFrame looks like this at the moment.</p> <pre><code>Text Location .... NY, USA .... NewYork .... Austin,Texas .... Tx .... California .... Somehere on Earth </code></pre> <p>The DataFrame consists of tweets and location extracted from the Users Bio. </p> <pre><code>states = ["AL","Alabama", "AK","Alaska", "AS", "American Samoa", "AZ", "Arizona", "AR", "Arkansas", "CA", "California", "CO", "Colarado" "CT", "Connecticut" "DE", "Delaware", "DC", "District Of Columbia", "FM", "Federated States Of Micronesia", "FL", "Florida" "GA", "Georgia", "GU", "Guam" "HI", "Hawaii", "ID", "Idaho", "IL", "Illinois", "IN", "Indiana","IA", "Iowa", "KS", "Kansas", "KY", "Kentucky", "LA", "Louisiana","ME", "Maine", "MH", "Marshall Islands", "MD", "Maryland", "MA", "Massachusetts", "MI", "Michigan", "MN", "Minnesota", "MS", "Mississippi", "MO", "Missouri", "MT", "Montana", "NE", "Nebraska", "NV", "Nevada", "NH", "New Hampshire", "NJ", "New Jersey", "NM", "New Mexico", "NY", "New York", "NC", "North Carolina", "ND", "North Dakota", "MP", "Northern Mariana Islands", "OH", "Ohio", "OK", "Oklahoma", "OR", "Oregon", "PW", "Palau", "PA", "Pennsylvania","PR", "Puerto Rico", "RI", "Rhode Island", "SC", "South Carolina", "SD", "South Dakota", "TN", "Tennessee", "TX", "Texas", "UT", "Utah", "VT", "Vermont", "VI", "Virgin Islands", "VA", "Virginia", "WA", "Washington", "WV", "West Virginia", "WI", "Wisconsin", "WY", "Wyoming"] </code></pre> <p>Now Im trying to find out any if there is a way to change the location field to the following format.</p> <pre><code>Text Location .... NY .... NewYork .... Texas .... Tx .... California .... NaN </code></pre> <p>I tried replacing values on the list. But it just doesn't do the job. Can someone please help me with this?</p>
<p>Sounds like you need a lambda function with some regular expressions.</p> <pre><code>import re

states_lower = [state.lower() for state in states]
df['NewLocation'] = df['Location'].map(
    lambda x: ' '.join([loc for loc in re.findall('\\w+', x) if loc.lower() in states_lower]))
</code></pre> <ul> <li>First, lowercase all your states for easier matching</li> <li>The regex grabs all the words</li> <li>Loop through the resulting list and keep only the words that appear in the lowercased <code>states</code> list</li> <li>Join the kept items together so you end up with a string</li> </ul>
python|pandas|dataframe
0
8,148
32,645,238
Python Bokeh: Set line color based on column in columndatasource
<p>I'm trying to produce a chart that has multiple lines, but the data I use typically comes in long form like this:</p> <pre><code>x = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0] y0 = [i**2 for i in x] y1 = [10**i for i in x] y2 = [10**(i**2) for i in x] df = pandas.DataFrame(data=[x,y0,y1,y2]).T df.columns = ['x','y0','y1','y2'] df2 = pd.concat([df.iloc[0:,1],df.iloc[0:,2],df.iloc[0:,3]], axis=0, keys = ['a','b','c']).reset_index() df2.columns = ['grp','x','y'] df2 +----+-----+---+----------+ | | grp | x | y | +----+-----+---+----------+ | 0 | a | 0 | 0.01 | | 1 | a | 1 | 0.25 | | 2 | a | 2 | 1.00 | | 3 | a | 3 | 2.25 | | 4 | a | 4 | 4.00 | | 5 | a | 5 | 6.25 | | 6 | a | 6 | 9.00 | | 7 | b | 0 | 1.26 | | 8 | b | 1 | 3.16 | | 9 | b | 2 | 10.00 | | 10 | b | 3 | 31.62 | | 11 | b | 4 | 100.00 | | 12 | b | 5 | 316.23 | | 13 | b | 6 | 1,000.00 | +----+-----+---+----------+ cd_df2 = ColumnDataSource(df2) </code></pre> <p>That is to say, I'll have 'groups' where x,y pairs for each group are listed out across multiple rows.</p> <p>The following produces all 3 lines, but they all show up as grey. Setting color = 'grp' does not specific a color for each value in the grp field in the columndata source</p> <pre><code>f = figure(tools="save",y_axis_type="log", y_range=[0.001, 10**11], title="log axis example",x_axis_label='sections', y_axis_label= 'particles') f.line('x','y', line_color = 'grp', source = cd_df2) </code></pre> <p>How could I achieve this in the bokeh.plotting or bokeh.models api (want to avoid high level charts to better understand the library)? I'm open to other suggestions that avoid explicitly calling f.line() once for each line and individually set the color (I may have 10+ lines and this would get tedious).</p>
<p>You can reuse the principle described in <a href="https://docs.bokeh.org/en/latest/docs/gallery/stacked_area.html" rel="nofollow noreferrer">this example</a>. It is based on "patches" but is the same for "line" (see <a href="http://docs.bokeh.org/en/latest/docs/reference/plotting.html" rel="nofollow noreferrer">http://docs.bokeh.org/en/latest/docs/reference/plotting.html</a>):</p> <pre><code>p = figure() p.patches([x2 for a in areas], list(areas.values()), color=colors, alpha=0.8, line_color=None) </code></pre>
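<p>For the long-format frame in the question, a hedged sketch (the palette choice is illustrative, and older Bokeh versions use <code>legend=</code> instead of <code>legend_label=</code>) is to loop over the groups once and pull a colour per group from a palette, so the colour is driven by the <code>grp</code> column without writing one <code>f.line()</code> call by hand per series:</p> <pre><code>from itertools import cycle
from bokeh.plotting import figure, show
from bokeh.palettes import Category10

f = figure(tools="save", y_axis_type="log", y_range=[0.001, 10**11],
           title="log axis example", x_axis_label='sections', y_axis_label='particles')

colors = cycle(Category10[10])  # repeats if there are more than 10 groups

for grp, color in zip(df2['grp'].unique(), colors):
    sub = df2[df2['grp'] == grp]
    f.line(sub['x'], sub['y'], line_color=color, legend_label=grp)

show(f)
</code></pre>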
python|pandas|bokeh
1
8,149
32,641,460
pandas DataFrame size inconsistent with windows memory usage
<p>I am reading in a DataFrame from a hdf5 file:</p> <pre><code>import pandas as pd store = pd.HDFStore('some_file.h5') df= store['df'] store.close() </code></pre> <p>Using <code>info</code> shows:</p> <pre><code>In [11]: df.info() &lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 21423657 entries, 0 to 21423656 Data columns (total 5 columns): date datetime64[ns] name object length float64 flag1 object flag2 object dtypes: datetime64[ns](1), float64(1), object(3) memory usage: 980.7+ MB </code></pre> <p>The hdf5 is about 1GB and <code>df.info()</code> also shows <code>memory usage</code> of about 1GB. However, the physical memory usage from windows task manager shows an increase of over 2GB after reading in the DataFrame. In general I have observed that the actual memory usage from windows task manager is about twice as large as indicated by the <code>info</code> function in <code>pandas</code>. This extra memory usage is causing MemoryError in later computations. Does anyone know the reason for this behavior? Or does anyone have suggestions on how to go about debugging the "phantom" memory usage?</p>
<p>The <code>info</code> function just calls <code>numpy.ndarray.nbytes</code>, which multiplies the <code>itemsize</code> (the size of the data type, e.g. 8 bytes for int64) by the array's length. The problem can come from the <code>object</code> data type.</p> <p>Numpy has a rich type system: <a href="http://docs.scipy.org/doc/numpy/user/basics.types.html" rel="nofollow">Array types and conversions between types</a>, <a href="http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html" rel="nofollow">Data type objects</a>, which you can use for higher memory efficiency. You could convert columns of your choice in the data frame, for example from the default <code>float64</code> to <code>float32</code>, <code>int32</code> or unsigned <code>int32</code>, and <code>object</code> columns to shorter strings with a type constructor, e.g. <code>np.dtype('a25')</code> (a 25-character string); this, as I've tested, actually frees some memory on my Win7.</p>
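<p>As a rough sketch with the columns from the question's <code>df.info()</code> (whether each conversion is safe depends on the actual data, so treat this as illustrative): the numeric column can be downcast, and the repetitive string columns often shrink considerably as the pandas <code>category</code> dtype, which is an alternative to the fixed-width numpy strings mentioned above:</p> <pre><code>import numpy as np

df['length'] = df['length'].astype(np.float32)   # halves that column

for col in ['name', 'flag1', 'flag2']:
    df[col] = df[col].astype('category')         # stores each distinct string only once

df.info(memory_usage='deep')                     # 'deep' counts the object payloads too
</code></pre>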
python|memory|pandas
0
8,150
38,537,399
pandas the row data transform to the column data
<p>I have a dataframe like :</p> <pre><code>user_id category view collect 1 1 a 2 3 2 1 b 5 9 3 2 a 8 6 4 3 a 7 3 5 3 b 4 2 6 3 c 3 0 7 4 e 1 4 </code></pre> <p>how to change it to a new dataframe ,each user_id can appear once,then the category with the view and collect appears to the columns ,if there is no data ,fill it with 0, like this :</p> <pre><code>user_id a_view a_collect b_view b_collect c_view c_collect d_view d_collect e_view e_collect 1 2 3 5 6 0 0 0 0 0 0 2 8 6 0 0 0 0 0 0 0 0 3 7 3 4 2 3 0 0 0 0 0 4 0 0 0 0 0 0 0 0 1 4 </code></pre>
<p>The desired result can be obtained by <a href="http://pandas.pydata.org/pandas-docs/stable/reshaping.html" rel="nofollow">pivoting <code>df</code></a>, with values from <code>user_id</code> becoming the index and values from <code>category</code> becoming a column level:</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame({'category': ['a', 'b', 'a', 'a', 'b', 'c', 'e'], 'collect': [3, 9, 6, 3, 2, 0, 4], 'user_id': [1, 1, 2, 3, 3, 3, 4], 'view': [2, 5, 8, 7, 4, 3, 1]}) result = (df.pivot(index='user_id', columns='category') .swaplevel(axis=1).sortlevel(axis=1).fillna(0)) </code></pre> <p>yields</p> <pre><code>category a b c e view collect view collect view collect view collect user_id 1 2.0 3.0 5.0 9.0 0.0 0.0 0.0 0.0 2 8.0 6.0 0.0 0.0 0.0 0.0 0.0 0.0 3 7.0 3.0 4.0 2.0 3.0 0.0 0.0 0.0 4 0.0 0.0 0.0 0.0 0.0 0.0 1.0 4.0 </code></pre> <p>Above, <code>result</code> has a MultiIndex. In general I think this should be preferred over a flattened single index, since it retains more of the structure of the data. </p> <p>However, the MultiIndex can be flattened into a single index:</p> <pre><code>result.columns = ['{}_{}'.format(cat,col) for cat, col in result.columns] print(result) </code></pre> <p>yields</p> <pre><code> a_view a_collect b_view b_collect c_view c_collect e_view \ user_id 1 2.0 3.0 5.0 9.0 0.0 0.0 0.0 2 8.0 6.0 0.0 0.0 0.0 0.0 0.0 3 7.0 3.0 4.0 2.0 3.0 0.0 0.0 4 0.0 0.0 0.0 0.0 0.0 0.0 1.0 e_collect user_id 1 0.0 2 0.0 3 0.0 4 4.0 </code></pre>
python|pandas
1
8,151
38,952,853
how to convert a 1-dimensional image array to PIL image in Python
<p>My question is related to a <a href="https://www.kaggle.com/c/digit-recognizer/data" rel="nofollow">Kaggle data science competition</a>. I'm trying to read an image from a one-dimensional array containing <strong>1-bit grayscale</strong> pixel information (<strong>0 to 255)</strong> for an <strong>28x28 image</strong>. So the array is from <strong>0 to 783</strong> where each pixel is encoded as x = i * 28 + j.</p> <p>Converted into a two-dimensional 28x28 matrix this:</p> <pre><code>000 001 002 003 ... 026 027 028 029 030 031 ... 054 055 056 057 058 059 ... 082 083 | | | | ... | | 728 729 730 731 ... 754 755 756 757 758 759 ... 782 783 </code></pre> <p>For reasons of image manipulation (resizing, skewing) I would like to read that array into an in-memory PIL image. I did some research on the <a href="http://matplotlib.org/users/image_tutorial.html" rel="nofollow">Matplotlib image function</a>, which I think is most promising. Another idea is the <a href="https://pythonhosted.org/pypng/ex.html#numpy" rel="nofollow">Numpy image functions</a>.</p> <p><strong>What I'm looking for</strong>, is a code example that shows me how to load that 1-dimensional array via Numpy or Matplotlib or anything else. Or how to convert that array into a 2-dimensional image using for instance Numpy.vstack and then read it as an image.</p>
<p>You can convert a NumPy array to a PIL image using <code>Image.fromarray</code>; make sure the array is <code>uint8</code> first, since mode <code>'L'</code> expects one byte per pixel:</p> <pre><code>import numpy as np
from PIL import Image

arr = np.random.randint(255, size=(28*28)).astype(np.uint8)
img = Image.fromarray(arr.reshape(28, 28), 'L')
</code></pre> <p><code>L</code> mode indicates the array values represent luminance. The result will be a gray-scale image.</p>
python|image|numpy|matplotlib|python-imaging-library
4
8,152
38,848,411
Apply function on each column in a pandas dataframe
<p>How can I write the following function in a more pandas-like way?</p> <pre><code>def calculate_df_columns_mean(self, df):
    means = {}
    for column in df.columns.tolist():
        cleaned_data = self.remove_outliers(df[column].tolist())
        means[column] = np.mean(cleaned_data)
    return means
</code></pre> <p>Thanks for the help.</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow"><code>DataFrame.apply(func, axis=0)</code></a>:</p> <pre><code>import numpy

# axis=0 means apply to columns; axis=1 to rows
df.apply(numpy.sum, axis=0)  # equivalent to df.sum(0)
</code></pre>
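<p>Applied to the original method, that would look roughly like this (assuming <code>remove_outliers</code> accepts a list and returns the cleaned values, as in the question):</p> <pre><code>import numpy as np

def calculate_df_columns_mean(self, df):
    cleaned_mean = lambda col: np.mean(self.remove_outliers(col.tolist()))
    return df.apply(cleaned_mean, axis=0).to_dict()
</code></pre>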
python|pandas|dataframe
3
8,153
63,037,684
Applying an operation efficiently on one column that depends on another column with pandas
<p>I have a Dataframe called <code>df</code> with around 20m rows, that looks like</p> <pre><code>userId movieId rating 0 1 296 5.0 1 1 306 3.5 2 1 307 5.0 3 2 665 5.0 4 2 899 3.5 ... </code></pre> <p>and I have a Series, <code>user_bias</code></p> <pre><code>userId 1 0.280431 2 0.096580 3 0.163554 4 -0.155755 5 0.218621 ... </code></pre> <p>I would like to subtract the matching value according to <code>userId</code> column in <code>user_bias</code> from <code>df['rating']</code>. For example the rating value of the first row should be replaced with <code>5.0 - 0.280431 = 4.719569</code>. I tried two solutions but they seems to be very slow. Is there a better way to achieve this?</p> <h2>Solution 1</h2> <pre><code>for i, row in df.iterrows(): df.at[i, 'rating'] -= user_bias[row.userId] </code></pre> <h2>Solution 2</h2> <p>To get rid of the for loop, I've used <code>apply</code> method. Not sure if it is correct result-wise but it is again way slower than I expected.</p> <p><code>df['rating'] = df.apply(lambda row: row.rating - user_bias[row.userId], axis=1)</code></p>
<p>Try with <code>reindex</code></p> <pre><code>df['rating'] = df['rating'] - user_bias.reindex(df['userId']).values </code></pre>
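<p>An equivalent, also vectorised, alternative is <code>map</code>, which looks up each <code>userId</code> against the index of the bias Series:</p> <pre><code>df['rating'] = df['rating'] - df['userId'].map(user_bias)
</code></pre>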
python|pandas
5
8,154
67,708,191
In Pytorch how to slice tensor across multiple dims with BoolTensor masks?
<p>I want to use BoolTensor indices to slice a multidimensional tensor in Pytorch. I expect for the indexed tensor, the parts where the indices are true are kept, while the parts where the indices are false are sliced out.</p> <p>My code is like</p> <pre><code>import torch a = torch.zeros((5, 50, 5, 50)) tr_indices = torch.zeros((50), dtype=torch.bool) tr_indices[1:50:2] = 1 val_indices = ~tr_indices print(a[:, tr_indices].shape) print(a[:, tr_indices, :, val_indices].shape) </code></pre> <p>I expect <code>a[:, tr_indices, :, val_indices]</code> to be of shape <code>[5, 25, 5, 25]</code>, however it returns <code>[25, 5, 5]</code>. The result is</p> <pre><code>torch.Size([5, 25, 5, 50]) torch.Size([25, 5, 5]) </code></pre> <p>I'm very confused. Can anyone explain why?</p>
<p>PyTorch inherits its advanced indexing behaviour <a href="https://stackoverflow.com/questions/42309460/boolean-masking-on-multiple-axes-with-numpy">from Numpy</a>. Slicing twice like so should achieve your desired output:</p> <pre><code>a[:, tr_indices][..., val_indices] </code></pre>
python|numpy|pytorch|advanced-indexing
1
8,155
67,671,268
Unique columns pandas
<p>I have a pandas dataframe of dimensions <code>(20000,3000)</code> and I believe there are some duplicated columns, but they have different headings. How would I remove those duplicates but keep the original columns in pandas?</p>
<p>You can use the following to remove duplicated columns according to their values:</p> <pre><code>df = df.T.drop_duplicates().T
</code></pre> <p>like below:</p> <pre><code>import pandas as pd

df = pd.DataFrame(
    {'A': [2, 4, 8, 0],
     'B': [2, 0, 0, 0],
     'B_duplicated': [2, 0, 0, 0],
     'C': [10, 2, 1, 8]})

df = df.T.drop_duplicates().T
</code></pre> <p>This would result in:</p> <pre><code>   A  B   C
0  2  2  10
1  4  0   2
2  8  0   1
3  0  0   8
</code></pre>
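<p>On a large or mixed-dtype frame, rebuilding via the double transpose can be expensive and upcasts everything to <code>object</code>; a variant of the same idea that keeps the original dtypes in the result (only the duplicate test works on the transpose) is to build a boolean mask over the columns:</p> <pre><code>df = df.loc[:, ~df.T.duplicated()]  # keep the first of each group of identical columns
</code></pre>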
python|pandas|dataframe
1
8,156
41,599,051
Can I use an aggregate function over a specific index?
<p>Suppose I have data as follows:</p> <pre><code> Month User Visits April 101078350 16 April 101187789 10000 April 101204204 98 April 101220432 659 April 103021861 25 April 103052403 93 April 103235453 25 April 103309704 77 April 103613303 87 April 103641403 735 April 103698304 62 April 103709630 198 April 103880860 94 April 104090303 448 May 104146303 561 May 104170303 143 May 104216403 273 May 104531678 786 May 104548151 811 May 104584503 15000 </code></pre> <p>Here, Month and User form a multindex. Is there a an easy way to take the mean of each month granted that the month is part of an index? As of now, I reset the index, re group by the month, and calculate the mean. </p>
<p>try this:</p> <pre><code>In [16]: df.groupby(level='Month').mean() Out[16]: Visits Month April 901.214286 May 2929.000000 </code></pre>
pandas
1
8,157
41,622,221
pandas: merging dataframes and replacing values
<p>I have two dataframes:</p> <pre><code> A = pd.DataFrame(data=np.array([['t1',1,'t2',2]]).reshape(2,2),columns=['a','b']) A Out[6]: a b 0 t1 1 1 t2 2 B = pd.DataFrame(data=np.array([[1,2,3],[2,5,6],[3,6,7]]).reshape(3,3),columns=['x','y','z']) B Out[8]: x y z 0 1 2 3 1 2 5 6 2 3 6 7 </code></pre> <p>I am trying to basically match columns 'x' of dataframe B on column 'b' dataframe A but replace the matched values with column 'a' of dataframe A.</p> <p>i.e. I want to merge the two dataframes so that the output will look like this:</p> <pre><code> x y z 0 t1 2 3 1 t2 5 6 2 3 6 7 </code></pre> <p>Any ideas how to go about this?</p>
<pre><code>B.loc[B.x.astype(str).isin(A.b), 'x'] = A.a B x y z 0 t1 2 3 1 t2 5 6 2 3 6 7 </code></pre>
python|pandas|numpy
6
8,158
41,473,397
How to obtain information on tensorflow architecture
<p>I have been working on retraining TensorFlow Inception v3 (see <a href="https://github.com/tensorflow/models/tree/master/inception" rel="nofollow noreferrer">TensorFlow Github</a>) and was curious as to how I would obtain some general "metadata" about the model.</p> <p>For instance:</p> <ul> <li>How many hidden layers are there?</li> <li>Which is the "last" layer that gets retrained?</li> <li>How many total neurons (or neurons / layer)?</li> <li>How many convolutions were used?</li> <li>How many poolings were used?</li> </ul> <p>I basically want to be able to write a sentence or two describing the model. There is sentence on the site </p> <blockquote> <p>...single frame evaluation with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters.</p> </blockquote> <p>but I am not quit sure what that means. Is that 25 million neurons? </p> <p>Looking at the document <a href="http://arxiv.org/abs/1512.00567" rel="nofollow noreferrer">Rethinking the Inception Architecture for Computer Vision</a> which was referenced on the GitHub, I think much of what I need is in Table 1?</p> <p>Would I write this ? </p> <p>"TensorFlow InceptionV3 is a deep convolution neural network that has 13 hidden layers and uses six convolutions, two pooling steps, and three inception modules to perform a softmax classification of images"</p> <p>Of course I want the correct data in the sentence and would love to know how many neurons are in the model as well</p> <p>Thanks! </p> <p><a href="https://i.stack.imgur.com/Tsy2A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tsy2A.png" alt="Table 1"></a></p>
<p>You're very much on the right track. Yes, parameters are neurons. For instance, the output of the first conv layer is 32 filters (kernels) of grid size 149^2. That's a total of 710,432 neurons/parameters for that layer alone.</p> <p>The critical part of training is back-propagation which adjusts the weights between one layer and the next. The result of that SoftMax operation is the 1000 output predictions; the last trained layer would be its connection to the prior layer.</p> <p>You can read the simple convolutions and poolings from the chart. I'm not sure whether or not you are supposed to include the ones inside the inception layers.</p> <p>Finally, if I read this correctly, the inceptions are multiple units of each type, applied in series. That would mean that we have 10 inception layers, not 3.</p>
python|architecture|tensorflow
1
8,159
41,613,242
Dealing with NaNs in Pandas
<p>I have a dataframe and I want to return a subset (a new copy, not a reference) of this dataframe to perform some operations. However, I am unable to filter on the criteria I need.</p> <p>I need these three criteria to filter:</p> <pre><code>1. df['A'] != NaN
2. df['B'] == 'X' | df['B'] == NaN
3. df['C'] == NaN
</code></pre> <p>Currently I am doing this for criterion 1, but I am a little stuck with how to include criteria 2 and 3.</p> <pre><code>filter_data = df.loc[(df['A'].dropna)]
</code></pre>
<p>You need the special <code>NaN</code> functions <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isnull.html" rel="nofollow noreferrer"><code>isnull</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.notnull.html" rel="nofollow noreferrer"><code>notnull</code></a>:</p> <pre><code>df['A'].notnull()
(df['B'] == 'X') | (df['B'].isnull())
df['C'].isnull()
</code></pre>
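<p>Combined into a single boolean mask, the three criteria from the question would look like this (the <code>.copy()</code> at the end gives an independent copy rather than a view):</p> <pre><code>mask = df['A'].notnull() &amp; ((df['B'] == 'X') | df['B'].isnull()) &amp; df['C'].isnull()
filter_data = df.loc[mask].copy()
</code></pre>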
python|pandas|numpy
4
8,160
27,592,456
Floor or ceiling of a pandas series in python?
<p>I have a pandas series <code>series</code>. If I want to get the element-wise floor or ceiling, is there a built in method or do I have to write the function and use apply? I ask because the data is big so I appreciate efficiency. Also this question has not been asked with respect to the Pandas package. </p>
<p>You can use NumPy's built in methods to do this: <code>np.ceil(series)</code> or <code>np.floor(series)</code>.</p> <p>Both return a Series object (not an array) so the index information is preserved.</p>
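<p>A quick illustration (values chosen arbitrarily) showing that the result stays a Series with its index intact:</p> <pre><code>import numpy as np
import pandas as pd

s = pd.Series([1.2, 2.7, -0.5], index=['a', 'b', 'c'])

np.floor(s)   # a    1.0, b    2.0, c   -1.0
np.ceil(s)    # a    2.0, b    3.0, c   -0.0
</code></pre>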
python|pandas|series|floor|ceil
129
8,161
27,481,349
Retrieving statistical information when 2 rows are involved
<p>I need to get some information from a data set (csv) which I have boiled down to the following simple table,</p> <pre><code>Date_Time            Id   passed
2013-06-23 20:13:10  112  A
2013-06-23 20:58:11  112  B
2013-06-23 21:01:10  118  A
2013-06-23 21:03:31  118  A
2013-06-23 21:05:49  118  A
2013-06-23 23:05:08  118  B
2013-06-24 08:10:03  118  B
</code></pre> <p>The first two records show the simple case: after a check-in (A) we see a check-out (B) 0:45:01 later.</p> <p>But one can also have several check-ins in a row (records 3, 4, 5) with the check-out following later. Normally there would be a corresponding check-out for every check-in. Unfortunately, the data is not perfect and there are sometimes records missing. (In the example there are only two check-outs for three check-ins.)</p> <p>I would like to get some statistical values for the times between check-in and check-out, perhaps on a monthly basis or by weekday and so on. But I also have to find a way to discard records if I have no check-out within X hours, or if I find a check-out without a check-in.</p> <p>I have been trying with pandas and it looked so promising, but as a newbie I got stuck on all the huge possibilities that this magical package offers. I hope someone can help me out and maybe explain a little bit where to look.</p> <p>Many thanks in advance,</p> <p>avm</p>
<p>Your table is not structured in such a way that you can do this with one query. If you had a check_in_id column (an added column), then you could do it with one query, the idea being that there would be at most two rows with the same check_in_id and they would always have the same id.</p> <p>So instead write a stored procedure to create a temp table. The temp table would contain the added column. Your stored procedure would need to iterate over the rows of the table and find the most recent check-out, given the id, that is not already in the temp table.</p>
sql|pandas
0
8,162
61,353,437
Numpy Lognorm function, used in dataframe
<p>Suppose you have a dataframe A which looks like this:</p> <pre><code>ID  sigma  miu  prob
1   20     0.5  0.875
2   25     0.2  0.800
3   10     0.4  0.668
4   30     0.6  0.994
</code></pre> <p>How can I use Python to create another column which does this Excel-equivalent calculation?</p> <pre><code>LOGNORM.INV(prob, sigma, exp(miu))
</code></pre> <p>thanks in advance :)</p>
<p>This is the expected output of the scipy.stats.lognorm function, as stated in the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html" rel="nofollow noreferrer">documentation</a>. </p>
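<p>A hedged sketch of what that could look like on the frame above, assuming the intent is the inverse log-normal CDF with shape <code>sigma</code> and scale <code>exp(miu)</code> (if your Excel arguments mean something different, adjust the mapping accordingly):</p> <pre><code>import numpy as np
from scipy.stats import lognorm

A['lognorm_inv'] = lognorm.ppf(A['prob'], s=A['sigma'], scale=np.exp(A['miu']))
</code></pre>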
python|pandas|numpy|normal-distribution
2
8,163
61,330,002
Python Pandas loc keyerror
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

eurusd = pd.read_csv("G:\Kuliah python\EURUSD_M15.csv", sep="\t")
print(eurusd.loc['2020-04-03 21:15'])
</code></pre> <p>it shows an error:</p> <pre><code>KeyError: '2020-04-03 21:15'
</code></pre> <h2>Here is my data</h2> <pre><code>   Time                 Open     High     Low      Close    Volume
0  2020-04-03 21:00:00  1.07893  1.07936  1.07839  1.07868  4380
1  2020-04-03 21:15:00  1.07867  1.07943  1.07831  1.07889  4860
2  2020-04-03 21:30:00  1.07888  1.07908  1.07762  1.07783  4022
3  2020-04-03 21:45:00  1.07782  1.08059  1.07727  1.07975  6816
4  2020-04-03 22:00:00  1.07975  1.08093  1.07920  1.08059  582
</code></pre>
<p>I'll give it a try: Set the first column as index and your .loc[] indexing works properly.</p> <pre><code>eurusd = pd.DataFrame( [['2020-04-03 21:00:00', 1.07893, 1.07936, 1.07839, 1.07868, 4380], ['2020-04-03 21:15:00', 1.07867, 1.07943, 1.07831, 1.07889, 4860], ['2020-04-03 21:30:00', 1.07888, 1.07908, 1.07762, 1.07783, 4022], ['2020-04-03 21:45:00', 1.07782, 1.08059, 1.07727, 1.07975, 6816], ['2020-04-03 22:00:00', 1.07975, 1.08093, 1.07920, 1.08059, 582]] ).set_index(0) eurusd.loc['2020-04-03 21:15:00'] 1 1.07867 2 1.07943 3 1.07831 4 1.07889 5 4860.00000 Name: 2020-04-03 21:15:00, dtype: float64 </code></pre> <p>You can improve this even further by making the index a DateTimeIndex:</p> <pre><code>eurusd.index = pd.to_datetime(eurusd.index) </code></pre> <p>This allows for "partial string indexing" like</p> <pre><code>eurusd.loc['2020-04-03 21'] 2020-04-03 21:00:00 1.07893 1.07936 1.07839 1.07868 4380 2020-04-03 21:15:00 1.07867 1.07943 1.07831 1.07889 4860 2020-04-03 21:30:00 1.07888 1.07908 1.07762 1.07783 4022 2020-04-03 21:45:00 1.07782 1.08059 1.07727 1.07975 6816 </code></pre> <p>which gives you all rows between 21:00 and 22:00 exclusive. </p>
python|pandas
0
8,164
68,838,944
Same code, same library, but why does my training run slower on a new laptop compared to an old laptop
<p>Here is the background:</p> <p>I do not know much about Deep learning, and I am not the one creates the code. I follow someone's procedure and test the AI. I try the same process on 3 different laptop. I thought a laptop with better hardware would increase the training speed but ends up this is not that case.</p> <p>Base on the code, seems it was using Keras with tensorflow backend.</p> <p>I did some research and try to speed up the process: like use GPU.But then I found out that both laptop the GPU load was in 0 to 1%. seems the GPU is not used on both laptop.</p> <p>So I think, maybe the tensorflow didnt recognize the GPU, so I try to use tersorflow-gpu, install cuda and cudnn...</p> <pre><code>&gt;&gt;&gt; from tensorflow.python.client import device_lib &gt;&gt;&gt; print(device_lib.list_local_devices()) 2021-08-18 17:17:00.307495: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2 2021-08-18 17:17:00.312631: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll 2021-08-18 17:17:00.364157: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: pciBusID: 0000:01:00.0 name: NVIDIA GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.645GHz coreCount: 16 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 238.66GiB/s 2021-08-18 17:17:00.364352: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll 2021-08-18 17:17:00.397938: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll 2021-08-18 17:17:00.427946: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll 2021-08-18 17:17:00.435072: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll 2021-08-18 17:17:00.478467: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll 2021-08-18 17:17:00.495200: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll 2021-08-18 17:17:00.559633: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll 2021-08-18 17:17:00.560557: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0 2021-08-18 17:17:04.129809: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix: 2021-08-18 17:17:04.129968: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0 2021-08-18 17:17:04.130734: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N 2021-08-18 17:17:04.132802: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/device:GPU:0 with 6788 MB memory) -&gt; physical GPU (device: 0, name: NVIDIA GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1) [name: &quot;/device:CPU:0&quot; device_type: &quot;CPU&quot; memory_limit: 268435456 locality { } incarnation: 2340425778646607054 , name: &quot;/device:GPU:0&quot; device_type: &quot;GPU&quot; memory_limit: 7118530151 locality { bus_id: 1 links { } } incarnation: 4718765836722936952 physical_device_desc: &quot;device: 0, name: NVIDIA GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1&quot; ] 
</code></pre> <p>Even though tensorflow-gpu seems to recognize the GPU, training is still not faster, and the laptop without a GPU and with an older CPU was actually faster.</p> <p>The new laptop runs at about 1 it/s, but the old laptop runs at 9 it/s. I also have an even older laptop that runs at 5~6 it/s.</p> <p>Now, to train the 14 GB dataset it would take an estimated 30 days with the old laptop, and the new laptop would take maybe 45 days.</p> <p>The thing bugging me is: with the same code and library, shouldn't the next thing affecting training speed be the hardware? Or is there something I have misunderstood?</p> <p><a href="https://i.stack.imgur.com/rxtFm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rxtFm.png" alt="Hardware Spec" /></a></p>
<p>If you would like a particular operation to run on a device of your choice instead of what's automatically selected for you, you can use with <code>tf.device</code> to create a device context, and all the operations within that context will run on the same designated device.</p> <pre><code>import tensorflow as tf tf.debugging.set_log_device_placement(True) # Place tensors on the CPU with tf.device('/CPU:0'): a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]) b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]) # Run on the GPU c = tf.matmul(a, b) print(c) </code></pre>
tensorflow|artificial-intelligence|hardware
0
8,165
68,585,124
Tensorflow equivalent of numpy.random.normal
<p>I am trying to add a Gaussian noise to output of each activation layer in a Keras pre-trained imagenet. I am inserting a custom layer after every activation layer. In this custom layer, I want to add a Guassian noise with stddev as a percentage of the input tensor. In numpy, if I have a stddev matrix stddev_dist, I will generate random Gaussian noise as</p> <pre><code>guass_noise = np.random.normal(scale = stddev_dist, size=stddev_dist.shape) </code></pre> <p>How to do equivalent of this to the input tensor in a custom layer. stddev_dist_tensor = tf.abs(input) * 0.02 (stddev= 2% of input tensor)</p> <p>Can somebody help with generating gaussian noise for stddev_dist_tensor ?</p>
<p><strong>Possible solution to gaussian noise in between keras layers</strong></p> <pre><code>import tensorflow as tf

stddev = 0.02
input_layer = tf.keras.Input(shape=(128, 128, 3))
gaus = tf.keras.layers.GaussianNoise(stddev, name='output')(input_layer)
model = tf.keras.models.Model(inputs=input_layer, outputs=gaus)
</code></pre>
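<p><code>GaussianNoise</code> only supports a fixed <code>stddev</code>, though. If the goal really is a stddev proportional to the input (e.g. 2% of each activation, as asked), a hedged sketch is a small <code>Lambda</code> layer; note that, unlike <code>GaussianNoise</code>, this naive version also adds noise at inference time:</p> <pre><code>import tensorflow as tf

def proportional_noise(x, rate=0.02):
    # element-wise noise whose standard deviation is rate * |x|
    noise = tf.random.normal(tf.shape(x))
    return x + rate * tf.abs(x) * noise

inp = tf.keras.Input(shape=(128, 128, 3))
noisy = tf.keras.layers.Lambda(proportional_noise)(inp)
model = tf.keras.Model(inputs=inp, outputs=noisy)
</code></pre>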
python|numpy|tensorflow|keras|tf.keras
0
8,166
68,544,850
How to calculate point spread function (PSF) for signal data?
<p>I have chromatograph data (signal) in a pandas df and in one of the signal processing step is to perform peak sharpening as shown in fig below</p> <p><a href="https://i.stack.imgur.com/aKiRh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aKiRh.png" alt="enter image description here" /></a></p> <p>The reference literature is as follows:</p> <p>Literature :<a href="https://www.scirp.org/html/28309.html" rel="nofollow noreferrer">Paper 1</a>(Peak sharping section ) <a href="https://www.ee.ryerson.ca/%7Exzhang/publications/zhang-gensips02.pdf" rel="nofollow noreferrer">Paper 2</a></p> <p>Algorithm in Literature</p> <p><img src="https://chart.googleapis.com/chart?cht=tx&amp;chl=%20D_%7BK%2B1%7D%20%3D%20D_%7BK%7D%20%2B%20%5Clambda(D_%7B0%7D%20-%20h%5Cbigotimes_%7B%7D%5E%7B%7DD_%7BK%7D)" alt="D_{K+1} = D_{K} + \lambda(D_{0} - h\bigotimes_{}^{}D_{K}) " /></p> <p><img src="https://chart.googleapis.com/chart?cht=tx&amp;chl=%20D_%7BK%2B1" alt="D_{K+1} " />: Deconvolved high resolution data after K + 1 iterations</p> <p><img src="https://chart.googleapis.com/chart?cht=tx&amp;chl=h" alt="h" />: point spread function</p> <blockquote> <p>Adaptive point spread function estimation: Peaks are detected in the regularized trace and called as bases with standard classification methods. The called peaks are used to adaptively estimate the local point spread function h. The time-localization parameter d is estimated according to the peak spacing in the segment.</p> </blockquote> <p>How I can find <code>h</code> for signal data? Already gone through following</p> <p><a href="https://stackoverflow.com/questions/47585871/psf-point-spread-function-for-an-image-2d">PSF (point spread function) for an image (2D)</a></p> <p><a href="https://stackoverflow.com/questions/55883497/how-do-you-extract-a-point-spread-function-from-a-fits-image">How do you extract a point spread function from a fits image?</a></p> <p><a href="https://stackoverflow.com/questions/63739492/how-to-build-a-function-in-python-jupyter-for-calculation-of-point-spread-func">How to build a function in Python (Jupyter) for calculation of Point Spread Function (image processing?)</a></p> <p>in all of these it was image data</p> <p><strong>Sample Data</strong> <a href="https://i.stack.imgur.com/R5czX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R5czX.png" alt="enter image description here" /></a></p> <pre><code>Sample data array([[ 31, 49, 1, 44], [ 36, 48, 0, 47], [ 43, 47, 0, 53], [ 50, 44, 0, 63], [ 59, 41, 0, 75], [ 68, 40, 1, 90], [ 78, 40, 6, 107], [ 87, 41, 12, 123], [ 99, 43, 20, 140], [110, 45, 31, 155], [121, 47, 42, 170], [131, 48, 53, 182], [140, 49, 63, 191], [148, 50, 72, 196], [155, 51, 79, 196], [161, 53, 83, 189], [166, 55, 83, 177], [169, 58, 80, 160], [170, 62, 72, 140], [167, 65, 62, 119], [161, 70, 51, 100], [154, 75, 40, 84], [144, 80, 30, 72], [132, 86, 23, 65], [121, 92, 19, 61], [111, 98, 19, 61], [106, 102, 23, 63], [105, 104, 29, 67], [111, 104, 38, 71], [123, 102, 48, 75], [141, 98, 59, 78], [160, 92, 71, 79], [179, 85, 85, 78], [195, 77, 101, 74], [205, 68, 117, 68], [208, 59, 133, 61], [203, 51, 145, 52], [191, 43, 152, 43], [173, 37, 154, 35], [150, 32, 151, 28], [123, 30, 142, 23], [ 94, 32, 129, 20], [ 65, 40, 114, 21], [ 40, 52, 96, 25], [ 21, 70, 77, 35], [ 9, 91, 58, 51], [ 1, 113, 39, 71], [ 0, 134, 24, 97], [ 0, 152, 13, 126], [ 0, 168, 5, 157], [ 0, 181, 0, 188], [ 0, 193, 0, 216], [ 0, 203, 0, 241], [ 0, 211, 0, 258], [ 0, 215, 0, 265], [ 0, 213, 0, 262], [ 0, 207, 0, 249], 
[ 0, 195, 0, 227], [ 0, 180, 0, 200], [ 0, 164, 0, 170], [ 0, 148, 0, 140], [ 0, 132, 0, 113], [ 0, 116, 0, 90], [ 0, 100, 5, 74], [ 0, 83, 18, 62], [ 0, 66, 38, 57], [ 0, 49, 64, 58], [ 0, 36, 98, 64], [ 0, 25, 133, 76], [ 0, 18, 164, 94], [ 0, 15, 187, 116], [ 0, 16, 199, 143], [ 0, 18, 199, 169], [ 0, 22, 186, 193], [ 1, 26, 164, 211], [ 7, 28, 134, 222], [ 17, 29, 102, 224], [ 31, 28, 71, 218], [ 50, 26, 44, 204], [ 71, 22, 24, 184], [ 91, 18, 11, 160], [106, 13, 4, 134], [117, 8, 3, 109], [122, 5, 5, 85], [120, 2, 10, 64], [113, 0, 16, 46], [101, 0, 22, 32], [ 86, 0, 28, 22], [ 69, 0, 34, 16], [ 52, 3, 39, 13], [ 36, 10, 44, 13], [ 24, 22, 47, 18], [ 18, 37, 48, 25], [ 20, 56, 46, 36], [ 31, 76, 39, 49], [ 51, 94, 28, 63], [ 81, 110, 19, 75], [118, 123, 10, 85], [158, 132, 4, 90], [199, 136, 0, 89], [236, 135, 0, 84], [265, 131, 0, 73], [282, 122, 0, 59], [286, 110, 0, 44], [277, 95, 0, 29], [256, 79, 0, 17], [226, 61, 0, 8], [189, 44, 0, 2], [150, 29, 6, 0], [112, 17, 19, 0], [ 77, 8, 41, 0], [ 49, 3, 74, 0], [ 28, 0, 117, 0], [ 15, 0, 168, 3], [ 7, 0, 224, 12], [ 5, 0, 280, 28], [ 5, 0, 333, 53], [ 5, 0, 379, 87], [ 5, 0, 411, 130], [ 4, 0, 425, 178], [ 2, 0, 419, 226], [ 1, 0, 393, 271], [ 0, 0, 350, 307], [ 0, 0, 299, 329], [ 0, 0, 248, 334], [ 0, 0, 206, 320], [ 0, 0, 178, 289], [ 0, 0, 167, 246], [ 0, 0, 173, 196], [ 0, 0, 192, 146], [ 0, 0, 217, 100], [ 0, 0, 246, 61], [ 0, 0, 275, 33], [ 0, 0, 301, 15], [ 0, 0, 326, 4], [ 0, 0, 351, 0], [ 0, 0, 377, 0], [ 0, 0, 403, 0], [ 0, 0, 430, 0], [ 0, 0, 456, 0], [ 0, 0, 484, 0], [ 0, 0, 510, 0], [ 0, 3, 535, 0], [ 0, 7, 555, 0], [ 0, 14, 569, 0], [ 0, 22, 574, 0], [ 0, 33, 572, 0], [ 0, 44, 565, 0], [ 0, 55, 555, 0], [ 0, 66, 548, 0], [ 1, 76, 546, 0], [ 9, 84, 550, 0], [ 27, 91, 555, 0], [ 53, 94, 558, 0], [ 88, 95, 554, 0], [132, 94, 536, 2], [178, 91, 503, 7], [219, 86, 453, 15], [252, 82, 390, 24], [273, 77, 317, 36], [279, 72, 240, 44], [270, 69, 165, 49], [247, 69, 102, 48], [214, 73, 55, 41], [173, 83, 23, 30], [132, 98, 5, 19], [ 93, 121, 0, 10], [ 59, 150, 0, 3], [ 34, 186, 0, 0], [ 17, 227, 0, 0], [ 6, 271, 0, 0], [ 1, 315, 0, 0], [ 0, 354, 0, 0], [ 0, 384, 0, 5], [ 0, 404, 0, 13], [ 0, 413, 0, 25], [ 0, 411, 0, 39], [ 0, 398, 0, 55], [ 0, 378, 0, 67], [ 0, 350, 0, 75], [ 0, 315, 0, 76], [ 0, 276, 9, 72], [ 0, 235, 28, 64], [ 0, 198, 60, 53], [ 2, 164, 107, 42], [ 13, 139, 172, 30], [ 36, 120, 242, 20], [ 78, 109, 310, 12], [143, 103, 367, 7], [233, 101, 404, 3], [344, 101, 414, 1], [467, 103, 394, 0], [591, 103, 350, 0], [702, 100, 287, 0], [786, 94, 215, 0], [831, 84, 144, 0], [833, 70, 86, 0], [792, 54, 44, 4], [713, 38, 17, 16], [606, 24, 4, 36]], dtype=int64) </code></pre>
<p>Thanks for posting your sample data.</p> <p>Start by taking the Fourier Transform of each of the four columns in turn. <code>arr</code> is your data above.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

t = np.arange(0, arr.shape[0])
plt.figure()
for icol, col in enumerate(arr.T):
    sigFFT = np.fft.fft(col) / t.shape[0]
    freq = np.fft.fftfreq(t.shape[0], d=1)
    plt.plot(freq, np.abs(sigFFT))  # plot the magnitude, since the FFT is complex
</code></pre> <p>(Huge apologies for no labels etc.)</p> <p>You can see the PSF (broad features). Fit the envelope to get the PSFs. What shape of PSF are you expecting? Try a sinc or some such.</p> <p><a href="https://i.stack.imgur.com/wmwwi.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wmwwi.jpg" alt="FFT of data" /></a></p>
python|pandas|numpy|scipy|signal-processing
1
8,167
65,872,566
Using tensorflow and TFBertForNextSentencePrediction to further train bert on a specific corpus
<p>I'm trying to train <code>TFBertForNextSentencePrediction</code> on my own corpus, not from scratch, but rather taking the existing bert model with only a next sentence prediction head and further train it on a specific cuprous of text (pairs of sentences). Then I want to use the model I trained to be able to extract sentence embeddings from the last hidden state for other texts.</p> <p>Currently the problem I encounter is that after I train the <code>keras</code> model I am not able to extract the hidden states of the last layer before the next sentence prediction head.</p> <p>Below is the code. Here I only train it on a few sentences just to make sure the code works. Any help will be greatly appreciated.</p> <p>Thanks, Ayala</p> <pre><code>import numpy as np import pandas as pd import tensorflow as tf from datetime import datetime from tensorflow.keras.utils import to_categorical from tensorflow.keras.preprocessing import sequence from tensorflow.keras.callbacks import ModelCheckpoint from transformers import BertTokenizer, PreTrainedTokenizer, BertConfig, TFBertForNextSentencePrediction from sklearn.metrics import confusion_matrix, accuracy_score, f1_score, precision_score, recall_score PRETRAINED_MODEL = 'bert-base-uncased' # set paths and file names time_stamp = str(datetime.now().year) + &quot;_&quot; + str(datetime.now().month) + &quot;_&quot; + str(datetime.now().day) + &quot;_&quot; + \ str(datetime.now().hour) + &quot;_&quot; + str(datetime.now().minute) model_name = &quot;pretrained_nsp_model&quot; model_dir_data = model_name + &quot;_&quot; + time_stamp model_fn = model_dir_data + &quot;.h5&quot; base_path = os.path.dirname(__file__) input_path = os.path.join(base_path, &quot;input_data&quot;) output_path = os.path.join(base_path, &quot;output_models&quot;) model_path = os.path.join(output_path, model_dir_data) if not os.path.exists(model_path): os.makedirs(model_path) # set model checkpoint checkpoint = ModelCheckpoint(os.path.join(model_path, model_fn), monitor=&quot;val_loss&quot;, verbose=1, save_best_only=True, save_weights_only=True, mode=&quot;min&quot;) # read data max_length = 512 def get_tokenizer(pretrained_model_name): tokenizer = BertTokenizer.from_pretrained(pretrained_model_name) return tokenizer def tokenize_nsp_data(A, B, max_length): data_inputs = tokenizer(A, B, add_special_tokens=True, max_length=max_length, truncation=True, pad_to_max_length=True, return_attention_mask=True, return_tensors=&quot;tf&quot;) return data_inputs def get_data_features(data_inputs, max_length): data_features = {} for key in data_inputs: data_features[key] = sequence.pad_sequences(data_inputs[key], maxlen=max_length, truncating=&quot;post&quot;, padding=&quot;post&quot;, value=0) return data_features def get_transformer_model(transformer_model_name): # get transformer model config = BertConfig(output_attentions=True) config.output_hidden_states = True config.return_dict = True transformer_model = TFBertForNextSentencePrediction.from_pretrained(transformer_model_name, config=config) return transformer_model def get_keras_model(transformer_model): # get keras model input_ids = tf.keras.layers.Input(shape=(max_length,), name='input_ids', dtype='int32') input_masks_ids = tf.keras.layers.Input(shape=(max_length,), name='attention_mask', dtype='int32') token_type_ids = tf.keras.layers.Input(shape=(max_length,), name='token_type_ids', dtype='int32') X = transformer_model({'input_ids': input_ids, 'attention_mask': input_masks_ids, 'token_type_ids': token_type_ids})[0] model = 
tf.keras.Model(inputs=[input_ids, input_masks_ids, token_type_ids], outputs=X) model.summary() model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), optimizer=tf.optimizers.Adam(learning_rate=0.00005), metrics=['accuracy']) return model def get_metrices(true_values, pred_values): cm = confusion_matrix(true_values, pred_values) acc_score = accuracy_score(true_values, pred_values) f1 = f1_score(true_values, pred_values, average=&quot;binary&quot;) precision = precision_score(true_values, pred_values, average=&quot;binary&quot;) recall = recall_score(true_values, pred_values, average=&quot;binary&quot;) metrices = {'confusion_matrix': cm, 'acc_score': acc_score, 'f1': f1, 'precision': precision, 'recall': recall } for k, v in metrices.items(): print(k, ':\n', v) return metrices # get tokenizer tokenizer = get_tokenizer(PRETRAINED_MODEL) # train prompt = [&quot;Hello&quot;, &quot;Hello&quot;, &quot;Hello&quot;, &quot;Hello&quot;] next_sentence = [&quot;How are you?&quot;, &quot;Pizza&quot;, &quot;How are you?&quot;, &quot;Pizza&quot;] train_labels = [0, 1, 0, 1] train_labels = to_categorical(train_labels) train_inputs = tokenize_nsp_data(prompt, next_sentence, max_length) train_data_features = get_data_features(train_inputs, max_length) # val prompt = [&quot;Hello&quot;, &quot;Hello&quot;, &quot;Hello&quot;, &quot;Hello&quot;] next_sentence = [&quot;How are you?&quot;, &quot;Pizza&quot;, &quot;How are you?&quot;, &quot;Pizza&quot;] val_labels = [0, 1, 0, 1] val_labels = to_categorical(val_labels) val_inputs = tokenize_nsp_data(prompt, next_sentence, max_length) val_data_features = get_data_features(val_inputs, max_length) # get transformer model transformer_model = get_transformer_model(PRETRAINED_MODEL) # get keras model model = get_keras_model(transformer_model) callback_list = [] early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=4, min_delta=0.005, verbose=1) callback_list.append(early_stop) reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=2, epsilon=0.001) callback_list.append(reduce_lr) callback_list.append(checkpoint) history = model.fit([train_data_features['input_ids'], train_data_features['attention_mask'], train_data_features['token_type_ids']], np.array(train_labels), batch_size=2, epochs=3, validation_data=([val_data_features['input_ids'], val_data_features['attention_mask'], val_data_features['token_type_ids']], np.array(val_labels)), verbose=1, callbacks=callback_list) model.layers[3].save_pretrained(model_path) # need to save this and make sure i can get the hidden states ## predict # load model transformer_model = get_transformer_model(model_path) model = get_keras_model(transformer_model) model.summary() model.load_weights(os.path.join(model_path, model_fn)) # test prompt = [&quot;Hello&quot;, &quot;Hello&quot;] next_sentence = [&quot;How are you?&quot;, &quot;Pizza&quot;] test_labels = [0, 1] test_df = pd.DataFrame({'A': prompt, 'B': next_sentence, 'label': test_labels}) test_labels = to_categorical(val_labels) test_inputs = tokenize_nsp_data(prompt, next_sentence, max_length) test_data_features = get_data_features(test_inputs, max_length) # predict pred_test = model.predict([test_data_features['input_ids'], test_data_features['attention_mask'], test_data_features['token_type_ids']]) preds = tf.keras.activations.softmax(tf.convert_to_tensor(pred_test)).numpy() true_test = test_df['label'].to_list() pred_test = [1 if p[1] &gt; 0.5 else 0 for p in preds] test_df['pred_val'] = pred_test metrices 
= get_metrices(true_test, pred_test) </code></pre> <p>I am also attaching a picture from the debugging mode in which I try (with no success) to view the hidden state. <strong>The problem is I am not able to see and save the transform model I trained and view the embeddings of the last hidden state.</strong> I tried converting the <code>KerasTensor</code> to <code>numpy array</code> but without success.</p> <p><a href="https://i.stack.imgur.com/inP4a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/inP4a.png" alt="enter image description here" /></a></p>
<p>The issue resides in your 'get_keras_model()' function. You defined here that you are only interested in the first of the element of the output (i.e. logits) with:</p> <pre class="lang-py prettyprint-override"><code>X = transformer_model({'input_ids': input_ids, 'attention_mask': input_masks_ids, 'token_type_ids': token_type_ids})[0] </code></pre> <p>Just do the index selection as conditional like this to get the whole output of the model</p> <pre class="lang-py prettyprint-override"><code>def get_keras_model(transformer_model, is_training=True): ###your other code X = transformer_model({'input_ids': input_ids, 'attention_mask': input_masks_ids, 'token_type_ids': token_type_ids}) if is_training: X= X[0] ###your other code return model #predict ###your other code model = get_keras_model(transformer_model, is_training=False) ###your other code print(pred_test.keys()) </code></pre> <p>Output:</p> <pre class="lang-py prettyprint-override"><code>odict_keys(['logits', 'hidden_states', 'attentions']) </code></pre> <p>P.S.: The BertTokenizer can truncate and add padding by themself (<a href="https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.__call__" rel="nofollow noreferrer">documentation</a>).</p>
python|tensorflow|keras|huggingface-transformers
2
8,168
65,869,458
Tensorflow error: Could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
<p>My computer specification are: Windows 10 cuda 11.2 cudnn 8.0.5 Nvidia geforce GTX 3080</p> <p>I used this web(<a href="https://github.com/armaanpriyadarshan/Training-a-Custom-TensorFlow-2.x-Object-Detector" rel="nofollow noreferrer">https://github.com/armaanpriyadarshan/Training-a-Custom-TensorFlow-2.x-Object-Detector</a>) to install faster rcnn. When I trained this network, it had an error:</p> <pre><code>2021-01-24 18:12:47.713443: E tensorflow/stream_executor/cuda/cuda_dnn.cc:336] Could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED 2021-01-24 18:12:47.715010: E tensorflow/stream_executor/cuda/cuda_dnn.cc:340] Error retrieving driver version: Unimplemented: kernel reported driver version not implemented on Windows 2021-01-24 18:12:47.718097: E tensorflow/stream_executor/cuda/cuda_dnn.cc:336] Could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED 2021-01-24 18:12:47.719553: E tensorflow/stream_executor/cuda/cuda_dnn.cc:340] Error retrieving driver version: Unimplemented: kernel reported driver version not implemented on Windows Traceback (most recent call last): File &quot;model_main_tf2.py&quot;, line 113, in &lt;module&gt; tf.compat.v1.app.run() File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py&quot;, line 40, in run _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef) File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\absl\app.py&quot;, line 300, in run _run_main(main, args) File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\absl\app.py&quot;, line 251, in _run_main sys.exit(main(argv)) File &quot;model_main_tf2.py&quot;, line 104, in main model_lib_v2.train_loop( File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\object_detection\model_lib_v2.py&quot;, line 561, in train_loop load_fine_tune_checkpoint(detection_model, File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\object_detection\model_lib_v2.py&quot;, line 361, in load_fine_tune_checkpoint strategy.run( File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\distribute\distribute_lib.py&quot;, line 1259, in run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\distribute\distribute_lib.py&quot;, line 2730, in call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\distribute\mirrored_strategy.py&quot;, line 628, in _call_for_each_replica return mirrored_run.call_for_each_replica( File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\distribute\mirrored_run.py&quot;, line 75, in call_for_each_replica return wrapped(args, kwargs) File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\eager\def_function.py&quot;, line 828, in __call__ result = self._call(*args, **kwds) File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\eager\def_function.py&quot;, line 888, in _call return self._stateless_fn(*args, **kwds) File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\eager\function.py&quot;, line 2942, in __call__ return graph_function._call_flat( File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\eager\function.py&quot;, line 1918, in _call_flat return self._build_call_outputs(self._inference_function.call( File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\eager\function.py&quot;, line 555, in call 
outputs = execute.execute( File &quot;C:\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\eager\execute.py&quot;, line 59, in quick_execute tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found. (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node model/conv1_conv/Conv2D (defined at \site-packages\object_detection\meta_architectures\faster_rcnn_meta_arch.py:1346) ]] [[Loss/RPNLoss/BalancedPositiveNegativeSampler/Cast_8/_192]] (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node model/conv1_conv/Conv2D (defined at \site-packages\object_detection\meta_architectures\faster_rcnn_meta_arch.py:1346) ]] 0 successful operations. 0 derived errors ignored. [Op:__inference__dummy_computation_fn_16411] Errors may have originated from an input operation. Input Source operations connected to node model/conv1_conv/Conv2D: model/lambda/Pad (defined at \site-packages\object_detection\models\keras_models\resnet_v1.py:49) Input Source operations connected to node model/conv1_conv/Conv2D: model/lambda/Pad (defined at \site-packages\object_detection\models\keras_models\resnet_v1.py:49) Function call stack: _dummy_computation_fn -&gt; _dummy_computation_fn </code></pre> <p>How to solve this problem?</p>
<p>Could you please share your TensorFlow version? I believe that tensorflow&lt;=2.3 does not support CUDA versions higher than 10.1, so that might be causing the problem.</p> <p><del>If you do have the correct versions of CUDA and TensorFlow, then I suggest you check out <a href="https://stackoverflow.com/questions/61021287/tf-2-could-not-create-cudnn-handle-cudnn-status-internal-error">this</a>: it suggests allowing memory growth on your GPU.</del></p> <p>EDIT:</p> <p>So it appears that you do have TensorFlow 2.4, so what I recommend here is downgrading CUDA to 10.1 and TensorFlow to 2.3, as suggested by the author of the repository. Or, if you insist on using TensorFlow 2.4, you should still downgrade your CUDA version to 11.0 as mentioned <a href="https://github.com/tensorflow/tensorflow/issues/46093" rel="nofollow noreferrer">here</a>, since TensorFlow does not yet provide support for CUDA 11.2.</p>
python|tensorflow
2
8,169
65,499,901
Error in getting accuracy using test set with PyTorch
<p>I am trying to find the accuracy of my model that I created with PyTorch, but I get an error. Originally I had a different error, which is fixed, but now I get this error.</p> <p>I use this to get my test set:</p> <pre><code>testset = torchvision.datasets.FashionMNIST(MNIST_DIR, train=False, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1), torchvision.transforms.ToTensor(), # image to Tensor torchvision.transforms.Normalize((0.1307,), (0.3081,)) # image, label ])) testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False) </code></pre> <p>When I try to access the test set I created, it tries to retrain the model for some reason, then proceeds to error out. This is the code that gets the accuracy and calls the test set</p> <pre><code>correct = 0 total = 0 with torch.no_grad(): print(&quot;entered here&quot;) for (x, y_gt) in testloader: x = x.to(device) y_gt = y_gt.to(device) outputs = teacher_model(x) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total)) </code></pre> <p>This is the error I am getting:</p> <pre><code>Traceback (most recent call last): File &quot;[path]/train_teacher_1.py&quot;, line 134, in &lt;module&gt; outputs = teacher_model(x) File &quot;[path]\anaconda3\lib\site-packages\torch\nn\modules\module.py&quot;, line 727, in _call_impl result = self.forward(*input, **kwargs) File &quot;[path]\models.py&quot;, line 17, in forward x = F.relu(self.layer1(x)) File &quot;[path]\anaconda3\lib\site-packages\torch\nn\modules\module.py&quot;, line 727, in _call_impl result = self.forward(*input, **kwargs) File &quot;[path]\anaconda3\lib\site-packages\torch\nn\modules\linear.py&quot;, line 93, in forward return F.linear(input, self.weight, self.bias) File &quot;[path]\anaconda3\lib\site-packages\torch\nn\functional.py&quot;, line 1692, in linear output = input.matmul(weight.t()) RuntimeError: mat1 dim 1 must match mat2 dim 0 </code></pre> <p>Please let me know if you would like the rest of the code for training the model. I left it out because the post got too long.</p> <p>I am new to PyTorch and any help is appreciated. Thanks in advance.</p>
<p>I was able to figure out the issue: I needed to check the size of <code>x</code>.</p> <p>I added this to the for loop to fix it: <code>x = torch.flatten(x, start_dim=1, end_dim=-1)</code></p>
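<p>For context, the flatten goes inside the evaluation loop before the forward pass (assuming the first layer of <code>teacher_model</code> is a fully connected layer expecting 2-D input; FashionMNIST batches come in as <code>(batch, 1, 28, 28)</code> and become <code>(batch, 784)</code>):</p> <pre><code>for (x, y_gt) in testloader:
    x = x.to(device)
    y_gt = y_gt.to(device)
    x = torch.flatten(x, start_dim=1, end_dim=-1)
    outputs = teacher_model(x)
</code></pre>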
python|machine-learning|pytorch
0
8,170
65,581,765
Interpolate CubicSpline with Pandas
<p>I have a dataframe with ResidMat and Price, I use scipy to find the interpolate CubicSpline. I used CubicSpline and apply to find all data on my dataset. But it's not very fast, because in this case have no more data. I will have more than a hundred data and it's very slow. Do you have an idea to do that but maybe with a matrix ?</p> <p>Thank you,</p> <pre><code> def add_interpolated_price(row, generic_residmat): from scipy.interpolate import CubicSpline residmats = row[['ResidMat']].values prices = row[['Price']].values cs = CubicSpline(residmats, prices) return float(cs(generic_residmat)) df = pd.DataFrame([[1,18,38,58,83,103,128,148,32.4,32.5,33.8,33.5,32.8,32.4,32.7],[2,17,37,57,82,102,127,147,31.2,31.5,32.7,33.2,32.5,32.9,33.3]],columns = ['index','ResidMat','ResidMat','ResidMat','ResidMat','ResidMat','ResidMat','ResidMat','Price','Price','Price','Price','Price','Price','Price'],index=['2010-06-25','2010-06-28']) my_resimmat = 30 df['Generic_Value'] = df.apply(lambda row: add_interpolated_price(row, generic_residmat=my_resimmat), axis=1) </code></pre>
<p>After profiling this code, most of the time is spent interpolating, so the best thing I would suggest is going parallel with pandarallel. <a href="https://stackoverflow.com/questions/45545110/make-pandas-dataframe-apply-use-all-cores">Make Pandas DataFrame apply() use all cores?</a> has the details. My favourite is this method (outline code below):</p> <pre><code>from pandarallel import pandarallel
from math import sin

pandarallel.initialize()

def func(x):
    return sin(x**2)

df.parallel_apply(func, axis=1)
</code></pre> <p>Note this only works on Linux and macOS; on Windows, pandarallel will work only if the Python session is executed from Windows Subsystem for Linux (WSL).</p>
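<p>Applied to the function from the question, that would be roughly the following (pandarallel keeps the same call signature as <code>apply</code>, so only the method name changes):</p> <pre><code>from pandarallel import pandarallel

pandarallel.initialize()

def interp_row(row):
    return add_interpolated_price(row, generic_residmat=my_resimmat)

df['Generic_Value'] = df.parallel_apply(interp_row, axis=1)
</code></pre>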
python|pandas|scipy|interpolation|cubic
1
8,171
65,828,791
process columns in pandas dataframe
<p>I have a dataframe <strong>df</strong>:</p> <pre><code> Col1 Col2 Col3 0 a1 NaN NaN 1 a2 b1 NaN 2 a3 b3 c1 3 a4 NaN c2 </code></pre> <p>I have tried:</p> <p><code>new_df = '[' + df + ']'</code></p> <p><code>new_df['Col4']=new_df[new_df.columns[0:]].apply(lambda x:','.join(x.dropna().astype(str)),axis =1)</code></p> <p><code>df_final = pd.concat([df, new_df['col4']], axis =1)</code></p> <p>I am getting this:</p> <p><a href="https://i.stack.imgur.com/OPh1L.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OPh1L.jpg" alt="wrong dataframe" /></a></p> <p>I was looking for a robust solution to get to something that should look like this:</p> <p><a href="https://i.stack.imgur.com/QChbE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QChbE.jpg" alt="expected resultant frame" /></a></p> <p>I know there is no direct way to do this; the data frame is eventually going to be at least 20k rows, hence the question to fellow stack-people.</p> <p>Thanks.</p> <p>Let me know if you have any more questions and I can edit the question to add points.</p>
<p>You can wrap every value except the first non-missing one in <code>[]</code>, using the helper index <code>i</code> from <code>enumerate</code> to detect that first value:</p> <pre><code>def f(x): gen = (y for y in x if pd.notna(y)) return ','.join(y if i == 0 else '['+y+']' for i, y in enumerate(gen)) #f = lambda x: ','.join(y if i == 0 else '['+y+']' for i, y in enumerate(x.dropna())) df['col4'] = df.apply(f, axis=1) print (df) Col1 Col2 Col3 Col4 col4 0 a1 NaN d8 NaN a1,[d8] 1 a2 b1 d3 NaN a2,[b1],[d3] 2 NaN b3 c1 NaN b3,[c1] 3 a4 NaN c2 NaN a4,[c2] 4 NaN NaN c6 d5 c6,[d5] </code></pre> <p>Performance test:</p> <pre><code>#test for 25k rows df = pd.concat([df] * 5000, ignore_index=True) f1 = lambda x: ','.join(y if i == 0 else '['+y+']' for i, y in enumerate(x.dropna())) %timeit df.apply(f1, axis=1) 3.62 s ± 21 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit df.apply(f, axis =1) 475 ms ± 3.92 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) </code></pre>
python|pandas|dataframe
2
8,172
63,655,453
Keras. Siamese network and triplet loss
<p>I want to build a network that should be able to verify images (e.g. human faces). As I understand it, the best solution for that is a Siamese network with a triplet loss. I didn't find any ready-made implementations, so I decided to create my own.</p> <p>But I have a question about Keras. For example, here's the structure of the network:</p> <p><a href="https://i.stack.imgur.com/uY1Gg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uY1Gg.png" alt="network structure" /></a></p> <p>And the code is something like this:</p> <pre class="lang-py prettyprint-override"><code>embedding = Sequential([
    Flatten(),
    Dense(1024, activation='relu'),
    Dense(64),
    Lambda(lambda x: K.l2_normalize(x, axis=-1))
])

input_a = Input(shape=shape, name='anchor')
input_p = Input(shape=shape, name='positive')
input_n = Input(shape=shape, name='negative')

emb_a = embedding(input_a)
emb_p = embedding(input_p)
emb_n = embedding(input_n)

out = Concatenate()([emb_a, emb_p, emb_n])

model = Model([input_a, input_p, input_n], out)

model.compile(optimizer='adam', loss=&lt;triplet_loss&gt;)
</code></pre> <p>I defined only one embedding model. Does this mean that once the model starts training, the weights will be the same for each input?</p> <p>If so, how can I extract the embedding weights from the <code>model</code>?</p>
<p>Yes, in a triplet loss setup the weights should be shared across all three branches, i.e. <strong>Anchor, Positive and Negative</strong>. In Tensorflow 1.x, to achieve weight sharing you can use <code>reuse=True</code> in <code>tf.layers</code>.</p>

<p>But in Tensorflow 2.x, since <code>tf.layers</code> has been moved to <code>tf.keras.layers</code>, the <code>reuse</code> functionality has been removed. To achieve weight sharing you can write a custom layer that takes the parent layer and reuses its weights.</p>

<p>Below is a sample example of how to do that.</p>

<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Activation

class SharedConv(tf.keras.layers.Layer):
    def __init__(
        self,
        filters,
        kernel_size,
        strides=None,
        padding=None,
        dilation_rates=None,
        activation=None,
        use_bias=True,
        **kwargs
    ):
        self.filters = filters
        self.kernel_size = kernel_size
        self.strides = strides
        self.padding = padding
        self.dilation_rates = dilation_rates
        self.activation = activation
        self.use_bias = use_bias
        super().__init__(**kwargs)

    def build(self, input_shape):
        self.conv = Conv2D(
            self.filters,
            self.kernel_size,
            strides=self.strides,
            padding=self.padding,
            dilation_rate=self.dilation_rates[0]
        )
        self.act1 = Activation(self.activation)
        self.act2 = Activation(self.activation)

    def call(self, inputs, **kwargs):
        x1 = self.conv(inputs)
        x1 = self.act1(x1)
        # second branch reuses the kernel (and bias) of self.conv
        x2 = tf.nn.conv2d(
            inputs,
            self.conv.weights[0],
            padding=self.padding,
            strides=self.strides,
            dilations=self.dilation_rates[1]
        )
        if self.use_bias:
            x2 = x2 + self.conv.weights[1]
        x2 = self.act2(x2)
        return x1, x2
</code></pre>
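<p>As for the question's last point, extracting the embedding weights: since all three inputs go through the same <code>embedding</code> submodel defined in the question, a sketch like this should work once the enclosing model has been built or trained:</p>

<pre><code># the shared submodel keeps a single set of weights,
# so they can be read out (or reused for inference) directly
shared_weights = embedding.get_weights()

# embeddings for new images can be computed with the submodel alone
# (some_images is a placeholder for your own input batch)
single_embedding = embedding.predict(some_images)
</code></pre>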
python|tensorflow|keras|computer-vision|loss-function
3
8,173
63,717,207
how to combine (merge) different regression models
<p>I am working on training different models for different human pose estimation problems. Actually, what I need is to get different outputs from a regression model for different joints of the human body. After searching for this problem, I came up with two ways:</p>
<ol>
<li>training different models and combining their final results.</li>
<li>training models in a chain shape (the input of the second model is the output of the first model, and so on).</li>
</ol>
<p>I know Keras has a function called concatenate, which is a layer to merge two outputs of models. But if I don't want to use Keras, is it possible to have 6 models and then merge them in a way that the final trained model can estimate all the outputs of these different models at once?</p>
<p>My models are something like this (they differ based on the different datasets I have):</p>
<pre><code> ## conv1 layer
W_conv1 = weight_func([3, 3, 1, 32]) 
b_conv1 = bias_func([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1) 
# h_pool1 = max_pool_2x2(h_conv1) 
#h_drop1 = tf.nn.dropout(h_conv1, keep_prob) 

## conv2 layer
W_conv2 = weight_func([3, 3, 32, 64]) # patch 2x2, in size 32, out size 64
b_conv2 = bias_func([64])
h_conv2 = tf.nn.relu(conv2d(h_conv1, W_conv2) + b_conv2) 
#h_drop2 = tf.nn.dropout(h_conv2, keep_prob) 

## conv3 layer
W_conv3 = weight_func([3, 3, 64, 128])
b_conv3 = bias_func([128])
h_conv3 = tf.nn.relu(conv2d(h_conv2, W_conv3) + b_conv3) 
#h_drop3 = tf.nn.dropout(h_conv3, keep_prob) 

## conv4 layer
W_conv4 = weight_func([3, 3, 128,256]) # patch 3*3, in size 32, out size 64
b_conv4 = bias_func([256])
h_conv4 = tf.nn.relu(conv2d(h_conv3, W_conv4) + b_conv4) 
#h_drop4 = tf.nn.dropout(h_conv4, keep_prob) 

## fc1 layer
W_fc1 = weight_func([6 * 6 * 256, 9216])
b_fc1 = bias_func([9216])
h_pool2_flat = tf.reshape(h_conv4, [-1, 6 * 6 * 256])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# fc2 layer
W_fc2 = weight_func([9216, 1])
b_fc2 = bias_func([1])
prediction = tf.add(tf.matmul(h_fc1_drop, W_fc2) , b_fc2, name= 'output_node')

cross_entropy = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), reduction_indices=[1]))
</code></pre>
<p>You can use the Functional API to achieve this. I have added a simple example; you can adapt it to more complicated models according to your use case.</p>

<p><strong>Code:</strong></p>

<pre><code>import tensorflow as tf
import numpy as np

# Here I have generated two different datasets and labels containing different numbers of features.
x1 = tf.constant(np.random.randint(50, size =(1000,13)), dtype = tf.float32)
y1 = tf.constant(np.random.randint(2, size =(1000,)), dtype = tf.int32)

x2 = tf.constant(np.random.randint(50, size =(1000,6)), dtype = tf.float32)
y2 = tf.constant(np.random.randint(2, size =(1000,)), dtype = tf.int32)

# Creation of model
def create_model3():
    input1 = tf.keras.Input(shape=(13,), name = 'I1')
    input2 = tf.keras.Input(shape=(6,), name = 'I2')

    hidden1 = tf.keras.layers.Dense(units = 4, activation='relu')(input1)
    hidden2 = tf.keras.layers.Dense(units = 4, activation='relu')(input2)
    hidden3 = tf.keras.layers.Dense(units = 3, activation='relu')(hidden1)
    hidden4 = tf.keras.layers.Dense(units = 3, activation='relu')(hidden2)
    output1 = tf.keras.layers.Dense(units = 2, activation='softmax', name ='O1')(hidden3)
    output2 = tf.keras.layers.Dense(units = 2, activation='softmax', name = 'O2')(hidden4)

    model = tf.keras.models.Model(inputs = [input1,input2], outputs = [output1,output2])

    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model

model = create_model3()
tf.keras.utils.plot_model(model, 'my_first_model.png', show_shapes=True)
</code></pre>

<p><strong>Model Architecture:</strong></p>

<p><a href="https://i.stack.imgur.com/bh99J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bh99J.png" alt="Model" /></a></p>

<p>You can train this model using model.fit() like this:</p>

<pre><code>history = model.fit(
    x = {'I1':x1, 'I2':x2}, 
    y = {'O1':y1, 'O2': y2},
    batch_size = 32,
    epochs = 10,
    verbose = 1,
    callbacks = None,
#     validation_data = [(val_data,new_val_data),(val_labels, new_val_labels)]
)
</code></pre>

<p>Note: For training to work the number of samples in all your input data should be the same, i.e. if x1 contains 1000 rows, x2 should also contain 1000 rows.</p>

<p>You can predict using this model like this:</p>

<pre><code>model.predict(x = {'I1':x1, 'I2':x2})
</code></pre>
python|tensorflow|regression|pose-estimation
1
8,174
24,900,247
numpy.genfromtxt csv file with null characters
<p>I'm working on a scientific graphing script, designed to create graphs from csv files output by Agilent's Chemstation software. </p> <p>I got the script working perfectly when the files come from one version of Chemstation (The version for liquid chromatography). </p> <p>Now i'm trying to port it to work on our GC (Gas Chromatography). For some reason, this version of chemstation inserts nulls in between each character in any text file it outputs.</p> <p>I'm trying to use numpy.genfromtxt to get the x,y data into python in order to create the graphs (using matplotlib). </p> <p>I originally used:</p> <pre><code>data = genfromtxt(directory+signal, delimiter = ',') </code></pre> <p>to load the data in. When I do this with a csv file generated by our GC, I get an array of all 'nan' values. If I set the dtype to none, I get 'byte strings' that look like this:</p> <pre><code>b'\x00 \x008\x008\x005\x00.\x002\x005\x002\x001\x007\x001\x00\r' </code></pre> <p>What I need is a float, for the above string it would be 885.252171.</p> <p>Anyone have any idea how I can get where I need to go?</p> <p>And just to be clear, I couldn't find any setting on Chemstation that would affect it's output to just not create files with nulls.</p> <p>Thanks</p> <p>Jeff</p>
<p>Given that your file is encoded as utf-16-le with a BOM, and all the actual unicode codepoints (except the BOM) are less than 128, you should be able to use an instance of <code>codecs.EncodedFile</code> to transcode the file from utf-16 to ascii. The following example works for me.</p> <p>Here's my test file:</p> <pre><code>$ cat utf_16_le_with_bom.csv ??2.0,19 1.5,17 2.5,23 1.0,10 3.0,5 </code></pre> <p>The first two bytes, <code>ff</code> and <code>fe</code> are the BOM U+FEFF:</p> <pre><code>$ hexdump utf_16_le_with_bom.csv 0000000 ff fe 32 00 2e 00 30 00 2c 00 31 00 39 00 0a 00 0000010 31 00 2e 00 35 00 2c 00 31 00 37 00 0a 00 32 00 0000020 2e 00 35 00 2c 00 32 00 33 00 0a 00 31 00 2e 00 0000030 30 00 2c 00 31 00 30 00 0a 00 33 00 2e 00 30 00 0000040 2c 00 35 00 0a 00 0000046 </code></pre> <p>Here's the python script <code>genfromtxt_utf16.py</code> (updated for Python 3):</p> <pre><code>import codecs import numpy as np fh = open('utf_16_le_with_bom.csv', 'rb') efh = codecs.EncodedFile(fh, data_encoding='ascii', file_encoding='utf-16') a = np.genfromtxt(efh, delimiter=',') fh.close() print("a:") print(a) </code></pre> <p>With python 3.4.1 and numpy 1.8.1, the script works:</p> <pre><code>$ python3.4 genfromtxt_utf16.py a: [[ 2. 19. ] [ 1.5 17. ] [ 2.5 23. ] [ 1. 10. ] [ 3. 5. ]] </code></pre> <p>Be sure that you don't specify the encoding as <code>file_encoding='utf-16-le'</code>. If the endian suffix is included, the BOM is not stripped, and it can't be transcoded to ascii.</p>
python|numpy
2
8,175
30,270,820
Can't figure out a method to do this. Python, csv, pandas
<p>Ok, so I'm working on a backtest for stock data and here is where I am stumped:</p> <p>I have 150 csv files, each one contains daily stock data for the length of the stock's life. Each stock has a different starting date.</p> <pre><code>Date        Close etc
2015-05-05  123.24
</code></pre> <p>I want to check certain conditions on certain days. So, for example, on 2015-05-05, what was the closing price of every stock in the 150 files? Then take some action, then check 2015-05-06, etc. How can I make a row-by-row conditional statement with all these csv files?:</p> <pre><code>if date in csv_file:
    return row in csv_file that has this date
</code></pre> <p>Really not sure how to approach this. I know how to do it by hand, but that would take forever and that is what computers are for. Thanks in advance.</p>
<p>@Alexender is right, this isn't a lot of data, and it's probably easiest to read all the files into a single <code>DataFrame</code>, but when you get to the point where your data doesn't fit into memory, you can use <a href="http://dask.pydata.org/en/latest/" rel="nofollow"><code>dask</code></a> to operate on the data as if it were read into a <code>DataFrame</code> in memory. One nice feature is that it lets you pass a glob to <code>read_csv()</code> so you can operate on multiple csv files at once.</p> <p>Using this approach, your code would look something like the following.</p> <pre><code>import dask.dataframe as dd ddf = dd.read_csv('/path/to/csvs/fname*glob.csv') ddf[ddf.Date == some_date].compute() </code></pre> <p>One small drawback of <code>dask</code> is that it is not as good at inferring types as <code>pandas</code> is so you may have to pass an explicit <code>dtype</code> to <code>read_csv()</code>.</p>
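<p>For example, a sketch of passing an explicit <code>dtype</code> (the column names here are taken from the question's sample data; adjust them to your actual files):</p>

<pre><code>import dask.dataframe as dd

ddf = dd.read_csv('/path/to/csvs/fname*glob.csv',
                  dtype={'Close': 'float64'},   # explicit type, since dask infers from a sample
                  parse_dates=['Date'])
</code></pre>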
python|csv|pandas
0
8,176
20,255,623
Update array based on other array Python
<p>I have two arrays of the same size:</p> <pre><code>import numpy as np
myArray = np.array([[5,3,2,1,2], [2,5,3,3,3]])
myotherArray = np.array([[0,1,1,0,0], [0,0,1,0,0]])
</code></pre> <p>I'd like to multiply all values in <code>myArray</code> by 5, but only where the value at the same index in <code>myotherArray</code> is 0. How do I do this? I tried this, but it doesn't do anything.</p> <pre><code>myArray[myotherArray == 0]*5
</code></pre> <p>My expected output for <code>myArray</code> is</p> <pre><code>([[25,3,2,5,10], [10,25,3,15,15]])
</code></pre>
<p>Multiply in place:</p> <pre><code>&gt;&gt;&gt; myArray[myotherArray == 0] *= 5 &gt;&gt;&gt; myArray array([[25, 3, 2, 5, 10], [10, 25, 3, 15, 15]]) </code></pre>
python|arrays|numpy
4
8,177
20,212,830
Generate a random 3 element Numpy array of integers summing to 3
<p>I need to fill a numpy array of three elements with random integers such that the sum total of the array is three (e.g. <code>[0,1,2]</code>).</p> <p>By my reckoning there are 10 possible arrays:</p> <p>111, 012, 021, 102, 120, 201, 210, 300, 030, 003</p> <p>My ideas is to randomly generate an integer between 1 and 10 using <code>randint</code>, and then use a look-up table to fill the array from the above list of combinations.</p> <p>Does anyone know of a better approach?</p>
<p>Here is how I did it:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; a=np.array([[1,1,1],[0,1,2],[0,2,1],[1,0,2],[1,2,0],[2,0,1],[2,1,0],[3,0,0],[0,3,0],[0,0,3]]) &gt;&gt;&gt; a[np.random.randint(0,10)] array([1, 2, 0]) &gt;&gt;&gt; a[np.random.randint(0,10)] array([0, 1, 2]) &gt;&gt;&gt; a[np.random.randint(0,10)] array([1, 0, 2]) &gt;&gt;&gt; a[np.random.randint(0,10)] array([3, 0, 0]) </code></pre>
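<p>If you later need longer arrays or larger totals, where listing every combination by hand stops being practical, <code>np.random.multinomial</code> is one possible sketch. Note it samples according to multinomial probabilities rather than uniformly over the 10 combinations, so use it only if that weighting is acceptable:</p>

<pre><code>&gt;&gt;&gt; np.random.multinomial(3, [1/3.] * 3)  # 3 trials over 3 equally likely bins, always sums to 3
array([0, 2, 1])
</code></pre>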
python|arrays|random|numpy|combinations
2
8,178
71,905,722
Keras won't broadcast-multiply the model output with a mask designed for the entire mini batch
<p>I have a data generator that produces batches of input data (<code>X</code>) and targets (<code>Y</code>), and also a mask (<code>batch_mask</code>) to be applied to the model output (the same mask applies to all the datapoint in the batch; there are different masks for different batches and the data generator takes care of doing this).</p> <p>As a result, the first dimension of <code>batch_mask</code> could have shape <code>1</code> or <code>batch_size</code> (by repeating the same mask along the first dimension <code>batch_size</code> times). I was expecting Keras to let me use either, and I wanted to simply create masks having a shape of <code>1</code> on the first dimension.</p> <p>However, when I tried this, I got the error:</p> <pre><code>ValueError: Data cardinality is ambiguous: x sizes: 128, 1 y sizes: 128 Make sure all arrays contain the same number of samples. </code></pre> <p><strong>Why won't Keras broadcast along the first dimension?</strong> It seems like this should not be complicated.</p> <p>Here's some minimal example code to observe this behavior</p> <pre><code>import tensorflow.keras as tfk import numpy as np ####################### # 1. model definition # ####################### # model parameters nfeatures_in = 6 target_size = 8 # model inputs input = tfk.layers.Input(nfeatures_in) input_mask = tfk.layers.Input(target_size) # model graph out = tfk.layers.Dense(target_size)(input) out_masked = tfk.layers.Multiply()((out,input_mask)) # multiply all model outputs in the batch by the same mask model = tfk.Model(inputs=(input, input_mask), outputs=out_masked) ########################## # 2. dummy data creation # ########################## batch_size = 32 # create masks the batch zeros_vector = np.zeros((1,target_size)) # &quot;batch_size&quot;==1 zeros_vector[0,:6] = 1 batch_mask = zeros_vector # dummy data creation X = np.random.randn(batch_size, 6) Y = np.random.randn(batch_size, target_size)*batch_mask # the target is masked by design in each batch ############################ # 3. compile model and fit # ############################ model.compile(optimizer=&quot;Adam&quot;, loss=&quot;mse&quot;) model.fit((X, batch_mask),Y, batch_size=batch_size) </code></pre> <p>I know I could make this work by either:</p> <ul> <li>repeating the mask to make the first dimension of <code>batch_mask</code> be the size of the first dimension of <code>X</code> (instead of 1).</li> <li>using pure tensorflow (but I feel like broadcasting along the batch dimension should not be a problem for Keras).</li> </ul> <p><strong>How can I make this work with Keras?</strong></p> <p>Thank you!</p>
<p>You can create an <code>IdentityLayer</code> which receives the <code>batch_mask</code> as an external input parameter and returns it as a tensor.</p>

<pre><code>import tensorflow as tf
import tensorflow.keras as tfk

class IdentityLayer(tfk.layers.Layer):
    def __init__(self, my_mask, **kwargs):
        # forward kwargs (e.g. name) to the base Layer
        super(IdentityLayer, self).__init__(**kwargs)
        self.my_mask = my_mask

    def call(self, _):
        my_mask = tf.convert_to_tensor(self.my_mask, dtype=tf.float32)
        return my_mask

    def get_config(self):
        config = super().get_config()
        config.update({
            &quot;my_mask&quot;: self.my_mask,
        })
        return config
</code></pre>

<p>The usage of <code>IdentityLayer</code> in a model is straightforward:</p>

<pre><code># model inputs
input = tfk.layers.Input(nfeatures_in)
input_mask = IdentityLayer(batch_mask)(input)

# model graph
out = tfk.layers.Dense(target_size)(input)
out_masked = tfk.layers.Multiply()((out,input_mask))
model = tfk.Model(inputs=input, outputs=out_masked)
</code></pre>

<p>Where <code>batch_mask</code> is a numpy array created as you reported:</p>

<pre><code>zeros_vector = np.zeros((1,target_size)) # &quot;batch_size&quot;==1
zeros_vector[0,:6] = 1
batch_mask = zeros_vector
</code></pre>
python|tensorflow|keras|array-broadcasting
1
8,179
72,073,417
UserWarning: Geometry is in a geographic CRS. Results from 'buffer' are likely incorrect
<p>I am creating geopandas DataFrames and a buffer to be able to do spatial joins. I set the <code>crs</code> for the DataFrames and then, when I proceed to create buffers, I encounter the warning below.</p>

<pre><code>df1 = gpd.GeoDataFrame(df1, geometry=gpd.points_from_xy(df1['Long'], df1['Lat']))
# set crs for buffer calculations
df1.geometry.set_crs('EPSG:4326', inplace=True)

df2 = gpd.GeoDataFrame(df2, geometry=gpd.points_from_xy(df2['Long'], df2['Lat']))
# set crs for buffer calculations
df2.geometry.set_crs('EPSG:4326', inplace=True)

# Returns a geoseries of geometries representing all points within a given distance
df1['geometry'] = df2.geometry.buffer(0.001)
</code></pre>

<p>User Warning:</p>

<pre><code>/var/folders/d0/gnksqzwn2fn46fjgrkp6045c0000gn/T/ipykernel_5601/4150826928.py:10: UserWarning: Geometry is in a geographic CRS. Results from 'buffer' are likely incorrect. Use 'GeoSeries.to_crs()' to re-project geometries to a projected CRS before this operation.

df1['geometry'] = df2.geometry.buffer(0.001)
</code></pre>
<p>You may need <strong>to project the coordinate system</strong>, from geodetic coordinates (e.g. EPSG:4326) to meters (e.g. EPSG:3857) and vice-versa. The calculation of the buffer is usually done in a projected metric system because it takes a <strong>distance</strong> as an argument. The shapely doc may be useful: <a href="http://shapely.readthedocs.io/en/latest/manual.html#object.buffer" rel="nofollow noreferrer">http://shapely.readthedocs.io/en/latest/manual.html#object.buffer</a></p> <p>Here is an example of what you could do: first you define the coordinate system, then you project to meters to calculate the buffer, and finally you project back to your original coordinate system.</p> <pre class="lang-py prettyprint-override"><code>df2.crs = &quot;epsg:4326&quot; #identical to df2 = df2.set_crs('epsg:4326')
df2 = df2.to_crs(crs=3857)
df1.geometry = df2.buffer(100)  # the buffer distance is now in meters, not degrees
df1 = df1.to_crs(crs=4326)
</code></pre>
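<p>If Web Mercator's distance distortion matters for your data, one option (assuming geopandas &gt;= 0.9) is to let geopandas pick a local UTM zone instead of EPSG:3857:</p>

<pre><code># project to a UTM zone fitted to the data's extent, so buffer distances in meters are accurate
df2 = df2.to_crs(df2.estimate_utm_crs())
</code></pre>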
python|gis|geopandas
3
8,180
71,876,632
How to decompose multiple periodicities present in the data without specifying the period?
<p>I am trying to decompose the periodicities present in a signal into its individual components, to calculate their time-periods.</p> <p>Say the following is my sample signal:</p> <p><a href="https://i.stack.imgur.com/peC13.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/peC13.png" alt="enter image description here" /></a></p> <p>You can reproduce the signal using the following code:</p> <pre><code>t_week = np.linspace(1,480, 480) t_weekend=np.linspace(1,192,192) T=96 #Time Period x_weekday = 10*np.sin(2*np.pi*t_week/T)+10 x_weekend = 2*np.sin(2*np.pi*t_weekend/T)+10 x_daily_weekly_sinu = np.concatenate((x_weekday, x_weekend)) #Creating the Signal x_daily_weekly_long_sinu = np.concatenate((x_daily_weekly_sinu,x_daily_weekly_sinu,x_daily_weekly_sinu,x_daily_weekly_sinu,x_daily_weekly_sinu,x_daily_weekly_sinu,x_daily_weekly_sinu,x_daily_weekly_sinu,x_daily_weekly_sinu,x_daily_weekly_sinu)) #Visualization plt.plot(x_daily_weekly_long_sinu) plt.show() </code></pre> <p>My objective is to split this signal into 3 separate isolated component signals consisting of:</p> <ol> <li>Days as period</li> <li>Weekdays as period</li> <li>Weekends as period</li> </ol> <p>Periods as shown below:</p> <p><a href="https://i.stack.imgur.com/szRxj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/szRxj.png" alt="enter image description here" /></a></p> <p>I tried using the STL decomposition method from statsmodel:</p> <pre><code>sm.tsa.seasonal_decompose() </code></pre> <p>But this is suitable only if you know the period beforehand. And is only applicable for decomposing a single period at a time. While, I need to decompose any signal having multiple periodicities and whose periods are not known beforehand.</p> <p>Can anyone please help how to achieve this?</p>
<p>Have you tried more of an algorithmic approach? We could first try to identify the changes in the signal, either amplitude or frequency. Identify all threshold points where there is a major change, with some epsilon, and then do FFT on that window.</p> <p>Here was my approach:</p> <ol> <li>I found that the DWT with Daubechies wavelet was really good at this. There are distinct peaks when transformed for when either one changes, which makes identifying the windows really nice.</li> <li>Did a Gaussian mixture, to essentially key in on 2 specific window sizes. In your example, this is fixed but with real data, it might not be.</li> <li>Looped back through pairs of peaks, applied FFT and found the prominent frequency.</li> <li>Now you have the width of windows which you can use to identify from Gaussian mixture with another epsilon to figure out the period between windows, and FFT to have the prominent frequency within that window. Although, if I were you I would use the mixture model to identify the key frequencies or amplitudes in the windows, if we can assume your frequencies/amplitudes in the real world are normally distributed.</li> </ol> <p>Note there are many ways you could mess with this. I would say starting with a wavelet transform, personally, is a good start.</p> <p>Here is the code; try adding some Gaussian noise or other variability to test it out. You will see the more noise there is, the higher your min epsilon for DWT will need to be, so you do need to tune some of it.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

data = x_daily_weekly_long_sinu
times = np.linspace(0, len(data), len(data))
SAMPLING_RATE = len(data) / len(times)  # needed for frequency calc (number of discrete times / time interval) in this case it's 1

cA, cD = pywt.dwt(data, 'db4', mode='periodization')  # Daubechies wavelet good for changes in freq

# find peaks, with db4 good indicator of changes in frequencies, greater than some arbitrary value (you'll have to find by possibly plotting plt.plot(cD))
EPSILON = 0.02
peaks = (np.where(np.abs(cD) &gt; EPSILON)[0] * 2)  # since cD (detailed coef) is len(x) / 2 only true for periodization mode
peaks = [0] + peaks.tolist() + [len(data) -1 ]  # always add start and end as beginning of windows

# iterate through peak pairs
if len(peaks) &lt; 2:
    print('No peaks found...')
    exit(0)

# iterate through the &quot;paired&quot; windows
MIN_WINDOW_WIDTH = 10  # min width for the start of a new window
peak_starts = []
for i in range(len(peaks) - 1):
    s_ind, e_ind = peaks[i], peaks[i + 1]
    if len(peak_starts) &gt; 0 and (s_ind - peak_starts[-1]) &lt; MIN_WINDOW_WIDTH:
        continue  # not wide enough
    peak_starts.append(s_ind)

# calculate the sequential differences between windows
# essentially giving us how wide they are
seq_dist = np.array([t - s for s, t in zip(peak_starts, peak_starts[1:])])

# since peak windows might not be exact in the real world let's make a gaussian mixture
# (you're assuming how many different windows there are here)
# with this assumption we're going to assume 2 different kinds of windows
WINDOW_NUM = 2
gmm = GaussianMixture(WINDOW_NUM).fit(seq_dist.reshape(-1, 1))
window_widths = [float(m) for m in gmm.means_]

# for example we would assume this prints (using your example of 2 different window types)
weekday_width, weekend_width = list(sorted(window_widths))
print('Weekday Width, Weekend Width', weekday_width, weekend_width)  # prints 191.9 and 479.59

# now we can process peak pairs with their respective windows
# we specify a padding which essentially will remove edge data which might overlap with another window (we really only care about the frequency)
freq_data = {}
PADDING = 3  # add padding to remove edge elements
WIDTH_EPSILON = 5  # make sure the window found is within the width found in gaussian mixture (to remove other small/large windows with noise)
T2_data = []
T3_data = []
for s, t in zip(peak_starts, peak_starts[1:]):
    width = t - s
    passed = False
    for testw in window_widths:
        if abs(testw - width) &lt; WIDTH_EPSILON:
            passed = True
            break

    # weird window ignore it
    if not passed:
        continue

    # for your example let's populate T2 data
    if (width - weekday_width) &lt; WIDTH_EPSILON:
        T2_data.append(s)  # append start
    elif (width - weekend_width) &lt; WIDTH_EPSILON:
        T3_data.append(s)

    # append main frequency in window
    window = data[s + PADDING: t - PADDING]

    # get dominant frequency
    fft = np.real(np.fft.fft(window))
    fftfreq = np.fft.fftfreq(len(window))
    freq = SAMPLING_RATE * fftfreq[np.argmax(np.abs(fft[1:])) + 1]  # ignore constant (shifting) freq 0
    freq_data[int(testw)] = np.abs(freq)

print('T2 = ', np.mean([t - s for s, t in zip(T2_data, T2_data[1:])]))
print('T3 = ', np.mean([t - s for s, t in zip(T3_data, T3_data[1:])]))

print('Frequency data', freq_data)

# convert to periods
period_data = {}
for w in freq_data.keys():
    period_data[w] = 1.0 / freq_data[w]

print('Period data', period_data)
</code></pre> <p>With your example that printed the following (note the results weren't exact).</p> <pre><code>Weekday Width, Weekend Width 191.99999999999997 479.5999999999999
T2 =  672.0
T3 =  671.5555555555555
Frequency data {479: 0.010548523206751054, 191: 0.010752688172043012}
Period data {479: 94.8, 191: 92.99999999999999}
</code></pre> <p>Note this is what the DWT coefs look like. <a href="https://i.stack.imgur.com/7QAK3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7QAK3.png" alt="DWT Coefs" /></a></p>
python|numpy|signal-processing|fft|autocorrelation
3
8,181
16,952,632
Read a .csv into pandas from F: drive on Windows 7
<p>I have a .csv file on my F: drive on Windows 7 64-bit that I'd like to read into pandas and manipulate.</p> <p>None of the examples I see read from anything other than a simple file name (e.g. 'foo.csv').</p> <p>When I try this I get error messages that aren't making the problem clear to me: </p> <pre><code>import pandas as pd trainFile = "F:/Projects/Python/coursera/intro-to-data-science/kaggle/data/train.csv" trainData = pd.read_csv(trainFile) </code></pre> <p>The error message says: </p> <pre><code>IOError: Initializing from file failed </code></pre> <p>I'm missing something simple here. Can anyone see it?</p> <p>Update: </p> <p>I did get more information like this: </p> <pre><code>import csv if __name__ == '__main__': trainPath = 'F:/Projects/Python/coursera/intro-to-data-science/kaggle/data/train.csv' trainData = [] with open(trainPath, 'r') as trainCsv: trainReader = csv.reader(trainCsv, delimiter=',', quotechar='"') for row in trainReader: trainData.append(row) print trainData </code></pre> <p>I got a permission error on read. When I checked the properties of the file, I saw that it was read-only. I was able to read 892 lines successfully after unchecking it.</p> <p>Now pandas is working as well. No need to move the file or amend the path. Thanks for looking.</p>
<p>I cannot promise that this will work, but it's worth a shot:</p> <pre><code>import pandas as pd import os trainFile = "F:/Projects/Python/coursera/intro-to-data-science/kaggle/data/train.csv" pwd = os.getcwd() os.chdir(os.path.dirname(trainFile)) trainData = pd.read_csv(os.path.basename(trainFile)) os.chdir(pwd) </code></pre>
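<p>If changing directories is undesirable, another workaround that sometimes sidesteps path-related read failures on Windows is handing <code>read_csv</code> an already-opened file object instead of a path string (a sketch, not guaranteed to fix every cause of this error):</p>

<pre><code># read_csv accepts a file handle as well as a path
with open(trainFile, 'r') as f:
    trainData = pd.read_csv(f)
</code></pre>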
python|csv|pandas
13
8,182
21,929,971
Re-order numpy array based on where its associated ids are positioned in the `master_order` array
<p>I am looking for a function that makes a new array of values based on ordered_ids, when the array has a length of one million.</p> <p><strong>Input:</strong></p> <pre><code> &gt;&gt;&gt; ids=array(["WYOMING01","TEXAS01","TEXAS02",...]) &gt;&gt;&gt; values=array([12,20,30,...]) &gt;&gt;&gt; ordered_ids=array(["TEXAS01","TEXAS02","ALABAMA01",...]) </code></pre> <p><strong>Output:</strong></p> <pre><code> ordered [ 20 , 30 , nan , ...] </code></pre> <p><strong>Closing Summary</strong></p> <p>@Dietrich's use of a dictionary in list comprehension is 10x faster than using numpy index search (numpy.where). I compared the times of three results in my answer below. </p>
<p>You could try:</p> <pre><code>import numpy as np

def order_array(ids, values, master_order_ids):
    # note: np.searchsorted assumes master_order_ids is sorted
    n = len(master_order_ids)
    idx = np.searchsorted(master_order_ids, ids)
    found = idx &lt; n
    ordered_values = np.zeros(n)
    ordered_values[idx[found]] = values[found]
    print "ordered", ordered_values
    return ordered_values
</code></pre> <p>Searchsorted gives you the indices where you should insert ids into master_order_ids to keep the array ordered. Then you just drop those (idx, values) pairs that fall outside the range of master_order_ids.</p>
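<p>Since the question's closing summary compares this against a dictionary-based approach, here is a sketch of that variant for reference, using <code>np.nan</code> for ids missing from the lookup, as in the expected output:</p>

<pre><code># build an id -&gt; value lookup once, then reorder in a single pass
lookup = dict(zip(ids, values))
ordered = np.array([lookup.get(i, np.nan) for i in ordered_ids])
</code></pre>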
python|numpy
1
8,183
55,555,419
Decoding prediction outputs generated by a pretrained model to human readable labels
<p>I'm trying to use a pre-trained object detection model from the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md" rel="nofollow noreferrer">Tensorflow model zoo</a>. Basically, I've chosen <code>faster_rcnn_inception_resnet_v2_atrous_oidv4</code> trained on the Open Images dataset.</p> <p>Here is my code:</p> <pre><code>import tensorflow as tf # restore the deep model sess=tf.Session() #First let's load meta graph and restore weights saver = tf.train.import_meta_graph('pretrained/faster_rcnn_inception_resnet_v2_atrous_oid_v4_2018_12_12/model.ckpt.meta') saver.restore(sess, tf.train.latest_checkpoint('pretrained/faster_rcnn_inception_resnet_v2_atrous_oid_v4_2018_12_12/')) # Now, let's access and create placeholders variables and # create feed-dict to feed new data graph = tf.get_default_graph() X = graph.get_tensor_by_name('image_tensor:0') feed_dict ={X: image_raw_feature} #Now, access the op that we want to run. num_detections = graph.get_tensor_by_name('num_detections:0') detection_scores = graph.get_tensor_by_name('detection_scores:0') detection_boxes = graph.get_tensor_by_name('detection_boxes:0') x1, x2, x3 = sess.run( [num_detections, detection_scores, detection_boxes], feed_dict ) </code></pre> <p>The outputs <code>x1, x2, x3</code> have the shapes <code>4</code>, <code>[4, 100]</code> and <code>[4, 100, 4]</code>. The problem is that I don't know how to decode the result to human readable labels. I guess the total number of object categories is 100 as indicated in <code>x2</code>? But it seems to be very small compared with what is described in the dataset <a href="https://storage.googleapis.com/openimages/web/index.html" rel="nofollow noreferrer">Open Images</a>.</p> <p>How can I decode the outputs to the labels?</p>
<p>As described in the <a href="https://github.com/tensorflow/models/blob/82b928fe199588c62be20d4c39e69c3b5b7f649d/research/object_detection/meta_architectures/faster_rcnn_meta_arch.py#L1138" rel="nofollow noreferrer">faster_rcnn_meta_arch.py</a>, the output tensors should have the following shapes:</p>

<pre><code>detection_boxes: [batch, max_detections, 4]
detection_scores: [batch, max_detections]
detection_classes: [batch, max_detections]
num_detections: [batch]
</code></pre>

<p>Here <code>batch = 4</code>, <code>max_detections = 100</code> and <strong>it contains all detections with different confidence scores, so you might need to decide on a score threshold to filter out detections with low confidence scores.</strong> Also the <code>detection_boxes</code> contain the box encodings in the order <code>ymin, xmin, ymax, xmax</code>, in normalized coordinates; you need the shape of the image to get absolute coordinates.</p>

<p>For example, say you want all detections with <code>score &gt; 0.5</code>:</p>

<pre><code>final_boxes = []
for i in range(detection_boxes.shape[0]):  # one entry per image in the batch
    final_boxes.append(detection_boxes[i, detection_scores[i]&gt;0.5, ])
</code></pre>

<p>This will give you detections with confidence score higher than 0.5.</p>
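<p>To turn class ids into human-readable names, you still need the Open Images label map and the <code>detection_classes:0</code> tensor fetched alongside the other outputs. A sketch, assuming the <code>object_detection</code> package is installed and the path below points at your copy of the Open Images v4 label map file:</p>

<pre><code>from object_detection.utils import label_map_util

# builds a dict mapping class id -&gt; {'id': ..., 'name': ...}
category_index = label_map_util.create_category_index_from_labelmap(
    'object_detection/data/oid_v4_label_map.pbtxt')  # adjust to your checkout
</code></pre>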
tensorflow|machine-learning|deep-learning|object-detection|pre-trained-model
1
8,184
55,442,366
How to fix "['Student Name'] not in index"?
<p>I'm new to programming and I'm trying to replace the old dataframe df with a new dataframe, but when I run the code it says KeyError: "['Student Name'] not in index". How can I fix it?</p>

<p>This is my code:</p>

<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

df=pd.read_excel(r'C:\Users\Thep18\Desktop\Thep New.xlsx')

print('\n')
df=df[['Height (cm)','Weight (kg)','Allowance per day','Student Name']]
print(df)
</code></pre>

<p>And this is my result (the same traceback is printed twice when I rerun the file; shown once here):</p>

<pre><code>Traceback (most recent call last):

  File "", line 1, in &lt;module&gt;
    runfile('C:/Users/Thep18/.spyder-py3/temp.py', wdir='C:/Users/Thep18/.spyder-py3')

  File "C:\Users\Thep18\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 786, in runfile
    execfile(filename, namespace)

  File "C:\Users\Thep18\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "C:/Users/Thep18/.spyder-py3/temp.py", line 20, in &lt;module&gt;
    df=df[['Height (cm)','Weight (kg)','Allowance per day','Student Name']]

  File "C:\Users\Thep18\Anaconda3\lib\site-packages\pandas\core\frame.py", line 2934, in __getitem__
    raise_missing=True)

  File "C:\Users\Thep18\Anaconda3\lib\site-packages\pandas\core\indexing.py", line 1354, in _convert_to_indexer
    return self._get_listlike_indexer(obj, axis, **kwargs)[1]

  File "C:\Users\Thep18\Anaconda3\lib\site-packages\pandas\core\indexing.py", line 1161, in _get_listlike_indexer
    raise_missing=raise_missing)

  File "C:\Users\Thep18\Anaconda3\lib\site-packages\pandas\core\indexing.py", line 1252, in _validate_read_indexer
    raise KeyError("{} not in index".format(not_found))

KeyError: "['Student Name'] not in index"
</code></pre>
<p>It means that 'Student Name' is not one of the columns of your DataFrame: the selection <code>df[[...]]</code> fails because that name does not exist in <code>df.columns</code>. Check how the header is actually spelled in your Excel file (including capitalisation and stray spaces) and make sure the name you pass matches it exactly.</p>
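<p>A quick way to diagnose it is to print the columns pandas actually read; Excel headers often carry stray whitespace, which a sketch like this strips (assuming the header row really is meant to be 'Student Name'):</p>

<pre><code>print(df.columns.tolist())           # inspect the exact header names
df.columns = df.columns.str.strip()  # remove leading/trailing spaces from every header
</code></pre>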
python|pandas|anaconda|spyder
0
8,185
55,401,864
Why does my new column not get assigned after using the .sample method?
<p>So I was just answering a question and I came across something interesting:</p> <p>The dataframe looks like this:</p> <pre><code>  string1 string2
0     abc     def
1     ghi     jkl
2     mno     pqr
3     stu     vwx
</code></pre> <p>So when I do the following, the assigning of new columns works:</p> <pre><code>df['string3'] = df.string2
print(df)

  string1 string2 string3
0     abc     def     def
1     ghi     jkl     jkl
2     mno     pqr     pqr
3     stu     vwx     vwx
</code></pre> <p>But when I use <code>pandas.Series.sample</code>, the new column does not get assigned, at least not the <code>sampled</code> one:</p> <pre><code>df['string4'] = df.string2.sample(len(df.string2))
print(df)

  string1 string2 string3 string4
0     abc     def     def     def
1     ghi     jkl     jkl     jkl
2     mno     pqr     pqr     pqr
3     stu     vwx     vwx     vwx
</code></pre> <p>So I tested some things:</p> <p><strong>Test1</strong> Using sample without assigning gives the correct output:</p> <pre><code>df.string2.sample(len(df.string2))

2    pqr
1    jkl
0    def
3    vwx
Name: string2, dtype: object
</code></pre> <p><strong>Test2</strong> Cannot overwrite either:</p> <pre><code>df['string2'] = df.string2.sample(len(df.string2))
print(df)

  string1 string2
0     abc     def
1     ghi     jkl
2     mno     pqr
3     stu     vwx
</code></pre> <p><strong>This works</strong> but why?</p> <pre><code>df['string2'] = df.string2.sample(len(df.string2)).values
print(df)

  string1 string2
0     abc     jkl
1     ghi     def
2     mno     vwx
3     stu     pqr
</code></pre> <p>Why do I need to explicitly use <code>.values</code> or <code>.tolist()</code> to get the assigning correct?</p>
<p><code>pandas</code> is <code>index</code>-sensitive, which means it aligns on the <code>index</code> when you assign. When you assign a sampled <code>Series</code>, the df does not appear to change, because the sampled Series keeps its original index: after alignment (effectively a <code>sort_index</code>) it shows the same order of <code>values</code>. But if you assign a <code>numpy</code> <code>array</code>, the <code>index</code> is not considered, so the values themselves are assigned back to the original <code>df</code> positionally, which yields the output you expected.</p>

<p>An edge-case example:</p>

<pre><code>df['string3']=pd.Series(['aaa','aaa','aaa','aaa'],index=[100,111,112,113])
df
Out[462]: 
  string1 string2 string3
0     abc     vwx     NaN
1     ghi     jkl     NaN
2     mno     dfe     NaN
3     stu     pqr     NaN
</code></pre>

<hr>

<p>Because of that index sensitivity, when you do conditional assignment with <code>.loc</code></p>

<p>You can always do</p>

<pre><code>df.loc[df.condition,'value']=df.value*100  # the rows not selected will not be changed
</code></pre>

<p>Just the same as what you would do with <code>np.where</code>:</p>

<pre><code>df['value']=np.where(df.condition,df.value*100 ,df.value)
</code></pre>

<hr>

<p>Another use case: when I do a <code>groupby</code> <code>apply</code> with a non-aggregating function and try to assign it back, why does it fail?</p>

<blockquote>
<p><code>df['String4']=df.groupby('string1').apply(lambda x :x['string2']+'aa')</code></p>
<p>TypeError: incompatible index of inserted column with frame index</p>
</blockquote>

<p>Let us look at the return of <code>groupby.apply</code>:</p>

<pre><code>df.groupby('string1').apply(lambda x : x['string2']+'aa')
Out[466]: 
string1   
abc      0    vwxaa
ghi      1    jklaa
mno      2    dfeaa
stu      3    pqraa
Name: string2, dtype
</code></pre>

<p>Notice that it adds one more level to the index, so the return has a MultiIndex, while the original df only has a single-level index, which causes the error message.</p>

<hr>

<p>How to fix it?</p>

<hr>

<p><code>reset</code> the <code>index</code>, keeping the original index (the second level of the <code>groupby</code> product), then assign it back:</p>

<pre><code>df['String4']=df.groupby('string1').apply(lambda x : x['string2']+'aa').reset_index(level=0,drop=True)
df
Out[469]: 
  string1 string2 string3 String4
0     abc     vwx     NaN   vwxaa
1     ghi     jkl     NaN   jklaa
2     mno     dfe     NaN   dfeaa
3     stu     pqr     NaN   pqraa
</code></pre>

<hr>

<p>As Erfan mentioned in the comment: how can we prevent accidentally assigning unwanted values to a <code>pandas.DataFrame</code>?</p>

<p>There are two different kinds of assignment.</p>

<p>1st, with an array, list or tuple: it CANNOT ALIGN, which means that when the df and the assigned object have different lengths, it will fail.</p>

<p>2nd, with a <code>pandas</code> object: it ALWAYS aligns; no error is raised, even when the lengths differ.</p>

<p>However, when the assigned object has a duplicated index, it will raise an error:</p>

<blockquote>
<pre><code>df['string3']=pd.Series(['aaa','aaa','aaa','aaa'],index=[100,100,100,100])

ValueError: cannot reindex from a duplicate axis
</code></pre>
</blockquote>
python|pandas|dataframe|sample
4
8,186
55,482,647
How to count the number of dropoffs per month for dataframe column
<p>I have a dataframe that has records from 2011 to 2018. One of the columns has the drop_off_date, which is the date when the customer left the rewards program. I want to count, for each month between 2011 and 2018, how many people dropped off during that month. So for the 84-month period, I want the count of people who dropped off then, using the drop_off_date column.</p> <p>I changed the column to datetime and I know I can use the .agg and .count methods, but I am not sure how to count per month. I honestly do not know what the next step would be.</p> <p><strong>Example of the data:</strong></p> <pre><code>Record ID | store ID | drop_off_date
a1274c212| 12876| 2011-01-27
a1534c543| 12877| 2011-02-23
a1232c952| 12877| 2018-12-02
</code></pre> <p><strong>The result should look like this:</strong></p> <pre><code>Month:   | #of dropoffs:
Jan 2011 | 15
........
Dec 2018 | 6
</code></pre>
<p>What I suggest is to work directly with the strings in the drop_off_date column and strip them to keep only the year and month, creating a new drop_off_ym column:</p> <pre><code>df['drop_off_ym'] = df.drop_off_date.apply(lambda x: x[:-3])
</code></pre> <p>Then you apply a groupby on the newly created column and then a count():</p> <pre><code>df_counts_by_month = df.groupby('drop_off_ym')['StoreId'].count()
</code></pre>
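<p>Since the question says the column is already a datetime, an alternative sketch that avoids string slicing is to group on a monthly period (column name as in the question):</p>

<pre><code>df['drop_off_date'] = pd.to_datetime(df['drop_off_date'])
# one row per calendar month, value = number of drop-offs in that month
counts = df.groupby(df['drop_off_date'].dt.to_period('M')).size()
</code></pre>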
python|pandas
1
8,187
9,689,173
Shape recognition with numpy/scipy (perhaps watershed)
<p>My goal is to trace drawings that have a lot of separate shapes in them and to split these shapes into individual images. It is black on white. I'm quite new to numpy,opencv&amp;co - but here is my current thought: </p> <ul> <li>scan for black pixels</li> <li>black pixel found -> watershed</li> <li>find watershed boundary (as polygon path)</li> <li>continue searching, but ignore points within the already found boundaries</li> </ul> <p>I'm not very good at these kind of things, is there a better way? </p> <p>First I tried to find the rectangular bounding box of the watershed results (this is more or less a collage of examples):</p> <pre><code>from numpy import * import numpy as np from scipy import ndimage np.set_printoptions(threshold=np.nan) a = np.zeros((512, 512)).astype(np.uint8) #unsigned integer type needed by watershed y, x = np.ogrid[0:512, 0:512] m1 = ((y-200)**2 + (x-100)**2 &lt; 30**2) m2 = ((y-350)**2 + (x-400)**2 &lt; 20**2) m3 = ((y-260)**2 + (x-200)**2 &lt; 20**2) a[m1+m2+m3]=1 markers = np.zeros_like(a).astype(int16) markers[0, 0] = 1 markers[200, 100] = 2 markers[350, 400] = 3 markers[260, 200] = 4 res = ndimage.watershed_ift(a.astype(uint8), markers) unique(res) B = argwhere(res.astype(uint8)) (ystart, xstart), (ystop, xstop) = B.min(0), B.max(0) + 1 tr = a[ystart:ystop, xstart:xstop] print tr </code></pre> <hr> <p>Somehow, when I use the original array (a) then argwhere seems to work, but after the watershed (res) it just outputs the complete array again. </p> <p>The next step could be to find the polygon path around the shape, but the bounding box would be great for now!</p> <p>Please help!</p>
<p>@Hooked has already answered most of your question, but I was in the middle of writing this up when he answered, so I'll post it in the hopes that it's still useful...</p> <p>You're trying to jump through a few too many hoops. You don't need <code>watershed_ift</code>.</p> <p>You use <code>scipy.ndimage.label</code> to differentiate separate objects in a boolean array and <code>scipy.ndimage.find_objects</code> to find the bounding box of each object.</p> <p>Let's break things down a bit.</p> <pre><code>import numpy as np from scipy import ndimage import matplotlib.pyplot as plt def draw_circle(grid, x0, y0, radius): ny, nx = grid.shape y, x = np.ogrid[:ny, :nx] dist = np.hypot(x - x0, y - y0) grid[dist &lt; radius] = True return grid # Generate 3 circles... a = np.zeros((512, 512), dtype=np.bool) draw_circle(a, 100, 200, 30) draw_circle(a, 400, 350, 20) draw_circle(a, 200, 260, 20) # Label the objects in the array. labels, numobjects = ndimage.label(a) # Now find their bounding boxes (This will be a tuple of slice objects) # You can use each one to directly index your data. # E.g. a[slices[0]] gives you the original data within the bounding box of the # first object. slices = ndimage.find_objects(labels) #-- Plotting... ------------------------------------- fig, ax = plt.subplots() ax.imshow(a) ax.set_title('Original Data') fig, ax = plt.subplots() ax.imshow(labels) ax.set_title('Labeled objects') fig, axes = plt.subplots(ncols=numobjects) for ax, sli in zip(axes.flat, slices): ax.imshow(labels[sli], vmin=0, vmax=numobjects) tpl = 'BBox:\nymin:{0.start}, ymax:{0.stop}\nxmin:{1.start}, xmax:{1.stop}' ax.set_title(tpl.format(*sli)) fig.suptitle('Individual Objects') plt.show() </code></pre> <p><img src="https://i.stack.imgur.com/08D78.png" alt="enter image description here"> <img src="https://i.stack.imgur.com/D4VCM.png" alt="enter image description here"> <img src="https://i.stack.imgur.com/xzGTQ.png" alt="enter image description here"></p> <p>Hopefully that makes it a bit clearer how to find the bounding boxes of the objects.</p>
python|numpy|scipy|watershed
16
8,188
56,655,240
Recreate Relu function in Python
<p>I trained a neural network with TensorFlow using the relu function, then I built from scratch the neural network in python using weights from TensorFlow, but when I apply the relu function to np.dot(input,weight), the output is not the same I get from TensorFlow. For instance using:</p> <pre><code>def relu(x): return max(0,x) </code></pre> <p>I get a result for example 0.00213, if I use</p> <pre><code>def relu(x): return max(0.000,x) </code></pre> <p>I get something different. My question is how can I implement the relu function equal to TensorFlow?</p>
<p>This should be helpful for you.</p> <pre><code># Numpy relu X = np.random.rand(5, 10).astype(np.float32) W_np = np.random.rand(10, 7).astype(np.float32) np_relu = np.maximum(np.matmul(X, W_np), 0) # Tensorflow relu W_tf = tf.get_variable(initializer=tf.constant_initializer(W_np), shape=[10, 7], name="W_tf") tf_relu = tf.nn.relu(tf.matmul(X, W_np)) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) np.testing.assert_array_almost_equal(sess.run(tf_relu), np_relu) </code></pre> <p>As you can see, the assertion succeeds which means that the arrays are equal.</p>
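<p>One likely source of the mismatch, by the way, is using Python's built-in <code>max</code>, which is only well-defined for scalars; for the vectors coming out of <code>np.dot</code>, an elementwise ReLU would be a sketch like:</p>

<pre><code>def relu(x):
    # elementwise max(0, x); works on scalars and arrays alike
    return np.maximum(0, x)
</code></pre>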
python|tensorflow
0
8,189
56,628,079
proc mean with weight python equivalent
<p>I'm converting SAS to Python and came across this code where I'm not matching the exact value. SAS takes the weighted mean of the associates column, with pwgtp as the weight column, but the value I get in Python doesn't match.</p> <pre><code>proc means data=hhhead1 nway noprint;
 weight pwgtp; 
 var associates;
 output out=propassociates (drop=_:) mean=;
run;
</code></pre> <p>The SAS answer is 0.2871426408.</p> <p>I have tried various methods to get the weighted mean. The data consists of 1.2 million rows (can't share the data, sorry).</p> <pre><code>propassociates = hhhead1.groupby(by = ['PWGTP_y'])['associates'].mean().reset_index()
np.mean(propassociates['associates'])
</code></pre> <p>This gives 0.26806426594942845.</p> <pre><code>hhhead1['weight_sum'] = hhhead1['associates'] * hhhead1['PWGTP_x']
propassociates = hhhead1['weight_sum'].sum() / hhhead1['PWGTP_x'].sum()
</code></pre> <p>This gives 0.08837267780237641, while the SAS answer is 0.2871426408.</p>
<pre><code>def wavg(group, avg_name, weight_name): d = group[avg_name] w = group[weight_name] try: return (d * w).sum() / w.sum() except ZeroDivisionError: return d.mean() a=data1.groupby(['GroupByVar']).apply(wavg, "yourVar", "WeightVar") </code></pre> <p>This should work</p>
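<p>For the ungrouped case in the question (one weighted mean over the whole table, like <code>proc means</code> with a <code>weight</code> statement and no <code>class</code>), <code>np.average</code> is a one-line sketch (column names assumed from the question's code):</p>

<pre><code>import numpy as np

# weighted mean: sum(associates * weight) / sum(weight)
np.average(hhhead1['associates'], weights=hhhead1['PWGTP_x'])
</code></pre>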
python|python-3.x|numpy|sas
0
8,190
56,756,118
How to duplicate a pandas dataframe to match other dataframe's length?
<p>Assume the following dataframes:</p> <p>df1:</p> <pre><code>a
10.
20.
30.
40.
50.
60.
70.
80.
90.
100.
110.
120.
</code></pre> <p>df2:</p> <pre><code>b
1.
2.
</code></pre> <p>df3:</p> <pre><code>b
1.
2.
3.
</code></pre> <p>Knowing <code>len(df1.values) % len(df2.values) == 0</code>, I want to divide each element of <code>df1</code> by each element of <code>df2</code>, after having repeated <code>df2</code> as many times as needed to fit <code>df1</code>'s length, meaning in this case</p> <p>result(df1, df2):</p> <pre><code>a
10.
10.
30.
20.
50.
30.
70.
40.
90.
50.
110.
60.
</code></pre> <p>result(df1, df3):</p> <pre><code>a
10.
10.
10.
40.
25.
20.
70.
40.
30.
100.
55.
40.
</code></pre> <p>What is the cleanest way to achieve this, preferably without going through numpy?</p>
<p>Here's one way using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.resize.html" rel="nofollow noreferrer"><code>np.resize</code></a>, where the new array will be filled with copies of the original until it fits the specified length:</p> <pre><code>import numpy as np

df1['a'] /= np.resize(df2.b.values, df1.shape[0])

        a
0    10.0
1    10.0
2    30.0
3    20.0
4    50.0
5    30.0
6    70.0
7    40.0
8    90.0
9    50.0
10  110.0
11   60.0
</code></pre> <hr> <p>Or using <a href="https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.tile.html" rel="nofollow noreferrer"><code>np.tile</code></a>:</p> <pre><code>df1['a'] /= np.tile(df2.b, df1.shape[0]//df2.shape[0])
</code></pre>
python|pandas|dataframe
3
8,191
25,828,478
Conditionally Replace Elements of an Array Depending on the Contents of Another Array
<p>I am trying to implement the iRprop- learning algorithm for neural networks. I am using numpy for performance reasons. One important optimization requires conditionally zeroing out elements of a float array based on the contents of a boolean array. The equivalent python code would be:</p> <pre><code>for index, condition in enumerate(boolean_array):
    if condition:
        float_array[index] = 0
</code></pre> <p>Is there any way to efficiently do this with numpy?</p>
<p>You could use <code>float_array[boolean_array] = 0</code>:</p> <pre><code>In [2]: boolean_array = np.array([True, False, False, True]) In [3]: float_array = np.ones(4) * 1.0 In [4]: float_array Out[4]: array([ 1., 1., 1., 1.]) In [5]: float_array[boolean_array] = 0 In [6]: float_array Out[6]: array([ 0., 1., 1., 0.]) </code></pre>
python|arrays|python-3.x|numpy
3
8,192
25,922,678
Not able to retrieve data in a particular column with Python
<p>I am trying to load a huge text file of size 2GB and extract data from a particular column using pandas.</p> <pre><code>LOCATION_ID PRODUCT_ID PRODUCT_DESC NET_SALES SALES_DATE
------------------------------ ----------- ------------------------------ --------------------------------- ----------
100020 8 Lotto Texas 8.000 01/01/2009
100020 9 Pick 3 105.500 01/01/2009
100020 10 Cash Five 7.000 01/01/2009
100020 12 Texas Two Step
</code></pre> <p>The data looks like this; what I am trying to do is extract the number of unique values in the LOCATION_ID column.</p> <p>I tried using pandas.read_csv(file,chunksize=4) but I am not getting anything in the columns, only indexes are present. I am kind of stuck. I was able to do it using a simple file read, but since the size of the file is so huge the Python process crashes. How can I achieve the desired result using Pandas? Please help.</p>
<p>That file doesn't look like a csv, given that there don't seem to be any commas, and it doesn't even seem to be a delimited file. You might have better luck treating it as a fixed-width-format file and using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.parsers.read_fwf.html" rel="nofollow"><code>read_fwf</code></a>:</p> <pre><code>&gt;&gt;&gt; pd.read_fwf("296.dat", skiprows=[1]) LOCATION_ID PRODUCT_ID PRODUCT_DESC NET_SALES SALES_DATE 0 100020 8 Lotto Texas 8.0 01/01/2009 1 100020 9 Pick 3 105.5 01/01/2009 2 100020 10 Cash Five 7.0 01/01/2009 3 100020 12 Texas Two Step NaN NaN </code></pre> <p>You can do the same <code>chunksize</code> tricks with <code>fwf</code> as you can with <code>read_csv</code>, so you can limit the amount in memory at any one time.</p> <p>Also note that here I simply used the "infer column width" default; you may have to specify them manually, depending on your data.</p>
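<p>If the inferred widths come out wrong, a sketch of specifying them manually via <code>colspecs</code> (the character ranges below are made-up placeholders; measure them from your own file):</p>

<pre><code># one (start, end) pair of character positions per column, half-open intervals
colspecs = [(0, 31), (31, 43), (43, 74), (74, 108), (108, 119)]
df = pd.read_fwf("296.dat", colspecs=colspecs, skiprows=[1])
</code></pre>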
python|pandas
1
8,193
26,183,175
Issue with scipy quad integration in python
<p>I'm quite new to python so I'm hoping my wording makes sense. I'm currently attempting to model a set of equations that require the product of an integration to be multiplied by a float. I'm getting a Nan output from the integration output alone and I don't know why that is my code is below:</p> <pre><code>from __future__ import division import matplotlib.pylab as plt import numpy as np import scipy.special as sp import scipy as sci from scipy import integrate import math from sympy.functions import coth Tc = 9.25 Tb = 7.2 t = Tb / Tc Temperature = [] Temp1=[] Temp0=[] D=[] d = [] D1=[] d1 = [] n = 2*10**-6 L = 100*10**-9 W = 80*10**-9 a = 3*10**-2 s1 = W/ (2*n) y1 = (L+(W/2)) / (2*n) x0 = 0.015 r0 = 2*x0 s2 = r0 / n y0 = (x0 / n)/1000000 print x0, y0, y1 A = ((W/n)**2) *(sp.kv(0, s1)+(math.pi / 2)*sp.kv(1,s1)*coth(y1)) B = ((W/n)**2) *(sp.iv(0, s1)+(math.pi / 2)*sp.iv(1,s1)*coth(y1)) print A, B def t1(t): return (t**-1)*sp.kv(0, s2) def t2(t): return (t**-1)*sp.iv(0, s2) print t2 Fk2 =(math.pi**-2) * integrate.quad(t1, s1, s2, full_output=False)[0] FI2 =(math.pi**-2) * integrate.quad(t2, s1, s2, full_output=False)[0] print Fk2 , #FI2 r1 = 0.0 while r1 &lt; y1: #C0 = sp.kv(0,s2)*(1 + (A*FI2)-(B*Fk2))/A #print C0 #D_ = 1 - B*Fk2 - A*Fk2*sp.iv(1, s1) / sp.kv(1, 1) #print D_ r1 += 0.0001 j = -1*r1 D.append(r1) d.append(j) #T = Tb + (Tc - Tb) * (sp.kv(0,s1) + (math.pi /2)* sp.kv(1, s1)*coth(r1))*(1- D_ * math.cosh(y1)) * (C0*A) #Temp0.append(T) #print Temp0, r1 </code></pre> <p>The main culprit seems to be the the modified bessel function in equation FI2 sp.iv(t2, s1) that returns an Nan value but the other equation results Fk2 gives 0. for some time I was getting the following error:</p> <pre><code>IntegrationWarning: The occurrence of roundoff error is detected, which prevents the requested tolerance from being achieved. The error may be underestimated. warnings.warn(msg, IntegrationWarning) </code></pre> <p>but that has stopped and now I only get 0.0 and Nan. Any help is really appreciated I'm quite lost here.</p>
<p>Your value for s2 is rather large (15000.0). So, when you evaluate the <a href="http://mathworld.wolfram.com/ModifiedBesselFunctionoftheSecondKind.html" rel="nofollow">Bessel Function</a> at s2 you get zero:</p> <pre><code>&gt;&gt;&gt; sp.kv(0, 15000.0)
0.0
</code></pre> <p>So your function t1 always returns zero, making the integral zero.</p>
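<p>A hedged aside (not part of the original answer, and whether it fits the physics here is for you to judge): SciPy also provides exponentially scaled variants, <code>scipy.special.kve</code> and <code>scipy.special.ive</code>, which stay representable at arguments where <code>kv</code> underflows to 0 and <code>iv</code> overflows to inf:</p> <pre><code>import scipy.special as sp

# kve(v, x) = kv(v, x) * exp(x),  ive(v, x) = iv(v, x) * exp(-|x|)
print(sp.kv(0, 15000.0))    # 0.0  -- underflows
print(sp.kve(0, 15000.0))   # small but nonzero
print(sp.iv(0, 15000.0))    # inf  -- overflows
print(sp.ive(0, 15000.0))   # finite
</code></pre> <p>If you rework the integrand in terms of the scaled functions (keeping track of the exponential factors analytically), the quadrature has a chance of returning a finite number instead of 0.0 or NaN.</p>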
python|numpy|scipy|integration|quad
1
8,194
26,312,174
How to check if the signs of a Series conform to a given string of signs?
<p>For example, I have a Series as below:</p> <pre><code>ts = pd.Series([-1, -2.4, 5, 6, 7, -4, -8])
</code></pre> <p>I would like to know if there is a pythonic way to check the signs of <code>ts</code> against a string of signs, such as</p> <pre><code>sign = '++++---'  # returns False
</code></pre> <p>while</p> <pre><code>sign = '--+++--'  # returns True
</code></pre>
<p>To check whether the elements of the Series are positive you could create a Boolean Series like this:</p> <pre><code>&gt;&gt;&gt; ts &gt;= 0
0    False
1    False
2     True
3     True
4     True
5    False
6    False
dtype: bool
</code></pre> <p>(I assume <code>0</code> is positive, but this technique could be adapted if you choose otherwise.)</p> <p>To get <code>sign</code> into a similar Boolean series, you need to interpret the '+' and '-' characters as Boolean values. For example:</p> <pre><code>&gt;&gt;&gt; sign = '++++---'
&gt;&gt;&gt; pd.Series(list(sign)).replace({'-': 0, '+': 1}).astype(bool)
0     True
1     True
2     True
3     True
4    False
5    False
6    False
dtype: bool
</code></pre> <p>Now you can compare the two series and use <code>all</code>. In one line, the whole thing looks like this:</p> <pre><code>&gt;&gt;&gt; all((ts &gt;= 0) == pd.Series(list('++++---')).replace({'-': 0, '+': 1}).astype(bool))
False
&gt;&gt;&gt; all((ts &gt;= 0) == pd.Series(list('--+++--')).replace({'-': 0, '+': 1}).astype(bool))
True
</code></pre>
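<p>A small wrapper (my addition, not part of the answer above; it assumes the same "'+' means non-negative" convention and that <code>sign</code> has the same length as the Series):</p> <pre><code>import pandas as pd

def matches_signs(ts, sign):
    # build a Boolean Series from the sign string, aligned on ts's index
    expected = pd.Series([c == '+' for c in sign], index=ts.index)
    return bool(((ts &gt;= 0) == expected).all())

ts = pd.Series([-1, -2.4, 5, 6, 7, -4, -8])
print(matches_signs(ts, '++++---'))  # False
print(matches_signs(ts, '--+++--'))  # True
</code></pre>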
python|pandas|series
1
8,195
67,042,495
Pandas: How to relabel the index of rows
<p>Currently I have: <a href="https://i.stack.imgur.com/iszDx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iszDx.png" alt="enter image description here" /></a></p> <p>I would like the row indices on the left of product_id to start from 0 in ascending order. Is that possible?</p>
<p>You can use <code>reset_index</code> with the <code>inplace</code> argument:</p> <pre><code>df.reset_index(drop=True, inplace=True)
</code></pre>
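<p>A minimal illustration with made-up data (the frame in the screenshot isn't reproducible, so the values below are placeholders): <code>drop=True</code> throws the old index away instead of keeping it as a column, and the new index runs 0, 1, 2, ... in order.</p> <pre><code>import pandas as pd

df = pd.DataFrame({'product_id': [101, 102, 103]}, index=[7, 12, 4])
df.reset_index(drop=True, inplace=True)
print(df)
#    product_id
# 0         101
# 1         102
# 2         103
</code></pre>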
pandas|dataframe
2
8,196
66,788,292
How to name a pandas Series column title?
<p>I am trying to set the title for a <code>pd.Series</code>, such that when assigning it to a <code>pd.DataFrame</code> it carries its title along.</p> <p>I searched and couldn't find anything. The best I could do was name the index, but not the data column.</p> <p>Here is what I have:</p> <pre><code>import pandas as pd

s = pd.Series([1, 2, 3, 4], index=[5, 6, 7, 8])
s.name = &quot;a&quot;
s.rename(&quot;b&quot;, inplace=True)
s.index.name = &quot;c&quot;
print(&quot;____&quot;)
print(s)
</code></pre> <p>output:</p> <blockquote> <pre><code>____
c
5    1
6    2
7    3
8    4
Name: b, dtype: int64
</code></pre> </blockquote> <p>I want the <code>1, 2, 3, 4</code> column to be named.</p>
<p>You did well, but you can pass the name directly in the constructor:</p> <pre><code>s = pd.Series([1, 2, 3, 4], index=[5, 6, 7, 8], name=&quot;my_amazing_name&quot;)
print(s)
# ---------------------------------------------
5    1
6    2
7    3
8    4
Name: my_amazing_name, dtype: int64
</code></pre> <p>If you use such Series to build a <code>DataFrame</code> (or append to it), each one keeps its name:</p> <pre><code>df = pd.DataFrame([
    pd.Series([1, 2, 3, 4], index=[5, 6, 7, 8], name=&quot;my_amazing_name_1&quot;),
    pd.Series([1, 2, 3, 4], index=[5, 6, 7, 8], name=&quot;my_amazing_name_2&quot;),
    pd.Series([1, 2, 3, 4], index=[5, 6, 7, 8], name=&quot;my_amazing_name_3&quot;),
]).T
print(df)
# ---------------------------------------------
   my_amazing_name_1  my_amazing_name_2  my_amazing_name_3
5                  1                  1                  1
6                  2                  2                  2
7                  3                  3                  3
8                  4                  4                  4
</code></pre>
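<p>A short hedged addendum (not from the answer above, but standard pandas behaviour): the name also becomes the column label when the Series is turned into, or concatenated into, a DataFrame.</p> <pre><code>s = pd.Series([1, 2, 3, 4], index=[5, 6, 7, 8], name=&quot;a&quot;)

print(s.to_frame().columns)
# Index(['a'], dtype='object')

print(pd.concat([s, s.rename(&quot;b&quot;)], axis=1).columns)
# Index(['a', 'b'], dtype='object')
</code></pre>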
python|pandas
2
8,197
67,111,657
Why does this Python code fail to build a dummy variable?
<p>I have the following dataframe:</p> <pre><code>df = pd.DataFrame.from_dict({'Date': {0: '2021-01-01 00:00:00', 1: '2021-01-02 00:00:00',
                                      2: '2021-01-03 00:00:00', 3: '2021-01-04 00:00:00',
                                      4: '2021-01-05 00:00:00', 5: '2021-01-06 00:00:00',
                                      6: '2021-01-07 00:00:00', 7: '2021-01-08 00:00:00',
                                      8: '2021-01-09 00:00:00', 9: '2021-01-10 00:00:00',
                                      10: '2021-01-11 00:00:00', 11: '2021-01-12 00:00:00',
                                      12: '2021-01-13 00:00:00', 13: '2021-01-14 00:00:00',
                                      14: '2021-01-15 00:00:00', 15: '2021-01-16 00:00:00',
                                      16: '2021-01-17 00:00:00', 17: '2021-01-18 00:00:00',
                                      18: '2021-01-19 00:00:00', 19: '2021-01-20 00:00:00'}})
</code></pre> <p>I want to create a simple dummy variable: when the date in the dataframe is equal to a specific date then 1, otherwise 0. I did this:</p> <pre><code>def int_21(x):
    if x == '2021-01-07':
        return '1'
    else:
        return '0'

df['comm0'] = df['Date'].apply(int_21)
</code></pre> <p>However, it returns only 0s. Why? What am I doing wrong?</p> <p>Thanks!</p>
<pre><code>import pandas as pd
</code></pre> <p>The reason you only get zeros is that the 'Date' column holds plain strings such as <code>'2021-01-07 00:00:00'</code>, so the string comparison <code>x == '2021-01-07'</code> is never true (the time part makes the strings differ).</p> <p>Use the <code>to_datetime()</code> method to convert your date column from string to datetime:</p> <pre><code>df['Date'] = pd.to_datetime(df['Date'])
</code></pre> <p>Then use the <code>apply()</code> method:</p> <pre><code>df['comm0'] = df['Date'].apply(lambda x: 1 if x == pd.to_datetime('2021-01-07') else 0)
</code></pre> <p>Or, as suggested by @anky, simply use:</p> <pre><code>df['comm0'] = pd.to_datetime(df['Date']).eq('2021-01-07').astype(int)
</code></pre> <p>Or, if you are familiar with <code>numpy</code>, you can also use this after converting your Date column to datetime:</p> <pre><code>import numpy as np

df['comm0'] = np.where(df['Date'] == '2021-01-07', 1, 0)
</code></pre>
python|pandas|dataframe|dummy-variable
2
8,198
67,101,676
TypeError: read_csv() got an unexpected keyword argument ‘sheetname’ when merging csv files
<p>I’m trying to merge a bunch of csv files using pandas but I am getting the above error from the code below. Each csv file has one sheet but they are named differently so I am trying to say “I want the first sheet”. I’ve tried both sheet_names and sheetnames with the same error each time. Am I missing something?</p> <pre><code>import os
import pandas as pd

#show current working directory and list files
path = os.getcwd()
files = os.listdir(path)
files

#pick out csv files
files_csv = [f for f in files if f[-3:] == 'csv']

#initialize empty data frame
df = pd.DataFrame()

#loops over list of files to append empty dataframe
for f in files_csv:
    data = pd.read_csv(f, sheetname=0, engine='openpyxl')
    df = df.append(data)

df.to_csv('ConsolidatedResults.csv')
</code></pre>
<p><code>read_csv()</code> has no <code>sheetname</code> argument; that keyword belongs to <code>read_excel()</code>, where it is spelled <code>sheet_name</code>. CSV files have no sheets, so you can drop it (along with <code>engine='openpyxl'</code>, which is an Excel engine) entirely. See <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_excel.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.read_excel.html</a></p>
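<p>A possible corrected version of your loop (a sketch, assuming all the csv files share the same columns; <code>pd.concat</code> is used instead of repeated <code>append</code>, which is deprecated in newer pandas):</p> <pre><code>import os
import pandas as pd

files_csv = [f for f in os.listdir(os.getcwd()) if f.endswith('.csv')]

# no sheet argument needed -- csv files have no sheets
frames = [pd.read_csv(f) for f in files_csv]
df = pd.concat(frames, ignore_index=True)

df.to_csv('ConsolidatedResults.csv')
</code></pre>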
python|pandas
1
8,199
47,391,948
Pandas - Style - Background Gradient using other dataframe
<p>I like using background_gradient as it helps me look at my dataframes in an Excel-like way.<br> But I'm wondering if there is a way I could map the colors to the figures in another dataframe.<br> For example, something I am keen to do is to color the dataframe using a dataframe of z-scores so I can quickly spot the outliers.</p> <pre><code>A = pd.DataFrame(np.random.randn(6, 3), columns=['a', 'b', 'c'])
B = pd.DataFrame(np.random.randn(6, 3), columns=['a', 'b', 'c'])

A.style.background_gradient(???)
</code></pre> <p>I'm wondering how to use <code>background_gradient</code> so that it uses the values in dataframe B to style A.</p>
<p>I don't see a way other than adapting the <a href="https://github.com/pandas-dev/pandas/blob/29d81f3df81eb0a4d077ae1317df74d509cdc446/pandas/formats/style.py#L817" rel="noreferrer">background_gradient code</a> so that the styling of one dataframe is driven by the values of the other, i.e.:</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import colors

def b_g(s, cmap='PuBu', low=0, high=0):
    # Pass the columns from Dataframe A
    a = A.loc[:, s.name].copy()
    rng = a.max() - a.min()
    norm = colors.Normalize(a.min() - (rng * low), a.max() + (rng * high))
    normed = norm(a.values)
    c = [colors.rgb2hex(x) for x in plt.cm.get_cmap(cmap)(normed)]
    return ['background-color: %s' % color for color in c]

B.style.apply(b_g, cmap='PuBu')
</code></pre> <p>Output:</p> <p><a href="https://i.stack.imgur.com/YwvNm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/YwvNm.png" alt="Dataframes"></a></p> <p>Hope it helps.</p>
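<p>For the z-score use case in the question, a hedged variation of the same idea (my sketch, reusing the imports above; <code>Z</code> and <code>by_zscore</code> are names I made up): display A, but drive the colours from a dataframe of z-scores.</p> <pre><code>Z = (A - A.mean()) / A.std()   # z-scores of A, same shape as A

def by_zscore(s, cmap='coolwarm'):
    # colour column s of A by the matching column of Z
    z = Z.loc[:, s.name]
    norm = colors.Normalize(Z.values.min(), Z.values.max())
    c = [colors.rgb2hex(x) for x in plt.cm.get_cmap(cmap)(norm(z.values))]
    return ['background-color: %s' % color for color in c]

A.style.apply(by_zscore)
</code></pre> <p>Extreme z-scores then show up as the most strongly coloured cells, which is the "spot the outliers" view the question asks about.</p>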
python|pandas|matplotlib|pandas-styles
9