Column schema (dtype and observed value range or string length range):
Unnamed: 0   int64   0 to 378k
id           int64   49.9k to 73.8M
title        string  lengths 15 to 150
question     string  lengths 37 to 64.2k
answer       string  lengths 37 to 44.1k
tags         string  lengths 5 to 106
score        int64   -10 to 5.87k
375,100
69,661,091
X axis label and minor tick labels do not show on Pandas scatter plot
<p>I have created a new environment with Python 3.8.12 and installed the following packages:</p> <ul> <li><code>matplotlib</code> (3.4.3)</li> <li><code>pandas</code> (1.3.3)</li> <li><code>scikit-learn</code> (0.24.2)</li> </ul> <p>I am running the code in a Jupyter notebook.</p> <p>I am using code from two examples in the documentation for <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.plot.scatter.html" rel="nofollow noreferrer">pandas.DataFrame.plot.scatter</a>:</p> <p>My output for the first example</p> <pre><code>df = pd.DataFrame([[5.1, 3.5, 0], [4.9, 3.0, 0], [7.0, 3.2, 1], [6.4, 3.2, 1], [5.9, 3.0, 2]], columns=['length', 'width', 'species']) ax1 = df.plot.scatter(x='length', y='width', c='DarkBlue') </code></pre> <p>does match the plot in the documentation.</p> <p>However: my output for the second example</p> <pre><code>ax2 = df.plot.scatter(x='length', y='width', c='species', colormap='viridis') </code></pre> <p>does <em>not</em> match the plot in the documentation: X axis label and minor tick labels do not show on the (bottom of the) plot.</p> <p><a href="https://pandas.pydata.org/docs/_images/pandas-DataFrame-plot-scatter-2.png" rel="nofollow noreferrer">Plot from documentation</a></p> <p><a href="https://i.stack.imgur.com/ftGWP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ftGWP.png" alt="Plot from documentation" /></a></p> <p>My plot</p> <p><a href="https://i.stack.imgur.com/3h3x0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3h3x0.png" alt="My plot" /></a></p> <p>I have tried <code>ax2.set_xlabel(&quot;length&quot;)</code> (which should not be necessary) and it has no effect.</p> <p>Is this a bug? Can someone please try and reproduce?</p>
<p>This is a known and open issue with Pandas when using the matplotlib backend in Jupyter notebooks.</p> <blockquote> <p>This issue was reported and resolved in #10611 but returned with 0.24.0.</p> </blockquote> <p>See <a href="https://github.com/pandas-dev/pandas/issues/36064" rel="nofollow noreferrer">BUG: Scatterplot x-axis label disappears with colorscale when using matplotlib backend #36064</a></p> <p>The simplest workaround is passing <code>sharex=False</code> to pandas.DataFrame.plot.scatter.</p> <p>Thanks to @rftr for pointing me in the right direction!</p>
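<p>For illustration, a minimal sketch of that workaround applied to the second example from the question (assuming the same <code>df</code>):</p> <pre><code>ax2 = df.plot.scatter(x='length', y='width', c='species',
                      colormap='viridis', sharex=False)
</code></pre> <p>With <code>sharex=False</code> the x-axis label and tick labels should show up as in the documentation plot.</p>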
python|pandas|plot
0
375,101
69,342,255
How to use recursion to record all routes in a parent child hierarchy?
<p>I am trying to go through a hierarchy dataframe and record every possible routes into another dataframe. These routes can have variable depth.</p> <p>Original dataframe (df). The highest column means that the value in the parent column is not a child of any:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">parent</th> <th style="text-align: left;">child</th> <th style="text-align: left;">highest</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">a</td> <td style="text-align: left;">b</td> <td style="text-align: left;">1</td> </tr> <tr> <td style="text-align: left;">b</td> <td style="text-align: left;">c</td> <td style="text-align: left;">0</td> </tr> <tr> <td style="text-align: left;">b</td> <td style="text-align: left;">d</td> <td style="text-align: left;">0</td> </tr> <tr> <td style="text-align: left;">d</td> <td style="text-align: left;">e</td> <td style="text-align: left;">0</td> </tr> </tbody> </table> </div> <p>End goal dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">level 3</th> <th style="text-align: left;">level 2</th> <th style="text-align: left;">level 1</th> <th style="text-align: left;">level 0</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">a</td> <td style="text-align: left;">b</td> <td style="text-align: left;">c</td> <td style="text-align: left;"></td> </tr> <tr> <td style="text-align: left;">a</td> <td style="text-align: left;">b</td> <td style="text-align: left;">d</td> <td style="text-align: left;">e</td> </tr> </tbody> </table> </div> <p>This what I currently have</p> <pre><code>def search(parent): for i in range(df.shape[0]): if(df.iloc[i,0] == parent): search(df.iloc[i,1]) for i in range(df.shape[0]): if(df.iloc[i,2] == 1): search(df.iloc[i,0]) </code></pre> <p>I am able to go through the hierarchy but I do not know how to save it in the format I want.</p>
<p>You can use <a href="https://networkx.org/" rel="nofollow noreferrer"><code>networkx</code></a> to solve the problem. Note if you use <code>networkx</code> you don't need the <code>highest</code> columns. The main function to find all paths is <a href="https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.simple_paths.all_simple_paths.html" rel="nofollow noreferrer"><code>all_simple_paths</code></a></p> <pre><code># Python env: pip install networkx # Anaconda env: conda install networkx import networkx as nx # Create network from your dataframe #G = nx.from_pandas_edgelist(df, source='parent', target='child', # create_using=nx.DiGraph) # For older versions of networkx G = nx.DiGraph() for _, (source, target) in df[['parent', 'child']].iterrows(): G.add_edge(source, target) # Find roots of your graph (a root is a node with no input) roots = [node for node, degree in G.in_degree() if degree == 0] # Find leaves of your graph (a leaf is a node with no output) leaves = [node for node, degree in G.out_degree() if degree == 0] # Find all paths paths = [] for root in roots: for leaf in leaves: for path in nx.all_simple_paths(G, root, leaf): paths.append(path) # Create a new dataframe out = pd.DataFrame(paths).fillna('') out.columns = reversed(out.add_prefix('level ').columns) </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; out level 3 level 2 level 1 level 0 0 a b c 1 a b d e </code></pre>
python|pandas|dataframe|recursion|hierarchy
4
375,102
69,474,172
How can I use pandas explode on string?
<p>I have the following dataFrame in pandas</p> <pre><code>df=pd.DataFrame({'questionId':[1, 2],'answer':['[&quot;Trustful&quot;, &quot;Curious&quot;, &quot;Nervous&quot;]', &quot;very good&quot;]}) df.explode('answer') </code></pre> <p>The actual answer:</p> <pre><code>questionId answer 0 1 [&quot;Trustful&quot;, &quot;Curious&quot;, &quot;Nervous&quot;] 1 2 very good </code></pre> <p><strong>My desired answer:</strong></p> <pre><code>questionId answer 0 1 Trustful 0 1 Curious 0 1 Nervous 1 2 very good </code></pre> <p>Can you help me out with how can I convert</p> <pre><code>'[&quot;Trustful&quot;, &quot;Curious&quot;, &quot;Nervous&quot;]' to [&quot;Trustful&quot;, &quot;Curious&quot;, &quot;Nervous&quot;] </code></pre> <p>so that I can get the answer I am looking for?</p> <p>Thank you so much in advance.</p>
<p>Try with <code>str.findall</code></p> <pre><code>s = df.answer.str.findall('&quot;([^&quot;]*)&quot;') out = df.assign(answer = np.where(s.astype(bool),s,df.answer)).explode('answer') out questionId answer 0 1 Trustful 0 1 Curious 0 1 Nervous 1 2 very good </code></pre>
python|pandas
1
375,103
69,415,715
Pandas Pivot/Reshape/... GroupyBy rows to Columns
<p>[First off, these are my first &quot;real&quot; experiments with pandas, so the terminology in this question might be off.]</p> <p>I am working with the GHCN weather data (<a href="https://www.ncei.noaa.gov/data/global-historical-climatology-network-daily/" rel="nofollow noreferrer">https://www.ncei.noaa.gov/data/global-historical-climatology-network-daily/</a>). The data consists of a CSV file that I load into a data frame (simplified here):</p> <pre><code>data = { 'station': {0: 'AE000041196', 1: 'AE000041196', 2: 'AE000041196', 3: 'AEM00041194', 4: 'AEM00041194', 5: 'AEM00041194', 6: 'AEM00041194', 7: 'AEM00041217', 8: 'AEM00041217', 9: 'AEM00041217'}, 'date': {0: 20210101, 1: 20210101, 2: 20210102, 3: 20210103, 4: 20210101, 5: 20210101, 6: 20210101, 7: 20210101, 8: 20210101, 9: 20210101}, 'measurement': {0: 'TMAX', 1: 'PRCP', 2: 'TAVG', 3: 'TMAX', 4: 'TMIN', 5: 'PRCP', 6: 'TAVG', 7: 'TMAX', 8: 'TMIN', 9: 'TAVG'}, 'value': {0: 278, 1: 0, 2: 214, 3: 266, 4: 178, 5: 0, 6: 217, 7: 262, 8: 155, 9: 202} } df = pd.DataFrame(data) </code></pre> <p>Each row specifies a station and a date, as well as a measurement type and its value. In the actual data, there are around 50 different measurement types. I need to turn this data into a more &quot;traditional&quot; format where each column is one measurement type, and each row contains the data for a given station and date.</p> <p>So far I can only come up with a manual approach that is terribly slow:</p> <pre><code>result = pd.DataFrame() for key, item in df.groupby(['station', 'date']): group = input_df.get_group(key) vals = {} for idx, row in group.iterrows(): vals[&quot;station&quot;] = row[0] vals[&quot;date&quot;] = row[1] vals[row[2]] = row[3] result = result.append(vals, ignore_index=True) </code></pre> <p>It works, but surely there must be a more &quot;pandas&quot; way of doing this, and ideally also allowing parallel processing using multiple CPU cores?</p>
<p>You have the right terminology, and looking for <a href="https://pandas.pydata.org/pandas-docs/version/1.2.0/reference/api/pandas.pivot.html" rel="nofollow noreferrer">pivot</a> in the docs would probably have led you straight to using:</p> <pre><code>&gt;&gt;&gt; df.pivot(index=['station', 'date'], columns='measurement', values='value') measurement PRCP TAVG TMAX TMIN station date AE000041196 20210101 0.0 NaN 278.0 NaN 20210102 NaN 214.0 NaN NaN AEM00041194 20210101 0.0 217.0 NaN 178.0 20210103 NaN NaN 266.0 NaN AEM00041217 20210101 NaN 202.0 262.0 155.0 </code></pre> <p>You can possibly add <code>.reset_index()</code> at the end to transform the index columns back to normal columns:</p> <pre><code>&gt;&gt;&gt; _.reset_index() measurement station date PRCP TAVG TMAX TMIN 0 AE000041196 20210101 0.0 NaN 278.0 NaN 1 AE000041196 20210102 NaN 214.0 NaN NaN 2 AEM00041194 20210101 0.0 217.0 NaN 178.0 3 AEM00041194 20210103 NaN NaN 266.0 NaN 4 AEM00041217 20210101 NaN 202.0 262.0 155.0 </code></pre> <p>There’s also a <a href="https://pandas.pydata.org/pandas-docs/version/1.2.0/user_guide/reshaping.html#reshaping" rel="nofollow noreferrer">reshaping user guide</a> in the documentation that’s very well done.</p>
pandas|dataframe|pandas-groupby
1
375,104
69,306,245
How to find rows with duplicate values using pd.duplicated() and a date within +- 2 days Pandas Dataframe
<p>I have a pandas dataframe such as:</p> <pre><code> number GENDER DOB Code 0 500401081 M 1994-08-01 AP 1 500401094 F 1998-05-04 CB 2 500401081 M 1994-08-03 AP 3 500401096 M 1998-05-06 AP </code></pre> <p>I want all rows that have the same number, GENDER, Code, and a DOB within 2 days of each other. So for example I would get a table returned like this:</p> <pre><code> number GENDER DOB Code 0 500401081 M 1994-08-01 AP 2 500401081 M 1994-08-03 AP </code></pre> <p>I have started out by using the duplicated() function to get all the duplicates of number, gender, and Code but am unable to come up with a solution to add the +- two days.</p> <pre><code>df_dup = df.loc[df.duplicated(subset=[&quot;number&quot;,&quot;GENDER&quot;,&quot;Code&quot;],keep=False)] </code></pre>
<p>You can sort the data by &quot;DOB&quot;, compute the difference between successive &quot;DOB&quot; per group and construct a mask if the difference is lower than or equal to 2 days:</p> <pre><code># group per number/GENDER group = df.groupby(['number', 'GENDER']).ngroup() # compute first mask mask = df.sort_values(by='DOB').groupby(group)['DOB'].diff().le('2D') # retrieve other duplicate value (the one before that matched in &quot;mask&quot;) duplicates = mask|mask.groupby(group).shift(-1) # subset dataframe df[duplicates] </code></pre> <p>output:</p> <pre><code> number GENDER DOB Code 0 500401081 M 1994-08-01 AP 2 500401081 M 1994-08-03 AP </code></pre>
python|pandas|duplicates
1
375,105
69,607,902
How to transform the following dataset for time series analysis?
<p><a href="https://i.stack.imgur.com/IDugj.jpg" rel="nofollow noreferrer">This is the dataset, I want to transform for time series forecasting. Here, the column names contains the store number.</a></p> <p>df=</p> <pre><code>| Date | store_1 |store_2 |store_3 |:---- |:------:| -----:|-----:| | 1-1-21 | 0.5 | 0.2 | 0.3 | | 1-2-21 | 0.3 | 0.7 | 0.1 | | 1-3-21 | 0.6 | 0.9 | 0.4 | </code></pre> <p>I want to convert df to df1:</p> <pre><code>Date store number value 1-1-21 1 0.5 1-2-21 1 0.3 1-3-21 1 0.6 1-1-21 2 0.2 1-2-21 2 0.7 1-3-21 2 0.9 1-1-21 3 0.3 1-2-21 3 0.1 1-3-21 3 0.4 </code></pre>
<p>Use <code>melt</code>:</p> <pre><code>out = df.melt(id_vars=['Date'], var_name='Store_number', value_name='Value') out['Store_number'] = out['Store_number'].str.extract(r'store_(\d+)') print(out) # Output: Date Store_number Value 0 1-1-21 1 0.5 1 1-2-21 1 0.3 2 1-3-21 1 0.6 3 1-1-21 2 0.2 4 1-2-21 2 0.7 5 1-3-21 2 0.9 6 1-1-21 3 0.3 7 1-2-21 3 0.1 8 1-3-21 3 0.4 </code></pre> <p><strong>Update</strong>:</p> <blockquote> <p>Can you suggest me a way to get back to the orginal form? After forecasting the prediction, I need to make the dataframe as the orginal one.</p> </blockquote> <pre><code>out = out.pivot(index='Date', columns='Store_number', values='Value') \ .add_prefix('store_').rename_axis(columns=None).reset_index() print(out) Date store_1 store_2 store_3 0 1-1-21 0.5 0.2 0.3 1 1-2-21 0.3 0.7 0.1 2 1-3-21 0.6 0.9 0.4 </code></pre>
python|pandas|dataframe|time-series
2
375,106
69,316,778
How to create rows from pandas dataframe column value
<p>I want to create Dataframe rows using the value in the Dataframe column(<strong>Race, TGR1</strong>). I still have additional columns aside from <strong>Race, TGR1</strong> but the number of column values are the same. I can't think of the best possible way to achieve this.</p> <p>Any help would be greatly appreciated.</p> <pre><code>Track Date Race TGR1 0 Addington 24/09/2021 R1,R2,R3,R4,R5,R6,R7,R8,R9,R0,R1,R2 5,8,2,5,6,1,6,3,1,2,1,2 1 Mount Gambier 26/09/2021 R1,R2,R3,R4,R5,R6,R7,R8,R9,R0 8,1,4,8,8,1,2,1,2,2 </code></pre> <p><strong>Expected output</strong></p> <pre><code>Track Date Race TGR1 Addington 24/09/2021 R1 5 Addington 24/09/2021 R2 8 Addington 24/09/2021 R3 2 Addington 24/09/2021 R4 5 Addington 24/09/2021 R5 6 Addington 24/09/2021 R6 1 Addington 24/09/2021 R7 6 Addington 24/09/2021 R8 3 Addington 24/09/2021 R9 1 Addington 24/09/2021 R0 2 Addington 24/09/2021 R1 1 Addington 24/09/2021 R2 2 Mount Gambier 26/09/2021 R1 8 Mount Gambier 26/09/2021 R2 1 Mount Gambier 26/09/2021 R3 4 Mount Gambier 26/09/2021 R4 8 Mount Gambier 26/09/2021 R5 8 Mount Gambier 26/09/2021 R6 1 Mount Gambier 26/09/2021 R7 2 Mount Gambier 26/09/2021 R8 1 Mount Gambier 26/09/2021 R9 2 Mount Gambier 26/09/2021 R10 2 </code></pre>
<p>You can use <code>apply</code>+<code>pd.Series.explode</code>. You first need to set aside the columns not to be exploded using <code>set_index</code>, then bring them back as columns with <code>reset_index</code>.</p> <pre><code>(df.assign(Race=df['Race'].str.split(','), TGR1=df['TGR1'].str.split(',')) .set_index(['Track', 'Date']) .apply(pd.Series.explode) .reset_index() ) </code></pre> <p>output:</p> <pre><code> Track Date Race TGR1 0 Addington 24/09/2021 R1 5 1 Addington 24/09/2021 R2 8 2 Addington 24/09/2021 R3 2 3 Addington 24/09/2021 R4 5 4 Addington 24/09/2021 R5 6 5 Addington 24/09/2021 R6 1 6 Addington 24/09/2021 R7 6 7 Addington 24/09/2021 R8 3 8 Addington 24/09/2021 R9 1 9 Addington 24/09/2021 R0 2 10 Addington 24/09/2021 R1 1 11 Addington 24/09/2021 R2 2 12 Mount Gambier 26/09/2021 R1 8 13 Mount Gambier 26/09/2021 R2 1 14 Mount Gambier 26/09/2021 R3 4 15 Mount Gambier 26/09/2021 R4 8 16 Mount Gambier 26/09/2021 R5 8 17 Mount Gambier 26/09/2021 R6 1 18 Mount Gambier 26/09/2021 R7 2 19 Mount Gambier 26/09/2021 R8 1 20 Mount Gambier 26/09/2021 R9 2 21 Mount Gambier 26/09/2021 R0 2 </code></pre>
python|pandas|dataframe
1
375,107
69,390,591
mixing two sensors data in a dataframe regarding timestamp condition
<p>I would like to merge some data from different sensors regarding the Timestamp.</p> <p>It can be done by iterating the dataframe with an if condition on the timestamp, but it's not very efficient. Does somebody have a better idea?</p> <p>Here is a simple example with the result I expect:</p> <p>The sensor 1 worked correctly until 2018-01-01 02:00:00. The sensor 2 worked correctly from 2018-01-01 03:00:00.</p> <pre><code>idx = pd.date_range(&quot;2018-01-01&quot;, periods=6, freq=&quot;H&quot;) df = pd.DataFrame(data={ 'sensor 1' :[5.4,5,5.2,3,2,2], 'sensor 2' : [-1,-2,-3,5.5,5.4,5.6]}, index=idx) display(df) #result = &quot;sensor 1 before 2018-01-01 03:00 and sensor 2 after that&quot; result = pd.DataFrame(data={ 'sensor' :[5.4,5,5.2,5.5,5.4,5.6]}, index=idx) display(result) </code></pre>
<p>Why not filter by the index?</p> <pre><code>import datetime import pandas as pd idx = pd.date_range(&quot;2018-01-01&quot;, periods=6, freq=&quot;H&quot;) df = pd.DataFrame(data={ 'sensor 1' :[5.4,5,5.2,3,2,2], 'sensor 2' : [-1,-2,-3,5.5,5.4,5.6]}, index=idx) date = datetime.datetime(2018, 1, 1, 2) sens_1_data = df.loc[df.index &lt;= date] sens_1_data = sens_1_data[['sensor 1']] sens_2_data = df.loc[df.index &gt; date] sens_2_data = sens_2_data[['sensor 2']] sens_1_data.columns = ['sensor'] sens_2_data.columns = ['sensor'] sens_data = pd.concat([sens_1_data, sens_2_data]) </code></pre>
python|pandas|dataframe
1
375,108
69,589,226
TensorFlow Probability: Different log probabilities for Sequential vs Named JointDistributions?
<p>I'm fairly new to Bayesian estimation with TensorFlow. I was trying to set up a very simple regression of height on weight (using McElreath's <a href="https://github.com/rmcelreath/rethinking/blob/master/data/Howell1.csv" rel="nofollow noreferrer">Howell data</a>) to familiarize myself with the machinery in TensorFlow Probability, but I am running into something I don't understand. I presumed that defining a model with <code>JointDistributionSequentialAutoBatched</code> would yield a model that was identical to one defined by <code>JointDistributionNamedAutoBatched</code>, but the latter would just yield some nice handles for getting at parameters.</p> <pre class="lang-py prettyprint-override"><code>ht_wt: tfd.JointDistributionSequentialAutoBatched = tfd.JointDistributionSequentialAutoBatched([ tfd.Normal(loc=tf.cast(0., dtype=tf.float64), scale=0.2), tfd.Normal(loc=tf.cast(0., dtype=tf.float64), scale=1.), tfd.Uniform(low=tf.cast(1., dtype=tf.float64), scale=10.), lambda beta0, beta1, sigma: tfd.Independent(tfd.Normal( loc=beta0 + beta1 * weight, # weight is a tf.Tensor of weights from Howell sigma=sigma )) ], name=&quot;Height vs Weight&quot;) ht_wt_named: tfd.JointDistributionNamedAutoBatched = tfd.JointDistributionNamedAutoBatched([ beta0=tfd.Normal(loc=tf.cast(0., dtype=tf.float64), scale=0.2), beta1=tfd.Normal(loc=tf.cast(0., dtype=tf.float64), scale=1.), sigma=tfd.Uniform(low=tf.cast(1., dtype=tf.float64), scale=10.), x=lambda beta0, beta1, sigma: tfd.Independent(tfd.Normal( loc=beta0 + beta1 * weight, # weight is a tf.Tensor of weights from Howell sigma=sigma )) ], name=&quot;Height vs Weight (Named)&quot;) </code></pre> <p>However, when I look at the distribution of logged probabilities across different parameter values, I get inconsistently different values. For example...</p> <pre><code>beta0 = 1., beta1 = 1., sigma = 0.5 yields: Sequential = -19163.0633 Named = -inf </code></pre> <pre><code>beta0 = 1., beta1 = 1., sigma = 1.0 yields: Sequential = -5108.2755 Named = -5108.2755 </code></pre> <pre><code>beta0 = 1., beta1 = 1., sigma = 1.5 yields: Sequential = -2616.9668 Named = -2601.3418 </code></pre> <p>I get different values when I vary the <code>beta</code> as well. Have I missed something about underlying differences between <code>Sequential</code> and <code>Named</code> <code>JointDistributions</code>?</p>
<p>JDSequential &quot;pops&quot; distributions off a stack, so you need to reference them in reverse order. <a href="https://colab.research.google.com/gist/brianwa84/a47f4f43b1e58b9ac24ed34766478679/so69589226.ipynb" rel="nofollow noreferrer">https://colab.research.google.com/gist/brianwa84/a47f4f43b1e58b9ac24ed34766478679/so69589226.ipynb</a></p>
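<p>A rough sketch of what the reordered Sequential model from the question could look like (it assumes <code>weight</code> is the float64 tensor of weights defined earlier; the Uniform/Normal keyword arguments are written here as <code>high=</code>/<code>scale=</code>, and the parameter names in the lambda are only illustrative):</p> <pre><code>import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

ht_wt = tfd.JointDistributionSequentialAutoBatched([
    tfd.Normal(loc=tf.cast(0., tf.float64), scale=0.2),   # beta0
    tfd.Normal(loc=tf.cast(0., tf.float64), scale=1.),    # beta1
    tfd.Uniform(low=tf.cast(1., tf.float64), high=10.),   # sigma
    # the callable receives the earlier samples nearest-first,
    # i.e. sigma, beta1, beta0 -- not beta0, beta1, sigma
    lambda sigma, beta1, beta0: tfd.Independent(
        tfd.Normal(loc=beta0 + beta1 * weight, scale=sigma),
        reinterpreted_batch_ndims=1),
], name='Height vs Weight')
</code></pre>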
python|tensorflow-probability|probability-distribution
0
375,109
69,613,766
ref() of tensor not equal in dataset. Why?
<p>I am very confused by the following behavior. Take this program:</p> <pre><code>import tensorflow_datasets as tfds # %% Train dataset (ds_train_original, ds_test_original), ds_info = tfds.load( &quot;mnist&quot;, split=[&quot;train&quot;, &quot;test&quot;], shuffle_files=True, as_supervised=True, with_info=True, ) iterator = iter(ds_train_original) el = iterator.get_next()[0] el[0].ref() == el[0].ref() # &lt;- this should be True </code></pre> <p>The last line IMO should return <code>True</code>. However, this is <code>False</code>. I cannot understand why.</p> <p>According to the <a href="https://www.tensorflow.org/api_docs/python/tf/Tensor#ref" rel="nofollow noreferrer">ref</a> documentation:</p> <blockquote> <p>Returns a hashable reference object to this Tensor. The primary use case for this API is to put tensors in a set/dictionary.</p> </blockquote> <p>My understanding is that you should be able to use the ref() to check for equality between Tensor. Here the problem doesn't happen anymore once I have extracted the ref. For example, this is True:</p> <pre><code>a_ref = el[0].ref() a_deref = a_ref.deref() another_ref = a_deref.ref() a_ref == another_ref </code></pre> <p>So the &quot;problem&quot; seems confined to extracting the ref() from <code>iterator</code>.</p> <p>Can anybody explain to me what is happening and why <code>el[0].ref() == el[0].ref()</code> is <code>False</code>?</p>
<p>After posting an <a href="https://github.com/tensorflow/tensorflow/issues/52537" rel="nofollow noreferrer">issue</a> on Github, it seems like the only viable solution is to compare the samples values, since only <a href="https://docs.python.org/3/library/weakref.html" rel="nofollow noreferrer">weakrefs</a> are created.</p> <p>Thus the solution is:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow_datasets as tfds # %% Train dataset (ds_train_original, ds_test_original), ds_info = tfds.load( &quot;mnist&quot;, split=[&quot;train&quot;, &quot;test&quot;], shuffle_files=True, as_supervised=True, with_info=True, ) iterator = iter(ds_train_original) el = iterator.get_next()[0] (el[0].numpy() == el[0].numpy()).all() </code></pre>
tensorflow|tensorflow-datasets
0
375,110
69,361,063
Normalization of samples based on the controls when we have several groups
<p>Let's say we have the following DataFrame:</p> <pre><code>data = {'Compounds': ['Drug_A', 'Drug_A', 'Drug_A', 'Drug_A', 'Drug_A', 'Drug_A', 'Drug_B', 'Drug_B', 'Drug_B','Drug_B','Drug_B','Drug_B','Drug_B','Drug_B','Drug_B','Drug_B','Drug_B','Drug_B', 'Drug_C', 'Drug_C','Drug_C','Drug_C','Drug_C','Drug_C','Drug_C','Drug_C','Drug_C','Drug_C', np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan], 'values': [24, 20, 48, 17, 20, 8, 22, 16, 46, 44, 12, 38, 26, 16, 19, 23, 9, 39, 19, 24, 43, 6, 24, 46, 26, 15, 8, 22, 22, 32, 23, 41, 8, 46, 29, 34, 34, 39, 32, 22, 28, 34, 29, 19, 44, 22, 17, 41, 19, 39, 27, 46, 37, 26], 'identifier': ['Sample', 'Sample','Sample','Sample','Sample','Sample','Sample','Sample','Sample','Sample', 'Sample','Sample','Sample','Sample','Sample','Sample','Sample','Sample','Sample','Sample', 'Sample','Sample','Sample','Sample','Sample','Sample','Sample','Sample', 'Control', 'Control', 'Control','Control','Control','Control','Control','Control','Control','Control','Control', 'Control','Control','Control','Control','Control','Control','Control','Control','Control', 'Control','Control','Control','Control','Control','Control',], 'Experiment': ['P1', 'P1', 'P2', 'P2', 'P3', 'P3', 'P1', 'P1', 'P1', 'P2', 'P2', 'P2', 'P3', 'P3', 'P1', 'P1', 'P1', 'P2', 'P2', 'P2', 'P2', 'P2', 'P3', 'P3', 'P1','P1', 'P1', 'P1', 'P1', 'P1', 'P1', 'P1', 'P2', 'P2','P2','P2','P2','P2','P2','P2','P2','P2','P2','P2','P3','P3','P3','P3','P3','P3', 'P1', 'P2', 'P3','P1' ]} df = pd.DataFrame(data) </code></pre> <p>In the identifier column, we have both Sample and Control values. We want to first: Calculate the average value of the column 'values' for all the controls from different experiments (i.e. P1, P2, P3):</p> <pre><code>df_control = df.loc[df['identifier'] == 'Control'] z = df_control['values'].mean() </code></pre> <p>What is the compact form of the script above, if I want to write it in one line? may I use list comprehensive?</p> <p>Next, for normalization purposes, we want to divide z by the average 'values' of controls in each experiment P1, P2, P3, separately, to get a normalization_factor for each of these experiments.</p> <p>At the end, multiply the normalization factor of each specific experiment by the values of Samples belonging to that experiment.</p> <p>What's the simplest and most straightforward way to do it? Thanks for you kind help!</p>
<p>Is this what you're looking for?</p> <pre><code>df.groupby(by=['identifier']).mean() Out: values identifier Control 30.384615 Sample 24.285714 </code></pre> <p>and then:</p> <pre><code>df.groupby(by=['identifier', 'Experiment']).mean() Out: values identifier Experiment Control P1 28.500000 P2 30.769231 P3 31.285714 Sample P1 20.833333 P2 29.000000 P3 23.333333 </code></pre> <p>The second has the following <code>MultiIndex</code> which you can use to access the data:</p> <pre><code>MultiIndex([('Control', 'P1'), ('Control', 'P2'), ('Control', 'P3'), ( 'Sample', 'P1'), ( 'Sample', 'P2'), ( 'Sample', 'P3')], names=['identifier', 'Experiment']) </code></pre> <p>You could now build on this as:</p> <pre><code>all_mean = df.groupby(by=['identifier']).mean() spec_mean = df.groupby(by=['identifier', 'Experiment']).mean() result = all_mean/spec_mean Out values identifier Experiment Control P1 1.066127 P2 0.987500 P3 0.971198 Sample P1 1.165714 P2 0.837438 P3 1.040816 </code></pre> <p>Now getting the data into some kind of flat structure (? the OP is not explicit about this):</p> <pre><code>normalization_factors = {idx[1]: result.loc[idx].values[0] for idx in result.index if idx[0] == 'Control'} # {'P1': 1.0661268556005397, 'P2': 0.9874999999999999, 'P3': 0.9711977520196698} sample_values = {idx[1]: result.loc[idx].values[0] * normalization_factors[idx[1]] for idx in result.index if idx[0] == 'Sample'} # {'P1': 1.2427993059572005, 'P2': 0.8269704433497537, 'P3': 1.0108384765919014} </code></pre> <p>Map the <code>sample_data</code> to the <code>df</code> as:</p> <p><code>df[&quot;calculated_col_with_the_name_you_prefer&quot;] = df[&quot;Experiment&quot;].map(sample_values)</code></p>
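<p>As a side note, the control-only mean asked about in the question can be written in one line (a small sketch, not needed for the groupby approach above):</p> <pre><code>z = df.loc[df['identifier'] == 'Control', 'values'].mean()
</code></pre>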
python|pandas|normalization
1
375,111
69,452,108
Save multiple dataframes into the environment in Python
<p>I have a similar problem to <a href="https://stackoverflow.com/questions/57452403/how-to-split-pandas-dataframe-into-multiple-dataframes-based-on-unique-string-va">this question</a>, but the original question was about producing multiple csv outputs. In my case, I am wondering if there's a way to create the multiple dataframes in the environment through a loop so I can carry on with some data analysis.</p> <pre><code>us = df[df['country_code'].str.match(&quot;US&quot;)] mx = df[df['country_code'].str.match(&quot;MX&quot;)] ca = df[df['country_code'].str.match(&quot;CA&quot;)] au = df[df['country_code'].str.match(&quot;AU&quot;)] </code></pre>
<p>You could use the same code as the link posted, but save the different dfs into a dictionary:</p> <pre><code>codes = ['US', 'MX', 'CA', 'AU'] result_dict = {} for code in codes: temp = df.query(f'country_code.str.match(&quot;{code}&quot;)') result_dict[code] = temp </code></pre>
python|pandas|f-string
1
375,112
69,311,721
Counting number of events on each user in two dataframe
<p>I'm attempting to count the number of events that occurred in the past for each user in a table. Actually, I have two dataframe, one for each user at a specific point 'T' in time and one for each event that also occur in time.</p> <p>This is the exemple of the user table:</p> <pre><code> ID_CLIENT START_DATE 0 A 2015-12-31 1 A 2016-12-31 2 A 2017-12-31 3 B 2016-12-31 </code></pre> <p>This is the exemple of the event table:</p> <pre><code> ID_CLIENT DATE_EVENT 0 A 2017-01-01 1 A 2017-05-01 2 A 2018-02-01 3 A 2016-05-02 4 B 2015-01-01 </code></pre> <p>The idea is that I want for each line in the &quot;user&quot; table the count of event that occurs before the date registered on &quot;START_DATE&quot;.</p> <p>Exemple of the final result :</p> <pre><code> ID_CLIENT START_DATE nb_event_tot 0 A 2015-12-31 0 1 A 2016-12-31 1 2 A 2017-12-31 3 3 B 2016-12-31 1 </code></pre> <p>I have created a function which leverage the &quot;.apply&quot; function of pandas but it's too slow... If anyone have an idea on how to speed it up it would be glady appreciated. I have 800K line of user and 200k line of event which take up to 3 hours with the apply method.</p> <p>Here is my code to reproduce :</p> <pre><code>import pandas as pd def check_below_df(row, df_events, col_event): # Select the ids id_c = row['ID_CLIENT'] date = row['START_DATE'] # Select subset of events df sub_df_events = df_events.loc[df_events['ID_CLIENT'] == id_c, :] sub_df_events = sub_df_events.loc[sub_df_events[col_event] &lt;= date, :] count = len(sub_df_events) return count def count_events(df_clients: pd.DataFrame, df_event: pd.DataFrame, col_event_date: str = 'DATE_EVENEMENT', col_start_date: str = 'START_DATE', col_end_date: str = 'END_DATE', col_event:str = 'nb_sin', events = ['compensation']): df_clients_cp = df_clients[[&quot;ID_CLIENT&quot;, col_start_date]].copy() df_event_cp = df_event.copy() df_event_cp[col_event] = 1 # TOTAL df_clients_cp[f'{col_event}_tot'] = df_clients_cp.apply(lambda row: check_below_df(row, df_event_cp, col_event_date), axis=1) return df_clients_cp # ------------------------------------------------------------------ # ------------------------------------------------------------------ df_users = pd.DataFrame(data={ 'ID_CLIENT': ['A', 'A', 'A', 'B'], 'START_DATE': ['2015-12-31', '2016-12-31', '2017-12-31', '2016-12-31'], }) df_users[&quot;START_DATE&quot;] = pd.to_datetime(df_users[&quot;START_DATE&quot;]) df_events = pd.DataFrame(data={ 'ID_CLIENT': ['A', 'A', 'A', 'A', 'B'], 'DATE_EVENT': ['2017-01-01', '2017-05-01', '2018-02-01', '2016-05-02', '2015-01-01'] }) df_events[&quot;DATE_EVENT&quot;] = pd.to_datetime(df_events[&quot;DATE_EVENT&quot;]) tmp = count_events(df_users, df_events, col_event_date='DATE_EVENT', col_event='nb_event') tmp </code></pre> <p>Thank's for your help.</p>
<p>I guess the slow execution is caused by <code>pd.apply(axis=1)</code>, which is explained <a href="https://towardsdatascience.com/avoiding-apply-ing-yourself-in-pandas-a6ade4569b7f" rel="nofollow noreferrer">here</a>.</p> <p>I estimate that you can improve the execution time by using functions that are not applied rowwise, for instance by using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer">merge</a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer">groupby</a>.</p> <p>First we merge the frames:</p> <pre class="lang-py prettyprint-override"><code>df_merged = pd.merge(df_users, df_events, on='ID_CLIENT', how='left') </code></pre> <p>Then we check where <code>DATE_EVENT</code> &lt;= <code>START_DATE</code> for the entire frame:</p> <pre class="lang-py prettyprint-override"><code>df_merged.loc[:, 'before'] = df_merged['DATE_EVENT'] &lt;= df_merged['START_DATE'] </code></pre> <p>Then we group by <code>ID_CLIENT</code> and <code>START_DATE</code>, and sum the <code>'before'</code> column:</p> <pre class="lang-py prettyprint-override"><code>df_grouped = df_merged.groupby(by=['ID_CLIENT', 'START_DATE']) df_out = df_grouped['before'].sum() # returns a series </code></pre> <p>Finally we convert <code>df_out</code> (a series) back to a dataframe, renaming the new column to <code>'nb_event_tot'</code>, and subsequently reset the index to get your desired output:</p> <pre class="lang-py prettyprint-override"><code>df_out = df_out.to_frame('nb_event_tot') df_out = df_out.reset_index() </code></pre>
python|pandas
0
375,113
69,472,596
BCELoss().backward throws Runtime Error or doesn't train with Requires_grad
<p>I'm new to PyTorch and am running into an error with optimizing an <code>nn.Embedding</code> matrix.</p> <p>My code below, with given variables <code>embedding_dim</code>, <code>num_node</code>, <code>train_label</code> and <code>train_edge</code>:</p> <pre><code>emb = nn.Embedding(num_node, embedding_dim) optimizer = SGD(emb.parameters(), lr=0.1, momentum=0.9) loss_fn = nn.BCELoss() sigmoid = nn.Sigmoid() for i in range(500): optimizer.zero_grad() res = torch.FloatTensor([sigmoid(torch.dot(emb(a), emb(b))) for (a, b) in zip(train_edge[0], train_edge[1])]) loss = loss_fn(res, train_label) loss.backward() optimizer.step() print(f'loss:{loss}') </code></pre> <p>runs into a RuntimeError at <code>loss.backward()</code>:</p> <p><code>RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn</code></p> <p>If I add <code>requires_grad=True</code> to the assignment of <code>res</code>, the Runtime Error disappears, but the optimizer doesn't train / the loss doesn't decrease. Any idea what might I be doing wrong?</p> <p>Thanks in advance!</p>
<p>The error is clearly raised because <code>res</code> does not have <code>.requires_grad</code> on.</p> <p>Since you are making a Tensor from a list of Tensors, it's better to use <code>torch.cat</code> and not <code>torch.FloatTensor</code>. The latter is for <em>constructing</em> a tensor and does not have <code>.requires_grad=True</code> by default.</p> <hr /> <p>If it does not train (non-decreasing loss as you mentioned), that may be due to other reasons (data, wrong hyperparameters etc.) which seems to be out of scope of this particular question.</p>
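<p>A minimal sketch of how the <code>res</code> line from the question could be rewritten so that gradients flow (it assumes <code>emb</code>, <code>loss_fn</code>, <code>train_edge</code> and <code>train_label</code> are defined as in the question; the rest of the training loop stays the same):</p> <pre><code>import torch

# keep the dot products inside the autograd graph instead of copying
# their values into a fresh FloatTensor (which detaches them)
res = torch.cat([
    torch.sigmoid(torch.dot(emb(a), emb(b))).reshape(1)
    for (a, b) in zip(train_edge[0], train_edge[1])
])
loss = loss_fn(res, train_label)
loss.backward()
</code></pre>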
python|pytorch
0
375,114
69,400,936
How to find the last date of the month from current month (excel version of EOMONTH) in PYTHON?
<p>my df currently consists of only date column</p> <pre><code>date 28/09/1995 30/10/1993 26/02/2021 04/04/2020 </code></pre> <p>I want to create 2 new columns called &quot;end of month&quot; which gives the last day of the month &amp; &quot;end of quarter&quot; which gives last day of quarter</p> <pre><code>date end of month end of quarter 28/09/1995 30/09/1995 30/09/1995 30/10/1993 31/10/1993 31/12/1993 26/02/2021 28/02/2021 31/03/2021 04/04/2020 30/04/2020 30/06/2020 </code></pre> <p>Kindly help me in solving this</p>
<p>Try this:</p> <pre><code>import pandas as pd df = pd.DataFrame({'date':['28/09/1995', '30/10/1993', '26/02/2021', '04/04/2020']}) df['date'] = pd.to_datetime(df['date'], dayfirst=True) df['end of month'] = df['date'] + pd.offsets.MonthEnd(1) df['end of quarter'] = df['date'] + pd.offsets.QuarterEnd(1) date end of month end of quarter 0 1995-09-28 1995-09-30 1995-09-30 1 1993-10-30 1993-10-31 1993-12-31 2 2021-02-26 2021-02-28 2021-03-31 3 2020-04-04 2020-04-30 2020-06-30 </code></pre>
python|pandas|date|datetime|data-manipulation
1
375,115
69,660,627
Finding first occurrence of negative value in a row
<p>Say I have the following DataFrame:</p> <pre><code>df = pd.DataFrame({'a': [12, 34, -45], 'b':[-24, 36, 48], 'c':[28, -14, 68]}) </code></pre> <p>df:</p> <pre><code> a b c 12 -24 28 34 36 -14 -45 48 68 </code></pre> <p>I am looking to return the index(+1) of the first column to contain a negative number within each row, so for the example I would produce:</p> <pre><code> a b c first_neg_col 12 -24 28 2 34 36 -14 3 -45 48 68 1 </code></pre> <p>I have a way of achieving this:</p> <pre><code>def first_negval(val_list): for idx, val in enumerate(val_list): if val &lt; 0: return idx + 1 df['first_neg_col'] = df[:].values.tolist() df.first_neg_col = df['first_neg_col'].apply(lambda x: first_negval(x)) </code></pre> <p>But this seems cumbersome/inefficient. I was wondering if there was a more vectorized approach / some way of using list comprehension?</p>
<p>If always exist at least one negative value use <a href="https://numpy.org/doc/stable/reference/generated/numpy.argmax.html" rel="nofollow noreferrer"><code>numpy.argmax</code></a> for first negative value less like <code>0</code>:</p> <pre><code>df['first_neg_col'] = np.argmax(df.lt(0).to_numpy(), axis=1) + 1 print (df) a b c first_neg_col 0 12 -24 28 2 1 34 36 -14 3 2 -45 48 68 1 </code></pre> <p>Generally is necessary test if exist at least one negative and set to <code>0</code> in <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>DataFrame.any</code></a>:</p> <pre><code>df = pd.DataFrame({'a': [12, 34, -45, 1], 'b':[-24, 36, 48, 8], 'c':[28, -14, 68, 8]}) m = df.lt(0) df['first_neg_col'] = np.where(m.any(axis=1), np.argmax(m.to_numpy(), axis=1) + 1, 0) print (df) a b c first_neg_col 0 12 -24 28 2 1 34 36 -14 3 2 -45 48 68 1 3 1 8 8 0 </code></pre>
python|pandas
1
375,116
69,568,271
Pandas how to squeeze duplicate rows into one row
<p>I have following dataframe</p> <pre><code>| name | value | | name_1 | A | | name_1 | B | | name_1 | C | </code></pre> <p>How to reshape dataframe to looks like</p> <pre><code>| name | value | | name_1 | A,B,C | </code></pre> <p>or</p> <pre><code> | name | value | | name_1 | [A,B,C] | </code></pre>
<p>Use <code>groupby.agg</code>:</p> <pre><code>&gt;&gt;&gt; df.groupby('name').agg(list) value name name_1 [A, B, C] &gt;&gt;&gt; </code></pre> <p>Or:</p> <pre><code>&gt;&gt;&gt; df.groupby('name').agg(', '.join) value name name_1 A, B, C &gt;&gt;&gt; </code></pre>
python|pandas
1
375,117
69,454,217
Op type not registered 'IO>BigQueryClient' with BigQuery connector on AI platform
<p>I'm trying to parallelize the training step of my model with tensorflow <code>ParameterServerStrategy</code>. I work with GCP <code>AI Platform</code> to create the cluster and launch the task. As my dataset is huge, I use the bigquery tensorflow connector included in <code>tensorflow-io</code>.</p> <p>My script is inspired by the <a href="https://colab.research.google.com/github/tensorflow/io/blob/master/docs/tutorials/bigquery.ipynb#scrollTo=4_NlkxZt1rwR" rel="nofollow noreferrer">documentation of tensorflow bigquery reader</a> and the <a href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/distribute/parameter_server_training.ipynb#scrollTo=gmPvactfa6Eh" rel="nofollow noreferrer">documentation of tensorflow ParameterServerStrategy</a></p> <p>Locally my script works well but when I launch it with AI Platform I get the following error :</p> <p><code>{&quot;created&quot;:&quot;@1633444428.903993309&quot;,&quot;description&quot;:&quot;Error received from peer ipv4:10.46.92.135:2222&quot;,&quot;file&quot;:&quot;external/com_github_grpc_grpc/src/core/lib/surface/call.cc&quot;,&quot;file_line&quot;:1056,&quot;grpc_message&quot;:&quot;Op type not registered \'IO&gt;BigQueryClient\' in binary running on gke-cml-1005-141531--n1-standard-16-2-644bc3f8-7h8p. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.&quot;,&quot;grpc_status&quot;:5}</code></p> <p>The scripts works with fake data on AI platform and works locally with bigquery connector. I imagine that the compilation of the model including the bigquery connector and its calls on other devices creates the bug but I don't know how to fix it.</p> <p>I read this error happens when devices don't have same tensorflow versions so I checked tensorflow and tensorflow-io version on each device.</p> <p><strong>tensorflow : 2.5.0</strong></p> <p><strong>tensorflow-io : 0.19.1</strong></p> <p>I created a similar example which reproduce the bug on AI platform</p> <pre><code>import os from tensorflow_io.bigquery import BigQueryClient from tensorflow_io.bigquery import BigQueryReadSession import tensorflow as tf import multiprocessing import portpicker from tensorflow.keras.layers.experimental import preprocessing from google.cloud import bigquery from tensorflow.python.framework import dtypes import numpy as np import pandas as pd client = bigquery.Client() PROJECT_ID = &lt;your_project&gt; DATASET_ID = 'tmp' TABLE_ID = 'bq_tf_io' BATCH_SIZE = 32 # Bigquery requirements def init_bq_table(): table = '%s.%s.%s' %(PROJECT_ID, DATASET_ID, TABLE_ID) # Create toy_data def create_toy_data(N): x = np.random.random(size = N) y = 0.2 + x + np.random.normal(loc=0, scale = 0.3, size = N) return x, y x, y =create_toy_data(1000) df = pd.DataFrame(data = {'x': x, 'y': y}) job_config = bigquery.LoadJobConfig(write_disposition=&quot;WRITE_TRUNCATE&quot;,) job = client.load_table_from_dataframe( df, table, job_config=job_config ) job.result() # Create initial data #init_bq_table() CSV_SCHEMA = [ bigquery.SchemaField(&quot;x&quot;, &quot;FLOAT64&quot;), bigquery.SchemaField(&quot;y&quot;, &quot;FLOAT64&quot;), ] def transform_row(row_dict): # Trim all string tensors dataset_x = row_dict dataset_x['constant'] = tf.cast(1, tf.float64) # Extract feature column dataset_y = 
dataset_x.pop('y') #Export as tensor dataset_x = tf.stack([dataset_x[column] for column in dataset_x], axis=-1) return (dataset_x, dataset_y) def read_bigquery(table_name): tensorflow_io_bigquery_client = BigQueryClient() read_session = tensorflow_io_bigquery_client.read_session( &quot;projects/&quot; + PROJECT_ID, PROJECT_ID, TABLE_ID, DATASET_ID, list(field.name for field in CSV_SCHEMA), list(dtypes.double if field.field_type == 'FLOAT64' else dtypes.string for field in CSV_SCHEMA), requested_streams=2) dataset = read_session.parallel_read_rows() return dataset def get_data(): dataset = read_bigquery(TABLE_ID) dataset = dataset.map(transform_row, num_parallel_calls=4) dataset = dataset.batch(BATCH_SIZE).prefetch(2) return dataset cluster_resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver() # parameter server and worker just wait jobs from the coordinator (chief) if cluster_resolver.task_type in (&quot;worker&quot;): worker_config = tf.compat.v1.ConfigProto() server = tf.distribute.Server( cluster_resolver.cluster_spec(), job_name=cluster_resolver.task_type, task_index=cluster_resolver.task_id, config=worker_config, protocol=&quot;grpc&quot;) server.join() elif cluster_resolver.task_type in (&quot;ps&quot;): server = tf.distribute.Server( cluster_resolver.cluster_spec(), job_name=cluster_resolver.task_type, task_index=cluster_resolver.task_id, protocol=&quot;grpc&quot;) server.join() elif cluster_resolver.task_type == 'chief': strategy = tf.distribute.experimental.ParameterServerStrategy(cluster_resolver=cluster_resolver) if cluster_resolver.task_type == 'chief': learning_rate = 0.01 with strategy.scope(): # model model_input = tf.keras.layers.Input( shape=(2,), dtype=tf.float64) layer_1 = tf.keras.layers.Dense( 8, activation='relu')(model_input) dense_output = tf.keras.layers.Dense(1)(layer_1) model = tf.keras.Model(model_input, dense_output) #optimizer optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate) accuracy = tf.keras.metrics.MeanSquaredError() @tf.function def distributed_train_step(iterator): def train_step(x_batch_train, y_batch_train): with tf.GradientTape() as tape: y_predict = model(x_batch_train, training=True) loss_value = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.NONE)(y_batch_train, y_predict) grads = tape.gradient(loss_value, model.trainable_weights) optimizer.apply_gradients(zip(grads, model.trainable_weights)) accuracy.update_state(y_batch_train, y_predict) return loss_value x_batch_train, y_batch_train = next(iterator) return strategy.run(train_step, args=(x_batch_train, y_batch_train)) coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(strategy) #test def dataset_fn(_): def create_toy_data(N): x = np.random.random(size = N) y = 0.2 + x + np.random.normal(loc=0, scale = 0.3, size = N) return np.c_[x,y] def toy_transform_row(row): dataset_x = tf.stack([row[0], tf.cast(1, tf.float64)], axis=-1) dataset_y = row[1] return dataset_x, dataset_y N = 1000 data =create_toy_data(N) dataset = tf.data.Dataset.from_tensor_slices(data) dataset = dataset.map(toy_transform_row, num_parallel_calls=4) dataset = dataset.batch(BATCH_SIZE) dataset = dataset.prefetch(2) return dataset @tf.function def per_worker_dataset_fn(): return strategy.distribute_datasets_from_function(lambda x : get_data()) # &lt;-- Not working with AI platform #return strategy.distribute_datasets_from_function(dataset_fn) # &lt;-- Working with AI platform per_worker_dataset = coordinator.create_per_worker_dataset(per_worker_dataset_fn) # 
Train model for epoch in range(5): per_worker_iterator = iter(per_worker_dataset) accuracy.reset_states() for step in range(5): coordinator.schedule(distributed_train_step, args=(per_worker_iterator,)) coordinator.join() print (&quot;Finished epoch %d, accuracy is %f.&quot; % (epoch, accuracy.result().numpy())) </code></pre> <p>When I create the dataset with <code>per_worker_dataset_fn()</code> I can use the bigquery connector (bugging) or create the dataset in live (working).</p> <p><strong>AI Platform Cluster configuration :</strong></p> <p>runtimeVersion: &quot;2.5&quot;</p> <p>pythonVersion: &quot;3.7&quot;</p> <p>Did someone get this issue ? Bigquery connector worked pretty well with MirroredStrategy on AI Platform. Tell me if I should report the issue somewhere else.</p>
<p>I think this is due to lazy loading of libtensorflow_io.so. <a href="https://github.com/tensorflow/io/commit/85d018ee59ceccfae06914ec2a2f6d6583775ff7" rel="nofollow noreferrer">https://github.com/tensorflow/io/commit/85d018ee59ceccfae06914ec2a2f6d6583775ff7</a></p> <p>Can you try adding something like this to your code:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow_io tensorflow_io.experimental.oss() </code></pre>
tensorflow|google-cloud-ml|google-cloud-ai
2
375,118
69,630,736
How to use Model Parallelism with a custom Tensorflow 2.0 model on TPUs?
<p>To replicate <a href="https://arxiv.org/abs/2106.13884" rel="nofollow noreferrer">Multimodal Few-Shot Learning with Frozen Language Models</a>, I am trying to train a ~7B parameter subclassed TF2 model on a TPUv3-32. Out of the 7B parameters, roughly 6B parameters are frozen.</p> <p>I want to use model and data parallelism to train it as efficiently as possible. As far as I know, MeshTensorflow can only be used for models written in TF1.</p> <p>I tried using experimental_device_assignment from TPUStrategy but it was placing all the variables only on the 1st(0th) core of the TPU which quickly ran out of memory.</p> <p><a href="https://i.stack.imgur.com/uEnQb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uEnQb.png" alt="Using TPUStrategy" /></a></p> <p>On a TPUv3-8, I tried to keep computation_shape = [2, 2, 1, 2] and [1, 1, 1, 2] and num_replicas = 1 but it didn't work.</p> <p>I am also open to using GPUs to train it.</p>
<p>According to the cloud TPU documents, there is no official support:</p> <blockquote> <p>Does Cloud TPU support model parallelism?</p> <p>Model parallelism (or executing non-identical programs on the multiple cores within a single Cloud TPU device) is not currently supported.</p> </blockquote> <p><a href="https://cloud.google.com/tpu/docs/faq" rel="nofollow noreferrer">https://cloud.google.com/tpu/docs/faq</a></p> <p>The underlying issue may be that there is no automatic sharding of the computation graph in TPUStrategy so the graph is all placed one device, unless (in the model code) you manually assign device placements for weights and operations to the logical devices as created by <code>DeviceAssignment.build</code> and handle communication across the devices carefully.</p> <p>That said, there is another TF2-compatible library (also from Google) that could help if you are building a big Transformer where you want layers that are friendly to graph sharding: <a href="https://github.com/tensorflow/lingvo" rel="nofollow noreferrer">Lingvo</a>. In their Github, there is an <a href="https://github.com/tensorflow/lingvo/blob/master/lingvo/tasks/lm/README.md" rel="nofollow noreferrer">example of sharding a model on a TPU v3-512 node</a>. The library has Google's open sourced GPipe which can help speed up model parallel training loops. Lingvo should also work with GPUs.</p>
tensorflow|machine-learning|google-cloud-platform|deep-learning|tpu
2
375,119
69,563,301
How to group data by customized date logic?
<p>I have a dataframe that looks like this:</p> <pre><code>Date | Apples | Bananas etc 2020-01-01 | 2 | 5 2020-02-01 | 12 | 44 2020-03-01 | 4 | 45 </code></pre> <p>I want to create a grouping logic by date but the date must be transformed to the following:</p> <p>If the Date is on or after February of the current year, then label the Date as the next respective Year, otherwise label as the current respective year. Example:</p> <p>For Apples on 2020-01-01, it should be labelled as the year '2020' because it is prior to February of the current year. However, Bananas in 2020-03-01 would be labelled as '2021' because the date falls after February of the current year.</p> <pre><code>Date | Apples | Bananas etc 2020 | 2 | 5 2021 | 12 | 44 2021 | 4 | 45 </code></pre> <p>How would this work the best?</p>
<p>You could use <code>np.where</code> to do a conditional date offset.</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'Date': ['2020-01-01', '2020-02-01', '2020-03-01'], 'Apples': [2, 12, 4], 'Bananas': [5, 44, 45]}) df['Date'] = pd.to_datetime(df['Date']) # Add a year if the month is greater than 1 df['Date'] = np.where(df['Date'].dt.month&gt;1, df['Date']+ pd.offsets.DateOffset(years=1), df['Date']) # If you want the actual dates print(df.groupby('Date').sum()) Apples Bananas Date 2020-01-01 2 5 2021-01-31 12 44 2021-03-01 4 45 # If you want grouped by year only print(df.groupby(df['Date'].dt.year).sum()) Apples Bananas Date 2020 2 5 2021 16 89 </code></pre>
python|pandas
0
375,120
69,402,510
How to combine lists(first three columns) to generate output shown in last column in python
<div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>List1</th> <th>List2</th> <th>List3</th> <th><strong>output</strong></th> </tr> </thead> <tbody> <tr> <td>MAN</td> <td>TH</td> <td>ESE</td> <td>MAN-TH-ESE</td> </tr> <tr> <td></td> <td></td> <td>PA</td> <td>MAN-TH-PA</td> </tr> <tr> <td>PWP</td> <td>TH</td> <td>ESE</td> <td>PWP-TH-ESE</td> </tr> <tr> <td></td> <td></td> <td>PA</td> <td>PWP-TH-PA</td> </tr> <tr> <td></td> <td>PR</td> <td>ESE</td> <td>PWP-PR-ESE</td> </tr> <tr> <td></td> <td></td> <td>PA</td> <td>PWP-PR-PA</td> </tr> <tr> <td>MAD</td> <td>TH</td> <td>ESE</td> <td>MAD-TH-ESE</td> </tr> <tr> <td></td> <td></td> <td>PA</td> <td>MAD-TH-PA</td> </tr> <tr> <td></td> <td>PR</td> <td>ESE</td> <td>MAD-PR-ESE</td> </tr> <tr> <td></td> <td></td> <td>PA</td> <td>MAD-PR-PA</td> </tr> <tr> <td>ETI</td> <td>TH</td> <td>ESE</td> <td>ETI-TH-ESE</td> </tr> <tr> <td></td> <td></td> <td>PA</td> <td>ETI-TH-PA</td> </tr> <tr> <td>NIS</td> <td>TH</td> <td>ESE</td> <td>NIS-TH-ESE</td> </tr> <tr> <td></td> <td></td> <td>PA</td> <td>NIS-TH-PA</td> </tr> <tr> <td></td> <td>PR</td> <td>ESE</td> <td>NIS-PR-ESE</td> </tr> <tr> <td></td> <td></td> <td>PA</td> <td>NIS-PR-PA</td> </tr> <tr> <td>EDP</td> <td>PR</td> <td>ESE</td> <td>EDP-PR-ESE</td> </tr> <tr> <td></td> <td></td> <td>PA</td> <td>EDP-PR-PA</td> </tr> <tr> <td>CAP</td> <td>PR</td> <td>ESE</td> <td>CAP-PR-ESE</td> </tr> <tr> <td></td> <td></td> <td>PA</td> <td>CAP-PR-PA</td> </tr> </tbody> </table> </div>
<p>Replace empty space by missing values, forward filling them and last join together:</p> <pre><code>df['output'] = df[['List1','List2','List3']].replace('',np.nan).ffill().apply('-'.join, 1) </code></pre> <p>If need join all columns:</p> <pre><code>df['output'] = df.replace('',np.nan).ffill().apply('-'.join, 1) </code></pre>
python|pandas|dataframe|loops|conditional-statements
1
375,121
69,356,473
split strings in a cell into different row, pandas
<p>I need to split this cell in different rows in pandas dataframe, basically splitting strings but ignoring the comma inside quotes.</p> <pre><code>'one two three', 'don't split, this' </code></pre> <p>output would be like:</p> <pre><code>'one two three' 'don't split, this' </code></pre> <p>Thanks in advance</p>
<p>Why not just use:</p> <pre><code>df['col'].str.split(r&quot;(?&lt;=')[,(\s|)](?=')&quot;) </code></pre> <p>Or if there always be a space after the comma, do:</p> <pre><code>s.str.split(r&quot;(?&lt;='), (?=')&quot;) </code></pre>
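<p>For illustration, a quick check of the second pattern on the example value from the question (a sketch; the series here is just a stand-in for the column being split):</p> <pre><code>import pandas as pd

s = pd.Series([&quot;'one two three', 'don't split, this'&quot;])
print(s.str.split(r&quot;(?&lt;='), (?=')&quot;).explode())
# 0        'one two three'
# 0    'don't split, this'
# dtype: object
</code></pre>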
python|regex|pandas
0
375,122
69,581,447
Inconsistent behaviour with numpy broadcasting while creating array of array
<p>If I try to execute:</p> <pre><code>a = np.ones((1, 2)) b = np.ones((1, 3)) np.array([a, b], dtype=np.ndarray) </code></pre> <p>I get the following error:</p> <pre><code>ValueError: could not broadcast input array from shape (2,) into shape (1,) </code></pre> <p>But I would like to get:</p> <pre><code>array([array([[1., 1.]]), array([[1., 1., 1.]])], dtype=object) </code></pre> <p>But if I execute this:</p> <pre><code>a = np.ones((1, 2)) b = np.ones((2, 4)) np.array([a, b], dtype=np.ndarray) </code></pre> <p>I get the expected results:</p> <pre><code>array([array([[1., 1.]]), array([[1., 1., 1.], [1., 1., 1.]])], dtype=object) </code></pre> <p>Running on:</p> <p>numpy==1.21.2 python==3.7.11</p>
<p>Based on @hpaulh comments, this works:</p> <pre><code>import numpy as np a = np.ones((1, 2)) b = np.ones((1, 3)) test = np.empty((2,), dtype=np.ndarray) test[0] = a test[1] = b </code></pre> <p>returns</p> <pre><code>array([array([[1., 1.]]), array([[1., 1., 1.]])], dtype=object) </code></pre>
python|numpy|numpy-ndarray
1
375,123
69,343,401
Is it possible to convert numbers in an array to tuple enumerating with python?
<p>I have an example array:</p> <pre><code>&gt;&gt;&gt; arr = np.array([[2, 4, 1], [3, 4, 2], [3, 6, 1]]) &gt;&gt;&gt; arr array([[2, 4, 1], [3, 4, 2], [3, 6, 1]]) </code></pre> <p>Desired output:</p> <pre><code> array([[(0, 2), (1, 4), (2, 1)], [(0, 3), (1, 4), (2, 2)], [(0, 3), (1, 6), (2, 1)]]) </code></pre> <p>I applied this:</p> <pre><code>np.apply_along_axis(lambda x: tuple(enumerate(x)), 1, arr) </code></pre> <p>I got different output:</p> <pre><code>array([[[0, 2], [1, 4], [2, 1]], [[0, 3], [1, 4], [2, 2]], [[0, 3], [1, 6], [2, 1]]]) </code></pre>
<p>Usually you don't have tuples in a numpy array. Your output is basically what you desire but as lists and not as tuples. You can use a workaround like shown <a href="https://stackoverflow.com/questions/47389447/how-convert-a-list-of-tupes-to-a-numpy-array-of-tuples">here</a>:</p> <pre><code>import numpy as np def return_tuple(x): out = np.empty(len(x), dtype=object) out[:] = list(enumerate(x)) return out arr = np.array([[2,4,1],[3,4,2],[3,6,1]]) result = np.apply_along_axis(return_tuple, 1, arr) print(result) print(type(result)) [[(0, 2) (1, 4) (2, 1)] [(0, 3) (1, 4) (2, 2)] [(0, 3) (1, 6) (2, 1)]] &lt;class 'numpy.ndarray'&gt; </code></pre>
python|arrays|numpy
3
375,124
69,467,339
What is the purpose of the view() method in numpy
<p><strong>Code 1</strong></p> <pre><code>arr = np.array([1, 2, 3]) arr2 = arr.view() </code></pre> <p><strong>Code 2</strong></p> <pre><code>arr = np.array([1, 2, 3]) arr2 = arr </code></pre> <p>Both of these snippets have the same functionality, so why do we actually need the <code>view</code> method in NumPy if we can simply achieve the same result with code 2?</p>
<p>Even without using the things <code>view</code> lets you do, the semantics are different.</p> <pre><code>arr2 = arr </code></pre> <p>This assigns a reference to the original array to a different name. Any change you make to <code>arr2</code> short of reassignment will show up when you access <code>arr</code>.</p> <pre><code>arr2 = arr.view() </code></pre> <p>This creates a new array object that only shares data with the original. You can do something like <code>arr2.shape = (3, 1, 1)</code> without affecting <code>arr</code> at all.</p> <p>At the same time, that's not what <code>view</code> is generally used for. Let's say you wanted to look at the individual bytes that make up your integers. You would create a view with a different dtype:</p> <pre><code>arr2 = arr.view(np.uint8) </code></pre> <p>Or say you wanted to reinterpret your integers as big- instead of little-endian:</p> <pre><code>arr2 = arr.view('&gt;i4') </code></pre> <p>Keep in mind that many other useful operations do this as well, like <code>reshape</code> (same dtype, different shape), <code>transpose</code> (same dtype, different strides), etc.</p>
python|numpy
2
375,125
69,566,433
Export final data from numpy to excel
<p>Here I used pandas to export my data, which is stored in a NumPy array, but I can't export it and I get the error you can see below.</p> <p>ValueError: Must pass 2-d input</p> <p>My main variable is AccZONE=c.T; its type is an array of float64 and its shape is (710,1,1).</p>
<p>From the error it looks like the array has 3 dimensions; you need to change it to 2 dimensions (it would be nice if you could provide some code).</p> <p>You can try <code>np.reshape(arr, (-1, 1))</code> or <code>np.ravel(arr)</code>.</p>
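<p>For example, a minimal sketch assuming the array is the (710, 1, 1) float64 <code>AccZONE</code> mentioned in the question (the column label and output file name are just placeholders):</p>
<pre><code>import pandas as pd

# AccZONE is assumed to be the (710, 1, 1) float64 array from the question
acc_2d = AccZONE.reshape(-1, 1)   # collapse to 2 dimensions: (710, 1)

# wrap it in a DataFrame and export (requires an Excel writer such as openpyxl)
pd.DataFrame(acc_2d, columns=['AccZONE']).to_excel('AccZONE.xlsx', index=False)
</code></pre>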
python|numpy
0
375,126
69,353,179
Can one add the output of a function in Python as a column to a dataframe
<p>I have an array of countries. I would like to run this array through a function and append the output of the function as a column to a dataframe.</p> <p>I used the <code>apply</code> method but keep getting a <code>KeyError</code>. I am not sure what I am doing wrong.</p> <p><strong>Code</strong></p> <pre><code>import matplotlib.pyplot as plt import pandas as pd import pycountry_convert as pc data = pd.read_csv('/content/2019.csv', index_col=0) data.loc[71, 'Country or region'] = 'Trinidad and Tobago' country_region = data['Country or region'] for country in country_region: country_code = pc.country_name_to_country_alpha2(country) data['Continent'].apply(pc.country_alpha2_to_continent_code(country_code)) </code></pre> <p>Here is <a href="https://i.stack.imgur.com/fnG2h.png" rel="nofollow noreferrer">a screenshot of my error</a>, for more details.</p>
<p>Yes, you can use a function. You were almost there. If your function takes a single argument, and you are applying to a column, there's no need to add the argument. If you want to use multiple args, you can combine apply and lambda.</p> <p>No need for lambda in your case, This should solve it:</p> <pre><code>data = pd.read_csv('/content/2019.csv', index_col=0) data.loc[71, 'Country or region'] = 'Trinidad and Tobago' data['country_code'] = data['Country or region'].apply(pc.country_name_to_country_alpha2) data['Continent'] = data.country_code.apply(pc.country_alpha2_to_continent_code) </code></pre> <p>For future references, you can copy the code in your question instead of attaching a screenshot.</p>
python|pandas|dataframe|apply|pycountry-convert
1
375,127
69,325,728
Using a loop to select multiple columns from a pandas dataframe
<p>I am trying to create a <code>DataFrame</code> that only has certain columns from a previously created dataframe using a loop.</p> <p>I have the following dataframe:</p> <pre><code> Time Amount Amount i=2 Amount i=3 Amount i=4 0 20 10 20 20 20 1 10 5 10 10 10 2 15 25 50 50 50 </code></pre> <p>I am then trying to create a new data frame which has the following columns: Time, Amount, Amount =2, Amount i=3 using a loop function. I understand that this can be solved relatively easily by just select each column, but this is part of a larger project that I can't do that for.</p> <p>So far I have this:</p> <pre><code>for i in range (2,4): df1 = df[['Time','Amount','Amount i={}'.format(i)]] </code></pre> <p>But this only pulls out the 'Time' , 'Amount' &amp; 'Amount i=3.</p> <pre><code> Time Amount Amount i=3 0 20 10 20 1 10 5 10 2 5 25 50 </code></pre>
<p>You can do something like this if you want to use a loop</p> <pre><code>df1 = df[['Time','Amount'] + ['Amount i={}'.format(i) for i in range(2,4)]] </code></pre>
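<p>If the number of <code>Amount i=</code> columns is not fixed, you can also build the list from the existing column names (a sketch, assuming all wanted columns start with that prefix):</p>
<pre><code>amount_cols = [c for c in df.columns if c.startswith('Amount i=')]
df1 = df[['Time', 'Amount'] + amount_cols]
</code></pre>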
python|pandas|loops
2
375,128
69,413,567
filename as key in dictionary - pandas
<p>I have around 100 .csv files in a folder.<br/> They are named like AA.csv, BB.csv, CC.csv.....<br/></p> <p>I have used the command below to load all the files into a dataframe:</p> <pre><code>import pandas as pd import glob df = pd.concat(map(pd.read_csv, glob.glob('/Users/redman/stock-data/*.csv'))) </code></pre> <p>How can I store them in a dictionary using the filename as the key?<br/> Here the dictionary keys would be &quot;AA&quot;, &quot;BB&quot;, &quot;CC&quot;.<br/> Any suggestions would be great.</p>
<p>We can use a dictionary comprehension:</p> <pre><code>import glob import pandas as pd d = {f: pd.read_csv(f) for f in glob.glob('/Users/redman/stock-data/*.csv')} </code></pre> <p>We can use <a href="https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.stem" rel="nofollow noreferrer"><code>Path.stem</code></a> and <a href="https://docs.python.org/3/library/pathlib.html#pathlib.Path.glob" rel="nofollow noreferrer"><code>Path.glob</code></a> if we only want the file name and not the extension or path:</p> <pre><code>import glob from pathlib import Path import pandas as pd d = {f.stem: pd.read_csv(f.resolve()) for f in Path('/Users/redman/stock-data/').glob('*.csv')} </code></pre> <hr /> <p>Complete Working Example with Generated Sample Data:</p> <pre><code>import glob from pathlib import Path from pprint import pprint import numpy as np import pandas as pd # Generate Sample Directory and csv np.random.seed(26) Path('./stock-data').mkdir(exist_ok=True) for f in ['AA', 'BB', 'CC']: pd.DataFrame( np.random.randint(1, 100, (3, 5)), ).to_csv(f'./stock-data/{f}.csv', index=False) # Build Dictionary d = {f.stem: pd.read_csv(f.resolve()) for f in Path('./stock-data/').glob('*.csv')} pprint(d) </code></pre> <p><code>d</code>:</p> <pre><code>{'AA': 0 1 2 3 4 0 54 63 7 49 66 1 84 78 91 33 69 2 99 46 18 56 54, 'BB': 0 1 2 3 4 0 90 53 89 22 31 1 16 43 24 48 92 2 57 83 48 13 48, 'CC': 0 1 2 3 4 0 17 13 61 77 84 1 20 30 52 29 82 2 45 18 60 53 71} </code></pre>
python|pandas|dataframe
4
375,129
69,322,880
Bokeh remove gaps in datatime axis when date is mising
<p>I am trying to plot a candlestick chart with the OHLC data that I have. The data comes from a 5 minute timeframe resampled to a 4 hour timeframe, so there will be a huge gap on weekends.</p> <pre class="lang-py prettyprint-override"><code># Load data subdata = pd.read_csv( 'data/M5/EURUSD.csv', header = None, skiprows = 0, sep = '\t', names = [ 'date', 'open', 'high', 'low', 'close', 'volume' ], ) subdata['date'] = pd.to_datetime(subdata['date']) subdata.set_index(['date'], inplace = True) # Resample subdata = subdata.resample('4H').agg({ 'open': 'first', 'high': 'max', 'low': 'min', 'close': 'last', 'volume': 'sum' }).dropna(axis=0) </code></pre> <p>After I resampled the data I plotted it with Bokeh, and here comes the problem: the gap on weekend days. Here is the code I used to plot the data; I used this <a href="https://stackoverflow.com/questions/48528519/bokeh-autofill-datetime-axis-missing-values-how-to-stop-it">concept</a> to try to solve the problem, but it still does not work.</p> <pre class="lang-py prettyprint-override"><code>fig1 = figure(x_axis_type='datetime', height=400, width=900) # I tried to add this code but it still does not work fig1.xaxis.major_label_overrides = { i: date.strftime('%Y-%m-%d %H:%S') for i, date in enumerate(subdata.index) } wide = 12*60*60*200 inc = subdata['close'] &gt; subdata['open'] dec = subdata['open'] &gt; subdata['close'] fig1.segment(subdata.index, subdata['high'], subdata.index, subdata['low'], color='black') fig1.vbar(subdata.index[inc], wide, subdata['open'][inc], subdata.close[inc], fill_color='#D5E1DD', line_color='black') fig1.vbar(subdata.index[dec], wide, subdata['open'][dec], subdata['close'][dec], fill_color='#F2583E', line_color='black') show(gridplot([[fig1]])) </code></pre> <p>Here is the result <a href="https://i.stack.imgur.com/Nstjk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nstjk.png" alt="There gap on weekend" /></a></p> <p>Is there something wrong with my code or am I wrong with the concept?</p>
<p>After trial and error, I finally found the root of the problem. When changing the x axis to <code>enumerate(subdata.index)</code>, the x axis uses numbers instead of datetimes. But I was still using datetimes to draw glyphs that are supposed to use numbers, so Bokeh kept receiving datetime x values on a numeric x axis, which ends up creating gaps and wrong plots.</p> <p>To solve this problem, a numeric row index is needed. In my case the index uses datetimes, so I need to create a new column holding the row number and then plot against that column.</p> <pre class="lang-py prettyprint-override"><code># Create the numeric index column subdata['x'] = subdata.reset_index().index fig1 = figure(x_axis_type='datetime', height=400, width=900) fig1.xaxis.major_label_overrides = { i: date.strftime('%Y-%m-%d %H:%S') for i, date in enumerate(subdata.index) } wide = 0.5 inc = subdata['close'] &gt; subdata['open'] dec = subdata['open'] &gt; subdata['close'] fig1.segment(subdata['x'], subdata['high'], subdata['x'], subdata['low'], color='black') fig1.vbar(subdata['x'][inc], wide, subdata['open'][inc], subdata['close'][inc], fill_color='#D5E1DD', line_color='black') fig1.vbar(subdata['x'][dec], wide, subdata['open'][dec], subdata['close'][dec], fill_color='#F2583E', line_color='black') show(gridplot([[fig1]])) </code></pre> <p><a href="https://i.stack.imgur.com/ODL9M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ODL9M.png" alt="Resulting plot" /></a></p>
python|pandas|bokeh
1
375,130
69,360,657
Date extraction without "T00:00:00" and formated as %d/%m/%Y
<p>I have been working on a huge text file. Where I want to read and cut it with pandas.</p> <p>Here is a sample of the raw file:</p> <pre><code>Date;Time;GHI;DNI;DIF;flagR;SE;SA;TEMP;AP;RH;WS;WD;PWAT 01.01.1994;00:07;0;0;0;0;-41.92;-19.43;14.3;1004.4;93.4;0.3;189;17.7 01.01.1994;00:22;0;0;0;0;-40.65;-23.70;14.3;1004.4;93.6;0.1;186;17.8 01.01.1994;00:37;0;0;0;0;-39.14;-27.75;14.3;1004.3;93.7;0.0;10;18.0 </code></pre> <p>To do that, I have a date format <code>%d.%m.%Y</code>, and I changed it into <code>%d/%m/%Y</code>. Then I saw on the VSCode Data Viewer the need to sort because my result was <code>%Y-%m-%d+time</code>. This <code>time</code> part is always <code>T00:00:00</code>, and I do not need it because I already have time. Why is this text appearing in VSCode Data Viewer? Does this time is always generated? Is it ignored by Python? Why the date format I wrote is not working?</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np import datetime # It will read the file: It will separate by semi-colonne, # and it will ignore the first 56 rows. file = pd.read_csv('file.txt', sep = ';', skiprows = 56) # It will read the &quot;Date&quot; column to replace the &quot;.&quot; # to &quot;/&quot;. This will help the code to read properly the # date column. Then it will give the format to the # whole column [day/month/year]. file[&quot;Date&quot;] = file[&quot;Date&quot;].str.replace('.','/').apply(lambda x: datetime.datetime.strptime(x, &quot;%d/%m/%Y&quot;).date()) </code></pre> <p>I used the code snippet above but it doesn't work with the format <code>%d/%m/%Y</code> and <code>.date()</code>.</p> <p>This is the file contents when I print it:</p> <pre class="lang-py prettyprint-override"><code> Date Time GHI DNI DIF flagR SE SA TEMP AP RH WS WD PWAT 0 1994-01-01 00:07 0 0 0 0 -41.92 -19.43 14.3 1004.4 93.4 0.3 189 17.7 1 1994-01-01 00:22 0 0 0 0 -40.65 -23.70 14.3 1004.4 93.6 0.1 186 17.8 2 1994-01-01 00:37 0 0 0 0 -39.14 -27.75 14.3 1004.3 93.7 0.0 10 18.0 </code></pre> <p>This is the file contents when I look it using VSCode Data Viewer:</p> <pre class="lang-py prettyprint-override"><code> Date Time GHI DNI DIF flagR SE SA TEMP AP RH WS WD PWAT 0 1994-01-01T00:00:00 00:07 0 0 0 0 -41.92 -19.43 14.3 1004.4 93.4 0.3 189 17.7 1 1994-01-01T00:00:00 00:22 0 0 0 0 -40.65 -23.70 14.3 1004.4 93.6 0.1 186 17.8 2 1994-01-01T00:00:00 00:37 0 0 0 0 -39.14 -27.75 14.3 1004.3 93.7 0.0 10 18.0 </code></pre> <p>Thank you</p>
<p>That's just how the VSCode Data Viewer displays dates; it doesn't mean the data is actually stored that way.</p> <p>So, you can change the format of your <code>Date</code> column by replacing it with this:</p> <pre><code>file[&quot;Date&quot;] = pd.to_datetime(file['Date'], format='%d.%m.%Y').dt.strftime('%d/%m/%Y') # write dataframe to CSV file file.to_csv(&quot;out.csv&quot;, index=False) </code></pre> <p>And this is the content of the CSV file:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Date</th> <th>Time</th> <th>GHI</th> <th>DNI</th> <th>DIF</th> <th>flagR</th> <th>SE</th> <th>SA</th> <th>TEMP</th> <th>AP</th> <th>RH</th> <th>WS</th> <th>WD</th> <th>PWAT</th> </tr> </thead> <tbody> <tr> <td>01/01/1994</td> <td>00:07</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>-41.92</td> <td>-19.43</td> <td>14.3</td> <td>1004.4</td> <td>93.4</td> <td>0.3</td> <td>189</td> <td>17.7</td> </tr> <tr> <td>01/01/1994</td> <td>00:22</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>-40.65</td> <td>-23.7</td> <td>14.3</td> <td>1004.4</td> <td>93.6</td> <td>0.1</td> <td>186</td> <td>17.8</td> </tr> <tr> <td>01/01/1994</td> <td>00:37</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>-39.14</td> <td>-27.75</td> <td>14.3</td> <td>1004.3</td> <td>93.7</td> <td>0.0</td> <td>10</td> <td>18.0</td> </tr> </tbody> </table> </div>
python|pandas|datetime|visual-studio-code
1
375,131
69,497,328
Why are torch.version.cuda and deviceQuery reporting different versions?
<p>I have a doubt about the CUDA version installed on my system and being effectively used by my software. I have done some research online but could not find a solution to my doubt. The issue which helped me a bit in my understanding and is the most related to what I will ask below is <a href="https://stackoverflow.com/questions/9727688/how-to-get-the-cuda-version">this one</a>.</p> <p>Description of the problem:</p> <p>I created a virtualenvironment with virtualenvironmentwrapper and then I installed pytorch in it.</p> <p>After some time I realized I did not have CUDA installed on my system.</p> <p>You can find it out by doing:<br /> <code>nvcc –V </code></p> <p>If nothing is returned it means that you did not install CUDA (as far as I understood).</p> <p>Therefore, I followed the instructions <a href="https://docs.nvidia.com/cuda/cuda-installation-guide-linux/" rel="nofollow noreferrer">here</a></p> <p>And I installed CUDA with <a href="https://developer.nvidia.com/cuda-downloads?target_os=Linux&amp;target_arch=x86_64&amp;Distribution=Ubuntu&amp;target_version=20.04&amp;target_type=deb_local" rel="nofollow noreferrer">this</a> official link.</p> <p>Then, I installed the <code>nvidia-development-kit</code> simply with</p> <p><code>sudo apt install nvidia-cuda-toolkit</code></p> <p>Now, if in my virtualenvironment I do:</p> <p><code>nvcc -V</code></p> <p>I get:</p> <pre><code>nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2019 NVIDIA Corporation Built on Sun_Jul_28_19:07:16_PDT_2019 Cuda compilation tools, release 10.1, V10.1.243 </code></pre> <p>However, if (always in the virtualenvironment) I do:</p> <p><code>python -c &quot;import torch; print(torch.version.cuda)&quot;</code></p> <p>I get:</p> <p><code>10.2</code></p> <p><strong>This is the first thing I don't understand. Which version of CUDA am I using in my virtualenvironment?</strong></p> <p>Then, if I run the sample <code>deviceQuery</code> (from the <code>cuda-samples</code> folder - the samples can be installed by following <a href="https://docs.nvidia.com/cuda/cuda-samples/index.html#getting-started-with-cuda-samples" rel="nofollow noreferrer">this link</a>) I get:</p> <pre><code>./deviceQuery ./deviceQuery Starting... 
CUDA Device Query (Runtime API) version (CUDART static linking) Detected 1 CUDA Capable device(s) Device 0: &quot;NVIDIA GeForce RTX 2080 Super with Max-Q Design&quot; CUDA Driver Version / Runtime Version 11.4 / 11.4 CUDA Capability Major/Minor version number: 7.5 Total amount of global memory: 7974 MBytes (8361279488 bytes) (048) Multiprocessors, (064) CUDA Cores/MP: 3072 CUDA Cores GPU Max Clock rate: 1080 MHz (1.08 GHz) Memory Clock rate: 5501 Mhz Memory Bus Width: 256-bit L2 Cache Size: 4194304 bytes Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384) Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers Total amount of constant memory: 65536 bytes Total amount of shared memory per block: 49152 bytes Total shared memory per multiprocessor: 65536 bytes Total number of registers available per block: 65536 Warp size: 32 Maximum number of threads per multiprocessor: 1024 Maximum number of threads per block: 1024 Max dimension size of a thread block (x,y,z): (1024, 1024, 64) Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535) Maximum memory pitch: 2147483647 bytes Texture alignment: 512 bytes Concurrent copy and kernel execution: Yes with 3 copy engine(s) Run time limit on kernels: Yes Integrated GPU sharing Host Memory: No Support host page-locked memory mapping: Yes Alignment requirement for Surfaces: Yes Device has ECC support: Disabled Device supports Unified Addressing (UVA): Yes Device supports Managed Memory: Yes Device supports Compute Preemption: Yes Supports Cooperative Kernel Launch: Yes Supports MultiDevice Co-op Kernel Launch: Yes Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0 Compute Mode: &lt; Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) &gt; deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 11.4, NumDevs = 1 Result = PASS </code></pre> <p><strong>Why is it now mentioned CUDA version 11.4? Is it because I am using the <code>NVIDIA_CUDA-11.4_Samples</code> I guess?</strong></p> <p>Another information is the following. If I check in my <code>/usr/local</code> folder I see three folders related to CUDA.</p> <p>If I do:</p> <p><code>cd /usr/local &amp;&amp; ll | grep -i CUDA</code></p> <p>I get:</p> <pre><code>lrwxrwxrwx 1 root root 22 Oct 7 11:33 cuda -&gt; /etc/alternatives/cuda/ lrwxrwxrwx 1 root root 25 Oct 7 11:33 cuda-11 -&gt; /etc/alternatives/cuda-11/ drwxr-xr-x 16 root root 4096 Oct 7 11:33 cuda-11.4/ </code></pre> <p>Is that normal?</p> <p>Thanks for your help.</p>
<p>PyTorch doesn't use the system's CUDA library. When you install PyTorch using the precompiled binaries using either pip or conda it is shipped with a copy of the specified version of the CUDA library which is installed locally in your environment. In fact, you don't even need to install CUDA on your system to use PyTorch with CUDA support.</p>
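<p>To see this from inside your environment, you can compare the version bundled with the PyTorch build against the system toolkit reported by <code>nvcc</code>:</p>
<pre><code>import torch

print(torch.version.cuda)              # CUDA runtime PyTorch was built with (10.2 here)
print(torch.cuda.is_available())       # True if the installed driver can run it
print(torch.backends.cudnn.version())  # cuDNN version bundled with the build
</code></pre>
<p>The 11.4 reported by <code>deviceQuery</code> comes from the NVIDIA driver and the 11.4 samples you compiled; the driver version only needs to be at least as new as the CUDA runtime bundled with PyTorch, so the mismatch is expected and harmless.</p>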
python|linux|pytorch|cuda|virtual-environment
5
375,132
69,523,275
How to efficiently filter two-dimensional np array by values given in a list (by many values)
<p>I have a two-dimensional np array and I need to efficiently filter it by values given in a list.</p> <pre><code>b = np.array([['a', 'b', 'c', 'd'], ['b', 'a', 'c', 'd'], ['c', 'b', 'a', 'd'], ['a', 'd', 'c', 'b']]) values_to_stay_in_b = ['a', 'b'] </code></pre> <p>I found a solution using set difference, but the position in array b is important.</p> <p>Is there a better solution than the simple list comprehension below?</p> <pre><code>output = [] for l in b: output.append([a for a in l if a in values_to_stay_in_b ]) np.array(output) </code></pre> <p>Result:</p> <pre><code>array([['a', 'b'], ['b', 'a'], ['b', 'a'], ['a', 'b']], dtype='&lt;U1') </code></pre>
<p>Using pure numpy, so at least removing the for loops.</p> <p>For each entry in b, compare it with each entry in values_to_stay_in_b to get a boolean mask. This is done by adding an extra axis and comparing using broadcasting.</p> <p>Any one of these comparisons needs to be true.</p> <p>Since you clarified that after filtering each row has the same number of columns, I reshape the result based on the number of rows:</p> <pre><code>b[(b[..., None] == values_to_stay_in_b).any(axis=2)].reshape(b.shape[0], -1) </code></pre>
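<p>Spelled out step by step with the arrays from the question, just to make the broadcasting explicit:</p>
<pre><code>mask3d = b[..., None] == values_to_stay_in_b  # shape (4, 4, 2): each cell compared with 'a' and 'b'
mask2d = mask3d.any(axis=2)                   # shape (4, 4): True where the cell should be kept
result = b[mask2d].reshape(b.shape[0], -1)    # boolean indexing flattens; reshape back into rows
</code></pre>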
python|arrays|numpy
0
375,133
69,526,947
Confidence Interval 3 dimensional plot
<p>I have a 3-dimensional plot and I am able to plot it with the code written below.</p> <p>Considering that my point distribution is represented by a 100x100 matrix, is it possible to plot a confidence interval on my data? In the code below, my data are called &quot;result&quot;, while the upper bound and lower bound that I want to show are called &quot;upper_bound&quot; and &quot;lower_bound&quot;.</p> <p>For example, I am asking if exist something like this, but in 3 dimension (instead of 2 dimension like the picture below)</p> <p><a href="https://i.stack.imgur.com/64bHz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/64bHz.png" alt="enter image description here" /></a></p> <pre><code>import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm from matplotlib.ticker import LinearLocator, FormatStrFormatter interval = np.random.normal(0, 1, size=(100, 100)) x = np.arange(0.1,1.1,0.01) y = np.linspace(-np.pi,np.pi,100) X,Y = np.meshgrid(x,y) result = [] for i,j in zip(X,Y): result.append(np.log(i)+np.sin(j)) upper_bound = np.array(result)+interval lower_bound = np.array(result)-interval fig = plt.figure() fig.set_figwidth(20) fig.set_figheight(6) ax = fig.gca(projection='3d') surf = ax.plot_surface(X, Y, np.array(result)) ax.zaxis.set_major_locator(LinearLocator(10)) ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f')) fig.colorbar(surf, shrink=0.5, aspect=5) plt.show() </code></pre>
<p>Check out this 3d surface plot using plotly graph objects:</p> <pre><code>import plotly.graph_objects as go import numpy as np x = np.arange(0.1,1.1,0.01) y = np.linspace(-np.pi,np.pi,100) X,Y = np.meshgrid(x,y) result = [] for i,j in zip(X,Y): result.append(np.log(i)+np.sin(j)) upper_bound = np.array(result)+1 lower_bound = np.array(result)-1 fig = go.Figure(data=[ go.Surface(z=result), go.Surface(z=upper_bound, showscale=False, opacity=0.3,colorscale='purp'), go.Surface(z=lower_bound, showscale=False, opacity=0.3,colorscale='purp'), ]) fig.show() </code></pre> <p>This plots 3 surfaces, the one for your results and the 2 bounds. However if you'd like something that looks more like a filled volume you'd have to add volume graphs with scaling opacity.</p>
python|numpy|matplotlib|multidimensional-array|numpy-ndarray
1
375,134
69,327,200
Speed Up Python Function that Extracts Text from PDF
<p>I am currently working on a program that scrapes text from tens of thousands of PDFs of court opinions. I am relatively new to Python and am trying to make this code as efficient as possible. I have gathered from <em>many</em> posts on this site and elsewhere that I should be trying to vectorize my code, but I have tried three methods for doing so without results.</p> <p>My reprex uses these packages and this sample data.</p> <pre><code>import os import pandas as pd import pdftotext import wget df = pd.DataFrame({'OpinionText': [&quot;&quot;], 'URLs': [&quot;https://cases.justia.com/federal/appellate-courts/ca6/20-6226/20-6226-2021-09-17.pdf?ts=1631908842&quot;]}) df = pd.concat([df]*50, ignore_index=True) </code></pre> <p>I started by defining this function, which downloads the PDF, extracts the text, deletes the PDF, and then returns the text.</p> <pre><code>def Link2Text(Link): OpinionPDF = wget.download(Link, &quot;Temporary_Opinion.pdf&quot;) with open(OpinionPDF, &quot;rb&quot;) as f: pdf = pdftotext.PDF(f) OpinionText = &quot;\n\n&quot;.join(pdf) if os.path.exists(&quot;Temporary_Opinion.pdf&quot;): os.remove(&quot;Temporary_Opinion.pdf&quot;) return(OpinionText) </code></pre> <p>The first way that I called the function, which works but is very slow, is:</p> <pre><code>df['OpinionText'] = df['URLs'].apply(Link2Text) </code></pre> <p>Based on what I read about vectorization, I tried calling the function using:</p> <pre><code>df['OpinionText'] = Link2Text(df['URLs']) #and, alternatively: df['OpinionText'] = Link2Text(df['URLs'].values) </code></pre> <p>Both of these returned the same error, which is:</p> <pre><code>Traceback (most recent call last): File &quot;/Users/brendanbernicker/Downloads/Reprex for SO Vectorization Q.py&quot;, line 22, in &lt;module&gt; df['OpinionText'] = Link2Text(df['URLs']) File &quot;/Users/brendanbernicker/Downloads/Reprex for SO Vectorization Q.py&quot;, line 10, in Link2Text OpinionPDF = wget.download(Link, &quot;Temporary_Opinion.pdf&quot;) File &quot;/Applications/anaconda3/lib/python3.8/site-packages/wget.py&quot;, line 505, in download prefix = detect_filename(url, out) File &quot;/Applications/anaconda3/lib/python3.8/site-packages/wget.py&quot;, line 483, in detect_filename if url: File &quot;/Applications/anaconda3/lib/python3.8/site-packages/pandas/core/generic.py&quot;, line 1442, in __nonzero__ raise ValueError( ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). 
[Finished in 0.683s] </code></pre> <p>I gather that this is saying that Python does not know what to do with the input because it is a vector, so I tried replacing the call with the one below and got this traceback.</p> <pre><code>df['OpinionText'] = Link2Text(df['URLs'].item) Traceback (most recent call last): File &quot;/Users/brendanbernicker/Downloads/Reprex for SO Vectorization Q.py&quot;, line 22, in &lt;module&gt; df['OpinionText'] = Link2Text(df['URLs'].item) File &quot;/Users/brendanbernicker/Downloads/Reprex for SO Vectorization Q.py&quot;, line 10, in Link2Text OpinionPDF = wget.download(Link, &quot;Temporary_Opinion.pdf&quot;) File &quot;/Applications/anaconda3/lib/python3.8/site-packages/wget.py&quot;, line 505, in download prefix = detect_filename(url, out) File &quot;/Applications/anaconda3/lib/python3.8/site-packages/wget.py&quot;, line 484, in detect_filename names[&quot;url&quot;] = filename_from_url(url) or '' File &quot;/Applications/anaconda3/lib/python3.8/site-packages/wget.py&quot;, line 230, in filename_from_url fname = os.path.basename(urlparse.urlparse(url).path) File &quot;/Applications/anaconda3/lib/python3.8/urllib/parse.py&quot;, line 372, in urlparse url, scheme, _coerce_result = _coerce_args(url, scheme) File &quot;/Applications/anaconda3/lib/python3.8/urllib/parse.py&quot;, line 124, in _coerce_args return _decode_args(args) + (_encode_result,) File &quot;/Applications/anaconda3/lib/python3.8/urllib/parse.py&quot;, line 108, in _decode_args return tuple(x.decode(encoding, errors) if x else '' for x in args) File &quot;/Applications/anaconda3/lib/python3.8/urllib/parse.py&quot;, line 108, in &lt;genexpr&gt; return tuple(x.decode(encoding, errors) if x else '' for x in args) AttributeError: 'function' object has no attribute 'decode' </code></pre> <p>I tried adding <code>.decode('utf-8')</code> to my function call and within the function to the input, but got the same traceback for both. At this point, I do not know what else to try to speed up my code.</p> <p>I also tried <code>numpy.vectorize</code> with the version that works using <code>.apply</code>, but it dramatically slowed down the execution. I am assuming that those two should not be used together.</p> <p>In the interest of completeness, based on some excellent answers here, I also tried:</p> <pre><code>from numba import njit @njit def Link2Text(Link, Opinion): res = np.empty(Link.shape) for i in range(length(Link)): OpinionPDF = wget.download(Link[i], &quot;Temporary_Opinion.pdf&quot;) with open(OpinionPDF, &quot;rb&quot;) as f: pdf = pdftotext.PDF(f) OpinionText = &quot;\n\n&quot;.join(pdf) if os.path.exists(&quot;Temporary_Opinion.pdf&quot;): os.remove(&quot;Temporary_Opinion.pdf&quot;) Opinion[i] = OpinionText Link2Text(df['URLs'].values, df['OpinionText'].values) </code></pre> <p>I gather that this did not work because numba does not work with the packages I am calling inside the function and is intended more for mathematical operations. If that is not correct and I should be trying to use numba for this, please let me know.</p>
<p>I took the advice in the comments: I did not use pandas, used a list comprehension, and rewrote it as:</p> <pre><code>import io import os import subprocess as sp import wget def pdftotext(path): args = ['pdftotext', '-layout', '-q', path, 'Opinion_Text.txt'] cp = sp.run( args, stdout=sp.PIPE, stderr=sp.DEVNULL, check=True, text=True ) return cp.stdout def Link2Text(Link): OpinionPDF = wget.download(Link, &quot;Temporary_Opinion.pdf&quot;) pdftotext(&quot;Temporary_Opinion.pdf&quot;) with io.open(&quot;Opinion_Text.txt&quot;, mode=&quot;r&quot;, encoding=&quot;utf-8&quot;) as f: OpinionText = f.readlines() if os.path.exists(&quot;Temporary_Opinion.pdf&quot;): os.remove(&quot;Temporary_Opinion.pdf&quot;) if os.path.exists(&quot;Opinion_Text.txt&quot;): os.remove(&quot;Opinion_Text.txt&quot;) return OpinionText Opinions = [Link2Text(item) for item in URLs] </code></pre> <p>This is considerably faster and does exactly what I need. Thanks to everyone who offered advice on this! The next step will be using threading and layout analysis to speed up the IO and clean the data.</p>
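<p>As a rough sketch of the threading step mentioned above (assuming <code>Link2Text</code> is changed to use a unique temporary file name per call, e.g. derived from the URL, so that concurrent downloads do not overwrite each other):</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor

# Download and convert several opinions concurrently; the work is mostly
# I/O bound, so threads help despite the GIL.
with ThreadPoolExecutor(max_workers=8) as pool:
    Opinions = list(pool.map(Link2Text, URLs))
</code></pre>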
python|pandas|vectorization|coding-efficiency|pdftotext
0
375,135
69,444,086
Running loop for frequency calculation in Python
<p>I have the data as shown in the table. I want to use Python. For all the fruits that exist in the year 2016 and 2017, I want the frequencies of country in 2015 for those fruits.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Country</th> <th>Fruit</th> <th>Year</th> </tr> </thead> <tbody> <tr> <td>Germany</td> <td>Apple</td> <td>2015</td> </tr> <tr> <td>France</td> <td>Apple</td> <td>2015</td> </tr> <tr> <td>France</td> <td>Apple</td> <td>2015</td> </tr> <tr> <td>Spain</td> <td>Apple</td> <td>2015</td> </tr> <tr> <td>Germany</td> <td>Banana</td> <td>2015</td> </tr> <tr> <td>France</td> <td>Banana</td> <td>2015</td> </tr> <tr> <td>France</td> <td>Apple</td> <td>2016</td> </tr> <tr> <td>Spain</td> <td>Apple</td> <td>2016</td> </tr> <tr> <td>Germany</td> <td>Banana</td> <td>2016</td> </tr> <tr> <td>France</td> <td>Banana</td> <td>2016</td> </tr> <tr> <td>France</td> <td>Banana</td> <td>2017</td> </tr> <tr> <td>France</td> <td>Grapes</td> <td>2017</td> </tr> </tbody> </table> </div> <p>The final table I want looks like below:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Fruit</th> <th>Germany</th> <th>France</th> <th>Spain</th> </tr> </thead> <tbody> <tr> <td>Apple</td> <td>1</td> <td>2</td> <td>1</td> </tr> <tr> <td>Banana</td> <td>1</td> <td>1</td> <td>0</td> </tr> <tr> <td>Grapes</td> <td>0</td> <td>0</td> <td>0</td> </tr> </tbody> </table> </div>
<p>Try:</p> <pre><code>df_2015 = df[df['Year'] == 2015] pd.crosstab(df_2015['Fruit'], df_2015['Country']).reindex(df['Fruit'].unique(), fill_value=0) </code></pre> <p>Output:</p> <pre><code>Country France Germany Spain Fruit Apple 2 1 1 Banana 1 1 0 Grapes 0 0 0 </code></pre>
python|pandas|loops|dummy-variable
2
375,136
69,623,839
How to pass a pandas dataframe from main class to another class?
<p>There are lots of widgets in the original code and that is why I need to open the file in the main window. Therefore, I need to pass a dataframe (data_df) that comes from a csv file open in the main menu (main class) to 'MyApp' class. I will use the dataframe (input_df) to perform calculations down the road.</p> <p>How to pass the data from main class to MyApp class?</p> <pre><code># Import dependencies from PyQt5.QtWidgets import (QWidget, QApplication, QTableWidget, QTableWidgetItem, QHBoxLayout, QVBoxLayout, QHeaderView, QPushButton, QCheckBox, QLabel, QFileDialog, QMainWindow, QAction, QLineEdit, QMessageBox, QComboBox, QSizePolicy) from PyQt5.Qt import Qt, QPen, QFont from PyQt5.QtGui import * from PyQt5.QtChart import QChart, QChartView, QLineSeries, QCategoryAxis import sys import pandas as pd import math import csv # Creates a QApplication instance class MyApp(QWidget): def __init__(self): super().__init__() # Creates layout object self.layout = QHBoxLayout() # Create push buttons self.buttonCalc = QPushButton('Calculate') self.layout.addWidget(self.buttonCalc) # Connect button to function self.buttonCalc.clicked.connect(self.calculate) def displayInfo(self): self.show() # Create a Model to handle the calculator's operation def calculate(self): # get dataframe input_df = df # Create a subclass of QMainWindow to setup the main GUI class MainWindow(QMainWindow): def __init__(self, w): super().__init__() self.setWindowTitle('My code') # for icon, uncomment line below #self.setWindowIcon(QIcon(r'c:\image.png')) self.resize(1200, 1200) self.myApp = MyApp() self.menuBar = self.menuBar() self.fileMenu = self.menuBar.addMenu('File') # import data importAction = QAction('Open csv File', self) importAction.setShortcut('Ctrl+O') importAction.triggered.connect(self.openSeries) # exit action exitAction = QAction('Exit', self) exitAction.setShortcut('Ctrl+Q') exitAction.triggered.connect(lambda: app.quit()) self.fileMenu.addAction(importAction) self.fileMenu.addAction(exitAction) self.setCentralWidget(w) def openSeries(self): self.filePath = QFileDialog.getOpenFileName(self, 'Open data series csv file', 'C:\', 'CSV(*.csv)') if self.filePath != ('', ''): file_data = self.filePath[0] data_df = pd.read_csv(file_data, encoding='ISO-8859-1') # I need to pass this dataframe to MyApp class return data_df def passInformation(self): self.myApp.input_df if __name__ =='__main__': app = QApplication(sys.argv) w = MyApp() window = MainWindow(w) window.show() try: sys.exit(app.exec()) except SystemExit: print('Closing window...') </code></pre>
<p>You can pass the data from one another through the <code>__init__</code> method using something like this on your main window class:</p> <pre><code>class MainWindow(QtWidgets.QWidget): def __init__(self, parent=None): super(MainWindow, self).__init__(parent) self.init_ui() def goToOtherWindow(self, variable): self.window = OtherWindow(variable) self.window.show() self.close() </code></pre> <p>On <code>OtherWindow</code> class:</p> <pre><code>class OtherWindow(QtWidgets.QWidget): def __init__(self, variable, parent=None): super(OtherWindow, self).__init__(parent) self.variable = variable self.init_ui() </code></pre> <p>Of course, you have to adapt this function to your specific case.</p>
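<p>Applied to the code in the question, a minimal sketch could look like the following (the <code>set_data</code> method name is just illustrative; also note that in the question <code>w</code> and <code>self.myApp</code> are two different <code>MyApp</code> instances, so <code>MainWindow</code> should keep and reuse the instance it actually displays):</p>
<pre><code>class MyApp(QWidget):
    def __init__(self):
        super().__init__()
        self.input_df = None                 # will hold the dataframe passed in later
        # ... existing widget setup ...

    def set_data(self, data_df):
        self.input_df = data_df

    def calculate(self):
        if self.input_df is None:
            return                           # nothing loaded yet
        # ... perform the calculations on self.input_df ...


class MainWindow(QMainWindow):
    def __init__(self, w):
        super().__init__()
        self.myApp = w                       # reuse the widget that is displayed
        # ... existing menu bar / central widget setup ...

    def openSeries(self):
        self.filePath = QFileDialog.getOpenFileName(self, 'Open data series csv file', '', 'CSV(*.csv)')
        if self.filePath != ('', ''):
            data_df = pd.read_csv(self.filePath[0], encoding='ISO-8859-1')
            self.myApp.set_data(data_df)     # hand the dataframe over to MyApp
</code></pre>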
python|pandas|pyqt5
0
375,137
69,586,002
How to convert a pandas dataframe into a format (similar to one hot encoding) taking an amount-column into account
<p>What is the most convenient way to convert a pandas dataframe (entailing date, amount, category) into a one hot endocing format which takes the amount-column into account. Please see the example below.</p> <p><a href="https://i.stack.imgur.com/h6sVw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h6sVw.png" alt="enter image description here" /></a></p>
<p>You can just loop over the dataframe and use the values of each entry to index into the new dataframe. See the example below:</p> <pre><code>import pandas as pd # create the example dataframes d1 = {'value' : [100,200,300], 'char': ['a' , 'b', 'c']} d2 = {'a' : [None, None, None], 'b': [None, None, None], 'c': [None, None, None] } d1 = pd.DataFrame(data=d1) d2 = pd.DataFrame(data=d2) # loop over the entries and use their values for indexing into the new dataframe for i in d1.index: d2.loc[i, d1['char'][i]] = d1['value'][i] print(d2) </code></pre>
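<p>If the goal is the one-hot-like layout from the screenshot, pandas can also build it without an explicit loop, for example with <code>pivot</code> (shown on the same example data as above):</p>
<pre><code>d3 = d1.pivot(columns='char', values='value')
print(d3)
# char      a      b      c
# 0     100.0    NaN    NaN
# 1       NaN  200.0    NaN
# 2       NaN    NaN  300.0
</code></pre>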
python|pandas
0
375,138
69,592,003
Why is meshgrid changing (x, y, z) order to (y, x, z)?
<p>I have 3 vectors:</p> <pre><code>u = np.array([0, 100, 200, 300]) #hundreds v = np.array([0, 10, 20]) #tens w = np.array([0, 1]) #units </code></pre> <p>Then I used <code>np.meshgrid</code> to sum <code>u[i]+v[j],w[k]</code>:</p> <pre><code>x, y, z = np.meshgrid(u, v, w) func1 = x + y + z </code></pre> <p>So, when (i,j,k)=(3,2,1), <code>func1[i, j, k]</code> should return 321, but I only get 321 if I put <code>func1[2, 3, 1]</code>. Why is it asking me for vector <code>v</code> before <code>u</code>? Should I use <code>numpy.ix_</code> instead?</p>
<p>From the <code>meshgrid</code> docs:</p> <pre><code>Notes ----- This function supports both indexing conventions through the indexing keyword argument. Giving the string 'ij' returns a meshgrid with matrix indexing, while 'xy' returns a meshgrid with Cartesian indexing. In the 2-D case with inputs of length M and N, the outputs are of shape (N, M) for 'xy' indexing and (M, N) for 'ij' indexing. In the 3-D case with inputs of length M, N and P, outputs are of shape (N, M, P) for 'xy' indexing and (M, N, P) for 'ij' indexing. In [109]: U,V,W = np.meshgrid(u,v,w, sparse=True) In [110]: U Out[110]: array([[[ 0], # (1,4,1) [100], [200], [300]]]) In [111]: U+V+W Out[111]: array([[[ 0, 1], [100, 101], [200, 201], [300, 301]], [[ 10, 11], [110, 111], [210, 211], [310, 311]], [[ 20, 21], [120, 121], [220, 221], [320, 321]]]) </code></pre> <p>The result is (3,4,2) array; This is the <code>cartesian</code> case described in the notes.</p> <p>With the documented <code>indexing</code> change:</p> <pre><code>In [113]: U,V,W = np.meshgrid(u,v,w, indexing='ij',sparse=True) In [114]: U.shape Out[114]: (4, 1, 1) In [115]: (U+V+W).shape Out[115]: (4, 3, 2) </code></pre> <p>Which matches the <code>ix_</code> that you wanted:</p> <pre><code>In [116]: U,V,W = np.ix_(u,v,w) In [117]: (U+V+W).shape Out[117]: (4, 3, 2) </code></pre> <p>You are welcome to use either. Or even <code>np.ogrid</code> as mentioned in the docs.</p> <p>Or even the home-brewed broadcasting:</p> <pre><code>In [118]: (u[:,None,None]+v[:,None]+w).shape Out[118]: (4, 3, 2) </code></pre> <p>Maybe the 2d layout clarifies the two coordinates:</p> <pre><code>In [119]: Out[111][:,:,0] Out[119]: array([[ 0, 100, 200, 300], # u going across, x-axis [ 10, 110, 210, 310], [ 20, 120, 220, 320]]) In [120]: (u[:,None,None]+v[:,None]+w)[:,:,0] Out[120]: array([[ 0, 10, 20], # u going down - rows [100, 110, 120], [200, 210, 220], [300, 310, 320]]) </code></pre>
python|numpy
1
375,139
69,352,013
Convert string to multidimensional array in Python
<p>I'm having a problem managing some data that are saved in a really awful format.</p> <p>I have <a href="https://filebin.net/mfqrxtphkno5kbw0" rel="nofollow noreferrer">data for points</a> that correspond to the edges of a polygon. The data for each polygon is separated by the string <code>&gt;</code>, while the <code>x</code> and <code>y</code> values for the points are separated with non-unified criteria, sometimes with a number of spaces, sometimes with some spaces and a tabulation. I've tried to load such data to an array of arrays with the following code:</p> <pre class="lang-py prettyprint-override"><code>f = open('/Path/Data.lb','r') data = f.read() splat = data.split('&gt;') region = [] for number, polygon in enumerate(splat[1:len(splat)], 1): region.append(float(polygon)) </code></pre> <p>But I keep getting an error trying to run the <code>float()</code> function (I've cut it as it's much longer):</p> <pre class="lang-py prettyprint-override"><code>ValueError: could not convert string to float: '\n -73.311 -48.328\n -73.311 -48.326\n -73.318 -48.321\n ... ... -73.324\t -48.353\n -73.315\t -48.344\n -73.313\t -48.337\n' </code></pre> <p>Is there a way to convert the data to float without modifying the source file? If not, is there a way to easily modify the source file so that all columns are separated the same way? I guess that way the same code should run smoothly.</p> <p>Thanks!</p>
<p>You can use regex to match decimal numbers.</p> <pre><code>import re PATH = &lt;path_to_file&gt; coords = [] with open(PATH) as f: for line in f: nums = re.findall('-?\d+\.\d+', line) if len(nums) &gt;0: coords.append(nums) print(coords) </code></pre> <p><strong>Note:</strong> this solution ignores the trailing 0 at the end of some lines. Be aware that the results in <code>coords</code> are still strings. You'll need to convert them to float using <code>float()</code>.</p>
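<p>If you also need the <code>&gt;</code> separators to split the points into their polygons, here is a sketch along the same lines, converting to float at the same time (assuming each polygon block is introduced by a line containing <code>&gt;</code>, as in the question):</p>
<pre><code>import re

polygons = []      # list of polygons, each a list of (x, y) float pairs
current = []
with open(PATH) as f:
    for line in f:
        if line.lstrip().startswith('&gt;'):          # marker line: start of a new polygon
            if current:
                polygons.append(current)
            current = []
        else:
            nums = re.findall(r'-?\d+\.\d+', line)
            if len(nums) &gt;= 2:
                current.append((float(nums[0]), float(nums[1])))
if current:
    polygons.append(current)                        # keep the last polygon
</code></pre>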
python|arrays|string|numpy|multidimensional-array
0
375,140
69,354,179
How to get the classes from a Binary Image Classification model with Keras?
<p>Currently I am working on a binary classification model using Keras (version 2.6.0). I built a simple model with three blocks of 2D convolution (Conv2D + ReLU + Pooling), then a final block containing a Flatten, a Dropout and two Dense layers. I have a small dataset of images on my disk, organized in a main directory like this:</p> <pre><code>/content/data/ .............train/ ..................classA/ ........................img1.jpg ........................img2.jpg . . . ..................classB/ ........................img1.jpg ........................img2.jpg . . . </code></pre> <p>After the training step I have the following learning curves: <a href="https://i.stack.imgur.com/CIet0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CIet0.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/Cni7D.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cni7D.png" alt="enter image description here" /></a></p> <p>Even with the noisy behavior, they seem fine to me (correct me if I am wrong): no overfitting, the training and validation curves have the same behavior, and after 15 epochs I get an accuracy of 1 and losses below 0.2.</p> <h1>Question:</h1> <p>When I test the model, I want to display which class the image belongs to, A or B.</p> <p>I tried the following:</p> <pre><code>predictions = MODEL.predict(img_array) score = np.argmax(predictions) prob = tf.nn.sigmoid(predictions[0]) </code></pre> <p>but I get the same score (0) for two different images belonging to two different classes.</p> <p>I appreciate any suggestions or written documents, because the Keras documentation didn't specify the details of this step. Thanks in advance.</p>
<p>Try this:</p> <pre><code>ImagePath = &quot;YourImagePath&quot; img = keras.preprocessing.image.load_img( ImagePath, target_size=image_size ) img_array = keras.preprocessing.image.img_to_array(img) img_array = tf.expand_dims(img_array, 0) # Create batch axis predictions = model.predict(img_array) score = predictions[0] print( &quot;This image is %.2f percent cat and %.2f percent dog.&quot; % (100 * (1 - score), 100 * score) ) # Will print on the Console </code></pre> <p>And here is a tutorial by Adrian Rosebrock that you can follow for more details:</p> <p><a href="https://www.pyimagesearch.com/2017/12/11/image-classification-with-keras-and-deep-learning/" rel="nofollow noreferrer">Image classification with Keras and deep learning</a></p>
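<p>One likely explanation for always getting 0: if the final Dense layer has a single sigmoid unit (the usual setup for binary classification), <code>predictions</code> has shape (1, 1), so <code>np.argmax</code> over it is always 0. In that case, threshold the probability instead (the class names below are placeholders and must match the order used when the dataset was loaded, usually alphabetical):</p>
<pre><code>prob = float(MODEL.predict(img_array)[0][0])   # sigmoid output in [0, 1]
class_names = ['classA', 'classB']             # assumed alphabetical class order
label = class_names[int(prob &gt; 0.5)]
print('%s (probability %.2f)' % (label, prob))
</code></pre>
<p>If the last layer instead has two softmax units, <code>np.argmax(predictions)</code> is the right call and the problem lies elsewhere.</p>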
python|tensorflow|keras
0
375,141
40,916,388
Python Pandas Bokeh Indexerror: list index out of range - why?
<p>I'm having trouble with the code below:</p> <pre><code>from bokeh.plotting import figure, output_file, show, save from bokeh.models import ColumnDataSource from bokeh.models import Range1d, LinearAxis import pandas as pd from pandas import HDFStore from bokeh.palettes import Spectral9 store = pd.HDFStore('&lt;hdf store location&gt;') df = pd.DataFrame(store['d1']) df = df.rename_axis('Time') df.fillna(0) #the number of colums is the number of lines that we will make numlines = len(df.columns) #import colour pallet mypalette = Spectral9[0:numlines] # remove unwanted columns col_list = ['Col1', 'Col2', 'Col3'] df = df[col_list] # make the figure, p = figure(x_axis_type="datetime", title="&lt;title&gt;", width = 800, height = 450) p.xaxis.axis_label = 'Date' p.yaxis.axis_label = '&lt;y axis label&gt;' p.line(df.index, df['Col1'], legend = 'Col1', color = mypalette[0] ) p.line(df.index, df['Col2'], legend = 'Col2', color = mypalette[1] ) # add extra y axis p.extra_y_ranges = {'Col3': Range1d(start=0, end=1)} p.circle(df.index, df['Col3'], legend = 'Col3', color = mypalette[8], y_range_name='Col3' ) p.add_layout(LinearAxis(y_range_name='Col3'), 'right') # creates an output file output_file('&lt;output file location&gt;') #save the plot save(p) </code></pre> <p>This is what my dataframe looks like:</p> <pre><code> Time Col1 Col2 Col3 Col4 29/11/2016 00:00 4 41 41 55 29/11/2016 01:00 55 15 61 81 29/11/2016 02:00 51 75 2 4 29/11/2016 03:00 21 21 51 9 etc. </code></pre> <p>When I try to run the code above, I get the following error:</p> <pre><code>IndexError Traceback (most recent call last) &lt;ipython-input-20-9d2c8911130d&gt; in &lt;module&gt;() 38 39 # add extra y axis ---&gt; 40 p.circle(df.index, df['Col3'], legend = 'Col3', color = mypalette[8], y_range_name='Col3') 41 p.add_layout(LinearAxis(y_range_name='Col3'), 'right') 42 IndexError: list index out of range </code></pre> <p>I can't seem to work out what I am doing wrong. Can anyone help?</p>
<p>The following lines appear towards the top of your code.</p> <pre><code>#the number of colums is the number of lines that we will make numlines = len(df.columns) #import colour pallet mypalette = Spectral9[0:numlines] </code></pre> <p>In the first line, you set numlines equal to the number of columns you have. You only have 4 columns in your dataframe. In your second line, you set mypalette equal to the first N elements of Spectral9, where n is the number of lines you have. As such, your palette is limited to the first 4 elements of Spectral9.</p> <p>Later in your code, you try to grab the 9th element of mypalette (which would be [8] with zero-indexing in python).</p> <pre><code>p.circle(df.index, df['Col3'], legend = 'Col3', color = mypalette[8], y_range_name='Col3' ) </code></pre> <p>You limited mypalette to have only 4 elements, so mypalette[8] is out of range. If you want to use that specific color, you can consider using <code>color = Spectral9[8]</code> instead of <code>color = mypalette[8]</code>.</p>
python|pandas|range|bokeh|index-error
2
375,142
41,225,041
Dataframe complex reformatting
<p>I would like to transform this dataframe:</p> <pre><code>import pandas as pd df = pd.DataFrame.from_items([('a', [13,'F','RD',0,0,1,0,1]), ('b', [45,'M','RD',1,1,0,1,0]), ('c', [67,'F','AN',0,0,1,0,1]), ('d', [23,'M','AN',1,0,0,1,1])], orient='index', columns=['AGE', 'SEX', 'REG', 'A', 'B', 'C', 'D', 'E']) print df AGE SEX REG A B C D E a 13 F RD 0 0 1 0 1 b 45 M RD 1 1 0 1 0 c 67 F AN 0 0 1 0 1 d 23 M AN 1 0 0 1 1 </code></pre> <p>To be transform into: </p> <pre><code> AGE SEX REG PRODUCT PA a 13 F RD A 0 a 13 F RD B 0 a 13 F RD C 1 a 13 F RD D 0 a 13 F RD E 1 b 45 M RD A 1 b 45 M RD B 1 b 45 M RD C 0 b 45 M RD D 1 b 45 M RD E 0 c 67 F AN A 0 c 67 F AN B 0 c 67 F AN C 1 c 67 F AN D 0 c 67 F AN E 1 d 23 M AN A 1 d 23 M AN B 0 d 23 M AN C 0 d 23 M AN D 1 d 23 M AN E 1 </code></pre> <p>So basically repeating the each product (A,B,C,D,E) for each users (a, b, c, d) and attribute the value for each user/product. The original table has thousand of rows.</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>stack</code></a>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a> and last <code>rename</code> column name to <code>PRODUCT</code>:</p> <pre><code>print (df.set_index(['AGE','SEX','REG']) .stack() .reset_index(name='PA') .rename(columns={'level_3':'PRODUCT'})) AGE SEX REG PRODUCT PA 0 13 F RD A 0 1 13 F RD B 0 2 13 F RD C 1 3 13 F RD D 0 4 13 F RD E 1 5 45 M RD A 1 6 45 M RD B 1 7 45 M RD C 0 8 45 M RD D 1 9 45 M RD E 0 10 67 F AN A 0 11 67 F AN B 0 12 67 F AN C 1 13 67 F AN D 0 14 67 F AN E 1 15 23 M AN A 1 16 23 M AN B 0 17 23 M AN C 0 18 23 M AN D 1 19 23 M AN E 1 </code></pre> <hr> <pre><code>print (df.set_index(['AGE','SEX','REG'], append=True) .stack() .reset_index([1,2,3,4], name='PA') .rename(columns={'level_4':'PRODUCT'})) AGE SEX REG PRODUCT PA a 13 F RD A 0 a 13 F RD B 0 a 13 F RD C 1 a 13 F RD D 0 a 13 F RD E 1 b 45 M RD A 1 b 45 M RD B 1 b 45 M RD C 0 b 45 M RD D 1 b 45 M RD E 0 c 67 F AN A 0 c 67 F AN B 0 c 67 F AN C 1 c 67 F AN D 0 c 67 F AN E 1 d 23 M AN A 1 d 23 M AN B 0 d 23 M AN C 0 d 23 M AN D 1 d 23 M AN E 1 </code></pre>
python|pandas|dataframe|formatting
0
375,143
41,071,446
Merging back new calculations in original pandas dataframe
<p>Say I have a Pandas dataframe named 'df' and seen below:</p> <pre><code> X Y Z 0 -3 6 -7 1 -4 -10 -1 2 9 -10 -9 3 5 0 -8 4 -2 1 -8 </code></pre> <p>And I want to create a new frame out of some of the rows in df:</p> <pre><code>new_df = df.loc[(df['X'] == -3) &amp; (df['X'] == 9)] </code></pre> <p>And then I modify the new data frame:</p> <pre><code>new_df.Y = 150 </code></pre> <p>Is there an easy way to insert my new data frame back into the old frame, replacing only those values that I modified in the new frame?</p> <p>So in the end, I would have the original 'df' and it would look like this:</p> <pre><code> X Y Z 0 -3 150 -7 1 -4 -10 -1 2 9 -150 -9 3 5 0 -8 4 -2 1 -8 </code></pre>
<p>If you want to replace values with conditions, you can do it in one step, i.e. specify the row and column conditions and assign values, and you can avoid merging the new data frame with the original data frame:</p> <pre><code>df.loc[(df['X'] == -3) | (df['X'] == 9), "Y"] = 150 # I assume you mean or instead of and from your result df # X Y Z #0 -3 150 -7 #1 -4 -10 -1 #2 9 150 -9 #3 5 0 -8 #4 -2 1 -8 </code></pre> <p>As long as the index of the <code>new_df</code> is not modified, you can assign the <code>new_df.Y</code> back to df after modification has been made to <code>new_df</code>:</p> <pre><code>df.loc[(df['X'] == -3) | (df['X'] == 9), "Y"] = new_df.Y </code></pre> <p>Or even:</p> <pre><code>df.loc[(df['X'] == -3) | (df['X'] == 9)] = new_df </code></pre>
python|pandas
2
375,144
41,007,797
Pandas: find most frequent values in columns of lists
<pre><code> x animal 0 5 [dog, cat] 1 6 [dog] 2 8 [elephant] </code></pre> <p>I have a dataframe like this. How can I find the most frequent animals contained in all the lists of a column?</p> <p>The method value_counts() considers each list as one element, so I can't use it.</p>
<p>something along these lines?</p> <pre><code>import pandas as pd df = pd.DataFrame({'x' : [5,6,8], 'animal' : [['dog', 'cat'], ['elephant'], ['dog']]}) x = sum(df.animal, []) #x #Out[15]: ['dog', 'cat', 'elephant', 'dog'] from collections import Counter c = Counter(x) c.most_common(1) #Out[17]: [('dog', 2)] </code></pre>
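<p>On pandas 0.25 or newer you can also let pandas do the flattening with <code>Series.explode</code>:</p>
<pre><code>counts = df.animal.explode().value_counts()
print(counts.idxmax())   # 'dog' (the most common animal)
print(counts)            # dog: 2, cat: 1, elephant: 1
</code></pre>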
pandas
4
375,145
40,837,066
Numpy: convert 2D array of indices to 1D array for intersection calculation
<p>I have a situation where I need to do the intersection of two binary image arrays in python. Ideally, I do this pretty quickly. </p> <hr> <p>Numpy has the <code>intersect1d</code> function that will do the job, if I can turn my coordinates into single elements. </p> <p>Right now (since I know the dimensions of my photos), I do the trick by converting everything into integer format using a multiply, sum, intersection...then unpack using similar means. </p> <pre><code>def npimg_intersection(A,B): Aargwhere = np.argwhere(A==0) Bargwhere = np.argwhere(B==0) Aargwhere[:,0] = Aargwhere[:,0]*1000 Aargwhere = np.sum(Aargwhere,axis=1) Bargwhere[:,0] = Bargwhere[:,0]*1000 Bargwhere = np.sum(Bargwhere,axis=1) Iargwhere0 = np.intersect1d(Aargwhere,Bargwhere) Iargwhere = np.zeros(shape=(Iargwhere0.shape[0],2),dtype=Iargwhere0.dtype) Iargwhere[:,0] = Iargwhere0[:]/1000 Iargwhere[:,1] = Iargwhere0[:]%1000 I = np.zeros(shape = A.shape,dtype=A.dtype) I[:,:] = 255 I[Iargwhere[:,0],Iargwhere[:,1]] = 0 return I </code></pre> <p>And it works. Fairly quickly. </p> <hr> <p>But what is the correct (less hack-ish) way to do this using numpy?</p>
<p>Two approaches could be suggested -</p> <pre><code>255*(~((A==0) &amp; (B==0))).astype(A.dtype) 255*(((A!=0) | (B!=0))).astype(A.dtype) </code></pre>
python|arrays|numpy
1
375,146
40,888,274
How to loop list value of a specific column in pandas?
<p>I have a pandas dataframe, which the first column are list values. I want to loop each str value of each list, and the values of next columns will be in included together.</p> <p>For example:</p> <pre><code>tm = pd.DataFrame({'author':[['author_a1','author_a2','author_a3'],['author_b1','author_b2'],['author_c1','author_c2']],'journal':['journal01','journal02','journal03'],'date':pd.date_range('2015-02-03',periods=3)}) tm author date journal 0 [author_a1, author_a2, author_a3] 2015-02-03 journal01 1 [author_b1, author_b2] 2015-02-04 journal02 2 [author_c1, author_c2] 2015-02-05 journal03 </code></pre> <p>I want this:</p> <pre><code> author date journal 0 author_a1 2015-02-03 journal01 1 author_a2 2015-02-03 journal01 2 author_a3 2015-02-03 journal01 3 author_b1 2015-02-04 journal02 4 author_b2 2015-02-04 journal02 5 author_c1 2015-02-05 journal03 6 author_c2 2015-02-05 journal03 </code></pre> <hr> <p>I 've used a complex method to solve the problem. Is there any simple and efficient method by using pandas?</p> <pre><code>author_use = [] date_use = [] journal_use = [] for i in range(0,len(tm['author'])): for m in range(0,len(tm['author'][i])): author_use.append(tm['author'][i][m]) date_use.append(tm['date'][i]) journal_use.append(tm['journal'][i]) df_author = pd.DataFrame({'author':author_use, 'date':date_use, 'journal':journal_use, }) df_author </code></pre>
<p>I think you can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html" rel="nofollow noreferrer"><code>numpy.repeat</code></a> for repeat values by legths by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.len.html" rel="nofollow noreferrer"><code>str.len</code></a> and flat values of nested <code>lists</code> by <code>chain</code>:</p> <pre><code>from itertools import chain lens = tm.author.str.len() df = pd.DataFrame({ "date": np.repeat(tm.date.values, lens), "journal": np.repeat(tm.journal.values,lens), "author": list(chain.from_iterable(tm.author))}) print (df) author date journal 0 author_a1 2015-02-03 journal01 1 author_a2 2015-02-03 journal01 2 author_a3 2015-02-03 journal01 3 author_b1 2015-02-04 journal02 4 author_b2 2015-02-04 journal02 5 author_c1 2015-02-05 journal03 6 author_c2 2015-02-05 journal03 </code></pre> <p>Another <code>numpy</code> solution:</p> <pre><code>df = pd.DataFrame(np.column_stack((tm[['date','journal']].values.\ repeat(list(map(len,tm.author)),axis=0) ,np.hstack(tm.author))), columns=['date','journal','author']) print (df) date journal author 0 2015-02-03 00:00:00 journal01 auther_a1 1 2015-02-03 00:00:00 journal01 auther_a2 2 2015-02-03 00:00:00 journal01 auther_a3 3 2015-02-04 00:00:00 journal02 auther_b1 4 2015-02-04 00:00:00 journal02 auther_b2 5 2015-02-05 00:00:00 journal03 auther_c1 6 2015-02-05 00:00:00 journal03 auther_c2 </code></pre>
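<p>On pandas 0.25 or newer, <code>DataFrame.explode</code> gives the same result directly:</p>
<pre><code>df = tm.explode('author').reset_index(drop=True)
print(df)
# one row per author, with the matching date and journal repeated
</code></pre>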
python|loops|pandas
2
375,147
41,225,604
Concat list of pandas data frame, but ignoring column name
<p>Sub-title: Dumb it down pandas, stop trying to be clever.</p> <p>I've a list (<code>res</code>) of single-column pandas data frames, each containing the same kind of numeric data, but each with a different column name. The row indices have no meaning. I want to put them into a single, very long, single-column data frame.</p> <p>When I do <code>pd.concat(res)</code> I get one column per input file (and loads and loads of NaN cells). I've tried various values for the parameters (*), but none that do what I'm after.</p> <p>Edit: Sample data:</p> <pre><code>res = [ pd.DataFrame({'A':[1,2,3]}), pd.DataFrame({'B':[9,8,7,6,5,4]}), pd.DataFrame({'C':[100,200,300,400]}), ] </code></pre> <p>I have an ugly-hack solution: copy every data frame and giving it a new column name:</p> <pre><code>newList = [] for r in res: r.columns = ["same"] newList.append(r) pd.concat( newList, ignore_index=True ) </code></pre> <p>Surely that is not the best way to do it??</p> <p>BTW, <a href="https://stackoverflow.com/q/34282847/841830">pandas: concat data frame with different column name</a> is similar, but my question is even simpler, as I don't want the index maintained. (I also start with a list of N single-column data frames, not a single N-column data frame.)</p> <p>*: E.g. <code>axis=0</code> is default behaviour. <code>axis=1</code> gives an error. <code>join="inner"</code> is just silly (I only get the index). <code>ignore_index=True</code> renumbers the index, but I stil gets lots of columns, lots of NaNs.</p> <hr> <p><strong>UPDATE for empty lists</strong></p> <p>I was having problems (with all the given solutions) when the data had an empty list, something like:</p> <pre><code>res = [ pd.DataFrame({'A':[1,2,3]}), pd.DataFrame({'B':[9,8,7,6,5,4]}), pd.DataFrame({'C':[]}), pd.DataFrame({'D':[100,200,300,400]}), ] </code></pre> <p>The trick was to force the type, by adding <code>.astype('float64')</code>. E.g.</p> <pre><code>pd.Series(np.concatenate([df.values.ravel().astype('float64') for df in res])) </code></pre> <p>or:</p> <pre><code>pd.concat(res,axis=0).astype('float64').stack().reset_index(drop=True) </code></pre>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>stack</code></a>:</p> <pre><code>print (pd.concat(res, axis=1)) A B C 0 1.0 9 100.0 1 2.0 8 200.0 2 3.0 7 300.0 3 NaN 6 400.0 4 NaN 5 NaN 5 NaN 4 NaN print (pd.concat(res, axis=1).stack().reset_index(drop=True)) 0 1.0 1 9.0 2 100.0 3 2.0 4 8.0 5 200.0 6 3.0 7 7.0 8 300.0 9 6.0 10 400.0 11 5.0 12 4.0 dtype: float64 </code></pre> <p>Another solution with <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html" rel="nofollow noreferrer"><code>numpy.ravel</code></a> for flattening:</p> <pre><code>print (pd.Series(pd.concat(res, axis=1).values.ravel()).dropna()) 0 1.0 1 9.0 2 100.0 3 2.0 4 8.0 5 200.0 6 3.0 7 7.0 8 300.0 10 6.0 11 400.0 13 5.0 16 4.0 dtype: float64 </code></pre> <hr> <pre><code>print (pd.DataFrame(pd.concat(res, axis=1).values.ravel(), columns=['col']).dropna()) col 0 1.0 1 9.0 2 100.0 3 2.0 4 8.0 5 200.0 6 3.0 7 7.0 8 300.0 10 6.0 11 400.0 13 5.0 16 4.0 </code></pre> <p>Solution with <code>list comprehension</code>:</p> <pre><code>print (pd.Series(np.concatenate([df.values.ravel() for df in res]))) 0 1 1 2 2 3 3 9 4 8 5 7 6 6 7 5 8 4 9 100 10 200 11 300 12 400 dtype: int64 </code></pre>
python|pandas|rbind
5
375,148
41,085,803
tensorflow slim fix bias, train weights
<p>What is the best way to create a slim fully connected layer with fixed biases? Currently I have:</p> <pre><code>fc = slim.fully_connected(inputs, num_outputs = 10) </code></pre> <p>but the fully_connected function only allows fixing both the weights and biases at the same time.</p>
<p>It turns out that if you pass in an identity normalizer_fn, no biases are created and this is good enough for me:</p> <pre><code>fc = slim.fully_connected(normed, num_outputs = self.nInternalUnits, normalizer_fn=lambda x: x ) </code></pre>
tensorflow
0
375,149
41,011,469
Improving performance of Python for loop?
<p>I am trying to write a code to construct dataFrame which consists of cointegrating pairs of portfolios (stock price is cointegrating). In this case, stocks in a portfolio are selected from S&amp;P500 and they have the equal weights. </p> <p>Also, for some economical issue, the portfolios must include the same sectors. </p> <p>For example: if stocks in one portfolio are from [IT] and [Financial] sectors, the second portoflio must select stocks from [IT] and [Financial] sectors. </p> <p>There are no correct number of stocks in a portfolio, so I'm considering about 10 to 20 stocks for each of them. However, when it comes to think about the combination, this is (500 choose 10), so I have an issue of computation time. </p> <p>The followings are my code:</p> <pre><code>def adf(x, y, xName, yName, pvalue=0.01, beta_lower=0.5, beta_upper=1): res=pd.DataFrame() regress1, regress2 = pd.ols(x=x, y=y), pd.ols(x=y, y=x) error1, error2 = regress1.resid, regress2.resid test1, test2 = ts.adfuller(error1, 1), ts.adfuller(error2, 1) if test1[1] &lt; pvalue and test1[1] &lt; test2[1] and\ regress1.beta["x"] &gt; beta_lower and regress1.beta["x"] &lt; beta_upper: res[(tuple(xName), tuple(yName))] = pd.Series([regress1.beta["x"], test1[1]]) res = res.T res.columns=["beta","pvalue"] return res elif test2[1] &lt; pvalue and regress2.beta["x"] &gt; beta_lower and\ regress2.beta["x"] &lt; beta_upper: res[(tuple(yName), tuple(xName))] = pd.Series([regress2.beta["x"], test2[1]]) res = res.T res.columns=["beta","pvalue"] return res else: pass def coint(dataFrame, nstocks = 2, pvalue=0.01, beta_lower=0.5, beta_upper=1): # dataFrame = pandas_dataFrame, in this case, data['Adj Close'], row=time, col = tickers # pvalue = level of significance of adf test # nstocks = number of stocks considered for adf test (equal weight) # if nstocks &gt; 2, coint return cointegration between portfolios # beta_lower = lower bound for slope of linear regression # beta_upper = upper bound for slope of linear regression a=time.time() tickers = dataFrame.columns tcomb = itertools.combinations(dataFrame.columns, nstocks) res = pd.DataFrame() sec = pd.DataFrame() for pair in tcomb: xName, yName = list(pair[:int(nstocks/2)]), list(pair[int(nstocks/2):]) xind, yind = tickers.searchsorted(xName), tickers.searchsorted(yName) xSector = list(SNP.ix[xind]["Sector"]) ySector = list(SNP.ix[yind]["Sector"]) if set(xSector) == set(ySector): sector = [[(xSector, ySector)]] x, y = dataFrame[list(xName)].sum(axis=1), dataFrame[list(yName)].sum(axis=1) res1 = adf(x,y,xName,yName) if res1 is None: continue elif res.size==0: res=res1 sec = pd.DataFrame(sector, index = res.index, columns = ["sector"]) print("added : ", pair) else: res=res.append(res1) sec = sec.append(pd.DataFrame(sector, index = [res.index[-1]], columns = ["sector"])) print("added : ", pair) res = pd.concat([res,sec],axis=1) res=res.sort_values(by=["pvalue"],ascending=True) b=time.time() print("time taken : ", b-a, "sec") return res </code></pre> <p>when nstocks=2, this takes about 263 seconds, but as nstocks increases, the loop takes alot of time (more than a day)</p> <p>I collected 'Adj Close' data from yahoo finance using pandas_datareader.data and the index is time and columns are different tickers</p> <p>Any suggestions or help will be appreciated</p>
<p>I don't know what computer you have, but I would advise you to use some kind of multiprocessing for the loop. I haven't looked really hard into your code, but as far as I can see <code>res</code> and <code>sec</code> can be moved into shared memory objects, and the individual loop iterations parallelized with <code>multiprocessing</code>. </p> <p>If you have a decent CPU it can improve the performance 4-6 times. In case you have access to some kind of HPC it can do wonders. </p>
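<p>A minimal, runnable sketch of the pattern, assuming the per-pair check can be factored into a standalone, picklable function (here <code>test_pair</code> is only a dummy stand-in for the real <code>adf</code>/cointegration logic):</p> <pre><code>import itertools
from multiprocessing import Pool

import numpy as np
import pandas as pd

def test_pair(pair):
    # dummy stand-in: the real ADF/cointegration test for one combination
    # of tickers would go here and return a small DataFrame or None
    stat = np.random.random()
    if stat &lt; 0.01:
        return pd.DataFrame({"pair": [pair], "stat": [stat]})
    return None

def coint_parallel(tickers, nstocks=2, processes=4):
    tcomb = itertools.combinations(tickers, nstocks)
    with Pool(processes=processes) as pool:
        # chunksize keeps the inter-process overhead low for many small tasks
        results = pool.map(test_pair, tcomb, chunksize=1000)
    results = [r for r in results if r is not None]
    return pd.concat(results, ignore_index=True) if results else pd.DataFrame()

if __name__ == "__main__":
    tickers = ["T%03d" % i for i in range(100)]
    print(coint_parallel(tickers, nstocks=2).head())
</code></pre>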
python|python-3.x|pandas|dataframe
2
375,150
41,176,033
ConvNet : Validation Loss not strongly decreasing but accuracy is improving
<p>Using <code>TensorFlow</code> I've built a simple <code>CNN</code> for classification. It has the following definition:</p> <pre><code>Input Tensor : 32,32,1 Grayscale Image
1 Conv Layer 3x3x32 Relu Activated 2x2 Max Pooled
128 FC1
43 FC2 # 43 classes
</code></pre> <p>Full code can be found in this <a href="https://github.com/autojazari/gtsd-solution/blob/master/Solution-Cleaned.ipynb" rel="nofollow noreferrer">notebook on github</a></p> <p>The <code>validation loss</code> and <code>accuracy</code> at <code>Epochs</code> <strong>100</strong>, <strong>1000</strong>, <strong>2750</strong> are</p> <pre><code>epoch 100  validation loss 3.67,  validation accuracy 12.05%
epoch 1000 validation loss 3.234, validation accuracy 57.63%
epoch 2750 validation loss 3.111, validation accuracy 69.25%
</code></pre> <p>Unless I've misunderstood or have a bug somewhere, the network is learning. However the validation loss has only decreased very slightly. </p> <p>What does that mean? How can I use this information to improve the network?</p>
<p>This is a classic mistake in TensorFlow: you shouldn't apply a softmax to your output and then feed the result to <code>tf.nn.softmax_cross_entropy_with_logits</code>.</p> <p>The operation <code>tf.nn.softmax_cross_entropy_with_logits</code> expects unscaled logits (i.e. without softmax). From the <a href="https://www.tensorflow.org/api_docs/python/nn/classification#softmax_cross_entropy_with_logits" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>WARNING: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results.</p> </blockquote>
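<p>A minimal sketch of the fix; the tiny model below is hypothetical (a single dense layer standing in for the notebook's CNN) and only illustrates where the softmax should and should not appear:</p> <pre><code>import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 32 * 32])          # flattened 32x32 input
y = tf.placeholder(tf.float32, [None, 43])               # one-hot labels, 43 classes
W = tf.Variable(tf.truncated_normal([32 * 32, 43], stddev=0.1))
b = tf.Variable(tf.zeros([43]))

logits = tf.matmul(x, W) + b                             # raw, unscaled logits: no softmax here

# correct: the loss op applies the softmax internally
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))

# apply softmax separately, only where probabilities are needed for prediction
probs = tf.nn.softmax(logits)
</code></pre>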
tensorflow|conv-neural-network|multilabel-classification
1
375,151
40,978,859
Creating simple Android app using android studio and tensorflow
<p><strong>Update 1</strong></p> <p>I have installed the official WIN package located at PyPi repository </p> <p>Just to make sure I did not miss anything, I downloaded the .whl file manually, renamed it to .zip, opened and listed all the files and directories inside this package. There is nothing related to android. </p> <hr> <p>Following <a href="https://stackoverflow.com/questions/33616094/is-tensorflow-compatible-with-a-windows-workflow">this question</a>, there is now a support of TensorFlow for Windows. I installed it using the instructions provided as part of Anaconda, and it works. However, I can't seem to find information about developing an application with TensorFlow for Android in Windows, by using Android Studio.</p> <p>Currently I tried:</p> <ol> <li><p>Downloaded the <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android" rel="nofollow noreferrer">Android example</a> from the main branch (which is for Linux), opened it in Android Studio. The following line doesn't work: <code>import org.tensorflow.contrib.android.TensorFlowInferenceInterface;</code>. I can't guess where to import the libraries. I've tried to read the documentation in <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/README.md" rel="nofollow noreferrer">readme.md</a>, but it looks very Linux oriented, <code>.sh</code> scripts and etc.</p></li> <li><p>Found the path of TensorFlow installation in Windows, <code>C:\Program Files\Miniconda2\envs\py35\Lib\site-packages\tensorflow\contrib\android\java</code>, but it is completely empty.</p></li> </ol> <p>Any idea?</p>
<p>My guess is the Android example, originally targeted for Linux, has not been updated to work on Windows. The TensorFlow update that supports Windows was just released last week.</p>
android|windows|tensorflow
0
375,152
40,852,729
Permanently Inject Constant into Tensorflow Graph for Inference
<p>I train a model with a placeholder for <code>is_training</code>:</p> <pre><code>is_training_ph = tf.placeholder(tf.bool) </code></pre> <p>however once training and validation are done, I would like to permanently inject a constant of <code>false</code> in for this value and then "re-optimize" the graph (ie using <a href="https://github.com/tensorflow/tensorflow/blob/5657d0dee8d87f4594b3e5902ed3e3ca8d6dfc0a/tensorflow/python/tools/optimize_for_inference.py" rel="nofollow noreferrer"><code>optimize_for_inference</code></a>). Is there something along the lines of <a href="https://github.com/tensorflow/tensorflow/blob/5657d0dee8d87f4594b3e5902ed3e3ca8d6dfc0a/tensorflow/python/tools/freeze_graph.py" rel="nofollow noreferrer"><code>freeze_graph</code></a> that will do this?</p>
<p>One possibility is to use the <a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/framework.html#import_graph_def" rel="noreferrer"><code>tf.import_graph_def()</code></a> function and its <code>input_map</code> argument to rewrite the value of that tensor in the graph. For example, you could structure your program as follows:</p> <pre><code>with tf.Graph().as_default() as training_graph: # Build model. is_training_ph = tf.placeholder(tf.bool, name="is_training") # ... training_graph_def = training_graph.as_graph_def() with tf.Graph().as_default() as temp_graph: tf.import_graph_def(training_graph_def, input_map={is_training_ph.name: tf.constant(False)}) temp_graph_def = temp_graph.as_graph_def() </code></pre> <p>After building <code>temp_graph_def</code>, you can use it as the input to <code>freeze_graph</code>.</p> <hr> <p>An alternative, which might be more compatible with the <code>freeze_graph</code> and <code>optimize_for_inference</code> scripts (which make assumptions about variable names and checkpoint keys) would be to modify TensorFlow's <code>graph_util.convert_variables_to_constants()</code> function so that it converts placeholders instead:</p> <pre><code>def convert_placeholders_to_constants(input_graph_def, placeholder_to_value_map): """Replaces placeholders in the given tf.GraphDef with constant values. Args: input_graph_def: GraphDef object holding the network. placeholder_to_value_map: A map from the names of placeholder tensors in `input_graph_def` to constant values. Returns: GraphDef containing a simplified version of the original. """ output_graph_def = tf.GraphDef() for node in input_graph_def.node: output_node = tf.NodeDef() if node.op == "Placeholder" and node.name in placeholder_to_value_map: output_node.op = "Const" output_node.name = node.name dtype = node.attr["dtype"].type data = np.asarray(placeholder_to_value_map[node.name], dtype=tf.as_dtype(dtype).as_numpy_dtype) output_node.attr["dtype"].type = dtype output_node.attr["value"].CopyFrom(tf.AttrValue( tensor=tf.contrib.util.make_tensor_proto(data, dtype=dtype, shape=data.shape))) else: output_node.CopyFrom(node) output_graph_def.node.extend([output_node]) return output_graph_def </code></pre> <p>...then you could build <code>training_graph_def</code> as above, and write:</p> <pre><code>temp_graph_def = convert_placeholders_to_constants(training_graph_def, {is_training_ph.op.name: False}) </code></pre>
tensorflow|tensorflow-serving
7
375,153
40,855,591
pandas select subset of pivot_table
<p>There are a few questions here on this topic, but none seem to be helpful in my case. Here's a dumbed down version of what I want:</p> <p>This is the csv file of interest: <a href="http://pastebin.com/rP7tPDse" rel="nofollow noreferrer">http://pastebin.com/rP7tPDse</a></p> <p>I'm creating the pivot table as:</p> <pre><code>piv = pd.read_csv("test.csv",delimiter = "\s+").pivot_table('z','x','y') </code></pre> <p>And this returns </p> <pre><code>y 0.0 1.0 1.3 2.0 x 0.0 1.0 5.0 NaN 4.0 1.0 3.0 4.0 NaN 6.0 1.5 NaN NaN 7.0 NaN 2.0 3.0 5.0 NaN 7.0 </code></pre> <p>I would like to find a slice of this array as a pivot_table, such as:</p> <pre><code>y 1.3 2.0 x 0.0 NaN 4.0 1.0 NaN 6.0 </code></pre> <p>Based on the x and y values. I want to include the NaN's as well, to do processing on them later. Help much appreciated.</p> <p>EDIT: updating the question to be more specific.</p> <p>I'm looking to extract a pivot table that has values denoted by the column 'z' and indexed by 'x' and 'y', with the condition that:</p> <ul> <li>All x values between arbitrary xmin and xmax</li> <li>All y values between arbitrary ymin and ymax</li> </ul> <p>From <strong>piv</strong>, as defined above, I want to do something like:</p> <pre><code>piv.loc[(piv.y &lt;= 2.0) &amp; (piv.y &gt;= 1.3) &amp; (piv.x &gt;= 0.0) &amp; (piv.x &lt;= 1.2)] </code></pre> <p>And this would yield me the example answer, above. Also, in the actual dataset, which I did not post here, there are many more columns. 'x', 'y' and 'z' are just some of them.</p>
<p>When I copied the dataframe, the columns were strings and the rows were floats.<br> To get the columns as floats:</p> <pre><code>df.columns = df.columns.astype(float)
</code></pre> <p>Now you can use <a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html" rel="nofollow noreferrer"><code>pd.IndexSlice</code></a>:</p> <pre><code>df.loc[pd.IndexSlice[0:1], pd.IndexSlice[1.3:2]]
</code></pre> <p><a href="https://i.stack.imgur.com/PEKXS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PEKXS.png" alt="enter image description here"></a></p>
python|pandas|pivot-table
2
375,154
41,004,155
Parse Salesforce report in Pandas DataFrame using Beatbox
<p>Has anybody tried parsing SalesForce report into Pandas DataFrame using Beatbox? There are couple of examples on SO but none of them have provided comprehensive solution or at least what I have perceived it hasn't.</p> <pre><code>#!/usr/bin/env python3 import beatbox import pandas as pd sf = beatbox._tPartnerNS service = beatbox.Client() service.serverUrl = 'https://login.salesforce.com/services/Soap/u/38.0' service.login('my-username', 'my-password') report_id = '00myreport4G3V' query = "SELECT Name FROM Report where id = '{}'".format(report_id) query_result = service.query(query) </code></pre> <p>This is just selecting the name but ideally I would like to load the content of report into a DataFrame. Any help please?</p>
<p>Report data can be retrieved by <a href="https://resources.docs.salesforce.com/sfdc/pdf/salesforce_analytics_rest_api.pdf" rel="nofollow noreferrer">Salesforce Reports and Dashboards REST API</a>. This works in Salesforce since Summer'15 (ver 34.0).</p> <p>I wrote an example with package <a href="https://github.com/simple-salesforce/simple-salesforce/" rel="nofollow noreferrer">Simple-salesforce</a>, due to REST API. (It is however possible to rewrite it without simple-salesforce and use an api session from Beatbox, write at least about 10 additional lines of code and install only the <a href="https://github.com/requests/requests" rel="nofollow noreferrer">requests</a> package.)</p> <p><strong>Universal code</strong></p> <pre><code>from collections import OrderedDict from simple_salesforce import Salesforce import pandas as pd import json class SfReportsApi(Salesforce): def __init__(self, *args, **kwargs): super(SfReportsApi, self).__init__(*args, **kwargs) def describe_report(self, report_id): return self._call_report(report_id, command='/describe') def to_pandas_dataframe(self, report_id, metadata=None): &quot;&quot;&quot;SF report details exported to DataFrame, can be modified by metadata&quot;&quot;&quot; resp = self._call_report(report_id, metadata=metadata) if not resp['allData']: print(&quot;Detailed data have been truncated to the usual report limit (2000).&quot;) columns = [] converters = [] get_label = lambda x: x['label'] sf_pandas_map = { 'boolean': lambda x: x['value'], 'currency': lambda x: x['value'].get('amount'), 'date': lambda x: pd.Timestamp(x['value']), 'datetime': lambda x: pd.Timestamp(x['value']), 'double': lambda x: x['value'], 'picklist': get_label, 'string': get_label, 'textarea': get_label, } for col in resp['reportExtendedMetadata']['detailColumnInfo'].values(): columns.append(col['label']) converters.append(sf_pandas_map.get(col['dataType'], get_label)) data = [[conv(cell) for conv, cell in zip(converters, row['dataCells'])] for sect_key, section in resp['factMap'].items() if 'rows' in section for row in section['rows'] ] df = pd.DataFrame(data, columns=columns) return df def _call_report(self, report_id, metadata=None, command=None): url = '{}analytics/reports/{}{}'.format(self.base_url, report_id, command or '') data = json.dumps({'reportMetadata': metadata}) if metadata else None resp = self._call_salesforce('POST' if metadata else 'GET', url, data=data) return resp.json(object_pairs_hook=OrderedDict) </code></pre> <p><strong>Usage example</strong></p> <pre><code>report_id = '00O24000004qtI4EAI' # set Salesforce session_id some way (by login or reused from other app) sf = SfReportsApi(username='me@example.com', password='password', security_token='token') # sf = SfReportsApi(instance_url='https://na1.salesforce.com', session_id='') # get report metadata if useful metadata = sf.describe_report(report_id)['reportMetadata'] # modify them or write only the modified keys, e.g. change filters or remove subtotals etc. metadata = { 'orderBy': ['ACCOUNT.NAME'], 'reportFilters': [{'value': 'W', 'column': 'ACCOUNT.NAME', 'operator': greaterOrEqual'}] } # or you can omit `metadata` parameter and use the report as is without changing anything df = sf.to_pandas_dataframe(report_id, metadata) </code></pre> <p>It is possible to dynamically add columns, filters, sorting etc. 
(docs about <a href="https://developer.salesforce.com/docs/atlas.en-us.api_analytics.meta/api_analytics/sforce_analytics_rest_api_getreportrundata.htm" rel="nofollow noreferrer">report Execute synchronous</a>). The method <code>to_pandas_dataframe</code> is for a normal tabular report with details, optionally with one grand total and not more than one level of subtotals. It would be possible to retrieve data from more complicated reports (see docs about <a href="https://developer.salesforce.com/docs/atlas.en-us.api_analytics.meta/api_analytics/sforce_analytics_rest_api_factmap_example.htm" rel="nofollow noreferrer">Decode the Fact Map</a> or a <a href="https://resources.docs.salesforce.com/rel1/doc/en-us/static/pdf/SF_Reports_and_Dashboards_Rest_API_web.pdf" rel="nofollow noreferrer">cheatsheet</a>), but it is not implemented because it is easier to remove those elements on the fly with the metadata parameter before running.</p> <p>Only 2000 detailed data rows can be reported. Several requests with filters can be used to see all data.</p> <p><strong>Fixed code</strong> The new code works for any report that contains original rows. It is not for summary reports without original rows. (The old code worked only for reports with rows and subtotals. Apologies.)</p>
pandas|salesforce|beatbox
3
375,155
41,044,374
Reshaping dataframe in Pandas
<p>Is there a quick pythonic way to transform this table </p> <pre><code>index = pd.date_range('2000-1-1', periods=36, freq='M') df = pd.DataFrame(np.random.randn(36,4), index=index, columns=list('ABCD')) In[1]: df Out[1]: A B C D 2000-01-31 H 1.368795 0.106294 2.108814 2000-02-29 -1.713401 0.557224 0.115956 -0.851140 2000-03-31 -1.454967 -0.791855 -0.461738 -0.410948 2000-04-30 1.688731 -0.216432 -0.690103 -0.319443 2000-05-31 -1.103961 0.181510 -0.600383 -0.164744 2000-06-30 0.216871 -1.018599 0.731617 -0.721986 2000-07-31 0.621375 0.790072 0.967000 1.347533 2000-08-31 0.588970 -0.360169 0.904809 0.606771 ... </code></pre> <p>into this table</p> <pre><code> 2001 2000 12 11 10 9 8 7 6 5 4 3 2 1 12 11 10 9 8 7 6 5 4 3 2 1 A H B C D </code></pre> <p>Please excuse the missing values. I added the "H" manually. I hope it gets clear what I am looking for.</p>
<p>For easier check, I've created dataframe of the same shape but with integers as values.</p> <p>The core of the solution is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transpose.html" rel="nofollow noreferrer"><code>pandas.DataFrame.transpose</code></a>, but you need to use <code>index.year</code> + <code>index.month</code> as a new index:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame(np.random.randint(10,size=(36, 4)), index=index, columns=list('ABCD')) &gt;&gt;&gt; df.set_index(keys=[df.index.year, df.index.month]).transpose() 2000 2001 2002 1 2 3 4 5 6 7 8 9 10 11 12 1 2 3 4 5 6 7 8 9 10 11 12 1 2 3 4 5 6 7 8 9 10 11 12 A 0 0 8 7 8 0 7 1 5 1 5 4 2 1 9 5 2 0 5 3 6 4 9 3 5 1 7 3 1 7 6 5 6 8 4 1 B 4 9 9 5 2 0 8 0 9 5 2 7 5 6 3 6 8 8 8 8 0 6 3 7 5 9 6 3 9 7 1 4 7 8 3 3 C 3 2 4 3 1 9 7 6 9 6 8 6 3 5 3 2 2 1 3 1 1 2 8 2 2 6 9 6 1 5 6 5 4 6 7 5 D 8 1 3 9 2 3 8 7 3 2 1 0 1 3 9 1 8 6 4 7 4 6 3 2 9 8 9 9 0 7 4 7 3 6 5 2 </code></pre> <p>Of course, this will not work properly if you have more then one record per year+month. In this case you need to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> your data first:</p> <pre><code>&gt;&gt;&gt; i = pd.date_range('2000-1-1', periods=36, freq='W') # weekly index &gt;&gt;&gt; df = pd.DataFrame(np.random.randint(10,size=(36, 4)), index=i, columns=list('ABCD')) &gt;&gt;&gt; df.groupby(by=[df.index.year, df.index.month]).sum().transpose() 2000 1 2 3 4 5 6 7 8 9 A 12 13 15 23 9 21 21 31 7 B 33 24 19 30 15 19 20 7 4 C 20 24 26 24 15 18 29 17 4 D 23 29 14 30 19 12 12 11 5 </code></pre>
python|pandas|dataframe|time-series
6
375,156
41,003,463
Create column in a pandas DataFrame based on whether value exists in a different DataFrame column
<p>I would like to add a column to dfA based on whether or not the job title (and its matching State) exists in dfB.</p> <p>dfA=</p> <pre><code>Title State Income Cashier WY 15000 Cashier WY 20000 Cashier WY 15000 Manager WY 25000 Cashier CO 15000 </code></pre> <p>dfB=</p> <pre><code>Title State MostFreqIncome Cashier WY 15000 </code></pre> <p>In English: if a Title/State pair in dfA match any Title/State pair in dfB, create a new column in dfA which gives the MostFreqIncome attached to that Title/State pair.</p> <p>Desired dfA:</p> <pre><code>Title State Income MostFreqIncome Cashier WY 15000 15000 Cashier WY 20000 15000 Cashier WY 15000 15000 Manager WY 25000 NA Cashier CO 15000 NA </code></pre> <p>Here's what I have so far:</p> <pre><code>is_in = dfA.Title.isin(dfB.Title) &amp; dfA.State.isin(dfB.State) </code></pre> <p>This gives me False/True, but if it's True I want dfA.MostFreqIncome = dfB.MostFreqIncome. If it's False I want dfA.MostFreqIncome = 'NA'</p>
<p>You can <code>merge</code> the two DataFrames A and B to create the new DataFrame:</p> <pre><code>&gt;&gt;&gt; dfA.merge(dfB, on=['Title', 'State'], how='left')
     Title State  Income  MostFreqIncome
0  Cashier    WY   15000         15000.0
1  Cashier    WY   20000         15000.0
2  Cashier    WY   15000         15000.0
3  Manager    WY   25000             NaN
4  Cashier    CO   15000             NaN
</code></pre> <p>Specifying <code>how='left'</code> here means that we're using only dfA's Title/State keys in the merged DataFrame.</p>
python|pandas|dataframe
2
375,157
40,910,857
How to interpret increase in both loss and accuracy
<p>I have run deep learning models (CNNs) using TensorFlow. Many times during an epoch, I have observed that both the loss and the accuracy have increased, or both have decreased. My understanding was that the two are always inversely related. What could be the scenario where both increase or decrease simultaneously?</p>
<p>The loss decreases as the training process goes on, except for some fluctuation introduced by the mini-batch gradient descent and/or regularization techniques like dropout (which introduces random noise).</p> <p>If the loss decreases, the training process is going well.</p> <p>The (validation, I suppose) accuracy, instead, is a measure of how good your model's predictions are.</p> <p>If the model is learning, the accuracy increases. If the model is overfitting, instead, the accuracy stops increasing and can even start to decrease.</p> <p>If the loss decreases and the accuracy decreases, your model is overfitting.</p> <p>If the loss increases and the accuracy increases too, it is because your regularization techniques are working well and you're fighting the overfitting problem. This is true only if the loss then starts to decrease whilst the accuracy continues to increase. Otherwise, if the loss keeps growing, your model is diverging and you should look for the cause (usually you're using a too-high learning rate value).</p>
tensorflow|deep-learning|loss
58
375,158
41,202,025
Find where array values increase monotonically over some value
<p>I'm trying to find the locations in an array where the values increase monotonically such that the total change in value is greater than k. Ie for <code>k = 5</code> and <code>data = [1, 4, 5, 7, 10, 9, 6, 14, 3, 4]</code> I would want to return:</p> <pre><code>[4, 7] </code></pre> <p>In general the array would be floats.</p> <p><strong>Edit to further clarify:</strong> In the example the data values increase monotonically by greater than 5 in two intervals. The first is from element 0 to 4 (where the data values increase by 9), the 2nd is from element 6 to 7 (where the data values increase by 8). Thus, I want to report the end of each valid interval.</p> <p>I want to avoid loops and make this efficient. If necessary the range of consecutive values to check could be limited.</p>
<p>Here's a vectorized approach -</p> <pre><code>d = a[1:] - a[:-1] mask = np.concatenate(( [False], d &gt; 0, [False] )) start = np.flatnonzero(mask[1:] &gt; mask[:-1]) stop = np.flatnonzero(mask[1:] &lt; mask[:-1]) count = np.bincount(np.repeat(np.arange(start.size) ,stop - start), d[mask[1:-1]]) out = stop[count &gt; k] </code></pre> <p>Sample input, output -</p> <pre><code>In [202]: a # Adding in two monotonically decreasing elems at start for variety Out[202]: array([10, 7, 1, 4, 5, 7, 10, 9, 6, 14, 3, 4]) In [203]: k = 5 In [204]: out Out[204]: array([6, 9]) </code></pre>
python|performance|numpy|vectorization
2
375,159
41,049,567
How to convert a series object in to data frame using string cleaning
<p>I have a Series of strings with specific trailing characters that I can go by. For instance, an entry ending with <code>[]</code> corresponds to the entries ending with <code>()</code> that follow it:</p> <pre><code>s = pd.Series(['September[jk]', 'firember hfh(start)','secmber(end)','Last day(hjh)',
               'October[jk]','firober fhfh (start)','thber(marg)','lasber(sth)',
               'December[jk]','anober(start)','secber(start)','Another(hkjl)'])
</code></pre> <p>I can simply clean the data, but these trailing characters should help me build the resulting data frame like this:</p> <pre><code>0    September    firember hfh
1    September    secmber
2    September    Last day
3    October      firober fhfh
4    October      thber
5    October      lasber
6    December     anober
7    December     secber
8    December     Another
</code></pre>
<p>I don't think there's any magic here, so I recommend parsing the list yourself before creating the dataframe:</p> <pre><code>import re import pandas as pd l = ['September[jk]', 'firember hfh(start)','secmber(end)','Last day(hjh)', 'October[jk]','firober fhfh (start)','thber(marg)','lasber(sth)', 'December[jk]','anober(start)','secber(start)','Another(hkjl)'] month = None mylist = [] for i, el in enumerate(l): m = re.match('(.*?)\[.*?\]', el) if m: month = m.groups()[0] else: m = re.match('(.*?)\(.*?\)', el) if m: mylist.append({'Month':month, 'Value':m.groups()[0]}) else: print("Cannot find a match for {}".format(el)) df = pd.DataFrame(mylist) print(df) </code></pre> <p>Out:</p> <pre><code> Month Value 0 September firember hfh 1 September secmber 2 September Last day 3 October firober fhfh 4 October thber 5 October lasber 6 December anober 7 December secber 8 December Another </code></pre> <hr> <p>Side note: I used the <code>re</code> library for regex because it could be adapted to many more complex situations, but in your case you could just use the built-in functions, with <code>in</code> and <code>split</code>:</p> <pre><code>for i, el in enumerate(l): if '[' in el: month = el.split('[')[0] else: if '(' in el: mylist.append({'Month':month, 'Value':el.split('(')[0]}) else: print("Cannot find a match for {}".format(el)) </code></pre>
python-3.x|pandas
0
375,160
40,887,409
Numpy matrix determinant precision problems
<p>I am trying to write a script on python to determine a matrix determinant using Gauss method. It works correctly, but the precision isn't enough for me. My code is:</p> <pre><code>import scipy.linalg as sla import numpy as np def my_det(X): n = len(X) s = 0 if n != len(X[0]): return ValueError for i in range(0, n): maxElement = abs(X[i][i]) maxRow = i for k in range(i+1, n): if abs(X[k][i]) &gt; maxElement: maxElement = abs(X[k][i]) maxRow = k if maxRow != i: s += 1 for k in range(i, n): X[i][k], X[maxRow][k] = X[maxRow][k], X[i][k] for k in range(i+1, n): c = -X[k][i]/X[i][i] for j in range(i, n): if i == j: X[k][j] = 0 else: X[k][j] += c * X[i][j] det = (-1)**s for i in range(n): det *= X[i][i] return det </code></pre> <p>And I have tests for this code:</p> <pre><code>for x in range(10): X = np.random.rand(3,3) if np.abs(my_det(X) - sla.det(X)) &gt; 1e-6: print('FAILED') </code></pre> <p>My function fails all tests. I tried Decimals, but it didn't help. What's wrong?</p>
<p>The reason why the code fails the test condition, <code>abs(my_det(X) - sla.det(X)) &lt; 1e-6</code>, is not due to lack of precision but rather the change in sign brought about the unintended side-effect of <code>my_det</code> mutating <code>X</code>:</p> <pre><code>X[i][k], X[maxRow][k] = X[maxRow][k], X[i][k] </code></pre> <p>This row swapping changes the sign of the determinant. The code uses <code>s</code> to adjust for the change in sign, but <code>X</code> itself is altered in a way which changes the sign of the determinant.</p> <p>So the <code>X</code> passed to <code>my_det</code> is not the same as the <code>X</code> which is subsequently passed to <code>sla.det</code>. Here is an example where the alteration of <code>X</code> changes the sign of the determinant:</p> <pre><code>In [55]: X = np.random.rand(3, 3); X Out[55]: array([[ 0.38062719, 0.41892961, 0.88277747], [ 0.39881724, 0.00188804, 0.79258322], [ 0.40195279, 0.3950311 , 0.32771527]]) In [56]: my_det(X) Out[56]: 0.098180005266934267 In [57]: X Out[57]: array([[ 0.40195279, 0.3950311 , 0.32771527], [ 0. , -0.39006151, 0.46742438], [ 0. , 0. , 0.62620267]]) In [58]: sla.det(X) Out[58]: -0.09818000526693427 </code></pre> <hr> <p>You can fix the problem by making a copy of <code>X</code> inside <code>my_det</code>:</p> <pre><code>def my_det(X): X = np.array(X, copy=True) # copy=True is the default; shown here for emphasis ... </code></pre> <p>Thus, subsequent changes to <code>X</code> within <code>my_det</code> no longer affect the <code>X</code> outside of <code>my_det</code>.</p> <hr> <pre><code>import scipy.linalg as sla import numpy as np def my_det(X): X = np.array(X, dtype='float64', copy=True) n = len(X) s = 0 if n != len(X[0]): return ValueError for i in range(0, n): maxElement = abs(X[i, i]) maxRow = i for k in range(i + 1, n): if abs(X[k, i]) &gt; maxElement: maxElement = abs(X[k, i]) maxRow = k if maxRow != i: s += 1 for k in range(i, n): X[i, k], X[maxRow, k] = X[maxRow, k], X[i, k] for k in range(i + 1, n): c = -X[k, i] / X[i, i] for j in range(i, n): if i == j: X[k, j] = 0 else: X[k, j] += c * X[i, j] det = (-1)**s for i in range(n): det *= X[i, i] return det for i in range(10): X = np.random.rand(3, 3) diff = abs(my_det(X) - sla.det(X)) if diff &gt; 1e-6: print('{} FAILED: {:0.8f}'.format(i, diff)) </code></pre> <hr> <p>Also note that dtype matters:</p> <pre><code>In [88]: my_det(np.arange(9).reshape(3,3)) Out[88]: 6 </code></pre> <p>while the correct answer is </p> <pre><code>In [89]: my_det(np.arange(9).reshape(3,3).astype(float)) Out[89]: 0.0 </code></pre> <p>Since <code>my_det</code> uses division (in <code>c = -X[k, i] / X[i, i]</code>), we need <code>X</code> to have floating point dtype so that the <code>/</code> performs floating point division, not integer division. So to fix this, use <code>X = np.asarray(X, dtype='float64')</code> to ensure that <code>X</code> has dtype <code>float64</code>:</p> <pre><code>def my_det(X): X = np.array(X, dtype='float64', copy=True) ... </code></pre> <p>With this change, </p> <pre><code>In [91]: my_det(np.arange(9).reshape(3,3)) Out[91]: 0.0 </code></pre> <p>now gives the correct answer.</p>
python|numpy|algebra
1
375,161
40,855,047
np.argsort with support for ties
<p>Say we define a function for doing <code>argsort</code> with support for ties <a href="https://stackoverflow.com/a/20199459/1732769">as described in this solution</a>:</p> <pre><code>def argsort_with_support_for_ties(a):
    rnd_array = np.random.random(a.size)
    return np.lexsort((rnd_array,a))
</code></pre> <p>We test it on:</p> <pre><code>input = np.array([5.5, 3.5, 2.0, 2.0, 7.0, 7.0, 7.0, 3.5, 6.5, 6.5, 6.5, 9.0])
output = argsort_with_support_for_ties(input)
</code></pre> <p>with the following result:</p> <pre><code>&gt; np.stack([input, output], axis=0).T
array([[  5.5,   3. ],
       [  3.5,   2. ],
       [  2. ,   1. ],
       [  2. ,   7. ],
       [  7. ,   0. ],   &lt;--- A
       [  7. ,  10. ],   &lt;--- B
       [  7. ,   9. ],
       [  3.5,   8. ],
       [  6.5,   5. ],
       [  6.5,   4. ],
       [  6.5,   6. ],
       [  9. ,  11. ]])
</code></pre> <p>Note how entries <code>A</code> and <code>B</code> shared the same input value (<code>7</code>), but ended up in vastly different locations <code>0</code> and <code>10</code>. </p> <p>This is not what I was hoping to get. A more acceptable answer would have been:</p> <pre><code>output = np.array([4, 2, 1, 0, 8, 9, 10, 3, 5, 6, 7, 11])
</code></pre> <p>So what failed above with <code>argsort_with_support_for_ties</code>?</p>
<p>I understand that you want to rank the elements so that the tiebreaking is random. For this you just need to invert the permutation that you got from <code>lexsort</code>:</p> <pre><code>output = np.argsort(np.lexsort((rnd_array,a))) </code></pre> <p>My output (which is not identical to yours because of randomness):</p> <pre><code>array([ 4, 3, 0, 1, 10, 9, 8, 2, 7, 5, 6, 11]) </code></pre>
python-3.x|sorting|numpy
1
375,162
40,848,809
How to Find a Point within a Polygon?
<p>I am trying to find a point within polygons of a shapefile. </p> <p>I need to write a loop that can loop over the polygons and return the index of the polygon in which the point is located. </p> <p>How would I write a loop to find out which polygon the point is in? </p> <p>Here's what I have written so far: </p> <pre><code>import pandas as pd import pylab as pl import os import zipfile import geopandas as gp import shapely %pylab inline # Read in the shapefile ct_shape = gp.read_file(path) # Segmented the file so it only contains Brooklyn data &amp; set projection ct_latlon = ct_shape[ct_shape.BoroName == 'Brooklyn'] ct_latlon = ct_latlon.to_crs({'init': 'epsg:4326'}) ct_latlon.head() # Dataframe image [Head of the dataframe image][1]: https://i.stack.imgur.com/xAl6m.png # Created a point that I need to look for within the shapefile CUSP = shapely.geometry.Point(40.693217, -73.986403) </code></pre> <p>The output could be something like this: '3001100' (the BCTCB2010 of the correct polygon)</p>
<p>I solved it in one line of code. No loop necessary.</p> <p>Posting for anyone else that may be interested: </p> <pre><code># Setting the coordinates for the point CUSP = shapely.geometry.Point((-73.986403, 40.693217,)) # Longitude &amp; Latitude # Printing a list of the coords to ensure iterable list(CUSP.coords) # Searching for the geometry that intersects the point. Returning the index for the appropriate polygon. index = ct_latlon[ct_latlon.geometry.intersects(CUSP)].BCTCB2010.values[0] </code></pre>
python|pandas|geometry|shapely|geopandas
1
375,163
41,051,492
Add attribute to edge in projected graph
<p>I have a DataFrame that resembles an affiliation matrix. I have a person, an event and the year of the event.</p> <pre><code>d = {'person' : ['1', '2', '3', '1', '4', '3', '4', '1', '2'], 'event' : ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'D', 'D'], 'year' : [1995, 1995, 1995, 1996, 1996, 2000, 2000, 2001, 2001]} df = pd.DataFrame(d) </code></pre> <p>I need to get the first meeting between two persons. That is, if '1' and '2' met at event 'A' and 'D', I need to know when they met for the first time (in this example, it was in 'A' in 1995).</p> <p>I do not know if this is possible using NetworkX or if I need to do it in some other way using Pandas. How can I do this?</p> <p>I can get to the projected network but I do not know how to transfer the attribute 'year' to the edges of that projected network. It is important to note that the attribute ('year' in this case) is an attribute of the event so it is constant for all the edges of each event.</p> <p>This is what I have so far:</p> <pre><code>import networkx as nx import pandas as pd d = {'person' : ['1', '2', '3', '1', '4', '3', '4', '1', '2'], 'event' : ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'D', 'D'], 'year' : [1995, 1995, 1995, 1996, 1996, 2000, 2000, 2001, 2001]} df = pd.DataFrame(d) B = nx.from_pandas_dataframe(df, 'person', 'event', edge_attr='year') G = nx.bipartite.projected_graph(B, df.person.unique(), multigraph = True) </code></pre>
<p>I'm not familiar enough with NetworkX to help you with the problem of adding edge attributes, but this method does identify the first meeting of individuals.</p> <pre><code>import pandas as pd import itertools # initial data d = {'person' : ['1', '2', '3', '1', '4', '3', '4', '1', '2'], 'event' : ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'D', 'D'], 'year' : [1995, 1995, 1995, 1996, 1996, 2000, 2000, 2001, 2001]} df = pd.DataFrame(d) # create a unique list of individuals for each meeting. this should be # unique anyway, but just in case. :) # note that this approach is also robust to events in different years # sharing the same name. grpd = df.groupby(['year', 'event'])['person'].unique().apply(lambda x: sorted(x)) # sort based on the year from the oldest meetings to the most recent grpd.sort_index(ascending=False, inplace=True) # we'll add meetings to a dictionary and overwrite as encounter more # recent meetings meetings = {} for idx in range(len(grpd)): year = grpd.index[idx][0] meeting = grpd.index[idx][1] for combo in itertools.combinations(grpd[idx], 2): meetings[combo] = (meeting, year) import pprint &gt;&gt;&gt; pprint.pprint(meetings) {('1', '2'): ('A', 1995), ('1', '3'): ('A', 1995), ('1', '4'): ('B', 1996), ('2', '3'): ('A', 1995), ('3', '4'): ('C', 2000) </code></pre>
python|pandas|networkx
0
375,164
40,788,391
Attribute prediction with CNN and cross entropy loss (negative values for absent attributes)
<p>I am building a neural net (CNN) which predict 100 attributes of an image. The training data is as follows-</p> <p><code>image_name image_attributes</code></p> <p><code>img/img001.jpg -1, 1, -1 , 1, 0 .......-1 , 1</code></p> <p>So the attributes which are present have value <code>1</code>, and <code>-1</code> if that attribute is not present and <code>0</code> if unknown. I am using Tensorflow and defining my loss as - </p> <p><code>loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))</code></p> <p>My question is if I minimize this loss, sometimes it has negative values due to <code>-1</code> representation of absence of some attributes and my optimization algorithm will minimize the already negative loss. Isn't it a divergence? </p> <p>Is it correct to minimize this loss or should I use mod function to have only positive losses?</p>
<p>So to make it clear: you should see the result coming out of the classifier as three nodes, whatever their raw values are, (y0, y1, y2). Then, by applying softmax on these results, you'll have new values representing the answer as probabilities between 0 and 1.</p> <p><a href="https://i.stack.imgur.com/OaMOW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OaMOW.jpg" alt="enter image description here"></a></p> <p>So let's say you have a result of the form [1,−2,0]; by applying the softmax you'll have [0.7,0.04,0.26]</p> <pre><code>p_y_given_x = softmax(y_given_x)
</code></pre> <p>and then by applying argmax you can determine which class is predicted, based on the highest probability result</p> <pre><code>y_prediction = argmax(p_y_given_x)
</code></pre> <p>which means in our case [1,0,0]</p> <p>Now what you should do is take the index of the predicted class based on your data attribution.</p> <p>But first let's agree on one thing:</p> <ul> <li>present: class 0 </li> <li>not present: class 1</li> <li>unknown: class 2</li> </ul> <p>Let's say your object is not present in the image, so you should take the value of the second node and apply the negative log-likelihood</p> <pre><code>-log(p_y_given_x[y])
</code></pre> <p>which is the second one, 0.04, and penalize the system by backpropagating the error in this manner. </p>
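<p>A quick numpy check of the numbers used above (softmax of [1, −2, 0] and the negative log-likelihood of class 1, "not present"):</p> <pre><code>import numpy as np

y_given_x = np.array([1.0, -2.0, 0.0])           # raw classifier outputs
p_y_given_x = np.exp(y_given_x) / np.exp(y_given_x).sum()
print(p_y_given_x.round(2))                      # roughly [0.7, 0.04, 0.26], as above

y = 1                                            # true class: "not present"
nll = -np.log(p_y_given_x[y])                    # loss to backpropagate, about 3.35

y_prediction = np.argmax(p_y_given_x)            # predicted class: 0 ("present")
</code></pre>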
machine-learning|computer-vision|tensorflow|deep-learning
1
375,165
53,859,920
Filling a pandas column based on another column
<p>I would like to fill each row of a column of my dataframe based on the entries in another column, in particular I want to fill each row with the corresponding name of the corresponding ticker for that stock, like so</p> <pre><code>dict1 = [{'ticker': 'AAPL','Name': 'Apple Inc.'}, {'ticker': 'MSFT','Name': 'Microsoft Corporation'}] df1 = pd.DataFrame(dict1) </code></pre> <p>This function provides the name for a given ticker:</p> <p>So I can pull the name for for say MSFT:</p> <pre><code>dict1 = [{'ticker': 'AAPL','Name': 'Apple Inc.'}, {'ticker': 'MSFT','Name': get_nasdaq_symbols().loc['MSFT'].loc['Security Name'][:-15]}] </code></pre> <p>I am struggling to find a way to automate this with a for loop or apply. Can anyone suggest an approach?</p> <p>Note, the function used to pull the name comes from here:</p> <pre><code> from pandas_datareader.nasdaq_trader import get_nasdaq_symbols </code></pre>
<p>You can first create a series mapping:</p> <pre><code>ticker_name_map = get_nasdaq_symbols()['Security Name'].str[:-15] </code></pre> <p>Then use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>pd.Series.map</code></a><sup>1</sup>:</p> <pre><code>df1['Name'] = df1['ticker'].map(ticker_name_map) </code></pre> <p>If you wish unmapped values to remain unchanged, then use a subsequent <code>fillna</code>:</p> <pre><code>df1['Name'] = df1['ticker'].map(ticker_name_map).fillna(df1['Name']) </code></pre> <hr> <p><sup>1</sup> <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html" rel="nofollow noreferrer"><code>pd.Series.replace</code></a> is also possible, but <a href="https://stackoverflow.com/questions/49259580/replace-values-in-a-pandas-series-via-dictionary-efficiently">inefficient</a>.</p>
python|pandas|apply
2
375,166
54,183,920
Append any further columns to the first three columns AND indicate the triple column it comes from
<p>This is a follow-up question to <a href="https://stackoverflow.com/questions/54182470/append-any-further-columns-to-the-first-three-columns/54182646">Append any further columns to the first three columns</a>.</p> <p>I start out with about 120 columns. It is always three columns that belong to each other. Instead of being 120 columns side by side, they should be stacked on top of each other, so we end up with three columns. This has already been solved (see link above).</p> <p>Sample data:</p> <pre><code>df = pd.DataFrame({ "1": np.random.randint(900000000, 999999999, size=5), "2": np.random.choice( ["A","B","C", np.nan], 5), "3": np.random.choice( [np.nan, 1], 5), "4": np.random.randint(900000000, 999999999, size=5), "5": np.random.choice( ["A","B","C", np.nan], 5), "6": np.random.choice( [np.nan, 1], 5) }) </code></pre> <p>Working solution for initial question as suggested by Jezrael:</p> <pre><code>arr = np.arange(len(df.columns)) df.columns = [arr // 3, arr % 3] df = df.stack(0).sort_index(level=[1, 0]).reset_index(drop=True) df.columns = ['A','B','C'] </code></pre> <p>This transforms this:</p> <pre><code> 1 2 3 4 5 6 0 960189042 B NaN 991581392 A 1.0 1 977655199 nan 1.0 964195250 A 1.0 2 961771966 A NaN 969007327 B 1.0 3 955308022 C 1.0 973316485 A NaN 4 933277976 A 1.0 976749175 A NaN </code></pre> <p>to this:</p> <pre><code> A B C 0 960189042 B NaN 1 977655199 nan 1.0 2 961771966 A NaN 3 955308022 C 1.0 4 933277976 A 1.0 5 991581392 A 1.0 6 964195250 A 1.0 7 969007327 B 1.0 8 973316485 A NaN 9 976749175 A NaN </code></pre> <p><strong>Follow Up Question:</strong> Now, if I'd need an indicator from which triple each block comes from, how could this be done? So a result could look like:</p> <pre><code> A B C D 0 960189042 B NaN 0 1 977655199 nan 1.0 0 2 961771966 A NaN 0 3 955308022 C 1.0 0 4 933277976 A 1.0 0 5 991581392 A 1.0 1 6 964195250 A 1.0 1 7 969007327 B 1.0 1 8 973316485 A NaN 1 9 976749175 A NaN 1 </code></pre> <p>These blocks can be of different lengths! So I cannot simply add a counter.</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a> to remove only the first level; the second level of the <code>MultiIndex</code> is converted to a column:</p> <pre><code>arr = np.arange(len(df.columns))
df.columns = [arr // 3, arr % 3]
df = df.stack(0).sort_index(level=[1, 0]).reset_index(level=0, drop=True).reset_index()
df.columns = ['D','A','B','C']
print (df)
   D          A    B    C
0  0  960189042    B  NaN
1  0  977655199  nan  1.0
2  0  961771966    A  NaN
3  0  955308022    C  1.0
4  0  933277976    A  1.0
5  1  991581392    A  1.0
6  1  964195250    A  1.0
7  1  969007327    B  1.0
8  1  973316485    A  NaN
9  1  976749175    A  NaN
</code></pre> <p>Then, if you need to change the order of the columns:</p> <pre><code>cols = df.columns[1:].tolist() + df.columns[:1].tolist()
df = df[cols]
print (df)
           A    B    C  D
0  960189042    B  NaN  0
1  977655199  nan  1.0  0
2  961771966    A  NaN  0
3  955308022    C  1.0  0
4  933277976    A  1.0  0
5  991581392    A  1.0  1
6  964195250    A  1.0  1
7  969007327    B  1.0  1
8  973316485    A  NaN  1
9  976749175    A  NaN  1
</code></pre>
python|pandas
1
375,167
53,814,772
Copying data from one tensor to another using bit masking
<pre><code>import numpy as np import torch a = torch.zeros(5) b = torch.tensor(tuple((0,1,0,1,0)),dtype=torch.uint8) c= torch.tensor([7.,9.]) print(a[b].size()) a[b]=c print(a) </code></pre> <blockquote> <p>torch.Size([2])<br>tensor([0., 7., 0., 9., 0.])</p> </blockquote> <p>I am struggling to understand how this works. I initially thought the above code was using Fancy indexing but I realised that values from <strong>c</strong> tensors are getting copied corresponding to the indices marked 1. Also, if I don't specify dtype of <strong>b</strong> as <strong>uint8</strong> then the above code does not work. Can someone please explain me the mechanism of the above code.</p>
<p>Indexing with arrays works the same as in numpy and most other vectorized math packages I am aware of. There are two cases:</p> <ol> <li><p>When <code>b</code> is of type <code>uint8</code> (think boolean, pytorch doesn't distinguish <code>bool</code> from <code>uint8</code>), <code>a[b]</code> is a 1-d array containing the <em>subset</em> of values of <code>a</code> (<code>a[i]</code>) for which the corresponding element in <code>b</code> (<code>b[i]</code>) was nonzero. These values are aliased to the original <code>a</code>, so if you modify them, their corresponding locations will change as well.</p></li> <li><p>The alternative type you can use for indexing is an array of <code>int64</code>, in which case <code>a[b]</code> creates an array of shape <code>(*b.shape, *a.shape[1:])</code>. Its structure is as if each element of <code>b</code> (<code>b[i]</code>) was replaced by <code>a[b[i]]</code>. In other words, you create a new array by specifying from which indexes of <code>a</code> the data should be fetched. Again, the values are aliased to the original <code>a</code>, so if you modify <code>a[b]</code> the values of <code>a[b[i]]</code>, for each <code>i</code>, will change. An example usecase is shown in <a href="https://stackoverflow.com/q/53697596/4280242">this</a> question.</p></li> </ol> <p>These two modes are explained for numpy in <a href="https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#integer-array-indexing" rel="nofollow noreferrer">integer array indexing</a> and <a href="https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#boolean-array-indexing" rel="nofollow noreferrer">boolean array indexing</a>, where for the latter you have to keep in mind that pytorch uses <code>uint8</code> in place of <code>bool</code>.</p> <p>Also, if your goal is to copy data from one tensor to another you have to keep in mind that an operation like <code>a[ixs] = b[ixs]</code> is an in-place operation (<code>a</code> is modified in place), which may not play well with autograd. If you want to do out-of-place masking, use <a href="https://pytorch.org/docs/master/torch.html#torch.where" rel="nofollow noreferrer"><code>torch.where</code></a>. An example usecase is shown in <a href="https://stackoverflow.com/a/53692802/4280242">this</a> answer. </p>
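<p>A minimal sketch on the tensors from the question, showing the boolean mask, the equivalent integer indices, and an out-of-place alternative with <code>torch.where</code> (newer PyTorch versions want a <code>bool</code> mask, hence the <code>b.bool()</code>; older versions accept the <code>uint8</code> mask directly):</p> <pre><code>import torch

a = torch.zeros(5)
b = torch.tensor((0, 1, 0, 1, 0), dtype=torch.uint8)   # boolean-style mask
c = torch.tensor([7., 9.])
idx = torch.tensor([1, 3])                             # the same positions as int64 indices

# boolean-mask assignment: fills as many slots as b has nonzeros (2 here)
a[b] = c                                               # a is now [0., 7., 0., 9., 0.]

# integer indexing does the same thing with explicit positions
a2 = torch.zeros(5)
a2[idx] = c                                            # a2 is now [0., 7., 0., 9., 0.]

# out-of-place alternative: both branches of torch.where must be full-size,
# so c is first scattered into a zero tensor of the same shape as a
full_c = torch.zeros(5)
full_c[idx] = c
out = torch.where(b.bool(), full_c, torch.zeros(5))    # new tensor, nothing modified in place
print(out)                                             # tensor([0., 7., 0., 9., 0.])
</code></pre>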
pytorch
2
375,168
53,830,057
Update Column based on another column and Delete data from the other
<p>Lets assume the df looks like:</p> <pre><code>import pandas as pd df = pd.DataFrame(data={'fname':['Anky','Anky','Tom','Harry','Harry','Harry'],'lname':['sur1','sur1','sur2','sur3','sur3','sur3'],'role':['','abc','def','ghi','','ijk'],'mobile':['08511663451212','+4471123456','0851166346','','0851166347',''],'Pmobile':['085116634512','1234567890','8885116634','','+353051166347','0987654321']}) import numpy as np df.replace('',np.nan,inplace=True) </code></pre> <p>df:</p> <pre><code> fname lname role mobile Pmobile 0 Anky sur1 NaN 08511663451212 085116634512 1 Anky sur1 abc +4471123456 1234567890 2 Tom sur2 def 0851166346 8885116634 3 Harry sur3 ghi NaN NaN 4 Harry sur3 NaN 0851166347 +353051166347 5 Harry sur3 ijk NaN 0987654321 </code></pre> <p>So I want to update the column <code>mobile</code> with values from <code>Pmobile</code> where the values starts with <code>'08','8','+353</code> and simultaneously it should delete the value from <code>Pmobile</code> field where it finds a match and copies data to <code>mobile</code> field.</p> <p>Presently I am getting this by :</p> <pre><code>df.mobile.update(df['Pmobile'][df['Pmobile'].str.startswith(('08','8','+353'),na=False)]) df.Pmobile[df.mobile==df.Pmobile] = np.nan </code></pre> <p>df:</p> <pre><code> fname lname role mobile Pmobile 0 Anky sur1 NaN 085116634512 NaN 1 Anky sur1 abc +4471123456 1234567890 2 Tom sur2 def 8885116634 NaN 3 Harry sur3 ghi NaN NaN 4 Harry sur3 NaN +353051166347 NaN 5 Harry sur3 ijk NaN 0987654321 </code></pre> <p>Is there a way to do this on the fly?</p> <p>Thanks in advance. :)</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shift.html" rel="nofollow noreferrer"><code>shift</code></a> to shift the columns left to do this:</p> <pre><code>In[50]:
df.loc[df['Pmobile'].str.startswith(('08','8','+353'),na=False),
       ['mobile','Pmobile']] = df[['mobile','Pmobile']].shift(-1,axis=1)
df
Out[50]:
   fname lname role         mobile     Pmobile
0   Anky  sur1  NaN   085116634512         NaN
1   Anky  sur1  abc    +4471123456  1234567890
2    Tom  sur2  def     8885116634         NaN
3  Harry  sur3  ghi            NaN         NaN
4  Harry  sur3  NaN  +353051166347         NaN
5  Harry  sur3  ijk            NaN  0987654321
</code></pre> <p>So use your condition to mask the rows of interest and then assign the result of those 2 columns shifted left by 1 where the condition is met.</p> <p>This will leave a <code>NaN</code> where the value has shifted and do nothing where the condition isn't met.</p>
python-3.x|pandas|dataframe
2
375,169
54,031,493
How to Create Tensorflow Session with nvlink GPU
<p>I am trying to run inference with Tensorflow. I have 2 Quadro GV100's connected via nvlink and another GPU for display on my desktop.</p> <p>When I create the SessionOptions object, I need to call the following to set which GPU to use:</p> <pre><code>auto options = SessionOptions(); options.config.mutable_gpu_options()-&gt;set_visible_device_list(gpuToUse); </code></pre> <p>It does not seem like Tensorflow sees the nvlink'ed GPU's as one, if I were to create a session by specifying only 1 gpu, that seems to negate the benefit of the nvlink and the second GPU. </p> <p>My question is, is Tensorflow able to take advantage of the nvlink dual GPU setup?</p> <p>I am using Tensorflow v1.7. Thank you very much for your help!</p>
<p>The short answer is yes, Tensorflow is able to take advantage of the NVLINK technology. But, as mentioned <a href="https://www.pugetsystems.com/labs/hpc/NVLINK-on-RTX-2080-TensorFlow-and-Peer-to-Peer-Performance-with-Linux-1262/?utm_source=youtube&amp;utm_campaign=smarter#tensorflow-performance-with-2-rtx-2080-gpus-and-nvlink" rel="nofollow noreferrer">here</a>, most algorithms get little benefit from this technology.</p> <p>There are use-cases where the NVLINK bridge might have a <strong>significant</strong> impact. For instance, in some machine learning applications, parallelism can be gained through data distribution across devices, assuming that the GPU code is optimized to minimize the communication.</p>
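<p>A minimal Python-side sketch of that idea (a hypothetical toy graph, not your model): one batch is split across the two GPUs and reduced on the first one, and it is exactly this kind of device-to-device traffic that the NVLINK bridge accelerates. The same visible-device configuration can be set from the C++ <code>SessionOptions</code> with <code>set_visible_device_list("0,1")</code>.</p> <pre><code>import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [8, 1024])
halves = tf.split(x, 2, axis=0)                 # distribute the batch over the devices

partial = []
for i, part in enumerate(halves):
    with tf.device('/device:GPU:%d' % i):
        w = tf.get_variable('w%d' % i, [1024, 1024])
        partial.append(tf.matmul(part, w))

with tf.device('/device:GPU:0'):
    y = tf.add_n(partial)                       # the GPU:1 result crosses to GPU:0 here

config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.visible_device_list = '0,1'
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: np.random.rand(8, 1024).astype('float32')})
</code></pre>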
c++|tensorflow|nvidia
2
375,170
53,953,087
How to Assign Data after Groupby and Concat in pandas
<p>Iam newbie in python. I have huge a <code>dataframe</code> with millions of rows and id. my data looks like this:</p> <pre><code>Time ID X Y 8:00 A 23 100 9:00 B 24 110 10:00 B 25 120 11:00 C 26 130 12:00 C 27 140 13:00 A 28 150 14:00 A 29 160 15:00 D 30 170 16:00 C 31 180 17:00 B 32 190 18:00 A 33 200 19:00 C 34 210 20:00 A 35 220 21:00 B 36 230 22:00 C 37 240 23:00 B 38 250 </code></pre> <p>I sorted the data on id and time.</p> <pre><code>Time ID X Y 8:00 A 23 100 13:00 A 28 150 14:00 A 29 160 18:00 A 33 200 20:00 A 35 220 9:00 B 24 110 10:00 B 25 120 17:00 B 32 190 21:00 B 36 230 23:00 B 38 250 11:00 C 26 130 12:00 C 27 140 16:00 C 31 180 19:00 C 34 210 22:00 C 37 240 15:00 D 30 170 </code></pre> <p>and I want to pick only "The first and the last" of the id and eliminate the rest. The result looked like this:</p> <pre><code>Time ID X Y 8:00 A 23 100 20:00 A 35 220 9:00 B 24 110 23:00 B 38 250 11:00 C 26 130 22:00 C 37 240 15:00 D 30 170 </code></pre> <p>I used this code:</p> <pre><code>df = pd.read_csv("data.csv") g = df.groupby('ID') g_1 = pd.concat([g.head(1),g.tail(1)]).drop_duplicates().sort_values('ID').reset_index(drop=True) g_1.to_csv('result.csv') </code></pre> <p>but I want to assign or give annotation every row as "first" and "last" in the new column.<br> my expected result looks like this:</p> <pre><code>Time ID X Y Annotation 8:00 A 23 100 First 20:00 A 35 220 Last 9:00 B 24 110 First 23:00 B 38 250 Last 11:00 C 26 130 First 22:00 C 37 240 Last 15:00 D 30 170 </code></pre> <p>anyone could help me with this? please give me advice thank you.</p>
<p>No need for <code>groupby</code>; use <code>drop_duplicates</code> after you sort.</p> <pre><code>df=pd.concat([df.drop_duplicates(['ID']).assign(sign='first'),df.drop_duplicates(['ID'],keep='last').assign(sign='last')]).sort_values('ID') df Time ID X Y sign 0 8:00 A 23 100 first 4 20:00 A 35 220 last 5 9:00 B 24 110 first 9 23:00 B 38 250 last 10 11:00 C 26 130 first 14 22:00 C 37 240 last 15 15:00 D 30 170 first 15 15:00 D 30 170 last </code></pre>
pandas|dataframe|group-by|concat
3
375,171
53,921,570
accessing a specific cell value in excel with pandas
<p>I want to store in a variable the value of a specific cell that matches my condition. I am trying this on the <a href="https://www.dataquest.io/blog/large_files/movies.xls" rel="nofollow noreferrer">IMDB movie dataset</a>.</p> <pre><code>import pandas as pd file='movies.xls' sorted=pd.read_excel(file,index_col='Language') print(sorted.head()) </code></pre> <p>For example, I want to collect the names of the movies whose Language is German. I read the source code of read_excel() and searched a lot but found nothing helpful... What should I do? Any help?</p>
<p>This code does what I want:</p> <pre><code>import pandas as pd file='movies.xls' sorted=pd.read_excel(file,sheet_name=0,index_col='Title') var=sorted[sorted['Language'].str.contains("German")] print(var.head()) </code></pre>
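<p>As a side note, a minimal alternative sketch that filters without changing the index, assuming the sheet has 'Title' and 'Language' columns as in the snippet above:</p> <pre><code>import pandas as pd

df = pd.read_excel('movies.xls', sheet_name=0)
# Boolean indexing with .loc: rows where Language is exactly 'German',
# keeping only the Title column.
german_titles = df.loc[df['Language'] == 'German', 'Title']
print(german_titles.head())
</code></pre>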
python|excel|pandas
1
375,172
54,191,326
Moving pandas series value by switching column name?
<p>I have a DF, however the last value of some series should be placed in a different one. This happened due to column names not being standardized - i.e., some are "Wx_y_x_PRED" and some are "Wx_x_y_PRED". I'm having difficulty writing a function that will simply find the columns with &gt;= 225 NaN's and changing the column it's assigned to. </p> <p>I've written a function that for some reason will sometimes work and sometimes won't. When it does, it further creates approx 850 columns in its wake (the OG dataframe is around 420 with the duplicate columns). I'm hoping to have something that just reassigns the value. If it automatically deletes the incorrect column, that's awesome too, but I just used .dropna(thresh = 2) when my function worked originally. </p> <p>Here's what it looks like originally:</p> <pre><code>in: df = pd.DataFrame(data = {'W10_IND_JAC_PRED': ['NaN','NaN','NaN','NaN','NaN',2], 'W10_JAC_IND_PRED': [1,2,1,2,1,'NAN']}) out:df W10_IND_JAC_PRED W10_JAC_IND_PRED 0 NaN 1 1 NaN 2 2 NaN 1 3 NaN 2 4 NaN 1 W 2 NAN </code></pre> <p>I wrote this, which occasionally works but most of the time doesn't and I'm not sure why. </p> <pre><code>def switch_cols(x): """Takes mismatched columns (where only the last value != NaN) and changes order of team column names""" if x.isna().sum() == 5: col_string = x.name.split('_') col_to_switch = ('_').join([col_string[0],col_string[2],col_string[1],'PRED']) df[col_to_switch]['row_name'] = x[-1] else: pass return x </code></pre> <p>Most of the time it just returns to me the exact same DF, but this is the desired outcome. </p> <pre><code>W10_IND_JAC_PRED W10_JAC_IND_PRED 0 NaN 1 1 NaN 2 2 NaN 1 3 NaN 2 4 NaN 1 W 2 2 </code></pre> <p>Anyone have any tips or could share why my function works maybe 10% of the time? </p> <p>Edit:</p> <p>So this is an ugly "for" loop I wrote that works. I know there has to be a much more pythonic way of doing this while preserving original column names, though. </p> <pre><code>for i in range(df.shape[1]):
    if df.iloc[:,i].isna().sum() == 5:
        split_nan_col = df.columns[i].split('_')
        correct_col_name = ('_').join([split_nan_col[0],split_nan_col[2],split_nan_col[1],split_nan_col[3]])
        df.loc[5,correct_col_name] = df.loc[5,df.columns[i]]
    else:
        pass
</code></pre>
<p>Split the column names on <code>'_'</code> and map them through <code>frozenset</code>, so that names differing only in the order of their parts collapse to the same key, then <code>join</code> the parts back into a single label. Notice this solution can be extended to more columns.</p> <pre><code>df.columns=df.columns.str.split('_').map(frozenset).map('_'.join) df.mask(df=='NaN').groupby(level=0,axis=1).first() # groupby first will return the first not null value PRED_JAC_W10_IND 0 1 1 2 2 1 3 2 4 1 5 2 </code></pre>
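<p>To see why <code>frozenset</code> collapses the mismatched names onto the same key (the order of the parts no longer matters):</p> <pre><code>&gt;&gt;&gt; frozenset('W10_IND_JAC_PRED'.split('_')) == frozenset('W10_JAC_IND_PRED'.split('_'))
True
</code></pre>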
python|pandas
1
375,173
53,885,868
Is it Possible to classify dataset using DNN classifier based on Different label values of type String?
<p>I have network traffic data as a CSV file that contains all the required features and a class column (label column). The problem is that the class column is of type String and it contains the following labels: </p> <p>'normal','icmp-echo','tcp-syn','udp-flood','httpFlood','slowloris','slowpost','bruteForce'</p> <p>I am trying to classify the network traffic (dataset) based on the above labels. Is n-Class &gt; 2 correct/possible?</p> <p>Please refer to the snapshots below, which give a better understanding of what I'm trying to do.</p> <p><a href="https://i.stack.imgur.com/izI2y.png" rel="nofollow noreferrer">First Snapshot</a></p> <p><a href="https://i.stack.imgur.com/7dlhh.png" rel="nofollow noreferrer">Second Snapshot</a></p>
<p>Yes you can do classification using DNN. Here is an <a href="http://vprusso.github.io/blog/2016/tensor-flow-neural-net-breast-cancer/" rel="nofollow noreferrer">example</a> to do breast cancer classification using DNN.</p> <p>As far as the <strong>String labels</strong> are concerned, you need to do <a href="https://hackernoon.com/what-is-one-hot-encoding-why-and-when-do-you-have-to-use-it-e3c6186d008f" rel="nofollow noreferrer">One Hot Encoding</a> to convert categorical variables into numerical variables. You can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html" rel="nofollow noreferrer">pandas.get_dummies</a> for this.</p> <h3>Example</h3> <pre><code>&gt;&gt;&gt; s1 = ['a', 'b', 'c', 'a'] &gt;&gt;&gt; pd.get_dummies(s1) a b c 0 1 0 0 1 0 1 0 2 0 0 1 3 1 0 0 </code></pre>
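<p>For the label column specifically, integer encoding is another common option besides one-hot encoding; a minimal sketch using scikit-learn's <code>LabelEncoder</code> on the labels from the question:</p> <pre><code>from sklearn.preprocessing import LabelEncoder

labels = ['normal', 'icmp-echo', 'tcp-syn', 'udp-flood',
          'httpFlood', 'slowloris', 'slowpost', 'bruteForce']
encoder = LabelEncoder()
y = encoder.fit_transform(labels)   # one integer per class, 8 classes in total
</code></pre>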
python|tensorflow|deep-learning
2
375,174
54,089,396
Threshold numpy array, find windows
<p>Input data is a 2D array (timestamp, value) pairs, ordered by timestamp:</p> <pre><code>np.array([[50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66], [ 2, 3, 5, 6, 4, 2, 1, 2, 3, 4, 5, 4, 3, 2, 1, 2, 3]]) </code></pre> <p>I want to find time windows where the value exceeds a threshold (eg. >=4). Seems I can do the threshold part with a boolean condition, and map back to the timestamps with <code>np.extract()</code>:</p> <pre><code>&gt;&gt;&gt; a[1] &gt;= 4 array([False, False, True, True, True, False, False, False, False, True, True, True, False, False, False, False, False]) &gt;&gt;&gt; np.extract(a[1] &gt;= 4, a[0]) array([52, 53, 54, 59, 60, 61]) </code></pre> <p>But from that I need the first and last timestamps of each window matching the threshold (ie. <code>[[52, 54], [59, 61]]</code>), which is where I can't quite find the right approach.</p>
<p>Here's one way:</p> <pre><code># Create a mask In [42]: mask = (a[1] &gt;= 4) # find indices of start and end of the threshold In [43]: ind = np.where(np.diff(mask))[0] # add 1 to starting indices In [44]: ind[::2] += 1 # find and reshape the result In [45]: result = a[0][ind].reshape(-1, 2) In [46]: result Out[46]: array([[52, 54], [59, 61]]) </code></pre>
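<p>If a run of values above the threshold could touch either end of the array, one option (a sketch, not the only way) is to pad the mask before looking for the edges, so every window still has both a start and an end:</p> <pre><code>padded = np.concatenate(([False], mask, [False]))
edges = np.flatnonzero(padded[1:] != padded[:-1])   # positions where the mask flips
starts, stops = edges[::2], edges[1::2]             # stops are exclusive
result = np.column_stack((a[0][starts], a[0][stops - 1]))
</code></pre>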
python|arrays|numpy
6
375,175
53,820,131
How to create a dataframe from numpy arrays?
<p>I am trying to create a matrix / DataFrame with the numbers stored in 2 variables</p> <pre><code>x = np.linspace(0,50) y = np.exp(x) </code></pre> <p>and I would like them to look like this:</p> <pre><code>x | y ___________________ 0 | 1.0... 1 | 2.77... 2 | 7.6... ... | ... 50 | 5.18e+21... </code></pre> <p>I would like it to be in a <code>DataFrame</code> so I can work with it with the <code>pandas</code> library. </p> <p>Thanks in advance </p>
<p>With <code>pandas</code>:</p> <p>You can issue</p> <pre><code>&gt;&gt;&gt; xs = np.arange(51) &gt;&gt;&gt; ys = np.exp(xs) </code></pre> <p>to get the x and y values and then build your dataframe with</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame({'x': xs, 'y': ys}) &gt;&gt;&gt; df x y 0 0 1.000000e+00 1 1 2.718282e+00 2 2 7.389056e+00 3 3 2.008554e+01 ... </code></pre> <p>In this case, you can also use the x-values as the index of a series without losing any information.</p> <pre><code>&gt;&gt;&gt; index = pd.RangeIndex(0, 51, name='x') &gt;&gt;&gt; exps = pd.Series(data=np.exp(index), index=index, name='y') &gt;&gt;&gt; exps x 0 1.000000e+00 1 2.718282e+00 2 7.389056e+00 3 2.008554e+01 ... Name: y, dtype: float64 </code></pre> <hr> <p>Without <code>pandas</code>:</p> <p>Consider if you truly need a dataframe or series. You could just leave it at </p> <pre><code>&gt;&gt;&gt; xs = np.arange(51) &gt;&gt;&gt; ys = np.exp(xs) </code></pre> <p>and then index into <code>ys</code> with the integers <code>0</code>, <code>1</code>, <code>2</code>, ... to get the values of <code>exp(0)</code>, <code>exp(1)</code>, <code>exp(2)</code>, ...</p>
python|pandas|numpy|dataframe
4
375,176
53,979,153
Error when plotting contour with matplotlib
<p>I am trying to plot such a function. However, the following code will cause an error. I think that the cause is that a scalar value is returned in <code>norm ()</code>, but how can it be solved?<a href="https://i.stack.imgur.com/JLzB7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JLzB7.png" alt="function"></a></p> <p>The label of the image represents the definition formula, the search space, the optimal solution from the left</p> <pre><code>import numpy as np import matplotlib.pyplot as plt x = np.arange(-5, 5, 0.05) y = np.arange(-5, 5, 0.05) X ,Y= np.meshgrid(x, y) print(X) c1 = -2 * np.ones((2,200,200)) c2 = 4 * np.ones((2,200,200)) print(np.linalg.norm(np.array([X,Y]) - c1)) Z = (1 - 1 / (1 * np.linalg.norm(np.array([X,Y]) - c1) + 1)) + (1 - 1 / (2 * np.linalg.norm(np.array([X,Y]) - c2) + 1)) plt.pcolormesh(X, Y, Z,cmap='hsv') plt.show() </code></pre>
<p>The problem is that your current <code>Z</code> is not of same dimension as your <code>X</code> and <code>Y</code>. This could be verified by printing the shape of X, Y and Z. The reason is that you did not provide an <code>axis</code> while computing the <code>norm</code> in your equation and hence you were getting a scalar value. You can refer to the <a href="https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.linalg.norm.html" rel="nofollow noreferrer">Official docs</a> for more info on how the <code>axis</code> argument works. In your case, since you did not specify any value for the <code>axis</code>, it was returning you a <em>matrix norm</em> instead of a <em>vector norm</em></p> <p>Below is the solution where you provide <code>axis=0</code> to compute the norm properly for column wise entry combinations of your <code>X</code> and <code>Y</code> </p> <pre><code>Z = (1 - 1 / (1 * np.linalg.norm(np.array([X,Y]) - c1, axis=0) + 1)) + (1 - 1 / (2 *np.linalg.norm(np.array([X,Y]) - c2, axis=0) + 1)) plt.pcolormesh(X, Y, Z,cmap='hsv') </code></pre> <p><a href="https://i.stack.imgur.com/cTZyb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cTZyb.png" alt="enter image description here"></a></p>
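<p>A small demonstration of the difference the <code>axis</code> argument makes:</p> <pre><code>&gt;&gt;&gt; v = np.array([[3.0, 0.0],
...               [4.0, 0.0]])
&gt;&gt;&gt; np.linalg.norm(v)           # no axis: matrix (Frobenius) norm, a single scalar
5.0
&gt;&gt;&gt; np.linalg.norm(v, axis=0)   # axis=0: one vector norm per column
array([5., 0.])
</code></pre>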
python|numpy|matplotlib
1
375,177
54,093,244
Shape of data and LSTM Input for varying timesteps
<p>For my master thesis, I want to predict the price of a stock in the next hour using a LSTM model. My X data contains 30.000 rows with 6 dimensions (= 6 features), my Y data contains 30.000 rows and only 1 dimension (=target variable). For my first LSTM model, I reshaped the X data to (30.000x1x6), the Y data to (30.000x1) and determined the input like this: input_nn = Input(shape=(1, 6))</p> <p>I am not sure how to reshape the data and to determine the input shape for the model if I want to increase the timesteps. I still want to predict the stock price in the next hour, but include more previous time steps. Do I have to add the data from previous timesteps in my X data in the second dimension? </p> <p>Can you explain what the number of units of a LSTM exactly refers to? Should it be the same as the number of timesteps in my case?</p>
<p>You are on the right track but confusing the number of units with timesteps. The <code>units</code> is a hyper-parameter that controls the output dimension of the LSTM. It is the dimension of the LSTM output vector, so if input is <code>(1,6)</code> and you have 32 units you will get <code>(32,)</code> as in the LSTM will traverse the single timestep and produce a vector of size 32.</p> <p>Timesteps refers to the size of the history you want your LSTM to consider. So it isn't the same as units at all. Instead of processing the data yourself, Keras has a handy <a href="https://keras.io/preprocessing/sequence/" rel="nofollow noreferrer">TimeseriesGenerator</a> which will take 2D data like yours and use a sliding window of some timestep size to generate timeseries data. From the documentation:</p> <pre><code>from keras.preprocessing.sequence import TimeseriesGenerator import numpy as np data = np.array([[i] for i in range(50)]) targets = np.array([[i] for i in range(50)]) data_gen = TimeseriesGenerator(data, targets, length=10, sampling_rate=2, batch_size=2) assert len(data_gen) == 20 batch_0 = data_gen[0] x, y = batch_0 assert np.array_equal(x, np.array([[[0], [2], [4], [6], [8]], [[1], [3], [5], [7], [9]]])) assert np.array_equal(y, np.array([[10], [11]])) </code></pre> <p>which you can use directly in <code>model.fit_generator(data_gen,...)</code> giving you the option to try out different sampling_rates, timesteps etc. You should probably investigate these parameters and how they affect the result in your thesis.</p>
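<p>To tie this back to the question: once you pick a history length, the input shape is simply <code>(timesteps, features)</code>. A minimal sketch (the value 24 is just an assumed example of how many past hours to use):</p> <pre><code>from keras.layers import Input, LSTM, Dense
from keras.models import Model

timesteps, features = 24, 6          # assumption: use the previous 24 hours
inp = Input(shape=(timesteps, features))
x = LSTM(32)(inp)                    # 32 units -&gt; output vector of size 32
out = Dense(1)(x)                    # predict the price for the next hour
model = Model(inp, out)
model.compile(optimizer='adam', loss='mse')
</code></pre>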
python|tensorflow|keras|neural-network|lstm
1
375,178
54,140,523
Retain order when taking unique rows in a NumPy array
<p>I have three 2D arrays <code>a1</code>, <code>a2</code>, and <code>a3</code></p> <pre><code>In [165]: a1 Out[165]: array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]]) In [166]: a2 Out[166]: array([[ 9, 10, 11], [15, 16, 17], [18, 19, 20]]) In [167]: a3 Out[167]: array([[6, 7, 8], [4, 5, 5]]) </code></pre> <p>And I stacked these arrays into a single array:</p> <pre><code>In [168]: stacked = np.vstack((a1, a2, a3)) In [170]: stacked Out[170]: array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11], [ 9, 10, 11], [15, 16, 17], [18, 19, 20], [ 6, 7, 8], [ 4, 5, 5]]) </code></pre> <p>Now, I want to get rid of the duplicate rows. So, <code>numpy.unique</code> does the job.</p> <pre><code>In [169]: np.unique(stacked, axis=0) Out[169]: array([[ 0, 1, 2], [ 3, 4, 5], [ 4, 5, 5], [ 6, 7, 8], [ 9, 10, 11], [15, 16, 17], [18, 19, 20]]) </code></pre> <p>However, there is one issue here. The original order is lost when taking the unique rows. How can I retain the original ordering and still take the unique rows?</p> <p>So, the expected output should be:</p> <pre><code>array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11], [15, 16, 17], [18, 19, 20], [ 4, 5, 5]]) </code></pre>
<p>Using <code>return_index</code></p> <pre><code>_,idx=np.unique(stacked, axis=0,return_index=True) stacked[np.sort(idx)] array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11], [15, 16, 17], [18, 19, 20], [ 4, 5, 5]]) </code></pre>
python|numpy|multidimensional-array|unique|size-reduction
9
375,179
53,921,175
Not able to import tensorflow on anaconda with python 3.6 version on 64bit system with 64bit anaconda
<p>When I import tensorflow it gives me this error:</p> <blockquote> <blockquote> <blockquote> <blockquote> <p>Traceback (most recent call last): File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in from tensorflow.python.pywrap_tensorflow_internal import * File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in _pywrap_tensorflow_internal = swig_import_helper() File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "C:\Users\User\Anaconda3\lib\imp.py", line 243, in load_module return load_dynamic(name, filename, file) File "C:\Users\User\Anaconda3\lib\imp.py", line 343, in load_dynamic return _load(spec) ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.</p> </blockquote> </blockquote> </blockquote> <p>During handling of the above exception, another exception occurred:</p> <p>Traceback (most recent call last): File "", line 1, in import tensorflow as tf File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow__init__.py", line 24, in from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python__init__.py", line 49, in from tensorflow.python import pywrap_tensorflow File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in raise ImportError(msg) ImportError: Traceback (most recent call last): File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in from tensorflow.python.pywrap_tensorflow_internal import * File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in _pywrap_tensorflow_internal = swig_import_helper() File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "C:\Users\User\Anaconda3\lib\imp.py", line 243, in load_module return load_dynamic(name, filename, file) File "C:\Users\User\Anaconda3\lib\imp.py", line 343, in load_dynamic return _load(spec) ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.</p> <p>Failed to load the native TensorFlow runtime.</p> <p>See <a href="https://www.tensorflow.org/install/errors" rel="nofollow noreferrer">https://www.tensorflow.org/install/errors</a></p> </blockquote> <p>Please Help me with this </p>
<p>I just solved this same issue with my system (Win 10, 64 bit). Following are the details of how I resolved this issue:</p> <ol> <li>Install <strong>VS 2017</strong>, tensorflow doesn't use it but having it helps in the smooth installation of CUDA toolkit. </li> <li>Update <strong>NVDIA driver from windows device manager</strong>.</li> <li>Download and Install <strong>CUDA toolkit (version 10.1)</strong>.</li> <li>Download and unzip <strong>CUDnn 7.6.5</strong>. Copy the extracted files into a folder in C drive. </li> <li>Add <code>~\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin</code>, <code>~\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin</code> and <code>cudnn-10.1-windows10-x64-v7.6.4.38\cuda\bin</code> to PATH</li> </ol> <p>Doing <code>import tensorflow as tf</code> after the above steps solved the issue.</p>
python|python-3.x|tensorflow
0
375,180
54,122,400
Efficient way of getting average area in numpy
<p>Is there a more efficient way in determining the averages of a certain area in a given numpy array? For simplicity, lets say I have a 5x5 array:</p> <pre><code>values = np.array([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7], [4, 5, 6, 7, 8]]) </code></pre> <p>I would like to get the averages of each coordinate, with a specified area size, assuming the array wraps around. Lets say the certain area is size <code>2</code>, thus anything around a certain point within distance 2 will be considered. For example, to get the average of the area from coordinate (2,2), we need to consider</p> <pre><code> 2, 2, 3, 4, 2, 3, 4, 5, 6 4, 5, 6, 6, </code></pre> <p>Thus, the average will be <code>4.</code></p> <p>For coordinate (4, 4) we need to consider:</p> <pre><code> 6, 6, 7, 3, 6, 7, 8, 4, 5 3, 4, 0, 5, </code></pre> <p>Thus the average will be <code>4.92.</code></p> <p>Currently, I have the following code below. But since I have a for loop I feel like it could be improved. Is there a way to just use numpy built in functions?</p> <p>Is there a way to use np.vectorize to gather the subarrays (area), place it all in an array, then use np.einsum or something.</p> <pre><code>def get_average(matrix, loc, dist): sum = 0 num = 0 size, size = matrix.shape for y in range(-dist, dist + 1): for x in range(-dist + abs(y), dist - abs(y) + 1): y_ = (y + loc.y) % size x_ = (x + loc.x) % size sum += matrix[y_, x_] num += 1 return sum/num class Coord(): def __init__(self, x, y): self.x = x self.y = y values = np.array([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7], [4, 5, 6, 7, 8]]) height, width = values.shape averages = np.zeros((height, width), dtype=np.float16) for r in range(height): for c in range(width): loc = Coord(c, r) averages[r][c] = get_average(values, loc, 2) print(averages) </code></pre> <p>Output:</p> <pre><code>[[ 3.07617188 2.92382812 3.5390625 4.15234375 4. ] [ 2.92382812 2.76953125 3.38476562 4. 3.84570312] [ 3.5390625 3.38476562 4. 4.6171875 4.4609375 ] [ 4.15234375 4. 4.6171875 5.23046875 5.078125 ] [ 4. 3.84570312 4.4609375 5.078125 4.921875 ]] </code></pre>
<p>This solution is less efficient (slower) than yours but is just an example using <a href="https://www.numpy.org/devdocs/reference/maskedarray.generic.html" rel="nofollow noreferrer"><code>numpy.ma</code></a> module.</p> <p>Required libraries:</p> <pre><code>import numpy as np import numpy.ma as ma </code></pre> <p>Define methods to do the job:</p> <pre><code># build the shape of the area as a rhomboid def rhomboid2(dim): size = 2*dim + 1 matrix = np.ones((size,size)) for y in range(-dim, dim + 1): for x in range(-dim + abs(y), dim - abs(y) + 1): matrix[(y + dim) % size, (x + dim) % size] = 0 return matrix # build a mask using the area shaped def mask(matrix_shape, rhom_dim): mask = np.zeros(matrix_shape) bound = 2*rhom_dim+1 rhom = rhomboid2(rhom_dim) mask[0:bound, 0:bound] = rhom # roll to set the position of the rhomboid to 0,0 mask = np.roll(mask,-rhom_dim, axis = 0) mask = np.roll(mask,-rhom_dim, axis = 1) return mask </code></pre> <p>Then, iterate to build the result:</p> <pre><code>mask_ = mask((5,5), 2) # call the mask sized as values array with a rhomboid area of size 2 averages = np.zeros_like(values, dtype=np.float16) # initialize the recipient # iterate over the mask to calculate the average for y in range(len(mask_)): for x in range(len(mask_)): masked = ma.array(values, mask = mask_) averages[y,x] = np.mean(masked) mask_ = np.roll(mask_, 1, axis = 1) mask_ = np.roll(mask_, 1, axis = 0) </code></pre> <p>Which returns</p> <pre><code># [[3.076 2.924 3.54 4.152 4. ] # [2.924 2.77 3.385 4. 3.846] # [3.54 3.385 4. 4.617 4.46 ] # [4.152 4. 4.617 5.23 5.08 ] # [4. 3.846 4.46 5.08 4.92 ]] </code></pre>
python|numpy|vectorization
0
375,181
54,168,946
Adding text annotations to a map
<p>I'm using geoplotlib to display points on a map and I would like to add names to the points displayed in my map, like text annotations. But I can't figure out how after googling it for a while and looking in the github documentation site. Here's the code to create the map:</p> <pre><code>import pandas as pd # Dataframe containing the data to plot locs = pd.DataFrame({'name': ['a','b'],'lat': [-22.951916, -43.210487], 'lon': [-13.163141, -72.544962]}) # import geoplotlib. import geoplotlib %matplotlib inline # Load the data data = locs[['lat', 'lon']] # Pass the data to geoplotlib.plot geoplotlib.dot(data, color='b', point_size= 10) # Display the map. geoplotlib.inline() </code></pre> <p>This is the map generated:</p> <p><a href="https://i.stack.imgur.com/YpNV5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YpNV5.png" alt="enter image description here"></a> How could I display the values of the column <code>name</code> in the df alongside the blue points in the map??</p> <p>Thanks a lot in advance</p>
<p>The <a href="https://andrea-cuttone.github.io/geoplotlib/api.html#geoplotlib.dot" rel="nofollow noreferrer">geoplotlib API reference for the geoplotlib.dot()</a> function provides an argument <code>f_tooltip</code> which accepts a <em>function</em> to generate a tooltip string for a point.</p> <p>In your code, you do not need a separate <code>data</code> dataframe to plot the map and get the annotated tooltip. The <code>locs</code> dataframe already has the <code>name</code> column which will provide the string for the tooltip and can be used directly. Behind the scenes, the dataframe is converted into a dictionary of key-value pairs. In our case, we just need a simple <code>lambda</code> function to retrieve the <code>name</code> key from the dictionary for the corresponding <code>lat</code> and <code>lon</code>.</p> <p><strong>Note</strong>: I've used <code>geoplotlib.show()</code> as <code>inline</code> plots aren't working for me at this time.</p> <pre><code>import pandas as pd # Dataframe containing the data to plot locs = pd.DataFrame({'name': ['a','b'],'lat': [-22.951916, -43.210487], 'lon': [-13.163141, -72.544962]}) #import geoplotlib. import geoplotlib # %matplotlib inline #function to create a dot density map with annotated tooltip geoplotlib.dot(locs, color='b', point_size= 10,f_tooltip=lambda r:r['name']) # Display the map. geoplotlib.show() </code></pre> <p><strong>The result</strong>:</p> <p><a href="https://i.stack.imgur.com/oJOms.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oJOms.png" alt="enter image description here"></a></p>
python|pandas|python-2.7|google-maps|dataframe
1
375,182
53,995,570
Fetch DB tables blueprint like "describe table_name" command from redshift and DB2 from python
<p>I want to fetch data using my python code like we do with <code>describe [tableName] statement</code>. I want to do that on Redshift and DB2. </p> <p>I tried to do that using Pandas and cursors, I tried the following chunks of commands:</p> <ol> <li><p><code>"set search_path to SCHEMA; select * from pg_table_def where schemaname = 'schema' and LOWER(tablename) = 'TableName';</code></p></li> <li><p><code>describe schema.tableName</code></p></li> <li><p><code>select column_name, data_type, character_manimum_length from information_schema.columns where table_schema = 'Schema' and table_name = 'TableName';</code></p></li> <li><p><code>\d or \d+</code></p></li> </ol> <p>.</p> <pre><code>import psycopg2 import numpy as np import pandas as pd con=psycopg2.connect(dbname= 'DBNAME', host="Host-Link", port= '5439', user= 'username', password= 'password') print(con) cur = con.cursor() query = "set search_path to Schema; select * from pg_table_def where schemaname = 'Schema' and LOWER(tablename) = 'TableName';" cur.execute(query) temp = cur.fetchall() print(temp) data_frame = pd.read_sql("set search_path to Schema; select * from pg_table_def where schemaname = 'Schema' and LOWER(tablename) = 'TableName';", con) print(data_frame) con.close() </code></pre> <p>I want the output as following:</p> <pre><code>COLUMN_NAME DATA_TYPE PK NULLABLE DEFAULT AUTOINCREMENT COMPUTED REMARKS POSITION col1 varchar(10) YES NO NO NO 1 col2 varchar(50) NO NO NO NO 2 col3 smallint NO NO NO NO 3 </code></pre>
<p>A lot of this data is included in the <code>SVV_COLUMNS</code> system view. You can query that table using the <code>table_name</code> and <code>table_schema</code> columns.</p> <p><a href="https://docs.aws.amazon.com/redshift/latest/dg/r_SVV_COLUMNS.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/redshift/latest/dg/r_SVV_COLUMNS.html</a></p>
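<p>A minimal sketch of that query from Python (the schema and table names are placeholders, and the selected columns follow the <code>information_schema.columns</code> naming that <code>SVV_COLUMNS</code> mirrors):</p> <pre><code>query = """
    select column_name, data_type, character_maximum_length,
           is_nullable, ordinal_position
    from svv_columns
    where table_schema = 'Schema' and table_name = 'TableName'
    order by ordinal_position;
"""
columns_df = pd.read_sql(query, con)   # con: the psycopg2 connection from the question
print(columns_df)
</code></pre>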
python|pandas|db2|cursor|amazon-redshift
0
375,183
54,081,213
Flatten Dataset of multiple files tensorflow
<p>I'm trying to read the CIFAR-10 dataset from 6 .bin files, and then create a initializable_iterator. <a href="https://www.cs.toronto.edu/~kriz/cifar.html" rel="nofollow noreferrer">This</a> is the site I downloaded the data from, and it also contains a description of the structure of the binary files. Each file contains 2500 images. The resulting iterator, however, only generates one tensor for each file, a tensor of size (2500,3703). Here is my code</p> <pre><code>import tensorflow as tf filename_dataset = tf.data.Dataset.list_files("cifar-10-batches-bin/*.bin") image_dataset = filename_dataset.map(lambda x: tf.decode_raw(tf.read_file(x), tf.float32)) iter_ = image_dataset.make_initializable_iterator() next_file_data = iter_.get_next()I next_file_data = tf.reshape(next_file_data, [-1,3073]) next_file_img_data, next_file_labels = next_file_data[:,:-1], next_file_data[:,-1] next_file_img_data = tf.reshape(next_file_img_data, [-1,32,32,3]) init_op = iter_.initializer with tf.Session() as sess: sess.run(init_op) print(next_file_img_data.eval().shape) _______________________________________________________________________ &gt;&gt; (2500,32,32,3) </code></pre> <p>The first two lines are based on <a href="https://stackoverflow.com/questions/49180980/input-multiple-files-into-tensorflow-dataset">this answer</a>. I would like to be able to specify the number of images generated by <code>get_next()</code>, using <code>batch()</code> rather than it being the number of images in each .bin file, which here is 2500. </p> <p>There has already been a question about flattening a dataset <a href="https://stackoverflow.com/questions/49960875/flatten-a-dataset-in-tensorflow">here</a>, but the answer is not clear to me. In particular, the question seems to contain a code snippet from a class function which is defined elsewhere, and I am not sure how to implement it. </p> <p>I have also tried creating the dataset with <code>tf.data.Dataset.from_tensor_slices()</code>, replacing the first line above with</p> <pre><code>import os filenames = [os.path.join('cifar-10-batches-bin',f) for f in os.listdir("cifar-10-batches-bin") if f.endswith('.bin')] filename_dataset = tf.data.Dataset.from_tensor_slices(filenames) </code></pre> <p>but this didn't solve the problem. </p> <p>Any help would be very much appreciated. Thanks.</p>
<p>I am not sure how your bin file is structured. I am assuming 32*32*3 = 3072 points per image is present in each file. So the data present in each file is a multiple of 3072. However for any other structure, the kind of operations would be similar, so this can still serve as a guide for that. You could do a series of mapping operations:</p> <pre><code>import tensorflow as tf filename_dataset = tf.data.Dataset.list_files("cifar-10-batches-bin/*.bin") image_dataset = filename_dataset.map(lambda x: tf.decode_raw(tf.read_file(x), tf.float32)) image_dataset = image_dataset.map(lambda x: tf.reshape(x, [-1, 32, 32, 3])) # Reshape your data to get 2500, 32, 32, 3 image_dataset = image_dataset.flat_map(lambda x: tf.data.Dataset.from_tensor_slices(x)) # This operation would give you tensors of shape 32,32,3 and put them all together. image_dataset = image_dataset.batch(batch_size) # Now you can define your batchsize </code></pre>
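<p>If, as in the question, each record is actually 3073 values (3072 pixels plus a label), a variant of the same idea splits the label off after flattening. This sketch assumes, like the question's own code, that the label is the last value of each record:</p> <pre><code>ds = filename_dataset.map(lambda x: tf.decode_raw(tf.read_file(x), tf.float32))
ds = ds.map(lambda x: tf.reshape(x, [-1, 3073]))                   # one row per record
ds = ds.flat_map(lambda x: tf.data.Dataset.from_tensor_slices(x))  # one record at a time
ds = ds.map(lambda rec: (tf.reshape(rec[:-1], [32, 32, 3]), rec[-1]))
ds = ds.batch(batch_size)                                          # (images, labels) per batch
</code></pre>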
python|tensorflow|tensorflow-datasets
1
375,184
54,139,851
Using Pandas Dataframe within a SQL Join
<p>I'm trying to perform a SQL join on the the contents of a dataframe with an external table I have in a Postgres Database.</p> <p>This is what the Dataframe looks like:</p> <pre><code>&gt;&gt;&gt; df name author count 0 a b 10 1 c d 5 2 e f 2 </code></pre> <p>I need to join it with a Postgres table that looks like this:</p> <pre><code>TABLE: blog title author url a b w.com b b x.com e g y.com </code></pre> <p>This is what I'm attempting to do, but this doesn't appear to be the right syntax for the query:</p> <pre><code>&gt;&gt;&gt; sql_join = r"""select b.*, frame.* from ({0}) frame join blog b on frame.name = b.title where frame.owner = b.owner order by frame.count desc limit 30;""".format(df) &gt;&gt;&gt; res = pd.read_sql(sql_join, connection) </code></pre> <p>I'm not sure how I can use the values in the dataframes within the sql query. Can someone point me in the right direction? Thanks!</p> <p><strong>Edit</strong>: As per my use case, I'm not able to convert the blog table into a dataframe given memory and performance constraints. </p>
<p>I managed to do this without having to convert the dataframe to a temp table or without reading SQL into a dataframe from the blog table. </p> <p>For anyone else facing the same issue, this is achieved using a virtual table of sorts.</p> <p>This is what my final sql query looks like this:</p> <pre><code>&gt;&gt;&gt; inner_string = "VALUES ('a','b',10), ('c','d',5), ('e','f',2)" &gt;&gt;&gt; sql_join = r"""SELECT * FROM blog JOIN ({0}) AS frame(title, owner, count) ON blog.title = frame.title WHERE blog.owner = frame.owner ORDER BY frame.count DESC LIMIT 30;""".format(inner_string) &gt;&gt;&gt; res = pd.read_sql(sql_join, connection) </code></pre> <p>You can use string manipulation to convert all rows in the dataframe into one large string similar to <code>inner_string</code>.</p>
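<p>For completeness, a minimal sketch of that string manipulation, building the <code>VALUES</code> list from the dataframe rows (only safe for trusted data; anything user-supplied would need proper quoting/parameterisation):</p> <pre><code>rows = ", ".join(
    "('{0}','{1}',{2})".format(name, author, count)
    for name, author, count in df[['name', 'author', 'count']].values
)
inner_string = "VALUES " + rows   # e.g. "VALUES ('a','b',10), ('c','d',5), ('e','f',2)"
</code></pre>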
python|sql|postgresql|pandas
7
375,185
54,063,466
Shift elements above the diagonal to the start of the row
<p>I have a matrix that was generated as a pivot table. I have included the data below. I need to turn the diagonal into the first column, which effectively re-orients the matrix so that the cell in the diagonal becomes the cell in the first column, for each row. </p> <p>This is the matrix as rendered in Pandas</p> <p><a href="https://i.stack.imgur.com/frZpD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/frZpD.png" alt="enter image description here"></a></p> <p>This is a representation of what the matrix should look like after.</p> <p><a href="https://i.stack.imgur.com/6AdZ9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6AdZ9.png" alt="enter image description here"></a></p> <pre><code>df = pd.DataFrame({ 'exposure':[4500,2000, 2000, 2000, 2000, 6000,10000,3000,2000,1000, 2000,3000,4000,6000], 'due_date':['2019-01-01', '2019-01-01', '2019-01-01', '2019-01-01', '2019-01-01', '2019-01-02', '2019-01-02', '2019-01-02','2019-01-01','2019-01-04', '2019-01-03','2019-01-03','2019-01-03','2019-01-04'], 'repaid_date':['2019-01-01', '2019-01-04','2019-01-01', '2019-01-03', '2019-01-02', '2019-01-03','2019-01-04', '2019-01-02', '2019-01-03', '2019-01-04', '2019-01-03','2019-01-04','2019-01-03','2019-01-04']}) pivot = df.pivot_table(values='exposure', index='due_date', columns='repaid_date', aggfunc=len) pivot.fillna(0,inplace=True) pivot.reset_index(inplace=True) </code></pre>
<p>Before filling or resetting the index, you can justify NaNs using Divakar's <a href="https://stackoverflow.com/a/44559180/4909087"><code>justify</code></a> function.</p> <pre><code>pivot = df.pivot_table(values='exposure', index='due_date', columns='repaid_date', aggfunc='size') pivot[:] = justify(pivot.values, invalid_val=np.nan, axis=1, side='left') pivot.fillna(0, downcast='infer').reset_index() repaid_date due_date 2019-01-01 2019-01-02 2019-01-03 2019-01-04 0 2019-01-01 2 1 2 1 1 2019-01-02 1 1 1 0 2 2019-01-03 2 1 0 0 3 2019-01-04 2 0 0 0 </code></pre>
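<p>For reference, a <code>justify</code> helper along the lines of the linked answer (reproduced here as a sketch so the snippet above is self-contained):</p> <pre><code>import numpy as np

def justify(a, invalid_val=0, axis=1, side='left'):
    """Push all valid (non-invalid) values of a 2D array to one side."""
    if invalid_val is np.nan:
        mask = ~np.isnan(a)
    else:
        mask = a != invalid_val
    justified_mask = np.sort(mask, axis=axis)
    if (side == 'up') | (side == 'left'):
        justified_mask = np.flip(justified_mask, axis=axis)
    out = np.full(a.shape, invalid_val)
    if axis == 1:
        out[justified_mask] = a[mask]
    else:
        out.T[justified_mask.T] = a.T[mask.T]
    return out
</code></pre>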
python|pandas|dataframe
1
375,186
53,968,639
How to create columns from another column in a pandas dataframe like below
<p>I have a dataframe which contains duplicate records with columns x,y,z,A </p> <pre><code>X Y Z A a US 88 2016 a IND 88 2016 a IND 88 2017 a RSA 45 2017 a RSA 45 2018 b US 65 2017 b RSA 58 2018 c RSA 58 2016 </code></pre> <p>I want to create columns from the values of column A by counting the distinct countries for each value of the X column, like below.</p> <pre><code>X Z 2016 2017 2018 a 88 2 1 0 a 45 0 1 1 b 65 0 1 0 c 58 1 0 0 </code></pre> <p>I couldn't figure out how to do this in Python; please help me with this.</p>
<p>You can use <code>pivot_table</code>:</p> <pre><code>df.pivot_table('Y',['X','Z'],'A',aggfunc='count', fill_value=0).reset_index() </code></pre> <p>Output:</p> <pre><code>A X Z 2016 2017 2018 0 a 45 0 1 1 1 a 88 2 1 0 2 b 58 0 0 1 3 b 65 0 1 0 4 c 58 1 0 0 </code></pre>
python|pandas|pandas-groupby|data-science
0
375,187
53,951,351
return indexes of filtered dataframe as values
<p>I have a dataframe like the input dataframe below. I would like to create a new dataframe, where I filter my original data frame to only return the indexes of every column value above 0.66. I know I could filter the whold dataframe like </p> <pre><code>df[df&gt;0.66] </code></pre> <p>which would give me NaN values for all entries below 0.66, but then how do I get the indexes in to each column? Any tips are greatly appreciated.</p> <p>input data:</p> <pre><code> str_home_page str_emails str_bigticket_orders str_home_page 1.000000 0.680272 0.654346 str_emails 0.680272 1.000000 0.927515 str_bigticket_orders 0.654346 0.927515 1.000000 </code></pre> <p>desired output:</p> <pre><code> str_home_page str_emails str_bigticket_orders 0 str_home_page str_home_page 1 str_emails str_emails str_emails 2 str_bigticket_orders str_bigticket_orders </code></pre>
<p>Just use <code>mul</code> with the Boolean dataframe:</p> <pre><code>(df&gt;0.66).mul(df.index.values,0) </code></pre>
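<p>Why this works: in Python, <code>True * 'abc'</code> is <code>'abc'</code> and <code>False * 'abc'</code> is <code>''</code>, so multiplying the Boolean frame by the index labels along axis 0 keeps the row label wherever the value exceeded 0.66 and leaves an empty string elsewhere:</p> <pre><code>out = (df &gt; 0.66).mul(df.index.values, 0)   # index label where True, '' where False
</code></pre>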
python-3.x|pandas|numpy
1
375,188
54,042,674
Lambda Apply on Pandas values taking the average of row before and after
<p>I have a price time-series where I wish to cleanse the data-set. How I plan to do it is to set 'incorrect' jumps in prices to the average of the 'before' and 'after' price.</p> <p>I have a pandas frame named df, with price as 'mid'. I set the prx_chg as per below.</p> <pre><code>df['prx_chg'] = df['mid'].pct_change(periods= 1, fill_method='pad', limit=None, freq=None).shift(periods = -1).fillna(0) </code></pre> <p>Is there a simple way, to set across the rows of 'mid' such that if the prx_chg is above a magnitude X, the 'mid' is set to be the average of [row -1], [row +1] ?</p> <p>I tried the below using lambda apply, but it didn't work.</p> <pre><code>mid = [1.0, 1.1, 1.0, 100, 1.2, 0.9, -100, 1.2] df = pd.DataFrame(mid, columns = ['mid']) df['prx_chg'] = df['mid'].pct_change(periods= 1, fill_method='pad', limit=None, freq=None).shift(periods = -1).fillna(0) df.apply(lambda row: row['mid'] = np.average(a, b) if row['prx_chg'] &gt;= n.abs(10)) </code></pre>
<p>IIUC, you might use <code>np.where</code> and <code>shift</code> in this case;</p> <pre><code>df['mid'] = np.where((df['prx_chg'].shift(1) &gt;= 10) | (df['prx_chg'].shift(1) &lt;= -10), (df['mid'].shift(-1) + df['mid'].shift(1)) / 2, df['mid']) df mid prx_chg 0 1.00 0.100000 1 1.10 -0.090909 2 1.00 99.000000 3 1.10 -0.988000 4 1.20 -0.250000 5 0.90 -112.111111 6 1.05 -1.012000 7 1.20 0.000000 </code></pre>
python-3.x|pandas|lambda
1
375,189
53,904,175
python nested lists and arrays
<p>I have output from some code where the coordinates of several rectangles (four corners x,y) are provided in a list of arrays containing nested lists, which looks as follows: </p> <pre><code>[array([[[x1, y1], [x2, y2], [x3, y3], [x4, y4]]], dtype=float32), ... array([[[x1, y1], [x2, y2], [x3, y3], [x4, y4]]], dtype=float32)] </code></pre> <p>I have another list of corresponding rectangle IDs, which looks like this:</p> <pre><code>[[310] [401] ... [203] [181]] </code></pre> <p>They are in the same order as the coordinates. I want to mash up both lists to get the following data structure:</p> <pre><code>[[rect_ID, [(x1,y1),(x2,y2),(x3,y3),(x4,y4)], [rect_ID, [(x1,y1),(x2,y2),(x3,y3),(x4,y4)], ... [rect_ID, [(x1,y1),(x2,y2),(x3,y3),(x4,y4)]] </code></pre> <p>I then need to sort the list by the rect_ID.</p> <p>Any ideas how to achieve that?</p>
<p>Here is one way of doing it using list comprehensions. </p> <p><strong>Explanation:</strong> You loop over the combination of two lists (<code>coords</code> and <code>ids</code>) since they map one to one. <code>i[0]</code> gives you the index and <code>j.flatten()</code> converts each array of your <code>coords</code> into a single 1d array. The task then is to create pairs of coordinates as tuples. To do so, first you get every even indexed elements starting from 0 in steps of 2 using <code>[0::2]</code> and every odd indexed element starting from 1 in steps of 2 using <code>[1::2]</code>. Using zip, you combine them in pairs and then finally use <code>list</code> to convert them into a list <code>[]</code>. </p> <p>Finally you sort the <code>final</code> list using the id (first element) as the key. </p> <pre><code># Sample data (Just taken for example purpose) coords = [np.array([[[1, 2], [2,1], [3,2], [4,4]]]), np.array([[[3,2], [1,2], [1,4], [5,6]]]), np.array([[[12,2], [1,21], [1,14], [15,6]]])] ids = [[310], [181],[123]] </code></pre> <hr> <p><strong>Code</strong></p> <pre><code>final = [[i[0], list(zip(j.flatten()[0::2], j.flatten()[1::2]))] for i, j in zip(ids, coords)] result = sorted(final, key=lambda x: x[0]) print (result) </code></pre> <p><strong>Output</strong></p> <pre><code>[[123, [(12, 2), (1, 21), (1, 14), (15, 6)]], [181, [(3, 2), (1, 2), (1, 4), (5, 6)]], [310, [(1, 2), (2, 1), (3, 2), (4, 4)]]] </code></pre>
python|arrays|list|numpy|nested
4
375,190
53,841,760
Python pandas concatenate columns csv
<p>I have a huge list of <code>Users_id</code> that I want to concatenate. I know how to do it in excel but the file is much too large.</p> <pre><code>Users ID 101 101 102 101,102 103 101,102,103 104 101,102,103,104 </code></pre> <p>Here is what I want to achieve. Here is what I have so far.</p> <pre><code>import pandas as pd df = pd.read_csv('file.csv') pd.concat = df['USER ID']=.astype(str)+','+df['USER ID'] </code></pre>
<p>This is an unusual operation since your input is numeric, while your output is a sequence of comma-separated strings. One solution is to use <a href="https://docs.python.org/3/library/itertools.html#itertools.accumulate" rel="nofollow noreferrer"><code>itertools.accumulate</code></a> with f-strings (Python 3.6; <a href="https://www.python.org/dev/peps/pep-0498/" rel="nofollow noreferrer">PEP498</a>):</p> <pre><code>import pandas as pd from itertools import accumulate df = pd.DataFrame({'Users': [101, 102, 103, 104]}) def joiner(x, y): return f'{x},{y}' df['Cumulative'] = list(accumulate(df['Users'].astype(str), func=joiner)) print(df) Users Cumulative 0 101 101 1 102 101,102 2 103 101,102,103 3 104 101,102,103,104 </code></pre>
python|pandas|csv|concatenation
1
375,191
54,120,583
tf-serving abnormal exit without error message
<p>tf-serving abnormal exit without error message</p> <h3>System information</h3> <p>OS Platform and Distribution (e.g., Linux Ubuntu 16.04): ReaHat EL6</p> <p>TensorFlow Serving installed from (source or binary): source using bazel 0.18.0</p> <p>TensorFlow Serving version: 1.12.0</p> <h3>Describe the problem</h3> <p>i compile the tf-serving using bazel in RHEL 6.9, and start it using:</p> <p>./model_servers/tensorflow_model_server --model_config_file=./data/models.conf --rest_api_port=8502</p> <p>models.conf:</p> <pre><code>model_config_list: { config: { name: "model_1", base_path:"/search/work/tf_serving_bin/tensorflow_serving/data/model_data/model_1", model_platform: "tensorflow", model_version_policy: { latest: { num_versions: 1 } } } } </code></pre> <p><strong>Client using C++, and use libCurl to request tf-serving REST api, but, tf-serving often abnormal exits without error message in some minutes.</strong></p> <p><strong>When my client service requests localhost tf-serving, the question occur frequently. But, client service requests tf-serving at other machines, the question do not occur, qps &lt; 100.</strong></p> <p>I check memory, cpu idle, etc... no problems is found. so, it is very strange.</p> <p>export export TF_CPP_MIN_VLOG_LEVEL=1, no error/critical message too.</p> <h3>Source code / logs</h3> <pre><code>2019-01-09 09:28:35.118183: I tensorflow_serving/model_servers/server_core.cc:461] Adding/updating models. 2019-01-09 09:28:35.118259: I tensorflow_serving/model_servers/server_core.cc:558] (Re-)adding model: app_ks_nfm_1 2019-01-09 09:28:35.227383: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: app_ks_nfm_1 version: 201901072359} 2019-01-09 09:28:35.227424: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: app_ks_nfm_1 version: 201901072359} 2019-01-09 09:28:35.227443: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: app_ks_nfm_1 version: 201901072359} 2019-01-09 09:28:35.227492: I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:363] Attempting to load native SavedModelBundle in bundle-shim from: /search/work/bazel-bin-serving/tensorflow_serving/data/model_data/app_ks_nfm_1/201901072359 2019-01-09 09:28:35.227530: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /search/work/bazel-bin-serving/tensorflow_serving/data/model_data/app_ks_nfm_1/201901072359 2019-01-09 09:28:35.256712: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve } 2019-01-09 09:28:35.267728: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA 2019-01-09 09:28:35.313087: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:162] Restoring SavedModel bundle. 2019-01-09 09:28:38.797633: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:138] Running MainOp with key legacy_init_op on SavedModel bundle. 2019-01-09 09:28:38.803984: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:259] SavedModel load for tags { serve }; Status: success. Took 3570131 microseconds. 
2019-01-09 09:28:38.804027: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:83] No warmup data file found at /search/work/bazel-bin-serving/tensorflow_serving/data/model_data/app_ks_nfm_1/201901072359/assets.extra/tf_serving_warmup_requests 2019-01-09 09:28:38.804148: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: app_ks_nfm_1 version: 201901072359} 2019-01-09 09:28:38.831860: I tensorflow_serving/model_servers/server.cc:286] Running gRPC ModelServer at 0.0.0.0:8500 ... [warn] getaddrinfo: address family for nodename not supported 2019-01-09 09:28:38.865243: I tensorflow_serving/model_servers/server.cc:302] Exporting HTTP/REST API at:localhost:8502 ... [evhttp_server.cc : 237] RAW: Entering the event loop ... </code></pre>
<p>It is not an abnormal exit. It is an indication that the <strong>Server is ready to receive the Inference Requests.</strong> </p> <p>For clarification, please find the below explanation:</p> <pre><code>docker run --runtime=nvidia -p 8501:8501 \ --mount type=bind,\ source=/tmp/tfserving/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_gpu,\ target=/models/half_plus_two \ -e MODEL_NAME=half_plus_two -t tensorflow/serving:latest-gpu &amp; </code></pre> <p>This will run the docker container with the nvidia-docker runtime, launch the TensorFlow Serving Model Server, bind the REST API port 8501, and map our desired model from our host to where models are expected in the container. We also pass the name of the model as an environment variable, which will be important when we query the model.</p> <p><strong>TIP: Before querying the model, be sure to wait till you see a message like the following, indicating that the server is ready to receive requests:</strong></p> <pre><code>2018-07-27 00:07:20.773693: I tensorflow_serving/model_servers/main.cc:333] Exporting HTTP/REST API at:localhost:8501 ... </code></pre> <p><strong>After that Message, just press Enter and you can query the model using the below command</strong></p> <pre><code>curl -d '{"instances": [1.0, 2.0, 5.0]}' \ -X POST http://localhost:8501/v1/models/half_plus_two:predict </code></pre> <p>For more information, refer the below link:</p> <p><a href="https://www.tensorflow.org/tfx/serving/docker#gpu_serving_example" rel="nofollow noreferrer">https://www.tensorflow.org/tfx/serving/docker#gpu_serving_example</a></p>
tensorflow-serving
0
375,192
54,050,581
Installed Keras with pip3, but getting the "No Module Named keras" error
<p>I am Creating a leaf Identification Classifier using the CNN, the Keras and the Tensorflow backends on Windows. I have installed Anaconda, Tensorflow, numpy, scipy and keras.</p> <p>I installed keras using pip3:</p> <pre><code>C:\&gt; pip3 list | grep -i keras Keras 2.2.4 Keras-Applications 1.0.6 Keras-Preprocessing 1.0.5 </code></pre> <p>However, when i run my project i get following error</p> <pre><code>ModuleNotFoundError: No module named 'keras' </code></pre> <p>Why is the module not found, and how can I fix this error?</p>
<p>Installing Anaconda and then installing packages with pip seems to confuse the goal of Anaconda (or any other package management tool).</p> <p>Anaconda is there to help you organize your environments and their dependencies.</p> <p>Assuming you have conda on your system path, do:</p> <p>Update conda</p> <pre><code>conda update conda </code></pre> <p>We can create an environment called 'awesome' with python 3.6 and add all the awesome data science packages coming with anaconda (numpy, scipy, jupyter notebook/lab etc.), plus tensorflow and keras. You can drop <em>anaconda</em> and have a minimal package set if desired.</p> <pre><code>conda create -n awesome python=3.6 anaconda tensorflow keras </code></pre> <p>After quite some time, if all is well, activate your environment and test if we can import keras.</p> <pre><code>conda activate awesome python -c "import keras" </code></pre> <p>When done doing awesomeness, you can deactivate like so:</p> <pre><code>conda deactivate </code></pre> <p>conda is better than pip here because it deals with library compatibilities. It upgrades and downgrades packages for you. </p> <p>Something beautiful about Anaconda is that you can just install the main package and it will install all its dependencies for you, so you could just do:</p> <pre><code>conda create -n awesome python=3.6 keras </code></pre> <p>This will automatically find and install all the packages that keras depends on, such as tensorflow and numpy.</p> <p><strong>What you are doing wrong</strong>:<br> You get that error because your python sys.path cannot locate the packages you install.</p> <p>You can do:</p> <pre><code>python -c "import sys;print(sys.path)" </code></pre> <p>This will print the locations your python will look in for packages. It is most likely that the path to the keras library is not one of them.</p> <p>When you just use pip to install, your default python that has that pip will have access to your installations. So if you have multiple Pythons, the recommendation is to be explicit like:</p> <pre><code>python3 -m pip install packages </code></pre> <p>So here you are sure that it is the Python in the python3 directory that did the installation. This is where we need environments that keep our Python versions and dependencies separate and easy to control. Anaconda, Pipenv, Poetry, piptools and more are there trying to help you manage your systems better ;)</p> <p><b>Update: For Jupyter Notebook/Lab users</b></p> <p>If you already have Jupyter, say on your base environment, we can add awesome as another kernel:</p> <pre class="lang-sh prettyprint-override"><code>conda activate awesome (awesome ) conda install ipykernel -y (awesome) python -m ipykernel install --user --name my_env --display-name "Awesome" conda deactivate </code></pre> <p>Now if you run Jupyter, you should be able to choose between Base Python and Awesome environment.</p>
python|windows|tensorflow|keras|keras-2
6
375,193
53,861,726
Inserting zeros in numpy array
<p>A function that takes in a <strong>vector</strong> and returns a new vector where every element is separated by 4 consecutive zeros. </p> <p>Example: </p> <pre><code>[4, 2, 1] --&gt; [4,0,0,0,0,2,0,0,0,0,1] </code></pre>
<p><strong><em>Setup</em></strong></p> <pre><code>a = np.array([4, 2, 1]) </code></pre> <hr> <p>Using slice assignment:</p> <pre><code>s = a.shape[0] v = s + (4 * (s - 1)) f = np.zeros(v) f[::5] = a </code></pre> <p></p> <pre><code>array([4., 0., 0., 0., 0., 2., 0., 0., 0., 0., 1.]) </code></pre>
python|numpy
3
375,194
53,983,083
Linear regression with defined intercept
<p>I have a DataFrame (df) with two columns and three rows. </p> <p>Column X = [137,270,344] Column Y = [51, 121, 136]</p> <p>I want to get the slope of the linear regression considering the intercept = 0. </p> <p>I have tried to add a point (0,0) but it doesn't work.</p> <p>EX. Column X = [0, 137,270,344] Column Y = [0, 51, 121, 136]</p> <p>The code that I am using:</p> <p>Code:</p> <pre><code>X = df["Column X"].astype(float) Y = df["Column Y"].astype(float) slope, intercept, r_value, p_value, std_err = stats.linregress(X, Y) intercept_desv = slope coef_desv = intercept </code></pre> <p>I expected intercept = 0 but it is less than 0.</p>
<p>In standard linear regression, all data points implicitly have a weight of 1.0. In any software that allows linear regression using weights, the regression can effectively be made to pass through any single point - such as the origin - by assigning that data point an extremely large weight. Numpy's polyfit() allows weights. Here is a graphing example with your data using this technique to make the fitted line pass through the 0,0 point.</p> <pre><code>import numpy, matplotlib import matplotlib.pyplot as plt xData = numpy.array( [0.0, 137.0, 270.0, 344.0]) yData = numpy.array([0.0, 51.0, 121.0, 136.0]) weights = numpy.array([1.0E10, 1.0, 1.0, 1.0]) # heavily weight the 0,0 point #weights = None # use this for "no weights" polynomialOrder = 1 # example straight line # curve fit the test data fittedParameters = numpy.polyfit(xData, yData, polynomialOrder, w=weights) print('Fitted Parameters:', fittedParameters) modelPredictions = numpy.polyval(fittedParameters, xData) absError = modelPredictions - yData SE = numpy.square(absError) # squared errors MSE = numpy.mean(SE) # mean squared errors RMSE = numpy.sqrt(MSE) # Root Mean Squared Error, RMSE Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData)) print('RMSE:', RMSE) print('R-squared:', Rsquared) print() print('Predicted value at x=0:', modelPredictions[0]) print() ########################################################## # graphics output section def ModelAndScatterPlot(graphWidth, graphHeight): f = plt.figure(figsize=(graphWidth/100.0, graphHeight/100.0), dpi=100) axes = f.add_subplot(111) # first the raw data as a scatter plot axes.plot(xData, yData, 'D') # create data for the fitted equation plot xModel = numpy.linspace(min(xData), max(xData)) yModel = numpy.polyval(fittedParameters, xModel) # now the model as a line plot axes.plot(xModel, yModel) axes.set_xlabel('X Data') # X axis data label axes.set_ylabel('Y Data') # Y axis data label plt.show() plt.close('all') # clean up after using pyplot graphWidth = 800 graphHeight = 600 ModelAndScatterPlot(graphWidth, graphHeight) </code></pre>
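<p>As a lighter-weight alternative: for a straight line forced through the origin, y = m*x, ordinary least squares has the closed form m = sum(x*y) / sum(x*x), so the slope can be computed directly (a sketch using the same arrays as above):</p> <pre><code>slope = numpy.dot(xData, yData) / numpy.dot(xData, xData)
print('Slope of line through the origin:', slope)
</code></pre>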
python|pandas|dataframe|linear-regression
0
375,195
54,084,310
How to prevent multi value dictionary object from splitting each word into individual letter strings?
<p>I have a dictionary object that looks like this:</p> <pre><code>my_dict = {123456789123: ('a', 'category'), 123456789456:('bc','subcategory'),123456789678:('c_d','subcategory')} </code></pre> <p>The below code extracts and compares a integer in column headers in a df to the key in the dictionary and creates a new dataframe by picking the second value as the columns of the new df and first value as the value inside the df. </p> <p>Code:</p> <pre><code>names = df.columns.values new_df = pd.DataFrame() for name in names: if ('.value.' in name) and df[name][0]: last_number = int(name[-13:]) print(last_number) key, value = my_dict[last_number] try: new_df[value][0] = list(new_df[value][0]) + [key] except: new_df[value] = [key] </code></pre> <p>new_df:</p> <pre><code> category subcategory 0 a [b, c, c_d] </code></pre> <p>I am not sure what is causing it in my code, but how do I prevent <code>bc</code>from split up? </p> <p>edit:</p> <p>example df from above:</p> <pre><code>data.value.123456789123 data.value.123456789456 data.value.123456789678 TRUE TRUE TRUE </code></pre> <p>new_df should look like this:</p> <pre><code> category subcategory 0 a [bc, c_d] </code></pre>
<p><code>list(new_df[value][0])</code> breaks a string into a list of characters, that's why you get the individual characters.</p> <p><code>list(new_df[value][0])</code> must be <code>[new_df[value][0]]</code>. Or, better, <code>list(new_df[value][0]) + [key]</code> must be <code>[new_df[value][0], key]</code>. </p>
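<p>A tiny demonstration of the difference:</p> <pre><code>&gt;&gt;&gt; key = 'bc'
&gt;&gt;&gt; list(key)   # iterating a string splits it into characters
['b', 'c']
&gt;&gt;&gt; [key]       # a one-element list keeps the word intact
['bc']
</code></pre>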
python|python-3.x|string|pandas
2
375,196
54,065,097
Is there any way to remove column and rows numbers from DataFrame.from_dict?
<p>So, I have a problem with my dataframe from dictionary - python actually "names" my rows and columns with numbers. Here's my code:</p> <pre><code>a = dict() dfList = [x for x in df['Marka'].tolist() if str(x) != 'nan'] dfSet = set(dfList) dfList123 = list(dfSet) for i in range(len(dfList123)): number = dfList.count(dfList123[i]) a[dfList123[i]]=number sorted_by_value = sorted(a.items(), key=lambda kv: kv[1], reverse=True) dataframe=pd.DataFrame.from_dict(sorted_by_value) print(dataframe) </code></pre> <p>I've tried to rename columns like this: <code>dataframe=pd.DataFrame.from_dict(sorted_by_value, orient='index', columns=['A', 'B', 'C'])</code>, but it gives me a error:</p> <pre><code>AttributeError: 'list' object has no attribute 'values' </code></pre> <p>Is there any way to fix it?</p> <p><strong>Edit:</strong> Here's the first part of my data frame:</p> <pre><code> 0 1 0 VW 1383 1 AUDI 1053 2 VOLVO 789 3 BMW 749 4 OPEL 621 5 MERCEDES BENZ 593 ... </code></pre> <p>The 1st rows and columns are exactly what I need to remove/rename</p>
<h3><code>index</code> and <code>columns</code> are properties of your dataframe</h3> <p>As long as <code>len(df.index) &gt; 0</code> and <code>len(df.columns) &gt; 0</code>, i.e. your dataframe has nonzero rows and nonzero columns, you cannot get rid of the labels from your <code>pd.DataFrame</code> object. Whether the dataframe is constructed from a dictionary, or otherwise, is irrelevant.</p> <p>What you <em>can</em> do is remove them from a <strong>representation</strong> of your dataframe, with output either as a Python <code>str</code> object or a CSV file. Here's a minimal example:</p> <pre><code>df = pd.DataFrame([[1, 2, 3], [4, 5, 6]]) print(df) # 0 1 2 # 0 1 2 3 # 1 4 5 6 # output to string without index or headers print(df.to_string(index=False, header=False)) # 1 2 3 # 4 5 6 # output to csv without index or headers df.to_csv('file.csv', index=False, header=False) </code></pre>
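<p>As a side note on the <code>AttributeError</code> in the question: <code>sorted_by_value</code> is a list of <code>(label, count)</code> tuples, not a dict, which is why <code>from_dict(..., orient='index')</code> fails on it. A small sketch with made-up data (the column names are only illustrative) that names the columns and prints without the index:</p> <pre><code>import pandas as pd

sorted_by_value = [('VW', 1383), ('AUDI', 1053), ('VOLVO', 789)]  # hypothetical counts
dataframe = pd.DataFrame(sorted_by_value, columns=['Marka', 'Count'])

# the labels still exist internally; they are only hidden when printing
print(dataframe.to_string(index=False))
</code></pre>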
python|python-3.x|pandas|dataframe|series
1
375,197
53,896,749
Understanding peaked/curved results in mAP and Loss during object detector training
<p>I am working on training the object detector with a custom dataset designed to detect the head of a plant. I am using the "Faster R-CNN with Resnet-101 (v1)" that was originally designed for the pet dataset. </p> <p>I modified the config file to match my dataset (1875 training/375 eval) of images that are 275x550 in size. I converted all record files. The pipeline file is shown below. </p> <p>I trained on a GPU overnight for 100k steps and the actual evaluation results look really good. It detects all the plant heads and the data is really useful. </p> <p>The issue is the actual metrics. When checking the TensorBoard logs for the eval, all the metrics increase until 30k steps and then drop again, making a nice hump in the middle. This goes for the loss, mAP, and precision results. </p> <p>Why is this happening? I assumed that if you keep training, the metrics should flatten out to a line and not decrease downwards again. </p> <p>mAP Evaluation: <a href="https://imgur.com/a/hjobr6c" rel="nofollow noreferrer">https://imgur.com/a/hjobr6c</a></p> <p>Loss Evaluation: <a href="https://imgur.com/a/EY8Afqc" rel="nofollow noreferrer">https://imgur.com/a/EY8Afqc</a></p> <pre><code># Faster R-CNN with Resnet-101 (v1) originally for Oxford-IIIT Pets Dataset. Modified for wheat head detection # Users should configure the fine_tune_checkpoint field in the train config as # well as the label_map_path and input_path fields in the train_input_reader and # eval_input_reader. Search for "" to find the fields that # should be configured. model { faster_rcnn { num_classes: 1 image_resizer { keep_aspect_ratio_resizer { min_dimension: 275 max_dimension: 550 } } feature_extractor { type: 'faster_rcnn_resnet101' first_stage_features_stride: 16 } first_stage_anchor_generator { grid_anchor_generator { scales: [0.25, 0.5, 1.0, 2.0] aspect_ratios: [0.5, 1.0, 2.0] height_stride: 16 width_stride: 16 } } first_stage_box_predictor_conv_hyperparams { op: CONV regularizer { l2_regularizer { weight: 0.0 } } initializer { truncated_normal_initializer { stddev: 0.01 } } } first_stage_nms_score_threshold: 0.0 first_stage_nms_iou_threshold: 0.7 first_stage_max_proposals: 300 first_stage_localization_loss_weight: 2.0 first_stage_objectness_loss_weight: 1.0 initial_crop_size: 14 maxpool_kernel_size: 2 maxpool_stride: 2 second_stage_box_predictor { mask_rcnn_box_predictor { use_dropout: false dropout_keep_probability: 1.0 fc_hyperparams { op: FC regularizer { l2_regularizer { weight: 0.0 } } initializer { variance_scaling_initializer { factor: 1.0 uniform: true mode: FAN_AVG } } } } } second_stage_post_processing { batch_non_max_suppression { score_threshold: 0.0 iou_threshold: 0.6 max_detections_per_class: 100 max_total_detections: 300 } score_converter: SOFTMAX } second_stage_localization_loss_weight: 2.0 second_stage_classification_loss_weight: 1.0 } } train_config: { batch_size: 1 optimizer { momentum_optimizer: { learning_rate: { manual_step_learning_rate { initial_learning_rate: 0.0003 schedule { step: 900000 learning_rate: .00003 } schedule { step: 1200000 learning_rate: .000003 } } } momentum_optimizer_value: 0.9 } use_moving_average: false } gradient_clipping_by_norm: 10.0 fine_tune_checkpoint: "object_detection/faster_rcnn_resnet101_coco_11_06_2017/model.ckpt" from_detection_checkpoint: true load_all_detection_checkpoint_vars: true # Note: The below line limits the training process to 200K steps, which we # empirically found to be sufficient enough to train the pets dataset. This # effectively bypasses the learning rate schedule (the learning rate will # never decay). Remove the below line to train indefinitely. num_steps: 200000 data_augmentation_options { random_horizontal_flip { } } } train_input_reader: { tf_record_input_reader { input_path: "object_detection/data_wheat/train.record-?????-of-00010" } label_map_path: "object_detection/data_wheat/wheat_label_map.pbtxt" } eval_config: { metrics_set: "coco_detection_metrics" num_examples: 375 } eval_input_reader: { tf_record_input_reader { input_path: "object_detection/data_wheat/val.record-?????-of-00010" } label_map_path: "object_detection/data_wheat/wheat_label_map.pbtxt" shuffle: false num_readers: 1 } </code></pre>
<p>This is a standard case of overfitting: your model is memorizing the training data and has lost its ability to generalize to unseen data.</p> <p>For cases like this one you have two options:</p> <ul> <li>early stopping: monitor the validation metrics and stop the training as soon as they plateau and/or start decreasing</li> <li>add regularization to the model (and also do early stopping anyway)</li> </ul>
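<p>On the regularization point, one concrete knob already present in the posted pipeline is the L2 weight, which is set to 0.0 in both the first-stage conv hyperparams and the second-stage fc hyperparams. A sketch of the fragment to change (the 0.0001 value is purely an illustrative starting point, not a tuned recommendation):</p> <pre><code>first_stage_box_predictor_conv_hyperparams {
  op: CONV
  regularizer {
    l2_regularizer {
      weight: 0.0001  # was 0.0; a nonzero weight penalizes large parameters
    }
  }
  ...
}
</code></pre> <p>The same <code>l2_regularizer { weight: ... }</code> block appears under <code>fc_hyperparams</code> in the second stage and can be raised in the same way.</p>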
tensorflow|deep-learning|object-detection-api
-1
375,198
54,124,828
Python pandas multiplying 4 columns with decimal values
<p>I have a pandas dataframe with 4 columns containing decimal values which I have to multiply to create a 5th column with the answer. For example</p> <pre><code>col1 col2 col3 col4 0.03 0.02 0.01 0.05 0.12 0.32 0.05 0.03 </code></pre> <p>I tried multiplying using the following code:</p> <pre><code>df['col5'] = df['col1']*df['col2']*df['col3']*df['col4'] </code></pre> <p>I am getting values such as these:</p> <pre><code>3.600000e-06 1.701000e-04 </code></pre> <p>I don't know how to read these. I tried rounding off these numbers to 8 digits. </p> <pre><code>round(df['col5'], 8) </code></pre> <p>does not work. However </p> <pre><code>round(df['col5'], 6) </code></pre> <p>works. I want to derive a minimum of 8 decimal points. Hence, I tried converting the column to a Decimal column using</p> <pre><code>from decimal import Decimal df['col5'] = df['col5'].apply(Decimal) </code></pre> <p>This works... but it produces very long strings of decimal values, which I could not round off as I am getting an error:</p> <pre><code>TypeError: unsupported operand type(s) for *: 'decimal.Decimal' and 'float' </code></pre> <p>Instead I tried converting it to string with <code>format</code>:</p> <pre><code>df['col5'] = format(df['col5'], ".8f") </code></pre> <p>but I am getting the error:</p> <pre><code>TypeError: unsupported format string passed to Series.__format__ </code></pre> <p>How do I multiply the 4 columns and retain the values in the 5th column up to 8 decimal points?</p>
<p>You can modify the pandas display option to show the number of decimals you want:</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame(np.random.randn(5, 5)) print(df) pd.set_option('precision', 10) print(df) </code></pre>
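<p>A minimal sketch with the sample data from the question, assuming the issue is only how the product is displayed (the underlying float already carries the full value; scientific notation such as <code>3.6e-06</code> is just a compact way of printing it):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'col1': [0.03, 0.12], 'col2': [0.02, 0.32],
                   'col3': [0.01, 0.05], 'col4': [0.05, 0.03]})
df['col5'] = df['col1'] * df['col2'] * df['col3'] * df['col4']

# show every float with 8 decimal places instead of scientific notation
pd.set_option('display.float_format', '{:.8f}'.format)
print(df)  # col5 prints as 0.00000030 and 0.00005760
</code></pre>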
python|pandas|multiplication
1
375,199
54,202,579
Type error when trying to modify values using .loc
<p>Trying to modify all values in a column of a dataframe where the values in another column are equal to something specific. </p> <p>I'm using a dataframe <code>df</code>, with columns a,b,c,d. I first duplicated column d using </p> <p>df["e"] = df["d"]</p> <p>Then, using <code>.loc</code>, I went for:</p> <pre><code>df.loc[df["d"] == "Unknown", "e"] = "Not Unknown!" </code></pre> <p>And I'm getting a:</p> <pre><code>TypeError: 'Series' objects are mutable, thus they cannot be hashed </code></pre> <p>I'm terribly confused since this has worked in the past, in other cases, and I can't seem to figure out what might be happening. For info, the dtype of "d" is a string. If I straight up <code>.loc</code> it, it returns the expected result. </p> <p>Since I'm changing all values of column d, I also thought that my copying the column might be the problem, so I tried copying it over using a different method with:</p> <pre><code>df = df.assign(e=pd.Series(np.random.randn(len(df))).values) </code></pre> <p>But got the same result.</p> <p>Thanks for any help catching my (what I'm sure will be) obvious mistake!</p> <p><strong>EDIT</strong>: sample from df, <code> a b c d e 0 21838344 00001 50 Unknown Unknown 1 35652924 00001 80 Unknown Unknown 2 35652925 00001 80 Unknown Unknown 3 31206900 00001 80 Unknown Unknown 4 37544700 00001 80 Unknown Unknown</code></p>
<pre><code>import pandas as pd data = [['2334','00001','50','Unknown'],['6754','00001','80','Unknown']] df = pd.DataFrame(data, columns = ['a','b','c','d']) df['e'] = df['d'] df.loc[df['d'] == 'Unknown', 'e'] = 'Not Unknown!' </code></pre> <p>Completely works for me.</p>
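<p>For what it's worth, that exact message is raised whenever Python calls <code>hash()</code> on a whole Series, so a likely place to look (a guess, since the posted snippet alone does not reproduce it) is whether a Series is being passed somewhere a single hashable label such as the string <code>'e'</code> is expected — e.g. as a dict/set key or as the column part of an indexer. A tiny sketch that triggers the same error:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'d': ['Unknown', 'x']})
df['e'] = df['d']

# calling hash() on a Series raises:
# TypeError: 'Series' objects are mutable, thus they cannot be hashed
hash(df['e'])
</code></pre>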
python-3.x|pandas
0