Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, lengths 15 to 150) | question (string, lengths 37 to 64.2k) | answer (string, lengths 37 to 44.1k) | tags (string, lengths 5 to 106) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
374,700
| 31,359,980
|
Memory efficient sort of massive numpy array in Python
|
<p>I need to sort a VERY large genomic dataset using numpy. I have an array of 2.6 billion floats, dimensions = <code>(868940742, 3)</code> which takes up about 20GB of memory on my machine once loaded and just sitting there. I have an early 2015 13-inch MacBook Pro with 16GB of RAM, a 500GB solid state HD and a 3.1 GHz Intel i7 processor. Just loading the array overflows to virtual memory but not to the point where my machine suffers or I have to stop everything else I am doing.</p>
<p>I build this VERY large array step by step from 22 smaller <code>(N, 2)</code> subarrays. </p>
<p>Function <code>FUN_1</code> generates 2 new <code>(N, 1)</code> arrays using each of the 22 subarrays which I call <code>sub_arr</code>. </p>
<p>The first output of <code>FUN_1</code> is generated by interpolating values from <code>sub_arr[:,0]</code> on array <code>b = array([X, F(X)])</code> and the second output is generated by placing <code>sub_arr[:, 0]</code> into bins using array <code>r = array([X, BIN(X)])</code>. I call these outputs <code>b_arr</code> and <code>rate_arr</code>, respectively. The function returns a 3-tuple of <code>(N, 1)</code> arrays:</p>
<pre><code>import numpy as np

def FUN_1(sub_arr):
    """interpolate b values and rates based on position in sub_arr"""
    b = np.load(bfile)
    r = np.load(rfile)
    b_arr = np.interp(sub_arr[:,0], b[:,0], b[:,1])
    rate_arr = np.searchsorted(r[:,0], sub_arr[:,0])  # HUGE efficiency gain over np.digitize...
    return r[rate_arr, 1], b_arr, sub_arr[:,1]
</code></pre>
<p>I call the function 22 times in a for-loop and fill a pre-allocated array of zeros <code>full_arr = numpy.zeros([868940742, 3])</code> with the values:</p>
<pre><code>full_arr[:,0], full_arr[:,1], full_arr[:,2] = FUN_1(sub_arr)
</code></pre>
<p>In terms of saving memory at this step, I think this is the best I can do, but I'm open to suggestions. Either way, I don't run into problems up through this point and it only takes about 2 minutes.</p>
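<p>Roughly, the filling loop looks like this (the running offsets and the name <code>sub_arrays</code> are approximations; only the assignment line above was shown explicitly):</p>
<pre><code>full_arr = numpy.zeros([868940742, 3])
start = 0
for sub_arr in sub_arrays:                     # the 22 (N, 2) subarrays
    stop = start + len(sub_arr)
    rate_col, b_col, pos_col = FUN_1(sub_arr)  # 3-tuple of (N,) arrays
    full_arr[start:stop, 0] = rate_col
    full_arr[start:stop, 1] = b_col
    full_arr[start:stop, 2] = pos_col
    start = stop
</code></pre>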
<p>Here is the sorting routine (there are two consecutive sorts)</p>
<pre><code>for idx in range(2):
    sort_idx = numpy.argsort(full_arr[:,idx])
    full_arr = full_arr[sort_idx]
    # ...
    # &lt;additional processing, return small (1000, 3) array of stats&gt;
</code></pre>
<p>Now this sort had been working, albeit slowly (takes about 10 minutes). However, I recently started using a larger, more fine resolution table of <code>[X, F(X)]</code> values for the interpolation step above in <code>FUN_1</code> that returns <code>b_arr</code> and now the SORT really slows down, although everything else remains the same. </p>
<p>Interestingly, I am not even sorting on the interpolated values at the step where the sort is now lagging. Here are some snippets of the different interpolation files - the smaller one is about 30% smaller in each case and far more uniform in terms of values in the second column; the slower one has a higher resolution and many more unique values, so the results of interpolation are likely more unique, but I'm not sure if this should have any kind of effect...? </p>
<p><strong>bigger, slower file:</strong></p>
<pre><code>17399307 99.4
17493652 98.8
17570460 98.2
17575180 97.6
17577127 97
17578255 96.4
17580576 95.8
17583028 95.2
17583699 94.6
17584172 94
</code></pre>
<p><strong>smaller, more uniform regular file:</strong></p>
<pre><code>1 24
1001 24
2001 24
3001 24
4001 24
5001 24
6001 24
7001 24
</code></pre>
<p>I'm not sure what could be causing this issue and I would be interested in any suggestions, or just general input about sorting in this type of memory-limited case!</p>
|
<p>At the moment each call to <code>np.argsort</code> is generating a <code>(868940742, 1)</code> array of int64 indices, which will take up ~7 GB just by itself. Additionally, when you use these indices to sort the columns of <code>full_arr</code> you are generating another <code>(868940742, 1)</code> array of floats, since <a href="http://docs.scipy.org/doc/numpy/user/basics.indexing.html#index-arrays" rel="nofollow noreferrer">fancy indexing always returns a copy rather than a view</a>.</p>
<p>One fairly obvious improvement would be to sort <code>full_arr</code> in place using its <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.sort.html" rel="nofollow noreferrer"><code>.sort()</code> method</a>. Unfortunately, <code>.sort()</code> does not allow you to directly specify a row or column to sort by. However, you <em>can</em> specify a field to sort by for a structured array. You can therefore force an inplace sort over one of the three columns by getting a <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.view.html" rel="nofollow noreferrer"><code>view</code></a> onto your array as a structured array with three float fields, then sorting by one of these fields:</p>
<pre><code>full_arr.view('f8, f8, f8').sort(order=['f0'], axis=0)
</code></pre>
<p>In this case I'm sorting <code>full_arr</code> in place by the 0th field, which corresponds to the first column. Note that I've assumed that there are three float64 columns (<code>'f8'</code>) - you should change this accordingly if your dtype is different. This also requires that your array is contiguous and in row-major format, i.e. <code>full_arr.flags.C_CONTIGUOUS == True</code>.</p>
<p>Credit for this method should go to Joe Kington for his answer <a href="https://stackoverflow.com/a/2828371/1461210">here</a>.</p>
<hr>
<p>Although it requires less memory, sorting a structured array by field is unfortunately much slower compared with using <code>np.argsort</code> to generate an index array, as you mentioned in the comments below (see <a href="https://stackoverflow.com/q/19682521/1461210">this previous question</a>). If you use <code>np.argsort</code> to obtain a set of indices to sort by, you might see a modest performance gain by using <code>np.take</code> rather than direct indexing to get the sorted array:</p>
<pre><code> %%timeit -n 1 -r 100 x = np.random.randn(10000, 2); idx = x[:, 0].argsort()
x[idx]
# 1 loops, best of 100: 148 µs per loop
%%timeit -n 1 -r 100 x = np.random.randn(10000, 2); idx = x[:, 0].argsort()
np.take(x, idx, axis=0)
# 1 loops, best of 100: 42.9 µs per loop
</code></pre>
<p>However I wouldn't expect to see any difference in terms of memory usage, since both methods will generate a copy.</p>
<hr>
<p>Regarding your question about why sorting the second array is faster - yes, you should expect any reasonable sorting algorithm to be faster when there are fewer unique values in the array because on average there's less work for it to do. Suppose I have a random sequence of digits between 1 and 10:</p>
<pre><code>5 1 4 8 10 2 6 9 7 3
</code></pre>
<p>There are 10! = 3628800 possible ways to arrange these digits, but only one in which they are in ascending order. Now suppose there are just 5 unique digits:</p>
<pre><code>4 4 3 2 3 1 2 5 1 5
</code></pre>
<p>Now there are 2⁵ = 32 ways to arrange these digits in ascending order, since I could swap any pair of identical digits in the sorted vector without breaking the ordering.</p>
<p>By default, <code>np.ndarray.sort()</code> uses <a href="https://en.wikipedia.org/wiki/Quicksort" rel="nofollow noreferrer">Quicksort</a>. The <a href="https://en.wikipedia.org/wiki/Quicksort#Repeated_elements" rel="nofollow noreferrer"><code>qsort</code></a> variant of this algorithm works by recursively selecting a 'pivot' element in the array, then reordering the array such that all the elements less than the pivot value are placed before it, and all of the elements greater than the pivot value are placed after it. Values that are equal to the pivot are already sorted. Having fewer unique values means that, on average, more values will be equal to the pivot value on any given sweep, and therefore fewer sweeps are needed to fully sort the array.</p>
<p>For example:</p>
<pre><code>%%timeit -n 1 -r 100 x = np.random.random_integers(0, 10, 100000)
x.sort()
# 1 loops, best of 100: 2.3 ms per loop
%%timeit -n 1 -r 100 x = np.random.random_integers(0, 1000, 100000)
x.sort()
# 1 loops, best of 100: 4.62 ms per loop
</code></pre>
<p>In this example the dtypes of the two arrays are the same. If your smaller array has a smaller item size compared with the larger array then the cost of copying it due to the fancy indexing will also be smaller.</p>
|
python|performance|sorting|numpy|memory
| 14
|
374,701
| 64,515,684
|
CSV Date Parsing in Pandas
|
<p>I am trying to parse dates together from the following sample set of data</p>
<hr />
<pre><code>No,year,month,day,hour,pm2.5,DEWP,TEMP,PRES,cbwd,Iws,Is,Ir
1,2010,1,1,0,NA,-21,-11,1021,NW,1.79,0,0
2,2010,1,1,1,NA,-21,-12,1020,NW,4.92,0,0
3,2010,1,1,2,NA,-21,-11,1019,NW,6.71,0,0
4,2010,1,1,3,NA,-21,-14,1019,NW,9.84,0,0
</code></pre>
<hr />
<p>My code is as follows:</p>
<pre><code>dateparser = lambda x: pd.datetime.strptime(x, "%Y %m %d %H")
dataset = pd.read_csv("raw.csv", parse_dates=['year', 'month', 'day', 'hour'], index_col=0, date_parser=dateparser)
</code></pre>
<p>It's throwing this error:</p>
<pre><code>ValueError: Missing column provided to 'parse_dates': 'day, hour, month, year'
</code></pre>
<p>Can someone help me understand why I am getting this error?</p>
|
<p>Try passing <code>parse_dates</code> as a <code>dict</code> or a list of lists:</p>
<pre class="lang-py prettyprint-override"><code>dataset = pd.read_csv("raw.csv", parse_dates={'date':['year', 'month', 'day',
'hour']}, index_col = 1, date_parser=dateparser)
</code></pre>
<p>Or</p>
<pre class="lang-py prettyprint-override"><code>dataset = pd.read_csv("raw.csv", parse_dates=[['year', 'month', 'day',
'hour']], index_col = 1, date_parser=dateparser)
</code></pre>
<p>PS: Was not able to reproduce the same error, but the proposed solution should work fine.</p>
|
python|pandas|date|parsing
| 1
|
374,702
| 64,355,794
|
How to hold the output of a model at each training epoch in tensorflow 1.x?
|
<p>I am trying to implement a constraint on the output of a neural network using the output of the previous training epoch. I tried using tf.assign() to update the value of a variable that holds the output, but it turned out that it holds the initial value.</p>
|
<p>You must use callbacks.
Here is my example for checkpointing on the maximum score:</p>
<pre><code>from tensorflow.keras.callbacks import ModelCheckpoint

checkpoint_precision = ModelCheckpoint(filepath='best-weights_precision_selu_pr.hdf5', monitor='val_precision', mode='max', verbose=1, save_best_only=True)
checkpoint_auc = ModelCheckpoint(filepath='best-weights-auc_selu_pr.hdf5', monitor='val_auc', mode='max', verbose=1, save_best_only=True)

model.fit(x=data_x, y=Y, batch_size=100, epochs=100000, validation_data=(x_val_scaled, Y_val), callbacks=[checkpoint_precision, checkpoint_auc])
</code></pre>
<p>For more information you can use this link: <a href="https://www.tensorflow.org/api_docs/python/tf/keras/callbacks" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/callbacks</a></p>
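<p>For the original question of holding the previous epoch's output, a small sketch of a custom callback that recomputes and stores the model's predictions at the end of every epoch (the attribute name <code>last_output</code> and reusing <code>data_x</code> as the inputs are illustrative assumptions):</p>
<pre><code>import tensorflow as tf

class HoldOutput(tf.keras.callbacks.Callback):
    def __init__(self, x):
        super().__init__()
        self.x = x
        self.last_output = None          # filled in after each epoch

    def on_epoch_end(self, epoch, logs=None):
        # store the current predictions so the next epoch can use them as a constraint
        self.last_output = self.model.predict(self.x)

hold_output = HoldOutput(data_x)
# model.fit(..., callbacks=[checkpoint_precision, checkpoint_auc, hold_output])
</code></pre>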
|
python|tensorflow|machine-learning
| 0
|
374,703
| 64,288,855
|
Element Click Selenium Not Finding Button to Click
|
<p>For the url below, I am trying to click the "1-50" button (which has its own xpath) and then the "51-100," "101-150," etc. buttons (which all share a 2nd xpath), but my code does not seem to be able to click on the button. Anybody able to figure this out? Cheers!</p>
<pre><code>import pandas as pd
import time
from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome()
url = 'www.sec.gov/securities/files/year/'
page = driver.get(url)
time.sleep(2)
df_appended = []
df = pd.read_html(driver.page_source)[0]
df_appended.append(df)
time.sleep(2)
driver.find_element_by_xpath('//[@id="ctl00_m_g_00806bcd_0028_4082_9797_52f6f350e592_updatePanelctl00_m_g_00806bcd_0028_4082_9797_52f6f350e592"]/table[2]/tbody/tr/td/a/img').click
time.sleep(2)
for i in range(1,3):
    df = pd.read_html(driver.page_source)[0]
    df_appended.append(df)
    driver.find_element_by_xpath('//*[@id="ctl00_m_g_00806bcd_0028_4082_9797_52f6f350e592_updatePanelctl00_m_g_00806bcd_0028_4082_9797_52f6f350e592"]/table[2]/tbody/tr/td/a[3]/img').click()
    time.sleep(1)
df_appended
</code></pre>
|
<p>It runs fine if you click the <code>a</code> tag instead of the <code>img</code>:</p>
<pre><code>driver.find_element_by_xpath('//table[2]/tbody/tr/td/a').click()
time.sleep(2)
for i in range(1,3):
    df = pd.read_html(driver.page_source)[0]
    df_appended.append(df)
    driver.find_element_by_xpath('//table[2]/tbody/tr/td/a[3]').click()
    time.sleep(1)
</code></pre>
|
python|pandas|selenium|selenium-webdriver|xpath
| 0
|
374,704
| 64,438,066
|
How can I fillna based on the columns from another dataframe?
|
<p>I'm trying to fill the null value in <code>job_industry_category</code> from a lookup dataframe. For example:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame()
df['job_title'] = ['Executive Secretary', 'Administrative Officer', 'Recruiting Manager', 'Senior Editor', 'Media Manager I']
df['job_industry_category'] = ['Health', 'Financial Services', 'Property', np.nan, np.nan]
df
job_title job_industry_category
0 Executive Secretary Health
1 Administrative Officer Financial Services
2 Recruiting Manager Property
3 Senior Editor NaN
4 Media Manager I NaN
lookup = pd.DataFrame()
lookup['job_title'] = ['Executive Secretary', 'Senior Editor', 'Media Manager I']
lookup['job_industry_category'] = ['Retail', 'Manufacturing', 'Health']
lookup
job_title job_industry_category
0 Executive Secretary Retail
1 Senior Editor Manufacturing
2 Media Manager I Health
</code></pre>
<p>And the result I expect will be:</p>
<pre><code>df
job_title job_industry_category
0 Executive Secretary Health
1 Administrative Officer Financial Services
2 Recruiting Manager Property
3 Senior Editor Manufacturing
4 Media Manager I Health
</code></pre>
<p>I tried to use <code>map</code>, like this:
<code>df.loc[df['job_industry_category'].isnull(), 'job_industry_category'] = lookup['job_title'].map(lookup)</code> And also removing NAs, from another post:</p>
<pre><code>def remove_na(x):
    if pd.isnull(x['job_industry_category']):
        return freq_job_ind[x['job_title']]
    else:
        return x['job_industry_category']

df['job_industry_category'] = df.apply(remove_na, axis=1)
</code></pre>
<p>Neither worked, and I'm not sure if there is a better way to do this.
Thank you in advance!</p>
Thank you in advance!</p>
|
<pre><code># Boolean select NaN
m = df.job_industry_category.isna()

# Mask the NaNs and map across values using a dict of lookup['job_title']: lookup['job_industry_category']
df.loc[m, 'job_industry_category'] = df.loc[m, 'job_title'].map(
    dict(zip(lookup.job_title, lookup.job_industry_category)))
job_title job_industry_category
0 Executive Secretary Health
1 Administrative Officer Financial Services
2 Recruiting Manager Property
3 Senior Editor Manufacturing
4 Media Manager I Health
</code></pre>
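<p>Equivalently, the whole thing can be done in one step with <code>fillna</code> on a mapped Series (a sketch of the same idea, not the only way):</p>
<pre><code>mapping = dict(zip(lookup['job_title'], lookup['job_industry_category']))
df['job_industry_category'] = df['job_industry_category'].fillna(df['job_title'].map(mapping))
</code></pre>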
|
python|pandas|dataframe
| 0
|
374,705
| 64,190,008
|
Renaming values in a column based on predefined ranges
|
<p>I have a list of years in a column (pandas)</p>
<pre><code>Year
2001
2002
2018
2002
2006
2010
2019
2010
</code></pre>
<p>I would like to visualise in a bar chart how many years are up to 2012 and how many are after 2012, i.e. I should have in my column something like this:</p>
<pre><code>Year
<2012
<2012
>2012
<2012
<2012
<2012
>2012
<2012
</code></pre>
<p>in order to plot the total of values before/after 2012.
I tried something like this:</p>
<pre><code>df.replace([1990, 1991, 1992, 1993, ... ], ["<2012", "<2012", "<2012", "<2012","<2012"])
</code></pre>
<p>similarly for years > 2020.</p>
<p>Can you please tell me how to rename the observation in an easy way?</p>
|
<p>This looks like a job for <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html" rel="nofollow noreferrer"><code>pd.cut</code></a>:</p>
<pre><code>pd.cut(df['Year'], bins=[-np.inf, 2012, np.inf], labels=['<2012', '>2012'])
0 <2012
1 <2012
2 >2012
3 <2012
4 <2012
5 <2012
6 >2012
7 <2012
Name: Year, dtype: category
Categories (2, object): ['<2012' < '>2012']
</code></pre>
<hr />
<p>Add more bins like this:</p>
<pre><code>pd.cut(df['Year'],
bins=[-np.inf, 2012, 2020, ..., np.inf],
labels=['<2012', '<2020', ...])
</code></pre>
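<p>Since the end goal is a bar chart of the counts, the binned column can be plotted directly (a small sketch, assuming matplotlib is available as pandas' plotting backend):</p>
<pre><code>import numpy as np

counts = pd.cut(df['Year'], bins=[-np.inf, 2012, np.inf],
                labels=['<2012', '>2012']).value_counts()
counts.plot(kind='bar')
</code></pre>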
|
python|pandas
| 0
|
374,706
| 64,322,401
|
How to use groupby in python for multiple columns?
|
<p>I have a df with these values:</p>
<pre><code>name numb exam marks
tom 2546 math 25
tom 2546 science 25
tom 2546 env 25
mark 2547 math 15
mark 2547 env 10
sam 2548 env 18
</code></pre>
<p>How can I use groupby to form the values below?</p>
<pre><code>name numb total_exams_attended total_maths_exam_attended total_marks_scored_in_maths total_marks_scored
tom 2546 3 1 25 75
mark 2547 2 1 15 25
sam 2548 1 0 18
</code></pre>
<p>I tried this:</p>
<pre><code>df=df.groupby(['name']).agg({'total_exams_attended': 'count','total_marks_scored': lambda x: sum(x == True)})
</code></pre>
<p>But I got stuck on the <code>total_marks_scored_in_maths</code> column. How do I groupby / aggregate for only particular column values, like maths here?</p>
|
<p>Consider <code>pivot_table</code> with some column name manipulation due to the hierarchy and aggregate names:</p>
<pre><code>pivot_df = df.pivot_table(index='name', columns='exam', values='marks', aggfunc=['count', 'sum'],
                          margins=True, margins_name='total')

pivot_df.columns = [i + '_' + j.replace('count', 'exams_attended').replace('sum', 'marks_scored')
                    for i, j in zip(pivot_df.columns.get_level_values(1),
                                    pivot_df.columns.get_level_values(0))]
</code></pre>
<p>Output</p>
<pre><code>pivot_df
# env_exams_attended math_exams_attended science_exams_attended total_exams_attended env_marks_scored math_marks_scored science_marks_scored total_marks_scored
# name
# mark 1.0 1.0 0.0 2 10.0 15.0 0.0 25
# sam 1.0 0.0 0.0 1 18.0 0.0 0.0 18
# tom 1.0 1.0 1.0 3 25.0 25.0 25.0 75
# total 3.0 2.0 1.0 6 53.0 40.0 25.0 118
</code></pre>
<p>Should you need to filter down to math and total columns use <code>.loc</code>:</p>
<pre><code>math_pvt_df = pivot_df.loc[df['name'].unique(),
["math_exams_attended", "total_exams_attended",
"math_marks_scored", "total_marks_scored"]]
math_pvt_df
# math_exams_attended total_exams_attended math_marks_scored total_marks_scored
# name
# mark 1.0 2 15.0 25
# sam 0.0 1 0.0 18
# tom 1.0 3 25.0 75
</code></pre>
|
python|pandas|group-by|pivot|aggregate
| 0
|
374,707
| 64,247,915
|
Modify column data before a specific character using Regex in pandas
|
<p>I'm trying to modify the Address column data by removing all the characters before the comma.</p>
<p>Sample data:</p>
<pre><code> **ADDRESS**
0 Ksfc Layout,Bangalore
1 Vishweshwara Nagar,Mysore
2 Jigani,Bangalore
3 Sector-1 Vaishali,Ghaziabad
4 New Town,Kolkata
</code></pre>
<p>Expected Output:</p>
<pre><code> **ADDRESS**
0 Bangalore
1 Mysore
2 Bangalore
3 Ghaziabad
4 Kolkata
</code></pre>
<p>I tried this code but it's not working. Can someone correct the code?</p>
<pre><code>import pandas as pd
import regex as re
data = pd.read_csv("train.csv")
data.ADDRESS.replace(re.sub(r'.*,',"", data.ADDRESS), regex=True, inplace=True)
</code></pre>
|
<p>Try this:</p>
<pre><code>data.ADDRESS = data.ADDRESS.str.split(',').str[-1]
</code></pre>
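<p>If you specifically want a regex-based version, <code>Series.str.replace</code> accepts a pattern (a sketch that is functionally the same as the split above):</p>
<pre><code>data['ADDRESS'] = data['ADDRESS'].str.replace(r'^.*,', '', regex=True)
</code></pre>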
|
python|regex|pandas
| 0
|
374,708
| 64,280,524
|
How to multiply 2 different dataframe which have different shape but same header and row label in Python Pandas?
|
<p><strong>Dataframe - 1 (number of products by country)</strong></p>
<p><strong>Note:</strong> Use below code to generate example dataframe</p>
<pre><code>df1 = pd.DataFrame({'Devices':['Mobile','Mobile','Mobile','Mobile','Mobile','Laptop','Desktop'],'Sources':['India','India','India','India','UK','UK','US'],'Status':['ok','ok','notok','ok','ok','notok','ok'],'10/01/2020':[45,45,60,56,50,65,50],'10/02/2020':[45,60,56,56,50,65,50],'10/03/2020':[45,60,56,56,50,65,50],'10/04/2020':[45,60,56,15,25,26,20]})
</code></pre>
<p>It looks like below:</p>
<p><a href="https://i.stack.imgur.com/ulufu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ulufu.png" alt="enter image description here" /></a></p>
<p><strong>Dataframe - 2 (MRP of product by country)</strong></p>
<p><strong>Note:</strong> Use below code to generate example dataframe</p>
<pre><code>df2 = pd.DataFrame({'Devices':['Mobile','Mobile','Laptop','Desktop'],'Sources':['India','UK','UK','US'],'Status':['MRP','MRP','MRP','MRP'],'10/01/2020':[8000,8200,7800,8500],'10/02/2020':[8200,6500,7900,8000],'10/03/2020':[7800,13000,12500,7800],'10/04/2020':[8500,7800,21000,8500]})
</code></pre>
<p>It looks like below:</p>
<p><a href="https://i.stack.imgur.com/YXwNx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YXwNx.png" alt="enter image description here" /></a></p>
<p>I want to multiply df2 with the df1 data. The names and number of columns are the same in both dataframes, and Devices and Sources are the common keys between them.</p>
<p>I have tried a few pieces of code to multiply the two dataframes but it doesn't work. It gives me <code>TypeError: can't multiply sequence by non-int of type 'str'</code></p>
<pre><code>df1.mul(df2.values)
</code></pre>
<p>New df1 after multiplying by the df2 values (desired output):</p>
<p><a href="https://i.stack.imgur.com/wg8mf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wg8mf.png" alt="enter image description here" /></a></p>
<p>Can anyone help me on this?</p>
<p>Thanks!</p>
|
<p>Set <code>Devices</code> and <code>Sources</code> as the index and then multiply</p>
<pre><code>df1.set_index(['Devices', 'Sources', 'Status'])[df1.columns[3:]].mul(df2.set_index(['Devices', 'Sources'])[df2.columns[3:]]).reset_index()
Devices Sources Status 10/01/2020 10/02/2020 10/03/2020 10/04/2020
0 Desktop US ok 425000 400000 390000 170000
1 Laptop UK notok 507000 513500 812500 546000
2 Mobile India ok 360000 369000 351000 382500
3 Mobile India ok 360000 492000 468000 510000
4 Mobile India notok 480000 459200 436800 476000
5 Mobile India ok 448000 459200 436800 127500
6 Mobile UK ok 410000 325000 650000 195000
</code></pre>
|
python-3.x|pandas|dataframe
| 1
|
374,709
| 64,484,576
|
Why do I get an error trying to use keras package in R?
|
<p>I'm trying to run through an example for building a neural network in R. I tried following the instructions at <a href="https://opendatascience.com/using-keras-and-tensorflow-in-r/" rel="nofollow noreferrer">https://opendatascience.com/using-keras-and-tensorflow-in-r/</a>.
I've installed the keras and tensorflow packages and I have the anaconda navigator downloaded on my laptop. When I run the following:</p>
<pre><code>library(keras)
library(tensorflow)
reticulate::use_condaenv("r-reticulate")
mnist <- dataset_mnist()
</code></pre>
<p>I'm getting this error</p>
<pre><code>Error in conda_python(envpath, conda = miniconda) :
no conda environment exists at path 'C:/Users/smurphy4/AppData/Local/r-miniconda/envs/r-reticulate'
</code></pre>
<p>This is probably a stupid question but I haven't used python before and I only have a little experience with R and would really appreciate any help.</p>
<p>Also, I don't know if this is relevant but when I open the anaconda navigator and look in environments, there isn't an r-reticulate environment, only a base (root) environment.<a href="https://i.stack.imgur.com/P1f5u.png" rel="nofollow noreferrer">Anaconda navigator environments</a></p>
|
<p>In case anyone has a similar problem: I just uninstalled RStudio and installed it again, then created a new Python environment in Anaconda Navigator and installed the modules I needed in that environment. That seemed to work.</p>
|
python|r|tensorflow|keras
| 0
|
374,710
| 64,329,250
|
How to mask paddings in LSTM model for speech emotion recognition
|
<p>Given a few directories of .wav audio files, I have extracted their features in terms of a 3D array (batch, step, features).</p>
<p>For my case, the training dataset is (1883,100,136).
Basically, each audio has been analyzed 100 times (imagine that as 1fps) and each time, 136 features have been extracted. However, those audio files are different in length so some of them cannot be analyzed for 100 times.</p>
<p>For instance, one of the audio has 50 sets of 136 features as effective values so the rest 50 sets were padded with zeros.</p>
<p>Here is my model.</p>
<pre><code>def LSTM_model_building(units=200, learning_rate=0.005, epochs=20, dropout=0.19, recurrent_dropout=0.2):
    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout, input_shape=(X_train.shape[0], 100, 136))))
    # model.add(tf.keras.layers.Bidirectional(LSTM(32)))
    model.add(Dense(num_classes, activation='softmax'))

    adamopt = tf.keras.optimizers.Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-8)
    opt = tf.keras.optimizers.RMSprop(lr=learning_rate, rho=0.9, epsilon=1e-6)
    # opt = tf.keras.optimizers.SGD(lr=learning_rate, momentum=0.9, decay=0., nesterov=False)

    model.compile(loss='categorical_crossentropy',
                  optimizer=adamopt,
                  metrics=['accuracy'])

    history = model.fit(X_train, y_train,
                        batch_size=batch_size,
                        epochs=epochs,
                        validation_data=(X_test, y_test),
                        verbose=1)

    score, acc = model.evaluate(X_test, y_test,
                                batch_size=batch_size)
    return history
</code></pre>
<p>I wish to mask the padding; however, the instructions shown on the <a href="https://www.tensorflow.org/guide/keras/masking_and_padding" rel="nofollow noreferrer">Keras website</a> use an <code>embedding layer</code>, which I believe is usually used for NLP. I have no idea how to use the <code>embedding layer</code> for my model.</p>
<p>Can anyone teach me how to apply masking for my LSTM model?</p>
|
<p>The <code>Embedding</code> layer is not for your case. Consider the <code>Masking</code> <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/Masking" rel="nofollow noreferrer">layer</a> instead. It integrates simply into your model structure, as shown below.</p>
<p>I also remind you that the input shape must be specified in the first layer of a sequential model. Remember also that you don't need to pass the sample dimension. In your case, the input shape is <code>(100, 136)</code>, which is equal to <code>(timesteps, n_features)</code>.</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import LSTM, Dense

units, learning_rate, dropout, recurrent_dropout = 200, 0.005, 0.19, 0.2
num_classes = 3

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Masking(mask_value=0.0, input_shape=(100, 136)))
model.add(tf.keras.layers.Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout)))
model.add(Dense(num_classes, activation='softmax'))

adamopt = tf.keras.optimizers.Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=1e-8)
opt = tf.keras.optimizers.RMSprop(lr=learning_rate, rho=0.9, epsilon=1e-6)

model.compile(loss='categorical_crossentropy',
              optimizer=adamopt,
              metrics=['accuracy'])

model.summary()
</code></pre>
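<p>To sanity-check that the padded frames really are ignored, you can inspect the boolean mask that the <code>Masking</code> layer attaches to its output (a small sketch with dummy data shaped like yours; the 50 real frames are an assumption taken from the question):</p>
<pre><code>import numpy as np
import tensorflow as tf

x = np.zeros((1, 100, 136), dtype='float32')
x[0, :50, :] = np.random.rand(50, 136)     # only the first 50 frames carry real features

masked = tf.keras.layers.Masking(mask_value=0.0)(x)
print(masked._keras_mask[0, :3])           # [True True True]   -> real frames
print(masked._keras_mask[0, -3:])          # [False False False] -> padded frames
</code></pre>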
|
tensorflow|machine-learning|keras|deep-learning|lstm
| 1
|
374,711
| 64,362,016
|
Error pip installing wheel file from github repository (to download pycocotools)
|
<p>I am installing Tensorflow (1.15.0) in order to perform some deep learning object detection, but am having trouble pip installing pycocotools. I am following <a href="https://www.youtube.com/watch?v=usR2LQuxhL4&t=134s" rel="nofollow noreferrer">this</a> tutorial, which is an updated tutorial originally from YouTube channel Sentdex. I am also using the Anaconda Prompt for this purpose.</p>
<p>After creating and activating a conda environment and installing all the needed packages (TensorFlow, lxml, etc.), I am trying to run <code>pip install pycocotools</code>, but get the following error:</p>
<pre><code>Building wheels for collected packages: pycocotools
Building wheel for pycocotools (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: 'C:\anaconda3\envs\object\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\AppData\\Local\\Temp\\pip-install-f_16w712\\pycocotools\\setup.py'"'"'; __file__='"'"'C:\\AppData\\Local\\Temp\\pip-install-f_16w712\\pycocotools\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\AppData\Local\Temp\pip-wheel-au01c73g'
cwd: C:\AppData\Local\Temp\pip-install-f_16w712\pycocotools\
Complete output (16 lines):
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\pycocotools
copying pycocotools\coco.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools\cocoeval.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools\mask.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools\__init__.py -> build\lib.win-amd64-3.7\pycocotools
running build_ext
cythoning pycocotools/_mask.pyx to pycocotools\_mask.c
C:\anaconda3\envs\object\lib\site-packages\Cython\Compiler\Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: C:\AppData\Local\Temp\pip-install-f_16w712\pycocotools\pycocotools\_mask.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
building 'pycocotools._mask' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Failed building wheel for pycocotools
Running setup.py clean for pycocotools
Failed to build pycocotools
Installing collected packages: pycocotools
Running setup.py install for pycocotools ... error
ERROR: Command errored out with exit status 1:
command: 'C:\anaconda3\envs\object\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\AppData\\Local\\Temp\\pip-install-f_16w712\\pycocotools\\setup.py'"'"'; __file__='"'"'C:\\AppData\\Local\\Temp\\pip-install-f_16w712\\pycocotools\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\AppData\Local\Temp\pip-record-bjfh6urg\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\anaconda3\envs\object\Include\pycocotools'
cwd: C:\AppData\Local\Temp\pip-install-f_16w712\pycocotools\
Complete output (14 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\pycocotools
copying pycocotools\coco.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools\cocoeval.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools\mask.py -> build\lib.win-amd64-3.7\pycocotools
copying pycocotools\__init__.py -> build\lib.win-amd64-3.7\pycocotools
running build_ext
skipping 'pycocotools\_mask.c' Cython extension (up-to-date)
building 'pycocotools._mask' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\anaconda3\envs\object\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\AppData\\Local\\Temp\\pip-install-f_16w712\\pycocotools\\setup.py'"'"'; __file__='"'"'C:\\AppData\\Local\\Temp\\pip-install-f_16w712\\pycocotools\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\AppData\Local\Temp\pip-record-bjfh6urg\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\anaconda3\envs\object\Include\pycocotools' Check the logs for full command output.
</code></pre>
<p>Apparently, there is a wheel file that needs to be downloaded from <a href="https://github.com/philferriere/cocoapi#egg=pycocotools" rel="nofollow noreferrer">this</a> github repository, under the subdirectory PythonAPI. I ran this code to do so:</p>
<pre><code>pip install git+https://github.com/philferriere/cocoapi#egg=pycocotools^subdirectory==PythonAPI
</code></pre>
<p>The following error is produced:</p>
<pre><code>Collecting pycocotoolssubdirectory==PythonAPI
Cloning https://github.com/philferriere/cocoapi to c:\appdata\local\temp\pip-install-c_lq0qhi\pycocotoolssubdirectory
ERROR: Command errored out with exit status 1:
command: 'C:\anaconda3\envs\object\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\AppData\\Local\\Temp\\pip-install-c_lq0qhi\\pycocotoolssubdirectory\\setup.py'"'"'; __file__='"'"'C:\\AppData\\Local\\Temp\\pip-install-c_lq0qhi\\pycocotoolssubdirectory\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\AppData\Local\Temp\pip-pip-egg-info-0mcbi3nr'
cwd: C:\AppData\Local\Temp\pip-install-c_lq0qhi\pycocotoolssubdirectory\
Complete output (5 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\anaconda3\envs\object\lib\tokenize.py", line 447, in open
buffer = _builtin_open(filename, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\AppData\\Local\\Temp\\pip-install-c_lq0qhi\\pycocotoolssubdirectory\\setup.py'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
</code></pre>
<p>How can I successfully install this repository? I need the wheel file so that I can proceed with the pycocotools installation. Note: I have installed the latest version of <code>pip</code>, so that isn't the issue.</p>
|
<p>Try running it as follows:</p>
<pre><code>pip install pycocotools-windows
</code></pre>
<p>as suggested <a href="https://github.com/cocodataset/cocoapi/issues/169" rel="nofollow noreferrer">here</a>.</p>
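<p>If you would rather install from the philferriere fork directly (it carries the Windows build fixes), note that pip's VCS URL syntax puts <code>subdirectory=</code> inside the URL fragment joined with <code>&</code>, rather than the <code>^subdirectory==</code> used above; something like the following should resolve the missing <code>setup.py</code> error, though it still requires the Microsoft C++ Build Tools mentioned in the first error message:</p>
<pre><code>pip install "git+https://github.com/philferriere/cocoapi.git#egg=pycocotools&subdirectory=PythonAPI"
</code></pre>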
|
tensorflow|github|pip|pycocotools
| 1
|
374,712
| 64,311,465
|
Testing a Random Image against a Python Keras/Tensorflow CNN
|
<p>I've created and CNN and I am trying to figure out how to test a random image against it. I am utilizing Keras and Tensorflow. Lets assume I wanted to test the image found here: <a href="https://i.ytimg.com/vi/7I8OeQs7cQA/maxresdefault.jpg" rel="nofollow noreferrer">https://i.ytimg.com/vi/7I8OeQs7cQA/maxresdefault.jpg</a>.</p>
<p>How would I save the model, load it then test this image against it? Here is some example code I found online that demonstrates what I mean:
<a href="https://meta.stackexchange.com/questions/144665/hide-email-address-from-my-profile">https://meta.stackexchange.com/questions/144665/hide-email-address-from-my-profile</a></p>
<p>Any help is much appreciated, thanks!</p>
<p><a href="https://i.stack.imgur.com/HuN5u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HuN5u.png" alt="enter image description here" /></a></p>
<pre><code>import os
import cv2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display, Image
from keras.models import Sequential, load_model
from keras.layers import Conv2D, Flatten, MaxPooling2D, Input
from keras.preprocessing.image import ImageDataGenerator
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import models, layers
X = []
y = []
from sklearn.model_selection import train_test_split
labels = os.listdir(r'C:/Users/zF1bo/Desktop/natural_images')
labels
for label in labels:
    path = r'C:/Users/zF1bo/Desktop/natural_images/{}/'.format(label)
    img_data = os.listdir(path)
    for image in img_data:
        a = cv2.imread(path + image)
        a = cv2.resize(a, (64, 64))
        X.append(np.array(a.astype('float32')) / 255)
        y.append(label)
buckets = []
for i in y:
    if i == 'airplane':
        buckets.append(0)
    elif i == 'car':
        buckets.append(1)
    elif i == 'cat':
        buckets.append(2)
    elif i == 'dog':
        buckets.append(3)
    elif i == 'flower':
        buckets.append(4)
    elif i == 'fruit':
        buckets.append(5)
    elif i == 'motorbike':
        buckets.append(6)
    elif i == 'person':
        buckets.append(7)

y = buckets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, \
random_state = 0)
model = models.Sequential()
model.add(layers.Conv2D(filters=32, kernel_size=(5,5), activation='relu', input_shape=(64,64,3)))
model.add(layers.MaxPool2D(pool_size=(2, 2)))
model.add(layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(layers.MaxPool2D(pool_size=(2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(8, activation='softmax'))
model.compile(optimizer='adam', loss = 'sparse_categorical_crossentropy',metrics=['accuracy'])
y_train = np.array(y_train)
model.fit(X_train, y_train, batch_size=(256), epochs=25)
pred = model.predict(X_test)
diff = []
for i in pred:
    diff.append(np.argmax(i))
from sklearn.metrics import accuracy_score
accuracy_score(diff,y_test)
</code></pre>
|
<p>Step 1: Save the model</p>
<pre><code>model.save('model.h5')
</code></pre>
<p>Step 2: Load the model</p>
<pre><code>loaded_model = tensorflow.keras.models.load_model('model.h5')
</code></pre>
<p>Step 3: Download the image via the urllib library (answer taken from: <a href="https://stackoverflow.com/questions/3042757/downloading-a-picture-via-urllib-and-python">Downloading a picture via urllib and python</a>):</p>
<pre><code>import urllib.request
urllib.request.urlretrieve(url, filename)
</code></pre>
<p>Otherwise, you can apply the same steps as in the picture you posted. Do not forget to <code>expand_dims()</code>.</p>
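<p>Putting the steps together, a minimal sketch (assuming the same 64x64 preprocessing used during training, and that <code>filename</code> is the file saved by <code>urlretrieve</code> above):</p>
<pre><code>import cv2
import numpy as np

img = cv2.imread(filename)                               # the downloaded image
img = cv2.resize(img, (64, 64)).astype('float32') / 255
img = np.expand_dims(img, axis=0)                        # shape (1, 64, 64, 3)

pred = loaded_model.predict(img)
print(np.argmax(pred))   # class index (airplane=0, car=1, ..., person=7)
</code></pre>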
|
python|tensorflow|keras|conv-neural-network|image-classification
| 0
|
374,713
| 64,555,580
|
Change saved tensorflow model input shape at inference time
|
<p>I've searched everywhere but couldn't find anything. It looks so weird that nobody have already encountered the same problem as I... Let me explain:</p>
<p>I've trained a <strong>Tensorflow 2</strong> custom model. During the training I have used <code>set_shape((None, 320, 320, 14))</code> so that Tensorflow knows the shape (It couldn't infer it for whatever reason... -_-").
I have also saved my <strong>custom model</strong> at every 100 epochs using:</p>
<pre class="lang-py prettyprint-override"><code>model.save(os.path.join('models', 'pb', FLAGS.task_name + '-%i' % epoch))
</code></pre>
<p>So for the 100th epoch I will have a folder <code>models/pb/my_name-100</code> that contains</p>
<ul>
<li>assets</li>
<li>variables</li>
<li>saved_model.pb</li>
</ul>
<p>Now, at inference time, I just want to load the model (without all the code). So I have created another piece of code that only loads the model and make a prediction... A basic template looks like:</p>
<pre class="lang-py prettyprint-override"><code>class NeuralNetwork:
    def __init__(self, model):
        self.model = tf.keras.models.load_model(model)

    def predict(self, input_tensor):
        pred = self.model(input_tensor[None, ...])
        return pred[0]
</code></pre>
<p>Where <code>input_tensor</code> is of size (H, W, 14) and so <code>input_tensor[None, ...]</code> is of size:
(None, H, W, 14).</p>
<p>The problem is that, because I have set the shape during training to be (None, 320, 320, 14)... This stupid Tensorflow expects the input to be (None, 320, 320, 14) -_-"!!!. My Neural Network is a fully convolutional neural network, so I really don't care about the input shape. I set it to be (320, 320, 14) during training for memory reason...</p>
<p>During prediction I'd like to be able to do prediction on any kind of shape.</p>
<p>Obviously, I could do a preprocessing function that extracts patch of size (320, 320) from the input image and tiles them. So for example my <code>input_tensor</code> could be of size (30, 320, 320, 14)</p>
<p>And then after the prediction, I could reconstruct the image from the tiles... But I don't want to do that.</p>
<ul>
<li><strong>Firstly</strong> because It takes a bit of time to create the tiles and reconstruct the image from the tile</li>
<li><strong>Secondly</strong> because the result will be a bit off due to 0 padding in the convolution. Which means that I need to create overlapping tiles and average the results on the overlapping part to avoid having artifacts during the reconstruction</li>
</ul>
<p>So my question is simple:
How can I tell tensorflow to accept any width and height at inference time? Omg it's so bothersome. I can't believe that there isn't an easy option available to do that.</p>
|
<p>I answer my own question.
Unfortunately, my answer <strong>will not satisfy</strong> everybody. There are so many convoluted things happening in TF (not to mention that when you search for help, most of it concerns the 1.x API... -_-").</p>
<p>Anyway, here is the "solution"</p>
<p>In my Neural Network, I have implemented a <strong>custom layer</strong> to mimic the pytorch function <a href="https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool2d.html" rel="nofollow noreferrer">AdaptiveAvgPool2D</a>. My implementation actually uses <code>tf.nn.avg_pool</code> under the hood and needs to dynamically compute the <strong>kernel_size</strong> as well as the <strong>stride</strong>. Here is my code, for reference:</p>
<pre class="lang-py prettyprint-override"><code>class AdaptiveAvgPool2d(layers.Layer):
    def __init__(self, output_shape, data_format='channels_last'):
        super(AdaptiveAvgPool2d, self).__init__(autocast=False)
        assert data_format in {'channels_last', 'channels_first'}, \
            'data format parameter must be in {channels_last, channels_first}'

        if isinstance(output_shape, tuple):
            self.out_h = output_shape[0]
            self.out_w = output_shape[1]
        elif isinstance(output_shape, int):
            self.out_h = output_shape
            self.out_w = output_shape
        else:
            raise RuntimeError(f"""output_shape should be an Integer or a Tuple2""")

        self.data_format = data_format

    def call(self, inputs, mask=None):
        # input_shape = tf.shape(inputs)
        input_shape = inputs.get_shape().as_list()

        if self.data_format == 'channels_last':
            h_idx, w_idx = 1, 2
        else:  # can use else instead of elif due to assert in __init__
            h_idx, w_idx = 2, 3

        stride_h = input_shape[h_idx] // self.out_h
        stride_w = input_shape[w_idx] // self.out_w

        k_size_h = stride_h + input_shape[h_idx] % self.out_h
        k_size_w = stride_w + input_shape[w_idx] % self.out_w

        pool = tf.nn.avg_pool(
            inputs,
            ksize=[k_size_h, k_size_w],
            strides=[stride_h, stride_w],
            padding='VALID',
            data_format='NHWC' if self.data_format == 'channels_last' else 'NCHW')

        return pool
</code></pre>
<p>The problem is that I'm using <code>inputs.get_shape().as_list()</code>
to recover plain int values and not a <code>Tensor(..., type=int)</code>. Indeed, <code>tf.nn.avg_pool</code> accepts a list of <em>int</em> for both the <code>ksize</code> and the <code>strides</code> parameters...</p>
<p>Put differently, I couldn't use <code>tf.shape(inputs)</code> because it returns a <code>Tensor(..., type=int)</code> and there is no way to recover an int from a Tensor besides evaluating it...</p>
<p>The way I have implemented my function worked just fine; the problem is that <strong>Tensorflow</strong> infers the size under the hood and saves the size of all the tensors inside the <code>.pb</code> file when I save it.</p>
<p>Indeed, you can easily open a <code>.pb</code> file with any text editor (SublimeText) and see for yourself the expected <code>TensorShape</code>. In my case it was <code>TensorShape: [null, 320, 320, 14]</code>.</p>
<p>So, using <code>set_shape((None, None, None, 14))</code> instead of <code>set_shape((None, 320, 320, 14))</code> or <code>nothing</code> actually doesn't change the problem...</p>
<p>The problem is that <strong>the average pooling layer does not accept a dynamic kernel size/strides</strong>....</p>
<p>I then realized that there is actually a tensorflow function for this: <code>tfa.layers.AdaptiveAveragePooling2D</code>. So I might just go with it and it will be fine, right?</p>
<p>Well, not exactly. Under the hood, this tensorflow function uses other tf functions like <code>tf.split</code>. The problem with <code>tf.split</code> is that if the dimension you want to split is of size X and you want to output a tensor of size Y, and X % Y != 0, then <code>tf.split</code> will throw an error... while Pytorch is much more robust and handles cases where X % Y != 0.</p>
<p>Put differently, it means that in order for me to use <code>tfa.layers.AdaptiveAveragePooling2D</code>, I need to be sure that the size of the tensor received by this function is divisible by the scalar I pass to the function.</p>
<p>For example, in my case,
the input images are of size (320, 320, whatever), and the input tensor received by <code>tfa.layers.AdaptiveAveragePooling2D</code> is (40, 40, whatever).</p>
<p>So it means, that the spatial dimension of my tensor was divided by 8 during training. In order for it to work, I should choose a size that can divide 40. Let's say I choose <strong>5</strong>.</p>
<p>It means that during prediction, my neural network will work if the input dimension that <code>tfa.layers.AdaptiveAveragePooling2D</code> receives is also divisible by <strong>5</strong>. But we already know that my input image is 8x bigger than the tensor received by <code>tfa.layers.AdaptiveAveragePooling2D</code>, so it means that, at prediction time, I can use whatever image size as long as:</p>
<ul>
<li>H % (8*5) == 0 and W % (8 * 5) == 0
Where <code>H</code> and <code>W</code> are respectively the height and the width of my input image.</li>
</ul>
<p>To do that, we can just implement a simple function that rounds each dimension up to the next multiple of 40 (40 in this example...):
new_W = W + (40 - W % 40) % 40
new_H = H + (40 - H % 40) % 40</p>
<p>This function will stretch the image a bit, but not too much, so it should be just fine.</p>
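<p>A minimal sketch of that stretching step (assuming <code>tf.image.resize</code> is an acceptable way to do the stretch and that the input arrives as a single (H, W, 14) array with a known static shape; the 40 is the 8x downsampling times the chosen output size of 5):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

def stretch_to_multiple(image, multiple=40):
    # round each spatial dimension up to the next multiple of `multiple`
    h, w = image.shape[0], image.shape[1]
    new_h = h + (-h) % multiple
    new_w = w + (-w) % multiple
    return tf.image.resize(image, (new_h, new_w))
</code></pre>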
<p>Summing up:</p>
<ul>
<li>My AdaptivePooling uses static shape, but I cannot do otherwise since it uses <code>tf.nn.avg_pool</code> under the hood that doesn't accept dynamic shape</li>
<li><code>tfa.layers.AdaptiveAveragePooling2D</code> is a work around, but because it relies on <code>tf.split</code> that is not robust to inexact divide, it is not perfect either</li>
<li>The basic solution is to use <code>tfa.layers.AdaptiveAveragePooling2D</code> and create a preprocessing function before calling the prediction so that the tensor will work just fine with <code>tf.split</code> constraint</li>
<li>Finally, this is not a good solution either. Because, during training, if I receive a tensor of size <code>(40, 40)</code> and want an <strong>avg</strong> output of size <code>(5, 5)</code>, it means that I basically average (8, 8) features to retrieve one feature.</li>
<li>The problem is that, if I do that during inference time on a bigger image, I will receive a bigger tensor. Let's say: <code>(100, 200)</code>. But since my output will always be <code>(5, 5)</code>, it means that I will, this time, average (20, 40) features to retrieve <strong>one</strong> feature...</li>
</ul>
<p>Because of this difference between training and inference, if I go this way, inferring on a bigger image might lead to inconsistent results.
In my case, the way to go is to batch the images as I have explained in my first post...</p>
<p>Hope it will help some of you.</p>
|
tensorflow|input|tensorflow2.0|shapes
| 0
|
374,714
| 64,398,484
|
How to manipulate client gradients in tensorflow federated sgd
|
<p>I'm following <a href="https://www.tensorflow.org/federated/tutorials/federated_learning_for_image_classification" rel="nofollow noreferrer">this tutorial</a> to get started with tensorflow federated. My aim is to run federated sgd (not federated avg) with some manipulations on client gradient values before they are sent to the server.</p>
<p>Before moving forward, to briefly reiterate the federated sgd process, for each turn clients will send their computed gradients (not updated weights) to the server, the server aggregates them and broadcasts the updated model to the clients.</p>
<p>Now from what I've gathered so far, I can use the function <a href="https://www.tensorflow.org/federated/api_docs/python/tff/learning/build_federated_sgd_process" rel="nofollow noreferrer"><code>build_federated_sgd_process</code></a> instead of <code>build_federated_averaging_process</code> in the mentioned tutorial to perform federated sgd the way described above.</p>
<p>Where I'm lost is, I need to clip the client gradients and add some noise to them (independently generated for each gradient value) before sending the gradients to the server, and I'm not sure how to do it. Generating the noise is straightforward enough, but which function should I modify/implement to be able to apply the noise to the gradients?</p>
|
<p><code>build_federated_sgd_process</code> is fully-canned; it is really designed to serve as a reference implementation, not as a point of extensibility.</p>
<p>I believe what you are looking for is the function that <code>build_federated_sgd_process</code> calls under the hood, <a href="https://www.tensorflow.org/federated/api_docs/python/tff/learning/framework/build_model_delta_optimizer_process?version=nightly" rel="nofollow noreferrer"><code>tff.learning.framework.build_model_delta_optimizer_process</code></a>. This function allows you to supply your own mapping from a model function (IE, a zero-arg callable that returns a <code>tff.learning.Model</code>) to a <a href="https://www.tensorflow.org/federated/api_docs/python/tff/learning/framework/ClientDeltaFn" rel="nofollow noreferrer"><code>tff.learning.framework.ClientDeltaFn</code></a>.</p>
<p>Your <code>ClientDeltaFn</code> would look something like:</p>
<pre><code>@tf.function
def _clip_and_noise(grads):
    return ...

class ClippedGradClientDeltaFn(tff.learning.framework.ClientDeltaFn):

    def __init__(self, model, ...):
        self._model = model
        ...

    @tf.function
    def __call__(self, dataset, weights):
        # Compute gradients grads
        return _clip_and_noise(grads)
</code></pre>
<p>And you would be able to construct a <code>tff.templates.IterativeProcess</code> by calling:</p>
<pre><code>def clipped_sgd(model_fn: Callable[[], model_lib.Model]) -> ClippedGradClientDeltaFn:
    return ClippedGradClientDeltaFn(
        model_fn(),
        ...)

iterproc = optimizer_utils.build_model_delta_optimizer_process(
    model_fn, model_to_client_delta_fn=clipped_sgd, ...)
</code></pre>
<p>as more or less in the body of <a href="https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/learning/federated_sgd.py#L144" rel="nofollow noreferrer"><code>build_federated_sgd_process</code></a>.</p>
<p>It sounds to me like you are interested in differential privacy; TFF is actually designed to compose with differential privacy generally through the aggregation processes rather than writing different client updates, though this is certainly one approach. See the pointers from the <a href="https://www.tensorflow.org/federated/tff_for_research#differential_privacy" rel="nofollow noreferrer">TFF for research documentation</a> for idiomatic ways to wire differential privacy in to TFF.</p>
|
python|tensorflow|tensorflow-federated|sgd
| 1
|
374,715
| 64,371,445
|
Read excel sheet in pandas with different sheet names in a pattern
|
<p>I am trying to read multiple excel files in a loop using read_excel:</p>
<p>The different excel files contain sheet names which contain the word "staff", e.g. Staff_2013, Staff_list, etc.</p>
<p>Is there a way to read all these files dynamically using some wild card concept ?</p>
<p>Something like the code below :</p>
<pre><code>df = pd.read_excel(folder,col_names=True,sheet_name='Staff*')
</code></pre>
|
<p><code>pandas.read_excel</code> cannot match sheet names with a wildcard, and the use you are suggesting would be problematic in case of multiple matches.</p>
<p>So you have to list the sheets and select the ones you want to read one by one.</p>
<p>For instance:</p>
<pre><code>xls_file = pd.ExcelFile('my_excel_file.xls')
staff_fnames = [sheet for sheet in xls_file.sheet_names if sheet.startswith('Staff')]

for staff_fname in staff_fnames:
    df = pd.read_excel('my_excel_file.xls', sheet_name=staff_fname)
</code></pre>
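<p>Alternatively (a sketch of another option), <code>sheet_name=None</code> reads every sheet into a dict keyed by sheet name, which you can then filter:</p>
<pre><code>all_sheets = pd.read_excel('my_excel_file.xls', sheet_name=None)   # {sheet_name: DataFrame}
staff_dfs = {name: sheet for name, sheet in all_sheets.items() if name.startswith('Staff')}
</code></pre>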
|
python|pandas|dataframe
| 3
|
374,716
| 64,233,099
|
pyTorch gradient becomes none when dividing by scalar
|
<p>Consider the following code block:</p>
<pre><code>import torch as torch
n=10
x = torch.ones(n, requires_grad=True)/n
y = torch.rand(n)
z = torch.sum(x*y)
z.backward()
print(x.grad) # results in None
print(y)
</code></pre>
<p>As written, <code>x.grad</code> is None. However, if I change the definition of <code>x</code> by removing the scalar multiplication (<strong><code>x = torch.ones(n, requires_grad=True)</code></strong>) then indeed I got a non-None gradient that is equivalent to y.</p>
<p>I've googled a bunch looking for this issue, and I think it reflects something fundamental that I don't understand about how the computational graph works in torch. I'd love some clarification. Thanks!</p>
|
<p>When you set <code>x</code> to a tensor divided by some scalar, <code>x</code> is no longer what is called a "leaf" <code>Tensor</code> in PyTorch. A leaf <code>Tensor</code> is a tensor at the beginning of the computation graph (which is a DAG graph with nodes representing objects such as tensors, and edges which represent a mathematical operation). More specifically, it is a tensor which was not created by some computational operation which is tracked by the <code>autograd</code> engine.</p>
<p>In your example - <code>torch.ones(n, requires_grad=True)</code> is a leaf tensor, but you can't access it directly in your code.
The reasoning behind not keeping the <code>grad</code> for non-leaf tensors is that typically, when you train a network, the weights and biases are leaf tensors and they are what we need the gradient for.</p>
<p>If you want to access the gradients of a non-leaf tensor, you should call the <code>retain_grad</code> function, which means in your code you should add:</p>
<pre><code>x.retain_grad()
</code></pre>
<p>after the assignment to x.</p>
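<p>A minimal sketch of the fixed snippet (<code>x.grad</code> then comes out equal to <code>y</code>, since dz/dx = y):</p>
<pre><code>import torch

n = 10
x = torch.ones(n, requires_grad=True) / n   # x is a non-leaf tensor
x.retain_grad()                             # ask autograd to keep its gradient
y = torch.rand(n)
z = torch.sum(x * y)
z.backward()

print(x.grad)   # equal to y instead of None
</code></pre>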
|
python|pytorch
| 1
|
374,717
| 64,601,301
|
Pytorch input tensor size with wrong dimension Conv1D
|
<pre><code>def train(epoch):
    model.train()
    train_loss = 0
    for batch_idx, (data, _) in enumerate(train_loader):
        data = data[None, :, :]
        print(data.size())  # something seems to change between here
        data = data.to(device)
        optimizer.zero_grad()
        recon_batch, mu, logvar = model(data)  # and here???
        loss = loss_function(recon_batch, data, mu, logvar)
        loss.backward()
        train_loss += loss.item()
        optimizer.step()
        if batch_idx % 1000 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader),
                loss.item() / len(data)))
    print('====> Epoch: {} Average loss: {:.4f}'.format(epoch, train_loss / len(train_loader.dataset)))


for epoch in range(1, 4):
    train(epoch)
</code></pre>
<p>This is very strange: looking at the training loop, it does recognize that the size is <code>[1,1,1998]</code>, but then something changes after it is sent to the device?</p>
<pre><code> torch.Size([1, 1, 1998])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-138-70cca679f91a> in <module>()
27
28 for epoch in range(1, 4):
---> 29 train(epoch)
5 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
255 _single(0), self.dilation, self.groups)
256 return F.conv1d(input, self.weight, self.bias, self.stride,
--> 257 self.padding, self.dilation, self.groups)
258
259
RuntimeError: Expected 3-dimensional input for 3-dimensional weight [12, 1, 1], but got 2-dimensional input of size [1, 1998] instead
</code></pre>
<p>Also, here is my model (I recognize there are likely a couple of other issues here, but I am asking about the tensor size not registering):</p>
<pre><code>class VAE(nn.Module):
    def __init__(self):
        super(VAE, self).__init__()
        self.conv1 = nn.Conv1d(1, 12, kernel_size=1, stride=5, padding=0)
        self.conv1_drop = nn.Dropout2d()
        self.pool1 = nn.MaxPool1d(kernel_size=3, stride=2)
        self.fc21 = nn.Linear(198, 1)
        self.fc22 = nn.Linear(198, 1)
        self.fc3 = nn.Linear(1, 198)
        self.fc4 = nn.Linear(198, 1998)

    def encode(self, x):
        h1 = self.conv1(x)
        h1 = self.conv1_drop(h1)
        h1 = self.pool1(h1)
        h1 = F.relu(h1)
        h1 = h1.view(1, -1)  # 1 is the batch size
        return self.fc21(h1), self.fc22(h1)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        eps = torch.rand_like(std)
        return mu + eps * std

    def decode(self, z):
        h3 = F.relu(self.fc3(z))
        return torch.sigmoid(self.fc4(h3))

    def forward(self, x):
        mu, logvar = self.encode(x.view(-1, 1998))
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar
</code></pre>
<p>So why doesn't Pytorch keep the dimensions after reshaping and would that be the correct tensor size if it did?</p>
|
<p>I just found my mistake: when I call <code>forward()</code> I am doing <code>self.encode(x.view(-1, 1998))</code>, which is reshaping the tensor.</p>
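<p>For completeness, a sketch of one possible fix (an assumption about the intended shapes rather than a confirmed solution): keep the <code>(batch, channels, length)</code> layout when passing the input to <code>encode</code>, since <code>Conv1d</code> expects a 3-dimensional input:</p>
<pre><code>def forward(self, x):
    mu, logvar = self.encode(x)            # x stays (1, 1, 1998) for Conv1d
    z = self.reparameterize(mu, logvar)
    return self.decode(z), mu, logvar
</code></pre>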
|
python|pytorch|tensor
| 0
|
374,718
| 64,585,328
|
Why can R's read.csv() read a CSV from GitLab URL when pandas' read_csv() can't?
|
<p>I noticed that panda's <code>read_csv()</code> fails at reading a public CSV file hosted on GitLab:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.read_csv("https://gitlab.com/stragu/DSH/-/raw/master/Python/pandas/spi.csv")
</code></pre>
<p>The error I get (truncated):</p>
<pre><code>HTTPError Traceback (most recent call last)
<ipython-input-3-e1c0b52ee83c> in <module>
----> 1 df = pd.read_csv("https://gitlab.com/stragu/DSH/-/raw/master/Python/pandas/spi.csv")
[...]
~\Anaconda3\lib\urllib\request.py in http_error_default(self, req, fp, code, msg, hdrs)
647 class HTTPDefaultErrorHandler(BaseHandler):
648 def http_error_default(self, req, fp, code, msg, hdrs):
--> 649 raise HTTPError(req.full_url, code, msg, hdrs, fp)
650
651 class HTTPRedirectHandler(BaseHandler):
HTTPError: HTTP Error 403: Forbidden
</code></pre>
<p>However, using R, the base function <code>read.csv()</code> reads it happily:</p>
<pre class="lang-r prettyprint-override"><code>df <- read.csv("https://gitlab.com/stragu/DSH/-/raw/master/Python/pandas/spi.csv")
head(df)
#> country_code year spi
#> 1 AFG 2020 42.29
#> 2 AFG 2019 42.34
#> 3 AFG 2018 40.61
#> 4 AFG 2017 38.94
#> 5 AFG 2016 39.65
#> 6 AFG 2015 38.62
</code></pre>
<p><sup>Created on 2020-10-29 by the <a href="https://reprex.tidyverse.org" rel="nofollow noreferrer">reprex package</a> (v0.3.0)</sup></p>
<p>Any idea why that is, and how R achieves it?</p>
<p>Versions used:</p>
<ul>
<li>R 4.0.3</li>
<li>Python 3.7.9</li>
<li>pandas 1.1.3</li>
</ul>
|
<p>If you're looking for a workaround, I recommend making the GET request via <a href="https://requests.readthedocs.io/en/master/" rel="nofollow noreferrer"><strong><code>requests</code></strong></a> library:</p>
<pre><code>import requests
import pandas as pd
from io import StringIO
url = "https://gitlab.com/stragu/DSH/-/raw/master/Python/pandas/spi.csv"
df = pd.read_csv(StringIO(requests.get(url).text))
</code></pre>
<pre><code>df.head()
country_code year spi
0 AFG 2020 42.290001
1 AFG 2019 42.340000
2 AFG 2018 40.610001
3 AFG 2017 38.939999
4 AFG 2016 39.650002
</code></pre>
<hr />
<p>As to the "why" part of it, I see <a href="https://github.com/pandas-dev/pandas/blob/10e5ad7109f1bb3c47b48b6f0df185e108c5493d/pandas/io/common.py#L254-L259" rel="nofollow noreferrer"><code>read_csv</code> internally uses <code>urllib</code> for standard URLs</a>, apparently the API in question blocks the request possibly because it thinks you are a crawler. If I repeat the same process, but add The "User-Agent" header, the request succeeds.</p>
<p>TLDR; what pandas does and fails:</p>
<pre><code>from urllib.request import Request, urlopen
req = Request(<URL>)
urlopen(req).read() # fails
</code></pre>
<p>What pandas should have done for this to work:</p>
<pre><code>req = Request(<URL>)
req.add_header('User-Agent', <literally anything>)
urlopen(req).read() # succeeds
</code></pre>
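<p>Putting the two together, a sketch of a workaround that stays within the standard library plus pandas (the <code>User-Agent</code> value is arbitrary):</p>
<pre><code>import pandas as pd
from urllib.request import Request, urlopen

url = "https://gitlab.com/stragu/DSH/-/raw/master/Python/pandas/spi.csv"
req = Request(url, headers={"User-Agent": "pandas-script"})
df = pd.read_csv(urlopen(req))   # urlopen returns a file-like object that read_csv accepts
</code></pre>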
|
python|r|pandas|read.csv
| 4
|
374,719
| 64,300,420
|
How to reshape array which doesn't have column?
|
<p>I have one array which shape is (6000,) and now I want to convert it to (6000,1) to use it further. How can I do it?</p>
<pre><code> print("TrainX", str(trainX.T.shape))
np.reshape(trainY, (1, trainY.shape[0]))
print("TrainY", str(trainY.shape))
</code></pre>
<p>Both giving same output (6000,)</p>
|
<p>Something like this?</p>
<pre><code>import numpy as np
a = np.array((1,2,3))
b = np.array([a]).T
print(np.shape(a))
print(np.shape(b))
</code></pre>
<p>Output:</p>
<pre><code>(3,)
(3, 1)
</code></pre>
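<p>As a side note, <code>np.reshape</code> returns a new array rather than modifying the input in place, which is why your <code>print</code> still shows <code>(6000,)</code>. A sketch with a stand-in array:</p>
<pre><code>import numpy as np

trainY = np.zeros(6000)            # stand-in for your array of shape (6000,)
trainY = trainY.reshape(-1, 1)     # reshape returns a new array, so assign it back
# or equivalently: trainY = trainY[:, np.newaxis]
print(trainY.shape)                # (6000, 1)
</code></pre>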
|
python|arrays|numpy
| 0
|
374,720
| 64,453,512
|
Map dataframe values in some columns according to the values in other columns
|
<p>I have a dataframe which looks like this:</p>
<pre><code> home_player_1 home_player_2 home_player_3 away_player_1 away_player_2 away_player_3 player_1 ~~~ player_2000
1 23 34 45 2 6 688 0 ~~~ 0
2 233 341 4 123 246 678 0 ~~~ 0
3 231 234 145 222 6 698 0 ~~~ 0
4 235 934 445 1972 16 1688 0 ~~~ 0
</code></pre>
<p>The columns from player_1 to player_2000 are all zero and are going to be mapped according to the previous columns. The rule is that "player_n", where n is the number of a player, equals 1 if player n appears in any one of the previous 6 columns; otherwise, it is 0.
For example, <strong>the expected output</strong> in the first row is:</p>
<blockquote>
<p>player_23=player_34=player_45=player_2=player_6=player_688=1, others
are 0.</p>
</blockquote>
<p><strong>NOTE</strong>: there are no duplicate appearances among the same row.</p>
|
<p>The following code should give you the desired DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>for index, row in df.iterrows():
values = row[:6]
for value in values:
df.at[index, 'player_{}'.format(value)] = 1
</code></pre>
<h2><strong>Edit:</strong></h2>
<p>In case you want to avoid iterate over the rows you can use <code>apply</code>:</p>
<pre class="lang-py prettyprint-override"><code>def update_row(row):
values = row[:6]
for value in values:
row.loc['player_{}'.format(value)] = 1
return row
result_df = df.apply(lambda row: update_row(row), axis=1)
</code></pre>
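<p>If the loops turn out to be slow on many rows, a vectorized sketch (assuming the player ids sit in the first 6 columns and the <code>player_n</code> columns already exist) could look like:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

ids = df.iloc[:, :6].to_numpy()
for n in np.unique(ids):
    # mark the corresponding player column with 1 wherever the id appears in the row
    df['player_{}'.format(n)] = (ids == n).any(axis=1).astype(int)
</code></pre>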
|
python|pandas|dataframe|logic
| 1
|
374,721
| 64,203,682
|
How do I filter the data when there are many conditions in pandas?
|
<p>I have a question about python pandas.</p>
<p>For example, the dataset df has 100 rows and the column names are a1, a2, a3, ... , a20. If I want to find specific rows where a1=20, a2=1, a3=0, a4=1, a5=2,...., a20=1, how can I filter out the rows if such row exists?</p>
<p>If I use pandas filter, how should I set the filter condition? I was thinking of using a for-loop to filter based on each condition, in which case I would have to filter 20 times. This approach seems very stupid if there are 100 conditions. I wonder if there is a more efficient way.</p>
|
<p>Try this:</p>
<pre><code>df = df[(df['a1'] == 20) & (df['a2'] == 1) & (df['a3'] == 0)]
</code></pre>
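<p>If there are many conditions, one possible sketch (the condition dict below is just an illustration) is to keep them in a dict and build the mask in one go:</p>
<pre><code>import pandas as pd

conditions = {'a1': 20, 'a2': 1, 'a3': 0}   # extend this dict up to a20 as needed
mask = (df[list(conditions)] == pd.Series(conditions)).all(axis=1)
filtered = df[mask]
</code></pre>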
|
python|pandas|dataframe
| 0
|
374,722
| 64,451,388
|
Summing up the output of multiple functions in python
|
<p>I currently have three sine functions (y1, y2, y3) and would like to sum the output of the functions in a new function (ytotal) but only where the output of the sine functions are greater than 0.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
#%%
phi = np.linspace(-2*np.pi, 2*np.pi, 100)
y1 = 0.2*np.sin(phi)
y2 = 0.2*np.sin(phi-(120*(np.pi/180)))
y3 = 0.2*np.sin(phi-(240*(np.pi/180)))
#if y1 or y2 or y3 > 0:
# ytotal = y1+y2+y3
plt.plot(phi,y1, label = "Piston 1")
plt.plot(phi,y2, label = "Piston 2")
plt.plot(phi,y3, label = "Piston 3")
#plt.plot(phi,ytotal, label = "Total output")
positions = (0,np.pi/3,2*np.pi/3,np.pi,4*np.pi/3,5*np.pi/3,2*np.pi)
labels = ("0","60","120","180","240","300","360")
plt.xticks(positions, labels)
plt.xlabel('Angular displacement')
plt.ylabel('Stroke')
plt.legend()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/nYR4N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nYR4N.png" alt="enter image description here" /></a></p>
<p>The output should be something like the following:</p>
<p><a href="https://i.stack.imgur.com/QW0dF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QW0dF.png" alt="enter image description here" /></a></p>
|
<p>Do you mean:</p>
<pre><code>plt.plot(phi, y1.clip(0)+y2.clip(0)+y3.clip(0), label='Total')
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/uj1ec.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uj1ec.png" alt="enter image description here" /></a></p>
|
python|numpy|math|trigonometry
| 0
|
374,723
| 64,238,018
|
How to use standard scaler model on dataset having less features than original dataset in which it was initially trained
|
<p>I was using the StandardScaler model from sklearn.preprocessing. I fitted the scaler on a dataset having 27 features in it. Is it possible to use the same scaler on a testing dataset having less than 27 features in it? Code snippet:</p>
<pre><code>from sklearn.preprocessing import StandardScaler
sc=StandardScaler()
sc.fit_transform(x_train)
</code></pre>
<p>Till this point this is working fine. The problem arises when I am trying to transform my test dataset. I know why it is happening. The test dataset has 24 features in it. But is it possible to transform only those 24 features and ignore the columns which are not present in it?</p>
<pre><code>sc.transform(x_test)
</code></pre>
<p>Thanks in advance!!</p>
|
<p>If you want to select all features except the first <code>3</code>, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>DataFrame.iloc</code></a>:</p>
<pre><code>from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train.iloc[:, 3:] = sc.fit_transform(x_train.iloc[:, 3:])
print (x_train)
</code></pre>
<p>If the features are in a list, use a <code>subset</code>:</p>
<pre><code>from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
features = ['col1','col2',..., 'col24']
x_train[features] = sc.fit_transform(x_train[features])
print (x_train)
</code></pre>
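<p>The fitted scaler can then be reused on the test set with the same feature list (a sketch, assuming <code>x_test</code> contains those 24 columns):</p>
<pre><code>x_test[features] = sc.transform(x_test[features])
</code></pre>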
|
python|pandas|machine-learning|scikit-learn
| 2
|
374,724
| 64,450,286
|
How to effeciently create conditional columns arrays using Numpy?
|
<p>The objective is to create an array that fulfils the condition <code>(x >= y) and (y >= z)</code>.</p>
<p>One naive way that does the job is using nested <code>for loops</code> as shown below</p>
<pre><code>tot_length=200
steps=0.1
start_val=0.0
list_no =np.arange(start_val, tot_length, steps)
a=np.zeros(shape=(1,3))
for x in list_no:
for y in list_no:
for z in list_no:
if (x>=y) & (y>=z):
a=np.append(a, [[x, y, z]], axis=0)
</code></pre>
<p>While no memory issue was thrown, the execution time is significantly slow.</p>
<p>Another approach that can be considered uses the <a href="https://stackoverflow.com/a/64446483/6446053">code</a> below. Yet that proposal is only able to work flawlessly as long as <code>tot_length</code> is less than <code>100</code>. Beyond that, memory issues arise as reported <a href="https://stackoverflow.com/a/64448974/6446053">here</a>.</p>
<pre><code>tot_length=200
steps=0.1
start_val=0.0
list_no =np.arange(start_val, tot_length, steps)
arr = np.meshgrid ( *[list_no for _ in range ( 3 )] )
a = np.array(list ( map ( np.ravel, arr ) )).transpose()
num_rows, num_cols = a.shape
a_list = np.arange ( num_cols ).reshape ( (-1, 3) )
for x in range ( len ( a_list ) ):
a=a[(a[:, a_list [x, 0]] >= a[:, a_list [x, 1]]) & (a[:, a_list [x, 1]] >= a[:, a_list [x, 2]])]
</code></pre>
<p>I'd appreciate any suggestion that can balance the overall execution time as well as memory usage. I also welcome any suggestion using Pandas if that makes things work.</p>
<p>To determine whether a proposed approach produces the intended output, use the following parameters</p>
<pre><code>tot_length=3
steps=1
start_val=1
</code></pre>
<p>Should produce the output</p>
<pre><code>1 1 1
2 1 1
2 2 1
2 2 2
</code></pre>
|
<pre><code>tot_length = 200
steps = 0.1
list_no = np.arange(0.0, tot_length, steps)
a = list()
for x in list_no:
for y in list_no:
if y > x:
break
for z in list_no:
if z > y:
break
a.append([x, y, z])
a = np.array(a)
# if needed, a.transpose()
</code></pre>
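<p>Another option (a sketch, not benchmarked) is <code>itertools.combinations_with_replacement</code> over the descending grid, which yields exactly the <code>x >= y >= z</code> triples without any <code>if</code>/<code>break</code> checks; the rows come out in a different order than above, and memory still grows cubically with the grid size:</p>
<pre><code>from itertools import combinations_with_replacement
import numpy as np

tot_length, steps, start_val = 3, 1, 1          # the small test case from the question
list_no = np.arange(start_val, tot_length, steps)

# descending grid => each tuple already satisfies x >= y >= z
a = np.array(list(combinations_with_replacement(list_no[::-1], 3)))
print(a)   # same four triples as in the question, in descending order
</code></pre>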
|
python|pandas|numpy
| 2
|
374,725
| 64,531,446
|
sum rows in dataframe based on different columns
|
<p>How can I merge rows in the given dataframe as shown below? For each account and currency, I have to sum the values, so that there's no division by sector.</p>
<p><a href="https://i.stack.imgur.com/e9ory.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e9ory.png" alt="enter image description here" /></a></p>
|
<p>Try</p>
<pre><code>df.groupby(['acccount','currency'])['sum'].sum().reset_index()
</code></pre>
|
python|pandas|dataframe|merge
| 4
|
374,726
| 64,445,546
|
Fill in timestamp gaps every 11th of a second
|
<p>I have a textfile 'example.txt' which contains data sampled at 11 Hz (so 11 samples per second).</p>
<p>Here you can find my code to load the textfile and convert 'Date' and 'Time' into datetime format. In the end the dataframe has a size of (34,6):</p>
<pre><code>import glob
import os
import datetime
#Specify file path
file = 'C:\Users\...\example.txt'
#Load file
df = pd.read_csv(file, sep=";", header=None, names=["Date", "Time", "ID1","ID2","ID3","MP","ET"],float_precision='round_trip')
#In my specific case, the txt.file has headers, which I want to remove
date = df['Date']
if date[0] == 'Date':
df = df.iloc[1:]
df = df.reset_index(drop=True)
# I erase the letters 'ms' so I only get numbers
df['Time'] = df['Time'].str[:-2]
# Put in datetime format
date = df['Date']
time = df['Time']
date_and_time = date + time
date_time_format = '%Y/%m/%d %H:%M:%S %f'
df['Time'] = pd.to_datetime(date_and_time,format=date_time_format)
# Drop Date column
df = df.drop(['Date'],axis=1)
</code></pre>
<p>In the 22th and 23th row (see output below), there is a gap of 35 seconds. Since I want to plot this data, I would like to fill in this gap by using the same sampling frequency of 11 Hz. So I would like to fill 35*11 datapoints between the 22th and 23th row. For this 'filled in data', I want to attribute the correct timestamp and attribute zero's to all other variables (ID1,ID2,ID3,MP and ET). I've read documentation on resampling (pandas module), but there is no option for resampling on 10th or 11th of seconds. Is there another way to do this? Maybe there is an option in plotting which accounts for gaps in timestamp data?</p>
<p>Thanks</p>
<pre><code>
df
Out[25]:
Time ID1 ID2 ID3 MP ET
0 2020-08-06 18:00:38.000000 0 0 0 230400 0.229000091553
1 2020-08-06 18:00:38.999160 0 1 1 529 0.254999876022
2 2020-08-06 18:00:38.199833 0 2 2 619 0.270999908447
3 2020-08-06 18:00:38.299750 0 3 3 84 0.292000055313
4 2020-08-06 18:00:38.399666 0 4 4 629 0.31500005722
5 2020-08-06 18:00:38.499583 0 5 5 376 0.331000089645
6 2020-08-06 18:00:38.599500 0 6 6 660 0.34299993515
7 2020-08-06 18:00:38.699417 0 7 7 160 0.354000091553
8 2020-08-06 18:00:38.799333 0 8 8 246 0.361999988556
9 2020-08-06 18:00:38.899250 0 9 9 69 0.371000051498
10 2020-08-06 18:00:38.999167 0 10 10 462 0.382999897003
11 2020-08-06 18:00:39.000000 0 0 0 3 0.229000091553
12 2020-08-06 18:00:39.999160 0 1 1 59 0.254999876022
13 2020-08-06 18:00:39.199833 0 2 2 19 0.270999908447
14 2020-08-06 18:00:39.299750 0 3 3 8 0.292000055313
15 2020-08-06 18:00:39.399666 0 4 4 9 0.31500005722
16 2020-08-06 18:00:39.499583 0 5 5 36 0.331000089645
17 2020-08-06 18:00:39.599500 0 6 6 6 0.34299993515
18 2020-08-06 18:00:39.699417 0 7 7 10 0.354000091553
19 2020-08-06 18:00:39.799333 0 8 8 46 0.361999988556
20 2020-08-06 18:00:39.899250 0 9 9 9 0.371000051498
21 2020-08-06 18:00:39.999167 0 10 10 2 0.382999897003
22 2020-08-06 18:01:14.000000 0 11 11 704 0.395999908447
23 2020-08-06 18:01:14.999160 0 12 12 795 0.410000085831
24 2020-08-06 18:01:14.199833 0 13 13 532 0.421000003815
25 2020-08-06 18:01:14.299750 0 14 14 363 0.430000066757
26 2020-08-06 18:01:14.399666 0 4 4 629 0.31500005722
27 2020-08-06 18:01:14.499583 0 5 5 376 0.331000089645
28 2020-08-06 18:01:14.599500 0 6 6 660 0.34299993515
29 2020-08-06 18:01:14.699417 0 7 7 160 0.354000091553
30 2020-08-06 18:01:14.799333 0 8 8 246 0.361999988556
31 2020-08-06 18:01:14.899250 0 9 9 69 0.371000051498
32 2020-08-06 18:01:14.999167 0 10 10 462 0.382999897003
33 2020-08-06 18:01:15.000000 0 11 11 4 0.395999908447
</code></pre>
|
<p>If you have your 'Time' column as an actual timestamp, the missing time gap will be filled in by a line connecting the two data points on either side. We can call out the legitimate values by using a scatterplot on top of the line plot.</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
df['Time'] = pd.to_datetime(df['Time'])
plt.figure(figsize=(10, 10))
sns.lineplot(data=df, x='Time',y='ET')
sns.scatterplot(data=df,x='Time',y='ET', s=50,color='r');
</code></pre>
<p><a href="https://i.stack.imgur.com/0dkx0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0dkx0.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe|timestamp
| 0
|
374,727
| 64,430,001
|
Invalid pointer error whily running python in C++ using pybind11 and pytorch
|
<p>While running the following Python code in C++ using pybind11 and pytorch 1.6.0, I get an "Invalid Pointer" error. In Python, the code runs successfully without any error. What's the reason? How can I solve this problem?</p>
<pre><code>import torch
import torch.nn.functional as F
import numpy as np
import cv2
import torchvision
import eval_widerface
import torchvision_model
def resize(image, size):
image = F.interpolate(image.unsqueeze(0), size=size, mode="nearest").squeeze(0)
return image
# define constants
model_path = '/path/to/model.pt'
image_path = '/path/to/image_pad.jpg'
scale = 1.0 #Image resize scale (2 for half size)
font = cv2.FONT_HERSHEY_SIMPLEX
MIN_SCORE = 0.9
image_bgr = cv2.imread(image_path)
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)#skimage.io.imread(args.image_path)
cv2.imshow("input image",image_bgr)
cv2.waitKey()
cv2.destroyAllWindows()
# load pre-trained model
return_layers = {'layer2':1,'layer3':2,'layer4':3}
RetinaFace = torchvision_model.create_retinaface(return_layers)
print('RetinaFace.state_dict().')
retina_dict = RetinaFace.state_dict()
</code></pre>
<p>the following function generates error.</p>
<pre><code>def create_retinaface(return_layers,backbone_name='resnet50',anchors_num=3,pretrained=True):
print('In create_retinaface.')
print(resnet.__dict__)
backbone = resnet.__dict__[backbone_name](pretrained=pretrained)
print('backbone.')
# freeze layer1
for name, parameter in backbone.named_parameters():
print('freeze layer 1.');
# if 'layer2' not in name and 'layer3' not in name and 'layer4' not in name:
# parameter.requires_grad_(False)
if name == 'conv1.weight':
# print('freeze first conv layer...')
parameter.requires_grad_(False)
model = RetinaFace(backbone,return_layers,anchor_nums=3)
return model
</code></pre>
<p>The statement <code>backbone = resnet.__dict__ [backbone_name](pretrained=pretrained)</code> generated error that looks like</p>
<pre><code>*** Error in `./p': munmap_chunk(): invalid pointer: 0x00007f4461866db0 ***
======= Backtrace: =========
/usr/lib64/libc.so.6(+0x7f3e4)[0x7f44736b43e4]
/usr/local/lib64/libopencv_gapi.so.4.1(_ZNSt10_HashtableISsSsSaISsENSt8__detail9_IdentityESt8equal_toISsESt4hashISsENS1_18_Mod_range_hashingENS1_20_Default_ranged_hashENS1_20_Prime_rehash_policyENS1_17_Hashtable_traitsILb1ELb1ELb1EEEE21_M_insert_unique_nodeEmmPNS1_10_Hash_nodeISsLb1EEE+0xc9)[0x7f4483dee1a9]
/home/20face/.virtualenvs/torch/lib64/python3.6/site-packages/torch/lib/libtorch_python.so(+0x4403b5)[0x7f4460bb73b5]
/home/20face/.virtualenvs/torch/lib64/python3.6/site-packages/torch/lib/libtorch_python.so(+0x44570a)[0x7f4460bbc70a]
/home/20face/.virtualenvs/torch/lib64/python3.6/site-packages/torch/lib/libtorch_python.so(+0x275b20)[0x7f44609ecb20]
/usr/lib64/libpython3.6m.so.1.0(_PyCFunction_FastCallDict+0x147)[0x7f4474307167]
/usr/lib64/libpython3.6m.so.1.0(+0x1507df)[0x7f44743727df]
/usr/lib64/libpython3.6m.so.1.0(_PyEval_EvalFrameDefault+0x3a7)[0x7f44743670f7]
/usr/lib64/libpython3.6m.so.1.0(+0x1505ca)[0x7f44743725ca]
/usr/lib64/libpython3.6m.so.1.0(+0x150903)[0x7f4474372903]
/usr/lib64/libpython3.6m.so.1.0(_PyEval_EvalFrameDefault+0x3a7)[0x7f44743670f7]
/usr/lib64/libpython3.6m.so.1.0(+0x14fb69)[0x7f4474371b69]
/usr/lib64/libpython3.6m.so.1.0(_PyFunction_FastCallDict+0x24f)[0x7f44743739ff]
/usr/lib64/libpython3.6m.so.1.0(_PyObject_FastCallDict+0x10e)[0x7f44742ca1de]
/usr/lib64/libpython3.6m.so.1.0(_PyObject_Call_Prepend+0x61)[0x7f44742ca2f1]
/usr/lib64/libpython3.6m.so.1.0(PyObject_Call+0x43)[0x7f44742c9f63]
/usr/lib64/libpython3.6m.so.1.0(+0xfa7e5)[0x7f447431c7e5]
/usr/lib64/libpython3.6m.so.1.0(+0xf71e2)[0x7f44743191e2]
/usr/lib64/libpython3.6m.so.1.0(PyObject_Call+0x43)[0x7f44742c9f63]
/usr/lib64/libpython3.6m.so.1.0(_PyEval_EvalFrameDefault+0x2067)[0x7f4474368db7]
/usr/lib64/libpython3.6m.so.1.0(PyEval_EvalCodeEx+0x24f)[0x7f4474372c9f]
</code></pre>
|
<p>This line is causing the error because it assumes <code>__dict__</code> has a <code>backbone_name</code> element:</p>
<pre><code>backbone = resnet.__dict__[backbone_name](pretrained=pretrained)
</code></pre>
<p>When that isn't the case, it basically tries to access invalid memory. Check <code>__dict__</code> first with an <code>if</code> statement or make sure that it has the <code>backbone_name</code> element before trying to use it.</p>
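<p>A minimal sketch of that guard (the error message text is just an example):</p>
<pre><code>if backbone_name not in resnet.__dict__:
    raise ValueError('Unknown backbone: {}'.format(backbone_name))
backbone = resnet.__dict__[backbone_name](pretrained=pretrained)
</code></pre>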
|
python|c++|c++11|pytorch|pybind11
| 0
|
374,728
| 64,459,038
|
How to read all csv files in multiple zip files?
|
<p>I have a folder with many zip files and within those zip files are multiple csv files.
Is there any way to get all of the .csv files in one dataframe in python?
Or any way I can pass a list of zip files?</p>
<p>The code I am currently trying is:</p>
<pre><code>import glob
import zipfile
import pandas as pd
for zip_file in glob.glob(r"C:\Users\harsh\Desktop\Temp\data_00-01.zip"):
# This is just one file. There are multiple zip files in the folder
zf = zipfile.ZipFile(zip_file)
dfs = [pd.read_csv(zf.open(f), header=None, sep=";", encoding='latin1') for f in zf.namelist()]
df = pd.concat(dfs,ignore_index=True)
print(df)
</code></pre>
<p>This code works for one zipfile but I have about 50 zip files in the folder and I would like to read and concatenate all csv files in those zip files in one dataframe.</p>
<p>Thanks</p>
|
<p>The following code should satisfy your requirements (just edit <code>dir_name</code> according to what you need):</p>
<pre class="lang-py prettyprint-override"><code>import glob
import zipfile
import pandas as pd
dfs = []
for filename in os.listdir(dir_name):
if filename.endswith('.zip'):
zip_file = os.path.join(dir_name, filename)
zf = zipfile.ZipFile(zip_file)
dfs += [pd.read_csv(zf.open(f), header=None, sep=";", encoding='latin1') for f in zf.namelist()]
df = pd.concat(dfs,ignore_index=True)
</code></pre>
|
python|pandas|glob|zip
| 0
|
374,729
| 64,354,164
|
cuDNN Error Failed to get convolution algorithm. This is probably because cuDNN failed to initialize
|
<p>I have installed Anaconda with Python 3.8 and CUDA 10.1 with CUDNN 8.0.3 on my Windows 10 machine with a GTX 1050 GPU. But I still get the error <a href="https://i.stack.imgur.com/BARlk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BARlk.png" alt="Details of the Error" /></a></p>
|
<p>I've been having a similar issue with TF 2.3.1. However, right away I can tell you that your cudnn version is incompatible. Only cudnn 7.6 is supported with the latest TF which as of right now is 2.3.1. See compatibility link below.</p>
<p><a href="https://www.tensorflow.org/install/gpu#hardware_requirements" rel="nofollow noreferrer">https://www.tensorflow.org/install/gpu#hardware_requirements</a></p>
|
python|tensorflow|tensorflow2.0
| 1
|
374,730
| 64,302,315
|
How to combine groupby, rolling and multiple columns' creation in Python
|
<p>I have an issue that is easily solved with dplyr in R, yet can't seem to find an easy way in Python.
I have a df with id(=customerid), s(=store), m(=month) and ttl(=total purchase) as columns.
I would like to calculate multiple new columns on id+s - for example last 3 months purchase and minimum purchase.</p>
<p>Example (last two are new columns):</p>
<pre><code>id s m ttl ttl_3 min_id_s
1 A 1/1/2020 7 nan 3
1 A 2/1/2020 3 nan 3
1 A 3/1/2020 7 17 3
1 A 4/1/2020 6 16 3
1 A 5/1/2020 7 20 3
1 A 6/1/2020 7 20 3
1 B 1/1/2020 6 nan 6
1 B 2/1/2020 10 nan 6
1 B 3/1/2020 8 24 6
1 B 4/1/2020 8 26 6
1 B 5/1/2020 10 26 6
1 B 6/1/2020 8 26 6
2 A 1/1/2020 4 nan 1
2 A 2/1/2020 3 15 1
2 A 3/1/2020 10 17 1
2 A 4/1/2020 6 19 1
2 A 5/1/2020 4 20 1
2 A 6/1/2020 1 11 1
</code></pre>
<p>I've tried the following:</p>
<pre><code>grp = df.groupby(['id','s'])
df = df.assign(ttl_3 = grp['ttl'].apply(lambda x: x.rolling(window=3)).sum(), min_id_s = grp['ttl'].min())
</code></pre>
<p>I get the following error:</p>
<blockquote>
<p>Cannot access callable attribute 'assign' of 'DataFrameGroupBy'
objects, try using the 'apply' method</p>
</blockquote>
<p>I know that it can be solved without assign, but then I'll have to have a line per each new column, and as I need plenty of these, I'm looking for a workaround.</p>
<p>I've also looked into add_columns with pyjanitor, but it doesn't seem to work with groupby.</p>
<p>In R, the following code solves the issue and in the mutate I can proceed adding columns:</p>
<pre><code>df = df %>% group_by(id, s) %>% mutate(ttl_3 = runner(ttl, k=3, f=sum), min_id_s = min(ttl))
</code></pre>
|
<p>With python syntax and pandas the logic is nearly identical</p>
<pre><code>t = '''
id s m ttl
1 A 1/1/2020 7
1 A 2/1/2020 3
1 A 3/1/2020 7
1 A 4/1/2020 6
1 A 5/1/2020 7
1 A 6/1/2020 7
1 B 1/1/2020 6
1 B 2/1/2020 10
1 B 3/1/2020 8
1 B 4/1/2020 8
1 B 5/1/2020 10
1 B 6/1/2020 8
2 A 1/1/2020 4
2 A 2/1/2020 3
2 A 3/1/2020 10
2 A 4/1/2020 6
2 A 5/1/2020 4
2 A 6/1/2020 1 '''
import pandas as pd
import io
df = pd.read_csv(io.StringIO(t), sep='\s+')
</code></pre>
<p>Trying to come as close to dplyr as I could.<br />
Group by <code>id</code> and <code>s</code> and compute the new columns. Multiple <code>rolling</code> columns could be computed with the <code>agg</code> method.<br />
You can use <code>assign</code>, but you also have to write one line per aggregation.</p>
<pre><code>group = df.groupby(['id','s'])['ttl']
df['ttl_3'] = group.rolling(3).sum().reset_index(level=(0,1), drop=True)
df['min_id_s'] = group.transform('min')
#df = df.assign(
# ttl_3 = group.rolling(3).sum().reset_index(level=(0,1), drop=True),
# min_id_s = group.transform('min'))
df
</code></pre>
<p>Out:</p>
<pre><code> id s m ttl ttl_3 min_id_s
0 1 A 1/1/2020 7 NaN 3
1 1 A 2/1/2020 3 NaN 3
2 1 A 3/1/2020 7 17.0 3
3 1 A 4/1/2020 6 16.0 3
4 1 A 5/1/2020 7 20.0 3
5 1 A 6/1/2020 7 20.0 3
6 1 B 1/1/2020 6 NaN 6
7 1 B 2/1/2020 10 NaN 6
8 1 B 3/1/2020 8 24.0 6
9 1 B 4/1/2020 8 26.0 6
10 1 B 5/1/2020 10 26.0 6
11 1 B 6/1/2020 8 26.0 6
12 2 A 1/1/2020 4 NaN 1
13 2 A 2/1/2020 3 NaN 1
14 2 A 3/1/2020 10 17.0 1
15 2 A 4/1/2020 6 19.0 1
16 2 A 5/1/2020 4 20.0 1
17 2 A 6/1/2020 1 11.0 1
</code></pre>
<hr />
<p>Aggregate multiple columns for <code>rolling</code> with <code>agg</code></p>
<pre><code>group = df.groupby(['id','s'])['ttl']
df[['ttl_3_sum','ttl_3_mean']] = group.rolling(3).agg(['sum','mean']).reset_index(level=(0,1), drop=True)
</code></pre>
<p>Out:</p>
<pre><code> id s m ttl ttl_3_sum ttl_3_mean
0 1 A 1/1/2020 7 NaN NaN
1 1 A 2/1/2020 3 NaN NaN
2 1 A 3/1/2020 7 17.0 5.666667
3 1 A 4/1/2020 6 16.0 5.333333
4 1 A 5/1/2020 7 20.0 6.666667
5 1 A 6/1/2020 7 20.0 6.666667
6 1 B 1/1/2020 6 NaN NaN
7 1 B 2/1/2020 10 NaN NaN
8 1 B 3/1/2020 8 24.0 8.000000
9 1 B 4/1/2020 8 26.0 8.666667
10 1 B 5/1/2020 10 26.0 8.666667
11 1 B 6/1/2020 8 26.0 8.666667
12 2 A 1/1/2020 4 NaN NaN
13 2 A 2/1/2020 3 NaN NaN
14 2 A 3/1/2020 10 17.0 5.666667
15 2 A 4/1/2020 6 19.0 6.333333
16 2 A 5/1/2020 4 20.0 6.666667
17 2 A 6/1/2020 1 11.0 3.666667
</code></pre>
<hr />
<p>Aggregating multiple output columns from multiple input columns for <code>rolling</code> with <code>agg</code>. The columns have to be numerical. <code>transform</code> doesn't support multiple aggregations as far as I know.</p>
<p>Generate a random numerical column for a rolling aggregation</p>
<pre><code>import numpy as np
df['ttl2'] = np.random.rand(len(df))
</code></pre>
<p><code>groupby</code> without selected columns to aggregate from multiple columns. For example with a custom function</p>
<pre><code>group = df.groupby(['id','s'])
df[['ttl_3_sum', 'ttl2_lambda']] = (group.rolling(3)
.agg({'ttl':'sum', 'ttl2': lambda x: x.sum()/x.min()})
.reset_index(level=(0,1), drop=True))
df
</code></pre>
<p>Out:</p>
<pre><code> id s m ttl ttl2 ttl_3_sum ttl2_lambda
0 1 A 1/1/2020 7 0.032482 NaN NaN
1 1 A 2/1/2020 3 0.998115 NaN NaN
2 1 A 3/1/2020 7 0.689431 17.0 52.953016
3 1 A 4/1/2020 6 0.897444 16.0 3.749456
4 1 A 5/1/2020 7 0.484360 20.0 4.276231
5 1 A 6/1/2020 7 0.971768 20.0 4.859138
6 1 B 1/1/2020 6 0.238363 NaN NaN
7 1 B 2/1/2020 10 0.740311 NaN NaN
8 1 B 3/1/2020 8 0.641598 24.0 6.797507
9 1 B 4/1/2020 8 0.984911 26.0 3.688945
10 1 B 5/1/2020 10 0.043379 26.0 38.495141
11 1 B 6/1/2020 8 0.700253 26.0 39.847293
12 2 A 1/1/2020 4 0.437082 NaN NaN
13 2 A 2/1/2020 3 0.465313 NaN NaN
14 2 A 3/1/2020 10 0.698976 17.0 3.663777
15 2 A 4/1/2020 6 0.430087 19.0 3.707103
16 2 A 5/1/2020 4 0.949536 20.0 4.832976
17 2 A 6/1/2020 1 0.904986 11.0 5.311974
</code></pre>
|
python|pandas|janitor
| 1
|
374,731
| 64,275,440
|
FIeld format conversion error in numpy array
|
<p>I am trying to create a numpy array consisting of an array of data, where each data point is another array of length 5. I am trying to do this by setting a first datum (i.e. an array of five elements) and setting the names and formats of each data type using the following code:</p>
<pre><code>DTYPE = [('t_start', 'S32'), ('t_end', 'S32'), ('mac1', 'S32'), ('mac2', 'S32'), ('rss', np.float)]
np.array([ ['2020-10-07 09:20:00', '2020-10-07 09:43:20', 'b8:9a:2a:06:68:f5', 'b8:9a:2a:06:68:f5', '10.501'] ], dtype=DTYPE)
</code></pre>
<p>There must be something really obvious I am missing here because this (simple?) code issues the following Exception</p>
<pre><code>ValueError: could not convert string to float: '2020-10-07 09:20:00'
</code></pre>
<p>Any hint on what is going on here would be much appreciated.</p>
|
<p>I can't reproduce your exact error message, but it looks like there are two issues: your inner list should be a tuple, and you should use <code>U</code> (Unicode string) in place of <code>S</code> in the format specifier (<a href="https://numpy.org/doc/stable/reference/arrays.dtypes.html" rel="nofollow noreferrer">documentation</a> says that <code>S</code> (zero-terminated bytes) is not recommended):</p>
<pre><code>import numpy as np
DTYPE = [('t_start', 'U32'), ('t_end', 'U32'), ('mac1', 'U32'),
('mac2', 'U32'), ('rss', np.float)]
values = [('2020-10-07 09:20:00', '2020-10-07 09:43:20',
'b8:9a:2a:06:68:f5', 'b8:9a:2a:06:68:f5', '10.501')]
arr = np.array(values, dtype=DTYPE)
print(arr.shape)
print(arr[0]['t_end']) # 2020-10-07 09:43:20
print(type(arr[0]['t_end'])) # <class 'numpy.str_'>
print(arr[0]['rss']) # 10.501
print(type(arr[0]['rss'])) # <class 'numpy.float64'>
</code></pre>
<p>In general, in your input to <code>np.array</code>, use an inner <em>tuple</em> to group the elements of each compound data item, and use <em>lists</em> to group the data items into an array of appropriate dimensionality.</p>
|
python|python-3.x|numpy
| 0
|
374,732
| 64,396,124
|
Selecting sentences from a data frame text column if only the sentences contain any of the keywords from a search list
|
<p>I have a dataframe where, in one column, I have a full text with multiple very long sentences. I used <code>NLTK</code> to tokenize the text, but now I need to make sure I only extract the sentences that contain any of the words from a given long list of full words. I wrote the following code, but the problem with it is that it is not matching whole words; for example, to spot a given word in the search list such as 'tic', it also selects a sentence that contains the word 'statistic'.</p>
<pre><code>symptoms = [long list of words ~ about 100]
new_df = df[df['Sentence'].str.contains('|'.join(symptoms))]
</code></pre>
<p>Right above this code, I use the below code to tokenize my text.</p>
<pre><code>sentences = []
for row in df_no_title.itertuples():
for sentence in sent_tokenize(row[2]):
sentences.append((row[1], sentence))
df = pd.DataFrame(sentences, columns=['Paper_Id', 'Sentence'])
</code></pre>
<p>Is there a way to check the sentences word by word to find the ones that match with any of the words from my list and only extract those sentences in Python?</p>
<p>Please let me know if I should provide any additional information.</p>
|
<p>The regex you are using is <em>almost</em> good. What you need is to search for individual words, which you can achieve by using the <code>\b</code> special regex character, which matches word boundaries (in the regex sense).</p>
<p>Therefore, what would work is:</p>
<pre><code>symptoms = [long list of words ~ about 100]
new_df = df[df['Sentence'].str.contains(r'\b' + r'\b|\b'.join(symptoms) + r'\b')]
</code></pre>
<p>You can check how the regular expression used is able to catch the symptoms <a href="https://pythex.org/?regex=%5Cbheadache%5Cb%7C%5Cbchest%20pain%5Cb%7C%5Cbrunny%20nose%5Cb%7C%5Cbpain%20in%20the%20stomach%5Cb%7C%5Cbhypercapnia%5Cb&test_string=fifty-two%20patients%20had%20a%20severe%20runny%20nose%20with%20a%20p%20aco2%20%E2%89%A5%2060%20mmhg%20and%2028%20a%20severe%20acidosis%20with%20a%20ph%20%3C%207.2.%20all%20subjects%20were%20sedated%20and%20invasive%20mechanically%20ventilated%20in%20a%20pressure%20controlled%20mode%20with%20a%20shorter%20duration%20before%20pecla%20in%20the%20survivor%20group.&ignorecase=0&multiline=0&dotall=0&verbose=0" rel="nofollow noreferrer">here</a> based on the example below.</p>
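<p>A quick self-contained check (a sketch with made-up sentences) that the word-boundary pattern no longer matches substrings:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Sentence': ['a statistic was reported', 'he has a tic']})
symptoms = ['tic']
pattern = r'\b' + r'\b|\b'.join(symptoms) + r'\b'
print(df[df['Sentence'].str.contains(pattern)])   # only the 'he has a tic' row matches
</code></pre>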
|
python|python-3.x|regex|pandas|nltk
| 0
|
374,733
| 64,362,318
|
pandas: replicating an excel formula in pandas
|
<p>What i have is a dataframe like:</p>
<pre><code> total_sum pid
5 2
1 2
6 7
3 7
1 7
1 7
0 7
5 10
1 10
1 10
</code></pre>
<p>What I want is another column <code>pos</code> like:</p>
<pre><code> total_sum pid pos
5 2 1
1 2 2
6 7 1
3 7 2
1 7 3
1 7 3
0 7 4
5 10 1
1 10 2
1 10 2
</code></pre>
<p>The logic behind is:</p>
<p>The initial <code>pos</code> value for new <code>pid</code> is <code>1</code>.</p>
<p>If <code>pid</code> does not change but the <code>total_sum</code> changes, the value for <code>pos</code> is incremented by 1 (see the first two rows); otherwise the value for <code>pos</code> is the previous value (see the last two rows).</p>
<p>What i tried:</p>
<pre><code> df['pos'] = 1
df['pos'] = np.where(((df.pid.diff(-1)) == 0 & (df.total_sum.diff(-1) == 0)),
df.pos, (np.where(df.total_sum.diff(1) < 1, df.pos + 1, df.pos )))
</code></pre>
<p>Currently, I am doing it in an excel sheet, where I initially write 1 manually in the first column of <code>pos</code> and then write the formula in second cell of <code>pos</code>:</p>
<pre><code>=IF(A3<>A2,1,IF(B3=B2,C2,C2+1))
</code></pre>
|
<p><strong>Explanation</strong>:</p>
<p>Doing <code>groupby</code> on <code>pid</code> to group the same <code>pid</code> into separate groups. On each group, apply these following operations:</p>
<p>_ Call <code>diff</code> on each group. <code>diff</code> returns integers or <code>NaN</code> indicate the differences between 2 consecutive rows. First row of each group has no previous row, so <code>diff</code> always returns <code>NaN</code> for first row of each group :</p>
<pre><code>df.groupby('pid').total_sum.transform(lambda x: x.diff()
Out[120]:
0 NaN
1 -4.0
2 NaN
3 -3.0
4 -2.0
5 0.0
6 -1.0
7 NaN
8 -4.0
9 0.0
Name: total_sum, dtype: float64
</code></pre>
<p>_ <code>ne</code> checks to see if any value is not <code>0</code>. It returns <code>True</code> on not <code>0</code></p>
<pre><code>df.groupby('pid').total_sum.transform(lambda x: x.diff().ne(0))
Out[121]:
0 True
1 True
2 True
3 True
4 True
5 False
6 True
7 True
8 True
9 False
Name: total_sum, dtype: bool
</code></pre>
<p>_ <code>cumsum</code> is cumulative sum which successively adds each rows. In Python, <code>True</code> is interpreted as <code>1</code> and <code>False</code> interpreted as <code>0</code>. The 1st of each group is always <code>True</code>, so <code>cumsum</code> is always starting from <code>1</code> and adding up each rows to get the desired output.</p>
<pre><code>df.groupby('pid').total_sum.transform(lambda x: x.diff().ne(0).cumsum())
Out[122]:
0 1
1 2
2 1
3 2
4 3
5 3
6 4
7 1
8 2
9 2
Name: total_sum, dtype: int32
</code></pre>
<hr />
<p>Chain all commands to one-liner as follows:</p>
<pre><code>df['pos'] = df.groupby('pid').total_sum.transform(lambda x: x.diff().ne(0).cumsum())
Out[99]:
total_sum pid pos
0 5 2 1
1 1 2 2
2 6 7 1
3 3 7 2
4 1 7 3
5 1 7 3
6 0 7 4
7 5 10 1
8 1 10 2
9 1 10 2
</code></pre>
|
python|pandas|dataframe|sorting
| 3
|
374,734
| 64,177,093
|
Question Python Group by and Apply function
|
<p>I have a data set like below:</p>
<pre><code>idx start_date end_date flag
1. 6-17-20. 6-24-20. 1
2. 6-17-20. 6-24-20. 0
3. 6-17-20. 6-24-20. 1
4. 6-17-20. 6-24-20. 0
1. 6-25-20. 7-03-20. 1
2. 6-25-20. 7-03-20. 1
3. 6-25-20. 7-03-20. 1
4. 6-25-20. 7-03-20. 1
</code></pre>
<p>What I want is</p>
<pre><code>idx start_date end_date flag idx1 idx2 idx3 idx4
1. 6-17-20. 6-24-20. 1. 1 0. 1. 0
2. 6-17-20. 6-24-20. 0. 1 0. 1. 0
3. 6-17-20. 6-24-20. 1 1 0. 1. 0
4. 6-17-20. 6-24-20. 0 1 0. 1. 0
1. 6-25-20. 7-03-20. 1 1 1. 1. 1
2. 6-25-20. 7-03-20. 1 1 1. 1. 1
3. 6-25-20. 7-03-20. 1 1 1. 1. 1
4. 6-25-20. 7-03-20. 1 1 1. 1. 1
</code></pre>
<p>I know I can do a for loop with group by, but I want to ask if there is a more efficient way to do it?
I just need to group by and do a long-to-wide transformation. After that, append all the datasets and merge to the original one.</p>
|
<p>Your question could be clearer but I think I sort of understand what you are trying to achieve. Do comment if I am mistaken!</p>
<p>I'm presuming that you want the start and end dates as row entries and the columns of <code>idx</code>s indicating which IDs have those start and end dates. With that assumption in mind, here is how it can be done:</p>
<pre class="lang-py prettyprint-override"><code>### Filter out only those with flag == 1
df_new = df[df['flag'] == 1].copy()
### Consolidate unique start, end dates. Aggregate the corresponding ids as lists
ser_wide = df_new.groupby(['start_date', 'end_date'])['idx'].apply(list)
### Convert the previous lists into binary lists of the same length
unique_ids = sorted(df_new['idx'].unique())
ser_wide = ser_wide.apply(lambda idxs: [(1 if idx in idxs else 0) for idx in unique_ids])
### Define new column names
idx_cols = [f'idx{int(i)}' for i in unique_ids]
### Convert into dataframe
df_wide = pd.DataFrame(ser_wide.to_list(), columns=idx_cols, index=ser_wide.index)
df_wide.reset_index(inplace=True)
print(df_wide)
start_date end_date idx1 idx2 idx3 idx4
0 6-17-20. 6-24-20. 1 0 1 0
1 6-25-20. 7-03-20. 1 1 1 1
</code></pre>
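<p>If you then want to attach these indicator columns back onto the original rows (you mentioned merging with the original data), a sketch of that last step could be:</p>
<pre class="lang-py prettyprint-override"><code>### Merge the wide indicator columns back onto the original dataframe
df_merged = df.merge(df_wide, on=['start_date', 'end_date'], how='left')
</code></pre>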
<p>Hope this achieves what I think you intended.</p>
|
python|pandas-groupby|apply
| 0
|
374,735
| 64,509,518
|
Pandas - convert float to int when there are NaN values in the column
|
<p>I'm trying to convert float numbers to int in my df column with this one liner:</p>
<pre><code>df['id'] = df['id'].map(lambda x: int(x))
</code></pre>
<p>But some values are NaN an will throw:</p>
<pre><code>ValueError: cannot convert float NaN to integer
</code></pre>
<p>How do I fix this?</p>
|
<p><code>NaN</code> is itself a float and can't be converted to a usual <code>int</code>. You can use <code>pd.Int64Dtype()</code> for nullable integers:</p>
<pre><code># sample data:
df = pd.DataFrame({'id':[1, np.nan]})
df['id'] = df['id'].astype(pd.Int64Dtype())
</code></pre>
<p>Output:</p>
<pre><code> id
0 1
1 <NA>
</code></pre>
<p>Another option, is use <code>apply</code>, but then the <code>dtype</code> of the column will be <code>object</code> rather than numeric/int:</p>
<pre><code>df['id'] = df['id'].apply(lambda x: x if np.isnan(x) else int(x))
</code></pre>
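<p>For reference, the same nullable integer dtype can also be requested by its string alias, which is equivalent to <code>pd.Int64Dtype()</code>:</p>
<pre><code>df['id'] = df['id'].astype('Int64')  # note the capital "I"; lowercase 'int64' would fail on NaN
</code></pre>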
|
pandas
| 7
|
374,736
| 64,372,558
|
Unable to convert full string from pandas series
|
<p>I have a pandas series where I save strings. But when I wish to convert this series to a string, each cell is limited to a number of characters. Does anyone know if I am doing something wrong or if there's a workaround?</p>
<pre><code>data = np.array(['good day good sir how are you doing today? I sure hope you are well', 'e', 'e', 'k','s'])
ser = pd.Series(data)
ser.to_string()
</code></pre>
<p>The current output:</p>
<pre><code>0 good day good sir how are you doing today? I s...\n1 e\n2 e\n3 k\n4 s'
</code></pre>
<p>Wanted output:</p>
<pre><code>'0 good day good sir how are you doing today? I sure hope you are well\n1 e\n2 e\n3 k\n4 s'
</code></pre>
<p>Is there any way I can get a full string?</p>
|
<p><strong>Series.to_string()</strong> returns a string representation of the Series, not a reduced value. If you want to combine all the strings into a single one, replace <code>ser.to_string()</code> with <code>" ".join(ser)</code>, which will concatenate all the strings with a space.</p>
|
python|pandas|string|series
| 0
|
374,737
| 64,589,892
|
Tensorflow 2.3.1 IndexError: list index out of range
|
<p><strong>I got an error: IndexError: list index out of range.</strong></p>
<p>It worked on one machine, but after I transferred it to another machine it doesn't work anymore.</p>
<p>Python: 3.8.5</p>
<p>tensorflow: 2.3.1</p>
<p>Traceback says:</p>
<pre><code>tensorflow.python.autograph.impl.api.StagingError: in user code:
Load_Model.py:40 detect_fn *
image, shapes = detection_model.preprocess(image)
C:\Users\Tensorflow\tensorflow 2.x\models\research\object_detection\meta_architectures\ssd_meta_arch.py:482 preprocess *
normalized_inputs = self._feature_extractor.preprocess(inputs)
C:\Users\Tensorflow\tensorflow 2.x\models\research\object_detection\models\ssd_resnet_v1_fpn_keras_feature_extractor.py:204 preprocess *
if resized_inputs.shape.as_list()[3] == 3:
IndexError: list index out of range
</code></pre>
<p>My code:</p>
<pre><code>import tensorflow as tf
import os
import cv2
from object_detection.utils import label_map_util
from object_detection.utils import config_util
from object_detection.utils import visualization_utils as viz_utils
from object_detection.builders import model_builder
model_name = 'ssd_resnet101_v1_fpn_640x640_coco17_tpu-8'
data_dir = os.path.join(os.getcwd(), 'data')
models_dir = os.path.join(data_dir, 'models')
path_to_ckg = os.path.join(models_dir, os.path.join(model_name, 'pipeline.config'))
PATH_TO_CFG = os.path.join(models_dir)
path_to_cktp = os.path.join(models_dir, os.path.join(model_name, 'checkpoint/'))
label_filename = 'mscoco_label_map.pbtxt'
path_to_labels = os.path.join(models_dir, os.path.join(model_name, label_filename))
tf.get_logger().setLevel('ERROR') # Suppress TensorFlow logging (2)
#Enable GPU dynamic memory allocation
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
#Load pipeline config and build a detection model'
configs = config_util.get_configs_from_pipeline_file(path_to_ckg)
model_config = configs['model']
detection_model = model_builder.build(model_config=model_config, is_training=False)
#Restore checkpoint
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(os.path.join(path_to_cktp, 'ckpt-0')).expect_partial()
@tf.function
def detect_fn(image):
"""Detect objects in image."""
image, shapes = detection_model.preprocess(image)
prediction_dict = detection_model.predict(image, shapes)
detections = detection_model.postprocess(prediction_dict, shapes)
return detections, prediction_dict, tf.reshape(shapes, [-1])
category_index = label_map_util.create_category_index_from_labelmap(path_to_labels,
use_display_name=True)
cap = cv2.VideoCapture('rtsp://username:pass@192.168.1.103:8000/tcp/av0_1')
import numpy as np
while True:
#Read frame from camera
ret, image_np = cap.read()
#Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
#Things to try:
#Flip horizontally
#image_np = np.fliplr(image_np).copy()
#Convert image to grayscale
#image_np = np.tile(
#np.mean(image_np, 2, keepdims=True), (1, 1, 3)).astype(np.uint8)
input_tensor = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)
label_id_offset = 1
image_np_with_detections = image_np.copy()
viz_utils.visualize_boxes_and_labels_on_image_array(
image_np_with_detections,
detections['detection_boxes'][0].numpy(),
(detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
detections['detection_scores'][0].numpy(),
category_index,
use_normalized_coordinates=True,
max_boxes_to_draw=200,
min_score_thresh=.30,
agnostic_mode=False)
#Display output
cv2.imshow('object detection', cv2.resize(image_np_with_detections, (800, 600)))
if cv2.waitKey(25) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>I really cannot understand why such an error happens.</p>
<p>What is wrong in my codes? How should I fix this?</p>
|
<p>Define the detect_fn inside the get_model_detection_function function, something like this:</p>
<pre><code>def get_model_detection_function(model):
"""Get a tf.function for detection."""
@tf.function
def detect_fn(image):
"""Detect objects in image."""
image, shapes = model.preprocess(image)
prediction_dict = model.predict(image, shapes)
detections = model.postprocess(prediction_dict, shapes)
return detections, prediction_dict, tf.reshape(shapes, [-1])
return detect_fn
detect_fn = get_model_detection_function(detection_model)
</code></pre>
<p>See if this helps </p>
|
python|tensorflow|tensorflow2.0|index-error|tensorflow2.x
| 3
|
374,738
| 64,344,884
|
Pandas DataFrame : Create a column based on values from different rows
|
<p>I have a pandas dataframe which looks like this :</p>
<pre><code> Ref Value
1 SKU1 A
2 SKU2 A
3 SKU3 B
4 SKU2 A
5 SKU1 B
6 SKU3 C
</code></pre>
<p>I would like to create a new column, conditioned on whether the values for a given Ref match or not. For instance, if for SKU1 both rows have the same values, display "good"; if not, display "bad".
The dataframe will usually have 2 rows for each Ref, but sometimes will have more (in that case, "good" is when they all match each other).</p>
<p>With the example above, this would be :</p>
<pre><code> Ref Value NewCol
1 SKU1 A bad
2 SKU2 A good
3 SKU3 B bad
4 SKU2 A good
5 SKU1 B bad
6 SKU3 C bad
</code></pre>
<p>What would be the best way of implementing this ?
In my example, <em>Value</em> can only be A, B or C, but <em>Ref</em> has thousands of different entries, which is why I am struggling</p>
<p>Many thanks in advance !</p>
|
<p>Let's try <code>groupby().nunique()</code> to check the number of values within a ref:</p>
<pre><code>df['NewCol'] = np.where(df.groupby('Ref')['Value'].transform('nunique')==1,
'good', 'bad')
</code></pre>
<p>Output:</p>
<pre><code> Ref Value NewCol
1 SKU1 A bad
2 SKU2 A good
3 SKU3 B bad
4 SKU2 A good
5 SKU1 B bad
6 SKU3 C bad
</code></pre>
<hr />
<p><strong>Update</strong>: per comment:</p>
<pre><code>s = df['Ref'].map(df.groupby('Ref')['Value'].apply(set))
df['NewCol'] = np.select((s.str.len()==1, s.eq({'A','B'})),
('good', 'average'), 'bad')
</code></pre>
<p>Output:</p>
<pre><code> Ref Value NewCol
1 SKU1 A average
2 SKU2 A good
3 SKU3 B bad
4 SKU2 A good
5 SKU1 B average
6 SKU3 C bad
</code></pre>
|
python|pandas|dataframe
| 3
|
374,739
| 64,431,313
|
split multiple columns in pandas dataframe by delimiter
|
<p>I have survey data which annoying has returned multiple choice questions in the following way. It's in an excel sheet There is about 60 columns with responses from single to multiple that are split by /. This is what I have so far, is there any way to do this quicker without having to do this for each individual column</p>
<pre><code>data = {'q1': ['one', 'two', 'three'],
'q2' : ['one/two/three', 'a/b/c', 'd/e/f'],
'q3' : ['a/b/c', 'd/e/f','g/h/i']}
df = pd.DataFrame(data)
df[['q2a', 'q2b', 'q2c']]= df['q2'].str.split('/', expand = True, n=0)
df[['q3a', 'q3b', 'q3c']]= df['q2'].str.split('/', expand = True, n=0)
clean_df = df.drop(df[['q2', 'q3']], axis=1)
</code></pre>
|
<p>We can use list comprehension with <code>add_prefix</code>, then we use <code>pd.concat</code> to concatenate everything to your final df:</p>
<pre><code>splits = [df[col].str.split(pat='/', expand=True).add_prefix(col) for col in df.columns]
clean_df = pd.concat(splits, axis=1)
</code></pre>
<pre><code> q10 q20 q21 q22 q30 q31 q32
0 one one two three a b c
1 two a b c d e f
2 three d e f g h i
</code></pre>
<hr />
<p>If you actually want your column names to be suffixed by a letter, you can do the following with <code>string.ascii_lowercase</code>:</p>
<pre><code>from string import ascii_lowercase
dfs = []
for col in df.columns:
d = df[col].str.split('/', expand=True)
c = d.shape[1]
d.columns = [col + l for l in ascii_lowercase[:c]]
dfs.append(d)
clean_df = pd.concat(dfs, axis=1)
</code></pre>
<pre><code> q1a q2a q2b q2c q3a q3b q3c
0 one one two three a b c
1 two a b c d e f
2 three d e f g h i
</code></pre>
|
python|pandas|split
| 6
|
374,740
| 64,318,011
|
How can I find cosine similarity between input array and pandas dataframe and return the row in dataframe which is most similar?
|
<p>I have a data set as shown below and I want to find the cosine similarity between the input array and each row in the dataframe in order to identify the row which is most similar or a duplicate.
The data shown below is a sample and has multiple features. I want to find the cosine similarity between the input row and each row in the data and then use the min (argmin).
<a href="https://i.stack.imgur.com/VHBrq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VHBrq.png" alt="enter image description here" /></a></p>
|
<p>There are <a href="https://stackoverflow.com/questions/18424228">various ways of computing cosine similarity</a>. Here I give a brief summary on how each of them applies to a dataframe.</p>
<h2>Data</h2>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(111) # reproducibility
df = pd.DataFrame(
data={
"col1": np.random.randn(5),
"col2": np.random.randn(5),
"col3": np.random.randn(5),
}
)
input_array = np.array([1,2,3])
# print
df
Out[6]:
col1 col2 col3
0 -1.133838 -0.459439 0.238894
1 0.384319 -0.059169 -0.589920
2 1.496554 -0.354174 -1.440585
3 -0.355382 -0.735523 0.773703
4 -0.787534 -1.183940 -1.027967
</code></pre>
<h2>1. Sklearn cosine_similarity</h2>
<p>Just mind the correct shape. 2D data should always be shaped as<code>(#rows, #features)</code>. Also mind the output shape.</p>
<pre><code>from sklearn.metrics.pairwise import cosine_similarity
cosine_similarity(input_array.reshape((1, -1)), df).reshape(-1)
Out[7]: array([-0.28645981, -0.56882572, -0.44816313, 0.11750604, -0.95037169])
</code></pre>
<h2>2. Scipy cosine distance</h2>
<p>Just apply this on each row (<code>axis=1</code>). The result is the same as using <code>sklearn</code>. Note that cosine similarity is <code>1 - cosine(a1, a2)</code> here.</p>
<pre><code>from scipy.spatial.distance import cosine
df.apply(lambda row: 1 - cosine(row, input_array), axis=1)
Out[10]:
0 -0.286460
1 -0.568826
2 -0.448163
3 0.117506
4 -0.950372
dtype: float64
</code></pre>
<h2>3. Compute manually</h2>
<p>Essentially the same as <code>scipy</code> except that you code the formula manually.</p>
<pre><code>from numpy.linalg import norm
df.apply(lambda row: input_array.dot(row) / norm(input_array) / norm(row), axis=1)
Out[8]:
0 -0.286460
1 -0.568826
2 -0.448163
3 0.117506
4 -0.950372
dtype: float64
</code></pre>
<p>Also refer to the relation between <a href="https://stats.stackexchange.com/questions/235673">Pearson correlation, cosine similarity and z-score</a> to see whether it is helpful.</p>
|
python|pandas|scikit-learn|scipy|cosine-similarity
| 3
|
374,741
| 64,273,234
|
Python Issue - Correlated Random Samples and Cholesky Decomposition
|
<pre><code>import numpy as np
from scipy.linalg import cholesky
X1_temp = np.random.normal(0, 1, (40, 10)) # with mean 0 and std. dev. 1
C = cholesky(C0, lower = False)
X1 = X1_temp @ np.transpose(C)
</code></pre>
<p>Thanks very much in advance. This is just a very simple task. I am given a desired covariance matrix C0. I want to generate random correlated data matrix X1 such that the covariance matrix of X1 is C0. I use Cholesky Decomposition. The codes are in Python.</p>
<p>However, if I then manually use np.cov() to calculate the covariance matrix of X1, it turns out to be quite different from C0. Please see below for the results.</p>
<pre><code>C0
array([[0.00119545, 0.00055428, 0.00094478, 0.00057466, 0.00039038,
0.0004846 , 0.00047505, 0.00039403, 0.00041767, 0.00053985],
[0.00055428, 0.00055869, 0.0005272 , 0.00046757, 0.0002733 ,
0.00037907, 0.00034639, 0.00030749, 0.00034975, 0.00041578],
[0.00094478, 0.0005272 , 0.00163294, 0.00049306, 0.00032083,
0.0004176 , 0.000403 , 0.00038588, 0.00037359, 0.00050818],
[0.00057466, 0.00046757, 0.00049306, 0.00161212, 0.00065667,
0.00080889, 0.00075579, 0.00048425, 0.00066279, 0.00044288],
[0.00039038, 0.0002733 , 0.00032083, 0.00065667, 0.00076096,
0.00054659, 0.00061203, 0.00032021, 0.00047247, 0.00025952],
[0.0004846 , 0.00037907, 0.0004176 , 0.00080889, 0.00054659,
0.00116606, 0.00056961, 0.00040392, 0.00053751, 0.00033647],
[0.00047505, 0.00034639, 0.000403 , 0.00075579, 0.00061203,
0.00056961, 0.0008417 , 0.0003747 , 0.00054147, 0.00033309],
[0.00039403, 0.00030749, 0.00038588, 0.00048425, 0.00032021,
0.00040392, 0.0003747 , 0.00042903, 0.00035186, 0.00030802],
[0.00041767, 0.00034975, 0.00037359, 0.00066279, 0.00047247,
0.00053751, 0.00054147, 0.00035186, 0.00061011, 0.00033744],
[0.00053985, 0.00041578, 0.00050818, 0.00044288, 0.00025952,
0.00033647, 0.00033309, 0.00030802, 0.00033744, 0.00048086]])
np.cov(X1,rowvar=False)
array([[ 2.73579114e-03, 1.11107458e-03, 7.39905489e-04,
8.80909471e-04, 2.67940563e-04, 3.00456896e-04,
2.78558784e-04, 1.95630006e-04, -8.84664656e-07,
3.07873704e-04],
[ 1.11107458e-03, 7.67420975e-04, 6.32049491e-05,
7.29592024e-04, 2.30491196e-04, 2.16890868e-04,
1.67582787e-04, 8.49125674e-05, -8.34866957e-06,
1.68615033e-04],
[ 7.39905489e-04, 6.32049491e-05, 1.14805709e-03,
-2.24580150e-04, 8.49680219e-05, -1.42327782e-05,
-1.19349618e-04, -1.75276890e-05, 3.43158455e-05,
6.89819740e-05],
[ 8.80909471e-04, 7.29592024e-04, -2.24580150e-04,
1.44999733e-03, 1.27584928e-04, 3.49017835e-04,
1.89078071e-04, 4.95665628e-05, -3.51667683e-05,
1.25940711e-04],
[ 2.67940563e-04, 2.30491196e-04, 8.49680219e-05,
1.27584928e-04, 4.63007494e-04, 1.58792113e-04,
5.37907654e-05, 5.71734068e-05, 2.18693890e-05,
7.16343773e-05],
[ 3.00456896e-04, 2.16890868e-04, -1.42327782e-05,
3.49017835e-04, 1.58792113e-04, 4.23799861e-04,
1.37080444e-05, 9.08977322e-06, -6.28273606e-05,
6.64415798e-06],
[ 2.78558784e-04, 1.67582787e-04, -1.19349618e-04,
1.89078071e-04, 5.37907654e-05, 1.37080444e-05,
2.33866264e-04, 6.65816973e-05, 1.84881869e-05,
5.54885576e-05],
[ 1.95630006e-04, 8.49125674e-05, -1.75276890e-05,
4.95665628e-05, 5.71734068e-05, 9.08977322e-06,
6.65816973e-05, 2.04723105e-04, 3.60286054e-05,
4.11557090e-05],
[-8.84664656e-07, -8.34866957e-06, 3.43158455e-05,
-3.51667683e-05, 2.18693890e-05, -6.28273606e-05,
1.84881869e-05, 3.60286054e-05, 1.43719872e-04,
9.74047167e-06],
[ 3.07873704e-04, 1.68615033e-04, 6.89819740e-05,
1.25940711e-04, 7.16343773e-05, 6.64415798e-06,
5.54885576e-05, 4.11557090e-05, 9.74047167e-06,
1.24753756e-04]])
</code></pre>
|
<p>Don't transpose <code>C</code> when you compute <code>X1</code>. With <code>lower=False</code>, <code>cholesky</code> returns an upper-triangular factor <code>C</code> with <code>C.T @ C == C0</code>, so for <code>X1_temp</code> with (approximately) identity covariance, <code>X1 = X1_temp @ C</code> has covariance <code>C.T @ C = C0</code>, whereas <code>X1_temp @ C.T</code> would have covariance <code>C @ C.T</code>, which is a different matrix.</p>
<pre><code>In [33]: C0
Out[33]:
array([[0.00119545, 0.00055428, 0.00094478, 0.00057466, 0.00039038,
0.0004846 , 0.00047505, 0.00039403, 0.00041767, 0.00053985],
[0.00055428, 0.00055869, 0.0005272 , 0.00046757, 0.0002733 ,
0.00037907, 0.00034639, 0.00030749, 0.00034975, 0.00041578],
[0.00094478, 0.0005272 , 0.00163294, 0.00049306, 0.00032083,
0.0004176 , 0.000403 , 0.00038588, 0.00037359, 0.00050818],
[0.00057466, 0.00046757, 0.00049306, 0.00161212, 0.00065667,
0.00080889, 0.00075579, 0.00048425, 0.00066279, 0.00044288],
[0.00039038, 0.0002733 , 0.00032083, 0.00065667, 0.00076096,
0.00054659, 0.00061203, 0.00032021, 0.00047247, 0.00025952],
[0.0004846 , 0.00037907, 0.0004176 , 0.00080889, 0.00054659,
0.00116606, 0.00056961, 0.00040392, 0.00053751, 0.00033647],
[0.00047505, 0.00034639, 0.000403 , 0.00075579, 0.00061203,
0.00056961, 0.0008417 , 0.0003747 , 0.00054147, 0.00033309],
[0.00039403, 0.00030749, 0.00038588, 0.00048425, 0.00032021,
0.00040392, 0.0003747 , 0.00042903, 0.00035186, 0.00030802],
[0.00041767, 0.00034975, 0.00037359, 0.00066279, 0.00047247,
0.00053751, 0.00054147, 0.00035186, 0.00061011, 0.00033744],
[0.00053985, 0.00041578, 0.00050818, 0.00044288, 0.00025952,
0.00033647, 0.00033309, 0.00030802, 0.00033744, 0.00048086]])
In [34]: C = cholesky(C0, lower=False)
In [35]: n = 40
In [36]: X1_temp = np.random.normal(0, 1, (n, 10)) # with mean 0 and std. dev. 1
In [37]: X1 = X1_temp @ C
In [38]: np.cov(X1, rowvar=False)
Out[38]:
array([[0.00170434, 0.00091094, 0.00134416, 0.00056663, 0.00063199,
0.00050691, 0.00052851, 0.00058113, 0.00070497, 0.00094468],
[0.00091094, 0.00076975, 0.00085689, 0.00042849, 0.00033691,
0.00022588, 0.00031262, 0.00043736, 0.00043965, 0.00068883],
[0.00134416, 0.00085689, 0.00177699, 0.00060582, 0.00059343,
0.00035828, 0.00047125, 0.00062899, 0.00069686, 0.00082439],
[0.00056663, 0.00042849, 0.00060582, 0.00107095, 0.0004217 ,
0.00037837, 0.00038153, 0.00043858, 0.00043721, 0.00037525],
[0.00063199, 0.00033691, 0.00059343, 0.0004217 , 0.00079374,
0.00042106, 0.00051381, 0.00035244, 0.00050983, 0.00024706],
[0.00050691, 0.00022588, 0.00035828, 0.00037837, 0.00042106,
0.00080684, 0.0003886 , 0.00021881, 0.00042829, 0.00023707],
[0.00052851, 0.00031262, 0.00047125, 0.00038153, 0.00051381,
0.0003886 , 0.00057768, 0.00029707, 0.00039979, 0.0002629 ],
[0.00058113, 0.00043736, 0.00062899, 0.00043858, 0.00035244,
0.00021881, 0.00029707, 0.00042199, 0.00034844, 0.00040239],
[0.00070497, 0.00043965, 0.00069686, 0.00043721, 0.00050983,
0.00042829, 0.00039979, 0.00034844, 0.00058689, 0.00043979],
[0.00094468, 0.00068883, 0.00082439, 0.00037525, 0.00024706,
0.00023707, 0.0002629 , 0.00040239, 0.00043979, 0.00077079]])
</code></pre>
<p>For testing, you'll get a more convincing result if you use more samples.</p>
<pre><code>In [39]: n = 400
In [40]: X1_temp = np.random.normal(0, 1, (n, 10)) # with mean 0 and std. dev. 1
In [41]: X1 = X1_temp @ C
In [42]: np.cov(X1, rowvar=False)
Out[42]:
array([[0.00123199, 0.00055531, 0.00096428, 0.0006604 , 0.00046758,
0.00060183, 0.00056823, 0.00041429, 0.00046345, 0.00056343],
[0.00055531, 0.00054127, 0.00055571, 0.00048491, 0.00028983,
0.00038863, 0.0003752 , 0.00029274, 0.0003503 , 0.0004217 ],
[0.00096428, 0.00055571, 0.00154136, 0.00062761, 0.00042062,
0.00050582, 0.0004616 , 0.00043355, 0.00044465, 0.00056019],
[0.0006604 , 0.00048491, 0.00062761, 0.00152187, 0.00065816,
0.0008483 , 0.0007723 , 0.00051182, 0.00069478, 0.00049743],
[0.00046758, 0.00028983, 0.00042062, 0.00065816, 0.00081247,
0.00057251, 0.00069877, 0.00033519, 0.00050191, 0.00030041],
[0.00060183, 0.00038863, 0.00050582, 0.0008483 , 0.00057251,
0.00130518, 0.00059381, 0.00044213, 0.00057237, 0.00039522],
[0.00056823, 0.0003752 , 0.0004616 , 0.0007723 , 0.00069877,
0.00059381, 0.00097244, 0.00039577, 0.00059776, 0.00038547],
[0.00041429, 0.00029274, 0.00043355, 0.00051182, 0.00033519,
0.00044213, 0.00039577, 0.00044904, 0.00036922, 0.00033238],
[0.00046345, 0.0003503 , 0.00044465, 0.00069478, 0.00050191,
0.00057237, 0.00059776, 0.00036922, 0.00062956, 0.00037302],
[0.00056343, 0.0004217 , 0.00056019, 0.00049743, 0.00030041,
0.00039522, 0.00038547, 0.00033238, 0.00037302, 0.00050632]])
</code></pre>
|
python|numpy|random|statistics|covariance
| 0
|
374,742
| 64,378,573
|
Compare sums of multiple pandas dataframes in an effective way
|
<p>I have multiple pandas dataframes (5) with some common index names. They have different sizes. I need to sum at least 5 different common <code>column names</code> (25 in total) from each dataframe and then compare the sums.</p>
<pre><code>Data:
df_files = [df1, df2, df3, df4, df5]
df_files
out:
[ z name ... a b
0 10 DAD ... 4 4
1 10 DAD ... 5 4
2 10 DAD ... 3 6
3 10 DAD ... 9 2
4 10 DAD ... 11 1
... ... ... ... ... ...
7495 <NA> NaN ... 2 0
7496 <NA> NaN ... 5 3
7497 <NA> NaN ... 3 1
7498 <NA> NaN ... 2 0
7499 <NA> NaN ... 4 3
[7500 rows x 35 columns] #The dataframes are like this type but some vary in size.
</code></pre>
<p>What I need is to sum some specific <code>common names</code> and then compare those sums to see if they match. If they do, print out an OK; if they do not, report which values do not match, e.g.: <em>the value of <code>"column name"</code> from <code>df3</code> and <code>df4</code> does not match the other values</em>, and show the expected common value (the one the majority agrees on) and, optionally, the columns that do match. Some data frames may not match each other at all; in that case the expected common value should be taken as the value that is repeated the most, and if none of them match, print out that the values need correction before proceeding and show the mismatching values.</p>
<p>I was begining with something like this:</p>
<pre><code> Example:
df = pd.concat([df1["a"].sum(), df2["a"].sum(), df3["a"].sum(), df4["a"].sum(), df5["a"].sum()])
df
out:
a a a a a
0 425 425 426 427 425
</code></pre>
<p>or maybe they can be compared as a list of integers.</p>
<p>I will appreciate your attention with this question. I hope I have been specific.</p>
|
<p>If I understand your problem correctly, you need to bring the names of the data frames and the respective columns into one place to compare the sums. In that case I usually use a dictionary to keep the variable names, something like this:</p>
<pre><code>df_files = {'df1':df1, 'df2':df2, 'df3':df3, 'df4':df4, 'df5':df5}
summary = pd.DataFrame()
for df in df_files.keys():
cols = list(summary)
summary= pd.concat([summary, df_files[df].sum()], axis=1)
summary.columns = cols + [df]
summary = summary.dropna()
</code></pre>
<p>The summary will be a data frame with common column names as index, and data frame names as columns. If you have only 5 dfs with 5 common column names it will be an easy job to observe the results. Here is a sample result I ran for 3 dfs:</p>
<pre><code> df1 df2 df3
a 6.0 10.0 6.0
b 15.0 14.0 15.0
</code></pre>
<p>But if the numbers grow, you can use the 'mode' of each row to find the most frequent result, and compare the rows (maybe divide all values and look for non-1 results)</p>
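<p>For the 'mode' idea, a minimal sketch (assuming the <code>summary</code> frame built above, with common column names as index and data frame names as columns) could be:</p>
<pre><code>expected = summary.mode(axis=1)[0]        # most frequent sum per common column name
mismatch = summary.ne(expected, axis=0)   # True where a data frame disagrees
for name, row in mismatch.iterrows():
    bad = row[row].index.tolist()
    if bad:
        print("'{}': {} do not match the expected value {}".format(name, bad, expected[name]))
    else:
        print("'{}': OK".format(name))
</code></pre>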
|
python|pandas|list|if-statement
| 1
|
374,743
| 64,609,003
|
Numpy vectorization instead of for loop
|
<p>I wrote a function which is too time consuming when used with for loops. It appends numpy vectors of shape <code>(10,)</code> as rows in each iteration. How can I use a vectorized numpy solution for the iterations to speed this up?</p>
<p>Any hint why the vstack-array solution below is even slower than the append-list solution?</p>
<p>TIA</p>
<pre><code>import numpy as np
import time
n_iterations = 1000
n_cols = 10
def sample_func():
    # Addition: please note: the random function is not important, it is only an example.
    # The real function is more complex and the for loops need to be replaced by a faster numpy solution.
    row = np.random.rand(n_cols)  # one row of n_cols random values
    return row
#list solution: too slow
start_time_1 = time.time()
result_list = []
for i in range(n_iterations):
result_row = sample_func()
result_list.append(np.sort(result_row))
print("Run time = {}".format(time.time() - start_time_1))
#array solution: too slow
start_time_2 = time.time()
result_array = np.empty([0,n_cols])
for i in range(n_iterations):
result_row = sample_func()
result_array = np.vstack([result_array, np.sort(result_row)])
print("Run time = {}".format(time.time() - start_time_2))
</code></pre>
|
<p>In general you don't want to append to <code>numpy</code> arrays. Re-allocating space for them is too time consuming. If you know <code>n_iterations</code>, you can allocate up-front like this:</p>
<pre><code>result_array = np.empty([n_iterations, n_cols])
for i in range(n_iterations):
result_array[i] = sample_func()
</code></pre>
<p>But you'll do much better "vectorizing" whatever is in <code>sample_func</code> to accept n-d input. <code>for</code> loops in <code>python</code> are slow. <code>numpy</code> gives you a lot of tricks to push your <code>for</code> loops into compiled <code>c</code>-code (called 'vectorizing'), but without knowing what's going on in the function we can't help you vectorize it.</p>
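<p>Purely as an illustration with the random example from the question (the real <code>sample_func</code> would need its own vectorized rewrite), generating and sorting all rows in a single call looks like this:</p>
<pre><code>import numpy as np

n_iterations, n_cols = 1000, 10

# one (n_iterations, n_cols) array in one call; np.sort with axis=1 sorts each row
result_array = np.sort(np.random.rand(n_iterations, n_cols), axis=1)
</code></pre>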
|
python|numpy|vectorization
| 1
|
374,744
| 64,377,601
|
Counting from a column in a Pandas Dataframe
|
<p>I am attempting to count the number of instances of an element in a column of a Pandas Dataframe based on a set of criteria. I am running into difficulty in a few places.</p>
<p>Here is what I have up to this point. It effectively reads the CSV, drops the duplicates, and sorts df2. I am performing all of these steps in order to isolate the criteria I want to use in the future. Frankly, this may even be an extra step I do not need.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
# importing all required modules numpy, pyplot, and pandas
df= pd.read_csv('file.csv')
# reading the CSV file as a pandas dataframe
df2 = df.drop_duplicates(subset="MRCEmp")
df2 = df2.sort_values(["CLNum"])
# creating duplicate dataframe eliminating duplicate pairs
# sorting df2 in ascending order by column "CLNum"
clmax = df2["CLNum"].max()
clmin = df2["CLNum"].min()
# creating variables as int to define the maximum and minimum of the "CLNum" column
for n in df2["CLNum"]:
if n not in df2["CLNum"]:
n = n + 1
elif n in df2["CLNum"]:
print(df2.loc[df2["CLNum"] == n])
n = n + 1
</code></pre>
<p>I should note that not all integers are represented in <code>df2["CLNum"]</code>; that is why I inserted the first for loop.</p>
<p>When running this script however, not all of the rows are displayed. <code>clmax = 728</code> and <code>clmin = 1</code>, but the final row displayed holds an n value of 283. I cannot find why not all rows are displayed.</p>
|
<p>Try pandas <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>value_counts</code></a> function</p>
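<p>A minimal sketch of what that could look like with the frame from the question (column name assumed to be <code>CLNum</code>):</p>
<pre><code>counts = df2["CLNum"].value_counts().sort_index()
print(counts)                       # how many rows exist for each CLNum value
# and if you still want to see the rows themselves for every CLNum that occurs:
for n, group in df2.groupby("CLNum"):
    print(group)
</code></pre>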
|
python|python-3.x|pandas|dataframe
| -1
|
374,745
| 64,176,002
|
fast fourier transform of csv data
|
<p>I have a csv file that contains time and torque data. <a href="https://pastebin.com/MAT2rG3U" rel="nofollow noreferrer">https://pastebin.com/MAT2rG3U</a> This data set is truncated because size limit.</p>
<p>I am trying to find the FFT of the data to find the frequency of a vibration.</p>
<p>Here is my code (here is the example I used <a href="https://stackoverflow.com/questions/59172407/fast-fourier-transform-in-python">Fast Fourier Transform in Python</a> ), it does not produce any results. I've researched many online resources and can not find my error</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
data = pd.read_csv('data.csv',index_col=0)
data = data['Torque'].astype(float).values
print(data)
N = data.shape[0] #number of elements
t = np.linspace(0, 300, N)
#t=np.arange(N)
s = data
fft = np.fft.fft(s)
fftfreq = np.fft.fftfreq(len(s))
T = t[1] - t[0]
print(T)
f = np.linspace(0, 1 / T, N)
plt.ylabel("Amplitude")
plt.xlabel("Frequency [Hz]")
plt.plot(fftfreq,fft)
#plt.xlim(0,100)
plt.show()
</code></pre>
|
<p>What you've posted works for me, but your data isn't valid for an FFT because the timesteps aren't consistent. That is, you don't have a well defined sample rate.</p>
<pre><code>data = pd.read_csv('torque_data.txt',index_col=0)
data = data['Torque'].astype(float).values
print(data)
N = data.shape[0] #number of elements
t = np.linspace(0, 300, N)
#t=np.arange(N)
s = data
fft = np.fft.fft(s)
fftfreq = np.fft.fftfreq(len(s))
T = t[1] - t[0]
print(T)
f = np.linspace(0, 1 / T, N)
plt.ylabel("Amplitude")
plt.xlabel("Frequency [Hz]")
plt.plot(fftfreq, np.absolute(fft))
#plt.xlim(0,100)
</code></pre>
<p><a href="https://i.stack.imgur.com/xyZ8F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xyZ8F.png" alt="enter image description here" /></a></p>
<p>There's probably something wrong with the data that you didn't include in the sample that's giving you the NaNs.</p>
<p>Still, in the data you provided, the sample rate isn't consistent, which is required for an FFT. To see this, plot a histogram of the time steps:</p>
<pre><code># read in the data like this to get the times
data = pd.read_csv('torque_data.txt')
time = data['Seconds'].astype(float).values
data = data['Torque'].astype(float).values
# now look at the timesteps
fig, axs = plt.subplots()
time_deltas = time[1:] - time[:-1]
h = axs.hist(time_deltas, bins=50)
</code></pre>
<p><a href="https://i.stack.imgur.com/VhqLW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VhqLW.png" alt="enter image description here" /></a></p>
<p>Because so many of the timesteps have different values, I'd be worried about trusting the FFT. (When I first looked at your data, most of the earlier points seemed to have the 0.004s timestep, so I wonder if your data collection is changing over time and not just randomly, but whatever, you need to sort this out too.) There are solutions to this, like interpolation/resampling, or down-sampling, but one can't reliably trust the FFT results without a fix.</p>
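<p>If you want to try one of those fixes, a minimal sketch of resampling onto a uniform grid with <code>np.interp</code> before taking the FFT (reusing the <code>time</code>, <code>data</code> and <code>time_deltas</code> variables from above) might look like:</p>
<pre><code>dt = np.median(time_deltas)                     # pick a nominal sample period
t_uniform = np.arange(time[0], time[-1], dt)    # uniform time grid
s_uniform = np.interp(t_uniform, time, data)    # linearly resample the torque signal
fft = np.fft.fft(s_uniform)
fftfreq = np.fft.fftfreq(len(s_uniform), d=dt)  # frequencies in Hz
plt.plot(fftfreq, np.abs(fft))
</code></pre>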
|
python|pandas|numpy|matplotlib|fft
| 1
|
374,746
| 64,402,771
|
How to load numpy array with certain columns as specific type
|
<p>I tried following:</p>
<pre><code>>>> arr2 = [[0, 0, 0, -0.9, 0.3], [0, 0, 1, 0.9, 0.6], [0, 1, 0, -0.2, 0.6], [0, 1, 1, 0.8, 0.3], [1, 0, 1, 0.2, 1.0], [1, 1, 0, -0.8, 1.0]]
>>> narr2 = np.array(arr2)
>>> narr2
array([[ 0. , 0. , 0. , -0.9, 0.3],
[ 0. , 0. , 1. , 0.9, 0.6],
[ 0. , 1. , 0. , -0.2, 0.6],
[ 0. , 1. , 1. , 0.8, 0.3],
[ 1. , 0. , 1. , 0.2, 1. ],
[ 1. , 1. , 0. , -0.8, 1. ]])
</code></pre>
<p>How can I make first three column have type <code>int</code>? That is how can I get following:</p>
<pre><code>>>> narr2
array([[ 0 , 0 , 0 , -0.9, 0.3],
[ 0 , 0 , 1 , 0.9, 0.6],
[ 0 , 1 , 0 , -0.2, 0.6],
[ 0 , 1 , 1 , 0.8, 0.3],
[ 1 , 0 , 1 , 0.2, 1. ],
[ 1 , 1 , 0 , -0.8, 1. ]])
</code></pre>
<p>I tried following:</p>
<pre><code>>>> narr2 = np.array(arr2,dtype='i4,i4,i4,f8,f8')
>>> narr2
array([[(0, 0, 0, 0. , 0. ), (0, 0, 0, 0. , 0. ),
(0, 0, 0, 0. , 0. ), (0, 0, 0, -0.9, -0.9),
(0, 0, 0, 0.3, 0.3)],
[(0, 0, 0, 0. , 0. ), (0, 0, 0, 0. , 0. ),
(1, 1, 1, 1. , 1. ), (0, 0, 0, 0.9, 0.9),
(0, 0, 0, 0.6, 0.6)],
[(0, 0, 0, 0. , 0. ), (1, 1, 1, 1. , 1. ),
(0, 0, 0, 0. , 0. ), (0, 0, 0, -0.2, -0.2),
(0, 0, 0, 0.6, 0.6)],
[(0, 0, 0, 0. , 0. ), (1, 1, 1, 1. , 1. ),
(1, 1, 1, 1. , 1. ), (0, 0, 0, 0.8, 0.8),
(0, 0, 0, 0.3, 0.3)],
[(1, 1, 1, 1. , 1. ), (0, 0, 0, 0. , 0. ),
(1, 1, 1, 1. , 1. ), (0, 0, 0, 0.2, 0.2),
(1, 1, 1, 1. , 1. )],
[(1, 1, 1, 1. , 1. ), (1, 1, 1, 1. , 1. ),
(0, 0, 0, 0. , 0. ), (0, 0, 0, -0.8, -0.8),
(1, 1, 1, 1. , 1. )]],
dtype=[('f0', '<i4'), ('f1', '<i4'), ('f2', '<i4'), ('f3', '<f8'), ('f4', '<f8')])
</code></pre>
<p>As can be seen, I am not getting the desired output. Seems that I am not understanding how do I specify type while creating array and where I am going wrong.</p>
|
<p>I propose that you convert lists to tuples, and then assign the data types. Here's my solution:</p>
<pre><code>import numpy as np
arr2 = [[0, 0, 0, -0.9, 0.3], [0, 0, 1, 0.9, 0.6],
[0, 1, 0, -0.2, 0.6], [0, 1, 1, 0.8, 0.3], [1, 0, 1, 0.2, 1.0], [1, 1, 0, -0.8, 1.0]]
tupp2 = [tuple(l) for l in arr2]
datatype = [('A', int), ('B', int), ('C', int), ('D', float), ('E', float)]  # plain int/float; np.int and np.float are deprecated aliases
narr2 = np.array(tupp2, dtype=datatype)
</code></pre>
<p>Checking the data types for each column:</p>
<pre><code>for i in narr2[0]:
print(i.dtype)
</code></pre>
<p>Gives:</p>
<pre><code>int32
int32
int32
float64
float64
</code></pre>
|
python|arrays|numpy
| 0
|
374,747
| 64,255,616
|
How can I fill data frames with NAN with same values of previous data frames from the same list
|
<p>I have a list of data frames <code>A</code>. Most of them are NaN data frames and some of them are not. I would like to fill each NaN data frame with the values of the previous data frame in the list that does not contain NaN.
Here's a small example:</p>
<pre><code>A=[]
data = {'set_of_numbers': [1,2,3,4,4,5,9]}
df1 = pd.DataFrame(data,columns=['set_of_numbers'])
data2 = {'set_of_numbers': [0,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan]}
df2 = pd.DataFrame(data2,columns=['set_of_numbers'])
data3 = {'set_of_numbers': [3,3,3,8,4,5,8]}
df3 = pd.DataFrame(data3,columns=['set_of_numbers'])
data4 = {'set_of_numbers': [0,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan]}
df4 = pd.DataFrame(data4,columns=['set_of_numbers'])
A.append(df1)
A.append(df2)
A.append(df3)
A.append(df4)
A
</code></pre>
<p>I would like to have the output shown in the second picture, where all NaN data frames are filled with the values of the previous data frames.</p>
<p><a href="https://i.stack.imgur.com/ADjrZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ADjrZ.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/4gH0Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4gH0Y.png" alt="I would like to have this output:" /></a></p>
|
<p>If I understand correctly:</p>
<pre><code>for i, df in enumerate(A):
df[df.isnull()] = A[i-1]
</code></pre>
<p>or if you wish to change the dtype of previously non-nan df:</p>
<pre><code>for i, df in enumerate(A):
if df.isnull().all().all():
A[i] = A[i-1].copy()
</code></pre>
<p>per OP's <strong>EDIT</strong> on question:</p>
<pre><code>for i, df in enumerate(A):
if df.isnull().any().any():
A[i] = A[i-1].copy()
</code></pre>
<p>output:</p>
<pre><code>[ set_of_numbers
0 1
1 2
2 3
3 4
4 4
5 5
6 9, set_of_numbers
0 1
1 2
2 3
3 4
4 4
5 5
6 9, set_of_numbers
0 3
1 3
2 3
3 8
4 4
5 5
6 8, set_of_numbers
0 3
1 3
2 3
3 8
4 4
5 5
6 8]
</code></pre>
|
python|pandas|numpy|dataframe
| 1
|
374,748
| 47,703,374
|
2 groupby in the same dataframe, it is possible?
|
<p>Given the following df, I want to compute <code>df = df.groupby(['id','quarter'])['jobs'].mean()</code>, but at the same time that dataframe must have the mean of jobs by id and year in another column. </p>
<pre><code> id year quarter month jobs
1 2007 1 1 10
1 2007 1 2 12
1 2007 1 3 12
1 2007 2 4 12
1 2007 2 5 12
1 2007 2 6 13
1 2007 3 7 14
1 2007 3 8 9
1 2007 3 9 12
1 2007 4 10 15
1 2007 4 12 18
2 2007 1 1 15
2 2007 1 2 15
2 2007 1 3 16
2 2007 2 4 17
2 2007 2 5 18
2 2007 2 6 10
2 2007 3 7 12
2 2007 3 8 12
2 2007 3 9 12
2 2007 4 10 12
2 2007 4 11 13
2 2007 4 12 14
</code></pre>
<p>result should look like this </p>
<pre><code> id year quarter jobs jobs_year
1 2007 1 (mean quarter) (mean year)
1 2007 2 (mean quarter) (mean year)
1 2007 3 (mean quarter) (mean year)
1 2007 4 (mean quarter) (mean year)
2 2007 1 (mean quarter) (mean year)
2 2007 2 (mean quarter) (mean year)
2 2007 3 (mean quarter) (mean year)
2 2007 4 (mean quarter) (mean year)
</code></pre>
|
<p>Using <code>transform</code> then <code>drop_duplicates</code></p>
<pre><code>df['jobs1']=df.groupby(['id','quarter'])['jobs'].transform('mean')
df['jobs_year']=df.groupby(['id','year'])['jobs'].transform('mean')
df=df.drop_duplicates(['id','year','quarter'])
df
Out[305]:
id year quarter month jobs jobs1 jobs_year
0 1 2007 1 1 10 11.333333 12.636364
3 1 2007 2 4 12 12.333333 12.636364
6 1 2007 3 7 14 11.666667 12.636364
9 1 2007 4 10 15 16.500000 12.636364
11 2 2007 1 1 15 15.333333 13.833333
14 2 2007 2 4 17 15.000000 13.833333
17 2 2007 3 7 12 12.000000 13.833333
20 2 2007 4 10 12 13.000000 13.833333
</code></pre>
|
python|pandas|numpy
| 3
|
374,749
| 47,585,653
|
returning rows within range in pandas MultiIndex
|
<p>I have a dataframe that looks like:</p>
<pre><code>               count
year person
2008 a.smith       1
     b.johns       2
     c.gilles      3
2009 a.smith       4
     b.johns       3
     c.gilles      2
</code></pre>
<p>in which both <code>year</code> and <code>person</code> are part of the index. I'd like to return all rows with <code>a.smith</code> for all years. I can locate a count for a specific year with <code>df.loc[(2008, 'a.smith')]</code>, which outputs 1. But if I try <code>df.loc[(:, 'a.smith')]</code>, I get <code>SyntaxError: invalid syntax</code>. </p>
<p>How do I use <code>df.loc</code> for a range of index values in a MultiIndex?</p>
|
<p>Using <code>pd.IndexSlice</code></p>
<pre><code>idx = pd.IndexSlice
df.loc[idx[:,'a.smith'],:]
Out[200]:
count
year person
2008 a.smith 1
2009 a.smith 4
</code></pre>
<p>Data Input </p>
<pre><code>df
Out[211]:
count
year person
2008 a.smith 1
b.johns 2
c.gilles 3
2009 a.smith 4
b.johns 3
c.gilles 2
</code></pre>
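<p>A side note: <code>DataFrame.xs</code> can do the same cross-section selection without building a slicer (assuming the index level is named <code>person</code> as in the example):</p>
<pre><code>df.xs('a.smith', level='person')
      count
year
2008      1
2009      4
</code></pre>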
|
python|pandas
| 0
|
374,750
| 47,796,207
|
subsetting affects .view(np.float64) behaviour
|
<p>I'm trying to use some sklearn estimators for classification on the coefficients of a fast Fourier transform (technically a Discrete Fourier Transform). I obtain a numpy array X_c as the output of np.fft.fft(X) and I want to transform it into a real numpy array X_r, with each (complex) column of the original X_c transformed into two (real/float) columns in X_r, i.e. the shape goes from (r, c) to (r, 2c). So I use .view(np.float64), and it works at first.</p>
<p>The problem is that if I first decide to keep only some coefficients of the original complex array with X_c2 = X_c[:, range(3)] and then do the same thing as before, then instead of having the number of columns doubled, I obtain the number of rows doubled (the imaginary part of each element is put in a new row below the original).</p>
<p>I really don't understand why this happens. </p>
<p>To make myself clearer, here is a toy example:</p>
<pre><code>import numpy as np
# I create a complex array
X_c = np.arange(8, dtype = np.complex128).reshape(2, 4)
print(X_c.shape) # -> (2, 4)
# I use .view to transform it into something real and it works
# the way I want it.
X_r = X_c.view(np.float64)
print(X_r.shape) # -> (2, 8)
# Now I subset the array.
indices_coef = range(3)
X_c2 = X_c[:, indices_coef]
print(X_c2.shape) # -> (2, 3)
X_r2 = X_c2.view(np.float64)
# In the next line I obtain (4, 3), when I was expecting (2, 6)...
print(X_r2.shape) # -> (4, 3)
</code></pre>
<p>Does anyone see a reason for this difference of behavior?</p>
|
<p>I get a warning:</p>
<pre><code>In [5]: X_c2 = X_c[:,range(3)]
In [6]: X_c2
Out[6]:
array([[ 0.+0.j, 1.+0.j, 2.+0.j],
[ 4.+0.j, 5.+0.j, 6.+0.j]])
In [7]: X_c2.view(np.float64)
/usr/local/bin/ipython3:1: DeprecationWarning: Changing the shape of non-C contiguous array by
descriptor assignment is deprecated. To maintain
the Fortran contiguity of a multidimensional Fortran
array, use 'a.T.view(...).T' instead
#!/usr/bin/python3
Out[7]:
array([[ 0., 1., 2.],
[ 0., 0., 0.],
[ 4., 5., 6.],
[ 0., 0., 0.]])
In [12]: X_c2.strides
Out[12]: (16, 32)
In [13]: X_c2.flags
Out[13]:
C_CONTIGUOUS : False
F_CONTIGUOUS : True
</code></pre>
<p>So this copy (or is a view?) is Fortran order. The recommended <code>X_c2.T.view(float).T</code> produces the same 4x3 array without the warning.</p>
<p>As your first view shows, a complex array has the same data layout as twice the number of floats. </p>
<p>I've seen funny shape behavior when trying to <code>view</code> a structured array. I'm wondering if the <code>complex</code> dtype is behaving much like a <code>dtype('f8,f8')</code> array. </p>
<hr>
<p>If I change your <code>X_c2</code> so it is a copy, I get the expected behavior</p>
<pre><code>In [19]: X_c3 = X_c[:,range(3)].copy()
In [20]: X_c3.flags
Out[20]:
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
In [21]: X_c3.strides
Out[21]: (48, 16)
In [22]: X_c3.view(float)
Out[22]:
array([[ 0., 0., 1., 0., 2., 0.],
[ 4., 0., 5., 0., 6., 0.]])
</code></pre>
<p>That's reassuring. But I'm puzzled as to why the <code>[:, range(3)]</code> indexing creates an F-order view. That should be advanced indexing.</p>
<p>And indeed, a true slice does not allow this view</p>
<pre><code>In [28]: X_c[:,:3].view(np.float64)
---------------------------------------------------------------------------
ValueError: new type not compatible with array.
</code></pre>
<p>So the range indexing has created some sort of hybrid object.</p>
|
python|arrays|numpy|casting|subset
| 0
|
374,751
| 47,600,891
|
Slice dataframe for each unique pair of columns
|
<p>I want to split a pandas dataframe according to unique pairs taken from two columns, then select the rows relative to that pair and project the remaining columns.</p>
<pre><code>df: Col1 Col2 Col3 Col4
1 1 a 100
1 2 b 200
1 2 c 300
1 2 d 400
3 4 e 500
3 4 f 600
</code></pre>
<p>Then</p>
<pre><code>idxs = (...df...) # expression on df
for idx in idxs:
print(idx)
group = (...df...) # expression on df
print(group)
</code></pre>
<p>Should yield something like</p>
<pre><code>(1,1)
Col3 Col4
a 100
(1,2)
Col3 Col4
b 200
c 300
d 400
(3,4)
Col3 Col4
e 500
f 600
</code></pre>
<p>The selecting and projection part seems easy, but getting the unique pairs doesn't. How can I achieve this reasonably efficiently?</p>
|
<p>Using <code>groupby</code></p>
<pre><code>for x,y in df.groupby(['Col1','Col2']):
print(x)
print(y)
(1, 1)
Col1 Col2 Col3 Col4
0 1 1 a 100
(1, 2)
Col1 Col2 Col3 Col4
1 1 2 b 200
2 1 2 c 300
3 1 2 d 400
(3, 4)
Col1 Col2 Col3 Col4
4 3 4 e 500
5 3 4 f 600
</code></pre>
|
python|pandas
| 1
|
374,752
| 47,883,487
|
pandas: applying function over each row of Dataframe
|
<p>I have a pandas DataFrame which contains 3 columns:</p>
<pre><code>| val1 | val2 | val3 |
|--------------------------|
| Nike | NaN | NaN |
| Men | Adidas | NaN |
| Puma | Red | Women |
</code></pre>
<p>and 3 lists:</p>
<pre><code>Brands = ['Adidas', 'Nike', 'Puma']
Gender = ['Men', 'Women']
Color=['Red', 'Blue', 'Green']
</code></pre>
<p>I'm trying to apply a function to each row to check each value and put it in a new column, depending on the boolean value returned by the function.</p>
<pre><code>| val1 | val2 | val3 | brand | gender | color
|----------------------------------------------------
| Nike | NaN | NaN | Nike | NaN | NaN
| Men | Adidas | NaN | Adidas | Men | NaN
| Puma | Red | Women | Puma | Women | Red
</code></pre>
<p>I'm using lists to illustrate my issue but in my script, I'm using enchant library to check the existence of a value in my dictionary.</p>
<p>Here's what I already tried:</p>
<pre><code>ref_brands = enchant.request_pwl_dict("ref_brands.txt")
brands_checker = SpellChecker(ref_brands)
print brands_checker.check('Puma')
> True
print brands_checker.check('Men')
> False
def my_cust_check(x, checker):
l = x.tolist()
for e in iter(l):
try:
if checker.check(e.strip().encode('utf-8')) is True:
return e.strip()
else:
return None
except:
return None
df_query_split['brand'] = df_query_split.apply(my_cust_check,checker=brand_checker, axis=1)
df_query_split['gender'] = df_query_split.apply(my_cust_check,checker=gender_checker, axis=1)
df_query_split['color'] = df_query_split.apply(my_cust_check,checker=color_checker, axis=1)
</code></pre>
|
<p>You can use:</p>
<pre><code>df['brand'] = df[df.isin(Brands)].ffill(axis=1).iloc[:, -1]
df['gender'] = df[df.isin(Gender)].ffill(axis=1).iloc[:, -1]
df['color'] = df[df.isin(Color)].ffill(axis=1).iloc[:, -1]
print (df)
val1 val2 val3 brand gender color
0 Nike NaN NaN Nike NaN NaN
1 Men Adidas NaN Adidas Men NaN
2 Puma Red Women Puma Women Red
</code></pre>
<p>Detail:</p>
<p>First compare by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isin.html" rel="nofollow noreferrer"><code>DataFrame.isin</code></a>:</p>
<pre><code>print (df.isin(Brands))
val1 val2 val3
0 True False False
1 False True False
2 True False False
</code></pre>
<p>Extract values of <code>True</code>s:</p>
<pre><code>print (df[df.isin(Brands)])
val1 val2 val3
0 Nike NaN NaN
1 NaN Adidas NaN
2 Puma NaN NaN
</code></pre>
<p>Replace <code>NaN</code>s by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>fillna</code></a> with forward filling (<code>ffill</code>):</p>
<pre><code>print (df[df.isin(Brands)].ffill(axis=1))
val1 val2 val3
0 Nike Nike Nike
1 NaN Adidas Adidas
2 Puma Puma Puma
</code></pre>
<p>Select the last column by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>iloc</code></a>:</p>
<pre><code>print (df[df.isin(Brands)].ffill(1).iloc[:, -1])
0 Nike
1 Adidas
2 Puma
Name: val3, dtype: object
</code></pre>
|
python|python-2.7|pandas|dataframe|apply
| 0
|
374,753
| 47,784,388
|
How to prepare data.frame for keras using tensoflow CNN in r
|
<p>I have been trying to prepare a training data set for a CNN in keras, but I can't find a way to properly set up the data. </p>
<p>In the keras <a href="https://keras.rstudio.com/articles/examples/mnist_cnn.html" rel="nofollow noreferrer">CNN example</a>, they use the MNIST data set, in which:</p>
<pre><code>library(keras)
img_rows <- 28
img_cols <- 28
mnist <- dataset_mnist()
x_train <- mnist$train$x
x_train <- array_reshape(x_train, c(nrow(x_train), img_rows, img_cols, 1))
</code></pre>
<p>Which outputs an array with this dim:</p>
<pre><code>class(x_train)
[1] "array"
dim(x_train)
[1] 60000 28 28 1
</code></pre>
<p>I have a dataframe like this:</p>
<pre><code>x = data.frame(c(1,10,19,28),c(2,11,20,29),c(3,12,21,30),c(4,13,22,31),c(5,14,23,32),c(6,15,24,33),c(7,16,25,34),c(8,17,26,35),c(9,18,27,36))
</code></pre>
<p>Each row represents a 3x3 image like this:</p>
<pre><code>1 2 3
4 5 6
7 8 9
</code></pre>
<p>I am trying this:</p>
<pre><code>x = as.integer(unlist(x))
x = array_reshape(x, c(4,3,3, 1))
</code></pre>
<p>This returns 3 matrices with scrambled numbers. How can I properly transform my data.frame for a CNN in keras? </p>
|
<p>This might help:</p>
<pre><code># Create an empty array the size you want
x_array <- array(NA, dim = c(3, 3, length(x)))
# Loop over each object of your list and fill the array
for (i in seq_along(x)) {
x_array[,, i] <- x[[i]]
}
</code></pre>
|
r|tensorflow|keras
| 0
|
374,754
| 47,972,667
|
Importing pandas.io.data
|
<p>I am following along with this tutorial: <a href="https://pythonprogramming.net/data-analysis-python-pandas-tutorial-introduction/" rel="nofollow noreferrer">https://pythonprogramming.net/data-analysis-python-pandas-tutorial-introduction/</a></p>
<p>He suggests the following import:</p>
<pre><code>import pandas.io.data as web
</code></pre>
<p>so that I can implement:</p>
<pre><code>df = web.DataReader("XOM", "yahoo", start, end)
</code></pre>
<p>However, this is for Python 2.7 and I am using Python3. I have Googled this question and found some results, but can't make it work. Can anyone assist me?</p>
|
<p><strong>Update:</strong> </p>
<p>As mentioned by wilkas, now you might need to do </p>
<pre><code>import pandas_datareader.data as web
</code></pre>
<hr>
<p>I am assuming that you are using the latest version of the package. Check out the newest documentation at <a href="https://pandas-datareader.readthedocs.io/en/latest/" rel="nofollow noreferrer">https://pandas-datareader.readthedocs.io/en/latest/</a> </p>
<p>Let me quote the documentation:</p>
<blockquote>
<pre><code>Usage
</code></pre>
<p>Starting in 0.19.0, pandas no longer supports pandas.io.data or
pandas.io.wb, so you must replace your <code>imports from pandas.io</code> with
those <code>from pandas_datareader</code>:</p>
</blockquote>
<pre><code>from pandas.io import data, web # <- Don't use these Now.
from pandas_datareader import data, web # <- use this.
</code></pre>
<p>Thus, your import statement should be </p>
<pre><code>from pandas_datareader import web
</code></pre>
<p>Then you can implement</p>
<pre><code>f = web.DataReader("F", 'yahoo', start, end)
</code></pre>
<p>See their document to use Yahoo data from <a href="https://pandas-datareader.readthedocs.io/en/latest/remote_data.html#yahoo-finance" rel="nofollow noreferrer">HERE</a></p>
|
python|python-3.x|pandas
| 3
|
374,755
| 47,587,595
|
Google Cloud ML: Use Nightly TF Import Error No Module tensorflow
|
<p>I want to train the NMT model from Google on Google Cloud ML.
<a href="https://github.com/tensorflow/nmt" rel="nofollow noreferrer">NMT Model</a></p>
<p>Now I put all input data in a bucket and downloaded the git repository.
The model needs the nightly version of TensorFlow, so I defined it in setup.py. When I use the CPU version tf-nightly==1.5.0-dev20171115 and run the following command locally on GCP, it works.</p>
<h2>Train local on google:</h2>
<pre><code>gcloud ml-engine local train --package-path nmt/ \
--module-name nmt.nmt \
-- --src=en --tgt=de \
--hparams_path=$HPARAMAS_PATH \
--out_dir=$OUTPUT_DIR \
--vocab_prefix=$VOCAB_PREFIX \
--train_prefix=$TRAIN_PREFIX \
--dev_prefix=$DEV_PREFIX \
--test_prefix=$TEST_PREFIX
</code></pre>
<p>Now, when I use the GPU version with the following command, I get this error message a few minutes after submitting the job.</p>
<h2>Train on cloud</h2>
<pre><code>gcloud ml-engine jobs submit training $JOB_NAME \
--runtime-version 1.2 \
--job-dir $JOB_DIR \
--package-path nmt/ \
--module-name nmt.nmt \
--scale-tier BASIC_GPU \
--region $REGION \
-- --src=en --tgt=de \
--hparams_path=$HPARAMAS_PATH \
--out_dir=$OUTPUT_DIR \
--vocab_prefix=$VOCAB_PREFIX \
--train_prefix=$TRAIN_PREFIX \
--dev_prefix=$DEV_PREFIX \
--test_prefix=$TEST_PREFIX
</code></pre>
<p>Error:</p>
<pre><code>import tensorflow as tf
ImportError: No module named tensorflow
</code></pre>
<p>setup.py:</p>
<pre><code>from setuptools import find_packages
from setuptools import setup
REQUIRED_PACKAGES = ['tf-nightly-gpu==1.5.0-dev20171115']
setup(
name="nmt",
install_requires=REQUIRED_PACKAGES,
packages=find_packages(),
include_package_data=True,
version='0.1.2'
)
</code></pre>
<p>Thank you all in advance
Markus</p>
<h3>Update:</h3>
<p>I have found a note on
<a href="https://cloud.google.com/ml-engine/docs/versioning" rel="nofollow noreferrer">GCP docs</a>
Note: Training with TensorFlow versions 1.3+ is limited to CPUs only. See the Cloud ML Engine release notes for updates.</p>
<p>So it seems it doesn't work currently; I think I have to go with Compute Engine instead.</p>
<p>Or is there any hack to got it working?</p>
<p>However thank you for your help</p>
|
<p>TensorFlow 1.5 might need a newer version of CUDA (i.e., CUDA 9), but the version CloudML Engine installed is CUDA 8. Can you please try to use TensorFlow 1.4 instead, which works on CUDA 8? Please tell us if 1.4 works for you here or send us an email via cloudml-feedback@google.com</p>
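<p>If it helps, the only change that should be needed in the question's <code>setup.py</code> is the pinned package (this assumes the CUDA 8 compatible 1.4.0 release; treat it as a sketch):</p>
<pre><code>REQUIRED_PACKAGES = ['tensorflow-gpu==1.4.0']  # instead of tf-nightly-gpu==1.5.0-dev20171115
</code></pre>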
|
tensorflow|google-cloud-ml
| 0
|
374,756
| 47,802,612
|
weird behavior of numpy.histogram / random numbers in numpy?
|
<p>I stumbled upon some peculiar behavior of random numbers in Python; specifically, I use the module numpy.random.</p>
<p>Consider the following expression:</p>
<pre><code>n = 50
N = 1000
np.histogram(np.sum(np.random.randint(0, 2, size=(n, N)), axis=0), bins=n+1)[0]
</code></pre>
<p>In the limit of large <code>N</code> I would expect a binomial distribution (for the interested reader, this simulates the <a href="https://en.wikipedia.org/wiki/Ehrenfest_model" rel="nofollow noreferrer">Ehrenfest model</a>) and for large <code>n</code> a normal distribution. A typical output however, looks like this:</p>
<blockquote>
<p>array([<br>
1, 0, 0, 1, 0, 2, 0, 1, 0, 15, 0,<br>
12, 0, 18, 0, 39, 0, 64, 0, 62, 0, 109,<br>
0, 97, 0, 107, 0, 114, 0, 102, 0, 92, 0,<br>
55, 0, 46, 0, 35, 0, 10, 0, 9, 0, 4,<br>
0, 0, 0, 3, 0, 1, 1<br>
])</p>
</blockquote>
<p>With the statement from above, I can't explain the occurrence of the zeros in the histogram - am I missing something obvious here?</p>
|
<p>You're using <code>histogram</code> wrong. The bins aren't where you think they are. They don't go from 0 to 50; they go from the minimum input value to the maximum input value. The 0s represent bins that lie entirely between two integers.</p>
<p>Try it with <code>numpy.bincount</code>:</p>
<pre><code>In [31]: n = 50
In [32]: N = 5000
In [33]: np.bincount(np.sum(np.random.randint(0, 2, size=(n, N)), axis=0))
Out[33]:
array([ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 7, 13, 22, 46, 75, 126, 220, 305, 367, 461, 550, 578,
517, 471, 438, 314, 189, 146, 76, 50, 17, 9, 2, 1])
</code></pre>
|
python|numpy|random|statistics
| 5
|
374,757
| 47,836,295
|
Tensorflow. ValueError: The two structures don't have the same number of elements
|
<p>This is my current code for implementing an encoder LSTM using <code>raw_rnn</code>. This question is also related to another question I asked before (<a href="https://stackoverflow.com/questions/47835350/tensorflow-raw-rnn-retrieve-tensor-of-shape-batch-x-dim-from-embedding-matrix">Tensorflow raw_rnn retrieve tensor of shape BATCH x DIM from embedding matrix</a>).
When I run the following code I get the following error:</p>
<blockquote>
<p>ValueError: The two structures don't have the same number of elements.</p>
<p>First structure (1 elements): None</p>
<p>Second structure (2 elements): LSTMStateTuple(c=64, h=64)</p>
</blockquote>
<p>The error occurs on the line: <code>encoder_outputs_ta, encoder_final_state, _ = tf.nn.raw_rnn(cell, loop_fn=reader_loop)</code></p>
<pre><code>import tensorflow as tf
import numpy as np
batch_size, max_time, input_embedding_size = 5, 10, 16
vocab_size, num_units = 50, 64
encoder_inputs = tf.placeholder(shape=(None, None), dtype=tf.int32, name='encoder_inputs')
encoder_inputs_length = tf.placeholder(shape=(None,), dtype=tf.int32, name='encoder_inputs_length')
embeddings = tf.Variable(tf.random_uniform([vocab_size + 2, input_embedding_size], -1.0, 1.0),
dtype=tf.float32, name='embeddings')
encoder_inputs_embedded = tf.nn.embedding_lookup(embeddings, encoder_inputs)
cell = tf.contrib.rnn.LSTMCell(num_units)
W = tf.Variable(tf.random_uniform([num_units, vocab_size], -1, 1), dtype=tf.float32, name='W_reader')
b = tf.Variable(tf.zeros([vocab_size]), dtype=tf.float32, name='b_reader')
with tf.variable_scope('ReaderNetwork'):
def loop_fn_initial():
init_elements_finished = (0 >= encoder_inputs_length)
init_input = cell.zero_state(batch_size, dtype=tf.float32)
init_cell_state = None
init_cell_output = None
init_loop_state = None
return (init_elements_finished, init_input,
init_cell_state, init_cell_output, init_loop_state)
def loop_fn_transition(time, previous_output, previous_state, previous_loop_state):
def get_next_input():
return tf.ones([batch_size, input_embedding_size], dtype=tf.float32) # TODO replace with value from embeddings
elements_finished = (time >= encoder_inputs_length)
finished = tf.reduce_all(elements_finished) # boolean scalar
next_input = tf.cond(finished,
true_fn=lambda: tf.zeros([batch_size, input_embedding_size], dtype=tf.float32),
false_fn=get_next_input)
state = previous_state
output = previous_output
loop_state = None
return elements_finished, next_input, state, output, loop_state
def loop_fn(time, previous_output, previous_state, previous_loop_state):
if previous_state is None: # time = 0
return loop_fn_initial()
return loop_fn_transition(time, previous_output, previous_state, previous_loop_state)
reader_loop = loop_fn
encoder_outputs_ta, encoder_final_state, _ = tf.nn.raw_rnn(cell, loop_fn=reader_loop)
outputs = encoder_outputs_ta.stack()
def next_batch():
return {
encoder_inputs: np.random.random((batch_size, max_time)),
encoder_inputs_length: [max_time] * batch_size
}
init = tf.global_variables_initializer()
with tf.Session() as s:
s.run(init)
outs = s.run([outputs], feed_dict=next_batch())
print len(outs), outs[0].shape
</code></pre>
|
<p>Resolved the problem by changing initial state and input:</p>
<blockquote>
<p>init_input = tf.zeros([batch_size, input_embedding_size], dtype=tf.float32)</p>
<p>init_cell_state = cell.zero_state(batch_size, tf.float32)</p>
</blockquote>
<pre><code>def loop_fn_initial():
init_elements_finished = (0 >= encoder_inputs_length)
init_input = tf.zeros([batch_size, input_embedding_size], dtype=tf.float32)
init_cell_state = cell.zero_state(batch_size, tf.float32)
init_cell_output = None
init_loop_state = None
return (init_elements_finished, init_input,
init_cell_state, init_cell_output, init_loop_state)
</code></pre>
|
python|machine-learning|tensorflow|nlp|lstm
| 2
|
374,758
| 47,901,313
|
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'FertileStatsResourceHandleOp'
|
<p>I am trying to run an example for TensorFlow but it does not work because of the exception below. I have no idea how to solve it.</p>
<p>Probably I need to install some packages, or this example is not compatible with TensorFlow 1.4. I am using Windows 10, no CUDA, Python 3.6.</p>
<p>Does it require another version of TensorFlow, or is it not supported on Windows 10?</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/Cezary Wagner/PycharmProjects/tf-xor/src/s02_forest_xor.py", line 32, in <module>
forest_graph = tensor_forest.RandomForestGraphs(hparams)
File "C:\root\Python36-64\lib\site-packages\tensorflow\contrib\tensor_forest\python\tensor_forest.py", line 376, in __init__
tree_variables_class=tree_variables_class)
File "C:\root\Python36-64\lib\site-packages\tensorflow\contrib\tensor_forest\python\tensor_forest.py", line 350, in __init__
self.variables.append(tree_variables_class(params, i, training))
File "C:\root\Python36-64\lib\site-packages\tensorflow\contrib\tensor_forest\python\tensor_forest.py", line 318, in __init__
params, '', self.get_tree_name('stats', tree_num))
File "C:\root\Python36-64\lib\site-packages\tensorflow\contrib\tensor_forest\python\ops\stats_ops.py", line 102, in fertile_stats_variable
container, shared_name=name, name=name)
File "C:\root\Python36-64\lib\site-packages\tensorflow\contrib\tensor_forest\python\ops\gen_stats_ops.py", line 134, in fertile_stats_resource_handle_op
shared_name=shared_name, name=name)
File "C:\root\Python36-64\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "C:\root\Python36-64\lib\site-packages\tensorflow\python\framework\ops.py", line 2958, in create_op
set_shapes_for_outputs(ret)
File "C:\root\Python36-64\lib\site-packages\tensorflow\python\framework\ops.py", line 2209, in set_shapes_for_outputs
shapes = shape_func(op)
File "C:\root\Python36-64\lib\site-packages\tensorflow\python\framework\ops.py", line 2159, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "C:\root\Python36-64\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 627, in call_cpp_shape_fn
require_shape_fn)
File "C:\root\Python36-64\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 686, in _call_cpp_shape_fn_impl
input_tensors_as_shapes, status)
File "C:\root\Python36-64\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'FertileStatsResourceHandleOp' in binary running on TERMIT. Make sure the Op and Kernel are registered in the binary running in this process.
Process finished with exit code 1
</code></pre>
<p>It stops on:</p>
<pre><code># Random Forest Parameters
hparams = tensor_forest.ForestHParams(num_classes=num_classes,
num_features=num_features,
num_trees=num_trees,
max_nodes=max_nodes).fill()
</code></pre>
<p>Full code of program:</p>
<pre><code>import tensorflow as tf
from tensorflow.contrib.tensor_forest.python import tensor_forest
from tensorflow.python.ops import resources
# Parameters
num_steps = 500 # Total steps to train
num_classes = 2 # The 10 digits
num_features = 2 # Each image is 28x28 pixels
num_trees = 10
max_nodes = 1000
io = (
((0, 0), (0,)),
((0, 1), (1,)),
((1, 0), (1,)),
((1, 1), (0,)),
)
# Input and Target data
X = tf.placeholder(tf.float32, shape=[None, num_features])
# For random forest, labels must be integers (the class id)
Y = tf.placeholder(tf.int32, shape=[None])
# Random Forest Parameters
hparams = tensor_forest.ForestHParams(num_classes=num_classes,
num_features=num_features,
num_trees=num_trees,
max_nodes=max_nodes).fill()
# Build the Random Forest
forest_graph = tensor_forest.RandomForestGraphs(hparams)
# Get training graph and loss
train_op = forest_graph.training_graph(X, Y)
loss_op = forest_graph.training_loss(X, Y)
# Measure the accuracy
infer_op, _, _ = forest_graph.inference_graph(X)
correct_prediction = tf.equal(tf.argmax(infer_op, 1), tf.cast(Y, tf.int64))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Initialize the variables (i.e. assign their default value) and forest resources
init_vars = tf.group(tf.global_variables_initializer(),
resources.initialize_resources(resources.shared_resources()))
# Start TensorFlow session
sess = tf.Session()
# Run the initializer
sess.run(init_vars)
# Training
for i in range(1, num_steps + 1):
# Prepare Data
# Get the next batch of MNIST data (only images are needed, not labels)
batch_x, batch_y = [x[0] for x in io], [x[1] for x in io]
_, l = sess.run([train_op, loss_op], feed_dict={X: batch_x, Y: batch_y})
if i % 50 == 0 or i == 1:
acc = sess.run(accuracy_op, feed_dict={X: batch_x, Y: batch_y})
print('Step %i, Loss: %f, Acc: %f' % (i, l, acc))
# Test Model
test_x, test_y = [x[0] for x in io], [x[1] for x in io]
print("Test Accuracy:", sess.run(accuracy_op, feed_dict={X: test_x, Y: test_y}))
</code></pre>
|
<p>I ran into the same issue today.
I resolved it by updating my installed version of Tensorflow (or Tensorflow-GPU)
In my case, I was on 1.8 and updating to 1.9 fixed it.</p>
<p>All things aside, check your Tensorflow version and adjust as necessary
Steve</p>
|
python|tensorflow
| 0
|
374,759
| 47,620,346
|
Python Pandas: How can I count the number of times a value appears in a column based upon another column?
|
<p><a href="https://i.stack.imgur.com/iB5HO.jpg" rel="nofollow noreferrer">This is my pandas dataframe I am referring to.</a></p>
<p>Basically I would like to be able to display a count on <code>'crime type'</code> based on <code>'council'</code>. So for example, where <code>'council' == 'Fermanagh and omagh'</code>, count the
different values for each <code>'crime type'</code>, if that makes sense? So <code>Burglary</code> might be equal to <code>1</code> whereas <code>'Anti-social behaviour'</code> would be <code>3</code> for another <code>'council'</code>. I then would like to plot these values on a bar graph.</p>
<p>Hope this makes some sense. Any help would be great. Thank you.</p>
|
<p>I believe you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>size</code></a>:</p>
<pre><code>df1 = df.groupby(['crime type', 'council']).size().reset_index(name='Count')
</code></pre>
<p>EDIT:</p>
<pre><code>df = pd.DataFrame({'crime type':['Anti-social behaviour','Anti-social behaviour',
'Burglary','Burglary'],
'council':['Fermanagh and omagh','Belfast','Belfast','Belfast']})
df1 = df.groupby(['council', 'crime type']).size().unstack(fill_value=0)
print (df1)
crime type Anti-social behaviour Burglary
council
Belfast 1 2
Fermanagh and omagh 1 0
df1.plot.bar()
</code></pre>
|
python|pandas|dataframe|count
| 3
|
374,760
| 47,698,020
|
How to populate Pandas dataframe as function of index and columns
|
<p>I have a dataframe where the index and column are both numbers--i.e.</p>
<pre><code>rng = np.arange(2,51)
box = pd.DataFrame(index = rng, columns = rng)
</code></pre>
<p>I want the values of the dataframe to be a function of the index and column--so for instance box[2][2] should equal 4.</p>
<p>Currently I have it in </p>
<pre><code>for x in range(2,len(box)+2):
for y in range(2,len(box)+2):
box[x][y] = x*y
box
</code></pre>
<p>but I feel like there should be a more elegant solution. Is there a way to do this purely with slicing/dataframe assignment methods?</p>
|
<p>Here's what I think you should be doing with numpy broadcasting:</p>
<pre><code>import numpy
import pandas
rng = numpy.arange(2, 51)
box = pandas.DataFrame(index=rng, columns=rng, data=rng*rng[:, None])
</code></pre>
<p>In the case that knowing all of the value ahead of time isn't feasible, you could assign them later:</p>
<pre><code>box = pandas.DataFrame(index=rng, columns=rng)
box.iloc[:, :] = some_other_array
</code></pre>
<p>And if you really are assigning small subsets at a time, you'd do:</p>
<pre><code>box.iloc[row_positions, column_positions] = values
</code></pre>
<p>or</p>
<pre><code>box.loc[row_labels, column_labels] = values
</code></pre>
|
python|pandas|dataframe
| 3
|
374,761
| 47,736,468
|
how to use custom calculation based on two dataframes in python
|
<p>I have 2 dataframes as below,</p>
<p>df1</p>
<pre><code>index X1 X2 X3 X4
0 6 10 6 7
1 8 9 11 13
2 12 13 15 11
3 8 11 7 6
4 11 7 6 6
5 13 14 11 10
</code></pre>
<p>df2</p>
<pre><code>index Y
0 20
1 14
2 17
3 14
4 15
5 20
</code></pre>
<p>I want to get 3rd dataframe such that new X(i) = Y - (Y/X(i))</p>
<pre><code>index X1 X2 X3 X4
0 16.67 18.00 16.67 17.14
1 12.25 12.44 12.73 12.92
2 15.58 15.69 15.87 15.45
3 12.25 12.73 12.00 11.67
4 13.64 12.86 12.50 12.50
5 18.46 18.57 18.18 18.00
</code></pre>
<p>Please note, it shall match the index number of df1 and df2 while calculating.
Thanks in advance!</p>
|
<p>You can use numpy for vectorized solution i.e </p>
<pre><code>df[df.columns] = (df2.values - df2.values/df.values)
X1 X2 X3 X4
index
0 16.666667 18.000000 16.666667 17.142857
1 12.250000 12.444444 12.727273 12.923077
2 15.583333 15.692308 15.866667 15.454545
3 12.250000 12.727273 12.000000 11.666667
4 13.636364 12.857143 12.500000 12.500000
5 18.461538 18.571429 18.181818 18.000000
</code></pre>
|
python|pandas|numpy|dataframe
| 1
|
374,762
| 47,612,822
|
How to create pandas dataframe from Twitter Search API?
|
<p>I am working with the Twitter Search API which returns a dictionary of dictionaries. My goal is to create a dataframe from a list of keys in the response dictionary.</p>
<p>Example of API response here: <a href="https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets.html" rel="nofollow noreferrer">Example Response</a></p>
<p>I have a list of keys within the Statuses dictionary</p>
<pre><code>keys = ["created_at", "text", "in_reply_to_screen_name", "source"]
</code></pre>
<p>I would like to loop through each key value returned in the Statuses dictionary and put them in a dataframe with the keys as the columns.</p>
<p>Currently I have code to loop through a single key individually, assign the values to a list, and then append them to a dataframe, but I want a way to do more than one key at a time. Current code below:</p>
<pre><code>#w is the word to be queired
w = 'keyword'
#count of tweets to return
count = 1000
#API call
query = twitter.search.tweets(q= w, count = count)
def data_l2 (q, k1, k2):
data = []
for results in q[k1]:
data.append(results[k2])
return(data)
screen_names = data_l3(query, "statuses", "user", "screen_name")
data = {'screen_names':screen_names,
'tweets':tweets}
frame=pd.DataFrame(data)
frame
</code></pre>
|
<p>I will share a more generic solution that I came up with, as I was working with the Twitter API. Let's say you have the ID's of tweets that you want to fetch in a list called <code>my_ids</code> :</p>
<pre><code># Fetch tweets from the twitter API using the following loop:
list_of_tweets = []
# Tweets that can't be found are saved in the list below:
cant_find_tweets_for_those_ids = []
for each_id in my_ids:
try:
list_of_tweets.append(api.get_status(each_id))
except Exception as e:
cant_find_tweets_for_those_ids.append(each_id)
</code></pre>
<p>Then in this code block we isolate the json part of each tweepy status object that we have downloaded and we add them all into a list....</p>
<pre><code>my_list_of_dicts = []
for each_json_tweet in list_of_tweets:
my_list_of_dicts.append(each_json_tweet._json)
</code></pre>
<p>...and we write this list into a txt file:</p>
<pre><code>with open('tweet_json.txt', 'w') as file:
file.write(json.dumps(my_list_of_dicts, indent=4))
</code></pre>
<p>Now we are going to create a DataFrame from the tweet_json.txt file (I have added some keys that were relevant to my use case that I was working on, but you can add your specific keys instead):</p>
<pre><code>my_demo_list = []
with open('tweet_json.txt', encoding='utf-8') as json_file:
all_data = json.load(json_file)
for each_dictionary in all_data:
tweet_id = each_dictionary['id']
whole_tweet = each_dictionary['text']
only_url = whole_tweet[whole_tweet.find('https'):]
favorite_count = each_dictionary['favorite_count']
retweet_count = each_dictionary['retweet_count']
created_at = each_dictionary['created_at']
whole_source = each_dictionary['source']
only_device = whole_source[whole_source.find('rel="nofollow">') + 15:-4]
source = only_device
retweeted_status = each_dictionary['retweeted_status'] = each_dictionary.get('retweeted_status', 'Original tweet')
if retweeted_status == 'Original tweet':
url = only_url
else:
retweeted_status = 'This is a retweet'
url = 'This is a retweet'
my_demo_list.append({'tweet_id': str(tweet_id),
'favorite_count': int(favorite_count),
'retweet_count': int(retweet_count),
'url': url,
'created_at': created_at,
'source': source,
'retweeted_status': retweeted_status,
})
tweet_json = pd.DataFrame(my_demo_list, columns = ['tweet_id', 'favorite_count',
'retweet_count', 'created_at',
'source', 'retweeted_status', 'url'])
</code></pre>
|
python|api|pandas|twitter
| 1
|
374,763
| 47,612,069
|
Tensorflow Serving: Large model, protobuf error
|
<p>I am trying to make serve a big (1.2 GB in size) model with Tensorflow Serving, but I am getting a:</p>
<pre><code>2017-12-02 21:55:57.711317: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:236] Loading SavedModel from: ...
[libprotobuf ERROR external/protobuf_archive/src/google/protobuf/io/coded_stream.cc:193] A protocol message was rejected because it was too big (more than 1073741824 bytes). To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
2017-12-02 21:55:58.563507: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:284] Loading SavedModel: fail. Took 852178 microseconds.
2017-12-02 21:55:58.563567: E tensorflow_serving/util/retrier.cc:38] Loading servable: {name: 2 version: 2} failed: Data loss: Can't parse .../saved_model.pb as binary proto
</code></pre>
<p>I read through a few related issues on Github from a few years ago, but ultimately they turned out to be unrelated, since Serving is using the C++ version of protobuf. There is little information on deploying large models with Serving, so any information would help.</p>
<p>Tensorflow Serving was compiled on the host machine, and so was the model, but using python3 (I wonder if that has anything to do with it).
Is there a quick fix for this, or do I have to dig through the Serving C++ sources and increase the size of the message? </p>
<p><strong>Edit per request in the comments:</strong> </p>
<p>I save the model according to the official tutorial. The reason why the model is so big is that I have an embedding layer saved along. Here is the saving code anyway:</p>
<pre><code>export_path = 'model/1'
builder = saved_model_builder.SavedModelBuilder(export_path)
signature = predict_signature_def(inputs={'input': input},
outputs={'sent': output})
builder.add_meta_graph_and_variables(sess=session,
tags=[tag_constants.SERVING],
signature_def_map={'predict': signature})
builder.save()
</code></pre>
<p>The model is read by a compiled TF Serving from GitHub on a Ubuntu 16.04 host. </p>
|
<p>Hope it helps someone, but I "found" a solution.</p>
<p>The major problem was obvious; this is an NLP model, thus it has a big vocabulary that goes along with it. Leaving the vocabulary in the graph definition bloats the MetaGraphDef, and protobuf raises an error when faced with such a big protocol message. </p>
<p>The solution is to put the dictionary in the <strong>assets_collection</strong>. There is little documentation on what you actually have to do, but the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/saved_model_test.py#L552" rel="nofollow noreferrer">saved_model_test.py</a> in the official repo is worth looking at.</p>
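<p>A rough sketch of the idea with the TF 1.x <code>SavedModelBuilder</code> (reusing the <code>export_path</code>, <code>session</code> and <code>signature</code> names from the question; the vocabulary file name is just an example, not my actual code):</p>
<pre><code># Point a constant at the vocabulary file and register it as an asset, so it is
# copied into the SavedModel's assets/ directory instead of bloating the graph proto.
vocab_file = tf.constant('vocab.txt', name='vocab_file')
tf.add_to_collection(tf.GraphKeys.ASSET_FILEPATHS, vocab_file)

builder = tf.saved_model.builder.SavedModelBuilder(export_path)
builder.add_meta_graph_and_variables(
    sess=session,
    tags=[tf.saved_model.tag_constants.SERVING],
    signature_def_map={'predict': signature},
    assets_collection=tf.get_collection(tf.GraphKeys.ASSET_FILEPATHS))
builder.save()
</code></pre>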
<p>To utilize the assets with Tensorflow Serving, one has to create a custom Servable, as described in the official <a href="https://www.tensorflow.org/serving/custom_servable" rel="nofollow noreferrer">Creating a new kind of servable</a> documentation. I cannot provide a specific example, because I simply containerized the model for the time being.</p>
<p>If anyone has examples, or has a better strategy when it comes to NLP models deployment, I would be happy to discuss it further.</p>
|
c++|machine-learning|tensorflow|deep-learning|tensorflow-serving
| 2
|
374,764
| 47,985,750
|
Extracting data/string from Pandas DF column
|
<p>Im trying to extract currency pairs from the poloniex API using Python pandas.</p>
<p>I believe the data returned is all just a single column name:</p>
<pre><code>Columns: [{"BTC_BCN":{"BTC":"479.74697466", "BCN":"1087153595.32266165"}, "BTC_BELA":{"BTC":"32.92293515", "BELA":"1807337.13247948"}, "BTC_BLK":{"BTC":"25.70374054", "BLK":"606717.86348734"}, "BTC_BTCD":{"BTC":"24.32220571", "BTCD":"1264.02352237"}, "BTC_BTM":{"BTC":"11.57816905", "BTM":"80673.47934437"}, "BTC_BTS":{"BTC":"1102.88787610", "BTS":"30426626.64558044"}
</code></pre>
<p>The result I want: <code>BTC_BCN, BTC_BELA, BTC_BLK,</code> etc...</p>
<p>But I'm not really sure if there is a simple way to get this without string parsing, since they all appear to just be column names.</p>
<p>Code:</p>
<pre><code>from bs4 import BeautifulSoup
import csv
import urllib2
import pandas as pd
try:
from StringIO import StringIO
except:
from io import StringIO
sock= urllib2.urlopen('https://poloniex.com/public?command=return24hVolume')
link=sock.read()
soup = BeautifulSoup(link,'lxml')
csv_data = StringIO(soup.text)
df=pd.read_csv(csv_data,delimiter=' *, *',engine='python')
df2=df.iloc[1:2,0:20]
</code></pre>
|
<p>You don't need <code>BeautifulSoup</code> here at all. <em>The contents of the webpage is JSON</em> - parse it with <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_json.html" rel="nofollow noreferrer"><code>.read_json()</code></a> directly:</p>
<pre><code>df = pd.read_json('https://poloniex.com/public?command=return24hVolume')
</code></pre>
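<p>Assuming the pairs end up as the columns of the resulting DataFrame (they are the top-level keys of the JSON, so they should), the pair names are then simply:</p>
<pre><code>pairs = df.columns.tolist()  # e.g. ['BTC_BCN', 'BTC_BELA', 'BTC_BLK', ...]
</code></pre>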
|
python|pandas|beautifulsoup
| 0
|
374,765
| 47,736,531
|
Vectorized matrix manhattan distance in numpy
|
<p>I'm trying to implement an efficient vectorized <code>numpy</code> to make a Manhattan distance matrix. I'm familiar with the construct used to create an efficient Euclidean distance matrix using dot products as follows:</p>
<pre><code>A = [[1, 2],
     [2, 1]]
B = [[1, 1],
[2, 2],
[1, 3],
[1, 4]]
def euclidean_distmtx(X, Y):
f = -2 * np.dot(X, Y.T)
xsq = np.power(X, 2).sum(axis=1).reshape((-1, 1))
ysq = np.power(Y, 2).sum(axis=1)
return np.sqrt(xsq + f + ysq)
</code></pre>
<p>I want to implement something similar, but using Manhattan distance instead. So far I've gotten close but fell short trying to rearrange the absolute differences. As I understand it, the Manhattan distance is</p>
<p><img src="https://latex.codecogs.com/gif.latex?%5Csum_i&space;|x_i&space;-&space;y_i|&space;=&space;|x_1&space;-&space;y_1|&space;+&space;|x_2&space;-&space;y_2|&space;+&space;..." alt="\sum_i |x_i - y_i| = |x_1 - y_1| + |x_2 - y_2| + ..."></p>
<p>I tried to solve this by considering if the absolute function didn't apply at all giving me this equivalence</p>
<p><img src="https://latex.codecogs.com/gif.latex?%5Csum_i&space;x_i&space;-&space;y_i&space;=&space;%5Csum_i&space;x_i&space;-&space;%5Csum_i&space;y_i" alt="\sum_i x_i - y_i = \sum_i x_i - \sum_i y_i"></p>
<p>which gives me the following vectorization</p>
<pre><code>def manhattan_distmtx(X, Y):
f = np.dot(X.sum(axis=1).reshape(-1, 1), Y.sum(axis=1).reshape(-1, 1).T)
return f / Y.sum(axis=1) - Y.sum(axis=1)
</code></pre>
<p>I think I'm on the right track, but I just can't move the values around without removing the absolute function around the difference between the vector elements. I'm sure there's a clever trick around the absolute values, possibly by using <code>np.sqrt</code> of a squared value or something, but I can't seem to see it.</p>
|
<p>I don't think we can leverage BLAS-based matrix multiplication here, as there's no element-wise multiplication involved. But we have a few alternatives.</p>
<p><strong>Approach #1</strong></p>
<p>We can use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html" rel="noreferrer">Scipy's <code>cdist</code></a> that features the Manhattan distance with its optional metric argument set as <code>'cityblock'</code> -</p>
<pre><code>from scipy.spatial.distance import cdist
out = cdist(A, B, metric='cityblock')
</code></pre>
<p><strong>Approach #2 - A</strong></p>
<p>We can also leverage <code>broadcasting</code>, but with more memory requirements -</p>
<pre><code>np.abs(A[:,None] - B).sum(-1)
</code></pre>
<p><strong>Approach #2 - B</strong></p>
<p>That could be re-written to use less memory with slicing and summations for input arrays with two cols -</p>
<pre><code>np.abs(A[:,0,None] - B[:,0]) + np.abs(A[:,1,None] - B[:,1])
</code></pre>
<p><strong>Approach #2 - C</strong></p>
<p>Porting over the <code>broadcasting</code> version to make use of faster <code>absolute</code> computation with <a href="http://numexpr.readthedocs.io/en/latest/intro.html#how-it-works" rel="noreferrer"><code>numexpr</code> module</a> -</p>
<pre><code>import numexpr as ne
A3D = A[:,None]
out = ne.evaluate('sum(abs(A3D-B),2)')
</code></pre>
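<p>As a quick sanity check, the approaches should agree on the sample arrays from the question (a minimal sketch):</p>
<pre><code>import numpy as np
from scipy.spatial.distance import cdist

A = np.array([[1, 2], [2, 1]])
B = np.array([[1, 1], [2, 2], [1, 3], [1, 4]])

d1 = cdist(A, B, metric='cityblock')      # Approach #1
d2 = np.abs(A[:, None] - B).sum(-1)       # Approach #2 - A
print(np.allclose(d1, d2))                # True
</code></pre>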
|
python|numpy|vectorization
| 18
|
374,766
| 47,583,428
|
Pandas duplicates when grouped
|
<pre><code>x = df.groupby(["Customer ID", "Category"]).sum().sort_values(by="VALUE", ascending=False)
</code></pre>
<p>I want to group by Customer ID, but when I use the above code, it duplicates customers...</p>
<p>Here is the result:</p>
<p><a href="https://i.stack.imgur.com/y4UDn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y4UDn.png" alt="Image"></a></p>
<p>Source DF:</p>
<pre><code> Customer ID Category Value
0 A x 5
1 B y 5
2 B z 6
3 C x 7
4 A z 2
5 B x 5
6 A x 1
</code></pre>
<p>new: <a href="https://ufile.io/dpruz" rel="nofollow noreferrer">https://ufile.io/dpruz</a></p>
|
<p>I think you are looking for something like this:</p>
<pre><code>df_out = df.groupby(['Customer ID','Category']).sum()
df_out.reindex(df_out.sum(level=0).sort_values('Value', ascending=False).index,level=0)
</code></pre>
<p>Output:</p>
<pre><code> Value
Customer ID Category
B x 5
y 5
z 6
A x 6
z 2
C x 7
</code></pre>
|
pandas|pandas-groupby
| 2
|
374,767
| 47,766,805
|
pandas only shows date and drops time when I select data from database
|
<p>Hi, how are you?</p>
<p>I'm new to pandas and I have run into a problem using <code>read_sql</code>.</p>
<pre><code>df = pd.read_sql("select TIME, col_1, col_2 from TABLE", connection)
</code></pre>
<p>The real database has TIME data which looks like this:</p>
<pre><code> TIME
2017-12-08 00:00:00
2017-12-08 00:00:01
2017-12-08 00:00:02
</code></pre>
<p>...</p>
<p>Nearly every single second data exist.</p>
<p>Here's a problem. When I read data using pandas, my TIME data looks like</p>
<pre><code>2017-12-08
2017-12-08
2017-12-08
</code></pre>
<p>...</p>
<p>pandas only shows the date... where's my time? Why is the result different from the real data?</p>
|
<p>Have you tried accessing the data manually to see if the seconds data is there? i.e.</p>
<pre><code>print(df['TIME'][2].second)
</code></pre>
<p>If this data exists, then what you see is just the <code>__str__</code> representation of <code>M8[ns]</code> objects when you print your dataframe.</p>
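<p>If the seconds are there and you just want them displayed, a small sketch (assuming the column is, or can be converted to, datetime64):</p>
<pre><code>df['TIME'] = pd.to_datetime(df['TIME'])
# format the timestamps explicitly so the time-of-day part is always shown
print(df['TIME'].dt.strftime('%Y-%m-%d %H:%M:%S').head())
</code></pre>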
|
python|pandas
| 0
|
374,768
| 47,965,026
|
Why is my date axis formatting broken when plotting with Pandas' built-in plot calls as opposed to via Matplotlib?
|
<p>I am plotting aggregated data in Python, using Pandas and Matplotlib.
My axis customization commands are failing depending on which of two similar functions I call to make bar plots. The working case is e.g.:</p>
<pre><code>import datetime
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
def format_x_date_month_day(ax):
days = mdates.DayLocator()
months = mdates.MonthLocator() # every month
dayFmt = mdates.DateFormatter('%D')
monthFmt = mdates.DateFormatter('%Y-%m')
ax.figure.autofmt_xdate()
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(monthFmt)
ax.xaxis.set_minor_locator(days)
span_days = 90
start = pd.to_datetime("1-1-2012")
idx = pd.date_range(start, periods=span_days).tolist()
df=pd.DataFrame(index=idx, data={'A':np.random.random(span_days), 'B':np.random.random(span_days)})
plt.close('all')
fig, ax = plt.subplots(1)
ax.bar(df.index, df.A) # loop over columns here to do stacked plot
format_x_date_month_day(ax)
plt.show()
</code></pre>
<p>(See <a href="https://matplotlib.org/gallery/lines_bars_and_markers/bar_stacked.html" rel="nofollow noreferrer">matplotlib.org</a> for example of looping to create a stacked bar plot.) This gives us</p>
<p><a href="https://i.stack.imgur.com/Oc9fR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Oc9fR.png" alt="Bar plot called from Matplotlib, axis formatting by <code>mdates</code>"></a></p>
<p>Another approach that <strong>should</strong> work and be much easier is to use <code>df.plot.bar(ax=ax, stacked=True)</code>, however it does not admit date axis formatting with <code>mdates</code>:</p>
<pre><code>plt.close('all')
fig, ax = plt.subplots(1)
df.plot.bar(ax=ax, stacked=True)
format_x_date_month_day(ax)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/owHO0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/owHO0.png" alt="Stacked bar with broken x-axis labeling"></a></p>
<p>How can <code>mdates</code> and <code>ax.figure.autofmt_xdate()</code> be made to play nice with <code>df.plot.bar</code>?</p>
|
<p>Bar plots in pandas are designed to compare categories rather than to display time-series or other types of continuous variables, as stated <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.bar.html" rel="nofollow noreferrer">in the docstring</a>:</p>
<blockquote>
<p>A bar plot shows comparisons among discrete categories. One axis of the plot shows the specific categories being compared, and the other axis represents a measured value.</p>
</blockquote>
<p>This is why the scale of the x-axis of pandas bar plots is made of integers starting from zero, regardless of the data type of the x variable. When the same bar plot is created with matplotlib, the scale of the x-axis is made of matplotlib date numbers, so the tick locators and formatters of the <a href="https://matplotlib.org/api/dates_api.html" rel="nofollow noreferrer"><code>matplotlib.dates</code></a> module (mdates) can be used as expected.</p>
<p>To be able to use a pandas bar plot with mdates, you need to move the bars along the x-axis to locations that match the matplotlib date numbers. This can be done thanks to the <a href="https://matplotlib.org/api/dates_api.html#matplotlib.dates.date2num" rel="nofollow noreferrer"><code>mdates.date2num</code></a> function. This is illustrated in the following example based on the code you provided with a few modifications: the sample dataset contains 3 variables, the time series is limited to 45 days, and the tick formatting is adjusted to my preferences (and is not wrapped as a function).</p>
<p>This example works for any number of variables (with or without NaNs) and for any bar width that is passed to the pandas plot function:</p>
<pre><code>import numpy as np # v 1.19.2
import pandas as pd # v 1.1.3
import matplotlib.dates as mdates # v 3.3.2
# Create random dataset
rng = np.random.default_rng(seed=1) # random number generator
nperiods = 45
nvar = 3
idx = pd.date_range('2012-01-01', periods=nperiods, freq='D')
df = pd.DataFrame(rng.integers(11, size=(idx.size, nvar)),
index=idx, columns=list('ABC'))
# Draw pandas stacked bar chart
ax = df.plot(kind='bar', stacked=True, figsize=(10,5))
# Compute width of bars in matplotlib date units
pandas_width = ax.patches[0].get_width() # the default bar width is 0.5
mdates_x0 = mdates.date2num(df.index[0])
mdates_x1 = mdates.date2num(df.index[1])
mdates_width_default = (mdates_x1-mdates_x0)/2
mdates_width = pandas_width*mdates_width_default/0.5 # rule of three conversion
# Compute new x values for bars in matplotlib date units, adjusting the
# positions according to the bar width
mdates_x = mdates.date2num(df.index) - mdates_width/2
nvar = len(ax.get_legend_handles_labels()[1])
mdates_x_patches = np.ravel(nvar*[mdates_x])
# Set bars to new x positions: this loop works fine with NaN values as
# well because in bar plot NaNs are drawn with a rectangle of 0 height
# located at the foot of the bar, you can verify this with patch.get_bbox()
for patch, new_x in zip(ax.patches, mdates_x_patches):
patch.set_x(new_x)
patch.set_width(mdates_width)
# Set major and minor date tick locators
months = mdates.MonthLocator()
days = mdates.DayLocator(bymonthday=np.arange(31, step=3))
ax.xaxis.set_major_locator(months)
ax.xaxis.set_minor_locator(days)
# Set major date tick formatter
month_fmt = mdates.DateFormatter('\n%b\n%Y')
day_fmt = mdates.DateFormatter('%d')
ax.xaxis.set_major_formatter(month_fmt)
ax.xaxis.set_minor_formatter(day_fmt)
# Shift the plot frame to where the bars are now located
xmin = min(mdates_x) - mdates_width
xmax = max(mdates_x) + 2*mdates_width
ax.set_xlim(xmin, xmax)
# Adjust tick label format last, else it may produce unexpected results
ax.figure.autofmt_xdate(rotation=0, ha='center')
</code></pre>
<p><a href="https://i.stack.imgur.com/kEEx0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kEEx0.png" alt="pd_barplot_days" /></a></p>
<p>Up to you to decide if this is more convenient than plotting stacked bars from scratch with matplotlib.</p>
<p>This solution can be slightly modified to display appropriate tick labels for time series based on any frequency of time. Here is an example using a frequency of minutes, a custom bar width, and an automatic date tick locator and formatter. Only the new/modified code lines are shown:</p>
<pre><code>import matplotlib.ticker as mtick
#...
idx = pd.date_range('2012-01-01 12', periods=nperiods, freq='T')
#...
ax = df.plot(kind='bar', stacked=True, figsize=(10,5), width=0.3)
#...
# Set adaptive tick locators and major tick formatter
maj_loc = mdates.AutoDateLocator()
ax.xaxis.set_major_locator(maj_loc)
min_loc = mtick.FixedLocator(mdates_x + mdates_width/2)
ax.xaxis.set_minor_locator(min_loc) # draw minor tick under each bar
fmt = mdates.ConciseDateFormatter(maj_loc)
ax.xaxis.set_major_formatter(fmt)
#...
</code></pre>
<p><a href="https://i.stack.imgur.com/irq1k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/irq1k.png" alt="pd_barplot_minutes" /></a></p>
<p>You may notice that the ticks are often not well aligned with the bars. There appears to be some issue with matplotlib when the figure elements are put together. I find this is usually only noticeable when plotting thinner-than-useful bars. You can check that the bars and ticks are indeed placed correctly by running <code>ax.get_xticks()</code> and comparing that to the values given by <code>patch.get_bbox()</code> when looping through <code>ax.patches</code>.</p>
|
python|pandas|matplotlib
| 1
|
374,769
| 47,721,663
|
How to use tf.nn.crelu in tensorflow?
|
<p>I am trying different activation functions in my simple neural network. </p>
<p>Whether I use <code>tf.nn.relu</code>, <code>tf.nn.sigmoid</code>, ... does not matter; the network does what it should do. </p>
<p>But if I am using <code>tf.nn.crelu</code>, I have a dimension error.</p>
<p>It returns something like <code>[max, min]</code> and the width dimension is twice as big.
What do I have to do? Fit the following weights and biases to the output of <code>crelu</code>?</p>
|
<p>You're right, if you're building the network manually, you need to adjust the dimensions of the following layer to match <code>tf.nn.crelu</code> output. In this sense, <code>tf.nn.crelu</code> is <em>not</em> interchangeable with <code>tf.nn.relu</code>, <code>tf.nn.elu</code>, etc.</p>
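<p>For a manually built layer, that means doubling the input dimension of the weights that follow the activation, roughly like this (a small sketch; the layer sizes are hypothetical):</p>
<pre><code>import tensorflow as tf

units, n_out = 64, 10                                  # hypothetical layer sizes
x = tf.placeholder(tf.float32, [None, 32])             # hypothetical input
W1 = tf.get_variable('W1', shape=[32, units])
b1 = tf.get_variable('b1', shape=[units])
h = tf.nn.crelu(tf.matmul(x, W1) + b1)                 # shape: [None, 2 * units]
W2 = tf.get_variable('W2', shape=[2 * units, n_out])   # note the factor of 2
logits = tf.matmul(h, W2)
</code></pre>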
<p>The situation is simpler if you use a high-level API, e.g. <a href="https://research.googleblog.com/2016/08/tf-slim-high-level-library-to-define.html" rel="nofollow noreferrer">tensorflow slim</a>. In this case, the layer functions are taking care of matching dimensions, so you can replace <code>tf.nn.relu</code> easily with <code>tf.nn.crelu</code> in <em>code</em>. However, keep in mind that the network is silently becoming twice as big.</p>
<p>Here's an example:</p>
<pre class="lang-py prettyprint-override"><code>with slim.arg_scope([slim.conv2d, slim.fully_connected],
activation_fn=tf.nn.crelu,
normalizer_fn=slim.batch_norm,
normalizer_params={'is_training': is_training, 'decay': 0.95}):
conv1 = slim.conv2d(x_image, 16, [5, 5], scope='conv1')
pool1 = slim.max_pool2d(conv1, [2, 2], scope='pool1')
conv2 = slim.conv2d(pool1, 32, [5, 5], scope='conv2')
pool2 = slim.max_pool2d(conv2, [2, 2], scope='pool2')
flatten = slim.flatten(pool2)
fc = slim.fully_connected(flatten, 1024, scope='fc1')
drop = slim.dropout(fc, keep_prob=keep_prob)
logits = slim.fully_connected(drop, 10, activation_fn=None, scope='logits')
</code></pre>
<p><code>slim.arg_scope</code> simply applies all provided arguments to the underlying layers, in particular <code>activation_fn</code>. Also note <code>activation_fn=None</code> in the last layer to fix the output dimension. Complete code can be <a href="https://github.com/soloice/mnist-bn/blob/master/mnist_bn.py" rel="nofollow noreferrer">found here</a>.</p>
|
machine-learning|tensorflow|computer-vision|activation-function
| 1
|
374,770
| 47,775,927
|
Pandas Transform Position/Rank in Group
|
<p>I have the following <code>DataFrame</code> with two groups of animals and how much food they eat each day,</p>
<pre><code>df = pd.DataFrame({'animals': ['cat', 'cat', 'dog', 'dog', 'rat',
'cat', 'rat', 'rat', 'dog', 'cat'],
'food': [1, 2, 2, 5, 3, 1, 4, 0, 6, 5]},
index=pd.MultiIndex.from_product([['group1'] + ['group2'],
list(range(5))])
).rename_axis(['groups', 'day'])
df
animals food
groups day
group1 0 cat 1
1 cat 2
2 dog 2
3 dog 5
4 rat 3
group2 0 cat 1
1 rat 4
2 rat 0
3 dog 6
4 cat 5
</code></pre>
<p>I can "map"/transform this into a new column to see how much food each individual animal should be given per day <code>daily_meal</code>.</p>
<pre><code>df['daily_meal'] = df.groupby(['animals', 'groups']).transform('mean')
df
animals food daily_meal
groups day
group1 0 cat 1 1.5
1 cat 2 1.5
2 dog 2 3.5
3 dog 5 3.5
4 rat 3 3.0
group2 0 cat 1 3.0
1 rat 4 2.0
2 rat 0 2.0
3 dog 6 6.0
4 cat 5 3.0
</code></pre>
<p>I now wish to know where that daily_meal ranks within each group, and "map"/transform this into a new column called <code>group_rank</code>. How can I do this?</p>
<p>e.g.</p>
<pre><code> animals food daily_meal group_rank
groups day
group1 0 cat 1 1.5 1
1 cat 2 1.5 1
2 dog 2 3.5 3
3 dog 5 3.5 3
4 rat 3 3.0 2
group2 0 cat 1 3.0 2
1 rat 4 2.0 1
2 rat 0 2.0 1
3 dog 6 6.0 3
4 cat 5 3.0 2
</code></pre>
|
<p>Use double <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>transform</code></a>:</p>
<pre><code>df['daily_meal'] = df.groupby(['animals', 'groups'])['food'].transform('mean')
df['group_rank'] = df.groupby('groups')['daily_meal'].rank(method='dense')
print (df)
animals food daily_meal group_rank
groups day
group1 0 cat 1 1.5 1.0
1 cat 2 1.5 1.0
2 dog 2 3.5 3.0
3 dog 5 3.5 3.0
4 rat 3 3.0 2.0
group2 0 cat 1 3.0 2.0
1 rat 4 2.0 1.0
2 rat 0 2.0 1.0
3 dog 6 6.0 3.0
4 cat 5 3.0 2.0
</code></pre>
<p>Or:</p>
<pre><code>s = df.groupby(['animals', 'groups'])['food'].transform('mean')
df['group_rank'] = s.groupby('groups').transform(lambda x: x.rank(method='dense'))
print (df)
animals food group_rank
groups day
group1 0 cat 1 1.0
1 cat 2 1.0
2 dog 2 3.0
3 dog 5 3.0
4 rat 3 2.0
group2 0 cat 1 2.0
1 rat 4 1.0
2 rat 0 1.0
3 dog 6 3.0
4 cat 5 2.0
</code></pre>
<p>Thanks <a href="https://stackoverflow.com/questions/47775927/pandas-transform-position-rank-in-group/47776077#comment82513404_47776077">Scott Boston</a> for improving solution:</p>
<pre><code>df['daily_meal'] = df.groupby(['animals', 'groups'])['food'].transform('mean')
df['group_rank'] = df.groupby('groups')['daily_meal'].rank(method='dense')
</code></pre>
<hr>
<pre><code>s = df.groupby(['animals', 'groups'])['food'].transform('mean')
df['group_rank'] = s.groupby('groups').rank(method='dense')
</code></pre>
|
python|pandas|pandas-groupby
| 6
|
374,771
| 47,860,633
|
TensorFlow pip install not working on Windows 10
|
<p>I have spent a lot of time trying to install TensorFlow for Windows. I keep getting errors like "not supported."</p>
<p>I have tried the commands:</p>
<pre><code>pip install tensorflow
</code></pre>
<p>and</p>
<pre><code>pip install --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-0.12.0rc0-cp35-cp35m-win_amd64.whl
</code></pre>
|
<p>Note that TensorFlow needs x64 Windows and that Python 3.5 and higher go with <code>pip3</code> instead of <code>pip</code>. Still, there's a glitch with the installation script; I ran into the same problem and resolved it by using <a href="https://www.anaconda.com/distribution/#download-section" rel="nofollow noreferrer">Anaconda</a>, an alternative package manager for R and Python.</p>
<p>Once you've installed Anaconda, run the Anaconda prompt, create a new environment (called here <code>tfenv</code>), and activate it:</p>
<pre><code>>conda create -n tfenv
>conda activate tfenv
</code></pre>
<p>Then, you can install TensorFlow</p>
<pre><code>>conda install tensorflow
</code></pre>
|
python|tensorflow|pip
| 0
|
374,772
| 47,714,805
|
Predicting the future with pandas and statsmodels
|
<p>What I need to do is plot future temperature with these "requirements": "assume that temperature is roughly a linear function of CO2 emission,
estimating the coefficients of the linear function from recent data points (using
the past 2 is fine, as is using the past 10 or so if you want to be more thorough).
Further, assume that the rate of increase of CO2 emissions is going to be the
same as it is today (i.e. if there were X tons more CO2 emissions in 2016 than
in 2015, there will be X tons more CO2 emissions in 2017 than in 2016)".</p>
<p>I have 2 data sets, one with temperature for each month per year, and one with Carbon level per year.</p>
<p>(I posted the merged and shortened-down one as it's not that big, but if it's more helpful to see them unmodified then I can post that as well; you can see how it's done below where I post my code.)</p>
<pre><code>Year Carbon June
2000 6727 20.386
2001 6886 20.445
2002 6946 20.662
2003 7367 20.343
2004 7735 20.242
2005 8025 20.720
2006 8307 20.994
2007 8488 20.661
2008 8738 20.657
2009 8641 20.548
2010 9137 21.027
2011 9508 20.915
2012 9671 21.172
</code></pre>
<p>What I have done so far is merge the two datasets and then try to predict the temperature for one month in future years. I have limited it to 2000-2012 just to make it simpler and to make sure that both tables have the same length, as one table is longer than the other. I am pretty new to Python and coding overall and I have no idea how to do this; below you can see what I have tried so far:</p>
<pre><code>data1 = pd.read_csv("co2.csv", sep=',')
data2 = pd.read_csv("temperature.csv", sep=',')
data1 = data1.set_index('Year')
data2 = data2.set_index('Year')
data3 = data1.loc["2000":"2012"]
data4 = data2.loc["2000":"2012"]
data4 = data4.loc[:, "June":"June"]
data5 = pd.merge(data3,data4, how= 'left', left_index =True , right_index=True)
x = data5["Carbon"]
y = data5["June"]
model = sm.OLS(y,x).fit()
prediction = model.predict(x)
prediction.plot()
plt.show()
</code></pre>
|
<p>The method <a href="http://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.OLS.predict.html#statsmodels.regression.linear_model.OLS.predict" rel="nofollow noreferrer"><code>OLS.predict</code></a> do not take <code>x</code> as arguments but the model parameters (and eventually exogenous data). Besides, you have to add a constant to X, otherwise it force the linear regression to pass through the origin. Here is an example:</p>
<pre><code>import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from StringIO import StringIO
data = StringIO("""
Year Carbon June
2000 6727 20.386
2001 6886 20.445
2002 6946 20.662
2003 7367 20.343
2004 7735 20.242
2005 8025 20.720
2006 8307 20.994
2007 8488 20.661
2008 8738 20.657
2009 8641 20.548
2010 9137 21.027
2011 9508 20.915
2012 9671 21.172
""")
# Model training
df = pd.read_table(data, index_col=0, sep='\s+')
Y_train = df['June']
X_train = df['Carbon']
X_train = sm.add_constant(X_train) # add this to your code
model = sm.OLS(Y_train, X_train)
results = model.fit()
# Prediction of future values
future_carbon = range(9700, 10000, 50)
X_pred = pd.DataFrame(data=future_carbon, columns=['Carbon'])
X_pred = sm.add_constant(X_pred)
prediction = model.predict(results.params, X_pred)
# Plot
plt.figure()
plt.plot(X_train['Carbon'], model.predict(results.params), '-r', label='Linear model')
plt.plot(X_pred['Carbon'], prediction, '--r', label='Linear prediction')
plt.scatter(df['Carbon'], df['June'], label='data')
plt.xlabel('Carbon')
plt.ylabel('June temperature')
plt.legend()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/YWOxS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YWOxS.png" alt="enter image description here"></a></p>
|
python|pandas|statsmodels
| 4
|
374,773
| 47,708,345
|
Pandas substring search for filter
|
<p>I have a use case where I need to validate each row in the df and mark if it is correct or not. Validation rules are in another df.</p>
<pre><code>Main
col1 col2
0 1 take me home
1 2 country roads
2 2 country roads take
3 4 me home
Rules
col3 col4
0 1 take
1 2 home
2 3 country
3 4 om
4 2 take
</code></pre>
<p>A row in <code>main</code> is marked as pass if the following condition matches for any row in <code>rules</code></p>
<p>The condition for passing is:
<code>col1 == col3</code> and <code>col4</code> is a substring of <code>col2</code>.</p>
<pre><code> Main
col1 col2 result
0 1 take me home Pass
1 2 country roads Fail
2 2 country roads take Pass
3 4 me home Pass
</code></pre>
<p>My initial approach was to parse Rules df and create a function out of it dynamically and then run </p>
<pre><code> def action_function(row) -> object:
if self.combined_filter()(row): #combined_filter() is the lambda equivalent of Rules df
return success_action(row) #mark as pass
return fail_action(row) #mark as fail
Main["result"] = self.df.apply(action_function, axis=1)
</code></pre>
<p>This turned out to be very slow as apply is not vectorized. The main df is about 3 million rows and the Rules df is around 500 entries. The time taken is around 3 hours. </p>
<p>I am trying to use pandas merge for this. But substring match is not supported by the merge operation. I cannot split words by space or anything.</p>
<p>This will be used as part of a system, so I cannot hardcode anything. I need to read the df from Excel every time the system starts.
Can you please suggest an approach for this?</p>
|
<p>Merge and then apply the condition using <code>np.where</code>, i.e.</p>
<pre><code>temp = main.merge(rules,left_on='col1',right_on='col3')
temp['results'] = temp.apply(lambda x : np.where(x['col4'] in x['col2'],'Pass','Fail'),1)
no_dupe_df = temp.drop_duplicates('col2',keep='last').drop(['col3','col4'],1)
col1 col2 results
0 1 take me home Pass
2 2 country roads Fail
4 2 country roads take Pass
5 4 me home Pass
</code></pre>
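<p>If the row-wise <code>apply</code> is still too slow on ~3 million rows, a plain list comprehension over the two columns is usually noticeably faster than <code>apply</code> (a sketch, reusing the merged <code>temp</code> frame from above):</p>
<pre><code>import numpy as np

# evaluate the substring test once per row without the apply machinery
temp['results'] = np.where(
    [c4 in c2 for c2, c4 in zip(temp['col2'], temp['col4'])],
    'Pass', 'Fail')
</code></pre>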
|
python|pandas|dataframe
| 1
|
374,774
| 49,222,772
|
Pandas: frequencies per date grouped by column in a form of a list
|
<p>I would like to obtain frequencies of technologies per date from pandas data frame. A reproducible example:</p>
<pre><code>data = pd.DataFrame(
{'dates': ['2017-01-31', '2017-02-28', '2017-02-28'],
'tech': [['c++', 'python'], ['c++', 'c', 'java'], ['java']]}
)
</code></pre>
<p>The end result could look like this (or have the names in rows and one column with counts per date and technology):</p>
<pre><code>date c++ python c java
2017-01-31 1 1 0 0
2017-02-28 1 0 1 2
</code></pre>
<p>The second column, by which the data should be grouped is a list of technologies. Simply trying to group by the data in the present state:</p>
<pre><code>data.groupby(['dates', data.tech.values]).count()
</code></pre>
<p>produces an error:</p>
<blockquote>
<p>TypeError: unhashable type: 'list'</p>
</blockquote>
<p>so I presume that grouping by list is not possible.</p>
|
<p>Seems like you need <code>get_dummies</code></p>
<pre><code>pd.get_dummies(data.set_index('dates').tech.apply(pd.Series).stack()).sum(level=0)
Out[193]:
c c++ java python
dates
2017-01-31 0 1 0 1
2017-02-28 1 1 2 0
</code></pre>
<p>Or <code>sklearn</code> </p>
<pre><code>from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
pd.DataFrame(mlb.fit_transform(data.tech), data.dates, mlb.classes_).sum(level=0)
Out[209]:
c c++ java python
dates
2017-01-31 0 1 0 1
2017-02-28 1 1 2 0
</code></pre>
|
python|string|list|pandas|pandas-groupby
| 2
|
374,775
| 49,214,286
|
Find common values between 3 DataFrames?
|
<p>I have 3 dataframes: df1, df2, and df3. </p>
<pre><code>df1 = 'num' 'type'
23 a
34 b
89 a
90 c
df2 = 'num' 'type'
23 a
34 b
56 a
90 c
df3 = 'num' 'type'
56 a
34 s
71 a
90 c
</code></pre>
<p>What I want is an output of all of the 'num' values which appear in 2 or more of the dfs, and I want to flag how many dfs that 'num' value appeared in. So I want something like this: </p>
<pre><code>df = 'num' 'type' 'count'
23 a 2
34 s 3
90 c 3
56 a 2
</code></pre>
<p>I tried doing an inner merge, but that only accounts for 'num' values that appear in all 3 dfs, ignoring the ones that appear in 2/3 dfs.
What's the best way to go about this? </p>
|
<p>et voila my friend</p>
<pre><code>df_full = pd.concat([df1,df2,df3], axis = 0)
df_agg = df_full.groupby('num').agg({'type': 'count'})
df_agg = df_agg.loc[df_agg['type'] >= 2]
</code></pre>
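<p>If you also want to keep a representative <code>'type'</code> value and a properly named count column (as in your desired output), a variation along these lines should work (a sketch; <code>'last'</code> simply picks the type from the last frame that contained that <code>num</code>):</p>
<pre><code>df_full = pd.concat([df1, df2, df3])
out = df_full.groupby('num').agg({'type': ['last', 'count']})
out.columns = ['type', 'count']          # flatten the MultiIndex columns
out = out[out['count'] >= 2].reset_index()
</code></pre>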
|
python|pandas|dataframe|counter
| 4
|
374,776
| 49,144,138
|
How to subtract numbers in 1 column of a pandas dataframe?
|
<p>Currently, I am using Pandas and created a dataframe that has two columns: </p>
<pre><code>Price Current Value
1350.00 0
1.75 0
3.50 0
5.50 0
</code></pre>
<p>How do I take the first value, and then continuously subtract each subsequent value from the running total (similar to Excel), like this:</p>
<pre><code>Price Current
1350.00 1350.00
1.75 1348.25
3.50 1344.75
5.50 1339.25
</code></pre>
<p>How can this be done for more than just four rows?</p>
|
<p>This will achieve what you need, using <code>cumsum</code>:</p>
<pre><code>1350*2-df.Price.cumsum()
Out[304]:
0 1350.00
1 1348.25
2 1344.75
3 1339.25
Name: Price, dtype: float64
</code></pre>
<p>After that, assign it back: </p>
<pre><code>df.Current=1350*2-df.Price.cumsum()
df
Out[308]:
Price Current
0 1350.00 1350.00
1 1.75 1348.25
2 3.50 1344.75
3 5.50 1339.25
</code></pre>
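<p>If you'd rather not hard-code the 1350, the same idea can be written in terms of the first price (a small sketch):</p>
<pre><code>df['Current'] = 2 * df['Price'].iloc[0] - df['Price'].cumsum()
</code></pre>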
|
python|pandas|dataframe
| 2
|
374,777
| 48,924,041
|
Remove a row in 3d array
|
<p>Given the index of a row, I would like to remove that row.</p>
<p>I tried the following:</p>
<pre><code>a.shape
Out[128]: (60, 3)
</code></pre>
<p>When I try to remove row number 14 from my 3D array <strong>a</strong> as follows:</p>
<pre><code>np.delete(a,14,axis=0)
a.shape
Out[130]: (60, 3)
</code></pre>
<p>I noticed that it doesn't make any change. I expected <code>a.shape</code> to be <code>(59, 3)</code> rather than <code>(60, 3)</code>.</p>
<p>What is wrong with my code?</p>
|
<p>Assignment will solve it, since <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.delete.html" rel="nofollow noreferrer">delete</a> returns a new array instead of working in place:</p>
<pre><code>a = np.delete(a,14,axis=0)
</code></pre>
|
python-3.x|numpy
| 1
|
374,778
| 49,316,668
|
Can we combine conditional statements using column indexes in Python?
|
<p><strong>For the following dataframe:</strong></p>
<pre><code>  Name  P1  P2  P3
0    A   1   0   2
1    B   1   1   1
2    C   0   5   6
</code></pre>
<p>I want to get yes where all P1, P2 and P3 are greater than 0.</p>
<p>Currently I am using either of the following methods:</p>
<hr>
<p><strong>Method1:</strong></p>
<pre><code>df['Check']= np.where((df['P1'] > 0) & (df['P2'] > 0) & (df['P3'] > 0),'Yes','No')
</code></pre>
<p><strong>Method2:</strong></p>
<pre><code>df.loc[(df['P1'] > 0) & (df['P2'] > 0) & (df['P3'] > 0), 'Check'] = "Yes"
</code></pre>
<p>I have a large dataset with a lot of columns where the conditions are to be applied.</p>
<p>Is there a shorter alternative to the multiple <strong><em>&</em></strong> conditions, wherein I won't have to write the conditions for each and every variable and instead use a combined index range for the multiple columns?</p>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>DataFrame.all</code></a> to check for all <code>True</code>s per row:</p>
<pre><code>cols = ['P1','P2','P3']
df['Check']= np.where((df[cols] > 0).all(axis=1),'Yes','No')
print (df)
Name P1 P2 P3 Check
0 A 1 0 2 No
1 B 1 1 1 Yes
2 C 0 5 6 No
print ((df[cols] > 0))
P1 P2 P3
0 True False True
1 True True True
2 False True True
print ((df[cols] > 0).all(axis=1))
0 False
1 True
2 False
dtype: bool
</code></pre>
|
python|pandas|numpy
| 0
|
374,779
| 48,919,302
|
Join in near index python
|
<p>Due to power failures at the station whose meteorological data I use, some timestamps are missing and I need to create those timestamps filled with <code>nan</code>. I can create the times normally (the data has a frequency of <code>10 Hz</code>). But when the station comes back online, the rounding of the dates that I use to build the new dataframe does not match, so a nearby timestamp filled with nan is created in addition to the one actually recorded when power returned at the station. I can create the dataframe, but when the two are joined with pandas join, the result contains both the dates I created and the recorded ones, all because of the rounding.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime
from os import listdir
from os.path import isfile, join
import datetime
def dateparse(a,b):
data = str(a)+' '+str(b)
return pd.datetime.strptime(data, '%Y-%m-%d %H:%M:%S:%f')
df = pd.read_csv('./CSV_PP_110_2016_010_0000.dat',sep=',',header=None,names=None,index_col=0,na_values=["-999.99"],usecols=[0,1,2,3,4,5,6,7,8,9,10,11,12,13],parse_dates=[[0,1]], date_parser=dateparse,dtype ={3: np.float32,4: np.float32,5: np.float32,6: np.float32,7: np.float32,8: np.float32,9: np.float32,10: np.float32,11: np.float32,12: np.float32,13: np.float32})
df.columns = ['u', 'v', 'w', 'Ts','CO2', 'H2O','Pressao','DiagCsat','CH4','T','sinal_CH4', 'Diag_ch4']
df['cod'] = '110'
df['cod_99'] = '-999.99'
df['ano'] = df.index.strftime('%Y')
df['dj'] = df.index.strftime('%j')
df['hr'] = df.index.strftime('%H%M')
df['seg_fre'] = df.index.strftime('%S.%f')
ano_i = df.index.strftime('%Y')[0]
ano_f = df.index.strftime('%Y')[-1]
dia_i = df.index.strftime('%d')[0]
dia_f = df.index.strftime('%d')[-1]
mes_i = df.index.strftime('%m')[0]
mes_f = df.index.strftime('%m')[-1]
df.seg_fre = (round(df.seg_fre.astype(float),1))
df.u = (round((df.u*13.1072/6).astype(float),5))
df.v = (round((df.v*13.1072/6).astype(float),5))
df.w = (round((df.w*1.6384).astype(float),5))
df.Ts = (round((df.Ts-10).astype(float),5))
df_index_i = df.index[0].strftime('%Y-%m-%d %H:%M:%S.%f')
df_index_f = df.index[-1].strftime('%Y-%m-%d %H:%M:%S.%f')
compare_i = ''+ str(ano_i)+'-'+ str(mes_i)+'-'+str(dia_i)+' ''23:59:59.906000'
compare_f = ''+ str(ano_f)+'-'+ str(mes_f)+'-'+str(dia_f)+' ''23:59:59.806000'
compare_ii = ''+ str(ano_i)+'-'+ str(mes_i)+'-'+str(dia_i)+' ''23:59:59.913000'
compare_ff = ''+ str(ano_f)+'-'+ str(mes_f)+'-'+str(dia_f)+' ''23:59:59.813000'
if df.shape[0]==864000:
df.to_csv('./CSV_110_'+df.ano[3]+'_'+df.dj[3]+'_0000.csv',sep=",",header=False,columns=['cod','ano','dj','hr','seg_fre','u', 'v', 'w', 'Ts','CO2', 'H2O', 'DiagCsat', 'CH4', 'sinal_CH4', 'Diag_ch4', 'T','Pressao'],index=False,na_rep='-999.99')
else:
if df_index_i == compare_i:
start_date = pd.to_datetime(compare_i)
end_date = pd.to_datetime(compare_f)
        d=pd.DataFrame(index=pd.date_range(start=start_date, end=end_date, periods=864000, freq='0.1S'))
result=df.join(d, how='outer')
result.to_csv('/home/lucas/Teste_padronizar/teste_1_mes/saida/CSV_110_'+df.ano[3]+'_'+df.dj[3]+'_0000.csv',sep=",",header=False,columns=['cod','ano','dj','hr','seg_fre','u', 'v', 'w', 'Ts','CO2', 'H2O', 'DiagCsat', 'CH4', 'sinal_CH4', 'Diag_ch4', 'T','Pressao'],index=False,na_rep='-999.99')
else:
print('erro index',f)
</code></pre>
<p>The index of my <code>df</code> load is:</p>
<pre><code>In [19]: df.index[0:5]
Out[19]:
DatetimeIndex(['2016-03-08 23:59:59.956000', '2016-03-09 00:00:00.056000',
               '2016-03-09 00:00:00.156000', '2016-03-09 00:00:00.256000',
               '2016-03-09 00:00:00.356000'],
              dtype='datetime64[ns]', name='0_1', freq=None)
</code></pre>
<p>But when the station goes back to work, the date is:</p>
<pre><code>In [17]: df.index[860000]
Out[18]: Timestamp('2016-03-09 23:55:41.006000')
</code></pre>
<p>And the result when I join is:</p>
<pre><code>In[27]: result.index[800000:800005]
Out[27]:
DatetimeIndex(['2016-03-09 12:03:44.006000', '2016-03-09 12:03:44.106000',
               '2016-03-09 12:03:44.206000', '2016-03-09 12:03:44.306000',
               '2016-03-09 12:03:44.406000'],
              dtype='datetime64[ns]', freq=None)
</code></pre>
<p>I think there may be another function different from the pandas join, but I did not find anything.</p>
|
<p>Solved using </p>
<pre><code>df.index = df.index.round('0.1S')
</code></pre>
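<p>i.e. rounding the recorded timestamps onto the same 100 ms grid as the generated index before joining, so the two line up. A minimal illustration of the rounding itself (the timestamp is one of the examples from the question):</p>
<pre><code>import pandas as pd

idx = pd.DatetimeIndex(['2016-03-09 23:59:59.913000'])
print(idx.round('0.1S'))
# DatetimeIndex(['2016-03-09 23:59:59.900000'], dtype='datetime64[ns]', freq=None)
</code></pre>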
|
python|pandas
| 0
|
374,780
| 49,097,905
|
Referring to previous groups in Pandas SeriesGroupBy
|
<p>I'm writing a Python script which compares the maximum values of each group. I think there must be a more elegant way, using methods offered by <code>pandas</code> or avoiding global variables such as <code>previous_max</code> in the following code snippet. Please tell me how to do it.</p>
<pre><code>import pandas as pd
import numpy as np
previous_max = 0
def f(x):
global previous_max
if x.max() >= previous_max:
previous_max = x.max()
return "ascending"
else:
previous_max = x.max()
return "descending"
df = pd.DataFrame({
'date': pd.date_range('2000-1-1', periods=100, freq='D'),
'val': np.random.random(100)
})
df['trend'] = df.groupby(pd.TimeGrouper(key='date', freq='10D'))['val'].transform(f)
</code></pre>
|
<p>Use:</p>
<ul>
<li><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>transform</code></a> by <code>max</code>, get difference by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.diff.html" rel="nofollow noreferrer"><code>diff</code></a> and compare by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.lt.html" rel="nofollow noreferrer"><code>lt</code></a></li>
<li>use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> with <code>transform</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.any.html" rel="nofollow noreferrer"><code>any</code></a> for <code>True</code>s per groups</li>
</ul>
<hr>
<pre><code>np.random.seed(456)
previous_max = 0
def f(x):
global previous_max
if x.max() >= previous_max:
previous_max = x.max()
return "ascending"
else:
previous_max = x.max()
return "descending"
df = pd.DataFrame({
'date': pd.date_range('2000-1-1', periods=30, freq='D'),
'val': np.random.random(30)
})
</code></pre>
<hr>
<pre><code>df['trend1'] = df.groupby(pd.TimeGrouper(key='date', freq='3D'))['val'].transform(f)
g = df.groupby(pd.TimeGrouper(key='date', freq='3D'))
df['trend'] = g['val'].transform('max').diff().lt(0)
df['trend'] = np.where(g['trend'].transform('any'), 'descending','ascending')
</code></pre>
<hr>
<pre><code>print (df)
date val trend1 trend
0 2000-01-01 0.248756 ascending ascending
1 2000-01-02 0.163067 ascending ascending
2 2000-01-03 0.783643 ascending ascending
3 2000-01-04 0.808523 ascending ascending
4 2000-01-05 0.625628 ascending ascending
5 2000-01-06 0.604114 ascending ascending
6 2000-01-07 0.885702 ascending ascending
7 2000-01-08 0.759117 ascending ascending
8 2000-01-09 0.181105 ascending ascending
9 2000-01-10 0.150169 descending descending
10 2000-01-11 0.435679 descending descending
11 2000-01-12 0.385273 descending descending
12 2000-01-13 0.575710 ascending ascending
13 2000-01-14 0.146091 ascending ascending
14 2000-01-15 0.686593 ascending ascending
15 2000-01-16 0.468804 descending descending
16 2000-01-17 0.569999 descending descending
17 2000-01-18 0.645701 descending descending
18 2000-01-19 0.723341 ascending ascending
19 2000-01-20 0.680671 ascending ascending
20 2000-01-21 0.180917 ascending ascending
21 2000-01-22 0.118158 descending descending
22 2000-01-23 0.242734 descending descending
23 2000-01-24 0.008183 descending descending
24 2000-01-25 0.360068 ascending ascending
25 2000-01-26 0.146042 ascending ascending
26 2000-01-27 0.542723 ascending ascending
27 2000-01-28 0.857103 ascending ascending
28 2000-01-29 0.200212 ascending ascending
29 2000-01-30 0.134633 ascending ascending
</code></pre>
|
python|pandas|pandas-groupby
| 0
|
374,781
| 49,200,989
|
calculate value based on value from previous value
|
<p>I have the following dataset, describing transactions for purchasing items within a frame (frameNo). The frame encompasses events that happened in a single minute; therefore, the "currentGold" value only depicts the amount of gold a player had when entering the frame. </p>
<p>What I am trying to do is calculate how much gold is left after each transaction, grouped by the frameNo, since players can generate gold in between events (and in this very example in between frames).</p>
<pre><code> gameId platformId frameNo timestamp itemId currentGold Cost
0 948881246 BR1 1.0 4451 2010 500 50
1 948881246 BR1 1.0 5129 1055 500 450
2 948881246 BR1 6.0 302762 1038 1300 1300
3 948881246 BR1 7.0 417640 1001 300 300
4 948881246 BR1 8.0 420211 1036 759 350
5 948881246 BR1 8.0 421285 1036 759 350
6 948881246 BR1 8.0 421904 2010 759 50
7 948881246 BR1 10.0 555882 3133 1220 310
8 948881246 BR1 10.0 557963 1018 1220 800
9 948881246 BR1 10.0 558777 2010 1220 50
10 948881246 BR1 12.0 697438 3508 850 200
11 948881246 BR1 12.0 701438 1051 850 400
12 948881246 BR1 12.0 701796 1042 850 300
13 948881246 BR1 12.0 703291 2010 850 50
14 948881246 BR1 15.0 848427 3086 1397 500
15 948881246 BR1 15.0 849077 3006 1397 500
16 948881246 BR1 15.0 851125 3363 1397 0
</code></pre>
<p>This is the result I would like to achieve:</p>
<pre><code> gameId platformId frameNo timestamp itemId currentGold Cost availGold
0 948881246 BR1 1.0 4451 2010 500 50 500
1 948881246 BR1 1.0 5129 1055 500 450 450
2 948881246 BR1 6.0 302762 1038 1300 1300 1300
3 948881246 BR1 7.0 417640 1001 300 300 300
4 948881246 BR1 8.0 420211 1036 759 350 759
5 948881246 BR1 8.0 421285 1036 759 350 409
6 948881246 BR1 8.0 421904 2010 759 50 59
7 948881246 BR1 10.0 555882 3133 1220 310 1220
8 948881246 BR1 10.0 557963 1018 1220 800 910
9 948881246 BR1 10.0 558777 2010 1220 50 110
10 948881246 BR1 12.0 697438 3508 850 200 850
11 948881246 BR1 12.0 701438 1051 850 400 650
12 948881246 BR1 12.0 701796 1042 850 300 350
13 948881246 BR1 12.0 703291 2010 850 50 50
14 948881246 BR1 15.0 848427 3086 1397 500 1397
15 948881246 BR1 15.0 849077 3006 1397 500 897
16 948881246 BR1 15.0 851125 3363 1397 0 397
</code></pre>
<p>I've tried to iterate via iterrows(); however, I think it's impossible to reach the previous row that way, and the dependency on the (gameId, platformId, frameNo) key also gives me a bit of a headache.</p>
|
<p>Is it possible that you have an error in your "desired" output in the following rows?</p>
<pre><code>948881246 BR1 12.0 701438 1051 850 400 650
948881246 BR1 12.0 701796 1042 850 300 350
948881246 BR1 12.0 703291 2010 850 50 50
</code></pre>
<p>From the logic you've described and the previous rows, the final column (<code>availGold</code>) here should be <code>250</code> in the second row and <code>-50</code> in the final row. If that's the case, you can achieve this by using <code>cumsum</code> and <code>shift</code>:</p>
<pre><code>df['frameCost'] = df.groupby(['gameId', 'platformId', 'frameNo'])['Cost'].cumsum()
df['availGold'] = df['currentGold'] - df.groupby(['gameId', 'platformId', 'frameNo'])['frameCost'].shift(1).fillna(0)
</code></pre>
|
python-3.x|pandas|pandas-groupby|cumulative-sum
| 1
|
374,782
| 49,273,684
|
Output is NaN; correct distance not being calculated
|
<pre><code>Data input:
cell_id Lat_Long Lat Long
15327 28.46852_76.99512 28.46852 76.99512
52695 28.46852_76.99512 28.46852 76.99512
52692 28.46852_76.99512 28.46852 76.99512
29907 28.46852_76.99512 28.46852 76.99512
29905 28.46852_76.99512 28.46852 76.99512
</code></pre>
<p>Applying Geodesic to find the distance between cell_ids creates the
distance column, but all values are NaN.</p>
<pre><code> Code:
Geo = Geodesic.WGS84
n=len(df3)-1
for i in range(0, n):
#df3=df3['Lat'].astype(float)
Lat1=float(df3['Lat'].iloc[i])
Long1=float(df3['Long'].iloc[i])
Lat2=float(df3['Lat'].iloc[i+1])
Long2=float(df3['Long'].iloc[i+1])
df3['dis']=pd.Series(Geo.Inverse( Lat1, Long1, Lat2, Long2))
if(i==n):
df3['dis']=pd.Series()
print df3
</code></pre>
<p>output:</p>
<pre><code> cellid Lat_Long Lat Long dis
15327 28.46852_76.99512 28.46852 76.99512 NaN
52695 28.46852_76.99512 28.46852 76.99512 NaN
52692 28.46852_76.99512 28.46852 76.99512 NaN
29907 28.46852_76.99512 28.46852 76.99512 NaN
29905 28.46852_76.99512 28.46852 76.99512 NaN
39502 28.4572_77.0008 28.4572 77.0008 NaN
</code></pre>
<p>What is the problem in this code?</p>
|
<p><code>Geo.Inverse</code> returns a dictionary not a single value. Check the <a href="https://geographiclib.sourceforge.io/html/python/code.html" rel="nofollow noreferrer">documentation</a>.</p>
<p>The distance is returned with the key <code>s12 – the distance from the first point to the second in meters</code></p>
<pre><code>n = len(df) - 1
for i in range(0, n):
    Lat1 = float(df['Lat'].iloc[i])
    Long1 = float(df['Long'].iloc[i])
    Lat2 = float(df['Lat'].iloc[i + 1])
    Long2 = float(df['Long'].iloc[i + 1])
    # "s12" is the distance from the first point to the second in metres;
    # assign it to this row only (the last row keeps NaN)
    df.loc[df.index[i], 'dis'] = Geo.Inverse(Lat1, Long1, Lat2, Long2)["s12"]
</code></pre>
<p>This will result in:</p>
<pre><code> cell_id Lat_Long Lat Long dis
0 15327 28.46852_76.99512 28.46852 76.99512 0.0
1 52695 28.46852_76.99512 28.46852 76.99512 0.0
2 52692 28.46852_76.99512 28.46852 76.99512 0.0
3 29907 28.46852_76.99512 28.46852 76.99512 0.0
4 29905 28.46852_76.99512 28.46852 76.99512 0.0
</code></pre>
<p>By the way, do you have to use geodesic? You can replace the distance function with a vectorized one that accepts numpy.ndarray, and you would just pass your Lat and Long columns and then a shifted version of them. This will greatly enhance performance. </p>
<p>Check <a href="https://www.youtube.com/watch?v=HN5d490_KKk" rel="nofollow noreferrer">this</a> PyCon tech talk about vectorized functions, lucky you; it is about calculating distance between two points!</p>
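<p>For example, a minimal sketch of such a vectorized function using the haversine formula (an approximation on a spherical Earth, not the exact WGS84 geodesic), applied to consecutive rows of the Lat/Long columns above:</p>
<pre><code>import numpy as np

def haversine(lat1, lon1, lat2, lon2):
    # inputs can be pandas Series / numpy arrays; result is in metres
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + \
        np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * np.arcsin(np.sqrt(a))

# distance from each row to the next; the last row becomes NaN
df['dis'] = haversine(df['Lat'], df['Long'], df['Lat'].shift(-1), df['Long'].shift(-1))
</code></pre>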
|
python|pandas|geodesic-sphere
| 0
|
374,783
| 48,952,625
|
Split every row containing long text into multiple rows in pandas
|
<p>I have a DataFrame which has a string column such as below:</p>
<pre><code>id text label
1 this is long string with many words 1
2 this is a middle string 0
3 short string 1
</code></pre>
<p>and I want to convert this DataFrame to another DataFrame based on the string length, i.e. (<code>df['text'].str.len > 3</code>):</p>
<pre><code>id text label
1 this is long 1
1 string with many 1
1 words 1
2 this is a 0
2 middle string 0
3 short string 1
</code></pre>
<p>This is my code:</p>
<pre><code>pd.concat(df['text'].str.len() > 200)
</code></pre>
<p>But it is wrong.</p>
|
<p>You could</p>
<pre><code>In [1257]: n = 3
In [1279]: df.set_index(['label', 'id'])['text'].str.split().apply(
lambda x: pd.Series([' '.join(x[i:i+n]) for i in range(0, len(x), n)])
).stack().reset_index().drop('level_2', 1)
Out[1279]:
label id 0
0 1 1 this is long
1 1 1 string with many
2 1 1 words
3 0 2 this is a
4 0 2 middle string
5 1 3 short string
</code></pre>
<hr>
<p>Details</p>
<pre><code> label text id
0 1 this is long string with many words 1
1 0 this is a middle string 2
2 1 short string 3
</code></pre>
|
python|pandas
| 1
|
374,784
| 49,127,947
|
how to change an object-form array to a normal one
|
<p>Here is a <code>.txt</code> data file, where the first 2 lines are headers: </p>
<pre><code> REC OBS REPORT TIME STATION LATI- LONGI- ELEV STN PR STN DSLP ALTIM AIR.T DEWPT R.HUM WIND WIND HOR 3H PR 24H PR |
TYPE TYPE YYYYMMDDHHMM BBSSS TUDE TUDE (M) (HP=MB) (HP=MB) (HP=MB) (C) (C) (%) DIR SPD(M/S) VIS(M) (KG/M2) (KG/M2) | PRES PMSL TMDB ALSE TMDP REHU WDIR WSPD TP03 TP24 HOVI
ADPSFC SYNOP 201209191754 72533 41.00 -85.20 252 990.3 1020.1 -9999.9 18.3 2.2 -9999.9 240.0 4.1 16000.0 -9999.9 -9999.9 | 99030.00 102010.00 291.45 -9999.90 275.35 -9999.90 240.00 4.10 -9999.90 -9999.90 16000.00
ADPSFC SYNOP 201209191754 72438 39.73 -86.27 246 991.7 1020.5 -9999.9 18.3 3.3 -9999.9 200.0 3.6 16000.0 -9999.9 -9999.9 | 99170.00 102050.00 291.45 -9999.90 276.45 -9999.90 200.00 3.60 -9999.90 -9999.90 16000.00
ADPSFC SYNOP 201209191756 72423 38.18 -85.73 149 1004.1 1021.5 -9999.9 19.4 1.1 -9999.9 0.0 0.0 16000.0 -9999.9 -9999.9 | 100410.00 102150.00 292.55 -9999.90 274.25 -9999.90 0.00 0.00 -9999.90 -9999.90 16000.00
ADPSFC SYNOP 201209192054 72533 41.00 -85.20 252 988.4 1017.9 -9999.9 19.4 2.8 -9999.9 200.0 6.2 16000.0 0.0 -9999.9 | 98840.00 101790.00 292.55 -9999.90 275.95 -9999.90 200.00 6.20 0.00 -9999.90 16000.00
ADPSFC SYNOP 201209192056 72423 38.18 -85.73 149 1001.8 1019.2 -9999.9 21.7 0.6 -9999.9 0.0 0.0 16000.0 0.0 -9999.9 | 100180.00 101920.00 294.85 -9999.90 273.75 -9999.90 0.00 0.00 0.00 -9999.90 16000.00
</code></pre>
<p>Actually, it can be downloaded <a href="https://rda.ucar.edu/datasets/ds461.0/#!forms/ADP_subset.php?dsnum=ds461.0&gindex=1&_da=y" rel="nofollow noreferrer">here</a>.<br>
I read and store these data in <code>obs</code>, a <code>dataframe</code> using <code>pd.read_table()</code></p>
<pre><code>obs.values[1]
Out[4]: array([ ' ADPSFC SYNOP 201208312354 72533 41.00 -85.20 252 988.4 1017.6 -9999.9 27.2 22.8 -9999.9 200.0 2.1 16000.0 -9999.9 -9999.9 | 98840.00 101760.00 300.35 -9999.90 295.95 -9999.90 200.00 2.10 -9999.90 -9999.90 16000.00'], dtype=object)
</code></pre>
<p>However, in the array, it is a <code>string</code>.<br>
Applying <code>splitlines()</code>, we get the result:</p>
<pre><code>str(obs.values[1]).splitlines()
Out[5]: ["[ ' ADPSFC SYNOP 201208312354 72533 41.00 -85.20 252 988.4 1017.6 -9999.9 27.2 22.8 -9999.9 200.0 2.1 16000.0 -9999.9 -9999.9 | 98840.00 101760.00 300.35 -9999.90 295.95 -9999.90 200.00 2.10 -9999.90 -9999.90 16000.00']"]
</code></pre>
<p>But it is still hard for me to use the data (for analysis or plotting).<br>
My expected result should be an array, which can be indexed or sliced.</p>
<pre><code>[ADPSFC SYNOP 201209192056 72423 38.18 -85.73 149 1001.8 1019.2 -9999.9 21.7 0.6 -9999.9 0.0 0.0 16000.0 0.0 -9999.9 100180.00 101920.00 294.85 -9999.90 273.75 -9999.90 0.00 0.00 0.00 -9999.90 16000.00]
</code></pre>
<p>Any help whould be appreciated.</p>
|
<p>Maybe you should call <code>delim_whitespace=True</code> and manually rewrite your <code>columns</code>.</p>
<pre><code>obs = pd.read_table('sample.txt', header=1, delim_whitespace=True).drop('|', axis=1)
obs.columns = ['REC_TYPE','OBS_TYPE','REPORT_TIME','STATION_BBSSS','LATITUDE','LONGITUDE','ELEV','STN_PR',
'STN_DSLP','ALTIM','AIR_T','DEWPT','R_HUM','WIND_DIR','WIND_SPD','HOR_VIS','3H_PR','24H_PR',
'PRES','PMSL','TMDB','ALSE','TMDP','REHU','WDIR','WSPD','TP03','TP24','HOVI']
print(obs.head())
REC_TYPE OBS_TYPE REPORT_TIME STATION_BBSSS LATITUDE LONGITUDE ELEV \
0 ADPSFC SYNOP 201209191754 72533 41.00 -85.20 252
1 ADPSFC SYNOP 201209191754 72438 39.73 -86.27 246
2 ADPSFC SYNOP 201209191756 72423 38.18 -85.73 149
3 ADPSFC SYNOP 201209192054 72533 41.00 -85.20 252
4 ADPSFC SYNOP 201209192056 72423 38.18 -85.73 149
</code></pre>
<p>Note that:</p>
<ul>
<li>I only get the second header row (<code>header=1</code>)</li>
<li>I drop the column <code>'|'</code> that contains nothing interesting (<code>drop('|',axis=1)</code>)</li>
</ul>
<p>Then your <code>.txt</code> data are stored in a <code>dataframe</code> that is easy to use for analysis and plotting.<br>
See the <a href="http://pandas.pydata.org/pandas-docs/stable/10min.html" rel="nofollow noreferrer">documentation</a> for more details about using <code>pandas</code>.</p>
<hr>
<p><strong>Note</strong> that you can convert <code>obs.REPORT_TIME</code> into <code>datetime</code>:</p>
<pre><code>obs.REPORT_TIME = pd.to_datetime(obs.REPORT_TIME,format='%Y%m%d%H%M')
</code></pre>
|
python|pandas|numpy
| 1
|
374,785
| 49,122,116
|
Error importing keras backend - cannot import name has_arg
|
<p>I attempt to import the keras backend to call get_session as follows, but I encounter an error:
<a href="https://i.stack.imgur.com/CQHYI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CQHYI.png" alt="enter image description here"></a></p>
|
<p>There should be no need to import the tensorflow_backend explicitly.</p>
<p>Look at the first lines of an example from the <a href="https://keras.io/backend/" rel="nofollow noreferrer">Keras documentation</a>:</p>
<pre><code># TensorFlow example
>>> from keras import backend as K
>>> tf_session = K.get_session()
[...]
</code></pre>
<p>As long as you are using the tensorflow backend, the get_session() function should be available.</p>
|
tensorflow|keras
| 1
|
374,786
| 49,141,730
|
JSON Normalize On Extremely Nested JSON Data Structure
|
<p>I have the following JSON data structure. I am trying to get it into a Pandas DataFrame. </p>
<p>The pandas.io.json json_normalize works OK, except for the 'tunnels-in' and 'tunnels-out' sections. These are lists with some nested dictionaries inside of them. I have tried almost every format of the json_normalize examples I have seen, without success. About all I can get working is the following. </p>
<p><code>json_normalize(json_dict['data']['viptela-oper-vpn']['dpi']['flows'])</code></p>
<p>As soon as I add the variables to define additional structure, I just can't get past the errors. I looked into alternative ways to do this - documented here - which do seem to work, but it doesn't seem to deal with any concept of vertical structure. Here, we have a list of flows - and I want to flatten each flow out into separate columns - where each flow's value is in a different row of the same column.</p>
<p><a href="https://towardsdatascience.com/flattening-json-objects-in-python-f5343c794b10" rel="nofollow noreferrer">https://towardsdatascience.com/flattening-json-objects-in-python-f5343c794b10</a></p>
<p>Does anyone know of a way to use the normalize function, while preserving the nested list of dictionaries? And as you can see, not every flow has the tunnels-in/tunnels-out. That was another complicating factor to my trying to flatten it out myself.</p>
<p>Any thoughts are greatly appreciated.</p>
<p>Thanks very much,</p>
<p><strong>Data Structure</strong></p>
<pre><code>{
"data": {
"viptela-oper-vpn": {
"dpi": {
"flows": [
{
"vpn-id": 1,
"src-ip": "1.1.0.200",
"dst-ip": "1.3.0.200",
"src-port": 65369,
"dst-port": 1967,
"proto": "udp",
"application": "udp",
"family": "Network Service",
"active-since": "2018-02-28T22:51:54+00:00",
"packets": 2,
"octets": 132,
"tunnels-in": [
{
"index_me": 1,
"local-tloc": {
"ip": "1.1.1.104",
"color": "private2",
"encap": "ipsec"
},
"remote-tloc": {
"ip": "1.1.1.103",
"color": "private2",
"encap": "ipsec"
},
"packets": 1,
"octets": 80,
"start-time": "2018-02-28T22:51:54+00:00"
}
],
"tunnels-out": [
{
"index_me": 1,
"local-tloc": {
"ip": "1.1.1.104",
"color": "private2",
"encap": "ipsec"
},
"remote-tloc": {
"ip": "1.1.1.103",
"color": "mpls",
"encap": "ipsec"
},
"packets": 1,
"octets": 52,
"start-time": "2018-02-28T22:51:54+00:00"
}
]
},
{
"vpn-id": 1,
"src-ip": "1.1.0.200",
"dst-ip": "1.3.0.200",
"src-port": 65529,
"dst-port": 1967,
"proto": "udp",
"application": "udp",
"family": "Network Service",
"active-since": "2018-02-28T22:52:03+00:00",
"packets": 2,
"octets": 132,
"tunnels-in": [
{
"index_me": 1,
"local-tloc": {
"ip": "1.1.1.104",
"color": "private2",
"encap": "ipsec"
},
"remote-tloc": {
"ip": "1.1.1.103",
"color": "private2",
"encap": "ipsec"
},
"packets": 1,
"octets": 80,
"start-time": "2018-02-28T22:52:03+00:00"
}
],
"tunnels-out": [
{
"index_me": 1,
"local-tloc": {
"ip": "1.1.1.104",
"color": "private2",
"encap": "ipsec"
},
"remote-tloc": {
"ip": "1.1.1.103",
"color": "mpls",
"encap": "ipsec"
},
"packets": 1,
"octets": 52,
"start-time": "2018-02-28T22:52:03+00:00"
}
]
},
{
"vpn-id": 512,
"src-ip": "69.26.45.133",
"dst-ip": "198.19.200.2",
"src-port": 11895,
"dst-port": 22,
"proto": "tcp",
"application": "ssh",
"family": "Encrypted",
"active-since": "2018-02-28T22:42:15+00:00",
"packets": 1498,
"octets": 797954
},
{
"vpn-id": 512,
"src-ip": "198.19.200.2",
"dst-ip": "69.26.45.139",
"src-port": 514,
"dst-port": 514,
"proto": "udp",
"application": "syslog",
"family": "Application Service",
"active-since": "2018-02-28T22:50:59+00:00",
"packets": 8,
"octets": 2820
}
]
}
}
}
}
</code></pre>
<p><strong>Function So Far</strong></p>
<pre><code>def myprint(file):
file_var = ''
with open(file) as f:
file_var = f.read()
extract_json_dict = re.compile('(\\n{\\n)(.*)(\\n}\\n)', re.DOTALL)
json_string = extract_json_dict.search(file_var).group(0)
json_dict = json.loads(json_string)
df = json_normalize(json_dict['data']['viptela-oper-vpn']['dpi']['flows'])
</code></pre>
<p><strong>Columns That Show Up Now</strong></p>
<p>['active-since', 'application', 'dst-ip', 'dst-port', 'family', 'octets',
'packets', 'proto', 'src-ip', 'src-port', 'tunnels-in', 'tunnels-out',
'vpn-id']</p>
<p><strong>Columns I'd Like To Add In Addition To The Ones Displayed Above</strong></p>
<p>In essence, I want to 'flatten' those two list-valued columns into additional columns, with each flow's values in its own row.</p>
<p>['tunnels-in_index_me',
'tunnels-in_remote-tloc_ip',
'tunnels-in_remote-tloc_color',
'tunnels-in_remote-tloc_encap',
'tunnels-out_remote-tloc_ip']</p>
<p><strong>Update 3/8/2018</strong></p>
<p>This seems to do what I want for the columns that have lists of dictionaries in them, but it needs the [0] identifier for the flow number. I'm not sure if anyone knows a way to get this to work for all flows rather than one at a time. If that can be done, I should be able to concatenate or merge based on the index number. It would be even better to do the whole thing with a single json_normalize line, but beyond the issue with the [0], that also seems to run into the problem that not every flow has the nested list of dictionaries. I'll keep trying with this one, but any thoughts are appreciated.</p>
<pre><code>json_normalize(json_dict['data']['viptela-oper-vpn']['dpi']['flows'][0]['tunnels-in'])
</code></pre>
|
<p>I'm struggling with this right now. I got around the selection-by-index <code>[0]</code> issue by pulling it out into another pandas DataFrame, like so:</p>
<pre><code>df = json_normalize(pd.DataFrame(list(json_dict['data']['viptela-oper-vpn']['dpi']['flows']))['tunnels-in'])
</code></pre>
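<p>For what it's worth, here is a rough sketch (untested against the original file; the <code>flow_idx</code> helper key is something I made up) that normalizes the nested tunnel lists per flow and joins them back onto the flat flow columns, so flows without <code>tunnels-in</code>/<code>tunnels-out</code> still survive as rows with NaNs in the tunnel columns:</p>
<pre><code>import pandas as pd
from pandas.io.json import json_normalize  # newer pandas: from pandas import json_normalize

flows = json_dict['data']['viptela-oper-vpn']['dpi']['flows']

# Tag each flow with an index and default the optional tunnel lists to []
for i, flow in enumerate(flows):
    flow['flow_idx'] = i
    flow.setdefault('tunnels-in', [])
    flow.setdefault('tunnels-out', [])

# One row per flow; drop the raw list columns, they are re-joined below
base = json_normalize(flows, sep='_').drop(['tunnels-in', 'tunnels-out'], axis=1)

# One row per tunnel entry, keyed back to its flow via flow_idx
tin = json_normalize(flows, record_path='tunnels-in', meta=['flow_idx'],
                     record_prefix='tunnels-in_', sep='_')
tout = json_normalize(flows, record_path='tunnels-out', meta=['flow_idx'],
                      record_prefix='tunnels-out_', sep='_')

# Left merges keep flows that have no tunnel entries (their tunnel columns become NaN)
df = (base.merge(tin, on='flow_idx', how='left')
          .merge(tout, on='flow_idx', how='left'))
</code></pre>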
|
python|json|pandas
| 0
|
374,787
| 48,920,430
|
Resampling hourly data to 6 hours
|
<pre><code> Timestamp Value
0 2017-11-22 09:00:00 12.356965
1 2017-11-22 10:00:00 26.698426
2 2017-11-22 11:00:00 13.153104
3 2017-11-22 12:00:00 15.425182
4 2017-11-22 13:00:00 15.161085
5 2017-11-22 14:00:00 17.038580
6 2017-11-22 15:00:00 11.035375
7 2017-11-22 16:00:00 5.208686
8 2017-11-22 17:00:00 6.026359
9 2017-11-22 18:00:00 6.259712
10 2017-11-22 19:00:00 21.792882
11 2017-11-22 20:00:00 9.053889
</code></pre>
<p>Let's say the above is my dataframe. I need to resample the data over 6-hour windows, so for 9:00 the value should be the average of the data from 9, 10, 11, 12, 13 and 14.
Similarly for 10:00, the value should be the average of the data from 10, 11, 12, 13, 14 and 15, and so on.</p>
|
<p>You can use <code>rolling.mean</code>:</p>
<pre><code>df.set_index('Timestamp').rolling('6h').mean()
Value
Timestamp
2017-11-22 09:00:00 12.356965
2017-11-22 10:00:00 19.527696
2017-11-22 11:00:00 17.402832
2017-11-22 12:00:00 16.908419
2017-11-22 13:00:00 16.558952
2017-11-22 14:00:00 16.638890
2017-11-22 15:00:00 16.418625
2017-11-22 16:00:00 12.837002
2017-11-22 17:00:00 11.649211
2017-11-22 18:00:00 10.121633
2017-11-22 19:00:00 11.226932
2017-11-22 20:00:00 9.896151
</code></pre>
<hr>
<p>Alternative using <code>asfreq</code> + <code>rolling.mean</code> + <code>shift</code>:</p>
<pre><code>df.set_index('Timestamp').asfreq('h').rolling(6).mean().shift(-5)
Value
Timestamp
2017-11-22 09:00:00 16.638890
2017-11-22 10:00:00 16.418625
2017-11-22 11:00:00 12.837002
2017-11-22 12:00:00 11.649211
2017-11-22 13:00:00 10.121633
2017-11-22 14:00:00 11.226932
2017-11-22 15:00:00 9.896150
2017-11-22 16:00:00 NaN
2017-11-22 17:00:00 NaN
2017-11-22 18:00:00 NaN
2017-11-22 19:00:00 NaN
2017-11-22 20:00:00 NaN
</code></pre>
<p>The values are the same as in the first result, shifted up by 5 places, so each timestamp now shows the forward-looking 6-hour average the question asks for.</p>
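<p>A third equivalent sketch (assuming the data is already hourly with no gaps) avoids the shift by rolling over the reversed frame:</p>
<pre><code>s = df.set_index('Timestamp')['Value']
# Reverse, take a trailing 6-row mean, then reverse back: each timestamp now
# averages itself and the following 5 hours (NaN where fewer than 6 remain).
forward_mean = s[::-1].rolling(6).mean()[::-1]
</code></pre>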
|
python|pandas
| 3
|
374,788
| 49,108,757
|
Error installing pandas and quandl via pip - windows
|
<p>Good morning, I'm trying to install pandas and quandl. I used <code>pip install quandl</code> and <code>pip install pandas</code>, but the feedback for both is:</p>
<pre><code>Command "C:\Python34\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\tomsa\\AppData\\Local\\Temp\\pip-build-m2kos9k2\\pandas\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\tomsa\AppData\Local\Temp\pip-q2r4p42_-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\tomsa\AppData\Local\Temp\pip-build-m2kos9k2\pandas\
</code></pre>
<p>I also tried installing without cached files, but that doesn't work either.</p>
|
<p>Solved by installing them manually from <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#pandas" rel="nofollow noreferrer">here</a> via</p>
<pre><code>pip install pandas‑0.20.3‑cp34‑cp34m‑win_amd64.whl
</code></pre>
<p>run from the directory containing the downloaded wheel (normally the Downloads folder on Windows, if the wheel has not been moved somewhere else)</p>
|
python-3.x|pandas|pip|quandl
| 1
|
374,789
| 49,204,671
|
Pandas : Boolean indexing on multiple columns
|
<p>I have a data frame as below.</p>
<pre><code>In [23]: data2 = [{'a': 'x', 'b': 'y','c':'q'}, {'a': 'x', 'b': 'p', 'c': 'q'}, {'a':'p', 'b':'q'},{'a':'q', 'b':'y','c':'q'}]
In [26]: df = pd.DataFrame(data2)
In [27]: df
Out[27]:
a b c
0 x y q
1 x p q
2 p q NaN
3 q y q
</code></pre>
<p>I want to do boolean indexing to filter the rows in which either column holds 'x' or 'y'. I'm doing this as follows:</p>
<pre><code>In [29]: df[df['a'].isin(['x','y']) | (df['b'].isin(['x','y']))]
Out[29]:
a b c
0 x y q
1 x p q
3 q y q
</code></pre>
<p>But I have over 50 columns to check, and checking each column individually does not seem very pythonic.
I tried</p>
<pre><code>In [30]: df[df[['a','b']].isin(['x','y'])]
</code></pre>
<p>But the output is not what I expect; I get the following:</p>
<pre><code>Out[30]:
a b c
0 x y NaN
1 x NaN NaN
2 NaN NaN NaN
3 NaN y NaN
</code></pre>
<p>I can drop the rows that are all NaN, but values are still missing from the rows that remain.</p>
<p>For example, in row 0 column c is NaN, but I need that value.</p>
<p>Any suggestions on how to do this?</p>
|
<p>You can compare your df with 'x' and 'y' and then take a logical or to find the rows containing either 'x' or 'y'. Then use the boolean array as an index to select those rows.</p>
<pre><code>df.loc[(df.eq('x') | df.eq('y')).any(1)]
Out[68]:
a b c
0 x y q
1 x p q
3 q y q
</code></pre>
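<p>An equivalent formulation (just another way to write the same check) uses <code>isin</code> across the whole DataFrame, which scales to any number of columns:</p>
<pre><code># isin builds a boolean mask over every column; any(axis=1) keeps rows
# where at least one column holds 'x' or 'y'
df[df.isin(['x', 'y']).any(axis=1)]
</code></pre>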
|
python|pandas
| 2
|
374,790
| 49,165,672
|
Using pandas to find the max value for specific rows
|
<p>I've got a csv that looks like this (there are more years):</p>
<pre><code>year,title_field,value
2009,Total Housing Units,39499
2009,Vacant Housing Units,3583
2009,Occupied Housing Units,35916
2008,Total Housing Units,41194
2008,Vacant Housing Units,4483
2008,Occupied Housing Units,36711
2009,Owner Occupied,18057
2009,Renter Occupied,17859
2008,Owner Occupied,17340
2008,Renter Occupied,19371
2009,Median Gross Rent,769
2008,Median Gross Rent,768
</code></pre>
<p>I need to find the max value of all Vacant Housing Units. </p>
<p>So far, I've got this:</p>
<pre><code>import pandas as pd

df = pd.read_csv("denton_housing.csv", names=("year", "title_field", "value"))
inds = df.groupby(['title_field'])['value'].transform(max) == df['value']
df = df[inds]
df.reset_index(drop=True, inplace=True)
print(df)
</code></pre>
<p>That code gives me this:</p>
<pre><code> year title_field value
0 year title_field value
1 2014 Total Housing Units 49109
2 2014 Occupied Housing Units 46295
3 2008 Vacant Housing Units 4483
4 2014 Owner Occupied 21427
5 2014 Renter Occupied 24868
6 2014 Median Gross Rent 905
</code></pre>
<p>I only need it to output:</p>
<pre><code>2008 Vacant Housing Units 4483
</code></pre>
|
<p>I think you need <code>idxmax</code>:</p>
<pre><code>df.loc[[df.groupby(['title_field'])['value'].idxmax().loc['Vacant Housing Units']]]
Out[92]:
year title_field value
4 2008 Vacant Housing Units 4483
</code></pre>
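<p>A simpler sketch of the same idea, assuming the <code>value</code> column is numeric: filter the category first and then take its <code>idxmax</code>:</p>
<pre><code># keep only the rows for the category of interest, then locate its maximum
vacant = df[df['title_field'] == 'Vacant Housing Units']
vacant.loc[[vacant['value'].idxmax()]]
</code></pre>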
|
python|pandas|csv
| 1
|
374,791
| 49,277,640
|
How to extract values from a Pandas DataFrame, rather than a Series (without referencing the index)?
|
<p>I am trying to return a specific item from a Pandas DataFrame via conditional selection (and do not want to have to reference the index to do so).</p>
<p>Here is an example:</p>
<p>I have the following dataframe:</p>
<pre><code> Code Colour Fruit
0 1 red apple
1 2 orange orange
2 3 yellow banana
3 4 green pear
4 5 blue blueberry
</code></pre>
<p>I enter the following code to search for the code for blueberries:</p>
<pre><code>df[df['Fruit'] == 'blueberry']['Code']
</code></pre>
<p>This returns:</p>
<pre><code>4 5
Name: Code, dtype: int64
</code></pre>
<p>which is of type:</p>
<pre><code>pandas.core.series.Series
</code></pre>
<p>but what I actually want to return is the number 5 of type:</p>
<pre><code>numpy.int64
</code></pre>
<p>which I can do if I enter the following code:</p>
<pre><code>df[df['Fruit'] == 'blueberry']['Code'][4]
</code></pre>
<p>i.e. referencing the index to give the number 5, but I do not want to have to reference the index!</p>
<p>Is there another syntax that I can deploy here to achieve the same thing?</p>
<p>Thank you!...</p>
<p>Update:</p>
<p>One further idea is this code:</p>
<pre><code>df[df['Fruit'] == 'blueberry']['Code'][df[df['Fruit']=='blueberry'].index[0]]
</code></pre>
<p>However, this does not seem particularly elegant (and it references the index). Is there a more concise and precise method that does not need to reference the index or is this strictly necessary?</p>
<p>Thanks!...</p>
|
<p>Let's try this:</p>
<pre><code>df.loc[df['Fruit'] == 'blueberry','Code'].values[0]
</code></pre>
<p>Output:</p>
<pre><code>5
</code></pre>
<p>First, use <code>.loc</code> to access the values in your dataframe, using boolean indexing for the row selection and the index label for the column selection. Then convert the returned Series to an array of values; since there is only one value in that array, you can index it with <code>[0]</code> to get the scalar value out of the single-element array.</p>
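<p>Two equivalent variants (sketches, assuming the filter matches exactly one row) avoid going through the underlying array:</p>
<pre><code># positional access on the filtered Series
df.loc[df['Fruit'] == 'blueberry', 'Code'].iloc[0]

# .item() returns the scalar and raises if there is not exactly one match
df.loc[df['Fruit'] == 'blueberry', 'Code'].item()
</code></pre>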
|
python|pandas|dataframe
| 5
|
374,792
| 49,143,496
|
compare two pandas columns of mixed data datatypes
|
<p>I have a dataframe as follows; columns A and Refer hold values of type str, float and int. I have to compare them and create a new column: if both values are the same, the verdict is pass, otherwise fail. This would be very simple if all the values were strings, but before the comparison any numeric value in column A that is a decimal ending in .0 must be rounded; for example, in row 3 '1.0' must be changed to '1' before comparing with the Refer column.</p>
<pre><code> A Refer
0 usa usa
1 1 1
2 india usa
3 1.0 1
4 1.1 1.1
5 1.1 1.2
6 0.888 0.898
7 0.888 0.888
</code></pre>
<p>and the output I am expecting is:</p>
<pre><code> A Refer verdict
0 usa usa pass
1 1 1 pass
2 india usa fail
3 1.0 1 pass
4 1.1 1.1 pass
5 1.1 1.2 fail
6 0.888 0.898 fail
7 0.888 0.888 pass
</code></pre>
<p>So I want to create a function that checks each row: if the value is numeric, check whether it is a float or an int; if it is a float ending in '.0', truncate/remove the decimal, otherwise keep the value as is.
If the value is a string, the comparison is straightforward.</p>
<p>Can anyone please help?</p>
|
<p>IIUC, there are lots of built-in functions in pandas for this.</p>
<p>Update: using <code>to_numeric</code>:</p>
<pre><code>df.apply(pd.to_numeric,errors='ignore',axis=1).nunique(1).eq(1).map({True:'Pass',False:'Fail'})
Out[272]:
0 Pass
1 Pass
2 Fail
3 Pass
4 Pass
5 Fail
6 Fail
7 Pass
dtype: object
</code></pre>
<p>Then assign it back:</p>
<pre><code>df['verdict']=df.apply(pd.to_numeric,errors='ignore',axis=1).nunique(1).eq(1).map({True:'Pass',False:'Fail'})
df
Out[274]:
A Refer verdict
0 usa usa Pass
1 1 1 Pass
2 india usa Fail
3 1.0 1 Pass
4 1.1 1.1 Pass
5 1.1 1.2 Fail
6 0.888 0.898 Fail
7 0.888 0.888 Pass
</code></pre>
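<p>A hedged alternative sketch (requires <code>numpy</code>): coerce both columns to numbers and fall back to a plain string comparison for the non-numeric values:</p>
<pre><code>import numpy as np

a = pd.to_numeric(df['A'], errors='coerce')      # 'usa' -> NaN, '1.0' -> 1.0
b = pd.to_numeric(df['Refer'], errors='coerce')
# numeric match OR exact string match covers both kinds of rows
match = a.eq(b) | df['A'].astype(str).eq(df['Refer'].astype(str))
df['verdict'] = np.where(match, 'pass', 'fail')
</code></pre>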
|
python|pandas
| 1
|
374,793
| 49,232,854
|
Feature Selection using MRMR
|
<p>I found two ways to implement MRMR for feature selection in Python. The paper that introduces the method is:</p>
<p><a href="https://www.dropbox.com/s/tr7wjpc2ik5xpxs/doc.pdf?dl=0" rel="noreferrer">https://www.dropbox.com/s/tr7wjpc2ik5xpxs/doc.pdf?dl=0</a></p>
<p>This is my code for the dataset.</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
X, y = make_classification(n_samples=10000,
                           n_features=6,
                           n_informative=3,
                           n_classes=2,
                           random_state=0,
                           shuffle=False)
# Creating a dataFrame
df = pd.DataFrame({'Feature 1': X[:, 0],
                   'Feature 2': X[:, 1],
                   'Feature 3': X[:, 2],
                   'Feature 4': X[:, 3],
                   'Feature 5': X[:, 4],
                   'Feature 6': X[:, 5],
                   'Class': y})
y_train = df['Class']
X_train = df.drop('Class', axis=1)
</code></pre>
<p>Method 1: Applying MRMR using pymrmr</p>
<p>It contains MID and MIQ, and it is published by the original author.
The link is <a href="https://github.com/fbrundu/pymrmr" rel="noreferrer">https://github.com/fbrundu/pymrmr</a></p>
<pre><code>import pymrmr
pymrmr.mRMR(df, 'MIQ',6)
</code></pre>
<blockquote>
<p>['Feature 4', 'Feature 5', 'Feature 2', 'Feature 6', 'Feature 1',
'Feature 3']</p>
</blockquote>
<p>or, running it the second way:</p>
<pre><code>pymrmr.mRMR(df, 'MID',6)
</code></pre>
<blockquote>
<p>['Feature 4', 'Feature 6', 'Feature 5', 'Feature 2', 'Feature 1',
'Feature 3']</p>
</blockquote>
<p>On the dataset above, these two calls yield the two outputs shown. Another author on GitHub claims that you can use his version to apply the MRMR method; however, when I use it on the same dataset I get a different result.</p>
<p>Method 2: Applying MRMR using MIFS</p>
<p>Github link</p>
<p><a href="https://github.com/danielhomola/mifs" rel="noreferrer">https://github.com/danielhomola/mifs</a></p>
<pre><code>import mifs
for i in range(1,11):
    feat_selector = mifs.MutualInformationFeatureSelector('MRMR', k=i)
    feat_selector.fit(X_train, y_train)
    # call transform() on X to filter it down to selected features
    X_filtered = feat_selector.transform(X_train.values)
    # Create list of features
    feature_name = X_train.columns[feat_selector.ranking_]
    print(feature_name)
</code></pre>
<p>And if you run the above loop for all the different values of i, at no point do both methods yield the same feature-selection output.</p>
<p>What seems to be the problem here?</p>
|
<p>You'll probably need to contact the authors of the original paper and/or the owner of the GitHub repo for a definitive answer, but most likely the differences here come from the fact that you are comparing 3 different algorithms (despite the name).</p>
<p><a href="https://en.wikipedia.org/wiki/Minimum_redundancy_feature_selection" rel="nofollow noreferrer">Minimum redundancy Maximum relevance algorithms</a> are actually a family of feature selection algorithms whose common objective is to select features that are <em>mutually far away from each other while still having "high" correlation to the classification variable</em>.</p>
<p>You can measure that objective using mutual-information measures, but the specific method to follow (i.e. what to do with the computed scores, in what order, and which other post-processing steps to apply) is going to differ from one author to another; even in the paper they actually give you two different implementations, <code>MIQ</code> and <code>MID</code>.</p>
<p>So my suggestion would be to choose the implementation you are more comfortable with (or, even better, the one that produces better results in your pipeline after proper validation), and report which specific source you chose and why.</p>
|
python|pandas|numpy
| 2
|
374,794
| 49,207,577
|
DataFrame to List format
|
<p>I'm converting a DataFrame into a list format using the following code</p>
<pre><code>resultDataFrame = []
resultDataFrame = pd.DataFrame(resultDataFrame, columns=('day', 'hour', 'minute'))
</code></pre>
<p>and appending values ...</p>
<pre><code>resultDataFrame.to_dict('l')
{'day': [6], 'hour': [12], 'minute': [54]}
</code></pre>
<p>But, I want to get it like </p>
<pre><code>{'day': 6, 'hour': 12, 'minute': 54}
</code></pre>
<p>How can I change the format?</p>
|
<p>You can select the first row as a <code>Series</code>, e.g. with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>iloc</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>loc</code></a>, and create a <code>dict</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.to_dict.html" rel="nofollow noreferrer"><code>Series.to_dict</code></a>:</p>
<pre><code>df = pd.DataFrame({'day': [6], 'hour': [12], 'minute': [54]})
print (df)
day hour minute
0 6 12 54
print (df.iloc[0])
day 6
hour 12
minute 54
Name: 0, dtype: int64
print (df.iloc[0].to_dict())
{'day': 6, 'minute': 54, 'hour': 12}
</code></pre>
<p>Another possible solution is to convert to <code>records</code> (a <code>list of dict</code>) and select by indexing:</p>
<pre><code>print (df.to_dict('r'))
[{'day': 6, 'minute': 54, 'hour': 12}]
print (df.to_dict('r')[0])
{'day': 6, 'minute': 54, 'hour': 12}
</code></pre>
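<p>The abbreviated <code>'r'</code> orient is the same as the spelled-out <code>'records'</code>, which reads a bit more clearly:</p>
<pre><code>print (df.to_dict(orient='records')[0])
{'day': 6, 'minute': 54, 'hour': 12}
</code></pre>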
|
python|pandas
| 2
|
374,795
| 49,150,462
|
getting the top 2 values of a list
|
<p>In the dataframe below, each row holds a list of values. How can I get the top 2 most repeated values from each list?</p>
<p>DataFrame:</p>
<pre><code>user pro
A [AA,AA,AA,BB,CC,AA,AA,CC,CC,BB]
B [AA, BB, EE,BB,BB,EE,AA,CC,BB,EE]
C [EE,EE,EE,CC,CC,CC,CC,DD,DD,AA]
D [DD,AA,AA,AA,AA,AA,BB,BB,BB]
</code></pre>
<p>Expected output:</p>
<pre><code>A [AA,CC]
B [BB,EE]
C [CC,EE]
D [AA,BB]
</code></pre>
|
<p>Use <a href="https://docs.python.org/3/library/collections.html#collections.Counter.most_common" rel="nofollow noreferrer"><code>collections.Counter.most_common</code></a>:</p>
<pre><code>from collections import Counter
df['new'] = df['pro'].apply(lambda x: [k for k, v in Counter(x).most_common(2)])
print (df)
user pro new
0 A [AA, AA, AA, BB, CC, AA, AA, CC, CC, BB] [AA, CC]
1 B [AA, BB, EE, BB, BB, EE, AA, CC, BB, EE] [BB, EE]
2 C [EE, EE, EE, CC, CC, CC, CC, DD, DD, AA] [CC, EE]
3 D [DD, AA, AA, AA, AA, AA, BB, BB, BB] [AA, BB]
</code></pre>
<p>Thanks @jpp:</p>
<pre><code>df['common'] = [list(zip(*d.most_common(2)))[0] for d in df['pro'].map(Counter)]
</code></pre>
<p>Thanks @cᴏʟᴅsᴘᴇᴇᴅ:</p>
<pre><code>df['common'] = df.pro.map(lambda x: [k for k, v in Counter(x).most_common(2)])
</code></pre>
|
python|pandas
| 4
|
374,796
| 49,038,659
|
How to substitute NaNs in a numpy array with elements in another list
|
<p>I'm facing an issue with a basic substitution. I have two arrays: one contains numbers and NaNs, and the other contains the numbers that are supposed to replace the NaNs, already ordered the way I want them. As an example:
<code>x1 = [NaN, 2, 3, 4, 5, NaN, 7, 8, NaN, 10]</code> and
<code>fill = [1, 6, 9]</code>, and by index-wise replacement I want to obtain an array like:
<code>x1_final = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]</code></p>
<p>I have written this clumsy piece of code, which ends up substituting every NaN with the same element of the <code>fill</code> array:</p>
<pre><code>for j in range(0, len(x1)):
    if np.isnan(x1[j]).any():
        for i in range(0, len(fill)):
            x1[j] = fill[i]
</code></pre>
<p>How can I achieve the result I want?</p>
|
<p>Does this work for you?</p>
<pre><code>train = np.array([2, 4, 4, 8, 32, np.NaN, 12, np.NaN])
fill = [1,3]
train[np.isnan(train)] = fill
print(train)
</code></pre>
<p>Output:</p>
<pre><code>[ 2. 4. 4. 8. 32. 1. 12. 3.]
</code></pre>
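<p>Applied to the arrays from the question (a quick sketch; the boolean mask fills the NaN positions in order, so <code>fill</code> must have exactly as many elements as there are NaNs):</p>
<pre><code>import numpy as np

x1 = np.array([np.nan, 2, 3, 4, 5, np.nan, 7, 8, np.nan, 10])
fill = [1, 6, 9]
x1[np.isnan(x1)] = fill
print(x1)
# [ 1.  2.  3.  4.  5.  6.  7.  8.  9. 10.]
</code></pre>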
|
python|python-3.x|numpy
| 4
|
374,797
| 49,158,505
|
Visualizing spherical harmonics in Python
|
<p>I am trying to draw spherical harmonics for my college project. This is the formula I want to depict:</p>
<pre><code>Y = cos(theta)
</code></pre>
<p>for that, I wrote this code</p>
<pre><code>import numpy as np
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
def sph2cart(r, phi, tta):
    ''' r is from 0 to infinity '''
    ''' phi is from 0 to 2*pi '''
    ''' tta is from 0 to pi '''
    x = r * np.sin(tta) * np.cos(phi)
    y = r * np.sin(tta) * np.sin(phi)
    z = r * np.cos(tta)
    return x, y, z
# phi running from 0 to 2*pi and tta from 0 to pi
phi = np.linspace(0, 2* np.pi, 25)
tta = np.linspace(0, np.pi, 25)
# meshgrid to generate points
phi, tta = np.meshgrid(phi, tta)
# THIS IS THE FUNCTION
Y = np.cos(tta)
# finally all things in cartesian co-ordinate system
# Note that "Y" is acting as "r"
x, y, z = sph2cart( Y, phi, tta)
# plotting :-
fig = plt.figure()
ax = fig.add_subplot( 111 , projection='3d')
ax.plot_surface(x, y, z, linewidth = 0.5, edgecolors = 'k')
</code></pre>
<p>And I get a sphere as the result, which is not correct; the actual result should be a dumbbell-like shape. See the second row of this image:</p>
<p><a href="https://upload.wikimedia.org/wikipedia/commons/thumb/6/62/Spherical_Harmonics.png/1024px-Spherical_Harmonics.png" rel="nofollow noreferrer">https://upload.wikimedia.org/wikipedia/commons/thumb/6/62/Spherical_Harmonics.png/1024px-Spherical_Harmonics.png</a></p>
|
<p>The picture in the Wikipedia article <a href="https://en.wikipedia.org/wiki/Spherical_harmonics" rel="nofollow noreferrer">Spherical harmonics</a> is obtained by using the <em>absolute value</em> of a spherical harmonic as the r coordinate, and then coloring the surface according to the sign of the harmonic. Here is an approximation. </p>
<pre><code>x, y, z = sph2cart(np.abs(Y), phi, tta)
fig = plt.figure()
ax = fig.add_subplot( 111 , projection='3d')
from matplotlib import cm
ax.set_aspect('equal')
ax.plot_surface(x, y, z, linewidth = 0.5, facecolors = cm.jet(Y), edgecolors = 'k')
</code></pre>
<p><a href="https://i.stack.imgur.com/VfEsz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VfEsz.png" alt="dumbbell"></a></p>
<p>When you use Y itself as r, the two hemispheres (positive Y and negative Y) end up mapped onto the same half of the above surface. </p>
|
python|python-3.x|numpy
| 4
|
374,798
| 49,082,287
|
Read text file data to pandas DataFrame
|
<p>I have a specific file format of CNC (<em>work center</em>) data, saved as .txt.
I want to read this table into a pandas dataframe, but I have never seen this format before.</p>
<pre><code>_MASCHINENNUMMER : >0-251-11-0950/51< SACHBEARB.: >BSTWIN32<
_PRODUKTSCHLUESSEL : >BST 500< DATUM : >05-20-2016<
---------------------------------------------------------------------------
*BOHRKOPF !SPINDEL!WK!DELTA-X !DELTA-Y !DURCHMESSER! KOMMENTAR
----------+----------+----------+----------+-----------+-------------------
[NoValidForUse]
A21 ! 1!62! 0.000! 0.000! 0.000!
[V11]
A12 ! -1!62! 0.000! -160.000! 0.000!
A12 ! 2!62! 0.000! -128.000! 3.000! 70.0
A12 ! -3!62! 0.000! -96.000! 0.000!
A12 ! 4!62! 0.000! -64.000! 0.000!
---------------------------------------------------------------------------
*BOHRKOPF !SPINDEL!WK!DELTA-X !DELTA-Y !DURCHMESSER! KOMMENTAR
----------+----------+----------+----------+-----------+-------------------
[V11]
O11 ! -9!62! 0.000! -96.000! 0.000!
O11 ! 10!62! 0.000! -128.000! 5.000! 70.0
</code></pre>
<p>Questions:
1. Is it possible to read this and convert it to a pandas DataFrame?
2. How can this be done?</p>
<ul>
<li><em>Why a pandas DataFrame? I want to use this data for some analysis based on the characteristics of each item, and I always use pandas for analysis. Maybe I need a different approach for this?</em></li>
</ul>
<p>Expected output:</p>
<p>Two pandas DataFrames; the first:</p>
<pre><code>---------------------------------------------------------------------------------------
*BOHRKOPF !SPINDEL!WK!DELTA-X !DELTA-Y !DURCHMESSER! KOMMENTAR ! TYPE
----------+----------+----------+----------+-----------+-------------------------------
A21 ! 1!62! 0.000! 0.000! 0.000! !NoValidForUse
A12 ! -1!62! 0.000! -160.000! 0.000! !V11
A12 ! 2!62! 0.000! -128.000! 3.000! 70.0 !V11
A12 ! -3!62! 0.000! -96.000! 0.000! !V11
A12 ! 4!62! 0.000! -64.000! 0.000! !V11
</code></pre>
<p>And second:</p>
<pre><code>---------------------------------------------------------------------------------------
*BOHRKOPF !SPINDEL!WK!DELTA-X !DELTA-Y !DURCHMESSER! KOMMENTAR ! TYPE
----------+----------+----------+----------+-----------+-------------------------------
O11 ! -9!62! 0.000! -96.000! 0.000! !V11
O11 ! 10!62! 0.000! -128.000! 5.000! 70.0 !V11
</code></pre>
<p>The headers of DataFrame 1 and DataFrame 2 can be different:</p>
<pre><code>_MASCHINENNUMMER : >0-251-11-0950/51< SACHBEARB.: >BSTWIN32<
_PRODUKTSCHLUESSEL : >BST 500< DATUM : >05-20-2016<
---------------------------------------------------------------------------
*BOHRKOPF !SPINDEL!WK!DELTA-X !DELTA-Y !DURCHMESSER! KOMMENTAR
----------+----------+----------+----------+-----------+-------------------
[NoValidForUse]
A21 ! 1!62! 0.000! 0.000! 0.000!
[V11]
A12 ! -1!62! 0.000! -160.000! 0.000!
A12 ! 2!62! 0.000! -128.000! 3.000! 70.0
A12 ! -3!62! 0.000! -96.000! 0.000!
---------------------------------------------------------------------------
*BOHRKOPF ! !X-POS !Y-POS ! !
----------+----------+----------+----------+-----------+-------------------
[V11]
O11 ! ! 0.000! -96.000! !
O11 ! ! 0.000! -128.000! !
</code></pre>
<ul>
<li>A file can contain a different number of dataframes (between 5 and 10), but the structure of the file stays the same: the separator is "!" and the header rows start with "*"</li>
</ul>
|
<p>Yes, it is possible, but it is really data dependent:</p>
<ul>
<li>first, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer"><code>read_csv</code></a> with the first <code>3</code> rows skipped and leading whitespace ignored</li>
<li>strip trailing whitespace from the column names with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow noreferrer"><code>strip</code></a></li>
<li>create column <code>TYPE</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>extract</code></a>ing the values between <code>[]</code> and forward-filling the following rows</li>
<li>create a helper column to distinguish each <code>DataFrame</code> using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.startswith.html" rel="nofollow noreferrer"><code>startswith</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.cumsum.html" rel="nofollow noreferrer"><code>cumsum</code></a></li>
<li>finally, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>contains</code></a> to remove the rows where the first column starts with <code>[</code>, <code>--</code> or <code>*</code></li>
</ul>
<hr>
<pre><code>df = pd.read_csv(file, sep="!", skiprows=3, skipinitialspace=True)
df.columns = df.columns.str.strip()
df['TYPE'] = df['*BOHRKOPF'].str.extract('\[(.*)\]', expand=False).ffill()
df['G'] = df['*BOHRKOPF'].str.startswith('*').cumsum()
df = df[~df['*BOHRKOPF'].str.contains('^\[|^--|^\*')]
print (df)
*BOHRKOPF SPINDEL WK DELTA-X DELTA-Y DURCHMESSER KOMMENTAR \
2 A21 1 62 0.000 0.000 0.000 NaN
4 A12 -1 62 0.000 -160.000 0.000 NaN
5 A12 2 62 0.000 -128.000 3.000 70.0
6 A12 -3 62 0.000 -96.000 0.000 NaN
7 A12 4 62 0.000 -64.000 0.000 NaN
12 O11 -9 62 0.000 -96.000 0.000 NaN
13 O11 10 62 0.000 -128.000 5.000 70.0
TYPE G
2 NoValidForUse 0
4 V11 0
5 V11 0
6 V11 0
7 V11 0
12 V11 1
13 V11 1
</code></pre>
<p>and then filter by <code>G</code> column:</p>
<pre><code>df1 = df[df['G'] == 0].drop('G', axis=1)
print (df1)
*BOHRKOPF SPINDEL WK DELTA-X DELTA-Y DURCHMESSER KOMMENTAR \
2 A21 1 62 0.000 0.000 0.000 NaN
4 A12 -1 62 0.000 -160.000 0.000 NaN
5 A12 2 62 0.000 -128.000 3.000 70.0
6 A12 -3 62 0.000 -96.000 0.000 NaN
7 A12 4 62 0.000 -64.000 0.000 NaN
TYPE
2 NoValidForUse
4 V11
5 V11
6 V11
7 V11
df2 = df[df['G'] == 1].drop('G', axis=1)
print (df2)
*BOHRKOPF SPINDEL WK DELTA-X DELTA-Y DURCHMESSER KOMMENTAR TYPE
12 O11 -9 62 0.000 -96.000 0.000 NaN V11
13 O11 10 62 0.000 -128.000 5.000 70.0 V11
</code></pre>
<p>If the file contains multiple DataFrames, it is possible to use a <code>list comprehension</code> to build a <code>list of DataFrames</code>:</p>
<pre><code>dfs = [v.drop('G', axis=1) for k, v in df.groupby('G')]
print (dfs[0])
*BOHRKOPF SPINDEL WK DELTA-X DELTA-Y DURCHMESSER KOMMENTAR \
2 A21 1 62 0.000 0.000 0.000 NaN
4 A12 -1 62 0.000 -160.000 0.000 NaN
5 A12 2 62 0.000 -128.000 3.000 70.0
6 A12 -3 62 0.000 -96.000 0.000 NaN
7 A12 4 62 0.000 -64.000 0.000 NaN
TYPE
2 NoValidForUse
4 V11
5 V11
6 V11
7 V11
print (dfs[1])
*BOHRKOPF SPINDEL WK DELTA-X DELTA-Y DURCHMESSER KOMMENTAR TYPE
12 O11 -9 62 0.000 -96.000 0.000 NaN V11
13 O11 10 62 0.000 -128.000 5.000 70.0 V11
</code></pre>
<p>EDIT:</p>
<pre><code>temp=u"""_MASCHINENNUMMER : >0-251-11-0950/51< SACHBEARB.: >BSTWIN32<
_PRODUKTSCHLUESSEL : >BST 500< DATUM : >05-20-2016<
---------------------------------------------------------------------------
*BOHRKOPF !SPINDEL!WK!DELTA-X !DELTA-Y !DURCHMESSER! KOMMENTAR
----------+----------+----------+----------+-----------+-------------------
[NoValidForUse]
A21 ! 1!62! 0.000! 0.000! 0.000!
[V11]
A12 ! -1!62! 0.000! -160.000! 0.000!
A12 ! 2!62! 0.000! -128.000! 3.000! 70.0
A12 ! -3!62! 0.000! -96.000! 0.000!
A12 ! 4!62! 0.000! -64.000! 0.000!
---------------------------------------------------------------------------
*BOHRKOPF ! !X-POS !Y-POS ! !
----------+----------+----------+----------+-----------+-------------------
[V11]
O11 ! ! 0.000! -96.000! !
O11 ! ! 0.000! -128.000! ! """
</code></pre>
<p>Add the parameter <code>header</code> to get default column names:</p>
<pre><code>#after testing replace 'pd.compat.StringIO(temp)' to 'filename.csv'
df = pd.read_csv(pd.compat.StringIO(temp), sep="!", skiprows=3, skipinitialspace=True, header=None)
df['TYPE'] = df[0].str.extract('\[(.*)\]', expand=False).ffill()
df['G'] = df[0].str.startswith('*').cumsum()
#dont remove rows start with *
df = df[~df[0].str.contains('^\[|^--')]
print (df)
0 1 2 3 4 5 \
0 *BOHRKOPF SPINDEL WK DELTA-X DELTA-Y DURCHMESSER
3 A21 1 62 0.000 0.000 0.000
5 A12 -1 62 0.000 -160.000 0.000
6 A12 2 62 0.000 -128.000 3.000
7 A12 -3 62 0.000 -96.000 0.000
8 A12 4 62 0.000 -64.000 0.000
10 *BOHRKOPF NaN X-POS Y-POS NaN NaN
13 O11 NaN 0.000 -96.000 NaN NaN
14 O11 NaN 0.000 -128.000 NaN NaN
6 TYPE G
0 KOMMENTAR NaN 1
3 NaN NoValidForUse 1
5 NaN V11 1
6 70.0 V11 1
7 NaN V11 1
8 NaN V11 1
10 NaN V11 2
13 NaN V11 2
14 NaN V11 2
</code></pre>
<p>For each group, remove column <code>G</code>, rename the columns (except the last two) using the first row, drop that first row with <code>iloc</code>, and finally, if necessary, drop the columns that are entirely <code>NaN</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="nofollow noreferrer"><code>dropna</code></a>:</p>
<pre><code>dfs = [v.drop('G', axis=1).rename(columns=v.iloc[0, :-2]).iloc[1:].dropna(axis=1, how='all') for k, v in df.groupby('G')]
print (dfs[0])
*BOHRKOPF SPINDEL WK DELTA-X DELTA-Y DURCHMESSER KOMMENTAR \
3 A21 1 62 0.000 0.000 0.000 NaN
5 A12 -1 62 0.000 -160.000 0.000 NaN
6 A12 2 62 0.000 -128.000 3.000 70.0
7 A12 -3 62 0.000 -96.000 0.000 NaN
8 A12 4 62 0.000 -64.000 0.000 NaN
TYPE
3 NoValidForUse
5 V11
6 V11
7 V11
8 V11
print (dfs[1])
*BOHRKOPF X-POS Y-POS TYPE
13 O11 0.000 -96.000 V11
14 O11 0.000 -128.000 V11
</code></pre>
|
python|pandas
| 6
|
374,799
| 48,923,565
|
math.fsum for arrays of multiple dimensions
|
<p>I have a numpy array of shape <code>(i, j)</code> in which I would like to sum over the first dimension to get an array of shape <code>(j,)</code>. Normally, I'd use NumPy's own <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.sum.html" rel="nofollow noreferrer"><code>sum</code></a></p>
<pre><code>import numpy
a = numpy.random.rand(100, 77)
numpy.sum(a, axis=0)
</code></pre>
<p>but in my case it doesn't cut it: Some of the sums are <em>very</em> ill-conditioned, so the computed results only have a few correct digits.</p>
<p><a href="https://docs.python.org/3/library/math.html#math.fsum" rel="nofollow noreferrer"><code>math.fsum</code></a> is fantastic at keeping the errors at bay, but it only applies to iterables of one dimension. <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.vectorize.html" rel="nofollow noreferrer"><code>numpy.vectorize</code></a> doesn't do the job either.</p>
<p>How can I efficiently apply <code>math.fsum</code> to an array of multiple dimensions?</p>
|
<p>This one works fast enough for me.</p>
<pre><code>import numpy
import math
a = numpy.random.rand(100, 77)
a = numpy.swapaxes(a, 0, 1)
a = numpy.array([math.fsum(row) for row in a])
</code></pre>
<p>Hopefully it's the axis you are looking for (returns 77 sums).</p>
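<p>An equivalent one-liner (same result, just a sketch) skips the explicit <code>swapaxes</code> by iterating over the columns via the transpose:</p>
<pre><code># a.T iterates over the columns of a, so each fsum compensates one column
result = numpy.array([math.fsum(col) for col in a.T])
</code></pre>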
|
python|arrays|numpy
| 2
|