Dataset columns:
- Unnamed: 0 — int64, range 0 to 378k
- id — int64, range 49.9k to 73.8M
- title — string, lengths 15 to 150
- question — string, lengths 37 to 64.2k
- answer — string, lengths 37 to 44.1k
- tags — string, lengths 5 to 106
- score — int64, range -10 to 5.87k
373,900
57,011,858
Not able to delete 'matchId' column from pandas dataframe
I have a dataframe which looks like this:

[image: https://i.stack.imgur.com/3r7to.png]

I tried to delete the **matchId** column, but no matter what I use to delete it for preprocessing, it outputs this error:

```
KeyError: "['matchId'] not found in axis"
```
What you attempted to do (which you should have mentioned in the question) is probably failing because you assume that the `matchId` column is a normal column. It is actually a special **index** column, and so it cannot be accessed in the same way other columns can be.

As suggested by anky_91, because of that, you should do

```python
df = df.reset_index(drop=True)
```

if you want to completely remove the index from your table. This will replace it with the default integer index. To turn the old index into another column instead, just remove `drop=True` from the statement above.

Your table will always have an index, however, so you cannot completely get rid of one. You can, however, output the data with

```python
df.values
```

and this will ignore the index and show just the values as arrays.
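As a hedged illustration of the second option (the frame and values here are hypothetical, not the asker's data), moving the index into a regular column and then dropping it by name might look like this:

```python
import pandas as pd

# hypothetical frame whose index is named 'matchId'
df = pd.DataFrame({'kills': [3, 5]},
                  index=pd.Index([101, 102], name='matchId'))

# turn the index into an ordinary column, then drop it by name
df = df.reset_index().drop(columns='matchId')
print(df)
```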
python|pandas
1
373,901
56,873,709
Pandas append sheet to workbook if sheet doesn't exist, else overwrite sheet
I am updating an existing Excel workbook using pandas. When using an `ExcelWriter` object, can I overwrite a sheet if it exists and otherwise create a new sheet? The code I have appends new sheets, but when I try to overwrite an existing sheet it appends a new sheet with a slightly varied name (e.g., if sheet 'data1' exists, running the code appends a new sheet named 'data1 1').

```python
import pandas as pd
import openpyxl

path = 'test-out.xlsx'
book = openpyxl.load_workbook(path)
df1 = pd.DataFrame({'a': range(10), 'b': range(10)})

writer = pd.ExcelWriter(path, mode='a')
writer.book = book
df1.to_excel(writer, sheet_name='data1')
writer.save()
```
Pass the existing sheets to the writer with `writer.sheets = dict((ws.title, ws) for ws in book.worksheets)`:

```python
import pandas as pd
import openpyxl

path = 'test-out.xlsx'
book = openpyxl.load_workbook(path)
df1 = pd.DataFrame({'a': range(10), 'b': range(10)})

writer = pd.ExcelWriter(path, mode='a')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
df1.to_excel(writer, sheet_name='data1')
writer.save()
```

**Edit:** it seems you don't even need `mode='w'`; `writer = pd.ExcelWriter(path, mode='a')` still works...
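For readers on newer pandas (1.3 or later — an assumption about your environment), the same overwrite behaviour is available directly through the `if_sheet_exists` parameter, without assigning `writer.book` or `writer.sheets` by hand:

```python
import pandas as pd

df1 = pd.DataFrame({'a': range(10), 'b': range(10)})

# replaces the sheet if it already exists, otherwise appends a new one
with pd.ExcelWriter('test-out.xlsx', mode='a', engine='openpyxl',
                    if_sheet_exists='replace') as writer:
    df1.to_excel(writer, sheet_name='data1')
```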
python|excel|pandas
6
373,902
57,038,085
how to create shared weights layer in multiple input models with no grads
I want to create a model with two inputs, x and y, and I want to make the loss function concerned only with x, so the model optimizes the former layers only with x. But right now, even though the loss only involves x, the optimization will still compute both x and y in the former layers.

I have tried changing y to y.detach() to make the gradients stop, but that didn't work. I also want to try to create a new shared-weights layer, but I don't know how to do that.

```python
def forward(self, x, y=None):
    x = self.conv1(x)
    x = self.bn1(x)
    x = self.maxpool(x)
    x = self.layer1(x)
    x = self.layer2(x)
    x = self.layer3(x)
    x = self.layer4(x)

    y = self.conv1(y)
    y = self.bn1(y)
    y = self.maxpool(y)
    y = self.layer1(y)

    return x, y
```
What you did should work; you just need to put your `y.detach()` at the end, and if the loss doesn't contain `y` it shouldn't modify the weights through `y` anyway.

```python
def forward(self, x, y=None):
    x = self.conv1(x)
    x = self.bn1(x)
    x = self.maxpool(x)
    x = self.layer1(x)
    x = self.layer2(x)
    x = self.layer3(x)
    x = self.layer4(x)

    y = self.conv1(y)
    y = self.bn1(y)
    y = self.maxpool(y)
    y = self.layer1(y)

    return x, y.detach()
```
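A quick way to convince yourself that no gradient flows through the detached branch is a toy check along these lines (a minimal sketch — the single conv layer is a stand-in for the asker's network, not their actual model):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)
x = torch.randn(1, 3, 8, 8)
y = torch.randn(1, 3, 8, 8)

out_x = conv(x)
out_y = conv(y).detach()   # gradient flow through the y branch is cut here

loss = out_x.sum()         # loss depends on x only
loss.backward()            # grads reaching conv.weight come from the x path alone
print(conv.weight.grad.abs().sum() > 0)  # tensor(True)
```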
python|neural-network|deep-learning|pytorch
0
373,903
57,148,617
RuntimeError: Expected hidden size (2, 24, 50), got (2, 30, 50)
I am trying to build a model for learning scores (real numbers) assigned to some sentences in a data set. I use RNNs (in PyTorch) for this purpose. I have defined a model:

```python
class RNNModel1(nn.Module):

    def forward(self, input, hidden_0):
        embedded = self.embedding(input)
        output, hidden = self.rnn(embedded, hidden_0)
        output = self.linear(hidden)
        return output, hidden
```

The train function is:

```python
def train(model, optimizer, criterion, BATCH_SIZE, train_loader, clip):
    model.train(True)
    total_loss = 0
    hidden = model._init_hidden(BATCH_SIZE)
    for i, (batch_of_data, batch_of_labels) in enumerate(train_loader, 1):
        hidden = hidden.detach()
        model.zero_grad()
        output, hidden = model(batch_of_data, hidden)
        loss = criterion(output, sorted_batch_target_scores)
        total_loss += loss.item()
        loss.backward()
        torch.nn.utils.clip_grad_norm(model.parameters(), clip)
        optimizer.step()
    return total_loss / len(train_loader.dataset)
```

When I run the code I receive this error:

> RuntimeError: Expected hidden size (2, 24, 50), got (2, 30, 50)

Batch size = 30, hidden size = 50, number of layers = 1, bidirectional = True.

I receive that error on the last batch of data. I checked the description of RNNs in PyTorch to solve this problem. RNNs in PyTorch take two input arguments and return two output arguments. The input arguments are *input* and *h_0*. *h_0* is a tensor holding the initial hidden state for each element in the batch, of size (num_layers\*num_directions, batch, hidden_size). The output arguments are *output* and *h_n*. *h_n* is a tensor holding the hidden state for t = seq_len, of size (num_layers\*num_directions, batch, hidden_size).

In all batches (except the last one) the sizes of h_0 and h_n are the same. But in the last batch, the number of elements may be less than the batch size. Therefore the size of h_n is (num_layers\*num_directions, remaining_elements_in_last_batch, hidden_size), while the size of h_0 is still (num_layers\*num_directions, batch_size, hidden_size).

So I receive that error on the last batch of data.

How can I solve this problem and handle the situation in which the sizes of h_0 and h_n differ?

Thanks in advance.
This error happens when the number of samples in the data set is not a multiple of the batch size. Ignoring the last batch can solve the problem. To identify the last batch, check the number of elements in each batch: if it is less than BATCH_SIZE, it is the last batch in the data set.

```python
if len(batch_of_data) == BATCH_SIZE:
    output, hidden = model(batch_of_data, hidden)
```
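If the data comes through a `DataLoader`, an alternative (a sketch assuming the standard torch data utilities; the dataset here is synthetic) is to let the loader discard the short final batch for you via `drop_last`:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(100, 5), torch.randn(100))
# every batch now has exactly 30 samples; the trailing 10 are dropped
train_loader = DataLoader(dataset, batch_size=30, drop_last=True)
```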
python|pytorch|lstm|recurrent-neural-network|gated-recurrent-unit
1
373,904
57,060,488
pandas.Series.div() vs /=
I'm curious why pandas.Series.div() is slower than /= when applied to a pandas Series of numbers. For example:

```
python3 -m timeit -s 'import pandas as pd; ser = pd.Series(list(range(99999)))' 'ser /= 7'
1000 loops, best of 3: 584 usec per loop

python3 -m timeit -s 'import pandas as pd; ser = pd.Series(list(range(99999)))' 'ser.div(7)'
1000 loops, best of 3: 746 usec per loop
```

I assume that it's because the former changes the series in place whereas the latter returns a new Series. But if that's the case, then why bother implementing div() and mul() at all if they're not as fast as /= and *=? Even if you don't want to change the series in place, ser / 7 is still faster than .div():

```
python3 -m timeit -s 'import pandas as pd; ser = pd.Series(list(range(99999)))' 'ser / 7'
1000 loops, best of 3: 656 usec per loop
```

So what is the use of pd.Series.div(), and what about it makes it slower?
Pandas' `.div` obviously implements division similarly to `/` and `/=`.

The main reason to have a separate `.div` is that pandas embraces a syntax model where operations on dataframes are described by the application of consecutive *filters*, e.g. `.div`, `.str`, etc., which allows for simple chaining:

```python
ser.div(7).apply(lambda x: 'text: ' + str(x)).str.upper()
```

as well as simpler support for multiple arguments (cf. `.func(a, b, c)`, which could not be written with a binary operator).

By contrast, the same would have been written without `div` as:

```python
(ser / 7).apply(lambda x: 'text: ' + str(x)).str.upper()
```

The `/` operation may be faster because there is less Python overhead associated with the `/` operator compared to `.div()`.

By contrast, the `x /= y` operator replaces the construct `x = x / y`. For vectorized containers based on NumPy (like pandas), it goes a little beyond that: it uses an **in-place** operation instead of creating a (potentially time- and memory-consuming) copy of `x`. This is the reason why `/=` is faster than both `/` and `.div()`.

Note that, while in most cases this is equivalent, sometimes (like in this case) it may still require conversion to a different data type, which is done automatically in pandas (but not in NumPy).
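One concrete capability of `.div()` that the bare operator lacks is its extra arguments, such as `axis`, which matters when dividing a DataFrame by a Series row-wise (a small sketch with made-up numbers):

```python
import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0], 'b': [3.0, 4.0]})
row_totals = df.sum(axis=1)

# divide each row by its total; the plain `/` would align on columns instead
normalized = df.div(row_totals, axis=0)
print(normalized)
```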
python|pandas|performance
3
373,905
56,955,962
What is the best way to count the number of entries across 3 dataframes that aren't shared?
I have three dataframes that are summaries of various statistics about countries. I've created a join of the three dataframes on the 'Country Name' column. But I want to know how many entries exist in the three original dataframes that were excluded from the join. What's the best way, code-wise, to count this?
As you didn't provide your code and dataframes, it is not clear what the output of your three-dataframe join is. Also, consider that pandas' default [join](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html#pandas.DataFrame.join) is a left join, so rearranging the dataframes could change the results.

However, that doesn't change the solution. I assume you have a dataframe named df (which you said you made by joining) and you are looking for all missing indices in df which exist in those three dataframes.

The first step is joining all dataframes with the `how='outer'` parameter; the output should have all the indices of all three dataframes (`[df1, df2, df3]`). The second step is as easy as taking the difference of the indices of full_df and df.

Here is the code:

```python
full_df = df1.join([df2, df3], how='outer')
missing_indices = full_df.index.difference(df.index)
print(missing_indices)
```
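A self-contained sketch of the same idea (toy frames indexed by 'Country Name'; the column names are made up for illustration):

```python
import pandas as pd

df1 = pd.DataFrame({'gdp':  [1, 2]}, index=pd.Index(['A', 'B'], name='Country Name'))
df2 = pd.DataFrame({'pop':  [3, 4]}, index=pd.Index(['B', 'C'], name='Country Name'))
df3 = pd.DataFrame({'area': [5, 6]}, index=pd.Index(['A', 'D'], name='Country Name'))

joined = df1.join([df2, df3])                    # default join keeps only df1's index
full_df = df1.join([df2, df3], how='outer')      # union of all three indexes
print(full_df.index.difference(joined.index))    # entries excluded from the join
```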
python|python-3.x|pandas|dataframe
0
373,906
56,980,886
Partial Pivoting In Pandas SQL Or Spark
Partial pivoting in pandas, SQL, or Spark.

Keep Year as rows and transpose the States to columns, taking the Percentage value for Gender = Male, Race = White.

**Input**

> [image: https://i.stack.imgur.com/qz2mj.png]

**Output**

> [image: https://i.stack.imgur.com/NqS2e.png]
Answer posted by @Peter Leimbigler as a comment:

```python
df.pivot_table(index='Year', columns='States', values='Percentage')
```

I tried it and it works. Thanks Peter!
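For completeness, a minimal self-contained sketch of that pivot (the column names follow the screenshots; the numbers are invented):

```python
import pandas as pd

df = pd.DataFrame({
    'Year':       [2010, 2010, 2011, 2011],
    'States':     ['CA', 'NY', 'CA', 'NY'],
    'Percentage': [60.0, 55.0, 62.0, 57.0],
})

wide = df.pivot_table(index='Year', columns='States', values='Percentage')
print(wide)
```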
python|pandas|scala|apache-spark|xlsx
0
373,907
57,059,576
Why is np.pad not working the way I expect it to?
My code generates an array that is 4x2. It also generates another array that is 10x6. I want to pad each array with zeros so that it is centered in a 14x12 array after padding.

https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html

```python
a = some array 4x2
b = some array 10x6
c = np.pad(a, padder=0, 2, 'pad_width', padder=0))
```

TypeError: pad() takes exactly 3 arguments (2 given)
Like this you get an array of shape (14, 12) with the smaller array centered:

```python
source_array = np.random.rand(10, 6)
target_array_shape = (14, 12)
pad_x = (target_array_shape[0] - source_array.shape[0]) // 2
pad_y = (target_array_shape[1] - source_array.shape[1]) // 2
target_array = np.pad(source_array, ((pad_x, pad_x), (pad_y, pad_y)), mode="constant")
```

Obviously the centering can only be correct if the source array is smaller than the target array; otherwise you get a `ValueError` (index can't contain negative values).

Also, the target dimensions might not come out right if the target and source dimensions are not both even or both odd.
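To also cover the odd/even mismatch mentioned above, the leftover row or column can be padded on one side (a small sketch building on the same variables; here the extra cell goes after):

```python
import numpy as np

source_array = np.random.rand(4, 2)
target_shape = (14, 11)   # second dimension differs in parity from the source

pads = []
for src, tgt in zip(source_array.shape, target_shape):
    before, extra = divmod(tgt - src, 2)
    pads.append((before, before + extra))   # the odd leftover goes after

target_array = np.pad(source_array, pads, mode="constant")
print(target_array.shape)  # (14, 11)
```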
python|numpy|runtime-error
1
373,908
57,228,323
How do I export multiple lists to one csv?
How do I add each iteration of this list to a CSV file, for an unknown number of columns?

This matters because the genre lists are not the same length for each film.

If the film has fewer than the maximum number of genres, I would expect the remaining columns to be empty.

I would expect the output to look a little like the following:

```
WebPage,Film,Genre1,Genre2,Genre3, ..... maxnumberofGenres
https://www.imdb.com/title/tt6644200/, A Quiet Place, Drama, Horror, Sci-Fi
```

How do I solve the problem?

```python
import requests
from googlesearch import search
import csv
import pandas
from bs4 import BeautifulSoup
import numpy as np
import os
from datetime import datetime
import time

start_time = time.time()

colnames = ['title']
data = pandas.read_csv('D:/Desktop/webScrapeMovieInfo/mediaDataForGenreScrape2.csv', names=colnames, header=None)
my_list = data["title"]
my_list = list(my_list)
my_list = my_list[1:]
length = len(my_list)

for film in my_list:
    query = film + " imdb"
    for j in search(query, tld="co.in", num=10, stop=1, pause=2):
        print(j)
        page = requests.get(j)
        response = page.status_code
        if response == 200:
            soup = BeautifulSoup(page.content, "lxml")
            genreData = soup.find_all("div", {"class": "subtext"})
            filmtitle = soup.find("h1")
            filmtitle = filmtitle.contents[0]
            print(filmtitle)
            links = []
            for h in genreData:
                a = h.find_all('a')
                aLength = len(a) - 1
                a1 = a[0]
                for b in range(0, aLength):
                    print(a[b].string)

np.savetxt("filmWebPages.csv", j, delimiter=",", fmt='%s', header="imdbPageOfFilms")
print("--- %s seconds ---" % (time.time() - start_time))
```
To extract all genres you could use this script — it will save them to a CSV and print to the screen too:

```python
import csv
import requests
from bs4 import BeautifulSoup

url = 'https://www.imdb.com/search/title/?pf_rd_i=moviemeter&genres=action&explore=title_type,genres'
soup = BeautifulSoup(requests.get(url).text, 'lxml')

rows = []
for h3, genres in zip(soup.select('.lister-item-header'), soup.select('.lister-item-header ~ p .genre')):
    title = h3.select_one('a').text
    url = h3.select_one('a')['href']
    genres = [*map(str.strip, genres.text.split(', '))]
    rows.append([title, url, genres])

# find all the genres we have:
all_genres = sorted(list(set(sum((row[2] for row in rows), []))))

# transform all rows to include True/False if they belong to a certain genre
for row in rows:
    row[2] = [g in row[2] for g in all_genres]

# print header
print('{: <40}{: ^20}'.format('Name', 'URL') + ''.join('{: ^10}'.format(g) for g in all_genres))

# print all rows
for title, url, genres in rows:
    print('{: <40}{: <20}'.format(title, url), end='')
    print(''.join('{: ^10}'.format('X' if g else '-') for g in genres))

# save to csv
with open('data.csv', 'w', newline='') as csvfile:
    csvwriter = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
    csvwriter.writerow(['Name', 'URL'] + all_genres)
    for title, url, genres in rows:
        csvwriter.writerow([title, url, *['✔' if g else '' for g in genres]])
```

Prints (genre columns: Action, Adventure, Animation, Comedy, Crime, Drama, Fantasy, Mystery, Sci-Fi, Thriller):

```
Name                                    URL                 Action Adventure Animation Comedy Crime Drama Fantasy Mystery Sci-Fi Thriller
Spider-Man: Far from Home               /title/tt6320628/   X X - - - - - - X -
Top Gun: Maverick                       /title/tt1745960/   X - - - - X - - - -
The King's Man                          /title/tt6856242/   X X - X - - - - - -
La Casa de Papel                        /title/tt6468322/   X - - - X - - X - -
Troonide mäng                           /title/tt0944947/   X X - - - X - - - -
Crawl                                   /title/tt8364368/   X X - - - X - - - -
Alita: Sõjaingel                        /title/tt0437086/   X X - - - - - - X -
Tasujad: Lõppmäng                       /title/tt4154796/   X X - - - - - - X -
Terminaator: Tume Saatus                /title/tt6450804/   X X - - - - - - X -
The Witcher                             /title/tt5180504/   X X - - - X - - - -
Hellboy                                 /title/tt2274648/   X X - - - - X - - -
Point Blank                             /title/tt2499472/   X - - - - - - - - X
Shazam!                                 /title/tt0448115/   X X - X - - - - - -
Stuber                                  /title/tt7734218/   X - - X X - - - - -
Fast & Furious Presents: Hobbs & Shaw   /title/tt6806448/   X X - - - - - - - -
Tippkutt                                /title/tt0092099/   X - - - - X - - - -
John Wick 3: Parabellum                 /title/tt6146586/   X - - - X - - - - X
Ämblikmees: Uus universum               /title/tt4633694/   X X X - - - - - - -
S.H.I.E.L.D.i agendid                   /title/tt2364582/   X X - - - X - - - -
The Boys                                /title/tt1190634/   X - - X X - - - - -
Designated Survivor                     /title/tt5296406/   X - - - - X - X - -
Kapten Marvel                           /title/tt4154664/   X X - - - - - - X -
Viikingid                               /title/tt2306299/   X X - - - X - - - -
Mulan                                   /title/tt4566758/   X X - - - X - - - -
Bond 25                                 /title/tt2382320/   X X - - - - - - - X
Spider-Man: Homecoming                  /title/tt2250912/   X X - - - - - - X -
Murder Mystery                          /title/tt1618434/   X - - X X - - - - -
Pandora                                 /title/tt10207090/  X - - - - X - - X -
Shaft                                   /title/tt4463894/   X - - X X - - - - -
Jessica Jones                           /title/tt2357547/   X - - - X X - - - -
Star Wars: The Rise of Skywalker        /title/tt2527338/   X X - - - - X - - -
Leegion                                 /title/tt5114356/   X - - - - X - - X -
Anna                                    /title/tt7456310/   X - - - - - - - - X
Vibukütt                                /title/tt2193021/   X X - - X - - - - -
NCIS: Kriminalistid                     /title/tt0364845/   X - - - X X - - - -
Välk                                    /title/tt3107288/   X X - - - X - - - -
Wonder Woman 1984                       /title/tt7126948/   X X - - - - X - - -
Titans                                  /title/tt1043813/   X X - - - X - - - -
Ghostbusters 2020                       /title/tt4513678/   X - - X X - - - - -
Power Rangers                           /title/tt3717490/   X X - - - - - - X -
Charlie's Angels                        /title/tt5033998/   X X - X - - - - - -
Mehed mustas: globaalne oht             /title/tt2283336/   X X - X - - - - - -
Swamp Thing                             /title/tt8362852/   X X - - - X - - - -
Queen of the South                      /title/tt1064899/   X - - - X X - - - -
Tasujad: Igaviku sõda                   /title/tt4154756/   X X - - - - - - X -
Gotham                                  /title/tt3749900/   X - - - X X - - - -
Godzilla: King of the Monsters          /title/tt3741700/   X X - - - - X - - -
Shingeki no kyojin                      /title/tt2560140/   X X X - - - - - - -
Escape Plan: The Extractors             /title/tt6772804/   X - - - X - - - - X
Thor: Ragnarök                          /title/tt3501632/   X X - X - - - - - -
```

And it saves `data.csv`. Here's a screenshot from LibreOffice:

[image: https://i.stack.imgur.com/5pkgE.png]
python|numpy|beautifulsoup
1
373,909
57,260,045
Replace values between dataframes with different sizes and multiple conditions
So I have two dataframes of different sizes, `df1 = (578, 81)` and `df2 = (1500, 59)`. All rows of `df1` exist in `df2`, and all columns of `df2` exist in `df1`. My problem is that I have a value I want to update in df1 based on six conditions: to update `column X`, the values in columns `X1, X2, Y1, Y2, Z1 and Z2` must be equal in both DataFrames.

In Java I would do something like:

```java
for (i = 0; i < df1.length; i++) {
    for (k = 0; k < df2.length; k++) {
        if (df1[i][1] == df2[k][1] && df1[i][2] == df2[k][2] /* ... */) {
            df1[i][0] = df2[k][0];
        }
    }
}
```
You can easily use `numpy.where`, and I think it should work best in this case too.

Let's say you have the following DataFrames:

```python
import pandas as pd

df1 = pd.DataFrame({'X':  [1, 3, 4, 6, 5],
                    'X1': [2, 3, 4, 6, 3],
                    'Y1': [4, 2, 1, 51, 3],
                    'Z1': [2, 3, 4, 1, 5]})

df2 = pd.DataFrame({'L':  [2, 3, 4, 1, 4],
                    'X2': [2, 3, 4, 6, 5],
                    'Y2': [4, 3, 4, 6, 3],
                    'Z2': [2, 2, 1, 51, 3]})
```

And you want to change the value of X based on the condition `X1==X2 & Y1==Y2 & Z1==Z2`. Let's also say the value you want to copy in is from column L in this case.

You can use `numpy.where` like this:

```python
df1['X'] = np.where((df1['X1'] == df2['X2']) & (df1['Y1'] == df2['Y2']) & (df1['Z1'] == df2['Z2']), df2['L'], df1['X'])
```

It would only change the first row, as the conditions are only satisfied there. This call changes the values to `df2['L']` where the condition is met and keeps the original values where it is not.

Read more about [np.where](https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html).

**Update: the dataframes in the question are not of equal length. It doesn't matter if they don't have equal columns, but the row counts should be equal for the sake of comparison. Below is an example in which the two data frames are not equal and how `numpy.where` is applied in that case.**

```python
import pandas as pd
import numpy as np

df1 = pd.DataFrame({'X':  [1, 3, 4, 6, 5],
                    'X1': [2, 3, 4, 6, 3],
                    'Y1': [4, 3, 1, 51, 3],
                    'Z1': [2, 3, 4, 1, 5]})

df2 = pd.DataFrame({'L':  [2, 3, 4, 1, 4, 5, 1],
                    'X2': [2, 3, 4, 6, 5, 2, 3],
                    'Y2': [4, 3, 4, 6, 3, 8, 7],
                    'Z2': [2, 3, 1, 51, 3, 9, 9],
                    'R2': [2, 5, 1, 2, 7, 3, 9]})

# make both the dataframes the same length
for i in range(len(df2) - len(df1)):
    df1 = df1.append(pd.Series(), ignore_index=True)

df1['X'] = np.where((df1['X1'] == df2['X2']) & (df1['Y1'] == df2['Y2']) & (df1['Z1'] == df2['Z2']), df2['L'], df1['X'])

# drop the all-null rows appended above to get back to the original
df1 = df1.dropna(how='all')
```
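Since the frames have different lengths, a row-alignment-free alternative (a hedged sketch, not the answer's method) is to merge on the condition columns and take the matched value where one exists:

```python
import pandas as pd

df1 = pd.DataFrame({'X': [1, 3, 4], 'X1': [2, 3, 4], 'Y1': [4, 3, 1], 'Z1': [2, 3, 4]})
df2 = pd.DataFrame({'L': [9, 8, 7, 6], 'X2': [2, 5, 4, 6], 'Y2': [4, 3, 4, 6], 'Z2': [2, 3, 1, 5]})

# left-merge df2's L onto df1 wherever the three columns match
merged = df1.merge(df2, left_on=['X1', 'Y1', 'Z1'],
                   right_on=['X2', 'Y2', 'Z2'], how='left')
df1['X'] = merged['L'].fillna(df1['X']).astype(df1['X'].dtype)
print(df1)
```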
python|pandas|dataframe
2
373,910
56,946,934
How to convert key and value of dictionary to a dataframe column?
I have a Python dictionary and I am unable to convert its keys and values into dataframe columns.

```python
import pandas as pd

data = {'form-0-publish': ['05/28/2019'],
        'form-0-cell': ['81'],
        'form-0-cell_name': ['13a'],
        'form-0-jam': ['07.00-08.00'],
        'form-0-target': ['60'],
        'form-1-publish': ['05/28/2019'],
        'form-1-cell': ['81'],
        'form-1-cell_name': ['13a'],
        'form-1-jam': ['07.00-08.00'],
        'form-1-target': ['60'],
        'form-2-publish': ['05/28/2019'],
        'form-2-cell': ['81'],
        'form-2-cell_name': ['13a'],
        'form-2-jam': ['07.00-08.00'],
        'form-2-target': ['60'],
        }

df = pd.DataFrame(data.items(), columns=['FormPublish', 'DatePublish', 'FormCell', 'Cell',
                                         'FormCellName', 'FormCellName', 'FormJam', 'Jam',
                                         'FormTarget', 'Target'])
df
```

Expected result:

```
  FormPublish     DatePublish     FormCell ...  Target
0 form-0-publish  ['05/28/2019']  ...
1 form-1-publish  ['05/28/2019']  ...
2 form-2-publish  ['05/28/2019']  ...
```
I don't think there is a function that directly performs this type of task, but here are two `for` loops that get your result:

```python
data = {'form-0-publish': ['05/28/2019'],
        'form-0-cell': ['81'],
        'form-0-cell_name': ['13a'],
        'form-0-jam': ['07.00-08.00'],
        'form-0-target': ['60'],
        'form-1-publish': ['05/28/2019'],
        'form-1-cell': ['81'],
        'form-1-cell_name': ['13a'],
        'form-1-jam': ['07.00-08.00'],
        'form-1-target': ['60'],
        'form-2-publish': ['05/28/2019'],
        'form-2-cell': ['81'],
        'form-2-cell_name': ['13a'],
        'form-2-jam': ['07.00-08.00'],
        'form-2-target': ['60'],
        }

my_list = []
# flatten all keys and values into a single list
for key, value in data.items():
    my_list.append(key)
    my_list.append(*value)

new_list = []
extra_list = []
for index, var in enumerate(my_list):
    # make groups of 10 elements from my_list
    if index % 10 == 0:
        new_list.append(extra_list)
        extra_list = []
        extra_list.append(var)
    else:
        extra_list.append(var)
new_list.append(extra_list)
new_list.pop(0)

# now we can use new_list to build the DataFrame
df = pd.DataFrame(new_list, columns=['FormPublish', 'DatePublish', 'FormCell', 'Cell',
                                     'FormCellName', 'FormCellName', 'FormJam', 'Jam',
                                     'FormTarget', 'Target'])
df
```

**Output**

```
      FormPublish DatePublish     FormCell Cell      FormCellName FormCellName     FormJam          Jam     FormTarget Target
0  form-0-publish  05/28/2019  form-0-cell   81  form-0-cell_name          13a  form-0-jam  07.00-08.00  form-0-target     60
1  form-1-publish  05/28/2019  form-1-cell   81  form-1-cell_name          13a  form-1-jam  07.00-08.00  form-1-target     60
2  form-2-publish  05/28/2019  form-2-cell   81  form-2-cell_name          13a  form-2-jam  07.00-08.00  form-2-target     60
```

I hope it may help you.
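A much shorter route (a hedged sketch reusing the `data` dict from above; it relies on the dict preserving insertion order, which holds on Python 3.7+) flattens the items and reshapes:

```python
import numpy as np
import pandas as pd

# alternate each key with its single value, then fold into rows of 10
flat = [x for key, value in data.items() for x in (key, value[0])]
df = pd.DataFrame(np.array(flat).reshape(-1, 10),
                  columns=['FormPublish', 'DatePublish', 'FormCell', 'Cell',
                           'FormCellName', 'FormCellName', 'FormJam', 'Jam',
                           'FormTarget', 'Target'])
print(df)
```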
python|python-3.x|pandas|dictionary|jupyter-notebook
0
373,911
57,177,262
How to upsample an array to arbitrary sizes?
I am trying to resize an array to a larger size in Python by repeating each element proportionally to the new size. However, I want to be able to resize to arbitrary sizes.

I know that I can do it with `numpy.repeat` if, for example, I have to double the size, but let's say I want to convert an array of size (180, 150) to (300, 250). I know there is no perfect way to do this, but I am looking for the most efficient method (minimum loss of information)!

So far, I was converting the array to an image, resizing it accordingly, and then converting it back to an array. However, it seems that I cannot convert all types of data to an image, so I need a general way to do this.

For example, let's say I have an input array of size (2, 2):

`input_array = np.array([[1, 2], [3, 4]])`

If I want to convert it to a (3, 3) array, the output may look like:

`output_array = np.array([[1, 1, 2], [1, 1, 2], [3, 3, 4]])`

Like I said before, I just don't want to tile or fill with zeros; I want to expand the size by repeating some of the elements.
Without a clear idea of the final result you would like to achieve, your question opens multiple paths and solutions. Just to name a few:

1. Using [`numpy.resize`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.resize.html):

```python
import numpy as np

input_array = np.array([[1., 2], [3, 4]])
np.resize(input_array, (3, 3))
```

you get:

```
array([[1., 2., 3.],
       [4., 1., 2.],
       [3., 4., 1.]])
```

2. Using [`cv2.resize`](https://docs.opencv.org/trunk/da/d54/group__imgproc__transform.html#ga47a974309e9102f5f08231edc7e7529d):

```python
import cv2
import numpy as np

input_array = np.array([[1., 2], [3, 4]])
cv2.resize(input_array, (3, 3), interpolation=cv2.INTER_NEAREST)
```

you get:

```
array([[1., 1., 2.],
       [1., 1., 2.],
       [3., 3., 4.]])
```

Depending on your objective, you can use different interpolation methods.
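A third path that avoids the OpenCV dependency (a sketch assuming SciPy is available) is `scipy.ndimage.zoom`, where `order=0` gives the same nearest-neighbour repetition:

```python
import numpy as np
from scipy.ndimage import zoom

input_array = np.array([[1., 2], [3, 4]])
# zoom factor per axis: target_size / source_size
out = zoom(input_array, (3 / 2, 3 / 2), order=0)
print(out)
```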
python|arrays|python-3.x|numpy|resize
3
373,912
56,890,822
Pandas - add a row at the end of a for loop iteration
So I have a for loop that takes a series of values and runs some tests:

```python
values = [1, 2, 3, 4, 5, 6]
df = pd.DataFrame(columns=['columnX', 'columnY', 'columnZ'])

for value in values:
    if value > 3:
        df['columnX'] = "A"
    else:
        df['columnX'] = "B"
        df['columnZ'] = "Another value only to be filled in this condition"
    df['columnY'] = value - 1
```

How can I do this and keep all the values in a single row for each loop iteration, no matter what the if outcome is? Can I keep some columns empty?

I mean something like the following process:

```
[create empty row] -> [process] -> [fill column X] -> [process] -> [fill column Y if true] ...
```

Like:

```
[index columnX columnY columnZ]
[0     A       0       NULL   ]
[1     A       1       NULL   ]
[2     B       2       "..."  ]
[3     B       3       "..."  ]
[4     B       4       "..."  ]
```
I am not sure I understand exactly, but I think this may be a solution:

```python
values = [1, 2, 3, 4, 5, 6]
d = {'columnX': [], 'columnY': []}

for value in values:
    if value > 3:
        d['columnX'].append("A")
    else:
        d['columnX'].append("B")
    d['columnY'].append(value - 1)

df = pd.DataFrame(d)
```

For the second question, just add another condition (every list must end up the same length, so append to each column on every iteration):

```python
values = [1, 2, 3, 4, 5, 6]
d = {'columnX': [], 'columnY': [], 'columnZ': []}

for value in values:
    if value > 3:
        d['columnX'].append("A")
    else:
        d['columnX'].append("B")

    if condition:
        d['columnZ'].append(xxx)
    else:
        d['columnZ'].append(None)

    d['columnY'].append(value - 1)

df = pd.DataFrame(d)
```
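Another common pattern (a sketch, equivalent in spirit) collects one dict per row, which lets you simply omit keys that don't apply — pandas fills them with NaN:

```python
import pandas as pd

values = [1, 2, 3, 4, 5, 6]
rows = []
for value in values:
    row = {'columnX': 'A' if value > 3 else 'B', 'columnY': value - 1}
    if value <= 3:                      # columnZ only filled on this branch
        row['columnZ'] = 'Another value'
    rows.append(row)

df = pd.DataFrame(rows)                 # missing keys become NaN
print(df)
```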
python|pandas
1
373,913
57,043,614
Is there any faster way in python to split strings to sublists in a list with 1million elements?
I'm trying to help my friend clean an order-list dataframe with one million elements.

[image: https://i.stack.imgur.com/hy3KZ.jpg]

You can see that the product_name column should hold lists, but the entries are strings. So I want to split them into sublists.

Here's my code:

```python
order_ls = raw_df['product_name'].tolist()
cln_order_ls = list()
for i in order_ls:
    i = i.replace('[', '')
    i = i.replace(']', '')
    i = i.replace('\'', '')
    cln_order_ls.append(i)

new_cln_order_ls = list()
for i in cln_order_ls:
    new_cln_order_ls.append(i.split(', '))
```

But the 'split' part takes a lot of time to process. I'm wondering, is there a faster way to deal with it?

Thanks~
**EDIT**

**(I did not like my last answer — it was too confused — so I reordered it and tested it a little more systematically.)**

**Long story short:**

For speed, just use:

```python
def str_to_list(s):
    return s[1:-1].replace('\'', '').split(', ')


df['product_name'].apply(str_to_list).to_list()
```

---

**Long story long:**

Let's dissect your code:

```python
order_ls = raw_df['product_name'].tolist()
cln_order_ls = list()
for i in order_ls:
    i = i.replace('[', '')
    i = i.replace(']', '')
    i = i.replace('\'', '')
    cln_order_ls.append(i)

new_cln_order_ls = list()
for i in cln_order_ls:
    new_cln_order_ls.append(i.split(', '))
```

What you would really like to do is to have a function, say `str_to_list()`, which converts your input `str`ing to a `list`.

For some reason, you do it in multiple steps, but this is really not necessary. What you have so far can be rewritten as:

```python
def str_to_list_OP(s):
    return s.replace('[', '').replace(']', '').replace('\'', '').split(', ')
```

If you can assume that `[` and `]` are always the first and last characters of your string, you can simplify this to:

```python
def str_to_list(s):
    return s[1:-1].replace('\'', '').split(', ')
```

which should also be faster.

Alternative approaches would use regular expressions, e.g.:

```python
def str_to_list_regex(s):
    regex = re.compile(r'[\[\]\']')
    return re.sub(regex, '', s).split(', ')
```

Note that all approaches so far use `split()`. This is a quite fast implementation which approaches **C speed**, and hardly any Python construct would beat it.

All these methods are quite **unsafe**, as they do not take escaping into account properly; e.g., all of the above would fail for the following valid Python literal:

```
['ciao', "pippo", 'foo, bar']
```

More robust alternatives in this scenario would be:

1. `ast.literal_eval`, which works for any valid Python literal
2. `json.loads`, which actually requires valid JSON strings, so it is not really an option here.

The speed of these solutions is compared here:

[image: https://i.stack.imgur.com/UPPq0.png]

As you can see, safety comes at the price of speed.

(These graphs are generated using [these scripts](https://github.com/norok2/BenchmarkTemplateIPY/blob/master/BenchmarkingTemplate.ipynb) with the following:

```python
def gen_input(n):
    return str([str(x) for x in range(n)])


def equal_output(a, b):
    return a == b


input_sizes = (5, 10, 50, 100, 500, 1000, 5000, 10000, 50000, 100000, 500000)

funcs = str_to_list_OP, str_to_list, str_to_list_regex, ast.literal_eval

runtimes, input_sizes, labels, results = benchmark(
    funcs, gen_input=gen_input, equal_output=equal_output,
    input_sizes=input_sizes)
```

)

---

Now let's concentrate on the looping. What you do is explicit looping, and we know that Python is typically not terribly fast at that. However, looping in a comprehension can be faster because it can generate more optimized code. Another approach is to use a vectorized expression using pandas primitives, either with `apply()` or with `.str.` chainings.

The following timings were obtained, indicating comprehensions to be the fastest for smaller inputs, although the vectorized solution (using `apply`) catches up and eventually surpasses the comprehension:

[image: https://i.stack.imgur.com/C4hsS.png]

The following test functions were used:

```python
import pandas as pd


def str_to_list(s):
    return s[1:-1].replace('\'', '').split(', ')


def func_OP(df):
    order_ls = df['product_name'].tolist()
    cln_order_ls = list()
    for i in order_ls:
        i = i.replace('[', '')
        i = i.replace(']', '')
        i = i.replace('\'', '')
        cln_order_ls.append(i)
    new_cln_order_ls = list()
    for i in cln_order_ls:
        new_cln_order_ls.append(i.split(', '))
    return new_cln_order_ls


def func_QuangHoang(df):
    return df['product_name'].str[1:-1].str.replace('\'', '').str.split(', ').to_list()


def func_apply_df(df):
    return df['product_name'].apply(str_to_list).to_list()


def func_compr(df):
    return [str_to_list(s) for s in df['product_name']]
```

with the following test code:

```python
def gen_input(n):
    return pd.DataFrame(
        columns=('order_id', 'product_name'),
        data=[[i, "['ciao', 'pippo', 'foo', 'bar', 'baz']"] for i in range(n)])


def equal_output(a, b):
    return a == b


input_sizes = (5, 10, 50, 100, 500, 1000, 5000, 10000, 50000, 100000, 500000)

funcs = func_OP, func_QuangHoang, func_apply_df, func_compr

runtimes, input_sizes, labels, results = benchmark(
    funcs, gen_input=gen_input, equal_output=equal_output,
    input_sizes=input_sizes)
```

again using the same [base scripts](https://github.com/norok2/BenchmarkTemplateIPY/blob/master/BenchmarkingTemplate.ipynb) as before.
python|python-3.x|pandas|performance|dataframe
4
373,914
57,174,889
How to get last n-number of rows with datetime index in a pandas dataframe?
I am trying to get the last 32 data points from a pandas dataframe indexed by date. I have multiple resampled dataframes numbered data1, data2, data3, etc., that have been resampled to 1 hour, 4 hours, 12 hours, 1 day.

I already tried to use get_loc with the datetime index I want to end on for each dataframe, but the problem is that my dataframes are sampled differently, so the datetime index is off by a few hours. I also tried to just subtract the equivalent hours from the datetime, but this does not guarantee 32 data points.

```python
from datetime import timedelta
import pandas as pd

data1 = data.resample('4H').last().ffill()
data2 = data.resample('6H').last().ffill()
data3 = data.resample('12H').last().ffill()
data4 = data.resample('1D').last().ffill()

# datetime I want to end my rows with, taking the last 32 values
end_index = pd.Timestamp("2019-02-27 00:00:00+00:00")

# this method does not always guarantee 32 data points
b = data1.loc[end_index - timedelta(hours=192): end_index].bfill().ffill()
c = data2.loc[end_index - timedelta(hours=380): end_index].bfill().ffill()
d = data3.loc[end_index - timedelta(hours=768): end_index].bfill().ffill()
e = data4.loc[end_index - timedelta(hours=768): end_index].bfill().ffill()

# this method throws an error because end_index is off by a few hours sometimes
pos = data1.index.get_loc(end_index)
b = data1.loc[pos - 32: pos].bfill().ffill()
pos = data2.index.get_loc(end_index)
c = data2.loc[pos - 32: pos].bfill().ffill()
pos = data3.index.get_loc(end_index)
d = data3.loc[pos - 32: pos].bfill().ffill()
pos = data2.index.get_loc(end_index)
e = data4.loc[pos - 32: pos].bfill().ffill()
```

> KeyError: 1498208400000000000
> During handling of the above exception, another exception occurred:
I think you need `iloc` to select by position:

```python
pos = data2.index.get_loc(end_index)
c = data2.iloc[pos - 32: pos].bfill().ffill()

pos = data3.index.get_loc(end_index)
d = data3.iloc[pos - 32: pos].bfill().ffill()

pos = data2.index.get_loc(end_index)
e = data4.iloc[pos - 32: pos].bfill().ffill()
```
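An equivalent formulation that sidesteps `get_loc` entirely (a sketch reusing the question's `data1` and `end_index`) slices by label up to the end timestamp — which on a sorted DatetimeIndex tolerates an `end_index` that falls between samples — and then keeps the last 32 rows:

```python
# label slicing is inclusive and works even if end_index is not an exact index value
b = data1.loc[:end_index].tail(32).bfill().ffill()
```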
python|pandas|datetime|indexing
2
373,915
57,188,182
How to evaluate difference between RGB numpy arrays?
I'm building software for an LED canvas with 24 x 30 pixels. I want to save the current state in a numpy array, then get a new state as a numpy array and slowly fade from the first state to the second state.

To do so, I was thinking I have to compare my two numpy arrays:

```python
currentState = np.zeros((24, 30, 3), 'int_')        # all LEDs off, e.g.
newState = np.zeros((24, 30, 3), 'int_') + 255      # all LEDs full white
```

Now I need an array with the difference between the corresponding items of the matrices, like:

```python
currentState[x][y] = [0, 0, 0]
newState[x][y] = [255, 255, 255]
# some compare operation
difference[x][y] = [255, 255, 255]

# or e.g.
currentState[x][y] = [255, 70, 30]
newState[x][y] = [100, 255, 30]
# some compare operation
difference[x][y] = [-155, 185, 0]
```

Since execution time is crucial, I don't want to iterate over the matrix arrays. Is there any other way?

Thanks a lot in advance.

The answer is not currentState - newState. Please look carefully at the second example.
Literally just subtract them (in this order, which satisfies your second example):

```python
difference = newState - currentState
```
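One caveat worth illustrating: if the states are ever stored as `uint8` (common for RGB data — an assumption, since the question uses `'int_'`), the subtraction wraps around, so cast to a signed type first:

```python
import numpy as np

current = np.array([[255, 70, 30]], dtype=np.uint8)
new = np.array([[100, 255, 30]], dtype=np.uint8)

diff = new.astype(np.int16) - current.astype(np.int16)
print(diff)  # [[-155  185    0]] instead of wrapped uint8 values
```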
python|numpy
0
373,916
57,262,885
How is the memory allocated for numpy arrays in python?
I tried to understand the differences between numpy "2D" arrays, that is, numpy.zeros((3, )), numpy.zeros((3, 1)), and numpy.zeros((1, 3)).

I used `id` to look at the memory allocation of each element. But I found some weird outputs in the iPython console.

```python
a = np.zeros((1, 3))

In [174]: id(a[0, 0])
Out[174]: 4491074656

In [175]: id(a[0, 1])
Out[175]: 4491074680

In [176]: id(a[0, 2])
Out[176]: 4491074704

In [177]: id(a[0, 0])
Out[177]: 4491074728

In [178]: id(a[0, 1])
Out[178]: 4491074800

In [179]: id(a)
Out[179]: 4492226688

In [180]: id(a[0, 1])
Out[180]: 4491074752
```

The memory addresses of the elements are

1. not consecutive
2. changing without reassignment

Moreover, the elements of the array of shape (1, 3) seem to be at successive memory addresses at first, but that's not even the case for other shapes, like:

```python
In [186]: a = np.zeros((3, ))

In [187]: id(a)
Out[187]: 4490927280

In [188]: id(a[0])
Out[188]: 4491075040

In [189]: id(a[1])
Out[189]: 4491074968
```

```python
In [191]: a = np.random.rand(4, 1)

In [192]: id(a)
Out[192]: 4491777648

In [193]: id(a[0])
Out[193]: 4491413504

In [194]: id(a[1])
Out[194]: 4479900048

In [195]: id(a[2])
Out[195]: 4491648416
```

I am actually not quite sure whether `id` is suitable for checking memory in Python. From my knowledge, I guess there is no easy way to get the physical address of a variable in Python.

Just like in C or Java, I expect the elements of such "2D" arrays to be consecutive in memory, which seems not to be true. Besides, the results of `id` keep changing, which really confuses me.

I am interested in this because I am using mpi4py a little bit, and I want to figure out how variables are sent/received between CPUs.
A numpy array stores its data in a memory area separate from the object itself, as the following image shows:

[image: https://i.stack.imgur.com/EeBUb.png]

Note that `id(a[0, 0])` does not give you the address of the stored element: each such indexing creates a new temporary Python scalar object, which is why the ids are not consecutive and keep changing.

To get the address of the data, you need to create views of the array and check the `ctypes.data` attribute, which is the address of the first data element:

```python
import numpy as np

a = np.zeros((3, 2))

print(a.ctypes.data)
print(a[0:1, 0].ctypes.data)
print(a[0:1, 1].ctypes.data)
print(a[1:2, 0].ctypes.data)
print(a[1:2, 1].ctypes.data)
```
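To see that the buffer really is contiguous, you can also compare the raw addresses against the array's strides (a small sketch):

```python
import numpy as np

a = np.zeros((3, 2))
print(a.strides)                 # (16, 8) for float64: rows 16 bytes apart

addr0 = a[0:1, 0].ctypes.data    # address of element (0, 0)
addr1 = a[0:1, 1].ctypes.data    # address of element (0, 1)
print(addr1 - addr0)             # 8 bytes: the elements are consecutive
```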
python|numpy|memory|mpi4py
5
373,917
57,089,861
How to convert a set to an array?
How do I convert a set to an array?

I tried:

```python
import numpy as np

mySet = {1, 2, 3, 4, 5}
myRandomArray = np.asarray(mySet, dtype=int, order="C")
print(myRandomArray)
```

**Output**

> return array(a, dtype, copy=False, order=order)
>
> TypeError: int() argument must be a string, a bytes-like object or a number, not 'set'

Where am I making a mistake?
Convert the set to a list first:

```python
myset = {1, 2, 3, 4, 5}
np.array(list(myset))
```
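An allocation-friendlier variant (a sketch) skips the intermediate list with `np.fromiter`:

```python
import numpy as np

myset = {1, 2, 3, 4, 5}
arr = np.fromiter(myset, dtype=int, count=len(myset))
print(arr)
```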
python|numpy
1
373,918
56,937,088
How to get the index of duplicated values, while dropping NAs? Resulting index is smaller than original dataframe
I'm working with a df:

```python
df.shape[0]
82208
```

And I want to index duplicates based on firstname, lastname and email:

```python
indx = (df.dropna(subset=['firstname', 'lastname', 'email'])
          .duplicated(subset=['firstname', 'lastname', 'email'], keep=False))

indx
0     True
1     True
2     False
3     False
4     True
5     True

indx.shape[0]
73797
```

I am unable to use this against the original df with `df[indx]`, as they do not match in size, as you can see from `.shape[0]`. I tried to use `indx.index` too, but I get:

```python
df[indx.index]
KeyError: "None of [Int64Index([    0,     1,     2,     3,     4,     5,     6,     7,     8,
                9,
            ...
            82198, 82199, 82200, 82201, 82202, 82203, 82204, 82205, 82206,
            82207],
           dtype='int64', length=73797)] are in the [columns]"
```

I know it's something very simple, I just can't figure it out. It seems the `indx` I generate resets its index, so what I'm trying to get is an index of where the dupes are in the first df. I'm guessing my problem has something to do with the `dropna()` when generating the index.

Edit: it was suggested to check out a duplicate post, but it doesn't answer my question — the duplicate is just basic indexing.

My problem is that in generating the new index / boolean series `indx`, the original `df` indexes are lost, so it can't be used to index the `df`.

**Edit:** another solution for this is reindexing so it matches the size of the df:

```python
df = pd.DataFrame({'firstname': ['stack', 'Bar Bar', np.nan, 'Bar Bar', 'john', 'mary', 'jim'],
                   'lastname': ['jim', 'Bar', 'Foo Bar', 'Bar', 'con', 'sullivan', 'Ryan'],
                   'email': [np.nan, 'Bar', 'Foo Bar', 'Bar', 'john@com', 'mary@com', 'Jim@com']})
print(df)

  firstname  lastname     email
0     stack       jim       NaN
1   Bar Bar       Bar       Bar
2       NaN   Foo Bar   Foo Bar
3   Bar Bar       Bar       Bar
4      john       con  john@com
5      mary  sullivan  mary@com
6       jim      Ryan   Jim@com

indx = (df.dropna(subset=['firstname', 'lastname', 'email'])
          .duplicated(subset=['firstname', 'lastname', 'email'], keep=False))
indx = indx.reindex(df.index, fill_value=False)

df[indx]

  firstname lastname email
1   Bar Bar      Bar   Bar
3   Bar Bar      Bar   Bar
```
Instead of dropping the NaNs and then creating the boolean mask, fold a "row has no NaN" condition into the mask itself, so all indexes are retained but NaN rows come out False. Using `df.isna()` and `.any(axis=1)`:

```python
cols = ['firstname', 'lastname', 'email']
indx = ~df[cols].isna().any(axis=1) & df.duplicated(subset=cols, keep=False)
```
python|pandas
1
373,919
57,147,356
class 'numpy.int32' wanted, but type() function only shows: numpy.ndarray
I want to send a numpy array as a digital output through an NI card. I am using the nidaqmx package from NI (National Instruments). For the digital output they expect an array. I converted my numpy arrays to int32, but it still does not work, and when I checked the array with the type() function it reported numpy.ndarray as the class.

```python
import numpy as np

p = np.array(np.zeros(100), dtype=np.int32)
q = np.array(np.ones(50), dtype=np.int32)
vec = np.concatenate((p, q))
type(vec)
```

Out:

```
numpy.ndarray
```

Expected:

```
numpy.int32
```
You should use `vec.dtype` to see the type of what the array contains. `type(vec)` tells you the type of `vec` itself, which is obviously a `numpy.array`. The output of `vec.dtype` is `dtype('int32')`. It is the same as `np.int32`; check for yourself:

```python
vec.dtype == np.int32
```

Output:

```
True
```

The code is fine.
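To actually see `numpy.int32` come out of `type()`, inspect a single element rather than the whole array (a quick self-contained sketch mirroring the question's code):

```python
import numpy as np

p = np.array(np.zeros(100), dtype=np.int32)
q = np.array(np.ones(50), dtype=np.int32)
vec = np.concatenate((p, q))

print(type(vec[0]))   # <class 'numpy.int32'>
print(vec.dtype)      # int32
```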
python|numpy|output
0
373,920
57,116,361
How to make a pivot table from this data?
I have data like:

```python
data = {
    "Person": ["A", "A", "A", "B", "B", "B"],
    "Month": [1, 2, 3, 1, 2, 3],
    "Value 1": [5, 6, 7, 8, 9, 10],
    "Value 2": [10, 11, 12, 13, 5, 4]
}

df = pd.DataFrame(data)
```

I want it to look like:

```
  Person  Value  Month 1  Month 2  Month 3
0      A      1        5        6        7
0      A      2       10       11       12
0      B      1        8        9       10
0      B      2       13        5        4
...
```

How would I go about doing this?
IIUC, you can chain `pivot_table` + `unstack`:

```python
df.pivot_table(columns='Month', index='Person')\
  .unstack()\
  .reset_index()\
  .rename(columns={'level_0': 'Value'})\
  .pivot_table(columns='Month', index=['Person', 'Value'])
```

Outputs:

```
Month              1     2     3    5     6
Person Value
A      Value 1   5.0   6.0   7.0  NaN   NaN
       Value 2  10.0  11.0  12.0  NaN   NaN
B      Value 1   NaN   NaN   NaN  8.5  10.0
       Value 2   NaN   NaN   NaN  9.0   4.0
```
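A shorter route that avoids the double pivot (a sketch reusing the question's `df`; it reshapes to long form with `melt` first) yields essentially the same wide layout without the stray Month columns:

```python
out = (df.melt(id_vars=['Person', 'Month'], var_name='Value')
         .pivot_table(index=['Person', 'Value'], columns='Month', values='value'))
print(out)
```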
python|pandas
1
373,921
56,887,503
Implement gradient descent in python
I am trying to implement gradient descent in Python. Though my code runs and returns a result, I think the results I am getting are completely wrong.

Here is the code I have written:

```python
import numpy as np
import pandas

dataset = pandas.read_csv('D:\ML Data\house-prices-advanced-regression-techniques\\train.csv')

X = np.empty((0, 1), int)
Y = np.empty((0, 1), int)

for i in range(dataset.shape[0]):
    X = np.append(X, dataset.at[i, 'LotArea'])
    Y = np.append(Y, dataset.at[i, 'SalePrice'])

X = np.c_[np.ones(len(X)), X]
Y = Y.reshape(len(Y), 1)

def gradient_descent(X, Y, theta, iterations=100, learningRate=0.000001):
    m = len(X)
    for i in range(iterations):
        prediction = np.dot(X, theta)
        theta = theta - (1/m) * learningRate * (X.T.dot(prediction - Y))
    return theta

theta = np.random.randn(2, 1)
theta = gradient_descent(X, Y, theta)
print('theta', theta)
```

The result I get after running this program is:

> theta [[-5.23237458e+228] [-1.04560188e+233]]

These are very high values. Can someone point out the mistake I have made in the implementation?

Also, a second problem: I have to set the learning rate very low (in this case 0.000001) for it to work, otherwise the program throws an error.

Please help me diagnose the problem.
Try reducing the learning rate as the iterations progress; otherwise it won't be able to reach the lowest point of the cost. Try this:

```python
import numpy as np
import pandas

dataset = pandas.read_csv('start.csv')

X = np.empty((0, 1), int)
Y = np.empty((0, 1), int)

for i in range(dataset.shape[0]):
    X = np.append(X, dataset.at[i, 'R&D Spend'])
    Y = np.append(Y, dataset.at[i, 'Profit'])

X = np.c_[np.ones(len(X)), X]
Y = Y.reshape(len(Y), 1)

def gradient_descent(X, Y, theta, iterations=50, learningRate=0.01):
    m = len(X)
    for i in range(iterations):
        prediction = np.dot(X, theta)
        theta = theta - (1/m) * learningRate * (X.T.dot(prediction - Y))
        learningRate /= 10
    return theta

theta = np.random.randn(2, 1)
theta = gradient_descent(X, Y, theta)
print('theta', theta)
```
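The blow-up in the question is often better cured by scaling the feature than by shrinking the learning rate: with LotArea values in the tens of thousands, the gradients explode. A hedged sketch with synthetic data standing in for the CSV:

```python
import numpy as np

# synthetic stand-in for LotArea; standardize so gradients stay well-conditioned
X_raw = np.random.rand(100) * 20000
X = (X_raw - X_raw.mean()) / X_raw.std()
X = np.c_[np.ones(len(X)), X]
Y = (3 * X[:, 1] + 1 + np.random.randn(len(X)) * 0.1).reshape(-1, 1)

theta = np.zeros((2, 1))
for _ in range(1000):
    grad = X.T.dot(X.dot(theta) - Y) / len(X)
    theta -= 0.1 * grad            # a normal-sized rate now converges
print(theta.ravel())               # approximately [1, 3]
```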
python|numpy|machine-learning|gradient-descent
1
373,922
45,724,633
Down-sampling specific period on dataframe using Pandas
I have a long time series that starts in 1963 and ends in 2013. However, from 1963 until 2007 it has an hourly sampling period, while after 2007 the sampling rate changes to 5 minutes. Is it possible to resample the data after 2007 in a way that gives the entire time series an hourly sampling period? Data slice below.

```
yr, m, d, h, m, s, sl
2007, 11, 30, 19, 0, 0, 2180
2007, 11, 30, 20, 0, 0, 2310
2007, 11, 30, 21, 0, 0, 2400
2007, 11, 30, 22, 0, 0, 2400
2007, 11, 30, 23, 0, 0, 2270
2008, 1, 1, 0, 0, 0, 2210
2008, 1, 1, 0, 5, 0, 2210
2008, 1, 1, 0, 10, 0, 2210
2008, 1, 1, 0, 15, 0, 2200
2008, 1, 1, 0, 20, 0, 2200
2008, 1, 1, 0, 25, 0, 2200
2008, 1, 1, 0, 30, 0, 2200
2008, 1, 1, 0, 35, 0, 2200
2008, 1, 1, 0, 40, 0, 2200
2008, 1, 1, 0, 45, 0, 2200
2008, 1, 1, 0, 50, 0, 2200
2008, 1, 1, 0, 55, 0, 2200
2008, 1, 1, 1, 0, 0, 2190
2008, 1, 1, 1, 5, 0, 2190
```

Thanks!
Give your dataframe proper column names:

```python
df.columns = 'year month day hour minute second sl'.split()
```

**Solution**

```python
df.groupby(['year', 'month', 'day', 'hour'], as_index=False).first()

   year  month  day  hour  minute  second    sl
0  2007     11   30    19       0       0  2180
1  2007     11   30    20       0       0  2310
2  2007     11   30    21       0       0  2400
3  2007     11   30    22       0       0  2400
4  2007     11   30    23       0       0  2270
5  2008      1    1     0       0       0  2210
6  2008      1    1     1       0       0  2190
```

---

**Option 2**

Here is an option that builds off of the column renaming. We'll use `pd.to_datetime` to cleverly get at our dates, then use `resample`. However, you have time gaps and will have to address nulls and re-cast dtypes.

```python
df.set_index(
    pd.to_datetime(df.drop('sl', 1))
).resample('H').first().dropna().astype(df.dtypes)

                     year  month  day  hour  minute  second    sl
2007-11-30 19:00:00  2007     11   30    19       0       0  2180
2007-11-30 20:00:00  2007     11   30    20       0       0  2310
2007-11-30 21:00:00  2007     11   30    21       0       0  2400
2007-11-30 22:00:00  2007     11   30    22       0       0  2400
2007-11-30 23:00:00  2007     11   30    23       0       0  2270
2008-01-01 00:00:00  2008      1    1     0       0       0  2210
2008-01-01 01:00:00  2008      1    1     1       0       0  2190
```
python|pandas|dataframe|downsampling
2
373,923
45,741,649
(Python2) Combining pandas dataframe of mulilayer columns
I want to add the values of two dataframes which have the same format. For example:

```
>>> my_dataframe1
        class1 score
subject      1  2  3
student
0            1  2  5
1            2  3  9
2            8  7  2
3            3  4  7
4            6  7  7

>>> my_dataframe2
        class2 score
subject      1  2    3
student
0            4  2    2
1            4  4   14
2            8  7    7
3            1  2  NaN
4          NaN  2    3
```

As you can see, the two dataframes have multi-layer columns: the main column is 'class score' and the sub-column is 'subject'. What I want to do is get a summed dataframe that can be shown like this:

```
         score
subject  1   2   3
student
0        5   4   7
1        2   1   5
2       16  14   9
3        4   6   7
4        6   9  10
```

Actually, I could get this dataframe by:

```python
for i in my_dataframe1['class1 score'].index:
    my_dataframe1['class1 score'].loc[i, :] = my_dataframe1['class1 score'].loc[i, :].add(
        my_dataframe2['class2 score'].loc[i, :], fill_value=0)
```

but as the dimensions increase, it takes a tremendous amount of time to produce the result dataframe, and I don't think it is a good way to solve the problem.
IIUC:

```python
df_out = df['class1 score'].add(df2['class2 score'], fill_value=0).add_prefix('scores_')
df_out.columns = df_out.columns.str.split('_', expand=True)
df_out
```

Output:

```
        scores
             1   2     3
student
0          5.0   4   7.0
1          6.0   7  23.0
2         16.0  14   9.0
3          4.0   6   7.0
4          6.0   9  10.0
```
python-2.7|pandas|dataframe
0
373,924
45,766,829
Pandas ordered categorical data on exam grades 'D',...,'A+'
I have the following data in pandas. I was surprised that the output was "D+ A"; I was expecting "A+ D".

Can someone explain, please?

```python
df = pd.DataFrame(['A+','A','A-','B+','B','B-','C+','C','C-','D+','D'],
                  index=['excellent','excellent','excellent','good','good','good','ok','ok','ok','poor','poor'])
df.rename(columns={0: 'Grades'}, inplace=True)

grades = df['Grades'].astype('category',
                             categories=['D','D+','C-','C','C+','B-','B','B+','A-','A','A+'],
                             ordered=True)

print(max(grades), min(grades))

> D+ A
```
`max` is a built-in Python function and it does not respect the category ordering. It uses lexicographical ordering based on Unicode code points.

If you want to take the categorical order into account, you need to use the methods defined on Series/DataFrames:

```python
print(grades.min(), grades.max())
```

which yields:

```
D A+
```
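On recent pandas versions, `astype('category', categories=..., ordered=...)` has been removed (an assumption about your pandas version), so the ordered dtype is built explicitly with `CategoricalDtype` — a sketch reusing the question's `df`:

```python
import pandas as pd

order = ['D', 'D+', 'C-', 'C', 'C+', 'B-', 'B', 'B+', 'A-', 'A', 'A+']
cat_type = pd.CategoricalDtype(categories=order, ordered=True)
grades = df['Grades'].astype(cat_type)
print(grades.min(), grades.max())   # D A+
```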
pandas|sorting|categorical-data
2
373,925
45,805,720
find indeces of grouped-item matches between two arrays
```python
a = np.array([5, 8, 3, 4, 2, 5, 7, 8, 1, 9, 1, 3, 4, 7])
b = np.array([3, 4, 7, 8, 1, 3])
```

I have two lists of integers, each grouped by every 2 consecutive items (i.e. indices [0, 1], [2, 3] and so on). The pairs of items cannot be found as duplicates in either list, neither in the same nor in the reverse order.

One list is significantly larger and inclusive of the other. I am trying to figure out an efficient way to get the indices of the larger list's grouped items that are also in the smaller one.

The desired output in the example above should be:

```
[2, 3, 6, 7, 10, 11]  # indices
```

Notice that, as an example, the first group ([3, 4]) should not get indices 11, 12 as a match, because in that case 3 is the second element of [1, 3] and 4 the first element of [4, 7].
Since you are grouping your arrays by pairs, you can reshape them into 2 columns for comparison. You can then compare each of the elements in the shorter array to the longer array and reduce the boolean arrays. From there, it is a simple matter to get the indices using a reshaped `np.arange`.

```python
import numpy as np
from functools import reduce

a = np.array([5, 8, 3, 4, 2, 5, 7, 8, 1, 9, 1, 3, 4, 7])
b = np.array([3, 4, 7, 8, 1, 3])

# reshape a and b into columns
a2 = a.reshape((-1, 2))
b2 = b.reshape((-1, 2))

# create a generator of bools for the rows of a2 that hold a row of b2
b_in_a_generator = (np.all(a2 == row, axis=1) for row in b2)

# reduce the generator to get an array of booleans that is True for each row
# of a2 that equals one of the rows of b2
ix_bool = reduce(lambda x, y: x + y, b_in_a_generator)

# grab the indices by slicing a reshaped np.arange array
ix = np.arange(len(a)).reshape((-1, 2))[ix_bool]
ix
# returns:
array([[ 2,  3],
       [ 6,  7],
       [10, 11]])
```

If you want a flat array, simply ravel `ix`:

```python
ix.ravel()
# returns
array([ 2,  3,  6,  7, 10, 11])
```
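The same comparison can also be written as one broadcasting step, without `functools.reduce` (a sketch reusing `a`, `a2` and `b2` from above):

```python
# compare every row of a2 against every row of b2 in one shot:
# (n, 1, 2) == (1, m, 2) -> (n, m, 2) -> all over pairs -> any over b2's rows
ix_bool = (a2[:, None, :] == b2[None, :, :]).all(axis=2).any(axis=1)
print(np.arange(len(a)).reshape(-1, 2)[ix_bool].ravel())
# [ 2  3  6  7 10 11]
```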
python|numpy
2
373,926
46,005,286
pytorch inception model outputs the wrong label for every input image
For the PyTorch models I found [this tutorial](http://blog.outcome.io/pytorch-quick-start-classifying-an-image/) explaining how to classify an image. I tried to apply the same procedure to an Inception model. However, the model fails for every image I load in.

Code:

```python
# some people need these three lines to make it work
# from torchvision.models.inception import model_urls
# name = 'inception_v3_google'
# model_urls[name] = model_urls[name].replace('https://', 'http://')

from torch.autograd import Variable
import torchvision
import requests
from torchvision import models, transforms
from PIL import Image
import io

LABELS_URL = 'https://s3.amazonaws.com/outcome-blog/imagenet/labels.json'
# cat
IMG_URL1 = 'http://farm2.static.flickr.com/1029/762542019_4f197a0de5.jpg'
# dog
IMG_URL2 = 'http://farm3.static.flickr.com/2314/2518519714_98b01968ee.jpg'
# lion
IMG_URL3 = 'http://farm1.static.flickr.com/62/218998565_62930f10fc.jpg'

labels = {int(key): value for (key, value) in requests.get(LABELS_URL).json().items()}

model = torchvision.models.inception_v3(pretrained=True)
model.training = False
model.transform_input = False

def predict_url_img(url):
    response = requests.get(url)
    img_pil = Image.open(io.BytesIO(response.content))

    normalize = transforms.Normalize(
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]
    )
    preprocess = transforms.Compose([
        transforms.Scale(256),
        transforms.CenterCrop(299),
        transforms.ToTensor(),
        normalize
    ])

    img_tensor = preprocess(img_pil)
    img_tensor.unsqueeze_(0)
    img_variable = Variable(img_tensor)
    fc_out = model(img_variable)
    print("prediction:", labels[fc_out.data.numpy().argmax()])

predict_url_img(IMG_URL1)
predict_url_img(IMG_URL2)
predict_url_img(IMG_URL3)
```

As output I get this:

> ('prediction:', u"plunger, plumber's helper")
>
> ('prediction:', u'plastic bag')
>
> ('prediction:', u"plunger, plumber's helper")
<p>I found out that one needs to call <code>model.eval()</code> before applying the model. Because of the batch normalisation and dropout layers, the model behaves differently during training and testing.</p>
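<p>A minimal sketch of the fix, using the same setup as in the question:</p> <pre><code>model = torchvision.models.inception_v3(pretrained=True)
model.eval()  # switch batch norm and dropout layers to inference behaviour

predict_url_img(IMG_URL1)  # should now return a sensible label
</code></pre>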
pytorch
2
373,927
45,953,647
TensorFlow restore throwing "No Variable to save" error
<p>I am working through some code to understand how to save and restore checkpoints in tensorflow. To do so, I implemented a simple neural netowork that works with MNIST digits and saved the .ckpt file like so:</p> <pre><code> from tensorflow.examples.tutorials.mnist import input_data import numpy as np learning_rate = 0.001 n_input = 784 # MNIST data input (img shape = 28*28) n_classes = 10 # MNIST total classes 0-9 #import MNIST data mnist = input_data.read_data_sets('.', one_hot = True) #Features and Labels features = tf.placeholder(tf.float32, [None, n_input]) labels = tf.placeholder(tf.float32, [None, n_classes]) #Weights and biases weights = tf.Variable(tf.random_normal([n_input, n_classes])) bias = tf.Variable(tf.random_normal([n_classes])) #logits = xW + b logits = tf.add(tf.matmul(features, weights), bias) #Define loss and optimizer cost = tf.reduce_mean(\ tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)) optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\ .minimize(cost) # Calculate accuracy correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) import math save_file = './train_model.ckpt' batch_size = 128 n_epochs = 100 saver = tf.train.Saver() # Launch the graph with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(n_epochs): total_batch = math.ceil(mnist.train.num_examples / batch_size) # Loop over all batches for i in range(total_batch): batch_features, batch_labels = mnist.train.next_batch(batch_size) sess.run( optimizer, feed_dict={features: batch_features, labels: batch_labels}) # Print status for every 10 epochs if epoch % 10 == 0: valid_accuracy = sess.run( accuracy, feed_dict={ features: mnist.validation.images, labels: mnist.validation.labels}) print('Epoch {:&lt;3} - Validation Accuracy: {}'.format( epoch, valid_accuracy)) # Save the model saver.save(sess, save_file) print('Trained Model Saved.') </code></pre> <p>This part works well, and I get the .ckpt file saved in the correct directory. The problem comes in when I try to restore the model in an attempt to work on it again. I use the following code to restore the model:</p> <pre><code>saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, 'train_model.ckpt.meta') print('model restored') </code></pre> <p>and end up with the error: <code>ValueError: No variables to save</code></p> <p>Not too sure, what the mistake here is. Any help is appreciated. Thanks in advance</p>
<p>A <code>Graph</code> is different to the <code>Session</code>. A graph is the set of operations joining tensors, each of which is a symbolic representation of a set of values. A <code>Session</code> assigns specific values to the <code>Variable</code> tensors, and allows you to <code>run</code> operations in that graph.</p> <p>The checkpoint file saves variable values - i.e. the weights and biases - but not the graph itself.</p> <p>The solution is simple: re-run the graph construction (everything before the <code>Session</code>), then start your session and load values from the checkpoint file.</p> <p>Alternatively, you can check out <a href="https://www.tensorflow.org/api_guides/python/meta_graph" rel="nofollow noreferrer">this guide for exporting and importing <code>MetaGraph</code>s</a>.</p>
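<p>A sketch of what the restore script could look like; note that <code>saver.restore</code> should be given the checkpoint path itself, not the <code>.meta</code> file:</p> <pre><code>import tensorflow as tf

# rebuild the exact same graph as in the training script
features = tf.placeholder(tf.float32, [None, n_input])
labels = tf.placeholder(tf.float32, [None, n_classes])
weights = tf.Variable(tf.random_normal([n_input, n_classes]))
bias = tf.Variable(tf.random_normal([n_classes]))
# ... the rest of the graph definition goes here ...

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, './train_model.ckpt')
    print('model restored')
</code></pre>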
python|tensorflow
1
373,928
45,884,288
Pandas Series.dt.total_seconds() not found
<p>I need a datetime column in seconds, everywhere (<a href="http://pandas.pydata.org/pandas-docs/version/0.20.3/generated/pandas.Series.dt.total_seconds.html" rel="noreferrer">including the docs</a>) is saying that I should use <code>Series.dt.total_seconds()</code> but it can't find the function. I'm assuming I have the wrong version of something but I don't...</p> <pre><code>pip freeze | grep pandas pandas==0.20.3 python --version Python 3.5.3 </code></pre> <p>This is all within a virtualenv that has worked without fault for a long time, and the other <code>Series.dt</code> functions work. Here's the code:</p> <pre><code>from pandas import Series from datetime import datetime s = Series([datetime.now() for _ in range(10)]) 0 2017-08-25 15:55:25.079495 1 2017-08-25 15:55:25.079504 2 2017-08-25 15:55:25.079506 3 2017-08-25 15:55:25.079508 4 2017-08-25 15:55:25.079509 5 2017-08-25 15:55:25.079510 6 2017-08-25 15:55:25.079512 7 2017-08-25 15:55:25.079513 8 2017-08-25 15:55:25.079514 9 2017-08-25 15:55:25.079516 dtype: datetime64[ns] s.dt &lt;pandas.core.indexes.accessors.DatetimeProperties object at 0x7f5a686507b8&gt; s.dt.minute 0 55 1 55 2 55 3 55 4 55 5 55 6 55 7 55 8 55 9 55 dtype: int64 s.dt.total_seconds() AttributeError: 'DatetimeProperties' object has no attribute 'total_seconds' </code></pre> <p>I've also tested this on a second machine and get the same result. Any ideas what I'm missing?</p>
<p><code>total_seconds</code> is a member of <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.total_seconds.html#pandas.Series.dt.total_seconds" rel="noreferrer"><code>timedelta</code></a>, not <code>datetime</code>. </p> <p>Hence the error.</p> <p>You may be wanting <code>dt.second</code>.</p> <p>This returns the second component, which is different to <code>total_seconds</code>.</p> <p>So you need to perform some kind of arithmetic operation, such as subtracting another datetime from this, in order to generate a series of timedeltas; then you can do <code>dt.total_seconds</code>.</p> <p>Example:</p> <pre><code>In[278]: s = s - pd.datetime.now() s Out[278]: 0 -1 days +23:59:46.389639 1 -1 days +23:59:46.389639 2 -1 days +23:59:46.389639 3 -1 days +23:59:46.389639 4 -1 days +23:59:46.389639 5 -1 days +23:59:46.389639 6 -1 days +23:59:46.389639 7 -1 days +23:59:46.389639 8 -1 days +23:59:46.389639 9 -1 days +23:59:46.389639 dtype: timedelta64[ns] In[279]: s.dt.total_seconds() Out[279]: 0 -13.610361 1 -13.610361 2 -13.610361 3 -13.610361 4 -13.610361 5 -13.610361 6 -13.610361 7 -13.610361 8 -13.610361 9 -13.610361 dtype: float64 </code></pre>
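<p>If what you actually need is each datetime expressed as a number of seconds (e.g. since the Unix epoch), subtract a fixed reference timestamp first; the result is a timedelta series, which does have <code>total_seconds</code>:</p> <pre><code>(s - pd.Timestamp("1970-01-01")).dt.total_seconds()
</code></pre>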
python|pandas
29
373,929
46,021,931
Tensorflow Installation in Ubuntu14.04
<p>I installed tensorflow on Ubuntu 14.04 with Anaconda, following the official installation guide. After installing it, I ran this code step by step and it went wrong.</p> <pre><code>import tensorflow as tf hello = tf.constant('Hello, TensorFlow!') sess = tf.Session() print(sess.run(hello)) </code></pre> <p>When I run</p> <pre><code>sess = tf.Session() </code></pre> <p>It says:</p> <pre><code>2017-09-03 17:04:11.220320: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. 2017-09-03 17:04:11.220355: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. 2017-09-03 17:04:11.220363: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. 2017-09-03 17:04:11.220369: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations. 2017-09-03 17:04:11.220374: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations. </code></pre>
<p>This is not an actual "error". It is <a href="https://www.tensorflow.org/performance/performance_guide#best_practices" rel="nofollow noreferrer">informing you</a> that the computations could be faster if you have installed from source with support to the mentioned instructions, e.g., SSE4.1, SSE4.2, etc.</p> <p>You can go ahead and use Tensorflow, though it will be not as fast as compiled from the sources.</p> <p>If you want to install Tensorflow with those instructions enabled, you have to:</p> <ol> <li>Install Bazel</li> </ol> <p>Download it from one of their available <a href="https://github.com/bazelbuild/bazel/releases" rel="nofollow noreferrer">releases</a>, for example <a href="https://github.com/bazelbuild/bazel/releases/download/0.5.2/bazel-0.5.2-dist.zip" rel="nofollow noreferrer">0.5.2</a>. Extract it, go into the directory and configure it: <code>bash ./compile.sh</code>. Copy the executable to <code>/usr/local/bin</code>: <code>sudo cp ./output/bazel /usr/local/bin</code></p> <ol start="2"> <li>Install Tensorflow</li> </ol> <p>Clone tensorflow: <code>git clone https://github.com/tensorflow/tensorflow.git</code> Go to the cloned directory to configure it: <code>./configure</code></p> <p>It will prompt you with several questions, bellow I have suggested the response to each of the questions, you can, of course, choose your own responses upon as you prefer:</p> <pre><code>Using python library path: /usr/local/lib/python2.7/dist-packages Do you wish to build TensorFlow with MKL support? [y/N] y MKL support will be enabled for TensorFlow Do you wish to download MKL LIB from the web? [Y/n] Y Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: Do you wish to use jemalloc as the malloc implementation? [Y/n] n jemalloc disabled Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] N No Google Cloud Platform support will be enabled for TensorFlow Do you wish to build TensorFlow with Hadoop File System support? [y/N] N No Hadoop File System support will be enabled for TensorFlow Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] N No XLA JIT support will be enabled for TensorFlow Do you wish to build TensorFlow with VERBS support? [y/N] N No VERBS support will be enabled for TensorFlow Do you wish to build TensorFlow with OpenCL support? [y/N] N No OpenCL support will be enabled for TensorFlow Do you wish to build TensorFlow with CUDA support? [y/N] N No CUDA support will be enabled for TensorFlow </code></pre> <ol start="3"> <li>The final build will be a pip package, to build it you have to describe which instructions you want (you know, those Tensorflow informed you are missing).</li> </ol> <p>Build pip script: <code>bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.1 --copt=-msse4.2 -k //tensorflow/tools/pip_package:build_pip_package</code></p> <p>Build pip package: <code>bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg</code></p> <p>Install Tensorflow pip package you just built: <code>sudo pip install /tmp/tensorflow_pkg/tensorflow-1.2.1-cp27-cp27mu-linux_x86_64.whl</code></p> <p>Now next time you start up Tensorflow it will not complain anymore about missing instructions.</p>
python|tensorflow
0
373,930
45,871,154
How to efficiently create a pivot table?
<p>I do have a dataframe like this:</p> <pre><code>import pandas as pd df = pd.DataFrame({"c0": list('ABC'), "c1": [" ".join(list('ab')), " ".join(list('def')), " ".join(list('s'))], "c2": list('DEF')}) c0 c1 c2 0 A a b D 1 B d e f E 2 C s F </code></pre> <p>I want to create a pivot table that looks like this:</p> <pre><code> c2 c0 c1 A a D b D B d E e E f E C s F </code></pre> <p>So, the entries in <code>c1</code> are split and then treated as single elements used in a multiindex.</p> <p>I do this as follows:</p> <pre><code>newdf = pd.DataFrame() for indi, rowi in df.iterrows(): # get all single elements in string n_elements = rowi['c1'].split() # only one element so we can just add the entire row if len(n_elements) == 1: newdf = newdf.append(rowi) # more than one element else: for eli in n_elements: # that allows to add new elements using loc, without it we will have identical index values if not newdf.empty: newdf = newdf.reset_index(drop=True) newdf.index = -1 * newdf.index - 1 # add entire row newdf = newdf.append(rowi) # replace the entire string by the single element newdf.loc[indi, 'c1'] = eli print newdf.reset_index(drop=True) </code></pre> <p>which yields</p> <pre><code> c0 c1 c2 0 A a D 1 A b D 2 B d E 3 B e E 4 B f E 5 C s F </code></pre> <p>Then I can just call</p> <pre><code>pd.pivot_table(newdf, index=['c0', 'c1'], aggfunc=lambda x: ' '.join(set(str(v) for v in x))) </code></pre> <p>which gives me the desired output (see above).</p> <p>For huge dataframes that can be quite slow, so I am wondering whether there is a more efficient way of doing this.</p>
<p><strong>Option 1</strong> </p> <pre><code>import numpy as np, pandas as pd s = df.c1.str.split() l = s.str.len() newdf = df.loc[df.index.repeat(l)].assign(c1=np.concatenate(s)).set_index(['c0', 'c1']) newdf c2 c0 c1 A a D b D B d E e E f E C s F </code></pre> <hr> <p><strong>Option 2</strong><br> Should be faster </p> <pre><code>import numpy as np, pandas as pd s = np.core.defchararray.split(df.c1.values.astype(str), ' ') l = [len(x) for x in s.tolist()] r = np.arange(len(s)).repeat(l) i = pd.MultiIndex.from_arrays([ df.c0.values[r], np.concatenate(s) ], names=['c0', 'c1']) newdf = pd.DataFrame({'c2': df.c2.values[r]}, i) newdf c2 c0 c1 A a D b D B d E e E f E C s F </code></pre>
python|performance|pandas
3
373,931
45,962,669
pandas.DatetimeIndex.snap timestamps to left occurring frequency
<p>I would like to have the same functionality as <code>snap</code>, but using the <strong>left</strong> occurring frequency instead of the nearest. </p> <p>This is what I am trying:</p> <pre><code>date = pd.date_range('2015-01-01', '2015-12-31') week_index = pd.DatetimeIndex.snap(date, 'W-MON') week_index DatetimeIndex(['2014-12-29', '2015-01-05', '2015-01-05', '2015-01-05', '2015-01-05', '2015-01-05', '2015-01-05', '2015-01-05', '2015-01-12', '2015-01-12', ... '2015-12-21', '2015-12-21', '2015-12-21', '2015-12-28', '2015-12-28', '2015-12-28', '2015-12-28', '2015-12-28', '2015-12-28', '2015-12-28'], dtype='datetime64[ns]', length=365, freq='W-MON') </code></pre> <p>It seems like floor should do it, although it outputs a ValueError:</p> <pre><code>week_index = pd.DatetimeIndex.floor(date, 'W-MON') --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-40-a053b5230ee3&gt; in &lt;module&gt;() ----&gt; 1 week_index = pd.DatetimeIndex.floor(date, 'W-MON') C:\ProgramData\Anaconda3\lib\site-packages\pandas\tseries\base.py in floor(self, freq) 99 @Appender(_round_doc % "floor") 100 def floor(self, freq): --&gt; 101 return self._round(freq, np.floor) 102 103 @Appender(_round_doc % "ceil") C:\ProgramData\Anaconda3\lib\site-packages\pandas\tseries\base.py in _round(self, freq, rounder) 79 80 from pandas.tseries.frequencies import to_offset ---&gt; 81 unit = to_offset(freq).nanos 82 83 # round the local times C:\ProgramData\Anaconda3\lib\site-packages\pandas\tseries\offsets.py in nanos(self) 510 @property 511 def nanos(self): --&gt; 512 raise ValueError("{0} is a non-fixed frequency".format(self)) 513 514 ValueError: &lt;Week: weekday=0&gt; is a non-fixed frequency </code></pre>
<p>I found the solution in the pandas <code>snap</code> source code:</p> <p>Use <code>rollback</code> instead:</p> <pre><code>from pandas.tseries.frequencies import to_offset freq = to_offset('W-MON') date.map(freq.rollback) </code></pre>
python|pandas|datetimeindex
5
373,932
45,976,234
module 'tensorflow.contrib.rnn' has no attribute 'BasicLSTMCell'
<p>When I try to run</p> <pre><code>lstm_fw_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0) </code></pre> <p>I get the error mentioned in the title. </p> <p>Is this due to the tensorflow version? How to resolve this issue?</p>
<p>Try to replace <code>rnn.BasicLSTMCell</code> with <a href="https://www.tensorflow.org/versions/r0.12/api_docs/python/rnn_cell/rnn_cells_for_use_with_tensorflow_s_core_rnn_methods#BasicLSTMCell" rel="nofollow noreferrer"><code>tf.nn.rnn_cell.BasicLSTMCell</code></a>. See more details <a href="https://github.com/tensorflow/tensorflow/issues/6432" rel="nofollow noreferrer">here</a>.</p>
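<p>That is, the line from the question would become (same arguments, just the fully qualified name):</p> <pre><code>lstm_fw_cell = tf.nn.rnn_cell.BasicLSTMCell(num_hidden, forget_bias=1.0)
</code></pre>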
tensorflow|lstm|rnn
1
373,933
46,026,834
Creating pd.dataframe from multiline String object
<p>I am new to Python and I have already found the posts here very helpful but now I am stuck. I have parsed trading data from an email and saved it into a string object that looks like this:</p> <p>=E2=84=96\tOrderID\tInstrument/ISIN\tDirection\tQuantity\t=\nPrice\tAmount\tDeal time\tSlippage\tConfirmation time\t=\nSettlement time\tCommission\tCharges and fees\tOther\n> <strong>1</strong>\tPO59332737\tOil-20Sep17\tBuy\t100\t46.100000\t=\n4610.00 USD\t2017-08-30 20:46:36\t0.000000\t2017-08-30 =\n21:01:47\t2017-08-30 21:01:47\t0.000000 GBP\t0.000000 GBP\t=\n-\n> <strong>2</strong>\tPO59332799\tOil-20Sep17\tBuy\t50\t46.100000\t=\n2305.00 USD\t2017-08-30 20:46:48\t0.000000\t2017-08-30 =\n21:01:47\t2017-08-30 21:01:47\t0.000000 GBP\t0.000000 GBP\t=\n-\n> <strong>3</strong>\tMO59332700\tOil-20Sep17\tBuy\t100\t46.019000\t=\n4601.90 USD\t2017-08-30 20:46:27\t0.000000\t2017-08-30 =\n20:46:27\t2017-08-30 20:46:27\t0.000000 GBP\t0.000000 GBP\t=\n-\n></p> <p>The string continues but the structure is the same: The column headers ('=E2=84=96', Order ID', ..., 'Other') are followed by the specific values. The snippet shows 3 rows of data. Columns are separated with \t and new rows in the email start with \n . </p> <p>My goal is turn this string into a pandas dataframe object but I am struggling to do so. I have tried replacing <strong>\t</strong> and <strong>\n</strong> with <strong>;</strong> and then saving the String as StringIO object and using pd.read_csv to create a dataframe from the string. However this puts all the data into separate columns so that I end up with 0 rows.</p> <p>How I can manipulate the string object such that pd.read_csv automatically recognises when a new row starts. In csv files a new row starts with a new line, however, in my string all rows are joined together.</p> <p>Any help would be much appreciated. Thanks.</p> <p>EDIT: I have realised that new rows in the string start with <strong>\n></strong>. How can I use this to specify when a new row in the dataframe should start?</p>
<p>A messy one.</p> <p>The first thing I did, having saved that text to a file, was to split it on the <code>\\n</code>s, to make it easier to understand, and write out the pieces. This enabled me to see several features of the data:</p> <ul> <li>There are three lines of 'header' and then three lines in each 'record', if I ignore the inconsequential lines consisting of just a '-'.</li> <li>I could collapse the '>' out of existence.</li> <li>Pandas is happy with tab characters. I could replace each '\t' with a tab.</li> </ul> <p>.</p> <pre><code>&gt;&gt;&gt; with open('simon.txt') as simon: ... for line in simon.read().split('\\n'): ... line ... '=E2=84=96\\tOrderID\\tInstrument/ISIN\\tDirection\\tQuantity\\t=' 'Price\\tAmount\\tDeal time\\tSlippage\\tConfirmation time\\t=' 'Settlement time\\tCommission\\tCharges and fees\\tOther' '&gt; 1\\tPO59332737\\tOil-20Sep17\\tBuy\\t100\\t46.100000\\t=' '4610.00 USD\\t2017-08-30 20:46:36\\t0.000000\\t2017-08-30 =' '21:01:47\\t2017-08-30 21:01:47\\t0.000000 GBP\\t0.000000 GBP\\t=' '-' '&gt; 2\\tPO59332799\\tOil-20Sep17\\tBuy\\t50\\t46.100000\\t=' '2305.00 USD\\t2017-08-30 20:46:48\\t0.000000\\t2017-08-30 =' '21:01:47\\t2017-08-30 21:01:47\\t0.000000 GBP\\t0.000000 GBP\\t=' '-' '&gt; 3\\tMO59332700\\tOil-20Sep17\\tBuy\\t100\\t46.019000\\t=' '4601.90 USD\\t2017-08-30 20:46:27\\t0.000000\\t2017-08-30 =' '20:46:27\\t2017-08-30 20:46:27\\t0.000000 GBP\\t0.000000 GBP\\t=' '-' '&gt;' </code></pre> <ul> <li>As I read through the input file I number each line in <code>n</code>. Skipped lines, ie, those that contain only a hyphen, are not counted. </li> <li>When I accumulate three lines of input in <code>big_line</code> I join them together, replace the '\t' characters with tabs and write the result to the output file. Then I reset <code>big_line</code> to empty, ready for the next three lines.</li> </ul> <p>.</p> <pre><code>&gt;&gt;&gt; with open('simon.txt') as simon: ... with open('simon_out.txt', 'w') as simon_out: ... n = 0 ... big_line = [] ... for line in simon.read().split('\\n'): ... if line=='-': ... pass ... else: ... n += 1 ... if n % 3 == 1: ... if big_line: ... simon_out.write(' '.join(big_line).replace('\\t', '\t')+'\n') ... big_line = [] ... line = line.replace('&gt;', '') ... big_line.append(line) ... 157 157 156 157 </code></pre> <p>The result is acceptable to pandas. It might still need some massaging, depending on your requirements.</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; df = pd.read_csv('simon_out.txt', sep='\t') &gt;&gt;&gt; df =E2=84=96 OrderID Instrument/ISIN Direction Quantity = Price \ 0 1 PO59332737 Oil-20Sep17 Buy 100 46.100 1 2 PO59332799 Oil-20Sep17 Buy 50 46.100 2 3 MO59332700 Oil-20Sep17 Buy 100 46.019 Amount Deal time Slippage Confirmation time \ 0 = 4610.00 USD 2017-08-30 20:46:36 0.0 2017-08-30 = 21:01:47 1 = 2305.00 USD 2017-08-30 20:46:48 0.0 2017-08-30 = 21:01:47 2 = 4601.90 USD 2017-08-30 20:46:27 0.0 2017-08-30 = 20:46:27 = Settlement time Commission Charges and fees Other 0 2017-08-30 21:01:47 0.000000 GBP 0.000000 GBP = 1 2017-08-30 21:01:47 0.000000 GBP 0.000000 GBP = 2 2017-08-30 20:46:27 0.000000 GBP 0.000000 GBP = </code></pre>
python-3.x|pandas
0
373,934
45,911,225
How to use Tensorflow to train Poisson regression?
<p>I want to train a Poisson regression to compare with the log(Y) linear regression in Tensorflow. However, I've only found <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/distributions/Poisson" rel="nofollow noreferrer">tf.contrib.distributions.Poisson</a>. Can anyone offer some insight to help me? Thanks!</p>
<p>Recently I found one function that may help: </p> <pre><code>loss = tf.nn.log_poisson_loss(targets=batch_labels, log_input=logits) </code></pre>
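<p>A minimal sketch of how this could be wired into a full Poisson regression; the variable names here are placeholders, not part of any established API. The linear model predicts the log of the Poisson rate, which is exactly what <code>log_input</code> expects:</p> <pre><code>import tensorflow as tf

n_features = 10  # example feature count

x = tf.placeholder(tf.float32, [None, n_features])
y = tf.placeholder(tf.float32, [None])      # observed counts

w = tf.Variable(tf.zeros([n_features, 1]))
b = tf.Variable(tf.zeros([1]))
log_rate = tf.squeeze(tf.matmul(x, w) + b)  # log of the Poisson rate

loss = tf.reduce_mean(tf.nn.log_poisson_loss(targets=y, log_input=log_rate))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
</code></pre> <p>Minimizing this loss is equivalent to maximizing the Poisson log-likelihood (up to the constant <code>log(y!)</code> term, which the function omits by default), so the comparison with a log(Y) linear regression is straightforward.</p>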
tensorflow|regression
1
373,935
46,071,295
Can't get a new tf.Operation to work in Python shell in Tensorflow
<p>I am trying to add a new integer Matrix Multiplication OP in tensorflow and I am not able to successfully register it as a tf operation so that it can be called as tf.intmatmul in python. </p> <p>Steps I did : 1) Added a new REGISTER_OP - IntMatMul in the math_ops.cc file.</p> <p>2) Added a new kernel implementation for this OP in the core/kernels path - int_matmul_op.cc and a corresponding header file - int_matmul_op.h</p> <p>3) I added the dependency of the OP in the core/kernels/BUILD file. This will add the kernel linking for this OP. </p> <p>4) Added the definition for this OP (as 'intmatmul') in the Python Wrapper file i.e python/ops/math_ops.py - This file calls the gen_math_ops.int_mat_mul</p> <p>5) re-built from source using Bazel and re-installed Tensorflow using the pip package. </p> <p>However when I try to use this OP as tf.intmatmul, I get an error saying the module is not defined. I am not sure now what I am missing here. Is there any linking that is missing? Do I need to add any OP linking in the core/BUILD file as well?</p>
<p><em>UPDATE:</em></p> <p>So this turned out to be more complicated than expected. These are the things that had to be taken into account:</p> <ul> <li>Apparently, in order for a function to be exposed as public API (that is, at <code>tf.</code> level), its name must be listed at the beginning of the module in its docstring, preceded with <code>@@</code>. Take a look, for example, at <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/math_ops.py" rel="nofollow noreferrer"><code>math_ops.py</code></a>.</li> <li>The kernel definition has to be absolutely correct in order for the operation to be registered properly, even though you can still access it from the internal module (e.g. doing <code>from tensorflow.python.ops import math_ops</code>).</li> </ul> <p>--</p> <p>As <a href="https://www.tensorflow.org/extend/adding_an_op#use_the_op_in_python" rel="nofollow noreferrer">the docs</a> indicate, the name of the operation, which must be registered with a CamelCase identifier in C++, is "translated" into snake_case in Python. Try <code>tf.int_mat_mul</code> instead.</p> <p>As a side note, that tutorial provides additional guidance to implement custom operations without needing to recompile TensorFlow from source, loading it from a custom library instead.</p>
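<p>For reference, the <code>@@</code> listing mentioned above sits at the top of the module docstring. This is a sketch following the pattern in <code>math_ops.py</code> at the time, with the hypothetical <code>int_mat_mul</code> from this question added:</p> <pre><code>"""Basic arithmetic operators.

...
@@matmul
@@int_mat_mul
...
"""
</code></pre>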
python|c++|tensorflow
2
373,936
46,102,671
How to show distributions of multiple features in one tensor in TensorBoard
<p>I have a tensor <code>X</code> which is an output of the batch normalization layer (<code>tf.layers.batch_normalization</code>) and has the shape of <code>[batch_size, 15]</code>. To monitor its distribution, I created a histogram for <code>X</code> with <code>tf.summary.histogram('out_BN_0', X)</code>. The graph is what I got in Tensorboard after > 70k steps (~ 130 epochs). Is it the averaged result of all 15 features, or a particular feature in <code>X</code>? How do I get the distribution of just one feature (e.g. the 5th)?</p> <p><a href="https://i.stack.imgur.com/vncv3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vncv3.png" alt="enter image description here"></a></p>
<p>How about building a histogram for each feature?</p> <pre><code>import tensorflow as tf import numpy as np batch_size = 100 num_features = 15 X = tf.constant(np.random.uniform(size=(batch_size, num_features))) hists = {feature_index: tf.summary.histogram(f'hist_{feature_index}', X[:, feature_index]) for feature_index in range(num_features)} </code></pre>
tensorflow|visualization|tensorboard
2
373,937
46,141,690
How do I write a PyTorch sequential model?
<p>How do I write a sequential model in PyTorch, just like what we can do with Keras? I tried:</p> <pre><code>import torch import torch.nn as nn net = nn.Sequential() net.add(nn.Linear(3, 4)) net.add(nn.Sigmoid()) net.add(nn.Linear(4, 1)) net.add(nn.Sigmoid()) net.float() </code></pre> <p>But I get the error:</p> <blockquote> <p>AttributeError: 'Sequential' object has no attribute 'add'</p> </blockquote>
<p><code>Sequential</code> does not have an <code>add</code> method at the moment, though there is some <a href="https://github.com/pytorch/pytorch/issues/358" rel="noreferrer">debate</a> about adding this functionality. </p> <p>As you can read in the <a href="http://pytorch.org/docs/master/nn.html#torch.nn.Sequential" rel="noreferrer">documentation</a>, <code>nn.Sequential</code> takes the layers either as a plain sequence of arguments or as an <a href="https://docs.python.org/3/library/collections.html#collections.OrderedDict" rel="noreferrer"><code>OrderedDict</code></a>. </p> <p>If you have a model with lots of layers, you can create a list first and then use the <code>*</code> operator to expand the list into positional arguments, like this:</p> <pre class="lang-py prettyprint-override"><code>layers = [] layers.append(nn.Linear(3, 4)) layers.append(nn.Sigmoid()) layers.append(nn.Linear(4, 1)) layers.append(nn.Sigmoid()) net = nn.Sequential(*layers) </code></pre> <p>This gives your code the same structure as adding the layers one by one.</p>
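<p>And here is what the <code>OrderedDict</code> variant mentioned above would look like for the model in the question (the layer names are just labels of your choosing):</p> <pre class="lang-py prettyprint-override"><code>from collections import OrderedDict

net = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(3, 4)),
    ('sig1', nn.Sigmoid()),
    ('fc2', nn.Linear(4, 1)),
    ('sig2', nn.Sigmoid())
]))
</code></pre>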
python|sequential|pytorch
43
373,938
46,148,302
What is the best way to access values in a dataframe column?
<p>For example I have</p> <pre><code>df=pd.DataFrame({'a':[1,2,3]}) df[df['a']==3].a = 4 </code></pre> <p>This does not assign 4 to where 3 is </p> <pre><code>df[df['a']==3] = 4 </code></pre> <p>But this works.</p> <p>It confused me on how the assignment works. Appreciate if anyone can give me some references or explanation.</p>
<p>You do <em>not</em> want to use the second method. It returns a dataframe subslice and assigns the same value to every column of the matching rows.</p> <p>For example,</p> <pre><code>df a b 0 1 4 1 2 3 2 3 6 df[df['a'] == 3] a b 2 3 6 df[df['a']==3] = 3 df a b 0 1 4 1 2 3 2 3 3 </code></pre> <p>The first method does not work because boolean indexing returns a copy of the column (series), which you are trying to assign to, so assignment fails:</p> <pre><code>df[df['a'] == 3].a = 4 /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/core/generic.py:3110: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy self[name] = value </code></pre> <hr> <p>So, your options are using <code>.loc</code> (access by name) or <code>iloc</code> (access by index) based indexing:</p> <pre><code>df.loc[df.a == 3, 'a'] = 4 df a 0 1 1 2 2 4 </code></pre> <p>If you are passing a boolean mask, you cannot use <code>iloc</code>.</p>
python|pandas|dataframe|indexing
3
373,939
46,086,136
Shipping and using virtualenv in a pyspark job
<p>PROBLEM: I am attempting to run a spark-submit script from my local machine to a cluster of machines. The work done by the cluster uses numpy. I currently get the following error:</p> <pre><code>ImportError: Importing the multiarray numpy extension module failed. Most likely you are trying to import a failed build of numpy. If you're working with a numpy git repo, try `git clean -xdf` (removes all files not under version control). Otherwise reinstall numpy. Original error was: cannot import name multiarray </code></pre> <p>DETAIL: In my local environment I have setup a virtualenv that includes numpy as well as a private repo I use in my project and other various libraries. I created a zip file (lib/libs.zip) from the site-packages directory at venv/lib/site-packages where 'venv' is my virtual environment. I ship this zip to the remote nodes. My shell script for performing the spark-submit looks like this:</p> <pre><code>$SPARK_HOME/bin/spark-submit \ --deploy-mode cluster \ --master yarn \ --conf spark.pyspark.virtualenv.enabled=true \ --conf spark.pyspark.virtualenv.type=native \ --conf spark.pyspark.virtualenv.requirements=${parent}/requirements.txt \ --conf spark.pyspark.virtualenv.bin.path=${parent}/venv \ --py-files "${parent}/lib/libs.zip" \ --num-executors 1 \ --executor-cores 2 \ --executor-memory 2G \ --driver-memory 2G \ $parent/src/features/pi.py </code></pre> <p>I also know that on the remote nodes there is a /usr/local/bin/python2.7 folder that includes a python 2.7 install.</p> <p>so in my conf/spark-env.sh I have set the following:</p> <pre><code>export PYSPARK_PYTHON=/usr/local/bin/python2.7 export PYSPARK_DRIVER_PYTHON=/usr/local/bin/python2.7 </code></pre> <p>When I run the script I get the error above. If I screen print the installed_distributions I get a zero length list []. Also my private library imports correctly (which says to me it is actually accessing my libs.zip site-packages.). My pi.py file looks something like this:</p> <pre><code>from myprivatelibrary.bigData.spark import spark_context spark = spark_context() import numpy as np spark.parallelize(range(1, 10)).map(lambda x: np.__version__).collect() </code></pre> <p>EXPECTATION/MY THOUGHTS: I expect this to import numpy correctly especially since I know numpy works correctly in my local virtualenv. I suspect this is because I'm not actually using the version of python that is installed in my virtualenv on the remote node. My question is first, how do I fix this and second how do I use my virtualenv installed python on the remote nodes instead of the python that is just manually installed and currently sitting on those machines? I've seen some write-ups on this but frankly they are not well written.</p>
<p>With <code>--conf spark.pyspark.{}</code> and <code>export PYSPARK_PYTHON=/usr/local/bin/python2.7</code> you set options for your local environment / your driver. To set options for the cluster (executors) use the following syntax:</p> <pre><code>--conf spark.yarn.appMasterEnv.PYSPARK_PYTHON </code></pre> <p>Furthermore, I guess you should make your <a href="https://virtualenv.pypa.io/en/stable/userguide/#making-environments-relocatable" rel="noreferrer">virtualenv relocatable</a> (this is experimental, however). <strong><code>&lt;edit 20170908&gt;</code></strong> This means that the virtualenv uses relative instead of absolute links. <strong><code>&lt;/edit&gt;</code></strong></p> <p>What we did in such cases: we shipped an entire anaconda distribution over hdfs.</p> <p><strong><code>&lt;edit 20170908&gt;</code></strong></p> <p>If we are talking about different environments (MacOs vs. Linux, as mentioned in the comment below), you cannot just submit a virtualenv, at least not if your virtualenv contains packages with binaries (as is the case with numpy). In that case I suggest you create yourself a 'portable' anaconda, i.e. install Anaconda in a Linux VM and zip it.</p> <p>Regarding <code>--archives</code> vs. <code>--py-files</code>:</p> <ul> <li><p><code>--py-files</code> adds python files/packages to the python path. From the <a href="https://spark.apache.org/docs/latest/submitting-applications.html" rel="noreferrer">spark-submit documentation</a>:</p> <blockquote> <p>For Python applications, simply pass a .py file in the place of instead of a JAR, and add Python .zip, .egg or .py files to the search path with --py-files.</p> </blockquote></li> <li><p><code>--archives</code> means these are extracted into the working directory of each executor (only yarn clusters).</p></li> </ul> <p>However, a crystal-clear distinction is lacking, in my opinion - see for example <a href="https://stackoverflow.com/q/38066318/6699237">this SO post</a>.</p> <p>In the given case, add the <code>anaconda.zip</code> via <code>--archives</code>, and your 'other python files' via <code>--py-files</code>.</p> <p><strong><code>&lt;/edit&gt;</code></strong></p> <p>See also: <a href="http://henning.kropponline.de/2016/09/17/running-pyspark-with-virtualenv/" rel="noreferrer">Running Pyspark with Virtualenv</a>, a blog post by Henning Kropp.</p>
numpy|pyspark|virtualenv
6
373,940
45,786,083
Change 1 point color in scatter plot regardless of color palette
<p>I have a pandas data frame <code>df</code> like this </p> <p><code>NAME VALUE ID A 0.2 X B 0.4 X C 0.5 X D 0.8 X ... Z 0.3 X </code></p> <p>I would like to color all the points by the 'NAME' column by specifying the hue='NAME' but specify the color for ONE point: B. </p> <p>How do you specify the color for 1 point only, and have the "hue" command take care of the rest (where each point A-Z has a unique color)?</p> <p>Right now this is my command to plot, where hue is the NAME.</p> <p><code>plot = sns.stripplot(x="ID", y="VALUE", hue="NAME", data=df, jitter=True, c=df['NAME'], s=7, linewidth=1)</code></p>
<p>You can replace one color in the palette by converting it to a list of colors and then replace one of the colors by some other color of your liking.</p> <pre><code>import pandas as pd import numpy as np;np.random.seed(42) import matplotlib.pyplot as plt import seaborn as sns letters = list(map(chr, range(ord('A'), ord('Z')+1))) df = pd.DataFrame({"NAME" : letters, "VALUE": np.sort(np.random.rand(len(letters)))[::-1], "ID" : ["X"]*len(letters)}) special_letter = "B" special_color = "indigo" levels = df["NAME"].unique() colors = sns.color_palette("hls", len(levels)) inx = list(levels).index(special_letter) colors[inx] = special_color ax = sns.stripplot(x="ID", y="VALUE", hue="NAME", data=df, jitter=True, s=7, palette=colors) ax.legend(ncol=3, bbox_to_anchor=(1.05,1), loc=2) ax.figure.subplots_adjust(right=0.6) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/r6MM5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r6MM5.png" alt="enter image description here"></a></p> <p>Instead of providing a palette directly, one may also (thanks to @mwaskom for pointing that out) use a dictionary of (hue name, color) pairs:</p> <pre><code>levels = df["NAME"].unique() colors = sns.color_palette("hls", len(levels)) colors = dict(zip(levels, colors)) colors[special_letter] = special_color </code></pre>
python|pandas|matplotlib|seaborn|scatter
4
373,941
45,852,289
Adding multiple columns of unique calculated values to a pandas dataframe
<p>I would like to add new columns to a data frame using a function and values from the original data frame. </p> <p>Create the dataframe</p> <pre><code>df = pd.DataFrame({'f1' : np.random.randn(10), 'f2' : np.random.randn(10), 'f3' : np.random.randn(10), 'f4' : np.random.randn(10), 'f5' : np.random.randn(10)}) </code></pre> <p>Test function to be applied to the existing columns</p> <pre><code>def testfun(x,n): return x * n </code></pre> <p>Arguments for the function - each new column has different arguments</p> <pre><code>colnum = [1,2,3,4,5] </code></pre> <p>Create new column names for the new columns to add to the data frame</p> <pre><code>newcol = [s + "_D" for s in df.columns] </code></pre> <p>Loop through the existing columns applying the function and the appropriate argument for that column. Each new column will be assigned a unique name.</p> <p>This is the part that does not work!</p> <pre><code>for s in range(len(df.columns)): df = df.assign(newcol[s] = testfun(df[[df.columns[s]]], s)) </code></pre> <p>The new data frame should contain 10 columns.</p>
<p>You can try this. Note the single brackets in <code>df[df.columns[s]]</code>: they return a Series, which assigns cleanly to a new column, whereas double brackets return a one-column DataFrame whose column name does not align with the new column name. This also reuses your <code>testfun</code> and <code>colnum</code>, so each new column gets its intended multiplier:</p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'f1' : np.random.randn(10), 'f2' : np.random.randn(10), 'f3' : np.random.randn(10), 'f4' : np.random.randn(10), 'f5' : np.random.randn(10)}) def testfun(x,n): return x * n colnum = [1,2,3,4,5] newcol = [s + "_D" for s in df.columns] for s in range(len(df.columns)): df.loc[:, newcol[s]] = testfun(df[df.columns[s]], colnum[s]) </code></pre>
python|pandas|dataframe|assign
0
373,942
46,151,185
Calculate jaccard distance using scipy in python
<p>I have two separate lists as follows.</p> <pre><code>list1 =[[0.0, 0.75, 0.2], [0.0, 0.5, 0.7]] list2 =[[0.9, 0.0, 0.8], [0.0, 0.0, 0.8], [1.0, 0.0, 0.0]] </code></pre> <p>I want to get a list1 x list2 jaccard distance matrix (i.e. the matrix includes 6 values: 2 x 3)</p> <pre><code> For example; [0.0, 0.75, 0.2] in list1 with all the three lists in list2 [0.0, 0.5, 0.7] in list1 with all the three lists in list2 </code></pre> <p>I actually tried both <code>pdist</code> and <code>cdist</code>. However I get the following errors respectively; <code>TypeError: pdist() got multiple values for argument 'metric'</code> and <code>ValueError: XA must be a 2-dimensional array.</code>.</p> <p>Please help me to fix this issue.</p>
<p>You need to pass <code>pdist</code> an <code>m x n</code> 2D array. To construct the pairs, you can use a simple nested loop. You could probably do something like this:</p> <pre><code>import scipy.spatial.distance as dist list1 =[[0.0, 0.75, 0.2], [0.0, 0.5, 0.7]] list2 =[[0.9, 0.0, 0.8], [0.0, 0.0, 0.8], [1.0, 0.0, 0.0]] distance = [] for elem1 in list1: for elem2 in list2: distance.append(dist.pdist([elem1,elem2], 'jaccard')) </code></pre> <p>You get your results in the <code>distance</code> list. </p>
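<p>Alternatively, since you already tried <code>cdist</code>, note that it can produce the full 2 x 3 matrix in a single call; the <code>ValueError: XA must be a 2-dimensional array</code> should go away once both inputs are passed as proper 2D arrays (a sketch):</p> <pre><code>import numpy as np
from scipy.spatial import distance

list1 = [[0.0, 0.75, 0.2], [0.0, 0.5, 0.7]]
list2 = [[0.9, 0.0, 0.8], [0.0, 0.0, 0.8], [1.0, 0.0, 0.0]]

matrix = distance.cdist(np.array(list1), np.array(list2), 'jaccard')
# matrix has shape (2, 3); matrix[i, j] is the Jaccard distance
# between list1[i] and list2[j]
</code></pre>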
python|numpy|scipy
1
373,943
45,754,067
Error in plotting line graph
<p>Appreciate it if someone could explain to me what went wrong?</p> <p>1) Couldnt plot line graph....I managed to plot my data only with point marker('r.') or round marker('r.'), the plot just blank with no data when I tried to plot it with line graph changing it to ('r-') </p> <p>Below is my code to produce the figure and also the data printed out</p> <pre><code>import cv2 import numpy as np import matplotlib.pyplot as plt path = 'R:\\Temp\\xxx\\' path1 = 'R:\\Temp\\xxx\\' def Hue(im_file): im = cv2.imread(im_file) im = cv2.cvtColor(im, cv2.COLOR_BGR2HSV_FULL) im1 = im[776, 402] Hue = im1[0] return Hue def Saturation(im_file): im = cv2.imread(im_file) im = cv2.cvtColor(im, cv2.COLOR_BGR2HSV_FULL) im1 = im[776, 402] Saturation = im1[1] return Saturation def Value(im_file): im = cv2.imread(im_file) im = cv2.cvtColor(im, cv2.COLOR_BGR2HSV_FULL) im1 = im[776, 402] Value = im1[2] return Value def BlueComponent(im_file): im = cv2.imread(im_file) #return blue value im1 = im[776, 402] b = im1[0] return b def GreenComponent(im_file): im = cv2.imread(im_file) #return green value im1 = im[776, 402] g = im1[1] return g def RedComponent(im_file): #return red value im = cv2.imread(im_file) im1 = im[776, 402] r = im1[2] return r myHueList = [] mySaturationList = [] myValueList = [] myBlueList = [] myGreenList = [] myRedList = [] myList = [] num_images = 99 # number of images dotPos = 0 for i in range(1770, 1869): # loop to auto-generate image names and run prior function image_name = path + 'Cropped_Aligned_IMG_' + str(i) + '.png' # for loop runs from image number 1770 to 1868 myHueList.append(Hue(image_name)) mySaturationList.append(Saturation(image_name)) myValueList.append(Value(image_name)) myBlueList.append(BlueComponent(image_name)) myGreenList.append(GreenComponent(image_name)) myRedList.append(RedComponent(image_name)) myList.append(dotPos) dotPos = dotPos + 0.5 print(myHueList) print(mySaturationList) print(myValueList) print(myList) print(myBlueList) print(myGreenList) print(myRedList) for k in range(1770,1869): a = 'Cropped_Aligned_IMG_' + str(k) image_name = path + a + '.png' img_file = cv2.imread(image_name) y = [myBlueList] x = [myList] y1 = [myGreenList] y2 = [myRedList] y3 = [myHueList] y4 = [mySaturationList] y5 = [myValueList] plt.axes([0.1, 0.1, 1, 1]) plt.suptitle('BGR &amp; HSV Color Decimal Code Against Function of Time(Hours)', fontsize=14, fontweight='bold') plt.subplot(3,2,1) plt.plot(x, y, 'b.-') plt.title('Blue Component Color Decimal Code') plt.xlabel('Time(Hours)') plt.ylabel('Colour Code') plt.subplot(3,2,3) plt.plot(x, y1, 'g.-') plt.title('Green Component Color Decimal Code') plt.xlabel('Time(Hours)') plt.ylabel('Colour Code') plt.subplot(3,2,5) plt.plot(x, y2, 'r.-') plt.title('Red Component Color Decimal Code') plt.xlabel('Time(Hours)') plt.ylabel('Colour Code') plt.subplot(3,2,2) plt.plot(x, y3, 'b.-') plt.title('Hue Component HSV Color Decimal Code') plt.xlabel('Time(Hours)') plt.ylabel('Colour Code') plt.subplot(3,2,4) plt.plot(x, y4, 'g.-') plt.title('Saturation Component HSV Color Decimal Code') plt.xlabel('Time(Hours)') plt.ylabel('Colour Code') plt.subplot(3,2,6) plt.plot(x, y5, 'r.-') plt.title('Value Component HSV Color Decimal Code') plt.xlabel('Time(Hours)') plt.ylabel('Colour Code') plt.subplots_adjust(hspace = 0.5) plt.show() </code></pre> <p>I have copy out the data as below: </p> <pre><code>myHueList = [5, 4, 5, 5, 5, 5, 6, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 5, 5, 6, 5, 6, 5, 5, 5, 5, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 4, 6, 6, 5, 6, 6, 6, 
6, 5, 4, 4, 5, 6, 6, 5, 6, 6, 6, 4, 4, 4, 5, 6, 5, 4, 5, 5, 5, 5, 4, 4, 5, 5, 4, 3, 5, 3, 3, 3, 2, 1, 3, 3, 4, 1, 1, 1, 0, 2, 0, 1, 1, 2, 2, 2] mySaturationList = [99, 95, 102, 99, 98, 102, 99, 99, 96, 99, 102, 100, 99, 95, 94, 102, 105, 98, 97, 107, 105, 104, 107, 102, 109, 102, 101, 102, 96, 102, 105, 97, 100, 97, 99, 100, 99, 100, 99, 100, 106, 100, 102, 99, 96, 104, 102, 102, 104, 104, 100, 99, 95, 101, 105, 96, 101, 101, 107, 100, 105, 102, 100, 97, 103, 104, 106, 99, 96, 97, 97, 97, 104, 93, 96, 98, 101, 93, 88, 93, 83, 84, 82, 79, 78, 83, 78, 79, 80, 74, 72, 75, 75, 71, 71, 66, 74, 76, 73] myValueList = [137, 134, 133, 137, 138, 133, 139, 136, 135, 131, 123, 135, 137, 132, 135, 125, 121, 133, 134, 121, 134, 135, 119, 137, 133, 123, 134, 125, 135, 138, 121, 134, 135, 139, 137, 138, 142, 135, 137, 135, 135, 135, 133, 131, 133, 123, 132, 137, 123, 135, 135, 141, 142, 137, 122, 136, 137, 137, 121, 138, 134, 138, 127, 140, 124, 137, 125, 137, 130, 139, 140, 139, 123, 137, 135, 128, 134, 137, 136, 134, 142, 139, 143, 139, 137, 144, 138, 149, 127, 141, 142, 136, 143, 136, 141, 135, 144, 141, 144] myList = [0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0, 14.5, 15.0, 15.5, 16.0, 16.5, 17.0, 17.5, 18.0, 18.5, 19.0, 19.5, 20.0, 20.5, 21.0, 21.5, 22.0, 22.5, 23.0, 23.5, 24.0, 24.5, 25.0, 25.5, 26.0, 26.5, 27.0, 27.5, 28.0, 28.5, 29.0, 29.5, 30.0, 30.5, 31.0, 31.5, 32.0, 32.5, 33.0, 33.5, 34.0, 34.5, 35.0, 35.5, 36.0, 36.5, 37.0, 37.5, 38.0, 38.5, 39.0, 39.5, 40.0, 40.5, 41.0, 41.5, 42.0, 42.5, 43.0, 43.5, 44.0, 44.5, 45.0, 45.5, 46.0, 46.5, 47.0, 47.5, 48.0, 48.5, 49.0] myBlueList = [84, 84, 80, 84, 85, 80, 85, 83, 84, 80, 74, 82, 84, 83, 85, 75, 71, 82, 83, 70, 79, 80, 69, 82, 76, 74, 81, 75, 84, 83, 71, 83, 82, 86, 84, 84, 87, 82, 84, 82, 79, 82, 80, 80, 83, 73, 79, 82, 73, 80, 82, 86, 89, 83, 72, 85, 83, 83, 70, 84, 79, 83, 77, 87, 74, 81, 73, 84, 81, 86, 87, 86, 73, 87, 84, 79, 81, 87, 89, 85, 96, 93, 97, 96, 95, 97, 96, 103, 87, 100, 102, 96, 101, 98, 102, 100, 102, 99, 103] myGreenList = [90, 89, 86, 90, 91, 86, 92, 89, 90, 86, 80, 88, 90, 89, 90, 80, 76, 88, 89, 76, 87, 88, 76, 90, 86, 80, 87, 82, 90, 91, 77, 89, 88, 92, 89, 89, 93, 88, 90, 88, 86, 88, 86, 86, 88, 80, 86, 88, 80, 88, 89, 94, 95, 88, 77, 91, 90, 90, 76, 91, 87, 91, 82, 92, 79, 88, 80, 90, 86, 92, 93, 92, 79, 92, 89, 85, 87, 92, 92, 91, 99, 96, 100, 98, 96, 100, 99, 107, 88, 101, 103, 96, 103, 98, 103, 101, 104, 101, 105] myRedList = [137, 134, 133, 137, 138, 133, 139, 136, 135, 131, 123, 135, 137, 132, 135, 125, 121, 133, 134, 121, 134, 135, 119, 137, 133, 123, 134, 125, 135, 138, 121, 134, 135, 139, 137, 138, 142, 135, 137, 135, 135, 135, 133, 131, 133, 123, 132, 137, 123, 135, 135, 141, 142, 137, 122, 136, 137, 137, 121, 138, 134, 138, 127, 140, 124, 137, 125, 137, 130, 139, 140, 139, 123, 137, 135, 128, 134, 137, 136, 134, 142, 139, 143, 139, 137, 144, 138, 149, 127, 141, 142, 136, 143, 136, 141, 135, 144, 141, 144] </code></pre>
<p>The data is stored in a list of lists, i.e. you have <code>[[1,2,3]]</code> instead of <code>[1,2,3]</code>. Matplotlib will not interpret this correctly, so no lines are drawn between the points. </p> <p>To correctly produce the required lists, use </p> <pre><code>y = myBlueList x = myList y1 = myGreenList y2 = myRedList y3 = myHueList y4 = mySaturationList y5 = myValueList </code></pre>
python|opencv|numpy|matplotlib
3
373,944
45,917,524
Why NumPy is casting objects to floats?
<p>I'm trying to store intervals (with their specific arithmetic) in NumPy arrays. If I use my own Interval class, it works, but my class is very poor and my Python knowledge is limited.</p> <p>I know <a href="http://pyinterval.readthedocs.io/en/latest/" rel="nofollow noreferrer" title="pyInterval">pyInterval</a> and it's very complete. It covers my needs. The only thing which is not working is storing pyInterval objects in NumPy arrays. </p> <pre><code>class Interval(object): def __init__(self, lower, upper = None): if upper is None: self.upper = self.lower = lower elif lower &lt;= upper: self.lower = lower self.upper = upper else: raise ValueError(f"Lower is bigger than upper! {lower},{upper}") def __repr__(self): return "Interval " + str((self.lower,self.upper)) def __mul__(self,another): values = (self.lower * another.lower, self.upper * another.upper, self.lower * another.upper, self.upper * another.lower) return Interval(min(values), max(values)) import numpy as np from interval import interval i = np.array([Interval(2,3), Interval(-3,6)], dtype=object) # My class ix = np.array([interval([2,3]), interval([-3,6])], dtype=object) # pyInterval </code></pre> <p>These are the results</p> <pre><code>In [30]: i Out[30]: array([Interval (2, 3), Interval (-3, 6)], dtype=object) In [31]: ix Out[31]: array([[[2.0, 3.0]], [[-3.0, 6.0]]], dtype=object) </code></pre> <p>The intervals from pyInterval have been cast as lists of lists of floats. That wouldn't be a problem if they preserved interval arithmetic...</p> <pre><code>In [33]: i[0] * i[1] Out[33]: Interval (-9, 18) In [34]: ix[0] * ix[1] Out[34]: array([[-6.0, 18.0]], dtype=object) </code></pre> <p><code>Out[33]</code> is the desired output. The output using pyInterval is incorrect. Obviously, using raw pyInterval it works like a charm:</p> <pre><code>In [35]: interval([2,3]) * interval([-3,6]) Out[35]: interval([-9.0, 18.0]) </code></pre> <p><a href="https://github.com/taschini/pyinterval/blob/master/interval/__init__.py" rel="nofollow noreferrer" title="pyInterval source">Here</a> is the pyInterval source code. I don't understand why NumPy doesn't work as I expect when using this object.</p>
<p>To be fair, it is really hard for the <code>numpy.ndarray</code> constructor to infer what kind of data should go into it. It receives objects which resemble lists of tuples and makes do with it.</p> <p>You can, however, help your constructor a bit by not having it guess the shape of your data:</p> <pre><code>a = interval([2,3]) b = interval([-3,6]) ll = [a,b] ix = np.empty((len(ll),), dtype=object) ix[:] = [*ll] ix[0]*ix[1] #interval([-9.0, 18.0]) </code></pre>
python|class|numpy|intervals
2
373,945
45,788,433
How to enable Dataset pipeline has distributed reading and consuming
<p>It is easy to use two threads where one keeps feeding data to the <a href="https://www.tensorflow.org/programmers_guide/threading_and_queues" rel="nofollow noreferrer">queue</a> and the other consumes data from the queue and performs the computation. Since TensorFlow recommends <a href="https://www.tensorflow.org/programmers_guide/datasets" rel="nofollow noreferrer">Dataset</a> as the input pipeline after 1.2.0, I would like to use the <code>Dataset</code> and its <code>iterator</code> to accomplish the task above, namely:</p> <ol> <li>There are two <strong>processes</strong>, one feeds and the other consumes;</li> <li>The pipeline suspends when it is either full or empty, and it stops when the consuming side finishes its computation.</li> </ol> <p>P.S. Why does the tutorial on <a href="https://www.tensorflow.org/programmers_guide/threading_and_queues" rel="nofollow noreferrer">Threading and Queues</a> use a <code>thread</code> instead of a <code>process</code>?</p> <p>Thank you in advance.</p>
<p>Distributed <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/data" rel="nofollow noreferrer"><code>tf.contrib.data</code></a> pipelines are not yet supported as of TensorFlow 1.3. We are working on support for splitting datasets across devices and/or processes, but that support is not yet ready.</p> <p>In the meantime, the easiest way to achieve your goal is to use a <code>tf.FIFOQueue</code>. You can define a <code>Dataset</code> that reads from a queue as follows:</p> <pre><code>q = tf.FIFOQueue(...) # Define a dummy dataset that contains the same value repeated indefinitely. dummy = tf.contrib.data.Dataset.from_tensors(0).repeat(None) dataset_from_queue = dummy.map(lambda _: q.dequeue()) </code></pre> <p>You can then compose other <code>Dataset</code> tranformations with <code>dataset_from_queue</code>.</p>
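<p>For completeness, the feeding side can then be a plain Python thread that pushes values into the same queue through a placeholder. This is only a sketch; <code>produce_values</code> is a stand-in for your own data source:</p> <pre><code>x = tf.placeholder(tf.float32, shape=[])
enqueue_op = q.enqueue(x)

def feeder(sess):
    for value in produce_values():  # hypothetical data source
        sess.run(enqueue_op, feed_dict={x: value})
    sess.run(q.close())  # signals the consumer that no more data is coming
</code></pre>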
python|machine-learning|tensorflow|distributed-computing
1
373,946
45,765,809
Python input function explanation
<p>Could someone explain the role of the following functions:</p> <pre><code>list() map() split() </code></pre> <p>In the context of this line of code please:</p> <pre><code>input = list(map(int,input().split())) </code></pre> <p>Finally, should it be:</p> <pre><code>int,input() </code></pre> <p>rather than:</p> <pre><code>int(input()) </code></pre> <p>Thank you!</p>
<p>All of these functions are members of the standard library and are as such covered by the <a href="https://docs.python.org/3.6/library/functions.html" rel="nofollow noreferrer">official documentation</a>.</p> <p>That being said, I'll summarise them briefly.</p> <ol> <li><p><code>list</code> turns an <a href="https://docs.python.org/3.6/glossary.html?highlight=iterable" rel="nofollow noreferrer">iterable</a> into a list. In this case, the iterable is a <code>map</code> object.</p> </li> <li><p><code>map</code> takes a function <code>f</code> (or any callable, really), and an iterable <code>iter</code> and produces another iterable where the callable is applied to each element in <code>iter</code>.</p> <p>In your case, the callable is <code>int</code>, which tries to convert its argument to an integer. <code>map</code> is a common term for applying a function to a collection of elements, but the 'Pythonic' way is to use a list comprehension:</p> <pre><code> [f(x) for x in iterable] == list(map(f, iterable)) </code></pre> </li> <li><p><code>split</code> is a method on <code>str</code>ing objects, which divides the given string at every occurrence of the given separator, returning a list. If the separator argument is omitted, it defaults to a space.</p> </li> </ol> <hr /> <p>Putting it all together, you're reading input from stdin, splitting the resulting string into multiple strings, mapping <code>int</code> to each item (turning them into integers, or causing an exception on input like <code>'words instead of numbers'</code>) and converting the mapping to a list.</p> <p>You're then shadowing a built in by assigning the result to <code>input</code>, which is generally speaking a bad idea (imagine the confusion when <code>input()</code> causes an error).</p> <p>To answer your second question: no, since <code>map</code> takes two separate arguments.</p> <hr /> <p>I'd rewrite it slightly, if I were using it in a production environment:</p> <pre><code>try: numbers = [int(n) for n in input('Space-separated integers, please: ').split()] except ValueError as e: print('I needed numbers. You gave me something else.') </code></pre> <p>The argument to <code>input</code> is a prompt to the user (it's optional, of course).</p>
python|numpy|input
2
373,947
46,049,389
Error: tuple index out of range python 3
<p>Maybe you can give me your advice? </p> <p>I have a web page <code>clarity-project.info/tenders/…</code> and I need to extract <code>data-id="&lt;some number&gt;"</code> and write them to a new file.</p> <p>Here is my code: </p> <pre><code>from urllib.request import urlopen, Request from bs4 import BeautifulSoup import numpy as np url = 'https://clarity-project.info/tenders/?entiy=38163425&amp;offset=100' agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3)AppleWebKit/537.36\ (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36' request = Request(url, headers={'User-Agent': agent}) html = urlopen(request).read().decode() soup = BeautifulSoup(html, 'html.parser') tags = soup.findAll(lambda tag: tag.get('data-id', None) is not None) with open('/Users/tinasosiak/Documents/number.txt', 'a') as f: for tag in tags: print(tag['data-id']) np.savetxt(f, 'data-id') </code></pre> <p>But when I run my code, I get this error:</p> <pre><code>1f1d2745f1b641c6bd6831288b49d54e --------------------------------------------------------------------------- IndexError Traceback (most recent call last) &lt;ipython-input-5-556a89a7507f&gt; in &lt;module&gt;() 15 for tag in tags: 16 print(tag['data-id']) ---&gt; 17 np.savetxt(f, 'data-id') 18 /Users/tinasosiak/anaconda/lib/python3.6/site-packages/numpy/lib/npyio.py in savetxt(fname, X, fmt, delimiter, newline, header, footer, comments) 1212 ncol = len(X.dtype.descr) 1213 else: -&gt; 1214 ncol = X.shape[1] 1215 1216 iscomplex_X = np.iscomplexobj(X) IndexError: tuple index out of range </code></pre>
<p>Is this what you wanted? It'll give you all the values of "data-id" and write them to a text file. Rows without a <code>data-id</code> attribute are skipped explicitly, instead of catching a bare exception.</p> <pre><code>import requests from bs4 import BeautifulSoup res = requests.get('https://clarity-project.info/tenders/?entiy=38163425&amp;offset=100').text soup = BeautifulSoup(res,"lxml") with open("testfile.txt","w") as f: for item in soup.find_all(class_="table-row"): data_id = item.get('data-id') if data_id is None: continue f.write(data_id+'\n') print(data_id) </code></pre>
python|python-3.x|numpy|web-scraping|beautifulsoup
0
373,948
46,121,278
Retrain InceptionV4's Final Layer for New Categories: local variable not initialized
<p>I'm still a newbie in tensorflow so I'm sorry if this is a naive question. I'm trying to use the <code>inception_V4</code> <a href="http://download.tensorflow.org/models/inception_v4_2016_09_09.tar.gz" rel="noreferrer">model pretrained</a> on <code>ImageNet</code> dataset published on this <a href="https://github.com/tensorflow/models/tree/master/slim" rel="noreferrer">site</a>. Also, I'm using their network as it is, I mean the one published on their <a href="https://github.com/tensorflow/models/blob/master/slim/nets/inception_v4.py" rel="noreferrer">site</a>.</p> <p>Here is how I call the network:</p> <pre><code>def network(images_op, keep_prob):
    width_needed_InceptionV4Net = 342
    shape = images_op.get_shape().as_list()
    H = int(round(width_needed_InceptionV4Net * shape[1] / shape[2], 2))
    resized_images = tf.image.resize_images(images_op, [width_needed_InceptionV4Net, H], tf.image.ResizeMethod.BILINEAR)

    with slim.arg_scope(inception.inception_v4_arg_scope()):
        logits, _ = inception.inception_v4(resized_images, num_classes=20, is_training=True, dropout_keep_prob = keep_prob)

    return logits
</code></pre> <p>Since I need to retrain the <code>Inception_V4</code>'s final layer for my categories, I modified the number of classes to be 20 as you can see in the method call (<code>inception.inception_v4</code>).</p> <p>Here is the train method I have so far:</p> <pre><code>def optimistic_restore(session, save_file, flags):
    reader = tf.train.NewCheckpointReader(save_file)
    saved_shapes = reader.get_variable_to_shape_map()
    var_names = sorted([(var.name, var.name.split(':')[0]) for var in tf.global_variables()
                        if var.name.split(':')[0] in saved_shapes])
    restore_vars = []
    name2var = dict(zip(map(lambda x: x.name.split(':')[0], tf.global_variables()), tf.global_variables()))
    if flags.checkpoint_exclude_scopes is not None:
        exclusions = [scope.strip() for scope in flags.checkpoint_exclude_scopes.split(',')]
    with tf.variable_scope('', reuse=True):
        variables_to_init = []
        for var_name, saved_var_name in var_names:
            curr_var = name2var[saved_var_name]
            var_shape = curr_var.get_shape().as_list()
            if var_shape == saved_shapes[saved_var_name]:
                print(saved_var_name)
                excluded = False
                for exclusion in exclusions:
                    if saved_var_name.startswith(exclusion):
                        variables_to_init.append(var)
                        excluded = True
                        break
                if not excluded:
                    restore_vars.append(curr_var)
    saver = tf.train.Saver(restore_vars)
    saver.restore(session, save_file)

def train(images, ids, labels, total_num_examples, batch_size, train_dir, network, flags,
          optimizer, log_periods, resume):
    """!@brief Trains the network for a number of steps.

    @param images               image tensor
    @param ids                  id tensor
    @param labels               label tensor
    @param total_num_examples   total number of training examples
    @param batch_size           batch size
    @param train_dir            directory where checkpoints should be saved
    @param network              pointer to a function describing the network
    @param flags                command-line arguments
    @param optimizer            pointer to the optimization class
    @param log_periods          list containing the step intervals at which 1) logs should be printed,
                                2) logs should be saved for TensorBoard and 3) variables should be saved
    @param resume               should training be resumed (or restarted from scratch)?

    @return the number of training steps performed since the first call to 'train'
    """

    # clearing the training directory
    if not resume:
        if tf.gfile.Exists(train_dir):
            tf.gfile.DeleteRecursively(train_dir)
        tf.gfile.MakeDirs(train_dir)

    print('Training the network in directory %s...' % train_dir)
    global_step = tf.Variable(0, trainable = False)

    # creating a placeholder, set to ones, used to assess the importance of each pixel
    mask, ones = _mask(images, batch_size, flags)

    # building a Graph that computes the logits predictions from the inference model
    keep_prob = tf.placeholder_with_default(0.5, [])
    logits = network(images * mask, keep_prob)

    # creating the optimizer
    if optimizer == tf.train.MomentumOptimizer:
        opt = optimizer(flags.learning_rate, flags.momentum)
    else:
        opt = optimizer(flags.learning_rate)

    # calculating the semantic loss, defined as the classification or regression loss
    if flags.boosting_weights is not None and os.path.isfile(flags.boosting_weights):
        boosting_weights_value = np.loadtxt(flags.boosting_weights, dtype = np.float32, delimiter = ',')
        boosting_weights = tf.placeholder_with_default(boosting_weights_value,
                                                       list(boosting_weights_value.shape),
                                                       name = 'boosting_weights')
        semantic_loss = _boosting_loss(logits, ids, boosting_weights, flags)
    else:
        semantic_loss = _loss(logits, labels, flags)
    tf.add_to_collection('losses', semantic_loss)

    # computing the loss gradient with respect to the mask (i.e. the insight tensor) and
    # penalizing its L1-norm
    # replace 'semantic_loss' with 'tf.reduce_sum(logits)'?
    insight = tf.gradients(semantic_loss, [mask])[0]
    insight_loss = tf.reduce_sum(tf.abs(insight))
    if flags.insight_loss &gt; 0.0:
        with tf.control_dependencies([semantic_loss]):
            tf.add_to_collection('losses', tf.multiply(flags.insight_loss, insight_loss,
                                                       name = 'insight_loss'))
    else:
        tf.summary.scalar('insight_loss_raw', insight_loss)

    # summing all loss factors and computing the moving average of all individual losses and of
    # the sum
    loss = tf.add_n(tf.get_collection('losses'), name = 'total_loss')
    loss_averages_op = tf.train.ExponentialMovingAverage(0.9, name = 'avg')
    losses = tf.get_collection('losses')
    loss_averages = loss_averages_op.apply(losses + [loss])

    # attaching a scalar summary to all individual losses and the total loss;
    # do the same for the averaged version of the losses
    for l in losses + [loss]:
        tf.summary.scalar(l.op.name + '_raw', l)
        tf.summary.scalar(l.op.name + '_avg', loss_averages_op.average(l))

    # computing and applying gradients
    with tf.control_dependencies([loss_averages]):
        grads = opt.compute_gradients(loss)
    apply_gradient = opt.apply_gradients(grads, global_step = global_step)

    # adding histograms for trainable variables and gradients
    for var in tf.trainable_variables():
        tf.summary.histogram(var.op.name, var)
    for grad, var in grads:
        if grad is not None:
            tf.summary.histogram(var.op.name + '/gradients', grad)
    tf.summary.histogram('insight', insight)

    # tracking the moving averages of all trainable variables
    variable_averages_op = tf.train.ExponentialMovingAverage(flags.moving_average_decay, global_step)
    variable_averages = variable_averages_op.apply(tf.trainable_variables())

    # building a Graph that trains the model with one batch of examples and
    # updates the model parameters
    with tf.control_dependencies([apply_gradient, variable_averages]):
        train_op = tf.no_op(name = 'train')

    # creating a saver
    saver = tf.train.Saver(tf.global_variables())

    # building the summary operation based on the TF collection of Summaries
    summary_op = tf.summary.merge_all()

    # creating a session
    current_global_step = -1
    with tf.Session(config = tf.ConfigProto(log_device_placement = False,
                                            inter_op_parallelism_threads = flags.num_cpus,
                                            device_count = {'GPU': flags.num_gpus})) as sess:
        # initializing variables
        if flags.checkpoint_exclude_scopes is not None:
            optimistic_restore(sess, os.path.join(train_dir, 'inception_V4.ckpt'), flags)
        # starting the queue runners
        ..
        # creating a summary writer
        ..
        # training itself
        ..
        # saving the model checkpoint
        checkpoint_path = os.path.join(train_dir, 'model.ckpt')
        saver.save(sess, checkpoint_path, global_step = current_global_step)
        # stopping the queue runners
        ..
    return current_global_step
</code></pre> <p>I added a flag to the python script called <code>checkpoint_exclude_scopes</code> where I specify which Tensors should not be restored. This is required to change the number of classes in the last layer of the network. Here is how I call the python script:</p> <pre><code>./toolDetectionInceptions.py --batch_size=32 --running_mode=resume --checkpoint_exclude_scopes=InceptionV4/Logits,InceptionV4/AuxLogits
</code></pre> <p>My first tests were terrible because I got too many problems.. something like:</p> <pre><code>tensorflow.python.framework.errors.NotFoundError: Tensor name "InceptionV4/Mixed_6b/Branch_3/Conv2d_0b_1x1/weights/read:0" not found in checkpoint files
</code></pre> <p>After some googling I found a workaround on this <a href="https://github.com/Aaron-Zhao123/mayo-dev/issues/13" rel="noreferrer">site</a> where they propose to use the function <code>optimistic_restore</code> presented in the code above, including some modifications to it.</p> <p>But now the problem is something else:</p> <pre><code>W tensorflow/core/framework/op_kernel.cc:993] Failed precondition: Attempting to use uninitialized value Variable
     [[Node: Variable/read = Identity[T=DT_INT32, _class=["loc:@Variable"], _device="/job:localhost/replica:0/task:0/cpu:0"](Variable)]]
</code></pre> <p>It seems there is a local variable that is not initialized, but I could not find it. Can you please help?</p> <p><strong>EDITED:</strong></p> <p>To debug this problem, I checked the number of variables that should be initialized and restored by adding some logs in the function <code>optimistic_restore</code>. Here is a brief summary:</p> <pre><code># saved_shapes       609
# var_names          608
# name2var           1519
# variables_to_init: 7
# restore_vars:      596
# global_variables:  1519
</code></pre> <p>For your information, <code>CheckpointReader.get_variable_to_shape_map()</code> returns a dict mapping tensor names to lists of ints, representing the shape of the corresponding tensor in the checkpoint. This means the number of variables in this checkpoint is <code>609</code> and the total number of variables needed for the restore is <code>1519</code>.</p> <p>It seems there is a huge gap between the pretrained checkpoint tensors and the variables used by the network architecture (it's actually their network as well). Is there any kind of compression done on the checkpoint? Is what I'm saying accurate? I know now what's missing: it's just the initialization of the variables that have not been restored. Yet, I need to know why there is such a huge difference between their <code>InceptionV4</code> network architecture and the pretrained checkpoint.</p>
<p>Variables that are not restored with the saver need to be initialized. To this end, you could run <code>v.initializer.run()</code> for each variable <code>v</code> that you don't restore.</p>
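<p>A minimal sketch of what that could look like, assuming <code>variables_to_init</code> is the list of excluded variables collected inside the question's <code>optimistic_restore</code> (the name is taken from that function; adapt it to however you track the unrestored variables):</p> <pre><code># after saver.restore(session, save_file):
init_op = tf.variables_initializer(variables_to_init)  # initialize only the unrestored variables
session.run(init_op)

# ...or equivalently, one variable at a time:
for v in variables_to_init:
    session.run(v.initializer)
</code></pre>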
python|tensorflow
3
373,949
45,741,552
Do I need to install keras 2.0 separately after installing tensorflow 1.3?
<p>I just upgraded my tf from 1.0 to tf 1.3 (<code>pip install --upgrade tensorflow</code>). I know keras 2.0 became part of tensorflow as of tf version 1.2. However, when I import keras and check its version, it still shows 1.2. Am I supposed to upgrade keras also? If so, then what does "<a href="https://blog.keras.io/introducing-keras-2.html" rel="noreferrer">the Keras API will now become available directly as part of TensorFlow, starting with TensorFlow 1.2</a>" mean?</p>
<p>Nope, you don't need to install keras 2.0 separately. (See: <a href="https://www.tensorflow.org/guide/keras" rel="nofollow noreferrer">https://www.tensorflow.org/guide/keras</a>)</p> <p><strong>Do</strong> this:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf model = tf.keras.Sequential() </code></pre> <p><strong>Don't</strong> do this (Unless you really need framework independent code):</p> <pre class="lang-py prettyprint-override"><code>import keras model = keras.Sequential() </code></pre>
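<p>As a quick sanity check, you can also ask TensorFlow itself which Keras version it bundles (a small sketch; on releases where <code>tf.keras</code> is not yet exposed, the module lives under <code>tf.contrib.keras</code> instead, and the exact version strings depend on your install):</p> <pre><code>import tensorflow as tf

print(tf.VERSION)            # e.g. '1.3.0'
print(tf.keras.__version__)  # the bundled Keras version, e.g. '2.0.x-tf'
</code></pre> <p>A standalone <code>import keras</code> reports the separately installed package, which is why it can show an older version than the one bundled inside TensorFlow.</p>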
tensorflow|keras
3
373,950
46,058,500
How to correctly save a dictionary in .npz format
<p>I am writing a program that loads an npz file, makes some changes to the values and stores it again. The .npz file varies in shape (dimensions).</p> <p>I have gotten to the point where I can do everything, with the exception that the individual numpy arrays are out of order.</p> <p>Code flow:</p> <pre><code>data = dict(np.load(my_npz_file))
#Do some changes to dictionary
data_to_store = [data[layer] for layer in data]
np.savez(output_npz_file, *data_to_store)
</code></pre> <p>However, the problem is this: ordering is lost.</p> <p>Print code:</p> <pre><code>keys=data.keys()
for i,h in enumerate(keys):
    print "i="+str(i)+"h="+str(h)+"shape="+str(np.shape(data[h]))
</code></pre> <p>Data read in format:</p> <pre><code>i=0h=arr_24shape=(128, 32, 3, 3)
i=1h=arr_25shape=(128,)
i=2h=arr_26shape=(48, 256, 1, 1)
i=3h=arr_27shape=(48,)
i=4h=arr_20shape=(32, 256, 1, 1)
i=5h=arr_21shape=(32,)
i=6h=arr_22shape=(128, 32, 1, 1)
i=7h=arr_23shape=(128,)
i=8h=arr_28shape=(192, 48, 1, 1)
i=9h=arr_29shape=(192,)
i=10h=arr_46shape=(256, 64, 1, 1)
i=11h=arr_47shape=(256,)
i=12h=arr_44shape=(64, 512, 1, 1)
i=13h=arr_45shape=(64,)
i=14h=arr_42shape=(256, 64, 3, 3)
i=15h=arr_43shape=(256,)
i=16h=arr_40shape=(256, 64, 1, 1)
i=17h=arr_41shape=(256,)
i=18h=arr_48shape=(256, 64, 3, 3)
i=19h=arr_49shape=(256,)
i=20h=arr_33shape=(48,)
i=21h=arr_32shape=(48, 384, 1, 1)
i=22h=arr_31shape=(192,)
i=23h=arr_30shape=(192, 48, 3, 3)
i=24h=arr_37shape=(192,)
i=25h=arr_36shape=(192, 48, 3, 3)
i=26h=arr_35shape=(192,)
i=27h=arr_34shape=(192, 48, 1, 1)
i=28h=arr_39shape=(64,)
i=29h=arr_38shape=(64, 384, 1, 1)
i=30h=arr_19shape=(128,)
i=31h=arr_18shape=(128, 32, 3, 3)
i=32h=arr_51shape=(1000,)
i=33h=arr_50shape=(1000, 512, 1, 1)
i=34h=arr_11shape=(64,)
i=35h=arr_10shape=(64, 16, 1, 1)
i=36h=arr_13shape=(64,)
i=37h=arr_12shape=(64, 16, 3, 3)
i=38h=arr_15shape=(32,)
i=39h=arr_14shape=(32, 128, 1, 1)
i=40h=arr_17shape=(128,)
i=41h=arr_16shape=(128, 32, 1, 1)
i=42h=arr_1shape=(96,)
i=43h=arr_0shape=(96, 3, 7, 7)
i=44h=arr_3shape=(16,)
i=45h=arr_2shape=(16, 96, 1, 1)
i=46h=arr_5shape=(64,)
i=47h=arr_4shape=(64, 16, 1, 1)
i=48h=arr_7shape=(64,)
i=49h=arr_6shape=(64, 16, 3, 3)
i=50h=arr_9shape=(16,)
i=51h=arr_8shape=(16, 128, 1, 1)
</code></pre> <p>Data output format:</p> <pre><code>i=0h=arr_24shape=(192,)
i=1h=arr_25shape=(192, 48, 3, 3)
i=2h=arr_26shape=(192,)
i=3h=arr_27shape=(192, 48, 1, 1)
i=4h=arr_20shape=(48,)
i=5h=arr_21shape=(48, 384, 1, 1)
i=6h=arr_22shape=(192,)
i=7h=arr_23shape=(192, 48, 3, 3)
i=8h=arr_28shape=(64,)
i=9h=arr_29shape=(64, 384, 1, 1)
i=10h=arr_46shape=(64,)
i=11h=arr_47shape=(64, 16, 1, 1)
i=12h=arr_44shape=(16,)
i=13h=arr_45shape=(16, 96, 1, 1)
i=14h=arr_42shape=(96,)
i=15h=arr_43shape=(96, 3, 7, 7)
i=16h=arr_40shape=(128,)
i=17h=arr_41shape=(128, 32, 1, 1)
i=18h=arr_48shape=(64,)
i=19h=arr_49shape=(64, 16, 3, 3)
i=20h=arr_33shape=(1000, 512, 1, 1)
i=21h=arr_32shape=(1000,)
i=22h=arr_31shape=(128, 32, 3, 3)
i=23h=arr_30shape=(128,)
i=24h=arr_37shape=(64, 16, 3, 3)
i=25h=arr_36shape=(64,)
i=26h=arr_35shape=(64, 16, 1, 1)
i=27h=arr_34shape=(64,)
i=28h=arr_39shape=(32, 128, 1, 1)
i=29h=arr_38shape=(32,)
i=30h=arr_19shape=(256,)
i=31h=arr_18shape=(256, 64, 3, 3)
i=32h=arr_51shape=(16, 128, 1, 1)
i=33h=arr_50shape=(16,)
i=34h=arr_11shape=(256,)
i=35h=arr_10shape=(256, 64, 1, 1)
i=36h=arr_13shape=(64,)
i=37h=arr_12shape=(64, 512, 1, 1)
i=38h=arr_15shape=(256,)
i=39h=arr_14shape=(256, 64, 3, 3)
i=40h=arr_17shape=(256,)
i=41h=arr_16shape=(256, 64, 1, 1)
i=42h=arr_1shape=(128,)
i=43h=arr_0shape=(128, 32, 3, 3)
i=44h=arr_3shape=(48,)
i=45h=arr_2shape=(48, 256, 1, 1)
i=46h=arr_5shape=(32,)
i=47h=arr_4shape=(32, 256, 1, 1)
i=48h=arr_7shape=(128,)
i=49h=arr_6shape=(128, 32, 1, 1)
i=50h=arr_9shape=(192,)
i=51h=arr_8shape=(192, 48, 1, 1)
</code></pre> <p>As you can see, no data is lost, but it has gotten out of order.</p>
<p>That's because at the beginning you convert the data to a dictionary:</p> <pre><code>data = dict(np.load(my_npz_file))
</code></pre> <p>Dictionaries don't preserve order in Python (<a href="https://stackoverflow.com/questions/39980323/dictionaries-are-ordered-in-python-3-6">at least in your Python version</a>), but you can use an <a href="https://docs.python.org/2/library/collections.html#collections.OrderedDict" rel="noreferrer">OrderedDict</a>.</p> <hr> <p>Update: the exact problem is here...</p> <pre><code>data_to_store = [data[layer] for layer in data]
np.savez(output_npz_file, *data_to_store)
</code></pre> <p>you make a list of all the layers in data, then iterate over them in a random order and write them into the file. So what was previously called <code>arr_0</code> will now be for example <code>arr_23</code> because that's how it can end up in a random traversal of data, and <code>np.savez</code> will just assign new, sequential names.</p> <p>But you can also provide your own names to <code>np.savez</code> which will simplify your code a lot:</p> <pre><code>np.savez(output_npz_file, **data)  # data is a dict here
</code></pre> <p>This will save each layer with the same name as it had originally.</p>
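<p>A minimal round-trip sketch of the keyword-argument approach (the file names here are placeholders):</p> <pre><code>import numpy as np

data = dict(np.load('weights.npz'))    # hypothetical input file
# ... modify the values in place ...
np.savez('weights_out.npz', **data)    # each array keeps its original key, e.g. 'arr_0'

check = np.load('weights_out.npz')
print(sorted(check.files))             # same names as in the input file
</code></pre>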
python|numpy
9
373,951
46,037,548
Stata to Pandas: even if there are repeated Value Labels?
<p>I try to open a .dta file as a DataFrame, but an error appears: "ValueError: Value labels for column ... are not unique. The repeated labels are:" followed by the labels which appear twice in a column.</p> <p>I know labeling multiple codes with the exact same value label in Stata is not clever (not my fault :)). After some research I know pandas will not accept repeated value labels (this IS clever).</p> <p>But I can't figure out a (good) solution. Is there:</p> <p>a. a smooth way to open the data with pandas and just rename the duplicates (like "label" to "label(2)") in this process?</p> <p>here is what the data looks like (value labels in brackets):</p> <pre><code>  | multilabel
1 | 11 (oneone or twotwo)
2 | 22 (oneone or twotwo)
3 | 33 (other-label-which-is-unique)
</code></pre> <p>my code so far:</p> <pre><code>import pandas as pd
#followed by any option that delivers this solution:
dataframe = pd.read_stata('file.dta')
</code></pre> <p>or</p> <p>b. a fast and easy way to tell Stata: just rename all repeated value labels to "label(2)" instead of "label"? And yes, the code so far is also rather boring:</p> <pre><code>use "file.dta"
*followed by a loop which finds repeated labels and changes them
save "file.dta", replace
</code></pre> <p>And yes, there are too many repeated value labels to go through them one by one.</p> <p>And here are the Stata commands to produce a minimal example:</p> <pre><code>set obs 1
generate var1 = 1 in 1
set obs 2
replace var1 = 2 in 2
set obs 3
replace var1 = 3 in 3
generate var2 = 11 in 1
replace var2 = 22 in 2
replace var2 = 33 in 3
rename var2 multilabel
label define labelrepeat 11 "oneone or twotwo" 22 "oneone or twotwo"
label values multilabel labelrepeat
</code></pre> <p>I am happy for each suggestion!</p>
<p>Since at least pandas 0.22, you can pass <code>convert_categoricals=False</code> to <code>read_stata</code> and it will not attempt to map the numerical values to their definitions.</p> <p><code>d = pd.read_stata('fooy_labels.dta', convert_categoricals=False)</code></p> <p>Your resulting DataFrame will have the numerical values in the problem column. You can now recode them as you wish.</p>
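<p>If you then want the text labels back with the duplicates disambiguated (variant a. in the question), one possible sketch uses the <code>StataReader</code>'s <code>value_labels()</code> accessor; the <code>"label(2)"</code> renaming scheme below is just an illustration:</p> <pre><code>import pandas as pd

reader = pd.io.stata.StataReader('file.dta')
df = reader.read(convert_categoricals=False)
labels = reader.value_labels().get('labelrepeat', {})  # e.g. {11: 'oneone or twotwo', 22: 'oneone or twotwo'}

# make repeated labels unique by appending a counter, e.g. 'oneone or twotwo(2)'
seen = {}
unique_labels = {}
for code, lab in sorted(labels.items()):
    seen[lab] = seen.get(lab, 0) + 1
    unique_labels[code] = lab if seen[lab] == 1 else '%s(%d)' % (lab, seen[lab])

# codes without a label (33 in the example) keep their numeric value
df['multilabel'] = df['multilabel'].map(unique_labels).fillna(df['multilabel'])
</code></pre>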
python|pandas|stata|label
3
373,952
45,810,417
OpenCV, Python: How to use mask parameter in ORB feature detector
<p>By reading a few answers on stackoverflow, I've learned this much so far:</p> <p>The mask has to be a <code>numpy</code> array (which has the same shape as the image) with data type <code>CV_8UC1</code> and have values from <code>0</code> to <code>255</code>.</p> <p>What is the meaning of these numbers, though? Is it that any pixels with a corresponding mask value of zero will be ignored in the detection process and any pixels with a mask value of 255 will be used? What about the values in between?</p> <p>Also, how do I initialize a <code>numpy</code> array with data type <code>CV_8UC1</code> in python? Can I just use <code>dtype=cv2.CV_8UC1</code>?</p> <p>Here is the code I am using currently, based on the assumptions I'm making above. But the issue is that I don't get any keypoints when I run <code>detectAndCompute</code> for either image. I have a feeling it might be because the mask isn't the correct data type. If I'm right about that, how do I correct it?</p> <pre><code># convert images to grayscale
base_gray = cv2.cvtColor(self.base, cv2.COLOR_BGRA2GRAY)
curr_gray = cv2.cvtColor(self.curr, cv2.COLOR_BGRA2GRAY)

# initialize feature detector
detector = cv2.ORB_create()

# create a mask using the alpha channel of the original image--don't
# use transparent or partially transparent parts
base_cond = self.base[:,:,3] == 255
base_mask = np.array(np.where(base_cond, 255, 0))

curr_cond = self.base[:,:,3] == 255
curr_mask = np.array(np.where(curr_cond, 255, 0), dtype=np.uint8)

# use the mask and grayscale images to detect good features
base_keys, base_desc = detector.detectAndCompute(base_gray, mask=base_mask)
curr_keys, curr_desc = detector.detectAndCompute(curr_gray, mask=curr_mask)

print("base keys: ", base_keys)  # []
print("curr keys: ", curr_keys)  # []
</code></pre>
<p>So here is most, if not all, of the answer:</p> <blockquote> <p>What is the meaning of those numbers</p> </blockquote> <p>0 means to ignore the pixel and 255 means to use it. I'm still unclear on the values in between, but I don't think all nonzero values are considered "equivalent" to 255 in the mask. See <a href="http://answers.opencv.org/question/103087/behaviour-of-orb-keypoint-detection-with-masks/" rel="noreferrer">here</a>.</p> <blockquote> <p>Also, how do I initialize a numpy array with data type CV_8UC1 in python?</p> </blockquote> <p>The type CV_8U is the unsigned 8-bit integer, which, using numpy, is <code>numpy.uint8</code>. The C1 postfix means that the array is 1-channel, instead of 3-channel for color images and 4-channel for rgba images. So, to create a 1-channel array of unsigned 8-bit integers:</p> <pre><code>import numpy as np
np.zeros((480, 720), dtype=np.uint8)
</code></pre> <p>(a three-channel array would have shape <code>(480, 720, 3)</code>, four-channel <code>(480, 720, 4)</code>, etc.) This mask would cause the detector and extractor to ignore the entire image, though, since it's all zeros.</p> <blockquote> <p>how do I correct [the code]?</p> </blockquote> <p>There were two separate issues, each separately causing each keypoint array to be empty.</p> <p>First, I forgot to set the type for the <code>base_mask</code>:</p> <pre><code>base_mask = np.array(np.where(base_cond, 255, 0))                  # wrong
base_mask = np.array(np.where(base_cond, 255, 0), dtype=np.uint8)  # right
</code></pre> <p>Second, I used the wrong image to generate my <code>curr_cond</code> array:</p> <pre><code>curr_cond = self.base[:,:,3] == 255  # wrong
curr_cond = self.curr[:,:,3] == 255  # right
</code></pre> <p>Some pretty dumb mistakes.</p> <p>Here is the full corrected code:</p> <pre><code># convert images to grayscale
base_gray = cv2.cvtColor(self.base, cv2.COLOR_BGRA2GRAY)
curr_gray = cv2.cvtColor(self.curr, cv2.COLOR_BGRA2GRAY)

# initialize feature detector
detector = cv2.ORB_create()

# create a mask using the alpha channel of the original image--don't
# use transparent or partially transparent parts
base_cond = self.base[:,:,3] == 255
base_mask = np.array(np.where(base_cond, 255, 0), dtype=np.uint8)

curr_cond = self.curr[:,:,3] == 255
curr_mask = np.array(np.where(curr_cond, 255, 0), dtype=np.uint8)

# use the mask and grayscale images to detect good features
base_keys, base_desc = detector.detectAndCompute(base_gray, mask=base_mask)
curr_keys, curr_desc = detector.detectAndCompute(curr_gray, mask=curr_mask)
</code></pre> <p>TL;DR: The mask parameter is a 1-channel <code>numpy</code> array with the same shape as the grayscale image in which you are trying to find features (if image shape is <code>(480, 720)</code>, so is mask).</p> <p>The values in the array are of type <code>np.uint8</code>; <code>255</code> means "use this pixel" and <code>0</code> means "don't".</p> <p><em>Thanks to <a href="https://stackoverflow.com/users/3962537/dan-ma%C5%A1ek">Dan Mašek</a> for leading me to parts of this answer.</em></p>
python|opencv|numpy|feature-detection
7
373,953
45,942,222
KeyError when extracting data from a pandas.core.series.Series
<p>In the following ipython3 session, I read differently-formatted tables and compute the sum of the values found in one of the columns:</p> <pre><code>In [278]: F = pd.read_table("../RNA_Seq_analyses/mapping_worm_number_tests/hisat2/mapped_C_elegans/feature_count/W100_1_on_C_elegans/protein_coding_fwd_counts.txt", skip
     ...: rows=2, usecols=[6]).sum()

In [279]: S = pd.read_table("../RNA_Seq_analyses/mapping_worm_number_tests/hisat2/mapped_C_elegans/intersect_count/W100_1_on_C_elegans/protein_coding_fwd_counts.txt", us
     ...: ecols=[6], header=None).sum()

In [280]: S
Out[280]:
6    3551266
dtype: int64

In [281]: F
Out[281]:
72    3164181
dtype: int64

In [282]: type(F)
Out[282]: pandas.core.series.Series

In [283]: type(S)
Out[283]: pandas.core.series.Series

In [284]: F[0]
Out[284]: 3164181

In [285]: S[0]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
&lt;ipython-input-285-5a4339994a41&gt; in &lt;module&gt;()
----&gt; 1 S[0]

/home/bli/.local/lib/python3.6/site-packages/pandas/core/series.py in __getitem__(self, key)
    601         result = self.index.get_value(self, key)
    602
--&gt; 603         if not is_scalar(result):
    604             if is_list_like(result) and not isinstance(result, Series):
    605

/home/bli/.local/lib/python3.6/site-packages/pandas/indexes/base.py in get_value(self, series, key)

pandas/index.pyx in pandas.index.IndexEngine.get_value (pandas/index.c:3323)()

pandas/index.pyx in pandas.index.IndexEngine.get_value (pandas/index.c:3026)()

pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4009)()

pandas/src/hashtable_class_helper.pxi in pandas.hashtable.Int64HashTable.get_item (pandas/hashtable.c:8146)()

pandas/src/hashtable_class_helper.pxi in pandas.hashtable.Int64HashTable.get_item (pandas/hashtable.c:8090)()

KeyError: 0
</code></pre> <p><strong>How come the <code>F</code> and <code>S</code> objects have different behaviours if they result from a similar operation (<code>sum</code>) and are of the same type (<code>pandas.core.series.Series</code>)?</strong></p> <p>What is the correct way to extract the value I want (the sum of a column)?</p> <h3>Edit: Trying solutions:</h3> <pre><code>In [297]: F["72"]
Out[297]: 3164181

In [298]: S["6"]
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4009)()

pandas/src/hashtable_class_helper.pxi in pandas.hashtable.Int64HashTable.get_item (pandas/hashtable.c:8125)()

TypeError: an integer is required

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
&lt;ipython-input-298-0127424036a0&gt; in &lt;module&gt;()
----&gt; 1 S["6"]

/home/bli/.local/lib/python3.6/site-packages/pandas/core/series.py in __getitem__(self, key)
    601         result = self.index.get_value(self, key)
    602
--&gt; 603         if not is_scalar(result):
    604             if is_list_like(result) and not isinstance(result, Series):
    605

/home/bli/.local/lib/python3.6/site-packages/pandas/indexes/base.py in get_value(self, series, key)

pandas/index.pyx in pandas.index.IndexEngine.get_value (pandas/index.c:3323)()

pandas/index.pyx in pandas.index.IndexEngine.get_value (pandas/index.c:3026)()

pandas/index.pyx in pandas.index.IndexEngine.get_loc (pandas/index.c:4075)()

KeyError: '6'
</code></pre> <p>Further investigating:</p> <pre><code>In [306]: print(S.index)
Int64Index([6], dtype='int64')

In [307]: print(F.index)
Index(['72'], dtype='object')

In [308]: S[6]
Out[308]: 3551266
</code></pre> <p>So the two objects ended up having different types of indices. This kind of behaviour reminds me of R...</p> <p>It seems that <code>header=None</code> resulted in columns indexed by numbers for <code>S</code>, whereas the absence of <code>header=None</code> combined with <code>skiprows=2</code> resulted in the index being generated from data read on the third row. (And this revealed a bug in the way I parsed the data in pandas...)</p>
<p>I think you need:</p> <pre><code>#select first value of one element series
f = F.iat[0]
#alternative
#f = F.iloc[0]
</code></pre> <p>Or:</p> <pre><code>#convert to numpy array and select first value
f = F.values[0]
</code></pre> <p>Or:</p> <pre><code>f = F.item()
</code></pre> <p>And I think you get the error because there is no index value <code>0</code>.</p> <p>As IanS commented, selecting by the index values <code>6</code> and <code>72</code> should work:</p> <pre><code>f = F[72]
#f = F.loc[72]

s = S[6]
#s = S.loc[6]
</code></pre> <p>Sample:</p> <pre><code>F = pd.Series([3164181], index=[72])

f = F[72]
print (f)
3164181

print (F.index)
Int64Index([72], dtype='int64')

print (F.index.tolist())
[72]

f = F[0]
print (f)
</code></pre> <blockquote> <p>KeyError: 0</p> </blockquote> <p>You get an integer index in <code>S</code> because of the parameter <code>header=None</code> - pandas adds a default index (<code>0,1,...</code>). For <code>F</code>, the selected column's name is read from the file as <code>'72'</code> - it is a string. That is the difference.</p>
python|pandas
7
373,954
45,947,951
plotly, not showing coordinates with np.array dataset
<p>I wish to display a dataset of 1000 floats. I have decided to do this with plotly, and I want to do it offline. I am running into a problem I really can't understand - I simply don't know what I am doing wrong at all.</p> <p>Let's jump into the code. First off, I will show that the code should work, with a small np.array:</p> <pre><code>import numpy as np
import plotly as py
import plotly.graph_objs as go

list = [1.2,2.3,3.3,4.4,5.4,6.4]
x_data = np.array(list)
y_data = np.array(list)

#x_data = np.array(graph_test_q())
#y_data = np.array(graph_test_h())

trace = go.Scatter(
    x = x_data,
    y = y_data,
)
data = [trace]
fig = dict(data=data)
py.offline.plot(fig, filename='hejsa.html')
print data
</code></pre> <p>The output of the above code: <a href="https://i.stack.imgur.com/VCo5e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VCo5e.png" alt="Output of small np.array"></a> Seems to work fine, but: Below is the code where I use a np.array created from a function that extracts it from a PostgreSQL db. I have checked, and it does print the data in the terminal.</p> <pre><code>def graph_test_q():
    conn = psycopg2.connect("dbname='database1' user='postgres' password='FFgg1905560' host='localhost' port='5432'")
    cur = conn.cursor()
    cur.execute("SELECT q FROM pump_data_test WHERE pump_id = 1229")
    rows = cur.fetchall()
    conn.commit()
    conn.close()
    return rows

def graph_test_h():
    conn = psycopg2.connect("dbname='database1' user='postgres' password='FFgg1905560' host='localhost' port='5432'")
    cur = conn.cursor()
    cur.execute("SELECT h FROM pump_data_test WHERE pump_id = 1229")
    rows = cur.fetchall()
    conn.commit()
    conn.close()
    return rows

#list = [1.2,2.3,3.3,4.4,5.4,6.4]
#x_data = np.array(list)
#y_data = np.array(list)

x_data = np.array(graph_test_q())
y_data = np.array(graph_test_h())

trace = go.Scatter(
    x = x_data,
    y = y_data,
)
data = [trace]
fig = dict(data=data)
py.offline.plot(fig, filename='hejsa.html')
print data
</code></pre> <p>Now here is what I find strange: the output of these new np.arrays is this empty graph: <a href="https://i.stack.imgur.com/McuIh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/McuIh.png" alt="enter image description here"></a></p> <p>When I click on the link in the bottom right corner - "export to plot.ly" - this is now the output I get: <a href="https://i.stack.imgur.com/JKVp5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JKVp5.png" alt="enter image description here"></a> Here I can see the graph on the left, just as it's supposed to be. I would really appreciate it if anyone could help me find out what I'm doing wrong.</p> <p>EDIT: From comments: Code for showing the dtypes of x_data &amp; y_data (x_data = np.array(graph_test_q()) &amp; y_data = np.array(graph_test_h())):</p> <pre><code>print(x_data.dtype)
print(y_data.dtype)
</code></pre> <p>output:</p> <pre><code>float64
float64
0:96: execution error: "file://hejsa.html" doesn’t understand the “open location” message. (-1708)
70:78: execution error: Can’t get application "firefox". (-1728)
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
</code></pre>
<p><code>fetchall</code> returns tuples, so you end up with a NumPy array which has a shape of <code>(n, 1)</code> where n is the number of results. You could get the data in the correct format for <code>Plotly</code> using the following index:</p> <pre><code>np.array(graph_test_q())[:,0]
</code></pre> <p>See below for a complete demo.</p> <pre><code>import plotly
import numpy as np
import random
import sqlite3

def graph_test_q():
    cur = conn.cursor()
    cur.execute("SELECT q FROM pump_data_test WHERE pump_id = 1229")
    rows = cur.fetchall()
    return rows

def graph_test_h():
    cur = conn.cursor()
    cur.execute("SELECT h FROM pump_data_test WHERE pump_id = 1229")
    rows = cur.fetchall()
    return rows

plotly.offline.init_notebook_mode()

# Create some mockup data
conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("CREATE TABLE pump_data_test (h, q, pump_id)")
for i in range(10):
    c.execute("INSERT INTO pump_data_test VALUES ('{}','{}',1229)".format(i, i + random.random()))
conn.commit()

trace1 = plotly.graph_objs.Scatter(
    name='works',
    x=np.array(graph_test_q())[:,0],
    y=np.array(graph_test_h())[:,0],
)

trace2 = plotly.graph_objs.Scatter(
    name='does not work',
    x=np.array(graph_test_q()),
    y=np.array(graph_test_h()),
)

data = [trace1, trace2]
fig = dict(data=data)
plotly.offline.iplot(fig)
</code></pre> <p><a href="https://i.stack.imgur.com/FYl2K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FYl2K.png" alt="enter image description here"></a></p>
python|numpy|ipython|data-visualization|plotly
0
373,955
45,901,315
Pandas read_table returns ’ characters
<p>I'm seeing things like <code>’</code> after reading a text file with read_table(). The input file contents appear as ordinary ASCII characters in Windows Notepad. </p> <pre><code>dataRaw = pd.read_table('data.txt', header=None) </code></pre> <p>Do I need to include some character set parameter to prevent this?</p>
<p>I figured it out. It took two steps: (1) use the correct encoding; (2) convert things that are supposed to be apostrophes to apostrophes.</p> <pre><code>import re

for line in open(dataPath, encoding='utf-8'):
    outstr = re.sub(r'[´]', '’', line)    # replace non-ASCII tick with apostrophe
    outstr = re.sub('[\']', '’', outstr)  # replace single quote with apostrophe
</code></pre> <p>Thanks for the tip.</p>
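<p>Applied directly to the original <code>read_table</code> call, the first step would look like this (a sketch; the right codec depends on how the file was actually written, with <code>utf-8</code> being the usual first guess):</p> <pre><code>dataRaw = pd.read_table('data.txt', header=None, encoding='utf-8')
</code></pre>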
python|file|pandas|text|encoding
1
373,956
46,099,109
OutOfRangeError: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0)
<p>I am getting the outOfRange error when trying to feed the data to the model. I am guessing that the data never reaches the queue, hence the error. Just for testing, I am feeding it a tfrecord with one tuple (image, ground_truth). I also tried the tensorflow debugger (tfdbg), but it would also just throw the same error and I couldn't see any tensorflow values.</p> <p>Tensorflow version: 1.3</p> <p>Python version: 3.5.3</p> <p>Os: Windows10</p> <pre><code>filename_queue = tf.train.string_input_producer([tfrecord_filename], num_epochs=1)

image_batch, annotation_batch = tf.train.shuffle_batch([resized_image, resized_annotation],
                                                       batch_size=1,
                                                       capacity=10,
                                                       num_threads=1,
                                                       min_after_dequeue=1)
</code></pre> <p>StackTrace:</p> <pre><code>Traceback (most recent call last):
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call
    return fn(*args)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1306, in _run_fn
    status, run_metadata)
  File "C:\Program Files\Python35\lib\contextlib.py", line 66, in __exit__
    next(self.gen)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0)
     [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_UINT8, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:/Users/supriya.godge/PycharmProjects/tf-image-segmentation/tf_image_segmentation/recipes/pascal_voc/DeepLab/resnet_v1_101_8s_train.py", line 160, in &lt;module&gt;
    train_step])
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\client\session.py", line 895, in run
    run_metadata_ptr)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1124, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1321, in _do_run
    options, run_metadata)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0)
     [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_UINT8, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]

Caused by op 'shuffle_batch', defined at:
  File "C:/Users/supriya.godge/PycharmProjects/tf-image-segmentation/tf_image_segmentation/recipes/pascal_voc/DeepLab/resnet_v1_101_8s_train.py", line 68, in &lt;module&gt;
    min_after_dequeue=1)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\training\input.py", line 1220, in shuffle_batch
    name=name)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\training\input.py", line 791, in _shuffle_batch
    dequeued = queue.dequeue_many(batch_size, name=name)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\data_flow_ops.py", line 457, in dequeue_many
    self._queue_ref, n=n, component_types=self._dtypes, name=name)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\gen_data_flow_ops.py", line 1342, in _queue_dequeue_many_v2
    timeout_ms=timeout_ms, name=name)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

OutOfRangeError (see above for traceback): RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 1, current size 0)
     [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_UINT8, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
</code></pre> <p>I have tried different solutions posted on stackoverflow for the same error. Unfortunately nothing worked for me. Please let me know if I should provide any additional information. Any suggestion is appreciated. Thanks in advance.</p>
<blockquote> <p>If the annotation image is wrong, this error will be shown. Please look at the annotation image's matrix values; they should differ from a color image's. For example, a grayscale annotation image has matrix values like [1 1 1], but your image may not be grayscale (matrix values like [1 3 6]). So check the matrix values of your annotation image.</p> <p>If the annotation image is not grayscale, the queue is never filled, which produces this error.</p> </blockquote> <p>Try this code in Inputs.py, line 68, inside the <code>CamVid_reader_seq</code> function:</p> <pre><code>label_bytes = tf.image.decode_png(labelValue, 1)
</code></pre> <p>Note: the second argument (<code>channels=1</code>) decodes the image as a single-channel (grayscale) image.</p>
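<p>A quick standalone way to check this, following the answer's suggestion (a sketch; the file path is a placeholder for one of your annotation images):</p> <pre><code>import tensorflow as tf

raw = tf.read_file('annotation.png')          # placeholder path
label = tf.image.decode_png(raw, channels=1)  # force grayscale decoding

with tf.Session() as sess:
    arr = sess.run(label)
    print(arr.shape)  # should be (height, width, 1) for a usable annotation
</code></pre>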
debugging|tensorflow
0
373,957
45,813,400
Sort two lists of lists by index of inner list
<p>Assume I want to sort a list of lists like explained <a href="https://stackoverflow.com/questions/4174941/how-to-sort-a-list-of-lists-by-a-specific-index-of-the-inner-list">here</a>:</p> <pre><code>&gt;&gt;&gt;L=[[0, 1, 'f'], [4, 2, 't'], [9, 4, 'afsd']]
&gt;&gt;&gt;sorted(L, key=itemgetter(2))
[[9, 4, 'afsd'], [0, 1, 'f'], [4, 2, 't']]
</code></pre> <p>(Or with lambda.) Now I have a second list which I want to sort in the same order, so I need the <strong>new order of the indices</strong>. sorted() or .sort() do not return indices. How can I do that?</p> <p>Actually in my case both lists contain <strong>numpy arrays</strong>. But the numpy sort/argsort aren't intuitive for that case either.</p>
<p>If I understood you correctly, you want to order <code>B</code> in the example below, based on a sorting rule you apply on <code>L</code>. Take a look at this:</p> <pre><code>L = [[0, 1, 'f'], [4, 2, 't'], [9, 4, 'afsd']]
B = ['a', 'b', 'c']

result = [i for _, i in sorted(zip(L, B), key=lambda x: x[0][2])]

print(result)  # ['c', 'a', 'b']
# that corresponds to [[9, 4, 'afsd'], [0, 1, 'f'], [4, 2, 't']]
</code></pre>
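<p>For the NumPy case mentioned in the question, an equivalent sketch using <code>argsort</code> gives you the new order of the indices explicitly (assuming <code>B</code> is an array of the same length as <code>L</code>):</p> <pre><code>import numpy as np

L = [[0, 1, 'f'], [4, 2, 't'], [9, 4, 'afsd']]
B = np.array(['a', 'b', 'c'])

order = np.argsort([row[2] for row in L])  # indices that sort L by its third field
print(order)     # [2 0 1] -- the new order of the indices
print(B[order])  # ['c' 'a' 'b']
</code></pre>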
python|list|sorting|numpy
2
373,958
45,989,249
Pandas pivot table ValueError: Index contains duplicate entries, cannot reshape
<p>I have a dataframe as shown below (top 3 rows):</p> <pre><code>      Sample_Name                                Sample_ID  Sample_Type  IS     Component_Name                IS_Name                       Component_Group_Name  Outlier_Reasons  Actual_Concentration  Area          Height        Retention_Time  Width_at_50_pct  Used  Calculated_Concentration  Accuracy
Index
1     20170824_ELN147926_HexLacCer_Plasma_A-1-1  NaN       Unknown      True   GluCer(d18:1/12:0)_LCB_264.3  NaN                           NaN                   NaN              0.1                   2.733532e+06  5.963840e+05  2.963911        0.068676         True  NaN                       NaN
2     20170824_ELN147926_HexLacCer_Plasma_A-1-1  NaN       Unknown      True   GluCer(d18:1/17:0)_LCB_264.3  NaN                           NaN                   NaN              0.1                   2.945190e+06  5.597470e+05  2.745026        0.068086         True  NaN                       NaN
3     20170824_ELN147926_HexLacCer_Plasma_A-1-1  NaN       Unknown      False  GluCer(d18:1/16:0)_LCB_264.3  GluCer(d18:1/17:0)_LCB_264.3  NaN                   NaN              NaN                   3.993535e+06  8.912731e+05  2.791991        0.059864         True  125.927659773487          NaN
</code></pre> <p>When trying to generate a pivot table:</p> <pre><code>pivoted_report_conc = raw_report.pivot(index = "Sample_Name", columns = 'Component_Name', values = "Calculated_Concentration")
</code></pre> <p>I get the following error:</p> <pre><code>ValueError: Index contains duplicate entries, cannot reshape
</code></pre> <p>I tried resetting the index but it did not help. I couldn't find any duplicate values in the "Index" column. Could someone please help identify the problem here?</p> <p>The expected output would be a reshaped dataframe with only the unique component names as columns and respective concentrations for each sample name:</p> <pre><code>Sample_Name                                 GluCer(d18:1/12:0)_LCB_264.3  GluCer(d18:1/17:0)_LCB_264.3  GluCer(d18:1/16:0)_LCB_264.3
20170824_ELN147926_HexLacCer_Plasma_A-1-1   NaN                           NaN                           125.927659773487
</code></pre> <p>To clarify, I am not looking to aggregate the data, just reshape it.</p>
<p>You can use <code>groupby()</code> and <code>unstack()</code> to get around the error you're seeing with <code>pivot()</code>.</p> <p>Here's some example data, with a few edge cases added, and some column values removed or substituted for <a href="https://stackoverflow.com/help/mcve">MCVE</a>:</p> <pre><code># df
      Sample_Name  Sample_ID     IS  Component_Name  Calculated_Concentration  Outlier_Reasons
Index
1             foo        NaN   True               x                       NaN              NaN
1             foo        NaN   True               y                       NaN              NaN
2             foo        NaN  False               z                 125.92766              NaN
2             bar        NaN  False               x                      1.00              NaN
2             bar        NaN  False               y                      2.00              NaN
2             bar        NaN  False               z                       NaN              NaN

(df.groupby(['Sample_Name','Component_Name'])
   .Calculated_Concentration
   .first()
   .unstack()
)
</code></pre> <p>Output:</p> <pre><code>Component_Name    x    y          z
Sample_Name
bar             1.0  2.0        NaN
foo             NaN  NaN  125.92766
</code></pre>
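<p>If you prefer to stay closer to the pivot API, the same non-aggregating reshape can be expressed with <code>pivot_table</code> and a degenerate aggregation (a sketch of the idea; <code>aggfunc='first'</code> simply keeps the one value per cell, so nothing is actually summarized when the duplicates are spurious):</p> <pre><code>pivoted_report_conc = raw_report.pivot_table(index='Sample_Name',
                                             columns='Component_Name',
                                             values='Calculated_Concentration',
                                             aggfunc='first')
</code></pre>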
python|pandas
12
373,959
46,114,462
TensorFlow allocating large amounts of main memory at session startup time
<p>Consider the following two line Python/TensorFlow interactive session:</p> <pre><code>import tensorflow as tf
s=tf.Session()
</code></pre> <p>If these commands are executed on an Ubuntu Linux 14.04 machine, using Anaconda Python 2.7.13 and TensorFlow r1.3 (compiled from sources), with 32G physical memory and 2 GPUs (a GTX Titan X and a GTX 970) while <code>CUDA_VISIBLE_DEVICES</code> is not set (i.e. both GPUs are visible) the resulting python process has 59.7G of memory allocated! Note that it only actually uses 754M.</p> <p>If <code>CUDA_VISIBLE_DEVICES=0</code> (i.e. only the Titan X is visible) then 55.2G is allocated and 137M is in use.</p> <p>If <code>CUDA_VISIBLE_DEVICES=1</code> (i.e. only the 970 is visible) then 47.0G is allocated and 325M is in use.</p> <p>If <code>CUDA_VISIBLE_DEVICES=</code> (i.e. neither GPU is visible) then only 2.5G is allocated and only 131M is in use.</p> <p>This is a problem in environments where the amount of allocated memory is constrained, e.g. inside a grid engine setup.</p> <p>Is there any way to limit the amount of main memory that TensorFlow allocates when it is using CUDA?</p> <p><strong>Update 1</strong></p> <p>The amount of memory allocated is determined, in these trials, by looking at the <code>VIRT</code> column in <code>htop</code>.</p> <p>TensorFlow r1.3 is compiled with mostly default <code>configure</code> answers. The only variations are the paths to CUDA and cuDNN. As a result, <code>jemalloc</code> is being used.</p> <p><strong>Update 2</strong></p> <p>I've tried recompiling with <code>jemalloc</code> disabled and see the same behaviour.</p>
<p>The default behavior of TensorFlow on GPU is to use all the memory available. However, if you want to avoid this behavior, you can tell the session to allocate the memory dynamically.</p> <p>From the <a href="https://github.com/tensorflow/tensorflow/blob/r1.3/tensorflow/core/protobuf/config.proto" rel="nofollow noreferrer">ConfigProto</a> declaration:</p> <blockquote> <pre><code>// allow_growth
// If true, the allocator does not pre-allocate the entire specified
// GPU memory region, instead starting small and growing as needed.
</code></pre> </blockquote> <p>In order to do this, pass a ConfigProto object to your session when creating it:</p> <pre><code>session_config = tf.ConfigProto()
session_config.gpu_options.allow_growth=True
sess = tf.Session(config=session_config)
</code></pre> <p>If you want to limit the amount of memory used, it depends on your batch size and the number of parameters in your model.</p>
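<p>A related knob, if you would rather cap the GPU allocation at a fixed fraction than let it grow (a sketch; the <code>0.4</code> is just an example value):</p> <pre><code>session_config = tf.ConfigProto()
# claim at most ~40% of each visible GPU's memory (example value)
session_config.gpu_options.per_process_gpu_memory_fraction = 0.4
sess = tf.Session(config=session_config)
</code></pre> <p>Note that both options control GPU memory; the large <em>virtual</em> allocation reported by <code>htop</code>'s VIRT column is mostly address-space reservation by the CUDA runtime rather than resident memory.</p>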
python|tensorflow
1
373,960
45,897,179
string argument without an encoding
<p>Can someone please tell me why I am receiving the following error on Python 3? The following is the traceback:</p> <pre><code>TypeError                                 Traceback (most recent call last)
&lt;ipython-input-24-a81d4875414b&gt; in &lt;module&gt;()
      7 filename = [("id"), ("name"), ("email"), ("amount"),("sent")]
      8 writer= csv.DictWriter(temp_file, fieldnames = fieldnames)
----&gt; 9 writer.writeheader()
     10
     11 for row in reader:

C:\Users\johsc_001\AppData\Local\conda\conda\envs\ipykernel_py3\lib\csv.py in writeheader(self)
    142     def writeheader(self):
    143         header = dict(zip(self.fieldnames, self.fieldnames))
--&gt; 144         self.writerow(header)
    145
    146     def _dict_to_list(self, rowdict):

C:\Users\johsc_001\AppData\Local\conda\conda\envs\ipykernel_py3\lib\csv.py in writerow(self, rowdict)
    153
    154     def writerow(self, rowdict):
--&gt; 155         return self.writer.writerow(self._dict_to_list(rowdict))
    156
    157     def writerows(self, rowdicts):

C:\Users\johsc_001\AppData\Local\conda\conda\envs\ipykernel_py3\lib\tempfile.py in func_wrapper(*args, **kwargs)
    481         @_functools.wraps(func)
    482         def func_wrapper(*args, **kwargs):
--&gt; 483             return func(*args, **kwargs)
    484         # Avoid closing the file as long as the wrapper is alive,
    485         # see issue #18879.

TypeError: a bytes-like object is required, not 'str'
</code></pre> <p>Here's the source code:</p> <pre><code>import csv
import shutil
from tempfile import NamedTemporaryFile

filename = 'appendpyt2.csv'
temp_file = NamedTemporaryFile(delete= False)

with open(filename, 'rb')as csvfile, temp_file:
    reader =csv.DictReader(csvfile)
    filename = ["id", "name", "email", "amount", "sent"]
    writer= csv.DictWriter(temp_file, fieldnames = ["id", "name", "email","amout", "sent"])
    writer.writeheader()

    for row in reader:
        print(row)
        writer.writerow({
            "id": row["id"],
            "name": row["name"],
            "email":row["email"],
            "amout":"1234.56",
            "sent": ""
        })
</code></pre>
<p>The error seems to be from using Python 3, but using Python 2 requirements for opening <code>csv</code> files. If using Python 3, CSV files should not be opened in binary mode, and the newline parameter should be the empty string. The temporary file defaults to binary mode as well, so I've overridden it. I also used the following as an input file, deduced from the code since sample input wasn't provided.</p> <p><strong>appendpyt2.csv:</strong></p> <pre><code>id,name,email
id1,name1,email1
id2,name2,email2
</code></pre> <p><strong>Python 3 code:</strong></p> <pre><code>import csv
import shutil
from tempfile import NamedTemporaryFile

filename = 'appendpyt2.csv'
temp_file = NamedTemporaryFile(mode='w+', newline='', delete=False)

with open(filename, newline='') as csvfile, temp_file:
    reader = csv.DictReader(csvfile)
    filename = ["id", "name", "email", "amount", "sent"]
    writer = csv.DictWriter(temp_file, fieldnames=["id", "name", "email", "amount", "sent"])
    writer.writeheader()

    for row in reader:
        print(row)
        writer.writerow({
            "id": row["id"],
            "name": row["name"],
            "email": row["email"],
            "amount": "1234.56",
            "sent": ""
        })
</code></pre> <p><strong>Temporary file output:</strong></p> <pre><code>id,name,email,amount,sent
id1,name1,email1,1234.56,
id2,name2,email2,1234.56,
</code></pre>
python|python-2.7|python-3.x|numpy|data-analysis
0
373,961
22,965,747
Deleting Array Element in Python, Issues with np.delete
<p>I have a list of coordinates, among other things, and I want to delete the objects that fall in, say, quadrant I. I tried using np.delete, but perhaps my loop is wrong since it only deletes one single object. Here's what I have so far:</p> <pre><code>import sys
import os
import numpy as np
from pylab import *
import scipy

def get_distance(x,y,x_center,y_center):
    d = (x - x_center)**2 + (y - y_center)**2
    d = sqrt(d)
    return d

dataA=np.genfromtxt('match.txt')

c1=dataA[:,0]
c2=dataA[:,1]
d1=dataA[:,2]
d2=dataA[:,3]

for i in xrange(len(c1)):
    if c1[i] &gt;= 0 and c1[i] &lt;= 2288 and c2[i] &gt;= 2288 and c2[i] &lt;= 4576:
        new_a = np.delete(c1,i)
        new_b = np.delete(c2,i)
</code></pre>
<p>In your for loop, build a list of the i's that need to be deleted (e.g. <code>del_list</code>). Once you are done with the loop, you can delete that list of i's from c1 and c2:</p> <pre><code>new_a = np.delete(c1, del_list)
new_b = np.delete(c2, del_list)
</code></pre>
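<p>Put together with the question's loop, a minimal sketch might look like this (the bounds are the ones from the question):</p> <pre><code>del_list = []
for i in xrange(len(c1)):
    if 0 &lt;= c1[i] &lt;= 2288 and 2288 &lt;= c2[i] &lt;= 4576:
        del_list.append(i)           # mark index i for deletion

new_a = np.delete(c1, del_list)      # delete all marked indices at once
new_b = np.delete(c2, del_list)
</code></pre>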
python|arrays|numpy
1
373,962
22,995,762
pandas to_csv: suppress scientific notation in csv file when writing pandas to csv
<p>I am writing a pandas df to a csv. When I write it to a csv file, some of the elements in one of the columns are being incorrectly converted to scientific notation/numbers. For example, <code>col_1</code> has strings such as <code>'104D59'</code> in it. The strings are mostly represented as strings in the csv file, as they should be. However, occasional strings, such as <code>'104E59'</code>, are being converted into scientific notation (e.g., 1.04E61) and represented as numbers in the ensuing csv file.</p> <p>I am trying to export the csv file into a software package (i.e., pandas -&gt; csv -&gt; software_new) and this change in data type is causing problems with that export.</p> <p>Is there a way to write the df to a csv, ensuring that all elements in <code>df['problem_col']</code> are represented as strings in the resulting csv and not converted to scientific notation?</p> <p>Here is the code I have used to write the pandas df to a csv:</p> <pre><code>df.to_csv('df.csv', encoding='utf-8')
</code></pre> <p>I also checked the dtype of the problem column:</p> <pre><code>for df.dtype, df['problem_column'] is an object
</code></pre>
<blockquote> <p>For python 3.xx (<code>Python 3.7.2</code>) and</p> <p><code>In [2]: pd.__version__</code> <code>Out[2]: '0.23.4'</code>:</p> </blockquote> <p><a href="https://pandas.pydata.org/pandas-docs/stable/options.html" rel="noreferrer">Options and Settings</a></p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.set_option.html#pandas.set_option" rel="noreferrer">For visualization of the dataframe: pandas.set_option</a></p> <pre><code>import pandas as pd  # import pandas package
import numpy as np

# for visualization of the float data once we read it:
pd.set_option('display.html.table_schema', True)  # so we can see the dataframe/table as HTML
pd.set_option('display.precision', 5)             # set the display precision, here 5 decimal places

df = pd.DataFrame(np.random.randn(20,4)* 10 ** -12)  # create random dataframe
</code></pre> <h1>Output of the data:</h1> <pre><code>df.dtypes  # check datatype for columns

[output]:
0    float64
1    float64
2    float64
3    float64
dtype: object
</code></pre> <h3>Dataframe:</h3> <pre><code>df  # output of the dataframe

[output]:
              0            1            2            3
0  -2.01082e-12  1.25911e-12  1.05556e-12 -5.68623e-13
1  -6.87126e-13  1.91950e-12  5.25925e-13  3.72696e-13
2  -1.48068e-12  6.34885e-14 -1.72694e-12  1.72906e-12
3  -5.78192e-14  2.08755e-13  6.80525e-13  1.49018e-12
4  -9.52408e-13  1.61118e-13  2.09459e-13  2.10940e-13
5  -2.30242e-13 -1.41352e-13  2.32575e-12 -5.08936e-13
6   1.16233e-12  6.17744e-13  1.63237e-12  1.59142e-12
7   1.76679e-13 -1.65943e-12  2.18727e-12 -8.45242e-13
8   7.66469e-13  1.29017e-13 -1.61229e-13 -3.00188e-13
9   9.61518e-13  9.71320e-13  8.36845e-14 -6.46556e-13
10 -6.28390e-13 -1.17645e-12 -3.59564e-13  8.68497e-13
11  3.12497e-13  2.00065e-13 -1.10691e-12 -2.94455e-12
12 -1.08365e-14  5.36770e-13  1.60003e-12  9.19737e-13
13 -1.85586e-13  1.27034e-12 -1.04802e-12 -3.08296e-12
14  1.67438e-12  7.40403e-14  3.28035e-13  5.64615e-14
15 -5.31804e-13 -6.68421e-13  2.68096e-13  8.37085e-13
16 -6.25984e-13  1.81094e-13 -2.68336e-13  1.15757e-12
17  7.38247e-13 -1.76528e-12 -4.72171e-13 -3.04658e-13
18 -1.06099e-12 -1.31789e-12 -2.93676e-13 -2.40465e-13
19  1.38537e-12  9.18101e-13  5.96147e-13 -2.41401e-12
</code></pre> <h3>And now write <em>to_csv</em> using the <em>float_format='%.15f'</em> parameter</h3> <pre><code>df.to_csv('estc.csv',sep=',', float_format='%.15f')  # write with precision .15
</code></pre> <h3>file output:</h3> <pre><code>,0,1,2,3
0,-0.000000000002011,0.000000000001259,0.000000000001056,-0.000000000000569
1,-0.000000000000687,0.000000000001919,0.000000000000526,0.000000000000373
2,-0.000000000001481,0.000000000000063,-0.000000000001727,0.000000000001729
3,-0.000000000000058,0.000000000000209,0.000000000000681,0.000000000001490
4,-0.000000000000952,0.000000000000161,0.000000000000209,0.000000000000211
5,-0.000000000000230,-0.000000000000141,0.000000000002326,-0.000000000000509
6,0.000000000001162,0.000000000000618,0.000000000001632,0.000000000001591
7,0.000000000000177,-0.000000000001659,0.000000000002187,-0.000000000000845
8,0.000000000000766,0.000000000000129,-0.000000000000161,-0.000000000000300
9,0.000000000000962,0.000000000000971,0.000000000000084,-0.000000000000647
10,-0.000000000000628,-0.000000000001176,-0.000000000000360,0.000000000000868
11,0.000000000000312,0.000000000000200,-0.000000000001107,-0.000000000002945
12,-0.000000000000011,0.000000000000537,0.000000000001600,0.000000000000920
13,-0.000000000000186,0.000000000001270,-0.000000000001048,-0.000000000003083
14,0.000000000001674,0.000000000000074,0.000000000000328,0.000000000000056
15,-0.000000000000532,-0.000000000000668,0.000000000000268,0.000000000000837
16,-0.000000000000626,0.000000000000181,-0.000000000000268,0.000000000001158
17,0.000000000000738,-0.000000000001765,-0.000000000000472,-0.000000000000305
18,-0.000000000001061,-0.000000000001318,-0.000000000000294,-0.000000000000240
19,0.000000000001385,0.000000000000918,0.000000000000596,-0.000000000002414
</code></pre> <h3>And now write <em>to_csv</em> using the <em>float_format='%f'</em> parameter</h3> <pre><code>df.to_csv('estc.csv',sep=',', float_format='%f')  # this will remove the extra zeros after the '.'
</code></pre> <p><a href="http://pandas.pydata.org/pandas-docs/version/0.19.1/generated/pandas.DataFrame.to_csv.html" rel="noreferrer">For more details check pandas.DataFrame.to_csv</a></p>
python|csv|pandas|type-conversion|scientific-notation
17
373,963
23,023,878
Why convert a python list to a numpy array?
<p>There's <a href="https://stackoverflow.com/questions/7717380/how-to-convert-2d-list-to-2d-numpy-array">a simple way</a> to convert a list of numbers in python to a numpy array.</p> <p>But the simple functions that I have tried, for instance <code>numpy.average(x)</code>, would work regardless of whether <code>x</code> is a simple python list or a numpy array. In which type(s) of cases is it required to convert a list (or an array for that matter) in python to an array in numpy?</p>
<p>The answers that have been given thus far are very good. The simple convenience of having much of the NumPy functionality bound to the array object through its methods is very helpful. Here's something that hasn't been mentioned yet.</p> <p>One very good reason to convert your lists to arrays <em>before</em> passing them to NumPy functions is that, internally, most NumPy functions try to make arguments that ought to be arrays into NumPy arrays before performing any computation. This means that, when you call a NumPy function on a list of values or a list of lists of values, NumPy first converts the list to an array, then runs the computation, then returns the result.</p> <p>If you make multiple calls to NumPy functions on the same list, NumPy will be forced to construct a new array and copy the values from your array into it <em>every time</em> you make a call to a built in NumPy function. This causes a lot of unnecessary conversions and will slow down every function call. To convert a list to an array, NumPy must iterate through the list you have given to determine a suitable data type and shape, allocate an empty array to hold the needed values, then iterate over the list again to store the value contained in each Python object in the appropriate entry of the array.</p> <p>All this extra work can slow things down, and it doesn't make sense to do all the same conversions multiple times if they aren't necessary.</p>
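<p>A quick way to see this cost in practice (a sketch; the exact timings are machine-dependent):</p> <pre><code>import timeit
import numpy as np

x_list = list(range(100000))
x_arr = np.array(x_list)

# each call on the list re-converts it to an array first
print(timeit.timeit(lambda: np.average(x_list), number=100))
# the array version skips the conversion entirely
print(timeit.timeit(lambda: np.average(x_arr), number=100))
</code></pre>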
python|python-2.7|numpy
3
373,964
23,232,463
Python: Replace a cell value in Dataframe with if statement
<p>I have a matrix that looks like this:</p> <pre><code>     com  0  1  2  3  4  5
AAA    0  5  0  4  2  1  4
ABC    0  9  8  9  1  0  3
ADE    1  4  3  5  1  0  1
BCD    1  6  7  8  3  4  1
BCF    2  3  4  2  1  3  0
...
</code></pre> <p>Where <code>AAA, ABC</code> ... is the dataframe index. The dataframe columns are <code>com 0 1 2 3 4 5</code>.</p> <p>I want to set the cell values in my dataframe to 0 when the row's value of <code>com</code> equals the column "number". So for instance, the above matrix will look like:</p> <pre><code>     com  0  1  2  3  4  5
AAA    0  0  0  4  2  1  4
ABC    0  0  8  9  1  0  3
ADE    1  4  0  5  1  0  1
BCD    1  6  0  8  3  4  1
BCF    2  3  4  0  1  3  0
...
</code></pre> <p>I tried to iterate over the rows and use both <code>.loc</code> and <code>.ix</code>, but with no success.</p>
<p>This just requires a <code>numpy</code> trick:</p> <pre><code>In [22]: print df
   0  1  2  3  4  5
0  5  0  4  2  1  4
0  9  8  9  1  0  3
1  4  3  5  1  0  1
1  6  7  8  3  4  1
2  3  4  2  1  3  0

[5 rows x 6 columns]

In [23]: #making a masking matrix, 0 where column and index values are equal, 1 elsewhere,
         #kind of the vectorized way of doing: if TRUE 0, else 1
         print df*np.where(df.columns.values==df.index.values[..., np.newaxis], 0, 1)
   0  1  2  3  4  5
0  0  0  4  2  1  4
0  0  8  9  1  0  3
1  4  0  5  1  0  1
1  6  0  8  3  4  1
2  3  4  0  1  3  0

[5 rows x 6 columns]
</code></pre>
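<p>For reference, a hedged alternative using <code>DataFrame.mask</code>, which reads a little more declaratively (assuming, as above, that <code>com</code> has become the integer index and the column labels are integers, so the comparison is meaningful):</p> <pre><code>import numpy as np

cond = df.columns.values == df.index.values[:, np.newaxis]  # True where column label == index value
result = df.mask(cond, 0)  # set those cells to 0, keep the rest unchanged
</code></pre>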
python|pandas
2
373,965
23,232,989
Boxplot stratified by column in python pandas
<p>I would like to draw a boxplot for the following pandas dataframe:</p> <pre><code>&gt; p1.head(10)
   N0_YLDF    MAT
0     1.29  13.67
1     2.32  10.67
2     6.24  11.29
3     5.34  21.29
4     6.35  41.67
5     5.35  91.67
6     9.32  21.52
7     6.32  31.52
8     3.33  13.52
9     4.56  44.52
</code></pre> <p>I want the boxplots to be of the column 'N0_YLDF', but they should be stratified by 'MAT'. When I use the following command:</p> <pre><code>p1.boxplot(column='N0_YLDF',by='MAT')
</code></pre> <p>It uses all the unique MAT values, which in the full p1 dataframe number around 15,000. This results in an incomprehensible boxplot.</p> <p>Is there any way I can stratify the MAT values, so that I get a different boxplot of N0_YLDF for the first quartile of MAT values, and so on?</p> <p>thanks!</p>
<p>Pandas has the <code>cut</code> and <code>qcut</code> functions to make stratifying variables like this easy:</p> <pre><code># Just asking for split into 4 equal groups (i.e. quartiles) here, # but you can split on custom quantiles by passing in an array p1['MAT_quartiles'] = pd.qcut(p1['MAT'], 4, labels=['0-25%', '25-50%', '50-75%', '75-100%']) p1.boxplot(column='N0_YLDF', by='MAT_quartiles') </code></pre> <p>Output:</p> <p><img src="https://i.stack.imgur.com/dTdre.png" alt="enter image description here"></p>
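<p>As the comment in the code hints, you can also pass explicit quantile edges, for instance to break out the tails separately; a sketch:</p> <pre><code>p1['MAT_groups'] = pd.qcut(p1['MAT'], [0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0])
p1.boxplot(column='N0_YLDF', by='MAT_groups')
</code></pre>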
python|matplotlib|pandas|boxplot
10
373,966
22,991,318
TypeError in Python when using Pyalgotrade
<p>I am trying to write a Stochastic Oscillator in Python using the list functions in the PyAlgoTrade library.</p> <p>My code is below:</p> <pre><code>from pyalgotrade.tools import yahoofinance
from pyalgotrade import strategy
from pyalgotrade.barfeed import yahoofeed
from pyalgotrade.technical import stoch
from pyalgotrade import dataseries
from pyalgotrade.technical import ma
from pyalgotrade import technical
from pyalgotrade.technical import highlow
from pyalgotrade import bar
from pyalgotrade.talibext import indicator
import numpy
import talib

class MyStrategy(strategy.BacktestingStrategy):
    def __init__(self, feed, instrument):
        strategy.BacktestingStrategy.__init__(self, feed)
        self.__instrument = instrument

    def onBars(self, bars):
        barDs = self.getFeed().getDataSeries("002389.SZ")
        self.__stoch = indicator.STOCH(barDs, 20, 3, 3)
        bar = bars[self.__instrument]
        self.info("%0.2f, %0.2f" % (bar.getClose(), self.__stoch[-1]))

# Download, then load the yahoo feed from the CSV file
yahoofinance.download_daily_bars('002389.SZ', 2013, '002389.csv')
feed = yahoofeed.Feed()
feed.addBarsFromCSV("002389.SZ", "002389.csv")

# Evaluate the strategy with the feed's bars.
myStrategy = MyStrategy(feed, "002389.SZ")
myStrategy.run()
</code></pre> <p>And I get an error like this:</p> <pre><code>  File "/Users/johnhenry/Desktop/simple_strategy.py", line 46, in onBars
    self.info("%0.2f, %0.2f" % (bar.getClose(), self.__stoch[-1]))
TypeError: float argument required, not numpy.ndarray
</code></pre> <p>Stochastic:</p> <p>pyalgotrade.talibext.indicator.STOCH(barDs, count, fastk_period=-2147483648, slowk_period=-2147483648, slowk_matype=0, slowd_period=-2147483648, slowd_matype=0)</p>
<p>Either <code>bar.getClose()</code> or <code>self.__stoch[-1]</code> is returning a <code>numpy.ndarray</code> while both should be returning <code>float</code>s.</p>
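<p>If PyAlgoTrade's wrapper follows the underlying TA-Lib convention, STOCH returns <em>two</em> arrays (slowk and slowd), so <code>self.__stoch[-1]</code> is the slowd array rather than a scalar. A hedged sketch of the fix, assuming that two-array return:</p> <pre><code>def onBars(self, bars):
    barDs = self.getFeed().getDataSeries("002389.SZ")
    slowk, slowd = indicator.STOCH(barDs, 20, 3, 3)  # assumed TA-Lib style two-array output
    bar = bars[self.__instrument]
    self.info("%0.2f, %0.2f" % (bar.getClose(), slowd[-1]))  # the last element is a float
</code></pre>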
python|numpy|pyalgotrade
0
373,967
23,159,791
Find the indices of non-zero elements and group by values
<p>I wrote code in Python that takes a numpy matrix as input and returns a list of indices grouped by the corresponding values (i.e. output[3] returns all indices with a value of 3). However, I lack the knowledge of writing vectorized code and had to do it using ndenumerate. This operation took about 9 seconds, which is too slow.</p> <p>The second idea that I had was using numpy.nonzero as follows:</p> <pre><code>for i in range(1, max_value):
    current_array = np.nonzero(input == i) # save in an array
</code></pre> <p>This took 5.5 seconds, so it was a good improvement, but still slow. Is there any way to do it without loops, or an optimized way to get the pairs of indices per value?</p>
<p>Here's an O(n log n) algorithm for your problem. The obvious looping solution is O(n), so for sufficiently large datasets this will be slower:</p> <pre><code>&gt;&gt;&gt; a = np.random.randint(3, size=10) &gt;&gt;&gt; a array([1, 2, 2, 0, 1, 0, 2, 2, 1, 1]) &gt;&gt;&gt; index = np.arange(len(a)) &gt;&gt;&gt; sort_idx = np.argsort(a) &gt;&gt;&gt; cnt = np.bincount(a) &gt;&gt;&gt; np.split(index[sort_idx], np.cumsum(cnt[:-1])) [array([3, 5]), array([0, 4, 8, 9]), array([1, 2, 6, 7])] </code></pre> <p>It will depend on the size of your data, but it is reasonably fast for largish data sets:</p> <pre><code>In [1]: a = np.random.randint(1000, size=1e6) In [2]: %%timeit ...: indices = np.arange(len(a)) ...: sort_idx = np.argsort(a) ...: cnt = np.bincount(a) ...: np.split(indices[sort_idx], np.cumsum(cnt[:-1])) ...: 10 loops, best of 3: 140 ms per loop </code></pre>
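<p>If a value-to-indices lookup is more convenient than a positional list, the split result can be wrapped in a dictionary; a small sketch continuing the example above:</p> <pre><code>groups = np.split(index[sort_idx], np.cumsum(cnt[:-1]))
indices_by_value = {value: idx for value, idx in enumerate(groups)}
indices_by_value[2]   # -&gt; array([1, 2, 6, 7]) for the example above
</code></pre>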
python|optimization|numpy
3
373,968
35,769,944
Manipulating matrix elements in tensorflow
<p>How can I do the following in tensorflow? </p> <pre><code>mat = [4,2,6,2,3]
# mat[2] = 0  # simply zero the 3rd element
</code></pre> <p>I can't use the [] brackets because they only work on constants and not on variables. I can't use the slice function either, because that returns a tensor and you can't assign to a tensor.</p> <pre><code>import tensorflow as tf

sess = tf.Session()
var1 = tf.Variable(initial_value=[2, 5, -4, 0])
assignZerosOP = (var1[2] = 0) # &lt; ------ This is what I want to do

sess.run(tf.initialize_all_variables())

print sess.run(var1)
sess.run(assignZerosOP)
print sess.run(var1)
</code></pre> <h1>Will print</h1> <pre><code>[2, 5, -4, 0]
[2, 5, 0, 0]
</code></pre>
<p>You can't change a tensor - but, as you noted, you can change a variable.</p> <p>There are three patterns you could use to accomplish what you want:</p> <p>(a) Use <a href="https://www.tensorflow.org/api_docs/python/tf/scatter_update" rel="noreferrer"><code>tf.scatter_update</code></a> to directly poke to the part of the variable you want to change.</p> <pre><code>import tensorflow as tf a = tf.Variable(initial_value=[2, 5, -4, 0]) b = tf.scatter_update(a, [1], [9]) init = tf.initialize_all_variables() with tf.Session() as s: s.run(init) print s.run(a) print s.run(b) print s.run(a) </code></pre> <blockquote> <p>[ 2 5 -4 0]</p> <p>[ 2 9 -4 0]</p> <p>[ 2 9 -4 0]</p> </blockquote> <p>(b) Create two <code>tf.slice()</code>s of the tensor, excluding the item you want to change, and then <code>tf.concat(0, [a, 0, b])</code> them back together.</p> <p>(c) Create <code>b = tf.zeros_like(a)</code>, and then use <code>tf.select()</code> to choose which items from <code>a</code> you want, and which zeros from <code>b</code> that you want.</p> <p>I've included (b) and (c) because they work with normal tensors, not just variables.</p>
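<p>For completeness, a minimal sketch of option (c), written against the same era of the API as the answer (<code>tf.select</code> was later renamed <code>tf.where</code>):</p> <pre><code>import tensorflow as tf

a = tf.constant([2, 5, -4, 0])
b = tf.zeros_like(a)
keep = tf.constant([True, True, False, True])   # False at the positions to zero out
c = tf.select(keep, a, b)                       # picks from a where True, from b where False

with tf.Session() as s:
    print s.run(c)                              # [ 2  5  0  0]
</code></pre>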
matrix|indexing|element|variable-assignment|tensorflow
19
373,969
35,640,364
Python Pandas max value in a group as a new column
<p>I am trying to calculate a new column which contains maximum values for each of several groups. I'm coming from a Stata background so I know the Stata code would be something like this:</p> <pre><code>by group, sort: egen max = max(odds) </code></pre> <p>For example: </p> <pre><code>data = {'group' : ['A', 'A', 'B','B'], 'odds' : [85, 75, 60, 65]} </code></pre> <p>Then I would like it to look like:</p> <pre><code> group odds max A 85 85 A 75 85 B 60 65 B 65 65 </code></pre> <p>Eventually I am trying to form a column that takes <code>1/(max-min) * odds</code> where <code>max</code> and <code>min</code> are for each group. </p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="noreferrer"><code>groupby</code></a> + <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transform.html" rel="noreferrer"><code>transform</code></a>:</p> <pre><code>df['max'] = df.groupby('group')['odds'].transform('max') </code></pre> <p>This is equivalent to the verbose:</p> <pre><code>maxima = df.groupby('group')['odds'].max() df['max'] = df['group'].map(maxima) </code></pre> <p>The <code>transform</code> method aligns the <code>groupby</code> result to the <code>groupby</code> indexer, so no explicit mapping is required.</p>
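<p>For the final goal mentioned in the question, the same pattern gives the scaled value directly; a sketch (the new column name here is made up):</p> <pre><code>g = df.groupby('group')['odds']
df['scaled'] = df['odds'] / (g.transform('max') - g.transform('min'))   # assumes max != min within each group
</code></pre>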
python|pandas|dataframe|grouping|pandas-groupby
38
373,970
35,663,705
how to plot time on y-axis in '%H:%M' format in matplotlib?
<p>I would like to plot the times from a datetime64 series, where the y-axis is formatted as '%H:%M', showing only 00:00, 01:00, 02:00, etc. </p> <p>This is what the plot looks like without customizing the y-axis formatting.</p> <pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter
from matplotlib.dates import HourLocator

df = pd.DataFrame(data=dict(a=pd.date_range('1/1/2011',periods=1440000,freq='1min')))
df = df.iloc[np.arange(0,1440*100,1440)+np.random.randint(1,300,100)]

plt.plot(df.index,df['a'].dt.time)
plt.show()
</code></pre> <p><a href="https://i.stack.imgur.com/U5l4y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U5l4y.png" alt="enter image description here"></a></p> <p>After reading around on the topic on SO, I attempted the following, but without success.</p> <pre><code>ax = plt.subplot()
ax.yaxis.set_major_locator(HourLocator())
ax.yaxis.set_major_formatter(DateFormatter('%H:%M'))

plt.plot(df.index,df['a'].dt.time)
plt.show()

ValueError: DateFormatter found a value of x=0, which is an illegal date.  This usually occurs because you have not informed the axis that it is plotting dates, e.g., with ax.xaxis_date()
</code></pre> <p>Could anyone advise me?</p>
<p>For that to work you need to pass <code>datetime</code> objects (and I mean <code>datetime</code>, not <code>datetime64</code>). You can convert all timestamps to the same date and then use <code>.tolist()</code> to get the actual <code>datetime</code> objects.</p> <pre><code>y = df['a'].apply(lambda x: x.replace(year=1967, month=6, day=25)).tolist() ax = plt.subplot() ax.plot(df.index, y) ax.yaxis.set_major_locator(HourLocator()) ax.yaxis.set_major_formatter(DateFormatter('%H:%M')) </code></pre> <p><a href="https://i.stack.imgur.com/ZdyRc.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZdyRc.png" alt="enter image description here"></a></p>
python|pandas|matplotlib|python-datetime
6
373,971
35,636,896
return a list of all datasets in a hdf file with pandas
<p>This may be a stupid question, but i have yet to find an answer in the pandas docs or elsewhere. The same question has been asked before <a href="https://stackoverflow.com/questions/24236252/read-the-properties-of-hdf-file-in-python">here</a>. But the only answer was to look at the pandas docs, which as I stated don't provide an answer to this problem. </p> <p>I want to be able to build an hdf file with several datasets. Once this hdf has been closed I would like to be able to list each of the datasets contained within. For example:</p> <pre><code>import pandas as pd import numpy as np store = pd.HDFStore('test.h5') df1 = pd.DataFrame(np.random.randn(10,2), columns=list('AB') df2 = pd.DataFrame(np.random.randn(10,2), columns=list('AB') store['df1'] = df1 store['df2'] = df2 print(store) </code></pre> <p>Returns:</p> <pre><code>&lt;class 'pandas.io.pytables.HDFStore'&gt; File path: test.h5 /df1 frame (shape-&gt;[10,2]) /df2 frame (shape-&gt;[10,2]) </code></pre> <p>However if you close the hdf with <code>store.close()</code> and then attempt to read it using <code>pd.read_hdf()</code> the following error returns:</p> <pre><code>ValueError: key must be provided when HDF contains multiple datasets. </code></pre> <p>Is there a way to return a list of all these datasets?</p> <p>Thanks in advance for any help!</p>
<p>Yes, there is.</p> <pre><code>store = pd.HDFStore('test.h5') print(store) &lt;class 'pandas.io.pytables.HDFStore'&gt; File path: test.h5 /df1 frame (shape-&gt;[10,2]) /df2 frame (shape-&gt;[10,2]) </code></pre>
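<p>To get an actual Python list of the dataset names, and to read one back from a store that contains multiple datasets, <code>keys()</code> does the job:</p> <pre><code>store = pd.HDFStore('test.h5')
print(store.keys())        # ['/df1', '/df2']
df1 = store['df1']         # or: pd.read_hdf('test.h5', 'df1')
store.close()
</code></pre>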
python|pandas|hdf
11
373,972
35,644,847
python numpy slice notation (COMMA VS STANDARD INDEX)
<p>Is there a performance difference between using a comma and explicitly exploding out the index references for perhaps more conventional readers? Both seem to yield the same results, but the latter may be more intuitive to some.</p> <pre><code>x = numpy.array([[1,2,3,4], [5,6,7,8]])

comma_method = x[0,1:3]
&gt;&gt;&gt; numpy.array([2,3])

conventional_method = x[0][1:3]
&gt;&gt;&gt; numpy.array([2,3])
</code></pre>
<p>Pretty much always go for the comma, not for performance reasons, but because indexing twice isn't quite equivalent:</p> <pre><code>In [2]: x = numpy.array([[0, 1], [2, 3]]) In [3]: x[:1, :1] Out[3]: array([[0]]) In [4]: x[:1][:1] Out[4]: array([[0, 1]]) </code></pre> <p>That said, the comma also appears to have a speed advantage:</p> <pre><code>In [7]: %timeit x[0][0] The slowest run took 25.41 times longer than the fastest. This could mean that a n intermediate result is being cached 1000000 loops, best of 3: 357 ns per loop In [8]: %timeit x[0, 0] The slowest run took 41.92 times longer than the fastest. This could mean that a n intermediate result is being cached 1000000 loops, best of 3: 148 ns per loop </code></pre> <p>I'm not sure what's up with the slowest run and the fastest run having such a time difference.</p>
python|numpy|performance|slice
3
373,973
35,564,063
comparing ndarray with values in 1D array to get a mask
<p>I have two numpy arrays, 2D and 1D respectively. I want to obtain a 2D binary mask where each element of the mask is true if it matches any of the elements of the 1D array.</p> <p>Example</p> <pre><code> 2D array
-----------
1 2 3
4 9 6
7 2 3

1D array
-----------
1,9,3

Expected output
---------------
True False True
False True False
False False True
</code></pre> <p>Thanks</p>
<p>You could use <code>np.in1d</code>. Although <code>np.in1d</code> returns a 1D array, you could simply reshape the result afterwards:</p> <pre><code>In [174]: arr = np.array([[1,2,3],[4,9,6],[7,2,3]]) In [175]: bag = [1,9,3] In [177]: np.in1d(arr, bag).reshape(arr.shape) Out[177]: array([[ True, False, True], [False, True, False], [False, False, True]], dtype=bool) </code></pre> <p>Note that <code>in1d</code> is checking of the elements in <code>arr</code> match <em>any</em> of the elements in <code>bag</code>. In contrast, <code>arr == bag</code> tests if the elements of <code>arr</code> equal the broadcasted elements of <code>bag</code> <em>element-wise</em>. You can see the difference by permuting <code>bag</code>:</p> <pre><code>In [179]: arr == np.array([1,3,9]) Out[179]: array([[ True, False, False], [False, False, False], [False, False, False]], dtype=bool) In [180]: np.in1d(arr, [1,3,9]).reshape(arr.shape) Out[180]: array([[ True, False, True], [False, True, False], [False, False, True]], dtype=bool) </code></pre> <hr> <p>When you compare two arrays of unequal shape, NumPy tries to <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow">broadcast</a> the two arrays to a single compatible shape before testing for equality. In this case, <code>[1, 3, 9]</code> gets broadcasted to </p> <pre><code>array([[1, 3, 9], [1, 3, 9], [1, 3, 9]]) </code></pre> <p>since new axes are added on the left. You can check the effect of broadcasting this way:</p> <pre><code>In [181]: np.broadcast_arrays(arr, [1,3,9]) Out[185]: [array([[1, 2, 3], [4, 9, 6], [7, 2, 3]]), array([[1, 3, 9], [1, 3, 9], [1, 3, 9]])] </code></pre> <p>Once the two arrays are broadcasted up to a common shape, equality is tested <em>element-wise</em>, which means the values in corresponding locations are tested for equality. In the top row, for example, the equality tests are <code>1 == 1</code>, <code>2 == 3</code>, <code>3 == 9</code>. Hence,</p> <pre><code>In [179]: arr == np.array([1,3,9]) Out[179]: array([[ True, False, False], [False, False, False], [False, False, False]], dtype=bool) </code></pre>
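<p>On newer NumPy versions (1.13 and later), <code>np.isin</code> performs the same membership test while preserving the input's shape, so the reshape becomes unnecessary:</p> <pre><code>np.isin(arr, bag)   # same mask as above, already in arr's shape
</code></pre>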
python|numpy
2
373,974
35,458,365
Use numpy arrays to count similar tuples
<p>I have 32 arrays like this one:</p> <pre><code>&gt;&gt;&gt; d01
array([[8, 4, 1, 0, 0],
       [6, 8, 5, 5, 2],
       [1, 1, 1, 1, 1]])
&gt;&gt;&gt; d02
...
&gt;&gt;&gt; d32
array([[8, 7, 1, 0, 3],
       [2, 8, 5, 5, 2],
       [1, 1, 1, 1, 1]])
</code></pre> <ul> <li>they have only ones in line 3, dxy[2, i] = 1</li> <li>the pairs (dxy[0, i], dxy[1, i]) are unique</li> <li>the typical dxy.shape is ~(3, 10e6)</li> <li>the 32 dxy arrays can't be dicts because they're in shared memory.</li> </ul> <p>What is the fastest way, that also scales well in memory, to get this final structure:</p> <pre><code>&gt;&gt;&gt; res
array([[8, 8, 7, 4, 1, 0, 0, 3],
       [6, 2, 8, 8, 5, 5, 2, 2],
       [1, 1, 1, 1, 2, 2, 1, 1]])
</code></pre> <p>So the value of res[2, i] represents the number of times the tuple (res[0, i], res[1, i]) exists in the 32 dxy arrays.</p>
<p>You can use the Counter container, which is an enhanced dictionary (see the <a href="https://docs.python.org/2/library/collections.html#collections.Counter" rel="nofollow">documentation</a>).</p> <pre><code>import numpy as np
from collections import Counter

d01 = np.array([[8, 4, 1, 0, 0], [6, 8, 5, 5, 2], [1, 1, 1, 1, 1]])
d02 = np.array([[8, 7, 1, 0, 3], [2, 8, 5, 5, 2], [1, 1, 1, 1, 1]])
</code></pre> <p>Build the counter:</p> <pre><code>count = Counter(map(tuple,np.c_[d01,d02][:2].T.tolist()))
Out[1]: Counter({(0, 2): 1, (0, 5): 2, (1, 5): 2, (3, 2): 1, (4, 8): 1, (7, 8): 1, (8, 2): 1, (8, 6): 1})
</code></pre> <p>Reformatting the counter as a matrix:</p> <pre><code>res = np.c_[np.array(list(count.keys())),list(count.values())].T
Out[2]:
array([[1, 3, 0, 8, 4, 8, 7, 0],
       [5, 2, 5, 2, 8, 6, 8, 2],
       [2, 1, 2, 1, 1, 1, 1, 1]])
</code></pre>
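<p>To accumulate the counts over all 32 arrays rather than just two, one counter can be updated in a loop; a sketch, where <code>all_d</code> is a made-up name for whatever sequence holds the 32 arrays:</p> <pre><code>count = Counter()
for d in all_d:                                    # the 32 dxy arrays
    count.update(map(tuple, d[:2].T.tolist()))     # count each (x, y) pair
</code></pre>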
python|numpy
2
373,975
35,383,388
Modify pandas dataframe in python based on multiple rows
<p>I am working with a DataFrame in Pandas / Python, each row has an ID (that is not unique), I would like to modify the dataframe to add a column with the secondname for each row that has multiple matching ID's. </p> <pre><code>Starting with: ID Name Rate 0 1 A 65.5 1 2 B 67.3 2 2 C 78.8 3 3 D 65.0 4 4 E 45.3 5 5 F 52.0 6 5 G 66.0 7 6 H 34.0 8 7 I 2.0 Trying to get to: ID Name Rate Secondname 0 1 A 65.5 None 1 2 B 67.3 C 2 2 C 78.8 B 3 3 D 65.0 None 4 4 E 45.3 None 5 5 F 52.0 G 6 5 G 66.0 F 7 6 H 34.0 None 8 7 I 2.0 None </code></pre> <p>My code:</p> <pre><code>import numpy as np import pandas as pd mydict = {'ID':[1,2,2,3,4,5,5,6,7], 'Name':['A','B','C','D','E','F','G','H','I'], 'Rate':[65.5,67.3,78.8,65,45.3,52,66,34,2]} df=pd.DataFrame(mydict) df['Newname']='None' for i in range(0, df.shape[0]-1): if df.irow(i)['ID']==df.irow(i+1)['ID']: df.irow(i)['Newname']=df.irow(i+1)['Name'] </code></pre> <p>Which results in the following error:</p> <pre><code>A value is trying to be set on a copy of a slice from a DataFrame See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy df.irow(i)['Newname']=df.irow(i+1)['Secondname'] C:\Users\L\Anaconda3\lib\site-packages\pandas\core\series.py:664: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the the caveats in the documentation: http://pandas.pydata.org/pandas- docs/stable/indexing.html#indexing-view-versus-copy self.loc[key] = value </code></pre> <p>Any help would be much appreciated. </p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> with custom function <code>f</code>, which use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.shift.html" rel="nofollow noreferrer"><code>shift</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.combine_first.html" rel="nofollow noreferrer"><code>combine_first</code></a>:</p> <pre><code>def f(x): #print x x['Secondname'] = x['Name'].shift(1).combine_first(x['Name'].shift(-1)) return x print df.groupby('ID').apply(f) ID Name Rate Secondname 0 1 A 65.5 NaN 1 2 B 67.3 C 2 2 C 78.8 B 3 3 D 65.0 NaN 4 4 E 45.3 NaN 5 5 F 52.0 G 6 5 G 66.0 F 7 6 H 34.0 NaN 8 7 I 2.0 NaN </code></pre> <p>You can avoid <code>groupby</code> and find <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html" rel="nofollow noreferrer"><code>duplicated</code></a>, then fill helper columns by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>loc</code></a> with column <code>Name</code>, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.shift.html" rel="nofollow noreferrer"><code>shift</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.combine_first.html" rel="nofollow noreferrer"><code>combine_first</code></a> and last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="nofollow noreferrer"><code>drop</code></a> helper columns:</p> <pre><code>print df.duplicated('ID', keep='first') 0 False 1 False 2 True 3 False 4 False 5 False 6 True 7 False 8 False dtype: bool print df.duplicated('ID', keep='last') 0 False 1 True 2 False 3 False 4 False 5 True 6 False 7 False 8 False dtype: bool df.loc[ df.duplicated('ID', keep='first'), 'first'] = df['Name'] df.loc[ df.duplicated('ID', keep='last'), 'last'] = df['Name'] print df ID Name Rate first last 0 1 A 65.5 NaN NaN 1 2 B 67.3 NaN B 2 2 C 78.8 C NaN 3 3 D 65.0 NaN NaN 4 4 E 45.3 NaN NaN 5 5 F 52.0 NaN F 6 5 G 66.0 G NaN 7 6 H 34.0 NaN NaN 8 7 I 2.0 NaN NaN </code></pre> <pre><code>df['SecondName'] = df['first'].shift(-1).combine_first(df['last'].shift(1)) df = df.drop(['first', 'l1'], axis=1) </code></pre> <pre><code>print df ID Name Rate SecondName 0 1 A 65.5 NaN 1 2 B 67.3 C 2 2 C 78.8 B 3 3 D 65.0 NaN 4 4 E 45.3 NaN 5 5 F 52.0 G 6 5 G 66.0 F 7 6 H 34.0 NaN 8 7 I 2.0 NaN </code></pre> <p><strong>TESTING</strong>: (in time of testing solution of <a href="https://stackoverflow.com/a/35384195/2901002">Roman Kh</a> has wrong output)</p> <p><code>len(df) = 9</code>:</p> <pre><code>In [154]: %timeit jez(df1) 100 loops, best of 3: 15 ms per loop In [155]: %timeit jez2(df2) 100 loops, best of 3: 3.45 ms per loop In [156]: %timeit rom(df) 100 loops, best of 3: 3.55 ms per loop </code></pre> <p><code>len(df) = 90k</code>:</p> <pre><code>In [158]: %timeit jez(df1) 10 loops, best of 3: 57.1 ms per loop In [159]: %timeit jez2(df2) 10 loops, best of 3: 36.4 ms per loop In [160]: %timeit rom(df) 10 loops, best of 3: 40.4 ms per loop </code></pre> <pre><code>import pandas as pd mydict = {'ID':[1,2,2,3,4,5,5,6,7], 'Name':['A','B','C','D','E','F','G','H','I'], 'Rate':[65.5,67.3,78.8,65,45.3,52,66,34,2]} df=pd.DataFrame(mydict) print df df = pd.concat([df]*10000).reset_index(drop=True) df1 = df.copy() df2 = df.copy() def jez(df): def f(x): #print 
        x['Secondname'] = x['Name'].shift(1).combine_first(x['Name'].shift(-1))
        return x
    return df.groupby('ID').apply(f)

def jez2(df):
    #print df.duplicated('ID', keep='first')
    #print df.duplicated('ID', keep='last')
    df.loc[ df.duplicated('ID', keep='first'), 'first'] = df['Name']
    df.loc[ df.duplicated('ID', keep='last'), 'last'] = df['Name']
    #print df
    df['SecondName'] = df['first'].shift(-1).combine_first(df['last'].shift(1))
    df = df.drop(['first', 'last'], axis=1)
    return df

def rom(df):
    # cpIDs = True if the next row has the same ID
    df['cpIDs'] = df['ID'][:-1] == df['ID'][1:]
    # fill in the last row (get rid of NaN)
    df.iloc[-1,df.columns.get_loc('cpIDs')] = False
    # ShiftName == Name of the next row
    df['ShiftName'] = df['Name'].shift(-1)
    # fill in SecondName
    df.loc[df['cpIDs'], 'SecondName'] = df.loc[df['cpIDs'], 'ShiftName']
    # remove columns
    del df['cpIDs']
    del df['ShiftName']
    return df

print jez(df1)
print jez2(df2)
print rom(df)
</code></pre> <pre><code>print jez(df1)
   ID Name  Rate Secondname
0   1    A  65.5        NaN
1   2    B  67.3          C
2   2    C  78.8          B
3   3    D  65.0        NaN
4   4    E  45.3        NaN
5   5    F  52.0          G
6   5    G  66.0          F
7   6    H  34.0        NaN
8   7    I   2.0        NaN

print jez2(df2)
   ID Name  Rate SecondName
0   1    A  65.5        NaN
1   2    B  67.3          C
2   2    C  78.8          B
3   3    D  65.0        NaN
4   4    E  45.3        NaN
5   5    F  52.0          G
6   5    G  66.0          F
7   6    H  34.0        NaN
8   7    I   2.0        NaN

print rom(df)
   ID Name  Rate SecondName
0   1    A  65.5        NaN
1   2    B  67.3          C
2   2    C  78.8        NaN
3   3    D  65.0        NaN
4   4    E  45.3        NaN
5   5    F  52.0          G
6   5    G  66.0        NaN
7   6    H  34.0        NaN
8   7    I   2.0        NaN
</code></pre> <p>EDIT:</p> <p>If there are more duplicated pairs with the same names, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.shift.html" rel="nofollow noreferrer"><code>shift</code></a> for creating the <code>first</code> and <code>last</code> columns:</p> <pre><code>df.loc[ df['ID'] == df['ID'].shift(), 'first'] = df['Name']
df.loc[ df['ID'] == df['ID'].shift(-1), 'last'] = df['Name']
</code></pre>
python|python-3.x|pandas
4
373,976
35,528,472
By dumping a python list, creating a python ".py" file which returns the same python list
<p>I have a ".csv" file which contains more than 1 million rows of data.</p> <p>In Python, I have to process this data. In this case, after each run I have to wait almost a minute just to load the data.</p> <p>To avoid waiting so long, I want to automatically create a ".py" file which contains the numpy array built from this ".csv" file, and returns this value.</p> <p>How can I create such a ".py" file automatically with a Python script?</p> <p>Thanks,</p>
<p>You can use NumPy's <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.save.html" rel="nofollow">save()</a> and <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.load.html" rel="nofollow">load()</a>:</p> <pre><code>import numpy as np </code></pre> <p>Save:</p> <pre><code>np.save('my_array.npy', my_array) </code></pre> <p>Load again:</p> <pre><code>my_array = np.load('my_array.npy') </code></pre> <p>This is a simple and likely a pretty fast solution.</p> <p>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.savez.html#numpy.savez" rel="nofollow">savez()</a> if you have more than one array. Maybe compression with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.savez_compressed.html#numpy.savez_compressed" rel="nofollow">savez_compressed</a> can be useful. Just try if it works for you.</p>
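<p>A quick sketch of the multi-array variants (the array names here are made up):</p> <pre><code>np.savez('my_arrays.npz', first=a1, second=a2)   # or np.savez_compressed(...)
data = np.load('my_arrays.npz')
a1 = data['first']
</code></pre>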
python|list|csv|numpy
2
373,977
35,531,367
In pandas, how to get 2nd mode
<p>So I'm generating a <code>summary report</code> from a <code>data set</code>. I used <code>.describe()</code> to do the heavy work but it doesn't generate everything I need i.e. the second most common thing in the data set.</p> <p>I noticed that if I use <code>.mode()</code> it returns the most common value, is there an easy way to get the second most common?</p>
<pre><code>df['column'].value_counts() </code></pre> <p>What this does, according to the docs:</p> <blockquote> <p>The resulting object will be in descending order so that the first element is the most frequently-occurring element.</p> </blockquote>
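<p>Since the result is sorted by frequency, the second most common value is simply the second entry of the index:</p> <pre><code>df['column'].value_counts().index[1]   # second mode
</code></pre>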
python|pandas
5
373,978
35,744,140
Selecting rows by a list of values without using several ands
<p>I have a dataframe with columns <code>(a,b,c)</code>. I have a list of values <code>(x,y,z)</code> How can I select the rows containing exactly this three values, something like:</p> <pre><code>df = df[df[(a,b,c)] == (x,y,z)] </code></pre> <p>I know that</p> <pre><code>df = df[(df[a] == x) &amp; (df[b] == y) &amp; (df[c] == z)] </code></pre> <p>should work, but I'm looking for something more convenient. Does it exist ?</p>
<h1>Solution using Indexing</h1> <p>I would set the columns as the index and use the <code>.loc</code> function</p> <p>Indexing like this is the fastest way of accessing rows, while masking is very slow on larger datasets.</p> <pre><code>In [4]: df = pd.DataFrame({'a':[1,2,3,4,5], 'b':['a','b','c','d','e'], 'c':['z','x','y','v','u'], 'othervalue':range(100, 105)}) In [5]: df Out[5]: a b c othervalue 0 1 a z 100 1 2 b x 101 2 3 c y 102 3 4 d v 103 4 5 e u 104 In [6]: df.set_index(['a','b','c'], inplace=True) In [7]: df Out[7]: othervalue a b c 1 a z 100 2 b x 101 3 c y 102 4 d v 103 5 e u 104 In [8]: df.loc[[4,'d','v']] Out[8]: othervalue a b c 4 d v 103 </code></pre> <h3>Extra bonus</h3> <p>Also, if you just want to access a certain value of a certain column, you can extend the <code>.loc</code> function to access that certain column for you, like this:</p> <pre><code>In [9]: df.loc[[4,'d','v'], 'othervalue'] Out[9]: a b c 4 d v 103 Name: othervalue, dtype: int64 </code></pre>
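<p>If you would rather not re-index the frame, <code>DataFrame.query</code> is another compact option on the original (un-indexed) frame; a sketch, using <code>@</code> to reference Python variables:</p> <pre><code>x, y, z = 4, 'd', 'v'
df.query('a == @x and b == @y and c == @z')
</code></pre>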
python|pandas
2
373,979
35,439,723
Using pandas TimeStamp with scikit-learn
<p>sklearn classifiers accept pandas' <code>TimeStamp</code> (=<code>datetime64[ns]</code>) as a column in X, as long as <em>all</em> of X columns are of that type. But when there are both <code>TimeStamp</code> and <code>float</code> columns, sklearn refuses to work with TimeStamp.</p> <p>Is there any workaround besides converting TimeStamp into <code>int</code> using astype(<code>int</code>)? (I still need the original column to access <code>dt.year</code> etc., so ideally would prefer not to create a duplicate column just to provide a feature to sklearn.)</p> <pre><code>import pandas as pd from sklearn.linear_model import LinearRegression test = pd.date_range('20000101', periods = 100) test_df = pd.DataFrame({'date': test}) test_df['a'] = 1 test_df['y'] = 1 lr = LinearRegression() lr.fit(test_df[['date']], test_df['y']) # works fine lr.fit(test_df[['date', 'date']], test_df['y']) # works fine lr.fit(test_df[['date', 'a']], test_df['y']) # complains --------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-90-0605fa5bcdfa&gt; in &lt;module&gt;() ----&gt; 1 lr.fit(test_df[['date', 'a']], test_df['y']) /home/shoya/.pyenv/versions/3.5.0/envs/study-env/lib/python3.5/site-packages/sklearn/linear_model/base.py in fit(self, X, y, sample_weight) 434 n_jobs_ = self.n_jobs 435 X, y = check_X_y(X, y, accept_sparse=['csr', 'csc', 'coo'], --&gt; 436 y_numeric=True, multi_output=True) 437 438 if ((sample_weight is not None) and np.atleast_1d( /home/shoya/.pyenv/versions/3.5.0/envs/study-env/lib/python3.5/site-packages/sklearn/utils/validation.py in check_X_y(X, y, accept_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, ensure_min_features, y_numeric, warn_on_dtype, estimator) 521 X = check_array(X, accept_sparse, dtype, order, copy, force_all_finite, 522 ensure_2d, allow_nd, ensure_min_samples, --&gt; 523 ensure_min_features, warn_on_dtype, estimator) 524 if multi_output: 525 y = check_array(y, 'csr', force_all_finite=True, ensure_2d=False, /home/shoya/.pyenv/versions/3.5.0/envs/study-env/lib/python3.5/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator) 402 # make sure we acually converted to numeric: 403 if dtype_numeric and array.dtype.kind == "O": --&gt; 404 array = array.astype(np.float64) 405 if not allow_nd and array.ndim &gt;= 3: 406 raise ValueError("Found array with dim %d. %s expected &lt;= 2." TypeError: float() argument must be a string or a number, not 'Timestamp' </code></pre> <p>Apparently, when the dtypes are mixed, and therefore the ndarray has type <code>object</code>, sklearn attempts to convert them to <code>float</code>, which fails with <code>TimeStamp</code>. But when the dtypes are all <code>datetime64[ns]</code>, sklearn just leaves things unchanged.</p>
<p>You can translate it to a proper integer or float</p> <pre><code>test_df['date'] = test_df['date'].astype(int) </code></pre>
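<p>If you want to keep the original datetime column untouched for later <code>dt.year</code> access, the conversion can be done on a copy that is only used as the feature matrix; a sketch (<code>astype('int64')</code> yields nanoseconds since the epoch for datetime64[ns]):</p> <pre><code>X = test_df[['date', 'a']].copy()
X['date'] = X['date'].astype('int64')   # nanoseconds since epoch
lr.fit(X, test_df['y'])
</code></pre>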
python|python-3.x|datetime|pandas|scikit-learn
0
373,980
35,689,248
Train Tensorflow model without using command line
<p>I want to call <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/embedding/word2vec_optimized.py#L408" rel="nofollow">this <code>main(_)</code> function</a> from another Python script without spawning a new process (so that it's easier to debug). However, that function is written to work with command line arguments. What would be the cleanest way call that function directly from another function?</p>
<p>You can import <code>FLAGS</code> and then define the required args (train_data, eval_data, save_path).</p> <pre><code>In [13]: from tensorflow.models.embedding.word2vec_optimized import FLAGS In [14]: from tensorflow.models.embedding.word2vec_optimized import main In [16]: main(_) --train_data --eval_data and --save_path must be specified. An exception has occurred, use %tb to see the full traceback. In [17]: FLAGS.train_data = "this" In [18]: FLAGS.eval_data = "that" In [19]: FLAGS.save_path = "some_path" In [20]: main(_) I tensorflow/core/common_runtime/local_device.cc:40] Local device intra op parallelism threads: 8 </code></pre>
python|argparse|tensorflow|gflags
4
373,981
35,397,821
Neural Network cannot learn
<p>I am trying to implement a neural network with python and numpy. The problem is when I try to train my network the error stocks around 0.5. It cannot learn further. I tried learning rates 0.001 and 1. I guess I am doing something wrong during the back propagation. But I haven't been figured what is wrong. </p> <p>p.s. I was getting a lot of overflow problems then I started to use np.clip() method.</p> <p>Here is my back propagation code:</p> <pre><code># z2 is softmax output def calculateBackpropagation(self, z1, z2, y): delta3 = z2 delta3[range(self.numOfSamples), y] -= 1 dW2 = (np.transpose(z1)).dot(delta3) db2 = np.sum(delta3, axis=0, keepdims=True) delta2 = delta3.dot(np.transpose(self.W2)) * ActivationFunction.DRELU(z1) dW1 = np.dot(np.transpose(self.train_data), delta2) db1 = np.sum(delta2, axis=0) self.W1 += -self.alpha * dW1 self.b1 += -self.alpha * db1 self.W2 += -self.alpha * dW2 self.b2 += -self.alpha * db2 # RELU can be approximated with soft max function # so the derivative of this function is g(x) = log(1+exp(x)) # Source: https://imiloainf.wordpress.com/2013/11/06/rectifier-nonlinearities/ @staticmethod def DRELU(x): x = np.clip( x, -500, 500 ) return np.log(1 + np.exp(x)) def softmax(self, x): """Compute softmax values for each sets of scores in x.""" x = np.clip( x, -500, 500 ) e = np.exp(x) return e / np.sum(e, axis=1, keepdims=True) def train(self): X = self.train_data Y = self.train_labels (row, col) = np.shape(self.train_data) for i in xrange(self.ephocs): [p1, z1, p2, z2] = self.feedForward(X) probs = z2 self.backPropagate(X, Y, z1, probs) self.learning_rate = self.learning_rate * (self.learning_rate / (self.learning_rate + (self.learning_rate * self.rate_decay))) def softmax(self, x): """Compute softmax values for each sets of scores in x.""" x = np.clip( x, -500, 500 ) e = np.exp(x) return e / np.sum(e, axis=1, keepdims=True) def feedForward(self, X): p1 = X.dot(self.W1) + self.b1 z1 = self.neuron(p1) p2 = z1.dot(self.W2) + self.b2 # z2 = self.neuron(p2) z2 = self.softmax(p2) return [p1, z1, p2, z2] def predict(self, X): [p1, z1, p2, z2] = self.feedForward(X) return np.argmax(z2, axis=1) # Calculates the cross-entropy loss # P.S. In some cases true distribution is unknown so cross-entropy cannot be directly calculated. # hence, I will use the cross entropy estimation formula on wikipedia # https://en.wikipedia.org/wiki/Cross_entropy def calculateLoss(self, x): [p1, z1, p2, z2] = self.feedForward(x) softmax_probs = self.softmax(p2) # Calculates the estimated loss based on wiki return np.sum(-np.log(softmax_probs[range(self.numOfSamples), self.train_labels])) def neuron(self, p): return ActivationFunction.RELU(p) def CreateRandomW(self, row, col): return np.random.uniform(low=-1.0, high=1.0, size=(row, col)) def normalizeData(self, rawpoints, high=255.0, low=0.0): return (rawpoints/128.0) - 1 @staticmethod def RELU(x): # x = np.clip( x, -1, 1 ) x = np.clip( x, -500, 500 ) return np.maximum(0.001, x) # RELU can be approximated with soft max function # so the derivative of this function is g(x) = log(1+exp(x)) # Source: https://imiloainf.wordpress.com/2013/11/06/rectifier-nonlinearities/ @staticmethod def DRELU(x): x = np.clip( x, -500, 500 ) return np.log(1 + np.exp(x)) </code></pre>
<p>Here are some issues I found:</p> <ol> <li>The array slicing softmax_probs[r, y] in calculateLoss() is incorrect. This slicing produces a 10000x10000 matrix and slows down the code.</li> <li>Similarly, the slicing of delta3[r, y] in backPropagate() is incorrect.</li> <li>As of yet, I'm unsure if backprop is done correctly (didn't check), however, clipping the gradients to (-5, 5), I was able to get just below 70% training and testing accuracy after 100 iterations, </li> </ol> <p>I used the output of the following utility function (one-to-K encoding of the labels) to fix both issues 1 and 2.</p> <pre><code>def one_to_two_encoding(y): v = np.array([[1, 0] if y[i] == 0 else [0, 1] for i in range(len(y))]) return v </code></pre> <p>I applied gradient clipping in backPropagte() right after each gradient is computed. For example,</p> <pre><code>delta3 = delta3.clip(-5, 5) </code></pre>
python|python-2.7|numpy|neural-network|softmax
0
373,982
11,460,806
Multiply a 1d array x 2d array python
<p>I have a 2d array and a 1d array and I need to multiply each element in the 1d array x each element in the 2d array columns. It's basically a matrix multiplication but numpy won't allow matrix multiplication because of the 1d array. This is because matrices are inherently 2d in numpy. How can I get around this problem? This is an example of what I want:</p> <pre><code>FrMtx = np.zeros(shape=(24,24)) #2d array elem = np.zeros(24, dtype=float) #1d array Result = np.zeros(shape=(24,24), dtype=float) #2d array to store results some_loop to increment i: some_other_loop to increment j: Result[i][j] = (FrMtx[i][j] x elem[j]) </code></pre> <p>Numerous efforts have given me errors such as <code>arrays used as indices must be of integer or boolean type</code></p>
<p>Due to the NumPy broadcasting rules, a simple</p> <pre class="lang-py prettyprint-override"><code>Result = FrMtx * elem </code></pre> <p>Will give the desired result.</p>
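<p>A tiny sketch of what the broadcasting does, scaling column j by elem[j]:</p> <pre><code>import numpy as np

FrMtx = np.ones((24, 24))
elem = np.arange(24, dtype=float)
Result = FrMtx * elem   # elem is stretched across the rows: Result[i][j] == FrMtx[i][j] * elem[j]
</code></pre>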
python|numpy
4
373,983
12,044,043
Python: py2app "ImportError: dlopen(): Library not loaded"
<p>I've written a python script that does some work with numpy and scikit's audiolab. I want to create a standalone app using py2app but I keep getting the same error no matter which OS X computer I test it on. </p> <pre><code>ImportError: dlopen(/Users/transfer15/Desktop/app/dist/PCMAlign/app/Contents/Resources/lib/python2.7/numpy/linalg/lapack_lite.so, 2): Library not loaded: @rpath/libmkl_intel_lp64.dylib Referenced from: /Users/transfer15/Desktop/app/dist/PCMAlign/app/Contents/Resources/lib/python2.7/numpy/linalg/lapack_lite.so Reason: image not found </code></pre> <p>This is somewhat strange to me because if I follow the filepath I can see <code>lapack_lite.so</code> in the correct folder. </p> <p>Is there any fix for this? Or, is there any way to exclude this library, since I'm not using linear algebra (pretty much just using the numpy arrays) so as to avoid this error?</p> <p>Thanks!</p>
<p>Having encountered the same problem recently (Python 2.7, trying to import numpy version 1.11), downgrading the version of numpy cleared up the error.</p> <p>If you used pip to install numpy, you can downgrade with: <code>pip install 'numpy&lt;1.7'</code>. It is possible that a higher version may work out for you.</p>
python|macos|numpy|importerror|py2app
0
373,984
11,889,537
Arithmetic Operations with Nested Lists in Python
<p>I am attempting to subtract values in a nested list (a list of historical stock price data from Yahoo finance) and I have been running into problems. I am attempting simple subtraction (i.e. high - low), but I am unable to implement this. I am probably missing something fundamental on the nature of lists, but I am stumped.</p> <p>An example of the nested list I am using:</p> <pre><code>[['2012-07-31', '16.00', '16.06', '15.81', '15.84', '13753800', '15.8'], ['2012-07-30', '16.15', '16.15', '15.90', '15.98', '10187600', '15.9'], ['2012-07-27', '15.88', '16.17', '15.84', '16.11', '14220800', '16.1'], ['2012-07-26', '15.69', '15.88', '15.62', '15.80', '11033300', '15.8'], ['2012-07-25', '15.52', '15.64', '15.40', '15.50', '15092000', '15.5'], ['2012-07-24', '15.74', '15.76', '15.23', '15.43', '19733400', '15.4'], ['2012-07-23', '15.70', '15.81', '15.59', '15.76', '14825800', '15.7'], ['2012-07-20', '15.75', '15.94', '15.68', '15.92', '16919700', '15.9'], ['2012-07-19', '15.71', '15.86', '15.64', '15.73', '15985300', '15.7'], ...] </code></pre> <p>I want to subtract the 4th 'column' from the third 'column' and populate another list with the results (order IS important.) What is the best way to implement this?</p>
<p>You can use a list comprehension:</p> <pre><code>from decimal import Decimal result = [(row[0], Decimal(row[2]) - Decimal(row[3])) for row in data] </code></pre>
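<p>Since the question is tagged numpy, the same thing can also be done column-wise on an array; a sketch, where <code>data</code> is the nested list from the question:</p> <pre><code>import numpy as np

arr = np.array(data)                                        # strings, since the rows are mixed-type
diff = arr[:, 2].astype(float) - arr[:, 3].astype(float)    # high - low, row order preserved
result = list(zip(arr[:, 0], diff))                         # (date, difference) pairs
</code></pre>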
python|numpy|python-2.7|nested-lists
3
373,985
11,679,716
Comparing DateOffsets in pandas
<p>Is there a way to compare the size of two <code>DateOffset</code> objects?</p> <pre><code>&gt;&gt;&gt; from pandas.core.datetools import * &gt;&gt;&gt; Hour(24) &gt; Minute(5) False </code></pre> <p>This works with <code>timedelta</code>, so I assumed that pandas would inherit that behavior - or is the time system made from scratch?</p>
<p>pandas DateOffsets does not inherit from timedelta. It's possible for some DateOffsets to be compared, but for offsets like MonthEnd, MonthStart, etc, the span of time to the next offset is non-uniform and depends on the starting date.</p> <p>Please feel free to start a github issue on this at <a href="https://github.com/pydata/pandas" rel="nofollow">https://github.com/pydata/pandas</a>, we can continue the discussion there and it'll serve as a reminder.</p> <p>Thanks.</p>
python|datetime|pandas
1
373,986
11,800,544
Why does my python process use up so much memory?
<p>I'm working on a project that involves using python to read, process and write files that are sometimes as large as a few hundred megabytes. The program fails occasionally when I try to process some particularly large files. It does not say 'memory error', but I suspect that is the problem (in fact it gives no reason at all for failing'). </p> <p>I've been testing the code on smaller files and watching 'top' to see what memory usage is like, and it typically reaches 60%. top says that I have 4050352k total memory, so 3.8Gb.</p> <p>Meanwhile I'm trying to track memory usage within python itself (see my question from <a href="https://stackoverflow.com/questions/11784329/python-memory-usage-of-numpy-arrays">yesterday</a>) with the following little bit of code:</p> <pre><code>mem = 0 for variable in dir(): variable_ = vars()[variable] try: if str(type(variable_))[7:12] == 'numpy': numpy_ = True else: numpy_ = False except: numpy_ = False if numpy_: mem_ = variable_.nbytes else: mem_ = sys.getsizeof(variable) mem += mem_ print variable+ type: '+str(type(variable_))+' size: '+str(mem_) print 'Total: '+str(mem) </code></pre> <p>Before I run that block I set all the variables I don't need to None, close all files and figures, etc etc. After that block I use subprocess.call() to run a fortran program that is required for the next stage of processing. Looking at top while the fortran program is running shows that the fortran program is using ~100% of the cpu, and ~5% of the memory, and that python is using 0% of cpu and 53% of memory. However my little snippet of code tells me that all of the variables in python add up to only 23Mb, which ought to be ~0.5%.</p> <p>So what's happening? I wouldn't expect that little snippet to give me a spot on memory usage, but it ought to be accurate to within a few Mb surely? Or is it just that top doesn't notice the memory has been relinquished, but that it is available to other programs that need it if necessary? </p> <p>As requested, here's a simplified part of the code that is using up all the memory (file_name.cub is an ISIS3 cube, it's a file that contains 5 layers (bands) of the same map, the first layer is spectral radiance, the next 4 have to do with latitude, longitude, and other details. It's an image from Mars that I'm trying to process. StartByte is a value I previously read from the .cub file's ascii header telling me the beginning byte of the data, Samples and Lines are the dimensions of the map, also read from the header.):</p> <pre><code>latitude_array = 'cheese' # It'll make sense in a moment f_to = open('To_file.dat','w') f_rad = open('file_name.cub', 'rb') f_rad.seek(0) header=struct.unpack('%dc' % (StartByte-1), f_rad.read(StartByte-1)) header = None # f_lat = open('file_name.cub', 'rb') f_lat.seek(0) header=struct.unpack('%dc' % (StartByte-1), f_lat.read(StartByte-1)) header = None pre=struct.unpack('%df' % (Samples*Lines), f_lat.read(Samples*Lines*4)) pre = None # f_lon = open('file_name.cub', 'rb') f_lon.seek(0) header=struct.unpack('%dc' % (StartByte-1), f_lon.read(StartByte-1)) header = None pre=struct.unpack('%df' % (Samples*Lines*2), f_lon.read(Samples*Lines*2*4)) pre = None # (And something similar for the other two bands) # So header and pre are just to get to the right part of the file, and are # then set to None. I did try using seek(), but it didn't work for some # reason, and I ended up with this technique. 
for line in range(Lines): sample_rad = struct.unpack('%df' % (Samples), f_rad.read(Samples*4)) sample_rad = np.array(sample_rad) sample_rad[sample_rad&lt;-3.40282265e+38] = np.nan # And Similar lines for all bands # Then some arithmetic operations on some of the arrays i = 0 for value in sample_rad: nextline = sample_lat[i]+', '+sample_lon[i]+', '+value # And other stuff f_to.write(nextline) i += 1 if radiance_array == 'cheese': # I'd love to know a better way to do this! radiance_array = sample_rad.reshape(len(sample_rad),1) else: radiance_array = np.append(radiance_array, sample_rad.reshape(len(sample_rad),1), axis=1) # And again, similar operations on all arrays. I end up with 5 output arrays # with dimensions ~830*4000. For the large files they can reach ~830x20000 f_rad.close() f_lat.close() f_to.close() # etc etc sample_lat = None # etc etc sample_rad = None # etc etc # plt.figure() plt.imshow(radiance_array) # I plot all the arrays, for diagnostic reasons plt.show() plt.close() radiance_array = None # etc etc # I set all arrays apart from one (which I need to identify the # locations of nan in future) to None # LOCATION OF MEMORY USAGE MONITOR SNIPPET FROM ABOVE </code></pre> <p>So I lied in the comments about opening several files, it's many instances of the same file. I only continue with one array that isn't set to None, and it's size is ~830x4000, though this somehow constitutes 50% of my available memory. I've also tried gc.collect, but no change. I'd be very happy to hear any advice on how I could improve on any of that code (related to this problem or otherwise).</p> <p>Perhaps I should mention: originally I was opening the files in full (i.e. not line by line as above), doing it line by line was an initial attempt to save memory.</p>
<p>Just because you've deferenced your variables doesn't mean the Python process has given the allocated memory back to the system. See <a href="https://stackoverflow.com/q/1316767/3924118">How can I explicitly free memory in Python?</a>.</p> <p>If <code>gc.collect()</code> does not work for you, investigate forking and reading/writing your files in child processes using IPC. Those processes will end when they're finished and release the memory back to the system. Your main process will continue to run with low memory usage.</p>
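<p>A minimal sketch of that child-process idea using the standard library (the function name is made up):</p> <pre><code>from multiprocessing import Process

def process_one_file(path):
    # do the heavy numpy work here; everything the child
    # allocates is returned to the OS when it exits
    pass

p = Process(target=process_one_file, args=('file_name.cub',))
p.start()
p.join()   # the parent's memory footprint stays small
</code></pre>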
python|optimization|memory|numpy
15
373,987
11,686,720
Is there a numpy builtin to reject outliers from a list
<p>Is there a numpy builtin to do something like the following? That is, take a list <code>d</code> and return a list <code>filtered_d</code> with any outlying elements removed based on some assumed distribution of the points in <code>d</code>.</p> <pre><code>import numpy as np def reject_outliers(data): m = 2 u = np.mean(data) s = np.std(data) filtered = [e for e in data if (u - 2 * s &lt; e &lt; u + 2 * s)] return filtered &gt;&gt;&gt; d = [2,4,5,1,6,5,40] &gt;&gt;&gt; filtered_d = reject_outliers(d) &gt;&gt;&gt; print filtered_d [2,4,5,1,6,5] </code></pre> <p>I say 'something like' because the function might allow for varying distributions (poisson, gaussian, etc.) and varying outlier thresholds within those distributions (like the <code>m</code> I've used here).</p>
<p>Something important when dealing with outliers is that one should try to use estimators as robust as possible. The mean of a distribution will be biased by outliers but e.g. the median will be much less.</p> <p>Building on eumiro's answer:</p> <pre><code>def reject_outliers(data, m = 2.): d = np.abs(data - np.median(data)) mdev = np.median(d) s = d/mdev if mdev else 0. return data[s&lt;m] </code></pre> <p>Here I have replace the mean with the more robust median and the standard deviation with the median absolute distance to the median. I then scaled the distances by their (again) median value so that <code>m</code> is on a reasonable relative scale.</p> <p>Note that for the <code>data[s&lt;m]</code> syntax to work, <code>data</code> must be a numpy array.</p>
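<p>For the list from the question, this behaves as expected once the data is an array (with a looser threshold here, since the toy data set is tiny):</p> <pre><code>d = np.array([2, 4, 5, 1, 6, 5, 40])
reject_outliers(d, m=5.)   # -&gt; array([2, 4, 5, 1, 6, 5])
</code></pre>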
python|numpy
219
373,988
28,765,696
How do I get a text output from a string created from an array to remain unshortened?
<p>Python/Numpy Problem. Final year Physics undergrad... I have a small piece of code that creates an array (essentially an n×n matrix) from a formula. I reshape the array to a single column of values, create a string from that, format it to remove extraneous brackets etc, then output the result to a text file saved in the user's Documents directory, which is then used by another piece of software. The trouble is above a certain value for "n" the output gives me only the first and last three values, with "...," in between. I think that Python is automatically abridging the final result to save time and resources, but I need all those values in the final text file, regardless of how long it takes to process, and I can't for the life of me find how to stop it doing it. Relevant code copied beneath...</p> <pre><code>import numpy as np; import os.path ; import os

''' Create a single column matrix in text format from Gaussian Eqn. '''
save_path = os.path.join(os.path.expandvars("%userprofile%"),"Documents")
name_of_file = 'outputfile'   #&lt;---- change this as required.
completeName = os.path.join(save_path, name_of_file+".txt")

matsize = 32

def gaussf(x,y):   #defining gaussian but can be any f(x,y)
    pisig = 1/(np.sqrt(2*np.pi) * matsize)   #first term
    sumxy = (-(x**2 + y**2))   #sum of squares term
    expden = (2 * (matsize/1.0)**2)   # 2 sigma squared
    expn = pisig * np.exp(sumxy/expden)   # and put it all together
    return expn

matrix = [[ gaussf(x,y) ]\
          for x in range(-matsize/2, matsize/2)\
          for y in range(-matsize/2, matsize/2)]

zmatrix = np.reshape(matrix, (matsize*matsize, 1))   # single column
string2 = (str(zmatrix).replace('[','').replace(']','').replace(' ', ''))

zbfile = open(completeName, "w")
zbfile.write(string2)
zbfile.close()

print completeName

num_lines = sum(1 for line in open(completeName))
print num_lines
</code></pre> <p>Any help would be greatly appreciated!</p>
<p>Generally you should iterate over the array/list if you just want to write the contents.</p> <pre><code>zmatrix = np.reshape(matrix, (matsize*matsize, 1)) with open(completeName, &quot;w&quot;) as zbfile: # with closes your files automatically for row in zmatrix: zbfile.writelines(map(str, row)) zbfile.write(&quot;\n&quot;) </code></pre> <p>Output:</p> <pre><code>0.00970926751178 0.00985735189176 0.00999792646484 0.0101306077521 0.0102550302672 0.0103708481917 0.010477736974 0.010575394844 0.0106635442315 ......................... </code></pre> <p>But using <code>numpy</code> we simply need to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tofile.html" rel="nofollow noreferrer">tofile</a>:</p> <pre><code>zmatrix = np.reshape(matrix, (matsize*matsize, 1)) # pass sep or you will get binary output zmatrix.tofile(completeName,sep=&quot;\n&quot;) </code></pre> <p>Output is in the same format as above.</p> <p>Calling <code>str</code> on the matrix will give you similarly formatted output to what you get when you try to <code>print</code> so that is what you are writing to the file the formatted truncated output.</p> <p>Considering you are using python2, using <code>xrange</code> would be more efficient that using rane which creates a list, also having multiple imports separated by colons is not recommended, you can simply:</p> <p><code>import numpy as np, os.path, os</code></p> <p>Also variables and function names should use underscores <code>z_matrix</code>,<code>zb_file</code>,<code>complete_name</code> etc..</p>
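<p><code>np.savetxt</code> is another standard option if you also want control over the numeric format:</p> <pre><code>np.savetxt(completeName, zmatrix, fmt='%.12g')
</code></pre>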
python|arrays|string|numpy|text
1
373,989
28,569,548
Point Python Launcher to Anaconda Installation
<p>I am using Python 3.4 on Windows 7.</p> <p>I would like to run a .py and .pyc file from the command line using my Anaconda python3 installation. </p> <p>I also have a default python installation which comes bundled with the "Python Launcher" per <a href="https://www.python.org/dev/peps/pep-0397/" rel="nofollow">PEP 397</a>. </p> <p>When I double-click on my python file, it launches with the standard Python interpreter rather than Anaconda. Of note is that my environment variables seem to be fine. Typing "python" into the command line yields an Anaconda Python prompt.</p>
<p>If you have created an environment through conda, you should first activate that environment before running the script.</p> <pre><code>activate envname python scriptname.py </code></pre>
python|numpy|pandas|anaconda
1
373,990
28,641,542
export pandas dataframe object to the console
<p>Is there a way to export/print/IO a pandas dataframe object to either the python console or ipython notebook output? </p> <p>It would be nice if there is some IO mechanism that lets you quickly export a dataframe object so it can be copied to the clipboard and then pasted in another window. For example, if I'm trying to work through a problem with someone on Stackoverflow and want to quickly reproduce the dataframe for them it would be nice if you could quickly export/import it through copy/paste actions?</p> <p>I've read through the IO documentation but not sure if there is anything like what I describe.</p> <p><a href="http://pandas.pydata.org/pandas-docs/dev/io.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/dev/io.html</a></p> <p><strong>Update 2:</strong></p> <p>Try the following with the dataframe below:</p> <p>1) Copy the dataframe and directly insert it into ipython WITHOUT using read_clipboad(). Call the dataframe df.</p> <p>2) Now copy df to clipboard by df.to_clipboard()</p> <p>3) Control P to paste in a text editor such as Notepad/Notepad++/SublimeText2</p> <p>4) Select what was pasted in #3 and copy to clip board using Control C</p> <p>5) Go back to ipython console and type in df2 = pd.read_clipboard()</p> <p>Inspect df2 and notice that it is not the same as df. The data is misaligned and corrupt.</p> <pre><code>df = pd.DataFrame({ 'BlahBlah0' : ['','','',''], 'BlahBlah1' : ['','','',''], 'BlahBlah2' : ['','','',''], 'BlahBlah3' : ['','','',''], 'BlahBlah4' : ['','','',''], 'BlahBlah5' : ['A','C','E','G'], 'BlahBlah6' : ['B','D','F','H'], 'BlahBlah7' : ['','','',''], 'BlahBlah8' : ['','','',''], 'BlahBlah9' : ['','','',''], 'BlahBlah10' : ['','','',''], 'BlahBlah11' : ['','','',''], 'Size1':[1,1,1,1], 'Price1':[50,50,50,50], 'Size2':[2,2,2,2], 'Price2':[75,75,75,75], 'Size3':[4,4,4,4], 'Price3':[100,100,100,100], 'Size4':[8,8,8,8], 'Price4':[125,125,125,125], 'Size5':[10,10,10,10], 'Price5':[200,200,200,200], 'Size6':[5,5,5,5], 'Price6':[250,250,250,250], 'Size7':[10,10,10,10], 'Price7':[300,300,300,300] },columns=['BlahBlah0', 'BlahBlah1', 'BlahBlah2', 'BlahBlah3', 'BlahBlah4', 'BlahBlah5', 'BlahBlah6', 'BlahBlah7', 'BlahBlah8', 'BlahBlah9', 'BlahBlah10', 'BlahBlah11', 'Size1', 'Price1', 'Size2', 'Price2', 'Size3', 'Price3', 'Size4', 'Price4', 'Size5', 'Price5', 'Size6', 'Price6', 'Size7', 'Price7'] ) </code></pre>
<p>see docs here: <a href="http://pandas.pydata.org/pandas-docs/dev/io.html#io-clipboard" rel="nofollow">http://pandas.pydata.org/pandas-docs/dev/io.html#io-clipboard</a></p> <p>df.to_clipboard() exports to the clipboard. pd.read_clipboard() is the reverse.</p>
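<p>For the round-trip problem described in Update 2, an explicit delimiter tends to survive the empty-string columns better; a sketch (both functions accept a <code>sep</code> argument):</p> <pre><code>df.to_clipboard(sep=',')
df2 = pd.read_clipboard(sep=',')
</code></pre>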
pandas|io|dataframe
1
373,991
28,744,190
Pandas ExcelFile.parse has NaNs in index when index_col is specified
<p>I have an excel file that I am reading into a pandas DataFrame that has the header on row 1 (python index) and a blank row between the header and the data. When I specify index_col it treats the blank row as part of the index as a NaN. What is the best way to avoid this behavior?</p> <p>Test file:</p> <pre><code>idx value a 1 </code></pre> <p>Without specifying index_col:</p> <pre><code>print xs.parse(header = 1) idx value 0 NaN NaN 1 a 1 print xs.parse(header = 1).index Int64Index([0, 1], dtype='int64') </code></pre> <p>Specifying index col:</p> <pre><code>print xs.parse(header = 1, index_col = 0) value idx NaN NaN a 1 print xs.parse(header = 1, index_col = 0).index Index([nan, u'a'], dtype='object') </code></pre>
<p>You can pass <code>skiprows=[1]</code> to skip the blank line, I tested this on a dummy xl sheet, see <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.ExcelFile.parse.html#pandas.ExcelFile.parse" rel="nofollow"><code>ExcelFile.parse</code></a>:</p> <pre><code>In [44]: xs = pd.ExcelFile(r'c:\data\book1.xls') xs.parse(skiprows=[1]) Out[44]: idx value 0 12 NaN 1 2 NaN 2 1 NaN </code></pre> <p>compare with:</p> <pre><code>In [45]: xs = pd.ExcelFile(r'c:\data\book1.xls') xs.parse() Out[45]: idx value 0 NaN NaN 1 12 NaN 2 2 NaN 3 1 NaN In [47]: xs = pd.ExcelFile(r'c:\data\book1.xls') xs.parse(skiprows=[1], header=0) Out[47]: idx value 0 12 NaN 1 2 NaN 2 1 NaN In [49]: xs = pd.ExcelFile(r'c:\data\book1.xls') xs.parse(skiprows=[1], header=0, index_col=0) Out[49]: value idx 12 NaN 2 NaN 1 NaN In [50]: xs = pd.ExcelFile(r'c:\data\book1.xls') xs.parse(header=0, index_col=0) Out[50]: value idx NaN NaN 12 NaN 2 NaN 1 NaN </code></pre>
pandas
1
373,992
28,562,997
Refactoring a bad 'code smell' in multiplication of columns in a dataframe
<p>I have another question about efficiency. I have the following kind of multiplication:</p> <pre><code>df['Allocated'] = df['Base Days'] * df['Base (MW) Allocated'] * 24
df['Bought'] = df['Base Days'] * df['Base (MW) Bought'] * 24
df['Sold'] = df['Base Days'] * df['Base (MW) Bought'] * 24
df['Remaining'] = df['Base Days'] * df['Base (MW) Remaining'] * 24
</code></pre> <p>I was thinking of using a for loop - but does anyone have a more efficient way in terms of typing? Or is something like this already the most efficient way? I could define a function:</p> <pre><code>def multiply(col):
    return df[col] * df['Base Days'] * 24
</code></pre> <p>That might make the code more reusable. Does anyone have any other ideas? It just feels like a bad code smell, and I'd like some advice on how I could improve it.</p>
<p>There's no inherent 'bad code smell' about this sort of thing. For example, suppose that in the future one of the columns, say <code>'Base (MW) Bought'</code>, will need to be treated differently than the others. In that case, it would actually be a virtue that each different multiplication step was handled explicitly, rather than implicitly inside a function call or as an iteration of a loop.</p> <p>However, I can appreciate the fact that it seems like there are wasted characters, since you're repeating the logic of <code>*24</code> and the access to <code>'Base Days'</code>.</p> <p>What I might do is create some options for future extension or ways to add flexibility later, but still more or less use your <code>multiply</code> idea:</p> <pre><code>def baseMultiply(data, colToMul, baseCol='Base Days', convFactor=24, preTreat=None):
    col_data = data[colToMul]
    if preTreat is not None:
        col_data = preTreat(col_data)
    return data[baseCol] * col_data * convFactor
</code></pre> <p>Then perform the multiplication and assignment in a loop:</p> <pre><code>col_prefix = "Base (MW) "
for col in ['Allocated', 'Bought', 'Sold', 'Remaining']:
    df[col] = baseMultiply(df, col_prefix + col)
</code></pre> <p>Then say later you want to remove outliers but only in the case when you report the number for <code>'Bought'</code> (that's unrealistic, but it's just an example). You could write a helper function like:</p> <pre><code>def removeOutlier(lowerBound, upperBound, colData):
    return colData.clip(lowerBound, upperBound)
</code></pre> <p>And we can use <code>functools.partial</code> to bind some arguments to this guy.</p> <pre><code>import functools
boughtColClipper = functools.partial(removeOutlier, 5, 105)
</code></pre> <p>Now we can modify the loop above to check when we are at the <code>'Bought'</code> column, and to give this function as the keyword argument <code>preTreat</code> in that case:</p> <pre><code>col_prefix = "Base (MW) "
for col in ['Allocated', 'Bought', 'Sold', 'Remaining']:
    treatment = None if col != 'Bought' else boughtColClipper
    df[col] = baseMultiply(df, col_prefix + col, preTreat=treatment)
</code></pre> <p>Now it's at least somewhat more extensible to permit later data cleaning, outlier clipping, winsorization, variable z-scoring or whatever, which are the usual things that come up later and require painfully going back and breaking earlier code.</p> <p>A final trick that I often use is a name-mapping. Instead of iterating over a <code>list</code> of the column names like I have done above (which implicitly assumes that the assignment names will be the same or derived directly from the existing names), you can give a dict mapping.</p> <p>For example, in the original post, you were assigning into the new name "Sold" but on the right-hand-side it was calculated from "Base (MW) Bought" and <strong>not</strong> from "Base (MW) Sold".
I assumed this was a typo and so I used "Base (MW) Sold" in my code.</p> <p>But let's suppose it was not a typo and that two different "output names" (both "Bought" and "Sold") come from one input name ("Base (MW) Bought").</p> <pre><code>namesToAssign = {"Allocated": "Allocated",
                 "Bought": "Bought",
                 "Sold": "Bought",
                 "Remaining": "Remaining"}

col_prefix = "Base (MW) "
for newCol, oldCol in namesToAssign.iteritems():
    treatment = None if newCol != 'Bought' else boughtColClipper
    df[newCol] = baseMultiply(df, col_prefix + oldCol, preTreat=treatment)
</code></pre> <p>You can even go one step further and have a mapping from the existing columns to <em>both</em> the new output column and also the pre-treatment functions, such as:</p> <pre><code>namesAndTreatments = {"Allocated": ("Allocated", None),
                      "Bought": ("Bought", boughtColClipper),
                      "Sold": ("Bought", None),
                      "Remaining": ("Remaining", None)}

col_prefix = "Base (MW) "
for newCol, (oldCol, treatment) in namesAndTreatments.iteritems():
    df[newCol] = baseMultiply(df, col_prefix + oldCol, preTreat=treatment)
</code></pre> <p>and even this could be further extended so that the values inside of <code>namesAndTreatments</code> each contain extra arguments, logging handlers, database connections for fallback data if the data is bad, etc., etc. At that point, you'd want to refactor whatever <code>namesAndTreatments</code> is to be its own class of some sort, and to make functions like <code>baseMultiply</code> work by unpacking the member data attributes of that class (it will help with compartmentalization and testing, whereas a <code>dict</code> that just grows and grows in its responsibility will be hard to maintain).</p>
python|pandas
2
373,993
28,656,736
Using Scikit's LabelEncoder correctly across multiple programs
<p>The basic task that I have at hand is:</p> <p>a) Read some tab separated data.</p> <p>b) Do some basic preprocessing.</p> <p>c) For each categorical column use <code>LabelEncoder</code> to create a mapping. This is done somewhat like this:</p> <pre><code>mapper = {}

# Converting categorical data
for x in categorical_list:
    mapper[x] = preprocessing.LabelEncoder()

for x in categorical_list:
    df[x] = mapper[x].fit_transform(df[x])
</code></pre> <p>where <code>df</code> is a pandas dataframe and <code>categorical_list</code> is a list of column headers that need to be transformed.</p> <p>d) Train a classifier and save it to disk using <code>pickle</code>.</p> <p>e) Now in a different program, the saved model is loaded.</p> <p>f) The test data is loaded and the same preprocessing is performed.</p> <p>g) The <code>LabelEncoder</code>s are used for converting categorical data.</p> <p>h) The model is used to predict.</p> <p>Now the question that I have is: will step <code>g)</code> work correctly?</p> <p>As the documentation for <code>LabelEncoder</code> says:</p> <pre><code>It can also be used to transform non-numerical labels (as long as they are hashable and comparable) to numerical labels.
</code></pre> <p>So will each entry hash to the exact same value every time?</p> <p>If not, what is a good way to go about this? Is there any way to retrieve the mappings of the encoder? Or an altogether different way from LabelEncoder?</p>
<p>According to the <a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/preprocessing/label.py#L55" rel="noreferrer"><code>LabelEncoder</code></a> implementation, the pipeline you've described will work correctly if and only if you <code>fit</code> LabelEncoders at test time with data that have exactly the same set of unique values.</p> <p>There's a somewhat hacky way to reuse LabelEncoders you got during training. <code>LabelEncoder</code> has only one property, namely, <code>classes_</code>. You can save it, and then restore it like this:</p> <p>Train:</p> <pre><code>encoder = LabelEncoder()
encoder.fit(X)
numpy.save('classes.npy', encoder.classes_)
</code></pre> <p>Test:</p> <pre><code>encoder = LabelEncoder()
encoder.classes_ = numpy.load('classes.npy')
# Now you should be able to use encoder
# as you would do after `fit`
</code></pre> <p>This seems more efficient than refitting it using the same data.</p>
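<p>Alternatively, since fitted LabelEncoder objects are picklable, a simpler route (a sketch, assuming the <code>mapper</code> dict and <code>categorical_list</code> from the question exist in both programs) is to pickle the whole <code>mapper</code> next to the classifier and call <code>transform</code>, not <code>fit_transform</code>, at test time:</p> <pre><code>import pickle

# training program
with open('mapper.pkl', 'wb') as f:
    pickle.dump(mapper, f)

# prediction program
with open('mapper.pkl', 'rb') as f:
    mapper = pickle.load(f)

for x in categorical_list:
    df[x] = mapper[x].transform(df[x])  # reuses the train-time mapping; raises on unseen labels
</code></pre>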
python|pandas|scikit-learn
60
373,994
28,585,367
Python Pandas: How can I determine the distribution of my dataset?
<p>This is my dataset with two columns, NS and count.</p> <pre><code>                                                 NS  count
0                                  ns18.dnsdhs.com.   1494
1                                   ns0.relaix.net.   1835
2                            ns2.techlineindia.com.    383
3                              ns2.microwebsys.com.   1263
4  ns2.holy-grail-body-transformation-program.com.      1
5                                  ns2.chavano.com.      1
6                                   ns1.x10host.ml.     17
7                                 ns1.amwebaz.info.     48
8                     ns2.guacirachocolates.com.br.      1
9                           ns1.clicktodollars.com.      2
</code></pre> <p>Now I would like to see how many NSs have the same count by plotting it. My own guess is that I can use a histogram to see that, but I am not sure how. Can anyone help?</p>
<p>From your comment, I'm guessing your data table is actually much longer, and you want to see the distribution of name server <code>counts</code> (whatever count is here).</p> <p>I think you should just be able to do this:</p> <pre><code>df.hist(column="count")
</code></pre> <p>And you'll get what you want, if that is what you want.</p> <p>pandas has decent documentation for all of its functions, and histograms are described <a href="http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.hist.html" rel="noreferrer">here</a>.</p> <p>If you actually want to see "how many have the same count", rather than a representation of the distribution, then you'll either need to set the <code>bins</code> kwarg to be <code>df["count"].max()-df["count"].min()</code> - or do as you said and count the number of times you get each <code>count</code> and then create a bar chart.</p> <p>Maybe something like:</p> <pre><code>from collections import Counter

counts = Counter()
for count in df["count"]:
    counts[count] += 1
print counts
</code></pre> <p>An alternative, and cleaner approach, which I completely missed and wwii pointed out below, is just to use the standard constructor of <code>Counter</code>:</p> <pre><code>count_counter = Counter(df['count'])
</code></pre>
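<p>Just as a side note (my suggestion, not something the answer above relies on): if it is literally the count-of-counts you are after, pandas has a built-in for the <code>Counter</code> step, and the result can be plotted directly:</p> <pre><code>same_count = df['count'].value_counts()  # index: a count value; values: how many NSs share it
same_count.plot(kind='bar')              # bar chart of the result
</code></pre>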
python|pandas|plot|histogram
7
373,995
28,651,079
Pandas unstack problems: ValueError: Index contains duplicate entries, cannot reshape
<p>I am trying to unstack a multi-index with pandas and I keep getting:</p> <pre><code>ValueError: Index contains duplicate entries, cannot reshape
</code></pre> <p>Given a dataset with four columns:</p> <ul> <li>id (string)</li> <li>date (string)</li> <li>location (string)</li> <li>value (float)</li> </ul> <p>I first set a three-level multi-index:</p> <pre><code>In [37]: e.set_index(['id', 'date', 'location'], inplace=True)

In [38]: e
Out[38]:
                         value
id  date       location
id1 2014-12-12 loc1      16.86
    2014-12-11 loc1      17.18
    2014-12-10 loc1      17.03
    2014-12-09 loc1      17.28
</code></pre> <p>Then I try to unstack the location:</p> <pre><code>In [39]: e.unstack('location')
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
&lt;ipython-input-39-bc1e237a0ed7&gt; in &lt;module&gt;()
----&gt; 1 e.unstack('location')
...
C:\Anaconda\envs\sandbox\lib\site-packages\pandas\core\reshape.pyc in _make_selectors(self)
    143
    144         if mask.sum() &lt; len(self.index):
--&gt; 145             raise ValueError('Index contains duplicate entries, '
    146                              'cannot reshape')
    147

ValueError: Index contains duplicate entries, cannot reshape
</code></pre> <p>What is going on here?</p>
<p>Here's an example DataFrame which shows this; it has duplicate values with the same index. The question is, do you want to aggregate these or keep them as multiple rows?</p> <pre><code>In [11]: df
Out[11]:
   0  1  2      3
0  1  2  a  16.86
1  1  2  a  17.18
2  1  4  a  17.03
3  2  5  b  17.28

In [12]: df.pivot_table(values=3, index=[0, 1], columns=2, aggfunc='mean')  # desired?
Out[12]:
2        a      b
0 1
1 2  17.02    NaN
  4  17.03    NaN
2 5    NaN  17.28

In [13]: df1 = df.set_index([0, 1, 2])

In [14]: df1
Out[14]:
           3
0 1 2
1 2 a  16.86
    a  17.18
  4 a  17.03
2 5 b  17.28

In [15]: df1.unstack(2)
ValueError: Index contains duplicate entries, cannot reshape
</code></pre> <hr> <p>One solution is to <code>reset_index</code> (and get back to <code>df</code>) and use <code>pivot_table</code>.</p> <pre><code>In [16]: df1.reset_index().pivot_table(values=3, index=[0, 1], columns=2, aggfunc='mean')
Out[16]:
2        a      b
0 1
1 2  17.02    NaN
  4  17.03    NaN
2 5    NaN  17.28
</code></pre> <p>Another option (if you don't want to aggregate) is to append a dummy level, unstack it, then drop the dummy level, as sketched below.</p>
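<p>The dummy-level trick might look roughly like this (a sketch; it uses <code>groupby(...).cumcount()</code> to number the duplicates within each index key so the index becomes unique):</p> <pre><code>df1 = df.set_index([0, 1, 2])
dummy = df1.groupby(level=[0, 1, 2]).cumcount()  # 0, 1, 0, 0 for the example above
df2 = df1.set_index(dummy, append=True)          # the index is now unique
res = df2.unstack(2)                             # no duplicate entries, so this works
res.index = res.index.droplevel(-1)              # drop the dummy level again
</code></pre>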
python|pandas
68
373,996
51,113,982
TensorFlow: How to use 'tf.data' instead of 'load_csv_without_header'?
<p>2 years ago I wrote code in TensorFlow, and as part of the data loading I used the function 'load_csv_without_header'. Now, when I run the code, I get the message:</p> <pre><code>WARNING:tensorflow:From C:\Users\Roi\Desktop\Code_Win_Ver\code_files\Tensor_Flow\version1\build_database_tuple.py:124: load_csv_without_header (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.data instead.
</code></pre> <p>How do I use 'tf.data' instead of the current function? How can I keep the same dtype and the same format, without the csv header, with tf.data? I'm using TF version 1.8.0 on Python 3.5.</p> <p>Appreciate your help!</p>
<h2>Using <code>tf.data</code> to work with a <code>csv</code> file:</h2> <p>From TensorFlow's <a href="https://www.tensorflow.org/guide/datasets" rel="noreferrer">official documentation</a>:</p> <blockquote> <p>The tf.data module contains a collection of classes that allows you to easily load data, manipulate it, and pipe it into your model.</p> </blockquote> <p>Using the API, <code>tf.data.Dataset</code> is intended as the new standard of interfacing with data in TensorFlow. It represents "a sequence of elements, in which each element contains one or more Tensor objects". For a CSV, an element is just a single row, i.e. one training example, represented as a pair of tensors that correspond to the data (our <code>x</code>) and the label ("target") respectively.</p> <p>Using the API, the primary method of extracting each row (or, more accurately, each element) from a TensorFlow dataset (<code>tf.data.Dataset</code>) is by consuming its Iterator, and TensorFlow has an API named <code>tf.data.Iterator</code> for that. To return the next row, we can call <code>get_next()</code> on the Iterator, for example.</p> <p>Now onto the code to take a csv and transform it into our TensorFlow dataset.</p> <h2>Method 1: <code>tf.data.TextLineDataset()</code> and <code>tf.decode_csv()</code></h2> <p>With more recent versions of TensorFlow's Estimator API, instead of <code>load_csv_without_header</code>, you'd read your CSV using the more generic <code>tf.data.TextLineDataset(your_train_path)</code>. You can chain that with <code>skip()</code> to skip the first row if there is a header row, but in your case, that isn't necessary.</p> <p>You can then use <code>tf.decode_csv()</code> to decode each line of your CSV into its respective fields.</p> <p>The code solution:</p> <pre><code>import tensorflow as tf

train_path = 'data_input/iris_training.csv'
# if no header, remove .skip()
trainset = tf.data.TextLineDataset(train_path).skip(1)

# Metadata describing the text columns
COLUMNS = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'label']
FIELD_DEFAULTS = [[0.0], [0.0], [0.0], [0.0], [0]]

def _parse_line(line):
    # Decode the line into its fields
    fields = tf.decode_csv(line, FIELD_DEFAULTS)

    # Pack the result into a dictionary
    features = dict(zip(COLUMNS, fields))

    # Separate the label from the features
    label = features.pop('label')

    return features, label

trainset = trainset.map(_parse_line)
print(trainset)
</code></pre> <p>You would get:</p> <pre><code>&lt;MapDataset
 shapes: ({SepalLength: (), SepalWidth: (), PetalLength: (), PetalWidth: ()}, ()),
 types: ({SepalLength: tf.float32, SepalWidth: tf.float32, PetalLength: tf.float32, PetalWidth: tf.float32}, tf.int32)&gt;
</code></pre> <p>You can verify the <code>output classes</code>:</p> <pre><code>({'PetalLength': tensorflow.python.framework.ops.Tensor,
  'PetalWidth': tensorflow.python.framework.ops.Tensor,
  'SepalLength': tensorflow.python.framework.ops.Tensor,
  'SepalWidth': tensorflow.python.framework.ops.Tensor},
 tensorflow.python.framework.ops.Tensor)
</code></pre> <p>You can also use <code>get_next</code> to iterate through the iterator:</p> <pre><code>x = trainset.make_one_shot_iterator()
x.next()

# Output:
({'PetalLength': &lt;tf.Tensor: id=165, shape=(), dtype=float32, numpy=1.3&gt;,
  'PetalWidth': &lt;tf.Tensor: id=166, shape=(), dtype=float32, numpy=0.2&gt;,
  'SepalLength': &lt;tf.Tensor: id=167, shape=(), dtype=float32, numpy=4.4&gt;,
  'SepalWidth': &lt;tf.Tensor: id=168, shape=(), dtype=float32, numpy=3.2&gt;},
 &lt;tf.Tensor:
 id=169, shape=(), dtype=int32, numpy=0&gt;)
</code></pre> <h2>Method 2: <code>from_tensor_slices()</code> to construct a dataset object from numpy or pandas</h2> <pre><code>train, test = tf.keras.datasets.mnist.load_data()
mnist_x, mnist_y = train

mnist_ds = tf.data.Dataset.from_tensor_slices(mnist_x)
print(mnist_ds)
# returns: &lt;TensorSliceDataset shapes: (28,28), types: tf.uint8&gt;
</code></pre> <p>Another (more elaborated) example:</p> <pre><code>import numpy as np
import pandas as pd
import tensorflow as tf

california_housing_dataframe = pd.read_csv("https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")

# Define the input feature: total_rooms
my_feature = california_housing_dataframe[["total_rooms"]]

# Configure a numeric feature column for total_rooms
feature_columns = [tf.feature_column.numeric_column("total_rooms")]

# Define the label
targets = california_housing_dataframe["median_house_value"]

# Convert pandas data into a dict of np arrays
features = {key: np.array(value) for key, value in dict(my_feature).items()}

# Construct a dataset, and configure batching/repeating
ds = tf.data.Dataset.from_tensor_slices((features, targets))
</code></pre> <hr> <p>I also strongly suggest <a href="https://www.tensorflow.org/guide/datasets_for_estimators" rel="noreferrer">this article</a> and <a href="https://www.tensorflow.org/guide/datasets" rel="noreferrer">this</a>, both from the official documentation; safe to say that should cover most if not all your use cases, and it will help you migrate from the deprecated <code>load_csv_without_header()</code> function.</p>
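<p>One small follow-up to method 2 (this reflects my assumption about typical estimator usage, not something the snippet above showed): once the dataset object exists, you usually still want to shuffle, batch and repeat it before pulling tensors out of it:</p> <pre><code># a minimal sketch, continuing from the `ds` built above
ds = ds.shuffle(buffer_size=10000).batch(128).repeat()  # repeat() with no argument loops indefinitely
features_batch, labels_batch = ds.make_one_shot_iterator().get_next()  # tensors yielding one batch at a time
</code></pre>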
python|tensorflow|deep-learning|pycharm|tensorflow-datasets
7
373,997
50,858,746
Ignore errors in pandas astype
<p>I have a numeric column that could contain characters other than <strong>[0-9]</strong>. Say: <code>x = pandas.Series(["1", "1.2", "*", "1", "**."])</code>. Then I want to <b>convert</b> that series into a numerical column using <code>x.astype(dtype=float, errors='ignore')</code>. I just can't figure out why pandas keeps giving me an error despite the fact that I asked it not to! Is there something wrong with my code?</p>
<p>I think you want to use <a href="http://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.to_numeric.html" rel="noreferrer">pd.to_numeric(x, errors='coerce')</a> instead:</p> <pre><code>In [73]: x = pd.to_numeric(x, errors='coerce')

In [74]: x
Out[74]:
0    1.0
1    1.2
2    NaN
3    1.0
4    NaN
dtype: float64
</code></pre> <p>PS: actually <code>x.astype(dtype = float, errors = 'ignore')</code> works as expected; it doesn't give an error, it just leaves the series as is, because it can't convert some of the elements:</p> <pre><code>In [77]: x.astype(dtype = float, errors = 'ignore')
Out[77]:
0      1
1    1.2
2      *
3      1
4    **.
dtype: object    # &lt;----- NOTE!!!

In [81]: x.astype(dtype = float, errors = 'ignore').tolist()
Out[81]: ['1', '1.2', '*', '1', '**.']
</code></pre>
pandas
48
373,998
50,904,496
How to use TensorFlow's WALSMatrixFactorization
<p>I'm trying to use TensorFlow's WALSMatrixFactorization estimator, but I can't figure out how to use it. The fit method takes an input_fn as an argument, but what should this function return? As input, I basically have a matrix that I want factorized with the WALS method, but I can't find out how to pass this matrix to the fit function.</p>
<p>If you are still looking for the answer to this question, check this <a href="https://github.com/GoogleCloudPlatform/training-data-analyst/blob/ahybrid_fewer_vms/courses/machine_learning/deepdive/10_recommend/wals.ipynb" rel="nofollow noreferrer">notebook</a>, which has a lot to offer on how to use WALSMatrixFactorization with TensorFlow.</p>
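<p>As a rough orientation only (a hedged sketch distilled from that notebook; <code>input_rows</code> and <code>input_cols</code> below are placeholder names for SparseTensors you would build from your matrix yourself): the <code>input_fn</code> is expected to return a features dict keyed by the estimator's own constants, plus a <code>None</code> label:</p> <pre><code>import tensorflow as tf
from tensorflow.contrib.factorization import WALSMatrixFactorization

def input_fn():
    features = {
        # the sparse matrix, once row-wise and once as its transpose
        WALSMatrixFactorization.INPUT_ROWS: input_rows,
        WALSMatrixFactorization.INPUT_COLS: input_cols,
        WALSMatrixFactorization.PROJECT_ROW: tf.constant(True),
    }
    return features, None
</code></pre> <p>Treat the exact keys and shapes as assumptions to verify against the notebook and your TF version; the contrib API changed between releases.</p>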
tensorflow|tensorflow-estimator
0
373,999
50,967,353
Pandas custom function for returning column values
<p>Please help me out in writing a custom pandas function; I'm confused about how to loop over the frame and return specific row/column values as the result. I want to return the column means without using slicing or predefined functions like numpy's <code>np.mean</code>, and the only parameter passed to the custom function should be the dataset <code>df</code>. In layman's terms: I want to return the means of columns <code>['A','B']</code> from the function <code>col_men()</code> by passing the dataset <code>df</code>, without using pandas slicing or predefined functions like <code>mean</code>/<code>np.mean</code>. Below is my dataset; please give me the code logic for getting the column means.</p> <pre><code>df = pd.DataFrame({'A': [10,20,30], 'B': [20, 30, 10]})

def col_men(df):
    means = [0 for i in range(df.shape[1])]
    for k in range(df.shape[1]):
        col_values = [row[k] for row in df]
        means[k] = sum(col_values)/float(len(df))
    return means
</code></pre>
<p>Instead of using <code>range(df.shape[1])</code> use <code>enumerate(df.columns)</code>, so you keep both name and position:</p> <pre><code>df = pd.DataFrame({'A': [10,20,30], 'B': [20, 30, 10]})

def col_men(df):
    means = [0 for i in range(df.shape[1])]
    for index, k in enumerate(df.columns):
        col_values = [row for row in df[k]]
        means[index] = sum(col_values)/len(df)
    return means

col_men(df)
</code></pre>
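<p>For reference, each column in the sample frame sums to 60 over 3 rows, so under Python 3 the call should give the following (under Python 2, the plain <code>/</code> would do integer division and return <code>[20, 20]</code>):</p> <pre><code>&gt;&gt;&gt; col_men(df)
[20.0, 20.0]
</code></pre>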
python|pandas|dataframe
1