Columns: Unnamed: 0 (int64, 0 to 378k), id (int64, 49.9k to 73.8M), title (string, 15 to 150 chars), question (string, 37 to 64.2k chars), answer (string, 37 to 44.1k chars), tags (string, 5 to 106 chars), score (int64, -10 to 5.87k)
375,300
52,543,312
Iterate numbers in a df where a condition matches
<p>Playing around with different data frames whilst trying to teach myself Pandas, this has had me stumped for a while, which seems like a gap in my programming comprehension, but could anyone help?</p> <p>Consider the following df:</p> <pre><code>ID Name Week 1 Matthew 1751 1 Matthew 1751 1 Matthew 1751 2 Jon 1751 2 Jon 1751 2 Jon 1751 2 Jon 1751 3 Lisa 1751 3 Lisa 1751 3 Lisa 1751 3 Lisa 1751 3 Lisa 1751 3 Lisa 1751 3 Lisa 1751 </code></pre> <p>What I'm trying to do here is add 1 to the Week number for each occurrence in the index, but only where the name matches itself.</p> <pre><code>ID Name Week 1 Matthew 1751 1 Matthew 1752 1 Matthew 1753 2 Jon 1751 2 Jon 1752 2 Jon 1753 2 Jon 1754 3 Lisa 1751 3 Lisa 1752 3 Lisa 1753 3 Lisa 1754 3 Lisa 1755 3 Lisa 1756 3 Lisa 1757 </code></pre> <p>I've tried a simple for loop, but it just increments the length of the index to the number. I've also tried</p> <pre><code>for n in df.Name: print(len(n)) </code></pre> <p>which just returns the length of each string (rightly), and adding index just returns the length of the index the amount of times n occurs.</p> <p>Am I missing something fundamental? Should I create a list first and then pass that to the df?</p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>groupby.cumcount</code></a></p> <pre><code>df['Week'] = df.Week.add(df.groupby('Name').cumcount()) ID Name Week 0 1 Matthew 1751 1 1 Matthew 1752 2 1 Matthew 1753 3 2 Jon 1751 4 2 Jon 1752 5 2 Jon 1753 6 2 Jon 1754 7 3 Lisa 1751 8 3 Lisa 1752 9 3 Lisa 1753 10 3 Lisa 1754 11 3 Lisa 1755 12 3 Lisa 1756 13 3 Lisa 1757 </code></pre>
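<p>For readers trying this locally, here is a self-contained sketch of the same idea, with the imports and an example frame rebuilt from the question's data (the construction itself is only illustrative):</p> <pre><code>import pandas as pd

df = pd.DataFrame({
    'ID':   [1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3],
    'Name': ['Matthew'] * 3 + ['Jon'] * 4 + ['Lisa'] * 7,
    'Week': [1751] * 14,
})

# cumcount() numbers the rows within each Name group 0, 1, 2, ...
# adding that counter to Week yields 1751, 1752, 1753, ... per person
df['Week'] = df['Week'] + df.groupby('Name').cumcount()
print(df)
</code></pre>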
python|pandas
6
375,301
52,819,641
Removing multiple characters and joining in pandas columns
<p>I am trying to format this string but excluding the characters: <strong>(</strong> <strong>)</strong></p> <pre><code>My_name (1) Your_name (2) </code></pre> <p>Desired output:</p> <pre><code>My_name_ID_1 Your_name_ID_2 </code></pre> <p>This is a column of my dataframe. I tried replacing, but that handles only one character at a time, and I would also like to join the pieces afterward.</p> <p>Can I replace both characters and do the join in one step?</p>
<p>You can use a regular expression with <code>str.replace</code>:</p> <pre><code>s.str.replace(r'(\w+)\s+\(([^\)])\)', r'\1_ID_\2') </code></pre> <p></p> <pre><code>0 My_name_ID_1 1 Your_name_ID_2 Name: 0, dtype: object </code></pre> <p>An alternative is:</p> <pre><code>s.str.replace(r'\s+\(([^\)])\)', r'_ID_\1') </code></pre> <p>If you'd like to be less explicit.</p> <hr> <p><strong><em>Regex Explanation</em></strong></p> <pre><code>( # matching group 1 \w+ # matches any word character ) \s+ # matches one or more spaces \( # matches the character ( ( # matching group 2 [^\)] # matches any character that IS NOT ) ) \) # matches the character ) </code></pre>
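<p>For completeness, a runnable sketch of the accepted pattern, assuming the column lives in a Series <code>s</code> built from the question's two values (<code>regex=True</code> is passed explicitly so newer pandas versions treat the pattern as a regular expression):</p> <pre><code>import pandas as pd

s = pd.Series(['My_name (1)', 'Your_name (2)'])

# group 1 keeps the name, group 2 keeps the digit between the parentheses
out = s.str.replace(r'(\w+)\s+\(([^\)])\)', r'\1_ID_\2', regex=True)
print(out)
# 0      My_name_ID_1
# 1    Your_name_ID_2
</code></pre>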
python|string|pandas|dataframe|join
2
375,302
52,809,314
How to plot multi-index, categorical data?
<p>Given the following data:</p> <pre><code>DC,Mode,Mod,Ven,TY1,TY2,TY3,TY4,TY5,TY6,TY7,TY8 Intra,S,Dir,C1,False,False,False,False,False,True,True,False Intra,S,Co,C1,False,False,False,False,False,False,False,False Intra,M,Dir,C1,False,False,False,False,False,False,True,False Inter,S,Co,C1,False,False,False,False,False,False,False,False Intra,S,Dir,C2,False,True,True,True,True,True,True,False Intra,S,Co,C2,False,False,False,False,False,False,False,False Intra,M,Dir,C2,False,False,False,False,False,False,False,False Inter,S,Co,C2,False,False,False,False,False,False,False,False Intra,S,Dir,C3,False,False,False,False,True,True,False,False Intra,S,Co,C3,False,False,False,False,False,False,False,False Intra,M,Dir,C3,False,False,False,False,False,False,False,False Inter,S,Co,C3,False,False,False,False,False,False,False,False Intra,S,Dir,C4,False,False,False,False,False,True,False,True Intra,S,Co,C4,True,True,True,True,False,True,False,True Intra,M,Dir,C4,False,False,False,False,False,True,False,True Inter,S,Co,C4,True,True,True,False,False,True,False,True Intra,S,Dir,C5,True,True,False,False,False,False,False,False Intra,S,Co,C5,False,False,False,False,False,False,False,False Intra,M,Dir,C5,True,True,False,False,False,False,False,False Inter,S,Co,C5,False,False,False,False,False,False,False,False </code></pre> <p>Imports:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import numpy as np </code></pre> <p>To reproduce my <code>DataFrame</code>, copy the data then use:</p> <pre><code>df = pd.read_clipboard(sep=',') </code></pre> <p><strong>I'd like to create a plot conveying the same information as my example, but not necessarily with the same shape (I'm open to suggestions). I'd also like to hover over the color and have the appropriate <code>Ven</code> displayed (e.g. C1, not 1).:</strong></p> <p><strong>Edit 2018-10-17:</strong></p> <p>The two solutions provided so far, are helpful and each accomplish a different aspect of what I'm looking for. However, the key issue I'd like to resolve, which wasn't explicitly stated prior to this edit, is the following:</p> <p><strong>I would like to perform the plotting without converting <code>Ven</code> to an <code>int</code>; this numeric transformation isn't practical with the real data. So the actual scope of the question is to plot all categorical data with two categorical axes.</strong></p> <p><a href="https://i.stack.imgur.com/cp7fc.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cp7fc.jpg" alt="enter image description here"></a></p> <p>The issue I'm experiencing is the data is categorical and the y-axis is multi-indexed. </p> <p>I've done the following to transform the <code>DataFrame</code>:</p> <pre><code># replace False witn nan df = df.replace(False, np.nan) # replace True with a number representing Ven (e.g. 
C1 = 1) def rep_ven(row): return row.iloc[4:].replace(True, int(row.Ven[1])) df.iloc[:, 4:] = df.apply(rep_ven, axis=1) # drop the Ven column df = df.drop(columns=['Ven']) # set multi-index df_m = df.set_index(['DC', 'Mode', 'Mod']) </code></pre> <p>Plotting the transformed <code>DataFrame</code> produces:</p> <pre><code>plt.figure(figsize=(20,10)) heatmap = plt.imshow(df_m) plt.xticks(range(len(df_m.columns.values)), df_m.columns.values) plt.yticks(range(len(df_m.index)), df_m.index) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/1Gh1n.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1Gh1n.jpg" alt="enter image description here"></a></p> <p>This plot isn't very streamlined, there are four axis values for each <code>Ven</code>. This is a subset of data, so the graph would be very long with all the data.</p>
<p><strong>Explanation</strong>: Remove rows where <code>TY1</code>-<code>TY8</code> are all <code>nan</code> to create your plot. Refer to <a href="https://stackoverflow.com/a/47166787/5492877">this answer</a> as a starting point for creating interactive annotations to display <code>Ven</code>.</p> <p>The below code should work:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import numpy as np df = pd.read_clipboard(sep=',') # replace False witn nan df = df.replace(False, np.nan) # replace True with a number representing Ven (e.g. C1 = 1) def rep_ven(row): return row.iloc[4:].replace(True, int(row.Ven[1])) df.iloc[:, 4:] = df.apply(rep_ven, axis=1) # drop the Ven column df = df.drop(columns=['Ven']) idx = df[['TY1','TY2', 'TY3', 'TY4','TY5','TY6','TY7','TY8']].dropna(thresh=1).index.values df = df.loc[idx,:].sort_values(by=['DC', 'Mode','Mod'], ascending=False) # set multi-index df_m = df.set_index(['DC', 'Mode', 'Mod']) plt.figure(figsize=(20,10)) heatmap = plt.imshow(df_m) plt.xticks(range(len(df_m.columns.values)), df_m.columns.values) plt.yticks(range(len(df_m.index)), df_m.index) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/LIrYy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LIrYy.png" alt="enter image description here"></a></p>
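<p>For the hover requirement, one possible sketch (not taken from the linked answer) uses the third-party <code>mplcursors</code> package; <code>ven_labels</code> is assumed to be a separately built 2-D array of the original <code>Ven</code> strings with the same shape as <code>df_m</code>:</p> <pre><code>import mplcursors  # third-party: pip install mplcursors

cursor = mplcursors.cursor(heatmap, hover=True)

@cursor.connect("add")
def _show_ven(sel):
    # sel.target is the (x, y) data coordinate under the mouse
    col, row = int(round(sel.target[0])), int(round(sel.target[1]))
    sel.annotation.set_text(ven_labels[row][col])
</code></pre>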
python-3.x|pandas|matplotlib|data-visualization|bokeh
1
375,303
52,640,036
keras with tensorflow backend: layer implementation when batch_size is used, but it is None during graph construction
<p>I am implementing a keras layer <code>AnchorTargets</code> for retinanet in object detection, it inputs <code>anchors</code> and <code>annotations</code> (i.e. ground-truth bounding-boxes). The ideas for the codes are motivated by <a href="https://github.com/fizyr/keras-retinanet" rel="nofollow noreferrer">keras-retinanet</a> in github. </p> <p>However, I got error in <code>labels_shape = (input_shape[0]*input_shape[1], num_classes)</code> since input_shape[0] denotes batch size, which is unavailable during network construction. Is there any suggestions for this issue? Thanks.</p> <pre><code>def batch_anchor_targets_bbox( anchors, annotations, num_classes, mask_shape=None, negative_overlap=0.4, positive_overlap=0.5, **kwargs): """ Generate anchor targets for bbox detection. Args anchors: np.array of annotations of shape (B, N, 4) for (x1, y1, x2, y2). annotations: np.array of shape (B, M, 5) for (x1, y1, x2, y2, label). num_classes: Number of classes to predict. mask_shape: If the image is padded with zeros, mask_shape can be used to mark the relevant part of the image. negative_overlap: IoU overlap for negative anchors (all anchors with overlap &lt; negative_overlap are negative). positive_overlap: IoU overlap or positive anchors (all anchors with overlap &gt; positive_overlap are positive). Returns labels: np.array of shape (B, N, num_classes) where a row consists of 0 for negative and 1 for positive for a certain class. annotations: np.array of shape (B, N, 5) for (x1, y1, x2, y2, label) containing the annotations corresponding to each anchor or 0 if there is no corresponding anchor. anchor_states: np.array of shape (B, N) containing the state of an anchor (-1 for ignore, 0 for bg, 1 for fg). """ # anchor states: 1 is positive, 0 is negative, -1 is dont care anchors = keras.backend.cast(anchors, 'float32') annotations = keras.backend.cast(annotations, 'float32') input_shape = keras.backend.int_shape(anchors) annos_shape = keras.backend.int_shape(annotations) overlaps = batch_compute_overlap(anchors, annotations) argmax_overlaps_inds = keras.backend.argmax(overlaps, axis=2) max_overlaps = keras.backend.max(overlaps, axis=2) # assign "dont care" labels flag_pos = keras.backend.greater_equal(max_overlaps, positive_overlap) anchor_states = keras.backend.cast(flag_pos, 'float32') flag = backend.logical_and(keras.backend.greater_equal(max_overlaps, negative_overlap), \ keras.backend.less(max_overlaps, positive_overlap)) anchor_states = anchor_states - keras.backend.cast(flag, 'float32') # reshape the values and indices of the max output argmax_overlaps_inds = keras.backend.reshape(argmax_overlaps_inds,(-1,)) flag_pos = keras.backend.reshape(flag_pos,(-1,)) # compute box regression targets annotations_o = keras.backend.reshape(annotations, (-1, annos_shape[2])) annotations_o = backend.gather(annotations_o,argmax_overlaps_inds) annotations = keras.backend.reshape(annotations_o, (-1, input_shape[1],annos_shape[2])) # compute target class labels ind_x = backend.where(flag_pos) ind_y = keras.backend.cast(backend.gather(annotations_o[:,4], ind_x), 'int64') indices = keras.backend.concatenate([ind_x,ind_y], axis=1) labels_shape = (input_shape[0]*input_shape[1], num_classes) values = keras.backend.ones(keras.backend.shape(ind_x)[0]) delta = backend.SparseTensor(indices, values, labels_shape) labels = backend.sparse_tensor_to_dense(delta) labels = keras.backend.reshape(labels, (-1, input_shape[1], num_classes)) return indices, annotations, anchor_states class AnchorTargets(keras.layers.Layer): def 
__init__(self, num_classes, negative_overlap=0.4, positive_overlap=0.5, *args, **kwargs): self.num_classes = num_classes self.negative_overlap = negative_overlap self.positive_overlap = positive_overlap super(AnchorTargets, self).__init__(*args, **kwargs) def call(self, inputs, **kwargs): # anchors (batch_size x N x 4): pre-defined anchors # annotations (batch_size x #gt x 4): gt_bboxes. anchors, annotations = inputs # calculate regression and labels labels, annotations, anchor_states = batch_anchor_targets_bbox(anchors, annotations, self.num_classes, self.negative_overlap, self.positive_overlap) anchor_states = keras.backend.expand_dims(anchor_states,axis=2) labels = keras.backend.concatenate([labels, anchor_states], axis=2) regressions = keras.backend.concatenate([annotations, anchor_states], axis=2) return [regressions, labels] def compute_output_shape(self, input_shape): return [(input_shape[0][0],input_shape[0][1],input_shape[0][2]+1),\ (input_shape[0][0],input_shape[0][1],self.num_classes+2)] def get_config(self): config = super(AnchorTargets, self).get_config() config.update({ 'num_classes': self.num_classes, 'negative_overlap': self.negative_overlap, 'positive_overlap': self.positive_overlap }) return config </code></pre>
<p>Never mind. I have figured it out. The basic idea is to avoid the use of input_shape[0] (i.e. batch size) in <code>call</code> function of <code>keras.layers.Layer</code>. My implementation is as follows if anyone is interested in it.</p> <pre><code>def batch_anchor_targets_bbox( anchors, annotations, num_classes, mask_shape=None, negative_overlap=0.4, positive_overlap=0.5, **kwargs): """ Generate anchor targets for bbox detection. Args anchors: np.array of annotations of shape (B, N, 4) for (x1, y1, x2, y2). annotations: np.array of shape (B, M, 5) for (x1, y1, x2, y2, label). num_classes: Number of classes to predict. mask_shape: If the image is padded with zeros, mask_shape can be used to mark the relevant part of the image. negative_overlap: IoU overlap for negative anchors (all anchors with overlap &lt; negative_overlap are negative). positive_overlap: IoU overlap or positive anchors (all anchors with overlap &gt; positive_overlap are positive). Returns labels: np.array of shape (B, N, num_classes) where a row consists of 0 for negative and 1 for positive for a certain class. annotations: np.array of shape (B, N, 5) for (x1, y1, x2, y2, label) containing the annotations corresponding to each anchor or 0 if there is no corresponding anchor. anchor_states: np.array of shape (B, N) containing the state of an anchor (-1 for ignore, 0 for bg, 1 for fg). """ # anchor states: 1 is positive, 0 is negative, -1 is dont care anchors = keras.backend.cast(anchors, 'float32') annotations = keras.backend.cast(annotations, 'float32') input_shape = keras.backend.int_shape(anchors) annos_shape = keras.backend.int_shape(annotations) overlaps = batch_compute_overlap(anchors, annotations) argmax_overlaps_inds = keras.backend.argmax(overlaps, axis=2) max_overlaps = keras.backend.max(overlaps, axis=2) # assign "dont care" labels flag_pos = keras.backend.greater_equal(max_overlaps, positive_overlap) anchor_states = keras.backend.cast(flag_pos, 'float32') flag = backend.logical_and(keras.backend.greater_equal(max_overlaps, negative_overlap), \ keras.backend.less(max_overlaps, positive_overlap)) anchor_states = anchor_states - keras.backend.cast(flag, 'float32') # reshape the values and indices of the max output argmax_overlaps_inds = keras.backend.reshape(argmax_overlaps_inds,(-1,)) flag_pos = keras.backend.reshape(flag_pos,(-1,)) # compute box regression targets annotations_o = keras.backend.reshape(annotations, (-1, annos_shape[2])) annotations_o = backend.gather(annotations_o,argmax_overlaps_inds) annotations = keras.backend.reshape(annotations_o, (-1, input_shape[1],annos_shape[2])) # compute target class labels anchor_states_o = -1*keras.backend.cast(keras.backend.less(max_overlaps, positive_overlap), 'float32') anchor_states_o = keras.backend.reshape(anchor_states_o, (-1,)) ind_y = backend.where(keras.backend.greater_equal(anchor_states_o,-0.5), annotations_o[:,4], anchor_states_o) labels = keras.backend.one_hot(keras.backend.cast(ind_y,'int32'), num_classes) labels = keras.backend.reshape(labels, (-1, input_shape[1], num_classes)) return labels, annotations, anchor_states </code></pre>
python|tensorflow|keras
0
375,304
52,451,797
How does the reshape work before the fully connected layer in the following CNN model?
<p>Consider the convolutional neural network (two convolutional layers):</p> <pre><code>class ConvNet(nn.Module): def __init__(self, num_classes=10): super(ConvNet, self).__init__() self.layer1 = nn.Sequential( nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2)) self.layer2 = nn.Sequential( nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(kernel_size=2, stride=2)) self.fc = nn.Linear(7*7*32, num_classes) def forward(self, x): out = self.layer1(x) out = self.layer2(out) out = out.reshape(out.size(0), -1) out = self.fc(out) return out </code></pre> <p>The fully connected layer <code>fc</code> is to have <code>7*7*32</code> inputs coming in. The above:</p> <p><code>out = out.reshape(out.size(0), -1)</code> leads to a tensor with size of <code>(32, 49)</code>. This doesn't seem right as the dimensions of input for the dense layer is different. What am I missing here?</p> <p>[Note that in Pytorch the input is in the following format: [N, C, W, H] so no. of channels comes before the width and height of image]</p> <p><strong><em>source</em></strong>: <a href="https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/02-intermediate/convolutional_neural_network/main.py#L35-L56" rel="nofollow noreferrer">https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/02-intermediate/convolutional_neural_network/main.py#L35-L56</a></p>
<p>If you look at the output of each layer you can easily see what you are missing.</p> <pre><code>def forward(self, x): print ('input', x.size()) out = self.layer1(x) print ('layer1-output', out.size()) out = self.layer2(out) print ('layer2-output', out.size()) out = out.reshape(out.size(0), -1) print ('reshape-output', out.size()) out = self.fc(out) print ('Model-output', out.size()) return out test_input = torch.rand(4,1,28,28) model(test_input) OUTPUT: ('input', (4, 1, 28, 28)) ('layer1-output', (4, 16, 14, 14)) ('layer2-output', (4, 32, 7, 7)) ('reshape-output', (4, 1568)) ('Model-output', (4, 10)) </code></pre> <p>With kernel_size=5, stride=1 and padding=2, the Conv2d layers here do not change the height and width of the tensor; they only change the number of channels. Each MaxPool2d layer halves the height and width of the tensor.</p> <pre><code>inpt = 4,1,28,28 conv1_output = 4,16,28,28 max_output = 4,16,14,14 conv2_output = 4,32,14,14 max2_output = 4,32,7,7 reshape_output = 4,1568 (32*7*7) fcn_output = 4,10 </code></pre> <p><strong>N</strong> --> Input Size, <strong>F</strong> --> Filter Size, <strong>stride</strong> --> Stride Size, <strong>pdg</strong> --> Padding Size</p> <p>ConvTranspose2d:</p> <p><strong>OutputSize = N*stride + F - stride - pdg*2</strong></p> <p>Conv2d:</p> <p><strong>OutputSize = (N - F + pdg*2)/stride + 1</strong> [integer division, e.g. 32/3 = 10]</p>
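<p>A tiny helper makes the size bookkeeping above easy to verify (plain Python, purely illustrative):</p> <pre><code>def conv2d_out(n, f, stride=1, pad=0):
    # standard Conv2d / MaxPool2d output-size formula (integer division)
    return (n - f + 2 * pad) // stride + 1

h = 28
h = conv2d_out(h, 5, stride=1, pad=2)   # layer1 conv:  28
h = conv2d_out(h, 2, stride=2)          # layer1 pool:  14
h = conv2d_out(h, 5, stride=1, pad=2)   # layer2 conv:  14
h = conv2d_out(h, 2, stride=2)          # layer2 pool:   7
print(h * h * 32)                       # 1568, the fc input size
</code></pre>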
python|deep-learning|conv-neural-network|pytorch
3
375,305
52,588,827
For loop - running 1 loop to completion then running next loop python
<p>this script needs to run all the way through RI_page_urls.csv, then run through all the resulting urls from RI_License_urls.csv and grab the business info. </p> <p>it's pulling all the url's from RI_page_urls.csv, but then only running and printing the first of 100 urls from RI_License_urls.csv. Need help figuring out how to make it wait for the first part to complete before running the second part. </p> <p>I appreciate any and all help. </p> <p>Here's a url for the RI_page_urls.csv to start with:</p> <pre><code>http://www.crb.state.ri.us/verify_CRB.php </code></pre> <p>and the code:</p> <pre><code>from bs4 import BeautifulSoup as soup import requests as r import pandas as pd import re import csv #pulls lic# url with open('RI_page_urls.csv') as f_input: csv_input = csv.reader(f_input) for url in csv_input: data = r.get(url[0]) page_data = soup(data.text, 'html.parser') links = [r'www.crb.state.ri.us/' + link['href'] for link in page_data.table.tr.find_all('a') if re.search('licensedetail.php', str(link))] df = pd.DataFrame(links) df.to_csv('RI_License_urls.csv', header=False, index=False, mode = 'a') #Code Above works! #need to pull table info from license url #this pulls the first record, but doesn't loop through the requests with open('RI_License_urls.csv') as f_input_2: csv_input_2 = csv.reader(f_input_2) for url in csv_input_2: data = r.get(url[0]) page_data = soup(data.text, 'html.parser') company_info = (' '.join(info.get_text(", ", strip=True).split()) for info in page_data.find_all('h9')) df = pd.DataFrame(info, columns=['company_info']) df.to_csv('RI_company_info.csv', index=False) </code></pre>
<p>Well , The question is a bit unclear and also there are a couple of things wrong about the code </p> <pre><code>data = r.get(url[0]) </code></pre> <p>should be because its urls start with http or https not www</p> <pre><code>data = r.get("http://"+url[0]) </code></pre> <p>In the below code ,</p> <p><code>info</code> is not defined so , i just assumed it should be <code>company_info</code></p> <pre><code> company_info = (' '.join(info.get_text(", ", strip=True).split()) for info in page_data.find_all('h9')) df = pd.DataFrame(info, columns=['company_info']) </code></pre> <p>Hence the full code is </p> <pre><code>from bs4 import BeautifulSoup as soup import requests as r import pandas as pd import re import csv #pulls lic# url with open('RI_page_urls.csv') as f_input: csv_input = csv.reader(f_input) for url in csv_input: data = r.get(url[0]) page_data = soup(data.text, 'html.parser') links = [r'www.crb.state.ri.us/' + link['href'] for link in page_data.table.tr.find_all('a') if re.search('licensedetail.php', str(link))] df = pd.DataFrame(links) df.to_csv('RI_License_urls.csv', header=False, index=False, mode = 'a') #Code Above works! #need to pull table info from license url #this pulls the first record, but doesn't loop through the requests with open('RI_License_urls.csv') as f_input_2: csv_input_2 = csv.reader(f_input_2) with open('RI_company_info.csv','a',buffering=0) as companyinfofiledescriptor: for url in csv_input_2: data = r.get("http://"+url[0]) page_data = soup(data.text, 'html.parser') company_info = (' '.join(info.get_text(", ", strip=True).split()) for info in page_data.find_all('h9')) df = pd.DataFrame(company_info, columns=['company_info']) df.to_csv(companyinfofiledescriptor, index=False) print(df) </code></pre>
python|pandas|loops|beautifulsoup|with-statement
1
375,306
52,570,086
Plotting numpy array using Seaborn
<p>I'm using python 2.7. I know this will be very basic, however I'm really confused and I would like to have a better understanding of seaborn.</p> <p>I have two numpy arrays <code>X</code> and <code>y</code> and I'd like to use Seaborn to plot them.</p> <p>Here is my <code>X</code> numpy array:</p> <pre><code>[[ 1.82716998 -1.75449225] [ 0.09258069 0.16245259] [ 1.09240926 0.08617436]] </code></pre> <p>And here is the <code>y</code> numpy array:</p> <pre><code>[ 1. -1. 1. ] </code></pre> <p>How can I successfully plot my data points taking into account the class label from the <code>y</code> array? </p> <p>Thank you,</p>
<p>You can use seaborn functions to plot graphs. Do dir(sns) to see all the plots. Here is your output in <code>sns.scatterplot</code>. You can check the api docs <a href="https://seaborn.pydata.org/generated/seaborn.scatterplot.html" rel="noreferrer">here</a> or example code with plots <a href="https://python-graph-gallery.com/40-basic-scatterplot-seaborn/" rel="noreferrer">here</a></p> <pre><code>import seaborn as sns import pandas as pd df = pd.DataFrame([[ 1.82716998, -1.75449225], [ 0.09258069, 0.16245259], [ 1.09240926, 0.08617436]], columns=["x", "y"]) df["val"] = pd.Series([1, -1, 1]).apply(lambda x: "red" if x==1 else "blue") sns.scatterplot(df["x"], df["y"], c=df["val"]).plot() </code></pre> <p>Gives </p> <p><a href="https://i.stack.imgur.com/Es9mV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Es9mV.png" alt="enter image description here"></a> Is this the exact input output you wanted?</p> <p>You can do it with pyplot, just importing seaborn changes pyplot color and plot scheme</p> <pre><code>import seaborn as sns import matplotlib.pyplot as plt fig, ax = plt.subplots() df = pd.DataFrame([[ 1.82716998, -1.75449225], [ 0.09258069, 0.16245259], [ 1.09240926, 0.08617436]], columns=["x", "y"]) df["val"] = pd.Series([1, -1, 1]).apply(lambda x: "red" if x==1 else "blue") ax.scatter(x=df["x"], y=df["y"], c=df["val"]) plt.plot() </code></pre> <p>Here is a <a href="https://stackoverflow.com/questions/29637150/scatterplot-without-linear-fit-in-seaborn">stackoverflow post</a> of doing the same with sns.lmplot</p>
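<p>A slightly more idiomatic sketch of the same plot lets seaborn handle the colors and legend through <code>hue</code> (assumes seaborn &gt;= 0.9, where <code>scatterplot</code> was introduced; the column names are made up for the example):</p> <pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

X = [[1.82716998, -1.75449225],
     [0.09258069,  0.16245259],
     [1.09240926,  0.08617436]]
y = [1.0, -1.0, 1.0]

df = pd.DataFrame(X, columns=['x1', 'x2'])
df['label'] = y

# hue colors the points by class label and builds the legend automatically
sns.scatterplot(x='x1', y='x2', hue='label', data=df)
plt.show()
</code></pre>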
python|numpy|matplotlib|machine-learning|seaborn
10
375,307
52,716,910
Cannot train a model with numpy array
<p>I am training the following autoencoder on float numbers. </p> <pre><code>input_img = Input(shape=(2623,1), name='input') x = ZeroPadding1D(1)(input_img) x = Conv1D(32, 3, activation='relu', padding='same', use_bias=False)(input_img) x = BatchNormalization(axis=-1)(x) x = MaxPooling1D(2, padding='same')(x) x = Conv1D(16, 3, activation='relu', padding='same', use_bias=False)(x) x = BatchNormalization(axis=-1)(x) x = MaxPooling1D(2, padding='same')(x) x = Conv1D(16,3, activation='relu', padding='same', use_bias=False)(x) x = BatchNormalization(axis=-1)(x) encoded = MaxPooling1D(2, padding='same')(x) x = Conv1D(16,3, activation='relu', padding='same', use_bias=False)(encoded) x = BatchNormalization(axis=-1)(x) x = UpSampling1D(2)(x) x = Conv1D(16,3, activation='relu', padding='same', use_bias=False)(x) x = BatchNormalization(axis=-1)(x) x = UpSampling1D(2)(x) x = Conv1D(32, 3, activation='relu', padding='same', use_bias=False)(x) #input_shape=(30, 1)) x = BatchNormalization(axis=-1)(x) x = UpSampling1D(2)(x) x = Cropping1D(cropping=(0, 1))(x) #Crop nothing from input but crop 1 element from the end decoded = Conv1D(1, 3, activation='sigmoid', padding='same', use_bias=False)(x) autoencoder = Model(input_img, decoded) autoencoder.compile(optimizer='rmsprop', loss='binary_crossentropy') x = Input(shape=(16, 300), name="input") h = x h = Conv1D(filters=300, kernel_size=16, activation="relu", padding='same', name='Conv1')(h) h = MaxPooling1D(pool_size=16, name='Maxpool1')(h) </code></pre> <p>I had to convert the data to a numpy array in order to process it but when the model starts training i get:</p> <blockquote> <p>ValueError: could not convert string to float:</p> </blockquote> <p>This is happening because my training data looks like :</p> <blockquote> <p>train[0][1] # one number of training data</p> </blockquote> <pre><code>array(['0.001758873'], dtype=object) </code></pre> <p>What can i do in order to avoid the "dtype=object" in my training data, or maybe do i have to convert it to something else? Thank you!</p>
<p>Perhaps, as a preprocessing step, you can typecast your object type array to a float array using something like:</p> <pre><code># if float32 is the desired &amp; appropriate datatype train = train.astype(numpy.float32) </code></pre>
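<p>A minimal check of that idea on data shaped like the question's (single-element object arrays of numeric strings):</p> <pre><code>import numpy as np

train = np.array([['0.001758873'], ['0.002'], ['0.5']], dtype=object)
train = train.astype(np.float32)   # the strings are parsed into float32 values
print(train.dtype, train.shape)    # float32 (3, 1)
</code></pre>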
python|numpy|neural-network|deep-learning|autoencoder
1
375,308
52,612,874
Pandas - replace for loop for efficiency
<p>I have a data frame (df)</p> <pre><code>df = pd.DataFrame({'No': [123,234,345,456,567,678], 'text': ['60 ABC','1nHG','KL HG','21ABC','K 200','1g HG'], 'reference':['ABC','HG','FL','','200',''], 'result':['','','','','','']}, columns=['No', 'text', 'reference', 'result']) No text reference result 0 123 60 ABC ABC 1 234 1nHG HG 2 345 KL HG FL 3 456 21ABC 4 567 K 200 200 5 678 1g HG </code></pre> <p>and a list with elements</p> <pre><code>list ['ABC','HG','FL','200','CP1'] </code></pre> <p>Now I have the following coding:</p> <pre><code>for idx, row in df.iterrows(): for item in list: if row['text'].strip().endswith(item): if pd.isnull(row['reference']): df.at[idx, 'result'] = item elif pd.notnull(row['reference']) and row['reference'] != item: df.at[idx, 'result'] = 'wrong item' if pd.isnull(row['result']): break </code></pre> <p>I run through df and the list and check for matches.</p> <p>Output:</p> <pre><code> No text reference result 0 123 60 ABC ABC 1 234 1nHG HG 2 345 KL HG FL wrong item 3 456 21ABC ABC 4 567 K 200 200 5 678 1g HG HG </code></pre> <p>The break instruction is important because otherwise a second element could be found within the list and then this second element would overwrite the content in result.</p> <p>Now I need another solution because the data frame is huge and for loops are inefficient. Think using apply could work but how?</p> <p>Thank you!</p>
<p>Instead of iterating rows, you can iterate your suffixes, which is likely a much smaller iterable. This way, you can take advantage of series-based methods and Boolean indexing.</p> <p>I've also created an extra series to identify when a row has been updated. The cost of this extra check should be small versus the expense of iterating by row.</p> <pre><code>L = ['ABC', 'HG', 'FL', '200', 'CP1'] df['text'] = df['text'].str.strip() null = df['reference'].eq('') df['updated'] = False for item in L: ends = df['text'].str.endswith(item) diff = df['reference'].ne(item) m1 = ends &amp; null &amp; ~df['updated'] m2 = ends &amp; diff &amp; ~null &amp; ~df['updated'] df.loc[m1, 'result'] = item df.loc[m2, 'result'] = 'wrong item' df.loc[m1 | m2, 'updated'] = True </code></pre> <p>Result:</p> <pre><code> No text reference result updated 0 123 60 ABC ABC False 1 234 1nHG HG False 2 345 KL HG FL wrong item True 3 456 21ABC ABC True 4 567 K 200 200 False 5 678 1g HG HG True </code></pre> <p>You can drop the final column, but you may find it useful for other purposes.</p>
python|pandas|performance|for-loop|dataframe
2
375,309
52,783,479
How does Pandas compute exponential moving averages under the hood?
<p>I am trying to compare <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ewm.html" rel="nofollow noreferrer">pandas EMA</a> performance to <a href="https://numba.pydata.org/numba-doc/dev/index.html" rel="nofollow noreferrer">numba</a> performance.</p> <p>Generally, I don't write functions if they are already in-built with pandas, as pandas will always be faster than my slow hand-coded python functions; for example <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.quantile.html" rel="nofollow noreferrer">quantile</a>, <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer">sort values</a> etc. I believe this is because much of pandas is coded in C under the hood, as well as pandas <code>.apply()</code> methods being much faster than explicit python for loops due to vectorization (but I'm open to an explanation if this is not true). But here, for computing EMA's, I have found that using numba far outperforms pandas.</p> <p>The <a href="https://en.wikipedia.org/wiki/Moving_average" rel="nofollow noreferrer">EMA</a> I have coded is defined by</p> <p>S_t = Y_1, t = 1</p> <p>S_t = alpha*Y_t + (1 - alpha)*S_{t-1}, t > 1</p> <p>where Y_t is the value of the time series at time t, S_t is the value of the moving average at time t, and alpha is the smoothing parameter.</p> <p>The code is as follows</p> <pre><code>from numba import jit import pandas as pd import numpy as np @jit def ewm(arr, alpha): """ Calculate the EMA of an array arr :param arr: numpy array of floats :param alpha: float between 0 and 1 :return: numpy array of floats """ # initialise ewm_arr ewm_arr = np.zeros_like(arr) ewm_arr[0] = arr[0] for t in range(1,arr.shape[0]): ewm_arr[t] = alpha*arr[t] + (1 - alpha)*ewm_arr[t-1] return ewm_arr # initialize array and dataframe randomly a = np.random.random(10000) df = pd.DataFrame(a) %timeit df.ewm(com=0.5, adjust=False).mean() &gt;&gt;&gt; 1000 loops, best of 3: 1.77 ms per loop %timeit ewm(a, 0.5) &gt;&gt;&gt; 10000 loops, best of 3: 34.8 µs per loop </code></pre> <p>We see that the hand the hand coded <code>ewm</code> function is around 50 times faster than the pandas ewm method.</p> <p>It may be the case that numba also outperforms various other pandas methods depending how one codes their function. But here I am interested in how numba outperforms pandas in calculating Exponential Moving Averages. What is pandas doing (not doing) that makes it slow - or is it that numba is just extremely fast in this case? How does pandas compute EMA's under the hood?</p>
<blockquote> <p>But here I am interested in how numba outperforms Pandas in calculating exponential moving averages.</p> </blockquote> <p>Your version appears to be faster solely because you're passing it a NumPy array rather than a Pandas data structure:</p> <pre><code>&gt;&gt;&gt; s = pd.Series(np.random.random(10000)) &gt;&gt;&gt; %timeit ewm(s, alpha=0.5) 82 ms ± 10.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) &gt;&gt;&gt; %timeit ewm(s.values, alpha=0.5) 26 µs ± 193 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) &gt;&gt;&gt; %timeit s.ewm(alpha=0.5).mean() 852 µs ± 5.44 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) </code></pre> <p>In general, comparing NumPy versus Pandas operations is apples-to-oranges. The latter is built on top of the former and will almost always trade speed for flexibility. (But, taking that into consideration, Pandas is still fast and has come to rely more heavily on Cython ops over time.) I'm not sure specifically what it is about numba/jit that behaves better with NumPy. But if you compare both functions using a Pandas Series, Pandas itself comes out faster.</p> <blockquote> <p>How does Pandas compute EMAs under the hood?</p> </blockquote> <p>When you call <code>df.ewm()</code> (without yet calling the methods such <code>.mean()</code> or <code>.cov()</code>), the intermediate result is a bona fide class <code>EWM</code> that's found in <code>pandas/core/window.py</code>.</p> <pre><code>&gt;&gt;&gt; ewm = pd.DataFrame().ewm(alpha=0.1) &gt;&gt;&gt; type(ewm) &lt;class 'pandas.core.window.EWM'&gt; </code></pre> <p>Whether you pass <code>com</code>, <code>span</code>, <code>halflife</code>, or <code>alpha</code>, Pandas will <a href="https://github.com/pandas-dev/pandas/blob/12a0dc49ac63b63bcc5f93bccf71bce51c60bdad/pandas/core/window.py#L2168" rel="nofollow noreferrer">map this back to a <code>com</code></a> and use that.</p> <p>When you call the method itself, such as <code>ewm.mean()</code>, this maps to <a href="https://github.com/pandas-dev/pandas/blob/12a0dc49ac63b63bcc5f93bccf71bce51c60bdad/pandas/core/window.py#L2226" rel="nofollow noreferrer"><code>._apply()</code></a>, which in this case serves as a <a href="https://github.com/pandas-dev/pandas/blob/master/pandas/core/window.py#L2254" rel="nofollow noreferrer">router</a> to the appropriate Cython function:</p> <pre><code>cfunc = getattr(_window, func, None) </code></pre> <p>In the case of <code>.mean()</code>, <code>func</code> is "ewma". <code>_window</code> is the Cython module <a href="https://github.com/pandas-dev/pandas/blob/v0.23.4/pandas/_libs/window.pyx" rel="nofollow noreferrer"><code>pandas/libs/window.pyx</code></a>.</p> <p>That brings you to the heart of things, at the function <a href="https://github.com/pandas-dev/pandas/blob/0409521665bd436a10aea7e06336066bf07ff057/pandas/_libs/window.pyx#L1656" rel="nofollow noreferrer"><code>ewma()</code></a>, which is where the bulk of the work takes place:</p> <pre><code>weighted_avg = ((old_wt * weighted_avg) + (new_wt * cur)) / (old_wt + new_wt) </code></pre> <p>If you'd like a fairer comparison, call this function directly with the underlying NumPy values:</p> <pre><code>&gt;&gt;&gt; from pandas._libs.window import ewma &gt;&gt;&gt; %timeit ewma(s.values, 0.4, 0, 0, 0) 513 µs ± 10.1 µs per loop (mean ± std. dev. 
of 7 runs, 1000 loops each) </code></pre> <p>(Remember, it takes only a com; for that, you can use <a href="https://github.com/pandas-dev/pandas/blob/v0.23.4/pandas/core/window.py#L2377" rel="nofollow noreferrer"><code>pandas.core.window._get_center_of_mass()</code></a>.</p>
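<p>As a sanity check on the recursion defined in the question, the hand-rolled EMA and <code>ewm(adjust=False).mean()</code> agree up to floating-point tolerance; a small sketch (no numba involved):</p> <pre><code>import numpy as np
import pandas as pd

s = pd.Series(np.random.random(1000))
alpha = 0.5

manual = np.empty(len(s))
manual[0] = s.iloc[0]
for t in range(1, len(s)):
    manual[t] = alpha * s.iloc[t] + (1 - alpha) * manual[t - 1]

# adjust=False makes pandas use the same recursive definition
assert np.allclose(manual, s.ewm(alpha=alpha, adjust=False).mean().values)
</code></pre>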
python|arrays|pandas|time|numba
1
375,310
52,715,499
How to serve a tensorflow-module, specifically Universal Sentence Encoder?
<p>I have spent several hours trying to set up Tensorflow serving of the Tensorflow-hub module, "Universal Sentence Encoder." There is a similar question here:</p> <p><a href="https://stackoverflow.com/questions/50788080/how-to-make-the-tensorflow-hub-embeddings-servable-using-tensorflow-serving">How to make the tensorflow hub embeddings servable using tensorflow serving?</a> </p> <p>I have been doing this on a Windows machine.</p> <p>This is the code I used to build the model:</p> <pre><code>import tensorflow as tf import tensorflow_hub as hub MODEL_NAME = 'test' VERSION = 1 SERVE_PATH = './models/{}/{}'.format(MODEL_NAME, VERSION) with tf.Graph().as_default(): module = hub.Module("https://tfhub.dev/google/universal-sentence- encoder/1") text = tf.placeholder(tf.string, [None]) embedding = module(text) init_op = tf.group([tf.global_variables_initializer(), tf.tables_initializer()]) with tf.Session() as session: session.run(init_op) tf.saved_model.simple_save( session, SERVE_PATH, inputs = {"text": text}, outputs = {"embedding": embedding}, legacy_init_op = tf.tables_initializer() ) </code></pre> <p>I have gotten to the point where running the following line:</p> <pre><code>saved_model_cli show --dir ${PWD}/models/test/1 --tag_set serve --signature_def serving_default </code></pre> <p>gives me the following result:</p> <pre><code>The given SavedModel SignatureDef contains the following input(s): inputs['text'] tensor_info: dtype: DT_STRING shape: (-1) name: Placeholder:0 The given SavedModel SignatureDef contains the following output(s): outputs['embedding'] tensor_info: dtype: DT_FLOAT shape: (-1, 512) name: module_apply_default/Encoder_en/hidden_layers/l2_normalize:0 </code></pre> <p>I have then tried running:</p> <pre><code>saved_model_cli run --dir ${PWD}/models/test/1 --tag_set serve --signature_def serving_default --input_exprs 'text=["what this is"]' </code></pre> <p>which gives the error:</p> <pre><code> File "&lt;string&gt;", line 1 [what this is] ^ SyntaxError: invalid syntax </code></pre> <p>I've tried changing the format of the part 'text=["what this is"]', but nothing worked for me.</p> <p>Regardless of if this part works, the main goal is to set up the module for serving and create a callable API. </p> <p>I've tried with docker, the following line:</p> <pre><code>docker run -p 8501:8501 --name tf-serve -v ${PWD}/models/:/models -t tensorflow/serving --model_base_path=/models/test </code></pre> <p>Things appear to be set up properly: </p> <pre><code>Building single TensorFlow model file config: model_name: model model_base_path: /models/test 2018-10-09 07:05:08.692140: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models. 
2018-10-09 07:05:08.692301: I tensorflow_serving/model_servers/server_core.cc:517] (Re-)adding model: model 2018-10-09 07:05:08.798733: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: model version: 1} 2018-10-09 07:05:08.798841: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: model version: 1} 2018-10-09 07:05:08.798870: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: model version: 1} 2018-10-09 07:05:08.798904: I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:360] Attempting to load native SavedModelBundle in bundle-shim from: /models/test/1 2018-10-09 07:05:08.798947: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/test/1 2018-10-09 07:05:09.055822: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve } 2018-10-09 07:05:09.338142: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2018-10-09 07:05:09.576751: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:162] Restoring SavedModel bundle. 2018-10-09 07:05:28.975611: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:138] Running MainOp with key saved_model_main_op on SavedModel bundle. 2018-10-09 07:06:30.941577: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:259] SavedModel load for tags { serve }; Status: success. Took 82120946 microseconds. 2018-10-09 07:06:30.990252: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:83] No warmup data file found at /models/test/1/assets.extra/tf_serving_warmup_requests 2018-10-09 07:06:31.046262: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: model version: 1} 2018-10-09 07:06:31.184541: I tensorflow_serving/model_servers/server.cc:285] Running gRPC ModelServer at 0.0.0.0:8500 ... [warn] getaddrinfo: address family for nodename not supported 2018-10-09 07:06:31.221644: I tensorflow_serving/model_servers/server.cc:301] Exporting HTTP/REST API at:localhost:8501 ... [evhttp_server.cc : 235] RAW: Entering the event loop ... </code></pre> <p>I've tried </p> <pre><code>curl http://localhost:8501/v1/models/test </code></pre> <p>which gives</p> <pre><code>{ "error": "Malformed request: GET /v1/models/test:predict" } </code></pre> <p>and </p> <pre><code>curl -d '{"text": "Hello"}' -X POST http://localhost:8501/v1/models/test:predict </code></pre> <p>which gives </p> <pre><code>{ "error": "JSON Parse error: Invalid value. at offset: 0" } </code></pre> <p>A similar question is here </p> <p><a href="https://stackoverflow.com/questions/52671138/tensorflow-serving-rest-api-returns-malformed-request-error">Tensorflow Serving: Rest API returns &quot;Malformed request&quot; error</a></p> <p>Just looking for any way to get this module serving. Thanks.</p>
<p>I was finally able to figure things out. I'll post what I did here in case someone else is trying to do the same thing.</p> <p>My issue with the saved_model_cli run command was with the quotes (using Windows command prompt). Change <code>'text=["what this is"]'</code> to <code>"text=['what this is']"</code></p> <p>The issue with the POST request was two-fold. One, I noticed that the model's name is model, so should have been <a href="http://localhost:8501/v1/models/model:predict" rel="noreferrer">http://localhost:8501/v1/models/model:predict</a></p> <p>Secondly, the input format was not correct. I used Postman, and the body of the request looks like this: <code>{"inputs": {"text": ["Hello"]}}</code></p>
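<p>For reference, a sketch of the same REST call from Python instead of Postman (it assumes the container from the question is still running and serving the model under the name <code>model</code>):</p> <pre><code>import requests

payload = {"inputs": {"text": ["Hello"]}}
resp = requests.post("http://localhost:8501/v1/models/model:predict", json=payload)

print(resp.status_code)
print(resp.json())   # expected: an "outputs" entry holding one 512-dim embedding
</code></pre>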
python|tensorflow|tensorflow-serving|tensorflow-hub
17
375,311
52,750,553
Having problems converting strings to floats in pandas data frame
<p>I'm trying to make a scatter plot with a pandas dataframe, but "ValueError: x and y must be the same size" kept popping up. It looks like the Slaughter Steers column contains strings instead of floats, so I tried to convert it, but then ValueError: could not convert string to float: '1,062.6' happens. I tried replacing ' with a space but still get the same error.</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import numpy as np #Read in Data set date as index cattle_price = pd.read_csv('C:/Users/SkyLH/Documents/CattleForcast Model/Slaughter Cattle Monthly Data.csv', index_col = 'DATE') cattle_slaughter = pd.read_csv('C:/Users/SkyLH/Documents/Cattle Forcast Model/SlaughterCountsFull - Sheet1.csv', index_col = 'Date') cattle_price.index = pd.to_datetime(cattle_price.index) cattle_price.index.names = ['Date',] cattle_slaughter.replace("'"," ") cattle_slaughter.astype(float) cattle_df = cattle_price.join(cattle_slaughter, how = 'inner') print(cattle_df) plt.scatter(cattle_df, y = 'Price') plt.show() Price Slaughter Steers Date 1955-01-01 34.899999 983.8 1955-02-01 35.999998 847.9 1955-03-01 34.600001 1,062.6 1955-04-01 35.800002 1,000.9 1955-05-01 33.100002 1,090.1 </code></pre>
<p>I believe the commas (thousands separators) are preventing the conversion. This question has possible solutions that may help you:</p> <p><a href="https://stackoverflow.com/questions/1779288/how-do-i-use-python-to-convert-a-string-to-a-number-if-it-has-commas-in-it-as-th">How do I use Python to convert a string to a number if it has commas in it as thousands separators?</a></p>
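<p>Two common fixes, sketched under the assumption that the thousands separators are the only problem (file and column names are taken from the question, with the path shortened):</p> <pre><code>import pandas as pd

# option 1: tell the parser about '1,062.6'-style numbers up front
cattle_slaughter = pd.read_csv('SlaughterCountsFull - Sheet1.csv',
                               index_col='Date', thousands=',')

# option 2: clean an already-loaded string column, then cast
cattle_slaughter['Slaughter Steers'] = (
    cattle_slaughter['Slaughter Steers']
    .astype(str)
    .str.replace(',', '', regex=False)
    .astype(float)
)
</code></pre>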
python|pandas
0
375,312
52,858,015
Create series of tuples from pandas DataFrame efficiently
<p>I am using <code>apply()</code> to construct a Series of tuples from the values of an existing DataFrame. I need to construct a specific order of the values in the tuple, and replace <code>NaN</code> in all but one column with <code>'{}'</code>. </p> <p>The following functions work to produce the desired result, but the execution is rather slow:</p> <pre><code>def build_insert_tuples_series(row): # Here I attempt to handle ordering the final tuple # I must also replace NaN with "{}" for all but v2 column. vals = [row['v2']] row_sans_v2 = row.drop(labels=['v2']) row_sans_v2.fillna("{}", inplace=True) res = [val for val in row_sans_token] vals += res return tuple(vals) def generate_insert_values_series(df): df['insert_vals'] = df.apply(lambda x: build_insert_tuples_series(x), axis=1) return df['insert_vals'] </code></pre> <p>Original DataFrame:</p> <pre><code> id v1 v2 0 1.0 foo quux 1 2.0 bar foo 2 NaN NaN baz </code></pre> <p>Resulting DataFrame upon calling <code>generate_insert_values_series(df)</code>:</p> <p>The logic for order on the final tuple is <code>(v2, ..all_other_columns..)</code></p> <pre><code> id v1 v2 insert_vals 0 1.0 foo quux (quux, 1.0, foo) 1 2.0 bar foo (foo, 2.0, bar) 2 NaN NaN baz (baz, {}, {}) </code></pre> <p>Timing the function to generate the resulting DataFrame:</p> <pre><code>%%timeit generate_insert_values_series(df) 100 loops, best of 3: 2.69 ms per loop </code></pre> <p>I feel that there may be a way to more efficiently construct the Series, but am unsure of how to optimize the operation using vectorization, or another approach.</p>
<h3><code>zip</code>, <code>get</code>, <code>mask</code>, <code>fillna</code>, and <code>sorted</code></h3> <p>One liner for what it's worth</p> <pre><code>df.assign( insert_vals= [*zip(*map(df.mask(df.isna(), {}).get, sorted(df, key=lambda x: x != 'v2')))]) id v1 v2 insert_vals 0 1.0 foo quux (quux, 1.0, foo) 1 2.0 bar foo (foo, 2.0, bar) 2 NaN NaN baz (baz, {}, {}) </code></pre> <p>Less one-liner-ish</p> <pre><code>get = df.mask(df.isna(), {}).get key = lambda x: x != 'v2' cols = sorted(df, key=key) df.assign(insert_vals=[*zip(*map(get, cols))]) id v1 v2 insert_vals 0 1.0 foo quux (quux, 1.0, foo) 1 2.0 bar foo (foo, 2.0, bar) 2 NaN NaN baz (baz, {}, {}) </code></pre> <hr> <p>This should work for legacy python</p> <pre><code>get = df.mask(df.isna(), {}).get key = lambda x: x != 'v2' cols = sorted(df, key=key) df.assign(insert_vals=zip(*map(get, cols))) </code></pre>
python|python-2.7|pandas
3
375,313
52,564,644
Pandas Dataframe Turned my Dictionaries into String
<p>I have a dataframe, each cell saves a dictionary. Before exporting the dataframe, I could call each cell as an individual dataframe. </p> <p>However, after saving the dataframe as csv and reopening this each cell became string so I could not turn the cell I called into a dataframe anymore. </p> <p><a href="https://i.stack.imgur.com/SLQRr.png" rel="nofollow noreferrer">The output should look like this</a></p> <p><a href="https://i.stack.imgur.com/YX0CK.png" rel="nofollow noreferrer">After saving the dataframe as csv, dictionary became string</a></p> <p>I was surprising to learn after my research on Stackoverflow, there were not many people experienced same issue as I'm having. I wondered whether my practice is wrong. I only found two posts related to my issue. Here is the one (<a href="https://stackoverflow.com/questions/46858848/dict-objects-converting-to-string-when-read-from-csv-to-dataframe-pandas-python">dict objects converting to string when read from csv to dataframe pandas python</a>). </p> <p>I basically tried json, ast.literal_eval and yaml but none of these could solve my issue. </p> <p>This is the first part of my code(I created this four list to store my data which I called from an api)</p> <pre><code>tickers4 = [] last_1st_bs4 = [] last_2nd_bs4 = [] last_3rd_bs4 = [] for i in range(len(tickers)): try: ticker = tickers.loc[i, 'ticker'] ann_yr = 2018 yr_1st = intrinio.financials_period(ticker, str(ann_yr-1), fiscal_period='FY', statement='balance_sheet') yr_2nd = intrinio.financials_period(ticker, str(ann_yr-2), fiscal_period='FY', statement='balance_sheet') yr_3rd = intrinio.financials_period(ticker, str(ann_yr-3), fiscal_period='FY', statement='balance_sheet') tickers4.append(ticker) last_1st_bs4.append(yr_1st) last_2nd_bs4.append(yr_2nd) last_3rd_bs4.append(yr_3rd) print('{} Feeding data {}'.format(i, ticker)) except: tickers4.append(ticker) last_1st_bs4.append(0) last_2nd_bs4.append(0) last_3rd_bs4.append(0) print('{} Error {}'.format(i, ticker)) </code></pre> <p>Second part: I put them into a dataframe and saved as csv</p> <pre><code>BS = pd.DataFrame() BS['ticker'] = tickers4 BS['BS_2017'] = last_1st_bs4 BS['BS_2016'] = last_2nd_bs4 BS['BS_2015'] = last_3rd_bs4 BS.to_csv('Balance_Sheet_2015_2017.csv') </code></pre> <p>now, I need read this csv in another notebook</p> <pre><code>BS = pd.read_csv('./Balance_Sheet_2015_2017.csv', index_col=0) BS.loc[9, 'BS_2017'] </code></pre> <p>here is the result I got: <code>' cashandequivalents shortterminvestments notereceivable \\\nyear \n2017 2.028900e+10 5.389200e+10 1.779900e+10 \n\n accountsreceivable netinventory othercurrentassets \\\nyear \n2017 1.787400e+10 4.855000e+09 1.393600e+10 \n\n totalcurrentassets netppe longterminvestments \\\nyear \n2017 1.286450e+11 3.378300e+10 1.947140e+11 \n\n othernoncurrentassets ... \\\nyear ... \n2017 1.817700e+10 ... \n\n commitmentsandcontingencies commonequity retainedearnings \\\nyear \n2017 0.0 3.586700e+10 9.833000e+10 \n\n aoci totalcommonequity totalequity \\\nyear \n2017 -150000000.0 1.340470e+11 1.340470e+11 \n\n totalequityandnoncontrollinginterests totalliabilitiesandequity \\\nyear \n2017 1.340470e+11 3.753190e+11 \n\n currentdeferredrevenue noncurrentdeferredrevenue \nyear \n2017 7.548000e+09 2.836000e+09 \n\n[1 rows x 30 columns]'</code></p> <p>Thanks for your help. </p>
<p>CSV is not an appropriate format for saving dictionaries (and honestly, storing dictionaries inside DataFrame cells isn't a great data structure to begin with). You should try writing the DataFrame to JSON instead: <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_json.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_json.html</a></p>
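<p>If the CSV round trip has to stay, one workaround (a sketch that assumes each cell really holds a plain dict; the intrinio results may first need converting with <code>.to_dict()</code>) is to serialise the cells explicitly so they can be parsed back on load:</p> <pre><code>import json
import pandas as pd

cols = ['BS_2017', 'BS_2016', 'BS_2015']

# write: turn each dict cell into a JSON string before saving
out = BS.copy()
for col in cols:
    out[col] = out[col].apply(json.dumps)
out.to_csv('Balance_Sheet_2015_2017.csv')

# read: parse the JSON strings back into dicts
BS2 = pd.read_csv('Balance_Sheet_2015_2017.csv', index_col=0)
for col in cols:
    BS2[col] = BS2[col].apply(json.loads)
</code></pre>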
python|string|pandas|dictionary|dataframe
2
375,314
52,507,748
Using multiple CPU cores in TensorFlow
<p>I have extensively studied other answers on TensorFlow and I just cannot seem to get it to use multiple cores on my CPU.</p> <p>According to htop, the following program only uses a single CPU core:</p> <pre><code>import tensorflow as tf n_cpus = 20 sess = tf.Session(config=tf.ConfigProto( device_count={ "CPU": n_cpus }, inter_op_parallelism_threads=n_cpus, intra_op_parallelism_threads=1, )) size = 100000 A = tf.ones([size, size], name="A") B = tf.ones([size, size], name="B") C = tf.ones([size, size], name="C") with tf.device("/cpu:0"): x = tf.matmul(A, B) with tf.device("/cpu:1"): y = tf.matmul(A, C) sess.run([x, y]) # run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE) # run_metadata = tf.RunMetadata() # sess.run([x, y], options=run_options, run_metadata=run_metadata) # for device in run_metadata.step_stats.dev_stats: # device_name = device.device # print(device.device) # for node in device.node_stats: # print(" ", node.node_name) </code></pre> <p>However, when I uncomment the lines at the bottom, and change <code>size</code> so that the computation actually finishes in a reasonable amount of time, I see that TensorFlow seems to think it's using at least 2 CPU devices:</p> <pre><code>/job:localhost/replica:0/task:0/device:CPU:0 _SOURCE MatMul _retval_MatMul_0_0 _retval_MatMul_1_0_1 /job:localhost/replica:0/task:0/device:CPU:1 _SOURCE MatMul_1 </code></pre> <p>Fundamentally, what I want to do here is execute different ops on different cores in parallel. I don't want to split a single op over multiple cores, though I know that happens to work in this contrived example. Both <code>device_count</code> and <code>inter_op_parallelism_threads</code> sound like what I want, but neither seems to actually result in using multiple cores. I've tried all combinations I can think of, including setting one or the other to <code>1</code> in case they conflict with each other, and nothing seems to work.</p> <p>I can also confirm with <code>taskset</code> that I'm not doing anything strange with my CPU affinity:</p> <pre><code>$ taskset -p $$ pid 21395's current affinity mask: ffffffffff </code></pre> <p>What exactly do I have to do to this code to get it to use multiple CPU cores?</p> <p>Note:</p> <ul> <li>From <a href="https://stackoverflow.com/q/45985641/188046">this answer</a> among others I'm setting the <code>device_count</code> and <code>inter_op_parallelism_threads</code>.</li> <li>The tracing command comes from <a href="https://stackoverflow.com/a/41525764/188046">this answer</a>.</li> <li>I can remove the <code>tf.device</code> calls and it doesn't seem to make any difference to my CPU utilization.</li> </ul> <p>I'm using TensorFlow 1.10.0 installed from conda.</p>
<p>After some back and forth on the <a href="https://github.com/tensorflow/tensorflow/issues/22619" rel="noreferrer">TensorFlow issue here</a> we determined that the issue was that the program was being "optimized" by a constant folding pass, because the inputs were all trivial. It turns out this constant folding pass runs sequentially. Therefore, if you want to observe a parallel execution, the way to do this is to make the inputs non-trivial so that the constant folding won't apply to them. The method suggested in the issue was to use <code>tf.placeholder</code>, and I have written an example program that makes use of this here:</p> <p><a href="https://gist.github.com/elliottslaughter/750a27c832782f4daec8686281027de8" rel="noreferrer">https://gist.github.com/elliottslaughter/750a27c832782f4daec8686281027de8</a></p> <p>See the original issue for sample output from the program: <a href="https://github.com/tensorflow/tensorflow/issues/22619" rel="noreferrer">https://github.com/tensorflow/tensorflow/issues/22619</a></p>
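<p>The linked gist has the full program; the core change is simply to feed non-constant inputs so the matmuls cannot be folded away. A rough sketch of that idea (not the exact gist code, and with a smaller <code>size</code> so it finishes quickly):</p> <pre><code>import numpy as np
import tensorflow as tf

size = 5000
A = tf.placeholder(tf.float32, [size, size])
B = tf.placeholder(tf.float32, [size, size])
C = tf.placeholder(tf.float32, [size, size])

x = tf.matmul(A, B)
y = tf.matmul(A, C)

a = np.ones((size, size), dtype=np.float32)
config = tf.ConfigProto(inter_op_parallelism_threads=2,
                        intra_op_parallelism_threads=1)
with tf.Session(config=config) as sess:
    # with placeholders the constant-folding pass cannot pre-compute x and y,
    # so the two matmuls can be scheduled on separate threads
    sess.run([x, y], feed_dict={A: a, B: a, C: a})
</code></pre>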
python|multithreading|tensorflow|parallel-processing|affinity
5
375,315
52,850,196
Take an element of a tensor that is also inside another tensor
<p>I have two tensors and I have to iterate on the first to take only the element that is inside the other tensor. There is only one element in <code>t2</code> that it is also inside <code>t1</code>. Here an example</p> <pre><code>t1 = tf.where(values &gt; 0) # I get some indices example [6, 0], [3, 0] t2 = tf.where(values2 &gt; 0) # I get [4, 0], [3, 0] t3 = .... # [3, 0] </code></pre> <p>I've tried to evaluate and iterate over them using <code>.eval()</code> and checked if an element of <code>t2</code> is in <code>t1</code> using the operator <code>in</code>, but doesn't work. Is there a function from TensorFlow that can do that?</p> <p><strong>edit</strong></p> <pre><code>for index in xrange(max_indices): indices = tf.where(tf.equal(values, (index + 1))).eval() # indices: [[1 0]\n [4 0]\n [9 0]] cent_indices = tf.where(centers &gt; 0).eval() # cent_indices: [[6 0]\n [9 0]] indices_list.append(indices) for cent in cent_indices: if cent in indices: centers_list.append(cent) break </code></pre> <p>The first iteration <code>cent</code> has the value <code>[6 0]</code> but it enters the <code>if</code> condition.</p> <p><strong>answer</strong></p> <pre><code>for index in xrange(max_indices): indices = tf.where(tf.equal(values, (index + 1))).eval() cent_indices = tf.where(centers &gt; 0).eval() indices_list.append(indices) for cent in cent_indices: # batch_item is an iterator from an outer loop if values[batch_item, cent[0]].eval() == (index + 1): centers_list.append(tf.constant(cent)) break </code></pre> <p>The solution is related to my task, but if you are looking for a solution in 1D tensor I suggest to have a look on <code>tf.sets.set_intersection</code></p>
<p>Is that what you wanted ? I used just these two test cases.</p> <pre><code>x = tf.constant([[1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 1]]) y = tf.constant([[1, 2, 3, 4, 3, 6], [1, 2, 3, 4, 5, 1]]) # x = tf.constant([[1, 2], [4, 5], [7, 7]]) # y = tf.constant([[7, 7], [3, 5]]) def match(xiterations, yiterations, yvalues, xvalues ): for i in range(xiterations): for j in range(yiterations): if (np.array_equal(yvalues[j], xvalues[i])): print( yvalues[j]) with tf.Session() as sess: xindex = tf.where( x &gt; 4 ) yindex = tf.where( y &gt; 4 ) xvalues = xindex.eval() yvalues = yindex.eval() xiterations = tf.shape(xvalues)[0].eval() yiterations = tf.shape(yvalues)[0].eval() print(tf.shape(xvalues)[0].eval()) print(tf.shape(yvalues)[0].eval()) if tf.shape(xvalues)[0].eval() &gt;= tf.shape(yvalues)[0].eval(): match( xiterations, yiterations, yvalues, xvalues) else: match( yiterations, xiterations, xvalues, yvalues) </code></pre>
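<p>As a side note, for the plain 1D case mentioned at the end of the question, <code>tf.sets.set_intersection</code> can do this directly; a small sketch (the values here are made up) might look like:</p> <pre><code>import tensorflow as tf

# set_intersection works along the last axis, so 1D data gets a leading batch axis
a = tf.constant([[1, 3, 4, 9]], dtype=tf.int64)
b = tf.constant([[3, 9, 5]], dtype=tf.int64)

common = tf.sets.set_intersection(a, b)  # a SparseTensor holding the shared values

with tf.Session() as sess:
    print(sess.run(common.values))  # -&gt; [3 9]
</code></pre>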
python|tensorflow
1
375,316
52,524,851
Broadcast a 1D array using a 2D array
<p>I have a 1D array <code>array_data</code> with ~10**8 elements. </p> <p>I have a second array <code>array_index</code> which specifies the <strong>bound</strong>ing indices used to slice <code>array_data</code> with.</p> <p>Below is a Minimal, Complete, and Verifiable example of <code>array_data</code> and <code>array_index</code>:</p> <pre><code>import numpy as np #Create data array_data = np.arange(100) #Randomly create indices array_index = np.sort(np.random.randint(100, size=(10,2))) #For each randomly created index, slice the array array_sliced = [array_data[index[0]:index[1]] for index in array_index] #Now data is sliced, perform operation on the sliced data. For example: val = [] for slice in array_sliced: val.append(np.nanmean(slice)) </code></pre> <p><strong>Question:</strong> What is the best way to slice <code>array_data</code> with <code>array_index</code> along <code>axis=1</code> so I can perform another task on the sliced arrays (e.g. <code>min</code>, <code>max</code>, <code>mean</code>)?</p> <p>My solution at the moment uses list comprehension and conversion back to a numpy array. This method seems clunky and slow:</p> <pre><code>&gt;&gt;&gt; np.array([np.nanmean(array_data[index[0]:index[1]]) for index in array_index], dtype=np.float64) </code></pre> <p><strong>EDIT:</strong> Added Minimal, Complete, and Verifiable example (works in python 2.7).</p>
<p>When I run your code I get a list of arrays of varying size:</p> <pre><code>In [63]: [len(x) for x in array_sliced] Out[63]: [3, 46, 38, 9, 73, 66, 3, 23, 40, 36] </code></pre> <p>(you also get this from <code>np.diff(array_index,axis=1)</code>)</p> <p>A general observation is that when dealing arrays of differing sizes, it is quite difficult to treat them in any sort of 2d manner.</p> <p>You might be able to generate a (10,100) mask, True for values you want to keep in each row, False for the omits. Or maybe <code>np.nan</code> for the omits.</p> <p>Or think in terms of padding these 10 arrays so they fit in a (10,73) array, again with an appropriate padding element (0, nan, etc).</p>
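<p>For what it's worth, a rough sketch of the mask idea, reusing the variable names from the question (sizes kept small so it is easy to check by hand):</p> <pre><code>import numpy as np

np.random.seed(0)
array_data = np.arange(100, dtype=float)
array_index = np.sort(np.random.randint(100, size=(10, 2)))

# Boolean mask of shape (10, 100): True where a position falls inside [start, stop)
positions = np.arange(array_data.size)
mask = (positions &gt;= array_index[:, [0]]) &amp; (positions &lt; array_index[:, [1]])

# Replace everything outside each slice with NaN, then reduce row-wise
masked = np.where(mask, array_data, np.nan)
row_means = np.nanmean(masked, axis=1)  # one value per index pair (empty slices come out as NaN)
</code></pre>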
python|list|numpy|list-comprehension|numpy-slicing
1
375,317
52,711,175
Restarting a countdown based on another column
<p>I have a data frame with a variable called 'Countdown' that counts down the days in my data frame even though some days have multiple entries (rows).</p> <pre><code> full dates Countdown 0 2008-01-01 3652 1 2008-01-02 3651 2 2008-01-03 3650 3 2008-01-04 3649 4 2008-01-05 3648 5 2008-01-06 3647 </code></pre> <p>I would like the countdown variable to 'restart' after certain days. So I would like one countdown from 2008-01-01 to 2008-01-03 then 2008-01-03 to 2008-01-06, etc. </p> <p>Desired output:</p> <pre><code> full dates Countdown 0 2008-01-01 2 1 2008-01-02 1 2 2008-01-03 0 3 2008-01-04 2 4 2008-01-05 1 5 2008-01-06 0 </code></pre> <p>My dataframe is much larger but the idea is the same: between two given days I would like to start a countdown and then 'restart' it at another day (in the example it 'restarted' on 2008-01-03 and 2008-01-06.</p>
<p>You can do this with <code>pd.merge_asof</code>. Create a <code>DataFrame</code> of your right bin edges, then merge the closest edge and calculate the number of days until. </p> <pre><code>import pandas as pd # Right bin edges for your countdowns. dates = ['2008-01-03', '2008-01-06'] df_dates = pd.DataFrame({'date': pd.to_datetime(dates)}) # Convert original DataFrame to datetime df['full dates'] = pd.to_datetime(df['full dates']) # Merge and calculate the Countdown value df = pd.merge_asof(df, df_dates, left_on ='full dates', right_on ='date', direction='forward') df['Countdown'] = (df['date']-df['full dates']).dt.days df = df.drop(columns='date') # No longer needed </code></pre> <h1>Output: <code>df</code></h1> <pre><code> full dates Countdown 0 2008-01-01 2 1 2008-01-02 1 2 2008-01-03 0 3 2008-01-04 2 4 2008-01-05 1 5 2008-01-06 0 </code></pre>
python|pandas|dataframe
1
375,318
52,715,049
Comparing numpy array of dtype object
<p>My question is "why?:"</p> <pre><code>aa[0] array([[405, 162, 414, 0, array([list([1, 9, 2]), 18, (405, 18, 207), 64, 'Universal'], dtype=object), 0, 0, 0]], dtype=object) aaa array([[405, 162, 414, 0, array([list([1, 9, 2]), 18, (405, 18, 207), 64, 'Universal'], dtype=object), 0, 0, 0]], dtype=object) np.array_equal(aaa,aa[0]) False </code></pre> <p>Those arrays are completly identical.</p> <p>My minimal example doesn't reproduce this:</p> <pre><code>be=np.array([1],dtype=object) be array([1], dtype=object) ce=np.array([1],dtype=object) ce array([1], dtype=object) np.array_equal(be,ce) True </code></pre> <p>Nor does this one:</p> <pre><code>ce=np.array([np.array([1]),'5'],dtype=object) be=np.array([np.array([1]),'5'],dtype=object) np.array_equal(be,ce) True </code></pre> <h2>However, to reproduce my problem try this:</h2> <pre><code>be=np.array([[405, 162, 414, 0, np.array([list([1, 9, 2]), 18, (405, 18, 207), 64, 'Universal'],dtype=object),0, 0, 0]], dtype=object) ce=np.array([[405, 162, 414, 0, np.array([list([1, 9, 2]), 18, (405, 18, 207), 64, 'Universal'],dtype=object),0, 0, 0]], dtype=object) np.array_equal(be,ce) False np.array_equal(be[0],ce[0]) False </code></pre> <p>And I have no idea why those are not equal. And to add the bonus question, <strong>how do I compare them?</strong> </p> <p><strong>I need an efficient way to check if aaa is in the stack aa.</strong></p> <p><em>I'm not using <code>aaa in aa</code> because of <code>DeprecationWarning: elementwise == comparison failed; this will raise an error in the future.</code> and because it still returns <code>False</code> if anyone is wondering.</em></p> <hr> <h2>What else have I tried?:</h2> <pre><code>np.equal(be,ce) *** ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() np.all(be,ce) *** TypeError: only integer scalar arrays can be converted to a scalar index all(be,ce) *** TypeError: all() takes exactly one argument (2 given) all(be==ce) *** TypeError: 'bool' object is not iterable np.where(be==ce) (array([], dtype=int64),) </code></pre> <p>And these, which I can't get to run in the console, all evaluate to False, some giving the deprecation warning:</p> <pre><code>import numpy as np ce=np.array([[405, 162, 414, 0, np.array([list([1, 9, 2]), 18, (405, 18, 207), 64, 'Universal'],dtype=object),0, 0, 0]], dtype=object) be=np.array([[405, 162, 414, 0, np.array([list([1, 9, 2]), 18, (405, 18, 207), 64, 'Universal'],dtype=object),0, 0, 0]], dtype=object) print(np.any([bee in ce for bee in be])) print(np.any([bee==cee for bee in be for cee in ce])) print(np.all([bee in ce for bee in be])) print(np.all([bee==cee for bee in be for cee in ce])) </code></pre> <p>And of course <a href="https://stackoverflow.com/q/10580676/7322095">other questions</a> telling me this should work...</p>
<p>To make an element-wise comparison between the arrays, you can use <a href="https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.equal.html" rel="nofollow noreferrer"><strong><code>numpy.equal()</code></strong></a> with the keyword argument <code>dtype=numpy.object</code> as in :</p> <pre><code>In [60]: np.equal(be, ce, dtype=np.object) Out[60]: array([[True, True, True, True, array([ True, True, True, True, True]), True, True, True]], dtype=object) </code></pre> <p><strong>P.S.</strong> checked using NumPy version <code>1.15.2</code> and Python <code>3.6.6</code></p> <h2>edit</h2> <p>From the release notes for 1.15, </p> <p><a href="https://docs.scipy.org/doc/numpy-1.15.1/release.html#comparison-ufuncs-accept-dtype-object-overriding-the-default-bool" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.15.1/release.html#comparison-ufuncs-accept-dtype-object-overriding-the-default-bool</a></p> <pre><code>Comparison ufuncs accept dtype=object, overriding the default bool This allows object arrays of symbolic types, which override == and other operators to return expressions, to be compared elementwise with np.equal(a, b, dtype=object). </code></pre>
python|python-3.x|numpy
5
375,319
52,635,026
pandas dataframe filtering multiple columns and rows
<p>Given a dataframe with the following format:</p> <pre><code>TEST_ID | ATOMIC_NUMBER | COMPOSITION_PERCENT | POSITION 1 | 28 | 49.84 | 0 1 | 22 | 50.01 | 0 1 | 47 | 0.06 | 1 2 | 22 | 49.84 | 0 2 | 47 | 50.01 | 1 3 | 28 | 49.84 | 0 3 | 22 | 50.01 | 0 3 | 47 | 0.06 | 0 </code></pre> <p>I want to select only the tests that have ATOMIC_NUMBER of 22 AND 28 in POSITION 0, no more no less. So I'd like a filter that returns:</p> <pre><code>TEST_ID | ATOMIC_NUMBER | COMPOSITION_PERCENT | POSITION 1 | 28 | 49.84 | 0 1 | 22 | 50.01 | 0 1 | 47 | 0.06 | 1 </code></pre> <p>EDIT: I'm trying to convert this logic from SQL into python. Here's the SQL code:</p> <pre><code>select * from compositions where compositions.test_id in ( select a.test_id from ( select test_id from compositions where test_id in ( select test_id from ( select * from COMPOSITIONS where position == 0 ) group by test_id having count(test_id) = 2 ) and atomic_number = 22) a join ( select test_id from compositions where test_id in ( select test_id from ( select * from COMPOSITIONS where position == 0 ) group by test_id having count(test_id) = 2 ) and atomic_number = 28) b on a.test_id = b.test_id ) </code></pre>
<p>You can create a boolean series to capture test_ids and then index the df using the same.</p> <pre><code>s = df[df['POSITION'] == 0].groupby('TEST_ID').apply(lambda x: ((x['ATOMIC_NUMBER'].count() == 2 ) &amp; (sorted(x['ATOMIC_NUMBER'].values.tolist()) == [22,28])).all()) test_id = s[s].index.tolist() df[df['TEST_ID'].isin(test_id)] TEST_ID ATOMIC_NUMBER COMPOSITION_PERCENT POSITION 0 1 28 49.84 0 1 1 22 50.01 0 2 1 47 0.06 1 </code></pre>
python|pandas
1
375,320
52,561,762
Count occurrences of number from specific column in python
<p>I am trying to do the equivalent of a COUNTIF() function in excel. I am stuck at how to tell the .count() function to read from a specific column in excel.</p> <p>I have</p> <pre><code>df = pd.read_csv('testdata.csv') df.count('1') </code></pre> <p>but this does not work, and even if it did it is not specific enough. I am thinking I may have to use read_csv to read specific columns individually.</p> <p>Example: </p> <pre><code>Column name 4 4 3 2 4 1 </code></pre> <p>The function would output that there is one '1' and I could run it again and find out that there are three '4' answers, etc.</p> <p>I got it to work! Thank you.</p> <p>I used:</p> <pre><code>print(df.col.value_counts().loc['x']) </code></pre>
<p>If all else fails, why not try something like this?</p> <pre><code>import numpy as np import pandas import matplotlib.pyplot as plt df = pandas.DataFrame(data=np.random.randint(0, 100, size=100), columns=["col1"]) counters = {} for i in range(len(df)): if df.iloc[i]["col1"] in counters: counters[df.iloc[i]["col1"]] += 1 else: counters[df.iloc[i]["col1"]] = 1 print(counters) plt.bar(counters.keys(), counters.values()) plt.show() </code></pre>
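<p>For completeness — and closer to what the asker eventually used — the same counts can be read straight off <code>value_counts</code> (column name <code>col1</code> as in the snippet above):</p> <pre><code>counts = df["col1"].value_counts()
print(counts.get(1, 0))   # number of 1s; .get returns 0 if the value never occurs
print(counts.get(4, 0))   # number of 4s
</code></pre>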
python|pandas
0
375,321
46,620,844
how to compute pairwise distance among series of different length (na inside) efficiently?
<p><em>resuming this question</em>: <a href="https://stackoverflow.com/questions/24781461/compute-the-pairwise-distance-in-scipy-with-missing-values">Compute the pairwise distance in scipy with missing values</a></p> <p>Test case: I want to compute the pairwise distance of series with different lengths that are grouped together and I have to do it in the most efficient possible way (using euclidean distance).</p> <p>One way that makes it work could be this:</p> <pre><code>import pandas as pd import numpy as np from scipy.spatial.distance import pdist a = pd.DataFrame(np.random.rand(10, 4), columns=['a','b','c','d']) a.loc[0, 'a'] = np.nan a.loc[1, 'a'] = np.nan a.loc[0, 'c'] = np.nan a.loc[1, 'c'] = np.nan def dropna_on_the_fly(x, y): return np.sqrt(np.nansum(((x-y)**2))) pdist(a, dropna_on_the_fly) </code></pre> <p>but I feel this could be very inefficient as the built-in metrics of the <code>pdist</code> function are internally optimized, whereas a custom function is simply called for every pair.</p> <p>I have a hunch that there is a vectorized solution in <code>numpy</code> in which I <code>broadcast</code> the subtraction and then apply <code>np.nansum</code> as a NaN-resistant sum, but I am unsure how to proceed.</p>
<p>Inspired by <a href="https://stackoverflow.com/a/44157144/"><code>this post</code></a>, there would be two solutions.</p> <p><strong>Approach #1 :</strong> The vectorized solution would be -</p> <pre><code>ar = a.values r,c = np.triu_indices(ar.shape[0],1) out = np.sqrt(np.nansum((ar[r] - ar[c])**2,1)) </code></pre> <p><strong>Approach #2 :</strong> The memory-efficient and more performant one for large arrays would be -</p> <pre><code>ar = a.values b = np.where(np.isnan(ar),0,ar) mask = ~np.isnan(ar) n = b.shape[0] N = n*(n-1)//2 idx = np.concatenate(( [0], np.arange(n-1,0,-1).cumsum() )) start, stop = idx[:-1], idx[1:] out = np.empty((N),dtype=b.dtype) for j,i in enumerate(range(n-1)): dif = b[i,None] - b[i+1:] mask_j = (mask[i] &amp; mask[i+1:]) masked_vals = mask_j * dif out[start[j]:stop[j]] = np.einsum('ij,ij-&gt;i',masked_vals, masked_vals) # or simply : ((mask_j * dif)**2).sum(1) out = np.sqrt(out) </code></pre>
python|numpy|scipy|vectorization|pdist
3
375,322
46,606,633
tensorflow: gradients for a custom loss function
<p>I have an LSTM predicting time series values in tensorflow. The model is working using an MSE as a loss function. However, I'd like to be able to create a custom loss function where one of the error values is multiplied by two (therefore producing a higher error value).</p> <p>In my batch of size 10, I want the 3rd value of the first input to be multiplied by 2, but because this is time series, this corresponds to the second value in the second input and the first value in the third input. </p> <p>The error I get is: ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients</p> <p>How do I make the gradients?</p> <pre><code>def loss_function(y_true, y_pred, peak_value=3, weight=2): # peak value is where the multiplication happens on the first line # weight is the how much the error is multiplied by all_dif = tf.squared_difference(y_true, y_pred) # should be shape=[10,10] peak = [peak_value] * 10 listy = range(0, 10) c = [(i - j) % 10 for i, j in zip(peak, listy)] for i in range(0, 10): indices = [[i, c[i]]] values = [1.0] shape = [10,10] delta = tf.SparseTensor(indices, values, shape) all_dif = all_dif + tf.sparse_tensor_to_dense(delta) return tf.reduce_sum(all_dif) </code></pre>
<p>I believe the pseudo code would look something like this:</p> <pre><code>@tf.custom_gradient def loss_function(y_true, y_pred, peak_value=3, weight=2): ## your code def grad(dy): return dy * partial_derivative return loss, grad </code></pre> <p>Where <code>partial_derivative</code> is the analytically evaluated partial derivative of your loss function. If your loss function is a function of more than one variable, it will require a partial derivative with respect to each variable, I believe.</p> <p>If you need more information, the documentation is good: <a href="https://www.tensorflow.org/api_docs/python/tf/custom_gradient" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/custom_gradient</a></p> <p>And I've yet to find an example of this functionality embedded in a model that's not a toy.</p>
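<p>A concrete (if simplified) version of that sketch, with the gradient of a plain squared-error loss written out by hand — the peak/weight logic from the question would still have to be folded into both the loss and the gradient:</p> <pre><code>import tensorflow as tf

@tf.custom_gradient
def squared_error_loss(y_true, y_pred):
    loss = tf.reduce_sum(tf.squared_difference(y_true, y_pred))
    def grad(dy):
        # analytic partial derivatives w.r.t. y_true and y_pred
        d_true = dy * 2.0 * (y_true - y_pred)
        d_pred = dy * 2.0 * (y_pred - y_true)
        return d_true, d_pred
    return loss, grad
</code></pre>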
python|tensorflow|neural-network
1
375,323
46,519,539
How to select all non-NaN columns and non-NaN last column using pandas?
<p>Forgive me if the title is a little bit confusing.</p> <p>Assuming I have <code>test.h5</code>. Below is the result of reading this file using <code>pd.read_hdf('test.h5', 'testdata')</code></p> <pre><code> 0 1 2 3 4 5 6 0 123 444 111 321 NaN NaN NaN 1 12 234 113 67 21 32 900 3 212 112 543 321 45 NaN NaN </code></pre> <p>I want to select the last non-NaN column. My expected result is like this</p> <pre><code>0 321 1 900 2 45 </code></pre> <p>I also want to select all columns except the last non-NaN column. My expected result perhaps is like this. It might also be a numpy array, but I have not found any solution yet.</p> <pre><code> 0 1 2 3 4 5 6 0 123 444 111 1 12 234 113 67 21 32 3 212 112 543 321 </code></pre> <p>I searched online and found <code>df.iloc[:, :-1]</code> for reading all columns but the last one and <code>df.iloc[:, -1]</code> for reading the last column.</p> <p>My current result using these 2 commands is like this: 1. for reading all columns except the last one</p> <pre><code> 0 1 2 3 4 5 0 123 444 111 321 NaN NaN 1 12 234 113 67 21 32 3 212 112 543 321 45 NaN </code></pre> <p>2. for reading the last column</p> <pre><code>0 NaN 1 900 2 Nan </code></pre> <p>My question is, is there any command or query in pandas to address these conditions?</p>
<p>You can use sorted to satisfy your condition i.e </p> <pre><code>ndf = df.apply(lambda x : sorted(x,key=pd.notnull),1) </code></pre> <p>This will give </p> <pre> 0 1 2 3 4 5 6 0 NaN NaN NaN 123.0 444.0 111.0 321.0 1 12.0 234.0 113.0 67.0 21.0 32.0 900.0 3 NaN NaN 212.0 112.0 543.0 321.0 45.0 </pre> <p>Now you can select the last column i.e </p> <pre><code>ndf.iloc[:,-1] </code></pre> <pre> 0 321.0 1 900.0 3 45.0 Name: 6, dtype: float64 </pre> <pre><code>ndf.iloc[:,:-1].apply(lambda x : sorted(x,key=pd.isnull),1) </code></pre> <pre> 0 1 2 3 4 5 0 123.0 444.0 111.0 NaN NaN NaN 1 12.0 234.0 113.0 67.0 21.0 32.0 3 212.0 112.0 543.0 321.0 NaN NaN </pre>
python|pandas|numpy|dataframe
6
375,324
46,590,163
Numpy eigenvalues/eigenvectors seem wrong for complex valued matrix?
<p>This may be something really stupid, but I am getting a rather weird output with Numpy, version 1.12.1. I am trying to diagonalise a random symmetric matrix, then check the quality by transforming back the diagonal eigenvalue matrix, but it seems to fail for complex values. Basically:</p> <pre><code>A = np.random.random((3, 3)) A += A.T.conj() evals, evecs = np.linalg.eig(A) print np.isclose(np.dot(evecs, np.dot(np.diag(evals), evecs.T)), A).all() </code></pre> <p>prints <code>True</code> whereas</p> <pre><code>A = np.random.random((3, 3))+1.0j*np.random.random((3, 3)) A += A.T.conj() evals, evecs = np.linalg.eig(A) print np.isclose(np.dot(evecs, np.dot(np.diag(evals), evecs.T)), A).all() </code></pre> <p>prints <code>False</code>. I checked the values and it doesn't seem just some numerical inaccuracy, it seems dead wrong. Am I doing something fundamentally wrong? I know it works for Hermitian matrices when I use <code>np.linalg.eigh</code> as that's something I use very often, but why does <code>eig</code> fail for complex values along the diagonal too?</p>
<p>The answer to your question is that you failed to do the diagonalisation/matrix reconstruction properly. </p> <pre><code>A = np.random.random((3, 3))+1.0j*np.random.random((3, 3)) A += A.T.conj() evals, evecs = np.linalg.eig(A) from scipy.linalg import inv print(np.isclose(np.dot(evecs, np.dot(np.diag(evals), inv(evecs))), A).all()) </code></pre> <p>will tell you a neat little <code>True</code>, as it is the proper formula.</p> <p>Now, what happens if you call </p> <pre><code>print np.isclose(np.dot(evecs, np.dot(np.diag(evals), evecs.T)), A).all() #False </code></pre> <p>is that you multiply by the transpose of the eigenvector matrix, which is valid in the case of a real-valued normalised eigenvector matrix. The normalised part is luckily still true, so all you have to do to mimic the inverse is to take the complex conjugate of the matrix.</p> <pre><code>print(np.isclose(np.dot(evecs, np.dot(np.diag(evals), evecs.T.conj())), A).all()) #True </code></pre>
python|numpy|matrix
2
375,325
46,590,920
Deconvolutions/Transpose_Convolutions with tensorflow
<p>I am attempting to use <code>tf.nn.conv3d_transpose</code>, however, I am getting an error indicating that my filter and output shape is not compatible. </p> <ul> <li>I have a tensor of size [1,16,16,4,192]</li> <li>I am attempting to use a filter of [1,1,1,192,192]</li> <li>I believe that the output shape would be [1,16,16,4,192]</li> <li>I am using "same" padding and a stride of 1.</li> </ul> <p>Eventually, I want to have an output shape of [1,32,32,7,"does not matter"], but I am attempting to get a simple case to work first.</p> <p>Since these tensors are compatible in a regular convolution, I believed that the opposite, a deconvolution, would also be possible. </p> <p><strong>Why is it not possible to perform a deconvolution on these tensors. Could I get an example of a valid filter size and output shape for a deconvolution on a tensor of shape [1,16,16,4,192]</strong></p> <p>Thank you.</p>
<blockquote> <ul> <li>I have a tensor of size [1,16,16,4,192]</li> <li>I am attempting to use a filter of [1,1,1,192,192]</li> <li>I believe that the output shape would be [1,16,16,4,192]</li> <li>I am using "same" padding and a stride of 1.</li> </ul> </blockquote> <p>Yes the output shape will be [1,16,16,4,192]</p> <p>Here is a simple example showing that the dimensions are compatible:</p> <pre><code>import tensorflow as tf i = tf.Variable(tf.constant(1., shape=[1, 16, 16, 4, 192])) w = tf.Variable(tf.constant(1., shape=[1, 1, 1, 192, 192])) o = tf.nn.conv3d_transpose(i, w, [1, 16, 16, 4, 192], strides=[1, 1, 1, 1, 1]) print(o.get_shape()) </code></pre> <p>There must be some other problem in your implementation than the dimensions.</p>
tensorflow|conv-neural-network|deconvolution
1
375,326
46,175,344
Parsing a column of JSON strings
<p>I have a tab seperated flatfile, one column of which is JSON data stored as a string, e.g.</p> <pre><code>Col1 Col2 Col3 1491109818 2017-08-02 00:00:09.250 {"type":"Tipper"} 1491110071 2017-08-02 00:00:19.283 {"type":"HGV"} 1491110798 2017-08-02 00:00:39.283 {"type":"Tipper"} 1491110798 2017-08-02 00:00:39.283 \N ... </code></pre> <p>What I want to do is load the table as a pandas dataframe, and for col3 change the data to a string with just the information from the type key. Where there is no JSON or a JSON without a type key I want to return None.</p> <p>e.g.</p> <pre><code>Col1 Col2 Col3 1491109818 2017-08-02 00:00:09.250 Tipper 1491110071 2017-08-02 00:00:19.283 HGV 1491110798 2017-08-02 00:00:39.283 Tipper 1491110798 2017-08-02 00:00:39.283 None ... </code></pre> <p>The only way I can think to do this is with iterrows, however this is very slow when dealing with large files.</p> <pre><code>for index, row in df.iterrows(): try: df.loc[index, 'Col3'] = json.loads(row['Col3'])['type'] except: df.loc[index, 'Col3'] = None </code></pre> <p>Any suggestions on a quicker approach?</p>
<p>Using <em><code>np.vectorize</code></em> and <em><code>json.loads</code></em></p> <pre><code>import json def foo(x): try: return json.loads(x)['type'] except (ValueError, KeyError): return None v = np.vectorize(foo) df.Col3 = v(df.Col3) </code></pre> <p>Note that it is never recommended to use a bare <code>except</code>, as you can inadvertently catch and drop errors you didn't mean to. </p> <hr> <pre><code>df Col1 Col2 Col3 0 1491109818 2017-08-02 00:00:09.250 Tipper 1 1491110071 2017-08-02 00:00:19.283 HGV 2 1491110798 2017-08-02 00:00:39.283 Tipper 3 1491110798 2017-08-02 00:00:39.283 None </code></pre>
python|json|performance|pandas|dataframe
2
375,327
46,501,934
Using count occurrences of a string in a pandas series
<p>I have a pandas series of lists with collections of words in them. I am trying to find the frequency of a particular word in each list. For e.g., the series is </p> <pre><code>0 [All, of, my, kids, have, cried, nonstop, when... 1 [We, wanted, to, get, something, to, keep, tra... 2 [My, daughter, had, her, 1st, baby, over, a, y... 3 [One, of, babys, first, and, favorite, books, ... 4 [Very, cute, interactive, book, My, son, loves... </code></pre> <p>I want to get the count of 'kids' in each row. I have tried </p> <pre><code>series.count('kids') </code></pre> <p>which gives me an error saying 'Level kids must be same as name (None)'</p> <pre><code>series.str.count('kids') </code></pre> <p>gives me NaN values.</p> <p>How should I go about getting the counts?</p>
<p>On your original series, use <code>str.findall</code> + <code>str.len</code>:</p> <pre><code>print(series) 0 All of my kids have cried nonstop when 1 We wanted to get something to keep tra 2 My daughter had her 1st baby over a y 3 One of babys first and favorite books 4 Very cute interactive book My son loves print(series.str.findall(r'\bkids\b')) 0 [kids] 1 [] 2 [] 3 [] 4 [] dtype: object counts = series.str.findall(r'\bkids\b').str.len() print(counts) 0 1 1 0 2 0 3 0 4 0 dtype: int64 </code></pre>
python|string|pandas|count|series
2
375,328
46,545,401
label in python docstring
<p>I'm using sphinx autodoc to translate my docstring to a nice documentation page. In the docstring I'm following numpy's docstring guidelines by using the sphinx napoleon extension. I'm wondering about the following: if I have an equation like</p> <pre><code>""" This is a very important equation which is used in the code .. math:: a+b=c :label: important_eq """ </code></pre> <p>the autodoc doesn't recognize the <code>:label:</code>. Is my formatting wrong, or can autodoc / mathjax / napoleon not deal with labels in equations?</p>
<p>Be careful with whitespace and indentation. This works:</p> <pre><code>.. math:: :label: important_eq a+b=c </code></pre> <p>This works too (when the math content is only one line of text, it can be given as a directive argument):</p> <pre><code>.. math:: a+b=c :label: important_eq </code></pre>
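<p>Once labelled, the equation can also be cross-referenced elsewhere in the documentation with the <code>:eq:</code> role (a small illustrative snippet, assuming the label above):</p> <pre><code>As shown in equation :eq:`important_eq`, the two terms add up to ``c``.
</code></pre>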
python-sphinx|mathjax|autodoc|numpydoc
1
375,329
46,194,971
Add legend names to a SVM plot in matplotlib
<p>I made a SVM plot from the Iris-dataset by using matplotlib and mlxtend in Jupyter notebook. I am trying to get the Species name on the legend of the plot instead of 0, 1 and 2. So far my code is :</p> <pre><code>from sklearn import svm from mlxtend.plotting import plot_decision_regions X = iris[['SepalLengthCm', 'SepalWidthCm']] y = iris['SpecieID'] clf = svm.SVC(decision_function_shape = 'ovo') clf.fit(X.values, y.values) # Plot Decision Region using mlxtend's awesome plotting function plot_decision_regions(X=X.values, y=y.values, clf=clf, legend=2) # Update plot object with X/Y axis labels and Figure Title plt.xlabel(X.columns[0], size=14) plt.ylabel(X.columns[1], size=14) plt.title('SVM Decision Region Boundary', size=16) </code></pre> <p>And results in this plot : <a href="https://i.stack.imgur.com/psDcC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/psDcC.png" alt="enter image description here"></a></p> <p>I couldn't find how to replace 0,1 and 2 by the species names (Iris-setosa, Iris-versicolor and Iris-virginica).</p> <p>I created the pandas DataFrame by :</p> <pre><code>import pandas as pd iris = pd.read_csv("Iris.csv") # the iris dataset is now a Pandas DataFrame iris = iris.assign(SepalRatio = iris['SepalLengthCm'] / iris['SepalWidthCm']).assign(PetalRatio = iris['PetalLengthCm'] / iris['PetalWidthCm']).assign(SepalMultiplied = iris['SepalLengthCm'] * iris['SepalWidthCm']).assign(PetalMultiplied = iris['PetalLengthCm'] * iris['PetalWidthCm']) d = {"Iris-setosa" : 0, "Iris-versicolor": 1, "Iris-virginica": 2} iris['SpecieID'] = iris['Species'].map(d).fillna(-1) </code></pre>
<p>Another one with the help of handles and labels of current plot axes i.e </p> <pre><code>handles, labels = plt.gca().get_legend_handles_labels() plt.legend(handles, list(map(d.get, [int(i) for i in labels])) , loc= 'upper left') #Map the values of current labels with dictionary and pass it as labels parameter. plt.show() </code></pre> <p>Sample output : </p> <p><a href="https://i.stack.imgur.com/yL7W9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yL7W9.png" alt="enter image description here"></a></p>
python|pandas|matplotlib|jupyter-notebook
2
375,330
46,557,241
Pad a dataframe with pane data
<p>I have a dataframe like this.</p> <pre><code>import pandas as pd df = pd.DataFrame({'User':['A','A','A','A','B', 'B'], 'Month':['2017-01-01','2017-03-01','2017-05-01','2017-09-01','2017-01-01','2017-05-01'], 'count':[2,2,2,2,5,5]}) </code></pre> <p>I want to pad the data so that it looks like this</p> <pre><code>df = pd.DataFrame({'User':['A','A','A','A','A','A','A','A','A','A','A','A','B','B','B','B','B','B','B','B','B','B','B','B'], 'Month':['2017-01-01','2017-02-01','2017-03-01','2017-04-01','2017-05-01','2017-06-01','2017-07-01','2017-08-01','2017-09-01','2017-10-01','2017-11-01','2017-12-01','2017-01-01','2017-02-01','2017-03-01','2017-04-01','2017-05-01','2017-06-01','2017-07-01','2017-08-01','2017-09-01','2017-10-01','2017-11-01','2017-12-01'], 'count':[2,0,2,0,2,0,0,0,2,0,0,0,5,0,0,0,5,0,0,0,0,0,0,0]}) </code></pre>
<pre><code>mux = pd.MultiIndex.from_product([ df.User.unique(), pd.date_range('2017-01-01', periods=12, freq='MS') ], names=['User', 'Month']) df.set_index(['User', 'Month']).reindex(mux, fill_value=0) \ .swaplevel(0, 1).reset_index() Month User count 0 2017-01-01 A 2 1 2017-02-01 A 0 2 2017-03-01 A 2 3 2017-04-01 A 0 4 2017-05-01 A 2 5 2017-06-01 A 0 6 2017-07-01 A 0 7 2017-08-01 A 0 8 2017-09-01 A 2 9 2017-10-01 A 0 10 2017-11-01 A 0 11 2017-12-01 A 0 12 2017-01-01 B 5 13 2017-02-01 B 0 14 2017-03-01 B 0 15 2017-04-01 B 0 16 2017-05-01 B 5 17 2017-06-01 B 0 18 2017-07-01 B 0 19 2017-08-01 B 0 20 2017-09-01 B 0 21 2017-10-01 B 0 22 2017-11-01 B 0 23 2017-12-01 B 0 </code></pre>
python|python-3.x|pandas|indexing
5
375,331
46,547,319
Error when parsing graph_def from string
<p>I am trying to run a very simple saving of a Tensorflow graph as .pb file, but I have this error when parsing it back:</p> <pre><code>Traceback (most recent call last): File "test_import_stripped_bm.py", line 28, in &lt;module&gt; graph_def.ParseFromString(fileContent) File "/usr/local/lib/python3.5/dist-packages/google/protobuf/message.py", line 185, in ParseFromString self.MergeFromString(serialized) File "/usr/local/lib/python3.5/dist-packages/google/protobuf/internal/python_message.py", line 1069, in MergeFromString if self._InternalParse(serialized, 0, length) != length: File "/usr/local/lib/python3.5/dist-packages/google/protobuf/internal/python_message.py", line 1105, in InternalParse pos = field_decoder(buffer, new_pos, end, self, field_dict) File "/usr/local/lib/python3.5/dist-packages/google/protobuf/internal/decoder.py", line 633, in DecodeField if value._InternalParse(buffer, pos, new_pos) != new_pos: File "/usr/local/lib/python3.5/dist-packages/google/protobuf/internal/python_message.py", line 1105, in InternalParse pos = field_decoder(buffer, new_pos, end, self, field_dict) File "/usr/local/lib/python3.5/dist-packages/google/protobuf/internal/decoder.py", line 612, in DecodeRepeatedField if value.add()._InternalParse(buffer, pos, new_pos) != new_pos: File "/usr/local/lib/python3.5/dist-packages/google/protobuf/internal/python_message.py", line 1105, in InternalParse pos = field_decoder(buffer, new_pos, end, self, field_dict) File "/usr/local/lib/python3.5/dist-packages/google/protobuf/internal/decoder.py", line 743, in DecodeMap if submsg._InternalParse(buffer, pos, new_pos) != new_pos: File "/usr/local/lib/python3.5/dist-packages/google/protobuf/internal/python_message.py", line 1095, in InternalParse new_pos = local_SkipField(buffer, new_pos, end, tag_bytes) File "/usr/local/lib/python3.5/dist-packages/google/protobuf/internal/decoder.py", line 850, in SkipField return WIRETYPE_TO_SKIPPER[wire_type](buffer, pos, end) File "/usr/local/lib/python3.5/dist-packages/google/protobuf/internal/decoder.py", line 799, in _SkipGroup new_pos = SkipField(buffer, pos, end, tag_bytes) File "/usr/local/lib/python3.5/dist-packages/google/protobuf/internal/decoder.py", line 850, in SkipField return WIRETYPE_TO_SKIPPER[wire_type](buffer, pos, end) File "/usr/local/lib/python3.5/dist-packages/google/protobuf/internal/decoder.py", line 814, in _SkipFixed32 raise _DecodeError('Truncated message.') google.protobuf.message.DecodeError: Truncated message. </code></pre> <p>This is the code that I use to write it to .pb:</p> <pre><code>import tensorflow as tf builder = tf.saved_model.builder.SavedModelBuilder('models/TEST-3') w1 = tf.Variable(tf.random_normal((2,2)), name="w1") w2 = tf.Variable(tf.random_normal((2,2)), name="w2") sess = tf.Session() sess.run(tf.global_variables_initializer()) builder.add_meta_graph_and_variables(sess, tags=[tf.saved_model.tag_constants.SERVING], clear_devices = True) builder.save() sess.close() </code></pre> <p>And this is the code to parse it:</p> <pre><code>import tensorflow as tf import os model_path = os.path.join('models/TEST-3', 'saved_model.pb') with open(model_path, mode='rb') as f: fileContent = f.read() graph_def = tf.GraphDef() graph_def.ParseFromString(fileContent) </code></pre> <p>To see the exact error I had to do </p> <pre><code>export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python </code></pre> <p>before running it. Also I've tried this on python 2 and 3 with different tensorflow versions, I am running on Ubuntu 16.04. 
On python 2.7 with tensorflow 0.9.0rc0 I managed to get a slightly different error:</p> <pre><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/local/lib/python2.7/dist-packages/google/protobuf/message.py", line 185, in ParseFromString self.MergeFromString(serialized) File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 1091, in MergeFromString if self._InternalParse(serialized, 0, length) != length: File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 1127, in InternalParse pos = field_decoder(buffer, new_pos, end, self, field_dict) File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/decoder.py", line 633, in DecodeField if value._InternalParse(buffer, pos, new_pos) != new_pos: File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 1127, in InternalParse pos = field_decoder(buffer, new_pos, end, self, field_dict) File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/decoder.py", line 612, in DecodeRepeatedField if value.add()._InternalParse(buffer, pos, new_pos) != new_pos: File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 1127, in InternalParse pos = field_decoder(buffer, new_pos, end, self, field_dict) File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/decoder.py", line 612, in DecodeRepeatedField if value.add()._InternalParse(buffer, pos, new_pos) != new_pos: File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/python_message.py", line 1127, in InternalParse pos = field_decoder(buffer, new_pos, end, self, field_dict) File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/decoder.py", line 489, in DecodeRepeatedField value.append(_ConvertToUnicode(buffer[pos:new_pos])) File "/usr/local/lib/python2.7/dist-packages/google/protobuf/internal/decoder.py", line 469, in _ConvertToUnicode return local_unicode(byte_str, 'utf-8') UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 18: 'utf8' codec can't decode byte 0x80 in position 18: invalid start byte in field: tensorflow.FunctionDef.Node.ret </code></pre> <p>I am able to parse other .pb graph with this code for example this one <a href="https://github.com/taey16/tf/blob/master/imagenet/classify_image_graph_def.pb" rel="noreferrer">https://github.com/taey16/tf/blob/master/imagenet/classify_image_graph_def.pb</a></p> <p>Thanks in advance.</p>
<p>The problem here is that you are trying to parse a <a href="https://github.com/tensorflow/tensorflow/blob/635196732151e6d8638c189c52f4c4336ede81b6/tensorflow/core/protobuf/saved_model.proto" rel="noreferrer"><code>SavedModel</code></a> protocol buffer as if it were a <a href="https://github.com/tensorflow/tensorflow/blob/635196732151e6d8638c189c52f4c4336ede81b6/tensorflow/core/framework/graph.proto" rel="noreferrer"><code>GraphDef</code></a>. Although a <code>SavedModel</code> contains <code>GraphDef</code>, they have different binary formats. The following code, using <a href="https://www.tensorflow.org/api_docs/python/tf/saved_model/loader/load" rel="noreferrer"><code>tf.saved_model.loader.load()</code></a> should work: </p> <pre><code>import tensorflow as tf with tf.Session(graph=tf.Graph()) as sess: tf.saved_model.loader.load( sess, [tf.saved_model.tag_constants.SERVING], "models/TEST-3") </code></pre>
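<p>If a plain <code>GraphDef</code> is still needed afterwards (the original goal of the parsing code), note that <code>load()</code> returns a <code>MetaGraphDef</code>, which embeds one — a short sketch:</p> <pre><code>import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    meta_graph_def = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], "models/TEST-3")
    graph_def = meta_graph_def.graph_def  # the GraphDef wrapped inside the SavedModel
</code></pre>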
python|python-2.7|python-3.x|tensorflow|protocol-buffers
13
375,332
46,581,018
Converting many string values to categories
<p>I have a data frame with one column full of string values. They need to be converted into categories. Due to the huge amount of distinct values it would be inconvenient to define the categories in a dictionary. Is there any other way in pandas to do that?</p>
<p>I applied the command below and it works:</p> <pre><code>df['kategorie'] = df['kategorie'].astype('category') </code></pre>
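<p>As a follow-up (not part of the original answer): once the column is categorical, the integer codes and the inferred category labels are available without hand-writing any dictionary:</p> <pre><code>df['kategorie'] = df['kategorie'].astype('category')
codes = df['kategorie'].cat.codes          # integer code for each row
labels = df['kategorie'].cat.categories    # the automatically inferred categories
</code></pre>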
python|pandas
0
375,333
46,332,479
Store netCDF data in GeoDataFrame
<p>I need to perform some geometric operations with geometries from another source on a netCDF-file. Therefore I store the geometries (<code>shapely.geometry.Polygon</code>) from the other source in a <code>geopandas.GeoDataFrame</code>.</p> <p>Next is to read a <code>netCDF</code> file into a <code>GeoDataFrame</code>. The recipe seems clear: read the <code>netCDF</code> with <code>xarray</code>, store it into a <code>pandas.DataFrame</code>, perform a <code>shapely.geometry.Point</code> operation on the extracted lat/lon data and convert it into a <code>GeoDataFrame</code>.</p> <p>Afterwards, I will do some statistics with geometry-operators.</p> <hr> <p>When I read the <code>netCDF</code> file with <code>xarray</code> (<a href="https://stackoverflow.com/questions/14035148/import-netcdf-file-to-pandas-dataframe">see here</a>)</p> <pre><code>import xarray as xr dnc = xr.open_dataset(ff) df = dnc.to_dataframe() </code></pre> <p>I get</p> <pre><code>&gt;&gt;&gt;&gt; dnc &lt;xarray.Dataset&gt; Dimensions: (lat: 16801, lon: 19201) Coordinates: * lat (lat) float32 -32.0 -31.9992 -31.9983 -31.9975 -31.9967 ... * lon (lon) float32 -73.0 -72.9992 -72.9983 -72.9975 -72.9967 ... Data variables: hgt (lat, lon) int16 0 0 0 4 0 5 0 9 9 8 0 0 0 0 0 0 0 0 0 0 0 0 0 ... &gt;&gt;&gt; dnc.hgt.size 322596001 &gt;&gt;&gt; dnc.lat.size 16801 &gt;&gt;&gt; dnc.lon.size 19201 </code></pre> <p>and</p> <pre><code>&gt;&gt;&gt; df.head() hgt lat lon -32.0 -73.000000 0 -72.999168 0 -72.998337 0 -72.997498 4 -72.996666 0 </code></pre> <p>In <code>df</code> there is no access on <code>lat</code>and <code>lon</code>. I also have problems to understand the <em>partially</em> empty column <code>lat</code>. So I think that the <code>shapely.geometry.Point((lon, lat))</code> must be performed on <code>dnc</code> for every combination of <code>lon</code> and <code>lat</code>. Is that right? Any ideas how to code it?</p>
<p>Like @jhamman mentioned in the comments, your lats and lons are indexes in your pandas frame. So starting with that</p> <pre><code>import pandas as pd import geopandas as gpd from shapely.geometry import Point from io import StringIO s = StringIO(''' lat,lon,hgt -32.0,-73.000000,0 -32.0,-72.999168,0 -32.0,-72.998337,0 -32.0,-72.997498,4 -32.0,-72.996666,0 ''') df = pd.read_csv(s) df = df.set_index(['lat', 'lon']) </code></pre> <p>We'll first reset the frame's index</p> <p><code>df = df.reset_index()</code></p> <p>then we'll create our geometry. i.e. shapely points with a list comp</p> <p><code>geom = [Point(x,y) for x, y in zip(df['lon'], df['lat'])]</code></p> <p>and now we convert our Pandas DataFrame to a GeoPandas GeoDataFrame</p> <pre><code>gdf = gpd.GeoDataFrame(df, geometry=geom) print(gdf.head()) lat lon hgt geometry 0 -32.0 -73.000000 0 POINT (-73 -32) 1 -32.0 -72.999168 0 POINT (-72.99916800000001 -32) 2 -32.0 -72.998337 0 POINT (-72.99833700000001 -32) 3 -32.0 -72.997498 4 POINT (-72.99749799999999 -32) 4 -32.0 -72.996666 0 POINT (-72.996666 -32) </code></pre>
python|python-xarray|geopandas|shapely|netcdf4
3
375,334
46,550,371
Python: np.nanpercentile, which datatype does my dataframe need to have?
<p>I have a panda dataframe of type object.</p> <pre><code>df.dtypes Out: data object stimulus object trial object dtype: object df.head() Out: data stimulus trial 0 2 -2 1 1 2 -2 2 2 2 -2 3 3 2 -2 4 4 2 -2 5 </code></pre> <p>I want to get a specific percentile of my dataset. When I use this code, I get NaN in my output, probably because I have NaN in my dataset itself, which python interprets as infinity, so it will get problems when calculating higher percentiles.</p> <pre><code>df.groupby('stimulus').data.apply(lambda x: np.percentile(x, q=66)) Out: stimulus -2.00 2.0 -1.75 2.9 -1.00 1.0 -0.75 1.0 -0.50 0.0 0.50 7.8 1.00 9.9 1.25 11.9 1.75 13.9 2.50 NaN </code></pre> <p>I already found out that I would need to use np.nanpercentile() instead, but when I use np.nanpercentile() instead then I get this error. I read somewhere else that np.nanpercentile() checks the data format of the input array and complains if it doesn't fit. Do you know how and to which format I need to change my data?</p> <pre><code>df.groupby('stimulus').data.apply(lambda x: np.nanpercentile(x, q=66)) Out: TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' </code></pre>
<p>This did the job for me in the end:</p> <pre><code>df = df.astype(float) </code></pre>
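<p>As a side note, if some of the object entries cannot be parsed as numbers, <code>pd.to_numeric</code> with <code>errors='coerce'</code> is a more forgiving alternative (unparseable values become NaN, which <code>np.nanpercentile</code> then ignores):</p> <pre><code>df['data'] = pd.to_numeric(df['data'], errors='coerce')
</code></pre>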
python|pandas|dataframe|percentile
0
375,335
46,585,698
Verify that points lie on a grid of specified pitch
<p>While I am trying to solve this problem in a context where numpy is used heavily (and therefore an elegant numpy-based solution would be particularly welcome) the fundamental problem has nothing to do with numpy (or even Python) as such.</p> <p>The task is to create an automated test for an algorithm which is supposed to produce points distributed on a grid whose pitch is specified as an input to the algorithm. The <em>absolute</em> positions of the points do not matter, but their <em>relative</em> positions do. For example, following</p> <pre><code>collection_of_points = algorithm(data, pitch=[1.3, 1.5, 2]) </code></pre> <p><code>collection_of_points</code> should contain only points whose x-coordinates differ by multiples of 1.3, whose y-coordinates differ by multiples of 1.5 and whose z-coordinates differ by multiples of 2.</p> <p>The test should verify that this condition is satisfied.</p> <p>One thing that I have tried, which doesn't seem too ugly, but doesn't work is</p> <pre><code>points = algo(data, pitch=requested_pitch) for p1, p2 in itertools.combinations(points, 2): distance_between_points = np.array(p2) - np.array(p1) assert np.allclose(distance_between_points % requested_pitch, 0) </code></pre> <p>[ Aside for those unfamiliar with python or numpy:</p> <ul> <li><code>itertools.combinations(points, 2)</code> is a simple way of iterating through all pairs of points</li> <li>Arithmetic operations on <code>np.array</code>s are performed elementwise, so <code>np.array([5,6,7]) % np.array([2,3,4])</code> evaluates to <code>np.array([1, 0, 3])</code> via <code>np.array([5%2, 6%3, 7%4])</code></li> <li><code>np.allclose</code> checks whether all corresponding elements in the two inputs arrays are <em>approximately</em> equal, and numpy automatically pretends that the <code>0</code> which is passed in as the second argument, was really an all-zero array of the correct size</li> </ul> <p>]</p> <p>To see why the idea shown above fails, consider a desired pitch of <code>3</code> and two points which are separated by <code>8.9999999</code> in the relevant dimension. <code>8.999999 % 3</code> is around <code>2.999999</code> which is nowhere near the required <code>0</code>.</p> <p>In all of this, I can't help feeling that I'm missing something obvious or that I'm re-inventing some wheel.</p> <p>Can you suggest an elegant way of writing such a check?</p>
<p>Change your assertion to:</p> <pre><code>np.all(np.logical_or(np.isclose(x % y, 0), np.isclose((x % y) - y, 0))) </code></pre> <p>If you want to make it more readable, you should functionalize the statement. Something like: </p> <pre><code>def is_multiple(x, y, rtol=1e-05, atol=1e-08): """ Test if x is a multiple of y. """ remainder = x % y is_zero = np.isclose(remainder, 0., rtol, atol) is_y = np.isclose(remainder, y, rtol, atol) return np.logical_or(is_zero, is_y) </code></pre> <p>And then: </p> <pre><code> assert np.all(is_multiple(distance_between_points, requested_pitch)) </code></pre>
numpy|floating-point
1
375,336
46,588,641
Python list into numpy array
<p>I have seen lots of example of lists into arrays, but no examples of lists in this format into arrays, which is strange because the list format I present is the standard go-to way of defining a graph, point-to-point mapping that you would find in any table, csv, database, etc. I tried everything <a href="https://docs.scipy.org/doc/numpy-1.13.0/user/basics.creation.html" rel="nofollow noreferrer">here</a> with no luck. Thanks for any ideas.</p> <p><a href="https://i.stack.imgur.com/Mrq71.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mrq71.png" alt="Input and desired output"></a></p> <pre><code>input= [[A, A, 0], [A, B, 5], [A, C, 3], [B, A, 5], [B, B, 0], [B, C, 6], [C, A, 3], [C, B, 6], [C, C, 0]] desiredOutput= [[0, 5, 3], [5, 0, 6], [3, 6, 0]] </code></pre>
<p>Here's one way to produce your adjacency matrix as a 2D Numpy array. It assumes that the input graph data is correct, in particular, that its length is a perfect square. </p> <pre><code>import numpy as np graph_data = [ ['A', 'A', 0], ['A', 'B', 5], ['A', 'C', 3], ['B', 'A', 5], ['B', 'B', 0], ['B', 'C', 6], ['C', 'A', 3], ['C', 'B', 6], ['C', 'C', 0], ] size = np.sqrt(len(graph_data)).astype(np.int) adjacency_matrix = np.array(graph_data)[:,-1].astype(np.int).reshape(size, size) print(adjacency_matrix) </code></pre> <p><strong>output</strong></p> <pre><code>[[0 5 3] [5 0 6] [3 6 0]] </code></pre> <p>The above code also assumes that the graph data is in the proper order, since it ignores the letters. Of course, that's easily handled by sorting the graph data before attempting to convert it to an array. Eg,</p> <pre><code>graph_data.sort() </code></pre> <hr> <p>Here's a pure Python version that outputs a list of tuples:</p> <pre><code>graph_data = [ ['A', 'A', 0], ['A', 'B', 5], ['A', 'C', 3], ['B', 'A', 5], ['B', 'B', 0], ['B', 'C', 6], ['C', 'A', 3], ['C', 'B', 6], ['C', 'C', 0], ] graph_data.sort() size = int(len(graph_data) ** 0.5) it = iter(row[-1] for row in graph_data) print(list(zip(*[it]*size))) </code></pre> <p><strong>output</strong></p> <pre><code>[(0, 5, 3), (5, 0, 6), (3, 6, 0)] </code></pre>
python|python-3.x|numpy
2
375,337
46,244,901
Tensorflow: GPU gradients for nested map_fn
<p>The piece of code below works fine when I am using the CPU as device, however it fails when using GPU. This is the error I am getting:</p> <blockquote> <p>InvalidArgumentError (see above for traceback): Cannot assign a device for operation 'Adam/update_Variable/Cast_5': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.</p> </blockquote> <p>Based on that, I am assuming there are no GPU gradients for nested map_fn calls, is that the case? If true, is there any way I can implement that same piece of code, so it works on the GPU, while keeping the two nested functions?</p> <p>Thanks.</p> <pre><code>import numpy as np import tensorflow as tf def loop_inner(x): return tf.reduce_sum(tf.square(x)) def loop_outer(x): return tf.map_fn(lambda x: loop_inner(x), x) np.random.seed(10) io, d, k, m = 2, 4, 3, 2 A = np.random.random((io, d, k, m)) with tf.device('/cpu:0'): sess = tf.Session() A = tf.Variable(A) B = tf.map_fn(lambda x: loop_outer(x), A) L = tf.reduce_sum(B) optim = tf.train.AdamOptimizer(learning_rate=0.1).minimize(L) sess.run(tf.global_variables_initializer()) for i in range(1000): sess.run(optim) print(sess.run(L)) </code></pre>
<p>I don't think this is related to the <strong>nested map_fn</strong> since the simple non-nested map_fn can cause that error: </p> <pre><code>import numpy as np import tensorflow as tf def my_fn(x, y): return x * y with tf.device('/gpu:0'): a = np.array([[1, 2, 3], [2, 4, 1], [5, 1, 7]]) b = np.array([[1, -1, -1], [1, 1, 1], [-1, 1, -1]]) elems = (a, b) sess = tf.Session() B = tf.map_fn(lambda x: my_fn(x[0], x[1]), elems, dtype=tf.int32) sess.run(tf.global_variables_initializer()) print(sess.run(B)) </code></pre> <p>The error is like this: </p> <blockquote> <p>InvalidArgumentError (see above for traceback): Cannot assign a device for operation 'map/TensorArray_2': Could not satisfy explicit device specification '' because the node was colocated with a group of nodes that required incompatible device '/device:GPU:0' Colocation Debug Info: Colocation group had the following types and devices: Mul: CPU TensorArrayGatherV3: GPU CPU Range: GPU CPU TensorArrayWriteV3: CPU TensorArraySizeV3: GPU CPU Enter: GPU CPU TensorArrayV3: CPU Const: CPU [[Node: map/TensorArray_2 = TensorArrayV3clear_after_read=true, dtype=DT_INT32, dynamic_size=false, element_shape=, tensor_array_name=""]]</p> </blockquote> <p>If the gpu is changed to cpu everything works well. </p>
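<p>A common workaround when only some of the ops created by <code>map_fn</code> have GPU kernels is to allow soft placement instead of pinning everything to the GPU — a sketch, assuming the rest of the snippet stays as above:</p> <pre><code>config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
sess = tf.Session(config=config)  # ops without a GPU kernel fall back to the CPU instead of raising
</code></pre>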
tensorflow|gpu|gradient
0
375,338
46,269,294
Convert a list of strings into a dataframe
<p>I have a list as follows:</p> <pre><code>data_content=['Country', 'Capital', 'Currency', 'US', 'Washington', 'USD', 'India', 'Delhi', 'Rupee'] </code></pre> <p>I want to have it as follows:</p> <pre><code>Country Capital Currency ------------------------ US Washington USD India Delhi Rupee </code></pre> <p>My aim is to then export this table to panda dataframe.</p>
<p>You can proceed as follows: </p> <ol> <li>Split the initial list into groups of 3 elements,</li> <li>Take the first 3 elements as columns and the rest as data</li> <li>Use the <code>pd.DataFrame</code> construct to create a <code>pandas.DataFrame</code>: </li> </ol> <p>Here's how the code looks like: </p> <pre><code>import pandas as pd data_content=['Country', 'Capital', 'Currency', 'US', 'Washington', 'USD', 'India', 'Delhi', 'Rupee'] data = list(zip(*[iter(data_content)]*3)) pd.DataFrame(data[1:], columns=data[0]) </code></pre> <p>Out: </p> <pre><code> Country Capital Currency 0 US Washington USD 1 India Delhi Rupee </code></pre>
python|list|pandas|numpy|dataframe
1
375,339
46,268,344
Numpy: Alternative ways to construct a matrix of points given the matrix of each coordinate separately
<p>Let x, y, z be matrix representations, so that (x[i, j], y[i, j], z[i, j]) corresponds to a certain point. </p> <p>Instead of having 3 variables we want to have just one variable (Points) where "Points[i,j]=(x[i,j],y[i,j],z[i,j])" and "Points[i,j,0]=x[i,j]"</p> <p>Example: </p> <pre><code>import numpy as np x = np.array([[1, 1], [2, 2]]) y = np.array([[1, 2], [1, 2]]) z = np.array([[3, 4], [5, 6]]) Points = np.array([[ [1, 1, 3], [1, 2, 4] ], [2, 1, 5], [2, 2, 6] ]]) </code></pre> <p>Currently I have thought of some solutions: </p> <p>1st Solution:</p> <pre><code>from itertools import izip Temp_List=[] for xi, yi, zi in izip(x, y, z): Temp_List.append([(xij, yij, zij) for xij, yij, zij in izip(xi, yi, zi)]) Points=np.array(Temp_List) </code></pre> <p>I know that unpacking a tuple to pack it again is not very smart, but is for the sake of making it more readable and prepare the next solution</p> <p>2nd Solution: # one-liner</p> <pre><code>from itertools import izip Points=np.array([zip(xi, yi, zi) for xi, yi, zi in izip(x,y,z)]) </code></pre> <p>I really like this option. However in this solution I'm concerned about readability. Maybe it's just me but I feel that it is not that obvious that the list comprehension generates something similar to Points in the Example. Unless you are familiarized with the difference between izip and zip.</p> <p>It's obvious another solution is using indexes to iterate over the elements x, y and z like in other languages ( for i in xrange(...) : for j in xrange(...): do stuff ... )</p> <p>Concluding: Is there another way of generating the Points variable from x,y,z using a numpy function ( or not) that improves either Readability, Memory consumption or performance ? </p>
<p>You can use numpy's <code>stack</code> function:</p> <pre><code>import numpy as np x = np.array([ [1, 1], [2, 2], ]) y = np.array([ [1, 2], [1, 2], ]) z = np.array([ [3, 4], [5, 6], ]) points = np.stack([x, y, z], axis=2) </code></pre> <p><code>stack</code> with the <code>axis</code> keyword supersedes the old <code>vstack</code>, <code>hstack</code> and <code>dstack</code> functions, which are now deprecated.</p>
python|numpy|matrix|coordinates
2
375,340
46,466,754
Transposing each row data in to column data for each ID in Pandas dataframe
<p>My pandas datframe looks like this</p> <pre><code>Id Length1 height1 Length2 height2 1 100 20 80 30 2 70 10 60 15 </code></pre> <p>ALL Id's data need to be grouped for length/height of each measurement.</p> <pre><code>Id 0 1 2 3 1 100 20 70 10 2 80 30 60 15 </code></pre> <p>How to transpose the rows in to columns and columns in to rows in dataframe</p>
<p><strong>Setup</strong> </p> <pre><code>df = pd.DataFrame(dict( Length1=[1, 2, 3], Height1=[4, 5, 6], Length2=[7, 8, 9], Height2=[0, 1, 2] ))['Length1 Height1 Length2 Height2'.split()] df Length1 Height1 Length2 Height2 0 1 4 7 0 1 2 5 8 1 2 3 6 9 2 </code></pre> <hr> <pre><code>pd.DataFrame({n: g.values.ravel() for n, g in df.groupby(lambda x: x[-1], 1)}).T 0 1 2 3 4 5 1 1 4 2 5 3 6 2 7 0 8 1 9 2 </code></pre>
python|pandas|dataframe
3
375,341
46,498,784
Some operations not respecting custom attributes in Series subclass
<p>According to <a href="https://pandas.pydata.org/pandas-docs/stable/internals.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/internals.html</a><br> I should be able to sublcass a pandas Series</p> <p>My <a href="http://stackoverflow.com/help/mcve">MCVE</a> is </p> <pre><code>from pandas import Series class Xseries(Series): _metadata = ['attr'] @property def _constructor(self): return Xseries def __init__(self, *args, **kwargs): self.attr = kwargs.pop('attr', 0) super().__init__(*args, **kwargs) s = Xseries([1, 2, 3], attr=3) </code></pre> <p>Notice that the <code>attr</code> attribute is:</p> <pre><code>s.attr 3 </code></pre> <p>However, when I multiply by <code>2</code></p> <pre><code>(s * 2).attr 0 </code></pre> <p>Which is the default. Therefore, the <code>attr</code> was not passed on. You may ask, maybe that isn't the intended behavior? I think it is according to the documentation <a href="https://pandas.pydata.org/pandas-docs/stable/internals.html#define-original-properties" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/internals.html#define-original-properties</a></p> <p>And if we use the <code>mul</code> method, it seems to work</p> <pre><code>s.mul(2).attr 3 </code></pre> <p>And this doesn't (which is the same as <code>s * 2</code>)</p> <pre><code>s.__mul__(2).attr 0 </code></pre> <hr> <p>I wanted to put this passed SO before I created an issue on github. Is this a bug?</p> <p>Is there a workaround?</p> <p>I need to be able to do <code>s * 2</code> and have the <code>attr</code> attribute passed on to the result.</p>
<p>If you use <code>inspect.getsourcelines</code> to check the source code of these two functions <code>mul</code> and <code>__mul__</code>, you will find they actually have different implementations.</p> <p>And using <code>s.mul(2).attr</code> still doesn't work as it just uses <code>__finalize__</code> to propagate all attributes but not really multiply it.</p> <p>Or maybe I am misunderstanding your question and you just want to propagate but not multiply <code>attr</code> as well?</p> <p>If yes, you can modify your custom <code>__mul__</code> function to call <code>__finalize__</code>.</p> <pre><code>from pandas import Series class Xseries(Series): _metadata = ['attr'] @property def _constructor(self): return Xseries def __init__(self, *args, **kwargs): self.attr = kwargs.pop('attr', 0) super().__init__(*args, **kwargs) def __mul__(self, other): internal_result = super().__mul__(other) return internal_result.__finalize__(self) s = Xseries([1, 2, 3], attr=3) </code></pre> <p>If not, you can manually multiply <code>attr</code> and return.</p> <pre><code>from pandas import Series class Xseries(Series): _metadata = ['attr'] @property def _constructor(self): return Xseries def __init__(self, *args, **kwargs): self.attr = kwargs.pop('attr', 0) super().__init__(*args, **kwargs) def __mul__(self, other): internal_result = super().__mul__(other) if hasattr(other, "attr"): internal_result.attr = self.attr * other.attr else: internal_result.attr = self.attr * other return internal_result s = Xseries([1, 2, 3], attr=3) </code></pre>
python|pandas
2
375,342
46,263,994
Change decimal separator (Python, Sqlite, Pandas)
<p>I have an Excel spreadsheet (which is an extract from SAP) which has data in it (text, numbers). I turn this data into a DataFrame, do some calculations and finally save it to a sqlite database. Whereas the excel spreadsheet has a comma as the decimal separator. The sqlite database includes the numbers with a dot as the decimal separator.</p> <p>Extract from the code:</p> <pre><code>df_mce3 = pd.read_excel('rawdata/filename.xlsx', header=0, converters={'supplier': str, 'month': str}, decimal=',') </code></pre> <p>(decimal=',' was a suggested solution which only works when you are working with csv)</p> <p>After doing the calculations I use the following code to save the results to a sqlite database:</p> <pre><code>conn = sqlite.connect("database.db") df_mce3.to_sql("rawdata", conn, if_exists="replace") df_ka_ext.to_sql("costanalysis_external", conn, if_exists="replace") [...] </code></pre> <p>Input:</p> <pre><code>month ordqty ordprice ordervolume invoiceqty invoiceprice 08.2017 10,000 14,90 149,00 10,000 14,90 </code></pre> <p>Output:</p> <pre><code>month ordqty ordprice ordervolume invoiceqty invoiceprice 08.2017 10.000 14.90 149.00 10.000 14.90 </code></pre> <p>I do need those numbers to have the same decimal separator as the input data and I cannot find a way to do this.</p> <p><strong>Therefore I am asking if someone of you has an idea how to achieve it?</strong></p> <p>I am using Python 3.5 with pandas (0.19.1) and numpy (1.11.2) on Mac OS X.</p> <p>Thank you!</p>
<p>You need to convert the <code>float</code> values before saving. Just loop through the column with values containing <code>.</code>, convert each value to string and then you can use <code>replace</code> method.</p> <p>Here I converted all values in column <code>x</code></p> <pre class="lang-py prettyprint-override"><code>df['x'] = [str(val).replace('.', ',') for val in df['x']] df.to_sql('rawdata', conn, if_exists='replace') </code></pre>
python|excel|sqlite|pandas
0
375,343
46,481,997
How to use np.genfromtxt and fill in missing columns?
<p>I am trying to use <code>np.genfromtxt</code> to load a data that looks something like this into a matrix:</p> <pre><code>0.79 0.10 0.91 -0.17 0.10 0.33 -0.90 0.10 -0.19 -0.00 0.10 -0.99 -0.06 0.10 -0.42 -0.66 0.10 -0.79 0.21 0.10 0.93 0.79 0.10 0.91 -0.72 0.10 0.25 0.64 0.10 -0.27 -0.36 0.10 -0.66 -0.52 0.10 0.92 -0.39 0.10 0.43 0.63 0.10 0.25 -0.58 0.10 -0.03 0.59 0.10 0.02 -0.69 0.10 0.79 0.30 0.10 0.09 0.70 0.10 0.67 -0.04 0.10 -0.65 -0.07 0.10 0.70 -0.06 0.10 0.08 7 566 112 32 163 615 424 543 424 422 490 47 499 595 94 515 163 535 0.79 0.10 0.91 -0.17 0.10 0.33 -0.90 0.10 -0.19 -0.00 0.10 -0.99 -0.06 0.10 -0.42 -0.66 0.10 -0.79 0.21 0.10 0.93 0.79 0.10 0.91 -0.72 0.10 0.25 0.64 0.10 -0.27 -0.36 0.10 -0.66 -0.52 0.10 0.92 -0.39 0.10 0.43 0.63 0.10 0.25 -0.58 0.10 -0.03 0.59 0.10 0.02 -0.69 0.10 0.79 0.30 0.10 0.09 0.70 0.10 0.67 -0.04 0.10 -0.65 -0.07 0.10 0.70 -0.06 0.10 0.08 263 112 32 30 163 366 543 457 424 422 556 55 355 485 112 515 163 509 112 535 0.79 0.10 0.91 -0.17 0.10 0.33 -0.90 0.10 -0.19 -0.00 0.10 -0.99 -0.06 0.10 -0.42 -0.66 0.10 -0.79 0.21 0.10 0.93 0.79 0.10 0.91 -0.72 0.10 0.25 0.64 0.10 -0.27 -0.36 0.10 -0.66 -0.52 0.10 0.92 -0.39 0.10 0.43 0.63 0.10 0.25 -0.58 0.10 -0.03 0.59 0.10 0.02 -0.69 0.10 0.79 0.30 0.10 0.09 0.70 0.10 0.67 -0.04 0.10 -0.65 -0.07 0.10 0.70 -0.06 0.10 0.08 311 112 32 543 457 77 639 355 412 422 509 112 535 163 77 125 30 412 422 556 55 355 485 112 515 </code></pre> <p>Suppose I want to import data into a matrix of size (4, 5). If not all rows have 5 columns, when it imports the matrix it should replace those columns without 5 rows with "". For example, if the data were simpler, it would look like this:</p> <pre><code>1,2,3,4,5 6,7,8,9,10 11,12,13,14,15 16,"","","","" </code></pre> <p>Thus, I want the number of columns to be imported to match that of the max row column count, and if a row doesn't have that many columns, it will fill it with "". I am reading from a file called "data.txt".</p> <p>This is what I have tried so far:</p> <pre><code>trainData = np.genfromtxt('data.txt', usecols = range(0, 5), invalid_raise=False, missing_values = "", filling_values="") </code></pre> <p>However, it gives errors saying:</p> <pre><code>Line #4 (got 1 columns instead of 5) </code></pre> <p>How can I solve this?</p> <p>Thanks!</p>
<p>Pandas has more robust readers and you can use the <code>DataFrame</code> methods to handle the missing values.</p> <p>You'll have to figure out how many columns to use first:</p> <pre><code>columns = max(len(l.split()) for l in open('data.txt')) </code></pre> <p>To read the file:</p> <pre><code>import pandas df = pandas.read_table('data.txt', delim_whitespace=True, header=None, usecols=range(columns), engine='python') </code></pre> <p>To convert to a numpy array:</p> <pre><code>import numpy a = numpy.array(df) </code></pre> <p>This will fill in NaNs in the blank positions. You can use <code>.fillna()</code> to get other values for blanks.</p> <pre><code>filled = numpy.array(df.fillna(999)) </code></pre>
python|numpy
2
375,344
46,190,748
Reformat Dataframe in pandas
<p>I have a Dataframe in a very weird format:</p> <pre><code>id Code Week1 Week2 week3 sunday nan nan nan nan id Code Week1 Week2 week3 1 100 y y n 2 200 n y n 3 300 n n y Monday nan nan nan nan id Code Week1 Week2 week3 1 500 n y y 2 600 y y y Tuesday nan nan nan nan id Code Week1 Week2 week3 1 800 n y y 2 900 y n y </code></pre> <p>I want to bring it in this format:</p> <pre><code>Code Day Week 100 Sunday 1 600 Monday 1 900 Tuesday 1 100 Sunday 2 200 Sunday 2 500 Monday 2 600 Monday 2 800 Tuesday 2 300 Sunday 3 500 Monday 3 600 Monday 3 800 Tuesday 3 900 Tuesday 3 </code></pre> <p>i.e if in a week the value is y for a Code , that Code will be visited in that week.</p> <p>Is there any way to do this in pandas? </p>
<p>Not my finest work... but I don't want to try anymore... it hurts my soul.</p> <pre><code>d = df.query('id != "id"').replace(dict(id={'\d+': None}), regex=True).ffill() s = d[d.duplicated('id')].set_index(['id', 'Code']).replace({'y': 1, 'n': np.nan}).stack() s.rename_axis(['Day', 'Code', 'Week']).reset_index('Week').Week.str.replace( 'week', '', flags=re.IGNORECASE ).reset_index() Day Code Week 0 sunday 100 1 1 sunday 100 2 2 sunday 200 2 3 sunday 300 3 4 Monday 500 2 5 Monday 500 3 6 Monday 600 1 7 Monday 600 2 8 Monday 600 3 9 Tuesday 800 2 10 Tuesday 800 3 11 Tuesday 900 1 12 Tuesday 900 3 </code></pre>
python|pandas|dataframe
3
375,345
46,579,435
Fast way to calculate the average of all the c grouped by (a, b) tuples from zip(a, b, c)
<p>I have <code>ddd</code> as a <code>zip()</code> of three arrays: </p> <pre><code>aaa = np.array([1, 1, 1, 1, 3, 2]) bbb = np.array([10, 10, 2, 2, 3, 2]) ccc = np.array([5, 15, 9, 11, 20, 10]) ddd = zip(aaa, bbb, ccc) </code></pre> <p>I would like to get the average of the elements in <code>ccc</code> grouped by the elements from <code>aaa</code> and <code>bbb</code> at the same index. In the example above, there are two <code>ccc</code> values whose corresponding <code>(aaa, bbb)</code> pair is <code>(1, 10)</code>, so I want the average of the two <code>ccc</code> values, 5 and 15.</p> <p>So far, I only managed to calculate the average for <code>ccc</code> grouped on the value of <code>bbb</code> being the same:</p> <pre><code>&gt;&gt;&gt; [(chosenb, np.mean([cc for aa,bb,cc in ddd if bb==chosenb])) for chosenb in set([b for a,b,c in ddd])] [(10, 10.0), (3, 20.0), (2, 10.0)] </code></pre> <p>The expected answer is</p> <pre><code>[(1, 10, 10.0), (1, 2, 10.0), (3, 3, 20.0), (2, 2, 10.0)] </code></pre> <p>I also feel my one-liner is too long and hard to read. What is a faster and simpler-to-read way to add another layer of comparison here? </p>
<p>I recommend you switch to using <a href="https://pandas.pydata.org/" rel="nofollow noreferrer">Pandas</a> for this task, as it makes it far simpler to reason about data in <em>rows</em>:</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; df = pd.DataFrame({'aaa': aaa, 'bbb': bbb, 'ccc': ccc}) &gt;&gt;&gt; df.groupby(['aaa', 'bbb'], as_index=False).mean() aaa bbb ccc 0 1 2 10 1 1 10 10 2 2 2 10 3 3 3 20 </code></pre> <p>Note how simple it was to produce a new dataframe with rows grouped by <code>(aaa, bbb)</code> tuples, then asking for the mean of the remaining columns.</p> <p>If Pandas is not an option for you, there are also add-on projects that give <code>numpy</code> multi-dimensional arrays group-by capabilities, such as <a href="https://github.com/EelcoHoogendoorn/Numpy_arraysetops_EP" rel="nofollow noreferrer"><code>numpy-indexed</code></a> and <a href="https://github.com/ml31415/numpy-groupies" rel="nofollow noreferrer"><code>numpy-groupies</code></a>.</p> <p>If you wanted to have a Python solution, you'd have to use a dictionary to group your values by first:</p> <pre><code>grouped = {} for a, b, c in zip(aaa, bbb, ccc): grouped.setdefault((a, b), []).append(c) result = [(a, b, np.mean(cs)) for (a, b), cs in grouped.items()] </code></pre>
python|python-2.7|numpy
7
375,346
46,257,988
using pandas to compare large CSV files with different numbers of columns
<p>I am new at python programming and I am trying to join two csv files with different numbers of columns. The aim is to find missing records and create a report with specific columns from the master column.</p> <p>An example of two csv files copied directly from excel SAMPLE CSV 1(combine201709.csv)</p> <pre><code>start_time end_time aitechid hh_village grpdetails1/farmername grpdetails1/farmermobile 2016-11-26T14:01:47.329+03 2016-11-26T14:29:05.042+03 AI00001 2447 KahsuGebru 919115604 2016-11-26T19:34:42.159+03 2016-11-26T20:39:27.430+03 936891238 2473 Moto Aleka 914370833 2016-11-26T12:13:23.094+03 2016-11-26T14:25:19.178+03 914127382 2390 Hagos 914039654 2016-11-30T14:31:28.223+03 2016-11-30T14:56:33.144+03 920784222 </code></pre> <p>SAMPLE CSV 2 (combinedmissingrecords.csv)</p> <pre><code>farmermobile 941807851 946741296 9 920212218 915 939555303 961579437 919961811 100004123 972635273 918166831 961579437 922882638 100006273 919728710 30000739 920770648 100004727 963767487 915855665 932255143 923531603 0 931875236 918027506 8 916353266 918020303 924359729 934623027 916585963 960791618 988047183 100002632 300007241 918271897 300007238 918250712 </code></pre> <p>I tried this, but was unable to get the expected output:</p> <pre><code> import pandas as pd normalize = lambda x: "%.4f" % float(x) # round df = pd.read_csv("/media/dmogaka/DATA/week progress/week4/combine201709.csv", index_col=(0,1), usecols=(1, 2, 3,4), header=None, converters=dict.fromkeys([1,2])) df2 = pd.read_csv("/media/dmogaka/DATA/week progress/week4/combinedmissingrecords.csv", index_col=(0,1), usecols=(0), header=None, converters=dict.fromkeys([1,2])) result = df2.merge(df[['aitechid','grpdetails1/farmermobile','grpdetails1/farmername']], left_on='farmermobile', right_on='grpdetails1/farmermobile') result.to_csv("/media/dmogaka/DATA/week progress/week4/output.csv", header=None) # write as csv </code></pre> <p>error message</p> <pre><code>/usr/bin/python3.5 "/media/dmogaka/DATA/Panda tut/test/test.py" Traceback (most recent call last): File "/media/dmogaka/DATA/Panda tut/test/test.py", line 7, in &lt;module&gt; header=None, converters=dict.fromkeys([1,2])) File "/home/dmogaka/.local/lib/python3.5/site-packages/pandas/io/parsers.py", line 655, in parser_f return _read(filepath_or_buffer, kwds) File "/home/dmogaka/.local/lib/python3.5/site-packages/pandas/io/parsers.py", line 405, in _read parser = TextFileReader(filepath_or_buffer, **kwds) File "/home/dmogaka/.local/lib/python3.5/site-packages/pandas/io/parsers.py", line 764, in __init__ self._make_engine(self.engine) File "/home/dmogaka/.local/lib/python3.5/site-packages/pandas/io/parsers.py", line 985, in _make_engine self._engine = CParserWrapper(self.f, **self.options) File "/home/dmogaka/.local/lib/python3.5/site-packages/pandas/io/parsers.py", line 1605, in __init__ self._reader = parsers.TextReader(src, **kwds) File "pandas/_libs/parsers.pyx", line 461, in pandas._libs.parsers.TextReader.__cinit__ (pandas/_libs/parsers.c:4968) TypeError: 'int' object is not iterable Process finished with exit code 1 </code></pre>
<p>Try this:</p> <pre><code>d2.merge(d1[['aitechid','grpdetails1/farmermobile','grpdetails1/farmername']], left_on='farmermobile', right_on='grpdetails1/farmermobile') </code></pre> <p>or</p> <pre><code>d2.merge(d1[['aitechid','grpdetails1/farmermobile','grpdetails1/farmername']] \ .rename(columns={'grpdetails1/farmermobile':'farmermobile'})) </code></pre>
python|pandas|csv
1
375,347
46,225,831
Pandas max date by row?
<p>The solution to the question asked <a href="https://stackoverflow.com/a/44304535/4764434">here</a> unfortunately does not solve this problem. I'm using Python 3.6.2</p> <p>The Dataframe, <code>df</code>: </p> <pre><code> date1 date2 rec0 2017-05-25 14:02:23+00:00 2017-05-25 14:34:43+00:00 rec1 NaT 2017-05-16 19:37:43+00:00 </code></pre> <p>To reproduce the problem:</p> <pre><code>import psycopg2 import pandas as pd Timestamp = pd.Timestamp NaT = pd.NaT df = pd.DataFrame({'date1': [Timestamp('2017-05-25 14:02:23'), NaT], 'date2': [Timestamp('2017-05-25 14:34:43'), Timestamp('2017-05-16 19:37:43')]}) tz = psycopg2.tz.FixedOffsetTimezone(offset=0, name=None) for col in ['date1', 'date2']: df[col] = pd.DatetimeIndex(df[col]).tz_localize(tz) print(df.max(axis=1)) </code></pre> <p>Both of the above columns have been converted using <code>pd.to_datetime()</code> to get the following column type: <code>datetime64[ns, psycopg2.tz.FixedOffsetTimezone(offset=0, name=None)]</code></p> <p>Running <code>df.max(axis=1)</code> doesn't give an error but certainly provides the incorrect solution. </p> <p>Output (incorrect):</p> <pre><code>rec0 NaN rec1 NaN dtype: float64 </code></pre> <p>The fix that I have in place is to <code>apply</code> a custom function to the df as written below:</p> <pre><code>def get_max(x): test = x.dropna() return max(test) df.apply(get_max,axis=1) </code></pre> <p>Output (correct):</p> <pre><code>rec0 2017-05-25 14:34:43+00:00 rec1 2017-05-16 19:37:43+00:00 dtype: datetime64[ns, psycopg2.tz.FixedOffsetTimezone(offset=0, name=None)] </code></pre> <p>Maybe <code>df.max()</code> doesn't deal with date objects but only looks for floats (<a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.max.html" rel="nofollow noreferrer">docs</a>). <strong>Any idea why <code>df.max(axis=1)</code> only returns <code>NaN</code>?</strong> </p>
<p>After some testing, it looks like there is something wrong with <code>pandas</code> and <code>psycopg2.tz.FixedOffsetTimezone</code>.</p> <p>If you try <code>df.max(axis=0)</code> it will work as expected, but as you indicate <code>df.max(axis=1)</code> will return a series of <code>NaN</code>. If you do not use <code>psycopg2.tz.FixedOffsetTimezone</code> as <code>tz</code>, <code>df.max(axis=1)</code> will return the expected result.</p> <p>Other manipulations will fail in this case, such as <code>df.transpose</code>.</p> <p>Note that if you try <code>df.values.max(axis=1)</code>, you will get the expected result. So <code>numpy.array</code> seems to be able to deal with this. You should search in <code>pandas</code> Github issues (<a href="https://github.com/pandas-dev/pandas/pull/7364" rel="nofollow noreferrer">like this one</a>) and maybe consider opening a new one if you can't find a fix.</p> <p>Another solution would be to drop <code>psycopg2.tz.FixedOffsetTimezone</code>, but you may have some reason to use this specifically.</p>
python|pandas|datetime|dataframe|max
1
375,348
58,284,012
How to accelerate code which converts tensors to numpy arrays in tensorflow_datasets?
<p>I want to convert tensors to numpy arrays in tensorflow_datasets, but my code progressively slows down drastically. I am using the lsun/bedroom dataset, which has over 3 million images. How can I accelerate my code?</p> <p>My code saves a tuple of numpy arrays every 100,000 images.</p> <pre><code>train_tf = tfds.load("lsun/bedroom", data_dir="{$my_directory}", download=False) train_tf = train_tf["train"] for data in train_tf: if d_cnt==0 and d_cnt%100001==0: train = (tfds.as_numpy(data["image"]), ) else: train += (tfds.as_numpy(data["image"]), ) if d_cnt%100000==0 and d_cnt!=0: with open("{$my_directory}/lsun.pickle%d"%(d_cnt), "wb") as f: pickle.dump(train, f) d_cnt += 1 </code></pre>
<p>Your <code>if</code> condition is never going to get executed after the first pass and consequently your <code>train</code> variable keeps accumulating.</p> <p>I think you wish to have condition as:</p> <pre><code>if d_cnt!=0 and d_cnt%100001==0: train = (tfds.as_numpy(data["image"]), ) </code></pre>
python|numpy|tensorflow|tensor|tensorflow-datasets
1
375,349
58,445,405
Pandas JOIN/MERGE/CONCAT Data Frame On Specific Indices
<p>I want to join two data frames on specific indices as per the map (<code>dictionary</code>) I have created. What is an efficient way to do this?</p> <p><strong>Data:</strong></p> <pre><code>df = pd.DataFrame({"a":[10, 34, 24, 40, 56, 44], "b":[95, 63, 74, 85, 56, 43]}) print(df) a b 0 10 95 1 34 63 2 24 74 3 40 85 4 56 56 5 44 43 df1 = pd.DataFrame({"c":[1, 2, 3, 4], "d":[5, 6, 7, 8]}) print(df1) c d 0 1 5 1 2 6 2 3 7 3 4 8 d = { (1,0):0.67, (1,2):0.9, (2,1):0.2, (2,3):0.34, (4,0):0.7, (4,2):0.5 } </code></pre> <p><strong>Desired Output:</strong></p> <pre><code> a b c d ratio 0 34 63 1 5 0.67 1 34 63 3 7 0.9 ... 5 56 56 3 7 0.5 </code></pre> <p>I'm able to achieve this but it takes a lot of time since my original data frames' map has about 4.7M rows to map. I'd love to know if there is a way to <code>MERGE</code>, <code>JOIN</code> or <code>CONCAT</code> these data frames on different indices.</p> <p><strong>My Approach:</strong></p> <pre><code>matched_rows = [] for key in d.keys(): s = df.iloc[key[0]].tolist() + df1.iloc[key[1]].tolist() + [d[key]] matched_rows.append(s) df_matched = pd.DataFrame(matched_rows, columns = df.columns.tolist() + df1.columns.tolist() + ['ratio']) </code></pre> <p>I would highly appreciate your help. Thanks a lot in advance.</p>
<p>Create a <code>Series</code> and then a <code>DataFrame</code> from the dictionary, <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a> both, and finally remove the first 2 columns by position:</p> <pre><code>df = (pd.Series(d).reset_index(name='ratio') .join(df, on='level_0') .join(df1, on='level_1') .iloc[:, 2:]) print (df) ratio a b c d 0 0.67 34 63 1 5 1 0.90 34 63 3 7 2 0.20 24 74 2 6 3 0.34 24 74 4 8 4 0.70 56 56 1 5 5 0.50 56 56 3 7 </code></pre> <p>And then if necessary reorder columns:</p> <pre><code>df = df[df.columns[1:].tolist() + df.columns[:1].tolist()] print (df) a b c d ratio 0 34 63 1 5 0.67 1 34 63 3 7 0.90 2 24 74 2 6 0.20 3 24 74 4 8 0.34 4 56 56 1 5 0.70 5 56 56 3 7 0.50 </code></pre>
python-3.x|pandas|data-structures
2
375,350
58,496,948
Python/Pandas is there a way to vectorize a comparison to all other points in an opposing category?
<p>I have a dataset of x,y points that are in two separate categories. There are many "frames" of ten or so points that I want to groupby (or split on) instead of iterate through. I want to compare each point in category A with all points in category B. Specifically I want the distance between them. I haven't found the right combination of groupby operations to vectorize it.</p> <p>Here's a sample df:</p> <pre><code> frame_id point_id x y cat 0 1 1 1.769 2.491 A 1 1 2 1.024 0.981 A 2 1 3 4.327 9.81 A 3 1 4 5.407 4.33 A 4 1 5 0.936 0.019 B 5 1 6 5.1 7.639 B 6 1 7 9.139 6.721 B 7 1 8 1.954 5.424 B 8 2 1 5.835 9.702 A 9 2 2 1.784 1.374 A 10 2 3 0.23 1.921 A 11 2 4 9.328 5.836 A 12 2 5 5.516 8.971 B 13 2 6 9.108 8.917 B 14 2 7 4.412 1.033 B 15 2 8 1.33 5.898 B </code></pre> <p>Ideally, with this example, I'd add four columns. One column for each distance to point in the other category. I imagine there's some way to do df.groupby(['frame_id']) or df.groupby(['frame_id','cat']) and compare them that way, I just haven't figured it out.</p> <p>I've been able to accomplish this by iterating:</p> <pre><code>import scipy.spatial for idx, fid in enumerate(frame_ids): if idx % 1000 == 0: print(idx) # separate categories cat_a = df.loc[(df.frame_id==fid)&amp;(df.Cat=="A")] cat_b = df.loc[(df.frame_id==fid)&amp;(df.Cat=="B")] # get distance to every opposing category point a_mat = scipy.spatial.distance.cdist(cat_a[['X','Y']], cat_b[['X','Y']], metric='euclidean') b_mat = scipy.spatial.distance.cdist(cat_b[['X','Y']], cat_a[['X','Y']], metric='euclidean') a_ids = cat_a[['frame_id','point_id']].values b_ids = cat_b[['frame_id','point_id']].values a_dist = np.concatenate((a_ids, a_mat),axis=1) b_dist = np.concatenate((b_ids, b_mat),axis=1) ### then concat one by one w/ larger dataframe (takes forever) ### </code></pre> <p>Output (dropped a few columns for clarity):</p> <pre><code> frame_id point_id Dist_Opp1 Dist_Opp2 Dist_Opp3 Dist_Opp4 0 1 1 2.60858 6.13168 8.49763 2.93883 1 1 2 0.966017 7.80658 9.93986 4.53929 2 1 3 10.3616 2.30451 5.71815 4.9868 3 1 4 6.21084 3.32321 4.43223 3.62216 4 1 5 2.60858 0.966017 10.3616 6.21084 5 1 6 6.13168 7.80658 2.30451 3.32321 6 1 7 8.49763 9.93986 5.71815 4.43223 7 1 8 2.93883 4.53929 4.9868 3.62216 8 2 1 0.797573 3.36582 8.78502 5.89622 9 2 2 8.46417 10.5137 2.65003 4.54672 10 2 3 8.8116 11.3032 4.27524 4.12632 11 2 4 4.93554 3.08884 6.87284 7.99824 12 2 5 0.797573 8.46417 8.8116 4.93554 13 2 6 3.36582 10.5137 11.3032 3.08884 14 2 7 8.78502 2.65003 4.27524 6.87284 15 2 8 5.89622 4.54672 4.12632 7.99824 </code></pre> <p>It's not necessary to compare points within the same category.</p>
<p>Figured it out eventually. It just required creative reshaping/repeating with numpy matricies.</p> <pre><code> df['loc'] = list(zip(df['x'],df['y'])) groupA = df.loc[df.Cat==1] groupB = df.loc[df.Cat==0] groupA = groupA[['frame_id','point_id','loc']] groupB = groupB[['frame_id','point_id','loc']] acol = groupA['loc'].values bcol = groupB['loc'].values group_size = 4 acol = np.repeat(acol,group_size,axis=0) bcol = bcol.reshape(-1,group_size) bcol = np.repeat(bcol,group_size,axis=0) bcol = bcol.reshape(-1) # numpy requires replacing tuple with 2d point acol = np.array([*acol]) bcol = np.array([*bcol]) # distance calc desired_matrix = np.linalg.norm(acol - bcol, axis=-1) </code></pre>
python|pandas|numpy
0
375,351
58,452,300
Is there a way of extracting indices from a pandas DataFrame based on value
<p>I have the following DataFrame:</p> <pre><code>index col0 col1 col2 0 0 1 0 1 1 0 1 2 0 1 1 </code></pre> <p>I would like to extract the following indices (those that contain ones, or any nonzero value):</p> <pre><code>[(0, 1), (1, 0), (1, 2), (2, 1), (2, 2)] </code></pre> <p>Is there a method in pandas that can do this?</p>
<p>Use <code>np.where</code> + <code>zip</code></p> <hr> <pre><code>[*zip(*np.where(df))] </code></pre> <p></p> <pre><code>[(0, 1), (1, 0), (1, 2), (2, 1), (2, 2)] </code></pre>
python|pandas
7
375,352
58,358,862
DataFrame Groupby two columns and get counts of another column
<p>Novice programmer here seeking help. I have a Dataframe that looks like this:</p> <pre><code> Cashtag Date Message 0 $AAPL 2018-01-01 "Blah blah $AAPL" 1 $AAPL 2018-01-05 "Blah blah $AAPL" 2 $AAPL 2019-01-08 "Blah blah $AAPL" 3 $AAPL 2019-02-09 "Blah blah $AAPL" 4 $AAPL 2019-02-10 "Blah blah $AAPL" 5 $AAPL 2019-03-01 "Blah blah $AAPL" 6 $FB 2018-01-03 "Blah blah $FB" 7 $FB 2018-02-10 "Blah blah $FB" 8 $FB 2018-02-11 "Blah blah $FB" 9 $FB 2019-03-22 "Blah blah $FB" 10 $AMZN 2018-04-13 "Blah blah $AMZN" 11 $AMZN 2018-04-29 "Blah blah $AMZN" 12 $AMZN 2019-07-23 "Blah blah $AMZN" 13 $AMZN 2019-07-27 "Blah blah $AMZN" </code></pre> <p>My desired output is a DataFrame that tells me the number of messages for each month of every year in the sample for each company. In this example it would be:</p> <pre><code> Cashtag Date #Messages 0 $AAPL 2018-01 02 1 $AAPL 2019-01 01 2 $AAPL 2019-02 02 3 $AAPL 2019-03 01 4 $FB 2018-01 01 5 $FB 2018-02 02 6 $FB 2019-03 01 7 $AMZN 2018-04 02 8 $AMZN 2019-07 02 </code></pre> <p>I've tried many combinations of .groupby() but have not achieved a solution.</p> <p>How can I achieve my desired output?</p>
<p>Try:</p> <p>In case <code>Date</code> is <code>string</code>:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df.groupby([df["Cashtag"], df["Date"].apply(lambda x: x[:7])]).agg({"Message": "count"}).reset_index() </code></pre> <p>If <code>Date</code> is <code>datetime</code>:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df.groupby([df["Cashtag"], df["Date"].apply(lambda x: "{0}-{1:02}".format(x.year, x.month))]).agg({"Message": "count"}).reset_index() </code></pre> <p>and output:</p> <pre class="lang-py prettyprint-override"><code> Cashtag Date Message 0 $AAPL 2018-01 2 1 $AAPL 2019-01 1 2 $AAPL 2019-02 2 3 $AAPL 2019-03 1 4 $AMZN 2018-04 2 5 $AMZN 2019-07 2 6 $FB 2018-01 1 7 $FB 2018-02 2 8 $FB 2019-03 1 </code></pre>
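<p>A slightly more compact variant (a sketch; it assumes <code>Date</code> has already been converted to a datetime column) builds the year-month key with <code>dt.to_period</code>:</p>
<pre><code>out = (df.groupby(['Cashtag', df['Date'].dt.to_period('M')])['Message']
         .count()
         .reset_index(name='#Messages'))
</code></pre>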
python|dataframe|pandas-groupby
0
375,353
58,468,290
How can I cumulatively add or subtract values based on another column values using Pandas?
<p>I have a dataframe below that shows voltage output based on seconds. The <code>v_out</code> value is based on a displacement of either +/- 0.05 centimeters. </p> <p>So when <code>v_out</code> gets more positive, then there is positive displacement compared to the last <code>v_out</code> value. When <code>v_out</code> gets more negative, the displacement is going in the <code>-</code> direction. </p> <p>I have the initial <code>df</code> and I want to add a <code>sign</code> column that tells whether the <code>v_out</code> is positive or negative based on the previous <code>v_out</code> value. And, I want a <code>cumulative</code> column that keeps track of the added running total of the <code>sign</code> column. </p> <p><strong>Initial <code>df</code></strong></p> <pre><code> secs v_out 0 0.0 -1.179100 1 15.0 -1.179100 2 18.0 -1.179200 3 33.0 -1.181800 4 48.0 0.029461 </code></pre> <p><strong>What I want</strong></p> <pre><code> secs v_out sign cumul 0 0.0 -1.179100 0.00 0.00 1 15.0 -1.179100 0.00 0.00 2 18.0 -1.179200 -0.05 -0.05 3 33.0 -1.181800 -0.05 -0.10 4 48.0 0.029461 0.05 -0.05 </code></pre>
<p>The method to <em>look at</em> a lagged value is named <code>shift</code>, and then we inspect whether the value is positive, negative, or zero with an if-else construct.</p> <p>So, first we'd construct the column <code>sign</code>. The logic can be packed into 1 line.</p> <pre><code>df['sign'] = (df.v_out - df.v_out.shift()).apply(lambda x: 0.05 if x &gt; 0 else -0.05 if x &lt; 0 else 0) </code></pre> <p><em>(btw, I'd often write this code interactively, but would recommend splitting it over a few lines so the logic is easier to follow if it's going to be saved in a file)</em></p> <p>Then, the <code>cumul</code> column can be made through the application of the <code>cumsum</code> method.</p> <pre><code>df['cumul'] = df.sign.cumsum() </code></pre> <p>The final df looks like this:</p> <pre><code> secs v_out sign cumul 0 0.0 -1.179100 0.00 0.00 1 15.0 -1.179100 0.00 0.00 2 18.0 -1.179200 -0.05 -0.05 3 33.0 -1.181800 -0.05 -0.10 4 48.0 0.029461 0.05 -0.05 </code></pre>
python|pandas|cumulative-sum
-1
375,354
58,557,329
Get the percentage of predicted values
<p>Using basic logistic regression I predicted 0 and 2 values</p> <p>The DATA dataframe has the following structure: </p> <pre><code>Duration | y 12.45 | 0 123.66 | 0 0.34 | 2 14.69 | 2 </code></pre> <p>The logistic regression:</p> <pre><code>x = DATA.Duration.values.reshape(-1,1) y = DATA.y.values.reshape(-1,1) lgr = LogisticRegression(max_iter = 200) lgr.fit(x,y) lgr.score(x,y) X = np.arange(0,200,10).reshape(-1,1) Y = lgr.predict(X) </code></pre> <p>If I plot the result I get a picture like this:<br> <a href="https://i.stack.imgur.com/XyLw3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XyLw3.png" alt="enter image description here"></a></p> <p>My goal is to count the number of red and blue dots separately "covered" by the line (predicted). </p> <p>I used the following approach: </p> <pre><code>(np.count_nonzero(Y == 0)*100)/np.count_nonzero(y == 0) (np.count_nonzero(Y == 2)*100)/np.count_nonzero(y == 2) </code></pre> <p>But it gives strange results. What is the right way to obtain the desired percentage?</p>
<p>Using sklearn: it is </p> <pre><code>from sklearn.linear_model import LogisticRegression clf = LogisticRegression().fit(X, y) clf.predict(X) </code></pre>
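<p>If the goal is the share of each class that the fitted model predicts correctly, one possible sketch (reusing the variable names from the question and predicting on the original samples <code>x</code> rather than on the <code>np.arange</code> grid) is:</p>
<pre><code>y_pred = lgr.predict(x)   # predictions for the actual data points
y_true = y.ravel()

pct_0 = 100 * np.count_nonzero(y_pred[y_true == 0] == 0) / np.count_nonzero(y_true == 0)
pct_2 = 100 * np.count_nonzero(y_pred[y_true == 2] == 2) / np.count_nonzero(y_true == 2)
print(pct_0, pct_2)
</code></pre>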
python|pandas
1
375,355
58,368,120
Conditional Change in Pandas Dataframe
<p>I would like to change the column days (which is a datetime column) conditional on the indicator column, i.e. when indicator is equal to either DTM or AMC, I would like to add 1 day to the days column. </p> <pre><code> import pandas as pd df = pd.DataFrame({'days': [1, 2, 3], 'indicator': ['BMO', 'DTM','AMC']}) </code></pre> <p>So the result looks like this:</p> <pre><code> days indicator 0 1 BMO 1 3 DTM 2 4 AMC </code></pre>
<p>Use a <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer">boolean mask</a>:</p> <pre><code>df['days'] += (df.indicator.eq('AMC') | df.indicator.eq('DTM')) print(df) </code></pre> <p><strong>Output</strong></p> <pre><code> days indicator 0 1 BMO 1 3 DTM 2 4 AMC </code></pre> <p>As an alternative you could use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer">isin</a>:</p> <pre><code>df['days'] += df.indicator.isin(('AMC', 'DTM')) print(df) </code></pre> <p>You can add the boolean mask directly because in Python, booleans values are integers <code>(0, 1)</code>.</p>
python|pandas|datetime
1
375,356
58,195,100
Dealing with the Multiindex headers dataFrame - Python
<p>I have a dataframe which looks like this. It has 2 lines of headers, where one heading in row 1 covers 5 subheadings in row 2.</p> <pre><code> Toneladas planta, ton Fecha Fecha Ton SAG1 Ton1 1_2017 1/1/2017 827 1309 2195 1_2017 1/2/2017 913 1343 2222 1_2017 1/3/2017 887 1435 2272 1_2017 1/4/2017 877 1388 2151 1_2017 1/5/2017 900 1236 2177 1_2017 1/6/2017 797 1201 2012 1_2017 1/7/2017 751 1215 2109 1_2017 1/8/2017 851 1241 2109 1_2017 1/9/2017 917 1408 2303 1_2017 1/10/2017 864 1529 2414 1_2017 1/11/2017 911 1560 2383</code></pre> <p><a href="https://i.stack.imgur.com/RaS10.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RaS10.png" alt="Dataframe"></a></p> <p>The values in row 2 are the ones of interest to me. But when I apply functions to the data frame, it is unable to identify the row 2 headers and gives wrong values. For example, functions like df.info() give the wrong values.</p> <p>I would like to know if there is a way I can merge the two headers into one, so that row 1 becomes the prefix and row 2 the suffix of the shared heading.</p> <p>Like: Toneladas planta, ton Fecha Fecha </p> <p>becomes: Toneladas planta, ton Fecha Toneladas planta, ton Fecha</p> <p>as otherwise it's too difficult to work with the dataframe. </p> <p>Thanks</p>
<h3>Given the following Excel Sheet:</h3> <p><a href="https://i.stack.imgur.com/AREBO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AREBO.png" alt="enter image description here"></a></p> <ul> <li>Just specify the header row with the <code>header</code> parameter, and <code>usecols</code> to get the correct columns</li> </ul> <pre class="lang-py prettyprint-override"><code>df = pd.read_excel('file.xlsx', header=3, usecols='A:E') Fecha Fecha.1 Ton SAG1 Ton SAG2 Ton Planta 1_2017 2017-01-01 826.555503 1308.834944 2194.939490 1_2017 2017-01-02 912.653670 1343.165048 2221.776328 1_2017 2017-01-03 886.866000 1434.944123 2272.475950 1_2017 2017-01-04 877.476604 1388.086279 2150.790596 1_2017 2017-01-05 900.459985 1236.101284 2177.152583 </code></pre>
python|pandas|dataframe
1
375,357
58,564,237
Import a directory of CSV files at once and keep only oldest record per file
<p>I have database containing several csv files. Each csv file contains the last 7 days and only the oldest date is final data. </p> <p>For example "variables_2019-08-12.csv" file contains data from 08-06 until 08-12 ( only 08-06 data is final data) and "variables_2019-08-13.csv" file contains data from 08-07 until 08-13 ( only 08-07 data is final data). I want to keep only records for date 08-06 from variables_2019-08-12.csv file and records for date 08-07 from variables_2019-08-13.csv file and so on. Server produce each data 7 times during 7 days and only after 7 days the data is considered as a final. Data after import will look like this:</p> <pre><code>import pandas as pd source = ["data/variables_2019-08-12.csv", "data/variables_2019-08-12.csv", "data/variables_2019-08-12.csv", "data/variables_2019-08-12.csv", "data/variables_2019-08-12.csv", "data/variables_2019-08-12.csv", "data/variables_2019-08-12.csv", "data/variables_2019-08-13.csv", "data/variables_2019-08-13.csv", "data/variables_2019-08-13.csv", "data/variables_2019-08-13.csv", "data/variables_2019-08-13.csv", "data/variables_2019-08-13.csv", "data/variables_2019-08-13.csv"] date = ["2019-08-06", "2019-08-07", "2019-08-08", "2019-08-09", "2019-08-10", "2019-08-11", "2019-08-12", "2019-08-07","2019-08-08", "2019-08-09", "2019-08-10", "2019-08-11", "2019-08-12","2019-08-13"] id = [18404487, 18404487, 18502437, 18502437, 18502437, 18502437, 18502437, 18502437, 18502437, 18502437, 18502437, 18502437,18502437, 18502437] usage = [11, 146, 41, 1, 2, 8, 2, 152, 42, 1, 5, 100, 2, 15] dict = {'source': source, 'date': date, 'id': id, 'usage': usage} df = pd.DataFrame(dict) </code></pre> <p>I am reading all CSV files at once then group by source column and filter and keep only the oldest date from each source. What am I doing wrong here?</p> <pre><code> # group by source # filter only oldest date # ungroup dataframe df['date'] = pd.to_datetime(df['date']) df.groupby('source').filter(lambda x: (x['date'].min())).reset_index() #error filter function returned a Timestamp, but expected a scalar bool </code></pre>
<p>IIUC,</p> <p>if you've already got your concatenated df you could possibly leverage the <code>.agg</code> function in groupby, which lets you access columns: </p> <pre><code>df.groupby('source').agg({'date' : min}) </code></pre> <p>note this would be the same as </p> <pre><code>df.groupby('source')['date'].min().reset_index() </code></pre>
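<p>If you need the full rows for the oldest date per source rather than just the minimum dates themselves, one option (a sketch using <code>transform</code>) is:</p>
<pre><code>oldest = df.groupby('source')['date'].transform('min')
final_records = df[df['date'] == oldest]
</code></pre>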
python|pandas|numpy|pandas-groupby
1
375,358
58,458,351
With this code I could get the list of authors and book titles from the first URL. How do I crawl data from multiple URLs using BeautifulSoup?
<pre><code>import requests, bs4 import numpy as np import pandas as pd from bs4 import BeautifulSoup from pandas import DataFrame urls = ['http://www.gutenberg.org/ebooks/search/?sort_order=title','http://www.gutenberg.org/ebooks/search/?sort_order=title&amp;start_index=26'] for url in urls: page = requests.get(url) soup = BeautifulSoup(page.content, 'html.parser') tb = soup.find_all('span', class_='cell content') soup_books = soup.findAll("span",{"class":"title"}) #books soup_authors= soup.findAll("span",{"class":"subtitle"}) #authors article_title = [] article_author = [] soup_title= soup.findAll("span",{"class":"title"}) # books soup_para= soup.findAll("span",{"class":"subtitle"}) #authors for x in range(len(soup_para)): article_title.append(soup_title[x].text.strip()) article_author.append(soup_para[x].text) data = {'Article_Author':article_author, 'Article_Title':article_title} df = DataFrame(data, columns = ['Article_Title','Article_Author']) print(df) len(df) </code></pre> <blockquote> <p>I need to crawl data from the website '<a href="http://www.gutenberg.org/ebooks/search/" rel="nofollow noreferrer">http://www.gutenberg.org/ebooks/search/</a>?sort_order=title' until the end of the results. How can I iterate through the pages to get all the authors and titles of their works in that section?</p> </blockquote>
<p>Do you mean after the first 25 results, you want to navigate to the next page and get the next page's results? You can use beatufiulsoup to get the URL of the "Next" button at the bottom right of the page:</p> <pre><code>next_url = soup.find('a', {'title': 'Go to the next page results.'}) </code></pre> <p>and then run your code again with the new URL.</p>
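<p>A rough sketch of that loop (it assumes the "Next" link holds a relative <code>href</code>, so the hostname is prepended; adjust the selector and URL handling to whatever the page actually serves):</p>
<pre><code>import requests
from bs4 import BeautifulSoup

base = 'http://www.gutenberg.org'
url = base + '/ebooks/search/?sort_order=title'
titles, authors = [], []

while url:
    soup = BeautifulSoup(requests.get(url).content, 'html.parser')
    titles += [t.text.strip() for t in soup.findAll('span', {'class': 'title'})]
    authors += [a.text.strip() for a in soup.findAll('span', {'class': 'subtitle'})]
    next_link = soup.find('a', {'title': 'Go to the next page results.'})
    url = base + next_link['href'] if next_link else None
</code></pre>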
python|pandas|web-scraping|beautifulsoup|web-crawler
0
375,359
58,574,473
Colab TPU error InvalidArgumentError: Cannot assign a device for operation
<p>in google colab when using TPU , i have the following error</p> <p>InvalidArgumentError: Cannot assign a device for operation Adam/iterations/IsInitialized/VarIsInitializedOp: {{node Adam/iterations/IsInitialized/VarIsInitializedOp}} was explicitly assigned to /job:worker/replica:0/task:0/device:TPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:XLA_CPU:0 ]. Make sure the device specification refers to a valid device. [[Adam/iterations/IsInitialized/VarIsInitializedOp]]</p> <pre><code>TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR'] resolver = tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER) tf.contrib.distribute.initialize_tpu_system(resolver) strategy = tf.contrib.distribute.TPUStrategy(resolver) with strategy.scope(): # Setup the model inputs / outputs model = Model(inputs=[inputs_img, inputs_mask], outputs=outputs) # Compile the model model.compile( optimizer = Adam(lr=lr), loss=self.loss_total(inputs_mask) ) </code></pre>
<p>Here is a reference TPU colab, make sure to change accelerator to a TPU: runtime -> change run time -> TPU</p> <p><a href="https://colab.research.google.com/notebooks/tpu.ipynb" rel="nofollow noreferrer">https://colab.research.google.com/notebooks/tpu.ipynb</a></p>
tensorflow|keras|google-colaboratory|tpu|google-cloud-tpu
0
375,360
58,493,848
Pandas: Merging two columns into one with corresponding values
<p>I have a large dataframe with a bunch of names which appear in two columns. It is in the following layout</p> <pre><code>Winner Value_W Loser Value_L Jack 5 Sally -3 Sally 2 Max -1 Max 4 Jack -2 Lucy 1 Jack -6 Jack 6 Henry -3 Henry 5 Lucy -4 </code></pre> <p>I then filtered on columns 'Winner' and 'Loser' to get all rows which Jack appears in using the following code</p> <pre><code>df.loc[(df['Winner'] == 'Jack') | (df['Loser'] == 'Jack')] </code></pre> <p>Which returns the following:</p> <pre><code>Winner Value_W Loser Value_L Jack 5 Sally -3 Max 4 Jack -2 Lucy 1 Jack -6 Jack 6 Henry -3 </code></pre> <p>I am now looking to generate one column which only has Jack and his corresponding values. So in this example, the output I want is:</p> <pre><code>New_1 New_2 Jack 5 Jack -2 Jack -6 Jack 6 </code></pre> <p>I am unsure of how to do this.</p>
<p>You could <code>wide_to_long</code> after renaming the columns slightly. This allows you to capture additional information, like whether that row is a Win or Loss. Or if you don't care do <code>df1 = df1.reset_index(drop=True)</code></p> <pre><code>d = {'Winner': 'Person_W', 'Loser': 'Person_L'} df1 = pd.wide_to_long(df.rename(columns=d).reset_index(), stubnames=['Person', 'Value'], i='index', j='Win_Lose', sep='_', suffix='.*') df1[df1.Person == 'Jack'] # Person Value #index Win_Lose #0 W Jack 5 #4 W Jack 6 #2 L Jack -2 #3 L Jack -6 </code></pre> <hr> <p>If that specific ordering is important, we still have the original Index so:</p> <pre><code>df1.sort_index(level=0).query('Person == "Jack"').reset_index(drop=True) # Person Value #0 Jack 5 #1 Jack -2 #2 Jack -6 #3 Jack 6 </code></pre>
python|pandas
4
375,361
58,419,436
Extracting attention weights of each token at each layer of transformer in python
<p>I am doing some NLP and I am interested in extracting the attention weights of individual test tokens at each layer of a transformer via Python (PyTorch, TensorFlow, etc.)</p> <p>Is coding up a Transformer (any transformer like Transformer-XL, OpenAI-GPT, GPT-2, etc.) from scratch the only way to get the attention weights of individual test tokens at each transformer layer? Is there an easier way to perform this task in Python? More specifically, can Keras-transformer be used for this purpose? If someone can provide me with some example code, it would be great!</p> <p>Thank you,</p>
<p>The type of API you are probably looking for is <a href="https://github.com/jessevig/bertviz" rel="nofollow noreferrer">BertViz</a>, which is a tool for visualizing attention in Transformer models (BERT, GPT-2, XLNet, and RoBERTa).</p> <p>Also, Huggingface's <a href="https://github.com/huggingface/transformers" rel="nofollow noreferrer">transformers</a> API is an excellent way to play with the Transformer architecture. OpenNMT is another helpful API for using the transformer. You may check out the <a href="http://opennmt.net/OpenNMT-py/FAQ.html#how-do-i-use-the-transformer-model" rel="nofollow noreferrer">documentation</a>.</p>
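<p>If you go with Huggingface's transformers, a minimal sketch for pulling out per-layer attention weights looks roughly like this (the exact return structure varies between library versions, so treat this as an outline rather than the definitive API):</p>
<pre><code>import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased', output_attentions=True)

input_ids = tokenizer.encode("The cat sat on the mat", return_tensors='pt')
with torch.no_grad():
    outputs = model(input_ids)

attentions = outputs[-1]  # tuple with one tensor per layer: (batch, num_heads, seq_len, seq_len)
print(len(attentions), attentions[0].shape)
</code></pre>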
tensorflow|keras|deep-learning|nlp|pytorch
0
375,362
58,560,304
TimeDistributed(Dense) vs Dense in seq2seq
<p>Given the code below</p> <pre><code>encoder_inputs = Input(shape=(16, 70)) encoder = LSTM(latent_dim, return_state=True) encoder_outputs, state_h, state_c = encoder(encoder_inputs) # We discard `encoder_outputs` and only keep the states. encoder_states = [state_h, state_c] # Set up the decoder, using `encoder_states` as initial state. decoder_inputs = Input(shape=(59, 93)) # We set up our decoder to return full output sequences, # and to return internal states as well. We don't use the # return states in the training model, but we will use them in inference. decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True) decoder_outputs,_,_ = decoder_lstm(decoder_inputs, initial_state=encoder_states) decoder_dense = TimeDistributed(Dense(93, activation='softmax')) decoder_outputs = decoder_dense(decoder_outputs) # Define the model that will turn # `encoder_input_data` &amp; `decoder_input_data` into `decoder_target_data` model = Model([encoder_inputs, decoder_inputs], decoder_outputs) </code></pre> <p>if I change</p> <p><code>decoder_dense = TimeDistributed(Dense(93, activation='softmax'))</code></p> <p>to</p> <p><code>decoder_dense = Dense(93, activation='softmax')</code></p> <p>it still work, but which method is more effective?</p>
<p>If your data is dependent on time, like <code>Time Series</code> data or data comprising different frames of a <code>Video</code>, then a <code>TimeDistributed Dense</code> layer is more effective than a simple <code>Dense</code> layer.</p> <p><code>TimeDistributed Dense</code> applies the same <code>dense</code> layer to every time step during <code>GRU/LSTM</code> cell unrolling. That’s why the error function will be between the <code>predicted label sequence</code> and the <code>actual label sequence</code>.</p> <p>Using <code>return_sequences=False</code>, the <code>Dense</code> layer will get applied only once, in the last cell. This is normally the case when <code>RNNs</code> are used for classification problems. </p> <p>If <code>return_sequences=True</code>, then the <code>Dense</code> layer is applied at every timestep, just like <code>TimeDistributedDense</code>.</p> <p>In your models both are the same, but if you change your second model to <code>return_sequences=False</code>, then the <code>Dense</code> will be applied only at the last cell. </p> <p>Hope this helps. Happy Learning!</p>
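<p>A quick shape check (a sketch with standalone Keras layers, not your full seq2seq model) shows why both variants behave the same when <code>return_sequences=True</code>:</p>
<pre><code>from tensorflow.keras.layers import Input, LSTM, Dense, TimeDistributed

inp = Input(shape=(59, 93))
seq = LSTM(64, return_sequences=True)(inp)                       # (None, 59, 64)

out_td = TimeDistributed(Dense(93, activation='softmax'))(seq)   # (None, 59, 93)
out_d = Dense(93, activation='softmax')(seq)                     # (None, 59, 93) as well
</code></pre>
<p>A plain <code>Dense</code> on a 3D input is applied to the last axis independently at every timestep, which is exactly what the <code>TimeDistributed</code> wrapper does here.</p>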
tensorflow|keras|lstm|seq2seq|encoder-decoder
4
375,363
58,311,566
reading multiple s3 objects into numpy arrays and concatenating them
<p>I have multiple objects in a s3 bucket (part files). I need to read them and concatenate to one single numpy array. I am using below code</p> <pre><code>def read_and_concat(bucket, key_list): length = len(key_list) for index, key in enumerate(key_list): s3_client.download_file(bucket, key, 'test.out') target_data = genfromtxt('test.out', delimiter=',') data_shape = target_data.shape data[index] = np.array(data_shape) data[index] = target_data result = np.concatenate([data[i] for i in range(length)]) return result </code></pre> <p>This throws me error <code>NameError: name 'data' is not defined</code>. I guess I need to define <code>data</code> as a 2D numpy array before using it in <code>data[index] = np.array(data_shape)</code> line. But I am not sure how.</p> <p>Or is there any other thing I am missing?</p> <p>Please suggest.</p>
<p>I <em>think</em> that <code>data</code> needs to be defined before you use it in this case. Assigning by index to a variable that doesn't exist throws a <code>NameError</code>. I'm not sure the extra step of creating the array is needed because <code>genfromtxt</code> returns an ndarray.</p> <pre><code>def read_and_concat(bucket, key_list): length = len(key_list) data = [] for index, key in enumerate(key_list): s3_client.download_file(bucket, key, 'test.out') data.append(genfromtxt('test.out', delimiter=',')) return np.concatenate(data) </code></pre>
python|numpy|amazon-s3
1
375,364
58,531,295
convert pandas series AND dataframe objects to a numpy array
<h1>Series to Numpy Array:</h1> <p>I have a <code>pandas</code> series object that looks like the following:</p> <pre><code>s1 = pd.Series([0,1,2,3,4,5,6,7,8], index=['AB', 'AC','AD', 'BA','BB','BC','CA','CB','CC']) </code></pre> <p>I want to convert this series to a <code>numpy</code> array as follows:</p> <pre><code>series_size = s1.size dimension_len = np.sqrt(series_size) **Note: series_size will always have an integer sqrt </code></pre> <p>The dimension_len will determine the size of each of the dimensions in the desired 2 dimensional array. </p> <p>In the above series object, the dimension_len = 3 so the desired <code>numpy</code> array will be a 3 x 3 array as follows:</p> <pre><code>np.array([[0, 1, 2], [3, 4, 5], [6,7, 8]]) </code></pre> <h1>Dataframe to Numpy Array:</h1> <p>I have a <code>pandas</code> dataframe object that looks like the following:</p> <pre><code>s1 = pd.Series([0,1,2,3,4,5,6,7,8], index=['AA', 'AB','AC', 'BA','BB','BC','CA','CB','CC']) s2 = pd.Series([-2,2], index=['AB','BA']) s3 = pd.Series([4,3,-3,-4], index=['AC','BC', 'CB','CA']) df = pd.concat([s1, s2, s3], axis=1) max_size = max(s1.size, s2.size, s3.size) dimension_len = np.sqrt(max_size) num_columns = len(df.columns) **Note: max_size will always have an integer sqrt </code></pre> <p>The resulting <code>numpy</code> array will be determined by the following information:</p> <p>num_columns = determines number of dimensions of the array dimension_len = determines the size of each dimension</p> <p>In the above example the desired <code>numpy</code> array will be 3 x 3 x 3 (num_columns = 3 and dimension_len = 3)</p> <p>As well the first column of df will become DESIRED_ARRAY[0], the second column of df will become DESIRED_ARRAY[1], the third column of df will become DESIRED_ARRAY[2] and so on...</p> <p>The desired array I want looks like:</p> <pre><code>np.array([[[0, 1, 2], [3, 4, 5], [6, 7, 8]], [[np.nan,-2, np.nan], [2, np.nan, np.nan], [np.nan, np.nan, np.nan]], [[np.nan,np.nan, 4], [np.nan, np.nan, 3], [-4, -3, np.nan]], ]) </code></pre>
<p>IIUC, you may try numpy transpose and <code>reshape</code></p> <pre><code>df.values.T.reshape(-1, int(dimension_len), int(dimension_len)) Out[30]: array([[[ 0., 1., 2.], [ 3., 4., 5.], [ 6., 7., 8.]], [[nan, -2., nan], [ 2., nan, nan], [nan, nan, nan]], [[nan, nan, 4.], [nan, nan, 3.], [-4., -3., nan]]]) </code></pre>
python|arrays|pandas|numpy
1
375,365
58,196,587
how to save this Matplotlib drawing as a Numpy array?
<p>I have a function that takes an image stored as a Numpy array, draws a few rectangles on it, labels them, then displays the result.</p> <p>The shape of the source Numpy array is (480, 640, 3) - it's an RGB image from a camera. This probably doesn't matter a lot, but I'm just showing you an example of the data I'm working with.</p> <p>This is the function:</p> <pre class="lang-py prettyprint-override"><code>def draw_boxes(imdata, v_boxes, v_labels, v_scores): fig = pyplot.imshow(imdata) # get the context for drawing boxes ax = pyplot.gca() # plot each box for i in range(len(v_boxes)): box = v_boxes[i] # get coordinates y1, x1, y2, x2 = box.ymin, box.xmin, box.ymax, box.xmax # calculate width and height of the box width, height = x2 - x1, y2 - y1 # create the shape rect = Rectangle((x1, y1), width, height, fill=False, color='white') # draw the box ax.add_patch(rect) # draw text and score in top left corner label = "%s (%.3f)" % (v_labels[i], v_scores[i]) ax.text(x1, y1, label, color='white') pyplot.show() </code></pre> <p>I would like to take the annotated image (the image with the rectangles and labels drawn on it) and extract all that as a Numpy array. Basically, return an annotated Numpy array.</p> <p>I've spent a couple hours trying various solution found on Google, but nothing works. For example, I cannot do this...</p> <pre class="lang-py prettyprint-override"><code>fig.canvas.draw() X = np.array(fig.canvas.renderer.buffer_rgba()) </code></pre> <p>...because fig.canvas.draw() fails with:</p> <pre><code>AttributeError: 'AxesImage' object has no attribute 'canvas' </code></pre>
<p>The problem is that your <code>fig</code> variable is not a figure but an <code>AxesImage</code> as the error is stating. Thus change the first line of your code with :</p> <pre><code>fig, ax = plt.subplots() ax = plt.imshow(imdata) </code></pre> <p>The complete function is then :</p> <pre><code>def draw_boxes(imdata, v_boxes, v_labels, v_scores): fig, ax = plt.subplots() ax = plt.imshow(imdata) # get the context for drawing boxes ax = pyplot.gca() # plot each box for i in range(len(v_boxes)): box = v_boxes[i] # get coordinates y1, x1, y2, x2 = box.ymin, box.xmin, box.ymax, box.xmax # calculate width and height of the box width, height = x2 - x1, y2 - y1 # create the shape rect = Rectangle((x1, y1), width, height, fill=False, color='white') # draw the box ax.add_patch(rect) # draw text and score in top left corner label = "%s (%.3f)" % (v_labels[i], v_scores[i]) ax.text(x1, y1, label, color='white') fig.canvas.draw() X = np.array(fig.canvas.renderer.buffer_rgba(), dtype=float) return X </code></pre>
python|numpy|matplotlib
1
375,366
58,546,554
Query about pandas copy() method
<pre><code>df1 = pd.DataFrame({'A':['aaa','bbb','ccc'], 'B':[1,2,3]}) df2=df1.copy() df1.loc[0,'A']='111' #modifying the 1st element of column A print df1 print df2 </code></pre> <p>When modifying <code>df1</code> the object <code>df2</code> is not modified. I expected this because I used <code>copy()</code></p> <pre><code>s1=pd.Series([[1,2],[3,4]]) s2=s1.copy() s1[0][0]=0 #modifying the 1st element of list [1,2] print s1 print s2 </code></pre> <p>But why did <code>s2</code> change as well in this case? I expected no change of <code>s2</code> because I used <code>copy()</code> to create it, but to my surprise, when modifying <code>s1</code> the object <code>s2</code> is also modified. I don't get why.</p>
<p>This is occurring because your <code>pd.Series</code> is of dtype=object, so it essentially copied a bunch of references to python objects. Observe:</p> <pre><code>In [1]: import pandas as pd In [2]: s1=pd.Series([[1,2],[3,4]]) ...: In [3]: s1 Out[3]: 0 [1, 2] 1 [3, 4] dtype: object In [4]: s1.dtype Out[4]: dtype('O') </code></pre> <p>Since <code>list</code> objects are mutable, then the operation:</p> <pre><code>s1[0][0]=0 </code></pre> <p>Modifies the list <em>in-place</em>.</p> <p>This behavior is a "shallow copy", which <em>normally</em> isn't an issue with <code>pandas</code> data structures, because normally you would be using a numeric data type in which case shallow copies don't apply, or if you do use the object dtype you would be using python string objects, which are immutable.</p> <p>Note, <code>pandas</code> containers have a different notion of a deep-copy. Notice the <code>.copy</code> method has a default <code>deep=True</code>, but from the documentation:</p> <blockquote> <p>When <code>deep=True</code> (default), a new object will be created with a copy of the calling object's data and indices. Modifications to the data or indices of the copy will not be reflected in the original object (see notes below).</p> <p>When <code>deep=False</code>, a new object will be created without copying the calling object's data or index (only references to the data and index are copied). Any changes to the data of the original will be reflected in the shallow copy (and vice versa). ... When <code>deep=True</code>, data is copied but actual Python objects will not be copied recursively, only the reference to the object. This is in contrast to <code>copy.deepcopy</code> in the Standard Library, which recursively copies object data (see examples below).</p> </blockquote> <p>Again, this is because <code>pandas</code> is designed for using numeric dtypes, with some built-in support for <code>str</code> objects. A <code>pd.Series</code> of <code>list</code> objects is very strange indeed, and really not a good use-case for a <code>pd.Series</code>.</p>
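<p>If you really need the nested lists themselves to be copied, one workaround (a sketch using only the standard library) is to deep-copy the underlying Python objects before building the new Series:</p>
<pre><code>import copy
import pandas as pd

s1 = pd.Series([[1, 2], [3, 4]])
s2 = pd.Series(copy.deepcopy(s1.tolist()))

s1[0][0] = 0
print(s2[0])  # still [1, 2], because the inner lists were copied recursively
</code></pre>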
python|pandas
5
375,367
58,257,738
Custom Merge Function for Different Size Tensors in Tensorflow
<p>I have two tensors of different sizes and want to write a custom merge function </p> <pre><code>a = tf.constant([[1,2,3]]) b = tf.constant([[1,1,2,2,3,3]]) </code></pre> <p>I want to take the dot product of each point in tensor <code>a</code> with two points in tensor <code>b</code>. So in the example above element <code>1</code> in <code>a</code> is multiplied with the first two elements in <code>b</code> and so on. I'm unsure of how to do loops in tensorflow:</p> <pre><code>def customMergeFunct(x): # not sure how to write a loop over a tensor </code></pre> <p>The output should be:</p> <pre><code>c = Lambda(customMergeFunct)([a,b]) with tf.Session() as sess: print(c.eval()) =&gt; [[2,8,18]] </code></pre>
<p>I'm not exactly sure why you call this a merge function. You don't really need to define a custom function. You can do this with a simple lambda function. Here's my solution.</p> <pre><code>import tensorflow as tf from tensorflow.keras.layers import Lambda import tensorflow.keras.backend as K a = tf.constant([[1,2,3]]) b = tf.constant([[1,1,2,2,3,3]]) a_res = tf.reshape(a,[-1,1]) # make a.shape [3,1] b_res = tf.reshape(b,[-1,2]) # make b.shape [3,2] layer = Lambda(lambda x: K.sum(x[0]*x[1],axis=1)) res = layer([a_res,b_res]) with tf.Session() as sess: print(res.eval()) </code></pre>
tensorflow
2
375,368
58,255,449
Unable to dynamically import TensorFlow.js
<p>I am trying to dynamically import TensorFlow.js using the <code>import</code> function. However, I always receive a <code>TypeError: t is undefined</code> error. The following code is a simple HTML file which recreates the error.</p> <pre><code>!DOCTYPE html&gt; &lt;html lang="en"&gt; &lt;head&gt; &lt;meta charset="UTF-8"&gt; &lt;meta name="viewport" content="width=device-width, initial-scale=1.0"&gt; &lt;meta http-equiv="X-UA-Compatible" content="ie=edge"&gt; &lt;title&gt;Document&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;script&gt; import("https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.0.0/dist/tf.min.js") .then(tf =&gt; { console.log(tf); }); &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre> <p>Please note that I also desire to dynamically create the code that will use the TensorFlow.js library. Any help on how to dynamically import TensorFlow.js in the browser and run dynamically created code that uses its functions is much appreciated. Below is code that acts similarly to my end goal.</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html lang="en"&gt; &lt;head&gt; &lt;meta charset="UTF-8"&gt; &lt;meta name="viewport" content="width=device-width, initial-scale=1.0"&gt; &lt;meta http-equiv="X-UA-Compatible" content="ie=edge"&gt; &lt;title&gt;Document&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;script&gt; let code = `import("https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.0.0/dist/tf.min.js").then(tf =&gt; { // Define a model for linear regression. const model = tf.sequential(); model.add(tf.layers.dense({units: 1, inputShape: [1]})); model.compile({loss: 'meanSquaredError', optimizer: 'sgd'}); // Generate some synthetic data for training. const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]); const ys = tf.tensor2d([1, 3, 5, 7], [4, 1]); // Train the model using the data. model.fit(xs, ys, {epochs: 10}).then(() =&gt; { model.predict(tf.tensor2d([5], [1, 1])).print(); // Open the browser devtools to see the output }); }); `; let script = document.createElement("script"); script.type = "text/javascript"; script.appendChild(document.createTextNode(code)); document.body.appendChild(script); &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </code></pre>
<p>You could very well just add the script element dynamically?</p> <pre><code>const el = document.createElement('script');
el.src = "https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.0.0/dist/tf.min.js";
el.onload = () =&gt; {                 // run only after tf has loaded
   const script = document.createElement('script');
   script.innerHTML = "console.log(tf)";
   document.body.appendChild(script);
};
document.body.appendChild(el);
</code></pre> <p><strong>Alternative</strong></p> <p>You could also append the dependent code earlier, but not execute it until tf has loaded. For example:</p> <pre><code>const script = document.createElement('script');
script.innerHTML = `
  function someDependentCode() {
     console.log(tf); // put all dependent code here
  }
`;
document.body.appendChild(script); // code is added but not called yet

const el = document.createElement('script');
el.src = "https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.0.0/dist/tf.min.js";
el.onload = () =&gt; someDependentCode(); // dependent code runs only once tf is available
document.body.appendChild(el);
</code></pre>
javascript|tensorflow
1
375,369
58,543,321
Iterating Over Numpy Array for NLP Application
<p>I have a Word2Vec model that I'm building where I have a vocab_list of about 30k words. I have a list of sentences (sentence_list) about 150k large. I am trying to remove tokens (words) from the sentences that weren't included in vocab_list. The task seemed simple, but nesting for loops and reallocating memory is slow using the below code. This task took approx. 1hr to run so I don't want to repeat it. </p> <p>Is there a cleaner way to try this? </p> <pre><code>import numpy as np from datetime import datetime start=datetime.now() timing=[] result=[] counter=0 for sent in sentences_list: counter+=1 if counter %1000==0 or counter==1: print(counter, 'row of', len(sentences_list), ' Elapsed time: ', datetime.now()-start) timing.append([counter, datetime.now()-start]) final_tokens=[] for token in sent: if token in vocab_list: final_tokens.append(token) #if len(final_tokens)&gt;0: result.append(final_tokens) print(counter, 'row of', len(sentences_list),' Elapsed time: ', datetime.now()-start) timing.append([counter, datetime.now()-start]) sentences=result del result timing=pd.DataFrame(timing, columns=['Counter', 'Elapsed_Time']) </code></pre>
<p>Note that typical word2vec implementations (like Google's original <code>word2vec.c</code> or <code>gensim</code> <code>Word2Vec</code>) will often just ignore words in their input that aren't part of their established vocabulary (as specified by <code>vocab_list</code> or enforced via a <code>min_count</code>). So you may not need to perform this filtering at all. </p> <p>Using a more-idiomatic Python list-comprehension <em>might</em> be noticeably faster (and would certainly be more compact). Your code could simply be:</p> <pre><code>filtered_sentences = [ [word for word in sent if word in vocab_list] for sent in sentences_list ] </code></pre>
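<p>If the filtering does need to run, the single biggest win is usually converting <code>vocab_list</code> to a <code>set</code> first, since <code>token in vocab_list</code> on a 30k-element list is a linear scan for every token. A sketch using the same variable names as the question:</p> <pre><code>vocab = set(vocab_list)   # O(1) membership checks instead of scanning the list

filtered_sentences = [
    [word for word in sent if word in vocab]
    for sent in sentences_list
]
</code></pre>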
python|numpy|nlp|word2vec
1
375,370
58,574,610
python3 recognizes tensorflow, but doesn't recognize any of its attributes
<p>I am getting the following errors:</p> <pre><code>AttributeError: module 'tensorflow' has no attribute 'variable_scope' AttributeError: module 'tensorflow' has no attribute 'squared_difference' </code></pre> <p><a href="https://www.tensorflow.org/install" rel="nofollow noreferrer">tensorflow</a> is installed:</p> <pre><code>&gt;&gt; pip3 list | grep tensorflow tensorflow 2.0.0 tensorflow-estimator 2.0.1 </code></pre>
<p>TensorFlow 2.0 cleaned up some of the APIs. Mathematical functions such as <code>squared_difference()</code> are now under <code>tf.math</code>. </p> <p>There is no <code>tf.variable_scope()</code> in TensorFlow 2.0. I suggest reading <a href="https://www.tensorflow.org/guide/migrate" rel="noreferrer">this post</a> with examples on how to migrate your code to TF2.</p> <p>If you want your code to be compatible with older versions of TensorFlow, you can use <code>tf.compat.v1.variable_scope()</code></p>
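<p>A short sketch of both points above; the tensors <code>a</code> and <code>b</code> are placeholders for whatever your code computes:</p> <pre><code>import tensorflow as tf

# moved under tf.math in TF 2.x
d = tf.math.squared_difference(a, b)

# legacy variable_scope is still reachable through the compat.v1 namespace
with tf.compat.v1.variable_scope("my_scope"):
    ...
</code></pre>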
tensorflow|attributeerror
18
375,371
58,182,032
You tried to call count_params on ..., but the layer isn't built. TensorFlow 2.0
<p>I receive the following error in Python 3 and TF 2.0.</p> <p>"ValueError: You tried to call count_params on digits, but the layer isn't built. You can build it manually via: digits.build(batch_input_shape)." at the line new_model.summary().</p> <p>What is the problem and how can I solve it?</p> <pre><code>inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs, name='3_layer_mlp')
model.summary()

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255

model.compile(loss='sparse_categorical_crossentropy',
              optimizer=keras.optimizers.RMSprop(),
              metrics=['accuracy'])
history = model.fit(x_train, y_train,
                    batch_size=64,
                    epochs=2)

model.save('saved_model', save_format='tf')

new_model = keras.models.load_model('saved_model')

new_model.summary()
</code></pre>
<p>In TF 2.0 the model can be saved in the .h5 format; use <code>model.save('my_model.h5')</code> when saving.</p> <p>Please find the link of a working <a href="https://colab.sandbox.google.com/gist/oanush/0a1db97731b201717003d19c29257f08/tf_nightly.ipynb" rel="nofollow noreferrer">gist</a>.</p> <p>The issue also seems to be resolved in the latest TF-nightly build (which will become the official 2.1 release), so you can also try <code>pip install tf-nightly</code>.</p> <p>Find the link of a working gist <a href="https://colab.sandbox.google.com/gist/oanush/4790590321eeaf5bf8be653b0454963c/tf_nightly.ipynb" rel="nofollow noreferrer">here</a>.</p>
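<p>A minimal sketch of the suggested workaround, assuming the rest of the training code from the question stays the same:</p> <pre><code># save in HDF5 format instead of the TF SavedModel format
model.save('my_model.h5')

# the reloaded model is built, so summary() works
new_model = keras.models.load_model('my_model.h5')
new_model.summary()
</code></pre>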
tensorflow2.0
1
375,372
58,507,814
How do I stack certain columns and copy the others all the way down to fill the empty rows?
<p>I'm quite new to python and am learning by applying what I know to automate some tasks in Excel. </p> <p>Basically what I'm trying to do is take certain columns (ex: <code>columns J:Z</code>) and stack them below each other i.e. <code>column J</code> goes under <code>column J</code>, and then column L goes under the column J &amp; K stack, so on and so forth. </p> <p>I was able to achieve this by:</p> <p>python </p> <pre><code>df1 = df1['December']\ .append(df1['Jan 2017'])\ .append(df1['Feb 2017'])\ .reset_index(drop=False) </code></pre> <p>But in the process it took out columns <code>A:I</code>. What I would like to accomplish is copy columns <code>A:I</code> rows <code>1 - 20</code> for each stack. The data is columnar and I'd like to convert it into rows for each column</p>
<p>I think you are looking for pandas .stack(), .unstack(), or .melt().</p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html</a></p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html</a></p> <p><a href="https://www.youtube.com/watch?v=qOkj5zOHwRE" rel="nofollow noreferrer">https://www.youtube.com/watch?v=qOkj5zOHwRE</a></p>
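<p>A rough sketch of how <code>melt</code> could apply here; the identifier columns and month names below are guesses based on the question, so adjust them to the real sheet:</p> <pre><code>import pandas as pd

id_cols = list(df.columns[:9])                      # the A:I columns that should repeat
month_cols = ['December', 'Jan 2017', 'Feb 2017']   # the J:Z columns being stacked

long_df = df.melt(id_vars=id_cols, value_vars=month_cols,
                  var_name='Month', value_name='Value')
</code></pre>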
python|excel|pandas|multiple-columns
0
375,373
58,400,004
Load dtypes into Python via a separate mapping table that can be linked to the primary dataframe to specify the dtypes by column
<p>Although new to python, i seem to be getting the hang of it. </p> <p>However, i will be dealing with massive databases with hundreds of columns and specifying dtypes for each seem to be a very code intensive excersize i.e. having to specifically write out column names and convert to a certain dtype. </p> <p>Question: Is it possible to create an excel/CSV file with all the columns from the primary database down a column and have a separate column for dtypes for each. Then linking this to the primary dataframe to specify the dtype based on the secondary dataframe? i.e. primary database has 100 columns and i load a separate table with those 100 columns down the rows and just have 100 rows + a column with dtypes (str, int, etc.) that can be indexed off of to specify the exact dtype for each row in the primary database? </p> <p>this is similar to how you would do it in excel i.e. index matching off of a separate mapping table</p>
<p>If you have another dataframe with the correct dtypes, or a dictionary of {columnname: dtype}, you can use it to change the dtypes like below:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

d = {
    "A": np.random.choice("A B C".split(), 5),
    "B": np.random.rand(5),
    "C": np.arange(5)
}
df = pd.DataFrame(d)
df2 = df.astype("O")
print(df2.dtypes)

## convert using dtypes of another dataframe
print(df2.astype(df.dtypes).dtypes)

## convert using dictionary
print(df2.astype({"B":"f", "C":"i"}).dtypes)
</code></pre>
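<p>To tie this back to the question, here is a small sketch of reading the dtype mapping itself from a separate file; the file name and column names are assumptions, not part of the original question:</p> <pre class="lang-py prettyprint-override"><code># assumed layout of mapping.csv: one row per column, e.g.
#   column_name,dtype
#   A,object
#   B,float64
#   C,int64
mapping = pd.read_csv("mapping.csv")
dtype_dict = dict(zip(mapping["column_name"], mapping["dtype"]))

# apply the mapping to the primary dataframe
df = df.astype(dtype_dict)
</code></pre>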
python|pandas
0
375,374
58,202,580
Replicating in pytorch https://www.d2l.ai/chapter_linear-networks/linear-regression-scratch.html
<p>I am trying to replicate the code in pytorch. However I am having some problems with the autograd function. I am having the following runtime error. </p> <p>RuntimeError: Trying to backward through the graph a second time</p> <p>The code is the following:</p> <pre class="lang-py prettyprint-override"><code>for epoch in range(num_epochs): # Assuming the number of examples can be divided by the batch size, all # the examples in the training data set are used once in one epoch # iteration. The features and tags of mini-batch examples are given by X # and y respectively for X, y in data_iter(batch_size, features, labels): print (X) print (y) l = loss(net(X,w,b) , y) print (l) l.backward(retain_graph=True) print (w.grad) print (b.grad) with torch.no_grad(): w -= w.grad * 1e-5/batch_size b -= b.grad * 1e-5/batch_size w.grad.zero_() b.grad.zero_() </code></pre> <p>Can someone explain how the autograd works in python? If someone can recommend me a good resource on learning pytorch that will be great. </p>
<p>Pytorch differs from Tensorflow in that it builds a dynamic computational graph. To save memory, Pytorch deletes the intermediate nodes of the graph once they are no longer needed. That is, you will run into trouble if you try to backpropagate gradients through those intermediate nodes a second time.</p> <p>The simple solution is to set <code>retain_graph=True</code>. For example,</p> <pre><code>model = Autoencoder()
rec = model(x)
loss_1 = mse_loss(rec, x)
loss_2 = l1_loss(rec, x)

opt.zero_grad()
loss_1.backward(retain_graph=True)
loss_2.backward()
opt.step()
</code></pre>
pytorch|linear-regression|autograd
0
375,375
58,312,770
Break a row into muliple rows based on the (string) content of a column
<p>One column of my dataframe has a variable number of <code>\n</code>s inside its content and I need each line to be on a single row on the final dataframe. </p> <p>This is a minimal example:</p> <pre><code>df = pd.DataFrame({'a': ['x', 'y'], 'b':['line 1\nline 2\nline 3', 'line 1' ]}) </code></pre> <p>That produces this starting dataframe:</p> <pre><code> a b 0 x line 1\nline 2\nline 3 1 y line 1 </code></pre> <p>I want it to become like this one:</p> <pre><code> a b 0 x line 1 1 x line 2 2 x line 3 3 y line 1 </code></pre> <p>I've seen there is a built in function that converts each <code>pattern</code> to a new column with the <code>str.extract</code> command below, for example, this is what I tried:</p> <pre><code>df['b'].str.extract(pat='(.*)\n(.*)', expand=True) </code></pre> <p>That produces a somewhat interesting output:</p> <pre><code> 0 1 0 line 1 line 2 1 NaN NaN </code></pre> <p>But this is not a viable solution, because the data is split over columns and not rows, not all patterns matched and it's not clear how to put it back on the original dataframe in place and order. The order of the entries is relevant to be preserved, although the <code>dataframe index</code> is not.</p> <p>In order to capture all the patterns, it would be possible to do this:</p> <pre><code>df['b'].transform(lambda x: x.split('\n')) </code></pre> <p>That yields this output:</p> <pre><code>0 [line 1, line 2, line 3] 1 [line 1] </code></pre> <p>But again, I don't see a way to make progress from this to the desired state.</p>
<p>Try using <code>str.split</code> and <code>explode</code></p> <pre><code>df = df.set_index('a').b.str.split('\\n').explode().reset_index() Out[153]: a b 0 x line 1 1 x line 2 2 x line 3 3 y line 1 </code></pre> <hr> <p><strong>For pandas &lt; 0.25</strong></p> <pre><code>df = (df.set_index('a').b.str.split('\\n', expand=True).stack() .droplevel(1).reset_index(name='b')) Out[174]: a b 0 x line 1 1 x line 2 2 x line 3 3 y line 1 </code></pre>
python|pandas
3
375,376
58,206,559
RuntimeError: expected device cpu and dtype Byte but got device cpu and dtype Bool
<p>As described in <a href="https://github.com/facebookresearch/inversecooking/issues/10" rel="nofollow noreferrer">the issue I opened</a>, I get the following error when running the Pytorch <a href="https://github.com/facebookresearch/inversecooking" rel="nofollow noreferrer">inverse-cooking</a> model on CPU:</p> <p><code>RuntimeError: expected device cpu and dtype Byte but got device cpu and dtype Bool</code></p> <p>I have tried running the <code>demo.ipynb</code> file in both my laptop's Intel i7-4700HQ 8 threads and my desktop Ryzen 3700x. I was using Arch Linux on my laptop and Manjaro on my desktop.</p> <p>The model works fine when I run it on Google Collabs GPU.</p> <p>According to the <code>demo.ipynb</code> file the model should be able to run on CPU as well. Does anyone know if I have to tweak any parameters in order to make it work?</p>
<p>As stated by @iacolippo and in the comment session and <a href="https://github.com/facebookresearch/inversecooking/issues/10#issuecomment-537848360" rel="nofollow noreferrer">myDennisCode</a>, the problem really was dependency versions. I had <code>torchvision==0.4.0</code> (which confused me) and <code>torch==1.2.0</code>.</p> <p>To fix the problem, simply install <code>torch==0.4.1</code> and <code>torchvision==0.2.1</code>.</p>
pytorch|cpu|archlinux
1
375,377
58,362,762
extracting string from pandas
<p>I have a dataframe and I want to extract a number from a text column. If the first 8 characters of the value are the word 'Transfer', the number should be taken starting at position 13 (15 characters); otherwise it should be taken starting at position 21 (15 characters). I want to reproduce this Excel formula in pandas:</p> <pre><code>=IF(LEFT(C10,8)="Transfer",MID(C10,13,15),MID(C10,21,15))

Particular                          Expected Result
On-Line Transfer - 01901091900014   01901091900014
On-Line Transfer - 02501091900004   02501091900004
On-Line Transfer - 03601091900018   03601091900018
Transfer - 03631081900095           03631081900095
Transfer - 03829081900083           03829081900083
</code></pre>
<p>Try this:</p> <pre><code>import pandas as pd
import numpy as np

df['new_extract_column'] = np.nan

# mirrors LEFT(C10,8)="Transfer": check the first 8 characters
mask = df['Particular'].str.startswith('Transfer')

# MID(...,13,15): 15 characters starting at position 13 (0-indexed slice 12:27)
df.loc[mask, 'new_extract_column'] = df.loc[mask, 'Particular'].str[12:27]

# MID(...,21,15): 15 characters starting at position 21 (0-indexed slice 20:35)
df.loc[~mask, 'new_extract_column'] = df.loc[~mask, 'Particular'].str[20:35]
</code></pre>
python-3.x|pandas
0
375,378
58,278,715
Is it possible to set dtype to an existing structured array, or add "column name" to an existing array?
<p>this code (snippet_1) is to construct a <a href="https://docs.scipy.org/doc/numpy-1.14.0/user/basics.rec.html" rel="nofollow noreferrer">structured array</a></p> <pre><code>&gt;&gt;&gt; dt = np.dtype([('name', np.str_, 16), ('age', np.int)]) &gt;&gt;&gt; x = np.array([('Sarah', 16), ('John', 17)], dtype=dt) &gt;&gt;&gt; x array([('Sarah', 16), ('John', 17)], dtype=[('name', '&lt;U16'), ('age', '&lt;i8')]) </code></pre> <p>this code is to set dtype to a given simple array</p> <pre><code>arr = np.array([10, 20, 30, 40, 50]) arr = arr.astype('float64') </code></pre> <p>this code (snippet_3) is trying to set dtype to a structured array,</p> <pre><code>x = np.array([('Sarah', 16), ('John', 17)]) x = x.astype(dt) </code></pre> <p>of course, set dtype this way causes ValueError</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-18-201b69204e82&gt; in &lt;module&gt;() 1 x = np.array([('Sarah', 16), ('John', 17)]) ----&gt; 2 x = x.astype(dt) ValueError: invalid literal for int() with base 10: 'Sarah' </code></pre> <p>Is it possible to set dtype to an existing structured array? something like snippet_3?</p> <p>Why would I want to do this? Because there is a handy approach to access data in the setting of snippet_1.</p> <pre><code>x['name'] </code></pre> <p>If I can add "column name" to an existing array, that would be cool.</p>
<p>You can use <a href="https://docs.scipy.org/doc/numpy/user/basics.rec.html#numpy.lib.recfunctions.unstructured_to_structured" rel="nofollow noreferrer"><code>numpy.lib.recfunctions.unstructured_to_structured</code></a></p> <pre><code>x = np.array([('Sarah', 16), ('John', 17)]) x # array([['Sarah', '16'], # ['John', '17']], dtype='&lt;U5') dt = np.dtype([('name', np.str_, 16), ('age', np.int)]) import numpy.lib.recfunctions as nlr xs = nlr.unstructured_to_structured(x, dtype=dt) xs # array([('Sarah', 16), ('John', 17)], # dtype=[('name', '&lt;U16'), ('age', '&lt;i8')]) </code></pre>
python|numpy
1
375,379
58,439,345
Efficient 2d numpy array construction from 1d array and function with conditionals
<p>I am creating a 2d numpy array from a function applied to a 1d numpy array (which contains a conditional) and would like to know a more efficient way of doing this. This is currently the slowest part of my code. x is a 1d numpy array, and the output is a 2d numpy array. There is a switch to construct a different array element based on whether x is less than or greater than 0. In the future, there could be an arbitrary number of switches.</p> <pre><code>def basis2(x) : final = [] for i in x : if i &gt; 0 : xr = 2.0*(i-0.5) final.append(np.array([0.0, 0.0, 0.0, 0.5*xr*(xr-1.0),-1.0*(xr+1)*(xr-1), 0.5*xr*(xr+1.0)])) else : xl = 2.0*(i+0.5) final.append(np.array([0.5*xl*(xl-1.0),-1.0*(xl+1)*(xl-1),0.5*xl*(xl+1.0),0.0,0.0,0.0])) return np.array(final) </code></pre> <p>Ideally, I would be able to eliminate the for loop - but so far I have not managed to do this properly, using 'where' for example. Thanks, for any help.</p>
<p>With your function:</p> <pre><code>In [247]: basis2(np.array([1,.5,0,-.5,-1])) Out[247]: [array([ 0., 0., 0., 0., -0., 1.]), array([ 0., 0., 0., -0., 1., 0.]), array([ 0., -0., 1., 0., 0., 0.]), array([-0., 1., 0., 0., 0., 0.]), array([ 1., 0., -0., 0., 0., 0.])] In [248]: %hist 245 basis2_1(np.array([1,.5,0,-.5,-1])) </code></pre> <p>With some superficial changes:</p> <pre><code>def basis2_1(x) : xr = 2.0*(x[x&gt;0]-0.5) res1 = np.array([0.0*xr, 0.0*xr, 0.0*xr, 0.5*xr*(xr-1.0),-1.0*(xr+1)*(xr-1), 0.5*xr*(xr+1.0)]) xl = 2.0*(x[x&lt;=0]+0.5) res2 = np.array([0.5*xl*(xl-1.0),-1.0*(xl+1)*(xl-1),0.5*xl*(xl+1.0),0.0*xl,0.0*xl,0.0*xl]) return res1, res2 In [250]: basis2_1(np.array([1,.5,0,-.5,-1])) Out[250]: (array([[ 0., 0.], [ 0., 0.], [ 0., 0.], [ 0., -0.], [-0., 1.], [ 1., 0.]]), array([[ 0., -0., 1.], [-0., 1., 0.], [ 1., 0., -0.], [ 0., 0., -0.], [ 0., 0., -0.], [ 0., 0., -0.]])) </code></pre> <p>Joining the two subarrays:</p> <pre><code>In [251]: np.hstack(_) Out[251]: array([[ 0., 0., 0., -0., 1.], [ 0., 0., -0., 1., 0.], [ 0., 0., 1., 0., -0.], [ 0., -0., 0., 0., -0.], [-0., 1., 0., 0., -0.], [ 1., 0., 0., 0., -0.]]) </code></pre> <p>Obviously that needs refinement, but it should be enough to get you started.</p> <p>For example you might make a <code>result = np.zeros((5,x.shape[0]))</code> array, just insert the respective non-zero elements (saving all those <code>0.0*xr</code> terms).</p> <p>Looking at those blocks in <code>Out[251]</code>:</p> <pre><code>In [257]: x = np.array([1,.5,0,-.5,-1]) In [258]: Out[251][3:,np.nonzero(x&gt;0)[0]] Out[258]: array([[ 0., -0.], [-0., 1.], [ 1., 0.]]) In [259]: Out[251][:3,np.nonzero(x&lt;=0)[0]] Out[259]: array([[ 0., -0., 1.], [-0., 1., 0.], [ 1., 0., -0.]]) </code></pre>
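<p>A minimal sketch of that refinement (one possible way, assuming <code>x</code> is a 1D NumPy array as in the question): preallocate the result and fill only the non-zero entries for each branch.</p> <pre><code>def basis2_vec(x):
    out = np.zeros((len(x), 6))
    pos = x &gt; 0

    xr = 2.0 * (x[pos] - 0.5)
    out[pos, 3] = 0.5 * xr * (xr - 1.0)
    out[pos, 4] = -1.0 * (xr + 1) * (xr - 1)
    out[pos, 5] = 0.5 * xr * (xr + 1.0)

    xl = 2.0 * (x[~pos] + 0.5)
    out[~pos, 0] = 0.5 * xl * (xl - 1.0)
    out[~pos, 1] = -1.0 * (xl + 1) * (xl - 1)
    out[~pos, 2] = 0.5 * xl * (xl + 1.0)
    return out
</code></pre>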
python|arrays|numpy
1
375,380
58,505,369
How to partially transpose pandas a dataframe
<p>I have a question similar to this one <a href="https://stackoverflow.com/questions/38821985/processing-transposing-pandas-dataframe">here</a>. I'd like to partially transpose a pandas dataframe. I got my hands on a dataframe looking similar to the following: </p> <pre><code>data = [{"Student" : "john", "Subject" : 'Math', 'Plan_Actual_Delta' : 'Plan' , "2009" : 100, "2010" : 100}, {"Student" : "john", "Subject" : 'Math', 'Plan_Actual_Delta' : 'Actual' ,"2009" : 80, "2010" : 100}, {"Student" : "john", "Subject" : 'Math' , 'Plan_Actual_Delta' : 'Delta' ,"2009" : -20, "2010" : 0}, {"Student" : "lisa", "Subject" : 'Math', 'Plan_Actual_Delta' : 'Plan' ,"2009" : 80, "2010" : 100}, {"Student" : "lisa", "Subject" : 'Math', 'Plan_Actual_Delta' : 'Actual' ,"2009" : 75, "2010" : 100}, {"Student" : "lisa", "Subject" : 'Math', 'Plan_Actual_Delta' : 'Delta' ,"2009" : -5, "2010" : 0}] df = pd.DataFrame(data) </code></pre> <p>It shows students and their planned and actual performance (and the delta) for a given subject in a given year. The years are columns in this example. And whether the row shows the planned, actual or the delta of the students performance is given in the rows. </p> <p>I'd like to transform it in a way that plan, actual and delta become columns. My goal is hence the following strucuture:</p> <pre><code>data = [{"Student" : "john", "Subject" : 'Math', 'Year': '2009', 'Plan':100, 'Actual':80, 'Delta': -20}, {"Student" : "john", "Subject" : 'Math', 'Year': '2010', 'Plan':100, 'Actual':100, 'Delta': 0}, {"Student" : "lisa", "Subject" : 'Math', 'Year': '2009', 'Plan':80, 'Actual':75, 'Delta': -5}, {"Student" : "lisa", "Subject" : 'Math', 'Year': '2010', 'Plan':100, 'Actual':100, 'Delta': 0}] df = pd.DataFrame(data) </code></pre> <p>How would you do that? Thanks in advance /R</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> with reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>Series.unstack</code></a> by third level:</p> <pre><code>df = (df.set_index(['Student','Subject','Plan_Actual_Delta']) .rename_axis('Year', axis=1) .stack() .unstack(2) .reset_index() .rename_axis(None, axis=1)) print (df) Student Subject Year Actual Delta Plan 0 john Math 2009 80 -20 100 1 john Math 2010 100 0 100 2 lisa Math 2009 75 -5 80 3 lisa Math 2010 100 0 100 </code></pre> <p>Another solution, if not working first with possible aggregation with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.melt.html" rel="nofollow noreferrer"><code>DataFrame.melt</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="nofollow noreferrer"><code>DataFrame.pivot_table</code></a>:</p> <pre><code>df = (df.melt(['Student','Subject','Plan_Actual_Delta'], var_name='Year') .pivot_table(index=['Student','Subject','Year'], columns='Plan_Actual_Delta', values='value', aggfunc='mean') .reset_index() .rename_axis(None, axis=1) ) print (df) Student Subject Year Actual Delta Plan 0 john Math 2009 80 -20 100 1 john Math 2010 100 0 100 2 lisa Math 2009 75 -5 80 3 lisa Math 2010 100 0 100 </code></pre>
python|pandas|dataframe|transpose
3
375,381
58,492,933
Numpy delete() is deleting different arrays with same elements from 2D array
<p>I have a 2D numpy array like <code>B = [[1. 0.], [0. 1.], [3. 1.]]</code> and I want to delete <code>[0. 1.]</code>, but when I do:</p> <pre><code>B = np.delete(B, [0, 1], 0) print(B) </code></pre> <p>both <code>[1. 0.], [0. 1.]</code> are deleted and I'm left with<br> <code>[[3. 1.]]</code></p> <p>thus I suppose <code>delete()</code> does not recognize different arrays with the same elements. What can I do?</p>
<p>You are asking delete() to remove both index 0 and index 1 by passing [0, 1] as the second argument. That second parameter is the index (or list of indices) you want to delete along the given axis. You should try: </p> <pre class="lang-py prettyprint-override"><code>np.delete(B, 1, 0)
</code></pre>
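<p>If the goal is to delete by value rather than by position, say removing the row <code>[0., 1.]</code> wherever it occurs, a boolean mask is one possible alternative (a sketch, not the only way):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

B = np.array([[1., 0.], [0., 1.], [3., 1.]])
row = np.array([0., 1.])

# keep every row that is not exactly equal to `row`
mask = ~(B == row).all(axis=1)
B = B[mask]
</code></pre>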
python|numpy|numpy-ndarray
3
375,382
58,314,905
Extracting Audio channel from a video file to feed in decode_wav function of TensorFlow
<p>I want to feed audio channel of a video file to the following TenorFlow function:</p> <pre><code>tf.audio.decode_wav( contents, desired_channels=-1, desired_samples=-1, name=None) </code></pre> <p>Where Args:</p> <ul> <li><p>contents: A Tensor of type string. The WAV-encoded audio, usually from a file. </p></li> <li><p>desired_channels: An optional int. Defaults to -1. Number of sample channels wanted. </p></li> <li><p>desired_samples: An optional int. Defaults to -1. Length of audio requested. </p></li> <li><p>name: A name for the operation (optional).</p></li> </ul>
<p>You can extract the audio of video by eg.:</p> <pre class="lang-py prettyprint-override"><code>import subprocess command = "ffmpeg -i C:/test.mp4 -ab 160k -ac 2 -ar 44100 -vn audio.wav" subprocess.call(command, shell=True) </code></pre> <p>And pass the <code>*.wav</code> file as tensor to <code>tf.audio.decode_wav</code>:</p> <pre class="lang-py prettyprint-override"><code>raw_audio = tf.io.read_file(filename) waveform = tf.audio.decode_wav(raw_audio) </code></pre> <p>References:</p> <ul> <li><a href="https://stackoverflow.com/questions/26741116/python-extract-wav-from-video-file">Python extract wav from video file</a></li> <li><a href="https://stackoverflow.com/questions/58096095/how-does-tf-audio-decode-wav-get-its-contents">How does tf.audio.decode_wav get its contents?</a></li> </ul>
python|tensorflow|wav
1
375,383
58,226,715
Eigenvalues not following order while changing parameter in Python
<p>I have a <code>6x6</code> matrix <code>H</code> of which two eigenvalues become complex at certain values of the parameter <code>l</code> (for lambda). Apparently they are the first two elements of the eigenvalue array <code>E</code> we after diagonalizing <code>H</code> (calling <code>np.linalg.eig</code> module). However, this sequence is not maintained when they become real (in my case, when <code>l</code> is between -1 and 1). I can track them by comparing them to the exact analytical expressions.</p> <p>See my code below:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import cmath dim=6; t=1.0; eps=0.5; U = 2.0 # Initialize H H=np.zeros((dim,dim),float) # Vary parameter l for l in np.linspace(-2,2,3): # Construct H-matrix (size: 6 x 6) tm=t-l; tp=t+l H[0][0]=H[2][2]=H[3][3]=H[5][5]=2.0*eps H[1][1]=H[4][4]=2.0*eps+U H[1][2]=H[2][4]=tm H[2][1]=H[4][2]=tp H[1][3]=H[3][4]=-tm H[3][1]=H[4][3]=-tp # Diagonalize H E,psi=np.linalg.eig(H) # Print all the eigenvalues print('lambda=',l,'===&gt;') print() print('First two eigenvalues: E0=',E[0], 'E1=',E[1] ) print('Always real eigenvalues: E2=',E[2], 'E3=',E[3], 'E4=',E[4], 'E5=',E[5] ) print() print("Expected sometimes complex eigenvalues:") print("2\eps+[U+\srqt{16(t^2-\lambda^2)+U^2}]/2=",2.0*eps+(U+cmath.sqrt(16*tp*tm+U**2) )*0.5,"degeneracy=1") print("2\eps+[U-\srqt{16(t^2-\lambda^2)}+U^2]/2=",2.0*eps+(U-cmath.sqrt(16*tp*tm+U**2) )*0.5,"degeneracy=1") print("Expected always real eigenvalues:") print("2\eps=",2.0*eps,"degeneracy=3") print("2\eps+U=",2.0*eps+U,"degeneracy=1") print('---------------------') print() </code></pre> <p>How can I maintain the sequence so that I can plot only the "sometimes complex" pair of eigenvalues against <code>l</code> (or lambda)?</p> <p>Output:</p> <pre><code>lambda= -2.0 ===&gt; First two eigenvalues: E0= (2.0000000000000058+3.3166247903554043j) E1= (2.0000000000000058-3.3166247903554043j) Always real eigenvalues: E2= (2.9999999999999982+0j) E3= (0.9999999999999999+0j) E4= (1+0j) E5= (1+0j) Expected sometimes complex eigenvalues: 2\eps+[U+\srqt{16(t^2-\lambda^2)+U^2}]/2= (2+3.3166247903554j) degeneracy=1 2\eps+[U-\srqt{16(t^2-\lambda^2)}+U^2]/2= (2-3.3166247903554j) degeneracy=1 Expected always real eigenvalues: 2\eps= 1.0 degeneracy=3 2\eps+U= 3.0 degeneracy=1 --------------------- lambda= 0.0 ===&gt; First two eigenvalues: E0= -0.23606797749979025 E1= 2.9999999999999996 Always real eigenvalues: E2= 4.236067977499788 E3= 1.0 E4= 1.0 E5= 1.0 Expected sometimes complex eigenvalues: 2\eps+[U+\srqt{16(t^2-\lambda^2)+U^2}]/2= (4.23606797749979+0j) degeneracy=1 2\eps+[U-\srqt{16(t^2-\lambda^2)}+U^2]/2= (-0.2360679774997898+0j) degeneracy=1 Expected always real eigenvalues: 2\eps= 1.0 degeneracy=3 2\eps+U= 3.0 degeneracy=1 --------------------- lambda= 2.0 ===&gt; First two eigenvalues: E0= (2.000000000000008+3.3166247903554043j) E1= (2.000000000000008-3.3166247903554043j) Always real eigenvalues: E2= (2.999999999999997+0j) E3= (1+0j) E4= (1+0j) E5= (1+0j) Expected sometimes complex eigenvalues: 2\eps+[U+\srqt{16(t^2-\lambda^2)+U^2}]/2= (2+3.3166247903554j) degeneracy=1 2\eps+[U-\srqt{16(t^2-\lambda^2)}+U^2]/2= (2-3.3166247903554j) degeneracy=1 Expected always real eigenvalues: 2\eps= 1.0 degeneracy=3 2\eps+U= 3.0 degeneracy=1 --------------------- </code></pre> <p>PS: Please let me know if my question is not clear or if it needs to get reorganized.</p>
<p>The answer is in the documentation: <a href="https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.linalg.eig.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.linalg.eig.html</a> </p> <p>I quote:</p> <blockquote> <p>... The eigenvalues are not necessarily ordered. The resulting array will be of complex type, unless the imaginary part is zero in which case it will be cast to a real type. When a is real the resulting eigenvalues will be real (0 imaginary part) or occur in conjugate pairs</p> </blockquote> <p>Therefore, if you want to keep the order and you know that there is a continuous change of the real part of the eigenvalues with respect to the parameters, what you need to do is sort the eigenvalues (and the corresponding eigenvectors) by key, where the key is just given by the real part of the eigenvalues:</p> <pre><code>vals, eigs = np.linalg.eig(H)
order = np.argsort(np.real(vals))
vals = vals[order]
eigs = eigs[:, order]   # eigenvectors are the columns of eigs
</code></pre> <p>should do the trick.</p>
python|python-3.x|numpy|eigenvalue
0
375,384
58,426,684
Drop column values
<p>How can I only keep top 20% (by ascending = False) values from a column in a dataframe? </p> <pre><code>df10 = df9[df9['quality'] &gt; df9['quality'].quantile(0.20)] </code></pre> <p>I tried this code but it doesn't seem to work</p>
<p>try:</p> <pre><code>df = pd.DataFrame({'quality':[1,2,3,4,5,6,7,8,9,10]}) df.loc[df['quality'] &gt; df['quality'].quantile(0.8) , : ].sort_values(by='quality', ascending=False) </code></pre>
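<p>Another way to express "keep the top 20% by value", assuming the same <code>quality</code> column and interpreting 20% as a row count, is to take the largest rows directly:</p> <pre><code>n = max(1, int(round(0.2 * len(df))))
df_top = df.nlargest(n, 'quality')
</code></pre>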
pandas
0
375,385
58,557,964
List Column Names if the value change from max is within certain % (in pandas)
<p>Apologies for unclear title. My data look like this. They always sum to 1</p> <pre><code>&gt;df A B C D E 0.3 0.3 0.05 0.2 0.05 </code></pre> <p>What i want to do it identify columns which:</p> <p>1) Highest value</p> <p>2) The % reduction for highest was less than threshold.</p> <p>For example: Assuming 50% was threshold, I want to end up with [A,B,C], based on logic that:</p> <p>1) A &amp; B have highest value.</p> <p>2) 50% of A or B is 0.15. Since D is 0.2, it is added to list</p> <p>3) 50% of D is 0.1. Since both C or E are less than 0.1, they are not added to list.</p>
<p>I used the following test DataFrame:</p> <pre><code> A B C D E 0 0.3 0.3 0.05 0.2 0.05 1 0.5 0.1 0.20 0.1 0.10 </code></pre> <p>Start from defining the following function to get column names for the current row:</p> <pre><code>def getCols(row, threshold): s = row.sort_values(ascending=False) currVal = 0.0 lst = [] for key, grp in s.groupby(s, sort=False): if len(lst) &gt; 0 and key &lt; currVal * threshold: break currVal = key lst.extend(grp.index.sort_values().tolist()) return lst </code></pre> <p>Then apply it:</p> <pre><code>df['cols'] = df.apply(getCols, axis=1, threshold = 0.5) </code></pre> <p>The result is:</p> <pre><code> A B C D E cols 0 0.3 0.3 0.05 0.2 0.05 [A, B, D] 1 0.5 0.1 0.20 0.1 0.10 [A] </code></pre>
python|pandas
0
375,386
58,323,628
For every row in pandas, do until sample ID change
<p>How can I iterate over rows in a dataframe until the sample ID changes?</p> <p>my_df:</p> <pre><code>ID loc_start
sample1 10
sample1 15
sample2 10
sample2 20
sample3 5
</code></pre> <p>Something like:</p> <pre><code>samples = ["sample1", "sample2" ,"sample3"]
out = pd.DataFrame()
for sample in samples:
    if my_df["ID"] == sample:
        my_list = []
        for index, row in my_df.iterrows():
            other_list = [row.loc_start]
            my_list.append(other_list)
        my_list = pd.DataFrame(my_list)
        out = pd.merge(out, my_list)
</code></pre> <p>Expected output:</p> <pre><code>sample1 sample2 sample3
10      10      5
15      20
</code></pre> <p>I realize of course that this could be done more easily if my_df really looked like this. However, what I'm after is the principle of iterating over rows until a certain column value changes.</p>
<p>Based on the input &amp; output provided, this should work. You need to provide more info if you are looking for something else.</p> <pre><code>df.pivot(columns='ID', values = 'loc_start').rename_axis(None, axis=1).apply(lambda x: pd.Series(x.dropna().values)) </code></pre> <p><strong>output</strong></p> <pre><code>sample1 sample2 sample3 0 10.0 10.0 5.0 1 15.0 20.0 NaN </code></pre>
pandas
0
375,387
58,435,657
How to access column after pandas .groupby
<p>I have a data frame that I used the .groupby() along with .agg() function on. </p> <p><code>movieProperties = combined_df.groupby(['movieId', 'title', 'genres']).agg({'rating': ['count', 'mean']})</code></p> <p>This is the code to create the new data frame. However I can't seem to access columns the same way anymore. If I try <code>movieProperties['genres']</code> I always get a KeyError. How can I access columns again in this new data frame?</p>
<p>after you groupby, the columns you grouped by are now called <code>index</code>:</p> <pre><code>movieProperties = pd.DataFrame({"movie": ["x", "x", "y"], "title":["tx", "tx", "ty"], "rating": [3, 4, 3]}).groupby(["movie", "title"]).agg({"rating":["count", "mean"]}) movieProperties.index.values Out[13]: array([('x', 'tx'), ('y', 'ty')], dtype=object) </code></pre> <p>if you're not comfortable with that, reset them back to regular columns:</p> <pre><code>movieProperties.reset_index() Out[16]: movie title rating count mean 0 x tx 2 3.5 1 y ty 1 3.0 </code></pre> <p>and then </p> <pre><code>movieProperties.reset_index()["movie"] Out[17]: 0 x 1 y </code></pre>
python|pandas|pandas-groupby
7
375,388
58,521,181
Pandas sum over a date range for each category separately
<p>I have a dataframe with timeseries of sales transactions for different items:</p> <pre><code>import pandas as pd from datetime import timedelta df_1 = pd.DataFrame() df_2 = pd.DataFrame() df_3 = pd.DataFrame() # Create datetimes and data df_1['date'] = pd.date_range('1/1/2018', periods=5, freq='D') df_1['item'] = 1 df_1['sales']= 2 df_2['date'] = pd.date_range('1/1/2018', periods=5, freq='D') df_2['item'] = 2 df_2['sales']= 3 df_3['date'] = pd.date_range('1/1/2018', periods=5, freq='D') df_3['item'] = 3 df_3['sales']= 4 df = pd.concat([df_1, df_2, df_3]) df = df.sort_values(['item']) df </code></pre> <p>Resulting dataframe:</p> <pre><code> date item sales 0 2018-01-01 1 2 1 2018-01-02 1 2 2 2018-01-03 1 2 3 2018-01-04 1 2 4 2018-01-05 1 2 0 2018-01-01 2 3 1 2018-01-02 2 3 2 2018-01-03 2 3 3 2018-01-04 2 3 4 2018-01-05 2 3 0 2018-01-01 3 4 1 2018-01-02 3 4 2 2018-01-03 3 4 3 2018-01-04 3 4 4 2018-01-05 3 4 </code></pre> <p>I want to compute a sum of "sales" for a given item in a given time window. I can't use pandas rolling.sum because the timeseries is sparse (eg. 2018-01-01 > 2018-01-04 > 2018-01-06 > etc.).</p> <p>I've tried this solution (for time window = 2 days):</p> <pre><code>df['start_date'] = df['date'] - timedelta(3) df['end_date'] = df['date'] - timedelta(1) df['rolled_sales'] = df.apply(lambda x: df.loc[(df.date &gt;= x.start_date) &amp; (df.date &lt;= x.end_date), 'sales'].sum(), axis=1) </code></pre> <p>but it results with sums of sales of all items for a given time window:</p> <pre><code> date item sales start_date end_date rolled_sales 0 2018-01-01 1 2 2017-12-29 2017-12-31 0 1 2018-01-02 1 2 2017-12-30 2018-01-01 9 2 2018-01-03 1 2 2017-12-31 2018-01-02 18 3 2018-01-04 1 2 2018-01-01 2018-01-03 27 4 2018-01-05 1 2 2018-01-02 2018-01-04 27 0 2018-01-01 2 3 2017-12-29 2017-12-31 0 1 2018-01-02 2 3 2017-12-30 2018-01-01 9 2 2018-01-03 2 3 2017-12-31 2018-01-02 18 3 2018-01-04 2 3 2018-01-01 2018-01-03 27 4 2018-01-05 2 3 2018-01-02 2018-01-04 27 0 2018-01-01 3 4 2017-12-29 2017-12-31 0 1 2018-01-02 3 4 2017-12-30 2018-01-01 9 2 2018-01-03 3 4 2017-12-31 2018-01-02 18 3 2018-01-04 3 4 2018-01-01 2018-01-03 27 4 2018-01-05 3 4 2018-01-02 2018-01-04 27 </code></pre> <p>My goal is to have rolled_sales computed for each item separately, like this:</p> <pre><code> date item sales start_date end_date rolled_sales 0 2018-01-01 1 2 2017-12-29 2017-12-31 0 1 2018-01-02 1 2 2017-12-30 2018-01-01 2 2 2018-01-03 1 2 2017-12-31 2018-01-02 4 3 2018-01-04 1 2 2018-01-01 2018-01-03 6 4 2018-01-05 1 2 2018-01-02 2018-01-04 8 0 2018-01-01 2 3 2017-12-29 2017-12-31 0 1 2018-01-02 2 3 2017-12-30 2018-01-01 3 2 2018-01-03 2 3 2017-12-31 2018-01-02 6 3 2018-01-04 2 3 2018-01-01 2018-01-03 9 4 2018-01-05 2 3 2018-01-02 2018-01-04 12 0 2018-01-01 3 4 2017-12-29 2017-12-31 0 1 2018-01-02 3 4 2017-12-30 2018-01-01 4 2 2018-01-03 3 4 2017-12-31 2018-01-02 8 3 2018-01-04 3 4 2018-01-01 2018-01-03 12 4 2018-01-05 3 4 2018-01-02 2018-01-04 16 </code></pre> <p>I tried to apply solution suggested here: <a href="https://stackoverflow.com/questions/58510308/pandas-rolling-sum-for-multiply-values-separately">Pandas rolling sum for multiply values separately</a> but failed.</p> <p>Any ideas?</p> <p>Many Thanks in advance :)</p> <p>Andy</p>
<p>Total sales With <strong>2-days</strong> rolling window per item:</p> <pre><code>z = df.sort_values('date').set_index('date').groupby('item').rolling('2d')['sales'].sum() </code></pre> <p>Output:</p> <pre><code>item date 1 2018-01-01 2.0 2018-01-02 4.0 2018-01-03 4.0 2018-01-04 4.0 2018-01-05 4.0 2 2018-01-01 3.0 2018-01-02 6.0 2018-01-03 6.0 2018-01-04 6.0 2018-01-05 6.0 3 2018-01-01 4.0 2018-01-02 8.0 2018-01-03 8.0 2018-01-04 8.0 2018-01-05 8.0 Name: sales, dtype: float64 </code></pre> <p>Total sales from last 2 days per item:</p> <pre><code>df[df.groupby('item').cumcount() &lt; 2 ].groupby('item').sum() </code></pre> <p>Total sales between start_date and end_date per item:</p> <pre><code>start_date = pd.to_datetime('2017-12-2') end_date = pd.to_datetime('2018-12-2') df[df['date'].between(start_date, end_date)].groupby('item')['sales'].sum() </code></pre>
python|pandas|time-series|grouping|rolling-computation
1
375,389
58,509,352
Is it possible to make a csv file out of a non delineated list?
<p>I have over 100,000 pieces of data im working on and the issue is that it was written in a very non conducive format, pdf. I have no idea of how to separate the data. I'm using pandas and matplotlib to do some basic plotting on this data. I cannot figure out how to make a csv out of this. </p> <p>For example:</p> <pre><code>Property 1 Data 1 Data 2 Data 3 Property 2 Data 4 Data 5 Data 6 </code></pre> <p>I have tried using find and replace but do to there being no formatting I cannot figure it out, but i do not have the time to literally go through each piece of data and manually adding a comma.</p> <p>I would hope to be able to plot each property as a column with each data piece being a cell.</p>
<p>1) You may be able to copy and paste the data into an Excel file. You can then split the column by going to "Data" and then "Text to Columns".</p> <p>2) If you are already reading a dataframe in Python and need to split one column into several, you could create additional columns in the dataframe from the original data. </p>
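<p>A small sketch of option 2 in pandas; the file name and column name here are placeholders, and it assumes values within a line are whitespace-separated:</p> <pre><code>import pandas as pd

with open('extracted.txt') as f:   # text pulled out of the PDF, one record per line
    df = pd.DataFrame({'raw': [line.strip() for line in f]})

# one new column per whitespace-separated token, similar to Excel's Text to Columns
split_df = df['raw'].str.split(expand=True)
</code></pre>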
python|pandas|csv|matplotlib
0
375,390
58,476,374
error about using dict to split dataframe
<p>The training data looks like below :</p> <pre><code>p,x,s,n,t,p,f,c,n,k,e,e,s,s,w,w,p,w,o,p,k,s,u e,x,s,y,t,a,f,c,b,k,e,c,s,s,w,w,p,w,o,p,n,n,g e,b,s,w,t,l,f,c,b,n,e,c,s,s,w,w,p,w,o,p,n,n,m p,x,y,w,t,p,f,c,n,n,e,e,s,s,w,w,p,w,o,p,k,s,u e,x,s,g,f,n,f,w,b,k,t,e,s,s,w,w,p,w,o,e,n,a,g e,x,y,y,t,a,f,c,b,n,e,c,s,s,w,w,p,w,o,p,k,n,g e,b,s,w,t,a,f,c,b,g,e,c,s,s,w,w,p,w,o,p,k,n,m </code></pre> <p>The first column is the label about whether this mushroom is edible.(e:edible, p:poisonous) And I want to split this data into two part by edible or not. My code is below :</p> <pre><code>mushdf = pd.read_csv('agaricus-lepiota.data') #load in two data for mushroom and iris mushdf.columns = ['edible?','cap-shape','cap-surface','cap-color','bruises?','odor', 'gill-attachment','gill-spacing','gill-size','gill-color', 'stalk-shape','stalk-root','stalk-surface-above-ring','stalk-surface-below-ring', 'stalk-color-above-ring','stalk-color-below-ring','veil-type','veil-color', 'ring-number','ring-type','spore-print-color','population','habitat'] print(mushdf) mushdic = {key: mushdf for (key, mushdf) in mushdf.groupby('edible?')} for key in mushdic: print(f'mushdic[{key}]') print(mushdic[key]) print('-'*50) </code></pre> <p>The problem is, when I delete <code>mushdf.columns</code> in line 2 to line 6, this code works. However, when I do <code>mushdf.columns</code>, the terminal return error message. </p> <p>Same method with another column is fine. For example, <code>mushdic = {key: mushdf for (key, mushdf) in mushdf.groupby('bruises?')}</code> is running correctly.</p> <p>I have no idea about this.</p> <pre><code>Traceback (most recent call last): File "e:\Visual Studio Project\LiMing\vs2017_python\.vscode\helloworld.py", line 11, in &lt;module&gt; mushdic = {key: mushdf for (key, mushdf) in mushdf.groupby('edible?')} File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\pandas\core\generic.py", line 7894, in groupby **kwargs File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\pandas\core\groupby\groupby.py", line 2522, in groupby return klass(obj, by, **kwds) File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\pandas\core\groupby\groupby.py", line 391, in __init__ mutated=self.mutated, File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\lib\site-packages\pandas\core\groupby\grouper.py", line 621, in _get_grouper raise KeyError(gpr) KeyError: 'edible?' The terminal process terminated with exit code: 1 </code></pre>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html" rel="nofollow noreferrer"><code>pandas.read_csv</code></a> assumes by default that the first line in the csv file is the header. Since your csv file has no header, you need to say so during the import. You should also pass the column names there already:</p> <pre><code>mushdf = pd.read_csv('agaricus-lepiota.data', header=None, names=[
    'edible?','cap-shape','cap-surface','cap-color','bruises?','odor',
    'gill-attachment','gill-spacing','gill-size','gill-color',
    'stalk-shape','stalk-root','stalk-surface-above-ring','stalk-surface-below-ring',
    'stalk-color-above-ring','stalk-color-below-ring','veil-type','veil-color',
    'ring-number','ring-type','spore-print-color','population','habitat'])
</code></pre>
python|pandas|dataframe
0
375,391
58,430,530
ValueError: multiclass format is not supported
<p>While I am trying to use metrics.roc_auc_score, I am getting <code>ValueError: multiclass format is not supported</code>.</p> <pre><code>import lightgbm as lgb from sklearn import metrics def train_model(train, valid): dtrain = lgb.Dataset(train, label=y_train) dvalid = lgb.Dataset(valid, label=y_valid) param = {'num_leaves': 64, 'objective': 'binary', 'metric': 'auc', 'seed': 7} print("Training model!") bst = lgb.train(param, dtrain, num_boost_round=1000, valid_sets=[dvalid], early_stopping_rounds=10, verbose_eval=False) valid_pred = bst.predict(valid) print('Valid_pred: ') print(valid_pred) print('y_valid:') print(y_valid) valid_score = metrics.roc_auc_score(y_valid, valid_pred) print(f"Validation AUC score: {valid_score:.4f}") return bst bst = train_model(X_train_final, X_valid_final) </code></pre> <p>valid_pred and y_valid are:</p> <pre><code>Training model! Valid_pred: [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.] y_valid: Id 530 200624 492 133000 460 110000 280 192000 656 88000 ... 327 324000 441 555000 1388 136000 1324 82500 62 101000 Name: SalePrice, Length: 292, dtype: int64 </code></pre> <p>Error:</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-80-df034caf8c9b&gt; in &lt;module&gt; ----&gt; 1 bst = train_model(X_train_final, X_valid_final) &lt;ipython-input-79-483a6fb5ab9b&gt; in train_model(train, valid) 17 print('y_valid:') 18 print(y_valid) ---&gt; 19 valid_score = metrics.roc_auc_score(y_valid, valid_pred) 20 print(f"Validation AUC score: {valid_score:.4f}") 21 return bst /opt/conda/lib/python3.6/site-packages/sklearn/metrics/ranking.py in roc_auc_score(y_true, y_score, average, sample_weight, max_fpr) 353 return _average_binary_score( 354 _binary_roc_auc_score, y_true, y_score, average, --&gt; 355 sample_weight=sample_weight) 356 357 /opt/conda/lib/python3.6/site-packages/sklearn/metrics/base.py in _average_binary_score(binary_metric, y_true, y_score, average, sample_weight) 71 y_type = type_of_target(y_true) 72 if y_type not in ("binary", "multilabel-indicator"): ---&gt; 73 raise ValueError("{0} format is not supported".format(y_type)) 74 75 if y_type == "binary": ValueError: multiclass format is not supported </code></pre> <p>I tried: <code>valid_pred = pd.Series(bst.predict(valid)).astype(np.int64)</code> also I removed <code>'objective': 'binary'</code> and tried but no success.</p> <p>Still not able to figure out what is the issue.</p>
<p>It seems the task you are trying to solve is regression: predicting the price. However, you are training a classification model, that assigns a class to every input.</p> <p>ROC-AUC score is meant for classification problems where the output is the probability of the input belonging to a class. If you do a multi-class classification, then you can compute the score for each class independently.</p> <p>Moreover, the <code>predict</code> method returns a discrete class, not a probability. Let's imagine you do a binary classification and have only one example, it should be classified as <code>False</code>. If your classifier yields a probability of 0.7, the ROC-AUC value is 1.0-0.7=0.3. If you use the <code>predict</code> method, the ROC-AUC value will be 1.0-1.0=0.0, which won't tell you much.</p>
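<p>A minimal sketch of what the evaluation could look like if the target (SalePrice) is treated as a regression problem; the parameter choices below are illustrative, not a recommendation:</p> <pre><code>param = {'num_leaves': 64, 'objective': 'regression', 'metric': 'rmse', 'seed': 7}
bst = lgb.train(param, dtrain, num_boost_round=1000,
                valid_sets=[dvalid], early_stopping_rounds=10, verbose_eval=False)

valid_pred = bst.predict(valid)
rmse = metrics.mean_squared_error(y_valid, valid_pred) ** 0.5
print(f"Validation RMSE: {rmse:.4f}")
</code></pre>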
python|pandas|machine-learning|scikit-learn|training-data
7
375,392
58,192,448
How to plot a 3D graph with Z axis being the magnitude of values in a csv?
<p>I believe the X and Y values in the plot should be represented by the columns and lines in the csv, which looks like this:</p> <pre><code>0,original,1.0000,0.9999,0.9998,0.9997,0.9996,0.9995... 0.9900 1,28663, 4144,6096,6859,7366,7876,8125... 2,11268, 1374,2119,2393,2615,2809,2904... 3,14734, 2122,3115,3466,3740,4011,4144... 4,13341, 1452,2322,2689,2877,3114,3238... 5,18458, 2677,3643,4047,4333,4652,4806... 6,13732, 1621,2224,2502,2704,2930,3020... 7,17771, 2955,3904,4270,4566,4872,5041... 8,14447, 1822,2437,2715,2933,3179,3292... . . . 5400,18458,2677,3643,4047,4333,4652,4806 </code></pre> <p>I would like to plot my data in a graphic. If I do so in a 2D graphic it looks really ugly. </p> <p>Image <a href="https://i.stack.imgur.com/e0zLL.png" rel="nofollow noreferrer">1</a> shows this data in a 2D format, each column series is a different color, the "original" values are the blue series for example. It looks to me that it would have a better visual representation if it was 3D. <a href="https://i.stack.imgur.com/e0zLL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e0zLL.png" alt="enter image description here"></a></p> <p>I would be looking for something like image <a href="https://i.stack.imgur.com/9Ty38.png" rel="nofollow noreferrer">2</a>. <strong>I understand that the Z values are the magnitude of each cell in the table</strong>. I believe I would have to plot a bunch of different series (where each series comes from each whole column on the csv). Am I right?</p> <p><a href="https://i.stack.imgur.com/9Ty38.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9Ty38.png" alt="enter image description here"></a></p> <p>That being said, my question is: <strong>How can I plot my '.csv' data in a 3D graphic, considering the situation I pointed out?</strong></p> <p><strong>EDIT:</strong> I found a code that kinda does what I want. I got it from <a href="https://superuser.com/questions/1328448/3d-plot-with-matplotlib-from-imported-data">here</a>. I guess I would like to apply each column like <em>np.meshgrid(DataAll1D[:,0]</em> adds column 0 to the plot. It seems like the meshgrid function accepts any number of 1D arrays I just don't know how to do that in python.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D DataAll1D = np.loadtxt("datacsv_1d.csv", delimiter=",") # create 2d x,y grid (both X and Y will be 2d) X, Y = np.meshgrid(DataAll1D[:,0], DataAll1D[:,1]) # repeat Z to make it a 2d grid Z = np.tile(DataAll1D[:,2], (len(DataAll1D[:,2]), 1)) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.plot_surface(X, Y, Z, cmap='ocean') plt.show() </code></pre>
<p>A very minimal example, but I guess what you want to achieve is have each of your curves separated from the others in a 3D space. The code below generates two plots, one that draws curves individually, the other which treats the input as a surface. You can easily build onto this and achieve a more specific goal of yours I guess.</p> <pre><code>from mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import import matplotlib.pyplot as plt import numpy data = numpy.array([[28663, 4144, 6096, 6859, 7366, 7876, 8125], [11268, 1374, 2119, 2393, 2615, 2809, 2904], [14734, 2122, 3115, 3466, 3740, 4011, 4144], [13341, 1452, 2322, 2689, 2877, 3114, 3238], [18458, 2677, 3643, 4047, 4333, 4652, 4806], [13732, 1621, 2224, 2502, 2704, 2930, 3020]]) fig, (ax, bx) = plt.subplots(nrows=1, ncols=2, num=0, figsize=(16, 8), subplot_kw={'projection': '3d'}) for i in range(data.shape[1]): ax.plot3D(numpy.repeat(i, data.shape[0]), numpy.arange(data.shape[0]), data[:, i]) gridY, gridX = numpy.mgrid[1:data.shape[0]:data.shape[0] * 1j, 1:data.shape[1]:data.shape[1] * 1j] pSurf = bx.plot_surface(gridX, gridY, data, cmap='viridis') fig.colorbar(pSurf) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/FOlae.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FOlae.png" alt="enter image description here"></a></p>
python|numpy|csv|matplotlib|3d
1
375,393
58,192,599
Python Pandas read_csv to dataframe without separator
<p>I'm new to the Pandas library.<br> I have shared code that works off of a dataframe.</p> <p>Is there a way to read a gzip file line by line without any delimiter (use the full line, the line can include commas and other characters) as a single row and use it in the dataframe? It seems that you have to provide a delimiter and when I provide "\n" it is able to read but error_bad_lines will complain with something like "Skipping line xxx: expected 22 fields but got 23" fields since each line is different. </p> <p>I want it to treat each line as a single row in the dataframe. How can this be achieved? Any tips would be appreciated. </p>
<p>If you just want each line to be one row and one column then don't use read_csv. Just read the file line by line and build the data frame from it.</p> <p>You could do this manually by creating an empty data frame with a single column header, then iterating over each line in the file and appending it to the data frame.</p> <pre><code>#explicitly iterate over each line in the file appending it to the df.
import pandas as pd
with open("query4.txt") as myfile:
    df = pd.DataFrame([], columns=['line'])
    for line in myfile:
        df = df.append({'line': line}, ignore_index=True)
print(df)
</code></pre> <p>This will work for large files as we only process one line at a time while building the dataframe, so we don't use more memory than needed. It probably isn't the most efficient approach (there is a lot of reassigning of the dataframe here), but it would certainly work. </p> <p>However, we can do this more cleanly since the pandas dataframe can take an iterable as the input for data.</p> <pre><code>#create a list to feed the data to the dataframe.
import pandas as pd
with open("query4.txt") as myfile:
    mydata = [line for line in myfile]
    df = pd.DataFrame(mydata, columns=['line'])
print(df)
</code></pre> <p>Here we read all the lines of the file into a list and then pass the list to pandas to create the data frame. The downside is that if our file were very large we would essentially have 2 copies of it in memory: one in the list and one in the data frame. </p> <p>Since we know pandas will accept an iterable for the data, we can instead use a generator expression that feeds each line of the file to the data frame. Now the data frame builds itself by reading each line one at a time from the file.</p> <pre><code>#create a generator to feed the data to the dataframe.
import pandas as pd
with open("query4.txt") as myfile:
    mydata = (line for line in myfile)
    df = pd.DataFrame(mydata, columns=['line'])
print(df)
</code></pre> <p>In all three cases there is no need to use read_csv since the data you want to load isn't a csv. Each solution produces the same data frame output.</p> <p><strong>SOURCE DATA</strong></p> <pre><code>this is some data
this is other data
data is fun
data is weird
this is the 5th line
</code></pre> <p><strong>DATA FRAME</strong></p> <pre><code>                   line
0   this is some data\n
1  this is other data\n
2         data is fun\n
3       data is weird\n
4  this is the 5th line
</code></pre>
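<p>Since the question mentions gzip input, the same pattern should work with <code>gzip.open</code> in text mode in place of <code>open</code> (untested sketch, file name assumed):</p> <pre><code>import gzip
import pandas as pd

with gzip.open("query4.txt.gz", "rt") as myfile:
    df = pd.DataFrame((line.rstrip("\n") for line in myfile), columns=['line'])
print(df)
</code></pre>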
python|pandas
2
375,394
58,545,656
Python match records where elements are the same but dollars are within %
<p>I am trying kickout exceptions from a s/s where most if not all elements match, with the exception of the dollar amounts associated with the record. So if for example, Column A - Column C match, but the dollar difference between the two is 10% or less, i would like to create logic to only highlight these examples within a dataframe. And i would need this for any example where this happens, not just a static id. S/S:</p> <pre><code>Client ID(Numeric) Client_2nd_ID(Alphanumeric) Instrument(text) Dollars(numer) 12345 FA000123AB Baseball 600 45678 PP000157DC Football 800 12345 FA000123AB Baseball 570 12345 FA000123AB Baseball 645 12345 FB000159EE Baseball 605 </code></pre> <p>Using the above example, I would like the dataframe to only show the three records for Client ID: 12345, 2nd_ID FA000123AB, instrument Baseball and Dollars 600,570,645 and as i mentioned any other situation where there are similarities for other record instances not including the above mentioned ID examples (making this variable vs static)</p>
<p>Following code will filter any records within group client/instrument which "Dollars" field value has difference less than a threshold with a closest value within the group:</p> <pre><code>import pandas as pd import numpy as np threshold = 0.01 df = pd.DataFrame({'Client_ID': [12345, 45678, 12345, 12345, 12345], 'Client_2nd_ID': ["FA000123AB", "PP000157DC", "FA000123AB", "FA000123AB", "FB000159EE"], 'Instrument': ["Baseball", "Football", "Baseball", "Baseball", "Baseball"], 'Dollars': [600, 800, 570, 645, 605]}) idx_lookup = df.apply(lambda x: (df.loc[(df['Client_ID'] == x['Client_ID']) &amp; (df['Instrument'] == x['Instrument'] ), 'Dollars'] - x['Dollars']).abs().replace(0, np.nan).idxmin(), axis=1) df['percent'] = (df['Dollars'] - df.loc[idx_lookup, 'Dollars'].values) / df.loc[idx_lookup, 'Dollars'].values df = df.drop(df[(df.percent&lt;=threshold) &amp; (df.percent&gt;0)].index) </code></pre> <p><a href="https://i.stack.imgur.com/PxxPm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PxxPm.png" alt="enter image description here"></a></p> <p>It looks like it matches your criteria for client #12345, however, I checked additionally by adding 805 values for client #45678 to make sure it works correct for different clients:</p> <pre><code>import pandas as pd import numpy as np threshold = 0.01 df = pd.DataFrame({'Client_ID': [12345, 45678, 12345, 12345, 12345, 45678], 'Client_2nd_ID': ["FA000123AB", "PP000157DC", "FA000123AB", "FA000123AB", "FB000159EE", "PP000157DC"], 'Instrument': ["Baseball", "Football", "Baseball", "Baseball", "Baseball", "Football" ], 'Dollars': [600, 800, 570, 645, 605, 805]}) idx_lookup = df.apply(lambda x: (df.loc[(df['Client_ID'] == x['Client_ID']) &amp; (df['Instrument'] == x['Instrument'] ), 'Dollars'] - x['Dollars']).abs().replace(0, np.nan).idxmin(), axis=1) df['percent'] = (df['Dollars'] - df.loc[idx_lookup, 'Dollars'].values) / df.loc[idx_lookup, 'Dollars'].values df = df.drop(df[(df.percent&lt;=threshold) &amp; (df.percent&gt;0)].index) </code></pre> <p><a href="https://i.stack.imgur.com/DP5qZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DP5qZ.png" alt="enter image description here"></a></p>
python|pandas
1
375,395
58,560,297
Python: How to import a .csv and run the content through the code?
<p>I'm quite new to Python and I've been trying to run "<strong>The CODE</strong>" (see below).</p>
<p>The code works perfectly, although it generates random data.</p>
<p>I have my own data in a csv file which I would like to run through it, to see whether my manual calculations reconcile. So, what I have done is:</p>
<p>I have removed the import <code>numpy.random as nrand</code> from the code and added two lines to see if I can perhaps enter the range from my csv column manually:</p>
<pre><code>numpy.arrange(15)
numpy.array([0,1,2,3,4])
</code></pre>
<p>and then replaced <code>nrand</code> in the original code (<strong>The CODE</strong>) with <code>numpy</code>.</p>
<p>Unfortunately, that generated an error:</p>
<p><a href="https://i.stack.imgur.com/DjPgD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DjPgD.jpg" alt="enter image description here"></a></p>
<p><strong>I would be greatly obliged if someone could show me how to import a sample csv file (with 1 column of data) from a C:\ drive location into Python and run the code, so it picks it up (regardless of how many data points I have in the column). Can anyone help with that?</strong></p>
<p><strong>The CODE</strong></p>
<pre><code>import math
import numpy
import numpy.random as nrand

"""
Note - for some of the metrics the absolute value is returns. This is because if the risk (loss) is higher
we want to discount the expected excess return from the portfolio by a higher amount. Therefore risk should
be positive.
"""

def vol(returns):
    # Return the standard deviation of returns
    return numpy.std(returns)

def beta(returns, market):
    # Create a matrix of [returns, market]
    m = numpy.matrix([returns, market])
    # Return the covariance of m divided by the standard deviation of the market returns
    return numpy.cov(m)[0][1] / numpy.std(market)

def lpm(returns, threshold, order):
    # This method returns a lower partial moment of the returns
    # Create an array he same length as returns containing the minimum return threshold
    threshold_array = numpy.empty(len(returns))
    threshold_array.fill(threshold)
    # Calculate the difference between the threshold and the returns
    diff = threshold_array - returns
    # Set the minimum of each to 0
    diff = diff.clip(min=0)
    # Return the sum of the different to the power of order
    return numpy.sum(diff ** order) / len(returns)

def hpm(returns, threshold, order):
    # This method returns a higher partial moment of the returns
    # Create an array he same length as returns containing the minimum return threshold
    threshold_array = numpy.empty(len(returns))
    threshold_array.fill(threshold)
    # Calculate the difference between the returns and the threshold
    diff = returns - threshold_array
    # Set the minimum of each to 0
    diff = diff.clip(min=0)
    # Return the sum of the different to the power of order
    return numpy.sum(diff ** order) / len(returns)

def var(returns, alpha):
    # This method calculates the historical simulation var of the returns
    sorted_returns = numpy.sort(returns)
    # Calculate the index associated with alpha
    index = int(alpha * len(sorted_returns))
    # VaR should be positive
    return abs(sorted_returns[index])

def cvar(returns, alpha):
    # This method calculates the condition VaR of the returns
    sorted_returns = numpy.sort(returns)
    # Calculate the index associated with alpha
    index = int(alpha * len(sorted_returns))
    # Calculate the total VaR beyond alpha
    sum_var = sorted_returns[0]
    for i in range(1, index):
        sum_var += sorted_returns[i]
    # Return the average VaR
    # CVaR should be positive
    return abs(sum_var / index)

def prices(returns, base):
    # Converts returns into prices
    s = [base]
    for i in range(len(returns)):
        s.append(base * (1 + returns[i]))
    return numpy.array(s)

def dd(returns, tau):
    # Returns the draw-down given time period tau
    values = prices(returns, 100)
    pos = len(values) - 1
    pre = pos - tau
    drawdown = float('+inf')
    # Find the maximum drawdown given tau
    while pre &gt;= 0:
        dd_i = (values[pos] / values[pre]) - 1
        if dd_i &lt; drawdown:
            drawdown = dd_i
        pos, pre = pos - 1, pre - 1
    # Drawdown should be positive
    return abs(drawdown)

def max_dd(returns):
    # Returns the maximum draw-down for any tau in (0, T) where T is the length of the return series
    max_drawdown = float('-inf')
    for i in range(0, len(returns)):
        drawdown_i = dd(returns, i)
        if drawdown_i &gt; max_drawdown:
            max_drawdown = drawdown_i
    # Max draw-down should be positive
    return abs(max_drawdown)

def average_dd(returns, periods):
    # Returns the average maximum drawdown over n periods
    drawdowns = []
    for i in range(0, len(returns)):
        drawdown_i = dd(returns, i)
        drawdowns.append(drawdown_i)
    drawdowns = sorted(drawdowns)
    total_dd = abs(drawdowns[0])
    for i in range(1, periods):
        total_dd += abs(drawdowns[i])
    return total_dd / periods

def average_dd_squared(returns, periods):
    # Returns the average maximum drawdown squared over n periods
    drawdowns = []
    for i in range(0, len(returns)):
        drawdown_i = math.pow(dd(returns, i), 2.0)
        drawdowns.append(drawdown_i)
    drawdowns = sorted(drawdowns)
    total_dd = abs(drawdowns[0])
    for i in range(1, periods):
        total_dd += abs(drawdowns[i])
    return total_dd / periods

def treynor_ratio(er, returns, market, rf):
    return (er - rf) / beta(returns, market)

def sharpe_ratio(er, returns, rf):
    return (er - rf) / vol(returns)

def information_ratio(returns, benchmark):
    diff = returns - benchmark
    return numpy.mean(diff) / vol(diff)

def modigliani_ratio(er, returns, benchmark, rf):
    np_rf = numpy.empty(len(returns))
    np_rf.fill(rf)
    rdiff = returns - np_rf
    bdiff = benchmark - np_rf
    return (er - rf) * (vol(rdiff) / vol(bdiff)) + rf

def excess_var(er, returns, rf, alpha):
    return (er - rf) / var(returns, alpha)

def conditional_sharpe_ratio(er, returns, rf, alpha):
    return (er - rf) / cvar(returns, alpha)

def omega_ratio(er, returns, rf, target=0):
    return (er - rf) / lpm(returns, target, 1)

def sortino_ratio(er, returns, rf, target=0):
    return (er - rf) / math.sqrt(lpm(returns, target, 2))

def kappa_three_ratio(er, returns, rf, target=0):
    return (er - rf) / math.pow(lpm(returns, target, 3), float(1/3))

def gain_loss_ratio(returns, target=0):
    return hpm(returns, target, 1) / lpm(returns, target, 1)

def upside_potential_ratio(returns, target=0):
    return hpm(returns, target, 1) / math.sqrt(lpm(returns, target, 2))

def calmar_ratio(er, returns, rf):
    return (er - rf) / max_dd(returns)

def sterling_ration(er, returns, rf, periods):
    return (er - rf) / average_dd(returns, periods)

def burke_ratio(er, returns, rf, periods):
    return (er - rf) / math.sqrt(average_dd_squared(returns, periods))

def test_risk_metrics():
    # This is just a testing method
    r = nrand.uniform(-1, 1, 50)
    m = nrand.uniform(-1, 1, 50)
    print("vol =", vol(r))
    print("beta =", beta(r, m))
    print("hpm(0.0)_1 =", hpm(r, 0.0, 1))
    print("lpm(0.0)_1 =", lpm(r, 0.0, 1))
    print("VaR(0.05) =", var(r, 0.05))
    print("CVaR(0.05) =", cvar(r, 0.05))
    print("Drawdown(5) =", dd(r, 5))
    print("Max Drawdown =", max_dd(r))

def test_risk_adjusted_metrics():
    # Returns from the portfolio (r) and market (m)
    r = nrand.uniform(-1, 1, 50)
    m = nrand.uniform(-1, 1, 50)
    # Expected return
    e = numpy.mean(r)
    # Risk free rate
    f = 0.06
    # Risk-adjusted return based on Volatility
    print("Treynor Ratio =", treynor_ratio(e, r, m, f))
    print("Sharpe Ratio =", sharpe_ratio(e, r, f))
    print("Information Ratio =", information_ratio(r, m))
    # Risk-adjusted return based on Value at Risk
    print("Excess VaR =", excess_var(e, r, f, 0.05))
    print("Conditional Sharpe Ratio =", conditional_sharpe_ratio(e, r, f, 0.05))
    # Risk-adjusted return based on Lower Partial Moments
    print("Omega Ratio =", omega_ratio(e, r, f))
    print("Sortino Ratio =", sortino_ratio(e, r, f))
    print("Kappa 3 Ratio =", kappa_three_ratio(e, r, f))
    print("Gain Loss Ratio =", gain_loss_ratio(r))
    print("Upside Potential Ratio =", upside_potential_ratio(r))
    # Risk-adjusted return based on Drawdown risk
    print("Calmar Ratio =", calmar_ratio(e, r, f))
    print("Sterling Ratio =", sterling_ration(e, r, f, 5))
    print("Burke Ratio =", burke_ratio(e, r, f, 5))

if __name__ == "__main__":
    test_risk_metrics()
    test_risk_adjusted_metrics()
</code></pre>
<p>Ok, so reading your comments, you mention that <code>r</code> can have the same length as <code>m</code> <strong>or less</strong>. Therefore, my proposed solution is to just load 2 CSV files where the first file contains your <code>r</code> values and the second file contains your <code>m</code> values.</p>
<p>Make sure your csv files don't have a header, just list the values in a column.</p>
<p>For the purposes of this test, here is what I have as my <code>r</code> CSV file.</p>
<pre><code>3.223
1.313
1.023
0.333
23.311
</code></pre>
<p>And my <code>m</code> CSV file:</p>
<pre><code>1.233
0.3231
23.132
0.032
132.14
</code></pre>
<p>Now, you can load them in your script and feed them into your functions. Put this in your <code>__name__ == '__main__'</code> block:</p>
<pre><code>import csv

# load r
with open(r'C:\path\to\r_values.csv') as csvfile:  # change your filename here
    r = numpy.array([float(x[0]) for x in csv.reader(csvfile)])

# load m
with open(r'C:\path\to\m_values.csv') as csvfile:  # change your filename here
    m = numpy.array([float(x[0]) for x in csv.reader(csvfile)])
</code></pre>
<p>Next, I would just redefine your <code>test_risk_metrics</code> and <code>test_risk_adjusted_metrics</code> functions:</p>
<pre><code># Now you can feed them into your functions
def test_risk_metrics(r, m):
    print("vol =", vol(r))
    print("beta =", beta(r, m))
    print("hpm(0.0)_1 =", hpm(r, 0.0, 1))
    print("lpm(0.0)_1 =", lpm(r, 0.0, 1))
    print("VaR(0.05) =", var(r, 0.05))
    print("CVaR(0.05) =", cvar(r, 0.05))
    print("Drawdown(5) =", dd(r, 5))
    print("Max Drawdown =", max_dd(r))

def test_risk_adjusted_metrics(r, m):
    # Returns from the portfolio (r) and market (m)
    # Expected return
    e = numpy.mean(r)
    # Risk free rate
    f = 0.06
    # Risk-adjusted return based on Volatility
    print("Treynor Ratio =", treynor_ratio(e, r, m, f))
    print("Sharpe Ratio =", sharpe_ratio(e, r, f))
    print("Information Ratio =", information_ratio(r, m))
    # Risk-adjusted return based on Value at Risk
    print("Excess VaR =", excess_var(e, r, f, 0.05))
    print("Conditional Sharpe Ratio =", conditional_sharpe_ratio(e, r, f, 0.05))
    # Risk-adjusted return based on Lower Partial Moments
    print("Omega Ratio =", omega_ratio(e, r, f))
    print("Sortino Ratio =", sortino_ratio(e, r, f))
    print("Kappa 3 Ratio =", kappa_three_ratio(e, r, f))
    print("Gain Loss Ratio =", gain_loss_ratio(r))
    print("Upside Potential Ratio =", upside_potential_ratio(r))
    # Risk-adjusted return based on Drawdown risk
    print("Calmar Ratio =", calmar_ratio(e, r, f))
    print("Sterling Ratio =", sterling_ration(e, r, f, 5))
    print("Burke Ratio =", burke_ratio(e, r, f, 5))
</code></pre>
<p>Here's how the entire code should look:</p>
<pre><code>import math
import numpy

"""
Note - for some of the metrics the absolute value is returns. This is because if the risk (loss) is higher
we want to discount the expected excess return from the portfolio by a higher amount. Therefore risk should
be positive.
"""

def vol(returns):
    # Return the standard deviation of returns
    return numpy.std(returns)

def beta(returns, market):
    # Create a matrix of [returns, market]
    m = numpy.matrix([returns, market])
    # Return the covariance of m divided by the standard deviation of the market returns
    return numpy.cov(m)[0][1] / numpy.std(market)

def lpm(returns, threshold, order):
    # This method returns a lower partial moment of the returns
    # Create an array he same length as returns containing the minimum return threshold
    threshold_array = numpy.empty(len(returns))
    threshold_array.fill(threshold)
    # Calculate the difference between the threshold and the returns
    diff = threshold_array - returns
    # Set the minimum of each to 0
    diff = diff.clip(min=0)
    # Return the sum of the different to the power of order
    return numpy.sum(diff ** order) / len(returns)

def hpm(returns, threshold, order):
    # This method returns a higher partial moment of the returns
    # Create an array he same length as returns containing the minimum return threshold
    threshold_array = numpy.empty(len(returns))
    threshold_array.fill(threshold)
    # Calculate the difference between the returns and the threshold
    diff = returns - threshold_array
    # Set the minimum of each to 0
    diff = diff.clip(min=0)
    # Return the sum of the different to the power of order
    return numpy.sum(diff ** order) / len(returns)

def var(returns, alpha):
    # This method calculates the historical simulation var of the returns
    sorted_returns = numpy.sort(returns)
    # Calculate the index associated with alpha
    index = int(alpha * len(sorted_returns))
    # VaR should be positive
    return abs(sorted_returns[index])

def cvar(returns, alpha):
    # This method calculates the condition VaR of the returns
    sorted_returns = numpy.sort(returns)
    # Calculate the index associated with alpha
    index = int(alpha * len(sorted_returns))
    # Calculate the total VaR beyond alpha
    sum_var = sorted_returns[0]
    for i in range(1, index):
        sum_var += sorted_returns[i]
    # Return the average VaR
    # CVaR should be positive
    return abs(sum_var / index)

def prices(returns, base):
    # Converts returns into prices
    s = [base]
    for i in range(len(returns)):
        s.append(base * (1 + returns[i]))
    return numpy.array(s)

def dd(returns, tau):
    # Returns the draw-down given time period tau
    values = prices(returns, 100)
    pos = len(values) - 1
    pre = pos - tau
    drawdown = float('+inf')
    # Find the maximum drawdown given tau
    while pre &gt;= 0:
        dd_i = (values[pos] / values[pre]) - 1
        if dd_i &lt; drawdown:
            drawdown = dd_i
        pos, pre = pos - 1, pre - 1
    # Drawdown should be positive
    return abs(drawdown)

def max_dd(returns):
    # Returns the maximum draw-down for any tau in (0, T) where T is the length of the return series
    max_drawdown = float('-inf')
    for i in range(0, len(returns)):
        drawdown_i = dd(returns, i)
        if drawdown_i &gt; max_drawdown:
            max_drawdown = drawdown_i
    # Max draw-down should be positive
    return abs(max_drawdown)

def average_dd(returns, periods):
    # Returns the average maximum drawdown over n periods
    drawdowns = []
    for i in range(0, len(returns)):
        drawdown_i = dd(returns, i)
        drawdowns.append(drawdown_i)
    drawdowns = sorted(drawdowns)
    total_dd = abs(drawdowns[0])
    for i in range(1, periods):
        total_dd += abs(drawdowns[i])
    return total_dd / periods

def average_dd_squared(returns, periods):
    # Returns the average maximum drawdown squared over n periods
    drawdowns = []
    for i in range(0, len(returns)):
        drawdown_i = math.pow(dd(returns, i), 2.0)
        drawdowns.append(drawdown_i)
    drawdowns = sorted(drawdowns)
    total_dd = abs(drawdowns[0])
    for i in range(1, periods):
        total_dd += abs(drawdowns[i])
    return total_dd / periods

def treynor_ratio(er, returns, market, rf):
    return (er - rf) / beta(returns, market)

def sharpe_ratio(er, returns, rf):
    return (er - rf) / vol(returns)

def information_ratio(returns, benchmark):
    diff = returns - benchmark
    return numpy.mean(diff) / vol(diff)

def modigliani_ratio(er, returns, benchmark, rf):
    np_rf = numpy.empty(len(returns))
    np_rf.fill(rf)
    rdiff = returns - np_rf
    bdiff = benchmark - np_rf
    return (er - rf) * (vol(rdiff) / vol(bdiff)) + rf

def excess_var(er, returns, rf, alpha):
    return (er - rf) / var(returns, alpha)

def conditional_sharpe_ratio(er, returns, rf, alpha):
    return (er - rf) / cvar(returns, alpha)

def omega_ratio(er, returns, rf, target=0):
    return (er - rf) / lpm(returns, target, 1)

def sortino_ratio(er, returns, rf, target=0):
    return (er - rf) / math.sqrt(lpm(returns, target, 2))

def kappa_three_ratio(er, returns, rf, target=0):
    return (er - rf) / math.pow(lpm(returns, target, 3), float(1/3))

def gain_loss_ratio(returns, target=0):
    return hpm(returns, target, 1) / lpm(returns, target, 1)

def upside_potential_ratio(returns, target=0):
    return hpm(returns, target, 1) / math.sqrt(lpm(returns, target, 2))

def calmar_ratio(er, returns, rf):
    return (er - rf) / max_dd(returns)

def sterling_ration(er, returns, rf, periods):
    return (er - rf) / average_dd(returns, periods)

def burke_ratio(er, returns, rf, periods):
    return (er - rf) / math.sqrt(average_dd_squared(returns, periods))

def test_risk_metrics(r, m):
    print("vol =", vol(r))
    print("beta =", beta(r, m))
    print("hpm(0.0)_1 =", hpm(r, 0.0, 1))
    print("lpm(0.0)_1 =", lpm(r, 0.0, 1))
    print("VaR(0.05) =", var(r, 0.05))
    print("CVaR(0.05) =", cvar(r, 0.05))
    print("Drawdown(5) =", dd(r, 5))
    print("Max Drawdown =", max_dd(r))

def test_risk_adjusted_metrics(r, m):
    # Returns from the portfolio (r) and market (m)
    # Expected return
    e = numpy.mean(r)
    # Risk free rate
    f = 0.06
    # Risk-adjusted return based on Volatility
    print("Treynor Ratio =", treynor_ratio(e, r, m, f))
    print("Sharpe Ratio =", sharpe_ratio(e, r, f))
    print("Information Ratio =", information_ratio(r, m))
    # Risk-adjusted return based on Value at Risk
    print("Excess VaR =", excess_var(e, r, f, 0.05))
    print("Conditional Sharpe Ratio =", conditional_sharpe_ratio(e, r, f, 0.05))
    # Risk-adjusted return based on Lower Partial Moments
    print("Omega Ratio =", omega_ratio(e, r, f))
    print("Sortino Ratio =", sortino_ratio(e, r, f))
    print("Kappa 3 Ratio =", kappa_three_ratio(e, r, f))
    print("Gain Loss Ratio =", gain_loss_ratio(r))
    print("Upside Potential Ratio =", upside_potential_ratio(r))
    # Risk-adjusted return based on Drawdown risk
    print("Calmar Ratio =", calmar_ratio(e, r, f))
    print("Sterling Ratio =", sterling_ration(e, r, f, 5))
    print("Burke Ratio =", burke_ratio(e, r, f, 5))

if __name__ == "__main__":
    import csv

    # load r
    with open(r'test.csv') as csvfile:  # change your filename here
        r = numpy.array([float(x[0]) for x in csv.reader(csvfile)])

    # load m
    with open(r'test2.csv') as csvfile:  # change your filename here
        m = numpy.array([float(x[0]) for x in csv.reader(csvfile)])

    test_risk_metrics(r, m)
    test_risk_adjusted_metrics(r, m)
</code></pre>
<p>And here's the output with my test files:</p>
<pre><code>vol = 8.787591196681829
beta = 10.716740105069574
hpm(0.0)_1 = 5.8406
lpm(0.0)_1 = 0.0
VaR(0.05) = 0.333
test.py:69: RuntimeWarning: divide by zero encountered in double_scalars
  return abs(sum_var / index)
CVaR(0.05) = inf
Drawdown(5) = 23.311
Max Drawdown = 0.684347620175231
Treynor Ratio = 0.5393991030225205
Sharpe Ratio = 0.6578139413429632
Information Ratio = -0.5991798008409744
Excess VaR = 17.35915915915916
Conditional Sharpe Ratio = 0.0
test.py:163: RuntimeWarning: divide by zero encountered in double_scalars
  return (er - rf) / lpm(returns, target, 1)
Omega Ratio = inf
test.py:167: RuntimeWarning: divide by zero encountered in double_scalars
  return (er - rf) / math.sqrt(lpm(returns, target, 2))
Sortino Ratio = inf
test.py:171: RuntimeWarning: divide by zero encountered in double_scalars
  return (er - rf) / math.pow(lpm(returns, target, 3), float(1/3))
Kappa 3 Ratio = inf
test.py:175: RuntimeWarning: divide by zero encountered in double_scalars
  return hpm(returns, target, 1) / lpm(returns, target, 1)
Gain Loss Ratio = inf
test.py:179: RuntimeWarning: divide by zero encountered in double_scalars
  return hpm(returns, target, 1) / math.sqrt(lpm(returns, target, 2))
Upside Potential Ratio = inf
Calmar Ratio = 8.446876747404843
Sterling Ratio = 14.51982017208844
Burke Ratio = 12.583312697186637
</code></pre>
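<p>A small side note: since pandas is already imported elsewhere in the project, the same single-column files could also be loaded in one line each. A minimal sketch, assuming the same file layout with no header row:</p>
<pre><code>import pandas as pd

# each file has one unnamed column of numbers, so read it without a header
r = pd.read_csv(r'C:\path\to\r_values.csv', header=None)[0].to_numpy()
m = pd.read_csv(r'C:\path\to\m_values.csv', header=None)[0].to_numpy()
</code></pre>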
python|python-3.x|numpy
2
375,396
58,600,556
Insert an image onto a substrate
<p>Please help me, I need to insert an image onto a substrate.</p>
<p>Substrate:</p>
<p><a href="https://i.stack.imgur.com/Da2VG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Da2VG.png" alt="enter image description here"></a></p>
<p>It is a PNG, and the image must be inserted into the blank area with the cities, from edge to edge of the frame.</p>
<p>The problem is that I can't find an example of how to insert an image at known coordinate points of the corners of a given substrate.</p>
<p>Please help.</p>
<p>My test image:</p>
<p><a href="https://i.stack.imgur.com/TQx18.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TQx18.jpg" alt="enter image description here"></a></p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
from skimage import io

frame = cv2.cvtColor(io.imread('as.png'), cv2.COLOR_RGB2BGR)
image = cv2.cvtColor(io.imread("Vw5Rc.jpg"), cv2.COLOR_RGB2BGR)

mask = 255 * np.uint8(np.all(frame == [0, 0, 0], axis=2))
contours, _ = cv2.findContours(mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnt = min(contours, key=cv2.contourArea)
(x, y, w, h) = cv2.boundingRect(cnt)

# Copy appropriately resized image to frame
frame[y:y+h, x:x+w] = cv2.resize(image, (w, h))

cv2.imwrite('frame.png', frame)
</code></pre>
<p>I'm trying to find the area where the image should be inserted by color: I can find the area when it is red, but what if it has no color?</p>
<p>The static frame has a constant size.</p>
<p>Here is one way to do it in Python/OpenCV, if I understand what you want.</p>
<pre><code>Read the substrate and trees images
Extract the alpha channel from the substrate
Extract the substrate image without the alpha channel
Use the alpha channel to color the base substrate image white where the alpha channel is black to correct a flaw in the base image
Threshold the alpha channel and invert it
Use morphology to remove the grid lines so that there is only one "outer" contour.
Extract the contour and its bounding box
Resize the trees image to the size of the bounding box.
Use numpy indexing and slicing to multiply the region of the substrate with the resized trees image.
Save the results.
Optionally, display the various images.
</code></pre>
<p><br> Substrate Image:</p>
<p><a href="https://i.stack.imgur.com/xqGf3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xqGf3.png" alt="enter image description here"></a></p>
<p>Trees Image:</p>
<p><a href="https://i.stack.imgur.com/rOY9n.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rOY9n.jpg" alt="enter image description here"></a></p>
<pre><code>import cv2
import numpy as np

# load substrate with alpha channel
substrate = cv2.imread("substrate.png", cv2.IMREAD_UNCHANGED)
hh, ww, cc = substrate.shape

# load colored image
trees = cv2.imread("trees.jpg")

# make img white where alpha is black to merge the alpha channel with the image
alpha = substrate[:,:,3]
img = substrate[:,:,0-2]
img[alpha==0] = 255
img = cv2.merge((img,img,img))

# threshold the img
ret, thresh = cv2.threshold(alpha,0,255,0)

# invert thresh
thresh = 255 - thresh

# make grid lines white in thresh so will get only one contour
kernel = np.ones((9,9), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)

# find one outer contour
cntrs = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cntrs = cntrs[0] if len(cntrs) == 2 else cntrs[1]

# get bounding box of contour of white rectangle in thresh
for c in cntrs:
    x,y,w,h = cv2.boundingRect(c)
    #cv2.rectangle(img, (x,y), (x+w,y+h),(0, 0, 255), 2)

# resize trees
trees = cv2.resize(trees,(w,h),0,0)

# generate result
result = img.copy()
result[y:y+h, x:x+w] = img[y:y+h, x:x+w]/255 * trees

# write result to disk
cv2.imwrite("substrate_over_trees.jpg", result)

cv2.imshow("ALPHA", alpha)
cv2.imshow("IMG", img)
cv2.imshow("THRESH", thresh)
cv2.imshow("TREES", trees)
cv2.imshow("RESULT", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p><br> Result:</p>
<p><a href="https://i.stack.imgur.com/hjekD.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hjekD.jpg" alt="enter image description here"></a></p>
<p>Note that there is distortion of the trees image, because its aspect ratio does not match the region of the substrate image corresponding to the contour bounding box. This can be changed to maintain the aspect ratio, but then the image will need to be padded to white or some other color to fill the remaining area of the bounding box.</p>
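<p>For reference, a rough sketch of the aspect-preserving variant mentioned in the last paragraph, reusing the variables (and imports) from the code above. The white padding color and the top-left alignment are my own assumptions about what is wanted:</p>
<pre><code># scale trees so it fits inside the bounding box without distortion
th, tw = trees.shape[:2]
scale = min(w / tw, h / th)
new_w, new_h = int(tw * scale), int(th * scale)
resized = cv2.resize(trees, (new_w, new_h))

# pad the remaining area of the bounding box with white
pad_right = w - new_w
pad_bottom = h - new_h
padded = cv2.copyMakeBorder(resized, 0, pad_bottom, 0, pad_right,
                            cv2.BORDER_CONSTANT, value=(255, 255, 255))

# blend into the bounding box region exactly as in the main code
result[y:y+h, x:x+w] = img[y:y+h, x:x+w]/255 * padded
</code></pre>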
python|image|numpy|opencv|python-imaging-library
2
375,397
58,210,841
How to display my polynomial regression line?
<p>My plot has a very fat line, which I didn't expect and haven't been able to troubleshoot on my own. I don't know how else to show the problem besides the image below.</p>
<p>I'm doing EDA on Kaggle's Craigslist Auto data set. I want to display, and then compare and contrast, a linear and a polynomial regression fit correlating price and model year for each unique vehicle make and model (i.e. Ford F150).</p>
<p>How do I produce the following plot with a more normal-looking line? Changing the line width doesn't change anything.</p>
<p><a href="https://i.stack.imgur.com/vQ2jo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vQ2jo.png" alt="enter image description here"></a></p>
<pre><code>df_f150=df[df['Make and Model']=='ford F-150']

#plotting a linear regression line for each dataframe
fig = plt.figure(figsize=(10,7))
sns.regplot(x=df_f150.year, y=df_f150.price, color='b')

'#Here is where I try to do one of the polynomial regressions'

# Legend, title and labels.
#plt.legend(labels=x)
plt.title('Relationship Between Model Year and Price', size=24)
plt.xlabel('Year', size=18)
plt.ylabel('Price', size=18)
plt.xlim(1990,2020)
plt.ylim(1000,100000)

from sklearn.preprocessing import PolynomialFeatures
X = df_f150['year'].values.reshape(-1,1)
y = df_f150['price'].values.reshape(-1,1)
poly = PolynomialFeatures(degree = 8)
poly.fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
regressor = LinearRegression()
regressor.fit(X_train, y_train) #training the algorithm

#To retrieve the intercept:
print(regressor.intercept_)
#For retrieving the slope:
print(regressor.coef_)

y_pred = regressor.predict(X_test)
dfres = pd.DataFrame({'Actual': y_test.flatten(), 'Predicted': y_pred.flatten()})
dfres

plt.scatter(X_test, y_test, color='gray')
plt.plot(X_test, y_pred, color='red', linewidth=2)
plt.show()
</code></pre>
<h1>First, always clean and check the data:</h1>
<ul>
<li>Given data from <a href="https://www.kaggle.com/austinreese/craigslist-carstrucks-data#craigslistVehiclesFull.csv" rel="nofollow noreferrer">Kaggle: Vehicles listings from Craigslist.org</a></li>
<li>Incidentally, the plot generated by <code>sns.regplot</code> is nearly the same as that generated by performing the regression with <code>sklearn</code>. As such, I have not included the additional code.</li>
</ul>
<h2>Load and Select Data:</h2>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
import pandas as pd

file = Path.cwd() / 'data/craigslist-carstrucks-data/craigslistVehicles.csv'
df = pd.read_csv(file, usecols=['price', 'year', 'manufacturer', 'make'])

 price    year manufacturer          make
  3500  2006.0    chevrolet           NaN
  3399  2002.0        lexus         es300
  9000  2009.0    chevrolet  suburban lt2
 31999  2012.0          ram          2500
 16990  2003.0          ram          3500

# Select specific data:
# outliers exist, so price &lt; 120000 and f-150 began production in 1975
ford = df[['price', 'year']][(df.manufacturer == 'ford') &amp; (df.make == 'f-150') &amp; (df.price &lt; 120000) &amp; (df.year &gt;= 1975)]

 price    year
  1600  1992.0
 39760  2018.0
 11490  2014.0
  2500  1993.0
 17950  2014.0
</code></pre>
<h3>Plot with <a href="https://seaborn.pydata.org/" rel="nofollow noreferrer">seaborn</a>:</h3>
<ul>
<li><a href="https://seaborn.pydata.org/generated/seaborn.regplot.html?highlight=regplot#seaborn.regplot" rel="nofollow noreferrer"><code>sns.regplot</code></a></li>
</ul>
<pre class="lang-py prettyprint-override"><code>import seaborn as sns

sns.regplot(x=ford.year, y=ford.price)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/Kl9U1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Kl9U1.png" alt="enter image description here"></a></p>
<h3>Here is the plot, without removing outliers:</h3>
<ul>
<li>The plot is flat because the max price is <code>8.888889e+07</code>
<ul>
<li>You set <code>plt.ylim(1000,100000)</code>, so the outliers didn't show up</li>
</ul></li>
<li>I made the arbitrary decision to exclude all prices above $120k, because I know that's an unrealistic price for this sample.</li>
<li>Simply removing outliers is not always the best option.</li>
</ul>
<p><a href="https://i.stack.imgur.com/8y0u0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8y0u0.png" alt="enter image description here"></a></p>
<pre class="lang-py prettyprint-override"><code>print(ford.describe())

              price          year
count  1.127000e+04  11270.000000
mean   2.405777e+04   2010.459184
std    8.372461e+05      6.454361
min    0.000000e+00   1975.000000
25%    5.300000e+03   2007.000000
50%    1.548750e+04   2012.000000
75%    2.549500e+04   2015.000000
max    8.888889e+07   2020.000000
</code></pre>
<h3>Plot from performing the regression with <code>sklearn</code></h3>
<ul>
<li><a href="https://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html#sphx-glr-auto-examples-linear-model-plot-ols-py" rel="nofollow noreferrer">Linear Regression Example Plot</a></li>
</ul>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt

plt.scatter(X_test, y_test)
plt.plot(X_test, y_pred, color='violet', linewidth=3)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/srARc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/srARc.png" alt="enter image description here"></a></p>
<h3>Plot the <code>X_test</code> &amp; <code>y_pred</code> in the <code>sns.regplot()</code>:</h3>
<pre class="lang-py prettyprint-override"><code>sns.regplot(x=ford.year, y=ford.price)
sns.scatterplot(X_test.flatten(), y_pred.flatten(), color='r')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/qm6V8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qm6V8.png" alt="enter image description here"></a></p>
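<p>As a possible alternative to the hand-picked $120k cut-off above, a quantile-based filter avoids choosing the limit manually. A small sketch, assuming the same selection as in the code above; the 99th percentile is my own arbitrary choice:</p>
<pre class="lang-py prettyprint-override"><code># keep f-150 rows whose price falls at or below the 99th percentile
ford = df[['price', 'year']][(df.manufacturer == 'ford') &amp; (df.make == 'f-150') &amp; (df.year &gt;= 1975)]
ford = ford[ford.price &lt;= ford.price.quantile(0.99)]
</code></pre>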
python|pandas|matplotlib|scikit-learn|seaborn
1
375,398
69,051,858
Finding the first value of at least a certain value
<p>I spent a few hours on this, so any help would be amazing!</p>
<p>I have a pandas dataframe df. I group by one of the columns (A), focus on another column (B) and get the mean of each group:</p>
<pre><code>group_mean = df.groupby('A').B.agg('mean')
group = df.groupby('A').B
</code></pre>
<p>In the same order as above, these are the types Python reports:</p>
<pre><code>&lt;class 'pandas.core.series.Series'&gt;
&lt;class 'pandas.core.groupby.generic.SeriesGroupBy'&gt;
</code></pre>
<p>Now the question: how can I, for each group in &quot;group&quot;, identify the index of the first element that is equal to or greater than the mean? In other words, if a group has elements 5, 3, 7, 9, 1, 10 and the mean is 8, I want to return the value 3 (to point to &quot;9&quot;).</p>
<p>The result can be another groupby object with one number per group (the index).</p>
<p>Thanks in advance!</p>
<p>You can use <code>apply</code> to check, per group, which values are greater than or equal to the mean, and <code>idxmax</code> to get the index of the first <code>True</code> value:</p> <pre><code>df.groupby('A')['B'].apply(lambda x: x.ge(x.mean()).idxmax()) </code></pre>
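<p>For reference, a quick check on a toy frame built from the example values in the question (note that the actual mean of 5, 3, 7, 9, 1, 10 is about 5.83, so the first qualifying element is the 7 at position 2):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'A': ['x'] * 6, 'B': [5, 3, 7, 9, 1, 10]})
print(df.groupby('A')['B'].apply(lambda x: x.ge(x.mean()).idxmax()))
# A
# x    2
# Name: B, dtype: int64
</code></pre>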
pandas|pandas-groupby
1
375,399
69,228,301
put all the items containing a certain string in a dataframe in another column
<p>The dataframe I have right now looks like this:</p>
<pre><code>In[1]: df
Out[1]:
index yesterday today     tomorrow
1     apple_1   banana_3  cherry_4
2     pear_2    apple_4   blueberry_1
3     kiwi_3    orange_6  banana_2
4     apple_1   melon_3   banana_4
</code></pre>
<p>I want to record all the apples and put them in another column/series, like this:</p>
<pre><code>index yesterday today     tomorrow     apple
1     apple_1   banana_3  cherry_4     apple_1
2     pear_2    apple_4   blueberry_1  apple_4
3     kiwi_3    orange_6  banana_2     nan
4     apple_1   melon_3   banana_4     apple_2
</code></pre>
<ol>
<li>I don't know which column will contain an apple</li>
<li>I don't know what else is with the apple</li>
</ol>
<p>Thanks for the help!</p>
<p>Try <code>startswith</code> to flag the apple cells, then use <code>where</code> to mask everything else as NaN, and <code>ffill</code> across the columns:</p>
<pre><code>df['new'] = df.where(df.apply(lambda x : x.str.startswith('apple'))).ffill(1).iloc[:,-1]
df
Out[149]:
      yesterday     today     tomorrow      new
index
1       apple_1  banana_3     cherry_4  apple_1
2        pear_2   apple_4  blueberry_1  apple_4
3        kiwi_3  orange_6     banana_2      NaN
4       apple_1   melon_3     banana_4  apple_1
</code></pre>
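<p>To see why taking the last column after <code>ffill</code> works, it may help to look at the intermediate mask. A small sketch reusing the same df; this just re-displays the step inside the one-liner above on the original three columns:</p>
<pre><code>cols = ['yesterday', 'today', 'tomorrow']
masked = df[cols].where(df[cols].apply(lambda x: x.str.startswith('apple')))
print(masked)
#       yesterday    today tomorrow
# index
# 1       apple_1      NaN      NaN
# 2           NaN  apple_4      NaN
# 3           NaN      NaN      NaN
# 4       apple_1      NaN      NaN
</code></pre>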
python|pandas
1