Columns: Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, length 15 to 150) | question (string, length 37 to 64.2k) | answer (string, length 37 to 44.1k) | tags (string, length 5 to 106) | score (int64, -10 to 5.87k)
|---|---|---|---|---|---|---|
376,600
| 65,443,829
|
Problem with read text file in Python Pandas?
|
<p>Hello, how do I read the text file below?</p>
<p><a href="https://i.stack.imgur.com/ivxXQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ivxXQ.png" alt="enter image description here" /></a></p>
|
<p>Your file appears to be a CSV, not a fixed-width file (fwf). Use <code>pd.read_csv</code>.</p>
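A minimal sketch of that suggestion (the file contents, column names, and separator here are assumptions, since the original file is only shown as a screenshot):

```python
import io

import pandas as pd

# Hypothetical text standing in for the file in the screenshot; adjust
# the separator (sep) to whatever the real file uses, e.g. "," or ";".
csv_text = "col_a;col_b\n1;2\n3;4\n"

df = pd.read_csv(io.StringIO(csv_text), sep=";")
```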
|
python|pandas|dataframe|text
| 0
|
376,601
| 65,234,228
|
Sum of only specific columns in a pandas dataframe
|
<p>Suppose I have a dataframe with a few columns</p>
<p><a href="https://i.stack.imgur.com/3jxz8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3jxz8.png" alt="before operation" /></a></p>
<p>and a list <code>['salary','gross exp']</code>.</p>
<p>Now I want to sum only the columns from the list and save the result back to the dataframe.</p>
<p>To put this in perspective, the columns in the list (<code>['salary','gross exp']</code>) are money related, so it makes sense to sum these columns and not touch any of the others.</p>
<p><a href="https://i.stack.imgur.com/af4su.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/af4su.jpg" alt="after operation" /></a></p>
<p>P.S.: I have several Excel workbooks to work through, each consisting of a few tens of sheets, so doing it manually is out of the question.</p>
<p>An Excel macro (VBA) would also be fine, if that's possible.</p>
<p>Thanks in advance.</p>
|
<p>Working with the following example:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
list_ = ['sallary', 'gross exp']
d = {'sallary': [1,2,3], 'gross exp': [2,2,2], 'another column': [10,10,10]}
df = pd.DataFrame(d)
df
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">sallary</th>
<th style="text-align: right;">gross exp</th>
<th style="text-align: right;">another column</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">10</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">10</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">10</td>
</tr>
</tbody>
</table>
</div>
<p>You can then add a new empty row and insert the sums of the listed columns into that same row (note that <code>DataFrame.append</code> was removed in pandas 2.0, so this snippet requires an older pandas):</p>
<pre class="lang-py prettyprint-override"><code>df = df.append(pd.Series(dtype=float), ignore_index=True)
df.loc[df.index[-1], list_] = df[list_].sum()
df
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">sallary</th>
<th style="text-align: right;">gross exp</th>
<th style="text-align: right;">another column</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1.0</td>
<td style="text-align: right;">2.0</td>
<td style="text-align: right;">10.0</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">2.0</td>
<td style="text-align: right;">2.0</td>
<td style="text-align: right;">10.0</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">3.0</td>
<td style="text-align: right;">2.0</td>
<td style="text-align: right;">10.0</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">6.0</td>
<td style="text-align: right;">6.0</td>
<td style="text-align: right;">NaN</td>
</tr>
</tbody>
</table>
</div>
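On pandas 2.0 and later, where `DataFrame.append` no longer exists, the same totals row can be produced with `.loc` enlargement (a sketch on the same example data):

```python
import pandas as pd

list_ = ['sallary', 'gross exp']
df = pd.DataFrame({'sallary': [1, 2, 3],
                   'gross exp': [2, 2, 2],
                   'another column': [10, 10, 10]})

# Assigning to a not-yet-existing row label via .loc appends the row;
# columns outside list_ are filled with NaN, as in the append version.
df.loc[len(df), list_] = df[list_].sum()
```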
|
python|excel|vba|pandas|dataframe
| 0
|
376,602
| 65,353,418
|
How do you take a text file and change split it into data usable for a machine learning classifier?
|
<p>For this practice exercise I'm only supposed to use NumPy, so I can't just use scikit-learn.</p>
<p>I've loaded the dataset and managed to split it into positive and negative arrays. However, I'm not sure what to do next, or even whether I'm processing the data correctly for the classifier.</p>
<pre><code>datasettrain = np.loadtxt("Adaboost-trainer.txt")
negtrain, postrain = np.delete(datasettrain[datasettrain[:,2] < 0],2,1), np.delete(datasettrain[datasettrain[:,2] > 0],2,1)
clf = Adaboost(n_clf=5)
clf.fit(postrain, negtrain)
</code></pre>
<p>I know I'm supposed to be inputting features and labels, but surely the data has to be in a different format for that, as opposed to just a plain text file? At least, I have always received data that was already labeled with features and labels, and I could input it just by splitting that data. Any thoughts on how one might process a regular text file into features and labels?</p>
<p>Edit: a sample of the file:</p>
<pre><code>
1.116574 0.157686 +1
-0.359096 0.653998 -1
1.845620 0.873235 +1
-0.271484 -0.960392 -1
0.304631 2.797998 +1
</code></pre>
|
<p>Ah, if I'm interpreting your sample data correctly, the first two columns are your feature columns and the last column holds your target values. If this is correct, then to get training and test sets, you would need to do something like the following:</p>
<pre><code>import numpy as np
data = np.loadtxt("Adaboost-trainer.txt")
# Determine your training/test split. I opted for 80/20
test_size = 0.2
split_index = int(data.shape[0] * test_size)
# Get the full train and test splits (the first 20% of the shuffled
# indices go to the test set, the rest to the training set)
indices = np.random.permutation(data.shape[0])
test_idx = indices[:split_index]
train_idx = indices[split_index:]
test = data[test_idx,:]
train = data[train_idx,:]
# Split the X and y for use in models
y_train = train[:,-1]
X_train = np.delete(train, 2, axis=1)
y_test = test[:,-1]
X_test = np.delete(test, 2, axis=1)
</code></pre>
<p>From there, you have an 80/20 train/test split of your data for use with a model.</p>
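As a sanity check on the split sizes (with synthetic data standing in for `Adaboost-trainer.txt`), a shuffled 80/20 split of 100 rows should leave 80 training rows and 20 test rows:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in: two feature columns plus a +1/-1 label column.
data = np.column_stack([rng.normal(size=(100, 2)),
                        rng.choice([-1.0, 1.0], size=100)])

test_size = 0.2
split_index = int(data.shape[0] * test_size)
indices = rng.permutation(data.shape[0])
test = data[indices[:split_index]]    # first 20% of the shuffled rows
train = data[indices[split_index:]]   # remaining 80%

y_train, X_train = train[:, -1], np.delete(train, 2, axis=1)
y_test, X_test = test[:, -1], np.delete(test, 2, axis=1)
```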
|
python|numpy|machine-learning
| 0
|
376,603
| 65,165,637
|
'Fill forward' dummy variable for observations in same group (Python)
|
<p>I've created a dummy variable (in Python), <code>seo</code>, which takes the value 1 if the value of another column is greater than 0, as shown in the code below.</p>
<pre><code>df['seo'] = (df['amount'] > 0).astype(int)
</code></pre>
<p>What I want to do is to create a second dummy variable, <code>past_seo</code>, which takes the value 1 if the <code>seo</code> dummy for a particular firm was 1 at any historical time.</p>
<p>For reference, my dataset comprises monthly firm data and contains a firm identifier variable (<code>6_cusip</code>).</p>
<p>What I tried to do was to group the dataset by <code>6_cusip</code> and <code>date</code>, and then "fill forward" the <code>seo</code> dummy variable. However, I couldn't get this to work.</p>
<p>The code below shows an example of the first 20 observations in my dataset. As shown, the observations are all from the same firm. What I want to do is create a new column which fills that '1' in the <code>seo</code> column forward to all subsequent observations belonging to the same firm.</p>
<pre><code>{'date': {0: '1994-05',
1: '1994-06',
2: '1994-07',
3: '1994-08',
4: '1994-09',
5: '1994-10',
6: '1994-11',
7: '1994-12',
8: '1995-01',
9: '1995-02',
10: '1995-03',
11: '1995-04',
12: '1995-05',
13: '1995-06',
14: '1995-07',
15: '1995-08',
16: '1995-09',
17: '1995-10',
18: '1995-11',
19: '1995-12'},
'6_cusip': {0: '00077R',
1: '00077R',
2: '00077R',
3: '00077R',
4: '00077R',
5: '00077R',
6: '00077R',
7: '00077R',
8: '00077R',
9: '00077R',
10: '00077R',
11: '00077R',
12: '00077R',
13: '00077R',
14: '00077R',
15: '00077R',
16: '00077R',
17: '00077R',
18: '00077R',
19: '00077R'},
'seo': {0: 0,
1: 0,
2: 0,
3: 0,
4: 0,
5: 0,
6: 0,
7: 0,
8: 0,
9: 0,
10: 0,
11: 0,
12: 0,
13: 0,
14: 0,
15: 1,
16: 0,
17: 0,
18: 0,
19: 0}}
</code></pre>
<p>Let me know if you have any advice, thanks!</p>
|
<p>I think this should work:</p>
<pre><code>df["past_seo"] = df.groupby("6_cusip").seo.cumsum().gt(0).astype(int)
</code></pre>
<p>Basically: cumulatively sum <code>seo</code> within each group, flag rows where the running sum is greater than <code>0</code>, and cast to an integer.</p>
<p>output:</p>
<pre><code> date 6_cusip seo past_seo
0 1994-05 00077R 0 0
1 1994-06 00077R 0 0
2 1994-07 00077R 0 0
3 1994-08 00077R 0 0
4 1994-09 00077R 0 0
5 1994-10 00077R 0 0
6 1994-11 00077R 0 0
7 1994-12 00077R 0 0
8 1995-01 00077R 0 0
9 1995-02 00077R 0 0
10 1995-03 00077R 0 0
11 1995-04 00077R 0 0
12 1995-05 00077R 0 0
13 1995-06 00077R 0 0
14 1995-07 00077R 0 0
15 1995-08 00077R 1 1
16 1995-09 00077R 0 1
17 1995-10 00077R 0 1
18 1995-11 00077R 0 1
19 1995-12 00077R 0 1
</code></pre>
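Since `seo` is already a 0/1 dummy, an equivalent (and arguably more direct) formulation uses `cummax`, which simply carries the first 1 forward within each firm:

```python
import pandas as pd

df = pd.DataFrame({'6_cusip': ['00077R'] * 5,
                   'seo':     [0, 0, 1, 0, 0]})

# cummax propagates the first 1 forward within each group: exactly
# the "has seo ever been 1 for this firm so far" flag.
df['past_seo'] = df.groupby('6_cusip')['seo'].cummax()
```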
|
python|pandas|dataframe|data-science|dummy-variable
| 0
|
376,604
| 65,338,012
|
Pandas apply() with condition on last notnull value and its index
|
<p>I have a problem with some code I'm working on right now. What I have is a dataframe with a single numeric column, for example <code>pd.DataFrame([2,2,2,0,0,0,0,2,0,2])</code>. The output I want is <code>[2,2,2,0,0,0,0,10,0,4]</code> (like a memory effect).</p>
<p>So I'm thinking if there is a way of doing something like this:</p>
<pre><code>df.apply(lambda x: x * (index(x) - index( lastnotnull(x) ) ) if x!=0 else 0, axis=1)
</code></pre>
<p>Any idea would do, but preferably something optimised.</p>
<p><code>lastnotnull</code> is not really a function; it's just a way of explaining what I'm thinking of. Basically, for each row I want to check whether it is zero: if it is, output 0; otherwise multiply it by (the number of immediately preceding zero values + 1). So in my example, the fourth 2 becomes <code>2 * (7 - 2) = 10</code>, where 7 is the index of the 2 that became 10 and 2 is the index of the third 2 in the list, which is the last non-zero value before it.</p>
|
<p>You included in your post an example how to compute the expected
value: (<code>2 * (7 - 2) = 10</code>). It indicates that a more precise formula,
for values <em>!= 0</em>, is rather:</p>
<pre><code>x * (index(x) - index(previousNonZero(x)))
</code></pre>
<p>Note the following difference:</p>
<ul>
<li><em>lastnotnull</em> (as you wrote) means last <strong>non-null</strong> value in the
current column, regardless of what is the current element.
If this column contains no <em>NaN</em>, this is the element from the <strong>last</strong>
row.</li>
<li><em>previousNonZero</em> (a more precise formula) means that you are
looking <strong>upwards</strong> from the current element, for an element
which is <em>!= 0</em>. You didn't specify what to do with the <strong>first</strong> row,
for which there is no <em>previous</em> row, so I assumed that the index in
this case will be <em>0</em> (just the same as for the first row).</li>
</ul>
<p>To have some column name, other than default <em>0</em>, I created the source
DataFrame as:</p>
<pre><code>df = pd.DataFrame({'x': [2, 2, 2, 0, 0, 0, 0, 2, 0, 2]})
</code></pre>
<p>so that the column is named <em>x</em>.</p>
<p>To find the previous non-zero element (and its index) easily,
let's create an auxiliary DataFrame:</p>
<pre><code>wrk = df[df.x != 0]
</code></pre>
<p>And to generate the expected result, run:</p>
<pre><code>result = np.where(df.x != 0, df.x * (wrk.index - wrk.index.to_series()
.shift(fill_value=0)), 0).astype(int)
</code></pre>
<p>Details:</p>
<ul>
<li><code>df.x != 0</code> - the <em>condition</em> parameter to <em>np.where</em>,</li>
<li><code>wrk.index.to_series().shift(fill_value=0)</code> - the index of the
previous non-zero element in <em>x</em> column (with the initial <em>NaN</em> replaced
with <em>0</em>),</li>
<li><code>df.x * (wrk.index - ...)</code> - the formula for <em>x != 0</em> case,</li>
<li><code>0</code> - the formula for <em>x == 0</em> case.</li>
<li><code>astype(int)</code> - convert the result to <em>int</em> (otherwise it would be <em>float</em>).</li>
</ul>
<p>The result is:</p>
<pre><code>array([ 0, 2, 2, 0, 0, 0, 0, 10, 0, 4])
</code></pre>
<p>Note that the first element is different from your expected result,
but this is the result of the formula used.</p>
<p>To support my view, let's analyze the case of the first row:</p>
<ul>
<li><code>x</code> == 2,</li>
<li><code>index(x)</code> == 0,</li>
<li><code>index(previousNonZero(x))</code> == 0,</li>
<li><code>(index(x) - index(previousNonZero(x)))</code> == 0,</li>
<li>the final result == 0.</li>
</ul>
<p>An alternative: Change <em>fill_value</em> to <em>-1</em> (the index of the "hypothetically"
previous row (if it existed)) and the result will be just as you want.</p>
<h1>Edit</h1>
<p>The above code can be reworked into a function, operating on any column
of the source DataFrame:</p>
<pre><code>def proc(col):
wrk = col[col != 0]
return (col * (wrk.index - wrk.index.to_series()\
.shift(fill_value=-1))).fillna(0, downcast='infer')
</code></pre>
<p>This function still creates a temporary <em>Series</em> (<em>wrk</em>), but it is garbage-collected after the function returns, so don't worry about this detail.</p>
<p>Now you can call it:</p>
<pre><code>result = proc(df.x)
</code></pre>
<p>getting (this time a <em>Series</em>):</p>
<pre><code>0 2
1 2
2 2
3 0
4 0
5 0
6 0
7 10
8 0
9 4
dtype: int64
</code></pre>
<p>The left column is the index and the right - values (previously in an array).</p>
<p>If you want, rename this function to anything of your choice
(I had no better idea).</p>
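For completeness, here is the whole approach condensed into one runnable snippet, with `downcast='infer'` (deprecated in recent pandas) swapped for an explicit integer cast:

```python
import pandas as pd

def proc(col):
    # Non-zero rows; the index gap to the previous non-zero row
    # (or to the hypothetical row -1 for the first one) is the multiplier.
    wrk = col[col != 0]
    gaps = wrk.index - wrk.index.to_series().shift(fill_value=-1)
    return (col * gaps).fillna(0).astype(int)

df = pd.DataFrame({'x': [2, 2, 2, 0, 0, 0, 0, 2, 0, 2]})
result = proc(df.x)
```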
|
python|pandas|dataframe|conditional-statements
| 0
|
376,605
| 65,447,577
|
Remove characters from string (DataFrame)
|
<p>How do I remove the extra characters with a <strong>regex</strong> from the string in the code snippet below?</p>
<p><strong>From This :</strong> Fulham\n3.20\nDraw\n3.25\nSouthampton\n2.25\n</p>
<p><strong>To Desired Outcome:</strong> 3.20\n\n3.25\n\n2.25</p>
<p><em>Note</em>: I've tried this regex, <code>([^\d.\n])</code>, but it leaves an unwanted 'n' at the end of each team name.</p>
<pre><code>([^\d\.\\n])
Fulham\n3.20\nDraw\n3.25\nSouthampton\n2.25\n
</code></pre>
|
<p>Try this:</p>
<pre><code>import re

s = "Fulham\n3.20\nDraw\n3.25\nSouthampton\n2.25\n"
"\n\n".join(i for i in s.split() if re.search(r"\d", i))
</code></pre>
<p>Output:</p>
<pre><code>'3.20\n\n3.25\n\n2.25'
</code></pre>
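If you would rather use a single regex instead of split-and-filter, matching the decimal prices directly works too (assuming the numbers always have the digits.digits shape):

```python
import re

s = "Fulham\n3.20\nDraw\n3.25\nSouthampton\n2.25\n"

# \d+\.\d+ matches each price; join rebuilds the desired string.
result = "\n\n".join(re.findall(r"\d+\.\d+", s))
```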
|
python-3.x|pandas|dataframe
| 1
|
376,606
| 65,202,651
|
Python : Most efficient way to count elements of long list 1 in long list 2 ? (list comprehension is really slow)
|
<p>I have many tuples (e.g. <code>(0, 1)</code>) in two pretty long lists, <code>list_0</code> and <code>list_1</code>, each of ~40k elements. I need to count how many tuples of <code>list_0</code> are also in <code>list_1</code>.</p>
<p>The following statement with list comprehension takes ~ 1 min and I need to do this multiple times, so I am looking for something more efficient :</p>
<pre><code>len([element for element in list_0 if element in list_1])
</code></pre>
<p>What could be more efficient ?</p>
<p>To reproduce :</p>
<pre><code>elements_0 = 200*[0]+50*[1]+150*[2]
elements_1 = 100*[1]+150*[2]+150*[1]
df = pd.DataFrame(data=[list(elements_0), list(elements_1)]).T
list_0 = [item for sublist in df.groupby(0)[0].apply(lambda x: list(combinations(x.index, 2))) for item in sublist]
list_1 = [item for sublist in df.groupby(1)[1].apply(lambda x: list(combinations(x.index, 2))) for item in sublist]
print(len([pair for pair in list_0 if pair in list_1])) # Long
</code></pre>
|
<p>It looks like you can use</p>
<pre><code>pd.Series(list_0).isin(list_1).sum()
</code></pre>
<p>Output:</p>
<pre><code>22300
CPU times: user 14.8 ms, sys: 20 µs, total: 14.8 ms
Wall time: 14.1 ms
</code></pre>
<p>which should give the same answer with:</p>
<pre><code>len([element for element in list_0 if element in list_1])
</code></pre>
<p>which gives:</p>
<pre><code>22300
CPU times: user 13.8 s, sys: 0 ns, total: 13.8 s
Wall time: 13.8 s
</code></pre>
<p>Also <code>merge</code> and query:</p>
<pre><code>s = df.reset_index()
print(len(s.merge(s, on=[0,1])
.query('index_x > index_y')
))
</code></pre>
<p>with output:</p>
<pre><code>22300
CPU times: user 13.4 ms, sys: 15 µs, total: 13.4 ms
Wall time: 12.3 ms
</code></pre>
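The same speed-up is available in plain Python: the quadratic cost comes from `element in list_1` scanning the whole list, so hashing `list_1` into a set once makes each membership test O(1) on average:

```python
# Small stand-ins for the two ~40k-element lists of tuples.
list_0 = [(0, 1), (1, 2), (2, 3)] * 1000
list_1 = [(1, 2), (2, 3), (5, 6)] * 1000

set_1 = set(list_1)  # built once, O(len(list_1))
count = sum(1 for element in list_0 if element in set_1)
```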
|
python|pandas|numpy
| 3
|
376,607
| 65,259,666
|
Pandas group by id and year(date), but show year for all years, not just those which are present in id?
|
<p>I have years of transaction data which I am working with by customer id. The transaction information is at invoice level: an id could easily have multiple invoices on the same day, or no invoices for years. I am attempting to create dataframes which contain sums of invoices by customer by year, but which also show the years in which no invoices were added. Something akin to:</p>
<pre><code>tmp = invoices[invoices['invoice_year'].isin([2018, 2019, 2020])]
tmp = tmp.groupby(['id', pd.Grouper(key = 'invoice_date', freq = 'Y')])['sales'].sum()
</code></pre>
<p>This would return something akin to:</p>
<pre><code>id invoice_year sales
1 2018 483982.20
1 2019 3453
1 2020 453533
2 2018 243
2 2020 23423
3 2020 2330202
</code></pre>
<p>However the desired output would be:</p>
<pre><code>id invoice_year sales
1 2018 483982.20
1 2019 3453
1 2020 453533
2 2018 243
2 2019 nan
2 2020 23423
3 2018 nan
3 2019 nan
3 2020 2330202
</code></pre>
<p>Ideas?</p>
|
<p>Let's suppose the original values are defined in a dataframe named <code>df</code>; then you can try the following:</p>
<pre><code>output = (df.groupby(['id', 'invoice_date'])['val'].sum()
.unstack(fill_value=0)
.stack()
.reset_index(name='val'))
</code></pre>
<p>Otherwise, first create the column <code>invoice_year</code>:</p>
<pre><code>df['invoice_year'] = df['invoice_date'].dt.year
</code></pre>
<p>and rerun the same code, which outputs:</p>
<pre><code> id invoice_year val
0 1 2018 1
1 1 2019 1
2 1 2020 0
3 2 2018 1
4 2 2019 0
5 2 2020 1
6 3 2018 0
7 3 2019 1
8 3 2020 1
</code></pre>
<p>Using the following data as example:</p>
<pre><code>df = pd.DataFrame({'id':[1]*2+[2]*2+[3]*2,'invoice_date':pd.to_datetime(['2018-12-01','2019-12-01','2020-12-01']*2,infer_datetime_format=True),'val':[1]*6})
</code></pre>
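If you want the missing (id, year) combinations to appear as NaN, as in the desired output, rather than 0, reindexing against the full id-by-year grid is a version-safe way to get there:

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 2, 2, 3, 3],
                   'invoice_year': [2018, 2019, 2018, 2020, 2019, 2020],
                   'sales': [10.0, 20.0, 30.0, 40.0, 50.0, 60.0]})

# Every (id, year) pair, including the ones with no invoices.
full_index = pd.MultiIndex.from_product(
    [sorted(df['id'].unique()), sorted(df['invoice_year'].unique())],
    names=['id', 'invoice_year'])

out = (df.groupby(['id', 'invoice_year'])['sales'].sum()
         .reindex(full_index)       # missing pairs become NaN
         .reset_index())
```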
|
python|pandas|group-by
| 1
|
376,608
| 65,470,212
|
ValueError: Expected target size (128, 44), got torch.Size([128, 100]), LSTM Pytorch
|
<p>I want to build a model that predicts the next character based on the previous characters.
I have split the text into sequences of integers of length 100 (using a Dataset and DataLoader).</p>
<p>Dimensions of my input and target variables are:</p>
<pre><code>inputs dimension: (batch_size,sequence length). In my case (128,100)
targets dimension: (batch_size,sequence length). In my case (128,100)
</code></pre>
<p>After forward pass I get dimension of my predictions: (batch_size, sequence_length, vocabulary_size) which is in my case (128,100,44)</p>
<p>but when I calculate my loss using <code>nn.CrossEntropyLoss()</code> function:</p>
<pre><code>batch_size = 128
sequence_length = 100
number_of_classes = 44
# creates random tensor of your output shape
output = torch.rand(batch_size,sequence_length, number_of_classes)
# creates tensor with random targets
target = torch.randint(number_of_classes, (batch_size,sequence_length)).long()
# define loss function and calculate loss
criterion = nn.CrossEntropyLoss()
loss = criterion(output, target)
print(loss)
</code></pre>
<p>I get an error:</p>
<pre><code>ValueError: Expected target size (128, 44), got torch.Size([128, 100])
</code></pre>
<p>Question is: how should I handle calculation of the loss function for many-to-many LSTM prediction? Especially sequence dimension? According to <a href="https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html" rel="nofollow noreferrer">nn.CrossEntropyLoss</a> Dimension must be(N,C,d1,d2...dN), where N is batch_size,C - number of classes. But what is D? Is it related to sequence length?</p>
|
<p>As a general comment, let me just say that you have asked many different questions, which makes it difficult for someone to answer. I suggest asking just one question per StackOverflow post, even if that means making several posts. I will answer just the main question that I think you are asking: "why is my code crashing and how to fix it?" and hopefully that will clear up your other questions.</p>
<p>Per your code, the output of your model has dimensions (128, 100, 44) = (N, D, C). Here N is the minibatch size, C is the number of classes, and D is the dimensionality of your input. The cross entropy loss you are using expects the output to have dimension (N, C, D) and the target to have dimension (N, D). To clear up the documentation that says (N, C, D1, D2, ..., Dk), remember that your input can be an arbitrary tensor of any dimensionality. In your case inputs have length 100, but nothing is to stop someone from making a model with, say, a 100x100 image as input. (In that case the loss would expect output to have dimension (N, C, 100, 100).) But in your case, your input is one dimensional, so you have just a single D=100 for the length of your input.</p>
<p>Now we see the error: outputs should be (N, C, D), but yours is (N, D, C). Your targets have the correct dimensions of (N, D). You have two ways to fix the issue. The first is to change the structure of your network so that its output is (N, C, D); this may or may not be easy or what you want in the context of your model. The second option is to transpose your axes at the time of loss computation using <code>torch.transpose</code>: <a href="https://pytorch.org/docs/stable/generated/torch.transpose.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.transpose.html</a></p>
<pre><code>batch_size = 128
sequence_length = 100
number_of_classes = 44
# creates random tensor of your output shape (N, D, C)
output = torch.rand(batch_size,sequence_length, number_of_classes)
# transposes dimensionality to (N, C, D)
transposed_output = torch.transpose(output, 1, 2)
# creates tensor with random targets
target = torch.randint(number_of_classes, (batch_size,sequence_length)).long()
# define loss function and calculate loss
criterion = nn.CrossEntropyLoss()
loss = criterion(transposed_output, target)
print(loss)
</code></pre>
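If PyTorch isn't at hand, the shape convention can be checked with a small NumPy stand-in for per-position cross entropy (a hand-rolled sketch, not the library call):

```python
import numpy as np

N, C, D = 4, 5, 7  # batch size, classes, sequence length
rng = np.random.default_rng(0)

logits = rng.normal(size=(N, D, C))       # model output, (N, D, C)
target = rng.integers(0, C, size=(N, D))  # class indices, (N, D)

# Mirror torch.transpose(output, 1, 2): classes move to axis 1.
logits_ncd = np.transpose(logits, (0, 2, 1))  # (N, C, D)

# Log-softmax over the class axis, then pick each position's target class.
z = logits_ncd - logits_ncd.max(axis=1, keepdims=True)
log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
picked = np.take_along_axis(log_probs, target[:, None, :], axis=1)
loss = -picked.mean()
```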
|
pytorch|lstm|recurrent-neural-network
| 1
|
376,609
| 65,247,333
|
How to specify a proxy in transformers pipeline
|
<p>I am using sentiment-analysis pipeline as described <a href="https://huggingface.co/transformers/quicktour.html" rel="noreferrer">here</a>.</p>
<pre><code>from transformers import pipeline
classifier = pipeline('sentiment-analysis')
</code></pre>
<p>It's failing with a connection error message</p>
<blockquote>
<p>ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.</p>
</blockquote>
<p>Is there a way to specify a proxy within the pipeline method so that it's able to connect to the internet and download the files needed?</p>
|
<p>It's likely a proxy problem. You can try adding this code snippet to route the requests through your proxy:</p>
<pre><code>import os
os.environ['HTTP_PROXY'] = 'http://xxx:xxx@xxx:xxx'
os.environ['HTTPS_PROXY'] = 'http://xxx:xxx@xxx:xxx'
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
</code></pre>
<p><strong>Note:</strong> Remember to use <strong>http</strong> instead of <strong>https</strong> for the proxy value.</p>
|
python|bert-language-model|huggingface-transformers
| 1
|
376,610
| 65,127,731
|
I'm using a mask to slice a numpy array, but the output is flattened. How do I retain the number of columns?
|
<p>Here is what I have so far:</p>
<pre><code>arr = np.round(np.random.uniform(0,1,size = (10,10)),decimals = 0)
print(arr)
arr2 = np.cumsum(arr,axis=0)
print(arr2)
mask = np.where((arr == 1)&(arr2<=3),1,0)
print(mask)
population = np.round(np.random.uniform(0,5,size=(10,10)),decimals=0)
print(population)
maskedPop = population[mask==1]
print(maskedPop)
</code></pre>
<p>This outputs a flattened array, is there a way I can keep the 10 columns? So the output would be 3x10?</p>
|
<p>It looks like the mask selects the same number of elements in every column, so you can mask (using the boolean array directly) and <code>reshape</code>:</p>
<pre><code>population[(arr == 1)&(arr2<=3)].reshape(3,-1)
array([[3., 2., 5., 0., 4., 2., 0., 4., 5., 1.],
[4., 3., 5., 3., 4., 1., 1., 4., 5., 4.],
[3., 3., 4., 3., 4., 2., 4., 4., 1., 5.]])
</code></pre>
<p>Note that boolean indexing always returns a flattened array, since NumPy can't know that the result would form a rectangular 2-D array. If <code>mask.sum(0)</code> gave different counts per column, the result couldn't be reconstructed as an ndarray, so NumPy never attempts that guess for you.</p>
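Since the reshape is only valid when every column contributes the same number of selected elements, it may be worth asserting that before reshaping; a sketch with a mask built to satisfy the condition:

```python
import numpy as np

rng = np.random.default_rng(1)
population = rng.integers(0, 6, size=(10, 10))

# A boolean mask keeping exactly the first 3 rows of every column,
# mimicking the 3-per-column selection in the question.
bool_mask = np.zeros((10, 10), dtype=bool)
bool_mask[:3, :] = True

counts = bool_mask.sum(axis=0)
assert (counts == counts[0]).all()  # safe to reshape column-wise

selected = population[bool_mask].reshape(counts[0], -1)
```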
|
numpy|mask
| 1
|
376,611
| 65,194,949
|
Number of different values / distinct in a column per ID in a sorted dataframe
|
<p>I have a sorted dataframe with an ID, and a value column, which looks like:</p>
<pre><code>ID value
A 10
A 10
A 10
B 15
B 15
C 10
C 10
...
</code></pre>
<p>How can I create a new dataframe that counts the "new" distinct values in terms of the number of different IDs, so that it basically walks over my dataframe and looks like:</p>
<pre><code>Number of ID Number of distinct values
1 1
2 2
3 2
</code></pre>
<p>In the case above we have 3 different IDs, but IDs A and C have the same value.</p>
<p>So for the first row in the new dataframe:
Number of ID = 1, because we have seen 1 different ID so far;
Number of distinct values = 1, because we have seen one distinct value so far.</p>
<p>Second row:
Number of ID = 2, because we have moved to row 4 of the old dataframe (we are only interested in new IDs);
Number of distinct values = 2, because the value changed to 15, which hadn't occurred before.</p>
|
<p>I think you need to process a deduplicated DataFrame, created with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>DataFrame.drop_duplicates</code></a>, using <code>factorize</code>:</p>
<p>Replace duplicated values with <code>NaN</code>, forward-fill them, and then call <code>pd.factorize</code>:</p>
<pre><code>df1 = df.drop_duplicates(['ID','value']).copy()
df1['Number of ID'] = range(1, len(df1)+1)
df1['Number of distinct values'] = pd.factorize(df1['value'].mask(df1['value'].duplicated()).ffill())[0] + 1
print (df1)
ID value Number of ID Number of distinct values
0 A 10 1 1
3 B 15 2 2
5 C 10 3 2
</code></pre>
<p>I change data for better testing:</p>
<pre><code>print (df)
ID value
0 A 10
1 A 10
2 A 10
3 B 15
4 B 15
5 C 10
6 C 15
df1 = df.drop_duplicates(['ID','value']).copy()
df1['Number of ID'] = range(1, len(df1)+1)
df1['Number of distinct values'] = pd.factorize(df1['value'].mask(df1['value'].duplicated()).ffill())[0] + 1
print (df1)
ID value Number of ID Number of distinct values
0 A 10 1 1
3 B 15 2 2
5 C 10 3 2
6 C 15 4 2
</code></pre>
<p>A simpler alternative with <code>cumsum</code> works incorrectly when there are multiple distinct <code>value</code>s per <code>ID</code>:</p>
<pre><code>df = pd.DataFrame({'Number of ID': range(1, len(df1)+1),
'Number of distinct values': np.cumsum(pd.factorize(df1['value'])[0])+1})
print (df)
Number of ID Number of distinct values
0 1 1
1 2 2
2 3 2
3 4 3
</code></pre>
|
python|pandas
| 3
|
376,612
| 65,063,107
|
Plotting Monthly data using groupby in dask dataset
|
<p>I have a large <code>CSV</code> file that is opened with Dask.</p>
<pre><code>import numpy as np
import pandas as pd
import hvplot.pandas
import hvplot.dask
import intake
data = '../file.csv'
ddf = intake.open_csv(data).to_dask()
ddf.head()
Datetime latitude longitude Temp_2m(C)
1 1980-01-02 03:00:00 30.605 50.217 5.31
2 1980-01-02 04:00:00 30.605 50.217 5.36
3 1980-01-02 05:00:00 30.605 50.217 7.04
4 1980-01-02 06:00:00 30.605 50.217 10.24
</code></pre>
<p>I want to plot <code>Temp_2m(C)</code> monthly with hvplot. Plotting the hourly <code>Datetime</code> data works correctly, but when I group by <code>Datetime</code> as follows, it returns an error:</p>
<pre><code># Convert 'Datetime' column to 'datetime64'
ddf["Datetime"] = ddf["Datetime"].astype("M8[us]")
# set index column
ddf = ddf.set_index('Datetime')
g = pd.Grouper(freq='M', key='Datetime')
month_ddf = ddf.groupby(g).mean()
# plot
month_ddf.hvplot('Temp_2m(C)')
</code></pre>
<p>ERROR:
<code>ValueError: all keys need to be the same shape</code>
What is my mistake?</p>
<p>In reply to @frankr6591:</p>
<pre><code>month_ddf.describe()
Dask DataFrame Structure:
latitude longitude Temp_2m(C)
npartitions=1
float64 float64 float64
... ... ...
Dask Name: describe-numeric, 89 tasks
</code></pre>
|
<p>I used <code>to_datetime()</code> and got a correct plot with <code>.plot()</code>. I ran into problems installing hvplot, so I could not test that part.</p>
<pre><code>import numpy as np
import pandas as pd
# FIXME : the following does not work
#import hvplot.pandas
%matplotlib inline
d = dict(datetime = ['1980-01-02 02:00:00',
'1980-01-02 03:00:00',
'1980-01-02 04:00:00',
'1980-01-02 05:00:00',
'1980-07-02 06:00:00'],
latitude = [30.605 for n in range(5)],
longitude = [50.217 for n in range(5)],
Temp_2m = [np.random.random()*10 for n in range(5)])
df = pd.DataFrame(d)
df['datetime'] = pd.to_datetime(df['datetime'])
df['mon'] = df['datetime'].dt.to_period('M')
print(df)
ddf = df.groupby('mon').mean()
print(ddf)
# This works on my py3.7
ddf.plot('Temp_2m')
# This fails because hvplot could not be imported.
ddf.hvplot('Temp_2m')
datetime latitude longitude Temp_2m mon
0 1980-01-02 02:00:00 30.605 50.217 2.512897 1980-01
1 1980-01-02 03:00:00 30.605 50.217 0.247358 1980-01
2 1980-01-02 04:00:00 30.605 50.217 7.678030 1980-01
3 1980-01-02 05:00:00 30.605 50.217 0.637331 1980-01
4 1980-07-02 06:00:00 30.605 50.217 2.156502 1980-07
latitude longitude Temp_2m
mon
1980-01 30.605 50.217 5.080373
1980-07 30.605 50.217 1.324140
</code></pre>
<p><a href="https://i.stack.imgur.com/nLSVL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nLSVL.png" alt="enter image description here" /></a></p>
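For reference, grouping on a string month key sidesteps `pd.Grouper` entirely, and the same pattern should also run on a dask dataframe (dask supports `.dt.strftime` and groupby-mean, with a `.compute()` at the end). A pandas-only sketch:

```python
import pandas as pd

df = pd.DataFrame({
    'Datetime': pd.to_datetime(['1980-01-02 03:00:00',
                                '1980-01-02 04:00:00',
                                '1980-07-02 06:00:00']),
    'Temp_2m': [5.31, 5.36, 10.24],
})

# '%Y-%m' yields one label per month, e.g. '1980-01'.
monthly = df.groupby(df['Datetime'].dt.strftime('%Y-%m'))['Temp_2m'].mean()
```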
|
python|pandas|pandas-groupby|dask|hvplot
| 1
|
376,613
| 65,119,742
|
Cannot subset the first column in a DataFrame
|
<p>I'm learning how to use Pandas and I've downloaded some data from Kaggle about car prices etc.</p>
<p>I'm trying to create a new dataframe by subsetting all the cars out that have the model "Golf".</p>
<pre><code>golfs = df[df.model == "Golf"]
</code></pre>
<p>It does return a new dataframe, but when I call it, it's empty apart from the column names.</p>
<p>trying this:</p>
<pre><code>others = df[df.model != "Golf"]
</code></pre>
<p>creates a new dataframe, but it has everything in it. The datatype for the column is object. So I tried to create subsets by transmission, which is also an object.</p>
<pre><code>man_trans = df[df.transmission == "Manual"]
</code></pre>
<p>creates a new data frame with solely Manual transmissions... I have no idea where it's going wrong. I've tried subsetting all the other columns, but it's just the first one that won't behave. I've even tried copying and pasting the cell value directly into the code.</p>
<p>I've even tried adding in:</p>
<pre><code>df.reset_index()
</code></pre>
<p>to add in a new index, as I thought that might be the problem.</p>
|
<p>The code looks correct to me. If the <code>golfs</code> dataframe is empty, it is possible there are no rows where <code>df['model'] == 'Golf'</code>. Maybe the value is <code>"golf"</code> (lowercase) instead?</p>
<pre><code># If this doesn't work...
# golfs = df[df.model == "Golf"]
# ...maybe try this (or something like it):
golfs = df[df.model == "golf"]
</code></pre>
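If the filter still comes back empty, stray whitespace or casing in the CSV is a common culprit; normalizing the column before comparing is a cheap way to rule that out (made-up rows for illustration):

```python
import pandas as pd

df = pd.DataFrame({'model': [' Golf', 'golf', 'Polo'],
                   'price': [10000, 9500, 8000]})

# Strip whitespace and lowercase first, so ' Golf' and 'golf' both match.
golfs = df[df['model'].str.strip().str.lower() == 'golf']
```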
|
python|python-3.x|pandas|dataframe
| 0
|
376,614
| 65,349,992
|
Converting a Segemented Ground Truth to a Contour Image efficiently with Numpy
|
<p>Suppose I have a segmented image as a NumPy array, where each entry in the image is a number from 1, ..., C, C+1, where C is the number of segmentation classes and class C+1 is some background class. I want to find an efficient way to convert this to a contour image (a binary image where a contour pixel has value 1 and the rest have value 0), so that any pixel that has a neighbor of a different class in its 8-neighbourhood (or 4-neighbourhood) is a contour pixel.</p>
<p>The inefficient way would be something like:</p>
<pre><code>def isValidLocation(i, j, image_height, image_width):
    if i<0:
        return False
    if i>image_height-1:
        return False
    if j<0:
        return False
    if j>image_width-1:
        return False
    return True

def get8Neighbourhood(i, j, image_height, image_width):
    nbd = []
    for height_offset in [-1, 0, 1]:
        for width_offset in [-1, 0, 1]:
            if isValidLocation(i+height_offset, j+width_offset, image_height, image_width):
                nbd.append((i+height_offset, j+width_offset))
    return nbd

def getContourImage(seg_image):
    seg_image_height = seg_image.shape[0]
    seg_image_width = seg_image.shape[1]
    contour_image = np.zeros([seg_image_height, seg_image_width], dtype=np.uint8)
    for i in range(seg_image_height):
        for j in range(seg_image_width):
            nbd = get8Neighbourhood(i, j, seg_image_height, seg_image_width)
            for (m,n) in nbd:
                if seg_image[m][n] != seg_image[i][j]:
                    contour_image[i][j] = 1
                    break
    return contour_image
</code></pre>
<p>I'm looking for a more efficient "vectorized" way of achieving this, as I need to be able to compute this at run time on batches of 8 images at a time in a deep learning context. Any insights appreciated. Visual example below. The first image is the original image overlaid over the ground truth segmentation mask (not the best segmentation admittedly...); the second is the output of my code, which looks correct but is far too slow: about 10 seconds per image on an Intel 9900K CPU.</p>
<p><a href="https://i.stack.imgur.com/WYDzR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WYDzR.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/tVLqi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tVLqi.png" alt="enter image description here" /></a></p>
<p>Image Credit from SUN RGBD dataset.</p>
|
<p>This might work but it might have some limitations which I cannot be sure of without testing on the actual data, so I'll be relying on your feedback.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
# some sample data with few rectangular segments spread out
seg = np.ones((100, 100), dtype=np.int8)
seg[3:10, 3:10] = 20
seg[24:50, 40:70] = 30
seg[55:80, 62:79] = 40
seg[40:70, 10:20] = 50
plt.imshow(seg)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/d2ENH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d2ENH.png" alt="enter image description here" /></a></p>
<p>Now to find the contours, we will convolve the image with a kernel which should give <code>0</code> values when convolved within the same segment of the image and <code><0</code> or <code>>0</code> values when convolved over image regions with multiple segments.</p>
<pre class="lang-py prettyprint-override"><code># kernel for convolving
k = np.array([[1, -1, -1],
[1, 0, -1],
[1, 1, -1]])
convolved = ndimage.convolve(seg, k)
# contour pixels
non_zeros = np.argwhere(convolved != 0)
plt.scatter(non_zeros[:, 1], non_zeros[:, 0], c='r', marker='.')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/2gGkS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2gGkS.png" alt="enter image description here" /></a></p>
<p>As you can see in this sample data, the kernel has a small limitation: it misses <em>two</em> contour pixels, caused by the symmetric nature of the data (which I think would be a rare case in actual segmentation outputs).</p>
<p>For better understanding, this is the scenario (occurring at the top-left and bottom-right corners of the rectangle) where the kernel convolution fails to identify the contour, i.e. misses one pixel:</p>
<pre><code>[ 1, 1, 1]
[ 1, 1, 1]
[ 1, 20, 20]
</code></pre>
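An alternative that sidesteps the symmetry limitation entirely is to compare each pixel against its shifted neighbours with plain array slicing. This is a sketch for the 4-neighbourhood case; the 8-neighbourhood version just adds the two diagonal shifts:

```python
import numpy as np

def contour_4n(seg):
    # A pixel is a contour pixel if it differs from the pixel above/below
    # or left/right of it; each comparison marks both sides of the boundary.
    contour = np.zeros(seg.shape, dtype=bool)
    dv = seg[:-1, :] != seg[1:, :]   # vertical neighbour mismatches
    dh = seg[:, :-1] != seg[:, 1:]   # horizontal neighbour mismatches
    contour[:-1, :] |= dv
    contour[1:, :] |= dv
    contour[:, :-1] |= dh
    contour[:, 1:] |= dh
    return contour.astype(np.uint8)

seg = np.ones((100, 100), dtype=np.int8)
seg[3:10, 3:10] = 20
print(contour_4n(seg).sum())
```

Each slice comparison marks both sides of a boundary, so no kernel-symmetry issues arise, and the whole computation is a handful of vectorized operations per image.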
|
numpy|optimization|image-segmentation
| 1
|
376,615
| 65,309,417
|
How to read '08E55' as string from Excel Using Pandas
|
<p>In Excel I have a value in a column: '<strong>08E55</strong>'. The value is a product ID and should be read as-is.
When reading the Excel file through pandas it is being converted to '<strong>8e+55</strong>'.
How can I avoid this?</p>
<p>A few records include values like:</p>
<pre><code>U8716
U8715
8725
U8716
U8721
08E55
</code></pre>
<p>I have tried the following:</p>
<pre><code>xls = pd.ExcelFile("New.xlsx")
sheet = xls.parse("Sheet1", dtype={'col': 'str'}, convert_float=False)
</code></pre>
<p>Steps to reproduce the problem -</p>
<p>Create a new Excel File and add a record 08E55 without changing any Data Type in Excel and Try to read the value in Pandas.</p>
<p>Expected Output - 08E55</p>
<p>Current Output - 8e+55 or 8.000000e+55</p>
|
<p>Pretty simple, just use the converters parameter while reading the excel file.</p>
<pre><code>pd.read_excel('New.xlsx', converters={'column_name':str})
</code></pre>
<p><strong>Edit 1:</strong></p>
<pre><code>pd.read_excel('New.xlsx', converters={'column_name':str}, convert_float=False)
</code></pre>
<p><strong>Edit 2:</strong></p>
<pre><code>pd.read_excel('new.xlsx', convert_float=False, dtype='str')
</code></pre>
<p><strong>Edit 3:</strong></p>
<pre><code>df = pd.read_excel('new.xlsx', convert_float=False, dtype='str')
for val in df.itertuples():
if '+' in df.at[val[0], 'prod_no'] or 'e' in df.at[val[0], 'prod_no'] or 'E' in df.at[val[0], 'prod_no']:
df.at[val[0], 'prod_no'] = '0'+df.at[val[0], 'prod_no'].replace('e+', 'E')
else:
continue
print(df)
</code></pre>
<p>Output:</p>
<pre><code> prod_no
0 08E55
</code></pre>
|
python|python-3.x|excel|pandas
| 1
|
376,616
| 49,972,166
|
Why does this numpy array with ten times the values take exponentially larger amounts of time to randomly generate?
|
<p>I initialize three numpy arrays, because I need to feed some random data into an algorithm.</p>
<p>My second array has about a hundred times the values, and takes about a hundred times the time.</p>
<p>The third, for some reason, takes almost 1800 times the amount of time as the second does.</p>
<pre><code>nparray = np.random.randint(0, 256, (1024, 800, 3)) #0.03125357627868652
nparray = np.random.randint(0, 256, (100, 1024, 800, 3)) #2.9687747955322266
nparray = np.random.randint(0, 256, (10, 100, 1024, 800, 3)) #5339.585757017136
</code></pre>
|
<p>Assuming numpy uses <code>dtype('int64')</code> for these arrays, i.e. 8 bytes per element:</p>
<ul>
<li>The 1st array is 2457600 elements (~20 Megabytes)</li>
<li>The 2nd array is 245760000 elements (~2 Gigabytes)</li>
<li>The 3rd array is 2457600000 elements (~20 Gigabytes)</li>
</ul>
<p>If you have a reasonably average machine, the first and second cases can likely work entirely in RAM. The third array is huge and will almost surely require swapping data to disk, which is significantly slower.</p>
<p>You can check the size of an object in Python with <a href="https://docs.python.org/3/library/sys.html#sys.getsizeof" rel="nofollow noreferrer"><code>sys.getsizeof(obj)</code></a>, and the available memory with <code>free</code> (Linux) or <code>vm_stat</code> (macOS). </p>
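You can confirm the sizes directly from the arrays' `nbytes` attribute, and, since the values all fit in 0–255, shrink them eightfold by requesting a smaller dtype (on platforms where the default integer is 64-bit):

```python
import numpy as np

a = np.random.randint(0, 256, (1024, 800, 3))                  # default int dtype
b = np.random.randint(0, 256, (1024, 800, 3), dtype=np.uint8)  # 1 byte per element

print(a.dtype, a.nbytes / 1e6, "MB")
print(b.dtype, b.nbytes / 1e6, "MB")
```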
|
python|numpy|runtime
| 2
|
376,617
| 50,006,950
|
Split a dataframe into smaller dataframes based on a column python
|
<p>I have this dataset : </p>
<p><a href="https://i.stack.imgur.com/OAmuC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OAmuC.png" alt="raw data"></a></p>
<p>And I want it to look like this:</p>
<p><a href="https://i.stack.imgur.com/OYcZU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OYcZU.png" alt="weekly output"></a></p>
<p>I know that i need to do this: </p>
<pre><code>df= df.groupby('city_id').resample('W').agg({'Quantity':'sum'}, loffset = pd.offsets.timedelta(days=-8))
</code></pre>
<p>to get a weekly aggregation, but I need it grouped by city id THEN aggregated by week.</p>
<p>My thought is that I would need to create multiple dataframes, one per city id, aggregate each by date to make the weekly output, and then concat them back together, but I feel there's a better way to do this. </p>
|
<p>Try this :</p>
<pre><code>df.groupby(['index', 'city_id'], as_index=False).sum()
</code></pre>
<p>This will group the first 2 columns and sum up the remaining ones.</p>
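If the goal is the weekly sum per city from the question, `pd.Grouper` can combine the groupby and the resample in one pass, with no need to split into per-city frames. A sketch with made-up column names, since the exact schema isn't shown:

```python
import pandas as pd

df = pd.DataFrame({
    "city_id": [1, 1, 2],
    "date": pd.to_datetime(["2020-01-01", "2020-01-02", "2020-01-01"]),
    "Quantity": [10, 20, 5],
})

# Group by city first, then bucket each city's rows into calendar weeks
weekly = df.groupby(["city_id", pd.Grouper(key="date", freq="W")])["Quantity"].sum()
print(weekly)
```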
|
python|pandas
| 0
|
376,618
| 49,863,401
|
How to use apply function on multiple columns at once
|
<p>Is it possible to call the apply function on multiple columns in pandas and if so how does one do this.. for example,</p>
<pre><code> df['Duration'] = df['Hours', 'Mins', 'Secs'].apply(lambda x,y,z: timedelta(hours=x, minutes=y, seconds=z))
</code></pre>
<p><a href="https://i.stack.imgur.com/FdKpV.png" rel="nofollow noreferrer">This is what the expected output should look like once everything comes together</a></p>
<p>Thank you.</p>
|
<p><strong>You should use:</strong> </p>
<pre><code>df['Duration'] = pd.to_timedelta(df.Hours*3600 + df.Mins*60 + df.Secs, unit='s')
</code></pre>
<p>When you use apply on a <code>DataFrame</code> with <code>axis=1</code>, it's a row calculation, so typically this syntax makes sense:</p>
<pre><code>df['Duration'] = df.apply(lambda row: pd.Timedelta(hours=row.Hours, minutes=row.Mins,
seconds=row.Secs), axis=1)
</code></pre>
<p>Some timings</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'Hours': np.tile([1,2,3,4],50),
'Mins': np.tile([10,20,30,40],50),
'Secs': np.tile([11,21,31,41],50)})
%timeit pd.to_timedelta(df.Hours*3600 + df.Mins*60 + df.Secs, unit='s')
#432 µs ± 5.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit df.apply(lambda row: pd.Timedelta(hours=row.Hours, minutes=row.Mins, seconds=row.Secs), axis=1)
#12 ms ± 67.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>As always, apply should be a last resort.</p>
|
python|pandas|apply|duration|timedelta
| 4
|
376,619
| 50,090,076
|
slicing a numpy array with characters
|
<p>I have a text file made as:</p>
<pre><code>0.01 1 0.1 1 10 100 a
0.02 3 0.2 2 20 200 b
0.03 2 0.3 3 30 300 c
0.04 1 0.4 4 40 400 d
</code></pre>
<p>I read it as a list <code>A</code> and then converted to a numpy array, that is:</p>
<pre><code>>>> A
array([['0.01', '1', '0.1', '1', '10', '100', 'a'],
['0.02', '3', '0.2', '2', '20', '200', 'b'],
['0.03', '2', '0.3', '3', '30', '300', 'c'],
['0.04', '1', '0.4', '4', '40', '400', 'd']],
dtype='|S4')
</code></pre>
<p>I just want to extract a sub-array <code>B</code>, made of <code>A</code> wherever its 4th entry is lower than 30, that should look something like:</p>
<pre><code>B = array([['0.01', '1', '0.1', '1', '10', '100', 'a'],
['0.02', '3', '0.2', '2', '20', '200', 'b']])
</code></pre>
<p>When dealing with arrays, I usually do simply <code>B = A[A[:,4]<30]</code>, but in this case (maybe due to the presence of characters/strings I've never worked with) it doesn't work, giving me this:</p>
<pre><code>>>> A[A[:,4]<30]
array(['0.01', '1', '0.1', '1', '10', '100', 'a'],
dtype='|S4')
</code></pre>
<p>and I can't figure out the reason. I'm not dealing with a code of mine and I don't think I can switch all this to structures or dictionaries: any suggestion for doing this with numpy arrays? Thank you very much in advance!</p>
|
<p>You have to compare <code>int</code> to <code>int</code></p>
<pre><code>A[A[:,4].astype(int)<30]
</code></pre>
<p>or <code>str</code> to <code>str</code></p>
<pre><code>A[A[:,4]<'30']
</code></pre>
<p>However, notice that the latter would work in your <em>specific example</em>, but won't work generally because you are comparing <code>str</code> ordering (for example, <code>'110' < '30'</code> returns <code>True</code>, but <code>110 < 30</code> returns <code>False</code>)</p>
<hr>
<p><code>numpy</code> will infer your elements' types from your data. In this case, it attributed the <code>type = '|S4'</code> to your elements, meaning they are strings of length 4. This is probably a consequence of the underlying <code>C</code> code (which enhances <code>numpy</code>'s performance) that requires elements to have fixed types. </p>
<p>To illustrate this difference, check the following code:</p>
<pre><code>>>> np.array([['0.01', '1', '0.1', '1', '10', '100', 'a']])
array(['0.01', '1', '0.1', '1', '10', '100', 'a'], dtype='|S4')
</code></pre>
<p>The inferred type is strings of length 4, which is the max length of your elements (element <code>0.01</code>). Now, if you explicitly define it to hold general objects, it will do what you want:</p>
<pre><code>>>> np.array([[0.01, 1, 0.1, 1, 10, 100, 'a']], dtype=object)
array([0.01, 1, 0.1, 1, 10, 100, 'a'], dtype=object)
</code></pre>
<p>and your code <code>A[A[:,4]<30]</code> would work properly.</p>
<p>For more information, <strong><em><a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.dtypes.html" rel="nofollow noreferrer">this</a></em></strong> is a very complete guide</p>
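To make the difference concrete, here is the comparison run the numeric way on the array from the question, plus the string-ordering pitfall:

```python
import numpy as np

A = np.array([['0.01', '1', '0.1', '1', '10', '100', 'a'],
              ['0.02', '3', '0.2', '2', '20', '200', 'b'],
              ['0.03', '2', '0.3', '3', '30', '300', 'c'],
              ['0.04', '1', '0.4', '4', '40', '400', 'd']])

B = A[A[:, 4].astype(int) < 30]  # numeric comparison keeps the first two rows
print(B)

# The string-ordering pitfall:
print('110' < '30')  # True lexicographically, although 110 > 30 numerically
```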
|
python|arrays|string|numpy|sub-array
| 3
|
376,620
| 50,098,336
|
How to Solve: 'str' object has no attribute 'data_format' in keras
|
<p>I am trying to make a classifier which can classify cats and dogs using Keras.
At this point I am just trying to create the tensor data with <strong>ImageDataGenerator.flow_from_directory()</strong> from images that are sorted into the directories whose paths are given in train_path, test_path etc.</p>
<p><strong>Here is my code:</strong> </p>
<pre><code>import numpy as np
import keras
from keras import backend as K
from keras.models import Sequential
from keras.layers import Activation
train_path = 'cats-and-dogs/train'
test_path = 'cats-and-dogs/test'
valid_path = 'cats-and-dogs/valid'
train_dir = 'cats-and-dogs/'
test_dir = 'cats-and-dogs/'
valid_dir = 'cats-and-dogs/'
train_batches = ImageDataGenerator.flow_from_directory(train_path, directory=train_dir, target_size=(200,200), classes=['dog','cat'], batch_size=10)
test_batches = ImageDataGenerator.flow_from_directory(test_path, directory=test_dir, target_size=(200,200), classes=['dog','cat'], batch_size=5)
valid_batches = ImageDataGenerator.flow_from_directory(valid_path, directory=valid_dir, target_size=(200,200), classes=['dog','cat'], batch_size=10)
</code></pre>
<p><strong>But I am getting the following error using python 3.5:</strong></p>
<blockquote>
<p>/usr/local/lib/python3.5/site-packages/h5py/__init__.py:36:
FutureWarning: Conversion of the second argument of issubdtype from
<code>float</code> to <code>np.floating</code> is deprecated. In future, it will be treated
as <code>np.float64 == np.dtype(float).type</code>. from ._conv import
register_converters as _register_converters Using TensorFlow backend.
Traceback (most recent call last): File "CNNFromScratch.py", line
29, in
train_batches = ImageDataGenerator.flow_from_directory(train_path, directory=train_dir, target_size=(200,200), classes=['dog','cat'],
batch_size=10) File
"/usr/local/lib/python3.5/site-packages/keras/preprocessing/image.py",
line 565, in flow_from_directory
data_format=self.data_format,</p>
<p>AttributeError: 'str' object has no attribute 'data_format'</p>
</blockquote>
<p><strong>What can I do to solve this problem?</strong> </p>
|
<p>Method <code>flow_from_directory</code> of <code>ImageDataGenerator</code> is not static. Therefore you first have to initialize an instance of class <code>ImageDataGenerator</code> and then call this method.</p>
<p>This should work:</p>
<pre><code>import numpy as np
import keras
from keras import backend as K
from keras.models import Sequential
from keras.layers import Activation
from keras.preprocessing.image import ImageDataGenerator
train_path = 'cats-and-dogs/train'
test_path = 'cats-and-dogs/test'
valid_path = 'cats-and-dogs/valid'
my_generator = ImageDataGenerator()
train_batches = my_generator.flow_from_directory(directory=train_path, target_size=(200,200), classes=['dog','cat'], batch_size=10)
test_batches = my_generator.flow_from_directory(directory=test_path, target_size=(200,200), classes=['dog','cat'], batch_size=5)
valid_batches = my_generator.flow_from_directory(directory=valid_path, target_size=(200,200), classes=['dog','cat'], batch_size=10)
</code></pre>
<p>Check <a href="https://keras.io/preprocessing/image/" rel="nofollow noreferrer">documentation</a> for adding more parameters.</p>
|
python|python-3.x|tensorflow|deep-learning|keras
| 2
|
376,621
| 49,907,240
|
Serving a TensorFlow Custom Model
|
<p>I am new to machine learning. Basically, I created my own dataset of images, trained on them, and recognized images in a Jupyter notebook. After this I tried to deploy the model by following <a href="https://www.tensorflow.org/serving/setup" rel="nofollow noreferrer">this</a> tutorial.</p>
<p>I execute</p>
<pre><code>bazel build -c opt //tensorflow_serving/example:mnist_saved_model
bazel-bin/tensorflow_serving/example/mnist_saved_model /tmp/mnist_model
</code></pre>
<p>it runs successfully.</p>
<p>How do I export my own model and deploy it? My model name is "GoogleTensorflow".</p>
<p>I created this model using</p>
<pre><code>python3 export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path training/ssd_mobilenet_v1_pets.config \
    --trained_checkpoint_prefix training/model.ckpt-26456 \
    --output_directory GoogleTensorflow
</code></pre>
|
<p>Move your custom training folder to the tmp folder. The model directory must contain a numeric version subdirectory (e.g. <code>1</code>) with the exported model inside it.</p>
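For context, TensorFlow Serving loads models from `<model_base_path>/<version>/`, where the version is a numeric directory name. A sketch of arranging that layout (the paths are illustrative, and a throwaway file stands in for the exported `saved_model.pb`):

```python
import os
import shutil
import tempfile

base = tempfile.mkdtemp()

# Stand-in for the export_inference_graph.py output directory
src = os.path.join(base, "GoogleTensorflow")
os.makedirs(src)
open(os.path.join(src, "saved_model.pb"), "w").close()

# Serving layout: <model_base_path>/<version>/...
serve_dir = os.path.join(base, "serving", "GoogleTensorflow", "1")
shutil.copytree(src, serve_dir)
print(sorted(os.listdir(serve_dir)))
```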
|
tensorflow|tensorflow-serving|tensorflow-datasets|tensorflow-estimator|tensorflow-slim
| 1
|
376,622
| 50,042,016
|
Correct way to store samples in numpy arrays
|
<p>Suppose you have a matrix (two-dimensional numpy array) storing multivariate sample data. Is it correct (wrt speed and ease of use) to store the data using one <strong>row</strong> for each sample or one <strong>column</strong> for each? E.g</p>
<pre><code>array([[x1, y1, ...], [x2, y2, ...], ..., [xN, yN, ...]])
</code></pre>
<p>or</p>
<pre><code>array([[x1, x2, ..., xN], [y1, y2, ..., yN], ...])
</code></pre>
<p>In MATLAB and Octave, it is definitely easier to treat each sample as a <strong>column</strong> vector, but numpy gives you no indication either way.</p>
<p>For example. Here is how you can normalize a set of samples if each one is stored as a row vector:</p>
<pre><code>X - mean(X, axis = 0)
</code></pre>
<p>But if you store them as column vectors you have to write</p>
<pre><code>(X.T - mean(X, axis = 1)).T
</code></pre>
<p>Which absolutely is not as convenient.</p>
|
<p>The performance depends on both the access pattern and the memory layout of the array. The latter may be set with the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.array.html" rel="nofollow noreferrer"><code>order</code> parameter of <code>np.array()</code></a>, which:</p>
<blockquote>
<p>Specify the memory layout of the array. If object is not an array, the newly created array will be in C order (row major) unless ‘F’ is specified, in which case it will be in Fortran order (column major).</p>
</blockquote>
<p>(If object is an array, there are more options as the layout may be preserved.)</p>
<p>Also the right approach may depend on the libraries you depend on. For example for <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.fit" rel="nofollow noreferrer">linear regression in sklearn</a> you are expected to have one row for each sample.</p>
<p>[EDIT]</p>
<p>Storing samples in rows is also compatible with <code>pandas.DataFrame</code> objects:</p>
<pre><code>>>> CIRCLES = np.array([[1, 3.14],
... [2, 12.56],
... [3, 28.26]])
>>> DF = DataFrame(CIRCLES, columns=['r', 'S'])
>>> DF.mean()
r 2.000000
S 14.653333
dtype: float64
</code></pre>
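To illustrate the `order` parameter mentioned above: row-major makes each sample contiguous in memory, column-major makes each feature contiguous.

```python
import numpy as np

# Same logical data, two memory layouts
a = np.zeros((1000, 3), order='C')  # row-major: each sample (row) is contiguous
b = np.zeros((1000, 3), order='F')  # column-major: each feature (column) is contiguous

print(a.flags['C_CONTIGUOUS'], b.flags['F_CONTIGUOUS'])
```

Which layout is faster depends on whether your hot loops walk along samples or along features.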
|
python|numpy
| 1
|
376,623
| 49,928,463
|
Python Pandas update a dataframe value from another dataframe
|
<p>I have two dataframes in python. I want to update rows in first dataframe using matching values from another dataframe. Second dataframe serves as an override. </p>
<p>Here is an example with same data and code: </p>
<p>DataFrame 1 : </p>
<p><a href="https://i.stack.imgur.com/QPJs2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/QPJs2.png" alt="enter image description here"></a></p>
<p>DataFrame 2: </p>
<p><a href="https://i.stack.imgur.com/Xbpmh.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Xbpmh.png" alt="enter image description here"></a></p>
<p>I want to update update dataframe 1 based on matching code and name. In this example Dataframe 1 should be updated as below: </p>
<p><a href="https://i.stack.imgur.com/JA3Fl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JA3Fl.png" alt="enter image description here"></a></p>
<p>Note : Row with Code =2 and Name= Company2 is updated with value 1000 (coming from Dataframe 2) </p>
<pre><code>import pandas as pd
data1 = {
'Code': [1, 2, 3],
'Name': ['Company1', 'Company2', 'Company3'],
'Value': [200, 300, 400],
}
df1 = pd.DataFrame(data1, columns= ['Code','Name','Value'])
data2 = {
'Code': [2],
'Name': ['Company2'],
'Value': [1000],
}
df2 = pd.DataFrame(data2, columns= ['Code','Name','Value'])
</code></pre>
<p>Any pointers or hints? </p>
|
<p>Using DataFrame.update, which aligns on indices (<a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.update.html" rel="noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.update.html</a>):</p>
<pre><code>>>> df1.set_index('Code', inplace=True)
>>> df1.update(df2.set_index('Code'))
>>> df1.reset_index() # to recover the initial structure
Code Name Value
0 1 Company1 200.0
1 2 Company2 1000.0
2 3 Company3 400.0
</code></pre>
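Putting it together with the sample data from the question, end to end:

```python
import pandas as pd

df1 = pd.DataFrame({'Code': [1, 2, 3],
                    'Name': ['Company1', 'Company2', 'Company3'],
                    'Value': [200, 300, 400]})
df2 = pd.DataFrame({'Code': [2],
                    'Name': ['Company2'],
                    'Value': [1000]})

df1 = df1.set_index('Code')
df1.update(df2.set_index('Code'))  # aligns on 'Code', overwrites matching cells
df1 = df1.reset_index()
print(df1)
```

Note that `update` modifies `df1` in place and may upcast updated columns to float, which is why `Value` prints as `1000.0`.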
|
python|pandas|dataframe
| 66
|
376,624
| 49,956,302
|
How to merge / concat two pandas dataframes with different length?
|
<p>I would like to concat/merge two pandas dataframes but I don't get the right result. I have following dataframes:</p>
<pre><code>df1
Username | User_trim
-------------------------------
0 Maria M | Maria
1 FakeName | N/A
2 Achim B | Achim
3 FlashMaster11 | N/A
4 Fakename2 | N/A
5 Gustav W | Gustav
df2
0 |1 | 2
---------------------------------
0 Maria M | Maria | female
2 Achim B | Achim | male
5 Gustav W | Gustav | male
</code></pre>
<p>I would like to have following result dataframe:</p>
<pre><code> Username | User_trim | Gender
---------------------------------
0 Maria M | Maria | female
1 FakeName | N/A | N/A
2 Achim B | Achim | male
3 FlashMaster11 | N/A | N/A
4 Fakename2 | N/A | N/A
5 Gustav W | Gustav | male
</code></pre>
<p>I tried the following code:</p>
<pre><code>result = pd.concat([df1,df2], axis=1,ignore_index=True)
</code></pre>
<p>But I get the wrong result (though the right table length). So I tried this:</p>
<pre><code>df1.merge(df2,how='outer', left_on='Username', right_on=0)
</code></pre>
<p>With this code it seems like I get the right result, but the table has more rows than df1.</p>
<p>I don't have a problem with the merge producing all the extra columns; I can drop them. The problem is merging frames of different lengths and getting the values into the right rows.</p>
<p>Can anyone give me advice on how to get the result table? </p>
|
<p>I think you need a <code>left join</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a>:</p>
<pre><code>df = df1.merge(df2,how='left', left_on='Username', right_on=0)
print (df)
Username User_trim 0 1 2
0 Maria M Maria Maria M Maria female
1 FakeName NaN NaN NaN NaN
2 Achim B Achim Achim B Achim male
3 FlashMaster11 NaN NaN NaN NaN
4 Fakename2 NaN NaN NaN NaN
5 Gustav W Gustav Gustav W Gustav male
</code></pre>
<p>Solution if need append new column(s) by <code>merge</code> without remove unnecessary columns is first <code>rename</code> at least one column use for join (here <code>Username</code> in both <code>DataFrame</code>s) and then select all necessary columns (always join column + all another new columns):</p>
<pre><code>df22 = df2.rename(columns={0:'Username', 2:'Gender'})[['Username', 'Gender']]
print (df22)
Username Gender
0 Maria M female
1 Achim B male
2 Gustav W male
df = df1.merge(df22,how='left', on='Username')
print (df)
Username User_trim Gender
0 Maria M Maria female
1 FakeName NaN NaN
2 Achim B Achim male
3 FlashMaster11 NaN NaN
4 Fakename2 NaN NaN
5 Gustav W Gustav male
</code></pre>
<hr>
<p>If need add only one new column use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a> by <code>Series</code> created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a>:</p>
<pre><code>df1['Gender'] = df1['Username'].map(df2.set_index(0)[2])
print (df1)
Username User_trim Gender
0 Maria M Maria female
1 FakeName NaN NaN
2 Achim B Achim male
3 FlashMaster11 NaN NaN
4 Fakename2 NaN NaN
5 Gustav W Gustav male
</code></pre>
|
python|python-2.7|pandas|dataframe|merge
| 1
|
376,625
| 49,950,092
|
Sklearn multiclass classification class order
|
<p>I have three classes [-1,0,1] and I am running multi class logistic regression on them. When I run logreg.predict_proba(x) it returns an array [.25, .5, .25]. Does this mean that position 0 is class -1, position 1 is class 0, and position 2 is class 1? In other words, how does the logistic regression map the classes to the output columns? Does it do it by numerical order? Or based on the first class it sees?</p>
|
<p>You can verify the order of the classes using the classes attribute of your logistic regression classifier. For example, if the classifier is named logreg then</p>
<pre><code>logreg.classes_
</code></pre>
<p>will reveal the order of the classes.</p>
<p><a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.predict_proba" rel="nofollow noreferrer">http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression.predict_proba</a> .</p>
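A quick check on toy data (assuming scikit-learn and three numeric labels as in the question) shows the classes come out sorted, so column 0 of `predict_proba` is class -1, column 1 is class 0, and column 2 is class 1:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.arange(6, dtype=float).reshape(-1, 1)
y = np.array([-1, -1, 0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)
print(clf.classes_)                    # sorted order of the labels
print(clf.predict_proba(X[:1]).shape)  # one probability column per class, in that order
```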
|
python|pandas|scikit-learn
| 5
|
376,626
| 50,163,284
|
how to read an lz4 compressed file in Pandas?
|
<p>I have a file like <code>stackunderflow.csv.lz4</code> and I want to load it in <code>Pandas</code> for processing.</p>
<p>I tried the naive <code>pd.read_csv()</code> without success. Can the great <code>Pandas</code> handle these types of compressed files?</p>
<p>Thanks!</p>
|
<p>Per <a href="https://stackoverflow.com/questions/45966508/reading-large-lz4-compressed-json-data-set-in-python-2-7">this StackOverFlow Answer</a>, you can use a 3rd party library to read in the data in chunks and then load that into your Pandas dataframe</p>
<pre><code>import lz4.frame

chunk_size = 128 * 1024 * 1024
with lz4.frame.open('mybigfile.lz4', 'r') as file:
    chunk = file.read(size=chunk_size)
</code></pre>
|
python|pandas|lz4
| 2
|
376,627
| 49,927,354
|
Python numpy - DeprecationWarning: Passing 1d arrays as data is deprecated
|
<p>I am quite new to data science/python, and currently I am working on some deep learning algorithms where I would like to use one variable for both input and output data. I have 4 inputs and 1 output. I use the following structure:</p>
<pre><code> samples = np.zeros(nb_samples, dtype=[('input', float, 4), ('output', float, 1)] )
</code></pre>
<p>and get the following warning, when I StandardScale, the array:</p>
<pre><code> DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise
ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your
data has a single feature or X.reshape(1, -1) if it contains a single sample.
</code></pre>
<p>I think the problem is that my structure looks like this:</p>
<pre><code> [ [x0,x1,x2,x3], y0 ]
</code></pre>
<p>And it should look something like this:</p>
<pre><code> [ [x0,x1,x2,x3], [y0] ]
</code></pre>
<p>I found some similar questions but none of the answers worked for me.</p>
<p>How can I solve this warning? And what is the exact problem?</p>
|
<p>I'm using 0.19.1 and indeed I get an error when I try to scale this array. But here's the transformation that works for me:</p>
<pre class="lang-py prettyprint-override"><code>samples = np.zeros(nb_samples, dtype=[('input', float, 4), ('output', float, 1)])
x = samples['input'] # shape=(nb_samples, 4)
y = samples['output'] # shape=(nb_samples,)
scaler = StandardScaler()
scaler.fit_transform(x, y) # does the same with and without `y`
</code></pre>
<p>Separation of <code>input</code> and <code>output</code> is better particularly for <code>StandardScaler</code>, because it scales only <code>x</code> and doesn't do anything to <code>y</code>. <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html" rel="nofollow noreferrer">In fact</a>, the <code>y</code> is a "passthrough argument for <code>Pipeline</code> compatibility". If you ignore this warning and transform <code>samples</code> directly, the <code>output</code> will be modified too and that's not what you want.</p>
|
python|arrays|numpy|scikit-learn|deep-learning
| 3
|
376,628
| 50,169,882
|
Multiply rows and append to dataframe by cell value
|
<p>Consider the following dataframe;</p>
<pre><code>df = pd.DataFrame(
{'X':('a','b','c','d'),
'Y':('a','b','d','e'),
'Z':('a','b','c','d'),
'#':(1,2,1,3)
})
df
</code></pre>
<p><a href="https://i.stack.imgur.com/qA1uM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qA1uM.png" alt="enter image description here"></a></p>
<p>I would like to append the rows with a figure higher than 1 in column '#' with the figure in that row minus 1. The df should preferably
then look like this;</p>
<p><a href="https://i.stack.imgur.com/qIhcm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qIhcm.png" alt="enter image description here"></a></p>
<p>Alternatively it may look like this (the rows multiplied completely);</p>
<p><a href="https://i.stack.imgur.com/Ri1Qs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ri1Qs.png" alt="enter image description here"></a></p>
<p>Btw, I've searched this problem extensively, but cannot find anything that helps me in the right direction.</p>
|
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.repeat.html" rel="nofollow noreferrer"><code>numpy.repeat</code></a>:</p>
<pre><code>c = df.columns[1:]
df = pd.DataFrame(np.repeat(df.values, df['#'], axis=0)[:, 1:], columns=c)
print (df)
X Y Z
0 a a a
1 b b b
2 b b b
3 c d c
4 d e d
5 d e d
6 d e d
</code></pre>
<p>Similar:</p>
<pre><code>df = pd.DataFrame(np.repeat(df.values, df['#'], axis=0), columns=df.columns)
print (df)
# X Y Z
0 1 a a a
1 2 b b b
2 2 b b b
3 1 c d c
4 3 d e d
5 3 d e d
6 3 d e d
</code></pre>
<p>But if order is important:</p>
<pre><code>dfs = []
for i in range(df['#'].max()):
df = df[df['#'] > 0].copy()
df['#'] -= 1
dfs.append(df.iloc[:, 1:])
df1 = pd.concat(dfs, ignore_index=True)
print (df1)
X Y Z
0 a a a
1 b b b
2 c d c
3 d e d
4 b b b
5 d e d
6 d e d
</code></pre>
|
python|pandas|numpy
| 2
|
376,629
| 50,082,220
|
Tensorboard: unable to find named scope
|
<p>I have a scope which I named <code>'Pred/Accuracy'</code> that I can't seem to find in Tensorboard. I will include my entire code a little later, but specifically in my definition of my cost function I have: </p>
<pre><code>def compute_cost(z, Y, parameters, l2_reg=False):
    with tf.name_scope('cost'):
        logits = tf.transpose(z)
        labels = tf.transpose(Y)
        cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits,
                                                                      labels = labels))
        if l2_reg == True:
            reg = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
            cost = cost + tf.reduce_sum(reg)

    with tf.name_scope('Pred/Accuracy'):
        prediction = tf.argmax(z)
        correct_prediction = tf.equal(tf.argmax(z), tf.argmax(Y))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

    return cost, prediction, accuracy
</code></pre>
<p>But on tensorboard I cant see it even if I click on the cost block:</p>
<p><a href="https://i.stack.imgur.com/dqPCM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dqPCM.png" alt="Tensorboard graph"></a></p>
<p>Below is basically my entire code excluding importing / pre-processing data</p>
<pre><code># Create X and Y placeholders
def create_xy_placeholder(n_x, n_y):
X = tf.placeholder(tf.float32, shape = [n_x, None], name = 'X')
Y = tf.placeholder(tf.float32, shape = [n_y, None], name = 'Y')
return X, Y
# initialize parameters hidden layers
def initialize_parameters(n_x, scale, hidden_units):
hidden_units= [n_x] + hidden_units
parameters = {}
regularizer = tf.contrib.layers.l2_regularizer(scale)
for i in range(0, len(hidden_units[1:])):
with tf.variable_scope('hidden_parameters_'+str(i+1)):
w = tf.get_variable("W"+str(i+1), [hidden_units[i+1], hidden_units[i]],
initializer=tf.contrib.layers.xavier_initializer(),
regularizer=regularizer)
b = tf.get_variable("b"+str(i+1), [hidden_units[i+1], 1],
initializer = tf.constant_initializer(0.1))
parameters.update({"W"+str(i+1): w})
parameters.update({"b"+str(i+1): b})
return parameters
# forward progression with batch norm and dropout
def forward_propagation(X, parameters, batch_norm=False, keep_prob=1):
a_new = X
for i in range(0, int(len(parameters)/2)-1):
with tf.name_scope('forward_pass_'+str(i+1)):
w = parameters['W'+str(i+1)]
b = parameters['b'+str(i+1)]
z = tf.matmul(w, a_new) + b
if batch_norm == True:
z = tf.layers.batch_normalization(z, momentum=0.99, axis=0)
a = tf.nn.relu(z)
if keep_prob < 1:
a = tf.nn.dropout(a, keep_prob)
a_new = a
tf.summary.histogram('act_'+str(i+1), a_new)
# calculating final Z before input into cost as logit
with tf.name_scope('forward_pass_'+str(int(len(parameters)/2))):
w = parameters['W'+str(int(len(parameters)/2))]
b = parameters['b'+str(int(len(parameters)/2))]
z = tf.matmul(w, a_new) + b
if batch_norm == True:
z = tf.layers.batch_normalization(z, momentum=0.99, axis=0)
return z
# compute cost with option for l2 regularizatoin
def compute_cost(z, Y, parameters, l2_reg=False):
with tf.name_scope('cost'):
logits = tf.transpose(z)
labels = tf.transpose(Y)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits,
labels = labels))
if l2_reg == True:
reg = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
cost = cost + tf.reduce_sum(reg)
with tf.name_scope('Pred/Accuracy'):
prediction=tf.argmax(z)
correct_prediction = tf.equal(tf.argmax(z), tf.argmax(Y))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
return cost, prediction, accuracy
# defining the model (need to add keep_prob for dropout)
def model(X_train, Y_train, X_test, Y_test,
hidden_units=[30, 50, 50, 30, 4], # hidden units/layers
learning_rate = 0.0001, # Learning rate
num_epochs = 2000, minibatch_size = 30, # minibatch/ number epochs
keep_prob=0.5, # dropout
batch_norm=True, # batch normalization
l2_reg=True, scale = 0.01, # L2 regularization/scale is lambda
print_cost = True):
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
logs_path = '/tmp/tensorflow_logs/example/'
# Create Placeholders of shape (n_x, n_y)
X, Y = create_xy_placeholder(n_x, n_y)
# Initialize parameters
parameters = initialize_parameters(n_x, scale, hidden_units)
# Forward propagation: Build the forward propagation in the tensorflow graph
z = forward_propagation(X, parameters, keep_prob, batch_norm)
# Cost function: Add cost function to tensorflow graph
cost, prediction, accuracy = compute_cost(z, Y, parameters, l2_reg)
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
with tf.name_scope('optimizer'):
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate)
# Op to calculate every variable gradient
grads = tf.gradients(cost, tf.trainable_variables())
grads = list(zip(grads, tf.trainable_variables()))
# Op to update all variables according to their gradient
apply_grads = optimizer.apply_gradients(grads_and_vars = grads)
# Initialize all the variables
init = tf.global_variables_initializer()
# to view in tensorboard
tf.summary.scalar('loss', cost)
tf.summary.scalar('accuracy', accuracy)
# Create summaries to visualize weights
for var in tf.trainable_variables():
tf.summary.histogram(var.name, var)
# Summarize all gradients
for grad, var in grads:
tf.summary.histogram(var.name + '/gradient', grad)
merged_summary_op = tf.summary.merge_all()
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Start the session to compute the tensorflow graph
with tf.Session(config=config) as sess:
# Run the initialization
sess.run(init)
# define writer
summary_writer = tf.summary.FileWriter(logs_path,
graph=tf.get_default_graph())
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
count = 0
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y).
_ , minibatch_cost, summary = sess.run([apply_grads, cost,
merged_summary_op],
feed_dict = {X: minibatch_X, Y: minibatch_Y})
epoch_cost += minibatch_cost / num_minibatches
# Write logs at every iteration
summary_writer.add_summary(summary, epoch * num_minibatches + count)
count += 1
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
prediction1=tf.argmax(z)
# print('Z5: ', Z5.eval(feed_dict={X: minibatch_X, Y: minibatch_Y}))
print('prediction: ', prediction1.eval(feed_dict={X: minibatch_X,
Y: minibatch_Y}))
correct1=tf.argmax(Y)
# print('Y: ', Y.eval(feed_dict={X: minibatch_X,
# Y: minibatch_Y}))
print('correct: ', correct1.eval(feed_dict={X: minibatch_X,
Y: minibatch_Y}))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(z), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
print("Run the command line:\n" \
"--> tensorboard --logdir=/tmp/tensorflow_logs " \
"\nThen open http://0.0.0.0:6006/ into your web browser")
return parameters
# run model on test data
parameters = model(x_train, y_train, x_test, y_test, keep_prob=1)
</code></pre>
|
<p>Tensorflow scopes are hierarchical: you can have a scope within another scope within another scope, etc. The name <code>"Pred/Accuracy"</code> means exactly that: you have a top-level <code>"Pred"</code> scope and a nested <code>"Accuracy"</code> scope (this is because the slash has a special meaning in naming).</p>
<p>Tensorboard shows the top ones by default: <code>"Pred"</code> (on the top), <code>"batch_normalization"</code>, etc. You can expand them to see what's inside them by double clicking. Inside <code>"Pred"</code> you should find <code>"Accuracy"</code>.</p>
<p>If you like, just name your scope differently, e.g. <code>"Pred_Accuracy"</code>, and the full name will appear in tensorboard.</p>
|
python|python-3.x|tensorflow|deep-learning|tensorboard
| 1
|
376,630
| 49,938,429
|
Removing consecutive asc/desc sequences from dataframe
|
<p>I am thinking of a pandas-style way (not a loop) to remove all consecutive positive or negative pct changes. So assuming I have a dataframe like this:</p>
<p><code>df=pd.DataFrame([1,2,3,5,4,3,2,4,5,6,7,8,9])</code></p>
<p>I would want to remove all in-between points where there are consecutive ascending/descending sequences. The end output would be [1,5,2,9]. Thanks!</p>
|
<p>In other words, you need to choose the items where <code>A[i-1] > A[i] < A[i+1]</code> or <code>A[i-1] < A[i] > A[i+1]</code>.</p>
<pre><code>df = pd.DataFrame([1,2,3,4,5,4,3,2,4,5,6,7,8,9])
numbers_list = df[0].values.tolist()
df = pd.DataFrame([item[1] for item in filter(lambda x: ((x[2] < x[1] > x[0]) or (x[2] > x[1] < x[0])), zip(numbers_list, numbers_list[1:], numbers_list[2:]))])
</code></pre>
<p>In addition to these items, you also have to concatenate the first and last items of your given array onto the result.</p>
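<p>For completeness, here is a NumPy sketch of the whole operation (keeping both endpoints plus every interior turning point; it assumes there are no consecutive equal values, since those would make the diff sign zero):</p>

```python
import numpy as np

a = np.array([1, 2, 3, 5, 4, 3, 2, 4, 5, 6, 7, 8, 9])

d = np.sign(np.diff(a))                  # +1 while ascending, -1 while descending
keep = np.concatenate(([True],           # always keep the first point
                       d[1:] != d[:-1],  # keep points where the direction flips
                       [True]))          # always keep the last point
result = a[keep]
```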
|
python|pandas|dataframe
| 1
|
376,631
| 50,004,616
|
Special Moving Average
|
<p>I want to predict the direction towards which the price will change.
The term price is used to refer to the mid-price of a stock, which is defined as the mean between the best bid price and
best ask price at time t: </p>
<p><a href="https://i.stack.imgur.com/AvuIN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AvuIN.png" alt="enter image description here"></a></p>
<p>This is a virtual value for the price since no order can happen at
that exact price, but predicting its upwards or downwards movement
provides a good estimate of the price of the future orders. A set of
discrete choices must be constructed from our data to use as targets
for our classification model. Simply using p(t) > p(t+k) to
determine the direction of the mid-price would introduce unmanageably
amount of noise, since the smallest change would be registered as an
upward or downward movement.</p>
<p>The mean of the previous k mid-prices, denoted by m_b, and the mean of the next k mid-prices, denoted by m_a, are defined as: </p>
<p><a href="https://i.stack.imgur.com/b5oDj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b5oDj.png" alt="enter image description here"></a></p>
<p>Here is a sample : </p>
<pre><code>0 2015-03-31 09:30:00.233 2.4645
1 2015-03-31 09:30:00.233 2.4634
2 2015-03-31 09:34:44.116 2.5363
3 2015-03-31 09:34:44.116 2.5434
4 2015-03-31 09:36:38.535 2.5356
5 2015-03-31 09:36:38.535 2.5432
6 2015-03-31 09:36:38.537 2.5463
7 2015-03-31 09:36:38.537 2.5432
8 2015-03-31 09:45:10.512 2.5274
9 2015-03-31 09:45:10.512 2.5262
10 2015-03-31 09:45:10.523 2.5299
11 2015-03-31 09:45:10.529 2.5234
12 2015-03-31 09:45:10.531 2.5276
13 2015-03-31 09:45:10.568 2.5212
14 2015-03-31 09:45:10.569 2.5262
15 2015-03-31 09:45:10.635 2.5143
16 2015-03-31 09:45:10.684 2.5267
17 2015-03-31 09:45:10.686 2.5212
18 2015-03-31 10:00:02.111 2.5213
19 2015-03-31 10:00:02.111 2.5298
20 2015-03-31 10:00:02.112 2.5212
21 2015-03-31 10:00:02.381 2.5263
22 2015-03-31 10:00:02.472 2.5212
23 2015-03-31 10:00:02.486 2.5298
24 2015-03-31 10:00:02.524 2.5298
25 2015-03-31 10:00:04.026 2.5270
26 2015-03-31 10:06:54.546 2.5212
27 2015-03-31 10:06:54.558 2.5234
28 2015-03-31 10:06:54.558 2.5253
29 2015-03-31 10:06:54.566 2.5234
</code></pre>
<p>At any time, I want to compute m_a and m_b, but I don't know how to do it with pandas or numpy. Assuming the horizon k=5, how could I code those two special moving averages with Python? So I need two functions, i.e. <code>leftmovingaverage()</code> and <code>rightmovingaverage()</code>, and display two columns next to the price column with names <code>M_A</code> and <code>M_B</code>.</p>
<p><strong>Example :</strong> </p>
<p>Imagine we have 1000 time points and set k = 600; then you can compute m_a from k=590~600 and m_b from k=601~610. Everything is well explained in the following link: <a href="http://poseidon.csd.auth.gr/papers/PUBLISHED/CONFERENCE/pdf/2017/2017_CBI_CNNLOB.pdf" rel="nofollow noreferrer">http://poseidon.csd.auth.gr/papers/PUBLISHED/CONFERENCE/pdf/2017/2017_CBI_CNNLOB.pdf</a>.</p>
|
<pre><code>import pandas as pd

def moving_average(df, col_name='Price', k=3):
    ma_cols = []
    mb_cols = []
    temp_df = pd.DataFrame()
    for i in range(0, k+1):
        ma_col = 'M_A_{}'.format(i)
        ma_cols.append(ma_col)
        mb_col = 'M_B_{}'.format(i)
        mb_cols.append(mb_col)
        temp_df[ma_col] = df[col_name].shift(i)
        temp_df[mb_col] = df[col_name].shift(-i)
    df['M_A'] = temp_df[ma_cols].mean(axis=1, skipna=True, numeric_only=True)
    df['M_B'] = temp_df[mb_cols].mean(axis=1, skipna=True)
    print(df)
    return df

moving_average(df)
</code></pre>
<p>Note: unlike the formulas above (which sum from i=1 to k and divide by k), the code also includes the current price (shift 0), so each mean is taken over k+1 values and divided by k+1.</p>
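<p>The same two columns can also be computed with <code>rolling</code>; this is a sketch on made-up prices, and like the code above each mean covers the current price plus <code>k</code> neighbours (k+1 values):</p>

```python
import pandas as pd

price = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])  # stand-in for the mid-price column
k = 2

# mean over the current price and the previous k prices
m_a = price.rolling(k + 1, min_periods=1).mean()

# mean over the current price and the next k prices (roll over the reversed series)
m_b = price[::-1].rolling(k + 1, min_periods=1).mean()[::-1]
```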
|
python|pandas
| 0
|
376,632
| 50,106,611
|
Convert list of dictionaries containing another list of dictionaries to dataframe
|
<p>I have tried to look for a solution but have been unable to find one. I have the following output from an API in Python.</p>
<pre><code>insights = [ <Insights> {
"account_id": "1234",
"actions": [
{
"action_type": "add_to_cart",
"value": "8"
},
{
"action_type": "purchase",
"value": "2"
}
],
"cust_id": "xyz123",
"cust_name": "xyz",
}, <Insights> {
"account_id": "1234",
"cust_id": "pqr123",
"cust_name": "pqr",
}, <Insights> {
"account_id": "1234",
"actions": [
{
"action_type": "purchase",
"value": "45"
}
],
"cust_id": "abc123",
"cust_name": "abc",
}
]
</code></pre>
<p>I want the data frame something like this</p>
<pre><code>- account_id add_to_cart purchase cust_id cust_name
- 1234 8 2 xyz123 xyz
- 1234 pqr123 pqr
- 1234 45 abc123 abc
</code></pre>
<p>When I use the following </p>
<pre><code>> insights_1 = [x for x in insights]
> df = pd.DataFrame(insights_1)
</code></pre>
<p>I get the following</p>
<pre><code>- account_id actions cust_id cust_name
- 1234 [{'value': '8', 'action_type': 'add_to_cart'},{'value': '2', 'action_type': 'purchase'}] xyz123 xyz
- 1234 NaN pqr123 pqr
- 1234 [{'value': '45', 'action_type': 'purchase'}] abc123 abc
</code></pre>
<p>How do I move ahead with this?</p>
|
<p>This is one solution.</p>
<pre><code>df = pd.DataFrame(insights)
parts = [pd.DataFrame({d['action_type']: d['value'] for d in x}, index=[0])
if x == x else pd.DataFrame({'add_to_cart': [np.nan], 'purchase': [np.nan]})
for x in df['actions']]
df = df.drop('actions', 1)\
.join(pd.concat(parts, axis=0, ignore_index=True))
print(df)
account_id cust_id cust_name add_to_cart purchase
0 1234 xyz123 xyz 8 2
1 1234 pqr123 pqr NaN NaN
2 1234 abc123 abc NaN 45
</code></pre>
<p><strong>Explanation</strong></p>
<ul>
<li>Utilise <code>pandas</code> to read the outer list of dictionaries into a dataframe.</li>
<li>For the inner dictionaries, use a list comprehension together with a dictionary comprehension.</li>
<li>Account for <code>nan</code> values by testing for equality within the list comprehension.</li>
<li>Concatenate and join the parts to the original dataframe.</li>
</ul>
<p><strong>Explanation - detail</strong></p>
<p>This details the construction and use of <code>parts</code>:</p>
<ol>
<li>Take each entry in <code>df['actions']</code>; each entry will be a <em>list
of dictionaries</em>.</li>
<li>Iterate them one by one, i.e. by row, in a <code>for</code> loop.</li>
<li>The <code>else</code> part says "if it is <code>np.nan</code> [i.e. null] then return a dataframe of <code>nan</code>s". The <code>if</code> part takes the list of dictionaries and creates a mini-dataframe <em>for each row</em>.</li>
<li>We then use the next part to concatenate these mini-dictionaries, one for each row, and join them to the original dataframe.</li>
</ol>
|
python|pandas|dictionary|dataframe
| 4
|
376,633
| 50,200,303
|
how to plot attention vector on tensorboard graph
|
<p>The attention vector for a sequence-to-sequence model is basically an array of shape [batch_size, time_step, 1], which indicates the weight of each particular time step. </p>
<p>But if I use <code>tf.summary.histogram</code> to show it on TensorBoard, TensorFlow will only show the distribution of the weights; I can't tell which time step is more important. I could use <code>tf.summary.scalar</code>, but the length of my source sequence is 128, which is too many plots. The most natural way to show this kind of data is a picture like this, but how can I do it in TensorBoard?</p>
<p><a href="https://i.stack.imgur.com/WqgO4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WqgO4.png" alt="enter image description here"></a></p>
|
<p>Tensorboard does not currently support visualizing tensor summaries. There is a <a href="https://www.tensorflow.org/api_docs/python/tf/summary/tensor_summary" rel="nofollow noreferrer">summary op</a> for it, but Tensorboard will just skip it when reading summaries from disk at the moment. I am also not aware of any third party plugins that support this, though it is very much doable.</p>
<p>In your plot, it seems like there are only 19 time steps. One way is to create 19 scalar summaries. Alternatively, you can use tf.summary.tensor_summary op, but process the tensorboard event file (containing this data) with your own script.</p>
|
tensorflow|tensorboard
| 0
|
376,634
| 49,949,428
|
apply mask on np.array in pandas
|
<p>I have a <code>pd.DataFrame</code> containing a mask and <code>np.array</code>. I want to apply the mask on the array (like I would do with <code>np.where</code>)</p>
<p>Does anyone have an idea how to accomplish this?</p>
<pre><code>df = pd.DataFrame({'Mask' : [[True, False, True], [False, False], [True, True]],
'Array' : [[2, 5,4] , [1, 0] , [4, 5],],
'Result' : [[2, 4] , [] , [4,5]]})
def ffilter(entry):
    return entry['Array']['Mask']

df.apply(ffilter)  # --> Nope, too easy :-(
</code></pre>
|
<p>You could just create a mask by using <code>df.Mask</code>, pass it to the <code>mask()</code> function of the data frame and aggregate.</p>
<p>This would be the "<em>one-liner</em>":</p>
<pre><code>pd.DataFrame(df.Array.tolist())\
.mask(np.asarray(df.Mask.tolist()))\
.agg(['mean', 'std', 'min', 'max'])
</code></pre>
<p>which gives you:</p>
<pre><code> 0 1
mean 1.0 2.500000
std NaN 3.535534
min 1.0 0.000000
max 1.0 5.000000
</code></pre>
<hr>
<p>Or as a whole:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'Mask' : [[True, False], [False, False], [True, True]],
'Array' : [[2, 5] , [1, 0] , [4, 5],],
'Result' : [[2] , [] , [4, 5]]})
df_Array = pd.DataFrame(df.Array.tolist())
mask = np.asarray(df.Mask.tolist())
df_Array.mask(mask).agg(['mean', 'std', 'min', 'max'])
</code></pre>
<hr>
<p>From the comments, it is still not clear what your desired output is. I'll just assume you want to calculate statistics like min, max, std etc for each of these array in your data frame - and further - have a data frame where each row represents one of those arrays:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'Mask' : [[True, False, True], [False, False], [True, True]],
'Array' : [[2, 5,4] , [1, 0] , [4, 5],],
'Result' : [[2, 4] , [] , [4,5]]})
df_stats = df.apply(lambda x: pd.Series(x.Array)[x.Mask]
.agg(['min', 'max', 'std', 'mean']), 1)
print(df_stats)
</code></pre>
<p>which produces:</p>
<pre><code> min max std mean
0 2.0 4.0 1.414214 3.0
1 NaN NaN NaN NaN
2 4.0 5.0 0.707107 4.5
</code></pre>
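<p>If what is wanted is literally the <code>Result</code> column from the question (the masked lists themselves rather than statistics), a plain-Python sketch with <code>itertools.compress</code> also works:</p>

```python
from itertools import compress

import pandas as pd

df = pd.DataFrame({'Mask': [[True, False, True], [False, False], [True, True]],
                   'Array': [[2, 5, 4], [1, 0], [4, 5]]})

# keep only the Array elements whose Mask entry is True, row by row
df['Result'] = [list(compress(arr, m))
                for arr, m in zip(df['Array'], df['Mask'])]
```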
|
python|pandas|numpy
| 2
|
376,635
| 49,900,798
|
How to limit RAM usage while batch training in tensorflow?
|
<p>I am training a <em>deep neural network</em> with a <em>large image dataset</em> in mini-batches of size 40. My dataset is in <code>.mat</code> format (which I can easily change to any other format, e.g. <code>.npy</code>, if necessary) and is loaded as a 4-D <code>numpy</code> array before training. My problem is that while training, CPU RAM (not GPU RAM) is exhausted very quickly and almost half of my swap memory gets used.</p>
<p>My training code has the following pattern:</p>
<pre><code>batch_size = 40
...
with h5py.File('traindata.mat', 'r') as _data:
    train_imgs = np.array(_data['train_imgs'])
    # I can replace the above with the loading below, if necessary
    # train_imgs = np.load('traindata.npy')
...
shape_4d = train_imgs.shape
for epoch_i in range(max_epochs):
    for iter in range(shape_4d[0] // batch_size):
        y_ = train_imgs[iter*batch_size:(iter+1)*batch_size]
        ...
        ...
</code></pre>
<p>It seems like the initial loading of the full training data is itself the bottleneck (taking over 12 GB of CPU RAM before I abort).</p>
<p>What is the best efficient way to tackle this bottle-neck?</p>
<p>Thanks in advance.</p>
|
<p>Loading a big dataset entirely into memory is not a good idea. I suggest you use something different for loading the dataset; take a look at the Dataset API in TensorFlow: <a href="https://www.tensorflow.org/programmers_guide/datasets" rel="nofollow noreferrer">https://www.tensorflow.org/programmers_guide/datasets</a></p>
<p>You might need to convert your data into another format, but if you have a CSV or TXT file with one example per line you can use <code>TextLineDataset</code> and feed the model with it:</p>
<pre class="lang-py prettyprint-override"><code>filenames = ["/var/data/file1.txt", "/var/data/file2.txt"]
dataset = tf.data.TextLineDataset(filenames)
def _parse_py_fun(text_line):
    ...  # your custom code here, return np arrays

def _map_fun(text_line):
    result = tf.py_func(_parse_py_fun, [text_line], [tf.uint8])
    ...  # other tensorflow code here
    return result
dataset = dataset.map(_map_fun)
dataset = dataset.batch(4)
iterator = dataset.make_one_shot_iterator()
input_data_of_your_model = iterator.get_next()
output = build_model_fn(input_data_of_your_model)
sess.run([output]) # the input was assigned directly when creating the model
</code></pre>
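<p>If switching to the Dataset API is not an option, another way to avoid loading the whole array (my suggestion, not part of the answer above) is to memory-map the <code>.npy</code> file with <code>np.load(..., mmap_mode='r')</code>, so only the slices you index are actually read from disk:</p>

```python
import numpy as np

# create a small demo file standing in for the real 'traindata.npy'
np.save('traindata_demo.npy', np.arange(200).reshape(50, 4).astype(np.float32))

batch_size = 10
arr = np.load('traindata_demo.npy', mmap_mode='r')  # memory-mapped, not fully loaded

# materialize one mini-batch at a time; np.asarray copies only that slice
batches = [np.asarray(arr[i * batch_size:(i + 1) * batch_size])
           for i in range(len(arr) // batch_size)]
```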
|
python|tensorflow|training-data
| 3
|
376,636
| 49,972,651
|
How to combine multiple columns from a pandas df into a list
|
<p>How can you combine multiple columns from a dataframe into a list? </p>
<p>Input:</p>
<pre><code>df = pd.DataFrame(np.random.randn(10000, 7), columns=list('ABCDEFG'))
</code></pre>
<p>If I wanted to create a list from column A I would perform:</p>
<pre><code>df1 = df['A'].tolist()
</code></pre>
<p>But if I wanted to combine numerous columns into this list it wouldn't be efficient to write <code>df['A','B','C'...'Z'].tolist()</code>.</p>
<p>I have tried to do the following but it just adds the columns headers to a list.</p>
<pre><code>df1 = list(df.columns)[0:8]
</code></pre>
<p>Intended input:</p>
<pre><code> A B C D E F G
0 0.787576 0.646178 -0.561192 -0.910522 0.647124 -1.388992 0.728360
1 0.265409 -1.919283 -0.419196 -1.443241 -2.833812 -1.066249 0.553379
2 0.343384 0.659273 -0.759768 0.355124 -1.974534 0.399317 -0.200278
</code></pre>
<p>Intended Output:</p>
<pre><code>[0.787576, 0.646178, -0.561192, -0.910522, 0.647124, -1.388992, 0.728360,
0.265409, -1.919283, -0.419196, -1.443241, -2.833812, -1.066249, 0.553379,
0.343384, 0.659273, -0.759768, 0.355124, -1.974534, 0.399317, -0.200278]
</code></pre>
|
<p>Is this what you are looking for?</p>
<pre><code>lst = df.values.tolist()
flat_list = [item for x in lst for item in x]
print(flat_list)
</code></pre>
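<p>If row-major order (row 0's A, B, C..., then row 1, matching the intended output above) is acceptable, the flattening can also be done in one step with <code>ravel</code>:</p>

```python
import pandas as pd

df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=list('ABC'))

# .values gives the underlying 2-D array; ravel flattens it row by row
flat_list = df.values.ravel().tolist()
```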
|
python|list|pandas
| 0
|
376,637
| 50,036,888
|
How to get all possible slices of a 1D numpy array depending on input
|
<p>I have a numpy array</p>
<pre><code>a = np.arange(12)
>>> [0,1,2,3,4,5,6,7,8,9,10,11]
</code></pre>
<p>I am trying to calculate all possible cumsums like this</p>
<pre><code>np.cumsum[2:] + np.cumsum[:-2]
np.cumsum[3:] + np.cumsum[:-3]
...
np.cumsum[11:] + np.cumsum[:-11]
</code></pre>
<p>How can I achieve this without a loop? I tried doing:</p>
<pre><code>starts = np.arange(2,12)
np.cumsum[starts:] + np.cumsum[:-starts]
</code></pre>
<p>but I get this error:</p>
<pre><code>TypeError: only integer scalar arrays can be converted to a scalar index
</code></pre>
<p>How do I do this without a for loop</p>
<p><strong>What I am trying to do</strong></p>
<p>I am trying to calculate the moving average over all possible time frames within the length of a sequence. For example, if I had an array of size 10, I could do a moving average over 1 period (doesn't make sense), 2 periods, 3 periods...10 periods. How do I accomplish this? I want to calculate the moving average from 2 to n, where n is the size of the sequence.</p>
|
<p>It is not exactly what you asked for, but if you are looking for a simpler solution, you can use the pandas approach. </p>
<pre><code>df = pd.DataFrame({'a': np.arange(11)})  # your data
window_lengths = np.arange(2, len(df))   # define window lengths from 2 to n-1
[rolling_win.mean() for rolling_win in [df.rolling(length) for length in window_lengths]]
</code></pre>
<p><strong>output :</strong></p>
<pre><code> [ a
0 NaN
1 0.5
2 1.5
3 2.5
4 3.5
5 4.5
6 5.5
7 6.5
8 7.5
9 8.5
10 9.5, a
0 NaN
1 NaN
2 1.0
3 2.0
4 3.0
5 4.0
6 5.0
7 6.0
8 7.0
9 8.0
10 9.0, a
0 NaN
1 NaN
2 NaN
3 1.5
4 2.5
5 3.5
6 4.5
7 5.5
8 6.5
9 7.5
10 8.5, a
0 NaN
1 NaN
2 NaN
3 NaN
4 2.0
5 3.0
6 4.0
7 5.0
8 6.0
9 7.0
10 8.0, a
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 2.5
6 3.5
7 4.5
8 5.5
9 6.5
10 7.5, a
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 3.0
7 4.0
8 5.0
9 6.0
10 7.0, a
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 3.5
8 4.5
9 5.5
10 6.5, a
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 4.0
9 5.0
10 6.0, a
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 NaN
9 4.5
10 5.5]
</code></pre>
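<p>For a loop-free NumPy version of a single window length, the cumulative-sum idea from the question can be completed like this (a sketch; you still need one such call per window length to cover all of them):</p>

```python
import numpy as np

a = np.arange(12)
L = 3                                    # window length

c = np.concatenate(([0], np.cumsum(a)))  # prepend 0 so the differences line up
ma = (c[L:] - c[:-L]) / L                # mean of every length-L window
```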
|
python|python-3.x|numpy
| 1
|
376,638
| 50,146,655
|
when to use square brackets and when to use parentheses?
|
<p>Is there any difference between </p>
<pre><code>a = np.array([1,2,3])
</code></pre>
<p>and</p>
<pre><code>a = np.array((1,2,3))?
</code></pre>
<p>With both inputs, I am getting the following output when I try this:</p>
<pre><code>print(a)
print(a.ndim)
print(a.shape)
print(type(a))
</code></pre>
<p>output</p>
<pre><code>[1 2 3]
1
(3,)
<class 'numpy.ndarray'>
</code></pre>
<p>Is there any difference between them? What is the best syntax for calling <code>numpy.array</code>?</p>
<p>If they are the same, then is there a reason why people prefer one over the other?</p>
|
<p>Square brackets <code>[1,2,3]</code> make a <code>list</code>. Round brackets <code>(1,2,3)</code> make a <code>tuple</code>. The main difference is that a list can be resized and modified, whereas a tuple is immutable.</p>
<p>There is no practical difference in anonymous expressions like <code>np.array([1,2,3])</code>. You can use either form with equal correctness and effect. The square-brackets form is perhaps more conventional.</p>
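<p>A quick check confirms the two calls produce identical arrays:</p>

```python
import numpy as np

a = np.array([1, 2, 3])   # from a list
b = np.array((1, 2, 3))   # from a tuple

# same values, same dtype, same shape
same = np.array_equal(a, b) and a.dtype == b.dtype and a.shape == b.shape
```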
|
arrays|python-3.x|list|numpy|tuples
| 0
|
376,639
| 49,846,461
|
Extracting data from pandas based on condition
|
<p>I have a data frame with columns <code>A = [1,2,3,5,9,8,11,13] and B = [2,1,6,19,16,15,14,12]</code>. I want to check <em>whether the criss-cross elements of A and B are equal in any case</em>.</p>
<p>For example: here <code>A[0]==B[1] and B[0]==A[1]</code>; this is a criss-cross pair.</p>
<pre><code>import pandas as pd
df=pd.DataFrame({'A':[1,2,3],'B':[2,1,6]})
if df.loc[0,"A"] == df.loc[1,"B"] & df.loc[1,"A"] == df.loc[0,"B"]:
print("the values which are equal")
else:
print("the values which are not equal")
</code></pre>
|
<h2>Compare contiguous rows</h2>
<p>In order to check whether or not <code>A[i]==B[i+1] & A[i+1]==B[i]</code> holds for all the rows in the dataframe, you can compare the columns vectorially but shifted:</p>
<pre><code>A = np.array([1,2,3,5,14,16,16,13]) # I modified the input data from the question for the second example
B = [2,1,6,19,16,15,14,12]
df = pd.DataFrame({'A':A,'B':B})
eq_diag = (df['A'][:-1].values==df['B'][1:].values) & (df['A'][1:].values==df['B'][:-1].values)
# boolean array with rows=rows_in_df-1, eq_diag[i] will be true if the
# diagonal between rows i and i+1 is equal
# Output for eq_diag
# [ True False False False False False False]
</code></pre>
<p>Then, the values which are equal in these diagonal comparisons can be printed:</p>
<pre><code>print(df[:-1][eq_diag])  # the [:-1] is important for dimensions to match
# Out
A B
0 1 2 # it also returns the index i (not i+1) where the diagonal is equal
</code></pre>
<h2>Compare <em>ALL</em> row combinations in the dataframe</h2>
<p>If, instead of comparing row <code>i</code> with row <code>i+1</code>, all possible row combinations should be compared, the module <code>itertools</code> can be used:</p>
<pre><code>import itertools
combinations = np.array(list(itertools.combinations(range(len(df)),2)))
print(combinations.T)
eq_diag = ((df['A'].values[combinations[:,0]]==df['B'].values[combinations[:,1]]) &
           (df['A'].values[combinations[:,1]]==df['B'].values[combinations[:,0]]))
# Out: all the row combinations
[[0 0 0 0 0 0 0 1 1 1 1 1 1 2 2 2 2 2 3 3 3 3 4 4 4 5 5 6]
[1 2 3 4 5 6 7 2 3 4 5 6 7 3 4 5 6 7 4 5 6 7 5 6 7 6 7 7]]
</code></pre>
<p>And then, the elemenst which are equal can be printed:</p>
<pre><code>for i,j in combinations[eq_diag]:
    print('The criss cross element of rows {} and {} is equal:\n{}'.format(i,j,df.values[[i,j]]))
# Out
# The criss cross element of rows 0 and 1 is equal:
# [[1 2]
#  [2 1]]
# The criss cross element of rows 4 and 6 is equal:
# [[14 16]
#  [16 14]]
</code></pre>
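<p>As a side note, the adjacent-row case from the question can also be written with <code>shift</code>, which avoids the explicit <code>.values</code> slicing (a sketch on the question's small frame):</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [2, 1, 6]})

# True at row i when rows i and i+1 form a criss-cross pair
cross = (df['A'] == df['B'].shift(-1)) & (df['B'] == df['A'].shift(-1))
```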
|
python|pandas|loops|indexing
| 0
|
376,640
| 50,178,925
|
Convert nested dictionary of lists into pandas dataframe efficiently
|
<p>I have a json object such that</p>
<pre><code>{
"hits": {
"hits": [
{
"_source": {
"TYPES": [
{
"_ID": 130,
"_NM": "ARB-130"
},
{
"_ID": 131,
"_NM": "ARB-131"
},
{
"_ID": 132,
"_NM": "ARB-132"
}
]
}
},
{
"_source": {
"TYPES": [
{
"_ID": 902,
"_NM": "ARB-902"
},
{
"_ID": 903,
"_NM": "ARB-903"
},
{
"_ID": 904,
"_NM": "ARB-904"
}
]
}
}
]
}
}
</code></pre>
<p>I need to unpack it into a pandas dataframe such that I get all the unique _id and _nm pairs under the _types object</p>
<pre><code> _ID _NM
0 130 ARB-130
1 131 ARB-131
2 132 ARB-132
3 902 ARB-902
4 903 ARB-903
5 904 ARB-904
</code></pre>
<p>I am looking for the fastest possible solution since the number of types and the number of pairs within types can be in the hundreds of thousands. My current unpacking using pd.Series and apply is slow, and I would like to avoid it if possible. Any ideas would be appreciated, also about exploding dictionaries or lists in a column into separate columns without using pd.Series, as I encounter this use case regularly.</p>
|
<p>One way is to restructure your dictionary and flatten using <code>itertools.chain</code>.</p>
<p>For performance, you should benchmark with your data.</p>
<pre><code>from itertools import chain
res = list(chain.from_iterable(i['_source']['TYPES'] for i in d['hits']['hits']))
df = pd.DataFrame(res)
print(df)
_ID _NM
0 130 ARB-130
1 131 ARB-131
2 132 ARB-132
3 902 ARB-902
4 903 ARB-903
5 904 ARB-904
</code></pre>
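<p>For comparison, newer pandas versions (1.0+) can do the same flattening with <code>pd.json_normalize</code> and a nested <code>record_path</code>; as always, benchmark both on your own data:</p>

```python
import pandas as pd

d = {"hits": {"hits": [
    {"_source": {"TYPES": [{"_ID": 130, "_NM": "ARB-130"},
                           {"_ID": 131, "_NM": "ARB-131"}]}},
    {"_source": {"TYPES": [{"_ID": 902, "_NM": "ARB-902"}]}},
]}}

# walk each hit -> _source -> TYPES and emit one row per inner dict
df = pd.json_normalize(d['hits']['hits'], record_path=['_source', 'TYPES'])
```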
|
python|pandas|dictionary|dataframe
| 2
|
376,641
| 50,109,667
|
Python: Running function to append values to an empty list returns no values
|
<p>This is probably a very basic question but I haven't been able to figure this out.</p>
<p>I'm currently using the following to append values to an empty list</p>
<pre><code>shoes = {'groups':['running','walking']}
df_shoes_group_names = pd.DataFrame(shoes)
shoes_group_name=[]
for type in df_shoes_group_names['groups']:
    shoes_group_name.append(type)
shoes_group_name
['running', 'walking']
</code></pre>
<p>I'm trying to accomplish the same using a for loop, however, when I execute the loop the list comes back as blank</p>
<pre><code>shoes_group_name=[]
def list_builder(dataframe_name):
    if 'shoes' in dataframe_name:
        for type in df_shoes_group_names['groups']:
            shoes_group_name.append(type)

list_builder(df_shoes_group_names)
shoes_group_name
[]
</code></pre>
<p>The reason for the function is that eventually I'll have multiple DFs with different products, so I'd like to just have if statements within the function to handle the creation of each list.</p>
<p>so for example future examples could look like this:</p>
<pre><code>df_shoes_group_names
df_boots_group_names
df_sandals_group_names
shoes_group_name=[]
boots_group_name=[]
sandals_group_name=[]
def list_builder(dataframe_name):
    if 'shoes' in dataframe_name:
        for type in df_shoes_group_names['groups']:
            shoes_group_name.append(type)
    elif 'boots' in dataframe_name:
        for type in df_boots_group_names['groups']:
            boots_group_name.append(type)
    elif 'sandals' in dataframe_name:
        for type in df_sandals_group_names['groups']:
            sandals_group_name.append(type)
list_builder(df_shoes_group_names)
list_builder(df_boots_group_names)
list_builder(df_sandals_group_names)
</code></pre>
<p>Not sure if I'm approaching this the right way so any advice would be appreciated.</p>
<p>Best,</p>
|
<p>You should <strong>never</strong> call or search a variable name as if it were a string.</p>
<p>Instead, use a dictionary to store a variable number of variables.</p>
<p><strong>Bad practice</strong></p>
<pre><code># dataframes
df_shoes_group_names = pd.DataFrame(...)
df_boots_group_names = pd.DataFrame(...)
df_sandals_group_names = pd.DataFrame(...)
def foo(x):
    if 'shoes' in df_shoes_group_names: # <-- THIS WILL NOT WORK (this checks the column names)
# do something with x
</code></pre>
<p><strong>Good practice</strong></p>
<pre><code># dataframes
df_shoes_group_names = pd.DataFrame(...)
df_boots_group_names = pd.DataFrame(...)
df_sandals_group_names = pd.DataFrame(...)
dfs = {'shoes': df_shoes_group_names,
'boots': df_boots_group_names,
'sandals': df_sandals_group_names}
def foo(key):
if 'shoes' in key: # <-- THIS WILL WORK
# do something with dfs[key]
</code></pre>
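<p>A runnable sketch of the pattern (the frames and their contents here are illustrative):</p>

```python
import pandas as pd

# one dictionary of DataFrames instead of three separately named variables
dfs = {'shoes': pd.DataFrame({'groups': ['running', 'walking']}),
       'boots': pd.DataFrame({'groups': ['hiking']}),
       'sandals': pd.DataFrame({'groups': ['beach']})}

# one result list per product type, keyed by the same name
group_names = {key: frame['groups'].tolist() for key, frame in dfs.items()}
```

<p>Adding a new product type then means adding one dictionary entry, not a new variable and a new <code>elif</code> branch.</p>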
|
python|pandas|loops|for-loop
| 1
|
376,642
| 64,162,672
|
How to randomly set a fixed number of elements in each row of a tensor in PyTorch
|
<p>I was wondering if there is any more efficient alternative for the below code, without using the "for" loop in the 4th line?</p>
<pre><code>import torch
n, d = 37700, 7842
k = 4
sample = torch.cat([torch.randperm(d)[:k] for _ in range(n)]).view(n, k)
mask = torch.zeros(n, d, dtype=torch.bool)
mask.scatter_(dim=1, index=sample, value=True)
</code></pre>
<p>Basically, what I am trying to do is to create an <code>n</code> by <code>d</code> mask tensor, such that in each row exactly <code>k</code> random elements are True.</p>
|
<p>Here's a way to do this with no loop. Let's start with a random matrix where all elements are drawn iid, in this case uniformly on [0,1]. Then we take the k'th quantile for each row and set all smaller or equal elements to True and the rest to False on each row:</p>
<pre><code>rand_mat = torch.rand(n, d)
k_th_quant = torch.topk(rand_mat, k, largest = False)[0][:,-1:]
mask = rand_mat <= k_th_quant
</code></pre>
<p>No loop needed :) x2.1598 faster than the code you attached on my CPU.</p>
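<p>The same threshold idea can be sketched in plain NumPy, which makes the per-row count easy to verify (the small shapes here are illustrative, not from the question):</p>

```python
import numpy as np

n, d, k = 5, 8, 3
rand_mat = np.random.rand(n, d)
# the k-th smallest value in each row serves as the per-row threshold
kth = np.partition(rand_mat, k - 1, axis=1)[:, k - 1:k]
mask = rand_mat <= kth
```

<p>With continuous draws, ties are almost surely absent, so each row ends up with exactly <code>k</code> True entries.</p>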
|
pytorch
| 4
|
376,643
| 64,037,243
|
Python:how to split column into multiple columns in a dataframe and with dynamic column naming
|
<p>I have a sample dataset:</p>
<pre><code>id value
[10,10] ["apple","orange"]
[15,67] ["banana","orange"]
[12,34,45] ["apple","banana","orange"]
</code></pre>
<p>i want to convert this into</p>
<pre><code>id1 id2 id3 value1 value2 value3
10 10 nan apple orange nan
15 67 nan banana orange nan
12   34   45   apple    banana   orange
</code></pre>
<ul>
<li>I solved this problem earlier using if-else conditions,</li>
<li>but the data could be dynamic, so it may have more than 3 values.</li>
<li>How can I split into multiple columns with the renaming shown above?</li>
</ul>
|
<p>We can reconstruct your data with <code>tolist</code> and <code>pd.DataFrame</code>. Then <code>concat</code> everything together again:</p>
<pre><code>d = [pd.DataFrame(df[col].tolist()).add_prefix(col) for col in df.columns]
df = pd.concat(d, axis=1)
id0 id1 id2 value0 value1 value2
0 10 10 NaN apple orange None
1 15 67 NaN banana orange None
2 12 34 45.0 apple banana orange
</code></pre>
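<p>If the 1-based column names from the question (<code>id1</code>, <code>id2</code>, ...) are wanted rather than the default 0-based ones, a rename can be folded into the same list comprehension. A sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({'id': [[10, 10], [15, 67], [12, 34, 45]],
                   'value': [['apple', 'orange'], ['banana', 'orange'],
                             ['apple', 'banana', 'orange']]})

# rename each expanded frame's integer labels 0,1,2,... to col1, col2, col3, ...
d = [pd.DataFrame(df[col].tolist()).rename(columns=lambda x, c=col: f'{c}{x + 1}')
     for col in df.columns]
out = pd.concat(d, axis=1)
```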
|
python|pandas|numpy|dataframe
| 3
|
376,644
| 64,129,235
|
Plot Multiple Y axis + 'hue' scatterplot in python
|
<p>Dataframe</p>
<pre><code>df
Sample Type y1 y2 y3 y4
S1 H 1000 135 220 171
S2 H 2900 1560 890 194
S3 P 678 350 127 255
S4 P 179 510 154 275
</code></pre>
<p>I want to plot <code>y1</code>, <code>y2</code>, <code>y3</code>, <code>y4</code> vs <code>Sample</code> scatterplot with hue as <code>Type</code>.</p>
<p>Is there any way to do it in Seaborn?</p>
|
<p>Since, you want just one plot you can use <a href="https://seaborn.pydata.org/generated/seaborn.scatterplot.html" rel="nofollow noreferrer"><code>sns.scatterplot</code></a>:</p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
#df = pd.read_csv('yourfile.csv')
#plotting
df1 = df.melt(['Type','Sample'])
sns.scatterplot(data=df1, x="Sample", y="value",hue="Sample",style="Type")
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/GvglT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GvglT.png" alt="enter image description here" /></a></p>
<p>In case you want multiple scatter plots, you can use <a href="https://seaborn.pydata.org/generated/seaborn.relplot.html#seaborn.relplot" rel="nofollow noreferrer"><code>sns.relplot</code></a>:</p>
<pre><code>#some preprocessing
df1 = df.melt(['Type','Sample'])
#plotting
sns.relplot(data=df1, x="Sample", y="value", hue="Type", col="variable", height=2, aspect=1.5)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/8kyx3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8kyx3.png" alt="enter image description here" /></a></p>
<p>In case, you want 2x2 grid :</p>
<pre><code>df1 = df.melt(['Type','Sample'])
#plotting
sns.relplot(data=df1, x="Sample", y="value", hue="Type", col="variable",col_wrap=2, height=2, aspect=1.5)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/MSUNo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MSUNo.png" alt="enter image description here" /></a></p>
<p>In case, you want 1x4 grid :</p>
<pre><code>df1 = df.melt(['Type','Sample'])
#plotting
sns.relplot(data=df1, x="Sample", y="value", hue="Type", col="variable",col_wrap=1, height=2, aspect=1.5)
plt.show()
</code></pre>
|
python|pandas|plot|hue
| 4
|
376,645
| 64,001,892
|
Group by and assign it to intermediate groups in python pandas
|
<p>I have the pandas dataframe in the below format. For every group in col1, I am trying to compute the average of 'price' and assign it to the same group but for year '2015'(in Result dataframe below). That result has to be added to the original dataframe.</p>
<p>I have tried this but not sure how to assign the intermediate results by creating a separate row for it.</p>
<pre><code>df_make_year.apply(lambda x: x.groupby('Col1')['price'].mean())
col1 year price
XXX 2016 4633.028506
XXX 2017 4805.72567
YYY 2016 4919.385966
YYY 2017 4959.816429
YYY 2018 4987.046863
</code></pre>
<p>Result(added to the above dataframe):</p>
<pre><code>XXX 2015 4719
YYY 2015 4955
</code></pre>
|
<p>You can do <code>append</code> after <code>groupby</code> <code>assign</code></p>
<pre><code>df = df.append(df.groupby('col1').agg({'col1':'first', 'price':'mean'}).assign(year=2015).reset_index(drop=True),sort=True)
</code></pre>
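<p>As a side note, <code>DataFrame.append</code> was deprecated and later removed from pandas, so on newer versions the same result can be sketched with <code>pd.concat</code> (illustrative data):</p>

```python
import pandas as pd

df = pd.DataFrame({'col1': ['XXX', 'XXX', 'YYY'],
                   'year': [2016, 2017, 2016],
                   'price': [4633.0, 4805.0, 4919.0]})

# per-group mean, tagged with year 2015, appended back onto the original
means = df.groupby('col1', as_index=False)['price'].mean().assign(year=2015)
out = pd.concat([df, means], ignore_index=True)
```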
|
python|pandas
| 1
|
376,646
| 64,039,003
|
extract certain words from column in a pandas df
|
<p>I have a pandas df in which one column is the message and having a string and have data like below:-</p>
<p>df['message']</p>
<pre><code>2020-09-23T22:38:34-04:00 mpp-xyz-010101-10-103.vvv0x.net patchpanel[1329]: RTP:a=end pp=10.10.10.10:9999 user=sip:.F02cf9f54b89a48e79772598007efc8c5.@user.com;tag=2021005845 lport=12270 raddr=11.00.111.212 rport=3004 d=5 arx=0.000 tx=0.000 fo=0.000 txf=0.000 bi=11004 bo=453 pi=122 pl=0 ps=0 rtt="" font=0 ua=funny-SDK-4.11.2.34441.fdc6567fW jc=10 no-rtp=0 cid=2164444 relog=0 vxdi=0 vxdo=0 vxdr=0\n
</code></pre>
<p>So I want to extract the <code>raddr</code> from the data and join it back to the df.
I am doing it with the code below, assuming it is at position 7 after the split:</p>
<pre><code>df[['raddr']]=df['message'].str.split(' ', 100, expand=True)[[7]]
df['raddr']=df['raddr'].str[6:]
</code></pre>
<p>The issue is that in some rows it comes at position 8 and in others at 7, so for those rows it gives me <code>rport</code> and not <code>raddr</code>.</p>
<p>How can I extract it with a string search instead of relying on split positions?</p>
<p>Note: I also want a faster approach, as I am running this on hundreds of thousands of records every minute.</p>
|
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.extract.html" rel="nofollow noreferrer">series.str.extract</a></p>
<pre><code>df['raddr'] = df['message'].str.extract(r'raddr=([\d\.]*)') # not tested
</code></pre>
<p>The pattern has only one capturing group with the value after the equal sign. It will capture any combination of digits and periods until it finds something else (a blank space, letter, symbol, or end of line).</p>
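<p>Running the pattern against a trimmed copy of the sample line (and with <code>expand=False</code> so a Series comes back directly) shows it picks up the address regardless of where it falls in the string:</p>

```python
import pandas as pd

# trimmed-down version of the log line from the question
msg = ('RTP:a=end pp=10.10.10.10:9999 lport=12270 '
       'raddr=11.00.111.212 rport=3004 d=5')
df = pd.DataFrame({'message': [msg]})
df['raddr'] = df['message'].str.extract(r'raddr=([\d\.]*)', expand=False)
```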
|
python|python-3.x|pandas|dataframe
| 2
|
376,647
| 64,159,078
|
Classify and Restore data in python
|
<p>I have a dataset which resides in a 13 by 506 matrix; let's call the dataset data_1. I am interested in one of its columns; let's call that column data_c1. data_c1 is numeric, so the 50th percentile can be calculated with the numpy library.</p>
<p>My goal is to go through data_c1, do a binary classification on whether it is above or below the 50th percentile (y=1 for above, y=0 for below) and store that information in a new matrix with the corresponding tag (y=1 or y=0.)</p>
<p>I figured out how to load the data and calculate t50 (see below.) Can someone show me how to complete the reclassification? I think I would need to use a while loop, but I can't get it to restore the data into a new matrix.</p>
<p>Here is my code so far:</p>
<pre><code>#import libraries
import numpy as np
import pandas as pd
#import data set
from datasource import data_1  # placeholder import for the real data source
data_c1 = data_1['data_c1']
#calculate percentile using numpy
t50 = np.percentile(data_c1, 50)
#classify target data as y=1 for >=t50, y=0 for <t50
#while loop????
</code></pre>
|
<p>You can apply a function like this:</p>
<pre><code>def classifier(row):
global t50 #defined somewhere else
if row["data_c1"] > t50:
return 1
else:
return 0
new_col = df.apply(classifier, axis=1)
</code></pre>
<p>Then you can do whatever you want with <code>new_col</code></p>
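<p>Since the comparison is elementwise, the same labels can also be produced without <code>apply</code>. A vectorized sketch (the column name is taken from the question):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'data_c1': [1.0, 5.0, 9.0, 3.0]})
t50 = np.percentile(df['data_c1'], 50)
# boolean comparison against the threshold, cast to 0/1 labels
df['target'] = (df['data_c1'] > t50).astype(int)
```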
|
python|numpy|classification
| 0
|
376,648
| 63,766,688
|
Pandas - Understanding how rolling averages work
|
<p>So I'm trying to calculate rolling averages, based on some column and some groupby columns.
In my case:</p>
<p>rolling column = RATINGS,</p>
<p>groupby_columns = ["DEMOGRAPHIC","ORIGINATOR","START_ROUND_60","WDAY","PLAYBACK_PERIOD"]</p>
<p>one group of my data looks like that:</p>
<p><a href="https://i.stack.imgur.com/EcoxP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EcoxP.png" alt="one group of the data" /></a></p>
<p>my code to compute the rolling average is:</p>
<pre><code>df['rolling']= df.groupby(groupby_columns_keys)['RATINGS'].\
apply(lambda x: x.shift().rolling(10,min_periods=1).mean())
</code></pre>
<p>What I don't understand is what happens when the RATINGS values start to be NaN.</p>
<p>As my window size is 10, I would expect the second number in the test (index 11) to be:</p>
<pre><code>np.mean([178,479,72,272,158,37,85.5,159,107,164.55]) = 171.205
</code></pre>
<p>But it is instead 171.9444, and the same applies to the next numbers.
What is happening here?
And how should I calculate the next rolling averages the way I want (simply averaging the 10 last ratings, and if a rating is NaN, taking the calculated average of the previous row instead)?</p>
<p>Any help will be appreciated.</p>
|
<blockquote>
<p>np.mean([178,479,72,272,158,37,85.5,159,107,<strong>164.55</strong>]) = 171.205</p>
</blockquote>
<p>Where does the 164.55 come from? The rest of those values are from the "RATINGS" column and the 164.55 is from the "rolling" column. Maybe I am misunderstanding what the <code>rolling</code> function does.</p>
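<p>The discrepancy is also consistent with how rolling handles NaNs: with <code>min_periods=1</code> the window still covers the last 10 rows, but NaN entries are dropped from the mean (the divisor shrinks) rather than being replaced by earlier rolling values. A small illustration of that behaviour:</p>

```python
import numpy as np
import pandas as pd

s = pd.Series([178.0, np.nan, 72.0])
r = s.shift().rolling(2, min_periods=1).mean()
# windows after the shift: [nan], [nan, 178], [178, nan]
# NaNs are skipped, so the last two means are both 178.0
```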
|
python|pandas|rolling-computation
| 1
|
376,649
| 64,081,034
|
Combining different dataframes columns into new dataframe and bonus filtering question
|
<p>I'm trying to create a new dataframe from two other dataframes, and I think the indexing is messing me up.
It might be a chained-operations issue from what I have been reading, but the suggested fix is to use iloc, which I did, and I am still seeing the problem.</p>
<p>I have original dataframe sorted by date index</p>
<pre><code>df.head()
open high low close volume returns returns_final
Datetime
2020-07-06 09:30:00 255.337982 261.950012 253.208786 261.421997 6592145 -6.084015 1
2020-07-06 11:00:00 261.526001 268.399994 261.239990 266.275452 4955678 -4.749451 1
2020-07-06 12:30:00 266.269043 266.989990 264.200012 265.191986 2002640 1.077057 -1
2020-07-06 14:00:00 265.185455 269.558014 261.597992 268.513763 3303263 -3.328308 1
2020-07-06 15:30:00 268.528015 275.558014 268.096008 274.200012 2583149 -5.671997 1
</code></pre>
<p>Created some filters for the new dataframes</p>
<pre><code># Creating filter for time frame
df_inc = df.filter(like='09:30', axis=0)
df_inc_11 = df.filter(like='11:00', axis=0)
</code></pre>
<p>I am having a really hard time combining the two frames. I'm pretty sure the indexing is causing all the problems.</p>
<pre><code>newer = df_inc.filter(['open','close'], axis=1)
newer.head()
open close
Datetime
2020-07-06 09:30:00 255.337982 261.421997
2020-07-07 09:30:00 281.002014 277.621979
2020-07-08 09:30:00 281.000000 278.865784
2020-07-09 09:30:00 279.398010 272.015991
2020-07-10 09:30:00 278.220367 283.506012
</code></pre>
<p>Trying to add one Column from other dataframe.</p>
<pre><code>df_inc_11.iloc[:, 3:4].head()
close
Datetime
2020-07-06 11:00:00 266.275452
2020-07-07 11:00:00 278.123718
2020-07-08 11:00:00 278.633118
2020-07-09 11:00:00 274.414978
2020-07-10 11:00:00 282.440613
newer['new_close'] = df_inc_11.iloc[:, 3:4]
newer.head()
open close new_close
Datetime
2020-07-06 09:30:00 255.337982 261.421997 NaN
2020-07-07 09:30:00 281.002014 277.621979 NaN
2020-07-08 09:30:00 281.000000 278.865784 NaN
2020-07-09 09:30:00 279.398010 272.015991 NaN
2020-07-10 09:30:00 278.220367 283.506012 NaN
</code></pre>
<p>I also tried to delete the index of the 2nd frame before copying it over but no go. Keep getting a NaN.</p>
<pre><code>df_inc_11 = df_inc_11.reset_index(drop=True)
</code></pre>
<p>Any idea how I can fix the NaN?</p>
<p>On a side note, is there a better way to combine both of these searches into one filter? I'm thinking that might sort out the indexing issue.</p>
<pre><code># Creating filter for time frame
df_inc = df.filter(like='09:30', axis=0)
df_inc_11 = df.filter(like='11:00', axis=0)
</code></pre>
|
<p>Try</p>
<pre><code>newer['new_close'] = df_inc_11.close.values
</code></pre>
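<p>This works because the NaNs come from index alignment: assigning a Series matches on index labels, and the 09:30 and 11:00 timestamps never match. <code>.values</code> strips the index, so the assignment becomes positional. A minimal sketch (the labels here are stand-ins for the real timestamps):</p>

```python
import pandas as pd

a = pd.Series([1, 2], index=['09:30-d1', '09:30-d2'])
b = pd.Series([10, 20], index=['11:00-d1', '11:00-d2'])

df = a.to_frame('close')
df['aligned'] = b            # labels never match -> all NaN
df['positional'] = b.values  # index stripped -> values copied in order
```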
|
python|pandas
| 1
|
376,650
| 64,061,470
|
Better way of concatenating multiple for loops in a dataframe
|
<p>So I have a dataframe with quite a few columns, and I am running multiple for loops to create variables to be used in my desired function. Is there a better (more concise) way to run these loops?</p>
<pre><code>for x in df['A']:
L = x
for y in df['B']:
M = y
for w in df['C']:
N = w
for v in df['D']:
O = v
</code></pre>
<p>maybe also improve execution speed at the same time.</p>
|
<p>You can create numpy array by seelcting columns by list and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_numpy.html" rel="nofollow noreferrer"><code>DataFrame.to_numpy</code></a>:</p>
<pre><code>for L,M,N,O in df[['A','B','C','D']].to_numpy():
print (L, M, N, O)
</code></pre>
<p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.itertuples.html" rel="nofollow noreferrer"><code>DataFrame.itertuples</code></a>:</p>
<pre><code>for row in df.itertuples():
(L, M, N, O) = (row.A,row.B,row.C,row.D)
</code></pre>
|
python|pandas|dataframe
| 3
|
376,651
| 63,920,408
|
Identify first row amongst similar set of data from pandas dataframe
|
<p>I have a dataframe similar to the one shown below:</p>
<pre><code> BillNumber Description LineAmount TotalAmount
0 INV001 Line Item 1 of INV001 500 700
1 INV001 Line Item 2 of INV001 200 700
2 INV002 Line Item 1 of INV002 100 800
3 INV002 Line Item 2 of INV002 300 800
4 INV002 Line Item 3 of INV002 400 800
</code></pre>
<p>What I want is as follows:</p>
<pre><code> BillNumber Description LineAmount TotalAmount NewBill
0 INV001 Line Item 1 of INV001 500 700 Yes
1 INV001 Line Item 2 of INV001 200 700
2 INV002 Line Item 1 of INV002 100 800 Yes
3 INV002 Line Item 2 of INV002 300 800
4 INV002 Line Item 3 of INV002 400 800
</code></pre>
<p>I want to identify the first row of every new BillNumber and mark the value 'Yes' for it under new column named 'NewBill'. How can we achieve this using pandas ?</p>
<p>Thanks in advance.</p>
|
<p>Use <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.duplicated.html" rel="nofollow noreferrer"><code>Series.duplicated</code></a>:</p>
<pre><code>df['NewBill'] = np.where(df['BillNumber'].duplicated(), '', 'Yes')
print (df)
BillNumber Description LineAmount TotalAmount NewBill
0 INV001 Line Item 1 of INV001 500 700 Yes
1 INV001 Line Item 2 of INV001 200 700
2 INV002 Line Item 1 of INV002 100 800 Yes
3 INV002 Line Item 2 of INV002 300 800
4 INV002 Line Item 3 of INV002 400 800
</code></pre>
|
python|pandas
| 4
|
376,652
| 63,765,472
|
Pivot table to Pivot table, by swapping indexes and columns
|
<p>supposed my dataset</p>
<pre><code>Name Month Value
A 1 120
A 3 130
A 5 140
B 1 80
B 2 110
B 4 90
C 1 150
C 4 120
C 5 190
D 1 100
D 2 105
....
</code></pre>
<p>As shown in the data, there are values that do not exist for each month, so first create the first pivot table to fill in the missing values,</p>
<pre><code>df_pivot1 = (df.pivot_table(index='Month',columns='Name', values='Value'))
df_pivot1
Name A B C D
Month
1 120 80 150 100
2 Nan 110 Nan 105
3 130 Nan Nan Nan
4 Nan 90 120 Nan
5 140 Nan 190 Nan
</code></pre>
<p>and after filling in missing values(data imputation),</p>
<pre><code>Val = Assume that imputation value
Name A B C D
Month
1 120 80 150 100
2 Val 110 Val 105
3 130 Val Val Val
4 Val 90 120 Val
5 140 Val 190 Val
</code></pre>
<p>Now what I want is to use <strong>df_pivot1</strong> so that the index becomes the <strong>Name</strong> and the column becomes the <strong>Month</strong>.</p>
<p><strong>output what I want</strong></p>
<pre><code>Month 1 2 3 4 5
Name
A 120 Val 130 Val 140
B 80 110 Val 90 Val
C 150 Val Val 120 190
D 100 105 Val Val Val
</code></pre>
<p>thank you for reading.</p>
|
<p>After imputing the values, try:</p>
<pre><code>df_pivot1 = df_pivot1.T
</code></pre>
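<p>Putting the steps together (pivot, a stand-in imputation, then transpose) as a runnable sketch with a reduced version of the sample data:</p>

```python
import pandas as pd

df = pd.DataFrame({'Name': ['A', 'A', 'B'],
                   'Month': [1, 3, 1],
                   'Value': [120, 130, 80]})

df_pivot1 = df.pivot_table(index='Month', columns='Name', values='Value')
df_pivot1 = df_pivot1.fillna(df_pivot1.mean())  # stand-in for the real imputation
out = df_pivot1.T  # Name becomes the index, Month the columns
```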
|
python|pandas|pivot
| 2
|
376,653
| 64,015,762
|
How to convert a dataframe to tidy form (unpivot)?
|
<p>I have the following dataframe <code>df = pd.read_excel('...')</code>:</p>
<pre><code>Date Id V1 V2 V3
2020-1-1 1 10 100 NaN
2020-1-1 2 20 120 23
2020-1-1 3 11 101 NaN
</code></pre>
<p>I need to transform it to</p>
<pre><code>Date Name Value
2020-1-1 1_V1 10
2020-1-1 1_V2 100
2020-1-1 2_V1 20
2020-1-1 2_V2 120
2020-1-1 2_V3 23
2020-1-1 3_V1 11
2020-1-1 3_V2 101
</code></pre>
<p>The <code>'Name'</code> column is a concatenation of <code>Id</code> and column names of <code>V1</code>, <code>V2</code>, <code>V3</code>, etc. The <code>NaN</code> values are ignored.</p>
<p>How to implement it using dataframe features?</p>
|
<p>Let us try <code>melt</code></p>
<pre><code>s = df.melt(['Date','Id']).dropna()
s['Name'] = s.pop('Id').astype(str) + '_' + s.pop('variable')
s
       Date  value  Name
0  2020-1-1   10.0  1_V1
1  2020-1-1   20.0  2_V1
2  2020-1-1   11.0  3_V1
3  2020-1-1  100.0  1_V2
4  2020-1-1  120.0  2_V2
5  2020-1-1  101.0  3_V2
7  2020-1-1   23.0  2_V3
</code></pre>
|
python|pandas|dataframe
| 2
|
376,654
| 64,117,751
|
Convert c-order index into f-order index in Python
|
<p>I am trying to find a solution to the following problem. I have an index in C-order and I need to convert it into F-order.</p>
<p><strong>To explain simply my problem, here is an example:</strong></p>
<hr />
<p>Let's say we have a matrix <code>x</code> as:</p>
<pre><code>x = np.arange(1,5).reshape(2,2)
print(x)
array([[1, 2],
[3, 4]])
</code></pre>
<p>Then the <strong>flattened matrix in C order is:</strong></p>
<pre><code>flat_c = x.ravel()
print(flat_c)
array([1, 2, 3, 4])
</code></pre>
<p>Now, the value <code>3</code> is at the <code>2nd position</code> of the <code>flat_c</code> vector i.e. <code>flat_c[2] is 3</code>.</p>
<p><strong>If I would flatten the matrix <code>x</code> using the F-order</strong>, I would have:</p>
<pre><code>flat_f = x.ravel(order='f')
array([1, 3, 2, 4])
</code></pre>
<p>Now, the value <code>3</code> is at the <code>1st position</code> of the <code>flat_f</code> vector i.e. <code>flat_f[1] is 3</code>.</p>
<p><strong>I am trying to find a way to get the F-order index knowing the dimension of the matrix and the corresponding index in C-order.</strong></p>
<p>I tried using <code>np.unravel_index</code> but this function returns the matrix positions...</p>
|
<p>We can use a combination of <a href="https://numpy.org/doc/stable/reference/generated/numpy.ravel_multi_index.html" rel="nofollow noreferrer"><code>np.ravel_multi_index</code></a> and <a href="https://numpy.org/doc/stable/reference/generated/numpy.unravel_index.html" rel="nofollow noreferrer"><code>np.unravel_index</code></a> for a ndarray supported solution. Hence, given array shape <code>s</code> of input array <code>a</code> and c-order index <code>c_idx</code>, it would be -</p>
<pre><code>s = a.shape
f_idx = np.ravel_multi_index(np.unravel_index(c_idx,s)[::-1],s[::-1])
</code></pre>
<p>So, the idea is pretty simple. Use <code>np.unravel_index</code> to get c-based indices in n-dim, then get flattened-linear index in fortran order by using <code>np.ravel_multi_index</code> on flipped shape and those flipped n-dim indices to simulate fortran behavior.</p>
<p>Sample runs on <code>2D</code> -</p>
<pre><code>In [321]: a
Out[321]:
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]])
In [322]: s = a.shape
In [323]: c_idx = 6
In [324]: np.ravel_multi_index(np.unravel_index(c_idx,s)[::-1],s[::-1])
Out[324]: 4
In [325]: c_idx = 12
In [326]: np.ravel_multi_index(np.unravel_index(c_idx,s)[::-1],s[::-1])
Out[326]: 8
</code></pre>
<p>Sample run on 3D array -</p>
<pre><code>In [336]: a
Out[336]:
array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]],
[[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]]])
In [337]: s = a.shape
In [338]: c_idx = 21
In [339]: np.ravel_multi_index(np.unravel_index(c_idx,s)[::-1],s[::-1])
Out[339]: 9
In [340]: a.ravel('F')[9]
Out[340]: 21
</code></pre>
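<p>The round trip is easy to sanity-check: the element at the converted index in F order must equal the one at the original index in C order.</p>

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
s = a.shape
c_idx = 13
f_idx = np.ravel_multi_index(np.unravel_index(c_idx, s)[::-1], s[::-1])
# both orders land on the same element of a
assert a.ravel('C')[c_idx] == a.ravel('F')[f_idx]
```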
|
python|numpy
| 2
|
376,655
| 64,042,328
|
Is convolution useful on a network with a timestep of 1?
|
<p>This code comes from <a href="https://www.kaggle.com/dkaraflos/1-geomean-nn-and-6featlgbm-2-259-private-lb" rel="nofollow noreferrer">https://www.kaggle.com/dkaraflos/1-geomean-nn-and-6featlgbm-2-259-private-lb</a>, The goal of this competition is to use seismic signals to predict the timing of laboratory earthquakes. The person in this link has won first place among more than 4000 teams</p>
<pre><code>def get_model():
inp = Input(shape=(1,train_sample.shape[1]))
x = BatchNormalization()(inp)
x = LSTM(128,return_sequences=True)(x) # LSTM as first layer performed better than Dense.
x = Convolution1D(128, (2),activation='relu', padding="same")(x)
x = Convolution1D(84, (2),activation='relu', padding="same")(x)
x = Convolution1D(64, (2),activation='relu', padding="same")(x)
x = Flatten()(x)
x = Dense(64, activation="relu")(x)
x = Dense(32, activation="relu")(x)
#outputs
ttf = Dense(1, activation='relu',name='regressor')(x) # Time to Failure
tsf = Dense(1)(x) # Time Since Failure
classifier = Dense(1, activation='sigmoid')(x) # Binary for TTF<0.5 seconds
model = models.Model(inputs=inp, outputs=[ttf,tsf,classifier])
opt = optimizers.Nadam(lr=0.008)
# We are fitting to 3 targets simultaneously: Time to Failure (TTF), Time Since Failure (TSF), and Binary for TTF<0.5 seconds
# We weight the model to optimize heavily for TTF
# Optimizing for TSF and Binary TTF<0.5 helps to reduce overfitting, and helps for generalization.
model.compile(optimizer=opt, loss=['mae','mae','binary_crossentropy'],loss_weights=[8,1,1],metrics=['mae'])
return model
</code></pre>
<p><a href="https://i.stack.imgur.com/ANjEU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ANjEU.png" alt="enter image description here" /></a></p>
<p>However, according to my derivation, I think <code>x = Convolution1D(128, (2), activation='relu', padding="same")(x)</code> and <code>x = Dense(128, activation='relu')(x)</code> have the same effect, because the convolution kernel performs convolution on a sequence with a time step of 1. In principle, it is very similar to a fully connected layer. Why use Conv1D here instead of a fully connected layer directly? Is my derivation wrong?</p>
|
<p><strong>1) Assuming you would input a sequence to the LSTM (the normal use case):</strong></p>
<p>It would not be the same since the LSTM returns a sequence (<code>return_sequences=True</code>), thereby not reducing the input dimensionality. The output shape is therefore <code>(Batch, Sequence, Hid)</code>. This is being fed to the <code>Convolution1D</code> layer which performs convolution on the <code>Sequence</code> dimension, i.e. on <code>(Sequence, Hid)</code>. So in effect, the purpose of the 1D Convolutions is to extract local 1D subsequences/patches after the LSTM.</p>
<p>If we had <code>return_sequences=False</code>, the LSTM would return the final state <code>h_t</code>. To ensure the same behavior as a Dense layer, you need a fully connected convolutional layer, i.e. a kernel size of <code>Sequence</code> length, and we need as many filters as we have <code>Hid</code> in the output shape. This would then make the 1D Convolution equivalent to a Dense layer.</p>
<p><strong>2) Assuming you do not input a sequence to the LSTM (your example):</strong></p>
<ul>
<li><p>In your example, the LSTM is used as a replacement for a Dense layer.
It serves the same function, though it gives you a slightly different
result as the gates do additional transformations (even though we
have no sequence).</p>
</li>
<li><p>Since the Convolution is then performed on <code>(Sequence, Hid)</code> = <code>(1, Hid)</code>, it is indeed operating per timestep. Since we have 128 inputs and 128 filters, it is fully connected and the kernel size is large enough to operate on the single element. This meets the above defined criteria for a 1D Convolution to be equivalent to a Dense layer, so you're correct.</p>
</li>
</ul>
<p>As a side note, this type of architecture is something you would typically get with a <a href="https://en.wikipedia.org/wiki/Neural_architecture_search" rel="nofollow noreferrer">Neural Architecture Search</a>. The "replacements" used here are not really commonplace and not generally guaranteed to be better than the more established counterparts. In a lot of cases, using Reinforcement Learning or Evolutionary Algorithms can however yield slightly better accuracy using "untraditional" solutions since very small performance gains can just happen by chance and don't have to necessarily reflect back on the usefulness of the architecture.</p>
|
python|tensorflow|keras|neural-network|conv-neural-network
| 1
|
376,656
| 63,993,846
|
Matching values of a dict with the values of two columns of a dataframe and substituting the value of a third column with the key of the dict
|
<p>I have a pandas dataframe like this:</p>
<pre><code>Index | Line Item | Insertion Order | Creative Type
_________________________________________________________________________________________________
1 | blbl 33 dEs '300x600' Q3 | hello 444 | UNKNOWN
2 | QQQ4 Hello trueview Apple | something 68793274 | UNKNOWN
3 | A useless string | pre-roll Video <10 tttt 89 CASIO | UNKNOWN
4 | Something not in dict | Neither here | UNKNOWN
</code></pre>
<p>And a dictionary like this:</p>
<pre><code> dct = {
'RISING STARS': ['300x600', 'Box 300x600', '300x250', 'Box 300x250', 'Classic Skin', 'Main Banner', 'Half Banner', 'Masthead', 'Push Bar', 'Strip', 'In Image', 'Mix formati display rising'],
'VIDEO': ['trueview', 'Video Banner', 'Video in Picture', 'Videobox', 'Mid-roll Video', 'Pre-roll+Inread', 'Pre-roll Video <10', 'Pre-roll Video =10', 'Pre-roll Video =15', 'Pre-roll Video =20', 'Pre-roll Video =30' ,'Pre-roll Video >30','Inread / Intext / Outstream','Mix formati video','Post-roll Video','Inread XXX (Landscape/Vertical/Square)', 'Pre-roll Video Sponsored Session' ,'Pre-roll Video Viewmax' ,'Pre-roll Video Takeover']}
</code></pre>
<p>I would like to substitute the value in the column Creative Type of my dataframe: if the values of the column <code>Line Item</code> or <code>Insertion Order</code> match the values of the dictionary, the corresponding row of the column <code>Creative Type</code> should take the name of the key of the dictionary. If there is no match, the corresponding row of the column creative type should receive the value <code>NaN</code>.</p>
<p><strong>The expected output is:</strong></p>
<pre><code>Index | Line Item | Insertion Order | Creative Type
_________________________________________________________________________________________________
1 | blbl 33 dEs '300x600' Q3 | hello 444 | RISING STARS
2 | QQQ4 Hello trueview Apple | something 68793274 | VIDEO
3 | A useless string | pre-roll Video <10 tttt 89 CASIO | VIDEO
4 | Something not in dict | Neither here | NaN
</code></pre>
<p>What's the easiest way to do it? (less computationally expensive if possible)</p>
|
<p>Create a <strong>replacement</strong> dictionary by inverting the key-value pairs of given <code>dict</code> i.e for each value in the list map it to its corresponding key, then using <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.replace.html" rel="nofollow noreferrer"><code>Series.replace</code></a> replace the strings from the combined columns <code>Line Item</code> and <code>Insertion Order</code> with its corresponding value from replace-ment dictionary when there is a match, finally <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.mask.html" rel="nofollow noreferrer"><code>mask</code></a> the strings which can't be replaced:</p>
<pre><code>r = {rf'(?i).*?\b{z}\b.*':x for x, y in dct.items() for z in y}
s = df['Line Item'].add(':' + df['Insertion Order'])
df['Creative Type'] = s.replace(r, regex=True).mask(lambda x: x.eq(s))
</code></pre>
<hr />
<pre><code> Line Item Insertion Order Creative Type
1 blbl 33 dEs '300x600' Q3 hello 444 RISING STARS
2 QQQ4 Hello trueview Apple something 68793274 VIDEO
3 A useless string pre-roll Video <10 tttt 89 CASIO VIDEO
4 Something not in dict Neither here NaN
</code></pre>
|
python|pandas|dataframe|dictionary
| 1
|
376,657
| 63,939,064
|
Shuffle a square numpy array, but retain correspondence between row and column indices
|
<p>If I have a square <em>and symmetric</em> matrix, for example,</p>
<pre><code>[[0 3 2]
[3 8 4]
[2 4 5]]
</code></pre>
<p>I do not want to shuffle rows only or columns only. Instead,</p>
<p>how can I, <em>for example</em> (not the following in the strict order as written, but instead at random):</p>
<ul>
<li>shuffle the matrix in numpy so that row <em>and</em> column 1 are moved together to row and column 3,</li>
<li>while row <em>and</em> column 3 are moved to row and column 2</li>
<li>and row <em>and</em> column 2 are moved to row and column 1</li>
</ul>
|
<p>What you are asking for can be done with so-called matrix conjugation:</p>
<pre><code>perm_mat = np.random.permutation(np.eye(len(a),dtype=np.int))
out = (perm_mat @ a) @ (np.linalg.inv(perm_mat))
</code></pre>
<p>Output (random of course):</p>
<pre><code>array([[8., 4., 3.],
[4., 5., 2.],
[3., 2., 0.]])
</code></pre>
<p>Or can be done with slicing:</p>
<pre><code>np.random.seed(1)
orders = np.random.permutation(np.arange(len(a)))
a[orders][:,orders]
</code></pre>
<p>Output:</p>
<pre><code>array([[0, 2, 3],
[2, 5, 4],
[3, 4, 8]])
</code></pre>
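<p>A small aside on the first approach: for a permutation matrix the transpose equals the inverse, so <code>np.linalg.inv</code> can be avoided, and either formulation keeps the result symmetric. A quick check:</p>

```python
import numpy as np

a = np.array([[0, 3, 2],
              [3, 8, 4],
              [2, 4, 5]])

perm_mat = np.random.permutation(np.eye(len(a), dtype=int))
# for a permutation matrix, the transpose equals the inverse
out = perm_mat @ a @ perm_mat.T
```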
|
arrays|numpy|matrix|shuffle
| 2
|
376,658
| 63,990,338
|
Masked object disappears when converting image from float32 to uint8
|
<p>Below is the mask showing the detected object, obtained by using histogram back projection</p>
<p><a href="https://i.stack.imgur.com/5EfNk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5EfNk.png" alt="float" /></a></p>
<p><strong>The image has the type float32 which results from the algorithm's output</strong>. I want to detect contours using <code>cv2.findContours</code> function.</p>
<p>As you know, this function accepts a certain image type <strong>which is uint8</strong>, otherwise it raises an error. Therefore, I converted the image type from float32 into uint8 using <code>imageFloat.astype(np.uint8)</code>.</p>
<p>When displaying the new converted binary image (new uint8) it displays a <strong>black image</strong> which means that the detected object is <strong>no longer visible</strong> <strong>(Zero mask)</strong></p>
<p><a href="https://i.stack.imgur.com/EJ9fm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EJ9fm.png" alt="black" /></a></p>
<p>So my question is: anyone know why this happens and what i'm doing wrong?</p>
<p>Thanks in advance
Khaled</p>
|
<p>You are not scaling up the pixel values before converting to int; that is why the converted image shows up black.</p>
<p>Do this:</p>
<pre class="lang-py prettyprint-override"><code>imageFloat *= 255
imageFloat.astype(np.uint8)
</code></pre>
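<p>A runnable sketch of why the scaling matters, assuming the float mask holds values in [0, 1] as back-projection output typically does:</p>

```python
import numpy as np

# Assumed: the float mask holds values in the [0, 1] range
mask_float = np.array([[0.0, 0.5], [0.9, 1.0]], dtype=np.float32)

# Scale to 0-255 *before* casting; casting directly truncates everything to 0
mask_uint8 = (mask_float * 255).astype(np.uint8)
```

Without the multiplication, every value below 1.0 truncates to 0 and the displayed mask is entirely black.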
|
python|numpy|opencv|image-processing|mask
| 3
|
376,659
| 63,974,864
|
DataFrames - Average Columns
|
<p>I have the following dataframe in pandas</p>
<pre><code>Column 1 Column 2 Column3 Column 4
2 2 2 4
1 2 2 3
</code></pre>
<p>I am looking to create a dataframe which contains averages of columns 1 & 2, columns 3 & 4, and so on.</p>
<pre><code> ColumnAvg(12) ColumnAvg(34)
2 3
1.5 1.5
</code></pre>
<p>I was using this, but it is averaging everything.</p>
<pre><code>df.mean(axis=1)
</code></pre>
<p>Is there a way that I can add the column headers when averaging each row?
If not, another way would be to create two arrays, average them and then create a new dataframe.</p>
|
<p>You can do <code>groupby</code> with <code>axis=1</code> and pass the list of group labels:</p>
<pre><code>out = df.groupby([1,1,2,2],axis=1).mean()
1 2
0 2.0 3.0
1 1.5 2.5
</code></pre>
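<p>If you want readable headers instead of the labels <code>1</code> and <code>2</code>, a dict can map each source column to an output name (column names here assumed from the question); transposing first avoids <code>groupby(axis=1)</code>, which newer pandas versions deprecate:</p>

```python
import pandas as pd

df = pd.DataFrame({'Column 1': [2, 1], 'Column 2': [2, 2],
                   'Column3': [2, 2], 'Column 4': [4, 3]})

# Map each source column to the name of its averaged output column
groups = {'Column 1': 'ColumnAvg(12)', 'Column 2': 'ColumnAvg(12)',
          'Column3': 'ColumnAvg(34)', 'Column 4': 'ColumnAvg(34)'}

# Transpose, group the rows (former columns) by label, average, transpose back
out = df.T.groupby(groups).mean().T
```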
|
python|pandas|numpy|dataframe
| 2
|
376,660
| 64,050,235
|
Pandas - fillna with mean for specific categories
|
<p>I'd like to fillna with the mean number for the column but only for representatives of the same category as the missing value</p>
<pre><code>data = {'Class': ['Superlight', 'Aero', 'Aero', 'Superlight', 'Superlight', 'Superlight', 'Aero', 'Aero'],
'Weight': [5.6, 8.6, np.nan, 5.9, 5.65, np.nan, 8.1, 8.4]}
Class Weight
0 Superlight 5.60
1 Aero 8.60
2 Aero NaN
3 Superlight 5.90
4 Superlight 5.65
5 Superlight NaN
6 Aero 8.10
7 Aero 8.40
</code></pre>
<p>I know I can do:</p>
<pre><code>df.Weight.fillna(df.Weight.mean())
</code></pre>
<p>But that will fill in the missing values with the mean of the whole column.</p>
<p>The following would replace the null values with the mean for the AERO category (which is better but still no good as I'd have to do it for each category/class separately)</p>
<pre><code>df.Weight.fillna(df[df.Class == 'Aero'].Weight.mean())
</code></pre>
<p>Is it possible to abstract it so that it'll automatically take the Class of the current row and find the mean of the values falling into that category and replace it without hardcoding the Class values? Hope that makes sense.</p>
|
<p><code>groupby + transform</code> and then fillna:</p>
<pre><code>df['Weight'].fillna(df.groupby("Class")['Weight'].transform("mean"))
</code></pre>
<hr />
<pre><code>0 5.600000
1 8.600000
2 8.366667
3 5.900000
4 5.650000
5 5.716667
6 8.100000
7 8.400000
Name: Weight, dtype: float64
</code></pre>
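<p>A runnable version built from the question's data, which also assigns the result back so the fill actually persists in the frame:</p>

```python
import numpy as np
import pandas as pd

data = {'Class': ['Superlight', 'Aero', 'Aero', 'Superlight', 'Superlight',
                  'Superlight', 'Aero', 'Aero'],
        'Weight': [5.6, 8.6, np.nan, 5.9, 5.65, np.nan, 8.1, 8.4]}
df = pd.DataFrame(data)

# Each NaN gets the mean Weight of its own Class; assign back to persist
df['Weight'] = df['Weight'].fillna(df.groupby('Class')['Weight'].transform('mean'))
```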
|
python|pandas|fillna
| 9
|
376,661
| 63,833,249
|
ValueError: Must pass DataFrame with boolean values only - When converting Pandas Columns to Numeric
|
<p>I'm trying to convert these columns to numeric but I get this error, and haven't found much of anything on it for this specific use case on Google or Stack Overflow:</p>
<pre><code>ValueError: Must pass DataFrame with boolean values only
</code></pre>
<p>df:</p>
<pre><code> 5 6 7 8 9 10 11
0 0 0 7 0 0 0 11
1 5 0 0 0 0 0 0
2 0 0 0 0 9 0 0
#dtypes:
0 object
1 object
2 object
5 object
6 object
</code></pre>
<pre><code>cols = df.iloc[:,0:]
df[cols] = df[cols].apply(pd.to_numeric, errors='ignore', axis=1).fillna(df)
#I've tried with errors = 'coerce' and without fillna(df)
</code></pre>
<p>What can I try next?</p>
|
<p>If you want to convert all the columns to numeric you can use</p>
<pre><code>df = df.astype(int)
</code></pre>
<p>If you want to convert specific columns to numeric you can pass a dictionary to <code>astype</code>.</p>
<pre><code>convert_dict = {'A': int,
'C': float
}
df = df.astype(convert_dict)
</code></pre>
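<p>One caveat, shown here as a sketch with made-up data: <code>astype(int)</code> raises if any cell holds a string that isn't a number. If that can happen, applying <code>pd.to_numeric</code> with <code>errors='coerce'</code> turns unparsable cells into NaN instead:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': ['0', '5', '0'], 'b': ['7', 'x', '9']})

# 'x' cannot be parsed as a number, so it becomes NaN instead of raising
df = df.apply(pd.to_numeric, errors='coerce')
```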
|
python|pandas
| 1
|
376,662
| 64,089,038
|
How can I pivot a really large dataframe using dask?
|
<p>I have a Dask dataframe that I load like this:</p>
<pre class="lang-py prettyprint-override"><code>dates_devices = dd.read_csv('data_part*.csv.gz', compression='gzip', blocksize=None)
dates_devices['cnt'] = 1
dates_devices.astype({'cnt': 'uint8'}).dtypes # make it smaller
</code></pre>
<p>I'm trying to use dask to pivot the table:</p>
<pre class="lang-py prettyprint-override"><code>final_table = (dates_devices
.categorize(columns=['date'])
.pivot_table(index='device',
columns='date',
values='cnt').fillna(0).astype('uint8'))
</code></pre>
<p>That "runs" just fine, but when I do the implicit compute in the <code>dd.to_parquet()</code> I get:</p>
<p><code>MemoryError: Unable to allocate 5.42 GiB for an array with shape (727304656,) and data type uint8</code>,</p>
<p>then taken from <a href="https://stackoverflow.com/questions/57507832/unable-to-allocate-array-with-shape-and-data-type">here</a>, I tried</p>
<pre class="lang-sh prettyprint-override"><code>$ echo 1 > /proc/sys/vm/overcommit_memory
</code></pre>
<p>but the kernel still gets killed. I've 32GB RAM, and 32GB swap on Linux Xubuntu, so that should fit comfortably in RAM. Is there a way to do this or to "test" why exactly I'm getting my kernel killed?</p>
|
<p>You can try writing your dask dataframe in chunks to overcome the memory limitation. For example:</p>
<pre><code>for i in range(final_table.npartitions):
    partition = final_table.get_partition(i)
    # write each partition on its own, e.g. with partition.to_parquet(...)
</code></pre>
<p>Please see how I do <code>.to_sql</code> -- you can take a similar approach with <code>.to_parquet</code>: <a href="https://stackoverflow.com/a/62458085/6366770">https://stackoverflow.com/a/62458085/6366770</a></p>
|
python|pandas|dask
| 1
|
376,663
| 63,959,950
|
How to filter a pandas dataframe and then groupby and aggregate a list of values?
|
<p>I'm trying to use groupby and get values as a list.</p>
<p>End df should be "bid" as index, score as list for second column (ex. [85, 58] if they both have the same "bid"]</p>
<p>This is my df:</p>
<p><a href="https://i.stack.imgur.com/rJjyG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rJjyG.png" alt="" /></a></p>
<p>When I use <code>merged.groupby("bid")['score_y'].apply(list)</code></p>
<p>I get TypeError: 'Series' objects are mutable, thus they cannot be hashed.</p>
<p>Does anyone know why I'm getting this error?</p>
<p>Edit 1:</p>
<p>This is the datasource: <a href="https://data.sfgov.org/Health-and-Social-Services/Restaurant-Scores-LIVES-Standard/pyih-qa8i" rel="nofollow noreferrer">https://data.sfgov.org/Health-and-Social-Services/Restaurant-Scores-LIVES-Standard/pyih-qa8i</a></p>
<p><a href="https://i.stack.imgur.com/ZaKDN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZaKDN.png" alt="enter image description here" /></a></p>
<p>The df "ins" yields the following where "bid" are the numbers before the "_' in "iid".
<a href="https://i.stack.imgur.com/DW0xD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DW0xD.png" alt="enter image description here" /></a></p>
<p>My code so far:</p>
<pre class="lang-py prettyprint-override"><code>ins2018 = ins[ins['year'] == 2018] #.drop(["iid", 'date', 'type', 'timestamp', 'year', 'Missing Score'], axis = 1)
# new = ins2018.loc[ins2018["score"] > 0].sort_values("date").groupby("bid").count()
# new = new.loc[new["iid"] == 2]
# merge = pd.merge(new, ins2018, how = "left", on = "bid").sort_values('date_y')
# merged = merge.loc[merge['score_y'] > 0].drop(['iid_x', 'date_x', 'score_x', 'type_x', 'timestamp_x', 'year_x', 'Missing Score_x', 'iid_y', 'type_y', 'timestamp_y', 'year_y', 'Missing Score_y', "date_y"], axis = 1)
</code></pre>
|
<ul>
<li>Aggregate <code>list</code> onto <code>score_y</code> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.aggregate.html" rel="nofollow noreferrer"><code>pandas.DataFrame.aggregate</code></a></li>
<li>Depending on <code>merged</code>, the index may need to be reset.</li>
</ul>
<pre class="lang-py prettyprint-override"><code># reset the index of of merged
merged = merged.reset_index(drop=True)
# groupby bid and aggregate a list onto score_y
merged.groupby('bid').agg({'score_y': list})
</code></pre>
<h2>Example</h2>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import random
np.random.seed(365)
random.seed(365)
rows = 100
data = {'a': np.random.randint(10, size=(rows)),
'groups': [random.choice(['1-5', '6-25', '26-100', '100-500', '500-1000', '>1000']) for _ in range(rows)]}
df = pd.DataFrame(data)
# groupby and aggregate a list
dfg = df.groupby('groups').agg({'a': list})
dfg
[out]:
a
groups
1-5 [7, 8, 4, 3, 1, 7, 9, 3, 2, 7, 6, 4, 4, 6]
100-500 [4, 3, 2, 8, 6, 3, 1, 5, 7, 7, 3, 5, 4, 7, 2, 2, 4]
26-100 [4, 2, 2, 9, 5, 3, 1, 0, 7, 9, 7, 7, 9, 9, 9, 7, 0, 0, 4]
500-1000 [2, 8, 0, 7, 6, 6, 8, 4, 6, 2, 2, 5]
6-25 [5, 9, 7, 0, 6, 5, 7, 9, 9, 9, 6, 5, 6, 0, 2, 7, 4, 0, 3, 9, 0, 5, 0, 3]
>1000 [2, 1, 3, 6, 7, 6, 0, 5, 9, 9, 3, 2, 6, 0]
</code></pre>
<ul>
<li>Using data from <a href="https://data.sfgov.org/Health-and-Social-Services/Restaurant-Scores-LIVES-Standard/pyih-qa8i" rel="nofollow noreferrer">Restaurant Scores - LIVES Standard</a></li>
<li>Attempts to follow along with the code in the OP.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# load data
ins = pd.read_csv('data/Restaurant_Scores_-_LIVES_Standard.csv')
# convert inspection_date to a datetime format
ins.inspection_date = pd.to_datetime(ins.inspection_date)
# add a year column
ins['year'] = ins.inspection_date.dt.year
# select data for 2018
ins2018 = ins[ins['year'] == 2018]
################################################################
# this is where you run into issues
# new is the counts for every column
# this is what you could have done to get the number of inspection counts
# just count the occurrences of business_id
counts = ins2018.groupby('business_id').agg({'business_id': 'count'}).rename(columns={'business_id': 'inspection_counts'}).reset_index()
# don't do this: get dataframe of counts
# new = ins2018.loc[ins2018["inspection_score"] > 0].sort_values("inspection_date").groupby("business_id").count()
# don't do this: select data
# new = new.loc[new["inspection_id"] == 2].reset_index()
# merge updated
merge = pd.merge(counts, ins2018, how = "left", on = "business_id")
################################################################
# select data again
merged = merge.loc[(merge['inspection_score_y'] > 0) & (merge.inspection_counts >= 2)]
# groupby and aggregate list
mg = merged.groupby('business_id').agg({'inspection_score_y': list})
# display(mg)
inspection_score_y
business_id
31 [96.0, 96.0]
54 [94.0, 94.0]
61 [94.0, 94.0]
66 [98.0, 98.0]
101 [92.0, 92.0]
</code></pre>
<h2><code>groupby</code> on <code>ins</code> updated</h2>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# load data and parse the dates
ins = pd.read_csv('data/Restaurant_Scores_-_LIVES_Standard.csv', parse_dates=['inspection_date'])
# select specific data
data = ins[(ins.inspection_date.dt.year == 2018) & (ins.inspection_score > 0)].dropna().reset_index(drop=True)
# groupby
dg = data.groupby('business_id').agg({'inspection_score': list})
# display(dg)
inspection_score
business_id
54 [94.0, 94.0]
146 [90.0, 81.0, 90.0, 81.0, 90.0, 81.0, 81.0, 81.0]
151 [81.0, 81.0, 81.0, 81.0, 81.0]
155 [90.0, 90.0, 90.0, 90.0]
184 [90.0, 90.0, 90.0, 96.0]
# if you only want results with 2 or more inspections
# get the length of the list because each score represents an inspection
dg['inspection_count'] = dg.inspection_score.map(len)
# filter for 2 or more; this removes 81 business_id that had less than two inspections
dg = dg[dg.inspection_count >= 2]
</code></pre>
|
python|pandas|group-by|apply
| 1
|
376,664
| 64,157,389
|
Filter NaN values in Tensorflow dataset
|
<p><strong>Is there an easy way to filter all entries containing a <code>nan</code> value from a <code>tensorflow.data.Dataset</code> instance? Like the <code>dropna</code> method in Pandas?</strong></p>
<hr />
<p>Short example:</p>
<pre><code>import numpy as np
import tensorflow as tf
X = tf.data.Dataset.from_tensor_slices([[1,2,3], [0,0,0], [np.nan,np.nan,np.nan], [3,4,5], [np.nan,3,4]])
y = tf.data.Dataset.from_tensor_slices([np.nan, 0, 1, 2, 3])
ds = tf.data.Dataset.zip((X,y))
ds = foo(ds) # foo(x) = ?
for x in iter(ds): print(str(x))
</code></pre>
<p>What can I use for <code>foo(x)</code> to get the following output:</p>
<pre><code>(<tf.Tensor: shape=(3,), dtype=float32, numpy=array([0., 0., 0.], dtype=float32)>, <tf.Tensor: shape=(), dtype=float32, numpy=0.0>)
(<tf.Tensor: shape=(3,), dtype=float32, numpy=array([3., 4., 5.], dtype=float32)>, <tf.Tensor: shape=(), dtype=float32, numpy=2.0>)
</code></pre>
<p>If you want to try for yourself, <a href="https://colab.research.google.com/drive/1e_iAZiQfRmhARJhuU0hbA9jsAIJw6AO7?usp=sharing" rel="nofollow noreferrer">here is Google Colab notebook</a>.</p>
|
<p>I had a slightly different approach than the existing answer. Rather than using sum, I'm using <code>tf.reduce_any</code>:</p>
<pre><code>filter_nan = lambda x, y: not tf.reduce_any(tf.math.is_nan(x)) and not tf.math.is_nan(y)
ds = tf.data.Dataset.zip((X,y)).filter(filter_nan)
list(ds.as_numpy_iterator())
</code></pre>
<pre><code>[(array([0., 0., 0.], dtype=float32), 0.0),
(array([3., 4., 5.], dtype=float32), 2.0)]
</code></pre>
|
python|tensorflow|tensorflow2.0|tensorflow-datasets
| 3
|
376,665
| 64,122,003
|
add color to certain cells in excel via pandas - python
|
<p>I want to highlight specific cells in a <strong>CSV</strong> file using the <strong>highlight_special</strong> function.
The code runs in the terminal with no exceptions, but when I look at the <strong>CSV</strong> it stays the same.</p>
<p>The code takes a csv file and runs through it to see if there are any words with special characters; if there are, it adds them to the <strong>siders</strong> list.
The siders list is then iterated through in order to highlight cells that contain text that is in the <strong>siders</strong> list.</p>
<p>thanks, upfront</p>
<pre><code>import pandas as pd
#import numpy as np 4 LATER
import os
# get the file path
dfile = input("please enter the name of the file you wish ro analyse plus the type(.csv/.xls/.bat): ")
dfile = os.getcwd() + "\\" + dfile
# list of the words with the special letters
siders = []
# special letters list
special_characters = ["\\", ",", "-", "_", "+", ".", "?", "\\", "#", "*", "&", "!", "'", "\""]
# analasys function
def special(data, filter_col):
# loads the file as a csv
global datafile
datafile = pd.read_csv(data)
# itterates the file line by line plus stating the number of line
for row, i in datafile.iterrows():
# tlowercase the column indicated by [filter_col
lowi = str(i[filter_col]).lower()
# looks for a special letter in lowi stated..
for chr in special_characters:
if chr in lowi:
siders.append(lowi) # adds the words with special letters to a side list
print("succes special character {} found in row {}".format(chr, str(row)))
else:
continue
# print("{} no special chars where found".format(str(row)))
count = 0
for index, word in enumerate(siders):
count += 1
print(str(index) + " " + word + "\n ") # prints the special woprds
print("count of words that need manual review is: {}".format(count))
def highlight_special(cells): # cells=datafile
for each in cells:
if each in siders:
return ['background_color: yellow']
else:
return ['background_color: white']
datafile.style.apply(highlight_special, axis=1)
def duplicants(datafile):
pass
highlight_special(dfile)
special(dfile, 'Account Name')
</code></pre>
|
<p>When you call <code>highlight_special()</code>, <code>siders</code> is still empty.
You have to call your method <code>special()</code> first.</p>
<p><code>highlight_special</code> is also misused (see <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html" rel="nofollow noreferrer">here</a>), and it is calling itself inside <code>datafile.style.apply</code>.</p>
<p>Also, you're using global variables and setting them in the function. It won't work unless you do something like this (see the <a href="https://www.w3schools.com/python/gloss_python_global_variables.asp" rel="nofollow noreferrer">doc</a>):</p>
<pre><code>x = ""
def myfunc():
global x
x = "fantastic"
myfunc()
</code></pre>
<hr />
<p>here is a working example with of colouring an excel file with <code>applymap</code></p>
<pre><code>siders = [1]
df = pd.DataFrame([{'value': 1, "value_2": 913}])
def highlight_cells(value):
color = "yellow" if value in siders else "white"
return f"background-color: {color}"
writer = pd.ExcelWriter(f"/tmp/test.xlsx", engine="xlsxwriter")
df2 = df.style.applymap(highlight_cells)
df2.to_excel(writer)
writer.save()
</code></pre>
|
python|excel|pandas|csv|cell
| 0
|
376,666
| 64,161,106
|
How to plot a histogram to get counts for all unique values?
|
<p>I have a Pandas column with data unique to .0001</p>
<p>I would like to plot a histogram that has a bar for each unique .0001 of data.</p>
<p>I achieve a lot of granularity by</p>
<pre><code>plt.hist(df['data'], bins=500)
</code></pre>
<p>but I would like to see counts for each unique value.</p>
<p>How would I go about doing this?
thank you</p>
|
<p>As your values are discrete, it is important to set the bin boundaries nicely in-between these values. If the boundaries coincide with the values, strange rounding artifacts can happen. The example below has each value 10 times, but the histogram with the boundaries on top of the values puts the last two values into the same bin:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
df = pd.DataFrame({'data': np.repeat(np.arange(0.0005, 0.0030, 0.0001), 10)})
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(15, 4))
ax1.hist(df['data'], bins=np.arange(df['data'].min(), df['data'].max(), 0.0001), ec='w')
ax1.set_title('bin boundaries on top of the values')
ax2.hist(df['data'], bins=np.arange(df['data'].min() - 0.00005, df['data'].max() + 0.0001, 0.0001), ec='w')
ax2.set_title('bin boundaries in-between the values')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/qwadI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qwadI.png" alt="example plot" /></a></p>
<p>Note that the version with the boundaries at the halves also puts the x-ticks nicely in the center of the bins.</p>
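<p>If you only need the count for each unique value (one bar per value) rather than binning, <code>value_counts</code> is a simple alternative; a sketch with made-up data:</p>

```python
import pandas as pd

df = pd.DataFrame({'data': [0.0005, 0.0005, 0.0006, 0.0007, 0.0007, 0.0007]})

# Count occurrences of each unique value, ordered by the value itself
counts = df['data'].value_counts().sort_index()
# counts.plot.bar() would then draw exactly one bar per unique value
```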
|
python|pandas|matplotlib
| 2
|
376,667
| 64,139,658
|
Memory leak with Keras Lambda layer
|
<p>I need to split the channels of a Tensor to apply different normalizations for each split. To do so, I use the Lambda layer from Keras:</p>
<pre><code># split the channels in two (first part for IN, second for BN)
x_in = Lambda(lambda x: x[:, :, :, :split_index])(x)
x_bn = Lambda(lambda x: x[:, :, :, split_index:])(x)
# apply IN and BN on their respective group of channels
x_in = InstanceNormalization(axis=3)(x_in)
x_bn = BatchNormalization(axis=3)(x_bn)
# concatenate outputs of IN and BN
x = Concatenate(axis=3)([x_in, x_bn])
</code></pre>
<p>Everything works as expected (see <code>model.summary()</code> bellow) but the RAM keeps increasing at each iteration, indicating a memory leak.</p>
<pre><code>Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 832, 832, 1) 0
__________________________________________________________________________________________________
conv1 (Conv2D) (None, 832, 832, 32) 320 input_1[0][0]
__________________________________________________________________________________________________
lambda_1 (Lambda) (None, 832, 832, 16) 0 conv1[0][0]
__________________________________________________________________________________________________
lambda_2 (Lambda) (None, 832, 832, 16) 0 conv1[0][0]
__________________________________________________________________________________________________
instance_normalization_1 (Insta (None, 832, 832, 16) 32 lambda_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 832, 832, 16) 64 lambda_2[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 832, 832, 32) 0 instance_normalization_1[0][0]
batch_normalization_1[0][0]
__________________________________________________________________________________________________
</code></pre>
<p>I am sure the leak comes from the Lambda layer as I tried another strategy where I don't split but apply the two normalizations independently on all the channels and then add the features together. I didn't experience any memory leak with this code:</p>
<pre><code># apply IN and BN on the input tensor independently
x_in = InstanceNormalization(axis=3)(x)
x_bn = BatchNormalization(axis=3)(x)
# addition of the feature maps outputed by IN and BN
x = Add()([x_in, x_bn])
</code></pre>
<p>Any idea to resolve this memory leak ? I am using Keras 2.2.4 with Tensorflow 1.15.3, and I can't upgrade to TF 2 or tf.keras for now.</p>
|
<p><a href="https://stackoverflow.com/users/9794742/thibault-bacqueyrisses">Thibault Bacqueyrisses</a>' answer was right: the memory leak disappeared with a custom layer!</p>
<p>Here is my implementation:</p>
<pre><code>class Crop(keras.layers.Layer):
def __init__(self, dim, start, end, **kwargs):
"""
Slice the tensor on the last dimension, keeping what is between start
and end.
Args
dim (int) : dimension of the tensor (including the batch dim)
start (int) : index of where to start the cropping
end (int) : index of where to stop the cropping
"""
super(Crop, self).__init__(**kwargs)
self.dimension = dim
self.start = start
self.end = end
def call(self, inputs):
if self.dimension == 0:
return inputs[self.start:self.end]
if self.dimension == 1:
return inputs[:, self.start:self.end]
if self.dimension == 2:
return inputs[:, :, self.start:self.end]
if self.dimension == 3:
return inputs[:, :, :, self.start:self.end]
if self.dimension == 4:
return inputs[:, :, :, :, self.start:self.end]
def compute_output_shape(self, input_shape):
return (input_shape[:-1] + (self.end - self.start,))
def get_config(self):
config = {
'dim': self.dimension,
'start': self.start,
'end': self.end,
}
base_config = super(Crop, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
</code></pre>
|
python|tensorflow|keras|memory-leaks
| 4
|
376,668
| 63,929,796
|
Can't append to an existing table. Fails silently
|
<p>I'm trying to dump a pandas DataFrame into an existing Snowflake table (via a jupyter notebook).
When I run the code below no errors are raised, but no data is written to the destination SF table (df has ~800 rows).</p>
<pre><code>from sqlalchemy import create_engine
from snowflake.sqlalchemy import URL
sf_engine = create_engine(
URL(
user=os.environ['SF_PROD_EID'],
password=os.environ['SF_PROD_PWD'],
account=account,
warehouse=warehouse,
database=database,
)
)
df.to_sql(
"test_table",
con=sf_engine,
schema=schema,
if_exists="append",
index=False,
chunksize=16000,
)
</code></pre>
<p>If I check the SF History, I can see that the queries apparently ran without issue:</p>
<p><a href="https://i.stack.imgur.com/yaiXa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yaiXa.png" alt="enter image description here" /></a></p>
<p>If I pull the query from the SF History UI and run it manually in the Snowflake UI the data shows up in the destination table.</p>
<p>If I try to use <a href="https://capitalone.github.io/Data-Load-and-Copy-using-Python/index.html" rel="nofollow noreferrer">locopy</a> I run into the same issue.</p>
<p>If the table does not exist before hand, the same code above creates the table and drops the rows no problem.</p>
<p>Here's where it gets weird. When I run the <code>pd.to_sql</code> command to try and append and then drop the destination table, if I then issue a <code>select count(*) from destination_table</code> a table still exists with that name and has (only) the data that I've been trying to drop. Thinking it may be a case-sensitive table naming situation?</p>
<p>Any insight is appreciated :)</p>
|
<p>Try adding <code>role="<role>"</code> and <code>schema="<schema>"</code> in URL.</p>
<pre><code>engine = create_engine(URL(
account=os.getenv("SNOWFLAKE_ACCOUNT"),
user=os.getenv("SNOWFLAKE_USER"),
password=os.getenv("SNOWFLAKE_PASSWORD"),
role="<role>",
warehouse="<warehouse>",
database="<database>",
schema="<schema>"
))
</code></pre>
|
python|pandas|sqlalchemy|snowflake-cloud-data-platform
| 0
|
376,669
| 64,022,247
|
Matplotlib Time-Series Heatmap Visualization Row Modification
|
<p>Thank you in advance for the assistance!</p>
<p>I am trying to create a heat map from time-series data and the data begins mid year, which is causing the top of my heat map to be shifted to the left and not match up with the rest of the plot (Shown Below). How would I go about shifting the just the top line over so that the visualization of the data syncs up with the rest of the plot?</p>
<p>(Code Provided Below)</p>
<p><a href="https://i.stack.imgur.com/YgCKu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YgCKu.png" alt="enter image description here" /></a></p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
# links to datadata
url1 = 'https://raw.githubusercontent.com/the-datadudes/deepSoilTemperature/master/minotDailyAirTemp.csv'
# load the data into a DataFrame, not a Series
# parse the dates, and set them as the index
df1 = pd.read_csv(url1, parse_dates=['Date'], index_col=['Date'])
# groupby year and aggregate Temp into a list
dfg1 = df1.groupby(df1.index.year).agg({'Temp': list})
# create a wide format dataframe with all the temp data expanded
df1_wide = pd.DataFrame(dfg1.Temp.tolist(), index=dfg1.index)
# ploting the data
fig, (ax1) = plt.subplots(ncols=1, figsize=(20, 5))
ax1.matshow(df1_wide, interpolation=None, aspect='auto');
</code></pre>
|
<p>Now, here is the problem: the dates in the dataset. If you look at the dataset, it starts on</p>
<pre><code>`1990-4-24,15.533`
</code></pre>
<p>To solve this it is necessary to add the dates between 1990/01/01 and 1990/04/23 and delete Feb 29.</p>
<pre><code>rng = pd.date_range(start='1990-01-01', end='1990-04-23', freq='D')
df = pd.DataFrame(index= rng)
df.index = pd.to_datetime(df.index)
df['Temp'] = np.NaN
frames = [df, df1]
result = pd.concat(frames)
result = result[~((result.index.month == 2) & (result.index.day == 29))]
</code></pre>
<p>With this data</p>
<pre><code>dfg1 = result.groupby(result.index.year).agg({'Temp': list})
df1_wide = pd.DataFrame(dfg1['Temp'].tolist(), index=dfg1.index)
# ploting the data
fig, (ax1) = plt.subplots(ncols=1, figsize=(20, 5))
ax1.matshow(df1_wide, interpolation=None, aspect='auto');
</code></pre>
<p><a href="https://i.stack.imgur.com/JJFeK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JJFeK.png" alt="enter image description here" /></a></p>
<p>The problem with the unfilled portions is a consequence of the NaN values in your dataset. In this case you can either replace the NaN values with the column mean or with the row mean.
Other ways to replace the NaN values are also available.</p>
<pre><code>df1_wide = df1_wide.apply(lambda x: x.fillna(x.mean()),axis=0)
</code></pre>
<p><a href="https://i.stack.imgur.com/SVw52.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SVw52.png" alt="enter image description here" /></a></p>
|
python|pandas|numpy|dataframe|matplotlib
| 2
|
376,670
| 64,071,574
|
Why isn't SchemaGen supported in tfdv.display_schema()?
|
<p>Regarding TFX' tensorflow-data-validation, I'm trying to understand when I should use *Gen components vs. using TFDV provided methods.</p>
<p>Specifically, what's confusing me is that I have this as my ExampleGen:</p>
<pre><code>output = example_gen_pb2.Output(
split_config=example_gen_pb2.SplitConfig(splits=[
example_gen_pb2.SplitConfig.Split(name='train', hash_buckets=7),
example_gen_pb2.SplitConfig.Split(name='test', hash_buckets=2),
example_gen_pb2.SplitConfig.Split(name='eval', hash_buckets=1)
]))
example_gen = CsvExampleGen(input_base=os.path.join(base_dir, data_dir),
output_config=output)
context.run(example_gen)
</code></pre>
<p>So I figured, I'd want to generate my statistics from my train split, rather than from the original train file, so I tried with:</p>
<pre><code>statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'],
exclude_splits=['eval']
)
context.run(statistics_gen)
</code></pre>
<p>and that runs fine. But then, I tried inferring my schema (insert buzzer sound):</p>
<pre><code>schema = tfdv.infer_schema(statistics=statistics_gen)
</code></pre>
<p>and knowingly this raises the error below. I fully expected that it wasn't the correct type but <strong>I cannot figure out how to extract from the StatsGen object the proper output to feed to the infer_schema() method</strong>.</p>
<p>Alternatively, if I pursue a solely *Gen-based component structure, it builds, but I don't see how to properly visualize the schema, stats, etc. Finally, the reason I'm using the tfdv.infer_schema() call here is for the similarly ill-fated "display_schema()" call that errors if you try passing it a SchemaGen.</p>
<p>Error from above:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-11-93ceafbcb04a> in <module>
----> 1 schema = tfdv.infer_schema(statistics=validate_stats)
2 tfdv.write_schema_text(schema, schema_location)
3
4 tfdv.display(infer_schema)
/usr/local/lib/python3.6/dist-packages/tensorflow_data_validation/api/validation_api.py in infer_schema(statistics, infer_feature_shape, max_string_domain_size, schema_transformations)
95 raise TypeError(
96 'statistics is of type %s, should be '
---> 97 'a DatasetFeatureStatisticsList proto.' % type(statistics).__name__)
98
99 # This will raise an exception if there are multiple datasets, none of which
TypeError: statistics is of type ExampleValidator, should be a DatasetFeatureStatisticsList proto.
</code></pre>
<p>What I'm really trying to understand is why do we have components, such as SchemaGen and StatisticsGen only to have TFDV require we use the internal functions in order to get value from this. I'm assuming its providing for the interactive pipeline vs. non-interactive scenarios but my Googling has left me unclear.</p>
<p>If there is a way to generate and view stats based on a split of my data rather than relying on the file reader, I'd love to know that also. (In case it's not obvious, yes, I'm new to TFX).</p>
<p>TIA</p>
|
<p>I'm also new to TFX. Your post about the <code>ExampleValidator</code> helped me out, hopefully this answers your question.</p>
<p><strong>Using components only to visualize schema</strong></p>
<pre><code> statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'],
exclude_splits=['eval']
)
context.run(statistics_gen)
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=True
)
context.run(schema_gen)
context.show(schema_gen.outputs['schema']) # this should allow you to to visualize your schema
</code></pre>
<p><strong>Using components + TFDV to visualize schema</strong></p>
<p>It looks like we can't use the <code>StatisticsGen</code> output directly. We'll need to know where the StatisticsGen artifact is saved and then load that artifact using <code>tfdv.load_statistics</code>:</p>
<pre><code># get the stats artifact
stats_artifact = statistics_gen.outputs.statistics._artifacts[0]
# get base path
base_path = stats_artifact.uri
# get path to file
train_stats_file = os.path.join(base_path, 'train/stats_tfrecord') #only showing training as an example
# load stats
loaded_stats = tfdv.load_statistics(train_stats_file)
# generate and show the schema
schema = tfdv.infer_schema(loaded_stats)
tfdv.display_schema(schema)
</code></pre>
|
tensorflow2.0|tfx|tensorflow-data-validation
| 2
|
376,671
| 63,838,762
|
How to plot a parallel coordinate plot from Hyperparameter Tuning with the HParams Dashboard?
|
<p>I am trying to replicate the parallel coordinate plot from the Hyperparameter Tuning section of this TensorFlow <a href="https://www.tensorflow.org/tensorboard/hyperparameter_tuning_with_hparams" rel="nofollow noreferrer">tutorial</a>, and I have written my own csv file where I store my results.
My output from reading the csv file looks like this:</p>
<pre><code> conv_layers filters dropout accuracy
0 4 16 0.5 0.447917
1 4 16 0.6 0.458333
2 4 32 0.5 0.635417
3 4 32 0.6 0.447917
4 4 64 0.5 0.604167
5 4 64 0.6 0.645833
6 8 16 0.5 0.437500
7 8 16 0.6 0.437500
8 8 32 0.5 0.437500
9 8 32 0.6 0.562500
10 8 64 0.5 0.562500
11 8 64 0.6 0.437500
</code></pre>
<p>How can I create the same plot like in the tutorial in python?</p>
|
<p>So I found the answer using Plotly:</p>
<pre><code>import os
import sys
import pandas as pd
from plotly.offline import init_notebook_mode, iplot
import plotly.graph_objects as go
init_notebook_mode(connected=True)
df = pd.read_csv('path/to/csv')
fig = go.Figure(data=
go.Parcoords(
line = dict(color = df['accuracy'],
                colorbar = dict(),
colorscale = [[0, '#6C9E12'], ##
[0.25,'#0D5F67'], ##
[0.5,'#AA1B13'], ##
[0.75, '#69178C'], ##
[1, '#DE9733']]),
dimensions = list([
dict(range = [0,12],
label = 'Conv_layers', values = df['conv_layers']),
dict(range = [8,64],
label = 'filter_number', values = df['filters']),
dict(range = [0.2,0.8],
label = 'dropout_rate', values = df['dropout']),
dict(range = [0.2,0.8],
label = 'dense_num', values = df['dense']),
dict(range = [0.1,1.0],
label = 'accuracy', values = df['accuracy'])
])
)
)
fig.update_layout(
plot_bgcolor = '#E5E5E5',
paper_bgcolor = '#E5E5E5',
title="Parallel Coordinates Plot"
)
# print the plot
fig.show()
</code></pre>
|
tensorboard|tensorflow-serving|hyperparameters
| 0
|
376,672
| 63,988,743
|
How to draw multiple line plots in a grid?
|
<p>I have a dictionary whose values are consisted of dataframes. Every <code>df</code> has the same column names: <code>X1</code> and <code>X2</code>:</p>
<pre><code>dic = {"a": df1, "b": df2, ..., "y": df25}
</code></pre>
<p>Now I want to draw line plots of these dataframes so that they will be in 5 rows and 5 columns. I want to get a visual as follows:</p>
<p><a href="https://i.stack.imgur.com/MvYfc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MvYfc.png" alt="enter image description here" /></a></p>
|
<p>The basic idea using <a href="https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.pyplot.subplots.html" rel="nofollow noreferrer"><code>matplotlib.pyplot.subplots</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
fig, axes = plt.subplots(5, 5)
for ax, (key, df) in zip(axes.flat, dic.items()):
ax.set_title(key)
ax.plot(df["X1"], df["X2"])
</code></pre>
|
python|pandas|dataframe|line-plot
| 1
|
376,673
| 63,917,203
|
Find least frequent value in whole dataframe
|
<p>my dataframe is something like this</p>
<pre><code>93 40 73 41 115 74 59 98 76 109 43 44
105 119 56 62 69 51 50 104 91 78 77 75
119 61 106 105 102 75 43 51 60 114 91 83
</code></pre>
<p>It has 8000 rows and 12 columns</p>
<p>I wanted to find the least frequent value in this whole dataframe (not only in columns).</p>
<p>I tried converting this dataframe into a numpy array and using a <code>for</code> loop to count the numbers and then return the least frequent number, but it is not very optimal. I searched for other methods but could not find any.</p>
<p>I only found <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mode.html" rel="nofollow noreferrer">scipy.stats.mode</a> which returns the most frequent number.</p>
<p>is there any other way to do it?</p>
|
<p>You could <code>stack</code> and take the <code>value_counts</code>:</p>
<pre><code>df.stack().value_counts().index[-1]
# 69
</code></pre>
<p><code>value_counts</code> sorts by descending frequency, so you can just take the last entry. In this example many values appear only once, and <code>69</code> happens to be the last.</p>
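<p>If you only need the single least frequent value rather than the full ordering, <code>idxmin</code> on the counts is a direct alternative. A small sketch on a toy frame:</p>

```python
import pandas as pd

# toy frame: 2 appears four times, 1 and 3 once each
df = pd.DataFrame({"a": [1, 2, 2], "b": [2, 3, 2]})

# idxmin returns the index label (i.e. the value) with the smallest count
least = df.stack().value_counts().idxmin()
```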
|
python|pandas|numpy|scipy
| 4
|
376,674
| 63,861,019
|
Fusion of multiple 3D binary arrays in python
|
<p>I am looking for an efficient way to fuse multiple (N) binary 3D arrays of the same shape. I.e. the resulting fused array should have for each coordinate a value that is obtained by a majority vote among all values at the corresponding coordinate of the N arrays.</p>
<p>E.g. a toy 1D case:</p>
<pre><code>[0,0,1] - 1st array
[0,1,1] - 2d array
[0,0,0] - ...
[0,1,0] - ...
[1,0,1] - Nth array
-------
[0,0,1] - fused array
</code></pre>
<p>Thanks!</p>
|
<p>You can use <code>scipy.stats.mode</code>, which will take an array of your 3D arrays as input.
An example with 2D arrays is:</p>
<pre><code>import scipy.stats

arrs = [[[0,1,0],[0,0,0]],
[[1,1,0],[0,0,1]],
[[1,0,1],[1,0,0]]]
scipy.stats.mode(arrs).mode
>>> array([[[1, 1, 0], [0, 0, 0]]])
</code></pre>
|
python|numpy|matrix|numpy-ndarray
| 0
|
376,675
| 64,072,274
|
Panda's DataFrame dump to CSV file is not decoding values correctly. It has Bytea data as columns
|
<p>I have a complex table structure in the database, which I am reading into a pandas DataFrame. While printing the DataFrame everything displays correctly, but when I dump to CSV or convert it to a list (each DataFrame row as a list) I see the following data in a few columns:
<code>&lt;memory at 0x11a2c4640&gt;</code></p>
<p>After debugging a little more I came to know they are BYTEA columns in Postgres and perhaps the conversion is failing. But the actual confusion is: if the print is fine, why is the write to CSV not working? Is there any way to raw-dump DataFrames?</p>
|
<p>Try to save the csv with an special encoding:</p>
<p><code>df.to_csv(r"C:\your path\file.csv", index=True, encoding='utf-8-sig')</code></p>
|
pandas|postgresql|dataframe|export-to-csv|bytea
| 0
|
376,676
| 64,089,984
|
Day of the month split on Python pandas dictionary
|
<p>I have the following list of stocks:</p>
<p><a href="https://i.stack.imgur.com/3NLnm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3NLnm.png" alt="
" /></a></p>
<p>For each one I would like to separate by day of month as this explanatory drawing:</p>
<p><a href="https://i.stack.imgur.com/t2YNO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t2YNO.png" alt="
" /></a></p>
<p>With this separation I can perform the cumulative return for each day and separate by max and min cumulative returns for each stock symbol.</p>
<p>I am doing the following (example from another stock list) from SO:
<a href="https://stackoverflow.com/questions/63892673/call-a-report-from-a-dictionary-of-dataframes">Call a report from a dictionary of dataframes</a> :</p>
<pre><code>data_dict = dict() # create an empty dict here
for k, df in df_dict.items():
df_dict[k]['Return %'] = df.iloc[:, 0].pct_change(-1)*100
# aggregate the max and min of Return
mm = df_dict[k]['Return %'].agg(['max', 'min'])
# add it to the dict, with ticker as the key
data_dict[k] = {'max': mm.max(), 'min': mm.min()}
# convert to a dataframe if you want
mm_df = pd.DataFrame.from_dict(data_dict, orient='index')
# display(mm_df)
max min
aapl 8.70284 -4.90070
msft 6.60377 -4.08443
</code></pre>
<p>This results in a linear analysis of the stocks in the list and do not separate by day as I wish to do as per drawing above..</p>
<p>Question:</p>
<ul>
<li>How can I insert a step to split by day of month and then perform the above code?</li>
</ul>
|
<p>You can use <code>pandas.groupby</code> and use <code>datetime.date()</code> as the grouping field. Then you can use <code>sum</code> operator on the group object to calculate daily return. This <a href="https://stackoverflow.com/questions/24082784/pandas-dataframe-groupby-datetime-month/24083253">post</a> shows using <code>groupby</code> with datetime</p>
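<p>A minimal sketch of that idea, assuming each ticker's frame has a <code>DatetimeIndex</code> and a "Return %" column (names here are hypothetical):</p>

```python
import pandas as pd

# hypothetical ticker frame with a DatetimeIndex
idx = pd.to_datetime(["2020-01-02", "2020-02-02", "2020-01-15"])
df = pd.DataFrame({"Return %": [1.0, 2.0, -0.5]}, index=idx)

# group rows by calendar day of the month and sum the returns per day
daily = df.groupby(df.index.day)["Return %"].sum()
```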
|
python|pandas|dictionary
| 0
|
376,677
| 64,070,760
|
Getting memory error while transforming spars matrix to array with column names. This array is input to training model
|
<p>My training data consists of 5 million rows of product descriptions with an average length of 10 words. I can use either CountVectorizer or TF-IDF to transform my input feature. However, after transforming the feature to a sparse matrix, I constantly get a memory error while converting it to a dense array. The CountVectorizer returns ~130k token columns. Below are the two methods I am trying to implement. Please note, the system I am working on has 512 GB of memory. Below is the error:</p>
<blockquote>
<p>return np.zeros(self.shape, dtype=self.dtype, order=order)
. MemoryError</p>
</blockquote>
<p><strong>Method 1</strong></p>
<pre><code>from sklearn.feature_extraction.text import CountVectorizer
vect1 = CountVectorizer(ngram_range= (1,2), min_df = 20)
#vect1.fit(train_data['description_cleaned'])
train_dtm1 = vect1.fit_transform(train_data)
dtm_data = pd.DataFrame(train_dtm1.toarray(), columns=vect1.get_feature_names())
</code></pre>
<p><strong>Method 2</strong></p>
<pre><code>tfidf = TfidfVectorizer(stop_words='english', ngram_range=(1, 2), max_df=0.5, min_df=20, use_idf=True)
corpus = tfidf.fit_transform(train_data)
dtm_data = pd.DataFrame(corpus.todense(), columns=tfidf.get_feature_names())
</code></pre>
<p><strong>dtm_data</strong> goes into a train-test split, which further goes into a Keras ANN.
How can I resolve this memory issue?</p>
|
<p>An out-of-memory error happens when Python uses more memory than is available. Along with your system memory, look at your graphics card memory if you are using tensorflow-gpu. You might want to take a look at Google Colab, which runs the Python program in the cloud.</p>
|
python|tensorflow|out-of-memory
| 0
|
376,678
| 63,746,101
|
Multiply each value in a pandas dataframe column with all values of 2nd dataframe column & replace each 1st dataframe value with resulting array
|
<p>I have a dataframe with 4 rows and 3 columns, and all values in this first dataframe (df1) are floats. I also have a second dataframe (df2) that has a column with 8760 entries. I would like to multiply each value in column 3 of the first dataframe by all 8760 values in the second dataframe. Finally, I want to replace the original value in the first dataframe by the resulting series of 8760 values (from multiplying each value by the 2nd dataframe values). So the values in column 3 of each row of the first dataframe are a resulting array of 8760 values.</p>
<pre><code>data = {'col1':[1.0, 2.0, 1.0, 3.0], 'col2':[.01, .04, .8, 1.0], 'col3':[0.7, 0.1, 0.5, 0.9]}
np.random.seed(123)
df1 = pd.DataFrame(data)
df2 = pd.DataFrame(np.random.randint(0,10, size=(1,8760)))
</code></pre>
<p>So here I would like to take each value of col3 in df1 and replace it with the resulting array from multiplying that single value by all 8760 values in df2. So "0.7" would be replaced by an array of 8760 values from multiplying 0.7 by each value in df2. Is there an easy way to do this? When I tried, I just got the first value or NaN in df1, not the array.</p>
|
<p>The following single line of code will fetch the desired result:</p>
<pre><code>df1['col3'] = df1['col3'].apply(lambda x: df2.values[0]*x)
</code></pre>
<p>Here, for each row of df1, the scalar in column 'col3' is multiplied by every value in df2, so each cell ends up holding an array.</p>
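<p>An equivalent sketch without <code>apply</code>, using NumPy's outer product (toy shapes below stand in for the 4-row and 8760-column frames):</p>

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({"col3": [0.7, 0.1]})
df2 = pd.DataFrame([[1, 2, 3]])

# np.outer multiplies every col3 scalar by every df2 value at once;
# each row of the result replaces the original scalar
df1["col3"] = list(np.outer(df1["col3"], df2.iloc[0]))
```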
|
python|pandas|dataframe
| 0
|
376,679
| 63,825,146
|
Fastest way to solve an array or list of functions with fsolve
|
<p>I have the working function below. From a base function I calculate the first and second derivative, and I then need to find the value of theta where the first derivative is zero and the second is negative. I have to compute this for a large number of points; the number of points equals the length of K1 and K2. Using sympy I calculate the first and second derivative, and I currently iterate over all the derivatives and solve the equations for each. Is there a faster way to do this? Once the lengths of K1 and K2 exceed 1000, this takes much too long for my application.</p>
<pre><code>import numpy as np
import sympy as sp
from scipy.optimize import fsolve
from sympy.utilities.lambdify import lambdify
def get_cpd(K1, K2):
'''
*args:
K1, 1D numpy array: mode I stress intensity factors
K2, 1D numpy array: mode II stress intensity factor
*Note:
K1 and K2 should have the same length
'''
# Define symbols
r, theta = sp.symbols("r theta")
# Shear stress intensity
sif_shear = 1/2*sp.cos(theta/2)*(K1*sp.sin(theta)+K2*(3*sp.cos(theta)-1))
# Determine the first and second derivative w.r.t. theta
first_derivative = sp.diff(sif_shear, theta)
second_derivative = sp.diff(first_derivative, theta)
cpd_lst = []
for first, second in zip(first_derivative, second_derivative):
# Lambdify function such that it can evaluate an array of points
func1 = sp.lambdify(theta, first, "numpy")
func2 = sp.lambdify(theta, second, "numpy")
# initialize array from -π/2 to π/2, this is used for the initial guesses of the solver
x = np.linspace(-np.pi/2, np.pi/2, num=50)
# Solve the first derivative for all initial guesses to find possible propagation angles
y1 = fsolve(func1, x)
# Evaluate the second derivative in the roots of the first derivative
y2 = func2(y1)
# Get roots of first derivative between -π/2 to π/2
# and where second derivative is negative
y1 = np.round(y1, 4)
y1 = y1[(y1 > -np.pi/2) & (y1 < np.pi/2) & (y2 < 0)]
# get unique roots
cpd = np.unique(y1)
cpd_lst.append(cpd)
return cpd_lst
</code></pre>
<p>Input example:</p>
<pre><code>K1 = np.random.rand(10000,)
K2 = np.random.rand(10000,)
get_cpd(K1, K2)
</code></pre>
|
<p>The best thing is to try and process the equation symbolically as much as possible in terms of symbolic parameters. It is possible to get an analytic solution for e.g. <code>first_derivative</code> but you need to transform it a bit. Here I'll rewrite sin/cos as exp and then use the substitution <code>exp(I*theta/2) = sqrt(z)</code> to get a cubic polynomial for <code>z</code>:</p>
<pre><code>In [150]: K1, K2 = symbols('K1, K2', real=True)
In [151]: theta = Symbol('theta', real=True)
In [152]: sif_shear = S.Half*sp.cos(theta/2)*(K1*sin(theta)+K2*(3*cos(theta)-1))
In [153]: eq = diff(sif_shear, theta)
In [154]: eq
Out[154]:
⎛K₁⋅sin(θ) K₂⋅(3⋅cos(θ) - 1)⎞ ⎛θ⎞
⎜───────── + ─────────────────⎟⋅sin⎜─⎟
⎝ 2 2 ⎠ ⎝2⎠ ⎛K₁⋅cos(θ) 3⋅K₂⋅sin(θ)⎞ ⎛θ⎞
- ────────────────────────────────────── + ⎜───────── - ───────────⎟⋅cos⎜─⎟
2 ⎝ 2 2 ⎠ ⎝2⎠
In [155]: eqz = fraction(cancel(eq.rewrite(exp).subs(exp(I*theta/2), sqrt(z))))[0].collect(z)
In [156]: eqz
Out[156]:
3 2
3⋅K₁ - 9⋅ⅈ⋅K₂ + z ⋅(3⋅K₁ + 9⋅ⅈ⋅K₂) + z ⋅(K₁ + ⅈ⋅K₂) + z⋅(K₁ - ⅈ⋅K₂)
</code></pre>
<p>Now sympy can solve this (<code>roots(eqz, z)</code>) but the general formula for a cubic is quite complicated so that might not be the best approach. Given particular float values for <code>K1</code> and <code>K2</code> though sympy can easily get the roots with <code>nroots</code> or otherwise you could use numpy's <code>roots</code> function.</p>
<pre><code>In [157]: eqzp = eqz.subs({K1:0.2, K2:0.5})
In [158]: eqzp
Out[158]:
3 2
z ⋅(0.6 + 4.5⋅ⅈ) + z ⋅(0.2 + 0.5⋅ⅈ) + z⋅(0.2 - 0.5⋅ⅈ) + 0.6 - 4.5⋅ⅈ
In [159]: Poly(eqzp, z).nroots()
Out[159]: [-0.617215947987055 + 0.786793793538333⋅ⅈ, -0.491339121039621 - 0.870968350823388⋅ⅈ, 0.993562347047054 + 0.113286638798883⋅ⅈ]
In [163]: coeffs = [complex(c) for c in Poly(eqzp, z).all_coeffs()]
In [164]: np.roots(coeffs)
Out[164]:
array([ 0.99356235+0.11328664j, -0.61721595+0.78679379j,
-0.49133912-0.87096835j])
</code></pre>
<p>Either way this gives you 3 possible values for <code>z</code> which is <code>exp(I*theta)</code> so you can get theta (modulo <code>2*pi</code>) with:</p>
<pre><code>In [167]: r1, r2, r3 = Poly(eqzp, z).nroots()
In [168]: get_theta = lambda r: acos((r + r.conjugate())/2)
In [169]: get_theta(r1)
Out[169]: 2.23599562043958
In [170]: get_theta(r2)
Out[170]: 2.08442292239622
In [171]: get_theta(r3)
Out[171]: 0.113530366549989
</code></pre>
<p>The transformations we've done mean that <code>+-</code> these values can be solutions to the original equation so we can check by substituting in e.g.:</p>
<pre><code>In [178]: eq.subs({K1:0.2, K2:0.5}).subs(theta, get_theta(r1))
Out[178]: -5.55111512312578e-17
In [179]: eq.subs({K1:0.2, K2:0.5}).subs(theta, get_theta(r2))
Out[179]: -0.124767626702216
In [180]: eq.subs({K1:0.2, K2:0.5}).subs(theta, -get_theta(r2))
Out[180]: 5.55111512312578e-17
</code></pre>
|
python|numpy|scipy|sympy
| 2
|
376,680
| 64,022,364
|
How to group by 2 columns and perform a count in pandas
|
<p>How do I perform a count like this query in pandas?</p>
<pre><code>Select col1, col2, count(col3) as total from table
GROUP by col1,col2
</code></pre>
|
<p>You want the pandas <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer">groupby</a> method:</p>
<pre><code>df2 = df.groupby(['col1', 'col2'], as_index = False).count()
</code></pre>
<p>This will give you a count of all your other columns. If you want to specify a different aggregation function for each column, you can use <code>.agg</code>:</p>
<pre><code>df2 = df.groupby(['col1', 'col2'], as_index = False).agg({'col3': 'count', 'col4': 'sum'})
</code></pre>
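<p>Since pandas 0.25 you can also name the output column directly, which mirrors the SQL alias <code>as total</code>. A short sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({"col1": ["a", "a", "b"],
                   "col2": ["x", "x", "y"],
                   "col3": [1, 2, 3]})

# named aggregation: the output column "total" holds the count of col3
out = df.groupby(["col1", "col2"], as_index=False).agg(total=("col3", "count"))
```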
|
python|pandas
| 1
|
376,681
| 64,159,067
|
Pandas MySQL exception don't shows
|
<p>I have this code to connect to MySQL through an SSH tunnel, inside a Python class:</p>
<pre><code>def executeQuery(self, query_string):
print("connecting to database " + self.sql_main_database)
with SSHTunnelForwarder(
(
self.ssh_host,
self.ssh_port),
ssh_username = self.ssh_user,
ssh_pkey = self.pkey,
remote_bind_address=(self.sql_hostname, self.sql_port)
) as tunnel:
print("performing connection")
conn = pymysql.connect(
host="127.0.0.1",
user=self.sql_username,
password=self.sql_password,
db=self.sql_main_database,
port=tunnel.local_bind_port)
query = query_string
print("Querying")
data = pd.read_sql_query(query, conn)
print("Done!")
conn.close()
return data
</code></pre>
<p>The code is working well, but when the query is not well defined, the notebook freezes.</p>
<p>Then, I tried to use a try/catch, and the code ended like this</p>
<pre><code>def executeQuery(self, query_string):
try:
with SSHTunnelForwarder(
(
self.ssh_host,
self.ssh_port
),
ssh_username = self.ssh_user,
ssh_pkey = self.pkey,
remote_bind_address=(self.sql_hostname, self.sql_port)
) as tunnel:
try:
conn = pymysql.connect(
host = "127.0.0.1",
user = self.sql_username,
password = self.sql_password,
db = self.sql_main_database,
port = tunnel.local_bind_port
)
try:
query = query_string
data = pd.read_sql_query(query, conn)
return data
except DatabaseError as e:
Log.debug(self,str(e))
raise DatabaseError
except pymysql.err.InternalError as e:
Log.debug(self, str(e))
raise DataError
except Exception as e:
Log.debug(self, "[Error]Setting up database: \'" + self.sql_main_database + "\'")
raise DataError
</code></pre>
<p>The issue is that <code>pd.read_sql_query</code> never stops, so the except is never called, the try won't fail, and the process just continues forever.</p>
<p>The timeout workaround is not possible, because the queries don't have defined execution times and some of them can stay in processing during a couple of hours.</p>
<p>I'm not sure how to fix it.</p>
|
<p>Indeed, the problem was not in the connector; updating the Jupyter version was all that was needed.</p>
|
mysql|pandas|ssh|jupyter-notebook
| -1
|
376,682
| 64,051,457
|
Using df.apply on a function with multiple inputs to generate multiple outputs
|
<p>I have a dataframe that looks like this</p>
<pre><code>initial year0 year1
0 0 12
1 1 13
2 2 14
3 3 15
</code></pre>
<p>Note that the number of year columns year0, year1... (year_count) is completely variable but will be constant throughout this code</p>
<p>I first wanted to apply a function to each of the 'year' columns to generate 'mod' columns like so</p>
<pre><code>def mod(year, scalar):
return (year * scalar)
s = 5
year_count = 2
# Generate new columns
df[[f"mod{y}" for y in range (year_count)]] = df[[f"year{y}" for y in range(year_count)]].apply(mod, scalar=s)
initial year0 year1 mod0 mod1
0 0 12 0 60
1 1 13 5 65
2 2 14 10 70
3 3 15 15 75
</code></pre>
<p>All good so far. The problem is that I now want to apply another function to both the year column and its corresponding mod column to generate another set of val columns, so something like</p>
<pre><code>def sum_and_scale(year_col, mod_col, scale):
return (year_col + mod_col) * scale
</code></pre>
<p>Then I apply this to each of the columns (year0, mod0), (year1, mod1) etc to generate the next tranche of columns.</p>
<p>With scale = 10 I should end up with</p>
<pre><code>initial year0 year1 mod0 mod1 val0 val1
0 0 12 0 60 0 720
1 1 13 5 65 60 780
2 2 14 10 70 120 840
3 3 15 15 75 180 900
</code></pre>
<p>This is where I'm stuck - I don't know how to put two existing df columns together in a function with the same structure as in the first example, and if I do something like</p>
<pre><code>df[['val0', 'val1']] = df['col1', 'col2'].apply(lambda x: sum_and_scale('mod0', 'mod1', scale=10))
</code></pre>
<p>I don't know how to generalise this to have arbitrary inputs and outputs and also apply the constant scale parameter. (I know the last piece of code won't work, but it's the other avenue to a solution I've seen.)</p>
<p>The reason I'm asking is because I believe the loop that I currently have working is creating performance issues with the number of columns and the length of each column.</p>
<p>Thanks</p>
|
<p>IMHO, it's better with a simple <code>for</code> loop:</p>
<pre><code>for i in range(2):
df[f'val{i}'] = sum_and_scale(df[f'year{i}'], df[f'mod{i}'], scale=10)
</code></pre>
|
python|pandas
| 0
|
376,683
| 63,984,312
|
How to make a simple Vandermonde matrix with numpy?
|
<p>My question is how to make a Vandermonde matrix. This is the definition:
in linear algebra, a Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is an m × n matrix with the terms of a geometric progression in each row.</p>
<p>I would like to make a 4*4 version of this.</p>
<p>So far I have defined values, but only for one row, as follows:</p>
<pre><code>a=2
n=4
v = []
for a in range(n):
    for i in range(n):
        v.append(a**i)
v = np.array(v)
print(v)
</code></pre>
<p>I don't know how to scale this. Please help!</p>
|
<p>Given a starting column <code>a</code> of length <code>m</code> you can create a Vandermonde matrix <code>v</code> with <code>n</code> columns <code>a**0</code> to <code>a**(n-1)</code>like so:</p>
<pre><code>import numpy as np
m = 4
n = 4
a = range(1, m+1)
v = np.array([a]*n).T**range(n)
print(v)
#[[ 1 1 1 1]
# [ 1 2 4 8]
# [ 1 3 9 27]
# [ 1 4 16 64]]
</code></pre>
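<p>NumPy also ships a built-in for exactly this, <code>np.vander</code>, worth knowing as an alternative to the broadcasting trick:</p>

```python
import numpy as np

# columns are a**0, a**1, ..., a**(n-1) when increasing=True
v = np.vander(np.arange(1, 5), 4, increasing=True)
```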
|
numpy|matrix|numpy-ndarray
| 2
|
376,684
| 64,141,458
|
Using the Pandas query function and testing if a string is in a column containing lists
|
<p>I have a DataFrame where one column contains lists e.g.:</p>
<pre><code>columnA
--------
[val1, val2]
[val1, val3]
...
</code></pre>
<p>I want to use the <code>df.query()</code> syntax to return only rows where a given value exists in the array. But am getting errors, I'm trying things like:</p>
<p><code>df.query('"val1" in columnA')</code>
Or <code>df.query('val1 in columnA)</code></p>
<p>I've also tried splitting the value out into its own variable to see if that helped without success:
<code>val = "val1" </code>df.query('@val in columnA')
Neither is running. Has anyone done something similar and got it working?</p>
|
<p>I did not use <code>query</code>, but I think this can help you. Since the value can change, having a function is a good idea:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'columnA': [[1, 4], [1, 2], [3, 4], [6, 2], [0, 10] ,[2, 8]]})
def check(data, value):
temp_df = []
for i in range(len(data)):
if value in data['columnA'].iloc[i]:
temp_df.append(data['columnA'].iloc[i])
else:
pass
return temp_df
new_df = check(data = df, value = 3)
</code></pre>
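<p>A more compact alternative is a boolean mask built with <code>apply</code>, since <code>query</code> itself can't look inside list cells. A sketch with similar toy data:</p>

```python
import pandas as pd

df = pd.DataFrame({"columnA": [[1, 4], [1, 2], [3, 4]]})

# keep only the rows whose list contains the target value
result = df[df["columnA"].apply(lambda lst: 3 in lst)]
```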
|
python|pandas
| 1
|
376,685
| 63,793,319
|
Split column of strings by list of possible substrings
|
<p>I have a column with text that contains subheadings, such as '1. DESCRIPTION', '2. FOO', etc. I have all the possible subheadings in a list, but the issue is that not every entry in the column contains every subheading. I want to add columns to the df for every possible subheading and add the corresponding text after the subheadings into these columns.</p>
<p>A minimal example:</p>
<pre><code>Text
'1. Description: example description here. 3. BAR: more text'
'1. Description: second example. 2. FOO: a foo'
</code></pre>
<p>should become</p>
<pre><code>Description | Foo | Bar
example description here | | more text
second example | a foo |
</code></pre>
<p>I've tried making a function that converts a string and list of possible subheadings into a dictionary, with the idea of .apply()-ing it to the df. It works, but is not neat:</p>
<pre><code>def split_into_dict(input_string, separators):
    seps_in_string = []
    for i in range(len(separators)):
        if separators[i] in input_string:
            seps_in_string.append(separators[i])

    split_text = []
    for i in range(len(seps_in_string)):
        [text_part, input_string] = input_string.split(seps_in_string[i], 1)
        split_text.append(text_part)
    split_text.append(input_string)  # text after the last subheading

    return dict(zip(seps_in_string, split_text[1:]))
</code></pre>
<p>I'm not sure this is a good idea in general, and I'm also struggling with how to then use this function to create new columns.</p>
|
<p>I tried a solution with some regex</p>
<pre><code>df = pd.DataFrame([
{"Text": '1. Description: example description here. 3. BAR: more text'},
{"Text": '1. Description: second example. 2. FOO: a foo'}
])
# regex to capture the columns names
reg_key = re.compile("([A-Za-z]*)\:")
# regex to capture the values
reg_value = re.compile("\: ([A-Za-z1-9 ]*)")
output = pd.DataFrame()
for index, row in df.iterrows():
txt = row['Text']
# Find keys and values
keys = re.findall(reg_key, txt)
values = re.findall(reg_value, txt)
for i, key in enumerate(keys):
output.at[index, key] = values[i]
</code></pre>
<p>Output :</p>
<pre><code> Description BAR FOO
0 example description here more text NaN
1 second example NaN a foo
</code></pre>
|
python|pandas
| 1
|
376,686
| 64,165,983
|
What is the fastest way to find the average for a list of tuples in Python, each tuple containing a pair of namedtuples?
|
<pre class="lang-py prettyprint-override"><code>import numpy as np
from collections import namedtuple
from random import random
Smoker = namedtuple("Smoker", ["Female","Male"])
Nonsmoker = namedtuple("Nonsmoker", ["Female","Male"])
LST = [(Smoker(random(),random()),Nonsmoker(random(),random())) for i in range(100)]
</code></pre>
<p>So I have a long list whose elements are tuples. Each tuple contains a pair of namedtuples. What is the fastest way to find the average of this list? Ideally the result is still of the same structure, that is, <code>(Smoker(Female=w,Male=x),Nonsmoker(Female=y,Male=z))</code>.</p>
<p>My current attempt:</p>
<pre class="lang-py prettyprint-override"><code>grizzly = Smoker(np.mean([a.Female for a,b in LST]),np.mean([a.Male for a,b in LST]))
panda = Nonsmoker(np.mean([b.Female for a,b in LST]),np.mean([b.Male for a,b in LST]))
result = (grizzly, panda)
</code></pre>
|
<p><code>np.mean</code> has to convert the list to an array, which takes time. Python <code>sum</code> saves time:</p>
<pre><code>In [6]: %%timeit
...: grizzly = Smoker(np.mean([a.Female for a,b in LST]),np.mean([a.Male for
...: a,b in LST]))
...: panda = Nonsmoker(np.mean([b.Female for a,b in LST]),np.mean([b.Male for
...: a,b in LST]))
...: result = (grizzly, panda)
...:
...:
158 µs ± 597 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [9]: %%timeit
...: n=len(LST)
...: grizzly = Smoker(sum([a.Female for a,b in LST])/n,sum([a.Male for a,b in
...: LST])/n)
...: panda = Nonsmoker(sum([b.Female for a,b in LST])/n,sum([b.Male for a,b i
...: n LST])/n)
...: result = (grizzly, panda)
...:
...:
46.2 µs ± 37.4 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
</code></pre>
<p>Both produce the same <code>result</code> (to within a small epsilon):</p>
<pre><code>In [8]: result
Out[8]:
(Smoker(Female=0.5383695316982974, Male=0.5493854404111675),
Nonsmoker(Female=0.4913454565011218, Male=0.47143788469638825))
</code></pre>
<p>If you could collect the values in one array, possibly of (n, 4) shape, then the mean would be fast. For a one-time calculation it probably isn't worth it:</p>
<pre><code>In [11]: M = np.array([(a.Female, a.Male, b.Female, b.Male) for a,b in LST])
In [12]: np.mean(M, axis=0)
Out[12]: array([0.53836953, 0.54938544, 0.49134546, 0.47143788])
In [13]: timeit M = np.array([(a.Female, a.Male, b.Female, b.Male) for a,b in LST])
128 µs ± 1.22 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [14]: timeit np.mean(M, axis=0)
21.9 µs ± 371 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
</code></pre>
<p>Since named tuples can be accessed like regular tuples, we can make an array directly from <code>LST</code>:</p>
<pre><code>In [16]: np.array(LST).shape
Out[16]: (100, 2, 2)
In [17]: np.array(LST).mean(axis=0)
Out[17]:
array([[0.53836953, 0.54938544],
[0.49134546, 0.47143788]])
</code></pre>
<p>But timing isn't encouraging:</p>
<pre><code>In [18]: timeit np.array(LST).mean(axis=0)
1.26 ms ± 7.92 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
</code></pre>
<p>I can also make a structured array from your list - with nested dtypes:</p>
<pre><code>In [26]: dt = np.dtype([('Smoker', [('Female','f'),('Male','f')]),('Nonsmoker',[
...: ('Female','f'),('Male','f')])])
In [27]: M=np.array(LST,dt)
In [28]: M['Smoker']['Female'].mean()
Out[28]: 0.53836954
</code></pre>
<p>Curiously timing is relatively good:</p>
<pre><code>In [29]: timeit M=np.array(LST,dt)
40.6 µs ± 243 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
</code></pre>
<p>But I have to take each mean separately, or else convert it to an unstructured array first.</p>
<p>I could make a (n,4) float array from the structured one with a <code>view</code> or a <code>recfunctions</code> utility:</p>
<pre><code>In [53]: M1 = M.view([('f0','f',(4,))])['f0']
In [54]: M1.shape
Out[54]: (100, 4)
In [55]: M2=rf.structured_to_unstructured(M)
</code></pre>
|
python|list|numpy|tuples|namedtuple
| 1
|
376,687
| 64,159,676
|
Correlation matrix for panel data in Python
|
<p>I want to create a correlation matrix for a data panel. The dataframe contains data on 15 numerical variables on a monthly basis for 11 years.</p>
<p>I would like to know, if possible, how to generate a single correlation matrix for the variables of this type of dataframe.</p>
<p>The alternative I have in mind would be to generate one correlation matrix per year, but I would like to know if it is possible to make only one correlation matrix for the whole dataframe in case the number of years is very large (which would make it unfeasible to make one matrix for each year).</p>
<p>Thanks in advance.</p>
|
<p>IIUC, you're mainly looking for the <code>corr</code> method of a DataFrame. Consider this example:</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(0)
df = pd.DataFrame(np.random.rand(30, 5)).add_prefix("feature_")
df["year"] = np.repeat(["2012", "2013", "2014"], 10)
print(df.head()) # first 5 rows. Note that there are 30 rows
feature_0 feature_1 feature_2 feature_3 feature_4 year
0 0.548814 0.715189 0.602763 0.544883 0.423655 2012
1 0.645894 0.437587 0.891773 0.963663 0.383442 2012
2 0.791725 0.528895 0.568045 0.925597 0.071036 2012
3 0.087129 0.020218 0.832620 0.778157 0.870012 2012
4 0.978618 0.799159 0.461479 0.780529 0.118274 2012
</code></pre>
<p>Subset the numerical columns you want to be in the cormat (in this case I use <code>.filter</code> to get just the "feature_X" columns) and use <code>DataFrame.corr</code>:</p>
<pre><code>cormat = df.filter(like="feature").corr()
print(cormat)
feature_0 feature_1 feature_2 feature_3 feature_4
feature_0 1.000000 0.004582 0.412658 0.269969 0.151162
feature_1 0.004582 1.000000 -0.200808 0.140620 -0.138652
feature_2 0.412658 -0.200808 1.000000 -0.019439 0.284211
feature_3 0.269969 0.140620 -0.019439 1.000000 -0.063653
feature_4 0.151162 -0.138652 0.284211 -0.063653 1.000000
</code></pre>
<p>If you want to get a correlation matrix at a grouping of some other variable, you can use <code>.groupby</code> first.</p>
<pre><code>annual_cormat = df.groupby("year").corr()
print(annual_cormat)
feature_0 feature_1 feature_2 feature_3 feature_4
year
2012 feature_0 1.000000 0.359721 -0.266740 0.285998 -0.526528
feature_1 0.359721 1.000000 -0.330484 0.180620 -0.580236
feature_2 -0.266740 -0.330484 1.000000 0.262000 0.428895
feature_3 0.285998 0.180620 0.262000 1.000000 -0.144745
feature_4 -0.526528 -0.580236 0.428895 -0.144745 1.000000
2013 feature_0 1.000000 0.135499 0.704653 0.081326 0.453111
feature_1 0.135499 1.000000 -0.385677 0.732700 -0.065941
feature_2 0.704653 -0.385677 1.000000 -0.607016 0.143572
feature_3 0.081326 0.732700 -0.607016 1.000000 0.107971
feature_4 0.453111 -0.065941 0.143572 0.107971 1.000000
2014 feature_0 1.000000 -0.624004 0.056185 0.351376 -0.038286
feature_1 -0.624004 1.000000 0.103911 -0.284685 0.266124
feature_2 0.056185 0.103911 1.000000 0.249860 0.145773
feature_3 0.351376 -0.284685 0.249860 1.000000 -0.347361
feature_4 -0.038286 0.266124 0.145773 -0.347361 1.000000
</code></pre>
|
python|pandas|dataframe|panel|correlation
| 1
|
376,688
| 63,877,900
|
KeyError(key) while merging the data frames
|
<pre class="lang-py prettyprint-override"><code>Input = df=pd.merge(Bx_Users,BX_ratings,on='user_id')
Error = Traceback (most recent call last):
File "C:/Users/91943/AppData/Roaming/JetBrains/PyCharmCE2020.2/scratches/MergingwithSummerclothingdataset.py", line 14, in <module>
df=pd.merge(Bx_Users,BX_ratings,on='user_id')
File "C:\Users\91943\PycharmProjects\Pandas\venv\lib\site-packages\pandas\core\reshape\merge.py", line 74, in merge
op = _MergeOperation(
File "C:\Users\91943\PycharmProjects\Pandas\venv\lib\site-packages\pandas\core\reshape\merge.py", line 652, in __init__
) = self._get_merge_keys()
File "C:\Users\91943\PycharmProjects\Pandas\venv\lib\site-packages\pandas\core\reshape\merge.py", line 1005, in _get_merge_keys
right_keys.append(right._get_label_or_level_values(rk))
File "C:\Users\91943\PycharmProjects\Pandas\venv\lib\site-packages\pandas\core\generic.py", line 1560, in _get_label_or_level_values
raise KeyError(key)
KeyError: 'user_id'
</code></pre>
<p>I tried different ways of fixing the issue. Not sure where I am going wrong.</p>
|
<p>From the <code>pandas.merge</code> documentation:</p>
<p><code>right</code>: DataFrame or named Series. Object to merge with.</p>
<p><code>how</code>: {'left', 'right', 'outer', 'inner', 'cross'}, default 'inner'. Type of merge to be performed.</p>
<p>left: use only keys from left frame, similar to a SQL left outer join; preserve key order.</p>
<p>right: use only keys from right frame, similar to a SQL right outer join; preserve key order.</p>
<p>outer: use union of keys from both frames, similar to a SQL full outer join; sort keys lexicographically.</p>
<p>inner: use intersection of keys from both frames, similar to a SQL inner join; preserve the order of the left keys.</p>
<p>cross: creates the cartesian product from both frames, preserves the order of the left keys.</p>
<p>New in version 1.2.0.</p>
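<p>As a hedged illustration (the column names below are made up, not taken from the question's data), a <code>KeyError: 'user_id'</code> from <code>pd.merge</code> usually means one of the two frames has no column with exactly that name. Normalizing the names first fixes it:</p>

```python
import pandas as pd

left = pd.DataFrame({"user_id": [1, 2], "age": [30, 40]})
right = pd.DataFrame({"User-ID": [1, 2], "rating": [5, 3]})

# pd.merge(left, right, on="user_id") would raise KeyError: 'user_id'
# here, because the right frame spells the key column differently.
right = right.rename(columns={"User-ID": "user_id"})
df = pd.merge(left, right, on="user_id")
```

<p>Printing <code>Bx_Users.columns</code> and <code>BX_ratings.columns</code> before merging is a quick way to spot such a mismatch.</p>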
|
pandas|merge
| 0
|
376,689
| 64,099,754
|
Iterating optimization on a dataframe
|
<p>I'm trying to build an iterating interpolation of series <code>x</code> and dataframe <code>y</code>.
Df <code>y</code> is made up of <code>n</code> rows and <code>m</code> columns. I would like to run the interpolation for every row of DataFrame <code>y</code>.</p>
<p>So far, I've been able to successfully build the iteration for one single row by using <code>iloc[0:]</code></p>
<pre><code>### SX5E
z=np.linspace(0.2,0.99,200)
z_pd_SX5E=pd.Series(z)
from scipy import interpolate
def f(z_pd_SX5E):
x_SX5E=x
y_SX5E=y.iloc[0,:]
tck_SX5E = interpolate.splrep(x_SX5E, y_SX5E)
return interpolate.splev(z_pd_SX5E, tck_SX5E)
Optimal_trigger_P_SX5E= z_pd_SX5E[f(z_pd_SX5E).argmax(axis=0)]
</code></pre>
<p>How can I run the function through every row of <code>y</code>?
Many thanks</p>
|
<p>In general you can run any function for each row by using <code>.apply</code> with <code>axis=1</code> (without it, the function is applied column-wise). So something like:</p>
<pre><code>y.apply(lambda row: interpolate.splrep(x, row), axis=1)
</code></pre>
<p>This will return a new Series object.</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html</a></p>
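<p>A minimal illustration with a toy frame (not the interpolation itself): <code>axis=1</code> is what makes <code>apply</code> pass one row at a time:</p>

```python
import numpy as np
import pandas as pd

y = pd.DataFrame(np.arange(6).reshape(2, 3))  # rows [0,1,2] and [3,4,5]

# axis=1 passes each row to the function as a Series
row_sums = y.apply(lambda row: row.sum(), axis=1)
```
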
|
python|pandas|dataframe|loops|optimization
| 0
|
376,690
| 64,098,085
|
Finding non-matching rows between two dataframes
|
<p>I have a scenario where I want to find non-matching rows between two dataframes. Both dataframes will have around 30 columns and an <code>id</code> column that uniquely identify each record/row. So, I want to check if a row in <code>df1</code> is different from the one in <code>df2</code>. The <code>df1</code> is an updated dataframe and <code>df2</code> is the previous version.</p>
<p>I have tried an approach <code>pd.concat([df1, df2]).drop_duplicates(keep=False)</code> , but it just combines both dataframes. Is there a way to do it. I would really appreciate the help.</p>
<p>The sample data looks like this for both <code>dfs</code>.</p>
<p><code>id</code> <code>user_id</code> <code>type</code> <code>status</code></p>
<p>There will be total 39 columns which may have <code>NULL</code> values in them.</p>
<p>Thanks.</p>
<p>P.S. <code>df2</code> will always be a subset of <code>df1</code>.</p>
|
<p>If your df1 and df2 have the same shape, you can easily compare them with this code.</p>
<pre><code>import numpy as np
df3 = pd.DataFrame(np.where(df1 == df2, True, False), columns=df1.columns)
</code></pre>
<p>The output will show <code>False</code> for every cell that does not match.</p>
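<p>An alternative sketch, with made-up toy columns, that reports which rows of <code>df1</code> have no exact match in <code>df2</code> even when the shapes differ, using <code>merge</code> with <code>indicator=True</code>:</p>

```python
import pandas as pd

df1 = pd.DataFrame({"id": [1, 2, 3], "status": ["a", "b", "c"]})
df2 = pd.DataFrame({"id": [1, 2], "status": ["a", "x"]})

# Merging on all shared columns marks each row's origin in "_merge";
# "left_only" rows exist (with these exact values) only in df1.
merged = df1.merge(df2, how="left", indicator=True)
only_in_df1 = merged[merged["_merge"] == "left_only"].drop(columns="_merge")
```

<p>Since the question says <code>df2</code> is always a subset of <code>df1</code>, the <code>left_only</code> rows are exactly the new or changed records.</p>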
|
python|pandas|dataframe
| 1
|
376,691
| 63,811,180
|
pyodbc import error because of Invalid Datetime format
|
<p>I've already looked it up here but couldn't find a solution to my problem. I want to get a dataframe from 4 Access databases; 2 work with this exact code and the other 2 display this error:</p>
<pre><code>DataError: ('22007', '[22007] [Microsoft][ODBC-Treiber für Microsoft Access]Ungültiges Datetime-Format. bei Spaltennummer 11 (dtime) (35) (SQLGetData)')
</code></pre>
<p>The data has the same format in each database. See my code below:</p>
<pre><code> conn_str = (
r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
r'DBQ=C:\Users\hoho11.DE\Documents\WLTP_Datenbank\Database_JRC2_SE_UK.accdb;')
conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
for table_info in cursor.tables(tableType='TABLE'):
print(table_info.table_name)
</code></pre>
<p>and the error comes here:</p>
<pre><code>df_3 = pd.read_sql_query(sql='SELECT * FROM TB_cycles_car', con=conn)
df_3.head()
</code></pre>
<p>many thanks in advance for your support!!</p>
|
<p>So I finally found the answer. I selected the column causing the error and imported it as a string. I just wrote:</p>
<pre><code>df = pd.read_sql_query(sql='SELECT ID, ..., Cstr(dtime), dates FROM TB_cycles_car', con=conn)
</code></pre>
<p>The DataError didn't show up anymore :)
Thanks a lot @GordThompson for helping!</p>
|
python|pandas|ms-access|pyodbc
| 1
|
376,692
| 63,989,761
|
Select Dataframe rows in a date range
|
<p>I have a data frame like the following</p>
<pre><code> transaction_no sales_order is_delivered dispatch_date remarks ....
0 2122.0 1.0 True 06-01-2020 NaN
1 2122.0 1.0 True 06-01-2020 NaN
2 2122.0 1.0 True 06-01-2020 NaN
3 2122.0 1.0 True 06-01-2020 NaN
4 2122.0 1.0 True 06-01-2020 NaN
</code></pre>
<p>I want to select rows based on a date range criteria but I am getting the empty dataframe every time</p>
<p>Here's what I did:</p>
<pre><code> dt_format = '%Y-%m-%d %H:%M'
o_f = datetime.strptime(request.GET['from'], dt_format).strftime('%d/%m/%Y')
o_t = datetime.strptime(request.GET['to'], dt_format).strftime('%d/%m/%Y')
f = datetime.strptime(request.GET['from'], dt_format).replace(tzinfo=pytz.UTC).date().strftime("%d-%m-%Y")
t = datetime.strptime(request.GET['to'], dt_format).replace(tzinfo=pytz.UTC).date().strftime("%d-%m-%Y")
allot_df = allot_df[allot_df['dispatch_date'].isin(pd.date_range(f, t))]
</code></pre>
<p>How can I do that?
Better yet, why is this not working?</p>
<p>Update:
The type of column was <code>str</code>
so I changed it to datetime</p>
<pre><code> allot_df['dispatch_date'] = pd.to_datetime(allot_df['dispatch_date'])
allot_df = allot_df[allot_df['dispatch_date'].isin(pd.date_range(f, t))]
</code></pre>
<p>But now the whole dataframe comes as the output</p>
|
<p>Assume that just after reading, e.g. calling <em>pd.read_csv</em>, without any
type conversion, your DataFrame contains:</p>
<pre><code> transaction_no sales_order is_delivered dispatch_date
0 2122.0 1.0 True 06-01-2020
1 2123.0 1.0 True 07-01-2020
2 2124.0 1.0 True 08-01-2020
3 2125.0 1.0 True 09-01-2020
4 2126.0 1.0 True 10-01-2020
</code></pre>
<p>To check column types run <code>df.info()</code> and the result should be something like:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
Int64Index: 5 entries, 0 to 4
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 transaction_no 5 non-null float64
1 sales_order 5 non-null float64
2 is_delivered 5 non-null bool
3 dispatch_date 5 non-null object
dtypes: bool(1), float64(2), object(1)
memory usage: 165.0+ bytes
</code></pre>
<p>Note <em>Dtype</em> for <em>dispatch_date</em> column. It is <em>object</em> (more precisely,
something other than a number, and actually - a <strong>string</strong>).</p>
<p>A good habit in working with <em>Pandas</em> object is to use its native
<em>datetime</em> type, and not to use <em>datetime</em> module.
This way your code will run substantially faster than if you used other
date/time representation.</p>
<p>So the first step is to convert <em>dispatch_date</em> column from <em>string</em> to
<em>datetime</em>. You can do it calling:</p>
<pre><code>df.dispatch_date = pd.to_datetime(df.dispatch_date, dayfirst=True)
</code></pre>
<p>Now when you print <em>df</em>, you will get:</p>
<pre><code> transaction_no sales_order is_delivered dispatch_date
0 2122.0 1.0 True 2020-01-06
1 2123.0 1.0 True 2020-01-07
2 2124.0 1.0 True 2020-01-08
3 2125.0 1.0 True 2020-01-09
4 2126.0 1.0 True 2020-01-10
</code></pre>
<p>The first thing to notice is that now <em>dispatch_date</em> is printed in
<em>year-month-day</em> format, but for now you may be not sure about its
type. To check this detail, run <code>df.info()</code> again and the row
concerning <em>dispatch_date</em> should be:</p>
<pre><code>3 dispatch_date 5 non-null datetime64[ns]
</code></pre>
<p>And if you want to retrieve rows for particular date range, you can e.g.:</p>
<ul>
<li>specify both border dates as strings, but also in <em>year-month-day</em> format,</li>
<li>call <em>df.query</em>, passing both dates in the query string.</li>
</ul>
<p>Something like:</p>
<pre><code>df.query("dispatch_date.between('2020-01-07', '2020-01-09')")
</code></pre>
<p>The result is:</p>
<pre><code> transaction_no sales_order is_delivered dispatch_date
1 2123.0 1.0 True 2020-01-07
2 2124.0 1.0 True 2020-01-08
3 2125.0 1.0 True 2020-01-09
</code></pre>
<p>Note that the ending date is <strong>inclusive</strong>, contrary to the way how
you specify <em>Pandas</em> slices, where the right border is <strong>exclusive</strong>.</p>
<p>I deliberately didn't go into such details like how to extract both date
strings from your source data, this is another issue and you should cope
with it alone.</p>
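<p>As an aside, the same filter can be written with a plain boolean mask, which avoids <code>query</code> string parsing entirely (toy data below):</p>

```python
import pandas as pd

df = pd.DataFrame({"dispatch_date": ["06-01-2020", "07-01-2020", "10-01-2020"]})
df["dispatch_date"] = pd.to_datetime(df["dispatch_date"], dayfirst=True)

# between() is inclusive on both ends, matching the .query version above
mask = df["dispatch_date"].between("2020-01-06", "2020-01-08")
subset = df[mask]
```
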
|
python|pandas
| 1
|
376,693
| 47,049,073
|
Keras Initializers of a particular shape
|
<p>I made a small Keras model and get its weights using the following code:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Flatten,Conv2D, MaxPooling2D
input_shape = (28, 28, 1)
model = Sequential()
model.add(Conv2D(1, kernel_size=(2, 2),
activation='relu',
input_shape=input_shape,trainable=False))
model.add(MaxPooling2D(pool_size=(16,16)))
model.add(Flatten())
model.add(Dense(3, activation='softmax',trainable=False))
a=model.get_weights()
</code></pre>
<p>Now I want to initialize weights with the same shape as <strong>a</strong> using Keras initializers. I am using the following code:</p>
<pre><code>from keras.initializers import glorot_uniform
W1 = glorot_uniform((a,))
</code></pre>
<p>Is my approach right? If it is wrong, please suggest a solution; if it is right, why am I not able to see the weights? It shows:</p>
<pre><code><keras.initializers.VarianceScaling at 0x7f65746ba128>
</code></pre>
|
<p><strong>About <code>get_weights()</code>:</strong></p>
<p>The method <code>model.get_weights()</code> will return a list of numpy arrays. So you have to take care to create a list with the same number of arrays, in the same order, with the same shapes. </p>
<p>In this model, it seems there will be 4 arrays in the list, convolution kernel and bias plus dense kernel and bias. Each one with a different shape.</p>
<p><strong>About the initializers:</strong></p>
<p>Initializers are functions that take a <code>shape</code> as input and return a <code>tensor</code>. </p>
<p>You see <code>VarianceScaling</code> because it's probably the name of the function. You should call the function with a shape to get the result:</p>
<pre><code>weights = [glorot_uniform()(npArray.shape) for npArray in a]
</code></pre>
<p>They will be keras tensors, though, not numpy arrays. You should <code>K.eval(arr)</code> them to get them as numpy arrays. </p>
<p>If using <code>model.set_weights()</code>, pass the list with numpy arrays (same number that is present in <code>get_weights()</code>, same shapes) </p>
<p><strong>Standard usage of initializers:</strong></p>
<p>But actually, initializers are meant to be used directly in the creation of the layers, and if you don't want to specify seeds and other initializer parameters, you can use just strings:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Flatten,Conv2D, MaxPooling2D
input_shape = (28, 28, 1)
model = Sequential()
model.add(Conv2D(1, kernel_size=(2, 2),
activation='relu',
input_shape=input_shape,
trainable=False,
kernel_initializer='glorot_uniform', #example with string
bias_initializer='zeros'))
model.add(MaxPooling2D(pool_size=(16,16)))
model.add(Flatten())
model.add(Dense(3,
activation='softmax',
trainable=False,
kernel_initializer=glorot_uniform(seed=None), #example creating a function
bias_initializer='zeros'))
</code></pre>
<p>Read more about <a href="https://keras.io/initializers/" rel="nofollow noreferrer">initializers</a> here. </p>
|
numpy|keras|keras-2
| 2
|
376,694
| 46,973,453
|
Rolling window on dataframe rows Python 3
|
<p>Is there a way to create a rolling window (2 periods) over a dataframe rows and compute the sum of the values?</p>
<p>My data:</p>
<pre><code>ID Name Value1 Value2 Value3 Value4
0 A 2 2 4 4
1 B 1 1 3 3
</code></pre>
<p>The output desired:</p>
<pre><code>ID Name Value1 Value2 Value3 Value4 Rol1 Rol2 Rol3
0 A 2 2 4 4 4 6 8
1 B 1 1 3 3 2 4 6
</code></pre>
<p>I tried to use df.rolling() but was only able to use it on a specific column </p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rolling.html" rel="nofollow noreferrer"><code>rolling</code></a>, but first call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> with all columns that cannot be used in the function, and finally add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="nofollow noreferrer"><code>dropna</code></a> to remove all-<code>NaN</code> columns:</p>
<pre><code>df1 = (df.set_index(['ID','Name'])
.rolling(2, axis=1).sum()
.dropna(axis=1, how='all'))
#rename columns
df1.columns = ['Roll{}'.format(x) for x in range(1, len(df1.columns)+1)]
print (df1)
Roll1 Roll2 Roll3
ID Name
0 A 4.0 6.0 8.0
1 B 2.0 4.0 6.0
</code></pre>
<p>Finally, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>join</code></a> the output to the original:</p>
<pre><code>df = df.join(df1, on=['ID','Name'])
print (df)
ID Name Value1 Value2 Value3 Value4 Roll1 Roll2 Roll3
0 0 A 2 2 4 4 4.0 6.0 8.0
1 1 B 1 1 3 3 2.0 4.0 6.0
</code></pre>
|
python-3.x|pandas
| 4
|
376,695
| 46,796,618
|
Add missing rows to data frame equally distribueted
|
<p>I sampled a pandas dataframe using a custom sampler function.
It is basically made up of two columns:</p>
<ul>
<li>a timestamp</li>
<li>a value</li>
</ul>
<p>I'd like to create a new data frame with all the datetimes equally distributed (i.e. every 10 minutes) to fill missing values in the sampled one (sampled at the same frequency). </p>
<p>Should I use the <em>reindex</em> method?</p>
<p>I'm trying to do something like:</p>
<pre><code>dd = pd.date_range(
start_date.astimezone(pytz.utc),
end_date.astimezone(pytz.utc),
freq="3min"
)
dd = dd.map(lambda item: calendar.timegm(item.timetuple()))
df.index = df.reindex(dd, fill_value="NaN")
</code></pre>
<p>It just does not work. I get a "length mismatch error" since the two indexes have different sizes.</p>
<p>Is this the correct approach?</p>
<p>Thanks, </p>
<p>FB</p>
|
<p>You can try this; I used <code>combine_first</code> to merge the two dataframes.</p>
<pre><code>start_date = datetime.datetime.today()
end_date = datetime.datetime(2017, 10, 19)
dd = pd.date_range(
start_date,
end_date,
freq="3min"
)
dd = dd.map(lambda item: calendar.timegm(item.timetuple()))
columns = ['some', 'column', 'headers']
df = pd.DataFrame(columns=columns, index=dd)
myarray = np.random.random((len(dd),3))
for val, item in enumerate(myarray):
df.ix[df.index.values[val]] = item
index_new = df.sample(frac=0.8, random_state=200)
df = df.drop(index_new.index)
df_ok = pd.DataFrame(columns=columns, index=dd)
df_ok = df_ok.combine_first(df)
</code></pre>
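<p>For what it's worth, the asker's original <code>reindex</code> idea also works once the result is assigned back to the object itself rather than to <code>df.index</code>. A minimal sketch with a toy 3-minute grid mirroring the question:</p>

```python
import pandas as pd

full_index = pd.date_range("2020-01-01", periods=5, freq="3min")

# Pretend only two of the five timestamps were actually sampled
sampled = pd.Series([1.0, 3.0], index=[full_index[0], full_index[3]])

# Reindex onto the full regular grid; missing timestamps become NaN
regular = sampled.reindex(full_index)
```
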
|
python|pandas|resampling
| 1
|
376,696
| 46,760,020
|
Existence of rows in a DataFrame based on other DataFrames
|
<p>Let's say, I have a Dataframe DF1 like this:</p>
<pre><code> A B
0 123 997
1 123 998
2 124 999
3 125 997
4 125 998
</code></pre>
<p>And other 2 Dataframes A and B, containing every possible item present in DF1:</p>
<pre><code>A
a
0 123
1 124
2 125
3 126
4 127
B
b
0 999
1 998
2 997
3 996
4 995
</code></pre>
<p>How do I check, in an efficient way, the existence in DF1 of every combination of rows in Dataframe A and Dataframe B in order to get a matrix of it?</p>
<p>Something like this</p>
<pre><code>Existence matrix/dataframe:
999 998 997 996 995
123 False True True False False
124 True False False False False
125 False True True False False
126 False False False False False
127 False False False False False
</code></pre>
|
<p>You can use <code>pd.crosstab</code> + <code>reindex</code>:</p>
<pre><code>df = pd.crosstab(df.A, df.B).reindex(index=A.a,
columns=B.b).fillna(0).astype(bool)
print(df)
b 999 998 997 996 995
a
123 False True True False False
124 True False False False False
125 False True True False False
126 False False False False False
127 False False False False False
</code></pre>
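<p>For completeness, a self-contained version of the same snippet, using the data from the question:</p>

```python
import pandas as pd

df = pd.DataFrame({"A": [123, 123, 124, 125, 125],
                   "B": [997, 998, 999, 997, 998]})
A = pd.DataFrame({"a": [123, 124, 125, 126, 127]})
B = pd.DataFrame({"b": [999, 998, 997, 996, 995]})

# crosstab counts (A, B) pairs; reindex adds the missing rows/columns,
# and the NaN fill + bool cast turns counts into an existence matrix.
out = (pd.crosstab(df.A, df.B)
         .reindex(index=A.a, columns=B.b)
         .fillna(0)
         .astype(bool))
```
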
|
python|pandas|dataframe
| 2
|
376,697
| 46,950,927
|
How to create a Initializer for layers.batch_normalization?
|
<p>The default <code>beta_initializer</code> for <code>layers.batch_normalization</code> is: <code>tf.zeros_initializer()</code>.</p>
<p>Is it possible to create a new initializer with an arbitrary value?</p>
|
<p>See the list of <a href="https://www.tensorflow.org/versions/r1.0/api_guides/python/state_ops#Sharing_Variables" rel="nofollow noreferrer">built-in initializers</a>. The one that interests you is <a href="https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/constant_initializer" rel="nofollow noreferrer"><code>tf.constant_initializer</code></a>:</p>
<blockquote>
<p>Initializer that generates tensors with constant values.</p>
</blockquote>
|
machine-learning|tensorflow|initializer|batch-normalization
| 1
|
376,698
| 46,834,436
|
Choose one entry from a list if its key contains a string from another column
|
<p>I have a question regarding my dataframe. Specifically, in one column, for each row, I have a list of speakers and speeches. Now, I want to choose exactly one speech, based on whether the speaker is the one I am looking for, which is noted within another column. So one column provides the last name I am looking for and the other column provides a list of all speakers (first and last name) and their speech and I want to create a new columns where this speech is stored in the respective row.</p>
<p>So my initial dataset looks like this:</p>
<pre><code>ticker year quarter exel_lname jobposition speech
xx 2009 1 Angle CEO [("Mike Angle", "Thank you"), ("Barbara Barth", "It is")]
xx 2009 1 Barth CFO [("Mike Angle", "Thank you"), ("Barbara Barth", "It is")]
xx 2009 2 Angle CEO [("Mike Angle", "I am surprised"), ("Barbara Barth", "So am I")]
xx 2009 2 Barth CFO [("Mike Angle", "I am surprised"), ("Barbara Barth", "So am I")]
yy 2008 3 Cruz CEO [("Damien Cruz", "Hello"), ("Lara Dolm", "Nice to meet you")]
yy 2008 3 Dolm CFO [("Damien Cruz", "Hello"), ("Lara Dolm", "Nice to meet you")]
</code></pre>
<p>For row one for instance, I want to check each key-value pair whether the first list entry contains the last name, if no continue, if yes, take the speech part (i.e. second list entry) and store it in new column. As such, I want the following dataset (I hid the initial column speech here, but it should still be contained, so I do not want to replace it, just create a new column).</p>
<pre><code>ticker year quarter exel_lname jobposition speechmanager
xx 2009 1 Angle CEO "Thank you"
xx 2009 1 Barth CFO "It is"
xx 2009 2 Angle CEO "I am surprised"
xx 2009 2 Barth CFO "So am I"
yy 2008 3 Cruz CEO "Hello"
yy 2008 3 Dolm CFO "Nice to meet you"
</code></pre>
<p>Could someone help me how to solve this in Python 3?</p>
<p>Thank you!!
Julia</p>
|
<p>This is perhaps best accomplished by writing a function, and then applying it row-wise:</p>
<pre><code>def get_speech(row):
matches = list(filter(lambda x: x[0].endswith(row['exel_lname']), row['speech']))
if len(matches) > 0:
return matches[0][1]
return ''
df['speechmanager'] = df.apply(get_speech, axis=1)
</code></pre>
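<p>To see the filtering step in isolation, here is the core of <code>get_speech</code> run on the first row of the question's data (plain Python, no pandas):</p>

```python
speech = [("Mike Angle", "Thank you"), ("Barbara Barth", "It is")]
exel_lname = "Angle"

# Keep only the (speaker, speech) pairs whose speaker name ends
# with the last name, then take the speech part of the first hit.
matches = [pair for pair in speech if pair[0].endswith(exel_lname)]
speechmanager = matches[0][1] if matches else ""
```
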
|
python|pandas|dataframe
| 2
|
376,699
| 46,980,287
|
Output node for tensorflow graph created with tf.layers
|
<p>I have built a tensorflow neural net and now want to run the <code>graph_util.convert_variables_to_constants</code> function on it. However this requires an <code>output_node_names</code> parameter. The last layer in the net has the name <code>logit</code> and is built as follows:</p>
<pre><code>logits = tf.layers.dense(inputs=dropout, units=5, name='logit')
</code></pre>
<p>however there are many nodes in that scope:</p>
<pre><code>gd = sess.graph_def
for n in gd.node:
if 'logit' in n.name:print(n.name)
</code></pre>
<p>prints:</p>
<pre><code>logit/kernel/Initializer/random_uniform/shape
logit/kernel/Initializer/random_uniform/min
logit/kernel/Initializer/random_uniform/max
logit/kernel/Initializer/random_uniform/RandomUniform
logit/kernel/Initializer/random_uniform/sub
logit/kernel/Initializer/random_uniform/mul
logit/kernel/Initializer/random_uniform
logit/kernel
logit/kernel/Assign
logit/kernel/read
logit/bias/Initializer/zeros
logit/bias
logit/bias/Assign
logit/bias/read
logit/Tensordot/Shape
logit/Tensordot/Rank
logit/Tensordot/axes
...
logit/Tensordot/Reshape_1
logit/Tensordot/MatMul
logit/Tensordot/Const_2
logit/Tensordot/concat_2/axis
logit/Tensordot/concat_2
logit/Tensordot
logit/BiasAdd
...
</code></pre>
<p>How do I work out which of these nodes is the output node?</p>
|
<p>If the graph is complex, a common way is to add an identity node at the end:</p>
<pre><code>output = tf.identity(logits, 'output')
# you can use the name "output"
</code></pre>
<p>For example, the following code should work:</p>
<pre><code>logits = tf.layers.dense(inputs=dropout, units=5, name='logit')
output = tf.identity(logits, 'output')
output_graph_def = tf.graph_util.convert_variables_to_constants(
ss, tf.get_default_graph().as_graph_def(), ['output'])
</code></pre>
|
tensorflow
| 1
|