| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
6,100
| 65,205,582
|
How can I add a Bi-LSTM layer on top of a BERT model?
|
<p>I'm using <strong>pytorch</strong> and I'm using the base <strong>pretrained bert</strong> to classify sentences for hate speech.
I want to implement a <strong>Bi-LSTM</strong> layer that takes as input all outputs of the last
transformer encoder from the BERT model, as a new model (a class that implements <strong>nn.Module</strong>), and I got confused by the <strong>nn.LSTM</strong> parameters.
I tokenized the data using</p>
<pre class="lang-py prettyprint-override"><code>bert = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=int(data['class'].nunique()),output_attentions=False,output_hidden_states=False)
</code></pre>
<p>My data-set has 2 columns: class(label), sentence.
Can someone help me with this?
Thank you in advance.</p>
<p><strong>Edit</strong>:
Also, after processing the input in the Bi-LSTM, the network sends the final hidden state to a fully connected network that performs classification using the softmax activation function. How can I do that?</p>
|
<p>You can do it as follows:</p>
<pre><code>from transformers import BertModel, BertTokenizerFast
import torch
import torch.nn as nn

class CustomBERTModel(nn.Module):
    def __init__(self):
        super(CustomBERTModel, self).__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        ### New layers:
        self.lstm = nn.LSTM(768, 256, batch_first=True, bidirectional=True)
        self.linear = nn.Linear(256*2, <number_of_classes>)

    def forward(self, ids, mask):
        sequence_output, pooled_output = self.bert(
            ids,
            attention_mask=mask)
        # sequence_output has the following shape: (batch_size, sequence_length, 768)
        lstm_output, (h, c) = self.lstm(sequence_output)  # lstm_output: (batch_size, sequence_length, 256*2)
        # concatenate the last forward state and the first backward state
        hidden = torch.cat((lstm_output[:, -1, :256], lstm_output[:, 0, 256:]), dim=-1)
        linear_output = self.linear(hidden.view(-1, 256*2))  ### assuming that you are only using the output of the last LSTM cell to perform classification
        return linear_output

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = CustomBERTModel()
</code></pre>
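<p>To address the edit in the question: in PyTorch the softmax is usually folded into the loss rather than added to the model, since <code>nn.CrossEntropyLoss</code> applies log-softmax internally. A minimal training sketch, where <code>ids</code>, <code>mask</code>, and <code>labels</code> are placeholder batch tensors:</p>
<pre><code>criterion = nn.CrossEntropyLoss()  # applies log-softmax internally
logits = model(ids, mask)          # raw scores from the linear layer above
loss = criterion(logits, labels)
loss.backward()
</code></pre>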
|
python|deep-learning|neural-network|pytorch|lstm
| 9
|
6,101
| 50,179,949
|
Create a function to extract specific columns and rename pandas
|
<p>I have a target table structure (3 columns). I have multiple sources, each with its own nuances but ultimately I want to use each table to populate the target table (append entries)</p>
<p>I want to use a function (I know I can do it without a function but it will help me out in the long run to be able to use a function)</p>
<p>I have the following source table</p>
<pre><code>id col1 col2 col3 col4
1 a b c g
1 a b d h
1 c d e i
</code></pre>
<p>I want this final structure</p>
<pre><code>id num group
1 a b
1 a b
1 c d
</code></pre>
<p>So all I am doing is returning id, col1 and col2 from the source table (note the column name changes; for different source tables it will be a different set of 3 columns that I will be extracting, hence the use of a function).</p>
<p>The function I am using is currently returning only 1 column (instead of 3) </p>
<p>Defining function:</p>
<pre><code>def func(x, col1='id', col2='num', col3='group'):
    d = [{'id': x[col1], 'num': x[col2], 'group': x[col3]}]
    return pd.DataFrame(d)
</code></pre>
<p>Applying the function to a source table. </p>
<pre><code>target= source.apply(func, axis=1)
</code></pre>
|
<p>Here's a flexible way to write this function:</p>
<pre><code>def func(dframe, **kwargs):
    return dframe.filter(items=kwargs.keys()).rename(columns=kwargs)
func(df, id="id", col1="num", col2="group")
# group id num
# 0 b 1 a
# 1 b 1 a
# 2 d 1 c
</code></pre>
<p>To ensure that your new dataframe preserves the column order of the original, you can sort the argument keys first:</p>
<pre><code>def func(dframe, **kwargs):
    keys = sorted(kwargs.keys(), key=lambda x: list(dframe).index(x))
    return dframe.filter(items=keys).rename(columns=kwargs)
func(df, id="id", col1="num", col2="group")
# id num group
# 0 1 a b
# 1 1 a b
# 2 1 c d
</code></pre>
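<p>Since the stated goal is to append several sources into one target table, the renamed frames can then be concatenated. A sketch, where <code>source1</code>/<code>source2</code> and their column names are placeholders:</p>
<pre><code>frames = [func(source1, id="id", col1="num", col2="group"),
          func(source2, id="id", colA="num", colB="group")]
target = pd.concat(frames, ignore_index=True)
</code></pre>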
|
python|pandas|numpy
| 2
|
6,102
| 63,831,229
|
How to split a DataFrame into multiple DataFrames by row value?
|
<p>I have a dataframe like below one. I want to split the Dataframe based on Rows</p>
<pre><code> Rows col1 value1 value2
0 row_1 var1 12 3434
1 row_1 var2 212 546
2 row_1 var3 340 8686
3 row_2 var1 226 55
4 row_2 var2 323 878
97 row_33 var1 592 565
98 row_33 var2 282 343
99 row_33 var3 455 764
100 row_34 var1 457 24
101 row_34 var2 617 422
</code></pre>
<p>expected Dataframes</p>
<p>Df1</p>
<pre><code> Rows col1 value1 value2
0 row_1 var1 12 3434
1 row_1 var2 212 546
2 row_1 var3 340 8686
</code></pre>
<p>Df2</p>
<pre><code> Rows col1 value1 value2
0 row_2 var1 226 55
1 row_2 var2 323 878
2 row_2 var3 453 78
</code></pre>
|
<ul>
<li>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>.groupby</code></a> on the <code>'Rows'</code> and create a <code>dict</code> of <code>DataFrames</code> with unique <code>'Row'</code> values as keys, with a <a href="https://www.python.org/dev/peps/pep-0274/" rel="nofollow noreferrer"><code>dict-comprehension</code></a>.
<ul>
<li><code>.groupby</code> returns a <code>groupby</code> object, that contains information about the groups, where <code>g</code> is the unique value in <code>'Rows'</code> for each group, and <code>d</code> is the <code>DataFrame</code> for that group.</li>
</ul>
</li>
<li>The <code>value</code> of each <code>key</code> in <code>df_dict</code>, will be a <code>DataFrame</code>, which can be accessed in the standard way, <code>df_dict['key']</code>.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# setup data and dataframe
data = {'Rows': ['row_1', 'row_1', 'row_1', 'row_2', 'row_2', 'row_33', 'row_33', 'row_33', 'row_34', 'row_34'],
        'col1': ['var1', 'var2', 'var3', 'var1', 'var2', 'var1', 'var2', 'var3', 'var1', 'var2'],
        'value1': [12, 212, 340, 226, 323, 592, 282, 455, 457, 617],
        'value2': [3434, 546, 8686, 55, 878, 565, 343, 764, 24, 422]}
df = pd.DataFrame(data)
# split the dataframe and loop over the groupby object
df_dict = dict()
for g, d in df.groupby('Rows'):
    df_dict[g] = d
# or as a dict comprehension: the unique Row value will be the key
df_dict = {g: d for g, d in df.groupby('Rows')}
# or a specific name for the key, using enumerate
df_dict = {f'df{i}': d for i, (g, d) in enumerate(df.groupby('Rows'))}
</code></pre>
<h3><code>df_dict['df0']</code> or <code>df_dict['row_1']</code></h3>
<pre><code> Rows col1 value1 value2
0 row_1 var1 12 3434
1 row_1 var2 212 546
2 row_1 var3 340 8686
</code></pre>
<h3><code>df_dict['df1']</code> or <code>df_dict['row_2']</code></h3>
<pre><code> Rows col1 value1 value2
3 row_2 var1 226 55
4 row_2 var2 323 878
</code></pre>
<h3><code>df_dict['df2']</code> or <code>df_dict['row_33']</code></h3>
<pre><code> Rows col1 value1 value2
5 row_33 var1 592 565
6 row_33 var2 282 343
7 row_33 var3 455 764
</code></pre>
|
python|pandas|dataframe
| 1
|
6,103
| 46,893,745
|
Set new column values in pandas DataFrame1 where DF2 column values match DF1 index
|
<p>I'd like to set a new column in a pandas dataframe with values calculated using a groupby on dataframe2.</p>
<p>DF1:</p>
<pre><code> col1 col2
id
1 'a'
2 'b'
3 'c'
</code></pre>
<p>DF2:</p>
<pre><code> id col2
index
1 1 11
1 1 22
1 1 12
1 1 45
3 3 83
3 3 11
3 3 35
3 3 54
</code></pre>
<p>I want to group DF2 by 'id', and then apply a function on 'col2' to put the result into the corresponding index in DF1. If there is no group for that particular index, then I want to fill with NaN...</p>
<pre><code>ret_val = DF2.groupby('id').apply(lambda x: my_func(x['col_2']))
col1 col2
id
1 'a' ret_val
2 'b' NaN
3 'c' ret_val
</code></pre>
<p>... I can't quite figure out how to achieve this though</p>
|
<p>Use <code>map</code> on <code>df1.index</code> series.</p>
<pre><code>In [5327]: df1['col2'] = df1.index.to_series().map(df2.groupby('id')
                                 .apply(lambda x: my_func(x['col2'])))
In [5328]: df1
Out[5328]:
col1 col2
id
1 a 360.0
2 b NaN
3 c 536.0
</code></pre>
<p>Details</p>
<pre><code>In [5322]: def my_func(x):
      ...:     return x.sum()
      ...:
In [5323]: df2.groupby('id').apply(lambda x: my_func(x['col2']))
Out[5323]:
id
1 360.0
3 536.0
dtype: float64
In [5324]: df1.index.to_series().map(df2.groupby('id').apply(lambda x: my_func(x['col2'])))
Out[5324]:
id
1 360.0
2 NaN
3 536.0
Name: id, dtype: float64
</code></pre>
|
python|pandas|pandas-groupby
| 2
|
6,104
| 63,139,530
|
Most efficient way to detect non-transparent pixels in ImageMagick / RMagick
|
<p>I've got a series of drawings created via HTML Canvas. I need to create a heatmap using these drawings to show where the most common areas are.</p>
<p>Each drawing is a PNG image with a transparent background. The only non-transparent pixels are those which the users has drawn on. Here are a couple sample images: <a href="https://imgur.com/a/saTK9d7" rel="nofollow noreferrer">https://imgur.com/a/saTK9d7</a></p>
<p>I'm looking for the most efficient way to grab the non-transparent pixels for each image, which I will then add to a matrix to generate a heatmap.</p>
<p>I can get this by iterating over each pixel and checking it's alpha value. However, these images are roughly 1400 x 750 pixels each, so that's ~1,000,000 of checks per image. This becomes untenable when we have hundreds of images to analyze on the fly.</p>
<p>Is there any way to do this more efficiently? I'm open to all ideas.</p>
|
<p>In command-line ImageMagick, you can list all the non-transparent pixels using <code>sparse-color:</code> or by parsing <code>txt:</code> output. Here is a 100x100 transparent image with a 2x2 red square in the top left corner.</p>
<p><a href="https://i.stack.imgur.com/qi79H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qi79H.png" alt="enter image description here" /></a></p>
<pre><code>convert img.png sparse-color:
</code></pre>
<br>
<pre><code>0,0,srgba(255,0,0,1) 1,0,srgba(255,0,0,1) 0,1,srgba(255,0,0,1) 1,1,srgba(255,0,0,1)
</code></pre>
<br>
<p>or</p>
<pre><code>convert img.png txt: | tail -n +2 | grep -v "none" | awk '{print $1 $4}'
</code></pre>
<br>
<pre><code>0,0:red
1,0:red
0,1:red
1,1:red
</code></pre>
<br>
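<p>To fold those coordinates into a heatmap matrix from Python, the <code>sparse-color:</code> output can be parsed directly; a sketch, assuming the roughly 1400 x 750 canvas from the question:</p>
<pre><code>import subprocess
import numpy as np

out = subprocess.run(['convert', 'img.png', 'sparse-color:'],
                     capture_output=True, text=True).stdout
heat = np.zeros((750, 1400), dtype=int)    # rows = y, cols = x
for entry in out.split():                  # each entry looks like "x,y,srgba(...)"
    x, y = map(int, entry.split(',')[:2])
    heat[y, x] += 1
</code></pre>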
|
ruby|numpy|imagemagick|heatmap|rmagick
| 2
|
6,105
| 63,181,994
|
Python pandas: if column A value appears more than once, assign first value of column B
|
<p>I am trying to dynamically replace the value <em>i</em> of <strong>column B</strong> with a consistent value conditional on the value count of <em>j</em> in <strong>column A</strong>.</p>
<p>I'm trying to use a dictionary to map the values, but it isn't working.</p>
<pre><code>color = ['black','mauve','teal','green','teal','black']
code = ['E45', 'M46', 'Y76', 'G44', 'T76','B43']
df = pd.DataFrame({'color': color, 'code': code})
# Dedupe a copy
df_copy = df
df_copy = df_copy.drop_duplicates(subset='color', keep='first')
# Create a dictionary
dummy_dict = df_copy[['color','code']].to_dict('list')
# {'color': ['black', 'mauve', 'teal', 'green', 'teal', 'black'], 'code': ['E45', 'M46', 'Y76', 'G44', 'T76', 'B43']}
### Not working
df["new_code"] = df.code.replace(dummy_dict)
### Output (wrong):
# color code new_code
# black E45 E45
# mauve M46 M46
# teal Y76 Y76
# green G44 G44
# teal T76 T76
# black B43 B43
### Desired output:
# color code new_code
# black E45 E45
# mauve M46 M46
# teal Y76 Y76
# green G44 G44
# teal T76 Y76
# black B43 E45
</code></pre>
<p>Where am I going wrong? It's as though Python isn't even accessing my dictionary to map the values.</p>
|
<p>You can use <code>transform</code> with <code>'first'</code>:</p>
<pre><code>df['new_code'] = df.groupby('color').code.transform('first')
Out[21]:
color code new_code
0 black E45 E45
1 mauve M46 M46
2 teal Y76 Y76
3 green G44 G44
4 teal T76 Y76
5 black B43 E45
</code></pre>
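<p>For comparison, the dictionary route from the question can also be made to work, by mapping <em>color</em> (rather than replacing on <code>code</code>) through a color-to-code dict; a sketch:</p>
<pre><code>first_codes = df.drop_duplicates('color').set_index('color')['code'].to_dict()
# {'black': 'E45', 'mauve': 'M46', 'teal': 'Y76', 'green': 'G44'}
df['new_code'] = df['color'].map(first_codes)
</code></pre>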
|
python|pandas|dictionary|conditional-statements|mapping
| 2
|
6,106
| 63,290,575
|
text response from get request into a python pandas data frame excluding begin and end lines
|
<p>I am new to python, I am working on code that performs a get request from an api and returns the response in a text format and when I use</p>
<pre><code>print(response.text)
</code></pre>
<p>I get the response in the below format -</p>
<pre><code>ResponseBegin
Name|Age|Gender|Country
"ABC"|23|M|USA
"ABCD"|21|F|CAN
ResponseEnd
</code></pre>
<p>Can anyone please advise how to convert this into a pandas dataframe, removing the ResponseBegin and ResponseEnd lines at the beginning and end, and making the second row the column header, using | as the delimiter?</p>
<p>Thank you very much for your advice.</p>
|
<p>It's more helpful if you do not show what <code>print(response.text)</code> contains, but just what <code>response.text</code> contains, since the print function is doing some formatting for human readability.</p>
<p>But I will assume that <code>response.text</code> is just a single string that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>'ResponseBegin\nName|Age|Gender|Country\n"ABC"|23|M|USA\n"ABCD"|21|F|CAN\nResponseEnd'
</code></pre>
<p>Notice the <code>\n</code>, which is the "newline" character.</p>
<p>There are several ways to solve this, but the easiest (fewest lines of code) I think is to export it to a CSV file and then read it in:</p>
<pre class="lang-py prettyprint-override"><code>with open('mydf.csv', 'w') as fh:
fh.write(response.text)
import pandas as pd
df = pd.read_csv('mydf.csv', sep='|', skiprows=1, skipfooter=1)
</code></pre>
<p>You can read more about <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html" rel="nofollow noreferrer"><code>read_csv</code></a> for all of its handy tools, but here I am using:</p>
<ul>
<li><code>sep</code>: the thing to use as a separator, <code>|</code> in your case</li>
<li><code>skiprows</code>/<code>skipfooter</code>: the number of lines at the beginning or end to skip</li>
</ul>
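<p>If you'd rather avoid the temporary file, the same parse works in memory; a sketch:</p>
<pre class="lang-py prettyprint-override"><code>import io
import pandas as pd

df = pd.read_csv(io.StringIO(response.text), sep='|',
                 skiprows=1, skipfooter=1, engine='python')
</code></pre>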
|
python|pandas|dataframe
| 2
|
6,107
| 67,824,746
|
How to print rows of data where the difference in columns is >1
|
<p>The data table I'm using is:</p>
<pre><code>State  Sno  Center           Mar-21  Apr-21
AP     1    Guntur           121     121.1
       2    Nellore          118.8   118.3
       3    Visakhapatnam    131.6   131.5
ASM    4    Biswanath-       123.7   124.5
       5    Doom-Dooma       127.8   128.2
       6    Guwahati         125.9   128.2
       7    Labac-Silchar    114.2   115.4
       8    Numaligarh-      114.2   115.1
       9    Sibsagar         117.7   117.3
       10   Munger-Jamalpur  117.2   118.3
</code></pre>
<p>I want to find the difference between the columns Mar-21 and Apr-21 and print those rows only where the difference is >1.
I tried the following</p>
<pre><code>from numpy import median
import pandas as pd
from pandas.core.tools.numeric import to_numeric
</code></pre>
<pre><code>df=pd.read_csv('CPIIW_421a.csv')
mydiff=(df['Apr-21']-df['Mar-21'])
print(mydiff)
df['diff']=(df['Apr-21']-df['Mar-21'])
print(df['diff'])
</code></pre>
<p>This code displays only one column of differences, as below, instead of the full rows:</p>
<pre><code>0    0.1
1   -0.5
2   -0.1
3    0.8
4    0.4
<hr />
<p>I need to display all rows where the difference is >1.
How should I proceed?
I also want to copy the required data into a new CSV file. Please advise.
I am a beginner.
Thanks</p>
|
<p>You can try as follows</p>
<pre><code>df_new = df[(df["Apr-21"]-df["Mar-21"]) > 1].copy()
</code></pre>
<p>For example</p>
<pre><code>df = pd.DataFrame(data={'State':[1,2,3,4,5,6, 7, 8, 9, 10],
'Sno Center': ["Guntur", "Nellore", "Visakhapatnam", "Biswanath", "Doom-Dooma", "Guwahati", "Labac-Silchar", "Numaligarh", "Sibsagar", "Munger-Jamalpu"],
'Mar-21': [121, 118.8, 131.6, 123.7, 127.8, 125.9, 114.2, 114.2, 117.7, 117.7],
'Apr-21': [121.1, 118.3, 131.5, 124.5, 128.2, 128.2, 115.4, 115.1, 117.3, 118.3]})
df
State Sno Center Mar-21 Apr-21
0 1 Guntur 121.0 121.1
1 2 Nellore 118.8 118.3
2 3 Visakhapatnam 131.6 131.5
3 4 Biswanath 123.7 124.5
4 5 Doom-Dooma 127.8 128.2
5 6 Guwahati 125.9 128.2
6 7 Labac-Silchar 114.2 115.4
7 8 Numaligarh 114.2 115.1
8 9 Sibsagar 117.7 117.3
9 10 Munger-Jamalpu 117.7 118.3
df_new = df[(df["Apr-21"]-df["Mar-21"]) > 1].copy()
</code></pre>
<p>The result</p>
<pre><code>df_new
State Sno Center Mar-21 Apr-21
5 6 Guwahati 125.9 128.2
6 7 Labac-Silchar 114.2 115.4
</code></pre>
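<p>To also copy the filtered rows into a new CSV file, as asked (the file name here is just a placeholder):</p>
<pre><code>df_new.to_csv('filtered.csv', index=False)
</code></pre>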
|
pandas|dataframe
| 0
|
6,108
| 67,688,117
|
How to re-add deleted columns from a dataframe in pandas python?
|
<p>How to re-add columns from original dataframe, which were once in the data frame but got removed using a list?</p>
<pre><code>df_original = ['a','b','c','d','f']
df_new = df_original[['b','c','d']]
user_re_add = ['a']

if user_re_add not in df_new.columns:
    add = df_new.append(user_re_add)
    print("Re-add the column from original df")
</code></pre>
<p>Expecting the df_new = ['a','b','c','d','f']</p>
|
<p>Using your example:</p>
<pre><code>df_original = pd.DataFrame(columns=['a','b','c','d','f'])
df_new = df_original[['b','c','d']]
user_re_add = df_original[['a']]

if [column for column in user_re_add.columns] not in [column for column in df_new.columns]:
    add = df_new.append(user_re_add)
    add = add[[column for column in df_original.columns if column in add.columns]]
    print("Re-add the column from original df")
</code></pre>
<p>This restores the deleted column that was saved in a variable and reorders the columns to match the original frame.</p>
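<p>A simpler sketch of the same idea, assuming <code>user_re_add</code> is a plain list of column names as in the question:</p>
<pre><code>cols_to_restore = [c for c in user_re_add if c not in df_new.columns]
df_new = df_new.join(df_original[cols_to_restore])
# restore the original column order
df_new = df_new[[c for c in df_original.columns if c in df_new.columns]]
</code></pre>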
|
python|pandas|dataframe|data-science|data-manipulation
| 1
|
6,109
| 68,012,066
|
To plot graph non linear function
|
<p>I want to plot the graph of this function:</p>
<pre><code>y = 2[1-e^(-x+1)]^2-2
</code></pre>
<p>When I plotted a linear function, I used this code:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x = np.array(...)
y = np.array(...)
z = np.polyfit(x, y, 2)
p = np.poly1d(z)
xp = np.linspace(...)
_ = plt.plot(x, y, '.', xp, p(xp), '-')
plt.ylim(0, 200)
plt.show()
</code></pre>
<p>When the function is non-linear, it does not work,
because it is hard to find each x,y value.</p>
<p>How can I plot a non-linear function?</p>
|
<p>I hate to be the one to break this news to you, but polynomials of order greater than one are technically nonlinear too.</p>
<p>When you plot in matplotlib, you're really supplying discrete x and y values at a resolution sufficient to be visually pleasing. In this case, you've chosen <code>xp</code> to determine the points you plot for the parabola. You then call <code>p(xp)</code> to generate an array of y-values at those locations.</p>
<p>There's nothing stopping you from generating y-values for your formula of interest using simple numpy functions:</p>
<pre><code>y = 2 * (1 - np.exp(1 - xp))**2 - 2
</code></pre>
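<p>Putting it together, a complete minimal sketch (the x-range is an arbitrary choice):</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np

xp = np.linspace(-1, 5, 200)            # pick any x-range of interest
y = 2 * (1 - np.exp(1 - xp))**2 - 2     # evaluate the formula at those points
plt.plot(xp, y, '-')
plt.show()
</code></pre>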
|
python|numpy|matplotlib|plot
| 1
|
6,110
| 67,683,777
|
tflite: Make multiple invocations at once
|
<p>Assume a webpage where multiple users send an image at once for inference. One option is to load the tflite model for each inference call by loading it inside the function</p>
<pre><code>def detect_from_image(image_path):
    # load model
    interpreter = tf.lite.Interpreter(model_path="detect.tflite")
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    img = cv2.imread(image_path)
    # set input tensor
    interpreter.set_tensor(input_details[0]['index'], img)
    # run
    interpreter.invoke()
    # get output tensor
    boxes = interpreter.get_tensor(output_details[0]['index'])
</code></pre>
<p>The drawback above is the model is loaded separately for each call.</p>
<p>Loading the model outside the function</p>
<pre><code># load model
interpreter = tf.lite.Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def predict(image_path):
    img = cv2.imread(image_path)
    # set input tensor
    interpreter.set_tensor(input_details[0]['index'], img)
    # run
    interpreter.invoke()
    # get output tensor
    boxes = interpreter.get_tensor(output_details[0]['index'])
</code></pre>
<p>When the model is loaded outside the function like above, it does not seem to work when multiple calls are made at the same time as interpreter.set_tensor and invoke() are not producing any output but replacing the tensors internally.</p>
<p>Is there any functionality like below to make it work for parallel calls?</p>
<pre><code>def predict(image_path):
    img = cv2.imread(image_path)
    # set input tensor
    new_tensor = interpreter.set_tensor(input_details[0]['index'], img)
    # run
    output = interpreter.invoke(new_tensor)
    # get output tensor
    boxes = output.get_tensor(output_details[0]['index'])
</code></pre>
|
<p>Please consider having a pool that stores TFLite interpreter instances, and picking up one instance per request; the TFLite interpreter API does not guarantee multi-thread support.</p>
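<p>A minimal sketch of such a pool (the pool size and model path are assumptions; <code>queue.Queue</code> makes the checkout thread-safe):</p>
<pre><code>import queue
import tensorflow as tf

POOL_SIZE = 4
pool = queue.Queue()
for _ in range(POOL_SIZE):
    interp = tf.lite.Interpreter(model_path="detect.tflite")
    interp.allocate_tensors()
    pool.put(interp)

def predict(img):
    interpreter = pool.get()   # blocks until an interpreter is free
    try:
        input_details = interpreter.get_input_details()
        output_details = interpreter.get_output_details()
        interpreter.set_tensor(input_details[0]['index'], img)
        interpreter.invoke()
        return interpreter.get_tensor(output_details[0]['index'])
    finally:
        pool.put(interpreter)  # hand it back for the next request
</code></pre>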
|
python|tensorflow|tensorflow-lite
| 0
|
6,111
| 61,441,165
|
Pandas: Slicing up a df based on repeated flags
|
<p>I have a <code>df</code> that looks something like this: </p>
<pre><code> HEADER1 HEADER2 HEADER3
0 Group1 Value2 Value3
1 Group2 Value4 Value5
4 Group1 Value6 Value7
5 Group2 Value8 Value9
6 TAIL1 TAIL2 TAIL3
</code></pre>
<p>Header and Tail will always be the same, and I need to persist them across all split-up dfs.</p>
<p>So if we assume that a <code>record</code> or a unit of useful information in this <code>df</code> is each set of <code>Group1 and Group2</code> - this <code>df</code> would then have 2 sets of data. </p>
<p>What's the best way to separate this up, so we have 2 dfs that look like this?</p>
<pre><code> HEADER1 HEADER2 HEADER3
0 Group1 Value2 Value3
1 Group2 Value4 Value5
6 TAIL1 TAIL2 TAIL3
</code></pre>
<pre><code> HEADER1 HEADER2 HEADER3
4 Group1 Value6 Value7
5 Group2 Value8 Value9
6 TAIL1 TAIL2 TAIL3
</code></pre>
<p>There could be any number of splits, so ideally I would like to focus on efficiency... any information is appreciated.</p>
<p><strong>EDIT</strong>:</p>
<p>If I wanted to extend the answer and convert the dfs into something like this: </p>
<pre><code>{
    "headers":
        {"SomeHeaderName": "Header1", "SomeOtherHeaderName": "Header2"},
    "groups": [
        {"Group1": {"Value2": "GroupValue2", "Value3": "GroupValue3"}},
        {"Group2": {"Value4": "GroupValue4", "Value5": "GroupValue5"}}
    ],
    "trailer":
        {"SomeTailName": "Tail1", "SomeOtherTailName": "Tail2"}
}
</code></pre>
<p>The keys would be pulled from an already existing structure and then just zipped with the df entries as the values.</p>
|
<p>Use list comprehension with <code>groupby</code> and add last row by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.append.html" rel="nofollow noreferrer"><code>DataFrame.append</code></a>:</p>
<pre><code>#get last row
last = df.iloc[[-1]]
print (last)
HEADER1 HEADER2 HEADER3
6 TAIL1 TAIL2 TAIL3
#get all rows without last
df1 = df.iloc[:-1]
#specify first value of group in first column
s = df1.iloc[:, 0].eq('Group1').cumsum()
a = [x.append(last, ignore_index=True) for i, x in df1.groupby(s)]
print (a)
[ HEADER1 HEADER2 HEADER3
0 Group1 Value2 Value3
1 Group2 Value4 Value5
2 TAIL1 TAIL2 TAIL3, HEADER1 HEADER2 HEADER3
0 Group1 Value6 Value7
1 Group2 Value8 Value9
2 TAIL1 TAIL2 TAIL3]
</code></pre>
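<p>For the structure in the edit, each split frame can then be unpacked row by row, zipping the values with keys pulled from an existing structure; a sketch, where <code>group_keys</code> and the header/trailer key names are placeholders:</p>
<pre><code>group_keys = {'Group1': ['Value2', 'Value3'], 'Group2': ['Value4', 'Value5']}

def to_record(d):
    body, tail = d.iloc[:-1], d.iloc[-1]
    return {
        'headers': {'SomeHeaderName': 'Header1', 'SomeOtherHeaderName': 'Header2'},
        'groups': [{row.HEADER1: dict(zip(group_keys[row.HEADER1],
                                          [row.HEADER2, row.HEADER3]))}
                   for row in body.itertuples()],
        'trailer': {'SomeTailName': tail['HEADER1'], 'SomeOtherTailName': tail['HEADER2']},
    }

records = [to_record(d) for d in a]
</code></pre>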
|
python|python-3.x|pandas|dataframe
| 2
|
6,112
| 61,280,184
|
Model with multiple outputs and custom loss function
|
<p>I'm trying to train a model that has multiple outputs and a custom loss function using keras, but I'm getting some error <code>tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: iterating over ``tf.Tensor`` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.</code></p>
<p>It's hard to debug it because I'm doing <code>model.compile</code> and <code>model.fit</code>. I think it has something to do with how models are supposed to be defined when having multiple outputs, but I can't find good documentation on this. The guide specifies how to have models with multiple outputs using the functional API, and has an example for this, but it doesn't clarify how custom loss functions should work when subclassing the <code>Model</code> API. My code is as follows:</p>
<pre><code>class DeepEnsembles(Model):
    def __init__(self, **kwargs):
        super(DeepEnsembles, self).__init__()
        self.num_models = kwargs.get('num_models')
        model = kwargs.get('model')
        self.mean = [model(**dict(**kwargs)) for _ in range(self.num_models)]
        self.variance = [model(**dict(**kwargs)) for _ in range(self.num_models)]

    def call(self, inputs, training=None, mask=None):
        mean_predictions = []
        variance_predictions = []
        for idx in range(self.num_models):
            mean_predictions.append(self.mean[idx](inputs, training=training))
            variance_predictions.append(self.variance[idx](inputs, training=training))
        mean_stack = tf.stack(mean_predictions)
        variance_stack = tf.stack(variance_predictions)
        return mean_stack, variance_stack
</code></pre>
<p>And where MLP is the following:</p>
<pre><code>class MLP(Model):
    def __init__(self, **kwargs):
        super(MLP, self).__init__()
        # Initialization parameters
        self.num_inputs = kwargs.get('num_inputs', 779)
        self.num_outputs = kwargs.get('num_outputs', 1)
        self.hidden_size = kwargs.get('hidden_size', 256)
        self.activation = kwargs.get('activation', 'relu')
        # Optional parameters
        self.p = kwargs.get('p', 0.05)
        self.model = tf.keras.Sequential([
            layers.Dense(self.hidden_size, activation=self.activation, input_shape=(self.num_inputs,)),
            layers.Dropout(self.p),
            layers.Dense(self.hidden_size, activation=self.activation),
            layers.Dropout(self.p),
            layers.Dense(self.num_outputs)
        ])

    def call(self, inputs, training=None, mask=None):
        output = self.model(inputs, training=training)
        return output
</code></pre>
<p>I'm trying to minimize a custom loss function </p>
<pre><code>class GaussianNLL(Loss):
    def __init__(self):
        super(GaussianNLL, self).__init__()

    def call(self, y_true, y_pred):
        mean, variance = y_pred
        variance = variance + 0.0001
        nll = (tf.math.log(variance) / 2 + ((y_true - mean) ** 2) / (2 * variance))
        nll = tf.math.reduce_mean(nll)
        return nll
</code></pre>
<p>Finally, this is how I try to train it:</p>
<pre><code>ensembles_params = {'num_models': 5, 'model': MLP, 'p': 0}
model = DeepEnsembles(**ensembles_params)
loss_fn = GaussianNLL()
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
epochs = 10000
model.compile(optimizer='adam',
              loss=loss_fn,
              metrics=['mse', 'mae'])
history = model.fit(x_train, y_train,
                    batch_size=2048,
                    epochs=10000,
                    verbose=0,
                    validation_data=(x_val, y_val))
</code></pre>
<p>Which results in the above error. Any pointers? In particular, the whole stack trace is </p>
<pre><code>Traceback (most recent call last):
  File "/home/emilio/anaconda3/lib/python3.7/contextlib.py", line 130, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/ops/variable_scope.py", line 2803, in variable_creator_scope
    yield
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 235, in fit
    use_multiprocessing=use_multiprocessing)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 593, in _process_training_inputs
    use_multiprocessing=use_multiprocessing)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 646, in _process_inputs
    x, y, sample_weight=sample_weights)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 2360, in _standardize_user_data
    self._compile_from_inputs(all_inputs, y_input, x, y)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 2618, in _compile_from_inputs
    experimental_run_tf_function=self._experimental_run_tf_function)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 446, in compile
    self._compile_weights_loss_and_weighted_metrics()
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 1592, in _compile_weights_loss_and_weighted_metrics
    self.total_loss = self._prepare_total_loss(masks)
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 1652, in _prepare_total_loss
    per_sample_losses = loss_fn.call(y_true, y_pred)
  File "/home/emilio/fault_detection/tensorflow_code/tf_utils/loss.py", line 13, in call
    mean, variance = y_pred
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 539, in __iter__
    self._disallow_iteration()
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 535, in _disallow_iteration
    self._disallow_in_graph_mode("iterating over `tf.Tensor`")
  File "/home/emilio/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 515, in _disallow_in_graph_mode
    " this function with @tf.function.".format(task))
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
</code></pre>
<p>So it's clearly related to the loss function. But the model's forward pass outputs a tuple, which I unpack in the loss function, so I don't know why is this an issue.</p>
|
<p>With quick tests, I think I solved the problem by replacing:</p>
<pre><code> mean, variance = y_pred
variance = variance + 0.0001
</code></pre>
<p>With</p>
<pre><code> mean = y_pred[0]
variance = y_pred[1] + 0.0001
</code></pre>
<p>Unpacking <code>y_pred</code> (which is a Tensor) calls the method <code>Tensor.__iter__</code> which apparently yields an error, whereas I suppose that the method <code>Tensor.__getitem__</code> does not...</p>
<p>I haven't got to the point where it starts learning; I think my current dummy x_train and y_train are not exactly the correct shape. If you notice that this problem happens again later, I will try to investigate.</p>
<p>EDIT:</p>
<p>I managed to make your code run by using</p>
<pre><code>x_train = np.random.random((10000, 779))
y_train = np.random.random ((10000, 1))
</code></pre>
<p>by changing the last line of the method <code>DeepEnsembles.call</code> with </p>
<pre><code> return tf.stack([mean_stack, variance_stack])
</code></pre>
<p>and by commenting out the metrics (necessary because the sizes of y_true and y_pred are expected to be different, so you might want to define your own versions of mse and mae to use as a metric):</p>
<pre><code>model.compile(optimizer='adam',
              loss=loss_fn,
              # metrics=['mse', 'mae']
              )
</code></pre>
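<p>A sketch of a custom metric compatible with the stacked output (an assumption on my part, not part of the original answer; it compares <code>y_true</code> against the mean head only, averaging over the ensemble):</p>
<pre><code>def ensemble_mse(y_true, y_pred):
    # y_pred[0] is mean_stack, shape (num_models, batch_size, 1); broadcasting
    # against y_true of shape (batch_size, 1) averages over all models
    return tf.reduce_mean(tf.square(y_true - y_pred[0]))
</code></pre>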
<p>I believe it is quite close to what you expect.</p>
<p>The reason for not returning a tuple is that tensorflow will interpret each element of the tuple as an output of the network and will apply the loss independently on each of them.</p>
<p>You can test it by keeping the old version of <code>DeepEnsembles.call</code> and instead use</p>
<pre><code>y_train_1 = np.random.random ((10000, 1))
y_train_2 = np.random.random ((10000, 1))
y_train = [y_train_1, y_train_2]
</code></pre>
<p>It will execute, and there will be 10 MLPs, but MLP_1/2 will learn the mean and variance of y_train_1, MLP_6/7 the mean and var of y_train_2, and all other MLPs will not learn anything.</p>
|
python|tensorflow|keras
| 4
|
6,113
| 68,860,208
|
Put specific rows at the end of data frame depending on column value
|
<p>If I have a data frame that looks something like:</p>
<pre><code>df =
col1 col2 col3
--------------------
10 56.4 78.2
20 45.6 23.3
30 12.1 26.0
40 55.4 22.9
50 10.1 98.3
</code></pre>
<p>Then I have a regular list that contains:</p>
<pre><code>list1 = [10, 30]
</code></pre>
<p>Is there any way to then sort the data frame, so that the values in <code>list1</code> corresponding to the values in <code>col1</code> will be "sorted" towards the end, such as:</p>
<pre><code>df_sorted =
col1 col2 col3
--------------------
20 45.6 23.3
40 55.4 22.9
50 10.1 98.3
10 56.4 78.2
30 12.1 26.0
</code></pre>
|
<p>Use <code>key</code> parameter in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>DataFrame.sort_values</code></a>:</p>
<pre><code>list1 = [10, 30]
df = df.sort_values('col1', key=lambda x: x.isin(list1))
print (df)
col1 col2 col3
1 20 45.6 23.3
3 40 55.4 22.9
4 50 10.1 98.3
0 10 56.4 78.2
2 30 12.1 26.0
</code></pre>
<p>If order is important one idea is use <code>merge</code> with helper <code>DataFrame</code> and then <code>concat</code>:</p>
<pre><code>list1 = [10, 30]
df1 = df[~df['col1'].isin(list1)]
df2 = pd.DataFrame({'col1':list1}).merge(df)
df = pd.concat([df1, df2], ignore_index=True)
print (df)
   col1  col2  col3
0    20  45.6  23.3
1    40  55.4  22.9
2    50  10.1  98.3
3    10  56.4  78.2
4    30  12.1  26.0
</code></pre>
<hr />
<pre><code>list1 = [30, 10]
df1 = df[~df['col1'].isin(list1)]
df2 = pd.DataFrame({'col1':list1}).merge(df)
df = pd.concat([df1, df2], ignore_index=True)
print (df)
col1 col2 col3
0 20 45.6 23.3
1 40 55.4 22.9
2 50 10.1 98.3
3 30 12.1 26.0
4 10 56.4 78.2
</code></pre>
|
python|pandas
| 5
|
6,114
| 53,126,795
|
Sum time difference and pivot it - pandas dataframe
|
<p>I have a dataframe with two columns: unix_time and user. It has thousands of rows; this is part of it:</p>
<pre><code>unix_time user
2000000000000 A
2000000000001 A
2000000000002 B
2000000000003 B
2000000000004 B
</code></pre>
<p>I want to calculate how much unix_time each user spent in total by:<br>
1. calculating time difference between rows. eg: <code>unix_time column (row2 - row1)</code><br>
2. sum the time difference if they are from the same user. eg: <code>sum(row2 - row1) and (row3 - row2)</code> </p>
<p>output will be</p>
<pre><code>time_difference_sum user
1 A
2 B
</code></pre>
<p>I read several posts such as <a href="https://stackoverflow.com/questions/47385719/date-time-difference-and-dataframe-filtering">these</a> <a href="https://stackoverflow.com/questions/22923775/calculate-pandas-dataframe-time-difference-between-two-columns-in-hours-and-minu">two</a> but still struggle to find a solution because I have more constraints. Any suggestions about how I can do this? Thank you in advance!</p>
|
<p>You can use <code>groupby()</code> and <code>diff()</code> and then <code>agg()</code> your results:</p>
<pre><code>df['time_difference_sum'] = df.sort_values(['user','unix_time']).groupby('user')['unix_time'].diff()
df.groupby('user').agg({'time_difference_sum': 'sum'})
</code></pre>
<p>Yields:</p>
<pre><code> time_difference_sum
user
A 1.0
B 2.0
</code></pre>
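<p>Since the summed consecutive differences within each sorted group telescope to <code>max - min</code>, an equivalent one-liner is:</p>
<pre><code>df.groupby('user')['unix_time'].agg(lambda s: s.max() - s.min())
</code></pre>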
|
python|pandas|datetime|dataframe
| 1
|
6,115
| 53,101,293
|
Scalar-valued isnull()/isnan()/isinf()
|
<p>In Pandas and Numpy, there are vectorized functions like <code>np.isnan</code>, <code>np.isinf</code>, and <code>pd.isnull</code> to check if the elements of an array, series, or dataframe are various kinds of missing/null/invalid.</p>
<p>They do work on scalars. <code>pd.isnull(None)</code> simply returns <code>True</code> rather than <code>pd.Series([True])</code>, which is convenient.</p>
<p>But let's say I want to know if <em>any</em> object is one of these null values; You can't do that with any of these functions! That's because they will happily vectorize over a variety of data structures. Carelessly using them will inevitably lead to the dreaded "The truth value of a Series is ambiguous" error.</p>
<p>What I want is a function like this:</p>
<pre><code>assert not is_scalar_null(3)
assert not is_scalar_null([1,2])
assert not is_scalar_null([None, 1])
assert not is_scalar_null(pd.Series([None, 1]))
assert not is_scalar_null(pd.Series([None, None]))
assert is_scalar_null(None)
assert is_scalar_null(np.nan)
</code></pre>
<p>Internally, the Pandas function <code>pandas._lib.missing.checknull</code> will do the right thing:</p>
<pre><code>import pandas._libs.missing as libmissing
libmissing.checknull(pd.Series([1,2])) # correctly returns False
</code></pre>
<p>But it's generally bad practice to use it; according to Python naming convention, <code>_lib</code> is private. I'm also not sure about the Numpy equivalents.</p>
<p><strong>Is there an "acceptable" but official way to use the same null-checking logic as NumPy and Pandas, but strictly for scalars?</strong></p>
|
<p>All you have to do is wrap <code>pd.isnull</code> so that iterables are handled explicitly: an iterable is never a <em>scalar</em> null, so the wrapper returns a plain <code>False</code> for them, and you always get a scalar boolean as output.</p>
<pre><code>from collections.abc import Iterable

def is_scalar_null(value):
    if isinstance(value, Iterable):
        # an iterable is never a *scalar* null, even if it contains nulls
        return False
    return bool(pd.isnull(value))

assert not is_scalar_null(3)
assert not is_scalar_null([1, 2])
assert not is_scalar_null(pd.Series([1]))
assert is_scalar_null(None)
assert is_scalar_null(np.nan)
assert not is_scalar_null([np.nan, 1])
assert not is_scalar_null(pd.Series([np.nan, 1]))
</code></pre>
<p>You can then patch the actual <code>pd.isnull</code>, but I cannot say that I suggest doing so.</p>
<pre><code>from collections.abc import Iterable

orig_pd_is_null = pd.isnull

def is_scalar_null(value):
    if isinstance(value, Iterable):
        return False
    return bool(orig_pd_is_null(value))

pd.isnull = is_scalar_null

assert not pd.isnull(3)
assert not pd.isnull([1, 2])
assert not pd.isnull(pd.Series([1]))
assert pd.isnull(None)
assert pd.isnull(np.nan)
assert not pd.isnull([np.nan, 1])
assert not pd.isnull(pd.Series([np.nan, 1]))
</code></pre>
<p>Note that strings are iterables too, so they take the iterable branch here; that still agrees with <code>pd.isnull</code>, which is <code>False</code> for strings anyway.</p>
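<p>Another option that stays within the public API is to combine <code>pd.api.types.is_scalar</code> with <code>pd.isnull</code>; a sketch:</p>
<pre><code>def is_scalar_null(value):
    return pd.api.types.is_scalar(value) and bool(pd.isnull(value))
</code></pre>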
|
python|pandas|numpy|missing-data
| 2
|
6,116
| 53,275,955
|
how to join two dataframe with a key and duplicate the matching value to fill in
|
<p>How can I join two data frames by column "ID" and fill the blanks with the matching value? Since it is complicated to explain, here is my code to show what I want for the result.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'id': [1, 1, 1, 2, 2, 3, 4, 4, 4], 'col1': [3, 0, -1, 3.4, 4, 5, 6, 7, 8]})
df2 = pd.DataFrame({'id': [1, 2, 3, 4, 5, 6, 7, 8, 9], 'col2': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']})
</code></pre>
<p>Now, I want to join these two dataframes with "id" and duplicate the values in col2 to fill in the blank col2 column after join.</p>
<p>please help me. Thanks</p>
|
<p>Are you looking for <code>merge</code>?</p>
<pre><code>df.merge(df2, on='id')
id col1 col2
0 1 3.0 A
1 1 0.0 A
2 1 -1.0 A
3 2 3.4 B
4 2 4.0 B
5 3 5.0 C
6 4 6.0 D
7 4 7.0 D
8 4 8.0 D
</code></pre>
|
python|pandas|dataframe|join
| 1
|
6,117
| 53,231,699
|
Converting png to Tensor tensorflow.js
|
<p>I'm currently attempting to figure out how to convert an input png into a tensor with tensorflow.js so I can feed it into my model for training. Currently I'm capturing the image, saving it locally, reading it with fs.readFileSync, and then creating a buffer. Where I'm a bit lost is normalizing the buffer values from 0-255 to 0-1, then creating a tensor from this buffer to feed into the model.fit function as the X arg. I also don't really know how to set up my labels file and properly convert that into a buffer for the Y arg. (<a href="https://js.tensorflow.org/api/0.11.2/#tf.Model.fit" rel="noreferrer">https://js.tensorflow.org/api/0.11.2/#tf.Model.fit</a>) Any insight into the proper usage / configuration of images into tensors for using tensorflow.js would be greatly appreciated. </p>
<p>Repo is here;
<a href="https://github.com/Durban-Designer/Fighter-Ai" rel="noreferrer">https://github.com/Durban-Designer/Fighter-Ai</a></p>
<p>code for loading local image in data.js;</p>
<pre><code>const tf = require('@tensorflow/tfjs');
const assert = require('assert');
const IMAGE_HEADER_BYTES = 32;
const IMAGE_HEIGHT = 600;
const IMAGE_WIDTH = 800;
const IMAGE_FLAT_SIZE = IMAGE_HEIGHT * IMAGE_WIDTH;
function loadHeaderValues(buffer, headerLength) {
  const headerValues = [];
  for (let i = 0; i < headerLength / 4; i++) {
    headerValues[i] = buffer.readUInt32BE(i * 4);
  }
  return headerValues;
}
...
...
class Dataset {
  async loadLocalImage(filename) {
    const buffer = fs.readFileSync(filename);
    const headerBytes = IMAGE_HEADER_BYTES;
    const recordBytes = IMAGE_HEIGHT * IMAGE_WIDTH;

    const headerValues = loadHeaderValues(buffer, headerBytes);
    console.log(headerValues, buffer);
    assert.equal(headerValues[5], IMAGE_HEIGHT);
    assert.equal(headerValues[4], IMAGE_WIDTH);

    const images = [];
    let index = headerBytes;
    while (index < buffer.byteLength) {
      const array = new Float32Array(recordBytes);
      for (let i = 0; i < recordBytes; i++) {
        // Normalize the pixel values into the 0-1 interval, from
        // the original 0-255 interval.
        array[i] = buffer.readUInt8(index++) / 255;
      }
      images.push(array);
    }
    assert.equal(images.length, headerValues[1]);
    return images;
  }
}
module.exports = new Dataset();
</code></pre>
<p>image capture loop in app.js;</p>
<pre><code>const ioHook = require("iohook");
const tf = require('@tensorflow/tfjs');
var screenCap = require('desktop-screenshot');
require('@tensorflow/tfjs-node');
const data = require('./src/data');
const virtKeys = require('./src/virtKeys');
const model = require('./src/model');
var dir = __dirname;
var paused = true;
var loopInterval,
    image,
    imageData,
    result;
ioHook.on('keyup', event => {
  if (event.keycode === 88) {
    if (paused) {
      paused = false;
      gameLoop();
    } else {
      paused = true;
    }
  }
});
ioHook.start();
function gameLoop () {
  if (!paused) {
    // the callback must be async for the await below to be valid
    screenCap(dir + '\\image.png', {width: 800, height: 600, quality: 60}, async function (error, complete) {
      if (error) {
        console.log(error);
      } else {
        imageData = await data.getImage(dir + '\\image.png');
        console.log(imageData);
        result = model.predict(imageData, {batchSize: 4});
        console.log(result);
        gameLoop();
      }
    });
  }
}
</code></pre>
<p>I know I use model.predict here; I wanted to get the actual image-to-tensor part working first, then figure out labels and model.fit() in train-tensor.js in the repo. I don't have any actual working code for training, so I didn't include it in this question; sorry if it caused any confusion.</p>
<p>Thank you again!</p>
<p><strong>Edit final working code</strong></p>
<pre><code>const { Image, createCanvas } = require('canvas');
const canvas = createCanvas(800, 600);
const ctx = canvas.getContext('2d');

async function loadLocalImage (filename) {
  try {
    var img = new Image();
    img.onload = () => ctx.drawImage(img, 0, 0);
    img.onerror = err => { throw err };
    img.src = filename;
    image = tf.fromPixels(canvas);
    return image;
  } catch (err) {
    console.log(err);
  }
}
...
...
async getImage(filename) {
  try {
    this.image = await loadLocalImage(filename);
  } catch (error) {
    console.log('error loading image', error);
  }
  return this.image;
}
</code></pre>
|
<p>tensorflowjs already has a method for this: <a href="https://js.tensorflow.org/api/0.13.3/#fromPixels" rel="nofollow noreferrer"><code>tf.fromPixels(), tf.browser.fromPixels()</code></a>.</p>
<p>You just need to load the image into one of the accepted types (<code>ImageData|HTMLImageElement|HTMLCanvasElement|HTMLVideoElement</code>).</p>
<p>Your image loading Promise returns nothing because your async function doesn't return anything, just your callback, to fix this you need to create and resolve a promise yourself:</p>
<pre><code>const imageGet = require('get-image-data');

async function loadLocalImage(filename) {
  return new Promise((res, rej) => {
    imageGet(filename, (err, info) => {
      if (err) {
        rej(err);
        return;
      }
      const image = tf.fromPixels(info.data);
      console.log(image, '127');
      res(image);
    });
  });
}
</code></pre>
|
javascript|node.js|png|buffer|tensorflow.js
| 13
|
6,118
| 65,754,740
|
Modifying entire row of an array based on a condition using Numpy
|
<p>I have an array:</p>
<p><code>xNew = np.array([[0.50,0.25],[-0.4,-0.2],[0.60,0.80],[1.20,1.90],[-0.10,0.60],[0.10,1.2]])</code></p>
<p>and another array:</p>
<p><code>x = np.array([[0.55,0.34],[0.45,0.26],[0.14,0.29],[0.85,0.89],[0.27,0.78],[0.45,0.05]])</code></p>
<p>If an element in a row is smaller than 0 or larger than 1 in <code>xNew</code>, that row should be entirely replaced by the corresponding row in <code>x</code>. The desired output is:</p>
<p><code>xNew = np.array([[0.50,0.25],[0.45,0.26],[0.60,0.80],[0.85,0.89],[0.27,0.78],[0.45,0.05]])</code></p>
<p>I am looking for an efficient way to accomplish this using numpy functions.</p>
<p>Thanks!</p>
|
<p>You can use advanced indexing:</p>
<pre><code>idx = ((xNew<0)|(xNew>1)).any(-1)
xNew[idx]=x[idx]
</code></pre>
<p>output:</p>
<pre><code>[[0.5 0.25]
[0.45 0.26]
[0.6 0.8 ]
[0.85 0.89]
[0.27 0.78]
[0.45 0.05]]
</code></pre>
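<p>The same thing can be written without modifying <code>xNew</code> in place, using <code>np.where</code>:</p>
<pre><code>mask = ((xNew < 0) | (xNew > 1)).any(axis=-1, keepdims=True)  # shape (6, 1)
result = np.where(mask, x, xNew)                              # broadcasts per row
</code></pre>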
|
python|arrays|numpy
| 1
|
6,119
| 65,754,329
|
Create a empty dataframe and append a new row
|
<p>I'm trying to create an empty dataframe with 3 columns: <code>movieId</code>, <code>title</code> and <code>predicted_rating</code>.</p>
<p>This is what I have so far:</p>
<pre><code>column_names = ["movieId", "title", "predicted_rating"]
return_df = pd.DataFrame(columns=column_names)
return_df.append({"movieId":1,"title":2,"predicted_rating":3}, ignore_index=True)
</code></pre>
<p>When I run this code and print <code>return_df</code>, the df has the column names but no rows. The code to append a new row is from another question, so I think the problem is the way I'm making the empty dataframe.</p>
|
<p>I think it's because <code>append</code> returns a new DataFrame; it doesn't modify in place.</p>
<p>So try this:</p>
<pre><code>return_df = return_df.append({"movieId":1,"title":2,"predicted_rating":3}, ignore_index=True)
</code></pre>
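<p>Note that <code>DataFrame.append</code> is deprecated in newer pandas versions (and removed in 2.0); an equivalent that keeps working is <code>pd.concat</code>:</p>
<pre><code>new_row = pd.DataFrame([{"movieId": 1, "title": 2, "predicted_rating": 3}])
return_df = pd.concat([return_df, new_row], ignore_index=True)
</code></pre>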
|
python|pandas|dataframe
| 0
|
6,120
| 65,704,040
|
How to sort a date column in Pandas with value_counts?
|
<p>I have a column in my dataframe like this:</p>
<pre><code> submit_date
1 2020-12-14
3 2020-12-14
4 2020-12-14
5 2020-12-14
29 2020-12-15
...
746 2021-01-12
771 2021-01-12
744 2021-01-12
757 2021-01-12
772 2021-01-12
</code></pre>
<p>I want to obtain how many submissions are happening per day. Here's the code I've tried:</p>
<pre><code>print(df['submit_date'].value_counts())
</code></pre>
<p>Which results in:</p>
<pre><code>2021-01-07 95
2021-01-08 58
2021-01-05 47
2021-01-11 45
2021-01-09 41
2020-12-28 39
2021-01-06 39
2020-12-29 34
2021-01-02 32
2021-01-04 31
2021-01-01 29
2021-01-12 28
2020-12-31 28
2020-12-16 27
2020-12-15 25
2020-12-30 22
2021-01-03 21
2020-12-26 19
2020-12-22 19
2020-12-18 17
2020-12-21 16
2020-12-17 15
2020-12-27 12
2020-12-23 12
2021-01-10 6
2020-12-19 5
2020-12-14 4
2020-12-24 4
2020-12-20 2
Name: submit_date, dtype: int64
</code></pre>
<p>I want these counts to be sorted by date. So my expected output would be:</p>
<pre><code>2020-12-14 4
2020-12-15 25
2020-12-16 27
2020-12-17 15
...
</code></pre>
<p>I know it's sorting by the value counts but how do I sort by the date instead? I tried <code>df['submit_date'].value_counts().sort_values(ascending=True)</code> but no luck.</p>
|
<p>You can sort the index with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.sort_index.html" rel="nofollow noreferrer"><code>Series.sort_index</code></a>:</p>
<pre><code>df['submit_date'].value_counts().sort_index()
</code></pre>
<p>Or if original column is sorted add <code>sort=False</code> parameter to <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>Series.value_counts</code></a> for prevent sorting by counts:</p>
<pre><code>df['submit_date'].value_counts(sort=False)
</code></pre>
|
python|pandas|datetime
| 2
|
6,121
| 65,804,813
|
Pandas - How to fill column with range time conditions
|
<p>I have a dataset with different columns: the activity description and when it's started and ended</p>
<pre><code> Activity Start End In time
Activity 1 10:44:26 15:02:24
Activity 2 15:22:42 13:52:54
Activity 3 14:41:57 16:03:48
Activity 4 11:16:08 13:37:16
Activity 5 15:49:39 08:51:18
Activity 6 19:36:37 15:19:26
Activity 7 14:47:33 19:39:29
Activity 8 15:40:52 19:30:26
</code></pre>
<p>How can i fill in Pandas the column "In time" with this condition:</p>
<ul>
<li>if <strong>Start is > 8AM</strong> and <strong>End is < 5:30PM</strong> is in time else is not in time.</li>
</ul>
<p>I tried with datetime module, pd.between_time()... I created my own def but it doesn't work.</p>
<p>How can I fix my problem?</p>
|
<p>Use <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a>:</p>
<pre><code>#if necessary convert to times
#df['Start'] = pd.to_datetime(df['Start']).dt.time
#df['End'] = pd.to_datetime(df['End']).dt.time
from datetime import time
mask = (df.Start > time(8,0,0)) & (df.End < time(17,30,0))
df['In time'] = np.where(mask, 'yes','no')
print (df)
Activity Start End In time
0 Activity 1 10:44:26 15:02:24 yes
1 Activity 2 15:22:42 13:52:54 yes
2 Activity 3 14:41:57 16:03:48 yes
3 Activity 4 11:16:08 13:37:16 yes
4 Activity 5 15:49:39 08:51:18 yes
5 Activity 6 19:36:37 15:19:26 yes
6 Activity 7 14:47:33 19:39:29 no
7 Activity 8 15:40:52 19:30:26 no
</code></pre>
|
python|pandas|dataframe|if-statement
| 1
|
6,122
| 63,359,261
|
Finding newest columns in pandas dataframe
|
<p>I am trying to read/parse some Excel-file through pandas dataframe into SQL Server.</p>
<p>The excel-file that I need to read is not completely static and column-names changes from time to time, but mostly in a fairly predictable manner - I am just not sure how to actually capture it. Also the order of the columns can change.</p>
<p>I need to find the column that holds the newest values/Amounts.</p>
<p>For example my Excel-file might look like this in one period:</p>
<pre><code>| ID | Type | Amount May 20 | Amount Mar20 |
|----|------|---------------|----------------|
| 1 | red | 1000 | 998 |
| 2 | blue | 400 | 400 |
</code></pre>
<p>Then perhaps the next Excel file looks like this:</p>
<pre><code>| ID | Type | Amount May20 | Amount July 20 |
|----|------|---------------|----------------|
| 1 | red | 1000 | 1050 |
| 2 | blue | 400 | 410 |
</code></pre>
<p>As you can see, sometimes the month is spelled out completely with a space between the month and the year; other times it is in short format, with only the first three letters, directly followed by the year. It is arbitrary whether there is a space between month and year, and arbitrary whether the month is spelled out in full.</p>
<p>Also as you can see, the newest column is placed arbitrarily: sometimes the first amount is the newest, sometimes it is not (some files may hold amounts for several periods).</p>
<p>Any suggestions to how I can identify which column that holds the most recent value? i.e. in the first example it would be column 3 and in the second example it would be column 4.</p>
|
<p>Might require a hack-y solution, given the inconsistencies. Import your Excel file and grab the column names, and then use string methods to pull out and track the relevant info. Luckily months are unique, and you can just use the abbreviation.</p>
<pre><code>df = pd.DataFrame({'ID': np.random.randn(5),
                   'type': list('abcde'),
                   'Amount May 20': np.random.randint(1,5,5),
                   'Amount Mar20': np.random.randint(5,10,5)})

most_recent_yr = 19
recent_cols = []
for col_name in df.columns[2:]:
    col_yr = int(col_name[-2:])
    if col_yr >= most_recent_yr:
        recent_cols.append(col_name)
        most_recent_yr = col_yr

months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
max_month = 0
for i in range(len(months)):
    for col in recent_cols:
        if (months[i] in col) & (i > max_month):
            max_month = i
</code></pre>
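<p>An alternative sketch that parses the dates instead of hand-tracking them (it assumes every amount column is named "Amount" followed by a month name, full or abbreviated, optional space, and a 2-digit year):</p>
<pre><code>def parse_amount_col(c):
    s = c.replace('Amount', '').replace(' ', '')
    for fmt in ('%b%y', '%B%y'):   # 'Mar20' / 'May20' vs 'July20'
        try:
            return pd.to_datetime(s, format=fmt)
        except ValueError:
            pass
    raise ValueError(f'unrecognized column name: {c}')

amount_cols = [c for c in df.columns if c.startswith('Amount')]
dates = [parse_amount_col(c) for c in amount_cols]
newest_col = amount_cols[dates.index(max(dates))]
</code></pre>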
|
python|pandas
| 1
|
6,123
| 63,497,505
|
Automating the creation of dataframes from subsets of an existing dataframe
|
<p>I'm working with the kaggle New York City Airbnb Open Data which is available here:
<a href="https://www.kaggle.com/dgomonov/new-york-city-airbnb-open-data" rel="nofollow noreferrer">https://www.kaggle.com/dgomonov/new-york-city-airbnb-open-data</a></p>
<p>The data contains a column of the 'neighbourhood_groups', consisting of the 5 boroughs of NYC, and 'neighbourhood', consisting of the neighbourhoods within each neighbourhood group.</p>
<p>I have created a subset of the Manhattan neighbourhood with the following code:</p>
<pre><code>airbnb_manhattan = airbnb[airbnb['neighbourhood_group'] == 'Manhattan']
</code></pre>
<p>I would like to create further subsets of this dataframe by neighbourhood. However, there are 32 neighbourhoods, so I'd like to automate the process.</p>
<p>This is the code that I tried:</p>
<pre><code>manhattan_neighbourhoods = list(airbnb_manhattan['neighbourhood'].unique())
neighbourhoods = pd.DataFrame()
for n in manhattan_neighbourhoods:
    neighbourhoods[n] = pd.DataFrame(affordable_manhattan[affordable_manhattan['neighbourhood'] == manhattan_neighbourhoods[n]])
</code></pre>
<p>Which produces the following error message:</p>
<pre><code>TypeError: list indices must be integers or slices, not str
</code></pre>
<p>Thanks.</p>
|
<p>You should not copy into new dfs unless strictly necessary. Try to do your analysis with the full df as much as possible. Use <code>.groupby</code> as in</p>
<pre><code>by_neigh = airbnb.groupby('neighbourhood_group')
</code></pre>
<p>Then use <code>.agg</code>, <code>.apply</code>, or <code>.transform</code> as needed. Or as a last resort you can iterate with</p>
<pre><code>for neigh, rows in by_neigh:
    ...  # work with each group's rows here
<p>Or get just one group with</p>
<pre><code>by_neigh.get_group('Manhattan')
</code></pre>
<p>The advantage of all this is that the underlying data is not copied until absolutely necessary, and pandas can just view the same array with different filters and slices as needed.</p>
<p>Read more in the <a href="https://pandas.pydata.org/docs/reference/groupby.html" rel="nofollow noreferrer">docs</a></p>
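<p>That said, if you really do need one frame per neighbourhood, a dict comprehension over the groupby avoids the error in your loop (which came from indexing the list with a string); a sketch:</p>
<pre><code>neighbourhood_dfs = {name: group for name, group
                     in airbnb_manhattan.groupby('neighbourhood')}
neighbourhood_dfs['Harlem']   # access one neighbourhood ('Harlem' is just an example key)
</code></pre>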
|
python|pandas|dataframe|for-loop|automation
| 0
|
6,124
| 53,662,172
|
How to save timestamps in parquet files in C++ and load it in Python Pandas?
|
<p>I am using <code>Apache Arrow</code> in C++ to save a collection of time-series as a parquet file and use python to load the parquet file as a <code>Pandas</code> <code>Dataframe</code>. The process works for all types except the <code>Date64Type</code>. I am saving the epoch time in C++ and when loading it in pandas the time information is lost. </p>
<p>For example for boost posix time : <code>2018-04-01T20:11:17.112Z</code>, the epoch time (in <code>int64_t</code>) is <code>1522613477112000</code>, but when I saved to parquet file as (<code>Date64Type</code>) and load in pandas the result is <code>2018-04-01</code> and the time information is lost. What is the correct to save timestamps in parquet files?</p>
|
<p>You need to use <code>arrow::TimestampType</code> instead. <code>Date32Type</code> and <code>Date64Type</code> only support day resolution; their internal representations differ, though (<code>int32_t</code> days since the UNIX epoch vs. <code>int64_t</code> milliseconds since the UNIX epoch).</p>
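<p>A quick round-trip in Python (pyarrow/pandas shown for illustration; the C++ schema change is analogous) confirms that a timestamp type keeps the sub-day part:</p>
<pre><code>import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# epoch microseconds from the question
ts = pd.to_datetime([1522613477112000], unit='us')
table = pa.table({'t': pa.array(ts, type=pa.timestamp('us'))})
pq.write_table(table, 'ts.parquet')
print(pd.read_parquet('ts.parquet'))   # 2018-04-01 20:11:17.112
</code></pre>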
|
c++|pandas|parquet|pyarrow
| 1
|
6,125
| 72,046,944
|
Python (pandas) - How to group values in one column and then delete or keep that group based on values in another column
|
<p>Let's say I have the following pandas dataset:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Column 1</th>
<th>Column 2</th>
</tr>
</thead>
<tbody>
<tr>
<td>dog</td>
<td>RE</td>
</tr>
<tr>
<td>dog</td>
<td>RE FX</td>
</tr>
<tr>
<td>cat</td>
<td>RE BA</td>
</tr>
<tr>
<td>mouse</td>
<td>AQ</td>
</tr>
<tr>
<td>mouse</td>
<td>RE FX</td>
</tr>
<tr>
<td>salmon</td>
<td>AQ</td>
</tr>
</tbody>
</table>
</div>
<p>Essentially what I would like to do is group the values in Column 1 and then either keep or delete them based on the values in Column 2. So for example, I want to delete all values in a group in Column 1 if ANY of the corresponding rows in Column 2 are "RE" or "RE BA". Based on the dataset above, the output would be the following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Column 1</th>
<th>Column 2</th>
</tr>
</thead>
<tbody>
<tr>
<td>mouse</td>
<td>AQ</td>
</tr>
<tr>
<td>mouse</td>
<td>RE FX</td>
</tr>
<tr>
<td>salmon</td>
<td>AQ</td>
</tr>
</tbody>
</table>
</div>
<p>I am struggling with this because although I understand how to drop rows based on whether they contain a specific value, I don't understand how to drop entire groups based on whether ANY of the rows in that group contain a specific value. Any help would be greatly appreciated!</p>
|
<p>You can try <code>groupby</code> then <code>filter</code></p>
<pre class="lang-py prettyprint-override"><code>out = df.groupby("Column 1").filter(lambda df: ~df['Column 2'].isin(["RE", "RE BA"]).any())
</code></pre>
<pre><code>print(out)
Column 1 Column 2
3 mouse AQ
4 mouse RE FX
5 salmon AQ
</code></pre>
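<p>For larger frames, a sketch of an equivalent mask built with <code>transform</code> (which avoids calling a Python lambda once per group) could look like:</p>
<pre class="lang-py prettyprint-override"><code>bad = df['Column 2'].isin(["RE", "RE BA"])
out = df[~bad.groupby(df['Column 1']).transform('any')]
</code></pre>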
|
python|pandas
| 1
|
6,126
| 71,832,948
|
How to convert Excel file to json using pandas?
|
<p>I would like to parse an Excel file which has a couple of sheets and save the data in a JSON file.</p>
<p>I don't want to parse the first and second sheets, nor the last one. I want to parse the ones in between; the number of those sheets is not always the same, and neither are their names.</p>
<pre><code>import pandas
import json
# Read excel document
excel_data_df = pandas.read_excel('data.xlsx', sheet_name='sheet1')
</code></pre>
<p>Is there a way not to put this parameter <code>sheet_name</code>?</p>
<pre><code># Convert excel to string
# (define orientation of document in this case from up to down)
thisisjson = excel_data_df.to_json(orient='records')
# Print out the result
print('Excel Sheet to JSON:\n', thisisjson)
# Make the string into a list to be able to input in to a JSON-file
thisisjson_dict = json.loads(thisisjson)
# Define file to write to and 'w' for write option -> json.dump()
# defining the list to write from and file to write to
with open('data.json', 'w') as json_file:
json.dump(thisisjson_dict, json_file)
</code></pre>
|
<p>I would just create a sheet level dict and loop through each of the sheets. Something like this:</p>
<pre><code>import pandas
import json
sheets = ['sheet1','sheet2','sheet3']
output = dict()
# Read excel document
for sheet in sheets:
excel_data_df = pandas.read_excel('data.xlsx', sheet_name=sheet)
# Convert excel to string
# (define orientation of document in this case from up to down)
thisisjson = excel_data_df.to_json(orient='records')
# Print out the result
print('Excel Sheet to JSON:\n', thisisjson)
# Make the string into a list to be able to input in to a JSON-file
thisisjson_dict = json.loads(thisisjson)
output[sheet] = thisisjson_dict
# Define file to write to and 'w' for write option -> json.dump()
# defining the list to write from and file to write to
with open('data.json', 'w') as json_file:
json.dump(output, json_file)
</code></pre>
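<p>If the sheet names are not known in advance, one possible sketch (assuming, as in the question, that the first two sheets and the last one should be skipped) lets pandas discover the names first, after which the loop above stays the same:</p>
<pre><code>xls = pandas.ExcelFile('data.xlsx')
# everything between the second sheet and the last one
sheets = xls.sheet_names[2:-1]
</code></pre>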
|
python|json|excel|pandas|dataframe
| 1
|
6,127
| 55,511,186
|
Could not identify NUMA node of platform GPU
|
<p>I try to get Tensorflow to start on my machine, but I always get stuck with a "Could not identify NUMA node" error message.</p>
<p>I use a Conda environment:</p>
<ul>
<li>tensorflow-gpu 1.12.0</li>
<li>cudatoolkit 9.0</li>
<li>cudnn 7.1.2</li>
<li>nvidia-smi says: Driver Version 418.43, CUDA Version 10.1</li>
</ul>
<p>Here is the error code:</p>
<pre><code>>>> import tensorflow as tf
>>> tf.Session()
2019-04-04 09:56:59.851321: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-04-04 09:56:59.950066: E tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:950] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2019-04-04 09:56:59.950762: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 750 Ti major: 5 minor: 0 memoryClockRate(GHz): 1.0845
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 1.84GiB
2019-04-04 09:56:59.950794: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-04-04 09:59:45.338767: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-04-04 09:59:45.338799: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-04-04 09:59:45.338810: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-04-04 09:59:45.339017: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1193] Could not identify NUMA node of platform GPU id 0, defaulting to 0. Your kernel may not have been built with NUMA support.
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
</code></pre>
<p>Unfortunately, I have no idea what to do with the error code.</p>
|
<p>I could fix it with a new conda environment:</p>
<pre><code>conda create --name tf python=3
conda activate tf
conda install cudatoolkit=9.0 tensorflow-gpu=1.11.0
</code></pre>
<p>A table of compatible CUDA/TF combinations is available <a href="https://www.tensorflow.org/install/source#linux" rel="nofollow noreferrer">here</a>.
In my case, the combination of cudatoolkit=9.0 and tensorflow-gpu=1.12 inexplicably led to an std::bad_alloc error.
However, cudatoolkit=9.0 and tensorflow-gpu=1.11.0 works fine.</p>
|
python|tensorflow|keras
| 1
|
6,128
| 56,452,689
|
How do I import rows of a Google Sheet into Pandas, but with column names?
|
<p>There are great instructions in a number of places to import a Google Sheet into a Pandas DataFrame using gspread, eg:</p>
<pre><code># Open our new sheet and read some data.
worksheet = gc.open_by_key('...').sheet1
# get_all_values gives a list of rows.
rows = worksheet.get_all_values()
# Convert to a DataFrame and render.
import pandas as pd
df = pd.DataFrame.from_records(rows)
df.head()
</code></pre>
<p>The problem is that this import treats the first row as a value rather than as a header.</p>
<p>How can I import the DataFrame and treat the first row as column names instead of values?</p>
|
<p>You can pass the first row in as the column names and the remaining rows as the records:</p>
<pre><code>row=[[1,2,3,4]]*3
pd.DataFrame.from_records(row[1:],columns=row[0])
1 2 3 4
0 1 2 3 4
1 1 2 3 4
</code></pre>
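<p>Applied to the worksheet rows from the question, that becomes:</p>
<pre><code>rows = worksheet.get_all_values()
df = pd.DataFrame.from_records(rows[1:], columns=rows[0])
</code></pre>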
|
pandas|gspread
| 4
|
6,129
| 67,028,048
|
Making common units within a column of a Pandas DataFrame
|
<p>I have a dataframe that has measurements with different units in the same column. A separate column exists for the unit names.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'color': ['red','green','blue','blue','green'],
'length': [3,6,9,120,15],
'length_units': ['ft','m','ft','cm','ft'],
'width': [48,700,120,130,188],
'width_units': ['in','cm','in','cm','in'],
})
print(df)
</code></pre>
<pre><code> color length length_units width width_units
0 red 3 ft 48 in
1 green 6 m 700 cm
2 blue 9 ft 120 in
3 blue 120 cm 130 cm
4 green 15 ft 188 in
</code></pre>
<p>So my goal would be to convert each column to a particular common unit. For instance, in this example, I'd like to convert all lengths to feet and all widths to inches, so that it looked like:</p>
<pre><code> color length length_units width width_units
0 red 3.000000 ft 48.00000 in
1 green 19.685040 ft 275.59070 in
2 blue 9.000000 ft 120.00000 in
3 blue 3.937008 ft 51.18113 in
4 green 15.000000 ft 188.00000 in
</code></pre>
<p>I'm very new to pandas, and I've come up with an ugly and inefficient solution with lots of filtering, slicing, and apply(). I'd love to see how it should really be done! Thanks in advance.</p>
|
<p>Create a <code>dict</code> of <code>dicts</code> where the keys are your desired units and the value is a dict with all conversions to that unit (that are required in your data). Then given your consistent naming conventions of the columns, use a simple loop to map the unit to its conversion and multiply by the value, and set the _units column.</p>
<pre><code>d = {'ft': {'ft': 1, 'm': 3.2808, 'cm': .032808},
'in': {'in': 1, 'cm': 0.3937}}
for col, unit in [('length', 'ft'), ('width', 'in')]:
df[col] = df[col] * df[f'{col}_units'].map(d[unit])
df[f'{col}_units'] = unit
</code></pre>
<hr />
<pre><code> color length length_units width width_units
0 red 3.00000 ft 48.000 in
1 green 19.68480 ft 275.590 in
2 blue 9.00000 ft 120.000 in
3 blue 3.93696 ft 51.181 in
4 green 15.00000 ft 188.000 in
</code></pre>
<hr />
<p>For a lot more flexibility, instead of the <code>dict</code> of <code>dicts</code> you could create the NxN matrix of conversions, let's call it <code>df_conv</code>:</p>
<pre><code> mm cm m km in ft yd mi nmi
mm 1.0 0.10 0.0010 0.000001 0.039370 0.003281 0.001094 6.213712e-07 5.399568e-07
cm 10.0 1.00 0.0100 0.000010 0.393701 0.032808 0.010936 6.213712e-06 5.399568e-06
m 1000.0 100.00 1.0000 0.001000 39.370079 3.280840 1.093613 6.213712e-04 5.399568e-04
km 1000000.0 100000.00 1000.0000 1.000000 39370.078740 3280.839895 1093.613298 6.213712e-01 5.399568e-01
in 25.4 2.54 0.0254 0.000025 1.000000 0.083333 0.027778 1.578283e-05 1.371490e-05
ft 304.8 30.48 0.3048 0.000305 12.000000 1.000000 0.333333 1.893939e-04 1.645788e-04
yd 914.4 91.44 0.9144 0.000914 36.000000 3.000000 1.000000 5.681818e-04 4.937365e-04
mi 1609344.0 160934.40 1609.3440 1.609344 63360.000000 5280.000000 1760.000000 1.000000e+00 8.689762e-01
nmi 1852000.0 185200.00 1852.0000 1.852000 72913.385827 6076.115486 2025.371829 1.150779e+00 1.000000e+00
</code></pre>
<p>And now you can use those Series to map to any unit you've included in that matrix</p>
<pre><code>for col, unit in [('length', 'ft'), ('width', 'in')]:
df[col] = df[col]*df[f'{col}_units'].map(df_conv[unit])
df[f'{col}_units'] = unit
</code></pre>
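<p>The matrix itself doesn't have to be typed out by hand. A minimal sketch (assuming a single series of metres-per-unit factors) builds the full NxN table with an outer product:</p>
<pre><code>base = pd.Series({'mm': 0.001, 'cm': 0.01, 'm': 1.0, 'km': 1000.0,
                  'in': 0.0254, 'ft': 0.3048, 'yd': 0.9144,
                  'mi': 1609.344, 'nmi': 1852.0})
# entry [i, j] = how many units j make up one unit i
df_conv = pd.DataFrame(np.outer(base, 1 / base),
                       index=base.index, columns=base.index)
</code></pre>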
|
python|pandas
| 3
|
6,130
| 66,895,438
|
Unable to find out how to specify datetime dtype to Numba @guvectorize
|
<p>I would like to handle an array of datetime values in Numba.
I tried it on a small example, but I am unable to get it working.</p>
<pre class="lang-py prettyprint-override"><code>import numba as nb
from numba import guvectorize
@guvectorize('void(nb.types.NPDatetime("ns")[:],nb.types.NPDatetime("ns")[:])',
'(m)->(m)')
def ts_copy(ts, ts_c):
for idx, t in np.ndenumerate(ts):
idx, = idx
ts_c[idx] = t
ts = pd.date_range(start='2021/1/1 08:00', end='2021/1/1 18:00', freq='1H').to_numpy()
ts_c = np.empty(ts.size, dtype='datetime64[ns]')
res = ts_copy(ts, ts_c)
</code></pre>
<p>I get following error message:</p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-39-1550d3aa0145>", line 3, in <module>
def ts_copy(ts, ts_c):
File "/home/pierre/anaconda3/lib/python3.8/site-packages/numba/np/ufunc/decorators.py", line 179, in wrap
guvec.add(fty)
File "/home/pierre/anaconda3/lib/python3.8/site-packages/numba/np/ufunc/ufuncbuilder.py", line 212, in add
cres, args, return_type = _compile_element_wise_function(
File "/home/pierre/anaconda3/lib/python3.8/site-packages/numba/np/ufunc/ufuncbuilder.py", line 144, in _compile_element_wise_function
cres = nb_func.compile(sig, **targetoptions)
File "/home/pierre/anaconda3/lib/python3.8/site-packages/numba/np/ufunc/ufuncbuilder.py", line 93, in compile
return self._compile_core(sig, flags, locals)
File "/home/pierre/anaconda3/lib/python3.8/site-packages/numba/np/ufunc/ufuncbuilder.py", line 124, in _compile_core
args, return_type = sigutils.normalize_signature(sig)
File "/home/pierre/anaconda3/lib/python3.8/site-packages/numba/core/sigutils.py", line 24, in normalize_signature
parsed = _parse_signature_string(sig)
File "/home/pierre/anaconda3/lib/python3.8/site-packages/numba/core/sigutils.py", line 14, in _parse_signature_string
return eval(signature_str, {}, types.__dict__)
File "<string>", line 1, in <module>
NameError: name 'nb' is not defined
</code></pre>
<p>Please, any ideas?
Thanks a lot for your help.
Best,</p>
|
<p><code>guvectorize</code> does not work with datetime types. Convert to float timestamp.</p>
<pre><code>import numba as nb
from numba import guvectorize
import numpy as np
import pandas as pd
import arrow
@guvectorize([(nb.types.float64[:],nb.types.float64[:])],'(m)->(m)')
def ts_copy(ts, ts_c):
for idx, t in np.ndenumerate(ts):
idx, = idx
ts_c[idx] = t
ts = pd.date_range(start='2021/1/1 08:00', end='2021/1/1 18:00', freq='1H').to_numpy()
ts_c = np.empty(ts.size, dtype='datetime64[ns]')
def e(_list):
return np.array([arrow.get(str(_)).float_timestamp for _ in _list])
ts = e(ts)
ts_c = e(ts_c)
res = ts_copy(ts, ts_c)
for each in res:
print(arrow.get(each))
</code></pre>
|
python|numpy|numba
| 1
|
6,131
| 47,254,587
|
Is pandas / numpy's axis the opposite of R's MARGIN?
|
<p>Is it correct to think about these two things as being opposite? This has been a major source of confusion for me.</p>
<p>Below is an example where I find the column sums of a data frame in R and Python. Notice the opposite values for <code>MARGIN</code> and <code>axis</code>.</p>
<p>In R (using <code>MARGIN=2</code>, i.e. the column margin):</p>
<pre><code>m <- matrix(1:6, nrow=2)
apply(m, MARGIN=2, mean)
[1] 1.5 3.5 5.5
</code></pre>
<p>In Python (using <code>axis=0</code>, i.e. the row axis):</p>
<pre><code>In [25]: m = pd.DataFrame(np.array([[1, 3, 5], [2, 4, 6]]))
In [26]: m.apply(np.mean, axis=0)
Out[26]:
0 1.5
1 3.5
2 5.5
dtype: float64
</code></pre>
|
<p>Confusion arises because <code>apply()</code> talks both about which dimension the apply is "over", as well as which dimension is <em>retained</em>. In other words, when you <code>apply()</code> over rows, the result is a vector whose length is the number of columns in the input. This particular confusion is highlighted by Pandas' documentation (but not R's):</p>
<pre><code>axis : {0 or ‘index’, 1 or ‘columns’}
0 or ‘index’: apply function to each column
1 or ‘columns’: apply function to each row
</code></pre>
<p>As you can see, <code>0</code> means the index (row) dimension is retained, and the column dimension is "applied over" (thus eliminated).</p>
<p>Put another way, application over columns is <code>axis=0</code> or <code>MARGIN=2</code>, and application over rows is <code>axis=1</code> or <code>MARGIN=1</code>. The <code>1</code> values appear to match, but that's spurious: <code>1</code> in Python is the second dimension, because Python is 0-based.</p>
|
python|r|pandas|numpy
| 3
|
6,132
| 59,241,216
|
Padding NumPy arrays to a specific size
|
<p>I would like to pad all my arrays to a certain constant shape. </p>
<p>All arrays have shape (X, 13) but I want them to be (99, 13). X is smaller than or equal to 99, so some arrays have fewer than 99 rows. I'm looking for a way to pad them to the size of the default <code>var</code>.</p>
<p>I have seen and tried examples where they check padding dynamically but I can't find out the right code.</p>
<pre><code>for item in data:
if len(item) < len(var):
np.pad(len(var) - len(item)
</code></pre>
|
<p>Here:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
arr = np.random.randint(0, 10, (7, 4))
def padding(array, xx, yy):
"""
:param array: numpy array
:param xx: desired height
    :param yy: desired width
:return: padded array
"""
h = array.shape[0]
w = array.shape[1]
a = (xx - h) // 2
aa = xx - a - h
b = (yy - w) // 2
bb = yy - b - w
return np.pad(array, pad_width=((a, aa), (b, bb)), mode='constant')
print(padding(arr, 99, 13).shape) # just proving that it outputs the right shape
</code></pre>
<pre><code>Out[83]: (99, 13)
</code></pre>
<p>An <strong>example</strong>:</p>
<pre class="lang-py prettyprint-override"><code>padding(arr, 7, 11) # originally 7x4
</code></pre>
<pre><code>Out[85]:
array([[0, 0, 0, 4, 8, 8, 8, 0, 0, 0, 0],
[0, 0, 0, 5, 9, 6, 3, 0, 0, 0, 0],
[0, 0, 0, 4, 7, 6, 1, 0, 0, 0, 0],
[0, 0, 0, 5, 6, 5, 7, 0, 0, 0, 0],
[0, 0, 0, 6, 6, 3, 3, 0, 0, 0, 0],
[0, 0, 0, 6, 0, 9, 6, 0, 0, 0, 0],
[0, 0, 0, 9, 4, 4, 0, 0, 0, 0, 0]])
</code></pre>
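<p>If the rows should stay at the top instead of being centered (often the case when padding sequence features such as the (X, 13) arrays from the question), a minimal sketch pads only at the bottom, assuming <code>data</code> is the list of arrays from the question:</p>
<pre class="lang-py prettyprint-override"><code>def pad_bottom(array, target_rows):
    """Pad with zero rows at the bottom only, keeping the data at the top."""
    return np.pad(array,
                  pad_width=((0, target_rows - array.shape[0]), (0, 0)),
                  mode='constant')

padded = [pad_bottom(item, 99) for item in data]  # each item is (X, 13), X <= 99
</code></pre>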
|
python|numpy
| 5
|
6,133
| 45,986,897
|
Multiplying elementwise over final axis of two arrays
|
<p>Given a 3d array and a 2d array,</p>
<pre><code>a = np.arange(10*4*3).reshape((10,4,3))
b = np.arange(30).reshape((10,3))
</code></pre>
<p>How can I run elementwise-multiplication across the final axis of each, resulting in <code>c</code> where <code>c</code> has the shape <code>.shape</code> as <code>a</code>? I.e. </p>
<pre><code>c[0] = a[0] * b[0]
c[1] = a[1] * b[1]
# ...
c[i] = a[i] * b[i]
</code></pre>
|
<p>Without any sum-reduction involved, a simple <code>broadcasting</code> would be really efficient after extending <code>b</code> to <code>3D</code> with <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.indexing.html#numpy.newaxis" rel="nofollow noreferrer"><code>np.newaxis/None</code></a> -</p>
<pre><code>a*b[:,None,:] # or simply a*b[:,None]
</code></pre>
<p>Runtime test -</p>
<pre><code>In [531]: a = np.arange(10*4*3).reshape((10,4,3))
...: b = np.arange(30).reshape((10,3))
...:
In [532]: %timeit np.einsum('ijk,ik->ijk', a, b) #@Brad Solomon's soln
...: %timeit a*b[:,None]
...:
100000 loops, best of 3: 1.79 µs per loop
1000000 loops, best of 3: 1.66 µs per loop
In [525]: a = np.random.rand(100,100,100)
In [526]: b = np.random.rand(100,100)
In [527]: %timeit np.einsum('ijk,ik->ijk', a, b)
...: %timeit a*b[:,None]
...:
1000 loops, best of 3: 1.53 ms per loop
1000 loops, best of 3: 1.08 ms per loop
In [528]: a = np.random.rand(400,400,400)
In [529]: b = np.random.rand(400,400)
In [530]: %timeit np.einsum('ijk,ik->ijk', a, b)
...: %timeit a*b[:,None]
...:
10 loops, best of 3: 128 ms per loop
10 loops, best of 3: 94.8 ms per loop
</code></pre>
|
python|python-3.x|numpy|numpy-einsum
| 2
|
6,134
| 46,140,609
|
One to many, left, outer join with pandas (Python)
|
<p>I'm trying to join three tables together using Python 2.7 and pandas. My tables look like the ones below:</p>
<pre><code>Table 1
ID | test
1 | ss
2 | sb
3 | sc
Table 2
ID | tested | value1 | Value2 | ID2
1 | a | e | o | 1
1 | axe | ee | e | 1
1 | bce | io | p | 3
2 | bee | kd | … | 2
2 | bdd | a | fff | 3
3 | db | f | yiueie | 2
Table 3
ID2 | type
1 | i
1 | d
1 | h
3 | e
1 | o
2 | ou
2 | oui
3 | op
</code></pre>
<p>The code I'm using is below:</p>
<pre><code>import pandas as pd
xl = pd.ExcelFile(r'C:\Users\Joe\Desktop\Project1\xlFiles\test1.xlsx')
xl.sheet_names
df = xl.parse("Sheet1")
df.head()
xl2 = pd.ExcelFile(r'C:\Users\Joe\Desktop\Project1\xlFiles\test2.xlsx')
xl2.sheet_names
df2 = xl2.parse("Sheet1")
df2.head()
xl3 = pd.ExcelFile(r'C:\Users\Joe\Desktop\Project1\xlFiles\test3.xlsx')
xl3.sheet_names
df3 = xl3.parse("Sheet1")
df3.head()
df3 = df3.groupby('ID2')['type'].apply(','.join).reset_index()
s1 = pd.merge(df2, df3, how='left', on=['ID2'])
</code></pre>
<p>The code joins Table 3 to Table 2 the way I would like. But I can't figure out how to group multiple columns to join s1 to Table 1. I need the information from every column in s1 to be added to Table 1, but I only want one row for each ID value (3 rows total). Does anyone know how I would do this?</p>
<p>My expected output, for reference, is below:</p>
<pre><code>ID | test | type | tested | value1 | ID2
1 | ss | i,d,h,o | a,axe,bce | e,ee,io | 1,1,3
2 | sb | ou,oui | bee,bdd | kd,a | 2,3
3 | sc | e,op | db | f | 2
</code></pre>
<p>Thanks in advance for the help.</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>cumcount</code></a> to number the occurrences of <code>ID2</code> within both <code>df2</code> and <code>df3</code>, so that the merge happens on unique pairs. Then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> and aggregate with <code>','.join</code>.</p>
<p>Last use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>join</code></a>:</p>
<pre><code>df2['g'] = df2.groupby('ID2').cumcount()
df3['g'] = df3.groupby('ID2').cumcount()
df23 = pd.merge(df2, df3, how='left', on=['g','ID2']).astype(str).groupby('ID').agg(','.join)
#for same dtype for match - int
df23.index = df23.index.astype(int)
print (df23)
tested value1 Value2 ID2 g type
ID
1 a,axe,bce e,ee,io o,e,p 1,1,3 0,1,0 i,d,e
2 bee,bdd kd,a ...,fff 2,3 0,1 ou,op
3 db f yiueie 2 1 oui
df = df1.join(df23, on='ID')
#subset and desired order of output columns
cols = ['ID','test','type','tested','value1','ID2']
df = df[cols]
print (df)
ID test type tested value1 ID2
0 1 ss i,d,e a,axe,bce e,ee,io 1,1,3
1 2 sb ou,op bee,bdd kd,a 2,3
 2   3   sc    oui         db        f      2
</code></pre>
|
python|pandas|join|outer-join
| 1
|
6,135
| 51,011,970
|
For loop for an array
|
<p>I have the array <code>vAgarch</code> and I am trying to extract each element from that, so I have the following code now:</p>
<pre><code>vAgarch = [0.05, 0.03, 0.04, 0.05, 0.03, 0.04]
vAgarch = np.array(vAgarch)
# Extract each element from array vAgarch
dA1garch = np.fabs(vAgarch[0])
dA2garch = np.fabs(vAgarch[1])
dA3garch = np.fabs(vAgarch[2])
dA4garch = np.fabs(vAgarch[3])
dA5garch = np.fabs(vAgarch[4])
dA6garch = np.fabs(vAgarch[5])
</code></pre>
<p>However, this could be easier right? My array will consist of 40 elements later on and I think this code can be simplified with a for loop. I tried several for loops, but so far I have no success. Is there someone who can help me with simplifying this code?</p>
|
<p>There is no need to use a for loop: <code>numpy.fabs</code> accepts <code>array_like</code> input, so you can just pass the list to it. See the code below (I have changed some elements in <code>vAgarch</code> to negative values):</p>
<pre><code>import numpy as np

def test_s():
    vAgarch = [-0.05, 0.03, -0.04, 0.05, 0.03, 0.04]
    print(vAgarch)
    # [-0.05, 0.03, -0.04, 0.05, 0.03, 0.04]
    # equivalent to applying fabs element-wise in a for loop
    arr2 = np.fabs(vAgarch)
    print(arr2)  # arr2 is a new array; all its elements are non-negative
    # [ 0.05  0.03  0.04  0.05  0.03  0.04]
    print(vAgarch[0])
    # -0.05
    print(arr2[0])
    # 0.05
</code></pre>
<p>you can also see tutorial in <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.fabs.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/reference/generated/numpy.fabs.html</a>.</p>
<p>Or if you use pycharm,you can go to the function definition for example:</p>
<p><a href="https://i.stack.imgur.com/H8pTK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H8pTK.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/Hl69G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hl69G.png" alt="enter image description here"></a></p>
|
python|arrays|loops|numpy
| 0
|
6,136
| 66,661,492
|
Identifying overlapping events (datetime records) in a pandas dataframe
|
<p>I am having difficulty in trying to detect overlapping <code>start_datetime</code> and <code>end_datetime</code> in my dataset.</p>
<p>Currently my dataset looks like the following</p>
<p><a href="https://i.stack.imgur.com/UOE27.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UOE27.png" alt="enter image description here" /></a></p>
<p>but I'm trying to get to</p>
<p><a href="https://i.stack.imgur.com/2vJQ6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2vJQ6.png" alt="enter image description here" /></a></p>
<h2>Raw code to produce dataset</h2>
<pre><code>import pandas as pd
df = pd.DataFrame({
'start_datetime':[
'2000-01-01 02:23:49', '1997-12-20 07:22:10', '2000-01-05 03:42:29', '2002-02-25 17:20:09', '1999-06-30 03:33:20',
],
'end_datetime':[
'2000-01-06 04:50:20', '1998-12-20 01:24:12', '2000-03-01 11:01:11', '2003-02-25 22:05:02', '2000-01-01 02:50:30',
],
})
df['start_datetime'] = pd.to_datetime(df['start_datetime'])
df['end_datetime'] = pd.to_datetime(df['end_datetime'])
df
</code></pre>
<p>Is there a way (efficient or inefficient) to detect overlaps without sorting the columns?</p>
|
<h3><code>Numpy broadcasting</code></h3>
<pre><code>s, e = df[['start_datetime', 'end_datetime']].to_numpy().T
m1 = (s[:, None] > s) & (s[:, None] < e) # Check if start time overlap
m2 = (e[:, None] < e) & (e[:, None] > s) # Check if ending time overlap
df['overlap'] = (m1 | m2).any(1)
</code></pre>
<h3>Result</h3>
<pre><code>>>> df
start_datetime end_datetime overlap
0 2000-01-01 02:23:49 2000-01-06 04:50:20 True
1 1997-12-20 07:22:10 1998-12-20 01:24:12 False
2 2000-01-05 03:42:29 2000-03-01 11:01:11 True
3 2002-02-25 17:20:09 2003-02-25 22:05:02 False
4 1999-06-30 03:33:20 2000-01-01 02:50:30 True
</code></pre>
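<p>If you also need to know <em>which</em> rows overlap, the same boolean matrices can be reused (a small sketch; note the pairwise comparison is quadratic in the number of rows, so this suits moderate frame sizes):</p>
<pre><code>import numpy as np

pairs = np.argwhere(m1 | m2)  # array of [row_i, row_j] index pairs
</code></pre>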
|
python|pandas|dataframe|datetime|python-datetime
| 4
|
6,137
| 70,946,613
|
pandas timeseries offset BusinessMonthBegin doesn't roll over on the new month
|
<p>I have a couple of pandas functions that I use to collect relative start dates for previous time periods. I noticed today on the start of the new month, my business month start (<code>BMS</code>) function returned an unexpected timestamp:</p>
<pre><code># so.py
import pandas
import time
def now(format='ms', normalize=True):
obj = pandas.Timestamp.now(tz='America/Toronto').normalize()
if normalize == False:
obj = pandas.Timestamp.now(tz='America/Toronto')
if format == 'ms':
obj = int(time.mktime(obj.timetuple()) * 1000)
return(obj)
def BMS(multiplier, format='ms'):
obj = now(format=None) + pandas.tseries.offsets.BusinessMonthBegin(multiplier)
obj = pandas.Timestamp(obj).floor(freq='D')
if format == 'ms':
obj = int(time.mktime(obj.timetuple()) * 1000)
return(obj)
print(f'my function: {BMS(-4, format=None)}')
# python3 so.py
2021-10-01 00:00:00-04:00
#
</code></pre>
<p><code>2021-10-01 00:00:00-04:00</code> is unexpected, because this timestamp was the same timestamp that was returned yesterday:</p>
<pre><code>
yesterday = pandas.Timestamp.now(tz='America/Toronto').normalize() - pandas.Timedelta(days=1)
print(f'yesterday: {yesterday + pandas.tseries.offsets.BusinessMonthBegin(-4)}')
# yesterday: 2021-10-01 00:00:00-04:00
</code></pre>
<p>Since today is a new month, I would expect <code>BMS(-4, format=None)</code> to return
<code>2021-11-01 00:00:00-04:00</code></p>
<p>In case it might be necessary, a more basic <code>mre</code> to re-produce what my functions are doing is like so:</p>
<pre><code># MRE
today = pandas.Timestamp.now(tz='America/Toronto').normalize()
print(f'mre: {today + pandas.tseries.offsets.BusinessMonthBegin(-4)}')
</code></pre>
<p><strong>Update</strong>
This morning, the <code>mre</code> returned the expected timestamp</p>
<pre><code>2021-11-01 00:00:00-04:00
</code></pre>
<p>Since it rolled over on the second day of the month and not the first day of the month, maybe there's an implicit inclusion of the first day of the month when calculating <code>BusinessMonthBegin</code>?</p>
<p>What am I missing?</p>
|
<p>If the date falls on the offset, the offset addition already gives the previous business month start date (e.g. 2022-02-01 is a business month start date):</p>
<pre><code>import pandas as pd
t_on_offset = pd.Timestamp('2022-02-01')
t_after_offset = pd.Timestamp('2022-02-02')
## on the offset, the offset addition will go back one month already:
t_on_offset + pd.tseries.offsets.BusinessMonthBegin(-1)
# Timestamp('2022-01-03 00:00:00')
# it seems what you actually want here is
# t_on_offset + pd.tseries.offsets.BusinessMonthBegin(0)
# this just rolls back to the beginning of the BM:
t_after_offset + pd.tseries.offsets.BusinessMonthBegin(-1)
# Timestamp('2022-02-01 00:00:00')
</code></pre>
<p>You can check if you're on the offset like</p>
<pre><code>pd.tseries.offsets.BusinessMonthBegin().rollback(t_on_offset) == t_on_offset
# True
pd.tseries.offsets.BusinessMonthBegin().rollback(t_after_offset) == t_after_offset
# False
</code></pre>
<p>So in your example BMS function (slightly refactored), that could look like</p>
<pre><code>def BMS(timestamp, multiplier, normalize=True, format='ms'):
if pd.tseries.offsets.BusinessMonthBegin().rollback(timestamp) == timestamp:
if multiplier < 0:
multiplier += 1
obj = timestamp + pd.tseries.offsets.BusinessMonthBegin(multiplier)
if normalize:
obj = obj.normalize()
if format == 'ms':
return obj.timestamp() * 1000
return(obj)
</code></pre>
<p>In action:</p>
<pre><code>for t in pd.Timestamp('2022-01-31'), pd.Timestamp('2022-02-01'), pd.Timestamp('2022-02-02'):
    print(f'{str(t)} -> my function: {BMS(t, -4, format=None)}')
2022-01-31 00:00:00 -> my function: 2021-10-01 00:00:00
2022-02-01 00:00:00 -> my function: 2021-11-01 00:00:00
2022-02-02 00:00:00 -> my function: 2021-11-01 00:00:00
</code></pre>
|
python|pandas|datetime|time|timestamp
| 1
|
6,138
| 51,887,356
|
How to make repeated column values columns?
|
<p>Here is the dataset that I have. The items below are recorded on a daily basis. </p>
<p>Cigarettes, Tobacco, Snack/Grocery, Beverages, Milk, Coffee, Solaray, Prepared Foods, International Foods, Automotive/NewsPaper, Lottery - Scratch, Lottery - Machine, Whl-Sales/Gift-Card are repeated per date.</p>
<p>I want to transform this frame to one that covers the same data, with the repeated departments as columns, Date as index and Sales as the values.
I tried using pivot_table, but the result did not preserve the values and their combinations the way I expected.
This is how I approached it, but it returned unexpected results: </p>
<pre><code>dept = dept.pivot_table(values='Sales', index = dept.index, columns='Dept', aggfunc='first')
</code></pre>
<p>and here is the original dataframe that I want to change. </p>
<pre><code>Date Dept Sales
2018-12-01 Cigarettes 426.889
2018-12-01 Tobacco 43.84
2018-12-01 Snack/Grocery 198.57
2018-12-01 Beverages 160.97
2018-12-01 Milk 11.56
2018-12-01 Coffee 29.72
2018-12-01 Solaray 9.99
2018-12-01 Prepared Foods 3.99
2018-12-01 International Food 65
2018-12-01 Sweets 0
2018-12-01 Automotive/News Paper 10.47
2018-12-01 Lottery - Scratch 1397
2018-12-01 Lottery - Machine 191
2018-12-01 Whl-Sales/Gift-Card 0
2018-12-01 Total 2549
2018-12-02 Cigarettes 374.01
2018-12-02 Tobacco 89.29
2018-12-02 Snack/Grocery 178.01
2018-12-02 Beverages 135.28
2018-12-02 Milk 9.57
2018-12-02 Coffee 33.76
2018-12-02 Solaray 17.99
2018-12-02 Prepared Foods 20.98
2018-12-02 International Food 3.98
2018-12-02 Sweets 0
2018-12-02 Automotive/News Paper 13.16
2018-12-02 Lottery - Scratch 651
2018-12-02 Lottery - Machine 211
2018-12-02 Whl-Sales/Gift-Card 0
2018-12-02 Total 1738.03
2018-12-03 Cigarettes 463.54
2018-12-03 Tobacco 35.26
2018-12-03 Snack/Grocery 164.19
2018-12-03 Beverages 126.01
2018-12-03 Milk 8.57
2018-12-03 Coffee 30.47
2018-12-03 Solaray 17.99
2018-12-03 Prepared Foods 0
2018-12-03 International Food 21.98
2018-12-03 Sweets 0
2018-12-03 Automotive/News Paper 70.17
2018-12-03 Lottery - Scratch 1046
2018-12-03 Lottery - Machine 461
2018-12-03 Whl-Sales/Gift-Card 0
2018-12-03 Total 2445.18
2018-12-03 Cigarettes 463.54
2018-12-03 Tobacco 35.26
2018-12-03 Snack/Grocery 164.19
2018-12-03 Beverages 126.01
2018-12-03 Milk 8.57
2018-12-03 Coffee 30.47
2018-12-03 Solaray 17.99
2018-12-03 Prepared Foods 0
2018-12-03 International Food 21.98
2018-12-03 Sweets 0
2018-12-03 Automotive/News Paper 70.17
2018-12-03 Lottery - Scratch 1046
2018-12-03 Lottery - Machine 461
2018-12-03 Whl-Sales/Gift-Card 0
2018-12-03 Total 2445.18
2018-12-04 Cigarettes 291.91
2018-12-04 Tobacco 42.93
2018-12-04 Snack/Grocery 207.87
2018-12-04 Beverages 163.11
2018-12-04 Milk 3.99
2018-12-04 Coffee 32.17
2018-12-04 Solaray 40.98
2018-12-04 Prepared Foods 5
2018-12-04 International Food 6.98
2018-12-04 Sweets 0
2018-12-04 Automotive/News Paper 47
2018-12-04 Lottery - Scratch 762
2018-12-04 Lottery - Machine 112.75
2018-12-04 Whl-Sales/Gift-Card NaN
2018-12-04 Total 1716.69
2018-12-05 Cigarettes 255.72
2018-12-05 Tobacco 81.52
2018-12-05 Snack/Grocery 212.94
2018-12-05 Beverages 87.94
2018-12-05 Milk 9.77
2018-12-05 Coffee 15.95
2018-12-05 Solaray 11.98
2018-12-05 Prepared Foods 8.98
2018-12-05 International Food 17.73
2018-12-05 Sweets 0
2018-12-05 Automotive/News Paper 46.24
2018-12-05 Lottery - Scratch 540
2018-12-05 Lottery - Machine 151
2018-12-05 Whl-Sales/Gift-Card NaN
2018-12-05 Total 1439.77
2018-12-06 Cigarettes 377.96
2018-12-06 Tobacco 129.07
2018-12-06 Snack/Grocery 281.83
2018-12-06 Beverages 235.73
2018-12-06 Milk 0
2018-12-06 Coffee 29.32
2018-12-06 Solaray 12.99
2018-12-06 Prepared Foods 27.37
2018-12-06 International Food 9.99
2018-12-06 Sweets 5
2018-12-06 Automotive/News Paper 32.92
2018-12-06 Lottery - Scratch 509
2018-12-06 Lottery - Machine 194
2018-12-06 Whl-Sales/Gift-Card NaN
2018-12-06 Total 1845.18
2018-12-07 Cigarettes 526.91
2018-12-07 Tobacco 65.71
2018-12-07 Snack/Grocery 202.27
2018-12-07 Beverages 183.59
2018-12-07 Milk 2.79
2018-12-07 Coffee 16.22
2018-12-07 Solaray 5.99
2018-12-07 Prepared Foods 24.98
2018-12-07 International Food 1.99
2018-12-07 Sweets 0
2018-12-07 Automotive/News Paper 31.06
2018-12-07 Lottery - Scratch 300
2018-12-07 Lottery - Machine 61.5
2018-12-07 Whl-Sales/Gift-Card 0
2018-12-07 Total 1423.01
</code></pre>
|
<p>One way to do this would be to set the index to <code>['Date', 'Dept']</code> and <code>unstack()</code> but you have multiple values for each <code>Dept</code> for the date <code>2018-12-03</code>.</p>
<p>Not sure if that is expected, but one way to resolve that issue is to <code>groupby().first()</code> to take the first value and then <code>unstack()</code>, e.g.:</p>
<pre><code>In []:
df.set_index(['Date', 'Dept']).groupby(level=[0, 1]).first().unstack()
Out []:
Sales
Dept Automotive/News Paper Beverages Cigarettes Coffee International Food Lottery - Machine Lottery - Scratch Milk Prepared Foods Snack/Grocery Solaray Sweets Tobacco Total Whl-Sales/Gift-Card
Date
2018-12-01 10.47 160.97 426.889 29.72 65.00 191.00 1397.0 11.56 3.99 198.57 9.99 0.0 43.84 2549.00 0.0
2018-12-02 13.16 135.28 374.010 33.76 3.98 211.00 651.0 9.57 20.98 178.01 17.99 0.0 89.29 1738.03 0.0
2018-12-03 70.17 126.01 463.540 30.47 21.98 461.00 1046.0 8.57 0.00 164.19 17.99 0.0 35.26 2445.18 0.0
2018-12-04 47.00 163.11 291.910 32.17 6.98 112.75 762.0 3.99 5.00 207.87 40.98 0.0 42.93 1716.69 NaN
2018-12-05 46.24 87.94 255.720 15.95 17.73 151.00 540.0 9.77 8.98 212.94 11.98 0.0 81.52 1439.77 NaN
2018-12-06 32.92 235.73 377.960 29.32 9.99 194.00 509.0 0.00 27.37 281.83 12.99 5.0 129.07 1845.18 NaN
2018-12-07 31.06 183.59 526.910 16.22 1.99 61.50 300.0 2.79 24.98 202.27 5.99 0.0 65.71 1423.01 0.0
</code></pre>
<p>But this is almost identical to <code>df.pivot_table(index='Date', columns='Dept', values='Sales')</code>:</p>
<pre><code>Dept Automotive/News Paper Beverages Cigarettes Coffee International Food Lottery - Machine Lottery - Scratch Milk Prepared Foods Snack/Grocery Solaray Sweets Tobacco Total Whl-Sales/Gift-Card
Date
2018-12-01 10.47 160.97 426.889 29.72 65.00 191.00 1397.0 11.56 3.99 198.57 9.99 0.0 43.84 2549.00 0.0
2018-12-02 13.16 135.28 374.010 33.76 3.98 211.00 651.0 9.57 20.98 178.01 17.99 0.0 89.29 1738.03 0.0
2018-12-03 70.17 126.01 463.540 30.47 21.98 461.00 1046.0 8.57 0.00 164.19 17.99 0.0 35.26 2445.18 0.0
2018-12-04 47.00 163.11 291.910 32.17 6.98 112.75 762.0 3.99 5.00 207.87 40.98 0.0 42.93 1716.69 NaN
2018-12-05 46.24 87.94 255.720 15.95 17.73 151.00 540.0 9.77 8.98 212.94 11.98 0.0 81.52 1439.77 NaN
2018-12-06 32.92 235.73 377.960 29.32 9.99 194.00 509.0 0.00 27.37 281.83 12.99 5.0 129.07 1845.18 NaN
2018-12-07 31.06 183.59 526.910 16.22 1.99 61.50 300.0 2.79 24.98 202.27 5.99 0.0 65.71 1423.01 0.0
</code></pre>
|
python|pandas|dataframe|pivot
| 1
|
6,139
| 51,585,272
|
ValueError: Index DATE invalid with pandas.read_csv on header row
|
<p>Trying to create a dictionary with the key's being the first row of the csv file and the value's being a dictionary of {first column: corresponding column to row}:</p>
<pre><code>import pandas as pd
df = pd.read_csv('~/StockMachine/data_stocks.csv', index_col=['DATE'], sep=',\s+')
data = df.to_dict()
print(data)
</code></pre>
<p>However, I get this error "ValueError: Index DATE invalid".</p>
<p>Traceback:</p>
<pre><code> File "/Users/cs/StockMachine/stockmachine.py", line 4, in <module>
df = pd.read_csv('~/StockMachine/data_stocks.csv', index_col=['DATE'], sep=',\s+')
File "/Users/cs/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 678, in parser_f
return _read(filepath_or_buffer, kwds)
File "/Users/cs/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 446, in _read
data = parser.read(nrows)
File "/Users/cs/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 1036, in read
ret = self._engine.read(nrows)
File "/Users/cs/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 2273, in read
index, columns = self._make_index(data, alldata, columns, indexnamerow)
File "/Users/cs/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 1425, in _make_index
index = self._get_simple_index(alldata, columns)
File "/Users/cs/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 1457, in _get_simple_index
i = ix(idx)
File "/Users/cs/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 1452, in ix
raise ValueError('Index %s invalid' % col)
</code></pre>
<p>data_stocks.csv:
<a href="https://i.stack.imgur.com/PXUHo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PXUHo.png" alt="CSV File"></a></p>
|
<p>A similar thing happened to me, and in my case some of the <code>['DATE']</code> values were strings with extra spaces inside. Maybe if you do something like:</p>
<pre><code>import pandas as pd
df = pd.read_csv('~/StockMachine/data_stocks.csv', sep=',\s+')
df['DATE'] = df['DATE'].astype(str).str.strip()
df.set_index('DATE', inplace=True)
print(df.head())
</code></pre>
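<p>Alternatively, a sketch that strips the whitespace at read time (assuming the stray spaces come right after the delimiter, which is what the regex separator suggests):</p>
<pre><code>df = pd.read_csv('~/StockMachine/data_stocks.csv', sep=',',
                 skipinitialspace=True, index_col='DATE')
</code></pre>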
|
python|pandas|csv|valueerror|header-row
| 2
|
6,140
| 37,350,525
|
Train SyntaxNet model
|
<p>I am trying to train the Google Syntaxnet model in a different language using the datasets available at <a href="http://universaldependencies.org/" rel="nofollow">http://universaldependencies.org/</a> and following this <a href="https://github.com/tensorflow/models/tree/master/syntaxnet#detailed-tutorial-building-an-nlp-pipeline-with-syntaxnet" rel="nofollow">tutorial</a>. I edited the <code>syntaxnet/context.pbtxt</code> file, but when I try to run the <code>bazel</code> script provided in the guide I get the following error:</p>
<pre><code>syntaxnet/term_frequency_map.cc:62] Check failed: ::tensorflow::Status::OK() == (tensorflow::Env::Default()->NewRandomAccessFile(filename, &file)) (OK vs. Not found: brain_pos/greedy/0/label-map)
</code></pre>
<p>My doubt is: do I have to provide this file and the other files such as <code>fine-to-universal.map</code>, <code>tag-map</code>, <code>word-map</code> and so on, or does the training step create them from the training dataset? And if I have to provide them, how can I build them?</p>
<p>Thanks in advance</p>
|
<p>I recall having a similar error at the beginning. Did you use the exact code under 'training a parser step 1: local pretraining'? Because you will notice there's an uninitialized $PARAMS variable in there that is supposed to represent the parameters of your trained POS tagger. When you train a tagger (see earlier in the same tutorial), it will create files in models/brain_pos/greedy/$PARAMS. I believe that in your case, this $PARAMS variable was interpreted as 0 and the script is looking for a trained tagger in brain_pos/greedy/0 which it obviously does not find. If you just add a line at the beginning of the script that specifies the parameters of a trained tagger (128-0.08-3600-0.9-0 in the tutorial) it should work.</p>
<p>Thus:</p>
<pre><code>PARAMS=128-0.08-3600-0.9-0
bazel-bin/syntaxnet/parser_trainer \
--arg_prefix=brain_parser \
--batch_size=32 \
--projectivize_training_set \
--decay_steps=4400 \
--graph_builder=greedy \
--hidden_layer_sizes=200,200 \
--learning_rate=0.08 \
--momentum=0.85 \
--output_path=models \
--task_context=models/brain_pos/greedy/$PARAMS/context \
--seed=4 \
--training_corpus=tagged-training-corpus \
--tuning_corpus=tagged-tuning-corpus \
--params=200x200-0.08-4400-0.85-4
</code></pre>
|
nlp|tensorflow|bazel|syntaxnet
| 0
|
6,141
| 37,350,052
|
pandas groupby when one record belongs to more than one group
|
<p>I would like to be able to produce summary statistics and pivot tables from a dataset that can be grouped in multiple ways. The complication arises because each entry can belong to more than one group within one categorisation axis (see example below).</p>
<p>So far, I have found a solution based on multi-indexing and repeating each record as many times as it appears in category1*category2 combinations. However, this seems to be inflexible (I will need to check whether an entry appears in the same categories across different data sources, I might want to add another category system that would be called category3, a category "d" might get added to the category1 system, etc.). Moreover, it seems to go against basic principles of database design. </p>
<p><strong>My Question is:</strong> Is there any other (more elegant, flexible) way to solve this problem than my solution below? I could imagine keeping various tables, one with the actual data, and others with the grouping information (much like the stack table below) and using these flexibly as input to Groupby, but I don't know if that is possible and how to make that work. Any suggestions for improvements are also welcome. Thanks!</p>
<p>the raw data comes as something like:</p>
<pre><code>import pandas
data={'ID' : [1 , 2, 3, 4],
'year' : [2004, 2008 , 2006, 2009],
'money' : [10000 , 5000, 4000, 11500],
'categories1' : [ "a,b,c" , "c" , "a,c" , "" ],
'categories2' : ["one, two" , "one" , "five" , "eight"]}
df= pandas.DataFrame(data)
df.set_index('ID', inplace=True)
print df
</code></pre>
<p>Which gives:</p>
<pre><code> categories1 categories2 money year
ID
1 a,b,c one, two 10000 2004
2 c one 5000 2008
3 a,c five 4000 2006
4 eight 11500 2009
</code></pre>
<p>I want to be able to make pivot tables that look like this:</p>
<pre><code>Average money
year 2004 2005 2006 2007
category
a
b
c
</code></pre>
<p>and also: </p>
<pre><code>Average money
category2 one two three four
category1
a
b
c
</code></pre>
<p>So far, I have:</p>
<p>Step1: extracted the categories information using get_dummies:</p>
<pre><code>cat1=df['categories1'].str.get_dummies(sep=",")
print cat1
</code></pre>
<p>Which gives:</p>
<pre><code> a b c
ID
1 1 1 1
2 0 0 1
3 1 0 1
4 0 0 0
</code></pre>
<p>Step 2: stacked this:</p>
<pre><code>stack = cat1.stack()
stack.index.names=['ID', 'cat1']
stack.name='in_cat1'
print stack
</code></pre>
<p>Which gives:</p>
<pre><code>ID cat1
1 a 1
b 1
c 1
2 a 0
b 0
c 1
3 a 1
b 0
c 1
4 a 0
b 0
c 0
Name: in_cat1, dtype: int64
</code></pre>
<p>Step 3: joined that onto the original data frame to create a multi-indexed data frame</p>
<pre><code>dl = df.join(stack, how='inner')
print dl
</code></pre>
<p>Which looks like this:</p>
<pre><code> categories1 categories2 money year in_cat1
ID cat1
1 a a,b,c one, two 10000 2004 1
b a,b,c one, two 10000 2004 1
c a,b,c one, two 10000 2004 1
2 a c one 5000 2008 0
b c one 5000 2008 0
c c one 5000 2008 1
3 a a,c five 4000 2006 1
b a,c five 4000 2006 0
c a,c five 4000 2006 1
4 a eight 11500 2009 0
b eight 11500 2009 0
c eight 11500 2009 0
</code></pre>
<p>Step 4: which is then usable with pandas groupby and pivot_table commands</p>
<pre><code>dl.reset_index(level=1, inplace=True)
pt= dl.pivot_table(values='money', columns='year', index='cat1')
print pt
</code></pre>
<p>and does what I want:</p>
<pre><code>year 2004 2006 2008 2009
cat1
a 10000 4000 5000 11500
b 10000 4000 5000 11500
c 10000 4000 5000 11500
</code></pre>
<p>
I have repeated steps 2 + 3 with category2, so that now the dataframe has 3-level indexing. </p>
|
<p>I created a function that takes a <code>DataFrame</code> and a column name. It's expected that the column specified by the column name has a string that can be split by <code>','</code>. It will append this split to the index with the appropriate name.</p>
<pre><code>def expand_and_add(df, col):
    # repeat each row once per comma-separated token, keyed by that token
    expand = lambda x: pd.concat([x for i in x[col].split(',')], keys=x[col].split(','))
    df = df.apply(expand, axis=1).stack(0)
    df.index.levels[-1].name = col
    df.drop(col, axis=1, inplace=True)
    return df
</code></pre>
<p>Now this will help create the 3 layers of <code>MultiIndex</code>. I do believe manipulating the <code>MultiIndex</code> provides all the flexibility you need to create the pivots you want.</p>
<pre><code>new_df = expand_and_add(expand_and_add(df, 'categories1'), 'categories2')
</code></pre>
<p>Looks like:</p>
<pre><code>money year
ID categories1 categories2
1 a two 10000.0 2004.0
one 10000.0 2004.0
b two 10000.0 2004.0
one 10000.0 2004.0
c two 10000.0 2004.0
one 10000.0 2004.0
2 c one 5000.0 2008.0
3 a five 4000.0 2006.0
c five 4000.0 2006.0
4 eight 11500.0 2009.0
</code></pre>
<p>Your pivots are still going to be individually messy but here are some.</p>
<h3>mean [categories1, year]</h3>
<pre><code>new_df.set_index(new_df.year.astype(int), append=True)['money'].groupby(level=[1, 3]).mean().unstack()
year 2004 2006 2008 2009
categories1
NaN NaN NaN 11500.0
a 10000.0 4000.0 NaN NaN
b 10000.0 NaN NaN NaN
c 10000.0 4000.0 5000.0 NaN
</code></pre>
<h3>mean [categories1, categories2]</h3>
<pre><code>new_df.groupby(level=[1, 2])['money'].mean().unstack()
categories2 two eight five one
categories1
NaN 11500.0 NaN NaN
a 10000.0 NaN 4000.0 10000.0
b 10000.0 NaN NaN 10000.0
c 10000.0 NaN 4000.0 7500.0
</code></pre>
|
python|database|pandas
| 0
|
6,142
| 31,632,050
|
Where should I put try/except?
|
<p>In an effort to improve my coding, I was wondering if I should put try/except inside a function or keep it outside. The following examples display what I mean.</p>
<pre><code> import pandas as pd
df = pd.read_csv("data.csv")
# Example 1
def do_something(df):
# Add some columns
# Split columns
return df
try:
df = do_something(df)
except Exception, e:
print e
# Example 2
def do_something(df):
try:
# Add some columns
# Split columns
except Exception, e:
print e
df = pd.DataFrame()
return df
df = do_something(df)
</code></pre>
<p>It might seem the same, but the first example is clearer about what happens, while the second seems cleaner.</p>
|
<h2>Possibility of internal handling</h2>
<p>If the function can provide a <em>sensible</em> recovery from the exception and handle it within its scope of responsibilities and without any additional information, then you can catch the exception right there</p>
<h2>Raise or translate to caller</h2>
<p>Otherwise, it might be better left up to the caller to deal with it. And sometimes even in this case, the function might catch the exception and then immediately raise it again with a translation that makes sense to the caller, as sketched below.</p>
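<p>A minimal sketch of that last pattern, assuming a hypothetical required column, could look like:</p>
<pre><code>def do_something(df):
    try:
        df['ratio'] = df['a'] / df['b']  # hypothetical computation
    except KeyError as e:
        # translate the low-level KeyError into terms the caller understands
        raise ValueError('input frame is missing a required column: %s' % e)
    return df
</code></pre>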
|
python|exception|pandas
| 5
|
6,143
| 31,396,226
|
Numpy multiply multiple columns by scalar
|
<p>This seems like a really simple question but I can't find a good answer anywhere. How might I multiply (in place) select columns (perhaps selected by a list) by a scalar using numpy?</p>
<p>E.g. Multiply columns 0 and 2 by 4</p>
<pre><code>In: arr=([(1,2,3,5,6,7), (4,5,6,2,5,3), (7,8,9,2,5,9)])
Out: arr=([(4,2,12,5,6,7), (16,5,24,2,5,3), (28,8,36,2,5,9)])
</code></pre>
<p>Currently I am doing this in multiple steps but I feel like there must be a better way especially if the list gets larger. Current way:</p>
<pre><code>arr['f0'] *= 4
arr['f2'] *= 4
</code></pre>
|
<p>You can use array slicing as follows for this -</p>
<pre><code>In [10]: arr=([(1,2,3,5,6,7), (4,5,6,2,5,3), (7,8,9,2,5,9)])
In [11]: narr = np.array(arr)
In [13]: narr[:,(0,2)] = narr[:,(0,2)]*4
In [14]: narr
Out[14]:
array([[ 4, 2, 12, 5, 6, 7],
[16, 5, 24, 2, 5, 3],
[28, 8, 36, 2, 5, 9]])
</code></pre>
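<p>The same can be written truly in place with an augmented assignment, which matches the question's intent:</p>
<pre><code>narr[:, (0, 2)] *= 4
</code></pre>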
|
python|numpy
| 3
|
6,144
| 31,295,084
|
Matplotlib data plot contains too many labels
|
<p>I tried to visualize csv data with a 3D graph.</p>
<p>My code is included below:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
MY_FILE = 'total_watt.csv'
df = pd.read_csv(MY_FILE, parse_dates=[0], header=None, names=['datetime', 'consumption'])
df['date'] = [x.date() for x in df['datetime']]
df['time'] = [x.time() for x in df['datetime']]
pv = df.pivot(index='time', columns='date', values='consumption')
# to avoid holes in the surface
pv = pv.fillna(0.0)
xx, yy = np.mgrid[0:len(pv),0:len(pv.columns)]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
surf=ax.plot_surface(xx, yy, pv.values, cmap='jet', cstride=1, rstride=1)
fig.colorbar(surf, shrink=0.5, aspect=10)
dates = [x.strftime('%m-%d') for x in pv.columns]
times = [x.strftime('%H:%M') for x in pv.index]
consumptions = [x for x in pv.values]
ax.set_title('Energy consumptions Clusters')
ax.set_xlabel('time', color='lightseagreen')
ax.set_ylabel('date(year 2011)', color='lightseagreen')
ax.set_zlabel('energy consumption', color='lightseagreen')
ax.set_xticks(xx[::10,0])
ax.set_xticklabels(times[::10], color='lightseagreen')
ax.set_yticks(yy[0,::10])
ax.set_yticklabels(dates[::10], color='lightseagreen')
ax.set_zticklabels(consumptions[::100000], color='lightseagreen')
ax.set_axis_bgcolor('black')
plt.show()
</code></pre>
<p>Although I successfully colored;</p>
<p>x-axis <code>ax.set_xlabel('time', color='lightseagreen')</code></p>
<p>y-axis <code>ax.set_ylabel('date(year 2011)', color='lightseagreen')</code></p>
<p>z-axis <code>ax.set_zlabel('energy consumption', color='lightseagreen')</code></p>
<p>ticks in <code>x-axis ax.set_xticklabels(times[::10], color='lightseagreen')</code></p>
<p>ticks in <code>y-axis ax.set_yticklabels(dates[::10], color='lightseagreen')</code></p>
<p>I cannot color the ticks on the z-axis properly. The x-axis plots 'time' and the y-axis plots 'date', and both are defined properly, so I think the way I defined consumptions (<code>consumptions = [x for x in pv.values]</code>) is incorrect and causes this error.</p>
<p>The 3d graph, I got from this code is</p>
<p><img src="https://i.stack.imgur.com/XEsUL.png" alt=""></p>
<p>What may cause my issue and how do I resolve it?</p>
|
<p>I'm not sure I fully understand the purpose of your question and I wasn't able to test it, but I think you should change the length of the zticks list:</p>
<pre><code># resizing the yticks list and changing the values
ax.set_yticks(yy[0,::10])
# changing the yticks labels
ax.set_yticklabels(dates[::10], color='lightseagreen')
# changing the zticks labels whereas zticks is not resized
# try adding
ax.set_zticks(consumptions[::100000])
# before changing the labels
ax.set_zticklabels(consumptions[::100000], color='lightseagreen')
</code></pre>
<p>Sorry I couldn't check if it works</p>
|
python|pandas|matplotlib
| 0
|
6,145
| 64,250,531
|
Is it possible to have both numpy array values and object atribute pointing to the same position in memory?
|
<p>In a project I am developing I was wondering if it is possible to do something like:</p>
<pre><code>import numpy as np

class P:
def __init__(self, x):
self.x = x
def __str__(self):
return str(self.x)
def __repr__(self):
return self.__str__()
obj_lst = [P(x=2), P(x=3), P(x=4), P(x=5)]
np_x = np.array([p.x for p in obj_lst])
obj_lst[0].x = 10
print(np_x)
</code></pre>
<p>The expected result would be,</p>
<pre><code>array([10, 3, 4, 5])
</code></pre>
<p>Also,</p>
<pre><code>np_x[2] = 20
print(obj_lst)
</code></pre>
<p>I would get,</p>
<pre><code>[10, 3, 20, 5]
</code></pre>
<p>So both the object's attribute and the values in the array would point to the same position in memory.</p>
<p>This way I could use the OOP abstraction by one side and the numpy speed for the complex algebraic operations.</p>
|
<p>You can do it if you think a bit outside the box (and possibly tweak your requirements just slightly). Let's reverse the way things are set up and create a buffer that holds the data first:</p>
<pre><code>np_x = np.array([2, 3, 4, 5])
</code></pre>
<p>Now define your class a bit differently. Instead of recording the <em>value</em> of <code>x</code>, we will record a pointer to it as an array and index (later you can do some interesting things with raw memory locations, but let's not do that for now). You can keep pretty much the exact same interface by making the <code>x</code> attribute a <code>property</code> in the class, and stashing the data for it in an instance attribute of the same name:</p>
<pre><code>class P:
def __init__(self, buffer, offset):
self.__dict__['x'] = (buffer, offset)
@property
def x(self):
buf, off = self.__dict__['x']
return buf[off]
@x.setter
def x(self, value):
buf, off = self.__dict__['x']
buf[off] = value
def __str__(self):
return str(self.x)
def __repr__(self):
return self.__str__()
</code></pre>
<p>Now you can make the list of objects. This is the only part of your code that changes outside the class definition:</p>
<pre><code>obj_lst = [P(np_x, 0), P(np_x, 1), P(np_x, 2), P(np_x, 3)]
</code></pre>
<p>All your changes are now mutually transparent because you share a buffer:</p>
<pre><code>>>> obj_lst[0].x = 10
>>> np_x
array([10, 3, 4, 5])
>>> np_x[-2] = 20
>>> obj_lst
[10, 3, 20, 5]
</code></pre>
<p>The neat thing about this is that <code>P</code> will work with essentially any type that supports <code>__getitem__</code> and <code>__setitem__</code>, regardless of how it is indexed. For example, you can apply it to a <code>dict</code>:</p>
<pre><code>>>> d_x = {'a': 2, 'b': 3, 'c': 4, 'd': 5}
>>> obj_lst = [P(d_x, 'a'), P(d_x, 'b'), P(d_x, 'c'), P(d_x, 'd')]
>>> obj_lst[0].x = 10
>>> d_x
{'a': 10, 'b': 3, 'c': 4, 'd': 5}
>>> d_x['c'] = 20
>>> obj_lst
[10, 3, 20, 5]
</code></pre>
<p>You can also supply complex indices to numpy arrays:</p>
<pre><code>>>> np_x = np.arange(10)
>>> obj_lst = [P(np_x, 0), P(np_x, slice(1, None, 2)), P(np_x, [1, 2, 6, 8])]
>>> obj_lst
[0, [1 3 5 7 9], [1 2 6 8]]
>>> obj_lst[-1].x = 100
>>> np_x
array([ 0, 100, 100, 3, 4, 5, 100, 7, 100, 9])
>>> np_x[5:] = 20
>>> obj_lst
[0, [100 3 20 20 20], [100 100 20 20]]
</code></pre>
|
python|numpy|oop|pointers
| 2
|
6,146
| 47,646,130
|
Create a nnModule that's just the identity
|
<p>
I'm trying to debug a pretty complex interaction between different <code>nn.Module</code>s. It would be very helpful for me to be able to replace one of them with just an identity network for debugging purposes. For example:</p>
<pre class="lang-py prettyprint-override"><code>net_a = NetworkA()
net_b = NetworkB()
net_c = NetworkC()
input = Autograd.Variable(torch.rand(10,2))
out = net_a(input)
out = net_b(out)
out = net_c(out)
</code></pre>
<p>I would like to be able to just change the second line to <code>net_b = IdentityNet()</code>, instead of having to go through and reconnect all my As to Cs. But when I make a completely empty <code>nn.Module</code>, the optimizer throws <code>ValueError: optimizer got an empty parameter list</code>.</p>
<p>Is there any workaround this? </p>
<p>A minimal non-working example:</p>
<pre class="lang-py prettyprint-override"><code>import torch.optim as optim
class IdentityModule(nnModule):
def forward(self, inputs):
return inputs
identity = IdentityModule()
opt = optim.Adam(identity, lr=0.001)
out = identity(any_tensor)
error = torch.mean(out)
error.backward()
opt.step()
</code></pre>
|
<p>You can also <a href="https://github.com/pytorch/pytorch/issues/9160#issuecomment-402381789" rel="nofollow noreferrer">just do</a>:</p>
<pre><code>net_b = torch.nn.Sequential()
</code></pre>
<p>EDIT: in PyTorch 1.7, <a href="https://pytorch.org/docs/stable/generated/torch.nn.Identity.html#torch.nn.Identity" rel="nofollow noreferrer"><code>nn.Identity</code></a> appeared.</p>
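<p>With that available, the drop-in replacement from the question becomes simply:</p>
<pre><code>net_b = torch.nn.Identity()
</code></pre>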
|
pytorch
| 4
|
6,147
| 47,750,388
|
TensorBoard Callback in Keras does not respect initial_epoch of fit?
|
<p>
I'm trying to train multiple models in parallel on a single graphics card. To achieve that I need to resume training of models from saved weights which is not a problem. The <code>model.fit()</code> method has even a parameter initial_epoch that lets me tell the model which epoch the loaded model is on. However when i pass a TensorBoard callback to the <code>fit()</code> method in order to monitor the training of the models, on Tensorboard all data is shown on x=0.</p>
<p>Is there a way to overcome this and adjust the epoch on TensorBoard?</p>
<p>By the way: I'm running Keras 2.0.6 and Tensorflow 1.3.0.
</p>
<pre class="lang-python prettyprint-override"><code>self.callbacks = [TensorBoardCallback(log_dir='./../logs/'+self.model_name, histogram_freq=0, write_graph=True, write_images=False, start_epoch=self.step_num)]
self.model.fit(x=self.data['X_train'], y=self.data['y_train'], batch_size=self.input_params[-1]['batch_size'], epochs=1, validation_data=(self.data['X_test'], self.data['y_test']), verbose=verbose, callbacks=self.callbacks, shuffle=self.hyperparameters['shuffle_data'], initial_epoch=self.step_num)
self.model.save_weights('./weights/%s.hdf5'%(self.model_name))
self.model.load_weights('./weights/%s.hdf5'%(self.model_name))
self.model.fit(x=self.data['X_train'], y=self.data['y_train'], batch_size=self.input_params[-1]['batch_size'], epochs=1, validation_data=(self.data['X_test'], self.data['y_test']), verbose=verbose, callbacks=self.callbacks, shuffle=self.hyperparameters['shuffle_data'], initial_epoch=self.step_num)
self.model.save_weights('./weights/%s.hdf5'%(self.model_name))
</code></pre>
<p>
The resulting graph on Tensorboard looks like this which is not what i was hoping for:
<a href="https://i.stack.imgur.com/iWRh5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iWRh5.png" alt="enter image description here"></a>
</p>
<p><strong>Update:</strong></p>
<p>When passing <code>epochs=10</code> to the first <code>model.fit()</code> the 10 epoch results are displayed in TensorBoard (see picture).</p>
<p>However, when reloading the model and running it (with the same callback attached), the <code>on_epoch_end</code> method of the callback never gets called.</p>
<p><a href="https://i.stack.imgur.com/xaCVy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xaCVy.png" alt="enter image description here"></a></p>
|
<p>Turns out that when I pass the number of epochs to <code>model.fit()</code> to tell it how long to train, it has to be the absolute epoch count measured FROM zero, not the number of additional epochs. So if <code>initial_epoch=self.step_num</code>, then <code>epochs=self.step_num+10</code> if I want to train for 10 more epochs.</p>
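<p>A minimal sketch of the resume pattern (the model, data, and callback names here are placeholders, not from the original code):</p>
<pre><code># epochs already completed before resuming
step_num = 10
model.fit(X, y,
          initial_epoch=step_num,     # resume counting from epoch 10
          epochs=step_num + 10,       # ...and stop after epoch 19
          callbacks=[tensorboard_cb])
</code></pre>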
|
tensorflow|keras|tensorboard
| 4
|
6,148
| 47,540,944
|
Python / Pandas - Merging on index with multiple repeated keys
|
<p>I have this dataframe:</p>
<pre><code>df1:
year revenues
index
03374312000153 2010 25432
03374312000153 2009 25433
48300560000198 2014 13894
48300560000198 2013 18533
48300560000198 2012 18534
NaN NaN NaN
...
</code></pre>
<p>And I have this other dataframe:</p>
<pre><code>df2:
Name Street
index
03374312000153 Yeap Co Locc St
54623827374939 Damn Co Geez St
37273829349299 Woohoo Co Under St
...
</code></pre>
<p>I need to select only the rows from df1 whose index appears in df2.index and merge them, so it would look like this:</p>
<pre><code> year revenues Name Street
index
03374312000153 2010 25432 Yeap Co Locc St
03374312000153 2009 25433 Yeap Co Locc St
...
</code></pre>
<p>If I try:</p>
<pre><code>df2=df2.merge(df1,left_index=True,right_index=True)
</code></pre>
<p>I get an error:</p>
<pre><code>TypeError: type object argument after * must be a sequence, not map
</code></pre>
<p>If I try:</p>
<pre><code>df2=df2.join(df1)
</code></pre>
<p>I get the same error as above.</p>
<p>Can someone help?</p>
|
<p>I actually see nothing wrong with what you're doing, using Pandas 0.19.2. If your version isn't up to date that could be your issue. Check it with:</p>
<pre><code>import pandas as pd
pd.__version__
</code></pre>
<p>How I built your dataframes:</p>
<pre><code>df1 = pd.DataFrame({'year' : pd.Series([2010,2009,2014,2013,2012], index=['03374312000153','03374312000153','48300560000198','48300560000198','48300560000198']),
'revenues' : pd.Series([25432,25433,13894,18533,18534], index=['03374312000153','03374312000153','48300560000198','48300560000198','48300560000198'])})
df2 = pd.DataFrame({'Name' : pd.Series(['Yeap Co','Damn Co','Woohoo Co'],index=['03374312000153','54623827374939','37273829349299'] ),
'Street' : pd.Series(['Locc St','Geez St','Under St'], index=['03374312000153','54623827374939','37273829349299'] )})
df2.merge(df1,left_index=True,right_index=True)
Name Street revenues year
03374312000153 Yeap Co Locc St 25432 2010
03374312000153 Yeap Co Locc St 25433 2009
</code></pre>
<p>Some thoughts:</p>
<ul>
<li>It's not preferred practice to have a non-unique index, in part
because if you end up writing to an RDBMS that has a constraint on
a unique primary key, you'll error out. In this case you'd join on a
column as a key instead of the index (see the sketch after this list). </li>
<li>It's good practice to specify (as @Wen did) the 'how' option to your method. </li>
<li>It's good practice to generate a new dataframe from a join instead of writing over an old one. That way if the join fails, especially on a large dataframe, you don't have to re-create the previous dataframes.</li>
</ul>
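<p>A minimal sketch of joining on a column as a key instead of the index, assuming the dataframes built above:</p>
<pre><code># move the shared key out of the index into a regular column
df1k = df1.reset_index().rename(columns={'index': 'key'})
df2k = df2.reset_index().rename(columns={'index': 'key'})
merged = df2k.merge(df1k, on='key', how='inner')
</code></pre>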
|
python|pandas
| 1
|
6,149
| 49,004,037
|
TypeError: 'DataFrame' object is not callable in concatenating different dataframes of certain types
|
<p>I keep getting the following error.</p>
<p><a href="https://i.stack.imgur.com/q21nP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q21nP.png" alt="enter image description here"></a></p>
<p>I read a file that contains time series data of 3 columns: [meter ID] [daycode(explain later)] [meter reading in kWh]</p>
<pre><code>consum = pd.read_csv("data/File1.txt", delim_whitespace=True, encoding = "utf-8", names =['meter', 'daycode', 'val'], engine='python')
consum.set_index('meter', inplace=True)
test = consum.loc[[1048]]
</code></pre>
<p>I will observe meter readings for all the length of data that I have in this file, but first filter by meter ID.</p>
<pre><code>test['day'] = test['daycode'].astype(str).str[:3]
test['hm'] = test['daycode'].astype(str).str[-2:]
</code></pre>
<p>For readability, I convert daycode based on its rule. First 3 digits are in range of 1 to 365 x2 = 730, last 2 digits in range of 1 to 48. These are 30-min interval reading of 2-year length. (but not all have in full)</p>
<p>So I create files that contain dates in one, and times in another separately. I will use index to convert the digits of daycode into the corresponding date & time that these file contain.</p>
<pre><code>#dcodebook index starts from 0. So minus 1 from the daycode before match
dcodebook = pd.read_csv("data/dcode.txt", encoding = "utf-8", sep = '\r', names =['match'])
#hcodebook starts from 1
hcodebook = pd.read_csv("data/hcode.txt", encoding = "utf-8", sep ='\t', lineterminator='\r', names =['code', 'print'])
hcodebook = hcodebook.drop(['code'], axis= 1)
</code></pre>
<p>For some weird reason, dcodebook was indexed using <code>.iloc</code> function as I understood, but hcodebook needed <code>.loc</code>. </p>
<pre><code>#iloc: by int-position
#loc: by label value
#ix: by both
day_df = dcodebook.iloc[test['day'].astype(int) - 1].reset_index(drop=True)
#to avoid duplicate index Valueerror, create separate dataframes..
hm_df = hcodebook.loc[test['hm'].astype(int) - 1]
#.to_frame error / do I need .reset_index(drop=True)?
</code></pre>
<p>The following line is where the code crashes. </p>
<pre><code>datcode_df = day_df(['match']) + ' ' + hm_df(['print'])
print datcode_df
print test
</code></pre>
<h1>What I don't understand:</h1>
<ul>
<li><strong>I tested earlier that columns of different dataframes can be merged using the simple addition as seen</strong></li>
<li>I initially assigned this to the existing column ['daycode'] in test dataframe, so that previous values will be replaced. And the same error msg was returned.</li>
</ul>
<p>Please advise.</p>
|
<p>You need both <code>DataFrames</code> to be the same size, so it is necessary that <code>day</code> and <code>hm</code> line up row by row.</p>
<p>Then call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a> with <code>drop=True</code> so both have the same indices, and finally remove the <code>()</code> in the join:</p>
<pre><code>day_df = dcodebook.iloc[test['day'].astype(int) - 1].reset_index(drop=True)
hm_df = hcodebook.loc[test['hm'].astype(int) - 1].reset_index(drop=True)
datcode_df = day_df['match'] + ' ' + hm_df['print']
</code></pre>
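<p>For reference, the root cause of the <code>TypeError</code> is parentheses versus square brackets, a tiny demo:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'match': ['a', 'b']})
df['match']      # square brackets select a column: works
# df(['match'])  # parentheses try to *call* the DataFrame: TypeError
</code></pre>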
|
pandas|dataframe|merge|typeerror
| 1
|
6,150
| 59,015,036
|
Retuning columns in a numpy array given a boolean index
|
<p>I have the given dataset:</p>
<pre><code>data = np.array([
[1, 2, 1, 3, 1, 2, 1],
[3, 4, 1, 5, 2, 7, 2],
[2, 1, 2, 1, 1, 4, 5],
[6, 1, 2 ,3, 1, 3, 1]])
cols_idx = np.array([0, 0, 1, 0, 1, 0, 0])
</code></pre>
<p>I want to return columns from <code>data</code> where <code>cols_idx == 1</code>. For that I used:</p>
<pre><code>data[:, np.nonzero(cols_idx)]
</code></pre>
<p>But it returns a 3D instead a 2D array:</p>
<pre><code>data[:, np.nonzero(cols_idx)]
array([[[1, 1]],
[[1, 2]],
[[2, 1]],
[[2, 1]]])
data[:, np.nonzero(cols_idx)].shape
(4, 1, 2)
</code></pre>
<p>I would like the output to be:</p>
<pre><code>data[:, np.nonzero(cols_idx)]
array([[1, 1],
[1, 2],
[2, 1],
[2, 1]])
data[:, np.nonzero(cols_idx)].shape
(4, 2)
</code></pre>
<p>How can I achieve that?</p>
|
<p><code>print(np.nonzero(cols_idx))</code> gives <code>(array([2, 4]),)</code> (a tuple rather than just an array)</p>
<p>So you should use <code>np.nonzero(cols_idx)[0] # gives [2 4]</code> to get what you want:</p>
<p><strong>Full code:</strong></p>
<pre><code>import numpy as np
data = np.array([
[1, 2, 1, 3, 1, 2, 1],
[3, 4, 1, 5, 2, 7, 2],
[2, 1, 2, 1, 1, 4, 5],
[6, 1, 2 ,3, 1, 3, 1]])
cols_idx = np.array([0, 0, 1, 0, 1, 0, 0])
new_data = data[:, np.nonzero(cols_idx)[0]]
print(new_data)
'''[[1 1]
[1 2]
[2 1]
[2 1]]'''
print(new_data.shape) # (4,2)
</code></pre>
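<p>Alternatively, since <code>cols_idx</code> only contains 0s and 1s, you can index with a boolean mask directly and skip <code>np.nonzero</code> altogether:</p>
<pre><code>new_data = data[:, cols_idx.astype(bool)]
print(new_data.shape)  # (4, 2)
</code></pre>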
|
python|numpy
| 1
|
6,151
| 58,678,005
|
Numeric file name, upload multiple txt files, Python Pandas
|
<p>I have around 70 .txt files all saved as </p>
<pre><code>1.txt, 2.txt
</code></pre>
<p>and so on. I would like to create a dataframe with just one column, <code>fileContent</code>, where each row holds the text from one txt file. Each time I try to load a file using a name from the array of numbers, I get an error.
Is it achievable?
It is important that my array is <strong><code>[1,2,3,........70]</code></strong> not <code>[1.txt, 2.txt.....70.txt]</code></p>
|
<pre><code>import pandas as pd
import os
txt_files = [f for f in os.listdir('path_of_txt_files') if f.endswith('.txt')]
pd.DataFrame(pd.Series(dict(zip(txt_files, [open(os.path.join('path_of_txt_files', f)).read() for f in txt_files]))))
</code></pre>
<p>This will create a table with the filenames as the index and their respective contents in a single column.</p>
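<p>If the numeric order <code>1.txt, 2.txt, ... 70.txt</code> matters (plain <code>os.listdir</code> order is arbitrary), here is a sketch that sorts by the numeric stem; the folder name is an assumption:</p>
<pre><code>import os
import pandas as pd

folder = 'path_of_txt_files'
txt_files = sorted(
    (f for f in os.listdir(folder) if f.endswith('.txt')),
    key=lambda f: int(os.path.splitext(f)[0])  # sorts 2.txt before 10.txt
)
contents = [open(os.path.join(folder, f)).read() for f in txt_files]
df = pd.DataFrame({'fileContent': contents})
</code></pre>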
|
python|pandas|dataframe
| 3
|
6,152
| 70,029,304
|
Create pandas dataframe from datetime range
|
<p>I currently have a data that ranges from 2020-11-03 to 2021-10-01.</p>
<p>I want to make a new dataframe where the row value is equal to the date.</p>
<p>To clarify the first row of the datafame would be 2020-11-03 and the second row would be 2020-11-04 and so on.</p>
<p>Would there be a way to create a new dataframe where the rows would be every single date between that given range?</p>
<p>I am planning on mapping the other values later on, so I currently just need a new dataframe that has just one column.</p>
<p>Thank you in advance!!</p>
|
<p>You can use the pandas function <code>date_range</code> (documentation <a href="https://pandas.pydata.org/docs/reference/api/pandas.date_range.html" rel="noreferrer">here</a>) and pass your desired date strings to the <code>start</code> and <code>end</code> arguments (and the default frequency is 1 day):</p>
<pre><code>df = pd.DataFrame({'date':pd.date_range(start='2020-11-03', end='2021-10-01')})
</code></pre>
<p>Output:</p>
<pre><code>>>> df
date
0 2020-11-03
1 2020-11-04
2 2020-11-05
3 2020-11-06
4 2020-11-07
.. ...
328 2021-09-27
329 2021-09-28
330 2021-09-29
331 2021-09-30
332 2021-10-01
[333 rows x 1 columns]
</code></pre>
|
python|pandas|datetime
| 5
|
6,153
| 70,239,393
|
mypy error when evaluating return of np.argmin function
|
<p>I have a function that looks as follows:</p>
<pre><code>import numpy as np
def test() -> np.ndarray:
return np.argmin(np.array([1, 2, 3]))
</code></pre>
<p>According to the <a href="https://numpy.org/doc/stable/reference/generated/numpy.argmin.html" rel="nofollow noreferrer">documentation</a> the return type for <code>np.argmin</code> is <code>np.ndarray</code>, but mypy raises the error <code>Incompatible return value type (got "integer[Any]", expected "ndarray")</code>. How do I correctly annotate this function?</p>
|
<p>If the array you pass is 1D (shape is <code>(X,)</code>), <code>np.argmin</code> returns the index of the smallest value in that array. For <code>[1,2,3]</code>, obviously, the lowest value is <code>1</code> at index <code>0</code>; thus <code>np.argmin</code> will return zero:</p>
<pre><code>>>> import numpy as np
>>> a = np.array([1,2,3])
>>> np.argmin(a)
0
</code></pre>
<p>It's similar to <code>np.argmax</code>, which returns the index of the <em>largest</em> value in the specified array:</p>
<pre><code>>>> np.argmax(a)
2
</code></pre>
<p>If the array has more than one dimension <em>and</em> you pass the <code>axis</code> parameter, <code>np.argmin</code> returns an array of indices: along the given axis, each slice is reduced to the index of its smallest value. For example:</p>
<pre><code>>>> a = np.array([[1, 2, 3],
[6, 5, 4]])
>>> np.argmin(a, axis=1)
array([0, 2])
</code></pre>
<p>You need to annotate your method to return <code>np.integer</code>, which is what <code>argmin</code> is annotated to return:</p>
<pre><code>import numpy as np
def test() -> np.integer:
return np.argmin(np.array([1, 2, 3]))
</code></pre>
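<p>If you prefer a plain Python <code>int</code> in the signature, an explicit cast also satisfies mypy:</p>
<pre><code>import numpy as np

def test() -> int:
    return int(np.argmin(np.array([1, 2, 3])))
</code></pre>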
|
python|numpy|mypy
| 0
|
6,154
| 70,132,252
|
Using NumPy argmax to count vs for loop
|
<p>I currently use something like the similar bit of code to determine comparison</p>
<pre><code>list_of_numbers = [29800.0, 29795.0, 29795.0, 29740.0, 29755.0, 29745.0]
high = 29980.0
lookback = 10
counter = 1
for number in list_of_numbers:
if (high >= number) \
and (counter < lookback):
counter += 1
else:
break
</code></pre>
<p>The resulting <code>counter</code> value will be <code>7</code>. However, it is very taxing on large data arrays. So, I have looked for a solution and came up with <code>np.argmax()</code>, but there seems to be an issue. For example the following:</p>
<pre><code>list_of_numbers = [29800.0, 29795.0, 29795.0, 29740.0, 29755.0, 29745.0]
np_list = np.array(list_of_numbers)
high = 29980.0
print(np.argmax(np_list > high) + 1)
</code></pre>
<p>this will get output <code>1</code>, just like <code>argmax</code> is suppose to .. but I want it to get output <code>7</code>. Is there another method to do this that will give me similar output for the if statement ?</p>
|
<p>You can get a boolean array for where <code>high >= number</code> using NumPy:</p>
<pre><code>list_of_numbers = [29800.0, 29795.0, 29795.0, 29740.0, 29755.0, 29745.0]
high = 29980.0
lookback = 10
boolean_arr = np.less_equal(np.array(list_of_numbers), high)
</code></pre>
<p>Then find the first <strong>False</strong> element in it, which corresponds to the <code>break</code> condition in your code. Furthermore, to account for the counter limit, you can apply <code>np.cumsum</code> to the <em>boolean array</em> and find the first position that reaches the specified <em>lookback</em> magnitude. So, the result will be the smaller value between <em>break_arr</em> and <em>lookback_lim</em>:</p>
<pre><code>break_arr = np.where(boolean_arr == False)[0][0] + 1
lookback_lim = np.where(np.cumsum(boolean_arr) == lookback)[0][0] + 1
result = min(break_arr, lookback_lim)
</code></pre>
<p>If your <code>list_of_numbers</code> contains no value bigger than the specified <em>high</em> limit (for <em><strong>break_arr</strong></em>), <strong>or</strong> the specified <em>lookback</em> exceeds every value in <code>np.cumsum(boolean_arr)</code> (for <em><strong>lookback_lim</strong></em>), the aforementioned code will fail with an error like the following, coming from <code>np.where</code>:</p>
<blockquote>
<p>IndexError: index 0 is out of bounds for axis 0 with size 0</p>
</blockquote>
<p>Which can be handled by <code>try-except</code> or <code>if</code> statements e.g.:</p>
<pre><code>try:
break_arr = np.where(boolean_arr == False)[0][0] + 1
except:
break_arr = len(boolean_arr) + 1
try:
lookback_lim = np.where(np.cumsum(boolean_arr) == lookback)[0][0] + 1
except:
lookback_lim = len(boolean_arr) + 1
</code></pre>
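<p>For this particular task, a more compact equivalent of the original loop (it reproduces <code>counter = 7</code> for the sample data) can avoid the exceptions entirely:</p>
<pre><code>import numpy as np

np_list = np.array(list_of_numbers)
mask = np_list <= high
# position just after the first failing element, or len+1 if nothing fails
first_fail = np.argmax(~mask) + 1 if not mask.all() else len(mask) + 1
counter = min(first_fail, lookback)  # 7 for the sample data
</code></pre>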
|
python|numpy
| 1
|
6,155
| 56,248,645
|
How do I get the precursor nodes of each layer in Pytorch?
|
<p>I can get the summary of the model from pytorch, just like keras:</p>
<pre><code>device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
resnet = models.resnet18().to(device)
summary(resnet , (3, 224, 224))
</code></pre>
<p>result like this:</p>
<pre><code>----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 112, 112] 9,408
BatchNorm2d-2 [-1, 64, 112, 112] 128
ReLU-3 [-1, 64, 112, 112] 0
MaxPool2d-4 [-1, 64, 56, 56] 0
Conv2d-5 [-1, 64, 56, 56] 36,864
BatchNorm2d-6 [-1, 64, 56, 56] 128
ReLU-7 [-1, 64, 56, 56] 0
Conv2d-8 [-1, 64, 56, 56] 36,864
BatchNorm2d-9 [-1, 64, 56, 56] 128
ReLU-10 [-1, 64, 56, 56] 0
BasicBlock-11 [-1, 64, 56, 56] 0
Conv2d-12 [-1, 64, 56, 56] 36,864
BatchNorm2d-13 [-1, 64, 56, 56] 128
ReLU-14 [-1, 64, 56, 56] 0
Conv2d-15 [-1, 64, 56, 56] 36,864
BatchNorm2d-16 [-1, 64, 56, 56] 128
ReLU-17 [-1, 64, 56, 56] 0
BasicBlock-18 [-1, 64, 56, 56] 0
Conv2d-19 [-1, 128, 28, 28] 73,728
BatchNorm2d-20 [-1, 128, 28, 28] 256
ReLU-21 [-1, 128, 28, 28] 0
Conv2d-22 [-1, 128, 28, 28] 147,456
BatchNorm2d-23 [-1, 128, 28, 28] 256
Conv2d-24 [-1, 128, 28, 28] 8,192
BatchNorm2d-25 [-1, 128, 28, 28] 256
ReLU-26 [-1, 128, 28, 28] 0
BasicBlock-27 [-1, 128, 28, 28] 0
Conv2d-28 [-1, 128, 28, 28] 147,456
BatchNorm2d-29 [-1, 128, 28, 28] 256
ReLU-30 [-1, 128, 28, 28] 0
Conv2d-31 [-1, 128, 28, 28] 147,456
BatchNorm2d-32 [-1, 128, 28, 28] 256
ReLU-33 [-1, 128, 28, 28] 0
BasicBlock-34 [-1, 128, 28, 28] 0
Conv2d-35 [-1, 256, 14, 14] 294,912
BatchNorm2d-36 [-1, 256, 14, 14] 512
ReLU-37 [-1, 256, 14, 14] 0
Conv2d-38 [-1, 256, 14, 14] 589,824
BatchNorm2d-39 [-1, 256, 14, 14] 512
Conv2d-40 [-1, 256, 14, 14] 32,768
BatchNorm2d-41 [-1, 256, 14, 14] 512
ReLU-42 [-1, 256, 14, 14] 0
BasicBlock-43 [-1, 256, 14, 14] 0
Conv2d-44 [-1, 256, 14, 14] 589,824
BatchNorm2d-45 [-1, 256, 14, 14] 512
ReLU-46 [-1, 256, 14, 14] 0
Conv2d-47 [-1, 256, 14, 14] 589,824
BatchNorm2d-48 [-1, 256, 14, 14] 512
ReLU-49 [-1, 256, 14, 14] 0
BasicBlock-50 [-1, 256, 14, 14] 0
Conv2d-51 [-1, 512, 7, 7] 1,179,648
BatchNorm2d-52 [-1, 512, 7, 7] 1,024
ReLU-53 [-1, 512, 7, 7] 0
Conv2d-54 [-1, 512, 7, 7] 2,359,296
BatchNorm2d-55 [-1, 512, 7, 7] 1,024
Conv2d-56 [-1, 512, 7, 7] 131,072
BatchNorm2d-57 [-1, 512, 7, 7] 1,024
ReLU-58 [-1, 512, 7, 7] 0
BasicBlock-59 [-1, 512, 7, 7] 0
Conv2d-60 [-1, 512, 7, 7] 2,359,296
BatchNorm2d-61 [-1, 512, 7, 7] 1,024
ReLU-62 [-1, 512, 7, 7] 0
Conv2d-63 [-1, 512, 7, 7] 2,359,296
BatchNorm2d-64 [-1, 512, 7, 7] 1,024
ReLU-65 [-1, 512, 7, 7] 0
BasicBlock-66 [-1, 512, 7, 7] 0
AvgPool2d-67 [-1, 512, 1, 1] 0
Linear-68 [-1, 1000] 513,000
================================================================
</code></pre>
<p>But in keras, I am able to get the precursor nodes of each layer.</p>
<pre><code>Model Summary:
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 1, 15, 27) 0
____________________________________________________________________________________________________
convolution2d_1 (Convolution2D) (None, 8, 15, 27) 872 input_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D) (None, 8, 7, 27) 0 convolution2d_1[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten) (None, 1512) 0 maxpooling2d_1[0][0]
____________________________________________________________________________________________________
dense_1 (Dense) (None, 1) 1513 flatten_1[0][0]
====================================================================================================
</code></pre>
<p>How do I get the precursor nodes of each layer in PyTorch? I looked at OrderedDict, which has no information about the precursor nodes.</p>
|
<p>You're right - PyTorch uses dynamic computation graphs and as such has no concept of children/ancestors by itself. For instance, the <code>Inception3</code> model <a href="https://github.com/pytorch/vision/blob/master/torchvision/models/inception.py#L57" rel="nofollow noreferrer">is created</a> by declaring a bunch of submodules and then <a href="https://github.com/pytorch/vision/blob/master/torchvision/models/inception.py#L94" rel="nofollow noreferrer">executed</a> by a long hand-coded method which just uses them in some way, in some order.</p>
<p>This allows for arbitrary flow control to be used, in which case you would have a hard time telling which layer is the child of a given layer - it <em>depends</em> on the data input.</p>
<p>For some special cases, however, it is possible. For instance <code>VGG</code> models <a href="https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py#L75" rel="nofollow noreferrer">are built</a> using <a href="https://pytorch.org/docs/master/nn.html#torch.nn.Sequential" rel="nofollow noreferrer"><code>nn.Sequential</code></a>, which is a list of modules applied sequentially to its input. If you have a model like</p>
<pre><code>model = nn.Sequential(nn.Linear(30, 40), nn.Linear(40, 20), nn.Linear(20, 30))
</code></pre>
<p>you know that the ancestor of the second <code>Linear</code> layer (<code>model[1]</code>) is <code>model[0]</code> and its child is <code>model[2]</code>.</p>
<p>To my untrained eye, it appears that Inception models could be largely implemented in terms of <code>nn.Sequential</code> containers, which would give you your expected functionality. That being said, they are not (at least not in the <code>torchvision</code> model zoo), so you have no way to obtain that other than manually.</p>
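<p>For the <code>nn.Sequential</code> case above, a quick way to inspect that order programmatically (a small sketch):</p>
<pre><code>import torch.nn as nn

model = nn.Sequential(nn.Linear(30, 40), nn.Linear(40, 20), nn.Linear(20, 30))
for idx, (name, module) in enumerate(model.named_children()):
    print(idx, name, module)  # iteration order here *is* the execution order
</code></pre>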
|
python|python-3.x|pytorch
| 0
|
6,156
| 55,604,575
|
Tensorflow compute update_op for metric using values from previous iteration
|
<p>I am working on a custom metric and my <code>update_op</code> is a function of current values and the values from the previous run. How do I use them? I have smth like this</p>
<pre><code>x, y = f(data)
var1 = metric_variable([], dtypes.float32)
var1_op = state_ops.assign_add(var1, x + y_previous_iteration)
var2 = metric_variable([], dtypes.float32)
var2_op = state_ops.assign_add(var2, y)
value = _aggregate_across_replicas(
metrics_collections, f2, var1, var2)
update_op = f2(var1_op, var2_op)
</code></pre>
<p>UPDATED: The way metrics work is that during evaluation at each step variables are getting aggregated. It is done so that at each moment the metric value is the value over all data seen until now and not the value computed over the last batch. For example if you have <code>var1_op = state_ops.assign_add(var1, x)</code> it means that at each iteration <code>var1 = var1_prev + x</code>. For example, I simplified computation of <code>auc</code> <a href="https://gist.github.com/ychervonyi/05ee35a70bd5f7751a97bbd6ef1f879d" rel="nofollow noreferrer">here</a>. I need to do <code>var1 = var1_prev + x + y_prev</code>.</p>
|
<p>I solved the problem. The idea is inspired by <a href="https://github.com/tensorflow/tensorflow/blob/r1.13/tensorflow/contrib/metrics/python/ops/metric_ops.py#L3561" rel="nofollow noreferrer"><code>streaming_concat</code></a>: create a variable of size <code>2*size</code> (where <code>size</code> is the length of <code>y</code>), keep the value from the previous run in the second half and the new value in the first half, i.e. <code>[y, y_prev]</code>. Here is the code:</p>
<pre><code>y_var = metric_variable([2 * size], name='y_var')
copy_op = y_var[size:].assign(y_var[:size])
with ops.control_dependencies([copy_op]):
update_ops['y_var'] = state_ops.assign(y_var[:size], y)
</code></pre>
|
python|tensorflow
| 0
|
6,157
| 39,772,896
|
Add prefix to specific columns of Dataframe
|
<p>I've a DataFrame like that :</p>
<pre><code>col1 col2 col3 col4 col5 col6 col7 col8
0 5345 rrf rrf rrf rrf rrf rrf
1 2527 erfr erfr erfr erfr erfr erfr
2 2727 f f f f f f
</code></pre>
<p>I would like to rename all columns but not <strong>col1</strong> and <strong>col2</strong>.</p>
<p>So I tried to make a loop</p>
<pre><code>print(df.columns)
for col in df.columns:
if col != 'col1' and col != 'col2':
col.rename = str(col) + '_x'
</code></pre>
<p>But it's not very efficient...it doesn't work !</p>
|
<p>You can use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html" rel="noreferrer">DataFrame.rename()</a> method</p>
<pre><code>new_names = [(i,i+'_x') for i in df.iloc[:, 2:].columns.values]
df.rename(columns = dict(new_names), inplace=True)
</code></pre>
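<p>An equivalent one-liner with a dict comprehension, if you prefer:</p>
<pre><code>df = df.rename(columns={c: c + '_x' for c in df.columns[2:]})
</code></pre>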
|
python|pandas
| 23
|
6,158
| 39,649,902
|
python pandas 'AttributeError: Can only use .dt accessor with datetimelike values' decimal to hours
|
<p>I have a column in a df that looks like </p>
<pre><code>hour
1.0
2.0
3.0
6.0
Nan
</code></pre>
<p>I want to convert this into a time format so like below</p>
<pre><code>hour
1:00
2:00
3:00
6:00
</code></pre>
<p>However I cannot get the formatting correct. I have tried to coerce the format like below. I would also like the logic to leave the Nan values.</p>
<pre><code>pd.to_datetime(df['hour'], format='%H', coerce=True)
</code></pre>
<p>But this only returns NaT values.</p>
|
<p>Here:</p>
<pre><code>pd.to_datetime(df['hour'], unit='h')
</code></pre>
<p>Change the <code>format</code> to <code>unit</code>. If you are using numeric data (like you are), then you <strong>cannot</strong> use <code>format</code>. <code>format</code> works only on strings.</p>
<p><code>%H</code>, for example is string interpolation, and since your data is numeric (float), string interpolation will fail.</p>
<hr>
<p>Note that the printed output will not resemble the output in the question. Because, pandas will convert it to datetime type. It will look like <code>'1970-01-01 00:00:00'</code> when printed. If you want the desired format, you can run either:</p>
<pre><code>pd.to_datetime(df['hour'], unit='h').apply(lambda h: '{}:00'.format(h.hour))
</code></pre>
<p>Or:</p>
<pre><code>df['hour'].apply(lambda h: '{}:00'.format(h))
</code></pre>
|
python|pandas
| 0
|
6,159
| 44,028,741
|
Matplotlib + Pandas = how to see labels ?
|
<p>I am trying to display a dataframe with long labels.
The plot is mostly occupied by the bars, whereas I would like it to show the labels.
I have: </p>
<pre><code>new_labels = []
for i, index in enumerate(df.index):
new_label = "%s (%.2f)"%(index,df.performance[i])
new_labels.append(new_label)
fig , axes = plt.subplots(1,1)
df.sort_values(col_name).plot(kind='barh', ax=axes)
axes.xaxis.set_ticklabels(new_labels)
axes.xaxis.set_major_locator(ticker.MultipleLocator(1))
</code></pre>
<p>Which gives me :
<a href="https://i.stack.imgur.com/7h0KE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7h0KE.png" alt="enter image description here"></a></p>
<p>As you can see, the labels are not displayed.
Indeed, their values are: </p>
<pre><code>new_labels
['PLS regression\n\n PLSRe (0.12)',
'Regression based on k-nea (0.44)',
'The Gaussian Process mode (0.46)',
'Orthogonal Matching Pursu (0.52)',
'An extremely randomized t (0.54)',
'RANSAC (RANdom SAmple Con (0.56)',
'Elastic Net model with it (0.66)',
'Kernel ridge regression. (0.67)',
'Cross-validated Orthogona (0.67)',
'Linear Model trained with (0.68)',
'Linear regression with co (0.68)',
'Theil-Sen Estimator (0.68)',
'Lasso linear model with i (0.69)',
'Bayesian ridge regression (0.70)'...
</code></pre>
<p>How can I give more space to my labels, and have shorter bars ?</p>
|
<p>Try this:</p>
<pre><code>df.sort_values(col_name).plot(x=new_labels, kind='barh', ax=axes)
# NOTE: ^^^^^^^^^^^^
</code></pre>
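<p>Two other things worth checking: with <code>kind='barh'</code> the category labels live on the <em>y</em>-axis rather than the x-axis, and you can reserve more figure space for them. A sketch, reusing the names from the question:</p>
<pre><code>fig, axes = plt.subplots(1, 1)
df.sort_values(col_name).plot(kind='barh', ax=axes)
axes.set_yticklabels(new_labels)  # barh puts categories on the y-axis
fig.subplots_adjust(left=0.5)     # give half the figure width to the labels
</code></pre>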
|
python|pandas|matplotlib
| 0
|
6,160
| 44,141,253
|
Tensorflow: simple linear regression
|
<p>I just started to use TensorFlow in Python for optimisation problems, and I just gave it a try with a really simple regression model. But the results (both slope and constant) I obtain seem to be quite far off from what I expect. Can anyone point out what I have done wrong? (The code runs, but I am not sure if I am using TensorFlow properly.)</p>
<p>What I did:<br>
1 import module:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
import random as ran
import tensorflow as tf
</code></pre>
<p>2 create data based on a linear structure (y = 3 X + 4 + error):</p>
<pre class="lang-py prettyprint-override"><code>train_X = np.array(range(-20,20,1))
b = 3; c = 4; sd = 0.5;
error = np.random.normal(loc=0.0, scale=sd, size=40);
deterministic = b* train_X + c;
train_Y = np.add(deterministic, error)
</code></pre>
<p>3 setting up for optimisation: </p>
<pre class="lang-py prettyprint-override"><code>X = tf.placeholder(tf.float32,[40])
Y = tf.placeholder(tf.float32,[40])
beta = tf.Variable(np.random.randn(), name="beta")
alpha = tf.Variable(np.random.randn(), name="alpha")
n_samples = 40
learning_rate = 0.01
pred_full = tf.add(tf.scalar_mul(beta, X),alpha)
cost = tf.reduce_mean(tf.pow(tf.subtract(Y, pred_full),2))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
</code></pre>
<p>4 Running it:</p>
<pre class="lang-py prettyprint-override"><code>init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
sess.run(optimizer, feed_dict={X: train_X, Y: train_Y})
result = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
result_beta = sess.run(beta, feed_dict={X: train_X, Y: train_Y})
result_alpha = sess.run(alpha, feed_dict={X: train_X, Y: train_Y})
print('result:', result, ';', 'result_beta:', result_beta, ';', 'result_alpha:',result_alpha)
</code></pre>
<p>The result I obtained is:</p>
<pre class="lang-py prettyprint-override"><code> result: 1912.99 ; result_beta: 6.75786 ; result_alpha: -0.209623
</code></pre>
<p>Obviously beta is supposed to be close to 3 and alpha should be close to 4. I am wondering what went wrong in my code?</p>
<p>Thanks</p>
|
<p>You have to call the optimizer multiple times for multiple iterations of gradient descent. As, @dv3 noted, try</p>
<pre><code>init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for i in range(50):
opt, result_alpha, result_beta = sess.run([optimizer, alpha, beta], feed_dict={X: train_X, Y: train_Y})
print('beta =', result_beta, 'alpha =', result_alpha)
</code></pre>
<p>NB: It isn't necessary to access multiple tensor values each with individual calls to run(). You can do that with a list of values to fetch.</p>
|
tensorflow
| 1
|
6,161
| 69,475,434
|
Reason for getting different array each time when 2nd code block is executed
|
<pre><code>dataset=tf.keras.preprocessing.image_dataset_from_directory(
"PlantVillage",
shuffle=True,
image_size=(IMAGE_SIZE,IMAGE_SIZE),
batch_size=BATCH_SIZE
)
for image_batch,label_batch in dataset.take(1):
print(image_batch[1])
</code></pre>
<p>Each time I execute the second code block, the system prints a different array. Could someone let me know the reason for this? I'm thinking it is due to the <code>shuffle=True</code> argument in the first block; please correct me if I'm wrong.</p>
|
<p>Setting shuffle=True will randomize the data order. Looking at the documentation, you could set shuffle=False to sort the data alphanumerically, or use a seed to ensure your shuffles stay constant, i.e.</p>
<pre><code>dataset=tf.keras.preprocessing.image_dataset_from_directory(
"PlantVillage",
shuffle=True,
seed=40,
image_size=(IMAGE_SIZE,IMAGE_SIZE),
batch_size=BATCH_SIZE
)
</code></pre>
|
python|tensorflow|keras|deep-learning
| 0
|
6,162
| 69,343,228
|
Grouping a column based on values on other columns to create new columns in pandas
|
<p>I have a dataframe which looks something like this:</p>
<pre><code>dfA
name group country registration
X engg Thailand True
A engg Peru True
B engg Nan False
H IT Nan False
J IT India False
K Food Nan True
Z Food Nan False
</code></pre>
<p>I want to add two new columns based on the grouping of the group column but considering the values of the country and registration column.
The new dataframe should look like this:</p>
<pre><code>dfB
name group country registration value_country value registration
X engg Thailand True True True
A engg Peru True True True
B engg Nan False True True
H IT Nan False True False
J IT India False True False
K Food Nan True False True
Z Food Nan False False True
</code></pre>
<p>The value_country column is formed by grouping on the "group" column against country: for every group, if there is even a single country value, we assign the complete group the value True. Similarly for value_registration: if any group has a single True value in registration, the entire group has the value True, else False. How do I do this?</p>
<p>I can use the pandas.groupby() function for this, but how do I apply a condition for checking values in other columns, since one is a string column (country) and the other is a boolean column (registration)?</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.any.html" rel="nofollow noreferrer"><code>GroupBy.any</code></a> for test if at least one non missing values in <code>country</code> for <code>value_country</code> and for test if at least one <code>True</code> in <code>registration</code> for <code>value registration</code> column:</p>
<pre><code>df = df.replace('Nan', np.nan)
df['value_country'] = df['country'].notna().groupby(df['group']).transform('any')
df['value registration'] = df.groupby('group')['registration'].transform('any')
print (df)
name group country registration value_country value registration
0 X engg Thailand True True True
1 A engg Peru True True True
2 B engg NaN False True True
3 H IT NaN False True False
4 J IT India False True False
5 K Food NaN True False True
6 Z Food NaN False False True
</code></pre>
<p>Both together:</p>
<pre><code>df[['value_country', 'value registration']] = (df.assign(new = df['country'].notna())
.groupby('group')[['new','registration']]
.transform('any'))
</code></pre>
|
python|pandas|pandas-groupby
| 1
|
6,163
| 40,879,967
|
How to use Batch Normalization correctly in tensorflow?
|
<p>I had tried several versions of batch_normalization in tensorflow, but none of them worked! The results were all incorrect when I set batch_size = 1 at inference time.</p>
<p>Version 1: directly use the official version in tensorflow.contrib</p>
<pre><code>from tensorflow.contrib.layers.python.layers.layers import batch_norm
</code></pre>
<p>use like this:</p>
<pre><code>output = lrelu(batch_norm(tf.nn.bias_add(conv, biases), is_training), 0.5, name=scope.name)
</code></pre>
<p>is_training = True at training time and False at inference time.</p>
<p>Version 2: from <a href="https://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow/38320613#38320613">How could I use Batch Normalization in TensorFlow?</a></p>
<pre><code>def batch_norm_layer(x, train_phase, scope_bn='bn'):
bn_train = batch_norm(x, decay=0.999, epsilon=1e-3, center=True, scale=True,
updates_collections=None,
is_training=True,
reuse=None, # is this right?
trainable=True,
scope=scope_bn)
bn_inference = batch_norm(x, decay=0.999, epsilon=1e-3, center=True, scale=True,
updates_collections=None,
is_training=False,
reuse=True, # is this right?
trainable=True,
scope=scope_bn)
z = tf.cond(train_phase, lambda: bn_train, lambda: bn_inference)
return z
</code></pre>
<p>use like this:</p>
<pre><code>output = lrelu(batch_norm_layer(tf.nn.bias_add(conv, biases), is_training), 0.5, name=scope.name)
</code></pre>
<p>is_training is a placeholder at training time is True and False at inference time.</p>
<p>version 3: from slim <a href="https://github.com/tensorflow/models/blob/master/inception/inception/slim/ops.py" rel="noreferrer">https://github.com/tensorflow/models/blob/master/inception/inception/slim/ops.py</a></p>
<pre><code>def batch_norm_layer(inputs,
is_training=True,
scope='bn'):
decay=0.999
epsilon=0.001
inputs_shape = inputs.get_shape()
with tf.variable_scope(scope) as t_scope:
axis = list(range(len(inputs_shape) - 1))
params_shape = inputs_shape[-1:]
# Allocate parameters for the beta and gamma of the normalization.
beta, gamma = None, None
beta = tf.Variable(tf.zeros_initializer(params_shape),
name='beta',
trainable=True)
gamma = tf.Variable(tf.ones_initializer(params_shape),
name='gamma',
trainable=True)
moving_mean = tf.Variable(tf.zeros_initializer(params_shape),
name='moving_mean',
trainable=False)
moving_variance = tf.Variable(tf.ones_initializer(params_shape),
name='moving_variance',
trainable=False)
if is_training:
# Calculate the moments based on the individual batch.
mean, variance = tf.nn.moments(inputs, axis)
update_moving_mean = moving_averages.assign_moving_average(
moving_mean, mean, decay)
update_moving_variance = moving_averages.assign_moving_average(
moving_variance, variance, decay)
else:
# Just use the moving_mean and moving_variance.
mean = moving_mean
variance = moving_variance
# Normalize the activations.
outputs = tf.nn.batch_normalization(
inputs, mean, variance, beta, gamma, epsilon)
outputs.set_shape(inputs.get_shape())
return outputs
</code></pre>
<p>use like this:</p>
<pre><code>output = lrelu(batch_norm_layer(tf.nn.bias_add(conv, biases), is_training), 0.5, name=scope.name)
</code></pre>
<p>is_training = True at training time and False at inference time.</p>
<p>version 4: like version3, but add tf.control_dependencies</p>
<pre><code>def batch_norm_layer(inputs,
decay=0.999,
center=True,
scale=True,
epsilon=0.001,
moving_vars='moving_vars',
activation=None,
is_training=True,
trainable=True,
restore=True,
scope='bn',
reuse=None):
inputs_shape = inputs.get_shape()
with tf.variable_op_scope([inputs], scope, 'BatchNorm', reuse=reuse):
axis = list(range(len(inputs_shape) - 1))
params_shape = inputs_shape[-1:]
# Allocate parameters for the beta and gamma of the normalization.
beta = tf.Variable(tf.zeros(params_shape), name='beta')
gamma = tf.Variable(tf.ones(params_shape), name='gamma')
# Create moving_mean and moving_variance add them to
# GraphKeys.MOVING_AVERAGE_VARIABLES collections.
moving_mean = tf.Variable(tf.zeros(params_shape), name='moving_mean',
trainable=False)
moving_variance = tf.Variable(tf.ones(params_shape), name='moving_variance',
trainable=False)
control_inputs = []
if is_training:
# Calculate the moments based on the individual batch.
mean, variance = tf.nn.moments(inputs, axis)
update_moving_mean = moving_averages.assign_moving_average(
moving_mean, mean, decay)
update_moving_variance = moving_averages.assign_moving_average(
moving_variance, variance, decay)
control_inputs = [update_moving_mean, update_moving_variance]
else:
# Just use the moving_mean and moving_variance.
mean = moving_mean
variance = moving_variance
# Normalize the activations.
with tf.control_dependencies(control_inputs):
return tf.nn.batch_normalization(
inputs, mean, variance, beta, gamma, epsilon)
</code></pre>
<p>use like this:</p>
<pre><code>output = lrelu(batch_norm(tf.nn.bias_add(conv, biases), is_training), 0.5, name=scope.name)
</code></pre>
<p>is_training = True at training time and False at inference time.</p>
<p>None of these 4 versions of batch normalization is correct. So, how do I use batch normalization correctly?</p>
<p>Another strange phenomenon: if I set batch_norm_layer to a no-op like this, the inference results are all the same.</p>
<pre><code>def batch_norm_layer(inputs, is_training):
return inputs
</code></pre>
|
<p>I have tested that the following simplified implementation of batch normalization gives the same result as <code>tf.contrib.layers.batch_norm</code> as long as the setting is the same.</p>
<pre><code>def initialize_batch_norm(scope, depth):
    # `depth` is the number of channels; the original snippet referenced an
    # undefined `shape[-1]` here, which is fixed below
    with tf.variable_scope(scope) as bnscope:
        gamma = tf.get_variable("gamma", depth, initializer=tf.constant_initializer(1.0))
        beta = tf.get_variable("beta", depth, initializer=tf.constant_initializer(0.0))
        moving_avg = tf.get_variable("moving_avg", depth, initializer=tf.constant_initializer(0.0), trainable=False)
        moving_var = tf.get_variable("moving_var", depth, initializer=tf.constant_initializer(1.0), trainable=False)
        bnscope.reuse_variables()
def BatchNorm_layer(x, scope, train, epsilon=0.001, decay=.99):
# Perform a batch normalization after a conv layer or a fc layer
# gamma: a scale factor
# beta: an offset
# epsilon: the variance epsilon - a small float number to avoid dividing by 0
with tf.variable_scope(scope, reuse=True):
with tf.variable_scope('BatchNorm', reuse=True) as bnscope:
gamma, beta = tf.get_variable("gamma"), tf.get_variable("beta")
moving_avg, moving_var = tf.get_variable("moving_avg"), tf.get_variable("moving_var")
shape = x.get_shape().as_list()
control_inputs = []
if train:
avg, var = tf.nn.moments(x, range(len(shape)-1))
update_moving_avg = moving_averages.assign_moving_average(moving_avg, avg, decay)
update_moving_var = moving_averages.assign_moving_average(moving_var, var, decay)
control_inputs = [update_moving_avg, update_moving_var]
else:
avg = moving_avg
var = moving_var
with tf.control_dependencies(control_inputs):
output = tf.nn.batch_normalization(x, avg, var, offset=beta, scale=gamma, variance_epsilon=epsilon)
return output
</code></pre>
<p>The main tips with using the official implementation of batch normalization in <code>tf.contrib.layers.batch_norm</code> are: (1) set <code>is_training=True</code> for training time and <code>is_training=False</code> for validation and testing time; (2) set <code>updates_collections=None</code> to make sure that <code>moving_variance</code> and <code>moving_mean</code> are updated in place; (3) be aware and careful with the scope setting; (4) set <code>decay</code> to be a smaller value (<code>decay=0.9</code> or <code>decay=0.99</code>) than default value (default is 0.999) if your dataset is small or your total training updates/steps are not that large.</p>
|
tensorflow|deep-learning
| 8
|
6,164
| 54,044,781
|
openpyxl changes number format of columns in un-altered worksheet of the same file
|
<p>I've noticed that when I use openpyxl to add an extra sheet to a .xlsx file, it automatically alters the number format of column(s) in a pre-existent sheet in this file.</p>
<p>Chronologically, the problem is as follows:</p>
<p>1) I use a "timestamp" format to record by hand the date and time of some events of interest in a column of this pre-existent sheet. I set, via Excel, the column format to Date (format code 'MM/DD/YYYY HH:MM:SS')</p>
<p><a href="https://i.stack.imgur.com/uRea3.png" rel="nofollow noreferrer">The column where I save the date and time of the events that I'm registering</a></p>
<p>2) I read this "pre-existent" worksheet with pandas, and everything goes fine (i.e., pandas can read these dates/times):</p>
<pre><code>import pandas as pd
df = pd.read_excel(myPath + 'myFile.xlsx',sheetname='pre-existent',header=0)
print(df['timeStampUTC'])
timeStampUTC
0 2018-12-02 12:59:00
1 2018-12-02 14:29:00
2 2018-12-02 15:39:00
3 2018-12-02 17:05:00
4 2018-12-02 18:38:00
5 2018-12-02 19:36:00
6 2018-12-02 20:27:00
7 2018-12-02 21:44:00
8 2018-12-02 22:15:00
9 2018-12-02 22:46:00
10 2018-12-02 23:07:00
11 2018-12-04 15:46:00
12 2018-12-04 15:53:00
Name: timeStampUTC, dtype: datetime64[ns]
</code></pre>
<p>3) I do some calculations and store these other calculations at a new worksheet at the same file ('myFile.xlsx') and save the changes:</p>
<pre><code>from openpyxl import *
book = load_workbook(myPath + 'myFile.xlsx')
writer = pd.ExcelWriter(myPath + 'myFile.xlsx', engine = 'openpyxl')
writer.book = book
New_df.to_excel(writer, index = False, sheet_name='new-sheet')
writer.save()
writer.close()
</code></pre>
<p>4) Once I try to repeat step 2, pandas can no longer read correctly the date-times in my column:</p>
<pre><code>print(df['timeStampUTC'])
timeStampUTC
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 NaN
9 NaN
10 NaN
11 NaN
12 NaN
Name: timeStampUTC, dtype: float64
</code></pre>
<p>It's important to notice that when I re-open 'myFile.xlsx' with Excel, the column appears as normal. When I re-set the number format of the column to Date (format code 'MM/DD/YYYY HH:MM:SS'), pandas is able again to read the timestamps.</p>
<h3>Anything that allows me to re-read this column with pandas is welcome.</h3>
<p>Thanks!!!!</p>
<p>Juancho Gossn</p>
|
<p>Possible partial solution:</p>
<p>Use read-only mode when opening the workbook, and save the output to a new Excel file:
<code>workbook = openpyxl.load_workbook(filename='name.xlsx', read_only=True)</code></p>
<p>A remaining problem on my side: cells that mixed two fonts come back with a single font.</p>
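<p>A small sketch of that idea: write the computed sheet to a <em>new</em> file so the original workbook (and its number formats) is never rewritten. The output file name is a placeholder; <code>myPath</code> and <code>New_df</code> are from the question:</p>
<pre><code>import pandas as pd
import openpyxl

# open the original strictly for reading
book = openpyxl.load_workbook(myPath + 'myFile.xlsx', read_only=True)

# save results elsewhere instead of overwriting 'myFile.xlsx'
with pd.ExcelWriter(myPath + 'myFile_results.xlsx', engine='openpyxl') as writer:
    New_df.to_excel(writer, index=False, sheet_name='new-sheet')
</code></pre>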
|
python|excel|pandas|openpyxl|datetime-format
| 1
|
6,165
| 38,292,340
|
How to count rows not values in python pandas?
|
<p>I would like to group DataFrame by some field like</p>
<pre><code>student_data.groupby(['passed'])
</code></pre>
<p>and then count number of rows inside each group. </p>
<p>I know how to count values like</p>
<pre><code>student_data.groupby(['passed'])['passed'].count()
</code></pre>
<p>or</p>
<pre><code>student_data.groupby(['passed']).agg({'passed': 'count'})
</code></pre>
<p>but this will <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.count.html" rel="nofollow">exclude empties by default</a>. I would like to count all rows in each group.</p>
<p>I found I can count rows in entire DataFrame with </p>
<pre><code>len(student_data.index)
</code></pre>
<p>but can't find any <code>index</code> field in <code>GroupBy</code> object or something.</p>
|
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow"><code>value_counts</code></a> with parameter <code>dropna=False</code>:</p>
<pre><code>import pandas as pd
import numpy as np
student_data = pd.DataFrame({'passed':[1,1,2,2,2,np.nan,np.nan]})
print(student_data)
passed
0 1.0
1 1.0
2 2.0
3 2.0
4 2.0
5 NaN
6 NaN
print (student_data['passed'].value_counts(dropna=False))
2.0 3
1.0 2
NaN 2
Name: passed, dtype: int64
</code></pre>
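<p>On pandas 1.1 or newer, <code>groupby</code> itself can keep the <code>NaN</code> group, which counts rows per group directly:</p>
<pre><code>student_data.groupby('passed', dropna=False).size()
</code></pre>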
|
python|pandas
| 3
|
6,166
| 38,177,549
|
How does one implement a subsampled RBF (Radial Basis Function) in Numpy?
|
<p>I was trying to implement a Radial Basis Function in Python and NumPy as described in the <a href="http://work.caltech.edu/slides/slides16.pdf" rel="nofollow noreferrer">Caltech lecture here</a>. The mathematics seems clear to me, so I find it strange that it's not working (or it seems not to work). The idea is simple: one chooses a subsampled set of centers for the Gaussians, forms the kernel matrix, and tries to find the best coefficients, i.e. solve <code>Kc = y</code> where K is the Gaussian kernel (Gram) matrix, with least squares. For that I did:</p>
<pre><code>beta = 0.5*np.power(1.0/stddev,2)
Kern = np.exp(-beta*euclidean_distances(X=X,Y=subsampled_data_points,squared=True))
#(C,_,_,_) = np.linalg.lstsq(K,Y_train)
C = np.dot( np.linalg.pinv(Kern), Y )
</code></pre>
<p>but when I try to plot my interpolation with the original data they don't look at all alike:</p>
<p><a href="https://i.stack.imgur.com/BuD3M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BuD3M.png" alt="enter image description here"></a></p>
<p>with 100 random centers (from the data set). I also tried 10 centers, which produces essentially the same graph, as does using every data point in the training set. I assumed that using every data point in the data set should more or less perfectly copy the curve, but it didn't (overfit). It produces:</p>
<p><a href="https://i.stack.imgur.com/rjMju.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rjMju.png" alt="enter image description here"></a></p>
<p>which doesn't seem correct. I will provide the full code (that runs without error):</p>
<pre><code>import numpy as np
from sklearn.metrics.pairwise import euclidean_distances
from scipy.interpolate import Rbf
import matplotlib.pyplot as plt
## Data sets
def get_labels_improved(X,f):
N_train = X.shape[0]
Y = np.zeros( (N_train,1) )
for i in range(N_train):
Y[i] = f(X[i])
return Y
def get_kernel_matrix(x,W,S):
beta = get_beta_np(S)
#beta = 0.5*tf.pow(tf.div( tf.constant(1.0,dtype=tf.float64),S), 2)
Z = -beta*euclidean_distances(X=x,Y=W,squared=True)
K = np.exp(Z)
return K
N = 5000
low_x =-2*np.pi
high_x=2*np.pi
X = low_x + (high_x - low_x) * np.random.rand(N,1)
# f(x) = 2*(2(cos(x)^2 - 1)^2 -1
f = lambda x: 2*np.power( 2*np.power( np.cos(x) ,2) - 1, 2) - 1
Y = get_labels_improved(X , f)
K = 2 # number of centers for RBF
indices=np.random.choice(a=N,size=K) # choose numbers from 0 to D^(1)
subsampled_data_points=X[indices,:] # M_sub x D
stddev = 100
beta = 0.5*np.power(1.0/stddev,2)
Kern = np.exp(-beta*euclidean_distances(X=X,Y=subsampled_data_points,squared=True))
#(C,_,_,_) = np.linalg.lstsq(K,Y_train)
C = np.dot( np.linalg.pinv(Kern), Y )
Y_pred = np.dot( Kern , C )
plt.plot(X, Y, 'o', label='Original data', markersize=1)
plt.plot(X, Y_pred, 'r', label='Fitted line', markersize=1)
plt.legend()
plt.show()
</code></pre>
<p>Since the plots look strange I decided to read the docs for the ploting functions but I couldn't find anything obvious that was wrong.</p>
|
<h3>Scaling of interpolating functions</h3>
<p>The main problem is unfortunate choice of standard deviation of the functions used for interpolation:</p>
<pre><code>stddev = 100
</code></pre>
<p>The features of your functions (its humps) are of size about 1. So, use</p>
<pre><code>stddev = 1
</code></pre>
<h3>Order of X values</h3>
<p>The mess of red lines is there because <code>plt</code> from matplotlib connects consecutive data points, in the order given. Since your X values are in random order, this results in chaotic left-right movements. Use sorted X: </p>
<pre><code>X = np.sort(low_x + (high_x - low_x) * np.random.rand(N,1), axis=0)
</code></pre>
<h3>Efficiency issues</h3>
<p>Your <code>get_labels_improved</code> method is inefficient, looping over the elements of X. Use <code>Y = f(X)</code>, leaving the looping to low-level NumPy internals.</p>
<p>Also, the computation of the least-squares solution of an overdetermined system should be done with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html#numpy.linalg.lstsq" rel="nofollow noreferrer">lstsq</a> instead of computing the pseudoinverse (computationally expensive) and multiplying by it. </p>
<p>Here is the cleaned-up code; using 30 centers gives a good fit.</p>
<p><img src="https://i.stack.imgur.com/aW1WJ.png" alt="fit"></p>
<pre><code>import numpy as np
from sklearn.metrics.pairwise import euclidean_distances
import matplotlib.pyplot as plt
N = 5000
low_x =-2*np.pi
high_x=2*np.pi
X = np.sort(low_x + (high_x - low_x) * np.random.rand(N,1), axis=0)
f = lambda x: 2*np.power( 2*np.power( np.cos(x) ,2) - 1, 2) - 1
Y = f(X)
K = 30 # number of centers for RBF
indices=np.random.choice(a=N,size=K) # choose numbers from 0 to D^(1)
subsampled_data_points=X[indices,:] # M_sub x D
stddev = 1
beta = 0.5*np.power(1.0/stddev,2)
Kern = np.exp(-beta*euclidean_distances(X=X, Y=subsampled_data_points,squared=True))
C = np.linalg.lstsq(Kern, Y)[0]
Y_pred = np.dot(Kern, C)
plt.plot(X, Y, 'o', label='Original data', markersize=1)
plt.plot(X, Y_pred, 'r', label='Fitted line', markersize=1)
plt.legend()
plt.show()
</code></pre>
|
python|numpy|machine-learning|neural-network
| 4
|
6,167
| 38,382,497
|
Numpy recarray indexing by column truncates floats
|
<p>In a simple recarray in python, the output value is getting truncated when indexed by column name:</p>
<pre><code>import numpy as np  # 1.10.0
arr = np.zeros(1, dtype=[('a', np.float)])
arr[0]['a'] = 0.1234567891234
print arr
print arr['a']
[(0.1234567891234,)]
[ 0.12345679]
</code></pre>
<p>Why does this happen? Can I get the full, non-truncated value with column indexing?</p>
|
<p>The print precision for a numeric array is 8 digits:</p>
<pre><code>In [250]: np.get_printoptions()
Out[250]:
{'edgeitems': 3,
'formatter': None,
'infstr': 'inf',
'linewidth': 75,
'nanstr': 'nan',
'precision': 8,
'suppress': False,
'threshold': 1000}
</code></pre>
<p>But it doesn't use that value when displaying the recarray or its records. You'd probably also see the longer print with the scalar value:</p>
<pre><code>print arr['a'].item()
</code></pre>
<p>==============</p>
<pre><code>In [252]: arr = np.zeros(1, dtype=[('a', np.float)])
...: arr[0]['a'] = 0.1234567891234
...:
In [253]: arr
Out[253]:
array([(0.1234567891234,)],
dtype=[('a', '<f8')])
In [254]: arr[0]
Out[254]: (0.1234567891234,)
In [255]: arr['a']
Out[255]: array([ 0.12345679])
In [256]: arr['a'].item()
Out[256]: 0.1234567891234
In [257]: arr['a'][0]
Out[257]: 0.1234567891234
</code></pre>
<p>==================</p>
<p><a href="https://github.com/numpy/numpy/issues/5463" rel="nofollow">https://github.com/numpy/numpy/issues/5463</a></p>
<p><code>array2string handles floats differently for structured array and ndarray</code>
touches on this. The formatting of numbers in a structured array record does not follow <code>print options</code>.</p>
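<p>For the plain float array returned by <code>arr['a']</code>, raising the print precision does work, e.g.:</p>
<pre><code>np.set_printoptions(precision=13)
print arr['a']   # [ 0.1234567891234]
</code></pre>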
|
python|numpy
| 1
|
6,168
| 38,387,765
|
Loop over columns to cleanse the values
|
<p>I have many columns in my pandas data frame which I want to cleanse with a specific method. I want to see if there is a way to do this in one go.</p>
<p>This is what I tried and this does not work.</p>
<pre><code>list = list(bigtable) # this list has all the columns i want to cleanse
for index in list:
bigtable1.column = bigtable.column.str.split(',', expand=True).apply(lambda x: pd.Series(np.sort(x)).str.cat(sep=','), axis=1)
</code></pre>
|
<p>Try this; it should work:</p>
<pre><code>cols = list(bigtable)  # same list as in the question, renamed so it does not shadow the builtin `list`
bigtable1 = pd.DataFrame()
for col in cols:
    bigtable1[col] = bigtable[col].str.split(',', expand=True).apply(lambda x: pd.Series(np.sort(x)).str.cat(sep=','), axis=1)
</code></pre>
|
python|pandas
| 1
|
6,169
| 66,304,853
|
Calculate weight using normal distribution in Python
|
<p>I have to add a weight column in the titanic dataset to calculate adult passengers' weight using a normal distribution with std = 20 and mean = 70 kg. I have tried this code:</p>
<pre><code>df['Weight'] = np.random.normal(20, 70, size=891)
df['Weight'].fillna(df['Weight'].iloc[0], inplace=True)
</code></pre>
<p>but I am concerned about two things:</p>
<ol>
<li>It generates negative values, not just positive; how can this be considered normal weight value, is there anything that I can change in code to generate just positive values.</li>
<li>Since I am targeting the adults' age group, what about children. Some of them also have abnormal weight values, such as 7 kg for adults or 30 kg for a child; how can this be solved.
I appreciate any help you can provide.</li>
</ol>
<p>Edit:</p>
<p>This code worked for me</p>
<pre><code>Weight = np.random.normal(80, 20, 718)
adults['Weight'] = Weight
</code></pre>
<p>Now I have to calculate the probability for people weighing less than 70
and for those between 70 and 100.</p>
<p>I have tried the following code, but it raises an error: TypeError: unsupported operand type(s) for -: 'str' and 'int'.</p>
<pre><code>import pandas as pd
import numpy as np
import scipy.stats
adults = df[(df['Age'] >= 20) & (df['Age'] <= 70)]
Weight = np.random.normal(80, 20, 718)
adults['Weight'] = Weight
p1 = adults['Weight'] < 70
p2 = adults[(adults['Weight'] > 70) & (adults['Weight'] < 100)]
scipy.stats.norm.pdf(p1)
scipy.stats.norm.pdf(p2)
</code></pre>
|
<ol>
<li><p>The range of a Normal distribution is not restricted; it spans all real numbers. If you want to restrict it, you should do it manually or use other distributions (a truncated normal is sketched after this list). Note also that <code>np.random.normal</code> takes <code>(loc, scale, size)</code>, so for mean 70 and std 20 the first two arguments must be <code>70, 20</code>:</p>
<pre><code>df['Weight'] = np.random.normal(70, 20, size=891)
df.loc[df['Weight'] < min_value, 'Weight'] = min_value
df.loc[df['Weight'] > max_value, 'Weight'] = max_value
</code></pre>
</li>
<li><p>Since weights of children and adults are not iid's you should sample it from different distributions</p>
<pre><code># use different distributions
df.loc[df['person_type'] == 'child', 'Weight'] = np.random.normal(x1, y1, size=children_size)
df.loc[df['person_type'] == 'adult', 'Weight'] = np.random.normal(x2, y2, size=adult_size)
</code></pre>
</li>
</ol>
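<p>Alternatively, instead of clipping, you can sample from a truncated normal so out-of-range values never occur. A sketch with assumed plausible bounds of 40 to 150 kg:</p>
<pre><code>from scipy.stats import truncnorm

mean, std = 70, 20
low, high = 40, 150                      # assumed bounds, adjust as needed
a, b = (low - mean) / std, (high - mean) / std
df['Weight'] = truncnorm.rvs(a, b, loc=mean, scale=std, size=891)
</code></pre>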
|
python|dataframe|numpy|scipy|normal-distribution
| 0
|
6,170
| 65,941,521
|
automatic detection of matrix size, numpy
|
<p>I created code that adds a multiple of 10 to each column, but it is hard-coded for an array with 4 rows and 3 columns.
I have more matrices in a loop, with different dimensions.
Is it possible to do this so that I don't have to constantly pass in the number of columns or rows, and instead it finds out the number of columns automatically?</p>
<pre><code>import numpy as np
c1=np.array([[1,1,1], [2,2,2], [3,3,3], [4,4,4]])
#c2=np.array([[1,1,1,1], [2,2,2,2], [3,3,3,3], [4,4,4,4]])
#c3=np.array([[1,1,1,1], [2,2,2,2], [3,3,3,3]])
left=c1+10*np.ones((4,1))*((1,2,3,))#*p
print('\n')
print(left)
#p +=1
</code></pre>
<p>example</p>
<pre><code>left=c1+10*np.ones((4,1))*((1,2,3,)) #4 rows and 3 columns
left=c1+10*np.ones((3,1))*((1,2,3,4)) #3 rows and 4 columns
</code></pre>
<p>output:</p>
<pre><code>array([[11., 21., 31.],
[12., 22., 32.],
[13., 23., 33.],
[14., 24., 34.]])
</code></pre>
|
<p>If you want the multiple of tens to be added to be proportional to the column number (as in your answer), you can make a <code>range</code> based on the number of columns and multiply by 10:</p>
<pre><code>>>> np.arange(1, c1.shape[1]+1)*10
array([10, 20, 30])
>>> c1 + np.arange(1, c1.shape[1]+1)*10
array([[11, 21, 31],
[12, 22, 32],
[13, 23, 33],
[14, 24, 34]])
>>> c2 + np.arange(1, c2.shape[1]+1)*10 # same but with c2
array([[11, 21, 31, 41],
[12, 22, 32, 42],
[13, 23, 33, 43],
[14, 24, 34, 44]])
</code></pre>
<p>Your description OTOH makes it sound like you want to just add <code>10</code> to every element, in which case it is as simple as <code>c1 + 10</code>.</p>
|
python|arrays|numpy
| 2
|
6,171
| 66,117,362
|
Python Pandas Delete empty cells
|
<p>I'm trying to delete empty cells with pandas. I wanna delete only empty cells but I have no idea how to do that.</p>
<p>ex</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
<th>F</th>
<th>G</th>
<th>H</th>
<th>I</th>
<th>J</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>apple</td>
<td>price</td>
<td>10</td>
<td></td>
<td></td>
<td>quantity</td>
<td>5</td>
<td></td>
<td></td>
</tr>
<tr>
<td>2</td>
<td>pineapple</td>
<td>price</td>
<td>12</td>
<td>condition</td>
<td>good</td>
<td></td>
<td></td>
<td>quantity</td>
<td>4</td>
</tr>
</tbody>
</table>
</div>
<p>what I want</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
<th>F</th>
<th>G</th>
<th>H</th>
<th>I</th>
<th>J</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>apple</td>
<td>price</td>
<td>10</td>
<td>quantity</td>
<td>5</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>2</td>
<td>pineapple</td>
<td>price</td>
<td>12</td>
<td>condition</td>
<td>good</td>
<td>quantity</td>
<td>4</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>I need all the values without the empty cells. So I don't want to delete a whole row or column; I want to delete the empty cells and shift the remaining values to the left.</p>
<p>Real Data
<a href="https://i.stack.imgur.com/AgbnS.jpg" rel="nofollow noreferrer">EXCEL</a></p>
|
<p>I solved it with the approach from this answer:</p>
<p><a href="https://stackoverflow.com/questions/61569994/removing-nan-from-pandas-dataframe-and-reshaping-dataframe">Removing nan from pandas dataframe and reshaping dataframe</a></p>
<p>Key point: change <code>invalid_val</code> there to match how the empty cells appear in your data (here, empty strings).</p>
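<p>For completeness, a minimal sketch of the row-justification idea from that answer, assuming empty cells are empty strings (adapt <code>invalid</code> to your data):</p>
<pre><code>import numpy as np
import pandas as pd

def shift_left(df, invalid=''):
    # keep each row's valid cells, left-aligned; pad the rest with `invalid`
    arr = df.to_numpy(dtype=object)
    out = np.full(arr.shape, invalid, dtype=object)
    for i, row in enumerate(arr):
        vals = row[row != invalid]
        out[i, :len(vals)] = vals
    return pd.DataFrame(out, columns=df.columns, index=df.index)
</code></pre>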
|
python|excel|pandas|dataframe
| 0
|
6,172
| 66,169,726
|
How to ignore empty columns in a dataframe?(Pandas)
|
<p>I want to ignore empty columns in a dataframe.</p>
<p>For Example:</p>
<p>sample.csv</p>
<pre><code>Id Name Address Contact Item Rate Qty Price
1 Mark California 98429102 Shirt 57 2 8
2 Andre Michigan 92010211
</code></pre>
<p>I have tried:</p>
<pre><code>import pandas as pd
df = pd.read_csv('sample.csv')
df = df.fillna('')
df.to_csv('sample.txt',sep='*',index=False, header=False)
</code></pre>
<p>The sample.txt looks like</p>
<pre><code>1*Mark*California*98429102*Shirt*57*2*8
2*Andre*Michigan*92010211****
</code></pre>
<p>I want to remove the empty columns here. The sample.txt should look like this:</p>
<pre><code>1*Mark*California*98429102*Shirt*57*2*8
2*Andre*Michigan*92010211
</code></pre>
|
<p>Just use a memory buffer and <code>strip()</code></p>
<pre><code>import io
# demo data: two rows, the second with trailing empty fields
df = pd.read_csv(io.StringIO("""1*Mark*California*98429102*Shirt*57*2*8
2*Andre*Michigan*92010211****"""), sep="*", header=None)
# serialize to a '*'-separated string in memory, then strip the trailing
# '*' separators (the empty columns) from each line before writing to disk
with open("sample.csv", "w") as f:
    f.write("\n".join([l.strip("*") for l in df.to_csv(sep="*", header=None, index=None).split("\n")]))
with open("sample.csv") as f: print(f.read())
</code></pre>
<h3>output</h3>
<pre><code>1*Mark*California*98429102*Shirt*57.0*2.0*8.0
2*Andre*Michigan*92010211
</code></pre>
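<p>An alternative sketch that skips the CSV round-trip and writes each row directly, dropping NaN/empty cells as it goes:</p>
<pre><code>with open('sample.txt', 'w') as f:
    for row in df.itertuples(index=False):
        vals = [str(v) for v in row if v == v and str(v) != '']  # v == v is False for NaN
        f.write('*'.join(vals) + '\n')
</code></pre>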
|
python|python-3.x|pandas|dataframe|numpy
| 2
|
6,173
| 46,217,242
|
Creating arrays quickly from a dataframe with lots of values?
|
<ol>
<li>I have a large dataframe (imported from a csv file via pandas) with lots of values (259 rows × 27 columns). The index are months starting from January 1996 through to July 2017.</li>
</ol>
<p><a href="https://i.stack.imgur.com/Fx1Cl.png" rel="nofollow noreferrer">Image of my dataframe</a></p>
<ol start="2">
<li><p>I want to sort every column by year e.g. K37L: 1996, 1997, 1998, 1999, 2000 etc; K37M: 1996, 1997, 1998, 1999, 2000 etc.</p>
</li>
<li><p>This is my current code:</p>
</li>
</ol>
<blockquote>
<pre><code>#Importing CSV
import pandas as pd
import numpy as np
df = pd.read_csv('file.csv', index_col=0, skipinitialspace=True)
#Calling a column
K37L = df['K37L']
#Filtering this column by year (from 1996 to 2017)
K37L96 = K37L.filter(regex = '1996', axis = 0); npK37L96 = np.array(K37L96)
...
...
...
K37L17 = K37L.filter(regex = '2017', axis = 0); npK37L17 = np.array(K37L17)
</code></pre>
</blockquote>
<ol start="4">
<li>This produces what I want: <a href="https://i.stack.imgur.com/ALj3e.png" rel="nofollow noreferrer">K37L filtered by 1996</a></li>
</ol>
<p>However, this is a tedious process: since I have to type out all the years and the column names to get what I want, it will take ages. Is there a faster / more elegant way to do this?</p>
<p>EDIT: Here is the df.head() output as requested:</p>
<pre><code> K37L K37M K37N K37P K37Q K37R K37S K37T K37U K37V ... \
1996 Jan 78.9 79.4 71.7 36.7 0.0 88.7 94.1 90.7 80.2 98.9 ...
1996 Feb 79.3 81.0 72.7 36.7 0.0 88.7 94.3 90.9 79.8 98.7 ...
1996 Mar 79.8 80.4 72.7 36.7 0.0 89.0 94.6 91.0 79.6 98.6 ...
1996 Apr 80.4 80.7 72.9 36.7 0.0 89.0 94.6 91.3 79.2 97.9 ...
1996 May 80.6 80.7 72.9 36.7 0.0 89.1 94.7 91.9 79.2 96.6 ...
K385 K386 K387 K388 K389 K38A K38B K38C K38D K38E
1996 Jan 70.9 78.7 257.8 83.9 79.7 92.2 73.8 86.4 79.6 74.0
1996 Feb 70.7 78.7 257.2 83.9 79.8 92.6 73.7 86.6 79.9 73.9
1996 Mar 70.9 78.7 257.3 83.9 80.1 92.6 73.8 87.2 80.1 74.0
1996 Apr 70.8 78.9 256.6 83.9 80.4 92.7 73.9 87.9 80.7 74.0
1996 May 70.9 78.9 256.3 83.9 80.5 92.9 73.9 88.0 80.7 74.1
[5 rows x 27 columns]
</code></pre>
|
<p>You can use:</p>
<pre><code>np.random.seed(458)
cols = ['K37L', 'K37M', 'K37N', 'K37P', 'K37Q', 'K37R', 'K37S', 'K37T', 'K37U','K37V', 'K37W', 'K37X', 'K37Y', 'K37Z', 'K382', 'K383', 'K384', 'K385', 'K386', 'K387', 'K388', 'K389', 'K38A', 'K38B', 'K38C', 'K38D', 'K38E']
idx = pd.date_range('1996-01-01', periods=259, freq='MS').strftime('%Y %b')
df = pd.DataFrame(np.random.randint(20, size=(259,27)), index=idx, columns=cols)
print (df.head(3))
K37L K37M K37N K37P K37Q K37R K37S K37T K37U K37V ... \
1996 Jan 8 13 18 1 6 2 1 11 13 0 ...
1996 Feb 12 0 14 0 11 0 1 10 3 4 ...
1996 Mar 5 8 8 8 5 5 2 8 1 7 ...
K385 K386 K387 K388 K389 K38A K38B K38C K38D K38E
1996 Jan 18 16 0 11 18 18 11 18 11 17
1996 Feb 9 12 15 7 7 0 17 3 6 7
1996 Mar 13 9 0 9 2 17 13 1 12 9
[3 rows x 27 columns]
</code></pre>
<p>Create <code>Datetimeindex</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a>:</p>
<pre><code>df.index = pd.to_datetime(df.index, format='%Y %b')
print (df.head(3))
K37L K37M K37N K37P K37Q K37R K37S K37T K37U K37V ... \
1996-01-01 8 13 18 1 6 2 1 11 13 0 ...
1996-02-01 12 0 14 0 11 0 1 10 3 4 ...
1996-03-01 5 8 8 8 5 5 2 8 1 7 ...
K385 K386 K387 K388 K389 K38A K38B K38C K38D K38E
1996-01-01 18 16 0 11 18 18 11 18 11 17
1996-02-01 9 12 15 7 7 0 17 3 6 7
1996-03-01 13 9 0 9 2 17 13 1 12 9
[3 rows x 27 columns]
</code></pre>
<p>So for select by yars use <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#partial-string-indexing" rel="nofollow noreferrer">partial string indexing</a> and for select column <code>[]</code> (same syntax):</p>
<pre><code>#selecting rows with year 2000
print (df['2000'])
K37L K37M K37N K37P K37Q K37R K37S K37T K37U K37V ...
2000-01-01 12 15 8 14 2 0 17 0 8 14 ...
2000-02-01 14 10 11 4 18 1 3 12 9 11 ...
2000-03-01 4 5 17 16 13 6 18 6 12 12 ...
2000-04-01 2 15 3 5 6 6 17 3 1 3 ...
2000-05-01 6 14 14 9 4 0 4 10 14 15 ...
#selecting column K37P
print (df['K37P'])
1996-01-01 1
1996-02-01 0
1996-03-01 8
1996-04-01 11
1996-05-01 14
1996-06-01 12
1996-07-01 12
1996-08-01 14
1996-09-01 2
1996-10-01 1
</code></pre>
<p>For selecting both first select column and then year:</p>
<pre><code>print (df['K37L']['2000'])
2000-01-01 12
2000-02-01 14
2000-03-01 4
2000-04-01 2
2000-05-01 6
2000-06-01 10
2000-07-01 2
2000-08-01 13
2000-09-01 18
2000-10-01 4
2000-11-01 12
2000-12-01 11
Name: K37L, dtype: int32
</code></pre>
<p>For numpy array use:</p>
<pre><code>print (df['K37L']['2000'].values)
[12 14 4 2 6 10 2 13 18 4 12 11]
</code></pre>
<p>If need dictionary of arrays by years:</p>
<p>Then select <code>year</code>s by <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#partial-string-indexing" rel="nofollow noreferrer">partial string indexing</a> and last convert to array by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.values.html" rel="nofollow noreferrer"><code>values</code></a> to <code>dictionary</code>:</p>
<pre><code>d = {x: df[str(x)].values for x in range(1996, 2018)}
print (d[2000])
[[12 15 8 14 2 0 17 0 8 14 17 15 2 3 14 17 19 2 8 7 5 7 12 13
17 7 4]
[14 10 11 4 18 1 3 12 9 11 8 3 12 19 19 15 7 19 14 12 5 19 14 15
7 11 7]
[ 4 5 17 16 13 6 18 6 12 12 7 15 3 16 2 18 14 18 15 8 5 9 3 7
</code></pre>
|
python|python-3.x|pandas|numpy|dataframe
| 1
|
6,174
| 58,530,882
|
Group by in Pandas to find Median
|
<p>I have a properties dataset. I would like to know the median price according to several attributes as follows:
<code>suburb</code>,<code>rooms</code>,<code>bathrooms</code>,<code>type</code>,<code>car</code>,<code>age</code>. Then I want to add a new boolean column to state if the property is overpriced or not. </p>
<p>sample of my dataframe(the original dataframe has 180 suburbs):</p>
<pre><code>house=pd.DataFrame({'subrub':['BALWYN NORTH','ARMADALE','ARMADALE','PASCOE VALE'],
'price':[1350000.0,800000.0,1250000.0,680000.0],
'rooms':[3,4,7,2],
'bathroom':[1.0,2.0,4.0,1.0],
'type':['h','t','t','u'],
'car':['2.0','1.0','4.0','1.0'],
'age':[59.0,69.0,12.0,14.0]})
</code></pre>
<p>So far I have grouped by suburbs. I know I can use <code>median</code> to find the median, but I am not sure how to approach the other attributes. Any tip would be helpful. Thank you,</p>
|
<p>You can do it with <code>groupby</code> + <code>transform</code>, like this:</p>
<pre class="lang-py prettyprint-override"><code>def over_price(elements):
median = np.median(elements)
return elements > median
house["OverPrice"] = house.groupby(["subrub","rooms","bathroom","type","car","age"])["price"].transform(over_price)
</code></pre>
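<p>An equivalent, slightly shorter spelling computes the group median with the built-in <code>'median'</code> aggregation and compares outside the groupby:</p>
<pre class="lang-py prettyprint-override"><code>group_cols = ["subrub", "rooms", "bathroom", "type", "car", "age"]
median = house.groupby(group_cols)["price"].transform("median")
house["OverPrice"] = house["price"] > median
</code></pre>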
|
python|pandas
| 0
|
6,175
| 58,402,887
|
How to vectorize custom algorithms in numpy or pytorch?
|
<p>Suppose I have two matrices:</p>
<pre><code>A: size k x m
B: size m x n
</code></pre>
<p>Using a custom operation, my output will be <code>k x n.</code></p>
<p>This custom operation is not a dot product between the rows of <code>A</code> and columns of <code>B</code>. <strong>Suppose</strong> this custom operation is defined as:</p>
<p>For the Ith row of <code>A</code> and Jth column of <code>B</code>, the <code>i,j</code> element of the output is:</p>
<pre><code>sum( (a[i] + b[j]) ^20 ), i loop over I, j loops over J
</code></pre>
<p>The only way I can see to implement this is to expand this equation, calculate each term, them sum them.</p>
<p>Is there a way in numpy or pytorch to do this without expanding the equation?</p>
|
<p>Apart from the method @hpaulj outlines in the comments, you can also use the fact that what you are calculating is essentially a pair-wise Minkowski distance:</p>
<pre><code>import numpy as np
from scipy.spatial.distance import cdist
k,m,n = 10,20,30
A = np.random.random((k,m))
B = np.random.random((m,n))
method1 = ((A[...,None]+B)**20).sum(axis=1)
method2 = cdist(A,-B.T,'m',p=20)**20
np.allclose(method1,method2)
# True
</code></pre>
|
python|numpy|vectorization|pytorch
| 1
|
6,176
| 69,126,255
|
Python, Numpy: copying single rows to multiple indexes in irregular order
|
<p>I have a problem where I need to copy single rows of one matrix to multiple specific rows in a bigger matrix. For example:</p>
<pre><code> Matrix A:
[row1]
[row2]
[row3]
</code></pre>
<p>Copied to</p>
<pre><code> Matrix B:
[row3]
[row3]
[row2]
[row2]
[row3]
[row1]
</code></pre>
<p>The values in A may change during execution, but the rows are always copied to B by the same pattern.
I've already tried this:</p>
<pre><code> B = A[3,3,2,2,3,1]
</code></pre>
<p>But this was even slower than a simple for-loop where I placed the rows one by one. I need to do this as fast as possible with matrix sizes A ≈ (500, 500) and B ≈ (2000, 500).
Does anyone know a more efficient way to do this in python with numpy matrices?</p>
|
<p>I am not sure if this is what you are looking for but it looks decently fast</p>
<pre><code>%%timeit
A = np.matrix([[1,2],[3,4],[5,6]])
Alist = A.tolist()
B = np.matrix([Alist[2],Alist[2],Alist[1],Alist[1],Alist[2],Alist[0]])
13.4 µs ± 82.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
</code></pre>
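<p>For reference, NumPy fancy indexing is the idiomatic fast path here; the question's <code>A[3,3,2,2,3,1]</code> treats each number as an index into a separate axis, so the pattern has to be passed as one zero-based list or array instead:</p>
<pre><code>idx = np.array([2, 2, 1, 1, 2, 0])  # zero-based row pattern, reusable on every iteration
B = A[idx]                          # or np.take(A, idx, axis=0)
</code></pre>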
|
python|arrays|numpy|matrix|copy
| 0
|
6,177
| 44,816,922
|
Pandas multi indexing not working
|
<p>I have below code, </p>
<pre><code>purchase_1 = pd.Series({'Name': 'Chris', 'Item Purchased': 'Dog Food',
'Cost': 22.50})
purchase_2 = pd.Series({'Name': 'Kevyn', 'Item Purchased': 'Kitty Litter',
'Cost': 2.50})
purchase_3 = pd.Series({'Name': 'Vinod', 'Item Purchased': 'Bird Seed',
'Cost': 5.00})
df = pd.DataFrame([purchase_1, purchase_2, purchase_3],
index=['Store 1', 'Store 1', 'Store 2'])
</code></pre>
<p>I added the code below to change the indexes, but when I append the new row it gives weird results. Please help me:</p>
<pre><code>df['location'] = df.index
df.set_index(['location', 'Name'])
df = df.append(pd.Series(data={'Cost': 3.00, 'Item Purchased': 'Kitty Food'},
               name=('Store 2', 'Kevyn')))
print (df)
</code></pre>
|
<p>You forget assign <code>df</code>:</p>
<pre><code>df['location'] = df.index
#assign back to df
df = df.set_index(['location', 'Name'])
df = df.append(pd.Series(data={'Cost': 3.00, 'Item Purchased': 'Kitty Food'},
name=('Store 2', 'Kevyn')))
print (df)
Cost Item Purchased
location Name
Store 1 Chris 22.5 Dog Food
Kevyn 2.5 Kitty Litter
Store 2 Vinod 5.0 Bird Seed
Kevyn 3.0 Kitty Food
</code></pre>
<p>Simplier solution is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> with parameter <code>append=True</code>, if want new index names use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename_axis.html" rel="nofollow noreferrer"><code>rename_axis</code></a>:</p>
<pre><code>df = df.set_index('Name', append=True).rename_axis(['location','Name'])
df = df.append(pd.Series(data={'Cost': 3.00, 'Item Purchased': 'Kitty Food'},
name=('Store 2', 'Kevyn')))
print (df)
Cost Item Purchased
location Name
Store 1 Chris 22.5 Dog Food
Kevyn 2.5 Kitty Litter
Store 2 Vinod 5.0 Bird Seed
Kevyn 3.0 Kitty Food
</code></pre>
<p>And if not necessary name of first level of <code>MultiIndex</code>:</p>
<pre><code>df = df.set_index('Name', append=True)
df = df.append(pd.Series(data={'Cost': 3.00, 'Item Purchased': 'Kitty Food'},
name=('Store 2', 'Kevyn')))
print (df)
Cost Item Purchased
Name
Store 1 Chris 22.5 Dog Food
Kevyn 2.5 Kitty Litter
Store 2 Vinod 5.0 Bird Seed
Kevyn 3.0 Kitty Food
</code></pre>
|
pandas
| 0
|
6,178
| 60,858,780
|
Dict of cluster and partition with kmeans python
|
<p>I'm searching for a solution to the following problem.</p>
<p>I use KMeans from sklearn and I want a dictionary with <code>{ cluster : list of partition}</code></p>
<pre><code>kmeans = KMeans(n_clusters=n)
kmeans.fit(data)
result = zip(data,kmeans.labels_)
sortedR = sorted(result,key=lambda x: x[1])
cluster_nb = {}
for k,v in sortedR:
if v in cluster_nb:
cluster_nb[v].append(k)
else:
cluster_nb[v] = [k]
</code></pre>
<p>I have the cluster indices from <code>kmeans.labels_</code> as keys, but I need the corresponding element of <code>kmeans.cluster_centers_</code> instead.</p>
<p>For example :</p>
<pre><code>{'[1,2]' : [array([1, 3]), array([2,4])], '[5,5]' : [array([7, 8]), array([10,12])]}
</code></pre>
<p>I tried with a new loop :</p>
<pre><code>for x in cluster_nb:
cluster_nb[str(kmeans.cluster_centers_[x])] = cluster_nb.pop(x)
return cluster_nb
</code></pre>
<p>But I get this error:</p>
<pre><code>IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
</code></pre>
<p>Where am I making my error?</p>
<p>Is there a simpler solution?</p>
|
<p>Try this:</p>
<pre><code>from sklearn.cluster import KMeans
import numpy as np
data = np.random.randint(100, size=(100, 2))
kmeans = KMeans(n_clusters=5)
kmeans.fit(data)
centroids_partitions = {}
for centr in kmeans.cluster_centers_:
centroid_label = kmeans.predict([centr])
partition = []
for k, v in zip(data, kmeans.labels_):
if v == centroid_label:
partition.append(k.ravel())
centroids_partitions[centroid_label[0]] = partition
print(centroids_partitions)
</code></pre>
<p>That returns a dict like:</p>
<pre><code>{0: [array([55, 8]), ... ,[truncated], 1: [array([70, 87]), array([77, 63]), ... ]}
</code></pre>
<p>Where 0, 1, etc are the clusters labels from <code>kmeans.labels_</code></p>
<p>Or, if you want to have the centroids coordinated as key for the dict, replace with:</p>
<pre><code>centroids_partitions[centr[0],centr[1]] = partition
</code></pre>
<p>That outputs:</p>
<pre><code>{(68.29411764705881, 24.470588235294127): [array([72, 19]), array([69, 1]), array([58, 46]), .... ]}
</code></pre>
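<p>A more direct sketch builds the same mapping in a single pass over the data, avoiding the per-centroid <code>predict</code> calls:</p>
<pre><code>centroids_partitions = {}
for point, label in zip(data, kmeans.labels_):
    key = tuple(kmeans.cluster_centers_[label])  # centroid coordinates as a hashable key
    centroids_partitions.setdefault(key, []).append(point)
</code></pre>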
|
python|arrays|numpy|dictionary|k-means
| 0
|
6,179
| 61,138,390
|
Get dummy variables from a string column full of mess
|
<p>I'm a less-than-a-week beginner in Python and Data sciences, so please forgive me if these questions seem obvious.</p>
<p>I've scraped data on a website, but the result is unfortunately not very well formatted and I can't use it without transformation.</p>
<p><strong>My Data</strong></p>
<p>I have a string column which contains a lot of features that I would like to convert into dummy variables.</p>
<p><strong>Example of string</strong> : "8 équipements & optionsextérieur et châssisjantes aluintérieurBluetoothfermeture électrique5 placessécuritékit téléphone main libre bluetoothABSautreAPPUI TETE ARclimatisation"</p>
<p><strong>What I would like to do</strong></p>
<p>I would like to create a dummy colum "Bluetooth" which would be equal to one if the pattern "bluetooth" is contained in the string, and zero if not.</p>
<p>I would like to create an other dummy column "Climatisation" which would be equal to one if the pattern "climatisation" is contained in the string, and zero if not.</p>
<p>...etc</p>
<p>And do it for 5 or 6 patterns which interest me.</p>
<p><strong>What I have tried</strong></p>
<p>I wanted to use a match-test with regular expressions and to combine it with pd.getdummies method.</p>
<pre><code>import re
import pandas as pd
def match(My_pattern,My_strng):
m=re.search(My_pattern,My_strng)
if m:
return True
else:
return False
pd.getdummies(df["My messy strings colum"], ...)
</code></pre>
<p>I haven't succeeded in finding how to settle pd.getdummies arguments to specify the test I would like to apply on the column.</p>
<p>I was even wondering whether this is the best strategy, or whether it wouldn't be easier to create other parallel columns and apply a match.group() on my messy strings to populate them.
Not sure I would know how to program that anyway.</p>
<p>Thanks for your help</p>
|
<p>I think one way to do this would be:</p>
<pre><code>df.loc[df['My messy strings colum'].str.contains("bluetooth", na=False),'Bluetooth'] = 1
df.loc[~(df['My messy strings colum'].str.contains("bluetooth", na=False)),'Bluetooth'] = 0
df.loc[df['My messy strings colum'].str.contains("climatisation", na=False),'Climatisation'] = 1
df.loc[~(df['My messy strings colum'].str.contains("climatisation", na=False)),'Climatisation'] = 0
</code></pre>
<p>The tilde (~) represents <em>not</em>, so the condition is reversed in this case to string <em>does not contain</em>.</p>
<p><code>na=False</code> means that if your messy column contains any null values, these will not cause an error; they will just be treated as not meeting the condition.</p>
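<p>With several patterns, a small loop over a pattern-to-column mapping keeps this compact; <code>astype(int)</code> turns the booleans into 0/1 dummies directly:</p>
<pre><code>patterns = {'Bluetooth': 'bluetooth', 'Climatisation': 'climatisation'}
for col, pat in patterns.items():
    df[col] = df['My messy strings colum'].str.contains(pat, na=False).astype(int)
</code></pre>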
|
python|regex|pandas|dummy-variable
| 0
|
6,180
| 61,065,720
|
Subtraction of pandas dataframes
|
<p>I am trying to subtract 2 dataframes but I am not getting what I want and afterward, I would like to divide the difference by the values of a third dataframe.</p>
<p>For the first part, I have tried to do: </p>
<pre class="lang-py prettyprint-override"><code>r.sub(rf, fill_value=0)
</code></pre>
<p>And to be sure that they have the same number of rows, I decided not to drop the na for the moment and I made sure they have the same index name. </p>
<p>Here it is what I have... </p>
<p><img src="https://i.stack.imgur.com/8Trht.png" alt=""></p>
<p>For example, on 2020-01-09, I am supposed to have 0.030079 (=0.136245 - 0.106166).
It looks like it is concatenating the columns of the two dataframes... </p>
<p>Any suggestions? </p>
|
<p>Note that according to your image:</p>
<ul>
<li>you have <strong>only one</strong> DataFrame (say <em>df</em>) with <strong>two columns</strong>,</li>
<li>you write about <strong>subtraction</strong> of them, but the second value is
<strong>negative</strong>.</li>
</ul>
<p>So run:</p>
<pre><code>df['Brent Oil'] + df['S&P GSCI']
</code></pre>
<p>and e.g. for <em>2020-01-09</em> the result will be just <em>0.030079</em>.</p>
<h1>Edit</h1>
<p>Or maybe you have 2 DataFrames:</p>
<ul>
<li><em>r</em> with (the only) <em>Brent Oil</em> column,</li>
<li><em>rf</em> with (also the only) <em>S&P GSCI</em> column (with respective positive values),</li>
<li>and your picture contains the <strong>result</strong> of subtraction?</li>
</ul>
<p>If this is the case, subtract the given columns, not whole DataFrames:</p>
<pre><code>r['Brent Oil'].sub(rf['S&P GSCI'], fill_value=0)
</code></pre>
<p>Then the result will be a <em>Series</em>, with value just <em>0.030079</em> for
<em>2020-01-09</em>.</p>
<p>You could also run <code>np.array(r['Brent Oil']) - np.array(rf['S&P GSCI'])</code>
(something similar to what <em>Anurag Reddy</em> proposed), but then you get
only a <em>Numpy</em> array, with index stripped off, so it is not obvious
which difference is for which date and it is probably not what you want.</p>
|
python|pandas|dataframe|subtraction
| 0
|
6,181
| 71,749,056
|
How can I divide each array by the first value in that array in an array of arrays
|
<p>I have 4 labels listed and a corresponding 4 arrays listed for four different conditions containing relevant data.</p>
<p>I'm trying to write a loop that goes through the 4 arrays (the conditions), each containing 4 arrays (one per label), and for each inner array divides every item by that array's first value, subtracts 1, and multiplies by 100.</p>
<pre><code>#INPUT - for the 4 labels there is an array under each condition in order, each array of values contains 5 values.
drug_name = ["Astemizole","Nifedipine","Ibutilide","Piperacillin"]
ord_90 =np.array([[0.274874838,0.285399869,0.317728578,0.407954316,0.57446342],[0.267759632,0.251495008,0.217297331,0.197795259,0.189896571],[0.570722939,0.284657491,0.604528529,1,1],[0.406940575,0.330184737,0.249444664,0.195703438,0.18415127]])
ord_30 =np.array([[0.171064425,0.174987058,0.193670662,0.22728254,0.266821461],[0.166787292,0.151327656,0.116268265,0.083153659,0.070527687],[0.322497615,0.153017323,0.267804563,0.705918704,0.683116757],[0.191369803,0.129987371,0.06659752,0.0465941,0.042836445]])
paci_90 =np.array([[0.441088892,0.457035981,0.50722613,0.64209474,0.890497102],[0.411905114,0.331256048,0.220719443,0.160115913,0.146116825],[0.359142009,0.6907817009,1,0.8307300345,0.8588833043],[0.271391074,0.275477015,0.348068661,0.505349944,0.589615152]])
paci_30 =np.array([[0.220987732,0.236639735,0.270462255,0.366486834,0.471698079],[0.20193762,0.14111078,0.066240407,0.043473622,0.037192029],[0.2119885431,0.2723828983,0.2864669099,0.77502029,0.771138501],[0.168170895,0.158282051,0.171664219,0.194167938,0.193573238]])
</code></pre>
<h1>In essence I need this for each array in each array of arrays, giving 4 arrays of 4 arrays containing 5 values.</h1>
<pre><code>o9 = 1-(ord_90/[0])*100
o3 = 1-(ord_30/[0])*100
p9 = 1-(paci_90/[0])*100
p3 = 1-(paci_30/[0])*100
</code></pre>
<h1>I've Tried</h1>
<pre><code>for i in range(len(drug_name)):
    o9 = 1-(ord_90/i[0])*100
</code></pre>
<p>but get:</p>
<pre><code>o9 = 1-(ord_90/i[0])*100
TypeError: 'int' object is not subscriptable
</code></pre>
<p>I've also tried writing the function below to calculate this, but get a similar error when I use it in the loop.</p>
<pre><code>def per_herg_block(i,base,negative,x):
return negative - (i/base) *x
for i in range(len(drug_name)):
o9 = per_herg_block(ord_90, i[0], 1, 100)
</code></pre>
|
<p>This should do I guess:</p>
<pre><code>def per_herg_block(i,base,negative,x):
return negative - (i/base) *x
for i in range(len(drug_name)):
o9 = per_herg_block(ord_90, ord_90[i][0], 1, 100)
print(o9)
</code></pre>
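<p>If the formula described in the prose (divide each row by its first value, subtract 1, multiply by 100) is what is wanted, broadcasting removes the loop entirely; a sketch:</p>
<pre><code>o9 = (ord_90 / ord_90[:, :1] - 1) * 100  # ord_90[:, :1] keeps shape (4, 1) for broadcasting
</code></pre>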
|
python|arrays|numpy|for-loop|calculation
| 0
|
6,182
| 71,743,275
|
Python index error: too many indices for array
|
<p>I am trying to code a plot with this code:</p>
<pre><code> for k in range(5):
plt.scatter(np.arange(0,200),cprofit[:,k],label = marketshare)
a = pd.Series([np.mean(cprofit[:,k]), np.std(cprofit[:,k]), np.max(cprofit[:,k]), np.min(cprofit[:,k])], index=df.columns)
df = df.append(a,ignore_index=True)
</code></pre>
<p>The error:</p>
<pre><code>a = pd.Series([np.mean(cprofit[:,k]), np.std(cprofit[:,k]), np.max(cpro fit[:,k]), np.min(cprofit[:,k])], index=df.columns)
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
</code></pre>
<p>The error points at the <code>pd.Series([...])</code> line. Can someone explain what could be wrong?</p>
|
<p>The error message itself says it: <code>cprofit</code> is a 1-dimensional array, so the 2-D indexing <code>cprofit[:,k]</code> fails; <code>cprofit</code> needs to be a 2-D array (here of shape <code>(200, 5)</code>) for this code to work. Separately, the two inputs of <code>plt.scatter()</code> must be the same size: <code>np.arange(0, 200)</code> creates a 1-D array of size 200, so each column <code>cprofit[:,k]</code> must also have 200 entries.</p>
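<p>A minimal sketch of the difference between 1-D and 2-D indexing:</p>
<pre><code>import numpy as np
a = np.arange(10)       # 1-D, shape (10,)
# a[:, 0]               # raises: IndexError: too many indices for array
b = a.reshape(5, 2)     # 2-D, shape (5, 2)
print(b[:, 0])          # works: the first column
</code></pre>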
|
python|arrays|python-3.x|pandas|numpy
| 0
|
6,183
| 71,712,054
|
How do I change PyCharm output so it shows all yahoo finance data when using company.history()?
|
<p>I am using yahoo finance in python and when I run the following code:</p>
<pre><code>print(apple.history('max'))
</code></pre>
<p>It gives me this output:</p>
<pre class="lang-none prettyprint-override"><code> Open High ... Dividends Stock Splits
Date ...
1980-12-12 0.100323 0.100759 ... 0.0 0.0
1980-12-15 0.095525 0.095525 ... 0.0 0.0
</code></pre>
<p>How do I get the output to show the Low price, Close price, and Volume for each date as many sites show it does? It only shows me the 3 dots in between High and Dividends.</p>
|
<p><code>ticker.history()</code> returns a Pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html" rel="nofollow noreferrer">DataFrame</a>. You can access any column using the column name e. g. <code>'Low'</code>. A primer on indexing and selecting data can be found in the <a href="https://pandas.pydata.org/docs/user_guide/indexing.html" rel="nofollow noreferrer">docs.</a></p>
<p>By default the number of rows, that are shown of a DataFrame are limited. However, you can disable this limit.</p>
<pre><code>import yfinance as yf
import pandas as pd
apple = yf.Ticker('AAPL')
with pd.option_context('display.max_rows', None, 'display.max_columns', None):
print(apple.history('max')[['Low', 'Close', 'Volume']])
# or: display(...) if working with jupyter
</code></pre>
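<p>If you would rather change the limits for the whole session instead of a single block, set the options globally:</p>
<pre><code>pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
</code></pre>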
|
python|pandas|yahoo-finance|yfinance
| 0
|
6,184
| 42,247,358
|
How do I check if a pandas Series is merely initialised (i.e. empty) or defined?
|
<p>I'd like to distinguish between a pandas Series that has merely been initialised, and one that has actually been defined and has values. I tried the following code, but it doesn't work.</p>
<pre><code>import pandas as pd
labels = pd.Series
print len(labels)
print labels.empty
</code></pre>
<p>I get:</p>
<ul>
<li>TypeError: object of type 'type' has no len() </li>
<li>property object</li>
</ul>
<p>Then, I define the Series:</p>
<pre><code>labels = pd.Series([0, 1, 1]).unique()
print labels.empty
print len(labels)
</code></pre>
<p>This time I get:</p>
<ul>
<li>2 </li>
<li>AttributeError: 'numpy.ndarray' object has no attribute 'empty'</li>
</ul>
<p>How can I check is a Series is empty or not - getting True or False in return?</p>
|
<p>In the first instance you took a reference to the <code>Series</code> class itself rather than creating an instance:</p>
<pre><code>In [31]:
labels = pd.Series
print(type(labels))
<class 'type'>
</code></pre>
<p>hence all the errors, you want empty parentheses to make an empty series:</p>
<pre><code>In [33]:
labels = pd.Series()
print(labels.empty)
print(len(labels))
True
0
</code></pre>
<p>In the second instance <code>unique</code> returns a numpy array which has no method <code>empty</code>, a pandas <code>Series</code> does.</p>
<pre><code>In [38]:
labels = pd.Series([0, 1, 1]).unique()
type(labels)
Out[38]:
numpy.ndarray
</code></pre>
<p>You can use <code>size</code> attribute to check the dimensions:</p>
<pre><code>In [42]:
labels = pd.Series([0, 1, 1]).unique()
print(labels.size)
2
</code></pre>
|
python|pandas
| 1
|
6,185
| 69,702,449
|
Get rid of NaT and duplicates from a pandas dataFrame to obtain a series of datetime values
|
<p>I have a dataframe that looks like shown picture <a href="https://i.stack.imgur.com/wk0aS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wk0aS.png" alt="enter image description here" /></a></p>
<p>The dataframe's shape is (1944, 900).
Each row of the dataframe has one value (it might be repeated multiple times depending on the row). I need to extract a list of 1944 numbers, each representing the valid value from its row (excluding NaT and duplicate values).</p>
<p>Any ideas on this?</p>
|
<p>it looks like you could just get the values from the diagonal of this array right? If so, then assuming your dataframe is called <code>df</code></p>
<pre><code>df.values[range(len(df)), range(len(df))]
</code></pre>
<p>will give you a numpy array of these values you want</p>
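<p>One caveat: the frame here is 1944 rows by 900 columns, so a full diagonal does not exist past row 900. If the valid value is not guaranteed to sit on the diagonal, a more general sketch takes the first non-null value of each row (assuming, as stated, every row has exactly one distinct value):</p>
<pre><code>values = df.apply(lambda row: row.dropna().iloc[0], axis=1)
</code></pre>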
|
python|pandas|dataframe|datetime
| 0
|
6,186
| 69,765,548
|
Get original values from rolling sum in Pandas DataFrame
|
<p>I got data describing the number of newly hospitalized persons for specific days and regions.
The number of hospitalized persons is the rolling sum of new hospitalized persons for the last 7 days.
The DataFrame looks like this:</p>
<pre><code>Date Region sum_of_last_7_days
01.01.2020 1 1
02.01.2020 1 2
03.01.2020 1 3
04.01.2020 1 4
05.01.2020 1 5
06.01.2020 1 6
07.01.2020 1 7
08.01.2020 1 7
09.01.2020 1 7
01.01.2020 2 1
02.01.2020 2 2
03.01.2020 2 3
04.01.2020 2 4
05.01.2020 2 5
06.01.2020 2 6
07.01.2020 2 7
08.01.2020 2 7
09.01.2020 2 7
10.01.2020 2 4
</code></pre>
<p>The goal output is:</p>
<pre><code>Date Region daily_new
01.01.2020 1 1
02.01.2020 1 1
03.01.2020 1 1
04.01.2020 1 1
05.01.2020 1 1
06.01.2020 1 1
07.01.2020 1 1
08.01.2020 1 0
09.01.2020 1 0
01.01.2020 2 1
02.01.2020 2 1
03.01.2020 2 1
04.01.2020 2 1
05.01.2020 2 1
06.01.2020 2 1
07.01.2020 2 1
08.01.2020 2 0
09.01.2020 2 0
10.01.2020 2 0
</code></pre>
<p>The way should be to <strong>undo</strong> the rolling-sum operation with a 7-day window, but I wasn't able to find any solution.</p>
|
<p>To get the original, perform a <code>diff</code> and fill with the first value:</p>
<pre><code># day-over-day difference within each region
s = df.groupby('Region')['sum_of_last_7_days'].diff()
# the first row of each region has no previous value, so keep the sum itself there
df['original'] = s.mask(s.isna(), df['sum_of_last_7_days'])
</code></pre>
<p>output:</p>
<pre><code> Date Region sum_of_last_7_days original
0 01.01.2020 1 1 1.0
1 02.01.2020 1 2 1.0
2 03.01.2020 1 3 1.0
3 04.01.2020 1 4 1.0
4 05.01.2020 1 5 1.0
5 06.01.2020 1 6 1.0
6 07.01.2020 1 7 1.0
7 08.01.2020 1 7 0.0
8 09.01.2020 1 7 0.0
9 01.01.2020 2 1 1.0
10 02.01.2020 2 2 1.0
11 03.01.2020 2 3 1.0
12 04.01.2020 2 4 1.0
13 05.01.2020 2 5 1.0
14 06.01.2020 2 6 1.0
15 07.01.2020 2 7 1.0
16 08.01.2020 2 7 0.0
17 09.01.2020 2 7 0.0
</code></pre>
|
python|pandas|dataframe|rolling-computation
| 2
|
6,187
| 69,977,726
|
Why am I getting the error: 'list' object has no attribute 'replace'? I need to put my answer in a list without the character \xa0
|
<p>Here is my code:</p>
<pre><code>df_olympics = pandas.read_csv("Olympics_data.csv", sep = ";")
df_olympics.drop(df_olympics.tail(1).index,inplace=True)
df_olympics= df_olympics[(df_olympics[' summer_games_played'] >0) & (df_olympics['winter_games_played'] >0)]
df_olympics['total_medals_won']=(df_olympics[" summer_games_gold_won"])+(df_olympics[" winter_games_gold_won"])
df_olympics=df_olympics[(df_olympics['total_medals_won']==0)]
team_name_list=(df_olympics.loc[:,['team_name',]])
team_name_list = team_name_list.values.tolist()
team_list_name_clean = team_name_list.replace('\xa0', '')
print(team_list_name_clean)
</code></pre>
<p>The picture:</p>
<p><img src="https://i.stack.imgur.com/VzgnU.png" alt="enter image description here" /></p>
|
<p>Try this:</p>
<pre><code># team_name_list holds one-element lists (it came from .values.tolist()),
# so index into each inner list before calling replace
team_list_name_clean = [x[0].replace('\xa0', '') for x in team_name_list]
</code></pre>
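<p>Equivalently, clean the column before converting to a list, which avoids the nested-list issue entirely:</p>
<pre><code>team_list_name_clean = df_olympics['team_name'].str.replace('\xa0', '').tolist()
</code></pre>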
|
python|pandas|dataframe
| 1
|
6,188
| 43,393,164
|
Extract sub-DataFrames
|
<p>I have this kind of dataframe in Pandas :</p>
<pre><code>NaN
1
NaN
452
1175
12
NaN
NaN
NaN
145
125
NaN
1259
2178
2514
1
</code></pre>
<p>On the other hand I have this other dataframe :</p>
<pre><code>1
2
3
4
5
6
</code></pre>
<p>I would like to separate the first one into differents sub-dataframes like this:</p>
<pre><code>DataFrame 1:
1
DataFrame 2:
452
1175
12
DataFrame 3:
DataFrame 4:
DataFrame 5:
145
125
DataFrame 6:
1259
2178
2514
1
</code></pre>
<p>How can I do that without a loop?</p>
|
<p><strong>UPDATE:</strong> thanks to <a href="https://stackoverflow.com/questions/43393164/extract-sub-dataframes/43393492?noredirect=1#comment73848980_43393492">@piRSquared</a> for pointing out that the original solution below will not work for DFs/Series with non-numeric indexes. Here is a more generic solution:</p>
<pre><code>dfs = [x.dropna()
for x in np.split(df, np.arange(len(df))[df['column'].isnull().values])]
</code></pre>
<p><strong>OLD answer:</strong></p>
<p>IIUC you can do something like this:</p>
<p><strong>Source DF:</strong></p>
<pre><code>In [40]: df
Out[40]:
column
0 NaN
1 1.0
2 NaN
3 452.0
4 1175.0
5 12.0
6 NaN
7 NaN
8 NaN
9 145.0
10 125.0
11 NaN
12 1259.0
13 2178.0
14 2514.0
15 1.0
</code></pre>
<p><strong>Solution:</strong></p>
<pre><code>In [31]: dfs = [x.dropna()
for x in np.split(df, df.index[df['column'].isnull()].values+1)]
In [32]: dfs[0]
Out[32]:
Empty DataFrame
Columns: [column]
Index: []
In [33]: dfs[1]
Out[33]:
column
1 1.0
In [34]: dfs[2]
Out[34]:
column
3 452.0
4 1175.0
5 12.0
In [35]: dfs[3]
Out[35]:
Empty DataFrame
Columns: [column]
Index: []
In [36]: dfs[4]
Out[36]:
Empty DataFrame
Columns: [column]
Index: []
In [38]: dfs[5]
Out[38]:
column
9 145.0
10 125.0
In [39]: dfs[6]
Out[39]:
column
12 1259.0
13 2178.0
14 2514.0
15 1.0
</code></pre>
|
python|pandas
| 2
|
6,189
| 72,230,482
|
Signal correlation shift and lag correct only if arrays subtracted by mean
|
<p>If I have two arrays that are identical except for a shift:</p>
<pre><code>import numpy as np
from scipy import signal
x = [4,4,4,4,6,8,10,8,6,4,4,4,4,4,4,4,4,4,4,4,4,4,4]
y = [4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,6,8,10,8,6,4,4]
</code></pre>
<p>And I want to quantify this shift through a cross-correlation:</p>
<pre><code>correlation = signal.correlate(x, y, mode="full")
lags = signal.correlation_lags(len(x), len(y), mode="full")
lag = lags[np.argmax(correlation)]
</code></pre>
<p>The <code>lag = 0</code> but if I modify the correlation definition as:</p>
<pre><code>correlation = signal.correlate(x-np.mean(x), y-np.mean(y), mode="full")
</code></pre>
<p>Then <code>lag=-12</code>, which is the correct shift. What is the actual meaning of the array returned by <code>signal.correlation</code> and why I need to subtract the mean to obtain the true shift?</p>
|
<p>The issue is that when you do 'full' correlation, the algorithm pads the vectors with zeros, which means the highest value occurs where the signals overlap the most in the convolution, not necessarily where the features align.
Here are both examples, with and without mean suppression:</p>
<pre><code>import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
x = np.array([4,4,4,4,6,8,10,8,6,4,4,4,4,4,4,4,4,4,4,4,4,4,4])
y = np.array([4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,6,8,10,8,6,4,4])
correlation = signal.correlate(x, y, mode="full")
lag = np.argmax(correlation) - len(x)
plt.figure()
plt.subplot(411)
plt.plot(correlation)
plt.subplot(412)
correlation = signal.correlate(x-x.mean(), y-y.mean(), mode="full")
lag = np.argmax(correlation) - len(x)
plt.plot(correlation)
plt.subplot(413)
correlation_f = signal.correlate(np.ones(x.shape), np.ones(y.shape), mode="full")
correlation = signal.correlate(x, y, mode="full")/correlation_f
plt.plot(correlation)
plt.subplot(414)
corr = np.zeros(x.shape)
for i in range(len(x)):
y2 = np.roll(y, i)
corr[i] = np.corrcoef(x,y2)[0,1]
print(np.argmax(corr))
plt.plot(corr)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/puSeE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/puSeE.png" alt="enter image description here" /></a></p>
<p>Usually correlation is used to find a small signal inside a longer one, where this issue is negligible.</p>
<p>You can try to compensate for the amount of overlap between the signals, like in the third plot, but it is not perfect.</p>
<p>Mean suppression will not work in all cases either.</p>
<p>If you have the exact same signal that has merely been rolled, you can try what I put in the fourth subplot: roll one signal step by step and look for the shift with the maximum correlation coefficient. This could be the best option for you, but it only works if both signals are identical up to a circular shift.</p>
|
python|numpy|scipy|correlation|cross-correlation
| 2
|
6,190
| 72,184,641
|
Python/Pandas - Combine two columns with NaN values
|
<p>Let's say I have a dataframe with two columns: OldValue and NewValue.</p>
<p>I want to do some plotting and I want to combine the values. If OldValue is empty, NewValue is filled, and vice versa; there is not a single row where both or neither are filled.
Let's say my dataframe looks like this:</p>
<pre><code> OldValue NewValue
0 14.0 NaN
1 NaN 7.0
2 3.0 NaN
3 NaN 3.0
</code></pre>
<p>I found that i could fillna(0) and then do something along the lines of</p>
<pre><code>df["AllValue"] = df["NewValue"] + df["OldValue"]
</code></pre>
<p>Is this the most efficient way?</p>
|
<p>Your solution should be changed to use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.add.html" rel="nofollow noreferrer"><code>Series.add</code></a> with <code>fill_value=0</code> (note the column names from the question):</p>
<pre><code>df["AllValue"] = df["NewValue"].add(df["OldValue"], fill_value=0)
</code></pre>
<p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.fillna.html" rel="nofollow noreferrer"><code>Series.fillna</code></a>:</p>
<pre><code>df["AllValue"] = df["NewValue"].fillna(df["OldValue"])
</code></pre>
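<p><code>Series.combine_first</code> is yet another equivalent spelling of the same fill logic:</p>
<pre><code>df["AllValue"] = df["NewValue"].combine_first(df["OldValue"])
</code></pre>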
|
python|pandas|dataframe
| 4
|
6,191
| 72,164,620
|
Data Extraction in Python
|
<p>I've been given a data set consisting of three columns. One column has transaction information, one has a store number, and one has sections. My goal is to extract the store number from the transaction information column for 300 different stores using entity extraction. My thought process behind this was to make something similar to how companies search resumes for key words using a word bank, since I have the store numbers in a separate column already. I have the .csv file read into my program, and I have the store numbers stored into their own array. I'm trying to figure out how to search the transaction information column for those store numbers.</p>
<p>Code so far:</p>
<pre><code>import pandas as pd
import numpy as np
file = pd.read_csv(r'C:\Users\cspea\Desktop\assignment.csv')
print(file)
store_number_array = file['store_number'].to_numpy()
print(store_number_array)
</code></pre>
<p>Sample data set (in .csv format):</p>
<pre><code>transaction_descriptor,store_number,dataset
DOLRTREE 2257 00022574 ROSWELL,2257,train
AUTOZONE #3547,3547,train
TGI FRIDAYS 1485 0000,1485,train
BUFFALO WILD WINGS 003,3,train
J. CREW #568 0,568,train
</code></pre>
<p>Any tips would be greatly appreciated. Thanks for your time and assistance in advance :)</p>
|
<p>Try this: for each row, keep the whitespace-separated token of the descriptor that contains that row's store number:</p>
<pre><code>df['c'] = None
for index, row in df.iterrows():
    store = str(row['store_number'])
    tokens = row['transaction_descriptor'].split()
    # first token containing the store number, e.g. '00022574' for store 2257
    matches = [s for s in tokens if store in s]
    if matches:
        df.loc[index, 'c'] = matches[0]
</code></pre>
|
python|pandas|nlp|named-entity-extraction
| 1
|
6,192
| 50,459,267
|
In eager mode, how to convert a tensor to a ndarray
|
<p>How can I convert a tensor to a numpy array in eager mode?
In eager mode, I do not need to create a session, so I cannot use <code>.eval()</code>.</p>
<p>And I tried <code>tf.constant()</code>, it gives the following error:</p>
<p><code>TypeError: Failed to convert object of type <class 'tensorflow.python.ops.variables.Variable'> to Tensor. Contents: <tf.Variable 'filters_C:0' shape=(2, 2) dtype=float32_ref>. Consider casting elements to a supported type.</code></p>
<p>Here is the supporting code:</p>
<pre><code>filters_C = tf.get_variable('filters_C',
shape=[2, 2],
initializer=tf.ones_initializer,
regularizer=None,
trainable=True)
filters_C = tf.constant(filters_C)
</code></pre>
|
<p>Simply call the <code>numpy</code> method:</p>
<pre><code>filters_C.numpy()
</code></pre>
<p>It is a <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/framework/ops.py#L723" rel="nofollow noreferrer">method of the <code>EagerTensor</code> class</a>, which is the subclass of <code>Tensor</code> that is used by default in eager execution, which explains why this method pops up then.</p>
|
python|tensorflow
| 2
|
6,193
| 50,373,813
|
Aggregate columns with same date (sum)
|
<p>So, I need to aggregate rows where the date is the same.</p>
<p>My code, as of now, returns the following:</p>
<pre><code> date value source
0 2018-04-08 15:52:26.110 1 ANAPRO
1 2018-04-22 12:14:38.807 1 ANAPRO
2 2018-04-22 12:34:18.403 1 ANAPRO
3 2018-04-22 12:40:35.877 1 ANAPRO
4 2018-04-22 12:53:57.897 1 ANAPRO
5 2018-04-22 13:02:45.180 1 ANAPRO
6 2018-05-04 17:41:15.840 1 ANAPRO
7 2018-04-22 15:03:54.353 1 ANAPRO
8 2018-04-22 15:24:27.030 1 ANAPRO
9 2018-04-22 15:27:56.813 1 ANAPRO
</code></pre>
<p>I don't think I can aggregate the columns while I have HH:MM:SS.ms being showed alongside the date (I only need the date)</p>
<p>I've tried this :</p>
<pre><code>df['date'] = pandas.to_datetime(df['date'], format='%b %d %Y.%f').astype(str)
</code></pre>
<p>But to no avail, I still got the same return. </p>
<p>The code is: </p>
<p>Reads the my excel file (user input).</p>
<pre><code>df = pandas.read_excel(var + '.xlsx')
</code></pre>
<p>Selects the columns I need, and create a new .xlsx to contain it.</p>
<pre><code>df = df.iloc[:, 36].to_excel(var + '_.xlsx', index=False)
</code></pre>
<p>Opens the new .xlsx file.</p>
<pre><code>df = pandas.read_excel(var + '_.xlsx')
</code></pre>
<p>Renames the column</p>
<pre><code>df = df.rename(columns={'Prospect Dt. Cadastro': 'date'})
</code></pre>
<p>Adds the other columns I need.</p>
<pre><code>df['value'] = 1
df['source'] = 'ANAPRO'
</code></pre>
<p>Tries to format the date.</p>
<pre><code>df['date'] = pandas.to_datetime(df['date'], format='%b %d %Y.%f').astype(str)
</code></pre>
<p>Creates the final xlsx, with all the formatted data.</p>
<pre><code>df = df.to_excel('payload.xlsx')
</code></pre>
<p>Reads the final xlsx.</p>
<pre><code>df = pandas.read_excel('payload.xlsx', names=['date', 'value', 'source'])
</code></pre>
<p>Prints the first 10 rows.</p>
<pre><code>print(df.head(10))
</code></pre>
<p>I'm new to python, so sorry if I'm doing something awkward, thank you!</p>
|
<p>IIUC, you might want <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.date.html" rel="nofollow noreferrer"><code>pandas.Series.dt.date</code></a>:</p>
<pre><code>df['date'] = pandas.to_datetime(df['date']).dt.date
>>> df
date value source
0 2018-04-08 1 ANAPRO
1 2018-04-22 1 ANAPRO
2 2018-04-22 1 ANAPRO
3 2018-04-22 1 ANAPRO
4 2018-04-22 1 ANAPRO
5 2018-04-22 1 ANAPRO
6 2018-05-04 1 ANAPRO
7 2018-04-22 1 ANAPRO
8 2018-04-22 1 ANAPRO
9 2018-04-22 1 ANAPRO
</code></pre>
<p>Or, if your goal is aggregation using <a href="https://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow noreferrer"><code>groupby</code></a>, you can retain all the information in your original date column, and group by only the date as such:</p>
<pre><code>df['date'] = pandas.to_datetime(df['date'])
df.groupby(df['date'].dt.date)
# for example, to get the sum each day:
# df.groupby(df['date'].dt.date).sum()
# value
# date
# 2018-04-08 1
# 2018-04-22 8
# 2018-05-04 1
</code></pre>
<p>Or, using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Grouper.html" rel="nofollow noreferrer"><code>pd.Grouper</code></a>:</p>
<pre><code>df['date'] = pandas.to_datetime(df['date'])
df.groupby(pd.Grouper(key='date', freq='D'))
</code></pre>
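<p>Or <code>resample</code>, which behaves like the daily <code>Grouper</code> but also emits the missing days as zero rows; a sketch for the daily sums:</p>
<pre><code>daily = df.set_index('date').resample('D')['value'].sum()
</code></pre>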
|
python|excel|pandas|xlsx|xlsxwriter
| 3
|
6,194
| 50,400,038
|
Warning in NumPy when setting array element to NaN using Spyder
|
<p>I am running Python 3.6.4 with Anaconda and Spyder.</p>
<p>When I am trying to set a value of a NumPy array to NaN I am getting the following RuntimeWarning.</p>
<pre><code>a = numpy.array([5.0,2.0,1.0])
a[0] = numpy.nan
</code></pre>
<blockquote>
<p>C:\Users..\Anaconda3\lib\site-packages\numpy\core_methods.py:29:
<strong>RuntimeWarning: invalid value encountered in reduce</strong> return
umr_minimum(a, axis, None, out, keepdims)</p>
<p>C:\Users..\Anaconda3\lib\site-packages\numpy\core_methods.py:26:
<strong>RuntimeWarning: invalid value encountered in reduce</strong> return
umr_maximum(a, axis, None, out, keepdims)</p>
</blockquote>
<p>Why is this happening?</p>
|
<p>I have updated the NumPy version and everything works fine now.</p>
|
python|numpy|nan|spyder
| 2
|
6,195
| 50,283,088
|
Cut a bounding box using numpy meshgrid python
|
<p>I want to create a bounding box out of the following dimensions using meshgrid but just not able to get the right box. </p>
<p>My parent dimensions are <code>x = 0 to 19541</code> and <code>y = 0 to 14394</code>. Out of that, I want to cut a box from <code>x' = 4692 to 12720</code> and <code>y' = 4273 to 10117</code>.</p>
<p>However, I am not getting the right bounds. Could someone please help me here?</p>
<pre><code>from matplotlib.path import Path
xmin, xmax = 4692, 12720
ymin, ymax = 4273, 10117
sar_ver = [(4692, 10117), (12720, 10117), (12658, 4274), (4769, 4273), (4692, 10117)]
x, y = np.meshgrid(np.arange(xmin, xmax + 1), np.arange(ymin, ymax + 1))
shx = x
x, y = x.flatten(), y.flatten()
points = np.vstack((x, y)).T
path = Path(sar_ver)
grid = path.contains_points(points)
grid.shape = shx.shape # 5845 X 8029
print grid
</code></pre>
<p>UPDATE: This is what I tried and I am close to what I want but not exactly. I want to change the original origin from 0 to the image's surrounding box as shown in expected output.</p>
<p>The updated code that I am using is this </p>
<pre><code>from matplotlib.path import Path
nx, ny = 16886, 10079
sar_ver = [(16886, 1085), (15139, 2122), (14475, 5226), (8419, 5601), (14046, 6876), (14147, 10079), (16816, 3748), (16886, 1085)]
x, y = np.meshgrid(np.arange(nx), np.arange(ny))
x, y = x.flatten(), y.flatten()
points = np.vstack((x,y)).T
path = Path(sar_ver)
grid = path.contains_points(points)
grid.shape = (10079, 16886)
grid = np.multiply(grid,255)
int_grid = grid.astype(np.uint8)
grid_img = Image.fromarray(int_grid)
grid_img.save('grid_image.png') # ACTUAL OUTPUT IMAGE WITH ORIGIN NOT SHIFTED
</code></pre>
<p>Input geom: <a href="https://i.stack.imgur.com/DslO6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DslO6.png" alt="enter image description here"></a></p>
<p>Expected output is this: Doesn't matter if the image is rotated the other way round but will be a cherry on top if its aligned correctly.
<a href="https://i.stack.imgur.com/2cLEm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2cLEm.png" alt="enter image description here"></a></p>
<p>However I am getting right now this so my ACTUAL OUTPUT from the updated code posted is this: </p>
<p><a href="https://i.stack.imgur.com/uuQEd.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uuQEd.jpg" alt="enter image description here"></a></p>
<p>So I want to shift the origin around the box.</p>
<p>BOUNDING BOX PROBLEM DETAILS AFTER GETTING THE MASK: This code comes after the line posted in the second update <code>grid_img.save('grid_image.png') # ACTUAL OUTPUT IMAGE WITH ORIGIN NOT SHIFTED</code></p>
<p>Here <code>im</code> is the matrix of the actual image. What should be the x-y min, max of <code>im</code> to have the same shape as mask and multiply both of them to get pixel values and the rest cancelled out with 0s. </p>
<pre><code> img_x = 19541 # 0 - 19541
img_y = 14394 # 0 - 14394
im = np.fromfile(binary_file_path, dtype='>f4')
im = np.reshape(im.astype(np.float32), (img_x, img_y))
im = im[:10079, :16886]
bb_list = np.multiply(grid, im)
# slice and dice
slice_rows = np.any(bb_list, axis=1)
slice_cols = np.any(bb_list, axis=0)
ymin, ymax = np.where(slice_rows)[0][[0, -1]]
xmin, xmax = np.where(slice_cols)[0][[0, -1]]
answer = bb_list[ymin:ymax + 1, xmin:xmax + 1]
# convert to unit8
int_ans = answer.astype(np.uint8)
fin_img = Image.fromarray(int_ans)
fin_img.save('test_this.jpeg')
</code></pre>
<p>My GOAL is to cut out a polygon of a given geom out of a given image. So I am taking the mask out of that polygon and then using that mask to cut the same out of the original image. So multiplying mask's 1's and 0's with the pixel values in the image to just get 1*pixel values. </p>
<p>I tried the following to cut the actual image down to the same dimensions so that I can run <code>np.multiply(im, mask)</code>, but it didn't work, as the image is not cut to the same shape as the mask. I tried your min and max below but it didn't work!</p>
<pre><code>im = im[xmin:xmax, ymin:ymax]
ipdb> im.shape
(5975, 8994)
ipdb> mask.shape
(8994, 8467)
</code></pre>
<p>Clearly I cannot multiple mask and im now.</p>
|
<p>I think you got it almost right in the first attempt, in the second one you're building a <code>meshgrid</code> for the full image while you just want the shape mask, don't you?</p>
<pre><code>import numpy as np
import matplotlib as mpl
from matplotlib.path import Path
from matplotlib import patches
import matplotlib.pyplot as plt
from PIL import Image
sar_ver = [(16886, 1085), (15139, 2122), (14475, 5226), (8419, 5601),
(14046, 6876), (14147, 10079), (16816, 3748), (16886, 1085)]
path = Path(sar_ver)
xmin, ymin, xmax, ymax = np.asarray(path.get_extents(), dtype=int).ravel()
x, y = np.mgrid[xmin:xmax, ymin:ymax]
points = np.transpose((x.ravel(), y.ravel()))
mask = path.contains_points(points)
mask = mask.reshape(x.shape).T
img = Image.fromarray((mask * 255).astype(np.uint8))
img.save('mask.png')
# plot shape and mask for debug purposes
fig = plt.figure(figsize=(8,4))
gs = mpl.gridspec.GridSpec(1,2)
gs.update(wspace=0.2, hspace= 0.01)
ax = plt.subplot(gs[0])
patch = patches.PathPatch(path, facecolor='orange', lw=2)
ax.add_patch(patch)
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
ax = plt.subplot(gs[1])
ax.imshow(mask, origin='lower')
plt.savefig("shapes.png", bbox_inches="tight", pad_inches=0)
</code></pre>
<p>It produces the mask:</p>
<p><a href="https://i.stack.imgur.com/A8GcCm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A8GcCm.png" alt="mask"></a></p>
<p>And also plots both the mask and the path for debugging purposes:</p>
<p><a href="https://i.stack.imgur.com/7rpli.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7rpli.png" alt="shapes"></a></p>
<p>The different orientation comes from the different origin position in <code>matplotlib</code> plots and images, but it should be trivial enough to change it the way you want.</p>
<p><strong>EDIT after latest question edits</strong></p>
<p>Here's an updated script that takes an image, generates a mask for your path and cuts it out. I'm using a dummy image and scaling down shapes a bit so they're easier to work with.</p>
<pre><code>import numpy as np
import matplotlib as mpl
from matplotlib.path import Path
from matplotlib import patches
import matplotlib.pyplot as plt
import skimage.transform
import skimage.data
from PIL import Image
sar_ver = np.asarray([(16886, 1085), (15139, 2122), (14475, 5226), (8419, 5601),
(14046, 6876), (14147, 10079), (16816, 3748), (16886, 1085)])
# reshape into smaller path for faster debugging
sar_ver = sar_ver // 20
# create dummy image
img = skimage.data.chelsea()
img = skimage.transform.rescale(img, 2)
# matplotlib path
path = Path(sar_ver)
xmin, ymin, xmax, ymax = np.asarray(path.get_extents(), dtype=int).ravel()
# create a mesh grid of the shape of the final mask
x, y = np.mgrid[:img.shape[1], :img.shape[0]]
# mesh grid to points
points = np.vstack((x.ravel(), y.ravel())).T
# mask for the point included in the path
mask = path.contains_points(points)
mask = mask.reshape(x.shape).T
# plots
fig = plt.figure(figsize=(8,6))
gs = mpl.gridspec.GridSpec(2,2)
gs.update(wspace=0.2, hspace= 0.2)
# image + patch
ax = plt.subplot(gs[0])
ax.imshow(img)
patch = patches.PathPatch(path, facecolor="None", edgecolor="cyan", lw=3)
ax.add_patch(patch)
# mask
ax = plt.subplot(gs[1])
ax.imshow(mask)
# filter image with mask
ax = plt.subplot(gs[2])
ax.imshow(img * mask[..., np.newaxis])
# remove mask from image
ax = plt.subplot(gs[3])
ax.imshow(img * ~mask[..., np.newaxis])
# plt.show()
plt.savefig("shapes.png", bbox_inches="tight", pad_inches=0)
</code></pre>
<p><a href="https://i.stack.imgur.com/db8Te.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/db8Te.png" alt="cat"></a></p>
|
python-2.7|numpy|matplotlib|polygon|bounding-box
| 2
|
6,196
| 45,720,331
|
Getting NaN's in mutiple columns after pivoting
|
<p>I need to pivot a dataframe (dfM) which looks something like</p>
<pre><code>Task Question Answer analystID
x a 1 u
y b 2 i
z c 3 o
</code></pre>
<p>I want to pivot it so that the analyst IDs are the headers and the Answers are what is filled under the headers, with the Task and Question as index.
Initially I tried</p>
<pre><code>dfM['Answer'] = pd.to_numeric(dfM['Answer'], errors='coerce')
dfP = pd.pivot_table(dfM, index = ['Task', 'Question'], columns = 'analystID',
values = ['Answer'])
</code></pre>
<p>because I was getting a No Numeric Types to aggregate error, but now all the Answers that are supposed to be under the headers are NaNs.
Is there a good way to fix this problem?</p>
|
<p>One way to see what is wrong with your 'Answer' column is to find the characters in the strings that are not numeric:</p>
<pre><code>dfm= pd.DataFrame({'Answer': ['1', '2 ', '3o', '1,000']})
def find_non_num(x):
uniq= set(str(x))
nums= set(['1','2','3','4','5','6','7','8','9','0'])
return uniq - nums
dfm.Answer.apply(find_non_num) #optional .unique()
#the output will be:
#0 {}
#1 { }
#2 {o}
#3 {,}
</code></pre>
<p>This way you can see what made your NaNs appear. Moreover, you can use the <code>.unique()</code> method to find what kinds of errors are present, and boolean indexing to see which rows have which problem:</p>
<pre><code>dfm.Answer[dfm.Answer.str.find('o') > -1] #returns Series of [3o] with index 2
</code></pre>
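<p>Once the culprit characters are known, you can strip them before converting; a sketch for the toy data above (the exact characters to remove depend on your real data):</p>
<pre><code>cleaned = dfm['Answer'].str.replace(',', '').str.strip()
dfm['Answer_num'] = pd.to_numeric(cleaned, errors='coerce')  # '3o' still becomes NaN
</code></pre>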
|
python|pandas
| 0
|
6,197
| 45,612,077
|
Creating vector of tensorflow nodes efficiently
|
<p>Let's say I have the following function in Python that takes a TensorFlow variable x and some constant y, and returns a node that depends in some way on those two.</p>
<pre><code>import tensorflow as tf
x = tf.Variable(3.0)
y = {"a" : 3, "b" : 1.0}
def make_graph(x, y):
return y["a"] * x**2 + y["b"]
</code></pre>
<p>I have a list of constants like y (y_vec) and I would like to apply the function to each element and then calculate the sum of these nodes, something like this:</p>
<pre><code>f = sum([ make_graph(x, y) for y in y_vec ])
</code></pre>
<p>Then I want to optimize f with respect to x. Of course the function make_graph can be more complicated. The question is how to do this efficiently for a very long y_vec. </p>
|
<p>The answer depends on the function that you are applying. In your example, you could do something like this:</p>
<pre><code>import tensorflow as tf

x = tf.Variable(3.0)
# stack the constants from all elements of y_vec into one dict of vectors
y = {"a": [3, 4, 5], "b": [1.0, 2.0, 3.0]}

def make_graph(x, y):
    # one vectorized op instead of len(y_vec) separate subgraphs
    return tf.reduce_sum(y["a"] * x**2 + y["b"], axis=0)

f = make_graph(x, y)
</code></pre>
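<p>To then optimize f with respect to x, a minimal TF1-style sketch — the learning rate and step count here are arbitrary placeholders, not tuned values:</p>
<pre><code>train_op = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(f)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)
    print(sess.run([x, f]))  # x is driven toward 0, which minimizes f
</code></pre>
<p>Because the summation happens inside one <code>tf.reduce_sum</code>, the graph size stays constant no matter how long y_vec is, which is what makes this efficient compared to building a Python-level sum of per-element subgraphs.</p>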
|
python|tensorflow
| 1
|
6,198
| 45,671,466
|
Distributed TensorFlow reload model failed
|
<p>I use distributed TensorFlow and <strong>save the model with this code</strong>:</p>
<pre><code>hooks=[tf.train.StopAtStepHook(last_step=1000000)]
with tf.train.MonitoredTrainingSession(master=server.target,
is_chief=is_chief,
checkpoint_dir=self.checkpoint_dir,
hooks=hooks,
save_checkpoint_secs=30,
config=session_conf) as self.sess:
</code></pre>
<p><strong>Reload the model:</strong></p>
<pre><code>checkpoint_dir = 'checkpoints'
checkpoint_file = tf.train.latest_checkpoint(checkpoint_dir)
graph = tf.Graph()
with graph.as_default():
session_conf = tf.ConfigProto(
allow_soft_placement=FLAGS.allow_soft_placement,
log_device_placement=FLAGS.log_device_placement)
sess = tf.Session(config=session_conf)
with sess.as_default():
# Load the saved meta graph and restore variables
saver = tf.train.import_meta_graph("{}.meta".format(checkpoint_file))
saver.restore(sess, checkpoint_file)
</code></pre>
<p><strong>I get this error:</strong></p>
<pre><code> saver = tf.train.import_meta_graph("{}.meta".format(checkpoint_file))
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\training\saver.py", line 1686, in import_meta_graph
**kwargs)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\meta_graph.py", line 504, in import_scoped_meta_graph
producer_op_list=producer_op_list)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\importer.py", line 311, in import_graph_def
op_def=op_def)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 2506, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1269, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): Cannot assign a device for operation 'save/RestoreV2_65': Operation was explicitly assigned to /job:ps/task:0/device:CPU:0 but available devices are [ /job:localhost/replica:0/task:0/cpu:0 ]. Make sure the device specification refers to a valid device.
[[Node: save/RestoreV2_65 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:ps/task:0/device:CPU:0"](save/Const, save/RestoreV2_65/tensor_names, save/RestoreV2_65/shape_and_slices)]]
</code></pre>
<p><strong>The key point is</strong> <code>/job:ps/task:0/device:CPU:0</code>. I found it in the meta file:</p>
<pre><code>conv-maxpool-2/W
VariableV2"/job:ps/task:0*
dtype0*
</code></pre>
<p>Am I saving the model the wrong way, or reloading it the wrong way?</p>
|
<p>You need to clear the device assignments when you load the graph, i.e.</p>
<pre><code>tf.train.import_meta_graph('...', clear_devices=True)
</code></pre>
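<p>With <code>clear_devices=True</code>, the <code>/job:ps/task:0</code> placements baked into the meta graph are stripped on import, so the restored ops can be placed on whatever devices the local session actually has. Applied to the reload code from the question — a sketch, assuming the same <code>checkpoint_dir</code> layout:</p>
<pre><code>import tensorflow as tf

checkpoint_file = tf.train.latest_checkpoint('checkpoints')
graph = tf.Graph()
with graph.as_default():
    sess = tf.Session()
    with sess.as_default():
        # clear_devices=True drops the saved /job:ps/... assignments
        saver = tf.train.import_meta_graph(
            "{}.meta".format(checkpoint_file), clear_devices=True)
        saver.restore(sess, checkpoint_file)
</code></pre>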
|
python|tensorflow|distributed
| 1
|
6,199
| 45,670,487
|
numpy.cov() exception: 'float' object has no attribute 'shape'
|
<p>I have a dataset for different plant species, and I separated each species into a different <code>np.array</code>.</p>
<p>When trying to generate Gaussian models out of these species, I had to calculate the means and covariance matrices for each different label.</p>
<p>The problem is: when using <code>np.cov()</code> on one of the labels, the function raises the error "'float' object has no attribute 'shape'" and I can't really figure out where the problem is coming from. The exact line of code I'm using is the following:</p>
<pre><code>covx = np.cov(label0, rowvar=False)
</code></pre>
<p>Where <code>label0</code> is a numpy ndarray of shape (50,3), where the columns represent different variables and each row is a different observation.</p>
<p>The exact error trace is:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-81-277aa1d02ff0> in <module>()
2
3 # Get the covariances
----> 4 np.cov(label0, rowvar=False)
C:\Users\Matheus\Anaconda3\lib\site-packages\numpy\lib\function_base.py in cov(m, y, rowvar, bias, ddof, fweights, aweights)
3062 w *= aweights
3063
-> 3064 avg, w_sum = average(X, axis=1, weights=w, returned=True)
3065 w_sum = w_sum[0]
3066
C:\Users\Matheus\Anaconda3\lib\site-packages\numpy\lib\function_base.py in average(a, axis, weights, returned)
1143
1144 if returned:
-> 1145 if scl.shape != avg.shape:
1146 scl = np.broadcast_to(scl, avg.shape).copy()
1147 return avg, scl
AttributeError: 'float' object has no attribute 'shape'
</code></pre>
<p>Any idea what is going wrong?</p>
|
<p>The error is reproducible if the array is of <code>dtype=object</code>:</p>
<pre><code>import numpy as np
label0 = np.random.random((50, 3)).astype(object)
np.cov(label0, rowvar=False)
</code></pre>
<blockquote>
<p>AttributeError: 'float' object has no attribute 'shape'</p>
</blockquote>
<p>If possible you should convert it to a numeric type. For example:</p>
<pre><code>np.cov(label0.astype(float), rowvar=False) # works
</code></pre>
<p>Note: <code>object</code> arrays are rarely useful (they are slow and not all NumPy functions deal gracefully with these - like in this case), so it could make sense to check where it came from and also fix that.</p>
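<p>A common source of such <code>object</code> arrays is slicing them out of a pandas DataFrame whose columns have mixed dtypes; a minimal sketch of the diagnosis, where the DataFrame here is a made-up stand-in for the original dataset:</p>
<pre><code>import numpy as np
import pandas as pd

# one string column is enough to turn .values into an object array
df = pd.DataFrame({'x': [1.0, 2.0], 'y': [3.0, 4.0], 'species': ['a', 'b']})

numeric = df[['x', 'y']].values   # float64 -- np.cov works on this
mixed = df.values                 # object  -- np.cov raises the error above

print(numeric.dtype, mixed.dtype)  # float64 object
print(np.cov(numeric, rowvar=False))
</code></pre>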
|
python|arrays|numpy|attributeerror
| 30
|