| Unnamed: 0 (int64) | id (int64) | title (string) | question (string) | answer (string) | tags (string) | score (int64) |
|---|---|---|---|---|---|---|
5,700
| 73,388,298
|
How to broadcast based on an index specification?
|
<p>I have the following input and use case. Note that the index entries are lists; when their <code>len</code> is greater than one, the row's values should be broadcast:</p>
<pre><code>import pandas as pd

df = pd.DataFrame([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]],
                  index=pd.Index([[1], [2, 3], [4]]),
                  columns=['a', 'b', 'c'])
print(df)
</code></pre>
<p>and would like to flatten the index in a way that broadcasts the values as follows:</p>
<pre><code>expected = pd.DataFrame([[1, 2, 3],
                         [4, 5, 6],
                         [4, 5, 6],
                         [7, 8, 9]],
                        index=[1, 2, 3, 4],
                        columns=['a', 'b', 'c'])
print(expected)
</code></pre>
|
<p>You can temporarily set the index as column, <code>explode</code> it and set it back as index:</p>
<pre><code>df.reset_index().explode('index').set_index('index')
</code></pre>
<p>output:</p>
<pre><code> a b c
index
1 1 2 3
2 4 5 6
3 4 5 6
4 7 8 9
</code></pre>
|
python|pandas
| 2
|
5,701
| 73,204,249
|
Output an error if there is a value under a NaN header in an Excel file
|
<p>I have an input Excel table that looks like this:</p>
<pre><code>head_1 head_2 head_3
val_1 val_2 val_3
val_4 val_5 val_6 val_7
</code></pre>
<p>I need to output an error, because there is a value (val_7) under the NaN header, but I have no idea how to implement this.</p>
|
<p>try:</p>
<pre class="lang-py prettyprint-override"><code>assert sum(['unnamed' in col.lower() for col in df.columns]) == 0, \
    f"Values present in {sum(['unnamed' in col.lower() for col in df.columns])} unnamed column(s)"
</code></pre>
<p>result:</p>
<pre class="lang-py prettyprint-override"><code>---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Input In [23], in <cell line: 1>()
----> 1 assert sum(['unnamed' in col.lower() for col in df.columns])==0, \
2 f"Values present in {sum(['unnamed' in col.lower() for col in df.columns])} unnamed column(s)"
AssertionError: Values present in 1 unnamed column(s)
</code></pre>
<p>It works as long as none of your real column names contain the word "unnamed" (pandas auto-names blank headers <code>Unnamed: 0</code>, <code>Unnamed: 1</code>, ...); see the variant below.</p>
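<p>A slightly more robust variant (my addition, not part of the original answer; it assumes pandas' default <code>Unnamed: &lt;n&gt;</code> naming for blank headers) only fails when such a column actually contains data:</p>
<pre class="lang-py prettyprint-override"><code>import re

# columns pandas auto-named because their header cell was blank
unnamed_cols = [c for c in df.columns if re.match(r'^Unnamed: \d+$', str(c))]
# fail only if any of those columns actually holds a value
assert not df[unnamed_cols].notna().any().any(), \
    f"Values present in {len(unnamed_cols)} unnamed column(s)"
</code></pre>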
|
python|excel|pandas
| 0
|
5,702
| 73,372,699
|
More efficient way to build dataset than using lists
|
<p>I am building a dataset for a sequence-to-point conv network, where each window is moved by one timestep.
Basically this loop is doing it:</p>
<pre><code>x_train = []
y_train = []
for i in range(window, len(input_train)):
    x_train.append(input_train[i-window:i].tolist())
    y = target_train[i-window:i]
    y = y[int(len(y)/2)]
    y_train.append(y)
</code></pre>
<p>When I'm using a big value for window, e.g. 500, I get a memory error.
Is there a way to build the training dataset more efficiently?</p>
|
<p>You should use <code>pandas</code>. It still might take too much space, but you can try:</p>
<pre><code>import pandas as pd

# if input_train isn't a pd.Series already
input_train = pd.Series(input_train)

rolling_data = (w.reset_index(drop=True) for w in input_train.rolling(window))
x_train = pd.DataFrame(rolling_data).iloc[window - 1:]
# midpoint target for each window, one entry per row of x_train
y_train = target_train[window//2 : window//2 + len(input_train) - window + 1]
</code></pre>
<p>Some explanations with an example:</p>
<p>Assuming a simple series:</p>
<pre><code>>>> input_train = pd.Series([1, 2, 3, 4, 5])
>>> input_train
0 1
1 2
2 3
3 4
4 5
dtype: int64
</code></pre>
<p>We can create a dataframe with the windowed data like so:</p>
<pre><code>>>> pd.DataFrame(input_train.rolling(2))
0 1 2 3 4
0 1.0 NaN NaN NaN NaN
1 1.0 2.0 NaN NaN NaN
2 NaN 2.0 3.0 NaN NaN
3 NaN NaN 3.0 4.0 NaN
4 NaN NaN NaN 4.0 5.0
</code></pre>
<p>The problem with this is that values in each window have their own indices (0 has 0, 1 has 1, etc.) so they end up in corresponding columns. We can fix this by resetting indices for each window:</p>
<pre><code>>>> pd.DataFrame(w.reset_index(drop=True) for w in input_train.rolling(2))
0 1
0 1.0 NaN
1 1.0 2.0
2 2.0 3.0
3 3.0 4.0
4 4.0 5.0
</code></pre>
<p>The only thing left to do is remove the first <code>window - 1</code> rows because they are not complete (that is just how <code>rolling</code> works):</p>
<pre><code>>>> pd.DataFrame(w.reset_index(drop=True) for w in input_train.rolling(2)).iloc[2-1:] # .iloc[1:]
0 1
1 1.0 2.0
2 2.0 3.0
3 3.0 4.0
4 4.0 5.0
</code></pre>
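<p>As a side note beyond the original answer: if NumPy &gt;= 1.20 is available, <code>sliding_window_view</code> builds the same windows as zero-copy views, which is even lighter on memory (a sketch, assuming <code>input_train</code> and <code>target_train</code> are array-like):</p>
<pre><code>import numpy as np

arr = np.asarray(input_train)
# each row is a view into arr, so no window data is copied
x_train = np.lib.stride_tricks.sliding_window_view(arr, window)
# midpoint target for each window, aligned with the rows of x_train
y_train = np.asarray(target_train)[window//2 : window//2 + len(arr) - window + 1]
</code></pre>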
|
python|list|numpy|keras
| 1
|
5,703
| 60,218,681
|
Tensorflow always gives me the same result while the outputs are normal
|
<p>I hope you are having a great day!</p>
<p>I recently tried to train a regression model using TensorFlow, and I completed my code by following the instructions <a href="https://www.tensorflow.org/tutorials/keras/regression" rel="nofollow noreferrer">here</a>.</p>
<pre><code>data = pd.read_csv('regret.csv')
max_regret = data['regret'].max()
data['regret'] = data['regret'] / max_regret  # Normalize Regrets
regret_labels = data.pop('regret')

def build_model():
    model = keras.Sequential([
        layers.Dense(64, activation='relu', input_shape=[len(data.keys())]),
        layers.Dense(64, activation='relu'),
        layers.Dense(1)
    ])
    optimizer = tf.keras.optimizers.RMSprop(0.001)
    model.compile(loss='mse',
                  optimizer=optimizer,
                  metrics=['mae', 'mse'])
    return model

model = build_model()

class PrintDot(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs):
        if epoch % 100 == 0: print('')
        print('.', end='')

EPOCHS = 1000
# early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=100)
history = model.fit(
    data, regret_labels,
    epochs=EPOCHS, validation_split=0.2, verbose=0,
    callbacks=[PrintDot()])

test = model.predict(data)
loss, mae, mse = model.evaluate(data, regret_labels, verbose=2)
</code></pre>
<p>However, I encountered a problem: all the predictions were the same, even though <code>model.evaluate()</code> gave me different statistics across trials.</p>
<p><a href="https://i.stack.imgur.com/StjxB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/StjxB.png" alt="enter image description here"></a></p>
<p>I also attached the file via <a href="https://www.dropbox.com/transfer/AAAAAGPs7NjHt3_KC_Vt6L-M89ygT-2TqVaWX-TLB5JrP0oMGRj1zC0" rel="nofollow noreferrer">this link</a>.</p>
<p>Would you take a look at it and give me some ideas to solve it? Thanks in advance!</p>
|
<p>You can try the approach below, which splits your data set into training and test sets before fitting the model.</p>
<pre><code>import pandas as pd
import keras
from keras.models import Sequential
from keras.layers.core import Dense
from keras.optimizers import RMSprop

data = pd.read_csv('regret.csv')
max_regret = data['regret'].max()
data['regret'] = data['regret'] / max_regret
len(data.keys())
data

train_dataset = data.sample(frac=0.8, random_state=0)
test_dataset = data.drop(train_dataset.index)
train_labels = train_dataset.pop('regret')
test_labels = test_dataset.pop('regret')

def build_model():
    model = Sequential()
    model.add(Dense(64, activation='relu', input_shape=(27,)))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(1))
    optimizer = RMSprop(0.001)
    model.compile(loss='mse',
                  optimizer=optimizer,
                  metrics=['mae', 'mse'])
    return model

model = build_model()

class PrintDot(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs):
        if epoch % 100 == 0: print('')
        print('.', end='')

EPOCHS = 1000
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=100)
history = model.fit(
    train_dataset, train_labels,
    epochs=EPOCHS, validation_split=0.2, verbose=0,
    callbacks=[PrintDot()])

test = model.predict(test_dataset)
</code></pre>
<p>The results change slightly, as below:
<a href="https://i.stack.imgur.com/C8RxO.png" rel="nofollow noreferrer">result screenshot</a></p>
<p>What you could do better is min-max scale all the attributes, for example as sketched below.</p>
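<p>A minimal sketch of that scaling step, assuming scikit-learn is available (not part of the original code):</p>
<pre><code>from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
# fit the min/max on the training data only, then reuse it for the test data
train_scaled = scaler.fit_transform(train_dataset)
test_scaled = scaler.transform(test_dataset)
</code></pre>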
<p>Hope this helps.</p>
|
python|tensorflow
| 0
|
5,704
| 60,116,419
|
Extract entity from dataframe using spacy
|
<p><a href="https://i.stack.imgur.com/2Atis.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Atis.png" alt="enter image description here"></a>I read the contents from an Excel file using pandas:</p>
<pre><code>import pandas as pd
df = pd.read_excel("FAM_template_Update 1911274_JS.xlsx" )
df
</code></pre>
<p>While trying to extract entities using spacy::</p>
<pre><code>import spacy
nlp = spacy.load("en_core_web_sm")
doc = nlp(df)
for entity in doc.ents:
    print(entity.text)
</code></pre>
<p>Got error: <code>TypeError: Argument 'string' has incorrect type (expected str, got DataFrame)</code></p>
<pre><code> On line(3)-----> doc = nlp(df)
</code></pre>
|
<p>This is expected, as <code>Spacy</code> is not prepared to deal with a dataframe as-is. You need to do some work before being able to print the entities. Start by identifying the column that contains the text you want to run <code>nlp</code> on. After that, extract its values as a list, and you're ready to go. Let's suppose the column that contains the text is named <code>Text</code>.</p>
<pre><code>for i in df['Text'].tolist():
    doc = nlp(i)
    for entity in doc.ents:
        print(entity.text)
</code></pre>
<p>This will iterate over each text (row) in your dataframe and print the entities.</p>
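<p>For larger dataframes, a faster variant (my addition) is spaCy's <code>nlp.pipe</code>, which batches the texts instead of parsing them one call at a time:</p>
<pre><code>for doc in nlp.pipe(df['Text'].astype(str)):
    for entity in doc.ents:
        print(entity.text)
</code></pre>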
|
python|pandas|spacy
| 5
|
5,705
| 65,231,843
|
Is it possible to only load part of a TensorFlow dataset?
|
<p>I have a notebook in Google Colab with the following code:</p>
<pre><code>batch_size = 64
dataset_name = 'coco/2017_panoptic'
tfds_dataset, tfds_info = tfds.load(
dataset_name,
split='train',
with_info=True)
</code></pre>
<p>I would like to know if it is possible to only download part of the dataset (say 5%, or X number of images) with the <code>tfds.load</code> function. As far as I can see in the documentation, there are no arguments to do so. Of course it would be possible to slice the dataset <em>after</em> downloading, but this particular dataset (<code>coco/2017_panoptic</code>) is 19.57 GiB, which obviously takes quite a while to download.</p>
|
<p>The original question was about how to <em>download</em> a subset of the dataset.</p>
<p>And so the answer recommending the use of an argument like <code>split='train[:5%]'</code> as a way of downloading only 5% of the training data is mistaken. It seems that this still downloads the entire dataset, but then only loads 5%.</p>
<p>You can check this for yourself by running
<code>mnist_ds_5p = tfds.load("mnist", split="train[:5%]")</code>
followed by <code>mnist_ds = tfds.load("mnist", split="train")</code></p>
<p>No downloading takes place after running the second command. This is because the entire dataset has already been downloaded and cached after running the first command!</p>
<p>As many of the datasets are fetched in a compressed form, I doubt there is a simple way to avoid downloading the entire dataset, I'm afraid.</p>
|
python|tensorflow|tensorflow-datasets
| 5
|
5,706
| 65,374,774
|
Python: printed object has a type but returned object is NoneType?
|
<p>I have a function that returns a tuple containing a NumPy array and a list. At the end of the function I print out the array and the list and both look correct. Then I print their types and these also look correct. But when I return them, I get a NoneType error. I am very confused as to why this is happening. Code below. <code>adjust_param</code> is a helper function. The TypeError is raised at the call to <code>optimize_theta</code>.</p>
<pre><code>def adjust_param(R, delta, i, theta):
    thetaplus = theta.copy()
    thetaminus = theta.copy()
    thetaplus[i*2] += delta
    thetaplus[i*2+1] += delta
    thetaminus[i*2] -= delta
    thetaminus[i*2+1] -= delta
    y = Remp(q_data, labels, R, num_samples, theta)
    yplus = Remp(q_data, labels, R, num_samples, thetaplus)
    yminus = Remp(q_data, labels, R, num_samples, thetaminus)
    if (yplus < y and yplus < yminus and yplus != -1):
        return thetaplus, yplus
    elif (yminus < y and yminus < yplus and yminus != -1):
        return thetaminus, yminus
    else:
        return theta, y
</code></pre>
<pre><code>def optimize_theta(N, R, delta, i, theta, risk):
    if N == 0:
        print("Theta : " + str(type(theta)))
        print("= " + str(theta))
        print()
        print("Risk : " + str(type(risk)))
        print("= " + str(risk))
        return theta, risk
    else:
        theta_new, risk_new = adjust_param(R, delta, i, theta)
        if i == (len(theta)/2)-1:
            #print("N = " + str(N-1))
            #print("theta = " + str(theta))
            risk_copy = risk.copy()
            risk_copy.append(risk_new)
            optimize_theta(N-1, R, delta, 0, theta_new, risk_copy)
        else:
            optimize_theta(N, R, delta, i+1, theta_new, risk)
</code></pre>
<p>Output:</p>
<pre><code>Theta : <class 'numpy.ndarray'>
= [0.85885111 0.86066499 0.47482528 0.13555158 0.87249245 0.02604654
0.2906744 0.34618303]
Risk : <class 'list'>
= [0.6273510217403618]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-307-8b43b528fee2> in <module>
----> 1 theta, risk = optimize_theta(N, R, delta, 0, theta0, [])
TypeError: cannot unpack non-iterable NoneType object
</code></pre>
<p>Any insight would be much appreciated. Thank you!</p>
|
<p>You must explicitly return the results in the <code>else</code> section of <code>optimize_theta</code>; without the <code>return</code>, the recursive calls fall through and the function implicitly returns <code>None</code>, which is what your caller then fails to unpack.</p>
<pre><code>def optimize_theta(N, R, delta, i, theta, risk):
    if N == 0:
        print("Theta : " + str(type(theta)))
        print("= " + str(theta))
        print()
        print("Risk : " + str(type(risk)))
        print("= " + str(risk))
        return theta, risk
    else:
        theta_new, risk_new = adjust_param(R, delta, i, theta)
        if i == (len(theta)/2)-1:
            #print("N = " + str(N-1))
            #print("theta = " + str(theta))
            risk_copy = risk.copy()
            risk_copy.append(risk_new)
            return optimize_theta(N-1, R, delta, 0, theta_new, risk_copy)
        else:
            return optimize_theta(N, R, delta, i+1, theta_new, risk)
</code></pre>
|
python|list|typeerror|numpy-ndarray|nonetype
| 0
|
5,707
| 50,144,430
|
How to use a search window in numpy, retain the maximum value and set all the other values to zero
|
<p>I have a two-dimensional array (e.g. 101 rows and 100 columns). Now I want to create a search window or block (e.g. 3 rows x 3 columns) that will move around the array, determine the highest value within each window, keep it and set all other values to zero, using python and numpy. For example:</p>
<pre><code>x = ([[1,2,3,4,5,6,7,8,9,10],
      [2,5,4,5,3,4,6,7,5,3],
      [3,3,4,5,6,7,3,4,5,8]])
</code></pre>
<p>Using a 2x2 search window (<code>x.somefunction</code>, for example) and starting from the top left, it would give this result:</p>
<pre><code>Result = ([0,0...
          [0,5...   (1st iteration) so the whole result should look like this:
Result = ([[0,0,0,0,0,6,0,8,0,10],
           [0,5,0,5,0,0,0,0,0,0],
           [3,3,0,5,0,7,0,4,0,8]])
</code></pre>
<p>Notice that for the last row the search window had to change from a 2x2 array to a 2x1, because the search window is non-overlapping.</p>
<p>Your help will be greatly appreciated.
Thank you in advance</p>
|
<p>Here is an approach using <code>skimage.util.view_as_blocks</code>:</p>
<pre><code>>>> import numpy as np
>>> import skimage.util as su
>>>
>>> def split_axis(N, n):
...     q, r = divmod(N, n)
...     left = ((np.s_[:q*n], n),) if q else ()
...     right = ((np.s_[q*n:], r),) if r else ()
...     return (*left, *right)
...
>>> def block_max(x, block, inplace=False):
...     if not inplace:
...         x = x.copy()
...     xi, xj = x.shape
...     bi, bj = block
...     for ci, ri in split_axis(xi, bi):
...         for cj, rj in split_axis(xj, bj):
...             vab = su.view_as_blocks(x[ci, cj], (ri, rj))
...             vab[vab < vab.max(axis=(-1, -2), keepdims=True)] = 0
...     return x
...
>>> x = ([[1,2,3,4,5,6,7,8,9,10],
... [2,5,4,5,3,4,6,7,5,3],
... [3,3,4,5,6,7,3,4,5,8]])
>>>
>>> x = np.array(x)
>>>
>>> block_max(x, (2, 2))
array([[ 0, 0, 0, 0, 0, 6, 0, 8, 0, 10],
[ 0, 5, 0, 5, 0, 0, 0, 0, 0, 0],
[ 3, 3, 0, 5, 0, 7, 0, 4, 0, 8]])
</code></pre>
<p>If you don't have <code>skimage</code>:</p>
<pre><code>>>> def view_as_blocks(x, blockshape):
...     *xs, xi, xj = x.shape
...     bi, bj = blockshape
...     # reshape to (..., xi//bi, bi, xj//bj, bj), then swap the two middle
...     # axes so blocks come out as (..., xi//bi, xj//bj, bi, bj) like skimage's
...     return np.ascontiguousarray(x).reshape(*xs, xi//bi, bi, xj//bj, bj).swapaxes(-3, -2)
</code></pre>
<p>Your updated question (untested):</p>
<pre><code>>>> def block_max(x, block):
...     out = np.zeros_like(x)
...     xi, xj = x.shape
...     bi, bj = block
...     for ci, ri in split_axis(xi, bi):
...         for cj, rj in split_axis(xj, bj):
...             vab = su.view_as_blocks(x[ci, cj], (ri, rj))
...             oab = su.view_as_blocks(out[ci, cj], (ri, rj))
...             vmx = vab.max(axis=(-1, -2), keepdims=True)
...             vmn = vab.min(axis=(-1, -2), keepdims=True)
...             cond = vmx - vmn > 2
...             oab[cond & (vab == vmx)] = 1  # assignment (=), not comparison (==)
...             oab[cond & (vab == vmn)] = 2
...     return out
</code></pre>
</code></pre>
|
python|python-2.7|numpy
| 1
|
5,708
| 49,937,041
|
pandas exporting to excel without format
|
<p>I would like to export data from a dataframe to an Excel file that already has its format and layout (colours, cells, etc.).</p>
<p>This code overwrites the whole sheet; instead I would like to export the data without changing the Excel layout.</p>
<p>Is that possible?</p>
<h1>Create a Pandas Excel writer using XlsxWriter as the engine.</h1>
<pre><code>writer = pd.ExcelWriter('C:/pandas_positioning.xlsx', engine='xlsxwriter')
df.to_excel(writer, sheet_name='my_data',
startrow=7, startcol=4, header=False, index=False)
</code></pre>
<h1>Close the Pandas Excel writer and output the Excel file.</h1>
<pre><code>writer.save()
</code></pre>
|
<p>Do it manually. </p>
<p>Example:</p>
<pre><code>import openpyxl
import pandas as pd

WB = 'C:/pandas_positioning.xlsx'
WS = 'my_data'

writer = pd.ExcelWriter(WB, engine='openpyxl')
wb = openpyxl.load_workbook(WB)
wb.create_sheet(WS)
writer.book = wb
writer.sheets = {x.title: x for x in wb.worksheets}
ws = writer.sheets[WS]

# write the column headers into row 1 (shifted one column right for the index)
for icol, col_name in zip(range(len(df.columns)+1), df.columns):
    ws.cell(1, icol+2, col_name)
# write the index labels into column 1
for irow, row_name in zip(range(len(df.index)+1), df.index):
    ws.cell(irow+2, 1, row_name)
# write the cell values
for (_, row), irow in zip(df.iterrows(), range(len(df.index)+1)):
    for (_, cell), icol in zip(row.iteritems(), range(len(df.columns)+1)):
        ws.cell(irow+2, icol+2).value = cell

writer.save()
</code></pre>
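<p>A note beyond the original answer: on newer pandas (&gt;= 1.4) the same goal needs much less manual work, because <code>ExcelWriter</code> can append into an existing workbook without clearing it:</p>
<pre><code>with pd.ExcelWriter(WB, engine='openpyxl', mode='a', if_sheet_exists='overlay') as writer:
    df.to_excel(writer, sheet_name=WS, startrow=7, startcol=4, header=False, index=False)
</code></pre>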
|
python|excel|pandas|pandas.excelwriter
| 1
|
5,709
| 50,085,381
|
pip install giving platform not supported error
|
<p>I have the following Python distribution installed.</p>
<pre><code>Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 17:00:18) [MSC v.1900 64 bit (AMD64)] on win32
</code></pre>
<p>I downloaded <code>numpy-1.14.3+mkl-cp35-cp35m-win_amd64.whl</code> </p>
<p>but upon installing, I got a "platform not supported" error:</p>
<pre><code>C:\Users\HP\Downloads>pip install "numpy-1.14.3+mkl-cp35-cp35m-win_amd64.whl"
numpy-1.14.3+mkl-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform.
</code></pre>
|
<p>I got the error resolved.</p>
<p>The <code>cp35</code> tag in the wheel's name means it was built for Python 3.5; Python 3.6 only supports <code>cp36</code> wheels.</p>
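<p>So the fix is to download the matching <code>cp36</code> build of the wheel instead (the exact file name depends on the download source), e.g.:</p>
<pre><code>pip install "numpy-1.14.3+mkl-cp36-cp36m-win_amd64.whl"
</code></pre>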
|
python-3.x|numpy
| 0
|
5,710
| 64,140,283
|
I would like to calculate euclidean distance and put it in a list. I receive a range error, what am I missing?
|
<p>I would like to evaluate the euclidean distance from a fixed point to several points, and I want to do it through a loop. Why is it not working? I also tried without the '-1' in the range, but it still does not work.</p>
<pre><code>import numpy as np
from scipy.spatial import distance

vettore = np.array(np.mat('1 2; 3 4; 6,7; 8,9; 10,12'))
posizione = np.array(np.mat('2,2'))
codio = []
for i in range(0, len(vettore)-1):
    codio[i] = distance.euclidean(vettore[i], posizione)
codio
>>> IndexError: list assignment index out of range
</code></pre>
|
<p>How about <code>distance_matrix</code>:</p>
<pre><code>from scipy.spatial import distance_matrix
distance_matrix(vettore, posizione).ravel()
</code></pre>
<p>Output:</p>
<pre><code>array([ 1. , 2.23606798, 6.40312424, 9.21954446, 12.80624847])
</code></pre>
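<p>For completeness (not part of the original answer): the loop in the question fails because <code>codio</code> starts empty, so <code>codio[i] = ...</code> indexes a position that does not exist yet. Appending fixes it:</p>
<pre><code>codio = []
for i in range(len(vettore)):   # no -1: range already stops before len(vettore)
    # ravel makes posizione 1-D, as distance.euclidean expects
    codio.append(distance.euclidean(vettore[i], posizione.ravel()))
</code></pre>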
|
python|numpy|loops
| 2
|
5,711
| 63,904,614
|
How to change dataframe row index to datetime.date while reading from csv?
|
<p><code>df.index[0]</code> I want to be <code>datetime.date(2006, 8, 27)</code>.</p>
<p>While reading from file, <code>df = pd.read_csv(filePath,index_col="Date")</code>, <code>df.index[0]</code> appears as string <code>'2006-08-27'</code>.</p>
<p>I tried:</p>
<pre><code>dateparser = lambda s: datetime.datetime.strptime(s,"%Y-%m-%d").date()
df = pd.read_csv(filePath,parse_dates=["Date"], date_parser=dateparser,index_col="Date")
</code></pre>
<p>Now, <code>df.index[0]</code> appears as <code>Timestamp('2006-08-27 00:00:00')</code>.</p>
<p>How to make <code>df.index[0]</code> as <code>datetime.date(2006, 8, 27)</code>?</p>
<p>used sample csv:</p>
<pre><code>Date,Symbol,Series,Prev Close,Open,High,Low,Last,Close,VWAP,Volume,Turnover,Trades,Deliverable Volume,%Deliverble
2006-08-27,,,,,,,,,,,,,,
2006-08-28,ATFC,EQ,365.0,521.0,569.0,502.0,553.0,554.25,552.0,15166163,837176013020000.0,,3777529,0.24910000000000002
2006-08-29,ATFC,EQ,554.25,555.0,563.9,535.55,536.1,539.3,547.59,3929113,215153038915000.0,,727534,0.1852
2006-08-30,ATFC,EQ,539.3,537.0,542.9,521.5,529.0,528.1,529.55,2034983,107762957620000.0,,345064,0.1696
2006-08-31,ATFC,EQ,528.1,525.0,544.0,515.0,539.35,538.45,532.89,1670990,89044643830000.0,,286440,0.1714
2006-09-01,ATFC,EQ,538.45,539.0,549.0,535.1,541.35,541.85,542.46,1176195,63803856150000.0,,213842,0.1818
</code></pre>
|
<p>As per <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html" rel="nofollow noreferrer"><code>pandas.read_csv</code></a>, you can also specify the <code>parse_dates=True</code> and <code>infer_datetime_format=True</code> arguments to have pandas attempt to parse dates from the index column, which you have set to <code>Date</code>. As in:</p>
<pre><code>df = pd.read_csv(filePath,index_col="Date",parse_dates=True,infer_datetime_format=True)
</code></pre>
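<p>If plain <code>datetime.date</code> objects are strictly required, as in the question, one extra step (my addition) converts the parsed <code>DatetimeIndex</code> afterwards:</p>
<pre><code>df.index = df.index.date  # array of datetime.date objects
</code></pre>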
|
python|pandas|python-datetime
| 1
|
5,712
| 64,128,817
|
How to plot a min-max fill_between plot from multiple files
|
<p>Thank you in advance for your help! (Code Below) (<a href="https://github.com/the-datadudes/deepSoilTemperature/blob/master/AllDeepSoilData.zip" rel="nofollow noreferrer">Link to 1st piece of data</a>) (<a href="https://raw.githubusercontent.com/the-datadudes/deepSoilTemperature/master/allStationsDailyAirTemp1.csv" rel="nofollow noreferrer">Link to data I want to add</a>)</p>
<p>I am trying to import data from a second CSV (above) and add a second line to this plot based on that CSV's data. What is the best approach to doing this? (Images below.)</p>
<p>The squiggly lines on the plot represent the range of data.</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
raw_data = pd.read_csv('all-deep-soil-temperatures.csv', index_col=1, parse_dates=True)
df_all_stations = raw_data.copy()
selected_soil_station = 'Minot'
df_selected_station = df_all_stations[df_all_stations['Station'] == selected_soil_station]
df_selected_station.fillna(method = 'ffill', inplace=True);
df_selected_station_D=df_selected_station.resample(rule='D').mean()
df_selected_station_D['Day'] = df_selected_station_D.index.dayofyear
mean=df_selected_station_D.groupby(by='Day').mean()
mean['Day']=mean.index
maxx=df_selected_station_D.groupby(by='Day').max()
minn=df_selected_station_D.groupby(by='Day').min()
mean['maxx20']=maxx['20 cm']
mean['minn20']=minn['20 cm']
plt.style.use('ggplot')
bx = mean.plot(x='Day', y='20 cm',color='black')
plt.fill_between(mean['Day'],mean['minn20'],mean['maxx20'],color='blue',alpha = 0.2);
bx.set_xlabel("Day of the year")
bx.set_ylabel("Temperature in Celsius")
bx.set_title("Soil Temp, Air Temp, and Snow Depth for " + str(selected_soil_station))
</code></pre>
<p><strong>What I have:</strong></p>
<p><a href="https://i.stack.imgur.com/Vr6T9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vr6T9.png" alt="enter image description here" /></a></p>
<p><strong>What I want to have:</strong></p>
<p><a href="https://i.stack.imgur.com/PG0tz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PG0tz.png" alt="enter image description here" /></a></p>
<h2>Sample Data</h2>
<ul>
<li><code>all-deep-soil-temperatures.csv</code></li>
</ul>
<pre><code>Station,Time,5 cm,10 cm,20 cm,30 cm,40 cm,50 cm,60 cm,80 cm,100 cm,125 cm,150 cm,175 cm,200 cm,225 cm
Adams,2018-06-21 1700,32.8,27.74,23.06,20.28,18.16,16.64,15.33,13.07,11.19,9.35,7.919,6.842,6.637,5.686
Adams,2018-06-21 1800,31.78,27.66,23.41,20.52,18.31,16.77,15.46,13.23,11.34,9.51,8.06,6.894,6.681,5.781
Adams,2018-06-21 1900,30.5,27.24,23.61,20.73,18.54,17.02,15.73,13.51,11.63,9.8,8.36,7.262,6.681,5.893
Adams,2018-06-21 2000,29.12,26.74,23.72,20.9,18.66,17.14,15.85,13.62,11.8,10.03,8.69,7.65,6.684,5.904
Adams,2018-06-21 2100,27.5,26.08,23.74,21.07,18.86,17.36,16.12,13.96,12.19,10.43,9.11,8.1,6.823,6.069
Adams,2018-06-21 2200,26.05,25.41,23.66,21.2,18.98,17.43,16.15,13.96,12.15,10.41,9.09,8.11,6.909,6.164
Adams,2018-06-21 2300,24.89,24.75,23.48,21.21,19.01,17.42,16.1,13.9,12.07,10.33,9.01,7.997,6.886,6.132
Adams,2018-06-22 0000,24.09,24.19,23.31,21.22,19.06,17.43,16.1,13.88,12.04,10.31,8.97,7.964,6.887,6.125
Adams,2018-06-22 0100,23.49,23.74,23.11,21.2,19.1,17.49,16.13,13.87,12.01,10.23,8.88,7.89,6.89,6.128
Adams,2018-06-22 0200,22.92,23.3,22.91,21.19,19.15,17.53,16.16,13.88,12.02,10.25,8.91,7.911,6.902,6.14
Adams,2018-06-22 0300,22.32,22.86,22.68,21.11,19.14,17.52,16.14,13.84,11.98,10.21,8.87,7.858,6.892,6.121
Adams,2018-06-22 0400,21.81,22.46,22.44,21.05,19.15,17.55,16.16,13.85,11.99,10.21,8.86,7.84,6.899,6.111
Williston,2020-09-21 0500,14.69,15.29,15.61,15.68,15.48,15.22,14.99,14.7,14.51,14.27,14.06,13.85,,
Williston,2020-09-21 0600,14.39,15.09,15.49,15.61,15.43,15.19,14.99,14.68,14.46,14.2,13.97,13.73,,
Williston,2020-09-21 0700,14.16,14.93,15.39,15.56,15.4,15.18,14.99,14.69,14.47,14.22,13.99,13.74,,
Williston,2020-09-21 0800,13.72,14.54,15.05,15.22,15.05,14.84,14.68,14.37,14.09,13.92,13.64,13.35,,
Williston,2020-09-21 0900,13.64,14.35,14.87,15.08,14.95,14.78,14.63,14.32,14.04,13.88,13.61,13.33,,
Williston,2020-09-21 1000,13.9,14.33,14.79,15.06,14.99,14.85,14.72,14.41,14.14,13.99,13.74,13.51,,
Williston,2020-09-21 1100,14.46,14.43,14.78,15.07,15.04,14.93,14.78,14.49,14.24,14.07,13.84,13.62,,
Williston,2020-09-21 1200,15.34,14.77,14.89,15.15,15.17,15.09,14.97,14.7,14.47,14.28,14.06,13.87,,
Williston,2020-09-21 1300,16.26,15.19,15.03,15.22,15.28,15.24,15.16,14.89,14.69,14.49,14.28,14.06,,
Williston,2020-09-21 1400,17.2,15.74,15.24,15.29,15.35,15.31,15.24,15,14.82,14.62,14.41,14.22,,
Williston,2020-09-21 1500,18.04,16.35,15.54,15.35,15.37,15.32,15.23,14.97,14.77,14.55,14.35,14.15,,
Williston,2020-09-21 1600,18.59,16.89,15.83,15.42,15.36,15.28,15.16,14.89,14.69,14.47,14.28,14.09,,
Williston,2020-09-21 1700,18.68,17.21,16.1,15.52,15.4,15.3,15.23,14.95,14.78,14.54,14.35,14.14,,
</code></pre>
<ul>
<li><code>allStationsDailyAirTemp1.csv</code></li>
</ul>
<pre><code>Station,Date,Temp
Adams,2018-06-21,22.723
Adams,2018-06-22,23.358
Adams,2018-06-23,20.986
Adams,2018-06-24,20.524
Adams,2018-06-25,19.699
Adams,2018-06-26,22.146
Adams,2018-06-27,21.239
Adams,2018-06-28,21.367
Adams,2018-06-29,20.701
Adams,2018-06-30,18.613
Adams,2018-07-01,19.376
Adams,2018-07-02,19.079
Adams,2018-07-03,20.747
Adams,2018-07-04,19.622
Adams,2018-07-05,18.029
Adams,2018-07-06,18.883
Adams,2018-07-07,25.655
Adams,2018-07-08,22.953
Adams,2018-07-09,20.281
Williston,2020-09-05,21.69
Williston,2020-09-06,16.595
Williston,2020-09-07,5.917
Williston,2020-09-08,3.863
Williston,2020-09-09,8.996
Williston,2020-09-10,14.488
Williston,2020-09-11,15.689
Williston,2020-09-12,16.002
Williston,2020-09-13,11.219
Williston,2020-09-14,16.695
Williston,2020-09-15,12.77
Williston,2020-09-16,9.523
Williston,2020-09-17,13.186
Williston,2020-09-18,16.992
Williston,2020-09-19,16.85
Williston,2020-09-20,17.235
Williston,2020-09-21,17.595
Williston,2020-09-22,19.115
Williston,2020-09-23,16.43
Williston,2020-09-24,21.035
Williston,2020-09-25,17.01
Williston,2020-09-26,14.109
</code></pre>
|
<ul>
<li>See inline notations with the new code</li>
<li>Removed <code>plt.style.use('ggplot')</code> because it makes it difficult to see the <code>fill_between</code> colors</li>
<li>Also see <a href="https://stackoverflow.com/questions/64067519/how-to-create-a-min-max-lineplot-by-month">How to create a min-max lineplot by month?</a></li>
<li>Don't use <code>;</code> in python</li>
<li>Load the data from the other file, into a separate dataframe</li>
<li>Clean, and aggregate the new data as needed
<ul>
<li>Set the Date column to a datetime format</li>
<li>Extract day of year</li>
<li><code>groupby</code> day of year and aggregate <code>mean</code>, <code>min</code>, and <code>max</code> temperature</li>
</ul>
</li>
<li>Plot the new data to the same <code>axes</code> as the original plot, <code>bx</code>.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>df_all_stations = pd.read_csv('data/so_data/2020-09-29 64128817/all-deep-soil-temperatures.csv', index_col=1, parse_dates=True)
# load air temp data
at = pd.read_csv('data/so_data/2020-09-29 64128817/allStationsDailyAirTemp1.csv')
# set Date to a datetime format
at.Date = pd.to_datetime(at.Date)
# extract day of year
at['doy'] = at.Date.dt.dayofyear
# select data from Minot
at = at[at.Station == 'Minot']
# groupby the day of year (doy) and aggregate min max and mean
atg = at.groupby('doy')['Temp'].agg([min, max, 'mean'])
selected_soil_station = 'Minot'
df_selected_station = df_all_stations[df_all_stations['Station'] == selected_soil_station].copy() # make a copy here, otherwise there will be a warning
df_selected_station.fillna(method = 'ffill', inplace=True)
df_selected_station_D=df_selected_station.resample(rule='D').mean()
df_selected_station_D['Day'] = df_selected_station_D.index.dayofyear
mean=df_selected_station_D.groupby(by='Day').mean()
mean['Day']=mean.index
maxx=df_selected_station_D.groupby(by='Day').max()
minn=df_selected_station_D.groupby(by='Day').min()
mean['maxx20']=maxx['20 cm']
mean['minn20']=minn['20 cm']
bx = mean.plot(x='Day', y='20 cm', color='black', figsize=(9, 6), label='20 cm Soil Temp')
plt.fill_between(mean['Day'], mean['minn20'], mean['maxx20'], color='blue', alpha = 0.2, label='20 cm Soil Temp Range')
# add air temp plot to the bx plot with ax=bx
atg['mean'].plot(ax=bx, label='Mean Air Temp')
# add air temp fill between plot to the bx plot
bx.fill_between(atg.index, atg['min'], atg['max'], color='cyan', alpha = 0.2, label='Air Temp Range')
bx.set_xlabel("Day of the year")
bx.set_ylabel("Temperature in Celsius")
bx.set_title("Soil Temp, Air Temp, and Snow Depth for " + str(selected_soil_station))
# grid
bx.grid()
# set legend location
bx.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
# remove margin spaces
plt.margins(0, 0)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/IepWb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IepWb.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib|time-series
| 1
|
5,713
| 46,861,241
|
Analysing the correlation between age group and survival rate
|
<pre><code># First, I divide the age groups as follows:
# 1. group A: 0-17 years old
# 2. group B: 18-35 years old
# 3. group C: 36-50 years old
# 4. group D: 51-65 years old
# 5. group E: above 66 years old

# Then I begin to write code to extract the CSV data
Passenger_Age = {"PassengerId": titanic["PassengerId"][:], "Age": titanic["Age"][:]}
Passenger_Age_df = pd.DataFrame(Passenger_Age, columns=["Age", "PassengerId"])
Passenger_Survived = {"PassengerId": titanic["PassengerId"][:], "Survived": titanic["Survived"][:]}
Passenger_Survived_df = pd.DataFrame(Passenger_Survived, columns=["Survived", "PassengerId"])

# there are some NaN in Age, so drop those rows
cleaned_Passenger_Age_df = Passenger_Age_df.dropna()
</code></pre>
<p>For the next step, I would like to merge the two dataframes, "cleaned_Passenger_Age_df" and "Passenger_Survived_df".<br>
After that, use the applymap function to convert the ages to A/B/C/D/E.<br>
Then, according to that, find the survived rate of the age groups.<br>
My problem is: my idea is clear, but I don't know how to write the code. Could someone help me? Thanks!</p>
|
<p>You can use <code>pd.cut()</code> to group the age, for example:</p>
<pre><code>group_names = ['A','B','C','D','E']
bins = [0,17,35,50,65,1000]
df['Age_Group'] = pd.cut(df['Age'], bins=bins, labels=group_names)
</code></pre>
<p>More detail :
<a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html" rel="nofollow noreferrer">pandas.cut</a></p>
<p>As for calculating the survived rate, you can just use <em>group by</em>, like:</p>
<pre><code>df.groupby(['Age_Group','Survived']).count() / total_numbers
</code></pre>
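<p>A more direct way to get the survived rate per group (my addition): since <code>Survived</code> is 0/1, its mean within each group <em>is</em> the rate:</p>
<pre><code>df.groupby('Age_Group')['Survived'].mean()
</code></pre>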
|
python|pandas|csv|numpy
| 0
|
5,714
| 46,767,724
|
Loop over grouped pandas df and export individual plots
|
<p><a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#iterating-through-groups" rel="nofollow noreferrer">The documentation</a> seems a little sparse, as to how every element works, so here goes:</p>
<p>I have a bunch of files that I would like to iterate over and export a plot, for every single file.</p>
<pre><code>df_all.head()
</code></pre>
<p>Returns</p>
<pre><code> Dem-Dexc Aem-Dexc Aem-Aexc S E fit frame filename
0 18150.0595 18548.2451 15263.7451 0.7063 0.5054 0.879 1.0 Traces_exp22_tif_pair16.txt
1 596.9286 7161.7353 1652.8922 0.8244 0.9231 0.879 2.0 Traces_exp22_tif_pair16.txt
2 93.2976 3112.3725 2632.6667 0.5491 0.9709 0.879 3.0 Traces_exp22_tif_pair16.txt
3 1481.1310 4365.4902 769.3333 0.8837 0.7467 0.879 4.0 Traces_exp22_tif_pair16.txt
4 583.1786 6192.6373 1225.5392 0.8468 0.9139 0.879 5.0 Traces_exp22_tif_pair16.txt
</code></pre>
<p>And now I would like to group and iterate:</p>
<pre><code>for group in df_all.groupby("filename"):
plot = sns.regplot(data = group, x = "Dem-Dexc", y = "frame")
</code></pre>
<p>But I get <code>TypeError: tuple indices must be integers or slices, not str</code>. Why do I get this?</p>
|
<p>I think you need to change:</p>
<pre><code>for group in df_all.groupby("filename")
</code></pre>
<p>to:</p>
<pre><code>for i, group in df_all.groupby("filename"):
    plot = sns.regplot(data = group, x = "Dem-Dexc", y = "frame")
</code></pre>
<p>to unpack the <code>tuples</code>.</p>
<p>Or select second value of tuple by <code>[1]</code>:</p>
<pre><code>for group in df_all.groupby("filename"):
    plot = sns.regplot(data = group[1], x = "Dem-Dexc", y = "frame")
</code></pre>
<p>You can check <code>tuple</code> output by:</p>
<pre><code>for group in df_all.groupby("filename"):
    print (group)
('Traces_exp22_tif_pair16.txt', Dem-Dexc Aem-Dexc Aem-Aexc S E fit frame \
0 18150.0595 18548.2451 15263.7451 0.7063 0.5054 0.879 1.0
1 596.9286 7161.7353 1652.8922 0.8244 0.9231 0.879 2.0
2 93.2976 3112.3725 2632.6667 0.5491 0.9709 0.879 3.0
3 1481.1310 4365.4902 769.3333 0.8837 0.7467 0.879 4.0
4 583.1786 6192.6373 1225.5392 0.8468 0.9139 0.879 5.0
filename
0 Traces_exp22_tif_pair16.txt
1 Traces_exp22_tif_pair16.txt
2 Traces_exp22_tif_pair16.txt
3 Traces_exp22_tif_pair16.txt
4 Traces_exp22_tif_pair16.txt )
</code></pre>
<p>vs:</p>
<pre><code>for i, group in df_all.groupby("filename"):
    print (group)
Dem-Dexc Aem-Dexc Aem-Aexc S E fit frame \
0 18150.0595 18548.2451 15263.7451 0.7063 0.5054 0.879 1.0
1 596.9286 7161.7353 1652.8922 0.8244 0.9231 0.879 2.0
2 93.2976 3112.3725 2632.6667 0.5491 0.9709 0.879 3.0
3 1481.1310 4365.4902 769.3333 0.8837 0.7467 0.879 4.0
4 583.1786 6192.6373 1225.5392 0.8468 0.9139 0.879 5.0
filename
0 Traces_exp22_tif_pair16.txt
1 Traces_exp22_tif_pair16.txt
2 Traces_exp22_tif_pair16.txt
3 Traces_exp22_tif_pair16.txt
4 Traces_exp22_tif_pair16.txt
</code></pre>
<p>If you want to save the output as <code>png</code> images:</p>
<pre><code>for i, group in df_all.groupby("filename"):
    plot = sns.regplot(data = group, x = "Dem-Dexc", y = "frame")
    fig = plot.get_figure()
    fig.savefig("{}.png".format(i.split('.')[0]))
</code></pre>
|
python|python-3.x|pandas|pandas-groupby
| 1
|
5,715
| 46,974,717
|
tf.train.replica_device_setter needed with tf.contrib.learn.Experiment?
|
<p>I built a distributed tensorflow program using <code>tf.estimator.Estimator</code>, <code>tf.contrib.learn.Experiment</code> and <code>tf.contrib.learn.learn_runner.run</code>. </p>
<p>For now it seems to work fine. However, the <a href="https://www.tensorflow.org/deploy/distributed" rel="nofollow noreferrer">tensorflow distributed tutorial</a> uses <code>tf.train.replica_device_setter</code> to pin operations to jobs. </p>
<p>My model function does not use any <code>with device</code> annotation. Is this done automatically by the <code>Experiment</code> class or am I missing an important point?</p>
<p>I am further not sure why there is a need to assign certain devices when I am using data parallelism.</p>
<p>Thanks for any help and hints on this,
Tobias</p>
|
<p>Variables and ops are defined in <code>tf.estimator.Estimator</code>, which actually uses <code>replica_device_setter</code> (<a href="https://github.com/tensorflow/tensorflow/blob/r1.3/tensorflow/python/estimator/estimator.py#L758" rel="nofollow noreferrer">defined here</a>). As you can see, it assigns variables to <code>ps</code> jobs and ops to <code>worker</code> jobs, which is the common way to handle data parallelism. </p>
<p><code>replica_device_setter</code> returns a device function that assigns ops and variables to devices. Even if you're using data parallelism, you might have many parameter servers, and a device function will ensure each parameter server gets separate variables (determined by <code>ps_strategy</code> of <code>replica_device_setter</code>). e.g. <code>/job:ps/tasks:0</code> could get <code>W1</code> and <code>b1</code>, and <code>/job:ps/tasks:1</code> could get <code>W2</code> and <code>b2</code>. The device function has to be deterministic in assigning variables to parameter servers, since the function is called every time a worker replica is instantiated, and the workers need to agree on which <code>ps</code> holds which variables.</p>
<p>tf.(contrib.)learn libraries <a href="https://stackoverflow.com/a/41601168/507062">use between-graph replication</a>. This means that each worker replica will build a separate graph, with the non-Variable ops assigned to that worker: worker with task index 2 defines ops to <code>/job:worker/task:2</code>, and variables to <code>/job:ps</code> (which specific <code>ps</code> is determined by <code>ps_strategy</code>). This means that the worker replica will compute the ops (loss value & gradients) itself, and send the resulting variable updates (gradients) to the particular parameter servers that are responsible for holding the particular variables.</p>
<p>If you didn't have a mechanism to assign variables/ops to devices, it would not be clear which replica should hold which variables and ops. Assigning to specific devices might also be needed if you have several GPUs on a worker replica: even though your variables are stored on parameter servers, you would need to create the compute-intensive part of the graph once for each of your GPUs (with explicitly assigning the created ops to the relevant GPU).</p>
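<p>For illustration, a minimal TF1-style sketch of what the Estimator does for you (my addition; the hostnames and <code>task_index</code> are placeholders):</p>
<pre><code>import tensorflow as tf  # TF 1.x

cluster = tf.train.ClusterSpec({'ps': ['ps0.example.com:2222'],
                                'worker': ['worker0.example.com:2222']})
task_index = 0  # index of this worker replica

with tf.device(tf.train.replica_device_setter(
        worker_device='/job:worker/task:%d' % task_index,
        cluster=cluster)):
    x = tf.placeholder(tf.float32, [None, 10])
    W = tf.get_variable('W', [10, 1])  # variable: pinned to /job:ps
    y = tf.matmul(x, W)                # op: pinned to this worker
</code></pre>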
|
tensorflow|distributed-computing
| 0
|
5,716
| 63,026,510
|
How do you plot month and year data to bar chart in matplotlib?
|
<p>I have data like this that I want to plot by month and year using matplotlib.</p>
<pre><code>df = pd.DataFrame({'date':['2018-10-01', '2018-10-05', '2018-10-20','2018-10-21','2018-12-06',
'2018-12-16', '2018-12-27', '2019-01-08','2019-01-10','2019-01-11',
'2019-01-12', '2019-01-13', '2019-01-25', '2019-02-01','2019-02-25',
'2019-04-05','2019-05-05','2018-05-07','2019-05-09','2019-05-10'],
'counts':[10,5,6,1,2,
5,7,20,30,8,
9,1,10,12,50,
8,3,10,40,4]})
</code></pre>
<p>First, I converted the datetime format, and get the year and month from each date.</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
df['year'] = df['date'].dt.year
df['month'] = df['date'].dt.month
</code></pre>
<p>Then, I tried to do groupby like this.</p>
<pre><code>aggmonth = df.groupby(['year', 'month']).sum()
</code></pre>
<p>And I want to visualize it in a barchart or something like that. But as you notice above, there are missing months in between the data. I want those missing months to be filled with 0s. I don't know how to do that in a dataframe like this. Previously, I asked <a href="https://stackoverflow.com/questions/63013746/how-do-i-create-a-period-range-of-months-and-fill-it-with-zeroes">this question about filling missing dates in a period of data.</a> where I converted the dates to period range in month-year format.</p>
<pre><code>by_month = pd.to_datetime(df['date']).dt.to_period('M').value_counts().sort_index()
by_month.index = pd.PeriodIndex(by_month.index)
df_month = by_month.rename_axis('month').reset_index(name='counts')
df_month
idx = pd.period_range(df_month['month'].min(), df_month['month'].max(), freq='M')
s = df_month.set_index('month').reindex(idx, fill_value=0)
s
</code></pre>
<p>But when I tried to plot <code>s</code> using matplotlib, it returned an error. It turned out you cannot plot period data using matplotlib.</p>
<p>So basically I got these two ideas in my head, but both are stuck, and I don't know which one I should keep pursuing to get the result I want.</p>
<p>What is the best way to do this? Thanks.</p>
|
<p>Convert the <code>date</code> column to pandas datetime series, then use <code>groupby</code> on monthly <code>period</code> and aggregate the data using <code>sum</code>, next use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html" rel="nofollow noreferrer"><code>DataFrame.resample</code></a> on the aggregated dataframe to resample using monthly frequency:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
df1 = df.groupby(df['date'].dt.to_period('M')).sum()
df1 = df1.resample('M').asfreq().fillna(0)
</code></pre>
<p>Plotting the data:</p>
<pre><code>df1.plot(kind='bar')
</code></pre>
<p><a href="https://i.stack.imgur.com/AopPC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AopPC.png" alt="enter image description here" /></a></p>
|
python|pandas|datetime|matplotlib
| 3
|
5,717
| 67,690,363
|
Pandas - Extracting value from a pandas column that has data as a dict
|
<p>I am trying to extract a details from a pandas column that is a dict as shown below:</p>
<pre><code>id, details
101, {'title': '',
'phone': '',
'skype': '',
'real_name': 'Kevin'}
102, {'title': '',
'phone': '',
'skype': '',
'real_name': 'Scott'}
</code></pre>
<p>Expected output:</p>
<p>Trying to extract name real_name value within the dict column</p>
<pre><code>id, details
101, Kevin
102, Scott
</code></pre>
|
<p>Use the <code>str</code> accessor:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'id': [101, 102],
'details': [{'title': '',
'phone': '',
'skype': '',
'real_name': 'Kevin'},
{'title': '',
'phone': '',
'skype': '',
'real_name': 'Scott'}]
})
df['details'] = df['details'].str['real_name']
print(df)
</code></pre>
<p><code>df</code>:</p>
<pre><code> id details
0 101 Kevin
1 102 Scott
</code></pre>
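<p>If several keys are needed at once, <code>pd.json_normalize</code> (my addition; a top-level function since pandas 1.0) expands the dicts into separate columns:</p>
<pre><code>details = pd.json_normalize(df['details'].tolist())  # columns: title, phone, skype, real_name
</code></pre>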
|
pandas
| 1
|
5,718
| 67,718,805
|
Find the N smallest values in a pair-wise comparison NxN numpy array?
|
<p>I have a python NxN numpy pair-wise array (matrix) of double values. Each array element, e.g. (<em>i</em>,<em>j</em>), is a measurement between the <em>i</em> and <em>j</em> items. The diagonal, where <em>i</em>==<em>j</em>, is 1 as it's a pairwise measurement of itself. This also means that the 2D NxN numpy array can be represented in triangular matrix form (one half of the numpy array is identical to the other half across the diagonal).</p>
<p>A truncated representation:</p>
<pre><code>[[1. 0.11428571 0.04615385 ... 0.13888889 0.07954545 0.05494505]
[0.11428571 1. 0.09836066 ... 0.06578947 0.09302326 0.07954545]
[0.04615385 0.09836066 1. ... 0.07843137 0.09821429 0.11711712]
...
[0.13888889 0.06578947 0.07843137 ... 1. 0.34313725 0.31428571]
[0.07954545 0.09302326 0.09821429 ... 0.34313725 1. 0.64130435]
[0.05494505 0.07954545 0.11711712 ... 0.31428571 0.64130435 1. ]]
</code></pre>
<p><strong>I want to get out the smallest N values</strong> whilst not including the pairwise values twice, as would be the case due to the pair-wise duplication e.g., (5,6) == (6,5), and I do not want to include any of the identical diagonal values of 1 where <em>i</em> == <em>j</em>.</p>
<p>I understand that numpy has the <em>partition</em> method and I've seen plenty of examples for a flat array, but I'm struggling to find anything straightforward for a pair-wise comparison matrix.</p>
<p><strong>EDIT #1</strong>
Based on my first response below I implemented:</p>
<pre><code>seventyPercentInt: int = round((populationSizeInt/100)*70)
upperTriangleArray = dataArray[np.triu_indices(len(dataArray),1)]
seventyPercentArray = upperTriangleArray[np.argpartition(upperTriangleArray,seventyPercentInt)][0:seventyPercentInt]
print(len(np.unique(seventyPercentArray)))
</code></pre>
<p>The <em>upperTriangleArray</em> numpy array has 1133265 elements to pick the lowest <em>k</em> from. In this case <em>k</em> is represented by <em>seventyPercentInt</em>, which is around 1054 values. However, when I apply <em>np.argpartition</em> only the value of <em>0</em> is returned.</p>
<p>The flat array <em>upperTriangleArray</em> is reduced to a shape (1133265,).</p>
<p><strong>SOLUTION</strong></p>
<p>As per the first reply below (the accepted answer), my code that worked:</p>
<pre><code>upperTriangleArray = dataArray[np.triu_indices(len(dataArray),1)]
seventyPercentInt: int = round((len(upperTriangleArray)/100)*70)
seventyPercentArray = upperTriangleArray[np.argpartition(upperTriangleArray,seventyPercentInt)][0:seventyPercentInt]
</code></pre>
<p>I ran into some slight trouble (my own making), with the <em>seventyPercentInt</em>. Rather than taking 70% of the pairwise elements, I took 70% of the elements to be compared. Two very different values.</p>
|
<p>You can use <strong>np.triu_indices</strong> to keep only the values of the upper triangle.</p>
<p>Then you can use <strong>np.argpartition</strong> as in the example below.</p>
<pre><code>import numpy as np

A = np.array([[1.0, 0.1, 0.2, 0.3],
              [0.1, 1.0, 0.4, 0.5],
              [0.2, 0.4, 1.0, 0.6],
              [0.3, 0.5, 0.6, 1.0]])

A_upper_triangle = A[np.triu_indices(len(A), 1)]
print(A_upper_triangle)
# returns [0.1 0.2 0.3 0.4 0.5 0.6]

k = 2
print(A_upper_triangle[np.argpartition(A_upper_triangle, k)][0:k])
# the two smallest values: [0.1 0.2] (order within the partition is not guaranteed)
</code></pre>
|
python|numpy|numpy-ndarray|pairwise
| 3
|
5,719
| 61,393,745
|
Dataframe column: if cell contains string, return a range of digits starting at the index where string was found
|
<p>I have a dataframe where I'm looking to extract the code starting at "W" in column "Text".
"W" may fall at different indices throughout the column.</p>
<p>Here is what my data looks like:</p>
<pre><code> Text Result(I'd like to see)
1 SP/00016 - return of scrap from WH/MO/00003 - internal WH/MO/00003
2 SP/28 - return of scrap from WH/MO/00074 - internal WH/MO/00074
3 return of scrap from WH/MO/00074 - internal WH/MO/00074
4 WH/MO/00074 - internal WH/MO/00074
5 SP/00026 - return of scrap from WH/MO/00074 - internal WH/MO/00074
</code></pre>
<p>I have tried creating a variable that identifies the index of the "W", turning that into an integer and feeding it back into a slice of my dataframe.
Here is a snippet of code:</p>
<pre class="lang-py prettyprint-override"><code>start1 = df1['Text'].str.index('W')
start2 = start1.astype(int)
df1['Result'] = df1['Text'].astype(str).str[start2:]
</code></pre>
|
<p>IIUC you want <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extract.html#pandas-series-str-extract" rel="nofollow noreferrer"><code>str.extract</code></a></p>
<pre class="lang-py prettyprint-override"><code>df.Text.str.extract(r'(\w\w\/\w\w\/\d{5})')
0
0 WH/MO/00003
1 WH/MO/00074
2 WH/MO/00074
3 WH/MO/00074
4 WH/MO/00074
</code></pre>
<p>You can also assign it to new column in the dataframe.</p>
<pre><code>df['Result'] = df.Text.str.extract(r'(\w\w\/\w\w\/\d{5})')
</code></pre>
|
python|pandas|dataframe
| 1
|
5,720
| 61,447,388
|
Heatmap does not show all the rows
|
<p>I have a dataset with 399 rows (<code>Words</code>) and 5 columns (<code>Dates</code>). I would like to visualise some information by heat maps. I have created a pivot table by using: </p>
<pre><code>pd.pivot_table(df, index='Words', columns='Date', values='frequency', aggfunc=np.sum)
</code></pre>
<p>Output: </p>
<pre><code>Date 2018-02-18 2018-02-19 2018-02-20 2018-02-21 2018-02-22
Words
A NaN NaN NaN 2.0 2.0
B NaN NaN NaN NaN 1.0
C NaN NaN NaN NaN 1.0
D NaN 1.0 NaN NaN NaN
E NaN NaN 1.0 NaN NaN
... ... ... ... ... ...
RRR NaN 10.0 NaN NaN 90.0
SSS NaN 3.0 3.0 3.0 NaN
TTT NaN 4.0 NaN NaN NaN
UUU NaN NaN NaN 1.0 NaN
VVV NaN NaN NaN NaN 1.0
ZZZ NaN NaN 1.0 NaN 1.0
399 rows × 5 columns
</code></pre>
<p>Then I have tried to create a heatmap using the following lines of code:</p>
<pre><code>piv = pd.pivot_table(df, values="frequency",index=["Words"], columns=["Date"], fill_value=0)
ax = sns.heatmap(piv, square=False)
</code></pre>
<p>However the output shows only 20 of those 399 rows. Would it be possible to visualise all the rows in the heatmap? In case it would not be possible, how could I visualise only the most popular rows (i.e. rows that have greater frequency through time/dates)?</p>
<p>Your help will be greatly appreciated. Thanks. </p>
|
<p>Your output <em>does</em> show all the rows, but the y-labels are reduced as they would overlap too much and be unreadable.</p>
<p>If you don't have a frequency column, you can create one and set all values to <code>1</code> with <code>df['frequency'] = 1</code>. The aggregation function will then sum everything up.</p>
<p>You can sort the <code>piv</code> dataframe and take the 10 highest values with <code>idx = piv.sum(axis=1).sort_values(ascending=False).head(10).index</code>. Then, <code>piv.loc[idx]</code> will get just those rows in that order.</p>
<p>The code below shows the steps. It also rotates the tick labels to make them better readable in this particular case.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
N = 1000
abc = list('ABCDEFGHIJKLMNOPQRS')
df = pd.DataFrame({'Date':[f'2018-02-{i:02d}' for i in np.random.randint(18, 23, N)],
'Words': [abc[i]+abc[j] for i,j in np.random.randint(0, len(abc), (N, 2)) ] ,
'frequency': np.random.randint(1, 10, N)
})
# df['frequency'] = 1 # in case there wasn't a frequency column yet
piv = pd.pivot_table(df, values="frequency",index=["Words"], columns=["Date"], fill_value=0, aggfunc=np.sum)
idx = piv.sum(axis=1).sort_values(ascending=False).head(10).index
ax = sns.heatmap(piv.loc[idx], square=False)
ax.set_xticklabels(ax.get_xticklabels(), rotation=0) # rotate the x labels to be horizontally again
ax.set_yticklabels(ax.get_yticklabels(), rotation=0) # rotate the y labels to be horizontally
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/78am0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/78am0.png" alt="sample plot"></a></p>
<p>PS: To show all the ticks, and all the labels (they can be too crowded) sorted alphabetically:</p>
<pre><code>from matplotlib.ticker import FixedLocator
idx = piv.sort_values('Words', ascending=True).index
ax = sns.heatmap(piv.loc[idx], square=False)
ax.yaxis.set_major_locator(FixedLocator(np.arange(0.5, len(idx) + 0.5, 1)))
ax.set_yticklabels(idx, rotation=0, fontsize=6)
</code></pre>
<p>Or to see the labels alternating left and right (to fit the double), a secondary axis could help:</p>
<pre><code>ax.yaxis.set_major_locator(FixedLocator(np.arange(0.5, len(idx) + 0.5, 2)))
ax.set_yticklabels(idx[::2], rotation=0, fontsize=6)
secax = ax.secondary_yaxis('right')
secax.yaxis.set_major_locator(FixedLocator(np.arange(1.5, len(idx) + 0.5, 2)))
secax.set_yticklabels(idx[1::2], rotation=0, fontsize=6)
</code></pre>
|
python|pandas|seaborn
| 0
|
5,721
| 61,608,030
|
How can I export status of list of websites as 'Yes' or 'No' to csv from python?
|
<p>I've imported a list of websites from a csv file. I then checked every website to see if it is built on WordPress or not. Now I want to export the WordPress status of every website as 'Yes' or 'No' to a csv.
With this code I could only get the status as either all 'Yes' or all 'No': it returns the WordPress status of the last-checked website as the status of all websites. E.g.: if the WordPress status of the last-checked website is 'Yes', then the status of all websites in the csv is 'Yes'.</p>
<pre><code>wpLoginCheck = requests.get(websiteToScan + '/wp-login.php', headers=user_agent)
print(f"wpLoginCheck: {wpLoginCheck}")
if wpLoginCheck.status_code == 200:
    df['Status'] = 'Yes'
    df.to_csv('updated.csv', index=False)
else:
    print("Not detected")
    df['Status'] = 'No'
    df.to_csv('updated.csv', index=False)
</code></pre>
|
<pre><code>df['Status'] = 'Yes'
</code></pre>
<p>sets the column 'Status' equal to Yes for all rows. If you want to change the value of a specific cell you need to use</p>
<pre><code>df.at[i, 'Status'] = 'Yes'
</code></pre>
<p>where i is the row index.
If you want to add rows as the loop goes on, you can save the results in dictionaries and convert them to a dataframe at the end; see for example <a href="https://stackoverflow.com/questions/31674557/how-to-append-rows-in-a-pandas-dataframe-in-a-for-loop">How to append rows in a pandas dataframe in a for loop?</a> and the sketch below.</p>
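<p>A minimal sketch of that pattern (my addition; it assumes <code>websites</code> is the list of URLs read from the csv and <code>user_agent</code> is defined as in the question):</p>
<pre><code>import pandas as pd
import requests

results = []
for websiteToScan in websites:
    r = requests.get(websiteToScan + '/wp-login.php', headers=user_agent)
    results.append({'website': websiteToScan,
                    'Status': 'Yes' if r.status_code == 200 else 'No'})
pd.DataFrame(results).to_csv('updated.csv', index=False)
</code></pre>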
|
python|pandas|csv
| 0
|
5,722
| 61,546,771
|
summing pandas rows and cols to make a plot and a new column
|
<p>I have a csv file that represents the evolution of a trend in the market. But the data has repeated dates and city names, like the df I show below.</p>
<p>I have a pandas dataframe like this:</p>
<pre><code> date city confirmed
0 2020-03-12 Florianópolis 2
1 2020-03-13 Florianópolis 2
2 2020-03-13 Joinville 1
3 2020-03-14 Florianópolis 2
4 2020-03-14 Joinville 1
</code></pre>
<p>I just want to make a plot showing the day-by-day increase of the values in the confirmed column.</p>
<p>I would like to show the increase by day/city/confirmed values.</p>
<p>I tried to groupby, but the plot got messy.</p>
<p><a href="https://i.stack.imgur.com/u0hMP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u0hMP.png" alt="enter image description here"></a>
And I would like some like this:</p>
<p><a href="https://i.stack.imgur.com/Czhkx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Czhkx.png" alt="enter image description here"></a>
Any help would be greatly apreciated.</p>
<p>Thank you for your time.</p>
<p>Paulo</p>
|
<p>Do you want this?</p>
<pre><code>import pandas as pd
df = pd.DataFrame([['2020-03-12','Florianópolis',2],
['2020-03-13', 'Florianópolis' , 2],
['2020-03-13', 'Joinville', 1],
['2020-03-14', 'Florianópolis', 2],
['2020-03-14', 'Joinville', 1]
] , columns = ['date','city','confirmed'])
df.plot.bar(x='date',y='confirmed',rot=0)
</code></pre>
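<p>If the per-city breakdown matters, a pivot first (my addition) gives one bar group per date with one bar per city:</p>
<pre><code>df.pivot_table(index='date', columns='city', values='confirmed', aggfunc='sum').plot.bar(rot=0)
</code></pre>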
|
python|pandas|pandas-groupby
| 0
|
5,723
| 61,449,222
|
tensorflow-addon not compatible with tensorflow 1.xx
|
<p>I am working with code that uses tensorflow 1.14. They also used tensorflow-addons, but as far as I understand the tensorflow-addons releases that are available to install only support tensorflow >= 2, because when I tried to install an older version of tf-addons it said:
"Could not find a version that satisfies the requirement tensorflow-addons==0.4.0 (from versions: 0.7.0, 0.7.1, 0.8.1, 0.8.2, 0.8.3, 0.9.0, 0.9.1)"</p>
<p>And upgrading to tf2 would be very complicated, so I wanted to ask if there is any other solution?</p>
|
<p>According to the <a href="https://github.com/tensorflow/addons" rel="nofollow noreferrer">official repository</a> it only works with TensorFlow 2+. So, there are two options for your situation.</p>
<ul>
<li><a href="https://www.tensorflow.org/guide/migrate" rel="nofollow noreferrer">Manual upgrade</a> your code to TensorFlow 2.0.</li>
<li>Use an <a href="https://www.tensorflow.org/guide/upgrade" rel="nofollow noreferrer">automatic upgrade script</a> to upgrade your code. It is an easier solution; see the command below.</li>
</ul>
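<p>The automatic script ships with TensorFlow 2.x as <code>tf_upgrade_v2</code>; its documented invocation looks like this (the paths are placeholders):</p>
<pre><code>tf_upgrade_v2 --intree my_project/ --outtree my_project_v2/ --reportfile report.txt
</code></pre>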
<p>Also upgrading from 1.X to 2.2 can be more convenient. Because they keep core API the same with the 1.X versions as they said <a href="https://blog.tensorflow.org/2020/03/recap-of-2020-tensorflow-dev-summit.html" rel="nofollow noreferrer">here</a>.</p>
|
python|tensorflow
| 0
|
5,724
| 61,228,219
|
Use Pandas dataframe groupby.filter with own function and argument
|
<p>I would like to filter my dataframe with my own filter function, which requires an object as an argument:</p>
<pre><code>def my_filter_function(df: pd.DataFrame, my_arg: object) -> bool:
</code></pre>
<p>I know I can do the following</p>
<pre><code>df.groupby('column_name').filter(lambda group_df: my_filter_function(group_df, my_arg))
</code></pre>
<p>But I would like to know if there is a way to simply pass <code>my_arg</code> as an argument in some way that could be used by <code>my_filter_function</code>, without using a lambda expression.</p>
<p>Something like this for example, but it does not work:</p>
<pre><code>df.groupby('column_name').filter(my_filter_function, args=(my_arg,))
</code></pre>
|
<p>According to the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.filter.html?highlight=filter#pandas.core.groupby.DataFrameGroupBy.filter" rel="nofollow noreferrer">documentation</a>, you can pass <code>*args</code> and <code>**kwargs</code> to the function. This is an option in python that allows a function to collect all additional arguments passed (<code>*args</code> for regular arguments, <code>**kwargs</code> for keyword arguments). Then it can pass these arguments to the received function.</p>
<p>The most simple way is to add a keyword argument, which will be caught by <code>**kwargs</code>, like this:</p>
<pre><code>df.groupby('column_name').filter(my_filter_function, my_arg=my_arg)
</code></pre>
<p>You can also add a regular argument (which will be caught by <code>*args</code>), but you need to specify all other parameters beforehand. Filter only has 2 arguments - the function and dropna. If you specify <code>dropna</code> (its default is <code>True</code>) you can then add arguments that will be passed to your function:</p>
<pre><code>df.groupby('column_name').filter(my_filter_function, True, my_arg)
</code></pre>
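<p>For completeness, a minimal sketch of a filter function used this way (the names and the threshold logic are purely illustrative):</p>
<pre><code>import pandas as pd

def my_filter_function(group_df: pd.DataFrame, my_arg: object) -> bool:
    # keep the group only if it has more rows than the threshold passed in my_arg
    return len(group_df) > my_arg

df.groupby('column_name').filter(my_filter_function, my_arg=2)
</code></pre>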
|
pandas
| 1
|
5,725
| 68,708,984
|
keep only year month and day in datetime dataframe pandas
|
<p>Could anyone help me delete the hours and minutes from these datetimes, please?
I used this code but it still returns the same output!</p>
<pre><code>data["expected_Date"]=pd.to_datetime(data["Last_Date"]+ timedelta(days=365*2.9),format = '%Y-%m-%d')
</code></pre>
<p>but it always returns values like 2019-01-22 12:00:00, while I only want to keep 2019-01-22.
How can I manage that, please? Thank you!</p>
|
<pre><code>data["expected_Date"].dt.date
</code></pre>
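<p>For example (a sketch on the same column), assign it back to keep only the date part; a commented-out alternative keeps the datetime64 dtype with the time set to midnight:</p>
<pre><code># keep only the date part (dtype becomes object holding datetime.date values)
data["expected_Date"] = data["expected_Date"].dt.date

# alternative: keep the datetime64 dtype but zero out the time component
# data["expected_Date"] = data["expected_Date"].dt.normalize()
</code></pre>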
|
python|pandas|datetime|datetime-format|python-datetime
| 2
|
5,726
| 68,525,688
|
modify column values other than a specific value is not working
|
<p>I have the dataframe below and need to modify the profession column, except where the value is doctor.</p>
<pre><code>id firstname lastname email profession
0 100 Ekaterina Skell Ekaterina.Skell@yopmail.com developer
1 101 Judy Vernier Judy.Vernier@yopmail.com police officer
2 102 Tarra Diann Tarra.Diann@yopmail.com police officer
3 103 Odessa Maxi Odessa.Maxi@yopmail.com firefighter
4 104 Mallory Peonir Mallory.Peonir@yopmail.com firefighter
5 105 Nataline Hoenack Nataline.Hoenack@yopmail.com doctor
6 106 Dude Adrienne Dode.Adrienne@yopmail.com developer
7 107 Caressa Meli Caressa.Meli@yopmail.com doctor
8 108 Zaria Carey Zaria.Carey@yopmail.com firefighter
9 109 Harmonia Seumas Harmonia.Seumas@yopmail.com worker
</code></pre>
<p>What I tried is:</p>
<pre><code>if src[src['profession'].isin(['doctor'])]:
src['profession'] = src['profession'].astype(str)+'-Done'
</code></pre>
<p>but I am getting the error below.</p>
<pre><code>ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>How do I get the output below (if the value is doctor, nothing should be appended)?</p>
<pre><code>id firstname lastname email profession
0 100 Ekaterina Skell Ekaterina.Skell@yopmail.com developer-Done
1 101 Judy Vernier Judy.Vernier@yopmail.com police officer-Done
2 102 Tarra Diann Tarra.Diann@yopmail.com police officer-Done
3 103 Odessa Maxi Odessa.Maxi@yopmail.com firefighter-Done
4 104 Mallory Peonir Mallory.Peonir@yopmail.com firefighter-Done
5 105 Nataline Hoenack Nataline.Hoenack@yopmail.com doctor
6 106 Dude Adrienne Dode.Adrienne@yopmail.com developer-Done
7 107 Caressa Meli Caressa.Meli@yopmail.com doctor
8 108 Zaria Carey Zaria.Carey@yopmail.com firefighter-Done
9 109 Harmonia Seumas Harmonia.Seumas@yopmail.com worker-Done
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a> with inverted mask by <code>~</code>:</p>
<pre><code>#if need compare one scalar value
m = src['profession'].eq('doctor')
#if need compare by list of values
m = src['profession'].isin(['doctor'])
src.loc[~m, 'profession'] = src.loc[~m, 'profession'].astype(str)+'-Done'
</code></pre>
<p>Or <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a>:</p>
<pre><code>src['profession'] = np.where(m, src['profession'], src['profession'].astype(str)+'-Done')
</code></pre>
<p>If need 100% all strings:</p>
<pre><code>s = src['profession'].astype(str)
src['profession'] = np.where(m, s, s+'-Done')
</code></pre>
|
python|pandas|dataframe
| 1
|
5,727
| 68,674,468
|
Can you change the input shape of a trained model in Tensorflow?
|
<p>I trained a model with the input shape of (224, 224, 3) and I'm trying to change it to (300, 300, 3). For instance:</p>
<pre class="lang-py prettyprint-override"><code>resnet50 = tf.keras.models.load_model(path_to_model)
model = tf.keras.models.Model([Input(shape=(300, 300, 3))], [resnet50.output])
# or
resnet50.inputs[0].set_shape([None, 300, 300, 3])
</code></pre>
<p>doesn't work.</p>
<p>I saw that the pretrained model allows for different input shapes but adjusts the whole network architecture, for example, the size of the convolutional channels. I was wondering if I needed to do something similar or if for a trained model it is impossible to change the input shape.</p>
|
<p>This would only work for convolutional layers as they do not care about <code>input_shape</code> because they are just sliding filters. However, if your model is trained on RGB images then the <code>new_input</code> shape should also have 3 channels.</p>
<p>Example:</p>
<pre><code>first_model = VGG16(weights = None, input_shape=(224,224,3), include_top=False)
first_model.summary()
>> input_6 (InputLayer) [(None, 224, 224, 3)] 0
</code></pre>
<p>And second model:</p>
<pre><code>new_input = tf.keras.Input((300,300,3))
x = first_model.layers[1](new_input) # First conv. layer
for new_layer in first_model.layers[2:]:
x = new_layer(x) # loop through layers using Functional API
second_model = tf.keras.Model(inputs=new_input, outputs=x)
second_model.summary()
>>
Layer (type) Output Shape Param #
=================================================================
input_9 (InputLayer) [(None, 300, 300, 3)] 0
</code></pre>
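<p>If the saved model happens to be a standard architecture (for example a headless ResNet50, as the variable name suggests), another sketch, under that assumption only, is to rebuild it with the new input shape and copy the weights over:</p>
<pre><code>new_model = tf.keras.applications.ResNet50(weights=None, include_top=False,
                                           input_shape=(300, 300, 3))
# only valid if the layer structure of the two models matches exactly
new_model.set_weights(resnet50.get_weights())
</code></pre>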
|
tensorflow|keras
| 1
|
5,728
| 63,442,203
|
How to do rolling calculation based on subsets of rows
|
<p>I am doing rolling mean analysis on a pandas dataframe,</p>
<pre><code>R1_Chr_Name distance count
0 chr1 100 163
1 chr1 101 203
2 chr1 102 194
3 chr1 103 193
4 chr1 104 154
5 chr2 100 150
6 chr2 101 152
7 chr2 102 163
8 chr2 103 161
9 chr2 104 170
10 chr3 100 154
11 chr3 101 160
12 chr3 102 175
13 chr3 103 134
14 chr3 104 151
</code></pre>
<p>I want to add a column named 'average_count' that will do a rolling mean within a subgroup of rows, e.g. a rolling mean of 'count' where column 'R1_Chr_Name' equals chr1, chr2, chr3, ...
The desired result should look like:</p>
<pre><code>R1_Chr_Name distance count average_count
0 chr1 100 163 NaN
1 chr1 101 203 NaN
2 chr1 102 194 186.666667
3 chr1 103 193 196.666667
4 chr1 104 154 180.333333
5 chr2 100 150 NaN
6 chr2 101 152 NaN
7 chr2 102 163 155.000000
8 chr2 103 161 158.666667
9 chr2 104 170 164.666667
10 chr3 100 154 NaN
11 chr3 101 160 NaN
12 chr3 102 175 163.000000
13 chr3 103 134 156.333333
14 chr3 104 151 153.333333
</code></pre>
<p>Currently I am using the following code and found previous calculations were overwritten:</p>
<pre><code>chr_ls = ['chr1', 'chr2', 'chr3']
for chrom in chr_ls:
df['average_count']=df[df['R1_Chr_Name']==chrom]['count'].rolling(3).mean()
print(df)
R1_Chr_Name distance count average_count
0 chr1 100 163 NaN
1 chr1 101 203 NaN
2 chr1 102 194 NaN
3 chr1 103 193 NaN
4 chr1 104 154 NaN
5 chr2 100 150 NaN
6 chr2 101 152 NaN
7 chr2 102 163 NaN
8 chr2 103 161 NaN
9 chr2 104 170 NaN
10 chr3 100 154 NaN
11 chr3 101 160 NaN
12 chr3 102 175 163.000000
13 chr3 103 134 156.333333
14 chr3 104 151 153.333333
</code></pre>
<p>So, how to do it correctly?</p>
|
<p>Check with <code>groupby</code></p>
<pre><code>df['ave_count']=df.groupby('R1_Chr_Name')['count'].rolling(3).mean().reset_index(level=0,drop=True)
df
Out[232]:
R1_Chr_Name distance count ave_count
0 chr1 100 163 NaN
1 chr1 101 203 NaN
2 chr1 102 194 186.666667
3 chr1 103 193 196.666667
4 chr1 104 154 180.333333
5 chr2 100 150 NaN
6 chr2 101 152 NaN
7 chr2 102 163 155.000000
8 chr2 103 161 158.666667
9 chr2 104 170 164.666667
10 chr3 100 154 NaN
11 chr3 101 160 NaN
12 chr3 102 175 163.000000
13 chr3 103 134 156.333333
14 chr3 104 151 153.333333
</code></pre>
|
pandas|rolling-computation
| 2
|
5,729
| 63,713,006
|
Moving arrays from cells to column headers and row values
|
<p>With below code:</p>
<pre><code>import pandas as pd
import numpy as np
data = {'Brand': [['Honda', 'Toyota'],['Toyota', 'Honda', 'Ford'],['Ford','Toyota']],
'Price': [[10,12],[15,18,11],[11,12]]}
df = pd.DataFrame(data)
df
</code></pre>
<p>I have the following dataframe:</p>
<pre><code> Brand Price
0 [Honda, Toyota] [10, 12]
1 [Toyota, Honda, Ford] [15, 18, 11]
2 [Ford, Toyota] [11, 12]
</code></pre>
<p>I would like to transform it, making <strong>Brand</strong> entries my column names and <strong>Price</strong> cell values, to look like this:</p>
<pre><code> Honda Toyota Ford
0 10 12 NaN
1 18 15 11
2 NaN 12 11
</code></pre>
<p>Unfortunately both the order of entries in the array and their appearance vary across over 200k records. Is it possible to do this?</p>
|
<p>Check with</p>
<pre><code>s = pd.DataFrame([dict(zip(x, y)) for x , y in zip(df['Brand'], df['Price'])])
Out[403]:
Honda Toyota Ford
0 10.0 12 NaN
1 18.0 15 11.0
2 NaN 12 11.0
</code></pre>
|
python|pandas
| 4
|
5,730
| 53,607,710
|
How does pytorch calculate matrix pairwise distance? Why isn't 'self' distance zero?
|
<p>If this is a naive question, please forgive me; my test code is like this:</p>
<pre><code>import torch
from torch.nn.modules.distance import PairwiseDistance
list_1 = [[1., 1.,],[1., 1.]]
list_2 = [[1., 1.,],[2., 1.]]
mtrxA=torch.tensor(list_1)
mtrxB=torch.tensor(list_2)
print "A-B distance :",PairwiseDistance(2).forward(mtrxA, mtrxB)
print "A 'self' distance:",PairwiseDistance(2).forward(mtrxA, mtrxA)
print "B 'self' distance:",PairwiseDistance(2).forward(mtrxB, mtrxB)
</code></pre>
<p>Result:</p>
<pre><code>A-B distance : tensor([1.4142e-06, 1.0000e+00])
A 'self' distance: tensor([1.4142e-06, 1.4142e-06])
B 'self' distance: tensor([1.4142e-06, 1.4142e-06])
</code></pre>
<p>Questions are:</p>
<ol>
<li><p>How does pytorch calculate pairwise distance? Is it to calculate row vectors distance?</p></li>
<li><p>Why isn't 'self' distance 0?</p></li>
</ol>
<hr>
<p><strong>Update</strong></p>
<p>After changing list_1 and list_2 to this:</p>
<pre><code>list_1 = [[1., 1.,1.,],[1., 1.,1.,]]
list_2 = [[1., 1.,1.,],[2., 1.,1.,]]
</code></pre>
<p>Result becomes:</p>
<pre><code>A-B distance : tensor([1.7321e-06, 1.0000e+00])
A 'self' distance: tensor([1.7321e-06, 1.7321e-06])
B 'self' distance: tensor([1.7321e-06, 1.7321e-06])
</code></pre>
|
<p>Looking at the documentation of <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.PairwiseDistance" rel="noreferrer"><code>nn.PairWiseDistance</code></a>, pytorch expects two 2D tensors of <code>N</code> vectors in <code>D</code> dimensions, and computes the distances between the <code>N</code> pairs. </p>
<p>Why isn't the "self" distance zero? Because <code>eps = 1e-6</code> is added to the difference before the norm is taken (on top of ordinary <a href="https://stackoverflow.com/q/9508518/1714410">floating point precision</a>), so the "self" distance comes out as <code>sqrt(D) * eps</code>: 1.414e-6 for D=2 and 1.732e-6 for D=3, exactly the values you observe.</p>
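<p>A small sketch based on the question's first example shows that the offset disappears when <code>eps</code> is set to zero:</p>
<pre><code>import torch
from torch.nn.modules.distance import PairwiseDistance

a = torch.tensor([[1., 1.], [1., 1.]])
print(PairwiseDistance(p=2).forward(a, a))          # tensor([1.4142e-06, 1.4142e-06])
print(PairwiseDistance(p=2, eps=0.).forward(a, a))  # tensor([0., 0.])
</code></pre>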
|
python|pytorch|pairwise-distance
| 5
|
5,731
| 71,941,116
|
Extract subset of dataframe by value of column - column datatypes are mixed
|
<p>I have a dataframe like this:</p>
<pre><code>Seq Value
20-35-ABCDE 14268142.986651151
21-33-ABEDFD 4204281194.109206
61-72-ASDASD 172970.7123134008912
61-76-ASLDKAS 869238.232460215262
63-72-ASDASD string1
63-76-OIASD 20823821.49471747433
64-76-ASDAS(s)D string1
65-72-AS*AS 8762472.99003354316
65-76-ASYAD*S* 32512348.3285536161
66-76-A(AD(AD)) 3843230.72933184169
</code></pre>
<p>I want to rank the rows based on the Value, highest to lowest, and return the top 50% of the rows (where the row number could change over time).</p>
<p>I wrote this to do the ranking:</p>
<pre><code>df = pd.read_csv(sys.argv[1],sep='\t')
df.columns =['Seq', 'Value']
#cant do because there are some strings
#pd.to_numeric(df['Value'])
df2 = df.sort_values(['Value'], ascending=True).head(10)
print(df2)
</code></pre>
<p>The output is like this:</p>
<pre><code>Seq Value
17210 ASK1 0.0
15061 ASD**ASDHA 0.0
41110 ASD(£)DA 1.4355078174305618
50638 EILMH 1000.7985554926368
62019 VSEFMTRLF 10000.89805735126
41473 LEDSAGES 10002.182707004016
41473 LEDSASDES 10000886.012834921
</code></pre>
<p>So I guess it sorted them by string instead of floats, but I'm struggling to understand how to sort by float because some of the entries in that column say string1 (and I want all the string1 to go to the end of the list, i.e. I want to sort by all the floats, and then just put all the string1s at the end), and then I want to be able to return the Seq values in the top 50% of the sorted rows.</p>
<p>Can someone help me with this, even just the sorting part?</p>
|
<p>The problem is that your column is storing the values as strings, so they will sort according to string sorting, not numeric sorting. You can sort numerically using the <code>key</code> of <code>DataFrame.sort_values</code>, which also allows you to preserve the string values in that column.</p>
<p>Another option would be to turn that column into a numeric column before the sort, but then non-numeric values must be replaced with <code>NaN</code></p>
<h3>Sample data</h3>
<pre><code>import pandas as pd
df = pd.DataFrame({'Seq': [1,2,3,4,5],
'Value': ['11', '2', '1.411', 'string1', '91']})
# String sorting
df.sort_values('Value')
#Seq Value
#2 3 1.411
#0 1 11
#1 2 2
#4 5 91
#3 4 string1
</code></pre>
<h3>Code</h3>
<pre><code># Numeric sorting
df.sort_values('Value', key=lambda x: pd.to_numeric(x, errors='coerce'))
Seq Value
2 3 1.411
1 2 2
0 1 11
4 5 91
3 4 string1
</code></pre>
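<p>To get the question's "top 50% by value, highest first", one possible sketch (assuming the non-numeric <code>string1</code> rows should simply fall to the end and be excluded from the top half) is:</p>
<pre><code>df_sorted = df.sort_values('Value', ascending=False,
                           key=lambda x: pd.to_numeric(x, errors='coerce'))
top_seqs = df_sorted['Seq'].head(len(df_sorted) // 2).tolist()
</code></pre>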
|
python|pandas
| 2
|
5,732
| 55,236,331
|
Custom grouping for all possible groups when having missing values
|
<p>I have a dictionary which represents a set of products. I need to find all duplicate products within these products. If products have same <code>product_type</code>,<code>color</code> and <code>size</code> -> they are duplicates. I could easily group by ('product_type','color','size') if I did not have a problem: some values are missing. Now I have to find all possible groups of products that might be duplicates between themselves. <strong>This means that some elements can appear in multiple groups.</strong></p>
<p>Let me illustrate:</p>
<pre><code>import pandas as pd
def main():
data= {'product_id': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
'product_type': ['shirt', 'shirt', 'shirt', 'shirt', 'shirt', 'hat', 'hat', 'hat', 'hat', 'hat', 'hat', ],
'color': [None, None, None, 'red', 'blue', None, 'blue', 'blue', 'blue', 'red', 'red', ],
'size': [None, 's', 'xl', None, None, 's', None, 's', 'xl', None, 'xl', ],
}
print(data)
if __name__ == '__main__':
main()
</code></pre>
<p>for this data:</p>
<p><a href="https://i.stack.imgur.com/eNU4J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eNU4J.png" alt="enter image description here"></a></p>
<p>I need this result - list of possibly duplicate products for each possible group (take only the biggest super groups):</p>
<p><a href="https://i.stack.imgur.com/XU2D9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XU2D9.png" alt="![enter image description here"></a></p>
<p>So for example, lets take "shirt" with <code>id=1</code>
this product does not have color or size so he can appear in a possible "duplicates group" together with shirt #2 (which has size "s" but does not have color) and shirt #4 (which has color "red" but does not have size). So these three shirts (1,2,4) are possibly duplicates with same color "red" and size "s".</p>
<p>I tried to implement it by looping through all possible combinations of missing values but it feels wrong and complex.</p>
<p>Is there a way to get the desired result?</p>
|
<p>You can create all possible keys that are not <code>None</code> and then check which item falls into what key - respecting the <code>None</code>s:</p>
<pre><code>data= {'product_id' : [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
'product_type': ['shirt', 'shirt', 'shirt', 'shirt', 'shirt', 'hat',
'hat', 'hat', 'hat', 'hat', 'hat', ],
'color' : [None, None, None, 'red', 'blue', None, 'blue',
'blue', 'blue', 'red', 'red', ],
'size' : [None, 's', 'xl', None, None, 's', None, 's', 'xl', None, 'xl', ]}
from itertools import product
# create all keys without None in it
p = product((t for t in set(data['product_type']) if t),
(c for c in set(data['color']) if c),
(s for s in set(data['size']) if s))
# create the things you have in stock
inventar = list( zip(data['product_id'],data['product_type'],data['color'],data['size']))
d = {}
# order things into its categories
for cat in p:
d.setdefault(cat,set()) # uses a set to collect the IDs
for item in inventar:
TY, CO, SI = cat
ID, TYPE, COLOR, SIZE = item
# the (TYPE or TY) will substitute TY for any TYPE that is None etc.
if (TYPE or TY)==TY and (COLOR or CO)==CO and (SIZE or SI)==SI:
d[cat].add(ID)
print(d)
</code></pre>
<p>Output:</p>
<pre><code># category-key id's that match
{('shirt', 'blue', 's') : {1, 2, 5},
('shirt', 'blue', 'xl'): {1, 3, 5},
('shirt', 'red', 's') : {1, 2, 4},
('shirt', 'red', 'xl') : {1, 3, 4},
('hat', 'blue', 's') : {8, 6, 7},
('hat', 'blue', 'xl') : {9, 7},
('hat', 'red', 's') : {10, 6},
('hat', 'red', 'xl') : {10, 11}}
</code></pre>
<p>Docs:</p>
<ul>
<li><a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow noreferrer">itertools.product(*iterables)</a></li>
<li><a href="https://docs.python.org/3/library/functions.html#zip" rel="nofollow noreferrer">zip(*iterables)</a></li>
<li><a href="https://docs.python.org/3/library/stdtypes.html#truth-value-testing" rel="nofollow noreferrer">truth value testing</a></li>
</ul>
|
python|pandas|python-2.7|dataframe
| 0
|
5,733
| 55,392,261
|
Merging multiple rows to one row of a dataframe column
|
<p>My current dataframe looks like below:</p>
<pre><code> 0 1 2
0 HA-567034786 AB-1018724 None
1 AB-6348403 HA-7298656 None
</code></pre>
<p>After using <code>apply()</code>, I make it look like this:</p>
<pre><code>def make_dict(row):
s = set(x for x in row if x)
return {x: list(s - {x}) for x in s}
result = df.apply(make_dict, axis=1).to_frame(name = 'duplicates')
duplicates
1 {'HA-567034786': ['AB-1018724'],'AB-1018724':['HA-567034786']}
2 {'AB-6348403': ['HA-7298656'],'HA-7298656':['AB-6348403']}
</code></pre>
<p>Now I'm stuck on making it a single flat dictionary like below:</p>
<pre><code>{
'HA-567034786': ['AB-1018724'],'AB-1018724':['HA-567034786'],
'AB-6348403': ['HA-7298656'],'HA-7298656':['AB-6348403']
}
</code></pre>
|
<p>Instead of <code>apply</code>, use a dictionary comprehension with flattening:</p>
<pre><code>print (df)
0 1
0 HA-567034786 AB-1018724
1 AB-6348403 HA-7298656
def make_dict(row):
s = set(x for x in row if x)
return {x: list(s - {x}) for x in s}
result = {k:v for x in df.values for k, v in make_dict(x).items()}
print (result)
{'HA-567034786': ['AB-1018724'],
'AB-1018724': ['HA-567034786'],
'HA-7298656': ['AB-6348403'],
'AB-6348403': ['HA-7298656']}
</code></pre>
<p>Another solution with <code>apply</code>:</p>
<pre><code>result = {k:v for x in df.apply(make_dict, axis=1) for k, v in x.items()}
</code></pre>
|
python|pandas|dataframe
| 1
|
5,734
| 66,949,958
|
Set row in dataframe as column attributes or meta data
|
<p>I have a dataframe with survey data, where each column is labelled Q1-Q100. The first row in the dataframe contains the actual questions from the survey (one question for each column). I would like to set that row as an attribute or meta data for each column so I can reference it later.</p>
<p>The dataframe looks like this:</p>
<pre><code>Q1 Q2 Q3 Q4
ID Age Gender Handedness
1 19 Female Right
2 19 Male Right
3 25 Female Right
4 17 Female Left
</code></pre>
<p>But for Q10-100 the label is a full sentence/question rather than a short label like 'Age'.</p>
<p>I know I can set attributes individually using:</p>
<pre><code>df.attrs['Q1'] = 'This is an example question'
</code></pre>
<p>But am looking to see if there is a way to set the entire row as the attributes for each associated column, without having to loop through each column and set them one at a time.</p>
|
<p>Why not simply use a loop?</p>
<p>If you have the questions stored as a list, you can just do something like:</p>
<pre class="lang-py prettyprint-override"><code>for column, question in zip(df.columns, questions):
df.attrs[column] = question
</code></pre>
<p>PS: If you need to have the questions stored in <code>df</code>'s first row as a list, just do:</p>
<pre class="lang-py prettyprint-override"><code>questions = df.iloc[0].tolist()
df = df.drop(df.index[0], axis=0)
</code></pre>
|
python|pandas|dataframe
| 1
|
5,735
| 68,304,336
|
Plotting a Time Schedule with Business Hour
|
<p>I'm implementing a time schedule associated with business hours (8am to 5pm) using pd.offsets.CustomBusinessHour and attempting to plot a Gantt chart or horizontal bar chart using matplotlib.
At this point, I want to cut off the intervals between x-axis ticks that fall outside business hours, since they are unnecessary: there is a gap between 5pm of one day and 8am of the next day.</p>
<p><img src="https://i.stack.imgur.com/caZQF.png" alt="(As-Is) Chart Image" />
<img src="https://i.stack.imgur.com/2TShz.png" alt="(To-Be) Chart Image" /></p>
<p>I searched the parameter configuration of the BusinessHour method and ways of setting the ticks using the keywords 'interval' and 'spacing', but I couldn't find an appropriate solution.
I also considered other plotting approaches using the matplotlib.dates module, but in vain.</p>
<p>And this is my python code.</p>
<pre><code>import pandas as pd
from datetime import datetime, date, timedelta, time
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import matplotlib.dates as mdates
num = 6
start_time = datetime(2021, 7, 7, 13, 5, 16, 268902)
int_to_time = pd.offsets.CustomBusinessHour(start="08:00", end="17:00", weekmask="1111111")
duration = num * int_to_time
horizon = [start_time + (i+1) * int_to_time for i in range(num+1)]
horizon = [i.replace(microsecond=0) for i in horizon]
fig, gnt = plt.subplots(figsize=(12,3))
gnt.barh(y=1, width=duration, left=start_time, color="cyan", height=0.2)
gnt.set_xticks(horizon)
gnt.set_xticklabels(horizon, rotation=90)
gnt.tick_params(bottom=False, labelbottom=False, top=True, labeltop=True)
plt.show()
</code></pre>
|
<p>You are trying to develop a Gantt chart and are having issues with spacing of the x axis labels. Your x-axis is representing Timestamps and you want them evenly spaced out (hourly).</p>
<p>Axis tick locations are determined by <a href="https://matplotlib.org/stable/gallery/ticks_and_spines/tick-locators.html" rel="nofollow noreferrer">Tick Locators</a> and the labels are determined by <a href="https://matplotlib.org/stable/gallery/ticks_and_spines/tick-formatters.html" rel="nofollow noreferrer">Tick Formatters</a>. The default tick locator for datetimes is <a href="https://matplotlib.org/stable/api/dates_api.html#matplotlib.dates.AutoDateLocator" rel="nofollow noreferrer">AutoDatesLocator</a> which is likely implementing <a href="https://matplotlib.org/stable/api/dates_api.html#matplotlib.dates.HourLocator" rel="nofollow noreferrer">HourLocator</a>. This will return x and y values that correspond to a 24 hour date time axis.</p>
<p>One solution to your problem is to simply use LinearLocator or FixedLocator along with a FixedFormatter. This puts you in very direct control over the tick locations and labels.</p>
<p>I must add that there are many tutorials and posts about how to make a Gantt chart with matplotlib or plotly that are easily searchable. I recommend reviewing some of those as you develop your plots.</p>
<p>The solution is implemented below in the context of your code.</p>
<p><a href="https://i.stack.imgur.com/6Zjtd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Zjtd.png" alt="enter image description here" /></a></p>
<pre><code>import pandas as pd
from datetime import datetime, date, timedelta, time
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import matplotlib.dates as mdates
num = 6
start_time = datetime(2021, 7, 7, 13, 5, 16, 268902)
int_to_time = pd.offsets.CustomBusinessHour(start="08:00", end="17:00", weekmask="1111111")
duration = num * int_to_time
horizon = [start_time + (i+1) * int_to_time for i in range(num+1)]
horizon = [i.replace(microsecond=0) for i in horizon]
fig, gnt = plt.subplots(figsize=(12,3))
gnt.barh(y=1, width=duration, left=start_time, color="cyan", height=0.2)
gnt.xaxis.set_major_locator(ticker.LinearLocator(7))
gnt.xaxis.set_major_formatter(ticker.FixedFormatter(horizon))
gnt.tick_params(bottom=False, labelbottom=False, top=True, labeltop=True, rotation=90)
</code></pre>
|
python|pandas|matplotlib|scheduling|xticks
| 0
|
5,736
| 68,113,477
|
pandas how to iteratively count instances of a category by row and reset them when the other category appears?
|
<p>I have a DataFrame that shows the behavior of a machine. This machine can be in two states: Production or cleaning. Hence, I have a dummy variable called "Production", that shows 1 when the machine is producing and 0 when it is not. I would like to know the production cycles (how many hours does the machine stay producing until it stops, and how much time it stops until it starts the whole process again). Therefore, I would like to create a column that counts how much time (how many rows) the machine is under each state, but it should reset itself when the other category appears again.</p>
<p>Example:</p>
<pre><code>production production_cycle
1 5
1 5
1 5
1 5
1 5
0 2
0 2
1 1
0 3
0 3
0 3
</code></pre>
|
<p>You can first detect the turning points by looking at the points where it <code>diff</code>ers from the previous one. Then cumulative sum of this gives the needed groupings. We <code>transform</code> this with <code>count</code> to get the size of each group:</p>
<pre><code>>>> grouper = df.production.diff().ne(0).cumsum()
>>> df["production_cycle"] = df.groupby(grouper).transform("count")
>>> df
production production_cycle
0 1 5
1 1 5
2 1 5
3 1 5
4 1 5
5 0 2
6 0 2
7 1 1
8 0 3
9 0 3
10 0 3
</code></pre>
<hr>
<p>the <code>grouper</code> is</p>
<pre><code>>>> grouper
0 1
1 1
2 1
3 1
4 1
5 2
6 2
7 3
8 4
9 4
10 4
</code></pre>
|
python|pandas|count|categories
| 2
|
5,737
| 57,011,658
|
ConvNet with missing output data for weather forecast
|
<p>I am using ConvNets to build a model to make weather forecast. My input data is 10K samples of a 96x144 matrix (which represents a geographic region) with values of a variable Z (geopotential height) in each point of the grid at a fixed height. If I include 3 different heights (Z is very different in different heights) then I have this input shape: (num_samples,96,144,3). The samples are for every hour, one sample = one hour. I have nearly 2 years of data. And the input data (Z) represents the state of the atmosphere in that hour.</p>
<p>That can be thought of as an image with 3 channels, but instead of pixel values in a 0-256 range I have values of Z in a much larger range (the last height channel has a range of 7500 to 9500 and the first one has a range of approx. 500 to 1500).</p>
<p>I want to predict precipitation (will it rain or not? just that, binary, yes or no).</p>
<p>In that grid, that region of space in my country, I only have output data at specific (x,y) points (just 122 weather stations with rain data in the entire region). There are just 122 (x,y) points where I have values of 1 (it rained that hour) or 0 (it didn't).</p>
<p>So my output matrix is a (num_samples,122) vector which contains 1 at the station index if it rained in that sample (that hour) or 0 if it didn't.</p>
<p>So I used a mix between the VGG16 model and this one <a href="https://github.com/prl900/precip-encoder-decoders/blob/master/encoder_vgg16.py" rel="nofollow noreferrer">https://github.com/prl900/precip-encoder-decoders/blob/master/encoder_vgg16.py</a>, which is a model used for this specific application that I found in a paper.</p>
<p>I wish to know if I'm building the model the right way. I just changed the input layer to match my shape and the last layer of the FC block to match my classes (122, because for a specific input sample I wish to have a 1x122 vector with a 0 or 1 depending on whether it rained at that station or not; is this right?). And because the probabilities are not mutually exclusive (I can have many 1s if it rained at more than one station) I used the 'sigmoid' activation in the last layer.</p>
<p>I DON'T know which metric to use in the compile, and acc, mae, and categorical acc just stay the same across all epochs (they increase a little in the second epoch, but after that acc and val_acc stay the same for every epoch).</p>
<p>AND, in the output matrix there are null values (hours in which the station doesn't have data); I am just filling those NaNs with a -1 value (like an 'I don't know' label). Could this be the reason why nothing works?</p>
<p>Thanks for the help and sorry for the over-explanation.</p>
<pre><code>def get_vgg16():
model = Sequential()
# Conv Block 1
model.add(BatchNormalization(axis=3, input_shape=(96,144,3)))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(BatchNormalization(axis=3))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# Conv Block 2
model.add(BatchNormalization(axis=3))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(BatchNormalization(axis=3))
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# Conv Block 3
model.add(BatchNormalization(axis=3))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(BatchNormalization(axis=3))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(BatchNormalization(axis=3))
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# Conv Block 4
model.add(BatchNormalization(axis=3))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(BatchNormalization(axis=3))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(BatchNormalization(axis=3))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# Conv Block 5
model.add(BatchNormalization(axis=3))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(BatchNormalization(axis=3))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(BatchNormalization(axis=3))
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# FC layers
model.add(Flatten())
model.add(Dense(4096, activation='relu'))
model.add(Dense(4096, activation='relu'))
model.add(Dense(122, activation='sigmoid'))
#adam = Adam(lr=0.001)
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=[metrics.categorical_accuracy,metrics.binary_accuracy, 'acc'])
print(model.summary())
return model
</code></pre>
|
<p>There are various things to consider in order to improve the model:</p>
<p><strong>Your choice of loss</strong></p>
<p>You could do various things here. Using a L2 loss (squared distance minimization) is an option, where your targets are no rain (0) or rain (1) for each station. Another (more accurate) option would be to consider each output as the probability of it raining at that station. Then, you would apply a binary <a href="https://en.wikipedia.org/wiki/Cross_entropy" rel="nofollow noreferrer">cross entropy</a> loss for each one of the output values.</p>
<p>The binary cross entropy is just the regular cross entropy applied to two-class classification problems. Please note that P(y) = 1 - P(x) when there are only two possible outcomes. As such, you don't need to add any extra neurons.</p>
<p><strong>Mask the loss</strong></p>
<p>Do not set the missing targets to -1. This does not make sense and only introduces noise to the training. Imagine you are using an L2 loss. If your network predicts rain for that value, that would mean (1 - (-1))^2 = 4, a very high prediction error. Instead, you want the network to ignore these cases.</p>
<p>You can do that by masking the losses. Let's say you make Y = (num_samples, 122) predictions, and have an equally shaped target matrix T. You could define a binary mask M of the same size, with ones for the values you know, and zeros in the missing value locations. Then, your loss would be L = M * loss(Y, T). For missing values, the loss would always be 0, with no gradient: nothing would be learnt from them.</p>
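<p>A rough sketch of this masking idea in Keras (an illustration only: it assumes you keep the -1 encoding for missing targets so the mask can be rebuilt inside the loss, and it reuses the <code>model</code> and <code>sgd</code> objects from the question):</p>
<pre><code>import keras.backend as K

def masked_binary_crossentropy(y_true, y_pred):
    # 1 where the target is known, 0 where it is missing (-1)
    mask = K.cast(K.not_equal(y_true, -1), K.floatx())
    # replace the -1 placeholders with a valid label so the log stays finite
    y_true_valid = K.clip(y_true, 0, 1)
    loss = K.binary_crossentropy(y_true_valid, y_pred)
    # average only over the known targets
    return K.sum(loss * mask) / K.maximum(K.sum(mask), 1.0)

model.compile(loss=masked_binary_crossentropy, optimizer=sgd,
              metrics=['binary_accuracy'])
</code></pre>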
<p><strong>Normalize the inputs</strong></p>
<p>It is always good practice to <a href="https://en.wikipedia.org/wiki/Feature_scaling#Mean_normalization" rel="nofollow noreferrer">normalize/standardize</a> the inputs. This avoids some features having more relevance than others, speeding up the training. In cases where the inputs have very large magnitudes, it also helps stabilise the training, preventing gradient explosions.</p>
<p>In your case, you have three channels, and each one follows a different distribution (it has a different minimum and maximum value). You need to consider, separately for each channel (height), the data on all samples when computing the min+max / mean+stdv values, and then apply these two values to normalize/standardize the corresponding channel on all samples. That is, given a tensor of size (N,96,144,3), normalize/standardize each sub-tensor of size (N,96,144,1) separately. You will need to apply the same transform to the test data, so save the scaling values for later.</p>
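<p>A minimal numpy sketch of per-channel standardization (<code>X_train</code> and <code>X_test</code> are placeholder names for your input arrays of shape (N, 96, 144, 3); the statistics are computed on the training data only):</p>
<pre><code>import numpy as np

mean = X_train.mean(axis=(0, 1, 2), keepdims=True)  # shape (1, 1, 1, 3): one value per channel
std = X_train.std(axis=(0, 1, 2), keepdims=True)
X_train = (X_train - mean) / std
X_test = (X_test - mean) / std  # reuse the training statistics for the test data
</code></pre>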
|
python|tensorflow|keras|neural-network|conv-neural-network
| 0
|
5,738
| 51,064,217
|
Combine multiple CSV files using Python and Pandas
|
<p>I have the following code:</p>
<pre><code>import glob
import pandas as pd
allFiles = glob.glob("C:\*.csv")
frame = pd.DataFrame()
list_ = []
for file_ in allFiles:
print file_
df = pd.read_csv(file_,index_col=None, header=0)
list_.append(df)
frame = pd.concat(list_, sort=False)
print list_
frame.to_csv("C:\f.csv")
</code></pre>
<p>This combines multiple CSVs into a single CSV.</p>
<p>However it also adds a row number column.</p>
<p>Input:</p>
<p>a.csv</p>
<pre><code>a b c d
1 2 3 4
</code></pre>
<p>b.csv</p>
<pre><code>a b c d
551 55 55 55
551 55 55 55
</code></pre>
<p>result:
f.csv</p>
<pre><code> a b c d
0 1 2 3 4
0 551 55 55 55
1 551 55 55 55
</code></pre>
<p>How can I modify the code not to show the row numbers in the output file?</p>
|
<p>Change <code>frame.to_csv("C:\f.csv")</code> to <code>frame.to_csv("C:\f.csv", index=False)</code></p>
<p>See: <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer">pandas.DataFrame.to_csv</a></p>
|
python|pandas
| 2
|
5,739
| 57,475,296
|
How can we plot two different dictionaries in a single x axis with one plot up and the other plot down, like a head to tail format?
|
<p>I need to plot two different dictionaries with varying keys and values (since a few keys might be present in one while missing in the other) in a single plot, with one on top and the other on the bottom, so that one can compare the two.</p>
<p>Something like what has been given here, but I have two different dictionaries with varying key and value counts:
<a href="https://matplotlib.org/2.0.0/examples/pylab_examples/xcorr_demo.html" rel="nofollow noreferrer">https://matplotlib.org/2.0.0/examples/pylab_examples/xcorr_demo.html</a></p>
|
<p>I recommend to use <code>pandas</code>:</p>
<pre><code>import pandas as pd
dicts = {
'dict_1': {'1': 10, '2': 20, 'a': 5, 'b': 10},
'dict_2': {'1': 5, '2': 5, '3': 30, 'a': -5, 'c': -5}}
df = pd.DataFrame(dicts)
df.plot.bar(subplots=True) # one line to perform the task
</code></pre>
<p>Result looks kinda close to the example you linked in the question. Of course you may tune it like you want.</p>
<p><a href="https://i.stack.imgur.com/jiwOo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jiwOo.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib|spectra
| 0
|
5,740
| 73,042,044
|
Panda multiply dataframes using dictionary to map columns
|
<p>I am looking to multiply element-wise two dataframes with matching indices, using a dictionary to map which columns to multiply together. I can only come up with convoluted ways to do it and I am sure there is a better way, really appreciate the help! thx!</p>
<p>df1:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>ABC</th>
<th>DEF</th>
<th>XYZ</th>
</tr>
</thead>
<tbody>
<tr>
<td>01/01/2004</td>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>05/01/2004</td>
<td>4</td>
<td>7</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p>df2:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>Echo</th>
<th>Epsilon</th>
</tr>
</thead>
<tbody>
<tr>
<td>01/01/2004</td>
<td>5</td>
<td>10</td>
</tr>
<tr>
<td>05/01/2004</td>
<td>-1</td>
<td>-2</td>
</tr>
</tbody>
</table>
</div>
<p>Dictionary <code>d = {'ABC': 'Echo', 'DEF': 'Echo', 'XYZ': 'Epsilon'}</code></p>
<p>Expected result:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>ABC</th>
<th>DEF</th>
<th>XYZ</th>
</tr>
</thead>
<tbody>
<tr>
<td>01/01/2004</td>
<td>5</td>
<td>10</td>
<td>30</td>
</tr>
<tr>
<td>05/01/2004</td>
<td>-4</td>
<td>-7</td>
<td>-4</td>
</tr>
</tbody>
</table>
</div>
|
<p>You can use:</p>
<pre><code># only if not already the index
df1 = df1.set_index('Index')
df2 = df2.set_index('Index')
df1.mul(df2[df1.columns.map(d)].set_axis(df1.columns, axis=1))
</code></pre>
<p>or:</p>
<pre><code>df1.mul(df2.loc[df1.index, df1.columns.map(d)].values)
</code></pre>
<p>output:</p>
<pre><code> ABC DEF XYZ
Index
01/01/2004 5 10 30
05/01/2004 -4 -7 -4
</code></pre>
|
python|pandas
| 3
|
5,741
| 72,875,433
|
How to predict time series with limited data
|
<p>I have a dataset with four columns: date, category, product, rate(%). I would like to be able to forecast the rate for every product in my data. The major issue I'm having is that because products constantly come in an out of production, certain products have very little historical data making predictions difficult. I've read online that people with similar issues have used bayesian hierarchical models, like this example from Numpyro:</p>
<pre><code>import numpyro
from numpyro.infer import MCMC, NUTS, Predictive
import numpyro.distributions as dist
from jax import random
def model(PatientID, Weeks, FVC_obs=None):
μ_α = numpyro.sample("μ_α", dist.Normal(0., 100.))
σ_α = numpyro.sample("σ_α", dist.HalfNormal(100.))
μ_β = numpyro.sample("μ_β", dist.Normal(0., 100.))
σ_β = numpyro.sample("σ_β", dist.HalfNormal(100.))
unique_patient_IDs = np.unique(PatientID)
n_patients = len(unique_patient_IDs)
with numpyro.plate("plate_i", n_patients):
α = numpyro.sample("α", dist.Normal(μ_α, σ_α))
β = numpyro.sample("β", dist.Normal(μ_β, σ_β))
σ = numpyro.sample("σ", dist.HalfNormal(100.))
FVC_est = α[PatientID] + β[PatientID] * Weeks
with numpyro.plate("data", len(PatientID)):
numpyro.sample("obs", dist.Normal(FVC_est, σ), obs=FVC_obs)
</code></pre>
<p>But every example I've found online has only shown code examples of linear regression being used within the hierarchical model. Is it possible to use hierarchical models to predict for data that is non-linear? Does anyone have experience with using hierarchical models, specifically for time series data? I'd greatly appreciate it.</p>
|
<p>I think you are looking for a simulation, which you can do based on statistics.</p>
<p>You could "randomize" the produced data using a mean rate plus/minus a variance derived from the spread between the mean and the max value. I have never done this, but I think it's doable. To be honest, I would try the machine learning route.</p>
<p>Either way, it will not be fully representative of reality; that's why everyone uses linear regression as a "reference" rather than as a prediction as such, more like "the results should be around this value".
This is speaking from a business perspective. If what you need is more data, then I would look into a simulation.</p>
|
time-series|hierarchical-clustering|hierarchical-bayesian|multivariate-time-series|numpyro
| 0
|
5,742
| 51,535,925
|
Add two tensors in Keras
|
<p>I have two tensors with shapes <code>(X,y)</code> and <code>(y,)</code> respectively; is there any function in Keras that can add them together? I only found <code>K.bias_add</code> in the <a href="https://keras.io/backend/" rel="nofollow noreferrer">doc</a> but it does not work. The error is:</p>
<pre><code>TypeError: Failed to convert object of type <class 'tuple'> to Tensor.
</code></pre>
<p>The types of my variables is:</p>
<pre><code>>>x :<class 'tensorflow.python.framework.ops.Tensor'>
>>b :<class 'tensorflow.python.framework.ops.Tensor'>
</code></pre>
<p>Why does this error occur? How can I add two tensors together?</p>
|
<p>Just compute the sum within a <a href="https://keras.io/layers/core/#lambda" rel="nofollow noreferrer">Lambda</a> layer. For example:</p>
<pre><code>from keras.layers import Input, Lambda
from keras.models import Model
X = 3
y = 2
x = Input(shape=(X, y))
b = Input(shape=(y,))
out = Lambda(lambda a: a[0] + a[1])([x, b])
model = Model(inputs=[x, b], outputs=out)
</code></pre>
|
python|tensorflow|keras
| 2
|
5,743
| 51,499,161
|
Keras layer.weights and layer.get_weights() give different values
|
<p>My Keras model has Dense layers whose weights and bias values I need to access. I can access them using the get_weights() method. It returns the expected sized matrices (57x50 for the weights) for weights and biases.</p>
<pre><code>model.layers[0].get_weights()[0]
</code></pre>
<p>However, the following code snippet gives me the same sized matrices with different values.</p>
<pre><code>import tensorflow as tf
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init_op)
print(sess.run(model.layers[0].weights[0]))
</code></pre>
<p>In the second method the bias values are returned as all zeros for all models and the weights are different from the output of the get_weights() method.</p>
<p>Do you have any idea which way is correct and what exactly the second method does?</p>
|
<p>With <code>init_op</code>, you initialize all trainable variables, which means zeros for biases and random values for the other weights of your model. Try:</p>
<pre><code>import keras.backend as K

# use the session Keras already created; don't wrap it in a `with` block,
# because that would close the session and make the model unusable afterwards
sess = K.get_session()
print(sess.run(model.layers[0].weights[0]))
</code></pre>
|
tensorflow|keras
| 2
|
5,744
| 71,056,462
|
Groupby A column and bring up the A value, only if the B values differ from the other ones, including nulls
|
<p>I have this example dataset:</p>
<pre><code> A B
11 A
11 V
11 C
12 A
12 A
12 A
12 A
13 A
13 A
13 B
13 B
14 C
14 C
14
14
</code></pre>
<p>And I want it to return the grouped A values that have different values in the B column. So in this example, the expected output is:</p>
<pre><code>[11, 13,14]
</code></pre>
<p>I made an attempt at formulating the code, and I succeeded but it is terrible and unoptimized. And I was looking for alternatives so I could iterate faster through my much bigger dataset. Would appreciate some help.</p>
<p>Here is my code:</p>
<pre><code>user_mult_camps = []
for i in df['A'].unique():
filt = (df['A'] == i)
df2 = df.loc[filt]
x=df2['B'].unique()
if len(x) > 1:
user_mult_camps.append(i)
print(i)
</code></pre>
|
<p>You could <code>groupby</code> "A" and use <code>nunique</code> to count the number of unique "B"s per "A". Then evaluate if it's greater than 1 to filter the "A"s that have more than one corresponding "B":</p>
<pre><code>msk = df.groupby('A')['B'].nunique()>1
out = msk.index[msk].tolist()
</code></pre>
<p>Output:</p>
<pre><code>[11]
</code></pre>
<p>If you want to count NaN as well, then set <code>dropna</code> parameter in <code>nunique</code> to False:</p>
<pre><code>df['B'] = df['B'].fillna(value=np.nan)
msk = df.groupby('A')['B'].nunique(dropna=False)>1
out = msk.index[msk].tolist()
</code></pre>
<p>Then the output:</p>
<pre><code>[11, 13, 14]
</code></pre>
|
python|pandas|dataframe|pandas-groupby
| 2
|
5,745
| 70,752,223
|
Python Pandas - Calculate total mean, group by field, then calculate grouped means and append
|
<p>Let's say I have a pd.DataFrame with the columns "dir" and "speed":</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'dir': ['fwd', 'fwd', 'fwd', 'bwd', 'bwd'],
'speed': [10, 5, 1, 6, 8]})
# or with more columns:
df = pd.DataFrame({'dir': ['fwd', 'fwd', 'fwd', 'bwd', 'bwd'],
'speed': [10, 5, 1, 6, 8],
'mass': [100, 200, 100, 500, 300]})
</code></pre>
<pre><code> dir speed
0 fwd 10
1 fwd 5
2 fwd 1
3 bwd 6
4 bwd 8
</code></pre>
<p>I'm trying to calculate 3 things, with the result being a DataFrame with 1 row, containing "median_speed", "median_fwd_speed", "median_bwd_speed".</p>
<p>I'm really new to Pandas so forgive my horrible upcoming mistakes. Also, I have a lot of other stuff being calculated, so keeping agg is definitely preferable, but doing away with np.where() would be great.</p>
<p>What I have so far:</p>
<pre><code># duplicate dir column for future referencing
df['dir2'] = df['dir']
# groupby and calc median for fwd and bwd
df = df.groupby('dir').agg({"dir2": lambda x: x.iloc[0], # how do I do nothing with agg?
"speed": "median"})
# grab forward and bwd fields
df['median_fwd_speed'] = np.where(df['dir2'] == 'fwd', df['speed'], 0)
df['median_bwd_speed'] = np.where(df['dir2'] == 'bwd', df['speed'], 0)
</code></pre>
<p>Output:</p>
<pre><code> dir2 speed median_fwd_speed median_bwd_speed
dir
bwd bwd 7.0 0.0 7.0
fwd fwd 5.0 5.0 0.0
</code></pre>
<p>Of course the output is not 1 row, and doesn't contain the total median. Any help would be appreciated!</p>
<p>I could probably use <code>df["speed"].median()</code> and store it as a variable, but is there an elegant way using just groupby and agg?</p>
<p>Expected output with multiple columns would be something like:</p>
<pre><code>median_speed fwd_median_speed bwd_median_speed median_mass fwd_median_mass bwd_median_mass
6 5 7 200 100 400
</code></pre>
|
<p>You can aggregate <code>median</code> and then add new column for <code>median</code>:</p>
<pre><code>f = lambda x: f'median_{x}_speed'
df1=df.groupby('dir')[['speed']].median().rename(f).T.assign(median = df['speed'].median())
print (df1)
dir median_bwd_speed median_fwd_speed median
speed 7 5 6.0
</code></pre>
<p>EDIT: For multiple columns use:</p>
<pre><code>cols = ['speed', 'mass']
df1=(df.groupby('dir')[cols]
.median()
.T
.assign(median = df[cols].median())
.stack()
.to_frame()
.T
)
df1.columns = df1.columns.map(lambda x: f'{x[1]}_{x[0]}')
print (df1)
bwd_speed fwd_speed median_speed bwd_mass fwd_mass median_mass
0 7.0 5.0 6.0 400.0 100.0 200.0
</code></pre>
|
python|pandas|dataframe
| 1
|
5,746
| 70,842,345
|
When I replace strings in a pandas series using a dictionary, some partial matches get converted to the value mapped to the shorter key
|
<h1>I am trying to replace keys in the strings of a pandas dataframe series, looping over a dictionary.</h1>
<h1>My code is botching up the replace in one instance where two replacement-candidate keys share common characters.</h1>
<pre><code>mapped_lookup={'Ardy=': '0',
'is=': '1',
'Hais':'22',
'the=': '2',
'best=': '3',
'est=': '4',
'est2=': '5'}
df['header'] = pd.Series(["[Ardy=4.2, is=402, the=100]", "[HAis=4.3, the=399, est=200]", "[HAis=4.4, is=398, C=150]"])
def replacer(value, mappings):
for k, v in mappings.items():
value = value.replace(k, v)
return value
mapped_ok = {k.replace("=", ""): str(v) for k, v in mapped_lookup.items()}
df['Header']=df['Header'].apply(lambda x: replacer(x, mapped_ok))
</code></pre>
<h1>The output I want-</h1>
<pre><code>df['header'] = pd.Series(["[0=4.2, 1=402, 2=100]", "[22=4.3, 2=399, 4=200]", "[22=4.4, 1=398, C=150]"])
</code></pre>
<h1>The output I get with below code-</h1>
<pre><code>df['header'] = pd.Series(["[0=4.2, 1=402, 2=100]", "[HA1=4.3, 2=399, 4=200]", "[HA1=4.4, 1=398, C=150]"])
</code></pre>
<h1>How can I do this?</h1>
|
<p>Here is the revised code after reordering the dictionary keys in mapped_lookup.</p>
<pre><code>import pandas as pd
import numpy as np
mapped_lookup={'Ardy=': '0',
'HAis':'22',
'is=': '1',
'the=': '2',
'best=': '3',
'est=': '4',
'est2=': '5'}
df = pd.DataFrame(columns=['Header'])
df['Header'] = pd.Series(["[Ardy=4.2, is=402, the=100]", "[HAis=4.3, the=399, est=200]", "[HAis=4.4, is=398, C=150]"])
def replacer(value, mappings):
for k, v in mappings.items():
value = value.replace(k, v)
return value
mapped_ok = {k.replace("=", ""): str(v) for k, v in mapped_lookup.items()}
print(mapped_ok)
df['Header']=df['Header'].apply(lambda x: replacer(x, mapped_ok))
print(df['Header'])
</code></pre>
<p>With this code my output is:</p>
<pre><code>0 [0=4.2, 1=402, 2=100]
1 [22=4.3, 2=399, 4=200]
2 [22=4.4, 1=398, C=150]
Name: Header, dtype: object
</code></pre>
|
python|regex|pandas|string|replace
| 1
|
5,747
| 51,733,223
|
Getting error while converting to pandas dataframe from a list
|
<p>I am trying to convert a list of values to a dataframe with a single row, so the output should be a pandas dataframe with a single row, but I am getting <code>object of type 'int' has no len()</code>.
I tried other SO posts but got the same error.</p>
<pre><code>metadata = [var1, var2, var3, var4, var5]
df = pd.DataFrame(columns=['col1', 'col2', 'col3', 'col4', 'col5'], data=metadata)
</code></pre>
|
<pre><code>import numpy as np
import pandas as pd
metadata = ["var1", "var2", "var3", "var4", "var5"]
arr = np.array(metadata).reshape(1,5)
df = pd.DataFrame(columns=['col1', 'col2', 'col3', 'col4', 'col5'], data = arr)
</code></pre>
<p>Output:</p>
<pre><code> col1 col2 col3 col4 col5
0 var1 var2 var3 var4 var5
</code></pre>
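<p>An alternative sketch that avoids numpy entirely is to wrap the list in another list, so pandas treats it as a single row:</p>
<pre><code>df = pd.DataFrame([metadata], columns=['col1', 'col2', 'col3', 'col4', 'col5'])
</code></pre>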
|
python|list|pandas|dataframe
| 0
|
5,748
| 51,851,684
|
Easier way to combine all these binary columns into categorical columns?
|
<p>These are the categories that I want to collapse into single columns. The values in each list are the current binary columns present in the dataframe.</p>
<pre><code>housesitu = ['tipovivi1', 'tipovivi2', 'tipovivi3', 'tipovivi4', 'tipovivi5']
educlevels = ['instlevel1', 'instlevel2', 'instlevel3', 'instlevel4', 'instlevel5', 'instlevel6', 'instlevel7',
'instlevel8', 'instlevel9']
regions = ['lugar1', 'lugar2', 'lugar3', 'lugar4', 'lugar5', 'lugar6']
relations = ['parentesco1', 'parentesco2', 'parentesco3', 'parentesco4', 'parentesco5', 'parentesco6',
'parentesco7', 'parentesco8', 'parentesco9', 'parentesco10', 'parentesco11', 'parentesco12']
</code></pre>
<p>I currently have this code to combine binary columns into categorical columns:</p>
<pre><code> train['housesitu'] = train[housesitu].idxmax(axis=1)
train.drop(train[housesitu], axis=1, inplace=True)
train['educlevels'] = train[educlevels].idxmax(axis=1)
train.drop(train[educlevels], axis=1, inplace=True)
train['regions'] = train[regions].idxmax(axis=1)
train.drop(train[regions], axis=1, inplace=True)
train['relations'] = train[relations].idxmax(axis=1)
train.drop(train[relations], axis=1, inplace=True)
train['marital'] = train[marital].idxmax(axis=1)
train.drop(train[marital], axis=1, inplace=True)
train['rubbish'] = train[rubbish].idxmax(axis=1)
train.drop(train[rubbish], axis=1, inplace=True)
train['energy'] = train[energy].idxmax(axis=1)
train.drop(train[energy], axis=1, inplace=True)
train['toilets'] = train[toilets].idxmax(axis=1)
train.drop(train[toilets], axis=1, inplace=True)
train['floormat'] = train[floormat].idxmax(axis=1)
train.drop(train[floormat], axis=1, inplace=True)
train['roofmat'] = train[roofmat].idxmax(axis=1)
train.drop(train[roofmat], axis=1, inplace=True)
train['wallmat'] = train[wallmat].idxmax(axis=1)
train.drop(train[wallmat], axis=1, inplace=True)
train['floorqual'] = train[floorqual].idxmax(axis=1)
train.drop(train[floorqual], axis=1, inplace=True)
train['wallqual'] = train[wallqual].idxmax(axis=1)
train.drop(train[wallqual], axis=1, inplace=True)
train['roofqual'] = train[roofqual].idxmax(axis=1)
train.drop(train[roofqual], axis=1, inplace=True)
train['waterprov'] = train[waterprov].idxmax(axis=1)
train.drop(train[waterprov], axis=1, inplace=True)
train['electric'] = train[electric].idxmax(axis=1)
train.drop(train[electric], axis=1, inplace=True)
</code></pre>
<p>I would like to know if there is a shorter way to do this. </p>
|
<p>I can only think of a <code>groupby</code> with <code>idxmax</code>, since your columns are named like XXXddd.</p>
<pre><code>df.groupby(df.columns.to_series().str.replace('\d+',''),axis=1).idxmax(1)
Out[1100]:
A B
0 A2 B2
1 A1 B1
2 A1 B1
</code></pre>
<p>Data Input </p>
<pre><code>df=pd.DataFrame({'A1':[1,2,3],'A2':[2,1,3],'B1':[1,2,3],'B2':[2,1,3]})
</code></pre>
|
python|pandas
| 1
|
5,749
| 51,702,229
|
numpy mask using np.where then replace values
|
<p>I've got two 2-D numpy arrays with same shape, let's say (10,6).</p>
<p>The first array <code>x</code> is full of some meaningful float numbers.</p>
<pre><code>x = np.arange(60).reshape(-1,6)
</code></pre>
<p>The second array <code>a</code> is sparse array, with each row contains ONLY 2 non-zero values.</p>
<pre><code>a = np.zeros((10,6))
for i in range(10):
a[i, 1] = 1
a[i, 2] = 1
</code></pre>
<p>Then there's a third array with the shape of (10,2), and I want to update the values of each row to the first array <code>x</code> at the position where <code>a</code> is not zero.</p>
<pre><code>v = np.arange(20).reshape(10,2)
</code></pre>
<p>so the original <code>x</code> and the updated <code>x</code> will be: </p>
<pre><code>array([[ 0, 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10, 11],
[12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23],
[24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35],
[36, 37, 38, 39, 40, 41],
[42, 43, 44, 45, 46, 47],
[48, 49, 50, 51, 52, 53],
[54, 55, 56, 57, 58, 59]])
</code></pre>
<p>and </p>
<pre><code>array([[ 0, 0, 1, 3, 4, 5],
[ 6, 2, 3, 9, 10, 11],
[12, 4, 5, 15, 16, 17],
[18, 6, 7, 21, 22, 23],
[24, 8, 9, 27, 28, 29],
[30, 10, 11, 33, 34, 35],
[36, 12, 13, 39, 40, 41],
[42, 14, 15, 45, 46, 47],
[48, 16, 17, 51, 52, 53],
[54, 18, 19, 57, 58, 59]])
</code></pre>
<p>I've tried the following method</p>
<pre><code>x[np.where(a!=0)] = v
</code></pre>
<p>Then I got an error of <code>shape mismatch: value array of shape (10,2) could not be broadcast to indexing result of shape (20,)</code></p>
<p>What's wrong with this approach, is there an alternative to do it? Thanks a lot.</p>
|
<p>Thanks to the comment by @Divakar, the problem happens because the shapes of the two variables on both side of the assignment mark <code>=</code> are different.</p>
<p>To the left, the expression <code>x[np.where(a!=0)]</code> or <code>x[a!=0]</code> or <code>x[np.nonzero(a)]</code> are not structured, which has a shape of <code>(20,)</code></p>
<p>To the right, we need an array of similar shape to finish the assignment. Therefore, a simple <code>ravel()</code> or <code>reshape(-1)</code> will do the job.</p>
<p>so the solution is as simple as <code>x[a!=0] = v.ravel()</code>.</p>
|
python|arrays|numpy
| 1
|
5,750
| 35,875,617
|
Why does the cross spectrum differ between mlab and scipy.signal?
|
<p>I have two signals</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import mlab
import mpld3
from scipy import signal
mpld3.enable_notebook()
nfft = 256
dt = 0.01
t = np.arange(0, 30, dt)
nse1 = np.random.randn(len(t)) * 0.1 # white noise 1
nse2 = np.random.randn(len(t)) * 0.1 # white noise 2
# two signals with a coherent part and a random part
s1 = np.sin(2*np.pi*1*t) + nse1
s2 = np.sin(2*np.pi*1*t+np.pi) + nse2
plt.plot(s1, 'r', s2, 'g')
plt.show()
</code></pre>
<p>I would like to get the coherence </p>
<pre><code>cxy, fcoh = plt.cohere(s1, s2, nfft, 1./dt)
fcoh,cxy = signal.coherence(s1,s2, nfft=nfft, fs=1./dt)
plt.hold(True)
plt.plot(fcoh, cxy)
#plt.xlim(0, 5)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/FGrQi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FGrQi.png" alt="Coherence"></a></p>
<p>and the phase shift</p>
<pre><code>(csd, f) = mlab.csd(s1, s2, NFFT=nfft, Fs=1./dt)
fig = plt.figure()
angle = np.angle(csd,deg =False)
angle[angle<-np.pi/2] += 2*np.pi
plt.plot(f, angle, 'g')
plt.hold(True)
(f, csd) = signal.csd(s1, s2, fs=1./dt, nfft=nfft)
angle = np.angle(csd,deg =False)
angle[angle<-np.pi/2] += 2*np.pi
plt.plot(f, angle,'r')
#plt.xlim(0,5)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/xVMmI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xVMmI.png" alt="enter image description here"></a></p>
<p>I tried to use <code>scipy</code> and <code>mlab</code>. Can anybody explain why do I get different results? </p>
|
<p>Because the two functions have different default values for some parameters.</p>
<p>For example, if you pass the option <code>noverlap=128</code> to <code>plt.cohere()</code>, you get an almost perfect match with the <code>scipy.signal</code> solution:
<a href="https://i.stack.imgur.com/5FQil.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5FQil.png" alt="enter image description here"></a></p>
<p>Apart from a small mismatch at the 0 Hz frequency, and we do not really care much about the coherence of the DC component, do we? I bet that if you dig deeper into the documentation of both you will find other small quirks in the default values of the two.</p>
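<p>A sketch of making the two calls comparable (scipy's default overlap for <code>nperseg=256</code> is half a segment, i.e. 128, while matplotlib's default <code>noverlap</code> is 0; the exact defaults may differ between library versions):</p>
<pre><code>cxy_mpl, f_mpl = plt.cohere(s1, s2, NFFT=nfft, Fs=1./dt, noverlap=128)
f_sp, cxy_sp = signal.coherence(s1, s2, fs=1./dt, nperseg=nfft)
</code></pre>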
|
python|numpy|matplotlib|scipy|signal-processing
| 4
|
5,751
| 37,279,260
|
Why doesn't pandas allow a categorical column to be used in groupby?
|
<p>I would like to create a custom sorted DataFrame. To do this I have used <code>pandas.Categorical()</code> however if I then use the result of this in a groupby <code>NAN</code> values are returned.</p>
<pre><code># import the pandas module
import pandas as pd
# Create an example dataframe
raw_data = {'Date': ['2016-05-13', '2016-05-13', '2016-05-13', '2016-05-13', '2016-05-13','2016-05-13', '2016-05-13', '2016-05-13', '2016-05-13', '2016-05-13', '2016-05-13', '2016-05-13', '2016-05-13', '2016-05-13', '2016-05-13', '2016-05-13', '2016-05-13'],
'Portfolio': ['A', 'A', 'A', 'A', 'A', 'A', 'B', 'B','B', 'B', 'B', 'C', 'C', 'C', 'C', 'C', 'C'],
'Duration': [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3],
'Yield': [0.3, 0.3, 0.3, 0.3, 0.3, 0.3, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1],}
df = pd.DataFrame(raw_data, columns = ['Date', 'Portfolio', 'Duration', 'Yield'])
df['Portfolio'] = pd.Categorical(df['Portfolio'],['C', 'B', 'A'])
df=df.sort_values('Portfolio')
dfs = df.groupby(['Date','Portfolio'], as_index =False).sum()
print(dfs)
Date Portfolio Duration Yield
Date Portfolio
13/05/2016 C NaN NaN NaN NaN
B NaN NaN NaN NaN
A NaN NaN NaN NaN
</code></pre>
<p>Why is this and how can I overcome this?</p>
<p>Also <code>SettingWithCopyWarning</code> is raised is there a better idiom for Categorical?</p>
|
<p><code>as_index=False</code> is messing something up. If I run just:</p>
<pre><code>dfs = df.groupby(['Date','Portfolio']).sum()
</code></pre>
<p>I get:</p>
<pre><code> Duration Yield
Date Portfolio
2016-05-13 C 18 6.0
B 10 10.0
A 6 1.8
</code></pre>
<p>I don't know why this is. It may be a bug.</p>
<p>If you really wanted the result without the index and just have <code>'Date'</code> and <code>'Portfolio'</code> as columns then use <code>'reset_index()'</code>.</p>
<pre><code>dfs = df.groupby(['Date','Portfolio']).sum().reset_index()
Date Portfolio Duration Yield
0 2016-05-13 C 18 6.0
1 2016-05-13 B 10 10.0
2 2016-05-13 A 6 1.8
</code></pre>
|
python|pandas
| 1
|
5,752
| 37,226,330
|
Deploy Tensorflow on a web server with Flask
|
<p>I am trying to deploy a Flask web app with tensorflow on an AWS server ( AMI ID: Deep Learning (ami-77e0da1d)), for an image classification app.</p>
<p>When I use tensorflow in the server, it works normally, but when I try to use it with the app, I get: </p>
<blockquote>
<p>No data received ERR_EMPTY_RESPONSE </p>
</blockquote>
<p>In the end of the error.log file, I have:</p>
<blockquote>
<p>F tensorflow/stream_executor/cuda/cuda_dnn.cc:204] could not find cudnnCreate in cudnn DSO; dlerror: /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so: undefined symbol: cudnnCreate
[Sat May 14 11:30:54.124034 2016] [core:notice] [pid 1332:tid 139695334930304] AH00051: child pid 2999 exit signal Aborted (6), possible coredump in /etc/apache2</p>
</blockquote>
<p>My CuDNN version: 4.0.7</p>
<p>I can provide more details if necessary</p>
|
<p>The value of <code>LD_LIBRARY_PATH</code> is being cleared before your web app starts, for security reasons. See for example <a href="https://stackoverflow.com/q/28712022/3574081">this question</a>, which observes that the value of <code>os.environ['LD_LIBRARY_PATH']</code> is empty inside the Flask app, even though it may be set when you launch Apache.</p>
<p>There are at least a couple of options:</p>
<ul>
<li><p>You could use Apache's <a href="http://httpd.apache.org/docs/current/mod/mod_env.html" rel="nofollow noreferrer"><code>mod_env</code></a> to set the environment variables that are propagated to your Flask app.</p></li>
<li><p>Based on <a href="https://stackoverflow.com/a/28756047/3574081">this answer</a>, you could modify your script to perform a <code>subprocess</code> call, and set <code>LD_LIBRARY_PATH</code> to <code>/usr/local/cuda/lib64</code> for the subprocess (see the sketch after this list).</p></li>
</ul>
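<p>A minimal sketch of the second option (the worker script name is a placeholder, and the CUDA library path is an assumption that depends on your install):</p>
<pre><code>import os
import subprocess

env = dict(os.environ, LD_LIBRARY_PATH="/usr/local/cuda/lib64")
# hypothetical helper script that imports tensorflow and does the actual work
output = subprocess.check_output(["python", "run_inference.py"], env=env)
</code></pre>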
|
apache|flask|tensorflow|tensorflow-serving
| 1
|
5,753
| 38,012,050
|
Replacing column of inf values with 0
|
<p>I am new to numpy but i cannot seem to get this piece of code to work.</p>
<pre><code>item3.apply(lambda x : (x[np.isneginf(x)] = 0))
</code></pre>
<p>item3 is a vector of numpy arrays with 300 dimensions in each array.</p>
<p>The error thrown is invalid syntax. How do i get to achieve this function.</p>
<p>However, given that it is a vector of float64 numpy vectors. The datatype is object. and it throws an exception</p>
<pre><code>TypeError: ufunc 'isinf' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
</code></pre>
<p>Item3 is a column which contains elements which has 300 dimension each </p>
|
<p>You could do something like this:</p>
<pre><code>import numpy as np
from numpy import inf

x = np.array([inf, inf, 0])  # create an array containing inf values
print(x)                     # show the original array
x[x == inf] = 0              # replace inf by 0
print(x)                     # show the result
</code></pre>
<p><a href="https://i.stack.imgur.com/YbshY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YbshY.png" alt="enter image description here"></a></p>
|
python|numpy
| 0
|
5,754
| 37,610,757
|
How to remove nodes from TensorFlow graph?
|
<p>I need to write a program where part of the TensorFlow nodes need to keep being there storing some global information(mainly variables and summaries) while the other part need to be changed/reorganized as program runs. </p>
<p>The way I do now is to reconstruct the whole graph in every iteration. But then, I have to store and load those information manually from/to checkpoint files or numpy arrays in every iteration, which makes my code really messy and error prone.</p>
<p>I wonder if there is a way to remove/modify part of my computation graph instead of reset the whole graph?</p>
|
<p>Changing the structure of TensorFlow graphs isn't really possible. Specifically, there isn't a clean way to remove nodes from a graph, so removing a subgraph and adding another isn't practical. (I've tried this, and it involves surgery on the internals. Ultimately, it's way more effort than it's worth, and you're asking for maintenance headaches.)</p>
<p>There are some workarounds.</p>
<p>Your reconstruction is one of them. You seem to have a pretty good handle on this method, so I won't harp on it, but for the benefit of anyone else who stumbles upon this, a very similar method is a filtered deep copy of the graph. That is, you iterate over the elements and add them in, predicated on some condition. This is most viable if the graph was given to you (i.e., you don't have the functions that built it in the first place) or if the changes are fairly minor. You still pay the price of rebuilding the graph, but sometimes loading and storing can be transparent. Given your scenario, though, this probably isn't a good match.</p>
<p>Another option is to recast the problem as a superset of all possible graphs you're trying to evaluate and rely on dataflow behavior. In other words, build a graph which includes every type of input you're feeding it and only ask for the outputs you need. Good signs this might work are: your network is parametric (perhaps you're just increasing/decreasing widths or layers), the changes are minor (maybe including/excluding inputs), and your operations can handle variable inputs (reductions across a dimension, for instance). In your case, if you have only a small, finite number of tree structures, this could work well. You'll probably just need to add some aggregation or renormalization for your global information.</p>
<p>A third option is to treat the networks as physically split. So instead of thinking of one network with mutable components, treat the boundaries between fixed and changing pieces as inputs and outputs of two separate networks. This does make some things harder: for instance, backprop across both is now ugly (which it sounds like might be a problem for you). But if you can avoid that, then two networks can work pretty well. It ends up feeling a lot like dealing with a separate pretraining phase, which you may already be comfortable with.</p>
<p>Most of these workarounds have a fairly narrow range of problems that they work for, so they might not help in your case. That said, you don't have to go all-or-nothing. If partially splitting the network or creating a supergraph for just some changes works, then it might be that you only have to worry about save/restore for a few cases, which may ease your troubles.</p>
<p>Hope this helps!</p>
|
neural-network|tensorflow
| 7
|
5,755
| 64,617,223
|
Using entrywise sum of boolean arrays as inclusive `or`
|
<p>I would like to compare many <em>m</em>-by-<em>n</em> boolean numpy arrays and get an array of the same shape whose entries are <code>True</code> if the corresponding entry in at least one of the inputs is <code>True</code>.</p>
<p>The easiest way I've found to do this is:</p>
<pre><code>In [5]: import numpy as np
In [6]: a = np.array([True, False, True])
In [7]: b = np.array([True, True, False])
In [8]: a + b
Out[8]: array([ True, True, True])
</code></pre>
<p>But I can also use</p>
<pre><code>In [11]: np.stack([a, b]).sum(axis=0) > 0
Out[11]: array([ True, True, True])
</code></pre>
<p>Are these equivalent operations? Are there any gotchas I should be aware of? Is one method preferable to the other?</p>
|
<p>You can use <code>np.logical_or</code></p>
<pre><code>a = np.array([True, False, True])
b = np.array([True, True, False])
np.logical_or(a,b)
</code></pre>
<p>it also works for <code>(m,n)</code> arrays</p>
<pre><code>a = np.random.rand(3,4) < 0.5
b = np.random.rand(3,4) < 0.5
print('a\n',a)
print('b\n',b)
np.logical_or(a,b)
</code></pre>
|
python|numpy
| 1
|
5,756
| 47,564,932
|
how to show all the x-axis label with panda/matplotlib
|
<p>I make a simple scraper for a company's stock price history. The problem I have is when I use matplotlib to make graph, most of the x-axis labels (date in this case) are missing. How can I force pandas/matplotlib to display all labels?</p>
<pre><code>import pandas as pd
import urllib.request
from bs4 import BeautifulSoup
import matplotlib.pyplot as plt
#open url and make soup
quote_url = "http://www.nasdaq.com/symbol/atnx/historical"
page = urllib.request.urlopen(quote_url)
soup = BeautifulSoup(page, 'html.parser')
#grab all the stock price data (except volume) for the last 2 weeks
trs = soup.find('div', attrs={'id': 'historicalContainer'})\
.find('tbody').find_all('tr')
temp_list = list()
for tr in trs:
for td in tr:
if td.string.strip():
temp_list.append(td.string.strip())
list_of_list = [temp_list[i:i+5] for i in range(0, len(temp_list), 6)]
#only take data from the last 2 weeks for the sake of simplicity
list_of_list = list_of_list[:14]
data_dict = {sublist[0]: [float(n.replace(',', '')) for n in sublist[1:len(sublist)]] for sublist in list_of_list}
#create a pandas DataFrame for stock prices
df = pd.DataFrame(data_dict)
df.rename({0: 'Open', 1: 'High', 2: 'Low', 3: 'Close/Last'}, inplace=True)
#Transpose dataframe
df= df.T
#plot with matplotlib (most x labels are missing here)
df.plot()
plt.show()
</code></pre>
<p>Thanks.</p>
|
<p>The formatting with <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html" rel="nofollow noreferrer">pandas.DataFrame.plot</a> is rarely perfect in my experience. The function returns the matplotlib Axes (or an array of them when <code>subplots=True</code>). You can then call <strong>set_xticks()</strong> and <strong>set_xticklabels()</strong> on that axis afterwards to show every label. </p>
<p>Pro tip: if the axis is dates, I would also recommend using <strong>fig.autofmt_xdate()</strong> to rotate and align the labels.</p>
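<p>A minimal sketch of both points, assuming <code>df</code> is the transposed frame from the question:</p>
<pre><code>fig, ax = plt.subplots()
df.plot(ax=ax)
ax.set_xticks(range(len(df.index)))   # one tick per row
ax.set_xticklabels(df.index)          # show every label
fig.autofmt_xdate()                   # rotate/align the date labels
plt.show()
</code></pre>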
|
python|pandas|matplotlib
| -1
|
5,757
| 49,288,651
|
Tensorflow: gradients are zero for LSTM and GradientDescentOptimizer
|
<p>Gradients which are computed by GradientDescentOptimizer for LSTM network are always zero. They are zero even on the first step, so, I think it is not vanishing gradient problem. The same issue happens for AdamOptimizer.</p>
<p>I have reduced input to one point of time series and label (expected output) to just next point with additional information for neural network to predict in hope to find the root cause on why gradients are zeros. Gradients are zeros even in this minimal setup.</p>
<p>I have read similar question but have not answer which could help me.</p>
<pre><code>import tensorflow as tf
import numpy as np
from tensorflow.contrib import rnn
def input_placeholder_sequence(input_placeholder, sequence_length, batch_size):
input_placeholder = tf.transpose(input_placeholder, name="transpose_input")
print("input_placeholder shape: " + str(input_placeholder.get_shape()))
input_placeholder = tf.split(input_placeholder, np.repeat(batch_size, sequence_length), axis=0)
print("input_placeholder_sequence shape: " + str(np.shape(input_placeholder)))
print("input_placeholder_in_sequence shape: " + str(input_placeholder[0].get_shape()))
return input_placeholder
def train_model(input_placeholder, sequence_length, batch_size, output_size):
input_placeholder = input_placeholder_sequence(input_placeholder, sequence_length, batch_size)
rnn_cell = rnn.BasicLSTMCell(output_size, name="hidden_layer")
hidden_outputs, states = rnn.static_rnn(rnn_cell, input_placeholder, dtype=tf.float32)
print("hidden_outputs shape: " + str(np.shape(hidden_outputs)))
print("hidden_outputs last shape: " + str(hidden_outputs[-1].get_shape()))
result = tf.concat(hidden_outputs, 0, name="concat")
print("result shape: " + str(result.get_shape()))
result = tf.transpose(result, name="transpose_result")
print("result shape transposed: " + str(result.get_shape()))
return result, rnn_cell, states
def main():
input = [[1448949600], [3], [0.70089], [0.70089], [0.70086], [0.70089], [0.70071], [0.70071], [0.7007], [0.70071]]
label = [[1448949660], [10], [0.70086], [0.7009], [0.70084], [0.70092], [0.7007], [0.70071], [0.70067], [0.70073], [0], [0], [0], [1]]
print("input shape: " + str(np.shape(input)))
print("label shape: " + str(np.shape(label)))
input_size = np.shape(input)[0]
output_size = np.shape(label)[0]
batch_size = 1
sequence_length = 1
learning_rate = 0.01
print("input_size: " + str(input_size))
print("output_size: " + str(output_size))
print("batch_size: " + str(batch_size))
print("sequence_length: " + str(sequence_length))
input_placeholder = tf.placeholder(tf.float32, (input_size, sequence_length * batch_size), "input")
prediction_operation, rnn_cell, states = train_model(input_placeholder, sequence_length, batch_size, output_size)
label_placeholder = tf.placeholder(tf.float32, (output_size, batch_size), "label")
global_step = tf.Variable(0, name='global_step',trainable=False)
cost = tf.norm(tf.subtract(prediction_operation, label_placeholder))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
gradients = optimizer.compute_gradients(cost)
minimizer = optimizer.apply_gradients(gradients, global_step)
init = tf.global_variables_initializer()
with tf.Session() as session:
session.run(init)
_, prediction, loss, grads, weights, gl_step \
= session.run([minimizer, prediction_operation, cost, gradients, rnn_cell.weights, global_step],
feed_dict={input_placeholder: input, label_placeholder: label})
print("loss: " + str(loss))
print("prediction: " + str(prediction))
print("rnn weights and biases: " + str(weights))
print("rnn weights and biases shape: " + str(np.shape(weights)))
print("rnn weights shape: " + str(np.shape(weights[0])))
print("rnn biases shape: " + str(np.shape(weights[1])))
print("rnn weights and biases sum: " + str(np.sum(np.abs(weights[0])) + np.sum(np.abs(weights[1]))))
print("gradients and variables: " + str(grads))
print("gradients weights: " + str(grads[0][0]))
print("gradients biases: " + str(grads[1][0]))
print("gradients sum: " + str(np.sum(np.abs(grads[0][0])) + np.sum(np.abs(grads[1][0]))))
print("global steps: " + str(gl_step))
if __name__ == "__main__":
main()
</code></pre>
|
<p>The implementation is correct. The huge unscaled timestamp in the input and label saturates the activations, which is what makes the gradients vanish to zero.</p>
<p>If input and label are as follows (with the timestamp removed), the gradients are no longer zero.</p>
<pre><code>input = [[3], [0.70089], [0.70089], [0.70086], [0.70089], [0.70071], [0.70071], [0.7007], [0.70071]]
label = [[10], [0.70086], [0.7009], [0.70084], [0.70092], [0.7007], [0.70071], [0.70067], [0.70073], [0], [0], [0], [1]]
</code></pre>
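<p>More generally, if you do want to keep the timestamp as a feature, scaling the inputs keeps the activations out of saturation; a minimal sketch (in practice you would compute the statistics over the whole training set):</p>
<pre><code>import numpy as np

raw = np.array(input, dtype=np.float32)
scaled = (raw - raw.mean()) / (raw.std() + 1e-8)  # zero-mean, unit-variance features
</code></pre>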
|
python|tensorflow|machine-learning|lstm|recurrent-neural-network
| 0
|
5,758
| 49,034,291
|
Integer multiplication result is negative
|
<p>My multiplication of two positive integers results in a negative value and thus i cant calculate the sqrt but get a <code>math domain error</code>.
My variables can be of size 10^10 and higher.</p>
<pre><code>sum = math.sqrt ( np.power( x , 2 ) * np.power( y , 2 ) )
</code></pre>
<p>Which dtype works for my needs or how else can i solve this?</p>
<p><strong>EDIT:</strong></p>
<p>The values are currently both (by accident the same) 59049. But as I said, they allready create the error and can get even higher. I can't give you a print because this is part of a calculation done in Django.</p>
<p><strong>EDIT 2:</strong></p>
<p>As correctly assumed in the comments the code should have been with <code>+</code> no <code>*</code></p>
<pre><code>d.sum = math.sqrt( np.power(d.active , 2) + np.power(d.passive , 2))
</code></pre>
<p>Some more background:
At one point of my Project I get a matrix and need to do the following calculations:</p>
<pre><code>test = np.arange(9).reshape(3,3)
m = test
matrix= m.dot(m).dot(m).dot(m).dot(m)
activearray = matrix.sum(axis=1)
passivearray = matrix.sum(axis=0)
for idx, Descriptor.id in enumerate(projectdescriptors):
d = Descriptor.objects.get(name=Descriptor.id)
d.active = activearray[idx]
d.passive = passivearray[idx]
d.sum = math.sqrt( np.power(d.active , 2) + np.power(d.passive , 2))
d.save()
</code></pre>
|
<p>As stated in the comments, your problem is that the result is overflowing and becoming negative because integer variables are used.</p>
<p>The solution is to convert the numbers to floats before the operations (do <code>x = x.astype(np.float32)</code>) and then carry out the arithmetic.</p>
<pre><code>import numpy as np
x=np.array([3,3E18])
y=np.array([4,4E18])
dtypes=[np.int64,np.float32]
for dt in dtypes:
x=x.astype(dt)
y=y.astype(dt)
z2=x**2+y**2
z=np.sqrt(z2)
print("Result with "+str(dt)+"=",z2,z)
</code></pre>
<p>Result:</p>
<pre><code>Result with <class 'numpy.int64'>= [ 25 -9051522149004607488] [ 5. nan]
Result with <class 'numpy.float32'>= [2.5e+01 2.5e+37] [5.e+00 5.e+18]
</code></pre>
|
python|python-3.x|numpy
| 0
|
5,759
| 70,036,839
|
How to use "load_model" in Keras after trained using as metric "tf.keras.metrics.AUC"?
|
<p>I trained using that metric in <code>model.fit()</code>. When I try to load the model using "load_model", it doesn't recognize the metric AUC, so I add it via <code>custom_objects={"auc":AUC}</code>.</p>
<p>I get this error:</p>
<blockquote>
<p>ValueError: Unknown metric function: {'class_name': 'AUC' ...
returning all the thresholds used and more information about the metric.</p>
</blockquote>
<p>The code:</p>
<pre><code>model.compile(..., metrics=["accuracy", AUC(name="auc", curve="PR")])
load_model(checkpoint, custom_objects={"auc":AUC(name="auc")})
</code></pre>
|
<p>I found the answer. You can load the model without compiling it, so the custom metric does not have to be resolved at load time. Just set <code>compile=False</code> like this:</p>
<pre><code>model.compile(..., metrics=["accuracy", AUC(name="auc", curve="PR")])
load_model(checkpoint, custom_objects={"auc": AUC(name="auc")}, compile=False)
</code></pre>
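<p>If you then want to evaluate or keep training the loaded model, re-compile it after loading; a sketch (the optimizer and loss here are placeholders):</p>
<pre><code>model = load_model(checkpoint, compile=False)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", AUC(name="auc", curve="PR")])
</code></pre>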
|
python|tensorflow|keras|metrics|auc
| 0
|
5,760
| 56,131,080
|
How to save a part of a network?
|
<p>I have made an autoencoder, consisting of an encoder and a decoder part.
I have managed to get the encoder separated from the full network, but I have some troubles with the decoder part. </p>
<p>This part works:</p>
<pre><code>encoder = tf.keras.Model(inputs=autoencoder.input, outputs=autoencoder.layers[5].output)
</code></pre>
<p>This part however doesn't:</p>
<pre><code>decoder = tf.keras.Model(inputs=autoencoder.layers[6].input, outputs=autoencoder.output)
</code></pre>
<p>the error:</p>
<blockquote>
<p>W0514 14:57:48.965506 78976 network.py:1619] Model inputs must come from <code>tf.keras.Input</code> (thus holding past layer metadata), they cannot be the output of a previous non-Input layer. Here, a tensor specified as input to "model_15" was not an Input tensor, it was generated by layer flatten.
Note that input tensors are instantiated via <code>tensor = tf.keras.Input(shape)</code>.
The tensor that caused the issue was: flatten/Reshape:0</p>
</blockquote>
<p>any ideas what to try?</p>
<p>thanks</p>
<p>/mikael</p>
<p>EDIT:
for kruxx</p>
<pre><code>autoencoder = tf.keras.models.Sequential()
# Encoder Layers
autoencoder.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu', padding='same', input_shape=x_train_tensor.shape[1:]))
autoencoder.add(tf.keras.layers.MaxPooling2D((2, 2), padding='same'))
autoencoder.add(tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'))
autoencoder.add(tf.keras.layers.MaxPooling2D((2, 2), padding='same'))
autoencoder.add(tf.keras.layers.Conv2D(8, (3, 3), strides=(2,2), activation='relu', padding='same'))
# Flatten encoding for visualization
autoencoder.add(tf.keras.layers.Flatten())
autoencoder.add(tf.keras.layers.Reshape((4, 4, 8)))
# Decoder Layers
autoencoder.add(tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'))
autoencoder.add(tf.keras.layers.UpSampling2D((2, 2)))
autoencoder.add(tf.keras.layers.Conv2D(8, (3, 3), activation='relu', padding='same'))
autoencoder.add(tf.keras.layers.UpSampling2D((2, 2)))
autoencoder.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu'))
autoencoder.add(tf.keras.layers.UpSampling2D((2, 2)))
autoencoder.add(tf.keras.layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same'))
</code></pre>
<pre class="lang-sh prettyprint-override"><code>> Model: "sequential"
> _________________________________________________________________
> Layer (type).................Output Shape..............Param #
> =================================================================
> conv2d (Conv2D)..............(None, 28, 28, 16)........160
> _________________________________________________________________
> max_pooling2d (MaxPooling2D).(None, 14, 14, 16)........0
> _________________________________________________________________
> conv2d_1 (Conv2D)............(None, 14, 14, 8).........1160
> _________________________________________________________________
> max_pooling2d_1 (MaxPooling2.(None, 7, 7, 8)...........0
> _________________________________________________________________
> conv2d_2 (Conv2D)............(None, 4, 4, 8)...........584
> _________________________________________________________________
> flatten (Flatten)............(None, 128)...............0
> _________________________________________________________________
> reshape (Reshape)............(None, 4, 4, 8)...........0
> _________________________________________________________________
> conv2d_3 (Conv2D)............(None, 4, 4, 8)...........584
> _________________________________________________________________
> up_sampling2d (UpSampling2D).(None, 8, 8, 8)...........0
> _________________________________________________________________
> conv2d_4 (Conv2D)............(None, 8, 8, 8)...........584
> _________________________________________________________________
> up_sampling2d_1 (UpSampling2 (None, 16, 16, 8).........0
> _________________________________________________________________
> conv2d_5 (Conv2D)............(None, 14, 14, 16)........1168
> _________________________________________________________________
> up_sampling2d_2 (UpSampling2.(None, 28, 28, 16)........0
> _________________________________________________________________
> conv2d_6 (Conv2D)............(None, 28, 28, 1).........145
> =================================================================
> Total params: 4,385
> Trainable params: 4,385
> Non-trainable params: 0
> ______________________________________
</code></pre>
|
<p>I would approach the problem the other way:</p>
<pre class="lang-py prettyprint-override"><code># Encoder model:
encoder_input = Input(...)
# Encoder Hidden Layers
encoded = Dense()(...)
encoder_model = Model(inputs=[encoder_input], outputs=encoded)
# Decoder model:
decoder_input = Input(...)
# Decoder Hidden Layers
decoded = Dense()(...)
decoder_model = Model(inputs=[decoder_input], outputs=decoded)
</code></pre>
<p>And then the autoencoder could be defined as:</p>
<pre class="lang-py prettyprint-override"><code>autoencoder = Model(inputs=[encoder_input], output=decoder_model(encoder_model))
</code></pre>
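<p>After fitting the combined model, the two halves can then be used (and saved) on their own; a minimal usage sketch, assuming <code>x</code> is your input data:</p>
<pre class="lang-py prettyprint-override"><code>codes = encoder_model.predict(x)
recon = decoder_model.predict(codes)
encoder_model.save('encoder.h5')
decoder_model.save('decoder.h5')
</code></pre>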
|
tensorflow|keras
| 0
|
5,761
| 55,957,147
|
Reorder columns in groups by number embedded in column name?
|
<p>I have a very large dataframe with 1,000 columns. The first few columns occur only once, denoting a customer. The next few columns are representative of multiple encounters with the customer, with an underscore and the number encounter. Every additional encounter adds a new column, so there is NOT a fixed number of columns -- it'll grow with time.</p>
<p>Sample dataframe header structure excerpt:</p>
<pre><code>id dob gender pro_1 pro_10 pro_11 pro_2 ... pro_9 pre_1 pre_10 ...
</code></pre>
<p>I'm trying to re-order the columns based on the number after the column name, so all _1 should be together, all _2 should be together, etc, like so:</p>
<pre><code>id dob gender pro_1 pre_1 que_1 fre_1 gen_1 pro2 pre_2 que_2 fre_2 ...
</code></pre>
<p>(Note that the re-order should order the numbers correctly; the current order treats them like strings, which orders 1, 10, 11, etc. rather than 1, 2, 3)</p>
<p>Is this possible to do in pandas, or should I be looking at something else? Any help would be greatly appreciated! Thank you!</p>
<p>EDIT:</p>
<p>Alternatively, is it also possible to re-arrange column names based on the string part AND number part of the column names? So the output would then look similar to the original, except the numbers would be considered so that the order is more intuitive:</p>
<pre><code>id dob gender pro_1 pro_2 pro_3 ... pre_1 pre_2 pre_3 ...
</code></pre>
<p>EDIT 2.0:</p>
<p>Just wanted to thank everyone for helping! While only one of the responses worked, I really appreciate the effort and learned a lot about other approaches / ways to think about this.</p>
|
<p>You need to split your column names on '_' and convert the numeric part to int:</p>
<pre><code>c = ['A_1','A_10','A_2','A_3','B_1','B_10','B_2','B_3']
df = pd.DataFrame(np.random.randint(0,100,(2,8)), columns = c)
df.reindex(sorted(df.columns, key = lambda x: int(x.split('_')[1])), axis=1)
</code></pre>
<p>Output:</p>
<pre><code> A_1 B_1 A_2 B_2 A_3 B_3 A_10 B_10
0 68 11 59 69 37 68 76 17
1 19 37 52 54 23 93 85 3
</code></pre>
<p>For the second case (sorting by the string part and then by the number part), you need <a href="https://stackoverflow.com/a/5967539/6361531">human sorting</a>:</p>
<pre><code>import re
def atoi(text):
return int(text) if text.isdigit() else text
def natural_keys(text):
'''
alist.sort(key=natural_keys) sorts in human order
http://nedbatchelder.com/blog/200712/human_sorting.html
(See Toothy's implementation in the comments)
'''
return [ atoi(c) for c in re.split(r'(\d+)', text) ]
df.reindex(sorted(df.columns, key = lambda x:natural_keys(x)), axis=1)
</code></pre>
<p>Output:</p>
<pre><code> A_1 A_2 A_3 A_10 B_1 B_2 B_3 B_10
0 68 59 37 76 11 69 68 17
1 19 52 23 85 37 54 93 3
</code></pre>
|
python-3.x|pandas
| 1
|
5,762
| 64,651,866
|
Create DataFrame from list of dicts in Pandas series
|
<p>I have a pandas series with string data structured like this for each "row":</p>
<pre><code>["[{'id': 240, 'name': 'travolta'}, {'id': 378, 'name': 'suleimani'}, {'id': 730, 'name': 'pearson'}, {'id': 1563, 'name': 'googenhaim'}, {'id': 1787, 'name': 'al_munir'}, {'id': 10183, 'name': 'googenhaim'}, {'id': 13072, 'name': 'vodkin'}]"]
</code></pre>
<p>When I use a standard solutions to get a DataFrame I got:</p>
<pre><code>> 0 [{'id': 240, 'name': 'travolta'}, {'id': 378, ...
> 1 [{'id': 240, m'name': 'suleimani'}, {'id': 378,...
</code></pre>
<p>How to make an explicit DataFrame with columns named by dict keys?</p>
|
<p>You can use json module to load that structure:</p>
<pre><code>import json
data = ["[{'id': 240, 'name': 'travolta'}, {'id': 378, 'name': 'suleimani'}, {'id': 730, 'name': 'pearson'}, {'id': 1563, 'name': 'googenhaim'}, {'id': 1787, 'name': 'al_munir'}, {'id': 10183, 'name': 'googenhaim'}, {'id': 13072, 'name': 'vodkin'}]"]
data = ''.join(data).replace('\'', '"')
data = json.loads(data)
df = pd.DataFrame(data)
#print result df
# id name
0 240 travolta
1 378 suleimani
2 730 pearson
3 1563 googenhaim
4 1787 al_munir
</code></pre>
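<p>If the strings may themselves contain quotes, <code>ast.literal_eval</code> can parse the single-quoted Python literals directly, avoiding the quote replacement; a sketch:</p>
<pre><code>import ast
import pandas as pd

s = pd.Series(["[{'id': 240, 'name': 'travolta'}, {'id': 378, 'name': 'suleimani'}]"])
df = pd.DataFrame(s.map(ast.literal_eval).explode().tolist())
</code></pre>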
|
python|pandas
| 2
|
5,763
| 65,013,199
|
StopIteration error while trying to build data input for a model
|
<pre><code>from __future__ import print_function
import tensorflow as tf
import os
#Dataset Parameters - CHANGE HERE
MODE = 'folder' # or 'file', if you choose a plain text file (see above).
DATASET_PATH = "D:\\Downloads\\Work\\" # the dataset file or root folder path.
# Image Parameters
N_CLASSES = 7 # CHANGE HERE, total number of classes
IMG_HEIGHT = 64 # CHANGE HERE, the image height to be resized to
IMG_WIDTH = 64 # CHANGE HERE, the image width to be resized to
CHANNELS = 3 # The 3 color channels, change to 1 if grayscale
# Reading the dataset
# 2 modes: 'file' or 'folder'
def read_images(dataset_path, mode, batch_size):
imagepaths, labels = list(), list()
if mode == 'file':
# Read dataset file
data = open(dataset_path, 'r').read().splitlines()
for d in data:
imagepaths.append(d.split(' ')[0])
labels.append(int(d.split(' ')[1]))
elif mode == 'folder':
# An ID will be affected to each sub-folders by alphabetical order
label = 0
# List the directory
#try: # Python 2
classes = next(os.walk(dataset_path))[1]
#except Exception: # Python 3
# classes = sorted(os.walk(dataset_path).__next__()[1])
# List each sub-directory (the classes)
for c in classes:
c_dir = os.path.join(dataset_path, c)
try: # Python 2
walk = os.walk(c_dir).next()
except Exception: # Python 3
walk = os.walk(c_dir).__next__()
# Add each image to the training set
for sample in walk[2]:
# Only keeps jpeg images
if sample.endswith('.bmp'):
imagepaths.append(os.path.join(c_dir, sample))
labels.append(label)
label += 1
else:
raise Exception("Unknown mode.")
# Convert to Tensor
imagepaths = tf.convert_to_tensor(imagepaths, dtype=tf.string)
labels = tf.convert_to_tensor(labels, dtype=tf.int32)
# Build a TF Queue, shuffle data
image, label = tf.train.slice_input_producer([imagepaths, labels],
shuffle=True)
# Read images from disk
image = tf.read_file(image)
image = tf.image.decode_jpeg(image, channels=CHANNELS)
# Resize images to a common size
image = tf.image.resize_images(image, [IMG_HEIGHT, IMG_WIDTH])
# Normalize
image = image * 1.0/127.5 - 1.0
# Create batches
X, Y = tf.train.batch([image, label], batch_size=batch_size,
capacity=batch_size * 8,
num_threads=4)
return X, Y
# Parameters
learning_rate = 0.001
num_steps = 10000
batch_size = 32
display_step = 100
# Network Parameters
dropout = 0.75 # Dropout, probability to keep units
# Build the data input
X, Y = read_images(DATASET_PATH, MODE, batch_size)
</code></pre>
<p>Gives an error</p>
<pre><code>StopIteration Traceback (most recent call last)
<ipython-input-27-510f945ab86c> in <module>()
9
10 # Build the data input
---> 11 X, Y = read_images(DATASET_PATH, MODE, batch_size)
<ipython-input-26-c715e653cf59> in read_images(dataset_path, mode, batch_size)
14 # List the directory
15 #try: # Python 2
---> 16 classes = next(os.walk(dataset_path))[1]
17 #except Exception: # Python 3
18 # classes = sorted(os.walk(dataset_path).__next__()[1])
StopIteration:
</code></pre>
<p>I saw in the documentation for next() that you can no longer call it as .next, but after correcting that it still gives me a StopIteration error.
I checked the value of <em>classes</em> on my local Python and it gives me a list ['Class0', 'Class1', 'Class2', 'Class3', 'Class4', 'Class5', 'Class6']</p>
|
<p><code>StopIteration</code> means the iterable is empty, you also get it in a case like this:</p>
<pre class="lang-py prettyprint-override"><code>>>> next(iter([]))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
</code></pre>
<p>Most likely the path you provided does not exist.</p>
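<p>A quick sanity check before walking the directory (a sketch):</p>
<pre class="lang-py prettyprint-override"><code>import os

assert os.path.isdir(DATASET_PATH), "Not an existing directory: " + DATASET_PATH
</code></pre>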
|
python|tensorflow|stopiteration
| 0
|
5,764
| 64,747,422
|
iterate on non zero columns to get cost
|
<p>I have data with weekly sale quantity, amount and cost, and I want to find the cost price for each product row by dividing the weekly quantity sold with the cost. However, it is possible that the latest week has zero values, so I wish to skip it if it has zero values and use the previous week to calculate the cost, going back until non-zero values are found, and then compute the item cost (wkx_cost/wkx_amount). Also note that the product price may have changed over the weeks, so I need the cost from the latest week, but if that is not available, try calculating the item cost price from a previous week.</p>
<pre><code> df2 = pd.DataFrame([
{'product':'iphone11', 'wk1_qty':2, 'wk1_amount':100,
'wk1_cost':60, 'wk2_qty':3, 'wk2_amount':150,
'wk2_cost':90, 'wk3_qty':0, 'wk3_amount':0,
'wk3_cost':0, 'wk4_qty':5, 'wk4_amount':300,
'wk4_cost':60, 'wk5_qty':0, 'wk5_amount':0,
'wk5_cost':0}, {'product':'acer laptop', 'wk1_qty':3, 'wk1_amount':300,
'wk1_cost':210, 'wk2_qty':3, 'wk2_amount':300,
'wk2_cost':210, 'wk3_qty':0, 'wk3_amount':0,
'wk3_cost':0, 'wk4_qty':5, 'wk4_amount':550,
'wk4_cost':375, 'wk5_qty':5, 'wk5_amount':500,
'wk5_cost':375}])
</code></pre>
<p>What result should look like</p>
<pre><code> df2 = pd.DataFrame([
{'product':'iphone11', 'wk1_qty':2, 'wk1_amount':100,
'wk1_cost':60, 'wk2_qty':3, 'wk2_amount':150,
'wk2_cost':90, 'wk3_qty':0, 'wk3_amount':0,
'wk3_cost':0, 'wk4_qty':5, 'wk4_amount':300,
'wk4_cost':160, 'wk5_qty':0, 'wk5_amount':0,
'wk5_cost':0, 'product_price':32}, {'product':'acer laptop', 'wk1_qty':3, 'wk1_amount':300,
'wk1_cost':210, 'wk2_qty':3, 'wk2_amount':300,
'wk2_cost':210, 'wk3_qty':0, 'wk3_amount':0,
'wk3_cost':0, 'wk4_qty':5, 'wk4_amount':550,
'wk4_cost':375, 'wk5_qty':5, 'wk5_amount':500,
'wk5_cost':375, 'product_price':75}])
</code></pre>
|
<p>The problem arises when there is a division by zero. To short-circuit that (or deal with it), you could try this:</p>
<pre><code>try:
Product_price = wk_cost/wk_qty
except ZeroDivisionError:
Product_price = 0
</code></pre>
|
python|pandas
| 0
|
5,765
| 39,948,935
|
Multiplying pandas dataframe and series, element wise
|
<p>Lets say I have a pandas series:</p>
<pre><code>import pandas as pd
x = pd.DataFrame({0: [1,2,3], 1: [4,5,6], 2: [7,8,9] })
y = pd.Series([-1, 1, -1])
</code></pre>
<p>I want to multiply x and y in such a way that I get z:</p>
<pre><code>z = pd.DataFrame({0: [-1,2,-3], 1: [-4,5,-6], 2: [-7,8,-9] })
</code></pre>
<p>In other words, if element j of the series is -1, then all elements of the j-th row of x get multiplied by -1. If element k of the series is 1, then all elements of the j-th row of x get multiplied by 1. </p>
<p>How do I do this?</p>
|
<p>You can do that:</p>
<pre><code>>>> new_x = x.mul(y, axis=0)
>>> new_x
0 1 2
0 -1 -4 -7
1 2 5 8
2 -3 -6 -9
</code></pre>
|
python|pandas
| 11
|
5,766
| 40,002,760
|
Why is my data not recognized as time series?
|
<p>I have daily (<code>day</code>) data on calories intake for one person (<code>cal2</code>), which I get from a Stata <code>dta</code> file. </p>
<p>I run the code below:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pylab as plt
from pandas import read_csv
from matplotlib.pylab import rcParams
d = pd.read_stata('time_series_calories.dta', preserve_dtypes=True,
index = 'day', convert_dates=True)
print(d.dtypes)
print(d.shape)
print(d.index)
print(d.head)
plt.plot(d)
</code></pre>
<p>This is how the data looks like:</p>
<pre><code>0 2002-01-10 3668.433350
1 2002-01-11 3652.249756
2 2002-01-12 3647.866211
3 2002-01-13 3646.684326
4 2002-01-14 3661.941406
5 2002-01-15 3656.951660
</code></pre>
<p>The prints reveal the following:</p>
<pre><code>day datetime64[ns]
cal2 float32
dtype: object
(251, 2)
Int64Index([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
...
241, 242, 243, 244, 245, 246, 247, 248, 249, 250],
dtype='int64', length=251)
</code></pre>
<p>And here is the problem - the data should identify as <code>dtype='datatime64[ns]'</code>.</p>
<p>However, it clearly does not. Why not?</p>
|
<p>There is a discrepancy between the code provided, the data and the types shown.
This is because irrespective of the type of <code>cal2</code>, the <code>index = 'day'</code> argument
in <code>pd.read_stata()</code> should always render <code>day</code> the index, albeit not as the
desired type.</p>
<p>With that said, the problem can be reproduced as follows.</p>
<p>First, create the dataset in Stata:</p>
<pre><code>clear
input double day float cal2
15350 3668.433
15351 3652.25
15352 3647.866
15353 3646.684
15354 3661.9414
15355 3656.952
end
format %td day
save time_series_calories
</code></pre>
<p></p>
<pre><code>describe
Contains data from time_series_calories.dta
obs: 6
vars: 2
size: 72
----------------------------------------------------------------------------------------------------
storage display value
variable name type format label variable label
----------------------------------------------------------------------------------------------------
day double %td
cal2 float %9.0g
----------------------------------------------------------------------------------------------------
Sorted by:
</code></pre>
<p>Second, load the data in Pandas:</p>
<pre><code>import pandas as pd
d = pd.read_stata('time_series_calories.dta', preserve_dtypes=True, convert_dates=True)
</code></pre>
<p></p>
<pre><code>print(d.head)
day cal2
0 2002-01-10 3668.433350
1 2002-01-11 3652.249756
2 2002-01-12 3647.866211
3 2002-01-13 3646.684326
4 2002-01-14 3661.941406
5 2002-01-15 3656.951660
print(d.dtypes)
day datetime64[ns]
cal2 float32
dtype: object
print(d.shape)
(6, 2)
print(d.index)
Int64Index([0, 1, 2, 3, 4, 5], dtype='int64')
</code></pre>
<p>In order to change the index as desired, you can use <code>pd.set_index()</code>:</p>
<pre><code>d = d.set_index('day')
print(d.head)
cal2
day
2002-01-10 3668.433350
2002-01-11 3652.249756
2002-01-12 3647.866211
2002-01-13 3646.684326
2002-01-14 3661.941406
2002-01-15 3656.951660
print(d.index)
DatetimeIndex(['2002-01-10', '2002-01-11', '2002-01-12', '2002-01-13',
'2002-01-14', '2002-01-15'],
dtype='datetime64[ns]', name='day', freq=None)
</code></pre>
<p>If <code>day</code> is a string in the Stata dataset, then you can do the following: </p>
<pre><code>d['day'] = pd.to_datetime(d.day)
d = d.set_index('day')
</code></pre>
|
python-3.x|pandas|import|time-series|stata
| 0
|
5,767
| 39,504,885
|
Estimate memory requirements for tensorflow model
|
<p>How can I estimate the memory requirements of my tensorflow model? Should the below give a somewhat accurate representation?</p>
<pre><code>size = 0
for variable in tf.all_variables():
size += int(np.prod(variable.get_shape()))
print(size)
</code></pre>
<p><code>size</code> should be the number of variables. Should <code>size * dtype</code> then be an estimate of memory requirements?</p>
|
<p>No, you also have to account for your other tensors (e.g. <code>tf.placeholder</code> and <code>tf.constant</code>), and you should also have room for the gradients, as I believe a bunch of values are cached during the forward pass so that backprop doesn't become too slow.</p>
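<p>A rough sketch that also accounts for the element size in bytes (variables only; activations, gradients and other tensors would still come on top of this):</p>
<pre><code>import numpy as np
import tensorflow as tf

size_bytes = 0
for variable in tf.all_variables():
    n_elements = int(np.prod(variable.get_shape()))
    itemsize = np.dtype(variable.dtype.base_dtype.as_numpy_dtype).itemsize
    size_bytes += n_elements * itemsize
print(size_bytes)
</code></pre>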
|
python|tensorflow
| 0
|
5,768
| 43,935,336
|
How to rename file and copy it with the new name in a new folder (Python)?
|
<pre><code>import numpy as np
import os
import matplotlib.pyplot as plt
# Loading Data
Srcpath ='/mnt/SrcFolder'
Destpath='/mnt/DestFolder'
traces= os.listdir(path)
with open(File_Sensitive_Value, 'wb') as fp:
for trace in traces:
Plaintext = Extract_Plaintext(trace)
print(Plaintext)
Ciphertext= Extract_Ciphertext(trace)
Key= Extract_Key(trace)
print(Key)
filepath = os.path.join(path, trace)
dataArray= np.load(filepath)
# sbox is the function used to obtain the correct sensitive value
Sensitive_Value = (sbox[int(Plaintext[0:2],16) ^ int(Key[0:2],16)])
print ("Plaintext=", int(Plaintext[0:2],16))
print("Key=", int(Key[0:2],16))
print ("Sensitive_Value=", Sensitive_Value)
os.rename(os.path.join(path,trace), os.path.join(path_Sensitive_Value, trace[:-4]+'_'+'SenVal='+str('{:03}'.format(Sensitive_Value))+'.npy'))
</code></pre>
<p>After finding the new name of each file in my Src folder, I need to copy each file under its new name into the Dest folder, while of course keeping the original file names in the Src folder. My current code ends up removing all the original files. How can I resolve this problem, please? </p>
|
<p>This is the structure I use when I need to COPY files in bulk:</p>
<pre><code>import os
import shutil
shutil.copy("OriginalFileAddress", "NewFileAddress")
print("Files moved!")
</code></pre>
<p>If you need to copy the metadata for the files, then use</p>
<pre><code>shutil.copy2(src, dst)
</code></pre>
<p>instead of .copy as above.</p>
<p>If you need to iterate over the files to be copied, see the sketch below.
I hope this helps.</p>
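<p>Applied to the question, the rename-in-place can be replaced by copying each trace into the destination folder under its new name; a sketch reusing the names from your script:</p>
<pre><code>new_name = trace[:-4] + '_SenVal=' + '{:03}'.format(Sensitive_Value) + '.npy'
shutil.copy2(os.path.join(Srcpath, trace), os.path.join(Destpath, new_name))
</code></pre>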
|
python|numpy
| 0
|
5,769
| 69,340,680
|
Pandas Long to Wide for Categorical Dataframe
|
<p>Usually when we want to transform a dataframe long to wide in Pandas, we use <em>pivot</em> or <em>pivot_table</em>, or <em>unstack</em>, or <em>groupby</em>, but that works well when there are aggregatable elements. How do we transform in the same manner a categorical dataframe?</p>
<p>Example:</p>
<pre><code>d = {'Fruit':['Apple', 'Apple', 'Apple', 'Kiwi'],
'Color1':['Red', 'Yellow', 'Red', 'Green'],
'Color2':['Red', 'Red', 'Green', 'Brown'],'Color3':[np.nan,np.nan,'Red',np.nan]}
pd.DataFrame(d)
Fruit Color1 Color2 Color3
0 Apple Red Red NaN
1 Apple Yellow Red NaN
2 Apple Red Green Red
3 Kiwi Green Brown NaN
</code></pre>
<p>Should become something like this:</p>
<pre><code>d = {'Fruit':['Apple','Kiwi'],
'Color1':['Red','Green'],
'Color1_1':['Yellow',np.nan],
'Color1_2':['Red',np.nan],
'Color2':['Red', 'Brown'],
'Color2_1':['Red',np.nan],
'Color2_2':['Green',np.nan],
'Color3':[np.nan,np.nan],
'Color3_1':[np.nan,np.nan],
'Color3_2':['Red',np.nan]
}
pd.DataFrame(d)
Fruit Color1 Color1_1 Color1_2 Color2 Color2_1 Color2_2 Color3 Color3_1 Color3_2
0 Apple Red Yellow Red Red Red Green NaN NaN Red
1 Kiwi Green NaN NaN Brown NaN NaN NaN NaN NaN
</code></pre>
|
<p>Try <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>cumcount</code></a> with <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> to get the counts, then <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a> on it as the columns, then set the column names, with:</p>
<pre><code>df = df.assign(idx=df.groupby('Fruit').cumcount()).pivot(index='Fruit',columns='idx')
print(df.set_axis([f'{x}_{y}' if y != 0 else x for x, y in df.columns], axis=1).reset_index())
</code></pre>
<p>Output:</p>
<pre><code> Fruit Color1 Color1_1 Color1_2 Color2 Color2_1 Color2_2 Color3 Color3_1 Color3_2
0 Apple Red Yellow Red Red Red Green NaN NaN Red
1 Kiwi Green NaN NaN Brown NaN NaN NaN NaN NaN
</code></pre>
<p>Matches your output exactly.</p>
|
python|pandas
| 4
|
5,770
| 69,512,802
|
replace nan with random value between the existing max and min values in its column in pandas
|
<p>I have missing data and would like to replace the NaN's with random values from between the existing min and max for that column (different filled values for each NaN). I have been trying things like <strong>the below</strong> but it doesn't work and I am not sure how to loop through the columns correctly as the min max will change for each column.</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import numpy as np
import pandas as pd
def fill_blanks(df):
for i in list(df):
for x in i:
if type(x) is datetime.datetime:
return x
continue
if pd.isnull(x):
#print (i,x)
x=(np.random.uniform(df[i].min(), df[i].max()))
return x
else:
return x
df.applymap(fill_blanks)
</code></pre>
<p>example data</p>
<pre><code>d = {'Date': ['2015-09-01 09:00:00', '2015-09-02 09:00:00','2015-09-03 09:00:00','2015-09-01 09:00:00',], 'col2': [np.nan, 102,np.nan,105],'col3': [1, np.nan,3,2.5,],'col4': [0.0001, 0.0002,np.nan,0.0003]}
df = pd.DataFrame(data=d)
df
</code></pre>
<p>gives</p>
<pre><code>Out[5]:
Date col2 col3 col4
0 2015-09-01 09:00:00 NaN 1.0 0.0001
1 2015-09-02 09:00:00 102.0 NaN 0.0002
2 2015-09-03 09:00:00 NaN 3.0 NaN
3 2015-09-01 09:00:00 105.0 2.5 0.0003
</code></pre>
<p>desired output might be:</p>
<pre><code>Out[5]:
Date col2 col3 col4
0 2015-09-01 09:00:00 102.5 1.0 0.0001
1 2015-09-02 09:00:00 102.0 2.0 0.0002
2 2015-09-03 09:00:00 104.5 3.0 0.0002
3 2015-09-01 09:00:00 105.0 2.5 0.0003
</code></pre>
|
<p>You can use:</p>
<pre><code>numeric_cols = df.select_dtypes([np.number]).columns
df[numeric_cols] = df[numeric_cols].apply(lambda x: x.fillna(np.random.uniform(x.min(), x.max(), 1)[0]))
</code></pre>
<p>Output:</p>
<pre><code> Date col2 col3 col4
0 2015-09-01 09:00:00 100.00000 1.000000 0.000100
1 2015-09-02 09:00:00 102.00000 1.435334 0.000200
2 2015-09-03 09:00:00 103.97625 3.000000 0.962672
3 2015-09-01 09:00:00 105.00000 2.500000 0.000300
</code></pre>
<p>If you want every nan in column to be filled with different random value, use:</p>
<pre><code>df[numeric_cols] = df[numeric_cols].apply(lambda x: x.fillna(pd.Series(np.random.uniform(x.min(), x.max(), len(x)))))
</code></pre>
|
pandas
| 1
|
5,771
| 69,489,255
|
How to do Pandas stacked bar chart on number line instead of categories
|
<p>I am trying to make a stacked bar chart where the x-axis is based on a regular number line instead of categories. Maybe bar chart is not the right term?</p>
<p>How can I make the stacked bars, but have the x number line be spaced "normally" (with a big relative gap between 5.0 and 10.6)? I also want to set a regular tick interval, instead of having every bar labeled. (The real dataset is dense but with some spurious gaps, and I want to use the bar colors to qualitatively show changes as a function of x.)</p>
<pre><code>fid = ["name", "name", "name", "name", "name"]
x = [1.02, 1.3, 2, 5, 10.6]
y1 = [0, 1, 0.2, 0.6, 0.1]
y2 = [0.3, 0, 0.1, 0.1, 0.4]
y3 = [0.7, 0, 0.7, 0.3, 0.5]
df = pd.DataFrame(data=zip(fid, x, y1, y2, y3), columns=["fid", "x", "y1", "y2", "y3"])
fig, ax = plt.subplots()
df.plot.bar(x="x", stacked=True, ax=ax)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2)
</code></pre>
<p><a href="https://i.stack.imgur.com/p8FOo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p8FOo.png" alt="enter image description here" /></a></p>
|
<p>In a matplotlib bar chart, the <code>x</code> values are treated as categorical data, so matplotlib always plots it along <code>range(0, ...)</code> and relabels the ticks with the <code>x</code> values.</p>
<p>To scale the bar distances, <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>reindex</code></a> the <code>x</code> values to have filler rows between the real data points:</p>
<pre><code>start, stop = 0, 16
xstep = 0.01
tickstep = 2
xfill = np.round(np.arange(start, stop + xstep, xstep), 2)
out = df.set_index("x").reindex(xfill).reset_index()
ax = out.plot.bar(x="x", stacked=True, width=20, figsize=(10, 3))
xticklabels = np.arange(start, stop+tickstep, tickstep).astype(float)
xticks = out.index[out.x.isin(xticklabels)]
ax.set_xticks(xticks)
ax.set_xticklabels(xticklabels)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2)
</code></pre>
<p><a href="https://i.stack.imgur.com/PoKSW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PoKSW.png" alt="bar plot on number line" /></a></p>
<hr />
<h3>Details</h3>
<ol>
<li><p>Generate the <code>xfill</code> as <code>[0, 0.01, 0.02, ...]</code>. I've tried to make this portable by extracting the max number of decimals from <code>x</code>, but float precision is always tricky so this may need to be tweaked:</p>
<pre><code>decimals = df.x.astype(str).str.split(".").str[-1].str.len().max()
xstep = 10.0 ** -decimals
start = 0
stop = 16
xfill = np.round(np.arange(start, stop + xstep, xstep), decimals)
# array([ 0. , 0.01, 0.02, 0.03, 0.04, 0.05, ...])
</code></pre>
</li>
<li><p><a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>reindex</code></a> the <code>x</code> column against this new <code>xfill</code>, so the filler rows will be NaN:</p>
<pre><code>out = df.set_index("x").reindex(xfill).reset_index()
# x fid y1 y2 y3
# 0.00 NaN NaN NaN NaN
# ... ... ... ... ...
# 1.01 NaN NaN NaN NaN
# 1.02 name 0.0 0.3 0.7
# 1.03 NaN NaN NaN NaN
# ... ... ... ... ...
# 1.29 NaN NaN NaN NaN
# 1.30 name 1.0 0.0 0.0
# 1.31 NaN NaN NaN NaN
# ... ... ... ... ...
# 1.99 NaN NaN NaN NaN
# 2.00 name 0.2 0.1 0.7
# 2.01 NaN NaN NaN NaN
# ... ... ... ... ...
# 4.99 NaN NaN NaN NaN
# 5.00 name 0.6 0.1 0.3
# 5.01 NaN NaN NaN NaN
# ... ... ... ... ...
# 10.59 NaN NaN NaN NaN
# 10.60 name 0.1 0.4 0.5
# 10.61 NaN NaN NaN NaN
# ... ... ... ... ...
# 16.00 NaN NaN NaN NaN
</code></pre>
</li>
<li><p>Plot the reindexed data (with <code>xticks</code> spaced apart by <code>tickstep</code>):</p>
<pre><code>ax = out.plot.bar(x="x", stacked=True, width=20, figsize=(10, 3))
tickstep = 2
xticklabels = np.arange(start, stop + tickstep, tickstep).astype(float)
xticks = out.index[out.x.isin(xticklabels)]
ax.set_xticks(xticks)
ax.set_xticklabels(xticklabels)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2)
</code></pre>
</li>
</ol>
<hr />
<p>Combined code:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.DataFrame({"fid": ["name", "name", "name", "name", "name"], "x": [1.02, 1.3, 2, 5, 10.6], "y1": [0, 1, 0.2, 0.6, 0.1], "y2": [0.3, 0, 0.1, 0.1, 0.4], "y3": [0.7, 0, 0.7, 0.3, 0.5]})
decimals = df.x.astype(str).str.split(".").str[-1].str.len().max()
xstep = 10.0 ** -decimals
start = 0
stop = 16
xfill = np.round(np.arange(start, stop + xstep, xstep), decimals)
out = df.set_index("x").reindex(xfill).reset_index()
ax = out.plot.bar(x="x", stacked=True, width=20, figsize=(10, 3))
tickstep = 2
xticklabels = np.arange(start, stop + tickstep, tickstep).astype(float)
xticks = out.index[out.x.isin(xticklabels)]
ax.set_xticks(xticks)
ax.set_xticklabels(xticklabels)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2)
</code></pre>
|
python|pandas|matplotlib|bar-chart
| 1
|
5,772
| 69,566,776
|
Pandas Dataframe - Search by index
|
<p>I have a dataframe where the index is a timestamp.</p>
<pre><code>DATE VALOR
2020-12-01 00:00:00 0.00635
2020-12-01 01:00:00 0.00941
2020-12-01 02:00:00 0.01151
2020-12-01 03:00:00 0.00281
2020-12-01 04:00:00 0.01080
... ...
2021-04-30 19:00:00 0.77059
2021-04-30 20:00:00 0.49285
2021-04-30 21:00:00 0.49057
2021-04-30 22:00:00 0.50339
2021-04-30 23:00:00 0.48792
</code></pre>
<p>I´m searching for a specific date</p>
<pre><code>drop.loc['2020-12-01 04:00:00']
VALOR 0.0108
Name: 2020-12-01 04:00:00, dtype: float64
</code></pre>
<p>I want the return for the index of search above.</p>
<p>In this case is line 5. After I want to use this value to do a slice in the dataframe</p>
<pre><code>drop[:5]
</code></pre>
<p>Thanks!</p>
|
<p>It looks like you want to subset <code>drop</code> up to index <code>'2020-12-01 04:00:00'</code>.</p>
<p>Then simply do this: <code>drop.loc[:'2020-12-01 04:00:00']</code></p>
<p>No need to manually get the line number.</p>
<p>output:</p>
<pre><code> VALOR
DATE
2020-12-01 00:00:00 0.00635
2020-12-01 01:00:00 0.00941
2020-12-01 02:00:00 0.01151
2020-12-01 03:00:00 0.00281
2020-12-01 04:00:00 0.01080
</code></pre>
<p>If you really want to get the position:</p>
<pre><code>pos = drop.index.get_loc(key='2020-12-01 04:00:00') ## returns: 4
drop[:pos+1]
</code></pre>
|
python|pandas
| 2
|
5,773
| 69,477,694
|
A shorter, more compact alternative to loc multiple variables into separate columns in a loop
|
<p>Is there a shorter, more compact alternative to loc multiple variables into separate columns in a loop?</p>
<p>The code that I am using now is looping through each polygon by index, then finding which point (from point file) is located in which polygon by ID. Then find some metrics or any other variable by equation and loc in new columns.</p>
<p>So, I was thinking if there is other way to loc multiple variables in separate columns to avoid great list of <code>polygon.loc[n, 'column name'] = some variable</code>.</p>
<pre><code>for n in polygon.index:
points_in_poly = points[points['ID'] == n]
# then equation to find some variables in each row
maxv = points_in_poly['Height'].max()
mean = points_in_poly['Height'].mean()
median = points_in_poly['Height'].median()
std = points_in_poly['Height'].std()
half_std = std / 2
max1 = maxv * std + 15
# then by doing this I'm get each row corresponding value
polygon.loc[n, 'max'] = maxv
polygon.loc[n, 'mean'] = mean
polygon.loc[n, 'median'] = median
polygon.loc[n, 'std'] = std
polygon.loc[n, 'half_std'] = half_std
polygon.loc[n, 'max1'] = max1
</code></pre>
|
<p>You could update your rows by using a dictionary like so:</p>
<pre class="lang-py prettyprint-override"><code>
polygon.loc[n] = {'max': maxv, 'mean': mean, 'median': median, 'std': std, 'half_std': half_std, 'max1': max1}
</code></pre>
<p>If there are other rows with values you don't want to overwrite you should add them to the dictionary. Something like this would work:</p>
<pre class="lang-py prettyprint-override"><code>polygon_data = polygon.loc[n].to_dict()
polygon_data.update({'max': maxv, 'mean': mean, 'median': median, 'std': std, 'half_std': half_std, 'max1': max1})
polygon.loc[n] = polygon_data
</code></pre>
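<p>If the goal is simply to avoid the per-row loop, the per-polygon statistics can also be computed in one pass with <code>groupby</code>/<code>agg</code> and joined onto <code>polygon</code>; a sketch using the column names from the question:</p>
<pre class="lang-py prettyprint-override"><code>stats = points.groupby('ID')['Height'].agg(['max', 'mean', 'median', 'std'])
stats['half_std'] = stats['std'] / 2
stats['max1'] = stats['max'] * stats['std'] + 15
polygon = polygon.join(stats)
</code></pre>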
|
python|loops|geopandas
| 0
|
5,774
| 69,644,874
|
Does @tf.function decorator work with class attributes?
|
<p>I'm currently developing an Autoencoder class - one of the methods is as follows:</p>
<pre class="lang-py prettyprint-override"><code>@tf.function
def build_ae(self, inputs):
self.input_ae = inputs
self.encoded = self.encoder(self.input_ae)
self.decoded = self.decoder(self.encoded)
self.autoencoder = Model(self.input_ae,outputs=self.encoded)
</code></pre>
<p>If I try to run this function:</p>
<pre class="lang-py prettyprint-override"><code>inp = Input(shape=(Nx, Nx, Nu)) # Nx, Nu are both int
ae = Autoencoder(Nx, Nu, [32, 64, 128]) # Simply specifying layers + input dimensions
ae.build_ae(inp)
</code></pre>
<p>I get the following error:</p>
<pre><code>TypeError: Cannot convert a symbolic Keras input/output to a numpy array.
This error may indicate that you're trying to pass a symbolic value to a NumPy call,
which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs
to a TF API that does not register dispatching, preventing Keras from automatically
converting the API call to a lambda layer in the Functional Model.
</code></pre>
<p>However, when I remove the <code>@tf.function</code> decorator, the function works as intended.</p>
<p>I've tried writing a simple test example:</p>
<pre class="lang-py prettyprint-override"><code>class Test:
@tf.function
def build_test(self, inputs):
self.inp = inputs
t = Test()
input_t = Input(shape=(3,3,3))
t.build_test(input_t)
</code></pre>
<p>Once again, this results in the same error.</p>
<p>I've tried disabling eager execution and this has had no effect.</p>
<p>Does anyone know why this might not be working?</p>
<p><strong>Update:</strong></p>
<p>Here is the full Autoencoder class:</p>
<pre class="lang-py prettyprint-override"><code>import einops
import h5py
from pathlib import Path
from typing import List, Tuple
import numpy as np
import tensorflow as tf
from tensorflow.keras import Input
from tensorflow.keras.layers import (
Dense,
Conv2D,
MaxPool2D,
UpSampling2D,
concatenate,
BatchNormalization,
Conv2DTranspose,
Flatten,
PReLU,
Reshape,
Dropout,
AveragePooling2D,
add,
Lambda,
Layer,
TimeDistributed,
LSTM
)
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
class Autoencoder:
def __init__(self,
nx: int,
nu: int,
features_layers: List[int],
latent_dim: int = 16,
filter_window: Tuple[int, int] = (3, 3),
act_fn: str = 'tanh',
batch_norm: bool = False,
dropout_rate: float = 0.0,
lmb: float = 0.0,
resize_method: str = 'bilinear') -> None:
self.nx = nx
self.nu = nu
self.features_layers = features_layers
self.latent_dim = latent_dim
self.filter_window = filter_window
self.act_fn = act_fn
self.batch_norm = batch_norm
self.dropout_rate = dropout_rate
self.lmb = lmb
self.resize_method = resize_method
self.train_history = None
self.val_history = None
self.encoder = Encoder(
self.nx,
self.nu,
self.features_layers,
self.latent_dim,
self.filter_window,
self.act_fn,
self.batch_norm,
self.dropout_rate,
self.lmb
)
self.decoder = Decoder(
self.nx,
self.nu,
self.features_layers,
self.latent_dim,
self.filter_window,
self.act_fn,
self.batch_norm,
self.dropout_rate,
self.lmb,
self.resize_method
)
@tf.function
def build_ae(self, inputs):
self.input_ae = inputs
self.encoded = self.encoder(self.input_ae)
self.decoded = self.decoder(self.encoded)
self.autoencoder = Model(self.input_ae, outputs=self.decoded)
@tf.function
def compile_ae(self, learning_rate, loss):
self.autoencoder.compile(optimizer=Adam(learning_rate=learning_rate), loss=loss)
def train(self,
inputs,
targets,
inputs_valid,
targets_valid,
n_epoch,
batch_size,
learning_rate,
patience,
filepath):
model_cb = ModelCheckpoint(filepath, monitor='val_loss', save_best_only=True, verbose=1, save_format="h5")
early_cb = EarlyStopping(monitor='val_loss', patience=patience, verbose=1)
cb = [model_cb, early_cb]
self.train_history = []
self.val_history = []
tf.keras.backend.set_value(self.autoencoder.optimizer.lr, learning_rate)
hist = self.autoencoder.fit(
inputs,
targets,
epochs=n_epoch,
batch_size=batch_size,
shuffle=True,
validation_data=(inputs_valid, targets_valid),
callbacks=cb
)
self.train_history.extend(hist.history['loss'])
self.train_history.extend(hist.history['val_loss'])
return self.train_history, self.val_history
</code></pre>
|
<p>The following methods both seem to do the job:</p>
<pre class="lang-py prettyprint-override"><code>tf.config.run_functions_eagerly
class Test:
@tf.function
def build_test(self, inputs):
self.inp = inputs
t = Test()
input_t = tf.keras.layers.Input(shape=(3,3,3))
t.build_test(input_t)
</code></pre>
<p>and</p>
<pre class="lang-py prettyprint-override"><code>class Test:
@tf.function
def build_test(self, inputs):
self.inp = inputs
t = Test()
t.build_test(tf.constant(1))
</code></pre>
<p>According to the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model" rel="nofollow noreferrer">docs</a>, when you create a <code>tf.keras.Model</code>:</p>
<blockquote>
<p>By default, we will attempt to compile your model to a static graph to deliver the best execution performance.</p>
</blockquote>
<p>You are already creating a model in your <code>build_ae</code> method, so I don't think that omitting the <code>@tf.function</code> decorator will affect your performance.</p>
|
python|tensorflow|keras
| 0
|
5,775
| 69,609,192
|
using a tensorflow model trained on google colab on my PC
|
<p>I am using colab to train a <code>tensorflow</code> model. I see that google colab installs the following version by default:</p>
<pre><code>import tensorflow
tensorflow.__version__
2.6.0
...
[train model]
...
model.save('mymodel.h5')
</code></pre>
<p>However, when I download the model to my windows pc and try to load it with <code>tensorflow/keras</code>, I get an error</p>
<pre><code>import keras
import tensorflow
model = keras.models.load_model(r"mymodel.h5")
model_config = json.loads(model_config.decode('utf-8'))
AttributeError: 'str' object has no attribute 'decode'
</code></pre>
<p>After searching on the net, it appears this is due to the different <code>tensorflow</code> versions (colab vs. my PC).</p>
<pre><code>tensorflow.__version__
Out[4]: '2.1.0'
</code></pre>
<p>The problem is that when I install <code>tensorflow</code> with <code>conda install tensorflow-gpu</code> this is the version I get. Even trying to force <code>conda install tensorflow-gpu==2.6</code> does not install anything.</p>
<p>What should I do?
Thanks!</p>
|
<p>hacky solution for now...</p>
<ol>
<li>download tensorflow 2.1 + CUDA and CuDNN using <code>conda install tensorflow-gpu</code></li>
<li>upgrade using <code>pip install tensorflow-gpu==2.6 --upgrade --force-reinstall</code></li>
</ol>
<p>The GPU does not work (likely because the CUDA versions are not the right ones) but at least I can run a tf 2.6 script using the CPU.</p>
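<p>A less hacky alternative if you can still run the Colab notebook: the <code>decode</code> error typically comes from a newer <code>h5py</code> reading <code>.h5</code> files with an older Keras, so saving in the TF SavedModel format sidesteps the HDF5 layer entirely. A sketch (the directory name is arbitrary, and the local TensorFlow still needs to be recent enough to read a TF 2.6 model):</p>
<pre><code># On Colab: save a SavedModel directory instead of an .h5 file
model.save('mymodel')

# On the local machine:
import tensorflow as tf
model = tf.keras.models.load_model('mymodel')
</code></pre>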
|
python|tensorflow|conda
| 0
|
5,776
| 41,058,534
|
failed to read inch symbol in pandas read_csv
|
<p>I have csv with below details</p>
<pre><code>Name,Desc,Year,Location
Jhon,12" Main Third ,2012,GR
Lew,"291" Line (12,596,3)",2012,GR
,All, 1992,FR
</code></pre>
<p>...</p>
<p>It is a very long file; I just showed the problematic lines. I am confused about how to read it into a pandas DataFrame. I tried</p>
<ul>
<li><p>quotechar,</p>
</li>
<li><p>quoting,</p>
</li>
<li><p>sep</p>
<p>like attribute of pandas <strong>read_csv</strong>.
Still no success.</p>
</li>
</ul>
<p>I have no control on how csv is being designed.</p>
|
<p>You can do something like this. Try if this works for you:</p>
<pre><code>import pandas as pd
import re

l1 = []
with open('/home/yusuf/Desktop/c1') as f:
    headers = f.readline().strip('\n').split(',')
    for a in f.readlines():
        if a:
            # groups: Name, Desc (greedy, so embedded commas/quotes stay in it), Year, Location
            q = re.findall(r"^(\w*),(.*),\s?(\d+),(\w+)", a)
            if q:
                l1.append(q)

l2 = [list(b[0]) for b in l1]
df = pd.DataFrame(data=l2, columns=headers)
df
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/gIqyv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gIqyv.png" alt="enter image description here"></a></p>
<p>Regex Demo: <a href="https://regex101.com/r/AU2WcO/1" rel="nofollow noreferrer">https://regex101.com/r/AU2WcO/1</a></p>
|
python|csv|pandas|dataframe
| 1
|
5,777
| 41,076,619
|
How can I read *.csv files that have numbers with commas using pandas?
|
<p>I want to read a *.csv file that have numbers with commas.</p>
<p>For example,</p>
<p>File.csv</p>
<pre><code>Date, Time, Open, High, Low, Close, Volume
2016/11/09,12:10:00,'4355,'4358,'4346,'4351,1,201 # The last value is 1201, not 201
2016/11/09,12:09:00,'4361,'4362,'4353,'4355,1,117 # The last value is 1117, not 117
2016/11/09,12:08:00,'4364,'4374,'4359,'4360,10,175 # The last value is 10175, not 175
2016/11/09,12:07:00,'4371,'4376,'4360,'4365,590
2016/11/09,12:06:00,'4359,'4372,'4358,'4369,420
2016/11/09,12:05:00,'4365,'4367,'4356,'4359,542
2016/11/09,12:04:00,'4379,'1380,'4360,'4365,1,697 # The last value is 1697, not 697
2016/11/09,12:03:00,'4394,'4396,'4376,'4381,1,272 # The last value is 1272, not 272
2016/11/09,12:02:00,'4391,'4399,'4390,'4393,524
...
2014/07/10,12:05:00,'10195,'10300,'10155,'10290,219,271 # The last value is 219271, not 271
2014/07/09,12:04:00,'10345,'10360,'10185,'10194,235,711 # The last value is 235711, not 711
2014/07/08,12:03:00,'10339,'10420,'10301,'10348,232,050 # The last value is 232050, not 050
</code></pre>
<p>It actually has 7 columns, but the values of the last column sometimes contain commas, and pandas takes them as extra columns.</p>
<p>My question is whether there is a method to make pandas split on only the first 6 commas and ignore the rest when it reads the columns, or a method to delete the commas after the 6th one (I'm sorry, but I can't think of any functions to do that).</p>
<p>Thank you for reading this :)</p>
|
<p>You can do all of it in Python without having to save the data into a new file. The idea is to clean the data and put in a dictionary-like format for pandas to grab it and turn it into a dataframe. The following should constitute a decent starting point:</p>
<pre><code>from collections import defaultdict
from collections import OrderedDict
import pandas as pd
# Import the data
data = open('prices.csv').readlines()
# Split on the first 6 commas
data = [x.strip().replace("'","").split(",",6) for x in data]
# Get the headers
headers = [x.strip() for x in data[0]]
# Get the remaining of the data
remainings = [list(map(lambda y: y.replace(",",""), x)) for x in data[1:]]
# Create a dictionary-like container
output = defaultdict(list)
# Loop through the data and save the rows accordingly
for n, header in enumerate(headers):
    for row in remainings:
        output[header].append(row[n])
# Save it in an ordered dictionary to maintain the order of columns
output = OrderedDict((k,output.get(k)) for k in headers)
# Convert your raw data into a pandas dataframe
df = pd.DataFrame(output)
# Print it
print(df)
</code></pre>
<p>This yields:</p>
<pre><code> Date Time Open High Low Close Volume
0 2016/11/09 12:10:00 4355 4358 4346 4351 1201
1 2016/11/09 12:09:00 4361 4362 4353 4355 1117
2 2016/11/09 12:08:00 4364 4374 4359 4360 10175
3 2016/11/09 12:07:00 4371 4376 4360 4365 590
4 2016/11/09 12:06:00 4359 4372 4358 4369 420
5 2016/11/09 12:05:00 4365 4367 4356 4359 542
6 2016/11/09 12:04:00 4379 1380 4360 4365 1697
7 2016/11/09 12:03:00 4394 4396 4376 4381 1272
8 2016/11/09 12:02:00 4391 4399 4390 4393 524
</code></pre>
<p>The starting file (<code>prices.csv</code>) is the following:</p>
<pre><code>Date, Time, Open, High, Low, Close, Volume
2016/11/09,12:10:00,'4355,'4358,'4346,'4351,1,201
2016/11/09,12:09:00,'4361,'4362,'4353,'4355,1,117
2016/11/09,12:08:00,'4364,'4374,'4359,'4360,10,175
2016/11/09,12:07:00,'4371,'4376,'4360,'4365,590
2016/11/09,12:06:00,'4359,'4372,'4358,'4369,420
2016/11/09,12:05:00,'4365,'4367,'4356,'4359,542
2016/11/09,12:04:00,'4379,'1380,'4360,'4365,1,697
2016/11/09,12:03:00,'4394,'4396,'4376,'4381,1,272
2016/11/09,12:02:00,'4391,'4399,'4390,'4393,524
</code></pre>
<p>I hope this helps.</p>
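<p>For reference, the same idea fits in a few lines if you let pandas build the frame directly. A sketch under the same assumptions about the file layout:</p>
<pre><code>import pandas as pd

# split each line on the first 6 commas only, so the thousands-separated
# Volume stays together in the last field
rows = [line.strip().replace("'", "").split(",", 6)
        for line in open('prices.csv') if line.strip()]
df = pd.DataFrame(rows[1:], columns=[h.strip() for h in rows[0]])
df['Volume'] = df['Volume'].str.replace(',', '').astype(int)
</code></pre>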
|
python|csv|pandas
| 2
|
5,778
| 38,160,637
|
Python: single colon vs double colon
|
<p>What is the difference between single and double colon in this situation?
<code>data[0:,4]</code> vs <code>data[0::,4]</code></p>
<pre><code>women_only_stats = data[0::,4] == "female"
men_only_stats = data[0::,4] != "female"
</code></pre>
<p>I tried to replace <code>data[0::,4]</code> with <code>data[0:,4]</code> and I see no difference. Is there any difference in this or another case?</p>
<p><code>data</code> is 2-dimensional array with rows like <code>['1' '0' '3' 'Braund, Mr. Owen Harris' 'male' '22' '1' '0' 'A/5 21171' '7.25' '' 'S']</code></p>
|
<p><strong>No</strong>, there is no difference.</p>
<p>See the Python documentation for <a href="https://docs.python.org/2/library/functions.html#slice" rel="noreferrer">slice</a>:</p>
<p>From the docs: <code>a[start:stop:step]</code></p>
<blockquote>
<p>The start and step arguments default to None. Slice objects have
read-only data attributes start, stop and step which merely return the
argument values (or their default).</p>
</blockquote>
<p>In this case, you are including an empty <code>step</code> parameter.</p>
<pre><code>>>> a = [1,2,3,4]
>>> a[2:]
[3,4]
>>> a[2::]
[3,4]
>>> a[2:] == a[2::]
True
</code></pre>
<p>And to understand what the <code>step</code> parameter actually does:</p>
<pre><code>>>> b = [1,2,3,4,5,6,7,8,9,10]
>>> b[0::5]
[1, 6]
>>> b[1::5]
[2, 7]
</code></pre>
<p>So by leaving it to be implicitly <code>None</code> (i.e., by either <code>a[2:]</code> or <code>a[2::]</code>), you are not going to change the output of your code in any way. </p>
<p>Hope this helps.</p>
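<p>The same equivalence holds for the 2-D NumPy indexing in the question:</p>
<pre><code>>>> import numpy as np
>>> data = np.arange(12).reshape(3, 4)
>>> np.array_equal(data[0:, 2], data[0::, 2])  # both take column 2 of every row
True
</code></pre>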
|
python|numpy|slice|colon
| 12
|
5,779
| 66,004,135
|
How to print out the tensor values of a specific layer
|
<p>I wish to examine the values of a tensor after <code>mask</code> is applied to it.</p>
<p>Here is a truncated part of the model. I let <code>temp = x</code> so later I wish to print <code>temp</code> to check the exact values.</p>
<p>So given a 4-class classification model using acoustic features. Assume I have data in (1000,50,136) as (batch, timesteps, features)</p>
<p>The objective is to check whether the model is learning the features per timestep. In other words, we wish to reassure ourselves that the model is learning from slices like the red rectangle in the picture. Logically, that is how a <code>Keras LSTM</code> layer works, but the <code>confusion matrix</code> produced is quite different when a parameter changes (e.g. Dense units). The validation accuracy stays at 45%, thus we would like to visualize the model.</p>
<p>The proposed idea is to print out the first step of the first batch and print out the input in the model. If they are the same, then the model is learning in the right way ((136,1) features at a time) instead of (50,1) timesteps of a single feature at a time.</p>
<p><a href="https://i.stack.imgur.com/BVieq.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BVieq.jpg" alt="enter image description here" /></a></p>
<pre><code>input_feature = Input(shape=(X_train.shape[1],X_train.shape[2]))
x = Masking(mask_value=0)(input_feature)
temp = x
x = Dense(Dense_unit,kernel_regularizer=l2(dense_reg), activation='relu')(x)
</code></pre>
<p>I have tried <code>tf.print()</code> which brought me <code>AttributeError: 'Tensor' object has no attribute '_datatype_enum'</code></p>
<hr />
<p>As <a href="https://stackoverflow.com/questions/51949208/get-output-from-a-non-final-keras-model-layer">Get output from a non final keras model layer</a> suggested by Lescurel.</p>
<pre><code>model2 = Model(inputs=[input_attention, input_feature], outputs=model.get_layer('masking')).output
print(model2.predict(X_test))
AttributeError: 'Masking' object has no attribute 'op'
</code></pre>
|
<p>You want the output after the mask is applied.
Lescurel's <a href="https://stackoverflow.com/questions/51949208/get-output-from-a-non-final-keras-model-layer">link</a> in the comment shows how to do that.
So does this <a href="https://github.com/keras-team/keras/issues/4205#issuecomment-257284099" rel="nofollow noreferrer">link to GitHub</a>.</p>
<p>You need to make a new model that</p>
<ul>
<li>takes as inputs the input from your model</li>
<li>takes as outputs the output from the layer</li>
</ul>
<p>I tested it with some made-up code derived from your snippets.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from keras import Input
from keras.layers import Masking, Dense
from keras.regularizers import l2
from keras.models import Sequential, Model
X_train = np.random.rand(4,3,2)
Dense_unit = 1
dense_reg = 0.01
mdl = Sequential()
mdl.add(Input(shape=(X_train.shape[1],X_train.shape[2]),name='input_feature'))
mdl.add(Masking(mask_value=0,name='masking'))
mdl.add(Dense(Dense_unit,kernel_regularizer=l2(dense_reg),activation='relu',name='output_feature'))
mdl.summary()
mdl2mask = Model(inputs=mdl.input,outputs=mdl.get_layer("masking").output)
maskoutput = mdl2mask.predict(X_train)
mdloutput = mdl.predict(X_train)
maskoutput # print output after/of masking
mdloutput # print output of mdl
maskoutput.shape #(4, 3, 2): masking has the shape of the layer before (input here)
mdloutput.shape #(4, 3, 1): shape of the output of dense
</code></pre>
|
python|tensorflow|machine-learning
| 0
|
5,780
| 66,195,425
|
Encoding a list column to the legend of a plot
|
<p>Apologies in advance, I am not sure how to word this question best:</p>
<p>I am working with a large dataset, and I would like to plot Latitude and Longitude where the colour of the points (actually the opacity) is encoded to a 'FeatureType' column binded to the legend. This way I can use the legend to highlight on my map various features I am looking for.</p>
<p><a href="https://i.stack.imgur.com/HpB0W.png" rel="nofollow noreferrer">Here is a picture of my map and legend so far</a></p>
<p>The problem is that in my dataset, the FeatureType column is a list of features that can be found there (i.e arch, bridge, etc..).</p>
<p>How can I make it so that the point shows up for both arch and bridge? At the moment each list becomes its own category (arch,bridge, etc.), leading to over 300 combinations of about 20 different FeatureTypes.</p>
<p>The dataset can be found at <a href="http://atlantides.org/downloads/pleiades/dumps/pleiades-locations-latest.csv.gz" rel="nofollow noreferrer">http://atlantides.org/downloads/pleiades/dumps/pleiades-locations-latest.csv.gz</a></p>
<p>N.B: I am using altair/pandas</p>
<pre><code>import altair as alt
import pandas as pd
from vega_datasets import data
df = pd.read_csv ('C://path/pleiades-locations.csv')
alt.data_transformers.enable('json')
countries = alt.topo_feature(data.world_110m.url, 'countries')
selection = alt.selection_multi(fields=['featureType'], bind='legend')
brush = alt.selection(type='interval', encodings=['x'])
map = alt.Chart(countries).mark_geoshape(
    fill='lightgray',
    stroke='white'
).project('equirectangular').properties(
    width=500,
    height=300
)

points = alt.Chart(df).mark_circle().encode(
    alt.Latitude('reprLat:Q'),
    alt.Longitude('reprLong:Q'),
    alt.Color('featureType:N'),
    tooltip=['featureType','timePeriodsKeys:N'],
    opacity=alt.condition(selection, alt.value(1), alt.value(0.0))
).add_selection(
    selection)
(map + points)
</code></pre>
|
<p>It is not possible for Altair to generate the labels you want from your current column format. You will need to turn your comma-separated string labels into lists and then <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer">explode the column</a> so that you get one row per item in the list:</p>
<pre><code>import altair as alt
import pandas as pd
from vega_datasets import data
alt.data_transformers.enable('data_server')
df = pd.read_csv('http://atlantides.org/downloads/pleiades/dumps/pleiades-locations-latest.csv.gz')[['reprLong', 'reprLat', 'featureType']]
df['featureType'] = df['featureType'].str.split(',')
df = df.explode('featureType')
countries = alt.topo_feature(data.world_110m.url, 'countries')
world_map = alt.Chart(countries).mark_geoshape(
    fill='lightgray',
    stroke='white')

points = alt.Chart(df).mark_circle(size=10).encode(
    alt.Latitude('reprLat:Q'),
    alt.Longitude('reprLong:Q'),
    alt.Color('featureType:N', legend=alt.Legend(columns=2)))
world_map + points
</code></pre>
<p><a href="https://i.stack.imgur.com/NTPus.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NTPus.png" alt="enter image description here" /></a></p>
<p>Note that having this many entries in the legend is not meaningful since the colors are repeated. The interactivity would help with that somewhat, but I would consider splitting this up into multiple charts. I am not sure if it is even possible to expand the legend to show those hidden 81 entries. And double check that the long lat location corresponds correctly with the world map projection you are using, they seemed to move around when I changed the projection.</p>
|
python|pandas|data-visualization|altair
| 0
|
5,781
| 52,465,868
|
TypeError: 'int' object is not iterable when iterating through pandas column
|
<p>I have a simple use case. I want to read a text file into a pandas DataFrame and iterate through the unique <strong><em>id</em></strong>s to plot an <strong><em>x-y graph</em></strong>.
This worked fine for me in many other projects, but now I get <code>TypeError: 'int' object is not iterable</code>.
At first I got <code>TypeError: 'numpy.float64' object is not iterable</code>; that is why I changed the type of <em>id</em> to <em>int</em> (see code). But that does not work either. I do not see why. Any ideas?</p>
<pre><code>f = open(file,"r+")
with open(file,"r+") as f1:
data10 = f1.read()
TESTDATA = StringIO(data11)
df = pd.read_table(TESTDATA, sep=" ")
df.columns = ["x", "y", "id"]
#Astype because i got the error TypeError: 'numpy.float64' object is not iterable
df.id = df.id.astype(int)
#get unique values of column id
list1=df['id'].tolist()
list1=list(set(list1))
fig, ax = plt.subplots()
for i ,g in list1:
x = df[df.id==i]['x']
y = df[df.id==i]['y']
g.plot(x='x',y='x', marker='o', ax=ax, title="Evaluation")
</code></pre>
|
<p>IMHO, your code reduces to</p>
<pre><code>df = pd.read_table(file, sep=" ")
fig, ax = plt.subplots()
grpd = df.groupby('id')
for i, g in grpd:
g.plot(x='x',y='y', marker='o', ax=ax, title="Evaluation")
</code></pre>
<p><strong><em>Explanation:</em></strong><br>
I'll step through the code step by step:</p>
<pre><code>f = open(file,"r+")
</code></pre>
<p><strong>can be deleted</strong> you don't need to open a file before you open it in a <code>with</code>-block<br>
But <em>if</em> you open it this way, do not forget to <em>close</em> it after you don't need it anymore.</p>
<pre><code>with open(file,"r+") as f1:
data10 = f1.read()
TESTDATA = StringIO(data11)
</code></pre>
<p><strong>can be deleted</strong><br>
1. weird usage of data10, data11 (it will not work) and StringIO (way too complicated)<br>
2. see next code line</p>
<pre><code>df = pd.read_table(file, sep=" ")
</code></pre>
<p><strong>OK</strong> if you replace <code>TESTDATA</code> simply by <code>file</code>, pandas is very powerful at importing files of different flavours...</p>
<pre><code>df.columns = ["x", "y", "id"]
</code></pre>
<p><strong>OK</strong> if your file doesn't have already it's own header which would fit your needs, which I'd recommend to check perhaps ...</p>
<pre><code>#Astype because i got the error TypeError: 'numpy.float64' object is not iterable
df.id = df.id.astype(int)
</code></pre>
<p><strong>can be deleted</strong> because your error has nothing to do with that, as I stated in my comment yesterday</p>
<pre><code>#get unique values of column id
list1=df['id'].tolist()
list1=list(set(list1))
</code></pre>
<p><strong>can be deleted</strong> if you want unique values of a column in pandas, use <code>df['id'].unique()</code>, but you don't need to do everything manually here, as for the target you want to achieve there is <code>groupby</code>:</p>
<pre><code>grpd = df.groupby('id')
</code></pre>
<p>This returns a groupby-object, which indeed provides you with groupname <em>and</em> group data when you iterate over it, so your loop with two variables would work here: </p>
<pre><code>fig, ax = plt.subplots()
for i, g in grpd:
    g.plot(x='x', y='y', marker='o', ax=ax, title="Evaluation")
</code></pre>
<p>and to be complete: your extra</p>
<pre><code>x = df[df.id==i]['x']
y = df[df.id==i]['y']
</code></pre>
<p><strong>can be deleted</strong>, too, because this is also included in the idea behind grouping. (And btw, you didn't even use it in your code...)</p>
<p><strong>In the end</strong> I'd recommend you to read the very basics of Python and pandas, especially file io <a href="https://docs.python.org/3/tutorial/inputoutput.html#reading-and-writing-files" rel="nofollow noreferrer">https://docs.python.org/3/tutorial/inputoutput.html#reading-and-writing-files</a> and pandas grouping <a href="https://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/groupby.html</a></p>
|
python|pandas|loops|iteration|typeerror
| 0
|
5,782
| 52,739,240
|
Extend a pandas dataframe to include 'missing' weeks
|
<p>I have a pandas dataframe which contains time series data, so the index of the dataframe is of type datetime64 at weekly intervals, each date occurs on the Monday of each calendar week.</p>
<p>There are only entries in the dataframe when an order was recorded, so if there was no order placed, there isn't a corresponding record in the dataframe. I would like to "pad" this dataframe so that any weeks in a given date range are included in the dataframe and a corresponding zero quantity is entered. </p>
<p>I have managed to get this working by creating a dummy dataframe, which includes an entry for each week that I want with a zero quantity and then merging these two dataframes and dropping the dummy dataframe column. This results in a 3rd padded dataframe. </p>
<p>I don't feel this is a great solution to the problem and being new to pandas wanted to know if there is a more specific and or pythonic way to achieve this, probably without having to create a dummy dataframe and then merge.</p>
<p>The code I used is below to get my current solution:</p>
<pre><code># Create the dummy product
# Week hold the week date of the order, want to set this as index later
group_by_product_name = df_all_products.groupby(['Week', 'Product Name'])['Qty'].sum()
first_date = group_by_product_name.head(1) # First date in entire dataset
last_date = group_by_product_name.tail().index[-1] # last date in the data set
bdates = pd.bdate_range(start=first_date, end=last_date, freq='W-MON')
qty = np.zeros(bdates.shape)
dummy_product = {'Week':bdates, 'DummyQty':qty}
df_dummy_product = pd.DataFrame(dummy_product)
df_dummy_product.set_index('Week', inplace=True)
group_by_product_name = df_all_products.groupby('Week')['Qty'].sum()
df_temp = pd.concat([df_dummy_product, group_by_product_name], axis=1, join='outer')
df_temp.fillna(0, inplace=True)
df_temp.drop(columns=['DummyQty'], axis=1, inplace=True)
</code></pre>
<p>The problem with this approach is sometimes (I don't know why) the indexes don't match correctly, I think somehow the dtype of the index on one of the dataframes loses its type and goes to object instead of staying with dtype datetime64. So I am sure there is a better way to solve this problem than my current solution. </p>
<p>EDIT</p>
<p>Here is a sample dataframe with "missing entries"</p>
<pre><code>df1 = pd.DataFrame({'Week':['2018-05-28', '2018-06-04',
'2018-06-11', '2018-06-25'], 'Qty':[100, 200, 300, 500]})
df1.set_index('Week', inplace=True)
df1.head()
</code></pre>
<p>Here is an example of the padded dataframe that contains the additional missing dates between the date range</p>
<pre><code> df_zero = pd.DataFrame({'Week':['2018-05-21', '2018-05-28', '2018-06-04',
'2018-06-11', '2018-06-18', '2018-06-25', '2018-07-02'], 'Dummy Qty':[0, 0, 0, 0, 0, 0, 0]})
df_zero.set_index('Week', inplace=True)
df_zero.head()
</code></pre>
<p>And this is the intended outcome after concatenating the two dataframes</p>
<pre><code>df_padded = pd.concat([df_zero, df1], axis=1, join='outer')
df_padded.fillna(0, inplace=True)
df_padded.drop(columns=['Dummy Qty'], inplace=True)
df_padded.head(6)
</code></pre>
<p>Note that the missing entries are added before and between other entries where necessary in the final dataframe.</p>
<p>Edit 2:</p>
<p>As requested here is an example of what the initial product dataframe would look like:</p>
<pre><code>df_all_products = pd.DataFrame({'Week':['2018-05-21', '2018-05-28', '2018-05-21', '2018-06-11', '2018-06-18',
'2018-06-25', '2018-07-02'],
'Product Name':['A', 'A', 'B', 'A', 'B', 'A', 'A'],
'Qty':[100, 200, 300, 400, 500, 600, 700]})
</code></pre>
|
<p>Ok given your original data you can achieve the expected results by using <code>pivot</code> and resample for any missing weeks, like the following:</p>
<pre><code>results = df_all_products.groupby(
    ['Week', 'Product Name']
)['Qty'].sum().reset_index().pivot(
    index='Week', columns='Product Name', values='Qty'
).resample('W-MON').asfreq().fillna(0)
</code></pre>
<p>Output results:</p>
<pre><code>Product Name A B
Week
2018-05-21 100.0 300.0
2018-05-28 200.0 0.0
2018-06-04 0.0 0.0
2018-06-11 400.0 0.0
2018-06-18 0.0 500.0
2018-06-25 600.0 0.0
2018-07-02 700.0 0.0
</code></pre>
<p>So if you want to get the <code>df</code> for Product Name A, you can do <code>results['A']</code>.</p>
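<p>Note that <code>resample</code> needs a datetime index, so if <code>'Week'</code> holds strings (as in the sample data), convert it first with <code>pd.to_datetime</code>. An equivalent sketch using <code>reindex</code> against a complete <code>W-MON</code> range:</p>
<pre><code>df_all_products['Week'] = pd.to_datetime(df_all_products['Week'])
weekly = (df_all_products.groupby(['Week', 'Product Name'])['Qty']
          .sum().unstack(fill_value=0))
full_range = pd.date_range(weekly.index.min(), weekly.index.max(), freq='W-MON')
weekly = weekly.reindex(full_range, fill_value=0)
</code></pre>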
|
python|pandas|dataframe
| 2
|
5,783
| 52,557,357
|
Changed value in numpy array based on index and on criteria
|
<p>I have a numpy array:</p>
<pre><code>arr = np.array([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
>> arr
[[ 1 2 3 4 5]
[ 6 7 8 9 10]]
</code></pre>
<p>I want to take a portion of the array based on indices (not slices):</p>
<pre><code>ix = np.ix_([0, 1], [0, 2])
>> arr[ix]
[[1 3]
[6 8]]
</code></pre>
<p>And I want to modify those elements in the original array, which would work if I did this:</p>
<pre><code>arr[ix] = 0
>> arr
[[ 0 2 0 4 5]
[ 0 7 0 9 10]]
</code></pre>
<p>But I only want to change them if they follow a specific condition, like if they are lesser than <code>5</code>. I am trying this:</p>
<pre><code>subarr = arr[ix]
subarr[subarr < 5] = 0
</code></pre>
<p>But it doesn't modify the original one.</p>
<pre><code>>> arr
[[ 1 2 3 4 5]
[ 6 7 8 9 10]]
>> subarr
[[0 0]
[6 8]]
</code></pre>
<p>I am not sure why this is not working, since both accessing the array by indices with <code>np.ix_</code> and using a mask <code>subarr < 5</code> should return a view of the array, not a copy.</p>
|
<p>Fancy indexing returns a copy; hence your original array will not be updated. You can use <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> to update your values:</p>
<pre><code>arr[ix] = np.where(arr[ix] < 5, 0, arr[ix])

array([[ 0,  2,  0,  4,  5],
       [ 6,  7,  8,  9, 10]])
</code></pre>
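<p>Alternatively, keep your original approach but write the modified copy back explicitly:</p>
<pre><code>sub = arr[ix]     # fancy indexing returns a copy
sub[sub < 5] = 0  # boolean masking modifies the copy in place
arr[ix] = sub     # assign the copy back into the original array
</code></pre>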
|
python|arrays|python-2.7|numpy|indexing
| 4
|
5,784
| 46,347,623
|
Reading data from csv file with different number of columns and mixed datatype - python
|
<p>i've got an CSV-file with the following input:</p>
<blockquote>
<p>Title;High Data</p>
<p>Date save;01.01.2000;00:00</p>
<p>Comment;</p>
<p>Magnification;1;[m]</p>
<p>Counts;4931</p>
<p>Length;5583;[m]</p>
<p>Start 1;0;1475</p>
<p>End 1;4931;1475</p>
<p>Profil 1[µm]</p>
<p>529</p>
<p>528</p>
<p>and so on ...</p>
</blockquote>
<p>I want to read the counts and length into variables. The problem seems to be that there are different numbers of columns. I've tried different things to load it into a numpy array or a pandas dataframe, but nothing really worked out. Please help me! Thank you!</p>
<p>Edit: This is the code i used to load it into the pandas dataframe:</p>
<pre><code>fin = pd.read_csv('Temp.csv', sep = ';')
df = pd.DataFrame(fin)
</code></pre>
<p>But after that i can't read the data out of the dataframe...</p>
|
<p>This is not really a CSV file. If you want to parse a file into a pandas dataframe, you usually want to be looking at something like a table (for example: each column is one feature, each row is one sample/item/person).</p>
<p>It seems like you have three types of data annotation formats here:</p>
<p>1</p>
<pre><code>[NAME OF FEATURE];[VALUE FOR FEATURE]
</code></pre>
<p>2</p>
<pre><code>[NAME OF FEATURE];[VALUE FOR FEATURE];[UNIT]
</code></pre>
<p>3</p>
<pre><code>[NAME OF FEATURE][UNIT]
[VALUE]
[VALUE]
[VALUE]
</code></pre>
<p>I would recommend writing a parser yourself if there are a lot of these files, or reshaping the data to be in a consistent format if it is just a single file. Preferably something like:</p>
<pre><code>Title, Date Save, Comment, Magnification, ..., Profil 1, ...
High Data, 01.01.2000;00:00,,1,...,"529,528",...
...
</code></pre>
<p><strong>EDIT:</strong> If you only care about count and length.</p>
<blockquote>
<p>I want to read the counts and length into variables. </p>
</blockquote>
<pre><code>relevant_lines = [line.split(';')[:1] for line in open(your_file_name).read().split('\n') if line.startswith('Counts') or line.startswith('Length')]
df = pd.DataFrame([dict(relevant_lines)])
</code></pre>
<p>Makes a df that looks like this:</p>
<pre><code>  Counts Length
0   4931   5583
</code></pre>
<p><strong>EVEN SIMPLER EDIT:</strong> If you just want them as variables and don't care about the dataframe at all:</p>
<pre><code>lines = open(your_file).read().split('\n')
count = None
length = None
for line in lines:
    if line.startswith('Length'):
        length = int(line.split(';')[1])
    if line.startswith('Count'):
        count = int(line.split(';')[1])
</code></pre>
|
python-3.x|pandas|numpy
| 1
|
5,785
| 46,200,506
|
How to convert [0,0,0,1,2,2] to [0,1,2,0,0,1]?
|
<p>Using standard TensorFlow operators, how can I convert a 1D tensor which has values in ascending order to a 1D tensor in which each value is the number of times the value at the same index in the input tensor has appeared when scanning from left to right?
An example is given in the question title.</p>
|
<p>You can achieve this using <a href="https://www.tensorflow.org/api_docs/python/tf/while_loop" rel="nofollow noreferrer">tf.while_loop</a>. I suggest the following solution:</p>
<pre><code>import tensorflow as tf

def scan_accum(a):
    def condition(a):
        def c(counts, acc, i):
            return tf.less(i, tf.gather(tf.shape(a), 0))
        return c

    def body(a):
        def b(counts, acc, i):
            current = tf.gather(a, i)
            ant = tf.gather(a, tf.add(i, -1))
            update = tf.scatter_nd(tf.reshape(i, shape=(1, 1)),
                                   tf.expand_dims(acc, axis=0),
                                   shape=tf.shape(counts))
            counts_ = tf.cond(
                pred=tf.equal(current, ant),
                true_fn=lambda: counts + update,
                false_fn=lambda: counts
            )
            acc_ = tf.cond(
                pred=tf.equal(current, ant),
                true_fn=lambda: tf.add(acc, 1),
                false_fn=lambda: tf.constant(1, dtype=counts.dtype)
            )
            i_ = tf.add(i, 1)
            return [counts_, acc_, i_]
        return b

    i = tf.constant(1)
    counts = tf.zeros_like(a)
    acc = tf.constant(1, dtype=counts.dtype)
    counts, _, _ = tf.while_loop(cond=condition(a),
                                 body=body(a),
                                 loop_vars=[counts, acc, i])
    return counts

a = tf.constant([0, 0, 0, 1, 2, 2])

with tf.Session() as sess:
    print(sess.run(scan_accum(a)))  # prints [0 1 2 0 0 1]
</code></pre>
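<p>For completeness: when equal values are contiguous (as here, since the input is in ascending order), the same result can be computed without an explicit loop. A sketch using <code>tf.unique_with_counts</code>:</p>
<pre><code>import tensorflow as tf

a = tf.constant([0, 0, 0, 1, 2, 2])
_, idx, count = tf.unique_with_counts(a)
run_starts = tf.cumsum(count, exclusive=True)  # index where each run begins
result = tf.range(tf.size(a)) - tf.gather(run_starts, idx)

with tf.Session() as sess:
    print(sess.run(result))  # prints [0 1 2 0 0 1]
</code></pre>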
|
tensorflow
| 0
|
5,786
| 46,424,912
|
tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq output projection
|
<p>The official documentation for <code>tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq</code> has the following explanation for the output_projection argument:
<br></p>
<p><code>output_projection</code>: None or a pair (W, B) of output projection weights and biases; W has shape [output_size x num_decoder_symbols] and B has shape [num_decoder_symbols]; if provided and feed_previous=True, each fed previous output will first be multiplied by W and added B.</p>
<p>I don't understand why the B argument should have the size of <code>[num_decoder_symbols]</code>? Since the output is first multiplied by W and then the biases are added, Shouldn't it be <code>[output_size]</code>? </p>
|
<p>Alright! So, I have found the answer to the question. <br>
The main source of confusion was in the dimensions
<code>[output_size x num_decoder_symbols]</code> of the W matrix itself. <br></p>
<p>The <code>output_size</code> here doesn't refer to the <strong>output_size</strong> that you want, but is the output_size (same as the size of the hidden vector) of the LSTM cell. Thus the matrix multiplication <code>u x W</code> will result in a vector of size <code>num_decoder_symbols</code> that can be considered as the logits for the output symbols; adding <code>B</code> to that vector is why <code>B</code> must have shape <code>[num_decoder_symbols]</code> as well.</p>
|
python|tensorflow|lstm|embedding|rnn
| 0
|
5,787
| 46,420,007
|
Numpy sum() got an 'keepdims' error
|
<p>This is a paragraph of a neural network code example:</p>
<pre><code>def forward_step(X, W, b, W2, b2):
    hidden_layer = np.maximum(0, np.dot(X, W) + b)
    scores = np.dot(hidden_layer, W2) + b2
    exp_scores = np.exp(scores)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
    ...
</code></pre>
<p>The last line of the code shown above threw an error:</p>
<pre><code><ipython-input-49-d97cff51c360> in forward_step(X, W, b, W2, b2)
14 scores = np.dot(hidden_layer, W2) + b2
15 exp_scores = np.exp(scores)
---> 16 probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
17 corect_logprobs = -np.log(probs[range(X.shape[0]), y])
/Users/###/anaconda/lib/python3.6/site-packages/numpy/core/fromnumeric.py in sum(a, axis, dtype, out, keepdims)
1810 pass
1811 else:
-> 1812 return sum(axis=axis, dtype=dtype, out=out, **kwargs)
1813 return _methods._sum(a, axis=axis, dtype=dtype,
1814 out=out, **kwargs)
TypeError: sum() got an unexpected keyword argument 'keepdims'
</code></pre>
<p>There is a similar question <a href="https://stackoverflow.com/questions/45601131/numpy-sum-keepdims-error">Numpy sum keepdims error</a> which says that the edition of numpy should be greater than 1.7. I have checked my edition of numpy:</p>
<pre><code>import numpy
numpy.version.version
>> 1.12.1
</code></pre>
<p>Now I am confused about how this error occurred.</p>
|
<p>Note that under the <code>keepdims</code> argument in the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.sum.html#numpy.sum" rel="nofollow noreferrer">docs for <code>numpy.sum()</code></a> it states:</p>
<blockquote>
<p><strong>keepdims</strong> : <em>bool, optional</em><br>
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.<br>
If the default value is passed, then <em>keepdims</em> will not be passed through to the <code>sum</code> method of sub-classes of <code>ndarray</code>, however any non-default value will be. If the sub-classes <code>sum</code> method does not implement <em>keepdims</em> any exceptions will be raised.</p>
</blockquote>
<p>So it states here that if you're using a sub-class of <code>numpy.ndarray</code>, then you'll get this error if the corresponding <code>sum</code> function for the sub-class hasn't been defined with it.</p>
<p>Notice that in your error it references line <code>1812</code> in <code>numpy/core/fromnumeric.py</code>. Take a look at that in context in the actual <a href="https://github.com/numpy/numpy/blob/maintenance/1.12.x/numpy/core/fromnumeric.py" rel="nofollow noreferrer"><code>numpy</code> 1.12.x source</a>:</p>
<pre><code>kwargs = {}
if keepdims is not np._NoValue:
    kwargs['keepdims'] = keepdims
if isinstance(a, _gentype):
    res = _sum_(a)
    if out is not None:
        out[...] = res
        return out
    return res
if type(a) is not mu.ndarray:
    try:
        sum = a.sum
    except AttributeError:
        pass
    else:
        return sum(axis=axis, dtype=dtype, out=out, **kwargs)
return _methods._sum(a, axis=axis, dtype=dtype,
                     out=out, **kwargs)
</code></pre>
<p>Two things are important to note here: the <code>sum</code> function <em>did</em> parse your <code>keepdims</code> variable, since it pulled it above line <code>1812</code> and tried to put it in another function, so you know the error wasn't the way you used the variable. The other important thing is that the line <code>1812</code> which you're erroring on is only executing <em>if</em> <code>type(a) is not mu.ndarray</code>, i.e., if you're using a different class than <code>ndarray</code>. And this is exactly what the documentation is referencing. If you have a different class, then they need to implement this <code>sum</code> function <em>with the keepdims argument</em>, and if they don't it will raise an error.</p>
<p>Other classes like <code>np.matrix</code> for example will have a different sum function, and it seems that, even in <code>numpy 1.13.x</code>, <code>sum</code> for <code>np.matrix</code> types does not support the <code>keepdims</code> argument (because in <code>numpy</code>, matrices are <em>always</em> 2D). For example, it works fine with a <code>np.array</code>:</p>
<pre><code>>>> import numpy as np
>>> A = np.eye(4)
>>> A
array([[ 1., 0., 0., 0.],
[ 0., 1., 0., 0.],
[ 0., 0., 1., 0.],
[ 0., 0., 0., 1.]])
>>> np.sum(A, axis=1, keepdims=True)
array([[ 1.],
[ 1.],
[ 1.],
[ 1.]])
</code></pre>
<p>But with a <code>np.matrix</code>, it doesn't:</p>
<pre><code>>>> import numpy.matlib
>>> B = np.matlib.eye(4)
>>> np.sum(B, axis=1, keepdims=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../numpy/core/fromnumeric.py", line 1832, in sum
return sum(axis=axis, dtype=dtype, out=out, **kwargs)
TypeError: sum() got an unexpected keyword argument 'keepdims'
</code></pre>
<p>But, most array/matrix type objects can be easily cast to an array in <code>numpy</code> with <code>np.array(<object>)</code>, and this should solve the problem for most sub-classed objects in <code>numpy</code> and likely your problem. You can also simply wrap the result back into a <code>np.matrix</code> if you need to.</p>
<pre><code>>>> B = np.matlib.eye(4)
>>> B = np.array(B)
>>> np.sum(B, axis=1, keepdims=True)
array([[ 1.],
[ 1.],
[ 1.],
[ 1.]])
</code></pre>
<p><strong>However, if your class of object <em>is</em> a <code>np.matrix</code> type, then the <code>keepdims</code> argument is pointless. Matrices are <em>always</em> 2D, so the <code>sum</code> function won't reduce a dimension, and thus the argument wouldn't do anything. This is why it isn't implemented for matrices.</strong></p>
|
python|numpy|matrix|sum|numpy-ndarray
| 10
|
5,788
| 58,454,736
|
How do I delete the spaces (trailing and ending) using a python code
|
<p>This is my string: "INDUSTRIAL, PARKS, PROPERTY".
I want to delete the spaces after the commas (before PARKS and PROPERTY).
I want the output as "INDUSTRIAL,PARKS,PROPERTY".</p>
|
<p>For your example you can do:</p>
<pre><code>strr = "INDUSTRIAL, PARKS, PROPERTY"
','.join(strr.split(', '))
</code></pre>
<p>Otherwise, if the spacing around the commas is irregular:</p>
<pre><code>strr = "INDUSTRIAL,  PARKS , PROPERTY"
','.join([s.strip() for s in strr.split(',')])
</code></pre>
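<p>If you prefer a single call that handles any spacing around the commas, a regex substitution also works (a sketch):</p>
<pre><code>import re

re.sub(r'\s*,\s*', ',', "INDUSTRIAL,  PARKS , PROPERTY")
# 'INDUSTRIAL,PARKS,PROPERTY'
</code></pre>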
|
python|python-3.x|pandas|data-science|data-analysis
| 0
|
5,789
| 58,209,644
|
How do I remove a MultiIndex from my DataFrame?
|
<p>I try pulling the columns from my dataframe using <em>df.columns</em> and I get:</p>
<pre><code>MultiIndex([( 'time', ''),
('numbers', 11),
('numbers', 12),
('numbers', 13),
('numbers', 14)],
names=[None, 'letters'])
</code></pre>
<p>I have never used a MultiIndex before, so I am confused about how to get 'time' as a column instead of an index, so I can go from this DataFrame:</p>
<pre><code>df =
time numbers
letters a b c d
0 22:45:00 1016.0 1059.0 1042.0 1007.0
1 23:00:00 1006.0 10507.0 1040.0 1084.0
2 23:15:00 1084.0 1058.0 1047.0 1495.0
3 23:30:00 1095.0 1498.0 1480.0 1048.0
4 23:45:00 1098.0 1002.0 1044.0 1084.0
5 00:00:00 1044.0 1517.0 1084.0 1051.0
</code></pre>
<p>(preferably by just removing the MultiIndex) so it resembles this:</p>
<pre><code>df =
time a b c d
0 22:45:00 1016.0 1059.0 1042.0 1007.0
1 23:00:00 1006.0 1007.0 1040.0 1084.0
2 23:15:00 1084.0 1058.0 1047.0 1495.0
3 23:30:00 1095.0 1498.0 1480.0 1048.0
4 23:45:00 1098.0 1002.0 1044.0 1084.0
5 00:00:00 1044.0 1517.0 1084.0 1051.0
</code></pre>
<p>I have tried using droplevel but I get</p>
<blockquote>
<p>Cannot remove 1 levels from an index with 1 levels: at least one level must be left.</p>
</blockquote>
<p>Is this because the index is in the columns and not the rows?</p>
|
<p>IIUC you have this:</p>
<pre><code>dd = {('time', ''): {0: '22:45:00',
1: '23:00:00',
2: '23:15:00',
3: '23:30:00',
4: '23:45:00',
5: '00:00:00'},
('numbers', 'a'): {0: 1016.0,
1: 1006.0,
2: 1084.0,
3: 1095.0,
4: 1098.0,
5: 1044.0},
('numbers', 'b'): {0: 1059.0,
1: 10507.0,
2: 1058.0,
3: 1498.0,
4: 1002.0,
5: 1517.0},
('numbers', 'c'): {0: 1042.0,
1: 1040.0,
2: 1047.0,
3: 1480.0,
4: 1044.0,
5: 1084.0},
('numbers', 'd'): {0: 1007.0,
1: 1084.0,
2: 1495.0,
3: 1048.0,
4: 1084.0,
5: 1051.0}}
df1 = pd.DataFrame(dd).rename_axis([None,'letters'], axis=1)
df1
</code></pre>
<p>Input Dataframe:</p>
<pre><code> time numbers
letters a b c d
0 22:45:00 1016.0 1059.0 1042.0 1007.0
1 23:00:00 1006.0 10507.0 1040.0 1084.0
2 23:15:00 1084.0 1058.0 1047.0 1495.0
3 23:30:00 1095.0 1498.0 1480.0 1048.0
4 23:45:00 1098.0 1002.0 1044.0 1084.0
5 00:00:00 1044.0 1517.0 1084.0 1051.0
</code></pre>
<p>Then,</p>
<pre><code>df2 = df1.set_index('time')
df2.columns = df2.columns.droplevel(0)
df2.reset_index()
</code></pre>
<p>Output:</p>
<pre><code>letters time a b c d
0 22:45:00 1016.0 1059.0 1042.0 1007.0
1 23:00:00 1006.0 10507.0 1040.0 1084.0
2 23:15:00 1084.0 1058.0 1047.0 1495.0
3 23:30:00 1095.0 1498.0 1480.0 1048.0
4 23:45:00 1098.0 1002.0 1044.0 1084.0
5 00:00:00 1044.0 1517.0 1084.0 1051.0
</code></pre>
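<p>A shorter alternative that avoids the <code>set_index</code> round-trip is to flatten the columns directly, keeping the second level wherever it is non-empty (a sketch against the same <code>df1</code>):</p>
<pre><code>df1.columns = [second if second else first for first, second in df1.columns]
</code></pre>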
|
python|pandas|multi-index
| 1
|
5,790
| 58,369,033
|
Getting list is unhashable when use apply on a pandas DataFrame
|
<p>I have a <code>DataFrame</code> <code>df</code> that that has a list in each row, and I want to apply the <code>remove_stops</code> function to each row.</p>
<pre><code>import pandas as pd
from nltk.corpus import stopwords

stop = stopwords.words('english')

def remove_stops(row):
    meaningful_words = [w for w in row if not w in stop]
    return meaningful_words

df.apply(remove_stops)
</code></pre>
<p>When I run the code, I get the following error </p>
<pre><code>meaningful_words = [w for w in row if not w in stop]
TypeError: ("unhashable type: 'list'", 'occurred at index original')
</code></pre>
<p>After some research, I understood the error is being caused because lists are not hashable.</p>
<pre><code>print(type(df))
print(type(df.iloc[0, 0]))
<class 'pandas.core.frame.DataFrame'>
<class 'list'>
</code></pre>
<p>How can I solve this issue? </p>
|
<p>After explicitly selecting the name of the column I wanted to apply the function to, the code ran as expected:</p>
<pre><code>df['original'].apply(remove_stops)
</code></pre>
<p>This works because <code>DataFrame.apply</code> calls the function once per <em>column</em> by default (hence "occurred at index original" in the error), while <code>Series.apply</code> calls it once per element, so each call receives a single list. Thanks for the prompt replies.</p>
|
python|pandas|dataframe|apply
| 1
|
5,791
| 69,105,604
|
Effective way to determine position of two lines
|
<p>I am trying to determine the position of two lines, so far using numpy but I am open to use opencv if necessary.</p>
<p>First look at the pic
<a href="https://i.stack.imgur.com/TNWDD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TNWDD.png" alt="lines" /></a></p>
<p>I am using the means of the X coordinate. So in the first picture, you can see the blobs in the center are the position of the mean of the X coordinate for each line. You can clearly see that the red line is "to the left" of the green line (since mean_read < mean_green)</p>
<p>The problem is when some lines are small like in the picture to the right.
You can intuitively know that the red line is still "to the left" of the green line. However if we see the means, this time mean_red > mean_green.</p>
<p>Is there a better method using numpy or even using opencv to correctly determine that the green line is to the right of the red line?</p>
|
<p>This idea works if the intersection of the lines does not matter to you if they extend.</p>
<pre class="lang-py prettyprint-override"><code>im = cv2.imread(sys.path[0]+'/im.png')
gr = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
bw = cv2.threshold(gr, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]
cnts, _ = cv2.findContours(~bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
gMid, g1, g2, gColor = None, None, None, (100, 255, 50)
rMid, r1, r2, rColor = None, None, None, (100, 50, 255)
for c in cnts:
x, y, w, h = cv2.boundingRect(c)
cv2.rectangle(im, (x, y), (x+w, y+h), (127, 127, 127), 2)
ROI = im[y:y+h, x:x+w]
if np.mean(ROI[:, :, 1]) > np.mean(ROI[:, :, 2]): # Green line
g1, g2 = ((x, y), (x+w, y+h))[:2] if ROI[0, 0, 0] == 255 else ((x+w, y), (x, y+h))[:2]
gMid = (x+w//2, y+h//2)
else: # red line
r1, r2 = ((x, y), (x+w, y+h))[:2] if ROI[0, 0, 0] == 255 else ((x+w, y), (x, y+h))[:2]
rMid = (x+w//2, y+h//2)
# Draw coordinates of green line
for p in [gMid, g1, g2]:
cv2.circle(im, p, 20, gColor, 5)
# Draw coordinates of red line
for p in [rMid, r1, r2]:
cv2.circle(im, p, 20, rColor, 5)
# Swap (Sort) green line start and end points
if r1[0]>r2[0]:
r1,r2=r2,r1
# Swap (Sort) red line start and end points
if g1[0]>g2[0]:
g1,g2=g2,g1
if r1[0] < g1[0] and r2[0] < g2[0]:
print("x-axis --- Red line is left")
elif r1[0] > g1[0] and r2[0] > g2[0]:
print("x-axis --- Green line is left")
elif r1[0] > g1[0] and r2[0] < g2[0]:
print("x-axis --- Red is inside Green zone")
else:
print("Something else")
</code></pre>
<p>Visual Output:</p>
<p><a href="https://i.stack.imgur.com/Vr6bE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vr6bE.png" alt="enter image description here" /></a></p>
<p>Printed Output:</p>
<pre class="lang-py prettyprint-override"><code># For left sample:
# x-axis --- Red line is left
# For right sample:
# x-axis --- Red is inside Green zone
</code></pre>
<p>Now other situations may occur. If we continue the lines, the lines will intersect if they are not parallel. If you want to know in which position the lines will be relative to each other. First you need to <a href="https://en.wikipedia.org/wiki/Linear_equation" rel="nofollow noreferrer">calculate the equation</a> of both lines. And then for intersection point, search <a href="https://en.wikipedia.org/wiki/Line%E2%80%93line_intersection" rel="nofollow noreferrer">LineLineIntersection</a> to find related mathematical equations.</p>
|
python|numpy|opencv|image-processing
| 1
|
5,792
| 60,831,722
|
Fit 3d ellipsoid to a distribution of 3d points?
|
<p>I have a number of points in 3d space (cartesian x,y,z) and would like to fit an ellipsoid
to them in order to determine the axis ratios. The issue here is that I have a distribution of points (not points on a surface), while the solutions to this problem mainly consider points on a surface. Also, would this fit be iterative (like some optimize- or MCMC-type method)? I work in Python.</p>
<p>The code i am using was given in this answer: <a href="https://stackoverflow.com/questions/58501545/python-fit-3d-ellipsoid-oblate-prolate-to-3d-points">Python: fit 3D ellipsoid (oblate/prolate) to 3D points</a></p>
<p>But this does not work for me ( I think it was meant for points on the surface of an ellipsoid). But I have more density distribution of points rather than surface points.</p>
|
<p>I assume that you are not after the tightest bounding ellipsoid nor some best fit ellipsoid based on the "outer" points (such as an ellipsoid fit on the convex hull).</p>
<p>I understand that you are after a distribution, i.e. a unit-sum positive function of the coordinates, which you want to have "ellipsoidal" symmetry, so that the loci of equiprobable points are ellipsoids.</p>
<p>If you assume your distribution to be multivariate normal,</p>
<pre><code>P(p) = c.exp(-(p-µ)^T M (p-µ)/2)
</code></pre>
<p>then <code>M</code> is the inverse of the covariance matrix and <code>µ</code> the average vector.</p>
|
python|numpy|scipy|computational-geometry|mcmc
| -1
|
5,793
| 71,638,200
|
How to fix IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
|
<p>I am working on generating a simple sine function defined over the range [0,1] as shown below. However I got an index error when assigning <code>function[i]</code>. I tried to set <code>x_n = int(x_n)</code> as some questions with the same error suggest but that only creates further errors.</p>
<pre><code>import numpy as np
function = np.zeros(20)
x_n=np.arange(0,1,0.05) #[0,1]
for i in x_n:
    function[i] = np.sin(2*np.pi*i)
</code></pre>
<p>Note <code>x_n</code> has a step of 0.05 because I need to generate 20 values ranging from 0 to 1.</p>
|
<p>The problem is that i is a float, while array indices are ints.
You should change your code to</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
function = np.zeros(20)
x_n=np.arange(0,1,0.05) #[0,1]
for i in x_n:
    function[int(i*20)] = np.sin(2*np.pi*i)
</code></pre>
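<p>A variant that avoids float-to-index conversion (and any rounding surprises) is to enumerate the grid; and since <code>np.sin</code> is vectorized, the loop can be dropped entirely (a sketch):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

x_n = np.arange(0, 1, 0.05)
function = np.zeros_like(x_n)
for k, x in enumerate(x_n):
    function[k] = np.sin(2 * np.pi * x)

# or simply, with no loop at all:
function = np.sin(2 * np.pi * x_n)
</code></pre>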
|
python|numpy
| 1
|
5,794
| 71,563,914
|
What is the best solution to comparing specific columns against two pandas dataframes here?
|
<p>I have two pandas dataframes with the same columns, one is old and one is new. They have a subset set of matching account numbers and I have done the following changes to create versions of these two dataframes which have the same account numbers/row length:</p>
<pre><code>#Merge and find out where the commonality is in the accounts
m = old_df.merge(new_df, on='account_number', how='outer', suffixes=['', '_'], indicator=True)
m_both = m[m['_merge'] == 'both']
#Create a version of the old file which has the matching account numbers
old_df['both'] = old_df ['account_number'].isin(m_both['account_number'].unique())
old_both = old_df[old_df['both'] == True]
#Create a version of the new file which has the matching account numbers
new_df['both'] = new_df['account_number'].isin(m_both['account_number'].unique())
new_both = new_df[new_df['both'] == True]
</code></pre>
<p>Here is where my curiosity lies. Within both dataframes, there is a column called 'residence'. The new df has different values in some accounts under this column, compared to the old df. What would be the best way to identify/flag these different rows in the new df?</p>
<p>Example:</p>
<p>old df:</p>
<p><a href="https://i.stack.imgur.com/pgp1k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pgp1k.png" alt="enter image description here" /></a></p>
<p>new df:</p>
<p><a href="https://i.stack.imgur.com/tqCti.png." rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tqCti.png." alt="enter image description here" /></a></p>
<p>I have included code to create these dataframes below:</p>
<pre><code>data = {'Account number': ['1234568','1111111','1111111','1414141','9898536','2360660','1144569','4488755','1122369'],
'Residence': ['VIRGIN ISLANDS, BRITISH','SINGAPORE','INDIA','VIRGIN ISLANDS, BRITISH','VIRGIN ISLANDS, BRITISH','BAHAMAS','VIRGIN ISLANDS, BRITISH','SWITZERLAND','SWITZERLAND']}
old_df = pd.DataFrame(data)
data = {'Account number': ['1234568','1111111','1111111','1414141','9898536','2360660','1144569','4488755','1122369'],
'Residence': ['VIRGIN ISLANDS, BRITISH','SINGAPORE','SINGAPORE','VIRGIN ISLANDS, BRITISH','VIRGIN ISLANDS, BRITISH','VIRGIN ISLANDS, BRITISH','VIRGIN ISLANDS, BRITISH','VIRGIN ISLANDS, BRITISH','SWITZERLAND']}
new_df = pd.DataFrame(data)
</code></pre>
|
<p>IIUC, simply use equality and aggregate per row with <code>all</code>:</p>
<pre><code>new_df['equal'] = old_df.eq(new_df).all(1)
</code></pre>
<p>Output:</p>
<pre><code> Account number Residence equal
0 1234568 VIRGIN ISLANDS, BRITISH True
1 1111111 SINGAPORE True
2 1111111 SINGAPORE False
3 1414141 VIRGIN ISLANDS, BRITISH True
4 9898536 VIRGIN ISLANDS, BRITISH True
5 2360660 VIRGIN ISLANDS, BRITISH False
6 1144569 VIRGIN ISLANDS, BRITISH True
7 4488755 VIRGIN ISLANDS, BRITISH False
8 1122369 SWITZERLAND True
</code></pre>
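<p>If only the <code>Residence</code> column matters, compare just that column (a sketch, assuming the two frames are aligned row-for-row as above):</p>
<pre><code>new_df['residence_changed'] = old_df['Residence'].ne(new_df['Residence'])
</code></pre>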
|
python|pandas
| 0
|
5,795
| 42,308,270
|
Python - numpy mgrid and reshape
|
<p>Can someone explain to me what the second line of this code does?</p>
<pre><code>objp = np.zeros((48,3), np.float32)
objp[:,:2] = np.mgrid[0:8,0:6].T.reshape(-1,2)
</code></pre>
<p>Can someone explain to me what exactly the np.mgrid[0:8,0:6] part of the code is doing and what exactly the T.reshape(-1,2) part of the code is doing?</p>
<p>Thanks and good job!</p>
|
<p>The easiest way to see these is to use smaller values for <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.mgrid.html" rel="noreferrer"><code>mgrid</code></a>:</p>
<pre><code>In [11]: np.mgrid[0:2,0:3]
Out[11]:
array([[[0, 0, 0],
        [1, 1, 1]],

       [[0, 1, 2],
        [0, 1, 2]]])

In [12]: np.mgrid[0:2,0:3].T  # (matrix) transpose
Out[12]:
array([[[0, 0],
        [1, 0]],

       [[0, 1],
        [1, 1]],

       [[0, 2],
        [1, 2]]])

In [13]: np.mgrid[0:2,0:3].T.reshape(-1, 2)  # reshape to an Nx2 matrix
Out[13]:
array([[0, 0],
       [1, 0],
       [0, 1],
       [1, 1],
       [0, 2],
       [1, 2]])
</code></pre>
<p>Then <code>objp[:,:2] =</code> sets the 0th and 1th columns of <code>objp</code> to this result.</p>
|
numpy
| 6
|
5,796
| 43,231,649
|
Julia: Products of sequences in a vectorized way
|
<p>Learning to move from Python to Julia, I am trying to convert some old code of mine that calculates a product over a sequence of this expression:</p>
<p><a href="https://i.stack.imgur.com/z9jAd.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/z9jAd.jpg" alt="enter image description here"></a></p>
<p>I have two versions of the code in Python, one implemented with <code>for</code> loops, and the other using broadcasting. The <code>for</code> loop version is:</p>
<pre><code>import numpy as np

A = np.arange(1.,5.,1)
G = np.array([[1.,2.],[3.,4.]])

def calcF(G,A):
    N = A.size
    print A
    print N
    F = []
    for l in range(N):
        F.append(G/A[l])
        print F[l]
        for j in range(N):
            if j != l:
                F[l]*=((G - A[l])/(G + A[j]))*((A[l] - A[j])/(A[l] + A[j]))
    return F

F= calcF(G,A)
print F
</code></pre>
<p>And the vectorized version I have learned from a response to my question <a href="https://stackoverflow.com/questions/34696179/product-of-a-sequence-in-numpy/34697017#34697017">here</a>, is this function:</p>
<pre><code>def calcF_vectorized(G,A):
# Get size of A
N = A.size
# Perform "(G - A[l])/(G + A[j]))" in a vectorized manner
p1 = (G - A[:,None,None,None])/(G + A[:,None,None])
# Perform "((A[l] - A[j])/(A[l] + A[j]))" in a vectorized manner
p2 = ((A[:,None] - A)/(A[:,None] + A))
# Elementwise multiplications between the previously calculated parts
p3 = p1*p2[...,None,None]
# Set the escaped portion "j != l" output as "G/A[l]"
p3[np.eye(N,dtype=bool)] = G/A[:,None,None]
Fout = p3.prod(1)
# If you need separate arrays just like in the question, split it
return np.array_split(Fout,N)
</code></pre>
<p>I tried to naively translate the Python <code>for</code> loops code to Julia:</p>
<pre><code>function JuliacalcF(G,A)
F = Array{Float64}[]
for l in eachindex(A)
push!(F,G/A[l])
println(A[l])
for j in eachindex(A)
if j!=l
F[l]*=((G - A[l])/(G + A[j]))*((A[l] - A[j])/(A[l] + A[j]))
end
end
end
#println(alpha)
return F
end
A = collect(1.0:1.0:5.0)
G = Vector{Float64}[[1.,2.],[3.,4.]]
println(JuliacalcF(G,A))
</code></pre>
<p>But is there a smarter way to do it, analogous to the <code>numpy</code> broadcasting vectorized version?</p>
|
<p>Take a look at <a href="https://github.com/JuliaLang/julialang.github.com/blob/master/blog/_posts/moredots/More-Dots.ipynb" rel="nofollow noreferrer">More-Dots</a> and <a href="https://julialang.org/blog/2017/01/moredots" rel="nofollow noreferrer">Loop Fusion</a>, where this kind of vectorization is described with examples. </p>
|
python|algorithm|numpy|julia|array-broadcasting
| 1
|
5,797
| 43,211,893
|
How to calculate a index series for a event window
|
<p>Suppose I have a time series like so:</p>
<pre><code>pd.Series(np.random.rand(20), index=pd.date_range("1990-01-01",periods=20))
1990-01-01 0.018363
1990-01-02 0.288625
1990-01-03 0.460708
1990-01-04 0.663063
1990-01-05 0.434250
1990-01-06 0.504893
1990-01-07 0.587743
1990-01-08 0.412223
1990-01-09 0.604656
1990-01-10 0.960338
1990-01-11 0.606765
1990-01-12 0.110480
1990-01-13 0.671683
1990-01-14 0.178488
1990-01-15 0.458074
1990-01-16 0.219303
1990-01-17 0.172665
1990-01-18 0.429534
1990-01-19 0.505891
1990-01-20 0.242567
Freq: D, dtype: float64
</code></pre>
<p>Suppose the event date is on 1990-01-05 and 1990-01-15. I want to subset the data down to a window of length (-2,+2) around the event, but with an added column yielding the relative number of days from the event date (which has value 0):</p>
<pre><code>1990-01-03 0.460708 -2
1990-01-04 0.663063 -1
1990-01-05 0.434250 0
1990-01-06 0.504893 1
1990-01-07 0.587743 2
1990-01-13 0.671683 -2
1990-01-14 0.178488 -1
1990-01-15 0.458074 0
1990-01-16 0.219303 1
1990-01-17 0.172665 2
Freq: D, dtype: float64
</code></pre>
<p>This question is related to my previous question here : <a href="https://stackoverflow.com/questions/43202696/event-study-in-pandas">Event Study in Pandas</a></p>
|
<p>Leveraging your previous solution from 'Event Study in Pandas' by @jezrael:</p>
<pre><code>import numpy as np
import pandas as pd
s = pd.Series(np.random.rand(20), index=pd.date_range("1990-01-01",periods=20))
date1 = pd.to_datetime('1990-01-05')
date2 = pd.to_datetime('1990-01-15')
window = 2
dates = [date1, date2]
s1 = pd.concat([s.loc[date - pd.Timedelta(window, unit='d'):
date + pd.Timedelta(window, unit='d')] for date in dates])
</code></pre>
<p>Convert to dataframe:</p>
<pre><code>df = s1.to_frame()
df['Offset'] = pd.Series(data=np.arange(-window,window+1).tolist()*len(dates),index=s1.index)
df
</code></pre>
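<p>Since the series is random, the exact numbers will differ on each run, but the structure of <code>df</code> is as below (the values reuse the sample from the question for illustration; the value column is named <code>0</code> because the series is unnamed):</p>
<pre><code>                   0  Offset
1990-01-03  0.460708      -2
1990-01-04  0.663063      -1
1990-01-05  0.434250       0
1990-01-06  0.504893       1
1990-01-07  0.587743       2
1990-01-13  0.671683      -2
1990-01-14  0.178488      -1
1990-01-15  0.458074       0
1990-01-16  0.219303       1
1990-01-17  0.172665       2
</code></pre>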
|
python|pandas
| 1
|
5,798
| 72,198,871
|
How to create a new column with the percentage of the occurance of a particular value in another column in a DataFrame?
|
<p>I have a column with a value of either 'Y' or 'N' for yes or no. I want to calculate the percentage of occurrences of 'Y' and include this as the value of a new column called "Percentage".</p>
<p>I have come up with this so far. Although it gives the numbers I need, I don't know how to get the information in the form I describe:</p>
<pre><code>port_merge_lic_df.groupby(['Port'])['Shellfish Licence licence (Y/N)'].value_counts(normalize=True) * 100
Port Shellfish Licence licence (Y/N)
ABERDEEN Y 80.731789
N 19.268211
AYR N 94.736842
Y 5.263158
BELFAST N 81.654676
...
STORNOWAY N 23.362692
0.383857
ULLAPOOL N 56.936826
Y 43.063174
WICK N 100.000000
Name: Shellfish Licence licence (Y/N), Length: 87, dtype: float64
</code></pre>
<p>The dataframe is in the form:</p>
<pre><code>df1 = pd.DataFrame({'Port': {0: 'NORTH SHIELDS', 1: 'NORTH SHIELDS',
2: 'NORTH SHIELDS', 3: 'NORTH SHIELDS', 4: 'NORTH SHIELDS'},
'Shellfish Licence licence (Y/N)': {0: 'Y', 1: 'N', 2: 'N', 3: 'N', 4: 'N'},
'Scallop Licence (Y/N)': {0: 'N', 1: 'N', 2: 'N', 3: 'N', 4: 'N'},
'Length Group': {0: 'Over10m', 1: 'Over10m', 2: 'Over10m',3:
'Over10m',4: 'Over10m'}})
df1
</code></pre>
|
<p>IIUC, you can use:</p>
<pre><code>df1['Shellfish Licence licence (Y/N)'].eq('Y').groupby(df1['Port']).mean()
</code></pre>
<p>output:</p>
<pre><code>Port
NORTH SHIELDS 0.2
Name: Shellfish Licence licence (Y/N), dtype: float64
</code></pre>
<p>For all columns:</p>
<pre><code>df1.filter(like='Y/N').apply(lambda c: c.eq('Y').groupby(df1['Port']).mean())
</code></pre>
<p>output:</p>
<pre><code> Shellfish Licence licence (Y/N) Scallop Licence (Y/N)
Port
NORTH SHIELDS 0.2 0.0
</code></pre>
<p>To have the data in the original dataframe:</p>
<pre><code>df1['Shellfish percent'] = df1['Shellfish Licence licence (Y/N)'].eq('Y').groupby(df1['Port']).transform('mean')
</code></pre>
<p>output:</p>
<pre><code> Port Shellfish Licence licence (Y/N) Scallop Licence (Y/N) \
0 NORTH SHIELDS Y N
1 NORTH SHIELDS N N
2 NORTH SHIELDS N N
3 NORTH SHIELDS N N
4 NORTH SHIELDS N N
Length Group Shellfish percent
0 Over10m 0.2
1 Over10m 0.2
2 Over10m 0.2
3 Over10m 0.2
4 Over10m 0.2
</code></pre>
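<p>If you prefer the value expressed as a percentage (matching the <code>value_counts(normalize=True) * 100</code> style in the question), scaling the same expression by 100 should do — a sketch using the sample frame above:</p>
<pre><code>df1['Percentage'] = (
    df1['Shellfish Licence licence (Y/N)'].eq('Y')
       .groupby(df1['Port'])
       .transform('mean')
       .mul(100)
)
# every NORTH SHIELDS row now carries Percentage == 20.0
</code></pre>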
|
python|pandas|dataframe
| 0
|
5,799
| 72,370,399
|
Pandas: Search and replace a string across all columns in a dataframe
|
<p>I'm trying to search for a string 'NONE' that is in uppercase across all columns in a dataframe and replace it with 'None'. If it is already 'None' I don't do anything.</p>
<p>I tried using a lambda function, but it's not working.</p>
<p>PS: In the data, <code>None</code> is not the Python keyword; it is the string "None".</p>
<pre><code>df=data.apply(lambda row: 'None' if row.astype(str).str.contains('NONE').any() else row, axis=1)
</code></pre>
<p><strong>Sample input</strong></p>
<pre><code>A | B
None | 234
NONE | NONE
565 | 347
</code></pre>
<p><strong>Expected Output</strong></p>
<pre><code>A | B
None | 234
None | None
565 | 347
</code></pre>
|
<p>Try this</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'A': ['None', 'NONE', '565'],
'B': ['234', 'NONE', '347']})
# replace NONE by None
df = df.replace('NONE', 'None')
print(df)
</code></pre>
<pre><code> A B
0 None 234
1 None None
2 565 347
</code></pre>
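<p>Note that <code>replace</code> with plain strings only swaps cells whose <em>entire</em> value is <code>'NONE'</code>. If <code>'NONE'</code> could also appear inside longer strings (an assumption — the sample data doesn't show that case), a regex replace handles substrings as well:</p>
<pre class="lang-py prettyprint-override"><code># substring-aware variant
df = df.replace('NONE', 'None', regex=True)
</code></pre>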
|
python|pandas|numpy|contains
| 2
|