| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
5,800
| 72,234,859
|
Why won't PyTorch RNN accept unbatched input?
|
<p>I'm trying to train a PyTorch RNN to predict the next value in a 1D sequence. According to the PyTorch documentation page, I think I should be able to feed unbatched input to the RNN with shape [L, H_in], where L is the length of the sequence and H_in is the input size. That is, a 2D tensor.</p>
<p><a href="https://pytorch.org/docs/stable/generated/torch.nn.RNN.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/generated/torch.nn.RNN.html</a></p>
<pre><code>import torch
x1 = torch.tensor([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0], [7.0], [8.0]])
x1_input = x1[0:-1, :]
x1_target = x1[1:, :]
rnn = torch.nn.RNN(1, 1, 1)
optimizer_prediction = torch.optim.Adam(rnn.parameters())
prediction_loss = torch.nn.L1Loss()
rnn.train()
epochs = 100
for i in range(0, epochs):
    output_x1 = rnn(x1_input)
    print(output_x1)
    error_x1 = prediction_loss(output_x1, x1_target)
    error_x1.backward()
    optimizer_prediction.step()
    optimizer_prediction.zero_grad()
</code></pre>
<p>However, PyTorch is complaining that it expects a 3D input tensor (i.e. including a dimension for the batch):</p>
<pre><code>RuntimeError: input must have 3 dimensions, got 2
</code></pre>
<p>What is the correct method for feeding unbatched input to an RNN?</p>
|
<p>I would recommend turning your input into a 3D tensor by adding a batch dimension of size one. With the RNN's default <code>batch_first=False</code> layout <code>(L, N, H_in)</code>, the batch axis is dim 1:</p>
<pre><code>torch.unsqueeze(x1_input, dim=1)
</code></pre>
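<p>A minimal sketch of the whole training loop with that fix applied (it also unpacks the <code>(output, h_n)</code> tuple that the RNN returns, which the original loop would otherwise pass to the loss):</p>
<pre><code>import torch

x1 = torch.tensor([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0], [7.0], [8.0]])
x1_input = x1[0:-1, :].unsqueeze(1)   # (L, H_in) -> (L, 1, H_in): batch of one
x1_target = x1[1:, :].unsqueeze(1)

rnn = torch.nn.RNN(1, 1, 1)
optimizer = torch.optim.Adam(rnn.parameters())
loss_fn = torch.nn.L1Loss()

rnn.train()
for _ in range(100):
    output_x1, h_n = rnn(x1_input)    # nn.RNN returns (output, hidden state)
    error_x1 = loss_fn(output_x1, x1_target)
    error_x1.backward()
    optimizer.step()
    optimizer.zero_grad()
</code></pre>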
|
pytorch|recurrent-neural-network
| 1
|
5,801
| 72,170,346
|
DATAFRAME TO BIGQUERY - Error: FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp1yeitxcu_job_4b7daa39.parquet'
|
<p>I am uploading a dataframe to a bigquery table.</p>
<pre><code>df.to_gbq('Deduplic.DailyReport', project_id=BQ_PROJECT_ID, credentials=credentials, if_exists='append')
</code></pre>
<p>And I get the following error:</p>
<pre><code>OSError Traceback (most recent call last)
~/.local/lib/python3.8/site-packages/google/cloud/bigquery/client.py in load_table_from_dataframe(self, dataframe, destination, num_retries, job_id, job_id_prefix, location, project, job_config, parquet_compression, timeout)
2624
-> 2625 _pandas_helpers.dataframe_to_parquet(
2626 dataframe,
~/.local/lib/python3.8/site-packages/google/cloud/bigquery/_pandas_helpers.py in dataframe_to_parquet(dataframe, bq_schema, filepath, parquet_compression, parquet_use_compliant_nested_type)
672 arrow_table = dataframe_to_arrow(dataframe, bq_schema)
--> 673 pyarrow.parquet.write_table(
674 arrow_table,
~/.local/lib/python3.8/site-packages/pyarrow/parquet.py in write_table(table, where, row_group_size, version, use_dictionary, compression, write_statistics, use_deprecated_int96_timestamps, coerce_timestamps, allow_truncated_timestamps, data_page_size, flavor, filesystem, compression_level, use_byte_stream_split, column_encoding, data_page_version, use_compliant_nested_type, **kwargs)
2091 **kwargs) as writer:
-> 2092 writer.write_table(table, row_group_size=row_group_size)
2093 except Exception:
~/.local/lib/python3.8/site-packages/pyarrow/parquet.py in write_table(self, table, row_group_size)
753
--> 754 self.writer.write_table(table, row_group_size=row_group_size)
755
~/.local/lib/python3.8/site-packages/pyarrow/_parquet.pyx in pyarrow._parquet.ParquetWriter.write_table()
~/.local/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
OSError: [Errno 28] Error writing bytes to file. Detail: [errno 28] No space left on device
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
<ipython-input-8-f7137c1f7ee8> in <module>
62 )
63
---> 64 df.to_gbq('Deduplic.DailyReport', project_id=BQ_PROJECT_ID, credentials=credentials, if_exists='append')
~/.local/lib/python3.8/site-packages/pandas/core/frame.py in to_gbq(self, destination_table, project_id, chunksize, reauth, if_exists, auth_local_webserver, table_schema, location, progress_bar, credentials)
2052 from pandas.io import gbq
2053
-> 2054 gbq.to_gbq(
2055 self,
2056 destination_table,
~/.local/lib/python3.8/site-packages/pandas/io/gbq.py in to_gbq(dataframe, destination_table, project_id, chunksize, reauth, if_exists, auth_local_webserver, table_schema, location, progress_bar, credentials)
210 ) -> None:
211 pandas_gbq = _try_import()
--> 212 pandas_gbq.to_gbq(
213 dataframe,
214 destination_table,
~/.local/lib/python3.8/site-packages/pandas_gbq/gbq.py in to_gbq(dataframe, destination_table, project_id, chunksize, reauth, if_exists, auth_local_webserver, table_schema, location, progress_bar, credentials, api_method, verbose, private_key)
1191 return
1192
-> 1193 connector.load_data(
1194 dataframe,
1195 destination_table_ref,
~/.local/lib/python3.8/site-packages/pandas_gbq/gbq.py in load_data(self, dataframe, destination_table_ref, chunksize, schema, progress_bar, api_method, billing_project)
584
585 try:
--> 586 chunks = load.load_chunks(
587 self.client,
588 dataframe,
~/.local/lib/python3.8/site-packages/pandas_gbq/load.py in load_chunks(client, dataframe, destination_table_ref, chunksize, schema, location, api_method, billing_project)
235 ):
236 if api_method == "load_parquet":
--> 237 load_parquet(
238 client,
239 dataframe,
~/.local/lib/python3.8/site-packages/pandas_gbq/load.py in load_parquet(client, dataframe, destination_table_ref, location, schema, billing_project)
127
128 try:
--> 129 client.load_table_from_dataframe(
130 dataframe,
131 destination_table_ref,
~/.local/lib/python3.8/site-packages/google/cloud/bigquery/client.py in load_table_from_dataframe(self, dataframe, destination, num_retries, job_id, job_id_prefix, location, project, job_config, parquet_compression, timeout)
2670
2671 finally:
-> 2672 os.remove(tmppath)
2673
2674 def load_table_from_json(
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp1yeitxcu_job_4b7daa39.parquet'
</code></pre>
<p>Is there a solution for this?</p>
|
<p>As Ricco D has mentioned, when writing the dataframe to the table, the BigQuery client creates a temporary file on the host system and removes it once the dataframe is written. See the <a href="https://github.com/googleapis/python-bigquery/blob/main/google/cloud/bigquery/client.py#L2621-L2675" rel="nofollow noreferrer">source code</a> of the client for reference. The linked code chunk performs the following operations:</p>
<ol>
<li>Create a temporary file.</li>
<li>Load the temporary file into the table.</li>
<li>Delete the file after loading.</li>
</ol>
<p>The error you are facing comes from the first step: there is not enough space on the device for the BigQuery client to write the temporary file (note the underlying <code>OSError: [Errno 28] No space left on device</code>). Consider freeing up space on the host system, for example by deleting unused files, so the client can create its temporary files.</p>
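<p>A quick way to confirm the diagnosis before retrying is to check the free space on the temp filesystem (a sketch using only the standard library):</p>
<pre><code>import shutil
import tempfile

usage = shutil.disk_usage(tempfile.gettempdir())
print(f"free on {tempfile.gettempdir()}: {usage.free / 1e9:.2f} GB")
</code></pre>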
|
pandas|dataframe|google-bigquery
| 1
|
5,802
| 72,203,472
|
How to return the loc/index (row and column) of a searched item in Pandas dataframe
|
<p>I am searching for a sub-string in a Pandas dataframe.</p>
<pre><code>tmp = Metadata_sheet_0.apply(lambda row: row.astype(str).str.contains('sRNA spacer'), axis=1)
</code></pre>
<p>It returns a dataframe of the same size, with every element True or False. I would like the indexes (row and column) of all the True values, not another dataframe of Trues/Falses.</p>
<p>How can I do this the pandas way, without resorting to for loops?</p>
<p>Thank you!</p>
|
<p>Assuming this example:</p>
<pre><code>df = pd.DataFrame([[1,2,3],[4,1,2],[1,5,1]], columns=list('ABC'))
   A  B  C
0  1  2  3
1  4  1  2
2  1  5  1
</code></pre>
<p>you can use a boolean mask and <code>stack</code>:</p>
<pre><code>df.where(df.eq(1)).stack()
</code></pre>
<p>output:</p>
<pre><code>0  A    1.0
1  B    1.0
2  A    1.0
   C    1.0
dtype: float64
</code></pre>
<p>to only get the coordinates:</p>
<pre><code>df.where(df.eq(1)).stack().index.to_list()
</code></pre>
<p>output:</p>
<pre><code>[(0, 'A'), (1, 'B'), (2, 'A'), (2, 'C')]
</code></pre>
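<p>Applied to the boolean mask from the question (a sketch, assuming <code>tmp</code> is the True/False frame produced by the <code>str.contains</code> call), the same <code>stack</code> trick yields the (row, column) pairs directly:</p>
<pre><code>coords = tmp.where(tmp).stack().index.to_list()
</code></pre>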
|
pandas|string|dataframe|search|indexing
| 1
|
5,803
| 72,315,271
|
How do you view the first non zero number in an NumPy array?
|
<p>I have a cube <code>a</code> of shape <code>(n, n, n)</code>. The size will vary.</p>
<p>I want to look from the top, so down on the columns, which there are nxn of, and return an array (n,n) with the first non zero number you can 'see'.</p>
<p>This is like looking down on the array imagining zeros are holes.</p>
<p>I can use sum(a, axis=0) but this adds up any number below the first zero also. I want just the first number after a zero (or zero if there are n zeros in the column)</p>
<p>I hope this is clear enough to get some advice :)</p>
<p>Cheers, Paul</p>
|
<p>From what I understand, you can use the <code>nonzero</code> function with slicing.
Refer to: <a href="https://numpy.org/doc/stable/reference/generated/numpy.nonzero.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/reference/generated/numpy.nonzero.html</a></p>
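<p>A more direct sketch of the "look down axis 0" idea uses <code>argmax</code> on a boolean mask (hypothetical variable names; all-zero columns correctly yield 0, since <code>argmax</code> returns index 0 there and that element is itself 0):</p>
<pre><code>import numpy as np

a = np.random.randint(0, 3, (4, 4, 4))            # sample cube
first_nz = (a != 0).argmax(axis=0)                # index of first non-zero per column
top_view = np.take_along_axis(a, first_nz[None], axis=0)[0]
print(top_view)                                   # shape (n, n)
</code></pre>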
|
python|numpy
| 0
|
5,804
| 72,211,665
|
I'm having problems with one-hot encoding
|
<p>I am using logistic regression for a football dataset, but it seems that when I try to one-hot encode the home team names and away team names, the model gets 100% accuracy; even when doing a train_test_split I still get 100%. What am I doing wrong?</p>
<pre><code>from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.preprocessing import OneHotEncoder
import pandas as pd
import numpy as np
df = pd.read_csv("FIN.csv")
df['Date'] = pd.to_datetime(df["Date"])
df = df[(df["Date"] > '2020/04/01')]
df['BTTS'] = np.where((df.HG > 0) & (df.AG > 0), 1, 0)
#print(df.to_string())
df.dropna(inplace=True)
x = df[['Home', 'Away', 'Res', 'HG', 'AG', 'PH', 'PD', 'PA', 'MaxH', 'MaxD', 'MaxA', 'AvgH', 'AvgD', 'AvgA']].values
y = df['BTTS'].values
np.set_printoptions(threshold=np.inf)
model = LogisticRegression()
ohe = OneHotEncoder(categories=[df.Home, df.Away, df.Res], sparse=False)
x = ohe.fit_transform(x)
print(x)
model.fit(x, y)
print(model.score(x, y))
x_train, x_test, y_train, y_test = train_test_split(x, y, shuffle=False)
model.fit(x_train, y_train)
print(model.score(x_test, y_test))
y_pred = model.predict(x_test)
print("accuracy:",
accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall:", recall_score(y_test, y_pred))
print("f1 score:", f1_score(y_test, y_pred))
</code></pre>
|
<p>Overfitting would be a situation where your training accuracy is very high, and your test accuracy is very low. That means it's "over fitting" because it essentially just learns what the outcome will be on the training, but doesn't fit well on new, unseen data.</p>
<p>The reason you are getting 100% accuracy is precisely as I stated in the comments: there's a (for lack of a better term) data leakage. You are essentially allowing your model to "cheat". Your target variable <code>y</code> (which is <code>'BTTS'</code>) is feature-engineered from the data itself. It is derived from <code>'HG'</code> and <code>'AG'</code>, and thus is highly (100%) correlated/associated with your target. You define <code>'BTTS'</code> as 1 when both <code>'HG'</code> and <code>'AG'</code> are greater than 0, and then you include those 2 columns in your training data. So the model simply picked up that obvious association (i.e., when the home goals are 1 or more and the away goals are 1 or more -> both teams scored).</p>
<p>Once the model sees those 2 values greater than 0, it predicts 1, if one of those values is 0, it predicts 0.</p>
<p>Drop <code>'HG'</code> and <code>'AG'</code> from the x (features).</p>
<p>Once we remove those 2 columns, you'll see a more realistic performance (albeit poor - slightly better than a flip of the coin) here:</p>
<pre><code>1.0
0.5625
accuracy: 0.5625
precision: 0.6666666666666666
recall: 0.4444444444444444
f1 score: 0.5333333333333333
</code></pre>
<p>With the Confusion Matrix:</p>
<pre><code>from sklearn.metrics import confusion_matrix
labels = np.unique(y).tolist()
cf_matrixGNB = confusion_matrix(y_test, y_pred, labels=labels)

import seaborn as sns
import matplotlib.pyplot as plt

ax = sns.heatmap(cf_matrixGNB, annot=True, cmap='Blues')
ax.set_title('Confusion Matrix\n')
ax.set_xlabel('\nPredicted Values')
ax.set_ylabel('Actual Values ')
ax.xaxis.set_ticklabels(labels)
ax.yaxis.set_ticklabels(labels)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/K7zpE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K7zpE.png" alt="enter image description here" /></a></p>
<p>Another option would be to do a calculated field of <code>'Total_Goals'</code>, then see if it can predict on that. Obviously again, it has a little help in the obvious (if <code>'Total_Goals'</code> is 0 or 1, then <code>'BTTS'</code> will be 0.). But then if <code>'Total_Goals'</code> is 2 or more, it'll have to rely on the other features to try to work out if one of the teams got shut out.</p>
<p>Here's that example:</p>
<pre><code>from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.preprocessing import OneHotEncoder
import pandas as pd
import numpy as np
df = pd.read_csv("FIN.csv")
df['Date'] = pd.to_datetime(df["Date"])
df = df[(df["Date"] > '2020/04/01')]
df['BTTS'] = np.where((df.HG > 0) & (df.AG > 0), 1, 0)
#print(df.to_string())
df.dropna(inplace=True)
df['Total_Goals'] = df['HG'] + df['AG']
x = df[['Home', 'Away', 'Res', 'Total_Goals', 'PH', 'PD', 'PA', 'MaxH', 'MaxD', 'MaxA', 'AvgH', 'AvgD', 'AvgA']].values
y = df['BTTS'].values
np.set_printoptions(threshold=np.inf)
model = LogisticRegression()
ohe = OneHotEncoder(sparse=False)
x = ohe.fit_transform(x)
#print(x)
model.fit(x, y)
print(model.score(x, y))
x_train, x_test, y_train, y_test = train_test_split(x, y, shuffle=False)
model.fit(x_train, y_train)
print(model.score(x_test, y_test))
y_pred = model.predict(x_test)
print("accuracy:",
accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall:", recall_score(y_test, y_pred))
print("f1 score:", f1_score(y_test, y_pred))
from sklearn.metrics import confusion_matrix
labels = np.unique(y).tolist()
cf_matrixGNB = confusion_matrix(y_test, y_pred, labels=labels)
import seaborn as sns
import matplotlib.pyplot as plt
ax = sns.heatmap(cf_matrixGNB, annot=True, cmap='Blues')
ax.set_title('Confusion Matrix\n');
ax.set_xlabel('\nPredicted Values')
ax.set_ylabel('Actual Values ');
ax.xaxis.set_ticklabels(labels)
ax.yaxis.set_ticklabels(labels)
plt.show()
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>1.0
0.8
accuracy: 0.8
precision: 0.8536585365853658
recall: 0.7777777777777778
f1 score: 0.8139534883720929
</code></pre>
<p><a href="https://i.stack.imgur.com/fN9D4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fN9D4.png" alt="enter image description here" /></a></p>
<p>To predict on new data, you need the new data in the same form as the training data. You then also need to apply any transformations you fit on the training data to transform the new data:</p>
<pre><code>new_data = pd.DataFrame(
data = [['Haka', 'Mariehamn', 3.05, 3.66, 2.35, 3.05, 3.66, 2.52, 2.88, 3.48, 2.32]],
columns = ['Home', 'Away', 'PH', 'PD', 'PA', 'MaxH', 'MaxD', 'MaxA', 'AvgH', 'AvgD', 'AvgA']
)
to_predict = new_data[['Home', 'Away', 'PH', 'PD', 'PA', 'MaxH', 'MaxD', 'MaxA', 'AvgH', 'AvgD', 'AvgA']]
to_predict_encoded = ohe.transform(to_predict)
prediction = model.predict(to_predict_encoded)
prediction_prob = model.predict_proba(to_predict_encoded)
print(f'Predict: {prediction[0]} with {prediction_prob[0][0]} probability.')
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Predict: 0 with 0.8204957018099501 probability.
</code></pre>
|
numpy|scikit-learn
| 0
|
5,805
| 50,364,991
|
how to push live panda Dataframe and index it to fit it in my Tkinter table?
|
<p>I am trying to push my MQTT data to my tkinter table, which I created using the <a href="http://github.com/dmnfarrell/pandastable" rel="nofollow noreferrer">pandastable</a> module. I receive the data as a list, so I first created a CSV file, labeled it manually, and pushed my list to that CSV file. The table has two parts: first it loads the dataframe converted from my CSV file (the history part of the table), and then I need to push my most recent dataframe (which has the same format, with the CSV file's column index as its column index) to that table while it is open. I am also saving my recent dataframes to the CSV file, so this process can repeat every time I open the table. The problem is I can't figure out where I am going wrong.
This is my table script:</p>
this is my table script:</p>
<pre><code>import tkinter as tk
import pandas as pd
from pandastable import Table, TableModel
from threading import Thread
import time
import datetime
import numpy as np
#import mqtt_cloud_rec
#import tkintermqtt

prevframe = pd.read_csv('mqttresult.csv')

class TestApp(tk.Frame):
    """Basic test frame for the table"""
    def __init__(self, parent=None):
        self.parent = parent
        tk.Frame.__init__(self)
        self.main = self.master
        self.main.geometry('800x600+200+100')
        self.main.title('Mqtt Result Table')
        f = tk.Frame(self.main)
        f.pack(fill=tk.BOTH, expand=1)
        #df = TableModel.getSampleData(rows=5)
        self.table = pt = Table(f, dataframe=prevframe, showtoolbar=True)
        pt.show()
        self.startbutton = tk.Button(self.main, text='START', command=self.start)
        self.startbutton.pack(side=tk.TOP, fill=tk.X)
        self.stopbutton = tk.Button(self.main, text='STOP', command=self.stop)
        self.stopbutton.pack(side=tk.TOP, fill=tk.X)
        # self.table.showPlotViewer()
        return

    def update(self, data):
        table = self.table
        #plotter = table.pf
        #opts = plotter.mplopts
        #plotter.setOption('linewidth',3)
        #plotter.setOption('kind','line')
        #opts.widgets['linewidth'].set(3)
        #opts.widgets['kind'].set('line')
        date_today = str(datetime.date.today())
        time_today = time.strftime("%H:%M:%S")
        datalist = [date_today, time_today] + self.data
        datalist1 = np.array(datalist)
        datalist2 = pd.DataFrame(data=datalist1, columns=['Date','Time','power state','Motor state','Mode','Voltage','Current','Power Factor','KW','KWH','total Runtime'])
        #self.table = Table(dataframe=datalist2, showtoolbar=True)
        self.dataframe.loc[len(self.dataframe)] = datalist2
        table.model.df = self.dataframe
        table.redraw()
        #table.multiplecollist=range(0,10)
        #table.plotSelected()
        time.sleep(.1)
        if self.stop == True:
            return
        return

    def start(self):
        self.stop = False
        t = Thread(target=self.update)
        t.start()

    def stop(self):
        self.stop = True
        return

app = TestApp()
#launch the app
app.mainloop()
</code></pre>
|
<p>Convert this to a dictionary instead of a DataFrame, and I think it will work:</p>
<pre><code>datalist=[date_today,time_today]+self.data
datalist1=np.array(datalist)
datalist2=pd.DataFrame(data=datalist1 ,columns=['Date','Time','power state','Motor state','Mode','Voltage','Current','Power Factor','KW','KWH','total Runtime'])
</code></pre>
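<p>A minimal sketch of that change (hypothetical names, assuming <code>self.dataframe</code> has its columns in the same order as <code>datalist</code>; a plain list or dict can be assigned to <code>df.loc[len(df)]</code>, whereas a one-row DataFrame cannot):</p>
<pre><code>datalist = [date_today, time_today] + self.data     # plain list of scalars
self.dataframe.loc[len(self.dataframe)] = datalist  # appends one row in place
table.model.df = self.dataframe
table.redraw()
</code></pre>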
|
python-3.x|pandas|tkinter
| 0
|
5,806
| 62,834,697
|
copy of array gets overwritten in function
|
<p>I am trying to create an array <code>np.zeros((3, 3))</code> outside a function and use it inside the function over and over again. The reason is <code>numba</code>'s <code>cuda</code> implementation, which does not support array creation inside functions that are to be run on a GPU. So I create the aforementioned array <code>ar_ref</code> and pass it as an argument to <code>function</code>. <code>ar</code> creates a copy of <code>ar_ref</code> (this is supposed to be used as a "fresh" <code>np.zeros((3, 3))</code> copy). Then I perform some changes to <code>ar</code> and return it. But in the process, <code>ar_ref</code> gets overwritten inside the function by the last iteration of <code>ar</code>. How do I start every new iteration of the function with <code>ar = np.zeros((3, 3))</code> without having to call <code>np.zeros</code> inside <code>function</code>?</p>
<pre><code>import numpy as np

def function(ar_ref=None):
    for n in range(3):
        print(n)
        ar = ar_ref
        print(ar)
        for i in range(3):
            ar[i] = 1
        print(ar)
    return ar

ar_ref = np.zeros((3, 3))
function(ar_ref=ar_ref)
</code></pre>
<p>Output:</p>
<pre><code>0
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
[[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
1
[[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
[[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
2
[[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
[[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
</code></pre>
|
<p>Simple assignment only binds another name to the same array object, so when you change <code>ar</code>, <code>ar_ref</code> changes too. Use a shallow copy instead:</p>
<pre><code>import numpy as np
import copy

def function(ar_ref=None):
    for n in range(3):
        print(n)
        ar = copy.copy(ar_ref)
        print(ar)
        for i in range(3):
            ar[i] = 1
        print(ar)
    return ar

ar_ref = np.zeros((3, 3))
function(ar_ref=ar_ref)
</code></pre>
<p>output:</p>
<pre><code>0
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
[[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
1
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
[[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
2
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
[[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
</code></pre>
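<p>If allocation inside the function must be avoided entirely (as in the CUDA use case, where even <code>copy</code> may not be available), an alternative sketch is to reuse one preallocated buffer and reset it in place at the start of each iteration:</p>
<pre><code>import numpy as np

def function(ar):
    for n in range(3):
        ar[:] = 0          # reset the reused buffer in place, no new allocation
        for i in range(3):
            ar[i] = 1
    return ar

ar = np.zeros((3, 3))
function(ar)
</code></pre>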
|
python|arrays|numpy
| 2
|
5,807
| 54,485,768
|
Data Manipulation using Pandas
|
<p>I want to copy a value in a dataframe uptil next NaN.</p>
<p>Here is the dataframe I have:</p>
<pre><code> Description
0 091SS16 GASOILA THREAD SEALANT
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 3M07447 SCOTCH BRITE PAD
8 NaN
9 NaN
10 NaN
11 NaN
12 NaN
13 NaN
14 NaN
15 600B 6" BUNA-N GASKET
</code></pre>
<p>And this is my expected output:</p>
<pre><code>Description
0 091SS16 GASOILA THREAD SEALANT
1 091SS16 GASOILA THREAD SEALANT
2 091SS16 GASOILA THREAD SEALANT
3 091SS16 GASOILA THREAD SEALANT
4 091SS16 GASOILA THREAD SEALANT
5 091SS16 GASOILA THREAD SEALANT
6 091SS16 GASOILA THREAD SEALANT
7 3M07447 SCOTCH BRITE PAD
8 3M07447 SCOTCH BRITE PAD
9 3M07447 SCOTCH BRITE PAD
10 3M07447 SCOTCH BRITE PAD
11 3M07447 SCOTCH BRITE PAD
12 3M07447 SCOTCH BRITE PAD
13 3M07447 SCOTCH BRITE PAD
14 3M07447 SCOTCH BRITE PAD
15 600B 6" BUNA-N GASKET
</code></pre>
<p>Kindly help. Thank you!</p>
|
<p>You need <code>fillna</code> with the <code>ffill</code> method (or equivalently <code>DataFrame.ffill</code>):</p>
<pre><code>df = df.fillna(method='ffill')
</code></pre>
|
python|pandas
| 1
|
5,808
| 54,617,326
|
Cleaning of data in pandas
|
<p>I have a data frame in the following format:</p>
<pre><code> Col
Honda [edit]
Accord (4 models)[1]
Civic (4 models)[2]
Pilot (3 models)[1]
Toyota [edit]
Prius (4 models)[1]
Highlander (3 models)[4]
Ford [edit]
Explorer (2 models)[1]
</code></pre>
<p>I want data in the following format:</p>
<pre><code> A B
Honda Accord
Honda Civic
Honda Pilot
Toyota Prius
Toyota Highlander
</code></pre>
|
<p>Create a boolean mask testing for the string <code>[edit]</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>str.contains</code></a>, then split the column on the whitespace before the first <code>(</code> or <code>[</code>, replace non-matched values with <code>NaN</code> using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.where.html" rel="nofollow noreferrer"><code>where</code></a>, and forward-fill the missing values into column <code>A</code>. The <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.insert.html" rel="nofollow noreferrer"><code>insert</code></a> function puts the new column in the first position. Finally, remove rows where both columns hold the same value via <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> and create a default index with <code>reset_index</code>:</p>
<pre><code>mask = df['Col'].str.contains('[edit]', regex=False)
df['B'] = df.pop('Col').str.split(r'\s+\(|\s+\[', n=1).str[0]
df.insert(0, 'A', df['B'].where(mask).ffill())
df = df[df['A'] != df['B']].reset_index(drop=True)
print (df)
        A           B
0   Honda      Accord
1   Honda       Civic
2   Honda       Pilot
3  Toyota       Prius
4  Toyota  Highlander
5    Ford    Explorer
</code></pre>
<p>Another solution:</p>
<pre><code>items = []
for x in df['Col']:
    if x.endswith('[edit]'):
        # split on '[' rather than rstrip(' [edit]'), which strips characters
        # and would turn 'Ford' into 'For'
        a = x.split('[')[0].strip()
    else:
        b = x.split('(')[0].strip()
        items.append((a, b))

df = pd.DataFrame(items, columns=['A', 'B'])
print (df)

        A           B
0   Honda      Accord
1   Honda       Civic
2   Honda       Pilot
3  Toyota       Prius
4  Toyota  Highlander
5    Ford    Explorer
</code></pre>
|
python|pandas|dataframe
| 2
|
5,809
| 73,697,251
|
Skipping certain folders in Python
|
<p>Is there a way to skip certain folders? In the present code, I specify number of folders <code>N=20</code> i.e. the code analyzes all folders named <code>1,2,...,20</code>. However, in this range, I want to skip certain folders, say <code>10,15</code>. Is there a way to do so?</p>
<pre><code>import pandas as pd
import numpy as np
from functools import reduce
import statistics

A = []
X = []
N = 20  # Number of folders
for i in range(1, N):
    file_loc = f"C:\\Users\\USER\\OneDrive - Technion\\Research_Technion\\Python_PNM\\Sept7_2022\\220\\beta_1e-1\\Var_1\\10 iterations\\{i}\\Data_220_beta_1e-1_48.25_51.75ND.csv"
    df = pd.read_csv(file_loc)
    A = df["% of Nodes not visited"].to_numpy()
    A = [x for x in A if str(x) != 'nan']
    A = [eval(e) for e in A]
    X.append(A)
#print(X)
X = [x for x in X if min(x) != max(x)]
A = [reduce(lambda x, y: x + y, v) for v in zip(*X)]
print("A =", A)

Mean1 = []
Std1 = []
for i in range(0, len(A)):
    Mean = statistics.mean(A[i])
    Std = statistics.stdev(A[i])
    Mean1.append(Mean)
    Std1.append(Std)
print("mean =", *Mean1, sep='\n')
print("std =", *Std1, sep='\n')
</code></pre>
|
<p>Just add a test to skip the unwanted numbers:</p>
<pre><code>N = 20           # Number of folders
skip = {10, 15}  # using a set for efficiency

for i in range(1, N):
    if i in skip:
        continue
    # rest of code
</code></pre>
<p><em>NB. If you want to include 20, you should use <code>range(1, N+1)</code>.</em></p>
|
python|pandas|numpy|statistics
| 2
|
5,810
| 73,767,659
|
How to add rows according to other column
|
<p>Right now the result looks like this:</p>
<pre><code>file_name text 1
2a.txt 0 0.712518 0.61525 0.43918 0.2065 1 0.635078 0.81175 0.292786 0.0925
2b.txt 2 0.551273 0.5705 0.30198 0.0922 0 0.550212 0.31125 0.486563 0.2455
</code></pre>
<p>But I want to duplicate the rows according to the third column (as shown below). Is there an easy way to do this?</p>
<pre><code>file_name text
2a.txt 0 0.712518 0.61525 0.43918 0.2065
2a.txt 1 0.635078 0.81175 0.292786 0.0925
2b.txt 2 0.551273 0.5705 0.30198 0.0922
2b.txt 0 0.550212 0.31125 0.486563 0.2455
</code></pre>
|
<p>This should help:</p>
<pre><code>df = pd.melt(df,id_vars='file_name' ,value_vars=['text','1'])
df = df.drop('variable', axis=1)
df = df.sort_values(by = 'file_name')
</code></pre>
|
pandas|dataframe|duplicates
| 0
|
5,811
| 73,819,387
|
Can't read a .xlsx/.csv file while I have it open in my Excel
|
<p>I get this error every time I try to read a .xlsx/.csv file in my Jupyter Notebook using Pandas while the file is currently open in my Excel:</p>
<pre><code>import pandas as pd
df = pd.read_excel('Filename.xlsx')
PermissionError: [Errno 13] Permission denied: 'Filename.xlsx'
</code></pre>
<p>I have a friend who can read those files in his Jupyter without closing Excel. How can I do the same?</p>
|
<p>Close all your open Excel/CSV files, then try to read the file.</p>
<p>For CSV files:</p>
<pre><code>import pandas as pd
df = pd.read_csv('filename.csv')
</code></pre>
<p>For Excel files, you may first need to install an engine (<code>pip install xlrd</code>):</p>
<pre><code>import pandas as pd
df = pd.read_excel('filename.xlsx')
</code></pre>
|
python|pandas|dataframe|jupyter-notebook|data-science
| -1
|
5,812
| 71,303,701
|
How can I let my function return all values of a NumPY array columnwise?
|
<p>So I have this set of data:</p>
<pre><code>[[ 99.14931546 104.03852715 107.43534677 97.85230675 98.74986914
98.80833412 96.81964892 98.56783189]
[ 92.02628776 97.10439252 99.32066924 97.24584816 92.9267508
92.65657752 105.7197853 101.23162942]
[ 95.66253664 95.17750125 90.93318132 110.18889465 98.80084371
105.95297652 98.37481387 106.54654286]
[ 91.37294597 100.96781394 100.40118279 113.42090475 105.48508838
91.6604946 106.1472841 95.08715803]
[101.20862522 103.5730309 100.28690912 105.85269352 93.37126331
108.57980357 100.79478953 94.20019732]
[102.80387079 98.29687616 93.24376389 97.24130034 89.03452725
96.2832753 104.60344836 101.13442416]
[106.71751618 102.97585605 98.45723272 100.72418901 106.39798503
95.46493436 94.35373179 106.83273763]
[ 96.02548256 102.82360856 106.47551845 101.34745901 102.45651798
98.74767493 97.57544275 92.5748759 ]
[105.30350449 92.87730812 103.19258339 104.40518318 101.29326772
100.85447132 101.2226037 106.03868807]
[110.44484313 93.87155456 101.5363647 97.65393524 92.75048583
101.72074646 96.96851209 103.29147111]
[101.3514185 100.37372248 106.6471081 100.61742813 105.0320535
99.35999981 98.87007532 95.85284217]
[ 97.21315663 107.02874163 102.17642112 96.74630281 95.93799169
102.62384733 105.07475277 97.59572169]
[ 95.65982034 107.22482426 107.19119932 102.93039474 85.98839623
95.19184343 91.32093303 102.35313953]
[100.39303522 92.0108226 97.75887636 93.18884302 100.44940274
108.09423367 96.50342927 99.58664719]
[103.1521596 109.40523174 93.83969256 99.95827854 101.83462816
99.69982772 103.05289628 103.93383957]
[106.11454989 88.80221141 94.5081787 94.59300658 101.08830521
96.34622848 96.89244283 98.07122664]
[ 96.78266211 99.84251605 104.03478031 106.57052697 105.13668343
105.37011896 99.07551254 104.15899829]
[101.86186193 103.61720152 99.57859892 99.4889538 103.05541444
98.65912661 98.72774132 104.70526438]
[ 97.49594839 96.59385486 104.63817694 102.55198606 105.86078488
96.5937781 93.04610867 99.92159953]
[ 96.76814836 91.6779221 101.79132774 101.20773355 98.29243952
101.83845792 97.94046856 102.20618501]
[106.89005002 106.57364584 102.26648279 107.40064604 99.94318168
103.40412146 106.38276709 98.00253006]
[ 99.80873105 101.63973121 106.46476468 110.43976681 100.69156231
99.99579473 101.32113654 94.76253572]
[ 96.10020311 94.57421727 100.80409326 105.02389857 98.61325194
95.62359311 97.99762409 103.83852459]
[ 94.11176915 99.62387832 104.51786419 97.62787811 93.97853495
98.75108352 106.05042487 100.07721494]]
</code></pre>
<p>and now I want to print my data out columnwise <strong>using NumPy</strong>, so if the dataset above is the data, I expect to get this output back:</p>
<pre><code>[99.14931546, 92.02628776, 95.66253664, 91.37294597, 101.20862522, 102.80387079, 106.71751618, 96.02548256, 105.30350449, 110.44484313, 101.3514185, 97.21315663, 95.65982034, 100.39303522, 103.1521596, 106.11454989, 96.78266211, 101.86186193, 97.49594839, 96.76814836, 106.89005002, 99.80873105, 96.10020311, 94.11176915]
[104.03852715, 97.10439252, 95.17750125, 100.96781394, 103.5730309, 98.29687616, 102.97585605, 102.82360856, 92.87730812, 93.87155456, 100.37372248, 107.02874163, 107.22482426, 92.0108226, 109.40523174, 88.80221141, 99.84251605, 103.61720152, 96.59385486, 91.6779221, 106.57364584, 101.63973121, 94.57421727, 99.62387832]
[107.43534677, 99.32066924, 90.93318132, 100.40118279, 100.28690912, 93.24376389, 98.45723272, 106.47551845, 103.19258339, 101.5363647, 106.6471081, 102.17642112, 107.19119932, 97.75887636, 93.83969256, 94.5081787, 104.03478031, 99.57859892, 104.63817694, 101.79132774, 102.26648279, 106.46476468, 100.80409326, 104.51786419]
[97.85230675, 97.24584816, 110.18889465, 113.42090475, 105.85269352, 97.24130034, 100.72418901, 101.34745901, 104.40518318, 97.65393524, 100.61742813, 96.74630281, 102.93039474, 93.18884302, 99.95827854, 94.59300658, 106.57052697, 99.4889538, 102.55198606, 101.20773355, 107.40064604, 110.43976681, 105.02389857, 97.62787811]
[98.74986914, 92.9267508, 98.80084371, 105.48508838, 93.37126331, 89.03452725, 106.39798503, 102.45651798, 101.29326772, 92.75048583, 105.0320535, 95.93799169, 85.98839623, 100.44940274, 101.83462816, 101.08830521, 105.13668343, 103.05541444, 105.86078488, 98.29243952, 99.94318168, 100.69156231, 98.61325194, 93.97853495]
[98.80833412, 92.65657752, 105.95297652, 91.6604946, 108.57980357, 96.2832753, 95.46493436, 98.74767493, 100.85447132, 101.72074646, 99.35999981, 102.62384733, 95.19184343, 108.09423367, 99.69982772, 96.34622848, 105.37011896, 98.65912661, 96.5937781, 101.83845792, 103.40412146, 99.99579473, 95.62359311, 98.75108352]
[96.81964892, 105.7197853, 98.37481387, 106.1472841, 100.79478953, 104.60344836, 94.35373179, 97.57544275, 101.2226037, 96.96851209, 98.87007532, 105.07475277, 91.32093303, 96.50342927, 103.05289628, 96.89244283, 99.07551254, 98.72774132, 93.04610867, 97.94046856, 106.38276709, 101.32113654, 97.99762409, 106.05042487]
[98.56783189, 101.23162942, 106.54654286, 95.08715803, 94.20019732, 101.13442416, 106.83273763, 92.5748759, 106.03868807, 103.29147111, 95.85284217, 97.59572169, 102.35313953, 99.58664719, 103.93383957, 98.07122664, 104.15899829, 104.70526438, 99.92159953, 102.20618501, 98.00253006, 94.76253572, 103.83852459, 100.07721494]
</code></pre>
<p>This is what I have so far, which doesn't work at all.</p>
<pre><code>import numpy as np

data = np.genfromtxt('./data/normal_distribution.csv', delimiter=",")
j = len(data[0])
q = []
for x in data:
    for w in range(0, j):
        q.append(x[w])
print(q)
</code></pre>
<p>What changes should I make to correct my code?</p>
|
<p>Example for the first 2 samples (according to @enke):</p>
<pre><code>arr = np.array([[ 99.14931546, 104.03852715, 107.43534677, 97.85230675, 98.74986914,
98.80833412, 96.81964892, 98.56783189],
[ 92.02628776, 97.10439252, 99.32066924, 97.24584816, 92.9267508,
92.65657752, 105.7197853, 101.23162942]])
# tranpose array
print(arr.T)
[[ 99.14931546 92.02628776]
[104.03852715 97.10439252]
[107.43534677 99.32066924]
[ 97.85230675 97.24584816]
[ 98.74986914 92.9267508 ]
[ 98.80833412 92.65657752]
[ 96.81964892 105.7197853 ]
[ 98.56783189 101.23162942]]
</code></pre>
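<p>Applied to the question's code, the whole loop can be replaced by iterating over the transpose (a sketch):</p>
<pre><code>data = np.genfromtxt('./data/normal_distribution.csv', delimiter=",")
for col in data.T:        # data.T has shape (n_cols, n_rows)
    print(col.tolist())   # one original column per line
</code></pre>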
|
arrays|numpy|numpy-ndarray
| 0
|
5,813
| 71,180,682
|
Can't iterate through excel or .csv with pandas or openpyxl
|
<p>I'm trying to iterate through a column of a pandas dataframe (from a .csv with pandas, and from a .xlsx with openpyxl) and copy a row to a new dataframe if the string contains the substring "Leading Edge". I'm getting the error "TypeError: argument of type 'float' is not iterable", even though the columns and rows should be integers.
I've tried printing out sample values and it works fine, so I'm not sure why it isn't working with either a for loop with an int <code>i</code> as iterator, nor with the <code>.iter_rows</code> iterator (openpyxl).
Any help is greatly appreciated!</p>
<p>Using openpyxl</p>
<pre><code>import pandas as pd
from openpyxl import Workbook
from openpyxl import load_workbook

path = r"C:\Users\austi\Downloads\IVY 19 Clock Genes"
workbook = load_workbook(path + "\\newSheetfromJupyter.xlsx", read_only=True)
readSheet = workbook.active
wb = Workbook()
newSheet = wb.active
df = pd.DataFrame

for row in readSheet.iter_rows(min_row=2,
                               min_col=1,
                               max_col=18,
                               values_only=True):
    if "Leading Edge" in row[1]:  # ERROR CORRESPONDS TO HERE
        df.append((cell.value for cell in row[0:18]))
print(df)
</code></pre>
<p>Using pandas</p>
<pre><code>LE = dfWithCol.groupby(dfWithCol.tumor_region)
for i in range(len(dfWithCol.index)):
    if "Leading Edge" in dfWithCol.tumor_region[i]:  # ERROR HERE
        LEf = LE.get_group(dfWithCol.tumor_region[i])
        print(LEf)
</code></pre>
|
<p>You can't use <code>in</code> on a float (or an int); the right-hand side must be iterable. You can only iterate sequence types (such as <strong>list</strong>, <strong>str</strong>, and <strong>tuple</strong>) and some non-sequence types (like <strong>dict</strong>, <strong>file objects</strong>, and objects of any classes you define with an <strong>__iter__()</strong> method or a <strong>__getitem__()</strong> method).
I just added a <code>str()</code> to your code and it worked on my device.</p>
<pre><code>import pandas as pd
from openpyxl import Workbook
from openpyxl import load_workbook

workbook = load_workbook("test.xlsx")
readSheet = workbook.active
wb = Workbook()
newSheet = wb.active

rows = []
for row in readSheet.iter_rows(min_row=2,
                               min_col=1,
                               max_col=18,
                               values_only=True):
    if "Leading Edge" in str(row[1]):  # str() guards against float/NaN cells
        # with values_only=True, each row is already a tuple of plain values
        rows.append(row[0:18])

df = pd.DataFrame(rows)
print(df)
</code></pre>
|
python|pandas|openpyxl
| 0
|
5,814
| 71,239,632
|
Creating custom loss function, error with unstacking tensor in tensorflow, python
|
<p>I am creating a weighted Gaussian loss function for use in deep learning models and getting the following error message when running model.fit on the training data:</p>
<blockquote>
<p>ValueError: Dimension must be 2 but is 1 for '{{node
gaussian_loss/unstack_1}} = UnpackT=DT_FLOAT, axis=-1, num=2'
with input shapes: [?,1].</p>
</blockquote>
<p>attached below is the code for the function:</p>
<pre><code>def w_g_l(weight_train, weight_test, train_num):
    def gaussian_loss(y_true, y_pred):
        if y_true.get_shape()[0] == train_num:
            weight = weight_train
        else:
            weight = weight_test
        mu, sigma = tf.unstack(y_pred, num=2, axis=-1)
        truevals, dummy = tf.unstack(y_true, num=2, axis=-1)
        mu = tf.expand_dims(mu, -1)
        sigma = tf.expand_dims(sigma, -1)
        truevals = tf.expand_dims(truevals, -1)
        nll = (
            tf.math.square(truevals - mu) / (2.0 * tf.math.square(sigma))
            + tf.math.log(sigma) + tf.math.log(weight)
        )
        return tf.math.reduce_mean(nll) - tf.math.reduce_mean(tf.math.log(weight))
    return gaussian_loss
</code></pre>
<p>Any clues? It seems to be an issue with the <code>truevals, dummy = tf.unstack(y_true, num=2, axis=-1)</code> line, but I'm unsure what specifically can fix it.</p>
<p>Model is below:</p>
<pre><code>def build_model():
    model = keras.Sequential([
        layers.Dense(units=2, input_dim=2, activation='relu'),
        layers.Dense(units=12, activation='relu'),
        layers.Dense(units=2, activation='softplus')
    ])
    my_loss = w_g_l(weight_train, weight_test, 1148)
    loss = my_loss
    model.compile(loss=loss, optimizer=keras.optimizers.Adam(0.01), metrics=['mse', my_loss])
    return model
</code></pre>
|
<p>Your loss function is expecting <code>y_true</code> (which is data coming from your <code>y_train</code> that you passed to <code>fit</code>) to have two elements in the last dimension for unstacking in <code>truevals</code> and <code>dummy</code>.</p>
<p>One solution is making <code>truevals = y_true</code>, since you don't seem to have dummy values in your data.</p>
<p>Another solution is adding dummy values to your data like:</p>
<pre><code>dummy_data = numpy.zeros(y_train.shape)
if len(y_train.shape) == 1:
    y_train = numpy.stack([y_train, dummy_data], axis=-1)
else:
    y_train = numpy.concatenate([y_train, dummy_data], axis=-1)
</code></pre>
|
python|tensorflow|keras|deep-learning|loss-function
| 1
|
5,815
| 52,444,921
|
Save Numpy Array using Pickle
|
<p>I've got a NumPy array (130,000 x 3) that I would like to save using Pickle, with the following code. However, I keep getting the error "EOFError: Ran out of input" or "UnsupportedOperation: read" at the pkl.load line. This is my first time using Pickle, any ideas?</p>
<p>Thanks,</p>
<p>Anant</p>
<pre><code>import pickle as pkl
import numpy as np

arrayInput = np.zeros((1000, 2))  # Trial input
save = True
load = True
filename = path + 'CNN_Input'
fileObject = open(fileName, 'wb')

if save:
    pkl.dump(arrayInput, fileObject)
    fileObject.close()

if load:
    fileObject2 = open(fileName, 'wb')
    modelInput = pkl.load(fileObject2)
    fileObject2.close()

if arrayInput == modelInput:
    Print(True)
</code></pre>
|
<p>You should use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.save.html" rel="noreferrer">numpy.save</a> and <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.load.html#numpy.load" rel="noreferrer">numpy.load</a>.</p>
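<p>A minimal sketch of that approach (the filename here is arbitrary):</p>
<pre><code>import numpy as np

arrayInput = np.zeros((1000, 2))
np.save('CNN_Input.npy', arrayInput)            # writes a binary .npy file
modelInput = np.load('CNN_Input.npy')           # reads it back as an ndarray
print(np.array_equal(arrayInput, modelInput))   # True
</code></pre>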
|
python|numpy|pickle
| 52
|
5,816
| 52,439,468
|
Keras: How to take random samples for validation set?
|
<p>I'm currently training a Keras model whose corresponding fit call looks as follows:</p>
<pre><code>model.fit(X,y_train,batch_size=myBatchSize,epochs=myAmountOfEpochs,validation_split=0.1,callbacks=myCallbackList)
</code></pre>
<p><a href="https://github.com/keras-team/keras/issues/597#issuecomment-394642797" rel="nofollow noreferrer">This comment</a> on the Keras Github page explains the meaning of "validation_split=0.1":</p>
<blockquote>
<p>The validation data is not necessarily taken from every class and it
is just the last 10% (assuming that you ask for 10%) of the data.</p>
</blockquote>
<p>My question is now: Is there an easy way to randomly select, say, 10 % of my training data as validation data? The reason I would like to use randomly picked samples is that the last 10 % of the data don't necessarily contain all classes in my case.</p>
<p>Thank you very much.</p>
|
<p>Keras doesn't provide any more advanced feature than just taking a fraction of your training data for validation. If you need something more advanced, like stratified sampling to make sure classes are well represented in the sample, then you need to do this manually outside of Keras (using say, scikit-learn or numpy) and then pass that validation data to keras through the <code>validation_data</code> parameter in <code>model.fit</code></p>
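<p>A sketch of that manual route with scikit-learn, using the variable names from the question (<code>stratify</code> is optional but guarantees every class appears in the validation set):</p>
<pre><code>from sklearn.model_selection import train_test_split

X_tr, X_val, y_tr, y_val = train_test_split(
    X, y_train, test_size=0.1, shuffle=True, stratify=y_train)

model.fit(X_tr, y_tr,
          batch_size=myBatchSize, epochs=myAmountOfEpochs,
          validation_data=(X_val, y_val), callbacks=myCallbackList)
</code></pre>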
|
python|tensorflow|keras
| 3
|
5,817
| 52,206,505
|
toPandas() error using pyspark: 'int' object is not iterable
|
<p>I have a pyspark dataframe and I am trying to convert it to pandas using toPandas(); however, I am running into the error mentioned below.<br></p>
<p>I tried different options but got the same error:<br>1) limiting the data to just a few records<br>2) using collect() explicitly (which I believe toPandas() uses internally)</p>
<p>I explored many posts on SO, but AFAIK none covers this toPandas() issue.<br></p>
<p><em>Snapshot of my dataframe:-</em></p>
<pre><code>>>sc.version
2.3.0.2.6.5.0-292
>>print(type(df4), len(df4.columns), df4.count())
(<class 'pyspark.sql.dataframe.DataFrame'>, 13, 296327)
>>df4.printSchema()
root
|-- id: string (nullable = true)
|-- gender: string (nullable = true)
|-- race: string (nullable = true)
|-- age: double (nullable = true)
|-- status: integer (nullable = true)
|-- height: decimal(6,2) (nullable = true)
|-- city: string (nullable = true)
|-- county: string (nullable = true)
|-- zipcode: string (nullable = true)
|-- health: double (nullable = true)
|-- physical_inactivity: double (nullable = true)
|-- exercise: double (nullable = true)
|-- weight: double (nullable = true)
>>df4.limit(2).show()
+------+------+------+----+-------+-------+---------+-------+-------+------+-------------------+--------+------------+
|id |gender|race |age |status |height | city |county |zipcode|health|physical_inactivity|exercise|weight |
+------+------+------+----+-------+-------+---------+-------+-------+------+-------------------+--------+------------+
| 90001| MALE| WHITE|61.0| 0| 70.51|DALEADALE|FIELD | 29671| null| 29.0| 49.0| 162.0|
| 90005| MALE| WHITE|82.0| 0| 71.00|DALEBDALE|FIELD | 36658| 16.0| null| 49.0| 195.0|
+------+------+------+----+-------+-------+---------+-------+-------+------+-------------------+--------+------------+
*had to mask few features due to data privacy concerns
</code></pre>
<p><em>Error:-</em></p>
<pre><code>>>df4.limit(10).toPandas()
'int' object is not iterable
Traceback (most recent call last):
File "/repo/python2libs/pyspark/sql/dataframe.py", line 1968, in toPandas
pdf = pd.DataFrame.from_records(self.collect(), columns=self.columns)
File "/repo/python2libs/pyspark/sql/dataframe.py", line 467, in collect
return list(_load_from_socket(sock_info, BatchedSerializer(PickleSerializer())))
File "/repo/python2libs/pyspark/rdd.py", line 142, in _load_from_socket
port, auth_secret = sock_info
TypeError: 'int' object is not iterable
</code></pre>
|
<p>Our custom repository of libraries had a pyspark package that was clashing with the pyspark provided by the Spark cluster; somehow having both works in the Spark shell but not in a notebook.
<br>Renaming the pyspark library in the custom repository resolved the issue!</p>
|
pandas|apache-spark|pyspark|apache-zeppelin
| 1
|
5,818
| 60,488,705
|
Why Numpy's int datatype is not of the same type as Numpy's int64 datatype?
|
<p>Why does this give me false?</p>
<pre><code>isinstance(np.int32(3.0),np.int)
</code></pre>
|
<p>Because <code>np.int</code> is the same as python <code>int</code> data type. </p>
<p>Check <a href="https://stackoverflow.com/a/46416257/8353711">Difference between np.int, np.int_, int, and np.int_t in cython</a> for more info.</p>
<pre><code>>>> np.int
<class 'int'>
</code></pre>
<p>To check with <code>numpy.int32</code>, you can try with <code>np.int_</code>,</p>
<pre><code>>>> isinstance(np.int32(3.0),np.int_)
True
</code></pre>
|
python|python-3.x|numpy
| 1
|
5,819
| 60,417,369
|
dataframe.to_sql into Teradata (this user does not have permission to create on LABUSERS) Datalab Table name
|
<p>I have an issue with the <code>dataframe.to_sql</code> when trying to use this function</p>
<p>The <code>dataframe.to_sql</code> call does not recognize or separate the data lab name and the table name; instead, it takes the whole string as a table name. So it tries to create the table at the default root level and gives the error that this user does not have permission to create on <em>LABUSERS</em>.</p>
<pre><code>from sqlalchemy import create_engine
engine = create_engine(f'teradata://{username}:{password}@tdprod:22/')
df.to_sql('data_lab.table_name', engine)
</code></pre>
<p>How can I use <code>df.to_sql</code> function and specify the datalab?</p>
|
<p>Use the <code>schema</code> parameter (str, optional): "Specify the schema (if database flavor supports this). If None, use default schema."</p>
<p>See the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html" rel="nofollow noreferrer">to_sql documentation</a>.</p>
|
python|pandas|sqlalchemy|teradata
| 1
|
5,820
| 60,636,991
|
Assigning value in a column based on values on many other columns in dataframe
|
<p>I have a dataframe that contains 5 columns. I want to update one column based on the other 4 columns. The dataframe looks like this:</p>
<pre><code>from via to x y
3 2 13 in out
3 2 15 in out
3 2 21 in out
13 2 3
15 2 13
21 2 13
1 12 2
1 12 2
1 12 22
2 12 1 in
2 12 22 in out
22 12 2
</code></pre>
<p>The idea is to fill column <code>x</code> depending on the values in the other four columns. The sequence should be:
check whether <code>x</code> and <code>y</code> both have values; if yes, take the corresponding <code>(from, via)</code> pair and compare it against the <code>(to, via)</code> pair in every row. Wherever they are equal, assign the <code>y</code> value belonging to that <code>(from, via)</code> row to column <code>x</code> of the matching row.
For example, <code>(from=3, via=2)</code> has both <code>x</code> and <code>y</code> values, so I take <code>(from=3, via=2)</code>, compare it with <code>(to, via)</code> in all rows, and assign its <code>y</code> value (<code>out</code>) to the rows that have <code>(to=3, via=2)</code>.</p>
<p>the final result should be like that:</p>
<pre><code>from via to x y
3 2 13 in out
3 2 15 in out
3 2 21 in
13 2 3 out
15 2 13 out
21 2 13
1 12 2 out
1 12 2 out
1 12 22 out
2 12 1 in
2 12 22 in out
22 12 2 out
</code></pre>
<p>how can i do that in pandas dataframe?</p>
|
<p>I cannot find exactly the same result, but I have used the described algo:</p>
<pre><code># identify the lines where a change will occur and store the index and the new value
tmp = df.assign(origix=df.index).merge(df[~df['x'].isna() & ~df['y'].isna()],
                                       left_on=['from', 'via'], right_on=['to', 'via'],
                                       suffixes=('_x', '')).set_index('origix')
# apply changes in dataframe:
df.loc[tmp.index, 'x'] = tmp['y']
</code></pre>
<p>It gives the updated dataframe.</p>
|
python|pandas|dataframe|multiple-columns
| 0
|
5,821
| 72,681,091
|
Subtract df1 from df2, df2 from df3 and so on from all data from folder
|
<p>I have a few data frames as CSV files in the folder.</p>
<p>example1_result.csv</p>
<p>example2_result.csv</p>
<p>example3_result.csv</p>
<p>example4_result.csv</p>
<p>example5_result.csv</p>
<p>Each of my data frames looks like the following:</p>
<pre><code> TestID Result1 Result2 Result3
0 0 5 1
1 1 0 4
2 2 1 2
3 3 0 0
4 4 3 0
5 5 0 1
</code></pre>
<p>I want to subtract example1_result.csv from example2_result.csv on the Result1, Result2, and Result3 columns and save the result as a new data frame, result1.csv. Then I want the same subtraction of example2_result.csv from example3_result.csv, and so on.</p>
<p>I want to do it using python scripts. Please help me as I am a novice in python. Thanks.</p>
|
<pre><code>import pandas as pd

df1 = pd.read_csv("file1.csv")
df2 = pd.read_csv("file2.csv")

dfresult = pd.DataFrame()
for col in ["Result1", "Result2", "Result3"]:  # subtract every result column
    dfresult[col] = df2[col] - df1[col]
dfresult.to_csv("result.csv", index=False)
</code></pre>
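<p>To run this over the whole sequence of files (a sketch, assuming the <code>exampleN_result.csv</code> naming from the question):</p>
<pre><code>import pandas as pd

cols = ['Result1', 'Result2', 'Result3']
for n in range(1, 5):                    # pairs (1,2), (2,3), (3,4), (4,5)
    df_a = pd.read_csv(f'example{n}_result.csv')
    df_b = pd.read_csv(f'example{n + 1}_result.csv')
    result = df_b[cols] - df_a[cols]     # column-wise subtraction
    result.insert(0, 'TestID', df_a['TestID'])
    result.to_csv(f'result{n}.csv', index=False)
</code></pre>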
|
python|pandas|dataframe|subtraction
| 0
|
5,822
| 40,373,897
|
tensorflow map_fn indexing
|
<p>I am trying to use the tensorflow map function but I am stuck at an indexing problem. </p>
<p>In simple python, I am trying to do the following operation:-</p>
<pre><code>for i in range(1, 25):
    u[i] = uold[i] - K * (uold[i] - uold[i-1])
</code></pre>
<p>In tensorflow, I am encountering an indexing issue due to the <code>(uold[i] - uold[i-1])</code> term. Currently I have written the statement as:</p>
<pre><code>u = tf.map_fn(lambda u: uold - K * (uold - uold), uold)
</code></pre>
<p>In the current equation the second term is always zero. I am not sure how to change it to get the desired output. </p>
|
<p>You probably want to create a tensor that is shifted right by one position (using <a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/array_ops.html#pad" rel="nofollow noreferrer">tf.pad()</a>) and then calculate the difference, matching the question's recurrence, e.g.</p>
<pre><code>temp = uold - K * (uold - uold_shifted_to_right)
</code></pre>
<p>Then slice off the first element of <code>temp</code> (using <a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/array_ops.html#slice" rel="nofollow noreferrer">tf.slice()</a>), since the recurrence only defines <code>u[i]</code> for i >= 1.</p>
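<p>A concrete sketch of that idea (hypothetical values; <code>u[0]</code> is kept equal to <code>uold[0]</code> since the original loop starts at i = 1):</p>
<pre><code>import tensorflow as tf

uold = tf.constant([1.0, 3.0, 2.0, 5.0])
K = 0.5
shifted = tf.pad(uold[:-1], [[1, 0]])      # [0, uold[0], ..., uold[n-2]]
u = uold - K * (uold - shifted)            # vectorized recurrence body
u = tf.concat([uold[:1], u[1:]], axis=0)   # leave u[0] untouched
</code></pre>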
|
python|tensorflow
| 1
|
5,823
| 40,721,057
|
Multiply two matrices by columns with Python
|
<p>I have two matrix:</p>
<pre><code>A = [a11 a12
a21 a22]
B = [b11 b12
b21 b22]
</code></pre>
<p>And I want to multiply all their columns (without loops) in order to obtain the matrix:</p>
<pre><code>C =[a11*b11 a11*b12 a12*b11 a12*b12
a21*b21 a21*b22 a22*b21 a22*b22]
</code></pre>
<p>I've tried with </p>
<pre><code>>>> C = np.prod(A,B,axis=0)
</code></pre>
<p>but prod doesn't accept two input matrix. Neither np.matrix.prod.</p>
<p>Thanks in advance. </p>
|
<p>We could use <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow noreferrer"><code>broadcasting</code></a> for a vectorized solution -</p>
<pre><code>(A[...,None]*B[:,None]).reshape(A.shape[0],-1)
</code></pre>
<p><strong>Philosophy :</strong> In terms of vectorized/broadcasting language, I would describe this as <em>spreading</em> or putting the second dimension of the input arrays against each other, while keeping their first dimensions aligned. This spreading is done by introducing new axes with <a href="https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#numpy.newaxis" rel="nofollow noreferrer"><code>None/np.newaxis</code></a> for these two inputs and then simply multiplying each other.</p>
<p><strong>Mathematical view :</strong> Let's use a bit more mathematical view of it with the help of a generic example. Consider input arrays having different number of columns -</p>
<pre><code>In [504]: A = np.random.rand(2,3)
In [505]: B = np.random.rand(2,4)
</code></pre>
<p>First off, extend the dimensions and check their shapes -</p>
<pre><code>In [506]: A[...,None].shape
Out[506]: (2, 3, 1)
In [507]: B[:,None].shape
Out[507]: (2, 1, 4)
</code></pre>
<p>Now, perform the element-wise multiplication, which will perform these multiplications in a broadcasted manner. Take a closer look at the output's shape -</p>
<pre><code>In [508]: (A[...,None]*B[:,None]).shape
Out[508]: (2, 3, 4)
</code></pre>
<p>So, the singleton dimensions (dimension with length = 1) introduced by the use of <code>None/np.newaxis</code> would be the ones along which elements of the respective arrays would be broadcasted under the hood before being multiplied. This under-the-hood broadcasting paired with the respective operation (multiplication in this case) is done in a very efficient manner. </p>
<p>Finally, we reshape this <code>3D</code> array to <code>2D</code> keeping the number of rows same as that of the original inputs.</p>
<p><strong>Sample run :</strong></p>
<pre><code>In [494]: A
Out[494]:
array([[2, 3],
[4, 5]])
In [495]: B
Out[495]:
array([[12, 13],
[14, 15]])
In [496]: (A[...,None]*B[:,None]).reshape(A.shape[0],-1)
Out[496]:
array([[24, 26, 36, 39],
[56, 60, 70, 75]])
</code></pre>
<p><strong><code>NumPy matrix</code> type as inputs</strong> </p>
<p>For <a href="https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.matrix.html" rel="nofollow noreferrer"><code>NumPy matrix types</code></a> as the inputs, we could use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.asmatrix.html" rel="nofollow noreferrer"><code>np.asmatrix</code></a> that would simply create view into the inputs. Using those views, the broadcasted element-wise multiplication would be performed, finally resulting in a <code>2D</code> array after the reshaping. So, the last step would be to convert back to <code>np.matrix</code> type. Let's use the same sample inputs to demonstrate the implementation -</p>
<pre><code>In [553]: A
Out[553]:
matrix([[2, 3],
[4, 5]])
In [554]: B
Out[554]:
matrix([[12, 13],
[14, 15]])
In [555]: arrA = np.asarray(A)
In [556]: arrB = np.asarray(B)
In [557]: np.asmatrix((arrA[...,None]*arrB[:,None]).reshape(A.shape[0],-1))
Out[557]:
matrix([[24, 26, 36, 39],
[56, 60, 70, 75]])
</code></pre>
|
python|numpy|matrix|vectorization
| 8
|
5,824
| 40,577,096
|
why loss changes value after I add additional inference with reuse = True
|
<p>In tensorflow example cifar10, the loss value changes after I add one more inference with reuse = True to the graph. </p>
<p>Originally:</p>
<pre><code>2016-11-13 06:08:04.936044: step 0, loss = 4.68 (6.5 examples/sec;
19.787 sec/batch)
</code></pre>
<p>After my change: </p>
<pre><code>2016-11-13 06:00:50.400917: step 0, loss = 7.05 (6.4 examples/sec; 20.109 sec/batch)
</code></pre>
<p>I don't understand why. All the changes I made are the following:</p>
<p>1) In <code>cifar10_train.py</code>, I added a line, </p>
<pre><code>logits = cifar10.inference(images, reuse = False)
logits2 = cifar10.inference(images, reuse=True)
</code></pre>
<p>2) In <code>cifar10.py</code>, I added reuse to <code>inference()</code></p>
<pre><code>def inference(images, reuse):
with tf.variable_scope('conv1', reuse) as scope:
......
</code></pre>
<p>Then I found the loss value is quite different. </p>
<p>Originally:</p>
<pre><code>2016-11-13 06:08:04.936044: step 0, loss = 4.68 (6.5 examples/sec; 19.787 sec/batch)
</code></pre>
<p>After my change:</p>
<pre><code>2016-11-13 06:00:50.400917: step 0, loss = 7.05 (6.4 examples/sec; 20.109 sec/batch)
</code></pre>
<p>Why is this?</p>
|
<p>The <code>reuse</code> parameter being True means logits and logits2 use the same model (the same set of variables) to produce their outputs; if reuse is False, logits and logits2 come from different models.
For more information on what I said, you can read this: <a href="https://www.tensorflow.org/programmers_guide/variable_scope" rel="nofollow noreferrer">https://www.tensorflow.org/programmers_guide/variable_scope</a></p>
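<p>For reference, a sketch of how <code>reuse</code> is typically wired through (TF 1.x API; note that <code>reuse</code> should be passed by keyword, since the second positional argument of <code>tf.variable_scope</code> is <code>default_name</code>, not <code>reuse</code>):</p>
<pre><code>import tensorflow as tf  # TF 1.x

def inference(images, reuse):
    with tf.variable_scope('conv1', reuse=reuse) as scope:
        # reuse=True makes get_variable return the existing 'conv1/weights'
        # instead of creating a second, independently initialized copy
        kernel = tf.get_variable('weights', shape=[5, 5, 3, 64])
        # ... rest of the layer ...

logits = inference(images, reuse=False)   # creates conv1/weights
logits2 = inference(images, reuse=True)   # reuses the same variables
</code></pre>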
|
tensorflow
| 0
|
5,825
| 40,368,984
|
Solving a BVP on a fixed non-uniform grid in python without interpolating
|
<p>I am aware of <code>scipy.solve_bvp</code> but it requires that you interpolate your variables which I do not want to do. </p>
<p>I have a boundary value problem of the following form:</p>
<p><code>y1'(x) = -c1*f1(x)*f2(x)*y2(x) - f3(x)</code></p>
<p><code>y2'(x) = f4(x)*y1 + f1(x)*y2(x)</code></p>
<p><code>y1(x=0)=0, y2(x=1)=0</code></p>
<p>I have values for <code>x=[0, 0.0001, 0.025, 0.3, ... 0.9999999, 1]</code> on a non-uniform grid and values for all of the variables/functions at only those values of <code>x</code>. </p>
<p>How can I solve this BVP? </p>
|
<p>This is a new function, and I don't have it on my <code>scipy</code> version (0.17), but I found the source in <code>scipy/scipy/integrate/_bvp.py</code> (github).</p>
<p>The relevant pull request is <a href="https://github.com/scipy/scipy/pull/6025" rel="nofollow noreferrer">https://github.com/scipy/scipy/pull/6025</a>, last April.</p>
<p>It is based on a paper and MATLAB implementation, </p>
<pre><code> J. Kierzenka, L. F. Shampine, "A BVP Solver Based on Residual
Control and the Maltab PSE", ACM Trans. Math. Softw., Vol. 27,
Number 3, pp. 299-316, 2001.
</code></pre>
<p>The <code>x</code> mesh handling appears to be:</p>
<pre><code>while True:
....
solve_newton
....
insert_1, = np.nonzero((rms_res > tol) & (rms_res < 100 * tol))
insert_2, = np.nonzero(rms_res >= 100 * tol)
nodes_added = insert_1.shape[0] + 2 * insert_2.shape[0]
if m + nodes_added > max_nodes:
status = 1
if verbose == 2:
nodes_added = "({})".format(nodes_added)
print_iteration_progress(iteration, max_rms_res, m,
nodes_added)
...
if nodes_added > 0:
x = modify_mesh(x, insert_1, insert_2)
h = np.diff(x)
y = sol(x)
</code></pre>
<p>where <code>modify_mesh</code> add nodes to <code>x</code> based on:</p>
<pre><code>insert_1 : ndarray
Intervals to each insert 1 new node in the middle.
insert_2 : ndarray
Intervals to each insert 2 new nodes, such that divide an interval
into 3 equal parts.
</code></pre>
<p>From this I deduce that </p>
<ul>
<li><p>you can track the addition of nodes with the <code>verbose</code> parameter</p></li>
<li><p>nodes are added, but not removed. So the output mesh should include all of your input points.</p></li>
<li><p>I assume nodes are added to improve resolution in certain segments of the problem</p></li>
</ul>
<p>This is based on reading the code, and not verified with test code. You may be the only person to be asking about this function on SO, and one of the few to have actually used it.</p>
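<p>To make this concrete, here is a small sketch (my own toy problem, not from the original post) showing how <code>verbose=2</code> reports mesh refinement and how the output mesh retains the input points:</p>
<pre><code>import numpy as np
from scipy.integrate import solve_bvp

# Toy BVP: y'' = -y with y(0) = 0, y(pi/2) = 1 (exact solution: sin(x))
def fun(x, y):
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1])

x = np.array([0, 0.0001, 0.3, 0.9, 1.2, np.pi / 2])  # non-uniform input mesh
y = np.zeros((2, x.size))

sol = solve_bvp(fun, bc, x, y, verbose=2)  # prints iterations and nodes added
print(sol.x)  # the input points, possibly with extra nodes inserted between them
</code></pre>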
|
python|numpy|math|scipy|differential-equations
| 1
|
5,826
| 40,703,751
|
using Fourier transforms to do convolution?
|
<p>According to the <a href="https://en.wikipedia.org/wiki/Convolution_theorem#Convolution_theorem_for_inverse_Fourier_transform" rel="nofollow noreferrer">Convolution theorem</a>, we can compute a convolution via Fourier transforms, since convolution corresponds to multiplication in the frequency domain.</p>
<p>Using Python and Scipy, my code is below but not correct.
Can you help me and explain it?</p>
<pre><code>import tensorflow as tf
import sys
from scipy import signal
from scipy import linalg
import numpy as np
x = [[1 , 2] , [7 , 8]]
y = [[4 , 5] , [3 , 4]]
print "conv:" , signal.convolve2d(x , y , 'full')
new_x = np.fft.fft2(x)
new_y = np.fft.fft2(y)
print "fft:" , np.fft.ifft2(np.dot(new_x , new_y))
</code></pre>
<p>The result of code:</p>
<pre><code>conv: [[ 4 13 10]
[31 77 48]
[21 52 32]]
fft: [[ 20.+0.j 26.+0.j]
[ 104.+0.j 134.+0.j]]
</code></pre>
<p>I'm confused!</p>
|
<p>The problem may be in the discrepancy between the discrete and continuous convolutions. The convolution kernel (i.e. y) will extend beyond the boundaries of x, and these regions need accounting for in the convolution.</p>
<p>scipy.signal.convolve will by default pad the out of bounds regions with 0s, which will bias results:
<a href="https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.signal.convolve2d.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.signal.convolve2d.html</a></p>
<p>The Fourier multiplication will not do this by default - you could test this by making padded x, y arrays and comparing the results.</p>
<p>The discrepancy between such techniques should diminish as the kernel size becomes much less than the image dimensions.</p>
<p>As a further note - you should not use the dot product between new_x, new_y. Instead, just multiply the arrays with the * operator. </p>
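<p>To illustrate with the question's toy arrays, a zero-padded version (a sketch; note the elementwise <code>*</code> in place of <code>np.dot</code>) reproduces the 'full' convolution:</p>
<pre><code>import numpy as np
from scipy import signal

x = np.array([[1, 2], [7, 8]])
y = np.array([[4, 5], [3, 4]])

print(signal.convolve2d(x, y, 'full'))  # direct convolution, for reference

# pad both inputs to the full output size before transforming,
# then multiply elementwise and invert
shape = (x.shape[0] + y.shape[0] - 1, x.shape[1] + y.shape[1] - 1)
fx = np.fft.fft2(x, shape)
fy = np.fft.fft2(y, shape)
print(np.round(np.fft.ifft2(fx * fy).real))  # matches the direct result
</code></pre>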
<p>Hope this helps.</p>
|
python|numpy|filter|fft|convolution
| 4
|
5,827
| 61,894,629
|
ValueError: Length of values does not match length of index in nested loop
|
<p>I'm trying to remove the stopwords in each row of my column. Since I already <code>word_tokenized</code> the column with <code>nltk</code>, each row is now a list of tokens. I'm trying to remove the stopwords with this nested list comprehension, but it says <code>ValueError: Length of values does not match length of index</code>. How can I fix this?</p>
<pre><code>import pandas as pd
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
data = pd.read_csv(r"D:/python projects/read_files/spam.csv",
encoding = "latin-1")
data = data[['v1','v2']]
data = data.rename(columns = {'v1': 'label', 'v2': 'text'})
stopwords = set(stopwords.words('english'))
data['text'] = data['text'].str.lower()
data['new'] = [word_tokenize(row) for row in data['text']]
data['new'] = [word for new in data['new'] for word in new if word not in stopwords]
</code></pre>
<p>My text data</p>
<pre><code>data['text'].head(5)
Out[92]:
0 go until jurong point, crazy.. available only ...
1 ok lar... joking wif u oni...
2 free entry in 2 a wkly comp to win fa cup fina...
3 u dun say so early hor... u c already then say...
4 nah i don't think he goes to usf, he lives aro...
Name: text, dtype: object
</code></pre>
<p>After i <code>word_tokenized</code> it with nltk</p>
<pre><code>data['new'].head(5)
Out[89]:
0 [go, until, jurong, point, ,, crazy.., availab...
1 [ok, lar, ..., joking, wif, u, oni, ...]
2 [free, entry, in, 2, a, wkly, comp, to, win, f...
3 [u, dun, say, so, early, hor, ..., u, c, alrea...
4 [nah, i, do, n't, think, he, goes, to, usf, ,,...
Name: new, dtype: object
</code></pre>
<p>The Traceback</p>
<pre><code>runfile('D:/python projects/NLP_nltk_first.py', wdir='D:/python projects')
Traceback (most recent call last):
File "D:\python projects\NLP_nltk_first.py", line 36, in <module>
data['new'] = [new for new in data['new'] for word in new if word not in stopwords]
File "C:\Users\Ramadhina\Anaconda3\lib\site-packages\pandas\core\frame.py", line 3487, in __setitem__
self._set_item(key, value)
File "C:\Users\Ramadhina\Anaconda3\lib\site-packages\pandas\core\frame.py", line 3564, in _set_item
value = self._sanitize_column(key, value)
File "C:\Users\Ramadhina\Anaconda3\lib\site-packages\pandas\core\frame.py", line 3749, in _sanitize_column
value = sanitize_index(value, self.index, copy=False)
File "C:\Users\Ramadhina\Anaconda3\lib\site-packages\pandas\core\internals\construction.py", line 612, in sanitize_index
raise ValueError("Length of values does not match length of index")
ValueError: Length of values does not match length of index
</code></pre>
|
<p>Read the error message carefully:</p>
<blockquote>
<p>ValueError: Length of values does not match length of index</p>
</blockquote>
<p>The "values" in this case is the stuff on the right of the <code>=</code>:</p>
<pre class="lang-py prettyprint-override"><code>values = [word for new in data['new'] for word in new if word not in stopwords]
</code></pre>
<p>The "index" in this case is the row index of the DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>index = data.index
</code></pre>
<p>The <code>index</code> here always has the same number of rows as the DataFrame itself.</p>
<p>The problem is that <code>values</code> is too long for the <code>index</code> -- i.e. they are too long for the DataFrame. If you inspect your code this should be immediately obvious. If you still don't see the problem, try this:</p>
<pre class="lang-py prettyprint-override"><code>data['text_tokenized'] = [word_tokenize(row) for row in data['text']]
values = [word for new in data['text_tokenized'] for word in new if word not in stopwords]
print('N rows:', data.shape[0])
print('N new values:', len(values))
</code></pre>
<p>As for how to fix the problem -- it depends entirely on what you're trying to achieve. One option is to "explode" the data (also note the use of <code>.map</code> instead of a list comprehension):</p>
<pre class="lang-py prettyprint-override"><code>data['text_tokenized'] = data['text'].map(word_tokenize)
# Flatten the token lists without a nested list comprehension
tokens_flat = data['text_tokenized'].explode()
# Join your labels w/ your flattened tokens, if desired
data_flat = data[['label']].join(tokens_flat)
# Add a 2nd index level to track token appearance order,
# might make your life easier
data_flat['token_id'] = data.groupby(level=0).cumcount()
data_flat = data_flat.set_index('token_id', append=True)
</code></pre>
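<p>Alternatively, if the goal is simply one filtered token list per row (my reading of the original intent, not part of the error itself), map over the tokenized column so the result keeps exactly one value per index entry:</p>
<pre class="lang-py prettyprint-override"><code>data['new'] = data['text_tokenized'].map(
    lambda tokens: [w for w in tokens if w not in stopwords]
)
</code></pre>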
<hr>
<p>As an unrelated tip, you can make your CSV processing more efficient by only loading the columns you need, as follows:</p>
<pre class="lang-py prettyprint-override"><code>data = pd.read_csv(r"D:/python projects/read_files/spam.csv",
encoding="latin-1",
usecols=["v1", "v2"])
</code></pre>
|
python|pandas|for-loop|nltk|list-comprehension
| 3
|
5,828
| 61,930,274
|
fastest way to get max value of each masked np.array for many masks?
|
<p>I have two numpy arrays of the same shape. One contains information that I am interested in, and the other contains a bunch of integers that can be used as mask values.</p>
<p>In essence, I want to loop through each unique integer to get each mask for the array, then filter the main array using that mask and find the max value of the filtered array.</p>
<p>For simplicity, lets say the arrays are:</p>
<pre><code>arr1 = np.random.rand(10000,10000)
arr2 = np.random.randint(low=0, high=1000, size=(10000,10000))
</code></pre>
<p>right now I'm doing this:</p>
<pre><code>maxes = {}
ids = np.unique(arr2)
for id in ids:
max_val = arr1[np.equal(arr2, id)].max()
maxes[id] = max_val
</code></pre>
<p>My arrays are a lot bigger and this is painfully slow; I am struggling to find a quicker way of doing this... maybe there's some kind of creative method I'm not aware of. I would really appreciate any help.</p>
<p>EDIT</p>
<p>let's say the majority of arr2 is actually 0 and I don't care about the 0 id, is it possible to speed it up by dropping this entire chunk from the search??</p>
<p>i.e.</p>
<p><code>arr2[:, 0:4000] = 0</code></p>
<p>and just return the maxes for ids > 0 ??</p>
<p>much appreciated..</p>
|
<h3>Generic bin-based reduction strategies</h3>
<p>Listed below are a few approaches to tackle such scenarios where we need to perform bin-based reduction operations. Essentially, we are given two arrays; one supplies the bins and the other the values, and we reduce the values within each bin.</p>
<p><strong>Approach #1 :</strong> One strategy would be to sort <code>arr1</code> based on <code>arr2</code>. Once we have them both sorted in that same order, we find the group start and stop indices and then with appropriate <a href="https://numpy.org/doc/stable/reference/generated/numpy.ufunc.reduceat.html" rel="nofollow noreferrer"><code>ufunc.reduceat</code></a>, we do our slice-based reduction operation. That's all there is!</p>
<p>Here's the implementation -</p>
<pre><code>def binmax(bins, values, reduceat_func):
''' Get binned statistic from two 1D arrays '''
sidx = bins.argsort()
bins_sorted = bins[sidx]
grpidx = np.flatnonzero(np.r_[True,bins_sorted[:-1]!=bins_sorted[1:]])
max_per_group = reduceat_func(values[sidx],grpidx)
out = dict(zip(bins_sorted[grpidx], max_per_group))
return out
out = binmax(arr2.ravel(), arr1.ravel(), reduceat_func=np.maximum.reduceat)
</code></pre>
<p>It's applicable across ufuncs that have their corresponding <code>ufunc.reduceat</code> methods.</p>
<p><strong>Approach #2 :</strong> We can also leverage <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binned_statistic.html#scipy-stats-binned-statistic" rel="nofollow noreferrer"><code>scipy.stats.binned_statistic</code></a>, which is basically a generic utility to do some of the common reduction operations based on binned array values -</p>
<pre><code>from scipy.stats import binned_statistic
def binmax_v2(bins, values, statistic):
''' Get binned statistic from two 1D arrays '''
num_labels = bins.max()+1
R = np.arange(num_labels+1)
Mx = binned_statistic(bins, values, statistic=statistic, bins=R)[0]
idx = np.flatnonzero(~np.isnan(Mx))
out = dict(zip(idx, Mx[idx].astype(int)))
return out
out = binmax_v2(arr2.ravel(), arr1.ravel(), statistic='max')
</code></pre>
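<p>Regarding the edit in the question: if the <code>0</code> id dominates and is not wanted, it can simply be filtered out before the reduction (a sketch building on <code>binmax</code> above):</p>
<pre><code>b, v = arr2.ravel(), arr1.ravel()
keep = b != 0  # skip the dominant, unwanted 0 id entirely
out = binmax(b[keep], v[keep], reduceat_func=np.maximum.reduceat)
</code></pre>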
|
python|numpy|mask
| 2
|
5,829
| 61,756,355
|
How to change a value in pandas df when itering rows
|
<pre><code> counter = 0
for _ in df.iterrows():
print(df.loc[df['descrizione'] == alimenti[counter][0]])
counter += 1
</code></pre>
<p>I have a dataframe and I want to change a value of a different column ('quantita') in the same row of<br>
(df.loc[df['descrizione'] == alimenti[counter][0]])</p>
<p><strong>this is the df</strong> </p>
<pre><code> descrizione famiglia parte edibile ... vitamina c vitamina e quantita
240 Arance frutta 80 ... 50 0 0
</code></pre>
<p>[1 rows x 23 columns]</p>
<p>For example, when I reach <code>'descrizione' == 'Arance'</code> I want to change <code>'quantita'</code>, something like <code>df['quantita'] = 50</code>, with output like the one below.</p>
<pre><code>descrizione famiglia parte edibile ... vitamina c vitamina e quantita
240 Arance frutta ... 50 0 50
</code></pre>
<p>How can I change this value in the df ?</p>
|
<p>You don't need to manually iterate over rows; use a boolean mask with <code>.loc</code> and put the target column after the comma:</p>
<pre><code>df.loc[df['descrizione'] == alimenti[counter][0], 'quantita'] = 50
</code></pre>
|
python|pandas
| 1
|
5,830
| 58,151,779
|
Search column for specific phrases and count the amount of times they appear in the column and plot to bar graph
|
<p>I am searching a column for each month of the year. The column is formatted like "01-Jan-2018", and I want to find how many times "Jan-2018" appears in it, then count that and plot it on a bar graph showing the quantities for "Jan-2018", "Feb-2018", etc. There should be 12 bars on the graph, perhaps using count or sum. I am pulling the data from a CSV using pandas and python.</p>
<p>I have tried printing it out to the console with some success, but I am getting confused as to the correct way to search a portion of the date.</p>
<pre><code> import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import csv
import seaborn as sns
data = pd.read_csv(r'C:\Users\rmond\Downloads\PS_csvFile1.csv', error_bad_lines=False, encoding="ISO-8859-1", skiprows=6)
cols = data.columns
cols = cols.map(lambda x: x.replace(' ', '_') if isinstance(x, (str)) else x)
data.columns = cols
print(data.groupby('Case_Date').mean().plot(kind='bar'))
</code></pre>
<p>I am expecting a bar graph that shows the total quantity for each month, so there should be 12 bars. But I am not sure how to search the column 12 times, each time looking only at the month and year while excluding the day.</p>
|
<p>IIUC, this is what you need.</p>
<p>Let's work with the below dataframe as input dataframe.</p>
<pre><code> date
0 1/31/2018
1 2/28/2018
2 2/28/2018
3 3/31/2018
4 4/30/2018
5 5/31/2018
6 6/30/2018
7 6/30/2018
8 7/31/2018
9 8/31/2018
10 9/30/2018
11 9/30/2018
12 9/30/2018
13 9/30/2018
14 10/31/2018
15 11/30/2018
16 12/31/2018
</code></pre>
<p>The lines below will plot the count for each month as a bar graph. Once the column is a datetime object, a lot of operations become much easier and the contents of the column are much more flexible. With that, you don't need to search strings for the name of the month.</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
df['my'] = df.date.dt.strftime('%b-%Y')
ax = df.groupby('my', sort=False)['my'].value_counts().plot(kind='bar')
ax.set_xticklabels(df.my, rotation=90)
</code></pre>
<p><strong>Output</strong></p>
<p><a href="https://i.stack.imgur.com/nIrky.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nIrky.png" alt="enter image description here"></a></p>
|
python|pandas|csv|matplotlib|seaborn
| 0
|
5,831
| 57,993,200
|
How to extract a geopandas plot as a numpy array that consists of numerical values of the pixels?
|
<p>I have a GeoDataFrame and I want to get a numpy array that corresponds to the GeoDataFrame.plot(). </p>
<p>At the moment, my code looks like this:</p>
<pre><code>import numpy as np
import geopandas as gpd
from shapely.geometry import Polygon
import matplotlib.pyplot as plt
from PIL import Image
# Create GeoDataFrame
poly_list = [Polygon([[0, 0], [1, 0], [1, 1], [0, 1]])]
polys_gdf = gpd.GeoDataFrame(geometry=poly_list)
# Save plot with matplotlib
plt.ioff()
polys_gdf.plot()
plt.savefig('plot.png')
plt.close()
# Open file and convert to array
img = Image.open('plot.png')
arr = np.array(img.getdata())
</code></pre>
<p>This is a minimal working example. My actual problem is that I have a list of thousands of GeoDataFrames, 'list_of_gdf'.</p>
<p>My first idea was to just run that in a loop:</p>
<pre><code>arr_list = []
for element in list_of_gdf:
plt.ioff()
element.plot()
plt.savefig('plot.png')
plt.close()
img = Image.open('plot.png')
arr_list.append(np.array(img.getdata()))
</code></pre>
<p>This seems like it could be done in a faster way, instead of saving and opening every single .png-file for example. Any ideas?</p>
|
<p>I found a working solution for me. Instead of saving and opening every picture as .png, I use matplotlib's agg backend to "access the figure canvas as an RGB string and then convert it to an array" (<a href="https://matplotlib.org/3.1.0/gallery/misc/agg_buffer.html" rel="nofollow noreferrer">https://matplotlib.org/3.1.0/gallery/misc/agg_buffer.html</a>).</p>
<pre><code>arr_list = []
for element in list_of_gdf:
plt.close('all')
fig, ax = plt.subplots()
ax.axis('off')
element.plot(ax = ax)
fig.canvas.draw()
arr = np.array(fig.canvas.renderer.buffer_rgba())
arr_list.append(arr)
</code></pre>
|
python|matplotlib|geopandas|shapely
| 1
|
5,832
| 34,057,165
|
Numpy: Clever way of accomplishing v[np.arange(v.shape[0]), col_indices] = 1?
|
<p>Suppose I have a matrix v:</p>
<pre><code>0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
</code></pre>
<p>I also have a vector col_indices = [0,0,1,2,2,1] that indicates in which column I should put a 1 for each row of matrix v.</p>
<p>The result of the task, in this case, should be:</p>
<pre><code>1 0 0
1 0 0
0 1 0
0 0 1
0 0 1
0 1 0
</code></pre>
<p>The following code works:</p>
<pre><code>v[np.arange(v.shape[0]), col_indices] = 1
</code></pre>
<p>But I was wondering if there is a clever way to accomplish this, because in the code above I have to create a vector just to index the matrix and that seems wasteful.</p>
<p>I have also tried the following code, but it doesn't do what I want:</p>
<pre><code>v[:, col_indices] = 1
</code></pre>
|
<p>That IS the clever way of doing the indexing.</p>
<p>Look at these timings. Generating that array does not take long. Actually indexing those points takes much longer.</p>
<pre><code>In [208]: x=np.zeros((10000,10000))
In [209]: timeit np.arange(x.shape[0]),np.arange(x.shape[1])
10000 loops, best of 3: 23.5 us per loop
In [210]: timeit x[np.arange(x.shape[0]),np.arange(x.shape[1])]=1
1 loops, best of 3: 1.88 ms per loop
</code></pre>
<p>A similar question from a couple of weeks ago is
<a href="https://stackoverflow.com/questions/33735987/numpy-shorthand-for-taking-jagged-slice">numpy shorthand for taking jagged slice</a></p>
<p>where the poster wanted something simple like <code>a[:, entries_of_interest]</code> as opposed to <code>a[np.arange(a.shape[0]), entries_of_interest]</code>. I argued that this is just a special case of <code>a[I, J]</code> (for any pair of matching indexing arrays). </p>
|
python|numpy
| 4
|
5,833
| 34,275,251
|
applying step function when reassigning pandas row
|
<p>I have two pandas tables, <code>d</code> and <code>num_original_introns</code>. They are both indexed with the same non-numeric index. I want to apply a step function to transform <code>d</code> based on values in <code>d</code> and <code>num_original_introns</code>, like so:</p>
<pre><code>d["HasOriginalIntrons"] = d["HasOriginalIntrons"] >= 0.5 * num_original_introns["NumberIntrons"] - 0.5 if num_original_introns["NumberIntrons"] != 0 else False
</code></pre>
<p>But this gives the error</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>I know that this is invalid, and it is not possible to apply a pair of conditionals like this, but I can't seem to find an alternative from googling. How can I do this?</p>
|
<p>To have multiple logical conditions, each one needs to be enclosed in parentheses with an ampersand in-between. For example:</p>
<pre><code>d["HasOriginalIntrons"] = (num_original_introns["NumberIntrons"] != 0) & \
(
d["HasOriginalIntrons"] >=
(0.5 * num_original_introns["NumberIntrons"] - 0.5)
)
</code></pre>
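<p>Equivalently, a sketch of an alternative not in the original answer: <code>np.where</code> expresses the same conditional directly (this assumes both Series share the same index, as stated in the question):</p>
<pre><code>import numpy as np

d["HasOriginalIntrons"] = np.where(
    num_original_introns["NumberIntrons"] != 0,
    d["HasOriginalIntrons"] >= 0.5 * num_original_introns["NumberIntrons"] - 0.5,
    False,
)
</code></pre>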
|
python|pandas
| 0
|
5,834
| 34,287,449
|
Python Numpy unsupported operand types 'list' - 'list'
|
<p>I have a problem: I want to subtract one list from another. For that I convert the python lists to numpy arrays, but it fails.
For example, <code>wealthRS</code> is the list. I create a copy:
<code>wealthRSCopy = wealthRS</code>
Then I try to subtract, but I get an error (<code>unsupported operand types</code>).
Here is the screenshot.<a href="https://i.stack.imgur.com/XH5gU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XH5gU.png" alt="enter image description here"></a></p>
|
<p><strong>Edit with answer:</strong></p>
<p>Your initial lists have lists as their elements. These lists are of different lengths, so casting to NumPy arrays makes arrays of dtype object, i.e. the elements of your arrays are <em>lists</em>. See here: <a href="https://stackoverflow.com/a/33987165/4244912">https://stackoverflow.com/a/33987165/4244912</a></p>
<p>When subtracting NumPy arrays, it does elementwise subtraction, that is, it subtracts the elements (which in your case are lists) of one array from the respective elements in the other, which is why you are getting the error message you are (ie. subtraction is not supported for type 'list').</p>
<p>Quick example:</p>
<pre><code>In [1]: import numpy as np
In [2]: A=np.array([[1,2],[],[1,2,3,4]])
In [3]: A[0]
Out[3]: [1, 2]
In [4]: A[0].append(3) #<-- The first element is a list!
In [5]: A
Out[5]: array([[1, 2, 3], [], [1, 2, 3, 4]], dtype=object) #<-- The first element (a list) has changed.
</code></pre>
<p>Here I reproduce your error:</p>
<pre><code>In [35]: B, C = np.array([[1,2],[3]]), np.array([[4,5],[6]]) # Note the differing sizes of the nested lists.
In [36]: C-B
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-36-f4554df570db> in <module>()
----> 1 C-B
TypeError: unsupported operand type(s) for -: 'list' and 'list'
</code></pre>
<p>So you need to make sure that you can sanely cast your lists to arrays by making each inner list the same length. Then they should be cast to NumPy arrays of float dtype and behave the way you expect.</p>
<p><strong>Original post</strong></p>
<p>I don't think your code snippet created a copy, and it looks like you are still subtracting lists, not numpy arrays.</p>
<p>If <code>wealthRS</code> is a list, then <code>wealthRSCopy = wealthRS</code> does not actually create a copy: both names refer to the same list object, so changing one will change the other.</p>
<p>For instance: </p>
<pre><code>In [1]: a = [1,2,3]
In [2]: b = a
In [3]: b[0] = 10 # change the first item in 'b'
In [4]: b
Out[4]: [10, 2, 3]
In [5]: a # <-- 'a' has changed too!
Out[5]: [10, 2, 3]
</code></pre>
<p>One way to create copies which are independent of each other is by using slices.</p>
<pre><code>In [6]: c = a[:] # <-- slice containing the whole list
In [6]: c[0] = 15
In [7]: a
Out[7]: [10, 2, 3]
In [8]: c
Out[8]: [15, 2, 3]
</code></pre>
<p>Edit: For the rest of your question: Could you try this for me?</p>
<pre><code>In [1]: import numpy as np
In [2]: a, b = [[[1]]], [[[3]]]
In [3]: np.array(b) - np.array(a)
Out[3]: array([[[2]]])
</code></pre>
<p>I can't figure out why your subtraction isn't working unless the array elements are lists themselves, but I don't know how that could happen.</p>
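<p>A minimal sketch of the "make the inner lists equal length" fix (using 0 as a placeholder value; <code>np.nan</code> may suit your data better):</p>
<pre><code>import numpy as np

a = [[1, 2], [3]]
b = [[4, 5], [6]]

# pad the shorter inner lists so NumPy builds a numeric 2D array
n = max(len(row) for row in a + b)
A = np.array([row + [0] * (n - len(row)) for row in a], dtype=float)
B = np.array([row + [0] * (n - len(row)) for row in b], dtype=float)
print(B - A)  # elementwise subtraction now works
</code></pre>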
|
python|arrays|list|numpy
| 1
|
5,835
| 36,759,987
|
How to normalize scipy's convolve2d when working with images?
|
<p>I'm using scipy's convolve2d:</p>
<pre><code>for i in range(0, 12):
R.append(scipy.signal.convolve2d(self.img, h[i], mode = 'same'))
</code></pre>
<p>After convolution all values are in magnitudes of 10000s, but considering I'm working with images, I need them to be in the range of 0-255. How do I normalize it?</p>
|
<p>Assuming that you want to normalize within one single image, you can simply use <code>im_out = im_out / im_out.max() * 255</code>.</p>
<p>You could also normalize the kernel or the original image.</p>
<p>Example below.</p>
<pre><code>import scipy.signal
import numpy as np
import matplotlib.pyplot as plt
from skimage import color

im = plt.imread('dice.jpg')
gray_img = color.rgb2gray(im)
print(im.max())

# make some kind of kernel, there are many ways to do this...
t = 1 - np.abs(np.linspace(-1, 1, 16))
kernel = t.reshape(16, 1) * t.reshape(1, 16)
kernel /= kernel.sum()  # kernel should sum to 1!  :)

im_out = scipy.signal.convolve2d(gray_img, kernel, mode='same')
im_out = im_out / im_out.max() * 255
print(im_out.max())

plt.subplot(2, 1, 1)
plt.imshow(im)
plt.subplot(2, 1, 2)
plt.imshow(im_out)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/nnk1x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nnk1x.png" alt="enter image description here"></a></p>
|
python|opencv|numpy|scipy
| 2
|
5,836
| 54,843,448
|
How to "zip" Tensorflow Dataset and train in Keras correctly?
|
<p>I have a <code>train_x.csv</code> and a <code>train_y.csv</code>, and I'd like to train a model using Dataset API and Keras interface. This what I'm trying to do:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import tensorflow as tf
tf.enable_eager_execution()
N_FEATURES = 10
N_SAMPLES = 100
N_OUTPUTS = 2
BATCH_SIZE = 8
EPOCHS = 5
# prepare fake data
train_x = pd.DataFrame(np.random.rand(N_SAMPLES, N_FEATURES))
train_x.to_csv('train_x.csv', index=False)
train_y = pd.DataFrame(np.random.rand(N_SAMPLES, N_OUTPUTS))
train_y.to_csv('train_y.csv', index=False)
train_x = tf.data.experimental.CsvDataset('train_x.csv', [tf.float32] * N_FEATURES, header=True)
train_y = tf.data.experimental.CsvDataset('train_y.csv', [tf.float32] * N_OUTPUTS, header=True)
dataset = ... # What to do here?
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(N_OUTPUTS, input_shape=(N_FEATURES,)),
tf.keras.layers.Activation('linear'),
])
model.compile('sgd', 'mse')
model.fit(dataset, steps_per_epoch=N_SAMPLES/BATCH_SIZE, epochs=EPOCHS)
</code></pre>
<p>What's the right way to implement this <code>dataset</code>?</p>
<p>I tried <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#zip" rel="nofollow noreferrer"><code>Dataset.zip</code></a> API like <code>dataset = tf.data.Dataset.zip((train_x, train_y))</code> but it seems not working(code <a href="https://pastebin.com/7nX8bxus" rel="nofollow noreferrer">here</a> and error <a href="https://pastebin.com/ikfZZ88E" rel="nofollow noreferrer">here</a>). I also read <a href="https://stackoverflow.com/a/46140332/2666624">this</a> answer, it's working but I'd like a non-functional model declaration way.</p>
|
<p>The problem is in the input shape of your dense layer: it should match the shape of the tensors your dataset yields, e.g.
<code>tf.keras.layers.Dense(N_OUTPUTS, input_shape=(features_shape,))</code> where <code>features_shape</code> is the per-sample feature size.</p>
<p>You might also encounter problems with <code>model.fit()</code>'s <code>steps_per_epoch</code> parameter; it should be of type <code>int</code>:
<code>model.fit(dataset, steps_per_epoch=int(N_SAMPLES/BATCH_SIZE), epochs=EPOCHS)</code></p>
<p>Edit 1:
In case you need multiple labels, you can do</p>
<pre><code>def parse_func(data, labels):
    return data, tf.stack(labels, axis=0)

dataset = tf.data.Dataset.zip((train_x, train_y))
dataset = dataset.map(parse_func)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.repeat()
</code></pre>
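<p>One caveat (my own observation, not from the original answer): <code>CsvDataset</code> yields each record as a tuple of scalar column tensors, so the features may need the same stacking treatment as the labels:</p>
<pre><code>def parse_func(data, labels):
    # stack the per-column scalars into one feature vector and one label vector
    return tf.stack(data, axis=0), tf.stack(labels, axis=0)
</code></pre>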
|
python|tensorflow|keras
| 2
|
5,837
| 54,891,381
|
Change day to specific entries in pandas dataframe
|
<p>I have a dataframe in pandas which has an error in the index: each entry between 23:00:00 and 23:59:59 has a wrong date. I would need to subtract one day (i.e. 24 hours) to each entry between those two times. </p>
<p>I know that I can obtain the entries between those two times as <code>df[df.hour == 23]</code>, where <code>df</code> is my dataframe. However, can I modify the day only for those specific entries of the dataframe index? </p>
<p>Resetting would take me more time, since my dataframe index is not evenly spaced as you can see from the figure below (the step between two consecutive entries is once 15 minutes and once 30 minutes). Note also from the figure the wrong date in the last three entries: it should be 2018-02-05 and not 2018-02-06.</p>
<p>I tried to do this</p>
<pre><code>df[df.index.hour == 23].index.day = df[df.index.hour == 23].index.day - 1
</code></pre>
<p>but I get <code>AttributeError: can't set attribute</code></p>
<p>Sample data:</p>
<pre><code>2018-02-05 22:00:00 271.8000
2018-02-05 22:30:00 271.5600
2018-02-05 22:45:00 271.4400
2018-02-06 23:15:00 271.3750
2018-02-06 23:30:00 271.3425
2018-02-06 00:00:00 271.2700
2018-02-06 00:15:00 271.2300
2018-02-06 00:45:00 271.1500
2018-02-06 01:00:00 271.1475
2018-02-06 01:30:00 271.1425
2018-02-06 01:45:00 271.1400
</code></pre>
<p>Expected output:</p>
<pre><code>2018-02-05 22:00:00 271.8000
2018-02-05 22:30:00 271.5600
2018-02-05 22:45:00 271.4400
2018-02-05 23:15:00 271.3750
2018-02-05 23:30:00 271.3425
2018-02-06 00:00:00 271.2700
2018-02-06 00:15:00 271.2300
2018-02-06 00:45:00 271.1500
2018-02-06 01:00:00 271.1475
2018-02-06 01:30:00 271.1425
2018-02-06 01:45:00 271.1400
</code></pre>
<p><a href="https://i.stack.imgur.com/8Y4aM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8Y4aM.png" alt="enter image description here"></a></p>
|
<p>I solved the issue myself by using <a href="https://stackoverflow.com/a/40428133/6014171">this answer</a>. This is my code:</p>
<pre><code>import calendar

as_list = df.index.tolist()
new_index = []
for idx, entry in enumerate(as_list):
    if entry.hour == 23:
        if entry.day != 1:
            new_index.append(as_list[idx].replace(day=as_list[idx].day - 1))
        else:
            # last day of the previous month (note: month - 1 breaks for January)
            new_day = calendar.monthrange(as_list[idx].year, as_list[idx].month - 1)[1]
            new_index.append(as_list[idx].replace(day=new_day, month=entry.month - 1))
    else:
        new_index.append(entry)
df.index = new_index
</code></pre>
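<p>A vectorized alternative (my own sketch, assuming a proper <code>DatetimeIndex</code>) avoids the loop and handles month and year boundaries automatically:</p>
<pre><code>import pandas as pd

mask = df.index.hour == 23
df.index = df.index.where(~mask, df.index - pd.Timedelta(days=1))
</code></pre>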
|
python|pandas|datetime|dataframe
| 0
|
5,838
| 49,361,068
|
pandas: convert nested json to flattened table
|
<p>I have a JSON of the following structure:</p>
<pre><code>{
"a": "a_1",
"b": "b_1",
"c": [{
"d": "d_1",
"e": "e_1",
"f": [],
"g": "g_1",
"h": "h_1"
}, {
"d": "d_2",
"e": "e_2",
"f": [],
"g": "g_2",
"h": "h_2"
}, {
"d": "d_3",
"e": "e_3",
"f": [{
"i": "i_1",
"j": "j_1",
"k": "k_1",
"l": "l_1",
"m": []
}, {
"i": "i_2",
"j": "j_2",
"k": "k_2",
"l": "l_2",
"m": [{
"n": "n_1",
"o": "o_1",
"p": "p_1",
"q": "q_1"
}]
}],
"g": "g_3",
"h": "h_3"
}]
}
</code></pre>
<p>And I want to convert it into pandas data frame of the following type:</p>
<p><a href="https://i.stack.imgur.com/uWxfI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uWxfI.png" alt="enter image description here"></a></p>
<p>How can I achieve that?</p>
<hr>
<p>Following is my attempt, but the direction is completely different.</p>
<p>code:</p>
<pre><code>from pandas.io.json import json_normalize
def flatten_json(y):
out = {}
def flatten(x, name=''):
if type(x) is dict:
for a in x:
flatten(x[a], name + a + '_')
elif type(x) is list:
i = 0
for a in x:
flatten(a, name + str(i) + '_')
i += 1
else:
out[name[:-1]] = x
flatten(y)
return out
sample_object = { "a": "a_1", "b": "b_1", "c": [{ "d": "d_1", "e": "e_1", "f": [], "g": "g_1", "h": "h_1" }, { "d": "d_2", "e": "e_2", "f": [], "g": "g_2", "h": "h_2" }, { "d": "d_3", "e": "e_3", "f": [{ "i": "i_1", "j": "j_1", "k": "k_1", "l": "l_1", "m": [] }, { "i": "i_2", "j": "j_2", "k": "k_2", "l": "l_2", "m": [{ "n": "n_1", "o": "o_1", "p": "p_1", "q": "q_1" }] }], "g": "g_3", "h": "h_3" }] }
intermediate_json = flatten_json(sample_object)
flattened_df = json_normalize(intermediate_json)
transposed_df = flattened_df.T
print(transposed_df.to_string())
</code></pre>
<p>OUTPUT:</p>
<pre><code> 0
a a_1
b b_1
c_0_d d_1
c_0_e e_1
c_0_g g_1
c_0_h h_1
c_1_d d_2
c_1_e e_2
c_1_g g_2
c_1_h h_2
c_2_d d_3
c_2_e e_3
c_2_f_0_i i_1
c_2_f_0_j j_1
c_2_f_0_k k_1
c_2_f_0_l l_1
c_2_f_1_i i_2
c_2_f_1_j j_2
c_2_f_1_k k_2
c_2_f_1_l l_2
c_2_f_1_m_0_n n_1
c_2_f_1_m_0_o o_1
c_2_f_1_m_0_p p_1
c_2_f_1_m_0_q q_1
c_2_g g_3
c_2_h h_3
</code></pre>
|
<hr>
<p><strong>Before Reading</strong></p>
<ul>
<li>This does the job as presented in the question; if there are additional requirements, please share them.</li>
<li>This can surely be improved; take it as one possible solution to your problem.</li>
<li>Please note that the key to solving your problem lies in <a href="https://stackoverflow.com/a/10756547/3941704">looping through a nested dictionary</a>, which can be done with <em>recursive functions</em>.</li>
</ul>
<hr>
<p><strong>Solution</strong></p>
<p>With <code>_dict</code> as your nested dictionary, you can use a recursive function and a few tricks to achieve your goal.</p>
<p>I first write a function <code>iterate_dict</code> that recursively reads your dictionary and stores the results into a new <code>dict</code> whose keys/values are your final <code>pd.DataFrame</code> columns content:</p>
<pre><code>def iterate_dict(_dict, _fdict,level=0):
for k in _dict.keys(): #Iterate over keys of a dict
#If value is a string update _fdict
if isinstance(_dict[k],str):
#If first seen, initialize your dict
if not k in _fdict.keys():
_fdict[k] = [-1]*(level-1) #Trick to shift columns
#Append the value
_fdict[k].append(_dict[k])
#If a list
if isinstance(_dict[k],list):
if not k in _fdict.keys(): #If first seen key initialize
_fdict[k] = [-1]*(level) #Same previous trick
#Extend with required range (0, 1, 2 ...)
_fdict[k].extend([i for i in range(len(_dict[k]))])
else:
if len(_dict[k]) > 0:
_start = 0 if len(_fdict[k]) == 0 else (int(_fdict[k][-1])+1)
_fdict[k].extend([i for i in range(_start,_start+len(_dict[k]))]) #Extend
for _d in _dict[k]: #If value of key is a list recall iterate_dict
iterate_dict(_d,_fdict,level=level+1)
</code></pre>
<p>And another function, <code>to_series</code>, to transform the values of the future columns into <code>pd.Series</code>, replacing the placeholder <code>-1</code> values with <code>np.nan</code>:</p>
<pre><code>def to_series(_fvalues):
if _fvalues[0] == -1:
_fvalues.insert(0,-1) #Trick to shift again
return pd.Series(_fvalues).replace(-1,np.nan) #Replace -1 with nan in case
</code></pre>
<p>Then use it like this:</p>
<pre><code>_fdict = dict() #The future columns content
iterate_dict(_dict,_fdict) #Do the Job
print(_fdict)
{'a': ['a_1'],
'b': ['b_1'],
'c': [0, 1, 2],
'd': ['d_1', 'd_2', 'd_3'],
'e': ['e_1', 'e_2', 'e_3'],
'f': [-1, 0, 1],
'g': ['g_1', 'g_2', 'g_3'],
'h': ['h_1', 'h_2', 'h_3'],
'i': [-1, 'i_1', 'i_2'],
'j': [-1, 'j_1', 'j_2'],
'k': [-1, 'k_1', 'k_2'],
'l': [-1, 'l_1', 'l_2'],
'm': [-1, -1, 0],
'n': [-1, -1, 'n_1'],
'o': [-1, -1, 'o_1'],
'p': [-1, -1, 'p_1'],
'q': [-1, -1, 'q_1']}
#Here you can see a shift is required, use your custom to_series() function
</code></pre>
<p>Then create your <code>pd.Dataframe</code>:</p>
<pre><code>df = pd.DataFrame(dict([ (k,to_series(v)) for k,v in _fdict.items() ])).ffill()
#Don't forget to do a forward fillna as needed
print(df)
a b c d e f g h i j k l m n o \
0 a_1 b_1 0.0 d_1 e_1 NaN g_1 h_1 NaN NaN NaN NaN NaN NaN NaN
1 a_1 b_1 1.0 d_2 e_2 NaN g_2 h_2 NaN NaN NaN NaN NaN NaN NaN
2 a_1 b_1 2.0 d_3 e_3 0.0 g_3 h_3 i_1 j_1 k_1 l_1 NaN NaN NaN
3 a_1 b_1 2.0 d_3 e_3 1.0 g_3 h_3 i_2 j_2 k_2 l_2 0.0 n_1 o_1
p q
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 p_1 q_1
</code></pre>
|
python|json|pandas|dataframe
| 0
|
5,839
| 27,963,577
|
Optimizing histogram distance metric for two matrices in Python
|
<p>I have two matrices <code>A</code> and <code>B</code>, each with a size of <code>NxM</code>, where <code>N</code> is the number of samples and <code>M</code> is the size of histogram bins. Thus, each row represents a histogram for that particular sample.</p>
<p>What I would like to do is to compute the <code>chi-square</code> distance between two matrices for a different pair of samples. Therefore, each row in the matrix <code>A</code> will be compared to all rows in the other matrix <code>B</code>, resulting a final matrix <code>C</code> with a size of <code>NxN</code> and <code>C[i,j]</code> corresponds to the <code>chi-square</code> distance between <code>A[i]</code> and <code>B[j]</code> histograms.</p>
<p>Here is my python code that does the job:</p>
<pre><code>import numpy as np

def chi_square(histA, histB):
    eps = 1.e-10
    d = sum((histA - histB)**2 / (histA + histB + eps))
    return 0.5 * d

def matrix_cost(A, B):
    a, _ = A.shape
    b, _ = B.shape
    C = np.zeros((a, b))
    for i in range(a):
        for j in range(b):
            C[i, j] = chi_square(A[i], B[j])
    return C
</code></pre>
<p>Currently, for a <code>100x70</code> matrix, this entire process takes 0.1 seconds. </p>
<p>Is there any way to improve this performance?</p>
<p>I would appreciate any thoughts or recommendations. </p>
<p>Thank you.</p>
|
<p>Sure! I'm assuming you're using <a href="http://www.numpy.org/" rel="nofollow">numpy</a>?</p>
<p>If you have the RAM available, you could <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow">broadcast</a> the arrays and use numpy's efficient vectorization of the operations on those arrays.</p>
<p>Here's how:</p>
<pre><code>Abroad = A[:,np.newaxis,:] # prepared for broadcasting
C = np.sum((Abroad - B)**2/(Abroad + B), axis=-1)/2.
</code></pre>
<p>Timing considerations on my platform show a factor of 10 speed gain compared to your algorithm.</p>
<p>A slower option (but still faster than your original algorithm) that uses less RAM than the previous option is simply to broadcast the rows of A into 2D arrays:</p>
<pre><code>def new_way(A,B):
C = np.empty((A.shape[0],B.shape[0]))
for rowind, row in enumerate(A):
C[rowind,:] = np.sum((row - B)**2/(row + B), axis=-1)/2.
return C
</code></pre>
<p>This has the advantage that it can be run for arrays with shape (N,M) much larger than (100,70).</p>
<p>You could also look to <a href="http://www.deeplearning.net/software/theano/" rel="nofollow">Theano</a> to push the expensive for-loops to the C-level if you don't have the memory available. I get a factor 2 speed gain compared to the first option (not taking into account the initial compile time) for both the (100,70) arrays as well as (1000,70):</p>
<pre><code>import theano
import theano.tensor as T
X = T.matrix("X")
Y = T.matrix("Y")
results, updates = theano.scan(lambda x_i: ((x_i - Y)**2/(x_i+Y)).sum(axis=1)/2., sequences=X)
chi_square_norm = theano.function(inputs=[X, Y], outputs=[results])
chi_square_norm(A,B) # same result
</code></pre>
|
python|algorithm|optimization|numpy|matrix
| 1
|
5,840
| 28,253,779
|
Python Pandas Setting Dataframe index and Column names from an array
|
<p>Lets say I have data loaded from an spreadsheet:</p>
<pre><code>df = pd.read_csv('KDRAT_2012.csv', index_col=0, encoding = "ISO-8859-1",)
0 1 2 3 4 5 6 7 8 9
0 -5.53 -6.69 -6.29 -5.76 -7.74 -7.66 -6.27 -4.13 -3.08 0.00
1 -5.52 -6.68 -6.28 -5.75 -7.73 -7.65 -6.26 -4.12 -3.07 0.01
2 -4.03 -5.19 -4.79 -4.26 -6.24 -6.16 -4.77 -2.63 -1.58 1.50
3 0.11 -1.05 -0.65 -0.12 -2.10 -2.02 -0.63 1.51 2.56 5.64
4 0.23 -0.93 -0.53 0.00 -1.98 -1.90 -0.51 1.63 2.68 5.76
5 -2.53 -3.69 -3.29 -2.76 -4.74 -4.66 -3.27 -1.13 -0.08 3.00
[6 rows x 10 columns]
</code></pre>
<p>and I have a the names in another dataframe for the rows and the columns for example</p>
<pre><code>colnames = pd.DataFrame({'Names': ['A', 'B', 'C', 'D', 'E', 'F'],
'foo': [0, 1, 0, 0, 0, 0]})
</code></pre>
<p>Is there a way I can set the values in <code>colnames['Names'].value</code> as the index for df? and is there a way to do this for column names?</p>
|
<p>How about <code>df.index = colnames['Names']</code> for example:</p>
<pre><code>In [77]: df = pd.DataFrame(np.arange(18).reshape(6,3))
In [78]: colnames = pd.DataFrame({'Names': ['A', 'B', 'C', 'D', 'E', 'F'],
'foo': [0, 1, 0, 0, 0, 0]})
In [79]: df.index = colnames['Names']
In [80]: df
Out[80]:
0 1 2
Names
A 0 1 2
B 3 4 5
C 6 7 8
D 9 10 11
E 12 13 14
F 15 16 17
[6 rows x 3 columns]
</code></pre>
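<p>The question also asks about column names; the same assignment pattern applies there (a sketch with made-up names, the length just has to match <code>df.shape[1]</code>):</p>
<pre><code>df.columns = ['x', 'y', 'z']  # any list or Series of column labels
</code></pre>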
|
python|pandas
| 2
|
5,841
| 28,054,501
|
Mask values that are not in sort order in numpy
|
<p>I have a complex calculation which I expect the result to be an array where values are in sorted order. However, because of numerical errors at some critical points, some resulting values are wrong. I'd like to mask those values. How should I do that?</p>
<p>Here is an equivalent function, but it assumes values are sorted from highest to lowest, and that outsiders are always greater than the expected value. I wonder if there is a simpler and more efficient way to do this.</p>
<pre><code>import numpy
import numpy.ma as ma

def maskoutsiders(a):
    mask = numpy.zeros(len(a))
    lastval = a[0]
    for i in range(1, len(a)):
        if a[i] > lastval:
            mask[i] = 1
        else:
            lastval = a[i]
    return ma.masked_array(a, mask=mask)
</code></pre>
|
<p>When <code>a</code> is supposed to be decreasing, you can use:</p>
<pre><code>mask = a > np.minimum.accumulate(a)
</code></pre>
<p>and when <code>a</code> is supposed to be increasing, you can use:</p>
<pre><code>mask = a < np.maximum.accumulate(a)
</code></pre>
<p>(<code>np</code> is <code>numpy</code>.)</p>
<p>For example,</p>
<pre><code>In [44]: def mymaskoutsiders(a):
....: mask = a > np.minimum.accumulate(a)
....: return ma.masked_array(a, mask=mask)
....:
</code></pre>
<p>Compare the results with this array:</p>
<pre><code>In [100]: x
Out[100]: array([ 13. , 16.5, 15.5, 11.5, 6. , 9.5, 5.5, 9. , 5. , 2.5])
</code></pre>
<p>Here's your function:</p>
<pre><code>In [101]: maskoutsiders(x)
Out[101]:
masked_array(data = [13.0 -- -- 11.5 6.0 -- 5.5 -- 5.0 2.5],
mask = [False True True False False True False True False False],
fill_value = 1e+20)
</code></pre>
<p>And here's my version:</p>
<pre><code>In [102]: mymaskoutsiders(x)
Out[102]:
masked_array(data = [13.0 -- -- 11.5 6.0 -- 5.5 -- 5.0 2.5],
mask = [False True True False False True False True False False],
fill_value = 1e+20)
</code></pre>
|
python|numpy
| 5
|
5,842
| 73,268,015
|
Pandas - get columns where all values are unique (distinct)
|
<p>I have a dataframe with many columns and I am trying to get the columns where all values are unique (distinct).</p>
<p>I was able to do this for columns without missing values:</p>
<pre><code>df.columns[df.nunique(dropna=False) == len(df)]
</code></pre>
<p>But I can't find a simple solution for columns with NaNs</p>
|
<p>This will print all columns that contain unique values, excluding NaN:</p>
<pre><code>[col for col in df.columns if df[col].dropna().is_unique ]
</code></pre>
<p>Here is another one-liner solution without using a loop:</p>
<pre><code>df.columns[df.apply(lambda x : x.dropna().is_unique, axis=0)]
</code></pre>
<p>To get it in an array form you can use</p>
<pre><code>df.columns[df.apply(lambda x : x.dropna().is_unique, axis=0)].array
</code></pre>
|
python|python-3.x|pandas
| 2
|
5,843
| 73,222,812
|
Merging excel sheets in pandas
|
<p>Having an two excel sheet as follows:</p>
<p><a href="https://i.stack.imgur.com/5YJZ6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5YJZ6.jpg" alt="img1" /></a><a href="https://i.stack.imgur.com/g0yV3.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g0yV3.jpg" alt="sheet2" /></a></p>
<p>I need to merge these two excel sheets using pandas. The second sheet needs to be merged with the first.
I want the output in the following format.</p>
<p><a href="https://i.stack.imgur.com/Q0Obg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q0Obg.jpg" alt="new" /></a></p>
|
<p>Let's assume you have df1 and df2 and you want to merge the two dataframes on the shared <code>Start</code> column. Here is a sample:</p>
<pre><code>df = df1.merge(df2, how='outer', on='Start')
</code></pre>
|
python|pandas|data-analysis|merging-data|exploratory-data-analysis
| 2
|
5,844
| 73,488,646
|
Get the mean value for range of dates from a time series and based on condition
|
<p>I have two dataframes <code>df1</code> and <code>df2</code>.</p>
<p><code>df1</code> contains three columns: <code>Agent</code> and a datetime range (<code>start</code> and <code>end</code>).<br />
<code>df2</code> contains timeseries with three columns: <code>Agent</code>, <code>datetime</code> and <code>value</code>.</p>
<p>I would like to calculate the <code>mean</code> value and <code>count</code> the number of values within the range for every <code>Agent</code> and <code>datetime range</code> in <code>df1</code>, using the entries from the second file <code>df2</code>, and add these to the first file.</p>
<p><strong>df1:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">index</th>
<th style="text-align: left;">Agent</th>
<th style="text-align: left;">start_date</th>
<th style="text-align: left;">end_date</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
</tr>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: left;">Andrew</td>
<td style="text-align: left;">2022-08-18 14:10:00</td>
<td style="text-align: left;">2022-08-18 15:20:00</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">John</td>
<td style="text-align: left;">2022-08-18 14:00:00</td>
<td style="text-align: left;">2022-08-18 16:05:00</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">Max</td>
<td style="text-align: left;">2022-08-18 14:25:00</td>
<td style="text-align: left;">2022-08-18 15:00:00</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: left;">Sam</td>
<td style="text-align: left;">2022-08-18 16:12:00</td>
<td style="text-align: left;">2022-08-18 16:20:00</td>
</tr>
</tbody>
</table>
</div>
<p><strong>df2:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">index</th>
<th style="text-align: left;">Agent</th>
<th style="text-align: left;">datetime_log</th>
<th style="text-align: left;">value</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: left;">Andrew</td>
<td style="text-align: left;">2022-08-18 14:00:14</td>
<td style="text-align: left;">246</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">John</td>
<td style="text-align: left;">2022-08-18 14:00:14</td>
<td style="text-align: left;">33</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">Max</td>
<td style="text-align: left;">2022-08-18 14:00:14</td>
<td style="text-align: left;">1080</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: left;">Sam</td>
<td style="text-align: left;">2022-08-18 14:00:14</td>
<td style="text-align: left;">500</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: left;">Andrew</td>
<td style="text-align: left;">2022-08-18 14:30:14</td>
<td style="text-align: left;">367</td>
</tr>
<tr>
<td style="text-align: left;">5</td>
<td style="text-align: left;">John</td>
<td style="text-align: left;">2022-08-18 14:30:14</td>
<td style="text-align: left;">50</td>
</tr>
<tr>
<td style="text-align: left;">6</td>
<td style="text-align: left;">Max</td>
<td style="text-align: left;">2022-08-18 14:30:14</td>
<td style="text-align: left;">970</td>
</tr>
<tr>
<td style="text-align: left;">7</td>
<td style="text-align: left;">Andrew</td>
<td style="text-align: left;">2022-08-18 15:00:14</td>
<td style="text-align: left;">290</td>
</tr>
<tr>
<td style="text-align: left;">8</td>
<td style="text-align: left;">John</td>
<td style="text-align: left;">2022-08-18 15:00:14</td>
<td style="text-align: left;">75</td>
</tr>
<tr>
<td style="text-align: left;">9</td>
<td style="text-align: left;">Max</td>
<td style="text-align: left;">2022-08-18 15:00:14</td>
<td style="text-align: left;">800</td>
</tr>
<tr>
<td style="text-align: left;">10</td>
<td style="text-align: left;">Andrew</td>
<td style="text-align: left;">2022-08-18 15:30:14</td>
<td style="text-align: left;">244</td>
</tr>
<tr>
<td style="text-align: left;">11</td>
<td style="text-align: left;">John</td>
<td style="text-align: left;">2022-08-18 15:30:14</td>
<td style="text-align: left;">120</td>
</tr>
<tr>
<td style="text-align: left;">12</td>
<td style="text-align: left;">Max</td>
<td style="text-align: left;">2022-08-18 15:30:14</td>
<td style="text-align: left;">1800</td>
</tr>
</tbody>
</table>
</div>
<p>what I want is (desired output):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">index</th>
<th style="text-align: left;">Agent</th>
<th style="text-align: left;">start_date</th>
<th style="text-align: left;">end_date</th>
<th style="text-align: left;">mean_df2</th>
<th style="text-align: left;">count_df2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
</tr>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: left;">Andrew</td>
<td style="text-align: left;">2022-08-18 14:10:00</td>
<td style="text-align: left;">2022-08-18 15:20:00</td>
<td style="text-align: left;">328.5</td>
<td style="text-align: left;">2</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">John</td>
<td style="text-align: left;">2022-08-18 14:00:00</td>
<td style="text-align: left;">2022-08-18 16:05:00</td>
<td style="text-align: left;">69.5</td>
<td style="text-align: left;">4</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">Max</td>
<td style="text-align: left;">2022-08-18 14:25:00</td>
<td style="text-align: left;">2022-08-18 15:00:00</td>
<td style="text-align: left;">970</td>
<td style="text-align: left;">1</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: left;">Sam</td>
<td style="text-align: left;">2022-08-18 16:12:00</td>
<td style="text-align: left;">2022-08-18 16:20:00</td>
<td style="text-align: left;"></td>
<td style="text-align: left;">0</td>
</tr>
</tbody>
</table>
</div>
<p>My problem is the condition.</p>
<p>I would appreciate any help! Thanks!</p>
|
<p>Try this:</p>
<pre><code>import numpy as np
import pandas as pd

df1['start_date'] = pd.to_datetime(df1['start_date'])
df1['end_date'] = pd.to_datetime(df1['end_date'])
df2['datetime_log'] = pd.to_datetime(df2['datetime_log'])
df = pd.merge(df2, df1, on = 'Agent', how = 'left')
df['flag'] = np.where(((df['start_date'] <= df['datetime_log']) & (df['datetime_log']< df['end_date'])), 1, 0)
df = df[df['flag'] == 1]
df_final = df.groupby('Agent').agg({'value':['mean', 'count']})
print(df_final)
value
mean count
Agent
Andrew 328.5 2
John 69.5 4
Max 970.0 1
</code></pre>
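<p>The desired output also lists <code>Sam</code> with a count of 0. One way to bring back agents that had no matching rows (a sketch, my addition) is to reindex on the full agent list afterwards:</p>
<pre><code>df_final = df_final.reindex(df1['Agent'].unique())
df_final[('value', 'count')] = df_final[('value', 'count')].fillna(0).astype(int)
</code></pre>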
|
python|pandas|dataframe|datetime
| 0
|
5,845
| 73,216,211
|
how to reshape a matrix to 3D but I found that matlab will do this according to column
|
<p>I was trying to port a Python program to MATLAB and found that the transpose operation could not be reproduced directly. In Python it goes as in the first picture. But in MATLAB, I found that it reads by column, which is very different from Python. I used the cat and reshape functions in MATLAB, but I could not find a way to get the same result as in Python. [enter image description here][1]</p>
<p>For example, I create a matrix in matlab, and reshape it. But I found it do not read in rows.</p>
<pre><code>a = [1:1:100];
b = reshape(a,2,5,10);
</code></pre>
<p>I want it divided by its rows rather than by its columns.
The Python code is:</p>
<pre><code>a = np.linspace(0,10*10-1,10*10)
b = a.reshape(10,10)
c = np.transpose(b[0::2],b[1::2],axes=(1,0,2))
</code></pre>
<p>So I wonder if there is any way to get a result like <code>c</code> from the Python code in MATLAB?
[1]: <a href="https://i.stack.imgur.com/kc4Er.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/kc4Er.jpg</a></p>
|
<p>To have the same shape (in the sense that the same tuple of indices would access the same elements) you should use:</p>
<pre class="lang-matlab prettyprint-override"><code>a = 0:1:99; % Note that your python code goes from 0 to 99, not from 1 to 100
b = reshape(a, 10, 10);
b = b'; % This puts b in a similar order as numpy would use, so it becomes easier to use;
c = permute(cat(3, b(1:2:end,:), b(2:2:end, :)), [1, 3, 2]); % Equivalent to numpy transpose of the concatenated array on your python code
% Note that the way Matlab displays the matrices is different, but the indices are matching:
assert(c(1,1,1) == 0)
assert(c(1,1,2) == 1)
assert(c(1,2,1) == 10)
assert(c(1,2,2) == 11)
assert(c(2,2,2) == 31)
</code></pre>
|
python|numpy|matlab
| 0
|
5,846
| 73,340,033
|
How to combine multiple rows into a single row with many columns in pandas using an id (clustering multiple records with same id into one record)
|
<p><strong>Situation:</strong></p>
<p><a href="https://i.stack.imgur.com/0EdjS.png" rel="nofollow noreferrer">1. all_task_usage_10_19</a></p>
<p><em>all_task_usage_10_19</em> is the file which consists of <strong>29229472 rows × 20 columns</strong>.
There are multiple rows with the same <em>ID</em> inside the column <strong>machine_ID</strong> with different values in other columns.</p>
<p>Columns:</p>
<pre><code>'start_time_of_the_measurement_period','end_time_of_the_measurement_period', 'job_ID', 'task_index','machine_ID', 'mean_CPU_usage_rate','canonical_memory_usage', 'assigned_memory_usage','unmapped_page_cache_memory_usage', 'total_page_cache_memory_usage', 'maximum_memory_usage','mean_disk_I/O_time', 'mean_local_disk_space_used', 'maximum_CPU_usage','maximum_disk_IO_time', 'cycles_per_instruction_(CPI)', 'memory_accesses_per_instruction_(MAI)', 'sample_portion',
'aggregation_type', 'sampled_CPU_usage'
</code></pre>
<hr />
<p><a href="https://i.stack.imgur.com/nzReX.png" rel="nofollow noreferrer">2. clustering code</a></p>
<p>I am trying to cluster multiple <strong>machine_ID</strong> records using the following code, referencing: <em><strong><a href="https://stackoverflow.com/questions/36392735/how-to-combine-multiple-rows-into-a-single-row-with-pandas">How to combine multiple rows into a single row with pandas</a></strong></em></p>
<hr />
<p><a href="https://i.stack.imgur.com/hX75L.png" rel="nofollow noreferrer">3. Output</a></p>
<p>Output displayed using: <em><strong>with option_context</strong></em> as it allows to better visualise the content</p>
<hr />
<p><strong>My Aim:</strong></p>
<p>I am trying to cluster multiple rows with the same <strong>machine_ID</strong> into a single record, so I can apply algorithms like Moving averages, LSTM and HW for predicting cloud workloads.</p>
<p><a href="https://i.stack.imgur.com/8OUJP.png" rel="nofollow noreferrer">Something like this.</a></p>
|
<p>The below code worked for me:</p>
<pre><code>all_task_usage_10_19.groupby('machine_ID')[['start_time_of_the_measurement_period','end_time_of_the_measurement_period','job_ID', 'task_index','mean_CPU_usage_rate', 'canonical_memory_usage',
'assigned_memory_usage', 'unmapped_page_cache_memory_usage', 'total_page_cache_memory_usage', 'maximum_memory_usage',
'mean_disk_I/O_time', 'mean_local_disk_space_used','maximum_CPU_usage',
'maximum_disk_IO_time', 'cycles_per_instruction_(CPI)',
'memory_accesses_per_instruction_(MAI)', 'sample_portion',
'aggregation_type', 'sampled_CPU_usage']].agg(list).reset_index()
</code></pre>
|
python|pandas|dataframe
| 0
|
5,847
| 67,264,550
|
How to add new column by getting the data from each?
|
<p>I have Dataframe:</p>
<pre><code>teamId pts xpts
Liverpool 82 59
Man City 57 63
Leicester 53 47
Chelsea 48 55
</code></pre>
<p>And I'm trying to add new columns that give each team's position (rank) by each column.</p>
<p>I wanna get this:</p>
<pre><code>teamId pts xpts №pts №xpts
Liverpool 82 59 1 2
Man City 57 63 2 1
Leicester 53 47 3 4
Chelsea 48 53 4 3
</code></pre>
<p>I tried to do something similar with the following code, but to no avail. The result is a list</p>
<pre><code>df = [df.sort_values(by=i, ascending=False).assign(new_col=lambda x: range(1, len(df) + 1)) for i in df.columns]
</code></pre>
|
<p>You can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.argsort.html" rel="nofollow noreferrer"><code>np.argsort</code></a> twice, since the argsort of an argsort yields ranks:</p>
<pre><code>df[["no_pts", "no_xpts"]] = df.apply(lambda x: np.argsort(np.argsort(-x))) + 1
</code></pre>
<p>We negate each column so the ranking is by descending value; the inner <code>argsort</code> gives the sort order and the outer one converts that order into 0-based ranks, so we add 1 at the end. (A single <code>argsort</code> gives the positions of the sorted elements, not ranks, and only coincidentally matches on some inputs.)</p>
<p>to get</p>
<pre><code> teamId pts xpts no_pts no_xpts
Liverpool 82 59 1 2
Man City 57 63 2 1
Leicester 53 47 3 4
Chelsea 48 55 4 3
</code></pre>
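<p>Alternatively (my addition, not part of the original answer), pandas' built-in <code>rank</code> is arguably the most direct tool for this:</p>
<pre><code>df[["no_pts", "no_xpts"]] = (
    df[["pts", "xpts"]].rank(ascending=False, method="min").astype(int)
)
</code></pre>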
|
pandas
| 0
|
5,848
| 67,451,785
|
Loop for adding value to last n values in list
|
<p>I have a list where I would like to add 1 to all the values from position <em>n</em> onward, in a loop, where <em>n</em> is a row index. I want to repeat this multiple times to achieve the following:</p>
<pre><code>original_list = [1,2,3,4,5,6,7]
multiples = [3,6] #i.e. index 2 & 5
for i in multiples:
Do Something to +1...
final_list = [1,2,4,5,6,8,9]
</code></pre>
<p>Slicing lists within loops doesn't seem that clear to me, does anyone know how to think about solving this?</p>
|
<pre><code>import numpy as np

original_list = np.array([1, 2, 3, 4, 5, 6, 7])
multiples = np.array([3, 6]) - 1   # shift to 0-based indices, since you are not using 0-index
cs = np.zeros_like(original_list)
cs[multiples] = 1                  # mark where each +1 begins
original_list + cs.cumsum()        # the cumulative sum carries each +1 to every later value
</code></pre>
<p>Output:</p>
<pre><code>array([1, 2, 4, 5, 6, 8, 9])
</code></pre>
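<p>If you would rather stay with plain lists, a minimal equivalent sketch of the same idea:</p>
<pre><code>original_list = [1, 2, 3, 4, 5, 6, 7]
multiples = [3, 6]  # 1-based positions

final_list = list(original_list)
for n in multiples:
    # add 1 to every value from position n (1-based) onward
    for j in range(n - 1, len(final_list)):
        final_list[j] += 1

print(final_list)  # [1, 2, 4, 5, 6, 8, 9]
</code></pre>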
|
python|numpy
| 2
|
5,849
| 67,555,510
|
Counting occurrences in time interval in a pandas data frame
|
<p>I have this simple data frame:</p>
<pre><code> Date and time Event
--------------------------
2020-03-23 9:05:03 A
2020-03-23 14:06:02 B
2020-03-23 9:06:43 B
2020-03-23 12:11:50 D
2020-03-23 12:12:38 D
2020-03-23 12:13:17 B
2020-03-23 12:14:07 A
2020-03-23 12:14:54 A
2020-04-29 10:37:09 A
2020-04-29 10:39:13 A
2020-04-29 11:53:33 A
2020-04-29 12:04:46 C
2020-04-30 19:15:29 D
2020-04-30 16:18:04 B
</code></pre>
<p>I want to count the number of occurrences in <code>Event</code> in a 4H hour time interval and create a new data frame.</p>
<p>I'm trying to get something like this:</p>
<pre><code> 10:00-14:00 14:00-18:00 18:00-22:00 22:00-02:00
A 2 1 3 0
B 0 1 1 2
C 1 2 1 1
D 0 0 0 2
</code></pre>
<p>I've tried aggregating using resampling, then I've extracted <code>Time</code> from <code>DateTime</code> and then apply counting, I also tried different combinations with <code>pd.TimeGrouper()</code>, but all of this doesn't seem to work. I don't know how to set up those 4h time intervals so I could apply aggregating.</p>
<p>At this point, I have searched all the relevant posts but couldn't find the solution.</p>
<p>Any suggestion would be highly appreciated.</p>
|
<p>Here's an approach using pandas <code>.groupby()</code>, <code>.explode()</code>, and <code>.pivot_table()</code></p>
<pre><code>>>> import pandas as pd
>>> df = pd.DataFrame([i.strip().split(' ') for i in ''' 2020-03-23 9:05:03 A
... 2020-03-23 14:06:02 B
... 2020-03-23 9:06:43 B
... 2020-03-23 12:11:50 D
... 2020-03-23 12:12:38 D
... 2020-03-23 12:13:17 B
... 2020-03-23 12:14:07 A
... 2020-03-23 12:14:54 A
... 2020-04-29 10:37:09 A
... 2020-04-29 10:39:13 A
... 2020-04-29 11:53:33 A
... 2020-04-29 12:04:46 C
... 2020-04-30 19:15:29 D
... 2020-04-30 16:18:04 B '''.split('\n')], columns=['Date and time', 'Event'])
>>> df
Date and time Event
0 2020-03-23 9:05:03 A
1 2020-03-23 14:06:02 B
2 2020-03-23 9:06:43 B
3 2020-03-23 12:11:50 D
4 2020-03-23 12:12:38 D
5 2020-03-23 12:13:17 B
6 2020-03-23 12:14:07 A
7 2020-03-23 12:14:54 A
8 2020-04-29 10:37:09 A
9 2020-04-29 10:39:13 A
10 2020-04-29 11:53:33 A
11 2020-04-29 12:04:46 C
12 2020-04-30 19:15:29 D
13 2020-04-30 16:18:04 B
>>> # convert Date and time column to datetime type
>>> df['Date and time'] = pd.to_datetime(df['Date and time'])
>>> # groupby based on freq 4H
>>> df = df.groupby(pd.Grouper(key='Date and time', freq='4H')).agg(list).explode('Event')
>>> df = df.reset_index().dropna()
>>> # retrieve time value and convert it to time bins
>>> def time_binning(x):
... return f'{x.time()} - {(x + pd.offsets.DateOffset(hours=3, minutes=59, seconds=59)).time()}'
...
>>> df['time'] = df['Date and time'].apply(time_binning)
>>> # pivot table
>>> df = df.pivot_table(index='Event', columns='time', aggfunc='count', fill_value=0)['Date and time']
>>> df
time 08:00:00 - 11:59:59 12:00:00 - 15:59:59 16:00:00 - 19:59:59
Event
A 4 2 0
B 1 2 1
C 0 1 0
D 0 2 1
</code></pre>
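<p>A shorter route to fixed 4-hour bins (a sketch, assuming <code>Date and time</code> has already been converted with <code>pd.to_datetime</code> as above) is to floor each timestamp to its 4-hour window and cross-tabulate:</p>
<pre><code># label each row with the 4-hour window it falls in, then count per Event
starts = df['Date and time'].dt.floor('4H')
labels = starts.dt.strftime('%H:00') + '-' + (starts + pd.Timedelta(hours=4)).dt.strftime('%H:00')
pd.crosstab(df['Event'], labels)
</code></pre>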
|
python|pandas|datetime|aggregate
| 0
|
5,850
| 60,263,100
|
Problem with neural network in TensorFlow 2.0
|
<pre><code>import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib as plt
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.preprocessing import StandardScaler
import functools
LABEL_COLUMN = 'Endstage'
LABELS = [1, 2, 3, 4]
x = pd.read_csv('HCVnew.csv', index_col=False)
def get_dataset(file_path, **kwargs):
dataset = tf.data.experimental.make_csv_dataset(
file_path,
batch_size=35, # Artificially small to make examples easier to show.
label_name=LABEL_COLUMN,
na_value="?",
num_epochs=1,
ignore_errors=True,
**kwargs)
return dataset
SELECT_COLUMNS = ["Alter", "Gender", "BMI", "Fever", "Nausea", "Fatigue",
"WBC", "RBC", "HGB", "Plat", "AST1", "ALT1", "ALT4", "ALT12", "ALT24", "ALT36", "ALT48", "ALT24w",
"RNABase", "RNA4", "Baseline", "Endstage"]
DEFAULTS = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
temp_dataset = get_dataset("HCVnew.csv",
select_columns=SELECT_COLUMNS,
column_defaults=DEFAULTS)
def pack(features, label):
return tf.stack(list(features.values()), axis=-1), label
packed_dataset = temp_dataset.map(pack)
"""
for features, labels in packed_dataset.take(1):
print(features.numpy())
print()
print(labels.numpy())
"""
NUMERIC_FEATURES = ["Alter", "Gender","BMI", "Fever", "Nausea", "Fatigue",
"WBC", "RBC", "HGB", "Plat", "AST1", "ALT1", "ALT4", "ALT12", "ALT24", "ALT36", "ALT48", "ALT24w",
"RNABase", "RNA4", "Baseline", "Endstage"]
desc = pd.read_csv("HCVnew.csv")[NUMERIC_FEATURES].describe()
MEAN = np.array(desc.T['mean'])
STD = np.array(desc.T['std'])
def normalize_numeric_data(data, mean, std):
# Center the data
return (data-mean)/std
# See what you just created.
raw_train_data = get_dataset("HCVnew.csv")
raw_test_data = get_dataset("HCVnew.csv")
class PackNumericFeatures(object):
def __init__(self, names):
self.names = names
def __call__(self, features, labels):
numeric_freatures = [features.pop(name) for name in self.names]
numeric_features = [tf.cast(feat, tf.float32) for feat in numeric_freatures]
numeric_features = tf.stack(numeric_features, axis=-1)
features['numeric'] = numeric_features
return features, labels
NUMERIC_FEATURES = ["Alter", "Gender","BMI", "Fever", "Nausea", "Fatigue",
"WBC", "RBC", "HGB", "Plat", "AST1", "ALT1", "ALT4", "ALT12", "ALT24", "ALT36", "ALT48", "ALT24w",
"RNABase", "RNA4", "Baseline", "Endstage"]
packed_train_data = raw_train_data.map(
PackNumericFeatures(NUMERIC_FEATURES))
packed_test_data = raw_test_data.map(
PackNumericFeatures(NUMERIC_FEATURES))
normalizer = functools.partial(normalize_numeric_data, mean=MEAN, std=STD)
numeric_column = tf.feature_column.numeric_column('numeric', normalizer_fn=normalizer, shape=[len(NUMERIC_FEATURES)])
numeric_columns = [numeric_column]
numeric_layer = tf.keras.layers.DenseFeatures(numeric_columns)
preprocessing_layer = tf.keras.layers.DenseFeatures(numeric_columns)
#———————————————————————MODEL———————————————————————————————————————————————————————————————————————————————————————————
model = tf.keras.Sequential([
preprocessing_layer,
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(
loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
data_x = get_dataset("HCVnew.csv")
train_data = data_x.shuffle(500)
model.fit(train_data, epochs=20)
</code></pre>
<p>Hello, I'm trying to build a neural network that can predict Hepatitis C based on a csv file containing patient information and I can't fix the error...
I'm getting the error: KeyError 'Endstage', whereas Endstage is the csv column that contains the corresponding values (between 1 and 4) and serves as the label column.
If someone has an idea that could fix my problem then please tell me.
Thanks a lot for your help!</p>
|
<p>That's because <code>Endstage</code> is your label column and the framework does you a favour by removing (popping) it out of your dataset. Otherwise your training dataset would also contain the target class, rendering training useless. </p>
<p>Remove it from <code>NUMERIC_FEATURES</code> and any other place that puts it into your training set features.</p>
<p>[EDIT]</p>
<p>The OP asked in the follow-up question (in the comments) why, after fixing the initial problem, he's getting an error:</p>
<blockquote>
<p>ValueError: Feature numeric is not in features dictionary</p>
</blockquote>
<p>By the looks of it, a feature called <code>numeric</code> is produced via the call to <code>PackNumericFeatures</code>. The latter is used to create <code>packed_train_data</code> and <code>packed_test_data</code>, but these are never used. Yet this line:</p>
<pre><code>numeric_column = tf.feature_column.numeric_column('numeric', normalizer_fn=normalizer, shape=[len(NUMERIC_FEATURES)])
</code></pre>
<p>assumes the data is there - hence the error. </p>
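<p>A minimal sketch of both fixes together (names taken from the question's code):</p>
<pre><code># 1. keep the label out of the feature list
NUMERIC_FEATURES = [f for f in NUMERIC_FEATURES if f != 'Endstage']

# 2. actually train on the packed dataset that provides the 'numeric' feature
packed_train_data = raw_train_data.map(PackNumericFeatures(NUMERIC_FEATURES))
model.fit(packed_train_data.shuffle(500), epochs=20)
</code></pre>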
|
python|pandas|numpy|tensorflow|keras
| 2
|
5,851
| 60,001,958
|
Save a 3 dimensional array from R to a format to be read by Python numpy
|
<p>I have a 3-dimensional array in R (numerical), which I want to <em>transfer</em> to Python, i.e. to a 3-dimensional numpy array. How can I do this? I have tried several options, but all of them <em>destroy</em> the 3rd dimension.</p>
|
<p>Try reticulate to convert the array from R to python. Then save it using numpy (called from within R). </p>
<pre><code>## inside R
library(reticulate)
x = array(runif(27),dim=c(3,3,3))
# import numpy
np = import("numpy")
np$save("test.npy",r_to_py(x))
</code></pre>
<p>Now we load it with python:</p>
<pre><code>import numpy as np
np.load("test.npy")
array([[[0.53035511, 0.09324333, 0.74165792],
[0.32596559, 0.84278233, 0.63397294],
[0.71819993, 0.69992033, 0.23523802]],
[[0.4240157 , 0.92849409, 0.23161098],
[0.82145088, 0.789411 , 0.18161145],
[0.87357443, 0.29713062, 0.35034028]],
[[0.17399566, 0.81314384, 0.92519895],
[0.72759271, 0.62621744, 0.02139281],
[0.39817859, 0.62391164, 0.66426406]]])
</code></pre>
|
python|r|arrays|python-3.x|numpy
| 2
|
5,852
| 60,082,233
|
Error in plotting of frequency histogram from csv data
|
<p>I am working with a csv file with the pandas module on python3. The csv file consists of 5 columns: job, company's name, description of the job, number of reviews, and location of the job. I want to plot a frequency histogram where I pick only the jobs containing the words "mechanical engineer" and find the frequencies of the 5 most frequent locations for the "mechanical engineer" job.</p>
<p>So, I defined a variable <code>engloc</code> which stores the locations of all the "mechanical engineer" jobs. </p>
<pre><code>engloc=df[df.position.str.contains('mechanical engineer|mechanical engineering', flags=re.IGNORECASE, regex=True)].location
</code></pre>
<p>and made a histogram plot with matplotlib using code I found online</p>
<pre><code> x = np.random.normal(size = 1000)
plt.hist(engloc, bins=50)
plt.gca().set(title='Frequency Histogram ', ylabel='Frequency');
</code></pre>
<p>but it printed like this</p>
<p><a href="https://i.stack.imgur.com/GmKkm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GmKkm.png" alt="enter image description here"></a></p>
<p>How can I plot a proper frequency histogram that uses only the 5 most frequent locations for jobs containing the words "mechanical engineer", instead of putting all of the locations in the graph? </p>
<p>This is a sample from the csv file
<a href="https://imgur.com/a/1UTDv91" rel="nofollow noreferrer"><img src="https://imgur.com/a/1UTDv91" alt="csv data screenshot"></a></p>
|
<p>Something along the following lines should help you with numerical data:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

counts_, bins_ = np.histogram(engloc.values)
# keep only the bins whose count is at least 5
filtered = [(c, b) for (c, b) in zip(counts_, bins_) if c >= 5]
counts, edges = zip(*filtered)
plt.bar(edges, counts, width=bins_[1] - bins_[0], align='edge')
</code></pre>
<p>For a string type try:</p>
<pre><code>from collections import Counter
import matplotlib.pyplot as plt

locations, counts = zip(*Counter(engloc.values).most_common(5))
plt.bar(locations, counts)
</code></pre>
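<p>If you are already in pandas, <code>value_counts</code> gives the top five in one line (using the same <code>engloc</code> series):</p>
<pre><code>engloc.value_counts().head(5).plot(kind='bar', title='Top 5 locations')
</code></pre>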
|
python|python-3.x|pandas|csv|matplotlib
| 1
|
5,853
| 65,263,070
|
Get a specific string with Regex in Python
|
<p>I have strings that look alike as below:</p>
<pre><code>ART-B-C-ART0015-D-E01
ADC-B-C-ADC00112-V-E01
AEE-B-C-AEE00011-D-E01
AQW-B-C-AQW0013-D-E01
AAZ-B-C-AAZ0014-D-E01
AQQ-B-C-AQQ0032-D-E01
ADD-B-C-D-ADD0001-D-E01
AAA-B-C-AAA0012-D-E01
</code></pre>
<p>I want to have the below result:
Expected Result:</p>
<pre><code>ART0015
ADC00112
AEE00011
AQW0013
AAZ0014
AQQ0032
ADD0001
AAA0012
</code></pre>
<p>I used the regex code below and unfortunately I don't get the expected result, because in the 7th record the ID comes after the fourth dash, not the third.</p>
<pre><code>df["A"].str.extract(r'^(?:[^-]*-){3}\s*([^-]+)', expand=False)
0 ART0015
1 ADC00112
2 AEE00011
3 AQW0013
4 AAZ0014
5 AQQ0032
6 D
7 AAA0012
</code></pre>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>Series.str.extract</code></a> by searching for 3 letters followed by <code>4-5</code> numbers:</p>
<pre><code>In [477]: df['col'] = df['col'].str.extract(r'([a-zA-Z]{3}\d{4,5})')
In [478]: df
Out[478]:
0 ART0015
1 ADC00112
2 AEE00011
3 AQW0013
4 AAZ0014
5 AQQ0032
6 ADD0001
7 AAA0012
</code></pre>
|
python|python-3.x|regex|pandas|python-2.7
| 5
|
5,854
| 50,032,197
|
Tensorflow gradients are 0, weights are not updating
|
<p>I'm trying to learn TensorFlow after using Keras for a while, and I'm trying to build a ConvNet for CIFAR-10 classification. However, I think I misunderstand something in the TensorFlow API, since the weights are not updating even in a 1-layer model.</p>
<p>The code for the model is as follows:</p>
<pre><code>import numpy as np
import tensorflow as tf

num_epochs = 10
batch_size = 64
# Shape of mu and std is correct: (1, 32, 32, 3)
mu = np.mean(X_train, axis=0, keepdims=True)
sigma = np.std(X_train, axis=0, keepdims=True)
# Placeholders for data & normalization
# (normalisation does not help)
data = tf.placeholder(np.float32, shape=(None, 32, 32, 3), name='data')
labels = tf.placeholder(np.int32, shape=(None,), name='labels')
data = (data - mu) / sigma
# flatten
flat = tf.reshape(data, shape=(-1, 32 * 32 * 3))
dense1 = tf.layers.dense(inputs=flat, units=10)
predictions = tf.nn.softmax(dense1)
onehot_labels = tf.one_hot(indices=labels, depth=10)
# Tried sparse_softmax_cross_entropy_with_logits as well
loss = tf.losses.softmax_cross_entropy(onehot_labels=onehot_labels, logits=predictions)
loss = tf.reduce_mean(loss)
# Learning rate does not matter as the weights are not updating!
optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)
loss_history = []
with tf.Session() as session:
tf.global_variables_initializer().run()
tf.local_variables_initializer().run()
for epochs in range(10):
print("Epoch:", epochs)
# Load tiny batches-
for batch in iterate_minibatches(X_train.astype(np.float32)[:10], y_train[:10], 5):
inputs, target = batch
feed_dict = {data: inputs, labels: target}
loss_val, _ = session.run([loss, optimizer], feed_dict=feed_dict)
grads = tf.reduce_sum(tf.gradients(loss, dense1)[0])
grads = session.run(grads, {data: inputs, labels: target})
print("Loss:", loss_val, "Grads:", grads)
</code></pre>
<p>The code produces the following output:</p>
<pre><code>Epoch: 0
Loss: 2.46115 Grads: -1.02031e-17
Loss: 2.46041 Grads: 0.0
Epoch: 1
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 2
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 3
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 4
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 5
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 6
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 7
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 8
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
Epoch: 9
Loss: 2.46115 Grads: 0.0
Loss: 2.26115 Grads: 0.0
</code></pre>
<p>It looks like the model <strong>probably resets</strong> its weights somehow or stops learning completely.
I have also tried sparse softmax crossentropy loss, but nothing helps.</p>
|
<p>You are applying softmax two times to the output, once in <code>tf.nn.softmax</code> and again when you apply <code>softmax_cross_entropy</code>. This probably destroys any learning capability in the network.</p>
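<p>A minimal sketch of the fix, using the names from the question: feed the raw logits to the loss and keep <code>tf.nn.softmax</code> for reading out probabilities only.</p>
<pre><code>dense1 = tf.layers.dense(inputs=flat, units=10)   # raw logits
predictions = tf.nn.softmax(dense1)               # for inference only
# the loss applies softmax internally, so pass the logits directly:
loss = tf.losses.softmax_cross_entropy(onehot_labels=onehot_labels, logits=dense1)
</code></pre>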
|
python|tensorflow|machine-learning|computer-vision|deep-learning
| 2
|
5,855
| 50,055,065
|
Subtracting values from a row based off other columns
|
<p>Sorry about the vague title; it's hard to explain and easier to display. </p>
<p>I'm trying to subtract values in the same row based off strings in other columns. Here is an input df:</p>
<pre><code>import pandas as pd
import numpy as np
k = 5
N = 8
d = ({'Time' : np.random.randint(k, k + 100 , size=N),
'Events' : ['ABC','DEF','GHI','JKL','ABC','DEF','GHI','JKL'],
'Number1' : ['xx','xx',1,'xx','xx','xx',2,'xx'],
'Number2' : ['xx',1,'xx',1,'xx',2,'xx',2]})
df = pd.DataFrame(data=d)
</code></pre>
<p>Output:</p>
<pre><code> Events Number1 Number2 Time
0 ABC xx xx 14
1 DEF xx 1 34
2 GHI 1 xx 78
3 JKL xx 1 49
4 ABC xx xx 49
5 DEF xx 2 24
6 GHI 2 xx 19
7 JKL xx 2 67
</code></pre>
<p>I want to export values based on the difference in <code>Time</code>. The first time difference column will be <code>ABC - DEF</code> and the second column will be <code>GHI - JKL</code>.</p>
<p>I need to repeat this process a number of times. The example above displays a loop of 2 times. I can use the integers for columns <code>Number1</code> and <code>Number2</code> but they aren't in order. </p>
<p>I tried to combine and ffill these columns to display an order. And then use this column as a reference.</p>
<pre><code>for col in ['Number2']:
df[col] = df[col].ffill()
</code></pre>
<p>But this creates 5 identical integers when I need 4.</p>
<p>I then manually subtracted the appropriate values via row slicing but it becomes very inefficient when I have to do this numerous times. </p>
<p>Is it possible to create a loop subtracting the intended rows?</p>
<p>For the above example the output would be:</p>
<pre><code> Diff_1 Diff_2
0 -20 29
1 25 -48
</code></pre>
|
<pre><code>import pandas as pd
import numpy as np
k = 5
N = 8
d = ({'Time' : np.random.randint(k, k + 100 , size=N),
'Events' : ['ABC','DEF','GHI','JKL','ABC','DEF','GHI','JKL'],
'Number1' : ['xx','xx',1,'xx','xx','xx',2,'xx'],
'Number2' : ['xx',1,'xx',1,'xx',2,'xx',2]})
df = pd.DataFrame(data=d)
print(df)
</code></pre>
<p>Output:</p>
<pre><code> Events Number1 Number2 Time
0 ABC xx xx 8
1 DEF xx 1 54
2 GHI 1 xx 52
3 JKL xx 1 101
4 ABC xx xx 56
5 DEF xx 2 34
6 GHI 2 xx 81
7 JKL xx 2 23
</code></pre>
<p>This would have the new col in <code>df</code>. We only care about the rows for <code>ABC</code> and <code>GHI</code> </p>
<pre><code>df['diff'] = df['Time'] - df['Time'].shift(-1)
diff = pd.DataFrame({
'diff1' : list(df.loc[df['Events'] == 'ABC', 'diff']),
'diff2' : list(df.loc[df['Events'] == 'GHI', 'diff'])
})
print(diff)
</code></pre>
<p>Output: </p>
<pre><code> diff1 diff2
0 -46.0 -49.0
1 22.0 58.0
</code></pre>
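<p>For reference, a pivot makes the pairing explicit (a sketch on the same <code>df</code>, assuming the four events always appear in complete groups as in the question):</p>
<pre><code># enumerate each repetition of the event cycle, then spread events into columns
wide = df.pivot_table(index=df.groupby('Events').cumcount(),
                      columns='Events', values='Time')
diff = pd.DataFrame({'Diff_1': wide['ABC'] - wide['DEF'],
                     'Diff_2': wide['GHI'] - wide['JKL']})
print(diff)
</code></pre>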
|
python|pandas|loops|dataframe
| 2
|
5,856
| 63,764,945
|
Can't understand: ValueError: Graph disconnected: cannot obtain value for tensor Tensor
|
<p>I wrote an architecture similar to this code:
<a href="https://keras.io/guides/functional_api/#manipulate-complex-graph-topologie" rel="nofollow noreferrer">https://keras.io/guides/functional_api/#manipulate-complex-graph-topologie</a>:</p>
<pre><code> visual_features_input = keras.Input(
shape=(1000,), name="Visual-Input-FM", dtype='float')
et_features_input = keras.Input(
shape=(12,), name="ET-input", dtype='float')
sentence_encoding_input = keras.Input(
shape=(784,), name="Sentence-Input-Encoding", dtype='float')
et_features = layers.Dense(units = 12, name = 'et_features')(et_features_input)
visual_features = layers.Dense(units = 100, name = 'visual_features')(visual_features_input)
sentence_features = layers.Dense(units = 60, name = 'sentence_features')(sentence_encoding_input)
x = layers.concatenate([sentence_features, visual_features, et_features], name = 'hybrid-concatenation')
score_pred = layers.Dense(units = 1, name = "score")(x)
group_pred = layers.Dense(units = 5, name="group")(x)
# Instantiate an end-to-end model predicting both score and group
hybrid_model = keras.Model(
inputs=[sentence_features, visual_features, et_features],
outputs=[group_pred]
# outputs=[group_pred, score_pred],
)
</code></pre>
<p>But I get the error:</p>
<pre><code>ValueError: Graph disconnected: cannot obtain value for tensor Tensor("Sentence-Input-Encoding_2:0", shape=(None, 784), dtype=float32) at layer "sentence_features". The following previous layers were accessed without issue: []
</code></pre>
<p>Any idea why?</p>
|
<p>Pay attention to defining the input layers correctly when you build your model.</p>
<p>They are <code>inputs=[sentence_encoding_input, visual_features_input, et_features_input]</code> and not <code>inputs=[sentence_features, visual_features, et_features]</code></p>
<p>here the full model</p>
<pre><code>from tensorflow import keras
from tensorflow.keras import layers
visual_features_input = keras.Input(
shape=(1000,), name="Visual-Input-FM", dtype='float')
et_features_input = keras.Input(
shape=(12,), name="ET-input", dtype='float')
sentence_encoding_input = keras.Input(
shape=(784,), name="Sentence-Input-Encoding", dtype='float')
et_features = layers.Dense(units = 12, name = 'et_features')(et_features_input)
visual_features = layers.Dense(units = 100, name = 'visual_features')(visual_features_input)
sentence_features = layers.Dense(units = 60, name = 'sentence_features')(sentence_encoding_input)
x = layers.concatenate([sentence_features, visual_features, et_features], name = 'hybrid-concatenation')
score_pred = layers.Dense(units = 1, name = "score")(x)
group_pred = layers.Dense(units = 5, name="group")(x)
# Instantiate an end-to-end model predicting both score and group
hybrid_model = keras.Model(
inputs=[sentence_encoding_input, visual_features_input, et_features_input],
outputs=[group_pred]
# outputs=[group_pred, score_pred],
)
hybrid_model.summary()
</code></pre>
|
python|tensorflow|keras|keras-layer|tf.keras
| 0
|
5,857
| 64,003,734
|
Cannot download tensorflow model of cahya/bert-base-indonesian-522M
|
<p>I was going to download <a href="https://huggingface.co/cahya/bert-base-indonesian-522M" rel="nofollow noreferrer">this</a> model, and then I was going to save it later to be used with bert-serving. Since bert-serving only supports tensorflow model, I need to download the tensorflow one and not the PyTorch. The PyTorch model downloads just fine, but the I cannot download the tensorflow model. I used this code to download:</p>
<pre><code>from transformers import BertTokenizer, TFBertModel
model_name='cahya/bert-base-indonesian-522M'
model = TFBertModel.from_pretrained(model_name)
</code></pre>
<p>Here's what I got when running the code on Ubuntu 16.04, python3.5, transformers==2.5.1,</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/username/.local/lib/python3.5/site-packages/transformers/modeling_tf_utils.py", line 346, in from_pretrained
assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file)
File "/usr/lib/python3.5/genericpath.py", line 30, in isfile
st = os.stat(path)
TypeError: stat: can't specify None for path argument
</code></pre>
<p>And here's what I got when running it on Windows 10, python 3.6.5, transformers 3.1.0:</p>
<pre><code>---------------------------------------------------------------------------
OSError Traceback (most recent call last)
C:\ProgramData\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
579 if resolved_archive_file is None:
--> 580 raise EnvironmentError
581 except EnvironmentError:
OSError:
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-3-c2f14f761f05> in <module>()
3 model_name='cahya/bert-base-indonesian-522M'
4 tokenizer = BertTokenizer.from_pretrained(model_name)
----> 5 model = TFBertModel.from_pretrained(model_name)
C:\ProgramData\Anaconda3\lib\site-packages\transformers\modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
585 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a file named one of {TF2_WEIGHTS_NAME}, {WEIGHTS_NAME}.\n\n"
586 )
--> 587 raise EnvironmentError(msg)
588 if resolved_archive_file == archive_file:
589 logger.info("loading weights file {}".format(archive_file))
OSError: Can't load weights for 'cahya/bert-base-indonesian-522M'. Make sure that:
- 'cahya/bert-base-indonesian-522M' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'cahya/bert-base-indonesian-522M' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
</code></pre>
<p>This also happens with other cahya/ models. <a href="https://huggingface.co/cahya/bert-base-indonesian-522M" rel="nofollow noreferrer">This page</a> says that you can use the tensorflow model. However, based on the error, it seems like the file does not exist over there?
I tried downloading other pretrained models like <code>bert-base-uncased</code> etc. and they download just fine. This issue only happens with cahya/ models.</p>
<p>Am I missing something? or should I report this issue to forum or the github issue?</p>
|
<p>This seems to be purely an issue of your environment.</p>
<p>Running the first code sample worked fine for me under Ubuntu 18.04 (I think using at least Ubuntu 16.04 should work as well, Windows 10 I cannot guarantee). I further use <code>transformers</code> 3.1.0, and <code>tensorflow</code> 2.3.0.</p>
<p>The failure in the first environment seems to be purely the fault of outdated versions: of Python (the general recommendation is at least 3.6+ currently, not even tied to <code>transformers</code> specifically), and of <code>transformers</code>, whose latest release is needed for full compatibility with models from the ModelHub.</p>
<p>For the second environment, I cannot fully confirm this, but I suspect that it is due to path handling under Windows 10, as <code>transformers</code> needs to interpret paths as either an OS path or a ModelHub id.</p>
|
bert-language-model|huggingface-transformers
| 0
|
5,858
| 63,872,108
|
One to many join in python with zero fill in repeated record created due to one to many join
|
<p>I have two pandas dataframes, df1 &amp; df2. The relationship is one to many, and in the merged result I need 0 instead of the repeated value from the "one" side. Here is a sample of my two dataframes &amp; the dataframe after merging.
df1 looks like</p>
<pre><code>Class Section ID Subject Score
I A 12 Maths 70
I A 12 Chemistry 85
I A 12 Physics 75
I A 16 Maths 70
I A 16 Chemistry 85
I A 16 Physics 75
I A 16 Arts 65
I B 14 Arts 60
</code></pre>
<p>& df2 looks like</p>
<pre><code>Class Section ID Subject Score
I A 12 Total 230
I A 16 Total 230
I A 16 Total 65
I B 14 Total 65
</code></pre>
<p>I would like to join these two tables using matching columns Class, Section,ID & I need the final table looks like after joining</p>
<pre><code> Class Section ID Subject Score Total
I A 12 Maths 70 230
I A 12 Chemistry 85 0
I A 12 Physics 75 0
I A 16 Maths 70 230
I A 16 Chemistry 85 65
I A 16 Physics 75 0
I A 16 Arts 65 0
I B 14 Arts 60 60
</code></pre>
<p>Can you suggest how I should do this using Python 3.x?</p>
|
<p>A very late answer, but each group can be enumerated with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer">groupby cumcount</a> then the enumeration can be used for <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer">merge</a>:</p>
<pre><code>cols = ['Class', 'Section', 'ID']
df3 = (
df1.merge(df2.drop('Subject', axis=1) # Remove unneeded column from df2
.rename(columns={'Score': 'Total'}), # Fix column name for output
left_on=[*cols, df1.groupby(cols).cumcount()],
right_on=[*cols, df2.groupby(cols).cumcount()],
how='left')
.drop('key_3', axis=1) # remove added merge key
)
</code></pre>
<p><code>df3</code>:</p>
<pre><code> Class Section ID Subject Score Total
0 I A 12 Maths 70 230.0
1 I A 12 Chemistry 85 NaN
2 I A 12 Physics 75 NaN
3 I A 16 Maths 70 230.0
4 I A 16 Chemistry 85 65.0
5 I A 16 Physics 75 NaN
6 I A 16 Arts 65 NaN
7 I B 14 Arts 60 65.0 # This should be 65 from df2
</code></pre>
<p>Then <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.fillna.html" rel="nofollow noreferrer">fillna</a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.astype.html" rel="nofollow noreferrer">astype</a> to fix the <code>Total</code> column:</p>
<pre><code>df3['Total'] = df3['Total'].fillna(0).astype(int)
</code></pre>
<p><code>df3</code>:</p>
<pre><code> Class Section ID Subject Score Total
0 I A 12 Maths 70 230
1 I A 12 Chemistry 85 0
2 I A 12 Physics 75 0
3 I A 16 Maths 70 230
4 I A 16 Chemistry 85 65
5 I A 16 Physics 75 0
6 I A 16 Arts 65 0
7 I B 14 Arts 60 65
</code></pre>
<hr />
<p>DataFrame constructors:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({
'Class': ['I', 'I', 'I', 'I', 'I', 'I', 'I', 'I'],
'Section': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'B'],
'ID': [12, 12, 12, 16, 16, 16, 16, 14],
'Subject': ['Maths', 'Chemistry', 'Physics', 'Maths', 'Chemistry',
'Physics', 'Arts', 'Arts'],
'Score': [70, 85, 75, 70, 85, 75, 65, 60]
})
df2 = pd.DataFrame({
'Class': ['I', 'I', 'I', 'I'],
'Section': ['A', 'A', 'A', 'B'],
'ID': [12, 16, 16, 14],
'Subject': ['Total', 'Total', 'Total', 'Total'],
'Score': [230, 230, 65, 65]
})
</code></pre>
|
python|pandas
| 0
|
5,859
| 63,939,096
|
Non-deterministic behavior for training a neural network on GPU implemented in PyTorch and with a fixed random seed
|
<p>I observed a strange behavior of the final Accuracy when I run exactly the same experiment (the same code for training neural net for image classification) with the same random seed on different GPUs (machines). I use only one GPU. Precisely, When I run the experiment on one machine_1 the Accuracy is 86,37. When I run the experiment on machine_2 the Accuracy is 88,0.
There is no variability when I run the experiment multiple times on the same machine. PyTorch and CUDA versions are the same. Could you help me to figure out the reason and fix it?</p>
<p>Machine_1:
NVIDIA-SMI 440.82 Driver Version: 440.82 CUDA Version: 10.2</p>
<p>Machine_2:
NVIDIA-SMI 440.100 Driver Version: 440.100 CUDA Version: 10.2</p>
<p>To fix random seed I use the following code:</p>
<pre><code>random.seed(args.seed)
os.environ['PYTHONHASHSEED'] = str(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed(args.seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
</code></pre>
|
<p>This is what I use:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import os
import numpy as np
import random
def set_seed(seed):
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(seed)
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
set_seed(13)
</code></pre>
<p>Make sure you have a single function that sets all the seeds, and call it once up front. If you are using Jupyter notebooks, cell execution order may cause this. Also, the order of the calls inside the function may be important. I never had problems with this code. You may call <code>set_seed()</code> often in code.</p>
|
pytorch|conv-neural-network|random-seed
| 1
|
5,860
| 63,881,970
|
How do I use SQL intersect operator in Pandas dataframe on databricks
|
<p>I'm using python 3.x on databricks. I have two dataframes, a &amp; b; a contains 2 rows &amp; b contains 5 rows. When I merge these two dataframes using the command below</p>
<pre><code>combine=pd.merge(a,b,on=[...],how="inner")
</code></pre>
<p>I get 10 rows, but I need 5 rows, i.e. at most the number of rows of the larger dataframe. I tried to implement SQL intersect using the following code</p>
<pre><code>combine=a.merge(b)
</code></pre>
<p>Again I got 10 rows. Can you suggest how I can implement intersect in Python?</p>
|
<pre><code>result = pd.concat([a, b], axis=1, join='inner')
</code></pre>
<p>will give the following result:</p>
<p><a href="https://i.stack.imgur.com/FKsds.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FKsds.png" alt="enter image description here" /></a></p>
<p>I suggest you read this walkthrough to get exact what you want:</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html</a></p>
|
python|sql|pandas
| 0
|
5,861
| 63,806,772
|
Matplotlib interactive bar chart
|
<p>I am trying to explore the interactive features in matplotlib: basically the user picks a y value by clicking on the graph, and depending on the value the user picked, a horizontal line is drawn. According to that line the color of the bar chart should change (how far the value is from the mean).</p>
<p>My program draws the user-picked value but the color of the bars does not change accordingly. The click event calls my compare-value function, which draws the line but does not change the color. My code is as follows; any help would be appreciated</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
import numpy as np
np.random.seed(12345)
df = pd.DataFrame([np.random.normal(32000,200000,3650),
np.random.normal(43000,100000,3650),
np.random.normal(43500,140000,3650),
np.random.normal(48000,70000,3650)],
index=[1992,1993,1994,1995])
df=df.T
n = len(df)
std = df.std()
means = df.mean()
ci = (1.96*std/(n**0.5))
cu = list(means + ci)
cl = list(means - ci)
yerror = list(zip(cl , cu))
lab =list(df.columns)
x = np.arange(len(lab))
my_cmap = plt.cm.get_cmap('coolwarm')
norm = mpl.colors.Normalize(vmin=0.,vmax=1.)
def cmp_val(n):
data_c=list((n - means))
data_c = [x / max(data_c) for x in data_c]
for i in range(len(data_c)):
if data_c[i] > 0:
my_cmap = plt.cm.get_cmap('Blues')
colors = my_cmap(norm(data_c[i]))
bar[i].set_facecolor(colors)
if data_c[i] < 0:
my_cmap = plt.cm.get_cmap('Reds')
colors = my_cmap(norm(data_c[i]*-1))
bar[i].set_facecolor(colors)
plt.axhline(y=n, xmin=0, xmax=1, c = 'lightslategray', linestyle = ':')
return n
plt.figure()
bar=plt.bar(x ,list(means), width=x[1]-x[0], edgecolor='black', yerr= ci,capsize= 20)
plt.xticks(x, lab)
def onclick(event):
plt.cla()
bar=plt.bar(x ,list(means), width=x[1]-x[0], edgecolor='black', yerr= ci,capsize= 20)
cmp_val(event.ydata)
plt.gca().set_title('{}'.format(event.ydata))
plt.xticks(x, lab)
plt.gcf().canvas.mpl_connect('button_press_event', onclick)
plt.show()
</code></pre>
|
<p>I'm not sure I understood how you wanted to normalize your color coding, but I rewrote your code to make it work. Hopefully you'll be able to adapt the code to your needs:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
import pandas as pd
import numpy as np
np.random.seed(12345)
df = pd.DataFrame([np.random.normal(32000, 200000, 3650),
np.random.normal(43000, 100000, 3650),
np.random.normal(43500, 140000, 3650),
np.random.normal(48000, 70000, 3650)],
index=[1992, 1993, 1994, 1995])
df = df.T
n = len(df)
std = df.std()
means = df.mean()
ci = (1.96 * std / (n ** 0.5))
cu = list(means + ci)
cl = list(means - ci)
yerror = list(zip(cl, cu))
lab = list(df.columns)
x = np.arange(len(lab))
my_cmap = plt.cm.get_cmap('coolwarm_r')
my_norm = mcolors.Normalize(vmin=-means.max(), vmax=means.max())
def color_bars(val, rectangles, cmap, norm):
heights = np.array([b.get_height() for b in rectangles])
diff = heights - val
colors = cmap(norm(diff))
for rectangle, color in zip(rectangles, colors):
rectangle.set_facecolor(color)
fig.canvas.draw()
fig, ax = plt.subplots()
bars = ax.bar(x, means, width=x[1] - x[0], edgecolor='black', yerr=ci, capsize=20)
hline = ax.axhline(y=0, c='lightslategray', linestyle=':')
ax.set_xticks(x)
ax.set_xticklabels(lab)
def onclick(event):
if event.inaxes:
ax.set_title('{:.2f}'.format(event.ydata))
hline.set_ydata([event.ydata, event.ydata])
color_bars(event.ydata, bars, cmap=my_cmap, norm=my_norm)
fig.canvas.draw()
fig.canvas.mpl_connect('button_press_event', onclick)
plt.show()
</code></pre>
|
python|pandas|numpy|matplotlib
| 2
|
5,862
| 46,701,216
|
Are all train samples used in fit_generator in Keras?
|
<p>I am using <code>model.fit_generator()</code> to train a neural network with <code>Keras</code>. During the fitting process I've set the <code>steps_per_epoch</code> to 16 (<code>len(training samples)/batch_size</code>). </p>
<p>If the mini batch size is set to 12, and the total number of training samples is 195, does it mean that 3 samples won't be used in the training phase? </p>
|
<p>No, because it is a generator the model does not know the total number of training samples. Therefore, it finishes an epoch when it reaches the final step defined with the <code>steps_per_epoch</code> argument. In your case it will indeed train 192 samples per epoch.</p>
<p>If you want all samples to be used over the course of training, you can reshuffle the data at the start of every epoch, for example inside your generator (or via the <code>shuffle</code> argument when your generator is a <code>keras.utils.Sequence</code>).</p>
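<p>A minimal sketch of such a generator that reshuffles on every pass, so the 3 left-over samples rotate between epochs (names are illustrative):</p>
<pre><code>import numpy as np

def batch_generator(X, y, batch_size):
    n = len(X)
    while True:
        idx = np.random.permutation(n)           # new order every epoch
        for start in range(0, n - batch_size + 1, batch_size):
            sel = idx[start:start + batch_size]
            yield X[sel], y[sel]
</code></pre>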
|
python|tensorflow|machine-learning|keras|neural-network
| 2
|
5,863
| 46,967,538
|
Installing Tensorflow on Amazon EC2 Free Tier for testing
|
<p>I used <a href="https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0" rel="nofollow noreferrer">The Tensorflow for Poets</a> tutorial to train a model for classifying some images. Now I want to use that in a webpage on an EC2 instance on AWS Free Tier. As stated in the tutorial it is as simple as running this simple command:</p>
<pre><code>python -m scripts.label_image --graph=tf_files/retrained_graph.pb --image=
</code></pre>
<p>I'm not planning to test many images all at once.</p>
<p>But to be able to do this I need to install Tensorflow on my EC2 instance. I was wondering if it is possible to do that and stay in the free zone? And I would appreciate a good tutorial for a beginner on how to do that.</p>
|
<p>The EC2 free tier does not care what the instance is used for. </p>
<p>Rather, it limits how long its instances run for. </p>
<p>With a t2.micro you will get 750 hours a month. </p>
<p>All "free-tier eligible" products are labelled appropriately..</p>
<p>If you need to use data storage and database they also have free tier deals available. </p>
<p>You can setup cloud watch monitoring to alert you of charges etc. To keep you informed.</p>
<p>AWS support are very helpful.</p>
<p><a href="https://aws.amazon.com/free/" rel="nofollow noreferrer">https://aws.amazon.com/free/</a></p>
<p>Personally I am a fan of the free tier Ubuntu amd64 instance. </p>
<p>Thus the instructions for tensorflow are here. </p>
<p><a href="https://www.tensorflow.org/install/install_linux" rel="nofollow noreferrer">https://www.tensorflow.org/install/install_linux</a></p>
|
python|amazon-ec2|tensorflow
| 0
|
5,864
| 38,575,213
|
Join and sum on subset of rows in a dataframe
|
<p>I have a pandas dataframe which stores date ranges and some associated colums: </p>
<pre><code> date_start date_end ... lots of other columns ...
1 2016-07-01 2016-07-02
2 2016-07-01 2016-07-03
3 2016-07-01 2016-07-04
4 2016-07-02 2016-07-07
5 2016-07-05 2016-07-06
</code></pre>
<p>and another dataframe of Pikachu sightings indexed by date: </p>
<pre><code> pikachu_sightings
date
2016-07-01 2
2016-07-02 4
2016-07-03 6
2016-07-04 8
2016-07-05 10
2016-07-06 12
2016-07-07 14
</code></pre>
<p>For each row in the first df I'd like to calculate the sum of pikachu_sightings within that date range (i.e., <code>date_start</code> to <code>date_end</code>) and store that in a new column. So would end up with a df like this (numbers left in for clarity): </p>
<pre><code> date_start date_end total_pikachu_sightings
1 2016-07-01 2016-07-02 2 + 4
2 2016-07-01 2016-07-03 2 + 4 + 6
3 2016-07-01 2016-07-04 2 + 4 + 6 + 8
4 2016-07-02 2016-07-07 4 + 6 + 8 + 10 + 12 + 14
5 2016-07-05 2016-07-06 10 + 12
</code></pre>
<p>If I was doing this iteratively I'd iterate over each row in the table of date ranges, select the subset of rows in the table of sightings that match the date range and perform a sum on it - but this is way too slow for my dataset: </p>
<pre><code>for range in ranges.itertuples():
sightings_in_range = sightings[(sightings.index >= range.date_start) & (sightings.index <= range.date_end)]
sum_sightings_in_range = sightings_in_range["pikachu_sightings"].sum()
ranges.set_value(range.Index, 'total_pikachu_sightings', sum_sightings_in_range)
</code></pre>
<p>This is my attempt at using pandas, but fails because the length of the two dataframes does not match (and even if they did, there's probably some other flaw in my approach): </p>
<pre><code>range["total_pikachu_sightings"] =
sightings[(sightings.index >= range.date_start) & (sightings.index <= range.date_end)
["pikachu_sightings"].sum()
</code></pre>
<p>I'm trying to understand what the general approach/design should look like as I'd like to aggregate with other functions too, <code>sum</code> just seems like the easiest for an example. Sorry if this is an obvious question - I'm new to pandas!</p>
|
<p>A sketch of a vectorized solution:</p>
<p>Start with a <code>p</code> as in piRSquared's answer.</p>
<p>Make sure <code>date_</code> cols have <code>datetime64</code> dtypes, i.e.:</p>
<pre><code>df['date_start'] = pd.to_datetime(df.date_start)
</code></pre>
<p>Then calculate cumulative sums:</p>
<pre><code>psums = p.cumsum()
</code></pre>
<p>and</p>
<pre><code>result = psums.asof(df.date_end) - psums.asof(df.date_start)
</code></pre>
<p>It's not yet the end, though. <code>asof</code> returns the last good value, so it sometimes will take the exact start date and sometimes not (depending on your data). So, you have to adjust for that. (If the date frequency is <code>day</code>, then probably moving the index of <code>p</code> an hour backwards, e.g. <code>-pd.Timedelta(1, 'h')</code>, and then adding <code>p.asof(df.date_start)</code> might do the trick.)</p>
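<p>A worked version of that sketch, assuming daily frequency so that stepping the start back one day excludes exactly the days before <code>date_start</code>:</p>
<pre><code>psums = sightings['pikachu_sightings'].cumsum()
ends = psums.asof(df['date_end']).values
# cumulative total up to the day before each start; NaN means "before the first date"
starts = psums.asof(df['date_start'] - pd.Timedelta(1, 'D')).fillna(0).values
df['total_pikachu_sightings'] = ends - starts
</code></pre>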
|
python|pandas|dataframe
| 3
|
5,865
| 38,558,735
|
Trying to convert CSV file contents to a desired format using python
|
<p>I am trying to convert CSV file contents from format A to Format B. I tried pandas, default dict, Dict writer, etc but I could not make it out.The problem is that it is printing horizontally but not vertically. Please find the example below.<a href="https://i.stack.imgur.com/9AoJa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9AoJa.png" alt="enter image description here"></a></p>
<p>Format A: </p>
<pre><code>item         meas  COL A  COL B     COL C     COL D
84P37W265B3  B1    3970   99.82368  99.82368  0.07556675
84P37W265B3  B3    3960   95.10101
84P37W265B3  B5    3705   96.89609  96.89609  0.05398111
84P37W265B3  B6    3763   98.45868  98.45868  0.02657454
84P3XT135A4  B1    7904   99.73431  99.73431  0.02
84P3XT135A4  B3    7817   97.5694   100       0.01
</code></pre>
<p>Format B:</p>
<pre><code>item   84P37W265B3  84P3XT135A4
meas   B1           B1
COL A  3970         7904
COL B  99.82368     99.73431
COL C  99.82368     99.73431
COL D  0.07556675   0.02
meas   B3           B3
COL A  3960         7817
COL B  95.10101     97.5694
COL C  -            100
COL D  -            0.01
meas   B5           -
COL A  3705         -
COL B  96.89609     -
COL C  96.89609     -
COL D  0.05398111   -
meas   B6           -
COL A  3763         -
COL B  98.45868     -
COL C  98.45868     -
COL D  0.02657454   -
</code></pre>
<p>Can anyone help me out in this, Thanks in advance...</p>
|
<p>I would like to thank @cyclops for the reply. Please find my code for the dynamic case, i.e. where the user does not know the number of columns in the input csv file.</p>
<p>CODE:</p>
<pre><code>import csv
from collections import defaultdict
column_header=[]
columns = defaultdict(list)
with open('C:\outfile4.csv') as f:
reader = csv.DictReader(f)
for row in reader:
for (k,v) in row.items():
columns[k].append(v)
b_csv = r'C:\outfile5.csv'
with open(b_csv, 'w') as csvfile:
writer = csv.writer(csvfile, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL)
for key,values in sorted(columns.iteritems()):
if key == 'item':
writer.writerow([key]+values)
for key,values in sorted(columns.iteritems()):
if key == 'meas':
writer.writerow([key]+values)
for key,values in sorted(columns.iteritems()):
if key != 'item' and key != 'meas':
writer.writerow([key]+values)
</code></pre>
<p>It is working, but please let me know if there is a simpler way to do this. Thanks in advance.</p>
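<p>For reference, a shorter pandas sketch that produces the same key-per-row layout (same hypothetical paths, with item and meas forced to the top as in the loop above):</p>
<pre><code>import pandas as pd

df = pd.read_csv(r'C:\outfile4.csv')
order = ['item', 'meas'] + sorted(c for c in df.columns if c not in ('item', 'meas'))
df[order].T.to_csv(r'C:\outfile5.csv', header=False)  # transpose: one key per row
</code></pre>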
|
python-2.7|pandas
| 0
|
5,866
| 38,961,170
|
Pandas Add Values of Column to Different Dataframe
|
<p>So I have a DataFrame, df1, that has 3 columns, A, B, and C as such:</p>
<pre><code> A B C
Arizona 0 2.800000 5.600000
California 0 18.300000 36.600000
Colorado 0 2.666667 5.333333
Connecticut 0 0.933333 1.866667
Delaware 0 0.100000 0.200000
Florida 0 0.833333 1.666667
Georgia 0 0.000000 0.000000
Hawaii 0 1.000000 2.000000
Illinois 0 3.366667 6.733333
Indiana 0 0.000000 0.000000
Iowa 0 0.000000 0.000000
</code></pre>
<p>I then have another dataframe, df2, that has just one column, D:</p>
<pre><code> D
Arizona 13
California 18
Colorado 5
Connecticut 15
Delaware 7
Florida 5
Georgia 13
Hawaii 3
Illinois 21
Indiana 2
Iowa 4
</code></pre>
<p>What I'd like to do is add the values of column D to all the columns in df1. By add I mean take the value of [Arizona, A] and add it to the value of [Arizona, D] not add column D as a new column. So far I tried using</p>
<pre><code>df1 + df2 #returned all NaN
df1 + df2['D'] #Also returned all NaN
df1['A'] + df2['D'] #Returned a new dataframe with each as a separate column
</code></pre>
<p>I'm now not entirely sure where to go from here so I'd love some advice on how to solve this. It doesn't seem like it should be difficult and I'm probably missing something obvious. Any help would be appreciated.</p>
|
<p>you can use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.add.html" rel="nofollow">add()</a> method:</p>
<pre><code>In [22]: df1.add(df2.D, axis='index')
Out[22]:
A B C
Arizona 13.0 15.800000 18.600000
California 18.0 36.300000 54.600000
Colorado 5.0 7.666667 10.333333
Connecticut 15.0 15.933333 16.866667
Delaware 7.0 7.100000 7.200000
Florida 5.0 5.833333 6.666667
Georgia 13.0 13.000000 13.000000
Hawaii 3.0 4.000000 5.000000
Illinois 21.0 24.366667 27.733333
Indiana 2.0 2.000000 2.000000
Iowa 4.0 4.000000 4.000000
</code></pre>
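<p>Passing <code>axis='index'</code> (equivalently <code>axis=0</code>) makes <code>add</code> align <code>df2.D</code> with the row index of <code>df1</code> rather than with its columns, which is why the same per-state value is added across columns A, B and C.</p>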
|
python|pandas
| 3
|
5,867
| 38,731,480
|
How do I get this loop to work correctly when writing pandas df to xlsx?
|
<p>I have used <a href="https://stackoverflow.com/questions/20219254/how-to-write-to-an-existing-excel-file-without-overwriting-data">this code</a>, which is kind of working. Right now, with the smaller 'rep_list', it adds the first rep in the list (CP), but when it gets to AM it overwrites the CP data. So when I run this code it only actually saves the last person in the loop. If I run the code with just "CP" and then just "AM", it appends as it should. Is something wrong with the for loop, or is it an issue with the workbook itself?</p>
<pre><code>import pandas as pd
import datetime
from openpyxl import load_workbook
now = datetime.datetime.now()
currentDate = now.strftime("%Y-%m-%d")
call_report = pd.read_excel("Ending 2016-07-30.xlsx", "raw_data")
#rep_list = ["CP", "AM", "JB", "TT", "KE"]
rep_list = ["CP", "AM"]
def call_log_reader(rep_name):
rep_log = currentDate + "-" + rep_name + ".csv"
df = pd.read_csv(rep_log)
df = df.drop(['From Name', 'From Number', 'To Name / Reference', 'To Number', 'Billing Code', 'Original Dialed Number',
'First Hunt Group', 'Last Hunt Group'], axis=1)
df['rep'] = rep_name
book = load_workbook('Ending 2016-07-30.xlsx')
writer = pd.ExcelWriter('Ending 2016-07-30.xlsx', engine='openpyxl')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
df.to_excel(writer, "raw_data", index=False)
writer.save()
## I tried adding this : writer.close() hoping it would close the book and then force it to reopen for the next rep in the loop but it doesn't seem to work.
for rep in rep_list:
call_log_reader(rep)
</code></pre>
<p>Thank you so much!</p>
<p>EDIT:</p>
<p>Gaurav Dhama gave a great answer that worked excellently. He pointed out that there is a bit of a limitation with the pandas ExcelWriter <a href="https://github.com/pydata/pandas/issues/3441" rel="nofollow noreferrer">(refer to this link)</a> and proposed a solution in which each rep gets their own sheet in the end. This worked; however, after thinking on it I opted against the additional sheets and came up with this solution, knowing the limitation existed. Basically, I appended to a CSV instead of the actual XLSX file, and then at the end opened that CSV and appended the one big list into the XLSX file. Either one works; it just depends on what your final product looks like. </p>
<pre><code>import pandas as pd
import datetime
from openpyxl import load_workbook
now = datetime.datetime.now()
currentDate = now.strftime("%Y-%m-%d")
call_report = "Ending 2016-07-30.xlsx"
#rep_list = ["CP", "AM", "JB", "TT", "KE"]
rep_list = ["CP", "AM"]
csv_to_xl_files = []
merged_csv = currentDate + "-master.csv"
def call_log_reader(rep_name):
rep_log = currentDate + "-" + rep_name + ".csv"
df = pd.read_csv(rep_log)
df = df.drop(['TimestampDetail', 'Billing Code', 'From Name', 'From Number', 'To Name / Reference', 'To Number',
'Original Dialed Number', 'First Hunt Group', 'Last Hunt Group'], axis=1)
df['rep'] = rep_name
#print (df.head(3))
df.to_csv(merged_csv, mode='a', index=False, header=False)
csv_to_xl_files.append(rep_log)
book = load_workbook(call_report)
writer = pd.ExcelWriter(call_report, engine='openpyxl')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
for rep in rep_list:
call_log_reader(rep)
master_df = pd.read_csv(merged_csv)
master_df.to_excel(writer, "raw_data", index=False)
writer.save()
#this csv_to_xl_files list isn't finished yet, basically I'm going to use it to delete the files from the directory as I don't need them once the script is run.
print (csv_to_xl_files)
</code></pre>
|
<p>Try using the following:</p>
<pre><code>import pandas as pd
import datetime
from openpyxl import load_workbook
now = datetime.datetime.now()
currentDate = now.strftime("%Y-%m-%d")
call_report = pd.read_excel("Ending 2016-07-30.xlsx", "raw_data")
#rep_list = ["CP", "AM", "JB", "TT", "KE"]
rep_list = ["CP", "AM"]
def call_log_reader(rep_name):
rep_log = currentDate + "-" + rep_name + ".csv"
df = pd.read_csv(rep_log)
df = df.drop(['From Name', 'From Number', 'To Name / Reference', 'To Number', 'Billing Code', 'Original Dialed Number',
'First Hunt Group', 'Last Hunt Group'], axis=1)
df['rep'] = rep_name
df.to_excel(writer, "raw_data"+rep, index=False)
return df
book = load_workbook('Ending 2016-07-30.xlsx')
writer = pd.ExcelWriter('Ending 2016-07-30.xlsx', engine='openpyxl')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
for rep in rep_list:
call_log_reader(rep)
writer.save()
</code></pre>
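<p>The root cause was that both reps were written to the same <code>raw_data</code> sheet starting at the same cell, so the second write overwrote the first. The version above writes each rep to its own sheet (<code>"raw_data" + rep_name</code>) and creates the workbook and writer once, outside the loop, saving only after all reps are processed.</p>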
|
python|pandas|openpyxl
| 1
|
5,868
| 38,927,886
|
Slow performance of timedelta methods
|
<p>Why does <code>.dt.days</code> take 100 times longer than <code>.dt.total_seconds()</code>?</p>
<pre><code>df = pd.DataFrame({'a': pd.date_range('2011-01-01 00:00:00', periods=1000000, freq='1H')})
df.a = df.a - pd.to_datetime('2011-01-01 00:00:00')
df.a.dt.days # 12 sec
df.a.dt.total_seconds() # 0.14 sec
</code></pre>
|
<p><code>.dt.total_seconds</code> is basically just a multiplication, and can be performed at numpythonic speed:</p>
<pre><code>def total_seconds(self):
"""
Total duration of each element expressed in seconds.
.. versionadded:: 0.17.0
"""
return self._maybe_mask_results(1e-9 * self.asi8)
</code></pre>
<p>Whereas if we abort the <code>days</code> operation, we see it's spending its time in a slow listcomp with a getattr and a construction of Timedelta objects (<a href="https://github.com/pydata/pandas/blob/master/pandas/tseries/tdi.py#L380" rel="nofollow">source</a>):</p>
<pre><code> 360 else:
361 result = np.array([getattr(Timedelta(val), m)
--> 362 for val in values], dtype='int64')
363 return result
364
</code></pre>
<p>To me this screams "look, let's get it correct, and we'll cross the optimization bridge when we come to it."</p>
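<p>If the whole-day count is all you need, one hedged workaround (not from the pandas source discussed above) is to stay in numpy and convert the underlying <code>timedelta64[ns]</code> values directly:</p>
<pre><code>import numpy as np

# float days, truncated toward zero; matches .dt.days for non-negative deltas
days = (df.a.values / np.timedelta64(1, 'D')).astype('int64')
</code></pre>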
|
python-3.x|pandas
| 3
|
5,869
| 63,059,308
|
Json file not formatted correctly when writing json differences with pandas and numpy
|
<p>I am trying to compare two json files and then write another json with column names and differences marked as yes or no. I am using pandas and numpy.</p>
<p>Below are sample files. In reality these json files are dynamic, meaning we don't know upfront how many keys there will be.</p>
<p><strong>Input files:</strong></p>
<pre><code>fut.json
[
{
"AlarmName": "test",
"StateValue": "OK"
}
]
Curr.json:
[
{
"AlarmName": "test",
"StateValue": "OK"
}
]
</code></pre>
<p><strong>Below code I have tried:</strong></p>
<pre><code>import json
import pandas as pd
import numpy as np
with open(r"c:\csv\fut.json", 'r+') as f:
data_b = json.load(f)
with open(r"c:\csv\curr.json", 'r+') as f:
data_a = json.load(f)
df_a = pd.json_normalize(data_a)
df_b = pd.json_normalize(data_b)
_, df_a = df_b.align(df_a, fill_value=np.NaN)
_, df_b = df_a.align(df_b, fill_value=np.NaN)
with open(r"c:\csv\report.json", 'w') as _file:
for col in df_a.columns:
df_temp = pd.DataFrame()
df_temp[col + '_curr'], df_temp[col + '_fut'], df_temp[col + '_diff'] = df_a[col], df_b[col], np.where((df_a[col] == df_b[col]), 'No', 'Yes')
#[df_temp.rename(columns={c:'Missing'}, inplace=True) for c in df_temp.columns if df_temp[c].isnull().all()]
df_temp.fillna('Missing', inplace=True)
with pd.option_context('display.max_colwidth', -1):
_file.write(df_temp.to_json(orient='records'))
</code></pre>
<p><strong>Expected output:</strong></p>
<pre><code>[
{
"AlarmName_curr": "test",
"AlarmName_fut": "test",
"AlarmName_diff": "No"
},
{
"StateValue_curr": "OK",
"StateValue_fut": "OK",
"StateValue_diff": "No"
}
]
</code></pre>
<p><strong>Coming output:</strong> Not able to parse it in a json validator. The problem is below: those adjacent <code>][</code> brackets should be replaced by <code>','</code> to get valid json. I don't know why it's printing like that.</p>
<pre><code>[{"AlarmName_curr":"test","AlarmName_fut":"test","AlarmName_diff":"No"}][{"StateValue_curr":"OK","StateValue_fut":"OK","StateValue_diff":"No"}]
</code></pre>
<p><strong>Edit1:</strong></p>
<p>Tried below as well</p>
<pre><code>_file.write(df_temp.to_json(orient='records',lines=True))
</code></pre>
<p>Now I get json which is again not parsable: the <code>','</code> is missing, and unless I manually add <code>','</code> between the two dicts and <code>[ ]</code> at the beginning and end, it does not parse.</p>
<pre><code>[{"AlarmName_curr":"test","AlarmName_fut":"test","AlarmName_diff":"No"}{"StateValue_curr":"OK","StateValue_fut":"OK","StateValue_diff":"No"}]
</code></pre>
|
<p>Honestly pandas is overkill for this... however</p>
<ol>
<li>load dataframes as you did</li>
<li>concat them as columns. rename columns</li>
<li>do calcs and map boolean to desired Yes/No</li>
<li><code>to_json()</code> returns a string so <code>json.loads()</code> to get it back into a list/dict. Filter columns to get to your required format</li>
</ol>
<pre><code>import json
data_b = [
{
"AlarmName": "test",
"StateValue": "OK"
}
]
data_a = [
{
"AlarmName": "test",
"StateValue": "OK"
}
]
df_a = pd.json_normalize(data_a)
df_b = pd.json_normalize(data_b)
df = pd.concat([df_a, df_b], axis=1)
df.columns = [c+"_curr" for c in df_a.columns] + [c+"_fut" for c in df_a.columns]
df["AlarmName_diff"] = df["AlarmName_curr"] == df["AlarmName_fut"]
df["StateValue_diff"] = df["StateValue_curr"] == df["StateValue_fut"]
df = df.replace({True:"Yes", False:"No"})
js = json.loads(df.loc[:,(c for c in df.columns if c.startswith("Alarm"))].to_json(orient="records"))
js += json.loads(df.loc[:,(c for c in df.columns if c.startswith("State"))].to_json(orient="records"))
js
</code></pre>
<p><strong>output</strong></p>
<pre><code>[{'AlarmName_curr': 'test', 'AlarmName_fut': 'test', 'AlarmName_diff': 'Yes'},
{'StateValue_curr': 'OK', 'StateValue_fut': 'OK', 'StateValue_diff': 'Yes'}]
</code></pre>
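<p>To write the combined list back out as one valid JSON document:</p>
<pre><code>import json

with open(r"c:\csv\report.json", "w") as f:
    json.dump(js, f, indent=2)
</code></pre>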
|
python|json|pandas
| 1
|
5,870
| 62,950,611
|
null value after binning
|
<p>While transforming a continuous variable into a categorical one using <strong>pd.cut()</strong>, null values appear in the 'age' column, which is derived from 'age_in_years', a column that doesn't have any null values. What is the solution here?</p>
<pre><code>df['age_in_years']=df['age_in_days']/365
df.drop('age_in_days',inplace=True,axis=1)
bins=[0,35,60,100]
group=['young','middle_aged','senior']
df['age']=pd.cut(df['age_in_years'],bins,labels=group,right=True).astype('object')
</code></pre>
<p>Now when I run <code>df.isnull().sum()</code>, the age column shows null values:
<a href="https://i.stack.imgur.com/nxJtV.png" rel="nofollow noreferrer">image o/p of df.isnull().sum()</a></p>
<p>dataset : <a href="https://drive.google.com/file/d/11_qSL5tI1epiRcOzueYaMT-1GUiwAQvs/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/11_qSL5tI1epiRcOzueYaMT-1GUiwAQvs/view?usp=sharing</a></p>
|
<p>You can try extending both the bins and the labels, so every age falls into some interval:</p>
<pre><code>bins=[-np.inf,0,35,60,100,np.inf]
group=['invalid_low','young','middle_aged','senior','invalid_high']
df['age']=pd.cut(df['age_in_years'],bins,labels=group,right=True).astype('object')
</code></pre>
<p>This will diagnose the problem: with <code>right=True</code>, any age that is exactly 0 or greater than 100 falls outside your original bins and becomes null. The extra intervals <code>(-inf, 0.0]</code> and <code>(100.0, inf]</code> now catch those values instead. Note that <code>pd.cut</code> needs one label per interval, so the label list grows to five.</p>
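<p>If you first want to see which rows your original three bins missed, a quick check (run with the original binning) is:</p>
<pre><code># Ages that did not land in any (0, 100] interval
df.loc[df['age'].isnull(), 'age_in_years']
</code></pre>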
|
python|pandas|dataframe|data-science
| 0
|
5,871
| 63,012,552
|
Change DataTypes of Pandas Columns by selecting columns by regex
|
<p>I have a Pandas dataframe with a lot of columns looking like p_d_d_c0, p_d_d_c1, ... p_d_d_g1, p_d_d_g2, ....</p>
<pre><code> df =
a b c p_d_d_c0 p_d_d_c1 p_d_d_c2 ... p_d_d_g0 p_d_d_g1 ...
</code></pre>
<p>All these columns, which conform to the regex, need to be selected and their datatypes changed from object to float. In particular, the columns that look like p_d_d_c* and p_d_d_g* are all <code>object</code> types and I would like to change them to <code>float</code> types. Is there a way to select columns in bulk using a regular expression and change them to float types?</p>
<p>I tried the answer from <a href="https://stackoverflow.com/questions/30808430/how-to-select-columns-from-dataframe-by-regex">here</a>, but it takes a lot of time and memory as I have hundreds of these columns.</p>
<pre><code> df.filter(regex=("p_d_d_.*"))
</code></pre>
<p>I also tried:</p>
<pre><code> df.select(lambda col: col.startswith('p_d_d_g'), axis=1)
</code></pre>
<p>But, it gives an error:</p>
<pre><code> AttributeError: 'DataFrame' object has no attribute 'select'
</code></pre>
<p>My Pandas version is <code>1.0.1</code></p>
<p>So, how to select columns in bulk and change their data types using regex?</p>
|
<p>From the same link, and with some <code>astype</code> magic.</p>
<pre class="lang-py prettyprint-override"><code>column_vals = df.columns.map(lambda x: x.startswith("p_d_d_"))
train_temp = df.loc(axis=1)[column_vals]
train_temp = train_temp.astype(float)
</code></pre>
<hr />
<p>EDIT:</p>
<p>To modify the original dataframe, do something like this:</p>
<pre class="lang-py prettyprint-override"><code>column_vals = [x for x in df.columns if x.startswith("p_d_d_")]
df[column_vals] = df[column_vals].astype(float)
</code></pre>
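<p>A variant of the same idea using <code>df.filter</code>, which builds the column list for you:</p>
<pre><code>cols = df.filter(regex=r"^p_d_d_").columns
df[cols] = df[cols].astype(float)
</code></pre>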
|
python|python-3.x|regex|pandas|pandas-1.0
| 3
|
5,872
| 67,619,245
|
How can I split pandas dataframe into column of tuple, quickly?
|
<p>I have a pd.Series element of strings, separated by <code>'_'</code>, with only two elements in it.</p>
<p>for instance,</p>
<pre><code>s = pd.Series(['a_1', 'a_2', 'a_3', 'b_1'])
</code></pre>
<p>the command <code>s.str.split("_")</code> will return a series of lists</p>
<pre><code>0 ['a', '1']
1 ['a', '2']
2 ['a', '3']
3 ['b', '1']
</code></pre>
<p>the command <code>s.str.partition("_", expand=False)</code> will return a series of tuples, where <code>_</code> will be the second element in the tuple</p>
<pre><code>0 ('a', '_', '1')
1 ('a', '_', '2')
2 ('a', '_', '3')
3 ('b', '_', '1')
</code></pre>
<p>Is there a clean (and fast) way to create a series of tuples without <code>_</code> in it:</p>
<pre><code>0 ('a', '1')
1 ('a', '2')
2 ('a', '3')
3 ('b', '1')
</code></pre>
<p>I can always do: <code>s.str.split("_").apply(tuple)</code>, but apply is always slower than built-in functions (like <code>str.split</code>...)</p>
|
<p>One idea is use list comprehension:</p>
<pre><code>s = pd.Series('a_1, a_2, a_3, b_1'.split(', '))
#4k rows
s = pd.concat([s] * 1000, ignore_index=True)
In [195]: %timeit s.str.split("_").apply(tuple)
2.49 ms ± 41.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [196]: %timeit [tuple(x.split('_')) for x in s]
1.46 ms ± 79.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [197]: %timeit pd.Index(s).str.split("_", expand=True).tolist()
4.31 ms ± 14.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<hr />
<pre><code>s = pd.Series('a_1, a_2, a_3, b_1'.split(', '))
#400k rows
s = pd.concat([s] * 100000, ignore_index=True)
In [199]: %timeit s.str.split("_").apply(tuple)
252 ms ± 4.63 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [200]: %timeit [tuple(x.split('_')) for x in s]
180 ms ± 370 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [201]: %timeit pd.Index(s).str.split("_", expand=True).tolist()
379 ms ± 1.73 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
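<p>If you need the result back as a Series rather than a plain list, wrapping the same list comprehension keeps it the fastest option here:</p>
<pre><code>pd.Series([tuple(x.split('_')) for x in s], index=s.index)
</code></pre>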
|
python|pandas
| 2
|
5,873
| 31,766,547
|
Pandas Filtering
|
<p>I have a data frame that I am getting some counts on, like so:</p>
<pre><code>t = df['NAME'].value_counts()[:10]
</code></pre>
<p>I would then like to reduce the original data set (df) to only include items that match t. Something like:</p>
<pre><code>temp = df[t]
</code></pre>
<p>or</p>
<pre><code>temp = df[df['NAME'] in t]
</code></pre>
<p>Thanks</p>
|
<p>Try this: <code>t.index</code> holds the ten most frequent names, so keep only the rows whose <code>NAME</code> is among them:</p>
<pre><code>df[df['NAME'].isin(t.index)]
</code></pre>
|
python|pandas
| 1
|
5,874
| 31,806,512
|
Export Pandas dataframe to custom CSV format with JSON rows
|
<p>In my pandas program I am reading a CSV and converting some columns to JSON.</p>
<p>For ex: my csv is like this:</p>
<pre><code>id_4 col1 col2 .....................................col100
1 43 56 .....................................67
2 46 67 ....................................78
</code></pre>
<p>What I want to achieve is:</p>
<p>id_4 json</p>
<pre><code>1 {"col1":43,"col2":56,.....................,"col100":67}
2 {"col1":46,"col2":67,.....................,"col100":78}
</code></pre>
<p>The code I have tried is as follows:</p>
<pre><code> df = pd.read_csv('file.csv')
def func(df):
     d = [dict([(colname, row[i])
                for i, colname in enumerate(df[['col1','col2',............,'col100']])])
          for row in zip(df['col1'].astype(str), df['col2'].astype(str), ..............., df['col100'].astype(str))]
format_data = json.dumps(d)
format_data = format_data[1:len(format_data)-1]
json_data = '{"key":'+format_data+'}'
result.append(pd.Series([df['id_4'].unique()[0],json_data],index = headers))
return df
df.groupby('id_4').apply(func)
b = open('output.csv', 'w')
writer = csv.writer(b)
writer.writerow(headers)
writer.writerows(result[1:len(result)])
</code></pre>
<p>The CSV contains some 100,000 rows (about 15 MB on disk). When I execute this, after a long time the process is killed automatically. I think it's a memory issue.</p>
<p>As I am a newbie to Python and pandas, is there any way to optimize the above code to work properly, or is increasing the memory the only way? </p>
<p>I am using a Linux system with 5 GB of RAM.</p>
<p>EDIT:</p>
<pre><code>df = pd.read_csv('Vill_inter.csv')
with open('output.csv', 'w') as f:
writer = csv.writer(f)
for id_4, row in itertools.izip(df.index.values, df.to_dict(orient='records')):
        writer.writerow((id_4, json.dumps(row)))
</code></pre>
|
<p>A pandas dataframe is directly serializable to JSON with the <code>to_json</code> method.</p>
<p>Your output format is not very clear but have a look at this:</p>
<pre><code>import numpy as np
import pandas as pd

# Generate dataframe
df = pd.DataFrame(np.random.randn(5, 100), columns=['col' + str(n) for n in xrange(1, 101)])
# Create id_4 column
df.index += 1
df.index.name = 'id_4'
# Reset the index to expose id_4 as a column in the output; remove this if you only want col1 to col100
df.reset_index(drop=False, inplace=True)
# Dump data to disk, or buffer
path = 'out.json'
df.to_json(path, orient='records')
</code></pre>
<p>It is going to be much faster than your loops and will probably avoid the memory problem.</p>
<p>EDIT:</p>
<p>Apparently the output should be a custom file format. In this case you can output the dataframe using <code>to_dict(orient='records')</code>. The output will be a list where each element represents a row as a dictionary. You can serialize the dictionary using the <code>dumps</code> function of the <code>json</code> module (built-in).</p>
<p>Something like this:</p>
<pre><code>import csv
import json
import itertools
with open('output.csv', 'w') as f:
writer = csv.writer(f)
for id, row in itertools.izip(df.index.values, df.to_dict(orient='records')):
writer.writerow((id, json.dumps(row)))
</code></pre>
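<p>For reference, a Python 3 sketch of the same loop (<code>itertools.izip</code> is gone there; <code>zip</code> is already lazy, and CSV files should be opened with <code>newline=''</code>):</p>
<pre><code>import csv
import json

with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for id_, row in zip(df.index, df.to_dict(orient='records')):
        writer.writerow((id_, json.dumps(row)))
</code></pre>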
|
python|json|csv|pandas
| 2
|
5,875
| 41,624,310
|
Pandas datetime day frequency to week frequency
|
<p>Q1:
I have the following pandas dataframe:</p>
<p><a href="https://i.stack.imgur.com/aZ7jq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aZ7jq.png" alt="enter image description here"></a></p>
<p>with a huge number of rows at a daily frequency (the <em>Date</em> column).
I would like to convert the dataframe to a weekly basis, so the frequency is no longer daily but weekly, with the Money and Workers columns summed over each week.</p>
<p>Q2:
Is it possible to define the starting day (by date) of the week?</p>
|
<p>first make sure your "Date" column is of type datetime.<br>
Consider this example: </p>
<pre><code>tidx = pd.date_range('2012-01-01', periods=1000)
df = pd.DataFrame(dict(
Money=np.random.rand(len(tidx)) * 1000,
Workers=np.random.randint(1, 11, len(tidx)),
Date=tidx
))
</code></pre>
<hr>
<p>When we <code>resample</code> we can pass a string that represents the time unit by which we resample. When using <code>W</code> for weeks we can actually pass <code>W-Mon</code> through <code>W-Sun</code>. So if you have a date</p>
<pre><code>date=pd.to_datetime('2012-03-31')
</code></pre>
<p>Which was a Saturday, we can produce the correct resample unit string</p>
<pre><code>'W-{:%a}'.format(date)
'W-Sat'
</code></pre>
<p>Then we can resample with it</p>
<pre><code>df.resample('W-{:%a}'.format(date), on='Date').sum().reset_index()
</code></pre>
<p><a href="https://i.stack.imgur.com/wNWIE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wNWIE.png" alt="enter image description here"></a></p>
<p>The simple answer is to <code>resample</code> without it, which produces a different starting point.</p>
<pre><code>df.resample('W', on='Date').sum().reset_index()
</code></pre>
<p><a href="https://i.stack.imgur.com/7IlKW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7IlKW.png" alt="enter image description here"></a></p>
|
python|pandas|date|data-conversion
| 4
|
5,876
| 27,575,823
|
Breaking down large dataset into organized index
|
<p>I'm trying to create an indexed dictionary of <code>shape_id</code>'s from a dataset that I have (see below). I realize I could use loops (and tried to do so), but I have an intuition that there's a bulk way to do this in pandas that isn't as computationally expensive.</p>
<p>Possible solutions:
<a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow">groupby</a>,
<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.strings.StringMethods.findall.html?highlight=match" rel="nofollow">str.findall</a>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.strings.StringMethods.extract.html" rel="nofollow">str.extract</a></p>
<p>The dictionary should be structured like so:</p>
<pre><code>{shape_id: [shape_pt_sequence, [shape_pt_lat,shape_pt_lon]]}
</code></pre>
<p>Here's what code I have so far:</p>
<pre><code>import pandas as pd
# readability assignments for shapes.csv
shapes = pd.read_csv('csv/shapes.csv')
shapes_shape_id = shapes['shape_id']
shapes_shape_id_index = list(set(shapes_shape_id))
shapes_shape_pt_sequence = shapes['shape_pt_sequence']
shapes_shape_pt_lat = shapes['shape_pt_lat']
shapes_shape_pt_lon = shapes['shape_pt_lon']
shapes_tuple = []
# add shape index to final dict
for i in range(len(shapes_shape_id_index)):
shapes_tuple.append([shapes_shape_id_index[i]])
print(shapes_tuple)
</code></pre>
<p>Here's the <a href="https://gist.github.com/adampitchie/90f2e4f7e1b06964b23d" rel="nofollow">LINK</a> to the <code>shapes.csv</code> Gist.</p>
<p>Here's an empty shape_id index:</p>
<pre><code>[[20992], [20993], [20994], [20995], [20996], [20997], [20998], [20999], [21000], [21001], [21002], [21003], [21004], [21005], [21006], [21007], [21008], [21009], [21010], [21011], [21012], [21013], [21014], [21015], [21016], [21017], [21018], [21019], [21020], [21021], [21022], [21023], [21026], [21027], [21028], [21029], [21030], [21031], [21032], [21033], [21034], [21035], [21036], [21037], [21038], [21039], [21040], [21041], [21042], [21043], [21044], [21045], [21046], [21047], [21048], [21049], [21050], [21051], [21052], [21053], [21054], [21055], [21056], [21057], [21058], [21059], [21060], [21061], [21062], [21063], [21064], [21065], [21066], [21067], [21068], [21069], [21070], [21071], [21072], [21073], [21074], [21075], [21076], [21077], [21078], [21079], [21080], [21081], [21082], [21083], [21084], [21085], [21086], [21087], [21088], [21089], [20958], [20959], [20960], [20961], [20962], [20963], [20964], [20965], [20966], [20967], [20968], [20969], [20970], [20971], [20972], [20973], [20974], [20975], [20976], [20977], [20978], [20979], [20980], [20981], [20982], [20983], [20984], [20985], [20986], [20987], [20988], [20989], [20990], [20991]]
</code></pre>
<p>The <code>shapes.csv</code> looks like this:</p>
<pre><code>shape_id,shape_pt_lat,shape_pt_lon,shape_pt_sequence,is_stop
20958,44.0577683,-123.0873313,1,0
20958,44.0577163,-123.087073,2,0
20958,44.0576286,-123.0867103,3,0
20958,44.0574258,-123.086641,4,0
20958,44.0571421,-123.0866518,5,0
20958,44.0568706,-123.086653,6,0
20958,44.0566161,-123.0867028,7,0
20958,44.0565641,-123.0869733,8,0
20958,44.0565503,-123.0872603,9,0
20958,44.0565536,-123.087631,10,0
20958,44.0565439,-123.0879283,11,0
20958,44.0564661,-123.087894,12,0
20958,44.0565124,-123.0881793,13,0
20958,44.0565181,-123.0884921,14,0
20958,44.0565331,-123.0888668,15,0
20958,44.0565406,-123.0892323,16,0
20958,44.0565406,-123.0896295,17,0
20958,44.0563515,-123.0897096,18,0
20958,44.056073,-123.0897108,19,0
20958,44.0558501,-123.0897,20,0
20958,44.0558358,-123.0897016,21,0
20958,44.0556489,-123.0896861,22,0
20958,44.0554398,-123.0896781,23,0
20958,44.0552033,-123.0896776,24,0
20958,44.0549253,-123.089692,25,0
20958,44.0546778,-123.0897281,26,0
20958,44.0546578,-123.0897326,27,0
20958,44.0546338,-123.0896965,28,0
20958,44.0543988,-123.0896838,29,0
20958,44.0543536,-123.0899543,30,0
20958,44.0543628,-123.0903496,31,0
20958,44.0543668,-123.0906733,32,0
20958,44.0543718,-123.0910178,33,0
</code></pre>
<p>In shapes.csv, for instance, <code>20958</code> has a max <code>shape_pt_sequence</code> value of 72. <code>20960</code> has a max <code>shape_pt_sequence</code> value of 400, etc.</p>
|
<p>Assuming that your REAL task is not validating the data file, reading the file and filling an appropriate data structure with a plain loop isn't computationally expensive at all:</p>
<pre><code>with open('shapes.csv') as f:
    next(f)  # skip the header line
    lines = [line.strip().split(',') for line in f]

data = {}; item = 0
for i, lat, lon, seq, stop in lines:
    i = int(i)
    if i != item:
        item = i
        data[item] = [(float(lat), float(lon))]
    else:
        data[item].append((float(lat), float(lon)))
</code></pre>
<p>There is no need for a <code>stop</code> sentinel in your data file, and there is no need to explicitly store an index for each coordinate pair.</p>
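<p>If you do want to stay in pandas, a sketch of the same structure built with <code>groupby</code> (the per-shape coordinate lists come out ordered because the file is already sorted by <code>shape_pt_sequence</code>):</p>
<pre><code>import pandas as pd

shapes = pd.read_csv('csv/shapes.csv')
data = {sid: list(zip(g['shape_pt_lat'], g['shape_pt_lon']))
        for sid, g in shapes.groupby('shape_id')}
</code></pre>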
|
python|csv|dictionary|pandas
| 0
|
5,877
| 61,217,866
|
How to get a list of event that occurred simultaneous
|
<p>I'm trying to get the number of simultaneous telephone calls. I have this dataframe and I want to get, for each user, how many simultaneous calls they had. My desired output is [{'A': 4}, {'E': 3}]</p>
<pre><code>user = ['A', 'A',
'A', 'E',
'F', 'E',
'E', 'A',
'G', 'A']
started_time = [
'2020-04-02 16:16:11',
'2020-04-02 16:06:25',
'2020-04-02 16:11:53',
'2020-04-02 16:29:29',
'2020-04-10 16:09:56',
'2020-04-02 16:30:18',
'2020-04-02 16:25:20',
'2020-04-02 16:00:47',
'2020-04-07 16:11:44',
'2020-04-05 16:55:25'
]
ended_time = [
'2020-04-02 16:22:05',
'2020-04-02 16:17:22',
'2020-04-02 16:21:50',
'2020-04-02 16:34:29',
'2020-04-10 16:44:15',
'2020-04-02 16:41:26',
'2020-04-02 16:53:02',
'2020-04-02 16:45:49',
'2020-04-07 16:57:37',
'2020-04-05 16:59:26',
]
df = pd.DataFrame({
'user':user,
'started_time':started_time,
'ended_time':ended_time
})
df['started_time'] = pd.to_datetime(df['started_time'])
df['ended_time'] = pd.to_datetime(df['ended_time'])
df['sim_calls'] = None
</code></pre>
<p>print</p>
<pre><code> user started_time ended_time sim_calls
0 A 2020-04-02 16:16:11 2020-04-02 16:22:05 None
1 A 2020-04-02 16:06:25 2020-04-02 16:17:22 None
2 A 2020-04-02 16:11:53 2020-04-02 16:21:50 None
3 E 2020-04-02 16:29:29 2020-04-02 16:34:29 None
4 F 2020-04-10 16:09:56 2020-04-10 16:44:15 None
5 E 2020-04-02 16:30:18 2020-04-02 16:41:26 None
6 E 2020-04-02 16:25:20 2020-04-02 16:53:02 None
7 A 2020-04-02 16:00:47 2020-04-02 16:45:49 None
8 G 2020-04-07 16:11:44 2020-04-07 16:57:37 None
9 A 2020-04-05 16:55:25 2020-04-05 16:59:26 None
</code></pre>
<p>Removing all operators that have fewer than 3 calls that day</p>
<pre><code>ab = df.groupby('user').count()
ab = ab.reset_index('user')
ab = ab[ab['started_time']>2]
operators = list(ab.user.unique())
</code></pre>
<p>result</p>
<pre><code> user started_time ended_time sim_calls
0 A 5 5 0
1 E 3 3 0
</code></pre>
<p>computation</p>
<pre><code>active_events_index= []
simulteaneous_call = []
for user in operators:
my_list_of_operators =[]
my_list_of_operators.append(user)
my_list_of_operators_count = 0
new_df = df[df['user']==user]
for i in new_df.index:
started_time = new_df.loc[i,"started_time"]
ended_time = new_df.loc[i,"ended_time"]
for row in new_df.index:
if (new_df.loc[row,"started_time"] <= started_time and new_df.loc[row,"ended_time"] >= started_time or new_df.loc[row,"ended_time"] <= ended_time ) :
print(new_df.loc[row])
my_list_of_operators_count += 1
simulteaneous_call.append({my_list_of_operators[0]:my_list_of_operators_count})
</code></pre>
<p>result </p>
<pre><code>print(simulteaneous_call)
[{'A': 18}, {'E': 8}]
</code></pre>
<p>My desired output should have been</p>
<pre><code>[{'A': 4}, {'E': 3}]
</code></pre>
|
<p>I assume any overlapping call of the same user to be "simultaneous". Explanation in code:</p>
<pre><code>def count_simul(group):
n = 0
g = []
ranges = {}
# For each user, start the loop with a time range covering the distant
# past to distant future
started_time = pd.Timestamp('1900-01-01')
ended_time = pd.Timestamp('2099-12-31')
for index, row in group.iterrows():
if (row['started_time'] < ended_time) and (started_time < row['ended_time']):
# If the current row overlaps with the time range defined by
# `started_time` and `ended_time`, set `started_time` and
# `ended_time` to the intersection of the two. And keep the row
# in the current time group
started_time = max(started_time, row['started_time'])
ended_time = min(ended_time, row['ended_time'])
else:
# Otherwise, set `started_time` and `ended_time` to those of the
# current row and assign the current row to a new time group
started_time, ended_time = row[['started_time', 'ended_time']]
n += 1
# `ranges` is a dictionary mapping each group number to the time range
ranges[n] = (started_time, ended_time)
g.append(n)
# Group the rows by their time group number and get the size
freq = group.groupby(np.array(g)).size()
freq.index = freq.index.map(ranges)
return freq
df.sort_values(['user', 'started_time', 'ended_time']) \
.groupby('user') \
.apply(count_simul) \
.replace(1, np.nan).dropna() # we don't consider groups of 1 to be "simultaneous"
</code></pre>
<p>Result:</p>
<pre><code>user
A 2020-04-02 16:16:11 2020-04-02 16:17:22 4.0
E 2020-04-02 16:30:18 2020-04-02 16:34:29 3.0
dtype: float64
</code></pre>
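<p>To turn that Series into the exact <code>[{'A': 4}, {'E': 3}]</code> shape you asked for (assuming <code>out</code> holds the result of the snippet above):</p>
<pre><code>result = [{user: int(n)} for (user, _), n in out.items()]
# [{'A': 4}, {'E': 3}]
</code></pre>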
|
python-3.x|pandas
| 1
|
5,878
| 61,373,070
|
Maplotlib calendar subplot need to avoid
|
<p><a href="https://i.stack.imgur.com/K3jPS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K3jPS.png" alt="enter image description here"></a></p>
<p>I tried to plot a calendar graphic with this code, which I adapted from other code. It plots a graphic, but I do not know how to avoid the empty second subplot.</p>
<p>Can you help to solve this error, please?</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
total_test = pd.read_csv('datos_pos_neg_peru.csv', sep=";")
df = total_test
df['Time'] = pd.to_datetime(df[['Year', 'Month','Day']])
df = df.drop(['Year', 'Month','Day'], axis = 1)
df.rename(columns = {'Recuperdos_dia' : 'Temp'}, inplace = True)
df = df.groupby([df['Time'].dt.date]).mean()
df.index = pd.to_datetime(df.index)
cal = {'2020': df[df.index.year == 2020]}
DAYS = ['Lun', 'Mar', 'Mier', 'Jue', 'Vie', 'Sab','Dom']
MONTHS = ['Ene', 'Feb', 'Mar', 'Abr', 'May', 'Jun', 'Jul', 'Ago', 'Set', 'Oct', 'Nov', 'Dic']
fig, ax = plt.subplots(2, 1, figsize = (20,15))
for i, val in enumerate(['2020']):
start = cal.get(val).index.min()
end = cal.get(val).index.max()
start_sun = start - np.timedelta64((start.dayofweek + 1) % 7, 'D')
end_sun = end + np.timedelta64(7 - end.dayofweek -1, 'D')
num_weeks = (end_sun - start_sun).days // 7
heatmap = np.full([7, num_weeks], np.nan)
ticks = {}
y = np.arange(8) - 0.5
x = np.arange(num_weeks + 1) - 0.5
for week in range(num_weeks):
for day in range(7):
date = start_sun + np.timedelta64(7 * week + day, 'D')
if date.day == 1:
ticks[week] = MONTHS[date.month - 1]
if date.dayofyear == 1:
ticks[week] += f'\n{date.year}'
if start <= date < end:
heatmap[day, week] = cal.get(val).loc[date, 'Temp']
mesh = ax[i].pcolormesh(x, y, heatmap, cmap = 'Reds', edgecolors = 'grey')
ax[i].invert_yaxis()
ax[i].set_xticks(list(ticks.keys()))
ax[i].set_xticklabels(list(ticks.values()))
ax[i].set_yticks(np.arange(7))
ax[i].set_yticklabels(DAYS)
ax[i].set_ylim(6.5,-0.5)
ax[i].set_aspect('equal')
ax[i].set_title(val, fontsize = 20)
# Hatch for out of bound values in a year
    ax[i].patch.set(hatch='xx', edgecolor='grey')
cbar_ax = fig.add_axes([0.15, -0.10, 0.3, 0.05])
cbar = fig.colorbar(mesh, orientation="horizontal", pad=0.1, cax = cbar_ax)
r = cbar.vmax - cbar.vmin
</code></pre>
|
<p>You call <code>fig, ax = plt.subplots(2, 1, ....)</code>. This means 2 rows, 1 column. If you only want one subplot, use <code>fig, ax = plt.subplots(1, 1, ...)</code>. Thereafter, you should directly use <code>ax</code> instead of <code>ax[i]</code>.</p>
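<p>A runnable minimal sketch of the single-Axes pattern, with dummy data standing in for your calendar heatmap:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

heatmap = np.random.rand(7, 52)        # 7 weekday rows x 52 week columns
x = np.arange(53) - 0.5
y = np.arange(8) - 0.5

fig, ax = plt.subplots(1, 1, figsize=(20, 4))   # a single Axes, not an array
mesh = ax.pcolormesh(x, y, heatmap, cmap='Reds', edgecolors='grey')
ax.invert_yaxis()
ax.set_title('2020', fontsize=20)
fig.colorbar(mesh, orientation='horizontal', pad=0.1)
plt.show()
</code></pre>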
|
python|pandas|matplotlib
| 1
|
5,879
| 68,454,383
|
Can u check why is an error coming after concatenating numpy arrays
|
<p>I tried concatenating 2 numpy arrays but I got an error.
The error is:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\hp\Desktop\Python\Numpy\OperationsOnArrays1.py", line 28, in <module>
array3 = np.concatenate((array,array2))
File "<__array_function__ internals>", line 5, in concatenate
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 3 and the array at index 1 has size 2
</code></pre>
<pre><code>import numpy as np
array = np.array([2,43,2,
4,1,3])
# Sorting an array by ascending order
array = np.sort(array)
# Sorting by specifying the axis
array = np.array([[2,5,4],[3,2,1]])
# array5 = np.array([[2,5,4],[3,2,1]])
array = np.sort(array,axis=1)
# Concatenate (adding 1 array after another)
array2 = np.zeros((4,2))
print(array2)
array3 = np.concatenate((array,array2))
print(array)
print(array2)
print(array3)
</code></pre>
|
<p><code>print(array.shape, array2.shape)</code> will print <code>(2, 3) (4, 2)</code>.</p>
<p>For <code>concatenate</code> to work along the default axis 0 (stacking rows), every dimension except the first has to match, so both arrays need the same number of columns. Here one array has 3 columns and the other has 2, which is exactly what the error message reports.</p>
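<p>For example, stacking extra rows under the sorted <code>(2, 3)</code> array requires a block that also has 3 columns:</p>
<pre><code>array2 = np.zeros((4, 3))
array3 = np.concatenate((array, array2))   # works: result has shape (6, 3)

# To join side by side instead, match the row count and pass axis=1:
# np.concatenate((array, np.zeros((2, 2))), axis=1)   # shape (2, 5)
</code></pre>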
|
python|arrays|python-3.x|numpy
| 2
|
5,880
| 68,792,486
|
How to select a value in a dataframe with MultiIndex?
|
<p>I use the pandas library to analyze data coming from an Excel file.
I used pivot_table to get a pivot table with the information I'm interested in. I end up with a DataFrame with a MultiIndex.
For "OPE-2016-0001", I would like to obtain the figures for 2017, for example. I've tried lots of things and nothing works. What is the correct method to use? Thank you.</p>
<pre><code>import pandas as pd
import numpy as np
from math import *
import tkinter as tk
pd.set_option('display.expand_frame_repr', False)
df = pd.read_csv('datas.csv')
def tcd_op_dataExcercice():
global df
new_df = df.assign(Occurence=1)
tcd= new_df.pivot_table(index=['Numéro opération',
'Libellé opération'],
columns=['Exercice'],
values=['Occurence'],
aggfunc=[np.sum],
margins=True,
fill_value=0,
margins_name='Total')
print(tcd)
print(tcd.xs('ALSTOM 8', level='Libellé opération', drop_level=False))
</code></pre>
<p><code>tcd_op_dataExcercice()</code></p>
<p>I get the following table (image).
How do I get the value framed in red?</p>
<p><a href="https://i.stack.imgur.com/bCI56.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bCI56.png" alt="image" /></a></p>
|
<p>You can use <code>.loc</code> to select rows by a DataFrame's Index's labels. If the Index is a MultiIndex, it will index into the first level of the MultiIndex (<code>Numéro opération</code> in your case). Though you can pass a tuple to index into both levels (e.g. if you specifically wanted <code>("OPE-2016-0001", "ALSTOM 8")</code>)</p>
<p>It's worth noting that the columns of your pivoted data are also a MultiIndex, because you specified the <code>aggfunc</code>, <code>values</code> and <code>columns</code> as lists, rather than individual values (i.e. without the <code>[]</code>). Pandas creates a MultiIndex because of these lists, even though each list had only one element.</p>
<p>So you'll also need to pass a tuple to index into the columns to get the value for 2017:</p>
<pre><code>tcd.loc["OPE-2016-0001", ('sum', 'Occurence', 2017)]
</code></pre>
<p>If you had instead just specified the <code>aggfunc</code> etc as individual strings, the columns would just be the years and you could select the values by:</p>
<pre><code>tcd.loc["OPE-2016-0001", 2017]
</code></pre>
<p>Or if you specifically wanted the value for <code>ALSTOM 8</code>:</p>
<pre><code>tcd.loc[("OPE-2016-0001", "ALSTOM 8"), 2017]
</code></pre>
<p>An alternative to indexing into a MultiIndex would also be to just <code>.reset_index()</code> after pivoting -- in which case the levels of the MultiIndex will just become columns in the data. And you can then select rows based on the values of those columns. E.g (assuming you specified <code>aggfunc</code> etc as strings):</p>
<pre><code>tcd = tcd.reset_index()
tcd.query("'Numéro Opéracion' == 'OPE-2016-0001'")[2017]
</code></pre>
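<p>Another option, keeping your list-style pivot, is to drop the two constant column levels afterwards so only the years remain as column labels:</p>
<pre><code>tcd.columns = tcd.columns.droplevel([0, 1])   # remove the 'sum' and 'Occurence' levels
tcd.loc["OPE-2016-0001", 2017]
</code></pre>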
|
pandas|dataframe|multi-index
| 1
|
5,881
| 68,533,960
|
Multiple modes for multiple accounts in Python
|
<p>I have a dataframe of several accounts that display different modes of animal categories. How can I identify the accounts that have more than 1 mode?</p>
<p>For example, note that account 3 only has one mode (i.e. "dog"), but accounts 1, 2 and 4 have multiple modes (i.e more than one mode).</p>
<pre><code>test = pd.DataFrame({'account':[1,1,1,2,2,2,2,3,3,3,3,4,4,4,4],
'category':['cat','dog','rabbit','cat','cat','dog','dog','dog','dog','dog','rabbit','rabbit','cat','cat','rabbit']})
</code></pre>
<p>The expected output I'm looking for would be something like this:</p>
<pre><code>pd.DataFrame({'account':[1,2,4],'modes':[3,2,2]})
</code></pre>
<p>Secondary to this, I am then trying to take any random highest mode for all accounts having multiple modes. I have come up with the following code; however, this only returns the first (alphabetical) mode for each account. My intuition tells me something could be written within the <code>iloc</code> brackets below, perhaps a random index between 0 and the total number of modes, but I'm unable to fully get there.</p>
<pre><code>test.groupby('account')['category'].agg(lambda x: x.mode(dropna=False).iloc[0])
</code></pre>
<p>Any suggestions? Thanks much.</p>
|
<p>You can use numpy.random.choice for that</p>
<pre><code>test.groupby('account')['category'].agg(
lambda x: np.random.choice(x.mode(dropna=False)))
</code></pre>
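<p>For the first part of the question (finding the accounts with more than one mode), counting the length of each group's mode reproduces your expected output:</p>
<pre><code>n_modes = test.groupby('account')['category'].agg(lambda x: len(x.mode()))
n_modes[n_modes > 1].reset_index(name='modes')
</code></pre>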
|
python|pandas|mode
| 1
|
5,882
| 36,442,094
|
Using Pandas filtering non-numeric data from two columns of a Dataframe
|
<p>I'm loading a Pandas dataframe which has many data types (loaded from Excel). Two particular columns should be floats, but occasionally a researcher entered in a random comment like "not measured." I need to drop any rows where any values in one of two columns is not a number and preserve non-numeric data in other columns. A simple use case looks like this (the real table has several thousand rows...)</p>
<pre><code>import pandas as pd
df = pd.DataFrame(dict(A = pd.Series([1,2,3,4,5]), B = pd.Series([96,33,45,'',8]), C = pd.Series([12,'Not measured',15,66,42]), D = pd.Series(['apples', 'oranges', 'peaches', 'plums', 'pears'])))
</code></pre>
<p>Which results in this data table:</p>
<pre><code> A B C D
0 1 96 12 apples
1 2 33 Not measured oranges
2 3 45 15 peaches
3 4 66 plums
4 5 8 42 pears
</code></pre>
<p>I'm not clear how to get to this table:</p>
<pre><code> A B C D
0 1 96 12 apples
2 3 45 15 peaches
4 5 8 42 pears
</code></pre>
<p>I tried dropna, but the types are "object" since there are non-numeric entries.
I can't convert the values to floats without either converting the whole table, or doing one series at a time which loses the relationship to the other data in the row. Perhaps there is something simple I'm not understanding?</p>
|
<p>You can first create a subset with columns <code>B</code> and <code>C</code>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow"><code>apply</code></a> <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html" rel="nofollow"><code>to_numeric</code></a>, and check that <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow"><code>all</code></a> values are <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.notnull.html" rel="nofollow"><code>notnull</code></a>. Then use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow">boolean indexing</a>:</p>
<pre><code>print df[['B','C']].apply(pd.to_numeric, errors='coerce').notnull().all(axis=1)
0 True
1 False
2 True
3 False
4 True
dtype: bool
print df[df[['B','C']].apply(pd.to_numeric, errors='coerce').notnull().all(axis=1)]
A B C D
0 1 96 12 apples
2 3 45 15 peaches
4 5 8 42 pears
</code></pre>
<p>The next solution uses <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.isdigit.html" rel="nofollow"><code>str.isdigit</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isnull.html" rel="nofollow"><code>isnull</code></a> and xor (<code>^</code>):</p>
<pre><code>print df['B'].str.isdigit().isnull() ^ df['C'].str.isdigit().notnull()
0 True
1 False
2 True
3 False
4 True
dtype: bool
print df[df['B'].str.isdigit().isnull() ^ df['C'].str.isdigit().notnull()]
A B C D
0 1 96 12 apples
2 3 45 15 peaches
4 5 8 42 pears
</code></pre>
<p>But the solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html" rel="nofollow"><code>to_numeric</code></a> combined with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.isnull.html" rel="nofollow"><code>isnull</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.notnull.html" rel="nofollow"><code>notnull</code></a> is the fastest:</p>
<pre><code>print df[pd.to_numeric(df['B'], errors='coerce').notnull()
^ pd.to_numeric(df['C'], errors='coerce').isnull()]
A B C D
0 1 96 12 apples
2 3 45 15 peaches
4 5 8 42 pears
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>#len(df) = 5k
df = pd.concat([df]*1000).reset_index(drop=True)
In [611]: %timeit df[pd.to_numeric(df['B'], errors='coerce').notnull() ^ pd.to_numeric(df['C'], errors='coerce').isnull()]
1000 loops, best of 3: 1.88 ms per loop
In [612]: %timeit df[df['B'].str.isdigit().isnull() ^ df['C'].str.isdigit().notnull()]
100 loops, best of 3: 16.1 ms per loop
In [613]: %timeit df[df[['B','C']].apply(pd.to_numeric, errors='coerce').notnull().all(axis=1)]
The slowest run took 4.28 times longer than the fastest. This could mean that an intermediate result is being cached
100 loops, best of 3: 3.49 ms per loop
</code></pre>
|
excel|numpy|pandas
| 1
|
5,883
| 36,292,441
|
How long are Pandas groupby objects remembered?
|
<p>I have the following example Python 3.4 script. It does the following:</p>
<ol>
<li>creates a dataframe,</li>
<li>converts the date variable to datetime64 format,</li>
<li>creates a groupby object based on two categorical variables,</li>
<li>produces a dataframe that contains a count of the number items in each group,</li>
<li>merges count dataframe back with original dataframe to create a column containing the number of rows in each group</li>
<li>creates a column containing the difference in dates between sequential rows.</li>
</ol>
<p>Here is the script:</p>
<pre><code>import numpy as np
import pandas as pd
# Create dataframe consisting of id, date and two categories (gender and age)
tempDF = pd.DataFrame({ 'id': [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],
'date': ["02/04/2015 02:34","06/04/2015 12:34","09/04/2015 23:03","12/04/2015 01:00","15/04/2015 07:12","21/04/2015 12:59","29/04/2015 17:33","04/05/2015 10:44","06/05/2015 11:12","10/05/2015 08:52","12/05/2015 14:19","19/05/2015 19:22","27/05/2015 22:31","01/06/2015 11:09","04/06/2015 12:57","10/06/2015 04:00","15/06/2015 03:23","19/06/2015 05:37","23/06/2015 13:41","27/06/2015 15:43"],
'gender': ["male","female","female","male","male","female","female",np.nan,"male","male","female","male","female","female","male","female","male","female",np.nan,"male"],
'age': ["young","old","old","old","old","old",np.nan,"old","old","young","young","old","young","young","old",np.nan,"old","young",np.nan,np.nan]})
# Convert date to datetime
tempDF['date'] = pd.to_datetime(tempDF['date'])
# Create groupby object based on two categorical variables
tempGroupby = tempDF.sort_values(['gender','age','id']).groupby(['gender','age'])
# Count number in each group and merge with original dataframe to create 'count' column
tempCountsDF = tempGroupby['id'].count().reset_index(drop=False)
tempCountsDF = tempCountsDF.rename(columns={'id': 'count'})
tempDF = tempDF.merge(tempCountsDF, on=['gender','age'])
# Calculate difference between consecutive rows in each group. (First row in each
# group should have date difference = NaT)
tempGroupby = tempDF.sort_values(['gender','age','id']).groupby(['gender','age'])
tempDF['diff'] = tempGroupby['date'].diff()
print(tempDF)
</code></pre>
<p>This script produces the following output:</p>
<pre><code> age date gender id count diff
0 young 2015-02-04 02:34:00 male 1 2 NaT
1 young 2015-10-05 08:52:00 male 10 2 243 days 06:18:00
2 old 2015-06-04 12:34:00 female 2 3 NaT
3 old 2015-09-04 23:03:00 female 3 3 92 days 10:29:00
4 old 2015-04-21 12:59:00 female 6 3 -137 days +13:56:00
5 old 2015-12-04 01:00:00 male 4 6 NaT
6 old 2015-04-15 07:12:00 male 5 6 -233 days +06:12:00
7 old 2015-06-05 11:12:00 male 9 6 51 days 04:00:00
8 old 2015-05-19 19:22:00 male 12 6 -17 days +08:10:00
9 old 2015-04-06 12:57:00 male 15 6 -44 days +17:35:00
10 old 2015-06-15 03:23:00 male 17 6 69 days 14:26:00
11 young 2015-12-05 14:19:00 female 11 4 NaT
12 young 2015-05-27 22:31:00 female 13 4 -192 days +08:12:00
13 young 2015-01-06 11:09:00 female 14 4 -142 days +12:38:00
14 young 2015-06-19 05:37:00 female 18 4 163 days 18:28:00
</code></pre>
<p>And this exactly what I'd expect. However, it seems to rely on creating the groupby object twice (in exactly the same way). If the second groupby definition is commented out, it seems to lead to a very different output in the diff column:</p>
<pre><code>import numpy as np
import pandas as pd
# Create dataframe consisting of id, date and two categories (gender and age)
tempDF = pd.DataFrame({ 'id': [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],
'date': ["02/04/2015 02:34","06/04/2015 12:34","09/04/2015 23:03","12/04/2015 01:00","15/04/2015 07:12","21/04/2015 12:59","29/04/2015 17:33","04/05/2015 10:44","06/05/2015 11:12","10/05/2015 08:52","12/05/2015 14:19","19/05/2015 19:22","27/05/2015 22:31","01/06/2015 11:09","04/06/2015 12:57","10/06/2015 04:00","15/06/2015 03:23","19/06/2015 05:37","23/06/2015 13:41","27/06/2015 15:43"],
'gender': ["male","female","female","male","male","female","female",np.nan,"male","male","female","male","female","female","male","female","male","female",np.nan,"male"],
'age': ["young","old","old","old","old","old",np.nan,"old","old","young","young","old","young","young","old",np.nan,"old","young",np.nan,np.nan]})
# Convert date to datetime
tempDF['date'] = pd.to_datetime(tempDF['date'])
# Create groupby object based on two categorical variables
tempGroupby = tempDF.sort_values(['gender','age','id']).groupby(['gender','age'])
# Count number in each group and merge with original dataframe to create 'count' column
tempCountsDF = tempGroupby['id'].count().reset_index(drop=False)
tempCountsDF = tempCountsDF.rename(columns={'id': 'count'})
tempDF = tempDF.merge(tempCountsDF, on=['gender','age'])
# Calculate difference between consecutive rows in each group. (First row in each
# group should have date difference = NaT)
# ****** THIS TIME THE FOLLOWING GROUPBY DEFINITION IS COMMENTED OUT *****
# tempGroupby = tempDF.sort_values(['gender','age','id']).groupby(['gender','age'])
tempDF['diff'] = tempGroupby['date'].diff()
print(tempDF)
</code></pre>
<p>And, this time the output is very different (and NOT what I wanted at all)</p>
<pre><code> age date gender id count diff
0 young 2015-02-04 02:34:00 male 1 2 NaT
1 young 2015-10-05 08:52:00 male 10 2 NaT
2 old 2015-06-04 12:34:00 female 2 3 92 days 10:29:00
3 old 2015-09-04 23:03:00 female 3 3 NaT
4 old 2015-04-21 12:59:00 female 6 3 -233 days +06:12:00
5 old 2015-12-04 01:00:00 male 4 6 -137 days +13:56:00
6 old 2015-04-15 07:12:00 male 5 6 NaT
7 old 2015-06-05 11:12:00 male 9 6 NaT
8 old 2015-05-19 19:22:00 male 12 6 51 days 04:00:00
9 old 2015-04-06 12:57:00 male 15 6 243 days 06:18:00
10 old 2015-06-15 03:23:00 male 17 6 NaT
11 young 2015-12-05 14:19:00 female 11 4 -17 days +08:10:00
12 young 2015-05-27 22:31:00 female 13 4 -192 days +08:12:00
13 young 2015-01-06 11:09:00 female 14 4 -142 days +12:38:00
14 young 2015-06-19 05:37:00 female 18 4 -44 days +17:35:00
</code></pre>
<p>(In my real-life script the results seem to be a little erratic, sometimes it works and sometimes it doesn't. But in the above script, the different outputs seem to occur consistently.)</p>
<p>Why is it necessary to recreate the groupby object on what is, essentially, the same dataframe (albeit with an additional column added) immediately before using the .diff() function? This seems very dangerous to me.</p>
|
<p>It is not the same dataframe: the merge has changed the index. For example:</p>
<pre><code>tempDF.loc[1].id # before
10
tempDF.loc[1].id # after
2
</code></pre>
<p>So if you compute <code>tempGroupby</code> with the old <code>tempDF</code> and then change the indexes in <code>tempDF</code> when you do this:</p>
<pre><code>tempDF['diff'] = tempGroupby['date'].diff()
</code></pre>
<p>the indexes do not match as you expect. You are assigning to each row the difference corresponding to the row that had that index in the old <code>tempDF</code>.</p>
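<p>A sketch of one way to stay safe (using the <code>tempDF</code> from the question): fix the row order and reset the index <em>before</em> grouping, so the groupby result and the assignment target share the same index:</p>
<pre><code>tempDF = tempDF.sort_values(['gender', 'age', 'id']).reset_index(drop=True)
tempGroupby = tempDF.groupby(['gender', 'age'])
tempDF['diff'] = tempGroupby['date'].diff()
</code></pre>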
|
python|python-3.x|pandas|dataframe
| 2
|
5,884
| 65,904,279
|
Scatter Pie Plot Python Pandas
|
<p>"Scatter Pie Plot" ( a scatter plot using pie charts instead of dots). I require this as I have to represent 3 dimensions of data.
1: x axis (0-6)
2: y axis (0-6)
3: Category lets say (A,B,C - H)</p>
<p>If several rows share the same x and y values, I want a pie chart at that position representing the mix of categories.
Similar to the graph seen in this link:
<a href="https://matplotlib.org/gallery/lines_bars_and_markers/scatter_piecharts.html#sphx-glr-gallery-lines-bars-and-markers-scatter-piecharts-py" rel="nofollow noreferrer">https://matplotlib.org/gallery/lines_bars_and_markers/scatter_piecharts.html#sphx-glr-gallery-lines-bars-and-markers-scatter-piecharts-py</a></p>
<p>or this image from Tableau (screenshot not available).</p>
<p>As I am limited to using only Python, I have been struggling to adapt the code to work for me.
Could anyone help me with this problem? I would be very grateful!</p>
<p>Example data:</p>
<pre><code>XVAL YVAL GROUP
1.3 4.5 A
1.3 4.5 B
4 2 E
4 6 A
2 4 A
2 4 B
1 1 G
1 1 C
1 2 B
1 2 D
3.99 4.56 G
</code></pre>
<p>The final output should have 6 pie charts on the X & Y with 1 containing 3 groups and 2 containing 3 groups.</p>
<p>My attempt:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
def draw_pie(dist,
xpos,
ypos,
size,
ax=None):
if ax is None:
fig, ax = plt.subplots(figsize=(10,8))
# for incremental pie slices
cumsum = np.cumsum(dist)
cumsum = cumsum/ cumsum[-1]
pie = [0] + cumsum.tolist()
for r1, r2 in zip(pie[:-1], pie[1:]):
angles = np.linspace(2 * np.pi * r1, 2 * np.pi * r2)
x = [0] + np.cos(angles).tolist()
y = [0] + np.sin(angles).tolist()
xy = np.column_stack([x, y])
ax.scatter([xpos], [ypos], marker=xy, s=size)
return ax
fig, ax = plt.subplots(figsize=(40,40))
draw_pie([Group],'xval','yval',10000,ax=ax)
draw_pie([Group], 'xval', 'yval', 20000, ax=ax)
draw_pie([Group], 'xval', 'yval', 30000, ax=ax)
plt.show()
</code></pre>
|
<p>I'm not sure how to get 6 pie charts. If we group on <code>XVAL</code> and <code>YVAL</code>, there are 7 unique pairs. You can do something down this line:</p>
<pre><code>fig, ax = plt.subplots(figsize=(40,40))
for (x,y), d in df.groupby(['XVAL','YVAL']):
dist = d['GROUP'].value_counts()
draw_pie(dist, x, y, 10000*len(d), ax=ax)
plt.show()
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/tFyma.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tFyma.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib|scatterpie
| 1
|
5,885
| 65,792,565
|
Preserving training/validation split after restarting training from a checkpoint with TensorFlow
|
<p>I have written a TensorFlow training loop which does validation at the end of each epoch. At the start of the training I split my dataset into training and validation subsets (about 85%-15% split). My dataset actually consists of audio samples stored in small chunks on disk, and I randomly shuffle the entire dataset before splitting, so I get a completely even distribution over the training and validation subsets. Problem is, if I restart the training from a given checkpoint the random shuffle occurs again, and I suspect this can lead to data contamination - the validation phase is potentially going to be processing bits of the dataset that the network has already been trained on. I think I'm seeing this affect the loss and accuracy of the training after restarting, but it's hard to tell.</p>
<p>I can't find any info on this specific issue on the web, but my proposed solution is to cache the names of the files in the validation split to a file, and if restarting load them from there. Is there a better solution?</p>
<p>For clarity, I am using the tf.data.Dataset API, building both training and validation datasets with a simple dataset pipeline which begins by reading samples from the files on disk.</p>
|
<p>If you set the seed of the shuffling, the order will be consistent:</p>
<pre><code>import tensorflow as tf
for _ in range(5):
ds = tf.data.Dataset.range(1, 10).shuffle(4, seed=42).batch(3)
for i in ds:
print(i)
print()
</code></pre>
<pre><code>tf.Tensor([4 1 2], shape=(3,), dtype=int64)
tf.Tensor([7 3 6], shape=(3,), dtype=int64)
tf.Tensor([5 9 8], shape=(3,), dtype=int64)
tf.Tensor([4 1 2], shape=(3,), dtype=int64)
tf.Tensor([7 3 6], shape=(3,), dtype=int64)
tf.Tensor([5 9 8], shape=(3,), dtype=int64)
tf.Tensor([4 1 2], shape=(3,), dtype=int64)
tf.Tensor([7 3 6], shape=(3,), dtype=int64)
tf.Tensor([5 9 8], shape=(3,), dtype=int64)
tf.Tensor([4 1 2], shape=(3,), dtype=int64)
tf.Tensor([7 3 6], shape=(3,), dtype=int64)
tf.Tensor([5 9 8], shape=(3,), dtype=int64)
tf.Tensor([4 1 2], shape=(3,), dtype=int64)
tf.Tensor([7 3 6], shape=(3,), dtype=int64)
tf.Tensor([5 9 8], shape=(3,), dtype=int64)
</code></pre>
<p>So, all you need is the list of files in the same order each time, which you can do with <code>tf.data.Dataset.list_files</code>, and set <code>shuffle=False</code>:</p>
<pre><code>ds = tf.data.Dataset.list_files(r'C:\Users\User\Downloads\*', shuffle=False)
</code></pre>
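<p>For the train/validation split itself, a minimal sketch built on the same idea (the glob pattern and the 15% ratio are assumptions based on your description):</p>
<pre><code>import tensorflow as tf

file_paths = sorted(tf.io.gfile.glob('data/*.wav'))   # hypothetical location of the audio chunks
n_val = int(0.15 * len(file_paths))

files = tf.data.Dataset.from_tensor_slices(file_paths)
# Same seed means the same order, hence the same split after every restart
files = files.shuffle(len(file_paths), seed=42, reshuffle_each_iteration=False)

val_files = files.take(n_val)
train_files = files.skip(n_val)
</code></pre>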
|
python|tensorflow|machine-learning
| 2
|
5,886
| 65,719,217
|
Panda dataframe take column and append as new rows efficiently
|
<p>If I have a df:</p>
<pre><code>df = pd.DataFrame(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), columns=['a', 'b', 'c'])
</code></pre>
<p>and wish to take the second column "b" and append it to the end of a "new" df that has column "a", a value column, and a name column containing the name of the "b" column; then take the third column "c" and append it to the end of the new df, again together with "a" and with "c" in the name column.
It is time-series data with a datetime in "a" and a variable in "b" and "c"; sometimes there are 20 variables and sometimes only 1 or 2.</p>
<p>How do I do that in a clean and efficient way?
Right now I'm doing it like this, but I have to do it a hundred times for slightly different dataframes with the same idea.</p>
<pre><code>col_nam_list = list(df.columns.values)
df_1 = pd.DataFrame()
df_1["a"] = df["a"]
df_1["name"] = col_nam_list[1]
df_1["value"] = df["b"]
df_2 = pd.DataFrame()
df_2["a"] = df["a"]
df_2["name"] = col_nam_list[2]
df_2["value"] = df["c"]
result = pd.concat([df_1, df_2])
</code></pre>
<p>This should be the output
<a href="https://i.stack.imgur.com/Ph8fD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ph8fD.png" alt="results" /></a></p>
<p>Now this is not fun to write and it looks ugly and unnecessarily long. How do I improve my method?</p>
<p>BR</p>
|
<p>IIUC, you can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.melt.html?highlight=dataframe%20melt#pandas-dataframe-melt" rel="nofollow noreferrer"><code>pd.DataFrame.melt</code></a> with parameter <code>id_vars</code> equal to 'a',</p>
<pre><code>df.melt('a')
</code></pre>
<p>Output:</p>
<pre><code> a variable value
0 1 b 2
1 4 b 5
2 7 b 8
3 1 c 3
4 4 c 6
5 7 c 9
</code></pre>
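<p>If you want the identifier and value columns named <code>name</code> and <code>value</code> as in your code, <code>melt</code> takes those as parameters:</p>
<pre><code>df.melt('a', var_name='name', value_name='value')
</code></pre>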
|
python|pandas
| 2
|
5,887
| 63,606,492
|
How do I concatenate multiple pandas dataframe columns(Address details) into a single column, space delimited, and ignoring empty strings?
|
<p>So I've got a pandas dataframe that contains a ton of address info. Aka</p>
<pre><code>AddressNumber
StreetNamePrefix
StreetName
StreetNameSuffix
StreetNamePreDirectional
StreetNamePostDirectional
OccupancySuite
</code></pre>
<p>I'd like to combine everything except for OccupancySuite into Address1</p>
<p>I can get address2 easily enough, it's OccupancySuite.</p>
<p>What I'm getting hung up on is combining the rest of the columns, separated by a space, and ignoring the column AND space if it's null. I'd rather not have multiple spaces between address parts due to multiple null columns.</p>
<p>What I have currently is probably pretty hacky, but it gets me there minus the additional spaces between the columns/words.</p>
<pre><code>#Example Pandas DF with two addresses
import pandas as pd
data = [['123','','','easy','st','',''],['500','N','County Road','3932','','East','']]
df = pd.DataFrame(data,columns=['AddressNumber','StreetNamePreDirectional','StreetNamePrefix','StreetName','StreetNameSuffix','StreetNamePostDirectional','OccupancySuite'])
df['Address1']= df['AddressNumber'].fillna('') + ' ' + df['StreetNamePreDirectional'].fillna('') + ' ' + df['StreetNamePrefix'].fillna('') + ' ' + df['StreetName'].fillna('') + ' ' + df['StreetNameSuffix'].fillna('') + ' ' + df['StreetNamePostDirectional'].fillna('')
df.to_csv('localpath\\cleaned_addresses.csv')
</code></pre>
<p>If you open said csv, you'll see</p>
<pre><code>123 easy st
500 N County Road 3932 East
</code></pre>
<p>What I'm needing is</p>
<pre><code>123 easy st
500 N County Road 3932 East
</code></pre>
|
<p>I hope this helps you:</p>
<p>I added the column "Address1" to the data frame.</p>
<p>Then, you can run a for loop over the rows of the data frame (via its length) and over the columns.</p>
<p>With an if statement you can skip the last two columns, "OccupancySuite" and "Address1", and ignore the empty values.</p>
<pre><code>df["Address1"]=''
for a in range(0, len(df)):
    for element in df.columns:
        if element in ["OccupancySuite", "Address1"]:
            continue
        values=df[element].iloc[a]
        if not values:
            continue
        else:
            df["Address1"].iloc[a]+=df[element].iloc[a] + ' '
</code></pre>
<p>If the value is not empty, it is appended together with a trailing space (last line); you may want to strip the result afterwards.
Here you can see more info about the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer">iloc</a> method.</p>
<pre><code>df.to_csv('localpath\\cleaned_addresses.csv')
</code></pre>
<p>then you will have the correct spaces.</p>
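<p>As an alternative, a vectorized sketch that joins the address parts and then collapses the runs of spaces left by empty columns:</p>
<pre><code>addr_cols = ['AddressNumber', 'StreetNamePreDirectional', 'StreetNamePrefix',
             'StreetName', 'StreetNameSuffix', 'StreetNamePostDirectional']

df['Address1'] = (df[addr_cols].fillna('')
                  .apply(' '.join, axis=1)     # join every part with a space
                  .str.split().str.join(' '))  # drop the extra blanks
</code></pre>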
|
python|pandas|dataframe
| 0
|
5,888
| 63,578,833
|
tf.keras.backend.function for transforming embeddings inside tf.data.dataset
|
<p>I am trying to use the output of a neural network to transform data inside tf.data.dataset. Specifically, I am using a <a href="https://arxiv.org/pdf/1806.04734.pdf" rel="nofollow noreferrer">Delta-Encoder</a> to manipulate embeddings inside the tf.data pipeline. In so doing, however, I get the following error:</p>
<pre><code>OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
</code></pre>
<p>I have searched the <a href="https://www.tensorflow.org/guide/data#applying_arbitrary_python_logic" rel="nofollow noreferrer">dataset pipeline</a> page and stack overflow, but I could not find something that addresses my question. In the code below I am using an Autoencoder, as it yields an identical error with more concise code.</p>
<p>The offending part seems to be
<code>[[x,]] = tf.py_function(Auto_Func, [x], [tf.float32])</code>
inside
<code>tf_auto_transform</code>.</p>
<pre><code>num_embeddings = 100
input_dims = 1000
embeddings = np.random.normal(size = (num_embeddings, input_dims)).astype(np.float32)
target = np.zeros(num_embeddings)
#creating Autoencoder
inp = Input(shape = (input_dims,), name ='input')
hidden = Dense(10, activation = 'relu', name = 'hidden')(inp)
out = Dense(input_dims, activation = 'relu', name='output')(hidden)
auto_encoder = tf.keras.models.Model(inputs =inp, outputs=out)
# note: the model defined above is named auto_encoder, not Autoencoder
Auto_Func = tf.keras.backend.function(inputs = auto_encoder.get_layer(name='input').input,
                               outputs = auto_encoder.get_layer(name='output').input )
#Autoencoder transform for dataset.map
def tf_auto_transform(x, target):
x_shape = x.shape
#@tf.function
#def func(x):
# return tf.py_function(Auto_Func, [x], [tf.float32])
#[[x,]] = func(x)
[[x,]] = tf.py_function(Auto_Func, [x], [tf.float32])
x.set_shape(x_shape)
return x, target
def get_dataset(X,y, batch_size = 32):
train_ds = tf.data.Dataset.from_tensor_slices((X, y))
train_ds = train_ds.map(tf_auto_transform)
train_ds = train_ds.batch(batch_size)
return train_ds
dataset = get_dataset(embeddings, target, 2)
</code></pre>
<p>The above code yields the following error:</p>
<pre><code>OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
</code></pre>
<p>I tried to eliminate the error by running the commented out section of the tf_auto_transform function, but the error persisted.</p>
<p>SideNote: While it is true that the Delta encoder paper has <a href="https://github.com/EliSchwartz/DeltaEncoder" rel="nofollow noreferrer">code</a>, it is written in tf 1.x. I am trying to use tf 2.x with the tf functional API instead. Thank you for your help!</p>
|
<p>At the risk of outing myself as a n00b, the answer is to switch the order of the map and batch functions. I am trying to apply a neural network to make some changes on data. tf.keras models take <strong>batches</strong> as input, not <strong>individual samples</strong>. By batching the data first, I can run <strong>batches</strong> through my nn.</p>
<pre><code>def get_dataset(X,y, batch_size = 32):
train_ds = tf.data.Dataset.from_tensor_slices((X, y))
#The changed order
train_ds = train_ds.batch(batch_size)
    train_ds = train_ds.map(tf_auto_transform)
return train_ds
</code></pre>
<p>It really is that simple.</p>
|
tensorflow|tensorflow2.0|tf.keras|autoencoder|tf.data.dataset
| 0
|
5,889
| 21,902,211
|
How do I import a plotly graph within a python script?
|
<p>Is there any API call to import a plotly graph as a .png file within an existing python script? If so, what is it?</p>
<p>For example, having just created a graph using the plotly module for python...</p>
<pre><code>py.plot([data0, data1], layout = layout, filename='foo', fileopt='overwrite')
</code></pre>
<p>...is there a way of retrieving that graph as a .png within the same python script?</p>
|
<p>Another solution: add <code>.png</code> to any public plotly graph, e.g. </p>
<pre><code>import requests
r = requests.get('https://plot.ly/~chris/1638.png')
</code></pre>
<p>This works for any public plotly graph. <code>.png</code>, <code>.svg</code>, <code>.pdf</code> are supported. </p>
<p><code>.py</code>, <code>.jl</code>, <code>.json</code>, <code>.m</code>, <code>.r</code>, <code>.js</code> can be used to view code that would regenerate the graph (more here: <a href="http://blog.plot.ly/post/89402845747/a-graph-is-a-graph-is-a-graph" rel="nofollow">http://blog.plot.ly/post/89402845747/a-graph-is-a-graph-is-a-graph</a>)</p>
|
python|numpy|png|plotly
| 3
|
5,890
| 21,484,930
|
Numpy: boolean comparison on two vectors
|
<p>I have two vectors (two one-dimensional numpy arrays with the same number of elements), <em>a</em> and <em>b</em>, and I want to find the number of cases where:</p>
<p><strong>a < 0 and b >0</strong></p>
<p>But when I type the above (or something similar) into IPython I get:</p>
<p>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</p>
<p>How am I supposed to do the above operation?</p>
<p>Thank you</p>
|
<p>I'm not certain that I understand what you're trying to do, but you might want <code>((a < 0) & (b > 0)).sum()</code></p>
<pre><code>>>> a
array([-1, 0, 2, 0])
>>> b
array([4, 0, 5, 3])
>>> a < 0
array([ True, False, False, False], dtype=bool)
>>> b > 0
array([ True, False, True, True], dtype=bool)
>>> ((a < 0) & (b > 0)).sum()
1
</code></pre>
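<p>Equivalently, <code>np.count_nonzero</code> counts the <code>True</code> entries directly:</p>
<pre><code>>>> np.count_nonzero((a < 0) & (b > 0))
1
</code></pre>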
|
numpy
| 2
|
5,891
| 24,755,632
|
matplotlib can not import pylab
|
<p>I have installed <code>matplotlib</code> and of course its requirements <code>Numpy</code> and <code>scipy</code> on my PC, but I get this error message when I import <code>pylab</code>:</p>
<pre><code> >>> from matplotlib import pylab
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/anaconda/lib/python2.7/site-packages/matplotlib-1.4.x-py2.7-linux-x86_64.egg/matplotlib/pylab.py", line 230, in <module>
import matplotlib.finance
File "/anaconda/lib/python2.7/site-packages/matplotlib-1.4.x-py2.7-linux-x86_64.egg/matplotlib/finance.py", line 36, in <module>
from matplotlib.dates import date2num
File "/anaconda/lib/python2.7/site-packages/matplotlib-1.4.x-py2.7-linux-x86_64.egg/matplotlib/dates.py", line 137, in <module>
import matplotlib.ticker as ticker
File "anaconda/lib/python2.7/site-packages/matplotlib-1.4.x-py2.7-linux-x86_64.egg/matplotlib/ticker.py", line 138, in <module>
from matplotlib import transforms as mtransforms
File "/anaconda/lib/python2.7/site-packages/matplotlib-1.4.x-py2.7-linux-x86_64.egg/matplotlib/transforms.py", line 39, in <module>
from matplotlib._path import (affine_transform, count_bboxes_overlapping_bbox,
ImportError: /anaconda/lib/python2.7/site-packages/matplotlib-1.4.x-py2.7-linux-x86_64.egg/matplotlib/_path.so: undefined symbol: PyUnicodeUCS2_AsEncodedString
</code></pre>
<p>As far as I remember it used to work, but now I get this error message. I even re-installed it, but that didn't help. How can I fix it?</p>
|
<p>It may be that you are missing:</p>
<pre><code>import matplotlib as mpl
</code></pre>
<p>However, if that does not work, reinstall the Anaconda distribution, then make sure you have numpy and scipy installed.</p>
<p>The top of your program is then:</p>
<pre><code>import numpy as np
import scipy
import matplotlib as mpl
import pylab
</code></pre>
|
python|numpy|matplotlib|scipy
| -1
|
5,892
| 29,848,757
|
Multiplication of two arrays in numpy
|
<p>I have two numpy arrays:</p>
<pre><code>x = numpy.array([1, 2])
y = numpy.array([3, 4])
</code></pre>
<p>And I would like to create a matrix of elements products:</p>
<pre><code>[[3, 6],
[4, 8]]
</code></pre>
<p>What is the easiest way to do this?</p>
|
<p>One way is to use the <code>outer</code> function of <code>np.multiply</code> (and transpose if you want the same order as in your question):</p>
<pre><code>>>> np.multiply.outer(x, y).T
array([[3, 6],
[4, 8]])
</code></pre>
<p>Most ufuncs in NumPy have this useful <code>outer</code> feature (<code>add</code>, <code>subtract</code>, <code>divide</code>, etc.). As <a href="https://stackoverflow.com/a/29848822/3923281">@Akavall suggests</a>, <code>np.outer</code> is equivalent for the multiplication case here.</p>
<p>Alternatively, <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow noreferrer"><code>np.einsum</code></a> can perform the multiplication and transpose in one go:</p>
<pre><code>>>> np.einsum('i,j->ji', x, y)
array([[3, 6],
[4, 8]])
</code></pre>
<p>A third approach is to insert a new axis in one of the arrays and then multiply, although this is a little more verbose:</p>
<pre><code>>>> (x[:, np.newaxis] * y).T
array([[3, 6],
[4, 8]])
</code></pre>
<hr>
<p>For those interested in performance, here are the timings of the operations, from quickest to slowest, on two arrays of length 15:</p>
<pre><code>In [70]: x = np.arange(15)
In [71]: y = np.arange(0, 30, 2)
In [72]: %timeit np.einsum('i,j->ji', x, y)
100000 loops, best of 3: 2.88 µs per loop
In [73]: %timeit np.multiply.outer(x, y).T
100000 loops, best of 3: 5.48 µs per loop
In [74]: %timeit (x[:, np.newaxis] * y).T
100000 loops, best of 3: 6.68 µs per loop
In [75]: %timeit np.outer(x, y).T
100000 loops, best of 3: 12.2 µs per loop
</code></pre>
|
python|arrays|python-2.7|numpy|multiplication
| 7
|
5,893
| 29,872,350
|
Fastest way of comparing two numpy arrays
|
<p>I have two arrays:</p>
<pre><code>>>> import numpy as np
>>> a=np.array([2, 1, 3, 3, 3])
>>> b=np.array([1, 2, 3, 3, 3])
</code></pre>
<p>What is the fastest way of comparing these two arrays for equality of elements, regardless of the order?</p>
<p><strong>EDIT</strong>
I measured the execution times of the following functions:</p>
<pre><code>def compare1(): #works only for arrays without redundant elements
a=np.array([1,2,3,5,4])
b=np.array([2,1,3,4,5])
temp=0
for i in a:
temp+=len(np.where(b==i)[0])
if temp==5:
val=True
else:
val=False
return 0
def compare2():
a=np.array([1,2,3,3,3])
b=np.array([2,1,3,3,3])
val=np.all(np.sort(a)==np.sort(b))
return 0
def compare3(): #thx to ODiogoSilva
a=np.array([1,2,3,3,3])
b=np.array([2,1,3,3,3])
val=set(a)==set(b)
return 0
import numpy.lib.arraysetops as aso
def compare4(): #thx to tom10
a=np.array([1,2,3,3,3])
b=np.array([2,1,3,3,3])
val=len(aso.setdiff1d(a,b))==0
return 0
</code></pre>
<p>The results are:</p>
<pre><code>>>> import timeit
>>> timeit.timeit(compare1,number=1000)
0.0166780948638916
>>> timeit.timeit(compare2,number=1000)
0.016178131103515625
>>> timeit.timeit(compare3,number=1000)
0.008063077926635742
>>> timeit.timeit(compare4,number=1000)
0.03257489204406738
</code></pre>
<p>Seems like the "set"-method by ODiogoSilva is the fastest.</p>
<p>Do you know other methods that I can test as well?</p>
<p><strong>EDIT2</strong>
The runtime above was not the right measure for comparing arrays, as explained in a comment by user2357112.</p>
<pre><code>#test.py
import numpy as np
import numpy.lib.arraysetops as aso
#without duplicates
N=10000
a=np.arange(N,0,step=-2)
b=np.arange(N,0,step=-2)
def compare1():
temp=0
for i in a:
temp+=len(np.where(b==i)[0])
if temp==len(a):
val=True
else:
val=False
return val
def compare2():
val=np.all(np.sort(a)==np.sort(b))
return val
def compare3():
val=set(a)==set(b)
return val
def compare4():
val=len(aso.setdiff1d(a,b))==0
return val
</code></pre>
<p>The output is:</p>
<pre><code>>>> from test import *
>>> import timeit
>>> timeit.timeit(compare1,number=1000)
101.16708397865295
>>> timeit.timeit(compare2,number=1000)
0.09285593032836914
>>> timeit.timeit(compare3,number=1000)
1.425955057144165
>>> timeit.timeit(compare4,number=1000)
0.44780397415161133
</code></pre>
<p>Now compare2 is the fastest. Is there still a method that could outgun this?</p>
|
<p>Numpy has a collection of set operations:</p>
<pre><code>import numpy as np
import numpy.lib.arraysetops as aso
a=np.array([2, 1, 3, 3, 3])
b=np.array([1, 2, 3, 3, 3])
print aso.setdiff1d(a, b)
</code></pre>
<p>Note that <code>setdiff1d</code> works on the <em>unique</em> values and only checks one direction, so it ignores duplicates; for a strict comparison that respects multiplicity, sorting both arrays and comparing element-wise (your <code>compare2</code>) is the safe choice.</p>
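<p>For completeness, a one-liner sketch of the sort-based check (equivalent to <code>compare2</code>, except that <code>np.array_equal</code> returns False rather than raising when the shapes differ):</p>
<pre><code>>>> np.array_equal(np.sort(a), np.sort(b))
True
</code></pre>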
|
python|arrays|performance|python-2.7|numpy
| 4
|
5,894
| 53,617,786
|
How to remove rows in a dataframe with more than x number of Null values?
|
<p>I am trying to remove the rows in the data frame with more than 7 null values. Please suggest something that is efficient to achieve this.</p>
|
<p>If I understand correctly, you need to remove rows only if the total number of NaNs in a row is more than <code>7</code>:</p>
<pre><code>df = df[df.isnull().sum(axis=1) <= 7]
</code></pre>
<p>This keeps only the rows that have at most 7 NaNs and removes every row with more than 7.</p>
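<p>A built-in alternative is <code>dropna</code> with the <code>thresh</code> argument, which keeps rows having at least that many non-null values; requiring <code>df.shape[1] - 7</code> non-nulls expresses the same condition:</p>
<pre><code>df = df.dropna(thresh=df.shape[1] - 7)
</code></pre>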
|
python-3.x|pandas|dataframe|data-science
| 21
|
5,895
| 53,567,301
|
Numpy append and normal append
|
<pre><code>x = [[1,2],[2,3],[10,1],[10,10]]
def duplicatingRows(x, l):
severity = x[l][1]
if severity == 1 or severity == 2:
for k in range(1,6):
x.append(x[l])
for l in range(len(x)):
duplicatingRows(x,l)
print(x)
x = np.array([[1,2],[2,3],[10,1],[10,10]])
def duplicatingRows(x, l):
severity = x[l][1]
if severity == 1 or severity == 2:
for k in range(1,6):
x = np.append(x, x[l])
for l in range(len(x)):
duplicatingRows(x,l)
print(x)
</code></pre>
<p>I would like it to print an array with extra appended rows.
Giving out a list of <code>[[1, 2], [2, 3], [10, 1], [10, 10], [1, 2], [1, 2], [1, 2], [1, 2], [1, 2], [10, 1], [10, 1], [10, 1], [10, 1], [10, 1]]</code>. Why does it not work? I tried different combinations with concatenate as well, but it didn't work.</p>
|
<p>You have a couple of bugs in your code. In the NumPy version, <code>np.append</code> does not modify the array in place — it returns a new array — and without an <code>axis</code> argument it also flattens its inputs; on top of that, reassigning <code>x</code> inside the function rebinds only the local name, so the caller never sees the result. Here's a slightly improved, correct, and (partially) vectorized implementation of your code which prints your desired output.</p>
<p>Here we leverage <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.tile.html" rel="nofollow noreferrer"><code>numpy.tile</code></a> for repeating the rows, followed by a reshape so that we can append it along axis 0, which is what is needed.</p>
<pre><code>In [24]: x = np.array([[1,2],[2,3],[10,1],[10,10]])
def duplicatingRows(x, l):
severity = x[l][1]
if severity == 1 or severity == 2:
# replaced your `for` loop
# 5 corresponds to `range(1, 6)`
reps = np.tile(x[l], 5).reshape(5, -1)
x = np.append(x, reps, axis=0)
return x
for l in range(len(x)):
x = duplicatingRows(x,l)
print(x)
</code></pre>
<p><strong>Output</strong>:</p>
<pre><code>[[ 1 2]
[ 2 3]
[10 1]
[10 10]
[ 1 2]
[ 1 2]
[ 1 2]
[ 1 2]
[ 1 2]
[10 1]
[10 1]
[10 1]
[10 1]
[10 1]]
</code></pre>
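<p>If you'd rather avoid the per-row loop entirely, here is a fully vectorized sketch that produces the same ordering when applied to the original array (<code>np.isin</code> is available since NumPy 1.13):</p>
<pre><code>import numpy as np

x = np.array([[1, 2], [2, 3], [10, 1], [10, 10]])
mask = np.isin(x[:, 1], (1, 2))                         # rows whose severity is 1 or 2
x = np.concatenate([x, np.repeat(x[mask], 5, axis=0)])  # append 5 copies of each
</code></pre>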
|
python|list|numpy|append|atom-editor
| 1
|
5,896
| 53,446,129
|
Matplotlib Hour Minute Based Histogram
|
<pre><code>jupyter notebook 5.2.2
Python 3.6.4
pandas 0.22.0
matplotlib 2.2.2
</code></pre>
<p>Hi I'm trying to present and format a histogram in a jupyter notebook based on hour and minute log data retrieved from a hadoop store using Hive SQL. </p>
<p>I'm having problems with the presentation. I'd like to be able to set the axes from 00:00 to 23:59 with the bins starting at zero and ending at the next minute. I'd like half hourly tick marks. I just can't see how to do it.</p>
<p>The following pulls back 2 years data with 1440 rows and the total count of events at each minute. </p>
<pre><code>%%sql -o jondat
select eventtime, count(1) as cnt
from logs.eventlogs
group by eventtime
</code></pre>
<p>The data is stored as a string in hour-and-minute form <code>hh:mm</code>, but it appears to be auto-converted by the notebook to the current date plus that timestamp. I have been playing with the data in this format and others.</p>
<p>If I strip out the colons I get</p>
<pre><code>df.dtypes
eventtime int64
cnt int64
</code></pre>
<p>and if I use a dummy filler like a pipe I get</p>
<pre><code>eventtime object
cnt int64
</code></pre>
<p>If I leave the colons in I get</p>
<pre><code>eventtime datetime64
cnt int64
</code></pre>
<p>which is what I am currently using.</p>
<pre><code>...
2018-11-22 00:27:00 32140
2018-11-22 00:28:00 32119
2018-11-22 00:29:00 31726
...
2018-11-22 23:30:00 47989
2018-11-22 23:31:00 40019
2018-11-22 23:32:00 40962
...
</code></pre>
<p>I can then plot the data </p>
<pre><code>%%local
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import datetime as dt
import matplotlib.dates as md
xtformat = md.DateFormatter('%H:%M')
plt.rcParams['figure.figsize'] = [15,10]
df = pd.DataFrame(jondat)
x=df['eventtime']
b=144
y=df['cnt']
fig, ax=plt.subplots()
ax.xaxis_date()
ax.hist(x,b,weights=y)
ax.xaxis.set_major_formatter(xtformat)
plt.show()
</code></pre>
<p>Currently my axes start well before and end well after the data, and the bins are centered over the minute, which is more of a pain if I change the number of bins. I can't see where to stop the auto-conversion from string to datetime, and I'm not sure whether I need to in order to get the result I want.</p>
<p>Is this about formatting my eventtime and setting the axes or can I just set the axes easily irrespective of the data type. Ideally the labelled ticks would be user friendly</p>
<p><a href="https://i.stack.imgur.com/CNElU.jpg" rel="nofollow noreferrer">This is the chart I get with 144 bins. As some of the log records are manual the 1440 bin chart is "hairy" due to the tendency for the manual records being rounded. One of the things I am experimenting with is different bin counts.</a></p>
|
<p>Thanks to <a href="https://stackoverflow.com/users/4124317/importanceofbeingernest">https://stackoverflow.com/users/4124317/importanceofbeingernest</a> who gave me enough clues to find the answer.</p>
<pre><code>%%local
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import datetime as dt
import matplotlib.dates as md
plt.rcParams['figure.figsize'] = [15,10]
df = pd.DataFrame(jondat)
xtformat = md.DateFormatter('%H:%M')
xtinter = md.MinuteLocator(byminute=[0], interval=1)
xtmin = md.MinuteLocator(byminute=[30], interval=1)
x=df['eventtime']
b=144
y=df['cnt']
fig, ax=plt.subplots()
ld=min(df['eventtime'])
hd=max(df['eventtime'])
ax.xaxis_date()
ax.hist(x,b,weights=y)
ax.xaxis.set_major_formatter(xtformat)
ax.xaxis.set_major_locator(xtinter)
ax.xaxis.set_minor_locator(xtmin)
ax.set_xlim([ld,hd])
plt.show()
</code></pre>
<p>This lets me plot the chart tidily and play with the bin setting to see how much it impacts the curve, both for presentation on a dashboard and to help think about categorization into time bands for analysis of event types by time.</p>
|
python|pandas|matplotlib|hive|jupyter-notebook
| 0
|
5,897
| 17,159,207
|
Change timezone of date-time column in pandas and add as hierarchical index
|
<p>I have data with a time-stamp in UTC. I'd like to convert the timezone of this timestamp to 'US/Pacific' and add it as a hierarchical index to a pandas DataFrame. I've been able to convert the timestamp as an Index, but it loses the timezone formatting when I try to add it back into the DataFrame, either as a column or as an index.</p>
<pre><code>>>> import pandas as pd
>>> dat = pd.DataFrame({'label':['a', 'a', 'a', 'b', 'b', 'b'], 'datetime':['2011-07-19 07:00:00', '2011-07-19 08:00:00', '2011-07-19 09:00:00', '2011-07-19 07:00:00', '2011-07-19 08:00:00', '2011-07-19 09:00:00'], 'value':range(6)})
>>> dat.dtypes
#datetime object
#label object
#value int64
#dtype: object
</code></pre>
<p>Now if I try to convert the Series directly I run into an error.</p>
<pre><code>>>> times = pd.to_datetime(dat['datetime'])
>>> times.tz_localize('UTC')
#Traceback (most recent call last):
# File "<stdin>", line 1, in <module>
# File "/Users/erikshilts/workspace/schedule-detection/python/pysched/env/lib/python2.7/site-packages/pandas/core/series.py", line 3170, in tz_localize
# raise Exception('Cannot tz-localize non-time series')
#Exception: Cannot tz-localize non-time series
</code></pre>
<p>If I convert it to an Index then I can manipulate it as a timeseries. Notice that the index now has the Pacific timezone.</p>
<pre><code>>>> times_index = pd.Index(times)
>>> times_index_pacific = times_index.tz_localize('UTC').tz_convert('US/Pacific')
>>> times_index_pacific
#<class 'pandas.tseries.index.DatetimeIndex'>
#[2011-07-19 00:00:00, ..., 2011-07-19 02:00:00]
#Length: 6, Freq: None, Timezone: US/Pacific
</code></pre>
<p>However, now I run into problems adding the index back to the dataframe as it loses its timezone formatting:</p>
<pre><code>>>> dat_index = dat.set_index([dat['label'], times_index_pacific])
>>> dat_index
# datetime label value
#label
#a 2011-07-19 07:00:00 2011-07-19 07:00:00 a 0
# 2011-07-19 08:00:00 2011-07-19 08:00:00 a 1
# 2011-07-19 09:00:00 2011-07-19 09:00:00 a 2
#b 2011-07-19 07:00:00 2011-07-19 07:00:00 b 3
# 2011-07-19 08:00:00 2011-07-19 08:00:00 b 4
# 2011-07-19 09:00:00 2011-07-19 09:00:00 b 5
</code></pre>
<p>You'll notice the index is back on the UTC timezone instead of the converted Pacific timezone.</p>
<p>How can I change the timezone and add it as an index to a DataFrame?</p>
|
<p>If you set it as the index, it's automatically converted to an Index:</p>
<pre><code>In [11]: dat.index = pd.to_datetime(dat.pop('datetime'), utc=True)
In [12]: dat
Out[12]:
label value
datetime
2011-07-19 07:00:00 a 0
2011-07-19 08:00:00 a 1
2011-07-19 09:00:00 a 2
2011-07-19 07:00:00 b 3
2011-07-19 08:00:00 b 4
2011-07-19 09:00:00 b 5
</code></pre>
<p>Then do the <code>tz_localize</code>:</p>
<pre><code>In [12]: dat.index = dat.index.tz_localize('UTC').tz_convert('US/Pacific')
In [13]: dat
Out[13]:
label value
datetime
2011-07-19 00:00:00-07:00 a 0
2011-07-19 01:00:00-07:00 a 1
2011-07-19 02:00:00-07:00 a 2
2011-07-19 00:00:00-07:00 b 3
2011-07-19 01:00:00-07:00 b 4
2011-07-19 02:00:00-07:00 b 5
</code></pre>
<p><strike>And then you can append the label column to the index:</strike></p>
<p><em>Hmmm this is definitely a bug!</em></p>
<pre><code>In [14]: dat.set_index('label', append=True).swaplevel(0, 1)
Out[14]:
value
label datetime
a 2011-07-19 07:00:00 0
2011-07-19 08:00:00 1
2011-07-19 09:00:00 2
b 2011-07-19 07:00:00 3
2011-07-19 08:00:00 4
2011-07-19 09:00:00 5
</code></pre>
<p>A hacky workaround is to convert the (datetime) level directly (when it's already a MultiIndex):</p>
<pre><code>In [15]: dat.index.levels[1] = dat.index.get_level_values(1).tz_localize('UTC').tz_convert('US/Pacific')
In [16]: dat
Out[16]:
value
label datetime
a 2011-07-19 00:00:00-07:00 0
2011-07-19 01:00:00-07:00 1
2011-07-19 02:00:00-07:00 2
b 2011-07-19 00:00:00-07:00 3
2011-07-19 01:00:00-07:00 4
2011-07-19 02:00:00-07:00 5
</code></pre>
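<p>On newer pandas versions the levels of a <code>MultiIndex</code> are immutable, so assigning to <code>dat.index.levels[1]</code> raises an error; a sketch of the equivalent workaround using <code>set_levels</code> (assuming the same index layout as above) is:</p>
<pre><code>dat.index = dat.index.set_levels(
    dat.index.levels[1].tz_localize('UTC').tz_convert('US/Pacific'), level=1)
</code></pre>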
|
python|timezone|dataframe|pandas|multi-index
| 31
|
5,898
| 12,534,029
|
Radial sampling with SciPy
|
<p>I'm doing image processing with <code>scipy.ndimage</code>. Given a ring-shaped object, I'd like to generate a "profile" around its circumference. The profile could be something like thickness measurements at various point around the ring, or the mean signal along the ring's "thickness."</p>
<p>It seems to me that I could use <code>ndimage.mean</code> if I could first get a good labels image.</p>
<p>If my ring looks like this,</p>
<pre><code> A = array([[0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 0, 0, 1, 1, 1, 0],
[0, 0, 1, 1, 0, 0, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0]])
</code></pre>
<p>I could get the "profile of means" with <code>ndimage.mean(A, labels)</code>, where <code>labels</code> is:</p>
<pre><code>array([[0, 0, 0, 0, 2, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 2, 3, 4, 4, 0, 0],
[0, 0, 1, 1, 2, 3, 4, 5, 0, 0],
[0, 0, 16, 16, 0, 0, 5, 5, 0, 0],
[0, 0, 15, 15, 0, 0, 6, 6, 0, 0],
[0, 0, 14, 14, 0, 0, 7, 7, 0, 0],
[0, 0, 13, 13, 0, 0, 8, 8, 8, 0],
[0, 0, 13, 12, 0, 0, 9, 9, 0, 0],
[0, 0, 12, 12, 11, 10, 9, 9, 0, 0],
[0, 0, 0, 0, 11, 0, 0, 0, 0, 0]])
</code></pre>
<p>I bet there's some interpolation stuff that will go overlooked with this, but it's all I can come up with on my own.</p>
<p>Is there a way to generate my proposed <code>labels</code> image? Is there a better approach for generating my profiles?</p>
|
<p>A standard probability distribution on a circle is the von Mises distribution; its generalization to the sphere is the von Mises-Fisher distribution.</p>
<p>Scipy supports this distribution: <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.vonmises.html" rel="nofollow">http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.vonmises.html</a></p>
<p>So you should be able to use the fit function to find maximum-likelihood parameters for your data.</p>
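<p>A minimal sketch of such a fit, assuming <code>angles</code> is a 1-D array of angular positions around the ring in radians (pinning <code>fscale=1</code> is the usual convention for the circular distribution):</p>
<pre><code>import numpy as np
from scipy import stats

angles = np.random.vonmises(0.5, 3.0, size=500)  # stand-in data for illustration
kappa, loc, scale = stats.vonmises.fit(angles, fscale=1)
print(kappa, loc)  # maximum-likelihood concentration and mean direction
</code></pre>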
|
python|image-processing|numpy|scipy
| 1
|
5,899
| 21,981,820
|
creating multiple excel worksheets using data in a pandas dataframe
|
<p>Just started using pandas and python.</p>
<p>I have a worksheet which I have read into a dataframe and applied the forward fill (ffill) method to.</p>
<p>I would then like to create a single excel document with two worksheets in it. </p>
<p>One worksheet would have the data in the dataframe before the ffill method is applied and the next would have the dataframe which has had the ffill method applied.</p>
<p>Eventually I intend to create one worksheet for every unique instance of data in a certain column of the dataframe.</p>
<p>I would then like to apply some VBA-style formatting to the results, but I'm not sure which DLL or add-on I would need to call Excel VBA from Python to format headings as bold, add color, etc.</p>
<p>I've had partial success in that xlsxwriter will create a new workbook and add sheets, but dataframe.to_excel operations don't seem to work on the workbooks it creates; the workbooks open but the sheets are blank.</p>
<p>Thanks in advance.</p>
<pre><code>import os
import time
import pandas as pd
import xlwt
from xlwt.Workbook import *
from pandas import ExcelWriter
import xlsxwriter
#set folder to import files from
path = r'path to some file'
#folder = os.listdir(path)
#for loop goes here
#get date
date = time.strftime('%Y-%m-%d',time.gmtime(os.path.getmtime(path)))
#import excel document
original = pd.DataFrame()
data = pd.DataFrame()
original = pd.read_excel(path,sheetname='Leave',skiprows=26)
data = pd.read_excel(path,sheetname='Leave',skiprows=26)
print (data.shape)
data.fillna(method='ffill',inplace=True)
#the code for creating the workbook and worksheets
wb= Workbook()
ws1 = wb.add_sheet('original')
ws2 = wb.add_sheet('result')
original.to_excel(writer,'original')
data.to_excel(writer,'result')
writer.save('final.xls')
</code></pre>
|
<p>The <code>to_excel</code> calls fail because <code>writer</code> is never defined: an <code>xlwt</code> <code>Workbook</code> is not a pandas <code>ExcelWriter</code>. Create a <code>pd.ExcelWriter</code> instead and write each frame to its own sheet:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'Data': ['a', 'b', 'c', 'd']})
df2 = pd.DataFrame({'Data': [1, 2, 3, 4]})
df3 = pd.DataFrame({'Data': [1.1, 1.2, 1.3, 1.4]})
writer = pd.ExcelWriter('multiple.xlsx', engine='xlsxwriter')
df1.to_excel(writer, sheet_name='Sheeta')
df2.to_excel(writer, sheet_name='Sheetb')
df3.to_excel(writer, sheet_name='Sheetc')
writer.save()
</code></pre>
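<p>For the bold/colored headings you mention, no VBA is needed: with the <code>xlsxwriter</code> engine, the underlying workbook and worksheet objects are reachable through the writer. A sketch, run before <code>writer.save()</code>, with an arbitrary color choice:</p>
<pre><code>workbook = writer.book                                   # underlying xlsxwriter Workbook
header_fmt = workbook.add_format({'bold': True, 'bg_color': '#FFFF00'})
worksheet = writer.sheets['Sheeta']                      # xlsxwriter Worksheet holding df1
worksheet.set_row(0, None, header_fmt)                   # format the header row
</code></pre>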
|
python|pandas
| 65
|