Unnamed: 0
int64 0
378k
| id
int64 49.9k
73.8M
| title
stringlengths 15
150
| question
stringlengths 37
64.2k
| answer
stringlengths 37
44.1k
| tags
stringlengths 5
106
| score
int64 -10
5.87k
|
|---|---|---|---|---|---|---|
376,200
| 61,925,035
|
TensorflowException: Invalid GraphDef (TensorFlow 2.0)
|
<p>I'm building a model using tf.keras.models.Sequential and saving it as a SavedModel object which contains a saved_model.pb file. The model is then going to be used in a C# service using ML.net.</p>
<p>Here is the code (pulled and adapted from docs)</p>
<pre><code>(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
# Define a simple sequential model
def create_model():
model = tf.keras.models.Sequential([
keras.layers.Dense(512, activation='relu', input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
return model
# Create a basic model instance
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# Save model
#model.save('/Users/fco/Desktop/saved_model/test.h5', save_format='tf')
tf.saved_model.save(model, '/Users/fco/Desktop/saved_model')
# Load model
new_model = tf.keras.models.load_model('/Users/fco/Desktop/saved_model')
print(new_model.predict(test_images).shape)
</code></pre>
<p>When loading the saved_model.pb file in ML.NET I get the following exception.</p>
<pre><code>TensorflowException: Invalid GraphDef
</code></pre>
<p>When I search for this error - it references freezing weights on model, but the solutions are for TF1. TF2 seems to have a more streamlined method of saving model, but I cannot understand what is wrong.</p>
<p>Does anyone know what I'm missing?</p>
|
<p>I don't know answer for you problem but you can save your model in .h5 format and load it easily.</p>
<p>Example:</p>
<p>save your model using</p>
<blockquote>
<p>model.save('/content/saved_model.h5') </p>
</blockquote>
<p>and load it using</p>
<blockquote>
<p>loaded_model= models.load_model('/content/saved_model.h5')</p>
</blockquote>
|
tensorflow|keras
| -1
|
376,201
| 61,998,848
|
print column position of a dataframe
|
<p>this is my dataframe:</p>
<pre><code>c_id string1 age salary string2
1 apple 21 21.22 hello_world
2 orange 41 23.4 world
3 kiwi 81 20.22 hello
</code></pre>
<p>i need to print the string value which has max_len along with the column datatype, name and its position.so my expected output should be:</p>
<pre><code>position c_name c_dtype max_len
1 string1 object orange
4 string2 object hello_world
</code></pre>
<p>i tried these concept to print string value based on its max length.</p>
<pre><code>for col in df.select_dtypes([np.object]):
max_len = max(df[col], key=len)
print('prints col_name:', col)
print('prints the datatype ',df[col].dtype)
print('prints the maximum length string value',max_len)
</code></pre>
<p>i need to merge all these and should get my expected output as mentioned above.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.get_loc.html" rel="nofollow noreferrer"><code>Index.get_loc</code></a> for position of column:</p>
<pre><code>out = []
for col in df.select_dtypes([np.object]):
max_len = max(df[col], key=len)
print('position:', df.columns.get_loc(col))
print('prints col_name:', col)
print('prints the datatype ',df[col].dtype)
print('prints the maximum length string value',max_len)
out.append({'position':df.columns.get_loc(col),
'c_name': col, 'c_dtype':df[col].dtype, 'max_len': max_len})
df1 = pd.DataFrame(out)
print (df1)
position c_name c_dtype max_len
0 1 string1 object orange
1 4 string2 object hello_world
</code></pre>
<p>List comprehension solution:</p>
<pre><code>out = [{'position':df.columns.get_loc(col),
'c_name': col, 'c_dtype':df[col].dtype, 'max_len': max(df[col], key=len)}
for col in df.select_dtypes([np.object])]
df1 = pd.DataFrame(out)
print (df1)
position c_name c_dtype max_len
0 1 string1 object orange
1 4 string2 object hello_world
</code></pre>
|
python|pandas|list|numpy|dataframe
| 1
|
376,202
| 61,856,994
|
Pandas split cell tex to columns
|
<p>I have a dataframe with 1 row. </p>
<pre><code> col1
</code></pre>
<p>0 Term: Fall 2020 New Student: First-time Freshmen Run Date: 5/13/2020 </p>
<p>How can I split the text into three columns like below? </p>
<p>My code got an error - 'tuple' object has no attribute 'columns'</p>
<p>'''</p>
<pre><code>data['Term'], data['Type'], data['Date'] = data['col1'].str[:4], data['col1'].str[18:29], data['col1'].str[53:61]
data1 = data['col1'].str[6:15], data['col1'].str[31:50], data['col1'].str[62:]
data1.columns = data.columns
newdata = pd.concat([data, data1])
print(newdata)
</code></pre>
<p>''' </p>
<p><a href="https://i.stack.imgur.com/dlK3w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dlK3w.png" alt="enter image description here"></a></p>
|
<p>Try:</p>
<pre class="lang-py prettyprint-override"><code>data['Term'], data['Type'], data['Date'] = data['col1'].str[:4], data['col1'].str[18:29], data['col1'].str[53:61]
data1=pd.DataFrame([], columns=data.columns)
data1['Term'], data1['Type'], data1['Date'] = data['col1'].str[6:15], data['col1'].str[31:50], data['col1'].str[62:]
data1.columns = data.columns
newdata = pd.concat([data, data1]).drop('col1', axis=1)
print(newdata)
</code></pre>
<p>Generally this row seems to be a problem:</p>
<pre class="lang-py prettyprint-override"><code>data1 = data['col1'].str[6:15], data['col1'].str[31:50], data['col1'].str[62:]
</code></pre>
<p>It's equivalent to:</p>
<pre class="lang-py prettyprint-override"><code>data1 = (data['col1'].str[6:15], data['col1'].str[31:50], data['col1'].str[62:])
</code></pre>
<p>Which is where this <code>tuple</code> in the error message is coming from...</p>
|
python|pandas
| 0
|
376,203
| 61,933,773
|
Data cleaning/sorting
|
<p>Fairly new to coding, learning Python as my first language. I have an Excel file full of data. I'm trying to drop columns I don't need and then sort them by Name maybe. Each column would have its title, and I want to keep a few specific columns and delete the rest. Unsure of how to do that. So far :</p>
<pre><code>filename = input('Enter File Name : ')
sheet = input('Enter Sheet name : ')
import pandas as pd
df = pd.read_excel(io=filename, sheet_name=sheet)
print(df.head)
</code></pre>
|
<p>Almost all of your questions about pandas are covered in the docs <a href="https://pandas.pydata.org/pandas-docs/stable/index.html" rel="nofollow noreferrer">here</a>.</p>
<p><code>usecols</code> should help to read only specific columns, you can use ranges like "A:F" or specific column names or a combination of both.</p>
<pre class="lang-py prettyprint-override"><code>filename = input('Enter File Name : ')
sheet = input('Enter Sheet name : ')
df = pd.read_excel(filename, sheet_name=sheet, usecols="A:D,F")
</code></pre>
<p>You can then sort like this,</p>
<pre class="lang-py prettyprint-override"><code># if Name is the column name, otherwise just give the columns in a list you want to sort by
df.sort_values(by=['Name'])
</code></pre>
<p>Also, say you read the entire excel sheet and want to delete columns later</p>
<pre class="lang-py prettyprint-override"><code># Just give the list of columns you want to delete
df.drop(['col1', 'col2', 'col3'], inplace=True, axis=1)
</code></pre>
|
python|pandas
| 0
|
376,204
| 61,832,851
|
How do I solve this kind of problem through pandas.cut()?
|
<p>I have my data as</p>
<pre><code>data = pd.DataFrame({'A':[3,50,50,60],'B':[49,5,37,59],'C':[15,34,43,6],'D':[35,39,10,25]})
</code></pre>
<p>If I use cut this way</p>
<pre><code>p = ['A','S','T','U','V','C','Z']
bins = [0,30,35,40,45,50,55,60]
data['A*'] = pd.cut(data.A,bins,labels=p)
print(data)
</code></pre>
<p>I get</p>
<pre><code> A B C D A*
0 3 49 15 35 A
1 50 5 34 39 V
2 50 37 43 10 V
3 60 59 6 25 Z
</code></pre>
<p>How would I cut it to get</p>
<pre><code> A B C D A*
0 3 49 15 35 3A
1 50 5 34 39 50V
2 50 37 43 10 50V
3 60 59 6 25 60Z
</code></pre>
<p>I tried this but doesn't work</p>
<pre><code>for x in data.A:
p = [str(x)+'A',str(x)+'S',str(x)+'T',str(x)+'U',str(x)+'V',str(x)+'C',str(x)+'Z']
bins = [0,30,35,40,45,50,55,60]
</code></pre>
<p>It gives me this</p>
<pre><code> A B C D A*
0 3 49 15 35 60A
1 50 5 34 39 60V
2 50 37 43 10 60V
3 60 59 6 25 60Z
</code></pre>
|
<p>Convert column <code>A</code> to strings and categoricals from <code>pd.cut</code> too and join together:</p>
<pre><code>p = ['A','S','T','U','V','C','Z']
bins = [0,30,35,40,45,50,55,60]
data['A*'] = data.A.astype(str) + pd.cut(data.A,bins,labels=p).astype(str)
print(data)
A B C D A*
0 3 49 15 35 3A
1 50 5 34 39 50V
2 50 37 43 10 50V
3 60 59 6 25 60Z
</code></pre>
<p>EDIT:</p>
<p>For processing all columns is possible use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>DataFrame.apply</code></a>:</p>
<pre><code>data = data.apply(lambda x: x.astype(str) + pd.cut(x,bins,labels=p).astype(str))
print(data)
A B C D
0 3A 49V 15A 35S
1 50V 5A 34S 39T
2 50V 37T 43U 10A
3 60Z 59Z 6A 25A
</code></pre>
|
pandas
| 1
|
376,205
| 62,010,883
|
How to create a dictionary dynamically based on number of attributes?
|
<p>I have a CSV file with 6 attributes and 1 class which I read with Pandas.</p>
<pre><code>CsvFile = "/path/to/file.csv"
df = pd.read_csv(CsvFile)
</code></pre>
<p>First 5 rows of my CSV:</p>
<pre class="lang-none prettyprint-override"><code>x,y,x1,y1,x2,y2,class
92,115,120,94,84,102,3
84,102,106,79,84,102,3
84,102,102,83,80,102,3
80,102,102,79,84,94,3
84,94,102,79,80,94,3
</code></pre>
<p>Since I have 6 attributes, I want to create a dictionary in Python (6 keys, 5 values each key) which will have the centroids for kmeans.</p>
<pre><code>numberOfClusters = 5
centroids =
{
i+1: [random.uniform(0.0, 255.0), random.uniform(0.0, 255.0),
random.uniform(0.0, 255.0), random.uniform(0.0, 255.0),
random.uniform(0.0, 255.0), random.uniform(0.0, 255.0)]
for i in range(numberOfClusters)
}
</code></pre>
<p><strong>Question nr.1:</strong> as you understand, it's not very productive to copy-paste the <code>random.uniform(0.0, 255.0)</code> as many times as the random points I want to get in order to match the number of attributes in my CSV file. Any idea how to do that without copy-paste?</p>
<p>In a similar fashion, in the following code I calculate the Euclidean distance.</p>
<pre><code>for i in centroids.keys():
df['distance_from_{}'.format(i)] = (
np.sqrt(
(df['x'] - centroids[i][0]) ** 2
+ (df['y'] - centroids[i][1]) ** 2
+ (df['x.1'] - centroids[i][2]) ** 2
+ (df['y.1'] - centroids[i][3]) ** 2
+ (df['x.2'] - centroids[i][4]) ** 2
+ (df['y.2'] - centroids[i][5]) ** 2
)
)
</code></pre>
<p><strong>Question nr.2:</strong> if I have more attributes I have to add more <code>df['x'] - centroids[i][0]) ** 2</code>, whereas delete one or more if I have less. How can I automate this process a bit?</p>
<p>The reason for not using scikit's kmeans is that I want to calculate weights per cluster.</p>
|
<p>If number of keys is the problem you can use</p>
<pre><code>n=0
with open('filename.csv','r') as f:
l=f.readline().strip()
n=len(l.split(','))
</code></pre>
<p>where n holds number of keys</p>
|
python|pandas|dataframe|dictionary|k-means
| 1
|
376,206
| 61,618,893
|
Turn pandas dataframe into dictionary
|
<p>I am running a for loop over a pandas dataframe that takes each row and creates a dictionary (of a sort) then uploads to an internal system. </p>
<p>The for loop isn't a problem, neither is the upload to the internal system. I cannot seem to get the format of the dictionary correct for the upload to proceed. </p>
<p>Here is a model of the dataframe: </p>
<pre><code> Id Acct_num Acct_Name Prod Date Rev
0 1495 5001 Alpha ret34 4/30/2020 4999
1 1496 5002 Beta pro45 4/30/2020 18076
2 1497 5003 Gamma sli55 4/30/2020 5671
3 1498 5004 Delta ret34 4/30/2020 16683
</code></pre>
<p><a href="https://i.stack.imgur.com/RAzto.png" rel="nofollow noreferrer">for better viewing if needed</a></p>
<p>I need each row of the dataframe to look like the below (first two rows): </p>
<pre><code>1495:{'Acct_num':'5001', 'Acct_Name':'Alpha', 'Prod':'ret34', 'Date':'4/30/2020', 'Rev':'4999'}
1496:{'Acct_num':'5002', 'Acct_Name':'Beta', 'Prod':'pro45', 'Date':'4/30/2020', 'Rev':'18076'}
</code></pre>
<p>Here is what I have tried (with a few other variations, but to no avail): </p>
<pre><code>for row in df.iloc[:,:].itertuples(index=False):
if not row:
break
else:
d = row._asdict()
d
</code></pre>
<p>Which outputs an ordered dictionary like this: </p>
<pre><code>OrderedDict([('Id', '1495),('Acct_num', '5001'),('Acct_Name', 'Alpha'),('Prod', 'ret34'), ('Date','2020-04-30'),('Rev',4999)])
</code></pre>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_dict.html#pandas-dataframe-to-dict" rel="nofollow noreferrer"><code>to_dict</code></a> with orient="index" </p>
<blockquote>
<p>‘index’ : dict like {index -> {column -> value}}</p>
</blockquote>
<pre><code>d = df.to_dict(orient="index")
d
{1495: {'Acct_num': 5001, 'Acct_Name': 'Alpha', 'Prod': 'ret34', 'Date': '4/30/2020', 'Rev': 4999},
1496: {'Acct_num': 5002, 'Acct_Name': 'Beta', 'Prod': 'pro45', 'Date': '4/30/2020', 'Rev': 18076},
1497: {'Acct_num': 5003, 'Acct_Name': 'Gamma', 'Prod': 'sli55', 'Date': '4/30/2020', 'Rev': 5671},
1498: {'Acct_num': 5004, 'Acct_Name': 'Delta', 'Prod': 'ret34', 'Date': '4/30/2020', 'Rev': 16683}}
</code></pre>
|
python|pandas|dictionary
| 1
|
376,207
| 61,791,060
|
Fill a pandas column according to the value of two other columns
|
<p>I am trying to fill a column: if the value of a row A is contained in the row of column B, then fill the column C with the value A</p>
<p><strong>I tried:</strong></p>
<pre><code>import pandas
df = pandas.DataFrame([{'A': "a", 'B': ["a"], 'C': ''},
{'A': "b", 'B': ["a", "b"], 'C': ''},
{'A': "d", 'B': [], 'C': ''},
{'A': "c", 'B': ["d", "e"], 'C': ''}])
def fill_row(df):
if df["B"].str.contains(df["A"], regex = False):
val = df["A"]
else:
val = ""
return val
df['C'] = df.apply(fill_row, axis=1)
</code></pre>
<p><strong>My output:</strong></p>
<blockquote>
<p>AttributeError: 'list' object has no attribute 'str'</p>
</blockquote>
<p><strong>Good output:</strong></p>
<pre><code>df = pandas.DataFrame([{'A': "a", 'B': ["a"], 'C': 'a'},
{'A': "b", 'B': ["a", "b"], 'C': 'b'},
{'A': "d", 'B': [], 'C': ''},
{'A': "c", 'B': ["d", "e"], 'C': ''}])
</code></pre>
|
<p>Use <code>in</code> statemenet for test values in list:</p>
<pre><code>def fill_row(df):
if df["A"] in df['B']:
val = df["A"]
else:
val = ""
return val
df['C'] = df.apply(fill_row, axis=1)
print (df)
A B C
0 a [a] a
1 b [a, b] b
2 d []
3 c [d, e]
</code></pre>
|
python|pandas
| 2
|
376,208
| 61,865,420
|
Testing and validation of the model
|
<p>friends. I have a question for you regarding object detection. I trained my model and it works perfectly. Now, I have to make a presentation of my work. The problem is that I saw some things about testing and validation. The problem is that after training, I used the model, but I do not remember to use the test set or to run the testing or the validation or hyperparameters tunning. Can you explain to me how these work? I attached below the command I used for training.</p>
<pre><code>python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
</code></pre>
|
<p>Here is the main idea of train, test and validation data:</p>
<p>In the beginning you have only one original data set, this set is divided into three distinct subsets: <em>train set</em>, <em>validation set</em> and <em>test set</em>.</p>
<p>You train your model on the train set multiple times, each time with a different set of hyperparameters and evaluate its performance on the validation set to select the best set of hyperparameters. If you found your optimal model, you evaluate its true performance <strong>only once</strong> on the test set, no further optimization. This is what you report as final model performance.</p>
<p>What you truly care about is the <em>generalization performance</em> of your model, the holy grail of Machine Learning. This is the ability to handle new and never seen data. To assess this, you need to have spare data your model never has seen before. This is the main purpose of the test set.</p>
<p>Here are some additional links which explain the concept of train,test and validation in more detail: <a href="https://en.wikipedia.org/wiki/Training,_validation,_and_test_sets" rel="nofollow noreferrer">wiki</a>, <a href="https://scikit-learn.org/stable/modules/cross_validation.html#cross-validation" rel="nofollow noreferrer">sklearn-guide</a>, <a href="https://machinelearningmastery.com/difference-test-validation-datasets/" rel="nofollow noreferrer">What is the Difference Between Test and Validation Datasets?</a>.</p>
|
python|tensorflow|object-detection
| 0
|
376,209
| 61,937,985
|
Python Pandas, make date time rounding based on value in another column
|
<p>I need to only select the cases for sensor type == air to be rounded to the nearest 5 seconds but do not how I should use a function to make this happen. </p>
<p>I do have the following lines: </p>
<pre><code>In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'timestamp' : ['2020-04-14 00:00:23', '2020-04-14 00:00:37',
'2020-04-14 00:01:01', '2020-04-14 00:01:05',
'2020-04-14 00:01:19'],
'sensor type' : ['sound', 'air', 'sound', 'air', 'sound']})
In [3]: df["timestamp"] = pd.to_datetime(df.timestamp)
In [4]: df["rounded_timestamp"] = df.groupby("sensor type").transform(lambda d: d.dt.round("5s"))
</code></pre>
<p>Which results in</p>
<pre><code>In [5]: df
Out[5]:
timestamp sensor type rounded_timestamp
0 2020-04-14 00:00:23 sound 2020-04-14 00:00:25
1 2020-04-14 00:00:37 air 2020-04-14 00:00:35
2 2020-04-14 00:01:01 sound 2020-04-14 00:01:00
3 2020-04-14 00:01:05 air 2020-04-14 00:01:05
4 2020-04-14 00:01:19 sound 2020-04-14 00:01:20
</code></pre>
<p>Hence, I do have the column with the rounded times. But ONLY for air sensors the time should be rounded, how could I get a column with the rounded timestamps for air sensors and the non-rounded timestamps for the sound sensors? </p>
|
<p>One way of solving this is with the <code>apply()</code> function to the DataFrame (not a series). What this does is lets you operate on a per-row basis if you set <code>axis=1</code>. This way, you can specify operations that need to apply to one column but can still access any other column you need to for that row for applying those operations conditionally.</p>
<pre><code>df["rounded_timestamp"] = df.apply(lambda row: row["timestamp"].round("5s")
if row["sensor type"] == "air"
else row["timestamp"],
axis=1)
</code></pre>
|
python|pandas|function
| 1
|
376,210
| 61,946,901
|
How to create a column and change it value by for loop?
|
<p>I am new to python and pandas. I have searched many posts talking about how to change the value of dateframe by condition. However, how if I got a dataframe with a lot of condition?</p>
<p>I have the following dataframe:</p>
<pre><code>import pandas as pd
import datetime as dt
data = {"Project":["A","A","A","B","B"], "Date":[dt.datetime(2020,1,1),dt.datetime(2020,3,1),dt.datetime(2020,5,1),dt.datetime(2020,2,1),dt.datetime(2020,4,1)]}
df = pd.DataFrame(data)
</code></pre>
<pre><code> Project Date
0 A 2020-01-01
1 A 2020-03-01
2 A 2020-05-01
3 B 2020-02-01
4 B 2020-04-01
</code></pre>
<p>and I would like to get the following result:</p>
<pre><code> Project Date Start End
0 A 2020-01-01 2020-01-01 2020-05-01
1 A 2020-03-01 2020-01-01 2020-05-01
2 A 2020-05-01 2020-01-01 2020-05-01
3 B 2020-02-01 2020-02-01 2020-04-01
4 B 2020-04-01 2020-02-01 2020-04-01
</code></pre>
<p>I think I can create Start and End column by following method, but I would like to set the start date and end date for different project seperately.</p>
<pre><code>for i in df['Project']:
tmp = df[df['Project']== i ]
df['Start'] = min(tmp['Date'])
df['End'] = max(tmp['Date'])
Project Date Start End
0 A 2020-01-01 2020-02-01 2020-04-01
1 A 2020-03-01 2020-02-01 2020-04-01
2 A 2020-05-01 2020-02-01 2020-04-01
3 B 2020-02-01 2020-02-01 2020-04-01
4 B 2020-04-01 2020-02-01 2020-04-01
</code></pre>
<p>And this is just a simple example. How if I have many project and Date. Can I use for loop to check the condition?
Is there any way to do this? Thanks a lot</p>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#transformation" rel="nofollow noreferrer"><code>groupby.transform</code></a> with <code>min</code> and <code>max</code> like:</p>
<pre><code>gr = df.groupby('Project')['Date'] #create the grouped object
df['Start'] = gr.transform('min')
df['End'] = gr.transform('max')
print (df)
Project Date Start End
0 A 2020-01-01 2020-01-01 2020-05-01
1 A 2020-03-01 2020-01-01 2020-05-01
2 A 2020-05-01 2020-01-01 2020-05-01
3 B 2020-02-01 2020-02-01 2020-04-01
4 B 2020-04-01 2020-02-01 2020-04-01
</code></pre>
<p>or another way with <code>groupby.agg</code> and <code>merge</code> for the same result</p>
<pre><code>df = df.merge(df.groupby('Project')['Date']
.agg([('Start', 'min'), ('End', 'max')]),
on='Project', how='left')
</code></pre>
|
python|pandas
| 2
|
376,211
| 61,992,535
|
python pandas fill NaN or blanket with max value
|
<p>I have a problem with a big data frame. Here is a small snippet. I want to fill the last columns E with the maximal value, if there ist some value or let it empty. That is the data:</p>
<pre><code>d = {'A': [4000074, 4000074, 4000074, 4000074, 4000074, 4000074, 4000074, 4000074, 4000074,
4000074, 4000074, 4000074, 4000074, 4000074, 4000074, 4000074, 4000074, 4000074],
'B': ['SP000796746', 'SP000796746', 'SP000796746', 'SP000796746', 'SP000796746','SP000796746',
'SP000796746', 'SP000796746', 'SP000796746', 'SP000796746', 'SP000796746', 'SP000796746',
'SP000796746', 'SP000796746', 'SP000796746', 'SP000796746', 'SP000796746', 'SP000796746'],
'C': [201926, 201926, 201926, 201926, 201926, 201926, 201909,201909, 201909, 201909, 201909,
201909, 201933, 201933, 201933, 201933, 201933, 201933],
'D': [-1, 0, 1, 2, 3, 4, -1, 0, 1, 2, 3, 4, -1, 0, 1, 2, 3, 4],
'E': [np.nan, 1000, 1000, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 3000, 3000, np.nan]}
</code></pre>
<p>it looks like this:</p>
<pre><code> A B C D E
0 4000074 SP000796746 201926 -1 NaN
1 4000074 SP000796746 201926 0 1000.0
2 4000074 SP000796746 201926 1 1000.0
3 4000074 SP000796746 201926 2 NaN
4 4000074 SP000796746 201926 3 NaN
5 4000074 SP000796746 201926 4 NaN
6 4000074 SP000796746 201909 -1 NaN
7 4000074 SP000796746 201909 0 NaN
8 4000074 SP000796746 201909 1 NaN
9 4000074 SP000796746 201909 2 NaN
10 4000074 SP000796746 201909 3 NaN
11 4000074 SP000796746 201909 4 NaN
12 4000074 SP000796746 201933 -1 NaN
13 4000074 SP000796746 201933 0 NaN
14 4000074 SP000796746 201933 1 NaN
15 4000074 SP000796746 201933 2 3000.0
16 4000074 SP000796746 201933 3 3000.0
17 4000074 SP000796746 201933 4 NaN
</code></pre>
<p>But my target is to fill column "E" every where with the highest value, if there
is any value between the range -1 to 4 (column D).if not it should remain empty. So it should look like:</p>
<pre><code> A B C D E
0 4000074 SP000796746 201926 -1 0
1 4000074 SP000796746 201926 0 1000.0
2 4000074 SP000796746 201926 1 1000.0
3 4000074 SP000796746 201926 2 0
4 4000074 SP000796746 201926 3 0
5 4000074 SP000796746 201926 4 0
6 4000074 SP000796746 201909 -1 NaN
7 4000074 SP000796746 201909 0 NaN
8 4000074 SP000796746 201909 1 NaN
9 4000074 SP000796746 201909 2 NaN
10 4000074 SP000796746 201909 3 NaN
11 4000074 SP000796746 201909 4 NaN
12 4000074 SP000796746 201933 -1 3000.0
13 4000074 SP000796746 201933 0 3000.0
14 4000074 SP000796746 201933 1 3000.0
15 4000074 SP000796746 201933 2 3000.0
16 4000074 SP000796746 201933 3 3000.0
17 4000074 SP000796746 201933 4 3000.0
</code></pre>
<p>My code looks like this:</p>
<pre><code>df = d
indx = df[df['D'] == -1].index.values
for i, j in zip(indx[:-1], indx[1:]):
df.loc[i:j-1, 'E'] = df.loc[i:j-1, 'E'].max()
if j == indx[-1]:
df.loc[j:, 'E'] = df.loc[j:, 'E'].max()
</code></pre>
<p>It does not work for very big data frames... Maybe somebody has an idea
for another code or a correction in my code.</p>
<p>Thank you!!</p>
<pre><code> A B C D E
0 4000074 SP000796746 201926 -1 0
1 4000074 SP000796746 201926 0 1000.0
2 4000074 SP000796746 201926 1 1000.0
3 4000074 SP000796746 201926 2 0
4 4000074 SP000796746 201926 3 0
5 4000074 SP000796746 201926 4 0
6 4000074 SP000796746 201909 -1 NaN
7 4000074 SP000796746 201909 0 NaN
8 4000074 SP000796746 201909 1 NaN
9 4000074 SP000796746 201909 2 NaN
10 4000074 SP000796746 201909 3 NaN
11 4000074 SP000796746 201909 4 NaN
12 4000074 SP000796746 201933 -1 0
13 4000074 SP000796746 201933 0 0
14 4000074 SP000796746 201933 1 0
15 4000074 SP000796746 201933 2 3000.0
16 4000074 SP000796746 201933 3 3000.0
17 4000074 SP000796746 201933 4 0
</code></pre>
|
<p>you can do it with <code>groupby.transform</code> the <code>max</code> of the groups made with a new -1 in column D and <code>cumsum</code>. Then <code>fillna</code> the original column.</p>
<pre><code>df['E'] = df['E'].fillna(df['E'].groupby(df['D'].eq(-1).cumsum()).transform('max'))
</code></pre>
<p>EDIT: to fill with zeros, you can do it:</p>
<pre><code>mask = df['E'].groupby(df['D'].eq(-1).cumsum()).transform('any')
df.loc[mask, 'E'] = df.loc[mask, 'E'].fillna(0)
</code></pre>
|
python|pandas|dataframe
| 2
|
376,212
| 61,998,882
|
Annotate city names
|
<p>I would like to annotate the city name Berlin at the coordinates <code>xy=(52.52, 13.405)</code>. I've tried <code>ax.annotate()</code> which yields a strange map. Maybe it has to do with the CRS of the coordinates?</p>
<pre><code>import geopandas as gpd
import contextily as ctx
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
world = world[(world.name=="Germany")]
world = world.to_crs(epsg=3857)
ax = world.plot(figsize=(10, 10), color='none', linewidth=1, alpha=0.5)
ax.annotate("Berlin", xy=(52.52, 13.405))
ctx.add_basemap(ax, url=ctx.providers.Stamen.Watercolor, zoom=9)
</code></pre>
<p><a href="https://i.stack.imgur.com/4bNRm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4bNRm.png" alt="enter image description here"></a></p>
|
<p>According to <a href="https://matplotlib.org/3.2.1/tutorials/text/annotations.html" rel="nofollow noreferrer">Annotations docpage</a> your code should look like this:</p>
<pre><code>ax.annotate("Berlin", xy=(52.52, 13.405))
</code></pre>
|
python|matplotlib|geopandas|contextily
| 2
|
376,213
| 61,705,858
|
Keras: UnboundLocalError: local variable 'logs' referenced before assignment
|
<p>I am relatively new to python, and while attempting to train a chatbot I received the error: ‘UnboundLocalError: local variable 'logs' referenced before assignment‘. I used model.fit to train:</p>
<pre><code>model.fit(x_train, y_train, epochs=7)
</code></pre>
<p>And I received the error:</p>
<pre><code>UnboundLocalError Traceback (most recent call last)
<ipython-input-10-847c83704a3f> in <module>()
2 x_train,
3 y_train,
----> 4 epochs=7
5 )
1 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
64 def _method_wrapper(self, *args, **kwargs):
65 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
---> 66 return method(self, *args, **kwargs)
67
68 # Running inside `run_distribute_coordinator` already.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
854 logs = tmp_logs # No error, now safe to assign to logs.
855 callbacks.on_train_batch_end(step, logs)
--> 856 epoch_logs = copy.copy(logs)
857
858 # Run validation.
UnboundLocalError: local variable 'logs' referenced before assignment
</code></pre>
<p>I ran this in google colab, with the link here: <a href="https://colab.research.google.com/drive/18uTvvKYDrd8CQi31kg6vX2Dbxg1gD20X?usp=sharing" rel="noreferrer">https://colab.research.google.com/drive/18uTvvKYDrd8CQi31kg6vX2Dbxg1gD20X?usp=sharing</a></p>
<p>I used the chatterbot/english dataset on kaggle: <a href="https://www.kaggle.com/kausr25/chatterbotenglish" rel="noreferrer">https://www.kaggle.com/kausr25/chatterbotenglish</a> </p>
|
<p>This issue looks similar to the problem I had while working with small datasets and it is covered in this thread: <a href="https://github.com/tensorflow/tensorflow/issues/38064" rel="noreferrer">#38064</a>.
I solved my particular issue setting a smaller batch_size, in my case:</p>
<pre><code>batch_size = 2
</code></pre>
|
python|tensorflow|keras
| 20
|
376,214
| 61,785,498
|
multiply dataframes based on timestamp intervals overlap
|
<p>I have two pandas dataframes, each with two columns: a measurement and a timestamp. I need to multiply the first differences of the measurements, but only if there is a time overlap between the two measurement intervals. How can I do this efficiently, as the size of the dataframes gets large?
Example:</p>
<pre><code>dfA
mesA timeA
0 125 2015-01-14 04:44:49
1 100 2015-01-14 05:16:23
2 115 2015-01-14 08:57:10
dfB
mesB timeB
0 140 2015-01-14 00:13:17
1 145 2015-01-14 08:52:01
2 120 2015-01-14 11:31:44
</code></pre>
<p>Here I would multiply <code>(100-125)*(145-140)</code> since there is a time overlap between the intervals <code>[04:44:49, 05:16:23]</code> and <code>[00:13:17, 08:52:01]</code>, but not <code>(100-125)</code>and<code>(120-145)</code>, since there isn't one. Similarly, I would have <code>(115-100)*(145-140)</code> but also <code>(115-100)*(120-145)</code>, since both have a time overlap.</p>
<p>In the end I will have to sum all the relevant products in a single value, so the result need not be a dataframe. In this case:</p>
<pre><code>s = (100-125)*(145-140)+(115-100)*(145-140)+(115-100)*(120-145) = -425
</code></pre>
<p>My current solution:</p>
<pre><code>s = 0
for i in range(1, len(dfA)):
startA = dfA['timeA'][i-1]
endA = dfA['timeA'][i]
for j in range(1, len(dfB)):
startB = dfB['timeB'][j-1]
endB = dfB['timeB'][j]
if (endB>startA) & (startB<endA):
s+=(dfA['mesA'][i]-dfA['mesA'][i-1])*(dfB['mesB'][j]-dfB['mesB'][j-1])
</code></pre>
<p>Although it seems to work, it is very inefficient and becomes impractical with very large datasets. I believe it could be vectorized more efficiently, perhaps using <code>numexpr</code>, but I still haven't found a way.</p>
<p>EDIT:
other data</p>
<pre><code> mesA timeA
0 125 2015-01-14 05:54:03
1 100 2015-01-14 11:39:53
2 115 2015-01-14 23:58:13
mesB timeB
0 110 2015-01-14 10:58:32
1 120 2015-01-14 13:30:00
2 135 2015-01-14 22:29:26
s = 125
</code></pre>
|
<p>Edit: the original answer did not work, so I came up with another version that is not vectorize but they need to be sorted by date.</p>
<pre><code>arrA = dfA.timeA.to_numpy()
startA, endA = arrA[0], arrA[1]
arr_mesA = dfA.mesA.diff().to_numpy()
mesA = arr_mesA[1]
arrB = dfB.timeB.to_numpy()
startB, endB = arrB[0], arrB[1]
arr_mesB = dfB.mesB.diff().to_numpy()
mesB = arr_mesB[1]
s = 0
i, j = 1, 1
imax = len(dfA)-1
jmax = len(dfB)-1
while True:
if (endB>startA) & (startB<endA):
s+=mesA*mesB
if (endB>endA) and (i<imax):
i+=1
startA, endA, mesA= endA, arrA[i], arr_mesA[i]
elif j<jmax:
j+=1
startB, endB, mesB = endB, arrB[j], arr_mesB[j]
else:
break
</code></pre>
<p>Original not working answer</p>
<p>The idea is to great category with <code>pd.cut</code> based on the value in <code>dfB['timeB']</code> in both dataframes to see where they could overlap. Then calculate the <code>diff</code> in measurements. <code>merge</code> both dataframes on categories and finally multiply and <code>sum</code> the whole thing</p>
<pre><code># create bins
bins_dates = [min(dfB['timeB'].min(), dfA['timeA'].min())-pd.DateOffset(hours=1)]\
+ dfB['timeB'].tolist()\
+ [max(dfB['timeB'].max(), dfA['timeA'].max())+pd.DateOffset(hours=1)]
# work on dfB
dfB['cat'] = pd.cut(dfB['timeB'], bins=bins_dates,
labels=range(len(bins_dates)-1), right=False)
dfB['deltaB'] = -dfB['mesB'].diff(-1).ffill()
# work on dfA
dfA['cat'] = pd.cut(dfA['timeA'], bins=bins_dates,
labels=range(len(bins_dates)-1), right=False)
# need to calcualte delta for both start and end of intervals
dfA['deltaAStart'] = -dfA['mesA'].diff(-1)
dfA['deltaAEnd'] = dfA['mesA'].diff().mask(dfA['cat'].astype(float).diff().eq(0))
# in the above method, for the end of interval, use a mask to not count twice
# intervals that are fully included in one interval of B
# then merge and calcualte the multiplication you are after
df_ = dfB[['cat', 'deltaB']].merge(dfA[['cat','deltaAStart', 'deltaAEnd']])
s = (df_['deltaB'].to_numpy()[:,None]*df_[['deltaAStart', 'deltaAEnd']]).sum().sum()
print (s)
#-425.0
</code></pre>
|
python|python-3.x|pandas|numpy|numexpr
| 1
|
376,215
| 61,701,432
|
Pandas: Mapping column name from a particular table to a row in another table
|
<p>Let's say I have the following DataFrames:</p>
<pre><code>table_a = pandas.DataFrame({ 'employee' : ['a','b','c','d','e','f'], 'department' : ['developer', 'test engineer', 'network engineer', 'manager', 'hr','intern']})
dept_mapping = pandas.DataFrame({'department':['developer','test engineer','network engineer','manager','hr', 'intern'], 'engineer' : [1,1,1,0,0,0], 'management' : [0,0,0,1,1,0], 'intern' : [0,0,0,0,0,1]})
</code></pre>
<p>How can I create a new column in table_a which contains corresponding general_department values.That is:</p>
<pre><code>table_a = pd.DataFrame({ 'employee' : ['a','b','c','d','e','f'], 'department' : ['developer', 'test engineer', 'network engineer', 'manager', 'hr','intern'], 'general department' : ['engineer', 'engineer', 'engineer', 'management', 'management' ,'intern' ]})
</code></pre>
|
<p>You can try <code>idxmax</code> on <code>axis=1</code> with <code>series.map()</code>:</p>
<pre><code>table_a['general department'] = table_a['department'].map(
dept_mapping.set_index('department').idxmax(1))
print(table_a)
</code></pre>
<hr>
<pre><code> employee department general department
0 a developer engineer
1 b test engineer engineer
2 c network engineer engineer
3 d manager management
4 e hr management
5 f intern intern
</code></pre>
|
python|pandas
| 2
|
376,216
| 61,791,032
|
How to solve index column issue in Panda while converting to json file?
|
<p>There is a pandas dataframe as follow :</p>
<p><a href="https://i.stack.imgur.com/jIPLF.png" rel="nofollow noreferrer">df</a></p>
<p>I wanted to create a json file by using this command <code>df.to_json(os.path.join(path, 'test.json'))</code></p>
<p>My desired output is </p>
<pre><code>{"Big": {"A": "Big", "B": [["rose", 100], ["camelia", 200], ["Lily", 300]], }, "Medium": {"A": "Medium", "B": [["house", 45], ["car", 30], ["money", 56]], }}
</code></pre>
<p>Instead, the output I am having is:</p>
<pre><code>{"A":{"0":"Big","1":"Medium",},"B":{"0":[["rose", 100], ["camelia", 200], ["Lily", 300]], "1":[[["house", 45], ["car", 30], ["money", 56]] }}
</code></pre>
<p>I also tried by taking the Column A as my index column <code>df=df.set_index('A')</code>.
But that gives me more puzzled answer:</p>
<pre><code>{"B":{"Big":[":[["rose", 100], ["camelia", 200], ["Lily", 300]],"Medium":[["house", 45], ["car", 30], ["money", 56]]}}
</code></pre>
<p>Can anybody help me how to solve that issue?</p>
|
<p>It seems that you read your Excel file invoking a command like:</p>
<pre><code>df = pd.read_excel('Input.xlsx', skiprows=2, index_col=0, names=[None, 'A', 'B'])
</code></pre>
<p>i.e.:</p>
<ul>
<li><code>skiprows=2</code> - skip 2 initial empty rows,</li>
<li><code>index_col=0</code> - set column <em>0</em> (in Excel parlance - column <em>A</em>) as the index,</li>
<li><code>names=...</code> - ignore existing column names (line 3 - nothing for index,
<em>Fruit</em> and <em>Flowers</em> for data columns) and set column names from the
given list.</li>
</ul>
<p>This way the DataFrame contains:</p>
<pre><code> A B
0 Big [(rose:100), (camelia:200), (Lily: 300)]
1 Medium [(house:45), (car:30), (money: 56)]
</code></pre>
<p>and just this should be the actual start of your post (instead of Excel screenshot).</p>
<p>To get the JSON output similar to your desired result, you can run:</p>
<pre><code>df.set_index('A', drop=False).to_json('Output.json', orient='index')
</code></pre>
<p>The result (prettyfied for readability) is:</p>
<pre><code>{
"Big" : {"A": "Big", "B":"[(rose:100), (camelia:200), (Lily: 300)]"},
"Medium": {"A": "Medium", "B":"[(house:45), (car:30), (money: 56)]"}
}
</code></pre>
<p>The difference is in <em>B</em> column: The whole content is a <strong>single</strong> string,
e.g. <code>"[(rose:100), (camelia:200), (Lily: 300)]"</code>, whereas you want
<code>[["rose", 100], ["camelia", 200], ["Lily", 300]]</code>, i.e. a list of lists.</p>
<p>If this detail is important, you can reformat <em>B</em> column:</p>
<pre><code>df.B = df.B.str.extractall(r'\((?P<name>\w+) *: *(?P<num>\d+)\)')\
.astype({'num': 'int'}).apply(list, axis=1).unstack()\
.apply(lambda lst: lst.dropna().tolist(), axis=1)
</code></pre>
<p>and <strong>after that</strong> save to JSON. This time the result is just as you need.</p>
<p>The above conversion involves several steps, maybe a bit difficult
for inexperienced reader.
To fully comprehend how it works, execute each step separately
and then inspect results of each step.</p>
<p>I only add that the final <em>apply</em> of a <em>lambda</em> function (instead of
plain <em>list</em> as before) is needed if the number of matches from
<em>extractall</em> is different in particular rows.
In this case <em>NaN</em> entries must be filtered out.</p>
|
json|pandas
| 0
|
376,217
| 61,751,704
|
Cython min and max on Arrays
|
<p>I want to speed up a quite simple Python code by converting some functions into cython.
However, in the loop body, I need to find the min and max values of an array and that seems to be the critical point. According to the .html file, these lines need to be translated into very much c-code.. Why is that?</p>
<p>That is the entire code, below I list the lines that give me headaches:</p>
<pre><code>import numpy as np
cimport numpy as np
cimport cython
from cython cimport boundscheck, wraparound
@boundscheck(False)
@wraparound(False)
cdef box_overlaps_contour(unsigned int[:] boxTopLeftXY, unsigned int boxSize, unsigned int[:, :, :] contourData):
cdef bint isOverlapping = False
cdef unsigned int xmin, xmax, width, boxXmin, boxXmax, ymin, ymax, height, boxYmin, boxYmax
xmin = min(contourData[:, 0, 1])
xmax = max(contourData[:, 0, 1])
width = xmax - xmin
boxXmin = boxTopLeftXY[0]
boxXmax = boxTopLeftXY[0] + boxSize
if xmin > (boxXmin-width/2):
if xmax < (boxXmax+width/2):
ymin = min(contourData[:, 0, 1])
ymax = max(contourData[:, 0, 1])
height = ymax - ymin
boxYmin = boxTopLeftXY[1]
boxYmax = boxTopLeftXY[1] + boxSize
if ymin > (boxYmin-height/2):
if ymax < (boxYmax+width/2):
isOverlapping = True
return isOverlapping
@boundscheck(False)
@wraparound(False)
def def_get_indices_of_overlapping_particles(contours not None, unsigned int[:, :] topLefts, unsigned int boxSize):
cdef Py_ssize_t i, j
cdef unsigned int counter, numParticles, numTopLefts
numParticles = len(contours)
numTopLefts = topLefts.shape[0]
cdef unsigned int[:] overlappingIndices = np.zeros(numParticles, dtype=np.uint32)
cdef unsigned int[:, :, :] currentContour
counter = 0
for i in range(numParticles):
currentContour = contours[i]
for j in range(numTopLefts):
if box_overlaps_contour(topLefts[j, :], boxSize, currentContour):
overlappingIndices[counter] = i
counter += 1
break
return overlappingIndices[:counter]
</code></pre>
<p>The function takes a list of contours (np.ndarray, as retrieved from cv2) and an array, representing a certain number of xy-Coordinates, where rectangles are placed with the indicated boxsize. The function is supposed to iterate through the contours and return the indices of contours that are overlapping with one of the boxes.
These lines here seem to make the entire process horribly slow (it is, in fact, slower than the purely Python version..):</p>
<pre><code>+13: xmin = min(contourData[:, 0, 1])
+14: xmax = max(contourData[:, 0, 1])
</code></pre>
<p>and, similarly:</p>
<pre><code>+21: ymin = min(contourData[:, 0, 1])
+22: ymax = max(contourData[:, 0, 1])
</code></pre>
<p>Other lines that are problematic (but a little less) without me understanding why:</p>
<pre><code>+48: if box_overlaps_contour(topLefts[j, :], boxSize, currentContour):
</code></pre>
<p>Why is the function call already so complicated? The data types match, everything is unsigned integer..</p>
<p>And also already returning the bool value; I expanded what the compiler made out of it:</p>
<pre><code>+31: return isOverlapping
__Pyx_XDECREF(__pyx_r);
__pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_isOverlapping); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 31, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_r = __pyx_t_2;
__pyx_t_2 = 0;
goto __pyx_L0;
</code></pre>
<p>Any help would be highly appreciated! I still don't really understand how cython works, as it seems :/
If required, I can give further information!</p>
<p>Many thanks!!! :)</p>
<p>EDIT: Here is what Cython makes out of the np.min() line...: Any ideas?</p>
<pre><code>+21: ymin = np.min(contourData[:, 0, 1])
__Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_n_s_np); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 21, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_min); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 21, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_3);
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
__pyx_t_4.data = __pyx_v_contourData.data;
__pyx_t_4.memview = __pyx_v_contourData.memview;
__PYX_INC_MEMVIEW(&__pyx_t_4, 0);
__pyx_t_4.shape[0] = __pyx_v_contourData.shape[0];
__pyx_t_4.strides[0] = __pyx_v_contourData.strides[0];
__pyx_t_4.suboffsets[0] = -1;
{
Py_ssize_t __pyx_tmp_idx = 0;
Py_ssize_t __pyx_tmp_stride = __pyx_v_contourData.strides[1];
if ((0)) __PYX_ERR(0, 21, __pyx_L1_error)
__pyx_t_4.data += __pyx_tmp_idx * __pyx_tmp_stride;
}
{
Py_ssize_t __pyx_tmp_idx = 1;
Py_ssize_t __pyx_tmp_stride = __pyx_v_contourData.strides[2];
if ((0)) __PYX_ERR(0, 21, __pyx_L1_error)
__pyx_t_4.data += __pyx_tmp_idx * __pyx_tmp_stride;
}
__pyx_t_2 = __pyx_memoryview_fromslice(__pyx_t_4, 1, (PyObject *(*)(char *)) __pyx_memview_get_unsigned_int, (int (*)(char *, PyObject *)) __pyx_memview_set_unsigned_int, 0);; if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 21, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__PYX_XDEC_MEMVIEW(&__pyx_t_4, 1);
__pyx_t_4.memview = NULL;
__pyx_t_4.data = NULL;
__pyx_t_5 = NULL;
if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) {
__pyx_t_5 = PyMethod_GET_SELF(__pyx_t_3);
if (likely(__pyx_t_5)) {
PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3);
__Pyx_INCREF(__pyx_t_5);
__Pyx_INCREF(function);
__Pyx_DECREF_SET(__pyx_t_3, function);
}
}
__pyx_t_1 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_5, __pyx_t_2) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_2);
__Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
__Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 21, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
__pyx_t_6 = __Pyx_PyInt_As_unsigned_int(__pyx_t_1); if (unlikely((__pyx_t_6 == (unsigned int)-1) && PyErr_Occurred())) __PYX_ERR(0, 21, __pyx_L1_error)
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_v_ymin = __pyx_t_6;
</code></pre>
|
<p>Using <code>np.min</code> and <code>np.max</code> will probably be quicker than the Python <code>min</code> and <code>max</code> functions (possibly depending on the size of the array). The Numpy functions will use the C buffer protocol and operate on the C numeric type while the Python ones will use the Python iterator protocol and the numbers will be treated as Python objects. Despite that they'll look just as yellow in Cython.</p>
<p><em>Edit:</em> if that doesn't help you may want to write your own <code>cdef</code> function to do <code>minmax</code> (to avoid the Python call). Something like (untested code follows...)</p>
<pre><code># return type is a C struct of 2 values - this should be quick...
cdef (double, double) minmax(double arr[:]):
cdef double min = np.inf
cdef double max = -np.inf
cdef int i
for i in range(arr.shape[0]):
if arr[i] < min:
min = arr[i]
if arr[i] > max
max = arr[i]
return min, max
</code></pre>
<p>That has the advantage of doing both in one loop, and not requiring a Python function call. It obviously has the disadvantage that you need to write it yourself.</p>
<p>A lot of the generated C code you see is to do with the memoryview slicing, and isn't actually too slow (although it takes up a lot of space).</p>
<hr>
<pre><code>cdef box_overlaps_contour(unsigned int[:] boxTopLeftXY, unsigned int boxSize, unsigned int[:, :, :] contourData):
</code></pre>
<p>No return type is specified so it returns as Python object. You could do <code>cdef bint box_overlaps_contour(...)</code> to return a "boolean integer".</p>
|
python|arrays|numpy|cython
| 1
|
376,218
| 61,811,257
|
Split a multiple dimensional pytorch tensor into "n" smaller tensors
|
<p>Let's say I have a 5D tensor which has this shape for example : <strong>(1, 3, 10, 40, 1)</strong>. I want to split it into smaller equal tensors (if possible) according to a certain dimension with a <strong>step</strong> equal to <strong>1</strong> while preserving the other dimensions.</p>
<p>Let's say for example I want to split it according to the fourth dimension (=<strong>40</strong>) where each tensor will have a size equal to <strong>10</strong>. So the first <em>tensor_1</em> will have values from <strong>0->9</strong>, <em>tensor_2</em> will have values from <strong>1->10</strong> and so on.</p>
<p>The 39 tensors will have these shapes :</p>
<pre><code>Shape of tensor_1 : (1, 3, 10, 10, 1)
Shape of tensor_2 : (1, 3, 10, 10, 1)
Shape of tensor_3 : (1, 3, 10, 10, 1)
...
Shape of tensor_39 : (1, 3, 10, 10, 1)
</code></pre>
<p>Here's what I have tried : </p>
<pre><code>a = torch.randn(1, 3, 10, 40, 1)
chunk_dim = 10
a_split = torch.chunk(a, chunk_dim, dim=3)
</code></pre>
<p>This gives me 4 tensors. How can I edit this so I'll have 39 tensors with a step = 1 like I explained ?</p>
|
<p>This creates overlapping tensors which what I wanted : </p>
<pre><code>torch.unfold(dimension, size, step)
</code></pre>
|
python|pytorch|tensor
| 2
|
376,219
| 61,616,810
|
How to do cubic spline interpolation and integration in Pytorch
|
<p>In Pytorch, is there cubic spline interpolation similar to <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.CubicSpline.html" rel="noreferrer">Scipy's</a>? Given 1D input tensors <code>x</code> and <code>y</code>, I want to interpolate through those points and evaluate them at <code>xs</code> to obtain <code>ys</code>. Also, I want an integrator function that finds <code>Ys</code>, the integral of the spline interpolation from <code>x[0]</code> to <code>xs</code>.</p>
|
<p>Here is a <a href="https://gist.github.com/chausies/c453d561310317e7eda598e229aea537" rel="nofollow noreferrer">gist</a> I made doing this with <a href="https://en.wikipedia.org/wiki/Cubic_Hermite_spline" rel="nofollow noreferrer">Cubic Hermite Splines</a> in Pytorch efficiently and with autograd support.</p>
<p>For convenience, I'll also put the code here.</p>
<pre class="lang-py prettyprint-override"><code>import torch as T
def h_poly_helper(tt):
A = T.tensor([
[1, 0, -3, 2],
[0, 1, -2, 1],
[0, 0, 3, -2],
[0, 0, -1, 1]
], dtype=tt[-1].dtype)
return [
sum( A[i, j]*tt[j] for j in range(4) )
for i in range(4) ]
def h_poly(t):
tt = [ None for _ in range(4) ]
tt[0] = 1
for i in range(1, 4):
tt[i] = tt[i-1]*t
return h_poly_helper(tt)
def H_poly(t):
tt = [ None for _ in range(4) ]
tt[0] = t
for i in range(1, 4):
tt[i] = tt[i-1]*t*i/(i+1)
return h_poly_helper(tt)
def interp_func(x, y):
"Returns integral of interpolating function"
if len(y)>1:
m = (y[1:] - y[:-1])/(x[1:] - x[:-1])
m = T.cat([m[[0]], (m[1:] + m[:-1])/2, m[[-1]]])
def f(xs):
if len(y)==1: # in the case of 1 point, treat as constant function
return y[0] + T.zeros_like(xs)
I = T.searchsorted(x[1:], xs)
dx = (x[I+1]-x[I])
hh = h_poly((xs-x[I])/dx)
return hh[0]*y[I] + hh[1]*m[I]*dx + hh[2]*y[I+1] + hh[3]*m[I+1]*dx
return f
def interp(x, y, xs):
return interp_func(x,y)(xs)
def integ_func(x, y):
"Returns interpolating function"
if len(y)>1:
m = (y[1:] - y[:-1])/(x[1:] - x[:-1])
m = T.cat([m[[0]], (m[1:] + m[:-1])/2, m[[-1]]])
Y = T.zeros_like(y)
Y[1:] = (x[1:]-x[:-1])*(
(y[:-1]+y[1:])/2 + (m[:-1] - m[1:])*(x[1:]-x[:-1])/12
)
Y = Y.cumsum(0)
def f(xs):
if len(y)==1:
return y[0]*(xs - x[0])
I = P.searchsorted(x[1:].detach(), xs)
dx = (x[I+1]-x[I])
hh = H_poly((xs-x[I])/dx)
return Y[I] + dx*(
hh[0]*y[I] + hh[1]*m[I]*dx + hh[2]*y[I+1] + hh[3]*m[I+1]*dx
)
return f
def integ(x, y, xs):
return integ_func(x,y)(xs)
# Example
if __name__ == "__main__":
import matplotlib.pylab as P # for plotting
x = T.linspace(0, 6, 7)
y = x.sin()
xs = T.linspace(0, 6, 101)
ys = interp(x, y, xs)
Ys = integ(x, y, xs)
P.scatter(x, y, label='Samples', color='purple')
P.plot(xs, ys, label='Interpolated curve')
P.plot(xs, xs.sin(), '--', label='True Curve')
P.plot(xs, Ys, label='Spline Integral')
P.plot(xs, 1-xs.cos(), '--', label='True Integral')
P.legend()
P.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/zgA0s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zgA0s.png" alt="Resulting image from code example" /></a></p>
|
python|pytorch|interpolation|numeric
| 6
|
376,220
| 62,010,428
|
Plotly: How to add vertical lines at specified points?
|
<p>I have a data frame plot of a time series along with a list of numeric values at which I'd like to draw vertical lines. The plot is an interactive one created using the cufflinks package. Here is an example of three time series in 1000 time values, I'd like to draw vertical lines at 500 and 800. My attempt using "axvlinee" is based upon suggestions I've seen for similar posts:</p>
<pre><code>import numpy as np
import pandas as pd
import cufflinks
np.random.seed(123)
X = np.random.randn(1000,3)
df=pd.DataFrame(X, columns=['a','b','c'])
fig=df.iplot(asFigure=True,xTitle='time',yTitle='values',title='Time Series Plot')
fig.axvline([500,800], linewidth=5,color="black", linestyle="--")
fig.show()
</code></pre>
<p>The error message states 'Figure' object has no attribute 'axvline'.</p>
<p>I'm not sure whether this message is due to my lack of understanding about basic plots or stems from a limitation of using igraph.</p>
|
<h3>The answer:</h3>
<p>To add a line to an existing plotly figure, just use:</p>
<pre><code>fig.add_shape(type='line',...)
</code></pre>
<h3>The details:</h3>
<p>I gather <a href="https://stackoverflow.com/questions/40166463/is-there-a-simple-way-to-plot-vertical-lines-on-scatter-plots-in-plotly">this</a> is the post you've seen since you're mixing in matplotlib. And as it has been stated in the comments, <code>axvline</code> has got nothing to do with plotly. That was only used as an example for how you could have done it using matplotlib. Using plotly, I'd either go for <code>fig.add_shape(go.layout.Shape(type="line")</code>. But before you try it out for yourself, please b aware that <code>cufflinks</code> has been deprecated. I really liked cufflinks, but now there are better options for building both quick and detailed graphs. If you'd like to stick to one-liners similat to <code>iplot</code>, I'd suggest using <code>plotly.express</code>. The only hurdle in your case is changing your dataset from a wide to a long format that is preferred by <code>plotly.express</code>. The snippet below does just that to produce the following plot:</p>
<p><a href="https://i.stack.imgur.com/APnaH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/APnaH.png" alt="enter image description here"></a></p>
<h3>Code:</h3>
<pre><code>import numpy as np
import pandas as pd
import plotly.express as px
from plotly.offline import iplot
#
np.random.seed(123)
X = np.random.randn(1000,3)
df=pd.DataFrame(X, columns=['a','b','c'])
df['id'] = df.index
df = pd.melt(df, id_vars='id', value_vars=df.columns[:-1])
# plotly line figure
fig = px.line(df, x='id', y='value', color='variable')
# lines to add, specified by x-position
lines = {'a':500,'c':700,'a':900,'b':950}
# add lines using absolute references
for k in lines.keys():
#print(k)
fig.add_shape(type='line',
yref="y",
xref="x",
x0=lines[k],
y0=df['value'].min()*1.2,
x1=lines[k],
y1=df['value'].max()*1.2,
line=dict(color='black', width=3))
fig.add_annotation(
x=lines[k],
y=1.06,
yref='paper',
showarrow=False,
text=k)
fig.show()
</code></pre>
|
python|pandas|plotly|cufflinks
| 6
|
376,221
| 61,611,024
|
Python: I would like to return a subset of dataframe based on a list, if the records are ordered the same way the list is
|
<p>I have a dataframe that has more that a thousand records and I would like to return a sliced dataframe where the values are ordered similarly to the list.</p>
<p>e.g.</p>
<pre><code>lst = [0,1,0,0,0,1]
</code></pre>
<h1>Input</h1>
<pre><code> date season hot_or_cold
0 2012-01-01 Winter 0
1 2012-01-02 Winter 1
2 2012-01-03 Winter 0
3 2012-01-04 Winter 0
4 2012-01-05 Winter 0
5 2012-01-06 Winter 1
6 2012-01-07 Winter 1
7 2012-01-08 Winter 1
8 2012-01-09 Winter 0
9 2012-01-10 Winter 1
10 2012-01-11 Winter 0
# 1 - hot
# 0 - cold
</code></pre>
<h1>Output</h1>
<pre><code> date season hot_or_cold
0 2012-01-01 Winter 0
1 2012-01-02 Winter 1
2 2012-01-03 Winter 0
3 2012-01-04 Winter 0
4 2012-01-05 Winter 0
5 2012-01-06 Winter 1
</code></pre>
<p>Thank you in advance</p>
|
<p>Define 2 following functions:</p>
<ol>
<li><p>Find match between <em>s</em> (a <em>Series</em>, longer) and <em>lst</em> (a list, shorter).</p>
<pre><code>def fndMatch(s, lst):
len1 = s.size
len2 = len(lst)
for i1 in range(len1 - len2 + 1):
i2 = i1 + len2
if s.iloc[i1:i2].eq(lst).all():
return (i1, i2)
return (None, None)
</code></pre>
<p>When a match has been found, the result is both slice borders,
otherwise a pair of <em>None</em> values.</p></li>
<li><p>Get a fragment of <em>df</em> with <em>hot_or_cold</em> column matching <em>lst</em>:</p>
<pre><code>def getFragment():
i1, i2 = fndMatch(df.hot_or_cold, lst)
if i1 is None:
return None
else:
return df.iloc[i1:i2]
</code></pre></li>
</ol>
<p>When you call it (<code>getFragment()</code>) the result is:</p>
<pre><code> date season hot_or_cold
0 2012-01-01 Winter 0
1 2012-01-02 Winter 1
2 2012-01-03 Winter 0
3 2012-01-04 Winter 0
4 2012-01-05 Winter 0
5 2012-01-06 Winter 1
</code></pre>
|
python|python-3.x|pandas|list
| 0
|
376,222
| 61,778,100
|
Counting a list of words in a list of strings using python
|
<p>So I have a pandas dataframe with rows of tokenized strings in a column named story. I also have a list of words in a list called selected_words. I am trying to count the instances of any of the selected_words in each of the rows in the column story. </p>
<p>The code I used before that had worked is </p>
<p><code>CCwordsCount=df4.story.str.count('|'.join(selected_words))</code></p>
<p>This is now giving me NaN values for every row. </p>
<p>Below is the first few rows of the column story in df4. The dataframe contains a little over 400 rows of NYTimes Articles. </p>
<pre><code>0 [it, was, a, curious, choice, for, the, good, ...
1 [when, he, was, a, yale, law, school, student,...
2 [video, bitcoin, has, real, world, investors, ...
3 [bitcoin, s, wild, ride, may, not, have, been,...
4 [amid, the, incense, cheap, art, and, herbal, ...
5 [san, francisco, eight, years, ago, ernie, all...
</code></pre>
<p>This is the list of selected_words</p>
<pre><code>selected_words = ['accept', 'believe', 'trust', 'accepted', 'accepts', 'trusts', 'believes', \
'acceptance', 'trusted', 'trusting', 'accepting', 'believes', 'believing', 'believed',\
'normal', 'normalize', ' normalized', 'routine', 'belief', 'faith', 'confidence', 'adoption', \
'adopt', 'adopted', 'embrace', 'approve', 'approval', 'approved', 'approves']
</code></pre>
<p><a href="https://pitt.box.com/s/s37b5ztq47mkebqyjwf40m4tyllewwg8" rel="nofollow noreferrer">Link to my df4 .csv file</a></p>
|
<p><code>.find()</code> function can be useful. And this can be implemented in many different ways. If you don't have any other purpose for the raw article and it can be a bunch of string. Then try this, you can also put them in a dictionary and loop over.</p>
<pre><code>def find_words(text, words):
return [word for word in words if word in text]
sentences = "0 [it, was, a, curious, choice, for, the, good, 1 [when, he, was, a, yale, law, school, student, 2 [video, bitcoin, has, real, world, investors, 3 [bitcoin, s, wild, ride, may, not, have, been, 4 [amid, the, incense, cheap, art, and, herbal, 5 [san, francisco, eight, years, ago, ernie, all"
search_keywords=['accept', 'believe', 'trust', 'accepted', 'accepts', 'trusts', 'believes', \
'acceptance', 'trusted', 'trusting', 'accepting', 'believes', 'believing', 'believed',\
'normal', 'normalize', ' normalized', 'routine', 'belief', 'faith', 'confidence', 'adoption', \
'adopt', 'adopted', 'embrace', 'approve', 'approval', 'approved', 'approves', 'good']
found = find_words(sentences, search_keywords)
print(found)
</code></pre>
<p>Note : I didn't have panda data frame in mind whine I create this. </p>
|
python|pandas|count
| 0
|
376,223
| 62,031,478
|
Updating value of one column in dataframe if ID match found in column of another dataframe
|
<p>I have two dataframes. The second dataframe is a derived from the first dataframe. I update a column in the second dataframe, and then I want to put the updated values back in the first dataframe. I have tried "merge", but it gives me two columns with suffixes "_x" and "_y"</p>
<pre><code>import pandas
lotQtyQueryForDF = pandas.read_sql_query(refreshQuery,conForInfo)
dataFrameOfLots = pandas.DataFrame(lotQtyQueryForDF,columns=['Customer','Stage','ProdType','Brand','ProdName','Size','Strength','Lot','PackedOn','Qty','Available'])
dataFrameOfLots['Available']=dataFrameOfLots["Available"].fillna(dataFrameOfLots['Qty'])
#inserting columns
dataFrameOfLots['QtyInTransaction']=0
dataFrameOfLots['IndexCol'] = range(1, len(dataFrameOfLots) + 1)
dataFrameFiltered=dataFrameOfLots.query('Brand=="XYZ" & Customer=="ABC"')
dataFrameFiltered.loc[:,'Qty in transaction']=34
dataFrameFiltered2=dataFrameFiltered[['Qty in transaction','IndexCol']].copy()
dataFrameOfLots.merge(dataFrameFiltered2,on='IndexCol',how='outer')
</code></pre>
<p>Input Dataset:</p>
<pre><code>Customer Stage ProdType Brand ProdName Size Strength Lot PackedOn Qty Available
DEF A Bulk YYY Test Test Weak 1 20200101 10 5
ABC A Bulk XYZ Test Test Weak 1 20200101 10 5
GHI A Bulk YTY Test Test Weak 1 20200101 10 5
ABC B RAW XYZ Test Test Weak 1 20200101 10 5
</code></pre>
<p>Actual output:</p>
<pre><code>Customer Stage ProdType Brand ProdName Size Strength Lot PackedOn Qty Available QtyInTransaction_x IndexCol QtyInTransaction_y
DEF A Bulk YYY Test Test Weak 1 20200101 10 5 0 1 0
ABC A Bulk XYZ Test Test Weak 1 20200101 10 5 0 2 34
GHI A Bulk YTY Test Test Weak 1 20200101 10 5 0 3 0
ABC B RAW XYZ Test Test Weak 1 20200101 10 5 0 4 34
</code></pre>
<p>Expected output:</p>
<pre><code>Customer Stage ProdType Brand ProdName Size Strength Lot PackedOn Qty Available IndexCol QtyInTransaction
DEF A Bulk YYY Test Test Weak 1 20200101 10 5 1 0
ABC A Bulk XYZ Test Test Weak 1 20200101 10 5 2 34
GHI A Bulk YTY Test Test Weak 1 20200101 10 5 3 0
ABC B RAW XYZ Test Test Weak 1 20200101 10 5 4 34
</code></pre>
<p>Is query the right approach?
How would I merge so that only one column shows up?</p>
<p>Thanks</p>
|
<p>Please Try use outer merge and drop unrequired rows after you do your filters. Code below. </p>
<pre><code>result=pd.merge(dataFrameOfLots, dataFrameFiltered, how='outer', on=['Customer', 'Stage', 'ProdType', 'Brand', 'ProdName', 'Size',
'Strength', 'Lot', 'PackedOn', 'Qty', 'Available'],suffixes=('_x', '')).fillna(0)
result=result.loc[:,~result.columns.str.endswith('_x')]#drop unwanted columns
</code></pre>
<p>or </p>
<pre><code>result.drop(columns=['QtyInTransaction_x','IndexCol_x'], inplace=True)#drop unwanted columns
</code></pre>
|
python|pandas|dataframe|join|merge
| 1
|
376,224
| 61,912,391
|
Tuning the Model to Obtain Better Performance
|
<p>I made a model for regression problem which is to predict a value from 9 input variables.
Development of the model is ANN with library of Keras</p>
<p>In this model with compile and fit method, I already predicted the output value.
However, I got the bad evaluate score. I evaluated the model using RMSE and R2</p>
<p>RMSE between the (those have been normalized) predicted and labelled value is 0.207,
RMSE between the (original form) predicted and labelled valued is 215,
R2 is 0.4</p>
<p>How can I modify my model to obtain better result (low RMSE and high R2)?
or is this model acceptable?</p>
<pre><code>import keras
model = keras.models.Sequential()
model.add(keras.layers.Dense(36, input_dim=9, activation='relu', kernel_initializer='normal'))
model.add(keras.layers.Dropout(0.5))
model.add(keras.layers.Dense(1))
callback = keras.callbacks.EarlyStopping(monitor='val_mean_squared_error', patience=10)
from keras.models import Model
model.compile(loss=[keras.losses.MeanSquaredError()],
optimizer=keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999),
metrics=[keras.metrics.MeanSquaredError()])
model_history = model.fit(myscaled_x_train.values, myscaled_y_train.values, epochs=100, batch_size=32, verbose=1, validation_data=(myscaled_x_valid.values, myscaled_y_valid.values),
callbacks=[callback])
model_history
</code></pre>
<p>Looking for the solution and explanation, if somebody can help me with this.
Thankyou</p>
|
<p>There are different techniques to improve performance: </p>
<ul>
<li>you can add more hidden layers; </li>
<li>you can change layers number of units, activation functions and other hyperparameters;</li>
<li>you can try different type of neural network. Where are different types: ResNet, inception blocks, RNN, etc. </li>
<li>also there is a change you preprocessed your data wrong. Neural networks like to word with scaled features.</li>
</ul>
|
python|tensorflow|machine-learning|keras|deep-learning
| 1
|
376,225
| 61,867,945
|
Python import error: cannot import name 'six' from 'sklearn.externals'
|
<p>I'm using numpy and mlrose, and all i have written so far is:</p>
<pre><code>import numpy as np
import mlrose
</code></pre>
<p>However, when i run it, it comes up with an error message:</p>
<pre><code> File "C:\Users\<my username>\AppData\Local\Programs\Python\Python38-32\lib\site-packages\mlrose\neural.py", line 12, in <module>
from sklearn.externals import six
ImportError: cannot import name 'six' from 'sklearn.externals' (C:\Users\<my username>\AppData\Local\Programs\Python\Python38-32\lib\site-packages\sklearn\externals\__init__.py)
</code></pre>
<p>Any help on sorting this problem will be greatly appreciated.</p>
|
<h3>Solution: The real answer is that the dependency needs to be changed by the <code>mlrose</code> maintainers.</h3>
<h3>A workaround is:</h3>
<pre><code>import six
import sys
sys.modules['sklearn.externals.six'] = six
import mlrose
</code></pre>
|
python|numpy|scikit-learn|python-import|six
| 61
|
376,226
| 62,007,858
|
If a timestamp in one table is between two time stamps in another table, then increment by 1 using Python Pandas
|
<p><strong>Summary of the Problem:</strong></p>
<p>I would like to calculate the number of ambulances on a response at any given minute of the day over an entire calendar year.
Two pandas dataframes are generated; The first is the emergency responses of ambulances showing the starting time stamp of the emergency and the ending time stamp for that ambulance emergency.This data comes from our database. For example, an ambulance responded to a cardiac arrest at 2020-01-01 00:30:17 and the ambulance was cleared from this response at 2020-01-01 00:38:05.000. Let's call this dataframe "emergency_event". </p>
<p>The second pandas dataframe takes the minimum value of the emergency_event and the maximum value. It generates a dataframe using the min and max time stamps as the starting and ending points of another dataframe. It increments by one minute from starting point to ending point and generates a zero as a place holder for the number of trucks working. Let's call this second dataframe "coincident" because we want to count the number of ambulances working coincidentally within that one minute time frame.</p>
<p>In other words, the first emergency event started at "2020-01-01 00:00:28" so the "coincident" events table would take this value and increment by one minute until the very last emergency_event ending time stamp. For example the "coincident" table would look like:</p>
<pre><code>calendar_timestamp TrucksWorking
2020-01-01 00:00:28 0
2020-01-01 00:01:28 0
2020-01-01 00:02:28 0
2020-01-01 00:03:28 0
2020-01-01 00:04:28 0
2020-01-01 00:05:28 0
......
</code></pre>
<p>Notice how it increments by one minute and there is a placeholder of zero for the number of ambulances working.</p>
<p>There are now two dataframes: An "emergency_event" and a "coincident" table.
The goal of the program is to use the first observation of the "coincident" table and evaluate against every row of the "emergency_events" table. <strong>Does the time stamp of the "coincident" observation occur between the StartTime and EndTime of the "emergency_events"?</strong> If True, then increment the TrucksWorking value by 1. Loop through every "coincident" observation and evaluate if it is in between any of the "emergency_events" and increment by 1 if True.</p>
<p>At the end of the program this will generate a dataframe of one minute increments and the number of ambulances working at that time. Using this data I can statistically analyze the number of ambulances working at any given time and even parse it out by hour of day, weekday, daytime/nighttime, etc. This is very powerful information.</p>
<p>But I am stuck on the logic and I need your help. Specifically, I can't figure out how to make "coincident" table add 1 when its timestamp is within the "emergency_events" table.</p>
<p><strong>What I have tried</strong></p>
<pre><code>for each in coincident.calendar_timestamp:
if (coincident[coincident['calendar_timestamp']] >= emergency_events[emergency_events['StartTime']] & coincident[coincident['calendar_timestamp']] <= emergency_events[emergency_events['EndTime']]):
coincident[coincident['TrucksWorking']] = coincident[coincident['TrucksWorking']] + 1
else:
coincident[coincident['TrucksWorking']]
</code></pre>
<p><strong>I have also attempted:</strong></p>
<pre><code># =============================================================================
# I have attempted the following
# the following code returns an error message
# ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
# =============================================================================
## for each in coincident.calendar_timestamp:
## if (coincident[coincident['calendar_timestamp'].between(starting_point, ending_point)]):
## coincident[coincident['TrucksWorking']] = coincident[coincident['TrucksWorking']] + 1
## else:
## coincident[coincident['TrucksWorking']]
# =============================================================================
# I have attempted the following
# a dead end code that I cannot make work
# df = coincident[coincident['calendar_timestamp'].between(starting_point, ending_point)]
# print(df.head(n = 5))
# =============================================================================
# =============================================================================
# I have attempted the following but it will not work
# another dead end code
# for timestamp in coincident_events.calendar:
# print(coincident_events.calendar.query('coincident_events.calendar >= emergency_events.starting_point and coincident_events.calendar <= emergency_events.ending_point'))
# =============================================================================
</code></pre>
<p><strong>Show my code:</strong></p>
<pre><code># -*- coding: utf-8 -*-
# Python 3.7 Anaconda distribution
import pandas as pd
import datetime
# =============================================================================
# Step 1: Read in the ambulance runs with a starting and ending time values
# call this dataframe "emergency_events"
# =============================================================================
# the following array is a small sample when an ambulance starts a call and when it ends a call
data = [['2020-01-01 00:00:28.000','2020-01-01 00:35:28.987']
, ['2020-01-01 00:02:34.000','2020-01-01 01:05:13.540']
, ['2020-01-01 00:03:57.000','2020-01-01 01:14:44.537']
, ['2020-01-01 00:06:17.000','2020-01-01 01:26:52.087']
, ['2020-01-01 00:13:20.000','2020-01-01 01:17:31.310']
, ['2020-01-01 00:14:01.000','2020-01-01 01:57:28.343']
, ['2020-01-01 00:16:11.000','2020-01-01 00:39:34.967']
, ['2020-01-01 00:22:03.000','2020-01-01 01:46:40.037']
, ['2020-01-01 00:23:07.000','2020-01-01 00:49:25.890']
, ['2020-01-01 00:23:19.000','2020-01-01 01:26:39.920']
, ['2020-01-01 00:30:17.000','2020-01-01 00:38:05.000']]
#convert the array to a pandas data frame
emergency_events = pd.DataFrame(data, columns = ['StartTime', 'EndTime'])
#convert the string values to date time values
emergency_events['StartTime'] = pd.to_datetime(emergency_events['StartTime'])
emergency_events['EndTime'] = pd.to_datetime(emergency_events['EndTime'])
# =============================================================================
# Step 2 Create a calendar of date time stamps incremented by 1 minute using the ambulance runs min/max values
# call this dataframe "coincident"
# =============================================================================
## establish a starting value based on the first ambulance event
starting_point = emergency_events.StartTime.min()
print(starting_point)
## establish an ending value based on the final ambulance call ending time.
ending_point = emergency_events.EndTime.max()
print(ending_point)
## create a range of time stamps incremented by 1 minute from starting point to ending point
days = pd.date_range(starting_point, ending_point, freq='min')
## create a pandas dataframe with two columns: calendar for time stamps and a place holder of 0 for trucks working
coincident = pd.DataFrame({'calendar_timestamp': days, 'TrucksWorking': 0})
## print it out to verify the data
print(coincident.head(n = 5))
# =============================================================================
# Step 3 --- now for the difficult part
# if a "coincident" time stamp is between a start and end time of an emergency_event
# increment the TrucksWorking column by 1
# loop through every "coincident" observation and test if it is between a start and an end of an "emergency_event"
# =============================================================================
for each in coincident.calendar_timestamp:
if (coincident[coincident['calendar_timestamp']] >= emergency_events[emergency_events['StartTime']] & coincident[coincident['calendar_timestamp']] <= emergency_events[emergency_events['EndTime']]):
coincident[coincident['TrucksWorking']] = coincident[coincident['TrucksWorking']] + 1
else:
coincident[coincident['TrucksWorking']]
## at the end of this program it should return a calendar of date time stamps with
## the number of ambulances at work during that one minute interval.
## this information can be used for data modeling.
# =============================================================================
# I have attempted the following
# the following code returns an error message
# ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
# =============================================================================
## for each in coincident.calendar_timestamp:
## if (coincident[coincident['calendar_timestamp'].between(starting_point, ending_point)]):
## coincident[coincident['TrucksWorking']] = coincident[coincident['TrucksWorking']] + 1
## else:
## coincident[coincident['TrucksWorking']]
# =============================================================================
# I have attempted the following
# a dead end code that I cannot make work
# df = coincident[coincident['calendar_timestamp'].between(starting_point, ending_point)]
# print(df.head(n = 5))
# =============================================================================
# =============================================================================
# I have attempted the following but it will not work
# another dead end code
# for timestamp in coincident_events.calendar:
# print(coincident_events.calendar.query('coincident_events.calendar >= emergency_events.starting_point and coincident_events.calendar <= emergency_events.ending_point'))
# =============================================================================
print(coincident.head(n = 20))
# =============================================================================
# Step 4: verify the "coincident" table is correct and then analyze the data
# Printing the "coincident" dataframe should look something like:
# =============================================================================
# StartTime TrucksWorking
# 0 2020-01-01 00:00:28 1
# 1 2020-01-01 00:01:28 1
# 2 2020-01-01 00:02:28 1
# 3 2020-01-01 00:03:28 1
# 4 2020-01-01 00:04:28 2
# 5 2020-01-01 00:05:28 2
# 6 2020-01-01 00:06:28 3
# 7 2020-01-01 00:07:28 3
# 8 2020-01-01 00:08:28 3
# 9 2020-01-01 00:09:28 3
# 10 2020-01-01 00:10:28 3
# etc for a full calendar year of ambulance responses
# =============================================================================
# Step 5: analyze the data looking for patterns of ambulance utilization. TBD
# =============================================================================
</code></pre>
|
<p>Using your data I found the following solution. I used only the <strong>first 200 minutes of the year 2020</strong> but you can change that easily by adjusting <code>periods=200</code> to the number of minutes per year.</p>
<p>I used the following <code>variables</code> :
<code>df</code> corresponds to your coincident-dataframe. I generated it upfront for every minute from Januar 1, 2020:</p>
<pre><code>import pandas as pd
import datetime
df = pd.DataFrame()
df['time1'] = pd.date_range('2020-01-01 00:00:00', periods=200, freq='min')
df['trucks working'] = 0
print(df)
</code></pre>
<p>This gives me the minutes of the year still with all <strong>trucks working = 0</strong>:</p>
<pre><code> time1 trucks working
0 2020-01-01 00:00:00 0
1 2020-01-01 00:01:00 0
2 2020-01-01 00:02:00 0
3 2020-01-01 00:03:00 0
4 2020-01-01 00:04:00 0
.. ... ...
195 2020-01-01 03:15:00 0
196 2020-01-01 03:16:00 0
197 2020-01-01 03:17:00 0
198 2020-01-01 03:18:00 0
199 2020-01-01 03:19:00 0
</code></pre>
<p>Using your emergency calls as <code>data</code></p>
<pre><code>data = [['2020-01-01 00:00:28.000','2020-01-01 00:35:28.987']
, ['2020-01-01 00:02:34.000','2020-01-01 01:05:13.540']
, ['2020-01-01 00:03:57.000','2020-01-01 01:14:44.537']
, ['2020-01-01 00:06:17.000','2020-01-01 01:26:52.087']
, ['2020-01-01 00:13:20.000','2020-01-01 01:17:31.310']
, ['2020-01-01 00:14:01.000','2020-01-01 01:57:28.343']
, ['2020-01-01 00:16:11.000','2020-01-01 00:39:34.967']
, ['2020-01-01 00:22:03.000','2020-01-01 01:46:40.037']
, ['2020-01-01 00:23:07.000','2020-01-01 00:49:25.890']
, ['2020-01-01 00:23:19.000','2020-01-01 01:26:39.920']
, ['2020-01-01 00:30:17.000','2020-01-01 00:38:05.000']]
</code></pre>
<p>I add colum names and name the resulting dataframe <code>emergency_events</code> :</p>
<pre><code> emergency_events = pd.DataFrame(data, columns = ['StartTime', 'EndTime'])
</code></pre>
<p>Now I can iterate over the dataframe <code>emergency_events</code></p>
<p>and increment the <code>'trucks working'</code>
for each minute of the day that is
between a row of <code>'StartTime'</code> and <code>'EndTime'</code></p>
<pre><code>for index2, row2 in df.iterrows():
for index, row in emergency_events.iterrows():
if pd.to_datetime(row['StartTime']) <= pd.to_datetime(row2['time1']) <= pd.to_datetime(row['EndTime']):
#print(row2['trucks working'])
#print(row['StartTime'],row2['time1'],row['EndTime'])
df.at[index2,'trucks working'] += 1
</code></pre>
<p>This gives me a dataframe with the number of trucks for each minute in the day.</p>
<pre><code>time1 trucks working
0 2020-01-01 00:00:00 0
1 2020-01-01 00:01:00 1
2 2020-01-01 00:02:00 1
3 2020-01-01 00:03:00 2
4 2020-01-01 00:04:00 3
.. ... ...
195 2020-01-01 03:15:00 0
196 2020-01-01 03:16:00 0
197 2020-01-01 03:17:00 0
198 2020-01-01 03:18:00 0
199 2020-01-01 03:19:00 0
</code></pre>
|
python|pandas|dataframe|datetime|increment
| 0
|
376,227
| 61,830,226
|
How to add new row in time series dataframe
|
<p>My dataframe has an index column of dates and one column</p>
<pre><code> var
date
2020-03-10 77
2020-03-11 88
2020-03-12 99
</code></pre>
<p>I have an array and I want to append it to the dataframe one by one. I have tried a few methods but anything isn't working.
my code is something like this</p>
<pre><code>for i in range(20):
x=i*i
df.append(x)
</code></pre>
<p>After each iteration dataframe needs to be appended with x value.
Final output:</p>
<pre><code> var
date
2020-03-10 77
2020-03-11 88
2020-03-12 99
2020-03-13 1
2020-03-14 4
2020-03-15 9
.
.
. 20 times
</code></pre>
<p>Will be grateful for any suggestions.</p>
|
<p>Try this</p>
<pre><code>tmpdf = pd.DataFrame({"var":[77,88,99]},index=pd.date_range("2020-03-10",periods=3,freq='D'))
for i in range(1,21):
idx = tmpdf.tail(1).index[0] + pd.Timedelta(days=1)
tmpdf.loc[idx] = i*i
</code></pre>
<p>output</p>
<pre><code>2020-03-10 77
2020-03-11 88
2020-03-12 99
2020-03-13 1
2020-03-14 4
2020-03-15 9
2020-03-16 16
2020-03-17 25
2020-03-18 36
2020-03-19 49
2020-03-20 64
2020-03-21 81
2020-03-22 100
2020-03-23 121
2020-03-24 144
2020-03-25 169
2020-03-26 196
2020-03-27 225
2020-03-28 256
2020-03-29 289
2020-03-30 324
2020-03-31 361
2020-04-01 400
</code></pre>
|
python|pandas|python-2.7|dataframe|time-series
| 1
|
376,228
| 61,970,972
|
How to put image uploaded in tkinter into a function?
|
<p>I am trying to create a Python tkinter application where the user can upload an image from file and the image is put through a image segmentation function which outputs an matplotlib plot.</p>
<p>I have the image segmentation function, it takes two parameters: neural network, image file pathway.</p>
<pre><code>from torchvision import models
from PIL import Image
import matplotlib.pyplot as plt
import torch
import torchvision.transforms as T
import numpy as np
fcn = models.segmentation.fcn_resnet101(pretrained=True).eval()
# Define the helper function
def decode_segmap(image, nc=21):
label_colors = np.array([(0, 0, 0), # 0=background
# 1=aeroplane, 2=bicycle, 3=bird, 4=boat, 5=bottle
(128, 0, 0), (0, 128, 0), (128, 128, 0), (0, 0, 128), (128, 0, 128),
# 6=bus, 7=car, 8=cat, 9=chair, 10=cow
(0, 128, 128), (128, 128, 128), (64, 0, 0), (192, 0, 0), (64, 128, 0),
# 11=dining table, 12=dog, 13=horse, 14=motorbike, 15=person
(192, 128, 0), (64, 0, 128), (192, 0, 128), (64, 128, 128), (192, 128, 128),
# 16=potted plant, 17=sheep, 18=sofa, 19=train, 20=tv/monitor
(0, 64, 0), (128, 64, 0), (0, 192, 0), (128, 192, 0), (0, 64, 128)])
r = np.zeros_like(image).astype(np.uint8)
g = np.zeros_like(image).astype(np.uint8)
b = np.zeros_like(image).astype(np.uint8)
for l in range(0, nc):
idx = image == l
r[idx] = label_colors[l, 0]
g[idx] = label_colors[l, 1]
b[idx] = label_colors[l, 2]
rgb = np.stack([r, g, b], axis=2)
return rgb
def segment(net, path):
img = Image.open(path)
plt.imshow(img); plt.axis('off'); plt.show()
# Comment the Resize and CenterCrop for better inference results
trf = T.Compose([T.Resize(256),
T.CenterCrop(224),
T.ToTensor(),
T.Normalize(mean = [0.485, 0.456, 0.406],
std = [0.229, 0.224, 0.225])])
inp = trf(img).unsqueeze(0)
out = net(inp)['out']
om = torch.argmax(out.squeeze(), dim=0).detach().cpu().numpy()
rgb = decode_segmap(om)
plt.imshow(rgb); plt.axis('off'); plt.show()
</code></pre>
<p>I have not made the Tkinter GUI as I am unsure how I can take the image uploaded from file and convert it into a file path (string) and put it through the function. There is only one neural network for now, being <code>fcn</code>.</p>
|
<p>Firstly import <code>filedialog</code> and <code>PIL</code>:</p>
<pre><code>from tkinter import filedialog
from PIL import Image
</code></pre>
<p>Now use a variable path (or anything) to define the path that is returned when you choose inside of a GUI.</p>
<pre><code>path = filedialog.askopenfilename(initialdir='/Downloads', title='Select Photo', filetypes=(('JPEG files', '*.jpg'), ('PNG files', '*.png')))
</code></pre>
<p>You can use any filetype you want by specifying it in the argument there with the title and '*.extension'. <em>NOTE THAT IT IS CASE-SENSITIVE</em>. Now your variable <code>path</code> has the path for the image and all you have to do is open it with PIL using</p>
<pre><code>img = Image.open(path)
</code></pre>
<p>Just like you did in your code</p>
|
python|numpy|tkinter|pytorch
| 1
|
376,229
| 61,802,666
|
Multiplying columns with missing values in Python (pandas)
|
<p>I have a dataset with multiple columns which i need to multiply. One of these columns have missing values in them, what I would like is that when I am multiplying the columns, the missing values are skipped, and the columns which do have values in them are used for the result.</p>
<p>For example,</p>
<pre><code>A B C
1 2 1
2 3 NaN
1 4 NaN
1 1 2
</code></pre>
<p>For each row, i would like the result to be column D with following values:-</p>
<pre><code>2
6
4
2
</code></pre>
<p>I have tried .fillna(), .notnull(), .isnull() and .dropna() but I did not get the desired result.</p>
<p>Thanks in advance</p>
<p>Edit:</p>
<p>I had tried:</p>
<blockquote>
<p>df['D'] = df['A'].fillna()*df['B']*df['C']</p>
<p>df</p>
</blockquote>
|
<p>Here's an example of utilizing <code>.fillna()</code>:</p>
<pre><code>import pandas as pd
import numpy as np
data = pd.DataFrame({"a":[3,6,7],"b":[2,5,7],"c":[5,np.nan,np.nan]})
</code></pre>
<p>A quick look at <code>data</code>:</p>
<pre><code>a b c
3 2 5.0
6 5 NaN
7 7 NaN
</code></pre>
<p>Then utilize <code>.fillna()</code>:</p>
<pre><code>data.fillna(1).prod(axis=1)
</code></pre>
<p>Result:</p>
<pre><code>30.0
30.0
49.0
</code></pre>
<p>I noticed that you used <code>.fillna()</code>. If you could include attempted code, it would help us debug your code for a more precise solution.</p>
|
python|python-3.x|pandas|dataframe
| 2
|
376,230
| 61,917,991
|
Finding highest n values of every column in dataframe
|
<p>I want to find the highest 3 values of each column in a dataframe, and return the index names, ordered by value. The dataframe looks like this:</p>
<pre><code>df = pd.DataFrame({"u1":[1,2,-3,4,5],
"u2":[8,-4,5,6,7],
"u3":[np.NaN,np.NaN,np.NaN,np.NaN,np.NaN]},
index=["q1","q2","q3","q4","q5"])
</code></pre>
<p>The result would look like this:</p>
<pre><code>u1 u2 u3
q5 q1 NaN
q4 q5 NaN
q2 q4 NaN
</code></pre>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>apply</code></a> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.nlargest.html" rel="nofollow noreferrer"><code>pandas.Series.nlargest</code></a> function.</p>
<pre><code>df.apply(lambda x: pd.Series(x.nlargest(3).index))
u1 u2 u3
0 q5 q1 NaN
1 q4 q5 NaN
2 q2 q4 NaN
</code></pre>
|
python|python-3.x|pandas
| 5
|
376,231
| 61,642,363
|
No module named 'torch.autograd'
|
<p>Working with torch package:</p>
<pre><code>import torch
from torch.autograd import Variable
x_data = [1.0,2.0,3.0]
y_data = [2.0,4.0,6.0]
w = Variable(torch.Tensor([1.0]), requires_grad = True)
def forward(x):
return x*w
def loss(x,y):
y_pred = forward(x)
return (y_pred-y)*(y_pred-y)
print("my prediction before training",4,forward(4))
for epoch in range(10):
for x_val, y_val in zip(x_data,y_data):
l= loss(x_val, y_val)
l.backward()
print("\tgrad: ", x_val, y_val, w.grad.data[0])
w.data=w.data-0.01*w.grad.data
w.grad.data.zero_()
print("progress:", epoch, l.data[0] )
print("my new prediction after training ", forward(4))
</code></pre>
<p>Got error:</p>
<pre><code>runfile('C:/gdrive/python/temp2.py', wdir='C:/gdrive/python')
Traceback (most recent call last):
File "C:\gdrive\python\temp2.py", line 11, in <module>
from torch.autograd import Variable
ModuleNotFoundError: No module named 'torch.autograd'
</code></pre>
<p>Command <code>conda list pytorch</code> brings:</p>
<pre><code># packages in environment at C:\Users\g\.conda\envs\test:
#
# Name Version Build Channel
(test) PS C:\gdrive\python>
</code></pre>
<p>How to fix this problem?</p>
|
<p>It seems to me that you have installed pytorch using conda.
Might be you have <strong>torch</strong> named folder in your current directory.
Try changing the directory, or try installing pytorch using pip.
This <a href="https://github.com/pytorch/pytorch/issues/1851" rel="nofollow noreferrer">https://github.com/pytorch/pytorch/issues/1851</a> might help you to solve your problem.</p>
|
python|pytorch|conda
| 1
|
376,232
| 61,774,477
|
plotting points with list logical comparison
|
<p>I have a file containing 6 columns. I want to separate some parts of this file and then plot them so I have read them by numpy and defining empty space to store the points which I needed. To fill the array I have defined a condition then filling array. I faced an error of </p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-118-c74d3cae8a8a> in <module>
23 for i in range(1,len(x)):
24
---> 25 if (near == 0.0 or near>=0.0):
26 xx.append(x[i])
27 yy.append(y[i])
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>the code that I have written is below:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
a = np.loadtxt('file_1001.out')
near = a[-1]
x = a[0]
y = a[1]
#print(most_frequent(near))
xx=[]
yy=[]
for i in range(1,len(x)):
if (near == 0.0 or near>=0.0):
xx.append(x[i])
yy.append(y[i])
print(xx)
print(yy)
</code></pre>
|
<p>Try the following using <code>all()</code> since <code>near</code> seems to be a list:</p>
<pre><code>for i in range(1,len(x)):
if all(ii>=0.0 for ii in near):
xx.append(x[i])
yy.append(y[i])
</code></pre>
|
python|numpy|matplotlib
| 1
|
376,233
| 61,954,571
|
How to add unbalanced List into a dataFrame in Python?
|
<p>Here is My dataframe and List</p>
<pre><code>
X Y Z X1
1 2 3 3
2 7 2 6
3 10 5 4
4 3 7 9
5 3 3 4
list1=[3,5,6]
list2=[4,3,7,4]
</code></pre>
<p>I want to add the lists into a data frame, I have tried some code but it gives an error and something is not working</p>
<pre><code>#Expected Output
X Y Z X1
1 2 3 3
2 7 2 6
3 10 5 4
4 3 7 9
5 3 3 4
3 4
5 3
6 7
4
#here is my code
list1 = [3,5, 6]
df_length = len(df1)
df1.loc[df_length] = list1
</code></pre>
<p>Please help me to solve this problem.
Thanks in advance.</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.append.html" rel="nofollow noreferrer"><code>series.append()</code></a> to create the new series (<code>X</code> & <code>X1</code>), and create the output <code>df</code> using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>pd.concat()</code></a>:</p>
<pre><code>s1 = df.X.append(pd.Series(list1)).reset_index(drop=True)
s2 = df.X1.append(pd.Series(list2)).reset_index(drop=True)
df = pd.concat([s1, df.Y, df.Z, s2], axis=1).rename(columns={0: 'X', 1: 'X1'})
</code></pre>
<hr>
<pre><code>df
X Y Z X1
0 1.0 2.0 3.0 3
1 2.0 7.0 2.0 6
2 3.0 10.0 5.0 4
3 4.0 3.0 7.0 9
4 5.0 3.0 3.0 4
5 3.0 NaN NaN 4
6 5.0 NaN NaN 3
7 6.0 NaN NaN 7
8 NaN NaN NaN 4
</code></pre>
|
python|pandas
| 3
|
376,234
| 61,990,016
|
Efficiently check if an array is jagged
|
<p>I'm looking for an efficient way to check if an array is jagged, where "jagged" means that an element of the array has a different shape from one of it's neighbors in the same dimension.</p>
<p>e.g. <code>[[1, 2], [3, 4, 5]]</code> or <code>[[1, 2], [3, 4], [5, 6], [[7], [8]]]</code></p>
<p>Where I'm using list syntax for convenience, but the arguments may be nested lists or nested numpy arrays. I'm also showing integers for convenience, by the lowest-level components could be anything (e.g. generic objects). Let's say the lowest-level objects are <em>not</em> iterable themselves (e.g. <code>str</code> or <code>dict</code>, but definitely bonus points for a solution that can handle those too!).</p>
<p>Attemp:</p>
<p>Recursively flattening an array is pretty easy, though I'm guessing quite <em>in</em>efficient, and then the length of the flattened array can be compared to the <code>numpy.size</code> of the input array. If they match, then it is <em>not</em> jagged.</p>
<pre><code>def really1d(arr):
# Returns false if the given array is not 1D or is a jagged 1D array.
if np.ndim(arr) != 1:
return False
if len(arr) == 0:
return True
if np.any(np.vectorize(np.ndim)(arr)):
return False
return True
def flatten(arr):
# Convert the given array to 1D (even if jagged)
if (not np.iterable(arr)) or really1d(arr):
return arr
return np.concatenate([flatten(aa) for aa in arr])
def isjagged(arr):
if (np.size(arr) == len(flatten(arr))):
return False
return True
</code></pre>
<p>I'm pretty sure the concatenations are copying all of the data, which is a complete waste. Maybe there is an <code>itertools</code> or <code>numpy.flatiter</code> method of achieving the same goal? Ultimately the flattened array is <em>only</em> being used to find it's length.</p>
|
<p>Perhaps not the most efficient but it works nicely in numpy. <code>and</code> will short circuit as soon as one of the conditions is <code>False</code>. If the first three conditions are <code>True</code>, we have no choice but to iterate through the rows.</p>
<p>Thankfull, <code>all</code> will shortcircuit as soon as one of the iterations is <code>False</code> so it won't check all the rows if it doesn't have to.</p>
<pre class="lang-py prettyprint-override"><code>def jagged(x):
x = np.asarray(x)
return (
x.dtype == "object"
and x.ndim == 1
and isinstance(x[0], list)
and not all(len(row) == len(x[0]) for row in x)
)
</code></pre>
<p>If you wan't to squeeze more efficieny out, it's actually performing the <code>len(x[0])</code> every iteration of the <code>all</code> part but this is probably inconsequential and this is a lot more legible than the alaternative which would have you write out the whole if statement.</p>
|
python|arrays|numpy
| 1
|
376,235
| 61,793,268
|
Pytorch Siamese Network not converging
|
<p>Good morning everyone</p>
<p>Below is my implementation of a pytorch siamese network. I am using 32 batch size, MSE loss and SGD with 0.9 momentum as optimizer.</p>
<pre><code>class SiameseCNN(nn.Module):
def __init__(self):
super(SiameseCNN, self).__init__() # 1, 40, 50
self.convnet = nn.Sequential(nn.Conv2d(1, 8, 7), nn.ReLU(), # 8, 34, 44
nn.Conv2d(8, 16, 5), nn.ReLU(), # 16, 30, 40
nn.MaxPool2d(2, 2), # 16, 15, 20
nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), # 32, 15, 20
nn.Conv2d(32, 64, 3, padding=1), nn.ReLU()) # 64, 15, 20
self.linear1 = nn.Sequential(nn.Linear(64 * 15 * 20, 100), nn.ReLU())
self.linear2 = nn.Sequential(nn.Linear(100, 2), nn.ReLU())
def forward(self, data):
res = []
for j in range(2):
x = self.convnet(data[:, j, :, :])
x = x.view(-1, 64 * 15 * 20)
res.append(self.linear1(x))
fres = abs(res[1] - res[0])
return self.linear2(fres)
</code></pre>
<p>Each batch contains alternating pairs, i.e <code>[pos, pos], [pos, neg], [pos, pos]</code> etc... However, the network doesn't converge, and the problem seems that <code>fres</code> in the network is the same for each pair (regardless of whether it is a positive or negative pair), and the output of <code>self.linear2(fres)</code> is always approximately equal to <code>[0.0531, 0.0770]</code>. This is in contrast with what I am expecting, which is that the first value of <code>[0.0531, 0.0770]</code> would get closer to 1 for a positive pair as the network learns, and the second value would get closer to 1 for a negative pair. These two values also need to sum up to 1.</p>
<p>I have tested exactly the same setup and same input images for a 2 channel network architecture, where, instead of feeding in <code>[pos, pos]</code> you would stack those 2 images in a depth-wise fashion, for example <code>numpy.stack([pos, pos], -1)</code>. The dimension of <code>nn.Conv2d(1, 8, 7)</code> also changes to <code>nn.Conv2d(2, 8, 7)</code> in this setup. This works perfectly fine.</p>
<p>I have also tested exactly the same setup and input images for a traditional CNN approach, where I just pass in single positive and negative grey scale images into the network, instead of stacking them (as with the 2-CH approach) or passing them in as image pairs (as with the Siamese approach). This also works perfectly, but the results are not so good as with the 2 channel approach.</p>
<p>EDIT (Solutions I've tried):</p>
<ul>
<li>I have tried a number of different loss functions, including HingeEmbeddingLoss and CrossEntropyLoss, all resulting in more or less the same problem. So I think it is safe to say that the problem is not caused by the employed loss function; MSELoss.</li>
<li>Different batch sizes also seem to have no effect on the issue.</li>
<li>I tried increasing the number of trainable parameters as suggested in
<a href="https://stackoverflow.com/questions/59442922/keras-model-for-siamese-network-not-learning-and-always-predicting-the-same-oupu?rq=1">Keras Model for Siamese Network not Learning and always predicting the same ouput</a>
Also doesn't work.</li>
<li>Tried to change the network architecture as implemented here: <a href="https://github.com/benmyara/pytorch-examples/blob/master/notebooks/1_NeuralNetworks/9_siamese_nn.ipynb" rel="nofollow noreferrer">https://github.com/benmyara/pytorch-examples/blob/master/notebooks/1_NeuralNetworks/9_siamese_nn.ipynb</a>. In other words, changed the forward pass to the following code. Also changed the loss to CrossEntropy, and the optimizer to Adam. Still no luck:</li>
</ul>
<pre><code>def forward(self, data):
res = []
for j in range(2):
x = self.convnet(data[:, j, :, :])
x = x.view(-1, 64 * 15 * 20)
res.append(x)
fres = self.linear2(self.linear1(abs(res[1] - res[0]))))
return fres
</code></pre>
<ul>
<li>I also tried to change the whole network from a CNN to a linear network as implemented here: <a href="https://github.com/benmyara/pytorch-examples/blob/master/notebooks/1_NeuralNetworks/9_siamese_nn.ipynb" rel="nofollow noreferrer">https://github.com/benmyara/pytorch-examples/blob/master/notebooks/1_NeuralNetworks/9_siamese_nn.ipynb</a>. Still doesn't work.</li>
<li>Tried to use a lot more data as suggested here: <a href="https://stackoverflow.com/questions/59442922/keras-model-for-siamese-network-not-learning-and-always-predicting-the-same-oupu?noredirect=1&lq=1">Keras Model for Siamese Network not Learning and always predicting the same ouput</a>. No luck...</li>
<li>Tried to use <code>torch.nn.PairwiseDistance</code> between the outputs of <code>convnet</code>. Made some sort of improvement; the network starts to converge for the first few epochs, and then hits the same plateau everytime:</li>
</ul>
<pre><code>def forward(self, data):
res = []
for j in range(2):
x = self.convnet(data[:, j, :, :])
res.append(x)
pdist = nn.PairwiseDistance(p=2)
diff = pdist(res[1], res[0])
diff = diff.view(-1, 64 * 15 * 10)
fres = self.linear2(self.linear1(diff))
return fres
</code></pre>
<p>Another thing to note perhaps is that, within the context of my research, a Siamese network is trained for each object. So the first class is associated with the images containing the object in question, and the second class is associated with images containing other objects. Don't know if this might be the cause of the problem. It is however not a problem within the context of the Traditional CNN and 2-Channel CNN approaches.</p>
<p>As per request, here is my training code:</p>
<pre><code>model = SiameseCNN().cuda()
ls_fn = torch.nn.BCELoss()
optim = torch.optim.SGD(model.parameters(), lr=1e-6, momentum=0.9)
epochs = np.arange(100)
eloss = []
for epoch in epochs:
model.train()
train_loss = []
for x_batch, y_batch in dp.train_set:
x_var, y_var = Variable(x_batch.cuda()), Variable(y_batch.cuda())
y_pred = model(x_var)
loss = ls_fn(y_pred, y_var)
train_loss.append(abs(loss.item()))
optim.zero_grad()
loss.backward()
optim.step()
eloss.append(np.mean(train_loss))
print(epoch, np.mean(train_loss))
</code></pre>
<p>Note <code>dp</code> in <code>dp.train_set</code> is a class with attributes <code>train_set, valid_set, test_set</code>, where each set is created as follows:</p>
<pre><code>DataLoader(TensorDataset(torch.Tensor(x), torch.Tensor(y)), batch_size=bs)
</code></pre>
<p>As per request, here is an example of the predicted probabilities vs true label, where you can see the model doesn't seem to be learning:</p>
<pre><code>Predicted: 0.5030623078346252 Label: 1.0
Predicted: 0.5030624270439148 Label: 0.0
Predicted: 0.5030624270439148 Label: 1.0
Predicted: 0.5030625462532043 Label: 0.0
Predicted: 0.5030625462532043 Label: 1.0
Predicted: 0.5030626654624939 Label: 0.0
Predicted: 0.5030626058578491 Label: 1.0
Predicted: 0.5030627250671387 Label: 0.0
Predicted: 0.5030626654624939 Label: 1.0
Predicted: 0.5030627846717834 Label: 0.0
Predicted: 0.5030627250671387 Label: 1.0
Predicted: 0.5030627846717834 Label: 0.0
Predicted: 0.5030627250671387 Label: 1.0
Predicted: 0.5030628442764282 Label: 0.0
Predicted: 0.5030627846717834 Label: 1.0
Predicted: 0.5030628442764282 Label: 0.0
</code></pre>
|
<p>I think that your approach is correct and you are doing things fine. What looks a bit weird to me is the last layer which has a RELU activation. Usually with Siamese networks you want to output a high probability when the two input images belong to the same class and a low probability otherwise. So you can implement this with a single neuron output and a sigmoid activation function.</p>
<p>Therefore I would reimplement your Network as follows:</p>
<pre class="lang-py prettyprint-override"><code>class SiameseCNN(nn.Module):
def __init__(self):
super(SiameseCNN, self).__init__() # 1, 40, 50
self.convnet = nn.Sequential(nn.Conv2d(1, 8, 7), nn.ReLU(), # 8, 34, 44
nn.Conv2d(8, 16, 5), nn.ReLU(), # 16, 30, 40
nn.MaxPool2d(2, 2), # 16, 15, 20
nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), # 32, 15, 20
nn.Conv2d(32, 64, 3, padding=1), nn.ReLU()) # 64, 15, 20
self.linear1 = nn.Sequential(nn.Linear(64 * 15 * 20, 100), nn.ReLU())
self.linear2 = nn.Sequential(nn.Linear(100, 1), nn.Sigmoid())
def forward(self, data):
for j in range(2):
x = self.convnet(data[:, j, :, :])
x = x.view(-1, 64 * 15 * 20)
res.append(self.linear1(x))
fres = res[0].sub(res[1]).pow(2)
return self.linear2(fres)
</code></pre>
<p>Then to be consistent whith training you should use a binary crossentropy:</p>
<pre class="lang-py prettyprint-override"><code>criterion_fn = torch.nn.BCELoss()
</code></pre>
<p>And remember to set labels to 1 wehen both input images belong to the same class.</p>
<p>Also, I recommend you to use a little bit of dropout, around 30% probability of dropping a neuron, after the <code>linear1</code> layer.</p>
|
python|pytorch|convergence|siamese-network
| 1
|
376,236
| 62,022,108
|
Ordering hierarchical data in a pivot-like way
|
<p>I have a hierarchical dataset which needs to be presented in a certain way. Items of the same hierarchy paths needs to be presented in successive order. Further parents should be listed above their children.<br />
Appreciate any guidance to achieve the same..</p>
<p>Thank You</p>
<p>** sample dataset **</p>
<pre><code>level Parent Child
0 z z
1 z o
1 z p
2 p t
2 p q
2 o r
</code></pre>
<p>** what i tried **</p>
<pre><code>df = pd.read_clipboard(sep='\t')
df1=df.pivot(columns='level',values='Child')
df1.fillna('-',inplace=True)
df1
</code></pre>
<p>** my result **</p>
<pre><code>level 0 1 2
0 z - -
1 - o -
2 - p -
3 - - t
4 - - q
5 - - r
</code></pre>
<p>** desired result **</p>
<pre><code>level 0 1 2
0 z - -
1 - o -
2 - - r
3 - p -
4 - - t
5 - - q
</code></pre>
|
<p>As the title states that the question is about hierarchies, the solution is should be found in some network packages like <a href="https://networkx.org/" rel="nofollow noreferrer">networkx</a> or <a href="https://igraph.org/" rel="nofollow noreferrer">igraph</a>. I am not an expert in that tools, so I give an abstract solution to this question.</p>
<p>The writer of the question is asking for an order based on the way to the given item, e.g. going from a parent downwards the stream to the children. So the following workflow is appropriate.</p>
<ol>
<li>find for each item the path from the top level to this item. (example: the last item in the list z -> p -> q)</li>
<li>Order the paths (key from top level to low level)</li>
<li>Apply the order onto the pivoted table</li>
</ol>
|
python|pandas
| 0
|
376,237
| 61,633,602
|
Binary classification using Keras always give wrong predictions: The acc is always 0.5
|
<p>Hi~ I am using Keras to make a simple binary classification. And I am using TF as backend.</p>
<p>I checked:</p>
<ul>
<li>data shuffle: I set the param in model.fit() shuffle = True</li>
<li>network structure: The NN take a vector with 1024 elements and makes a prediction 0 or 1.</li>
</ul>
<p>ENV: tensorflow 1.13.2 Ubuntu 16.04 python3</p>
<p>But the output is still wrong. The acc is always 0.5.</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Input, Flatten, Dense, Lambda, Conv2D, Reshape, MaxPool2D, Average, Dropout, Concatenate, \
Add, Maximum, Layer, Activation, Conv1D, TimeDistributed, GlobalAvgPool2D
import numpy as np
class Test(tf.keras.Model):
def __init__(self,attention_sz,dropout_rt, name=None):
super(Test, self).__init__(name=name)
# here we define the layer:
self.fc = Dense(attention_sz,input_dim = attention_sz ,activation='relu')
self.fc2 = Dense(attention_sz, activation='relu')
self.fc3 = Dense(1, activation='sigmoid')
self.dp = Dropout(dropout_rt,input_shape=(attention_sz,))
self.dp2 = Dropout(dropout_rt,input_shape=(attention_sz,))
def call(self, inp):
# here we get the segmentation and pose
with tf.device('/gpu:0'):
print("~~~~~~~~~~~")
x = self.fc(inp)
print(x.shape)
z = self.dp(x)
print(z.shape)
x = self.fc2(z)
print(x.shape)
z = self.dp2(x)
print(z.shape)
y = self.fc3(z)
print(y.shape)
return y
if __name__ == '__main__':
model = Test(1024, 0.05)
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['accuracy'])
x = np.round(np.random.normal(1.75, 0.2, size=(10000, 1024)), 2)
x2 = np.round(np.random.normal(100.75, 0.2, size=(10000, 1024)), 2)
labels = np.zeros((10000, 1))
labels2 = np.ones((10000, 1))
x_t = np.row_stack((x, x2))
labels = np.row_stack((labels,labels2))
print(x_t.shape)
print(labels.shape)
model.fit(x_t, labels, shuffle=True, epochs=10, batch_size=32)
x = np.round(np.random.normal(1.75, 0.2, size=(1, 1024)), 2)
y = np.round(np.random.normal(100.75, 0.2, size=(1, 1024)), 2)
res = model.predict(x)
print(res)
print(res.shape)
res = model.predict(y)
print(res)
print(res.shape)
</code></pre>
<p>output:</p>
<pre><code>WARNING:tensorflow:From /home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
2020-05-06 19:00:58.440615: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-05-06 19:00:58.616327: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-05-06 19:00:58.617158: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55201b0 executing computations on platform CUDA. Devices:
2020-05-06 19:00:58.617175: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): GeForce RTX 2080, Compute Capability 7.5
2020-05-06 19:00:58.636996: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2592000000 Hz
2020-05-06 19:00:58.637508: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x558add0 executing computations on platform Host. Devices:
2020-05-06 19:00:58.637523: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
2020-05-06 19:00:58.637876: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce RTX 2080 major: 7 minor: 5 memoryClockRate(GHz): 1.095
pciBusID: 0000:01:00.0
totalMemory: 7.77GiB freeMemory: 7.06GiB
2020-05-06 19:00:58.637892: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2020-05-06 19:00:58.639694: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-05-06 19:00:58.639708: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2020-05-06 19:00:58.639713: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2020-05-06 19:00:58.639923: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6868 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080, pci bus id: 0000:01:00.0, compute capability: 7.5)
Epoch 1/10
2020-05-06 19:00:59.495123: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
20000/20000 [==============================] - 3s 148us/sample - loss: 8.0497 - acc: 0.4997
Epoch 2/10
20000/20000 [==============================] - 2s 98us/sample - loss: 8.0590 - acc: 0.5000
Epoch 3/10
20000/20000 [==============================] - 2s 99us/sample - loss: 8.0590 - acc: 0.5000
Epoch 4/10
20000/20000 [==============================] - 2s 80us/sample - loss: 8.0590 - acc: 0.5000
Epoch 5/10
20000/20000 [==============================] - 2s 81us/sample - loss: 8.0590 - acc: 0.5000
Epoch 6/10
20000/20000 [==============================] - 2s 80us/sample - loss: 8.0590 - acc: 0.5000
Epoch 7/10
20000/20000 [==============================] - 2s 89us/sample - loss: 8.0590 - acc: 0.5000
Epoch 8/10
20000/20000 [==============================] - 2s 83us/sample - loss: 8.0590 - acc: 0.5000
Epoch 9/10
20000/20000 [==============================] - 2s 78us/sample - loss: 8.0590 - acc: 0.5000
Epoch 10/10
20000/20000 [==============================] - 2s 79us/sample - loss: 8.0590 - acc: 0.5000
[[0.]]
(1, 1)
[[0.]]
(1, 1)
Process finished with exit code 0
</code></pre>
<p>Thanks in advance!</p>
|
<p>Root-cause of the issue is related to numerical instabilities of sigmoid activation in the final layer of model when used with tensorflow-cpu version.I changed two lines in your code as follows and got the similar as you get with TF1.15. Please check the <a href="https://github.com/jvishnuvardhan/Stackoverflow_Questions/blob/master/39223_cpu.ipynb" rel="nofollow noreferrer">gist here</a>.</p>
<pre><code>self.fc3 = Dense(1) #, activation='sigmoid'
loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
model.compile(optimizer='rmsprop',
loss=loss, #'binary_crossentropy'
metrics=['accuracy'])
</code></pre>
<p>When I used your code as it is with tensorflow-gpu version of TF1.13.2, then I noticed similar results as you have seen with TF1.15. Please note that the cpu and gpu versions uses different libraries for optimum computational time. <a href="https://colab.research.google.com/gist/jvishnuvardhan/dfbb1b550478e5b171e407aec7b32af5/39223_with_gpu.ipynb" rel="nofollow noreferrer">Here</a> is a gist with TF1.13.2-gpu version. Hope it is clear.</p>
|
python|tensorflow|keras|deep-learning
| 1
|
376,238
| 58,151,934
|
Convert an object datatype column with format mm:ss to a time format pandas
|
<p>I have a dataframe that has an column that has an object datatype with the format mm:ss. I want to convert that column to a time format so that I could turn the time into seconds instead of mm:ss. However, I have not been able to convert the column into a time format. </p>
<p>Example of my data:</p>
<pre><code>time
33:22
24:56
30:15
26:57
</code></pre>
<p>I have tried:</p>
<pre><code>df['time'] = pd.to_timedelta(df['time'])
</code></pre>
<p>How do I convert this object data type column to a time format? And ultimately to total seconds?</p>
|
<p>Just add '00:' to the beginning.</p>
<pre><code>df['time'] = pd.to_timedelta('00:' + df['time'])
df['total seconds'] = df['time'].dt.total_seconds()
</code></pre>
|
python|python-3.x|pandas
| 0
|
376,239
| 58,107,652
|
Comparing one dataframe against another with pandas
|
<p>Good Afternoon,</p>
<p>I want to compare dataframe "new" against dataframe "old" to get a new dataframe with data that <em>only</em> exists in "new" but <em>not</em> old. For example</p>
<pre><code>New Old Desired Output
--- --- --------------
1 1 4
3 2 7
4 3
5 5
7 8
8 9
9 0
</code></pre>
<p>What I did at first (forgive me, I'm new to this) was:</p>
<pre><code>df = pd.concat([new, old])
final = pd.DataFrame(df.iloc[:,0].unique())
</code></pre>
<p>What I failed to realize, of course, is that there are values in 'old' that may not be in 'new' and per my code, those values would also show up in 'final' - which I don't want.</p>
<p>If anyone can point me in the right direction, any help is always appreciated!</p>
|
<p>Utilizing <code>set()</code> here will help with providing values in <code>New</code> and not in <code>Old</code>. Then filter based on the resulting list.</p>
<pre><code>df1 = pd.DataFrame(data=[1,2,3,4,5,7,8,9], columns=['New'])
df2 = pd.DataFrame(data=[1,2,3,5,8,9,0], columns=['Old'])
df1_unique = set(df1['New']) - set(df2['Old'])
final_df= df1[df1['New'].isin(df1_unique)]
final_df.rename(columns = {'New' : 'Desired Output'}, inplace=True)
print(final_df)
</code></pre>
|
python|pandas|dataframe
| 1
|
376,240
| 57,955,558
|
Python Pandas : Unable to return dictionary in two different columns based on groupby
|
<p>I have dataframe which is like below,</p>
<pre><code>df1:
mac gw_mac building rssi type payload
0 0010403bf0db b827eb36fb0b main -45 iBeacon e2c56db5dffb48d2b060d0f5a71096e0
1 0010403bf0db d827fc36gc0c main -67 other 02010612ff590080bc2c01001d0b3a00000005000000
2 bf0db0010403 b827eb36fb0b main -71 iBeacon e2c56db5dffb48d2b060d0f5a71096e0
3 bf0db0010403 d827fc36gc0c main -59 other 02010612ff590080bc2c01001d0b3a00000005000000
</code></pre>
<p>Based on the group of "mac" & "building", the column values of "gw_mac" and "rssi" has to be framed as dictionary in the name of "gw_mac_rssi" column.</p>
<p>Similarly, Based on the same grouping condition which is mentioned above, the column values of "payload" and "type" has to be framed as dictionary in the name of "payload_type" and the resultant dataframe is suppose to be,</p>
<pre><code>df2:
mac building gw_mac_rssi payload_type
0 0010403bf0db main {'b827eb36fb0b':-45,'d827fc36gc0c':-67} {'e2c56db5dffb48d2b060d0f5a71096e0':'iBeacon','02010612ff590080bc2c01001d0b3a00000005000000':'other'}
1 bf0db0010403 main {'b827eb36fb0b':-71,'d827fc36gc0c':-59} {'e2c56db5dffb48d2b060d0f5a71096e0':'iBeacon','02010612ff590080bc2c01001d0b3a00000005000000':'other'}
</code></pre>
<p>I have tried with</p>
<pre><code>df.groupby(['mac', 'building']) \
.apply(lambda x: x.set_index('edge_mac_gw_mac_rssi')['rssi'].to_dict()).apply(lambda x: x.set_index('type')['payload'].to_dict()).reset_index(name=["gw_mac_rssi","payload_type"])
</code></pre>
<p>Can I anyone please help me out in framing two different dictionaries based on same grouping condition with multiple column values?</p>
|
<p>First let's see how to add one column of dictionary from groupby object:</p>
<pre><code>df.groupby(['mac','building']).apply(lambda x: dict(zip(x['gw_mac'],x['rssi'])))
</code></pre>
<p>Then for two columns simultaneously generated, we need to return <code>pandas.Series</code> from the lambda function, then it becomes:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby(['mac','building']).apply(lambda x: pd.Series([dict(zip(x['gw_mac'],x['rssi'])),
dict(zip(x['payload'],x['type']))],index=['gw_mac_rssi','payload_type']))
</code></pre>
<p>Should generate the desired result, I didn't play with your input though, used simple input and worked.</p>
|
python-3.x|pandas|dictionary|pandas-groupby
| 3
|
376,241
| 58,067,594
|
Getting the image name using autoencoder on tensorflow
|
<p>I'm using this tensorflow image search script:
<a href="https://www.kaggle.com/jonmarty/using-autoencoder-to-search-images" rel="nofollow noreferrer">https://www.kaggle.com/jonmarty/using-autoencoder-to-search-images</a></p>
<pre><code>def search(image):
hidden_states = [sess.run(hidden_state(X, mask, W, b),
feed_dict={X: im.reshape(1, pixels), mask:
np.random.binomial(1, 1-corruption_level, (1, pixels))})
for im in image_set]
query = sess.run(hidden_state(X, mask, W, b),
feed_dict={X: image.reshape(1,pixels), mask: np.random.binomial(1, 1-corruption_level, (1, pixels))})
starting_state = int(np.random.random()*len(hidden_states)) #choose random starting state
best_states = [imported_images[starting_state]]
distance = euclidean_distance(query[0], hidden_states[starting_state][0]) #Calculate similarity between hidden states
for i in range(len(hidden_states)):
dist = euclidean_distance(query[0], hidden_states[i][0])
if dist <= distance:
distance = dist #as the method progresses, it gets better at identifying similiar images
best_states.append(imported_images[i])
if len(best_states)>0:
return best_states
else:
return best_states[len(best_states)-101:]
</code></pre>
<p>I'm wondering about if it's possible to know the image name (e.g: homer.jpg). I'm lost and I don't know what should I add in the code to know that.. That's the part of the script where I print the results:</p>
<pre><code> print(len(results))
slots = 0
plt.figure(figsize = (125,125))
for im in results[::-1]: #reads through results backwards (more similiar images first)
plt.subplot(10, 10, slots+1)
plt.imshow(cv2.cvtColor(im, cv2.COLOR_BGR2RGB)); plt.axis('off')
slots += 1
</code></pre>
<p>Thank you a lot! :)</p>
|
<p>You need to modify the search function.</p>
<p>Specifically, look at the line:</p>
<pre><code>best_states.append(imported_images[i])
</code></pre>
<p>If you want to map between the images returned and the filenames, you need to record and return that index, <code>i</code>. Consider adding a <code>best_states_index</code> variable and returning both or simply replacing <code>imported_images[i]</code> with <code>i</code> and using that to access both the filename and the image data.</p>
<p>More explicitly:</p>
<pre><code>def search(image):
hidden_states = [sess.run(hidden_state(X, mask, W, b),
feed_dict={X: im.reshape(1, pixels), mask:
np.random.binomial(1, 1-corruption_level, (1, pixels))})
for im in image_set]
query = sess.run(hidden_state(X, mask, W, b),
feed_dict={X: image.reshape(1,pixels), mask: np.random.binomial(1, 1-corruption_level, (1, pixels))})
starting_state = int(np.random.random()*len(hidden_states)) #choose random starting state
best_states = [imported_images[starting_state]]
best_states_index = [starting_state]
distance = euclidean_distance(query[0], hidden_states[starting_state][0]) #Calculate similarity between hidden states
for i in range(len(hidden_states)):
dist = euclidean_distance(query[0], hidden_states[i][0])
if dist <= distance:
distance = dist #as the method progresses, it gets better at identifying similiar images
best_states.append(imported_images[i])
best_states_index.append(i)
if len(best_states)>0:
return best_states, best_states_index
else:
return best_states[len(best_states)-101:], best_states_index[len(best_states)-101:]
</code></pre>
|
python|tensorflow
| 0
|
376,242
| 57,797,021
|
I have question about group by how to use it?
|
<p>so, how to split it
as I want to know the group by and by split </p>
|
<p>You'll need to 'chunk' it into groups of 30. To do this, you can use the // operator on the index, which divides by 30 and rounds down to the nearest whole number.</p>
<p>Using 'unstack()' at the end will reshape the dataframe into the format you want. </p>
<pre><code>df.groupby([df.index // 30,'sex']).sum().unstack(level=0)
</code></pre>
|
python|python-3.x|pandas|pandas-groupby|sklearn-pandas
| 0
|
376,243
| 58,099,214
|
bokeh: How to edit a df or CDS-object through box_select?
|
<p>I'm trying to label a pandas-df (containing timeseries data) with the help of
a bokeh-lineplot, box_select tool and a TextInput widget in a jupyter-notebook. How can I access the by the box_select selected data points?</p>
<p>I tried to edit a similar problems code (<a href="https://stackoverflow.com/questions/34164587/get-selected-data-contained-within-box-select-tool-in-bokeh?rq=1">Get selected data contained within box select tool in Bokeh</a>) by changing the CustomJS to something like:</p>
<pre><code>source.callback = CustomJS(args=dict(p=p), code="""
var inds = cb_obj.get('selected')['1d'].indices;
[source.data['xvals'][i] for i in inds] = 'b'
"""
)
</code></pre>
<p>but couldn't apply a change on the source of the selected points.</p>
<p>So the shortterm goal is to manipulate a specific column of source of the selected points.</p>
<p>Longterm I want to use a TextInput widget to label the selected points by the supplied Textinput. That would look like:</p>
<p><a href="https://i.stack.imgur.com/RXydF.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RXydF.jpg" alt="enter image description here"></a></p>
<p><strong>EDIT:</strong></p>
<p>That's the current code I'm trying in the notebook, to reconstruct the issue:</p>
<pre><code>from random import random
import bokeh as bk
from bokeh.layouts import row
from bokeh.models import CustomJS, ColumnDataSource, HoverTool
from bokeh.plotting import figure, output_file, show, output_notebook
output_notebook()
x = [random() for x in range(20)]
y = [random() for y in range(20)]
hovertool=HoverTool(tooltips=[("Index", "$index"), ("Label", "@label")])
source = ColumnDataSource(data=dict(x=x, y=y, label=[i for i in "a"*20]))
p1 = figure(plot_width=400, plot_height=400, tools="box_select", title="Select Here")
p1.circle('x', 'y', source=source, alpha=0.6)
p1.add_tools(hovertool)
source.selected.js_on_change('indices', CustomJS(args=dict(source=source), code="""
var inds = cb_obj.indices;
for (var i = 0; i < inds.length; i++) {
source.data['label'][inds[i]] = 'b'
}
source.change.emit();
""")
)
layout = row(p1)
show(layout)
</code></pre>
|
<p>The main thing to note is that BokehJS can only <em>automatically</em> notice updates when actual assignments are made, e.g. </p>
<pre><code>source.data = some_new_data
</code></pre>
<p>That would trigger an update. If you update the data "in place" then BokehJS is not able to notice that. You will have to be explicit and call <code>source.change.emit()</code> to let BokehJS know something has been updated. </p>
<p>However, you should also know that you are using three different things that are long-deprecated and will be removed in the release after next. </p>
<ul>
<li><p><code>cb_obj.get('selected')</code></p>
<p>There is no need to ever use <code>.get</code> You can just access properties directly:</p>
<pre><code>cb_obj.selected
</code></pre></li>
<li><p>The <code>['1d']</code> syntax. This dict approach was very clumsy and will be removed very soon. For most selections you want the <code>indices</code> property of the selection:</p>
<pre><code>source.selected.indices
</code></pre></li>
<li><p><code>source.callback</code></p>
<p>This is an ancient ad-hoc callback. There is a newer general mechanism for callbacks on properties that should always be used instead</p>
<pre><code>source.selected.js_on_change('indices', CustomJS(...))
</code></pre>
<p>Note that in this case, the <code>cb_obj</code> is the selection, not the data source.</p></li>
</ul>
|
python|bokeh|pandas-bokeh
| 1
|
376,244
| 57,975,173
|
How to extract values from json-like text
|
<p>I want to extract values from json-like text which look like:</p>
<pre><code>df.head()
budget genres homepage id keywords original_language original_title overview popularity production_companies ... runtime spoken_languages status tagline title vote_average vote_count movie cast crew
0 237000000 [{"id": 28, "name": "Action"}, {"id": 12, "nam... http://www.avatarmovie.com/ 19995 [{"id": 1463, "name": "culture clash"}, {"id":... en Avatar In the 22nd century, a paraplegic Marine is di... 150.437577 [{"name": "Ingenious Film Partners", "id": 289... ... 162.0 [{"iso_639_1": "en", "name": "English"}, {"iso... Released Enter the World of Pandora. Avatar 7.2 11800 Avatar [{"cast_id": 242, "character": "Jake Sully", "... [{"credit_id": "52fe48009251416c750aca23", "de...
1 300000000 [{"id": 12, "name": "Adventure"}, {"id": 14, "... http://disney.go.com/disneypictures/pirates/ 285 [{"id": 270, "name": "ocean"}, {"id": 726, "na... en Pirates of the Caribbean: At World's End Captain Barbossa, long believed to be dead, ha... 139.082615 [{"name": "Walt Disney Pictures", "id": 2}, {"... ... 169.0 [{"iso_639_1": "en", "name": "English"}] Released At the end of the world, the adventure begins. Pirates of the Caribbean: At World's End 6.9 4500 Pirates of the Caribbean: At World's End [{"cast_id": 4, "character": "Captain Jack Spa... [{"credit_id": "52fe4232c3a36847f800b579", "de...
2 245000000 [{"id": 28, "name": "Action"}, {"id": 12, "nam... http://www.sonypictures.com/movies/spectre/ 206647 [{"id": 470, "name": "spy"}, {"id": 818, "name... en Spectre A cryptic message from Bond’s past sends him o...
</code></pre>
<p>I've tried:</p>
<pre><code># Parse the stringified features into their corresponding python objects
from ast import literal_eval
features = ['cast', 'crew', 'keywords', 'genres', 'original_language']
for feature in features:
df[feature] = df[feature].apply(literal_eval)
</code></pre>
<p>...which raises:</p>
<blockquote>
<p>ValueError: malformed node or string: <_ast.Name object at
0x7f5c5a523358></p>
</blockquote>
<p>Help would be appreacited.</p>
|
<p>I think problem is with bad values, one possible solution is create custom function with <code>try-except</code> statement:</p>
<pre><code>df = pd.DataFrame({'genres':['[{"id": 28, "name": "Action"}]',
'[{"id": 28, "name": "Action"}, {"id": 12, "n]']})
print (df)
genres
0 [{"id": 28, "name": "Action"}]
1 [{"id": 28, "name": "Action"}, {"id": 12, "n]
</code></pre>
<hr>
<pre><code>from ast import literal_eval
def literal_eval_cust(x):
try:
return literal_eval(x)
except Exception:
return {}
features = ['genres']
for feature in features:
df[feature] = df[feature].apply(literal_eval_cust)
print (df)
genres
0 [{'id': 28, 'name': 'Action'}]
1 {}
</code></pre>
|
python|json|pandas
| 2
|
376,245
| 58,013,898
|
Problem with Keras LSTM input_shape: expected lstm_1_input to have shape (500, 2) but got array with shape (500, 5)
|
<p><code>x_train</code> and <code>y_train</code> are input and output of my model with shapes of <code>(6508, 500, 5), (6508, 5)</code> respectively.</p>
<p>And the model is like this:</p>
<pre class="lang-py prettyprint-override"><code>model = Sequential()
model.add(LSTM(units=96, return_sequences=True, input_shape=x_train.shape[1:]))
model.add(Dropout(0.2))
model.add(LSTM(units=96, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=96))
model.add(Dropout(0.2))
model.add(Dense(units=5))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mse'])
model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size)
</code></pre>
<p>Model Summary:</p>
<pre><code>Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_1 (LSTM) (None, 500, 96) 39168
_________________________________________________________________
dropout_1 (Dropout) (None, 500, 96) 0
_________________________________________________________________
lstm_2 (LSTM) (None, 500, 96) 74112
_________________________________________________________________
dropout_2 (Dropout) (None, 500, 96) 0
_________________________________________________________________
lstm_3 (LSTM) (None, 96) 74112
_________________________________________________________________
dropout_3 (Dropout) (None, 96) 0
_________________________________________________________________
dense_1 (Dense) (None, 5) 485
=================================================================
Total params: 187,877
Trainable params: 187,877
Non-trainable params: 0
</code></pre>
<p>The problem is <code>lstm_1</code> requires input_shape (500, 2) and my data shape is (500, 5):</p>
<pre><code>ValueError: Error when checking input: expected lstm_1_input to have shape (500, 2) but got array with shape (500, 5)
</code></pre>
<p>And I print layers' shape:</p>
<pre class="lang-py prettyprint-override"><code>for layer in model.layers:
print(layer.input_shape, end='\t')
# (None, 500, 5) (None, 500, 96) (None, 500, 96) (None, 500, 96) (None, 500, 96) (None, 96) (None, 96)
</code></pre>
<p>It prints <code>(None, 500, 5)</code> for <code>lstm_1</code> so I can't figure out the problem.</p>
<pre><code>Keras==2.3.0
tf==1.14.0
</code></pre>
<p><strong>UPDATE:</strong></p>
<p>Using <code>keras==2.2.5</code> or <code>tf.keras</code> solves the problem.</p>
|
<p>Mentioning the Solution in Answer Section for the Benefit of the Community.</p>
<p>Using <code>tf.keras</code> instead of <code>keras</code> has resolved the problem.</p>
|
python|tensorflow|keras|deep-learning|lstm
| 0
|
376,246
| 57,964,805
|
Offset groupby difference by one row
|
<p>I have a dataframe that looks like this:</p>
<pre><code>first client last_visit theme_type days_borrowed
----------------------------------------------------------
Y A 4/23/2019 Candy 0
N A 5/5/2019 Jewel 12
N A 5/8/2019 Chocolate 3
N A 6/2/2019 Candy 25
N A 6/12/2019 Rock 10
Y B 3/5/2019 Chocolate 0
N B 3/5/2019 Rock 0
Y C 2/6/2019 Rock 0
Y D 1/30/2019 Jewel 0
N D 2/4/2019 Rock 5
N D 2/8/2019 Candy 4
</code></pre>
<p>The days_borrowed column is calculated by:</p>
<pre><code>df['days_borrowed'] = df.groupby('client')['last_visit'].diff().dt.days.fillna(0)
</code></pre>
<p>However, I need it to actually take the difference in reverse, if that makes sense, since the # days borrowed is actually for the prior theme, not the current theme. The last theme selection should calculate to the difference of the last_visit and a static date (e.g., 7/31/2019). </p>
<p>Thus, the desired output looks like this:</p>
<pre><code>first client last_visit theme_type days_borrowed
----------------------------------------------------------
Y A 4/23/2019 Candy 12
N A 5/5/2019 Jewel 3
N A 5/8/2019 Chocolate 25
N A 6/2/2019 Candy 10
N A 6/12/2019 Rock 49
Y B 3/5/2019 Chocolate 0
N B 3/5/2019 Rock 148
Y C 2/6/2019 Rock 175
Y D 1/30/2019 Jewel 5
N D 2/4/2019 Rock 4
N D 2/8/2019 Candy 173
</code></pre>
<p>Where the 49, 148, 175, and 173 were calculated by taking the difference from last_visit and the fixed date of 7/31/2019.</p>
<p>So I was wondering if was possible to:</p>
<p>1) Offset the difference calculation by 1, and </p>
<p>2) For the last occurrence for each client, to have it take the difference between last_visit and a fixed date (7/31/2019)?</p>
<p>Any help would be greatly appreciated! Thank you!</p>
|
<ol>
<li>Use <code>-1</code> for the <code>periods</code> argument of <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.diff.html" rel="nofollow noreferrer"><code>diff</code></a> then take the absolute value.</li>
<li><code>fillna</code> with your desired calculation.</li>
</ol>
<h3>Code:</h3>
<pre><code>import pandas as pd
#df['last_visit'] = pd.to_datetime(df.last_visit)
df['days_borrowed'] = (df.groupby('client')['last_visit']
.diff(-1).dt.days.abs()
.fillna((pd.to_datetime('2019-07-31')-df['last_visit']).dt.days))
</code></pre>
<h3>Output: <code>df</code></h3>
<pre><code> first client last_visit theme_type days_borrowed
0 Y A 2019-04-23 Candy 12.0
1 N A 2019-05-05 Jewel 3.0
2 N A 2019-05-08 Chocolate 25.0
3 N A 2019-06-02 Candy 10.0
4 N A 2019-06-12 Rock 49.0
5 Y B 2019-03-05 Chocolate 0.0
6 N B 2019-03-05 Rock 148.0
7 Y C 2019-02-06 Rock 175.0
8 Y D 2019-01-30 Jewel 5.0
9 N D 2019-02-04 Rock 4.0
10 N D 2019-02-08 Candy 173.0
</code></pre>
|
python|python-3.x|pandas|pandas-groupby
| 2
|
376,247
| 57,890,719
|
Merge similar strings together in pandas column
|
<p>I have pandas crosstab dataframe which looks like this:<a href="https://i.stack.imgur.com/Jl6MP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Jl6MP.png" alt="enter image description here"></a></p>
<p>This is a small sample of the whole dataframe. As you can see, sku1_entity has some strings like 4 Cheese W Verm, 4 Cheese w Verm, 4Cheese w Verm and similarly there are more such cases in the whole dataframe. Correspondingly we have 0.0 and 1.0 values against each row. I want to merge these similar strings(maybe based on similarity score) and club the corresponding 0.0 and 1.0 values. </p>
<p>So the output for 0.0 and 1.0 would be like (for 4 Cheese W Verm):</p>
<p>0.0 = 6 +55 + 3 = 64
1.0 = 6 + 60 + 4 = 70 </p>
<p>As I'm a beginner, Please help me out how we can achieve this.</p>
|
<p>Not a universal solution but it should give you an idea how you could tackle it: use some function to 'normalize' your <code>sku1_entity</code> column and group on these normalized values like that:</p>
<pre><code>df = pd.DataFrame( {'sku1_entity': ['4 Cheese W Verm','4 Cheese w Verm','4Cheese w Verm', 'something else'], '0.0': [6,55,3,1], '1.0': [0,5,1,0]})
df = df.set_index('sku1_entity')
df['All'] = df['0.0'] + df['1.0']
def grouper(x):
return ''.join(x.lower().split())
df.groupby(grouper).sum()
</code></pre>
<p>Result:</p>
<pre><code> 0.0 1.0 All
4cheesewverm 64 6 70
somethingelse 1 0 1
</code></pre>
<p>As an alternative you could of course 'normalize' the column before creating the pivot table in the first place.
<hr>
If you want to retain the original <code>sku1_entity</code> names, you could do something like that:</p>
<pre><code>df = pd.DataFrame( {'sku1_entity': ['4 Cheese W Verm','4 Cheese w Verm','4Cheese w Verm', 'something else'], '0.0': [6,55,3,1], '1.0': [0,5,1,0]})
df['sku1_entity_norm'] = df['sku1_entity'].str.lower().str.split().map(''.join)
df.groupby('sku1_entity_norm').agg({'sku1_entity': list, '0.0': sum, '1.0': sum})
</code></pre>
<p>Result:</p>
<pre><code> sku1_entity 0.0 1.0
sku1_entity_norm
4cheesewverm [4 Cheese W Verm, 4 Cheese w Verm, 4Cheese w Verm] 64 6
somethingelse [something else] 1 0
</code></pre>
|
python|string|pandas|dataframe|fuzzywuzzy
| 0
|
376,248
| 58,075,544
|
Groupby for selecting multiple columns Pandas python
|
<p>I have a table pandas dataframe df with 3 columns lets say:</p>
<pre><code>[IN]:df
[OUT]:
Tree Name Planted by Govt Planted by College
A Yes No
B Yes No
C Yes No
C Yes No
A No No
B No Yes
B Yes Yes
B Yes No
B Yes No
</code></pre>
<p><strong>Query:</strong></p>
<p>How many trees were planted by govt and not by college for each type of tree. Govt: Yes, Pvt: No</p>
<p><strong>Output needed:</strong></p>
<pre><code>1 Tree(s) 'A' were planted by govt and not by college
3 Tree(s) 'B' were planted by govt and not by college
2 Tree(s) 'C' were planted by govt and not by college
</code></pre>
<p>Can anyone please help</p>
|
<p>First create boolean mask by compare both column chained with <code>&</code> for bitwise <code>AND</code> and then convert to numeric with aggregate <code>sum</code>:</p>
<pre><code>s = df['Planted by Govt'].eq('Yes') & df['Planted by College'].eq('No')
out = s.view('i1').groupby(df['Tree Name']).sum()
#alternative
#out = s.astype(int).groupby(df['Tree Name']).sum()
print (out)
Tree Name
A 1
B 3
C 2
dtype: int8
</code></pre>
<p>Last for custom output use <code>f-string</code>s:</p>
<pre><code>for k, v in out.items():
print (f"{v} Tree(s) {k} were planted by govt and not by college")
1 Tree(s) A were planted by govt and not by college
3 Tree(s) B were planted by govt and not by college
2 Tree(s) C were planted by govt and not by college
</code></pre>
<p>Another idea is create new column to original:</p>
<pre><code>df['new'] = (df['Planted by Govt'].eq('Yes') & df['Planted by College'].eq('No')).view('i1')
print (df)
Tree Name Planted by Govt Planted by College new
0 A Yes No 1
1 B Yes No 1
2 C Yes No 1
3 C Yes No 1
4 A No No 0
5 B No Yes 0
6 B Yes Yes 0
7 B Yes No 1
8 B Yes No 1
out = df.groupby('Tree Name')['new'].sum()
print (out)
Tree Name
A 1
B 3
C 2
Name: new, dtype: int8
</code></pre>
|
python|pandas|group-by
| 1
|
376,249
| 57,989,716
|
Loading .npy files as dataset for pytorch
|
<p>I have preprocessed data in .npy files, let's call it X.npy for raw data and Y.npy for labels. They're organized to match every element from both files (first element from X has first label from Y etc.). How can I load it as dataset using <code>torch.utils.data.DataLoader</code>? I'm very new to pytorch, and any help will be useful.</p>
|
<p>You could also use DatasetFolder, which basically is the underlying class of ImageFolder. Using this class you can provide your own file extensions and loader to load the samples.</p>
<pre class="lang-py prettyprint-override"><code>def npy_loader(path):
return torch.from_numpy(np.load(path))
</code></pre>
|
python|numpy|serialization|pytorch
| 5
|
376,250
| 57,838,944
|
Read Excel file with blank cells as Pandas dataframe with multiindex
|
<p>Suppose there is a Excel file:</p>
<p><a href="https://i.stack.imgur.com/3DnEB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3DnEB.png" alt="enter image description here"></a></p>
<p>Is there a way to read it directly as a Pandas dataframe with multiindex, without filling blank spaces in the first column?</p>
|
<h2>Data:</h2>
<p><a href="https://i.stack.imgur.com/GeAjv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GeAjv.png" alt="enter image description here"></a></p>
<h2>Code:</h2>
<pre><code>df = pd.read_excel('test.xlsx')
</code></pre>
<p><a href="https://i.stack.imgur.com/wfgpm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wfgpm.png" alt="enter image description here"></a></p>
<h3><code>.ffill()</code>:</h3>
<pre><code>df.i0.ffill(inplace=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/a9orz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a9orz.png" alt="enter image description here"></a></p>
<h3><code>set_index()</code>:</h3>
<pre><code>df.set_index(['i0', 'i1'], inplace=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/Y3W3s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y3W3s.png" alt="enter image description here"></a></p>
|
python|pandas
| 2
|
376,251
| 57,836,849
|
Tensorflow.Keras: Custom Constraint Not Working
|
<p>Im trying to implement the Weights Orthogonality Constraint showed <a href="https://towardsdatascience.com/build-the-right-autoencoder-tune-and-optimize-using-pca-principles-part-ii-24b9cca69bd6" rel="nofollow noreferrer">here</a>, in section 2.0. when i try to use it on a Keras Dense Layer, An Value Error is raised.</p>
<p>This is happening too when trying to implement the Custom Uncorrelated Features Constraint in the part 3.0 of the same Article.</p>
<pre><code>import tensorflow as tf
import numpy as np
class WeightsOrthogonalityConstraint(tf.keras.constraints.Constraint):
def __init__(self, encoding_dim, weightage = 1.0, axis = 0):
self.encoding_dim = encoding_dim
self.weightage = weightage
self.axis = axis
def weights_orthogonality(self, w):
if(self.axis==1):
w = tf.keras.backend.transpose(w)
if(self.encoding_dim > 1):
m = tf.keras.backend.dot(tf.keras.backend.transpose(w), w) - tf.keras.backend.eye(self.encoding_dim)
return self.weightage * tf.keras.backend.sqrt(tf.keras.backend.sum(tf.keras.backend.square(m)))
else:
m = tf.keras.backend.sum(w ** 2) - 1.
return m
def __call__(self, w):
return self.weights_orthogonality(w)
rand_samples = np.random.rand(16, 4)
dummy_ds = tf.data.Dataset.from_tensor_slices((rand_samples, rand_samples)).shuffle(16).batch(16)
encoder = tf.keras.layers.Dense(2, "relu", input_shape=(4,), kernel_regularizer=WeightsOrthogonalityConstraint(2))
decoder = tf.keras.layers.Dense(4, "relu")
autoencoder = tf.keras.models.Sequential()
autoencoder.add(encoder)
autoencoder.add(decoder)
autoencoder.compile(metrics=['accuracy'],
loss='mean_squared_error',
optimizer='sgd')
autoencoder.summary()
autoencoder.fit(dummy_ds, epochs=1)
</code></pre>
<p>If i stop using the contraint, there is no errors, but when used, the next error is raised:</p>
<pre><code>2019-09-07 14:20:25.962610: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library nvcuda.dll
2019-09-07 14:20:26.997957: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.733
pciBusID: 0000:01:00.0
2019-09-07 14:20:27.043016: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-09-07 14:20:27.050749: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-09-07 14:20:27.081369: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-09-07 14:20:27.113598: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1060 major: 6 minor: 1 memoryClockRate(GHz): 1.733
pciBusID: 0000:01:00.0
2019-09-07 14:20:27.144194: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-09-07 14:20:27.151802: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-09-07 14:20:27.800616: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-09-07 14:20:27.817323: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0
2019-09-07 14:20:27.840635: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N
2019-09-07 14:20:27.848536: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4712 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
Traceback (most recent call last):
File "c:\Users\whitm\.vscode\extensions\ms-python.python-2019.9.34911\pythonFiles\ptvsd_launcher.py", line 43, in <module>
main(ptvsdArgs)
File "c:\Users\whitm\.vscode\extensions\ms-python.python-2019.9.34911\pythonFiles\lib\python\ptvsd\__main__.py", line 432, in main
run()
File "c:\Users\whitm\.vscode\extensions\ms-python.python-2019.9.34911\pythonFiles\lib\python\ptvsd\__main__.py", line 316, in run_file
runpy.run_path(target, run_name='__main__')
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "c:\Users\whitm\Desktop\CodeProjects\ForestClassifier-DEC\Test.py", line 35, in <module>
optimizer='sgd')
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\training\tracking\base.py", line 458, in _method_wrapper
result = method(self, *args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py", line 337, in compile
self._compile_weights_loss_and_weighted_metrics()
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\training\tracking\base.py", line 458, in _method_wrapper
result = method(self, *args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1494, in _compile_weights_loss_and_weighted_metrics
self.total_loss = self._prepare_total_loss(masks)
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1601, in _prepare_total_loss
custom_losses = self.get_losses_for(None) + self.get_losses_for(
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1209, in get_losses_for
return [l for l in self.losses if l._unconditional_loss]
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 835, in losses
return collected_losses + self._gather_children_attribute('losses')
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 2129, in _gather_children_attribute
getattr(layer, attribute) for layer in nested_layers))
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 2129, in <genexpr>
getattr(layer, attribute) for layer in nested_layers))
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 832, in losses
loss_tensor = regularizer()
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 907, in _tag_unconditional
loss = loss()
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1659, in _loss_for_variable
regularization = regularizer(v)
File "c:\Users\whitm\Desktop\CodeProjects\ForestClassifier-DEC\Test.py", line 21, in __call__
return self.weights_orthogonality(w)
File "c:\Users\whitm\Desktop\CodeProjects\ForestClassifier-DEC\Test.py", line 14, in weights_orthogonality
m = tf.keras.backend.dot(tf.keras.backend.transpose(w), w) - tf.keras.backend.eye(self.encoding_dim)
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\backend.py", line 1310, in eye
return variable(linalg_ops.eye(size, dtype=tf_dtype), dtype, name)
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\keras\backend.py", line 785, in variable
constraint=constraint)
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\ops\variables.py", line 264, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 464, in __init__
shape=shape)
File "C:\ProgramData\Anaconda3\envs\tensorflow-gpu\lib\site-packages\tensorflow\python\ops\resource_variable_ops.py", line 550, in _init_from_args
raise ValueError("Tensor-typed variable initializers must either be "
ValueError: Tensor-typed variable initializers must either be wrapped in an init_scope or callable (e.g., `tf.Variable(lambda : tf.truncated_normal([10, 40]))`) when building functions. Please file a feature request if this restriction inconveniences you.
</code></pre>
<p>Thanks in advance!</p>
<p>PD: <a href="https://colab.research.google.com/drive/19jTi5jRaDKFey0QZ1FQgOUqiz3UqS3xm" rel="nofollow noreferrer">Here</a> is a Colab Notebook showing the error</p>
<p>PD2: I anage to found the line causing the problem, is this one:</p>
<pre><code>m = tf.keras.backend.dot(tf.keras.backend.transpose(w), w) - tf.keras.backend.eye(self.encoding_dim)
</code></pre>
<p>specifically the keras backend eye() function is causing the problem</p>
|
<p>I manage to solve this problem:</p>
<p>the function causing the error was tf.keras.backed.eye() on line 14. I read out there that the implementation in the keras backend of this function use numpy array for the identity matrix, but tensorflow and other backends already have their impementation for this function using tensors. There is something with the lack of tensors causing the error on tf2.0, so just change tf.keras.backed.eye() to tf.eye() solve the problem.</p>
|
python|python-3.x|tensorflow|tf.keras
| 3
|
376,252
| 58,050,020
|
Is there a handy way to dump the running_stats for a pytorch model?
|
<p>I'm writing a C version of the pytorch model to run it on my special hardware.
Everything looks ok so far, except the running_mean and running_var in every batchnorm layer.</p>
<p>We have a python code to dump all named_parameters, but nothing to do for the running_stats, although we need to use it in the forwarding computation.</p>
<p>So is there a way to dump it with sort of built-in features?
I searched pytorch doc, no help on my task.
Otherwise I might need to write a regexp code to recognize and dump them.</p>
<p>Thanks a lot.
/Patrick</p>
<pre class="lang-py prettyprint-override"><code>for name, param in model.named_parameters():
# here can dump weight and bias, but not running_stats
names.append(name)
shapes.append(list(param.data.numpy().shape))
values.append(param.data.numpy().flatten().tolist())
</code></pre>
|
<p><code>running_mean</code> and others are <code>registered_buffers</code> in PyTorch. You can save (as you say dump) them with <code>torch.nn.Module</code>'s <a href="https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict" rel="nofollow noreferrer"><code>state_dict</code></a>:</p>
<pre><code>torch.save(model.state_dict(), PATH)
</code></pre>
<p>You can iterate over named buffers and save each of them however you like similarly to parameters:</p>
<pre><code>for name, buffer in model.named_buffers():
# do your thing with them
</code></pre>
|
pytorch
| 1
|
376,253
| 57,732,470
|
Sort dates of multiple json files with python
|
<p>I got multiple json files and I am trying to sort them by date. I managed to print them out in 2 columns, DATE and TEXT, but the DATES are not in order. </p>
<p>When I try to mess around with datetime, nothing happens. I'm sure there's an easy solution, but I just can't find it. </p>
<pre class="lang-py prettyprint-override"><code>import os, json
import pandas as pd
path_to_json = 'MyPath'
json_files = [pos_json for pos_json in os.listdir(path_to_json) if pos_json.endswith('.json')]
jsons_data = pd.DataFrame(columns=['DATE', 'TEXT'])
for index, js in enumerate(json_files):
with open(os.path.join(path_to_json, js)) as json_file:
json_text = json.load(json_file)
DATE = json_text['DATE']
TEXT = json_text['TEXT']
jsons_data.loc[index] = [DATE, TEXT]
print(jsons_data)
</code></pre>
<p>Print dates in sorted order:</p>
<pre><code>from datetime import datetime
def sort_data_by_datetime(jsons_data, field_name='DATE', datetime_format='%d.%m.%Y'):
return sorted(jsons_data, key=lambda x: datetime.strptime(x[field_name], datetime_format))
print(jsons_data)
</code></pre>
<p>Here is a snippet of my unordered result </p>
<pre><code> DATE TEXT
0 19.08.2018 "Den Unmut der Sparer kann ich gut verstehen"\...
1 17.05.2019 „Selbstzufriedenheit ist sehr gefährlich“\n\nI...
2 25.08.2019 „Ich sehe keinen Grund zur Panik“\n\nInterview...
3 15.09.2018 "Bargeld ist gedruckte Freiheit"\n\nInterview ...
</code></pre>
<p>And of one of my json files</p>
<pre><code>{"AUTHOR": "JoachimWuermeling", "PDF_URL": "-", "LOCAL_PDF_FILE": "-", "DATE": "02.10.2018", "TEXT": "Die Bundesbank digitalisiert die Bankenaufsicht\n\nInterview mit der Börsen-Zeitung\n\n\n\n02.10.2018\n\n|\nJoachim Wuermeling\n\n\nEN\n\nDas
</code></pre>
|
<pre><code>jsons_data['DATE'] = pd.to_datetime(jsons_data['DATE'])
jsons_data = jsons_data.sort_values('DATE')
</code></pre>
<p>This might help.</p>
|
python|json|pandas|datetime
| 0
|
376,254
| 57,828,510
|
Value Error: time data '12:00:01 AM' does not match format '%I:%M:00 %p' using time.strptime
|
<p>I'm a bit new to python so any help is greatly appreciated. Thanks in advance (and sorry for any mislabel).</p>
<p>I'm working on a csv file containing columns with Date, Time, CO, CO2 and CH4. What I want to achieve is to make a loop so that every time there is a time with zero seconds (ex: "12:00:00 AM", "3:05:00 PM" etc) it will take the data of that row and send it to a new text or csv file(this part is not included in the code). I imported the csv using pandas and used time.strptime to convert the string to readable time format. </p>
<p>Unfortunately since there is some data missing, I can't make a loop to gather every 60th data for this. I've also tried making a function using strptime but it also gives me a type error saying it must be a string and not a panda core series.</p>
<p>Importing the csv file:</p>
<pre><code>data1 = pd.read_csv("prueba1.csv")
print(data1)
</code></pre>
<p>Where the output is:</p>
<pre><code> DATE TIME CO CO2_dry CH4_dry
0 3/4/2019 12:00:00 AM 0.352 420 1.99
1 3/4/2019 12:00:01 AM 0.352 420 1.99
2 3/4/2019 12:00:02 AM 0.352 420 1.99
3 3/4/2019 12:00:03 AM 0.366 420 1.99
4 3/4/2019 12:00:04 AM 0.366 420 1.99
5 3/4/2019 12:00:05 AM 0.366 421 1.99
6 3/4/2019 12:00:06 AM 0.369 421 1.99
7 3/4/2019 12:00:07 AM 0.369 421 1.99
8 3/4/2019 12:00:09 AM 0.354 421 1.99
9 3/4/2019 12:00:10 AM 0.354 421 1.99
</code></pre>
<p>And the code I'm using is</p>
<pre><code>for i in data1["TIME"]:
time.strptime(i,"%I:%M:%S %p")
if time.strptime(i,"%I:%M:%S %p") == time.strptime(i,"%I:%M:00 %p"):
print("Found a number!", i)
else:
print("Yikes")
</code></pre>
<p>The error message is:</p>
<pre><code>Found a number! 12:00:00 AM
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-18-8b936d17df46> in <module>()
2 time.strptime(i,"%I:%M:%S %p")
3 #print(i)
----> 4 if time.strptime(i,"%I:%M:%S %p") == time.strptime(i,"%I:%M:00 %p"):
5 print("Found a number!", i)
6 else:
C:\Users\Diego\Anaconda3\lib\_strptime.py in _strptime_time(data_string, format)
557 """Return a time struct based on the input string and the
558 format string."""
--> 559 tt = _strptime(data_string, format)[0]
560 return time.struct_time(tt[:time._STRUCT_TM_ITEMS])
561
C:\Users\Diego\Anaconda3\lib\_strptime.py in _strptime(data_string, format)
360 if not found:
361 raise ValueError("time data %r does not match format %r" %
--> 362 (data_string, format))
363 if len(data_string) != found.end():
364 raise ValueError("unconverted data remains: %s" %
ValueError: time data '12:00:01 AM' does not match format '%I:%M:00 %p'
</code></pre>
<p>It returns the preceding output. I expected for it to return all time numbers matching the '%I:%M:00 %p' format, but only returned the first number. It seems odd to me that it stopped after it encountered the first number not matching the specified format.</p>
|
<p>If you want to skip errors, then you should use <code>try</code> and <code>except</code></p>
<pre><code>for i in data1["TIME"]:
try:
time.strptime(i,"%I:%M:%S %p")
if time.strptime(i,"%I:%M:%S %p") == time.strptime(i,"%I:%M:00 %p"):
print("Found a number!", i)
else:
print("Yikes")
except ValueError:
print("Ouch! Something failed")
</code></pre>
|
python|pandas|dataframe|time|jupyter-notebook
| 0
|
376,255
| 58,170,435
|
Generate dictionary from a pandas dataframe with multiple columns combined as the keys, remaining columns as values?
|
<p>I'm trying to generate a dictionary from a pandas dataframe. Specifically, I need to:</p>
<ol>
<li><p>Take the first (x) columns and use the data points in each of their rows, together, as keys. </p></li>
<li><p>Compile a dictionary for each key using the remaining data points in the row as values, as a list. </p></li>
</ol>
<p>Let's use this sample dataframe for the sake of simplicity.</p>
<ol>
<li>Generate dataframe:</li>
</ol>
<pre><code>df = pd.DataFrame([
{'c1':a1, 'c2':110, 'c3':'xyz', 'c4':24},
{'c1':b2,'c2':100, 'c3':'jdf', 'c4':15},
{'c1':a1,'c2':110, 'c3':'kjl', 'c4':125},
{'c1':b2, 'c2':100, 'c3':'abc', 'c4':71},
])
c1 c2 c3 c4
0 a1 110 xyz 24
1 b2 100 jdf 15
2 a1 110 kjl 125
3 b2 100 abc 71
</code></pre>
<ol start="2">
<li>Yield the following:</li>
</ol>
<pre><code>new_dict = some code
new_dict
{('a1', 110): [['xyz', 24], ['kjl', 125]], ('b2', 100): [['jdf', 15], ['abc', 71]]}
</code></pre>
<p>I've tried many, many things, including creating a list of tuple lists for the keys, assigning unique lists as keys to a new dictionary (with values empty lists)--but I can't then populate the values. </p>
<p>I'm able to compile a dictionary with a single column as the key, and everything else as needed, like this:</p>
<pre><code>test_dict = {}
for index, row in df.iterrows():
if row['c1'] in test_dict:
test_dict[row['c1']].append([row['c2'], row['c3'], row['c4']])
else:
test_dict[row['c1']] = []
test_dict[row['c1']].append([row['c2'], row['c3'], row['c4']])
</code></pre>
<p>But I can't make the jump to combining multiple columns as the key.</p>
|
<p>Assuming the following DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([
{'c1': 'a1', 'c2': 110, 'c3': 'xyz', 'c4': 24},
{'c1': 'b2', 'c2': 100, 'c3': 'jdf', 'c4': 15},
{'c1': 'a1', 'c2': 110, 'c3': 'kjl', 'c4': 125},
{'c1': 'b2', 'c2': 100, 'c3': 'abc', 'c4': 71},
])
</code></pre>
<p>You could <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer">groupby</a>, aggregate and then convert to dictionary (<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_dict.html" rel="nofollow noreferrer">to_dict</a>):</p>
<pre><code>groups = df.groupby(['c1', 'c2']).apply(lambda x: x[['c3', 'c4']].values.tolist()).to_dict()
print(groups)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>{('a1', 110): [['xyz', 24], ['kjl', 125]], ('b2', 100): [['jdf', 15], ['abc', 71]]}
</code></pre>
|
python|pandas|list|dictionary
| 1
|
376,256
| 57,909,222
|
Pandas Shift Row Value to Match Column Name
|
<p>I have a sample dataset that has a set list of column names. In shifting data around, I have each row printing letters in each row as seen below.</p>
<p>I am trying to shift the values of each row to match either respective column. I have tried doing pd.shift() to do so but have not had much success. I am trying to get what is seen below. Any thoughts?</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A': list('AAAAA'),
'B': list('CBBDE'),
'C': list('DDCEG'),
'D': list('EEDF '),
'E': list('FFE '),
'F': list('GGF '),
'G': list(' G ')})
A B C D E F G
0 A C D E F G
1 A B D E F G
2 A B C D E F G
3 A D E F
4 A E G
</code></pre>
<p>After:</p>
<pre><code> A B C D E F G
0 A C D E F G
1 A B D E F G
2 A B C D E F G
3 A D E F
4 A E G
</code></pre>
|
<p>This is more list <code>pivot</code> problem </p>
<pre><code>s=df.mask(df=='').stack().reset_index()
s.pivot(index='level_0',columns=0,values=0)
Out[34]:
0 A B C D
level_0
0 A B C D
1 A NaN C NaN
2 A NaN C D
</code></pre>
|
python|pandas|pandas-groupby
| 3
|
376,257
| 57,795,370
|
Data resolution change in Pandas
|
<p>I have a dataframe whose data has a resolution of 10 minutes as seen below:</p>
<pre><code> DateTime TSM
0 2011-03-18 14:20:00 26.8
1 2011-03-18 14:30:00 26.5
2 2011-03-18 14:40:00 26.3
... ... ...
445088 2019-09-03 11:40:00 27.6
445089 2019-09-03 11:50:00 27.6
445090 2019-09-03 12:00:00 27.6
</code></pre>
<p>Now, I would like to reduce its resolution to 1 day. Does Pandas have any function that can help me with this?</p>
|
<p>Your dataframe should have datetime index in order to use <code>resample</code> method. Also you need to apply an aggregate function, for example <code>mean()</code></p>
<pre><code># Make sure DateTime type is datetime
df['DateTime'] = df['DateTime'].astype('datetime64')
# Set DateTime column as index
df.set_index('DateTime', inplace=True)
# 1D stands for 1 day offset
df.resample('1D').mean()
</code></pre>
|
python-3.x|pandas
| 2
|
376,258
| 58,042,419
|
Unexpected results on groupby([]).sum()
|
<pre class="lang-py prettyprint-override"><code>n = df1.groupby(['Year', 'State', 'Regulator', 'Industry','Product', 'Count']).sum() # <-- this produces the error
</code></pre>
<p>Problem description
[Hi, I think there's a problem dropping/excluding data points with groupby.sum function. I've performed the following code (see above), which at hindsight seemed ok until I compared with the same data using Excel and/or simple plot of the dataset. In addition, removing 'Count' will throw off values on other df columns. Thanks for checking this out.]</p>
<p>Expected Output</p>
<pre><code>Year | 2012
State | Alabama
Regulator | SEC
Insurance/Annuity Products | 2
Stocks | 4
Year | 2012
State | Alabama
Regulator | FDIC
Debit Card | 1
Residential Mortgage | 3
</code></pre>
<p>Output of pd.df</p>
<pre><code>Year | 2012
State | Alabama
Regulator | FDIC
Debit Card | 1
Residential Mortgage | 1
</code></pre>
|
<p>Problem solved. I've had ran the code including and excluding the Column ['Count'] from the code which gave me a mix of good and bad results. For some reason the CSV wasn't being read correctly if that makes any sense. Column ['Count'] was dtypes int, but it seems was being read as string. So i did a .apply(pd.to_numeric), removed 'Count' and re-ran the cell which solved the issue.</p>
<p>Here's the final code for groupby/sum:</p>
<pre><code>n = df1.groupby(['Year', 'State', 'Regulator', 'Industry','Product'])['Count'].sum()
</code></pre>
|
pandas|pandas-groupby
| 0
|
376,259
| 58,117,330
|
I need to insert a row at nth index that will take summation of all rows that are underneath it
|
<p>I have a dataframe with 30 rows. I need to insert a row at the 10th index, give it a name and then have all the cells in it, be the summation of all cells that are underneath it. It will represent a total of the lower performing parts. </p>
<pre><code>pd.DataFrame(np.insert(df.values, 0,)
</code></pre>
<p>I would like to name give it an index name, and keep all the other data below. simply and insertion and the summation of all rows underneath it.</p>
|
<p>I pandas exist <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.insert.html" rel="nofollow noreferrer"><code>DataFrame.insert</code></a>, but working only for columns, so is necessary something more complicated:</p>
<pre><code>df = pd.DataFrame({
'B':[4,5,4,5,5,4],
'C':[7,8,9,4,2,3],
})
</code></pre>
<p>Solutions for include rows with index <code>idx</code> for sum:</p>
<pre><code>idx = 3
df1 = df.iloc[:idx]
df2 = df.iloc[idx:]
df = pd.concat([df1, df2.sum().to_frame('new').T, df2])
print (df)
B C
0 4 7
1 5 8
2 4 9
new 14 9
3 5 4
4 5 2
5 4 3
</code></pre>
<p>Or:</p>
<pre><code>idx = 3
df.loc[idx + .5] = df.iloc[idx:].sum()
df = df.sort_index().rename({idx + .5:'new'})
print (df)
B C
0.0 4.0 7.0
1.0 5.0 8.0
2.0 4.0 9.0
3.0 5.0 4.0
new 14.0 9.0
4.0 5.0 2.0
5.0 4.0 3.0
</code></pre>
<p>Solutions for exclude row with <code>idx</code> for sum:</p>
<pre><code>idx = 3
df1 = df.iloc[:idx+1]
df2 = df.iloc[idx+1:]
df = pd.concat([df1, df2.sum().to_frame('new').T, df2])
print (df)
B C
0 4 7
1 5 8
2 4 9
3 5 4
new 9 5
4 5 2
5 4 3
</code></pre>
<hr>
<pre><code>idx = 3
df.loc[idx + .5] = df.iloc[idx + 1:].sum()
df = df.sort_index().rename({idx + .5:'new'})
print (df)
B C
0.0 4.0 7.0
1.0 5.0 8.0
2.0 4.0 9.0
3.0 5.0 4.0
new 9.0 5.0
4.0 5.0 2.0
5.0 4.0 3.0
</code></pre>
<p>If all columns are numeric, is possible use also <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.insert.html" rel="nofollow noreferrer"><code>np.insert</code></a>:</p>
<pre><code>idx = 3
arr = df.to_numpy()
s = arr[idx:].sum(axis=0)[None, :]
np.insert(arr, 1, s, 0)
df = pd.DataFrame(arr, columns=df.columns).rename({idx:'new'})
print (df)
B C
0 4 7
1 5 8
2 4 9
new 5 4
4 5 2
5 4 3
</code></pre>
|
python|pandas|dataframe
| 2
|
376,260
| 57,801,680
|
How can I use GPU for running a tflite model (*.tflite) using tf.lite.Interpreter (in python)?
|
<p>I have converted a tensorflow inference graph to tflite model file (*.tflite), according to instructions from <a href="https://www.tensorflow.org/lite/convert" rel="nofollow noreferrer">https://www.tensorflow.org/lite/convert</a>.</p>
<p>I tested the tflite model on my GPU server, which has 4 Nvidia TITAN GPUs. I used the tf.lite.Interpreter to load and run tflite model file. </p>
<p>It works as the former tensorflow graph, however, the problem is that the inference became too slow. When I checked out the reason, I found that the GPU utilization is simply 0% when tf.lite.Interpreter is running.</p>
<p>Is there any method that I can run tf.lite.Interpreter with GPU support?</p>
|
<p><a href="https://github.com/tensorflow/tensorflow/issues/34536" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/34536</a></p>
<p>CPU is kind of good enough for tflite, especially multicore. </p>
<p>nvidia GPU likely not updated for tflite, which is for mobile GPU platform. </p>
|
python|tensorflow|interpreter|tensorflow-lite
| 0
|
376,261
| 57,891,587
|
Sorting dataframe based on multiple columns and conditions
|
<p>I am trying to sort the following dataframe based on <code>rolls</code> descending first, followed by <code>diff_vto</code> ascending for positive values, finally by <code>diff_vto</code> ascending for negative values. This is the original dataframe:</p>
<pre><code> day prob vto rolls diff diff_vto
0 1 10 14 27.0 0.0 -13
1 2 10 14 20.0 3.0 -12
2 3 7 14 16.0 4.0 -11
3 4 3 14 12.0 -3.0 -10
4 5 6 14 17.0 3.0 -9
5 6 3 14 14.0 -5.0 -8
6 7 8 14 14.0 5.0 -7
7 8 3 14 9.0 0.0 -6
8 9 3 14 9.0 0.0 -5
9 10 3 14 17.0 0.0 -4
10 11 3 14 22.0 -8.0 -3
11 12 11 14 27.0 3.0 -2
12 13 8 14 23.0 0.0 -1
13 14 8 14 25.0 1.0 0
14 15 7 14 27.0 -3.0 1
</code></pre>
<p>This is the code in case you wish to replicate it: </p>
<pre><code> import pandas as pd
a = {'day':[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15],'prob':[10,10,7,3,6,3,8,3,3,3,3,11,8,8,7],'vto':[14,14,14,14,14,14,14,14,14,14,14,14,14,14,14]}
df = pd.DataFrame(a)
df.loc[len(df)+1] = df.loc[0] #Add an extra 2 days for rolling rolling
df.loc[len(df)+2] = df.loc[1] #Add an extra 2 days for rolling
df['rolls'] = df['prob'].rolling(3).sum()
df['rolls'] = df['rolls'].shift(periods=-2) #Displace rolls to match the index + 2
df['diff'] = df['prob'].diff(periods=-1) #Prob[i] - Prob[i+1]
df['diff_vto'] = df['day'] - df['vto']
df = df.head(15)
print(df)
</code></pre>
<p>I want to be able to sort the dataframe, based on <code>rolls</code> (descending) followed by the minimum value of <code>diff_vto</code> when it's possitive (ascending), followed by the minimum value of <code>diff_vto</code> when it's negative (ascending). Based on the dataframe posted above, this would be the expected output:</p>
<pre><code> day prob vto rolls diff diff_vto
14 15 7 14 27.0 -3.0 1
0 1 10 14 27.0 0.0 -13
11 12 11 14 27.0 3.0 -2
13 14 8 14 25.0 1.0 0
12 13 8 14 23.0 0.0 -1
10 11 3 14 22.0 -8.0 -3
1 2 10 14 20.0 3.0 -12
4 5 6 14 17.0 3.0 -9
9 10 3 14 17.0 0.0 -4
2 3 7 14 16.0 4.0 -11
5 6 3 14 14.0 -5.0 -8
6 7 8 14 14.0 5.0 -7
3 4 3 14 12.0 -3.0 -10
7 8 3 14 9.0 0.0 -6
8 9 3 14 9.0 0.0 -5
</code></pre>
<p>I have obviously tried applying <code>.sort_values()</code> but I can't get the conditional sorting to work on <code>diff_vto</code> because setting it to ascending will obviously place the negative values before the positive ones. Could I please get a suggestion? Thanks.</p>
|
<p>You want to sort by <code>diff_vto>0</code> and <code>abs(diff_vto)</code>, both decreasing:</p>
<pre><code>df['pos'] = df['diff_vto'].gt(0)
df['abs'] = df['diff_vto'].abs()
df.sort_values(['rolls', 'pos', 'abs'], ascending=[False, False, False])
</code></pre>
<p>Output (you can drop <code>pos</code> and <code>abs</code> if needed):</p>
<pre><code> day prob vto rolls diff diff_vto pos abs
14 15 7 14 27.0 -3.0 1 True 1
0 1 10 14 27.0 0.0 -13 False 13
11 12 11 14 27.0 3.0 -2 False 2
13 14 8 14 25.0 1.0 0 False 0
12 13 8 14 23.0 0.0 -1 False 1
10 11 3 14 22.0 -8.0 -3 False 3
1 2 10 14 20.0 3.0 -12 False 12
4 5 6 14 17.0 3.0 -9 False 9
9 10 3 14 17.0 0.0 -4 False 4
2 3 7 14 16.0 4.0 -11 False 11
5 6 3 14 14.0 -5.0 -8 False 8
6 7 8 14 14.0 5.0 -7 False 7
3 4 3 14 12.0 -3.0 -10 False 10
7 8 3 14 9.0 0.0 -6 False 6
8 9 3 14 9.0 0.0 -5 False 5
</code></pre>
|
python|pandas|sorting
| 2
|
376,262
| 58,165,203
|
KeyError: 'class' while using ImageDataGenerator.flow_from_dataframe
|
<p>I am trying to create data generator using ImageDataGenerator.flow_from_dataframe but facing keyerror: class</p>
<p>Before using flow_from_dataframe, i created a pivot of training dataframe where class labels are converted to columns</p>
<pre><code>train_df = train[['Label', 'filename', 'subtype']].drop_duplicates().pivot(index='filename', columns='subtype', values='Label').reset_index()
</code></pre>
<p>Below is the output of dataframe train_df.</p>
<pre><code>subtype filename any epidural intraparenchymal intraventricular subarachnoid subdural
0 ID_000039fa0.dcm 0 0 0 0 0 0
1 ID_00005679d.dcm 0 0 0 0 0 0
2 ID_00008ce3c.dcm 0 0 0 0 0 0
3 ID_0000950d7.dcm 0 0 0 0 0 0
4 ID_0000aee4b.dcm 0 0 0 0 0 0
</code></pre>
<pre><code>train_gen = datagen.flow_from_dataframe(train_df,
directory='/kaggle/input/rsna-intracranial-hemorrhage-detection/stage_1_train_images',
xcol='filename',
ycol=['any', 'epidural', 'intraparenchymal','intraventricular', 'subarachnoid', 'subdural'],
class_mode='categorical',
target_size=(300, 300),
batch_size=64,
subset='training')
</code></pre>
<pre><code>---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/opt/conda/lib/python3.6/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
2896 try:
-> 2897 return self._engine.get_loc(key)
2898 except KeyError:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'class'
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
<ipython-input-93-0b64db9da6bb> in <module>
6 target_size=(300, 300),
7 batch_size=64,
----> 8 subset='training')
/opt/conda/lib/python3.6/site-packages/keras_preprocessing/image/image_data_generator.py in flow_from_dataframe(self, dataframe, directory, x_col, y_col, weight_col, target_size, color_mode, classes, class_mode, batch_size, shuffle, seed, save_to_dir, save_prefix, save_format, subset, interpolation, validate_filenames, **kwargs)
681 subset=subset,
682 interpolation=interpolation,
--> 683 validate_filenames=validate_filenames
684 )
685
/opt/conda/lib/python3.6/site-packages/keras_preprocessing/image/dataframe_iterator.py in __init__(self, dataframe, directory, image_data_generator, x_col, y_col, weight_col, target_size, color_mode, classes, class_mode, batch_size, shuffle, seed, data_format, save_to_dir, save_prefix, save_format, subset, interpolation, dtype, validate_filenames)
127 self.dtype = dtype
128 # check that inputs match the required class_mode
--> 129 self._check_params(df, x_col, y_col, weight_col, classes)
130 if validate_filenames: # check which image files are valid and keep them
131 df = self._filter_valid_filepaths(df, x_col)
/opt/conda/lib/python3.6/site-packages/keras_preprocessing/image/dataframe_iterator.py in _check_params(self, df, x_col, y_col, weight_col, classes)
202 if self.class_mode == 'categorical':
203 types = (str, list, tuple)
--> 204 if not all(df[y_col].apply(lambda x: isinstance(x, types))):
205 raise TypeError('If class_mode="{}", y_col="{}" column '
206 'values must be type string, list or tuple.'
/opt/conda/lib/python3.6/site-packages/pandas/core/frame.py in __getitem__(self, key)
2978 if self.columns.nlevels > 1:
2979 return self._getitem_multilevel(key)
-> 2980 indexer = self.columns.get_loc(key)
2981 if is_integer(indexer):
2982 indexer = [indexer]
/opt/conda/lib/python3.6/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance)
2897 return self._engine.get_loc(key)
2898 except KeyError:
-> 2899 return self._engine.get_loc(self._maybe_cast_indexer(key))
2900 indexer = self.get_indexer([key], method=method, tolerance=tolerance)
2901 if indexer.ndim > 1 or indexer.size > 1:
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item()
KeyError: 'class'
</code></pre>
<p>Could someone let me know how can I fix this issue.
Any help is appreciated.</p>
|
<p>Can you try this, basically setting <code>class_mode</code> to <code>other</code> </p>
<pre><code>columns=["any", "epidural", "intraparenchymal","intraventricular", "subarachnoid", "subdural"]
train_generator=datagen.flow_from_dataframe(
directory="/kaggle/input/rsna-intracranial-hemorrhage-detection/stage_1_train_images",
x_col="filename",
y_col=columns,
class_mode="other"
target_size=(300, 300)
batch_size=64,
subset="training")
</code></pre>
|
python-3.x|pandas|tensorflow|keras|conv-neural-network
| 0
|
376,263
| 58,007,391
|
Attention Text Generation in Character-by-Character fashion
|
<p>I am searching the web for a couple of days for any <strong>text generation</strong> model that would use only attention mechanisms.</p>
<p>The <strong>Transformer</strong> architecture that made waves in the context of <strong>Seq-to-Seq</strong> models is actually based solely on <strong>Attention</strong> mechanisms but is mainly designed and used for translation or chat bot tasks so it doesn't fit to the purpose, but the principle does.</p>
<p>My question is:</p>
<p>Does anyone knows or heard of a text generation model <strong>based solely on Attention without any recurrence</strong>?</p>
<p>Thanks a lot!</p>
<p>P.S. I'm familiar with <strong>PyTorch</strong>.</p>
|
<p>Building a character-level self-attentive model is a challenging task. Character-level models are usually based on RNNs. Whereas in a word/subword model, it is clear from the beginning what are the units carrying meaning (and therefore the units the attention mechanism can attend to), a character-level model needs to learn word meaning in the following layers. This makes it quite difficult for the model to learn.</p>
<p>Text generation models are nothing more than conditional languages model. Google AI recently published a paper on <a href="https://arxiv.org/abs/1908.10322" rel="nofollow noreferrer">Transformer character language model</a>, but it is the only work I know of.</p>
<p>Anyway, you should consider either using subwords units (as BPE, SentencePiece) or if you really need to go for character level, use RNNs instead.</p>
|
neural-network|nlp|pytorch|transformer-model|attention-model
| 1
|
376,264
| 57,901,697
|
Keras model evaluation accuracy unchanged, and designing model
|
<p>I'm trying to design a CNN in Keras to classify small images of emojis in other images. Below is an example of one of the 13 classes. All images are the same size and all the emojis are of the same size as well. I would think that one should rather easily be able to achieve VERY high accuracy when classifying, as emojis from one class is exactly the same! My intuition told me that if an emoji is 50x50 I could create a convolutional layer of the same size to match one type of emoji. My supervisor did not think that was feasible however. Anyway, my problem is that, no matter how I design my model, I always get the same validation accuracy for each epoch, which corresponds to 1/13 (or simply guessing that each emoji belongs to the same class).</p>
<p>My model looks like this:</p>
<pre class="lang-py prettyprint-override"><code>model = Sequential()
model.add(Conv2D(16, kernel_size=3, activation="relu", input_shape=IMG_SIZE))
model.add(Dropout(0.5))
model.add(Conv2D(32, kernel_size=3, activation="relu"))
model.add(Conv2D(64, kernel_size=3, activation="relu"))
model.add(Conv2D(128, kernel_size=3, activation="relu"))
#model.add(Conv2D(256, kernel_size=3, activation="relu"))
model.add(Dropout(0.5))
model.add(Flatten())
#model.add(Dense(256, activation="relu"))
model.add(Dense(128, activation="relu"))
model.add(Dense(64, activation="relu"))
model.add(Dense(NUM_CLASSES, activation='softmax', name="Output"))
</code></pre>
<p>And I train it like this:</p>
<pre class="lang-py prettyprint-override"><code># ------------------ Compile and train ---------------
sgd = optimizers.SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
rms = optimizers.RMSprop(lr=0.004, rho=0.9, epsilon=None, decay=0.0)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=["accuracy"]) # TODO Read more about this
train_hist = model.fit_generator(
train_generator,
steps_per_epoch=train_generator.n // BATCH_SIZE,
validation_steps=validation_generator.n // BATCH_SIZE, # TODO que?
epochs=EPOCHS,
validation_data=validation_generator,
#callbacks=[EarlyStopping(patience=3, restore_best_weights=True)]
)
</code></pre>
<p>Even with this model, that has over 200 million parameters, I get exactly 0.0773 in validation accuracy for each epoch:</p>
<pre><code>Epoch 1/10
56/56 [==============================] - 21s 379ms/step - loss: 14.9091 - acc: 0.0737 - val_loss: 14.8719 - val_acc: 0.0773
Epoch 2/10
56/56 [==============================] - 6s 108ms/step - loss: 14.9308 - acc: 0.0737 - val_loss: 14.8719 - val_acc: 0.0773
Epoch 3/10
56/56 [==============================] - 6s 108ms/step - loss: 14.7869 - acc: 0.0826 - val_loss: 14.8719 - val_acc: 0.0773
Epoch 4/10
56/56 [==============================] - 6s 108ms/step - loss: 14.8948 - acc: 0.0759 - val_loss: 14.8719 - val_acc: 0.0773
Epoch 5/10
56/56 [==============================] - 6s 109ms/step - loss: 14.8897 - acc: 0.0762 - val_loss: 14.8719 - val_acc: 0.0773
Epoch 6/10
56/56 [==============================] - 6s 109ms/step - loss: 14.8178 - acc: 0.0807 - val_loss: 14.8719 - val_acc: 0.0773
Epoch 7/10
56/56 [==============================] - 6s 108ms/step - loss: 15.0747 - acc: 0.0647 - val_loss: 14.8719 - val_acc: 0.0773
Epoch 8/10
56/56 [==============================] - 6s 108ms/step - loss: 14.7509 - acc: 0.0848 - val_loss: 14.8719 - val_acc: 0.0773
Epoch 9/10
56/56 [==============================] - 6s 108ms/step - loss: 14.8948 - acc: 0.0759 - val_loss: 14.8719 - val_acc: 0.0773
Epoch 10/10
56/56 [==============================] - 6s 108ms/step - loss: 14.8228 - acc: 0.0804 - val_loss: 14.8719 - val_acc: 0.0773
</code></pre>
<p>Because its not learning anything, I'm starting to think that it's not my models fault, but maybe the dataset or how I train it. I have tried training with "adam" as well but get the same result. I try to change the input size of the images but still, same result. Below is a sample from my dataset. Do you guys have any ideas what could be wrong?</p>
<p><a href="https://i.stack.imgur.com/ZaV16.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZaV16.jpg" alt="Same from dataset"></a></p>
|
<p>I think the main issue currently is that your model has way too many parameters relative to how few samples you have for training. For image classification nowadays, you generally want to just have conv layers, a Global<em>Something</em>Pooling layer, and then a single Dense layer for your outputs. You just need to make sure that your conv section ends up with a large enough receptive field to be able to find all the features you need.</p>
<p>The first thing to think about is making sure that you have a large enough receptive field (<a href="https://medium.com/mlreview/a-guide-to-receptive-field-arithmetic-for-convolutional-neural-networks-e0f514068807" rel="nofollow noreferrer">further reading about that here</a>). There are three main ways to achieve that: pooling, stride >= 2, and/or dilation >= 2. Because you have only 13 "features" that you want to identify, and all of them will always be pixel-perfect, I'm thinking dilation will be the way to go so that the model can easily "overfit" on those 13 "features". If we use 4 conv layers with dilations of 1, 2, 4, and 8, respectively, then we'll end up with a receptive field of 31. This should be enough to easily recognize 50-pixel emoji.</p>
<p>Next, how many filters should each layer have? Normally, you start with a few filters, and increase as you go through the model, as you are doing here. However, we want to "overfit" on specific features, so we should probably increase the amounts in the earlier layers. Just to make it easy, let's give all layers 64 filters.</p>
<p>Last, how do we convert this all to a single prediction? Rather than using a dense layer, which would use a <em>ton</em> of parameters and would not be translation invariant, people nowadays use GlobalAveragePooling or GlobalMaxPooling. GlobalAveragePooling is more common, because it's good for helping to find combinations of many features. However, we just want to find 13 or so exact features here, so GlobalMaxPooling might work even better. Then a single dense layer after that will be enough to get a prediction. Because we're using <em>Global</em>MaxPooling, it doesn't need to be flattened first—global pooling already does that for us.</p>
<p>Resulting model:</p>
<pre><code>Conv2D(64, kernel_size=3, activation="relu", input_shape=IMG_SIZE)
Conv2D(64, kernel_size=3, dilation_rate=2, activation="relu")
Conv2D(64, kernel_size=3, dilation_rate=4, activation="relu")
Conv2D(64, kernel_size=3, dilation_rate=8, activation="relu")
GlobalMaxPooling()
Dense(NUM_CLASSES, activation='softmax', name="Output")
</code></pre>
<p>Try that. You'll probably also want to add BatchNormalization layers in there after every layers except the last. Once you get decent training accuracy, check if it's overfitting. If it is (which is likely), try these steps:</p>
<ul>
<li>batch norm if you haven't already</li>
<li>weight decay on all conv and dense layers</li>
<li>SpatialDropout2D after the conv layers and regular Dropout after the pooling layer</li>
<li>augment your data with an ImageDataGenerator</li>
</ul>
<p>Designing nets like this is almost more of an art than a science, so change stuff around if you think you should. Eventually, you will get an intuition for what is more or less likely to work in any given situation.</p>
|
python|tensorflow|keras|deep-learning|multiclass-classification
| 0
|
376,265
| 57,868,723
|
How to replace a loop that looks at multiple previous values with a formula in Python
|
<p><strong>My Problem</strong></p>
<p>I have a loop that creates a column using either a formula based on values from other columns or the previous value in the column depending on a condition ("days from new low == 0"). It is really slow over a huge dataset so I wanted to get rid of the loop and find a formula that is faster. </p>
<p><strong>Current Working Code</strong></p>
<pre><code>import numpy as np
import pandas as pd
csv1 = pd.read_csv('stock_price.csv', delimiter = ',')
df = pd.DataFrame(csv1)
for x in range(1,len(df.index)):
if df["days from new low"].iloc[x] == 0:
df["mB"].iloc[x] = (df["RSI on new low"].iloc[x-1] - df["RSI on new low"].iloc[x]) / -df["days from new low"].iloc[x-1]
else:
df["mB"].iloc[x] = df["mB"].iloc[x-1]
df
</code></pre>
<p><strong>Input Data and Expected Output</strong></p>
<pre><code>RSI on new low,days from new low,mB
0,22,0
29.6,0,1.3
29.6,1,1.3
29.6,2,1.3
29.6,3,1.3
29.6,4,1.3
21.7,0,-2.0
21.7,1,-2.0
21.7,2,-2.0
21.7,3,-2.0
21.7,4,-2.0
21.7,5,-2.0
21.7,6,-2.0
21.7,7,-2.0
21.7,8,-2.0
21.7,9,-2.0
25.9,0,0.5
25.9,1,0.5
25.9,2,0.5
23.9,0,-1.0
23.9,1,-1.0
</code></pre>
<p><strong>Attempt at Solution</strong></p>
<pre><code>def mB_calc (var1,var2,var3):
df[var3]= np.where(df[var1] == 0, df[var2].shift(1) - df[var2] / -df[var1].shift(1) , "")
return df
df = mB_calc('days from new low','RSI on new low','mB')
</code></pre>
<p>First, it gives me this "TypeError: can't multiply sequence by non-int of type 'float'" and second I dont know how to incorporate the "ffill" into the formula.</p>
<p>Any idea how I might be able to do it?</p>
<p>Cheers!</p>
|
<p>Try this one:</p>
<pre class="lang-py prettyprint-override"><code>df["mB_temp"] = (df["RSI on new low"].shift() - df["RSI on new low"]) / -df["days from new low"].shift()
df["mB"] = df["mB"].shift()
df["mB"].loc[df["days from new low"] == 0]=df["mB_temp"].loc[df["days from new low"] == 0]
df.drop(["mB_temp"], axis=1)
</code></pre>
<p>And with <code>np.where</code>:</p>
<pre class="lang-py prettyprint-override"><code>df["mB"] = np.where(df["days from new low"]==0, df["RSI on new low"].shift() - df["RSI on new low"]) / -df["days from new low"].shift(), df["mB"].shift())
</code></pre>
|
python|database|pandas|loops
| 1
|
376,266
| 57,820,916
|
Pandas pivot_table: `margins=True` shows `NaN` with `Period` columns
|
<p>The following code reproduces the issue I'm having:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(
{
"a": [1, 1, 2, 2],
"b": [
pd.Period("2019Q1"),
pd.Period("2019Q2"),
pd.Period("2019Q1"),
pd.Period("2019Q2"),
],
"x": 1.0,
}
)
df.pivot_table(index="a", columns="b", values="x", margins=True)
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>b 2019Q1 2019Q2 All
a
1 1.0 1.0 1.0
2 1.0 1.0 1.0
All NaN NaN 1.0
</code></pre>
<p>Why the <code>NaN</code> subtotals? I would have expected:</p>
<pre class="lang-py prettyprint-override"><code>b 2019Q1 2019Q2 All
a
1 1.0 1.0 1.0
2 1.0 1.0 1.0
All 1.0 1.0 1.0
</code></pre>
<p>This happens with <code>Period</code> columns.</p>
|
<p>If anyone else stumbles across this issue, it is indeed a bug, the relevant GitHub issues are <a href="https://github.com/pandas-dev/pandas/issues/28323" rel="nofollow noreferrer">#28323</a> and <a href="https://github.com/pandas-dev/pandas/issues/28337" rel="nofollow noreferrer">#28337</a></p>
<hr>
<p>The underlying problem is caused by the <code>get_indexer</code> method of a <code>PeriodIndex</code>. Right now, when reindexing, instead of using the actual <code>PeriodIndex</code>, the <code>PeriodIndex</code>'s <code>_int64index</code> is used. The relevant code <a href="https://github.com/pandas-dev/pandas/blob/master/pandas/core/indexes/period.py#L657" rel="nofollow noreferrer">can be found here</a>, and summarized below:</p>
<pre><code>if isinstance(target, PeriodIndex):
target = target.asi8
if tolerance is not None:
tolerance = self._convert_tolerance(tolerance, target)
return Index.get_indexer(self._int64index, target, method, limit, tolerance)
</code></pre>
<p>This clearly works fine if re-indexing using another <code>PeriodIndex</code>, since the target is also converted to <code>int</code>, but results in some wonky behavior if the other index is <em>not</em> a <code>PeriodIndex</code>, here is a small example of the behavior.</p>
<pre><code>>>> i = pd.PeriodIndex([pd.Period("2019Q1", "Q-DEC"), pd.Period("2019Q2", "Q-DEC")])
>>> j = pd.Index([pd.Period("2019Q1", "Q-DEC"), 'All'])
>>> s = pd.Series([1, 2], index=i)
>>> s
2019Q1 1
2019Q2 2
Freq: Q-DEC, dtype: int64
>>> s.reindex(j)
2019Q1 NaN
All NaN
dtype: float64
>>> s.index._int64index
Int64Index([196, 197], dtype='int64')
>>> s.reindex([196])
196 1
dtype: int64
</code></pre>
<p>Clearly this is not desired behavior, and the solution is to only use the <code>_int64index</code> when reindexing with another <code>PeriodIndex</code>, and using the regular <code>PeriodIndex</code> otherwise. I submitted a PR to fix this which hopefully should be included soon.</p>
|
python|pandas
| 0
|
376,267
| 58,107,700
|
Raise Elements of Array to Series of Exponents
|
<p>Suppose I have a numpy array such as:</p>
<pre><code>a = np.arange(9)
>> array([0, 1, 2, 3, 4, 5, 6, 7, 8])
</code></pre>
<p>If I want to raise each element to succeeding powers of two, I can do it this way:</p>
<pre><code>power_2 = np.power(a,2)
power_4 = np.power(a,4)
</code></pre>
<p>Then I can combine the arrays by:</p>
<pre><code>np.c_[power_2,power_4]
>> array([[ 0, 0],
[ 1, 1],
[ 4, 16],
[ 9, 81],
[ 16, 256],
[ 25, 625],
[ 36, 1296],
[ 49, 2401],
[ 64, 4096]])
</code></pre>
<p>What's an efficient way to do this if I don't know the degree of the even monomial (highest multiple of 2) in advance?</p>
|
<p>One thing to observe is that x^(2^n) = (...(((x^2)^2)^2)...^2)
meaning that you can compute each column from the previous by taking the square.</p>
<p>If you know the number of columns in advance you can do something like:</p>
<pre><code>import functools as ft
a = np.arange(5)
n = 4
out = np.empty((*a.shape,n),a.dtype)
out[:,0] = a
# Note: this works by side-effect!
# The optional second argument of np.square is "out", i.e. an
# array to write the result to (nonetheless the result is also
# returned directly)
ft.reduce(np.square,out.T)
out
# array([[ 0, 0, 0, 0],
# [ 1, 1, 1, 1],
# [ 2, 4, 16, 256],
# [ 3, 9, 81, 6561],
# [ 4, 16, 256, 65536]])
</code></pre>
<p>If the number of columns is not known in advance then the most efficient method is to make a list of columns, append as needed and only in the end use <code>np.column_stack</code> or <code>np.c_</code> (if using <code>np.c_</code> do not forget to cast the list to tuple first).</p>
|
python|numpy
| 1
|
376,268
| 58,077,373
|
Use numpy structured array instead of dict to save space and keep speed
|
<p>Are <code>numpy</code> structured arrays an alternative to Python <code>dict</code>?</p>
<p>I would like to save memory and I cannot affort much of a performance decline.</p>
<p>In my case, the keys are <code>str</code> and the values are <code>int</code>.</p>
<p>Can you give a quick conversion line in case they actually are an alternative?</p>
<p>I also don't mind if you can suggest a different alternative.</p>
<p>I need to save memory, because some dictionaries get larger than 50Gb in memory and I need to open multiple at a time with 'only' 192 GB RAM available.</p>
|
<p>Maybe a bit late, but in case others have the same question, I did a simple benchmarking:</p>
<pre><code>In [1]: import random
In [2]: import string
In [3]: import pandas as pd
In [4]: import sys
In [5]: size = 10**6
In [6]: d = {''.join(random.choices(string.ascii_letters + string.digits, k=32)): random.randrange(size) for _ in range(size)}
In [7]: s = pd.Series(d)
In [8]: a = s.values.view(list(zip(s.index, ['i8'] * size)))
In [9]: key = s.index[random.randrange(size)]
In [10]: %timeit d[key]
61.5 ns ± 1.46 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
In [11]: %timeit s[key]
3.54 µs ± 158 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [12]: %timeit a[key]
154 ns ± 1.63 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
In [13]: sys.getsizeof(d)
Out[13]: 41943136
In [14]: sys.getsizeof(s)
Out[14]: 130816632
</code></pre>
<p>It seems that <code>np.ndarray</code> is about 2-3 times slower than <code>dict</code> (which is acceptable), while the performance of <code>pd.Series</code> is much worse than the other two. As for space efficiency, <code>dict</code> also outperforms <code>pd.Series</code>. I didn't find a way to get the memory usage of a numpy structured array (tried with <code>sys.getsizeof</code> and <code>ndarray.nbytes</code>, but it seems both are missing the size of fields).</p>
|
python|numpy|dictionary|time-complexity|structured-array
| 2
|
376,269
| 57,982,349
|
Writing dataframes to multiple sheets in existing Excel file. Get 'We Found Problem with some content in X.xlsx' when opening excel file
|
<p>I'm creating a few dfs based on existing excel files. I'm then writing each of those dfs to their own separate sheet in a different (existing excel) file. Script executes fine, but when I open the excel file the dfs were written to I get the following error msg: "We found a problem with some content in 'X.xlsx'...</p>
<p>I tried this not using openpyxl as several answers on similar posts indicated you didn't have to use openpyxl; however, pandas docs indicate you need to use openpyxl if writing to .xlsx. </p>
<pre><code>import pandas as pd
from openpyxl import load_workbook
df_complete = pd.read_excel('completed_contracts_2019.xlsx',
index_col=None)
df_wip_out = pd.read_excel('wip19.xlsx', index_col=None)
df_in = pd.read_excel('wip_18_to_19.xlsx', index_col=None)
with pd.ExcelWriter('Final_Template.xlsx', engine='openpyxl') as writer:
writer.book = load_workbook('Final_Template.xlsx')
df_complete.to_excel(writer, sheet_name='complete', index=False)
df_wip_out.to_excel(writer, sheet_name='wipout', index=False)
df_in.to_excel(writer, sheet_name='wipin', index=False)
</code></pre>
<p>I expect to open the excel file without getting the error.</p>
|
<p>Hit the same issue with openpyxl, but actually it might not be the problem of openpyxl.</p>
<p>Based on my experience, after I got a pop window with "Alert We found a problem with some content in......". Just look into what was the error, and finally found that, in Excel if the data format is "General", you cannot input the string starts with "==", otherwise Excel will deem it as a syntax error. But if the data format is "Text", there will be no problem.</p>
<p>Two solution:
1) If you are sure some of the string starts with "==", and it is not that necessary in your case, you might have to change the first "=" or insert another character at the beginning. Then write it into xls file with openpyxl. This is what I applied.
2) Or you may use openpyxl set the data format of cell (might be "number_format") as "Text". I didn't successfully try this way, because after manually setting a cell data format as "Text", read its "number_format" with openpyxl, got the value of "@"(not "Text"). But when I set "number_format" to "@" with openpyxl, then wrote "==" into it, it still hit the issue.</p>
<p>Besides "==", not sure if any other string will also be deemed as a syntax error by Excel. Anyway, if you happen to hit the issue due to "==", you can try the first solution as a workaround. </p>
|
excel|pandas|openpyxl
| 1
|
376,270
| 57,758,267
|
Pandas rolling aggregate list of functions. ValueError: no results
|
<p>Aggregate method after rolling doesn't work for list of functions.</p>
<p>This code rises an Valueerror.</p>
<pre><code>df = pd.DataFrame({'col1':range(3), 'date':pd.date_range('2018-01-01', '2018-01-03')})
df.rolling('6D', min_periods=1, on='date', closed='left').agg([sum])
</code></pre>
<p>BUT this code works fine for a single function.</p>
<pre><code>df.rolling('6D', min_periods=1, on='date', closed='left').agg(sum)
</code></pre>
<p>Error text:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-389-91b03860c0e6> in <module>
----> 1 df.rolling('6D', min_periods=1, on='date', closed='left').agg([sum])
~/anaconda3/lib/python3.7/site-packages/pandas/core/window.py in aggregate(self, arg, *args, **kwargs)
1683 @Appender(_shared_docs['aggregate'])
1684 def aggregate(self, arg, *args, **kwargs):
-> 1685 return super(Rolling, self).aggregate(arg, *args, **kwargs)
1686
1687 agg = aggregate
~/anaconda3/lib/python3.7/site-packages/pandas/core/window.py in aggregate(self, arg, *args, **kwargs)
310
311 def aggregate(self, arg, *args, **kwargs):
--> 312 result, how = self._aggregate(arg, *args, **kwargs)
313 if result is None:
314 return self.apply(arg, raw=False, args=args, kwargs=kwargs)
~/anaconda3/lib/python3.7/site-packages/pandas/core/base.py in _aggregate(self, arg, *args, **kwargs)
557 return self._aggregate_multiple_funcs(arg,
558 _level=_level,
--> 559 _axis=_axis), None
560 else:
561 result = None
~/anaconda3/lib/python3.7/site-packages/pandas/core/base.py in _aggregate_multiple_funcs(self, arg, _level, _axis)
615 # if we are empty
616 if not len(results):
--> 617 raise ValueError("no results")
618
619 try:
ValueError: no results
</code></pre>
|
<p>I find a workaround. I don't know why but we need to use date columns as index in that case.</p>
<pre><code>df.set_index('date').rolling('6D', min_periods=1, closed='left').agg(['sum','max'])
</code></pre>
<p>result</p>
<pre><code> col1
sum max
date
2018-01-01 NaN 0.0
2018-01-02 0.0 1.0
2018-01-03 1.0 2.0
</code></pre>
|
python|pandas|aggregate|rolling-computation
| 0
|
376,271
| 34,228,138
|
Doesn't work example with Keras framework
|
<p>I am trying to study <code>Keras</code> library and created follow script as example:</p>
<pre><code>from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
import pandas as pd
import numpy as np
import time
import memory_profiler as mprof
def write_preds(preds, fname):
pd.DataFrame({"ImageId": list(range(1,len(preds)+1)), "Label": preds}).to_csv(fname, index=False, header=True)
start = time.time()
# read data
train = pd.read_csv("..\\data\\train_small.csv")
labels = train.ix[:,0].values.astype('int32')
X_train = (train.ix[:,1:].values).astype('float32')
print 'Loaded train', time.time() - start, mprof.memory_usage()
test = pd.read_csv("..\\data\\test_small.csv")
X_test = (test.values).astype('float32')
# convert list of labels to binary class matrix
y_train = np_utils.to_categorical(labels)
print 'Loaded test', time.time() - start, mprof.memory_usage()
# pre-processing: divide by max and substract mean
scale = np.max(X_train)
X_train /= scale
X_test /= scale
mean = np.std(X_train)
X_train -= mean
X_test -= mean
input_dim = X_train.shape[1]
nb_classes = y_train.shape[1]
print 'Prepare data', time.time() - start, mprof.memory_usage()
# Here's a Deep Dumb MLP (DDMLP)
model = Sequential()
model.add(Dense(64, input_dim=20, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, init='uniform'))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(2, init='uniform'))
model.add(Activation('softmax'))
print 'Created model', time.time() - start, mprof.memory_usage()
# we'll use MSE (mean squared error) for the loss, and RMSprop as the optimizer
model.compile(loss='mse', optimizer='rmsprop')
print 'Training ...', time.time() - start, mprof.memory_usage()
model.fit(X_train, y_train, nb_epoch=10, batch_size=16, show_accuracy=True, verbose=1)
print 'Generating ...', time.time() - start, mprof.memory_usage()
preds = model.predict_classes(X_test, verbose=0)
print 'Predicted', time.time() - start, mprof.memory_usage()
write_preds(preds, "..\\data\\keras-mlp.csv")
print 'Finished experiment', time.time() - start, mprof.memory_usage()
</code></pre>
<p>In my opinion, this script should work :), however I got next error:</p>
<pre><code>Traceback (most recent call last):
File "X:/new_test.py", line 58, in <module>
model.fit(X_train, y_train, nb_epoch=10, batch_size=16, show_accuracy=True, verbose=1)
File "C:\Anaconda2\lib\site-packages\keras\models.py", line 507, in fit
shuffle=shuffle, metrics=metrics)
File "C:\Anaconda2\lib\site-packages\keras\models.py", line 226, in _fit
outs = f(ins_batch)
File "C:\Anaconda2\lib\site-packages\keras\backend\theano_backend.py", line 357, in __call__
return self.function(*inputs)
File "C:\Anaconda2\lib\site-packages\theano\compile\function_module.py", line 606, in __call__
storage_map=self.fn.storage_map)
File "C:\Anaconda2\lib\site-packages\theano\compile\function_module.py", line 595, in __call__
outputs = self.fn()
File "C:\Anaconda2\lib\site-packages\theano\gof\op.py", line 768, in rval
r = p(n, [x[0] for x in i], o)
File "C:\Anaconda2\lib\site-packages\theano\tensor\blas.py", line 1612, in perform
z[0] = numpy.asarray(numpy.dot(x, y))
</code></pre>
<blockquote>
<pre><code>ValueError: ('shapes (9,784) and (20,64) not aligned: 784 (dim 1) != 20 (dim 0)', (9L, 784L), (20L, 64L))
Apply node that caused the error: Dot22(<TensorType(float32, matrix)>, <TensorType(float32, matrix)>)
Inputs types: [TensorType(float32, matrix), TensorType(float32, matrix)]
Inputs shapes: [(9L, 784L), (20L, 64L)]
Inputs strides: [(3136L, 4L), (256L, 4L)]
Inputs values: ['not shown', 'not shown']
HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with
</code></pre>
<p>by setting the Theano flag 'optimizer=fast_compile'. If that does not
work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.</p>
</blockquote>
<p>PS. Shapes data:</p>
<ul>
<li>Size of X_train (9L, 784L)</li>
<li>Size of X_test (9L, 784L)</li>
</ul>
|
<p>Check this line in your code</p>
<pre><code>model.add(Dense(64, input_dim=20, init='uniform'))
</code></pre>
<p>Why 20 input dimensions? MNIST has 28X28 images, i.e. an input dimension of <code>784</code>. The error message confirms that as well:</p>
<pre><code>ValueError: ('shapes (9,784) and (20,64) not aligned: 784 (dim 1) != 20 (dim 0)', (9L, 784L), (20L, 64L))
</code></pre>
<p>You can further verify the size of your input </p>
<pre><code>print "Size of X_train", x_train.shape
print "Size of X_test", x_test.shape
</code></pre>
<p>And accordingly change the line above to:</p>
<pre><code>model.add(Dense(64, input_dim=784, init='uniform'))
</code></pre>
|
python|pandas|theano|deep-learning|keras
| 2
|
376,272
| 34,212,605
|
pandas v0.17.1 not working with py2exe
|
<p>I Have a problem with python pandas v0.17.1. I upgraded from v0.16.2.
System:</p>
<p>Win10 x64, Python 3.4 64Bit, using PyCharm Community Edition for coding.
(numpy 1.9.3+mkl)</p>
<p>I'm using py2exe to create a stand-alone of a statistics program, using pandas to hold the data, matplotlib for plotting and pyqt4 for everything related to gui.</p>
<p>Since i upgraded pandas, the created .exe from py2exe doesn't work anymore. After doubleclick or start from commandline nothing happens. No Errors, no Errorlog file or similar, no 'window flashing' open and close again. just nothing.</p>
<p>I uninstalled pandas and reinstalled (fresh install) it via pip. Same problem.
I just downgraded pandas to v0.16.2 again. Everything works fine now (with v0.16.2). No other changes made. </p>
<p>For the sake of testing i created a simplest program as possible, only an empy pyqt mainwindow and whats needed to start the programm. works fine witout pandas. After 'import pandas' nothing happens anymore (with v0.17.1).</p>
<p>Somebody knows whats going on? Do i have to tweak my setup.py for the new pandas version? Because i dont get any error, i cannot check whats wrong.</p>
<p>main.py:</p>
<pre><code># coding=utf-8
import sys
from PyQt4 import QtCore, QtGui
import matplotlib
#import pandas
class app(QtGui.QMainWindow):
def __init__(self, *args):
QtGui.QMainWindow.__init__(self, *args)
if __name__ == "__main__":
programm = QtGui.QApplication(sys.argv)
window = app()
window.show()
eventloop = programm.exec_()
sys.exit()
</code></pre>
<p>setup.py:</p>
<pre><code># coding=utf-8
from distutils.core import setup
import py2exe
path_to_source = r'path to dir' # replace with your working directory
setup(
options = {"py2exe": {
'includes': ['sip'],
'excludes': [],
'optimize': 2,
'compressed' : False,
'packages': ['encodings']
#'skip_archive': True
}},
zipfile = None,
windows = [{"script": path_to_source + r"/main.py"}]
)
</code></pre>
<p>Just uncomment the import statement of pandas and nothing works anymore with v0.17.1.
The 'dist' directory gets created with a the same files as before.
I tried to 'include' pandas in setup.py but no effect. Dont know what to do to solve this. Are some dll's needed in the setup.py now?</p>
<p>Sorry for my bad english.
ps: In PyCharm, everything works fine, it's only the .exe that does not work.
ps2: Tested the same with my Win7 installation, same behavior.</p>
|
<p>I solved my problem. It was my AVAST Anti-Virus. It's 'deepscreen' feature started the programm in the background as a sandbox and analysed the .exe but never informed me about it running in the back (no info baloon etc.).</p>
<p>By chance, i had it deactivated while looking into Calvin's Answer.</p>
<p>It works on both, my PC and Laptop now, without any changes. Just deactivated AVAST 'deepscreen' feature while using the .exe created py py2exe.</p>
|
python|pandas|pyqt4|py2exe
| 1
|
376,273
| 34,192,927
|
How to modify the time that 'date' changes (00:00:00) in an index in Pandas dataframe?
|
<p>I have a dataframe that looks like this:</p>
<pre><code>Date and Time Close dif
2015/01/01 17:00:00.211 2030.25 0.3
2015/01/01 17:00:02.456 2030.75 0.595137615
2015/01/01 23:55:01.491 2037.25 2.432613592
2015/01/02 00:02:01.955 2036.75 -0.4
2015/01/02 00:04:04.887 2036.5 -0.391144414
2015/01/02 15:14:56.207 2021.5 -4.732676608
2015/01/02 15:14:59.020 2021.5 -4.731171953
2015/01/02 15:30:00.020 2022 -4.228169436
2015/01/02 16:13:18.948 2021.25 -4.96153033
2015/01/02 16:15:00.000 2021 -5.210187988
2015/01/04 17:00:00.105 2020.5 0
2015/01/04 17:00:01.077 2021 0.423093923
</code></pre>
<p>How can I modify the index so that the current day starts at 17:00:00 of the day before and ends at 15:15:00. (Data between 15:15:00 and 17:00:00 can be eliminated).</p>
<p>The new dataframe would look like this:</p>
<pre><code>Date and Time Close dif
2015/01/02 17:00:00.211 2030.25 0.3
2015/01/02 17:00:02.456 2030.75 0.595137615
2015/01/02 23:55:01.491 2037.25 2.432613592
2015/01/02 00:02:01.955 2036.75 -0.4
2015/01/02 00:04:04.887 2036.5 -0.391144414
2015/01/02 15:14:56.207 2021.5 -4.732676608
2015/01/02 15:14:59.020 2021.5 -4.731171953
2015/01/05 17:00:00.105 2020.5 0
2015/01/05 17:00:01.077 2021 0.423093923
</code></pre>
<p>Thanks</p>
|
<p>Is this what you are looking for?</p>
<pre><code># read in your dataframe
import pandas as pd
df = pd.read_csv('dt_data.csv', skipinitialspace=True)
df.columns = ['mydt', 'close', 'dif'] # changed your column name to 'mydt'
df.mydt = pd.to_datetime(df.mydt) # convert mydt to datetime so we can operate on it
# keep times outside [15:15 to 17:00] interval
df = df[(((df.mydt.dt.hour >= 15) & (df.mydt.dt.minute > 15))
| (df.mydt.dt.hour == 16))==False]
# increment the day count for hours >= 17 at start of new 'day'
ndx = df[df.mydt.dt.hour>=17].index
df.ix[ndx, 'mydt'] += pd.Timedelta(days=1)
df.set_index('mydt', inplace=True, drop=True)
print(df)
close dif
mydt
2015-01-02 17:00:00.211 2030.25 0.300000
2015-01-02 17:00:02.456 2030.75 0.595138
2015-01-02 00:02:01.955 2036.75 -0.400000
2015-01-02 00:04:04.887 2036.50 -0.391144
2015-01-02 15:14:56.207 2021.50 -4.732677
2015-01-02 15:14:59.020 2021.50 -4.731172
2015-01-05 17:00:00.105 2020.50 0.000000
2015-01-05 17:00:01.077 2021.00 0.423094
</code></pre>
<p>EDIT: to address groupby question in comments. If you need to access only the date portion of the datetime column mydt above, you can do this:</p>
<pre><code>df.reset_index(inplace=True)
print(df.mydt.dt.date)
0 2015-01-02
1 2015-01-02
2 2015-01-02
3 2015-01-02
4 2015-01-02
5 2015-01-02
6 2015-01-05
7 2015-01-05
dtype: object
</code></pre>
<p>and then you can do groupby operations using only the date portion </p>
<pre><code>print(df.groupby(df.mydt.dt.date)['dif'].sum())
2015-01-02 -9.359855
2015-01-05 0.423094
Name: dif, dtype: float64
</code></pre>
|
python|python-2.7|pandas|indexing
| 1
|
376,274
| 34,132,279
|
numpy build with mingw fails on Window with AttributeError: Mingw32CCompiler instance has no attribute 'compile_options', How to resolve this?
|
<p>I've downloaded the numpy source from <a href="https://github.com/numpy/numpy" rel="nofollow">git-hub</a>, I also have mingw installed and all the paths set on Windows, I can compile C files with mingw just fine so this is also working.<br>
I'm following instructions on <a href="http://www.scipy.org/scipylib/building/windows.html" rel="nofollow">scipy website</a>, with <br><code>python.exe setup.py config --compiler=mingw32 build --compiler=mingw32 bdist_wininst</code><br> It compiles for a while and then suddenly stops with this-
<br></p>
<pre><code>gcc -O2 -Wall -Wstrict-prototypes -DNPY_MINGW_USE_CUSTOM_MSVCR -D__MSVCRT_VERSION__=0x0900 -Inumpy\core\src\private -Inu
mpy\core\src -Inumpy\core -Inumpy\core\src\npymath -Inumpy\core\src\multiarray -Inumpy\core\src\umath -Inumpy\core\src\npysort -IC:\python27\include -IC:\python27\PC -c _configtest.c -o _configtest.o
_configtest.c: In function 'main':
_configtest.c:7:12: error: 'Py_UNICODE_WIDE' undeclared (first use in this function)
(void) Py_UNICODE_WIDE;
^~~~~~~~~~~~~~~
_configtest.c:7:12: note: each undeclared identifier is reported only once for each function it appears in failure.
removing: _configtest.c _configtest.o
.
.
.
.
File "D:\pylibs\numpy-master\numpy\distutils\command\build_src.py", line 386, in generate_sources
source = func(extension, build_dir)
File "numpy\core\setup.py", line 443, in generate_config_h
rep = check_long_double_representation(config_cmd)
File "numpy\core\setup_common.py", line 194, in check_long_double_representation
cmd.compiler.compile_options.remove("/GL")
AttributeError: Mingw32CCompiler instance has no attribute 'compile_options'
</code></pre>
<p>How to resolve this please?</p>
|
<p>In your distribution, add the following code at line 194 of file numpy\core\setup_common.py and rebuild. It should allow you to build.</p>
<pre><code> except AttributeError:
pass
</code></pre>
|
python|windows|numpy
| 1
|
376,275
| 34,298,129
|
Select values in Pandas groupby dataframe that are present in n previous groups
|
<p>I have a Pandas dataframe <code>groupby</code> object which looks like the following:</p>
<pre><code> ID
2014-11-30 1
2
3
2014-12-31 1
2
3
4
2015-01-31 2
3
4
2015-02-28 1
3
4
5
2015-03-31 1
2
4
5
6
2015-04-30 3
4
5
6
</code></pre>
<p>What I want to do is create another dataframe where the values in groupby date x are values that are in each of groupby dates y(x-1) thru y(x-n) where y is the n period previous groupby. So for instance, if n=1, then if x groupby period is '2015-04-30', then you would check against '2015-03-31'. If n=2, then if groupby date '2015-02-28', then you would check against groupby dates ['2015-01-31', '2014-12-31'].</p>
<p>The resulting dataframe from the above would look like this for n=1:</p>
<pre><code> ID
2014-12-31 1
2
3
2015-01-31 2
3
4
2015-02-28 3
4
2015-03-31 1
4
5
2015-04-30 4
5
6
</code></pre>
<p>The resulting dataframe for n=2 would be:</p>
<pre><code>2015-01-31 2
3
2015-02-28 3
4
2015-03-31 4
2015-04-30 4
5
</code></pre>
<p>Looking forward to some pythonic solutions!</p>
|
<p>This would seem to work:</p>
<pre><code>def filter_unique(df, n):
data_by_date = df.groupby('date')['ID'].apply(lambda x: x.tolist())
filtered_data = {}
previous = []
for i, (date, data) in enumerate(data_by_date.items()):
if i >= n:
if len(previous)==1:
filtered_data[date] = list(set(previous[i-n]).intersection(data))
else:
filtered_data[date] = list(set.intersection(*[set(x) for x in previous[i-n:]]).intersection(data))
else:
filtered_data[date] = data
previous.append(data)
result = pd.DataFrame.from_dict(filtered_data, orient='index').stack()
result.index = result.index.droplevel(1)
filter_unique(df, 2)
1/31/15 2
1/31/15 3
1/31/15 4
11/30/14 1
11/30/14 2
11/30/14 3
12/31/14 2
12/31/14 3
2/28/15 1
2/28/15 3
3/31/15 1
3/31/15 4
4/30/15 4
4/30/15 5
</code></pre>
|
python|pandas|group-by
| 1
|
376,276
| 34,213,946
|
Force Python Pandas DataFrame( read_csv() method) to avoid/not consider first row of my csv/txt file as header
|
<p>I am reading a txt file (data.txt) using pandas read_csv method. The file has 16 columns and 600 rows. However, after reading the csv into dataframe, I observed that first row in my data.txt file has been taken as the column headings in the dataframe. This reduces the size of my dataframe to 599 from 600 in my text file. How can I force pandas to not use first row as headers for Dataframe.
I am using this code to read the file.</p>
<pre><code>import pandas as pd
df = pd.read_csv("C:\<my_directory_path>\data.txt)
</code></pre>
|
<p>Just add header=None: </p>
<pre><code>import pandas as pd
df = pd.read_csv("C:\<my_directory_path>\data.txt",header=None)
</code></pre>
|
python|csv|pandas|dataframe
| 1
|
376,277
| 34,127,559
|
Finding intersection of points on a python graph generated by a list of points
|
<p>I'm trying to find the intersection between two lines that were generated by a list of points.</p>
<p>I had two list of points, and then I plotted them using</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
a = arrayOfPoints1
plt.plot(*zip(*a))
b = arrayOfPoints2
plt.plot(*zip(*b))
plt.show()
</code></pre>
<p>Now I've generated a graph that looks something like this<a href="https://i.stack.imgur.com/nK9mQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nK9mQ.jpg" alt="enter image description here"></a></p>
<p>My goal is to find all the points where these two graphs intersect now (the blue and green line intersections). At first glance, it might seem like the points would just be the points that are in both array a and b, but there could be some intersections that occur that are not in the arrays when the lines between the points are generated.</p>
<p>How do I go about finding all the intersections?</p>
<p>Note: I'm looking for a solution that works in Python 2.7</p>
|
<p>If both graphs use the same X-axis values (different functions evaluated on the same array), you could do it manually by direct computation of the intersection of each consecutive pair of segments. You have to consider several cases (if the segments are parallel, etc). Intersection can be calculated with the equation of lines in the plane. You can addapt this method for the general case, by taking the union of both X-axis values and computing the needed values.</p>
<p>An easier (but probably less efficient if you have to compute it millions of times), is to rely on <code>shapely</code> library. This method also works if the paths do not use the same X-axis values. A simple example of how to do it below.</p>
<pre><code>from shapely.geometry import LineString
l1 = LineString([(0,0), (10,10)])
l2 = LineString([(1,0), (5,10), (10,0)])
intersection = l1.intersection(l2)
intersect_points = [list(p.coords)[0] for p in intersection]
print intersect_points
</code></pre>
<p>This will return </p>
<pre><code>[(1.6666666666666667, 1.6666666666666665), (6.666666666666667, 6.666666666666667)]
</code></pre>
|
python-2.7|numpy|matplotlib
| 1
|
376,278
| 34,001,816
|
Scipy: Partition array into 3 subarrays
|
<p>I am trying to figure out whether there's a numpy/scipy function to efficiently partition an array into subarrays using a certain rule.</p>
<p>My problem is the following:
I have a nxn matrix, lets call it W. And I have a vector h.
I now want to partition the column vectors of W into 3 arrays:</p>
<ul>
<li>W_pos, where > 0 for all vectors w from W_pos</li>
<li>W_null, where = 0 for all vectors w from W_null</li>
<li>W_neg, where < 0 for all vectors w from W_neg</li>
</ul>
<p>Right now I am doing it like this, which is working but I think it is not very efficient:</p>
<pre><code> nonzero_indices = (sp.isclose(sp.dot(h_k.T, W),0, 10e-12) == False)
self.W_null = W[:,~nonzero_indices]
W_nonzero = W[:,nonzero_indices]
pos_indices = (sp.dot(h_k.T, W_nonzero) > 0)
W_pos = W_nonzero[:,pos_indices]
W_neg = W_nonzero[:,~pos_indices]
</code></pre>
<p>Is there a better way? Thanks for your help and if there something not clear please let me know.
Cheers</p>
|
<pre><code>w=np.random.random((10,10))-0.5 # example array
</code></pre>
<p>.</p>
<pre><code>wneg = w[w<0]
wzero = w[w==0]
wpos = w[w>0]
</code></pre>
|
python|arrays|numpy|scipy
| 2
|
376,279
| 34,205,659
|
Speed up Newtons Method using numpy arrays
|
<p>I am using Newton's method to generate fractals that visualise the roots and the number of iterations taken to find the roots.</p>
<p>I am not happy with the speed taken to complete the function. Is there are a way to speed up my code?</p>
<pre><code>def f(z):
return z**4-1
def f_prime(z):
'''Analytic derivative'''
return 4*z**3
def newton_raphson(x, y, max_iter=20, eps = 1.0e-20):
z = x + y * 1j
iter = np.zeros((len(z), len(z)))
for i in range(max_iter):
z_old = z
z = z-(f(z)/f_prime(z))
for k in range(len(z[:,0])): #this bit of the code is slow. Can I do this WITHOUT for loops?
for j in range(len(z[:,1])):
if iter[k,j] != 0:
continue
if z[k,j] == z_old[k,j]:
iter[k,j] = i
return np.angle(z), iter #return argument of root and iterations taken
n_points = 1000; xmin = -1.5; xmax = 1.5
xs = np.linspace(xmin, xmax, n_points)
X,Y = np.meshgrid(xs, xs)
dat = newton_raphson(X, Y)
</code></pre>
|
<p>You can simply vectorize the loops for fairly large speed gains:</p>
<pre><code>def newton_raphson(x, y, max_iter=20, eps = 1.0e-20):
z = x + y * 1j
nz = len(z)
iters = np.zeros((nz, nz))
for i in range(max_iter):
z_old = z
z = z-(f(z)/f_prime(z))
mask = (iters == 0) & (z == z_old)
iters[mask] = i
return np.angle(z), items
</code></pre>
<p>Your presented equations are fairly simple; however, I would assume that your <code>f</code> and <code>f_prime</code> functions are significantly more complex. Further speedups can likely be found in those equations rather than in the presented problem.</p>
<p>I would also avoid the use of <code>iter</code> as a variable name as it is a built in python function.</p>
|
python|arrays|performance|numpy|newtons-method
| 2
|
376,280
| 33,985,392
|
`ValueError: operands could not be broadcast together` when attempting to plot a univariate distribution from a DataFrame column using Seaborn
|
<p>I'm trying to plot the univariate distribution of a column in a Pandas <code>DataFrame</code>. Here's the code:</p>
<pre><code>ad = summary["Acquired Delay"]
sns.distplot(ad)
</code></pre>
<p>This throws:</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (9,) (10,) (9,)
</code></pre>
<p>I've checked to see if there is anything wrong about this series, passing it as <code>ad.values</code>, but the same error occurs. The problem disappears when I use the <code>.plot</code> method of <code>ad</code>:</p>
<pre><code>ad = summary["Acquired Delay"]
ad.plot.hist()
</code></pre>
<p><a href="https://i.stack.imgur.com/2ZfgS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2ZfgS.png" alt="Successful Plotting with matplotlib"></a></p>
<p>The problem disappears. The plot is less translucent, but reasonably good. Is this a common bug in seaborn? Has this happened because my data contained large number of zeros?</p>
|
<p>This is happening because the seaborn function <code>distplot</code> includes lines</p>
<pre><code> if bins is None:
bins = min(_freedman_diaconis_bins(a), 50)
</code></pre>
<p>to set the number of bins when it's not specified, and the <code>_freedman_diaconis_bins</code> function can return a non-integer number if the length of <code>a</code> isn't square and the IQR is 0. And if <code>a</code> is dominated by enough zeros, the IQR will be zero as well, e.g.</p>
<pre><code>>>> sns.distributions.iqr([0]*8 + [1]*2)
0.0
</code></pre>
<p>so your intuition that the high number of zeros might be playing a role was right, I think. Anyway, if we get a float number back for the number of bins, that will break <code>np.histogram</code>:</p>
<pre><code>>>> np.histogram([0,0,1], bins=2)
(array([2, 1], dtype=int32), array([ 0. , 0.5, 1. ]))
>>> np.histogram([0,0,1], bins=2.1)
Traceback (most recent call last):
File "<ipython-input-4-9aae3e6c77af>", line 1, in <module>
np.histogram([0,0,1], bins=2.1)
File "/home/dsm/sys/pys/3.5/lib/python3.5/site-packages/numpy/lib/function_base.py", line 249, in histogram
n += np.bincount(indices, weights=tmp_w, minlength=bins).astype(ntype)
ValueError: operands could not be broadcast together with shapes (2,) (3,) (2,)
</code></pre>
<p>So I think this is a bug, and you could open a ticket. You can work around it by passing the number of bins directly:</p>
<pre><code>sns.displot(ad, bins=10)
</code></pre>
<p>or if you really wanted, you could monkeypatch a fix with something like</p>
<pre><code>sns.distributions._freedman_diaconis_bins_orig =
sns.distributions._freedman_diaconis_bins
sns.distributions._freedman_diaconis_bins = lambda x:
np.round(sns.distributions._freedman_diaconis_bins_orig(x))
</code></pre>
|
python|numpy|pandas|matplotlib|seaborn
| 2
|
376,281
| 34,075,094
|
Python struct like Matlab
|
<p>I seem to have found lots of hack answers, without a 'standardized' answer to this questions. I am looking for an implementation of Matlab's struct in Python, specifically with the two following capabilities:</p>
<ol>
<li>in struct 's', access field value 'a' using dot notation (i.e. s.a)</li>
<li>create fields on the fly, without initialization of dtype, format (i.e. s.b = np.array([1,2,3,4]) )</li>
</ol>
<p>Is there no way to do this in Python? To date, the only solution I have found is <a href="https://stackoverflow.com/questions/11637045/complex-matlab-like-data-structure-in-python-numpy-scipy">here</a>, using a dummy class structtype(). This works but feels a little hackish.
I also thought maybe scipy would expose its mat_struct, used in loadmat(), but I couldn't find a public interface to it.
What do other people do? I'm not too worried about performance for this struct, its more of a convenience.</p>
|
<p>If you're on 3.3 and up, there's <a href="https://docs.python.org/3/library/types.html#types.SimpleNamespace" rel="nofollow"><code>types.SimpleNamespace</code></a>. Other than that, an empty class is probably your best option.</p>
|
python|matlab|numpy|scipy
| 4
|
376,282
| 34,291,023
|
Pandas: rounding halfway values in dataframe using np.round and applymap
|
<p>I want to understand why I get different values when using 1) np.round and 2) applymap on the same DF</p>
<p>my df</p>
<pre><code> df1 = pd.DataFrame({'total': [25.23, 3.55, 76.55, 36.48, 45.59]}, index=['cat1', 'cat2', 'cat3', 'cat4', 'cat5'])
total
cat1 25.23
cat2 3.55
cat3 76.55
cat4 36.48
cat5 45.59
</code></pre>
<p>np.round returns</p>
<pre><code>np.round(df1, 1)
total
cat1 25.2
cat2 3.6
cat3 76.6
cat4 36.5
cat5 45.6
</code></pre>
<p>appymap returns </p>
<pre><code>df1.applymap(lambda x: round(x,1))
total
cat1 25.2
cat2 3.5
cat3 76.5
cat4 36.5
cat5 45.6
</code></pre>
<p>As you can see, np.round rounds up halfway values while applymap rounds down. What's going on?</p>
|
<p>This is documented behaviour in python 2: <a href="https://docs.python.org/2/library/functions.html#round" rel="nofollow"><code>round</code></a> and <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.around.html#numpy.around" rel="nofollow"><code>np.around</code></a> in python 3 you get the same results:</p>
<pre><code>In [63]:
np.round(df1['total'], 1)
Out[63]:
cat1 25.2
cat2 3.6
cat3 76.6
cat4 36.5
cat5 45.6
Name: total, dtype: float64
In [69]:
df1.applymap(lambda x: round(x,1))
Out[69]:
total
cat1 25.2
cat2 3.6
cat3 76.6
cat4 36.5
cat5 45.6
</code></pre>
|
numpy|pandas|decimal|rounding
| 1
|
376,283
| 34,418,668
|
Numpy and Pandas interpolation also changes the original data
|
<p>I am trying to interpolate data for some missing days. The orginal data is;</p>
<pre><code>2012-06-27 00:00:00 17
2012-06-27 01:00:00 17
2012-06-27 02:00:00 18
2012-06-27 03:00:00 18
2012-06-27 04:00:00 19
2012-06-27 05:00:00 20
2012-06-27 06:00:00 22
2012-06-27 07:00:00 23
2012-06-27 08:00:00 25
2012-06-27 09:00:00 27
2012-06-27 10:00:00 27
2012-06-27 11:00:00 29
2012-06-27 12:00:00 29
2012-06-27 13:00:00 30
2012-06-27 14:00:00 30
2012-06-27 15:00:00 29
2012-06-27 16:00:00 28
2012-06-27 17:00:00 26
2012-06-27 18:00:00 25
2012-06-27 19:00:00 24
2012-06-27 20:00:00 23
2012-06-27 21:00:00 23
2012-06-27 22:00:00 16
2012-06-27 23:00:00 15
2012-06-29 00:00:00 15
2012-06-29 01:00:00 16
2012-06-29 02:00:00 16
2012-06-29 03:00:00 16
2012-06-29 04:00:00 17
2012-06-29 05:00:00 17
2012-06-29 06:00:00 18
2012-06-29 07:00:00 19
2012-06-29 08:00:00 20
2012-06-29 09:00:00 22
2012-06-29 10:00:00 22
2012-06-29 11:00:00 22
2012-06-29 12:00:00 22
2012-06-29 13:00:00 22
2012-06-29 14:00:00 22
2012-06-29 15:00:00 22
2012-06-29 16:00:00 21
2012-06-29 17:00:00 19
2012-06-29 18:00:00 17
2012-06-29 19:00:00 16
2012-06-29 20:00:00 15
2012-06-29 21:00:00 14
2012-06-29 22:00:00 14
2012-06-29 23:00:00 13
</code></pre>
<p>As you can see 2014-12-28 is missing, so I tried to interpolate it using both Numpy and Pandas.
For Numpy the code is;</p>
<pre><code>def inter_lin_nan(ts_temp, rule):
ts_temp = ts_temp.resample(rule)
mask = np.isnan(ts_temp)
# interpolling missing values
ts_temp[mask] = np.interp(np.flatnonzero(mask), np.flatnonzero(~mask),ts_temp[~mask])
return(ts_temp)
</code></pre>
<p>and with Pandas I used;</p>
<pre><code>df_temp=df_temp.asfreq('1h')
df_temp['Temp2'] = df_temp['temp'].interpolate(method='linear')
</code></pre>
<p>The problem is, both of these method does interpolate for the missing day, but they also change original data for 2014-12-29. Do you know why this is happening or am I missing something? </p>
|
<p>I cannot reproduce the problem, but this works for me (assuming your data frame is indexed on datetime):</p>
<pre><code>df_resampled = df.resample('1H').interpolate(method='linear')
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/nHHcP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nHHcP.png" alt="enter image description here"></a></p>
<p>As you can see, the lines overlap perfectly for the days where there is data: no original data is 'changed'. The interpolation seems to make sense too, and in this plot the missing values in the original series were set to 0 to allow a comparison.</p>
|
python|numpy|pandas|interpolation
| 0
|
376,284
| 36,926,443
|
How to unpack a pandas Panel created with a dictionary?
|
<p>I have several <code>.txt</code> files in a subdirectoy, <code>/subdirect/</code></p>
<p>These files are </p>
<pre><code>file1.txt
file2.txt
file3.txt
file4.txt
...
</code></pre>
<p>Using glob, I can put these into a three-dimensional panel, using the filename as the key for key-value pairs. </p>
<pre><code>import glob
import pandas as pd
dataframe = {filename: pd.read_csv(filename) for filename in glob.glob('*.txt') # dictionary
data = pd.Panel.from_dict(dataframe) # create panel
</code></pre>
<p>Now, I would like to unpack these files to manipulate each DataFrame individually and plot data. </p>
<pre><code>for fname in data:
df = pd.read_csv(fname)
df['total_sum'] = df[["column1", "column2", "column3"]].sum(axis=1) # sum total reads
df.plot(kind='bar')
</code></pre>
<p>However, I do not seem to be unpacking the panel correctly as the dimensions have completely changed. </p>
<p>How does one unpack a pandas Panel? </p>
|
<p>How about reading the data files individually instead, since you don't seem to be interested in the <code>Panel</code> structure per se:</p>
<pre><code>import glob
import pandas as pd
for filename in glob.glob('*.txt'):
df = pd.read_csv(filename)
df['total_sum'] = df[["column1", "column2", "column3"]].sum(axis=1) # sum total reads
df.plot(kind='bar')
</code></pre>
<p>Alternatively, take a look at <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Panel.to_frame.html" rel="nofollow"><code>pd.Panel.to_frame()</code></a> to convert <code>Panel</code> to <code>DataFrame</code>. For instance, with a <code>Panel</code> from a <code>dict</code> with two <code>DataFrames</code>:</p>
<pre><code>df = pd.DataFrame(np.random.random(size=(20, 10)))
panel = pd.Panel.from_dict({'1': df, '2': df.add(10)})
<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 20 (major_axis) x 10 (minor_axis)
Items axis: 1 to 2
Major_axis axis: 0 to 19
Minor_axis axis: 0 to 9
</code></pre>
<p>Using <code>to_frame()</code> gets you a long-format <code>DataFrame</code> with two columns and a <code>MultiIndex</code> with length of <code>row</code> x <code>column</code>. To plot, you could iterate over the <code>columns</code> of <code>data_frame</code> using <code>.items()</code> and use <code>.unstack()</code> to convert into format suitable for plotting:</p>
<pre><code>data_frame = panel.to_frame()
MultiIndex: 200 entries, (0, 0) to (19, 9)
Data columns (total 2 columns):
1 200 non-null float64
2 200 non-null float64
dtypes: float64(2)
memory usage: 4.7+ KB
None
for i, data in data_frame.items():
data.unstack().plot()
</code></pre>
<p>On performance - if you start from a panel, summing there is faster than grouping and unstacking. It's also faster than summing an individual dataframe.</p>
<pre><code>%timeit panel.sum(axis=1)
10000 loops, best of 3: 111 µs per loop
%timeit panel.to_frame().groupby(data_frame.columns, axis=1).apply(lambda x: x.unstack(0).sum(axis=1))
100 loops, best of 3: 3.63 ms per loop
df = data_frame.unstack(0)
%timeit df.loc[:, '1'].sum(axis=1)
1000 loops, best of 3: 409 µs per loop
</code></pre>
|
python|csv|dictionary|pandas|panel
| 1
|
376,285
| 36,715,110
|
import nested data into pandas from a json file
|
<p>I have a generated file as follows:</p>
<pre><code>[{"intervals": [{"overwrites": 35588.4, "latency": 479.52}, {"overwrites": 150375.0, "latency": 441.1485001192274}], "uid": "23"}]
</code></pre>
<p>I simplified the file a bit for space reasons (there are more columns besides for the "overwrites" and "latency" ). I would like to import the data into a dataframe so I can later on draw the latency. I tried the following:</p>
<pre><code>with open(os.path.join(path, "my_file.json")) as json_file:
curr_list=json.load(json_file)
df=pd.Series(curr_list[0]['intervals'])
print df
</code></pre>
<p>which returned: </p>
<p><strong>0 {u'overwrites': 35588.4, u'latency...</strong></p>
<p><strong>1 {u'overwrites': 150375.0, u'latency...</strong></p>
<p>However I couldn't get to store df in a data structure that allows me to access the latency field as follows:</p>
<pre><code>graph = df[['latency']]
graph.plot(title="latency")
</code></pre>
<p>Any ideas?
Thanks for the help!</p>
|
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.json.json_normalize.html" rel="nofollow"><code>json_normalize</code></a>:</p>
<pre><code>import pandas as pd
from pandas.io.json import json_normalize
data = [{"intervals": [{"overwrites": 35588.4, "latency": 479.52},
{"overwrites": 150375.0, "latency": 441.1485001192274}],
"uid": "23"}]
result = json_normalize(data, 'intervals', ['uid'])
print result
latency overwrites uid
0 479.5200 35588.4 23
1 441.1485 150375.0 23
</code></pre>
|
python|json|pandas
| 1
|
376,286
| 37,025,485
|
Getting only top values within each group that have the same column value
|
<p>I have a table that looks something like this:</p>
<pre><code>Column 1 | Column 2 | Column 3
1 a 100
1 r 100
1 h 200
1 j 200
2 a 50
2 q 50
2 k 40
3 a 10
3 q 150
3 k 150
</code></pre>
<p>Imagine I am trying to get the top values of each groupby('Column 1')</p>
<p>Normally I would just .head(n) but in this case I am also trying to get only top rows with the same Column 3 value like:</p>
<pre><code>Column 1 | Column 2 | Column 3
1 a 100
1 r 100
2 a 50
2 q 50
3 a 10
</code></pre>
<p>Assuming the table is already in the order I want it</p>
<p>Any advice would be highly appreciated</p>
|
<p>I think you need first need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.first.html" rel="nofollow"><code>first</code></a> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow"><code>merge</code></a>:</p>
<pre><code>print df.groupby('Column 1')['Column 3'].first().reset_index()
Column 1 Column 3
0 1 100
1 2 50
2 3 10
print pd.merge(df,
df.groupby('Column 1')['Column 3'].first().reset_index(),
on=['Column 1','Column 3'])
Column 1 Column 2 Column 3
0 1 a 100
1 1 r 100
2 2 a 50
3 2 q 50
4 3 a 10
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>df = pd.concat([df]*1000).reset_index(drop=True)
%timeit pd.merge(df, df.groupby('Column 1')['Column 3'].first().reset_index(), on=['Column 1','Column 3'])
100 loops, best of 3: 3.58 ms per loop
%timeit df[(df.assign(diff=df.groupby('Column 1')['Column 3'].diff().fillna(0)).groupby('Column 1')['diff'].cumsum() == 0)]
100 loops, best of 3: 5.06 ms per loop
</code></pre>
|
python|pandas|dataframe
| 1
|
376,287
| 36,928,487
|
A value is trying to be set on a copy of a slice from a DataFrame
|
<p>I have a dataframe column period that has values by Quarters(Q1,Q2,Q3,Q4) that I want to convert into associated month (see dict). My code below works however wondering why I'm getting this warning.</p>
<p>A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead </p>
<pre><code>quarter = {"Q1":"Mar","Q2":"Jun","Q3":"Sep","Q4":"Dec"}
df['period'] = df['period'].astype(str).map(quarter)
</code></pre>
|
<p>"A value is trying to be set on a copy of a slice from a DataFrame" is a warning. SO contains many posts on this subject.</p>
<p><code>df.assign</code> was added in Pandas 0.16 and is a good way to avoid this warning.</p>
<pre><code>quarter = {"Q1": "Mar", "Q2": "Jun", "Q3": "Sep", "Q4": "Dec"}
df = pd.DataFrame({'period': ['Q1', 'Q2', 'Q3', 'Q4', 'Q5'], 'qtr': [1, 2, 3, 4, 5]})
df
period qtr
0 Q1 1
1 Q2 2
2 Q3 3
3 Q4 4
4 Q5 5
df = df.assign(period=[quarter.get(q, q) for q in df.period])
# Unmapped values unchanged.
>>> df
period qtr
0 Mar 1
1 Jun 2
2 Sep 3
3 Dec 4
4 Q5 5
df = pd.DataFrame({'period': ['Q1', 'Q2', 'Q3', 'Q4', 'Q5'], 'qtr': [1, 2, 3, 4, 5]})
df = df.assign(period=df.period.map(quarter))
# Unmapped values get `NaN`.
>>> df
period qtr
0 Mar 1
1 Jun 2
2 Sep 3
3 Dec 4
4 NaN 5
</code></pre>
<blockquote>
<p>Assign new columns to a DataFrame, returning a new object
(a copy) with all the original columns in addition to the new ones.</p>
<p>.. versionadded:: 0.16.0</p>
</blockquote>
|
python|dictionary|pandas|dataframe
| 12
|
376,288
| 36,973,544
|
TensorFlow ValueError Dimensions are not compatible
|
<p>I have a simple program, mostly copied from the MNIST tutorial on Tensorflow. I have a 2D array 118 items long, with each subarray being 13 long. And a 2nd 2D array that is 118 long with a single integer in each sub array, containing either 1, 2, or 3 (the matching class of the first array's item)</p>
<p>Whenever I run it however, I get various dimension errors.</p>
<p>either <code>ValueError: Dimensions X and X are not compatible</code>
or <code>ValueError: Incompatible shapes for broadcasting: (?, 13) and (3,)</code>
or something along those lines. I've tried most every combination of numbers I can think of here in the various places to get it to align properly but am unable to get it to.</p>
<pre><code>x = tf.placeholder(tf.float32, [None, 13])
W = tf.Variable(tf.zeros([118, 13]))
b = tf.Variable(tf.zeros([3]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 13])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(1000):
batch_xs = npWineList
batch_ys = npWineClass
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
</code></pre>
|
<p>First, it's not clear how many labels you have (3 or 13), and what is the size of input (X) vector (113 or 13)? I assume you have 13 labels, and 118 X vectors based on:</p>
<pre><code>W = tf.Variable(tf.zeros([118, 13]))
y_ = tf.placeholder(tf.float32, [None, 13])
</code></pre>
<p>Then, you may change your code something like this:</p>
<pre><code>x = tf.placeholder(tf.float32, [None, 118])
W = tf.Variable(tf.zeros([118, 13]))
b = tf.Variable(tf.zeros([13]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 13])
</code></pre>
<p>Let me know it addresses your issue. </p>
|
python|arrays|numpy|neural-network|tensorflow
| 1
|
376,289
| 36,728,111
|
Update Jupyter to Python 3.4 in default Tensorflow docker container
|
<p>I am using gcr.io/tensorflow/tensorflow docker image and need to update jupyter to python version 3.4 within the container. I've tried searching online but haven't really found how to do this. Could someone help me with this by explaining step-by-step?</p>
|
<p>There are now python3 builds available in the nightly docker images as of <a href="https://github.com/tensorflow/tensorflow/pull/6030" rel="nofollow noreferrer">pull 6030</a>. See the <a href="https://hub.docker.com/r/tensorflow/tensorflow/tags/" rel="nofollow noreferrer">TensorFlow public docker repository</a>
for a list of all docker images.</p>
|
python-3.x|docker|tensorflow|jupyter
| 0
|
376,290
| 36,818,832
|
pandas plot bar chart -- Unexpected layout
|
<p>I am trying to plot bar char with line chart. I created 2 subplot.
Using the below code </p>
<pre><code> RSI_14 = df['RSI_14']
df['ATR_14'] = df['ATR_14'].astype(float)
ATR_14 = df['ATR_14']
fig5 = plt.figure(figsize=(14,9), dpi=200)
ax1 = fig5.add_subplot(211)
ax2 = fig5.add_subplot(212)
ax1.plot_date(x=days, y=RSI_14,fmt="r-",label="ROC_7")
ax2 = df[['indx','ATR_14']].plot(kind='bar', title ="V comp",figsize=(7,4),legend=True, fontsize=12)
ticklabels = ['']*len(df.indx)
ax.xaxis.set_major_formatter(ticker.FixedFormatter(ticklabels))
plt.gcf().autofmt_xdate()
pp.savefig()
</code></pre>
<p>The image created below is very different from what I am expecting. I have tried few other methods but couldn't figure it .
Any help is appreciated.</p>
<p><a href="https://i.stack.imgur.com/awLDb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/awLDb.png" alt="enter image description here"></a></p>
<p>Here is sample data </p>
<pre><code>indx ATR_14 RSI_14
20141015 0.01737336 99.48281325
20141016 0.017723579 99.48281325
20141017 0.020027102 99.53091876
20141020 0.024023488 99.67180924
20141021 0.02415369 99.72027954
20141022 0.026266531 99.76100661
20141023 0.026764327 85.41188977
</code></pre>
|
<p>I'm not sure if you want to see this as a primary and secondary axis but here's how you'd do that.</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
from pandas import Timestamp
df = pd.DataFrame(
{'ATR_14': {Timestamp('2014-10-15 00:00:00'): 0.01737336,
Timestamp('2014-10-16 00:00:00'): 0.017723579,
Timestamp('2014-10-17 00:00:00'): 0.020027101999999998,
Timestamp('2014-10-20 00:00:00'): 0.024023488,
Timestamp('2014-10-21 00:00:00'): 0.02415369,
Timestamp('2014-10-22 00:00:00'): 0.026266531,
Timestamp('2014-10-23 00:00:00'): 0.026764327},
'RSI_14': {Timestamp('2014-10-15 00:00:00'): 99.48281325,
Timestamp('2014-10-16 00:00:00'): 99.48281325,
Timestamp('2014-10-17 00:00:00'): 99.53091876,
Timestamp('2014-10-20 00:00:00'): 99.67180924,
Timestamp('2014-10-21 00:00:00'): 99.72027954,
Timestamp('2014-10-22 00:00:00'): 99.76100661,
Timestamp('2014-10-23 00:00:00'): 85.41188977}},
columns=['ATR_14', 'RSI_14'])
fig, ax1 = plt.subplots()
ax1.bar(df.index, df['ATR_14'], width=0.65, align='center', color='#F27727',
edgecolor='#F27727')
ax1.set_xlabel('Date')
ax1.set_ylabel('ATR_14')
ax2 = ax1.twinx()
ax2.plot(df.index, df['RSI_14'], color='#058DC7', linewidth=4, marker='o',
markersize=10, markeredgecolor='w', markeredgewidth=3)
ax2.set_ylabel('RSI_14', rotation=270)
fig.autofmt_xdate()
#plt.tight_layout()
plt.show()
</code></pre>
<p>Produces:
<a href="https://i.stack.imgur.com/L5gx2.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L5gx2.jpg" alt="enter image description here"></a></p>
|
python|pandas|matplotlib
| 0
|
376,291
| 36,771,245
|
python datetime extract hour minute fast
|
<p>I have a 878000*1 dataframe, where the 1 column is date in several years. I have the following code to create new columns and store year,month,day,hour,min,week in different new columns:</p>
<pre><code>for i in train.index:
train['Year'][i] = train.Dates[i].year
train['Month'][i] = train.Dates[i].month
train['Day'][i] = train.Dates[i].day
train['Hour'][i] = train.Dates[i].hour
train['Min'][i] = train.Dates[i].minute
train['Week'][i] = train.Dates[i].isocalendar()[1]
</code></pre>
<p>However this is really slow, my computer has been working overnight for this simple command and it still not finished. I am wondering if there is some faster way that I can use to extract these information without waiting for so long?</p>
|
<h3>Setup</h3>
<pre><code>In [15]: train = pd.DataFrame(pd.date_range('2015-12-31', '2016-12-31'), columns=['Dates'])
In [16]: train.head()
Out[16]:
Dates
0 2015-12-31
1 2016-01-01
2 2016-01-02
3 2016-01-03
4 2016-01-04
</code></pre>
<h3>Solution</h3>
<pre><code>In [17]: fields = ['Year', 'Month', 'Day', 'Hour', 'Min', 'Week']
In [18]: f = lambda x: pd.Series([x[0].year, x[0].month,
x[0].day, x[0].hour,
x[0].minute, x[0].isocalendar()[1]],
index=fields)
In [19]: train.apply(f, axis=1)
</code></pre>
<h3>Looks like</h3>
<pre><code>Out[19]:
Year Month Day Hour Min Week
0 2015 12 31 0 0 53
1 2016 1 1 0 0 53
2 2016 1 2 0 0 53
3 2016 1 3 0 0 53
4 2016 1 4 0 0 1
5 2016 1 5 0 0 1
6 2016 1 6 0 0 1
7 2016 1 7 0 0 1
8 2016 1 8 0 0 1
9 2016 1 9 0 0 1
10 2016 1 10 0 0 1
11 2016 1 11 0 0 2
12 2016 1 12 0 0 2
13 2016 1 13 0 0 2
14 2016 1 14 0 0 2
15 2016 1 15 0 0 2
16 2016 1 16 0 0 2
17 2016 1 17 0 0 2
18 2016 1 18 0 0 3
19 2016 1 19 0 0 3
20 2016 1 20 0 0 3
21 2016 1 21 0 0 3
22 2016 1 22 0 0 3
23 2016 1 23 0 0 3
24 2016 1 24 0 0 3
25 2016 1 25 0 0 4
26 2016 1 26 0 0 4
27 2016 1 27 0 0 4
28 2016 1 28 0 0 4
29 2016 1 29 0 0 4
.. ... ... ... ... ... ...
337 2016 12 2 0 0 48
338 2016 12 3 0 0 48
339 2016 12 4 0 0 48
340 2016 12 5 0 0 49
341 2016 12 6 0 0 49
342 2016 12 7 0 0 49
343 2016 12 8 0 0 49
344 2016 12 9 0 0 49
345 2016 12 10 0 0 49
346 2016 12 11 0 0 49
347 2016 12 12 0 0 50
348 2016 12 13 0 0 50
349 2016 12 14 0 0 50
350 2016 12 15 0 0 50
351 2016 12 16 0 0 50
352 2016 12 17 0 0 50
353 2016 12 18 0 0 50
354 2016 12 19 0 0 51
355 2016 12 20 0 0 51
356 2016 12 21 0 0 51
357 2016 12 22 0 0 51
358 2016 12 23 0 0 51
359 2016 12 24 0 0 51
360 2016 12 25 0 0 51
361 2016 12 26 0 0 52
362 2016 12 27 0 0 52
363 2016 12 28 0 0 52
364 2016 12 29 0 0 52
365 2016 12 30 0 0 52
366 2016 12 31 0 0 52
</code></pre>
|
python|pandas
| 2
|
376,292
| 37,113,556
|
TensorFlow: dimension error. how to debug?
|
<p>I'm a beginner with TF</p>
<p>I've tried to adapt a code which is working well with some other data (noMNIST) to some new data, and i have a dimensionality error, and i don't know how to deal with it. </p>
<p>To debug, i'm trying to use <code>tf.shape</code> method but it doesn't give me the info i need...</p>
<pre><code>def reformat(dataset, labels):
#dataset = dataset.reshape((-1, num_var)).astype(np.float32)
# Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
type(train_dataset)
</code></pre>
<blockquote>
<p>Training set (790184, 29) (790184, 39) Validation set (43899, 29)
(43899, 39) Test set (43899, 29) (43899, 39)</p>
</blockquote>
<pre><code># Adding regularization to the 1 hidden layer network
graph1 = tf.Graph()
batch_size = 128
num_steps=3001
import datetime
startTime = datetime.datetime.now()
def define_and_run_batch(beta):
num_RELU =1024
with graph1.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, num_var))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights_RELU = tf.Variable(
tf.truncated_normal([num_var, num_RELU]))
print(tf.shape(weights_RELU) )
biases_RELU = tf.Variable(tf.zeros([num_RELU]))
weights_layer1 = tf.Variable(
tf.truncated_normal([num_RELU, num_labels]))
biases_layer1 = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits_RELU = tf.matmul(tf_train_dataset, weights_RELU) + biases_RELU
RELU_vec = tf.nn.relu(logits_RELU)
logits_layer = tf.matmul(RELU_vec, weights_layer1) + biases_layer1
# loss = tf.reduce_mean(
# tf.nn.softmax_cross_entropy_with_logits(logits_layer, tf_train_labels))
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits_layer, tf_train_labels,name="cross_entropy")
l2reg = tf.reduce_sum(tf.square(weights_RELU))+tf.reduce_sum(tf.square(weights_layer1))
# beta = 0.005
loss = tf.reduce_mean(cross_entropy+beta*l2reg)
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.3).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits_layer)
print("ok")
print(tf.shape(weights_RELU) )
valid_prediction = tf.nn.softmax(
tf.matmul(tf.nn.relu((tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU)),weights_layer1)+biases_layer1)
test_prediction =tf.nn.softmax(
tf.matmul(tf.nn.relu((tf.matmul(tf_test_dataset, weights_RELU) + biases_RELU)),weights_layer1)+biases_layer1)
with tf.Session(graph=graph1) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
#
_, l, predictions, logits = session.run(
[optimizer, loss,train_prediction,logits_RELU], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
test_acc = accuracy(test_prediction.eval(), test_labels)
print("Test accuracy: %.1f%%" % test_acc)
print('loss=%s' % l)
x = datetime.datetime.now() - startTime
print(x)
return(test_acc,round(l,5))
define_and_run_batch(0.005)
</code></pre>
<blockquote>
<p>Tensor("Shape:0", shape=(2,), dtype=int32) ok Tensor("Shape_1:0",
shape=(2,), dtype=int32)
--------------------------------------------------------------------------- ValueError Traceback (most recent call
last) in ()
94 return(test_acc,round(l,5))
95
---> 96 define_and_run_batch(0.005)</p>
<p> in define_and_run_batch(beta)
54 print(tf.shape(weights_RELU) )
55 valid_prediction = tf.nn.softmax(
---> 56 tf.matmul(tf.nn.relu((tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU)),weights_layer1)+biases_layer1)
57
58 </p>
<p>/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.pyc
in matmul(a, b, transpose_a, transpose_b, a_is_sparse, b_is_sparse,
name)
949 transpose_a=transpose_a,
950 transpose_b=transpose_b,
--> 951 name=name)
952
953 sparse_matmul = gen_math_ops._sparse_mat_mul</p>
<p>/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.pyc in _mat_mul(a, b, transpose_a, transpose_b, name)
684 """
685 return _op_def_lib.apply_op("MatMul", a=a, b=b, transpose_a=transpose_a,
--> 686 transpose_b=transpose_b, name=name)
687
688 </p>
<p>/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.pyc
in apply_op(self, op_type_name, name, **keywords)
653 op = g.create_op(op_type_name, inputs, output_types, name=scope,
654 input_types=input_types, attrs=attr_protos,
--> 655 op_def=op_def)
656 outputs = op.outputs
657 return _Restructure(ops.convert_n_to_tensor(outputs), output_structure)</p>
<p>/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc
in create_op(self, op_type, inputs, dtypes, input_types, name, attrs,
op_def, compute_shapes, compute_device) 2040<br>
original_op=self._default_original_op, op_def=op_def) 2041 if
compute_shapes:
-> 2042 set_shapes_for_outputs(ret) 2043 self._add_op(ret) 2044<br>
self._record_op_seen_by_control_dependencies(ret)</p>
<p>/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc
in set_shapes_for_outputs(op) 1526 raise RuntimeError("No
shape function registered for standard op: %s" 1527<br>
% op.type)
-> 1528 shapes = shape_func(op) 1529 if len(op.outputs) != len(shapes): 1530 raise RuntimeError(</p>
<p>/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/common_shapes.pyc
in matmul_shape(op)
87 inner_a = a_shape[0] if transpose_a else a_shape[1]
88 inner_b = b_shape[1] if transpose_b else b_shape[0]
---> 89 inner_a.assert_is_compatible_with(inner_b)
90 return [tensor_shape.TensorShape([output_rows, output_cols])]
91 </p>
<p>/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.pyc
in assert_is_compatible_with(self, other)
92 if not self.is_compatible_with(other):
93 raise ValueError("Dimensions %s and %s are not compatible"
---> 94 % (self, other))
95
96 def merge_with(self, other):</p>
<p>ValueError: Dimensions Dimension(29) and Dimension(30) are not
compatible</p>
</blockquote>
<p>the whole code is on my github
<a href="https://github.com/FaguiCurtain/Kaggle-SF" rel="nofollow">https://github.com/FaguiCurtain/Kaggle-SF</a>
the Udacity Assignment 3 file is working</p>
<p>the original data is here
<a href="https://www.kaggle.com/c/sf-crime/data" rel="nofollow">https://www.kaggle.com/c/sf-crime/data</a></p>
<p>in Udacity, the data were images and each image was a 28x28 matrix which was reformatted into flattened vectors of size 784</p>
<p>in the Kaggle-SF file, i am feeding vectors of size 29, and labels can take 39 different values.</p>
<p>thanks for your help</p>
|
<p>In debug mode you can check shapes of you Tensors.
by the way you error is valid_prediction assignment. to make it better for debugging and reading it's better to define each step in a separate line. you are using 4 operation in 1 line. BTW in debug mode (for example in Pycharm) you can inspect the element and check what is causing the problem</p>
|
runtime-error|tensorflow|dimension
| 1
|
376,293
| 36,933,308
|
Generate image data from three numpy arrays
|
<p>I have three numpy arrays, <code>X</code>, <code>Y</code>, and <code>Z</code>.</p>
<p><code>X</code> and <code>Y</code> are coordinates of a spatial grid and each grid point <code>(X, Y)</code> has an intensity <code>Z</code>. I would like to save a PNG image using this data. Interpolation is not needed, as <code>X</code> and <code>Y</code> are guaranteed to cover each grid point between <code>min(X)</code> and <code>max(Y)</code>. </p>
<p>I'm guessing the solution lies within numpy's <code>meshgrid()</code> function, but I can't figure out how to reshape the <code>Z</code> array to <code>NxM</code> intensity data.</p>
<p>How can I do that?</p>
<hr>
<p>To clarify the input data structure, this is what it looks like:</p>
<pre><code> X | Y | Z
-----------------------------
0.1 | 0.1 | something..
0.1 | 0.2 | something..
0.1 | 0.3 | something..
...
0.2 | 0.1 | something..
0.2 | 0.2 | something..
0.2 | 0.3 | something..
...
0.2 | 0.1 | something..
0.1 | 0.2 | something..
0.3 | 0.3 | something..
...
</code></pre>
|
<p>To begin with, you should run this piece of code:</p>
<pre><code>import numpy as np
X = np.asarray(<X data>)
Y = np.asarray(<Y data>)
Z = np.asarray(<Z data>)
Xu = np.unique(X)
Yu = np.unique(Y)
</code></pre>
<hr>
<p>Then you could apply any of the following approaches. It is worth noting that all of them would work fine even if the data are NOT sorted (in contrast to the currently accepted answer):</p>
<p><strong>1) A <code>for</code> loop and <code>numpy.where()</code> function</strong></p>
<p>This is perhaps the simplest and most readable solution:</p>
<pre><code>Zimg = np.zeros((Xu.size, Yu.size), np.uint8)
for i in range(X.size):
Zimg[np.where(Xu==X[i]), np.where(Yu==Y[i])] = Z[i]
</code></pre>
<p><strong>2) A list comprehension and <code>numpy.sort()</code> funtion</strong></p>
<p>This solution - which is a bit more involved than the previous one - relies on Numpy's <a href="http://docs.scipy.org/doc/numpy/user/basics.rec.html" rel="nofollow noreferrer">structured arrays</a>:</p>
<pre><code>data_type = [('x', np.float), ('y', np.float), ('z', np.uint8)]
XYZ = [(X[i], Y[i], Z[i]) for i in range(len(X))]
table = np.array(XYZ, dtype=data_type)
Zimg = np.sort(table, order=['y', 'x'])['z'].reshape(Xu.size, Yu.size)
</code></pre>
<p><strong>3) Vectorization</strong></p>
<p>Using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.lexsort.html#numpy.lexsort" rel="nofollow noreferrer">lexsort</a> is an elegant and efficient way of performing the required task:</p>
<pre><code>Zimg = Z[np.lexsort((Y, X))].reshape(Xu.size, Yu.size)
</code></pre>
<p><strong>4) Pure Python, not using NumPy</strong></p>
<p>You may want to check out <a href="https://stackoverflow.com/questions/902761/saving-a-numpy-array-as-an-image/19174800#19174800">this link</a> for a pure Python solution without any third party dependencies.</p>
<hr>
<p>To end up, you have different options to save <code>Zimg</code> as an image:</p>
<pre><code>from PIL import Image
Image.fromarray(Zimg).save('z-pil.png')
import matplotlib.pyplot as plt
plt.imsave('z-matplotlib.png', Zimg)
import cv2
cv2.imwrite('z-cv2.png', Zimg)
import scipy.misc
scipy.misc.imsave('z-scipy.png', Zimg)
</code></pre>
|
python|numpy
| 2
|
376,294
| 54,998,630
|
Write Real Raw Binary File from Python
|
<p>I've tried multiple different variations, but for some reason I keep getting invalid binary digits (human readable) being output to the file:</p>
<pre><code>img_array = np.asarray(imageio.imread('test.png', as_gray=True), dtype='int8')
img_array.astype('int8').tofile("test.dat")
</code></pre>
<p>But this doesn't produce a valid binary file. Once the file is read into a Verilog tb, it complains about invalid binary digits and when I open up the file I see a bunch of numbers and other characters. It just doesn't seem like its translating correctly.</p>
<p>UPDATE:
After running</p>
<pre><code>print(img_array)
print(img_array.tobytes())
</code></pre>
<p>I can see that the int value '43' is being translated to '+' whereas I would expect '2B'. It seems to only be printing certain values to ASCII. Here's a simple example:</p>
<pre><code>x = np.array([[0, 9], [2, 3]], dtype='uint8')
print(x.astype('uint8'))
print(x.tobytes())
</code></pre>
<p>The output is:</p>
<p>[[0 9]</p>
<p>[2 3]]</p>
<p>b'\x00\t\x02\x03'</p>
<p>How Can I fix this?</p>
<p>Any help would be greatly appreciated.</p>
<p>Other Solutions that I've tried:</p>
<p><a href="https://stackoverflow.com/questions/12024358/write-a-string-as-raw-binary-into-a-file-python">Write a “string” as raw binary into a file Python</a></p>
<p><a href="https://stackoverflow.com/questions/10535687/write-a-raw-binary-file-with-numpy-array-data">Write a raw binary file with NumPy array data</a></p>
|
<p>This gave a workable solution which converts to a string of hex values. It's not exactly what I wanted, but it created a valid work around since my original question has yet to be answered. Although I didn't find this solution so I can reference where it came from, I'll share it here anyways. Apparently this handles signed integers as well:</p>
<pre><code>("{:0>2X}" * len(x.flatten())).format(*tuple(x.flatten() & (2**8-1)))
</code></pre>
|
python|python-3.x|numpy|binary
| 0
|
376,295
| 54,795,015
|
Why does gcloud ml-engine submit command give "requested cpu s exceed quota"?
|
<p>I am running a tensorflow object detection job on GCP with the folowing command: <br/></p>
<p>gcloud ml-engine jobs submit training <code>whoami</code>_object_detection_<code>date +%s</code> --job-dir=gs://${YOUR_GCS_BUCKET}/train --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz,/tmp/pycocotools/pycocotools-2.0.tar.gz --module-name object_detection.model_tpu_main --runtime-version 1.9 --scale-tier BASIC_TPU --region us-central1 -- --model_dir=gs://${YOUR_GCS_BUCKET}/train --tpu_zone us-central1 --pipeline_config_path=gs://${YOUR_GCS_BUCKET}/data/pinches_pipeline.config <br/></p>
<p>Got the following error: <br/></p>
<p>ERROR: (gcloud.ml-engine.jobs.submit.training) RESOURCE_EXHAUSTED: Quota failure for project seal-pinches. The requested 54.0 CPUs exceeds the allowed maximum of 20.0. To read more about Cloud ML Engine quota, see <a href="https://cloud.google.com/ml-engine/quotas" rel="nofollow noreferrer">https://cloud.google.com/ml-engine/quotas</a>.
- '@type': type.googleapis.com/google.rpc.QuotaFailure
violations:
- description: The requested 54.0 CPUs exceeds the allowed maximum of 20.0.</p>
<p>My question is how the requested CPU getting set to 54? I am not setting this anywhere explicitly.</p>
<p>Thanks in advance.</p>
|
<p>This option in your code is setting the size and type of your ml instance:</p>
<pre><code>--scale-tier BASIC_TPU
</code></pre>
<p>The BASIC_TPU costs $6.8474 per hour. I am not sure of the formula, but a Cloud TPU translates into N CPUs in equivalent billing. You also need to add the cost of the Cloud ML Engine machine type to your cost: standard is $0.2774 per hour.</p>
<p><a href="https://cloud.google.com/tpu/docs/quota" rel="nofollow noreferrer">Google's description:</a></p>
<blockquote>
<p>Quota is defined in terms of Cloud TPU cores. A single Cloud TPU
device comprises 4 TPU chips and 8 cores: 2 cores per TPU chip. A
Cloud TPU v2 Pod (alpha) consists of 64 TPU devices containing 256 TPU
chips (512 cores). The number of cores also specifies the quota for a
particular Cloud TPU. For example, a quota of 8 enables the use of 8
cores. A quota of 16 enables use of up to 16 cores, and so forth.</p>
</blockquote>
<p>Your CPU quota is 20. You will need to increase your quota or choose a different model such as <code>BASIC</code> or <code>BASIC_GPU</code> which does not use TPUs. Also double check that you have billing setup with a credit / debit card with sufficient credit available.</p>
|
python|tensorflow|google-cloud-platform
| 0
|
376,296
| 54,842,256
|
reshape np array for deep learning
|
<p>I want to use keras to apply a neural network to my time-series data. TO improve the model I want to have 50 time states of input per output. The final input should have 951 samples with 50 time points of 10 features (951, 50, 10)</p>
<p>Therefore, I have to reshape my data. I do that doing a for loop, but is awfully slow. Is there a way to improve the code and making it faster?</p>
<p>Example:</p>
<pre><code>import numpy as np
X = np.ones((1000,10))
for i in range(50, int(X.shape[0]) + 1):
if i == 50:
z = 0
X2 = np.array(X[z:i, :]).reshape((1, 50, X.shape[1]))
else:
X2 = np.concatenate([X2, np.array(X[z:i, :]).reshape((1, 50, X.shape[1]))])
z = z + 1
</code></pre>
|
<p>We can leverage <a href="http://www.scipy-lectures.org/advanced/advanced_numpy/#indexing-scheme-strides" rel="nofollow noreferrer"><code>np.lib.stride_tricks.as_strided</code></a> based <a href="http://scikit-image.org/docs/dev/api/skimage.util.html#skimage.util.view_as_windows" rel="nofollow noreferrer"><code>scikit-image's view_as_windows</code></a> to get sliding windows. <a href="https://stackoverflow.com/a/51890064/">More info on use of <code>as_strided</code> based <code>view_as_windows</code></a>.</p>
<pre><code>from skimage.util.shape import view_as_windows
X2 = view_as_windows(X,(50,10))[:,0]
</code></pre>
<p>It's simply a view into the input and hence virtually free on runtime -</p>
<pre><code>In [17]: np.shares_memory(X,view_as_windows(X,(50,10))[:,0])
Out[17]: True
In [18]: %timeit view_as_windows(X,(50,10))[:,0]
10000 loops, best of 3: 32.8 µs per loop
</code></pre>
|
python|numpy|keras|reshape
| 2
|
376,297
| 54,725,031
|
Tensorflow: Manipulate bias during training
|
<p>I train a model and want to manipulate all bias terms during training. For this reason, I build the graph using a parameter <code>change_bias</code></p>
<pre><code>change_bias = tf.placeholder(tf.float32)
b = change_bias * b
</code></pre>
<p>To manipulate the bias term, I want to be able to feed <code>change_bias=0.1</code> if I want the bias to decrease.</p>
<p>My approach does not work. What is the right way to manipulate the biases of a model during training?</p>
|
<p>because of b is defined as variable this is wrong:</p>
<pre><code>b = change_bias * b
</code></pre>
<p>try something like this:</p>
<pre><code>import tensorflow as tf
x=tf.placeholder(tf.float32,shape=[-1,26])
change_bias=tf.placeholder(tf.float32,shape=[])
b=tf.Variable(tf.zeros([26]),name="bias")
output=x+tf.math.multiply(b,change_bias)
</code></pre>
<p>edit: change_bias must be scalar</p>
|
python|tensorflow
| 0
|
376,298
| 54,961,109
|
Multiply Matrix by column vector with variable entry that has range (1,101)
|
<p>I want to multiply Matrix AB. To get the vector Y,
Where A is 3x4 and B is 4x1
x= range(1,101)
B = [2,x,3,x]
Since B contains the variable x we will get 100 different vectors for Y. I want to add them to a list so I can use these vectors for computations later on. </p>
<p>This is what i've tried but i get an error messages </p>
<pre><code>AB= list()
for x in range (1,100):
A = np.matrix('1 9 2 3; 7 2 1 4; 4 2 5 2')
B = ('2; x; 3; x')
AB.append(A @ B)
</code></pre>
<p>What am I doing wrong?
The error i get is: (that refers to a different file btw) </p>
<pre><code>raise ValueError('malformed node or string: ' + repr(node))
</code></pre>
|
<p>Okay first, you forgot to make B a numpy matrix, second you need to use f-strings to use x as a variable instead of the character x which is an incompatible type.</p>
<pre><code>AB = list()
for x in range (1,100):
A = np.matrix('1 9 2 3; 7 2 1 4; 4 2 5 2')
B = np.matrix(f'2; {x}; 3; {x}')
AB.append(A @ B)
</code></pre>
|
python|numpy
| 0
|
376,299
| 55,017,879
|
pandas list manipulation and fill NA
|
<p>I am trying to use this function in order to extract the <code>AdjClose</code> value from a dataframe.</p>
<pre><code>def get_sell_price(data):
buy_date = get_buy_date(data)
sell_date = get_sell_date(buy_date)
l=[]
for i in range(0,len(buy_date)):
sell_price = data[(data.Date == sell_date[i])].AdjClose
l.append(sell_price)
return l
</code></pre>
<p>This returns the data:</p>
<pre><code>[8180 110.459999
Name: AdjClose, dtype: float64, 17052 655.679993
Name: AdjClose, dtype: float64, 17452 968.099976
Name: AdjClose, dtype: float64, 17453 970.280029
Name: AdjClose, dtype: float64, 17454 965.719971
Name: AdjClose, dtype: float64, 17455 955.25
Name: AdjClose, dtype: float64, 17458 944.159973
Name: AdjClose, dtype: float64, 17462 950.690002
Name: AdjClose, dtype: float64, 17470 914.619995
Name: AdjClose, dtype: float64, 17497 951.640015
Name: AdjClose, dtype: float64, 17536 977.070007
Name: AdjClose, dtype: float64, 17537 966.580017
Name: AdjClose, dtype: float64, 17538 964.0
Name: AdjClose, dtype: float64, 18180 1335.209961
Name: AdjClose, dtype: float64, 18181 1313.040039
Name: AdjClose, dtype: float64, 18182 1285.550049
Name: AdjClose, dtype: float64, 21116 1514.400024
Name: AdjClose, dtype: float64, 21424 1300.680054
Name: AdjClose, dtype: float64, 22006 1178.099976
Name: AdjClose, dtype: float64, 22016 1196.47998
Name: AdjClose, dtype: float64, 22017 1197.300049
Name: AdjClose, dtype: float64, 22018 1210.650024
Name: AdjClose, dtype: float64, 22537 1209.109985
Name: AdjClose, dtype: float64, 25106 2914.0
Name: AdjClose, dtype: float64, 25113 2901.610107
Name: AdjClose, dtype: float64, 25114 2885.570068
Name: AdjClose, dtype: float64, 25116 2885.570068
Name: AdjClose, dtype: float64, 25117 2884.429932
Name: AdjClose, dtype: float64, 25118 2880.340088
Name: AdjClose, dtype: float64, 25119 2785.679932
Name: AdjClose, dtype: float64, 25122 2767.129883
Name: AdjClose, dtype: float64, 25129 2767.780029
Name: AdjClose, dtype: float64, 25143 2723.060059
Name: AdjClose, dtype: float64, 25144 2723.060059
Name: AdjClose, dtype: float64, 25157 2736.27002
Name: AdjClose, dtype: float64, 25158 2736.27002
Name: AdjClose, dtype: float64, 25169 2737.800049
Name: AdjClose, dtype: float64, 25219 2670.709961
Name: AdjClose, dtype: float64, 25240 2707.879883
Name: AdjClose, dtype: float64, Series([], Name: AdjClose, dtype: float64), Series([], Name: AdjClose, dtype: float64)]
</code></pre>
<p>I would preferably change this following line</p>
<pre><code>sell_price = data[(data.Date == sell_date[i])].AdjClose
</code></pre>
<p>to </p>
<pre><code>sell_price = data[(data.Date == sell_date[i])].AdjClose.values[0]
</code></pre>
<p>so that I would only get the value list without explanations attached in them.</p>
<p>However, the last 2 items in the list is empty, so when it tries to extract the value, it causes error. It is because 2 of the <code>sell_date</code> in the data frame is in 2020 thus it has no data to return, thus causes index error.</p>
<p>I tried to filter the <code>sell_date</code> < 2019-2-28 as that is the amount of data I have. But it does not work as this entire table needs to have 41 rows.</p>
<p>Is there any way I can return the values with 0 within that function using </p>
<pre><code>sell_price = data[(data.Date == sell_date[i])].AdjClose.values[0]
</code></pre>
<p>I appreciate your experience and insights!</p>
|
<p>You can use <code>next</code> with <code>iter</code> for first value if exist, else default value (here <code>NaN</code>) is returned.</p>
<p>Better for select column with filter is use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p>
<pre><code>sell_price = next(iter(data.loc[(data.Date == sell_date[i]), 'AdjClose']), np.nan)
</code></pre>
|
python|pandas|function|filter
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.