| Unnamed: 0 (int64, 0–378k) | id (int64, 49.9k–73.8M) | title (string, 15–150 chars) | question (string, 37–64.2k chars) | answer (string, 37–44.1k chars) | tags (string, 5–106 chars) | score (int64, -10–5.87k) |
|---|---|---|---|---|---|---|
2,600
| 66,052,235
|
Filtering a DataFrame based on whether a string is made from specific letters
|
<p>So I have a DataFrame that looks like this</p>
<p><strong>note</strong>: I put the different letters between * * so they are easy to spot</p>
<pre><code> id genome
0 639 ATGTTTGTTTTT*Y*TTGTTTTATATGTTTGTTTTTCTTGTTTTATATGTTTGTTTTTCTTGTTTTAT
1 640 ATGTTTGTTTTT*J*TTGTTTTATATGTTTGTTTTTCTTGTTTTATATGTTTGTTTTTCTTGTTTTAT
2 641 ATGTTTGTTTTTCTTGTTTTATATGTTTGTTTTTCTTGTTTTATATGTTTGTTTTTCTTGTTTTAT
3 642 ATGTTTGTTTTTCTTGTTTTATATGTTTGTTTTTCTTGTTTTATATGTTTGTTTTTCTTGTTTTAT
</code></pre>
<p>I want to filter it by string. Basically, if the string contains any letter other than A, C, T, G, N, leave the row in the dataframe, else <strong>just delete it</strong>.</p>
<p>I was trying this</p>
<pre class="lang-py prettyprint-override"><code>df = df[~df['genome'].str.contains('[^ACTGN]')]
</code></pre>
<p>and this</p>
<pre><code>df = df[df['genome'].str.match('^[ACTGN]+$')]
</code></pre>
<p>but nothing seems to work; all I get is all rows being true or false despite them having different letters</p>
|
<p>It looks like your strings have leading/trailing spaces (look at the alignment in the printout). So try:</p>
<pre><code>df['genome'] = df['genome'].str.strip()
df = df[~df['genome'].str.contains('[^ACTGN]')]
</code></pre>
<p>Or you can chain them if you don't want to modify your <code>genome</code> column:</p>
<pre><code>df = df[~df['genome'].str.strip().str.contains('[^ACTGN]')]
</code></pre>
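<p>A minimal, self-contained sketch of the same idea on a hypothetical two-row frame, using <code>str.fullmatch</code> (pandas 1.1+) instead of a negated <code>contains</code>:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'id': [639, 640],
                   'genome': [' ATGCTN ', 'ATGJTN']})
# strip stray whitespace, then keep rows consisting solely of A, C, T, G or N
mask = df['genome'].str.strip().str.fullmatch('[ACTGN]+')
print(df[mask])
</code></pre>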
|
python|arrays|pandas|dataframe
| 2
|
2,601
| 65,982,015
|
tf.keras.preprocessing.sequence.pad_sequences in JavaScript
|
<p>How can we implement tf.keras.preprocessing.sequence.pad_sequences in TensorFlow.js?</p>
<pre><code>encoded_text = tokenizer.texts_to_sequences([input_text])[0]
pad_encoded = pad_sequences([encoded_text], maxlen=seq_len, truncating='pre')
</code></pre>
|
<p>The <a href="https://github.com/tensorflow/tfjs-models/tree/master/universal-sentence-encoder" rel="nofollow noreferrer">universal sentence encoder</a> can be used to convert text into tensors</p>
<pre><code>require('@tensorflow/tfjs');
const use = require('@tensorflow-models/universal-sentence-encoder');
use.load().then(model => {
// Embed an array of sentences.
const sentences = [
'Hello.',
'How are you?'
];
model.embed(sentences).then(embeddings => {
// `embeddings` is a 2D tensor consisting of the 512-dimensional embeddings for each sentence.
// So in this example `embeddings` has the shape [2, 512].
embeddings.print(true /* verbose */);
});
});
</code></pre>
<p><code>tf.pad</code> can later be used to pad the tensors</p>
|
javascript|tensorflow|keras|tensorflow.js
| 1
|
2,602
| 52,606,908
|
generic function for conditional filtering in pandas dataframe
|
<p>Sample filtering condition:</p>
<p><a href="https://i.stack.imgur.com/AMSBB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AMSBB.png" alt="enter image description here"></a>
Data</p>
<pre><code>x y z
1 2 1
1 3 2
1 2 5
1 3 1
</code></pre>
<p>Now I want to filter the data by the condition specified above.
For that I need a generic function, i.e. a function that works for any filters, not only the ones specified above.</p>
<p>I know how to filter data manually in Python for more than one condition.</p>
<p>I think the generic function may need two arguments: one is the data and the other is the filtering condition.</p>
<p>But I am unable to find the logic to write a generic function that filters the data.</p>
<p>Can anyone kindly help me tackle this?</p>
<p>Thanks in advance.</p>
|
<p>You can create list of <code>conditions</code> and then <a href="https://stackoverflow.com/q/20528328"><code>np.logical_and.reduce</code></a>:</p>
<pre><code>x1 = df.x==1
y2 = df.y==2
z1 = df.z==1
y3 = df.y==3
m1 = np.logical_and.reduce([x1, y2, z1])
m2 = np.logical_and.reduce([x1, y3, z1])
</code></pre>
<p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> all masks together and check all <code>True</code>s per row with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>DataFrame.all</code></a>:</p>
<pre><code>m1 = pd.concat([x1, y2, z1], axis=1).all(axis=1)
m2 = pd.concat([x1, y3, z1], axis=1).all(axis=1)
</code></pre>
<p>EDIT:</p>
<p>If possible define column names with values for filtering in dictionary:</p>
<pre><code>d1 = {'x':1, 'y':2, 'z':1}
d2 = {'x':1, 'y':3, 'z':1}
m1 = np.logical_and.reduce([df[k] == v for k, v in d1.items()])
m2 = np.logical_and.reduce([df[k] == v for k, v in d2.items()])
</code></pre>
<p>Another approach with <code>merge</code> by one row DataFrame created from dictionary:</p>
<pre><code>df1 = pd.DataFrame([d1]).merge(df)
</code></pre>
<p>EDIT:</p>
<p>For a general solution it is possible to parse each value of the file into tuples and use <a href="https://stackoverflow.com/questions/18591778/how-to-pass-an-operator-to-a-python-function?noredirect=1&lq=1">operators</a>:</p>
<pre><code>df1 = pd.DataFrame({0: ['x==1', 'x==1'], 1: ['y==2', 'y<=3'], 2: ['z!=1', 'z>1']})
print (df1)
0 1 2
0 x==1 y==2 z!=1
1 x==1 y<=3 z>1
import operator, re
ops = {'>': operator.gt,
       '<': operator.lt,
       '>=': operator.ge,
       '<=': operator.le,
       '==': operator.eq,
       '!=': operator.ne}

# if numeric, parse to float, else leave untouched (e.g. if string)
def try_num(x):
    try:
        return float(x)
    except ValueError:
        return x
L = df1.to_dict('r')
#https://stackoverflow.com/q/52620865/2901002
rgx = re.compile(r'([<>=!]+)')
parsed = [[rgx.split(v) for v in d.values()] for d in L]
L = [[(x, op, try_num(y)) for x,op,y in ps] for ps in parsed]
print (L)
[[('x', '==', 1.0), ('y', '==', 2.0), ('z', '!=', 1.0)],
[('x', '==', 1.0), ('y', '<=', 3.0), ('z', '>', 1.0)]]
</code></pre>
<p>And now filter by first value of list - first row of file:</p>
<pre><code>m = np.logical_and.reduce([ops[j](df[i], k) for i, j, k in L[0]])
print (m)
[False False True False]
</code></pre>
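<p>A small follow-up sketch (assuming the same <code>df</code>, <code>ops</code> and <code>L</code> as above): every parsed row of the file can be turned into a mask and applied in one comprehension:</p>
<pre><code>masks = [np.logical_and.reduce([ops[op](df[col], val) for col, op, val in row])
         for row in L]
filtered = [df[m] for m in masks]
</code></pre>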
|
python|pandas|filter
| 2
|
2,603
| 52,899,858
|
Collapsing rows with NaN entries in pandas dataframe
|
<p>I have a pandas DataFrame with rows of data:</p>
<pre><code># objectID grade OS method
object_id_0001 AAA Mac organic
object_id_0001 AAA Mac NA
object_id_0001 AAA NA organic
object_id_0002 NA NA NA
object_id_0002 ABC Win NA
</code></pre>
<p>i.e. there are often multiple entries for the same objectID, but sometimes/often the entries have NAs.</p>
<p>As such, I'm just looking for a way to combine on ObjectID and report the non-NA entries, e.g. the above collapses down to:</p>
<pre><code>object_id_0001 AAA Mac organic
object_id_0002 ABC Win NA
</code></pre>
|
<h3>Quick and Dirty</h3>
<p>This works and has for a long time. However, some claim that this is a bug that may be fixed. As it is currently implemented, <code>first</code> returns the first non-null element if it exists per column.</p>
<pre><code>df.groupby('objectID', as_index=False).first()
objectID grade OS method
0 object_id_0001 AAA Mac organic
1 object_id_0002 ABC Win NaN
</code></pre>
<hr>
<h3><code>pd.concat</code></h3>
<pre><code>pd.concat([
    pd.DataFrame([d.lookup(d.notna().idxmax(), d.columns)], columns=d.columns)
    for _, d in df.groupby('objectID')
], ignore_index=True)
objectID grade OS method
0 object_id_0001 AAA Mac organic
1 object_id_0002 ABC Win NaN
</code></pre>
<hr>
<h3><code>stack</code></h3>
<pre><code>df.set_index('objectID').stack().groupby(level=[0, 1]).head(1).unstack()
grade OS method
objectID
object_id_0001 AAA Mac organic
object_id_0002 ABC Win None
</code></pre>
<hr>
<p>If by chance those are strings (<code>'NA'</code>)</p>
<pre><code>df.mask(df.astype(str).eq('NA')).groupby('objectID', as_index=False).first()
</code></pre>
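<p>A minimal reproducible sketch of the first approach (the frame below is a hypothetical reconstruction of the example, with real <code>NaN</code> values):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({
    'objectID': ['object_id_0001'] * 3 + ['object_id_0002'] * 2,
    'grade':    ['AAA', 'AAA', 'AAA', np.nan, 'ABC'],
    'OS':       ['Mac', 'Mac', np.nan, np.nan, 'Win'],
    'method':   ['organic', np.nan, 'organic', np.nan, np.nan],
})

print(df.groupby('objectID', as_index=False).first())
</code></pre>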
|
python|pandas|dataframe|rows|nan
| 6
|
2,604
| 58,439,491
|
Why is my code giving me data in one column when it should give it in two different columns
|
<p>I need to know what is happening in my code. It should give the data in two separate columns, but it is giving me the same data in both columns.</p>
<p>I tried to change the value of the row variable but couldn't find the reason.</p>
<pre><code>import requests
import csv
from bs4 import BeautifulSoup
import pandas as pd
import time
arrayofRequest= []
prices=[]
location=[]
columns=['Price', 'Location']
df = pd.DataFrame(columns=columns)
for i in range(0,50):
    arrayofRequest.append("https://www.zameen.com/Homes/Karachi-2-"+str(i+1)+".html?gclid=Cj0KCQjw3JXtBRC8ARIsAEBHg4mj4jX1zZUt3WzGScjH6nfwzrEqkuILarcmg372imSneelSXPj0fGIaArNeEALw_wcB")
    request = requests.get(arrayofRequest[i])
    soupobj= BeautifulSoup(request.content,"lxml")
    # print(soupobj.prettify())
    links =soupobj.find_all('span',{'class':'f343d9ce'})
    addresses =soupobj.find_all('div',{'class':'_162e6469'})
    price = ""
    for i in range(0,len(links)):
        price = str(links[i]).split(">")
        price = price[len(price)-2].split("<")[0]
        prices.append(price)
        address = str(addresses[i]).split(">")
        address = address[len(address)-2].split("<")[0]
        location.append(address)
        row=location[i]+","+prices[i]
        df = df.append(pd.Series(row, index=columns), ignore_index=False)
# filewriter = csv.writer(csvfile, delimiter=',',filewriter.writerow(['Price', 'Location']),filewriter.writerow([prices[0],location[0]])
df.to_csv('DATA.csv', index=False)
</code></pre>
|
<p>because of this: </p>
<p><code>pd.Series(row, index=columns)</code> </p>
<p>try something like</p>
<p><code>pd.DataFrame([[locations[i], prices[i]]], index=columns)</code></p>
<p>However, this could be done just once, outside of your for loop:</p>
<p><code>pd.DataFrame(list(zip(locations, prices)), index=columns)</code></p>
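<p>For reference, a minimal sketch of the one-shot construction (using the question's <code>prices</code> and <code>location</code> lists and naming the two columns via <code>columns=</code>):</p>
<pre><code>df = pd.DataFrame(list(zip(prices, location)), columns=['Price', 'Location'])
df.to_csv('DATA.csv', index=False)
</code></pre>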
|
pandas|series
| 0
|
2,605
| 58,563,632
|
How do I convert two DataFrame columns into summed Series?
|
<p>I have a pandas DataFrame that looks like this:</p>
<pre><code> date sku qty
0 2015-10-30 ABC 1
1 2015-10-30 DEF 1
2 2015-10-30 ABC 2
3 2015-10-31 DEF 1
4 2015-10-31 ABC 1
... ... ... ...
</code></pre>
<p>How can I extract all of the data for a particular <code>sku</code> and sum up the <code>qty</code> by date, for example for the <code>ABC</code> SKU?</p>
<pre><code>2015-10-30 3
2015-10-31 1
... ...
</code></pre>
<p>The closest I've gotten is a hierarchical grouping with <code>sales.groupby(['date', 'sku']).sum()</code>.</p>
|
<p>If you will work with all (or several) <code>sku</code>, then:</p>
<pre><code>agg_df = df.groupby(['sku','date']).qty.sum()
# extract some sku data
agg_df.loc['ABC']
</code></pre>
<p>Output:</p>
<pre><code>date
2015-10-30 3
2015-10-31 1
Name: qty, dtype: int64
</code></pre>
<p>If you only care for <code>ABC</code> particularly, then it's better to filter it first</p>
<pre><code>df[df['sku'].eq('ABC')].groupby('date')['qty'].sum()
</code></pre>
<p>The output would be the same as above.</p>
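<p>If a wide view with one column per <code>sku</code> is more convenient, a small sketch using <code>unstack</code> (assuming the same frame):</p>
<pre><code>df.groupby(['date', 'sku'])['qty'].sum().unstack(fill_value=0)
# sku         ABC  DEF
# date
# 2015-10-30    3    1
# 2015-10-31    1    1
</code></pre>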
|
python|pandas|dataframe
| 2
|
2,606
| 68,911,720
|
Date stuck as unformattable in pandas dataframe
|
<p>I am trying to plot time series data and my date column is stuck like this, and I cannot seem to figure out what datatype it is to change it, as adding <code>verbose = True</code> doesn't yield any explanation for the data.</p>
<p>Here is a screenshot of the output <a href="https://i.stack.imgur.com/V1dS9.png" rel="nofollow noreferrer">Date formatting</a></p>
<p>Here is the code I have for importing the dataset and all the formatting I've done to it. Ultimately, I want date and time to be separate values, but I cannot figure out why it's auto formatting it and applying today's date.</p>
<pre><code>df = pd.read_csv('dataframe.csv')
df['Avg_value'] = (df['Open'] + df['High'] + df['Low'] + df['Close'])/4
df['Date'] = pd.to_datetime(df['Date'])
df['Time'] = pd.to_datetime(df['Time'])
</code></pre>
<p>Any help would be appreciated</p>
<p>The output I'd be looking for would be something like:</p>
<pre><code>Date: 2016-01-04 10:00:00
</code></pre>
<p>As one column rather than 2 separate ones.</p>
|
<p>When you pass a Pandas Series into <code>pd.to_datetime(...)</code>, it parses the values and returns a new Series of dtype <code>datetime64[ns]</code> containing the parsed values:</p>
<pre class="lang-py prettyprint-override"><code>>>> pd.to_datetime(pd.Series(["12:30:00"]))
0 2021-08-24 12:30:00
dtype: datetime64[ns]
</code></pre>
<p>The reason you see today's date is that a <code>datetime64</code> value must have <strong>both date and time information</strong>. Since the date is missing, the current date is substituted by default.</p>
<p>Pandas does not really have a dtype that supports "just time of day, no date" (there is one that supports time intervals, such as "5 minutes"). The Python standard library does, however: the <a href="https://docs.python.org/3/library/datetime.html#time-objects" rel="nofollow noreferrer"><code>datetime.time</code></a> class.</p>
<p>Pandas provides a convenience function called the <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/basics.html#basics-dt-accessors" rel="nofollow noreferrer"><code>.dt accessor</code></a> for extracting a Series of <code>datetime.time</code> objects from a Series of <code>datetime64</code> objects:</p>
<pre class="lang-py prettyprint-override"><code>>>> pd.to_datetime(pd.Series(["12:30:00"])).dt.time
0 12:30:00
dtype: object
</code></pre>
<p>Note that the dtype is just <code>object</code> which is the fallback dtype Pandas uses for anything which is not natively supported by Pandas or NumPy. This means that working with these <code>datetime.time</code> objects will be a bit slower than working with a native Pandas dtype, but this probably won't be a bottleneck for your application.</p>
<p>Recommended reference: <a href="https://pandas.pydata.org/docs/user_guide/timeseries.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/user_guide/timeseries.html</a></p>
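<p>Since the goal in the question is a single combined column such as <code>2016-01-04 10:00:00</code>, a hedged sketch (assuming <code>Date</code> and <code>Time</code> are read as strings like <code>'2016-01-04'</code> and <code>'10:00:00'</code>):</p>
<pre class="lang-py prettyprint-override"><code># concatenate the two columns and parse them as one datetime
df['Datetime'] = pd.to_datetime(df['Date'].astype(str) + ' ' + df['Time'].astype(str))
</code></pre>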
|
python-3.x|pandas
| 0
|
2,607
| 68,893,658
|
Exponential moving average on pandas
|
<p>I was having a bit of trouble making an exponential moving average for a pandas data frame. I managed to make a simple moving average but I'm not sure how I can make one that is exponential. I was wondering if there's a function in pandas or maybe another module that can help with this. Ideally the exponential moving average would be in another column in my data frame. This is my code below:</p>
<pre><code>import pandas as pd
import datetime as dt
import yfinance as yf
#Get initial paramaters
start = dt.date(2020,1,1)
end = dt.date.today()
ticker = 'SPY'
#Get df data
df = yf.download(ticker,start,end,progress=False)
#Make simple moving average
df['SMA'] = df['Adj Close'].rolling(window=75,min_periods=1).mean()
</code></pre>
<p>Thanks</p>
|
<p>Use the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ewm.html" rel="nofollow noreferrer"><code>ewm</code></a> method:</p>
<pre><code>df['SMA'] = df['Adj Close'].ewm(span=75, min_periods=1).mean()
</code></pre>
<p><em>NB. check carefully the parameters' documentation as there is no more <code>window</code>, you should use one of <code>com</code>, <code>span</code>, <code>halflife</code> or <code>alpha</code> instead</em></p>
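<p>A tiny sketch on toy data (not the yfinance download) comparing the two averages:</p>
<pre><code>import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])           # toy price series
sma = s.rolling(window=3, min_periods=1).mean()    # simple moving average
ema = s.ewm(span=3, min_periods=1).mean()          # exponential moving average
print(pd.DataFrame({'price': s, 'SMA': sma, 'EMA': ema}))
</code></pre>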
|
pandas|numpy|yfinance
| 0
|
2,608
| 69,270,418
|
Chunk 2D array into smaller arrays, get the chunk means, and plot a heatmap
|
<p>I want to make a heatmap using seaborn. I have a 1920x1080 2D array that contains saliency values of each pixel of an image from 0-1 (0=lowest saliency-blue color, 1=highest saliency-red color). I have divided my image into smaller grids of 80x90 pixels. I am getting the image below:</p>
<p><a href="https://i.stack.imgur.com/GDSkZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GDSkZ.png" alt="enter image description here" /></a></p>
<p>So far so good. What I want to do next is to create a seaborn heatmap, so that each grid is averaged and represented by only one color (with blue grids representing areas of low saliency and warmer grids representing areas of high saliency), like below:</p>
<p><a href="https://i.stack.imgur.com/CA6bg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CA6bg.png" alt="enter image description here" /></a></p>
<p>However, using this code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

plt.figure(figsize= (16,9))
xticklabels=range(0,1920,80)
yticklabels=range(0,1080,90)
xticks=[80,160,240,320,400,480,560,640,720,800,880,960,1040,1120,1200,1280,1360,1440,1520,1600,1680,1760,1840,1920]
yticks=[90,180,270,360,450,540,630,720,810,900,990,1080]
normalized_saliency_map=np.random.random((1080,1920))
ax=sns.heatmap(normalized_saliency_map,
cmap='jet',
linewidth=0.5,
xticklabels = xticklabels,
yticklabels = yticklabels)
ax.set_xticks(xticks)
ax.set_yticks(yticks)
plt.title(f'Image: {i}')
plt.show()
</code></pre>
<p>I am getting this empty plot:</p>
<p><a href="https://i.stack.imgur.com/pxePH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pxePH.png" alt="enter image description here" /></a>
If I comment out <code>ax.set_xticks(xticks)</code> and <code>ax.set_yticks(yticks)</code>, I am getting this:</p>
<p><a href="https://i.stack.imgur.com/eBAIF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eBAIF.png" alt="enter image description here" /></a></p>
<p>What am I missing here?</p>
|
<h2>Main Array</h2>
<ul>
<li>Remove <code>linewidth</code></li>
<li>Add <code>set_xticklabels</code> and <code>set_yticklabels</code></li>
</ul>
<pre class="lang-py prettyprint-override"><code># test data
np.random.seed(365)
data = np.random.random((1080,1920))
ax = sns.heatmap(data, cmap='jet')
ax.set_xticks(xticks) # this is only the tick location, not the label
ax.set_xticklabels(xticks) # this adds the labels, after setting the ticks
ax.set_yticks(yticks)
ax.set_yticklabels(yticks)
ax.invert_yaxis() # use if desired to swap the direction of the y-axis values
ax.grid(color='k')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/XzLMX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XzLMX.png" alt="enter image description here" /></a></p>
<h2>Divide the Array</h2>
<ul>
<li>I used the function in this <a href="https://stackoverflow.com/a/16858283/7758804">answer</a> to chunk the data into an array <code>(288, 90, 80)</code></li>
</ul>
<pre class="lang-py prettyprint-override"><code># using function from other answer
chunked = blockshaped(data, 90, 80)
# get the means of each chunk and then reshape
means = np.array([v.mean() for v in chunked]).reshape(12, 24)
# plot the chunks
fig, ax = plt.subplots(figsize= (16,9))
p = sns.heatmap(means, cmap='jet', ax=ax)
p.set_xticks(range(25))
p.set_xticklabels([0] + xticks)
p.set_yticks(range(13))
p.set_yticklabels([0] + yticks)
p.invert_yaxis()
p.grid(color='k')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/VxRoS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VxRoS.png" alt="enter image description here" /></a></p>
<h2><code>blockshaped</code></h2>
<ul>
<li>Here's the function from the other answer for reshaping the array</li>
</ul>
<pre class="lang-py prettyprint-override"><code>def blockshaped(arr, nrows, ncols):
"""
Return an array of shape (n, nrows, ncols) where
n * nrows * ncols = arr.size
If arr is a 2D array, the returned array should look like n subblocks with
each subblock preserving the "physical" layout of arr.
"""
h, w = arr.shape
assert h % nrows == 0, f"{h} rows is not evenly divisible by {nrows}"
assert w % ncols == 0, f"{w} cols is not evenly divisible by {ncols}"
return (arr.reshape(h//nrows, nrows, -1, ncols)
.swapaxes(1,2)
.reshape(-1, nrows, ncols))
</code></pre>
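<p>As a side note, the same chunk means can be computed without the helper, with a plain reshape (a sketch assuming the 1080x1920 array and 90x80 chunks used above):</p>
<pre class="lang-py prettyprint-override"><code># reshape to (12 row-blocks, 90, 24 col-blocks, 80) and average within each block
means = data.reshape(12, 90, 24, 80).mean(axis=(1, 3))   # shape (12, 24)
</code></pre>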
|
python|numpy|seaborn|heatmap
| 1
|
2,609
| 69,084,798
|
In transformers of ViT model, last_hidden_state is not equal to hidden_states[-1]
|
<p>When inputting the same image into the Google ViT model, why is output.last_hidden_state not equal to output.hidden_states[-1]?
I tried this in BERT, and the outputs are the same.</p>
<p><code>feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')</code></p>
<pre><code>model = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k')
inputs = feature_extractor(images=[image], return_tensors="pt")
outputs = model(pixel_values=inputs['pixel_values'], output_hidden_states=True)
vec1 = outputs.hidden_states[-1][0, 0, :]
vec2 = outputs.last_hidden_state[0, 0, :]
</code></pre>
<p>In my mind, vec1 should be the same as vec2, but in fact they are not the same at all.</p>
|
<p>The difference is that the layernorm is applied to the last_hidden_state.</p>
<p>The following is an excerpt of the last 15 lines or so of ViTModel's forward method. For sequence_output, which is assigned to last_hidden_state, layernorm is applied to the output from the encoder.</p>
<pre><code>sequence_output = encoder_outputs[0]
sequence_output = self.layernorm(sequence_output)
pooled_output = self.pooler(sequence_output) if self.pooler is not None else None

if not return_dict:
    head_outputs = (sequence_output, pooled_output) if pooled_output is not None else (sequence_output,)
    return head_outputs + encoder_outputs[1:]

return BaseModelOutputWithPooling(
    last_hidden_state=sequence_output,
    pooler_output=pooled_output,
    hidden_states=encoder_outputs.hidden_states,
    attentions=encoder_outputs.attentions,
)
</code></pre>
<p>If we apply layernorm to hidden_state[-1], we can confirm that the same value is obtained.
Please refer to the notebook I made in Colab.
<a href="https://colab.research.google.com/drive/12OmNW5dZsARio0Tzu11ParHxblOoez7u?usp=sharing" rel="nofollow noreferrer">vit_huggingface</a></p>
|
python|pytorch|huggingface-transformers|transformer-model
| 0
|
2,610
| 69,161,346
|
RuntimeError: The layer has never been called and thus has no defined output shape
|
<p>I am trying to add attention to pretrained vgg16 network. I am trying to get the output shape of the last layer but it's throwing an error. This is the code,</p>
<pre><code>img_shape = (224,224,3)
in_lay = Input(img_shape)
base_pretrained_model = VGG16(input_shape = img_shape,
include_top = False, weights = 'imagenet')
base_pretrained_model.trainable = False
pt_depth = base_pretrained_model.get_output_shape_at(0)[-1]
pt_features = base_pretrained_model(in_lay)
bn_features = BatchNormalization()(pt_features)
attn_layer = Conv2D(64, kernel_size = (1,1), padding = 'same', activation = 'relu')(bn_features)
attn_layer = Conv2D(16, kernel_size = (1,1), padding = 'same', activation = 'relu')(attn_layer)
attn_layer = Conv2D(1,
kernel_size = (1,1),
padding = 'valid',
activation = 'sigmoid')(attn_layer)
up_c2_w = np.ones((1, 1, 1, pt_depth))
up_c2 = Conv2D(pt_depth, kernel_size = (1,1), padding = 'same',
activation = 'linear', use_bias = False, weights = [up_c2_w])
up_c2.trainable = False
attn_layer = up_c2(attn_layer)
mask_features = multiply([attn_layer, bn_features])
gap_features = GlobalAveragePooling2D()(mask_features)
gap_mask = GlobalAveragePooling2D()(attn_layer)
gap = Lambda(lambda x: x[0]/x[1], name = 'RescaleGAP')([gap_features, gap_mask])
gap_dr = Dropout(0.5)(gap)
dr_steps = Dropout(0.25)(Dense(128, activation = 'elu')(gap_dr))
out_layer = Dense(1, activation = 'sigmoid')(dr_steps)
tb_model = Model(inputs = [in_lay], outputs = [out_layer])
tb_model.compile(optimizer = 'adam', loss = 'binary_crossentropy',
metrics = ['binary_accuracy'])
tb_model.summary()
</code></pre>
<p>I am getting an error on the 6th line which says,</p>
<pre><code>RuntimeError: The layer has never been called and thus has no defined output shape.
</code></pre>
|
<p>Instead of</p>
<pre><code> pt_depth = base_pretrained_model.get_output_shape_at(0)[-1]
</code></pre>
<p>Try this one:</p>
<pre><code> pt_depth = base_pretrained_model.layers[-1].output_shape
</code></pre>
<p>Since, <em>include_top=False</em> the output will be: (None, 7, 7, 512) that is the shape of the last layer <em>"block5_pool (MaxPooling2D)"</em></p>
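<p>Since the rest of the question's code only needs the channel depth for the 1x1 convolution, a hedged sketch (assuming the VGG16 base defined above) is to take the last element of that shape:</p>
<pre><code>pt_depth = base_pretrained_model.layers[-1].output_shape[-1]   # 512
</code></pre>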
|
tensorflow|conv-neural-network|transfer-learning
| 0
|
2,611
| 69,080,534
|
How do I create a prefetch dataset from a folder of images?
|
<p>I am trying to input a dataset from Kaggle into this <a href="https://www.tensorflow.org/tutorials/generative/cyclegan" rel="nofollow noreferrer">notebook</a> from the Tensorflow docs in order to train a CycleGAN model. My current approach is to download the folders into my notebook and loop through the paths of each image and use cv2.imread(path) to add the uint8 image data to a list. But this doesn't work and I know my current approach is wrong because the code provided by google requires a Prefetch dataset.</p>
<p>Here's my current code (excluding the opencv part)</p>
<pre><code>import os
# specify the img directory path
art_path = "/content/abstract-art-gallery/Abstract_gallery/Abstract_gallery/"
land_path = "/content/landscape-pictures/"
def grab_path(folder, i_count=100):
    res = []
    for file in range(i_count):
        if os.listdir(folder)[0].endswith(('.jpg', '.png', 'jpeg')):
            img_path = folder + os.listdir(folder)[0]
            res.append(img_path)
    return res
art_path, land_path = grab_path(art_path), grab_path(land_path)
print(art_path)
print(land_path)
</code></pre>
<p>The error in the code comes here:</p>
<pre><code>train_horses = train_horses.cache().map(
preprocess_image_train, num_parallel_calls=AUTOTUNE).shuffle(
BUFFER_SIZE).batch(BATCH_SIZE)
</code></pre>
<p>Is there a simpler approach to this problem?</p>
|
<pre><code>import pathlib
import tensorflow as tf
import numpy as np

@tf.autograph.experimental.do_not_convert
def read_image(path):
    image_string = tf.io.read_file(path)
    image = DataUtils.decode_image(image_string,(image_size))
    return image

AUTO = tf.data.experimental.AUTOTUNE
paths = np.array([x for x in pathlib.Path(IMAGE_PATHS_DIR).rglob('*.jpg')])
dataset = tf.data.Dataset.from_tensor_slices((paths.astype(str)))
dataset = dataset.map(read_image)
dataset = dataset.shuffle(2048)
dataset = dataset.prefetch(AUTO)
</code></pre>
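<p>If the images are already organised in folders, a possibly simpler sketch (assuming TensorFlow 2.x; directory path taken from the question) is to let Keras build the dataset and then add prefetching:</p>
<pre><code>import tensorflow as tf

AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "/content/abstract-art-gallery/Abstract_gallery",
    label_mode=None,          # CycleGAN training does not need labels
    image_size=(256, 256),
    batch_size=1)
train_ds = train_ds.prefetch(AUTOTUNE)
</code></pre>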
|
image|tensorflow|image-processing|tensorflow-datasets|kaggle
| 1
|
2,612
| 44,591,704
|
How to find largest amount based on entity within group?
|
<p>Let's say my DataFrame looks something like this:</p>
<pre><code>Bank Entity Amount
JPM NY 5000
JPM NY 300
BOA LA 10000
BOA China 3000
MS Japan 21000
</code></pre>
<p>I would like to output, for each Bank, the entity with the top amount (keeping in mind that the Bank differs), so the DataFrame then becomes:</p>
<pre><code>Bank Entity Amount
JPM NY 5000
BOA LA 10000
MS Japan 21000
</code></pre>
<p>How would I go about creating something like this? I know how to <code>sort_values</code> and also <code>group_by</code> but I'm definitely doing something wrong. </p>
<p>Any ideas? I'm sure it's super simple. </p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.idxmax.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.idxmax</code></a> for indexes of max values and then select by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>loc</code></a>:</p>
<pre><code>df = df.loc[df.groupby('Bank')['Amount'].idxmax()]
print (df)
Bank Entity Amount
2 BOA LA 10000
0 JPM NY 5000
4 MS Japan 21000
</code></pre>
<p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a> first and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.last.html" rel="nofollow noreferrer"><code>GroupBy.last</code></a>:</p>
<pre><code>df = df.sort_values('Amount').groupby('Bank', as_index=False).last()
print (df)
Bank Entity Amount
0 BOA LA 10000
1 JPM NY 5000
2 MS Japan 21000
</code></pre>
|
python|pandas|dataframe
| 2
|
2,613
| 44,729,498
|
Plotting data from multiple pandas data frames in one plot
|
<p>I am interested in plotting a time series with data from several different pandas data frames. I know how to plot data for a single time series and I know how to do subplots, but how would I manage to plot from several different data frames in a single plot? I have my code below. Basically what I am doing is scanning through a folder of JSON files and parsing each JSON file into a pandas DataFrame so that I can plot it. When I run this code it is only plotting from one of the DataFrames instead of the ten created. I know that 10 DataFrames are created because I have a print statement to ensure they are all correct. </p>
<pre><code>import sys, re
import numpy as np
import smtplib
import matplotlib.pyplot as plt
from random import randint
import csv
import pylab as pl
import math
import pandas as pd
from pandas.tools.plotting import scatter_matrix
import argparse
import matplotlib.patches as mpatches
import os
import json
parser = argparse.ArgumentParser()
parser.add_argument('-file', '--f', help = 'folder where JSON files are stored')
if len(sys.argv) == 1:
    parser.print_help()
    sys.exit(1)
args = parser.parse_args()

dat = {}
i = 0
direc = args.f
directory = os.fsencode(direc)

fig1 = plt.figure()
ax1 = fig1.add_subplot(111)

for files in os.listdir(direc):
    filename = os.fsdecode(files)
    if filename.endswith(".json"):
        path = '/Users/Katie/Desktop/Work/' + args.f + "/" +filename
        with open(path, 'r') as data_file:
            data = json.load(data_file)
        for r in data["commits"]:
            dat[i] = (r["author_name"], r["num_deletions"], r["num_insertions"], r["num_lines_changed"],
                      r["num_files_changed"], r["author_date"])
            name = "df" + str(i).zfill(2)
            i = i + 1
        name = pd.DataFrame.from_dict(dat, orient='index').reset_index()
        name.columns = ["index", "author_name", "num_deletions",
                        "num_insertions", "num_lines_changed",
                        "num_files_changed", "author_date"]
        del name['index']
        name['author_date'] = name['author_date'].astype(int)
        name['author_date'] = pd.to_datetime(name['author_date'], unit='s')
        ax1.plot(name['author_date'], name['num_lines_changed'], '*',c=np.random.rand(3,))
        print(name)
        continue
    else:
        continue
plt.xticks(rotation='35')
plt.title('Number of Lines Changed vs. Author Date')
plt.show()
</code></pre>
|
<p>Quite straightforward actually. Don't let pandas confuse you. Underneath it every column is just a numpy array.</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df1 = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'))
df2 = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'))
fig1 = plt.figure()
ax1 = fig1.add_subplot(111)
ax1.plot(df1['A'])
ax1.plot(df2['B'])
</code></pre>
<p><a href="https://i.stack.imgur.com/R94TY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/R94TY.png" alt="enter image description here"></a></p>
|
python|pandas|plot
| 7
|
2,614
| 42,410,237
|
tensor flow character recognition with softmax results in accuracy 1 due to [NaN...NaN] prediction
|
<p>I am trying to use the softmax regression method discussed in <a href="https://www.tensorflow.org/get_started/mnist/beginners" rel="nofollow noreferrer">https://www.tensorflow.org/get_started/mnist/beginners</a> to recognize characters. </p>
<p>My code is as follows. </p>
<pre><code>train_data = pd.read_csv('CharDataSet/train.csv')
print(train_data.shape)
x = tf.placeholder(tf.float32, [None, 130])
W = tf.Variable(tf.zeros([130, 26]))
b = tf.Variable(tf.zeros([26]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 26])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(10):
    batch_xs = train_data.iloc[:, 2:]
    print(batch_xs)
    batch_ys = getencodedbatch(train_data.iloc[:, 1])
    print(batch_ys)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
</code></pre>
<p>However, I am getting an accuracy of 1, which shouldn't be the case.
The reason I am getting it is that my y tensor ends up as an array like</p>
<pre><code>[nan, ..., nan]
</code></pre>
<p>Can anyone explain to me what is wrong in my code?</p>
<p>I converted each character to a one-hot encoding using the method below</p>
<pre><code>def getencodedbatch(param):
    s = (param.shape[0],26)
    y_encode = np.zeros(s)
    row=0
    # print(y_encode)
    for val in param:
        col = ord(val)-97
        y_encode[row, col] = 1
        row += 1
    return pd.DataFrame(y_encode)
</code></pre>
|
<p>Here is the problem you are having: </p>
<ul>
<li>You set your initial weights and biases to 0 (this is wrong, as your
network does not learn). </li>
<li>The result is that y consists of all zeros</li>
<li>You take the log of y.. and a log of 0 is not defined... Hence the NaN. </li>
</ul>
<p>Good luck!</p>
<p>Edit to tell you how to fix it: look for an example on classifying MNIST characters and see what they do. You probably want to initialise your weights to be random normals ;)</p>
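<p>A hedged sketch of that fix in the question's TF1-style code (random-normal initialisation plus the numerically safer built-in cross-entropy):</p>
<pre><code>W = tf.Variable(tf.truncated_normal([130, 26], stddev=0.1))
b = tf.Variable(tf.constant(0.1, shape=[26]))
logits = tf.matmul(x, W) + b
y = tf.nn.softmax(logits)
# avoids taking log(0) by hand
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
</code></pre>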
|
tensorflow|softmax
| 0
|
2,615
| 69,814,968
|
Add Rows to Section of Dataframe
|
<p>I'm looking to add rows from first section of dataframe to other sections of the dataframe while changing the value in one column.</p>
<p>For example if I have this as my input dataframe:</p>
<pre><code>Type Color Size Dimensions
Circle Blue Large 2D
Circle Green Small 3D
Circle Black Large 2D
Square Red Large 2D
Square White Small 3D
Triangle Red Large 2D
Triangle White Small 3D
</code></pre>
<p>I want to add the Circle rows to the other shapes (but replace the value circle with its respective shape, either square or triangle) so it looks like this:</p>
<pre><code>Type Color Size Dimensions
Circle Blue Large 2D
Circle Green Small 3D
Circle Black Large 2D
Square Red Large 2D
Square White Small 3D
Square Blue Large 2D
Square Green Small 3D
Square Black Large 2D
Triangle Red Large 2D
Triangle White Small 3D
Triangle Blue Large 2D
Triangle Green Small 3D
Triangle Black Large 2D
</code></pre>
<p>Is there a simple way to do this via group by or merge?</p>
|
<p>Try:</p>
<pre><code># circle rows
circles = df.query('Type=="Circle"')
pd.concat([df] + [circles.assign(Type=t) for t in df.Type.unique() if t!='Circle'])
</code></pre>
<p>Output:</p>
<pre><code> Type Color Size Dimensions
0 Circle Blue Large 2D
1 Circle Green Small 3D
2 Circle Black Large 2D
3 Square Red Large 2D
4 Square White Small 3D
5 Triangle Red Large 2D
6 Triangle White Small 3D
0 Square Blue Large 2D
1 Square Green Small 3D
2 Square Black Large 2D
0 Triangle Blue Large 2D
1 Triangle Green Small 3D
2 Triangle Black Large 2D
</code></pre>
|
python|pandas|dataframe
| 1
|
2,616
| 69,782,374
|
Count list member pairs within array
|
<p>Let's assume, I've got an array containing the following lists:</p>
<pre><code>data = [['a', 'b', 'c'],['a', 'b'],['c']]
</code></pre>
<p>What would be the best solution to count every pair occurrence by the number of lists they're in?</p>
<p>E.g. result should be:</p>
<pre><code>member_one_is member_two_is COUNT
a b 2
a c 1
b c 1
</code></pre>
|
<p>One approach using <a href="https://docs.python.org/3.8/library/collections.html#collections.Counter" rel="nofollow noreferrer"><code>collections.Counter</code></a> and <a href="https://docs.python.org/3/library/itertools.html#itertools.combinations" rel="nofollow noreferrer"><code>itertools.combinations</code></a>:</p>
<pre><code>from collections import Counter
from itertools import combinations
import pandas as pd
data = [['a', 'b', 'c'], ['a', 'b'], ['c']]
# get the counts using collections Counter and the combinations using combinations
# make sure each sub-list is sorted with sorted
counts = Counter(combination for lst in map(sorted, data) for combination in combinations(lst, 2))
# create the DataFrame
df = pd.DataFrame(data=[[*k, v] for k, v in counts.items()], columns=["member_one_is", "member_two_is", "COUNT"])
print(df)
</code></pre>
<p><strong>Output</strong></p>
<pre><code> member_one_is member_two_is COUNT
0 a b 2
1 a c 1
2 b c 1
</code></pre>
<p>Note that if the lists are sorted you can skip the <code>map(sorted, data)</code> and iterate directly over <code>data</code>.</p>
|
python|pandas|dataframe
| 2
|
2,617
| 69,749,087
|
Need help in compiling custom loss
|
<p>I am adding a custom loss to a VAE, as suggested here: <a href="https://www.linkedin.com/pulse/supervised-variational-autoencoder-code-included-ibrahim-sobh-phd/" rel="nofollow noreferrer">https://www.linkedin.com/pulse/supervised-variational-autoencoder-code-included-ibrahim-sobh-phd/</a></p>
<p>Instead of defining a loss function, it uses a <code>dense</code> network and takes its output as the loss (if I understand correctly).</p>
<pre><code># New: add a classifier
clf_latent_inputs = Input(shape=(latent_dim,), name='z_sampling_clf')
clf_outputs = Dense(10, activation='softmax', name='class_output')(clf_latent_inputs)
clf_supervised = Model(clf_latent_inputs, clf_outputs, name='clf')
clf_supervised.summary()
# instantiate VAE model
# New: Add another output
outputs = [decoder(encoder(inputs)[2]), clf_supervised(encoder(inputs)[2])]
vae = Model(inputs, outputs, name='vae_mlp')
vae.summary()
reconstruction_loss = binary_crossentropy(inputs, outputs[0])
reconstruction_loss *= original_dim
kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
kl_loss = K.sum(kl_loss, axis=-1)
kl_loss *= -0.5
vae_loss = K.mean((reconstruction_loss + kl_loss) /100.0)
vae.add_loss(vae_loss)
# New: add the clf loss
vae.compile(optimizer='adam', loss={'clf': 'categorical_crossentropy'}) ===> this line <===
vae.summary()
# reconstruction_loss = binary_crossentropy(inputs, outputs)
svae_history = vae.fit(x_train, {'clf': y_train},
epochs=epochs,
batch_size=batch_size)
</code></pre>
<p>I was stuck at the compilation step (annotated as ===> this line <===) that I met a type error:</p>
<blockquote>
<p>TypeError: Expected float32, got <function
BaseProtVAE.__init__..vae_loss at 0x7ff53051dd08> of type
'function' instead.</p>
</blockquote>
<p>I need your help if you've got any suggestions.</p>
|
<p>There are several ways to implement VAE in Tensorflow. I propose an alternative implementation that can be found in <a href="https://www.tensorflow.org/guide/keras/custom_layers_and_models#putting_it_all_together_an_end-to-end_example" rel="nofollow noreferrer">custom_layers_and_models</a> in Tensorflow guide pages :</p>
<blockquote>
<p>Let's put all of these things together into an end-to-end example: we're going to implement a Variational AutoEncoder (VAE). We'll train it on MNIST digits.</p>
</blockquote>
<p>It uses custom Model classes and the gradient tape. In this way, it is quite easy to add the classifier into the VAE model and add the categorical cross-entropy to the total loss during the optimization.</p>
<p>All you need is to modify:</p>
<pre><code>class VariationalAutoEncoder(Model):
    """Combines the encoder and decoder into an end-to-end model for training."""

    def __init__(
        self,
        original_dim,
        intermediate_dim=64,
        latent_dim=32,
        name="autoencoder",
        **kwargs
    ):
        super(VariationalAutoEncoder, self).__init__(name=name, **kwargs)
        self.original_dim = original_dim
        self.encoder = Encoder(latent_dim=latent_dim, intermediate_dim=intermediate_dim)
        self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)
        self.clf_supervised = Dense(10, activation='softmax', name='class_output')

    def call(self, inputs):
        z_mean, z_log_var, z = self.encoder(inputs)
        reconstructed = self.decoder(z)
        # Add KL divergence regularization loss.
        kl_loss = -0.5 * tf.reduce_mean(
            z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1
        )
        self.add_loss(kl_loss)
        # classifier
        y_pred = self.clf_supervised(z)
        return reconstructed, y_pred
</code></pre>
<p>by adding the lines <code>self.clf_supervised = Dense(10, activation='softmax', name='class_output')</code> and <code>y_pred = self.clf_supervised(z)</code>.</p>
<p>The optimization is done this way:</p>
<pre><code>vae = VariationalAutoEncoder(original_dim, intermediate_dim, latent_dim)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
mse_loss_fn = tf.keras.losses.MeanSquaredError()
loss_metric = tf.keras.metrics.Mean()
epochs = 2
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=500).batch(4)
# Iterate over epochs.
for epoch in range(epochs):
    print("Start of epoch %d" % (epoch,))

    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            reconstructed, y_pred = vae(x_batch_train)
            clf_loss = tf.keras.losses.SparseCategoricalCrossentropy()(y_batch_train, y_pred)
            # Compute reconstruction loss
            loss = mse_loss_fn(x_batch_train, reconstructed)
            loss += sum(vae.losses)  # Add KLD regularization loss
            loss += clf_loss

        grads = tape.gradient(loss, vae.trainable_weights)
        optimizer.apply_gradients(zip(grads, vae.trainable_weights))

        loss_metric(loss)

        if step % 100 == 0:
            print("step %d: mean loss = %.4f" % (step, loss_metric.result()))
</code></pre>
<p>The rest of the code is in the link above. The main change is the optimization done with tf.GradientTape(). It's a bit more complicated than the fit method but it's still quite simple and very powerful.</p>
|
tensorflow|keras
| 1
|
2,618
| 69,848,005
|
Does .loc in Python Pandas make inplace change on the original dataframe?
|
<p>I was working on a dataframe like below:</p>
<p>df:</p>
<pre><code>Site Visits Temp Type
KFC 511 74 Food
KFC 565 77 Food
KFC 498 72 Food
K&G 300 75 Gas
K&G 255 71 Gas
</code></pre>
<p>I wanted to change 'Type' column into 0-1 variable so I could use df.corr() to check the correlation.</p>
<p>I tried two ways, one was to make a dictionary and make a new column:</p>
<pre><code>dict = {'Food':1, 'Gas':0}
df['BinaryType'] = df['Type'].map(dict)
</code></pre>
<p>I was then able to use <code>df.corr()</code> to check correlation between 'Visits' and 'BinaryType'. Since 'Type' column contains strings, df.corr() would not show correlation between 'Visits' and 'Type'.</p>
<p>Second way was to use .loc:</p>
<pre><code>df.loc[df['Type']=='Food','Type'] = 1
df.loc[df['Type']!=1,'Type'] = 0
</code></pre>
<p>Then I checked df in console, it was like below and it seemed an inplace change was made. I also checked the data type using <code>df['Type'][0]</code> and it read 1(I suppose it's integer):</p>
<pre><code>Site Visits Temp Type
KFC 511 74 1
KFC 565 77 1
KFC 498 72 1
K&G 300 75 0
K&G 255 71 0
</code></pre>
<p>Here however, <code>df.corr()</code> would not show correlation between 'Visits' and 'Type'! It was as if this column hadn't been changed.</p>
<p>You can use the code below to reproduce:</p>
<pre><code>df = pd.DataFrame({
'Site': {0: 'KFC', 1: 'KFC', 2: 'KFC', 3: 'K&G', 4:'K&G'},
'Visits': {0: 511, 1: 565, 2: 498, 3: 300, 4:255},
'Temp': {0: 74, 1: 77, 2: 72, 3: 75, 4:71},
'Type': {0: 'Food', 1: 'Food', 2: 'Food', 3: 'Gas', 4:'Gas'}})
# 1
dict = {'Food':1, 'Gas':0}
df['BinaryType'] = df['Type'].map(dict)
df.corr()
del df['BinaryType']
# 2
df.loc[df['Type']=='Food','Type'] = 1
df.loc[df['Type']!=1,'Type'] = 0
df.corr()
</code></pre>
<p>Any idea how Pandas .loc works in the background?</p>
|
<p>Your 2nd method doesn't actually change the <code>dtype</code> of the series even though the values are all ints. You can see that by doing <code>df.dtypes</code> which would show the <code>Type</code> column is still of <code>object</code> dtype</p>
<p>You need to explicitly cast them to int using an <code>.astype(int)</code></p>
<p>OR</p>
<p>use <code>df['Type'] = np.where(df['Type'] == 'Food', 1, 0)</code></p>
<p>running <code>df.corr()</code> after that gives</p>
<pre class="lang-py prettyprint-override"><code>In [22]: df.corr()
Out[22]:
Visits Temp Type
Visits 1.000000 0.498462 0.976714
Temp 0.498462 1.000000 0.305888
Type 0.976714 0.305888 1.000000
</code></pre>
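<p>A short sketch of the first option (the explicit cast) applied to the question's second method:</p>
<pre class="lang-py prettyprint-override"><code>df.loc[df['Type'] == 'Food', 'Type'] = 1
df.loc[df['Type'] != 1, 'Type'] = 0
df['Type'] = df['Type'].astype(int)   # the column dtype is now int, not object
print(df.corr())
</code></pre>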
|
python|pandas|dataframe|pandas-loc
| 1
|
2,619
| 69,738,324
|
How can I insert several rows at given position in pandas DataFrame?
|
<p>I have a table of data that had to be recorded every 30 seconds but some of it was recorded with a larger time step.</p>
<p>So I want to write a program with pandas that checks the time steps and, if they are larger than 30, inserts the corresponding number of NaN rows and then fills the NaN cells with interpolation, but I don't know how to write the code to do this several times at different positions.</p>
<p>My data looks something like this (with many more rows and columns):</p>
<pre><code> T1 T2 time_step
0 15 30 30
1 19 40 90
2 18 30 30
3 16 50 90
4 16 70 30
...
</code></pre>
<p>and I want it to look like this before interpolation:</p>
<pre><code> T1 T2 time_step
0 15 30 30
1 NaN NaN 30
2 NaN NaN 30
3 19 40 90
4 18 30 30
5 NaN NaN 30
6 NaN NaN 30
7 16 50 90
8 16 70 30
...
</code></pre>
<p>I have found a code on this site: <a href="https://www.geeksforgeeks.org/insert-row-at-given-position-in-pandas-dataframe/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/insert-row-at-given-position-in-pandas-dataframe/</a>
which inserts a row at a given position, but my problem is that I cannot write a program that inserts rows several times at different positions in a dataframe.</p>
<p>Is it even possible with pandas? Is there any other way to do it with python?</p>
<p>I want to write something like this but I don't know how to write a correct code for it:</p>
<pre><code>### The function close to what I found in the link above:
def Insert_row(row_number, df, m, row_value):
    start_upper = 0
    end_upper = row_number
    start_lower = row_number
    end_lower = df.shape[0]
    upper_half = [*range(start_upper, end_upper, 1)]
    lower_half = [*range(start_lower, end_lower, 1)]
    lower_half = [x.__add__(m) for x in lower_half]
    index_ = upper_half + lower_half
    df.index = index_
    for i in range(row_number, row_number+m):
        df.loc[i] = row_value
    df = df.sort_index()
    return df

### The main problem:
for j in range(dm.shape[0]):
    if dm['time_step'][j] != 30:
        row_number = j
        m = dm['time_step'][j]/ 30 - 1
        row_value = [np.Nan, np.NaN, 30]
        dm_new = Insert_row(row_number, dm, m, row_value)

dm_new = dm_new.interpolate()
</code></pre>
<p>(I know the range is wrong and it is wrong to modify what I'm iterating over but I don't know how to write it correctly.)</p>
|
<p>There will be a lot of ways to solve this. If I understand correctly, you want to insert two rows everytime when <code>time_step</code> is 90. If you have more general cases as well, we would need to modify the solution a bit.</p>
<p>This is a very imperative requirement. For this case, I would work very directly with the indices of the data and therefore use the underlying numpy array instead of pandas.</p>
<pre><code>In [54]: a = df.copy().values
In [55]: a
Out[55]:
array([[15, 30, 30],
[19, 40, 90],
[18, 30, 30],
[16, 50, 90],
[16, 70, 30]])
In [56]: index = np.squeeze(np.argwhere(a[:,2] > 30))
In [57]: index
Out[57]: array([1, 3])
In [58]: np.insert(a.astype(np.float64), obj=np.repeat(index, 2), values=np.array([np.nan, np.nan, 30]), axis=0)
Out[58]:
array([[15., 30., 30.],
[nan, nan, 30.],
[nan, nan, 30.],
[19., 40., 90.],
[18., 30., 30.],
[nan, nan, 30.],
[nan, nan, 30.],
[16., 50., 90.],
[16., 70., 30.]])
</code></pre>
<p>Then you can create a new DataFrame with these values.</p>
<pre><code>In [74]: pd.DataFrame(np.insert(a.astype(np.float64), obj=np.repeat(index, 2), values=np.array([np.nan, np.nan, 30]), axis=0), columns=df.columns)
Out[74]:
T1 T2 time_step
0 15.0 30.0 30.0
1 NaN NaN 30.0
2 NaN NaN 30.0
3 19.0 40.0 90.0
4 18.0 30.0 30.0
5 NaN NaN 30.0
6 NaN NaN 30.0
7 16.0 50.0 90.0
8 16.0 70.0 30.0
</code></pre>
<p>The <code>index</code> holds the row numbers where the <code>'time_step'</code> column (<code>a[:,2]</code>) is larger than <code>30</code>. You could still do this with pandas. However, to insert rows <em>in the middle</em> of your DataFrame, there simply is no API function. Therefore we go to numpy. We need to convert to float, because np.nan is not a valid integer. Repeating the <code>index</code> twice makes it insert the row twice. And finally the row to insert is <code>np.array([np.nan, np.nan, 30])</code> and <code>axis=0</code> means we insert rows.</p>
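<p>Since the question ultimately wants the gaps interpolated, a small follow-up sketch on the frame built above:</p>
<pre><code>new_df = pd.DataFrame(np.insert(a.astype(np.float64),
                                obj=np.repeat(index, 2),
                                values=np.array([np.nan, np.nan, 30]),
                                axis=0),
                      columns=df.columns)
new_df[['T1', 'T2']] = new_df[['T1', 'T2']].interpolate()
</code></pre>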
|
python|pandas|dataframe
| 0
|
2,620
| 69,930,461
|
Any alternate approaches to calculate co-occurrence matrix in Python?
|
<p>I'm trying to calculate the co-occurrence matrix for a large corpus but it takes a very long time(+6hours). Are there any faster ways?</p>
<p>My approach:</p>
<p>consider this array as the <code>corpus</code> and each element of the corpus as <code>context</code>:</p>
<pre><code>corpus = [
'where python is used',
'what is python used in',
'why python is best',
'what companies use python'
]
</code></pre>
<p>Algorithm:</p>
<pre><code>words = list(set(' '.join(corpus).split(' ')))
c_matrix = np.zeros((len(words), len(words)), dtype='int')
for context in corpus:
    context = context.split(' ')
    for i in range(len(context)):
        for j in range(i + 1, len(context)):
            row = words.index(context[i])
            column = words.index(context[j])
            c_matrix[row][column] += 1
</code></pre>
|
<p>The provided algorithm is not efficient because it recomputes <code>words.index(...)</code> many times. You can <strong>pre-compute the indices</strong> first and then build the matrix. Here is a significantly better solution:</p>
<pre class="lang-py prettyprint-override"><code>words = list(set(' '.join(corpus).split(' ')))
c_matrix = np.zeros((len(words), len(words)), dtype='int')
for context in corpus:
    context = context.split(' ')
    index = [words.index(item) for item in context]
    for i in range(len(context)):
        for j in range(i + 1, len(context)):
            c_matrix[index[i]][index[j]] += 1
</code></pre>
<p>Moreover, you can transform <code>index</code> to a Numpy array and use <strong>Numba</strong> (or Cython) to build the <code>c_matrix</code> very quickly from <code>index</code>.</p>
<p>Finally, you can <strong>transform <code>words</code> to a dictionary</strong> (with the string in the current list as the dictionary keys and the index in the current list as the dictionary values) so that indexing will be much faster (constant-time fetch).</p>
<p>The resulting algorithm should be <em>several order of magnitude faster</em>. If this is not enough, then you probably need to replace the matrix <code>c_matrix</code> with a more advanced (but also much more complex) <em>sparse data-structure</em> regarding your needs.</p>
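<p>A compact sketch combining the first two suggestions (pre-computed indices plus a dictionary lookup), before reaching for Numba:</p>
<pre class="lang-py prettyprint-override"><code>word_to_idx = {w: i for i, w in enumerate(words)}  # constant-time lookups
c_matrix = np.zeros((len(words), len(words)), dtype='int')

for context in corpus:
    index = [word_to_idx[t] for t in context.split(' ')]
    for i in range(len(index)):
        for j in range(i + 1, len(index)):
            c_matrix[index[i]][index[j]] += 1
</code></pre>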
|
python|numpy|machine-learning
| 1
|
2,621
| 43,175,382
|
Python: create a pandas data frame from a list
|
<p>I am using the following code to create a data frame from a list:</p>
<pre><code>test_list = ['a','b','c','d']
df_test = pd.DataFrame.from_records(test_list, columns=['my_letters'])
df_test
</code></pre>
<p>The above code works fine. Then I tried the same approach for another list:</p>
<pre><code>import pandas as pd
q_list = ['112354401', '116115526', '114909312', '122425491', '131957025', '111373473']
df1 = pd.DataFrame.from_records(q_list, columns=['q_data'])
df1
</code></pre>
<p>But it gave me the following errors this time:</p>
<pre><code>---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-24-99e7b8e32a52> in <module>()
1 import pandas as pd
2 q_list = ['112354401', '116115526', '114909312', '122425491', '131957025', '111373473']
----> 3 df1 = pd.DataFrame.from_records(q_list, columns=['q_data'])
4 df1
/usr/local/lib/python3.4/dist-packages/pandas/core/frame.py in from_records(cls, data, index, exclude, columns, coerce_float, nrows)
1021 else:
1022 arrays, arr_columns = _to_arrays(data, columns,
-> 1023 coerce_float=coerce_float)
1024
1025 arr_columns = _ensure_index(arr_columns)
/usr/local/lib/python3.4/dist-packages/pandas/core/frame.py in _to_arrays(data, columns, coerce_float, dtype)
5550 data = lmap(tuple, data)
5551 return _list_to_arrays(data, columns, coerce_float=coerce_float,
-> 5552 dtype=dtype)
5553
5554
/usr/local/lib/python3.4/dist-packages/pandas/core/frame.py in _list_to_arrays(data, columns, coerce_float, dtype)
5607 content = list(lib.to_object_array(data).T)
5608 return _convert_object_array(content, columns, dtype=dtype,
-> 5609 coerce_float=coerce_float)
5610
5611
/usr/local/lib/python3.4/dist-packages/pandas/core/frame.py in _convert_object_array(content, columns, coerce_float, dtype)
5666 # caller's responsibility to check for this...
5667 raise AssertionError('%d columns passed, passed data had %s '
-> 5668 'columns' % (len(columns), len(content)))
5669
5670 # provide soft conversion of object dtypes
AssertionError: 1 columns passed, passed data had 9 columns
</code></pre>
<p>Why would the same approach work for one list but not another? Any idea what might be wrong here? Thanks a lot!</p>
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.from_records.html" rel="noreferrer"><code>DataFrame.from_records</code></a> treats string as a character list. so it needs as many columns as length of string.</p>
<p>You could simply use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html" rel="noreferrer">the <code>DataFrame</code> constructor</a>.</p>
<pre><code>In [3]: pd.DataFrame(q_list, columns=['q_data'])
Out[3]:
q_data
0 112354401
1 116115526
2 114909312
3 122425491
4 131957025
5 111373473
</code></pre>
|
list|python-3.x|pandas|dataframe
| 128
|
2,622
| 72,154,349
|
Getting 1970-01-01 after converting int to datetime
|
<p>I have an attribute like:</p>
<pre><code>df. CalculationDateKey.head()
0 20201231
1 20201130
2 20201031
3 20200930
4 20200831
Name: CalculationDateKey, dtype: int64
</code></pre>
<p>And I want to convert it into datetime.</p>
<p>I tried:</p>
<pre><code>pd.to_datetime(df['CalculationDateKey']).head()
</code></pre>
<p>which yields:</p>
<pre><code>0 1970-01-01 00:00:00.020201231
1 1970-01-01 00:00:00.020201130
2 1970-01-01 00:00:00.020201031
3 1970-01-01 00:00:00.020200930
4 1970-01-01 00:00:00.020200831
Name: CalculationDateKey, dtype: datetime64[ns]
</code></pre>
<p>I want this so I can calculate the difference in months between two dates.</p>
|
<p>Don't let Pandas infer your date format so specify it:</p>
<pre><code>>>> pd.to_datetime(df['CalculationDateKey'], format='%Y%m%d')
0 2020-12-31
1 2020-11-30
2 2020-10-31
3 2020-09-30
4 2020-08-31
Name: CalculationDateKey, dtype: datetime64[ns]
</code></pre>
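<p>With proper datetimes, the month difference mentioned in the question can then be computed with plain year/month arithmetic (a sketch with a hypothetical second column <code>OtherDateKey</code>):</p>
<pre><code>d1 = pd.to_datetime(df['CalculationDateKey'], format='%Y%m%d')
d2 = pd.to_datetime(df['OtherDateKey'], format='%Y%m%d')
months = (d1.dt.year - d2.dt.year) * 12 + (d1.dt.month - d2.dt.month)
</code></pre>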
|
python|pandas|numpy
| 1
|
2,623
| 72,357,902
|
How can I Get, Edit and Set Gradient Matrix in Training of Keras Model?
|
<p>I am creating a sparse neural network as described in the image below. Keras only provides a dense layer and we can't choose how many neurons we want to be connected to the previous layer. For implementing this using Keras, I am trying to implement the following approach:</p>
<p>1- Get the Gradient Matrix of each layer in each epoch</p>
<p>2- Multiply Gradient Matrix with a mask matrix to make it sparse</p>
<p>3- Update new weights according to that updated gradient matrix</p>
<p>I have not been able to find a gradient matrix in Keras. How can I get it and update it during the epochs?</p>
<p><code>tf.GradientTape()</code> is only for seeing gradients.</p>
<p>Thanks in advance. Please see the attached picture below.</p>
<p><a href="https://i.stack.imgur.com/BouBj.png" rel="nofollow noreferrer">Sparse Network Image</a></p>
|
<p>Instead of working with the gradient matrix manually, it's better to apply TensorFlow pruning techniques. We can apply pruning to each layer of the model by passing a parameter.</p>
<pre><code> pruning_params = {
'pruning_schedule':
PolynomialDecay(
initial_sparsity=0.1,
final_sparsity=0.75,
begin_step=1000,
end_step=5000,
frequency=100)}
</code></pre>
<p>For every layer chosen to be pruned, it adds a binary mask variable which is of the same size and shape as the layer’s weight tensor and determines which of the weights participate in the forward execution of the graph. TensorFlow introduces a new automated gradual pruning algorithm in which the sparsity is increased from an initial sparsity value (usually 0) to a final sparsity value over a span of n pruning steps, starting at training step t0 and with pruning frequency ∆t.
You can add a pruned layer with the following code:</p>
<pre><code>prune.prune_low_magnitude(
l.Conv2D(32, 5, padding='same', activation='relu'),
input_shape=input_shape,
**pruning_params)
</code></pre>
<p>Complete Tutorial is available <a href="https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/python/examples/sparsity/keras/mnist/mnist_cnn.py" rel="nofollow noreferrer">here</a></p>
|
python|tensorflow|keras|deep-learning
| 0
|
2,624
| 72,139,727
|
Product and summation with 3d and 1d arrays
|
<p>Given a 3d array <strong><code>X</code></strong> with dimensions (<code>K,n,m</code>) that can be considered as a stack of <code>K</code> (<code>n,m</code>) matrices and a 1d vector <strong><code>b</code></strong> (dim <code>n</code>), the goal is to obtain the resulting vector <strong><code>r</code></strong> (dim <code>n</code>) each component of which is calculated as:
<a href="https://i.stack.imgur.com/6Akiv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Akiv.png" alt="enter image description here" /></a></p>
<p>It is easy to see that the expression under the <code>k</code>-summation (i.e. two internal sums) is just a dot product <strong><code>X_k b X_k</code></strong> (and, therefore, can easily be calculated using <code>numpy</code>). So, the desired vector <strong><code>r</code></strong> is</p>
<p><a href="https://i.stack.imgur.com/0Kj9Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0Kj9Q.png" alt="enter image description here" /></a></p>
<p>where <strong><code>X_k</code></strong> is the <code>k</code>-th 2d (<code>n,m</code>) 'layer' of 3d array <code>X</code>.</p>
<p>I.e. the current solution is</p>
<pre><code>r = 0
for k in range(K):
r += x[k,:,:] @ (b @ x[k, :, :])
</code></pre>
<p>Can <strong><code>r</code></strong> be efficiently calculated avoiding a for-loop by <code>k</code>?
Or maybe there is another efficient way to calculate <strong><code>r</code></strong>?</p>
<p>(I tried <code>np.tensordot</code> but since it is just pure summation by <code>k</code> I didn't get a correct result yet.)</p>
|
<p>This looks like a perfect usecase for <a href="https://numpy.org/doc/stable/reference/generated/numpy.einsum.html" rel="nofollow noreferrer">einsum</a>:</p>
<pre class="lang-py prettyprint-override"><code>r = np.einsum('kij,l,klj->i', x, b, x)
</code></pre>
<p>which vectorizes the operation, i.e. it is more efficient than an explicit for loop.</p>
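<p>A quick sanity check against the loop version (a sketch with random data):</p>
<pre><code>import numpy as np

K, n, m = 4, 5, 6
x = np.random.rand(K, n, m)
b = np.random.rand(n)

r_loop = sum(x[k] @ (b @ x[k]) for k in range(K))
r_einsum = np.einsum('kij,l,klj->i', x, b, x)
assert np.allclose(r_loop, r_einsum)
</code></pre>
<p>For large arrays, passing <code>optimize=True</code> to <code>np.einsum</code> lets NumPy pick a better contraction order.</p>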
|
python|numpy|numpy-ndarray
| 1
|
2,625
| 72,434,189
|
Grouping rows in a Pandas DataFrame
|
<p>We're working with a really large Twitter DataBase containing around 4,9 million entries. Every entry can either be a tweet, or a reply to a tweet (or a reply to a reply of course). Since this data has been collected using the Twitter API tweets and their replies are not neatly grouped in the DataFrame but many entries are in between:
<a href="https://i.stack.imgur.com/qAu8d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qAu8d.png" alt="" /></a></p>
<p>We are trying to group the tweets with their corresponding replies so we can perform a sentimental analysis on this conversation, but this is where we are stuck.
We started by inverting the DataFrame as it will be easier to search from the last reply to the original tweet than the other way around.
<a href="https://i.stack.imgur.com/H3P7Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H3P7Y.png" alt="" /></a></p>
<p>Now we'll be using the column <code>id</code> (the original tweet ID) and the <code>in_reply_to_status_id</code> (refers to the ID of the original tweet to which it was replied).</p>
<p>In essence we want to create some kind of for loop which detects the first row where the <code>in_reply_to_status_id</code> is an integer and then links this to the reply/tweet above by matching it with the <code>id</code> column. But it has to continue this process until it finds a row where the <code>in_reply_to_status_id</code> is <code>None</code>, as this means you've found the original tweet (as a tweet evidently can not be a reply to something).</p>
<p>So the first entry here would be <code>in_reply_to_status_id</code> = 1244694453190897664, we store this entry and use this to search its "original" tweet:
<a href="https://i.stack.imgur.com/DQJiy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DQJiy.png" alt="" /></a>
But this gives us a new <code>in_reply_to_status_id</code> of 1243885949697888263 so we store this entry as well but also have to look for its original tweet with this new <code>in_reply_to_status_id</code>. We want to continue this process until we arrive at an entry where <code>in_reply_to_status_id</code> is <code>None</code>, as this marks the end of a conversation.
<a href="https://i.stack.imgur.com/0zMC9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0zMC9.png" alt="" /></a></p>
<p>Would anyone have any ideas on how to start on such an operation?</p>
|
<p>This seems like a pretty hard operation, and I only partly understand your setup, but here is my take.
First, separate out the "head" tweets: the rows where <code>in_reply_to_status_id</code> is <code>None</code>, since a tweet that replies to nothing must be the start of a conversation. Then check which of their <code>id</code> values appear in the <code>in_reply_to_status_id</code> column of the other rows, i.e. which heads actually received replies.
After that, for every reply, follow the <code>in_reply_to_status_id</code> chain recursively, looking the value up in the <code>id</code> column each time, until you reach a row that points to nothing; that row is the original tweet of the conversation.
While doing so you can build a dict (think of it as a graph) mapping each tweet to the tweet it links to, and then make a new dataframe in which each batch of rows carries the id/index of its head node.
This is my take, hope it'll help!</p>
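<p>A minimal sketch of this idea, assuming the DataFrame is called <code>df</code>, the columns are named <code>id</code> and <code>in_reply_to_status_id</code> as in the question, and original tweets hold <code>None</code>/NaN (adjust the test if they hold the string <code>'None'</code> instead):</p>
<pre><code>import pandas as pd

# map every tweet id to the id it replies to
parent = dict(zip(df['id'], df['in_reply_to_status_id']))

def find_root(tweet_id):
    seen = set()
    # walk up the reply chain until we hit a tweet that replies to nothing
    while tweet_id in parent and pd.notna(parent[tweet_id]) and tweet_id not in seen:
        seen.add(tweet_id)              # guards against missing parents or cycles
        tweet_id = parent[tweet_id]
    return tweet_id

df['conversation_id'] = df['id'].map(find_root)
threads = df.groupby('conversation_id')  # each group is one tweet plus all its replies
</code></pre>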
|
python|pandas|twitter
| 0
|
2,626
| 50,491,564
|
Merge Two different dataframe with Pandas
|
<p>I am new to pandas, I need to complete the following task, is there an effective way to do it?
There are 2 different dataframes, dfa and dfb:
<a href="https://i.stack.imgur.com/K0Oq0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K0Oq0.png" alt="dfa"></a></p>
<p><a href="https://i.stack.imgur.com/VH6bp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VH6bp.png" alt="dfb"></a></p>
<p>I used this to merge them together:</p>
<pre><code>df = pd.merge(dfa, dfb, left_on = ['a_retry','a_cca', 'a_rssif', 'a_lqif'], right_on = ['b_retry','b_cca', 'b_rssif', 'b_lqif'])
</code></pre>
<p>I got the df output:
<a href="https://i.stack.imgur.com/G0iqj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G0iqj.png" alt="df"></a></p>
<p>However, this is not what I expect.
The merged dataframe containing all columns is fine, but the number of rows should not exceed that of the smaller frame (dfa), which means row 3 must be dropped. The expected result is:
<a href="https://i.stack.imgur.com/vQCqO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vQCqO.png" alt="enter image description here"></a>
How can I do that? Thanks.</p>
|
<p>This is expected, because there are duplicate rows across all 4 merge columns.</p>
<p>So need remove duplicates rows by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>drop_duplicates</code></a>:</p>
<pre><code>dfa = dfa.drop_duplicates(subset=['a_retry','a_cca', 'a_rssif', 'a_lqif'])
dfb = dfb.drop_duplicates(subset=['b_retry','b_cca', 'b_rssif', 'b_lqif'])
</code></pre>
<p>But if you need to match duplicate rows as well, it is possible with a helper column created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>cumcount</code></a>, which is then used in the <code>merge</code>:</p>
<pre><code>dfa['new'] = dfa.groupby(['a_retry','a_cca', 'a_rssif', 'a_lqif']).cumcount()
dfb['new'] = dfb.groupby(['b_retry','b_cca', 'b_rssif', 'b_lqif']).cumcount()
df = (pd.merge(dfa,
dfb,
left_on = ['a_retry','a_cca', 'a_rssif', 'a_lqif', 'new'],
right_on = ['b_retry','b_cca', 'b_rssif','b_lqif', 'new']).drop('new', axis=1))
</code></pre>
|
python|pandas
| 0
|
2,627
| 50,503,246
|
How can I do feature mapping (pop) from JSON format to tabular format?
|
<p>Here's my data</p>
<pre><code> id var_map
0 7068 {'feature_1': 2.0, 'feature_2': 4.0, 'feature_3': 8.0, 'feature_4': 8.0}
1 7116 {'feature_1': '2', 'feature_2': 5.0, 'feature_3': 7.0}
2 7154 {'feature_1': 1.0, 'feature_2': 8.0, 'feature_3': 17.0}
</code></pre>
<p>Here's what I want</p>
<pre><code> id feature_1 feature_2 feature_3 feature_4 feature_5
0 7068 2.0 4.0 8.0 8.0
1 7116 2 5.0 7.0
2 7154 1.0 8.0 17.0
</code></pre>
|
<p>I believe you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>pop</code></a> with the <code>DataFrame</code> constructor and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>join</code></a> back to the original:</p>
<pre><code>df = df.join(pd.DataFrame(df.pop('var_map').values.tolist(), index=df.index))
print (df)
id feature_1 feature_2 feature_3 feature_4
0 7068 2 4.0 8.0 8.0
1 7116 2 5.0 7.0 NaN
2 7154 1 8.0 17.0 NaN
</code></pre>
<p>But if the input is <code>json</code>, it is better to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.json.json_normalize.html" rel="nofollow noreferrer"><code>json_normalize</code></a>.</p>
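<p>A quick sketch of the <code>json_normalize</code> route, assuming <code>records</code> is the parsed JSON (a list of dicts shaped like the rows above) rather than an already-built DataFrame:</p>
<pre><code>import pandas as pd

# records = [{'id': 7068, 'var_map': {'feature_1': 2.0, ...}}, ...]
df = pd.json_normalize(records)  # pandas >= 1.0; older versions: pandas.io.json.json_normalize
df.columns = df.columns.str.replace('var_map.', '', regex=False)  # drop the prefix
</code></pre>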
|
python|json|pandas|dataframe
| 3
|
2,628
| 62,545,426
|
Filter rows from a pandas column binned by pandas.cut()
|
<p>I have a pandas series that I've got from <code>pandas.cut()</code>.</p>
<p>Given a value 'VALUE' I'd like a boolean series for all the rows whose interval comprises the given value.</p>
<p>For instance, if i have a value = 10 I'd like the rows with the bin (8, 12] to assume True and those with the bin (0, 8] assume False.</p>
<p>I´ve managed to do that using:</p>
<pre><code>mask = (df['COLUMN'].apply(lambda x: x.left).astype('float64') <= VALUE ) & (df['COLUMN'].apply(lambda x: x.right).astype('float64') >= VALUE )
</code></pre>
<p>I feel this is not an efficient way for doing it. Is there any better way for doing so?</p>
|
<p>Yes, there is a shortcut:</p>
<pre><code>m=pd.arrays.IntervalArray(df['COLUMN']).overlaps(pd.Interval(VALUE, VALUE))
</code></pre>
<p>Or</p>
<pre><code>m=pd.Index(df['COLUMN']).isin([VALUE])
</code></pre>
|
python|pandas
| 3
|
2,629
| 62,885,709
|
Which fmt option in numpy.savetxt keeps infinite integer precision?
|
<p>I have been using <code>numpy.savetxt</code> without specifying the <code>fmt</code> option, and sometimes when a particularly large integer is supposed to be saved, it is recorded with an <code>e</code> notation as a floating point number of some finite precision.
I would like all integers, no matter how many digits, to be recorded lossless, simply including all digits.
However, reading through the <a href="https://docs.python.org/3/library/string.html#format-specification-mini-language" rel="nofollow noreferrer">format documentation</a> it is not clear to me which <code>fmt</code> choice will result in lossless integer storage.</p>
<p>What is the appropriate <code>fmt</code> setting I should use?</p>
|
<p>Use <code>'%s'</code> or <code>'%r'</code>, which simply call <code>str</code> or <code>repr</code> on the elements of your array respectively.</p>
<p>Also, you're reading the wrong format string documentation. (It's the format string documentation the <code>numpy.savetxt</code> docs link to, but it's still wrong.) <code>numpy.savetxt</code> uses old-school <code>%</code> formatting, documented <a href="https://docs.python.org/3/library/stdtypes.html#old-string-formatting" rel="nofollow noreferrer">here</a>.</p>
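<p>A quick check (a sketch) showing that every digit survives:</p>
<pre><code>import numpy as np

a = np.array([[123456789012345678, 2], [3, 4]], dtype=np.int64)
np.savetxt('out.txt', a, fmt='%s')   # writes 123456789012345678 2, no exponent notation
# for integer dtypes, fmt='%d' gives the same lossless result
</code></pre>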
|
python|numpy|precision
| 4
|
2,630
| 62,546,306
|
How to create a text file based on the unique values of a dataframe column?
|
<p>I have an excel table with 2 columns ('NE' and 'Interface'), and what I want to do is: edit a .txt-file template (which I already have, shown below) with each Interface value, and then concatenate the txt-files which belong to the same group of 'NE'.</p>
<p>This is my excel:</p>
<p><a href="https://i.stack.imgur.com/KuWhL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KuWhL.png" alt="excel" /></a></p>
<p>This is my txt-file template; I want to replace "Interface" with the interface value from the excel:</p>
<pre><code>conf t
**$interface**
no service-policy input QOS-IN_ACCESS
end
conf t
no policy-map QOS-IN_ACCESS
policy-map QOS-IN_ACCESS
class DSCP_VOIX_SIG
set mpls experimental imposition 5
set qos-group 5
end
conf t
**$interface**
service-policy input QOS-IN_ACCESS
end
</code></pre>
<p>and this is my code (I already concatenate the files; what I still need to do is group them by 'NE'):</p>
<pre class="lang-py prettyprint-override"><code>from string import Template
import pandas as pd
df3 = pd.read_excel(r"C:\Users\audit_policymap.xlsx")
with open(r"C:\Users\audit_policymap.txt") as fp:
template = Template(fp.read())
content2 = ''
content3 = ''
for i in range(len(df3)):
file_name = df.loc[i, "NE"] + '_output.txt'
with open(file_name, 'w') as fp:
content = template.substitute(interface=df.loc[i, "Interface"])
if df.loc[i, "NE"] == df.loc[i+1, "NE"]:
content2 = str(content2)+'\n'+str(content)+'\n'
content3 = str(content2)+'\n'
fp.write(content2)
else:
content2 = ''
content3 = str(content3)+'\n'+str(content)+'\n'
fp.write(content3)
</code></pre>
<p><strong>Summarizing: I want to have one txt-file per 'NE' edited with all the interfaces according to their corresponding 'NE'</strong></p>
|
<ul>
<li>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer">pandas.DataFrame.groupby</a> on the <code>NE</code> column.
<ul>
<li>This returns a <code>DataFrameGroupBy</code> object, where <code>i</code> is the unique groupby value from <code>NE</code>, and <code>g</code> is the associated group.</li>
<li>The for-loop will iterate through each unique value in <code>NE</code></li>
</ul>
</li>
<li>Use an f-string to specify a unique file name (e.g. <code>f'{i}_output.txt'</code>)
<ul>
<li><code>'250002-PEFTTS-2_output.txt'</code></li>
</ul>
</li>
<li>Because all values in <code>g</code> belong only to one of the unique values in <code>NE</code>, there's no need to check if <code>NE</code> matches for each row, as you've done in the question.</li>
<li><code>[str(template.substitute(interface=row)) for row in g.Interface]</code> is a list-comprehension which, for each row in <code>g.Interface</code>, adds <code>str(template.substitute(interface=row))</code> to a list.
<ul>
<li><code>'\n'.join()</code> joins each item in the list as a string separated by a newline.</li>
</ul>
</li>
</ul>
<pre class="lang-py prettyprint-override"><code>for i, g in df.groupby('NE'): # iterate through unique values in NE
    file_name = f'{i}_output.txt'            # one output file per unique NE value
with open(file_name, 'w') as fp: # open the file
content = '\n'.join([str(template.substitute(interface=row)) for row in g.Interface])
fp.write(content) # write content to file
</code></pre>
|
python|excel|pandas|export|concatenation
| 1
|
2,631
| 54,380,560
|
Exporting /writing to Excel tabs from a Multi-Index Pandas DataFrame
|
<p>I'd like to split/slice a multi-index dataframe by the first index '0' into a dataframe for each level of the first index (for example below there would be 4 dataframes). I would then like to export each dataframe into a separate tab in EXCEL. The most important problem I'd like help on is how to write a loop or list comprehension that would split the multi-index dataframe into separate dataframes.</p>
<p>The Example Dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
arrays = [
np.array(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux']),
np.array(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'])
]
index = pd.MultiIndex.from_tuples(list(zip(*arrays)), names=['IDX1', 'IDX2'])
df = pd.DataFrame(np.random.randn(3, 8), index=['A', 'B', 'C'], columns=index)
df2 = df.T
</code></pre>
<p>The resulting df2 multiindex example dataframe:</p>
<p><a href="https://i.stack.imgur.com/IhRDV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IhRDV.png" alt="enter image description here"></a></p>
<p>I'd like to create a dataframe for each level of IDX1 and export each 1 into a separate tab.</p>
<pre><code># Create a Pandas Excel writer using XlsxWriter as the engine.
writer = pd.ExcelWriter('pandas_multiple.xlsx', engine='xlsxwriter')
# Write each dataframe to a different worksheet.
df1.to_excel(writer, sheet_name='bar')
df2.to_excel(writer, sheet_name='baz')
df3.to_excel(writer, sheet_name='foo')
df4.to_excel(writer,sheet_name = 'qux')
</code></pre>
|
<p>Use</p>
<pre><code>for idx in df2.index.get_level_values('IDX1').unique():
temp = df2.loc[idx]
temp.to_excel(writer, sheet_name=idx)
</code></pre>
<p>Loop over all unique values of the index by using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.get_level_values.html" rel="nofollow noreferrer"><code>get_level_values</code></a>, then use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>.loc</code></a> to select the sub-<code>DataFrame</code>. You can then write this sub-<code>DataFrame</code> to excel using your predefined Writer.</p>
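<p>One extra detail: after the loop the workbook still has to be flushed to disk, otherwise the file may stay empty. A sketch, noting that newer pandas versions use <code>close()</code> instead of <code>save()</code>:</p>
<pre><code>for idx in df2.index.get_level_values('IDX1').unique():
    df2.loc[idx].to_excel(writer, sheet_name=idx)
writer.save()  # writer.close() on newer pandas, or use "with pd.ExcelWriter(...) as writer:"
</code></pre>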
|
python|excel|list-comprehension|pandas-groupby
| 2
|
2,632
| 73,819,961
|
Pandas: Check values between columns in different dataframes and return list of multiple possible values into a new column
|
<p>I am trying to compare two columns from two different dataframes and return all possible matches using Python (kind of an XLOOKUP in Excel, but with multiple possible matches):</p>
<p>Please see the details below for sample dataframes and work I attempted.</p>
<p>An explanation of the datasets below: Mark does not own any cars, however, there are several listed under his name, which we know that none belong to him. I am attempting to look at dataframe 1 (Marks) and compare it against the larger dataset that has all other owners and their cars: dataframe 2 (claimed) and return possible owners for Mark's cars as shown below.</p>
<pre><code>Dataframe 1 : Marks
Marks = pd.DataFrame({'Car Brand': ['Jeep','Jeep','BMW','Volvo'],'Owner Name': ['Mark',
'Mark', 'Mark', 'Mark']})
Car Brand Owner Name
0 Jeep Mark
1 Jeep Mark
2 BMW Mark
3 Volvo Mark
</code></pre>
<p>Dataframe 2: claimed</p>
<pre><code>Dataframe 2: claimed
claimed = pd.DataFrame({'Car Brand': ['Dodge', 'Jeep', 'BMW', 'Merc', 'Volvo', 'Jeep',
'Volvo'], 'Owner Name': ['Chris', 'Frank','Rob','Kelly','John','Chris','Kelly']})
Car Brand Owner Name
0 Dodge Chris
1 Jeep Frank
2 BMW Rob
3 Merc Kelly
4 Volvo John
5 Jeep Chris
6 Volvo Kelly
</code></pre>
<p>The data does have duplicate car brand names; however, the owner names are unique, meaning that Kelly, even though she is mentioned twice, is the same person. The same goes for Chris, etc.</p>
<p>I want my Mark's dataframe to have a new column that looks like this:</p>
<pre><code>Car Brand Owner Name Possible Owners
0 Jeep Mark [Frank, Chris]
1 Jeep Mark [Frank, Chris]
2 BMW Mark Rob
3 Volvo Mark [John, Kelly]
</code></pre>
<p>I have tried the below codes:</p>
<pre><code>possible_owners = list()
for cars in Marks['Car Brand']:
for car_brands in claimed['Car Brand']:
if Marks.loc[Marks['Car Brand'].isin(claimed['Car Brand'])]:
sub = list()
sub.append()
possible_owners.append(sub)
else:
not_found = 'No possible Owners Identified'
possible_owners.append(not_found)
#Then I will add possible_owners as a new column to Marks
error code:ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(),
a.item(), a.any() or a.all().
</code></pre>
<p>I have also tried to do a merge, excel xlookup but (that has many limitations), and I am stuck trying to understand how to return possible matches even if there are multiple and line them up in one row.</p>
<p><strong>Question:</strong> how can I compare the two frames, return possible values from the Owner Name column and put these values in a new column in Marks' table?</p>
<p>Excuse my code, I am fairly new to Python.</p>
|
<p>You can always use a list comprehension with <code>Series.isin</code> to do the work.</p>
<pre><code>result = [claimed[claimed['Car Brand'].isin([i])]['Owner Name'].to_numpy() for i in Marks['Car Brand']]
Marks['Possible Owners'] = result
Car Brand Owner Name Possible Owners
0 Jeep Mark [Frank, Chris]
1 Jeep Mark [Frank, Chris]
2 BMW Mark [Rob]
3 Volvo Mark [John, Kelly]
</code></pre>
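<p>An alternative sketch that avoids the Python-level loop entirely: build the brand-to-owners lookup once with <code>groupby</code>, then <code>map</code> it onto Marks (brands missing from <code>claimed</code> come back as NaN):</p>
<pre><code>lookup = claimed.groupby('Car Brand')['Owner Name'].agg(list)
Marks['Possible Owners'] = Marks['Car Brand'].map(lookup)
</code></pre>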
|
python|pandas|list|dataframe
| 1
|
2,633
| 73,554,325
|
How to read data and write to variables from dictionary
|
<p>I have a dictionary like:</p>
<pre><code>mapping = {"Filename1": 999, "Filename2": "998"}
</code></pre>
<p>I have a process where I define a variable 'Filename1' with:</p>
<pre><code>import pandas as pd
read="Filename1"
code=999
df=pd.read_csv(f'{read}.csv')
df['new_col'] = code
Filename1 = df
</code></pre>
<p>In short: read each file, add a new column with its 'code', and assign the result to a variable with the same filename.</p>
<p>How can I loop this process through the dictionary so that it repeats for all filenames and their respective 'codes', and writes filenames as variables?</p>
|
<p>How about</p>
<pre class="lang-py prettyprint-override"><code>for read, code in mapping.items():
df=pd.read_csv(f'{read}.csv')
df['new_col'] = code
df.to_csv(f'{read}_new.csv', index=False)
</code></pre>
<p>You haven't specified how you want to write the DataFrame to disk, but you could modify the last line accordingly</p>
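<p>If the goal really is to end up with one object per file name (rather than writing back to disk), a dictionary keyed by the file name is a safer pattern than creating variables dynamically. A sketch:</p>
<pre><code>import pandas as pd

frames = {}
for read, code in mapping.items():
    df = pd.read_csv(f'{read}.csv')
    df['new_col'] = code
    frames[read] = df       # access later as frames['Filename1'], etc.
</code></pre>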
|
python|pandas
| 2
|
2,634
| 73,626,750
|
Cannot plot or use .tolist() on pd dataframe column
|
<p>so I am reading in data from a csv and saving it to a dataframe so I can use the columns. Here is my code:</p>
<pre><code>filename = open(r"C:\Users\avalcarcel\Downloads\Data INSTR 9 8_16_2022 11_02_42.csv")
columns = ["date","time","ch104","alarm104","ch114","alarm114","ch115","alarm115","ch116","alarm116","ch117","alarm117","ch118","alarm118"]
df = pd.read_csv(filename,sep='[, ]',encoding='UTF-16 LE',names=columns,header=15,on_bad_lines='skip',engine='python')
length_ = len(df.date)
scan = list(range(1,length_+1))
plt.plot(scan,df.ch104)
plt.show()
</code></pre>
<p>When I try to plot scan vs. df.ch104, I get the following exception thrown:</p>
<blockquote>
<p>'value' must be an instance of str or bytes, not a None</p>
</blockquote>
<p>So what I thought to do was make each column in my df a list:</p>
<pre><code>ch104 = df.ch104.tolist()
</code></pre>
<p>But it is turning my data from this to this:
<a href="https://i.stack.imgur.com/hq0K5.png" rel="nofollow noreferrer">before .tolist()</a></p>
<p>To this:
<a href="https://i.stack.imgur.com/32lkF.png" rel="nofollow noreferrer">after .tolist()</a></p>
<p>This also happens when I use <code>df.ch104.values.tolist()</code></p>
<p>Can anyone help me? I haven't used python/pandas in a while and I am just trying to get the data read in first. Thanks!</p>
|
<p>So, the <code>df.ch104.values.tolist()</code> call basically turns your column into a 2D 1xN array, but what you want is a 1D array of size N.</p>
<p>So transpose it before you call <code>.tolist()</code>, and lastly call <code>[0]</code> to convert the <strong>Nx1 array</strong> to an <strong>N array</strong>:</p>
<pre><code>df.ch104.values.tolist()[0]
</code></pre>
<hr />
<p>Might I also suggest you include <strong>dropna()</strong> to avoid <code>'value' must be an instance of str or bytes, not a None</code>:</p>
<pre><code>df.dropna(subset=['ch104']).ch104.values.tolist()[0]
</code></pre>
|
python|pandas|dataframe|csv|matplotlib
| 0
|
2,635
| 71,333,010
|
Use dataframe column containing "column name strings", to return values from dataframe based on column name and index without using .apply()
|
<p>I have a dataframe as follows:</p>
<pre><code>df=pandas.DataFrame()
df['A'] = numpy.random.random(10)
df['B'] = numpy.random.random(10)
df['C'] = numpy.random.random(10)
df['Col_name'] = numpy.random.choice(['A','B','C'],size=10)
</code></pre>
<p>I want to obtain an output that uses 'Col_name' and the respective index of the dataframe row to look up the value in the dataframe.
I can get the desired output with .apply() as follows:</p>
<pre><code>df['output'] = df.apply(lambda x: x[ x['Col_name'] ], axis=1)
</code></pre>
<p>.apply() is slow over a large dataframe with it iterating row by row. Is there an obvious solution in pandas that is faster/vectorised?</p>
|
<p>Use <code>melt</code> to flatten your dataframe and keep rows where <code>Col_name</code> equals to <code>variable</code> column:</p>
<pre><code>df['output'] = df.melt('Col_name', ignore_index=False).query('Col_name == variable')['value']
print(df)
# Output
A B C Col_name output
0 0.202197 0.430735 0.093551 B 0.430735
1 0.344753 0.979453 0.999160 C 0.999160
2 0.500904 0.778715 0.074786 A 0.500904
3 0.050951 0.317732 0.363027 B 0.317732
4 0.722624 0.026065 0.424639 C 0.424639
5 0.578185 0.626698 0.376692 C 0.376692
6 0.540849 0.805722 0.528886 A 0.540849
7 0.918618 0.869893 0.825991 C 0.825991
8 0.688967 0.203809 0.734467 B 0.203809
9 0.811571 0.010081 0.372657 B 0.010081
</code></pre>
<p>Transformation after <code>melt</code>:</p>
<pre><code>>>> df.melt('Col_name', ignore_index=False)
Col_name variable value
0 B A 0.202197
1 C A 0.344753
2 A A 0.500904 # keep
3 B A 0.050951
4 C A 0.722624
5 C A 0.578185
6 A A 0.540849 # keep
7 C A 0.918618
8 B A 0.688967
9 B A 0.811571
0 B B 0.430735 # keep
1 C B 0.979453
2 A B 0.778715
3 B B 0.317732 # keep
4 C B 0.026065
5 C B 0.626698
6 A B 0.805722
7 C B 0.869893
8 B B 0.203809 # keep
9 B B 0.010081 # keep
0 B C 0.093551
1 C C 0.999160 # keep
2 A C 0.074786
3 B C 0.363027
4 C C 0.424639 # keep
5 C C 0.376692 # keep
6 A C 0.528886
7 C C 0.825991 # keep
8 B C 0.734467
9 B C 0.372657
</code></pre>
<p><strong>Update</strong></p>
<p>Alternative with <code>set_index</code> and <code>stack</code> for @Rabinzel:</p>
<pre><code>df['output'] = (
df.set_index('Col_name', append=True).stack()
.loc[lambda x: x.index.get_level_values(1) == x.index.get_level_values(2)]
.droplevel([1, 2])
)
print(df)
# Output
A B C Col_name output
0 0.209953 0.332294 0.812476 C 0.812476
1 0.284225 0.566939 0.087084 A 0.284225
2 0.815874 0.185154 0.155454 A 0.815874
3 0.017548 0.733474 0.766972 A 0.017548
4 0.494323 0.433719 0.979399 C 0.979399
5 0.875071 0.789891 0.319870 B 0.789891
6 0.475554 0.229837 0.338032 B 0.229837
7 0.123904 0.397463 0.288614 C 0.288614
8 0.288249 0.631578 0.393521 A 0.288249
9 0.107245 0.006969 0.367748 C 0.367748
</code></pre>
|
python|pandas|dataframe
| 1
|
2,636
| 71,410,450
|
python pandas print specific location of a datetime object
|
<p>I have looked at other solutions online but none of them work on my dataframe.
I want to get the exact location of a specific datetime object but this code produces this keyerror <code>KeyError: '2018-1-31'</code></p>
<pre><code>import pandas as pd
data=pd.DataFrame()
dti = pd.date_range("2018-01-01", periods=10, freq="M")
data['dti']=dti
print((data['dti'].loc['2018-1-31'])) # it should print 0 since this date is in the first row
</code></pre>
|
<p>'dti' is not the index, so you cannot use <code>loc</code> directly. You need to generate a boolean Series first:</p>
<pre><code>data.loc[data['dti'].eq('2018-1-31'), 'dti']
</code></pre>
<p>output:</p>
<pre><code>0 2018-01-31
Name: dti, dtype: datetime64[ns]
</code></pre>
<p>to get the index:</p>
<pre><code>data.loc[data['dti'].eq('2018-1-31'), 'dti'].index
</code></pre>
|
python|pandas|datetime|location
| 0
|
2,637
| 52,161,380
|
Does condition selection preserve order in Pandas DataFrame?
|
<p>For example,</p>
<pre><code>df = pandas.DataFrame({'name':['a','b','c'], 'age':[10,20,30]})
name age
0 a 10
1 b 20
2 c 30
df[df['age'] > 10]
name age
1 b 20
2 c 30
</code></pre>
<p>My question is: Does Pandas make sure the index order is preserved?
Is it at all possible that the result could look like this:</p>
<pre><code> name age
2 c 30
1 b 20
</code></pre>
<p>Thanks</p>
|
<p>Yes, filtering preserves the order of rows (and their index values).</p>
<p>You need to sort by column <code>age</code> if you want to change the ordering:</p>
<pre><code>df1 = df[df['age'] > 10].sort_values('age', ascending=False)
print (df1)
name age
2 c 30
1 b 20
</code></pre>
|
python|pandas|dataframe
| 3
|
2,638
| 52,108,313
|
How to split data to multiple rows in pandas on one condition?
|
<p>I have the data in the below dataframe as:-</p>
<pre><code>id name value year quarter
1 an 2.3 2012 1
2 yu 3.5 2012 2
3 ij 3.1 2013 4
4 ij 2.1 2013 1
</code></pre>
<p>to be converted to the dataframe below, i.e. derive the months from the quarter and split each row into 3. </p>
<pre><code>id name value year quarter month
1 an 2.3 2012 1 01
1 an 2.3 2012 1 02
1 an 2.3 2012 1 03
2 yu 3.5 2012 2 04
2 yu 3.5 2012 2 05
2 yu 3.5 2012 2 06
3 ij 3.1 2013 4 10
3 ij 3.1 2013 4 11
3 ij 3.1 2013 4 12
4 ij 2.1 2013 1 01
4 ij 2.1 2013 1 02
4 ij 2.1 2013 1 03
</code></pre>
|
<p>Create a quarter to month dataframe to merge on</p>
<pre><code>q2m = pd.DataFrame([
[(m - 1) // 3 + 1, m] for m in range(1, 13)],
columns=['quarter', 'month']
)
df.merge(q2m)
id name value year quarter month
0 1 an 2.3 2012 1 1
1 1 an 2.3 2012 1 2
2 1 an 2.3 2012 1 3
3 2 yu 3.5 2012 2 4
4 2 yu 3.5 2012 2 5
5 2 yu 3.5 2012 2 6
6 3 ij 3.1 2013 4 10
7 3 ij 3.1 2013 4 11
8 3 ij 3.1 2013 4 12
</code></pre>
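<p>An alternative sketch that computes the months arithmetically instead of merging (it assumes the index of <code>df</code> is unique, as in the sample):</p>
<pre><code>out = df.loc[df.index.repeat(3)].copy()                 # three rows per original row
out['month'] = (out['quarter'] - 1) * 3 + out.groupby(level=0).cumcount() + 1
out = out.reset_index(drop=True)
</code></pre>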
|
python|pandas|date
| 3
|
2,639
| 60,355,337
|
How to 'quickly' add row data values in pandas with respect to a certain column?
|
<p>Please see the attached image below.
I have a data set which record value for each time instance (Time Stamp) in the data set.
Now, I want a single data record (for each column) for any given time. For example, the red box (for 'Time Stamp' 23054350) that I have made should sum up to be a single 'SMS in', 'SMS out', 'Call in' etc.
Similar example can be seen for other 'Time Stamp'. Note that all the instances of time should be summed together.</p>
<p>I know I can run a loop to solve this problem. But my data is very huge and I have multiple files (of huge data per file) and running a loop is very inefficient. Can I do it in a quicker way, sort of using vectorized implementation?</p>
<p><a href="https://i.stack.imgur.com/GSj7C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GSj7C.png" alt="enter image description here"></a></p>
|
<p>Try this</p>
<pre><code>df.groupby('Time Stamp').sum()
</code></pre>
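<p>If you want <code>Time Stamp</code> back as a regular column instead of the index, add <code>reset_index()</code>:</p>
<pre><code>df.groupby('Time Stamp').sum().reset_index()
</code></pre>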
|
pandas|dataframe
| 1
|
2,640
| 60,508,218
|
Keras CNN Model accuracy remaining relatively the same and val_accuracy not improving
|
<p>I am trying to train a model to identify between malignant and benign images using Keras, however I am not achieving the results I had hoped for. The dataset is categorized well and gathered from the ISIC - Archive (<a href="https://www.isic-archive.com/" rel="nofollow noreferrer">https://www.isic-archive.com/</a>). I have tried to change the learning rate multiple times but to no avail...<a href="https://i.stack.imgur.com/le3Uh.png" rel="nofollow noreferrer">results from one of the training intervals</a></p>
<p>below is the code I am using to train my model using the Adam Optimizer:</p>
<pre><code># In[1]:
from keras.callbacks import ModelCheckpoint
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from PIL import ImageFile
from tqdm import tqdm
from keras.preprocessing import image
from sklearn.datasets import load_files
from keras.utils import np_utils
import numpy as np
from glob import glob
import keras
# define function to load train, test, and validation datasets
def load_dataset(path):
data = load_files(path)
condition_files = np.array(data['filenames'])
condition_targets = np_utils.to_categorical(np.array(data['target']), 2)
print(condition_targets)
return condition_files, condition_targets
# load train, test, and validation datasets
train_files, train_targets = load_dataset(
'/Users/Grampun/Desktop/ISIC-Archive-Downloader-master/data_set/training_data')
valid_files, valid_targets = load_dataset(
'/Users/Grampun/Desktop/ISIC-Archive-Downloader-master/data_set/valid_data')
test_files, test_targets = load_dataset(
'/Users/Grampun/Desktop/ISIC-Archive-Downloader-master/data_set/test_data')
# load list of labels
condition_names = [item[58:-1] for item in sorted(
glob("/Users/Grampun/Desktop/ISIC-Archive-Downloader-master/data_set/training_data/*/"))]
print(condition_names)
# print statistics about the dataset
print('There are %d total categories.' % len(condition_names))
print('There are %s total images.\n' %
len(np.hstack([train_files, valid_files, test_files])))
print('There are %d training images.' % len(train_files))
print('There are %d validation images.' % len(valid_files))
print('There are %d test images.' % len(test_files))
# In[2]:
def path_to_tensor(img_path):
# loads RGB image as PIL.Image.Image type
img = image.load_img(img_path, target_size=(224, 224))
# convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
x = image.img_to_array(img)
# convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor
return np.expand_dims(x, axis=0)
def paths_to_tensor(img_paths):
list_of_tensors = [path_to_tensor(img_path)
for img_path in tqdm(img_paths)]
return np.vstack(list_of_tensors)
ImageFile.LOAD_TRUNCATED_IMAGES = True
# pre-process the data for Keras
train_tensors = paths_to_tensor(train_files).astype('float32')/255
valid_tensors = paths_to_tensor(valid_files).astype('float32')/255
test_tensors = paths_to_tensor(test_files).astype('float32')/255
# (IMPLEMENTATION) Model Architecture
#
# In[4]:
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=(224, 224, 3)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='sigmoid'))
model.summary()
# Compile the Model
# In[ ]:
# opt = keras.optimizers.Adadelta()
opt = keras.optimizers.Adam(lr=0.00003, beta_1=0.9,
beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])
# ### Train the Model
#
# In[ ]:
# TODO: specify the number of epochs that you would like to use to train the model.
epochs = 20
checkpointer = ModelCheckpoint(filepath='weights.best.from_scratch.6.hdf5',
verbose=1, save_best_only=True)
model.fit(train_tensors, train_targets,
validation_data=(valid_tensors, valid_targets),
epochs=epochs, batch_size=10, callbacks=[checkpointer], verbose=1)
# ### Load the Model with the Best Validation Loss
# In[5]:
model.load_weights('weights.best.from_scratch.6.hdf5')
# ### Test the Model
#
# In[6]:
# get index of predicted label for each image in test set
condition_predictions = [np.argmax(model.predict(
np.expand_dims(tensor, axis=0))) for tensor in test_tensors]
# report test accuracy
test_accuracy = 100*np.sum(np.array(condition_predictions) ==
np.argmax(test_targets, axis=1))/len(condition_predictions)
print('Test accuracy: %.4f%%' % test_accuracy) # confusion matrix
</code></pre>
<p>Any help on this would be greatly appreciated (this is my first ML project and am still learning).
Thanks!</p>
|
<p>This line of code is the source of your problem: <code>model.add(Dense(2, activation='sigmoid'))</code>.</p>
<p>Either use:</p>
<ol>
<li><code>model.add(Dense(2, activation='softmax'))</code></li>
<li><code>model.add(Dense(1, activation='sigmoid'))</code></li>
</ol>
<p>Note that in case (1) you need to use 'categorical_crossentropy' instead of 'binary_crossentropy'. Therefore, you will also have to change</p>
<p><code>model.compile(optimizer=opt, loss='binary_crossentropy', metrics=['accuracy'])</code> to</p>
<p><code>model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])</code></p>
<p>In case (2), keep 'binary_crossentropy', but the targets must then be a single 0/1 column rather than the two-column one-hot encoding produced by <code>np_utils.to_categorical(..., 2)</code>.</p>
|
python|tensorflow|machine-learning|image-processing|keras
| 0
|
2,641
| 60,375,395
|
Minimum if condition is met in pandas dataframe
|
<p>I have a data frame;</p>
<pre><code>Date Price Product
1/1/12 22 Pen
1/2/12 44 Paper
1/2/12 33 Paper
1/3/12 34 Paper
</code></pre>
<p>And I want to just have the min value if there are duplicates for Date and Product. </p>
<p>So the expected output is </p>
<pre><code>Date Price Product
1/1/12 22 Pen
1/2/12 33 Paper
1/3/12 34 Paper
</code></pre>
<p>I am happy to keep the data in the flat file format or create a time series pivot table. </p>
<p>The only option I can currently see is to sort by price (highest to lowest), then remove duplicates and keep 'last', but I was keen to explore whether there is a better way to do this.</p>
|
<pre><code>df.sort_values('Price', ascending=False).groupby(['Date','Product'],sort=False).last()
Price
Date Product
1/2/12 Paper 33
1/3/12 Paper 34
1/1/12 Pen 22
</code></pre>
<p>Feedback from cs95 was accurate.</p>
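<p>An equivalent one-liner that skips the sort (a sketch; it keeps only the grouping columns plus the minimum price):</p>
<pre><code>df.groupby(['Date', 'Product'], as_index=False)['Price'].min()
</code></pre>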
|
pandas|min
| 2
|
2,642
| 72,709,699
|
How can I loop through every item of multiple list with special condition?
|
<p>I have 2 dataframes as below:</p>
<p>df1:</p>
<pre><code>df1 = pd.DataFrame({'feature1':['a1','a1','a1','b1','b1','b1'], 'value': [1,2,3,4,5,6]})
df1
</code></pre>
<p>df2:</p>
<pre><code>df2 = pd.DataFrame({'feature1':['c1','c1','c1','c2','c2','c2'], 'value2': [1,2,3,1,2,3]})
df2
</code></pre>
<p>My goal is to yield this result:</p>
<ul>
<li>Which a1 loops with c1 ; b1 loops with c2</li>
</ul>
<pre><code>| feature1 | value | feature2 | value2|
| -------- | ----- | -------- | ----- |
| a1 | 1 | c1 | 1 |
| a1 | 1 | c1 | 2 |
| a1 | 1 | c1 | 3 |
| a1 | 2 | c1 | 1 |
| a1 | 2 | c1 | 2 |
| a1 | 2 | c1 | 3 |
| a1 | 3 | c1 | 1 |
| a1 | 3 | c1 | 2 |
| a1 | 3 | c1 | 3 |
| b1 | 4 | c2 | 1 |
| b1 | 4 | c2 | 2 |
| b1 | 4 | c2 | 3 |
| b1 | 5 | c2 | 1 |
| b1 | 5 | c2 | 2 |
| b1 | 5 | c2 | 3 |
| b1 | 6 | c2 | 1 |
| b1 | 6 | c2 | 2 |
| b1 | 6 | c2 | 3 |
</code></pre>
<p>What I have done is as below:</p>
<ol>
<li>Convert the value & value2 into 2 lists:</li>
</ol>
<pre><code>list1 = df1[df1.columns[1]].values.tolist()
list1
output: [1, 2, 3, 4, 5, 6]
</code></pre>
<pre><code>list2 = df2[df2.columns[1]].values.tolist()
list2
output: [1, 2, 3, 1, 2, 3]
</code></pre>
<ol start="2">
<li>Do a multiloop iteration using list comprehension:</li>
</ol>
<pre><code>lim1, lim2 = [], []
for x, y in [(x,y) for x in list1 for y in list2]:
#print(x, y, z)
lim1.append(x)
lim2.append(y)
df_limit = pd.DataFrame({
"value": lim1,
"value2": lim2,
})
</code></pre>
<p>The result loops entire columns instead of what I need:</p>
<pre><code>
value value2
0 1 1
1 1 2
2 1 3
3 1 1
4 1 2
5 1 3
6 2 1
7 2 2
8 2 3
9 2 1
10 2 2
11 2 3
12 3 1
13 3 2
14 3 3
15 3 1
16 3 2
17 3 3
18 4 1
19 4 2
20 4 3
21 4 1
22 4 2
23 4 3
24 5 1
25 5 2
26 5 3
27 5 1
28 5 2
29 5 3
30 6 1
31 6 2
32 6 3
33 6 1
34 6 2
35 6 3
</code></pre>
<p>I am trying to figure out whether using df.groupby() on the features together with a list comprehension would help, but so far I have been unable to proceed...</p>
<p>The real-life example is much more complicated than this, as there are more than 100 combinations, so I am looking for a more scalable, iterable way to do this.</p>
|
<p>Loops are basically <em>never</em> the answer when it comes to pandas.</p>
<p>Filtering after cross joining everything:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'feature':['a1','a1','a1','b1','b1','b1'], 'value': [1,2,3,4,5,6]})
df2 = pd.DataFrame({'feature':['c1','c1','c1','c2','c2','c2'], 'value': [1,2,3,1,2,3]})
df = df1.merge(df2, 'cross', suffixes=['1', '2'])
out = df[df.feature1.eq('a1') & df.feature2.eq('c1') | df.feature1.eq('b1') & df.feature2.eq('c2')].reset_index(drop=True)
print(out)
</code></pre>
<p>Output:</p>
<pre><code> feature1 value1 feature2 value2
0 a1 1 c1 1
1 a1 1 c1 2
2 a1 1 c1 3
3 a1 2 c1 1
4 a1 2 c1 2
5 a1 2 c1 3
6 a1 3 c1 1
7 a1 3 c1 2
8 a1 3 c1 3
9 b1 4 c2 1
10 b1 4 c2 2
11 b1 4 c2 3
12 b1 5 c2 1
13 b1 5 c2 2
14 b1 5 c2 3
15 b1 6 c2 1
16 b1 6 c2 2
17 b1 6 c2 3
</code></pre>
<hr />
<p>Filtering before cross joining:</p>
<pre><code>a1_c1 = [df1[df1.feature.eq('a1')], df2[df2.feature.eq('c1')]]
b1_c2 = [df1[df1.feature.eq('b1')], df2[df2.feature.eq('c2')]]
dfs = []
for pair in [a1_c1, b1_c2]:
temp_df = pd.merge(*pair, how='cross', suffixes=['1','2'])
dfs.append(temp_df)
df = pd.concat(dfs, ignore_index=True)
print(df)
</code></pre>
<p>Output:</p>
<pre><code> feature1 value1 feature2 value2
0 a1 1 c1 1
1 a1 1 c1 2
2 a1 1 c1 3
3 a1 2 c1 1
4 a1 2 c1 2
5 a1 2 c1 3
6 a1 3 c1 1
7 a1 3 c1 2
8 a1 3 c1 3
9 b1 4 c2 1
10 b1 4 c2 2
11 b1 4 c2 3
12 b1 5 c2 1
13 b1 5 c2 2
14 b1 5 c2 3
15 b1 6 c2 1
16 b1 6 c2 2
17 b1 6 c2 3
</code></pre>
|
python|pandas|list-comprehension
| 1
|
2,643
| 72,495,177
|
Convert string duration column to seconds
|
<p>In the dataframe, one of the columns is duration. It was given as a string.</p>
<pre><code>index duration
1 1 hour, 2 minutes, 21 seconds
2 1 hour, 2 minutes, 26 seconds
3 1 hour, 2 minutes, 41 seconds
4 1 hour, 4 minutes, 39 seconds
5 1 hour, 42 seconds
6 6 minutes, 7 seconds
7 9 minutes, 7 seconds
8 9 minutes, 9 seconds
9 9 minutes, 9 seconds
10 9 minutes, 9 seconds
</code></pre>
<p>How can I convert this column into seconds?</p>
|
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Timedelta.html?highlight=timedelta#pandas.Timedelta" rel="nofollow noreferrer"><code>pd.Timedelta</code></a> to parse each item:</p>
<pre><code>df['duration'] = df['duration'].apply(pd.Timedelta).dt.total_seconds().astype(int)
</code></pre>
<p>Output:</p>
<pre><code>>>> df
duration
0 3741
1 3746
2 3761
3 3879
4 3642
5 367
6 547
7 549
8 549
9 549
</code></pre>
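<p>Equivalently, <code>pd.to_timedelta</code> should be able to parse the whole column at once, without <code>apply</code> (a sketch, assuming it accepts the same strings as <code>pd.Timedelta</code>):</p>
<pre><code>df['duration'] = pd.to_timedelta(df['duration']).dt.total_seconds().astype(int)
</code></pre>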
|
python|pandas|string|datetime
| 1
|
2,644
| 72,493,838
|
TensorFlow Error: dictionary update sequence element #0 has length 6; 2 is required
|
<p>I am new to Python and machine learning. I am getting an error whenever I run the following code:</p>
<pre><code>def make_input_fn(data_df, label_df, num_epochs=10, shuffle=True, batch_size=32):
def input_function():
ds = tf.data.Dataset.from_tensor_slices((dict(data_df), label_df))
if shuffle:
ds = ds.shuffle(1000)
ds = ds.batch(batch_size).repeat(num_epochs)
return ds
return input_function
train_input_fn = make_input_fn(X_train, y_train)
eval_input_fn = make_input_fn(X_test, y_test, num_epochs=1, shuffle=False)
linear_est = tf.estimator.LinearClassifier(feature_columns=feature_columns)
linear_est.train(train_input_fn)
result = linear_est.evaluate(eval_input_fn)
clear_output()
print(result['accuracy'])
</code></pre>
<p>Error:</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-38-47fb35491976> in <module>()
----> 1 linear_est.train(train_input_fn)
2 result = linear_est.evaluate(eval_input_fn)
3
4 clear_output()
5 print(result['accuracy'])
5 frames
<ipython-input-36-16cfc32eb7b2> in input_function()
1 def make_input_fn(X_train, y_train, num_epochs=10, shuffle=True, batch_size=32):
2 def input_function():
----> 3 ds = tf.data.Dataset.from_tensor_slices((dict(X_train), y_train))
4 if shuffle:
5 ds = ds.shuffle(1000)
ValueError: dictionary update sequence element #0 has length 6; 2 is required
</code></pre>
<p>I am not sure if the issue would be related to my data or data types. My data has no blanks.</p>
|
<p>You are getting this error because the dictionary is being created from a sequence that is not in the proper format.</p>
<p>For example:</p>
<pre><code>t1 = ('a','b','c','d')
t2 = (1,2,3,4)
dict(t1) #output: ValueError: dictionary update sequence element #0 has length 1; 2 is required
dict(zip(t1,t2)) #output: {'a': 1, 'b': 2, 'c': 3, 'd': 4}
</code></pre>
<p>Creating the dictionary with the <code>dict</code> function in the proper format will not produce that error. In your case, <code>dict(X_train)</code> most likely fails because <code>X_train</code> is a NumPy array, so <code>dict()</code> tries to interpret each row of 6 values as a key/value pair; a pandas DataFrame, by contrast, would map column names to columns. Thank you!</p>
|
python|tensorflow|machine-learning
| 0
|
2,645
| 59,720,243
|
Take elements of an array based on indexes in another array
|
<p>I have an array of values on one side:</p>
<pre><code>A = np.arange(30).reshape((3, 10))
Out: array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29]])
</code></pre>
<p>And an array of indexes referencing it where each column references each row in A.</p>
<pre><code>np.random.seed(0)
index = np.random.randint(0, 9, 6).reshape((2, 3))
Out: array([[5, 0, 3],
[3, 7, 3]])
</code></pre>
<p>I want to obtain an array of the same dimensions as the index array, but with each index replaced by its value in A. I have been able to accomplish this with:</p>
<pre><code> np.dstack([A[0].take(index.T[0]),
A[1].take(index.T[1]),
A[2].take(index.T[2])]).squeeze()
Out: array([[ 5, 10, 23],
[ 3, 17, 23]])
</code></pre>
<p>I believe I am missing something and this is not the optimal way to do it. I am also concerned on performance when the size of the arrays increases. Is there a more generic and scalable way to accomplish that?</p>
|
<p>You can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.take_along_axis.html" rel="nofollow noreferrer"><code>np.take_along_axis</code></a>:</p>
<pre><code>np.take_along_axis(A, index.T, 1).T
array([[ 5, 10, 23],
[ 3, 17, 23]])
</code></pre>
|
python|arrays|numpy|indexing|take
| 3
|
2,646
| 59,498,389
|
How can I use df.append when I use eval
|
<p>j=88.87</p>
<p>I want to use eval to do something like this:</p>
<pre><code>data_88_87=data_88_87.append(data[data['norm']==88.87])
</code></pre>
<p>but:</p>
<pre><code>eval('data_'+str(j).replace('.','_'))=eval('data_'+str(j).replace('.','_')).append(data[data['norm']==j])
File "<ipython-input-110-a69e45d994b1>", line 5
eval('data_'+str(j).replace('.','_'))=eval('data_'+str(j).replace('.','_')).append(data[data['norm']==j])
^
SyntaxError: can't assign to function call
</code></pre>
<pre><code>eval('data_'+str(c).replace('.','_')+'='+('data_'+str(c).replace('.','_')).append(data[data['norm']==j]))
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-121-64d2b5d27c76> in <module>()
----> 1 eval('data_'+str(c).replace('.','_')+'='+('data_'+str(c).replace('.','_')).append(data[data['norm']==2.98]))
AttributeError: 'str' object has no attribute 'append'
</code></pre>
<p>How can I use df.append when I use eval?</p>
|
<p>You can use it as follows:</p>
<pre><code>j = 88.87
tempVar = "data_"+str(j).replace('.','_')
globals()[tempVar] = globals()[tempVar].append(data[data['norm']==88.87])
</code></pre>
<p>The above has the same effect as:</p>
<pre><code>data_88_87=data_88_87.append(data[data['norm']==88.87])
</code></pre>
|
python|pandas|eval
| 1
|
2,647
| 59,694,536
|
Need to split data into multiple columns based on character length of each row, using Python
|
<pre><code>df1.head(1)
Airline_data
0 CAK ATL 114.47 528 424.56 FL 70.19 ...
</code></pre>
<p>The above column named "Airline_data" contains all information combined into a single column. </p>
<p>This has to be split into multiple columns like ("City1", "City2", "Average Fare", etc.) based on the string index positions below:</p>
<p>Column name : Section of original column to be split</p>
<p>City1 : 1-3</p>
<p>City2 : 5-7</p>
<p>Average Fare : 11-17</p>
<p>and so on.</p>
<p>PLEASE NOTE: Simply splitting based on blank spaces won't work here.</p>
|
<p>I think, the most intuitive way is to apply <em>str.extract</em> to the column of interest
(in your case <em>0</em>).</p>
<p>In order to have proper output column names, use named capturing groups
of respective sizes.
To capture the "wanted" fields, put between them either a space or a dot
(matching any char) with respective repetition count.</p>
<p>So for your example 3 columns, run:</p>
<pre><code>df[0].str.extract(r'(?P<City1>.{3}) (?P<City2>.{3}) {3}(?P<Average_Fare>.{7})')
</code></pre>
<p>Note: The name of a named capturing group can not include any spaces,
so I put "_" instead. If you want to get rid of these underscores,
just rename respective columns.</p>
<p>In the final version, to capture all remaining columns, add respective
other capturing groups to the regex above.</p>
<p>Or, if you have the source as a <strong>text file</strong>, then read it by calling
<em>read_fwf</em>. It reads just <strong>fixed width fields</strong> (for details, see
the documentation).</p>
<p>This variant is even better in one detail: <em>read_fwf</em> by default performs
conversion of input columns to appropriate types (e.g. <em>int</em> or <em>float</em>),
whereas <em>str.extract</em> generates just <strong>text</strong> columns (if you need other types,
you have to cast the required columns to the intended types on your own).</p>
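<p>A minimal <code>read_fwf</code> sketch, assuming the positions quoted in the question (1-3, 5-7, 11-17, counted from 1) and a hypothetical file name <code>airline.txt</code>:</p>
<pre><code>import pandas as pd

# read_fwf takes 0-indexed, half-open (start, end) intervals
colspecs = [(0, 3), (4, 7), (10, 17)]
df = pd.read_fwf('airline.txt', colspecs=colspecs, header=None,
                 names=['City1', 'City2', 'Average Fare'])
</code></pre>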
|
python|pandas
| 0
|
2,648
| 61,693,249
|
Finding min. in list / 2D array and do calculation in Python
|
<p>I have a list, I would like to find the min. values in each row and do calculations: row - row.min - 1. </p>
<p>This is what I tried</p>
<pre><code>import numpy as np
list = [[1.2886089553007253e-15, 3.283665029781338e-16, 0.0, 3.4027301260438933e-16],\
[3.047580716284324e-15, 1.3787915767152193e-15, 3.505982818951592e-16, 0.0]]
array = np.asarray(list)
result = array-array.min(axis=0)-1
print(result)
</code></pre>
<p>This is the result I got, </p>
<pre><code>[[-1. -1. -1. -1.]
[-1. -1. -1. -1.]]
</code></pre>
<p>But I hope to get</p>
<pre><code>[[1.2886089553007253e-15 -0.0-1, 3.283665029781338e-16 -0.0-1, 0.0 -0.0-1, 3.4027301260438933e-16 -0.0-1],
[3.047580716284324e-15 -0.0-1, 1.3787915767152193e-15 -0.0-1, 3.505982818951592e-16 -0.0-1, 0.0 -0.0-1]]
</code></pre>
<p>So it would be</p>
<pre><code>[[-0.9999999999999987, -0.9999999999999997, -1, -0.9999999999999997],
[-0.999999999999997, -0.9999999999999987, -0.9999999999999997, -1]]
</code></pre>
<p>How can I make it?</p>
|
<p>To take the minimum of each row you actually want to take the minimum across the columns, i.e. <code>axis=1</code>. Building on what @Patrick has done, to apply the subtraction we need to do some transposing to get broadcasting to work:</p>
<pre><code>import numpy as np
np.set_printoptions(precision=20)
list = [[1.2886089553007253e-15, 3.283665029781338e-16, 0.0, 3.4027301260438933e-16],\
[3.047580716284324e-15, 1.3787915767152193e-15, 3.505982818951592e-16, 0.0]]
array = np.asarray(list)
# minimum across each row
row_min = array.min(axis=1)
row_min
>>> array([0., 0.])
# row_min.shape = (2,), array.shape = (2, 4)
# so we transpose to do the subtraction and then transpose back
result = (array.T - row_min - 1).T
result
>>> array([[-0.9999999999999987, -0.9999999999999997, -1. ,
-0.9999999999999997],
[-0.999999999999997 , -0.9999999999999987, -0.9999999999999997,
-1. ]])
</code></pre>
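<p>A slightly shorter variant (a sketch) avoids the transposes by keeping the reduced axis with <code>keepdims</code>, so broadcasting lines up on its own:</p>
<pre><code>result = array - array.min(axis=1, keepdims=True) - 1
</code></pre>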
|
python|arrays|list|numpy
| 1
|
2,649
| 61,652,558
|
apply function to dataframe column
|
<p>I have data frame x,</p>
<p><a href="https://i.stack.imgur.com/RIepC.png" rel="nofollow noreferrer">Please view the x dataframe here</a></p>
<p>We want to create a new column using the function below, which adds the value of the 'Complete' column (as business days) to the start date (today) and creates a new column 'Finish'.</p>
<pre><code>import datetime
def date_by_adding_business_days(from_date, add_days):
business_days_to_add = add_days
current_date = from_date
while business_days_to_add > 0:
current_date += datetime.timedelta(days=1)
weekday = current_date.weekday()
if weekday >= 5: # sunday = 6
continue
business_days_to_add -= 1
return current_date
</code></pre>
<p>I have tried this and got the error below; please help.</p>
<pre><code>x['Finish'] = x.apply(date_by_adding_business_days(datetime.date.today(), x['Complete']))
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
|
<p>Try to refactor your code. If you apply the function to only one column, call <code>apply</code> on that column rather than on the whole DataFrame. Additionally, for some reason you are trying to call the function while passing the date to it, when you can just get today's date inside the function:</p>
<pre><code>import datetime
def date_by_adding_business_days(add_days):
business_days_to_add = add_days
current_date = datetime.date.today()
while business_days_to_add > 0:
current_date += datetime.timedelta(days=1)
weekday = current_date.weekday()
if weekday >= 5: # sunday = 6
continue
business_days_to_add -= 1
return current_date
x['Finish'] = x['Complete'].apply(date_by_adding_business_days)
</code></pre>
|
python|pandas|function|dataframe|apply
| 2
|
2,650
| 61,684,027
|
ValueError: Shape must be rank 2 but is rank 1 for 'MatMul'
|
<p>I am trying to run a linear regression model using TensorFlow. I have given the code below. However, I got the error as: ValueError: Shape must be at least rank 2 but is rank 1 for 'model_19/MatMul' (op: 'BatchMatMulV2') with input shapes: [1], ?.</p>
<p>From the error, it seems that the input to the function model_linear is creating the problem. Any suggestions to resolve the error would be highly appreciated. </p>
<pre><code>import tensorflow as tf
x_train = [1.0, 2.0, 3.0, 4.0]
y_train = [1.5, 3.5, 5.5, 7.5]
def model_linear(x, y):
with tf.variable_scope('model', reuse=tf.AUTO_REUSE):
W = tf.get_variable("W", initializer=tf.constant([0.1]))
b = tf.get_variable("b", initializer=tf.constant([0.0]))
output = tf.matmul(W, x) + b
loss = tf.reduce_sum(tf.square(output - y))
return loss
optimizer = tf.train.GradientDescentOptimizer(0.01)
with tf.Session():
tf.global_variables_initializer().run()
x = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
loss = model_linear(x, y)
train = optimizer.minimize(loss)
for i in range(1000):
train.run(feed_dict = {x:x_train, y:y_train})
</code></pre>
|
<p><code>tf.matmul</code> expects rank-2 tensors, i.e. matrices, but here you have flat vectors. Try <code>tf.reshape(x, (-1, 1))</code> or <code>tf.expand_dims(x, 0)</code>, and it seems you also need to do the same for your weight variable. </p>
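<p>A minimal sketch of that fix in TF1 style (the reshapes and the <code>squeeze</code> are assumptions about how the shapes should line up, not part of the original code):</p>
<pre><code>x = tf.placeholder(tf.float32, shape=[None])
y = tf.placeholder(tf.float32, shape=[None])

x2 = tf.reshape(x, [-1, 1])                                  # rank 2: (batch, 1)
W = tf.get_variable("W", initializer=tf.constant([[0.1]]))   # (1, 1)
b = tf.get_variable("b", initializer=tf.constant([0.0]))

output = tf.squeeze(tf.matmul(x2, W), axis=1) + b            # back to shape (batch,)
loss = tf.reduce_sum(tf.square(output - y))
</code></pre>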
|
python|tensorflow
| 0
|
2,651
| 54,935,409
|
Concat 2 Dataframes off of 2 columns of matching data but keep the remaining
|
<p>I have seen how you can merge 2 DFs off of 2 column IDs but it appears as though this creates duplicate values for every iteration. I want to know how to match up 2 columns as if it was a concatenated ID.</p>
<pre><code>df1
1 3 12
1 4 14
df2
1 3 12
1 4 12
Desired Output
id1 id2 df1 df2
1 3 12 12
1 4 14 12
</code></pre>
<p>Basically, I want the result of an inner join on the 2 columns, but also including the differing data that follows them...</p>
|
<p>I put together this quick code to re-produce your DataFrame examples and to produce the desired output:</p>
<pre><code>df1 = pd.DataFrame({'id1':[1,1],'id2':[3,4],'value1':[12,14]})
df2 = pd.DataFrame({'id1':[1,1],'id2':[3,4],'value2':[12,12]})
new_df = pd.merge(df1,df2,on=['id1','id2'])
</code></pre>
<p>This merge command produces an inner join (i.e., uses intersection of keys from both frames) together on the <code>id1</code> and <code>id2</code> columns found in both frames.</p>
|
python|pandas
| 1
|
2,652
| 54,915,946
|
Select a dataframe using boolean selection, and then extract the value corresponding to a certain column
|
<p>Example dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'a': [-3, -2, 0], 'b': [-2, 2, 5], 'c': [-1, 0, 7], 'd': [1, 4, 8]})
</code></pre>
<p>I'm trying to do something which I would expect to be fairly simple, and which is indeed immediate in other languages supporting the dataframe class, such as R. I just want to extract a single value from <code>df</code>, with the only caveat that I select the row with a boolean expression (say, <code>"a"==0</code>), instead of by using a label. The column instead is selected by label, as usual. For example, this works, but it seems unnecessarily wasteful:</p>
<pre><code>df["c"][df["a"]==0][1]
</code></pre>
<p>Rather than directly extracting a value from a dataframe, this instruction 1) extracts a Pandas Series, 2) selects a row in the series and 3) selects the second element of the array returned by the row selection! (the first element is the index). Not only does it seem needlessly complicated, but I'm worried it could also be slow for very large dataframes.</p>
<p>I tried other solutions using <code>.at</code> or <code>.iat</code> but nothing seems to work. Isn't there a simpler/smarter way to do this?</p>
|
<p>You can't do this in one shot:</p>
<pre><code>In [11]: df.loc[df["a"]==0, "c"]
Out[11]:
2 7
Name: c, dtype: int64
In [12]: df.loc[df["a"]==0, "c"].iat[0]
Out[12]: 7
</code></pre>
|
python|pandas|boolean-expression
| 2
|
2,653
| 49,594,592
|
TensorFlow placeholder decoupling for external python code
|
<p>still learning Tensorflow and I'm trying to change a loss function in some code in Darkflow </p>
<p>The network outputs a given tensor with shape [49,3,2]. I would like to take the two elements in the last part of the tensor and process them with some code. I would then like to return the data back. So a bit like a map that would work with Tensorflow. </p>
<p>More Context - <a href="https://github.com/thtrieu/darkflow/blob/master/darkflow/net/yolo/train.py" rel="nofollow noreferrer">https://github.com/thtrieu/darkflow/blob/master/darkflow/net/yolo/train.py</a> of the file I'm trying to change.</p>
<p>So I am not sure how to do this; please ask for more information if I've not been clear enough in the question. I'm still trying to get my head around what I want to do. </p>
<p>e.g </p>
<pre><code>S = 7
SS = S * S
C = 8
B = 3
size1 = [None, SS, C]
size2 = [None, SS, B]
# Extract the coordinate prediction from net.out
coords = net_out[:, SS * (C + B):]
# Take flatten array and make it back into a tensor.
coords = tf.reshape(coords, [-1, SS, B, 4])
wh = tf.pow(coords[:,:,:,2:4], 2) * S # unit: grid cell
area_pred = wh[:,:,:,0] * wh[:,:,:,1] # unit: grid cell^2
centers = coords[:,:,:,0:2] # [batch, SS, B, 2]
floor = centers - (wh * .5) # [batch, SS, B, 2]
ceil = centers + (wh * .5) # [batch, SS, B, 2]
# calculate the intersection areas
# WHAT HAPPENS CURRENTLY
intersect_upleft = tf.maximum(floor, _upleft)
intersect_botright = tf.minimum(ceil , _botright)
intersect_wh = intersect_botright - intersect_upleft
intersect_wh = tf.maximum(intersect_wh, 0.0)
intersect = tf.multiply(intersect_wh[:,:,:,0], intersect_wh[:,:,:,1])
# I want to calculate the area of intersection of the box differently, so I
# would have my own function doing something. But I only want it done for
# centers, and then return the result, a bit like a map function, but I need
# it to work with TensorFlow placeholders.
</code></pre>
<p>Any tips or advice would be good, thanks guy :D </p>
|
<p>It seems the <code>tf.map_fn</code> function fits your needs. The <a href="https://www.tensorflow.org/api_docs/python/tf/map_fn" rel="nofollow noreferrer">documentation</a> explains you can apply a Python callable to a tensor or a sequence of tensors.</p>
<p>An extract of the current documentation, about the main arguments of the function:</p>
<blockquote>
<p>fn: The callable to be performed. It accepts one argument, which will have the same (possibly nested) structure as elems. Its output must have the same structure as dtype if one is provided, otherwise it must have the same structure as elems.</p>
<p>elems: A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be applied to fn.</p>
</blockquote>
<p>This function is available from TensorFlow 0.8, so virtually always available.</p>
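<p>A minimal sketch of how this could look for the shape described in the question (TF1-style placeholders to match the Darkflow code; the per-slice function is just a placeholder for your own computation):</p>
<pre><code>import tensorflow as tf

def per_cell_fn(slice_):                      # receives one [3, 2] slice at a time
    return slice_ * 2.0                       # stand-in for your own intersection/area logic

net_slice = tf.placeholder(tf.float32, shape=[49, 3, 2])   # hypothetical input tensor
mapped = tf.map_fn(per_cell_fn, net_slice)    # output keeps the shape [49, 3, 2]
</code></pre>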
|
python|tensorflow|yolo|darkflow
| 0
|
2,654
| 49,546,769
|
Pandas Styler Hiding Index
|
<p>I basically want a table that has right-justified table cell entries and a minimum column width of 100. I have two ways to display a Pandas dataframe <code>df</code> in HTML:</p>
<pre><code>html = df.style.applymap(color_negative_red).set_precision(5).set_table_attributes(
'class = "dataframe table-bordered table-striped table-hover').set_properties(
**{'width': '10em', 'text-align': 'right'}).render()
display(HTML(html))
</code></pre>
<p>In this method, I can't figure out how to hide the row indices using the Styler object (i.e., there is no <code>index = False</code> parameter I could set before I render).</p>
<p>and: </p>
<pre><code>return df.to_html(index=ind, justify = {"right"}, col_space = 100,
classes = ["table-bordered", "table-striped", "table-hover"])
</code></pre>
<p>This method, using <code>to_html</code>, does not seem to work with the <code>justify</code> and <code>col_space</code> parameters.</p>
<p>How can I go about doing this/find a workaround to either problem?</p>
|
<p>I think <code>hide_index()</code> may help with dropping the index on your first method. From the <a href="https://pandas.pydata.org/pandas-docs/stable/style.html#Hiding-the-Index-or-Columns" rel="nofollow noreferrer">docs</a>,</p>
<blockquote>
<p>The index can be hidden from rendering by calling <code>Styler.hide_index</code>. Columns can be hidden from rendering by calling <code>Styler.hide_columns</code> and passing in the name of a column, or a slice of columns.</p>
</blockquote>
<p>So the following should work,</p>
<pre><code>....
.hide_index().render()
</code></pre>
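<p>A minimal sketch of the first approach from the question with the index hidden (assuming <code>color_negative_red</code> is defined as in your code):</p>
<pre><code>html = (df.style
          .applymap(color_negative_red)
          .set_precision(5)
          .set_table_attributes('class="dataframe table-bordered table-striped table-hover"')
          .set_properties(**{'width': '10em', 'text-align': 'right'})
          .hide_index()            # drop the row index from the rendered table
          .render())
display(HTML(html))
</code></pre>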
|
python|pandas|dataframe|pandas-styles
| 2
|
2,655
| 49,419,205
|
Unexpected behavior with open() and numpy.savetxt() functions
|
<h1>Problem</h1>
<p>I am trying to output statistics about a table, followed by more table data using Pandas and numpy.</p>
<p>When I execute the following code:</p>
<pre><code>import pandas as pd
import numpy as np
data = pd.read_csv(r'c:\Documents\DS\CAStateBuildingMetrics.csv')
waterUsage = data["Water Use (All Water Sources) (kgal)"]
dept = data[["Department Name", "Property Id"]]
mean = str(waterUsage.mean())
median = str(waterUsage.median())
most = str(waterUsage.mode())
hw1 = open(r'c:\Documents\DS\testFile', "a")
hw1.write("Mean Water Usage Median Water Usage Most Common Usage Amounts\n")
hw1.write(mean+' '+median+' '+most)
np.savetxt(r'c:\Documents\DS\testFile', dept.values, fmt='%s')
</code></pre>
<p>The table output by np.savetxt is written into <code>c:\Documents\DS\testFile</code> before the statistics about Mean, Median, and Mode water usage are written into the file. Below is the output I am describing:</p>
<p>Here is a sample of the table output, which ends up being 1700 rows. </p>
<blockquote>
<p>Capitol Area Development Authority 1259182<br>
Capitol Area Development Authority 1259200<br>
Capitol Area Development Authority 1259218<br>
California Department of Forestry and Fire Protection 3939905<br>
California Department of Forestry and Fire Protection 3939906<br>
California Department of Forestry and Fire Protection 3939907</p>
</blockquote>
<p>After this, the script outputs the statistics in this format</p>
<blockquote>
<p>Mean Water Usage Median Water Usage Most Common Usage Amounts<br>
6913.1633414932685 182.35 0 165.0<br>
Type: float64</p>
</blockquote>
<h1>Question</h1>
<p>How do I adjust the behavior to guarantee that the statistics appear before the table?</p>
|
<p>The issue, as pointed out by @hpaulj, is that the same open file is not being referenced. </p>
<p>Replacing</p>
<pre><code>np.savetxt(r'c:\Documents\DS\testFile', dept.values, fmt='%s')
</code></pre>
<p>With</p>
<pre><code>np.savetxt(hw1, dept.values, fmt='%s')
hw1.close()
</code></pre>
<p>Will write all information in the expected order in the same file. <a href="https://www.tutorialspoint.com/python/file_close.htm" rel="nofollow noreferrer">Closing it</a> follows best practices of handling files in Python.</p>
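<p>A minimal sketch of the whole write step using a context manager (reusing the <code>mean</code>, <code>median</code>, <code>most</code> and <code>dept</code> variables from the question), so the file is closed automatically:</p>
<pre><code>import numpy as np

with open(r'c:\Documents\DS\testFile', 'a') as hw1:
    hw1.write("Mean Water Usage  Median Water Usage  Most Common Usage Amounts\n")
    hw1.write(mean + ' ' + median + ' ' + most + '\n')
    np.savetxt(hw1, dept.values, fmt='%s')   # same handle, so the table lands after the stats
</code></pre>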
|
python|pandas|numpy
| 1
|
2,656
| 73,454,249
|
Colon operator in List Slicing
|
<pre><code>mini_batch_X = shuffled_X[:, k * mini_batch_size:(k + 1) * mini_batch_size]
</code></pre>
<p>What are the semantics of the above line? What does the first colon mean?</p>
|
<p>A bare colon in a slicing operation generates <code>slice(None, None, None)</code>; in NumPy this means "take all indices for this dimension".</p>
<p>A slice is <code>start:end:step</code>; the step is usually omitted, writing only <code>start:end</code>. You can also omit the start (<code>:end</code>), which slices from the beginning, or the end (<code>start:</code>), which slices through the last index.</p>
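<p>A small sketch of what the expression in the question does for a concrete array (variable names follow the question):</p>
<pre><code>import numpy as np

shuffled_X = np.arange(12).reshape(2, 6)   # 2 features (rows), 6 examples (columns)
mini_batch_size = 2
k = 1
mini_batch_X = shuffled_X[:, k * mini_batch_size:(k + 1) * mini_batch_size]
print(mini_batch_X)      # all rows, columns 2 and 3 -> the second mini-batch
# [[2 3]
#  [8 9]]
</code></pre>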
|
list|machine-learning|numpy-slicing|mini-batch
| 0
|
2,657
| 73,500,816
|
'str' object has no attribute 'to_csv'
|
<p>I'm trying to save some data that I collected on a csv file.</p>
<p>And for that I'm using the following code, but I'm getting the error:</p>
<blockquote>
<p>'str' object has no attribute 'to_csv'</p>
</blockquote>
<p>I am using the line <strong>df = pd.to_numeric(df, errors='ignore')</strong> to change the NoneType values to a numeric type. Is this the correct method?</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.read_csv("D:\data_ana\\2022.07.27_at_10.00.33.csv")
index = 5
cols_in_the_slice = df.loc[
:,
(
f"Objects[{index}].General.u_MeasuredTimeStamp",
f"Objects[{index}].General.u_LifeCycles",
f"Objects[{index}].KinematicRel.f_DistX",
f"Objects[{index}].KinematicRel.f_DistY",
f"Objects[{index}].KinematicRel.f_VrelX",
),
].columns
other_cols = pd.Index(["TimeStamp", "Velocity", "Accel", "YawRate"])
all_cols = other_cols.union(cols_in_the_slice, sort=False)
df = df[all_cols]
df.rename(
columns={
f"Objects[{index}].General.u_MeasuredTimeStamp": "Obj_TimeStamp",
f"Objects[{index}].General.u_LifeCycles": "Age",
f"Objects[{index}].KinematicRel.f_DistX": "K_DistX",
f"Objects[{index}].KinematicRel.f_DistY": "K_DistY",
f"Objects[{index}].KinematicRel.f_VrelX": "K_VrelX",
f"Objects[{index}].KinematicRel.f_VrelY": "K_VrelY",
},
inplace=True,
)
df = str(round(df, 2))
df = pd.to_numeric(df, errors="ignore")
df = df.to_csv(r"D:\data_ana\duplicate_2022.07.27_at_10.00.33.csv")
</code></pre>
|
<p>The issue isn't (only) in <code>pd.to_numeric()</code>; right before that you're <code>str()</code>ing the df and assigning to <code>df</code>, so at that point you have no dataframe left, just a string describing it.</p>
<p>Additionally, you can't use <code>to_numeric</code> like that.</p>
<p>If you want to convert everything in the df to numbers, you can use <code>astype</code>.</p>
<p>Furthermore, <code>to_csv</code> doesn't return the dataframe.</p>
<pre><code>df.rename(...)
df = df.astype(float, copy=False, errors='ignore')
df.to_csv(r'D:\data_ana\duplicate_2022.07.27_at_10.00.33.csv')
</code></pre>
<p>You can also simplify things by just telling <code>to_csv</code> which columns to write:</p>
<pre><code>import pandas as pd
df = pd.read_csv(r"D:\data_ana\2022.07.27_at_10.00.33.csv")
index = 5
df.rename(
columns={
f"Objects[{index}].General.u_MeasuredTimeStamp": "Obj_TimeStamp",
f"Objects[{index}].General.u_LifeCycles": "Age",
f"Objects[{index}].KinematicRel.f_DistX": "K_DistX",
f"Objects[{index}].KinematicRel.f_DistY": "K_DistY",
f"Objects[{index}].KinematicRel.f_VrelX": "K_VrelX",
f"Objects[{index}].KinematicRel.f_VrelY": "K_VrelY",
},
inplace=True,
)
df = df.astype(float, copy=False, errors="ignore")
cols_to_write = [
"TimeStamp",
"Velocity",
"Accel",
"YawRate",
"Obj_TimeStamp",
"Age",
"K_DistX",
"K_DistY",
"K_VrelX",
"K_VrelY",
]
df.to_csv(r"D:\data_ana\duplicate_2022.07.27_at_10.00.33.csv", columns=cols_to_write)
</code></pre>
|
python|pandas|dataframe
| 1
|
2,658
| 67,461,097
|
Filling a dataframe column with values from another column, based on values from a third column
|
<p>I have a pandas dataframe like the following. I created the last 3 columns based on unique values from the column <code>RefIDPrefix</code>.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>RefIDPrefix</th>
<th>RefIDNumber</th>
<th>GO</th>
<th>PMID</th>
<th>Reactome</th>
</tr>
</thead>
<tbody>
<tr>
<td>GO</td>
<td>12345</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>PMID</td>
<td>23456</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Reactome</td>
<td>34567</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GO</td>
<td>45678</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>GO</td>
<td>56789</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>PMID</td>
<td>67890</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>I want to fill the last 3 columns like the following. Basically, based on the value in <code>RefIDPrefix</code>, I want to take the value in <code>RefIDNumber</code> and put it in the correct column corresponding to <code>RefIDPrefix</code>.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>RefIDPrefix</th>
<th>RefIDNumber</th>
<th>GO</th>
<th>PMID</th>
<th>Reactome</th>
</tr>
</thead>
<tbody>
<tr>
<td>GO</td>
<td>12345</td>
<td>12345</td>
<td></td>
<td></td>
</tr>
<tr>
<td>PMID</td>
<td>23456</td>
<td></td>
<td>23456</td>
<td></td>
</tr>
<tr>
<td>Reactome</td>
<td>34567</td>
<td></td>
<td></td>
<td>34567</td>
</tr>
<tr>
<td>GO</td>
<td>45678</td>
<td>45678</td>
<td></td>
<td></td>
</tr>
<tr>
<td>GO</td>
<td>56789</td>
<td>56789</td>
<td></td>
<td></td>
</tr>
<tr>
<td>PMID</td>
<td>67890</td>
<td></td>
<td>67890</td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>I have been trying to do this for a while but haven't been able to figure out how to go about it. Any help would be appreciated!</p>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>df.pivot()</code></a> to build the columns from <code>RefIDPrefix</code> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>.join()</code></a> it back to the original <code>df</code></p>
<pre><code>df.join(df.pivot(columns='RefIDPrefix', values='RefIDNumber').fillna(''))
</code></pre>
<p>Output:</p>
<pre><code> RefIDPrefix RefIDNumber GO PMID Reactome
0 GO 12345 12345.0
1 PMID 23456 23456.0
2 Reactome 34567 34567.0
3 GO 45678 45678.0
4 GO 56789 56789.0
5 PMID 67890 67890.0
</code></pre>
<h2>Edit</h2>
<p>For the display format of the numbers in new columns (currently displayed like <code>float</code> number with decimal point), if your <code>RefIDNumber</code> column is actually in string, the numbers in new column will also be in string and have no decimal point (like integers).</p>
<p>However, if <code>RefIDNumber</code> is in numeric format (most probably in positive numbers for ID numbers), we can retain the numbers as <code>integer</code> by fine-tuning the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>.fillna()</code></a> part, as follows:</p>
<pre><code>df.join(df.pivot(columns='RefIDPrefix', values='RefIDNumber').fillna(-1, downcast='infer').replace(-1, ''))
</code></pre>
<p>Output:</p>
<pre><code> RefIDPrefix RefIDNumber GO PMID Reactome
0 GO 12345 12345
1 PMID 23456 23456
2 Reactome 34567 34567
3 GO 45678 45678
4 GO 56789 56789
5 PMID 67890 67890
</code></pre>
|
python|pandas|dataframe
| 3
|
2,659
| 59,939,549
|
Tensorflow Transformer Decoder output not giving the expected result
|
<p>I have designed a Transformer model using TensorFlow. The aim of the model is to generate a sequence of text, ideally a question followed by an answer, given an input sentence.</p>
<p>I have data points (around 15k) whose format is as below:</p>
<pre><code>SOURCE SENTENCE: <@>A man in the distance is walking past a brick wall painted with words and graffiti.<#>where<%>wall<?>brick
TARGET SENTENCE: <^>where is the man walking ?<~>A man is walking past a brick wall
</code></pre>
<p>I have trained the model using sentencepiece tokenizer. </p>
<p>For some reason, even after training the model up to 100 epochs, I am not getting the desired output. I expect the network to pick up the words from the source sentence and construct a question-answer pair. But in actuality, the network constructs a question-answer pair (which is really good), but the words it uses are not in the source sentence. </p>
<p>Below is the output from the network from the above source input after 50 Epochs for a beam search width of 15. </p>
<pre><code>PRED: <^>what does the woman?<~>the girls are young
PRED: <^>what was the girl holding ?<~>the girl was be
PRED: <^>what was the girl doing ?<~>the man are posing.
PRED: <^>what was the girl doing ?<~>the man are posing
PRED: <^>what was the girl holding ?<~>the girl was looking
PRED: <^>what was the girl holding ?<~>a man wearing a black shirt.
PRED: <^>what is the girls are ?<~>the girls are wearing a young man
PRED: <^>what is the girls are ?<~>the girls are wearing a
PRED: <^>what was the girl holding ?<~>the girl was be for the field
PRED: <^>what was the girl holding ?<~>the girl was holding a swing
PRED: <^>what was the girl doing ?<~>the man are tryings
PRED: <^>what is the girls are ?<~>the girls are wearing a brunette
PRED: <^>what was the girl holding ?<~>the girl was holding a peace man
PRED: <^>what is the girls are ?<~>the girls are wearing a older girl
PRED: <^>what is the girls are ?<~>the girls are wearing a older
</code></pre>
<p>I am not sure where I am going wrong. I am quite sure that the network is learning from the training, which is very promising given the way the output is constructed, but the main issue here is that the question-answer pair is formed from words which are not in the source sentence. </p>
<p>Is there a way to instruct the network to mainly use the words from the source sentence only? Below is the decoder output function.</p>
<pre><code>def symbols_to_logits_fn(model, config, decoder_tensor, debug=False):
'''We basically need to run the complete decoder function
:param model: namespace returned from function
:param decoder_tensor: [batch_size * beam_size, decoded_length]
:return new_ids: [batch_size * beam_size, vocab_size]
'''
print('^^^^^^ decoder_tensor: {}'.format(decoder_tensor))
decoder_gather = tf.gather(
model.context_embedding, decoder_tensor
) * (config.embedding_dim ** 0.5)
decoder_gather += tf.gather(model.position_embedding,
positions_for(decoder_tensor, past_length=0))
print('>>>>> {}'.format(decoder_gather))
encoder_tiled = tf.tile(model.encoder_embedding, [config.beam_size, 1, 1])
print('>>> encoder_tiled: {}'.format(encoder_tiled))
local_decoder_pad_mask = tf.math.equal(
decoder_tensor, config.pad_id, name='beam_decoder_pad_mask')
print('>>>> local_decoder_pad_mask: {}'.format(local_decoder_pad_mask))
decoder_out_func = transformer_model.decoder_fn(config=config,
dec_out=decoder_gather,
enc_out=encoder_tiled,
encoder_pad_mask=model.encoder_pad_mask,
decoder_pad_mask=local_decoder_pad_mask) # [bs, None, embedding_dim]
print('>>>> decoder_out_func: {}'.format(decoder_out_func))
# [bs, None, vocab_size]
decoder_out = tf.matmul(decoder_out_func, model.fproj_w, transpose_b=False)
print('>>>> decoder_out: {}'.format(decoder_out))
decoder_out_last_step = decoder_out[:, -1, :] # [bs, vocab_size]
print('>>> decoder_out_last_step: {}'.format(decoder_out_last_step))
return decoder_out_last_step
</code></pre>
<p>Could anyone help me resolve this issue? I feel I am too close to quit now. Any help in tweaking the network would be much appreciated.</p>
|
<p>15k examples are quite few for training a Transformer model from scratch. Its typical use is in machine translation, where the training corpora typically have millions of sentence pairs.</p>
<p>You can try to tune pre-trained models:</p>
<ul>
<li><p><a href="https://github.com/pytorch/fairseq/tree/master/examples/bart" rel="nofollow noreferrer">Facebook's BART</a> is pre-trained denoising Transformer that they used for language generation and sentence compression.</p></li>
<li><p><a href="https://github.com/huggingface/transformers" rel="nofollow noreferrer">Hugingface's Transformers</a> now allows combining a pre-trained encoder (like BERT) with a pre-trained language model (like GPT) and you can only train the encoder-decoder attention, they have <a href="https://medium.com/huggingface/encoder-decoders-in-transformers-a-hybrid-pre-trained-architecture-for-seq2seq-af4d7bf14bb8" rel="nofollow noreferrer">a tutorial on that</a>.</p></li>
</ul>
|
python|tensorflow|transformer-model|attention-model
| 0
|
2,660
| 59,907,766
|
pandas groupby datetime.date object is not consistent
|
<p>For example</p>
<pre><code>import datetime
data={'date':[datetime.date(2020,1,i) for i in range(11,13)],
'a1':range(11,13),
'a2':range(21,23)}
df=pd.DataFrame(data)
</code></pre>
<p>If we groupby only the date column, everything is ok</p>
<pre><code>g=df.groupby('date')
print(g.groups)
g.get_group(list(g.groups.keys())[0])
</code></pre>
<p>gives</p>
<pre><code>{datetime.date(2020, 1, 11): Int64Index([0], dtype='int64'), datetime.date(2020, 1, 12): Int64Index([1], dtype='int64')}
date a1 a2
0 2020-01-11 11 21
</code></pre>
<p>However, if we groupby two column to form multiIndex, we got problem</p>
<pre><code>g=df.groupby(['date','a1'])
print(g.groups)
g.get_group(list(g.groups.keys())[0])
</code></pre>
<p>gives</p>
<pre><code>{(Timestamp('2020-01-11 00:00:00'), 11): Int64Index([0], dtype='int64'), (Timestamp('2020-01-12 00:00:00'), 12): Int64Index([1], dtype='int64')}
</code></pre>
<p>and error message</p>
<blockquote>
<p>--------------------------------------------------------------------------- KeyError Traceback (most recent call
last) in
1 g=df.groupby(['date','a1'])
2 print(g.groups)
----> 3 g.get_group(list(g.groups.keys())[0])</p>
<p>~/anaconda3/lib/python3.7/site-packages/pandas/core/groupby/groupby.py
in get_group(self, name, obj)
678 inds = self._get_index(name)
679 if not len(inds):
--> 680 raise KeyError(name)
681
682 return obj.take(inds, axis=self.axis)</p>
<p>KeyError: (Timestamp('2020-01-11 00:00:00'), 11)</p>
</blockquote>
<p>We can see that pandas <code>groupby</code> is too "smart" and changes the <code>datetime.date</code> objects to <code>Timestamp</code> objects. This messes up the indexing, so we cannot get the correct group. Is it a bug?</p>
|
<p>IIUC you can try grouping like this:</p>
<pre><code>g=df.groupby([['date','a1']])
print(g.groups)
g.get_group(list(g.groups.keys())[0])
</code></pre>
|
python|pandas
| 1
|
2,661
| 65,280,749
|
Fill NaN values with mean of previous rows?
|
<p>I have to fill the nan values of a column in a dataframe with the mean of the previous 3 instances.
Here is the following example:</p>
<pre><code>df = pd.DataFrame({'col1': [1, 3, 4, 5, np.NaN, np.NaN, np.NaN, 7]})
df
col1
0 1.0
1 3.0
2 4.0
3 5.0
4 NaN
5 NaN
6 NaN
7 7.0
</code></pre>
<p>And here is the output I need:</p>
<pre><code>col1
0 1.0
1 3.0
2 4.0
3 5.0
4 4.0
5 4.3
6 4.4
7 7.0
</code></pre>
<p>I tried pd.rolling, but it does not work the way I want when the column has more than one NaN value in a roll:</p>
<pre><code>df.fillna(df.rolling(3, min_periods=1).mean().shift())
col1
0 1.0
1 3.0
2 4.0
3 5.0
4 4.0 # np.nanmean([3, 4, 5])
5 4.5 # np.nanmean([np.NaN, 4, 5])
6 5.0 # np.nanmean([np.NaN, np.naN ,5])
7 7.0
</code></pre>
<p>Can someone help me with that? Thanks in advance!</p>
|
<p>Probably not the most efficient but terse and gets the job done</p>
<pre><code>from functools import reduce
reduce(lambda d, _: d.fillna(d.rolling(3, min_periods=3).mean().shift()), range(df['col1'].isna().sum()), df)
</code></pre>
<p>output</p>
<pre><code>
col1
0 1.000000
1 3.000000
2 4.000000
3 5.000000
4 4.000000
5 4.333333
6 4.444444
7 7.000000
</code></pre>
<p>We basically use <code>fillna</code> but require <code>min_periods=3</code>, meaning it will only fill those NaNs that have three non-NaN numbers immediately preceding them, i.e. a single NaN of each run at a time. Then we use <code>reduce</code> to repeat this operation as many times as there are NaNs in <code>col1</code>.</p>
|
python|pandas|nan|mean|fillna
| 2
|
2,662
| 65,459,915
|
Tensorflow '_pywrap_tensorflow_internal' module error
|
<p>I have been facing this error for a long time now. I have tried all the possible solutions on the internet. I am using TensorFlow 1.14.0 for my project and am facing this error. The project deadline is near. Please help.
My PC specs:
i5 / 8 GB RAM / integrated graphics</p>
<pre><code>C:\Users\skshr\.virtualenvs\ThesisProject\Scripts\python.exe C:/ThesisProject/estimators/estimator.py
Traceback (most recent call last):
File "C:\Users\skshr\.virtualenvs\ThesisProject\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in swig_import_helper
fp, pathname, description = imp.find_module('_pywrap_tensorflow_internal', [dirname(__file__)])
File "C:\Users\skshr\AppData\Local\Programs\Python\Python38\lib\imp.py", line 296, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named '_pywrap_tensorflow_internal'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\skshr\.virtualenvs\ThesisProject\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\skshr\.virtualenvs\ThesisProject\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\skshr\.virtualenvs\ThesisProject\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 20, in swig_import_helper
import _pywrap_tensorflow_internal
ModuleNotFoundError: No module named '_pywrap_tensorflow_internal'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/ThesisProject/estimators/estimator.py", line 5, in <module>
import tensorflow as tf
File "C:\Users\skshr\.virtualenvs\ThesisProject\lib\site-packages\tensorflow\__init__.py", line 28, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "C:\Users\skshr\.virtualenvs\ThesisProject\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\skshr\.virtualenvs\ThesisProject\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\skshr\.virtualenvs\ThesisProject\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in swig_import_helper
fp, pathname, description = imp.find_module('_pywrap_tensorflow_internal', [dirname(__file__)])
File "C:\Users\skshr\AppData\Local\Programs\Python\Python38\lib\imp.py", line 296, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named '_pywrap_tensorflow_internal'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\skshr\.virtualenvs\ThesisProject\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\skshr\.virtualenvs\ThesisProject\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\skshr\.virtualenvs\ThesisProject\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 20, in swig_import_helper
import _pywrap_tensorflow_internal
ModuleNotFoundError: No module named '_pywrap_tensorflow_internal'
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
</code></pre>
|
<p>Have you tried uninstalling TensorFlow and switching your TF version, for example to 1.15?</p>
|
python|python-3.x|tensorflow|virtualenv|tensorflow1.15
| 0
|
2,663
| 65,328,438
|
Get Max Value of a Row in subset of Column respecting a condition
|
<p>I have a dataframe that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>FakeDist</th>
<th>-5</th>
<th>-4</th>
<th>-3</th>
<th>-2</th>
<th>-1</th>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>37</td>
<td>14</td>
<td>17</td>
<td>29</td>
<td>31</td>
<td>34</td>
<td>32</td>
<td>31</td>
<td>21</td>
<td>17</td>
<td>18</td>
</tr>
<tr>
<td>2</td>
<td>12</td>
<td>13</td>
<td>12</td>
<td>16</td>
<td>30</td>
<td>33</td>
<td>37</td>
<td>32</td>
<td>32</td>
<td>15</td>
<td>42</td>
</tr>
<tr>
<td>3</td>
<td>40</td>
<td>16</td>
<td>29</td>
<td>31</td>
<td>36</td>
<td>32</td>
<td>30</td>
<td>19</td>
<td>16</td>
<td>15</td>
<td>12</td>
</tr>
<tr>
<td>4</td>
<td>12</td>
<td>14</td>
<td>12</td>
<td>28</td>
<td>28</td>
<td>30</td>
<td>29</td>
<td>27</td>
<td>16</td>
<td>18</td>
<td>33</td>
</tr>
<tr>
<td>5</td>
<td>12</td>
<td>13</td>
<td>16</td>
<td>17</td>
<td>28</td>
<td>32</td>
<td>33</td>
<td>30</td>
<td>29</td>
<td>17</td>
<td>35</td>
</tr>
</tbody>
</table>
</div>
<p>I want to add a column that will be the Column_Name of the Maximum Value per Row.<br />
I did that with:</p>
<pre><code>df['MaxVal_Dist'] = df.idxmax(axis=1)
</code></pre>
<p>Which gives me this df:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>FakeDist</th>
<th>-5</th>
<th>-4</th>
<th>...</th>
<th>MaxVal_Dist</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>37</td>
<td>14</td>
<td>...</td>
<td>-5</td>
</tr>
<tr>
<td>2</td>
<td>12</td>
<td>13</td>
<td>...</td>
<td>5</td>
</tr>
<tr>
<td>3</td>
<td>40</td>
<td>16</td>
<td>...</td>
<td>-5</td>
</tr>
<tr>
<td>4</td>
<td>12</td>
<td>14</td>
<td>...</td>
<td>5</td>
</tr>
<tr>
<td>5</td>
<td>12</td>
<td>13</td>
<td>...</td>
<td>5</td>
</tr>
</tbody>
</table>
</div>
<p>But my real end goal is to add a condition: I want the max value considering only the columns from -2 to 2, to get the following result:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>FakeDist</th>
<th>-5</th>
<th>-4</th>
<th>...</th>
<th>MaxVal_Dist</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>37</td>
<td>14</td>
<td>...</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>12</td>
<td>13</td>
<td>...</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>40</td>
<td>16</td>
<td>...</td>
<td>-1</td>
</tr>
<tr>
<td>4</td>
<td>12</td>
<td>14</td>
<td>...</td>
<td>0</td>
</tr>
<tr>
<td>5</td>
<td>12</td>
<td>13</td>
<td>...</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>I did try to look at how to use <code>df.apply</code> but couldn't find how to make it work.<br />
I have a workaround idea: store a subset of columns (from -2 to 2) in a new dataframe, create my new column there to get the max, and then add that result column back to my initial dataframe. But that seems like a very inelegant solution, and I am sure there is a much better way.</p>
<p><strong>I would be really glad to learn the elegant way to do that from you!</strong></p>
|
<p>You can use <code>boolean indexing</code> with <code>loc</code> to filter the columns in the range <code>-2</code> to <code>2</code>, then use <code>idxmax</code> along <code>axis=1</code>:</p>
<pre><code>c = df.columns.astype(int)
df['MaxVal_Dist'] = df.loc[:, (c >= -2) & (c <= 2)].idxmax(1)
</code></pre>
<p>Result:</p>
<pre><code>FakeDist -5 -4 -3 -2 -1 0 1 2 3 4 5 MaxVal_Dist
1 37 14 17 29 31 34 32 31 21 17 18 0
2 12 13 12 16 30 33 37 32 32 15 42 1
3 40 16 29 31 36 32 30 19 16 15 12 -1
4 12 14 12 28 28 30 29 27 16 18 33 0
5 12 13 16 17 28 32 33 30 29 17 35 1
</code></pre>
|
python|pandas
| 2
|
2,664
| 65,124,919
|
Column values are not updated when getting the values from another dataframe
|
<p>I have the following 2 data frames: df, and df_final:</p>
<p><a href="https://i.stack.imgur.com/iqvXe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iqvXe.png" alt="enter image description here" /></a></p>
<p>and</p>
<p><a href="https://i.stack.imgur.com/0ct2v.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0ct2v.png" alt="enter image description here" /></a></p>
<p>I want to make the values of column <code>x1</code> in <code>df_final</code> the same as <code>x1</code> in <code>df</code>, so I wrote the following loop:</p>
<pre><code>for j in range(df.shape[0]):
#for k in range(i+"_Edges_Count"):
df_final[['x2']+['x1']].iloc[j]=df[['x2']+['x1']].iloc[j]
</code></pre>
<p>However, this does not change the values of <code>x1</code>. Why is this?</p>
|
<p>It's easier than you think:</p>
<p><code>df_final['x1'] = df['x1']</code></p>
<p>The loop in the question does not work because <code>df_final[['x2']+['x1']].iloc[j] = ...</code> is chained indexing: selecting the columns with a list returns a copy, so the assignment writes into that temporary copy rather than into <code>df_final</code> itself.</p>
|
python|pandas|dataframe
| 0
|
2,665
| 65,172,221
|
How can I solve a TypeError when using Numpy
|
<p>Why do I get an error? Could you help me and modify my code, please?</p>
<p>-TypeError: object of type <class 'float'> cannot be safely interpreted as an integer.</p>
<p>-TypeError: 'float' object cannot be interpreted as an integer</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
def f(x):
return x**3-2*x
def inputNumber(message):
while True:
try:
userInput =float(input(message))
except ValueError:
print("Enter valid")
continue
else:
return userInput
break
interval1=float(inputNumber("Please write lower bound: "))
interval=float(inputNumber("Please write upper bound: "))
stepsize=float(inputNumber("Enter a step size: "))
x = np.linspace(float(interval1),float(interval),float(stepsize))
y = f(x)
</code></pre>
|
<p>The first three arguments of <code>linspace</code> are the start, end, and the number of samples to generate - not the step size. Try <code>np.arange(interval1, interval, stepsize)</code>.</p>
<p>Alternatively, you could calculate the number of samples with: <code>num = round((interval-interval1) / stepsize)</code> and then plug that into linspace: <code>np.linspace(interval1, interval, num)</code>, but I do not recommend using this instead of <code>arange</code>.</p>
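<p>A minimal sketch of the fixed call, with hard-coded example bounds instead of the <code>input()</code> prompts:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return x**3 - 2*x

interval1, interval, stepsize = -2.0, 2.0, 0.1   # example values
x = np.arange(interval1, interval, stepsize)     # third argument is the step size
y = f(x)
plt.plot(x, y)
plt.show()
</code></pre>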
|
python|numpy|error-handling
| 0
|
2,666
| 65,264,898
|
How to apply cumulative sum in Numpy in slices with condition to previous value?
|
<p>I have a vector with signals with values of <code>1</code> or <code>-1</code>. I want to have a second vector that computes the cumulative sum of consecutive signals with the same value and restarts the cumulative sum every time the signal change. Here is an example:</p>
<pre><code>signal = [1 1 1 -1 -1 -1 -1]
cum_sum = [1 2 3 -1 -2 -3 -4]
</code></pre>
<p>I have large data to be computed and want to do it as efficiently as possible.
My code right now does the job but it takes time and is not taking advantage of numpy efficiency:</p>
<pre><code>import numpy as np
# Signal values to be analyzed
signal = np.array([1,1,1,-1,-1,-1,-1], dtype=int)
# Vector with previous value of signal
signal_prev = signal[:-1]
signal_prev = np.pad(signal_prev,(1,0), mode='constant', constant_values=(0))
#Array with signal values in first column and previous values in second column
arr = np.array([signal,signal_prev], dtype=int)
arr = np.transpose(arr)
print(arr)
""" Array with signal values and previous values
[[ 1 0]
[ 1 1]
[ 1 1]
[-1 1]
[-1 -1]
[-1 -1]
[-1 -1]]
"""
#create an empty array to append cumulative sum
signal_sum = np.array([], dtype=int)
# compute the cumulative sum iterating row by row
for x in arr:
if np.sign(x[0]*x[1]) > 0:
signal_sum = np.append(signal_sum, signal_sum[-1] + x[1])
else:
signal_sum= np.append(signal_sum, x[0])
arr_sum = np.array([signal, signal_sum])
arr_sum = np.transpose(arr_sum)
print(arr_sum)
""" Array with signal values and cumulative sum restarted with signal change
[[ 1 1]
[ 1 2]
[ 1 3]
[-1 -1]
[-1 -2]
[-1 -3]
[-1 -4]]
"""
</code></pre>
<p>I believe that this calculation can be done more efficiently using numpy functions or using lambda functions. I'm not a programmer, and I'm new to Python. I would like to know if this could be done faster.</p>
|
<p>For a <em>fast</em>, fully vectorized way (no loops), you can use a regular <code>np.cumsum()</code>, but on a copy of your array where you subtract the previous group sum at the start of each group:</p>
<pre><code>def group_cumsum(s):
# make a copy and ensure np.array (in case list was given)
s = np.array(s).copy()
idx = np.nonzero(np.diff(s))[0] # last of each group
off = np.diff(np.concatenate(([0], np.cumsum(s)[idx])))
s[idx + 1] -= off
return np.cumsum(s)
</code></pre>
<p>Example:</p>
<pre><code>print(group_cumsum([1, 1, 1, -1, -1, -1, -1]))
# [ 1 2 3 -1 -2 -3 -4]
print(group_cumsum([1]*3 + [-1]*2 + [1]*4 + [-1]*5))
# [ 1 2 3 -1 -2 1 2 3 4 -1 -2 -3 -4 -5]
</code></pre>
<p><strong>The time saving is substantial</strong> for large arrays:</p>
<ol>
<li>no loops in the Python code, all ops are vectorized, and</li>
<li>it is <code>O(n + k)</code> for <code>k</code> groups in an array of size <code>n</code> (unlike other solutions that are <code>O(n * k)</code>).</li>
</ol>
<p>Try this:</p>
<pre><code>s = np.random.choice([1, -1], size=(int(1e6)))
%%timeit
group_cumsum(s)
19.1 ms ± 137 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
|
python|arrays|numpy
| 4
|
2,667
| 65,165,800
|
Print proportion of null values
|
<p>I am working with the Titanic dataset. I wonder how to show the proportion of null values in a train set.</p>
<p>Here is my code:</p>
<pre><code>train_count_of_missval_by_col = (train.isnull().sum())
print('----- all columns along with count of missing value')
print(train_count_of_missval_by_col)
print('----only columns which has missing values----')
print(train_count_of_missval_by_col[train_count_of_missval_by_col>0])
print('----only columns which has missing data to total observations----')
print(train_count_of_missval_by_col[train_count_of_missval_by_col>0]/train.shape[])`
</code></pre>
<p>Unfortunately, the last line of the code generates an error. What should I add or edit on the last line so the code will work?</p>
|
<p>I am not sure if there is a specific operation for this. <code>info()</code> shows you the raw count and tells you the total rows, but there is no parameter for the percentage. Also, <code>.info()</code> returns <code>None</code>, so you can't access any data from that return value.</p>
<p>I would suggest looping through the columns and returning the number of nulls divided by the total rows with <code>df[col].isnull().sum() / df.shape[0] * 100</code>, printing out the output in a formatted string as such:</p>
<pre><code>import numpy as np
import pandas as pd

d = {'Col1': [np.nan, 6, np.nan, 2, np.nan],
'Col2': [np.nan, 3, 5, np.nan, 9],
'Col3': [2, 1, 8, np.nan, 9]}
df = pd.DataFrame(d)
for col in df.columns:
print(col, f'{df[col].isnull().sum() / df.shape[0] * 100} % NULL')
Col1 60.0 % NULL
Col2 40.0 % NULL
Col3 20.0 % NULL
</code></pre>
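<p>For reference, reusing the <code>df</code> built above, the same percentages can also be computed without an explicit loop:</p>
<pre><code># per-column fraction of nulls, scaled to a percentage
print(df.isnull().mean() * 100)
# Col1    60.0
# Col2    40.0
# Col3    20.0
# dtype: float64
</code></pre>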
|
python|pandas|null
| 0
|
2,668
| 49,960,132
|
CuDNN library compatibility error after loading model weights
|
<p>I am trying to load NSynth weights and I am using tf version 1.7.0 </p>
<pre class="lang-py prettyprint-override"><code>from magenta.models.nsynth import utils
from magenta.models.nsynth.wavenet import fastgen
def wavenet_encode(file_path):
# Load the model weights.
checkpoint_path = './wavenet-ckpt/model.ckpt-200000'
# Load and downsample the audio.
neural_sample_rate = 16000
audio = utils.load_audio(file_path,
sample_length=400000,
sr=neural_sample_rate)
encoding = fastgen.encode(audio, checkpoint_path, len(audio))
# Reshape to a single sound.
return encoding.reshape((-1, 16))
# An array of n * 16 frames.
wavenet_z_data = wavenet_encode(file_path)
</code></pre>
<p>I get the following error: </p>
<blockquote>
<p>tensorflow/stream_executor/cuda/cuda_dnn.cc:396] Loaded runtime CuDNN
library: 7103 (compatibility version 7100) but source was compiled
with 7005 (compatibility version 7000). If using a binary install,
upgrade your CuDNN library to match. If building from sources, make
sure the library loaded at runtime matches a compatible version
specified during compile configuration.</p>
</blockquote>
<p>What should I do and, which version of tf should I install, and exactly which CUDA version do I need?</p>
|
<p>As the error says, the Tensorflow version you are using is compiled for CuDNN 7.0.5 while your system has CuDNN 7.1.3 installed.</p>
<p>As the error also suggests, you can solve this problem:</p>
<ul>
<li>Either by installing CuDNN 7.0.5 (follow instructions here: <a href="https://developer.nvidia.com/cudnn" rel="noreferrer">https://developer.nvidia.com/cudnn</a>);</li>
<li>Or by compiling Tensorflow yourself for your system (follow instructions here: <a href="https://www.tensorflow.org/install/install_sources" rel="noreferrer">https://www.tensorflow.org/install/install_sources</a>).</li>
</ul>
|
tensorflow|magenta
| 16
|
2,669
| 63,831,676
|
Remove duplicated but with priority for keep first in pandas
|
<p>Here is a df:</p>
<pre><code>COL1 COL2 COL3
seqA NA 10
seqA Unknown 5
seqA Cow 50
seqB NA 2
seqC NA 2
seqC Unknown 2
seqC Bird 6
seqC Cow 1
seqD Unknown 30
seqD Shark 2
</code></pre>
<p>So the idea would be to remove duplicated <code>COL1</code> values and keep only the one with the lowest <code>COL3</code>, BUT only take rows whose content is <code>NA</code> or <code>Unknown</code> if there is no other row with <code>COL3 value < 10</code>.</p>
<p>for instance for <code>SeqA</code></p>
<p>I keep</p>
<pre><code>seqA Unknown 5
</code></pre>
<p>because this one is > 10:</p>
<pre><code>seqA Cow 50
</code></pre>
<p>but for seqC I keep:</p>
<pre><code>seqC Cow 1
</code></pre>
<p>because it is <code><10</code></p>
<p>In the exemple the expected output would be :</p>
<pre><code>COL1 COL2 COL3
seqA Unknown 5
seqB NA 2
seqC Cow 1
seqD Shark 2
</code></pre>
<p>So one idea would be to first do a</p>
<pre><code>tab=df.sort_values(by=['COL3'], ascending = True)
</code></pre>
<p>But I do not know how to integrate the priority, i.e. the fact that everything different from Unknown or NA takes priority unless its COL3 is > 10.</p>
|
<p>Let us do filter then <code>sort_values</code> + <code>drop_duplicates</code></p>
<pre><code>out = df[df.COL3.lt(10) | df.COL2.eq('Unknown')].sort_values('COL3').drop_duplicates('COL1').sort_index()
Out[47]:
COL1 COL2 COL3
1 seqA Unknown 5
3 seqB NaN 2
7 seqC Cow 1
9 seqD Shark 2
</code></pre>
|
python|pandas|dataframe
| 2
|
2,670
| 64,137,179
|
Numpy get values at indices given in array form
|
<p>I want to get the values at <code>indices</code> of <code>my_array</code>.</p>
<pre><code>indices = np.array([[[0],
[1],
[0]]])
my_array = np.array([[[1.1587323 , 1.75406635],
[1.05464125, 1.29215026],
[0.9784655 , 1.16957462]]])
</code></pre>
<p>I should get the following output:</p>
<pre><code>output: array([[[1.1587323], [1.29215026], [0.9784655]]])
</code></pre>
<p>Is it possible without for loops or list comprehensions?</p>
|
<p>You can use <a href="https://numpy.org/devdocs/reference/generated/numpy.take_along_axis.html" rel="nofollow noreferrer"><code>np.take_along_axis</code></a>:</p>
<pre><code>np.take_along_axis(my_array, indices, axis=-1)
array([[[1.1587323 ],
[1.29215026],
[0.9784655 ]]])
</code></pre>
|
python|numpy|numpy-ndarray|numpy-slicing
| 2
|
2,671
| 46,984,540
|
Vectorize the midpoint rule for integration
|
<p>I need some help with this problem.
The midpoint rule for approximating an integral can be expressed as:</p>
<pre><code> h * summation of f(a -(0.5 * h) + i*h)
</code></pre>
<p>where h = (b - a)/n </p>
<p>Write a function midpointint(f,a,b,n) to compute the midpoint rule using the numpy sum function.</p>
<p>Make sure your range is from 1 to n inclusive. You could use a range and convert it to an array.</p>
<p>for midpoint(np.sin,0,np.pi,10) the function should return 2.0082</p>
<p>Here is what I have so far</p>
<pre><code>import numpy as np
def midpointint(f,a,b,n):
h = (b - a) / (float(n))
for i in np.array(range(1,n+1)):
value = h * np.sum((f(a - (0.5*h) + (i*h))))
return value
print(midpointint(np.sin,0,np.pi,10))
</code></pre>
<p>My code is not printing out the correct output.</p>
|
<p>The issue with the posted code is that it needs to accumulate into the output: <code>value += ..</code> after initializing <code>value</code> to zero at the start.</p>
<p>You can vectorize by using a range array for the iterator, like so -</p>
<pre><code>I = np.arange(1,n+1)
out = (h*np.sin(a - (0.5*h) + (I*h))).sum()
</code></pre>
<p>Sample run -</p>
<pre><code>In [78]: I = np.arange(1,n+1)
In [79]: (h*np.sin(a - (0.5*h) + (I*h))).sum()
Out[79]: 2.0082484079079745
</code></pre>
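<p>Putting it together, a complete vectorized version of the requested function might look like this (a sketch; it reproduces the value above):</p>
<pre><code>import numpy as np

def midpointint(f, a, b, n):
    h = (b - a) / float(n)
    i = np.arange(1, n + 1)                     # 1..n inclusive
    return h * np.sum(f(a - 0.5 * h + i * h))   # f evaluated at the midpoints

print(midpointint(np.sin, 0, np.pi, 10))        # ~2.0082
</code></pre>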
|
python|numpy
| 2
|
2,672
| 46,916,171
|
Tensorflow: classifier.predict and predicted_classes
|
<h3>System information</h3>
<ul>
<li>custom code: no, it is the one in <a href="https://www.tensorflow.org/get_started/estimator" rel="nofollow noreferrer">https://www.tensorflow.org/get_started/estimator</a></li>
<li>system: Apple</li>
<li>OS: Mac OsX 10.13</li>
<li>TensorFlow version: 1.3.0</li>
<li>Python version: 3.6.3</li>
<li>GPU model: AMD FirePro D700 (actually, two such GPUs)</li>
</ul>
<h3>Describe the problem</h3>
<p>Dear all,
I am running the simple iris program:
<a href="https://www.tensorflow.org/get_started/estimator" rel="nofollow noreferrer">https://www.tensorflow.org/get_started/estimator</a>
under python 3.6.3 and tensorflow 1.3.0.
The program executes correctly, apart from the very last part, i.e. the one related to the confusion matrix.
In fact, the result I get for the confusion matrix is:
New Samples, Class Predictions: [array([b'1'], dtype=object), array([b'2'], dtype=object)]
rather than the expected output:
New Samples, Class Predictions: [1 2]
Has anything about confusion matrix changed in the latest release?
If so, how should I modify that part of the code?
Thank you very much for your help!
Best regards
Ivan</p>
<h3>Source code / logs</h3>
<p><a href="https://www.tensorflow.org/get_started/estimator" rel="nofollow noreferrer">https://www.tensorflow.org/get_started/estimator</a></p>
|
<p>This looks like a numpy issue. <code>array([b'1'], dtype=object)</code> is one way numpy represents the byte string <code>b'1'</code>.</p>
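<p>If you want plain integers, you can convert the returned values yourself. A small sketch, assuming <code>predictions</code> is the list of arrays shown above:</p>
<pre><code>import numpy as np

predictions = [np.array([b'1'], dtype=object), np.array([b'2'], dtype=object)]
predicted_classes = [int(p[0]) for p in predictions]   # int() accepts the byte strings
print(predicted_classes)   # [1, 2]
</code></pre>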
|
tensorflow|classification
| 0
|
2,673
| 62,958,214
|
Numpy where behavior
|
<p>Why does numpy.where return two arrays when only one True value is found, but when there is more than one True it returns one entry per True in each array? I would like it to always return one array for each True.</p>
<pre><code>import numpy as np
a = np.zeros((5,5),dtype=int)
print(a)
#[[0 0 0 0 0]
# [0 0 0 0 0]
# [0 0 0 0 0]
# [0 0 0 0 0]
# [0 0 0 0 0]]
a[1,1]=1
print(np.where(a==1))
#(array([1]), array([1])) #Two separate arrays
#Why not array([1,1])
a[1,2]=1
print(np.where(a==1))
#(array([1, 1]), array([1, 2])) #One array for each True, desired behavior
</code></pre>
|
<p><code>np.where</code> returns the row and column indices where the specified condition holds <code>True</code>. In the example you considered, the row and column indices coincidentally matched the positions where the condition is satisfied, which hides what is going on. Hence, set one more location to 1 to understand <code>np.where</code> better.</p>
<pre><code>import numpy as np
a = np.zeros((5,5),dtype=int)
a[1,1]=1
a[1,2]=1
a[2,3] = 1
row_ind, col_ind = np.where(a==1)
row_ind
Out[22]: array([1, 1, 2], dtype=int64)
col_ind
Out[23]: array([1, 2, 3], dtype=int64)
[(i,j) for i, j in zip(row_ind, col_ind)]
Out[24]: [(1, 1), (1, 2), (2, 3)]
</code></pre>
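<p>If you want one <code>(row, col)</code> pair per <code>True</code> value directly, <code>np.argwhere</code> returns exactly that for the array <code>a</code> built above, without the zipping step:</p>
<pre><code>np.argwhere(a == 1)
# array([[1, 1],
#        [1, 2],
#        [2, 3]])
</code></pre>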
|
python|numpy
| 1
|
2,674
| 63,045,478
|
tensorflow dataset direct normalization
|
<pre><code>def get_dataset(file_path, **kwargs):
dataset = tf.data.experimental.make_csv_dataset(
file_path,
batch_size=5, # Artificially small to make examples easier to show.
label_name=LABEL_COLUMN,
na_value="?",
num_epochs=1,
ignore_errors=True,
**kwargs)
return dataset
CSV_COLUMNS = ['survived', 'sex', 'age', 'n_siblings_spouses', 'parch', 'fare', 'class', 'deck', 'embark_town', 'alone']
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
temp_dataset = get_dataset(train_file_path, column_names=CSV_COLUMNS)
</code></pre>
<p>Is it possible to directly normalize the temp_dataset <em>without exporting the data to another data library for further manipulation</em>? (Some columns are numeric and others are categorical.)</p>
|
<p>I would first read the CSV file with Pandas and use the <code>map</code> function to normalize the column:</p>
<pre><code># Read the CSV
df = pd.read_csv( 'train.csv' )
# Normalize the "temp" column
df[ 'temp' ] = df[ 'temp' ].map( lambda x : x / SOME_NUMBER )
</code></pre>
<p>Now, save the modified CSV,</p>
<pre><code>df.to_csv('data.csv', index=False, encoding='utf-8')
</code></pre>
<p>Now, pass this CSV file to <code>make_csv_dataset()</code> like,</p>
<pre><code>dataset = tf.data.experimental.make_csv_dataset(
'data.csv',
batch_size=5, # Artificially small to make examples easier to show.
label_name=LABEL_COLUMN,
na_value="?",
num_epochs=1,
ignore_errors=True,
**kwargs)
</code></pre>
<p>This could be a short trick to normalize a single column and then forge it into a <code>tf.data.Dataset</code>. For more details, refer to this <a href="https://www.tensorflow.org/guide/data#consuming_csv_data" rel="nofollow noreferrer">doc</a>.</p>
|
tensorflow|dataset|normalization
| 0
|
2,675
| 67,861,409
|
How to assign a new column that shows the total number of rows in a file
|
<p>Can anyone help? I want to assign a new column which counts how many rows there are per file.
Here I have a file name in each row; you can see that the first 9 rows belong to a single file (...block_10.jpg), so I want a new column that shows the total number of rows for that file.
<a href="https://i.stack.imgur.com/Z7u41.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z7u41.png" alt="Here is my dataframe" /></a></p>
<p>I have extracted the values from the data frame using df.iloc</p>
<pre><code>X = df.iloc[:,:-1].values # This is all the columns NOT including the last one
Y = df.iloc[:,-1].values # This is the filename
</code></pre>
|
<pre><code>df_classification['count']=df_classification.groupby('filename')['Name'].transform('count').values
</code></pre>
<p>Thanks to someone who gave an answer but he deleted it because there was a small error. Please comment to take credit for the answer, sir!</p>
|
python|pandas|dataframe
| 1
|
2,676
| 67,862,179
|
IndexError: index 54 is out of bounds for axis 0 with size 48
|
<p>I am trying to fetch the predicted sentiment score and determine whether the text is positive or negative. But while predicting the values I am getting an array sequence of scores, and it throws the following error.</p>
<pre><code>import json
f = open(("/content/trending_tweets.json"), "r+")
data = f.read()
for x in data.split("\n"):
strlist = "[" + x + "]"
datalist = json.loads(strlist)
for y in datalist:
f = open('/content/user_lookup_data.json', 'a', encoding='utf-8')
print(y["user"]["screen_name"])
screen_name = ('@' + y["user"]["screen_name"])
file_name ='/content/user_timeline/' + screen_name + '_tweets.csv'
user_timeline_data = pd.read_csv(file_name, sep='\t', lineterminator='\n',encoding='latin')
user_timeline_data = (user_timeline_data['tweet'])
print(len(user_timeline_data))
df = pd.DataFrame(columns=['Text', 'Sentiment'])
for index, row in user_timeline_data.iteritems():
sequence = tokenizer.texts_to_sequences(row)
test = pad_sequences(sequence, maxlen=max_len)
pred = model.predict(test)
if pred[index] > 0.5:
df.loc[index, ['Text']] = row
df.loc[index, ['Sentiment']] = 'Positive'
print(df.shape)
print(pred)
else:
df.loc[index, ['Text']] = row
df.loc[index, ['Sentiment']] = 'Negative'
print(df.shape)
print(pred)
df.to_csv('sentiment_'+ screen_name +'.csv', index=False)
</code></pre>
<p>Error message</p>
<pre><code>---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-68-274fe2f3a8c0> in <module>()
18 test = pad_sequences(sequence, maxlen=max_len)
19 pred = model.predict(test)
---> 20 if pred[index] > 0.5:
21 df.loc[index, ['Text']] = row
22 df.loc[index, ['Sentiment']] = 'Positive'
IndexError: index 54 is out of bounds for axis 0 with size 48
</code></pre>
<p>It would be great if someone can help me out</p>
|
<p>The <code>index</code> variable you use on line 20 is the index of the row in <code>user_timeline_data.iteritems()</code>; it is not an index into the prediction. The prediction is most likely an array with only one value, since you only predict one instance. So change the <code>index</code> on the line</p>
<pre><code>if pred[index] > 0.5:
</code></pre>
<p>To</p>
<pre><code>if pred[0] > 0.5:
</code></pre>
|
python|pandas|dataframe|keras|sentiment-analysis
| 0
|
2,677
| 67,848,962
|
Selecting loss and metrics for Tensorflow model
|
<p>I'm trying to do transfer learning, using a pretrained <strong>Xception</strong> model with a newly added classifier.</p>
<p>This is the model:</p>
<pre><code>base_model = keras.applications.Xception(
weights="imagenet",
input_shape=(224,224,3),
include_top=False
)
</code></pre>
<p>The dataset I'm using is <code>oxford_flowers102</code> taken directly from tensorflow datasets.
<a href="https://www.tensorflow.org/datasets/catalog/oxford_flowers102" rel="nofollow noreferrer">This</a> is a dataset page.</p>
<p><strong>I have a problem with selecting some parameters</strong> - either training accuracy shows suspiciously low values, or there's an error.</p>
<p>I need help with specifying these parameters for this (oxford_flowers102) dataset:</p>
<ol>
<li>Newly added dense layer for the classifier. I was trying with:
<code>outputs = keras.layers.Dense(102, activation='softmax')(x)</code> and I'm not sure whether I should select the activation function here or not.</li>
<li>loss function for model.</li>
<li>metrics.</li>
</ol>
<p>I tried:</p>
<pre><code>model.compile(
optimizer=keras.optimizers.Adam(),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.Accuracy()],
)
</code></pre>
<p>I'm not sure whether it should be <code>SparseCategoricalCrossentropy</code> or <code>CategoricalCrossentropy</code>, and what about the <code>from_logits</code> parameter?</p>
<p>I'm also not sure whether I should choose <code>keras.metrics.Accuracy()</code> or <code>keras.metrics.CategoricalAccuracy()</code> for the metrics.</p>
<p>I am definitely lacking some theoretical knowledge, but right now I just need this to work. Looking forward to your answers!</p>
|
<h2>About the data set: <a href="https://www.tensorflow.org/datasets/catalog/oxford_flowers102" rel="noreferrer">oxford_flowers102</a></h2>
<p>The dataset is divided into a <strong>training set</strong>, a <strong>validation set</strong>, and a <strong>test set</strong>. The training set and validation set each consist of <strong>10</strong> images per class (totaling <strong>1020</strong> images each). The test set consists of the remaining <strong>6149</strong> images (minimum <strong>20</strong> per class).</p>
<pre><code>'test' 6,149
'train' 1,020
'validation' 1,020
</code></pre>
<p>If we check, we'll see</p>
<pre><code>import tensorflow_datasets as tfds
tfds.disable_progress_bar()
data, ds_info = tfds.load('oxford_flowers102',
with_info=True, as_supervised=True)
train_ds, valid_ds, test_ds = data['train'], data['validation'], data['test']
for i, data in enumerate(train_ds.take(3)):
print(i+1, data[0].shape, data[1])
1 (500, 667, 3) tf.Tensor(72, shape=(), dtype=int64)
2 (500, 666, 3) tf.Tensor(84, shape=(), dtype=int64)
3 (670, 500, 3) tf.Tensor(70, shape=(), dtype=int64)
</code></pre>
<pre><code>ds_info.features["label"].num_classes
102
</code></pre>
<p>So, it has <strong>102</strong> categories or classes and the target comes with an <strong>integer</strong> with different shapes input.</p>
<h2>Clarification</h2>
<p><strong>First</strong>, if you keep this integer target or label, you should use <a href="https://www.tensorflow.org/api_docs/python/tf/keras/metrics/SparseCategoricalAccuracy" rel="noreferrer"><code>sparse_categorical_accuracy</code></a> for accuracy and <a href="https://www.tensorflow.org/api_docs/python/tf/keras/losses/SparseCategoricalCrossentropy" rel="noreferrer"><code>sparse_categorical_crossentropy</code></a> for loss function. But if you transform your integer label to a <strong>one-hot encoded vector</strong>, then you should use <a href="https://www.tensorflow.org/api_docs/python/tf/keras/metrics/categorical_accuracy" rel="noreferrer"><code>categorical_accuracy</code></a> for accuracy, and <a href="https://www.tensorflow.org/api_docs/python/tf/keras/losses/categorical_crossentropy" rel="noreferrer"><code>categorical_crossentropy</code></a> for loss function. As these data set have integer labels, you can choose <code>sparse_categorical</code> or you can transform the label to one-hot in order to use <code>categorical</code>.</p>
<p><strong>Second</strong>, if you set <code>outputs = keras.layers.Dense(102, activation='softmax')(x)</code> to the last layer, you will get <strong>probabilities score</strong>. But if you set <code>outputs = keras.layers.Dense(102)(x)</code>, then you will get <strong>logits</strong>. So, if you set <code>activations='softmax'</code>, then you should not use <code>from_logit = True</code>. For example in your above code you should do as follows (here's <a href="https://stackoverflow.com/questions/34240703/what-are-logits-what-is-the-difference-between-softmax-and-softmax-cross-entrop">some theory</a> for you):</p>
<pre><code>...
(a)
# Use softmax activation (no logits output)
outputs = keras.layers.Dense(102, activation='softmax')(x)
...
model.compile(
optimizer=keras.optimizers.Adam(),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=False),
metrics=[keras.metrics.Accuracy()],
)
or,
(b)
# no activation, output will be logits
outputs = keras.layers.Dense(102)(x)
...
model.compile(
optimizer=keras.optimizers.Adam(),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.Accuracy()],
)
</code></pre>
<p><strong>Third</strong>, <a href="/questions/tagged/keras" class="post-tag" title="show questions tagged 'keras'" rel="tag">keras</a> uses <strong>string identifiers</strong> such as <code>metrics=['acc'], optimizer='adam'</code>. But in your case, you need to be a bit more specific, as you want metrics consistent with the specific loss function. So, instead of <code>keras.metrics.Accuracy()</code>, you should choose <code>keras.metrics.SparseCategoricalAccuracy()</code> <strong>if your targets are integers</strong> or <code>keras.metrics.CategoricalAccuracy()</code> <strong>if your targets are one-hot encoded vectors</strong>.</p>
<h2>Code Examples</h2>
<p>Here is an end-to-end example. Note, I will <strong>transform integer labels to a one-hot encoded vector</strong> (right now, it's a matter of preference to me). Also, I want <strong>probabilities</strong> (not logits) from the last layer which means <code>from_logits = False</code>. And for all of these, I need to choose the following parameters in my training:</p>
<pre><code># use softmax to get probabilities
outputs = keras.layers.Dense(102,
activation='softmax')(x)
# so no logits, set it false (FYI, by default it already false)
loss = keras.losses.CategoricalCrossentropy(from_logits=False),
# specify the metrics properly
metrics = keras.metrics.CategoricalAccuracy(),
</code></pre>
<p>Let's complete the whole code.</p>
<pre><code>import tensorflow_datasets as tfds
tfds.disable_progress_bar()
data, ds_info = tfds.load('oxford_flowers102',
with_info=True, as_supervised=True)
train_ds, valid_ds, test_ds = data['train'], data['validation'], data['test']
NUM_CLASSES = ds_info.features["label"].num_classes
train_size = len(data['train'])
batch_size = 64
img_size = 120
</code></pre>
<p><strong>Preprocess and Augmentation</strong></p>
<pre><code>import tensorflow as tf
# pre-process functions
def normalize_resize(image, label):
    image = tf.cast(image, tf.float32)
    image = tf.divide(image, 255)
    image = tf.image.resize(image, (img_size, img_size))
    label = tf.one_hot(label, depth=NUM_CLASSES)  # int to one-hot
    return image, label

# augmentation
def augment(image, label):
    image = tf.image.random_flip_left_right(image)
    return image, label
train = train_ds.map(normalize_resize).cache().map(augment).shuffle(100).\
batch(batch_size).repeat()
valid = valid_ds.map(normalize_resize).cache().batch(batch_size)
test = test_ds.map(normalize_resize).cache().batch(batch_size)
</code></pre>
<p><strong>Model</strong></p>
<pre><code>from tensorflow import keras
base_model = keras.applications.Xception(
weights='imagenet',
input_shape=(img_size, img_size, 3),
include_top=False)
base_model.trainable = False
inputs = keras.Input(shape=(img_size, img_size, 3))
x = base_model(inputs, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(NUM_CLASSES, activation='softmax')(x)
model = keras.Model(inputs, outputs)
</code></pre>
<p>Okay, additionally, here I like to use two metrics to compute <a href="https://www.tensorflow.org/api_docs/python/tf/keras/metrics/TopKCategoricalAccuracy" rel="noreferrer"><code>top-1</code></a> and <a href="https://www.tensorflow.org/api_docs/python/tf/keras/metrics/TopKCategoricalAccuracy" rel="noreferrer"><code>top-3</code></a> accuracy.</p>
<pre><code>model.compile(optimizer=keras.optimizers.Adam(),
loss=keras.losses.CategoricalCrossentropy(),
metrics=[
keras.metrics.TopKCategoricalAccuracy(k=3, name='acc_top3'),
keras.metrics.TopKCategoricalAccuracy(k=1, name='acc_top1')
])
model.fit(train, steps_per_epoch=train_size // batch_size,
epochs=20, validation_data=valid, verbose=2)
</code></pre>
<pre><code>...
Epoch 19/20
15/15 - 2s - loss: 0.2808 - acc_top3: 0.9979 - acc_top1: 0.9917 -
val_loss: 1.5025 - val_acc_top3: 0.8147 - val_acc_top1: 0.6186
Epoch 20/20
15/15 - 2s - loss: 0.2743 - acc_top3: 0.9990 - acc_top1: 0.9885 -
val_loss: 1.4948 - val_acc_top3: 0.8147 - val_acc_top1: 0.6255
</code></pre>
<p><strong>Evaluate</strong></p>
<pre><code># evaluate on test set
model.evaluate(test, verbose=2)
97/97 - 18s - loss: 1.6482 - acc_top3: 0.7733 - acc_top1: 0.5994
[1.648208498954773, 0.7732964754104614, 0.5994470715522766]
</code></pre>
|
tensorflow|machine-learning|keras|deep-learning|tensorflow2.0
| 8
|
2,678
| 68,019,521
|
Calculating Cumulative Compound Interest
|
<p>I have the code below. It works correctly if the deposit is always > 0. However, if it's 0 for some months, then the pandas algorithm breaks. It doesn't return the correct value.</p>
<pre><code>import pandas as pd
deposit = [100] * 4
rate = [0.1] * 4
df = pd.DataFrame({ 'deposit':deposit, 'rate':rate})
df['interest'] = df.deposit * df.rate
df['total'] = df.deposit.cumsum() + df.interest.cumsum()
df.loc[2:, ['deposit']] = 0
df['total'] = (df['deposit'] * df['rate'].shift().add(1).cumprod().fillna(1)).cumsum()
</code></pre>
<p>Here's the result</p>
<pre><code>
deposit rate interest total
0 100 0.1 10.0 100.0
1 100 0.1 10.0 210.0
2 0 0.1 10.0 210.0
3 0 0.1 10.0 210.0
</code></pre>
<p>For months 2 and 3, the amount should increase by the interest accumulated from the previous months on months 1.</p>
|
<pre><code>import pandas as pd
init_deposit = 100
init_rate = 0.1
deposit = [init_deposit] * 8
rate = [init_rate] * 8
df = pd.DataFrame({ 'deposit':deposit, 'rate':rate})
df.loc[2:4, ['deposit']] = 0
df['interest'] = 0.0
df['total'] = 0.0
# use .loc (not chained .iloc assignment) so the values are actually written back to df
df.loc[0, 'total'] = init_deposit

for i in range(1, len(df)):
    df.loc[i, 'interest'] = df.loc[i - 1, 'total'] * df.loc[i, 'rate']
    df.loc[i, 'total'] = df.loc[i, 'interest'] + df.loc[i - 1, 'total'] + df.loc[i, 'deposit']
</code></pre>
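<p>If you'd rather avoid the Python loop, the same recurrence <code>total_i = total_{i-1} * (1 + rate_i) + deposit_i</code> can also be computed with <code>cumprod</code>/<code>cumsum</code> (a sketch using the columns above; <code>total_vec</code> is just an illustrative name and should match the loop result):</p>
<pre><code>growth = (1 + df['rate']).copy()
growth.iloc[0] = 1.0          # no interest is applied in the very first row
growth = growth.cumprod()

df['total_vec'] = (df['deposit'] / growth).cumsum() * growth
</code></pre>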
|
python|pandas|dataframe
| 0
|
2,679
| 61,195,547
|
Show only integers in matplotlib x-axis tickmarks
|
<p>New to matplotlib and have created a simple line chart from a dataset constructed similar in principle to that below. We'll call that dataframe 'cardata'</p>
<pre><code>|------- |--------|------------|---------|
| id | year | some_var | count |
---------|--------|------------|---------|
| 1 | 2016 | car | 1 |
| 2 | 2016 | car | 1 |
| 3 | 2017 | car | 1 |
| 4 | 2017 | car | 1 |
| 5 | 2018 | car | 1 |
| 6 | 2018 | car | 1 |
| 7 | 2018 | car | 1 |
| 8 | 2019 | car | 1 |
| 9 | 2019 | car | 1 |
| 10 | 2020 | car | 1 |
</code></pre>
<p>I wish to aggregate the counts by year so that I can see how many times 'car' occurs per year.</p>
<p>I have achieved this using the following code </p>
<pre><code>cardata.groupby(['year']).count()['some_var'].plot()
</code></pre>
<p>This gives me a plot I can use, however the x-axis goes like this...</p>
<pre><code>| 2016 | 2016.5 | 2017 | 2017.5 | 2018 | 2018.5 | etc etc
</code></pre>
<p>Question 1) How can I set the x-asxis labels/tickmarks to only show integers for the year?</p>
<p>Question 2) How would I exclude the year '2020' for example, from the plot?</p>
<p>Thanks in advance.</p>
|
<p>boolean indexing, groupby and plot with the param xticks:</p>
<pre><code>g = df[df['year'] != 2020].groupby('year').count()['some_var']
g.plot(xticks=g.index)
</code></pre>
<p>One way of plotting labels is to use matplotlib and list comprehension. The code blow will plot the <code>y</code> value but it could really be anything:</p>
<pre><code>import matplotlib.pyplot as plt
g = df[df['year'] != 2020].groupby('year').count()['some_var']
g.plot(xticks=g.index)
[plt.annotate(y, (x,y), textcoords="offset points",
xytext=(0,10), ha='center') for x,y in list(zip(g.index, g))]
</code></pre>
<p><a href="https://i.stack.imgur.com/uBuPY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uBuPY.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib|jupyter-notebook|jupyter
| 2
|
2,680
| 68,630,198
|
Unable to read csv file in Google Colab
|
<p>I imported these libraries and trying to read a csv file on my desktop</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.read_csv('/Users/yoshithKotla/Desktop/canal/verymimi_M.csv')
</code></pre>
<p>But I get an error saying</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: '/Users/yoshithKotla/Desktop/canal/verymimi_M.csv'
</code></pre>
<p>The path of the file is correct and it is present on my desktop.</p>
|
<p>You cannot read the local files present on your computer directly into the google colab environment.</p>
<p>To upload from your local drive, start with the following code:</p>
<pre><code>from google.colab import files
uploaded = files.upload()
</code></pre>
<p>It will prompt you to select a file. Click on “Choose Files” then select and upload the file. Wait for the file to be 100% uploaded. You should see the name of the file once Colab has uploaded it.</p>
<p>Finally, type in the following code to import it into a dataframe (make sure the filename matches the name of the uploaded file).</p>
<pre><code>import io
df2 = pd.read_csv(io.BytesIO(uploaded['Filename.csv']))  # Dataset is now stored in a Pandas Dataframe
</code></pre>
<p>Reference: <a href="https://towardsdatascience.com/3-ways-to-load-csv-files-into-colab-7c14fcbdcb92" rel="nofollow noreferrer">https://towardsdatascience.com/3-ways-to-load-csv-files-into-colab-7c14fcbdcb92</a></p>
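<p>Alternatively, if the file is (or can be copied) into your Google Drive, you can mount the drive and read it with a normal path (a sketch; the path below is only an example and must be adjusted to where the file actually lives in your Drive):</p>
<pre><code>from google.colab import drive
drive.mount('/content/drive')

import pandas as pd
df = pd.read_csv('/content/drive/MyDrive/canal/verymimi_M.csv')  # example path in Drive
</code></pre>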
|
python|pandas|google-colaboratory
| 1
|
2,681
| 52,910,939
|
mark rows with timestamp between times
|
<p>I need to mark rows in a time series where the timestamps fall between given time-of-day blocks; when I have eg</p>
<pre><code>values = ([ 'motorway' ] * 5000) + ([ 'link' ] * 300) + ([ 'motorway' ] * 7000)
df = pd.DataFrame.from_dict({
'timestamp': pd.date_range(start='2018-1-1', end='2018-1-2', freq='s').tolist()[:len(values)],
'road_type': values,
})
df.set_index('timestamp', inplace=True)
</code></pre>
<p>I need to add a column <code>rush</code> that marks rows where <code>timestamp</code> is between <code>06:00</code> and <code>09:00</code> or <code>15:30</code> and <code>19:00</code>. I've seen <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.between_time.html" rel="nofollow noreferrer">between_time</a> but I don't know how to apply it here.</p>
<p>edit: based on <a href="https://stackoverflow.com/questions/35004241/pandas-between-time-boolean">this answer</a> I managed to put together </p>
<pre><code>df['rush'] = df.index.isin(df.between_time('00:00:15', '00:00:20', include_start=True, include_end=True).index) | df.index.isin(df.between_time('00:00:54', '00:00:59', include_start=True, include_end=True).index)
</code></pre>
<p>but I wonder whether there isn't a more elegant way.</p>
|
<p>One alternative using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.between.html" rel="nofollow noreferrer"><code>between</code></a></p>
<pre><code>from datetime import time as t
values = ([ 'motorway' ] * 5000) + ([ 'link' ] * 300) + ([ 'motorway' ] * 7000)
df = pd.DataFrame.from_dict({
'timestamp': pd.date_range(start='2018-1-1', end='2018-1-2',
freq='s').tolist()[:len(values)],
'road_type': values,
})
time = df['timestamp'].dt.time
df['rush'] = (time.between(t(0,6,0), t(0,9,0)) | time.between(t(0,15,30),t(0,19,0))).values
</code></pre>
<p>Or slicing the <code>df</code> using <code>datetime.time</code></p>
<pre><code>df = df.set_index(df.timestamp.dt.time)
df['rush'] = df.index.isin(df[t(0,6,0):t(0,9,0)].index | df[t(0,15,30):t(0,19,0)].index)
df = df.reset_index(drop=True)
</code></pre>
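<p>Since the question mentions <code>between_time</code>: if <code>timestamp</code> stays as a <code>DatetimeIndex</code> (as in the question's setup), <code>DatetimeIndex.indexer_between_time</code> returns the row positions directly, so another option (a sketch using the rush-hour windows from the question) is:</p>
<pre><code>import numpy as np

rush_pos = np.union1d(
    df.index.indexer_between_time('06:00', '09:00'),
    df.index.indexer_between_time('15:30', '19:00'),
)
df['rush'] = False
df.iloc[rush_pos, df.columns.get_loc('rush')] = True
</code></pre>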
|
pandas|timestamp
| 0
|
2,682
| 53,347,606
|
Repeat numpy vector-matrix-vector multiplication
|
<p>I have a set of vectors (n), another set of vectors (s) and a set of 3x3 2D arrays (T).</p>
<pre><code>n = np.array([
[[1, 2, 3]],
[[2, 2, 3]],
[[3, 2, 3]],
[[4, 2, 3]],
[[5, 2, 3]],
[[6, 2, 3]]
])
s = np.array([
[[1, 1, 5]],
[[2, 2, 5]],
[[3, 3, 5]],
[[4, 4, 5]],
[[5, 5, 5]],
[[6, 6, 5]]
])
T = np.array([
[[1, 2, 3],
[1, 2, 3],
[2, 2, 3]],
[[2, 2, 3],
[3, 2, 3],
[4, 2, 3]],
[[3, 2, 3],
[5, 2, 3],
[6, 2, 3]],
[[4, 2, 3],
[7, 2, 3],
[8, 2, 3]]
])
</code></pre>
<p>Right now, my current code loops through n, s, and then T:</p>
<pre><code>result = np.empty((n.shape[0], s.shape[0], T.shape[0]))
for i in range(n.shape[0]):
    for j in range(s.shape[0]):
        for k in range(T.shape[0]):
            result[i][j][k] = np.sum(n[i] * T[k] * s[j].T)
</code></pre>
<p>I tried to use np.apply_along_axis but it requires a 1D array to operate on. Ideally, I'm trying to work out a solution that doesn't require any for loops.</p>
<p>I tried to get <code>np.tensordot()</code> working (and do this in two operations), but so far no success.</p>
<p>Anyone have ideas on a more 'numpy-ish' way to do this?</p>
|
<pre><code>np.einsum('imn,jnm,kmn->ijk', n, s, T)
</code></pre>
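<p>To sanity-check the subscripts against the original triple loop (a quick verification sketch; the result array is preallocated with <code>np.empty</code>):</p>
<pre><code>result = np.empty((n.shape[0], s.shape[0], T.shape[0]))
for i in range(n.shape[0]):
    for j in range(s.shape[0]):
        for k in range(T.shape[0]):
            result[i, j, k] = np.sum(n[i] * T[k] * s[j].T)

print(np.allclose(result, np.einsum('imn,jnm,kmn->ijk', n, s, T)))  # should print True
</code></pre>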
|
numpy
| 2
|
2,683
| 53,129,440
|
Create dataframe with specific length
|
<p>How can I create a new dataframe using pandas of 1000 length and assign values using for loop. I tried this way. But it doesn't work. </p>
<pre><code> f = {'ID': [],'CSE':[], 'Course Name':[]}
ff = pd.DataFrame(data=f)
for i in range(1000):
    ff.loc['173'] = [151, 'CSE']
</code></pre>
<p>It gives output like-</p>
<pre><code> *ID *CSE Course Name*
173 151 CSE
</code></pre>
|
<p>Use:</p>
<pre><code>for i in range(1000):
    ff.loc[i] = ['173', 151, 'CSE']
</code></pre>
<p>A better and faster solution is to create a list of lists and then build the <code>DataFrame</code> with the constructor:</p>
<pre><code>#loop
L = []
for i in range(10):
    L.append(['173', 151, 'CSE'])
#list comprehension
#L = [['173', 151, 'CSE'] for i in range(10)]
ff = pd.DataFrame(data=L, columns=['ID','CSE','Course Name'])
print (ff.head())
ID CSE Course Name
0 173 151 CSE
1 173 151 CSE
2 173 151 CSE
3 173 151 CSE
4 173 151 CSE
</code></pre>
|
pandas|loops|dataframe
| 1
|
2,684
| 53,207,650
|
python pandas lambda with 2 and more variables
|
<p>I have a dataframe where I'd like to add a column with conditional sum of values based on criteria in 2 (possibly 3) different columns. I'm trying to use lambda function such as:</p>
<pre><code>df['newColumn'] = df[['colA','colB']].apply(lambda x,y:
df.loc[df['colA']==x].loc[df['colB']==y]['Total Amount'].sum())
</code></pre>
<p>This approach doesn't work, although when I test the .loc statement separately and use values in lieu of x and y I do get the correct sum. I would like to bring in another column to this if possible. The error I'm getting is: "() missing 1 required positional argument: 'y'", occurred at index colA.
Any help greatly appreciated.</p>
|
<p>My guess is you want this:</p>
<pre><code>df = pd.DataFrame({'A': [1,1,2,2,3,3],
'B': [2,2,2,3,3,3],
'TotalAmount': [10,20,30,40,50,60]})
df['NewColumn'] = df.groupby(['A', 'B'])['TotalAmount'].transform('sum')
df
# A B TotalAmount NewColumn
#0 1 2 10 30
#1 1 2 20 30
#2 2 2 30 30
#3 2 3 40 40
#4 3 3 50 110
#5 3 3 60 110
</code></pre>
|
python|pandas|lambda
| 0
|
2,685
| 65,768,909
|
Group numpy matrix based on value of particular column, only on row indices from given array
|
<p>I have a numpy matrix consisting of binary values. I have a list of row indices as well. Now I have to obtain indices from the matrix where the value for a particular column is 1, and the indices must be contained in the row indices list. What will be an efficient way of doing this? I'm currently doing:</p>
<pre><code>result = [index for index in np.where(dataset[:, col] == 1)[0] if index in indices]
</code></pre>
|
<p>Maybe looping through the list of row indices is faster since you don't need to loop over all the rows of the matrix:</p>
<pre><code>result = [index for index in indices if dataset[index, col] == 1]
</code></pre>
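<p>If you want to avoid the Python-level loop entirely, a fully vectorized variant (a sketch; it returns a NumPy array of the matching row indices, in the order given by <code>indices</code>) is:</p>
<pre><code>import numpy as np

idx = np.asarray(indices)
result = idx[dataset[idx, col] == 1]
</code></pre>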
|
python|list|numpy|matrix|numpy-slicing
| 0
|
2,686
| 65,805,232
|
The loss will be Nan when I use loss function defined by torch.nn.function.mse_loss
|
<p>The loss is always Nan when I use the loss function as follow:</p>
<pre><code>def Myloss1(source, target):
    loss = torch.nn.functional.mse_loss(source, target, reduction="none")
    return torch.sum(loss).sqrt()
...
loss = Myloss1(s, t)
loss.backward()
</code></pre>
<br>
<p>But when I use the following loss function, the training becomes normal:</p>
<pre><code>def Myloss2(source, target):
    diff = target - source
    loss = torch.norm(diff)
    return loss
...
loss = Myloss2(s, t)
loss.backward()
</code></pre>
<br>
<p>Why can't use the ‘Myloss1’ to train? Aren't Myloss1 and Myloss2 equivalent?</p>
<p>Please help me,thank you very much!</p>
|
<p><code>Myloss1</code> and <code>Myloss2</code> are indeed supposedly equivalent. They at least return the same values for all the tensors I have tried them on.</p>
<p>About the NaN, let's first try to find when it happens. The only possible culprit here is the <code>sqrt</code>, which is not differentiable at 0. And indeed:</p>
<pre><code>y = torch.randn(2,3)
x = y.clone()
x.requires_grad_(True)
Myloss1(x,y).backward()
print(x.grad.data)
>>> [[nan, nan, nan], [nan, nan, nan]]
</code></pre>
<p>On the other hand :</p>
<pre><code>Myloss2(x,y).backward()
print(x.grad.data)
>>> [[-0., -0., -0.],[-0., -0., -0.]]
</code></pre>
<p>Of both results, only the first is mathematically "accurate". Computing the derivative of the square root at 0 yields a division by 0. That is why when training neural networks or whatever, the <code>sqrt</code> is not used. You should use</p>
<pre><code>good_loss = torch.nn.MSELoss(reduction='mean') # or ='sum' if you prefer
</code></pre>
<p>This function is differentiable everywhere, you won't have any more trouble.</p>
<p>As to why your <code>Myloss2</code> yields a different gradient, it is related to its implementation. It was extensively discussed <a href="https://discuss.pytorch.org/t/nan-in-torch-norm-if-input-is-zero/6844" rel="nofollow noreferrer">here</a>. Basically, people complained about the nans, so the lib was changed to modify this behavior, while acknowledging that there is no mathematically correct answer here since this derivative is not defined at 0.</p>
|
pytorch
| 0
|
2,687
| 63,637,135
|
How to scan characters in strings to flag if the match is correct
|
<p>I have 2 columns of strings and I'd like to create a column with a "yes" or "no" if the first 3 characters of each string in their row match. Basically code that goes over the first 3 characters of column 1 row 1 and compares it with column 2 row 1 to see if the first 3 chars match; if yes then it should print YES in column 3 as seen in the example.</p>
<p>IE: Row 1 Column 1 scans "p""a""s" and looks in Row 1 Column 2 and scans "p""a""s" meanign that they are the same and should be true in Column 3.</p>
<p>I'm fairly new to python; my apologies.</p>
<p>Original Table:</p>
<pre><code>+-------------+---------+----------+
| Row Index | Col1 | Col2 |
+-------------+---------+----------+
| 1 | pasta | pastas |
| 2 | sauces | orange |
| 3 | kiwi | kiwis |
+-------------+---------+----------+
</code></pre>
<p>Expected Output Table:</p>
<pre><code>+-------------+---------+----------+---------+
| Row Index | Col1 | Col2 | Col3 |
+-------------+---------+----------+---------+
| 1 | pasta | pastas | YES |
| 2 | sauces | orange | NO |
| 3 | rosin | robert | NO |
+-------------+---------+----------+---------+
</code></pre>
<p>I don't have any code to show as I'm not sure how to start this. Thanks.</p>
|
<p>Here is a one-liner:</p>
<pre><code>df['Col3'] = (df['Col1'].str[:3] == df['Col2'].str[:3]).map(
{True: 'YES', False: 'NO'})
</code></pre>
<p>Rule of thumb: pretty much everything you do with pandas/numpy data is better in vector format, i.e. without using loops.</p>
<p>Step1: extract first three letters from all strings in a column:
You can perform pretty much all standard string operations on columns via <code>df['col'].str</code> objects. Here: <code>df['Col1'].str[:3]</code></p>
<p>Step2: check if 3-char prefixes match: again, you can directly compare columns to get a column of boolean values. <code>df['Col1'].str[:3] == df['Col2'].str[:3]</code></p>
<p>Step3: replace boolean values with 'YES' and 'NO'. I hope you see where it is going: <code>boolean_data.map({True: 'YES', False: 'NO'})</code></p>
|
python|pandas|numpy|knime
| 3
|
2,688
| 63,567,713
|
IndexError: index is out of bounds - word2vec
|
<p>I have trained a word2vec model called <code>word_vectors</code>, using the Gensim package with size = 512.</p>
<pre><code>fname = get_tmpfile('word2vec.model')
word_vectors = KeyedVectors.load(fname, mmap='r')
</code></pre>
<p>Now, I have created a new Numpy array (also of size 512) which I have added to the word2vec as follows:</p>
<pre><code>vector = (rand(512)-0.5) *20
word_vectors.add('koffie', vector)
</code></pre>
<p>Doing this seems to go fine and even when I call</p>
<pre><code>word_vectors['koffie']
</code></pre>
<p>I get the array as output, as expected.</p>
<p>However, when I want to look for the most similar words in my model and run the following code:</p>
<pre><code>word_vectors.most_similar('koffie')
</code></pre>
<p>I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-283-ce992786ce89>", line 1, in <module>
word_vectors.most_similar('koffie')
File "C:\Users\20200016\AppData\Local\Continuum\anaconda3\envs\ldaword2vec\lib\site-packages\gensim\models\keyedvectors.py", line 553, in most_similar
mean.append(weight * self.word_vec(word, use_norm=True))
File "C:\Users\20200016\AppData\Local\Continuum\anaconda3\envs\ldaword2vec\lib\site-packages\gensim\models\keyedvectors.py", line 461, in word_vec
result = self.vectors_norm[self.vocab[word].index]
IndexError: index 146139 is out of bounds for axis 0 with size 146138
word_vector.size()
Traceback (most recent call last):
File "<ipython-input-284-2606aca38446>", line 1, in <module>
word_vector.size()
NameError: name 'word_vector' is not defined
</code></pre>
<p>The error seems to indicate that my indexing isn't correct here. But since I am only indexing indirectly (with a key rather than an actual numeric index), I don't see what I need to change here.</p>
<p>Who knows what goes wrong here? And what can I do to overcome this error?</p>
|
<p>The 1st time you do a <code>.most_similar()</code>, a <code>KeyedVectors</code> instance (in gensim versions through 3.8.3) will create a cache of unit-normalized vectors to assist in all subsequent bulk-similarity operations, and place it in <code>.vectors_norm</code>.</p>
<p>It looks like your addition of a new vector didn't flush/recalculate/expand that cached <code>.vectors_norm</code> - originally the <code>KeyedVectors</code> class and <code>.most_similar()</code> operation were not designed with constantly-growing or constantly-changing sets-of-vectors in mind, but rather as utilities for a post-training, frozen set of vectors.</p>
<p>So that's the cause of your <code>IndexError</code>.</p>
<p>You should be able to work-around this by explicitly clearing the <code>.vectors_norm</code> any time you perform modifications/additions to the <code>KeyedVectors</code>, eg:</p>
<pre class="lang-py prettyprint-override"><code>word_vectors.vectors_norm = None
</code></pre>
<p>(This shouldn't be necessary in the next 4.0.0 release of gensim, but I'll double-check there's not a similar problem there.)</p>
<p>Separately:</p>
<ul>
<li><p>Your <code>'word_vector' is not defined</code> error is simply because you seem to have left the 's' off your chosen variable name <code>word_vectors</code></p>
</li>
<li><p>You probably don't need to be using the gensim-testing-utility-method <code>get_tmpfile()</code> - just use your own explicit, intentional filesystem paths for saving and loading</p>
</li>
<li><p>Whether it's proper to use <code>KeyedVectors.load()</code> depends on what was saved. If you are in fact saving a full <code>Word2Vec</code> class instance (more than just the vectors), using <code>Word2Vec.load()</code> would be more appropriate.</p>
</li>
</ul>
|
python-3.x|numpy|gensim|word2vec|index-error
| 1
|
2,689
| 63,423,089
|
How to swap two rows of a Pandas DataFrame?
|
<p>Suppose I have this dataframe :</p>
<pre><code> 0 1 2 3 4
0 0 1 2 3 4
1 5 6 7 8 9
2 10 11 12 13 14
3 15 16 17 18 19
4 20 21 22 23 24
</code></pre>
<p>I want to swap the position of row 1 and 2.</p>
<p>Is there a native Pandas function that can do this?
Thanks!</p>
|
<p>Use <code>rename</code> with a custom dict and <code>sort_index</code></p>
<pre><code>d = {1: 2, 2: 1}
df_final = df.rename(d).sort_index()
Out[27]:
0 1 2 3 4
0 0 1 2 3 4
1 10 11 12 13 14
2 5 6 7 8 9
3 15 16 17 18 19
4 20 21 22 23 24
</code></pre>
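<p>If you'd rather swap the row <em>values</em> in place while keeping the index labels in order, a variant using <code>iloc</code> is shown below; <code>.values</code> is needed so pandas does not re-align on the index and undo the swap:</p>
<pre><code>df.iloc[[1, 2]] = df.iloc[[2, 1]].values
</code></pre>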
|
python-3.x|pandas|dataframe
| 4
|
2,690
| 63,569,977
|
Custom Aggregate Function in Python
|
<p>I have been struggling with a problem with custom aggregate function in Pandas that I have not been able to figure it out. let's consider the following data frame:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
df = pd.DataFrame({'value': np.arange(1, 5), 'weights':np.arange(1, 5)})
</code></pre>
<p>Now if, I want to calculate the the average of the <code>value</code> column using the <code>agg</code> in <code>Panadas</code>, it would be:</p>
<pre class="lang-py prettyprint-override"><code>df.agg({'value': 'mean'})
</code></pre>
<p>which results in a scalar value of 2.5, as shown in the following:
<a href="https://i.stack.imgur.com/ksSNC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ksSNC.png" alt="enter image description here" /></a></p>
<p>However, if I define the following custom <code>mean</code> function:</p>
<pre class="lang-py prettyprint-override"><code>def my_mean(vec):
    return np.mean(vec)
</code></pre>
<p>and use it in the following code:</p>
<pre class="lang-py prettyprint-override"><code>df.agg({'value': my_mean})
</code></pre>
<p>I would get the following result:</p>
<p><a href="https://i.stack.imgur.com/YibmN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YibmN.png" alt="enter image description here" /></a></p>
<p>So, the question here is: what should I do to get the same result as the default <code>mean</code> aggregate function? One more thing to note: if I use the <code>mean</code> function as a method in the custom function (shown below), it works just fine; however, I would like to know how to use the <code>np.mean</code> function in my custom function. Any help would be much appreciated!</p>
<pre class="lang-py prettyprint-override"><code>df my_mean2(vec):
return vec.mean()
</code></pre>
|
<p>When you pass a callable as the aggregate function, if that callable is not one of the predefined callables like <code>np.mean</code>, <code>np.sum</code>, etc., it'll be treated as a transform and act like <code>df.apply()</code>.</p>
<p>The way around it is to let pandas know that your callable expects a vector of values. A crude way to do it is something like:</p>
<pre><code>def my_mean(vals):
    print(type(vals))
    try:
        vals.shape
    except:
        raise TypeError()
    return np.mean(vals)
>>> df.agg({'value': my_mean})
<class 'int'>
<class 'pandas.core.series.Series'>
value 2.5
dtype: float64
</code></pre>
<p>You see, at first pandas tries to call the function on each row (<code>df.apply</code>), but <code>my_mean</code> raises a type error and in the second attempt it'll pass the whole column as a <code>Series</code> object. Comment the try...except part out and you'll see <code>my_mean</code> will be called on each row with an <code>int</code> argument.</p>
<hr />
<p>more on the first part:</p>
<pre><code>my_mean1 = np.mean
my_mean2 = lambda *args, **kwargs: np.mean(*args, **kwargs)
df.agg({'value': my_mean1})
df.agg({'value': my_mean2})
</code></pre>
<p>Although <code>my_mean2</code> and <code>np.mean</code> are essentially the same, since <code>my_mean2 is np.mean</code> evaluates to false, it'll go down the <code>df.apply</code> route while <code>my_mean1</code> will work as expected.</p>
|
python|pandas|numpy
| 2
|
2,691
| 63,481,820
|
Excel to JSON using Python, how do I format this data to my needs?
|
<p>So I want to read an excel file and extract the data into a JSON file using python.</p>
<p>The excel data is formatted as so:</p>
<pre><code>Header 1 | Header 2 | Header 3
x00 x01 x02
x10 x11 x12
. . .
. . .
</code></pre>
<p>Now I've managed to get most of the coding done right I think which is the following. However I really need to get the json output in a very specific format, which is why I used the line for <strong>data[i]</strong></p>
<pre><code>import json
import pandas as pd
df = pd.read_excel (r'C:\Users\ezammit\Documents\Python Scripts\FILE.xlsx', sheet_name='sheet_1')
#initialize data
data=[0 for i in range(len(df) - 1)]
for i in range(len(df) - 1):
data[i] = r'{"'+str(df.columns.values[0])+'": "' +str(df.loc[i][0])+'", '+str(df.columns.values[1])+'": "' +str(df.loc[i][1])+'", '+str(df.columns.values[2])+'": "' +str(df.loc[i][2])+'"}'
with open('Savedwork.json', 'w') as json_file:
    json.dump(data, json_file)
</code></pre>
<p>As I mentioned, I really want to get a specific format in the JSON file which should be exactly as the following:</p>
<pre><code>{"Header1":"data[0][0]", "Header2":"data[0][1]", "Header3":"data[0][2]"},
{"Header1":"data[1][0]", "Header2":"data[1][1]", "Header3":"data[1][2]"},
{"Header1":"data[2][0]", "Header2":"data[2][1]", "Header3":"data[2][2]"},
...
</code></pre>
<p>Any help would be appreciated</p>
|
<p>Instead of "creating" JSON yourself, you can create a python dictionary and then let Python convert it to JSON String and store it in a file.</p>
<p>The first error was that you passed <code>len(df)-1</code> to the <code>range</code> function. The range function automatically goes up to <code>passedValue-1</code>, so you should pass it just <code>len(df)</code>.</p>
<p>And inside the loop just create a <code>dict</code> instead of a String. Here is the modified code for you:</p>
<pre class="lang-py prettyprint-override"><code>import json
import pandas as pd
df = pd.read_excel (r'D:\example.xlsx', sheet_name='Sheet1')
#initialize data
data=[0 for i in range(len(df))]
for i in range(len(df)):
# data[i] = r'{"'+str(df.columns.values[0])+'": "' +str(df.loc[i][0])+'", '+str(df.columns.values[1])+'": "' +str(df.loc[i][1])+'", '+str(df.columns.values[2])+'": "' +str(df.loc[i][2])+'"}'
data[i] = {str(df.columns.values[0]) : str(df.loc[i][0]), str(df.columns.values[1]): str(df.loc[i][1]), str(df.columns.values[2]): str(df.loc[i][2])}
output_lines = [json.dumps(line)+",\n" for line in data]
output_lines[-1] = output_lines[-1][:-2] # remove ",\n" from last line
with open('Savedwork.json', 'w') as json_file:
    json_file.writelines(output_lines)
</code></pre>
<p>Here is the link to sample Excel File, I used: <a href="https://docs.google.com/spreadsheets/d/1qvhH8WN9pLh1g9kM5gnEvXEWsyYto2Y2WirODv8EGYM/edit?usp=sharing" rel="nofollow noreferrer">Sample XLSX</a></p>
<p>And here is the sample output of the code:</p>
<pre class="lang-js prettyprint-override"><code>{"Header1": "1", "Header2": "2", "Header3": "3"},
{"Header1": "6", "Header2": "5", "Header3": "4"},
{"Header1": "7", "Header2": "8", "Header3": "9"}
</code></pre>
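<p>For reference, a shorter route is to let pandas build the records and only handle the line formatting yourself (a sketch; it assumes every value should be written as a string, as in the expected output):</p>
<pre><code>records = df.astype(str).to_dict(orient='records')
lines = [json.dumps(r) for r in records]
with open('Savedwork.json', 'w') as json_file:
    json_file.write(',\n'.join(lines))
</code></pre>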
|
python|json|excel|pandas|formatting
| 3
|
2,692
| 53,698,082
|
Return multiple columns based on date range using pandas
|
<p>I'm basically trying to calculate revenue to date using pandas. I would like to return N columns consisting of each quarter end. Each column would calculate total revenue to date as of that quarter end. I have:</p>
<pre><code>df['Amortization_per_Day'] = (2.5, 3.2, 5.5, 6.5, 9.2)
df['Start_Date'] = ('1/1/2018', '2/27/2018', '3/31/2018', '5/23/2018', '6/30/2018')
Date_Range = pd.date_range('10/31/2017', periods=75, freq='Q-Jan')
</code></pre>
<p>and want to do something like:</p>
<pre><code>df['Amortization_per_Day'] * (('Date_Range' - df['Start_Date']).dt.days + 1)
</code></pre>
<p>for each date within the Date_Range. I'm not sure how to pass the Date_Range through the function and to return N columns. I've been reading about zip(*df) and shift but not fully grasping it. Thank you so much for your help.</p>
|
<h1>Solution</h1>
<p>Here's a complete solution:</p>
<pre><code>from datetime import datetime
import numpy as np
import pandas as pd
df = pd.DataFrame()
df['Amortization_per_Day'] = (2.5, 3.2, 5.5, 6.5, 9.2)
df['Start_Date'] = ('1/1/18', '2/27/18', '3/31/18', '5/23/2018', '6/30/2018')
df['Start_Date'] = pd.to_datetime(df['Start_Date'])
dr = pd.date_range('10/31/2017', periods=75, freq='Q-Jan')
def betweendates(x, y):
    xv = x.values.astype('datetime64[D]')
    xpad = np.zeros(xv.size + 2, dtype=xv.dtype)
    xpad[1:-1] = xv
    xpad[0], xpad[-1] = np.datetime64(datetime.min), np.datetime64(datetime.max)

    yv = y.values.astype('datetime64[D]')
    return (xpad[:-1] <= yv[:,None]) & (xpad[1:] >= yv[:,None])
# get a boolean array that indicates which dates in dr are in between which dates in df['Start_Date']
btwn = betweendates(df['Start_Date'], dr)
# based on the boolean array btwn, select out the salient rows from df and dates from dr
dfsel = df[btwn[:, 1:].T]
drsel = dr[btwn[:, 1:].sum(axis=1, dtype=bool)]
# do the actual calculation the OP wanted
dfsel['Amortization_per_Day'] * ((drsel - dfsel['Start_Date']).dt.days + 1)
</code></pre>
<p>Output:</p>
<pre><code>0 77.5
2 170.5
4 294.4
4 1140.8
4 1987.2
4 2806.0
4 3652.4
4 4498.8
4 5345.2
4 6173.2
...
4 52394.0
4 53212.8
4 54059.2
4 54905.6
4 55752.0
4 56570.8
4 57417.2
4 58263.6
4 59110.0
4 59938.0
Length: 74, dtype: float64
</code></pre>
<h1>Explanation</h1>
<p>The boolean <code>btwn</code> array looks like this:</p>
<pre><code>[[ True False False False False False]
[False True False False False False]
[False False False True False False]
[False False False False False True]
[False False False False False True]
[False False False False False True]
[False False False False False True]
[False False False False False True]
[False False False False False True]
[False False False False False True]
[False False False False False True]
[False False False False False True]
[False False False False False True]
...
</code></pre>
<p>The <code>i</code>th row of <code>btwn</code> corresponds to the <code>i</code>th datetime in your date range. In each row, exactly one value will be <code>True</code>, and the others will be <code>False</code>. A <code>True</code> value in the <code>0</code>th column indicates that the datetime is before any of the <code>Start_Times</code>, a <code>True</code> value in the <code>1</code>st column indicates that the datetime is in between the <code>0</code>th and the <code>1</code>st dates in <code>Start_Times</code>, and so forth. A <code>True</code> value in the last column indicates that the datetime is after any of the <code>Start_Times</code>.</p>
<p>By slicing <code>btwn</code> like this:</p>
<pre><code>btwn[:, 1:]
</code></pre>
<p>it can be used to match up datetimes in your date range with the immediately preceding <code>Start_Time</code>. If you instead change the slices of <code>btwn</code> to be like this:</p>
<pre><code>btwn[:, :-1]
</code></pre>
<p>you would end up matching each datetime to the next <code>Start_Time</code> instead.</p>
|
python|pandas
| 0
|
2,693
| 72,013,365
|
How do i use the & to add two booleans?
|
<p>Can anyone tell me why I'm getting:</p>
<p>TypeError: unsupported operand type(s) for &: 'float' and 'bool'</p>
<p>on this:</p>
<pre><code> quotesstock['crossup'] = (quotesstock['volatility']*5.5 > quotesstock['vix'] & quotesstock['volatility'].shift(-1, axis = 0) <= quotesstock['vix'].shift(-1, axis = 0))
</code></pre>
<p>The individual parts do give lists of dates (index) and booleans, but I can't seem to implement an "and".</p>
<pre><code>quotesstock['volatility']*5.5 > quotesstock['vix']
Out[158]:
Date
2019-12-31 False
2020-01-02 False
2020-01-03 False
2020-01-06 False
2020-01-07 False
2022-04-19 True
2022-04-20 True
2022-04-21 True
2022-04-22 False
2022-04-25 False
Length: 584, dtype: bool
quotesstock['volatility'].shift(-1, axis = 0) <= quotesstock['vix'].shift(-1, axis = 0)
Out[159]:
Date
2019-12-31 False
2020-01-02 False
2020-01-03 False
2020-01-06 False
2020-01-07 False
2022-04-19 True
2022-04-20 True
2022-04-21 True
2022-04-22 True
2022-04-25 False
Length: 584, dtype: bool
</code></pre>
<p>but</p>
<pre><code>quotesstock['crossup'] = (quotesstock['volatility']*5.5 > quotesstock['vix'] & quotesstock['volatility'].shift(-1, axis = 0) <= quotesstock['vix'].shift(-1, axis = 0))
Traceback (most recent call last):
File "D:\MachineLearning\Anaconda\lib\site-packages\pandas\core\ops\array_ops.py", line 302, in na_logical_op
result = op(x, y)
TypeError: unsupported operand type(s) for &: 'float' and 'bool'
</code></pre>
|
<p>This is due to <a href="https://docs.python.org/3/reference/expressions.html#operator-precedence" rel="nofollow noreferrer">Operator precedence</a>, <code>&</code> is more binding (more sticky) than <code><</code>, <code><=</code>, <code>></code>, <code>>=</code>, <code>!=</code>, <code>==</code> so for example</p>
<pre><code>1.5 < 2.5 & True
</code></pre>
<p>is understood as</p>
<pre><code>1.5 < (2.5 & True)
</code></pre>
<p>which cause</p>
<pre><code>TypeError: unsupported operand type(s) for &: 'float' and 'bool'
</code></pre>
<p>you need to use brackets to get the <code><</code> comparison evaluated first; in the above example that means you need to do</p>
<pre><code>(1.5 < 2.5) & True
</code></pre>
<p>in order to get <code>True</code></p>
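<p>Applied to the expression from the question, that means wrapping each comparison in its own parentheses:</p>
<pre><code>quotesstock['crossup'] = (
    (quotesstock['volatility'] * 5.5 > quotesstock['vix'])
    & (quotesstock['volatility'].shift(-1, axis=0) <= quotesstock['vix'].shift(-1, axis=0))
)
</code></pre>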
|
python|pandas
| 1
|
2,694
| 56,641,691
|
What are the purposes of each step in train-evaluate-predict in tensorflow?
|
<p>What do each of the stages do? I understand that for neural nets in nlp, the train will find the best parameters for the word embedding. But what is the purpose of the evaluation step? What is it supposed to do? How is that different from the prediction phase?</p>
|
<p>Training, evaluation and prediction are the three main steps of training a model ( basically in any ML framework ) and to <strong>move a model from research/development to production</strong>.</p>
<p><strong>Training:</strong></p>
<p>A suitable ML architecture is selected based on the problem which needs to be solved. <strong>Hyperparameter optimization</strong> is carried out to fine-tune the model. The model is then trained on the data for a certain number of epochs. <strong>Metrics such as loss, accuracy, MSE are monitored.</strong></p>
<p><strong>Evaluation:</strong></p>
<blockquote>
<p>We need to move the model to production. The model in the production
stage will only make inferences and hence we require the best model
possible. So, in order to evaluate or test the model based on some
predefined levels, the evaluation phase is carried out.</p>
</blockquote>
<p>Evaluation is mostly carried out on the data which is a subset of the original dataset. Training and evaluations splits are made while preprocessing the data. <strong>Metrics are calculated in order to check the performance of the model on the evaluation dataset.</strong></p>
<blockquote>
<p>The evaluation data has been never seen by the model as it is not trained on it. Hence, the model's best performance is expected here.</p>
</blockquote>
<p><strong>Prediction:</strong></p>
<p>After the testing of the model, we can move it to production. In the production phase, models only make an inference ( predictions ) on the data given to them. No training takes place here.</p>
<blockquote>
<p>Even after a thorough examination, the model tends to make
mispredictions. Hence, in the production stage, we can receive
interactive feedback from the users about the performance of the
model.</p>
</blockquote>
<p>Now,</p>
<blockquote>
<p>But what is the purpose of the evaluation step? What is it supposed to
do? How is that different from the prediction phase?</p>
</blockquote>
<p>Evaluation is to make the model better for most cases through which it will come across. Predictions are made to check for other problems which are not related to performance.</p>
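<p>As a minimal illustration with <code>tf.keras</code> (a sketch on made-up data; the three calls map directly to the three stages described above):</p>
<pre><code>import numpy as np
import tensorflow as tf

# toy data: 100 training samples, 20 held-out samples, 5 unlabeled samples
x_train, y_train = np.random.rand(100, 8), np.random.randint(0, 2, 100)
x_val, y_val = np.random.rand(20, 8), np.random.randint(0, 2, 20)
x_new = np.random.rand(5, 8)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5, verbose=0)    # training: weights get updated
print(model.evaluate(x_val, y_val, verbose=0))      # evaluation: metrics on unseen data, no weight updates
print(model.predict(x_new))                         # prediction: inference only, no labels needed
</code></pre>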
|
tensorflow|machine-learning|nlp
| 2
|
2,695
| 47,270,722
|
How to define a custom accuracy in Keras to ignore samples with a particular gold label?
|
<p>I want to write in Keras a custom metric (I am using the tensorflow backend) equivalent to <code>categorical_accuracy</code>, but where the output for samples with a particular gold label (in my case 0, from y_true) have to be ignored. For example, if my outputs were:</p>
<p>Pred 1 - Gold 0</p>
<p>Pred 1 - Gold 1</p>
<p>The accuracy would be 1, since samples with the gold label 0 have to be ignored. That said, the function I wrote (and that is not giving the expected results) is:</p>
<pre><code>def my_accuracy(y_true, y_pred):
    mask = K.any(K.not_equal(K.argmax(y_true, axis=-1), 0), axis=-1, keepdims=True)
    masked_y_true = y_true*K.cast(mask, K.dtype(y_true))
    masked_y_pred = y_pred*K.cast(mask, K.dtype(y_pred))
    return keras.metrics.categorical_accuracy(masked_y_true, masked_y_pred)
</code></pre>
<p>Any help is appreciated, thanks!</p>
|
<p>You could try this approach:</p>
<pre><code>def ignore_accuracy_of_class(class_to_ignore=0):
    def ignore_acc(y_true, y_pred):
        y_true_class = K.argmax(y_true, axis=-1)
        y_pred_class = K.argmax(y_pred, axis=-1)

        ignore_mask = K.cast(K.not_equal(y_pred_class, class_to_ignore), 'int32')
        matches = K.cast(K.equal(y_true_class, y_pred_class), 'int32') * ignore_mask
        accuracy = K.sum(matches) / K.maximum(K.sum(ignore_mask), 1)
        return accuracy

    return ignore_acc
</code></pre>
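<p>Note that the mask above is computed from the <em>predicted</em> class. If you want to ignore samples based on the gold label instead (as described in the question), a small variant builds the mask from <code>y_true_class</code> (assuming the usual <code>from keras import backend as K</code>):</p>
<pre><code>def ignore_accuracy_of_true_class(class_to_ignore=0):
    def ignore_acc(y_true, y_pred):
        y_true_class = K.argmax(y_true, axis=-1)
        y_pred_class = K.argmax(y_pred, axis=-1)

        # mask on the gold label rather than on the prediction
        ignore_mask = K.cast(K.not_equal(y_true_class, class_to_ignore), 'int32')
        matches = K.cast(K.equal(y_true_class, y_pred_class), 'int32') * ignore_mask
        accuracy = K.sum(matches) / K.maximum(K.sum(ignore_mask), 1)
        return accuracy
    return ignore_acc
</code></pre>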
|
tensorflow|keras|metrics
| 1
|
2,696
| 59,221,557
|
TensorFlow v2: Replacement for tf.contrib.predictor.from_saved_model
|
<p>So far, I was using <code>tf.contrib.predictor.from_saved_model</code> to load a <code>SavedModel</code> (<code>tf.estimator</code> model class). However, this function has unfortunately been removed in TensorFlow v2. So far, in TensorFlow v1, my coding was the following:</p>
<pre><code>predict_fn = predictor.from_saved_model(model_dir + '/' + model, signature_def_key='predict')

prediction_feed_dict = dict()
for key in predict_fn._feed_tensors.keys():
    # forec_data is a DataFrame holding the data to be fed in
    for index in forec_data.index:
        prediction_feed_dict[key] = [ [ forec_data.loc[index][key] ] ]

prediction_complete = predict_fn(prediction_feed_dict)
</code></pre>
<p>Using <code>tf.saved_model.load</code>, I unsuccessfully tried the following in TensorFlow v2:</p>
<pre><code>model = tf.saved_model.load(model_dir + '/' + latest_model)
model_fn = model.signatures['predict']

prediction_feed_dict = dict()
for key in model_fn._feed_tensors.keys(): #<-- no replacement for _feed_tensors.keys() found
    # forec_data is a DataFrame holding the data to be fed in
    for index in forec_data.index:
        prediction_feed_dict[key] = [ [ forec_data.loc[index][key] ] ]

prediction_complete = model_fn(prediction_feed_dict) #<-- no idea if this is anyhow close to correct
</code></pre>
<p>So my questions are (both in the context of TensorFlow v2):</p>
<ol>
<li>How can I replace <code>_feed_tensors.keys()</code>?</li>
<li>How to inference in a straightforward way using a <code>tf.estimator</code> model loaded with <code>tf.saved_model.load</code> </li>
</ol>
<p>Thanks a lot, any help is appreciated.</p>
<p>Note: This question is not a duplicate of a previously posted question, as the answers provided there all rely on features of TensorFlow v1 that have been removed in TensorFlow v2.</p>
<p><strong>EDIT:</strong> The question postet <a href="https://stackoverflow.com/questions/58308258/run-prediction-from-saved-model-in-tensorflow-2-0">here</a> seems to ask basically the same thing, but until now (2020-01-22) is also unanswered.</p>
|
<p>Hope you have Saved the Estimator Model using the code similar to that mentioned below:</p>
<pre><code>input_column = tf.feature_column.numeric_column("x")
estimator = tf.estimator.LinearClassifier(feature_columns=[input_column])
def input_fn():
    return tf.data.Dataset.from_tensor_slices(
        ({"x": [1., 2., 3., 4.]}, [1, 1, 0, 0])).repeat(200).shuffle(64).batch(16)
estimator.train(input_fn)
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
tf.feature_column.make_parse_example_spec([input_column]))
export_path = estimator.export_saved_model(
"/tmp/from_estimator/", serving_input_fn)
</code></pre>
<p>You can Load the Model using the code mentioned below:</p>
<pre><code>imported = tf.saved_model.load(export_path)
</code></pre>
<p>To <strong><code>Predict</code></strong> using your Model by passing the Input Features, you can use the below code:</p>
<pre><code>def predict(x):
    example = tf.train.Example()
    example.features.feature["x"].float_list.value.extend([x])
    return imported.signatures["predict"](examples=tf.constant([example.SerializeToString()]))
print(predict(1.5))
print(predict(3.5))
</code></pre>
<p>For more details, please refer <a href="https://www.tensorflow.org/guide/saved_model#savedmodels_from_estimators" rel="nofollow noreferrer">this link</a> in which Saved Models using TF Estimator are explained.</p>
|
python|tensorflow|tensorflow-serving|tensorflow2.0|tensorflow-estimator
| 0
|
2,697
| 57,281,781
|
Is there any method or function in Python to name the sides of a 3-dimensional matrix, like the pandas DataFrame method does in 2-D?
|
<p>I want to name/index the sides of the 3D matrix (Plane,row,column) in pythong code, like in 2D we can do it with the help od Panda methond Dataframe</p>
<blockquote>
<p>matrix = np.reshape((1, 2, 3, 4, 5, 6, 7, 8, 9), (3, 3))</p>
<blockquote>
<p>df = pd.DataFrame(matrix, columns=column_names, index=row_names)</p>
<blockquote>
<blockquote>
<p>print(df)</p>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<p>I want a result where, when we write A[0][0][0], we can also track what the first, second and third indices represent.</p>
<p>Like in 2D we get something like</p>
<pre><code> a b c
1 1 2 3
2 4 5 6
3 7 8 9
</code></pre>
|
<p>In pandas we can using multiple index </p>
<pre><code>matrix = np.reshape((1, 2, 3, 4, 5, 6, 7, 8), (2,2,2))
df = pd.concat([pd.DataFrame(x) for x in matrix], keys=np.arange(matrix.shape[0]))
df[0][0][0]
1
df[0][0][1]
3
</code></pre>
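<p>If you also want explicit names for the three sides (plane/row/column), one way (the label strings here are just examples) is to pass them when building the frame:</p>
<pre><code>plane_names = ['p0', 'p1']
row_names = ['r0', 'r1']
col_names = ['c0', 'c1']

df = pd.concat(
    [pd.DataFrame(x, index=row_names, columns=col_names) for x in matrix],
    keys=plane_names,
    names=['plane', 'row'],
)
df.columns.name = 'column'

print(df.loc[('p0', 'r0'), 'c0'])   # plane 0, row 0, column 0 -> 1
</code></pre>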
|
python|numpy|tensor
| 0
|
2,698
| 50,845,372
|
TensorFlow's map_fn only runs on CPU
|
<p>I'm running into a weird problem when trying to get TensorFlow's <code>map_fn</code> to run on my GPU. Here's a minimal broken example:</p>
<pre><code>import numpy as np
import tensorflow as tf
with tf.Session() as sess:
    with tf.device("/gpu:0"):
        def test_func(i):
            return i

        test_range = tf.constant(np.arange(5))
        test = sess.run(tf.map_fn(test_func, test_range, dtype=tf.float32))
        print(test)
</code></pre>
<p>This leads to the error:</p>
<blockquote>
<p>InvalidArgumentError: Cannot assign a device for operation
'map/TensorArray_1': Could not satisfy explicit device specification
'' because the node was colocated with a group of nodes that required
incompatible device '/device:GPU:0' Colocation Debug Info: Colocation
group had the following types and devices: TensorArrayScatterV3: CPU
TensorArrayGatherV3: GPU CPU Range: GPU CPU TensorArrayWriteV3: CPU
TensorArraySizeV3: GPU CPU TensorArrayReadV3: CPU Enter: GPU CPU
TensorArrayV3: CPU Const: GPU CPU </p>
<p>Colocation members and user-requested devices:<br>
map/TensorArrayStack/range/delta (Const)<br>
map/TensorArrayStack/range/start (Const) map/TensorArray_1
(TensorArrayV3) map/while/TensorArrayWrite/TensorArrayWriteV3/Enter
(Enter) /device:GPU:0 map/TensorArrayStack/TensorArraySizeV3
(TensorArraySizeV3) map/TensorArrayStack/range (Range)<br>
map/TensorArrayStack/TensorArrayGatherV3 (TensorArrayGatherV3)<br>
map/TensorArray (TensorArrayV3) map/while/TensorArrayReadV3/Enter
(Enter) /device:GPU:0 Const (Const) /device:GPU:0<br>
map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3
(TensorArrayScatterV3) /device:GPU:0 map/while/TensorArrayReadV3
(TensorArrayReadV3) /device:GPU:0<br>
map/while/TensorArrayWrite/TensorArrayWriteV3 (TensorArrayWriteV3)
/device:GPU:0</p>
<p>[[Node: map/TensorArray_1 = TensorArrayV3clear_after_read=true,
dtype=DT_FLOAT, dynamic_size=false, element_shape=,
identical_element_shapes=true,
tensor_array_name=""]]</p>
</blockquote>
<p>The code behaves as expected when run on my CPU, and simple operations such as:</p>
<pre><code>import numpy as np
import tensorflow as tf
with tf.Session() as sess:
    with tf.device("/gpu:0"):
        def test_func(i):
            return i

        test_range = tf.constant(np.arange(5))
        test = sess.run(tf.add(test_range, test_range))
        print(test)
</code></pre>
<p>work fine on my GPU. <a href="https://stackoverflow.com/questions/47045026/is-there-a-way-to-use-tensorflow-map-fn-on-gpu">This post</a> seems to describe a similar issue. Does anyone have any tips? The answer on that post implies that <code>map_fn</code> should work fine on the GPU. I'm running version 1.8.0 of TensorFlow on Python 3.6.4 on Arch Linux, with CUDA version 9.0 and cuDNN version 7.0 on a GeForce GTX 1050.</p>
<p>Thanks! </p>
|
<p>The error actually stems from the fact that <code>np.arange</code> produces <code>int32</code>s by default but you specified a <code>float32</code> return type. The error is gone with</p>
<pre><code>import numpy as np
import tensorflow as tf
with tf.Session() as sess:
    with tf.device("/gpu:0"):
        def test_func(i):
            return i

        test_range = tf.constant(np.arange(5, dtype=np.float32))
        test = sess.run(tf.map_fn(test_func, test_range, dtype=tf.float32))
        print(test)
</code></pre>
<p>I agree that the error message you got is rather confusing. You get the "real" error message by removing device placement:</p>
<pre><code>import numpy as np
import tensorflow as tf
with tf.Session() as sess:
    def test_func(i):
        return i

    test_range = tf.constant(np.arange(5))
    test = sess.run(tf.map_fn(test_func, test_range, dtype=tf.float32))
    print(test)
    # InvalidArgumentError (see above for traceback): TensorArray dtype is float but Op is trying to write dtype int32.
</code></pre>
|
python|python-3.x|tensorflow|gpu
| 2
|
2,699
| 50,813,683
|
How to reorder columns based on regex?
|
<p>Let's say I have a dataframe like this:</p>
<pre><code>df = pd.DataFrame({'foo':[1, 2], 'bar': [3, 4], 'xyz': [5, 6]})
bar foo xyz
0 3 1 5
1 4 2 6
</code></pre>
<p>I now want to put the column that contains <code>oo</code> at the first position (i.e. at 0th index); there is always only one column with this pattern. </p>
<p>I currently solve this using <code>filter</code> twice and a <code>concat</code>:</p>
<pre><code>pd.concat([df.filter(like='oo'), df.filter(regex='^((?!(oo)).)*$')], axis=1)
</code></pre>
<p>which gives the desired output:</p>
<pre><code> foo bar xyz
0 1 3 5
1 2 4 6
</code></pre>
<p>I am wondering whether there is a more efficient way of doing this.</p>
|
<p>Use list comprehensions only, join lists together and select by <code>subset</code>:</p>
<pre><code>a = [x for x in df.columns if 'oo' in x]
b = [x for x in df.columns if not 'oo' in x]
df = df[a + b]
print (df)
foo bar xyz
0 1 3 5
1 2 4 6
</code></pre>
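<p>Another compact option, relying on <code>sorted</code> being stable so the remaining columns keep their original order:</p>
<pre><code>df = df[sorted(df.columns, key=lambda c: 'oo' not in c)]
</code></pre>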
|
python|regex|pandas
| 4
|