| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, length 15 to 150) | question (string, length 37 to 64.2k) | answer (string, length 37 to 44.1k) | tags (string, length 5 to 106) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
5,600
| 48,620,057
|
Keras loss weights
|
<p>I have a model with two output layers: an age prediction layer and a gender prediction layer. I want to assign a different weight to each output layer's loss. I have the following line of code to do so.</p>
<pre><code>model.compile(loss=[losses.mean_squared_error,losses.categorical_crossentropy], optimizer='sgd',loss_weights=[1,10])
</code></pre>
<p>My question is: what is the effect of loss weights on the performance of a model? How can I configure the loss weights so that the model performs better on age prediction?</p>
|
<p>As stated in the book <a href="https://www.manning.com/books/deep-learning-with-python" rel="noreferrer">Deep Learning with Python</a> by François Chollet:</p>
<blockquote>
<p>The mean squared error (MSE) loss used for the age-regression task
typically takes a value around 3–5, whereas the crossentropy loss used
for the gender-classification task can be as low as 0.1. In such a
situation, to balance the contribution of the different losses, you
can assign a weight of 10 to the crossentropy loss and a weight of
0.25 to the MSE loss.</p>
</blockquote>
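<p>As a minimal sketch applying that recommendation to the compile call in the question (the weight order follows <code>loss=[mse, crossentropy]</code>):</p>
<pre><code>model.compile(loss=[losses.mean_squared_error, losses.categorical_crossentropy],
              optimizer='sgd',
              loss_weights=[0.25, 10])  # 0.25 for the MSE (age) loss, 10 for the crossentropy (gender) loss, per the quote above
</code></pre>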
|
python|tensorflow|machine-learning|keras
| 7
|
5,601
| 48,465,333
|
Pandas interpolation /MSE
|
<p>I know that there is an easy solution for my problem with pandas (hopefully), but I just don't know how to find it. Let's say I have two dataframes: </p>
<pre><code>df1 = pd.DataFrame({'x1': [1, 2, 3], 'y1': [1, 4, 9]})
df2 = pd.DataFrame({'x2': [1.5, 2, 3.1, 3.9], 'y2': [1, 3, 5.5, 8]})
</code></pre>
<p>I want to calculate the Mean Square Error and standard deviation between those 2 curves. I thought I could interpolate by joining those 2 dataframes with a single "X" axis containing both x1 and x2, and two "Y1" and "Y2" axes containing the y1 and y2 values plus interpolated values. </p>
<p>I can do it with loops, but I'm pretty sure there must be an easier way with Pandas. Do you have any ideas?
Thanks!</p>
|
<p>If I understand correctly, you need to bring them to a common grid (abscissa) in order to perform the subtraction and the statistics on it.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
df1 = pd.DataFrame({'x1': [1, 2, 3], 'y1': [1, 4, 9]})
df2 = pd.DataFrame({'x2': [1.5, 2, 3.1, 3.9], 'y2': [1, 3, 5.5, 8]})
df = pd.DataFrame(columns=['df1', 'df2'], index=np.linspace(0, 4, 100))
df['df1'] = np.interp(df.index, df1.x1, df1.y1)
df['df2'] = np.interp(df.index, df2.x2, df2.y2)
print("MSE = {}".format(np.sqrt((df.df1.values**2 - df.df2.values**2).mean())))
print("STD = {}".format((df.df1.values - df.df2).std()))
df.plot()
plt.show()
</code></pre>
<p>That would produce the output</p>
<pre><code>MSE = 4.051003878995869
STD = 1.1595280634334968
</code></pre>
<p>And the plot of the values<a href="https://i.stack.imgur.com/x8vC1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x8vC1.png" alt="enter image description here"></a></p>
<p>Note that I used a sampling of 100 points which should be enough for your case.</p>
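<p>Note also that the quantity printed as <code>MSE</code> above is the square root of the mean of <code>y1**2 - y2**2</code>; if you want the textbook MSE/RMSE of the difference on the common grid, a small sketch (its values will differ from the output shown above):</p>
<pre><code>mse = ((df.df1 - df.df2) ** 2).mean()   # mean squared error of the difference
rmse = np.sqrt(mse)                     # root mean squared error
</code></pre>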
|
python|pandas
| 1
|
5,602
| 48,843,357
|
python pandas - select particular values after groupby
|
<p>I have a groupby table:</p>
<pre><code>df.groupby(['Age', 'Movie']).mean()
User Raitings
Age Movie
1 1 4.666667 7.666667
2 4.666667 8.000000
3 2.000000 7.500000
4 2.000000 5.500000
5 3.000000 7.000000
18 1 3.000000 7.500000
2 3.000000 8.000000
3 3.000000 8.500000
25 1 8.000000 7.250000
2 8.000000 7.500000
3 5.500000 8.500000
4 5.000000 7.000000
45 1 9.000000 7.500000
2 9.000000 7.500000
3 11.000000 7.000000
4 11.000000 6.000000
60 1 8.000000 7.000000
2 8.000000 9.000000
3 8.000000 7.000000
</code></pre>
<p>Please help with a function which takes an integer (Age) and returns the Movie with the MIN Raitings in that Age group.
For example, def(1) should return 4 (the min Raitings in group Age(1) is 5.5, and Movies(5.5) = 4).</p>
<p>I can get min Raiting:</p>
<pre><code>df['Raitings'].min()
</code></pre>
<p>But I don't know how to get the rating within a particular group (Age).</p>
|
<p>This gets all of them in one go.</p>
<pre><code>df.groupby('Age').Raitings.idxmin().str[-1]
Age
1 4
18 1
25 4
45 4
60 1
Name: Raitings, dtype: int64
</code></pre>
<p>If you want a function, I'd use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html" rel="nofollow noreferrer"><strong><code>pd.DataFrame.xs</code></strong></a> (xs is for cross section).<br>
By default, <code>xs</code> grabs from the first level of the index and subsequently drops that level. This conveniently leaves exactly the level from which <code>idxmin</code> will draw the value it hands us.</p>
<pre><code>def f(df, age):
return df.xs(age).Raitings.idxmin()
f(df, 1)
4
</code></pre>
<hr>
<p><strong>Setup</strong><br>
Useful for those who try to parse this stuff. </p>
<pre><code>txt = """\
Age Movie User Raitings
1.0 1 4.666667 7.666667
2 4.666667 8.000000
3 2.000000 7.500000
4 2.000000 5.500000
5 3.000000 7.000000
18.0 1 3.000000 7.500000
2 3.000000 8.000000
3 3.000000 8.500000
25.0 1 8.000000 7.250000
2 8.000000 7.500000
3 5.500000 8.500000
4 5.000000 7.000000
45.0 1 9.000000 7.500000
2 9.000000 7.500000
3 11.000000 7.000000
4 11.000000 6.000000
60.0 1 8.000000 7.000000
2 8.000000 9.000000"""
df = pd.read_fwf(pd.io.common.StringIO(txt))
df = df.ffill(downcast='infer').set_index(['Age', 'Movie'])
</code></pre>
|
python|pandas|pandas-groupby|multi-index
| 4
|
5,603
| 48,454,312
|
Python (Pandas) Error IndexError: single positional indexer is out-of-bounds
|
<p>Here is the error I can't seem to squash. Dropping my count down to one less than my actual rows fixes it, but that means it never reads the last row. The error comes from attempting to parse data from a .csv I have saved in the same directory.</p>
<p>Here is the code that seems to be causing the issue:</p>
<pre><code> margin1 = datetime.timedelta(days = 1)
margin3 = datetime.timedelta(days = 3)
margin7 = datetime.timedelta(days = 7)
df = pd.read_csv('gameDB.csv')
a = df.values
rows=len(df.index)
while (x <= rows):
print (rows)
print (x)
input("Press Enter to continue...")
csvName = str((df.iloc[x,0]))
csvRel = str((df.iloc[x,1]))
csvCal = str((df.iloc[x,2]))
from datetime import datetime
today = datetime.strptime(twiday, '%Y-%m-%d').date()
compDate = datetime.strptime(csvRel, '%Y-%m-%d').date()
print (csvName + ' ' + csvRel + ' ' + csvCal)
try:
if (today+margin7 == compDate):
#tweet = (csvName + ' releases in 7 days. Click here to add to calendar ' + csvCal)
#api.update_status(tweet)
time.sleep(10)
elif (today+margin3 == compDate):
#tweet = (csvName + ' releases in 3 days. Click here to add to calendar ' + csvCal)
#api.update_status(tweet)
time.sleep(10)
elif (today+margin1 == compDate):
#tweet = (csvName + ' releases in tomorrow. Click here to add to calendar ' + csvCal)
#api.update_status(tweet)
time.sleep(10)
elif (today == compDate):
#tweet = (csvName + ' is now released.')
#api.update_status(tweet)
time.sleep(10)
except:
continue
x += 1
</code></pre>
<p>And Here is the error i get</p>
<pre><code>Traceback (most recent call last):
File ".\gameRelease.py", line 306, in <module>
NintendoSwitch()
File ".\gameRelease.py", line 277, in NintendoSwitch
main(system,data,color,calID)
File ".\gameRelease.py", line 270, in main
twitUpdate(tDay)
File ".\gameRelease.py", line 97, in twitUpdate
csvName = str((df.iloc[x,0]))
File "C:\Users\UmbraTytan\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\indexing.py", line 1367, in __getitem__
return self._getitem_tuple(key)
File "C:\Users\UmbraTytan\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\indexing.py", line 1737, in _getitem_tuple
self._has_valid_tuple(tup)
File "C:\Users\UmbraTytan\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\indexing.py", line 204, in _has_valid_tuple
if not self._has_valid_type(k, i):
File "C:\Users\UmbraTytan\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\indexing.py", line 1672, in _has_valid_type
return self._is_valid_integer(key, axis)
File "C:\Users\UmbraTytan\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pandas\core\indexing.py", line 1713, in _is_valid_integer
raise IndexError("single positional indexer is out-of-bounds")
IndexError: single positional indexer is out-of-bounds
</code></pre>
|
<p>I forgot to add the header row when creating the csv on application start; adding it resolved all of it.</p>
<pre><code> writer.writeheader()
</code></pre>
<p>That's all it needed.</p>
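<p>For context, a minimal sketch of what that looks like when the file is first written (the field names here are invented for illustration):</p>
<pre><code>import csv

with open('gameDB.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['Name', 'ReleaseDate', 'CalendarLink'])
    writer.writeheader()  # without this, read_csv treats the first data row as the column names,
                          # so the DataFrame comes up one row short and iloc runs out of bounds
    writer.writerow({'Name': 'Some Game', 'ReleaseDate': '2018-03-01', 'CalendarLink': 'http://example.com'})
</code></pre>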
|
python|python-3.x|pandas|runtime-error
| 0
|
5,604
| 51,845,480
|
Error when using tf.get_variable as alternative for tf.Variable in Tensorflow
|
<p>Hi, I'm new to neural networks and I'm currently working with TensorFlow.
First I did the MNIST tutorial, which worked quite well. Now I wanted to go deeper by building my own network for Cifar10 in Google Colab. For this purpose I wrote the following code:</p>
<pre><code>def conv2d(input, size, inputDim, outputCount):
with tf.variable_scope("conv2d"):
## -> This area causes problems <- ##
##########variant1
weight = tf.Variable(tf.truncated_normal([size, size, inputDim, outputCount], stddev=0.1),name="weight")
bias = tf.Variable( tf.constant(0.1, shape=[outputCount]),name="bias")
##########variant2
weight = tf.get_variable("weight", tf.truncated_normal([size, size, inputDim, outputCount], stddev=0.1))
bias = tf.get_variable("bias", tf.constant(0.1, shape=[outputCount]))
##################
conv = tf.nn.relu(tf.nn.conv2d(input, weight, strides=[1, 1, 1, 1], padding='SAME') + bias)
return conv
def maxPool(conv2d):....
def fullyConnect(input, inputSize, outputCount, relu):
with tf.variable_scope("fullyConnect"):
## -> This area causes problems <- ##
##########variant1
weight = tf.Variable( tf.truncated_normal([inputSize, outputCount], stddev=0.1),name="weight")
bias = tf.Variable( tf.constant(0.1, shape=[outputCount]),name="bias")
##########variant2
weight = tf.get_variable("weight", tf.truncated_normal([inputSize, outputCount], stddev=0.1))
bias = tf.get_variable("bias", tf.constant(0.1, shape=[outputCount]))
##################
fullyIn = tf.reshape(input, [-1, inputSize])
fullyCon = fullyIn
if relu:
fullyCon = tf.nn.relu(tf.matmul(fullyIn, weight) + bias)
return fullyCon
#Model Def.
def getVGG16A(grafic,width,height,dim):
with tf.name_scope("VGG16A"):
img = tf.reshape(grafic, [-1,width,height,dim])
with tf.name_scope("Layer1"):
with tf.variable_scope("Layer1"):
with tf.variable_scope("conv1"):
l1_c = conv2d(img,3, dim, 64)
with tf.variable_scope("mp1"):
l1_mp = maxPool(l1_c) #32 > 16
with tf.name_scope("Layer2"):
with tf.variable_scope("Layer2"):
with tf.variable_scope("conv1"):
l2_c = conv2d(l1_mp,3, 64, 128)
with tf.variable_scope("mp1"):
l2_mp = maxPool(l2_c) #16 > 8
with tf.name_scope("Layer6"):
with tf.variable_scope("Layer6"):
with tf.variable_scope("fully1"):
L6_fc1 = fullyConnect(l2_mp, 8*8*128 , 1024, True)
with tf.variable_scope("fully2"):
L6_fc2 = fullyConnect(L6_fc1, 1024, 1024, True)
keep_prob = tf.placeholder(tf.float32)
drop = tf.nn.dropout(L6_fc2, keep_prob)
with tf.variable_scope("fully3"):
L6_fc3 = fullyConnect(drop,1024, 3, False)
return L6_fc3, keep_prob
x = tf.placeholder(tf.float32, [None, 3072]) #input
y_ = tf.placeholder(tf.float32, [None, 3]) #output
# Build the graph for the deep net
y_conv, keep_prob = getVGG16A(x,32,32,3) #create Model
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-3).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch in getBatchData(prep_filter_dataBatch1,2): #a self-written method for custom batch return
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.8})
print('test accuracy %g' % accuracy.eval(feed_dict={
x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
</code></pre>
<p>For the definition of the TensorFlow variables I first used variant1 (<code>tf.Variable</code>).
This caused an overflow of the graphics memory after repeated execution.
Then I used variant2 (<code>tf.get_variable</code>). If I have understood the documentation correctly, this should reuse already existing variables if they exist. </p>
<p>But as soon as I do this I get the following error message:</p>
<pre><code>TypeError: Tensor objects are not iterable when eager execution is not enabled. To iterate over this tensor use tf.map_fn.
</code></pre>
<p>I've been looking the whole day, but I haven't found an explanation for this.</p>
<p>Now I hope that there is someone here who can explain to me why this is not possible, or where I can find further information. The error message is getting me nowhere. I don't want a solution because I want to and have to understand this, because I want to write my bachelor thesis in the field of CNN.</p>
<p>Why can I use <code>tf.Variable</code> but not <code>tf.get_variable</code>, which should do the same?</p>
<p>Thanks for the help,
best regards, Pascal :)</p>
|
<p>I found my mistake.
I forgot the keyword <code>initializer</code>.</p>
<p>the correct line looks like this:</p>
<pre><code>weight = tf.get_variable("weight",initializer=tf.truncated_normal([size, size, inputDim, outputCount], stddev=anpassung))
</code></pre>
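<p>Applied to both variables in the question's helpers (keeping the question's <code>stddev=0.1</code>), the corrected variant2 would look like this sketch:</p>
<pre><code>##########variant2, corrected: pass the initial value via the initializer keyword
weight = tf.get_variable("weight", initializer=tf.truncated_normal([size, size, inputDim, outputCount], stddev=0.1))
bias = tf.get_variable("bias", initializer=tf.constant(0.1, shape=[outputCount]))
</code></pre>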
|
python|tensorflow|conv-neural-network|google-colaboratory
| 1
|
5,605
| 41,971,322
|
Pandas: How to select the minimum value of a series of rows grouped by a key
|
<p>Suppose I have the following dataframe:</p>
<pre><code>Key | Amount | Term | Other | Other_2
----+--------+--------+-------+--------
A | 9999 | Short | ABC | 100
A | 261 | Short | ABC | 100
B | 281 | Long | CDE | 200
C | 140 | Long | EFG | 300
C | 9999 | Long | EFG | 300
</code></pre>
<p>The desired output should be:</p>
<pre><code>Key | Amount | Term | Other | Other_2
----+--------+--------+-------+--------
A | 261 | Short | ABC | 100
B | 281 | Long | CDE | 200
C | 140 | Long | EFG | 300
</code></pre>
<p>That is, to take the min of the "Amount" column while retaining the rest of the values in the row with the min value.</p>
<p>I think this can be done with a groupby() but I don't visualize how.</p>
<p><strong>EDIT</strong>: I removed the commas, my data is numeric</p>
|
<p>To get the min value within each key, you can use <code>groupby.apply</code> to create a boolean Series where the min value takes true and other values take false; then you can use the boolean series for subsetting:</p>
<pre><code>df[df.Amount.groupby(df.Key).apply(lambda x: x == x.min())]
# Key Amount Term Other Other_2
#1 A 261 Short ABC 100
#2 B 281 Long CDE 200
#3 C 140 Long EFG 300
</code></pre>
<hr>
<p>Another option you can use <code>nsmallest()</code> method on each sub group, here you can take the smallest row ordered by <code>Amount</code>:</p>
<pre><code>df.groupby("Key", group_keys=False).apply(lambda g: g.nsmallest(1, "Amount"))
# Key Amount Term Other Other_2
#1 A 261 Short ABC 100
#2 B 281 Long CDE 200
#3 C 140 Long EFG 300
</code></pre>
|
python|pandas|data-cleaning
| 1
|
5,606
| 41,864,024
|
Why is pyplot.imshow() changing color channels in arrays
|
<p>For the life of me I have been struggling with this for the last 24 hours. I am doing a neural net--storing images in a 4D array. The first index of the array is basically the "sample", aka sample 1, 2, 3, etc.; dimensions 2, 3, 4 are 128x128x3 RGB pictures. Now in the process of this, I take the input pictures (which are not 128x128) and rescale them. But when I picked a sample, it had all the color channels mixed up. So I tried to figure out where the problem was.</p>
<p>If I just resize the picture and assign the number array (128x128x3) to a variable, everything is 'normal'. If I assign the 'sub-array' to the larger 4-d array, the color channels get mixed up. However, I can recover the original picture by subtracting the array slice for the sample from 255. </p>
<p>Here is a code snippet with the original (1), resized (2), color-channel mix (3), recovered (4).</p>
<p>I know that OpenCV and pyplot.imshow() use different color channels, but the fact that storing the picture in the larger array appears to be what causes the switch-- that is what confuses me. Some guidance would be appreciated.</p>
<p>Also, I can "subtract" the arrays (img2 - train[0]) and get an array of all zeros. That part REALLY is confusing. They are the same numbers and yet imshow() gives 2 completely different images. </p>
<pre><code>import numpy as np
import cv2
import matplotlib.pyplot as plt
img = cv2.imread(<path to your pic>)
img2 = cv2.resize(img, (128, 128))
train = np.ndarray(shape=(1,128, 128,3))
plt.subplot(1,4,1)
plt.imshow(img)
plt.subplot(1,4,2)
plt.imshow(img2)
plt.subplot(1,4,3)
train[0] = img2
plt.imshow(train[0])
plt.subplot(1,4,4)
plt.imshow(255-train[0])
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/lCac4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lCac4.png" alt="enter image description here"></a></p>
|
<p><code>OpenCV</code> stores color images using the <code>BGR</code> convention while
<code>matplotlib</code> uses the <code>RGB</code> convention.</p>
<p>You should simply flip the channels order when displaying the images using <code>pyplot</code>:</p>
<pre><code> plt.imshow(img[:,:,[2,1,0]])
 plt.imshow(train[0][:,:,[2,1,0]])
...
</code></pre>
<p>Alternatively you can use <code>cv2.imshow</code></p>
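<p>As a side note (not part of the original answer), an equivalent way to flip the channel order, if you prefer an explicit call over fancy indexing, is <code>cv2.cvtColor</code>, shown here on the question's resized image <code>img2</code>:</p>
<pre><code>rgb_img = cv2.cvtColor(img2, cv2.COLOR_BGR2RGB)  # convert OpenCV's BGR layout to matplotlib's RGB
plt.imshow(rgb_img)
</code></pre>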
|
opencv|numpy|image-processing|matplotlib
| 8
|
5,607
| 64,248,962
|
How to calculate number of rows per group in pandas dataframe and add it to original data
|
<p>I have dataframe df like below</p>
<pre><code>ID COMMODITY_CODE DELIVERY_TYPE DAY Window_start case_qty deliveries.
6042.0 SCGR Live 1.0 15:00 15756.75 7.75
6042.0 SCGR Live 1.0 18:00 15787.75 5.75
6042.0 SCGR Live 1.0 21:00 10989.75 4.75
6042.0 SCGR Live 2.0 15:00 21025.25 9.00
6042.0 SCGR Live 2.0 18:00 16041.75 5.75
</code></pre>
<p><strong>I want the below output</strong>, where I group by ID, COMMODITY_CODE, DELIVERY_TYPE, DAY and calculate window_count like below:</p>
<pre><code>ID COMMODITY_CODE DELIVERY_TYPE DAY Window_start window_count case_qty deliveries
6042.0 SCGR Live 1.0 15:00 3 15756.75 7.75
6042.0 SCGR Live 1.0 18:00 3 15787.75 5.75
6042.0 SCGR Live 1.0 21:00 3 10989.75 4.75
6042.0 SCGR Live 2.0 15:00 2 21025.25 9.00
6042.0 SCGR Live 2.0 18:00 2 16041.75 5.75
</code></pre>
<p>I tried the below code using agg.</p>
<pre><code>df = df.groupby(['ID','CHAMBER_TYPE','COMMODITY_CODE','DELIVERY_TYPE','DAY'],as_index=False)\
.agg(window_count=("DAY", "count"))
</code></pre>
<p>Even though it calculates the number of windows per ID, COMMODITY_CODE, DELIVERY_TYPE, DAY group, it removes the older columns, i.e. Window_start, case_qty, deliveries.</p>
<p>I.e. I get the below output, which is not desired:</p>
<pre><code>ID COMMODITY_CODE DELIVERY_TYPE DAY window_count
6042.0 SCGR Live 1.0 3
6042.0 SCGR Live 1.0 3
6042.0 SCGR Live 1.0 3
6042.0 SCGR Live 2.0 2
6042.0 SCGR Live 2.0 2
</code></pre>
|
<p>You are looking for a <code>transform</code>:</p>
<pre><code>df['window_count'] = df.groupby(['ID','CHAMBER_TYPE','COMMODITY_CODE','DELIVERY_TYPE','DAY'])['ID'].transform('size')
</code></pre>
<p>By the way, there is no <code>'CHAMBER_TYPE'</code> column in your sample data.</p>
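<p>A runnable sketch on the sample data from the question (dropping <code>'CHAMBER_TYPE'</code>, which is absent there):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    'ID': [6042.0] * 5,
    'COMMODITY_CODE': ['SCGR'] * 5,
    'DELIVERY_TYPE': ['Live'] * 5,
    'DAY': [1.0, 1.0, 1.0, 2.0, 2.0],
    'Window_start': ['15:00', '18:00', '21:00', '15:00', '18:00'],
    'case_qty': [15756.75, 15787.75, 10989.75, 21025.25, 16041.75],
    'deliveries': [7.75, 5.75, 4.75, 9.00, 5.75],
})

df['window_count'] = df.groupby(['ID', 'COMMODITY_CODE', 'DELIVERY_TYPE', 'DAY'])['ID'].transform('size')
# window_count comes out as 3, 3, 3, 2, 2 -- matching the desired output in the question --
# and the original columns are kept
</code></pre>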
|
python|pandas|numpy|pandas-groupby|aggregate-functions
| 0
|
5,608
| 64,370,308
|
Create pandas MultiIndex DataFrame from multi dimensional np arrays
|
<p>I am trying to insert 72 matrices with dimensions (24,12) from an np array into a preexisting MultiIndex DataFrame indexed according to an np.array with dimension (72,2). I don't care to index the content of the (24,12) matrices; I just need to index the 72 matrices, even as objects, for rearrangement purposes. It is like a map to reorder according to some conditions and then unstack the columns.</p>
<p>what I have tried so far is:</p>
<pre><code>cosphi.shape
</code></pre>
<p>(72, 2)</p>
<pre><code>MFPAD_RCR.shape
</code></pre>
<p>(72, 24, 12)</p>
<pre><code>df = pd.MultiIndex.from_arrays(cosphi.T, names=("costheta","phi"))
</code></pre>
<p>I successfully create a DataFrame of 2 columns with 72 index rows. Then I try to add the 72 matrices:</p>
<pre><code>df1 = pd.DataFrame({'MFPAD':MFPAD_RCR},index=df)
</code></pre>
<p>or possibly</p>
<pre><code>df1 = pd.DataFrame({'MFPAD':MFPAD_RCR.astype(object)},index=df)
</code></pre>
<p>I get the error</p>
<pre><code>Exception: Data must be 1-dimensional.
</code></pre>
<p>Any idea?</p>
|
<p>After a bit of careful research, I found that my question <a href="https://stackoverflow.com/questions/48482256/what-is-the-pandas-panel-deprecation-warning-actually-recommending">has already been answered here</a> (the right answer) and <a href="https://stackoverflow.com/questions/43427189/3-dimensional-numpy-array-to-multiindex-pandas-dataframe">here</a> (a solution using a deprecated function).</p>
<p>For my specific question, the answer is something like:</p>
<pre><code>data = MFPAD_RCR.reshape(72, 288).T
df = pd.DataFrame(
data=data,
index=pd.MultiIndex.from_product([phiM, cosM],names=["phi","cos(theta)"]),
columns=['item {}'.format(i) for i in range(72)]
)
</code></pre>
<p>Note that the 3D np array has to be reshaped with the second dimension equal to the product of the major and the minor indexes.</p>
<pre><code>df1 = df.T
</code></pre>
<p>I want to be able to sort my items (aka matrices) according to extra indexes coming from cosphi:</p>
<pre><code>cosn=np.array([col[0] for col in cosphi]); #list
phin=np.array([col[1] for col in cosphi]); #list
</code></pre>
<p>Note: the length of the new indexes has to be the same as the number of items (matrices) = 72.</p>
<pre><code>df1.set_index(cosn, "cos_ph", append=True, inplace=True)
df1.set_index(phin, "phi_ph", append=True, inplace=True)
</code></pre>
<p>And after this one can sort</p>
<pre><code>df1.sort_index(level=1, inplace=True, kind="mergesort")
</code></pre>
<p>and reshape</p>
<pre><code>outarray=(df1.T).values.reshape(24,12,72).transpose(2, 0, 1)
</code></pre>
<p>Any suggestion to make the code faster / prettier is more than welcome!</p>
|
pandas|dataframe|multi-index
| 0
|
5,609
| 47,835,879
|
What is different between RNN output and rule based output?
|
<p>I am new to machine learning and I have a question. I am <a href="https://towardsdatascience.com/lstm-by-example-using-tensorflow-feb0c1968537" rel="nofollow noreferrer">following this tutorial</a>, and I read about LSTM and RNN. I used the code provided by the tutorial and ran it; it completed the training, and now I gave it some strings for testing:</p>
<p><a href="https://raw.githubusercontent.com/roatienza/Deep-Learning-Experiments/master/Experiments/Tensorflow/RNN/belling_the_cat.txt" rel="nofollow noreferrer">Training data is this :</a></p>
<p>output is :</p>
<pre><code>Iter= 20000, Average Loss= 0.531466, Average Accuracy= 84.60%
['the', 'sly', 'and'] - [treacherous] vs [treacherous]
Optimization Finished!
Elapsed time: 12.159853319327036 min
Run on command line.
tensorboard --logdir=/tmp/tensorflow/rnn_words
Point your web browser to: http://localhost:6006/
3 words: ,hello wow and
Word not in dictionary
3 words: mouse,mouse,mouse
3 words: mouse
3 words: mouse mouse mouse
mouse mouse mouse very well , but who is to bell the cat approaches the until will at one another and take mouse a receive some signal of her approach , we he easily escape
3 words: 3 words: had a general
had a general to proposal to make round the neck will all agree , said he easily at and enemy approaches to consider what common the case . you will all agree , said he
3 words: mouse mouse mouse
mouse mouse mouse very well , but who is to bell the cat approaches the until will at one another and take mouse a receive some signal of her approach , we he easily escape
3 words: what was cat
what was cat up and said he is all very well , but who is to bell the cat approaches the until will at one another and take mouse a receive some signal of her
3 words: mouse fear cat
Word not in dictionary
3 words: mouse tell cat
Word not in dictionary
mo3 words: mouse said cat
Word not in dictionary
3 words: mouse fear fear
Word not in dictionary
3 words: mouse ring bell
Word not in dictionary
m3 words: mouse ring ring
Word not in dictionary
3 words: mouse bell bell
mouse bell bell and general to make round the neck will all agree , said he easily at and enemy approaches to consider what common the case . you will all agree , said he
3 words: mouse and bell
mouse and bell this means we should always , but looked is young always , but looked is young always , but looked is young always , but looked is young always , but looked
3 words: mouse was bell
mouse was bell and said he is all very well , but who is to bell the cat approaches the until will at one another and take mouse a receive some signal of her approach
3 words:
</code></pre>
<p>Now what I am not getting: when I give three words, it gives a result that looks like something we could easily achieve via regular expressions or rule-based code using if-else (if the input words are in the file, then fetch the previous or next sentences). What is special about this output? How is it different? Please explain. </p>
<p>Also, sometimes it says "word not in dictionary", so if I have to give only words which are in the training file, then it is as if it is matching the input words against the training data and fetching some result from the file. We could do the same thing with if-else, or in pure programming without any module, so how is it different?</p>
|
<p>Your training dataset only has ~180 words, and is achieving an 84.6% (training) accuracy, so it is overfitting quite a bit. Essentially, the model is simply predicting the next most likely word based on the training data.</p>
<p>Usually language models are trained on much larger datasets, such as PTB or the 1B word benchmark. PTB is a small dataset, with 100,000 words, and the 1B word benchmark has 1 billion words.</p>
<p>RNN models have a limited vocabulary to allow words or characters to be encoded. The vocabulary size depends on the model. Most word models that train on PTB have a vocabulary size of 10,000, which is enough for most common words. </p>
|
machine-learning|tensorflow|deep-learning|artificial-intelligence|lstm
| 0
|
5,610
| 49,135,821
|
Pandas: How to add number of the row within grouped rows
|
<p>so I have DataFrame:</p>
<pre><code>>>> df2
text
0 0 a
0 1 b
0 2 c
0 3 d
1 4 e
1 5 f
1 6 g
2 7 h
2 8 1
</code></pre>
<p>How do I create another column which contains a counter for each row within a level=0 index?</p>
<p>I have tried the following code (I need to get the df['counter'] column):</p>
<pre><code>current_index = ''
for index, row in df.iterrows():
if index[0] != current_index:
current_index = index[0]
df[(df.index == current_index)]['counter'] = np.arange(len(df[(df.index == current_index)].index))
</code></pre>
<p>and following code as well:</p>
<pre><code>df2 = pd.DataFrame()
for group, df in df1.groupby('level_0_column'):
df0 = df0.sort_values(by=['level_1_column'])
df['counter'] = list(df.reset_index().index.values + 1)
df2 = df2.append(df0)
</code></pre>
<p>I have around 650K rows in the DataFrame... it goes into an infinite loop. Please advise. </p>
|
<p>I believe you're looking for <code>groupby</code> along the 0<sup>th</sup> column index + <code>cumcount</code>: </p>
<pre><code>df['counter'] = df.groupby(level=0).cumcount() + 1
df
text counter
0 0 a 1
1 b 2
2 c 3
3 d 4
1 4 e 1
5 f 2
6 g 3
2 7 h 1
8 1 2
</code></pre>
|
python|pandas|dataframe|group-by|pandas-groupby
| 2
|
5,611
| 49,088,447
|
Input pipeline using TensorFlow Dataset API and Pandas?
|
<p>I am trying to create a TensorFlow Dataset that takes in a list of path names for CSV files and creates batches of training data. First I create a parse function which uses Pandas to read the first n rows. I pass this function as the argument to the 'map' method of the Dataset.</p>
<pre><code>def _get_data_for_dataset(file_name,rows=100):
print(file_name.decode())
df_input=pd.read_csv(os.path.join(folder_name, file_name.decode()),
usecols =['Wind_MWh','Actual_Load_MWh'],nrows = rows)
X_data = df_input.as_matrix()
X_data.astype('float32', copy=False)
return X_data
dataset = tf.data.Dataset.from_tensor_slices(file_names)
dataset = dataset.map(lambda file_name: tf.py_func(_get_data_for_dataset,[file_name,100], tf.float64))
dataset= dataset.batch(2) #Create batches
iter = dataset.make_one_shot_iterator()
get_batch = iter.get_next()
with tf.Session() as sess:
print(sess.run(get_batch).shape)
</code></pre>
<p>The above code works but instead of producing a dataset with shape (200,2) it produces a dataset with shape (2, 100, 2). Please help.</p>
|
<p>I finally got the answer from <a href="https://stackoverflow.com/questions/49116343/dataset-api-flat-map-method-producing-error-for-same-code-which-works-with-ma/49140725#49140725">Dataset API 'flat_map' method producing error for same code which works with 'map' method</a>.
I am posting the full code in case it may help others who want to use Pandas and the Dataset API together. The key change is that <code>flat_map</code> with <code>Dataset.from_tensor_slices</code> unrolls each per-file array into individual rows, so <code>batch()</code> then groups rows (giving shape (batch_size, 2)) rather than whole files.</p>
<pre><code>folder_name = './data/power_data/'
file_names = os.listdir(folder_name)
def _get_data_for_dataset(file_name):
df_input=pd.read_csv(os.path.join(folder_name, file_name.decode()),
usecols=['Wind_MWh', 'Actual_Load_MWh'])
X_data = df_input.as_matrix()
return X_data.astype('float32', copy=False)
dataset = tf.data.Dataset.from_tensor_slices(file_names)
# Use `Dataset.from_tensor_slices()` to make a `Dataset` from the output of
# the `tf.py_func()` op.
dataset = dataset.flat_map(lambda file_name: tf.data.Dataset.from_tensor_slices(
tf.py_func(_get_data_for_dataset, [file_name], tf.float32)))
dataset = dataset.batch(100)
iter = dataset.make_one_shot_iterator()
get_batch = iter.get_next()
with tf.Session() as sess:
print(sess.run(get_batch))
</code></pre>
|
pandas|tensorflow|tensorflow-datasets
| 1
|
5,612
| 49,107,912
|
How to use for loop on pandas.core.groupby.DataFrameGroupBy in reverse order?
|
<p>I have a <code>pandas.core.groupby.DataFrameGroupBy</code> object, iter_gb, which contains customer Itineraries. The sample data looks like the following. After Itinerary there are 6 more columns; those columns are not important. </p>
<pre><code>Customer_id YEAR Itinerary A1, B1, ...
38915672 2015 B12345
38915672 2012 B12345
38915672 2012 B25431
38915672 2012 B25431
38915672 2012 B25431
38915672 2012 B25431
38915672 2012 B25431
38915672 2012 B36789
38915672 2012 B36789
38915672 2012 B36789
38915672 2012 B36789
38915672 2012 B36789
38915672 2012 B86451
38915672 2012 B86451
38915672 2012 B86451
38915672 2012 B86451
38915672 2011 B86451
</code></pre>
<p>I would like to read this using a <code>for loop</code> in reverse order.</p>
<p>For example, the loop should start from the last row and store all the rows of <code>B86451</code> in a data frame, then
<code>B36789</code>, and so on.</p>
<p>How can I do that in Python 3.x? </p>
|
<p>It seems you need to first reverse the <code>DataFrame</code> and then loop:</p>
<pre><code>for i, x in df[::-1].groupby('Itinerary', sort=False):
print (x)
Customer_id YEAR Itinerary
16 38915672 2011 B86451
15 38915672 2012 B86451
14 38915672 2012 B86451
13 38915672 2012 B86451
12 38915672 2012 B86451
Customer_id YEAR Itinerary
11 38915672 2012 B36789
10 38915672 2012 B36789
9 38915672 2012 B36789
8 38915672 2012 B36789
7 38915672 2012 B36789
Customer_id YEAR Itinerary
6 38915672 2012 B25431
5 38915672 2012 B25431
4 38915672 2012 B25431
3 38915672 2012 B25431
2 38915672 2012 B25431
Customer_id YEAR Itinerary
1 38915672 2012 B12345
0 38915672 2015 B12345
</code></pre>
|
python|pandas
| 2
|
5,613
| 49,057,107
|
how to add headers to a numpy.ndarray
|
<p>I have a numpy.ndarray with dimensions 23411 x 3.
I would like to add headers to the top of the matrix called "summary", "age", and "label", in that order. </p>
<p>In:</p>
<pre><code>matrix.shape
</code></pre>
<p>Out:</p>
<pre><code>(23411L, 3L)
</code></pre>
<p>In:</p>
<pre><code>type(matrix)
</code></pre>
<p>Out:</p>
<pre><code>numpy.ndarray
</code></pre>
<p>I tried using numpy.recarray but it did not work. Any suggestions?</p>
|
<p>You can achieve this with <a href="https://pandas.pydata.org/" rel="nofollow noreferrer">pandas</a>.</p>
<pre><code>import pandas as pd
matrix = [...] # your ndarray
matrix = pd.DataFrame(data=matrix, columns=["summary", "age", "label"])
</code></pre>
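<p>As a quick usage sketch, once the headers are in place you can address columns by name:</p>
<pre><code>print(matrix.shape)          # still (23411, 3)
print(matrix['age'].head())  # access a column via its new header
</code></pre>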
|
python|numpy
| 4
|
5,614
| 49,308,330
|
get the same order after applying group by on data frame
|
<p>I have a situation where, before applying a group by on a data frame, the order is ascending; however, after applying it, the order of the data frame changes, and it changes internally.</p>
<p><strong>Here is the code sample:</strong>
<strong>DataFrame:</strong></p>
<pre><code>final_day_wise = daily_sales_data.loc [ : ,
    ['Placement#',"Placement# Name" , "Date" , "Delivered Impressions" , "Clicks" , "CTR" , "Conversion" ,
     "eCPA" , "Spend" ] ]
</code></pre>
<p>The order of the values is based on the "Placement#" column, which is in ascending order.</p>
<p><strong>Output:</strong> while applying the group by, it actually changes the order to reverse or some other order.</p>
<pre><code>startline = len ( placement_sales_data ) + len ( final_adsize ) + 18
for placement , placement_df in final_day_wise.groupby ( 'Placement# Name' ):
writing_daily_data = placement_df.to_excel ( self.config.writer , sheet_name = "Standard banner({})".format (
self.config.IO_ID ) ,encoding = 'UTF-8' ,startcol = 1 ,
startrow = startline , columns = ["Placement# Name"],index = False ,
header = False , merge_cells = False)
writing_daily_data_new = placement_df.to_excel ( self.config.writer , sheet_name = "Standard banner({})".format (
self.config.IO_ID ) , startcol = 1 , startrow = startline+1 ,columns = ["Date","Delivered Impressions","Clicks","CTR",
"Conversion","eCPA","Spend"], index = False , header = True , merge_cells = False)
startline += len(placement_df) + 5
</code></pre>
<p>So is there any function or anything that can keep the order the same?</p>
<p><strong>Input. Daily Sales Data</strong><a href="https://i.stack.imgur.com/3Igrx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Igrx.png" alt="enter image description here"></a>
<strong>output:</strong><a href="https://i.stack.imgur.com/Rwvmg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rwvmg.png" alt="I want "></a></p>
|
<p>Is this what you are looking for?</p>
<pre><code>processed_df = df.groupby('col').agg(...).reset_index()
processed_df.sort_values(['col', 'Placement#'], ascending=True, inplace=True)
</code></pre>
|
python|python-2.7|pandas|pandas-groupby
| 0
|
5,615
| 58,679,115
|
Pandas - Modify by string values in each cell by using search
|
<p>I have a pandas dataframe that includes a column a with content like this: <code>'<152/abcx>' ,'<42/da>', '<2/kiw>'</code>. Based on the content, I want to remove "<", ">", and create two new separate columns like this -- column b: <code>152 ; 42; 2</code>, column c: <code>'abcx', 'da', 'kiw'</code>. </p>
<pre><code>df.a.str[df.a.str.find('<')+1:df.a.str.find('/')-1]
</code></pre>
<p>The code I tried doesn't work.</p>
|
<p>Try using this code:</p>
<pre><code>df = pd.DataFrame({'a': ['<152/abcx>' ,'<42/da>', '<2/kiw>']})
df = df['a'].str.strip('<>').str.split('/', expand=True)
df.columns = ['columnb', 'columnc']
print(df)
</code></pre>
<p>Output:</p>
<pre><code> columnb columnc
0 152 abcx
1 42 da
2 2 kiw
</code></pre>
|
string|pandas
| 2
|
5,616
| 58,868,775
|
Can't initialize Bokeh dashboard with Select widget
|
<p>This is my first Bokeh experience, so I apologize if I'm asking a very silly question here. So, here's the code. I have a dataframe that contains scores for several individuals. I want to plot a time series line with years on the x-axis, and scores on y-axis and a select menu to pick a player. </p>
<p>This is what I have so far, but I've hit a wall. What am I missing here? </p>
<pre><code>import pandas as pd
import numpy as np
from bokeh.layouts import row, column
from bokeh.models import ColumnDataSource, Select
from bokeh.io import output_file, show
from bokeh.plotting import figure
players = {'AJ':'Alex Jones', 'CH':'Chris Humps', 'BH':'Brian Hill', 'CM':'Chris Matta',
'JB':'Jim Bellami'}
data = pd.DataFrame({'score_AJ':[6, 7, 5, 4, 3], 'score_CH':[4, 2, 4, 1, 3], 'score_BH':[7, 3, 2, 7, 6],
'score_CM':[1, 1, 3, 2, 4], 'score_JB':[2, 3, 3, 5, 6]})
data.index = pd.period_range(start='2015-01-01', end='2019-01-01', freq='A')
output_file("test.html")
player_select = Select(title='Player:', value="Chris Matta", options=sorted(players.values()))
def update_data(attr, old, new):
player = [key for (key, value) in players.items() if value == player_select.value]
df = pd.DataFrame({'year': data.index, 'score': data['score_'+ player]})
return ColumnDataSource(data=df)
def plot_charts(source):
chart = figure(width=600, plot_height = 300, x_axis_type ='datetime', title = 'Player score')
chart.line('year', 'score', color='midnightblue', line_width=2, alpha=1, source = source)
return chart
player_select.on_change('value', update_data)
chart = plot_charts(source)
main_row = row(chart, player_select)
show(main_row)
</code></pre>
<p>Thank you!</p>
|
<p>Using <em>real Python callbacks</em>, as you have done above, requires running your code as an application on a Bokeh server. That is because web browsers have no knowledge of, nor ability to run, Python code. Real Python callbacks imply there is some actually running Python process that can run the Python callback code. In this case, that process is the Bokeh server (that is what the Bokeh server <em>exists to be</em>).</p>
<p>Now, the <code>show</code> function is for generating <em>standalone</em> (i.e. non-Bokeh server) output. Just pure HTML and JS in a static file. Given that, there is no way real Python callbacks can function with <code>show</code>. </p>
<p>So you have two options:</p>
<ul>
<li>Rework this as a Bokeh Server application, in which case you should first refer to <a href="https://docs.bokeh.org/en/latest/docs/user_guide/server.html" rel="nofollow noreferrer">Running a Bokeh Server</a> in the User's Guide for necessary context. Then, <a href="https://github.com/bokeh/bokeh/tree/master/examples/app/crossfilter" rel="nofollow noreferrer">here is a complete example</a> of a Bokeh server app that updates data from a <code>Select</code>, which you can emulate.</li>
<li>Alternatively, rework this to use only <code>CustomJS</code> callbacks, and no Python callbacks. It is definitely possible to do this sort of thing with only JS callbacks, in which case standalone output created with <code>show</code> will work. See <a href="https://docs.bokeh.org/en/latest/docs/user_guide/interaction/callbacks.html" rel="nofollow noreferrer">JavaScript Callbacks</a> for background and many examples of updating things from JS callbacks on widgets. </li>
</ul>
<p>Besides this, there are some other miscellaneous issues. Namely, this line:</p>
<pre><code>chart.line('year', 'score', ...)
</code></pre>
<p>tells Bokeh <em>"look in the data source for a column named 'year' for the x-values, and in a column named 'score' for the y-values"</em>. However, your data source has neither of these columns. It has columns named things like "score_AJ", etc, and no "year" column at all. </p>
|
python|pandas|bokeh
| 1
|
5,617
| 58,651,762
|
Summarize a column if values in another column are inferior (without for loop)
|
<h2>The dataframe</h2>
<p>I have dataframe with many items.</p>
<p>The items are identified by a code "Type" and by a weight.</p>
<p>The last column indicates the quantity.</p>
<pre><code>|-|------|------|---------|
| | type |weight|quantity |
|-|------|------|---------|
|0|100010| 3 | 456 |
|1|100010| 1 | 159 |
|2|100010| 5 | 735 |
|3|100024| 3 | 153 |
|4|100024| 7 | 175 |
|5|100024| 1 | 759 |
|-|------|------|---------|
</code></pre>
<h2>The compatibility rule</h2>
<p>A given item "A" is "compatible" with others items if :</p>
<ul>
<li>It is the same type</li>
<li>The weights of the other items is equal or less than the weight of the item "A"</li>
</ul>
<h2>The result expected</h2>
<p>I want to add a column "compatible quantity" calculating for each row, how many items are compatible.</p>
<pre><code>|-|------|------|---------|---------------------|
| | type |weight|quantity | compatible quantity |
|-|------|------|---------|---------------------|
|0|100010| 3 | 456 | 615 | 456 + 159
|1|100010| 1 | 159 | 159 | 159 only (the lightest items)
|2|100010| 5 | 735 | 1350 | 735 + 159 + 456 (the heaviest)
|3|100024| 3 | 153 | 912 | 153 + 759
|4|100024| 7 | 175 | 1087 | ...
|5|100024| 1 | 759 | 759 | ...
|-|------|------|---------|---------------------|
</code></pre>
<p>I want to avoid using a for loop to get this result (the dataframe is huge).</p>
<h2>My code using a For loop</h2>
<pre><code>import pandas as pd
df = pd.DataFrame([[100010, 3, 456],[100010, 1, 159],[100010, 5, 735], [100024, 3, 153], [100024, 7, 175], [100024, 1, 759]],columns = ["type", "weight", "quantity"])
print(df)
for inc in range(df["type"].count()):
the_type = df["type"].iloc[inc]
the_weight = df["weight"].iloc[inc]
the_quantity = df["quantity"].iloc[inc]
df.at[inc,"quantity_compatible"] = df.loc[(df["type"] == the_type) & (df["weight"] <= the_weight),"quantity"].sum()
print(df)
</code></pre>
<h2>Some possible ideas</h2>
<ul>
<li>Can "apply" or "Transform" be helpful ?</li>
<li>Can it be done using loc inside a loc ?</li>
</ul>
|
<p>First sort your values by <code>weight</code> and <code>type</code>, then do a <code>groupby</code> for <code>cumsum</code>, and finally do a merge on index:</p>
<pre><code>df = pd.DataFrame([[100010, 3, 456],[100010, 1, 159],[100010, 5, 735], [100024, 3, 153], [100024, 7, 175], [100024, 1, 759]],columns = ["type", "weight", "quantity"])
new_df = df.merge(df.sort_values(["type","weight"])
.groupby("type")["quantity"]
.cumsum(),left_index=True, right_index=True)
print (new_df)
#
type weight quantity_x quantity_y
0 100010 3 456 615
1 100010 1 159 159
2 100010 5 735 1350
3 100024 3 153 912
4 100024 7 175 1087
5 100024 1 759 759
</code></pre>
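<p>As a small follow-up (not part of the original answer), if you want the column names from the expected output, a final rename is enough:</p>
<pre><code>new_df = new_df.rename(columns={"quantity_x": "quantity", "quantity_y": "quantity_compatible"})
</code></pre>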
|
pandas|dataframe|pandas-groupby
| 2
|
5,618
| 58,741,434
|
Filter one data frame based on other data frame in pandas
|
<p>I have two DataFrames in pandas:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'Name': ["A", "B", "C", "C","D","D","E"],
'start': [50, 124, 1, 159, 12, 26,110],
'stop': [60, 200, 19, 200, 24, 30,160]})
df2 = pd.DataFrame({'Name': ["B", "C","D","E"],
'start': [126, 143, 19, 159],
'stop': [129, 220, 27, 200]})
print(df1)
Name start stop
0 A 50 60
1 B 124 200
2 C 1 19
3 C 159 200
4 D 12 24
5 D 26 30
6 E 110 160
print(df2)
Name start stop
0 B 126 129
1 C 143 220
2 D 19 27
3 E 159 200
</code></pre>
<p>I want to filter df1 to remove rows based on df2 using the following criteria:</p>
<ol>
<li>Name should be present in both df1 and df2</li>
<li>The range from start to stop for a Name overlaps with the range from start to stop for that Name in the other DataFrame</li>
</ol>
<p>This would give:</p>
<pre><code> Name start stop
0 B 124 200
1 C 159 200
2 D 12 24
3 D 26 30
4 E 110 160
</code></pre>
<p>Where: </p>
<ul>
<li>A has been dropped as there is no A in df2</li>
<li>B is kept as the start and stop of B in df2 are nested in those of B in df1</li>
<li>One of the C's of df1 has been dropped as its values didn't overlap with df2, whereas the other was kept as it is nested in the start and stop range of C in df2</li>
<li>Both D's are kept as both have an overlap with the range of D in df2</li>
<li>E is kept as its range overlaps with E in df2</li>
</ul>
<p>Any help would be greatly appreciated! </p>
|
<p>To solve your problem, I applied an SQL-like way that mimics the following query:</p>
<pre class="lang-sql prettyprint-override"><code>SELECT
df.Name, df.start_x AS start, df.stop_x AS stop
FROM (
SELECT
df1.Name, df1.start AS start_x, df1.stop AS stop_x,
df2.start AS start_y, df2.stop AS stop_y
FROM df1
INNER JOIN df2
ON df1.Name = df2.Name
) AS df
WHERE (df.stop_y >= df.start_x) AND (df.stop_x >= df.start_y)
</code></pre>
<p>This query has been converted to the following code fragment that uses the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge.html" rel="nofollow noreferrer"><code>pandas.merge</code></a> method. Note that you must use parentheses in the expression <code>(df.stop_y >= df.start_x) & (df.stop_x >= df.start_y)</code>. Without them, the code throws the exception</p>
<blockquote>
<p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df1 = pd.DataFrame({'Name': ["A", "B", "C", "C","D","D","E"],
'start': [50, 124, 1, 159, 12, 26,110],
'stop': [60, 200, 19, 200, 24, 30,160]})
df2 = pd.DataFrame({'Name': ["B", "C","D","E"],
'start': [126, 143, 19, 159],
'stop': [129, 220, 27, 200]})
df = pd.merge(df1, df2, on=['Name'])
df = df[(df.stop_y >= df.start_x) & (df.stop_x >= df.start_y)]
df.rename(columns={'start_x':'start', 'stop_x':'stop'}, inplace=True)
df.drop(['start_y', 'stop_y'], axis=1, inplace=True)
df.reset_index(drop=True, inplace=True)
print(df)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> Name start stop
0 B 124 200
1 C 159 200
2 D 12 24
3 D 26 30
4 E 110 160
</code></pre>
<p>Demo on <a href="https://repl.it/@AndrieiO/Pandas-inner-join" rel="nofollow noreferrer">Repl.it</a>.</p>
|
python|pandas|dataframe
| 1
|
5,619
| 58,705,732
|
How to fill empty cell value in pandas with condition
|
<p>My sample dataset is as below. Actual data up to 2020 is available.</p>
<pre><code> Item Year Amount final_sales
A1 2016 123 400
A2 2016 23 40
A3 2016 6
A4 2016 10 100
A5 2016 5 200
A1 2017 123 400
A2 2017 23
A3 2017 6
A4 2017 10
A5 2017 5 200
</code></pre>
<p>I have to extrapolate the 2017 (and subsequent years) <code>final_sales</code> column data from 2016 for every Item if the 2017 data is not available.<br>
In the above dataset, <code>final_sales</code> is not available for the year 2017 for A2 and A4, but it is available for 2016. How can I bring in the 2016 <code>final_sales</code> value when the corresponding year's value is not available? </p>
<p>Expected results as below. Thanks.</p>
<pre><code> Item Year Amount final_sales
A1 2016 123 400
A2 2016 23 40
A3 2016 6
A4 2016 10 100
A5 2016 5 200
A1 2017 123 400
A2 2017 23 40
A3 2017 6
A4 2017 10 100
A5 2017 5 200
</code></pre>
|
<p>It looks like you want to fill forward where there is missing data.</p>
<p>You can do this with 'fillna', which is available on pd.DataFrame objects.</p>
<p>In your case, you only want to fill forward for each item, so first group by item, and then use fillna. The method 'pad' just carries forward in order (hence why we sort first).</p>
<pre><code>df['final_sales'] = df.sort_values('Year').groupby('Item')['final_sales'].fillna(method='pad')
</code></pre>
<p>Note that on your example data, A3 is missing for 2016 as well, so there is nothing to carry forward and it remains missing for 2017.</p>
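<p>A runnable sketch on the sample data from the question (NaN marks the empty final_sales cells):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Item': ['A1', 'A2', 'A3', 'A4', 'A5'] * 2,
    'Year': [2016] * 5 + [2017] * 5,
    'Amount': [123, 23, 6, 10, 5] * 2,
    'final_sales': [400, 40, np.nan, 100, 200, 400, np.nan, np.nan, np.nan, 200],
})

df['final_sales'] = df.sort_values('Year').groupby('Item')['final_sales'].fillna(method='pad')
# A2 and A4 in 2017 pick up 40 and 100 from 2016, matching the expected result;
# A3 stays empty because there was nothing in 2016 to carry forward
</code></pre>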
|
python-3.x|pandas|dataframe|row
| 2
|
5,620
| 70,277,650
|
Is there any other way (to combine values of one column into different groups), instead of using 'df.replace( )' several times in the below problem?
|
<p>In:</p>
<pre><code>char_df['Loan_Title'].unique()
</code></pre>
<p>Out:</p>
<pre><code>array(['debt consolidation', 'credit card refinancing',
       'home improvement', 'credit consolidation', 'green loan', 'other',
       'moving and relocation', 'credit cards', 'medical expenses',
       'refinance', 'credit card consolidation', 'lending club',
       'debt consolidation loan', 'major purchase', 'vacation',
       'business', 'credit card payoff', 'credit card',
       'credit card refi', 'personal loan', 'cc refi', 'consolidate',
       'medical', 'loan 1', 'consolidation', 'card consolidation',
       'car financing', 'debt', 'home buying', 'freedom', 'consolidated',
       'get out of debt', 'consolidation loan', 'dept consolidation',
       'personal', 'cards', 'bathroom', 'refi', 'credit card loan',
       'credit card debt', 'house', 'debt consolidation 2013',
       'debt loan', 'cc refinance', 'home', 'cc consolidation',
       'credit card refinance', 'credit loan', 'payoff',
       'bill consolidation', 'credit card paydown', 'credit card pay off',
       'get debt free', 'myloan', 'credit pay off', 'my loan', 'loan',
       'bill payoff', 'cc-refinance', 'debt reduction', 'medical loan',
       'wedding loan', 'credit', 'pay off bills', 'refinance loan',
       'debt payoff', 'car loan', 'pay off', 'pool', 'credit payoff',
       'credit card refinance loan', 'cc loan', 'debt free', 'conso',
       'home improvement loan', 'loan consolidation', 'lending loan',
       'relief', 'cc', 'loan1', 'getting ahead', 'home loan', 'bills'],
      dtype=object)
</code></pre>
<p>In:</p>
<pre><code>char_df = char_df.replace(['debt consolidation','debt consolidation loan','dept consolidation','debt consolidation 2013'], 'dept_consolidation')
char_df = char_df.replace(['personal','personal loan'],'personal_loan')
char_df = char_df.replace(['credit card refinancing','credit card refi','credit card refinance','credit card refinance loan'],'credit_card_refinance')
</code></pre>
|
<p>IIUC (it is hard to read) you could try the following:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# will use regex pattern as keys and replace string as value
patterns = {
r'dept consolidation.*': 'dept_consolidation',
r'personal.*': 'personal_loan',
r'credit card.*': 'credit_card_refinance'
}
df['Loan_Title'] = df['Loan_Title'].replace(regex=patterns)
</code></pre>
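<p>To sanity-check the mapping, you can re-run the question's own call on the result afterwards:</p>
<pre><code>print(df['Loan_Title'].unique())  # the replaced categories should now appear as the merged labels
</code></pre>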
|
python|pandas|machine-learning|data-science|feature-engineering
| 2
|
5,621
| 70,123,729
|
Add hours to year-month-day data in pandas data frame
|
<p>I have the following data frame with hourly resolution</p>
<pre><code>day_ahead_DK1
Out[27]:
DateStamp DK1
0 2017-01-01 20.96
1 2017-01-01 20.90
2 2017-01-01 18.13
3 2017-01-01 16.03
4 2017-01-01 16.43
... ...
8756 2017-12-31 25.56
8757 2017-12-31 11.02
8758 2017-12-31 7.32
8759 2017-12-31 1.86
type(day_ahead_DK1)
Out[28]: pandas.core.frame.DataFrame
</code></pre>
<p>But the current column <code>DateStamp</code> is missing hours. How can I add hours <code>00:00:00</code>, to <code>2017-01-01</code> for Index <code>0</code> so it will be <code>2017-01-01 00:00:00</code>, and then <code>01:00:00</code>, to <code>2017-01-01</code> for Index <code>1</code> so it will be <code>2017-01-01 01:00:00</code>, and so on, so that all my days will have hours from 0 to 23. Thank you!</p>
<p>The expected output:</p>
<pre><code>day_ahead_DK1
Out[27]:
DateStamp DK1
0 2017-01-01 00:00:00 20.96
1 2017-01-01 01:00:00 20.90
2 2017-01-01 02:00:00 18.13
3 2017-01-01 03:00:00 16.03
4 2017-01-01 04:00:00 16.43
... ...
8756 2017-12-31 20:00:00 25.56
8757 2017-12-31 21:00:00 11.02
8758 2017-12-31 22:00:00 7.32
8759 2017-12-31 23:00:00 1.86
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a> for counter with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>to_timedelta</code></a> for hours and add to <code>DateStamp</code> column:</p>
<pre><code>df['DateStamp'] = pd.to_datetime(df['DateStamp'])
df['DateStamp'] += pd.to_timedelta(df.groupby('DateStamp').cumcount(), unit='H')
print (df)
DateStamp DK1
0 2017-01-01 00:00:00 20.96
1 2017-01-01 01:00:00 20.90
2 2017-01-01 02:00:00 18.13
3 2017-01-01 03:00:00 16.03
4 2017-01-01 04:00:00 16.43
8756 2017-12-31 00:00:00 25.56
8757 2017-12-31 01:00:00 11.02
8758 2017-12-31 02:00:00 7.32
8759 2017-12-31 03:00:00 1.86
</code></pre>
|
python|pandas|dataframe
| 1
|
5,622
| 70,058,518
|
InvalidArgumentError: Incompatible shapes: [29] vs. [29,7,7,2]
|
<p>So I'm new here and new to Python as well. I'm trying to build my own network. I found some pictures of dogs and cats (15x15) and unfortunately couldn't make this basic network work...</p>
<p>So, these are libraries which I'm using</p>
<pre><code> from tensorflow.keras.models import Sequential
from tensorflow.keras import utils
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Dense
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import keras
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import GlobalMaxPooling2D
</code></pre>
<p>Body</p>
<pre><code>train_dataset = tf.keras.preprocessing.image_dataset_from_directory(
'drive/MyDrive/cats vs dogs/cats vs dogs/training',
color_mode="rgb",
batch_size=32,
image_size=(150, 150),
shuffle=True,
seed=42,
validation_split=0.1,
subset='training',
interpolation="bilinear",
follow_links=False,
)
validation_dataset = tf.keras.preprocessing.image_dataset_from_directory(
'drive/MyDrive/cats vs dogs/cats vs dogs/training',
color_mode="rgb",
batch_size=32,
image_size=(150, 150),
shuffle=True,
seed=42,
validation_split=0.1,
subset='validation',
interpolation="bilinear",
follow_links=False,
)
test_dataset = tf.keras.preprocessing.image_dataset_from_directory(
'drive/MyDrive/cats vs dogs/cats vs dogs/test',
batch_size = 32,
image_size = (150, 150),
interpolation="bilinear"
)
model = Sequential()
model.add(keras.Input(shape=(150, 150, 3)))
model.add(Conv2D(32, 5, strides=2, activation="relu"))
model.add(Conv2D(32, 3, activation="relu"))
model.add(MaxPooling2D(3))
model.add(Dense(250, activation='sigmoid'))
model.add(Dense(100))
model.add(MaxPooling2D(3))
model.add(Dense(2))
model.summary()
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(train_dataset, validation_data=validation_dataset, epochs=5, verbose=2)
</code></pre>
<p>And I get this error</p>
<pre><code>Incompatible shapes: [29] vs. [29,7,7,2]
[[node gradient_tape/binary_crossentropy/mul_1/BroadcastGradientArgs
(defined at /usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/optimizer_v2.py:464)
]] [Op:__inference_train_function_4364]
Errors may have originated from an input operation.
Input Source operations connected to node
gradient_tape/binary_crossentropy/mul_1/BroadcastGradientArgs:
In[0] gradient_tape/binary_crossentropy/mul_1/Shape:
In[1] gradient_tape/binary_crossentropy/mul_1/Shape_1
</code></pre>
<p>I was trying to change from <code>binary_crossentropy</code> to <code>categorical_crossentropy</code>, but it didn't help. I suppose my mistake is in the datasets or inputs, but I don't know how to solve it :(</p>
<p>Really hope to find help here!</p>
<p>[my architecture][1]
[1]: https://i.stack.imgur.com/w4Y9N.png</p>
|
<p>You need to <strong>flatten</strong> your prediction somewhere, otherwise you are outputting an image (29 samples of size 7x7 with 2 channels), while you simply want flat 2-dimensional logits (so shape 29x2). The architecture you are using is somewhat odd; did you mean to have a flattening operation <strong>before</strong> the first Dense layer, and then no "maxpooling2d" (as it makes no sense for a flattened signal)? Mixing relu and sigmoid activations is also quite non-standard; I would encourage you to start with established architectures rather than try to compose your own, to build some intuition.</p>
<pre><code>from tensorflow.keras.layers import Flatten  # Flatten is not among the question's imports

model = Sequential()
model.add(keras.Input(shape=(150, 150, 3)))
model.add(Conv2D(32, 5, strides=2, activation="relu"))
model.add(Conv2D(32, 3, activation="relu"))
model.add(MaxPooling2D(3))
model.add(Flatten())
model.add(Dense(250, activation="relu"))
model.add(Dense(100, activation="relu"))
model.add(Dense(2))
model.summary()
</code></pre>
|
tensorflow|keras|neural-network|google-colaboratory
| 0
|
5,623
| 56,318,354
|
TensorFlow: Why should my image tensor shape equal my label tensor shape?
|
<p>I wish to do semantic segmentation using TensorFlow 1.12. I create a dataset using <code>from_generator()</code>, where my generator is as follows:</p>
<pre><code>def train_sample_fetcher():
return sample_fetcher()
def val_sample_fetcher():
return sample_fetcher(is_validations=True)
def sample_fetcher(is_validations=False):
sample_names = [filename[:-4] for filename in os.listdir(DIR_DATASET + "ndarrays/")]
if not is_validations: sample_names = sample_names[:int(len(sample_names) * TRAIN_VAL_SPLIT)]
else: sample_names = sample_names[int(len(sample_names) * TRAIN_VAL_SPLIT):]
for sample_name in sample_names:
rgb = tf.image.decode_jpeg(tf.read_file(DIR_DATASET + sample_name + ".jpg"))
rgb = tf.image.resize_images(rgb, (HEIGHT, WIDTH))
#d = tf.image.decode_jpeg(tf.read_file(DIR_DATASET + "depth/" + sample_name + ".jpg"))
#d = tf.image.resize_images(d, (HEIGHT, WIDTH))
#rgbd = tf.concat([rgb,d], axis=2)
onehots = tf.convert_to_tensor(np.load(DIR_DATASET + "ndarrays/" + sample_name + ".npy"), dtype=tf.float32)
yield tf.stack([rgb, onehots])
</code></pre>
<p>In other words, I have a label tensor containing a one-hot label vector of length 21 (21 classes) for every pixel. However, this is not permitted according to this stack trace:</p>
<pre><code>Traceback (most recent call last):
File "semantic_fpn.py", line 89, in <module>
callbacks=[checkpoint_full, checkpoint_weights, tensorboard])
File ".../site-packages/tensorflow/python/keras/engine/training.py", line 1574, in fit
steps=validation_steps)
File ".../site-packages/tensorflow/python/keras/engine/training.py", line 975, in _standardize_user_data
next_element = x.get_next()
File ".../site-packages/tensorflow/python/data/ops/iterator_ops.py", line 623, in get_next
return self._next_internal()
File ".../site-packages/tensorflow/python/data/ops/iterator_ops.py", line 564, in _next_internal
output_shapes=self._flat_output_shapes)
File ".../site-packages/tensorflow/python/ops/gen_dataset_ops.py", line 2266, in iterator_get_next_sync
_six.raise_from(_core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.UnknownError: InvalidArgumentError: Shapes of all inputs must match: values[0].shape = [512,512,3] != values[1].shape = [512,512,21] [Op:Pack] name: stack
</code></pre>
<p>Why is this not allowed? How can I circumvent this?</p>
|
<p>The <code>tf.stack</code> operation tries to merge N rank-K tensors into one rank-(K+1) tensor. In other words, it joins a sequence of tensors along a new axis, and therefore all the other axes of the tensors must match.</p>
<p>You can simply yield a pair, <code>yield rgb, onehots</code>, out of your generator instead.</p>
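<p>As a rough sketch (keeping the names from the question and assuming the dataset is built with <code>tf.data.Dataset.from_generator</code>), the generator would end with the pair and the dataset construction would declare two outputs:</p>
<pre><code># inside sample_fetcher(), replace the tf.stack(...) line with:
yield rgb, onehots

# and declare (image, label) pairs when building the dataset:
train_dataset = tf.data.Dataset.from_generator(
    train_sample_fetcher,
    output_types=(tf.float32, tf.float32),
    output_shapes=((HEIGHT, WIDTH, 3), (HEIGHT, WIDTH, 21)))
</code></pre>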
|
python|tensorflow|dataset|tensor
| 1
|
5,624
| 56,246,970
|
How to apply a binomial low pass filter to data in a NumPy array?
|
<p>I'm supposed to apply a "binomial low pass filter" to data given in a NumPy <code>numpy.ndarray</code>.</p>
<p>However, I wasn't able to find anything of the sort at <a href="https://docs.scipy.org/doc/scipy/reference/signal.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/signal.html</a>. What am I missing here? This should be a fairly basic operation, right?</p>
|
<p>A binomial filter is a <a href="https://en.wikipedia.org/wiki/Finite_impulse_response" rel="nofollow noreferrer">FIR filter</a> whose coefficients can be generated by taking a row from <a href="https://en.wikipedia.org/wiki/Pascal%27s_triangle" rel="nofollow noreferrer">Pascal's triangle</a>. A quick way ("quick" as in just one line of code--not necessarily the most efficient) is with <code>numpy.poly1d</code>:</p>
<pre><code>In [15]: np.poly1d([1, 1])**2
Out[15]: poly1d([1, 2, 1])
In [16]: np.poly1d([1, 1])**3
Out[16]: poly1d([1, 3, 3, 1])
In [17]: np.poly1d([1, 1])**4
Out[17]: poly1d([1, 4, 6, 4, 1])
</code></pre>
<p>To use a set of these coefficients as a low pass filter, the values must be normalized so the sum is one.  The sum of the coefficients of <code>np.poly1d([1, 1])**n</code> is <code>2**n</code>, so we could divide the above result by <code>2**n</code>.  Alternatively, we can generate coefficients that are already normalized by giving <code>numpy.poly1d</code> <code>[1/2, 1/2]</code> instead of <code>[1, 1]</code> (i.e. start with a normalized set of two coefficients).  This function generates the filter coefficients for a given n:</p>
<pre><code>def binomcoeffs(n):
return (np.poly1d([0.5, 0.5])**n).coeffs
</code></pre>
<p>For example,</p>
<pre><code>In [35]: binomcoeffs(3)
Out[35]: array([0.125, 0.375, 0.375, 0.125])
In [36]: binomcoeffs(5)
Out[36]: array([0.03125, 0.15625, 0.3125 , 0.3125 , 0.15625, 0.03125])
In [37]: binomcoeffs(15)
Out[37]:
array([3.05175781e-05, 4.57763672e-04, 3.20434570e-03, 1.38854980e-02,
4.16564941e-02, 9.16442871e-02, 1.52740479e-01, 1.96380615e-01,
1.96380615e-01, 1.52740479e-01, 9.16442871e-02, 4.16564941e-02,
1.38854980e-02, 3.20434570e-03, 4.57763672e-04, 3.05175781e-05])
</code></pre>
<p>To <em>apply</em> the filter to a signal, use <a href="https://en.wikipedia.org/wiki/Convolution" rel="nofollow noreferrer">convolution</a>. There are several discrete convolution functions available, including <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.convolve.html" rel="nofollow noreferrer"><code>numpy.convolve</code></a>, <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve.html" rel="nofollow noreferrer"><code>scipy.signal.convolve</code></a>, <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.convolve1d.html" rel="nofollow noreferrer"><code>scipy.ndimage.convolve1d</code></a>. You can also use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.lfilter.html" rel="nofollow noreferrer"><code>scipy.signal.lfilter</code></a> (give the coefficients as the <code>b</code> argument, and set <code>a=1</code>).</p>
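<p>For instance, here is a minimal sketch (with made-up test data) that smooths a noisy signal using the coefficients from <code>binomcoeffs</code> and <code>scipy.signal.lfilter</code>:</p>
<pre><code>import numpy as np
from scipy.signal import lfilter

def binomcoeffs(n):
    return (np.poly1d([0.5, 0.5])**n).coeffs

# synthetic noisy signal
np.random.seed(0)
t = np.linspace(0, 1, 500)
x = np.sin(2*np.pi*3*t) + 0.3*np.random.randn(t.size)

b = binomcoeffs(15)      # binomial low pass FIR coefficients
y = lfilter(b, 1, x)     # apply the FIR filter (denominator a=1)
</code></pre>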
<p>For concrete examples, check out <a href="https://scipy-cookbook.readthedocs.io/items/ApplyFIRFilter.html" rel="nofollow noreferrer">"Applying a FIR filter"</a>, a short article that I wrote several years ago (and that has been edited by others since then). Note that the timings shown in that article might not be up-to-date. The code in both NumPy and SciPy is continually evolving. If you run those scripts now, you might get radically different results.</p>
|
python|numpy|filter|scipy|binomial-coefficients
| 4
|
5,625
| 55,683,523
|
separating a .txt file into two columns using pandas
|
<p>I have a text file of a script and is ordered like this:</p>
<pre><code>0 "character one" "dialogue for character one."
1 "character two" "dialogue for character two."
2 "character one" "dialogue for character one again"
...
etc
</code></pre>
<p>My problem is that I want to analyze this text and need it to be in .csv format where the character is in the first column, and the dialogue is all in the second column.</p>
<p>I have read the .txt file into pandas like so: </p>
<p><code>txt_ep_4 = pd.read_table('/Users/nathancahn/star_wars/0_data/ep_IV_script.txt')</code>
so now I have a pandas data series (not a data frame) to interact with.</p>
<p>I've mostly tried different methods of splitting the text into columns with Series.str.split() but have been unsuccessful. I used <code>series_txt_ep_4.str.split(pat=" ")</code> to indicate separating at the space but this instead separated at every space.</p>
<p>Again, my ideal output would be to have the first column be the character name and the second column be the string of dialogue associated with that character.</p>
|
<p>I believe you need <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html" rel="nofollow noreferrer"><code>read_csv</code></a> with the parameters <code>sep</code> and <code>names</code> for the new column names, because in <code>pandas 0.24.2</code> you get:</p>
<blockquote>
<p>FutureWarning: read_table is deprecated, use read_csv instead.</p>
</blockquote>
<pre><code>temp=u'''"character one" "dialogue for character one."
"character two" "dialogue for character two."
"character one" "dialogue for character one again"'''
#after testing replace 'pd.compat.StringIO(temp)' to 'filename.csv'
df = pd.read_csv(pd.compat.StringIO(temp), sep="\s+", names=['a','b'])
#alternative
#df = pd.read_csv(pd.compat.StringIO(temp), delim_whitespace=True, names=['a','b'])
print (df)
a b
0 character one dialogue for character one.
1 character two dialogue for character two.
2 character one dialogue for character one again
</code></pre>
<p>EDIT:</p>
<p>If values have also header:</p>
<pre><code>temp=u""""character" "dialogue"
"1" "THREEPIO" "Did you hear that? They've shut down the main reactor. We'll be destroyed for sure. This is madness!"
"2" "THREEPIO" "We're doomed!"
"3" "THREEPIO" "There'll be no escape for the Princess this time."
"4" "THREEPIO" "What's that?"
"5" "THREEPIO" "I should have known better than to trust the logic of a half-sized thermocapsulary dehousing assister..."
"6" "LUKE" "Hurry up! Come with me! What are you waiting for?! Get in gear!"
"7" "THREEPIO" "Artoo! Artoo-Detoo, where are you?"
"8" "THREEPIO" "At last! Where have you been?"
"9" "THREEPIO" "They're heading in this direction. What are we going to do? We'll be sent to the spice mines of Kessel or smashed into who knows what!"
"10" "THREEPIO" "Wait a minute, where are you going?"
"""
#after testing replace 'pd.compat.StringIO(temp)' to 'filename.csv'
df = pd.read_csv(pd.compat.StringIO(temp), sep="\s+")
</code></pre>
<hr>
<pre><code>print (df)
character dialogue
1 THREEPIO Did you hear that? They've shut down the main...
2 THREEPIO We're doomed!
3 THREEPIO There'll be no escape for the Princess this time.
4 THREEPIO What's that?
5 THREEPIO I should have known better than to trust the l...
6 LUKE Hurry up! Come with me! What are you waiting...
7 THREEPIO Artoo! Artoo-Detoo, where are you?
8 THREEPIO At last! Where have you been?
9 THREEPIO They're heading in this direction. What are we...
10 THREEPIO Wait a minute, where are you going?
</code></pre>
|
pandas|csv|export-to-csv|data-cleaning
| 2
|
5,626
| 55,878,165
|
create 2 lines plot by dates of Trump's tweets
|
<p>I have series of Trump's tweets sources by date:</p>
<pre><code>trump_tweets_sources.groupby([' created_at ', ' source '])[' source '].count()
</code></pre>
<p><a href="https://i.stack.imgur.com/y12Cf.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>I want to create a plot with 2 lines describing the tweet sources by date, one for iPhone and one for Android.</p>
<p>How can I do that?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>Series.unstack</code></a> for <code>DataFrame</code> and then use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer"><code>DataFrame.plot</code></a>:</p>
<pre><code>s = trump_tweets_sources.groupby([' created_at ', ' source '])[' source '].count()
s.unstack().plot()
</code></pre>
|
python|pandas|dataframe|matplotlib
| 0
|
5,627
| 55,990,477
|
Creating a DataFrame by passing a NumPy array, with a datetime index and labeled columns:
|
<p>On running a code, getting a following error:-</p>
<p><strong>'list' object is not callable</strong></p>
<pre><code>dates = pd.date_range('20190101', periods=6)
dates
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))
</code></pre>
<p>Expected output:-</p>
<pre><code>In [8]: df
Out[8]:
A B C D
2013-01-01 0.469112 -0.282863 -1.509059 -1.135632
2013-01-02 1.212112 -0.173215 0.119209 -1.044236
2013-01-03 -0.861849 -2.104569 -0.494929 1.071804
2013-01-04 0.721555 -0.706771 -1.039575 0.271860
2013-01-05 -0.424972 0.567020 0.276232 -1.087401
2013-01-06 -0.673690 0.113648 -1.478427 0.524988
</code></pre>
|
<p>Your code is perfectly correct. If you execute it right after restarting your kernel, it is bound to run. The error you are getting refers to the built-in <code>list</code> function you use to convert the string 'ABCD' to a list. Most likely, you have used <code>list</code> as a variable name somewhere else, and that object is still in memory, shadowing the built-in function.</p>
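<p>For example, something along these lines (the shadowing assignment is hypothetical) reproduces the error and shows how to recover:</p>
<pre><code>import numpy as np
import pandas as pd

list = ['a', 'b']     # accidentally shadows the built-in list
# list('ABCD') would now raise TypeError: 'list' object is not callable

del list              # restore the built-in
dates = pd.date_range('20190101', periods=6)
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))
print(df)
</code></pre>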
|
python|pandas
| 0
|
5,628
| 64,744,068
|
Remove row from arbitrary dimension in numpy
|
<p>I have a function, <code>remrow</code> which takes as input an arbitrary numpy nd array, <code>arr</code>, and an integer, <code>n</code>. My function should remove the last row from <code>arr</code> in the <code>n</code>th dimension. For example, if call my function like so:</p>
<p><code>remrow(arr,2)</code></p>
<p>with <code>arr</code> as a 3d array, then my function should return:</p>
<p><code>arr[:,:,:-1]</code></p>
<p>Similarly if I call;</p>
<p><code>remrow(arr,1)</code></p>
<p>and <code>arr</code> is a 5d array, then my function should return:</p>
<p><code>arr[:,:-1,:,:,:]</code></p>
<p>My problem is this; my function must work for all shapes and sizes of <code>arr</code> and all compatible <code>n</code>. How can I do this with numpy array indexing?</p>
|
<p>Construct an indexing tuple, consisting of the desired combination of slice(None) and slice(None,-1) objects.</p>
<pre><code>In [75]: arr = np.arange(24).reshape(2,3,4)
In [76]: idx = [slice(None) for _ in arr.shape]
In [77]: idx
Out[77]: [slice(None, None, None), slice(None, None, None), slice(None, None, None)]
In [78]: idx[1]=slice(None,-1)
In [79]: arr[tuple(idx)].shape
Out[79]: (2, 2, 4)
In [80]: idx = [slice(None) for _ in arr.shape]
In [81]: idx[2]=slice(None,-1)
In [82]: arr[tuple(idx)].shape
Out[82]: (2, 3, 3)
</code></pre>
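<p>Wrapped up as the function described in the question, a minimal sketch could be:</p>
<pre><code>import numpy as np

def remrow(arr, n):
    """Drop the last row of arr along dimension n."""
    idx = [slice(None)] * arr.ndim
    idx[n] = slice(None, -1)
    return arr[tuple(idx)]

arr = np.arange(24).reshape(2, 3, 4)
print(remrow(arr, 2).shape)   # (2, 3, 3)
print(remrow(arr, 1).shape)   # (2, 2, 4)
</code></pre>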
|
python|numpy|indexing|numpy-ndarray
| 2
|
5,629
| 64,940,267
|
How to access specific rows in a pandas multiindex dataframe
|
<p>I'm working with what I think is a pandas multi-index data frame. Is there any way to make sure? My data looks like this.</p>
<pre><code> cinc Outcome
Side 1 2 1 2
WarNum
1 0.146344 0.029989 1 2
4 0.152565 0.056853 1 2
7 0.082757 0.017940 1 2
10 0.076032 0.022553 1 2
13 0.048538 0.005754 1 2
</code></pre>
<p>When I type in war_cinc.columns, I get the following output.</p>
<pre><code>MultiIndex([( 'cinc', 1),
( 'cinc', 2),
('Outcome', 1),
('Outcome', 2)],
names=[None, 'Side'])
</code></pre>
<p>If I wanted to subset this data, how would I do so? (say I want to get the entire 2nd column of the cinc column of the dataframe)</p>
|
<p>To check whether your df has a <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html" rel="nofollow noreferrer"><code>MultiIndex</code></a> on the columns, you can do this:</p>
<pre><code>isinstance(war_cinc.columns, pd.MultiIndex)
</code></pre>
<p>This will return <code>True</code>.</p>
<p>To check for <code>hierarchical</code> columns, you can check <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.nlevels.html" rel="nofollow noreferrer"><code>nlevels</code></a>:</p>
<pre><code>if war_cinc.columns.nlevels > 1:
</code></pre>
<p>This will be <code>True</code> in your case.</p>
<p>You can get the entire 2nd column like:</p>
<pre><code>war_cinc[( 'cinc', 2)]
</code></pre>
<p>You need to pass all <code>levels</code> of the column in a <code>tuple</code> to fetch the column values.</p>
|
python|python-3.x|pandas|dataframe
| 1
|
5,630
| 64,952,877
|
How can I add comma into existing value in DataFrame?
|
<p>I scraped a DataFrame from a website, but during scraping I lost the commas in the values, so it looks like below:</p>
<pre><code>name price
x 100
y 89
z 123584
</code></pre>
<p>Now I have to modify the values in column "price" by inserting a comma two places from the right in each value. The result should look like this:</p>
<pre><code>name price
x 1,00
y 0,89
z 1235,84
</code></pre>
<p>Do you have any ideas how I can achieve this?</p>
|
<p>We can slice your string and add the commas:</p>
<pre><code>df['price'].str[:-2] + ',' + df['price'].str[-2:]
</code></pre>
<pre><code>0 1,00
1 ,89
2 1235,84
Name: price, dtype: object
</code></pre>
<hr />
<p>Or we can use <code>str.cat</code> with the <code>sep</code> argument:</p>
<pre><code>df['price'].str[:-2].str.cat(df['price'].str[-2:], sep=',')
</code></pre>
<pre><code>0 1,00
1 ,89
2 1235,84
Name: price, dtype: object
</code></pre>
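<p>If you want the <code>0,89</code> shown in the desired output (rather than <code>,89</code>), one option is to zero-pad the shorter strings first, for example:</p>
<pre><code>s = df['price'].astype(str).str.zfill(3)   # '89' -> '089'
df['price'] = s.str[:-2] + ',' + s.str[-2:]
</code></pre>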
|
python|python-3.x|pandas|dataframe|number-formatting
| 1
|
5,631
| 64,984,092
|
Pandas Dataframe calculate Time difference for each group and Time difference between two different groups
|
<p>I have created a dataframe like that:</p>
<pre><code>import pandas as pd
d = {'Time': ['01.07.2019, 06:21:33', '01.07.2019, 06:32:01', '01.07.2019, 06:57:33', '01.07.2019, 07:24:33','01.07.2019, 08:26:25', '01.07.2019, 09:12:44']
,'Action': ['Opened', 'Closed', 'Opened', 'Closed', 'Opened', 'Closed']
,'Name': ['Bayer', 'Bayer', 'ITM', 'ITM', 'Geco' , 'Geco'],
'Group': ['1', '1', '2','2','3','3']}
df = pd.DataFrame(data=d)
output:
Time Action Name Group
0 01.07.2019, 06:21:33 Opened Bayer 1
1 01.07.2019, 06:32:01 Closed Bayer 1
2 01.07.2019, 06:57:33 Opened ITM 2
3 01.07.2019, 07:24:33 Closed ITM 2
4 01.07.2019, 08:26:25 Opened Geco 3
5 01.07.2019, 09:12:44 Closed Geco 3
</code></pre>
<p>Now I'm trying to calculate the time difference within each group and the time difference between consecutive groups, in minutes. For example, the time difference within the group Bayer should be 10 minutes and 28 seconds, and the time difference between Bayer and ITM should be 25 minutes and 32 seconds. The within-group difference should be displayed in a column on the line where the group begins, and the difference between two different groups should be displayed in another column on the line where the group ends.</p>
<p>so the wished Output would be:</p>
<pre><code> Time Action Name Group Time Difference(names) Time Difference(groups)
0 01.07.2019, 06:21:33 Opened Bayer 1 10:28
1 01.07.2019, 06:32:01 Closed Bayer 1 25:32
2 01.07.2019, 06:57:33 Opened ITM 2 27:00
3 01.07.2019, 07:24:33 Closed ITM 2 1:01:52
4 01.07.2019, 08:26:25 Opened Geco 3 46:19
5 01.07.2019, 09:12:44 Closed Geco 3
</code></pre>
<p>how could i do that?</p>
|
<p>Start by making datetime from string, then some groupbys and diffs:</p>
<pre><code>df["Time"] = pd.to_datetime(df["Time"])
df["d1"] = df.groupby("Name")["Time"].diff().shift(-1).fillna("")
df["d2"] = (
df.groupby((df["Action"] == "Closed").cumsum())["Time"]
.diff()
.shift(-1)
.fillna("")
)
</code></pre>
<p>produces</p>
<pre><code>| | Time | Action | Name | Group | d1 | d2 |
|---:|:--------------------|:---------|:-------|--------:|:----------------|:----------------|
| 0 | 2019-01-07 06:21:33 | Opened | Bayer | 1 | 0 days 00:10:28 | |
| 1 | 2019-01-07 06:32:01 | Closed | Bayer | 1 | | 0 days 00:25:32 |
| 2 | 2019-01-07 06:57:33 | Opened | ITM | 2 | 0 days 00:46:19 | |
| 3 | 2019-01-07 07:24:33 | Closed | ITM | 2 | | 0 days 01:01:52 |
| 4 | 2019-01-07 08:26:25 | Opened | Geco | 3 | 0 days 00:27:00 | |
| 5 | 2019-01-07 09:12:44 | Closed | Geco | 3 | | |
</code></pre>
<p>To explain the <code>d2</code> calculation a bit: <code>(df['Action'] == 'Closed').cumsum()</code> increments by 1 for each new <code>'Closed'</code> row. Here I print it alongside <code>Action</code> for clarity, using this</p>
<pre><code>df['d2_cond'] = (df['Action'] == 'Closed').cumsum()
df[['Action', 'd2_cond']]
</code></pre>
<p>prints</p>
<pre><code>
Action d2_cond
0 Opened 0
1 Closed 1
2 Opened 1
3 Closed 2
4 Opened 2
5 Closed 3
</code></pre>
<p>so we can <code>groupby</code> on this list to put together each <code>Closed</code> with the corresponding next <code>Opened</code></p>
|
python|pandas|dataframe
| 3
|
5,632
| 64,827,154
|
Python/Pandas time series correlation on values vs differences
|
<p>I am familiar with Pandas Series corr function to compute the correlation between two Series, so example:</p>
<pre><code>import pandas as pd
import numpy as np
A = pd.Series(np.random.randint(1,101,50))
B = pd.Series(np.random.randint(1,101,50))
A.corr(B)
</code></pre>
<p>This will compute the correlation of the VALUES of the two series, but if I'm working with a time series, I might want to compute the correlation of changes (absolute or percentage changes, over 1d, 1w, 1m, etc.).</p>
<p>Some statistical software can do that quite easily, and of course I could create a series with the daily or weekly changes and then run the same function, but I wondered if there was a more Pythonic way to do this.</p>
|
<p>I guess the more pythonic way, through pandas, would be to use <code>df.pct_change()</code>:</p>
<p>Suppose A and B are time series:</p>
<pre><code>A.pct_change().corr(B.pct_change())
</code></pre>
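<p>For changes over longer horizons you can either pass <code>periods</code> or resample first (assuming the series have a <code>DatetimeIndex</code>), for example:</p>
<pre><code># correlation of 5-observation changes
A.pct_change(periods=5).corr(B.pct_change(periods=5))

# correlation of weekly percentage changes
A.resample('W').last().pct_change().corr(B.resample('W').last().pct_change())
</code></pre>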
|
python|pandas|numpy|statistics|correlation
| 1
|
5,633
| 64,667,734
|
Pretty print a dictionary matrix in Python
|
<p>Giving a dictionary of dictionary:</p>
<pre><code>dict_2d = collections.defaultdict(dict)
dict_2d['A']['A'] = 1
dict_2d['A']['B'] = 0.19
...
dict_2d['Z']['A'] = 0.76
...
dict_2d['Z']['Z'] = 1
</code></pre>
<p>Could you advise an elegant way to print such a square matrix?</p>
<p>For instance:</p>
<ol>
<li>is there a way to map or print it like a pandas.Dataframe?</li>
</ol>
<p><a href="https://i.stack.imgur.com/XkKMw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XkKMw.png" alt="enter image description here" /></a></p>
<ol start="2">
<li>is there a way to print it like an heatmap, with Strings as indexes? <a href="https://matplotlib.org/gallery/images_contours_and_fields/matshow.html#sphx-glr-gallery-images-contours-and-fields-matshow-py" rel="nofollow noreferrer">https://matplotlib.org/gallery/images_contours_and_fields/matshow.html#sphx-glr-gallery-images-contours-and-fields-matshow-py</a></li>
</ol>
|
<p>I'm not sure exactly what your print should look like, but I tried writing some code. I would recommend using <code>tabulate</code> or <code>pandas</code>, as it makes it way easier.</p>
<pre><code>df = pd.DataFrame(dict_2d)
print(df)
# A Z
# A 1.00 0.76
# B 0.19 NaN
# Z NaN 1.00
</code></pre>
<p>I will attach my quick attempt at making something similar. The code is quite ugly, and the columns and rows are flipped compared to pandas' print.</p>
<pre><code>def pp(d):
s = "\t"
keys = set([k for values in d.values() for k in values])
s += "\t".join(keys) + "\n"
for key, value in d.items():
s += key + "\t"
for k in keys:
s += str(d[key].get(k)) + "\t"
s += "\n"
print(s)
# A B Z
# A 1 0.19 None
# Z 0.76 None 1
</code></pre>
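<p>For the second part of the question, a minimal sketch of a heatmap with the string labels on both axes (going through the same DataFrame, and assuming <code>dict_2d</code> is defined as in the question) could look like this:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame(dict_2d)
fig, ax = plt.subplots()
im = ax.matshow(df.values)           # missing pairs show up as blank cells
ax.set_xticks(range(len(df.columns)))
ax.set_xticklabels(df.columns)
ax.set_yticks(range(len(df.index)))
ax.set_yticklabels(df.index)
fig.colorbar(im)
plt.show()
</code></pre>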
|
python|pandas|dictionary
| 1
|
5,634
| 64,847,271
|
Use two columns as inputs - Pandas
|
<p>I'm trying to create a new column that comes from the calculation of two columns. Usually when I need to do this but with only one column I use <code>.apply()</code> but now with two parameters I don't know how to do it.</p>
<p>With one I do the following code:</p>
<pre><code>from pandas import read_csv, DataFrame
df = read_csv('results.csv')
def myFunc(x):
x = x + 5
return x
df['new'] = df['colA'].apply(myFunc)
df.head()
</code></pre>
<p>With two I thought it would work like the following, but it doesn't.</p>
<pre><code>from pandas import read_csv, DataFrame
df = read_csv('results.csv')
def myFunc(x,y):
x = x + y
return x
df['new'] = df[['colA','colB']].apply(myFunc)
df.head()
</code></pre>
<p>I see some people use <code>lambda</code>, but I don't understand it, and furthermore I think there has to be an easier way.</p>
<p>Thank you very much!</p>
|
<p>Disclaimer: avoid <code>apply</code> if possible. With that in mind, you are looking for <code>axis=1</code>, but you need to rewrite the function like:</p>
<pre><code>df['new'] = df.apply(lambda x: myFunc(x['colA'], x['colB']),
axis=1)
</code></pre>
<p>which is essentially equivalent to:</p>
<pre><code>df['new'] = [myFunc(x,y) for x,y in zip(df['colA'], df['colB'])]
</code></pre>
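<p>Note that for a simple operation like addition you don't need <code>apply</code> at all; plain column arithmetic is vectorized and faster:</p>
<pre><code>df['new'] = df['colA'] + df['colB']
</code></pre>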
|
python|pandas
| 2
|
5,635
| 40,033,190
|
Pandas Grouping - Values as Percent of Grouped Totals Not Working
|
<p>Using a data frame and pandas, I am trying to figure out what each value is as a percentage of the grand total for the "group by" category</p>
<p>So, using the tips database, I want to see, for each sex/smoker, what the proportion of the total bill is for female smoker / all female and for female non smoker / all female (and the same thing for men)</p>
<p>For example,</p>
<p>If the complete data set is:</p>
<pre><code>Sex, Smoker, Day, Time, Size, Total Bill
Female,No,Sun,Dinner,2, 20
Female,No,Mon,Dinner,2, 40
Female,No,Wed,Dinner,1, 10
Female,Yes,Wed,Dinner,1, 15
</code></pre>
<p>The values for the first line would be (20+40+10)/(20+40+10+15), as those are the other 3 values for non smoking females</p>
<p>So the output should look like</p>
<pre><code>Female No 0.823529412
Female Yes 0.176470588
</code></pre>
<p>However, I seem to be having some trouble</p>
<p>When I do this,</p>
<pre><code>import pandas as pd
df=pd.read_csv("https://raw.githubusercontent.com/wesm/pydata- book/master/ch08/tips.csv", sep=',')
df.groupby(['sex', 'smoker'])[['total_bill']].apply(lambda x: x / x.sum()).head()
</code></pre>
<p>I get the following:</p>
<pre><code> total_bill
0 0.017378
1 0.005386
2 0.010944
3 0.012335
4 0.025151
</code></pre>
<p>It seems to be ignoring the group by and just calculating it for each line item</p>
<p>I am looking for something more like </p>
<pre><code>df.groupby(['sex', 'smoker'])[['total_bill']].sum()
</code></pre>
<p>Which will return</p>
<pre><code> total_bill
sex smoker
Female No 977.68
Yes 593.27
Male No 1919.75
Yes 1337.07
</code></pre>
<p>But I want this expressed as percentages of totals for the total of the individual sex/smoker combinations or </p>
<pre><code>Female No 977.68/(977.68+593.27)
Female Yes 593.27/(977.68+593.27)
Male No 1919.75/(1919.75+1337.07)
Male Yes 1337.07/(1919.75+1337.07)
</code></pre>
<p>Ideally, I would like to do the same with the "tip" column at the same time.</p>
<p>What am I doing wrong and how do I fix this? Thank you!</p>
|
<p>You can add another groupby step after you get the <code>sum</code> table to calculate the percentages:</p>
<pre><code>(df.groupby(['sex', 'smoker'])['total_bill'].sum()
.groupby(level = 0).transform(lambda x: x/x.sum())) # group by sex and calculate percentage
#sex smoker
#Female No 0.622350
# Yes 0.377650
#Male No 0.589455
# Yes 0.410545
#dtype: float64
</code></pre>
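<p>To handle the <code>tip</code> column at the same time, you can select both columns before summing; the same normalization then applies column by column (a sketch, assuming the same tips data):</p>
<pre><code>(df.groupby(['sex', 'smoker'])[['total_bill', 'tip']].sum()
   .groupby(level=0).transform(lambda x: x/x.sum()))
</code></pre>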
|
python|pandas|dataframe|aggregate|aggregation
| 10
|
5,636
| 39,998,594
|
Many small data table I/O for pandas?
|
<p>I have many tables (about 200K of them), each small (typically fewer than 1K rows and 10 columns), that I need to read as fast as possible in pandas. The use case is fairly typical: a function loads these tables one at a time, computes something on them and stores the final result (not keeping the content of the table in memory). </p>
<p>This is done many times over, and I can choose the storage format for these tables for the best (speed) performance.
What <a href="http://pandas.pydata.org/pandas-docs/stable/io.html" rel="nofollow">natively supported</a> storage format would be the quickest?</p>
|
<p>IMO there are a few options in this case:</p>
<ol>
<li><p>use HDF Store (AKA PyTable, H5) as <a href="https://stackoverflow.com/questions/39998594/many-small-data-table-i-o-for-pandas#comment67274829_39998594">@jezrael has already suggested</a>. You can decide whether you want to group some/all of your tables and store them in the same <code>.h5</code> file using different identifiers (or <code>keys</code> in Pandas terminology)</p></li>
<li><p>use the new and extremely fast <a href="https://pypi.python.org/pypi/feather-format" rel="nofollow noreferrer">Feather-Format (part of the Apache Arrow project)</a>. NOTE: it's still a fairly new format, so it might change in the future, which could lead to incompatibilities between different versions of the feather-format module. You also can't put multiple DFs in one <code>feather</code> file, so you can't group them.</p></li>
<li><p>use a database for storing/reading tables. PS it might be slower for your use-case.</p></li>
</ol>
<p>PS you may also want to check <a href="https://stackoverflow.com/a/37012035/5741205">this comparison</a> especially if you want to store your data in compressed format</p>
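<p>For example, a minimal sketch of option 1 (file name and keys are made up, and it requires the PyTables package) that writes and reads many small tables from a single HDF5 file:</p>
<pre><code>import numpy as np
import pandas as pd

# write: one key per table in the same .h5 file
with pd.HDFStore('tables.h5', mode='w') as store:
    for i in range(3):
        df = pd.DataFrame(np.random.randn(1000, 10))
        store.put('table_{}'.format(i), df)

# read back one table at a time
with pd.HDFStore('tables.h5', mode='r') as store:
    for key in store.keys():
        df = store[key]
        # ... compute something on df, keep only the result ...
</code></pre>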
|
pandas|io
| 1
|
5,637
| 44,339,072
|
Difference between initializing a Tensorflow Variable using np.zeros and using tf.zeros
|
<p>Are there some differences between initializing a tensorflow Variable using np.zeros and using tf.zeros?</p>
<p>For example, if we take a look at the MNIST softmax tutorial (
<a href="https://github.com/tensorflow/tensorflow/blob/r1.1/tensorflow/examples/tutorials/mnist/mnist_softmax.py" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/r1.1/tensorflow/examples/tutorials/mnist/mnist_softmax.py</a>), the variables W and b are initialized in the following way.</p>
<pre><code> W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
</code></pre>
<p>Instead of initializing using tf.zeros, the code still works fine if we use np.zeros as follows.</p>
<pre><code> W = tf.Variable(np.zeros([784, 10], dtype=np.float32))
b = tf.Variable(np.zeros([10], dtype=np.float32))
</code></pre>
<p>So, I think there might be no difference. But, I tried the following code segment assuming that the following <code>a</code> and <code>b</code> variables are the same.</p>
<pre><code>a = tf.Variable(tf.zeros((3, 2)), tf.float32)
b = tf.Variable(np.zeros((3, 2)), tf.float32)
sess = tf.InteractiveSession()
init = tf.global_variables_initializer()
sess.run(init)
a.eval()
</code></pre>
<p>The result of <code>a.eval()</code> is as follows:</p>
<pre><code>array([[ 0., 0.],
[ 0., 0.],
[ 0., 0.]], dtype=float32)
</code></pre>
<p>But the result of <code>b.eval()</code> is as follows:</p>
<pre><code>array([[ 0., 0.],
[ 0., 0.],
[ 0., 0.]])
</code></pre>
<p>So, even though the values are the same, there is a difference in dtype. Could you please explain to me why this difference exists?</p>
|
<p>For most experiments it almost does not matter. You can also provide python list of lists <code>[[0, 0, ...], ...]</code>. The difference you see in your <code>eval</code> is because <a href="https://www.tensorflow.org/api_docs/python/tf/zeros" rel="nofollow noreferrer">tf.zeros</a> by default uses <code>float32</code>. On the contrary <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html" rel="nofollow noreferrer">np.zeros</a> by default uses <code>float64</code>. Change one or another and you will not see a difference.</p>
<p>In production code it is better to use <code>tf.</code> functions for initialization to reduce the overhead.</p>
|
variables|tensorflow|initialization
| 2
|
5,638
| 69,616,471
|
Hugginface Bert Tokenizer build from source due to proxy issues
|
<p>I encountered something similar to this: <a href="https://stackoverflow.com/questions/59701981/bert-tokenizer-model-download">BERT tokenizer & model download</a></p>
<p>The link above is about downloading the Bert model itself, but I would only like to use the Bert Tokenizer.</p>
<p>Normally I could do it like this:</p>
<pre><code>from transformers import BertTokenizer
bert_tokenizer_en = BertTokenizer.from_pretrained("bert-base-uncased")
bert_tokenizer_de=BertTokenizer.from_pretrained("bert-base-german-cased")
</code></pre>
<p>But I am running it remotely, so I can't download via the method above. But I do not know which files I need from here: <a href="https://huggingface.co/bert-base-uncased/tree/main" rel="nofollow noreferrer">https://huggingface.co/bert-base-uncased/tree/main</a>, so that I could build the tokenizer?</p>
|
<p>You would need to 1) download the vocabulary and configuration files (<code>vocab.txt</code>, <code>config.json</code>), 2) put them into a folder and 3) pass the folder's path to the <code>BertTokenizer.from_pretrained(<path>)</code> function.</p>
<p>Here is the download location of <code>vocab.txt</code> for different tokenizer models</p>
<pre><code>PRETRAINED_VOCAB_FILES_MAP = {
"vocab_file": {
"bert-base-uncased": "https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt",
"bert-large-uncased": "https://huggingface.co/bert-large-uncased/resolve/main/vocab.txt",
"bert-base-cased": "https://huggingface.co/bert-base-cased/resolve/main/vocab.txt",
"bert-large-cased": "https://huggingface.co/bert-large-cased/resolve/main/vocab.txt",
"bert-base-multilingual-uncased": "https://huggingface.co/bert-base-multilingual-uncased/resolve/main/vocab.txt",
"bert-base-multilingual-cased": "https://huggingface.co/bert-base-multilingual-cased/resolve/main/vocab.txt",
"bert-base-chinese": "https://huggingface.co/bert-base-chinese/resolve/main/vocab.txt",
"bert-base-german-cased": "https://huggingface.co/bert-base-german-cased/resolve/main/vocab.txt",
"bert-large-uncased-whole-word-masking": "https://huggingface.co/bert-large-uncased-whole-word-masking/resolve/main/vocab.txt",
"bert-large-cased-whole-word-masking": "https://huggingface.co/bert-large-cased-whole-word-masking/resolve/main/vocab.txt",
"bert-large-uncased-whole-word-masking-finetuned-squad": "https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad/resolve/main/vocab.txt",
"bert-large-cased-whole-word-masking-finetuned-squad": "https://huggingface.co/bert-large-cased-whole-word-masking-finetuned-squad/resolve/main/vocab.txt",
"bert-base-cased-finetuned-mrpc": "https://huggingface.co/bert-base-cased-finetuned-mrpc/resolve/main/vocab.txt",
"bert-base-german-dbmdz-cased": "https://huggingface.co/bert-base-german-dbmdz-cased/resolve/main/vocab.txt",
"bert-base-german-dbmdz-uncased": "https://huggingface.co/bert-base-german-dbmdz-uncased/resolve/main/vocab.txt",
"TurkuNLP/bert-base-finnish-cased-v1": "https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1/resolve/main/vocab.txt",
"TurkuNLP/bert-base-finnish-uncased-v1": "https://huggingface.co/TurkuNLP/bert-base-finnish-uncased-v1/resolve/main/vocab.txt",
"wietsedv/bert-base-dutch-cased": "https://huggingface.co/wietsedv/bert-base-dutch-cased/resolve/main/vocab.txt",
}
</code></pre>
<p>Location of <code>config.json</code>:</p>
<pre><code>BERT_PRETRAINED_CONFIG_ARCHIVE_MAP = {
"bert-base-uncased": "https://huggingface.co/bert-base-uncased/resolve/main/config.json",
"bert-large-uncased": "https://huggingface.co/bert-large-uncased/resolve/main/config.json",
"bert-base-cased": "https://huggingface.co/bert-base-cased/resolve/main/config.json",
"bert-large-cased": "https://huggingface.co/bert-large-cased/resolve/main/config.json",
"bert-base-multilingual-uncased": "https://huggingface.co/bert-base-multilingual-uncased/resolve/main/config.json",
"bert-base-multilingual-cased": "https://huggingface.co/bert-base-multilingual-cased/resolve/main/config.json",
"bert-base-chinese": "https://huggingface.co/bert-base-chinese/resolve/main/config.json",
"bert-base-german-cased": "https://huggingface.co/bert-base-german-cased/resolve/main/config.json",
"bert-large-uncased-whole-word-masking": "https://huggingface.co/bert-large-uncased-whole-word-masking/resolve/main/config.json",
"bert-large-cased-whole-word-masking": "https://huggingface.co/bert-large-cased-whole-word-masking/resolve/main/config.json",
"bert-large-uncased-whole-word-masking-finetuned-squad": "https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad/resolve/main/config.json",
"bert-large-cased-whole-word-masking-finetuned-squad": "https://huggingface.co/bert-large-cased-whole-word-masking-finetuned-squad/resolve/main/config.json",
"bert-base-cased-finetuned-mrpc": "https://huggingface.co/bert-base-cased-finetuned-mrpc/resolve/main/config.json",
"bert-base-german-dbmdz-cased": "https://huggingface.co/bert-base-german-dbmdz-cased/resolve/main/config.json",
"bert-base-german-dbmdz-uncased": "https://huggingface.co/bert-base-german-dbmdz-uncased/resolve/main/config.json",
"cl-tohoku/bert-base-japanese": "https://huggingface.co/cl-tohoku/bert-base-japanese/resolve/main/config.json",
"cl-tohoku/bert-base-japanese-whole-word-masking": "https://huggingface.co/cl-tohoku/bert-base-japanese-whole-word-masking/resolve/main/config.json",
"cl-tohoku/bert-base-japanese-char": "https://huggingface.co/cl-tohoku/bert-base-japanese-char/resolve/main/config.json",
"cl-tohoku/bert-base-japanese-char-whole-word-masking": "https://huggingface.co/cl-tohoku/bert-base-japanese-char-whole-word-masking/resolve/main/config.json",
"TurkuNLP/bert-base-finnish-cased-v1": "https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1/resolve/main/config.json",
"TurkuNLP/bert-base-finnish-uncased-v1": "https://huggingface.co/TurkuNLP/bert-base-finnish-uncased-v1/resolve/main/config.json",
"wietsedv/bert-base-dutch-cased": "https://huggingface.co/wietsedv/bert-base-dutch-cased/resolve/main/config.json",
# See all BERT models at https://huggingface.co/models?filter=bert
}
</code></pre>
<p>Source: Transformers codebase <a href="https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/tokenization_bert.py#L31-L52" rel="nofollow noreferrer">1</a>, <a href="https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/configuration_bert.py#L27-L51" rel="nofollow noreferrer">2</a></p>
<p>Steps:</p>
<pre><code>mkdir ~/german-tokenizer
cd german-tokenizer
wget https://huggingface.co/bert-base-german-cased/resolve/main/vocab.txt
wget https://huggingface.co/bert-base-german-cased/resolve/main/config.json
python
# Python Runtime:
>> import transformers
>> from transformers import BertTokenizer
>> BertTokenizer.from_pretrained('~/german-tokenizer')
</code></pre>
|
python|tokenize|huggingface-transformers
| 2
|
5,639
| 69,660,556
|
pd.DataFrame cuts the decimals' mantissa
|
<p>I need to convert a list of values with a long mantissa into a pd.DataFrame without losing precision:</p>
<p><a href="https://i.stack.imgur.com/Nl05Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nl05Q.png" alt="enter image description here" /></a></p>
<p>My list of decimals:</p>
<p><a href="https://i.stack.imgur.com/Hj2tQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hj2tQ.png" alt="enter image description here" /></a></p>
<p>This is the way pd.DataFrame converts it.</p>
|
<p>The values are stored with the correct precision, they are just truncated when you display them.</p>
<p>You can change the precision used by pandas when displaying datarames by using <code>pd.set_option('display.precision', num_decimal_digits)</code>.</p>
<p>For example</p>
<pre><code>df = pd.DataFrame([0.123456789])
print('Before setting precision:')
print(df)
pd.set_option('display.precision', 9)
print('After setting precision:')
print(df)
</code></pre>
<p>prints</p>
<pre><code>Before setting precision:
0
0 0.123457
After setting precision:
0
0 0.123456789
</code></pre>
|
pandas|dataframe|decimal|mantissa
| 0
|
5,640
| 69,561,023
|
Combine value strings of two dataframes in corresponding locations within each dataframe
|
<p>I am looking for python code to paste value strings of two dataframes with identical column names, dimensions and datatype (Strings).</p>
<p>I want to create a new dataframe by pasting each element of DF1 & DF2 together which is in the corresponding location within each dataframe and have a "," delimiter between each string.</p>
<p><a href="https://i.stack.imgur.com/JdGkH.png" rel="nofollow noreferrer">Input dataframes (DF1 & DF2) and expected output</a></p>
<p>Equivalent R code would be using matrices: matrix(paste(DF1,DF2, sep=";"),nrow = nrow(DF1), dimnames=dimnames(DF1))</p>
<p>Is there a way to do this with python? Appears like it should be simple but thanks in advance!</p>
|
<p>Use <code>pd.concat</code>:</p>
<pre><code>out = pd.concat([df1.stack(), df2.stack()], axis=1) \
.apply(lambda x: ','.join(x), axis=1) \
.unstack()
</code></pre>
<p>Output:</p>
<pre><code>>>> out
A B C
0 z,f x,h c,j
1 v,t b,y n,u
2 a,u s,i d,o
</code></pre>
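<p>Since both frames have the same column names, shape and index, and all values are strings, plain elementwise concatenation would also work:</p>
<pre><code>out = df1 + ',' + df2
</code></pre>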
|
python|pandas|dataframe
| 2
|
5,641
| 41,067,917
|
Tensorflow: InvalidArgumentError for placeholder
|
<p>I tried to run the following code in Jupyter Notebook, however I got the <code>InvalidArgumentError</code> for the <code>placeholder</code>.</p>
<p>But when I wrote a Python script and ran it in a command window, it worked. I want to know how I can run my code in the Notebook successfully, thanks.</p>
<ul>
<li>OS: Ubuntu 16.04 LTS</li>
<li>Tensorflow version: 0.12rc (installed from source)</li>
</ul>
<p>Programs and Output:</p>
<p><a href="https://i.stack.imgur.com/APhpZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/APhpZ.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/rSlp3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rSlp3.png" alt="enter image description here"></a></p>
<p>Command window:</p>
<p><a href="https://i.stack.imgur.com/om9FV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/om9FV.jpg" alt="enter image description here"></a></p>
<p>Actual code:</p>
<pre><code>import tensorflow as tf
import numpy as np
raw_data = np.random.normal(10, 1, 100)
# Define alpha as a constant
alpha = tf.constant(0.05)
# A placeholder is just like a variable, but the value is injected from the
# session
curr_value = tf.placeholder(tf.float32)
# Initialize the previous average to some
prev_avg = tf.Variable(0.)
avg_hist = tf.summary.scalar("running_average", update_avg)
value_hist = tf.summary.scalar("incoming_values", curr_value)
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter("./logs")
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for i in range(len(raw_data)):
summary_str, curr_avg = sess.run([merged, update_avg], feed_dict={curr_value: raw_data[i]})
sess.run(tf.assign(prev_avg, curr_avg))
print(raw_data[i], curr_avg)
writer.add_summary(summary_str, i)
</code></pre>
|
<p>Your <code>raw_data</code> is float64 (default numpy float type) whereas your placeholder is float32 (default tensorflow float type). You should explicitly cast your data to float32</p>
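<p>For example, either cast the whole array once or cast at feed time (a small, hypothetical tweak to the code from the question):</p>
<pre><code># cast the whole array once
raw_data = np.random.normal(10, 1, 100).astype(np.float32)

# or cast each value when feeding it
feed_dict = {curr_value: np.float32(raw_data[i])}
</code></pre>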
|
tensorflow
| 1
|
5,642
| 54,125,497
|
Expected 2D array, got scalar array instead error in pandas regression
|
<p>I'm getting an error when I try to calculate a regression in pandas. Here is the code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df=pd.DataFrame({"haftalar":[1,2,3,4,5,6,7],
"degerler":[6.11,5.66,5.30,5.32,5.25,5.37,5.28]})
haftalar=df[['haftalar']]
degerler=df[['degerler']]
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(
haftalar, degerler, test_size=0.57, random_state=0)
from sklearn.linear_model import LinearRegression
lr=LinearRegression()
lr.fit(x_train,y_train)
tahmin=lr.predict(8)
print(tahmin)
</code></pre>
<p>When I try to run the code, I get this error:</p>
<pre><code>"if it contains a single sample.".format(array))
ValueError: Expected 2D array, got scalar array instead:
array=8.
Reshape your data either using array.reshape(-1, 1)
if your data has a single feature or array.reshape(1, -1)
if it contains a single sample.
</code></pre>
<p>I have an exam in 3 hours about that subject. Can you help me?</p>
|
<p>Try:</p>
<pre><code>tahmin=lr.predict([[8]])
</code></pre>
<p>More often, you may have data in <code>numpy</code> arrays like so:</p>
<pre><code>import numpy as np
x_test = np.array([8])
</code></pre>
<p>Now the error message tells you what to do:</p>
<pre><code>tahmin=lr.predict(x_test.reshape(-1, 1))
</code></pre>
|
python|python-3.x|pandas|scikit-learn|sklearn-pandas
| 0
|
5,643
| 53,869,191
|
When converting to CSV UTF-8 with python, my date columns are being turned into date-time
|
<p>When I run the following code </p>
<pre><code>import glob,os
import pandas as pd
dirpath = os.getcwd()
inputdirectory = dirpath
for xls_file in glob.glob(os.path.join(inputdirectory,"*.xls*")):
data_xls = pd.read_excel(xls_file, sheet_name=0, index_col=None)
csv_file = os.path.splitext(xls_file)[0]+".csv"
data_xls.to_csv(csv_file, encoding='utf-8', index=False)
</code></pre>
<p>It will convert all xls files in the folder into CSV as I want.
HOWEVER, on doing so, any dates such as 20/12/2018 will be converted to 20/12/2018 00:00:00 which is causing major issues with later data processing.</p>
<p>What is going wrong with this? </p>
|
<p>Nothing is "going wrong" per se. You simply need to provide a custom <code>date_format</code> to <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer"><code>df.to_csv</code></a>:</p>
<blockquote>
<p>date_format : string, default None
Format string for datetime objects</p>
</blockquote>
<p>In your case that would be</p>
<pre><code>data_xls.to_csv(csv_file, encoding='utf-8', index=False, date_format='%d/%m/%Y')
</code></pre>
<p>This will fix the way the raw data is saved to the file. If you will open the file in Excel you may still see it using the full format. This is because Excel tries to assume the cell formats based on their content. You will need to right click the column and select another cell formatting, there is nothing that pandas or Python can do about that (as long as you are using <code>to_csv</code> and not <code>to_excel</code>).</p>
|
python|excel|pandas
| 5
|
5,644
| 53,932,989
|
Output file for every input file in a folder
|
<pre><code>import pandas as pd
import glob
import csv
files=glob.glob('*.csv')
for file in files:
df=pd.read_csv(file)
a=sum(df.iloc[: , 0])
with open ('output.txt','w') as f:
f.write("sum of the first column is "+ str(a))
f.close()
</code></pre>
<p>I would like to write an output file for every input file in the folder. In each output file I would like to have the sum of the first column of that file. For example, if a folder has files [input1, input2, input3], I would like to create files [output1, output2, output3] in the same folder, one for every input file. With this code I only get one output file.</p>
|
<p>You can create a separate output file for each input file and prefix/suffix it with the name of the input file - </p>
<pre><code>import glob
import os

import pandas as pd

files = glob.glob('*.csv')
for file in files:
    df = pd.read_csv(file)
    a = sum(df.iloc[:, 0])
    # e.g. input1.csv -> output_input1.txt
    output_file_name = "output_" + os.path.splitext(file)[0] + ".txt"
    with open(output_file_name, 'w') as f:
        f.write("sum of the first column is " + str(a))
</code></pre>
<p>If your input file is <code>input1.csv</code> - you will get a corresponding output file of <code>output_input1.txt</code>. The <code>with</code> block already closes the file, so no explicit <code>close()</code> is needed.</p>
|
python|python-3.x|pandas
| 0
|
5,645
| 66,269,739
|
De-Duplicate in Pandas based off of multiple rules
|
<p>I want to de-dupe rows in pandas based off of multiple criteria.</p>
<p>I have 3 columns: name, id and nick_name.</p>
<p>The first rule is to look for duplicate ids. When ids match, only keep rows where name and nick_name are different, as long as I am keeping at least one row.</p>
<p>In other words, if name and nick_name don't match, keep that row. If name and nick_name match, then get rid of that row, as long as it isn't the only row that would be left for that id.</p>
<p>Example data:</p>
<pre><code>data = {"name": ["Sam", "Sam", "Joseph", "Joseph", "Joseph", "Philip", "Philip", "James"],
"id": [1,1,2,2,2,3,3,4],
"nick_name": ["Sammie", "Sam", "Joseph", "Joe", "Joey", "Philip", "Philip", "James"]}
df = pd.DataFrame(data)
df
</code></pre>
<p>Produces:</p>
<pre><code> name id nick_name
0 Sam 1 Sammie
1 Sam 1 Sam
2 Joseph 2 Joseph
3 Joseph 2 Joe
4 Joseph 2 Joey
5 Philip 3 Philip
6 Philip 3 Philip
7 James 4 James
</code></pre>
<p>Based on my rules above, I want a resulting dataframe to produce the following:</p>
<pre><code> name id nick_name
0 Sam 1 Sammie
3 Joseph 2 Joe
4 Joseph 2 Joey
5 Philip 3 Philip
7 James 4 James
</code></pre>
|
<p>We can split this into 3 boolean conditions to filter your initial dataframe by.</p>
<pre><code>#where name and nick_name match, keep the first value.
con1 = df.duplicated(subset=['name','nick_name'],keep='first')
# where ids are duplicated and name is not equal to nick_name
con2 = df.duplicated(subset=['id'],keep=False) & df['name'].ne(df['nick_name'])
# where no duplicate exists.
con3 = df.groupby('id')['id'].transform('size').eq(1)
print(df.loc[con1 | con2 | con3])
name id nick_name
0 Sam 1 Sammie
3 Joseph 2 Joe
4 Joseph 2 Joey
6 Philip 3 Philip
7 James 4 James
</code></pre>
|
python|pandas
| 1
|
5,646
| 66,198,363
|
Extract first column and last column of a Numpy array and populate on one dimension array
|
<p>I am a novice on Numpy.</p>
<p>I have extracted a CSV file and with the condition to populate the array below</p>
<pre class="lang-py prettyprint-override"><code>data = [(2006, 'Total Live-Births', 38317) (2007, 'Total Live-Births', 39490)
(2008, 'Total Live-Births', 39826) (2009, 'Total Live-Births', 39570)
(2010, 'Total Live-Births', 37967) (2011, 'Total Live-Births', 39654)
(2012, 'Total Live-Births', 42663) (2013, 'Total Live-Births', 39720)
(2014, 'Total Live-Births', 42232) (2015, 'Total Live-Births', 42185)
(2016, 'Total Live-Births', 41251)]
</code></pre>
<p>I want to plot a bar chart using the years and the counts, so I probably need to store an <code>x</code> array and a <code>y</code> array.
How do I get the following?</p>
<pre class="lang-py prettyprint-override"><code>x=[2006, 2007, 2008,....]
y=[39826, 39490, 39826, ...]
</code></pre>
|
<p>Assuming your data is in a CSV file laid out as shown,
just sticking to <code>numpy</code> you could do something like:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
data = np.genfromtxt('live-births.csv', delimiter=',', skip_header=1, dtype=str)
# keep only the 'Total Live-Births' values
live_births = data[data[:, 1] == 'Total Live-Births']
x = live_births[:, 0].astype(int)
y = live_births[:, 2].astype(int)
</code></pre>
<p>Using <code>pandas</code>:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.read_csv('live-births.csv')
# keep only the 'Total Live-Births' values
df = df[df['level_1'] == 'Total Live-Births']
# get as numpy arrays
x = df['year'].values.astype(int)
y = df['value'].values.astype(int)
</code></pre>
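<p>With <code>x</code> and <code>y</code> in hand, the bar chart itself is then just (the axis labels are only a suggestion):</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt

plt.bar(x, y)
plt.xlabel('Year')
plt.ylabel('Total Live-Births')
plt.show()
</code></pre>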
|
numpy
| 2
|
5,647
| 65,954,374
|
pandas mapping list to dict items for new column
|
<p>i have df like:</p>
<pre><code>col_A
[1,2,3]
[2,3]
[1,3]
</code></pre>
<p>and dict like:</p>
<pre><code>dd = {1: "Soccer", 2: "Cricket", 3: "Hockey"}
</code></pre>
<p>how can i create a new column col_B like:</p>
<pre><code>col_A col_B
[1,2,3] ["Soccer", "Cricket", "Hockey"]
[2,3] ["Cricket", "Hockey"]
[1,3] ["Soccer", "Hockey"]
</code></pre>
<p>tried something like:</p>
<pre><code>df['sports'] = df['col_A'].map(dd)
</code></pre>
<p>got error:</p>
<pre><code>TypeError: unhashable type: 'list'
</code></pre>
|
<p>You can use a list comprehension with <code>if</code> to filter out unmatched values:</p>
<pre><code>df['sports'] = df['col_A'].map(lambda x: [dd[y] for y in x if y in dd])
</code></pre>
<p>Or replace to <code>None</code> if no match:</p>
<pre><code>df['sports'] = df['col_A'].map(lambda x: [dd.get(y, None) for y in x])
</code></pre>
<p>Or return same values if no match:</p>
<pre><code>df['sports'] = df['col_A'].map(lambda x: [dd.get(y, y) for y in x])
</code></pre>
<p><strong>Sample</strong>:</p>
<pre><code>df['sports1'] = df['col_A'].map(lambda x: [dd[y] for y in x if y in dd])
df['sports2'] = df['col_A'].map(lambda x: [dd.get(y, None) for y in x])
df['sports3'] = df['col_A'].map(lambda x: [dd.get(y, y) for y in x])
print (df)
col_A sports1 sports2 \
0 [1, 2, 3, 5] [Soccer, Cricket, Hockey] [Soccer, Cricket, Hockey, None]
1 [2, 3] [Cricket, Hockey] [Cricket, Hockey]
2 [1, 3] [Soccer, Hockey] [Soccer, Hockey]
sports3
0 [Soccer, Cricket, Hockey, 5]
1 [Cricket, Hockey]
2 [Soccer, Hockey]
</code></pre>
|
python|pandas|list|dictionary
| 3
|
5,648
| 52,871,044
|
Count duplicates in rows per column in pandas DataFrame
|
<p>I have a very long table like below:</p>
<pre><code> A B C D .......
0 au br gt uy
1 cd gq gt uy
2 fg br gt ml
3 kl br gt wx
</code></pre>
<p>..............</p>
<p>I would like to count and to print duplicates per column like:</p>
<pre><code>A 0
B 2
C 3
D 1
</code></pre>
<p>I have only found to count duplicates for one column:</p>
<pre><code>df.duplicated(['B']).sum()
</code></pre>
<p>Do I have to write all columns (about 30) or is it possible to use something from pandas? I have tried this but it doesn't work:</p>
<pre><code>df.duplicated(df.loc[:,:]).sum()
</code></pre>
|
<p>Subtract <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.nunique.html" rel="noreferrer"><code>nunique</code></a> from the length of the DataFrame:</p>
<pre><code>df = len(df) - df.nunique()
print (df)
A 0
B 2
C 3
D 1
dtype: int64
</code></pre>
<p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="noreferrer"><code>apply</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.duplicated.html" rel="noreferrer"><code>duplicated</code></a> to get a boolean mask for each column separately, and <code>sum</code> to count the <code>True</code> values:</p>
<pre><code>df = df.apply(lambda x: x.duplicated()).sum()
print (df)
A 0
B 2
C 3
D 1
dtype: int64
</code></pre>
|
python|pandas|dataframe|duplicates
| 5
|
5,649
| 52,484,199
|
Series' object has no attribute 'decode in pandas
|
<p>I am trying to decode utf-8 encoded text in python. The data is loaded to a pandas data frame and then I decode. This produces an error: <strong>AttributeError: 'Series' object has no attribute 'decode'</strong>. How can I properly decode the text that is in pandas column? </p>
<pre><code>>> preparedData.head(5).to_dict( )
{'id': {0: 1042616899408945154, 1: 1042592536769044487, 2: 1042587702040903680, 3: 1042587263643930626, 4: 1042586780292276230}, 'date': {0: '2018-09-20', 1: '2018-09-20', 2: '2018-09-20', 3: '2018-09-20', 4: '2018-09-20'}, 'time': {0: '03:30:14', 1: '01:53:25', 2: '01:34:13', 3: '01:32:28', 4: '01:30:33'}, 'text': {0: "b'\\xf0\\x9f\\x8c\\xb9 are red, violets are blue, if you want to buy us \\xf0\\x9f\\x92\\x90, here is a CLUE \\xf0\\x9f\\x98\\x89 Our #flowerpowered eye &amp; cheek palette is AL\\xe2\\x80\\xa6 '", 1: "b'\\xf0\\x9f\\x8e\\xb5Is it too late now to say sorry\\xf0\\x9f\\x8e\\xb5 #tartetalk #memes'", 2: "b'@JillianJChase Oh no! Please email your order # to social@tarte.com &amp; we can help \\xf0\\x9f\\x92\\x95'", 3: 'b"@Danikins__ It\'s best applied with our buffer brush! \\xf0\\x9f\\x92\\x9c\\xc2\\xa0"', 4: "b'@AdelaineMorin DEAD \\xf0\\x9f\\xa4\\xa3\\xf0\\x9f\\xa4\\xa3\\xf0\\x9f\\xa4\\xa3'"}, 'hasMedia': {0: 0, 1: 1, 2: 0, 3: 0, 4: 0}, 'hasHashtag': {0: 1, 1: 1, 2: 0, 3: 0, 4: 0}, 'followers_count': {0: 801745, 1: 801745, 2: 801745, 3: 801745, 4: 801745}, 'retweet_count': {0: 17, 1: 94, 2: 0, 3: 0, 4: 0}, 'favourite_count': {0: 181, 1: 408, 2: 0, 3: 0, 4: 14}}
</code></pre>
<p>My data looks like the above. I want to decode the 'text' column. </p>
<p>ExampleText = b'\xf0\x9f\x8c\xb9 are red, violets are blue, if you want to buy us \xf0\x9f\x92\x90, here is a CLUE \xf0\x9f\x98\x89 Our #flowerpowered eye & cheek palette is AL\xe2\x80\xa6'</p>
<p>I could decode the text above as </p>
<pre><code>ExampleText = ExampleText.decode('utf8')
</code></pre>
<p>However, when I try to decode text from a pandas dataframe column, I get the error. I tried like this, </p>
<pre><code>preparedData['text'] = preparedData['text'].decode('utf8')
</code></pre>
<p>Then the error I get is, </p>
<pre><code>Traceback (most recent call last):
File "F:/Level 4 Research Project/makeViral/main.py", line 23, in <module>
main()
File "F:/Level 4 Research Project/makeViral/main.py", line 19, in main
preprocessedData = preprocessData(preparedData)
File "F:\Level 4 Research Project\makeViral\preprocess.py", line 34, in preprocessData
preparedData['text'] = preparedData['text'].decode('utf8')
File "C:\Users\Kabilesh\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\core\generic.py", line 4376, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'Series' object has no attribute 'decode'
</code></pre>
<p>I also tried </p>
<pre><code>preparedData['text'] = preparedData['text'].str.decode('utf8', errors='strict')
</code></pre>
<p>This does not produce any error. But the resulting 'text' column is like, </p>
<pre><code>'text': {0: nan, 1: nan, 2: nan, 3: nan, 4: nan}
</code></pre>
|
<p>I could be wrong, but I would guess that what you have are actual byte strings (<code>b"XXXXX"</code>) rather than string representations of byte strings (<code>"b'XXXXX'"</code>) as posted in your question, in which case you could do the following (you need to use the string accessor):</p>
<pre><code>preparedData['text'] = preparedData['text'].str.decode('utf8')
</code></pre>
<p>Edit:
Looks like my assumption was wrong, in which case you can do a pre-processing step:</p>
<pre><code>import ast
preparedData['text'] = preparedData['text'].apply(ast.literal_eval).str.decode("utf-8")
</code></pre>
|
python|pandas
| 7
|
5,650
| 52,736,221
|
Pandas NaN value causing trouble when change values depending on other columns
|
<p>Why are pandas NaN values sometimes typed as numpy.float64 and sometimes as float?
This is confusing when I want to use a function to change values in a dataframe depending on other columns.</p>
<p>example:</p>
<pre><code> A B C
0 1 NaN d
1 2 a s
2 2 b s
3 3 c NaN
</code></pre>
<p>I have a def to change value of column C</p>
<pre><code>def change_val(df):
if df.A==1 and df.B==np.nan:
return df.C
else:
return df.B
</code></pre>
<p>Then I apply this function onto column C</p>
<pre><code>df['C']=df.apply(lambda x: change_val(x),axis=1)
</code></pre>
<p>Things go wrong on <code>df.B==np.nan</code>, how do I correctly express this please?</p>
<p>Desired result:</p>
<pre><code> A B C
0 1 NaN d
1 2 a a
2 2 b b
3 3 c c
</code></pre>
|
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>loc</code></a>; to check for missing values, use the dedicated function <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isna.html" rel="nofollow noreferrer"><code>Series.isna</code></a>:</p>
<pre><code>mask = (df.A==1) & (df.B.isna())
#oldier pandas versions
#mask = (df.A==1) & (df.B.isnull())
df['C'] = np.where(mask, df.C, df.B)
</code></pre>
<p>Or:</p>
<pre><code>df.loc[~mask, 'C'] = df.B
</code></pre>
<hr>
<pre><code>print (df)
A B C
0 1 NaN d
1 2 a a
2 2 b b
3 3 c c
</code></pre>
<p>For more information about working with missing data check <a href="http://pandas.pydata.org/pandas-docs/stable/missing_data.html#values-considered-missing" rel="nofollow noreferrer">docs</a>.</p>
|
python|pandas|numpy
| 2
|
5,651
| 46,330,595
|
Build a dictionary from two DataFrame columns
|
<p>How to build up a dictionary with a Dataframe.</p>
<p><code>df</code></p>
<pre><code> Vendor Project
0 John Cordoba
1 Saul Pampa
2 Peter La Plata
3 John Federal District
4 Lukas Salta
5 Thomas Rio Grande
6 Peter Rio Salado
7 Peter Bahia Blanca
</code></pre>
<p>The result I need is a dictionary with Vendors as Keys and Projects as items.</p>
|
<p>Use</p>
<pre><code>In [1893]: df.groupby('Vendor')['Project'].apply(list).to_dict()
Out[1893]:
{'John': ['Cordoba', 'Federal District'],
'Lukas': ['Salta'],
'Peter': ['La Plata', 'Rio Salado', 'Bahia Blanca'],
'Saul': ['Pampa'],
'Thomas': ['Rio Grande']}
</code></pre>
<p>Or,</p>
<pre><code>In [1900]: {k: g['Project'].values.tolist() for k, g in df.groupby('Vendor')}
Out[1900]:
{'John': ['Cordoba', 'Federal District'],
'Lukas': ['Salta'],
'Peter': ['La Plata', 'Rio Salado', 'Bahia Blanca'],
'Saul': ['Pampa'],
'Thomas': ['Rio Grande']}
</code></pre>
|
python|pandas|dictionary|dataframe
| 5
|
5,652
| 58,311,504
|
filter DataFrame leaving rows where Needle is part of the list present in columnB
|
<p>I am looking for a filter function on my DataFrame to find rows where my searchTerm is in the list that is in the dataframe row.</p>
<p>I've been digging trough list related filters, they all seem to have a 'list' as needles and a single value in the dataFrame column.</p>
<pre class="lang-python prettyprint-override"><code>df = pd.DataFrame({'groups':['SportsMen','FisherMen','Students','OutdoorTypes'],'members':[['a','b'],['b'],['a'],['FisherMen','c']]})
</code></pre>
<p>Now i need to get a filtered set (preferably a clone) where 'b' is a member. In my final solution I'll have to recursively do this so I would also find
'b' is member of 'Fisherman' and thereby member of 'OutdoorTypes'</p>
|
<p><a href="https://stackoverflow.com/questions/32280556/how-to-filter-a-dataframe-column-of-lists-for-those-that-contain-a-certain-item">How to filter a DataFrame column of lists for those that contain a certain item</a></p>
<p>contains the solution:</p>
<pre><code>df[df['members'].apply(lambda x: 'b' in x)]
</code></pre>
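<p>For the recursive part mentioned in the question, here is a minimal sketch (assuming, as in the sample data, that group names can themselves appear inside the <code>members</code> lists) that repeats the filter until no new groups turn up:</p>
<pre><code>def groups_of(df, member):
    # return every group that `member` belongs to, directly or via another group
    found, frontier = set(), {member}
    while frontier:
        hits = df[df['members'].apply(lambda m: any(x in m for x in frontier))]['groups']
        frontier = set(hits) - found
        found |= frontier
    return found
groups_of(df, 'b')  # {'SportsMen', 'FisherMen', 'OutdoorTypes'}
</code></pre>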
|
python|pandas
| 0
|
5,653
| 58,460,285
|
Why is df.cumsum() giving ValueError: Wrong number of items passed, placement implies 1
|
<p>I would like to create a new column called total_amount based on the sum of each amount in each group. I would like the final data set to look like the set below. </p>
<pre><code>company   | amount | total_amount
company 1 | 10000  | 10000
company 1 | 20000  | 30000
company 1 | 30000  | 60000
company 2 | 10000  | 10000
company 2 | 20000  | 30000
company 3 | 10000  | 10000
company 4 | 10000  | 10000
company 4 | 20000  | 20000
company 5 | 10000  | 10000
company 5 | 20000  | 30000
company 5 | 30000  | 60000
company 5 | 40000  | 100000
</code></pre>
<hr>
<p>I ran this code</p>
<pre><code> df['total_amount'] = df.groupby('company').cumsum()
</code></pre>
<p>and it worked briefly but when I tried to change its position to make my code more readable, it started giving me KeyError "total_amount" and the value error listed above. What am I doing wrong?</p>
|
<p>It indicates <code>cumsum</code> returned more than one column. In other words, <code>df.groupby('company').cumsum()</code> calls <code>cumsum</code> on a <code>DataFrameGroupBy</code> object, so it returns a dataframe. If the returned dataframe has only one column, the assignment still works; however, if it has two or more columns, it fails with the error above. I suspect your first run returned a one-column dataframe, so it worked, but that first run also created an additional column. On the next runs the groupby returns an n-column dataframe, so the assignment failed.</p>
<p>Try this to fix your error:</p>
<pre><code>df['total_amount'] = df.groupby('company')['amount'].cumsum()
</code></pre>
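<p>A quick way to see the difference on a toy frame (the column names here are made up for illustration):</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'company': ['a', 'a', 'b'], 'amount': [1, 2, 3], 'other': [10, 20, 30]})
print(type(df.groupby('company').cumsum()))            # DataFrame with 2 columns -> assignment fails
print(type(df.groupby('company')['amount'].cumsum()))  # Series -> can be assigned to a column
</code></pre>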
|
python|pandas|cumsum
| 0
|
5,654
| 69,223,955
|
Building a Neural Network for Binary Classification on Top of Pre-Trained Embeddings Not Working?
|
<p>I am trying to build a neural network on top of the embeddings that a pre-trained model outputs. Specifically, I have the logits of a base model saved to disk, where each example is an array of shape 512 (which originally corresponds to an image) with an associated label (0 or 1). This is what I am doing right now:</p>
<p>Here's the model definition and training loop that I have. Right now it is a simple Linear layer, just to make sure that it works, however, when I run this script, the loss starts at .4 and not ~.7 which is the standard for binary classification. Can anyone spot where I am going wrong?</p>
<pre><code>from transformers.modeling_outputs import SequenceClassifierOutput
class ClassNet(nn.Module):
def __init__(self, num_labels=2):
super(ClassNet, self).__init__()
self.num_labels = num_labels
self.classifier = nn.Linear(512, num_labels) if num_labels > 0 else nn.Identity()
def forward(self, inputs):
logits = self.classifier(inputs)
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
return SequenceClassifierOutput(
loss=loss,
logits=logits
)
model = ClassNet()
optimizer = optim.Adam(model.parameters(), lr=1e-4,weight_decay=5e-3) # L2 regularization
loss_fct=nn.CrossEntropyLoss()
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(train_loader, 0):
# get the inputs; data is a list of [inputs, labels]
#data['embeddings'] -> torch.Size([1, 512])
#data['labels'] -> torch.Size([1])
inputs, labels = data['embeddings'], data['labels']
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(inputs)
loss = loss_fct(outputs.logits.squeeze(1), labels.squeeze())
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
</code></pre>
<p>an example of printing the <code>outputs.logits.squeeze(1)</code> and <code>labels.squeeze()</code>:</p>
<pre><code>#outputs.logits.squeeze(1)
tensor([[-0.2214, 0.2187],
[ 0.3838, -0.3608],
[ 0.9043, -0.9065],
[-0.3324, 0.4836],
[ 0.6775, -0.5908],
[-0.8017, 0.9044],
[ 0.6669, -0.6488],
[ 0.4253, -0.5357],
[-1.1670, 1.1966],
[-0.0630, -0.1150],
[ 0.6025, -0.4755],
[ 1.8047, -1.7424],
[-1.5618, 1.5331],
[ 0.0802, -0.3321],
[-0.2813, 0.1259],
[ 1.3357, -1.2737]], grad_fn=<SqueezeBackward1>)
#labels.squeeze()
tensor([1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0])
#loss
tensor(0.4512, grad_fn=<NllLossBackward>)
</code></pre>
|
<p>You are only printing from the second iteration onward. The condition above effectively prints at steps <code>i = 1, 2001, 4001, ...</code>, but <code>i</code> starts at <code>0</code>.</p>
<pre><code>if i % 2000 == 1: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
</code></pre>
<p><em>i.e.</em> one gradient descent step has already occurred. This might be enough to go from the initial loss value <code>-log(1/2) = ~0.69</code> to the one you observed <code>~0.45</code>.</p>
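<p>If you want to see the untouched initial loss as well, a small tweak (just a sketch) is to log at <code>i == 0</code> too; the loss there was computed in the forward pass before the first <code>optimizer.step()</code>, and this prints the raw loss of that single batch rather than the running average:</p>
<pre><code>if i % 2000 == 0:  # i == 0 now prints the loss of the very first batch, before any update
    print('[%d, %5d] loss: %.3f' %
          (epoch + 1, i + 1, loss.item()))
</code></pre>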
|
python|machine-learning|deep-learning|neural-network|pytorch
| 0
|
5,655
| 69,102,683
|
Python - Join two dataframes based on closest date match and additional column
|
<p>I have two dataframes that I need to join on two columns, one of which is a date column. However, the dates do not match, as shown in the example below. I have seen people on other posts using <code>merge_asof</code> for similar examples, but I believe that will not work here as I also need to join on another non-date column (pty). I would like to output the closest alert_dt to the inv_dt that is before or the same as the inv_dt. First post for me, so please let me know if anything is unclear.</p>
<p>DataFrame A:</p>
<pre><code>|alert_dt|pty|
|--------|---|
| 01/06/2020|A|
| 08/06/2020|A|
| 12/06/2020|A|
| 27/06/2020|A|
| 12/06/2020|B|
| 15/07/2020|B|
</code></pre>
<p>DataFrame B:</p>
<pre><code>|inv_dt | pty|
|-----------|----|
| 07/06/2020| A |
| 14/06/2020| A |
| 27/06/2020| A |
| 12/07/2020| B |
| 15/08/2020| B |
</code></pre>
<p>Desired Output:</p>
<pre><code>|inv_dt|closest_alert_dt_before_inv_dt|pty|
|------|--------|---|
|07/06/2020| 01/06/2020|A|
|14/06/2020| 08/06/2020|A|
|27/06/2020| 27/06/2020|A|
|12/07/2020|12/06/2020|B|
|15/08/2020|15/07/2020|B|
</code></pre>
|
<p>My output is a bit different in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer"><code>merge_asof</code></a> method:</p>
<pre><code>df1['alert_dt'] = pd.to_datetime(df1['alert_dt'], dayfirst=True)
df2['inv_dt'] = pd.to_datetime(df2['inv_dt'], dayfirst=True)
df = pd.merge_asof(df2.sort_values('inv_dt'),
df1.sort_values('alert_dt'),
left_on='inv_dt',
right_on='alert_dt',
by='pty')
print (df)
inv_dt pty alert_dt
0 2020-06-07 A 2020-06-01
1 2020-06-14 A 2020-06-12
2 2020-06-27 A 2020-06-27
3 2020-07-12 B 2020-06-12
4 2020-08-15 B 2020-07-15
</code></pre>
|
python|pandas
| 2
|
5,656
| 69,170,906
|
Convert a string column to period in pandas preserving the string
|
<p>I would like to understand if I can convert a <em>string</em> column to a <strong>PeriodIndex</strong> (for instance year), preserving the string (suffix).</p>
<p>I have the following DataFrame:</p>
<pre><code>company date ... revenue taxes
Facebook 2017-01-01 00:00:00 Total ... 1796.00 0.00
Facebook 2018-07-01 00:00:00 Total ... 7423.20 -11.54
Facebook Total - ... 1704.00 0.00
Google 2017-12-01 00:00:00 Total ... 1938.60 -1938.60
Google 2018-12-01 00:00:00 Total ... 1403.47 -102.01
Google 2018-01-01 00:00:00 Total ... 2028.00 -76.38
Google Total - ... 800.00 -256.98
</code></pre>
<p>I'm trying to apply the PeriodIndex to <strong>date</strong>:</p>
<p><code>df['date'] = pd.PeriodIndex(df['date'].values, freq='Y')</code></p>
<p>However, nothing happens because Pandas can't convert it to a string. I can't remove the word Total from my <strong>DataFrame</strong>.</p>
<p>This is what I expect to achieve:</p>
<pre><code>company date ... revenue taxes
Facebook 2017 Total ... 1796.00 0.00
Facebook 2018 Total ... 7423.20 -11.54
Facebook Total - ... 1704.00 0.00
Google 2017 Total ... 1938.60 -1938.60
Google 2018 Total ... 1403.47 -102.01
Google 2018 Total ... 2028.00 -76.38
Google Total - ... 800.00 -256.98
</code></pre>
<p>Any way I can get around with this?</p>
<p>Thanks!</p>
|
<p>Let's say there is a dummy dataframe, similar to yours:</p>
<pre><code>dictionary = {'company' : ['Facebook', 'Facebook', 'Facebook_Total','Google','Google_Total'],
'date' : ['2019-09-14 09:00:08.279000+09:00 Total',
'2020-09-14 09:00:08.279000+09:00 Total',
'-',
'2021-09-14 09:00:08.279000+09:00 Total',
'-'],
'revenue' : [10,20,30,40,50]}
df = pd.DataFrame(dictionary)
</code></pre>
<p>I used the <code>regex</code> module to delete <strong>Total</strong> after the <em>year</em> in the date column as follows:</p>
<pre><code>substring = ' Total'
for i in range(len(df)):
if re.search(substring, df['date'][i] , flags=re.IGNORECASE):
df['date'][i] = df['date'][i].replace(' Total','')
else: pass
</code></pre>
<p>Then, I used <code>pd.PeriodIndex</code> as follows:</p>
<pre><code>for i in range(len(df)) :
if df['date'][i] == '-':
pass
else:
df['date'][i] = pd.PeriodIndex(pd.Series(df['date'][i]), freq='Y')[0]
for i in range(len(df)):
if df['date'][i] == '-':
pass
else:
df['date'][i] = str(df['date'][i]) + ' Total'
</code></pre>
<p>The above code returns :</p>
<pre><code>Out[1]:
company date revenue
0 Facebook 2019 Total 10
1 Facebook 2020 Total 20
2 Facebook_Total - 30
3 Google 2021 Total 40
4 Google_Total - 50
</code></pre>
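<p>A more compact, vectorized alternative (just a sketch, assuming the only non-date values are the '-' placeholders, and using <code>to_datetime</code> rather than <code>PeriodIndex</code> to pull out the year):</p>
<pre><code>mask = df['date'] != '-'
df.loc[mask, 'date'] = (pd.to_datetime(df.loc[mask, 'date'].str.replace(' Total', '', regex=False))
                          .dt.year.astype(str) + ' Total')
</code></pre>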
|
python|pandas|dataframe|datetime|period
| 2
|
5,657
| 68,883,614
|
Keras Captcha OCR - How to pass single jpeg image to loaded (trained) model and receive prediction in string?
|
<p>For the past several hours I have been looking all over the internet for a way to pass a single JPEG image into my pre-trained model (saved and loaded) and receive the prediction as a string.</p>
<p>I am using Captcha OCR from this source - <a href="https://keras.io/examples/vision/captcha_ocr/" rel="nofollow noreferrer">https://keras.io/examples/vision/captcha_ocr/</a></p>
<p>Those two approaches below got me the farthest (I think) but they are still not working:</p>
<p>APPROACH 1:</p>
<pre><code>model = load_model('trained_models/my_trained_model.h5', custom_objects={'CTCLayer': CTCLayer})
img_path = '/test/my_image.jpeg'
img = image.load_img(img_path, target_size=(200, 50))
img_array = image.img_to_array(img)
img_batch = np.expand_dims(img_array, axis=0)
img_preprocessed = preprocess_input(img_batch)
prediction = model.predict(img_preprocessed)
</code></pre>
<p>With this approach I didn't convert the image to greyscale, but before that could cause any trouble I receive this error:</p>
<pre><code>ValueError: Layer ocr_model_v1 expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 200, 50, 3) dtype=float32>]
</code></pre>
<p>APPROACH 2:
This approach is pretty much copied from the data preprocessing of the OCR model:</p>
<pre><code>img = tf.io.read_file(img_path)
img = tf.io.decode_jpeg(img, channels=1)
img = tf.image.convert_image_dtype(img, tf.float32)
img = tf.image.resize(img, [200, 50])
img_preprocessed = tf.transpose(img, perm=[1, 0, 2])
prediction = model.predict(img_preprocessed)
</code></pre>
<p>And it gives me pretty much the same error:</p>
<pre><code>ValueError: Layer ocr_model_v1 expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 200, 1) dtype=float32>]
</code></pre>
<p>But this time it looks like the image is additionally malformed.</p>
<p>I think this error is caused by this line in OCR:</p>
<pre><code># Define the model
model = keras.models.Model(
inputs=[input_img, labels], outputs=output, name="ocr_model_v1"
)
</code></pre>
<p>Since the model expects two inputs (during training we were passing a dict with the image and the image's label, i.e. the answer to the captcha), but now I would like this model to actually predict from the image alone, I am not able to pass the answer/label.</p>
<p>After several hours, I was able to push this up to this moment but right now I ran out of ideas.</p>
<p>Could someone please point me in the right direction?</p>
<p>/// ---------- /// EDIT /// ---------- ///</p>
<p>Hi! I wanted to edit my question. In the meantime I was able to pass a JPEG into this model, but in a not-so-clean way. I basically copied all the code from the last part of this tutorial - <a href="https://keras.io/examples/vision/captcha_ocr/" rel="nofollow noreferrer">https://keras.io/examples/vision/captcha_ocr/</a></p>
<p>Thanks to that I am not receiving any errors but there is really a lot of code that seems redundant but I am not able to refactor it efficiently.</p>
<p>With this code changed:</p>
<pre><code>prediction_model = keras.models.Model(
trained_model.get_layer(name="image").input, trained_model.get_layer(name="dense2").output
)
</code></pre>
<p>Now I am receiving errors about wrong Shapes etc. Is it possible to somehow refactor code from section "Inference" of this tutorial?</p>
|
<p>From the last code snippet and your statement that the model expects two inputs during training but should predict from the image alone:</p>
<p>You are trying to train a model that outputs the label of a Captcha image. Therefore, your model has to be either a multiclass classification model [if there is a small, finite number of image labels] or an image-similarity-search / dictionary-learning method [if there is a very large number of classes]. However, under both model architectures your dataset format has to be {X : Captcha image, y : related image label}.<br />
<code>model = keras.model.fit(inputs=input_img, outputs=labels)</code><br />
or, if the data preprocessing step outputs a dataset object,<br />
<code>model = keras.model.fit(data=img_preprocessed)</code><br />
The resulting model will support both of the inference [prediction] methods stated above.</p>
|
python|tensorflow|keras|ocr|captcha
| 0
|
5,658
| 61,150,879
|
Mapping a Dataframe into another using conditionals
|
<p>I would like to map one dataframe into another, though it is not so simple because I am using 2 conditions to execute the mapping - I will explain them below. Basically, what I am trying to do is given two dataframes, df1 and df2, such that:</p>
<p>df1:</p>
<pre><code>A B Type
Heart Spades Boo
Heart Clubs Fog
Spades Diamonds Bler
</code></pre>
<p>df2:</p>
<pre><code>A B Boo Fog Bler
Heart Spades True True True
Spades Diamonds True False True
Heart Spades True True False
</code></pre>
<p>I could map the values contained in the columns 'Boo', 'Fog', 'Bler' into a new column in df1 called 'Verification', resulting in:</p>
<pre><code>A B Type Verification
Heart Spades Boo True
Heart Clubs Fog
Spades Diamonds Bler True
</code></pre>
<p>Then, to do this mapping I have 2 conditions that need to be satisfied: the values in df1 and in df2 for the columns A and B must be equal (they act as keys), and the mapping should take the value from the df2 column named by the Type in df1. I am having two difficulties:</p>
<ol>
<li>The mapping requires two columns so I am not able to figure out a way to use pandas.series.map; furthermore I was not able to apply Dataframe.loc[conditions] in this context so that the conditions compare df1 and df2.</li>
<li>The example above is quite short, but the data set that I am working on has many combinations of values in A and B, hence it is unreasonable to hand-write a mapping from each (A, B) pair and type to a value.</li>
</ol>
<p>Do you have any suggestions?</p>
|
<p>Try <code>melt</code> and <code>drop_duplicates</code> on <code>df2</code>. Finally, left <code>merge</code> df1 to the result of <code>melt</code> and <code>drop_duplicates</code></p>
<pre><code>df_final = (df1.merge(df2.melt(['A','B'], var_name='Type', value_name='Verification')
.drop_duplicates(['A','B','Type']), how='left'))
Out[240]:
A B Type Verification
0 Heart Spades Boo True
1 Heart Clubs Fog NaN
2 Spades Diamonds Bler True
</code></pre>
<hr>
<p><strong>Note</strong>: on <code>df2</code>, the value of <code>bler</code> for <code>Spades Diamonds</code> (2nd row) is <code>True</code>, so its <code>Verification</code> is <code>True</code> in the output</p>
|
python|pandas|dataframe
| 2
|
5,659
| 61,072,251
|
How can I find Feature importance from sklearn api, after I have converted my categorical variables into dummy variables?
|
<p>After we have converted our categorical variables to dummy variables for training the model, we tend to look at feature importance. But sklearn's model.feature_importances_ attribute returns a feature importance for every dummy variable, rather than for the original categorical variable. How can this be fixed?</p>
|
<p>Because the dummy variables are used to train the model, you cannot find the importance of the original categorical variable. It is mathematically impossible.</p>
|
python|pandas|machine-learning|scikit-learn|data-science
| 0
|
5,660
| 71,517,905
|
Merge with replacement of smaller dataframe to larger dataframe
|
<p>I have two DataFrames that look like this:</p>
<p>DF1:</p>
<pre><code>index colA colB
id1 0 0
id2 0 0
id3 0 0
id4 0 0
id5 0 0
</code></pre>
<p>DF2:</p>
<pre><code>index colA colB
id3 A3 B3
id4 A4 B4
id6 A6 B6
</code></pre>
<p>I want to infuse values from <code>DF2</code> to <code>DF1</code>. I was trying to merge but it does not replace the values and creates newer columns. The resulting DataFrame I want should look like this:</p>
<p>DF1:</p>
<pre><code>index colA colB
id1 0 0
id2 0 0
id3 A3 B3
id4 A4 B4
id5 0 0
id6 A6 B6
</code></pre>
<p>Note: it should create a new index in <code>DF1</code> if <code>DF2</code> has some index not present in <code>DF1</code>. Also columns <code>index</code> are the index of DataFrames and not column names.</p>
|
<p>Here's one way using <code>concat</code> + <code>drop_duplicates</code>:</p>
<pre><code>out = pd.concat((df1, df2)).reset_index().drop_duplicates(subset=['index'], keep='last').set_index('index').sort_index()
</code></pre>
<p>or use <code>reindex</code> + <code>update</code>:</p>
<pre><code>df1 = df1.reindex(df1.index.union(df2.index))
df1.update(df2)
</code></pre>
<p>Output:</p>
<pre><code> index colA colB
0 id1 0 0
1 id2 0 0
0 id3 A3 B3
1 id4 A4 B4
4 id5 0 0
2 id6 A6 B6
</code></pre>
|
python|pandas|dataframe|merge
| 2
|
5,661
| 71,645,121
|
Pandas replace change so values to NaN
|
<p>I have the the following dataframe:</p>
<pre><code># dictionary with list object in values
details = {
'Name' : ['D', 'C', 'F', 'G','A','N'],
'values' : ['21%','45%','10%',12,14,15],
}
df = pd.DataFrame(details)
</code></pre>
<p>The values column holds percentages; however, some were originally saved as strings with the % symbol and some as numbers. I would like to get rid of the % and have them all as int type. For that I have used replace and then astype. However, when I replace the '%', the values that don't have % change to NaN values:</p>
<pre><code>df['values']=df['values'].str.replace('%', '')
df
>>> Name values
0 D 21
1 C 45
2 F 10
3 G NaN
4 A NaN
5 N NaN
</code></pre>
<p>My <strong>required output</strong> should be:</p>
<pre><code>>>> Name values
0 D 21
1 C 45
2 F 10
3 G 12
4 A 14
5 N 15
</code></pre>
<p>My question is, how can I get rid of the % and get the column with the values, without getting these NaN values? And why is this happening?</p>
|
<p>There are numeric values in the column, so using the <code>str</code> accessor produces missing values for them. A possible solution is to use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.replace.html" rel="nofollow noreferrer"><code>Series.replace</code></a> with <code>regex=True</code> to replace by substring, and then, because the result mixes numbers and strings, convert the output to integers:</p>
<pre><code>df['values']=df['values'].replace('%', '', regex=True).astype(int)
print (df)
Name values
0 D 21
1 C 45
2 F 10
3 G 12
4 A 14
5 N 15
</code></pre>
<p>Or your solution with replace missing values:</p>
<pre><code>df['values']=df['values'].str.replace('%', '').fillna(df['values']).astype(int)
</code></pre>
|
python|pandas|replace|nan
| 1
|
5,662
| 71,688,065
|
Generic requirements.txt for TensorFlow on both Apple M1 and other devices
|
<p>I have a new MacBook with the Apple M1 chipset. To install tensorflow, I follow the instructions <a href="https://developer.apple.com/metal/tensorflow-plugin/" rel="noreferrer">here</a>, i.e., installing <code>tensorflow-metal</code> and <code>tensorflow-macos</code> instead of the normal <code>tensorflow</code> package.</p>
<p>While this works fine, it means that I can't run the typical <code>pip install -r requirements.txt</code> as long as we have <code>tensorflow</code> in the <code>requirements.txt</code>. If we instead include <code>tensorflow-macos</code>, it'll lead to problems for non-M1 or even non-macOS users.</p>
<p>Our library must work on all platforms. Is there a generic install command that installs the correct TensorFlow version depending on whether the computer is a M1 Mac or not? So that we can use a single <code>requirements.txt</code> for everyone?</p>
<p>Or if that's not possible, can we pass some flag/option, e.g., <code>pip install -r requirements.txt --m1</code> to install some variation?
What's the simplest and most elegant solution here?</p>
|
<p>According to this post <a href="https://stackoverflow.com/questions/29222269/is-there-a-way-to-have-a-conditional-requirements-txt-file-for-my-python-applica">Is there a way to have a conditional requirements.txt file for my Python application based on platform?</a></p>
<p>You can use conditionals on your requirements.txt, thus</p>
<pre><code>tensorflow==2.8.0; sys_platform != 'darwin' or platform_machine != 'arm64'
tensorflow-macos==2.8.0; sys_platform == 'darwin' and platform_machine == 'arm64'
</code></pre>
|
python|macos|tensorflow|apple-m1|requirements.txt
| 2
|
5,663
| 71,669,501
|
Having trouble calculating mean squared error in sklearn python
|
<p>I am trying to fit a decision tree regressor to a dataset, and it is working, but when I test it out by calculating the mean squared error, I get an error that looks like this:</p>
<pre><code>msee = mse(x_test, y_test)
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_17480/3348210221.py in <module>
----> 1 msee = mse(x_test, y_test)
~\anaconda3\lib\site-packages\sklearn\metrics\_regression.py in mean_squared_error(y_true, y_pred, sample_weight, multioutput, squared)
436 0.825...
437 """
--> 438 y_type, y_true, y_pred, multioutput = _check_reg_targets(
439 y_true, y_pred, multioutput
440 )
~\anaconda3\lib\site-packages\sklearn\metrics\_regression.py in _check_reg_targets(y_true, y_pred, multioutput, dtype)
103
104 if y_true.shape[1] != y_pred.shape[1]:
--> 105 raise ValueError(
106 "y_true and y_pred have different number of output ({0}!={1})".format(
107 y_true.shape[1], y_pred.shape[1]
ValueError: y_true and y_pred have different number of output (4!=1)
</code></pre>
<p>Here is the model code and a head of the df I am training the model on:</p>
<pre><code>x = np.array(bat[["TB_x"]])
y = np.array(bat[["TB_y"]])
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size= .2, random_state= 1)
dt = DecisionTreeRegressor(max_depth= 10, random_state= 1, min_samples_leaf=.1)
dt.fit(x_train.reshape(-1,1), y_train.reshape(-1,1))
y_pred = dt.predict(x_test)
index Year Age_x AgeDif_x Tm_x Lg_x Lev_x G_x PA_x AB_x ... BA_y OBP_y SLG_y OPS_y TB_y GDP_y HBP_y SH_y SF_y IBB_y
0 19 2019 22.0 1.5 UCLA P12 NCAA 38.0 72.0 58.0 ... 0.179 0.364 0.194 0.558 13.0 0.0 1.0 2.0 1.0 0.0
2 24 2020 23.0 1.7 St. Leo SSC NCAA 20.0 86.0 69.0 ... 0.156 0.309 0.219 0.527 14.0 0.0 2.0 0.0 2.0 0.0
6 45 2020 20.0 -0.8 Illinois BTen NCAA 13.0 58.0 47.0 ... 0.200 0.343 0.288 0.631 23.0 1.0 1.0 0.0 1.0 0.0
7 46 2020 20.0 -0.8 Illinois BTen NCAA 13.0 58.0 47.0 ... 0.156 0.309 0.219 0.527 14.0 0.0 2.0 0.0 2.0 0.0
8 49 2020 21.0 0.3 Miami (FL) ACC NCAA 16.0 69.0 54.0 ... 0.200 0.343 0.288 0.631 23.0 1.0 1.0 0.0 1.0 0.0
</code></pre>
|
<p>from the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>Parameters y_true array-like of shape (n_samples,) or (n_samples,
n_outputs) Ground truth (correct) target values.</p>
<p>y_pred array-like of shape (n_samples,) or (n_samples, n_outputs)
Estimated target values.</p>
</blockquote>
<p>So, instead of feeding in x_test and y_test, you need to feed in true and predicted y-values:</p>
<pre><code> y_pred = dt.predict(x_test)
mse(y_test, y_pred)
</code></pre>
<p>or</p>
<pre><code>mse(y_test, dt.predict(x_test))
</code></pre>
|
python|numpy|scikit-learn|decision-tree
| 0
|
5,664
| 71,727,478
|
Trying to plot a graph with Pandas but only wish to display data from 1 row of a csv between two dates
|
<p>I have a CSV file of weather data.</p>
<p>I have the file indexed by date (e.g., the date of the reading).</p>
<p>One of my columns is 'Humidity' and contains humidity data.</p>
<p>I wish to use the .plot function but I wish to limit the data set to between two dates.</p>
<p>To discriminate by time I used this to view my rows,</p>
<p>london[london.loc[datetime(2021,3,1) : datetime(2021,5,31)]]</p>
<p>With london being;</p>
<p>london = read_csv('London_2021.csv')</p>
<p>My question is how can I modify this
london['Mean Humidity'].plot(grid=True, figsize=(10,5))</p>
<p>To only display the data between the two dates?</p>
|
<p>What about</p>
<pre><code>london.loc[start_date : end_date]['Mean Humidity'].plot(grid=True, figsize=(10,5))
</code></pre>
|
python|pandas|csv
| 0
|
5,665
| 71,629,686
|
Python - how to add another "column" to numpy ndarray
|
<p>I have a 34000,18 dimension numpy array and I have a 34000,1 array that needs to be appended to the first one at the end.</p>
<p><code>X_train.shape</code>
=(34189, 18)</p>
<p><code>type(X_train)</code>
= numpy.ndarray</p>
<p><code>y_train.shape</code>
= (34189,)</p>
<p>My attempt:</p>
<pre><code>Data = np.append(X_train, y_train)
</code></pre>
<p>and now it's returning a (649591,) np array.</p>
<p>Any help please?</p>
<p>Additionally, how would I take a column out of the numpy.ndarray?
I.e. after I have put them together and sorted my data, how would I then take the (34189, 19) array and turn it back into two arrays of shape (34189, 18) and (34189, 1)? (Reversing what I am asking above.)</p>
<p>Thank you</p>
|
<p>An array with <code>ndim == 1</code> is implicitly a row, not a column. The simplest way would be to turn it into a column and use <code>np.concatenate</code>:</p>
<pre><code>np.concatenate((X_train, Y_train[:, None]), axis=1)
</code></pre>
<p>You could do the same with <code>np.append</code>:</p>
<pre><code>np.append(X_train, Y_train[:, None], axis=1)
</code></pre>
<p>Other ways to turn <code>Y_train</code> into a column include <code>Y_train.reshape(-1, 1)</code> and <code>np.atleast_2d(Y_train)</code>.</p>
|
python|dataframe|numpy
| 0
|
5,666
| 42,449,339
|
General way to test restored tensorflow model
|
<p>I am very, very brand new to Tensorflow, and need to write a script that tests a single example on a model restored from a checkpoint file. </p>
<p>I was wondering if there was a general way to build a test function for a restored model without knowing all the minute details of the model. </p>
<p>Further, in the last section of the code below, does this look like I'm headed in the right direction? If so, how does one build 'y' without knowing details of the model by heart?</p>
<pre><code>import tensorflow as tf
from tensorflow.python import pywrap_tensorflow
import numpy as np
from fuel.datasets.hdf5 import H5PYDataset
ckpt_path='ckt/mnist/mnist_2017_02_23_17_22_50/mnist_2017_02_23_17_22_50_5000.ckpt'
##############################
#### Initialize Variables ####
##############################
reader = pywrap_tensorflow.NewCheckpointReader(ckpt_path)
var_to_shape_map = reader.get_variable_to_shape_map()
var=[0]*len(var_to_shape_map)
i=0
for key in var_to_shape_map:
var[i] = tf.Variable(reader.get_tensor(key), name=key)
#print("tensor_name: ", key)
#print(reader.get_tensor(key))
i=i+1
initialize=tf.global_variables_initializer()
###############################
####### Restore Model #########
###############################
saver = tf.train.Saver()
sess = tf.Session()
saver.restore(sess, ckpt_path)
###############################
##### Get Example to Test #####
###############################
test_set = H5PYDataset('../CNN3D/data/bmnist.hdf5', which_sets=('test',))
handle = test_set.open()
for i in range(0,100):
test_data = test_set.get_data(handle, slice(i, i+1))
if test_data[1][0][0]==8:
model_idx=i
test_data = test_set.get_data(handle, slice(model_idx,model_idx+1))
data = tf.Variable(np.asarray(test_data[0][0][0]), name='data')
###############################
######## Test Example #########
###############################
x = tf.placeholder(tf.float32,shape=[28,28])
y = ???
sess.run(initialize)
result=sess.run(y, feed_dict={x: data})
print result
</code></pre>
|
<p>The <a href="https://www.tensorflow.org/extend/estimators" rel="nofollow noreferrer">Estimator</a> class has a convenient set of utilities such that, if your model is wrapped in an Estimator, loading it and predicting from it is easy.</p>
<p>Overall, without some kind of coordination, this will be hard.</p>
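<p>For completeness, a minimal sketch of the non-Estimator route (TF 1.x), assuming the input placeholder and output op were given known names such as <code>'x'</code> and <code>'y'</code> when the graph was built; those names are an assumption here, not something the checkpoint guarantees:</p>
<pre><code>import tensorflow as tf
saver = tf.train.import_meta_graph(ckpt_path + '.meta')   # rebuilds the graph saved alongside the checkpoint
with tf.Session() as sess:
    saver.restore(sess, ckpt_path)                         # loads the variable values
    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name('x:0')                    # assumed input placeholder name
    y = graph.get_tensor_by_name('y:0')                    # assumed output tensor name
    print(sess.run(y, feed_dict={x: test_data[0][0][0]}))  # the 28x28 numpy example from the question
</code></pre>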
|
python-2.7|tensorflow
| 0
|
5,667
| 42,392,260
|
Return equivalent of `:` from function for indexing array
|
<p>I have a large array and a function that returns index lists into the array, i.e.,</p>
<pre><code>import numpy
n = 500
a = numpy.random.rand(n)
def get_idx(k):
# More complicated in reality
return range(n) if k > 6 else range(k)
data = a[get_idx(29)]
data = a[get_idx(30)]
# ...
</code></pre>
<p>A typical case is that the range is the entire array, <code>range(n)</code>. Unfortunately, <code>a[range(n)]</code> scales with <code>n</code> while <code>a[:]</code> is of course constant-time. It's a pity that one cannot return <code>:</code> from <code>get_idx</code>.</p>
<p>What can I return from <code>get_idx</code> to use as an index for the entire array?</p>
|
<p>Have a look at <a href="https://docs.python.org/3/library/functions.html#slice" rel="nofollow noreferrer"><code>slice</code></a></p>
<pre><code>def get_x():
return slice(2)
a=list(range(100))
a[get_x()]
</code></pre>
<p>will return <code>[0, 1]</code></p>
<h2>UPDATE</h2>
<p>And for your need <code>get_x</code> function should be</p>
<pre><code>def get_x(k, n):
return slice(n if k > 6 else k)
</code></pre>
<h3>Update</h3>
<p>As @Eric correctly noted, it's better to pass <code>None</code> instead of <code>n</code>.
So the function would be:</p>
<pre><code>def get_x(k):
return slice(None if k > 6 else k)
</code></pre>
|
python|arrays|numpy
| 9
|
5,668
| 42,191,656
|
Tensorflow learning XOR with linear function even though it shouldn't
|
<p>I am implementing a simple network in tensorflow and for pedagogical purposes, I am trying to show that the linear transformation:</p>
<pre><code>yhat = w(Wx + c) + b
</code></pre>
<p>cannot learn XOR. But the problem right now is that with my current implementation, it does! This suggests a bug in the code. Please elucidate?</p>
<pre><code>############################################################
'''
dummy data
'''
x_data = [[0.,0.],[0.,1.],[1.,0.],[1.,1.]]
y_data = [[0],[1],[1],[0]]
############################################################
'''
Input and output
'''
X = tf.placeholder(tf.float32, shape = [4,2], name = 'x')
Y = tf.placeholder(tf.float32, shape = [4,1], name = 'y')
'''
Network parameters
'''
W = tf.Variable(tf.random_uniform([2,2],-1,1), name = 'W')
c = tf.Variable(tf.zeros([2]) , name = 'c')
w = tf.Variable(tf.random_uniform([2,1],-1,1), name = 'w')
b = tf.Variable(tf.zeros([1]) , name = 'b')
############################################################
'''
Network 1:
function: Yhat = (w (x'W + c) + b)
loss : \sum_i Y * log Yhat
'''
H1 = tf.matmul(X, W) + c
Yhat1 = tf.matmul(H1, w) + b
cross_entropy1 = -tf.reduce_sum(
Y*tf.log(
tf.clip_by_value(Yhat1,1e-10,1.0)
)
)
step1 = tf.train.AdamOptimizer(0.01).minimize(cross_entropy1)
'''
Train
'''
writer = tf.train.SummaryWriter("./logs/xor_logs.graph_def")
graph1 = tf.initialize_all_variables()
sess1 = tf.Session()
sess1.run(graph1)
for i in range(100):
sess1.run(step1, feed_dict={X: x_data, Y: y_data})
'''
Evaluation
'''
corrects = tf.equal(tf.argmax(Y,1), tf.argmax(Yhat1,1))
accuracy = tf.reduce_mean(tf.cast(corrects, tf.float32))
r = sess1.run(accuracy, feed_dict={X: x_data, Y: y_data})
print ('accuracy: ' + str(r * 100) + '%')
</code></pre>
<p>Right now accuracy is at <code>100%</code>, even though it should be at <code>75%</code>.</p>
|
<p><code>tf.argmax(Y,1)</code> will return <code>[0,0,0,0]</code>, because <code>Y</code> has shape <code>[4,1]</code> and the argmax over a single column is always 0. The same holds for <code>tf.argmax(Yhat1,1)</code>, so the comparison is always counted as correct. This is not what you want.</p>
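<p>A sketch of an evaluation that tests what you actually intend, by thresholding the single output instead of taking an argmax (0.5 is an assumed cut-off; pick whatever matches how you interpret <code>Yhat1</code>):</p>
<pre><code>predictions = tf.cast(tf.greater(Yhat1, 0.5), tf.float32)  # 1.0 if the output exceeds 0.5, else 0.0
corrects = tf.equal(predictions, Y)
accuracy = tf.reduce_mean(tf.cast(corrects, tf.float32))
</code></pre>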
|
tensorflow|neural-network
| 1
|
5,669
| 69,873,341
|
need help assigning custom timeslots to datetime data
|
<p>I have datetime data at one-minute resolution (sample below)</p>
<pre><code>2021-11-08 00:10:00
2021-11-08 01:10:00
2021-11-08 02:25:00
2021-11-08 03:55:00
2021-11-08 06:55:00
2021-11-08 12:35:00
2021-11-08 16:05:00
2021-11-08 17:10:00
2021-11-08 18:45:00
2021-11-08 19:10:00
2021-11-08 20:25:00
2021-11-08 20:55:00
2021-11-08 22:55:00
</code></pre>
<p>and I need to assign one of the custom time slots below to each row. Some slots start at a full hour (e.g. 9:00) and some in the middle of an hour (e.g. 12:30).</p>
<pre><code>'0000-0259'
'0300-0859'
'0900-1229'
'1230-1659'
'1700-1929'
'1930-2029'
'2030-2359'
</code></pre>
<p>I have been trying to do that via a dict, with each hour mapped to a time slot, but the slots starting at half past (like 12:30) are tricky.</p>
<p>Try 2 was with <code>between_time</code>, but it requires a DatetimeIndex, so it does not work here:</p>
<pre><code>def time_slot(ref):
if ref.between_time('00:00','02:59'):
return '0000-0259'
elif ref.between_time('03:00','08:59'):
return '0300-0859'
elif ref.between_time('09:00','12:29'):
return '0900-1229'
elif ref.between_time('12:30','16:59'):
return '1230-1659'
elif ref.between_time('17:00','19:29'):
return '1700-1929'
elif ref.between_time('19:30','20:29'):
return '1930-2029'
else:
return '2030-2359'
</code></pre>
<p>Try 3 was a set of nested ifs comparing with < against the selected slot start:</p>
<pre><code>format = '%H:%M'
def time_slot(ref):
if ref < dt.strptime('03:00', format):
return '0000-0259'
elif ref < dt.strptime('09:00', format):
return '0300-0859'
elif ref < dt.strptime('12:30', format):
return '0900-1229'
elif ref < dt.strptime('17:00', format):
return '1700-1929'
elif ref < dt.strptime('19:30', format):
return '1930-2029'
else:
return '2030-2359'
</code></pre>
<p>but here I can't compare a <code>datetime.time</code> with a <code>datetime.datetime</code>.</p>
|
<p>Given that the initial time data is in a string format, this is how I would proceed:
Given a dataframe of form:</p>
<pre><code> Time
0 2021-11-08 00:10:00
1 2021-11-08 01:10:00
2 2021-11-08 02:25:00
3 2021-11-08 03:55:00
4 2021-11-08 06:55:00
5 2021-11-08 12:35:00
</code></pre>
<p>Step 1.
Add a timestamp column</p>
<pre><code>import dateutil.parser   # binds the parser submodule so du.parser is available
import dateutil as du
df['TimeStamp'] = df.apply(lambda row: du.parser.parse(row.Time), axis = 1)
</code></pre>
<p>Producing:</p>
<pre><code> Time TimeStamp
0 2021-11-08 00:10:00 2021-11-08 00:10:00
1 2021-11-08 01:10:00 2021-11-08 01:10:00
2 2021-11-08 02:25:00 2021-11-08 02:25:00
3 2021-11-08 03:55:00 2021-11-08 03:55:00
4 2021-11-08 06:55:00 2021-11-08 06:55:00
5 2021-11-08 12:35:00 2021-11-08 12:35:00
</code></pre>
<p>Step # 2, create a function which will return a time slot label for each timestamp as follows:</p>
<pre><code>def getLabel(tval):
    """ Return the label of the slot that contains the timestamp """
    labels = ['0000-0259', '0300-0859', '0900-1229', '1230-1659', '1700-1929', '1930-2029', '2030-2359' ]
    slot_start = [(0, 0), (3, 0), (9, 0), (12, 30), (17, 0), (19,30), (20, 30)]
    # compare in minutes since midnight so the half-hour boundaries (12:30, 19:30, 20:30) are honoured
    minutes = tval.hour * 60 + tval.minute
    # scan the slots from the latest start backwards; the first start not after tval wins
    for label, (h, m) in zip(reversed(labels), reversed(slot_start)):
        if minutes >= h * 60 + m:
            return label
    return labels[0]
</code></pre>
<p>Step 3 Apply the getLabel function to create a Time_Ref column as follows:</p>
<pre><code>df['Time_Ref'] = df.apply(lambda row: getLabel(row.TimeStamp), axis=1)
</code></pre>
<p>Which yields:</p>
<pre><code> Time TimeStamp Time_Ref
0 2021-11-08 00:10:00 2021-11-08 00:10:00 0000-0259
1 2021-11-08 01:10:00 2021-11-08 01:10:00 0000-0259
2 2021-11-08 02:25:00 2021-11-08 02:25:00 0000-0259
3 2021-11-08 03:55:00 2021-11-08 03:55:00 0300-0859
4 2021-11-08 06:55:00 2021-11-08 06:55:00 0300-0859
5 2021-11-08 12:35:00 2021-11-08 12:35:00 1230-1659
6 2021-11-08 16:05:00 2021-11-08 16:05:00 1230-1659
7 2021-11-08 17:10:00 2021-11-08 17:10:00 1700-1929
8 2021-11-08 18:45:00 2021-11-08 18:45:00 1700-1929
9 2021-11-08 19:10:00 2021-11-08 19:10:00 1700-1929
10 2021-11-08 20:25:00 2021-11-08 20:25:00 1930-2029
11 2021-11-08 20:55:00 2021-11-08 20:55:00 2030-2359
</code></pre>
<p>You can also combine steps 2 & 3 which eliminates adding the timestamp column with the following:</p>
<pre><code>df['Time_Ref'] = df.apply(lambda row: getLabel(du.parser.parse(row.Time)), axis=1)
</code></pre>
|
python|pandas|datetime|compare
| 0
|
5,670
| 43,165,025
|
dropping columns from dataframes
|
<p>I have 5 dataframes and I want to drop certain columns from them. I tried a for loop, something like this -</p>
<pre><code>dataframes =[af,bf,cf,df,ef,ff,gf]
for col in dataframes:
print col.head(1)
col = col.drop(col.columns[[0,2]],axis=1)
print col.head(1)
</code></pre>
<p>I know the approach is wrong. How to do that without doing it repetitively ?</p>
|
<p>consider the list of dataframes <code>dataframes</code></p>
<pre><code>dataframes = [pd.DataFrame(dict(A=[1], B=[2], C=[3])) for _ in range(4)]
</code></pre>
<p>Use <code>drop</code> and <code>inplace=True</code></p>
<pre><code>for d in dataframes:
d.drop(['B'], 1, inplace=True)
for d in dataframes:
print(d)
A C
0 1 3
A C
0 1 3
A C
0 1 3
A C
0 1 3
</code></pre>
|
python|pandas
| 1
|
5,671
| 43,394,148
|
Replace strings in entire dataframe if they are present in a list
|
<p>Thank you for your time visiting my post. I have the following dataframe below:</p>
<pre><code>df1
col1 col2
1 virginia is cold, canada is cold too virginia is cold, canada is cold too
2 florida, virginia, washington are good florida, virginia, washington are good
3 georgia, alabama, virginia are hot virginia is cold, canada is cold too
4 virginia, ohio, new castle are great hawaii, nebreska is wonderful
5 hawaii, nebreska is wonderful virginia, ohio, new castle are great
</code></pre>
<p>Also, I have a list containing a string:</p>
<pre><code>lst = ['virginia', 'hot', 'too']
</code></pre>
<p>I want to replace the string in the entire dataframe with "xxxxxx" if it matches one of the strings in the list. For instance, my dataframe would look like this after replacement:</p>
<pre><code> df1
col1 col2
1 xxxxxx is cold, canada is cold xxxxxx xxxxxx is cold, canada is cold xxxxxx
2 florida, xxxxxx, washington are good florida, xxxxxx, washington are good
3 georgia, alabama, xxxxxx are xxxxxx xxxxxx is cold, canada is cold xxxxxx
4 xxxxxx, ohio, new castle are great hawaii, nebreska is wonderful
5 hawaii, nebreska is wonderful xxxxxx, ohio, new castle are great
</code></pre>
<p>So far, I have tried but it does not work:</p>
<pre><code>df1 = df1.replace(lst, "xxxxxx")
</code></pre>
|
<pre><code>df1.replace(lst, 'x' * 5, regex=True)
col1 col2
1 xxxxx is cold, canada is cold xxxxx xxxxx is cold, canada is cold xxxxx
2 florida, xxxxx, washington are good florida, xxxxx, washington are good
3 georgia, alabama, xxxxx are xxxxx xxxxx is cold, canada is cold xxxxx
4 xxxxx, ohio, new castle are great hawaii, nebreska is wonderful
5 hawaii, nebreska is wonderful xxxxx, ohio, new castle are great
</code></pre>
|
python|list|pandas|dataframe|replace
| 3
|
5,672
| 50,661,449
|
Trying to set a subset of a vector to equal another vector, but everything gets set to 0
|
<p>I'm trying to get into Python for statistics, coming from an R background. I've set up a script for cross validation on a dataset I've been working with:</p>
<pre><code>cvIndex = np.remainder(np.arange(dat.shape[0]), 10)
pred = np.arange(dat.shape[0])
for i in range(10):
#get training and test set
trFeatures = dat[cvIndex != i, :]
teFeatures = dat[cvIndex == i, :]
trY = y[cvIndex != i]
#fit random forest
rf = RandomForestClassifier(n_estimators = 500, random_state = 42)
rf.fit(trFeatures, trY);
#make and store prediction
tePred = rf.predict_proba(teFeatures)[:, 1]
pred[cvIndex == i] = tePred.copy()
print(pred)
</code></pre>
<p>which returns a vector of all zeros. As far as I can tell, this is the proper way to set a subset of a vector to equal another vector (and indeed, I've tried doing this with some dummy vectors, with success). The other obvious potential problem is that the tePred could be all zeros, but extracting any specific case (i=9) for example, gives this:</p>
<pre><code>i = 9
#get training and test set
trFeatures = dat[cvIndex != i, :]
teFeatures = dat[cvIndex == i, :]
trY = y[cvIndex != i]
#fit random forest
rf = RandomForestClassifier(n_estimators = 500, random_state = 42)
rf.fit(trFeatures, trY);
#make and store prediction
tePred = rf.predict_proba(teFeatures)[:, 1]
print(tePred[1:50])
[ 0.264 0.034 0.02 0.002 0. 0.014 0. 0. 0. 0.102
0.14 0. 0.024 0.002 0. 0.002 0.004 0. 0.044 0. 0.382
0.042 0. 0.004 0. 0.112 0.002 0.074 0. 0.016 0.012
0.004 0. 0. 0.006 0.002 0.01 0. 0. 0. 0. 0.004
0.002 0.002 0.044 0.004 0. 0. 0.004]
</code></pre>
<p>Would really appreciate some help.</p>
|
<p>Looks like integer coercion to me. <code>np.arange</code> returns an integer array which you then update in-place. As an in-place operation cannot change an array's dtype the r.h.s. will be converted to int. With your input being probabilities this will be all zeros.</p>
<p>Since you are overwriting all of <code>pred</code> eventually you needn't initialize it to anything, so using <code>np.empty(dat.shape[0])</code> which defaults to a float dtype instead of <code>np.arange</code> should fix your code.</p>
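<p>Concretely, only the initialization needs to change (a sketch of the relevant lines):</p>
<pre><code>pred = np.empty(dat.shape[0])   # float dtype by default, so the probabilities are stored as-is
# ... inside the loop ...
pred[cvIndex == i] = tePred     # no .copy() needed, and no integer truncation
</code></pre>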
<p>Two unrelated side notes:</p>
<ul>
<li>taking a copy of tePred on the last line of the loop is not necessary.</li>
<li>Python like C uses zero-based indexing, so <code>tePred[1:50]</code> skips the first element.</li>
</ul>
|
python|python-3.x|validation|numpy|numpy-ndarray
| 2
|
5,673
| 50,581,526
|
Two dataframe columns to row and column index with third column as values
|
<p>I have a dataframe that looks as follows:</p>
<pre><code> x y error
(1, 1) 1.0 1.0 0.062532
(1.0, 2.0) 1.0 2.0 0.050991
(1.0, 3.0) 1.0 3.0 0.028133
(1.0, 4.0) 1.0 4.0 0.023807
...
(99.0, 20.0) 99.0 20.0 0.019846
(99.0, 21.0) 99.0 21.0 0.135257
(99.0, 22.0) 99.0 22.0 0.230610
(99.0, 23.0) 99.0 23.0 0.481302
</code></pre>
<p>I want a new dataframe like this, such that I can make a heatmap easily with seaborn:</p>
<pre><code> X 1 2 3 .. 99
Y
1 erros....
2
3
4
..
49
</code></pre>
<p>How do I do this?</p>
|
<p>This should work: </p>
<pre><code>df.pivot(index='y', columns='x', values='error')
</code></pre>
<p>You can get more examples on pivot table in pandas <a href="https://pandas.pydata.org/pandas-docs/stable/reshaping.html" rel="nofollow noreferrer">here</a></p>
|
python|pandas
| 0
|
5,674
| 50,595,173
|
pandas merge and groupby
|
<p>I have 2 pandas dataframes which look like below. </p>
<p>Data Frame 1: </p>
<pre><code>Section chainage_from chainage_to Frame
R125R002 10.133 10.138 1
R125R002 10.138 10.143 2
R125R002 10.143 10.148 3
R125R002 10.148 10.153 4
R125R002 10.153 10.158 5
</code></pre>
<p>Data Frame 2:</p>
<pre><code>Section Chainage 1 2 3 4 5 6 7 8
R125R002 10.133 0 0 1 0 0 0 0 0
R125R002 10.134 1 0 1 0 0 0 0 0
R125R002 10.135 0 0 1 0 0 0 0 0
R125R002 10.136 0 0 1 0 0 0 0 0
R125R002 10.137 0 0 1 0 0 0 0 0
R125R002 10.138 0 0 1 0 0 0 0 0
R125R002 10.139 0 0 1 0 0 0 0 0
R125R002 10.14 5 0 1 0 0 0 0 0
R125R002 10.141 1 0 1 0 0 0 0 0
R125R002 10.142 0 0 1 0 0 0 0 0
R125R002 10.143 0 0 1 0 0 0 0 0
R125R002 10.144 0 0 1 0 0 0 0 0
R125R002 10.145 0 0 1 0 0 0 0 0
R125R002 10.146 0 0 1 0 0 0 0 0
R125R002 10.147 0 0 1 0 0 0 0 0
R125R002 10.148 0 0 1 0 0 0 0 0
R125R002 10.149 0 0 1 0 0 0 0 0
R125R002 10.15 0 0 1 0 0 0 0 0
R125R002 10.151 0 0 1 0 0 0 0 0
R125R002 10.152 0 0 1 0 0 0 0 0
R125R002 10.153 0 0 1 0 0 0 0 0
</code></pre>
<p>required output dataframe:</p>
<pre><code>Section Chainage Frame 1 2 3 4 5 6 7 8
R125R002 10.133 1 1 0 1 0 0 0 0 0
R125R002 10.138 2 0 0 1 0 0 0 0 0
R125R002 10.143 3 6 0 1 0 0 0 0 0
R125R002 10.148 4 0 0 1 0 0 0 0 0
R125R002 10.153 5 0 0 1 0 0 0 0 0
</code></pre>
<p>Dataframe 2 is at 1 m intervals while dataframe 1 is at 5 m intervals. I would like to merge dataframe 2 into dataframe 1 where Chainage falls between chainage_from and chainage_to, and then apply a groupby. The aggregation for column 1 is sum, column 2 max, and columns 3 to 8 average.</p>
<p>In SQL, I would join the two tables on Section, apply a BETWEEN condition on chainage_from and chainage_to, and then group by.
Is there any way to achieve this in pandas?</p>
|
<p>Merge the dataframes by <code>Section</code>, matching each <code>Chainage</code> to the most recent <code>chainage_from</code> at or before it (i.e. Chainage falls in [from, to)). </p>
<pre><code>merged = pd.merge_asof(df2, df1, by='Section', left_on='Chainage', right_on='chainage_from')
</code></pre>
<p>Then group by and aggregate, passing a dictionary that maps each column name to the aggregate function to use.</p>
<pre><code>merged.groupby(['Section', 'chainage_from', 'Frame'], as_index=False).agg(
{'1': 'sum', '2': 'max', '3': 'mean', '4': 'mean',
'5': 'mean', '6': 'mean', '7': 'mean', '8': 'mean'}
)
</code></pre>
<p>outputs:</p>
<pre><code> Section chainage_from Frame 1 2 3 4 5 6 7 8
0 R125R002 10.133 1 1 0 1 0 0 0 0 0
1 R125R002 10.138 2 6 0 1 0 0 0 0 0
2 R125R002 10.143 3 0 0 1 0 0 0 0 0
3 R125R002 10.148 4 0 0 1 0 0 0 0 0
4 R125R002 10.153 5 0 0 1 0 0 0 0 0
</code></pre>
|
python|pandas
| 1
|
5,675
| 50,263,985
|
tf.datasets input_fn getting error after 1 epoch
|
<p>So I am trying to switch to an input_fn() using tf.datasets as described in this <a href="https://stackoverflow.com/questions/48779293/upgrade-to-tf-dataset-not-working-properly-when-parsing-csv">question</a>. While I have been able to get superior steps/sec using tf.datasets with the input_fn() below, I appear to run into an error after 1 epoch when running this experiment on GCMLE. Consider this input_fn():</p>
<pre><code>def input_fn(...):
files = tf.data.Dataset.list_files(filenames).shuffle(num_shards)
dataset = files.apply(tf.contrib.data.parallel_interleave(lambda filename: tf.data.TextLineDataset(filename).skip(1), cycle_length=num_shards))
dataset = dataset.apply(tf.contrib.data.map_and_batch(lambda row:
parse_csv_dataset(row, hparams = hparams),
batch_size = batch_size,
num_parallel_batches = multiprocessing.cpu_count()))
dataset = dataset.prefetch(1)
if shuffle:
dataset = dataset.shuffle(buffer_size = 10000)
dataset = dataset.repeat(num_epochs)
iterator = dataset.make_initializable_iterator()
features = iterator.get_next()
tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS, iterator.initializer)
labels = {key: features.pop(key) for key in LABEL_COLUMNS}
return features, labels
</code></pre>
<p>I receive the following error on GCMLE:</p>
<pre><code>disable=protected-access InvalidArgumentError (see above for traceback): Inputs to operation loss/sparse_softmax_cross_entropy_loss/num_present/Select of type Select must have the same size and shape. Input 0: [74] != input 1: [110] [[Node: loss/sparse_softmax_cross_entropy_loss/num_present/Select = Select[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:GPU:0"](loss/sparse_softmax_cross_entropy_loss/num_present/Equal, loss/sparse_softmax_cross_entropy_loss/num_present/zeros_like, loss/sparse_softmax_cross_entropy_loss/num_present/ones_like)]] [[Node: global_step/add/_1509 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_3099_global_step/add", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
</code></pre>
<p>This implies that there is a shape mismatch <code>Input 0: [74] != input 1: [110]</code>; however, my old queue based input_fn() works fine on the exact same data, so I do not believe it is any issue with the underlying data. This is taking place at what I believe to be the end of the epoch (the <code>num_steps</code> when the GCMLE error occurs is right around <code>num_train_examples/batch_size</code>), so I am guessing that the issue might be that the final batch is not equal to the <code>batch_size</code> of 110 (as it shows up in the error) and instead contains only 74 examples. Can anybody confirm that this is the error? Assuming that it is, is there some other flag that I need to set so that the last batch can be something other than the specified batch size of 110?</p>
<p>For what it's worth, I have replicated this behavior with two different datasets (trains for multiple epochs with the old queue based input_fn, gets hung up at end of first epoch for the tf.datasets input_fn)</p>
|
<p>As Robbie suggests in the <a href="https://stackoverflow.com/a/50266170/3574081">other answer</a>, it looks like your old implementation used fixed batch sizes throughout (presumably using an API like <a href="https://www.tensorflow.org/api_docs/python/tf/train/batch" rel="nofollow noreferrer"><code>tf.train.batch()</code></a> or one of its wrappers with the default argument of <code>allow_smaller_final_batch=False</code>), and the default behavior of batching in <code>tf.data</code> (via <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset" rel="nofollow noreferrer"><code>tf.data.Dataset.batch()</code></a> and <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/data/map_and_batch" rel="nofollow noreferrer"><code>tf.contrib.data.map_and_batch()</code></a>) is to include the smaller final batch.</p>
<p>The bug is most likely in the <code>model_fn</code>. Without seeing that function, it is difficult to guess, but I suspect that there is either an explicit (and incorrect) assertion of a tensor's shape via <a href="https://www.tensorflow.org/api_docs/python/tf/Tensor#set_shape" rel="nofollow noreferrer"><code>Tensor.set_shape()</code></a> (possibly in library code) or a bug in the implementation of <a href="https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy" rel="nofollow noreferrer"><code>tf.losses.sparse_softmax_cross_entropy()</code></a>.</p>
<p>First, I am assuming that the <code>features</code> and <code>labels</code> tensors returned from <code>input_fn()</code> have statically unknown batch size. Can you confirm that by printing the <code>features</code> and <code>labels</code> objects, and ensuring that their reported <a href="https://www.tensorflow.org/api_docs/python/tf/Tensor#shape" rel="nofollow noreferrer"><code>Tensor.shape</code></a> properties have <code>None</code> for the 0th dimension?</p>
<p>Next, locate the call to <code>tf.losses.sparse_softmax_cross_entropy()</code> in your <code>model_fn</code>. Print the object that is passed as the <code>weights</code> argument to this function, which should be a <code>tf.Tensor</code>, and locate its static shape. Given the error you are seeing, I suspect it will have a shape like <code>(110,)</code>, where <code>110</code> is your specified batch size. If that is the case, there is a bug in <code>model_fn</code> that incorrectly asserts that the shape of the weights is a full batch, when it might not be. (If that is not the case, then there's a bug in <code>tf.losses.sparse_softmax_cross_entropy()</code>! Please open a <a href="https://github.com/tensorflow/tensorflow/issues" rel="nofollow noreferrer">GitHub issue</a> with an example that enables us to reproduce the problem.)</p>
<blockquote>
<p><strong>Aside:</strong> Why would this explain the bug? The <a href="https://github.com/tensorflow/tensorflow/blob/a9761960e282cdcf0822951dec86372181f0e88e/tensorflow/python/ops/losses/losses_impl.py#L145-L148" rel="nofollow noreferrer">code that calls</a> the failing <code>tf.where()</code> op looks like this (edited for readability):</p>
<pre><code>num_present = tf.where(tf.equal(weights, 0.0), # This input is shape [74]
tf.zeros_like(weights), # This input is shape [110]
tf.ones_like(weights) # This input is probably [110]
)
</code></pre>
<p>This flavor of <code>tf.where()</code> op (named <code>"Select"</code> in the error message for historical reasons) requires that all three inputs have the same size. Superficially, <code>tf.equal(weights, 0.0)</code>, <code>tf.ones_like(weights)</code>, and <code>tf.zeros_like(weights)</code> all have the same shape, which is the shape of <code>weights</code>. However, if the static shape (the result of <code>Tensor.shape</code>) differs from the <a href="https://stackoverflow.com/q/37096225/3574081">dynamic shape</a>, then the behavior is undefined.</p>
<p>What actually happens? In this particular case, let's say the static shape of <code>weights</code> is <code>[110]</code>, but the dynamic shape is <code>[74]</code>. The static shape of our three arguments to <code>tf.where()</code> will be <code>[110]</code>. The implementation of <code>tf.equal()</code> doesn't care that there's a mismatch, so its dynamic shape will be <code>[74]</code>. The implementations of <code>tf.zeros_like()</code> and <code>tf.ones_like()</code> use an <a href="https://github.com/tensorflow/tensorflow/blob/a9761960e282cdcf0822951dec86372181f0e88e/tensorflow/python/ops/array_ops.py#L245-L246" rel="nofollow noreferrer">optimization</a> that ignores that dynamic shape when the static shape is fully defined, and so their dynamic shapes will be <code>[110]</code>, causing the error you are seeing.</p>
</blockquote>
<p>The proper fix is to locate the code that is asserting a fixed batch size in your <code>model_fn</code>, and remove it. The optimization and evaluation logic in TensorFlow is robust to variable batch sizes, and this will ensure that all of your data is used in the training and evaluation processes.</p>
<p>A less desirable short-term fix would be to drop the small batch at the end of the data. There are a couple of options here:</p>
<ul>
<li><p>Drop some data randomly at the end of each epoch:</p>
<ul>
<li>With TF 1.8 or later, pass <code>drop_remainder=True</code> to <code>tf.contrib.data.map_and_batch()</code> (see the sketch after this list).</li>
<li>With TF 1.7 or earlier, use <code>dataset = dataset.filter(lambda features: tf.equal(tf.shape(features[LABEL_COLUMNS[0]])[0], batch_size))</code> after the <code>map_and_batch</code>.</li>
</ul></li>
<li><p>Drop the very last batch of data:</p>
<ul>
<li>Move the <code>dataset.repeat(NUM_EPOCHS)</code> before the <code>map_and_batch()</code> and then apply one of the two fixes mentioned above.</li>
</ul></li>
</ul>
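<p>A minimal <code>input_fn</code> sketch illustrating this (the dataset source, <code>parse_fn</code>, and <code>NUM_EPOCHS</code> are placeholders standing in for your existing pipeline):</p>
<pre><code>def input_fn(filenames, batch_size):
    dataset = tf.data.TFRecordDataset(filenames)
    # Repeat before batching so that, at most, only the very last batch is short...
    dataset = dataset.repeat(NUM_EPOCHS)
    # ...and drop that short batch entirely (TF 1.8+).
    dataset = dataset.apply(
        tf.contrib.data.map_and_batch(parse_fn, batch_size, drop_remainder=True))
    return dataset
</code></pre>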
|
tensorflow|google-cloud-ml|tensorflow-datasets
| 1
|
5,676
| 62,766,042
|
Inputs to eager execution function cannot be Keras symbolic tensors with Variational Autoencoder
|
<p>I'm trying to implement a custom Variational Autoencoder. The code is shown below</p>
<pre><code>image = Input(shape = (X_train.shape[1]))
label = Input(shape = (Y_train.shape[1]))
inputs = Concatenate()([image, label])
x = Dense(625, activation = 'relu')(inputs)
x = Reshape((25,25,1))(x)
x = LocallyConnected2D(8, (5,5), padding = 'valid')(x)
x = LeakyReLU()(x)
x = LocallyConnected2D(8, (5,5), padding = 'valid')(x)
x = LeakyReLU()(x)
x = LocallyConnected2D(8, (3,3), padding = 'valid')(x)
x = LeakyReLU()(x)
x = LocallyConnected2D(8, (3,3), padding = 'valid')(x)
x = LeakyReLU()(x)
x = AveragePooling2D((2, 2))(x)
encoder_out = Flatten()(x)
mu = Dense(latent_size, activation ='linear')(encoder_out)
sigma = Dense(latent_size, activation = 'linear')(encoder_out)
def sampling(args):
mu, sigma = args
eps = K.random_normal(shape=(batch_size, latent_size), mean=0., stddev=1.)
return mu + K.exp(sigma / 2) * eps
latent_space = Lambda(sampling, output_shape = (latent_size, ))([mu, sigma])
decoder_latent = Input(shape = (latent_size, ))
decoder_c = Input(shape = (c_space, ))
x = Concatenate()([decoder_latent, decoder_c])
x = Dense(288)(x)
x = Reshape((6,6,8))(x)
x = ZeroPadding2D((2,2))(x)
x = LocallyConnected2D(8, (3,3), padding = 'valid')(x)
x = LeakyReLU()(x)
x = ZeroPadding2D((2,2))(x)
x = LocallyConnected2D(8, (3,3), padding = 'valid')(x)
x = LeakyReLU()(x)
x = UpSampling2D(size = (2,2))(x)
x = LocallyConnected2D(8, (5,5), padding = 'valid')(x)
x = LeakyReLU()(x)
x = UpSampling2D(size = (2,2))(x)
x = LocallyConnected2D(8,(5,5), padding = 'valid')(x)
x = LeakyReLU()(x)
x = LocallyConnected2D(1,(4,4), padding = 'valid')(x)
decoder_out = Activation('relu')(x)
</code></pre>
<p>The loss function I defined as</p>
<pre><code>def DFC_loss(x_in, x_out):
kl_loss = 0.5 * K.sum(K.exp(sigma) + K.square(mu) - 1. - sigma, axis=1)
return K.mean(perceptual_loss(x_in, x_out) + kl_loss)
def perceptual_loss(x_in, x_out):
x_in = K.reshape(x_in, shape=(batch_size, 25,25,1))
x_out = K.reshape(x_out, shape=(batch_size, 25,25,1))
conv_outputs = [classifier.get_layer(l).output for l in selected_layers]
activation = Model(classifier.input, conv_outputs)
h1_list = activation(x_in)
h2_list = activation(x_out)
rc_loss = 0.0
for h1, h2, weight in zip(h1_list, h2_list, [1.0, 1.0]):
h1 = K.batch_flatten(h1)
h2 = K.batch_flatten(h2)
rc_loss = rc_loss + weight * K.sum(K.square(h1 - h2), axis=-1)
return rc_loss
CVAE.compile(optimizer = "adam", loss = DFC_loss, metrics = [perceptual_loss])
</code></pre>
<p>Whenever I run the code below</p>
<pre><code>CVAE_hist = CVAE.fit([X_train,Y_train], X_train, verbose = 1, batch_size=batch_size, epochs=n_epochs, validation_data = ([X_test, Y_test], X_test))
</code></pre>
<p>I get two errors</p>
<pre><code>An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
@tf.function
def has_init_scope():
my_constant = tf.constant(1.)
with tf.init_scope():
added = my_constant * 2
The graph tensor has name: dense_2_1/Identity:0
</code></pre>
<p>and</p>
<pre><code>Inputs to eager execution function cannot be Keras symbolic tensors, but found [<tf.Tensor 'dense_2_1/Identity:0' shape=(None, 6) dtype=float32>, <tf.Tensor 'dense_1_1/Identity:0' shape=(None, 6) dtype=float32>]
</code></pre>
<p>What was interesting is whenever I set the loss function as the perceptual loss alone without the Kl divergence loss, my code did not receive an error. There are many implementations of the KL divergence loss for the Variational Autoencoder but I do not know why it did not work with this specific implementation.</p>
|
<p>I've been having the same problem for a long time, but managed to figure it out.
The problem is that TF only accepts loss functions that take <code>(y_true, y_pred)</code> parameters, which are then compared. However, you are also computing your KL loss from <code>mu</code> and <code>sigma</code>, which are the outputs of Dense layers and therefore symbolic tensors. Up to TensorFlow 2.1 it magically knew what these tensors were and how to handle them, but from then on you have to be more careful. After reading <a href="https://www.tensorflow.org/guide/keras/custom_layers_and_models#the_add_loss_method" rel="nofollow noreferrer">this tutorial</a> (EDIT: also scroll to the bottom of the page for the complete VAE example) I propose these changes to your code:</p>
<p><strong>1. When compiling your model, only define <code>perceptual_loss</code> as a loss:</strong></p>
<p><code>CVAE.compile(optimizer = "adam", loss = perceptual_loss, metrics = [perceptual_loss])</code></p>
<p><strong>2. Change <code>sampling</code> function into a class, and under <code>call</code> method, add your <code>kl_loss</code></strong>, something like</p>
<pre><code>class Sampling(keras.layers.Layer):
def __init__(self):
super(Sampling, self).__init__()
def build(self, input_shape):
_, sigma_shape = input_shape
self.sigma_shape = (sigma_shape[-1], )
def call(self, inputs):
mu, sigma = inputs
# Add loss
kl_loss = 0.5 * K.sum(K.exp(sigma) + K.square(mu) - 1. - sigma, axis=1)
self.add_loss(K.mean(kl_loss))
# Return sampling as before
eps = K.random_normal(self.sigma_shape, mean=0., stddev=1.)
return mu + K.exp(sigma / 2) * eps
</code></pre>
<p>Make sure you only add <code>kl_loss</code>, not <code>DFC_loss</code>, so that you don't count <code>perceptual_loss</code> twice.</p>
<p><strong>3. Use <code>Sampling</code> class as a latent layer</strong></p>
<p><code>latent_space = Sampling()([mu, sigma])</code></p>
<p>This is my first answer, hope it helps!</p>
<p>P.S. Maybe you can also try moving both losses in the Sampling class via <code>add_loss</code> command and then compile without any <code>loss</code> parameter.</p>
|
tensorflow|keras
| 0
|
5,677
| 62,848,487
|
Inserting new fields(columns) to mongoDB with pandas
|
<p>I have an existing data in MongoDB where Primary Key is set on 'date' with a few fields in it.</p>
<p>And I want to insert a new pandas dataframe with new fields(columns) to the existing data in MongoDB, joining on the 'date' field which exists on the both dataframe.</p>
<p>For example, lets say the this is dataframe A I have in my MongoDB ( I set the index with 'date' field when calling the data from MongoDB)</p>
<p><a href="https://i.stack.imgur.com/4WiK9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4WiK9.png" alt="enter image description here" /></a></p>
<p>And this is the new dataframe B I want to insert to MongoDB</p>
<p><a href="https://i.stack.imgur.com/btPte.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/btPte.png" alt="enter image description here" /></a></p>
<p>And this is the final dataframe C with new fields( 'std_50_3000window', 'std_50_300window', 'std_50_500window' added on 'date' index), which I want it to have on my MongoDB.</p>
<p><a href="https://i.stack.imgur.com/1GpdS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1GpdS.png" alt="enter image description here" /></a></p>
<p>Is there any way to do this?? (Maybe with insert_many method?)</p>
|
<p>The method you need is <code>update_one()</code> with <code>upsert=True</code> in a loop; you can't use <code>insert_many()</code> for two reasons: firstly, you're not always inserting, sometimes you are updating; secondly, <code>update_many()</code> (and <code>insert_many()</code>) only work with a single filter, and in your case each filter is different because each update relates to a different date.</p>
<p>This is a generic solution that will combine dataframes (<code>df_a</code> and <code>df_b</code> in this case - you can have as many as you like) in the manner that you need. It uses <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iterrows.html#pandas-dataframe-iterrows" rel="nofollow noreferrer">iterrows</a> to get each row of the dataframe, filters on the date, and sets the values to those in the dataframe. The <code>$set</code> operator will override values that are already there and set them if not present. <code>upsert=True</code> will perform an insert if there's no match on the date.</p>
<pre><code>for df in [df_a, df_b]:
for _, row in df.iterrows():
db.mycollection.update_one({'date': row.get('date')}, {'$set': row.to_dict()}, upsert=True)
</code></pre>
<p>Full worked example:</p>
<pre><code>from pymongo import MongoClient
from pprint import pprint
import datetime
import pandas as pd
# Sample data setup
db = MongoClient()['mydatabase']
data_a = [[datetime.datetime(2017, 5, 19, 21, 20), 96, 8, 98],
[datetime.datetime(2017, 5, 19, 21, 21), 95, 8, 97],
[datetime.datetime(2017, 5, 19, 21, 22), 95, 8, 97]]
df_a = pd.DataFrame(data_a, columns=['date', 'std_500_1000window', 'std_50_100window', 'std_50_2000window'])
data_b = [[datetime.datetime(2017, 5, 19, 21, 20), 98, 9, 10],
[datetime.datetime(2017, 5, 19, 21, 21), 98, 9, 10],
[datetime.datetime(2017, 5, 19, 21, 22), 98, 9, 10]]
df_b = pd.DataFrame(data_b, columns=['date', 'std_50_3000window', 'std_50_300window', 'std_50_500window'])
# Perform the upserts
for df in [df_a, df_b]:
for _, row in df.iterrows():
db.mycollection.update_one({'date': row.get('date')}, {'$set': row.to_dict()}, upsert=True)
# Print the results
for record in db.mycollection.find():
pprint(record)
</code></pre>
<p>Result:</p>
<pre><code>{'_id': ObjectId('5f0ae909df5531ac655ce528'),
'date': datetime.datetime(2017, 5, 19, 21, 20),
'std_500_1000window': 96,
'std_50_100window': 8,
'std_50_2000window': 98,
'std_50_3000window': 98,
'std_50_300window': 9,
'std_50_500window': 10}
{'_id': ObjectId('5f0ae909df5531ac655ce52a'),
'date': datetime.datetime(2017, 5, 19, 21, 21),
'std_500_1000window': 95,
'std_50_100window': 8,
'std_50_2000window': 97,
'std_50_3000window': 98,
'std_50_300window': 9,
'std_50_500window': 10}
{'_id': ObjectId('5f0ae909df5531ac655ce52c'),
'date': datetime.datetime(2017, 5, 19, 21, 22),
'std_500_1000window': 95,
'std_50_100window': 8,
'std_50_2000window': 97,
'std_50_3000window': 98,
'std_50_300window': 9,
'std_50_500window': 10}
</code></pre>
|
pandas|mongodb|dataframe|pymongo
| 1
|
5,678
| 62,688,771
|
merge cells in dataframe while converting to html using style object
|
<p><a href="https://i.stack.imgur.com/iSlvn.png" rel="nofollow noreferrer"> </a></p>
<p>I am trying to merge cells in a dataframe while converting it to HTML, but I end up with row lines between the cells. Is it possible to merge only specific cells in a dataframe while converting to HTML, or can a header option be given to multiple columns, without the lines? It should act as a header.</p>
<p>While converting to html am using style function</p>
<pre><code>th_props = [
('border', '1px solid black'),
('border-collapse','collapse')
]
styles = [
dict(selector="th", props=th_props),
dict(selector="td", props=th_props)
]
lic_data.style.applymap(self.color_negative_red, subset=['Utilised Percentage']).hide_index().set_table_styles(styles).render() )
</code></pre>
<p>In the Application column, I want to merge all three cells so that only the "Map" title appears, without lines between them.
Otherwise, if I drop the Application column, I need "Map" as a header over the remaining columns.</p>
<p>Let me know if there is any way to do that, or if this is not possible.</p>
<p>Thank you</p>
<p><a href="https://i.stack.imgur.com/J11HB.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>I was unable to do this. As an alternative, I added a caption to the table instead, which turned out to be much more practical.</p>
<pre><code>styles = [dict(selector="th", props=th_props), dict(selector="td", props=th_props), dict(selector="caption", props=[("text-align", "center"), ("font-weight", "bold"), ("font-size", "120%"),("color", 'orange')])]
lic_data.style.applymap(self.color_negative_red, subset=['% Utilized']).hide_index().set_table_styles(styles).set_caption('caption').render() )
</code></pre>
|
html|pandas|dataframe|styles|cell
| 0
|
5,679
| 54,566,209
|
LSTM's expected hidden state dimensions doesn't take batch size into account
|
<p>I have this decoder model, which is supposed to take batches of sentence embeddings (batchsize = 50, hidden size=300) as input and output a batch of one hot representation of predicted sentences:</p>
<pre><code>class DecoderLSTMwithBatchSupport(nn.Module):
# Your code goes here
def __init__(self, embedding_size,batch_size, hidden_size, output_size):
super(DecoderLSTMwithBatchSupport, self).__init__()
self.hidden_size = hidden_size
self.batch_size = batch_size
self.lstm = nn.LSTM(input_size=embedding_size,num_layers=1, hidden_size=hidden_size, batch_first=True)
self.out = nn.Linear(hidden_size, output_size)
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, my_input, hidden):
print(type(my_input), type(hidden))
output, hidden = self.lstm(my_input, hidden)
output = self.softmax(self.out(output[0]))
return output, hidden
def initHidden(self):
return Variable(torch.zeros(1, self.batch_size, self.hidden_size)).cuda()
</code></pre>
<p>However, when I run it using:</p>
<pre><code>decoder=DecoderLSTMwithBatchSupport(vocabularySize,batch_size, 300, vocabularySize)
decoder.cuda()
decoder_input=np.zeros([batch_size,vocabularySize])
for i in range(batch_size):
decoder_input[i] = embeddings[SOS_token]
decoder_input=Variable(torch.from_numpy(decoder_input)).cuda()
decoder_hidden = (decoder.initHidden(),decoder.initHidden())
for di in range(target_length):
decoder_output, decoder_hidden = decoder(decoder_input.view(1,batch_size,-1), decoder_hidden)
</code></pre>
<p>I get he following error:</p>
<blockquote>
<p>Expected hidden[0] size (1, 1, 300), got (1, 50, 300)</p>
</blockquote>
<p>What am I missing in order to make the model expect batched hidden states?</p>
|
<p>When you create the <code>LSTM</code>, the <code>batch_first</code> flag should not be set, because with it the LSTM treats your <code>(1, batch_size, features)</code> input as having a batch size of 1. From the docs:</p>
<blockquote>
<p>If True, then the input and output tensors are provided as (batch,
seq, feature). Default: False</p>
</blockquote>
<p>change the LSTM creation to:</p>
<pre><code>self.lstm = nn.LSTM(input_size=embedding_size, num_layers=1, hidden_size=hidden_size)
</code></pre>
<p>Also, there is a type error. When you create the <code>decoder_input</code> using <code>torch.from_numpy()</code> it has <code>dtype=torch.float64</code>, while the LSTM's parameters default to <code>dtype=torch.float32</code>. Change the line where you create the <code>decoder_input</code> to something like </p>
<pre><code>decoder_input = Variable(torch.from_numpy(decoder_input)).cuda().float()
</code></pre>
<p>With both changes, it is supposed to work fine :)</p>
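<p>Putting both fixes together, the driving loop from the question would look roughly like this (a sketch; the shapes assume <code>batch_size=50</code> and <code>hidden_size=300</code> as in the question, and <code>decoder_input</code> is the numpy array you already built):</p>
<pre><code>decoder = DecoderLSTMwithBatchSupport(vocabularySize, batch_size, 300, vocabularySize).cuda()
decoder_input = Variable(torch.from_numpy(decoder_input)).cuda().float()  # cast to float32
decoder_hidden = (decoder.initHidden(), decoder.initHidden())  # each is (1, batch_size, 300)
for di in range(target_length):
    # seq_len=1, so without batch_first the input is (1, batch_size, features)
    decoder_output, decoder_hidden = decoder(decoder_input.view(1, batch_size, -1), decoder_hidden)
</code></pre>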
|
python-3.x|lstm|pytorch|batchsize
| 1
|
5,680
| 71,128,122
|
Pandas - How to convert timedelta64[ns] to HH:MM:SS
|
<p>How can I convert a whole column in Pandas (Python) from timedelta64[ns] like <code>2 days 03:29:05</code> to <code>51:29:05</code>?</p>
<pre><code>**PLEASE, CONSIDER time COLUMN AS A timedelta64[ns]**
d = {'id': [1123, 2342], 'time': ['2 days 03:29:05', '1 days 01:57:53']}
df = pd.DataFrame(data=d)
df['time'] = pd.to_timedelta(df['time'])
id time
0 1123 2 days 03:29:05
1 2342 1 days 01:57:53
df.info()
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 2 non-null int64
1 time 2 non-null timedelta64[ns]
dtypes: int64(1), timedelta64[ns](1)
</code></pre>
<p>And I would like to add a new column as:</p>
<pre><code> id time new
0 1123 2 days 03:29:05 51:29:05
1 2342 1 days 01:57:53 25:57:53
</code></pre>
|
<p>I don't think that there is a direct way. But you could use a custom function on top of <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.total_seconds.html" rel="nofollow noreferrer"><code>total_seconds</code></a>:</p>
<pre><code>def sec_to_format(s):
h,s = divmod(int(s),3600)
m,s = divmod(s,60)
return f'{h:02}:{m:02}:{s:02}'
df['time_str'] = [sec_to_format(s) for s in df['time'].dt.total_seconds()]
</code></pre>
<p>output:</p>
<pre><code> id time time_str
0 1123 2 days 03:29:05 51:29:05
1 2342 1 days 01:57:53 25:57:53
</code></pre>
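<p>If you prefer to avoid the Python-level helper, a roughly equivalent sketch using <code>dt.components</code> (assuming the <code>time</code> column is <code>timedelta64[ns]</code>):</p>
<pre><code>c = df['time'].dt.components
df['new'] = ((c.days * 24 + c.hours).astype(str).str.zfill(2)
             + ':' + c.minutes.astype(str).str.zfill(2)
             + ':' + c.seconds.astype(str).str.zfill(2))
</code></pre>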
|
python|pandas
| 1
|
5,681
| 60,758,957
|
Using generator in Python to feed into Keras model.fit_generator
|
<p>I am learning how to use generator in Python and feed it into Keras model.fit_generator.</p>
<pre><code>from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten
import pandas as pd
import os
import cv2
class Generator:
def __init__(self,path):
self.path = path
def gen(self, feat, labels):
i=0
while (True):
im = cv2.imread(feat[i],0)
im = im.reshape(28,28,1)
yield im,labels[i]
i+=1
if __name__ == "__main__":
input_dir = './mnist'
output_file = 'dataset.csv'
filename = []
label = []
for root,dirs,files in os.walk(input_dir):
for file in files:
full_path = os.path.join(root,file)
filename.append(full_path)
label.append(os.path.basename(os.path.dirname(full_path)))
data = pd.DataFrame(data={'filename': filename, 'label':label})
data.to_csv(output_file,index=False)
feat = data.iloc[:,0]
labels = pd.get_dummies(data.iloc[:,1]).as_matrix()
image_gen = Generator(input_dir)
# #create model
model = Sequential()
model.add(Conv2D(64, kernel_size=3, activation="relu", input_shape=(28,28,1)))
model.add(Conv2D(32, kernel_size=3, activation="relu"))
model.add(Flatten())
model.add(Dense(2, activation="softmax"))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(image_gen.gen(filename,labels), steps_per_epoch=5 ,epochs=5, verbose=1)
</code></pre>
<p>I have 2 subfolders inside <code>./mnist</code> folder, corresponding to each class in my dataset.
I created a Dataframe that contains the path of each image and the label (which is the name of the corresponding subfolder).</p>
<p>I created <code>Generator</code> class that loads the image whose path is written in the DataFrame.</p>
<p>It gave me error:
<code>ValueError: Error when checking input: expected conv2d_1_input to have 4 dimensions, but got array with shape (28, 28, 1)</code></p>
<p>Could anyone please help?
And also, is it the correct way to implement generator in general?</p>
<p>Thanks!</p>
|
<p>I think answers to your questions can be found in Keras documentation. </p>
<p>In terms of the input shape, a <code>Conv2D</code> layer expects 4-dimensional input (a batch of images), but your generator yields single samples reshaped to <code>(28,28,1)</code>, so only 3 dimensions reach the model. For the Conv2D input format, see <a href="https://keras.io/layers/convolutional/" rel="nofollow noreferrer">this documentation</a>.</p>
<p>In terms of the generator itself, the <a href="https://keras.io/models/model/#fit_generator" rel="nofollow noreferrer">Keras documentation</a> provides an example where the generator is a plain function; the same pattern is discussed in the <a href="https://wiki.python.org/moin/Generators" rel="nofollow noreferrer">Python Wiki</a>. Your particular implementation seems to work, at least for the first iteration, since you get to the point of feeding the data into the convolution layer; the missing piece is the batch dimension, as sketched below.</p>
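<p>A minimal sketch of a <code>gen</code> method that yields batches, meant as a drop-in replacement inside the question's <code>Generator</code> class (it assumes, as in the question, that <code>feat</code> holds image paths, <code>labels</code> holds one-hot labels, and that <code>numpy</code> and <code>cv2</code> are already imported):</p>
<pre><code>    def gen(self, feat, labels, batch_size=32):
        i = 0
        while True:
            batch_x, batch_y = [], []
            for _ in range(batch_size):
                im = cv2.imread(feat[i % len(feat)], 0).reshape(28, 28, 1)
                batch_x.append(im)
                batch_y.append(labels[i % len(labels)])
                i += 1
            # Shapes: (batch_size, 28, 28, 1) and (batch_size, num_classes)
            yield np.array(batch_x), np.array(batch_y)
</code></pre>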
|
python|tensorflow|machine-learning|keras|deep-learning
| 1
|
5,682
| 60,683,901
|
Tensorboard smoothing
|
<p>I downloaded the CSV files from tesnorboard in order to plot the losses myself as I want them Smoothed.</p>
<p>This is currently my code:</p>
<pre><code>import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.read_csv('C:\\Users\\ali97\\Desktop\\Project\\Database\\Comparing Outlier Fractions\\10 Percent (MAE)\\MSE Validation.csv',usecols=['Step','Value'],low_memory=True)
df2 = pd.read_csv('C:\\Users\\ali97\\Desktop\\Project\\Database\\Comparing Outlier Fractions\\15 Percent (MAE)\\MSE Validation.csv',usecols=['Step','Value'],low_memory=True)
df3 = pd.read_csv('C:\\Users\\ali97\\Desktop\\Project\\Database\\Comparing Outlier Fractions\\20 Percent (MAE)\\MSE Validation.csv',usecols=['Step','Value'],low_memory=True)
plt.plot(df['Step'],df['Value'] , 'r',label='10% Outlier Frac.' )
plt.plot(df2['Step'],df2['Value'] , 'g',label='15% Outlier Frac.' )
plt.plot(df3['Step'],df3['Value'] , 'b',label='20% Outlier Frac.' )
plt.xlabel('Epochs')
plt.ylabel('Validation score')
plt.show()
</code></pre>
<p>I was reading how to smooth the graph and I found out another member here wrote the code on how tensorboard actually smooths graphs, but I really don't know how to implement it in my code.</p>
<pre><code>from typing import List

def smooth(scalars: List[float], weight: float) -> List[float]:  # Weight between 0 and 1
last = scalars[0] # First value in the plot (first timestep)
smoothed = list()
for point in scalars:
smoothed_val = last * weight + (1 - weight) * point # Calculate smoothed value
smoothed.append(smoothed_val) # Save it
last = smoothed_val # Anchor the last smoothed value
return smoothed
</code></pre>
<p>Thank you.</p>
|
<p>If you are working with the <code>pandas</code> library you can use the function <code>ewm</code> (<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ewm.html" rel="noreferrer">Pandas EWM</a>) and adjust the <code>alpha</code> factor to get a good approximation of the smoothing function from TensorBoard.</p>
<pre><code>df.ewm(alpha=(1 - ts_factor)).mean()
</code></pre>
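<p>If you want to reproduce the question's <code>smooth()</code> function exactly, passing <code>adjust=False</code> should match it, since the recursion <code>y_t = weight * y_(t-1) + (1 - weight) * x_t</code> is precisely an exponentially weighted mean with <code>alpha = 1 - weight</code>:</p>
<pre><code># Should match smooth(scalars, weight) from the question, with alpha = 1 - weight
df.ewm(alpha=(1 - ts_factor), adjust=False).mean()
</code></pre>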
<p>CSV file <strong>mse_data.csv</strong></p>
<pre><code> step value
0 0.000000 9.716303
1 0.200401 9.753981
2 0.400802 9.724551
3 0.601202 7.926591
4 0.801603 10.181700
.. ... ...
495 99.198400 0.298243
496 99.398800 0.314511
497 99.599200 -1.119387
498 99.799600 -0.374202
499 100.000000 1.150465
</code></pre>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv("mse_data.csv")
print(df)
TSBOARD_SMOOTHING = [0.5, 0.85, 0.99]
smooth = []
for ts_factor in TSBOARD_SMOOTHING:
smooth.append(df.ewm(alpha=(1 - ts_factor)).mean())
for ptx in range(3):
plt.subplot(1,3,ptx+1)
plt.plot(df["value"], alpha=0.4)
plt.plot(smooth[ptx]["value"])
plt.title("Tensorboard Smoothing = {}".format(TSBOARD_SMOOTHING[ptx]))
plt.grid(alpha=0.3)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/deIAL.png" rel="noreferrer"><img src="https://i.stack.imgur.com/deIAL.png" alt="enter image description here"></a></p>
|
python|tensorflow|tensorboard
| 8
|
5,683
| 72,728,922
|
ERROR: Input to reshape is a tensor with 2381440 values, but the requested shape requires a multiple of 200704
|
<p>I am trying to learn the CNN model in deep learning and I am using a <code>Cats-vs_Dogs</code> dataset to begin with. I am following a video tutorial and the steps are the same, although the dataset is different and all the other solutions have vastly varied code that I am not able to understand. Can someone tell me why I am going wrong here? Thanks</p>
<pre><code>import numpy as np
import pandas as pd
import tensorflow as tf
import itertools
import os
import shutil
import random
import glob
import matplotlib.pyplot as plt
import warnings
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense,Activation,Flatten,BatchNormalization,Conv2D,MaxPool2D
from tensorflow.keras.optimizers import Adam
from keras.metrics import categorical_crossentropy
from keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics import confusion_matrix
train_batch=ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input) \
.flow_from_directory(directory='/content/Cat-vs-Dogs/train',target_size=(244,244),classes=['cats','dogs'],batch_size=10)
valid_batch=ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input) \
.flow_from_directory(directory='/content/Cat-vs-Dogs/valid',target_size=(244,244),classes=['cats','dogs'],batch_size=10)
test_batch=ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input) \
.flow_from_directory(directory='/content/Cat-vs-Dogs/test',target_size=(244,244),classes=['cats','dogs'],batch_size=10,shuffle=False)
model1=Sequential([
Conv2D(filters=32,kernel_size=(3,3),activation='relu', padding='same',input_shape=(224,224,3)),
MaxPool2D(pool_size=(2,2),strides=2),
Conv2D(filters=64,kernel_size=(3,3),activation='relu', padding='same'),
MaxPool2D(pool_size=(2,2),strides=2),
Flatten(),
Dense(units=2,activation='softmax')
])
model1.summary()
model1.compile(optimizer=Adam(learning_rate=0.0001),loss='categorical_crossentropy',metrics=['accuracy'])
model1.fit(x=train_batch,validation_data=valid_batch,epochs=10,verbose=2)
```
The Error Output occurs like below
```
Epoch 1/10
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-87-3dd4821591bb> in <module>()
----> 1 model1.fit(x=train_batch,validation_data=valid_batch,epochs=10,verbose=2)
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
53 ctx.ensure_initialized()
54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
57 if name is not None:
InvalidArgumentError: Graph execution error:
Detected at node 'sequential_4/flatten_4/Reshape' defined at (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py", line 846, in launch_instance
app.start()
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py", line 499, in start
self.io_loop.start()
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 132, in start
self.asyncio_loop.run_forever()
File "/usr/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
self._run_once()
File "/usr/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once
handle._run()
File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 122, in _handle_events
handler_func(fileobj, events)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 577, in _handle_events
self._handle_recv()
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 606, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 556, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2828, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-76-3dd4821591bb>", line 1, in <module>
model1.fit(x=train_batch,validation_data=valid_batch,epochs=10,verbose=2)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1384, in fit
tmp_logs = self.train_function(iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py", line 1096, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/sequential.py", line 374, in call
return super(Sequential, self).call(inputs, training=training, mask=mask)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/functional.py", line 452, in call
inputs, training=training, mask=mask)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/functional.py", line 589, in _run_internal_graph
outputs = node.layer(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py", line 1096, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/layers/core/flatten.py", line 96, in call
return tf.reshape(inputs, flattened_shape)
Node: 'sequential_4/flatten_4/Reshape'
Input to reshape is a tensor with 2381440 values, but the requested shape requires a multiple of 200704
[[{{node sequential_4/flatten_4/Reshape}}]] [Op:__inference_train_function_3624]
</code></pre>
|
<p>The error is because the target size is (244, 244) and the input shape given in the model is (224, 224, 3). You can either change the target size to (224, 224) or change the input shape to (244, 244, 3).</p>
<p>Change the <code>input_shape</code> to <code>(244, 244, 3)</code></p>
<pre><code>Conv2D(filters=32,kernel_size=(3,3),activation='relu', padding='same',input_shape=(244, 244, 3)),
</code></pre>
<p>OR</p>
<p>Change the <code>target_size</code> to <code>(224, 224)</code> in train_batch, valid_batch and test_batch</p>
<pre><code>train_batch=ImageDataGenerator(preprocessing_function=tf.keras.applications.vgg16.preprocess_input) \
.flow_from_directory(directory='/content/Cat-vs-Dogs/train',target_size=(224, 224),classes=['cats','dogs'],batch_size=10)
</code></pre>
|
python|tensorflow|machine-learning|keras|deep-learning
| 0
|
5,684
| 72,550,969
|
Interpolating a time series with interp1d using only numpy
|
<p>I want to plot a time series with numpy and matplotlib, using markers for the exact points, and interpolation. Basically this (data is dummy, but functionality is the same, note that distance between time-points may vary):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
T = [
np.datetime64('2020-01-01T00:00:00.000000000'),
np.datetime64('2020-01-02T00:00:00.000000000'),
np.datetime64('2020-01-03T00:00:00.000000000'),
np.datetime64('2020-01-05T00:00:00.000000000'),
np.datetime64('2020-01-06T00:00:00.000000000'),
np.datetime64('2020-01-09T00:00:00.000000000'),
np.datetime64('2020-01-13T00:00:00.000000000'),
]
Z = [543, 234, 435, 765, 564, 235, 345]
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot()
ax.plot(T, Z, 'o-')
</code></pre>
<p><a href="https://i.stack.imgur.com/n5jme.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n5jme.png" alt="enter image description here" /></a></p>
<p>However, the interpolation done here is just connecting the points. I want to include spline interpolation and other kinds using scipy's interp1d. So, I tried replacing the last line with the following:</p>
<pre class="lang-py prettyprint-override"><code>ax.plot(T,Z, 'o')
ax.plot(T,interp1d(T, Z)(T), '-')
</code></pre>
<p>and I get the following error:</p>
<pre><code>UFuncTypeError: ufunc 'true_divide' cannot use operands with types dtype('float64') and dtype('<m8[ns]')
</code></pre>
<p>Reading <a href="https://stackoverflow.com/questions/56500546/ufunc-true-divide-cannot-use-operands-with-types-dtypefloat64-and-dtypem8">this answer</a>, I read that during interpolation I should divide <code>T</code> by <code>np.timedelta64(1, 's')</code>, like this:</p>
<pre class="lang-py prettyprint-override"><code>ax.plot(T,Z, 'o')
ax.plot(T,interp1d(T/np.timedelta64(1, 's'))(T), '-')
</code></pre>
<p>however, I get an even weirder error:</p>
<pre><code>ufunc 'true_divide' cannot use operands with types dtype('<M8[ns]') and dtype('<m8[s]')
</code></pre>
<p>What should I do?</p>
|
<p>The data type of any element in <code>T</code> is <code>np.datetime64</code> and not <code>np.timedelta64</code>.</p>
<p>Thus, convert the dtype of all elements of T to <code>np.timedelta64</code> by creating a numpy array with datatype <a href="https://numpy.org/doc/stable/reference/generated/numpy.dtype.kind.html" rel="nofollow noreferrer"><code>m</code></a>:</p>
<pre><code>T = np.array([
np.datetime64('2020-01-01T00:00:00.000000000'),
np.datetime64('2020-01-02T00:00:00.000000000'),
np.datetime64('2020-01-03T00:00:00.000000000'),
np.datetime64('2020-01-05T00:00:00.000000000'),
np.datetime64('2020-01-06T00:00:00.000000000'),
np.datetime64('2020-01-09T00:00:00.000000000'),
np.datetime64('2020-01-13T00:00:00.000000000'),
    ], dtype='m')
</code></pre>
<p>Then, as <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp1d.html" rel="nofollow noreferrer">the documentation</a> suggests, we have to pass <code>x</code> and <code>y</code> that are convertible to float like values to <code>scipy.interpolate.interp1d</code> to get a interpolation function. We'll use a method suggested in this <a href="https://stackoverflow.com/a/56500952/7789963">answer</a> to do that:</p>
<pre><code># Get an interpolation function f
f = scipy.interpolate.interp1d(x=T/np.timedelta64(1, 's'), y=Z)
</code></pre>
<p>Finally, we can use the interpolated function as follows for plotting:</p>
<pre><code>ax.plot(T, f(T/np.timedelta64(1, 's')), '-')
</code></pre>
<p>Combining everything, we get the following output:</p>
<p><a href="https://i.stack.imgur.com/OIldO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OIldO.png" alt="enter image description here" /></a></p>
<p>The code that can reproduce the image:</p>
<pre><code>import numpy as np
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
T = np.array([
np.datetime64('2020-01-01T00:00:00.000000000'),
np.datetime64('2020-01-02T00:00:00.000000000'),
np.datetime64('2020-01-03T00:00:00.000000000'),
np.datetime64('2020-01-05T00:00:00.000000000'),
np.datetime64('2020-01-06T00:00:00.000000000'),
np.datetime64('2020-01-09T00:00:00.000000000'),
np.datetime64('2020-01-13T00:00:00.000000000'),
], dtype='m')
Z = [543, 234, 435, 765, 564, 235, 345]
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot()
ax.plot(T, Z, 'o')
f = interp1d(x=T/np.timedelta64(1, 's'), y=Z)
ax.plot(T, f(T/np.timedelta64(1, 's')), '-')
plt.show()
</code></pre>
|
python|numpy|matplotlib|scipy|interpolation
| 1
|
5,685
| 72,538,834
|
How to do a nested loop with 2 columns for pandas dataframe with counter?
|
<p>I am using python and pandas. I have a bunch of unstructured survey data.</p>
<p>I have a dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Type</th>
<th>Activity</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sport</td>
<td>rowing</td>
</tr>
<tr>
<td>Sport</td>
<td>Surfing</td>
</tr>
<tr>
<td>Sport</td>
<td>Basketball</td>
</tr>
<tr>
<td>Sport</td>
<td>Dancing</td>
</tr>
<tr>
<td>Sport</td>
<td>Dancing</td>
</tr>
<tr>
<td>Studies</td>
<td>science</td>
</tr>
<tr>
<td>Studies</td>
<td>Math</td>
</tr>
<tr>
<td>Studies</td>
<td>History</td>
</tr>
</tbody>
</table>
</div>
<p>I have survey data that says:
"Sarah does Basketball and Math"
"Kilian does Math"
"Lorenzo does history"
"Robert does dancing"
"Rachel does basketball and dancing"
I want a table that says which students do one or the other and which students do both. (the real data has 30 different sub categories)</p>
<p>I want to create a table like below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Student</th>
<th style="text-align: center;">Sports</th>
<th style="text-align: right;">Studies</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">"Sarah does Basketball and Math"</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">"Kilian does Math"</td>
<td style="text-align: center;">0</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">""Lorenzo does history"</td>
<td style="text-align: center;">0</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">"Robert does dancing"</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: left;">"Rachel does basketball and dancing"</td>
<td style="text-align: center;">2</td>
<td style="text-align: right;">0</td>
</tr>
</tbody>
</table>
</div>
<p>I think I need to say</p>
<pre><code>Distinct_Activities = dataframe.Activity.nunique()
</code></pre>
<p>#split survey data to be a list of words.
counter = 0
Then say:</p>
<pre><code>For i in Survey_data:
while j = Disitinct_Activities[0]
</code></pre>
<ul>
<li>if you compare a list of words from sentence and your words in data frame where type = sport and one Activity is similar then counter +1 then go to the next Activity till you finish that type. then return count in a dictionary to a column for how many times it hit that section. then go to next sentence and compare all activities in #Activity or go to next part of Distinct_Activities[1]
Then loop back up to next sentence once done.</li>
</ul>
<p>I am struggling figuring out how to loop through the dataframe using type and activity. I tried to create 30 different lists and dataframes but that didn't go well. Can anyone help me create this inner loop strategy.</p>
<hr />
<p>PROGRESS and ERROR</p>
<pre><code>import pandas as pd
from collections import Counter
# read the Type_Activity, Student files
df1 = pd.read_csv('df1.csv')
df2 = pd.read_csv('df2.csv')
# create a dictionary with (activity, Type)
### activity_type = dict(zip(df1['Activity'].str.lower(), df1['Type'].str.lower()))
activity_type = df1.groupby('Type')['Activity'].apply(list).to_dict()
df2 = df2.join( # join the df2 with the new dataframe
pd.json_normalize( # convert the dictionaries into columns
df2['Student'].apply( # apply the following function on the "Student" column
lambda x: Counter([ # count the types
type_
for activity in x.strip().lower().split() # lower then split the student text into words
for type_ in [activity_type.get(activity)] # just a hack to ignore the normal words
if type_ # the hole purpose of the pervoius line is to add this check
])
)
).fillna(0) # fill the NAN values with zeros then convert to int for better look
)
Traceback (most recent call last):
File "/mnt/06082022_CreateFactorization.py", line 33, in <module>
pd.json_normalize(
File "/opt/conda/lib/python3.8/site-packages/pandas/core/frame.py", line 7841, in applymap
return self.apply(infer).__finalize__(self, "applymap")
File "/opt/conda/lib/python3.8/site-packages/pandas/core/frame.py", line 7765, in apply
return op.get_result()
File "/opt/conda/lib/python3.8/site-packages/pandas/core/apply.py", line 185, in get_result
return self.apply_standard()
File "/opt/conda/lib/python3.8/site-packages/pandas/core/apply.py", line 276, in apply_standard
results, res_index = self.apply_series_generator()
File "/opt/conda/lib/python3.8/site-packages/pandas/core/apply.py", line 290, in apply_series_generator
results[i] = self.f(v)
File "/opt/conda/lib/python3.8/site-packages/pandas/core/frame.py", line 7839, in infer
return lib.map_infer(x.astype(object)._values, func, ignore_na=ignore_na)
File "pandas/_libs/lib.pyx", line 2467, in pandas._libs.lib.map_infer
TypeError: int() argument must be a string, a bytes-like object or a number, not 'Counter'
</code></pre>
|
<p>It's best to avoid loops as much as possible; here we replaced the two loops with a dictionary <code>activity_type</code> that maps every activity to its type.</p>
<p>NOTE: the <code>df1</code> is the Type, Activity <code>DataFrame</code></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from collections import Counter
# read the Type_Activity, Student files
df1 = pd.read_csv('df1.csv')
df2 = pd.read_csv('df2.csv')
# create a dictionary with (activity, Type)
activity_type = dict(zip(df1['Activity'].str.lower(), df1['Type'].str.lower()))
df2 = df2.join( # join the df2 with the new dataframe
pd.json_normalize( # convert the dictionaries into columns
df2['Student'].apply( # apply the following function on the "Student" column
lambda x: Counter([ # count the types
type_
for activity in x.strip().lower().split() # lower then split the student text into words
for type_ in [activity_type.get(activity)] # just a hack to ignore the normal words
                for type_ # the whole purpose of the previous line is to add this check
])
)
).fillna(0) # fill the NAN values with zeros then convert to int for a better look
)
</code></pre>
<p><strong>update:</strong></p>
<p>my <code>activity_type</code> dictionary was mapping the activity to a single string (type), assuming that every activity has only one type.</p>
<p>your <code>activity_type</code> dictionary was mapping the activity to a list of types, which will be better if activities may have more than one type.</p>
<p>NOTE: I changed it to <code>set</code> instead of <code>list</code> to avoid duplications and for better performance.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from collections import Counter
# read the Type_Activity, Student files
df1 = pd.read_csv('df1.csv')
df2 = pd.read_csv('df2.csv')
# lower case the columns then create a dictionary with (activity, [Types])
df1[['Activity', 'Type']] = df1[['Activity', 'Type']].applymap(lambda x: x.lower())
activity_types = df1.groupby('Activity')['Type'].apply(set).to_dict()
# join the df2 with the new dataframe
df2 = df2.join(
# convert the dictionaries into columns
pd.json_normalize(
# apply the following function on the "Student" column
df2['Student'].apply(
# count the types
lambda x: Counter([
type_
# lower then split the student text into words
for activity in x.strip().lower().split()
# ignore the normal words
for type_ in activity_types.get(activity, [])
])
)
).fillna(0).applymap(int) # fill the NAN values with zeros then convert to int for better look
)
</code></pre>
<p><strong>output:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Student</th>
<th style="text-align: right;">sport</th>
<th style="text-align: right;">studies</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Sarah does Basketball and Math</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">Kilian does Math</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">Lorenzo does history</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">Robert does dancing</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: left;">Rachel does basketball and dancing</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">0</td>
</tr>
</tbody>
</table>
</div>
|
python|pandas|loops
| 0
|
5,686
| 59,755,671
|
How to use a trained TensorFlow model in a plain Javascript webapp
|
<p>I've been following this tutorial:
<a href="https://docs.annotations.ai/workshops/object-detection/6.html" rel="nofollow noreferrer">https://docs.annotations.ai/workshops/object-detection/6.html</a></p>
<p>And got to step 6, once I get to the webapp example it's done in ReactJS and I can't figure out how to convert it to plain JS for our particular use case. I was able to get this far:</p>
<p>scripts.js</p>
<pre><code>var videoRef = document.getElementById("video");
if(navigator.mediaDevices.getUserMedia) {
try {
let stream = await navigator.mediaDevices.getUserMedia({ audio: false, video: { facingMode: { exact: "environment" } } });
} catch(error) {}
navigator.mediaDevices.enumerateDevices()
.then(function(devices) {
// Remove non camera devices
for(var i = devices.length - 1; i>=0; i--) {
var device = devices[i];
if(device.kind != 'videoinput') {
devices.splice(i, 1);
}
if(!device.kind) {
devices.splice(i, 1);
}
}
// Force camera to back camera for mobile devices
var activeDevice = devices[0];
for(var i in devices) {
var device = devices[i];
if(device.label) {
if(device.label.toLowerCase().indexOf('back') > -1) {
activeDevice = device;
}
}
}
const constraints = {video: {deviceId: activeDevice.deviceId ? {exact: activeDevice.deviceId} : undefined}, audio: false};
const webcamPromise = navigator.mediaDevices
.getUserMedia(constraints)
.then(stream => {
window.stream = stream;
videoRef.srcObject = stream;
return new Promise(resolve => {
videoRef.onloadedmetadata = () => {
resolve();
};
});
}, (error) => {
console.error(error, 'camera error');
});
const loadlModelPromise = cocoSsd.load({modelUrl: 'https://nanonets.s3-us-west-2.amazonaws.com/uploadedfiles/87be4e38-b40d-4217-898b-fd619319c2e4/ssd/model.json'});
Promise.all([loadlModelPromise, webcamPromise])
.then(values => {
detectFromVideoFrame(values[0], videoRef);
})
.catch(error => {
console.error(error, 'error loading promises');
})
})
}
function detectFromVideoFrame(model, video) {
model.detect(video).then(predictions => {
console.log(predictions, 'predictions found');
requestAnimationFrame(() => {
detectFromVideoFrame(model, video);
});
}, (error) => {
console.error(error, 'Tensorflow Error');
});
};
</code></pre>
<p>In the HTML I include a <code>coco-ssd.js</code> file which I also believe I need to modify, but i'm not sure how to generate that file:</p>
<pre><code><script src="/lib/coco-ssd.js"></script>
<script src="https://unpkg.com/babel-standalone@6.26.0/babel.min.js"></script>
</code></pre>
<p>That code works with a <code>pre-defined coco-ssd model</code> but from following the tutorial I can't figure out how to use my own model, here is the files that were generated:</p>
<p><a href="https://i.stack.imgur.com/cc9rc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cc9rc.png" alt="Picture of files generated from tutorial"></a></p>
<p>Now I need to find out how to use those files in my Javascript above.</p>
<p>I think I need to change these lines:</p>
<p><code>const loadlModelPromise = cocoSsd.load({modelUrl: 'https://nanonets.s3-us-west-2.amazonaws.com/uploadedfiles/87be4e38-b40d-4217-898b-fd619319c2e4/ssd/model.json'});</code></p>
<p>And include a different <code>coco-ssd.js</code> file:</p>
<p><code><script src="/lib/coco-ssd.js"></script></code></p>
<p>But it's not clear what files to include from the generated folder structure, that's what I'm getting stuck on.</p>
|
<p>You can use the script tags below for TensorFlow.js and the coco-ssd model.</p>
<pre><code><!-- Load TensorFlow.js. This is required to use coco-ssd model. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"> </script>
<!-- Load the coco-ssd model. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/coco-ssd"> </script>
</code></pre>
<p>Step 1: Create an index.html file <br/>
<strong><em>index.html</em></strong> <br/>
Note: You can use the modelUrl functionality to load your own model hosted somewhere</p>
<pre><code><!-- Load TensorFlow.js. This is required to use coco-ssd model. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"> </script>
<!-- Load the coco-ssd model. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/coco-ssd"> </script>
<!-- Replace this with your image. Make sure CORS settings allow reading the image! -->
<img id="img" src="https://images.pexels.com/photos/20787/pexels-photo.jpg?auto=compress&cs=tinysrgb&dpr=1&w=500" crossorigin="anonymous"/>
<!-- Place your code in the script tag below. You can also use an external .js file -->
<script>
// Notice there is no 'import' statement. 'cocoSsd' and 'tf' is
// available on the index-page because of the script tag above.
const img = document.getElementById('img');
// Load the model.
//Note: cocoSsd.load() will also work on this without parameter. It will default to the coco ssd model
cocoSsd.load({ modelUrl: 'PATH TO MODEL JSON FILE' }).then(model => {
// detect objects in the image.
model.detect(img).then(predictions => {
console.log('Predictions: ', predictions);
});
});
</script>
</code></pre>
<p>Step 2: Let's say you have a nodejs server to run the index file and serve it locally by accessing localhost:3000. <br/>
<strong><em>Server.js</em></strong><br/></p>
<pre><code>const express = require('express');
const app = express();
app.get('/',function(req,res) {
res.sendFile('./index.html', { root: __dirname });
});
const port = 3000
app.listen(port, function(){
console.log(`Listening at port ${port}`);
})
</code></pre>
|
javascript|tensorflow
| 0
|
5,687
| 59,671,362
|
Downloading blocks of data from census, how to wrtie to multiple csvs to not exceed memory
|
<p>Suppose I have a list of variable codes I am downloading from the census API:</p>
<p>Example:</p>
<pre><code>variable_list = [
'B08006_017E',
'B08016_002E',
'B08016_003E',
'B08016_004E',
...
]
</code></pre>
<p>Now, given memory constraints, I cannot put all of this data into one csv file. I want to create a way in which I place blocks of 100 variables from the variable list into a number of csv files. For example, if I have 200 variables then I would have 2 csv files: one with the first 100 variables and one with the second 100 variables. I hope that is clear.</p>
<p>This is how I am currently downloading the data:</p>
<pre><code>import pandas as pd
import censusdata
pd.set_option('display.expand_frame_repr', False)
pd.set_option('display.precision', 2)
#import statsmodels.formula.api as sm
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
import censusgeocode as cg
import numpy as np
from numbers import Number
import plotly
import matplotlib.pyplot as plt
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
import requests
import pandas
import geopandas
import json
import math
from haversine import haversine
from ipfn import ipfn
import networkx
from matplotlib import pyplot
from matplotlib import patheffects
from shapely.geometry import LineString, MultiLineString
variable_list1 = [
'B08006_017E',
'B08016_002E'
'B08016_003E',
'B08016_004E'
]
all_variable_lists = [variable_list1]
print(len(all_variable_lists[0]))
#2) For each year, download the relevant variables for each tract
def download_year(year,variable_list,State,County,Tract):
df = censusdata.download('acs5', year, censusdata.censusgeo([('state',State),('county',County),('tract',Tract)]), variable_list, key = 'e39a53c23358c749629da6f31d8f03878d4088d6')
df['Year']=str(year)
return df
#3) Define function to download for a single year and state
def callback_arg(i,variable_list,year):
try:
print('Downloading - ',year,'State', i,' of 57')
if i<10:
df = download_year(year,variable_list,'0'+str(i),'*','*')
return df
if i==51:
df = download_year(year,variable_list,str(i),'*','*')
return df
else:
df = download_year(year,variable_list,str(i),'*','*')
return df
except:
pass
#3) Function to download for all states and all years, do some slight formatting
def download_all_data(variable_list,max_year):
df=download_year(2012,variable_list,'01','*','*')
for year in range(2012,max_year+1):
if year == 2012:
for i in range(0,57):
df=df.append(callback_arg(i,variable_list,year))
else:
for i in range(0,57):
df=df.append(callback_arg(i,variable_list,year))
df2=df.reset_index()
df2=df2.rename(columns = {"index": "Location+Type"}).astype(str)
df2['state']=df2["Location+Type"].str.split(':').str[0].str.split(', ').str[2]
df2['Census_tract']=df2["Location+Type"].str.split(':').str[0].str.split(',').str[0].str.split(' ').str[2][0]
df2['County_name']=df2["Location+Type"].str.split(':').str[0].str.split(', ').str[1]
return(df2)
#4) Some slight formatting
def write_to_csv(df2,name = 'test'):
df2.to_csv(name)
#5) The line below is commented out, but should run the entire download sequence
def write_to_csv(df, ide):
df.to_csv('test' + str(ide) + '.csv')
list_of_dfs = []
for var_list in all_variable_lists:
list_of_dfs.append(download_all_data(var_list, 2012))
x1 = list_of_dfs[0].reset_index()
# x3 = pd.merge(x1,x2, on=['index','Location+Type','Year','state','Census_tract','County_name'])
write_to_csv(x1,1)
</code></pre>
<p>If anyone can give me some ideas on how to achieve what I want this would greatly help me. Thank you.</p>
|
<p>It looks like you're already chunking the variable_lists here:</p>
<pre class="lang-py prettyprint-override"><code>for var_list in all_variable_lists:
list_of_dfs.append(download_all_data(var_list, 2012))
</code></pre>
<p>Just make sure each <code>var_list</code> has only 100 items. Then chunk the csv writing in the same way, using <code>enumerate</code> to increment the index for filename:</p>
<pre class="lang-py prettyprint-override"><code>for index, out_list in enumerate(list_of_dfs):
write_to_csv(out_list.reset_index(),index)
</code></pre>
<p>If you're just looking to break up the final output at write time:</p>
<pre class="lang-py prettyprint-override"><code>for index, out_list in enumerate(np.array_split(x1, 100)):
write_to_csv(out_list,index)
</code></pre>
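<p>For the chunking itself, a minimal sketch (assuming <code>variable_list</code> holds the complete set of census variable codes) that splits it into groups of at most 100:</p>
<pre><code>chunk_size = 100
all_variable_lists = [variable_list[i:i + chunk_size]
                      for i in range(0, len(variable_list), chunk_size)]
</code></pre>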
|
python|pandas|census
| 1
|
5,688
| 59,616,436
|
How to reset initialization in TensorFlow 2
|
<p>If I try to change parallelism in TensorFlow 2 after initializing a <code>tf.Variable</code>,</p>
<pre><code>import tensorflow as tf
_ = tf.Variable([1])
tf.config.threading.set_inter_op_parallelism_threads(1)
</code></pre>
<p>I get an error</p>
<blockquote>
<p>RuntimeError: Inter op parallelism cannot be modified after initialization.</p>
</blockquote>
<p>I understand why that could be, but it (and possibly other factors) are causing my tests to interfere with each other. For example</p>
<pre><code>def test_model(): # this test
v = tf.Variable([1])
...
def test_threading(): # is breaking this test
tf.config.threading.set_inter_op_parallelism_threads(1)
...
</code></pre>
<p>How do I reset the TensorFlow state so that I can set the threading?</p>
|
<p>This is achievable in a "hacky" way. But I'd recommend doing this the right way (i.e. by setting up config at the beginning).</p>
<pre><code>import tensorflow as tf
from tensorflow.python.eager import context
_ = tf.Variable([1])
context._context = None
context._create_context()
tf.config.threading.set_inter_op_parallelism_threads(1)
</code></pre>
<p><strong>Edit</strong>: what is meant by setting up the config at the beginning:</p>
<pre><code>import tensorflow as tf
from tensorflow.python.eager import context
tf.config.threading.set_inter_op_parallelism_threads(1)
_ = tf.Variable([1])
</code></pre>
<p>But there could be circumstances where you cannot always do this; the above is merely the conventional way of setting up config in <code>tf</code>. So if your circumstances don't allow you to fix <code>tf.config</code> at the beginning, you have to reset your <code>tf.eager.context</code> as shown in the solution above.</p>
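<p>For the test-interference case specifically, one way to apply the same reset per test is an autouse pytest fixture (a sketch that relies on the same private API as above):</p>
<pre><code>import pytest
from tensorflow.python.eager import context

@pytest.fixture(autouse=True)
def fresh_tf_context():
    # run the test, then rebuild the eager context so that config changes
    # made by one test cannot leak into the next one
    yield
    context._context = None
    context._create_context()
</code></pre>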
|
python|tensorflow|pytest|tensorflow2.0
| 3
|
5,689
| 59,818,185
|
memory error reading big size csv in pandas
|
<p>My laptop's memory is 8 GB and I was trying to read and process a big CSV file, and I got memory issues. I found a solution, which is using <strong>chunksize</strong> to process the file chunk by chunk, but apparently when using chunksize the file type becomes a <strong>TextFileReader</strong> and the code I was using to process normal CSVs no longer works. This is the code I'm trying to use to count how many sentences are inside the CSV file.</p>
<pre><code>wdata = pd.read_csv(fileinput, nrows=0,).columns[0]
skip = int(wdata.count(' ') == 0)
wdata = pd.read_csv(fileinput, names=['sentences'], skiprows=skip, chunksize=1000)
data = wdata.count()
print(data)
</code></pre>
<p>The error I'm getting is:</p>
<pre><code>Traceback (most recent call last):
File "table.py", line 24, in <module>
data = wdata.count()
AttributeError: 'TextFileReader' object has no attribute 'count'
</code></pre>
<p>I also tried another way around it by running this code:</p>
<pre><code>
TextFileReader = pd.read_csv(fileinput, chunksize=1000) # the number of rows per chunk
dfList = []
for df in TextFileReader:
dfList.append(df)
df = pd.concat(dfList,sort=False)
print(df)
</code></pre>
<p>and it gives this error</p>
<pre><code>
data = self._reader.read(nrows)
File "pandas/_libs/parsers.pyx", line 881, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 908, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 950, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 937, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 2132, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 2 fields in line 3, saw 4
</code></pre>
|
<p>You have to iterate over the chunks:</p>
<pre><code>csv_length = 0
for chunk in pd.read_csv(fileinput, names=['sentences'], skiprows=skip, chunksize=10000):
csv_length += chunk.count()
print(csv_length )
</code></pre>
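<p>Note that <code>count()</code> skips NaN values; if you just want the total number of rows, summing <code>len(chunk)</code> per chunk gives a plain integer instead of a Series:</p>
<pre><code>row_total = 0
for chunk in pd.read_csv(fileinput, names=['sentences'], skiprows=skip, chunksize=10000):
    row_total += len(chunk)   # counts every row in the chunk, including NaN rows
print(row_total)
</code></pre>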
|
python|pandas|csv|chunking
| 1
|
5,690
| 59,551,405
|
rolling function does not print all values
|
<p>I am trying to understand the rolling function in pandas with Python. Here is my example code:</p>
<pre><code># importing pandas as pd
import pandas as pd
# By default the "date" column was in string format,
# we need to convert it into date-time format
# parse_dates =["date"], converts the "date" column to date-time format
# Resampling works with time-series data only
# so convert "date" column to index
# index_col ="date", makes "date" column
df = pd.read_csv("apple.csv", parse_dates = ["date"], index_col = "date")
print (df.close.rolling(3).sum())
print (df.close.rolling(3, win_type ='triang').sum())
</code></pre>
<p>The CSV input file has 255 entries, but I only get a few entries in the output; I get "..." between 2018-10-04 and 2017-12-26. I verified the input file: it has a lot more valid entries between these dates. </p>
<pre><code>date
2018-11-14 NaN
2018-11-13 NaN
2018-11-12 578.63
2018-11-09 590.87
2018-11-08 607.13
2018-11-07 622.91
2018-11-06 622.21
2018-11-05 615.31
2018-11-02 612.84
2018-11-01 631.29
2018-10-31 648.56
2018-10-30 654.38
2018-10-29 644.40
2018-10-26 641.84
2018-10-25 648.34
2018-10-24 651.19
2018-10-23 657.62
2018-10-22 658.47
2018-10-19 662.69
2018-10-18 655.98
2018-10-17 656.52
2018-10-16 659.36
2018-10-15 660.70
2018-10-12 661.62
2018-10-11 653.92
2018-10-10 652.92
2018-10-09 657.68
2018-10-08 667.00
2018-10-05 674.93
2018-10-04 676.05
...
2017-12-26 512.25
2017-12-22 516.18
2017-12-21 520.59
2017-12-20 524.37
2017-12-19 523.90
2017-12-18 525.31
2017-12-15 524.93
2017-12-14 522.61
2017-12-13 518.46
2017-12-12 516.19
2017-12-11 516.64
2017-12-08 513.74
2017-12-07 511.36
2017-12-06 507.70
2017-12-05 507.97
2017-12-04 508.45
2017-12-01 510.49
2017-11-30 512.70
2017-11-29 512.38
2017-11-28 514.40
2017-11-27 516.64
2017-11-24 522.13
2017-11-22 524.02
2017-11-21 523.07
2017-11-20 518.08
2017-11-17 513.27
2017-11-16 511.23
2017-11-15 510.33
2017-11-14 511.52
2017-11-13 514.39
Name: close, Length: 254, dtype: float64
</code></pre>
<p>thank you for your help ...</p>
|
<p><code>...</code> just means that <code>pandas</code> isn't showing you all the rows, that's where the 'missing' ones are.</p>
<p>To display all rows:</p>
<pre><code>with pd.option_context("display.max_rows", None):
print (df.close.rolling(3, win_type ='triang').sum())
</code></pre>
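<p>If you would rather change the setting for the whole session instead of a single block, the equivalent is:</p>
<pre><code>pd.set_option("display.max_rows", None)    # show every row from now on
print(df.close.rolling(3, win_type='triang').sum())
pd.reset_option("display.max_rows")        # restore the default afterwards
</code></pre>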
|
python|pandas
| 1
|
5,691
| 61,858,321
|
Plot specific months from whole data set?
|
<p>From a whole data set, I need to plot the maximum & minimum temperatures for just the months of January and July. Column 2 is the date, and columns 8 and 9 are the 'TMAX' and 'TMIN.' This is what I have so far:</p>
<pre><code>napa3=pd.read_csv('MET 51 Lab #10 data (Pandas, NAPA).csv',usecols=[2,8,9])
time2=pd.to_datetime(napa3['DATE'],format='%Y-%m-%d')
imon=time2.dt.month
jj=(imon==1)&(imon==7)
data_jj=napa3.loc[jj]
data_jj.plot.hist(title='TMAX & TMIN for January and July')
plt.show()
</code></pre>
<p>I keep getting the error: "TypeError: no numeric data to plot"
Why is this? </p>
|
<p>The problem can arise because the temperature values are saved as "object" (string) dtype.
Note that <code>read_csv</code> already returns a DataFrame, so start by checking the dtypes of what you read in:</p>
<pre><code>print(napa3.dtypes)
</code></pre>
<p>Once you have confirmed that the temperature columns are strings/objects, convert their values to floats:</p>
<pre><code>napa3['your_temp_column_label'] = napa3['your_temp_column_label'].astype(float)
</code></pre>
<p>This should hopefully work. Or, similarly:</p>
<pre><code>napa3['your_temp_column_label'] = pd.to_numeric(napa3['your_temp_column_label'], errors='coerce')
</code></pre>
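<p>Putting that together with the month filter from the question, a minimal end-to-end sketch could look like this (it assumes the three columns read in are named <code>DATE</code>, <code>TMAX</code> and <code>TMIN</code>; also note that <code>(imon==1)&(imon==7)</code> can never be True, since a month cannot equal both 1 and 7, so <code>isin</code> or <code>|</code> is needed instead):</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

napa3 = pd.read_csv('MET 51 Lab #10 data (Pandas, NAPA).csv', usecols=[2, 8, 9])
napa3['DATE'] = pd.to_datetime(napa3['DATE'], format='%Y-%m-%d')

# make sure the temperature columns really are numeric
napa3[['TMAX', 'TMIN']] = napa3[['TMAX', 'TMIN']].apply(pd.to_numeric, errors='coerce')

# keep only the January and July rows
data_jj = napa3[napa3['DATE'].dt.month.isin([1, 7])]

data_jj[['TMAX', 'TMIN']].plot.hist(alpha=0.5, title='TMAX & TMIN for January and July')
plt.show()
</code></pre>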
|
python|pandas|datetime
| 0
|
5,692
| 61,885,775
|
Rolling mean over the last n-days with if statement
|
<p>I have the following dataframe:</p>
<pre><code>entry_time_flat route_id time_slot duration n_of_trips
2019-09-02 00:00:00 1_2 0-6 10 29
2019-09-04 00:00:00 3_4 6-12 15 10
2019-09-06 00:00:00 1_2 0-6 20 30
2019-09-06 00:00:00 1_2 18-20 43 30
...
</code></pre>
<p>I would like to compute the mean value of "duration" - creating a new feature - over the last n-days (n_days = 30), with the following condition:</p>
<pre><code>if "n_of_trips" >= 30:
mean of "duration", over the last 30 days and all the past transactions, grouping by "route_id" & "time_slot"
else:
mean of "duration", over the last 30 days and all the past transactions, grouping by "route_id" only
</code></pre>
<p>Unfortunately, splitting the dataframe into two chunks (>= and < 30 n_of_trips) would not yield an acceptable result, since all transactions must be included when computing the mean.</p>
<p>How can I implement an if-statement while computing rolling mean over the last n-days?</p>
|
<p>I am not completely sure I understood your goal here, but I'll try:</p>
<pre><code>import pandas as pd
data = {'entry_time_flat': ['2019-09-02 00:00:00', '2019-09-04 00:00:00', '2019-09-06 00:00:00', '2019-09-06 00:00:00'], 'route_id': ['1_2', '3_4', '1_2', '1_2'], 'time_slot': ['0-6', '6-12', '0-6', '18-20'], 'duration': [10, 15, 20, 43], 'n_of_trips': [29, 10, 30, 30]}
df = pd.DataFrame(data=data)
df.entry_time_flat = pd.to_datetime(df.entry_time_flat)
df.set_index('entry_time_flat', inplace=True)
df['duration_rolling'] = df.duration.rolling('30d', min_periods=1).mean()
print(df)
print(df[df.n_of_trips >= 30].groupby(['route_id']).mean())
print(df[df.n_of_trips >= 30].groupby(['time_slot']).mean())
print(df[df.n_of_trips < 30].groupby(['route_id']).mean())
</code></pre>
<p>Output:</p>
<pre><code>
route_id time_slot duration n_of_trips duration_rolling
entry_time_flat
2019-09-02 1_2 0-6 10 29 10.0
2019-09-04 3_4 6-12 15 10 12.5
2019-09-06 1_2 0-6 20 30 15.0
2019-09-06 1_2 18-20 43 30 22.0
duration n_of_trips duration_rolling
route_id
1_2 31.5 30.0 18.5
duration n_of_trips duration_rolling
time_slot
0-6 20 30 15.0
18-20 43 30 22.0
duration n_of_trips duration_rolling
route_id
1_2 10 29 10.0
3_4 15 10 12.5
</code></pre>
<p>In the outputs you can of course dismiss <code>duration</code>.</p>
<p>Was this something you were looking for?</p>
|
python|pandas|pandas-groupby|rolling-computation
| 0
|
5,693
| 61,831,227
|
How to solve Import object_detection/protos/image_resizer.proto but not used protobuf compilation
|
<p>I have an issue when I try to compile Protobuf in order to use the TensorFlow Object Detection API.</p>
<p>In Jupyter I tried to launch object_detection_tutorial.</p>
<p>I got this error:</p>
<p>jupyter protobuf error:<br />
<a href="https://i.stack.imgur.com/YLtAl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YLtAl.png" alt="jupyter protobuf error" /></a></p>
|
<p>Haven't tried to build the latest version of the api, but perhaps that's a bug? Try removing line 5 from input_reader.proto:</p>
<pre><code>//import "object_detection/protos/image_resizer.proto";
</code></pre>
<p>It looks as though it isn't used, perhaps they forgot to remove it.</p>
|
python|tensorflow
| 4
|
5,694
| 57,736,228
|
Errors when load selected raw data into dataframe
|
<p>I want to select three groups of JSON data and load them into a dataframe, but I get the error "string indices must be integers". Can anyone kindly tell me the reason for it?</p>
<p>The code is as follows, and I have also attached a screenshot:</p>
<pre><code>for currency in data:
if '/BTC' in currency['symbol']:
change_daily=currency['percentage']
name=currency["symbol"]
price = currency['lastPrice']
df_binance.append({"NAME":name,"24h_change":change_daily,"PRICE":price})
</code></pre>
<p><a href="https://i.stack.imgur.com/DnnHQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DnnHQ.png" alt="enter image description here"></a></p>
|
<p>Looks like <code>currency</code> is a key in <code>data</code> dictionary here:</p>
<pre><code>for currency in data:
if '/BTC' in data[currency]['symbol']:
change_daily=data[currency]['percentage']
name=data[currency]["symbol"]
price = data[currency]['last']
df_binance.append({"NAME":name,"24h_change":change_daily,"PRICE":price})
</code></pre>
<p>By the way, you should define the <code>df_binance</code> list before your loop, and it's better to iterate with <code>for currency in data.keys()</code>.</p>
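<p>Put together, a minimal sketch of that advice could look like this (it assumes, as above, that <code>data</code> is a dict keyed by symbol, that the field names match the code above, and that pandas is imported as <code>pd</code>):</p>
<pre><code>df_binance = []                      # plain list of row dicts, defined before the loop
for currency in data.keys():
    row = data[currency]
    if '/BTC' in row['symbol']:
        df_binance.append({"NAME": row["symbol"],
                           "24h_change": row['percentage'],
                           "PRICE": row['last']})

df_binance = pd.DataFrame(df_binance)  # build the DataFrame once, at the end
</code></pre>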
<p>Update: Shorter load method:</p>
<pre><code>df = pd.DataFrame(data).T.reset_index()[['symbol', 'percentage', 'last']]
df = df.loc[df.symbol.str.endswith("BTC")]
</code></pre>
|
python|pandas|data-analysis|cryptocurrency
| 0
|
5,695
| 57,990,852
|
catplot(kind="count") is significantly slower than countplot()
|
<p>I am working on a fairly large dataset (~40m rows). I have found that if I call <strong>sns.countplot()</strong> directly then my visualisation plots really quickly:</p>
<pre><code>%%time
ax = sns.countplot(x="age_band",data=acme)
</code></pre>
<p><a href="https://i.stack.imgur.com/k4qwV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k4qwV.png" alt="Plot using axis level function directly:"></a></p>
<p>However if I do the same visualisation using <strong>catplot(kind="count")</strong> then the speed of execution slows down dramatically:</p>
<pre><code>%%time
g = sns.catplot(x="age_band",data=acme,kind="count")
</code></pre>
<p><a href="https://i.stack.imgur.com/mkf8r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mkf8r.png" alt="# Plot using figure level function with kind="count""></a></p>
<p>Is there a reason for such a large performance difference? Is <strong>catplot()</strong> doing some sort of conversion on my data before it can plot it?</p>
<p>If there is a known reason for this, does it extend to all figure-level functions vs. axis-level functions, e.g. is <strong>sns.scatterplot()</strong> faster than <strong>sns.relplot(kind="scatter")</strong>, etc.? </p>
<p>My preference would be to use <strong>catplot()</strong> as I like its flexibility and easy plotting on a FacetGrid but if it is going to take so much longer to achieve the same plot then I will just use the axis level functions directly.</p>
|
<p>There is a lot of overhead in <code>catplot</code>, or for that matter in <code>FacetGrid</code>, that ensures the categories are synchronized along the grid. Consider e.g. that you have a variable you plot along the columns of the grid for which not every age group occurs. You would still need to show that non-occurring age group and hold on to its color. Hence, two countplots next to each other do not necessarily make up one catplot.</p>
<p>However, if you are only interested in a single countplot, a catplot is clearly overkill. On the other hand, even a single countplot is overkill compared to a barplot of the counts. That is, </p>
<pre><code>counts = df["Category"].value_counts().sort_index()
colors = plt.cm.tab10(np.arange(len(counts)))
ax = counts.plot.bar(color=colors)
</code></pre>
<p>will be twice as fast as</p>
<pre><code>ax = sns.countplot(x="Category", data=df)
</code></pre>
|
python|pandas|matplotlib|seaborn
| 1
|
5,696
| 54,956,068
|
Rename a column values in pandas
|
<p>I have a dataset with a column named region. Sample values are
e.g. <code>region_1, region_2, region_3</code>, etc.</p>
<p>I need to replace these values with
e.g. <code>1,2,3</code>, etc.</p>
<p>Any specific function to deal with this easy transformation?</p>
<p>Thanks</p>
|
<p>I believe you need to split, select the second value, and if necessary convert to integers:</p>
<pre><code>df.region = df.region.str.split('_').str[1].astype(int)
</code></pre>
<p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>extract</code></a> with a regex to extract the integers:</p>
<pre><code>df.region = df.region.str.extract('(\d+)', expand=False).astype(int)
</code></pre>
<p><strong>Sample</strong>:</p>
<pre><code>df = pd.DataFrame({'region':['region_1','region_2','region_3']})
df.region = df.region.str.extract('(\d+)', expand=False).astype(int)
print (df)
region
0 1
1 2
2 3
</code></pre>
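<p>Alternatively, if the prefix is always literally <code>region_</code>, you could simply strip it:</p>
<pre><code>df.region = df.region.str.replace('region_', '').astype(int)
</code></pre>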
|
python|pandas
| 0
|
5,697
| 54,809,462
|
xarray and netCDF file with empty variables and 0-dimensional object dataframe
|
<p>I'm trying to convert some .nc files into <code>pandas</code> dataframes using <code>xarray</code>.</p>
<p>Here's one of the netCDF files:</p>
<p><a href="ftp://l5ftl01.larc.nasa.gov/MISR/MIL2ASAE.003/2017.08.31/MISR_AM1_AS_AEROSOL_P006_O094165_F13_0023.nc" rel="nofollow noreferrer">ftp://l5ftl01.larc.nasa.gov/MISR/MIL2ASAE.003/2017.08.31/MISR_AM1_AS_AEROSOL_P006_O094165_F13_0023.nc</a></p>
<p>And the code:</p>
<pre><code>import xarray as xr
ds = xr.open_dataset("MISR_AM1_AS_AEROSOL_P006_O094165_F13_0023.nc")
df = ds.to_dataframe()
</code></pre>
<p>And the error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\abreucbr\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\xarray\core\
dataset.py", line 3088, in to_dataframe
return self._to_dataframe(self.dims)
File "C:\Users\abreucbr\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\xarray\core\
dataset.py", line 3078, in _to_dataframe
index = self.coords.to_index(ordered_dims)
File "C:\Users\abreucbr\AppData\Local\Continuum\anaconda3\envs\py36\lib\site-packages\xarray\core\
coordinates.py", line 80, in to_index
raise ValueError('no valid index for a 0-dimensional object')
ValueError: no valid index for a 0-dimensional object
</code></pre>
<p>If I inspect the <code>ds</code> variable, for example,</p>
<pre><code>ds.variables
</code></pre>
<p>I get</p>
<pre><code>Frozen(OrderedDict())
</code></pre>
<p>The .nc file is a few MB in size, so it doesn't seem to be "empty".</p>
<p>What's the problem here?</p>
|
<p>Your dataset seems to be setup with a hierarchy of <a href="http://docs.h5py.org/en/stable/high/group.html" rel="nofollow noreferrer">groups</a>. Xarray's <a href="http://xarray.pydata.org/en/stable/generated/xarray.open_dataset.html#xarray.open_dataset" rel="nofollow noreferrer"><code>open_dataset</code></a> function only supports opening a single group at a time. So you'll need to open just one group at a time. Something like:</p>
<pre><code>xr.open_dataset("MISR_AM1_AS_AEROSOL_P006_O094165_F13_0023.nc", group='4.4_KM_PRODUCTS')
</code></pre>
<p>Generally speaking, the <code>to_dataframe</code> method is going to have limited utility for your dataset since collapsing 6 dimensions into a single index is going to be pretty clunky/inefficient.</p>
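<p>For instance, you could open one group, inspect it, and then flatten only the variables you need (the variable name below is just a placeholder; pick one from the printed dataset):</p>
<pre><code>import xarray as xr

ds = xr.open_dataset("MISR_AM1_AS_AEROSOL_P006_O094165_F13_0023.nc",
                     group='4.4_KM_PRODUCTS')
print(ds)  # inspect the variables and dimensions available in this group

# select one or two variables before flattening to keep the frame manageable
df = ds[['Some_Variable_Of_Interest']].to_dataframe().reset_index()
</code></pre>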
|
python|pandas|netcdf|python-xarray
| 3
|
5,698
| 54,911,436
|
How to implement CRelu in Keras?
|
<p>I'm trying to implement a CRelu layer in Keras.</p>
<p>One option that seems to work is to use a Lambda layer:</p>
<pre><code>def _crelu(x):
x = tf.nn.crelu(x, axis=-1)
return x
def _conv_bn_crelu(x, n_filters, kernel_size):
x = Conv2D(filters=n_filters, kernel_size=kernel_size, strides=(1, 1), padding='same')(x)
x = BatchNormalization(axis=-1)(x)
x = Lambda(_crelu)(x)
return x
</code></pre>
<p>But I wonder whether the Lambda layer introduces some overhead in the training or inference process?</p>
<p>My second attempt is to create a Keras layer that is a wrapper around <code>tf.nn.crelu</code>:</p>
<pre><code>class CRelu(Layer):
def __init__(self, **kwargs):
super(CRelu, self).__init__(**kwargs)
def build(self, input_shape):
super(CRelu, self).build(input_shape)
def call(self, x):
x = tf.nn.crelu(x, axis=-1)
return x
def compute_output_shape(self, input_shape):
output_shape = list(input_shape)
output_shape[-1] = output_shape[-1] * 2
output_shape = tuple(output_shape)
return output_shape
def _conv_bn_crelu(x, n_filters, kernel_size):
x = Conv2D(filters=n_filters, kernel_size=kernel_size, strides=(1, 1), padding='same')(x)
x = BatchNormalization(axis=-1)(x)
x = CRelu()(x)
return x
</code></pre>
<p>Which version will be more efficient?</p>
<p>I'm also looking for a pure Keras implementation, if that's possible.</p>
|
<p>I don't think there is a significant difference between the two implementations speed-wise. </p>
<p>The Lambda implementation is actually the simplest, but writing a custom Layer as you have done is usually better, especially as regards model saving and loading (the <strong>get_config</strong> method). </p>
<p>In this case it matters less, as CReLU is trivial and doesn't require saving and restoring learned parameters. You can, however, store the axis parameter as in the code below; this way it will be retrieved automatically when the model is loaded.</p>
<pre><code>class CRelu(Layer):
def __init__(self, axis=-1, **kwargs):
self.axis = axis
super(CRelu, self).__init__(**kwargs)
def build(self, input_shape):
super(CRelu, self).build(input_shape)
def call(self, x):
x = tf.nn.crelu(x, axis=self.axis)
return x
def compute_output_shape(self, input_shape):
output_shape = list(input_shape)
output_shape[-1] = output_shape[-1] * 2
output_shape = tuple(output_shape)
return output_shape
    def get_config(self):
        config = {'axis': self.axis}
        base_config = super(CRelu, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
</code></pre>
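<p>As a usage note: when reloading a saved model that contains this layer, the class still has to be passed in via <code>custom_objects</code> (the file name here is just a placeholder; with tf.keras, import from <code>tensorflow.keras.models</code> instead):</p>
<pre><code>from keras.models import load_model

model = load_model('my_model.h5', custom_objects={'CRelu': CRelu})
</code></pre>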
|
python|tensorflow|keras|deep-learning
| 1
|
5,699
| 49,759,626
|
Is it possible to Skip Blank Lines in a Dataframe? If Yes then how I can do this
|
<p>I am trying to run this code</p>
<pre><code>num = df_out.drop_duplicates(subset=['Name', 'No.']).groupby.(['Name']).size()
</code></pre>
<p>But when I do I get this error:</p>
<pre><code>ValueError: not enough values to unpack (expected 2, got 0)
</code></pre>
<p>If we think of my dataframe (df_out) as an Excel file, I do have blank cells, but no full column or full row is blank. I need to skip the blank lines to run the code without changing the dataframe's structure. </p>
<p>Is this possible?</p>
<p>Thank you</p>
|
<p>Consider using <code>df.dropna()</code>. It is used to remove rows that contain NA. See <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html</a> for more information.</p>
<p>First, you probably want your "blank cells" to be converted to NA values so they can be dropped by <code>dropna()</code>. This can be done in various ways, notably <code>df.replace(r'\s+', pandas.np.nan, regex=True)</code>. If your "blank cells" are all empty strings, or fixed strings equal to some value <code>s</code>, you can directly use (first case) <code>df.replace('', pandas.np.nan)</code> or (second case) <code>df.replace(s, pandas.np.nan)</code>.</p>
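<p>Applied to the frame from the question, a minimal sketch could look like this (it assumes the blank cells are empty strings or whitespace, and only drops rows that are blank in the two key columns):</p>
<pre><code>import numpy as np

df_clean = df_out.replace(r'^\s*$', np.nan, regex=True)      # blank cells -> NaN
df_clean = df_clean.dropna(subset=['Name', 'No.'])           # drop rows blank in the key columns
num = df_clean.drop_duplicates(subset=['Name', 'No.']).groupby(['Name']).size()
</code></pre>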
|
python|pandas|dataframe
| 3
|