Unnamed: 0 | id | title | question | answer | tags | score |
|---|---|---|---|---|---|---|
8,300
| 52,685,466
|
Sliced column of Pandas dataframe keep mentioning original column name in new objects created from the column
|
<p>I sliced a column from a pandas dataframe to create an object called <code>label</code>. The name of the column in the original dataframe was <code>y</code>.</p>
<p>Now when I take the sum of <code>label</code> and assign it to <code>m</code>, printing it keeps showing <code>y</code>. Why does it do so, and what does it mean by writing <code>y 50.0</code>?</p>
<pre><code>>>> type(label)
<class 'pandas.core.frame.DataFrame'>
>>> label.head(2)
y
0 1.0
1 1.0
>>> m = label.sum()
>>> m
y 50.0
dtype: float64
>>>
</code></pre>
|
<p>Your <code>label</code> DataFrame contains only one column, named <code>y</code>, with 50 rows of <code>1.0</code>, so <code>sum()</code> returned the sum of column <code>y</code>. Summing a DataFrame reduces each column to a single value and returns a Series whose index labels are the original column names, which is why <code>y</code> still shows up; <code>y 50.0</code> simply means "the sum of column y is 50.0". You can rename that label with <code>m.index = ['total']</code> (the new index must be a collection), but <code>m.index = None</code> will raise a <code>TypeError</code> exception.</p>
<pre><code>>>> import pandas as pd
>>> import numpy as np
>>> df = pd.DataFrame(np.ones(50), columns=['y'])
>>> df.head(2)
y
0 1.0
1 1.0
>>> df
y
0 1.0
1 1.0
2 1.0
3 1.0
4 1.0
... # reducted
48 1.0
49 1.0
>>> df.sum()
y 50.0
dtype: float64
>>> m = df.sum()
>>> m
y 50.0
dtype: float64
>>> m.index
Index(['y'], dtype='object')
>>> m.index = None
Traceback (most recent call last):
...
TypeError: Index(...) must be called with a collection of some kind, None was passed
</code></pre>
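<p>If you only want the plain scalar (without the <code>y</code> label), sum the column itself instead of the whole DataFrame; and as a minimal sketch of the rename mentioned above:</p>
<pre><code>>>> label['y'].sum()      # Series.sum() returns a plain scalar
50.0
>>> m.index = ['total']   # the new index must be a collection, not None
>>> m
total    50.0
dtype: float64
</code></pre>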
|
python|pandas|slice
| 0
|
8,301
| 52,633,472
|
Index labels are not displaying - Pandas(Series)
|
<p>I am using pandas and matplotlib and I am trying to set the labels on the x axis from the <strong>index of the pandas Series</strong>.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
index = ['apples','oranges','cherries','bananas']
quantity = [20,30,40,50]
s = pd.Series(quantity, index = index)
s.plot()
plt.title("pandas series")
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/KEgNt.png" alt="Output Graph"></p>
<p>It displays the output without the labels on the x axis. I need the fruit names as labels on the x axis.
Can anyone please help me resolve this?</p>
<p>Thanks in advance!</p>
|
<p>There seems to be some problem with pandas (currently?) as also seen from <a href="https://stackoverflow.com/questions/52631031/make-pandas-plot-show-xlabel-and-xvalues">Make pandas plot() show xlabel and xvalues</a>.</p>
<p>Here using matplotlib directly is a good option as well. Just replace <code>s.plot()</code> by </p>
<pre><code>plt.plot(s)
</code></pre>
<p><a href="https://i.stack.imgur.com/PvkaJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PvkaJ.png" alt="enter image description here"></a></p>
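<p>As a side note (my own suggestion, not part of the fix above): if a line plot is not strictly required, a bar plot keeps the index labels on the x axis out of the box:</p>
<pre><code>s.plot(kind='bar')
plt.title("pandas series")
plt.show()
</code></pre>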
|
pandas|matplotlib
| 1
|
8,302
| 52,628,672
|
numpy rfftn changes input dimensions
|
<p>I want to compute the discrete Fourier transform of a 3D numpy array. I'm using the <code>numpy.fft.rfftn</code> function, but its output has different dimensions than the input. How can I fix this?
Here is my code:</p>
Here it is my code:</p>
<pre><code>np.shape(img_coll)
>>> (9997, 50, 50)
img_spectrum = np.fft.rfftn(img_coll, axes = [0])
np.shape(img_spectrum)
>>>(4999, 50, 50)
</code></pre>
<p>Thank you very much for helping.</p>
|
<p>There is nothing to fix in your code.
If your signal is <em>real</em>, then its Fourier transform is <em>conjugate symmetric</em>: the negative-frequency half of the spectrum is just the complex conjugate of the positive-frequency half, so it carries no extra information.
For that reason <code>rfftn</code> returns only <code>n // 2 + 1</code> frequency bins along the transformed axis; for <code>n = 9997</code> that is <code>4999</code>, which is exactly the shape you see. The redundant half can be reconstructed from conjugate symmetry whenever you need it.</p>
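<p>A small sketch of the difference, and of how to get the full spectrum or recover the original length if that is what you actually need:</p>
<pre><code>import numpy as np

img_coll = np.random.rand(9997, 50, 50)          # stand-in for your data

half = np.fft.rfftn(img_coll, axes=[0])          # (4999, 50, 50): non-redundant half
full = np.fft.fftn(img_coll, axes=[0])           # (9997, 50, 50): full complex spectrum
back = np.fft.irfftn(half, s=[9997], axes=[0])   # (9997, 50, 50): back to the original length
</code></pre>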
|
python|numpy|fft
| 1
|
8,303
| 46,510,422
|
Append npy file to another npy file with same number of columns in both files
|
<p>The npy files are around 5 GB each and RAM is around 5 GB, so I cannot load both numpy arrays at once. How can I load one npy file and append its rows to the other npy file without loading the whole thing?</p>
|
<p>An npy file is a header containing the data type (metadata) and shape, followed by the data itself.</p>
<p>The header ends with a <code>'\n'</code> (newline) character. So, open your first file in append mode, then open the second file in read mode, skip the header by <code>readline()</code>, then copy chunks (using <code>read(size)</code>) from the second file to the first.</p>
<p>There is only one thing left: to update the shape (length) field in the header. And here it gets a bit tricky, because if the two files had for example the shapes <code>(700,)</code> and <code>(400,)</code>, the new shape needs to be <code>(1300,)</code> but you may not have space in the header for it. This depends on how many pad characters were in the original header--sometimes you will have space and sometimes you won't. If there is no space, you will need to write a new header into a new file and then copy the data from both source files. Still, this won't take much memory or time, just a bit of extra disk space.</p>
<p>You can see the code which reads and writes npy files here: <a href="https://github.com/numpy/numpy/blob/master/numpy/lib/format.py" rel="nofollow noreferrer">https://github.com/numpy/numpy/blob/master/numpy/lib/format.py</a> - there are some undocumented functions you may find useful in your quest.</p>
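<p>If editing the header in place sounds too fiddly, a simpler (if slightly more disk-hungry) sketch is to create a third, memory-mapped output file with the combined shape and copy both sources into it in chunks. This assumes both arrays have the same dtype and the same shape except for the first axis; the file names are made up for illustration:</p>
<pre><code>import numpy as np

a = np.load('first.npy', mmap_mode='r')    # memory-mapped, not read into RAM
b = np.load('second.npy', mmap_mode='r')
assert a.dtype == b.dtype and a.shape[1:] == b.shape[1:]

out = np.lib.format.open_memmap('combined.npy', mode='w+', dtype=a.dtype,
                                shape=(a.shape[0] + b.shape[0],) + a.shape[1:])

chunk = 10000  # rows per copy step; tune to the available RAM
for src, offset in ((a, 0), (b, a.shape[0])):
    for start in range(0, src.shape[0], chunk):
        stop = min(start + chunk, src.shape[0])
        out[offset + start:offset + stop] = src[start:stop]
out.flush()
</code></pre>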
|
python|python-2.7|numpy|numpy-ufunc|numpy-memmap
| 0
|
8,304
| 58,557,552
|
Custom mean implementation is slower than pandas default mean. How to optimize?
|
<p>I want to find the mean of the pandas <code>Dataframe</code>. So I was using the following mean function which pandas provide by default. <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mean.html" rel="nofollow noreferrer">Link to its doc</a></p>
<pre><code>df.mean()
</code></pre>
<p>But the problem with this function is that if the total of all the values is greater than the limit of the data type, overflow occurs. In my case, I have data with <code>float16</code> and the number of records is more than 20 million, so the total of all the records will obviously overflow <code>float16</code>. One approach is to change the datatype to <code>float64</code>, but this will use too much extra memory as each value is in the range <code>~1900-2100</code>. So I want to implement the mean iteratively using the method given <a href="https://stackoverflow.com/questions/1930454/what-is-a-good-solution-for-calculating-an-average-where-the-sum-of-all-values-e">here</a>. Here is my implementation for a pandas data frame:</p>
<pre><code>import math

def mean_without_overflow(df):
    avgs = []
    for column in df:
        avg, t = 0, 1
        for data in df[column]:
            if not math.isnan(data):
                avg += (data - avg) / t
                t += 1
        avgs.append(avg)
    return avgs
</code></pre>
<p>Here, for each column, I'm iterating over all the rows, so the total number of iterations is <code># of columns * # of records</code>. This does not overflow and gives the correct mean of the entire data frame, but it's way slower than the default mean function provided by pandas.</p>
<p>So what am I missing here? How can I optimize this? Or is there any out-of-the-box function in pandas for finding the mean iteratively?</p>
<p><strong>Edit:</strong>
Overflow seems to be a common problem while calculating the mean. I wonder why the default <code>mean()</code> in pandas is not implemented using such an iterative approach, which prevents overflow in data types with smaller ranges.</p>
|
<p>Found the solution myself. The logic is to first normalize all the values by dividing them by the length of the Series (the number of records), then use the default <code>df.mean()</code>, and finally multiply the normalized mean by the number of records. This is an improvement from 1 min 37 s to 3.13 s. But I still don't understand why the pandas implementation does not use such an optimization.</p>
<pre><code>def mean_without_overflow_fast(col):
    col /= len(col)
    return col.mean() * len(col)
</code></pre>
<p>Use this function as follows: </p>
<pre><code>print (df.apply(mean_without_overflow_fast))
</code></pre>
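<p>Another option worth noting (my own hedged suggestion, not part of the original answer): numpy reductions accept a <code>dtype</code> argument that only changes the accumulator type, so you can accumulate in <code>float64</code> without converting the stored <code>float16</code> data first. <code>np.nanmean</code> also skips NaN values, like the loop in the question:</p>
<pre><code>import numpy as np

means = df.apply(lambda s: np.nanmean(s.to_numpy(), dtype=np.float64))
</code></pre>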
|
python|python-3.x|pandas|optimization|mean
| 1
|
8,305
| 58,385,916
|
Selecting Cells in Pandas MultiIndex DataFrames Using a List
|
<p>I am trying to set the values of certain cells in a Pandas MultiIndex DataFrame by selecting these cells using a list. </p>
<p><em>Note the sequence of both lists.</em></p>
<pre><code>df.loc[(['Peter','John','Tom'],'AAPL'),1] = ['Peter', 'John', 'Tom']
</code></pre>
<p><strong>Problem:</strong> However, the values are being set to the wrong cell. For example, I expect the value <code>Peter</code> to be set under the index <code>Peter</code>, but it is being set under <code>Tom</code>!</p>
<p>Anyone knows the reason, and what the proper way of doing this is?</p>
<p>In other words, how do we ensure the sequence of the list used in <code>df.loc()</code> (eg: <code>['Peter','John','Tom']</code> inside <code>df.loc</code>) to be the same sequence as the list of values (eg: <code>['Peter','John','Tom']</code> to the right of <code>=</code>)</p>
<p><strong>Expected Result</strong></p>
<pre><code> 0 1 2
Name Stock
Tom AAPL 0 Tom 0
GOOG 0 0 0
NFLX 0 0 0
John AAPL 0 John 0
GOOG 0 0 0
NFLX 0 0 0
Peter AAPL 0 Peter 0
GOOG 0 0 46
NFLX 0 0 0
</code></pre>
<p><strong>Actual Result</strong></p>
<pre><code> 0 1 2
Name Stock
Tom AAPL 0 Peter 0 <----- should be Tom
GOOG 0 0 0
NFLX 0 0 0
John AAPL 0 John 0
GOOG 0 0 0
NFLX 0 0 0
Peter AAPL 0 Tom 0 <----- should be Peter
GOOG 0 0 46
NFLX 0 0 0
</code></pre>
<p><strong>Code to reproduce problem</strong></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# Initialize MultiIndex DataFrame
stocks = ['AAPL', 'GOOG', 'NFLX']
names = ['Tom', 'John', 'Peter']
midx = pd.MultiIndex.from_product([names, stocks], names=['Name','Stock'])
df = pd.DataFrame(index=midx, columns=[0,1,2])
df.loc[pd.IndexSlice[:,:],:] = 0
# Partially populate the empty MultiIndex DataFrame
df.loc[('Tom', 'AAPL'), 1] = 36
df.loc[('Peter', 'GOOG'), 2] = 46
print(df) # looks correct
</code></pre>
<pre class="lang-py prettyprint-override"><code># Set values for some cells
df.loc[(['Peter','John','Tom'],'AAPL'),1] = ['Peter', 'John', 'Tom']
print(df) # wrong!!!
</code></pre>
|
<p>Like this, by giving the entire index key for each element:</p>
<pre class="lang-py prettyprint-override"><code>df.loc[[('Peter', 'AAPL'), ('John', 'AAPL'),('Tom','AAPL')],1] = ['Peter', 'John', 'Tom']
print(df)
</code></pre>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#advanced-indexing-with-hierarchical-index" rel="nofollow noreferrer">From the pandas documentation</a></p>
<blockquote>
<p>Note It is important to note that tuples and lists are not treated identically in pandas when it comes to indexing. Whereas a tuple is interpreted as one multi-level key, a list is used to specify several keys. Or in other words, tuples go horizontally (traversing levels), lists go vertically (scanning levels).</p>
</blockquote>
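<p>If the list of names is long, the same full-key approach can be built programmatically rather than typed out (an illustrative sketch using the names from the question):</p>
<pre class="lang-py prettyprint-override"><code>names = ['Peter', 'John', 'Tom']
df.loc[[(name, 'AAPL') for name in names], 1] = names
</code></pre>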
|
python|python-3.x|pandas|dataframe|multi-index
| 2
|
8,306
| 68,945,022
|
Fuzzy-compare two dataframes of addresses and copy info from 1 to another
|
<p>I have this data set: df1 = 70,000 rows and df2 = ~30 rows. I want to match the addresses to see if the ones in df2 appear in df1, and if they do, I want to show the match and also pull info from df1 to create a new df3. Sometimes the address info is off by a bit, for example road = rd, street = st, etc. Here's an example:</p>
<pre><code>df1 =
address unique key (and more columns)
123 nice road Uniquekey1
150 spring drive Uniquekey2
240 happy lane Uniquekey3
80 sad parkway Uniquekey4
etc
df2 =
address (and more columns)
123 nice rd
150 spring dr
240 happy lane
80 sad parkway
etc
</code></pre>
<p>And this is what Id want a new dataframe :</p>
<pre><code>df3=
address(from df2) addressed matched(from df1) unique key(comes from df1) (and more columns)
123 nice rd 123 nice road Uniquekey1
150 spring dr 150 spring drive Uniquekey2
240 happy lane 240 happy lane Uniquekey3
80 sad parkway 80 sad parkway Uniquekey4
etc
</code></pre>
<p>Here's what I've tried so far using difflib:</p>
<pre><code>df1['key'] = df1['address']
df2['key'] = df2['address']
df2['key'] = df2['key'].apply(lambda x: difflib.get_close_matches(x, df1['key'], n=1))
</code></pre>
<p>This returns what looks like a list (the answer is wrapped in <code>[]</code>'s), so I then convert <code>df2['key']</code> into a string using <code>df2['key'] = df2['key'].apply(str)</code>, and then I try to merge using <code>df2.merge(df1, on='key')</code>, but no address is matching.</p>
<p>I'm not sure what it could be but any help would be greatly appreciated. I also am playing around with the fuzzywuzzy package.</p>
|
<p>My answer is similar to <a href="https://stackoverflow.com/a/68324933/15239951">one</a> of your old questions that I answered.</p>
<p>I slightly modified your dataframe:</p>
<pre><code>>>> df1
address unique key
0 123 nice road Uniquekey1
1 150 spring drive Uniquekey2
2 240 happy lane Uniquekey3
3 80 sad parkway Uniquekey4
>>> df2 # shuffle rows
address
0 80 sad parkway
1 240 happy lane
2 150 winter dr # change the season :-)
3 123 nice rd
</code></pre>
<p>Use <code>extractOne</code> function from <code>fuzzywuzzy.process</code>:</p>
<pre><code>from fuzzywuzzy import process
THRESHOLD = 90
best_match = \
df2['address'].apply(lambda x: process.extractOne(x, df1['address'],
score_cutoff=THRESHOLD))
</code></pre>
<p>The output of <code>extractOne</code> is:</p>
<pre><code>>>> best_match
0 (80 sad parkway, 100, 3)
1 (240 happy lane, 100, 2)
2 None
3 (123 nice road, 92, 0)
Name: address, dtype: object
</code></pre>
<p>Now you can merge your 2 dataframes:</p>
<pre><code>df3 = pd.merge(df2, df1.set_index(best_match.apply(pd.Series)[2]),
left_index=True, right_index=True, how='left')
</code></pre>
<pre><code>>>> df3
address_x address_y unique key
0 80 sad parkway 80 sad parkway Uniquekey4
1 240 happy lane NaN NaN
2 150 winter dr 150 spring drive Uniquekey2
3 123 nice rd 123 nice road Uniquekey1
</code></pre>
|
python|pandas|dataframe|fuzzywuzzy|difflib
| 1
|
8,307
| 69,253,057
|
Apply multiple criteria to select current and prior row - Pandas
|
<p>I have a dataframe like as shown below</p>
<pre><code>person_id source_system r_diff
1 O NULL
1 O 0
1 O 9
1 O NULL
2 O 574
2 I 20
2 O 135
2 O 0
2 I 21
2 O 2
2 O 0
2 O 0
2 I 12
</code></pre>
<p>I would like to select rows based on the criteria below</p>
<p>criteria 1 - pick all rows where source-system = <code>I</code></p>
<p>criteria 2 - pick prior row (n-1) only when source-system of (n-1)th is <code>O</code> and diff is zero.</p>
<p>Criteria 2 should be applied only when the nth row has source-system = <code>I</code>. If the source-system of the (n-1)th row is <code>I</code>, we don't have to do anything because criteria 1 will handle that.</p>
<p>We have to apply both criteria for each person.</p>
<p>I tried the code below, based on an SO suggestion, but I'm not sure how to make it work:</p>
<pre><code>m1 = df['visit_source_value'] == 'I'
m2 = df['diff'] <= 0
m3 = df.groupby('person_id')['diff'].shift(-1) <= 0
df = df1[m1 | m2 | m3]
</code></pre>
<p>I expect my output to be like as shown below</p>
<pre><code> 2 I 20
2 O 0
2 I 21
2 O 0
2 I 12
</code></pre>
|
<p>I prefer not to use a one-line solution here, because it becomes hard to read as the code gets more complicated, so it is better to use:</p>
<pre><code>m1 = df['visit_source_value'] == 'I'
m2 = df['r_diff'] <= 0
m3 = df.groupby('person_id')['visit_source_value'].shift(-1) == 'I'
df = df[m1 | (m2 & m3)]
print (df)
person_id visit_source_value r_diff
5 2 I 20.0
7 2 O 0.0
8 2 I 21.0
11 2 O 0.0
12 2 I 12.0
</code></pre>
|
python|pandas|dataframe|pandas-groupby|series
| 1
|
8,308
| 69,159,550
|
Keras - Is there a way to manage the filenames generated by the flow_from_directory function of ImageDataGenerator?
|
<p>As the title is self-descriptive, I need to keep the original filenames of my images after the data augmentation, which is handled by the <code>flow_from_directory</code> function of the <code>ImageDataGenerator</code> class of <code>Keras</code>. The reason behind this requirement is that the filenames actually represent the labels and I'll move these new images into the respective folders through their names. Please feel free to ask for any further information.</p>
<p>Here are my <code>ImageDataGenerator</code> and how I handle the task:</p>
<pre><code>aug = ImageDataGenerator(
    rotation_range=20,
    zoom_range=0.15,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest")

i = 0
for batch in aug.flow_from_directory(extract_dir, batch_size=1, color_mode='grayscale', target_size=(28, 28),
                                     save_to_dir=extract_dir + '/augmented', save_prefix='aug'):
    i += 1
    if i == 100:
        break
</code></pre>
|
<p>Unfortunately, there is no easy way to access the filenames from the <code>ImageDataGenerator.flow_from_directory</code> iterator. Instead, you can use <code>ImageDataGenerator.flow</code> to apply your augmentations to an image, then save the augmented image manually using another image-processing library, e.g. <code>cv2</code>.</p>
<pre><code>import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # suppress Tensorflow messages
from keras.preprocessing.image import ImageDataGenerator
import cv2
import numpy as np

image_folder = 'img_folder'          # your image folder
print(f'my images: {os.listdir(image_folder)}')
aug_image_folder = 'aug_img_folder'  # aug images will be saved here
os.makedirs(aug_image_folder, exist_ok=True)

# define your augmentations
aug = ImageDataGenerator(
    rotation_range=20,
    zoom_range=0.15,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest")

# iterate over the image folder
image_names = os.listdir(image_folder)
for image_name in image_names:
    img = cv2.imread(f'{image_folder}/{image_name}', 1)
    # get the augmented image
    aug_img_iterator = aug.flow(x=np.expand_dims(img, 0), batch_size=1)
    aug_img = next(aug_img_iterator)
    # save the augmented image
    cv2.imwrite(f'{aug_image_folder}/{image_name}', aug_img[0, :, :, :])
print(f'aug images: {os.listdir(aug_image_folder)}')
</code></pre>
<pre><code>my images: ['img_0.png', 'img_2.png', 'img_1.png']
aug images: ['img_0.png', 'img_2.png', 'img_1.png']
</code></pre>
|
keras|tensorflow2.0|tf.keras|data-augmentation|data-generation
| 0
|
8,309
| 68,957,453
|
Get unique count of items of a column in pandas pivot table
|
<p>Here is my code:</p>
<p><code>df1.pivot_table(index=["Unit", "Grade"],values = ["Unit", "QTY","ORDER_NUM", "ORDER ID"], aggfunc={'Cost': 'sum' ,'QTY':'sum', "ORDER_NUM":'count',"ORDER ID":'count'})</code></p>
<p>I want to get a count of the unique order numbers and order IDs. Using 'count' is not giving me a unique count. How do I do this?</p>
|
<p>You can use <code>'nunique'</code>:</p>
<p>example input:</p>
<pre><code>df = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo",
                         "bar", "bar", "bar", "bar"],
                   "B": ["one", "one", "one", "two", "two",
                         "one", "one", "two", "two"],
                   "C": ["small", "large", "large", "small",
                         "small", "large", "small", "small",
                         "large"],
                   "D": [1, 2, 2, 3, 3, 4, 5, 6, 7],
                   "E": [2, 4, 5, 5, 6, 6, 8, 9, 9]})
</code></pre>
<p>processing:</p>
<pre><code>pd.pivot_table(df,
               values=['D', 'E'],
               index=['A', 'B'],
               columns=['C'],
               aggfunc={'D': 'sum',
                        'E': 'nunique',  ### HERE
                        }
               )
</code></pre>
<p>output:</p>
<pre><code> D E
C large small large small
A B
bar one 4.0 5.0 1.0 1.0
two 7.0 6.0 1.0 1.0
foo one 4.0 1.0 2.0 1.0
two NaN 6.0 NaN 2.0
</code></pre>
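<p>Applied to the call from the question, that would look roughly like this (a sketch only: the column names and which of them need a distinct count are taken from the question and may need adjusting):</p>
<pre><code>df1.pivot_table(index=["Unit", "Grade"],
                values=["QTY", "ORDER_NUM", "ORDER ID"],
                aggfunc={'QTY': 'sum',
                         'ORDER_NUM': 'nunique',
                         'ORDER ID': 'nunique'})
</code></pre>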
|
python|pandas
| 1
|
8,310
| 61,108,469
|
How to randomly pick and mask a portion of a Tensor in Tensorflow (python)
|
<p>I'm training a denoising autoencoder in Tensorflow 2. Part of the run time is spent on the CPU masking a portion of the input data: randomly selecting the indices to be masked, then setting their values to zero. This is my masking function; the masking is repeated at the beginning of each epoch, with different <code>v</code> values:</p>
<pre><code>import numpy as np

def masking_noise(X, v):
    X_noise = X.copy()
    n_samples = X.shape[0]
    n_features = X.shape[1]
    v = int(np.round(n_features * v))
    for i in range(n_samples):
        mask = np.random.choice(n_features, v, replace=False)
        for m in mask:
            X_noise[i][m] = np.repeat(0., X.shape[2])
    return X_noise
</code></pre>
<p>Here is a toy example:</p>
<pre><code>a = np.array([[[1., 0.],
[1., 0.],
[1., 0.],
[1., 0.],
[0., 1.]],
[[1., 0.],
[1., 0.],
[1., 0.],
[1., 1.],
[0., 1.]],
[[1., 0.],
[1., 0.],
[1., 0.],
[1., 0.],
[1., 1.]]])
masking_noise(a, 0.40)
</code></pre>
<p>Output:</p>
<pre><code>array([[[1., 0.],
[0., 0.],
[1., 0.],
[1., 0.],
[0., 0.]],
[[0., 0.],
[0., 0.],
[1., 0.],
[1., 1.],
[0., 1.]],
[[1., 0.],
[1., 0.],
[1., 0.],
[0., 0.],
[0., 0.]]])
</code></pre>
<p>My question is, how could I do the same masking operation in Tensorflow? </p>
|
<p>I think I finally figured it out, it was easy to debug this problem with Tensorflow 2, so I was able to solve this when I changed from TF1 to TF2:</p>
<pre><code>import tensorflow as tf

def mask_data(y_true, mask_ratio, verbose=0):
    nf = tf.cast(tf.shape(y_true)[1], tf.float32)
    mask_portion = tf.math.round(tf.math.multiply(nf, (1 - mask_ratio)))
    mask_portion = tf.cast(mask_portion, tf.int32)
    # Gumbel trick: top_k of -log(-log(U)) picks `mask_portion` random indices per sample
    z = -tf.math.log(-tf.math.log(tf.random.uniform(tf.shape(y_true)[0:-1], 0, 1)))
    _, indices = tf.nn.top_k(z, mask_portion)
    one_hots = tf.one_hot(indices, tf.shape(y_true)[1])
    mask = tf.reduce_max(one_hots, axis=1)
    mask = tf.expand_dims(mask, axis=-1)
    mask_tiles = tf.tile(mask, [1, 1, tf.shape(y_true)[-1]])
    masked = tf.multiply(mask_tiles, y_true)  # use the function argument, not a global
    if verbose > 0:
        print("\nRandomly selected indices:", indices)
        print("\n2D mask (per variant)", mask)
        print("\n3D mask (per allele)", mask_tiles)
        print("\nmasked results", masked)
    return masked
</code></pre>
<p>Then I can run it like this:</p>
<pre><code>toy_example = np.array([[[1., 0.],
[1., 0.],
[1., 0.],
[1., 0.],
[0., 1.]],
[[1., 0.],
[1., 0.],
[1., 0.],
[1., 1.],
[0., 1.]],
[[1., 0.],
[1., 0.],
[1., 0.],
[1., 0.],
[1., 1.]]])
mask_ratio = 0.40
result = mask_data(toy_example, mask_ratio, verbose=0)
print(result)
</code></pre>
<p>The result will look like this:</p>
<pre><code>tf.Tensor(
[[[1. 0.]
[1. 0.]
[0. 0.]
[1. 0.]
[0. 0.]]
[[1. 0.]
[0. 0.]
[0. 0.]
[1. 1.]
[0. 1.]]
[[0. 0.]
[0. 0.]
[1. 0.]
[1. 0.]
[1. 1.]]], shape=(3, 5, 2), dtype=float32)
</code></pre>
|
python|numpy|tensorflow|tensorflow2.0|masking
| 5
|
8,311
| 60,980,181
|
How can I get one array to return only the masked values define by another array with Numpy / PyTorch?
|
<p>I have a <code>mask</code>, which has a shape of: <code>[64, 2895]</code> and an array <code>pred</code> which has a shape of <code>[64, 2895, 161]</code>.</p>
<p><code>mask</code> is binary with only <code>0</code>s and <code>1</code>s. What I want to do is reduce <code>pred</code> so that it maintains <code>64</code> batches, and along the <code>2895</code>, wherever there is a <code>1</code> in the <code>mask</code> for each batch, return the related <code>pred</code>.</p>
<p>So as a simplified example, if:</p>
<pre><code>mask = [[1, 0, 0],
[1, 1, 0],
[0, 0, 1]]
pred = [[[0.12, 0.23, 0.45, 0.56, 0.57],
[0.91, 0.98, 0.97, 0.96, 0.95],
[0.24, 0.46, 0.68, 0.80, 0.15]],
[[1.12, 1.23, 1.45, 1.56, 1.57],
[1.91, 1.98, 1.97, 1.96, 1.95],
[1.24, 1.46, 1.68, 1.80, 1.15]],
[[2.12, 2.23, 2.45, 2.56, 2.57],
[2.91, 2.98, 2.97, 2.96, 2.95],
[2.24, 2.46, 2.68, 2.80, 2.15]]]
</code></pre>
<p>What I want is:</p>
<pre><code>[[[0.12, 0.23, 0.45, 0.56, 0.57]],
[[1.12, 1.23, 1.45, 1.56, 1.57],
[1.91, 1.98, 1.97, 1.96, 1.95]],
[[2.24, 2.46, 2.68, 2.80, 2.15]]]
</code></pre>
<p>I realize that the resulting rows have different lengths; I hope that's possible. If not, then fill in the missing entries with <code>0</code>. Either <code>numpy</code> or <code>pytorch</code> would be helpful. Thank you.</p>
|
<p>If you want a vectorized computation, then ragged output sizes are not possible, but this gives you the version with the masked entries filled with 0:</p>
<pre><code># pred: torch.size([64, 2895, 161])
# mask: torch.size([64, 2895])
result = pred * mask[:, :, None]
# extend mask with another dimension so now it can do entry-wise multiplication
</code></pre>
<p>and <code>result</code> is exactly what you want</p>
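<p>If you really do want the ragged version (a different number of kept rows per batch element), one hedged option in PyTorch is a plain Python list built with boolean indexing, since a single tensor cannot hold rows of different lengths:</p>
<pre><code># pred: [64, 2895, 161], mask: [64, 2895] of 0s and 1s
selected = [p[m.bool()] for p, m in zip(pred, mask)]
# selected[i] has shape [int(mask[i].sum()), 161]
</code></pre>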
|
python|numpy|pytorch
| 1
|
8,312
| 60,942,519
|
Can't download c4 dataset with Dataflow in colab
|
<p>I want to download the c4 dataset. As per the instructions page: <a href="https://www.tensorflow.org/datasets/catalog/c4" rel="nofollow noreferrer">https://www.tensorflow.org/datasets/catalog/c4</a>, it's recommended to use dataflow. I followed the steps described here: <a href="https://www.tensorflow.org/datasets/beam_datasets" rel="nofollow noreferrer">https://www.tensorflow.org/datasets/beam_datasets</a> in google colab.</p>
<p>Packages:</p>
<pre><code>!pip install -q tensorflow-datasets
!pip install -q apache-beam[gcp]
</code></pre>
<p>This is the cell I'm trying to run in colab</p>
<pre><code>%env DATASET_NAME=c4/en
%env GCP_PROJECT=......
%env GCS_BUCKET=gs://c4-dump
%env DATAFLOW_JOB_NAME=c4-en-gen
!echo "tensorflow_datasets[$DATASET_NAME]" > /tmp/beam_requirements.txt
!python -m tensorflow_datasets.scripts.download_and_prepare \
--datasets=$DATASET_NAME
--data_dir=$GCS_BUCKET \
--beam_pipeline_options="runner=DataflowRunner,project=$GCP_PROJECT,job_name=$DATAFLOW_JOB_NAME,staging_location=$GCS_BUCKET/binaries,temp_location=$GCS_BUCKET/temp,requirements_file=/tmp/beam_requirements.txt"
</code></pre>
<p>It's pretty much the same code as in the tutorial. But there is no dataflow job created in the Dataflow tab and it looks like it's downloading locally. See output logs:</p>
<pre><code>env: DATASET_NAME=c4/en
env: GCP_PROJECT=ai-vs-covid19
env: GCS_BUCKET=gs://c4-dump
env: DATAFLOW_JOB_NAME=c4-en-gen
2020-03-31 02:18:46.297213: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
I0331 02:18:49.098738 139869050173312 download_and_prepare.py:180] Running download_and_prepare for datasets:
c4/en
I0331 02:18:49.099436 139869050173312 download_and_prepare.py:181] Version: "None"
I0331 02:18:50.353859 139869050173312 dataset_builder.py:202] Load pre-computed datasetinfo (eg: splits) from bucket.
I0331 02:18:50.468347 139869050173312 dataset_info.py:431] Loading info from GCS for c4/en/2.2.1
I0331 02:18:50.522799 139869050173312 download_and_prepare.py:130] download_and_prepare for dataset c4/en/2.2.1...
I0331 02:18:50.560583 139869050173312 driver.py:124] Generating grammar tables from /usr/lib/python3.6/lib2to3/Grammar.txt
I0331 02:18:50.683776 139869050173312 driver.py:124] Generating grammar tables from /usr/lib/python3.6/lib2to3/PatternGrammar.txt
I0331 02:18:51.189772 139869050173312 dataset_builder.py:310] Generating dataset c4 (gs://c4-dump/c4/en/2.2.1)
Downloading and preparing dataset c4/en/2.2.1 (download: 6.96 TiB, generated: 816.78 GiB, total: 7.76 TiB) to gs://c4-dump/c4/en/2.2.1...
</code></pre>
<p>And then a bunch of</p>
<pre><code>Dl Completed...: 0% 0/18 [00:38<?, ? url/s]
Dl Completed...: 0% 0/18 [00:38<?, ? url/s]
Dl Completed...: 0% 0/18 [00:39<?, ? url/s]I0331 02:19:33.506697 139869050173312 download_manager.py:256] Downloading https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2019-18/segments/1555578517558.8/wet/CC-MAIN-20190418101243-20190418123243-00326.warc.wet.gz into gs://c4-dump/downloads/comm.s3_craw-data_CC-MAIN-2019-18_segm_1555iQS7Yn3hZ3JmwClTiCNY5qtVgGfQQAObrCqx7cMloOg.gz.tmp.1bbeb83abada465287dcecabb0e4f4b0...
</code></pre>
<p>Am I missing something or is it just a preparation stage? My main concern is that I can't see the dataflow job running.</p>
<p>Thanks!</p>
<p>UPD: tried the same approach with the compute instance - same result.</p>
|
<p>As of today, you don't have to do the processing yourself. We uploaded the dataset to a bucket in the Google Cloud, and also created a JSON version. More details at <a href="https://github.com/allenai/allennlp/discussions/5056" rel="nofollow noreferrer">https://github.com/allenai/allennlp/discussions/5056</a>.</p>
|
google-colaboratory|apache-beam|tensorflow-datasets|dataflow
| 0
|
8,313
| 71,470,699
|
Embedding multiple real-time graphs in one Python Tkinter GUI
|
<p>I am new to Tkinter. I am trying to plot two real-time animated graphs in one window, but after a while both real-time data series end up overlapping on the same graph. I want them to be displayed on separate graphs. <a href="https://i.stack.imgur.com/PlXNC.gif" rel="nofollow noreferrer">I put a gif to show my output</a>.
I want to plot the other data series on the left graph. Is there any way to fix this? If I can make it work, I will try to plot three graphs instead of two. Can you help me with my code below?</p>
<pre><code>from matplotlib.backends.backend_tkagg import (
    FigureCanvasTkAgg, NavigationToolbar2Tk)
import tkinter as Tk
from matplotlib.figure import Figure
import random
from itertools import count
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import numpy as np
from pandas import DataFrame

plt.style.use('fivethirtyeight')

# values for first graph
x_vals = []
y_vals = []
# values for second graph
x_vals2 = []
y_vals2 = []

index = count()
index2 = count()

def animate(i):
    x_vals.append(next(index))
    y_vals.append(random.randint(0, 5))
    plt.cla()  # clear the current axes
    plt.plot(x_vals, y_vals)

def animate2(j):
    x_vals2.append(next(index2))
    y_vals2.append(random.randint(0, 5))
    plt.cla()  # clear the current axes
    plt.plot(x_vals2, y_vals2)

# GUI
root = Tk.Tk()
label = Tk.Label(root, text="Realtime Animated Graphs").grid(column=0, row=0)

# graph 1
canvas = FigureCanvasTkAgg(plt.gcf(), master=root)
canvas.get_tk_widget().grid(column=0, row=1)
ani = FuncAnimation(plt.gcf(), animate, interval=1000, blit=False)

# graph 2
canvas2 = FigureCanvasTkAgg(plt.gcf(), master=root)
canvas2.get_tk_widget().grid(column=1, row=1)
ax2 = plt.gcf().add_subplot(111)
line2, = ax2.plot(x_vals2, y_vals2)
ani2 = FuncAnimation(plt.gcf(), animate2, interval=1000, blit=False)

Tk.mainloop()
</code></pre>
|
<p>If you don't specifically need canvas 1 and 2, you can create two subplots on one figure / canvas.<br />
Then you get two axes: <code>ax1</code> and <code>ax2</code>.</p>
<p>You can use just one <code>FuncAnimation</code> with the same <code>x</code>.
If you need separate animations for <code>ax1</code> and <code>ax2</code>, you can do that as well and just update either <code>ax1</code> or <code>ax2</code> in the respective animation.</p>
<p>Here is code snippet:</p>
<pre><code>import random
import tkinter as Tk
from itertools import count
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg

plt.style.use('fivethirtyeight')

# values for first graph
x_vals = []
y_vals = []
# values for second graph
y_vals2 = []

index = count()
index2 = count()

def animate(i):
    # Generate values
    x_vals.append(next(index))
    y_vals.append(random.randint(0, 5))
    y_vals2.append(random.randint(0, 5))
    # Get all axes of figure
    ax1, ax2 = plt.gcf().get_axes()
    # Clear current data
    ax1.cla()
    ax2.cla()
    # Plot new data
    ax1.plot(x_vals, y_vals)
    ax2.plot(x_vals, y_vals2)

# GUI
root = Tk.Tk()
label = Tk.Label(root, text="Realtime Animated Graphs").grid(column=0, row=0)

# graph 1
canvas = FigureCanvasTkAgg(plt.gcf(), master=root)
canvas.get_tk_widget().grid(column=0, row=1)

# Create two subplots in row 1 and columns 1, 2
plt.gcf().subplots(1, 2)

ani = FuncAnimation(plt.gcf(), animate, interval=1000, blit=False)

Tk.mainloop()
</code></pre>
<p>This is the output:</p>
<p><a href="https://i.stack.imgur.com/tEqqt.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tEqqt.gif" alt="figure animation" /></a></p>
|
python|pandas|matplotlib|tkinter
| 1
|
8,314
| 42,324,119
|
Converting column into proper timestamp using pandas read_csv
|
<p>I have a time series csv file that consists of timestamps and financial data, like this:</p>
<pre><code>20140804:10:00:13.281486,782.83,443355
20140804:10:00:13.400113,955.71,348603
</code></pre>
<p>Now, I would like to put this into a <code>pandas.DataFrame</code> and parse the dates to <code>yyyymmddhhmmss</code> when I read in the <code>csv</code>. I searched around the threads and I see people using the <code>datetime</code> module, but I'm pretty new to Python, so I'm not sure how to use that module to parse the above data, and how to do all of this at the same time as I read in the <code>csv</code>.</p>
<p>How best to go about this?</p>
|
<p>You need:</p>
<p><strong>no header of csv</strong>:</p>
<pre><code>import pandas as pd
from pandas.compat import StringIO
temp=u"""
20140804:10:00:13.281486,782.83,443355
20140804:10:00:13.400113,955.71,348603"""
#after testing replace 'StringIO(temp)' to 'filename.csv'
df = pd.read_csv(StringIO(temp),
                 # parse first column as dates
                 parse_dates=[0],
                 # custom parse function
                 date_parser=lambda x: pd.datetime.strptime(x, '%Y%m%d:%H:%M:%S.%f'),
                 # no header in csv
                 header=None)
print (df)
0 1 2
0 2014-08-04 10:00:13.281486 782.83 443355
1 2014-08-04 10:00:13.400113 955.71 348603
print (df.dtypes)
0 datetime64[ns]
1 float64
2 int64
dtype: object
</code></pre>
<hr>
<p><strong>header of csv</strong></p>
<pre><code>import pandas as pd
from pandas.compat import StringIO
temp=u"""dates,a,b
20140804:10:00:13.281486,782.83,443355
20140804:10:00:13.400113,955.71,348603"""
#after testing replace 'StringIO(temp)' to 'filename.csv'
df = pd.read_csv(StringIO(temp),
                 parse_dates=[0],
                 date_parser=lambda x: pd.datetime.strptime(x, '%Y%m%d:%H:%M:%S.%f'))
print (df)
dates a b
0 2014-08-04 10:00:13.281486 782.83 443355
1 2014-08-04 10:00:13.400113 955.71 348603
print (df.dtypes)
dates datetime64[ns]
a float64
b int64
dtype: object
</code></pre>
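<p>Note that in newer pandas versions <code>pandas.compat.StringIO</code> and <code>pd.datetime</code> are no longer available. A roughly equivalent modern version (a sketch, parsing the column after reading instead of during it) would be:</p>
<pre><code>import pandas as pd
from io import StringIO

temp = u"""dates,a,b
20140804:10:00:13.281486,782.83,443355
20140804:10:00:13.400113,955.71,348603"""

df = pd.read_csv(StringIO(temp))
df['dates'] = pd.to_datetime(df['dates'], format='%Y%m%d:%H:%M:%S.%f')
</code></pre>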
|
python|pandas|datetime
| 1
|
8,315
| 42,569,432
|
How to copy every row of one matrix into every other row of another matrix using broadcasting?
|
<p>if I have the following matrices:</p>
<pre><code>a = np.array([['A'], ['B'], ['C']])
b = np.array([['0'], ['0'], ['0'], ['0'], ['0'], ['0']])
</code></pre>
<p>and I want to get the following:</p>
<pre><code>c = np.array([['A'], ['0'], ['B'], ['0'], ['C'], ['0']])
</code></pre>
<p>Is there a way to get c using some type of numpy broadcast/vectorized solution instead of a for loop?</p>
|
<p>For an in-place edit of <code>b</code>:</p>
<pre><code>b[::2] = a
</code></pre>
<p>To make those changes in a new array, make a copy and edit -</p>
<pre><code>c = b.copy()
c[::2] = a
</code></pre>
|
python-2.7|numpy|array-broadcasting
| 1
|
8,316
| 42,451,011
|
How is the numpy way to binarize arrays with a threshold value?
|
<p>How do I binarize a numpy array so that, in each row, the position of the row maximum is set to 1, provided that maximum is above a threshold value? If a row's maximum value is less than the threshold, then column 1 (index 1) should be set to 1 instead.</p>
<pre><code>a=np.array([[ 0.01, 0.3 , 0.6 ],
[ 0.2 , 0.1 , 0.4 ],
[ 0.7 , 0.1 , 0.3 ],
[ 0.2 , 0.3 , 0.5 ],
[ 0.1 , 0.7 , 0.3 ],
[ 0.1 , 0.5 , 0.8 ]])
# required output with threshold: row.max() >= 0.6, else column 1 should be 1
np.array([[ 0.0 , 0.0 , 1.0 ],
[ 0.0 , 1.0 , 0.0 ],
[ 1.0 , 0.0 , 0.0 ],
[ 0.0 , 1.0 , 0.0 ],
[ 0.0 , 1.0 , 0.0 ],
[ 0.0 , 0.0 , 1.0 ]])
</code></pre>
|
<p>One solution: use <code>argmax</code> and advanced indexing</p>
<pre><code>am = a.argmax(axis=-1)                   # column index of each row's maximum
am[a[np.arange(len(a)), am] < 0.6] = 1   # rows whose max is below the threshold point at column 1
out = np.zeros_like(a)
out[np.arange(len(a)), am] = 1           # place a single 1 per row
out
array([[ 0., 0., 1.],
[ 0., 1., 0.],
[ 1., 0., 0.],
[ 0., 1., 0.],
[ 0., 1., 0.],
[ 0., 0., 1.]])
</code></pre>
|
python|arrays|numpy
| 1
|
8,317
| 69,699,821
|
Creating a dataframe from list and existing dataframe
|
<p>I have a dataframe in the form of</p>
<pre><code>column 1 column 2 column 3
</code></pre>
<p>And I would like to add values to it.
I have a list which I would like to add which is in the form of:</p>
<pre><code>a= [['Master Vithal', ' Vithal Zubeida'], ['Firozshah Mistry', ' B Irani'], ['Grigor']]
</code></pre>
<p>How do I add the elements of this list so that <code>a[0]</code> goes to column 1, and so on?
Thanks in advance!</p>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>.loc</code></a> to add elements of list <code>a</code> at the end of the dataframe, as follows:</p>
<pre><code>import numpy as np
cols = ['column 1', 'column 2', 'column 3'] # enter your selected columns here
df.loc[len(df), cols] = np.array(a, dtype='object')[0:len(cols)]
</code></pre>
<p><strong>Demo</strong></p>
<pre><code>import numpy as np
df = pd.DataFrame(columns=['column 1', 'column 2', 'column 3'])
a= [['Master Vithal', ' Vithal Zubeida'], ['Firozshah Mistry', ' B Irani'], ['Grigor']]
cols = ['column 1', 'column 2', 'column 3']
df.loc[len(df), cols] = np.array(a, dtype='object')[0:len(cols)]
print(df)
column 1 column 2 column 3
0 [Master Vithal, Vithal Zubeida] [Firozshah Mistry, B Irani] [Grigor]
</code></pre>
|
python|pandas|web-scraping
| 0
|
8,318
| 69,900,878
|
How do I create a contourplot with a custom function?
|
<p>I have seen many examples online of create a contourplot as follows</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

xlist = np.linspace(-3.0, 3.0, 3)
ylist = np.linspace(-3.0, 3.0, 4)
X, Y = np.meshgrid(xlist, ylist)
Z = np.sqrt(X**2 + Y**2)

fig, ax = plt.subplots()
cp = ax.contourf(X, Y, Z)
plt.colorbar(cp)
ax.set_title('Contour Plot')
ax.set_xlabel('x (cm)')
ax.set_ylabel('y (cm)')
plt.show()
</code></pre>
<p>Now this is quite straightforward because the sqrt function in numpy will happily accept meshgrid inputs and evaluate at all points of interest. What I don't understand, however, is how I would do this if I have a function of my own. For example, say I have a much more complex function, <code>nllh = lambda x1, x2: my_function(x1, x2)</code>, which could be arbitrarily complicated but takes in two scalars and itself returns a scalar.</p>
<p>When I try to implement this with my function in the same way (i.e. the same code above except with the line <code>Z = nllh(X, Y)</code>), I get an error that the input array dimensions don't make sense, as this function is not designed to take a meshgrid as an input. How can I rectify this issue, or make Python understand that <code>Z = nllh(X, Y)</code> needs to be evaluated on the meshgrid value pairs independently instead of on the meshgrids themselves?</p>
<p>Thanks</p>
|
<p>Answered by Tadhg McDonald-Jensen in a comment. Create a new function with <code>np.vectorize(nllh)</code></p>
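<p>As a short illustrative sketch (assuming <code>nllh</code> takes two scalars and returns a scalar, as in the question):</p>
<pre><code>import numpy as np

nllh_vec = np.vectorize(nllh)  # element-wise wrapper; a convenience loop, not a speed-up
Z = nllh_vec(X, Y)             # now evaluates nllh on each (x, y) pair of the meshgrid
</code></pre>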
|
python|numpy|contour
| 0
|
8,319
| 43,086,123
|
plot a graph with its x-label is month and date
|
<p>I have a dataframe like this</p>
<pre><code> 2015max 2015min idxmax idxmin
01-05 242.0 -54.0 241.0 -127.0
01-26 245.0 -45.0 238.0 -134.0
04-02 298.0 -23.0 280.0 -59.0
04-04 288.0 72.0 283.0 -86.0
04-17 281.0 29.0 278.0 -47.0
</code></pre>
<p>I want to overlay a scatter of the data for any points, with the x-labels being month-day strings like "01-01, 01-02, 01-03, ...".
I have tried to use</p>
<pre><code>idxmin.index = pd.to_datetime(idxmin.index, format='%m-%d',errors='ignore')
</code></pre>
<p>but it always gives me the error:</p>
<blockquote>
<p>ValueError: could not convert string to float</p>
</blockquote>
<p>Does anyone have a good idea how to solve this problem?</p>
|
<p>It seems there are some bad values in your data, so you need the parameter <code>errors='coerce'</code> to replace them with <code>NaT</code>, and then replace the <code>NaT</code> values with something:</p>
<pre><code>print (idxmin)
2015max 2015min idxmax idxmin
01-05 242.0 -54.0 241.0 -127.0
01-26 245.0 -45.0 238.0 -134.0
04-02 298.0 -23.0 280.0 -59.0
04-04 288.0 72.0 283.0 -86.0
04-35 281.0 29.0 278.0 -47.0 <- change last value to bad for testing
idxmin.index = pd.to_datetime(idxmin.index, format='%m-%d',errors='coerce')
print (idxmin)
2015max 2015min idxmax idxmin
1900-01-05 242.0 -54.0 241.0 -127.0
1900-01-26 245.0 -45.0 238.0 -134.0
1900-04-02 298.0 -23.0 280.0 -59.0
1900-04-04 288.0 72.0 283.0 -86.0
NaT 281.0 29.0 278.0 -47.0
idxmin.index = idxmin.index.fillna(pd.to_datetime('01-01-2000'))
print (idxmin)
2015max 2015min idxmax idxmin
1900-01-05 242.0 -54.0 241.0 -127.0
1900-01-26 245.0 -45.0 238.0 -134.0
1900-04-02 298.0 -23.0 280.0 -59.0
1900-04-04 288.0 72.0 283.0 -86.0
2000-01-01 281.0 29.0 278.0 -47.0
</code></pre>
<p>You can also check all bad values:</p>
<pre><code>print (idxmin.index[pd.isnull(pd.to_datetime(idxmin.index, format='%m-%d',errors='coerce'))])
Index(['04-35'], dtype='object')
</code></pre>
|
python|pandas|matplotlib
| 3
|
8,320
| 43,110,684
|
How to plot values of pandas dataframe with reference to a list (problems with indexing)?
|
<p>I am looking for a clever way to produce a plot styled like this rather childish example:
<a href="https://i.stack.imgur.com/6HeqV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6HeqV.png" alt="enter image description here"></a></p>
<p>with source data like this:</p>
<pre><code>days = ['Monday','Tuesday','Wednesday','Thursday','Friday']
Feature Values observed on
0 1 [5.5, 14.3, 12.0, 11.8] [Tuesday, Wednesday, Thursday, Friday]
1 2 [6.1, 14.6, 12.7] [Monday, Tuesday, Wednesday]
2 3 [15.2, 13.3] [Tuesday, Friday]
3 4 [14.9, 14.3, 17.0] [Monday, Thursday, Friday]
4 5 [13.0, 13.1, 13.5, 10.3] [Monday, Tuesday, Thursday, Friday]
5 6 [12.5, 7.0] [Wednesday, Friday]
</code></pre>
<p>In other words, for each line of this dataframe, I want to plot/connect the values for the "days" on which they were acquired. (Please note the days are here just to illustrate my problem, using datetime is not a solution.)
But I got lost in indexing.</p>
<p>This is how I prepared the figure (i.e. having vertical black lines for each day)</p>
<pre><code>for count, log in enumerate(days):
    plt.plot(np.ones(len(allvalues)) * count, np.array(allvalues), 'k', linestyle='-', linewidth=1.)
plt.xticks(np.arange(0, 5, 1), ['M', 'T', 'W', 'T', 'F'])
</code></pre>
<p>and this works, I get my vertical lines and the labels. (later I may want to plot other datasets instead of those vertical lines, but for now, the vertical lines are more illustrative)
But now, how can I plot the values for each day?</p>
<pre><code>for index, group in observations.iterrows():
    whichdays = group['observed on']
    values = group['Values']
    for d in whichdays:
        plt.plot(days[np.where(days==d)], values)
</code></pre>
<p>but this produces <code>TypeError: list indices must be integers, not tuple</code></p>
|
<p>One possible solution is flattening the values from the <code>lists</code>, then <code>pivot</code> and plot:</p>
<pre><code>from itertools import chain

df2 = pd.DataFrame({
    "Feature": np.repeat(df.Feature.values, df.Values.str.len()),
    "Values": list(chain.from_iterable(df.Values)),
    "observed on": list(chain.from_iterable(df['observed on']))})
print (df2)
Feature Values observed on
0 1 5.5 Tuesday
1 1 14.3 Wednesday
2 1 12.0 Thursday
3 1 11.8 Friday
4 2 6.1 Monday
5 2 14.6 Tuesday
6 2 12.7 Wednesday
7 3 15.2 Tuesday
8 3 13.3 Friday
9 4 14.9 Monday
10 4 14.3 Thursday
11 4 17.0 Friday
12 5 13.0 Monday
13 5 13.1 Tuesday
14 5 13.5 Thursday
15 5 10.3 Friday
16 6 12.5 Wednesday
17 6 7.0 Friday
</code></pre>
<hr>
<pre><code>df = df2.pivot(index='observed on', columns='Feature', values='Values')
df.index.name = None
df.columns.name = None
print (df)
1 2 3 4 5 6
Friday 11.8 NaN 13.3 17.0 10.3 7.0
Monday NaN 6.1 NaN 14.9 13.0 NaN
Thursday 12.0 NaN NaN 14.3 13.5 NaN
Tuesday 5.5 14.6 15.2 NaN 13.1 NaN
Wednesday 14.3 12.7 NaN NaN NaN 12.5
df.plot(linestyle='-',linewidth=1.)
</code></pre>
<p><a href="https://i.stack.imgur.com/cvnx1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cvnx1.png" alt="graph"></a></p>
|
python|pandas|matplotlib
| 1
|
8,321
| 43,464,410
|
Pandas drop group column after groupby.apply(..)
|
<pre><code> uid iid val
uid
1 1 1 5 5.5
2 3 1 4 3.5
2 2 1 4 3.5
2 7 1 4 3.5
2 9 1 4 3.5
2 11 1 4 3.5
</code></pre>
<p>From the dataframe above, I want to remove the first column, which is:</p>
<pre><code>uid
1
2
2
2
2
2
</code></pre>
<p>and extract</p>
<pre><code> uid iid val
1 1 5 5.5
3 1 4 3.5
2 1 4 3.5
7 1 4 3.5
9 1 4 3.5
11 1 4 3.5
</code></pre>
<p>Can someone help?</p>
|
<p>You can avoid including the <code>uid</code> in the index in the first place by passing <code>group_keys=False</code> to the <code>groupby</code></p>
<pre><code>df.groupby('uid', group_keys=False).apply(lambda x: x.tail(len(x) // 5))
uid iid val
4 1 5 5.5
</code></pre>
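<p>If the grouped result with the duplicated <code>uid</code> level already exists, dropping that extra index level afterwards also works (a sketch; <code>result</code> is a hypothetical name for your grouped DataFrame):</p>
<pre><code>result = result.reset_index(level=0, drop=True)
</code></pre>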
|
python|pandas|dataframe|group-by|pandas-groupby
| 15
|
8,322
| 72,346,587
|
How to convert tf.estimator.DNNClassifier() to tflite
|
<p>code:</p>
<pre><code>classifier1 = tf.estimator.DNNClassifier(
feature_columns=my_feature_columns,
hidden_units=[128,64,32,10],
n_classes=10)
</code></pre>
<p>#Save model</p>
<pre><code>feature_spec = tf.feature_column.make_parse_example_spec(my_feature_columns)
export_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
servable_model_dir = "/content/DNN_model"
servable_model_path = classifier1.export_saved_model(servable_model_dir, export_input_fn)
</code></pre>
<p>#convert to tflite</p>
<pre><code>converter=tf.lite.TFLiteConverter.from_saved_model("/content/DNN_model/1653290705/saved_model.pb")
tflite_model = converter.convert()
</code></pre>
<p>#finaly get this error</p>
<pre><code>OSError: SavedModel file does not exist at: /content/DNN_model/1653290705/saved_model.pb/{saved_model.pbtxt|saved_model.pb}
</code></pre>
<p>How can I fix this error? (Note: this code was run on Colab.)</p>
|
<p>You need to specify the directory that contains the saved model, not the .pb file inside that directory.
So if you saved the model in "/content/DNN_model", then pass that same path to the converter:</p>
<pre><code>converter=tf.lite.TFLiteConverter.from_saved_model(servable_model_dir)
</code></pre>
|
tensorflow|deep-learning|tensorflow2.0|tensorflow-lite
| 0
|
8,323
| 50,535,067
|
Need to know when, and how many times a variable gets updated in Tensorflow
|
<p>In my Tensorflow project, I need to know whether <code>train_op</code> as defined below, updates a certain variable or not, and if it does then, how many times it gets updated. </p>
<p>For a feed-forward network this is trivial, one <code>train_op</code> call results in one time update of the variable, but in case of Recurrent net, one <code>train_op</code> will result in <code>num_steps</code> <a href="https://www.tensorflow.org/tutorials/recurrent#truncated_backpropagation" rel="nofollow noreferrer">1</a> updates, but since I have my own variables in the recurrent layer, I am not sure if they are getting updated <code>num_steps</code> times or just once.</p>
<pre><code>tf.reset_default_graph()
tf.InteractiveSession()

__N = 10
tf_w0 = tf.get_variable(name="w0",
                        initializer=tf.constant(value=10.00, shape=[__N]),
                        dtype=tf.float32,
                        trainable=True)
tf_counter = tf.get_variable(name="counter",
                             initializer=tf.constant(value=0.0, shape=[]),
                             dtype=tf.float32,
                             trainable=False)

loss = tf.square(tf_w0)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
grads_vars = optimizer.compute_gradients(loss=loss, var_list=tf.trainable_variables())
train_op = optimizer.apply_gradients(grads_vars)
</code></pre>
<p>How can I attach a counter to a variable, such that each time <code>train_op</code> updates, it should also increment the counter?
That way I will know if my variables in recurrent layer (I have my own modified recurrent layer from the original <a href="https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/python/ops/rnn_cell_impl.py" rel="nofollow noreferrer">2</a> and the code is a bit messy) of Tensorflow are getting updated like the way they should.</p>
<p>Thanks in advance.</p>
|
<p>I'm pretty sure that the recurrent weights are just updated once. The weights are reused multiple times in the forward pass. Multiple gradients are calculated in the backward pass. Those gradients are added together and then a single update is made.</p>
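<p>If you want to verify this empirically, one option (a sketch of mine, not part of the original answer, using the TF1-style API from the question) is to pass a non-trainable counter as <code>global_step</code> to <code>apply_gradients</code>; it is incremented exactly once per <code>train_op</code> run:</p>
<pre><code>step_counter = tf.get_variable(name="step_counter",
                               initializer=tf.constant(0, dtype=tf.int64),
                               trainable=False)
train_op = optimizer.apply_gradients(grads_vars, global_step=step_counter)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(train_op)
    print(sess.run(step_counter))  # -> 3: the weights were updated once per train_op call
</code></pre>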
|
python|tensorflow
| 0
|
8,324
| 45,437,458
|
Combine row data from multiple txt files to a single data frame in column format
|
<p>I have row data in each text file in the following format</p>
<p>File 1</p>
<pre><code> Sample 1, 24/07/2017 13:26:08
0 Peak at 1219 , 1.864
1 Peak at 1092 , 0.412
2 Peak at 1358 , 1.661
</code></pre>
<p>File 2</p>
<pre><code> Sample 2, 24/07/2017 14:28:15
0 Peak at 1219 , 1.544
1 Peak at 1092 , 0.315
2 Peak at 1358 , 1.564
</code></pre>
<p>File 3</p>
<pre><code> Sample 3, 24/07/2017 15:31:05
0 Peak at 1219 , 1.954
1 Peak at 1092 , 0.524
2 Peak at 1358 , 1.423
</code></pre>
<p>I want to combine the data from all the files and create a single data frame in a column format like this. </p>
<pre><code> Sample No Date Time Peak at 1219 Peak at 1092 Peak at 1358
0 1 24/07/2017 13:26:08 1.864 0.412 1.661
1 2 24/07/2017 13:28:15 1.544 0.315 1.564
2 3 24/07/2017 13:31:05 1.954 0.524 1.423
</code></pre>
<p>Can anyone please help with the code? Thanks a lot.</p>
|
<p>The main function here is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a>, which creates one big <code>df</code>, but <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> is needed first to align the data.</p>
<p>Then transpose the <code>df</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.T.html" rel="nofollow noreferrer"><code>T</code></a> and convert the second index level, selected with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_level_values.html" rel="nofollow noreferrer"><code>get_level_values</code></a>, using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a>.</p>
<p>For the first and second columns use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.insert.html" rel="nofollow noreferrer"><code>insert</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.date.html" rel="nofollow noreferrer"><code>DatetimeIndex.date</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.time.html" rel="nofollow noreferrer"><code>DatetimeIndex.time</code></a>.</p>
<p>Finally, remove the second index level with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a> and <code>drop=True</code>, name the index with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename_axis.html" rel="nofollow noreferrer"><code>rename_axis</code></a>, and call <code>reset_index</code> again:</p>
<pre><code>dfs = [df1,df2,df3]
#set first column to index
dfs = [x.set_index(x.columns[0]) for x in dfs]
df = pd.concat(dfs, 1, keys = range(1, len(dfs) + 1)).T
print (df)
Peak at 1219 Peak at 1092 Peak at 1358
1 24/07/2017 13:26:08 1.864 0.412 1.661
2 24/07/2017 14:28:15 1.544 0.315 1.564
3 24/07/2017 15:31:05 1.954 0.524 1.423
print (df.index.labels[0])
FrozenNDArray([0, 1, 2], dtype='int8')
dates = pd.to_datetime(df.index.get_level_values(1))
df.insert(0, 'Date', dates.date)
df.insert(1, 'Time', dates.time)
df = df.reset_index(level=1, drop=True).rename_axis('Sample No').reset_index()
print (df)
Sample No Date Time Peak at 1219 Peak at 1092 Peak at 1358
0 1 2017-07-24 13:26:08 1.864 0.412 1.661
1 2 2017-07-24 14:28:15 1.544 0.315 1.564
2 3 2017-07-24 15:31:05 1.954 0.524 1.423
</code></pre>
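<p>To build <code>df1</code>, <code>df2</code>, <code>df3</code> from the text files in the first place, a small parsing helper along these lines could be used (a sketch based only on the layout shown in the question; the helper name and file paths are hypothetical and the real file format may differ):</p>
<pre><code>import pandas as pd

def read_peaks(path):
    # first line e.g. "Sample 1, 24/07/2017 13:26:08", the rest "0 Peak at 1219 , 1.864"
    with open(path) as fh:
        header = fh.readline().strip()
        rows = [line.strip() for line in fh if line.strip()]
    names, values = [], []
    for row in rows:
        left, right = row.split(',')
        names.append(left.split(None, 1)[1].strip())  # drop the leading row number
        values.append(float(right))
    timestamp = header.split(',', 1)[1].strip()
    return pd.DataFrame({'Peak': names, timestamp: values})

dfs_from_disk = [read_peaks(p) for p in ['file1.txt', 'file2.txt', 'file3.txt']]
</code></pre>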
|
python|pandas|numpy
| 1
|
8,325
| 45,603,672
|
Can the date format of dataframe and csv file be the same?
|
<p>The two photos that I've attached below show a dataframe table and a table that was exported out to csv file. I'm wondering if there is any command that can modify the date so that the dates shown on both files would be the same. </p>
<p>In the dataframe the date is 2017-08-01, but after exporting it becomes 2017/8/1 (<strong>instead of</strong> 2017/08/01).
Does anyone know how this can be done, or can I only edit the cell format manually?</p>
<p><img src="https://i.stack.imgur.com/zb6Jr.png" alt="Dates of the dataframe"></p>
<p><a href="https://i.stack.imgur.com/OaOZv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OaOZv.png" alt="Dates of the csv file"></a></p>
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer">pandas.DataFrame.to_csv</a></p>
<p>When you make the call to the <code>to_csv</code> function, you can supply it the parameter <code>date_format='%Y-%m-%d'</code>.</p>
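<p>For example (the output file name here is just for illustration):</p>
<pre><code>df.to_csv('output.csv', date_format='%Y-%m-%d')
</code></pre>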
|
python|pandas|csv|date|dataframe
| 2
|
8,326
| 45,699,653
|
Using column name as a new attribute in pandas
|
<p>I have the following data structure</p>
<pre><code>Date Agric Food
01/01/1990 1.3 0.9
01/02/1990 1.2 0.9
</code></pre>
<p>I would like to covert it into the format</p>
<pre><code>Date Sector Beta
01/01/1990 Agric 1.3
01/02/1990 Agric 1.2
01/01/1990 Food 0.9
01/02/1990 Food 0.9
</code></pre>
<p>While I am sure I can do this in a complicated way, is there a way of doing this in a few lines of code?</p>
|
<p>Use <code>set_index</code> and <code>stack</code>:</p>
<pre><code>df.set_index('Date').rename_axis('Sector',axis=1).stack()\
.reset_index(name='Beta')
</code></pre>
<p>Output:</p>
<pre><code> Date Sector Beta
0 01/01/1990 Agric 1.3
1 01/01/1990 Food 0.9
2 01/02/1990 Agric 1.2
3 01/02/1990 Food 0.9
</code></pre>
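<p>An equivalent alternative (not part of the original answer) is <code>melt</code>, which unpivots in a single call:</p>
<pre><code>df.melt(id_vars='Date', var_name='Sector', value_name='Beta')
</code></pre>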
|
python|pandas|reshape
| 5
|
8,327
| 45,515,031
|
How to remove columns with too many missing values in Python
|
<p>I'm working on a machine learning problem in which there are many missing values in the features. There are hundreds of features, and I would like to remove those features that have too many missing values (say, more than 80% missing). How can I do that in Python?</p>
<p>My data is a Pandas dataframe.</p>
|
<p>Demo:</p>
<p><strong>Setup:</strong></p>
<pre class="lang-none prettyprint-override"><code>In [105]: df = pd.DataFrame(np.random.choice([2,np.nan], (20, 5), p=[0.2, 0.8]), columns=list('abcde'))
In [106]: df
Out[106]:
a b c d e
0 NaN 2.0 NaN NaN NaN
1 NaN NaN 2.0 NaN 2.0
2 NaN 2.0 NaN NaN NaN
3 NaN NaN NaN NaN 2.0
4 NaN 2.0 2.0 NaN NaN
5 NaN NaN NaN NaN NaN
6 NaN 2.0 NaN NaN NaN
7 2.0 2.0 NaN NaN NaN
8 2.0 2.0 NaN NaN NaN
9 NaN NaN NaN NaN NaN
10 NaN 2.0 2.0 NaN 2.0
11 NaN NaN NaN 2.0 NaN
12 2.0 NaN NaN 2.0 NaN
13 NaN NaN NaN 2.0 NaN
14 NaN NaN NaN 2.0 2.0
15 NaN NaN NaN NaN NaN
16 NaN 2.0 NaN NaN NaN
17 2.0 NaN NaN NaN 2.0
18 NaN NaN NaN 2.0 NaN
19 NaN 2.0 NaN 2.0 NaN
In [107]: df.isnull().mean()
Out[107]:
a 0.80
b 0.55
c 0.85
d 0.70
e 0.75
dtype: float64
</code></pre>
<p><strong>Solution:</strong></p>
<pre class="lang-none prettyprint-override"><code>In [108]: df.columns[df.isnull().mean() < 0.8]
Out[108]: Index(['b', 'd', 'e'], dtype='object')
In [109]: df[df.columns[df.isnull().mean() < 0.8]]
Out[109]:
b d e
0 2.0 NaN NaN
1 NaN NaN 2.0
2 2.0 NaN NaN
3 NaN NaN 2.0
4 2.0 NaN NaN
5 NaN NaN NaN
6 2.0 NaN NaN
7 2.0 NaN NaN
8 2.0 NaN NaN
9 NaN NaN NaN
10 2.0 NaN 2.0
11 NaN 2.0 NaN
12 NaN 2.0 NaN
13 NaN 2.0 NaN
14 NaN 2.0 2.0
15 NaN NaN NaN
16 2.0 NaN NaN
17 NaN NaN 2.0
18 NaN 2.0 NaN
19 2.0 2.0 NaN
</code></pre>
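<p>A related shortcut worth knowing (not shown in the demo above): <code>dropna</code> with <code>thresh</code> drops the columns directly, where <code>thresh</code> is the minimum number of non-missing values a column must have in order to be kept:</p>
<pre><code>df = df.dropna(axis=1, thresh=int(0.2 * len(df)))  # keep columns with at least 20% non-NaN values
</code></pre>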
|
python|pandas|dataframe|scikit-learn|missing-data
| 27
|
8,328
| 62,707,855
|
apache arrow - adequacy for parallel processing
|
<p>I have a huge dataset and am using Apache Spark for data processing.</p>
<p>Using Apache Arrow, we can convert Spark-compatible data-frame to Pandas-compatible data-frame and run operations on it.</p>
<p>By converting the data-frame, will it achieve the performance of parallel processing seen in Spark or will it behave like Pandas?</p>
|
<p>As you can see on the documentation <a href="http://spark.apache.org/docs/latest/sql-pyspark-pandas-with-arrow.html#apache-arrow-in-spark" rel="nofollow noreferrer">here</a></p>
<blockquote>
<p>Note that even with Arrow, toPandas() results in the collection of all records in the DataFrame to the driver program and should be done on a small subset of the data</p>
</blockquote>
<p>The data will be sent to the driver when it is moved to the Pandas data frame. That means you may have performance issues if there is too much data for the driver to deal with. For that reason, if you decide to use Pandas, try to reduce or group the data before calling the <code>toPandas()</code> method.</p>
<p>It won't have the same parallelization once it's converted to a Pandas data frame because Spark executors won't be working on that scenario. The beauty of Arrow is to be able to move from the Spark data frame to Pandas directly, but you have to think on the size of the data</p>
<p>Another possibility would be to use other frameworks like <a href="https://koalas.readthedocs.io/en/latest/" rel="nofollow noreferrer">Koalas</a>. It has some of the "beauties" of Pandas but it's integrated into Spark.</p>
|
pandas|apache-spark|apache-arrow
| 3
|
8,329
| 62,817,874
|
Is there a way to check a dataframe against a single value?
|
<p>I have a dataframe like this.</p>
<pre><code>import pandas as pd
import numpy as np
# Creating a dict of lists
data = {'Name':["Akash", "Geeku", "Pankaj", "Sumitra","Ramlal"],
'Branch':["B.Tech", np.nan, "BCA", "B.Tech", "BCA"],
'Score':["80","90","60", "30", "B.Tech"],
'Result': ["Pass","Pass","Pass","Fail","Fail"]}
# creating a dataframe
df = pd.DataFrame(data)
df
</code></pre>
<p>df1:</p>
<p><a href="https://i.stack.imgur.com/343lM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/343lM.png" alt="enter image description here" /></a></p>
<p>Then I want to check the dataframe against a value like 'B.Tech' that can be anywhere in the df. And return some df like this one below.</p>
<p>df2:</p>
<p><a href="https://i.stack.imgur.com/0dwIe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0dwIe.png" alt="enter image description here" /></a></p>
<p>Then I want to get a list, where the value would be based on the first 4 boolean value, e.g. if any value in first 4 columns contains one+ True, the new column would be True, otherwise False</p>
<p>For this case, the result I want is <em>[True, False, False, False, True]</em></p>
<p>Sorry I am new to pandas, I wonder if Pandas provides an efficient way to do it.</p>
|
<p>This will do it in one go:</p>
<p><code>(df == "B.Tech").sum(axis=1).astype(bool)</code></p>
<p>To explain:</p>
<p><code>df == "B.Tech"</code> returns a DataFrame the same shape as your original but just containing True/False values as to whether the value is equal to "B.Tech"</p>
<p><code>.sum(axis=1)</code> sums the boolean values by row, interpreting True as 1 and False as 0.</p>
<p><code>.astype(bool)</code> converts the results of the sum back into boolean, where anything greater than 0 becomes True, and 0 becomes False.</p>
<h3>Update:</h3>
<p>Alternatively as Ch3steR pointed out you can replace the last part with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.any.html" rel="nofollow noreferrer">any</a>, as in:</p>
<p><code>(df == "B.Tech").any(axis=1)</code></p>
|
python|pandas
| 2
|
8,330
| 54,336,065
|
How to reduce command with .replace and problems in create function
|
<p>I have a dataframe in which I would like to replace the 0, 1 encoding with 'yes' and 'no' in some columns that I've selected. Some df columns have this encoding and so I wrote the following command:</p>
<pre><code>dados_trabalho = dados_trabalho.replace({"ASSINTOM": {0: "Sim", 1 : "Nรฃo"}}).replace({"DOR ATIPICA": {0: "Sim", 1 : "Nรฃo"}}).replace({"IAM": {0: "Sim", 1 : "Nรฃo"}}).replace({"HAS": {0: "Sim", 1 : "Nรฃo"}}).replace({"DM": {0: "Sim", 1 : "Nรฃo"}}).replace({"DISPLIP": {0: "Sim", 1 : "Nรฃo"}}).replace({"DOR TIPICA": {0: "Sim", 1 : "Nรฃo"}})
</code></pre>
<p>It runs correctly and replaces the columns identified by the new encoding, but I would like to know if there is a way to summarize this formula so that the script does not get huge.</p>
<p>I tried to create the function:</p>
<pre><code>def change_columns (df):
c = df.columns
df = df.replace ({c: {0: "Yes", 1: "No"}})
</code></pre>
<p>The problem is when I enter the dataframe in this function the following error occurs:</p>
<pre><code>change_columns (df)
TypeError Traceback (most recent call last)
<ipython-input-141-43eb9316b19b> in <module>
----> 1 change_columns (df)
<ipython-input-140-9fbbd4e9e293> in change_columns (df)
1 def change_columns (df):
2 c = df.columns
----> 3 df = df.replace ({c: {0: "Yes", 1: "No"}})
/usr/lib/python3/dist-packages/pandas/core/indexes/base.py in __hash__(self)
   2060
   2061     def __hash__(self):
-> 2062         raise TypeError("unhashable type: %r" % type(self).__name__)
   2063
   2064     def __setitem__(self, key, value):
TypeError: unhashable type: 'Index'
</code></pre>
<p>I'm starting with Python and so I think I'm forgetting something.</p>
<p>I changed a few things in the code and it worked. But the problem is that it applies the function in all df columns. How do I apply the function only on the columns I want and not for all columns?</p>
<pre><code>def change_columns(df):
for i in df.columns:
df = df.replace({i: {0: "Sim", 1 : "Nรฃo"}})
return df
</code></pre>
|
<p>The function you created (<code>change_columns(df)</code>), looks like it is trying to perform the replace on all the columns. If this was your intention, you don't need any special function or chained method calls. All you need is:</p>
<pre><code>dados_trabalho = dados_trabalho.replace({0: "Sim", 1 : "Nรฃo"})
</code></pre>
<p>In order to only replace the 0's and 1's from some of the columns, you will need to tell the function which columns you want to perform the replacement on. For example:</p>
<pre><code>import pandas
def change_columns(df, cols):
for col_name in cols:
df = df.replace({col_name: {0:'yes', 1:'no'}})
return df
# create sample data
df = pandas.DataFrame([[0, 0, 1, 0, 1, 1], [1, 0, 1, 0, 1, 0]])
print('Starting DataFrame:')
print(df)
# define columns to do the replacement
columns_to_replace = [0, 2, 3]
# perform the replacement
df = change_columns(df, columns_to_replace)
# see the result
print('After processing DataFrame: ')
print(df)
</code></pre>
<p>Running the code above should produce the result:</p>
<pre><code>Starting DataFrame:
0 1 2 3 4 5
0 0 0 1 0 1 1
1 1 0 1 0 1 0
After processing DataFrame:
0 1 2 3 4 5
0 yes 0 no yes 1 1
1 no 0 no yes 1 0
</code></pre>
|
python|pandas|dataframe
| 1
|
8,331
| 54,279,454
|
saving the bounding box image
|
<p>Instead of trying to draw the bounding box over the image, i am trying to save it as a new image. </p>
<p>When i was getting [ymin, xmax, ymax, xmin] points, i was doing this.</p>
<pre><code>import cv2
import numpy as np
image = cv2.imread('ballet_106_0.jpg')
image = np.array(image)
boxes = [21, 511, 41, 420 ]
ymin, xmax , ymax ,xmin = boxes
im2 = image[ymin:ymax,xmin:xmax,:]
cv2.imwrite('bboximage.jpg',im2)
</code></pre>
<p>But if I only get the <code>x</code> and <code>y</code> points along with the <code>height</code> and <code>width</code>, I'm not sure how I could index the numpy array.</p>
<p>Any suggestions would be really helpful ,Thanks in advance.</p>
|
<p>Your code looks ok, though this line:</p>
<pre><code>image = np.array(image)
</code></pre>
<p>is not required: if everything goes well <code>cv2.imread</code> produces an <code>np.array</code>; however, if <code>cv2.imread</code> fails it returns <code>None</code>, which might be the source of your problem. Please add the following line below your <code>cv2.imread</code>:</p>
<pre><code>print(type(image))
</code></pre>
<p>if it prints <code>NoneType</code>, it most probably means that there is no <code>ballet_106_0.jpg</code> image in your directory.</p>
<p>EDIT: To convert <code>x,y,height,width</code> to <code>x/y-min/max</code> values simply do</p>
<pre><code>ymin = y
ymax = y+height
xmin = x
xmax = x+width
</code></pre>
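<p>Putting the EDIT together with the original snippet, a minimal sketch (assuming <code>boxes</code> now holds <code>[x, y, width, height]</code>):</p>
<pre><code>import cv2

image = cv2.imread('ballet_106_0.jpg')
print(type(image))  # should not be NoneType

x, y, width, height = boxes
im2 = image[y:y+height, x:x+width]  # rows are indexed by y, columns by x
cv2.imwrite('bboximage.jpg', im2)
</code></pre>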
|
python|numpy|opencv|image-processing
| 1
|
8,332
| 54,308,172
|
Adding a trend line to a matplotlib line plot python
|
<p>Apologies if this has already been asked but I can't find the answer anywhere. I want to add an overall trend line to a plt plot. Sample data:</p>
<pre><code>import pandas as pd
data = pd.DataFrame({'year': [2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018,
2019],
'value': [2, 5, 8, 4, 1, 6, 10, 14, 8]})
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [28, 26]
data.plot(x = "year", y = "value", fontsize = 30)
plt.xlabel('Time', fontsize = 30)
</code></pre>
<p><a href="https://i.stack.imgur.com/2B98z.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2B98z.png" alt="enter image description here"></a></p>
<p>How can I add a trend line?</p>
|
<p>If you are looking for a simple linear regression fit, you can use directly either <a href="http://seaborn.pydata.org/generated/seaborn.lmplot.html#seaborn.lmplot" rel="noreferrer"><code>lmplot</code></a> or <a href="http://seaborn.pydata.org/generated/seaborn.regplot.html#seaborn.regplot" rel="noreferrer"><code>regplot</code></a> from <code>seaborn</code>. It performs the linear regression and plots the fit (line) with a 95% confidence interval (shades, default value). You can also use NumPy to perform the fit. In case you want to use NumPy, comment below and I will update.</p>
<pre><code>import seaborn as sns
# Your DataFrame here
# sns.lmplot(x='year',y='value',data=data,fit_reg=True)
sns.regplot(x='year',y='value',data=data, fit_reg=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/StKuu.png" rel="noreferrer"><img src="https://i.stack.imgur.com/StKuu.png" alt="enter image description here"></a></p>
<p><strong>From the Docs</strong></p>
<blockquote>
<p>The regplot() and lmplot() functions are closely related, but the former is an axes-level function while the latter is a figure-level function that combines regplot() and <a href="http://seaborn.pydata.org/generated/seaborn.FacetGrid.html#seaborn.FacetGrid" rel="noreferrer"><code>FacetGrid</code></a> which allows you to plot conditional relationships amongst your data on different subplots in the grid.</p>
</blockquote>
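<p>In case you prefer to stay with matplotlib/NumPy only, a small sketch using <code>np.polyfit</code> for a degree-1 (linear) trend line on the same data:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

# fit a straight line through year vs. value and evaluate it
coeffs = np.polyfit(data['year'], data['value'], deg=1)
trend = np.polyval(coeffs, data['year'])

plt.plot(data['year'], data['value'], label='value')
plt.plot(data['year'], trend, linestyle='--', label='trend')
plt.legend()
plt.show()
</code></pre>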
|
python|pandas|matplotlib
| 13
|
8,333
| 73,805,959
|
Create Pandas DataFrame column which joins column names for any non na values
|
<p>How do I create a new column which joins the column names for any non na values on a per row basis.</p>
<ul>
<li>Please note the duplicate index.</li>
</ul>
<p><strong>Code</strong></p>
<pre><code>so_df = pd.DataFrame({"ma_1":[10,np.nan,13,15],
"ma_2":[10,11,np.nan,15],
"ma_3":[np.nan,11,np.nan,15]},index=[0,1,1,2])
</code></pre>
<p><strong>Example DF</strong></p>
<pre><code> ma_1 ma_2 ma_3
0 10.0 10.0 NaN
1 NaN 11.0 11.0
1 13.0 NaN NaN
2 15.0 15.0 15.0
</code></pre>
<p><strong>Desired output</strong> is a new column which joins the column names for non na values as per <code>col_names</code> example below.</p>
<pre><code>so_df["col_names"] = ["ma_1, ma_2","ma_2, ma_3","ma_1","ma_1, ma_2, ma_3"]
ma_1 ma_2 ma_3 col_names
0 10.0 10.0 NaN ma_1, ma_2
1 NaN 11.0 11.0 ma_2, ma_3
1 13.0 NaN NaN ma_1
2 15.0 15.0 15.0 ma_1, ma_2, ma_3
</code></pre>
|
<p>Try with <code>dot</code>: <code>df.notna()</code> gives a boolean mask, its matrix product with the column names (each suffixed with a comma) concatenates the names of the non-NaN columns per row, and <code>str[:-1]</code> strips the trailing comma:</p>
<pre><code>df['new'] = df.notna().dot(df.columns+',').str[:-1]
df
Out[77]:
ma_1 ma_2 ma_3 new
0 10.0 10.0 NaN ma_1,ma_2
1 NaN 11.0 11.0 ma_2,ma_3
1 13.0 NaN NaN ma_1
2 15.0 15.0 15.0 ma_1,ma_2,ma_3
</code></pre>
|
python|pandas
| 2
|
8,334
| 73,630,099
|
Docker run failing when mounting host dir inside a container
|
<p>I am trying to mount a directory from the host inside a container and at the same time run jupyter from that directory. What am I doing wrong here that makes docker complain that the file is not found?</p>
<p>docker run -it --rm -p 8888:8888 tensorflow/tensorflow:nightly-jupyter -v $HOME/mytensor:/tensor --name TensorFlow python:3.9 bash
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-v": executable file not found in $PATH: unknown.</p>
<p>I tried removing the version of python but still same problem. I searched extensively online and couldnt get an answer.</p>
<p>Basically i want to mount that directory which is is a git clone where I have tensor files. At the same time, I want to run jupyter notebook where I can see the files and run them. With so many issues with apple:m1 processor and tensorflow, i thought going the docker route would be better but am i not surprised :)</p>
<p>Appreciate the help</p>
|
<p>Docker run command syntax is</p>
<pre><code>docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
</code></pre>
<p>image name <code>tensorflow/tensorflow:nightly-jupyter</code> should be after options (<code>-v</code>, <code>-p</code> <code>--name</code> et.al.) and before the command.</p>
<pre><code>docker run -it --rm -p 8888:8888 -v $HOME/mytensor:/tensor --name TensorFlow tensorflow/tensorflow:nightly-jupyter bash
</code></pre>
|
python|docker|tensorflow|jupyter-notebook|mount
| 0
|
8,335
| 73,633,844
|
Model calculating accurcy of only class 0 in class-wise accuracy evaluation
|
<p>My code is given below, I am only getting the accuracy of the class 0 instead of all the classes.
The output is
Epoch 1/2
58/58 [==============================] - 424s 7s/step - loss: 4.7356 - accuracy: 0.5317 - acc_1_0: 0.9655 - acc_1_1: 0.0000e+00 - acc_1_2: 0.0000e+00 - acc_1_3: 0.0000e+00 - acc_1_4: 0.0000e+00 - recall_1_0: 0.2252 - recall_1_1: 0.0000e+00 - prec_1_0: 0.9655 - prec_1_1: 0.0000e+00 - val_loss: 0.9262 - val_accuracy: 0.6447 - val_acc_1_0: 1.0000 - val_acc_1_1: 0.0000e+00 - val_acc_1_2: 0.0000e+00 - val_acc_1_3: 0.0000e+00 - val_acc_1_4: 0.0000e+00 - val_recall_1_0: 0.2062 - val_recall_1_1: 0.0000e+00 - val_prec_1_0: 1.0000 - val_prec_1_1: 0.0000e+00</p>
<p>The code on <a href="https://tykimos.github.io/2017/09/24/Custom_Metric/" rel="nofollow noreferrer">https://tykimos.github.io/2017/09/24/Custom_Metric/</a> is working perfectly fine. I think the issue is with the preprocessing function used to process the data, because the example at that URL uses the MNIST dataset and extracts the data as x_train, y_train, while with tf.keras.preprocessing.image_dataset_from_directory the dataset as a whole is fed to fit.
Please help to resolve this issue. Thanks</p>
<pre><code>train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir, labels ='inferred', label_mode='int',
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir, labels ='inferred', label_mode='int',
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
resnet_model = Sequential()
pretrained_model= tf.keras.applications.VGG16(include_top=False,
input_shape=(180,180,3),
pooling='avg',classes=5,
weights='imagenet',
#classifier_activation= 'sigmoid')
classifier_activation= 'softmax')

for layer in pretrained_model.layers:
    layer.trainable=False

resnet_model.add(pretrained_model)
resnet_model.add(Flatten())
resnet_model.add(Dense(512, activation='relu'))
resnet_model.add(Dense(5, activation='softmax'))
interesting_class_id = 0 # Choose the class of interest
from keras import backend as K
def single_class_accuracy(interesting_class_id):
def acc1(y_true, y_pred):
class_id_true = K.argmax(y_true, axis=-1)
class_id_preds = K.argmax(y_pred, axis=-1)
accuracy_mask = K.cast(K.equal(class_id_preds, interesting_class_id), 'int32')
        class_acc_tensor = K.cast(K.equal(class_id_true, class_id_preds), 'int32') * accuracy_mask
        class_acc = K.cast(K.sum(class_acc_tensor), 'float32') / K.cast(K.maximum(K.sum(accuracy_mask), 1), 'float32')
return class_acc
acc1.__name__ = 'acc_1_{}'.format(interesting_class_id)
return acc1
def single_class_recall(interesting_class_id):
def recall(y_true, y_pred):
class_id_true = K.argmax(y_true, axis=-1)
class_id_pred = K.argmax(y_pred, axis=-1)
recall_mask = K.cast(K.equal(class_id_true, interesting_class_id), 'int32')
        class_recall_tensor = K.cast(K.equal(class_id_true, class_id_pred), 'int32') * recall_mask
        class_recall = K.cast(K.sum(class_recall_tensor), 'float32') / K.cast(K.maximum(K.sum(recall_mask), 1), 'float32')
        return class_recall
    recall.__name__ = 'recall_1_{}'.format(interesting_class_id)
    return recall

def single_class_precision(interesting_class_id):
def prec(y_true, y_pred):
class_id_true = K.argmax(y_true, axis=-1)
class_id_pred = K.argmax(y_pred, axis=-1)
precision_mask = K.cast(K.equal(class_id_pred, interesting_class_id), 'int32')
        class_prec_tensor = K.cast(K.equal(class_id_true, class_id_pred), 'int32') * precision_mask
        class_prec = K.cast(K.sum(class_prec_tensor), 'float32') / K.cast(K.maximum(K.sum(precision_mask), 1), 'float32')
return class_prec
prec.__name__ = 'prec_1_{}'.format(interesting_class_id)
return prec
resnet_model.compile(optimizer=Adam(lr=0.01),loss='sparse_categorical_crossentropy',
metrics=[
'accuracy',
single_class_accuracy(0),
single_class_accuracy(1),
single_class_accuracy(2),
single_class_accuracy(3),
single_class_accuracy(4),
single_class_recall(0),
single_class_recall(1),
single_class_precision(0),
single_class_precision(1)
])
history = resnet_model.fit(train_ds, validation_data=val_ds, epochs=2)
</code></pre>
|
<p><em>My code is given below, I am only getting the accuracy of the class 0 instead of all the classes</em>... probably because accuracy is meant to do exactly that. What you are probably looking for are precision and recall.</p>
|
python|tensorflow|keras
| 1
|
8,336
| 73,702,175
|
How to save multiple dataframe using one variable in a for loop
|
<pre><code>import numpy as np
count = np.arange(0,1849)
for i in range(0,6):
for j in range (0,6):
for k in range (0,4):
for l in range (0,10):
for m in count:
case = data[(data["CURRENT_ENERGY_RATING_Code"] == i)&(data["PROPERTY_TYPE"] == j)&(data["BUILT_FORM"] == k)&(data["CONSTRUCTION_AGE_BAND"] == l)]
case[m] = pd.DataFrame()
</code></pre>
<p>I wanted to save multiple data frames within the case variable with a proper number like case1, case2, etc.
So I can view each data frame.</p>
|
<p>You could create a dictionary with keys in the format <code>i-j-k-l-m</code> iterating through 0,1,2,etc; and values as the relevant dataframes. For example:</p>
<pre><code>dic = {}
count = np.arange(0,1849)
for i in range(0,6):
for j in range (0,6):
for k in range (0,4):
for l in range (0,10):
for m in count:
key = str(i) + '-' + str(j) + '-' + str(k) + '-' + str(l) + '-' + str(m)
dic[key] = data[(data["CURRENT_ENERGY_RATING_Code"] == i)&(data["PROPERTY_TYPE"] == j)&(data["BUILT_FORM"] == k)&(data["CONSTRUCTION_AGE_BAND"] == l)]
print(dic)
#note resulting `dic` is a huge dictionary!
</code></pre>
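<p>Any individual frame can then be pulled back out by its key, for example (the key here is chosen just for illustration):</p>
<pre><code>case_0 = dic['0-0-0-0-0']
print(case_0.head())
</code></pre>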
|
python|pandas|analysis
| 0
|
8,337
| 71,346,565
|
pandas to_json exclude the groupby keys
|
<p>How do we exclude the grouped by key from the <code>to_json</code> method ?</p>
<pre><code>import pandas as pd
students_df = pd.DataFrame(
[
["Jay", 16, "Soccer"],
["Jack", 19, "FootBall"],
["Dorsey", 19, "Dining"],
["Mark", 18, "Swimming"],
],
columns=["Name", "Age", "Sport"],
)
students_df.groupby("Name").apply(lambda x: x.to_json(orient="records")).reset_index(
name="students_json"
)
</code></pre>
<p>Current output:</p>
<pre><code> Name students_json
0 Dorsey [{"Name":"Dorsey","Age":19,"Sport":"Dining"}]
1 Jack [{"Name":"Jack","Age":19,"Sport":"FootBall"}]
2 Jay [{"Name":"Jay","Age":16,"Sport":"Soccer"}]
3 Mark [{"Name":"Mark","Age":18,"Sport":"Swimming"}]
</code></pre>
<p>I want to exclude the grouped by key from the resulting json.</p>
<p>There could be multiple keys that I group on, not just Name.
The expected output should be:</p>
<pre><code> Name students_json
0 Dorsey [{"Age":19,"Sport":"Dining"}]
1 Jack [{"Age":19,"Sport":"FootBall"}]
2 Jay [{"Age":16,"Sport":"Soccer"}]
3 Mark [{"Age":18,"Sport":"Swimming"}]
</code></pre>
|
<p>You could <code>drop</code> it:</p>
<pre><code>out = students_df.groupby('Name').apply(lambda x: x.drop(columns='Name').to_json(orient="records"))
</code></pre>
<p>Output:</p>
<pre><code>Name
Dorsey [{"Age":19,"Sport":"Dining"}]
Jack [{"Age":19,"Sport":"FootBall"}]
Jay [{"Age":16,"Sport":"Soccer"}]
Mark [{"Age":18,"Sport":"Swimming"}]
dtype: object
</code></pre>
|
python|pandas|dataframe|pandas-groupby
| 2
|
8,338
| 71,227,514
|
Skipping empty values python apply
|
<p>I need to apply the right function to a column ('Example') in a dataframe. The following code works perfectly if the column has no empty "cells". However, when it comes to columns with some empty cells I get "TypeError: 'float' object is not subscriptable".</p>
<pre><code>def right(x):
return x[-70:]
df1['Example'] = df1['Example'].apply(right)
</code></pre>
<p>For empty "cells" I would actually need to keep them empty.
Any help much appreciated!</p>
|
<p>I'd modify your function to skip the NaN cells (which come through as floats, and that is why the slice fails on them):</p>
<pre><code>import pandas as pd

def right(x):
    if pd.isna(x):      # empty / NaN cell - keep it as it is
        return x
    return x[-70:]
</code></pre>
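<p>If the column only holds strings and NaN, a vectorized alternative that skips NaN automatically is the <code>.str</code> accessor (a sketch, assuming that is the case):</p>
<pre><code>df1['Example'] = df1['Example'].str[-70:]  # NaN cells stay NaN
</code></pre>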
|
python|pandas|dataframe
| 0
|
8,339
| 52,284,482
|
How can I divide sub-selections of a data frame by another data frame using minimal memory usage in python?
|
<p>I have a dataframe with many columns and I want to divide it by another data frame at regular column intervals with minimal memory usage. </p>
<p>For example: </p>
<pre><code>df1 = pd.DataFrame([[1,2,3,4,5,6,7,8,9,10], [10,9,8,7,6,5,4,3,2,1], [2,4,3,1,6,5,7,8,9,4]])
df2 = pd.DataFrame([[1,3],[7,6],[9,3]])
</code></pre>
<p>I want to divide df1 by df2 multiple times at every two column interval. The result I would want is: </p>
<pre><code>finalDf = pd.DataFrame([[1/1,2/3,3/1,4/3,5/1,6/3,7/1,8/3,9/1,10/3], [10/7,9/6,8/7,7/6,6/7,5/6,4/7,3/6,2/7,1/6], [2/9,4/3,3/9,1/3,6/9,5/3,7/9,8/3,9/9,4/3]])
</code></pre>
<p>I think the code would look something like this: </p>
<pre><code>df3 = df1.iloc[:, 0:2].divide(df2.iloc[:,:].values, axis = 'rows')
df4 = df1.iloc[:, 2:4].divide(df2.iloc[:,:].values, axis = 'rows')
df5 = df1.iloc[:, 4:6].divide(df2.iloc[:,:].values, axis = 'rows')
df6 = df1.iloc[:, 6:8].divide(df2.iloc[:,:].values, axis = 'rows')
finalDf = pd.concat([df3, df4, df5, df6], axis=1)
</code></pre>
<p>The only way I can think to implement something like that would be to put it in a loop, but I feel like that is not the smart way to do it. Is there a way to vectorize the solution? </p>
|
<h3>Using <code>pd.concat</code>:</h3>
<pre><code>res = pd.concat([df2]*5, axis=1)
res.columns = df1.columns
df1/res
</code></pre>
<p></p>
<pre><code> 0 1 2 3 ... 6 7 8 9
0 1.000000 0.666667 3.000000 1.333333 ... 7.000000 2.666667 9.000000 3.333333
1 1.428571 1.500000 1.142857 1.166667 ... 0.571429 0.500000 0.285714 0.166667
2 0.222222 1.333333 0.333333 0.333333 ... 0.777778 2.666667 1.000000 1.333333
</code></pre>
<p>To generalize:</p>
<pre><code>res = pd.concat([df2]*(df1.shape[1]//df2.shape[1]), axis=1)
</code></pre>
|
python|pandas|dataframe
| 2
|
8,340
| 52,009,579
|
qr decomposition of a matrix
|
<p>I want to compute the qr decomposition of a matrix
Here is my code</p>
<pre><code>const a = tf.tensor([1, 2, 3, 4], [2, 2]);
a.print()
const [b, c] = tf.qr(a)
b.print()
</code></pre>
<p>But it is throwing the following error</p>
<blockquote>
<p>tf.qr is not a function or its return value is not iterable</p>
</blockquote>
|
<p>The documentation is not clear about <a href="https://js.tensorflow.org/api/0.12.5/#qr" rel="nofollow noreferrer">tf.qr</a> and <a href="https://js.tensorflow.org/api/0.12.5/#gramSchmidt" rel="nofollow noreferrer">tf.gramSchmidt</a>. You need to use <code>tf.linalg.qr</code> and <code>tf.linalg.gramSchmidt</code> instead as you can see in the unit test code <a href="https://github.com/tensorflow/tfjs-core/blob/v0.12.11/src/ops/linalg_ops_test.ts#L100" rel="nofollow noreferrer">here</a></p>
<pre><code>const [b, c] = tf.linalg.qr(a)
</code></pre>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>const a = tf.tensor([1, 2, 3, 4], [2, 2]);
a.print()
const [b, c] = tf.linalg.qr(a)
b.print()</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><html>
<head>
<!-- Load TensorFlow.js -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/tensorflow/0.12.4/tf.js"> </script>
</head>
<body>
</body>
</html></code></pre>
</div>
</div>
</p>
|
tensorflow.js
| 1
|
8,341
| 52,095,303
|
Length of a datetimeindex in python
|
<p>I just can't find the answer, and I know pandas eats problems like this for dessert.</p>
<p>I have a <code>datetime index</code> and want to know its length, in years:</p>
<pre><code>idx=pd.date_range('2011-07-03', '2015-07-10')
</code></pre>
<p>expected output:</p>
<pre><code>4.0191 years (4 years and 7 days)
</code></pre>
<p>If I do:
<code>idx[0]-idx[-1]</code> I get an output in <code>days</code>, but I'd like it in years</p>
<p>Sorry: couldn't locate it in the <code>pandas</code> docs.</p>
|
<p>You can convert the timedelta to days and then divide by <code>365.25</code> if <code>100%</code> accuracy is not necessary:</p>
<pre><code>idx=pd.date_range('2011-07-03', '2015-07-10')
print ((idx[-1]-idx[0]).days / 365.25)
4.0191649555099245
</code></pre>
<p>But if need <code>year</code>s with <code>day</code>s:</p>
<pre><code>from dateutil.relativedelta import relativedelta
r = relativedelta(idx[-1], idx[0])
print('{} years {} days'.format(r.years, r.days))
4 years 7 days
</code></pre>
|
python|pandas|datetime|timedelta|datetimeindex
| 4
|
8,342
| 72,623,188
|
I want to split a single dataframe column into multiple oclumns
|
<p>I have a dataframe with 1 column and 5776 rows. I want to move every 76 rows into a new column so I am left with 76 columns and 76 rows. How do I do this?<a href="https://i.stack.imgur.com/RpWAa.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>This might be the transpose of the matrix you want; if so, you can do <code>wide_df = wide_df.T</code> after reshaping.</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame({'column1':np.random.rand(5776)})
wide_df = pd.DataFrame(df['column1'].values.reshape((76,76)))
print(wide_df)
</code></pre>
|
pandas|dataframe
| 3
|
8,343
| 72,621,164
|
Tkinter Pandas Python App - Not Getting Value and Performing Calculation
|
<p>I'm fairly new to Python/Pandas/Tkinter and I'm attempting to build a tkinter application that can receive numerical inputs and then perform a WAC (weighted average coupon rate) calculation based on the given input.</p>
<p><a href="https://i.stack.imgur.com/zHt87.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zHt87.png" alt="enter image description here" /></a></p>
<p>I bring in a dataset via pandas dataframe with three columns - Original Amount, Loan Rate and Special Project Code. I then create a new column, Rate Weight, that is the product of the Original Amount and Loan Rate columns.</p>
<p>The purpose of the app is to receive input from a user for a certain code value that changes the row values in the Loan Rate column based on what interest rate input they give, then a new WAC calculation is performed and shown. So far I only have one input box for one code, code 3231 (not shown above in the sample image).</p>
<p>When I run the app and input a numerical value in the box and click 'Calculate WAC' it returns the normal WAC value of the dataset, but not a WAC value performed with the new interest rate for code 3231. It seems to me I'm not capturing the value for the rate and properly and replacing the old with the new.</p>
<pre><code>from tkinter import *
from random import randint
import pandas as pd
root = Tk()
root.title("Financing Grid")
root.geometry("600x600")
global df
df = pd.read_excel("C:\\Users\\mhorv656\\Documents\\Financing Grid v4.xlsx", "Data",
engine = 'openpyxl', usecols = ['ORIG_AMT', 'LOAN_RATE', 'SPECIAL_PROJ_CD'])
df['ORIG_AMT'] = df['ORIG_AMT'].fillna(0).astype(int)
df["Rate_WT"] = df["ORIG_AMT"] * df["LOAN_RATE"]
df['SPECIAL_PROJ_CD'] = df['SPECIAL_PROJ_CD'].fillna(0).astype(int)
# Codes
def c3231_code():
c3231_frame.pack(fill = "both", expand = 1)
# Creating Input Box
global c3231_input
#c3231_input = IntVar()
c3231_input = Entry(c3231_frame)
c3231_input.pack(pady = 5)
# Creating Answer Button
c3231_button = Button(c3231_frame, text = "Calculate WAC", command = c3231_wac)
c3231_button.pack(pady = 5)
# Creating Correct or Incorrect Message
global c3231_label
c3231_label = Label(c3231_frame, text = "Enter Rate Above")
c3231_label.pack(pady = 5)
def c3231_wac():
df.loc[df['SPECIAL_PROJ_CD'] == '3231', 'LOAN_RATE'] = int(c3231_input.get())
WAC = df["Rate_WT"].sum() / df["ORIG_AMT"].sum()
WAC_label = Label(c3231_frame, text = WAC)
WAC_label.pack(pady = 5)
# Clearing the answer box
c3231_input.delete(0, 'end')
# Creating a Hide Frame Function
def hide_menu_frames():
# Destroying the children widgets in each frame
for widget in c3231_frame.winfo_children():
widget.destroy()
# Hiding all frames
c3231_frame_frame.pack_forget()
start_frame.pack_forget()
# Creating Frames
c3231_frame = Frame(root, width = 400, height = 400)
start_frame = Frame(root, width = 400, height = 400)
# Creating Start Screen
def home():
start_frame.pack(fill = "both", expand = 1)
start_label = Label(start_frame, text = "Performing WAC Calculation", font = ("Helvetica", 18)).pack(pady = 40)
# Creating buttons to codes
c3231_button = Button(start_frame, text = "Enter Rate for 3231", command = c3231_code).pack(pady = 5)
# Defining a Main Menu
my_menu = Menu(root)
root.config(menu = my_menu)
# Creating Menu Items
app_menu = Menu(my_menu)
my_menu.add_cascade(label = "Finance Grid", menu = app_menu)
app_menu.add_separator()
app_menu.add_command(label = "Exit", command = root.quit)
# Showing the Start Screen
home()
root.mainloop()
</code></pre>
|
<ol>
<li><p>In the c3231_wac() function you are comparing an integer column with the string <code>'3231'</code> (in <code>df['SPECIAL_PROJ_CD'] == '3231'</code>); change it to the integer <code>3231</code> (see the sketch after this list).</p>
</li>
<li><p>After the replacement, you must recalculate the value of the "Rate_WT" column and only then calculate the WAC.</p>
</li>
</ol>
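<p>Put together, a minimal sketch of the corrected function (assuming the rest of the script is unchanged):</p>
<pre><code>def c3231_wac():
    # compare against the integer code, not the string '3231'
    df.loc[df['SPECIAL_PROJ_CD'] == 3231, 'LOAN_RATE'] = int(c3231_input.get())
    # recompute the weights with the new rates before taking the WAC
    df["Rate_WT"] = df["ORIG_AMT"] * df["LOAN_RATE"]
    WAC = df["Rate_WT"].sum() / df["ORIG_AMT"].sum()
    WAC_label = Label(c3231_frame, text = WAC)
    WAC_label.pack(pady = 5)
    c3231_input.delete(0, 'end')
</code></pre>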
|
python|pandas|tkinter
| 1
|
8,344
| 59,568,216
|
Keras Add_loss throwing Operator Not allowed in Graph error
|
<p>I created a basic loss function that takes the CDF (cumsum of pdf) and does a mean_squared error between the two.</p>
<p>Here is the code:</p>
<pre><code>def tuner_loss(y_true, y_pred):
y_actual=K.cumsum(y_true)
y_pred=K.cumsum(y_pred)
return K.mean(K.square(y_actual-y_pred))
</code></pre>
<p>I tried <code>model.add_loss(loss_model)</code> with <code>loss="None"</code> then tried
<code>loss=loss_model</code>, then commented out the <code>model.add_loss</code> and <code>loss_model</code> and directly put the <code>tuner_loss</code> function into loss. </p>
<p>None of the them worked.</p>
<p>Here is that code:</p>
<pre><code>input_layer= Input(179,)
y_actual=Input(199,)
x=Dense(32, activation='relu')(input_layer)
x=Dense(64, activation='relu')(x)
x=Dense(32, activation='relu')(x)
x=Dropout(0.2)(x)
x=Dense(64)(x)
output_layer = Dense(199, activation='softmax')(x)
model=Model(inputs=[input_layer, y_actual], outputs=output_layer)
#loss_model= tuner_loss(y_actual, output_layer)
#model.add_loss(loss_model)
model.compile(optimizer='adam', loss= tuner_loss(y_actual,output_layer), metrics=['accuracy'])
</code></pre>
<p>Any help would be greatly appreciated, I figure it has to be
something with how I'm manipulating the tf.tensors in the <code>tuner_loss</code>.<br>
Here is error message I got running last code:</p>
<blockquote>
<p>"OperatorNotAllowedInGraphError using a <code>tf.Tensor</code> as a Python <code>bool</code>
is not allowed in Graph execution. Use Eager execution or decorate
this function with @tf.function.</p>
</blockquote>
<p>I tried the decorator on the tuner, that also threw an error, so that didn't work. </p>
<p>Neither did enabling eager execution...</p>
|
<p>Why not use the standard?</p>
<pre><code>model = Model(input_layer, output_layer)
model.compile(loss = tuner_loss, ...)
model.fit(input_data, output_data, ...)
</code></pre>
|
python|tensorflow|keras|neural-network
| 0
|
8,345
| 59,877,664
|
Return column names for 3 highest values in rows
|
<p>I'm trying to come up with a way to return the column names for the 3 highest values in each row of the table below. So far I've been able to return the highest value using idxmax but I haven't been able to figure out how to get the 2nd and 3rd highest. </p>
<pre><code> Clust Stat1 Stat2 Stat3 Stat4 Stat5 Stat6
0 9 0.00 0.15 0.06 0.11 0.23 0.01
1 4 0.00 0.25 0.04 0.10 0.10 0.00
2 11 0.00 0.34 0.00 0.09 0.24 0.00
3 12 0.00 0.16 0.00 0.11 0.00 0.00
4 0 0.00 0.35 0.00 0.04 0.02 0.00
5 17 0.01 0.21 0.02 0.18 0.27 0.01
</code></pre>
<p>Expected output:</p>
<pre><code> Clust Stat1 Stat2 Stat3 Stat4 Stat5 Stat6 TopThree
0 9 0.00 0.15 0.06 0.11 0.23 0.01 [Stat5,Stat2,Stat4]
1 4 0.00 0.25 0.04 0.10 0.10 0.00 [Stat2,Stat4,Stat5]
2 11 0.00 0.34 0.00 0.09 0.24 0.00 [Stat2,Stat5,Stat4]
3 12 0.00 0.16 0.00 0.19 0.00 0.01 [Stat4,Stat2,Stat6]
4 0 0.00 0.35 0.00 0.04 0.02 0.00 [Stat2,Stat4,Stat5]
5 17 0.01 0.21 0.02 0.18 0.27 0.01 [Stat5,Stat2,Stat4]
</code></pre>
<p>If anyone has ideas on how to do this I'd appreciate it. </p>
|
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html" rel="nofollow noreferrer"><code>numpy.argsort</code></a> for positions of sorted values and filter all columns without first:</p>
<pre><code>a = df.iloc[:, 1:].to_numpy()
df['TopThree'] = df.columns[1:].to_numpy()[np.argsort(-a, axis=1)[:, :3]].tolist()
print (df)
Clust Stat1 Stat2 Stat3 Stat4 Stat5 Stat6 TopThree
0 9 0.00 0.15 0.06 0.11 0.23 0.01 [Stat5, Stat2, Stat4]
1 4 0.00 0.25 0.04 0.10 0.10 0.00 [Stat2, Stat4, Stat5]
2 11 0.00 0.34 0.00 0.09 0.24 0.00 [Stat2, Stat5, Stat4]
3 12 0.00 0.16 0.00 0.11 0.00 0.00 [Stat2, Stat4, Stat1]
4 0 0.00 0.35 0.00 0.04 0.02 0.00 [Stat2, Stat4, Stat5]
5 17 0.01 0.21 0.02 0.18 0.27 0.01 [Stat5, Stat2, Stat4]
</code></pre>
<p>If performance is not important:</p>
<pre><code>df['TopThree'] = df.iloc[:, 1:].apply(lambda x: x.nlargest(3).index.tolist(), axis=1)
</code></pre>
|
python|pandas
| 4
|
8,346
| 59,805,592
|
Why does renewing an optimizer give a bad result?
|
<p>I tried to change my optimizer, but first of all, I want to check whether the following two codes give the same results:</p>
<pre><code>optimizer = optim.Adam(params, lr)
for epoch in range(500):
....
optimizer.zero_grad()
loss.backward()
optimizer.step()
for epoch in range(500):
....
optimizer.zero_grad()
loss.backward()
optimizer.step()
</code></pre>
<p>If I insert the same optimizer between 'for loops',</p>
<pre><code>optimizer = optim.Adam(params, lr)
for epoch in range(500):
....
optimizer.zero_grad()
loss.backward()
optimizer.step()
optimizer = optim.Adam(params, lr)
for epoch in range(500):
....
optimizer.zero_grad()
loss.backward()
optimizer.step()
</code></pre>
<p>The result becomes bad. Why does this happen? Doesn't the optimizer just receive gradients from the loss and perform gradient-descent-like steps?</p>
|
<p>Different optimizers may have some "memory".<br>
For instance, <a href="https://pytorch.org/docs/stable/optim.html#torch.optim.Adam" rel="nofollow noreferrer"><code>Adam</code></a>'s update rule tracks the first and second moments of the gradients of each parameter and uses them to calculate the step size for each parameter.<br>
Therefore, if you re-initialize your optimizer you erase this information and consequently make the optimizer "less informed", resulting in sub-optimal choices for step sizes.</p>
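<p>A small sketch of what gets lost (the parameter shapes and learning rate here are just placeholders):</p>
<pre><code>import torch
import torch.optim as optim

params = [torch.nn.Parameter(torch.randn(2, 2))]
opt = optim.Adam(params, lr=1e-3)

loss = (params[0] ** 2).sum()
loss.backward()
opt.step()
print(opt.state_dict()['state'])   # non-empty: per-parameter exp_avg / exp_avg_sq

opt = optim.Adam(params, lr=1e-3)  # "renewing" the optimizer
print(opt.state_dict()['state'])   # empty again, the moments are gone

# if you really need a new optimizer object, carry the state over instead:
# new_opt.load_state_dict(old_opt.state_dict())
</code></pre>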
|
python|optimization|pytorch
| 3
|
8,347
| 59,732,495
|
running keras in jupyter notebook, windows 10,64 bit system
|
<p>running keras gives me following error:</p>
<pre><code>Using TensorFlow backend.
ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.
ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.
Traceback (most recent call last):
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\IPython\core\interactiveshell.py", line 2034, in showtraceback
stb = value._render_traceback_()
AttributeError: 'ImportError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\IPython\core\interactiveshell.py", line 3242, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\IPython\core\interactiveshell.py", line 3336, in run_code
self.showtraceback(running_compiled_code=True)
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\IPython\core\interactiveshell.py", line 2037, in showtraceback
value, tb, tb_offset=tb_offset)
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\IPython\core\ultratb.py", line 1418, in structured_traceback
self, etype, value, tb, tb_offset, number_of_lines_of_context)
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\IPython\core\ultratb.py", line 1318, in structured_traceback
self, etype, value, tb, tb_offset, number_of_lines_of_context
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\IPython\core\ultratb.py", line 1186, in structured_traceback
formatted_exceptions += self.prepare_chained_exception_message(evalue.__cause__)
TypeError: must be str, not list
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\IPython\core\interactiveshell.py", line 2034, in showtraceback
stb = value._render_traceback_()
AttributeError: 'TypeError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "c:\users\chetan garg\appdata\local\programs\python\python36\lib\imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py in <module>
57
---> 58 from tensorflow.python.pywrap_tensorflow_internal import *
59 from tensorflow.python.pywrap_tensorflow_internal import __version__
c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py in <module>
27 return _mod
---> 28 _pywrap_tensorflow_internal = swig_import_helper()
29 del swig_import_helper
c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py in swig_import_helper()
23 try:
---> 24 _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
25 finally:
c:\users\chetan garg\appdata\local\programs\python\python36\lib\imp.py in load_module(name, file, filename, details)
242 else:
--> 243 return load_dynamic(name, filename, file)
244 elif type_ == PKG_DIRECTORY:
c:\users\chetan garg\appdata\local\programs\python\python36\lib\imp.py in load_dynamic(name, path, file)
342 name=name, loader=loader, origin=path)
--> 343 return _load(spec)
344
ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\IPython\core\interactiveshell.py in showtraceback(self, exc_tuple, filename, tb_offset, exception_only, running_compiled_code)
2033 # in the engines. This should return a list of strings.
-> 2034 stb = value._render_traceback_()
2035 except Exception:
AttributeError: 'ImportError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\IPython\core\interactiveshell.py in run_code(self, code_obj, result, async_)
3334 if result is not None:
3335 result.error_in_exec = sys.exc_info()[1]
-> 3336 self.showtraceback(running_compiled_code=True)
3337 else:
3338 outflag = False
c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\IPython\core\interactiveshell.py in showtraceback(self, exc_tuple, filename, tb_offset, exception_only, running_compiled_code)
2035 except Exception:
2036 stb = self.InteractiveTB.structured_traceback(etype,
-> 2037 value, tb, tb_offset=tb_offset)
2038
2039 self._showtraceback(etype, value, stb)
c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\IPython\core\ultratb.py in structured_traceback(self, etype, value, tb, tb_offset, number_of_lines_of_context)
1416 self.tb = tb
1417 return FormattedTB.structured_traceback(
-> 1418 self, etype, value, tb, tb_offset, number_of_lines_of_context)
1419
1420
c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\IPython\core\ultratb.py in structured_traceback(self, etype, value, tb, tb_offset, number_of_lines_of_context)
1316 # Verbose modes need a full traceback
1317 return VerboseTB.structured_traceback(
-> 1318 self, etype, value, tb, tb_offset, number_of_lines_of_context
1319 )
1320 elif mode == 'Minimal':
c:\users\chetan garg\appdata\local\programs\python\python36\lib\site-packages\IPython\core\ultratb.py in structured_traceback(self, etype, evalue, etb, tb_offset, number_of_lines_of_context)
1184 exception = self.get_parts_of_chained_exception(evalue)
1185 if exception:
-> 1186 formatted_exceptions += self.prepare_chained_exception_message(evalue.__cause__)
1187 etype, evalue, etb = exception
1188 else:
TypeError: must be str, not list
n_pts = 100
</code></pre>
|
<p>Try uninstalling and reinstalling <code>TensorFlow</code>. If you use conda:</p>
<pre><code>conda uninstall tensorflow
conda uninstall keras
conda install tensorflow
conda install keras
</code></pre>
<p>Next time, it is better to provide the code you ran, not just the error. That way, people can help you better.</p>
|
python-3.x|tensorflow|keras|jupyter-notebook|command-prompt
| 1
|
8,348
| 61,711,534
|
Do operation and add it to new column depending on pattern in pandas
|
<p>I have a dataframe such as </p>
<pre><code>COL1 COL2 COL3 COL4
SEQ_1:HDHD_DIDH(-):DUUD_37 1 40 80000
SEQ_2:HDHD_DIDH(-):DUUD_35 90 456 766
QTTSS:XGGGD(+)JJDDH_0 4 990 3556
QTTSS:XGGGD(-)JJDDH_099 6 7789 90000
HYYH:LHGGH(+)FTT_H 667 88990 150000
</code></pre>
<p>and I would like to add 2 new columns = <code>COL2bis</code> <code>COL3bis</code></p>
<p>when there is a <code>(+)</code> in <code>COL1</code> the <code>COL2bis</code> <code>COL3bis</code> take the same values as <code>COL2</code> <code>COL3</code> BUT </p>
<p>when there is a <code>(-)</code> in <code>COL1</code>:</p>
<pre><code>COL2bis = COL4 - COL3
COL3bis = COL4 - COL2
</code></pre>
<p>here the output hsould be </p>
<pre><code>COL1 COL2 COL3 COL4 COL2bis COL3bis
SEQ_1:HDHD_DIDH(-):DUUD_37 1 40 80000 79960 79999
SEQ_2:HDHD_DIDH(-):DUUD_35 90 456 766 310 676
QTTSS:XGGGD(+)JJDDH_0 4 990 3556 4 990
QTTSS:XGGGD(-)JJDDH_099 6 7789 90000 82211 89994
HYYH:LHGGH(+)FTT_H 667 88990 150000 667 88990
</code></pre>
|
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a>:</p>
<pre><code>In [56]: import numpy as np
In [57]: df['COL2bis'] = np.where(df['COL1'].str.contains('-'), df['COL4'] - df['COL3'], df['COL2'])
In [59]: df['COL3bis'] = np.where(df['COL1'].str.contains('-'), df['COL4'] - df['COL2'], df['COL3'])
In [60]: df
Out[60]:
COL1 COL2 COL3 COL4 COL2bis COL3bis
0 SEQ_1:HDHD_DIDH(-):DUUD_37 1 40 80000 79960 79999
1 SEQ_2:HDHD_DIDH(-):DUUD_35 90 456 766 310 676
2 QTTSS:XGGGD(+)JJDDH_0 4 990 3556 4 990
3 QTTSS:XGGGD(-)JJDDH_099 6 7789 90000 82211 89994
4 HYYH:LHGGH(+)FTT_H 667 88990 150000 667 88990
</code></pre>
|
python|pandas
| 2
|
8,349
| 61,849,079
|
Pandas: how to drop rows if contains more that 2 entries?
|
<p>I have a dataframe like the following</p>
<pre><code>df
entry
0 (5, 4)
1 (4, 2, 1)
2 (0, 1)
3 (2, 7)
4 (9, 4, 3)
</code></pre>
<p>I would like to keep only the <code>entry</code> that contains two values</p>
<pre><code>df
entry
0 (5, 4)
1 (0, 1)
2 (2, 7)
</code></pre>
|
<p>If there are tuples use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.len.html" rel="nofollow noreferrer"><code>Series.str.len</code></a> for lengths and compare by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.le.html" rel="nofollow noreferrer"><code>Series.le</code></a> for <code><=</code> and filter in <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p>
<pre><code>df1 = df[df['entry'].str.len().le(2)]
print (df1)
entry
0 (5, 4)
2 (0, 1)
3 (2, 7)
</code></pre>
<p>If there are strings compare number of <code>,</code> and compare by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.lt.html" rel="nofollow noreferrer"><code>Series.lt</code></a> for <code><</code>:</p>
<pre><code>df2 = df[df['entry'].str.count(',').lt(2)]
print (df2)
entry
0 (5,4)
2 (0,1)
3 (2,7)
</code></pre>
|
python|pandas
| 2
|
8,350
| 58,125,373
|
Converting Data Types in Multiple Columns to Date
|
<p>I'm new to Python3 and I've been searching for a way to convert multiple string columns to dates using the to_datetime function but haven't had any luck. Currently, I have 4 columns that need to be converted from their originating data type to a date ("yyyy-mm-dd"). Below is a sample of the code I've written, while it works fine, I'd like to condense the total number of lines written to accomplish this.</p>
<pre><code>import pandas as pd
df = pd.read_csv("C:/Users/Desktop/test_data.csv")
print(df.dtypes)
df['Dob'] = pd.to_datetime(df['Dob'], format='%Y%m%d', errors='coerce')
df['Appt_Date'] = pd.to_datetime(df['Appt_Date'], format='%Y%m%d', errors='coerce')
df['Payment_Date'] = pd.to_datetime(df['Payment_Date'], format='%Y%m%d', errors='coerce')
df['Collection_Date'] = pd.to_datetime(df['Collection_Date'], format='%Y%m%d', errors='coerce')
print(df)
</code></pre>
<p>I would use astype if it wasn't critical that these dates must be in "yyyy-mm-dd" format (unless there is a way to do it with astype that I'm unaware of). Any help would be greatly appreciated.</p>
<p>Thank you!</p>
|
<p>Maybe use loop? </p>
<pre><code>date_cols = ['Dob','Appt_Date','Payment_Date','Collection_Date']
for col_name in date_cols:
df[col_name] = pd.to_datetime(df[col_name], format='%Y%m%d', errors='coerce')
</code></pre>
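<p>The same thing can also be written without an explicit loop by applying <code>pd.to_datetime</code> column-wise (a sketch using the same column list):</p>
<pre><code>date_cols = ['Dob', 'Appt_Date', 'Payment_Date', 'Collection_Date']
df[date_cols] = df[date_cols].apply(pd.to_datetime, format='%Y%m%d', errors='coerce')
</code></pre>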
|
python-3.x|pandas
| 1
|
8,351
| 58,117,576
|
How can I loop through a DataFrame and build a new one (with conditions)?
|
<p>So I created a DataFrame for my question:</p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd
import random
median = random.uniform(0, 1)
data = [[random.uniform(0, 1), random.uniform(0, 1)], [random.uniform(0, 1), random.uniform(0, 1)], [random.uniform(0, 1), random.uniform(0, 1)]]
df= pd.DataFrame(data, columns=["A","B"])
</code></pre>
<p>The DataFrame looks like this:</p>
<pre><code> A B
0 0.243965 0.363859
1 0.376634 0.968781
2 0.113388 0.555450
</code></pre>
<p>What I'm trying to do is to look if the value in column A row 0 is greater than the median which was defined earlier. If that's the case I want to apply a certain formula in column B row 0 and save the result in a new DataFrame. If that's not the case I want to apply on the value in column B row 0 a different Formula and also save it in a new DataFrame. I want to repeat this for every row.</p>
<p>Let's say the median equals to 0.3
The two formulas to make it simple are:</p>
<p>x -0.1 and X+0.1</p>
<p>I tried to solve it like this:</p>
<pre class="lang-py prettyprint-override"><code>for column in df[["A"]]:
if A > median:
new_Dataframe = B - 0.1
else:
new_Dataframe = B + 0.1
</code></pre>
<p>The result should look like this and it should be a new DataFrame:</p>
<pre><code> new_DataFrame
0 0.463859
1 0.868781
2 0.655450
</code></pre>
<p>I have problems accessing the wished cells and I have no clue how to solve this problem. Any help is appreciated. Also, the real DataFrame has a lot more rows so I can't just calculate it for every row as I did in my example.</p>
|
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer">np.where</a>:</p>
<pre><code>import numpy as np

new_dataframe = pd.DataFrame(np.where(df['A'] > median, df['B'] - 0.1, df['B'] + 0.1), columns=['new_dataframe'])
print(new_dataframe)
new_dataframe
0 0.463859
1 0.868781
2 0.655450
</code></pre>
|
python-3.x|pandas|function|loops|dataframe
| 0
|
8,352
| 54,993,345
|
datetime instead of str in read_excell with pandas
|
<p>I have a dataset saved in an xls file.<br>
In this dataset there are 4 columns that represent dates, in the format dd/mm/yyyy.<br>
My problem is that when I read it in Python using pandas and the function read_excel, all the columns are read as string except one, which is read as datetime64[ns], even if I specify dtype={column: str}. Why?</p>
|
<p>Dates in Excel are frequently stored as numbers, which allows you to do things like subtract them, even though they might be displayed as human-readable dates like dd/mm/yyyy. Pandas is handily taking those numbers and interpreting them as dates, which lets you deal with them more flexibly.</p>
<p>To turn them into strings, you can use the <code>converters</code> argument of <code>pd.read_excel</code> like so:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_excel(filename, converters={'name_of_date_column': lambda dt: dt.strftime('%d/%m/%Y')})
</code></pre>
<p>The <a href="https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior" rel="nofollow noreferrer">strftime</a> method lets you format dates however you like. Specifying a converter for your column lets you apply the function to the data as you read it in.</p>
|
string|pandas|datetime
| 1
|
8,353
| 54,854,821
|
Concatenate two data-frames (dask) with the same number of partitions but different number of columns
|
<p>I have two data-frames with the same number of partitions. I want to concatenate these data-frames (first partition with first partition, the second one with the second one, etc.) Therefore, the final data-frame has the initial number of partitions (<code>V</code>), the same number of rows in every partition (<code>n</code>) but a different number of columns (sum of the number of columns of data-frame one and data-frame two <code>(n+m)</code>). The first data frame (<code>A</code>) has a timestamp as an index but the second one (B) doesn't have this column. Both data-frames are sorted, and I only need to put these data-sets together without any change in every partition. Also, the index for <code>A</code> will be the index for the new data-frame.</p>
<pre><code>A: data-frame (V partitions) - every partition (nXn)
B: data-frame (V partitions) - every partition (nXm)
C (new data-frame): (V partitions) - every partition (nX(n+m))
</code></pre>
|
<p>This is not too hard:</p>
<pre><code>C = dd.from_delayed([dask.delayed(pd.concat)([a, b])
for a, b in zip(A.to_delayed(), B.to_delayed())],
meta=A._meta)
</code></pre>
<p>explanation</p>
<ul>
<li>get the partitions of each dataframe as delayed objects</li>
<li>pass pairs of these to <code>concat</code></li>
<li>form the concatenated pairs back into a dataframe</li>
<li>reuse meta, since the output has the same columns and index as the inputs</li>
</ul>
<p>(C is, of course, still lazy, the operation will only be triggered when you do something to it)</p>
|
python|pandas|dataframe|dask
| 2
|
8,354
| 54,991,008
|
AttributeError: 'Series' object has no attribute 'iterrows'
|
<pre><code>accounts = pd.read_csv('C:/*******/New_export.txt', sep=",", dtype={'number': object})
accounts.columns = ["Number", "F"]
for i, j in accounts["Number"].iterrows(): #i represents the row(index number), j is the number
if (str(j) == "27*******5"):
print(accounts["F"][i], accounts["Number"][i])
</code></pre>
<p>I get the following error:</p>
<blockquote>
<p>AttributeError: 'Series' object has no attribute 'iterrows'</p>
</blockquote>
<p>I don't quite understand the error since "accounts" is a pandas dataframe. Please assist.</p>
|
<p><code>accounts["Number"]</code> is a <em>Series</em> object, not a DataFrame. Either iterate over <code>accounts.iterrows()</code> and take the <code>Number</code> column from each row, or use the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.iteritems.html" rel="noreferrer"><code>Series.iteritems()</code> method</a>.</p>
<p>Iterating over the dataframe:</p>
<pre><code>for i, row in accounts.iterrows():
if str(row['Number']) == "27*******5":
print(row["F"], row["Number"])
</code></pre>
<p>or over <code>Series.iteritems()</code>:</p>
<pre><code>for i, number in accounts['Number'].iteritems():
if str(number) == "27*******5":
print(accounts["F"][i], number)
</code></pre>
|
python-3.x|pandas|loops
| 36
|
8,355
| 73,331,494
|
What is the difference betweend pandas.Series.items() and pandas.Series.iteritems()?
|
<p>As you can see the documentation pages for <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.items.html" rel="nofollow noreferrer">Series.items()</a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.iteritems.html" rel="nofollow noreferrer">Series.iteritems()</a> are identical. Is it a mistake? Is one method outdated but kept for backward compatibility?</p>
<p>Most importantly, which one should I use?</p>
|
<p><code>Series.iteritems()</code> just calls <code>Series.items()</code> under the hood, see source code below:</p>
<pre class="lang-py prettyprint-override"><code>def iteritems(self) -> Iterable[tuple[Hashable, Any]]:
return self.items()
</code></pre>
<p><a href="https://github.com/pandas-dev/pandas/blob/v1.4.3/pandas/core/series.py#L1691-L1693" rel="nofollow noreferrer">Pandas Source</a></p>
<p>As a result, you should be fine to use either, although it appears <code>Series.items()</code> is preferred.</p>
|
python|pandas|documentation
| 1
|
8,356
| 73,520,608
|
assign a time period to each value of a column in a Pandas dataframe
|
<p>I have a pandas dataframe with one of the columns being a date. I need to create another column which would be a start (or end, doesn't matter) of a 2W period containing this date. Ideally this would be generalizable to any offset used by <code>pd.Grouper</code>.</p>
<p>Knowing <code>pd.Grouper</code> I can come up with a hacky solution using <code>.groupby.transform()</code> - but I hope there is a nicer solution.</p>
<p>I tried using <code>pd.Series.dt.to_period()</code> but it does not accept offsets like <code>"2W"</code> and interprets them as a weekly offset. I could not find documentation of <code>dt.to_period()</code> that would explain this.</p>
<pre><code>df = pd.DataFrame({"date":["2022-01-03", "2022-01-10", "2022-01-20"], "data":[1,2,3]})
df["date"] = pd.to_datetime(df["date"])
# Trying to assign a 2W period to a new column
# This is ugly and hacky, and pd.Grouper is deprecated
# can this be made better?
df["2W_date_grouper"] = df.groupby(pd.Grouper(freq="2W", key="date"))["data"].transform(lambda x:[x.name]*len(x))
# using .dt.to_period() seems to ignore "2W" and interpret it as "weekly" - WHY???
df["2W_date_to_period"] = df["date"].dt.to_period("2W")
</code></pre>
<p><a href="https://i.stack.imgur.com/DCQe1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DCQe1.png" alt="output of the code above" /></a></p>
|
<p>Try to use the following:</p>
<pre><code>pd.Timedelta(days=14)
df["date"] = df["2W_date_grouper"] + pd.Timedelta(days=14)
</code></pre>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/timedeltas.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/timedeltas.html</a></p>
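<p>If the goal is simply to label each row with the start of its 2-week bin without a <code>groupby</code>, here is a sketch of one alternative, assuming the bins may be anchored at an arbitrary origin date rather than calendar weeks (the column name is illustrative):</p>
<pre><code>origin = df["date"].min().normalize()
df["2W_start"] = origin + pd.to_timedelta(
    (df["date"] - origin).dt.days // 14 * 14, unit="D"
)
</code></pre>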
|
python|pandas|dataframe|group-by
| -2
|
8,357
| 67,507,052
|
Pandas: sort according to a row
|
<p>I have a Dataframe like this (with labels on rows and columns):</p>
<pre><code> 0 1 2 3
0 1 1 0 0
1 0 1 1 0
2 1 0 1 0
-1 5 6 3 2
</code></pre>
<p>I would like to order the columns according to the last row (and then drop the row):</p>
<pre><code> 0 1 2 3
0 1 1 0 0
1 1 0 1 0
2 0 1 1 0
</code></pre>
|
<p>Try <code>np.argsort</code> to get the order, then <code>iloc</code> to rearrange columns and drop rows:</p>
<pre><code>df.iloc[:-1, np.argsort(-df.iloc[-1])]
</code></pre>
<p>Output:</p>
<pre><code> 1 0 2 3
0 1 1 0 0
1 1 0 1 0
2 0 1 1 0
</code></pre>
|
python|pandas
| 1
|
8,358
| 67,246,703
|
Tensorflow running on terminal but not with my code editor
|
<p>I have created a venv and installed the tenserflow via pip, checked the versions and everything seems fine. However, when I want to run my code (simply <strong>import <strong>tensor</strong>flow</strong>) it pops the following error.</p>
<pre><code>**ModuleNotFoundError: No module named 'tensorflow'**
</code></pre>
<p>I can't use Keras either because of that.
Something also caught my attention: when I run the same code in python3 from the terminal with the venv active, it works just fine. I am also able to import Keras there.</p>
<p>What could be my problem? I have read almost every article and tried every possible solution that can be found on the web.</p>
<p>System: MACOS Mojave
Python: 3.8.8
Pip: the latest
Code editor: Visual Studio Code</p>
|
<p>According to your description, please refer to the following:</p>
<ol>
<li><p>The location where the module is installed is not the python environment currently used by VS Code.</p>
<p>Please use "<code>pip --version</code>" in the VS Code terminal to check whether the source of the module installation tool "pip" is the same as that shown in the lower left corner of VS Code.</p>
<p>(If they are different, please use the shortcut key Ctrl+Shift+` to open a new VS Code terminal, it will automatically enter the selected environment.)</p>
</li>
<li><p>Related files in the installation package of the module are damaged.</p>
<p>Please uninstall the module "tensorflow" and reinstall it. (<code>pip uninstall tensorflow</code> <code>pip install tensorflow</code>)</p>
</li>
<li><p>Please check the naming of the module installation package. (Please pay attention to the case of naming.)</p>
<p><a href="https://i.stack.imgur.com/UDcbf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UDcbf.png" alt="enter image description here" /></a></p>
</li>
</ol>
<p>Reference: <a href="https://code.visualstudio.com/docs/python/environments" rel="nofollow noreferrer">Python environments in VS Code</a>.</p>
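<p>A quick sanity check you can run with the interpreter VS Code has selected (for example from a scratch file), to confirm which Python is active and whether it can see the package:</p>
<pre><code>import sys
print(sys.executable)    # should point inside your venv

import tensorflow as tf  # raises ModuleNotFoundError if this interpreter lacks it
print(tf.__version__)
</code></pre>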
|
python|macos|tensorflow|keras|visual-studio-code
| 1
|
8,359
| 60,261,393
|
Mapping values from series over a column to replace nan values pandas
|
<p>I have a DataFrame which has job numbers and the customer names associated with each job. There are instances where a job number has no customer name and the value is therefore null.
I have a separate series which has these job numbers as index and the missing customer names to replace the null values, based on the job numbers. I am not entirely sure how I would go about mapping this over the original DataFrame column.</p>
<p>This is the original DataFrame (df):</p>
<pre><code> Job Number Customer
0 02123 Paul F
1 46456 nan
2 56823 Kevin T
3 62948 nan
</code></pre>
<p>The series to replace the nan values:</p>
<pre><code>Job Number
46456 Kara L
62948 Sabrina M
Name: Customers, dtype: object
</code></pre>
<p>The final output I need is:</p>
<pre><code> Job Number Customer
0 02123 Paul F
1 46456 Kara L
2 56823 Kevin T
3 62948 Sabrina M
</code></pre>
<p>I hope this makes sense. I have had a look at other answers such as using: <code>df['Customer'] = df['Job Number'].map(customers)</code> but that didn't work or <code>test['Customer'] = df['Customer'].combine_first(df['Customer'].map(customers))</code>.</p>
<p>I wasn't sure how to paste code into here so I have written out the df and series by hand.</p>
<p>Any help would be greatly appreciated.</p>
|
<p>You could use <code>reset_index</code> with <code>combine_first</code>:</p>
<pre><code>(df.set_index('Job Number')['Customer']
   .combine_first(customers)
   .reset_index())

  Job Number   Customer
0      02123     Paul F
1      46456     Kara L
2      56823    Kevin T
3      62948  Sabrina M
</code></pre>
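<p>A shorter variant that only fills in the missing names, assuming <code>customers</code> is the Series from the question (indexed by job number, with matching dtypes):</p>
<pre><code>df['Customer'] = df['Customer'].fillna(df['Job Number'].map(customers))
</code></pre>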
|
python|pandas|dataframe|series
| 1
|
8,360
| 60,332,169
|
How can I output some data during a model.fit() run in tensorflow?
|
<p>I would like to print the value and/or the shape of a tensor during a <code>model.fit()</code> run and not before.
In PyTorch I can just put a print(input.shape) statement into the <code>model.forward()</code> function.</p>
<p>Is there something similar in TensorFlow?</p>
|
<p>You can pass a <em>callback</em> object to the <code>model.fit()</code> method and then perform actions at different stages during fitting.</p>
<p><a href="https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/Callback" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/Callback</a></p>
<pre class="lang-py prettyprint-override"><code>class MyCustomCallback(tf.keras.callbacks.Callback):
def on_train_batch_begin(self, batch, logs=None):
print('Training: batch {} begins at {}'.format(batch, datetime.datetime.now().time()))
def on_train_batch_end(self, batch, logs=None):
print('Training: batch {} ends at {}'.format(batch, datetime.datetime.now().time()))
def on_test_batch_begin(self, batch, logs=None):
print('Evaluating: batch {} begins at {}'.format(batch, datetime.datetime.now().time()))
def on_test_batch_end(self, batch, logs=None):
print('Evaluating: batch {} ends at {}'.format(batch, datetime.datetime.now().time()))
model = get_model()
model.fit(x_train, y_train, callbacks=[MyCustomCallback()])
</code></pre>
<p><a href="https://www.tensorflow.org/guide/keras/custom_callback" rel="nofollow noreferrer">https://www.tensorflow.org/guide/keras/custom_callback</a></p>
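<p>If the goal is specifically to see a tensor's shape from inside the forward pass, like <code>print(input.shape)</code> in a PyTorch <code>forward()</code>, a minimal sketch is a pass-through layer that calls <code>tf.print</code>, which also works inside compiled graphs:</p>
<pre><code>import tensorflow as tf

class ShapePrinter(tf.keras.layers.Layer):
    def call(self, inputs):
        tf.print("batch shape:", tf.shape(inputs))  # printed every time the layer runs
        return inputs

# insert ShapePrinter() anywhere in the model where you want the shape logged
</code></pre>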
|
python|tensorflow|neural-network|pytorch
| 2
|
8,361
| 65,067,042
|
pandas frequency of a specific value per group
|
<p>Suppose I have data for 50K shoppers and the products they bought. I want to count the number of times each user purchased product "a". <code>value_counts</code> seems to be the fastest way to calculate these types of numbers for a grouped pandas data frame. However, I was surprised at how much slower it was to calculate the purchase frequency for just one specific product (e.g., "a") using <code>agg</code> or <code>apply</code>. I could select a specific column from a data frame created using <code>value_counts</code> but that could be rather inefficient on very large data sets with lots of products.</p>
<p>Below a simulated example where each customer purchases 10 times from a set of three products. At this size you already notice speed differences between <code>apply</code> and <code>agg</code> compared to <code>value_counts</code>. Is there a better/faster way to extract information like this from a grouped pandas data frame?</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({
"col1": [f'c{j}' for i in range(10) for j in range(50000)],
"col2": np.random.choice(["a", "b", "c"], size=500000, replace=True)
})
dfg = df.groupby("col1")
# value_counts is fast
dfg["col2"].value_counts().unstack()
# apply and agg are (much) slower
dfg["col2"].apply(lambda x: (x == "a").sum())
dfg["col2"].agg(lambda x: (x == "a").sum())
# much faster to do
dfg["col2"].value_counts().unstack()["a"]
</code></pre>
<p>EDIT:</p>
<p>Two great responses to this question. <strong>Given the starting point of an already grouped data frame</strong>, it seems there may not be a better/faster way to count the number of occurrences of a single level in a categorical variable than using (1) <code>apply</code> or <code>agg</code> with a lambda function or (2) using <code>value_counts</code> to get the counts for all levels and then selecting the one you need.</p>
<p>The <code>groupby</code>/<code>size</code> approach is an excellent alternative to <code>value_counts</code>. With a minor edit to Cainรฃ Max Couto-Silva's answer, this would give:</p>
<pre><code>dfg = df.groupby(['col1', 'col2'])
dfg.size().unstack(fill_value=0)["a"]
</code></pre>
<p>I assume there would be a trade-off at some point where if you have many levels <code>apply</code>/<code>agg</code> or <code>value_counts</code> on an already grouped data frame may be faster than the <code>groupby</code>/<code>size</code> approach which requires creating a newly grouped data frame. I'll post back when I have some time to look into that.</p>
<p>Thanks for the comments and answers!</p>
|
<p>Filter before <code>value_counts</code></p>
<pre><code>df.loc[df.col2=='a','col1'].value_counts()['c0']
</code></pre>
<p>Also I think <code>crosstab</code> is 'faster' than <code>groupby</code> + <code>value_counts</code></p>
<pre><code>pd.crosstab(df.col1, df.col2)
</code></pre>
|
python|pandas|dataframe|pandas-groupby
| 2
|
8,362
| 65,227,589
|
Sort Pandas Dataframe based on previous row value
|
<p>I have a dataframe that looks like:</p>
<pre><code>Name Previous Name
Alice NaN
Charlie Bob
Bob Alice
Fred Eddy
Danny Charlie
Eddy Dan
</code></pre>
<p>I would like to sort the dataframe so that is looks like:</p>
<pre><code>Name Previous Name
Alice NaN
Bob Alice
Charlie Bob
Danny Charlie
Eddy Danny
Fred Eddy
</code></pre>
<p>I know the boolean test involves something like</p>
<pre><code>dataframe['Value'] = dataframe['PreviousValue'].shift(1)
</code></pre>
<p>But how can I use that as a sort criteria?</p>
<p>EDIT: Changed example from letters to names</p>
|
<p>Sort the values, then shift (the column names here follow the original letter-based example):</p>
<pre><code>df = df.sort_values('Value')
df['Previous Value'] = df['Value'].shift()
</code></pre>
<p>Output:</p>
<pre><code> Value Previous Value
0 A NaN
2 B A
1 C B
4 D C
5 E D
3 F E
</code></pre>
|
python|pandas|dataframe|sorting
| 0
|
8,363
| 65,358,010
|
AttributeError: module 'tensorflow' has no attribute 'string_join'
|
<p>I'm reading an introductory book to tensorflow and encountered an error with the first code snippet.</p>
<pre><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
msg = tf.string_join(["Hello ", "TensorFlow"])
with tf.Session() as sess:
print(sess.run(msg))
</code></pre>
<p>This is the output from the console:</p>
<pre><code>2020-12-18 13:30:58.723487: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2020-12-18 13:30:58.727081: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
File "tensor.py", line 7, in <module>
msg = tf.string_join(["Hello ", "TensorFlow"])
AttributeError: module 'tensorflow' has no attribute 'string_join'
</code></pre>
<p>Any help would be greatly apppreciated. Thanks!</p>
|
<p><code>string_join</code> seems to be from <a href="https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/strings/join" rel="nofollow noreferrer">Tensorflow 1</a>. Notice the alias of <code>string_join</code> for <code>tf.strings.join</code>.</p>
<p>However in <a href="https://www.tensorflow.org/api_docs/python/tf/strings/join" rel="nofollow noreferrer">Tensorflow 2</a> they no longer have that alias. They do have an alias for <code>tf.compat.v1.string_join</code>. But it looks like you can probably just use <code>tf.strings.join</code>.</p>
<p>Example code from those docs</p>
<pre><code>tf.strings.join(['abc','def']).numpy()
tf.strings.join([['abc','123'],
['def','456'],
['ghi','789']]).numpy()
tf.strings.join([['abc','123'],
['def','456']],
separator=" ").numpy()
</code></pre>
|
python|tensorflow
| 1
|
8,364
| 65,468,026
|
norm.ppf vs norm.cdf in python's scipy.stats
|
<p>I have pasted my complete code below for reference. I want to know what ppf and cdf are used for here. Can you explain? I did some research and found out that ppf (percent point function) is the inverse of the CDF (cumulative distribution function).
If they really are inverses, shouldn't this code work if I replaced ppf and cdf with 1/cdf and 1/ppf respectively?</p>
<p>Please explain the difference between the two, and how and when to use each.</p>
<p>This is, by the way, hypothesis testing.
Sorry for so many comments; it's just a habit of explaining everything for my future reference. (Do point out if any of my comments is wrong.)</p>
<pre><code>ball_bearing_radius = [2.99, 2.99, 2.70, 2.92, 2.88, 2.92, 2.82, 2.83, 3.06, 2.85]
import numpy as np
from math import sqrt
from scipy.stats import norm
# h1 : u != U_0
# h0 : u = u_0
#case study : ball bearing example, claim is that radius = 3, do hypothesis testing
mu_0 = 3
sigma = 0.1
#collect sample
sample = ball_bearing_radius
#compute mean
mean = np.mean(sample)
#compute n
n = len(sample)
#compute test statistic
z = (mean - mu_0) /(sigma/sqrt(n))
#set alpha
a = 0.01
#-------------------------
#calculate the z_a/2, by using percent point function of the norm of scipy
#ppf = percent point function, inverse of CDF(comulative distribution function)
#also, CDF = pr(X<=x), i.e., probability to the left of the distribution
z_critical = norm.ppf(1-a/2) #this returns a value for which the probab to the left is 0.975
p_value = 2*(1 - norm.cdf(np.abs(z)))
p_value = float("{:.4f}".format(p_value))
print('z : ',z)
print('\nz_critical :', z_critical)
print('\nmean :', mean, "\n\n")
#test the hypothesis
if (np.abs(z) > z_critical):
print("\nREJECT THE NULL HYPOTHESIS : \n p-value = ", p_value, "\n Alpha = ", a )
else:
print("CANNOT REJECT THE NULL HYPOTHESIS. NOT ENOUGH EVIDENCE TO REJECT IT: \n p-value = ", p_value, "\n Alpha = ", a )
</code></pre>
|
<p>The <code>.cdf()</code> function calculates the probability of observing a value less than or equal to a given normal distribution value, while the <code>.ppf()</code> function calculates the normal distribution value for which a given probability is obtained. They are inverses of each other in the functional sense (<code>ppf(cdf(x)) == x</code>), not reciprocals, so replacing them with <code>1/cdf</code> and <code>1/ppf</code> will not give the same results.</p>
<p>To illustrate this calculation, check the below sample code.</p>
<pre><code>from scipy.stats import norm
print(norm.ppf(0.95))
print(norm.cdf(1.6448536269514722))
</code></pre>
<p><a href="https://i.stack.imgur.com/aZSLH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aZSLH.png" alt="enter image description here" /></a></p>
<p>This image with the code above should make it clear for you.</p>
<p>Thanks!</p>
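<p>A small round-trip check, showing that the "inverse" is functional composition rather than a reciprocal:</p>
<pre><code>from scipy.stats import norm

x = 1.23
print(norm.ppf(norm.cdf(x)))    # ~1.23, ppf undoes cdf
print(norm.cdf(norm.ppf(0.8)))  # ~0.8, cdf undoes ppf
print(1 / norm.cdf(x))          # something else entirely, not ppf(x)
</code></pre>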
|
python|numpy|data-science|hypothesis-test|scipy.stats
| 11
|
8,365
| 49,912,441
|
How to get batch size back from a tensorflow dataset?
|
<p>It is recommended to use tensorflow dataset as the input pipeline which can be set up as follows:</p>
<pre><code># Specify dataset
dataset = tf.data.Dataset.from_tensor_slices((features, labels))
# Suffle
dataset = dataset.shuffle(buffer_size=1e5)
# Specify batch size
dataset = dataset.batch(128)
# Create an iterator
iterator = dataset.make_one_shot_iterator()
# Get next batch
next_batch = iterator.get_next()
</code></pre>
<p>I should be able to get the batch size (either from dataset itself or from an iterator created from it, i.e. both <code>iterator</code> and <code>next_batch</code>). Maybe someone wants to know how many batches there are in the dataset or its iterators. Or how many batches have been called and how many remain in the iterator? One might also want to get particular elements, or even the entire dataset at once.</p>
<p>I wasn't able to find anything on the tensorflow documentation. Is this possible? If not, does anyone know if this has been requested as an issue on tensorflow GitHub?</p>
|
<p>In TF2 at least, the type of a dataset is statically defined and accessible via <code>tf.data.Dataset.element_spec</code>.</p>
<p>This is a somewhat complex return type because it has tuple nesting that matches your Dataset.</p>
<pre class="lang-py prettyprint-override"><code>>>> tf.data.Dataset.from_tensor_slices([[[1]],[[2]]]).element_spec.shape
TensorShape([1, 1])
</code></pre>
<p>If your data is organized as a tuple[image, label], then you'd get a tuple of TensorSpecs. You can index into it if you are certain of the nesting of the return type. E.g.</p>
<pre class="lang-py prettyprint-override"><code>>>> image = tf.data.Dataset.from_tensor_slices([[1],[2],[3],[4]]).batch(2, drop_remainder=True)
>>> label = tf.data.Dataset.from_tensor_slices([[1],[2],[3],[4]]).batch(2, drop_remainder=True)
>>> train = tf.data.Dataset.zip((image, label))
>>> train.element_spec[0].shape[0]
2
</code></pre>
|
tensorflow|issue-tracking|tensorflow-datasets
| 1
|
8,366
| 63,850,929
|
Building forecast Pandas DataFrame
|
<p>I have a DataFrame in Pandas that contains forecasted sales data that looks like this:</p>
<pre><code> | Date | ProductID | Forecasted_Date | Sales |
---|-------|-----------|-----------------|-------|
0 | 1_Jan | 1 | 2_Jan | 10 |
1 | 1_Jan | 2 | 3_Jan | 3 |
2 | 1_Jan | 1 | 2_Jan | 7 |
3 | ... | | | |
4 | 2_Jan | 1 | 3_Jan | 7 |
</code></pre>
<p>On each Date, and for each ProductId, sales are forecasted between 1 and 20 days in front (the Forecasted_date).</p>
<p>I want to create a new DataFrame, MultiIndexed by "[Date, ProductID]" and with the following columns:</p>
<pre><code>| IND_Date | IND_ProductID | F1 | F2 | ... | F20 |
|----------|---------------|----|------|-----|-----|
| 1_Jan | 1 | 10 | 3 | | |
| 1_Jan | 2 | 7 | etc. | | |
| ... | | | | | |
| 2_Jan | 1 | 7 | | | |
</code></pre>
<p>Where the columns represent the number of days ahead the forecast was made. (I.e., for Date=1_Jan, F1=Sales on 2_Jan).</p>
<p>What is the best way to construct this in Pandas?</p>
|
<p>I figured it out.</p>
<p>If <code>df</code> is my base dataframe...</p>
<ol>
<li>Find difference in days, assuming Date and Forecasted_Date are in Datetime format:</li>
</ol>
<pre><code>df['difference'] = (df['Forecasted_Date'] - df['Date']) / pd.Timedelta(1, 'D')
</code></pre>
<ol start="2">
<li>Convert to required "f_" format:</li>
</ol>
<pre><code>df['forecast_day'] = 'f_' + df['difference'].astype('int').astype('str')
</code></pre>
<ol start="3">
<li>Create pivot table</li>
</ol>
<pre><code>df_forecast = pd.pivot_table(data=df, values="Sales", index=["Date", "Product_ID"], columns="forecast_day", aggfunc="sum")
</code></pre>
<p>Done!</p>
|
python|pandas
| 1
|
8,367
| 46,900,915
|
Object detection: Error with Export/Import for inference
|
<p>I am a beginner in machine learning and currently trying to follow the tutorial given in the following link <a href="https://github.com/tensorflow/models/blob/master/object_detection/g3doc/exporting_models.md" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/object_detection/g3doc/exporting_models.md</a></p>
<p>I have completed training a model, which gave me model.ckpt files. Below is the command that I typed into the Windows command prompt:</p>
<pre><code>{Your path}\tensor\flow\models\research>
python object_detection\export_inference_graph.py\
--input_type=image_tensor \
--pipeline_config_path="{Your path}\model\ssd_mobilenet_v1_pets.config" \
--trained_checkpoint_prefix="{Your path}\models\train\" \
--output_directory=output_inference_graph.pb \
1>mloutput.txt 2>mlerror.txt
</code></pre>
<p>so when I checked the error file, this was what I found:</p>
<pre><code>Traceback (most recent call last):
File "object_detection\export_inference_graph.py", line 106, in <module>
tf.app.run()
File "C:\Users\ericsen\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "object_detection\export_inference_graph.py", line 95, in main
assert FLAGS.output_directory, '`output_directory` is missing'
AssertionError: `output_directory` is missing
</code></pre>
<p>I don't really understand why the output directory is missing. Is it purely my mistake or is it a bug? Your kind feedback and help will be highly appreciated. Thank you</p>
|
<p>I tried printing all the arguments and saw:</p>
<blockquote>
<p>trained_checkpoint_prefix = "{Your path}\models\train\" \ --output_directory=output_inference_graph.pb \</p>
</blockquote>
<p>It seems a "\" is missing in "{Your path}\models\train\": the single trailing backslash escapes the closing quote, so <code>--output_directory</code> gets swallowed into the previous argument.</p>
<p>Try adding one more "\" after "train\" --> "train\\"</p>
<pre><code>{Your path}\tensor\flow\models\research>
python object_detection\export_inference_graph.py\
--input_type=image_tensor \
--pipeline_config_path="{Your path}\model\ssd_mobilenet_v1_pets.config" \
--trained_checkpoint_prefix="{Your path}\models\train\\" \
--output_directory=output_inference_graph.pb \
</code></pre>
|
python|tensorflow|object-detection
| 0
|
8,368
| 46,972,640
|
Find rows whose values are less/greater than rows of another dataFrame
|
<p>I have 2 dataframes:</p>
<pre><code>df = pd.DataFrame({'begin': [10, 20, 30, 40, 50],
'end': [15, 23, 36, 48, 56]})
begin end
0 10 15
1 20 23
2 30 36
3 40 48
4 50 56
df2 = pd.DataFrame({'begin2': [12, 13, 22, 40],
'end2': [14, 13, 26, 48]})
begin2 end2
0 12 14
1 13 13
2 22 26
3 40 48
</code></pre>
<p>How can i get the rows of df2 which are within the rows of df1? I want each row of the df2 to be compared to all rows of df1.</p>
<p>That is, i want a df3 like:</p>
<pre><code> begin2 end2
0 12 14
1 13 13
3 40 48
</code></pre>
<p>I tried:</p>
<pre><code>df3 = df2.loc[ (df['begin'] <= df2['begin2']) & (df2['end2'] <= df['end'] )]
</code></pre>
<p>But it only compares row by row and requires the dataframes to be the same size.</p>
|
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>apply</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p>
<pre><code>df = df2[df2.apply(lambda x: any((df['begin'] <= x['begin2']) &
(x['end2'] <= df['end'])), axis=1)]
print (df)
begin2 end2
0 12 14
1 13 13
3 40 48
</code></pre>
<p>Detail:</p>
<pre><code>print (df2.apply(lambda x: any((df['begin'] <= x['begin2']) &
(x['end2'] <= df['end'])), axis=1))
0 True
1 True
2 False
3 True
dtype: bool
</code></pre>
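<p>For larger frames, a sketch of a vectorized alternative that uses numpy broadcasting instead of a row-wise <code>apply</code> (same logic, computed as one boolean matrix):</p>
<pre><code>import numpy as np

b, e = df['begin'].to_numpy(), df['end'].to_numpy()
mask = ((b <= df2['begin2'].to_numpy()[:, None]) &
        (df2['end2'].to_numpy()[:, None] <= e)).any(axis=1)
print(df2[mask])
</code></pre>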
|
python|pandas
| 1
|
8,369
| 46,993,291
|
Python: Plot histogram of dataframe with one column as the labels, and the other as the values
|
<p>I have a dataframe with two columns. I want to plot a histogram with the 'Word_Length' column as the x-axis labels and the y-axis values as the 'Count'</p>
<p>Here's a short example of what the data looks like. Both Columns values are integers.</p>
<pre><code>Word_Length Count
1 265
9 67
3 45
</code></pre>
|
<p>I guess you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.bar.html" rel="nofollow noreferrer"><code>DataFrame.plot.bar</code></a>, because <a href="https://en.wikipedia.org/wiki/Histogram" rel="nofollow noreferrer"><code>histogram</code></a> is an accurate graphical representation of the distribution of numerical data:</p>
<pre><code>df.plot.bar(x = 'Word_Length', y='Count')
</code></pre>
|
python|pandas
| 0
|
8,370
| 63,023,793
|
How to prevent the initial pytorch variable from changing using a function?
|
<p>I want to apply a function to the variable <code>x</code> and save the result as <code>y</code>. But why is <code>x</code> also changed? How can I prevent it?</p>
<pre><code>import torch
def minus_min(raw):
for col_i in range(len(raw[0])):
new=raw
new[:,col_i] = (raw[:,col_i] - raw[:,col_i].min())
return new
x=torch.tensor([[0,1,2,3,4],
[2,3,4,0,8],
[0,1,2,3,4]])
y=minus_min(x)
print(y)
print(x)
</code></pre>
<p>output:</p>
<pre><code>tensor([[0, 0, 0, 3, 0],
[2, 2, 2, 0, 4],
[0, 0, 0, 3, 0]])
tensor([[0, 0, 0, 3, 0],
[2, 2, 2, 0, 4],
[0, 0, 0, 3, 0]])
</code></pre>
|
<p>Because this assignment:</p>
<pre class="lang-py prettyprint-override"><code>new[:,col_i] = (raw[:,col_i] - raw[:,col_i].min())
</code></pre>
<p>is an in-place operation. Therefore, <code>x</code> and <code>y</code> will share the underlying <code>.data</code>.</p>
<p>The smallest change that would solve this issue would be to make a copy of <code>x</code> inside the function:</p>
<pre class="lang-py prettyprint-override"><code>def minus_min(raw):
new = raw.clone() # <--- here
for col_i in range(len(raw[0])):
new[:,col_i] = raw[:,col_i] - raw[:,col_i].min()
return new
</code></pre>
<p>If you want, you can simplify your function (and remove the <code>for</code> loop):</p>
<pre class="lang-py prettyprint-override"><code>y = x - x.min(dim=0).values
</code></pre>
|
function|pytorch|tensor
| 2
|
8,371
| 63,041,257
|
Merge Excel files with pandas in python
|
<p>I'm almost done with merging Excel files with pandas in Python, but when I give the path it won't work. I get the error ''No such file or directory: 'file1.xlsx'''. When I leave the path empty it works, but I want to decide which folder it should take the files from. And I saved the files in the folder 'excel'.</p>
<pre><code>cwd = os.path.abspath('/Users/Viktor/downloads/excel') #If i leave it empty and have files in /Viktor it works but I have the desired excel files in /excel
print(cwd)
files = os.listdir(cwd)
df = pd.DataFrame()
for file in files:
if file.endswith('.xlsx'):
df = df.append(pd.read_excel(file), ignore_index=True)
df.head()
df.to_excel(r'/Users/Viktor/Downloads/excel/resultat/merged.xlsx')
</code></pre>
|
<p>pd.read_excel(file) looks for the file relative to the path where the script is executed. If you execute in '/Users/Viktor/' try with:</p>
<pre><code>import os
import pandas as pd
cwd = os.path.abspath('/Users/Viktor/downloads/excel') #If i leave it empty and have files in /Viktor it works but I have the desired excel files in /excel
#print(cwd)
files = os.listdir(cwd)
df = pd.DataFrame()
for file in files:
if file.endswith('.xlsx'):
df = df.append(pd.read_excel('downloads/excel/' + file), ignore_index=True)
df.head()
df.to_excel(r'/Users/Viktor/downloads/excel/resultat/merged.xlsx')
</code></pre>
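<p>Alternatively, a sketch that builds absolute paths with <code>os.path.join</code>, so the script works regardless of the directory it is executed from (and collects the frames with a single <code>concat</code> instead of repeated <code>append</code>):</p>
<pre><code>import os
import pandas as pd

folder = '/Users/Viktor/downloads/excel'
frames = [pd.read_excel(os.path.join(folder, f))
          for f in os.listdir(folder) if f.endswith('.xlsx')]
df = pd.concat(frames, ignore_index=True)
df.to_excel(os.path.join(folder, 'resultat', 'merged.xlsx'))
</code></pre>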
|
python|excel|pandas
| 2
|
8,372
| 63,105,754
|
How do I calculate lambda to use scipy.special.boxcox1p function for my entire dataframe of 500 columns?
|
<p>I have a dataframe with total sales of around 500 product categories in each row. So there are 500 columns in my dataframe. I am trying to find the highest correlated category with my another dataframe columns.
So I will use Pearson correlation method for this.
But the Total sales for all the categories are highly skewed data, with the skewness level ranging from 10 to 40 for all the category columns. So I want to log transform this sales data using boxcox transformation.
Since, my sales data has 0 values as well, I want to use boxcox1p function.
Can somebody help me, how do I calculate lambda for boxcox1p function, since it is a mandatory parameter for this function?
Also, Is this the correct approach for my problem statement to find highly correlated categories?</p>
|
<p>Assume <code>df</code> is your dataframe with many columns containing numeric values, and the lambda parameter of the Box-Cox transformation equals 0.25; then:</p>
<pre><code>from scipy.special import boxcox1p
df_boxcox = df.apply(lambda x: boxcox1p(x,0.25))
</code></pre>
<p>Now transformed values are in <code>df_boxcox</code>.</p>
<p>Unfortunately there is no built-in method to find lambda of <code>boxcox1p</code> but we can use <code>PowerTransformer</code> from <code>sklearn.preprocessing</code> instead:</p>
<pre><code>import numpy as np
from sklearn.preprocessing import PowerTransformer
pt = PowerTransformer(method='yeo-johnson')
</code></pre>
<p>Note method 'yeo-johnson' is used because it works with both positive and negative values. Method 'box-cox' will raise error: <code>ValueError: The Box-Cox transformation can only be applied to strictly positive data</code>.</p>
<pre><code>data = pd.DataFrame({'x':[-2,-1,0,1,2,3,4,5]}) #just sample data to explain
pt.fit(data)
print(pt.lambdas_)
[0.89691707]
</code></pre>
<p>then apply calculated lambda:</p>
<pre><code>print(pt.transform(data))
</code></pre>
<p>result:</p>
<pre><code>[[-1.60758267]
[-1.09524803]
[-0.60974999]
[-0.16141745]
[ 0.26331586]
[ 0.67341476]
[ 1.07296428]
[ 1.46430326]]
</code></pre>
|
python|pandas|logging|transformation|pearson-correlation
| 5
|
8,373
| 67,639,478
|
Is there a significant speed improvement when using transformers tokenizer over batch compared to per item?
|
<p>Is calling the tokenizer on a batch significantly faster than calling it on each item in the batch? E.g.</p>
<pre class="lang-py prettyprint-override"><code>encodings = tokenizer(sentences)
# vs
encodings = [tokenizer(x) for x in sentences]
</code></pre>
|
<p>I ended up just timing both, in case it's interesting for someone else:</p>
<pre><code>%%timeit
for _ in range(10**4): tokenizer("Lorem ipsum dolor sit amet, consectetur adipiscing elit.")
785 ms ยฑ 24.5 ms per loop (mean ยฑ std. dev. of 7 runs, 1 loop each)
%%timeit
tokenizer(["Lorem ipsum dolor sit amet, consectetur adipiscing elit."]*10**4)
266 ms ยฑ 6.52 ms per loop (mean ยฑ std. dev. of 7 runs, 1 loop each)
</code></pre>
|
pytorch|huggingface-transformers
| 2
|
8,374
| 61,553,229
|
Pandas: How to find the average length of days for a local outbreak to peak in a COVID-19 dataframe?
|
<p>Let's say I have this dataframe containing the difference in number of active cases from previous value in each country:</p>
<pre><code>[in]
import pandas as pd
import numpy as np
active_cases = {'Day(s) since outbreak':['0', '1', '2', '3', '4', '5'], 'Australia':[np.NaN, 10, 10, -10, -20, -20], 'Albania':[np.NaN, 20, 0, 15, 0, -20], 'Algeria':[np.NaN, 25, 10, -10, 20, -20]}
df = pd.DataFrame(active_cases)
df
[out]
Day(s) since outbreak Australia Albania Algeria
0 0 NaN NaN NaN
1 1 10.0 20.0 25.0
2 2 10.0 0.0 10.0
3 3 -10.0 15.0 -10.0
4 4 -20.0 0.0 20.0
5 5 -20.0 -20.0 -20.0
</code></pre>
<p>I need to find the average length of days for a local outbreak to peak in this COVID-19 dataframe.</p>
<p>My solution is to find the nth row with the first negative value in each column (e.g., nth row of first negative value in 'Australia': 3, nth row of first negative value in 'Albania': 5) and average it.</p>
<p>However, I have no idea how to do this in Panda/Python.</p>
<p>Are there any ways to perform this task with simple lines of Python/Panda code?</p>
|
<p>you can <code>set_index</code> the column <code>Day(s) since outbreak</code>, then use <code>iloc</code> to select all rows except the first one, then check where the values are less than (<code>lt</code>) 0. Use <code>idxmax</code> to get the first row where the value is less than 0 and take the <code>mean</code>. With your input, it gives:</p>
<pre><code>print (df.set_index('Day(s) since outbreak')\
.iloc[1:, :].lt(0).idxmax().astype(float).mean())
3.6666666666666665
</code></pre>
|
python|pandas|numpy|dataframe
| 1
|
8,375
| 61,217,403
|
How to convert XLA_GPU into GPU
|
<p>My OS is Ubuntu 18.04 and my GPU is GTX850M. I'm using nvidia drivers 430.50, <code>CUDA 10.1</code> ,<code>CuDNN 9.0</code> and <code>tensorflow-gpu 1.14.0</code>. When I try getting available devices in tensorflow with</p>
<pre><code>from tensorflow.python.client import device_lib
device_lib.list_local_devices()
</code></pre>
<p>I'm getting this out</p>
<pre><code>[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 2293723676390825589,
name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 15287372432461854293
physical_device_desc: "device: XLA_GPU device",
name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 10399216684927698454
physical_device_desc: "device: XLA_CPU device"]
</code></pre>
<p>I can use the XLA_GPU for basic applications(i.e. tensorflow constant production) but I can't train a neural network. How can I convert XLA_GPU into GPU for train deep neural networks?</p>
|
<p>Your output indicates that there was an issue with the <code>Tensorflow GPU</code> installation.</p>
<blockquote>
<p>I'm using nvidia drivers 430.50, CUDA 10.1 ,CuDNN 9.0 and
tensorflow-gpu 1.14.0.</p>
</blockquote>
<p>According to <a href="https://www.tensorflow.org/install/source#gpu" rel="nofollow noreferrer">Tensorflow tested build configurations</a>, <code>tensorflow_gpu-1.14.0</code> requires <code>CUDA-10.0</code> and <code>cuDNN-7.4</code>, with this combination you can use GPU on your device.</p>
<p><em><code>Note:</code></em> If a non-GPU version of the package is installed, the function would also return <code>False</code>. Use <code>tf.test.is_built_with_cuda()</code> to validate if TensorFlow was build with CUDA support.</p>
<p>A simple workaround to use the GPU: if you install tensorflow-gpu with Anaconda, conda will install CUDA and cuDNN for you in the same conda environment as tensorflow-gpu. You can create a virtual environment using conda as shown below:</p>
<pre><code>conda create --name tf python=3.8
conda activate tf            # activate the tf environment
conda install tensorflow-gpu # pulls in matching cudatoolkit and cudnn
python -c "import tensorflow"
</code></pre>
|
python|tensorflow|gpu
| 0
|
8,376
| 61,205,873
|
Group values in NxN matrix into a N/2 x N/2 matrix
|
<p>Let's assume I have the following 4x4 matrix:</p>
<pre><code>import numpy as np
np.array([[1, 2, 3, 4],
[5, 6, 7, 8],
[9,10,11,12],
[13,14,15,16]])
</code></pre>
<p>I wish to group the values in 2x2 submatrices, sum them and gather the result in a 2x2 matrix, so that the result in this case would be:</p>
<pre><code>[
[14, 22],
[46, 54]
]
</code></pre>
<p>What is the most numpy-ish way to do so?</p>
|
<p>You can use <code>.reshape</code> method and then sum along axis:</p>
<pre><code>import numpy as np
data = np.array([[1, 2, 3, 4],
[5, 6, 7, 8],
[9,10,11,12],
[13,14,15,16]])
bs = 2 #block size
data_r = data.reshape(bs,bs,bs,bs)
data_r
array([[[[ 1, 2],
[ 3, 4]],
[[ 5, 6],
[ 7, 8]]],
[[[ 9, 10],
[11, 12]],
[[13, 14],
[15, 16]]]])
data_r.sum(axis=(1,3))
array([[14, 22],
[46, 54]])
</code></pre>
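<p>Continuing from the snippet above, the same idea written for a general NxN array (assuming N is divisible by the block size) so the 4x4 shape is not hard-coded:</p>
<pre><code>n = data.shape[0]
result = data.reshape(n // bs, bs, n // bs, bs).sum(axis=(1, 3))
</code></pre>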
|
python|numpy
| 1
|
8,377
| 68,455,178
|
read txt file with multiple tab & space separated values in pandas
|
<p>I want to read a fixed width <strong>file.txt</strong> using pandas like this :</p>
<pre><code>option19971675181 ACHILLE BLA BLA BLA1 blabla 88 498
option19971675182 ACHILLE BLA BLA BLA1 blabla 176 498
option19971675183 ACHILLE BLA BLA BLA1 blabla 191 498
option19971675184 ACHILLE BLA BLA BLA1 blabla 521 498
option19971675185 ACHILLE BLA BLA BLA1 blabla 919 498
option19971675186 ACHILLE BLA BLA BLA134234531 blabla 10 498
option19971675187 ACHILLE BLA BLA BLA134234531 7 65 blabla 0 0
option19971675188 ACHILLE BLA BLA BLA1342 90345 31 blabla 0 0
option19971675189 ACHILLE BLA BLA BLA 134 23N 094 87OP531 blabla 0 0
option19971675190 ACHILLE BLA BLA BLA 134 23N 094 87OP53 blabla 0 0
</code></pre>
<p>I tried to read the file into pandas. The file has values separated by space</p>
<p>But I don't know how I can separate the text <strong>option199716751810</strong> into 2 columns.</p>
<p>I used the code from the answer it work but not for the first line</p>
<pre><code> df = pd.read_csv("test.txt", delimiter ="\s\s+", header = None,error_bad_lines=False)
df[df.columns[0]] = df[df.columns[0]].str.replace("option199716","")
>>> df
</code></pre>
<p>I got this output</p>
<pre><code>75181 ACHILLE BLA BLA BLA1 blabla 88 498
75182 ACHILLE BLA BLA BLA1 blabla 176 498
75183 ACHILLE BLA BLA BLA1 blabla 191 498
75184 ACHILLE BLA BLA BLA1 blabla 521 498
75185 ACHILLE BLA BLA BLA1 blabla 919 498
75186 ACHILLE BLA BLA BLA134234531 blabla 10 498
75187 ACHILLE BLA BLA BLA134234531 7 65 blabla 0 0
75188 ACHILLE BLA BLA BLA1342 90345 31 blabla 0 0
75189 ACHILLE BLA BLA BLA 134 23N 094 87OP531 blabla 0 0
75190 ACHILLE BLA BLA BLA 134 23N 094 87OP53 blabla 0 0
</code></pre>
<p>But it still shows the error:
<code>Skipping line 16: Expected 5 fields in line 136, saw 6. Error could possibly be due to quotes being ignored when a multi-char delimiter is used.</code>
Can someone help with this, please?</p>
|
<p>Assuming your text file is spaced exactly as in your question, try with the following:</p>
<pre><code>df = pd.read_csv("test.txt", delimiter ="\s\s+")
df[df.columns[0]] = df[df.columns[0]].str.replace("option199716","")
>>> df
0 1 2 3 4
0 751810 Pascal Male 23 11
1 845087 Achille Male 13 12
2 602183 Hera Femelles 9 98
3 802183 Alma Femelles 19 88
</code></pre>
|
python|pandas
| 1
|
8,378
| 52,939,042
|
Tensorflowsharp and Retinanet -- How to determine what to Fetch when graph is run?
|
<p>I've been using TensorflowSharp with Faster RCNN successfully for a while now; however, I recently trained a Retinanet model, verified it works in python, and have created a frozen pb file for use with Tensorflow. For FRCNN, there is an example in the TensorflowSharp GitHub repo that shows how to run/fetch this model. For Retinanet, I tried modifying the code but nothing seems to work. I have a model summary for Retinanet that I've tried to work from, but it's not obvious to me what should be used.</p>
<p>For FRCNN, the graph is run in this way:</p>
<pre><code> var runner = m_session.GetRunner();
runner
.AddInput(m_graph["image_tensor"][0], tensor)
.Fetch(
m_graph["detection_boxes"][0],
m_graph["detection_scores"][0],
m_graph["detection_classes"][0],
m_graph["num_detections"][0]);
var output = runner.Run();
var boxes = (float[,,])output[0].GetValue(jagged: false);
var scores = (float[,])output[1].GetValue(jagged: false);
var classes = (float[,])output[2].GetValue(jagged: false);
var num = (float[])output[3].GetValue(jagged: false);
</code></pre>
<p>From the model summary for FRCNN, it is obvious what the input ("image_tensor") and outputs ("detection_boxes", "detection_scores", "detection_classes", and "num_detections") are. They are not the same for Retinanet (I've tried), and I can't figure out what they should be. The "Fetch" part of the code above is causing a crash, and I'm guessing it's because I'm not getting the node names right.</p>
<p>I won't paste the entire Retinanet summary here, but here is the first few nodes:</p>
<pre><code>Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, None, None, 3 0
__________________________________________________________________________________________________
padding_conv1 (ZeroPadding2D) (None, None, None, 3 0 input_1[0][0]
__________________________________________________________________________________________________
conv1 (Conv2D) (None, None, None, 6 9408 padding_conv1[0][0]
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization) (None, None, None, 6 256 conv1[0][0]
__________________________________________________________________________________________________
conv1_relu (Activation) (None, None, None, 6 0 bn_conv1[0][0]
__________________________________________________________________________________________________
</code></pre>
<p>And here are the last several nodes:</p>
<pre><code>__________________________________________________________________________________________________
anchors_0 (Anchors) (None, None, 4) 0 P3[0][0]
__________________________________________________________________________________________________
anchors_1 (Anchors) (None, None, 4) 0 P4[0][0]
__________________________________________________________________________________________________
anchors_2 (Anchors) (None, None, 4) 0 P5[0][0]
__________________________________________________________________________________________________
anchors_3 (Anchors) (None, None, 4) 0 P6[0][0]
__________________________________________________________________________________________________
anchors_4 (Anchors) (None, None, 4) 0 P7[0][0]
__________________________________________________________________________________________________
regression_submodel (Model) (None, None, 4) 2443300 P3[0][0]
P4[0][0]
P5[0][0]
P6[0][0]
P7[0][0]
__________________________________________________________________________________________________
anchors (Concatenate) (None, None, 4) 0 anchors_0[0][0]
anchors_1[0][0]
anchors_2[0][0]
anchors_3[0][0]
anchors_4[0][0]
__________________________________________________________________________________________________
regression (Concatenate) (None, None, 4) 0 regression_submodel[1][0]
regression_submodel[2][0]
regression_submodel[3][0]
regression_submodel[4][0]
regression_submodel[5][0]
__________________________________________________________________________________________________
boxes (RegressBoxes) (None, None, 4) 0 anchors[0][0]
regression[0][0]
__________________________________________________________________________________________________
classification_submodel (Model) (None, None, 1) 2381065 P3[0][0]
P4[0][0]
P5[0][0]
P6[0][0]
P7[0][0]
__________________________________________________________________________________________________
clipped_boxes (ClipBoxes) (None, None, 4) 0 input_1[0][0]
boxes[0][0]
__________________________________________________________________________________________________
classification (Concatenate) (None, None, 1) 0 classification_submodel[1][0]
classification_submodel[2][0]
classification_submodel[3][0]
classification_submodel[4][0]
classification_submodel[5][0]
__________________________________________________________________________________________________
filtered_detections (FilterDete [(None, 300, 4), (No 0 clipped_boxes[0][0]
classification[0][0]
==================================================================================================
Total params: 36,382,957
Trainable params: 36,276,717
Non-trainable params: 106,240
</code></pre>
<p>Any help with figure out how to fix the "Fetch" part of this would be greatly appreciated.</p>
<p>EDIT:</p>
<p>To dig a little further into this, I found a python function to print the operation names from a .pb file. When doing this for the FRCNN .pb file, it clearly gave the output node names, as can be seen below (only posting the last several lines from the output of the python function).</p>
<pre><code>import/SecondStagePostprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_4/TensorArrayGatherV3
import/SecondStagePostprocessor/ToFloat_1
import/add/y
import/add
import/detection_boxes
import/detection_scores
import/detection_classes
import/num_detections
</code></pre>
<p>If I do the same thing for the Retinanet .pb file, it's not obvious what the outputs are. Here's the last several lines from the python function.</p>
<pre><code>import/filtered_detections/map/while/NextIteration_4
import/filtered_detections/map/while/Exit_2
import/filtered_detections/map/while/Exit_3
import/filtered_detections/map/while/Exit_4
import/filtered_detections/map/TensorArrayStack/TensorArraySizeV3
import/filtered_detections/map/TensorArrayStack/range/start
import/filtered_detections/map/TensorArrayStack/range/delta
import/filtered_detections/map/TensorArrayStack/range
import/filtered_detections/map/TensorArrayStack/TensorArrayGatherV3
import/filtered_detections/map/TensorArrayStack_1/TensorArraySizeV3
import/filtered_detections/map/TensorArrayStack_1/range/start
import/filtered_detections/map/TensorArrayStack_1/range/delta
import/filtered_detections/map/TensorArrayStack_1/range
import/filtered_detections/map/TensorArrayStack_1/TensorArrayGatherV3
import/filtered_detections/map/TensorArrayStack_2/TensorArraySizeV3
import/filtered_detections/map/TensorArrayStack_2/range/start
import/filtered_detections/map/TensorArrayStack_2/range/delta
import/filtered_detections/map/TensorArrayStack_2/range
import/filtered_detections/map/TensorArrayStack_2/TensorArrayGatherV3
</code></pre>
<p>For reference, here's the python function that I used:</p>
<pre><code>def printTensors(pb_file):
# read pb into graph_def
with tf.gfile.GFile(pb_file, "rb") as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
# import graph_def
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
# print operations
for op in graph.get_operations():
print(op.name)
</code></pre>
<p>Hope this helps.</p>
|
<p>I am not sure exactly what problem you are facing; you can get the output tensor names from the TF Serving SavedModel signature. Actually, in the retinanet IPython/Jupyter notebook they have mentioned the output format as well.</p>
<p>Querying the saved model gives</p>
<pre><code> """ The given SavedModel SignatureDef contains the following output(s):
outputs['filtered_detections/map/TensorArrayStack/TensorArrayGatherV3:0'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 300, 4)
name: filtered_detections/map/TensorArrayStack/TensorArrayGatherV3:0
outputs['filtered_detections/map/TensorArrayStack_1/TensorArrayGatherV3:0'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 300)
name: filtered_detections/map/TensorArrayStack_1/TensorArrayGatherV3:0
outputs['filtered_detections/map/TensorArrayStack_2/TensorArrayGatherV3:0'] tensor_info:
dtype: DT_INT32
shape: (-1, 300)
name: filtered_detections/map/TensorArrayStack_2/TensorArrayGatherV3:0
Method name is: tensorflow/serving/predict
---
From retina-net
In general, inference of the network works as follows:
boxes, scores, labels = model.predict_on_batch(inputs)
Where `boxes` are shaped `(None, None, 4)` (for `(x1, y1, x2, y2)`), scores is shaped `(None, None)` (classification score) and labels is shaped `(None, None)` (label corresponding to the score). In all three outputs, the first dimension represents the shape and the second dimension indexes the list of detections.
"""
</code></pre>
|
tensorflow|tensorflowsharp
| 0
|
8,379
| 65,801,868
|
Cold start recommender system implementation
|
<p>I have to implement a recommender system model.
The data I have been provided is unique ID and ICD codes of the patients.
How am I supposed to build the system when every new case has a unique id and there seems to be no relationship between the data?</p>
|
<p>Two ways:</p>
<ol>
<li>Treat every new ID as an unknown ("unk") ID and retrain your model periodically (e.g. daily).</li>
<li>Use online learning, updating the model incrementally for every new ID.</li>
</ol>
|
python-3.x|machine-learning|tensorflow2.0|recommendation-system
| 0
|
8,380
| 65,805,813
|
Create line plot from dataframe with two columns index
|
<p>I have the following dataframe:</p>
<pre><code>>>> mean_traf_tie
a d c
0.22 0.99 0.11 22
0.23 21
0.34 34
0.46 45
0.44 0.99 0.11 45
0.23 65
0.34 66
0.46 68
0.50 0.50 0.11 22
0.23 12
0.34 34
0.46 37
...
</code></pre>
<p>I want to create a plot from this dataframe, in such a way that c will be the X axis, y will be the mean velocity and the lines will be according to the a and d columns. So for example, one line will be for a=0.22 and d=0.99, with x being c and y the mean velocity, and then the 2nd line will be for a=0.44 and d=0.99, etc.</p>
<p>I have tried to do it like this:</p>
<pre><code>df.plot()
</code></pre>
<p><a href="https://i.stack.imgur.com/dg5u5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dg5u5.png" alt="enter image description here" /></a></p>
<p>(values are different in the original dataframe).</p>
<p>As you can see, for some reason it plots the a,d on the x axis and creates only one line.</p>
<p>I have tried to fix it like this:</p>
<pre><code>df.unstack(level=0).plot(figsize=(10,6))
</code></pre>
<p>but then I got a very weird graph, with the correct lines by a and d but the wrong x axis:
<a href="https://i.stack.imgur.com/eUG4d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eUG4d.png" alt="enter image description here" /></a></p>
<p>As you can see, it somehow plots the a,d values, but that's not what I want; I want it to be the c column, and then to create lines based on the a,d columns, which should create continuous lines.
I have tried that as well:</p>
<pre><code>df[('mean_traf_tie')].unstack(level=0).plot(figsize=(10,6))
plt.xlabel('C')
plt.ylabel('mean_traf_tie')
</code></pre>
<p>but got again:
<a href="https://i.stack.imgur.com/ApcVc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ApcVc.png" alt="enter image description here" /></a></p>
<p>The desired output will have the c column as x axis, the mean_traf_tie as y axis, and lines will be generated bseed on a and d columns (line for 0.22 and 0.99, line for 0.44 and 0.99 ect).</p>
|
<p>Update:
I have managed to work around it by concatenating the two index columns into one before plotting, like this:</p>
<pre><code>df['a,d'] = list(zip(df.a, df.d))
df=df.groupby(['a,d','C']).mean()
df.unstack(level=0).plot(figsize=(10,6))
</code></pre>
<p><a href="https://i.stack.imgur.com/J2gwa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J2gwa.png" alt="enter image description here" /></a></p>
<p>The legend is still not ideal, but I got the lines and axes as I wanted.</p>
<p>If anyone has a better idea how to do it with the original columns, I'm still open to learning.</p>
|
python|pandas|matplotlib|multi-index|line-plot
| 0
|
8,381
| 63,643,708
|
Sum values in a list contained in every row of a column pandas dataframe
|
<p>I have a df where every row in the column <code>"numbers"</code> is a list of floats. I want to add a column to the df with the sum of those floats.</p>
<pre><code>#current output
letter numbers
a [0.0, 0.1, 2.3]
b [5, 6.7, 11.21]
#desired output
letter numbers sum_result
a [0.0, 0.1, 2.3] 2.4
b [5, 6.7, 11.21] 22.91
</code></pre>
<p>I have tried sum(df.numbers) and get this error message</p>
<pre><code>TypeError: unsupported operand type(s) for +: 'int' and 'list'
</code></pre>
<p>Any help would be appreciated!</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.apply.html" rel="noreferrer"><code>Series.apply</code></a> with <code>sum</code>:</p>
<pre><code>df['sum_result'] = df['numbers'].apply(sum)
</code></pre>
<p>Or <code>list comprehension</code>:</p>
<pre><code>df['sum_result'] = [sum(x) for x in df['numbers']]
</code></pre>
|
python|pandas|list|dataframe
| 6
|
8,382
| 63,670,931
|
Create a date counter variable starting with a particular date
|
<p>I have a variable as:
<code>start_dt = 201901</code> which is basically Jan 2019</p>
<p>I have an initial data frame as:</p>
<pre><code>month
0
1
2
3
4
</code></pre>
<p>I want to add a new column (date) to the dataframe where for month 0, the date is the <code>start_dt</code> - 1 month, and for subsequent months, the date is a month + 1 increment.</p>
<p>I want the resulting dataframe as:</p>
<pre><code>month date
0 12/1/2018
1 1/1/2019
2 2/1/2019
3 3/1/2019
4 4/1/2019
</code></pre>
|
<p>You can subtract <code>1</code> and add datetimes converted to month periods by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Timestamp.to_period.html" rel="nofollow noreferrer"><code>Timestamp.to_period</code></a> and then output convert to timestamps by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.PeriodIndex.to_timestamp.html" rel="nofollow noreferrer"><code>to_timestamp</code></a>:</p>
<pre><code>start_dt = 201801
start_dt = pd.to_datetime(start_dt, format='%Y%m')
s = df['month'].sub(1).add(start_dt.to_period('m')).dt.to_timestamp()
print (s)
0 2017-12-01
1 2018-01-01
2 2018-02-01
3 2018-03-01
4 2018-04-01
Name: month, dtype: datetime64[ns]
</code></pre>
<p>Or is possible convert column to month offsets with subtract <code>1</code> and add datetime:</p>
<pre><code>s = df['month'].apply(lambda x: pd.DateOffset(months=x-1)).add(start_dt)
print (s)
0 2017-12-01
1 2018-01-01
2 2018-02-01
3 2018-03-01
4 2018-04-01
Name: month, dtype: datetime64[ns]
</code></pre>
|
python|pandas|datetime
| 1
|
8,383
| 71,821,463
|
Extract elements from a list into a dataframe based on a regex string
|
<p>I have a list that is built like this:</p>
<pre><code>mylist = ['2003 00045', 'John', 'Closed', '4/10/21', '19675-B', '2001 00065',
'Kate', 'Approved', '2005 00054', 'True']
</code></pre>
<p>I am trying to build a dataframe where the first column will contain all of the identifiers in the list (e.g., '2003 00045', '2001 00065', '2005 00054'), and the second column will contain all of the list elements that come after each identifier. The example dataframe would look like this:</p>
<pre><code>col1 col2
2003 00045 John, Closed, 4/10/21, 19675-B
2001 00065 Kate, Approved
2005 00054 True
</code></pre>
<p>I've been able to define the regex pattern to pull out the identifier from the list, but haven't been able to figure out how to pull out all the elements following the identifier. The list is unstructured so there is no set amount of elements that come after an identifier. I experimented with using a <code>dict</code> but the identifiers are not unique so if I treated them as keys, the code would overwrite values as it goes through the list.</p>
|
<p>Try:</p>
<pre class="lang-py prettyprint-override"><code>import re
r = re.compile(r"\d{4} \d{5}")
data, id_ = {}, None
for v in mylist:
if (m := r.match(v)):
id_ = m.group(0)
else:
data.setdefault(id_, []).append(v)
df = pd.DataFrame(
[{"col1": k, "col2": ", ".join(map(str, v))} for k, v in data.items()]
)
print(df.to_markdown())
</code></pre>
<p>Prints:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">col1</th>
<th style="text-align: left;">col2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">2003 00045</td>
<td style="text-align: left;">John, Closed, 4/10/21, 19675-B</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">2001 00065</td>
<td style="text-align: left;">Kate, Approved</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">2005 00054</td>
<td style="text-align: left;">True</td>
</tr>
</tbody>
</table>
</div><hr />
<p>Or:</p>
<pre class="lang-py prettyprint-override"><code>x = pd.DataFrame({"col1": mylist})
x = x.groupby(x["col1"].str.match(r"\d{4} \d{5}").cumsum()).agg(list)
df = pd.DataFrame(
{"col1": x["col1"].str[0].values, "col2": x["col1"].str[1:].values}
)
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> col1 col2
0 2003 00045 [John, Closed, 4/10/21, 19675-B]
1 2001 00065 [Kate, Approved]
2 2005 00054 [True]
</code></pre>
|
python|pandas|list|dataframe
| 2
|
8,384
| 56,677,980
|
Reading values within pandas.groupby
|
<p>I have a dataframe like below</p>
<pre><code> name item
0 Jack A
1 Sarah B
2 Ross A
3 Sean C
4 Jack C
5 Ross B
</code></pre>
<p>What I like to do is to produce a dictionary that connects people to the products they are related to.</p>
<pre><code>{Jack: [1, 0, 1], Sarah: [0, 1, 0], Ross:[1, 1, 0], Sean:[0, 0, 1]}
</code></pre>
<p>I feel like this should be done fairly easily using pandas.groupby</p>
<p>I have tried looping through the dataframe, but I have >1E7 entries, and looping does not look very efficient.</p>
|
<p>Check with <code>crosstab</code> and <code>to_dict</code></p>
<pre><code>pd.crosstab(df.item,df.name).to_dict('l')
{'Jack': [1, 0, 1], 'Ross': [1, 1, 0], 'Sarah': [0, 1, 0], 'Sean': [0, 0, 1]}
</code></pre>
<hr>
<p>Another interesting option is using <code>str.get_dummies</code>:</p>
<pre><code># if you need counts
df.set_index('item')['name'].str.get_dummies().sum(level=0).to_dict('l')
# if you want to record boolean indicators
df.set_index('item')['name'].str.get_dummies().max(level=0).to_dict('l')
# {'Jack': [1, 0, 1], 'Ross': [1, 1, 0], 'Sarah': [0, 1, 0], 'Sean': [0, 0, 1]}
</code></pre>
|
pandas|pandas-groupby
| 4
|
8,385
| 47,399,201
|
How to store a dictionary and map words to ints when using Tensorflow Serving?
|
<p>I have trained an LSTM RNN classification model on Tensorflow. I was saving and restoring checkpoints to retrain and use the model for testing. Now I want to use Tensorflow serving so that I can use the model in production.</p>
<p>Initially, I would parse through a corpus to create my dictionary which is then used to map words in a string to integers. I would then store this dictionary in a pickle file which could be reloaded when restoring a checkpoint and retraining on a data set or just for using the model so that the mapping is consistent. How do I store this dictionary when saving the model using SavedModelBuilder?</p>
<p>My code for the neural network is as follows. The code for saving the model is towards the end (I am including an overview of the whole structure for context):</p>
<pre><code>...
# Read files and store them in variables
with open('./someReview.txt', 'r') as f:
reviews = f.read()
with open('./someLabels.txt', 'r') as f:
labels = f.read()
...
#Pre-processing functions
#Parse through dataset and create a vocabulary
vocab_to_int, reviews = RnnPreprocessing.map_vocab_to_int(reviews)
with open(pickle_path, 'wb') as handle:
pickle.dump(vocab_to_int, handle, protocol=pickle.HIGHEST_PROTOCOL)
#More preprocessing functions
...
# Building the graph
lstm_size = 256
lstm_layers = 2
batch_size = 1000
learning_rate = 0.01
n_words = len(vocab_to_int) + 1
# Create the graph object
tf.reset_default_graph()
with tf.name_scope('inputs'):
inputs_ = tf.placeholder(tf.int32, [None, None], name="inputs")
labels_ = tf.placeholder(tf.int32, [None, None], name="labels")
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
#Create embedding layer LSTM cell, LSTM Layers
...
# Forward pass
with tf.name_scope("RNN_forward"):
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
# Output. We are only interested in the latest output of the lstm cell
with tf.name_scope('predictions'):
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
tf.summary.histogram('predictions', predictions)
#More functions for cost, accuracy, optimizer initialization
...
# Training
epochs = 1
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
summary, loss, state, _ = sess.run([merged, cost, final_state, optimizer], feed_dict=feed)
train_writer.add_summary(summary, iteration)
if iteration%1==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%2==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
summary, batch_acc, val_state = sess.run([merged, accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
test_writer.add_summary(summary, iteration)
#Saving the model
export_path = './SavedModel'
print ('Exporting trained model to %s'%(export_path))
builder = saved_model_builder.SavedModelBuilder(export_path)
# Build the signature_def_map.
classification_inputs = utils.build_tensor_info(inputs_)
classification_outputs_classes = utils.build_tensor_info(labels_)
classification_signature = signature_def_utils.build_signature_def(
inputs={signature_constants.CLASSIFY_INPUTS: classification_inputs},
outputs={
signature_constants.CLASSIFY_OUTPUT_CLASSES:
classification_outputs_classes,
},
method_name=signature_constants.CLASSIFY_METHOD_NAME)
legacy_init_op = tf.group(
tf.tables_initializer(), name='legacy_init_op')
#add the sigs to the servable
builder.add_meta_graph_and_variables(
sess, [tag_constants.SERVING],
signature_def_map={
signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
classification_signature
},
legacy_init_op=legacy_init_op)
print ("added meta graph and variables")
#save it!
builder.save()
print("model saved")
</code></pre>
<p>I am not entirely sure if this is the correct way to save a model such as this but this is the only implementation I have found in the documentation and online tutorials.</p>
<p>I haven't found any example or any explicit guide to saving the dictionary or how to use it when restoring a savedModel in the documentation.</p>
<p>When using checkpoints, I would just load the pickle file before running the session. How do I restore this savedModel so that I can use the same word to int mapping using the dictionary? Is there any specific way I should be saving the model or loading it?</p>
<p>I have also added inputs_ as the input for the input signature. This is a sequence of integers 'after' the words have been mapped. I can't specify a string as input because I get an <code>AttributeError: 'str' object has no attribute 'dtype'</code>. In such cases, how exactly are words mapped to integers in models that are in production?</p>
|
<p>One approach to this is storing the vocabulary in the model's graph. This will then be shipped with the model. </p>
<pre><code>...
vocab_table = lookup.index_table_from_file(vocabulary_file='data/vocab.csv', num_oov_buckets=1, default_value=-1)
text = features[commons.FEATURE_COL]
words = tf.string_split(text)
dense_words = tf.sparse_tensor_to_dense(words, default_value=commons.PAD_WORD)
word_ids = vocab_table.lookup(dense_words)
padding = tf.constant([[0, 0], [0, commons.MAX_DOCUMENT_LENGTH]])
# Pad all the word_ids entries to the maximum document length
word_ids_padded = tf.pad(word_ids, padding)
word_id_vector = tf.slice(word_ids_padded, [0, 0], [-1, commons.MAX_DOCUMENT_LENGTH])
</code></pre>
<p>Source: <a href="https://github.com/KishoreKarunakaran/CloudML-Serving/blob/master/text/imdb_cnn/model/cnn_model.py#L83" rel="nofollow noreferrer">https://github.com/KishoreKarunakaran/CloudML-Serving/blob/master/text/imdb_cnn/model/cnn_model.py#L83</a></p>
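<p>For completeness, here is a rough, hedged sketch of how the serving-side input could look with this approach in your setup. It is not your exact code: the placeholder name <code>text_input</code>, the <code>vocab.txt</code> file (one word per line, written out from your <code>vocab_to_int</code> dict), and the padding token are assumptions.</p>
<pre><code>import tensorflow as tf
from tensorflow.contrib import lookup

# Hypothetical serving input: raw strings instead of pre-mapped integer ids.
text_input = tf.placeholder(tf.string, [None], name='text_input')

# vocab.txt is assumed to contain one word per line, written from vocab_to_int.
vocab_table = lookup.index_table_from_file(
    vocabulary_file='vocab.txt', num_oov_buckets=1, default_value=-1)

words = tf.string_split(text_input)
dense_words = tf.sparse_tensor_to_dense(words, default_value='PAD')
word_ids = tf.cast(vocab_table.lookup(dense_words), tf.int32)

# The table initializer must run when the SavedModel is loaded, which is why
# the exporting code passes tf.tables_initializer() via legacy_init_op.
</code></pre>
<p>The word-to-int mapping then lives inside the exported graph, so production requests can send raw text and no pickle file has to travel with the model.</p>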
|
machine-learning|tensorflow|lstm|tensorflow-serving|word-embedding
| 0
|
8,386
| 47,393,658
|
Pandas generate missing dates & hours with 0 values
|
<p>I have this dataframe :</p>
<pre><code>date station count
2015-01-01 13:00:00 A 4
2015-01-01 14:00:00 B 2
2015-01-02 15:00:00 A 7
</code></pre>
<p>For simplicity, pretend that the station only has 2 values: A & B</p>
<p>My goal is to generate 0 count for each date, each hour, and each station.</p>
<p>For example, the code will generate : </p>
<pre><code>date station count
2015-01-01 00:00:00 A 0
2015-01-01 00:00:00 B 0
</code></pre>
<p>This is what I tried :</p>
<pre><code># generate 0 values (no transaction) for each hour at each station
df_trans = df_trans.set_index(['date', 'station'])
(date_index, station_index) = df_trans.index.levels
# generate a range of all dates & hours
all_dates = pd.date_range('2014-01-09', '2015-12-08', freq='H')
new_index = pd.MultiIndex.from_product([all_dates, station_index])
df_trans = df_trans.reindex(new_index)
df_trans = df_trans['net_rate'].fillna(0)
</code></pre>
<p>However the result dataframe is not hourly.</p>
<p>The output (no hour in date) :</p>
<pre><code> net_rate
2014-01-09 2 0.0
3 0.0
4 0.0
</code></pre>
|
<p>For me it working nice, small improvement is use parameter <code>fill_value=0</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>reindex</code></a>:</p>
<pre><code>new_index = pd.MultiIndex.from_product([all_dates, station_index], names=('date', 'station'))
df_trans = df_trans.reindex(new_index, fill_value=0)
print (df_trans.head(10))
count
date station
2014-01-09 00:00:00 A 0
B 0
2014-01-09 01:00:00 A 0
B 0
2014-01-09 02:00:00 A 0
B 0
2014-01-09 03:00:00 A 0
B 0
2014-01-09 04:00:00 A 0
B 0
print (df_trans[df_trans['count'] != 0])
count
date station
2015-01-01 13:00:00 A 4
2015-01-01 14:00:00 B 2
2015-01-02 15:00:00 A 7
</code></pre>
<hr>
<pre><code>print (df_trans.index.levels)
[[2014-01-09 00:00:00, 2014-01-09 01:00:00, 2014-01-09 02:00:00, 2014-01-09 03:00:00,
2014-01-09 04:00:00, 2014-01-09 05:00:00, 2014-01-09 06:00:00, 2014-01-09 07:00:00,
2014-01-09 08:00:00, 2014-01-09 09:00:00, 2014-01-09 10:00:00, 2014-01-09 11:00:00,
2014-01-09 12:00:00, 2014-01-09 13:00:00, 2014-01-09 14:00:00, 2014-01-09 15:00:00,
2014-01-09 16:00:00, 2014-01-09 17:00:00, 2014-01-09 18:00:00, 2014-01-09 19:00:00,
2014-01-09 20:00:00, 2014-01-09 21:00:00, 2014-01-09 22:00:00, 2014-01-09 23:00:00,
2014-01-10 00:00:00, 2014-01-10 01:00:00, 2014-01-10 02:00:00, 2014-01-10 03:00:00,
2014-01-10 04:00:00, 2014-01-10 05:00:00, 2014-01-10 06:00:00, 2014-01-10 07:00:00,
2014-01-10 08:00:00, 2014-01-10 09:00:00, 2014-01-10 10:00:00, 2014-01-10 11:00:00,
2014-01-10 12:00:00, 2014-01-10 13:00:00, 2014-01-10 14:00:00, 2014-01-10 15:00:00,
2014-01-10 16:00:00, 2014-01-10 17:00:00, 2014-01-10 18:00:00, 2014-01-10 19:00:00,
2014-01-10 20:00:00, 2014-01-10 21:00:00, 2014-01-10 22:00:00, 2014-01-10 23:00:00,
2014-01-11 00:00:00, 2014-01-11 01:00:00, 2014-01-11 02:00:00, 2014-01-11 03:00:00,
2014-01-11 04:00:00, 2014-01-11 05:00:00, 2014-01-11 06:00:00, 2014-01-11 07:00:00,
2014-01-11 08:00:00, 2014-01-11 09:00:00, 2014-01-11 10:00:00, 2014-01-11 11:00:00,
2014-01-11 12:00:00, 2014-01-11 13:00:00, 2014-01-11 14:00:00, 2014-01-11 15:00:00,
2014-01-11 16:00:00, 2014-01-11 17:00:00, 2014-01-11 18:00:00, 2014-01-11 19:00:00,
2014-01-11 20:00:00, 2014-01-11 21:00:00, 2014-01-11 22:00:00, 2014-01-11 23:00:00,
2014-01-12 00:00:00, 2014-01-12 01:00:00, 2014-01-12 02:00:00, 2014-01-12 03:00:00,
2014-01-12 04:00:00, 2014-01-12 05:00:00, 2014-01-12 06:00:00, 2014-01-12 07:00:00,
2014-01-12 08:00:00, 2014-01-12 09:00:00, 2014-01-12 10:00:00, 2014-01-12 11:00:00,
2014-01-12 12:00:00, 2014-01-12 13:00:00, 2014-01-12 14:00:00, 2014-01-12 15:00:00,
2014-01-12 16:00:00, 2014-01-12 17:00:00, 2014-01-12 18:00:00, 2014-01-12 19:00:00,
2014-01-12 20:00:00, 2014-01-12 21:00:00, 2014-01-12 22:00:00, 2014-01-12 23:00:00,
2014-01-13 00:00:00, 2014-01-13 01:00:00, 2014-01-13 02:00:00, 2014-01-13 03:00:00, ...], ['A', 'B']]
</code></pre>
|
python|pandas|dataframe|time-series
| 1
|
8,387
| 47,118,677
|
Does Numpy fancy indexing copy values directly to another array?
|
<p>According to the documentation that I could find, when using fancy indexing a copy rather than a view is returned. However, I couldn't figure out what its behavior is during assignment to another array, for instance:</p>
<pre><code>A = np.arange(0,10)
B = np.arange(-10,0)
fancy_slice = np.array([0,3,5])
A[fancy_slice] = B[fancy_slice]
</code></pre>
<p>I understand that <code>A</code> will just receive a call to <code>__setitem__</code> while <code>B</code> will get a call to <code>__getitem__</code>. What I am concerned about is whether an intermediate array is created before copying the values over to <code>A</code>.</p>
|
<p>The interpreter will parse the code and issue the method calls as:</p>
<pre><code>A[idx] = B[idx]
A.__setitem__(idx, B.__getitem__(idx))
</code></pre>
<p>The <code>B</code> method is evaluated fully before being passed to the <code>A</code> method. <code>numpy</code> doesn't alter the Python interpreter or its syntax. Rather it just adds functions, objects, and methods.</p>
<p>Functionally, it should be the equivalent to</p>
<pre><code>temp = B[idx]
A[idx] = temp
del temp
</code></pre>
<p>We could do some <code>timeit</code> just be sure.</p>
<pre><code>In [712]: A = np.zeros(10000,int)
In [713]: B = np.arange(10000)
In [714]: idx = np.arange(0,10000,100)
In [715]: timeit A[idx] = B[idx]
1.2 µs ± 3.24 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [716]: %%timeit
...: temp = B[idx]
...: A[idx] = temp
...:
1.11 µs ± 0.669 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
</code></pre>
<p>There are some alternative functions/methods, like <code>add.at</code>, <code>copyto</code>, <code>place</code>, <code>put</code>, that may do some copies without an intermediate, but I haven't used them much. This indexed assignment is good enough - most of the time.</p>
<p>Example with <code>copyto</code></p>
<pre><code>In [718]: wh = np.zeros(A.shape, bool)
In [719]: wh[idx] = True
In [721]: np.copyto(A, B, where=wh)
In [722]: timeit np.copyto(A, B, where=wh)
7.47 µs ± 9.92 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
</code></pre>
<p>So even without timing the construction of the boolean mask, <code>copyto</code> is slower.</p>
<p><code>put</code> and <code>take</code> are no better:</p>
<pre><code>In [727]: timeit np.put(A,idx, np.take(B,idx))
7.98 µs ± 8.34 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
</code></pre>
|
python|arrays|numpy
| 3
|
8,388
| 47,313,765
|
ValueError: Cannot feed value of shape (200,) for Tensor 'Placeholder_32:0', which has shape '(?, 1)'
|
<p>I am new to tensorflow. This code is just for a simple neural network.
I think the problem may come from:</p>
<pre><code>x_data = np.linspace(-0.5,0.5,200)[:np.newaxis]
</code></pre>
<p>I tried to write it without <code>[:np.newaxis]</code>, but the result looks the same.</p>
<pre><code>import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
x_data = np.linspace(-0.5,0.5,200)[:np.newaxis]
noise = np.random.normal(0,0.02,x_data.shape)
y_data = np.square(x_data) + noise
x = tf.placeholder(tf.float32,[None,1])
y = tf.placeholder(tf.float32,[None,1])
Weights_L1 = tf.Variable(tf.random_normal([1,10]))
biases_L1 = tf.Variable(tf.zeros([1,10]))
Wx_plus_b_L1 = tf.matmul(x,Weights_L1) + biases_L1
L1 = tf.nn.tanh(Wx_plus_b_L1)
Weights_L2 = tf.Variable(tf.random_normal([10,1]))
biases_L2 = tf.Variable(tf.zeros([1,1]))
Wx_plus_b_L2 = tf.matmul(L1,Weights_L2) + biases_L2
prediction = tf.nn.tanh(Wx_plus_b_L2)
loss = tf.reduce_mean(tf.square(y-prediction))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for _ in range(2000):
sess.run(train_step,feed_dict={x:x_data,y:y_data})
prediction_value = sess.run(prediction,feed_dict={x:x_data})
plt.figure()
plt.scatter(x_data,y_data)
plt.plot(x_data,prediction_value,'r-',lw=5)
plt.show()
</code></pre>
|
<p>The defined placeholders (both <code>x</code> and <code>y</code>) are 2-dimensional, so you should reshape the input arrays to rank 2. Try to add this:</p>
<pre class="lang-py prettyprint-override"><code>x_data = x_data.reshape([-1,1])
y_data = y_data.reshape([-1,1])
</code></pre>
|
python|numpy|machine-learning|tensorflow|neural-network
| 0
|
8,389
| 68,044,757
|
Replacing -999 with a number but I want all replaced number to be different
|
<p>I have a Pandas DataFrame named <code>df</code> and in <code>df['salary']</code> column, there are 400 values represented by same number <code>-999</code>. I want to replace that <code>-999</code> value with any number in between <code>200</code> and <code>500</code>. I want to replace all 400 values with a different number from 200 to 500. So far I have written this code:</p>
<pre><code>df['salary'] = df['salary'].replace(-999, random.randint(200, 500))
</code></pre>
<p>but this code is replacing all -999 with the same value. I want all replaced values to be different from each other. How can I do this?</p>
|
<p>You can use <code>Series.mask</code> with <code>np.random.randint</code>:</p>
<pre><code>df = pd.DataFrame({"salary":[0,1,2,3,4,5,-999,-999,-999,1,3,5,-999]})
df['salary'] = df["salary"].mask(df["salary"].eq(-999), np.random.randint(200, 500, size=len(df)))
print (df)
salary
0 0
1 1
2 2
3 3
4 4
5 5
6 413
7 497
8 234
9 1
10 3
11 5
12 341
</code></pre>
<p>If you want non-repeating numbers instead:</p>
<pre><code>s = pd.Series(range(200, 500)).sample(frac=1).reset_index(drop=True)
df['salary'] = df["salary"].mask(df["salary"].eq(-999), s)
</code></pre>
|
pandas
| 0
|
8,390
| 68,283,105
|
Merits of avoiding allocations for soft realtime NumPy/CPython
|
<p>I read that (soft) real-time programs often avoid heap allocations in part due to unpredictable timings, especially when stop-the-world (STW) garbage collection (GC) is used to free memory. I'm wondering if avoiding heap allocations is at all helpful for reducing lag in a main loop (say, 100 Hz) that uses NumPy and CPython. My questions:</p>
<ul>
<li>CPython uses reference counting for the most part and a STW GC for cyclic references. Does that mean the STW part would never trigger if I don't use any objects with cyclic references? For example, scalars and NumPy arrays don't seem to have cyclic references, and most of them would not go beyond the function in which they are allocated.</li>
<li>Would reducing array allocations (preallocate, in-place, etc) make a significant difference?</li>
<li>Typical NumPy expressions allocate a temporary array for every operation; what are some good ways to get around this? Only thing that comes to mind now is very tedious Numba rewrites, and even then I'm not sure if non-ufuncs can avoid allocating a temporary array e.g. <code>output[:] = not_a_ufunc(input)</code></li>
</ul>
|
<blockquote>
<p>CPython uses reference counting for the most part and a STW GC for cyclic references. Does that mean the STW part would never trigger if I don't use any objects with cyclic references? For example, scalars and NumPy arrays don't seem to have cyclic references, and most of them would not go beyond the function in which they are allocated.</p>
</blockquote>
<p>As long as a Numpy array contains native scalars (e.g. <code>np.float64</code>, <code>np.int32</code>, etc.), it is generally fine. But if the Numpy array contains pure CPython objects, then the GC could likely be an issue (although this is rarely the case since cycles are rare and Python uses a generational GC).</p>
<p>Actually, the GC could run a collection in both cases (especially when new CPython objects are created/deleted, including Numpy arrays). However, the overhead of the GC is negligible for a program using natively-typed Numpy arrays since the number of references is small (cells of the array are not visible to the garbage collector in this case, as opposed to the case where the Numpy array contains pure CPython objects).</p>
<p>Note that reference cycles are theoretically possible with Numpy arrays containing pure CPython objects, as two arrays can contain references to each other:</p>
<pre class="lang-py prettyprint-override"><code>a = np.empty(1, dtype=object)
b = np.empty(1, dtype=object)
a[0] = b
b[0] = a
</code></pre>
<p>Note that you can disable the GC in your targeted use case as stated in the <a href="https://docs.python.org/3/library/gc.html" rel="nofollow noreferrer">Python documentation</a>. It should however not make a significant difference in most cases.</p>
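<p>A minimal sketch of that idea (the <code>do_one_tick</code> function below is a made-up stand-in for your per-tick NumPy work, not a real API): disable the cyclic collector around the latency-critical loop and collect manually at a safe point.</p>
<pre class="lang-py prettyprint-override"><code>import gc
import numpy as np

def do_one_tick(state):
    # hypothetical per-tick work: a couple of small NumPy operations
    return state * 0.99 + 0.01

state = np.zeros(16)
gc.disable()                     # stop automatic cyclic collections
try:
    for _ in range(10_000):      # stand-in for the 100 Hz main loop
        state = do_one_tick(state)
finally:
    gc.enable()
    gc.collect()                 # pay the collection cost at a safe point
</code></pre>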
<blockquote>
<p>Would reducing array allocations (preallocate, in-place, etc) make a significant difference?</p>
</blockquote>
<p>Definitely, yes! When you deal with many very-small Numpy arrays, creating an array is quite expensive (more than 400 ns on my machine). <a href="https://stackoverflow.com/questions/67669074">This post</a> and <a href="https://stackoverflow.com/questions/67189935">this one</a> are interesting examples showing the cost of allocating Numpy arrays. However, you should check this is the actual problem before applying in-place optimizations massively in a big codebase, as it makes the code clearly harder to read and maintain (and so reduces the ability to apply further high-level optimisation later).</p>
<blockquote>
<p>Typical NumPy expressions allocate a temporary array for every operation; what are some good ways to get around this?</p>
</blockquote>
<p>You can use the <code>out</code> parameter of Numpy functions so that they do not allocate a new array (as seen in the previous SO post link). Note that this is not always possible.</p>
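<p>A small illustration of the idea (the arrays and the expression here are arbitrary):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

a = np.random.rand(8)
b = np.random.rand(8)
out = np.empty_like(a)        # preallocated once, e.g. outside a hot loop

np.multiply(a, b, out=out)    # writes into `out`, no new array is allocated
np.add(out, 1.0, out=out)     # further operations chained in place
# equivalent to `out = a * b + 1.0`, but without temporary arrays
</code></pre>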
<blockquote>
<p>Only thing that comes to mind now is very tedious Numba rewrites, and even then I'm not sure if non-ufuncs can avoid allocating a temporary array e.g. output[:] = not_a_ufunc(input)</p>
</blockquote>
<p>Using instructions like <code>output[:] = numpy_function(...)</code> may not help a lot as it will likely create a new temporary array and perform a copy. The copy is often expensive on big arrays but often cheap on small ones (due to CPU caches).</p>
<p>AFAIK, Numba barely optimizes allocations (unless the variable is unused or the code is trivial). However, Numba helps to avoid creating many temporary arrays. Not to mention that the creation of temporary arrays is not the only problem with small Numpy arrays: Numpy calls are quite expensive too (due to many internal checks and the C/Python context switch in the interpreter) and reducing the number of Numpy calls can quickly become tedious or tricky.</p>
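<p>For illustration only, a minimal sketch of fusing an expression with Numba so that no temporary arrays are created (the function name and the expression are made up):</p>
<pre class="lang-py prettyprint-override"><code>import numba
import numpy as np

@numba.njit
def fused(a, b, out):
    # single pass over the data, no temporaries: out = a * b + 1.0
    for i in range(a.shape[0]):
        out[i] = a[i] * b[i] + 1.0

a = np.random.rand(100)
b = np.random.rand(100)
out = np.empty_like(a)
fused(a, b, out)
</code></pre>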
|
python|numpy|garbage-collection|real-time|cpython
| 2
|
8,391
| 57,167,200
|
Unable to train from flow_from_dataframe Got unexpected no. of classes
|
<p>I am going to train a model on a set of images whose labels are in a csv file. So I used <code>flow_from_dataframe</code> from <code>tf.keras</code> and specified the parameters, but when it comes to <code>class_mode</code> it reports <code>Found 3662 validated image filenames belonging to 1 classes.</code> for both sparse and categorical. This is multi-class classification.</p>
<p>Initially the labels were int, so I converted them to strings and then I got this output.</p>
<pre><code>df_train=pd.read_csv(r"../input/train.csv",delimiter=',')
df_test=pd.read_csv(r"../input/test.csv",delimiter=',')
print(df_train.head())
print(df_test.head())
df_train['id_code']=df_train['id_code']+'.png'
df_train['diagnosis']=str(df_train['diagnosis'])
df_test['id_code']=df_test['id_code']+'.png'
""" output is
id_code diagnosis
0 000c1434d8d7 2
1 001639a390f0 4
2 0024cdab0c1e 1
3 002c21358ce6 0
4 005b95c28852 0
id_code
0 0005cfc8afb6
1 003f0afdcd15
2 006efc72b638
3 00836aaacf06
4 009245722fa4
"""
train_datagen = ImageDataGenerator(
rescale = 1./255,
rotation_range=30,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
TRAINING_DIR='../input/train_images'
train_generator= train_datagen.flow_from_dataframe(
dataframe=df_train,
directory=TRAINING_DIR,
x_col='id_code',
y_col='diagnosis',
batch_size=20,
target_size=(1050,1050),
class_mode='categorical'#used also sparsed
)
""" output is
Found 3662 validated image filenames belonging to 1 classes.
"""
</code></pre>
<p>I expect the output of <code>"Found 3662 validated image filenames belonging to 5 classes"</code>, but the actual output is <code>"Found 3662 validated image filenames belonging to 1 classes"</code>.</p>
|
<p>"sparse" class mode requires integer value and "categorical" requires one hot encoded vector of your class columns. So I would try:</p>
<pre><code>df['diagnosis'] = df['diagnosis'].astype(str)
</code></pre>
<p>and then use "sparse" class mode.</p>
<pre><code>train_generator= train_datagen.flow_from_dataframe(
dataframe=df_train,
directory=TRAINING_DIR,
x_col='id_code',
y_col='diagnosis',
batch_size=20,
target_size=(1050,1050),
class_mode='sparse'
)
</code></pre>
<p>Or <strong>alternatively</strong> you can use <strong>one hot encoding</strong> like this:</p>
<pre><code>pd.get_dummies(df,prefix=['diagnosis'], drop_first=True)
</code></pre>
<p>And then use "categorical" class_mode: </p>
<pre><code>train_generator= train_datagen.flow_from_dataframe(
dataframe=df_train,
directory=TRAINING_DIR,
x_col='id_code',
y_col=df.columns[1:],
batch_size=20,
target_size=(1050,1050),
class_mode='categorical'
)
</code></pre>
|
python|tensorflow|keras
| 2
|
8,392
| 46,021,216
|
Implementing attention with beam search in tensorflow
|
<p>I have written my own code with reference to <a href="https://github.com/tensorflow/nmt" rel="nofollow noreferrer">this</a> wonderful tutorial and I am not able to get results when using attention with beam search. As per my understanding, in the class AttentionModel the _build_decoder_cell function creates a separate decoder cell and attention wrapper for inference mode, assuming this (which I think is incorrect and I can't find a way around it),</p>
<pre><code>with tf.name_scope("Decoder"):
mem_units = 2*dim
dec_cell = tf.contrib.rnn.BasicLSTMCell( 2*dim )
beam_cel = tf.contrib.rnn.BasicLSTMCell( 2*dim )
beam_width = 3
out_layer = Dense( output_vocab_size )
with tf.name_scope("Training"):
attn_mech = tf.contrib.seq2seq.BahdanauAttention( num_units = mem_units, memory = enc_rnn_out, normalize=True)
attn_cell = tf.contrib.seq2seq.AttentionWrapper( cell = dec_cell,attention_mechanism = attn_mech )
batch_size = tf.shape(enc_rnn_out)[0]
initial_state = attn_cell.zero_state( batch_size = batch_size , dtype=tf.float32 )
initial_state = initial_state.clone(cell_state = enc_rnn_state)
helper = tf.contrib.seq2seq.TrainingHelper( inputs = emb_x_y , sequence_length = seq_len )
decoder = tf.contrib.seq2seq.BasicDecoder( cell = attn_cell, helper = helper, initial_state = initial_state ,output_layer=out_layer )
outputs, final_state, final_sequence_lengths= tf.contrib.seq2seq.dynamic_decode(decoder=decoder,impute_finished=True)
training_logits = tf.identity(outputs.rnn_output )
training_pred = tf.identity(outputs.sample_id )
with tf.name_scope("Inference"):
enc_rnn_out_beam = tf.contrib.seq2seq.tile_batch( enc_rnn_out , beam_width )
seq_len_beam = tf.contrib.seq2seq.tile_batch( seq_len , beam_width )
enc_rnn_state_beam = tf.contrib.seq2seq.tile_batch( enc_rnn_state , beam_width )
batch_size_beam = tf.shape(enc_rnn_out_beam)[0] # now batch size is beam_width times
# start tokens mean be the original batch size so divide
start_tokens = tf.tile(tf.constant([27], dtype=tf.int32), [ batch_size_beam//beam_width ] )
end_token = 0
attn_mech_beam = tf.contrib.seq2seq.BahdanauAttention( num_units = mem_units, memory = enc_rnn_out_beam, normalize=True)
cell_beam = tf.contrib.seq2seq.AttentionWrapper(cell=beam_cel,attention_mechanism=attn_mech_beam,attention_layer_size=mem_units)
initial_state_beam = cell_beam.zero_state(batch_size=batch_size_beam,dtype=tf.float32).clone(cell_state=enc_rnn_state_beam)
my_decoder = tf.contrib.seq2seq.BeamSearchDecoder( cell = cell_beam,
embedding = emb_out,
start_tokens = start_tokens,
end_token = end_token,
initial_state = initial_state_beam,
beam_width = beam_width
,output_layer=out_layer)
beam_output, t1 , t2 = tf.contrib.seq2seq.dynamic_decode( my_decoder,
maximum_iterations=maxlen )
beam_logits = tf.no_op()
beam_sample_id = beam_output.predicted_ids
</code></pre>
<p>When I call beam_sample_id after training I am not getting the correct result.</p>
<p>My guess is that we are supposed to use the same attention wrapper, but that is not possible since we have to tile the sequences for beam search to be used.</p>
<p>Any insights / suggestion would be much appreciated.</p>
<p>i have also created an issue for this in their main repository <a href="https://github.com/tensorflow/nmt/issues/93" rel="nofollow noreferrer">Issue-93</a></p>
|
<p>I'm not sure what you mean by "I am not able to get results", but I'm assuming that your model is not making use of the weights learnt while training.</p>
<p>If this is the case, then first of all you need to know that it's all about variable sharing. The first thing you need to do is get rid of the different variable scopes between training and inference, and instead do the following.</p>
<p>remove the </p>
<pre><code>with tf.name_scope("Training"):
</code></pre>
<p>and use : </p>
<pre><code>with tf.variable_scope("myScope"):
</code></pre>
<p>and then remove the </p>
<pre><code>with tf.name_scope("Inference"):
</code></pre>
<p>and use instead </p>
<pre><code>with tf.variable_scope("myScope" , reuse=True):
</code></pre>
<p>Also, at the beginning of your code, right after <code>with tf.variable_scope("myScope")</code>, add:</p>
<pre><code>enc_rnn_out = tf.contrib.seq2seq.tile_batch( enc_rnn_out , 1 )
seq_len = tf.contrib.seq2seq.tile_batch( seq_len , 1 )
enc_rnn_state = tf.contrib.seq2seq.tile_batch( enc_rnn_state , 1 )
</code></pre>
<p>This will ensure that your inference variables and training variables have the same signature and are shared.</p>
<p>I have tested this while following the same tutorial that you mentioned. My model is still training as I'm writing this post, but I can see the accuracy increasing as we speak, which indicates that the solution should work for you as well.</p>
<p>thank you </p>
|
tensorflow
| 2
|
8,393
| 50,687,475
|
I am getting an error when I load Pandas and Numpy in
|
<p>I am getting below error when execute below:</p>
<pre><code>import numpy as np
</code></pre>
<p><strong>The full stack trace:</strong></p>
<pre><code> File "C:\Anaconda3\lib\site-packages\numpy\__init__.py", line 142, in <module>
from . import add_newdocs
File "C:\Anaconda3\lib\site-packages\numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "C:\Anaconda3\lib\site-packages\numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "C:\Anaconda3\lib\site-packages\numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "C:\Anaconda3\lib\site-packages\numpy\core\__init__.py", line 14, in <module>
from . import multiarray
ImportError: DLL load failed: The specified module could not be found.
Process finished with exit code 1
</code></pre>
|
<p>This looks like some of the dependencies are missing for the package <code>numpy</code>. The DLL load failed error points towards either incorrectly installed and/or missing files for package numpy. You should just try installing numpy again through Anaconda CLI using the following command:</p>
<pre><code>conda install numpy
</code></pre>
<p>This works most of the times, if your python environments are sorted correctly. Do make sure that the python installation you're using for Anaconda, is the same one as your coding environment. Conflicting versions can sometimes lead to corrupted packages.</p>
<p>You could also try the following command to update the Anaconda environment, just to make sure that the installations are up to date and not conflicting with older versions.</p>
<pre><code>conda update -y --all
</code></pre>
|
python-3.x|numpy
| 0
|
8,394
| 50,829,492
|
Extract list of JSON objects in string form from Pandas Dataframe column
|
<p>I have a perfectly normal pandas dataframe which I create after loading this dataset: <a href="https://www.kaggle.com/tmdb/tmdb-movie-metadata/data" rel="nofollow noreferrer">https://www.kaggle.com/tmdb/tmdb-movie-metadata/data</a></p>
<p>As you can see, the genres column contains a nested structure which appears to be a list of dictionaries, or json objects depending on how you see it? The keys of these dictionaries are 'id' and 'name'.</p>
<p>Anyways, I have tried everything including transforming the column into a json with tojson(), or using pandas json_normalize() method without any luck. </p>
<p>If I use json_normalize() I get an AttributeError: 'str' object has no attribute 'itervalues':</p>
<pre><code>pd.io.json.json_normalize(obj_movies['genres'], meta = ['id','name'])
</code></pre>
<p>In reality, my goal would be to parse this list to create a set of unique genres names for each row...</p>
|
<p>Use:</p>
<pre><code>import ast
obj_movies = pd.read_csv('tmdb_5000_movies.csv')
obj_movies['uniq'] = [list(set([y['name'] for y in x])) for x in obj_movies['genres'].apply(ast.literal_eval)]
print (obj_movies[['uniq'] ].head(10))
uniq
0 [Fantasy, Science Fiction, Adventure, Action]
1 [Fantasy, Adventure, Action]
2 [Crime, Adventure, Action]
3 [Drama, Crime, Thriller, Action]
4 [Science Fiction, Adventure, Action]
5 [Fantasy, Adventure, Action]
6 [Family, Animation]
7 [Science Fiction, Adventure, Action]
8 [Fantasy, Family, Adventure]
9 [Fantasy, Adventure, Action]
</code></pre>
|
python|pandas|dictionary
| 1
|
8,395
| 66,451,513
|
Check if Column exceeding specific value and replace
|
<p>I use the long list of codes similar to below codes, to check data frame with multiple columns</p>
<p>I need to check if the column has any values greater than Eg. 1000. If >1000 its error value, so make it '0'</p>
<pre><code>b=1000
a = np.array(df['E8'].values.tolist()); df['E8'] = np.where(a > b, 0, a).tolist()
a = np.array(df['E9'].values.tolist()); df['E9'] = np.where(a > b, 0, a).tolist()
a = np.array(df['E10'].values.tolist()); df['E10'] = np.where(a > b, 0, a).tolist()
a = np.array(df['E11'].values.tolist()); df['E11'] = np.where(a > b, 0, a).tolist()
a = np.array(df['E12'].values.tolist()); df['E12'] = np.where(a > b, 0, a).tolist()
a = np.array(df['E13'].values.tolist()); df['E13'] = np.where(a > b, 0, a).tolist()
a = np.array(df['E20'].values.tolist()); df['E20'] = np.where(a > b, 0, a).tolist()
a = np.array(df['E21'].values.tolist()); df['E21'] = np.where(a > b, 0, a).tolist()
a = np.array(df['E29'].values.tolist()); df['E29'] = np.where(a > b, 0, a).tolist()
a = np.array(df['E28'].values.tolist()); df['E28'] = np.where(a > b, 0, a).tolist()
a = np.array(df['E30'].values.tolist()); df['E30'] = np.where(a > b, 0, a).tolist()
a = np.array(df['E31'].values.tolist()); df['E31'] = np.where(a > b, 0, a).tolist()
a = np.array(df['E32'].values.tolist()); df['E32'] = np.where(a > b, 0, a).tolist()
a = np.array(df['E36'].values.tolist()); df['E36'] = np.where(a > b, 0, a).tolist()
a = np.array(df['E37'].values.tolist()); df['E37'] = np.where(a > b, 0, a).tolist()
</code></pre>
<p>Is there a simple and efficient way to do it.</p>
|
<p>You can simplify your solution with list of columns names:</p>
<pre><code>np.random.seed(2021)
cols = ['E8','E9','E10', 'E37']
df = pd.DataFrame(np.random.randint(0, 2000, size=(10, 4)), columns=cols)
b = 1000
df[cols] = np.where(df[cols] > b, 0, df[cols])
print (df)
E8 E9 E10 E37
0 0 0 57 0
1 621 0 44 0
2 830 669 0 349
3 152 0 502 198
4 0 70 545 613
5 0 257 0 0
6 0 0 410 0
7 944 0 63 0
8 656 0 822 379
9 0 0 632 0
</code></pre>
|
python|pandas|numpy
| 4
|
8,396
| 66,404,756
|
Converting Mobilenet Model to TFLite changes input size
|
<p>right now I'm trying to convert a SavedModel to TFLite for use on a raspberry pi. The model is MobileNet Object Detection trained on a custom dataset. The SavedModel works perfectly, and retains the same shape of <code>(1, 150, 150, 3)</code>. However, when I convert it to a TFLite model using this code:</p>
<pre><code>import tensorflow as tf
saved_model_dir = input("Model dir: ")
# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir) # path to the SavedModel directory
tflite_model = converter.convert()
# Save the model.
with open('model.tflite', 'wb') as f:
f.write(tflite_model)
</code></pre>
<p>And run this code to run the interpreter:</p>
<pre><code>import numpy as np
import tensorflow as tf
from PIL import Image
from os import listdir
from os.path import isfile, join
from random import choice, random
# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
print(f"Required input shape: {input_shape}")
</code></pre>
<p>I get an input shape of <code>[1 1 1 3]</code>, therefore I can't use a 150x150 image as input.</p>
<p>I'm using Tensorflow 2.4 on Python 3.7.10 with Windows 10.</p>
<p>How would I fix this?</p>
|
<p>You can rely on TFLite converter V1 API to set input shapes. Please check out the input_shapes argument in <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/lite/TFLiteConverter" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/compat/v1/lite/TFLiteConverter</a>.</p>
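<p>As a hedged sketch of what that could look like (the input tensor name <code>"input"</code> is an assumption here, so check the actual name in your SavedModel's signature, e.g. with <code>saved_model_cli show --dir saved_model --all</code>):</p>
<pre><code>import tensorflow as tf

saved_model_dir = "saved_model"  # path to your SavedModel directory

converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(
    saved_model_dir,
    input_shapes={"input": [1, 150, 150, 3]})  # "input" is assumed, not verified
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
</code></pre>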
|
python|tensorflow|tensorflow-lite|mobilenet
| 1
|
8,397
| 57,696,395
|
Finding the difference between two rows, over specific columns
|
<p>I'm wondering how I would find the difference between a number of columns in a pandas dataframe, while keeping other columns intact.</p>
<p>So if I have DataFrame, DF, I would want to find the difference between columns (val1, val2, val3), while retaining month and year. User type is not important, and can be removed.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'mo': ['6', '6'],
...: 'yr': ['2017', '2017'],
...: 'user_type': ['a', 'b'],
...: 'val1': ['1', '10'],
...: 'val2': ['2', '20'],
...: 'val3': ['3', '30']},
...: index=[0, 1])
#### DF ####
| index | mo | yr | user_type | val1 | val2 | val3 |
|-------|----|------|-----------|------|------|------|
| 0 | 6 | 2017 | a | 1 | 2 | 3 |
| 1 | 6 | 2017 | b | 10 | 20 | 30 |
</code></pre>
<p>Is this possible? I attempted to use <code>df.diff()</code>, which works if my dataframe contains only the three value columns, but not if I have the month and year columns. </p>
<p>Ideally, something like this would be my output</p>
<pre><code>| index | mo | yr | val1 | val2 | val3 |
|-------|----|------|------|------|------|
| 0 | 6 | 2017 | 9 | 18 | 27 |
</code></pre>
<p>Any help is greatly appreciated. </p>
|
<pre><code>df.groupby(['mo','yr'])['val1','val2','val3'].apply(lambda x : x.iloc[1]-x.iloc[0]).reset_index()
</code></pre>
<p><strong>Output</strong></p>
<pre><code> mo yr val1 val2 val3
0 6 2017 9 18 27
</code></pre>
|
python|pandas|dataframe
| 2
|
8,398
| 70,914,878
|
Pandas treat the same DataFrame differently when read from excel or read from an API
|
<p>I wrote a script that read an Excel file first and then manipulated the data frame. I replaced the read_excel part with an API call and transposed the result to look exactly like the Excel file, but now when I use the data directly from the API the rest of the script doesn't work properly, while if I save the df with to_excel and read it back with read_excel the script works fine!</p>
<p><strong>Working code:</strong></p>
<pre><code>res = requests.post(url, headers = headers)
data = res.json()
data = data['body'][0]['data']
df = pd.DataFrame(data)
df = df.T
df.to_excel(path+name,index=False)
df = pd.read_excel(path+name)
df.columns=['ID','Name','Tel','1','2','3']
</code></pre>
<p><strong>Code with error:</strong></p>
<pre><code>res = requests.post(url, headers = headers)
data = res.json()
data = data['body'][0]['data']
df = pd.DataFrame(data)
df = df.T
df.columns=['ID','Name','Tel','1','2','3']
</code></pre>
<p>PS: I've tried removing the index and converting the data types but nothing seems to work!</p>
<p>PPS: the error is a traceback about not finding the 'Tel' column about 20 lines after this one; it can be fixed by adding the column on the previous line, but the aggregation in another part still results in an empty data frame.</p>
<p>EDIT:</p>
<p>Whenever I use a dictionary (or list) to create the data frame instead of or after read_excel the error is raised.</p>
<pre><code>res = requests.post(url, headers = headers)
data = res.json()
data = data['body'][0]['data']
df = pd.DataFrame(data)
df = df.T
df.to_excel(path+name,index=False)
df = pd.read_excel(path+name)
dict1 = df.to_dict()
df = pd.DataFrame(dict1)
df.columns=['ID','Name','Tel','1','2','3']
</code></pre>
<p>this is the result of df.to_dict():</p>
<pre><code>{0: {0: r'u7it'},
 1: {0: 'تست'},
 2: {0: None},
 3: {0: 'کاملا موافقم'},
 4: {0: 'موافقم'},
 5: {0: 'کاملا موافقم'}
}
</code></pre>
|
<p>This is not an answer but cannot format code in Comments. If I do this:</p>
<pre><code>dict1 = {0: {0: r'u7it'},
         1: {0: 'تست'},
         2: {0: None},
         3: {0: 'کاملا موافقم'},
         4: {0: 'موافقم'},
         5: {0: 'کاملا موافقم'}
         }
df = pd.DataFrame(dict1)
df.columns=['ID','Name','Tel','1','2','3']
</code></pre>
<p>I get no errors and df looks fine:</p>
<pre><code>     ID  Name   Tel             1       2             3
0  u7it   تست  None  کاملا موافقم  موافقم  کاملا موافقم
</code></pre>
<p>can you check your python version and pandas version? Mine are</p>
<pre><code>!python -V
Python 3.8.8
!pip show pandas
Name: pandas
Version: 1.2.2
Summary: Powerful data structures for data analysis, time series, and statistics
Home-page: https://pandas.pydata.org
Author: None
Author-email: None
License: BSD
Location: c:\python38\lib\site-packages
Requires: numpy, python-dateutil, pytz
Required-by: seaborn, statsmodels
</code></pre>
|
python|pandas|dataframe
| 0
|
8,399
| 71,018,656
|
Getting error: module 'tensorflow.keras.layers' has no attribute 'Normalization'
|
<p>I am using</p>
<pre><code>tf.keras.layers.Normalization(axis=-1)
</code></pre>
<p>and am getting the following error:</p>
<pre><code>module 'tensorflow.keras.layers' has no attribute 'Normalization'
</code></pre>
<p>I'm following the tensorflow tutorial available <a href="https://www.tensorflow.org/tutorials/keras/regression" rel="nofollow noreferrer">here</a>. I went through the solution given <a href="https://stackoverflow.com/questions/69470332/i-get-error-module-tensorflow-keras-layers-has-no-attribute-normalization">here</a> and updated my tensorflow to v2.6.0. But still getting the error. I am new to tensorflow. Any help would be greatly appreciated.</p>
|
<p>For anyone who may be looking for an answer, I ditched Anaconda altogether and set up everything from scratch. Thanks for all the responses.</p>
|
python|tensorflow|keras
| 1
|