| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, length 15 to 150) | question (string, length 37 to 64.2k) | answer (string, length 37 to 44.1k) | tags (string, length 5 to 106) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
374,100
| 66,680,661
|
Find the movies that have more than one genre in movielens project - Pandas
|
<p>Hi, I need to find the movies that have more than one genre in the MovieLens project. The genre is not a single column; instead there are multiple columns like genre1, genre2, etc. I tried using <code>item.sum(axis=1)</code>, but it didn't fetch the required result.</p>
<p>I also tried the following code based on a solution thread, but it didn't work.</p>
<pre><code>tempdf = item[[column for column in item if 'genre' in column]]
number_of_genres = tempdf.sum(axis=1)
sub =item[number_of_genres > 1]
print(sub)
</code></pre>
<p>Can someone please help?</p>
|
<p>Assuming you use the MovieLens 100k data set (obtained from <a href="https://grouplens.org/datasets/movielens/" rel="nofollow noreferrer">https://grouplens.org/datasets/movielens/</a>):</p>
<p>It comes with a file called 'u.item' which contains movie information, including one-hot encoded genres.</p>
<p>Load the data:</p>
<pre><code>import pandas as pd
dt_dir_name = '/path/to/ml-100k/'
genres = ['unknown', 'Action' ,'Adventure' ,'Animation',
'Children' ,'Comedy' ,'Crime' ,'Documentary' ,'Drama' ,'Fantasy',
'Film-Noir' ,'Horror' ,'Musical' ,'Mystery' ,'Romance' ,'Sci-Fi',
'Thriller' ,'War' ,'Western']
movie_data = pd.read_csv(dt_dir_name +'/'+ 'u.item', delimiter='|', names=['movie id' ,'movie title' ,'release date' ,'video release date' ,
'IMDb URL'] + genres)
print('movie data', movie_data.shape)
</code></pre>
<p>Then we search for the movies with more than one genre and save the title in a list:</p>
<pre><code>movies_with_several_genres = []
for _, movie in movie_data.iterrows():
    if movie[genres].sum() > 1:
        movies_with_several_genres.append(movie['movie title'])
print(movies_with_several_genres)
</code></pre>
<p>Or more pythonic:</p>
<pre><code>print([movie['movie title'] for _, movie in movie_data.iterrows() if movie[genres].sum() > 1])
</code></pre>
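<p>For larger frames, a vectorized alternative (a sketch; it assumes the same <code>movie_data</code> and <code>genres</code> defined above) avoids <code>iterrows</code> entirely:</p>
<pre><code># boolean mask of rows whose one-hot genre columns sum to more than 1
mask = movie_data[genres].sum(axis=1) > 1
print(movie_data.loc[mask, 'movie title'].tolist())
</code></pre>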
|
pandas
| 1
|
374,101
| 66,680,557
|
Matplotlib & Pandas DateTime Compatibility
|
<p><strong>Problem</strong>: I am trying to make a very simple bar chart in Matplotlib of a Pandas DataFrame. The DateTime index is causing confusion, however: Matplotlib does not appear to understand the Pandas DateTime, and is labeling the years incorrectly. How can I fix this?</p>
<p><strong>Code</strong></p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

# Make date time series
index_dates = pd.date_range('2018-01-01', '2021-01-01')
# Make data frame with some random data, using the date time index
df = pd.DataFrame(index=index_dates,
                  data=np.random.rand(len(index_dates)),
                  columns=['Data'])
# Make a bar chart in matplotlib
fig, ax = plt.subplots(figsize=(12,8))
df.plot.bar(ax=ax)
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_minor_locator(mdates.MonthLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))
</code></pre>
<p>Instead of showing up as 2018-2021, however, the years show up as 1970 - 1973.
<a href="https://i.stack.imgur.com/N62tH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N62tH.png" alt="enter image description here" /></a></p>
<p>I've already looked at the answers <a href="https://stackoverflow.com/questions/5902371/matplotlib-bar-chart-with-dates">here</a>, <a href="https://stackoverflow.com/questions/46517171/matplotlib-pandas-datetime-frequency">here</a>, and documentation <a href="https://matplotlib.org/stable/gallery/text_labels_and_annotations/date.html" rel="nofollow noreferrer">here</a>. I know the date timeindex is in fact a datetime index because when I call <code>df.info()</code> it shows it as a datetime index, and when I call <code>index_dates[0].year</code> it returns 2018. How can I fix this? Thank you!</p>
|
<p>The problem is with mixing <code>df.plot.bar</code> and <code>matplotlib</code> here.</p>
<p><code>df.plot.bar</code> sets tick locations starting from 0 (and assigns labels), while <code>matplotlib.dates</code> expects the locations to be the number of days since 1970-01-01 (more info <a href="https://matplotlib.org/stable/api/dates_api.html" rel="nofollow noreferrer">here</a>).</p>
<p>If you do it with <code>matplotlib</code> directly, it shows labels correctly:</p>
<pre><code># Make a bar chart in matplotlib
fig, ax = plt.subplots(figsize=(12,8))
plt.bar(x=df.index, height=df['Data'])
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_minor_locator(mdates.MonthLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/RCYZK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RCYZK.png" alt="chart" /></a></p>
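<p>If you would rather keep <code>df.plot.bar</code>, one workaround (a sketch, assuming yearly labels are enough) is to set the tick labels yourself, since the bars sit at integer positions 0..N-1:</p>
<pre><code>fig, ax = plt.subplots(figsize=(12,8))
df.plot.bar(ax=ax)
# pick the bar positions that fall on January 1st and label them with the year
tick_pos = [i for i, d in enumerate(df.index) if d.month == 1 and d.day == 1]
ax.set_xticks(tick_pos)
ax.set_xticklabels([df.index[i].year for i in tick_pos], rotation=0)
</code></pre>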
|
python|pandas|datetime|matplotlib
| 1
|
374,102
| 66,702,153
|
ModuleNotFoundError: No module named 'pandas' when working with Tensorflow
|
<p>I am trying to make a face mask recognition model. Whenever I try to create my TF record, an import problem pops up. I have tried to install pandas with conda and with pip3. The odd part is that when I just import pandas it runs without a problem, but when I start generate_tfrecord.py this error shows up.
Here is my code so far:</p>
<pre><code>WORKSPACE_PATH = "Tensorflow/workspace"
SCRIPTS_PATH = "Tensorflow/scripts"
APIMODEL_PATH = "Tensorflow/models"
ANNOTATION_PATH = WORKSPACE_PATH + "/annotations"
IMAGE_PATH = WORKSPACE_PATH + "/images"
MODEL_PATH = WORKSPACE_PATH + "/models"
PRETRAINED_MODEL_PATH = WORKSPACE_PATH + "/pre-trained-models"
CONFIG_PATH = MODEL_PATH + '/my_ssd_mobnet/pipeline.config'
CHECKPOINT_PATH = MODEL_PATH + '/my_ssd_mobnet/'
</code></pre>
<p>and here is the command</p>
<pre><code>!python {SCRIPTS_PATH + '/generate_tfrecord.py'} -x {IMAGE_PATH + '/train'} -l {ANNOTATION_PATH + '/label_map.pbtxt'} -o {ANNOTATION_PATH + '/train.record'}
</code></pre>
<p>and here is the error message</p>
<pre><code>Traceback (most recent call last):
  File "Tensorflow/scripts/generate_tfrecord.py", line 21, in <module>
    import pandas as pd
ModuleNotFoundError: No module named 'pandas'
</code></pre>
|
<p>Try running <code>pip install pandas</code> in your CMD; it might solve your problem. Since importing pandas works interactively but not from the <code>!python</code> call, the script is probably running under a different Python environment than the one where pandas is installed.</p>
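<p>A minimal sketch (assuming a Jupyter-style notebook, given the <code>!python</code> call) that installs pandas into the exact interpreter the notebook kernel uses:</p>
<pre><code>import sys
# install into the same environment that runs this kernel
!{sys.executable} -m pip install pandas
</code></pre>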
|
python|tensorflow|anaconda
| 1
|
374,103
| 66,695,391
|
Execution of Inference Workloads on Coral Dev Board in CPU, GPU and TPU simultaneously
|
<p>I am currently working on executing inference workloads on the Coral Dev Board with TensorFlow Lite. I am trying to run inference on the CPU, GPU and TPU simultaneously to reduce inference latency.</p>
<p>Could you help me understand how I can execute inference on all the devices simultaneously? I could divide the layers of the network between the CPU and GPU for the training phase, but I am having trouble assigning layers of the network to each device for inference. The code is written in Python with the Keras API in TensorFlow.</p>
<p>Thanks.</p>
|
<p>As of now, if you compile your CPU TFLite model with the Edge TPU compiler (<a href="https://coral.ai/docs/edgetpu/compiler/" rel="nofollow noreferrer">https://coral.ai/docs/edgetpu/compiler/</a>), the compiler tries to map the operations onto the TPU only (as long as the operations are supported by the TPU).</p>
<p>The Edge TPU compiler cannot partition the model more than once, and as soon as an unsupported operation occurs, that operation and everything after it executes on the CPU, even if supported operations occur later.</p>
<p>So partitioning a single TFLite model into CPU, GPU and TPU is not feasible as of now.</p>
|
tensorflow|keras|google-coral
| 0
|
374,104
| 66,429,900
|
Numpy random number generators and lambda functions
|
<p>I'm trying to create a list of random variables <code>wtfuns</code> that I can call as: <code>wtfuns[i](size=1000)</code> to return a list of 1000 samples of the particular random variable. For this, I am using lambda functions as follows:</p>
<pre><code>wtfuns = []
pvals = [0.3, 0.5, 0.7]
for p in pvals:
    wtfuns.append(('bernoulli p=' + str(p), lambda **x: binom(p, **x)))
for i in range(3):
    print(wtfuns[i][1](size=1000).mean())
</code></pre>
<p>Output</p>
<pre><code>0.686
0.684
0.706
</code></pre>
<p>That is, in the column <code>wtfuns[:,1]</code> I have the same binomial random variable with parameter 0.7. However,</p>
<pre><code>for p in pvals:
    print(wtfuns[0][1](size=1000).mean())
</code></pre>
<p>produces</p>
<pre><code>0.311
0.524
0.67
</code></pre>
<p>Somehow the p value is being passed to the lambda function by reference. What is going on? I'm completely confused.</p>
|
<p>Yes, your first definition captures a reference to <code>p</code>. As <code>p</code> changes, the function's behavior changes. The solution is a trick that binds the current value of <code>p</code> into the lambda:</p>
<pre><code> wtfuns.append(('bernoulli p='+str(p),lambda p=p,**x: binom(p,**x)))
</code></pre>
<p>The <code>p=p</code> part captures the current value of <code>p</code> into a local default argument, which travels with the function.</p>
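<p>A self-contained sketch of the difference (using NumPy's generator directly instead of the <code>binom</code> helper from the question):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
pvals = [0.3, 0.5, 0.7]

# late binding: every lambda reads the shared loop variable p, which ends at 0.7
late = [lambda size: rng.binomial(1, p, size) for p in pvals]
# early binding: p=p freezes the current value into a default argument
early = [lambda size, p=p: rng.binomial(1, p, size) for p in pvals]

print([f(10000).mean() for f in late])   # all three near 0.7
print([f(10000).mean() for f in early])  # near 0.3, 0.5, 0.7
</code></pre>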
|
python|numpy|random
| 1
|
374,105
| 66,402,926
|
How to create empty column with a specific number in dataframe python?
|
<p>I am new to Python and I want to know how to create an empty DataFrame with a specific number of columns. Let's say I want to create 20 columns. What I tried:</p>
<pre><code>import pandas as pd
num = 20
for i in range(num):
    df = df + pd.DataFrame(columns=['col' + str(i)])
</code></pre>
<p>But I got the unwanted result:</p>
<pre><code>Empty DataFrame
Columns: [col0, col1, col10, col11, col12, col13, col14, col15, col16, col17, col18, col19, col2, col3, col4, col5, col6, col7, col8, col9]
Index: []
</code></pre>
<p>Desired result:</p>
<pre><code>Empty DataFrame
Columns: [col0, col1, col2,...,col19]
Index: []
</code></pre>
<p>How to rectify it? Any help will be much appreciated!</p>
|
<p>Assuming you wish to create an empty dataframe, the solution is to remove the for loop, and use a list comprehension for the column names:</p>
<pre><code>import pandas as pd
num = 20
df = pd.DataFrame(columns=['col'+str(i) for i in range(num)])
</code></pre>
|
python|pandas|dataframe
| 2
|
374,106
| 66,569,767
|
Changing multiple column names by column number in Pandas?
|
<p>I am borrowing this example from <a href="https://www.geeksforgeeks.org/how-to-rename-columns-in-pandas-dataframe/" rel="nofollow noreferrer">here</a>. I have a dataframe like this:</p>
<pre><code># Import pandas package
import pandas as pd
# Define a dictionary containing ICC rankings
rankings = {'test': ['India', 'South Africa', 'England',
'New Zealand', 'Australia'],
'odi': ['England', 'India', 'New Zealand',
'South Africa', 'Pakistan'],
't20': ['Pakistan', 'India', 'Australia',
'England', 'New Zealand']}
# Convert the dictionary into DataFrame
rankings_pd = pd.DataFrame(rankings)
# Before renaming the columns
print(rankings_pd)
test odi t20
0 India England Pakistan
1 South Africa India India
2 England New Zealand Australia
3 New Zealand South Africa England
4 Australia Pakistan New Zealand
</code></pre>
<p>Now let's say I want to change the name of the 1st and 2nd columns. This is what I am trying:</p>
<pre><code>rankings_pd[rankings_pd.columns[0:2]].columns = ['tes_after_change', 'odi_after_change']
print(rankings_pd[rankings_pd.columns[0:2]].columns)
Index(['test', 'odi'], dtype='object')
</code></pre>
<p>But this seems to return exactly the same column names without changing them.</p>
|
<p>Your attempt doesn't work because <code>rankings_pd[rankings_pd.columns[0:2]]</code> creates a temporary copy, so assigning to its <code>columns</code> never touches the original frame. Just use the <code>rename()</code> method and pass a <code>dictionary</code> of old and new names as key-value pairs in the <strong>columns</strong> parameter:</p>
<pre><code>rankings_pd=rankings_pd.rename(columns={'test':'tes_after_change','odi':'odi_after_change'})
</code></pre>
<p><strong>Edit</strong> by <a href="https://stackoverflow.com/users/7175713/sammywemmy"><code>@sammywemmy</code></a>:</p>
<p>You could extend the idea with an anonymous function:</p>
<pre><code>rankings_pd.rename(columns= lambda df: f"{df}_after_change" if df in ("test", "odi") else df)
</code></pre>
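<p>Since the question asks about renaming by column number, here is a sketch that builds the mapping from positions instead of hard-coded names:</p>
<pre><code>new_names = ['tes_after_change', 'odi_after_change']
# map the first two column labels (whatever they are) to the new names
rankings_pd = rankings_pd.rename(columns=dict(zip(rankings_pd.columns[0:2], new_names)))
</code></pre>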
|
python|pandas
| 8
|
374,107
| 66,705,131
|
Custom data generator build from tf.keras.utils.Sequence doesn't work with tensorflow model's fit api
|
<p>I implemented a sequence generator object according to guidelines from <a href="https://www.tensorflow.org/api_docs/python/tf/keras/utils/Sequence" rel="nofollow noreferrer">link</a>.</p>
<pre><code>import tensorflow as tf
from cv2 import imread, resize
from sklearn.utils import shuffle
import numpy as np
from tensorflow.keras import utils
import math
import keras as ks

class reader(tf.keras.utils.Sequence):
    def __init__(self, x, y, batch_size, n_class):
        self.x, self.y = x, y
        self.batch_size = batch_size
        self.n_class = n_class

    def __len__(self):
        return math.ceil(len(self.x) / self.batch_size)

    def __getitem__(self, idx):
        print('getitem', idx)
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        data_x = list()
        for batch in batch_x:
            tmp = list()
            for img_path in batch:
                try:
                    img = imread(img_path)
                    tmp.append(img)
                except Exception as e:
                    print(e)
                    print('failed to find path {}'.format(img_path))
            data_x.append(tmp)
        data_x = np.array(data_x, dtype='object')
        data_y = np.array(batch_y)
        data_y = utils.to_categorical(data_y, self.n_class)
        print('return item')
        print(data_x.shape)
        return (data_x, data_y)

    def on_epoch_end(self):
        # option method to run some logic at the end of each epoch: e.g. reshuffling
        print('on epoch end')
        seed = np.random.randint()
        self.x = shuffle(self.x, random_state=seed)
        self.y = shuffle(self.y, random_state=seed)
</code></pre>
<p>However, it doesn't work with tensorflow model's fit api. Below is the simple model architecture I used to replicate this issue.</p>
<pre><code>model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv3D(10, input_shape=(TEMPORAL_LENGTH,HEIGHT,WIDTH,CHANNEL), kernel_size=(2,2,2), strides=2))
model.add(tf.keras.layers.Conv3D(10, kernel_size=(2,3,3), strides=2))
model.add(tf.keras.layers.Conv3D(10, kernel_size=(2,3,3), strides=2))
model.add(tf.keras.layers.Conv3D(10, kernel_size=(2,3,3), strides=2))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(10))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy', tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()
</code></pre>
<p>Let me create a reader</p>
<p><code>r1 = reader(x_train, y_train, 20, 10)</code></p>
<p>Then I call the model.fit api.</p>
<pre><code>train_history = model.fit(r1, epochs=3, steps_per_epoch=5, verbose=1)
### output ###
getitem 0
return item
(20, 16, 192, 256, 3)
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train for 5 steps
Epoch 1/3
</code></pre>
<p>It will stay like this forever if I don't interrupt it. Out of curiosity, I tried this approach with a model created from the Keras API and, to my surprise, it just works!</p>
<pre><code>model = ks.models.Sequential()
model.add(ks.layers.Conv3D(10, input_shape=(TEMPORAL_LENGTH,HEIGHT,WIDTH,CHANNEL), kernel_size=(2,2,2), strides=2))
model.add(ks.layers.Conv3D(10, kernel_size=(2,3,3), strides=2))
model.add(ks.layers.Conv3D(10, kernel_size=(2,3,3), strides=2))
model.add(ks.layers.Conv3D(10, kernel_size=(2,3,3), strides=2))
model.add(ks.layers.Flatten())
model.add(ks.layers.Dense(10))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
train_history = model.fit(r1, epochs=3, steps_per_epoch=5, verbose=1)
### output ###
Epoch 1/3
getitem 586
return item
(20, 16, 192, 256, 3)
getitem 169
1/5 [=====>........................] - ETA: 22s - loss: 11.0373 - accuracy: 0.0000e+00return item
(20, 16, 192, 256, 3)
getitem 601
2/5 [===========>..................] - ETA: 12s - loss: 7.9983 - accuracy: 0.0250 return item
(20, 16, 192, 256, 3)
getitem 426
3/5 [=================>............] - ETA: 8s - loss: 10.7049 - accuracy: 0.2500return item
(20, 16, 192, 256, 3)
getitem 243
4/5 [=======================>......] - ETA: 3s - loss: 8.5093 - accuracy: 0.1875
</code></pre>
<p><strong>Dependencies</strong></p>
<ol>
<li>tensorflow-gpu: 2.1</li>
<li>keras-gpu: 2.3.1</li>
</ol>
|
<p>I am very sorry for the late response; I have found the fix for this issue.
All I needed to change was to convert <code>data_x</code> to <code>dtype='float32'</code> in <code>__getitem__()</code>. To replicate the issue, just change the dtype back to <code>'object'</code>.</p>
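<p>A small sketch of the difference (hypothetical shapes; with <code>dtype='object'</code> TensorFlow generally cannot build a dense tensor from the batch, which is what blocked <code>model.fit</code> here):</p>
<pre><code>import numpy as np
import tensorflow as tf

batch = [np.zeros((4, 4, 3)) for _ in range(2)]

dense = np.array(batch, dtype='float32')   # shape (2, 4, 4, 3)
t = tf.convert_to_tensor(dense)            # converts cleanly

ragged = np.array(batch, dtype='object')   # object-dtype array
# tf.convert_to_tensor(ragged) typically fails with an "Unsupported object type" error
</code></pre>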
<p>Besides that, please allow me to share that <em>class ActionDataGenerator</em> was modified from <a href="https://github.com/anujshah1003" rel="nofollow noreferrer">Anujshah's</a> tutorial.</p>
<pre><code>import tensorflow as tf
from sklearn.utils import shuffle
import cv2
from cv2 import imread, resize
from tensorflow.keras import utils
import math
import keras as ks
import pandas as pd
import numpy as np
import os
from collections import deque
import copy

class reader(tf.keras.utils.Sequence):
    def __init__(self, x, y, batch_size, n_class):
        self.x, self.y = x, y
        self.batch_size = batch_size
        self.n_class = n_class

    def __len__(self):
        return math.ceil(len(self.x) / self.batch_size)

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        data_x = list()
        for batch in batch_x:
            tmp = list()
            for img_path in batch:
                try:
                    img = imread(img_path)
                    if img.shape != (192, 256, 3):
                        img = cv2.resize(img, (256, 192))
                    tmp.append(img)
                except Exception as e:
                    print(e)
                    print('failed to find path {}'.format(img_path))
            data_x.append(tmp)
        data_x = np.array(data_x, dtype='float32')
        data_y = np.array(batch_y)
        data_y = utils.to_categorical(data_y, self.n_class)
        return data_x, data_y

    def on_epoch_end(self):
        # option method to run some logic at the end of each epoch: e.g. reshuffling
        seed = np.random.randint()
        self.x = shuffle(self.x, random_state=seed)
        self.y = shuffle(self.y, random_state=seed)

class ActionDataGenerator(object):
    def __init__(self, root_data_path, temporal_stride=1, temporal_length=16, resize=224, max_sample=20):
        self.root_data_path = root_data_path
        self.temporal_length = temporal_length
        self.temporal_stride = temporal_stride
        self.resize = resize
        self.max_sample = max_sample

    def file_generator(self, data_path, data_files):
        '''
        data_files - list of csv files to be read.
        '''
        for f in data_files:
            tmp_df = pd.read_csv(os.path.join(data_path, f))
            label_list = list(tmp_df['Label'])
            total_images = len(label_list)
            if total_images >= self.temporal_length:
                num_samples = int((total_images - self.temporal_length) / self.temporal_stride) + 1
                img_list = list(tmp_df['FileName'])
            else:
                print('num of frames is less than temporal length; hence discarding this file-{}'.format(f))
                continue
            samples = deque()
            samp_count = 0
            for img in img_list:
                if samp_count == self.max_sample:
                    break
                samples.append(img)
                if len(samples) == self.temporal_length:
                    samples_c = copy.deepcopy(samples)
                    samp_count += 1
                    for t in range(self.temporal_stride):
                        samples.popleft()
                    yield samples_c, label_list[0]

    def load_samples(self, data_cat='train', test_ratio=0.1):
        data_path = os.path.join(self.root_data_path, data_cat)
        csv_data_files = os.listdir(data_path)
        file_gen = self.file_generator(data_path, csv_data_files)
        iterator = True
        data_list = []
        while iterator:
            try:
                x, y = next(file_gen)
                x = list(x)
                data_list.append([x, y])
            except Exception as e:
                print('the exception: ', e)
                iterator = False
                print('end of data generator')
        # data_list = self.shuffle_data(data_list)
        return data_list

    def train_validation_split(self, data_list, target_column, val_size=0.1, ks_sequence=False):
        dataframe = pd.DataFrame(data_list)
        dataframe.columns = ['Feature', target_column]
        data_dict = dict()
        for i in range(len(np.unique(dataframe[target_column]))):
            data_dict[i] = dataframe[dataframe[target_column] == i]
        train, validation = pd.DataFrame(), pd.DataFrame()
        for df in data_dict.values():
            cut = int(df.shape[0] * val_size)
            val = df[:cut]
            rem = df[cut:]
            train = train.append(rem, ignore_index=True)
            validation = validation.append(val, ignore_index=True)
        if ks_sequence:
            return train['Feature'].values.tolist(), train['Label'].values.tolist(), \
                   validation['Feature'].values.tolist(), validation['Label'].values.tolist()  # without shuffle
        return train.values.tolist(), validation.values.tolist()  # without shuffle

root_data_path = 'C:\\Users\\AI-lab\\Documents\\activity_file\\UCF101\\csv_files\\'  # machine specific
CLASSES = 101
BATCH_SIZE = 10
EPOCHS = 1
TEMPORAL_STRIDE = 8
TEMPORAL_LENGTH = 16
MAX_SAMPLE = 20
HEIGHT = 192
WIDTH = 256
CHANNEL = 3

data_gen_obj = ActionDataGenerator(root_data_path, temporal_stride=TEMPORAL_STRIDE,
                                   temporal_length=TEMPORAL_LENGTH, max_sample=MAX_SAMPLE)
train_data = data_gen_obj.load_samples(data_cat='train')
x_train, y_train, x_val, y_val = data_gen_obj.train_validation_split(train_data, 'Label', 0.1, True)
r1 = reader(x_train, y_train, BATCH_SIZE, CLASSES)
r2 = reader(x_val, y_val, BATCH_SIZE, CLASSES)
print(type(r1), type(r2))

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv3D(10, input_shape=(TEMPORAL_LENGTH, HEIGHT, WIDTH, CHANNEL), kernel_size=(2,2,2), strides=2))
model.add(tf.keras.layers.Conv3D(10, kernel_size=(2,3,3), strides=2))
model.add(tf.keras.layers.Conv3D(10, kernel_size=(2,3,3), strides=2))
model.add(tf.keras.layers.Conv3D(10, kernel_size=(2,3,3), strides=2))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(101, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

train_history = model.fit(r1, epochs=3, steps_per_epoch=r1.__len__(), verbose=1)
score = model.evaluate(r2, steps=5)
print(score)
</code></pre>
<h1>Output</h1>
<pre><code>the exception:
end of data generator
<class '__main__.reader'> <class '__main__.reader'>
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv3d (Conv3D) (None, 8, 96, 128, 10) 250
_________________________________________________________________
conv3d_1 (Conv3D) (None, 4, 47, 63, 10) 1810
_________________________________________________________________
conv3d_2 (Conv3D) (None, 2, 23, 31, 10) 1810
_________________________________________________________________
conv3d_3 (Conv3D) (None, 1, 11, 15, 10) 1810
_________________________________________________________________
flatten (Flatten) (None, 1650) 0
_________________________________________________________________
dense (Dense) (None, 101) 166751
=================================================================
Total params: 172,431
Trainable params: 172,431
Non-trainable params: 0
_________________________________________________________________
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train for 17562 steps
Epoch 1/3
77/17562 [..............................] - ETA: 1:35:53 - loss: 67.0937 - accuracy: 0.0156
</code></pre>
|
python|tensorflow|keras|sequence-generators
| 1
|
374,108
| 66,720,113
|
how does memory allocation occur in numpy array?
|
<pre><code>import numpy as np
a = np.arange(5)
for i in a:
    print("Id of {} : {} \n".format(i, id(i)))
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Id of 0 : 2295176255984
Id of 1 : 2295176255696
Id of 2 : 2295176255984
Id of 3 : 2295176255696
Id of 4 : 2295176255984
</code></pre>
<p>I want to understand how the elements of a NumPy array are allocated in memory which, judging by this output, seems to differ from Python lists.</p>
<p>Any help is appreciated.</p>
|
<p>I'm a fan of Code with Mosh. He teaches this kind of thing on his YouTube channel as well as on Udemy. I've purchased his Udemy course on data structures and algorithms, which goes deep into how these things work.
For example, while teaching about arrays, he shows how to build an array from scratch so you understand the underlying concepts behind it.</p>
<p>You can take a look here: <a href="https://www.youtube.com/watch?v=BBpAmxU_NQo" rel="nofollow noreferrer">https://www.youtube.com/watch?v=BBpAmxU_NQo</a></p>
<p>If you're only interested in the NumPy array itself:</p>
<p>First I'll tell you about the differences:</p>
<p><strong>Difference between NumPy and an Array</strong></p>
<p>Numpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object and tools for working with these arrays. A NumPy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.</p>
<p>The Python core library provides lists. A list is the Python equivalent of an array, but it is resizeable and can contain elements of different types.</p>
<p>A common beginner question is what the real difference is here. The answer is performance. NumPy data structures perform better in:</p>
<ul>
<li>Size - NumPy data structures take up less space</li>
<li>Performance - they have a need for speed and are faster than lists</li>
<li>Functionality - SciPy and NumPy have optimized functions, such as linear algebra operations, built in</li>
</ul>
<p>Another key notable difference is in how they store and make use of memory.</p>
<p><strong>Memory</strong></p>
<p>The main benefits of using NumPy arrays should be smaller memory consumption and better runtime behaviour.</p>
<p><a href="https://i.stack.imgur.com/ojEVS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ojEVS.png" alt="Image of how python lists work" /></a></p>
<p>For Python Lists - We can conclude from this that for every new element, we need another eight bytes for the reference to the new object. The new integer object itself consumes 28 bytes.</p>
<p><a href="https://i.stack.imgur.com/zRzLu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zRzLu.png" alt="Image of how NumPy works" /></a>
NumPy takes up less space. This means that an arbitrary integer array of length "n" in NumPy needs roughly 96 + n * 8 bytes: one small fixed header plus 8 bytes per int64 element.</p>
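<p>A quick sketch to check the memory claim yourself (exact numbers vary by platform and Python version):</p>
<pre><code>import sys
import numpy as np

n = 1000
lst = list(range(n))
arr = np.arange(n)

# list: container object plus n pointers, plus one boxed int object per element
list_bytes = sys.getsizeof(lst) + sum(sys.getsizeof(x) for x in lst)
# ndarray: one fixed header plus 8 bytes per int64 element
arr_bytes = sys.getsizeof(arr)

print(list_bytes, arr_bytes)  # on 64-bit CPython, roughly 36000 vs 8100
</code></pre>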
<p>If you are curious and want me to prove that NumPy really takes less time:</p>
<pre><code># importing required packages
import numpy
import time
# size of arrays and lists
size = 1000000
# declaring lists
list1 = range(size)
list2 = range(size)
# declaring arrays
array1 = numpy.arange(size)
array2 = numpy.arange(size)
# capturing time before the multiplication of Python lists
initialTime = time.time()
# multiplying elements of both the lists and stored in another list
resultantList = [(a * b) for a, b in zip(list1, list2)]
# calculating execution time
print("Time taken by Lists to perform multiplication:",
(time.time() - initialTime),
"seconds")
# capturing time before the multiplication of Numpy arrays
initialTime = time.time()
# multiplying elements of both the Numpy arrays and stored in another Numpy array
resultantArray = array1 * array2
# calculating execution time
print("Time taken by NumPy Arrays to perform multiplication:",
(time.time() - initialTime),
"seconds")
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Time taken by Lists : 0.15030384063720703 seconds
Time taken by NumPy Arrays : 0.005921125411987305 seconds
</code></pre>
<p><strong>Wait.. There is a very big disadvantage too:</strong></p>
<p>Requires contiguous allocation of memory -
insertion and deletion operations can become costly, because the data is stored in contiguous memory locations and inserting or deleting an element requires shifting everything after it.</p>
<p>If you want to learn more about numpy:
<a href="https://www.educba.com/introduction-to-numpy/" rel="nofollow noreferrer">https://www.educba.com/introduction-to-numpy/</a></p>
<p>You can thank me later!</p>
|
python|numpy|memory|numpy-ndarray
| 0
|
374,109
| 66,655,683
|
Tensorflow Addon's Cohen Kappa Return 0 for all epochs
|
<p>I'm having an issue with tensorflow_addons' CohenKappa metric. I'm trying to train an image classification model, but I frame the problem as regression, so I trained the model with MSE loss. However, I need to know the classification performance, and I want to use Cohen's kappa. Luckily, TensorFlow supports the CohenKappa metric through the tensorflow_addons package. But I need to customize the metric, so I added logic to clip y_pred, round it, then feed it to the CohenKappa API. Here's the code:</p>
<pre><code>import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow_addons.metrics import CohenKappa
from tensorflow.keras.metrics import Metric
from tensorflow_addons.utils.types import AcceptableDTypes, FloatTensorLike
from typeguard import typechecked
from typing import Optional
from tensorflow.python.ops import math_ops
from tensorflow.python.keras.utils import losses_utils
from tensorflow.python.keras.utils import metrics_utils

class CohenKappaMetric(CohenKappa):
    def __init__(
        self,
        num_classes: FloatTensorLike,
        name: str = "cohen_kappa",
        weightage: Optional[str] = None,
        sparse_labels: bool = False,
        regression: bool = False,
        dtype: AcceptableDTypes = None,
    ):
        """Creates a `CohenKappa` instance."""
        super().__init__(num_classes=num_classes, name=name, weightage=weightage,
                         sparse_labels=sparse_labels, regression=regression, dtype=dtype)

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_pred = tf.clip_by_value(y_pred, 0, 4)
        y_pred = tf.math.round(y_pred)
        y_pred = tf.cast(y_pred, dtype=tf.uint8)
        y_true = math_ops.cast(y_true, self._dtype)
        y_pred = math_ops.cast(y_pred, self._dtype)
        [y_true, y_pred], sample_weight = \
            metrics_utils.ragged_assert_compatible_and_get_flat_values([y_true, y_pred], sample_weight)
        print(f'y_true after ragged assert: {y_true}')
        print(f'y_pred after ragged assert: {y_pred}')
        y_pred, y_true = losses_utils.squeeze_or_expand_dimensions(y_pred, y_true)
        print(f'y_true after squeeze: {y_true}')
        print(f'y_pred after squeeze: {y_pred}')
        return super().update_state(y_true, y_pred, sample_weight)
</code></pre>
<p>I trained it using tf's Keras API and a tf.data.Dataset object. Here's the full script for context. <br><br>
<strong>========= Full script ==========</strong></p>
<pre><code># Import Library
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import cv2
from PIL import Image
import tensorflow as tf
from keras import layers
from tensorflow.keras import applications
from keras.callbacks import Callback, ModelCheckpoint
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, load_model
from keras.optimizers import Adam
from keras import models
import os, glob, pathlib
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, accuracy_score, confusion_matrix
from tqdm import tqdm
SIZE = 224
DATASET_DIR = 'Dataset/APTOS-2019-RAW/'
BATCH_SIZE = 32
RESHUFFLE = 700
model_backbone = tf.keras.applications.EfficientNetB0
ARCH = 'EfficientNetB0'
train_df = pd.read_csv('Dataset/CSVs/converted_x_train_8.csv')
valid_df = pd.read_csv('Dataset/CSVs/converted_x_valid_8.csv')
#resample
from sklearn.utils import resample
X=train_df
normal=X[X.diagnosis==0]
mild=X[X.diagnosis==1]
moderate=X[X.diagnosis==2]
severe=X[X.diagnosis==3]
pdr=X[X.diagnosis==4]
#downsampled
mild = resample(mild,
replace=True, # sample with replacement
n_samples=RESHUFFLE, # match number in majority class
random_state=2020) # reproducible results
moderate = resample(moderate,
replace=False, # sample with replacement
n_samples=RESHUFFLE, # match number in majority class
random_state=2020) # reproducible results
severe = resample(severe,
replace=True, # sample with replacement
n_samples=RESHUFFLE, # match number in majority class
random_state=2020) # reproducible results
normal = resample(normal,
replace=False, # sample with replacement
n_samples=RESHUFFLE, # match number in majority class
random_state=2020) # reproducible results
pdr = resample(pdr,
replace=True, # sample with replacement
n_samples=RESHUFFLE, # match number in majority class
random_state=2020) # reproducible results
# combine minority and downsampled majority
sampled = pd.concat([normal, mild, moderate, severe, pdr])
# checking counts
sampled.diagnosis.value_counts()
train_df = sampled
train_df = train_df.sample(frac=1).reset_index(drop=True)
train_df['id_code'] = train_df['id_code'].apply(lambda x: DATASET_DIR+x)
valid_df['id_code'] = valid_df['id_code'].apply(lambda x: DATASET_DIR+x)
list_ds = tf.data.Dataset.list_files(list(train_df['id_code']), shuffle=False)
list_ds = list_ds.shuffle(len(train_df), reshuffle_each_iteration=True)
val_list_ds = tf.data.Dataset.list_files(list(valid_df['id_code']), shuffle=False)
val_list_ds = val_list_ds.shuffle(len(valid_df), reshuffle_each_iteration=True)
class_names = np.array(sorted([item.name for item in pathlib.Path(DATASET_DIR).glob('*') if item.name != "LICENSE.txt"]))
print(class_names)
train_ds = list_ds
val_ds = val_list_ds
def get_label(file_path):
    # convert the path to a list of path components
    parts = tf.strings.split(file_path, os.path.sep)
    # The second to last is the class-directory
    one_hot = parts[-2] == class_names
    # Integer encode the label
    return tf.argmax(one_hot)

def decode_img(img):
    # convert the compressed string to a 3D uint8 tensor
    img = tf.image.decode_jpeg(img, channels=3)
    # resize the image to the desired size
    return tf.image.resize(img, [SIZE, SIZE])

def process_path(file_path):
    label = get_label(file_path)
    # load the raw data from the file as a string
    img = tf.io.read_file(file_path)
    img = decode_img(img)
    return img, label

AUTOTUNE = tf.data.AUTOTUNE
# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
train_ds = train_ds.map(process_path, num_parallel_calls=AUTOTUNE)
val_ds = val_ds.map(process_path, num_parallel_calls=AUTOTUNE)

def configure_for_performance(ds):
    ds = ds.cache()
    ds = ds.shuffle(buffer_size=1000)
    ds = ds.batch(BATCH_SIZE)
    ds = ds.prefetch(buffer_size=AUTOTUNE)
    return ds
train_ds = configure_for_performance(train_ds)
val_ds = configure_for_performance(val_ds)
import tensorflow_addons as tfa
from tensorflow_addons.metrics import CohenKappa
from tensorflow.keras.metrics import Metric
from tensorflow_addons.utils.types import AcceptableDTypes, FloatTensorLike
from typeguard import typechecked
from typing import Optional
from tensorflow.python.ops import math_ops
from tensorflow.python.keras.utils import losses_utils
from tensorflow.python.keras.utils import metrics_utils
class CohenKappaMetric(CohenKappa):
    def __init__(
        self,
        num_classes: FloatTensorLike,
        name: str = "cohen_kappa",
        weightage: Optional[str] = None,
        sparse_labels: bool = False,
        regression: bool = False,
        dtype: AcceptableDTypes = None,
    ):
        """Creates a `CohenKappa` instance."""
        super().__init__(num_classes=num_classes, name=name, weightage=weightage,
                         sparse_labels=sparse_labels, regression=regression, dtype=dtype)

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_pred = tf.clip_by_value(y_pred, 0, 4)
        y_pred = tf.math.round(y_pred)
        y_pred = tf.cast(y_pred, dtype=tf.uint8)
        y_true = math_ops.cast(y_true, self._dtype)
        y_pred = math_ops.cast(y_pred, self._dtype)
        [y_true, y_pred], sample_weight = \
            metrics_utils.ragged_assert_compatible_and_get_flat_values([y_true, y_pred], sample_weight)
        print(f'y_true after ragged assert: {y_true}')
        print(f'y_pred after ragged assert: {y_pred}')
        y_pred, y_true = losses_utils.squeeze_or_expand_dimensions(y_pred, y_true)
        print(f'y_true after squeeze: {y_true}')
        print(f'y_pred after squeeze: {y_pred}')
        return super().update_state(y_true, y_pred, sample_weight)

class QWKCallback(tf.keras.callbacks.Callback):
    def __init__(self, patience=10):
        super().__init__()
        self.patience = patience

    def on_train_begin(self, logs=None):
        # The number of epochs it has waited since the metric last improved.
        self.wait = 0
        # The epoch the training stops at.
        self.stopped_epoch = 0
        # Initialize the best to -1.
        self.best = -1

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get("cohen_kappa")
        if np.greater(current, self.best):
            self.best = current
            self.wait = 0
            # Record the best weights if the current result is better.
            self.best_weights = self.model.get_weights()
            if current > 0.75:
                print("Validation Kappa has improved and is greater than 0.75. Worth saving, dude. Saving model.")
                self.model.save(f'Kaggle - Model Weights/{ARCH}-model.h5')
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True
                print("Restoring model weights from the end of the best epoch.")
                self.model.set_weights(self.best_weights)

    def on_train_end(self, logs=None):
        if self.stopped_epoch > 0:
            print("Epoch %05d: early stopping" % (self.stopped_epoch + 1))
efficientnet = model_backbone(include_top=False, weights='imagenet', input_shape=(SIZE,SIZE,3))
dummy_model = Sequential([
Rescaling(1/.255, input_shape = (224, 224, 3)),
RandomFlip(seed = 2019),
RandomRotation((-0.5, 0.5), fill_mode = 'constant', seed = 2019),
RandomZoom(0.1),
layers.GlobalAveragePooling2D(),
layers.Dense(10),
layers.Dense(1)])
dummy_model.compile(
loss='mse',
optimizer=Adam(lr=0.0001),
metrics = [CohenKappaMetric(num_classes=5, weightage='quadratic', sparse_labels = True)]
)
dummy_model.fit(
train_ds,
epochs = 9,
validation_data = val_ds,
callbacks = [QWKCallback(patience = 10)]
)
</code></pre>
<p>The result of this script is this log:</p>
<pre><code>Epoch 1/9
y_true after ragged assert: Tensor("Cast_2:0", shape=(None, 1), dtype=float32)
y_pred after ragged assert: Tensor("Cast_3:0", shape=(None, 1), dtype=float32)
y_true after squeeze: Tensor("Cast_2:0", shape=(None, 1), dtype=float32)
y_pred after squeeze: Tensor("Cast_3:0", shape=(None, 1), dtype=float32)
y_true after ragged assert: Tensor("Cast_2:0", shape=(None, 1), dtype=float32)
y_pred after ragged assert: Tensor("Cast_3:0", shape=(None, 1), dtype=float32)
y_true after squeeze: Tensor("Cast_2:0", shape=(None, 1), dtype=float32)
y_pred after squeeze: Tensor("Cast_3:0", shape=(None, 1), dtype=float32)
109/110 [============================>.] - ETA: 0s - loss: 144816.2047 - cohen_kappa: 0.0000e+00y_true after ragged assert: Tensor("Cast_2:0", shape=(None, 1), dtype=float32)
y_pred after ragged assert: Tensor("Cast_3:0", shape=(None, 1), dtype=float32)
y_true after squeeze: Tensor("Cast_2:0", shape=(None, 1), dtype=float32)
y_pred after squeeze: Tensor("Cast_3:0", shape=(None, 1), dtype=float32)
110/110 [==============================] - 3s 18ms/step - loss: 144618.4215 - cohen_kappa: 0.0000e+00 - val_loss: 119745.2266 - val_cohen_kappa: 0.0000e+00
Epoch 2/9
110/110 [==============================] - 2s 16ms/step - loss: 105063.3554 - cohen_kappa: 0.0000e+00 - val_loss: 86080.0625 - val_cohen_kappa: 0.0000e+00
Epoch 3/9
110/110 [==============================] - 2s 16ms/step - loss: 75889.1368 - cohen_kappa: 0.0000e+00 - val_loss: 60222.9531 - val_cohen_kappa: 0.0000e+00
Epoch 4/9
110/110 [==============================] - 2s 16ms/step - loss: 52277.5727 - cohen_kappa: 0.0000e+00 - val_loss: 40955.3906 - val_cohen_kappa: 0.0000e+00
Epoch 5/9
110/110 [==============================] - 2s 16ms/step - loss: 35806.8430 - cohen_kappa: 0.0000e+00 - val_loss: 26828.6133 - val_cohen_kappa: 0.0000e+00
Epoch 6/9
110/110 [==============================] - 2s 16ms/step - loss: 23043.7091 - cohen_kappa: 0.0000e+00 - val_loss: 16888.7090 - val_cohen_kappa: 0.0000e+00
Epoch 7/9
110/110 [==============================] - 2s 16ms/step - loss: 14327.0133 - cohen_kappa: 0.0000e+00 - val_loss: 10193.4795 - val_cohen_kappa: 0.0000e+00
Epoch 8/9
110/110 [==============================] - 2s 16ms/step - loss: 8697.9348 - cohen_kappa: 0.0000e+00 - val_loss: 5862.7231 - val_cohen_kappa: 0.0000e+00
Epoch 9/9
110/110 [==============================] - 2s 16ms/step - loss: 4940.2150 - cohen_kappa: 0.0000e+00 - val_loss: 3193.6562 - val_cohen_kappa: 0.0000e+00
<tensorflow.python.keras.callbacks.History at 0x7f6c285d6c50>
</code></pre>
<p>From these results, I have two questions:</p>
<ul>
<li>How do I fix this so I can get a working Cohen's kappa metric? It should improve from 0 toward 1.</li>
<li>I want to see y_pred and y_true on each <code>update_state</code> call; is it expected that y_true and y_pred are symbolic Tensor objects? Thanks!</li>
</ul>
|
<p>Never mind; after some digging I discovered where I was wrong. tensorflow_addons doesn't support tf.data yet, so the quick fix is the one described in this GitHub issue: <a href="https://github.com/tensorflow/addons/issues/2417" rel="nofollow noreferrer">https://github.com/tensorflow/addons/issues/2417</a></p>
|
tensorflow|metrics|kappa
| 0
|
374,110
| 66,459,171
|
How to convert this to a list or numpy array
|
<pre><code>cm = [[406402 30 0 0 11 6 0 0 0 0
200 0 0 0 0]
[ 89 269 0 0 0 0 0 0 0 0
0 0 0 0 0]
[ 9 0 25854 0 0 0 0 0 0 0
0 0 0 0 0]
[ 5 0 0 2050 2 0 0 0 0 0
0 0 0 0 0]
[ 64 0 1 6 34497 0 0 0 0 0
1 0 0 0 0]
[ 3 0 0 0 0 982 5 0 0 0
0 0 0 0 0]
[ 4 0 0 0 0 3 1072 0 0 0
0 0 1 0 0]
[ 0 0 0 0 0 0 0 1132 0 0
0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 1 0
0 0 0 0 0]
[ 2 0 0 0 0 0 0 0 0 5
0 0 0 0 0]
[ 100 0 0 0 5 0 0 0 0 0
11298 0 0 0 2]
[ 6 0 0 0 0 0 0 0 0 0
0 628 0 0 0]
[ 8 0 0 0 0 0 0 0 0 0
0 0 235 0 38]
[ 3 0 0 0 0 0 0 0 0 0
0 0 1 0 0]
[ 1 0 0 0 1 0 0 0 0 0
0 0 67 0 47]]
</code></pre>
<p>I got this output of shape (15,15) from sklearn's confusion matrix, but I get an invalid syntax error when I try to convert it into an np array. How can I do that? I need commas between the elements of each row and between the rows.</p>
|
<p>You can't directly convert this to an np.array; you need to add commas between the elements and the rows. For example:</p>
<pre><code>import numpy as np
np.array([[1,2],[3,4]])
</code></pre>
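<p>Note that the printout above is just the repr of an array that already exists: sklearn's <code>confusion_matrix</code> returns an ndarray, so it is usually easier to persist and reload the original object than to paste its printed form (a sketch, assuming <code>cm</code> is the original matrix):</p>
<pre><code>import numpy as np
# cm = confusion_matrix(y_true, y_pred)  # already an ndarray
np.save('cm.npy', cm)   # persist it to disk
cm = np.load('cm.npy')  # reload later; no commas ever needed
</code></pre>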
|
python|numpy
| 0
|
374,111
| 66,606,544
|
How to convert long data to wide in pandas?
|
<p>Similar question:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/52534500/long-wide-data-to-wide-long">Long/wide data to wide/long</a></li>
</ul>
<p>(I got a duplicate index error when using the method given in that link.)</p>
<h1>MWE</h1>
<pre><code>df_long = pd.DataFrame({'name': ['A', 'B', 'A', 'B'],
'variable': ['height', 'height', 'width', 'width'],
'value': [10, 20, 1, 2]})
print(df_long)
name variable value
0 A height 10
1 B height 20
2 A width 1
3 B width 2
============================
Requires answer
name height width
0 A 10 1
1 B 20 2
</code></pre>
<h1>My attempt</h1>
<pre class="lang-py prettyprint-override"><code>(df_long.set_index(['name'])
.stack()
.unstack(0)
.reset_index()
.rename_axis(None, axis=1)
)
ValueError: Index contains duplicate entries, cannot reshape
</code></pre>
|
<pre><code>df_long.pivot_table("value",["name"], "variable")
variable height width
name
A 10 1
B 20 2
</code></pre>
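<p><code>pivot_table</code> sidesteps the <code>Index contains duplicate entries</code> error because it aggregates duplicates (mean by default). When each name/variable pair is unique, plain <code>pivot</code> works too (a sketch):</p>
<pre><code>out = (df_long.pivot(index='name', columns='variable', values='value')
              .reset_index()
              .rename_axis(None, axis=1))
print(out)
#   name  height  width
# 0    A      10      1
# 1    B      20      2
</code></pre>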
|
python|pandas
| 1
|
374,112
| 66,640,280
|
Calculating a period as long as a criteria is met in a dataframe
|
<p>here is my dataframe:</p>
<pre><code> date value negative trigger period
125652 2020-01-12 07:00:00+00:00 21.688670 False False NaN
125653 2020-01-12 07:05:00+00:00 1.456942 False False NaN
125654 2020-01-12 07:10:00+00:00 -22.268280 True False 1
125655 2020-01-12 07:15:00+00:00 -37.850510 True False 2
125656 2020-01-12 07:20:00+00:00 -66.259944 True False 3
125657 2020-01-12 07:25:00+00:00 -68.059245 True False 4
125658 2020-01-12 07:30:00+00:00 -63.986797 True False 5
125659 2020-01-12 07:35:00+00:00 -75.223634 True False 6
125660 2020-01-12 07:40:00+00:00 -73.597524 True False 7
125661 2020-01-12 07:45:00+00:00 -68.174247 True False 8
125662 2020-01-12 07:50:00+00:00 -80.020121 True False 9
125663 2020-01-12 07:55:00+00:00 -84.121360 True False 10
125664 2020-01-12 08:00:00+00:00 -98.860264 True False 11
125665 2020-01-12 08:05:00+00:00 -120.808291 True True 12
125666 2020-01-12 08:10:00+00:00 -100.162919 True False 13
125667 2020-01-12 08:15:00+00:00 -80.048591 True False 14
125668 2020-01-12 08:20:00+00:00 -23.830259 True False 15
125669 2020-01-12 08:25:00+00:00 8.356292 False False NaN
125670 2020-01-12 08:30:00+00:00 95.368355 False False NaN
125671 2020-01-12 08:35:00+00:00 79.023180 False False NaN
125672 2020-01-12 08:40:00+00:00 72.057324 False False NaN
125673 2020-01-12 08:45:00+00:00 35.903934 False False NaN
</code></pre>
<p>The column <code>period</code> is what I want. Basically, I want to count the rows with negative values for as long as they stay negative. To be more precise: I only need the period when <code>trigger == True</code>; I don't care about the period in other rows. So I need <code>period</code> on the row with index <code>125665</code>, which is <code>12</code> in the example.</p>
<p>The initial dataframe doesn't have the column <code>period</code>.</p>
<p>I wasn't able to create an algorithm which can calculate this column for me.
I tried working with <code>for index, row in dataframe.iterrows():</code>, but this takes ages to iterate over all the rows (check the index of the dataframe cutout).</p>
<p>Does anybody know a fast algorithm which gives me the period column with a valid period when <code>trigger == True</code>?</p>
|
<p>First, you can identify the groups of continuous positive/negative values with this:</p>
<pre class="lang-py prettyprint-override"><code>df['grp'] = df.negative.diff().cumsum().fillna(0)
</code></pre>
<p>I explained the reasoning behind this neat little trick <a href="https://stackoverflow.com/questions/66209794/getting-local-max-min-values-from-variable-row-ranges-defined-with-a-condition/66210963#66210963">here</a>, feel free to check it out if you want to understand the logic behind it</p>
<p>With the groups identified, you can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> and cumulative summation on the <code>negative</code> column to get the count of negative values in the current group</p>
<pre class="lang-py prettyprint-override"><code>df['period'] = df.groupby('grp').negative.cumsum()
</code></pre>
<p>Because <code>df.negative</code> is always <code>False</code> for groups of positive values, <code>period</code> will be zero for these rows and you can use <code>df.period.replace({0: np.nan})</code> to set them to null to achieve your described outcome. If you won't ever touch these values anyway, I recommend simply leaving them at zero to save on computation time</p>
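<p>Putting it together on a small frame (a sketch):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'value': [21.7, 1.5, -22.3, -37.9, -66.3, 8.4, -3.1]})
df['negative'] = df['value'] < 0

# label each run of consecutive equal `negative` values with its own group id
df['grp'] = df['negative'].diff().cumsum().fillna(0)
# running count of negative rows inside the current run (0 for positive runs)
df['period'] = df.groupby('grp')['negative'].cumsum()
print(df)
</code></pre>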
|
python|pandas|dataframe
| 0
|
374,113
| 66,759,689
|
pandas How to find the group with the maximum value and delete the group
|
<p>I have dataframe like this:</p>
<pre><code>import numpy as np
import pandas as pd
dataA = [["2005-1-20", "9:35", 5], ["2005-1-20", "9:40", 8], ["2005-1-20", "9:45", 7],
["2005-1-20","9:50", 4], ["2005-1-20", "10:00", 2],
["2005-1-21", "9:35", 2], ["2005-1-21", "9:40", 3], ["2005-1-21", "9:45", 4],
["2005-1-21","9:50", 4], ["2005-1-21", "10:00", 775],
["2005-1-22", "9:35", 12], ["2005-1-22", "9:40", 13], ["2005-1-22", "9:45", 14],
["2005-1-22","9:50", 14], ["2005-1-22", "10:00", 15]]
df = pd.DataFrame(data = dataA, columns=["date", "min", "val"])
</code></pre>
<pre><code>print(df)
date min val
0 2005-1-20 9:35 5
1 2005-1-20 9:40 8
2 2005-1-20 9:45 7
3 2005-1-20 9:50 4
4 2005-1-20 10:00 2
5 2005-1-21 9:35 2
6 2005-1-21 9:40 3
7 2005-1-21 9:45 4
8 2005-1-21 9:50 4
9 2005-1-21 10:00 775
10 2005-1-22 9:35 12
11 2005-1-22 9:40 13
12 2005-1-22 9:45 14
13 2005-1-22 9:50 14
14 2005-1-22 10:00 15
</code></pre>
<p>I want to find the group (grouping by date) with the maximum value where <code>min == '10:00'</code>, and then delete that whole group.
How do I do it?</p>
|
<p>This is my solution; it removes only one group, even if two dates share the highest value.</p>
<pre><code># Filter, so that you only have the values you want to compare
only_data_at_ten = df[df['min'] == '10:00']
# Find the highest value by sorting ascending and taking the date of the last row
date_to_remove = only_data_at_ten.sort_values('val').date.iloc[-1]
# Keep only the rows whose date is NOT the one you found
cleaned_data = df[df.date != date_to_remove]
</code></pre>
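<p>An equivalent sketch with <code>idxmax</code> avoids the sort:</p>
<pre><code>ten = df[df['min'] == '10:00']
# date of the row holding the maximum 10:00 value
worst_date = ten.loc[ten['val'].idxmax(), 'date']
cleaned_data = df[df['date'] != worst_date]
</code></pre>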
|
python|pandas|dataframe
| 1
|
374,114
| 66,476,198
|
How to remove certain values a Python Numpy Array
|
<p>I'm new to this, so this is probably a basic question, but how do I remove values from my array that are less than 0?</p>
<p>So, if</p>
<pre><code>a=np.random.randint(-10,11,(10,10))
</code></pre>
<p>How would I create an array with only the positive values from <code>a</code>?
Thanks!</p>
|
<pre><code>import numpy as np
a = np.random.randint(-10, 11, (10, 10))
# keeps the (10, 10) shape: entries that are not > 0 are replaced with 0
np.where(a > 0, a, 0)
</code></pre>
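<p>If you want to drop the negative values entirely rather than zero them out, boolean indexing is a common alternative (a sketch; note it returns a flattened 1-D array, since the rows would no longer have equal lengths):</p>
<pre><code>positives = a[a > 0]  # 1-D array containing only the positive entries
</code></pre>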
|
python|arrays|numpy|random
| 1
|
374,115
| 66,453,901
|
Converting a pandas column from an array of string Quarters and Years to a datetime column
|
<p>I have the following dataframe</p>
<pre><code> Date Data
0 [Q1, 10] 8.7
1 [Q2, 10] 8.4
2 [Q3, 10] 14.1
3 [Q4, 10] 16.2
4 [Q1, 11] 18.6
5 [Q2, 11] 20.4
6 [Q3, 11] 17.1
7 [Q4, 11] 37.0
8 [Q1, 12] 35.1
9 [Q2, 12] 26.0
10 [Q3, 12] 26.9
11 [Q4, 12] 47.8
12 [Q1, 13] 37.4
13 [Q2, 13] 31.2
14 [Q3, 13] 33.8
15 [Q4, 13] 51.0
16 [Q1, 14] 43.7
17 [Q2, 14] 35.2
18 [Q3, 14] 39.3
19 [Q4, 14] 74.5
20 [Q1, 15] 61.2
21 [Q2, 15] 47.5
22 [Q3, 15] 48.0
23 [Q4, 15] 74.8
24 [Q1, 16] 51.2
25 [Q2, 16] 40.4
26 [Q3, 16] 45.5
27 [Q4, 16] 78.3
28 [Q1, 17] 50.8
29 [Q2, 17] 38.5
30 [Q3, 17] 46.7
31 [Q4, 17] 77.3
32 [Q1, 18] 52.2
33 [Q2, 18] 41.3
34 [Q3, 18] 46.9
35 [Q4, 18] 68.4
36 [Q1, 19] 36.4
37 [Q2, 19] 33.8
38 [Q3, 19] 46.6
39 [Q4, 19] 73.8
40 [Q1, 20] 36.7
41 [Q2, 20] 37.6
</code></pre>
<p>I want to convert the <code>Date</code> column into a datetime object,</p>
<p>so <code>Q1,10</code> will become <code>Q1,2010</code> and then become <code>2010-03-31</code>.</p>
<p>I tried the following code,</p>
<pre><code>df['Date'] = pd.to_datetime(df['Date'].str.join('20'))
</code></pre>
<p>But it doesn't work.</p>
<p>I also tried using</p>
<pre><code>df['Date'].astype(str)[:1]
</code></pre>
<p>to access the second element of each entry and add a 20 at the front, but it won't let me.</p>
<p>What's the best way to convert this series into a pandas datetime column?</p>
|
<p>First create quarter <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.PeriodIndex.html" rel="nofollow noreferrer"><code>PeriodIndex</code></a>, then convert to datetimes by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.PeriodIndex.to_timestamp.html" rel="nofollow noreferrer"><code>PeriodIndex.to_timestamp</code></a> and floor to days by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DatetimeIndex.floor.html" rel="nofollow noreferrer"><code>DatetimeIndex.floor</code></a>:</p>
<pre><code>#if necessary create lists
df['Date'] = df['Date'].str.strip('[]').str.split(',')
#test if format match
print ('20' + df['Date'].str[::-1].str.join(''))
0 2010Q1
1 2010Q2
2 2010Q3
3 2010Q4
4 2011Q1
5 2011Q2
Name: Date, dtype: object
df['Date'] = (pd.PeriodIndex('20' + df['Date'].str[::-1].str.join(''), freq='Q')
.to_timestamp(how='e')
.floor('d'))
print (df)
Date Data
0 2010-03-31 8.7
1 2010-06-30 8.4
2 2010-09-30 14.1
3 2010-12-31 16.2
4 2011-03-31 18.6
5 2011-06-30 20.4
</code></pre>
<p>Alternative for convert to <code>Period</code>s:</p>
<pre><code>df['Date'] = (df['Date'].str[::-1].str.join('').apply(lambda x: pd.Period(x, freq='Q'))
.dt.to_timestamp(how='e')
.dt.floor('d'))
</code></pre>
<p>Or solution from @MrFuppes, thank you:</p>
<pre><code>df['Date'] = (pd.to_datetime("20"+df['Date'].str[::-1].str.join('')) +
pd.offsets.QuarterEnd(0))
</code></pre>
|
python|pandas|datetime|time-series
| 1
|
374,116
| 66,538,664
|
User Defined Aggregate Function in PySpark SQL
|
<p>How to implement a User Defined Aggregate Function (UDAF) in PySpark SQL?</p>
<pre><code>pyspark version = 3.0.2
python version = 3.7.10
</code></pre>
<p>As a minimal example, I'd like to replace the AVG aggregate function with a UDAF:</p>
<pre><code>import pandas as pd
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sql = SQLContext(sc)
df = sql.createDataFrame(
    pd.DataFrame({'id': [1, 1, 2, 2], 'value': [1, 2, 3, 4]}))
df.createTempView('df')
rv = sql.sql('SELECT id, AVG(value) FROM df GROUP BY id').toPandas()
</code></pre>
<p>where rv will be:</p>
<pre><code>In [2]: rv
Out[2]:
id avg(value)
0 1 1.5
1 2 3.5
</code></pre>
<p>How can a UDAF replace <code>AVG</code> in the query?</p>
<p>For example this does not work</p>
<pre><code>import numpy as np

def udf_avg(x):
    return np.mean(x)

sql.udf.register('udf_avg', udf_avg)
rv = sql.sql('SELECT id, udf_avg(value) FROM df GROUP BY id').toPandas()
</code></pre>
<p>The idea is to implement a UDAF in pure Python for processing not supported by SQL aggregate functions (e.g. a low-pass filter).</p>
|
<p>A Pandas UDF can be used, where the definition is compatible from <code>Spark 3.0</code> and <code>Python 3.6+</code>. See the <a href="https://issues.apache.org/jira/browse/SPARK-28264" rel="nofollow noreferrer">issue</a> and <a href="https://spark.apache.org/docs/latest/api/python/user_guide/arrow_pandas.html" rel="nofollow noreferrer">documentation</a> for details.</p>
<p>Full implementation in Spark SQL:</p>
<pre><code>import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    pd.DataFrame({'id': [1, 1, 2, 2], 'value': [1, 2, 3, 4]}))
df.createTempView('df')

@pandas_udf(DoubleType())
def avg_udf(s: pd.Series) -> float:
    return s.mean()

spark.udf.register('avg_udf', avg_udf)

rv = spark.sql('SELECT id, avg_udf(value) FROM df GROUP BY id').toPandas()
</code></pre>
<p>with return value</p>
<pre><code>In [2]: rv
Out[2]:
id avg_udf(value)
0 1 1.5
1 2 3.5
</code></pre>
|
pandas|apache-spark|pyspark|apache-spark-sql|user-defined-functions
| 3
|
374,117
| 66,387,247
|
How to remove a zero frequency artefact from FFT using numpy.fft.fft() when detrending or subtracting the mean does not work
|
<p>I am trying to calculate the FFT of the following data : <a href="http://www.mediafire.com/file/91olqnm6i9qh5bl/data.txt/file" rel="nofollow noreferrer">data.txt</a></p>
<pre class="lang-py prettyprint-override"><code>y_array = np.loadtxt('data.txt',dtype='complex')
plt.plot(np.real(y_array))
</code></pre>
<p><a href="https://i.stack.imgur.com/mB3gx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mB3gx.png" alt="Original Data" /></a></p>
<pre class="lang-py prettyprint-override"><code>y_array_fft = np.fft.fftshift(np.fft.fft(y_array))
x_array = np.linspace(-125,125,len(y_array))
</code></pre>
<p>The FFT plot :</p>
<p><a href="https://i.stack.imgur.com/By2sd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/By2sd.png" alt="enter image description here" /></a></p>
<p>I want to remove the artefact at zero frequency which I think is the DC offset. For this, I tried subtracting the mean from the original signal and also use scipy.signal.detrend() for removing a linear trend from data before fft. However, both operations do not seem to have any effect on the FFT.</p>
<pre class="lang-py prettyprint-override"><code>y_array_detrend = signal.detrend(y_array)
y_array_mean_subtracted = y_array-np.mean(y_array)
y_array_detrend_fft = np.fft.fftshift(np.fft.fft(y_array_detrend))
y_array_mean_subtracted_fft = np.fft.fftshift(np.fft.fft(y_array_mean_subtracted))
</code></pre>
<p>FFT after transforming the data using scipy.detrend():</p>
<p><a href="https://i.stack.imgur.com/vMLQB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vMLQB.png" alt="fft_after_detrend" /></a></p>
<p>FFT after subtracting the mean from data :</p>
<p><a href="https://i.stack.imgur.com/X7xtp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X7xtp.png" alt="FFT after mean subtraction" /></a></p>
<p>Any help or comments is greatly appreciated!</p>
|
<p>If you subtract the mean, then what's left isn't a 0 Hz artifact, but some low frequency spectrum (perhaps between 2 to 10 Hz in your plot, depending on your dimensions). Try a high pass filter.</p>
<p>Also, since it's complex data, make sure you subtracted the complex mean.</p>
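<p>A minimal sketch of both suggestions (it assumes <code>y_array</code> as loaded above; the number of bins to suppress is arbitrary):</p>
<pre><code>import numpy as np

# subtract the complex mean: removes exactly the 0 Hz component
y0 = y_array - np.mean(y_array)

# crude high-pass: zero a few bins around DC in the shifted spectrum
Y = np.fft.fftshift(np.fft.fft(y0))
center = len(Y) // 2
width = 5  # low-frequency bins to suppress on each side of DC
Y[center - width:center + width + 1] = 0
</code></pre>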
|
python|numpy|scipy|signal-processing|fft
| 0
|
374,118
| 66,656,620
|
What is my problem with the implementation of multivariate_gauss pdf?
|
<p>I use Python to calculate the multivariate Gaussian pdf, but I don't know what's wrong with my implementation.
The code is here:</p>
<pre><code># calculate multi-d gaussian pdf
import math
import numpy as np

def mul_gauss(x, mu, sigma) -> float:
    d = len(x[0])
    front = 1 / math.sqrt(((2 * math.pi) ** d) * np.linalg.det(sigma))
    tmp = (np.array(x) - np.array(mu))
    tmp_T = np.transpose(tmp)
    back = -0.5 * (np.matmul(np.matmul(tmp, np.linalg.inv(sigma)), tmp_T))[0][0]
    return front * math.exp(back)
</code></pre>
<p>I compared the result with scipy.stats.multivariate_normal(x,mu,sigma)</p>
<pre><code>import scipy.stats

x = [[2,2]]
mu = [[4,4]]
sigma = [[3,0],[0,3]]
ret_1 = mul_gauss(x, mu, sigma)
ret_2 = scipy.stats.multivariate_normal(x[0], mu[0], sigma).pdf(x[0])
print('ret_1=', ret_1)
print('ret_2=', ret_2)
</code></pre>
<p>and the output is</p>
<pre><code>ret_1=0.013984262505331654
ret_2=0.03978873577297383
</code></pre>
<p>Could anyone help me?</p>
|
<p>In line 5 of your main, <code>scipy.stats.multivariate_normal(x[0], mu[0], sigma)</code> constructs a frozen distribution with <code>x[0]</code> as the mean and <code>mu[0]</code> as the covariance, so <code>.pdf(x[0])</code> evaluates the density at the distribution's own mean. Call <code>pdf</code> directly with the point, mean and covariance instead.
Here is a fix:</p>
<pre><code># calculate multi-d gaussian pdf
import math
import numpy as np
from scipy import stats
def mul_gauss(x, mu, sigma) -> float:
d = x[0].shape[0]
coeff = 1/np.sqrt((2 * math.pi) ** d * np.linalg.det(sigma))
tmp = x - mu
exponent = -0.5 * (np.matmul(np.matmul(tmp, np.linalg.inv(sigma)), tmp.T))[0][0]
return coeff * math.exp(exponent)
x = np.array([[2,2]])
mu = np.array([[4,4]])
sigma = np.array([[3,0],[0,3]])
ret_1 = mul_gauss(x, mu, sigma)
ret_2 = stats.multivariate_normal.pdf(x[0], mu[0], sigma)
print('ret_1=',ret_1)
print('ret_2=',ret_2)
</code></pre>
<p>Output:</p>
<pre><code>ret_1= 0.013984262505331654
ret_2= 0.013984262505331658
</code></pre>
<p>Cheers.</p>
|
python|numpy|math|gaussian
| 2
|
374,119
| 66,683,088
|
How to fix the error: ValueError: endog and exog matrices are different sizes
|
<p>I'm trying to write a program in python that uses a query in SQL to collect data and make a regression model. When I try to actually create the model, however, it gives me this error.</p>
<pre><code>import pyodbc
import pandas
import statsmodels.api as sm
import numpy as np
server = 'ludsampledb.database.windows.net'
database = 'SampleDB'
username = 'sampleadmin'
password = '+U9Ly9/p'
driver = '{ODBC Driver 17 for SQL Server}'
table = 'GooglePlayStore'
conn = pyodbc.connect('DRIVER='+driver+';SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password)
sql1 = "SELECT ISNULL((CASE WHEN ISNUMERIC(Rating) = 1 THEN CONVERT(float, Rating) ELSE 0 END), 0) AS 'Rating', ISNULL((CASE WHEN ISNUMERIC(Reviews) = 1 THEN CONVERT(float, Reviews) ELSE 0 END), 0) AS 'Reviews', ISNULL((CASE WHEN ISNUMERIC(SUBSTRING(Size, 0, LEN(Size))) = 1 THEN CONVERT(float, SUBSTRING(Size, 0, LEN(Size))) ELSE 0 END), 0) AS 'Size', ISNULL((CASE WHEN ISNUMERIC(REPLACE(Price, '$', '')) = 1 THEN CONVERT(float, REPLACE(Price, '$', '')) ELSE 0 END), 0) AS 'Price', ISNULL((CASE WHEN ISNUMERIC(REPLACE(SUBSTRING(Installs, 0, LEN(Installs)), ',', '')) = 1 THEN CONVERT(float, REPLACE(SUBSTRING(Installs, 0, LEN(Installs)), ',', '')) ELSE 0 END), 0) AS 'Installs' FROM GooglePlayStore"
data = pandas.read_sql(sql1,conn)
x = np.array([data["Rating"], data["Size"], data["Installs"], data["Price"]]).reshape(-1, 1)
x = sm.add_constant(x)
print(x.shape)
y = np.array([data['Reviews']]).reshape(-1, 1)
print(y.shape)
fit = sm.OLS(y, x).fit() #This is where the error is occurring
</code></pre>
<p>I'm pretty sure that I know what is going wrong, but I have no idea how to fix it. I've tried several things already, but none so far have worked.</p>
|
<p>Avoid the <code>reshape(-1, 1)</code> calls: stacking the four columns into an array of shape <code>(4, n)</code> and then reshaping it gives <code>x</code> four times as many rows as <code>y</code>, which is exactly the size mismatch in the error. Pandas DataFrames and Series (each column of a DataFrame) are extensions of numpy 2D and 1D arrays, and <code>sm.OLS</code> can directly receive pandas objects.</p>
<pre class="lang-py prettyprint-override"><code>x = data.reindex(["Rating", "Size", "Installs", "Price"], axis="columns")
x = sm.add_constant(x)
y = data['Reviews']
fit = sm.OLS(y, x).fit()
</code></pre>
<p>Consider even an R-style formula (constant automatically added) using lower case <code>ols</code> from the formula API:</p>
<pre class="lang-py prettyprint-override"><code>import statsmodels.formula.api as smf

fit = smf.ols(formula="Reviews ~ Rating + Size + Installs + Price", data=data).fit()
</code></pre>
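<p>A self-contained sketch with synthetic stand-ins for the SQL result, just to show that both interfaces run and agree (the column names mirror the question; everything else is made up):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
data = pd.DataFrame({
    'Rating': rng.uniform(1, 5, 100),
    'Size': rng.uniform(1, 100, 100),
    'Installs': rng.uniform(1e3, 1e6, 100),
    'Price': rng.uniform(0, 10, 100),
})
data['Reviews'] = 2 * data['Rating'] + 0.01 * data['Installs'] + rng.normal(size=100)

X = sm.add_constant(data[['Rating', 'Size', 'Installs', 'Price']])
fit1 = sm.OLS(data['Reviews'], X).fit()                                        # array/frame interface
fit2 = smf.ols('Reviews ~ Rating + Size + Installs + Price', data=data).fit()  # formula interface
print(fit1.params)
print(fit2.params)  # same coefficients from both
</code></pre>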
|
python|sql|pandas|numpy
| 0
|
374,120
| 66,468,250
|
Split a column with a number and name into two different columns 'ID' and 'Name'
|
<p>I am converting a text file to csv.
In the csv file I'm getting a column containing a number and a name (e.g. <code>1: Aki</code>), and I want to separate them into two different columns.</p>
<p>sample data</p>
<pre><code>1: Aki
2: Aki
3: Kano
</code></pre>
<p>code tried</p>
<pre><code>df_output.columns = ['Name', 'date', 'Description']
###df_output['ID'],df_output['Name_'] = df_output['Name'].str[:1],df_output['Name'].str[1:]
obj = df_output['Name']
obj = obj.str.strip()
obj = obj.str.split(':\s*')
df_output['Name'] = obj.str[-1]
df_output['idx'] = obj.str[0]
df_output = df_output.set_index('idx')
</code></pre>
|
<p>Use <code>str.extract</code> here:</p>
<pre class="lang-py prettyprint-override"><code>df_output['ID'] = df['name'].str.extract(r'^(\d+)')
df_output['name'] = df['name'].str.extract(r'^\d+: (.*)$')
</code></pre>
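<p>If you prefer a single pass, one <code>extract</code> call with two capture groups yields both parts at once; a runnable sketch with the sample data:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df_output = pd.DataFrame({'Name': ['1: Aki', '2: Aki', '3: Kano']})
extracted = df_output['Name'].str.extract(r'^(\d+):\s*(.*)$')  # group 0: digits, group 1: name
df_output['ID'] = extracted[0]
df_output['Name'] = extracted[1]
print(df_output)
#    Name ID
# 0   Aki  1
# 1   Aki  2
# 2  Kano  3
</code></pre>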
|
python|pandas
| 4
|
374,121
| 66,705,839
|
Invert order and color of hue categories using Seaborn
|
<p>So when I am plotting a box plot and using a category in <code>hue</code>, I get a default order of the categories (Yes and No in my case). I want the boxes to be presented in the reverse order (No, Yes) and with the reversed colors. This is my code:</p>
<pre><code>fig=plt.figure(figsize=(12,6))
df2=df[df["AMT_INCOME_TOTAL"]<=df["AMT_INCOME_TOTAL"].median()]
ax = sns.boxplot(x="NAME_FAMILY_STATUS", y="AMT_INCOME_TOTAL", data=df2,hue="FLAG_OWN_REALTY")
plt.legend(bbox_to_anchor=(1.05,1),loc=2, borderaxespad=1,title="Own Property flag")
plt.title('Income Totals per Family Status for Bottom Half of Earners')
</code></pre>
<p><a href="https://i.stack.imgur.com/ZYE9u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZYE9u.png" alt="enter image description here" /></a></p>
<p>I would like to have the "N" category first and with blue color.</p>
|
<p>As mentioned above this does the job if you want to explicitly specify the order of the boxes and the hue_order:</p>
<pre><code>fig=plt.figure(figsize=(12,6))
df2=df[df["AMT_INCOME_TOTAL"]<=df["AMT_INCOME_TOTAL"].median()]
ax = sns.boxplot(x="NAME_FAMILY_STATUS", y="AMT_INCOME_TOTAL", data=df2,hue="FLAG_OWN_REALTY",order=['Married',"Civil marriage","Widow","Single / not married","Separated"], hue_order = ['N', 'Y'])
plt.legend(bbox_to_anchor=(1.05,1),loc=2, borderaxespad=1,title="Own Property flag")
plt.title('Income Totals per Family Status for Bottom Half of Earners')
</code></pre>
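<p>As a self-contained sketch with seaborn's built-in tips data (standing in for your dataframe): <code>hue_order</code> controls both the order of the boxes within each group and which palette color each level receives, so listing the "no" level first also gives it the first (blue) color of the default palette.</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset('tips')  # example data shipped with seaborn
sns.boxplot(x='day', y='total_bill', hue='smoker', data=tips,
            hue_order=['No', 'Yes'])
plt.show()
</code></pre>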
|
python|pandas|matplotlib|plot|seaborn
| 1
|
374,122
| 66,479,033
|
Pandas read excel returning type object when time is 00:00
|
<p>In more recent versions of Pandas (I am using 1.2.3), when reading times from an excel file, there is a problem when the time is 00:00:00. Below is the script, where filepath is the path to my excel file, which contains a column with a header named 'Time'.</p>
<pre><code>import pandas as pd
df = pd.read_excel(filepath)
print(df['Time'])
</code></pre>
<p>Output:</p>
<pre><code>0 20:00:00
1 22:00:00
2 23:00:00
3 1899-12-30 00:00:00
4 02:00:00
5 02:45:00
6 03:30:00
7 04:00:00
8 04:45:00
9 05:30:00
10 07:00:00
11 08:00:00
12 08:45:00
13 09:30:00
14 10:30:00
15 10:45:00
16 11:45:00
17 12:30:00
18 13:15:00
19 14:00:00
20 14:45:00
21 15:45:00
22 23:00:00
23 1899-12-30 00:00:00
</code></pre>
<p>This was not the case in version 1.0.5.</p>
<p>Is there a way to read in these times correctly, without the date on rows 3 and 23 above?</p>
|
<p>I can reproduce this behavior (pandas 1.2.3); it leaves you with a mix of <code>datetime.datetime</code> and <code>datetime.time</code> objects in the 'time' column.</p>
<hr />
<p><em><strong>One way</strong></em> around can be to import the time column as type string; you can explicitly specify that like</p>
<pre><code>df = pd.read_excel(path_to_your_excelfile, dtype={'Time': str})
</code></pre>
<p>which will give you "excel day zero" prefixed to some entries. You can remove them by split on space and then taking the last element of the split result:</p>
<pre><code>df['Time'].str.split(' ').str[-1]
</code></pre>
<p>Now you can proceed by converting string to <code>datetime</code>, <code>timedelta</code> etc. - whatever makes sense in your context.</p>
<hr />
<p><em><strong>Another way</strong></em> to handle this can be to specify that pandas should parse this column to datetime; like</p>
<pre><code>df = pd.read_excel(path_to_your_excelfile, parse_dates=['Time'])
</code></pre>
<p>Then, you'll have pandas' datetime, with either today's date or "excel day zero":</p>
<pre><code>df['Time']
0 2021-03-04 20:00:00
1 2021-03-04 22:00:00
2 2021-03-04 23:00:00
3 1899-12-30 00:00:00
4 2021-03-04 02:00:00
...
23 1899-12-30 00:00:00
Name: Time, dtype: datetime64[ns]
</code></pre>
<p>Now you have some options, depending on what you intend to do further with the data. You could just ignore the date, or strip it (<code>df['Time'].dt.time</code>), or parse to string (<code>df['Time'].dt.strftime('%H:%M:%S')</code>) etc.</p>
|
python|excel|pandas|datetime
| 3
|
374,123
| 66,362,203
|
Less Expensive Tuple Loop
|
<p>This loop is currently working; I'm just asking for feedback on how to make it less expensive. Learning Python, so all feedback is welcome! Also, I'm not actually working with bananas; I made 2 new tables for the purpose of this example.</p>
<p>If you care for more detail about how/why I'm using this, read below.</p>
<pre><code>import pandas as pd
def banana_validation(typeValue):
bananaTable = pd.DataFrame([["Banana",3], ["Raspberry",1]],columns=['Type', 'TypeID'])
if typeValue in bananaTable.values:
return True
return False
data = pd.DataFrame([["Banana", 58407, 3], ["Apple", 58407, 1], ["Banana", 59874, 30], ["Banana", 54651, 1], ["Berry", 13546, 1]], columns=['Type', 'ItemID', 'Number'])
def messege_printer(data):
df = data
output = ()
message = ' is not a valid type'
linebreak = '<br>'
for index, value in df['Type'].iteritems():
if banana_validation(value) != True:
output = output+(value+message,linebreak,)
return output
messege_printer(data)
</code></pre>
<p>I am creating a 'simple' web app to allow users to upload a spreadsheet and ultimately upload it into our database. Like any good data person, I need to validate that the 'Type' of value they are entering already exists. 'banana_validation' essentially is a replica of a function that queries our system and verifies that the value exists. The 'messege_printer' function is run whenever the user clicks an 'upload' button, and reads whatever the user uploaded.</p>
<p><strong>Why a tuple?</strong> This is the only way I can pass a line break through the <a href="https://plotly.com/dash/" rel="nofollow noreferrer">dash framework</a>. My actual 'output' variable returns something like this:</p>
<pre><code>output = ('line 1', html.Br(), 'line 2')
</code></pre>
<p>layout of app (if anyone cares):</p>
<ul>
<li>database.py: banana_validation</li>
<li>app_layout.py: messege_printer</li>
</ul>
<p>Thank you so much!</p>
|
<ul>
<li><p>Is this an actual bottleneck in the application? It doesn't seem like it should be expensive enough to warrant optimisation.</p>
</li>
<li><p>If it is a problem, the loop can either append to a list, or <code>yield</code> the values (from a helper function), then only construct a tuple once.</p>
<pre><code> output = []
for index, value in df['Type'].iteritems():
if not banana_validation(value):
         output.extend([value + message, linebreak])
return tuple(output)
</code></pre>
<p>It's even possible that the framework will accept a list or a generator, in which case you wouldn't need to construct the tuple at all; just <code>yield</code> directly from the loop as it is.</p>
<pre><code> for index, value in df['Type'].iteritems():
if not banana_validation(value):
yield value + message
yield linebreak
</code></pre>
</li>
</ul>
|
python|pandas|dataframe
| 2
|
374,124
| 66,413,127
|
Python Pandas split list by delimiter into own columns
|
<p>New to Python and still learning. I have tried many posts already but none are working; I might need help with the syntax etc. There are 2 parts to my question:</p>
<p>First part - I want to split column 'resources' by every unique value and make new columns from them. Similar to the picture below with the columns highlighted yellow
<a href="https://i.stack.imgur.com/XZyYr.png" rel="nofollow noreferrer">enter image description here</a></p>
<pre><code>CustID, Resources, 100, 200, 30, 50
222, 100;200;30;50, 1, 1, 1, 1
</code></pre>
<p>The second way - I want to split 'resources' by every unique value into one column. This might lead to duplication of rows, which is fine... Similar to the below picture
<a href="https://i.stack.imgur.com/kurGf.png" rel="nofollow noreferrer">enter image description here</a></p>
<pre><code>CustID, Resources, New Column
222, 100;200;30;50, 100
222, 100;200;30;50, 200
222, 100;200;30;50, 30
222, 100;200;30;50, 50
</code></pre>
|
<p>For your first question, use <code>str.get_dummies</code>:</p>
<pre><code>dummies = df['Resources'].str.get_dummies(sep=";")
df = pd.concat([df, dummies], axis=1)
CustID Resources 100 200 30 50
0 222 100;200;30;50 1 1 1 1
</code></pre>
<hr />
<p>For the second question, the <code>explode</code> solution given in comments works:</p>
<pre><code>df = df.assign(
Resources=df['Resources'].str.split(';')
).explode('Resources', ignore_index=True)
CustID Resources
0 222 100
1 222 200
2 222 30
3 222 50
</code></pre>
|
python|pandas|list
| 1
|
374,125
| 66,708,320
|
Check pandas DataFrame values with pydantic
|
<p>I have DataFrame, and would like to add a new column to it with rows filled by JSONs, that consists of values from other columns. like that:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>a_field</th>
<th>b_field</th>
<th>json</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>"abc"</td>
<td>{"num":1, "name":"abc"}</td>
</tr>
</tbody>
</table>
</div>
<p>so values from row get to flat structure but with different names.</p>
<p>And I'd like to create that json using pydantic,
and also to validate such frames with pydantic when I get them from elsewhere.
What's the best way to do it?</p>
|
<p>If I get the question right, you can parse it something like this:</p>
<pre><code>from pydantic import BaseModel
import json
class JSON(BaseModel):
num: int
name: str
obj = JSON(**json.loads('{"num":1, "name":"abc"}'))
assert obj.json() == '{"num": 1, "name": "abc"}'
</code></pre>
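<p>If the goal is also to build the <code>json</code> column from the other columns, a sketch along these lines could work (the model and the column-to-field mapping here are assumptions):</p>
<pre><code>import pandas as pd
from pydantic import BaseModel

class Record(BaseModel):  # hypothetical model for the row payload
    num: int
    name: str

df = pd.DataFrame({'id': [0], 'a_field': [1], 'b_field': ['abc']})
df['json'] = df.apply(
    lambda row: Record(num=row['a_field'], name=row['b_field']).json(),
    axis=1,
)
print(df)
</code></pre>
<p>Running the model constructor over each row also validates the values, which covers the "check frames I get from elsewhere" part: invalid rows raise a <code>ValidationError</code>.</p>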
|
python|pandas|pydantic
| 2
|
374,126
| 66,747,278
|
Why does the export of a styled pandas dataframe to Excel not work?
|
<p>I would like to apply the same background color to cells that contain, for each PEOPLE instance, the name and the related name. I have tried <code>df.style.applymap</code>; it does not return an error, but it does not seem to work. Does anyone have any ideas why? Thank you.</p>
<pre><code> clrs = list(mcolors.CSS4_COLORS.keys())
for k in range(len(PEOPLE)):
if PEOPLE[k].attribute == 'child':
df1_data = [PEOPLE[k].name, PEOPLE[k].related]
df.style.applymap([lambda x: 'background-color: yellow' if x in df1_data else 'background-color: red'])
df.to_excel('styledz.xlsx', engine='openpyxl')
</code></pre>
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html" rel="nofollow noreferrer">Here</a> is some more info on <code>df.style</code>. Here I'm using some simple example because I don't have your data available:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'a': np.random.randint(0, 10, 10), 'b': np.random.randint(0, 10, 10), 'late': np.random.choice([0, 1], 10).astype(np.bool)})
def highlight_late(s):
return ['background-color: red' if s['late'] else 'background-color: green' for s_ in s]
df = df.style.apply(highlight_late, axis=1)
df.to_excel('style.xlsx', engine='openpyxl')
</code></pre>
<p>Looks in the excel file like this:</p>
<p><a href="https://i.stack.imgur.com/irc29.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/irc29.png" alt="enter image description here" /></a></p>
<hr />
<p>For <code>cell</code> based coloring use:</p>
<pre><code>def highlight_late(s):
return ['background-color: red' if s_ else 'background-color: green' for s_ in s]
df = df.style.apply(highlight_late, subset=["late"], axis=1)
</code></pre>
<p>This gives you:</p>
<p><a href="https://i.stack.imgur.com/ZaWQs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZaWQs.png" alt="enter image description here" /></a></p>
|
python|excel|pandas|dataframe|pandas-styles
| 3
|
374,127
| 66,549,990
|
How do I change the values of a column with respect to certain conditions using pandas?
|
<p>The data frame contains 5 columns named V, W, X, Y, Z.</p>
<p>I'm supposed to change the values in column X from a dataset according to:</p>
<ul>
<li>if 1 to 100, change to 1</li>
<li>if 101 to 200, change to 2</li>
<li>if 201 to 300, change to 3</li>
</ul>
<p>otherwise, change to 4</p>
<p>What's the most efficient way this can be done?</p>
|
<p>Using an example <code>df</code> and @Pygirl's idea:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'a': np.random.randint(0, 400, 10), 'b': np.random.randint(0, 400, 10), 'X': np.zeros(10)})
</code></pre>
<p>gives us:</p>
<pre><code>| | a | b | X |
|---:|----:|----:|----:|
| 0 | 237 | 188 | 0 |
| 1 | 212 | 147 | 0 |
| 2 | 135 | 30 | 0 |
| 3 | 296 | 154 | 0 |
| 4 | 133 | 219 | 0 |
| 5 | 185 | 317 | 0 |
| 6 | 365 | 5 | 0 |
| 7 | 108 | 189 | 0 |
| 8 | 358 | 34 | 0 |
| 9 | 105 | 2 | 0 |
</code></pre>
<p>with <code>pd.cut</code> <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html" rel="nofollow noreferrer">doc</a> we can get:</p>
<pre><code>df['X'] = pd.cut(df.a, [0, 100, 200, 300, np.inf], labels=[1, 2, 3, 4])
</code></pre>
<p>which results in:</p>
<pre><code>| | a | b | X |
|---:|----:|----:|----:|
| 0 | 377 | 232 | 4 |
| 1 | 61 | 11 | 1 |
| 2 | 52 | 217 | 1 |
| 3 | 191 | 42 | 2 |
| 4 | 178 | 228 | 2 |
| 5 | 235 | 206 | 3 |
| 6 | 39 | 222 | 1 |
| 7 | 316 | 210 | 4 |
| 8 | 135 | 390 | 2 |
| 9 | 44 | 311 | 1 |
</code></pre>
|
python|pandas
| 0
|
374,128
| 66,682,344
|
Numpy array bigger than the total size of images it is made up of
|
<p>I am trying to convert a directory of RGB images to a numpy array but the resulting array is way bigger than the sum of sizes of all the images put together. What is going on here?</p>
|
<p>That's because image files are usually compressed, which means the stored data is smaller than the raw pixel data. When you open an image using PIL, for example, you get access to the RGB values of every pixel, so there's more data in memory because it's uncompressed.</p>
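<p>A quick way to see the difference, assuming Pillow is installed (the flat-color image is just an easy-to-compress example):</p>
<pre><code>import os
import numpy as np
from PIL import Image

img = Image.new('RGB', (640, 480), color=(120, 60, 200))  # flat color compresses very well
img.save('demo.png')                                      # PNG is lossless but compressed
arr = np.asarray(Image.open('demo.png'))                  # uncompressed pixel data
print(os.path.getsize('demo.png'), arr.nbytes)            # file size is far below 640*480*3 bytes
</code></pre>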
|
python|python-3.x|numpy
| 2
|
374,129
| 66,699,658
|
Comparing 2 Structured Arrays that contain values of different types and NaNs
|
<p>So I have 2 structured Numpy Arrays:</p>
<pre><code>a = numpy.array([('2020-01-04', 'Test', 1, 1.0),
('2020-01-05', 'Test2', 2, NaN)],
dtype=[('Date', 'M8[D]'), ('Name', 'S8'), ('idx', 'i8'), ('value', 'f8')])
b = numpy.array([('2020-01-04', 'Test', 2, 1.0),
('2020-01-05', 'Test2', 2, NaN)],
dtype=[('Date', 'M8[D]'), ('Name', 'S8'), ('idx', 'i8'), ('value', 'f8')])
</code></pre>
<p>I need to compare the 2 arrays and get an array of <code>True/False</code> values that will indicate which indices in the array are different.</p>
<p>Doing something like:</p>
<pre><code>not_same = np.full(shape=a.shape, dtype=bool, fill_value=False)
for field in a.dtype.names:
not_same = np.logical_or(not_same,
a[field] != b[field])
</code></pre>
<p>works to a point but comparison of <code>NaN != NaN</code> is actually <code>True</code>, so I would need to use something like <code>np.allclose</code> but you can only do this if the values you're comparing are floating point (Strings blow up).</p>
<p>So I need either one of 2 things either:</p>
<ol>
<li>Determine that values in the <code>a[field]</code> are floating point or not</li>
</ol>
<p>or</p>
<ol start="2">
<li>A method of comparing 2 arrays which will allow comparison of 2 NaN values that will be give you True</li>
</ol>
<p>Per Request below regarding the error:</p>
<pre><code>dt = np.dtype([('string', 'S10'), ('val', 'f8')])
arr = np.array([('test', 1.0)], dtype=dt)
np.isreal(arr['string'])
</code></pre>
<p>Ran on Ubuntu 20.04 with Python 3.8.5</p>
|
<p>Here's the workaround I was able to use to solve this. It is not pretty, but it checks all the elements, compares them, and provides the answer. You can expand this if you want <code>np.NaN == np.NaN</code> to compare as True.</p>
<pre><code>import numpy as np
a = np.array([('2020-01-04', 'Test', 1, 1.0),
('2020-01-05', 'Test2', 2, np.NaN)],
dtype=[('Date', 'M8[D]'), ('Name', 'S8'), ('idx', 'i8'), ('value', 'f8')])
b = np.array([('2020-01-04', 'Test', 2, 1.0),
('2020-01-05', 'Test2', 2, np.NaN)],
dtype=[('Date', 'M8[D]'), ('Name', 'S8'), ('idx', 'i8'), ('value', 'f8')])
idx_ab = []
for i,j in zip(a,b):
for x,y in zip(i,j):
if (isinstance(x, float) and np.isnan(x)) or (isinstance(y, float) and np.isnan(y)):
idx_ab.append(False)
elif x == y:
idx_ab.append(True)
else:
idx_ab.append(False)
print (idx_ab)
</code></pre>
<p>Output of this will be:</p>
<pre><code>[True, True, False, True, True, True, True, False]
</code></pre>
<p>Unfortunately, you cannot just call <code>np.isnan</code> on the values directly. Both x and y have to be floats for that check; if one is a string, it will raise an error. So you need to check <code>isinstance</code> first before you check for NaN.</p>
<p>The alternate way is to use the np.isclose() or np.allclose() option that i shared the link to:</p>
<p><a href="https://stackoverflow.com/questions/10710328/comparing-numpy-arrays-containing-nan">comparing numpy arrays containing NaN</a></p>
<p>If you want to check each element in a and b separately, you can give:</p>
<pre><code>idx_ab = []
for i,j in zip(a,b):
ab = []
for x,y in zip(i,j):
if (isinstance(x, float) and np.isnan(x)) or (isinstance(y, float) and np.isnan(y)):
ab.append(False)
elif x == y:
ab.append(True)
else:
ab.append(False)
idx_ab.append(ab)
print (idx_ab)
</code></pre>
<p>The output of this will be:</p>
<pre><code>[[True, True, False, True], [True, True, True, False]]
</code></pre>
<p>If you want the result of np.NaN == np.NaN as True, add this as the first condition followed by the rest:</p>
<pre><code>if (isinstance(x, float) and isinstance(y, float) and all(np.isnan([x,y]))): ab.append(True)
</code></pre>
<p>This will result in the above answer as:</p>
<pre><code>[[True, True, False, True], [True, True, True, True]]
</code></pre>
<p>The last value is set to True as <code>a[1][3]</code> is <code>np.NaN</code> and <code>b[1][3]</code> is <code>np.NaN</code>.</p>
|
python|arrays|numpy
| 0
|
374,130
| 16,246,324
|
Applying a function on every row of numpy array
|
<p>I have a (16000000,5) numpy array, and I want to apply this function on each row.</p>
<pre><code>def f(row):
#returns a row of the same length.
return [row[0]+0.5*row[1],row[2]+0.5*row[3],row[3]-0.5*row[2],row[4]-0.5*row[3],row[4]+1]
</code></pre>
<p>vectorizing would operate slow.</p>
<p>I tried going like this</p>
<pre><code>np.column_stack((arr[:,0]+0.5*arr[:,1],arr[:,2]+0.5*arr[:,3],arr[:,3]-0.5*arr[:,2],arr[:,4]-0.5*arr[:,3],arr[:,4]+1))
</code></pre>
<p>but I get Memory Error.</p>
<p>What is the fastest way to do this?</p>
|
<pre><code>In [104]: arr=np.random.rand(1000000,5)
In [105]: %timeit a=np.column_stack((arr[:,0]+0.5*arr[:,1],arr[:,2]+0.5*arr[:,3],arr[:,3]-0.5*arr[:,2],arr[:,4]-0.5*arr[:,3],arr[:,4]+1))
10 loops, best of 3: 86.3 ms per loop
In [106]: %timeit a2=map(f,arr)
1 loops, best of 3: 10.2 s per loop
In [98]: a2=map(f,arr)
In [99]: %timeit a2=map(f,arr)
100 loops, best of 3: 10.5 ms per loop
In [100]: np.all(a==a2)
Out[100]: True
</code></pre>
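<p>If the <code>MemoryError</code> is the main concern, one sketch-level option is to preallocate the output and fill it column by column, so only one temporary of length <code>n</code> exists at a time instead of the five that <code>column_stack</code> holds simultaneously:</p>
<pre><code>import numpy as np

arr = np.random.rand(1000000, 5)   # stand-in for the (16000000, 5) array
out = np.empty_like(arr)
out[:, 0] = arr[:, 0] + 0.5 * arr[:, 1]
out[:, 1] = arr[:, 2] + 0.5 * arr[:, 3]
out[:, 2] = arr[:, 3] - 0.5 * arr[:, 2]
out[:, 3] = arr[:, 4] - 0.5 * arr[:, 3]
out[:, 4] = arr[:, 4] + 1
</code></pre>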
|
python|numpy
| 2
|
374,131
| 16,415,730
|
Python indexing 2D array
|
<p>How can I do indexing of a 2D array column-wise? For example: </p>
<pre><code>array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29],
[30, 31, 32, 33, 34],
[35, 36, 37, 38, 39],
[40, 41, 42, 43, 44],
[45, 46, 47, 48, 49]])
</code></pre>
<p>This is a 2D array. I can access to it column wise by using <code>a[:,0]</code> which will give me the first column. But if I want to read all column at a time and want to pick values for example</p>
<pre><code>[5]
[10][15]
[20][25][37]
</code></pre>
<p>then it should pick the values like </p>
<pre><code>20
45, 21
46,22, 33
</code></pre>
<p>I know it must be easy, but I am still learning. </p>
|
<p>If you want [5] to give 20, you must be starting to count from 1. Since Python starts counting from 0, that's a habit to break now: it'll only cause headaches.</p>
<p>I'm not sure what output format you want because numpy doesn't support ragged arrays, but maybe</p>
<pre><code>>>> idx = np.array([5, 10, 15, 20, 25, 37])
>>> a.T.flat[idx-1]
array([20, 45, 21, 46, 22, 33])
</code></pre>
<p>would suffice? Here I had to take the transpose, view it as a flat array, and then subtract 1 from the indices to match the way you seem to be counting.</p>
<p>We can use a list instead of an array (but then we'd need to do a listcomp or something to subtract the 1s.) For example:</p>
<pre><code>>>> a.T.flat[[4, 9, 14, 19, 24, 36]]
array([20, 45, 21, 46, 22, 33])
</code></pre>
|
python|numpy
| 2
|
374,132
| 16,420,673
|
Pythonic way to access certain parts of an array
|
<p>I have a 2D numpy array containing x (data[:,0]) and y(data[:,1]) information for a plot.</p>
<p>I’d like to fit a curve to the data, but only using certain parts of the data to determine the fitting parameters (e.g. using data in the range x = x1 -> x2, and x3 -> x4). My plan to do this is to create a new numpy array only containing the data I intend to pass to a SciPy CurveFitting routine.</p>
<pre><code>index_range1 = np.where((data[:,0] > x1) and (data[:,0] < x2))
index_range2 = np.where((data[:,0] > x3) and (data[:,0] < x4))
</code></pre>
<p>and then I'd use these index ranges to pull the data of interest into a new array which I could pass to CurveFit.</p>
<p>Firstly, given that Python can handle complex arrays, this seems a very un-pythonic approach. Secondly, when running my script, I get an error saying that I need to use .any() or .all() in my expression for index_range 1 and 2.</p>
<p>I wonder, therefore, if anyone has any suggestions for an improved, more pythonic approach to solving this problem.</p>
<p>Thanks!</p>
|
<p>To get a boolean array out of two others, use <code>&</code> to compare element-wise:</p>
<pre><code>index_range1 = np.where((data[:,0] > x1) & (data[:,0] < x2))
index_range2 = np.where((data[:,0] > x3) & (data[:,0] < x4))
</code></pre>
<p>Using <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing" rel="nofollow">boolean arrays</a> to index an array is probably more 'pythonic'. You don't need to find (using <code>where</code>) and save the indices, you can access the data directly from the array:</p>
<pre><code>range1 = data[(data[:,0] > x1) & (data[:,0] < x2)]
range2 = data[(data[:,0] > x3) & (data[:,0] < x4)]
</code></pre>
<p>You could shorten this/make it more readable with:</p>
<pre><code>x, y = data.T
range1 = data[(x > x1) & (x < x2)]
range2 = data[(x > x3) & (x < x4)]
</code></pre>
<p>Note: <code>x</code> and <code>y</code> are <em>views</em>, so modifying <code>x</code>, <code>y</code> modifies <code>data</code>, and there's no copying so it shouldn't slow your code down. But the <code>range</code>s are <em>copies</em>, since <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing" rel="nofollow">fancy indexing</a> makes a copy, so modifying them won't affect <code>x</code>, <code>y</code>, or <code>data</code>.</p>
|
python|arrays|numpy
| 2
|
374,133
| 16,043,299
|
Substitute for numpy broadcasting using scipy.sparse.csc_matrix
|
<p>I have in my code the following expression:</p>
<pre><code>a = (b / x[:, np.newaxis]).sum(axis=1)
</code></pre>
<p>where <code>b</code> is an ndarray of shape <code>(M, N)</code>, and <code>x</code> is an ndarray of shape <code>(M,)</code>. Now, <code>b</code> is actually sparse, so for memory efficiency I would like to substitute in a <code>scipy.sparse.csc_matrix</code> or <code>csr_matrix</code>. However, broadcasting in this way is not implemented (even though division or multiplication is guaranteed to maintain sparsity) (the entries of <code>x</code> are non-zero), and raises a <code>NotImplementedError</code>. Is there a <code>sparse</code> function I'm not aware of that would do what I want? (<code>dot()</code> would sum along the wrong axis.)</p>
|
<p>If <code>b</code> is in CSC format, then <code>b.data</code> has the non-zero entries of <code>b</code>, and <code>b.indices</code> has the row index of each of the non-zero entries, so you can do your division as:</p>
<pre><code>b.data /= np.take(x, b.indices)
</code></pre>
<p>It's hackier than Warren's elegant solution, but it will probably also be faster in most settings:</p>
<pre><code>b = sps.rand(1000, 1000, density=0.01, format='csc')
x = np.random.rand(1000)
def row_divide_col_reduce(b, x):
data = b.data.copy() / np.take(x, b.indices)
ret = sps.csc_matrix((data, b.indices.copy(), b.indptr.copy()),
shape=b.shape)
return ret.sum(axis=1)
def row_divide_col_reduce_bis(b, x):
d = sps.spdiags(1.0/x, 0, len(x), len(x))
return (d * b).sum(axis=1)
In [2]: %timeit row_divide_col_reduce(b, x)
1000 loops, best of 3: 210 us per loop
In [3]: %timeit row_divide_col_reduce_bis(b, x)
1000 loops, best of 3: 697 us per loop
In [4]: np.allclose(row_divide_col_reduce(b, x),
...: row_divide_col_reduce_bis(b, x))
Out[4]: True
</code></pre>
<p>You can cut the time almost in half in the above example if you do the division in-place, i.e.:</p>
<pre><code>def row_divide_col_reduce(b, x):
b.data /= np.take(x, b.indices)
return b.sum(axis=1)
In [2]: %timeit row_divide_col_reduce(b, x)
10000 loops, best of 3: 131 us per loop
</code></pre>
|
python|numpy|scipy|sparse-matrix
| 12
|
374,134
| 57,303,955
|
Should the embedding layer be changed during training a neural network?
|
<p>I'm new to the field of deep learning and PyTorch.</p>
<p>Recently, while working through one of the <a href="https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html#sphx-glr-beginner-nlp-advanced-tutorial-py" rel="nofollow noreferrer">pytorch tutorial examples for an NER task</a>, I found that the embedding of nn.Embedding changed during training.</p>
<p>So my question is: should the embedding be changed while training the network?</p>
<p>And if I want to load a pre-trained embedding (for example, a trained word2vec embedding) into a PyTorch embedding layer, should the pre-trained embedding also be changed during the training process?</p>
<p>Or how could I prevent updating the embeddings?</p>
<p>Thank you.</p>
|
<p>One can either learn embeddings during the task, finetune them for the task at hand, or leave them as they are (provided they have been learned in some fashion before).</p>
<p>In the last case, with standard embeddings like word2vec, one eventually finetunes (using a small learning rate), but uses the vocabulary and embeddings provided. When it comes to current SOTA models like BERT, fine-tuning on your data should always be done, but in an unsupervised way (as trained originally).</p>
<p>Easiest way to use them is static method of <code>torch.nn.Embedding.from_pretrained</code> (<a href="https://pytorch.org/docs/stable/nn.html#embedding" rel="nofollow noreferrer">docs</a>) and provide Tensor with your pretrained data.</p>
<p>If you want the layer to be trainable, pass <code>freeze=False</code>; by default the layer is frozen (<code>freeze=True</code>), which is what you want here.</p>
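<p>A minimal sketch, with random vectors standing in for real word2vec weights:</p>
<pre><code>import torch

pretrained = torch.randn(1000, 300)  # vocab_size x embedding_dim stand-in
emb_frozen = torch.nn.Embedding.from_pretrained(pretrained)               # frozen (default)
emb_tuned = torch.nn.Embedding.from_pretrained(pretrained, freeze=False)  # trainable
print(emb_frozen.weight.requires_grad, emb_tuned.weight.requires_grad)    # False True
</code></pre>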
|
python|pytorch
| 2
|
374,135
| 57,404,020
|
comparing date time values in a pandas DataFrame with a specific date_time value and returning the closest one
|
<p>I have a date column in a pandas DataFrame as follows:</p>
<pre><code>index date_time
1 2013-01-23
2 2014-01-23
3 2015-8-14
4 2015-10-23
5 2016-10-28
</code></pre>
<p>I want to compare the values in the <code>date_time</code> column with a specific date, for example <code>date_x = 2015-9-14</code>, and return the date that is before this date and closest to it, which is <code>2015-8-14</code>. </p>
<p>I thought about converting the values in the <code>date_time</code> column to a list and then comparing them with the specific date. However, I do not think it is an efficient solution. </p>
<p>Any solution?</p>
<p>Thank you. </p>
|
<p>Here is one way using <code>searchsorted</code>. My method assumes the data is already ordered; if not, do <code>df=df.sort_values('date_time')</code> first.</p>
<pre><code>df.date_time=pd.to_datetime(df.date_time)
date_x = '2015-9-14'
idx=np.searchsorted(df.date_time,pd.to_datetime(date_x))
df.date_time.iloc[idx-1]
Out[408]:
2 2015-08-14
Name: date_time, dtype: datetime64[ns]
</code></pre>
<p>Or we can do </p>
<pre><code>s=df.date_time-pd.to_datetime(date_x)
df.loc[[s[s.dt.days<0].index[-1]]]
Out[417]:
index date_time
2 3 2015-08-14
</code></pre>
|
python|pandas
| 2
|
374,136
| 57,354,545
|
Number formatting after mapping?
|
<p>I have a data frame with a number column, such as:</p>
<pre><code>CompteNum
100
200
300
400
500
</code></pre>
<p>and a file with the mapping of all these numbers to other numbers, that I import to python and convert into a dictionary:</p>
<pre><code>{100: 1; 200:2; 300:3; 400:4; 500:5}
</code></pre>
<p>And I am creating a second column in the data frame that combines both numbers in the format df number + dict number: from 100 to 1001, and so on...</p>
<pre><code>## dictionary
accounts = pd.read_excel("mapping-accounts.xlsx")
accounts = accounts[['G/L Account #','FrMap']]
accounts = accounts.set_index('G/L Account #').to_dict()['FrMap']
## data frame --> CompteNum is the Number Column
df['CompteNum'] = df['CompteNum'].map(accounts).astype(str) + df['CompteNum'].astype(str)
</code></pre>
<p>The problem is that my output then is 100.01.0 instead of 1001 and that creates additional manual work in the output excel file. I have tried:</p>
<pre><code>df['CompteNum'] = df['CompteNum'].str.replace('.0', '')
</code></pre>
<p>but it doesn't delete ALL the zeros, and I would want the additional ones deleted. Any suggestions?</p>
|
<p>There is a problem with missing values for non-matched keys after <code>map</code>; a possible solution is:</p>
<pre><code>print (df)
CompteNum
0 100
1 200
2 300
3 400
4 500
5 40
accounts1 = {100: 1, 200:2, 300:3, 400:4, 500:5}
s = df['CompteNum'].astype(str)
s1 = df['CompteNum'].map(accounts1).dropna().astype(int).astype(str)
df['CompteNum'] = (s + s1).fillna(s)
print (df)
CompteNum
0 1001
1 2002
2 3003
3 4004
4 5005
5 40
</code></pre>
<p>Your solution should be changed to a regex replace: <code>$</code> anchors the end of the string, and <code>.</code> must be escaped because it is a special regex character (it matches any char):</p>
<pre><code>df['CompteNum'] = df['CompteNum'].str.replace('\.0$', '')
</code></pre>
|
python|pandas|dictionary|replace
| 0
|
374,137
| 57,684,467
|
Create Multilevel DataFrame by reading in data from multiple files using read_csv() [SOLVED]
|
<p>I have 10 files with the following identical format and column names (values are different across different files): </p>
<pre><code> event_code timestamp counter
0 9071 1165783 NaN
1 9070 1165883 NaN
2 8071 1166167 NaN
3 7529 NaN 0.0
4 8529 NaN 1.0
5 9529 NaN 1.0
</code></pre>
<p>Due to the nature of the files, I am trying to store these data in multilevel dataframe like the following: (Eventually, I would want the <code>box_num</code> level to go all the way to 10)</p>
<pre><code>box_num 1 2 ...
col_names event_code timestamp counter |event_code timestamp counter
0 9071 1270451 1 | 8529 NaN 1 ...
1 9070 1270484 0 | 9529 NaN 0 ...
2 9071 1270736 1 | 5520 3599167 2 ...
3 9070 1272337 3 | 7171 3599169 1 ...
</code></pre>
<p>I initially thought I could make a multilevel dataframe with a dictionary, using the keys as the hierarchical index and each dataframe as the subordinate level </p>
<pre><code>col_names = ['event_code','timestamp', 'counter']
df_dict = {}
for i in range(len(files)):
f = files[i] # actual file name
df = pd.read_csv(f, sep=":", header=None, names=col_names)
df_dict[i+1] = df # 'i+1' so that dict_key can correspond to actual box number
</code></pre>
<p>But I soon realized that I can't create a multilevel index or dataframe from a dictionary. So to create a Multilevel Index, this is what I did, but now I am stuck on what to do next... </p>
<p><code>(box_num, col_list) = df_dict.keys(), list(df_dict.values())[0].columns
</code></p>
<p>If there are other more efficient, concise ways to approach this problem, please let me know as well. Ideally, I would like to create the multilevel dataframe right after the for loop</p>
<h3><strong><em>::UPDATE:: [SOLVED]</em></strong></h3>
<p>So I eventually figured out a way to create a multilevel dataframe from the for loop using pd.concat(). I'll post my answer below. Hopefully it's helpful to someone. </p>
<pre><code>col_names = ['event_code', 'timestamp', 'counter']
result = []
box_num = []
for i in range(len(files)):
f = files[i]
box_num.append(i+1) # box_number
df = pd.read_csv(f, sep=":", header=None, names=col_names)
result.append(df)
# # pd.concat() combines all the Series in the 'result' list
# # 'Keys' option adds a hierarchical index at the outermost level of the data.
final_df = pd.concat(result, axis=1, keys=box_num, names=['Box Number','Columns'])
</code></pre>
|
<p>I think you should use a pivot table or the pandas groupby function for this task. Neither will give you exactly what you have requested above, but it will be simpler to use. </p>
<p>Using your code as a starting point:</p>
<pre><code>col_names = ['event_code','timestamp', 'counter']
data = pd.DataFrame()
for i in range(len(files)):
f = files[i]
df = pd.read_csv(f, sep=":", header=None, names=col_names)
# instead of a dictionary try creating a master DataFrame
df['box_num'] = i
data = pd.concat([data, df]).reset_index(drop=True)
data['idx'] = data.index
# option 1 create a pivot table
pivot = data.pivot(index='idx', columns='box_num', values=col_names)
# option 2 use pandas groupby function
group = data.groupby(['idx', 'box_num']).mean()
</code></pre>
<p>Hopefully one of these can help you get going in the right direction or work for what you are trying to accomplish. Good luck!</p>
|
python|pandas
| 0
|
374,138
| 57,421,804
|
Aggregate dataframe columns by hourly index
|
<p>So I have a pandas dataframe that is taking in / out interface traffic every 10 minutes. I want to aggregate the two time series into hourly buckets for analysis. What seems to be simple has actually ended up being quite challenging for me to figure out! Just need to bucket into hourly bins</p>
<pre><code>times = list()
ins = list()
outs = list()
for row in results['results']:
times.append(row['DateTime'])
ins.append(row['Intraffic'])
outs.append(row['Outtraffic'])
df = pd.DataFrame()
df['datetime'] = times
df['datetime'] = pd.to_datetime(df['datetime'])
df.index = df['datetime']
df['ins'] = ins
df['outs'] = outs
</code></pre>
<p>I have tried using</p>
<pre><code>df.resample('H').mean()
</code></pre>
<p>I have tried pandas <code>groupby</code>, but was having trouble with the two columns and getting the means over the hourly bucket.</p>
|
<p>I believe this should do what you want:</p>
<pre><code>df = pd.DataFrame()
df['datetime'] = times
df['datetime'] = pd.to_datetime(df['datetime'])
df.set_index('datetime',inplace=True) # This won't try to remap your rows
new_df = df.groupby(pd.Grouper(freq='H')).mean()
</code></pre>
<p>That last line groups your data by timestamp into hourly chunks, based on the index, and then spits out a new DataFrame with the mean of each column. (Note that <code>df.resample('H').mean()</code> likewise returns a new DataFrame rather than modifying <code>df</code> in place, so make sure you assign its result.)</p>
|
python|pandas
| 0
|
374,139
| 57,658,038
|
Print a dataframe to a specific column/row location like (1,2) using xlwings
|
<p>Trying to find out how to print to a specific column/row, similar to how
pd.to_excel(startcol = 1, startrow = 1) works. I have to do this in an open excel workbook, and found the library xlwings. I'm currently using openpyxl; how would I do this in xlwings? I read the documentation on writing to specific cells like A1, but not on specifying columns/rows. </p>
<pre><code>#Write to Excel
book = load_workbook('Test.xlsx')
writer = pd.ExcelWriter('Test.xlsx', engine='openpyxl')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
def addtoexcel(df, row):
i = 0
df[["val1", "val2"]] = df[["val1", "val2"]].apply(pd.to_numeric)
line = int(df.loc[i][1])
for i in range(1, line+1):
if line ==i:
df = df.T
df.to_excel(writer, "test", index = False, header = False, startcol = line+2, startrow = row)
</code></pre>
<p>How can I print in xlwings by specifying column/row like (1,1)?</p>
|
<p>You can easily print a pandas dataframe to excel using xlwings. The range object takes a row and a column number as arguments (or just a cell reference as a string). Consider the following code:</p>
<pre><code>import xlwings as xw
import pandas as pd
row = 1
column = 2
path = 'your/path/file.xlsx'
df = pd.DataFrame({'A' : [5,5,5],
'B' : [6,6,6]})
wb = xw.Book(path)
sht = wb.sheets["Sheet1"]
sht.range(row, column).value = df
</code></pre>
<p>You can also add options to include index/header:</p>
<pre><code>sht.range(row, column).options(index=False, header=False).value = df
</code></pre>
|
python|pandas|xlwings
| 3
|
374,140
| 57,525,309
|
Make multiple plots at once
|
<p>I have a df that looks like this:</p>
<pre><code> date group score origin ...
0 1 group1 1 0 ...
1 1 group2 2 1 ...
2 2 group2 5 2 ...
3 2 group1 2 3 ...
4 1 group1 1 3 ...
5 1 group2 2 2 ...
6 2 group2 5 1 ...
7 2 group1 2 0 ...
</code></pre>
<p>and I need to make multiple, separate line plots. One for each unique value of <code>origin</code>, with <code>date</code> on the x axis and <code>score</code> on the y.</p>
<p>Currently my code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>sns.lineplot(x='date', y='score', hue='group', style_order=order, data=df)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
</code></pre>
<p>Instead of running this multiple times, I'd like to be able to have 1 statement that produces separate plots for each unique value of origin. I've tried a few variations of <code>groupby</code> or <code>subplot</code> but nothing has worked.</p>
|
<p>Try using <a href="https://seaborn.pydata.org/generated/seaborn.FacetGrid.html" rel="nofollow noreferrer"><code>seaborn.FacetGrid</code></a> for simple control over these types of plots:</p>
<pre><code>g = sns.FacetGrid(df, row='origin', hue='group')
g.map(sns.lineplot, 'date', 'score')
</code></pre>
<p>[out]</p>
<p><a href="https://i.stack.imgur.com/YMHrI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YMHrI.png" alt="enter image description here"></a></p>
|
python|python-3.x|pandas|matplotlib|seaborn
| 3
|
374,141
| 57,474,267
|
Efficient and elegant way to fill values in pandas column based on each groups
|
<pre><code>df_new = pd.DataFrame(
{
'person_id': [1, 1, 3, 3, 5, 5],
'obs_date': ['12/31/2007', 'NA-NA-NA NA:NA:NA', 'NA-NA-NA NA:NA:NA', '11/25/2009', '10/15/2019', 'NA-NA-NA NA:NA:NA']
})
</code></pre>
<p>It looks like as shown below</p>
<p><a href="https://i.stack.imgur.com/bngk0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bngk0.png" alt="enter image description here"></a></p>
<p>What I would like to do is replace/fill <code>NA</code> type rows with actual date values from the same group. For which I tried the below</p>
<pre><code>m1 = df_new['obs_date'].str.contains('^\d')
df_new['obs_date'] = df_new.groupby((m1).cumsum())['obs_date'].transform('first')
</code></pre>
<p>But this gives an unexpected output like shown below</p>
<p><a href="https://i.stack.imgur.com/PE5eY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PE5eY.png" alt="enter image description here"></a></p>
<p>Here for the 2nd row it should have been <code>11/25/2009</code> from person_id = 3 instead it is from the 1st group of person_id = 1.</p>
<p>How can I get the expected output as shown below</p>
<p><a href="https://i.stack.imgur.com/KDSkv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KDSkv.png" alt="enter image description here"></a></p>
<p>Any elegant and efficient solution is helpful as I am dealing with more than million records</p>
|
<p>First use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a> with <code>errors='coerce'</code> for convert non datetimes to missing values, then <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.first.html" rel="nofollow noreferrer"><code>GroupBy.first</code></a> for get first non missing value in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> new column filled by data:</p>
<pre><code>df_new['obs_date'] = pd.to_datetime(df_new['obs_date'], format='%m/%d/%Y', errors='coerce')
df_new['obs_date'] = df_new.groupby('person_id')['obs_date'].transform('first')
#alternative - minimal value per group
#df_new['obs_date'] = df_new.groupby('person_id')['obs_date'].transform('min')
print (df_new)
person_id obs_date
0 1 2007-12-31
1 1 2007-12-31
2 3 2009-11-25
3 3 2009-11-25
4 5 2019-10-15
5 5 2019-10-15
</code></pre>
<p>Another idea is use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>DataFrame.sort_values</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.first.html" rel="nofollow noreferrer"><code>GroupBy.first</code></a>:</p>
<pre><code>df_new['obs_date'] = pd.to_datetime(df_new['obs_date'], format='%m/%d/%Y', errors='coerce')
df_new['obs_date'] = (df_new.sort_values(['person_id','obs_date'])
.groupby('person_id')['obs_date']
.ffill())
print (df_new)
person_id obs_date
0 1 2007-12-31
1 1 2007-12-31
2 3 2009-11-25
3 3 2009-11-25
4 5 2019-10-15
5 5 2019-10-15
</code></pre>
|
python|python-3.x|pandas|dataframe|pandas-groupby
| 2
|
374,142
| 57,694,732
|
Python uses wrong Package version
|
<p>Hi, I am trying to launch the <code>object_detection_tutorial</code> on my PC. I run the following code to load a (frozen) <code>Tensorflow</code> model into memory: </p>
<pre><code>detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
</code></pre>
<blockquote>
<p>ValueError: No op named NonMaxSuppressionV2 in defined operations.</p>
</blockquote>
<p>I googled the error, and upgrading the tensorflow version to 1.4 should fix the bug. In my code I used tensorflow 1.13 and it worked in Google Cloud. But even after uninstalling and installing e.g. tensorflow 1.4, Python still uses 1.2.1.</p>
<p>Picture of my code: <a href="https://ibb.co/VYkq2rF" rel="nofollow noreferrer">https://ibb.co/VYkq2rF</a></p>
|
<p>Looks like the module is not installed correctly. Try creating a new environment using conda and set up Object Detection in it. It should resolve your issue. </p>
<p>Also, as a best practice, it is always better to work in a conda environment than in the base environment. </p>
<p>Please use below command to create an empty new environment and then install packages on it.</p>
<pre><code>conda create --no-default-packages --name <env_name> python=<version>
</code></pre>
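<p>For example (the environment name and versions here are only illustrative):</p>
<pre><code>conda create --no-default-packages --name tfod python=3.6
conda activate tfod
pip install tensorflow==1.13.1
</code></pre>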
|
python|tensorflow|pip|object-detection
| 0
|
374,143
| 57,505,365
|
how to plot a scatter plot for the following data in python?
|
<p>I split the data into training and testing sets using the function train_test_split() and got the following.</p>
<p><code>print(X_train)</code>
<code>
+--------------------+
| fre loc |
+--------------------+
| 1.208531 0.010000 |
| 0.169742 0.010000 |
| 0.119691 0.010000 |
| 0.151515 0.010000 |
| 0.632653 0.010000 |
| 0.104000 1.125000 |
| 3.313433 1.076923 |
| 0.323899 0.010000 |
| 3.513011 1.100000 |
| 0.184971 0.010000 |
| 0.158470 0.010000 |
| 0.175258 0.010000 |
| 0.149038 0.010000 |
| 0.158879 0.010000 |
+--------------------+
</code></p>
<p><code>print(X_test)</code>
<code>
+--------------------+
| fre loc |
+--------------------+
| 1.208531 0.010000 |
| 0.169742 0.010000 |
| 0.119691 0.010000 |
| 0.151515 0.010000 |
| 0.632653 0.010000 |
| 0.104000 1.125000 |
| 3.313433 1.076923 |
+--------------------+
</code></p>
<p><code>print(y_train)</code>
<code>
+----------------+
| Critical Value |
+----------------+
| 1.208531 |
| 0.000000 |
| 0.000000 |
| 0.000000 |
| 0.632653 |
| 1.125000 |
| 4.390356 |
| 0.000000 |
| 4.613011 |
| 0.000000 |
| 0.000000 |
| 0.000000 |
| 0.000000 |
| 0.000000 |
+----------------+
</code></p>
<p><code>print(y_test)</code>
<code>
+----------------+
| Critical Value |
+----------------+
| 1.208531 |
| 0.000000 |
| 0.000000 |
| 0.000000 |
| 0.632653 |
| 1.125000 |
| 4.390356 |
+----------------+
</code></p>
<p>Then I performed Gradient Boosting Regressor in the following manner,</p>
<p><code>est_knc= GradientBoostingRegressor()
est_knc.fit(X_train, y_train)
pred = est_knc.score(X_test, y_test)
print(pred)
</code></p>
<p>and got the output,
<code>0.8879530974429752</code></p>
<p>It's ok till here. Now I want to plot this, but it's quite confusing for me to understand what parameters I have to pass, and how, in order to plot a scatter plot using the above data. I'm new to visualisation. :(</p>
|
<p>Try out scatter plots for the different data sets you created and the different results you obtained. Then of course you will see the patterns.</p>
<p>Here is a code snippet I used for creating scatter plots. Hope it helps if you are new to visualization.
Here I take inputs for x and y from two separate files as xdata.txt and ydata.txt. They should be simple files with the data you want to plot separated by new lines.</p>
<p>i.e.</p>
<pre><code>xdata.txt file
1.208531
0.169742
0.119691
0.151515
0.632653
0.104000
3.313433
ydata.txt file
0.010000
0.010000
0.010000
0.010000
0.010000
1.125000
1.076923
</code></pre>
<p>but of course you can change this and create your own numpy arrays to get the data to plot in a convenient way. </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.fromfile("xdata.txt",float,-1," ")
y = np.fromfile("ydata.txt",float,-1," ")
plt.scatter(x, y,alpha=0.5)
plt.show()
</code></pre>
<p>If the imports are not working then you will have to install the required packages using pip.</p>
|
python-3.x|pandas|dataframe|matplotlib|plot
| 1
|
374,144
| 57,531,811
|
Filtering rows in DataFrame with dependent conditions
|
<p>Apologies if this has already been asked: </p>
<p>I want to remove all rows with values between 15-25 in one column AND have a specific string in another column. </p>
<p>For example: </p>
<pre><code>options = ['pizza', 'pasta']
df2 = df[(~df['columnA'].between(15, 25)) & df.loc[~df['columnB'].isin(options)]]
</code></pre>
<p>So if a row has a value of 15-25 in columnA but does not have 'pizza' or 'pasta' in columnB, then I want that row to be retained...</p>
<p><strong>Solution:</strong> </p>
<pre><code>df[~((df['columnA'].between(15, 25)) & (df['columnB'].isin(options)))]
</code></pre>
|
<p>The easiest to understand is negating the entire condition, like <code>~((...) & (...))</code>:</p>
<pre><code>df[~((df['columnA'].between(15, 25)) & (df['columnB'].isin(options)))]</code></pre>
<p>Or you can use <a href="https://en.wikipedia.org/wiki/De_Morgan%27s_laws" rel="nofollow noreferrer"><em>De Morgan's laws</em> [wiki]</a>, and specify this as <code>(~ ...) | (~ ...)</code></p>
<pre><code>df[(~df['columnA'].between(15, 25)) | (~df['columnB'].isin(options))]</code></pre>
<p>So the negation of <em>x ∧ y</em> is <em>(¬x)∨(¬y)</em>.</p>
|
python|pandas
| 2
|
374,145
| 57,487,272
|
Calculate the sum of the numbers separated by a comma in a dataframe column
|
<p>I am trying to calculate the sum of all the numbers separated by a comma in a dataframe column, however I keep getting an error. This is what the dataframe looks like:</p>
<pre><code>Description scores
logo
graphics
eyewear 0.360740,-0.000758
glasses 0.360740,-0.000758
picture -0.000646
tutorial 0.001007,0.000968,0.000929,0.000889
computer 0.852264 0.001007,0.000968,0.000929,0.000889
</code></pre>
<p>This is what the code looks like</p>
<pre><code>test['Sum'] = test['scores'].apply(lambda x: sum(map(float, x.split(','))))
</code></pre>
<p>However I keep getting the following error</p>
<pre><code>ValueError: could not convert string to float:
</code></pre>
<p>I thought it could be because of the missing values at the start of the dataframe. But even when I subset the dataframe to exclude the missing values, I still get the same error.</p>
<p>Output</p>
<pre><code>Description scores SUM
logo
graphics
eyewear 0.360740,-0.000758 0.359982
glasses 0.360740,-0.000758 0.359982
picture -0.000646 -0.000646
tutorial 0.001007,0.000968,0.000929,0.000889 0.003793
computer 0.852264 0.001007,0.000968,0.000929,0.000889 0.856057
</code></pre>
<p>How can I resolve it?</p>
|
<p>There are times when using Python seems to be very effective; this might be one of those times.</p>
<pre><code>df['scores'].apply(lambda x: sum(float(i) if len(x) > 0 else np.nan for i in x.split(',')))
0 NaN
1 NaN
2 0.359982
3 0.359982
4 -0.000646
5 0.003793
6 0.856057
</code></pre>
|
python-3.x|pandas|dataframe|lambda|apply
| 2
|
374,146
| 57,485,157
|
Tensorflow: Save metrics every certain steps
|
<p>I have trained a model and want to access the standard metrics plus a custom metric, every 100 steps. </p>
<pre><code>def top3error(features, labels, predictions):
return {'top3error': tf.metrics.mean(tf.nn.in_top_k(predictions=predictions['logits'],
targets=labels,
k=3))}
m = tf.estimator.DNNClassifier(
# Config & Params
config=config
model_dir=model_dir,
n_classes=n_classes,
# Model
feature_columns=deep_columns,
hidden_units=[256, 128],
activation_fn='relu',
optimizer=tf.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001)
)
'''Add metrics'''
m = tf.estimator.add_metrics(m, top3error)
</code></pre>
<p>I have added a config variable to specify that every 100 steps I want to save the metrics, and I have been able to print the metrics. </p>
<pre><code>config=tf.estimator.RunConfig(
model_dir=model_dir,
save_checkpoints_steps=100,
)
</code></pre>
<p>However, after the model has been trained I can only access the last value for each metric. Is it possible to save the metrics every time there is a checkpoint?</p>
<pre><code> results = tf.estimator.train_and_evaluate(m, train_spec, eval_spec)
</code></pre>
|
<p>You can use tensorflow summaries. You can also use them to visualize your metric in TensorBoard.</p>
<p>Check this out: <a href="https://stackoverflow.com/questions/42164772/tensorflow-estimator-api-summaries">Tensorflow Estimator API: Summaries</a></p>
|
tensorflow|hook|metrics
| 0
|
374,147
| 57,532,688
|
Indexing 3d numpy array with 2d array
|
<p>I would like to create a numpy 2d-array based on values in a numpy 3d-array, using another numpy 2d-array to determine which element to use in axis 3.</p>
<pre><code>import numpy as np
#--------------------------------------------------------------------
arr_3d = np.arange(2*3*4).reshape(2,3,4)
print('arr_3d shape=', arr_3d.shape, '\n', arr_3d)
arr_2d = np.array(([3,2,0], [2,3,2]))
print('\n', 'arr_2d shape=', arr_2d.shape, '\n', arr_2d)
res_2d = arr_3d[:, :, 2]
print('\n','res_2d example using element 2 of each 3rd axis...\n', res_2d)
res_2d = arr_3d[:, :, 3]
print('\n','res_2d example using element 3 of each 3rd axis...\n', res_2d)
</code></pre>
<p>Results...</p>
<pre><code>arr_3d shape= (2, 3, 4)
[[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[[12 13 14 15]
[16 17 18 19]
[20 21 22 23]]]
arr_2d shape= (2, 3)
[[3 2 0]
[2 3 2]]
res_2d example using element 2 of each 3rd axis...
[[ 2 6 10]
[14 18 22]]
res_2d example using element 3 of each 3rd axis...
[[ 3 7 11]
[15 19 23]]
</code></pre>
<p>The 2 example results show what I get if I use the 2nd and then the 3rd element of axis 3. But I would like to get the element from arr_3d, specified by arr_2d. So...</p>
<pre><code>- res_2d[0,0] would use the element 3 of arr_3d axis 3
- res_2d[0,1] would use the element 2 of arr_3d axis 3
- res_2d[0,2] would use the element 0 of arr_3d axis 3
etc
</code></pre>
<p>So res_2d should look like this...</p>
<pre><code>[[3 6 8]
[14 19 22]]
</code></pre>
<p>I tried using this line to get the arr_2d entries, but it results in a 4-dim array and I want a 2-dim array. </p>
<pre><code>res_2d = arr_3d[:, :, arr_2d[:,:]]
</code></pre>
|
<p>The shape of the result from fancy indexing with broadcasting is the (broadcast) shape of the indexing arrays. You need to pass a 2d index array for each axis of <code>arr_3d</code>:</p>
<pre><code>ax_0 = np.arange(arr_3d.shape[0])[:,None]
ax_1 = np.arange(arr_3d.shape[1])[None,:]
arr_3d[ax_0, ax_1, arr_2d]
Out[1127]:
array([[ 3, 6, 8],
[14, 19, 22]])
</code></pre>
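<p>On NumPy 1.15+, <code>np.take_along_axis</code> builds those index grids for you; a sketch with the question's data:</p>
<pre><code>import numpy as np

arr_3d = np.arange(2 * 3 * 4).reshape(2, 3, 4)
arr_2d = np.array([[3, 2, 0], [2, 3, 2]])
res_2d = np.take_along_axis(arr_3d, arr_2d[..., None], axis=2)[..., 0]
print(res_2d)
# [[ 3  6  8]
#  [14 19 22]]
</code></pre>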
|
python|numpy|multidimensional-array|indexing
| 5
|
374,148
| 57,619,816
|
Hashing_trick in Keras. How it works?
|
<p>Needs basic understanding of one hot or hashing trick in keras.</p>
<pre><code>from keras.preprocessing.text import hashing_trick
from keras.preprocessing.text import text_to_word_sequence
# define the document
text = 'The quick brown fox jumped over the lazy dog dog.'
# estimate the size of the vocabulary
words = set(text_to_word_sequence(text))
print(words)
vocab_size = len(words)
print(vocab_size)
# integer encode the document
result = hashing_trick(text, round(vocab_size*1.3), hash_function='md5')
print(text)
print(result)
</code></pre>
<p>Output:</p>
<p>{'over', 'the', 'lazy', 'dog', 'quick', 'brown', 'jumped', 'fox'}
8
The quick brown fox jumped over the lazy dog dog.
[6, 4, 1, 2, 7, 5, 6, 2, 6, 6]</p>
<p>Conclusion:
Each token is assigned an integer here. For example, "quick" is assigned 4:<br>
the--6<br>
quick--4<br>
brown--1<br>
fox--2<br>
jumped--7<br>
over--5<br>
the--6<br>
lazy--2<br>
dog--6</p>
<p>I would like to understand how "the" and "dog" are both assigned the integer 6.
Correct me if I am wrong, and please explain exactly how the hashing works.</p>
|
<p>This is an example of a <a href="https://en.wikipedia.org/wiki/Hash_table#Collision_resolution" rel="nofollow noreferrer">hashing collision</a>. The hash function is just a function computed on the input words. For example, Java's default hash function does something like multiplying the first character by 1, the second character by 31, the third character by 31^2, etc., and adding them all together.</p>
<p>There is no guarantee that two different strings might not compute the same number.</p>
<p>This problem becomes even more pronounced if we choose a small vocabulary size. If the vocabulary size is 10, for example, a hash of 11 might "wrap around" to 1. (A modulus operator is applied to map an arbitrarily-large integer into the range 1-<code>vocab_size</code>.)</p>
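<p>A minimal sketch of the mapping the md5 variant computes (modulo Keras's own lowercasing and filtering of the text): the digest is read as a big integer and folded into <code>1..vocab_size-1</code> (index 0 is reserved), so with <code>round(8*1.3) = 10</code> there are only 9 buckets for 8 words, which makes a collision like "the"/"dog" quite plausible:</p>
<pre><code>from hashlib import md5

def bucket(word, vocab_size):
    # large md5 integer folded into the range 1..vocab_size-1 (0 is reserved)
    return int(md5(word.encode()).hexdigest(), 16) % (vocab_size - 1) + 1

for w in ['the', 'dog']:
    print(w, bucket(w, 10))
</code></pre>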
<p>If you want to make hash collisions unlikely, choosing a larger vocabulary size (for example <code>vocab_size = 10*len(words)</code>) could reduce the number of collisions.</p>
<p>I'm not sure what the cost of a larger vocabulary is downstream, however.</p>
|
python-3.x|tensorflow|keras|deep-learning
| 0
|
374,149
| 57,403,919
|
Dimensionality of strided convolution with stride 2 and max pooling layer
|
<p>This question is NOT about the benefit of strided convolution vs max pooling. This post is intended as a canonical source on how to compute the dimensionality of strided convolution and max-pooling when the <strong>input image size is NOT the same for width and height while padding is SAME</strong>.</p>
<p><strong>My research:</strong> I can't find any formula that properly lets me compute the output of a convolution when image width and height are different while padding is "SAME", particularly for TensorFlow. The same problem persists for strided convolution and max pooling.</p>
<p>I'm aware of this <a href="https://stackoverflow.com/a/50165711/8100895">post</a>. But like I said previously, it doesn't work for different image sizes. I'm also aware of this <a href="https://www.d2l.ai/chapter_convolutional-neural-networks/padding-and-strides.html" rel="nofollow noreferrer">post</a>. But it doesn't answer what happens when padding is SAME (in TensorFlow). </p>
<p>However, let's say I have <strong>images of size <code>240x320</code>.</strong> And I have 2 versions of a network.</p>
<p>Version A:</p>
<pre><code>from tensorflow import layers as tf
x = tf.conv2d(input_im, filters=64, kernel_size=3, strides=1, padding='SAME')
x = tf.conv2d(x, filters=64, kernel_size=3, strides=1, padding='SAME')
x = tf.conv2d(x, filters=64, kernel_size=3, strides=2, padding='SAME')
</code></pre>
<p>Version B: </p>
<pre><code>from tensorflow import layers as tf
x = tf.conv2d(input_im, filters=64, kernel_size=3, strides=1, padding='SAME')
x = tf.conv2d(x, filters=64, kernel_size=3, strides=2, padding='SAME')
x = tf.max_pooling(x, 2, 2, padding='SAME')
</code></pre>
<p>My question is the following. After each of the layers for version A and B, <strong>what are the output dimensions given the above mentioned input image size?</strong> If I was doing this in Keras, I'd simply use <code>model.summary()</code>; however, I'm using tensorflow and there's no such equivalent function. I am not able to run tensorboard on the remote machine I'm working on. </p>
|
<p>You can get the shape of the resulting tensors the following way. </p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
input_im = tf.placeholder(tf.float32, shape=[None, 320, 240, 3])
x = tf.layers.conv2d(input_im, filters=64, kernel_size=3, strides=1, padding='SAME')
print('After conv1', x.shape)
x = tf.layers.conv2d(x, filters=64, kernel_size=3, strides=1, padding='SAME')
print('After conv2', x.shape)
x = tf.layers.conv2d(x, filters=64, kernel_size=3, strides=2, padding='SAME')
print('After conv3', x.shape)
</code></pre>
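<p>For <code>padding='SAME'</code> the output size does not depend on the kernel size at all, only on the stride: <code>out = ceil(in / stride)</code>, applied independently to height and width, so a non-square input like 240x320 is not a problem. A quick sanity check (a sketch of the arithmetic, not TensorFlow itself):</p>
<pre class="lang-py prettyprint-override"><code>import math

def same_out(size, stride):
    # output dimension of a 'SAME'-padded conv or pool layer
    return math.ceil(size / stride)

h, w = 240, 320
# Version A: two stride-1 convs keep 240x320; the stride-2 conv gives
print(same_out(h, 2), same_out(w, 2))                              # 120 160
# Version B: stride-2 conv, then 2x2 max pool with stride 2
print(same_out(same_out(h, 2), 2), same_out(same_out(w, 2), 2))    # 60 80
</code></pre>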
|
python|tensorflow|deep-learning
| 1
|
374,150
| 57,321,495
|
How to sort date by descending order and time by ascending order using Pandas
|
<p>My <code>df</code> looks like this. It is an <code>hourly</code> dataset.</p>
<pre><code>time Open
2017-01-03 09:00:00 5.2475
2017-01-03 08:00:00 5.2180
2017-01-03 07:00:00 5.2128
2017-01-02 09:00:00 5.4122
2017-01-02 08:00:00 5.2123
2017-01-02 07:00:00 5.2475
2017-01-01 09:00:00 5.2180
2017-01-01 08:00:00 5.2128
2017-01-01 07:00:00 5.4122
</code></pre>
<p>I want to <code>sort</code> only the hourly (time-of-day) part of the data in <code>ascending</code> order.</p>
<p><strong>What did I do?</strong></p>
<p>I did:</p>
<pre><code>df.sort_values(by='time', ascending=True)
</code></pre>
<p>but it <code>sort</code>s by the entire value of <code>time</code>, whereas I want to <code>sort</code> only the time-of-day part.</p>
<p>My new <code>df</code> should look like this:</p>
<pre><code>time Open
2017-01-03 07:00:00 5.2475
2017-01-03 08:00:00 5.2180
2017-01-03 09:00:00 5.2128
2017-01-02 07:00:00 5.4122
2017-01-02 08:00:00 5.2123
2017-01-02 09:00:00 5.2475
2017-01-01 07:00:00 5.2180
2017-01-01 08:00:00 5.2128
2017-01-01 09:00:00 5.4122
</code></pre>
<p>Here <code>date</code> stays the same but the <code>time</code> is in <code>ascending</code> order. </p>
|
<p>If you need to sort by dates and times, create new columns for sorting with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="noreferrer"><code>DataFrame.assign</code></a>, then sort by both columns with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="noreferrer"><code>DataFrame.sort_values</code></a> and its <code>ascending</code> parameter, because sorting by dates is descending while sorting by times is ascending, and finally remove the helper columns with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html" rel="noreferrer"><code>DataFrame.drop</code></a>:</p>
<pre><code>df1 = (df.assign(d=df['time'].dt.date,
t=df['time'].dt.time)
.sort_values(['d','t'], ascending=[False, True])
.drop(['d','t'], axis=1))
print (df1)
time Open
2 2017-01-03 07:00:00 5.2128
1 2017-01-03 08:00:00 5.2180
0 2017-01-03 09:00:00 5.2475
5 2017-01-02 07:00:00 5.2475
4 2017-01-02 08:00:00 5.2123
3 2017-01-02 09:00:00 5.4122
8 2017-01-01 07:00:00 5.4122
7 2017-01-01 08:00:00 5.2128
6 2017-01-01 09:00:00 5.2180
</code></pre>
<p>Or if the dates cannot be changed and you need to sort only by times, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="noreferrer"><code>DataFrame.groupby</code></a> with a lambda function - groupby does not sort here because of the <code>sort=False</code> parameter, and <code>group_keys=False</code> avoids a <code>MultiIndex</code>:</p>
<pre><code>df1 = (df.groupby(df['time'].dt.date, sort=False, group_keys=False)
.apply(lambda x: x.sort_values('time')))
print (df1)
time Open
2 2017-01-03 07:00:00 5.2128
1 2017-01-03 08:00:00 5.2180
0 2017-01-03 09:00:00 5.2475
5 2017-01-02 07:00:00 5.2475
4 2017-01-02 08:00:00 5.2123
3 2017-01-02 09:00:00 5.4122
8 2017-01-01 07:00:00 5.4122
7 2017-01-01 08:00:00 5.2128
6 2017-01-01 09:00:00 5.2180
</code></pre>
|
python-3.x|pandas|dataframe
| 9
|
374,151
| 57,506,227
|
How to join several time series with MultiIndex Data Frame?
|
<p>I am trying to merge several time series dataframes into one very large dataframe with a MultiIndex.</p>
<p>Suppose I have these DataFrames.</p>
<pre class="lang-py prettyprint-override"><code>In [1]: dates = pd.DatetimeIndex(["2019-1-1", "2019-1-2", "2019-1-3"], name="Date")
In [2]: df_a = pd.DataFrame(np.random.randn(3, 2), columns=['Col1', 'Col2'], index=dates)
In [3]: df_b = pd.DataFrame(np.random.randn(3, 2), columns=['Col1', 'Col2'], index=dates)
In [4]: df_c = pd.DataFrame(np.random.randn(3, 2), columns=['Col1', 'Col2'], index=dates)
</code></pre>
<pre class="lang-py prettyprint-override"><code>In [5]: df_a
Out[5]:
Col1 Col2
Date
2019-01-01 1.317679 -1.201769
2019-01-02 -0.991833 0.626420
2019-01-03 0.549733 1.942215
</code></pre>
<p>Now, I've created the scafolding for the dataframe that I want. It looks like this.</p>
<pre class="lang-py prettyprint-override"><code>In [6]: stock_symbols = ["A", "B", "C"]
In [7]: index = pd.MultiIndex.from_product([dates, stock_symbols], names=["Date", "Script"])
In [8]: df = pd.DataFrame(columns=['Col1', 'Col2'], index=index)
</code></pre>
<pre class="lang-py prettyprint-override"><code>In [9]: df
Out[9]:
Col1 Col2
Date Script
2019-01-01 A NaN NaN
B NaN NaN
C NaN NaN
2019-01-02 A NaN NaN
B NaN NaN
C NaN NaN
2019-01-03 A NaN NaN
B NaN NaN
C NaN NaN
</code></pre>
<p>How do I specify to Pandas that values from df_a are to be appended to the appropriate index position?
I figured I'd have to use a <code>.join()</code> but since the values of <em>Script</em> don't occur in the DataFrames I don't know what to do.</p>
<p>Please help.</p>
|
<p>Okay, so currently I'm working with this bit of code.</p>
<pre class="lang-py prettyprint-override"><code>idx = pd.IndexSlice
df.loc[idx[:, "A"], :] = df.loc[idx[:, "A"], :].fillna(df_a)
df.loc[idx[:, "B"], :] = df.loc[idx[:, "B"], :].fillna(df_b)
df.loc[idx[:, "C"], :] = df.loc[idx[:, "C"], :].fillna(df_c)
</code></pre>
<p>If anyone has a better way of doing this... I'm all ears!</p>
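<p>One alternative that avoids pre-building the empty scaffold entirely is to concatenate the per-stock frames with <code>keys</code> and then reorder the index levels into the (Date, Script) layout; a sketch using the <code>df_a</code>/<code>df_b</code>/<code>df_c</code> from the question:</p>
<pre class="lang-py prettyprint-override"><code>df = (pd.concat([df_a, df_b, df_c], keys=["A", "B", "C"], names=["Script"])
        .swaplevel("Script", "Date")   # MultiIndex becomes (Date, Script)
        .sort_index())
</code></pre>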
|
python|pandas
| 0
|
374,152
| 57,418,397
|
How to combine a wide and a long dataframe in pandas?
|
<p>I have the following dataframe</p>
<pre><code>data = {'Name':['Tom', 'nick', 'krish', 'jack'], 'Age':[20, 21, 19, 18], 'Height':[23, 43, 123, 12], 'Hair_Width':[21, 11, 23, 14]}
df = pd.DataFrame(data)
df
Name Age Height Hair_Width
0 Tom 20 23 21
1 nick 21 43 11
2 krish 19 123 23
3 jack 18 12 14
</code></pre>
<p>I performed a melt operation on this dataframe as follows:</p>
<pre><code>pd.melt(df, id_vars=['Name'], value_vars=['Age', 'Height'])
df
Name variable value
0 Tom Age 20
1 nick Age 21
2 krish Age 19
3 jack Age 18
4 Tom Height 23
5 nick Height 43
6 krish Height 123
7 jack Height 12
</code></pre>
<p>However, I would like to combine the new melted dataframe with a variable from the original (wide) dataframe, to get the following desired output:</p>
<pre><code> Name variable value Hair_Width
0 Tom Age 20 21
1 nick Age 21 11
2 krish Age 19 23
3 jack Age 18 14
4 Tom Height 23 21
5 nick Height 43 11
6 krish Height 123 23
7 jack Height 12 14
</code></pre>
<p>I would love to hear any suggestions on how this can be accomplished.</p>
<p>Edit: A lot of people correctly pointed out that the original dataset is in tidy format. That is correct- it is just used as a simple example. The actual data frame is not tidy to start.</p>
|
<p>Just add <code>Hair_Width</code> as another <code>id_var</code> when you <code>melt</code>, no need to do anything after.</p>
<hr>
<pre><code>df.melt(id_vars=['Name', 'Hair_Width'], value_vars=['Age', 'Height'])
</code></pre>
<p></p>
<pre><code> Name Hair_Width variable value
0 Tom 21 Age 20
1 nick 11 Age 21
2 krish 23 Age 19
3 jack 14 Age 18
4 Tom 21 Height 23
5 nick 11 Height 43
6 krish 23 Height 123
7 jack 14 Height 12
</code></pre>
|
python|python-3.x|pandas|dataframe|melt
| 6
|
374,153
| 57,403,572
|
Colab not recognizing local gpu
|
<p>Im trying to train a Neural Network that I wrote, but it seems that colab is not recognizing the gtx 1050 on my laptop. I can't use their cloud GPU's for this task, because I run into memory constraints </p>
<pre class="lang-py prettyprint-override"><code>print(cuda.is_available())
</code></pre>
<p>is returning False</p>
|
<p>You need to select a hardware accelerator to use GPUs or TPUs: go to <code>Runtime</code>, then <code>Change runtime type</code>, as in the picture:</p>
<p><a href="https://i.stack.imgur.com/azdlm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/azdlm.png" alt="runtime"></a></p>
<p>And then change it to GPU (this takes a few seconds):
<a href="https://i.stack.imgur.com/4INnw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4INnw.png" alt="GPU"></a></p>
|
python-3.x|pytorch|google-colaboratory
| 1
|
374,154
| 57,726,283
|
How do I build an array using numpy.repeat where each element is 10% over the previous one?
|
<p>I have an initial value of salary=500. I want it to grow by 10% every year, so my array would look like <code>[500, 550, 605, 665.5]</code> etc. Essentially multiplying each previous value by <code>1.1</code>. </p>
<p>I was using a loop for this, but was curious if it could be done using <code>numpy.arange</code> and <code>numpy.repeat</code> somehow?</p>
<pre><code>salary=500
for i in range(0,15,1):
    salary=1.1*salary
    print(salary)
</code></pre>
|
<p>Use <code>1.1</code> as the base to create an <em>exponential-array</em> and scale it with the starting value -</p>
<pre><code>In [52]: 500*(1.1**np.arange(4)) # 4 is number of output elements
Out[52]: array([500. , 550. , 605. , 665.5])
</code></pre>
|
python|arrays|numpy
| 2
|
374,155
| 57,667,455
|
appending Dict to nested list per request made
|
<p>I am currently scraping through an XML API response. I am looking to gather a piece of information for each request and create a dictionary each time I find this piece of data. Each request can return several IDs, so one response can have 2 IDs while the next response might have 3. For example, let's say the first response has 2 IDs. I am storing this data in a list at the moment, and when the second request is done, the additional 3 IDs are stored under this same list as well. </p>
<pre><code>import requests
import pandas as pd
from pandas import DataFrame
from bs4 import BeautifulSoup
import datetime as datetime
import json
import time
trackingDomain = ''
domain = ''
aIDs = []
cIDs = []
url = "https://" + domain + ""
print(url)
df = pd.read_csv('campids.csv')
for index, row in df.iterrows():
    payload = {'api_key':'',
               'campaign_id':'0',
               'site_offer_id':row['IDs'],
               'source_affiliate_id':'0',
               'channel_id':'0',
               'account_status_id':'0',
               'media_type_id':'0',
               'start_at_row':'0',
               'row_limit':'0',
               'sort_field':'campaign_id',
               'sort_descending':'TRUE'
               }
    print('Campaign Payload', payload)
    r = requests.get(url, params=payload)
    print(r.status_code)
    soup = BeautifulSoup(r.text, 'lxml')
    success = soup.find('success').string
    for affIDs in soup.select('campaign'):
        affID = affIDs.find('source_affiliate_id').string
        aIDs.append(affID)
    dataDict = dict()
    dataDict['offers'] = []
    affDict = {'affliate_id':aIDs}
    dataDict['offers'].append(dict(affDict))
</code></pre>
<p>The result ends up being as follows:</p>
<pre><code>dictData = {'offers': [{'affliate_id': ['9','2','45','47','14','8','30','30','2','2','9','2']}]}
</code></pre>
<p>What I am looking to do is this:</p>
<pre><code>dictData = {'offers':[{'affiliate_id':['9','2','45','47','14','8','30','30','2','2']},{'affiliate_id':['9','2']}]}
</code></pre>
<p>On the first request, I obtain the following:</p>
<pre><code>IDs['9','2','45','47','14','8','30','30','2','2']
</code></pre>
<p>On the second request these IDs are returned:</p>
<pre><code>['9','2']
</code></pre>
<p>I am new to Python, so please bear with me as far as etiquette goes or if I am missing something. I'll be happy to provide any additional information.</p>
|
<p>It has to do with the order of your initializing and appending, which is causing you to not get the outcome you want. You are re-creating <code>dataDict</code> on each iteration and inserting the <code>aIDs</code> list, which is never reset, thus leaving you with a final list that has appended ALL <code>aIDs</code>. What you want to do is initialize <code>dataDict</code> outside of your for loop, and then you can append the dictionary built in the nested loop into that list:</p>
<p><strong>Note: It's tough to work out/test without having the actual data, but I believe this should do it if I worked out the logic correctly in my head:</strong></p>
<pre><code>import requests
import pandas as pd
from pandas import DataFrame
from bs4 import BeautifulSoup
import datetime as datetime
import json
import time
trackingDomain = ''
domain = ''
cIDs = []
url = "https://" + domain + ""
# Initialize your dictionary
dataDict = dict()
# Initialize your list in your dictionary under key `offers`
dataDict['offers'] = []
print(url)
df = pd.read_csv('campids.csv')
for index, row in df.iterrows():
    payload = {'api_key':'',
               'campaign_id':'0',
               'site_offer_id':row['IDs'],
               'source_affiliate_id':'0',
               'channel_id':'0',
               'account_status_id':'0',
               'media_type_id':'0',
               'start_at_row':'0',
               'row_limit':'0',
               'sort_field':'campaign_id',
               'sort_descending':'TRUE'
               }
    print('Campaign Payload', payload)
    r = requests.get(url, params=payload)
    print(r.status_code)
    soup = BeautifulSoup(r.text, 'lxml')
    success = soup.find('success').string

    # Initialize your list for this iteration/row in your df.iterrows
    aIDs = []
    for affIDs in soup.select('campaign'):
        affID = affIDs.find('source_affiliate_id').string
        # Append those affIDs to the aIDs list
        aIDs.append(affID)

    # Create your dictionary of key:value with key 'affiliate_id' and value the aIDs list
    affDict = {'affliate_id':aIDs}
    # NOW append that into your list in your dictionary under key `offers`
    dataDict['offers'].append(dict(affDict))
</code></pre>
|
python-3.x|pandas|beautifulsoup|python-requests
| 1
|
374,156
| 57,682,831
|
What happens if Keras Tensorflow's `Model.fit`'s target/output is `None`?
|
<p>I was looking at this code: <a href="https://github.com/keras-team/keras/blob/master/examples/variational_autoencoder.py#L198" rel="nofollow noreferrer">https://github.com/keras-team/keras/blob/master/examples/variational_autoencoder.py#L198</a>, where <code>Model.fit()</code> is called without an output or target Tensor. At first, I thought the behavior of <code>Model.fit()</code> is to use the input as the output (which would make sense for this autoencoder implementation). But then I looked into the documentation, and that's not what it says: <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit</a></p>
<p>It implies when <code>y</code>, the target, is <code>None</code>, <code>x</code> should be some kind of structure that contains both the input and target. </p>
<p>But it's clear in this autoencoder implementation, that's not what happens (<code>x</code> only contains the input). Could someone explain what happens in this case?</p>
|
<p>In the Keras documentation for <a href="https://keras.io/models/model/#fit" rel="nofollow noreferrer">model.fit</a> the following is stated:</p>
<blockquote>
<ul>
<li>y: Numpy array of target (label) data (if the model has a single output), or list of Numpy arrays (if the model has multiple outputs).
If output layers in the model are named, you can also pass a
dictionary mapping output names to Numpy arrays. <strong>y can be None
(default) if feeding from framework-native tensors (e.g. TensorFlow
data tensors).</strong></li>
</ul>
</blockquote>
<p>Now, notice that in the variational autoencoder example, the argument <code>outputs</code> of the model <a href="https://github.com/keras-team/keras/blob/master/examples/variational_autoencoder.py#L161" rel="nofollow noreferrer"><code>vae</code></a> is a TensorFlow native tensor, since it is given by the output of another model <a href="https://github.com/keras-team/keras/blob/master/examples/variational_autoencoder.py#L155" rel="nofollow noreferrer"><code>decoder</code></a> (a TensorFlow native tensor) whose <code>inputs</code> argument is <em>independent</em> of the <code>vae</code>'s input.</p>
|
python|tensorflow|keras|autoencoder
| 2
|
374,157
| 57,527,756
|
Cutting Sheets on Unique Values in each column
|
<p>On a quarterly basis I have to cut a master file that is sent to me from the higher-ups, based on a few specified columns: Region, City, and Department. I wrote a script that does the cutting for me, but unfortunately I can't figure out how to run a single loop over all three columns instead of repeating the same block for each column.</p>
<pre class="lang-py prettyprint-override"><code>Input = pd.read_excel('Master.xlsx', header = 0)
for d in Input['Region'].unique():
    temp_df = Input[Input['Region']==d]
    temp_df.drop(temp_df.columns[32:37], axis = 1, inplace = True)
    temp_df.to_excel('Quarterly_Cut_for_{}'.format(d), index = False)

for d in Input['City'].unique():
    temp_df = Input[Input['City']==d]
    temp_df.drop(temp_df.columns[32:37], axis = 1, inplace = True)
    temp_df.to_excel('Quarterly_Cut_for_{}'.format(d), index = False)

for d in Input['Dept'].unique():
    temp_df = Input[Input['Dept']==d]
    temp_df.drop(temp_df.columns[32:37], axis = 1, inplace = True)
    temp_df.to_excel('Quarterly_Cut_for_{}'.format(d), index = False)
</code></pre>
<p>What I am trying to figure out is how to run the script for all 3 columns above in one script versus having 1 for each column</p>
|
<p>This produces a dataframe for each unique combination of three attributes. Replace <code>print(sub_df)</code> with the operations you need (select columns, write to csv)</p>
<pre><code>from io import StringIO
import pandas as pd
df = pd.read_csv(StringIO(
"""Region,City,Department,value
R1,C1,D1,1
R1,C1,D2,2
R1,C2,D3,3
R1,C2,D3,4
R2,C3,D5,5
R2,C3,D5,6
R2,C3,D6,7
"""))
grouper = df.groupby(["Region","City", "Department"])
for g in grouper:
    sub_df = pd.DataFrame(g[1]).reset_index(drop=True)
    print(sub_df)
# Region City Department value
# 0 R1 C1 D1 1
# Region City Department value
# 0 R1 C1 D2 2
# Region City Department value
# 0 R1 C2 D3 3
# 1 R1 C2 D3 4
# Region City Department value
# 0 R2 C3 D5 5
# 1 R2 C3 D5 6
# Region City Department value
# 0 R2 C3 D6 7
</code></pre>
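<p>If you instead want to keep the original per-column output (one file per unique value of each of the three columns, rather than per combination), a minimal sketch that collapses the three loops into one (note the added <code>.xlsx</code> extension, which the original filenames were missing):</p>
<pre><code>for col in ['Region', 'City', 'Dept']:
    for d in Input[col].unique():
        temp_df = Input[Input[col] == d].drop(Input.columns[32:37], axis=1)
        temp_df.to_excel('Quarterly_Cut_for_{}.xlsx'.format(d), index=False)
</code></pre>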
|
python|pandas
| 0
|
374,158
| 57,463,064
|
Exporting dataframes within a dict to separate csv files
|
<p>I have a dictionary with a year for each key and a dataframe for each value. I need to export these dataframes as CSV files with the key (the year) as the file name. I've tried running a for loop changing the variable name, but it doesn't seem to work.
Could someone please outline another method of doing this? Thanks</p>
|
<p>Not that different from Charles' answer, but as a <a href="/help/minimal-reproducible-example">mcve</a>, and we can even pass integers as years instead of strings.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import os
fldr = 'dataframes'
os.makedirs(fldr, exist_ok=True)
diz = {key:pd.util.testing.makeDataFrame().head(10)
for key in range(2000,2020)}
for key, value in diz.items():
value.to_csv(f"{fldr}/{key}.csv", index=False)
</code></pre>
|
python|pandas
| 1
|
374,159
| 57,631,182
|
How to query pandas dataframe for regular expression?
|
<p>I have df:</p>
<pre><code>{'col1': {0: 'vJAaIAM',
1: 'K0jQAF',
2: '00qvP1IIU',
3: 'tFCJ2',
4: '0d2fIAB'},
'col2': {0: 6294.0,
1: 859485.0,
2: 7362.0,
3: 6273921.0,
4: 114506.0}}
</code></pre>
<p>and I am looking to query this dataframe for all rows that have capital 'A' and here is what I have:</p>
<pre><code>df[df['col1']==r'+%[A]%+']
</code></pre>
<p>I'm not needing to replace these values, I simply want to list and see them.</p>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer"><strong><code>.str.contains('A')</code></strong> [pandas-doc]</a> here:</p>
<pre><code>>>> df[df['col1'].str.contains('A')]
col1 col2
0 vJAaIAM 6294.0
1 K0jQAF 859485.0
4 0d2fIAB 114506.0
</code></pre>
|
python|pandas|dataframe
| 2
|
374,160
| 57,320,779
|
'numpy.ndarray' object has no attribute 'drop'
|
<p>I have a dataset with four inputs named X1, X2, X3, X4.
Here I created an LSTM model to predict the next X1 value from the previous values of the four inputs. </p>
<p>Here I changed the time into minutes and then I set the time as index.</p>
<p>Then I created the x_train, x_test , y_test and y_train. Then I wanted to drop the time in x_train and x_test. </p>
<p>I used the code:</p>
<pre><code>data= pd.DataFrame(data,columns=['X1','X2','X3','X4'])
pd.options.display.float_format = '{:,.0f}'.format
print(data)
</code></pre>
<p>data:</p>
<p><a href="https://i.stack.imgur.com/VhOk0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VhOk0.png" alt="enter image description here"></a></p>
<pre><code>y=data['X1'].astype(int)
cols=['X1', 'X2', 'X3','X4']
x=data[cols].astype(int)
data=data.values
scaler_x = preprocessing.MinMaxScaler(feature_range =(0, 1))
x = np.array(x).reshape ((len(x),4 ))
x = scaler_x.fit_transform(x)
scaler_y = preprocessing.MinMaxScaler(feature_range =(0, 1))
y = np.array(y).reshape ((len(y), 1))
y = scaler_y.fit_transform(y)
train_end = 80
x_train=x[0: train_end ,]
x_test=x[train_end +1: ,]
y_train=y[0: train_end]
y_test=y[train_end +1:]
x_train=x_train.reshape(x_train.shape +(1,))
x_test=x_test.reshape(x_test.shape + (1,))
x_train = x_train.drop('time', axis=1)
x_test = x_test.drop('time', axis=1)
</code></pre>
<p>Then error :<code>'numpy.ndarray' object has no attribute 'drop'</code></p>
<p>Can any one help me to solve this error?</p>
|
<p>Because you extracted the values of your Pandas dataframe, your data have been converted into a NumPy array and thus the column names have been removed. The time column is the first column of your data, so all you really need to do is index into it so that you extract the second column and onwards:</p>
<pre><code>x_time_train = x_train[:, 0]
x_train = x_train[:, 1:]
x_time_test = x_test[:, 0]
x_test = x_test[:, 1:]
</code></pre>
<p>Take note that I've separated the time values for both training and testing datasets as you require them for plotting.</p>
|
python|pandas|numpy|machine-learning|lstm
| 1
|
374,161
| 57,418,236
|
Aggregating rows in python pandas dataframe
|
<p>I have a dataframe documenting when a product was added and removed from basket. However, the <code>set_name</code> column contains two sets of information for the color set and the shape set. See below:</p>
<pre><code> eff_date prod_id set_name change_type
0 20150414 20770 MONO COLOR SET ADD
1 20150414 20770 REC SHAPE SET ADD
2 20150429 132 MONO COLOR SET ADD
3 20150429 132 REC SHAPE SET ADD
4 20150521 199 MONO COLOR SET DROP
5 20150521 199 REC SHAPE SET DROP
6 20150521 199 TET SHAPE SET ADD
7 20150521 199 MONO COLOR SET ADD
</code></pre>
<p>I would like to split out the two sets of information contained in <code>set_name</code> into columns <code>color_set</code> and <code>shape_set</code> and drop <code>set_name</code>. so the previous df should look like:</p>
<pre><code> eff_date prod_id change_type color_set shape_set
0 20150414 20770 ADD MONO COLOR SET REC SHAPE SET
1 20150429 132 ADD MONO COLOR SET REC SHAPE SET
2 20150521 199 DROP MONO COLOR SET REC SHAPE SET
3 20150521 199 ADD MONO COLOR SET TET SHAPE SET
</code></pre>
<p>I attempted first splitting out the columns in a for loop and then aggregating with groupby:</p>
<pre><code>for index, row in df.iterrows():
    if 'COLOR' in df.loc[index,'set_name']:
        df.loc[index,'color_set'] = df.loc[index,'set_name']
    if 'SHAPE' in df.loc[index,'set_name']:
        df.loc[index,'shape_set'] = df.loc[index,'set_name']

df = df.fillna('')
df.groupby(['eff_date','prod_id','change_type']).agg({'color_set':sum,'shape_set':sum})
</code></pre>
<p>However, this left me with a dataframe of only two columns and a multi-level index that I wasn't sure how to unstack.</p>
<pre><code> color_set shape_set
eff_date prod_id change_type
20150414 20770 ADD MONO COLOR SET REC SHAPE SET
20150429 132 ADD MONO COLOR SET REC SHAPE SET
20150521 199 DROP MONO COLOR SET REC SHAPE SET
ADD MONO COLOR SET TET SHAPE SET
</code></pre>
<p>Any help on this is greatly appreciated!</p>
|
<p>Your code looks fine apart from having to reset your index, but we can simplify it quite a bit (in particular, remove the need for <code>iterrows</code>, which can be painfully slow) by using a <code>pivot</code> with a small trick to get your column names.</p>
<p>This answer assumes that you only have these two options in your column, if you have <em>more</em> categories, simply use <code>numpy.select</code> instead of <code>numpy.where</code> and define your conditions / outputs that way.</p>
<hr>
<pre><code>df['key'] = np.where(df['set_name'].str.contains('COLOR'), 'color_set', 'shape_set')
df.pivot_table(
index=['eff_date', 'prod_id', 'change_type'],
columns='key',
values='set_name',
aggfunc='first'
).reset_index()
</code></pre>
<p></p>
<pre><code>key eff_date prod_id change_type color_set shape_set
0 20150414 20770 ADD MONO COLOR SET REC SHAPE SET
1 20150429 132 ADD MONO COLOR SET REC SHAPE SET
2 20150521 199 ADD MONO COLOR SET TET SHAPE SET
3 20150521 199 DROP MONO COLOR SET REC SHAPE SET
</code></pre>
|
python|pandas|dataframe
| 1
|
374,162
| 57,500,021
|
How to ignore a missing value in a file in a sum in Python
|
<p>I'm trying to get the quantities of some products from 6 Excel files, but some of them may not contain a given product while the others do.
For example, the bj code exists in arq1 but not in arq2, so when it makes the sum it doesn't give me the right number. As you can see below:</p>
<pre><code>import pandas as pd
import numpy as np

arq1 = pd.read_excel(r'C:\Users\Usuario\Downloads\arq1.xlsx')
arq2 = pd.read_excel(r'C:\Users\Usuario\Downloads\arq2.xlsx')
list_arq = [arq1, arq2]

i = int(input('Which Arq ? '))

bj_xl = list_arq[i][list_arq[i]['Cod.Artigo']=='PTTGBCH01023']['Unidades']
m_xl = list_arq[i][list_arq[i]['Cod.Artigo']=='PTTGBCM01B05']['Unidades']
print(bj_xl)

b_xl = bj_xl + m_xl
print('B XL:', b_xl)
</code></pre>
<p>I expect the output to be an integer number but it gave me:</p>
<pre><code>Which Arq ? 0
Series([], Name: Unidades, dtype: int64)
B XL: 0 NaN
3 NaN
Name: Unidades, dtype: float64
</code></pre>
|
<p>Try taking the value from the result Series, so that you don't have to worry about the index. Assuming <code>Cod.Artigo</code> is unique, you can do:</p>
<pre><code>bj_xl = list_arq[i][list_arq[i]['Cod.Artigo']=='PTTGBCH01023']['Unidades'].values[0]
m_xl = list_arq[i][list_arq[i]['Cod.Artigo']=='PTTGBCM01B05']['Unidades'].values[0]
</code></pre>
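<p>Since the question is about products that may be missing from a file, one option is to sum the (possibly empty) selection instead of taking <code>.values[0]</code>, so an absent code contributes zero rather than raising an <code>IndexError</code>; a sketch:</p>
<pre><code>bj_xl = list_arq[i].loc[list_arq[i]['Cod.Artigo']=='PTTGBCH01023', 'Unidades'].sum()
m_xl = list_arq[i].loc[list_arq[i]['Cod.Artigo']=='PTTGBCM01B05', 'Unidades'].sum()
b_xl = bj_xl + m_xl  # an empty selection sums to 0
</code></pre>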
|
python|python-3.x|pandas
| 1
|
374,163
| 57,544,806
|
Pandas - Align matching column values to row
|
<p>I have what appears to be a simple problem, that I was not able to find a solution to. Namely, I have a table, where first column contains the list of all available applications, while other columns represent users and the list of applications they have:</p>
<p><a href="https://i.stack.imgur.com/yeDxH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yeDxH.png" alt="enter image description here"></a></p>
<p>I'm trying to convert the table into pandas DataFrame and align matching values on the first column. The desired output should look like this:</p>
<p><a href="https://i.stack.imgur.com/KZdgP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KZdgP.png" alt="enter image description here"></a></p>
<pre><code>import pandas as pd
df = pd.read_excel('U:/Desktop/appdata.xlsx')
df.head(10)
Out[21]:
Applications User 1 User 2 User 3 User 4 User 5
0 App1 App1 App2 App1 App1 App2
1 App2 App3 App3 App2 App3 App3
2 App3 App10 App4 App7 App4 App4
3 App4 NaN App5 App8 App5 App5
4 App5 NaN NaN App10 App6 App6
5 App6 NaN NaN NaN NaN App7
6 App7 NaN NaN NaN NaN App8
7 App8 NaN NaN NaN NaN App9
8 App9 NaN NaN NaN NaN NaN
9 App10 NaN NaN NaN NaN NaN
df[df.apply(lambda x: x['Applications'] == x, axis=1)]
Out[22]:
Applications User 1 User 2 User 3 User 4 User 5
0 App1 App1 NaN App1 App1 NaN
1 App2 NaN NaN App2 NaN NaN
2 App3 NaN NaN NaN NaN NaN
3 App4 NaN NaN NaN NaN NaN
4 App5 NaN NaN NaN NaN NaN
5 App6 NaN NaN NaN NaN NaN
6 App7 NaN NaN NaN NaN NaN
7 App8 NaN NaN NaN NaN NaN
8 App9 NaN NaN NaN NaN NaN
9 App10 NaN NaN NaN NaN NaN
</code></pre>
<p>Any help is appreciated.
Cheers!</p>
|
<p>Here is an approach with some numpy tools. Here, <code>apply</code> loops through the columns of interest, <code>np.isin</code> performs a search over your first column (dat.Applications) and returns True if the respective element is contained in the current column. This boolean array is then converted to the respective string in dat.Applications or to NAN if there is no match via <code>np.where</code>. The results are then assigned back to the original DataFrame.</p>
<pre><code>import numpy as np
dat.iloc[:, 1:] = \
dat.iloc[:, 1:].apply(lambda x : np.where(np.isin(dat.Applications, x),
dat.Applications, np.NAN))
</code></pre>
<p>Note that it would work to use <code>pd.np.isin</code>, for instance, rather than directly importing numpy, but this seems a bit cleaner to me.</p>
|
python|pandas|dataframe
| 2
|
374,164
| 57,483,013
|
Python saying "no module named tflearn" but i imported it
|
<p>I used <code>pip install tflearn</code> and ran my code on my Raspberry Pi, but it says</p>
<blockquote>
<p>no module named tflearn</p>
</blockquote>
<p>video:<a href="https://www.youtube.com/watch?v=wypVcNIH6D4&t=3s" rel="nofollow noreferrer">https://www.youtube.com/watch?v=wypVcNIH6D4&t=3s</a></p>
<p>code:</p>
<pre class="lang-py prettyprint-override"><code>import nltk
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()
import numpy
import tflearn
import tensorflow
import random
import json
{"intents": [
{"tag": "greeting",
"patterns": ["Hi", "How are you", "Is anyone there?", "Hello", "Good day", "Whats up"],
"responses": ["Hello!", "Good to see you again!", "Hi there, how can I help?"],
"context_set": ""
},
{"tag": "goodbye",
"patterns": ["cya", "See you later", "Goodbye", "I am Leaving", "Have a Good day"],
"responses": ["Sad to see you go :(", "Talk to you later", "Goodbye!"],
"context_set": ""
},
{"tag": "age",
"patterns": ["how old", "how old is tim", "what is your age", "how old are you", "age?"],
"responses": ["I am 18 years old!", "18 years young!"],
"context_set": ""
},
{"tag": "name",
"patterns": ["what is your name", "what should I call you", "whats your name?"],
"responses": ["You can call me Tim.", "I'm Tim!", "I'm Tim aka Tech With Tim."],
"context_set": ""
},
{"tag": "shop",
"patterns": ["Id like to buy something", "whats on the menu", "what do you reccommend?", "could i get something to eat"],
"responses": ["We sell chocolate chip cookies for $2!", "Cookies are on the menu!"],
"context_set": ""
},
{"tag": "hours",
"patterns": ["when are you guys open", "what are your hours", "hours of operation"],
"responses": ["We are open 7am-4pm Monday-Friday!"],
"context_set": ""
}
]
}
</code></pre>
|
<p>Basic preliminary checks: run <code>pip freeze</code> to get a list of installed packages. Ideally <code>tflearn</code> should be there; if it is not, that indicates the package has not been installed properly.</p>
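<p>On a Raspberry Pi it is also common for <code>pip</code> to install into a different Python interpreter than the one running your script. Installing and checking with the interpreter you actually use avoids that; a sketch:</p>
<pre><code>python3 -m pip install tflearn
python3 -c "import tflearn; print(tflearn.__version__)"
</code></pre>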
|
python|tensorflow|raspberry-pi|tflearn
| 0
|
374,165
| 57,591,480
|
Python Pandas sum a constant value in Columns If date between 2 dates
|
<p>Let's assume a dataframe using datetimes as index, where we have a column named 'Score', initially set to 10:</p>
<pre><code> score
2016-01-01 10
2016-01-02 10
2016-01-03 10
2016-01-04 10
2016-01-05 10
2016-01-06 10
2016-01-07 10
2016-01-08 10
</code></pre>
<p>I want to substract a fixed value (let's say 1) from the score, but only when the index is between certain dates (for example between the 3rd and the 6th):</p>
<pre><code> score
2016-01-01 10
2016-01-02 10
2016-01-03 9
2016-01-04 9
2016-01-05 9
2016-01-06 9
2016-01-07 10
2016-01-08 10
</code></pre>
<p>Since my real dataframe is big, and I will be doing this for different date ranges with a different fixed value N for each one, I'd like to achieve this without having to create a new column set to -N for each case.</p>
<p>Something like numpy's <code>where</code> function, but for a certain range, allowing me to add/subtract to the current value if the condition is met and do nothing otherwise. Is there something like that?</p>
|
<p>Use index slicing:</p>
<pre><code>df.loc['2016-01-03':'2016-01-06', 'score'] -= 1
</code></pre>
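<p>Since you mention applying this for several date ranges with different values of N, the same slicing generalizes to a list of (range, N) pairs; a sketch with hypothetical ranges and values:</p>
<pre><code>adjustments = [(('2016-01-03', '2016-01-06'), 1),   # hypothetical
               (('2016-01-07', '2016-01-08'), 2)]   # hypothetical

for (start, end), n in adjustments:
    df.loc[start:end, 'score'] -= n
</code></pre>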
|
python|pandas
| 5
|
374,166
| 57,303,687
|
Create column of difference between two pandas DF
|
<p>I have <strong>firstDF:</strong></p>
<pre><code>rs Chr MapInfo Name SourceSeq
1 A1 B1 C1 D1
2 A2 B2 C2 D2
3 A3 B3 C3 D3
4 A4 B4 C4 D4
5 A5 B5 C5 D5
</code></pre>
<p>And <strong>secondDF:</strong></p>
<pre><code>Chr MapInfo Name SourceSeq Unnamed: 0 rs
1 A1 B1 C1 D1 E1
4 A4 B4 C4 D4 E4
8 A8 B8 C8 D8 E8
10 A10 B10 C10 D10 E10
</code></pre>
<p>I need to create a new data frame that contains only the rows from <em>secondDF</em> which do not exist in the first:</p>
<p><strong>newDF:</strong></p>
<pre><code>Chr MapInfo Name SourceSeq Unnamed: 0 rs
8 A8 B8 C8 D8 E8
10 A10 B10 C10 D10 E10
</code></pre>
<p>I want to filter it by <code>Name</code>. What would be the best way to do that?</p>
<p>I thought about a full outer merge, but the columns are different and honestly I don't know how to do it properly.</p>
<p>My second thought was a loop, but it's not efficient. </p>
<p>And lastly I tried to do it by:</p>
<pre><code>new= secondDF[~firstDF.Name.isin(secondDF.name)]
</code></pre>
<p>but i got:</p>
<blockquote>
<p>IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match</p>
</blockquote>
<p>Can someone give me advice about that task?</p>
|
<p>The solution is to change the mask: compare <code>secondDF.Name</code> against the matching column from <code>firstDF</code> (from the sample data that is the <code>MapInfo</code> column; in the real data it seems to be the <code>Name</code> column). This gives a boolean mask with the same size and index values as <code>secondDF</code>, which is required because it is <code>secondDF</code> that gets filtered:</p>
<pre><code>new= secondDF[~secondDF.Name.isin(firstDF.MapInfo)]
print (new)
Chr MapInfo Name SourceSeq Unnamed: 0 rs
2 8 A8 B8 C8 D8 E8
3 10 A10 B10 C10 D10 E10
</code></pre>
|
python|pandas
| 1
|
374,167
| 57,457,504
|
how can I make limit on cumsum in dataframe and minus to all values
|
<p>When I do cumsum on a dataframe with lots of data there are some errors and bugs, so I want to put a limit on the cumsum and subtract that limit value from all the data, like below:</p>
<pre><code>A B IntA IntB
1 2 1 2
2 4 3 6
3 6 6 12
4 8 10 20
5 2 15 22
6 4 21 26
7 8 28 34
</code></pre>
<p>I would like the following: if the minimum of IntA and IntB goes over 10,
subtract 10 from the cumulative sums so far and keep doing cumsum, like below</p>
<pre><code>A B IntA IntB
1 2 1 2
2 4 3 6
3 6 6 12
4 8 0 10
5 2 5 12
6 4 1 6
7 8 8 14
</code></pre>
<p>Is there any way I could do?</p>
|
<p>OK, finally I think I got it. It would have been easier with <code>.apply</code>, but this should scale better without it, and it's also more fun :-)</p>
<pre><code>df= pd.DataFrame({
'A': [1, 2, 3, 4, 5, 6, 7],
'B': [2, 4, 6, 8, 2, 4, 8]
})
cumsumA= df['A'].cumsum()
cumsumB= df['B'].cumsum()
tensA= (cumsumA // 10)*10
tensB= (cumsumB // 10)*10
tens= tensA.where(tensA<tensB, tensB)
df['intA']= cumsumA - tens
df['intB']= cumsumB - tens
df
</code></pre>
<p>This outputs:</p>
<pre><code> A B intA intB
0 1 2 1 2
1 2 4 3 6
2 3 6 6 12
3 4 8 0 10
4 5 2 5 12
5 6 4 1 6
6 7 8 8 14
</code></pre>
<p>I just didn't immediately recognize what your <code>min</code> rule means for the correction value, but with this knowledge it looks embarrassingly simple...</p>
|
python|pandas
| 0
|
374,168
| 57,350,598
|
fastest way to insert multiple rows into a dataframe given a list of indexes (python)
|
<p>I have a dataframe and I would like to insert rows at specific indexes at the beginning of each group within the dataframe. As an example lets say I have the following dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(data=[['A',1,1],['A',2,3],['A',5,4],['B',3,4],['B',2,6],['B',8,4],['C',9,3],['C',3,7],['C',1,9],['D',5,5],['D',8,3],['D',4,7]], columns=['Group','val1','val2'])
</code></pre>
<p>I would like to copy the first row of each unique value in the column group and insert that row at the beginning of each group while growing the dataframe. I can currently achieve this by using a for loop but it is pretty slow because my dataframe is large so I am looking for a vectorized solution.</p>
<p>I have a list of indexes where I would like to insert the rows.</p>
<pre class="lang-py prettyprint-override"><code>idxs = [0, 3, 6, 9]
</code></pre>
<p>In each iteration of the loop I currently slice the dataframe at each one of the idxs into two dataframes, insert the row, and concat the dataframes. My dataframe is very large so this process has been very slow.</p>
<p>The solution would look like this:</p>
<pre class="lang-py prettyprint-override"><code> Group val1 val2
0 A 1 1
1 A 1 1
2 A 2 3
3 A 5 4
4 B 3 4
5 B 3 4
6 B 2 6
7 B 8 4
8 C 9 3
9 C 9 3
10 C 3 7
11 C 1 9
12 D 5 5
13 D 5 5
14 D 8 3
15 D 4 7
</code></pre>
|
<p>You can do this by grouping by <code>Group</code>, iterating over each group, and building the result via concatenation of the first row of each group with the group itself, then concatenating all of those pieces.</p>
<p><strong>Code:</strong></p>
<pre><code>import pandas as pd
df = pd.DataFrame(data=[['A',1,1],['A',2,3],['A',5,4],['B',3,4],['B',2,6],['B',8,4],['C',9,3],['C',3,7],['C',1,9],['D',5,5],['D',8,3],['D',4,7]], columns=['Group','val1','val2'])
df_new = pd.concat([
pd.concat([grp.iloc[[0], :], grp])
for key, grp in df.groupby('Group')
])
print(df_new)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> Group val1 val2
0 A 1 1
0 A 1 1
1 A 2 3
2 A 5 4
3 B 3 4
3 B 3 4
4 B 2 6
5 B 8 4
6 C 9 3
6 C 9 3
7 C 3 7
8 C 1 9
9 D 5 5
9 D 5 5
10 D 8 3
11 D 4 7
</code></pre>
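<p>If you also want the clean sequential index from your desired output (the concatenation keeps the original, duplicated labels), reset it afterwards:</p>
<pre><code>df_new = df_new.reset_index(drop=True)
</code></pre>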
|
python|pandas|dataframe|insert|concat
| 2
|
374,169
| 57,540,443
|
Loading python TensorFlow Layers models into JavaScript
|
<p>The core <code>TensorFlow</code> libraries provide the ability to convert a model created in <code>Python</code> to be saved into a <code>JSON</code> file describing the graph and weights to be executed in a browser environment.</p>
<p>In the examples you are required to load the whole <code>TensorFlow</code> library in the browser, which is extremely heavy. Also tree shaking is not available for this library.</p>
<p>My question is: how would I load just the necessary elements from <code>TensorFlow JS</code> into the client/browser to reduce the overall size of my bundled application?</p>
<p>EDIT: We are trying to reduce the over size of the bundled library.</p>
|
<p>The Tensorflow.js API combines four packages:</p>
<ul>
<li><a href="https://github.com/tensorflow/tfjs/tree/master/tfjs-core" rel="nofollow noreferrer">tfjs-core</a>: Functionality like mathematical functions and backend support</li>
<li><a href="https://github.com/tensorflow/tfjs/tree/master/tfjs-layers" rel="nofollow noreferrer">tfjs-layers</a>: Support of layers to create models (depends on <code>tfjs-core</code>)</li>
<li><a href="https://github.com/tensorflow/tfjs/tree/master/tfjs-data" rel="nofollow noreferrer">tfjs-data</a>: Data handling (depends on <code>tfjs-core</code>)</li>
<li><a href="https://github.com/tensorflow/tfjs/tree/master/tfjs-converter" rel="nofollow noreferrer">tfjs-converter</a>: Support for converting models to Tensorflow.js</li>
</ul>
<p>Depending on what tasks exactly you need to perform, it might be sufficent to only use some of the packages. That said, keep in mind that <code>tfjs-layers</code> and <code>tfjs-data</code> require <code>tfjs-core</code> to be imported.</p>
<p><strong>Code Sample</strong></p>
<p>The following lines only import the Core and Layers API:</p>
<pre class="lang-js prettyprint-override"><code>import * as tfc from '@tensorflow/tfjs-core';
import * as tfl from '@tensorflow/tfjs-layers';
// Examples how to use the APIs
const vectr = tfc.tensor(/* .. */);
const model = tfl.sequential();
const dense = tfl.layers.dense(/* .. */);
</code></pre>
<p>Note that functions like <a href="https://js.tensorflow.org/api/latest/#matMul" rel="nofollow noreferrer"><code>tf.matMul</code></a> are used by calling <code>tfc.matMul</code>, but some functions of the layers API (like <a href="https://js.tensorflow.org/api/latest/#layers.dense" rel="nofollow noreferrer"><code>tf.layers.dense</code></a>) are used by calling <code>tfl.layers.dense</code> while others (like <a href="https://js.tensorflow.org/api/latest/#sequential" rel="nofollow noreferrer"><code>tf.sequential</code></a>) are used by calling <code>tfl.sequential</code>.</p>
<p><strong>Optimization</strong></p>
<p>To give you an idea on the potential optimization, let's look at the numbers:</p>
<pre><code>--------------------------------------
| Package | Size | Relative |
|----------------|--------|----------|
| tfjs | 856 | 100% |
| tfjs-core | 506 | 59% |
| tfjs-layers | 228 | 27% |
| tfjs-data | 52 | 6% |
| tfjs-converter | 80 | 9% |
--------------------------------------
</code></pre>
<p><sub>Version 1.2.7, Size in KB (of the minified JS file), relative values compared to tfjs</sub></p>
<p>Using <code>tfjs-core</code> and <code>tfjs-layers</code> directly, it is possible to shrink the size by 122 KB or 14%. If you need more than that you can always try to rebuild the repository on your own, removing any unneeded functionality. Of course, this approach would mean a lot of manual work.</p>
<p><strong>Tree Shaking</strong></p>
<p>As you already noticed yourself, tree shaking is currently not supported, but you might want to follow the discussion for <a href="https://github.com/tensorflow/tfjs/issues/353" rel="nofollow noreferrer">support of tree-shaking</a> in the tfjs github repository regarding that topic.</p>
|
javascript|python|tensorflow|webpack|tensorflow.js
| 1
|
374,170
| 57,342,970
|
How to remove elements different than numbers of a Scipy Sparse Matrix?
|
<p>I have a <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.coo_matrix.html" rel="nofollow noreferrer">COO sparse matrix</a> in which every element is a dictionary. I want to filter that matrix by some conditions, nevertheless when I try to multiply the matrix by the filter I get an exception <code>TypeError: no supported conversion for types: (dtype('O'),)</code>. Is it possible to avoid this issue?</p>
<pre><code>from scipy.sparse import coo_matrix
import numpy as np
row = np.array([0, 0, 1, 2, 2, 2])
col = np.array([0, 2, 2, 0, 1, 2])
data = [{"x": 1}, {"y": -1}, {"x": -1}, {"x": 2}, {"t": -2}, {"z": 2}]
matrix = coo_matrix((data, (row, col)), shape=(3, 3))
matrix.multiply(np.array([0, 1, 0])) # Raises exception
</code></pre>
|
<p>It is possible to filter a <code>coo_matrix</code>; nevertheless, it isn't straightforward. First you must create your filter mask with <code>True</code>/<code>False</code> values and then use it to index the matrix's row, column and data vectors.</p>
<pre><code>In [22]: mask = [True, False, False, True, False, False]

In [23]: matrix.data[mask]
Out[23]: array([{'x': 1}, {'x': 2}], dtype=object)

In [24]: matrix.col[mask]
Out[24]: array([0, 0], dtype=int32)

In [25]: matrix.row[mask]
Out[25]: array([0, 2], dtype=int32)
</code></pre>
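<p>From those pieces you can rebuild the filtered matrix; a minimal sketch:</p>
<pre><code>filtered = coo_matrix((matrix.data[mask], (matrix.row[mask], matrix.col[mask])),
                      shape=matrix.shape)
</code></pre>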
|
python|numpy|scipy|sparse-matrix
| 0
|
374,171
| 57,405,672
|
How to get numpy.zeros() and (numpy.ones() * 255) to produce a black and white image respectively?
|
<p>I am new to Python + OpenCV, so this might be a basic question for most of you, as I couldn't find a good/ satisfactory solution for this online.</p>
<p>So I am trying to create an Image by separately creating R-G-B layers<br>
R - Layer of 0s<br>
G - Layer of 255s<br>
B - Layer of 255*Identity matrix</p>
<pre><code>import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
Red = np.zeros([6, 6], dtype = np.uint8)
plt.imshow(Red) # it is just the red layer which is actually all black
plt.show()
Green = np.ones([6, 6], dtype = np.uint8) * 255
plt.imshow(Green) # it is just the Green layer which is actually all white
plt.show()
Blue = np.eye(6, dtype = int) * 255
plt.imshow(Blue) # it is just the Blue layer which is actually black with white diag
plt.show()
</code></pre>
<p>But I am actually getting purple, or a combination of purple and yellow.
<a href="https://i.stack.imgur.com/ZbG0T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZbG0T.png" alt="My Output"></a></p>
<p>Can someone explain what's happening and/or how to solve it?</p>
|
<p>Matplotlib applies its default colormap (viridis) to single-channel 2-D arrays, which is why you see purple and yellow. Force a gray colormap with fixed limits:</p>
<pre><code>Blue = np.eye(6, dtype = int) * 255
plt.imshow(Blue, cmap='gray', vmin=0, vmax=255)
plt.show()
</code></pre>
<p>for more reference <a href="https://stackoverflow.com/questions/3823752/display-image-as-grayscale-using-matplotlib">this answer</a></p>
|
python|numpy|opencv|matplotlib
| 5
|
374,172
| 57,659,946
|
Python: how to find right combination between two pandas dataframe?
|
<p>I have two dataframes <code>df1</code> and <code>df2</code>. </p>
<p><code>df1</code> contains the information of the link between an <code>ID</code> and a <code>Code</code></p>
<pre><code>df1
ID Code
0 48 3
1 47 2
2 50 0
3 49 1
</code></pre>
<p><code>df2</code> contains the information of: the <code>ID</code> as index, some distances <code>d</code> and the different codes.</p>
<pre><code>df2
d1 d2 ... d100 Code1 Code2 ... Code100
47 3.2 5.4 45.2 3 2 1
48 1.4 7.4 46.7 0 3 2
49 5.4 8.9 33.2 1 2 0
50 6.3 8.7 47.5 3 0 2
</code></pre>
<p>I would like to associate to <code>df1</code> the distance relative to the same combination such as</p>
<pre><code>df1
ID Code d
0 48 3 7.4
1 47 2 5.4
2 50 0 8.7
3 49 1 5.4
</code></pre>
<p>Let me say that I have hundreds of Codes and Distances in <code>df2</code>.</p>
<p>The goal is to find the combinations of <code>df1</code> in <code>df2</code>. For instance the combination <code>ID=48</code> and <code>Code=3</code> is in the second row of <code>df2</code> with a distance <code>d2=7.4</code></p>
|
<p>You need to perform merge with selective columns from the referenced <code>df2</code>. After that, you can concat the merged results. </p>
<pre><code>m1 = df1.merge(df2.reset_index(), left_on=['ID', 'Code'], right_on=['index', 'Code1'])[['ID', 'Code', 'd1']].rename(columns={'d1': 'd'})
m2 = df1.merge(df2.reset_index(), left_on=['ID', 'Code'], right_on=['index', 'Code2'])[['ID', 'Code', 'd2']].rename(columns={'d2': 'd'})
res = pd.concat([m1, m2])
</code></pre>
<p>Output:</p>
<pre><code> ID Code d
0 49 1 5.4
0 48 3 7.4
1 47 2 5.4
2 50 0 8.7
</code></pre>
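<p>Since the real <code>df2</code> has a hundred distance/code pairs, the same merge generalizes with a loop instead of writing <code>m1</code>..<code>m100</code> by hand; a sketch, assuming each <code>d{i}</code> column pairs with <code>Code{i}</code>:</p>
<pre><code>parts = []
for i in range(1, 101):
    code_col, d_col = 'Code{}'.format(i), 'd{}'.format(i)
    m = (df1.merge(df2.reset_index(), left_on=['ID', 'Code'],
                   right_on=['index', code_col])
            [['ID', 'Code', d_col]]
            .rename(columns={d_col: 'd'}))
    parts.append(m)

res = pd.concat(parts)
</code></pre>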
|
python|pandas
| 0
|
374,173
| 24,438,273
|
Aggregation on pandas datetime series only returns as datetime series
|
<p>I have a dataframe like</p>
<pre><code>test = pd.DataFrame({'date': ['2013-10-14 21:46:40', '2013-07-17 02:55:06', '2013-01-28 20:25:17'], 'category': [1, 1, 2]})
test['date'] = pd.to_datetime(test['date'])
category date
0 1 2013-10-14 21:46:40
1 1 2013-07-17 02:55:06
2 2 2013-01-28 20:25:17
</code></pre>
<p>and I would like to compute some summary statistics for each category, specifically the earliest and latest date as well as the number of items in each category. The obvious way (to me) to do this is:</p>
<pre><code>test.groupby('category')['date'].agg([len, min, max])
</code></pre>
<p>but when I do this, the <code>len</code> column gets automatically cast as <code>np.datetime64</code>, which I assume is happening because that's the dtype of the original <code>date</code> column:</p>
<pre><code> len min max
category
1 1970-01-01 00:00:00.000000002 2013-07-17 02:55:06 2013-10-14 21:46:40
2 1970-01-01 00:00:00.000000001 2013-01-28 20:25:17 2013-01-28 20:25:17
</code></pre>
<p>I could go back and reconvert this <code>len</code> column to nanoseconds since GMT epoch, but that is pretty ugly and I feel like there must be a better way. Any ideas?</p>
|
<p>use <code>'size'</code>; this is currently an API bug (in that the <code>len</code> should just be translated directly to <code>size</code>), see <a href="https://github.com/pydata/pandas/issues/7570" rel="nofollow">here</a></p>
<pre><code>In [5]: test.groupby('category')['date'].agg(['size', min, max])
Out[5]:
size min max
category
1 2 2013-07-17 02:55:06 2013-10-14 21:46:40
2 1 2013-01-28 20:25:17 2013-01-28 20:25:17
</code></pre>
|
python|datetime|numpy|pandas
| 2
|
374,174
| 24,107,440
|
Python Pandas calucate Z score of groupby means
|
<p>I have a dataframe like this:</p>
<pre><code>df = pd.DataFrame({'Year' : ['2010', '2010', '2010', '2010', '2010', '2011', '2011', '2011', '2011', '2011', '2012', '2012', '2012', '2012', '2012'],
'Name' : ['Bob', 'Joe', 'Bill', 'Bob', 'Joe', 'Dave', 'Bob', 'Joe', 'Bill', 'Bill', 'Joe', 'Dave', 'Dave', 'Joe', 'Steve'],
'Score' : [95, 76, 77, 85, 82, 92, 67, 80, 77, 79, 82, 92, 64, 71, 83]})
</code></pre>
<p>I would like to get the Z Score for each <em>Name</em> in each <em>Year</em>.</p>
<p>I can do it if subset the Year column like this:</p>
<pre><code>(df[df.Year == '2010'].groupby(['Year', 'Name'])['Score'].mean() - df[df.Year == '2010'].groupby(['Year', 'Name'])['Score'].mean().mean()) / ( df[df.Year == '2010'].groupby(['Year', 'Name'])['Score'].mean().std())
</code></pre>
<p>Is there a cleaner way of doing it?</p>
|
<p>There is a <code>zscore</code> function in <code>scipy</code>, but be careful: the default delta degrees of freedom is 0 in <code>scipy.stats.zscore</code>, while pandas uses 1, so pass <code>ddof=1</code>:</p>
<pre><code>In [171]:
import scipy.stats as ss
s = df[df.Year == '2010'].groupby(['Year', 'Name'])['Score'].mean()
pd.Series(ss.zscore(s, ddof=1), s.index)

Out[171]:
Year  Name
2010  Bill   -0.714286
      Bob     1.142857
      Joe    -0.428571
dtype: float64
</code></pre>
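<p>To get the z-scores for every year at once instead of subsetting one year at a time, you can group the per-name means by year and transform; a sketch:</p>
<pre><code>means = df.groupby(['Year', 'Name'])['Score'].mean()
zscores = means.groupby(level='Year').transform(lambda x: ss.zscore(x, ddof=1))
</code></pre>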
|
python-2.7|pandas|group-by
| 1
|
374,175
| 24,195,815
|
Averaging time series of different lengths
|
<p>I have a number of lists (time series)</p>
<pre><code>dictionary = {'a': [1,2,3,4,5], 'b': [5,2,3,4,1], 'c': [1,3,5,4,6]}
</code></pre>
<p>that I would like to average, element-wise, into another:</p>
<pre><code>merged = {'m': [2.33,2.33,3.66,4.0,4.0]}
</code></pre>
<p>Is there a smart way to find this?</p>
<p>What if the lists have different lengths and I want either an average from what's available, or to pretend all lists happened in the same time frame despite having different numbers of data points?</p>
|
<p>Given that you tagged this with numpy and scipy, I'm assuming it's OK to use scientific python functions. A terse way to accomplish the first task is then</p>
<pre><code>$ ipython --pylab
>>> dictionary = {'a': [1,2,3,4,5], 'b': [5,2,3,4,1], 'c': [1,3,5,4,6]}
>>> map(mean, np.array(dictionary.values()).transpose())
[2.3333333333333335, 2.3333333333333335, 3.6666666666666665, 4.0, 4.0]
</code></pre>
<p>You can of course put this into a dictionary with key 'm' and round off the results to get the result in the form you specified.</p>
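<p>With plain NumPy (avoiding the pylab namespace), the column-wise mean is a one-liner; a sketch that also rounds into the requested form:</p>
<pre><code>import numpy as np
merged = {'m': np.array(list(dictionary.values())).mean(axis=0).round(2).tolist()}
# {'m': [2.33, 2.33, 3.67, 4.0, 4.0]}
</code></pre>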
<p>As for handling missing values or arrays of different lengths,
you'd first need to decide how the missing data should be treated.
The way you've asked the second question feels too vague.</p>
|
python|numpy|scipy
| 2
|
374,176
| 24,007,762
|
Python Pandas - Using to_sql to write large data frames in chunks
|
<p>I'm using Pandas' <code>to_sql</code> function to write to MySQL, which is timing out due to large frame size (1M rows, 20 columns).</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html</a></p>
<p>Is there a more official way to chunk through the data and write rows in blocks? I've written my own code, which seems to work. I'd prefer an official solution though. Thanks!</p>
<pre><code>import pandas as pd
import sqlalchemy

def write_to_db(engine, frame, table_name, chunk_size):
    start_index = 0
    end_index = chunk_size if chunk_size < len(frame) else len(frame)
    frame = frame.where(pd.notnull(frame), None)
    if_exists_param = 'replace'
    while start_index != end_index:
        print "Writing rows %s through %s" % (start_index, end_index)
        frame.iloc[start_index:end_index, :].to_sql(con=engine, name=table_name, if_exists=if_exists_param)
        if_exists_param = 'append'
        start_index = min(start_index + chunk_size, len(frame))
        end_index = min(end_index + chunk_size, len(frame))

engine = sqlalchemy.create_engine('mysql://...') # database details omitted
write_to_db(engine, frame, 'retail_pendingcustomers', 20000)
</code></pre>
|
<p>Update: this functionality has been merged in pandas master and will be released in 0.15 (probably end of september), thanks to @artemyk! See <a href="https://github.com/pydata/pandas/pull/8062">https://github.com/pydata/pandas/pull/8062</a></p>
<p>So starting from 0.15, you can specify the <code>chunksize</code> argument and e.g. simply do:</p>
<pre><code>df.to_sql('table', engine, chunksize=20000)
</code></pre>
|
python|mysql|sql|pandas|sqlalchemy
| 29
|
374,177
| 24,028,281
|
Pandas series operations very slow after upgrade
|
<p>I am seeing a huge difference in performance between pandas 0.11 and pandas 0.13 on simple series operations.</p>
<pre><code>In [7]: df = pandas.DataFrame({'a':np.arange(1000000), 'b':np.arange(1000000)})
In [8]: pandas.__version__
Out[8]: '0.13.0'
In [9]: %timeit df['a'].values+df['b'].values
100 loops, best of 3: 4.33 ms per loop
In [10]: %timeit df['a']+df['b']
10 loops, best of 3: 42.5 ms per loop
</code></pre>
<p>On version 0.11 however (on the same machine), </p>
<pre><code>In [10]: pandas.__version__
Out[10]: '0.11.0'
In [11]: df = pandas.DataFrame({'a':np.arange(1000000), 'b':np.arange(1000000)})
In [12]: %timeit df['a'].values+df['b'].values
100 loops, best of 3: 2.22 ms per loop
In [13]: %timeit df['a']+df['b']
100 loops, best of 3: 2.3 ms per loop
</code></pre>
<p>So on 0.13, it's about 20x slower. Profiling it, I see</p>
<pre><code> ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 0.047 0.047 <string>:1(<module>)
1 0.000 0.000 0.047 0.047 ops.py:462(wrapper)
3 0.000 0.000 0.044 0.015 series.py:134(__init__)
1 0.000 0.000 0.044 0.044 series.py:2394(_sanitize_array)
1 0.000 0.000 0.044 0.044 series.py:2407(_try_cast)
1 0.000 0.000 0.044 0.044 common.py:1708(_possibly_cast_to_datetime)
1 0.044 0.044 0.044 0.044 {pandas.lib.infer_dtype}
1 0.000 0.000 0.003 0.003 ops.py:442(na_op)
1 0.000 0.000 0.003 0.003 expressions.py:193(evaluate)
1 0.000 0.000 0.003 0.003 expressions.py:93(_evaluate_numexpr)
</code></pre>
<p>So it's spending a huge amount of time on _possibly_cast_to_datetime and pandas.lib.infer_dtype.</p>
<p>Is this change expected? How can I get the old, faster performance back?</p>
<p>Note: It appears that the problem is that the output is of an integer type. If I make one of the columns a double, it goes back to being fast ...</p>
|
<p>This was a very odd bug having to do (I think) with a strange lookup going on in cython. For some reason</p>
<pre><code>_TYPE_MAP = { np.int64 : 'integer' }
np.int64 in _TYPE_MAP
</code></pre>
<p>was not evaluating correctly, ONLY for <code>int64</code> (but worked just fine for all other dtypes). It's possible the hash of the <code>np.dtype</code> object was screwy for some reason. In any event, it was fixed here: <a href="https://github.com/pydata/pandas/pull/7342">https://github.com/pydata/pandas/pull/7342</a>, so we now use name hashing instead.</p>
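<p>A minimal sketch of that workaround (name hashing): key the map by the dtype's <em>name</em> string rather than by the type object, so the lookup no longer depends on hashing <code>np.int64</code> itself.</p>
<pre><code>import numpy as np

_TYPE_MAP = {np.dtype(np.int64).name: 'integer'}  # keyed by 'int64', a plain string
print(np.dtype(np.int64).name in _TYPE_MAP)       # True, reliably
</code></pre>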
<p>Here's the perf comparison:</p>
<p>master</p>
<pre><code>In [1]: df = pandas.DataFrame({'a':np.arange(1000000), 'b':np.arange(1000000)})
In [2]: %timeit df['a'] + df['b']
100 loops, best of 3: 2.49 ms per loop
</code></pre>
<p>0.14.0</p>
<pre><code>In [6]: df = pandas.DataFrame({'a':np.arange(1000000), 'b':np.arange(1000000)})
In [7]: %timeit df['a'] + df['b']
10 loops, best of 3: 35.1 ms per loop
</code></pre>
|
python|pandas
| 2
|
374,178
| 24,071,202
|
Improve the speed of loop performance
|
<p>I am trying to build a sampler for my Markov chain Monte Carlo code using <strong>pyMC</strong>. With the sampled parameters of the model, the output is built each time by calling the <code>getLensing</code> <strong>method</strong> of the <strong>nfw class</strong> and compared to the observed data. My problem is that my code is very slow when computing the model output. I have, for instance, 24000 data points, and for each of them I have a probability distribution, e.g. <code>obj_pdf</code>, which I marginalize over (integrate) in the inner loop, so each time it takes at least an hour to compute all the outputs of the model.</p>
<pre><code>import numpy as np

z = np.arange(0, 1.5, 0.001)
z_h = 0.15

for j in range(pos.shape[0]):
    value1 = 0; value2 = 0
    pdf = obj_pdf[j, :] / sum(obj_pdf[j, :])
    for i in range(len(z)):
        if (z[i] > z_h):
            g1, g2 = nfw.getLensing(pos[j, :], z[i])
            value1 += g1 * pdf[i]
            value2 += g2 * pdf[i]
    if (j < 1):
        value = np.array([value1, value2])
    else:
        value = np.vstack((value, np.array([value1, value2])))
</code></pre>
<p>So if I want to re-sample the input parameters, for instance, 100000 times, it would take months to do the MCMC calculation. Is there any smart way to speed up my code and loops?
Do I need to use something like <code>numpy.vectorize</code>, or will it still not improve the speed of my code? How about <code>cython</code>: would it increase the performance of the code? In case it helps, how does it work?</p>
<p>I ran <code>python -m cProfile mycode.py</code> to see what was causing my code to be slow, and these were the results:</p>
<pre><code> 12071 0.004 0.000 0.004 0.000 {min}
2 0.000 0.000 0.000 0.000 {next}
1 0.000 0.000 0.000 0.000 {numexpr.interpreter._set_num_threads}
8 0.002 0.000 0.002 0.000 {numpy.core.multiarray.arange}
132424695 312.210 0.000 312.210 0.000 {numpy.core.multiarray.array}
73498 3.933 0.000 3.933 0.000 {numpy.core.multiarray.concatenate}
99151506 201.497 0.000 201.497 0.000 {numpy.core.multiarray.copyto}
99151500 164.303 0.000 164.303 0.000 {numpy.core.multiarray.empty_like}
28 0.000 0.000 0.000 0.000 {numpy.core.multiarray.empty}
2 0.000 0.000 0.000 0.000 {numpy.core.multiarray.set_string_function}
1 0.000 0.000 0.000 0.000 {numpy.core.multiarray.set_typeDict}
1 0.000 0.000 0.000 0.000 {numpy.core.multiarray.where}
14 0.000 0.000 0.000 0.000 {numpy.core.multiarray.zeros}
14 0.000 0.000 0.000 0.000 {numpy.core.umath.geterrobj}
7 0.000 0.000 0.000 0.000 {numpy.core.umath.seterrobj}
270 0.000 0.000 0.000 0.000 {numpy.lib._compiled_base.add_docstring}
6 0.011 0.002 0.011 0.002 {open}
1 0.000 0.000 0.000 0.000 {operator.div}
2 0.000 0.000 0.000 0.000 {operator.mul}
1918 0.000 0.000 0.000 0.000 {ord}
2 0.000 0.000 0.000 0.000 {posix.WEXITSTATUS}
2 0.000 0.000 0.000 0.000 {posix.WIFEXITED}
1 0.000 0.000 0.000 0.000 {posix.WIFSIGNALED}
9 0.002 0.000 0.002 0.000 {posix.access}
3 0.000 0.000 0.000 0.000 {posix.close}
5 0.002 0.000 0.002 0.000 {posix.fdopen}
1 0.002 0.002 0.002 0.002 {posix.fork}
4 0.000 0.000 0.000 0.000 {posix.getcwd}
6 0.000 0.000 0.000 0.000 {posix.getpid}
1 0.000 0.000 0.000 0.000 {posix.getuid}
1 0.000 0.000 0.000 0.000 {posix.listdir}
6 0.000 0.000 0.000 0.000 {posix.lstat}
4 0.043 0.011 0.043 0.011 {posix.open}
2 0.000 0.000 0.000 0.000 {posix.pipe}
2 0.004 0.002 0.004 0.002 {posix.popen}
1 0.007 0.007 0.007 0.007 {posix.read}
205 0.059 0.000 0.059 0.000 {posix.stat}
3 0.000 0.000 0.000 0.000 {posix.sysconf}
2 0.000 0.000 0.000 0.000 {posix.uname}
4 0.004 0.001 0.004 0.001 {posix.unlink}
3 0.000 0.000 0.000 0.000 {posix.urandom}
1 0.000 0.000 0.000 0.000 {posix.waitpid}
1 0.000 0.000 0.000 0.000 {pow}
1522 0.004 0.000 0.004 0.000 {range}
73 0.000 0.000 0.000 0.000 {repr}
99151501 2102.879 0.000 6380.906 0.000 {scipy.integrate._quadpack._qagse}
1776 0.002 0.000 0.002 0.000 {setattr}
32 0.000 0.000 0.000 0.000 {sorted}
24500 18.861 0.001 18.861 0.001 {sum}
184 0.000 0.000 0.000 0.000 {sys._getframe}
1 0.000 0.000 0.000 0.000 {sys.getfilesystemencoding}
2 0.000 0.000 0.000 0.000 {sys.settrace}
1 0.000 0.000 0.000 0.000 {tables.utilsextension._broken_hdf5_long_double}
1 0.000 0.000 0.000 0.000 {tables.utilsextension.blosc_compressor_list}
2 0.000 0.000 0.000 0.000 {tables.utilsextension.get_hdf5_version}
1 0.000 0.000 0.000 0.000 {tables.utilsextension.get_pytables_version}
2 0.000 0.000 0.000 0.000 {tables.utilsextension.which_lib_version}
27 0.000 0.000 0.000 0.000 {thread.allocate_lock}
6 0.000 0.000 0.000 0.000 {thread.get_ident}
4 0.000 0.000 0.000 0.000 {thread.start_new_thread}
1 0.000 0.000 0.000 0.000 {time.localtime}
2 0.000 0.000 0.000 0.000 {time.time}
105 0.000 0.000 0.000 0.000 {unichr}
229 0.000 0.000 0.000 0.000 {vars}
49300 2.127 0.000 2.127 0.000 {zip}
</code></pre>
|
<p>Here is some code. I'd be amazed if the timings went from 60 to 59 minutes though.</p>
<pre><code>import numpy as np

z_h = 0.15
z = np.arange(z_h, 1.5, 0.001)  # start the range from what you need (not exactly)
z = z[1:]  # needed because you said if (z[i]>z_h); arange gives (z[i]>=z_h)
value = np.array([])

for j in range(pos.shape[0]):
    value1 = 0; value2 = 0
    pdf = obj_pdf[j, :] / sum(obj_pdf[j, :])
    posj = pos[j, :]  # precalculate
    for i, zi in enumerate(z):  # use enumerate if you need value and index
        g1, g2 = nfw.getLensing(posj, zi)
        value1 += g1 * pdf[i]
        value2 += g2 * pdf[i]
    value = np.append(value, np.array([value1, value2]))  # use a proper append function
</code></pre>
<p>Like the others, I assume getLensing is eating up your CPU cycles.</p>
<p>According to the first answer to <a href="https://stackoverflow.com/questions/3379301/using-numpy-vectorize-on-functions-that-return-vectors">this</a>, np.vectorize will not speed up your function.</p>
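<p>Beyond that, two smaller wins are possible: preallocating <code>value</code> (both <code>np.vstack</code> and <code>np.append</code> copy the whole array on every iteration) and collapsing the inner accumulation into a single dot product. A self-contained sketch, with dummy stand-ins for <code>pos</code>, <code>obj_pdf</code> and <code>nfw.getLensing</code>:</p>
<pre><code>import numpy as np

z = np.arange(0.151, 1.5, 0.001)           # only the z > z_h part of the grid
pos = np.random.random((100, 2))           # hypothetical data
obj_pdf = np.random.random((100, len(z)))  # hypothetical per-object pdfs

def get_lensing(p, zi):                    # hypothetical stand-in for nfw.getLensing
    return p[0] * zi, p[1] * zi

value = np.empty((pos.shape[0], 2))        # preallocate the result
for j in range(pos.shape[0]):
    pdf = obj_pdf[j] / obj_pdf[j].sum()
    g = np.array([get_lensing(pos[j], zi) for zi in z])  # shape (len(z), 2)
    value[j] = np.dot(pdf, g)              # marginalize over z in one step
</code></pre>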
|
numpy|scipy|multiprocessing|cython|pymc
| 0
|
374,179
| 43,834,533
|
How to quantize the values of tf.Variables in Tensorflow
|
<p>I have a training model like</p>
<pre><code>Y = w * X + b
</code></pre>
<p>where Y and X are output and input placeholder, w and b are the vectors<br>
I already know the value of w can only be 0 or 1, while b is still tf.float32.<br><br>
How could I quantize the range of variable w when I define it?<br>
or<br>
Can I have two different learning rates? The rate for w is 1 or -1 and the rate for b is 0.0001 as usual.<br></p>
|
<p>There is no way to limit your variable during the activation. But what you can do is to limit it after each iteration. Here is one way to do this with <a href="https://www.tensorflow.org/api_docs/python/tf/where" rel="noreferrer"><code>tf.where()</code></a>:</p>
<pre><code>import tensorflow as tf

a = tf.random_uniform(shape=(3, 3))
b = tf.where(
    tf.less(a, tf.zeros_like(a) + 0.5),
    tf.zeros_like(a),
    tf.ones_like(a)
)

with tf.Session() as sess:
    A, B = sess.run([a, b])
    print A, '\n'
    print B
</code></pre>
<p>which will convert everything at or above 0.5 to 1 and everything else to 0:</p>
<pre><code>[[ 0.2068541 0.12682056 0.73839438]
[ 0.00512838 0.43465161 0.98486936]
[ 0.32126224 0.29998791 0.31065524]]
[[ 0. 0. 1.]
[ 0. 0. 1.]
[ 0. 0. 0.]]
</code></pre>
|
variables|tensorflow|rate
| 6
|
374,180
| 43,596,570
|
convert pandas lists into dummy variables
|
<p>I have a pandas DataFrame which contains lists of values that I want to convert to dummy variables. Basically I want to convert:</p>
<p><a href="https://i.stack.imgur.com/weBPw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/weBPw.png" alt="enter image description here"></a></p>
<p>to this:</p>
<p><a href="https://i.stack.imgur.com/Nsoq3.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Nsoq3.png" alt="enter image description here"></a></p>
|
<pre><code>df = pd.DataFrame({0: [['hello', 'motto'], ['motto', 'mania']]})
print(df)
0
0 [hello, motto]
1 [motto, mania]
</code></pre>
<p>use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.join.html" rel="noreferrer"><strong><code>str.join</code></strong></a> followed by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.get_dummies.html" rel="noreferrer"><strong><code>str.get_dummies</code></strong></a></p>
<pre><code>df[0].str.join('|').str.get_dummies()
hello mania motto
0 1 0 1
1 0 1 1
</code></pre>
|
pandas
| 9
|
374,181
| 43,532,878
|
Pandas bins with additional bin of ==0
|
<p>I am trying to use pandas bins with a range like below:</p>
<pre><code>tipBins = [1,5,10,15,20,25,30]
</code></pre>
<p>Also, for some rides the tip would be zero, which doesn't fall under any range.
How do I supply that zero value in the pandas bins? I need a partition of bins like below:</p>
<pre><code>==0
1-5
5-10
10-15
15-20
20-25
25-30
</code></pre>
<hr>
<pre><code>import numpy as np
tipBins = [1,5,10,15,20,25,30]
tipData=DataFrame(tipPercentage)
tip_data_names = ["No Tip", '1-5','5-10','10-15','15-20','20-25','25-30']
tipData['ranges'] = pd.cut(tipData['tipPercent'], tipBins, labels=tip_data_names)
td=tipData[['count','ranges']].groupby(['ranges']).sum().fillna(0)
td.reset_index()
</code></pre>
<p>Should I have bins like this: <code>tipBins = [0,0,1,5,10,15,20,25,30]</code>?</p>
|
<p>If you are sure the <em>tipPercentage</em> won't contain any negative numbers, you can add a negative number in the <code>tipBins</code>, for instance:</p>
<pre><code>tipBins = [-1,1,5,10,15,20,25,30]
</code></pre>
<p><em>Example</em>:</p>
<pre><code>v = [0, 4, 7, 20, 26]
tip_data_names = ["No Tip", '1-5','5-10','10-15','15-20','20-25','25-30']
import pandas as pd
pd.cut(v, tipBins, labels=tip_data_names)
# [No Tip, 1-5, 5-10, 15-20, 25-30]
# Categories (7, object): [No Tip < 1-5 < 5-10 < 10-15 < 15-20 < 20-25 < 25-30]
</code></pre>
<p>If you have data outside of the range you want, for example, some values between 0 and 1 which you don't want to include, then you might need to filter on your data before <em>cut</em>:</p>
<pre><code>tipData = tipData[(tipData['tipPercent'] == 0) | ((tipData['tipPercent'] >= 1) & (tipData['tipPercent'] <= 30))]
</code></pre>
<p>In this way, your data would fall exclusively in the range you care about, then you can cut it use the method as above.</p>
|
python|pandas|numpy|dataframe
| 1
|
374,182
| 43,728,629
|
TensorFlow Saver unworked
|
<p>I am trying to save all variables of the model, but instead the error "<em>FailedPreconditionError: Attempting to use uninitialized value beta1_power</em>" is raised.
I haven't defined a beta1_power variable. I don't know what it is.</p>
<pre><code>saver = tf.train.Saver()
saver.save(save_path="/home/eldmitro/MNIST",sess=tf.Session())
</code></pre>
<p><strong>Layers defining:</strong></p>
<pre><code>def params_init(shape, name):
    w = tf.truncated_normal(shape=shape, stddev=0.1)
    return tf.Variable(w, name=name)

def conv2D_init(x, kernel_shape, output_channels, name, activation=tf.nn.relu):
    with tf.name_scope(name):
        w = params_init(kernel_shape, "W")
        b = params_init([output_channels], "b")
        return tf.nn.relu(tf.nn.conv2d(x, filter=w, strides=[1, 1, 1, 1], padding='SAME', name=name))

def pool2x2(x, name):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', name=name)

def fcl_init(x, in_neurons, out_neurons, name):
    with tf.name_scope(name):
        w = params_init([in_neurons, out_neurons], name="W")
        b = params_init([out_neurons], name="b")
        return (tf.matmul(x, w) + b)
</code></pre>
|
<p>You should first initialize the variables and then save them. You can initialize the variables either by restoring from a checkpoint or by running</p>
<pre><code>sess.run(tf.global_variables_initializer())
</code></pre>
<p>You cannot save uninitialized variables.</p>
<p>Then do this: </p>
<pre><code>saver = tf.train.Saver()
sess = tf.Session()
sess.run(tf.global_variables_initializer())
saver.save(save_path="/home/eldmitro/MNIST", sess=sess)
</code></pre>
<p>This just stores the initialized (not yet trained) variables.</p>
|
python|python-2.7|machine-learning|tensorflow|computer-vision
| 0
|
374,183
| 43,783,280
|
Python Compare 2 CSV's delete differences
|
<p>I have 1 CSV that is heavily manipulated, it looks something like this:</p>
<p><code>"ID","Vulnerability","Report Category","IP","DNS","NetBIOS","OS",
"x","Title","Category Type","x.x.x.x","DNS Name","Net Name","Windows"</code></p>
<p>The 2nd CSV looks like this:</p>
<p><code>"IP","DNS","NetBIOS","OS","Title","x.x.x.x","DNS Name","Net Name","Operating System","Title"</code></p>
<p>What I need to do is compare the 2 CSVs based on certain columns.<br>
On CSV 1 I want to compare column B (Vulnerability) and column D (IP) with CSV 2 column E (Title) and column A (IP). For the purposes of this argument, CSV 1 column B (Vulnerability) will match CSV 2 column E's (Title) data exactly.<br>
Once it finds matches it will delete the rows that match on CSV 2.</p>
|
<p>You already know you can use pandas so just load both csv files into dataframes and then join the tables and delete where they match.</p>
<p>For example:</p>
<pre><code>csv2 = pd.DataFrame(data = {'col1' : [1, 2, 3, 4, 5], 'col2' : [10, 11, 12, 13, 14]})
csv1 = pd.DataFrame(data = {'col1' : [1, 3, 4], 'col2' : [10, 12, 13]})
print(csv2.loc[~csv2.set_index(list(csv2.columns)).index.isin(csv1.set_index(list(csv1.columns)).index)])
</code></pre>
<p>In this example you will keep all values from csv2 which are not in csv1.</p>
<h1>This should be more memory efficient:</h1>
<pre><code>merged = csv2.merge(csv1, how="left", left_on=["Title", "IP"], right_on=["Vulnerability", "IP"])
print(merged.loc[merged['Vulnerability'].isnull()])
</code></pre>
<p>This does a left join (keeping all values in csv2) and filters so that only values which do not match to csv1 are kept. </p>
|
python|python-3.x|pandas
| 0
|
374,184
| 43,763,117
|
Python Pandas If value in column D equals Windows Print Server
|
<p>I have a table that looks like this<br/>
"IP","DNS","NetBIOS","OS"<br/>
"x.x.x","name","name","Windows 2012"<br/>
"x.x.x","name","name","HP JetDirect"<br/> </p>
<p>I am trying to find a way, using Pandas, to have the code look in the OS column: if it contains "Windows" (anything after "Windows" does not matter), it will print the word "Workstation"; if it is anything else, it will print "Printer".<br/><br></p>
<p>I also have this line of code that would insert the new column. But I would need it to know what value to print based on the question above<br></p>
<pre><code>df.insert(4,'Report Category',' ')
</code></pre>
|
<p>You can use <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>numpy.where</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>str.contains</code></a>:</p>
<pre><code>print (df['OS'].str.contains('Windows'))
0 True
1 False
Name: OS, dtype: bool
#for last column
df['Report Category'] = np.where(df['OS'].str.contains('Windows'), 'Workstation', 'Printer')
print (df)
IP DNS NetBIOS OS Report Category
0 x.x.x name name Windows 2012 Workstation
1 x.x.x name name HP JetDirect Printer
</code></pre>
<p>And for <code>4th</code>column use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.insert.html" rel="nofollow noreferrer"><code>insert</code></a>:</p>
<pre><code>df.insert(4,'Report Category', np.where(df['OS'].str.contains('Windows'),
'Workstation', 'Printer'))
print (df)
IP DNS NetBIOS OS Report Category
0 x.x.x name name Windows 2012 Workstation
1 x.x.x name name HP JetDirect Printer
</code></pre>
|
python|python-3.x|pandas
| 1
|
374,185
| 43,711,311
|
How to calculate class scores when batch size changes
|
<p>My question is at the bottom, but first I will explain what I am attempting to achieve.</p>
<p>I have an example I am trying to implement on my own model. I am creating an adversarial image, in essence I want to graph how the image score changes when the epsilon value changes.</p>
<p>So let's say my <code>model</code> has already been trained, and in this example I am using the following model...</p>
<pre><code>x = tf.placeholder(tf.float32, shape=[None, 784])
...
...
# construct model
logits = tf.matmul(x, W) + b
pred = tf.nn.softmax(logits) # Softmax
</code></pre>
<p>Next, let us assume I extract an array of images of the number 2 from the <code>mnist</code> data set, and I saved it in the following variable...</p>
<pre><code># convert into a numpy array of shape [100, 784]
labels_of_2 = np.concatenate(labels_of_2, axis=0)
</code></pre>
<p>So now, in the example that I have, the next step is to try different epsilon values on every image...</p>
<pre><code># random epsilon values from -1.0 to 1.0
epsilon_res = 101
eps = np.linspace(-1.0, 1.0, epsilon_res).reshape((epsilon_res, 1))
labels = [str(i) for i in range(10)]
num_colors = 10
cmap = plt.get_cmap('hsv')
colors = [cmap(i) for i in np.linspace(0, 1, num_colors)]
# Create an empty array for our scores
scores = np.zeros((len(eps), 10))
for j in range(len(labels_of_2)):
# Pick the image for this iteration
x00 = labels_of_2[j].reshape((1, 784))
# Calculate the sign of the derivative,
# at the image and at the desired class
# label
sign = np.sign(im_derivative[j])
# Calculate the new scores for each
# adversarial image
for i in range(len(eps)):
x_fool = x00 + eps[i] * sign
scores[i, :] = logits.eval({x: x_fool,
keep_prob: 1.0})
</code></pre>
<p>Now we can graph the images using the following...</p>
<pre><code># Create a figure
plt.figure(figsize=(10, 8))
plt.title("Image {}".format(j))
# Loop through the score functions for each
# class label and plot them as a function of
# epsilon
for k in range(len(scores.T)):
plt.plot(eps, scores[:, k],
color=colors[k],
marker='.',
label=labels[k])
plt.legend(prop={'size':8})
plt.xlabel('Epsilon')
plt.ylabel('Class Score')
plt.grid('on')
</code></pre>
<p>For the first image the graph would look something like the following...</p>
<p><a href="https://i.stack.imgur.com/RX07P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RX07P.png" alt="enter image description here"></a></p>
<p><strong>Now Here Is My Question</strong></p>
<p>Let's say the model I trained used a <code>batch_size</code> of <code>100</code>; in that case the following line would not work...</p>
<pre><code> scores[i, :] = logits.eval({x: x_fool,
keep_prob: 1.0})
</code></pre>
<p>In order for this to work, I would need to pass an array of 100 images to the model, but in this instance <code>x_fool</code> is just one image of size <code>(1, 784)</code>.</p>
<p>I want to graph the effect of different epsilon values on class scores for any one image, but how can I do so when I need to calculate the score of <code>100</code> images at a time (since my model was trained on a batch_size of 100)?</p>
|
<p>You can choose not to fix a batch size by setting it to <code>None</code>. That way, any batch size can be used.</p>
<p>However, keep in mind that this non-choice could come with <a href="https://stackoverflow.com/questions/42547456/are-there-any-downsides-of-creating-tensorflow-placeholders-for-variable-sized-v">a moderate penalty</a>.</p>
<p>This fixes it if you start again from scratch. If you start from an existing trained network with a batch size of 100, you can create a test network that is similar to your starting network except for the batch size. You can set the batch size to <code>1</code>, or again, to <code>None</code>.</p>
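<p>A minimal sketch, assuming the MNIST-style model from the question: a placeholder whose first dimension is <code>None</code> accepts any batch size, so you can feed 100 images during training and a single image when graphing scores.</p>
<pre><code>import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 784])  # None = any batch size
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b  # works for x of shape (100, 784) or (1, 784)
</code></pre>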
|
python-3.x|matplotlib|machine-learning|tensorflow
| 1
|
374,186
| 43,735,722
|
KNN Algorithm from scratch python
|
<p>I am trying to execute a KNN algorithm from scratch, but I am getting a really strange error saying "KeyError: 0"</p>
<p>I assume this implies I have an empty dictionary somewhere, but I don't understand how that can be. I might just add, for the sake of clarity, that the data works fine in the black-box KNN algorithm, so it definitely has to be something in the code...</p>
<p>This is my code:</p>
<pre><code>import numpy as np
import pandas as pd
import csv
import scipy.stats as stats
import math
from collections import Counter
import operator
from operator import itemgetter

"""Training features dataset"""
filenametrain_data = 'training_data.csv'
training_feature_set = pd.read_csv(filenametrain_data, header=None, usecols=range(1,13627))

"""Training labels dataset"""
filenametrain_label = 'training_labels.csv'
training_feature_label = pd.read_csv(filenametrain_label, header=None, usecols=[1], names=['Category'])

"""Split into training and testing datasets 90%/10%"""
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(training_feature_set, training_feature_label, test_size=0.1, random_state=42)

"""KNN Model"""
def distance(X_train, y_train):
    dist = 0.0
    for i in range(len(X_train)):
        dist += pow((X_train[i] - y_train[i]), 2)
    return math.sqrt(dist)

def getNeighbors(X_train, y_train, X_test, k):
    distances = []
    for i in range(len(X_train)):
        dist = distance(X_test, X_train[i])
        distances.append((X_train[i], dist, y_train[i]))
    distances.sort(key=operator.itemgetter(1))
    neighbor = []
    for elem in range(k):
        neighbor.append((distances[elem][0], distances[elem][2]))
    return neighbor

def getResponse(neighbors):
    classVotes = {}
    for x in range(len(neighbors)):
        response = int(neighbors[x][-1])
        if response in classVotes:
            classVotes[response] += 1
        else:
            classVotes[response] = 1
    sortedVotes = sorted(classVotes.items(), key=operator.itemgetter(1), reverse=True)
    return sortedVotes[0][0]

"""Prediction"""
predictions = []
k = 4
for x in range(len(X_test)):
    neighbors = getNeighbors(X_train, y_train, y_test[x], k)
    result = getResponse(neighbors)
    predictions.append(result)
</code></pre>
<p>The error returned is:</p>
<blockquote>
<p>Traceback (most recent call last):</p>
<p>File "", line 2, in
neighbors = getNeighbors(X_train, y_train, y_test[x], k)</p>
<p>File "C:\ANACONDA\lib\site-packages\pandas\core\frame.py", line
1797, in <strong>getitem</strong>
return self._getitem_column(key)</p>
<p>File "C:\ANACONDA\lib\site-packages\pandas\core\frame.py", line
1804, in _getitem_column
return self._get_item_cache(key)</p>
<p>File "C:\ANACONDA\lib\site-packages\pandas\core\generic.py", line
1084, in _get_item_cache
values = self._data.get(item)</p>
<p>File "C:\ANACONDA\lib\site-packages\pandas\core\internals.py", line
2851, in get
loc = self.items.get_loc(item)</p>
<p>File "C:\ANACONDA\lib\site-packages\pandas\core\index.py", line
1572, in get_loc
return self._engine.get_loc(_values_from_object(key))</p>
<p>File "pandas\index.pyx", line 134, in
pandas.index.IndexEngine.get_loc (pandas\index.c:3824)</p>
<p>File "pandas\index.pyx", line 154, in
pandas.index.IndexEngine.get_loc (pandas\index.c:3704)</p>
<p>File "pandas\hashtable.pyx", line 686, in
pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12280)</p>
<p>File "pandas\hashtable.pyx", line 694, in
pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12231)</p>
<p>KeyError: 0</p>
</blockquote>
<p>The datasets can be accessed <a href="https://www.dropbox.com/sh/686pironeq4bfia/AABAiZyVhQ2dOZ6pf_SDij3Ba?dl=0" rel="nofollow noreferrer">here</a></p>
|
<p>EDIT: You may have an extra character at the beginning of your csv files. Try specifying the encoding in the read_csv() calls. See "encoding" in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html</a></p>
<blockquote>
<p>encoding : str, default None Encoding to use for UTF when
reading/writing (ex. ‘utf-8’). List of Python standard encodings:
<a href="https://docs.python.org/3/library/codecs.html#standard-encodings" rel="nofollow noreferrer">https://docs.python.org/3/library/codecs.html#standard-encodings</a></p>
</blockquote>
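<p>A sketch of that first suggestion, assuming the stray character is a UTF-8 byte-order mark (<code>'utf-8-sig'</code> strips it, where plain <code>'utf-8'</code> would leave it attached to the first value):</p>
<pre><code>training_feature_set = pd.read_csv('training_data.csv', header=None,
                                   usecols=range(1, 13627), encoding='utf-8-sig')
</code></pre>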
<p>You're using a dot when you don't need a dot (in two places I can see right off the bat):</p>
<pre><code>operator.itemgetter(1)
</code></pre>
<p>You've done an import of itemgetter specifically:</p>
<pre><code>from operator import itemgetter
</code></pre>
<p>So when you call itemgetter, just call it without dot notation:</p>
<pre><code>itemgetter(1)
</code></pre>
|
python|algorithm|pandas|machine-learning|knn
| 0
|
374,187
| 43,754,344
|
Tensorflow Auto Tagging
|
<p>I want to know how to make an auto-tagging for images.</p>
<p>I have tried TensorFlow and trained the model several times.
To start, it was quite good for classification.
But now, I need to do auto-tagging.</p>
<p>Using TensorFlow, the prediction results always sum to 1.</p>
<p>For example, something like this:</p>
<p>xxx.jpg prediction result:</p>
<ul>
<li>Cat = 0.822</li>
<li>Dog = 0.177</li>
<li>Deer = 0.001</li>
</ul>
<p>The sum will always be 1.</p>
<p>What I wanted is something like this:</p>
<p>xxx.jpg prediction result:</p>
<ul>
<li>Cat = 0.901</li>
<li>Dog = 0.811</li>
<li>Deer = 0.991</li>
</ul>
<p>Because there might be Cat, Dog and Deer in the same picture in xxx.jpg.
Just like Clarifai (<a href="https://www.clarifai.com/" rel="nofollow noreferrer">https://www.clarifai.com/</a>) did.</p>
<p>I wonder: what is the basic concept to achieve that?</p>
<p>Thank you. </p>
|
<p>Take a look at the last layer you made. Since you say the sum of your predictions is always one, it sounds like you applied softmax (<a href="https://en.wikipedia.org/wiki/Softmax_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Softmax_function</a>).
If you remove this, you get independent activations for every object.</p>
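<p>As a hedged sketch of the difference: replacing softmax with independent sigmoid activations gives each tag its own score in [0, 1], without forcing the scores to compete.</p>
<pre><code>import tensorflow as tf

logits = tf.constant([[2.2, 1.5, 4.7]])  # hypothetical logits for Cat, Dog, Deer

softmax_scores = tf.nn.softmax(logits)   # scores compete and sum to 1
sigmoid_scores = tf.nn.sigmoid(logits)   # independent per-tag scores for multi-label tagging

with tf.Session() as sess:
    print(sess.run(softmax_scores))  # approx. [[0.07 0.04 0.89]]
    print(sess.run(sigmoid_scores))  # approx. [[0.90 0.82 0.99]]
</code></pre>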
<p>Please let me know if this helped you!</p>
|
image|image-processing|tensorflow|tagging
| 0
|
374,188
| 43,754,753
|
Resample a MultiIndexed Pandas DataFrame and apply different functions to columns
|
<p>In the case where you do not have MultiIndexed columns, you can do <code>df.resample(freq).agg(some_dict)</code> where some_dict is in the form <code>{column_name: function}</code> in order to apply a different function to each column (See demonstration below or see <a href="https://stackoverflow.com/questions/10020591/how-to-resample-a-dataframe-with-different-functions-applied-to-each-column">this question</a> or the <a href="http://www.jeffreytratner.com/example-pandas-docs/html-minor-doc-fixup-10-25/groupby.html#applying-different-functions-to-dataframe-columns" rel="nofollow noreferrer">docs</a>).</p>
<p>I'd like to do the same when I have MultiIndexed columns, but Pandas is doing the product between my columns and the dict.</p>
<p>Here's some dummy data to play with:</p>
<pre><code>In [1]:
import pandas as pd
import numpy as np
cols = pd.MultiIndex.from_tuples([('A', 'one'), ('A', 'two'),
('B', 'one'), ('B', 'two')])
ind = pd.DatetimeIndex(start='2017-01-01', freq='15Min', periods=20)
df = pd.DataFrame(np.random.randn(20,4), index=ind, columns=cols)
print(df.head())
Out[1]:
A B
one two one two
2017-01-01 00:00:00 -0.627329 0.756533 2.149236 -1.204808
2017-01-01 00:15:00 1.493381 1.320806 -1.692557 1.225271
2017-01-01 00:30:00 -0.572762 1.365679 -1.993464 1.118474
2017-01-01 00:45:00 -1.785283 -1.625370 -0.437199 0.776267
2017-01-01 01:00:00 -0.220307 1.308388 2.981333 -0.569586
</code></pre>
<p>Now, let's create an aggregate dictionary, that maps columns to a specific function:</p>
<pre><code>In [2]:
agg_dict = { col:(np.sum if col[1] == 'one' else np.mean) for col in df.columns }
agg_dict
Out[2]:
{('A', 'one'): <function numpy.core.fromnumeric.sum>,
('A', 'two'): <function numpy.core.fromnumeric.mean>,
('B', 'one'): <function numpy.core.fromnumeric.sum>,
('B', 'two'): <function numpy.core.fromnumeric.mean>}
</code></pre>
<p><strong>Here it doesn't work, it actually does the product between my actual columns and the agg_dict</strong>. I expected a shape of <code>(5,4)</code>, but I'm getting <code>(5,16)</code> (4 entries in dict, 4 columns in df):</p>
<pre><code>In [3]: df.resample('H').agg(agg_dict).shape
Out[3]: (5,16)
In [4]: print(df.resample('H').agg(agg_dict).columns.tolist())
Out[4]: [('A', 'one', 'A', 'one'), ('A', 'one', 'A', 'two'), ('A', 'one', 'B', 'one'), ('A', 'one', 'B', 'two'), ('A', 'two', 'A', 'one'), ('A', 'two', 'A', 'two'), ('A', 'two', 'B', 'one'), ('A', 'two', 'B', 'two'), ('B', 'one', 'A', 'one'), ('B', 'one', 'A', 'two'), ('B', 'one', 'B', 'one'), ('B', 'one', 'B', 'two'), ('B', 'two', 'A', 'one'), ('B', 'two', 'A', 'two'), ('B', 'two', 'B', 'one'), ('B', 'two', 'B', 'two')]
</code></pre>
<p><strong>How can I obtain similar behavior to the non-MultiIndexed case, that is end up with a <code>(5,4)</code>DataFrame here?</strong></p>
<hr>
<p>I can verify that it works using a non-MultiIndexed DataFrame.</p>
<pre><code>In [5]:
df2 = df.copy()
# Flatten columns
df2.columns = ['_'.join(x) for x in df.columns]
# Create similar agg_dict
agg_dict2 = { col:(np.sum if 'one' in col else np.mean) for col in df2.columns }
print(df2.resample('H').agg(agg_dict2))
Out[5]:
A_one A_two B_one B_two
2017-01-01 00:00:00 -1.491994 0.454412 -1.973983 0.478801
2017-01-01 01:00:00 -0.931024 0.465611 4.837972 -0.118674
2017-01-01 02:00:00 2.015399 0.203814 1.539722 -0.296053
2017-01-01 03:00:00 -0.569376 -0.382343 -2.244470 -0.038828
2017-01-01 04:00:00 -0.747308 -0.212246 2.025314 0.713344
</code></pre>
|
<p>I just came up with an idea that works using <code>apply</code> with a <code>lambda</code></p>
<pre><code>In [1]:
df.resample('H').apply(lambda x: agg_dict[x.name](x))
Out[1]:
A B
one two one two
2017-01-01 00:00:00 -2.211489 0.538068 1.379451 -0.619921
2017-01-01 01:00:00 1.524752 -0.195767 1.157592 0.137513
2017-01-01 02:00:00 -1.225071 0.020599 -1.372751 -0.245233
2017-01-01 03:00:00 2.922656 0.032864 3.118994 0.315109
2017-01-01 04:00:00 -1.438694 1.025585 1.915400 -0.536389
</code></pre>
<p><code>x.name</code> returns, e.g., <code>('A', 'one')</code>, so I use that to select the function in the dict, and pass <code>x</code> to it.</p>
|
python|python-3.x|pandas|numpy|dictionary
| 2
|
374,189
| 43,722,660
|
Extend lists within a pandas Series
|
<p>I have a pandas series that looks like this: </p>
<pre><code>group
A [1,0,5,4,6,...]
B [2,2,0,1,9,...]
C [3,5,2,0,6,...]
</code></pre>
<p>I have similar series that I would like to add to the existing series by extending each of the lists. How can I do this?</p>
<p>I tried</p>
<pre><code>for x in series:
    x.extend(series[series.index[x]])
</code></pre>
<p>but this isn't working.</p>
|
<p>Consider the series <code>s</code></p>
<pre><code>s = pd.Series([[1, 0], [2, 2], [4, 1]], list('ABC'), name='group')
s
A [1, 0]
B [2, 2]
C [4, 1]
Name: group, dtype: object
</code></pre>
<p>You can extend each list with a similar series simply by adding them. <code>pandas</code> will use the underlying objects' <code>__add__</code> method to combine the pairwise elements. In the case of a <code>list</code>, the <code>__add__</code> method concatenates the lists.</p>
<pre><code>s + s
A [1, 0, 1, 0]
B [2, 2, 2, 2]
C [4, 1, 4, 1]
Name: group, dtype: object
</code></pre>
<hr>
<p>However, this would not work if the elements were <code>numpy.array</code></p>
<pre><code>s = pd.Series([[1, 0], [2, 2], [4, 1]], list('ABC'), name='group')
s = s.apply(np.array)
</code></pre>
<p>In this case, I'd make sure they are lists</p>
<pre><code>s.apply(list) + s.apply(list)
A [1, 0, 1, 0]
B [2, 2, 2, 2]
C [4, 1, 4, 1]
Name: group, dtype: object
</code></pre>
|
python|pandas
| 5
|
374,190
| 43,691,241
|
Can not fetch an image from mnist tfrecord using tf-slim framework
|
<p>I used DatasetDataProvider to get an image from a tfrecord. I can 'print(image)', but when using 'sess.run(image)' to fetch it, the program seems to fall into an infinite loop. I don't know whether I have made a mistake.</p>
<p>print(image) gives</p>
<pre><code> Tensor("Reshape_3:0", shape=(28, 28, 1), dtype=uint8, device=/device:CPU:0)
</code></pre>
<p>Full code as below:</p>
<pre><code>from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf
from datasets import dataset_factory
from tensorflow.contrib import slim

dataset = dataset_factory.get_dataset(
    'mnist', 'train', '/home/zehao/Dataset/mnist')

with tf.device('/cpu:0'):
    provider = slim.dataset_data_provider.DatasetDataProvider(
        dataset,
        num_readers=1,
        common_queue_capacity=20 * 1,
        common_queue_min=10 * 1)
    [image, label] = provider.get(['image', 'label'])

print(image)

sess = tf.Session()
sess.run(image)
</code></pre>
|
<p>The <code>slim.dataset_data_provider</code> uses TensorFlow <a href="https://www.tensorflow.org/programmers_guide/threading_and_queues" rel="nofollow noreferrer">input queues</a> under the hood. Therefore, it's important to (after creating your session) add the following 2 lines to kick off the queue runners:</p>
<pre><code>coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
</code></pre>
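<p>Applied to the question's script, a minimal sketch of the complete pattern (reusing its <code>sess</code> and <code>image</code>, and including the shutdown that the two lines above omit) might look like:</p>
<pre><code>coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)

img = sess.run(image)   # now returns a (28, 28, 1) uint8 array instead of hanging
print(img.shape)

coord.request_stop()    # cleanly shut the queue threads down
coord.join(threads)
</code></pre>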
<p>For a full example, see: <a href="https://www.tensorflow.org/programmers_guide/reading_data" rel="nofollow noreferrer">https://www.tensorflow.org/programmers_guide/reading_data</a></p>
|
tensorflow
| 3
|
374,191
| 43,689,633
|
Enabling XLA JIT from tf.slim
|
<p>One way of turning on the Tensorflow XLA JIT is to use <code>tf.OptimizerOptions.ON_1</code> flag, by passing it to the TF session, similar to the following lines in python:</p>
<pre><code>config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
sess = tf.Session(config=config)
</code></pre>
<p>However, I'm not sure when/how to enable XLA JIT when I utilize the Slim library instead (I use <code>slim.learning.train</code> to start the training).</p>
|
<p><code>slim.learning.train</code> takes a <code>session_config</code> argument to which you can pass the <code>ConfigProto</code>.</p>
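<p>A minimal sketch, assuming your own <code>train_op</code> and log directory:</p>
<pre><code>import tensorflow as tf
from tensorflow.contrib import slim

config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

# train_op and the logdir are placeholders from your own training setup
slim.learning.train(train_op, logdir='/tmp/train_log', session_config=config)
</code></pre>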
|
python|tensorflow
| 3
|
374,192
| 43,651,802
|
How to Combine 2 integer columns in a dataframe and keep the type as integer itself in python
|
<p>I have 2 columns df[year] and df[month]. It has values ranging from 2000 to 2017 and month values 1 - 12.</p>
<p>How to combine these to another column which would contain the combined output.</p>
<p>Eg: </p>
<pre>Year Month Y0M
2000 1 200001
2000 2 200002
2000 3 200003
2000 10 200010
</pre>
<p>Note: there is a 0 added in between Year and Month in the Y0M column (only for single-digit months, not double-digit ones).</p>
<p>Currently I am only able to do this by converting to strings, but I want to retain them as numbers.</p>
|
<p>Maybe something like <code>df[year] * 100 + df[month]</code> would help.</p>
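<p>A quick check of that suggestion (with hypothetical data mirroring the question's table) shows the result stays an integer dtype, and the multiplication by 100 zero-pads single-digit months arithmetically:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Year': [2000, 2000, 2000, 2000],
                   'Month': [1, 2, 3, 10]})
df['Y0M'] = df['Year'] * 100 + df['Month']
print(df.dtypes['Y0M'])  # int64
print(df)
#    Year  Month     Y0M
# 0  2000      1  200001
# 1  2000      2  200002
# 2  2000      3  200003
# 3  2000     10  200010
</code></pre>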
|
python|pandas|dataframe
| 3
|
374,193
| 43,605,963
|
How to join 2 cell in 1 cell in header?
|
<p><a href="https://i.stack.imgur.com/LsDl2.png</img" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LsDl2.png</img" alt="enter image description here"></a></p>
<p>Above is my dataframe, and I wish to get this output for the header:</p>
<p><a href="https://i.stack.imgur.com/wYSlr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wYSlr.png" alt="enter image description here"></a></p>
<p>Does anyone have an idea on this?</p>
|
<p>You can use:</p>
<pre><code>a = ['201701', '', '201705', '', '201707']
b = ['PHI', 'Actual', 'Actual', 'PHI', 'Actual']
data = [[np.nan, np.nan, np.nan, 8, np.nan]]
df = pd.DataFrame(data, index=['ClassCold'], columns = pd.MultiIndex.from_arrays([a,b]))
print (df.columns)
MultiIndex(levels=[['', '201701', '201705', '201707'], ['Actual', 'PHI']],
labels=[[1, 0, 2, 0, 3], [1, 0, 0, 1, 0]])
print (df)
201701 201705 201707
PHI Actual Actual PHI Actual
ClassCold NaN NaN NaN 8 NaN
</code></pre>
<p>Get first level of <code>MultiIndex</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_level_values.html" rel="nofollow noreferrer"><code>get_level_values</code></a>, convert to <code>Series</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.to_series.html" rel="nofollow noreferrer"><code>to_series</code></a>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html" rel="nofollow noreferrer"><code>replace</code></a> empty strings (or space) to <code>NaN</code> and forward fill <code>NaN</code>s by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.ffill.html" rel="nofollow noreferrer"><code>ffill</code></a>.</p>
<p>Last create new <code>MultiIndex</code> <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.from_arrays.html" rel="nofollow noreferrer"><code>from_arrays</code></a>:</p>
<pre><code>a = df.columns.get_level_values(0).to_series().replace('',np.nan).ffill()
df.columns = pd.MultiIndex.from_arrays([a, df.columns.get_level_values(1)])
print (df)
201701 201705 201707
PHI Actual Actual PHI Actual
ClassCold NaN NaN NaN 8 NaN
print (df.columns)
MultiIndex(levels=[['201701', '201705', '201707'], ['Actual', 'PHI']],
labels=[[0, 0, 1, 1, 2], [1, 0, 0, 1, 0]])
</code></pre>
|
python|pandas
| 2
|
374,194
| 43,680,870
|
Feed CNN features to LSTM
|
<p>I want to build an end-to-end trainable model with the following proprieties:</p>
<ul>
<li>CNN to extract features from image</li>
<li>The features is reshaped to a matrix</li>
<li>Each row of this matrix is then fed to LSTM1</li>
<li>Each column of this matrix is then fed to LSTM2</li>
<li>The output of LSTM1 and LSTM2 are concatenated for the final output</li>
</ul>
<p>(it's more or less similar to Figure 2 in this paper: <a href="https://arxiv.org/pdf/1611.07890.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1611.07890.pdf</a>)</p>
<p>My problem now is after the reshape, how can I feed the values of feature matrix to LSTM with Keras or Tensorflow?</p>
<p>This is my code so far with VGG16 net (also a link to <a href="https://github.com/fchollet/keras/issues/6426" rel="nofollow noreferrer">Keras issues</a>):</p>
<pre><code># VGG16
model = Sequential()
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', input_shape=(224, 224, 3)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
# block 2
model.add(Conv2D(128, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
# block 3
model.add(Conv2D(256, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(256, (3, 3), activation='relu'))
model.add(Conv2D(256, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
# block 4
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
# block 5
model.add(Conv2D(512, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(Conv2D(512, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
# block 6
model.add(Flatten())
model.add(Dense(4096, activation='relu'))
model.add(Dense(4096, activation='relu'))
# reshape the feature 4096 = 64 * 64
model.add(Reshape((64, 64)))
# How to feed each row of this to LSTM?
# This is my first solution but it doesn’t look correct:
# model.add(LSTM(256, input_shape=(64, 1))) # 256 hidden units, sequence length = 64, feature dim = 1
</code></pre>
|
<p>Consider building your CNN model with Conv2D and MaxPool2D layers until you reach your Flatten layer, because the vectorized output from the Flatten layer will be your input data to the LSTM part of your structure.</p>
<p>So, build your CNN model like this:</p>
<pre><code>model_cnn = Sequential()
model_cnn.add(Conv2D...)
model_cnn.add(MaxPooling2D...)
...
model_cnn.add(Flatten())
</code></pre>
<p>Now, this is an interesting point: the current version of Keras has some incompatibilities with certain TensorFlow structures that will not let you stack all your layers in just one Sequential object.</p>
<p>So it's time to use the Keras Model object to complete your neural network with a trick:</p>
<pre><code>input_lay = Input(shape=(None, ?, ?, ?)) #dimensions of your data
time_distribute = TimeDistributed(Lambda(lambda x: model_cnn(x)))(input_lay) # keras.layers.Lambda is essential to make our trick work :)
lstm_lay = LSTM(?)(time_distribute)
output_lay = Dense(?, activation='?')(lstm_lay)
</code></pre>
<p>And finally, now it's time to put together our 2 separate models:</p>
<pre><code>model = Model(inputs=[input_lay], outputs=[output_lay])
model.compile(...)
</code></pre>
<p><strong>OBS: Note that you can substitute my model_cnn example by your VGG without including the top layers, once the vectorized output from the VGG Flatten layer will be the input of the LSTM model.</strong></p>
|
tensorflow|keras|lstm
| 0
|
374,195
| 43,802,038
|
Writing dataframe to csv column wise
|
<p>I need to write a dataframe to a csv file, for that I've done the following:</p>
<pre><code>..............................................
temp_df = pd.DataFrame(variance.values,df.columns.values)
temp_df.to_csv('var.csv')
...................................
</code></pre>
<p>This is working fine, but I still need one tiny thing: writing the csv file column-wise. Adding the columns parameter to to_csv doesn't really help: </p>
<pre><code>temp_df.to_csv('var'+tempname+'.csv',columns=df.columns.values )
</code></pre>
<p>delivers the following: </p>
<pre><code>KeyError: "None of [['Feature0', 'Feature1', 'Feature2', 'Feature3', 'Feature4', 'Feature5', 'Feature6', 'Feature7', 'Feature8', 'Feature9', 'Feature10', 'Feature11', 'Feature12', 'Feature13', 'Feature14', 'Feature15', 'Feature16', 'Feature17', 'Feature18', 'Feature19', 'Feature20', 'Feature21', 'Feature22', 'Feature23', 'Feature24', 'Feature25', 'Feature26', 'Feature27', 'Feature28', 'Feature29', 'Feature30', 'Feature31', 'Feature32', 'Feature33', 'Feature34', 'Feature35', 'Feature36', 'Feature37', 'Feature38', 'Feature39', 'Feature40', 'Feature41', 'Feature42', 'Feature43', 'Feature44', 'Feature45', 'Feature46', 'Feature47', 'Feature48', 'Feature49', 'Feature50', 'Feature51', 'Feature52', 'Feature53', 'Feature54', 'Feature55', 'Feature56', 'Feature57', 'Feature58', 'Feature59', 'Feature60', 'Feature61', 'Feature62', 'Feature63', 'Feature64', 'Feature65', 'Feature66', 'Feature67', 'Feature68', 'Feature69', 'Feature70', 'Feature71', 'Feature72', 'Feature73', 'Feature74', 'Feature75', 'Feature76', 'Feature77', 'Feature78', 'Feature79', 'Feature80', 'Feature81', 'Feature82', 'Feature83', 'Feature84', 'Feature85', 'Feature86', 'Feature87', 'Feature88', 'Feature89', 'Feature90', 'Feature91', 'Feature92', 'Feature93', 'Feature94', 'Feature95', 'Feature96', 'Feature97', 'Feature98', 'Feature99', 'Feature100', 'Feature101', 'Feature102', 'Feature103', 'Feature104', 'Feature105', 'Feature106', 'Feature107', 'Feature108', 'Feature109', 'Feature110', 'Feature111', 'Feature112', 'Feature113', 'Feature114', 'Feature115', 'Feature116', 'Feature117', 'Feature118', 'Feature119', 'Feature120', 'Feature121', 'Feature122', 'Feature123', 'Feature124', 'Feature125', 'Feature126', 'Feature127', 'Feature128', 'Feature129', 'Feature130', 'Feature131', 'Feature132', 'Feature133', 'Feature134', 'Feature135', 'Feature136', 'Feature137', 'Feature138', 'Feature139', 'Feature140', 'Feature141', 'Feature142', 'Feature143', 'Feature144', 'Feature145', 'Feature146', 'Feature147', 'Feature148', 'Feature149', 'Feature150', 'Feature151', 'Feature152', 'Feature153', 'Feature154', 'Feature155', 'Feature156', 'Feature157', 'Feature158', 'Feature159', 'Feature160', 'Feature161', 'Feature162', 'Feature163', 'Feature164', 'Feature165', 'Feature166', 'Feature167', 'Feature168', 'Feature169', 'Feature170', 'Feature171', 'Feature172', 'Feature173', 'Feature174', 'Feature175', 'Feature176', 'Feature177', 'Feature178', 'Feature179', 'Feature180', 'Feature181', 'Feature182', 'Feature183', 'Feature184', 'Feature185', 'Feature186', 'Feature187', 'Feature188', 'Feature189', 'Feature190', 'Feature191', 'Feature192', 'Feature193', 'Feature194', 'Feature195', 'Feature196', 'Feature197', 'Feature198', 'Feature199', 'Feature200', 'Feature201', 'Feature202', 'Feature203', 'Feature204', 'Feature205', 'Feature206', 'Feature207', 'Feature208', 'Feature209', 'Feature210', 'Feature211', 'Feature212', 'Feature213', 'Feature214', 'Feature215', 'Feature216', 'Feature217', 'Feature218', 'Feature219', 'Feature220', 'Feature221', 'Feature222', 'Feature223', 'Feature224', 'Feature225', 'Feature226', 'Feature227', 'Feature228', 'Feature229', 'Feature230', 'Feature231', 'Feature232', 'Feature233', 'Feature234', 'Feature235', 'Feature236', 'Feature237', 'Feature238', 'Feature239', 'Feature240', 'Feature241', 'Feature242', 'Feature243', 'Feature244', 'Feature245', 'Feature246', 'Feature247', 'Feature248', 'Feature249', 'Feature250', 'Feature251', 'Feature252', 'Feature253', 'Feature254', 'Feature255', 'Feature256', 'Feature257', 'Feature258', 
'Feature259', 'Feature260', 'Feature261', 'Feature262', 'Feature263', 'Feature264', 'Feature265', 'Feature266', 'Feature267', 'Feature268', 'Feature269', 'Feature270', 'Feature271', 'Feature272', 'Feature273', 'Feature274', 'Feature275', 'Feature276', 'Feature277', 'Feature278', 'Feature279', 'Feature280', 'Feature281', 'Feature282', 'Feature283', 'Feature284', 'Feature285', 'Feature286', 'Feature287', 'Feature288', 'Feature289', 'Feature290', 'Feature291', 'Feature292', 'Feature293', 'Feature294', 'Feature295', 'Feature296', 'Feature297', 'Feature298', 'Feature299', 'Feature300', 'Feature301', 'Feature302', 'Feature303', 'Feature304', 'Feature305', 'Feature306', 'Feature307', 'Feature308', 'Feature309', 'Feature310', 'Feature311', 'Feature312', 'Feature313', 'Feature314', 'Feature315', 'Feature316', 'Feature317', 'Feature318', 'Feature319', 'Feature320', 'Feature321', 'Feature322', 'Feature323', 'Feature324', 'Feature325', 'Feature326', 'Feature327', 'Feature328', 'Feature329', 'Feature330', 'Feature331', 'Feature332', 'Feature333', 'Feature334', 'Feature335', 'Feature336', 'Feature337', 'Feature338', 'Feature339', 'Feature340', 'Feature341', 'Feature342', 'Feature343', 'Feature344', 'Feature345', 'Feature346', 'Feature347', 'Feature348', 'Feature349', 'Feature350', 'Feature351', 'Feature352', 'Feature353', 'Feature354', 'Feature355', 'Feature356', 'Feature357', 'Feature358', 'Feature359', 'Feature360', 'Feature361', 'Feature362', 'Feature363', 'Feature364', 'Feature365', 'Feature366', 'Feature367', 'Feature368', 'Feature369', 'Feature370', 'Feature371', 'Feature372', 'Feature373', 'Feature374', 'Feature375', 'Feature376', 'Feature377', 'Feature378', 'Feature379', 'Feature380', 'Feature381', 'Feature382', 'Feature383', 'Feature384', 'Feature385', 'Feature386', 'Feature387', 'Feature388', 'Feature389', 'Feature390', 'Feature391', 'Feature392', 'Feature393', 'Feature394', 'Feature395', 'Feature396', 'Feature397', 'Feature398', 'Feature399', 'Feature400', 'Feature401', 'Feature402', 'Feature403', 'Feature404', 'Feature405', 'Feature406', 'Feature407', 'Feature408', 'Feature409', 'Feature410', 'Feature411', 'Feature412', 'Feature413', 'Feature414', 'Feature415', 'Feature416', 'Feature417', 'Feature418', 'Feature419', 'Feature420', 'Feature421', 'Feature422', 'Feature423', 'Feature424', 'Feature425', 'Feature426', 'Feature427', 'Feature428', 'Feature429', 'Feature430', 'Feature431', 'Feature432', 'Feature435', 'Feature436', 'Feature437', 'Feature438', 'Feature439', 'Feature440', 'Feature441', 'Feature442', 'Feature443', 'Feature444', 'Feature445', 'Feature446', 'Feature447', 'Feature448', 'Feature449', 'Feature450', 'Feature451', 'Feature452', 'Feature453', 'Feature454', 'Feature455', 'Feature456', 'Feature457', 'Feature458']] are in the [columns]"
</code></pre>
<p><strong>UPDATE</strong></p>
<p>The result now looks like:</p>
<pre><code> 1.Column 2.column
Feature0 26657.97061
Feature1 40253.50694
Feature2 3221147446
Feature3 0.027772714
Feature4 5.959388786
Feature5 266.56
Feature6 734.2481633
Feature7 307.363629
Feature8 0.000566779
Feature9 0.000520574
...........
</code></pre>
<p>What I want to have is:</p>
<pre><code>1.row Feature0 Feature1 Feature2 Feature3 Feature5 ...........
2.row 26657.97061 40253.50694 3221147446 0.027772714 5.959388786 ......
</code></pre>
|
<pre><code>In [129]: df
Out[129]:
col1 col2
0 Feature0 2.665797e+04
1 Feature1 4.025351e+04
2 Feature2 3.221147e+09
3 Feature3 2.777271e-02
4 Feature4 5.959389e+00
5 Feature5 2.665600e+02
6 Feature6 7.342482e+02
7 Feature7 3.073636e+02
8 Feature8 5.667790e-04
9 Feature9 5.205740e-04
In [130]: df.T
Out[130]:
0 1 2 3 4 5 6 7 8 9
col1 Feature0 Feature1 Feature2 Feature3 Feature4 Feature5 Feature6 Feature7 Feature8 Feature9
col2 26658 40253.5 3.22115e+09 0.0277727 5.95939 266.56 734.248 307.364 0.000566779 0.000520574
In [131]: df.T.to_csv('d:/temp/out.csv', header=None)
</code></pre>
<p>Result (<code>d:/temp/out.csv</code>):</p>
<pre><code>col1,Feature0,Feature1,Feature2,Feature3,Feature4,Feature5,Feature6,Feature7,Feature8,Feature9
col2,26657.97061,40253.50694,3221147446.0,0.027772714,5.959388786,266.56,734.2481633,307.363629,0.000566779,0.000520574
</code></pre>
|
python|csv|pandas
| 2
|
374,196
| 43,551,015
|
stop gradient with n/a label in tensorflow
|
<p>I'm implementing a Convolutional Neural Network in TensorFlow with Python.
I'm in the following scenario: I've got a tensor of labels <strong>y</strong> (batch labels) like this:</p>
<pre><code>y = [[0,1,0]
[0,0,1]
[1,0,0]]
</code></pre>
<p>where each row is a <strong>one-hot</strong> vector that represents a label related to the corresponding example. Now, in training, I want to stop the <strong>loss gradient</strong> (set it to 0) for the examples with that label (the third one):</p>
<pre><code> [1,0,0]
</code></pre>
<p>which represents the n/a label,
while the losses of the other examples in the batch are still computed.
For my loss computation I use a method like this:</p>
<pre><code>self.y_loss = kl_divergence(self.pred_y, self.y)
</code></pre>
<p>I found this <a href="https://www.tensorflow.org/api_docs/python/tf/stop_gradient" rel="nofollow noreferrer">function</a> that stops gradients, but how can I apply it conditionally to the batch elements?</p>
|
<p>If you don't want some samples to contribute to the gradients you could just avoid feeding them to the network during training at all. Simply remove the samples with that label from your training set.</p>
<p>Alternatively, since the loss is computed by summing over the KL-divergences of the individual samples, you could multiply each sample's KL-divergence by 1 if the sample should be taken into account and by 0 otherwise, before summing over them.
You can get the vector of factors you need by subtracting the first column of the label tensor from 1: <code>1 - y[:,0]</code></p>
<p>For the <code>kl_divergence</code> function from the <a href="https://stackoverflow.com/a/43298483/118173">answer to your previous question</a> it might look like this:</p>
<pre><code>def kl_divergence(p, q):
    return tf.reduce_sum(tf.reduce_sum(p * tf.log(p/q), axis=1) * (1 - p[:,0]))
</code></pre>
<p>where <code>p</code> is the ground-truth tensor and <code>q</code> is the predictions tensor.</p>
|
python|tensorflow|conv-neural-network
| 2
|
374,197
| 43,693,969
|
Flatten xml into pandas dataframe, deeply nested
|
<p>I'm thinking this might be very easy, and I simply not figured it out yet.</p>
<p>The objective is to 'flatten' into a pandas DataFrame.</p>
<p><a href="https://www.gleif.org/lei-files/20170427/GLEIF/20170427-GLEIF-concatenated-file.zip" rel="nofollow noreferrer">Here is one xml</a> (A direct download of a 60~ MB zip file, which extracted inflates to around 800~ MB).</p>
<p>I have tried the following 2 approaches:</p>
<p>The first one, taken from <a href="http://www.austintaylor.io/lxml/python/pandas/xml/dataframe/2016/07/08/convert-xml-to-pandas-dataframe/" rel="nofollow noreferrer">here</a>, has been modified a little bit:</p>
<pre><code>def xml2dfa(xml_data):
    tree = ET.parse(xml_data)
    root = tree.getroot()[1]  # Modification here
    all_records = []
    headers = []
    for i, child in enumerate(root):
        record = []
        for subchild in child:
            record.append(subchild.text)
            if subchild.tag not in headers:
                headers.append(subchild.tag)
        all_records.append(record)
    return pd.DataFrame(all_records, columns=headers)
</code></pre>
<p>Line 3 (<code>root</code>) was modified to get the element <code>LEIRecords</code> rather than <code>LEIHeader</code></p>
<p>The previous results in a DataFrame of correct number of rows but only 4 columns:</p>
<pre><code>array(['{http://www.leiroc.org/data/schema/leidata/2014}LEI',
'{http://www.leiroc.org/data/schema/leidata/2014}Entity',
'{http://www.leiroc.org/data/schema/leidata/2014}Registration',
'{http://www.leiroc.org/data/schema/leidata/2014}Extension'], dtype=object)
</code></pre>
<p>From columns 2 to 4 there are still nested children with information that could be extracted, but all of the information is lost, as the unique value of any column is an array that looks like this:</p>
<pre><code>array(['\n '], dtype=object)
</code></pre>
<p>The second approach I have been running for at least 16 hours, with no result, so something is not right. I took that from <a href="http://rexdouglass.com/parsing-xml-files-to-a-flat-dataframe/" rel="nofollow noreferrer">here</a>.</p>
<p>The expected output would be a DataFrame that is completely flat and for whatever information is not there (because a particular tree branch did not go that far, or was not populated, filled with <code>NaN</code> (<a href="https://stackoverflow.com/questions/27503851/read-hierarchical-tree-like-xml-into-a-pandas-dataframe-preserving-hierarchy">as in this question</a>)</p>
|
<p>I was facing a similar problem. I had xml from ebscohost about research articles returned from a search.</p>
<p>Using xmltodict <a href="https://github.com/martinblech/xmltodict" rel="nofollow noreferrer">https://github.com/martinblech/xmltodict</a></p>
<pre><code>import xmltodict

with open(filename) as fd:
    doc = xmltodict.parse(fd.read())
</code></pre>
<p>This converted the xml to a nested dict</p>
<p>Using the example code from the stack overflow link,</p>
<pre><code>def flatten_dict(dd, separator='_', prefix=''):
    return {prefix + separator + k if prefix else k: v
            for kk, vv in dd.items()
            for k, v in flatten_dict(vv, separator, kk).items()
            } if isinstance(dd, dict) else {prefix: dd}
</code></pre>
<p>I flattened the dict at the level of individual articles (two levels down in my case: <code>doc['records']['rec']</code>):</p>
<pre><code>flattened_doc = [flatten_dict(x) for x in doc['records']['rec']]
</code></pre>
<p>and then made a DataFrame from the resulting list:</p>
<pre><code>data1 = pd.DataFrame(flattened_doc)
</code></pre>
<p>Some of the columns still contain dicts, but at a level I don't care about: as written, <code>flatten_dict</code> recurses into nested dicts but does not descend into lists, so lists of dicts are left unflattened.</p>
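<p>For reference, the steps above can be combined into one helper. This is only a sketch: the <code>record_path</code> of <code>('records', 'rec')</code> matched my EBSCOhost file, so adjust it to wherever the list of per-record dicts lives in your XML:</p>
<pre><code>import pandas as pd
import xmltodict

def xml_to_flat_df(filename, record_path=('records', 'rec')):
    with open(filename) as fd:
        doc = xmltodict.parse(fd.read())
    records = doc
    for key in record_path:   # walk down to the list of records
        records = records[key]
    return pd.DataFrame([flatten_dict(r) for r in records])
</code></pre>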
|
python|xml|pandas
| 4
|
374,198
| 43,791,970
|
Pandas: assigning columns with multiple conditions and date thresholds
|
<p>Edited:</p>
<p>I have a financial portfolio in a pandas dataframe df, where the index is the date and I have multiple financial stocks per date.</p>
<p>Eg dataframe:</p>
<pre><code>Date      Stock   Weight  Percentile  Final_weight
1/1/2000  Apple   0.010   0.75        0.010
1/1/2000  IBM     0.011   0.4         0
1/1/2000  Google  0.012   0.45        0
1/1/2000  Nokia   0.022   0.81        0.022
2/1/2000  Apple   0.014   0.56        0
2/1/2000  Google  0.015   0.45        0
2/1/2000  Nokia   0.016   0.55        0
3/1/2000  Apple   0.020   0.52        0
3/1/2000  Google  0.030   0.51        0
3/1/2000  Nokia   0.040   0.47        0
</code></pre>
<p>I created <code>Final_weight</code> by assigning the value of <code>Weight</code> whenever <code>Percentile</code> is greater than <code>0.7</code>.</p>
<p>Now I want this to be a bit more sophisticated: I still want <code>Weight</code> to be assigned to <code>Final_weight</code> when <code>Percentile > 0.7</code>; however, after that date (at any point in the future), rather than dropping to 0 as soon as a stock's <code>Percentile</code> is no longer <code>> 0.7</code>, the stock should keep a weight as long as its <code>Percentile</code> stays above <code>0.5</code> (i.e. the position is held for longer than just one day).</p>
<p>Then, if the stock's <code>Percentile</code> subsequently drops below <code>0.5</code>, <code>Final_weight</code> becomes 0 again.</p>
<p>Eg modified dataframe from above:</p>
<pre><code>Date      Stock   Weight  Percentile  Final_weight
1/1/2000  Apple   0.010   0.75        0.010
1/1/2000  IBM     0.011   0.4         0
1/1/2000  Google  0.012   0.45        0
1/1/2000  Nokia   0.022   0.81        0.022
2/1/2000  Apple   0.014   0.56        0.014
2/1/2000  Google  0.015   0.45        0
2/1/2000  Nokia   0.016   0.55        0.016
3/1/2000  Apple   0.020   0.52        0.020
3/1/2000  Google  0.030   0.51        0
3/1/2000  Nokia   0.040   0.47        0
</code></pre>
<p>Every day the portfolio is different; it does not always contain the same stocks as the day before.</p>
|
<p>This solution is more explicit and less pandas-esque, but it involves only a single pass through all rows without creating lots of temporary columns, and is therefore possibly faster. It needs an additional state variable, which I wrapped in a closure to avoid having to write a class.</p>
<pre><code>def closure():
    cur_weight = {}  # last weight assigned to each stock so far

    def func(x):
        if x["Percentile"] > 0.7:
            next_weight = x["Weight"]   # enter (or refresh) the position
        elif x["Percentile"] < 0.5:
            next_weight = 0             # exit the position
        else:
            # between 0.5 and 0.7: keep the position only if already held
            next_weight = x["Weight"] if cur_weight.get(x["Stock"], 0) > 0 else 0
        cur_weight[x["Stock"]] = next_weight
        return next_weight
    return func

df["FinalWeight"] = df.apply(closure(), axis=1)
</code></pre>
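<p>Note that <code>apply(..., axis=1)</code> visits rows top to bottom, so the frame must be sorted by date for the carry-forward logic to be correct. If you prefer explicit state over a closure, an equivalent class-based sketch:</p>
<pre><code>class WeightTracker:
    def __init__(self):
        self.cur_weight = {}  # last weight assigned to each stock

    def __call__(self, x):
        if x["Percentile"] > 0.7:
            next_weight = x["Weight"]   # enter (or refresh) the position
        elif x["Percentile"] < 0.5:
            next_weight = 0             # exit the position
        else:
            # between 0.5 and 0.7: keep the position only if already held
            next_weight = x["Weight"] if self.cur_weight.get(x["Stock"], 0) > 0 else 0
        self.cur_weight[x["Stock"]] = next_weight
        return next_weight

df["FinalWeight"] = df.apply(WeightTracker(), axis=1)
</code></pre>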
|
python|pandas|dataframe|finance|portfolio
| 6
|
374,199
| 43,574,675
|
how to perform the equivalent of a correlated subquery in pandas
|
<p>I have a CSV file from the Kaggle Titanic competition, whose record format is described by the following columns:
<code>PassengerId, Survived, Pclass, Name, Sex, Age, SibSp, Parch, Ticket, Fare, Cabin, Embarked</code>.
I want to analyze the data in this file and check whether passengers traveling in a group had a better survival rate. For this I assume that the value of <code>Ticket</code> is the same for all passengers in a group.</p>
<p>I loaded the CSV in MS Access, and executed the following query to get the desired result set:</p>
<pre><code>SELECT a.Ticket, a.PassengerId, a.Survived
FROM train a
WHERE 1 < (SELECT COUNT(*) FROM train b WHERE b.Ticket = a.Ticket)
ORDER BY a.Ticket
</code></pre>
<p>I have not been able to reproduce the same result set in pandas without writing a loop.</p>
|
<p>Let's see if this matches:</p>
<pre><code>df.groupby(['Ticket']).filter(lambda x: x.Ticket.count()>1)[['Ticket','PassengerId','Survived']]
</code></pre>
<p>Or with Jezrael's suggestion:</p>
<pre><code>df.groupby(['Ticket']).filter(lambda x: len(x)>1)[['Ticket','PassengerId','Survived']]
</code></pre>
<p>I am using <code>groupby</code> on <code>Ticket</code>, then <code>filter</code> to keep only the records whose ticket group contains more than one row.</p>
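<p>On a large frame, a boolean mask built with <code>transform</code> is a common faster alternative (a sketch; the <code>sort_values</code> mirrors the SQL <code>ORDER BY a.Ticket</code>):</p>
<pre><code># transform('size') broadcasts each group's row count back onto the
# original index, giving a per-row boolean mask
mask = df.groupby('Ticket')['Ticket'].transform('size') > 1
result = df.loc[mask, ['Ticket', 'PassengerId', 'Survived']].sort_values('Ticket')
</code></pre>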
|
python|pandas
| 3
|