| Unnamed: 0 | id | title | question | answer | tags | score |
|---|---|---|---|---|---|---|
376,700
| 46,880,589
|
Reading image files into Tensorflow with tf.WholeFileReader()
|
<p>I'm trying to read a directory of image files into TensorFlow and I'm having a little trouble. When I run the script, the shell just hangs (I even waited 10 minutes for an output) and the only output I get is from the <code>print(len(os.listdir()))</code> line.</p>
<p>My attempts stem from this guide:</p>
<p><a href="https://www.tensorflow.org/api_guides/python/reading_data" rel="nofollow noreferrer">https://www.tensorflow.org/api_guides/python/reading_data</a></p>
<pre><code>import tensorflow as tf
import os
os.chdir(r'C:\Users\Moondra\Desktop\Testing')
print(len(os.listdir())) # only 2 in this case
file_names =tf.constant(os.listdir(), tf.string)
file_tensors =tf.train.string_input_producer(string_tensor = file_names)
reader =tf.WholeFileReader()
key, value = reader.read(file_tensors)
##features = tf.parse_single_example(value)
#records = reader.num_records_produced()
with tf.Session() as sess:
    values = sess.run(value)
    ##print(records_num)
    print(type(values))
</code></pre>
<p>The reader is supposed to read images one at a time, so I'm assuming
value will hold the image data of the current image.
Despite this, the shell just hangs with no output.</p>
<p>Thank you, and -- just in case @mrry is available.</p>
|
<p><code>tf.train.string_input_producer</code> adds a <code>QueueRunner</code> to the current Graph and you need to manually start it. Otherwise it is just hanging and no output is produced.</p>
<pre><code>with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    values = sess.run(value)
    # ....
</code></pre>
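<p>One detail the <code># ....</code> above elides: once you are done reading, the queue-runner threads should be stopped, or the script may not exit cleanly. A minimal sketch of the shutdown (standard <code>Coordinator</code> usage, still inside the <code>with</code> block):</p>
<pre><code>    # after the last sess.run(...)
    coord.request_stop()   # ask the queue-runner threads to stop
    coord.join(threads)    # wait for them to finish
</code></pre>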
|
python-3.x|file-io|tensorflow
| 1
|
376,701
| 46,757,220
|
Can anyone explain this list comprehension?
|
<pre><code>def unpack_dict(matrix, map_index_to_word):
    table = sorted(map_index_to_word, key=map_index_to_word.get)
    data = matrix.data
    indices = matrix.indices
    indptr = matrix.indptr
    num_doc = matrix.shape[0]
    return [{k: v for k, v in zip([table[word_id] for word_id in
                                   indices[indptr[i]:indptr[i+1]]],
                                  data[indptr[i]:indptr[i+1]].tolist())}
            for i in range(num_doc)]

wiki['tf_idf'] = unpack_dict(tf_idf, map_index_to_word)
</code></pre>
<p><a href="https://i.stack.imgur.com/skkAO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/skkAO.png" alt="enter image description here"></a></p>
<p><code>map_index_to_word</code> is a dictionary of word:index pairs for a few thousand words.
<code>tf_idf</code> is a sparse TF-IDF matrix.
The DataFrame <code>wiki</code> is displayed in the screenshot above.</p>
|
<pre><code>[{k: v for k, v in zip([table[word_id] for word_id in indices[indptr[i]:indptr[i + 1]]],data[indptr[i]:indptr[i + 1]].tolist())} for i in range(num_doc)]
</code></pre>
<p>is the same as:</p>
<pre><code>final_list = []
for i in range(num_doc):
    new_list = []
    for word_id in indices[indptr[i]:indptr[i + 1]]:
        new_list.append(table[word_id])
    new_dict = {}
    for k, v in zip(new_list, data[indptr[i]:indptr[i + 1]].tolist()):
        new_dict[k] = v
    final_list.append(new_dict)
</code></pre>
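<p>A tiny self-contained sketch (the toy matrix and vocabulary below are made up) showing what this loop produces on a small CSR matrix:</p>
<pre><code>import scipy.sparse as sp

matrix = sp.csr_matrix([[1.0, 0.0, 2.0],
                        [0.0, 3.0, 0.0]])
table = ['apple', 'banana', 'cherry']  # table[word_id] -> word

data, indices, indptr = matrix.data, matrix.indices, matrix.indptr
result = [{table[word_id]: v
           for word_id, v in zip(indices[indptr[i]:indptr[i + 1]],
                                 data[indptr[i]:indptr[i + 1]].tolist())}
          for i in range(matrix.shape[0])]
print(result)  # [{'apple': 1.0, 'cherry': 2.0}, {'banana': 3.0}]
</code></pre>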
|
python|numpy|machine-learning
| 3
|
376,702
| 46,728,376
|
Selecting columns by column NAME dtype
|
<pre><code>import pandas as pd
import numpy as np
cols = ['string',pd.Timestamp('2017-10-13'), 'anotherstring', pd.Timestamp('2017-10-14')]
pd.DataFrame(np.random.rand(5,4), columns=cols)
</code></pre>
<p>How can I get back just the 2nd and 4th column (whose names have dtype <code>datetime.datetime</code>)? The types of the column contents are exactly the same, so <code>select_dtypes</code> doesn't help.</p>
|
<p>Use <code>type</code> with <code>map</code>:</p>
<pre><code>df = df.loc[:, df.columns.map(type) == pd.Timestamp]
print (df)
2017-10-13 00:00:00 2017-10-14 00:00:00
0 0.894932 0.502015
1 0.080334 0.155712
2 0.600152 0.206344
3 0.008913 0.919534
4 0.280229 0.951434
</code></pre>
<p>Details:</p>
<pre><code>print (df.columns.map(type))
Index([ <class 'str'>,
<class 'pandas._libs.tslib.Timestamp'>,
<class 'str'>,
        <class 'pandas._libs.tslib.Timestamp'>], dtype='object')
print (df.columns.map(type) == pd.Timestamp)
[False True False True]
</code></pre>
<p>Alternative solution:</p>
<pre><code>df1 = df.loc[:, [isinstance(i, pd.Timestamp) for i in df.columns]]
print (df1)
2017-10-13 00:00:00 2017-10-14 00:00:00
0 0.818283 0.128299
1 0.570288 0.458400
2 0.857426 0.395963
3 0.595765 0.306861
4 0.196899 0.438231
</code></pre>
|
python|pandas
| 2
|
376,703
| 46,911,725
|
Find duplicates for one column with the last row group by one column in Pandas Python
|
<p>I have 4 columns in my dataframe: <code>user</code>, <code>abcisse</code>, <code>ordonnee</code>, <code>temps</code>.</p>
<p>I want to find, for each user, the rows that duplicate the user's last row, where a duplicate means two rows with the same abcisse and ordonnee.</p>
<p>I was thinking of using the df.duplicated function, but I don't know how to combine it with groupby.</p>
<pre><code>entry = pd.DataFrame([[1,0,0,1],[1,3,-2,2],[1,2,1,3],[1,3,1,4],[1,3,-2,5],[2,1,3,1],[2,1,3,2]],columns=['user','abcisse','ordonnee','temps'])
output = pd.DataFrame([[1,0,0,1],[1,2,1,3],[1,3,1,4],[1,3,-2,5],[2,1,3,2]],columns=['user','abcisse','ordonnee','temps'])
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>drop_duplicates</code></a>:</p>
<pre><code>print (entry.drop_duplicates(['user', 'abcisse', 'ordonnee'], keep='last'))
user abcisse ordonnee temps
0 1 0 0 1
2 1 2 1 3
3 1 3 1 4
4 1 3 -2 5
6 2 1 3 2
</code></pre>
|
python|pandas|dataframe
| 0
|
376,704
| 46,885,454
|
How to create a DataFrame with the word2vec vectors as data, and the terms as row labels?
|
<p>I tried to follow this documentation:
nbviewer.jupyter.org/github/skipgram/modern-nlp-in-python/blob/master/executable/Modern_NLP_in_Python.ipynb
Where I have the following code snippet:</p>
<pre><code>ordered_vocab = [(term, voc.index, voc.count)
for term, voc in food2vec.vocab.iteritems()]
ordered_vocab = sorted(ordered_vocab, key=lambda (term, index, count): -count)
ordered_terms, term_indices, term_counts = zip(*ordered_vocab)
word_vectors = pd.DataFrame(food2vec.syn0norm[term_indices, :],
                            index=ordered_terms)
</code></pre>
<p>To get it to run I have changed it to the following:</p>
<pre><code>ordered_vocab = [(term, voc.index, voc.count)
for term, voc in word2vecda.wv.vocab.items()]
ordered_vocab = sorted(ordered_vocab)
ordered_terms, term_indices, term_counts = zip(*ordered_vocab)
word_vectorsda = pd.DataFrame(word2vecda.wv.syn0norm[term_indices,],index=ordered_terms)
word_vectorsda [:20]
</code></pre>
<p>But the last line before I print the DataFrame gives me an error I cannot get my head around. It keeps returning that a NoneType object is not subscriptable on this line. To me, it looks like <code>term_indices</code> is what triggers it, but I do not get why.</p>
<pre><code> TypeError: 'NoneType' object is not subscriptable
</code></pre>
<p>Can anyone help me with this? Any input is most welcome.
Best, Niels</p>
|
<p>Use the following code:</p>
<pre><code>ordered_vocab = [(term, voc.index, voc.count) for term, voc in model.wv.vocab.items()]
ordered_vocab = sorted(ordered_vocab, key=lambda k: k[2])
ordered_terms, term_indices, term_counts = zip(*ordered_vocab)
word_vectors = pd.DataFrame(model.wv.syn0[term_indices, :], index=ordered_terms)
</code></pre>
<p>Replace <code>model</code> with <code>food2vec</code>.<br>
Working on <code>python 3.6.1</code>, <code>gensim '3.0.0'</code></p>
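<p>For what it's worth, a likely cause of the original <code>NoneType</code> error: in gensim, <code>wv.syn0norm</code> stays <code>None</code> until the normalized vectors are computed, so indexing it fails. If the normalized vectors are actually wanted, something like this should populate them first (a hedged sketch reusing the question's <code>word2vecda</code> model):</p>
<pre><code>word2vecda.wv.init_sims()  # computes the L2-normalized vectors into syn0norm
word_vectorsda = pd.DataFrame(word2vecda.wv.syn0norm[term_indices, :],
                              index=ordered_terms)
</code></pre>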
|
python-3.x|pandas|word2vec|gensim
| 3
|
376,705
| 47,024,584
|
Pandas: reading from multiple files with different variable ordering
|
<p>I have many files that I'd like to read into a single pandas data frame. An example file might look like this:</p>
<pre><code>variable_1_name
variable_2_name
...
variable_n_name
0.0 0.5 0.3 ... 0.8
...
1.0 4.5 6.5 ... 1.0
</code></pre>
<p>So, the file has a list of variable names (one per line) at the top of the file and then the data is presented in a space delimited table with <code>n</code> values per row.</p>
<p>There are a couple of problems:</p>
<p>1) There is a different number of variables in each file. Not all variables are present in each file.</p>
<p>2) The variables may be in different order between files.</p>
<p>How can I read all this data into a pandas data frame, while matching up the correct data between files?</p>
|
<p>Extending <a href="https://stackoverflow.com/users/8802367/pal">Pal</a>'s answer: the best way is to read data out of csv files. So why not convert the files to csv files (or even better, csv file-like objects living in memory) and let <code>pandas</code> do the dirty work?</p>
<pre><code>try:
    import io  # python3
except ImportError:
    import cStringIO as io  # python2

import pandas as pd

DELIMITER = ','

def pd_read_chunk(file):
    """
    Reads file contents, converts it to a csv file in memory
    and imports a dataframe from it.
    """
    with open(file) as f:
        content = [line.strip() for line in f.readlines()]
    cols = [line for line in content if ' ' not in line]
    vals = [line for line in content if ' ' in line]
    csv_header = DELIMITER.join(cols)
    csv_body = '\n'.join(DELIMITER.join(line.split()) for line in vals)
    stream = io.StringIO(csv_header + '\n' + csv_body)
    return pd.read_csv(stream, sep=DELIMITER)

if __name__ == '__main__':
    files = ('file1', 'file2', )
    # read dataframe from each file and concat all resulting dataframes
    df_chunks = [pd_read_chunk(file) for file in files]
    df = pd.concat(df_chunks)
    print(df)
</code></pre>
<p>If you try out the sample files from <a href="https://stackoverflow.com/users/996205/thom-ives">Thom Ives</a>' answer, the script will return</p>
<pre><code> A B C D E
0 1.0 2.0 3.0 NaN NaN
1 1.1 2.1 3.1 NaN NaN
0 NaN 2.2 NaN 4.2 5.2
1 NaN 2.3 NaN 4.3 5.3
</code></pre>
<hr>
<p><strong>Edit</strong>: Actually, we don't need the comma delimiter - we can reuse space as the delimiter so we can compact and speed up the conversion at the same time. Here is an updated version of the one above that has less code and runs faster:</p>
<pre><code>try:
    import io  # python3
except ImportError:
    import cStringIO as io  # python2

import pandas as pd

def pd_read_chunk(file):
    """
    Reads file contents, converts it to a csv file in memory
    and imports a dataframe from it.
    """
    with open(file) as f:
        content = [line.strip() for line in f.readlines()]
    cols = [line for line in content if ' ' not in line]
    vals = [line for line in content if ' ' in line]
    csv_header = ' '.join(cols)
    csv_lines = [csv_header] + vals
    stream = io.StringIO('\n'.join(csv_lines))
    return pd.read_csv(stream, sep=' ')

if __name__ == '__main__':
    files = ('file1', 'file2', )
    # read dataframe from each file and concat all resulting dataframes
    df_chunks = [pd_read_chunk(file) for file in files]
    df = pd.concat(df_chunks)
    print(df)
</code></pre>
|
python|pandas
| 2
|
376,706
| 46,966,865
|
Is there any way to load data in parallel when gpu is computing?
|
<p>I am using Keras with TensorFlow for deep learning. I work with a slightly large dataset (images), so I can't load the whole dataset at once; instead, I load batches of 8000 images and use model.fit() for training (code snippet below).</p>
<pre><code>for epochs in range(50):  # number of epochs
    for i in range(num_batches):  # 5 batches with 8000 images each
        #### load the data here (train, val)
        history_train = model.fit(train_image, train_gt, batch_size=16, epochs=1,
                                  verbose=1, shuffle=False, validation_data=(val_image, val_gt))
        if history_train.history['val_loss'][0] < total_val_loss:
            model.save_weights(weights_file)
            total_val_loss = history_train.history['val_loss'][0]
</code></pre>
<p>Certainly not an efficient way. It takes a lot of time to load a batch (of 8000 images), more than the time it takes to learn (the running time of each epoch). Is there any better way in which I could do this, or any other way to optimize it? Can I load data (for the next epoch) in parallel while the GPU is computing, so I can save on the loading time?</p>
|
<p>Yes, you can do this by using <code>fit_generator</code> instead of <code>fit</code> and passing the arguments <code>use_multiprocessing=True</code> and <code>workers=n</code> (where n is the number of worker threads). For loading your dataset, you will need to make a generator method that yields each batch. It should look something like this (I'm assuming that your input dataset is a 4D array of shape (num_images, height, width, channels), so adapt as necessary) :</p>
<pre><code>def generator(dataset_x, dataset_y):
    while True:
        for i in range(num_batches):
            yield dataset_x[i*8000:(i+1)*8000, :, :, :], dataset_y[i*8000:(i+1)*8000, :]
</code></pre>
<p>Note that this will return batches of size 8000 -- you might need to cut it down more with another loop that returns batch sizes of, say, 16.
And for training the model:</p>
<pre><code># note: fit_generator needs a generator *instance*, so call generator(...);
# val_generator is assumed to be a second instance built the same way for the validation data
history_train = model.fit_generator(generator=generator(train_image, train_gt),
                                    steps_per_epoch=5, epochs=50,
                                    use_multiprocessing=True, workers=16,
                                    validation_data=val_generator, validation_steps=5)
</code></pre>
<p>You might want to make 2 generators: one for training data and one for validation data. Also, Keras might give you a warning about using multiprocessing with multiple workers -- you should make your generators thread-safe by encapsulating them or by using keras.utils.Sequence (more info about this in Keras documentation).</p>
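<p>For reference, a minimal sketch of the <code>keras.utils.Sequence</code> route mentioned above (the class name and batch size are made up; it assumes <code>x</code> is the 4D image array and <code>y</code> the labels, aligned on the first axis):</p>
<pre><code>import numpy as np
from keras.utils import Sequence

class ImageBatchSequence(Sequence):
    """Thread-safe batch provider, safe with use_multiprocessing and workers."""
    def __init__(self, x, y, batch_size=16):
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.x) / float(self.batch_size)))

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.x[sl], self.y[sl]

# e.g. model.fit_generator(ImageBatchSequence(train_image, train_gt),
#                          validation_data=ImageBatchSequence(val_image, val_gt), ...)
</code></pre>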
|
python|tensorflow|deep-learning|keras|conv-neural-network
| 2
|
376,707
| 46,667,790
|
Distributed tensorflow of Between-graph replication?
|
<p>Look at the code I wrote:</p>
<pre><code>import tensorflow as tf
tf.flags.DEFINE_string('job_name', 'ps', 'worker or ps')
tf.flags.DEFINE_integer('task_index', 0, 'task id')
FLAGS = tf.flags.FLAGS
host = '127.0.0.1:'
cluster = {"ps": [host+'2222'],
"worker": [host+'2223', host+'2224']}
clusterspec = tf.train.ClusterSpec(cluster)
server = tf.train.Server(cluster,
job_name=FLAGS.job_name,
task_index=FLAGS.task_index)
def print_fn():
print('job_name: %s, task_index: %d' % (FLAGS.job_name, FLAGS.task_index))
if FLAGS.job_name == 'ps':
server.join()
elif FLAGS.job_name == 'worker':
with tf.device(tf.train.replica_device_setter(
worker_device="/job:worker/task:%d" % FLAGS.task_index,
cluster=cluster)):
a = tf.Variable(tf.zeros([]), name='a')
b = tf.Variable(tf.zeros([]), name='b')
op = tf.add(a, b)
print(a.device)
print(b.device)
print(op.device)
print(tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES))
print_fn()
</code></pre>
<p>When I run <code>python distributed_replicas.py --job_name=worker --task_index=0</code> in the <code>cmd</code>, without first running <code>python distributed_replicas.py --job_name=ps --task_index=0</code>, the program still works. Both <code>a.device</code> and <code>b.device</code> are <code>/job:ps/task:0</code>, but the <code>ps server</code> hasn't started, so how are the variables <code>a</code> and <code>b</code> stored on the <code>ps server</code>? And <code>tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)</code> also contains the variables <code>a</code> and <code>b</code>, which suggests <code>a</code> and <code>b</code> are created on <code>/job:worker/task:0</code>, although their device is <code>/job:ps/task:0</code>. So what's wrong? Where are <code>a</code> and <code>b</code> created?</p>
|
<p>This is the intended behavior. Up to this point in your code, you have only created a graph, which does not require any jobs to be up and running.</p>
<p>You will encounter the problem after creating a Session (or any variation of Session).</p>
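<p>A small sketch of where it starts to matter (reusing the <code>server</code> and <code>op</code> from the question): only when a session is created and run does TensorFlow try to place ops on remote devices, so the following blocks until the ps job is actually reachable.</p>
<pre><code>with tf.Session(server.target) as sess:
    sess.run(tf.global_variables_initializer())  # blocks until /job:ps/task:0 is up
    print(sess.run(op))
</code></pre>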
<p>More information here:
<a href="https://www.tensorflow.org/extend/architecture" rel="nofollow noreferrer">https://www.tensorflow.org/extend/architecture</a></p>
|
python|tensorflow|distributed-computing|grpc
| 0
|
376,708
| 46,830,776
|
tensorflow - is this equivalent to mse?
|
<p>I am very new to TensorFlow, I notice that <a href="https://www.tensorflow.org/api_docs/python/tf/losses/mean_squared_error" rel="nofollow noreferrer">here</a> there is <code>tf.losses.mean_squared_error</code> which implements the mean squared error loss function.</p>
<p>Before using it, I played around with TF and I wrote</p>
<pre class="lang-python prettyprint-override"><code>tf.reduce_mean(tf.reduce_sum(tf.square(tf.subtract(y, y_))))
</code></pre>
<p>However this gives different results. To me it looks like it is the same formula. What is going wrong?</p>
<p>Are the two formulations different? (and what about <code>tf.nn.l2_loss</code>?)</p>
<p>Also, I am trying to do a MLP and I am using a <code>mse</code> loss function as input to <code>tf.train.GradientDescentOptimizer(0.5).minimize(mse)</code>. Can this function (<code>mse = tf.losses.mean_squared_error(y, y_)</code>) be used (in a regression problem) also as "accuracy" on the test set by using <code>sess.run(mse, feed_dict = {x:X_test, y: y_test})</code>? Or what is the difference?</p>
|
<p>It is because you sum before taking the mean, so you get the sum of squared errors and not their mean. Change <code>tf.reduce_mean(tf.reduce_sum(tf.square(tf.subtract(y, y_))))</code> to <code>tf.reduce_mean(tf.square(tf.subtract(y, y_)))</code>.</p>
<pre class="lang-python prettyprint-override"><code>import tensorflow as tf
import numpy as np
y = tf.get_variable("y", [1, 5])
x = tf.get_variable("x", [1, 5])
sess = tf.Session()
t = tf.reduce_mean(tf.reduce_sum(tf.square(tf.subtract(y, x))))
t2 = tf.losses.mean_squared_error(x, y)
t3 = tf.reduce_mean(tf.square(tf.subtract(y, x)))
sess.run(t, {"x": np.ones((1, 5)), "y": np.zeros((1, 5))}) # 5
sess.run(t2, {x: np.ones((1, 5)), y: np.zeros((1, 5))}) # 1
sess.run(t3, {x: np.ones((1, 5)), y: np.zeros((1, 5))}) # 1
</code></pre>
|
machine-learning|tensorflow|neural-network
| 2
|
376,709
| 47,059,124
|
In pandas crosstab, how to calculate weighted averages? And how to add row and column totals?
|
<p>I have a pandas dataframe with two categorical variables (in my example, city and colour), a column with percentages, and one with weights.
I want to do a crosstab of city and colour, showing, for each combination of the two, the weighted average of perc.</p>
<p>I have managed to do it with the code below, where I first create a column with weights x perc, then one crosstab with the sum of (weights x perc), another crosstab with the sum of weights, then finally divide the first by the second.</p>
<p>It works, but <strong>is there a quicker/more elegant way to do it?</strong></p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(123)
df=pd.DataFrame()
myrows=10
df['weight'] = np.random.rand(myrows)*100
np.random.seed(321)
df['perc']=np.random.rand(myrows)
df['weight x perc']=df['weight']*df['perc']
df['colour']=np.where( df['perc']<0.5, 'red','yellow')
np.random.seed(555)
df['city']=np.where( np.random.rand(myrows) <0.5,'NY','LA' )
num=pd.crosstab( df['city'], df['colour'], values=df['weight x perc'], aggfunc='sum', margins=True)
den=pd.crosstab( df['city'], df['colour'], values=df['weight'], aggfunc='sum', margins=True)
out=num/den
print(out)
</code></pre>
|
<p>Here is one way, using a groupby with apply() and the numpy weighted average method:</p>
<pre><code>df.groupby(['colour','city']).apply(lambda x: np.average(x.perc, weights=x.weight)).unstack(level=0)
</code></pre>
<p>which gives</p>
<pre><code>colour red yellow
city
LA 0.173870 0.865636
NY 0.077912 0.687400
</code></pre>
<p>I don't have the <code>All</code> margin here, though.</p>
<p>This will produce the totals</p>
<pre><code>df.groupby(['colour']).apply(lambda x: np.average(x.perc, weights=x.weight))
df.groupby(['city']).apply(lambda x: np.average(x.perc, weights=x.weight))
</code></pre>
<p>Granted, this is still not packaged into a single frame.</p>
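<p>If you do want everything in one frame with the margins attached, one way (a hedged sketch reusing <code>df</code> from the question) is to stitch the pieces together manually:</p>
<pre><code>wavg = lambda x: np.average(x.perc, weights=x.weight)

out = df.groupby(['city', 'colour']).apply(wavg).unstack()
out['All'] = df.groupby('city').apply(wavg)           # row margins
out.loc['All'] = df.groupby('colour').apply(wavg)     # column margins
out.loc['All', 'All'] = np.average(df.perc, weights=df.weight)
print(out)
</code></pre>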
|
python|pandas|crosstab|categorical-data
| 3
|
376,710
| 46,647,050
|
Merging selected columns from multiple pandas dataframe by comparing the values
|
<p>I have <strong>df1</strong> as follows:</p>
<pre><code>id
1
2
3
4
5
6
7
</code></pre>
<p>I have <strong>df2</strong> as:</p>
<pre><code>id1 name1 val1
1 abbb1 10
2 abbb2 20
3 abbb3 30
4 abbb4 40
7 abbb7 70
</code></pre>
<p>I have <strong>df3</strong> as:</p>
<pre><code>id2 name2 val2
1 abbb1 90
2 abbb2 20
5 abbb5 50
6 abbb6 60
</code></pre>
<p>So, I want to pick values from <strong>df2</strong> and <strong>df3</strong> and add them to df1 by matching the ids. df1 should then look as follows:</p>
<pre><code>id val1 val2
1 10 90
2 20 20
3 30 0
4 40 0
5 0 50
6 0 60
7 70 0
</code></pre>
<p>All I got to was this line of code, and then I got stuck:</p>
<pre><code>df1 = df1.merge(df2, df3, on=['id'])
</code></pre>
<p>Please note that:</p>
<ul>
<li>I don't want to have name1 and name2 in the expected output. </li>
<li>If val1 or val2 does not exist (after comparing), I want the cell to
contain 0.</li>
</ul>
|
<p>I think it is better to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a> here.</p>
<p>It is also necessary that the values of <code>id1</code> and <code>id2</code> are unique in <code>df2</code> and <code>df3</code>.</p>
<pre><code>df1['val1'] = df1['id'].map(df2.set_index('id1')['val1']).fillna(0).astype(int)
df1['val2'] = df1['id'].map(df3.set_index('id2')['val2']).fillna(0).astype(int)
print (df1)
id val1 val2
0 1 10 90
1 2 20 20
2 3 30 0
3 4 40 0
4 5 0 50
5 6 0 60
6 7 70 0
</code></pre>
<p>Alternative:</p>
<pre><code>a = df1['id'].map(df2.set_index('id1')['val1']).fillna(0).astype(int)
b = df1['id'].map(df3.set_index('id2')['val2']).fillna(0).astype(int)
df1 = df1.assign(val1=a, val2=b)
print (df1)
id val1 val2
0 1 10 90
1 2 20 20
2 3 30 0
3 4 40 0
4 5 0 50
5 6 0 60
6 7 70 0
</code></pre>
<p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.merge.html" rel="nofollow noreferrer"><code>merge</code></a>:</p>
<pre><code>df1 = df1.merge(pd.merge(df2.rename(columns={'id1':'id'}),
df3.rename(columns={'id2':'id'}), on='id', how='outer')
[['id','val1','val2']].fillna(0).astype(int), how='left')
print (df1)
id val1 val2
0 1 10 90
1 2 20 20
2 3 30 0
3 4 40 0
4 5 0 50
5 6 0 60
6 7 70 0
</code></pre>
|
python|pandas|dataframe
| 1
|
376,711
| 46,638,618
|
python pandas dataframe find row containing specific value and return boolean
|
<p>I want to compare two dataframes, df1 and df2. df1 is data that updates every hour by itself; df2 is a dataframe that already exists. I want to append the specific rows that were updated.</p>
<p>for example, Here is df1</p>
<p>df1:</p>
<p><img src="https://i.stack.imgur.com/vfJ4o.png" alt="fd1"></p>
<p>which contains 5 rows of information</p>
<p>and df2 which already existed</p>
<p>df2:</p>
<p><img src="https://i.stack.imgur.com/pOYRu.png" alt="df2"></p>
<p>We can tell that eric was added, but df2 is not reflecting that.</p>
<p>I could overwrite df2 with df1, but I shouldn't, as there is a remark column that is updated by a person after the data is written.</p>
<p>So, I decided to remove each row of data by finding it in df2 by its id, and remove them with a for loop,</p>
<p>and after that, only eric's row will remain, which will make it possible to just append eric to df2.</p>
<p>So what I tried is this</p>
<pre><code>for index, row in df1.iterrows():
    id = row['id']
    if df2.loc[df1['id'].isin(id)] = True:
        df1[df1.id != id)
</code></pre>
<p>and it returns a syntax error....</p>
<p>Am I on the right track? Is this the best solution to this problem, and how should I change the code to achieve my goal?</p>
|
<p>To fix your code ...</p>
<pre><code>l = []
for index, row in df1.iterrows():
    id = row['Id']
    if sum(df2['Id'].isin([id])) > 0:
        l.append(id)

l
Out[334]: [0, 1, 2, 3, 4]  # those are the rows you need to remove

df1.loc[~df1.index.isin(l)]  # you remove them by using `~` + .isin
Out[339]:
   Id Name
5   5    F
6   6    G
</code></pre>
<hr>
<p>By using <code>pd.concat</code></p>
<pre><code>pd.concat([df2,df1[~df1.Id.isin(df2.Id)]],axis=0)
Out[337]:
Id Name
0 0 A
1 1 B
2 2 C
3 3 D
4 4 E
5 5 F
6 6 G
</code></pre>
<hr>
<p>Data Input </p>
<pre><code>fake = {'Id' : [0,1,2,3,4,5,6],
'Name' : ['A','B','C','D','E','F','G']}
df1 = pd.DataFrame(fake)
fake = {'Id' : [0,1,2,3,4],
'Name' : ['A','B','C','D','E']}
df2 = pd.DataFrame(fake)
</code></pre>
|
python|pandas
| 2
|
376,712
| 47,016,833
|
How to manually select which x-axis label(Dates) gets plotted in pandas
|
<p>First of all, I am sorry if I am not describing the problem correctly, but the example should make my issue clear.</p>
<p>I have this dataframe and I need to plot it sorted by date, but I have lots of dates (around 60), so pandas automatically chooses which dates to label on the x-axis, and those dates are random. For visibility I also want to plot only selected dates on the x-axis, but I want them to follow some pattern, like January of every year.</p>
<p>This is my code:</p>
<pre><code>df = pd.read_csv('dbo.Access_Stat_all.csv', error_bad_lines=False,
                 usecols=['Range_Start', 'Format', 'Resource_ID', 'Number'])
df1 = df[df['Resource_ID'] == 32543]
df1 = df1[['Format', 'Range_Start', 'Number']]
df1["Range_Start"] = df1["Range_Start"].str[:7]
df1 = df1.groupby(['Format', 'Range_Start'], as_index=True).last()
pd.options.display.float_format = '{:,.0f}'.format
df1 = df1.unstack()
df1.columns = df1.columns.droplevel()
if df1.index.contains('entry'):
    df2 = df1[1:4].sum(axis=0)
else:
    df2 = df1[0:3].sum(axis=0)
df2.name = 'sum'
df2 = df1.append(df2)
print(df2)
df2.to_csv('test.csv', sep="\t", float_format='%.f')
if df2.index.contains('entry'):
    df2.T[['entry', 'sum']].plot(rot=30)
else:
    df2.T[['sum']].plot(kind='bar')
ax1 = plt.axes()
ax1.legend(["Seitenzugriffe", "Dateiabrufe"])
plt.xlabel("")
plt.savefig('image.png')
</code></pre>
<p><a href="https://i.stack.imgur.com/iWpMH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iWpMH.png" alt="This is the plot"></a></p>
<p>As you can see, the plot has 2010-08, 2013-09, 2014-07 as the x-axis values. How can I make it something like 2010-01, 2013-01, 2014-01, etc.?</p>
<p>Thank you very much. I know this is not the optimal description, but since English is not my first language this is the best I could come up with.</p>
|
<p><strong>NOTE: Updated to answer OP question more directly.</strong></p>
<p>You are mixing Pandas plotting as well as the <code>matplotlib</code> <a href="http://matplotlib.org/api/pyplot_summary.html#id11" rel="nofollow noreferrer">PyPlot API</a> and <a href="http://matplotlib.org/api/pyplot_summary.html#the-object-oriented-api" rel="nofollow noreferrer">Object-oriented API</a> by using <code>axes</code> (<code>ax1</code> above) methods and <code>plt</code> methods. The latter are two distinctly different APIs and they may not work correctly when mixed. The <code>matplotlib</code> documentation <em>recommends</em> using the object-oriented API.</p>
<blockquote>
<p>While it is easy to quickly generate plots with the <code>matplotlib.pyplot</code> module, we recommend using the object-oriented approach for more control and customization of your plots. See the methods in the <code>matplotlib.axes.Axes()</code> class for many of the same plotting functions. For examples of the OO approach to Matplotlib, see the API Examples.</p>
</blockquote>
<p>Here's how you can control the x-axis "tick" values/labels using proper <code>matplotlib</code> date formatting (<a href="http://matplotlib.org/gallery/api/date.html#sphx-glr-gallery-api-date-py" rel="nofollow noreferrer">see <code>matplotlib</code> example</a>) with the object-oriented API. Also, see link from <a href="https://stackoverflow.com/a/44214830/5874320">@ImportanceOfBeingErnest answer to another question</a> for incompatibilities between Pandas' and <code>matplotlib</code>'s <code>datetime</code> objects.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

# prepare your data
df = pd.read_csv('../../../so/dbo.Access_Stat_all.csv', error_bad_lines=False,
                 usecols=['Range_Start', 'Format', 'Resource_ID', 'Number'])
df.head()
df1 = df[df['Resource_ID'] == 10021]
df1 = df1[['Format', 'Range_Start', 'Number']]
df1["Range_Start"] = df1["Range_Start"].str[:7]
df1 = df1.groupby(['Format', 'Range_Start'], as_index=True).last()
pd.options.display.float_format = '{:,.0f}'.format
df1 = df1.unstack()
df1.columns = df1.columns.droplevel()
if df1.index.contains('entry'):
    df2 = df1[1:4].sum(axis=0)
else:
    df2 = df1[0:3].sum(axis=0)
df2.name = 'sum'
df2 = df1.append(df2)
print(df2)
df2.to_csv('test.csv', sep="\t", float_format='%.f')

if df2.index.contains('entry'):
    # convert your index to use pandas datetime format
    df3 = df2.T[['entry', 'sum']].copy()
    df3.index = pd.to_datetime(df3.index)

    # for illustration, I changed a couple dates and added some dummy values
    df3.loc['2014-01-01']['entry'] = 48
    df3.loc['2014-05-01']['entry'] = 28
    df3.loc['2015-05-01']['entry'] = 36
    print(df3)

    # plot your data
    fig, ax = plt.subplots()

    # use matplotlib date formatters
    years = mdates.YearLocator()   # every year
    yearsFmt = mdates.DateFormatter('%Y-%m')

    # format the major ticks
    ax.xaxis.set_major_locator(years)
    ax.xaxis.set_major_formatter(yearsFmt)

    ax.plot(df3)

    # add legend
    ax.legend(["Seitenzugriffe", "Dateiabrufe"])
    fig.savefig('image.png')
else:
    # left as an exercise...
    df2.T[['sum']].plot(kind='bar')
</code></pre>
|
python|pandas|datetime|matplotlib|plot
| 1
|
376,713
| 46,861,171
|
Filter arrays in Numpy
|
<p>I have an array: <code>[[True], [False], [True]]</code>. If I would want this array to filter my existing array, e.g <code>[[1,2],[3,4],[5,6]]</code> should get filtered to <code>[[1,2],[5,6]]</code>, what is the correct way to do this?</p>
<p>A simple <code>a[b]</code> indexing gives the error: <code>boolean index did not match indexed array along dimension 1; dimension is 2 but corresponding boolean dimension is 1</code></p>
|
<p>The solution is to get the array <code>[[True], [False], [True]]</code> into shape <code>[True, False, True]</code>, so that it works for indexing the rows of the other array. As Divakar said, <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ravel.html#numpy.ravel" rel="nofollow noreferrer"><code>ravel</code></a> does this; in general it flattens any array to a 1D array. Another option is <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.squeeze.html" rel="nofollow noreferrer"><code>squeeze</code></a>, which removes the dimensions of size 1 but leaves the other dimensions as they were.</p>
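<p>A quick demonstration with the arrays from the question:</p>
<pre><code>import numpy as np

a = np.array([[1, 2], [3, 4], [5, 6]])
b = np.array([[True], [False], [True]])

print(a[b.ravel()])    # [[1 2] [5 6]]
print(a[b.squeeze()])  # same result
</code></pre>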
|
python|arrays|numpy
| 1
|
376,714
| 47,047,140
|
Pandas: slice one multiindex dataframe with multiindex of another when some levels don't match
|
<p>I have two multiindexed dataframes, one with two levels and one with three. The first two levels match in both dataframes. I would like to find all values from the first dataframe where the first two index levels match in the second dataframe. The second data frame does not have a third level. </p>
<p>The closest answer I have found is this:
<a href="https://stackoverflow.com/questions/29266600/how-to-slice-one-multiindex-dataframe-with-the-multiindex-of-another">How to slice one MultiIndex DataFrame with the MultiIndex of another</a> -- however the setup is slightly different and doesn't seem to translate to this case. </p>
<p>Consider the setup below </p>
<pre><code>array_1 = [np.array(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux']),
np.array(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']),
np.array(['a', 'a','a', 'a','b','b','b','b' ])]
array_2 = [np.array(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux']),
np.array(['one', 'two', 'three', 'one', 'two', 'two', 'one', 'two'])]
df_1 = pd.DataFrame(np.random.randn(8,4), index=array_1).sort_index()
print df_1
0 1 2 3
bar one a 1.092651 -0.325324 1.200960 -0.790002
two a -0.415263 1.006325 -0.077898 0.642134
baz one a -0.343707 0.474817 0.396702 -0.379066
two a 0.315192 -1.548431 -0.214253 -1.790330
foo one b 1.022050 -2.791862 0.172165 0.924701
two b 0.622062 -0.193056 -0.145019 0.763185
qux one b -1.241954 -1.270390 0.147623 -0.301092
two b 0.778022 1.450522 0.683487 -0.950528
df_2 = pd.DataFrame(np.random.randn(8,4), index=array_2).sort_index()
print df_2
0 1 2 3
bar one -0.354889 -1.283470 -0.977933 -0.601868
two -0.849186 -2.455453 0.790439 1.134282
baz one -0.143299 2.372440 -0.161744 0.919658
three -1.008426 -0.116167 -0.268608 0.840669
foo two -0.644028 0.447836 -0.576127 -0.891606
two -0.163497 -1.255801 -1.066442 0.624713
qux one -1.545989 -0.422028 -0.489222 -0.357954
two -1.202655 0.736047 -1.084002 0.732150
</code></pre>
<p>Now I query the second, dataframe, returning a subset of the original indexes</p>
<pre><code>df_2_selection = df_2[(df_2 > 1).any(axis=1)]
print df_2_selection
0 1 2 3
bar two -0.849186 -2.455453 0.790439 1.134282
baz one -0.143299 2.372440 -0.161744 0.919658
</code></pre>
<p>I would like to find all the values in df_1 that match the indices found in df_2. The first two levels line up, but the third does not. </p>
<p>This problem is easy when the indices line up, and would be solved by something like <code>df_1.loc[df_2_selection.index] #this works if indexes are the same</code></p>
<p>Also, I can find the values which match one of the levels with something like
<code>df_1[df_1.index.isin(df_2_selection.index.get_level_values(0), level=0)]</code>, but this does not solve the problem.</p>
<p>Chaining these statements together does not provide the desired functionality</p>
<p><code>df_1[(df_1.index.isin(df_2_selection.index.get_level_values(0),level = 0)) & (df_1.index.isin(df_2_selection.index.get_level_values(1),level = 1))]</code></p>
<p>I envision something along the lines of:</p>
<pre><code>df_1_select = df_1[(df_1.index.isin(
df_2_selection.index.get_level_values([0,1]),level = [0,1])) #Doesnt Work
print df_1_select
0 1 2 3
bar two a -0.415263 1.006325 -0.077898 0.642134
baz one a -0.343707 0.474817 0.396702 -0.379066
</code></pre>
<p>I have tried many other methods, all of which have not worked exactly how I wanted. Thank you for your consideration. </p>
<p>EDIT: </p>
<p>This
<code>df_1.loc[pd_idx[df_2_selection.index.get_level_values(0),df_2_selection.index.get_level_values(1),:],:]</code> Also does not work</p>
<p>I want only the rows where both levels match. Not where either level match. </p>
<p>EDIT 2: This solution was posted by someone who has since deleted it </p>
<pre><code>id=[x+([x for x in df_1.index.levels[-1]]) for x in df_2_selection.index.values]
pd.concat([df_1.loc[x] for x in id])
</code></pre>
<p>Which indeed does work! However on large dataframes it is prohibitively slow. Any help with new methods / speedup is greatly appreciated. </p>
|
<p>You can use <code>reset_index()</code> and <code>merge()</code>.</p>
<p>With <code>df_2_selection</code> as:</p>
<pre><code> 0 1 2 3
foo two -0.530151 0.932007 -1.255259 2.441294
qux one 2.006270 1.087412 -0.840916 -1.225508
</code></pre>
<p>Merge with:</p>
<pre><code>lvls = ["level_0","level_1"]
(df_1.reset_index()
.merge(df_2_selection.reset_index()[lvls], on=lvls)
.set_index(["level_0","level_1","level_2"])
.rename_axis([None]*3)
)
</code></pre>
<p>Output:</p>
<pre><code> 0 1 2 3
foo two b -0.112696 0.287421 -0.380692 -0.035471
qux one b 0.658227 0.632667 -0.193224 1.073132
</code></pre>
<p>Note: The <code>rename_axis()</code> part just removes the level names, e.g. <code>level_0</code>. It's purely cosmetic, and not necessary to perform the actual matching procedure.</p>
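<p>As a possibly faster alternative (a hedged sketch using the same frames): drop the unmatched third level from <code>df_1</code>'s index and test tuple membership against <code>df_2_selection</code>'s two-level index directly.</p>
<pre><code>mask = df_1.index.droplevel(-1).isin(df_2_selection.index)
df_1_select = df_1[mask]
</code></pre>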
|
python|pandas|indexing|slice|multi-index
| 1
|
376,715
| 46,902,237
|
Tensorflow - saving the checkpoint files as .pb, but with no output node names
|
<p>I have the following files:</p>
<pre><code>model.ckpt-2400.data-00000-of-00001
model.ckpt-2400.index
model.ckpt-2400.meta
</code></pre>
<p>And I would like to save them in the form of a <code>.pb</code> with the following function:</p>
<pre><code>def freeze_graph(model_dir, output_node_names):
    """Extract the sub graph defined by the output nodes and convert all its variables into constants

    Args:
        model_dir: the root folder containing the checkpoint state file
        output_node_names: a string, containing all the output node's names,
            comma separated
    """
    if not tf.gfile.Exists(model_dir):
        raise AssertionError(
            "Export directory doesn't exists. Please specify an export "
            "directory: %s" % model_dir)

    if not output_node_names:
        print("You need to supply the name of a node to --output_node_names.")
        return -1

    # We retrieve our checkpoint fullpath
    checkpoint = tf.train.get_checkpoint_state(model_dir)
    input_checkpoint = checkpoint.model_checkpoint_path

    # We precise the file fullname of our freezed graph
    absolute_model_dir = "/".join(input_checkpoint.split('/')[:-1])
    output_graph = absolute_model_dir + "/frozen_model.pb"

    # We clear devices to allow TensorFlow to control on which device it will load operations
    clear_devices = True

    # We start a session using a temporary fresh Graph
    with tf.Session(graph=tf.Graph()) as sess:
        # We import the meta graph in the current default Graph
        saver = tf.train.import_meta_graph(input_checkpoint + '.meta', clear_devices=clear_devices)

        # We restore the weights
        saver.restore(sess, input_checkpoint)

        # We use a built-in TF helper to export variables to constants
        output_graph_def = tf.graph_util.convert_variables_to_constants(
            sess,  # The session is used to retrieve the weights
            tf.get_default_graph().as_graph_def(),  # The graph_def is used to retrieve the nodes
            output_node_names.split(",")  # The output node names are used to select the usefull nodes
        )

        # Finally we serialize and dump the output graph to the filesystem
        with tf.gfile.GFile(output_graph, "wb") as f:
            f.write(output_graph_def.SerializeToString())
        print("%d ops in the final graph." % len(output_graph_def.node))

    return output_graph_def
</code></pre>
<p>The problem is that when I use <code>tf.get_default_graph().as_graph_def().node</code>, it returns <code>[]</code>. An empty array. There are no output node names I can use for this.</p>
<p>So how else can I save them as .pb? Should I just refer to the <code>tf.python.tools.freeze_graph.freeze_graph()</code> function?</p>
|
<p>Turns out all I needed to do was supply the name of the output node... the one that I, in another part of my code, had designated as the node to log when checking the results.</p>
<pre><code>predictions = {
    # Generate predictions (for PREDICT and EVAL mode)
    "classes": tf.argmax(input=logits, axis=1),
    # Add `softmax_tensor` to the graph. It is used for PREDICT and by the
    # `logging_hook`.
    "probabilities": tf.nn.softmax(logits, name="softmax_tensor")  # This one
}
</code></pre>
<p>In my case it's <code>softmax_tensor</code>.</p>
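<p>With that, the call is just (a hypothetical invocation; the first argument is whatever directory holds your checkpoint files):</p>
<pre><code>freeze_graph("path/to/model_dir", "softmax_tensor")
</code></pre>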
|
tensorflow
| 0
|
376,716
| 46,866,513
|
How do I get values from a text file and put them into a dataframe in python?
|
<p>I have a text file and want to get its values and put them in a dataframe in Python. I know I have to use read_csv, but I am not sure exactly how to do it.</p>
<p>The text file looks something like this:</p>
<p>duration,protocol_type,service,flag,src_bytes,dst_bytes,land,wrong_fragment,urgent,hot,num_failed_logins,logged_in,num_compromised,root_shell,su_attempted,num_root,num_file_creations,num_shells,num_access_files,num_outbound_cmds,is_host_login,is_guest_login,count,srv_count,serror_rate,srv_serror_rate,rerror_rate,srv_rerror_rate,same_srv_rate,diff_srv_rate,srv_diff_host_rate,dst_host_count,dst_host_srv_count,dst_host_same_srv_rate,dst_host_diff_srv_rate,dst_host_same_src_port_rate,dst_host_srv_diff_host_rate,dst_host_serror_rate,dst_host_srv_serror_rate,dst_host_rerror_rate,dst_host_srv_rerror_rate
0,icmp,ecr_i,SF,1032,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,511,511,0.0,0.0,0.0,0.0,1.0,0.0,0.0,255,255,1.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0</p>
<p>where all the strings are what I want to use as labels and the rest are values. I am using pandas in Anaconda and the text file is called kddcup.txt. If anyone can help me out it would be much appreciated. Thank you!</p>
|
<p>Try this. The path to the file is wherever your file is located (the relative path).</p>
<pre><code>import pandas as pd
data = pd.read_csv('path_to_file.csv', sep=',')
</code></pre>
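<p>For the file named in the question (assuming it sits in the working directory), that would be something like:</p>
<pre><code>import pandas as pd

# the first line of kddcup.txt supplies the column names
data = pd.read_csv('kddcup.txt')
print(data.head())
</code></pre>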
|
python|pandas
| 0
|
376,717
| 46,882,307
|
AttributeError: module 'tensorflow' has no attribute 'feature_column'
|
<p>So I am new to machine learning and was trying out the TensorFlow Linear Model Tutorial given here:
<a href="https://www.tensorflow.org/tutorials/wide" rel="noreferrer">https://www.tensorflow.org/tutorials/wide</a></p>
<p>I literally just downloaded their tutorial and tried to run it in my computer but I got the error:</p>
<blockquote>
<p>AttributeError: module 'tensorflow' has no attribute 'feature_column'</p>
</blockquote>
<p>I searched online and got to know that this can happen on older versions of tensorflow, but I am running the latest version: 1.3.0</p>
<p>So why am I getting this error and how to fix it?</p>
|
<p>TensorFlow 1.3 should support feature_column well. You might have accidentally used an old version. Try the following code to verify your version:</p>
<pre><code>import tensorflow as tf
print(tf.__version__)
print(dir(tf.feature_column))
</code></pre>
|
python|machine-learning|tensorflow
| 3
|
376,718
| 32,958,399
|
Transpose DataFrame in Pandas while preserving Index column
|
<p>The problem is, when I transpose the DataFrame, the header of the transposed DataFrame becomes the Index numerical values and not the values in the "id" column. See below original data for examples:</p>
<p><strong>Original data that I wanted to transpose (but keep the 0,1,2,... Index intact and change "id" to "id2" in final transposed DataFrame)</strong>.<br>
<a href="https://i.stack.imgur.com/y2VFK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y2VFK.png" alt="enter image description here"></a>
<strong>DataFrame after I transpose, notice the headers are the Index values and NOT the "id" values (which is what I was expecting and needed)</strong>
<a href="https://i.stack.imgur.com/8UEmJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8UEmJ.png" alt="enter image description here"></a></p>
<p><strong><em>Logic Flow</em></strong></p>
<p>First this helped to get rid of the numerical index that got placed as the header: <a href="https://stackoverflow.com/questions/31520033/how-to-stop-pandas-adding-time-to-column-title-after-transposing-a-datetime-inde">How to stop Pandas adding time to column title after transposing a datetime index?</a></p>
<p>Then this helped to get rid of the index numbers as the header, but now "id" and "index" got shuffled around: <a href="https://stackoverflow.com/questions/31763800/reassigning-index-in-pandas-dataframe">Reassigning index in pandas DataFrame</a> & <a href="https://stackoverflow.com/questions/31763800/reassigning-index-in-pandas-dataframe">Reassigning index in pandas DataFrame</a></p>
<p><a href="https://i.stack.imgur.com/At9Hs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/At9Hs.png" alt="enter image description here"></a> </p>
<p>But now my id and index values got shuffled for some reason. </p>
<p><strong>How can I fix this so the columns are [id2,600mpe, au565...]?</strong></p>
<p><strong>How can I do this more efficiently?</strong> </p>
<p>Here's my code:</p>
<pre><code>DF = pd.read_table(data,sep="\t",index_col = [0]).transpose() #Add index_col = [0] to not have index values as own row during transposition
m, n = DF.shape
DF.reset_index(drop=False, inplace=True)
DF.head()
</code></pre>
<p>This didn't help much: <a href="https://stackoverflow.com/questions/9762935/add-indexed-column-to-dataframe-with-pandas">Add indexed column to DataFrame with pandas</a></p>
|
<p>If I understand your example, what seems to happen is that <code>transpose</code> takes your actual index (the 0...n sequence) as column headers. First, if you want to preserve the numerical index, you can store it as <code>id2</code>:</p>
<pre><code>DF['id2'] = DF.index
</code></pre>
<p>Now if you want <code>id</code> to be the column headers then you must set that as an index, overriding the default one:</p>
<pre><code>DF.set_index('id',inplace=True)
DF.T
</code></pre>
<p>I don't have your data reproduced, but this should give you the values of <code>id</code> across columns.</p>
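<p>A tiny reproduction of the two steps (with made-up data, using two of the ids visible in the screenshots):</p>
<pre><code>import pandas as pd

DF = pd.DataFrame({'id': ['600mpe', 'au565'], 'x': [1, 2], 'y': [3, 4]})
DF['id2'] = DF.index        # preserve the numerical index as a column
out = DF.set_index('id').T  # the 'id' values become the column headers
print(out)
# id   600mpe  au565
# x         1      2
# y         3      4
# id2       0      1
</code></pre>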
|
pandas|indexing|dataframe|transpose
| 7
|
376,719
| 32,652,718
|
Pandas: Find rows which don't exist in another DataFrame by multiple columns
|
<p>same as this <a href="https://stackoverflow.com/questions/32651860/python-pandas-how-to-find-rows-in-one-dataframe-but-not-in-another">python pandas: how to find rows in one dataframe but not in another?</a>
but with multiple columns</p>
<p>This is the setup:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(dict(
col1=[0,1,1,2],
col2=['a','b','c','b'],
extra_col=['this','is','just','something']
))
other = pd.DataFrame(dict(
col1=[1,2],
col2=['b','c']
))
</code></pre>
<p>Now, I want to select the rows from <code>df</code> which don't exist in other. I want to do the selection by <code>col1</code> and <code>col2</code></p>
<p>In SQL I would do:</p>
<pre><code>select * from df
where not exists (
select * from other o
where df.col1 = o.col1 and
df.col2 = o.col2
)
</code></pre>
<p>And in Pandas I can do something like this, but it feels very ugly. Part of the ugliness could be avoided if df had an id column, but it's not always available.</p>
<pre><code>key_col = ['col1','col2']
df_with_idx = df.reset_index()
common = pd.merge(df_with_idx,other,on=key_col)['index']
mask = df_with_idx['index'].isin(common)
desired_result = df_with_idx[~mask].drop('index',axis=1)
</code></pre>
<p><strong>So maybe there is some more elegant way?</strong></p>
|
<p>Since <code>0.17.0</code> there is a new <a href="http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#other-enhancements"><code>indicator</code></a> param you can pass to <code>merge</code> which will tell you whether the rows are only present in left, right or both:</p>
<pre><code>In [5]:
merged = df.merge(other, how='left', indicator=True)
merged
Out[5]:
col1 col2 extra_col _merge
0 0 a this left_only
1 1 b is both
2 1 c just left_only
3 2 b something left_only
In [6]:
merged[merged['_merge']=='left_only']
Out[6]:
col1 col2 extra_col _merge
0 0 a this left_only
2 1 c just left_only
3 2 b something left_only
</code></pre>
<p>So you can now filter the merged df by selecting only <code>'left_only'</code> rows</p>
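<p>Optionally, drop the helper column afterwards so the result matches the original schema:</p>
<pre><code>result = merged[merged['_merge'] == 'left_only'].drop('_merge', axis=1)
</code></pre>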
|
python|join|pandas
| 46
|
376,720
| 32,967,201
|
how to concat sets when using groupby in pandas dataframe?
|
<p>This is my dataframe:</p>
<pre><code>> df
a b
0 1 set([2, 3])
1 2 set([2, 3])
2 3 set([4, 5, 6])
3 1 set([1, 34, 3, 2])
</code></pre>
<p>Now when I <code>groupby</code>, I want to update the sets. If it were a <code>list</code> there would be no problem. But the output of my command is:</p>
<pre><code>> df.groupby('a').sum()
a b
1 NaN
2 set([2, 3])
3 set([4, 5, 6])
</code></pre>
<p>What should I do in groupby to update sets? The output I'm looking for is as below: </p>
<pre><code>a b
1 set([2, 3, 1, 34])
2 set([2, 3])
3 set([4, 5, 6])
</code></pre>
|
<p>This might be close to what you want</p>
<pre><code>df.groupby('a').apply(lambda x: set.union(*x.b))
</code></pre>
<p>In this case it takes the union of the sets.</p>
<p>If you need to keep the column names you could use:</p>
<pre><code>df.groupby('a').agg({'b':lambda x: set.union(*x)}).reset_index('a')
</code></pre>
<p>Result:</p>
<pre><code> a b
0 1 set([1, 2, 3, 34])
1 2 set([2, 3])
2 3 set([4, 5, 6])
</code></pre>
|
python|pandas
| 10
|
376,721
| 33,029,514
|
How to scale MNIST from 28*28 to 29*29 in Python
|
<p>I was trying to apply deformation to the MNIST dataset. The very first step of doing elastic distortion is to scale each image from 28*28 to 29*29 in order to simplify the Gaussian convolution.</p>
<p>But almost every publication that mentioned this procedure ended up just saying "scale it from 28*28 to 29*29". And nothing more...</p>
<p>So my question is, is there any specific implementation to do this with Python, particularly with each image arranged as a 2-D numpy array?</p>
<p>Thanks a lot!</p>
|
<p>You have multiple choices for image resizing.</p>
<pre><code>import numpy as np
img28 = np.eye(28)

from skimage.transform import resize
img29r = resize(img28, (29, 29))

from scipy.misc import imresize
img29i = imresize(img28, (29, 29))
</code></pre>
<p>It's a matter of taste and specifics of your application which you want to use. </p>
|
python|numpy|image-processing
| 1
|
376,722
| 32,718,639
|
Pandas - filling NaNs in Categorical data
|
<p>I am trying to fill missing values (NAN) using the below code</p>
<pre><code>NAN_SUBSTITUTION_VALUE = 1
g = g.fillna(NAN_SUBSTITUTION_VALUE)
</code></pre>
<p>but I am getting the following error </p>
<pre><code>ValueError: fill value must be in categories.
</code></pre>
<p>Would anybody please throw some light on this error?</p>
|
<p>Your question is missing the important point of what <code>g</code> is, especially that it has dtype <code>categorical</code>. I assume it is something like this:</p>
<pre><code>g = pd.Series(["A", "B", "C", np.nan], dtype="category")
</code></pre>
<p>The problem you are experiencing is that <code>fillna</code> requires a value that already exists as a category. For instance, <code>g.fillna("A")</code> would work, but <code>g.fillna("D")</code> fails. To fill the series with a new value you can do:</p>
<pre><code>g_without_nan = g.cat.add_categories("D").fillna("D")
</code></pre>
|
python|pandas
| 67
|
376,723
| 32,722,843
|
Transform an array of shape (n,) to a numpy array of shape (n,1)
|
<p>I have an array that I read from a <code>.npz</code> file with numpy, that has a shape I can not really explain.</p>
<p>When I print the array I get numbers in the following form:</p>
<pre><code>[1 2 3 2 1 8 9 8 3 4 ...]
</code></pre>
<p>without any comma separating them</p>
<p>I would like to transform this array into a numpy array of dimensions <code>(n,1)</code> where n is the number of elements and 1 is the number of columns.</p>
<p>Is there an elegant way of doing it?</p>
|
<p>The shape <code>(n,)</code> means it's a one-dimensional array of length <code>n</code>. If you think the shape <code>(n, 1)</code> represents a one-dimensional array, then it does not; <code>(n, 1)</code> represents a two-dimensional array of n sub-arrays, with each sub-array having 1 element.</p>
<p>If what you really want is an array of shape <code>(n, 1)</code>, you can use <code>ndarray.reshape()</code> with shape <code>(-1, 1)</code> -</p>
<pre><code>array.reshape((-1,1))
</code></pre>
<p>Demo -</p>
<pre><code>In [64]: na
Out[64]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [65]: str(na)
Out[65]: '[0 1 2 3 4 5 6 7 8 9]'
In [66]: na.reshape((-1,1))
Out[66]:
array([[0],
[1],
[2],
[3],
[4],
[5],
[6],
[7],
[8],
[9]])
In [67]: na.reshape((-1,1)).shape
Out[67]: (10, 1)
</code></pre>
<p>As you can see, this moves the array from being a 1d array to a 2d array, with each inner row (inner array) containing only 1 element. This may not be what you want. Output like</p>
<pre><code>[1 2 3 2 1 8 9 8 3 4 ...]
</code></pre>
<p>is just the <code>str()</code> result of a numpy array, it does mean the elements internally are not separated. </p>
|
python|arrays|numpy
| 6
|
376,724
| 32,764,899
|
Convert pandas multiindex into simple flat index of column names
|
<p>I have a pandas data frame like this:</p>
<pre><code>columns = pd.MultiIndex.from_tuples([
('A', 'cat', 'long'), ('A', 'cat', 'long2'),
('A', 'dog', 'short'), ('B', 'dog', 'short')
],
names=['exp', 'animal', 'hair_length']
)
df = pd.DataFrame(np.random.randn(4, 4), columns=columns, index=['W', 'X', 'Y', 'Z'])
</code></pre>
<p>Which results in this structure:</p>
<pre><code>exp A B
animal cat dog dog
hair_length long long2 short short
W 1.088097 -0.104486 2.574262 -0.614482
X -0.088731 0.620010 0.101627 -0.518250
Y -0.687172 0.108860 -1.932803 1.104636
Z 2.453511 0.947065 -2.144457 1.036991
</code></pre>
<p>I now need to "flatten" the column structure into a simple list of column names, such as A_cat_long, A_dog_short and so on.</p>
<p>The following command seems to work:</p>
<pre><code>df.columns = [ '_'.join(x) for x in df.columns ]
A_cat_long A_cat_long2 A_dog_short B_dog_short
W -0.968703 0.086291 -0.255741 1.487564
X 2.113484 -0.118909 0.698032 -0.058647
Y 0.822555 0.483175 1.221687 0.759047
Z -1.260217 1.620935 0.417836 1.581388
</code></pre>
<p>Can anybody confirm whether this is the correct way to do this?</p>
|
<p>In case anybody else comes across this - this indeed seems to do the trick:</p>
<pre><code>df.columns = [ '_'.join(x) for x in df.columns ]
</code></pre>
<p>Result:</p>
<pre><code> A_cat_long A_cat_long2 A_dog_short B_dog_short
W -0.968703 0.086291 -0.255741 1.487564
X 2.113484 -0.118909 0.698032 -0.058647
Y 0.822555 0.483175 1.221687 0.759047
Z -1.260217 1.620935 0.417836 1.581388
</code></pre>
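<p>One caveat to guard against (an assumption, not something shown in the question): if any level contains non-string labels such as numbers or timestamps, <code>'_'.join</code> raises a TypeError, so convert first:</p>
<pre><code>df.columns = ['_'.join(map(str, x)) for x in df.columns]
</code></pre>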
|
python|pandas
| 5
|
376,725
| 32,998,842
|
efficient way of constructing a matrix of pair-wise distances between many vectors?
|
<p>First, thanks for reading and taking the time to respond.</p>
<p>Second, the question:</p>
<p>I have a PxN matrix X where P is in the order of 10^6 and N is in the order of 10^3. So, X is relatively large and is not sparse. Let's say each row of X is an N-dimensional sample. I want to construct a PxP matrix of pairwise distances between these P samples. Let's also say I am interested in Hellinger distances.</p>
<p>So far I am relying on sparse dok matrices:</p>
<pre><code>def hellinger_distance(X):
    P = X.shape[0]
    H1 = sp.sparse.dok_matrix((P, P))
    for i in xrange(P):
        if i % 100 == 0:
            print i
        x1 = X[i]
        X2 = X[i:P]
        h = np.sqrt(((np.sqrt(x1) - np.sqrt(X2))**2).sum(1)) / math.sqrt(2)
        H1[i, i:P] = h
    H = H1 + H1.T
    return H
</code></pre>
<p>This is super slow. Is there a more efficient way of doing this? Any help is much appreciated.</p>
|
<p>You can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html#scipy.spatial.distance.pdist" rel="nofollow"><code>pdist</code></a> and <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.squareform.html#scipy.spatial.distance.squareform" rel="nofollow"><code>squareform</code></a> from <a href="http://docs.scipy.org/doc/scipy/reference/spatial.distance.html" rel="nofollow"><code>scipy.spatial.distance</code></a> -</p>
<pre><code>from scipy.spatial.distance import pdist, squareform
out = squareform(pdist(np.sqrt(X)))/np.sqrt(2)
</code></pre>
<p>Or use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html#scipy.spatial.distance.cdist" rel="nofollow"><code>cdist</code></a> from the same -</p>
<pre><code>from scipy.spatial.distance import cdist
sX = np.sqrt(X)
out = cdist(sX,sX)/np.sqrt(2)
</code></pre>
|
python|numpy|scipy|memory-efficient|scalable
| 2
|
376,726
| 32,726,701
|
Convert real-valued numpy array to binary array by sign
|
<p>I am looking for a fast way to compute the following:</p>
<pre><code>import numpy as np
a = np.array([-1,1,2,-4,5.5,-0.1,0])
</code></pre>
<p>Now I want to cast <code>a</code> to an array of binary values such that it has a 1 for every positive entry of <code>a</code> and a 0 otherwise. So the result I want is this:</p>
<pre><code>array([ 0., 1., 1., 0., 1., 0., 0.])
</code></pre>
<p>One way to achieve this would be</p>
<pre><code>np.array([x if x >=0 else 0 for x in np.sign(a)])
array([ 0., 1., 1., 0., 1., 0., 0.])
</code></pre>
<p>But I am hoping someone can point out a faster solution. </p>
<pre><code>%timeit np.array([x if x >=0 else 0 for x in np.sign(a)])
100000 loops, best of 3: 11.4 us per loop
</code></pre>
<p><strong>EDIT:</strong> timing the great solutions from the answers</p>
<pre><code>%timeit (a > 0).astype(int)
100000 loops, best of 3: 3.47 us per loop
</code></pre>
|
<p>You can check where <code>a</code> is greater than 0 and cast the boolean array to an integer array:</p>
<pre><code>>>> (a > 0).astype(int)
array([0, 1, 1, 0, 1, 0, 0])
</code></pre>
<p>This should be significantly faster than the method proposed in the question (especially over larger arrays) because it avoids looping over the array at the Python level.</p>
<p>Faster still is to simply view the boolean array as the <code>int8</code> dtype - this prevents the need to create a new array from the boolean array:</p>
<pre><code>>>> (a > 0).view(np.int8)
array([0, 1, 1, 0, 1, 0, 0], dtype=int8)
</code></pre>
<p>Timings:</p>
<pre><code>>>> b = np.random.rand(1000000)
>>> %timeit np.array([ x if x >=0 else 0 for x in np.sign(b)])
1 loops, best of 3: 420 ms per loop
>>> %timeit (b > 0).astype(int)
100 loops, best of 3: 4.63 ms per loop
>>> %timeit (b > 0).view(np.int8)
1000 loops, best of 3: 1.12 ms per loop
</code></pre>
|
python|arrays|numpy|casting
| 3
|
376,727
| 32,977,911
|
Combine daily data into monthly data in Excel using Python
|
<p>I am trying to figure out how I can combine daily dates into specific months, summing the data for each day that falls within a given month.</p>
<p>Note: I have a huge list with daily dates but I put a small sample here to simply the example.</p>
<p>File name: (test.xlsx)</p>
<p>For an Example (sheet1) contains in dataframe mode:</p>
<pre><code> DATE 51 52 53 54 55 56
0 20110706 28.52 27.52 26.52 25.52 24.52 23.52
1 20110707 28.97 27.97 26.97 25.97 24.97 23.97
2 20110708 28.52 27.52 26.52 25.52 24.52 23.52
3 20110709 28.97 27.97 26.97 25.97 24.97 23.97
4 20110710 30.5 29.5 28.5 27.5 26.5 25.5
5 20110711 32.93 31.93 30.93 29.93 28.93 27.93
6 20110712 35.54 34.54 33.54 32.54 31.54 30.54
7 20110713 33.02 32.02 31.02 30.02 29.02 28.02
8 20110730 35.99 34.99 33.99 32.99 31.99 30.99
9 20110731 30.5 29.5 28.5 27.5 26.5 25.5
10 20110801 32.48 31.48 30.48 29.48 28.48 27.48
11 20110802 31.04 30.04 29.04 28.04 27.04 26.04
12 20110803 32.03 31.03 30.03 29.03 28.03 27.03
13 20110804 34.01 33.01 32.01 31.01 30.01 29.01
14 20110805 27.44 26.44 25.44 24.44 23.44 22.44
15 20110806 32.48 31.48 30.48 29.48 28.48 27.48
</code></pre>
<p>What I would like is to edit ("test.xlsx",'sheet1') to result in what is below:</p>
<pre><code> DATE 51 52 53 54 55 56
0 201107 313.46 303.46 293.46 283.46 273.46 263.46
1 201108 189.48 183.48 177.48 171.48 165.48 159.48
</code></pre>
<p>How would I go about implementing this? </p>
<p>Here is my code thus far:</p>
<pre><code>import pandas as pd
from pandas import ExcelWriter
df = pd.read_excel('thecddhddtestquecdd.xlsx')
def sep_yearmonths(x):
    x['month'] = str(x['DATE'])[:-2]
    return x
df = df.apply(sep_yearmonths,axis=1)
df.groupby('month').sum()
writer = ExcelWriter('thecddhddtestquecddMERGE.xlsx')
df.to_excel(writer,'Sheet1',index=False)
writer.save()
</code></pre>
|
<p>This will work if 'DATE' is a column of strings and not your index.</p>
<p>Example dataframe - shortened for clarity:</p>
<pre><code>df = pd.DataFrame({'DATE': {0: '20110706', 1:'20110707', 2: '20110801'},
52: {0: 28.52, 1: 28.97, 2: 28.52},
55: { 0: 24.52, 1: 24.97, 2:24.52 }
})
</code></pre>
<p>Which yields:</p>
<pre><code> 52 55 DATE
0 28.52 24.52 20110706
1 28.97 24.97 20110707
2 28.52 24.52 20110801
</code></pre>
<p>Apply the following function over the dataframe to generate a new column:</p>
<pre><code>def sep_yearmonths(x):
    x['month'] = x['DATE'][:-2]
    return x
</code></pre>
<p>Like this:</p>
<pre><code>df = df.apply(sep_yearmonths,axis=1)
</code></pre>
<p>Over which you can then groupby and sum:</p>
<pre><code>df.groupby('month').sum()
</code></pre>
<p>Resulting in the following:</p>
<pre><code> 52 55
month
201107 57.49 49.49
201108 28.52 24.52
</code></pre>
<p>If 'date' is your index, simply call <code>reset_index</code> before. If it's not a column of string values, then you need to do that beforehand.</p>
<p>Finally, you can rename your 'month' column to 'DATE'. I suppose you could just substitute the column 'DATE' in place, but I choose to do things explicitly. You can do that like so:</p>
<pre><code>df['DATE'] = df['DATE'].apply(lambda x: x[:-2])
</code></pre>
<p>Then 'groupby' 'DATE' instead of month.</p>
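<p>One caveat about the code in the question: <code>groupby(...).sum()</code> returns a new DataFrame rather than modifying <code>df</code> in place, so assign the result before writing it out. A sketch:</p>
<pre><code>grouped = df.groupby('DATE', as_index=False).sum()
grouped.to_excel(writer, 'Sheet1', index=False)
</code></pre>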
|
python|excel|date|pandas
| 2
|
376,728
| 32,868,177
|
Flattening shallow list with pandas
|
<p>I am trying to flatten the content of a column of a <code>pandas.DataFrame</code> which contains lists of lists; however, I cannot find a proper way to get the correct output.</p>
<p>Unlike a <a href="https://stackoverflow.com/questions/406121/flattening-a-shallow-list-in-python">different question</a> asked on StackOverflow about the same subject, here the focus is the flattening process inside each row of a <code>pandas.DataFrame</code>.</p>
<p>Here is a toy example :</p>
<pre><code>df = pd.DataFrame({ 'recipe': [['olive oil',
'low sodium chicken broth',
'cilantro leaves',
'chile powder',
'fresh thyme'],
['coconut milk', 'frozen banana', 'pure acai puree', 'almond butter'],
['egg',
'whole milk',
'extra-virgin olive oil',
'garlic cloves',
'corn kernels',
'chicken breasts']],
'category': ['A', 'B', 'B']
})
df_grouped = df.groupby('category')['recipe'].apply(lambda x: x.tolist())
df_grouped = df_grouped.reset_index()
df_grouped['recipe'][1]
</code></pre>
<p>This produce the following output :</p>
<pre><code>[['coconut milk', 'frozen banana', 'pure acai puree', 'almond butter'], ['egg', 'whole milk', 'extra-virgin olive oil', 'garlic cloves', 'corn kernels', 'chicken breasts']]
</code></pre>
<p>My objective is to merge, row by row, every list of words or sentences.
I tried the following code, but it splits every string into individual letters.</p>
<pre><code>join = lambda list_of_lists: (val for sublist in list_of_lists for val in sublist)
df_grouped['merged'] = df_grouped['recipe'].apply(lambda x: list(join(x)))
df_grouped['merged']
</code></pre>
<p>This produce :</p>
<pre><code>0 [o, l, i, v, e, , o, i, l, l, o, w, , s, o, ...
1 [c, o, c, o, n, u, t, , m, i, l, k, f, r, o, ...
</code></pre>
<p>I would like the following output for each row, one array with all words</p>
<pre><code>['coconut milk', 'frozen banana', 'pure acai puree', 'almond butter', 'egg', 'whole milk', 'extra-virgin olive oil', 'garlic cloves', 'corn kernels', 'chicken breasts']
</code></pre>
|
<p>Just add an <code>isinstance</code> guard to the join, so that bare strings are not iterated character by character:</p>
<pre><code>join = lambda list_of_lists: (val for sublist in list_of_lists if isinstance(sublist, list) for val in sublist)
</code></pre>
<p>Here is the output :</p>
<pre><code>In[69]: df_grouped['merged'] = df_grouped['recipe'].apply(lambda x: list(join(x)))
In[70]: df_grouped['merged']
Out[70]:
0 [olive oil, low sodium chicken broth, cilantro...
1 [coconut milk, frozen banana, pure acai puree,...
Name: merged, dtype: object
</code></pre>
|
python|python-3.x|pandas|flatten
| 1
|
376,729
| 32,926,116
|
What is the most canonical way to install Numpy, Scipy, Pandas?
|
<p>I need to install Numpy, Scipy, Pandas, and sklearn.
I downloaded Numpy and Scipy individually from their websites, but Scipy warns not to install manually, and when I try to install Numpy I get an error 'could not locate executable g77'. Scipy recommends using Anaconda. I have downloaded Anaconda, but cannot see how to download Numpy and Scipy using it. Also, apparently, I can use Macports, for which I need Xcode, and I have tried downloading Xcode twice and it won't open - the reviews say the new version is buggy.
Thus, what is the most conventional and effective way to install these tools? Did Anaconda automatically install Numpy and Scipy?</p>
|
<p>There are many well-made scientific distributions available today. For example, <a href="http://www.pyzo.org" rel="nofollow">pyzo</a>
is a nice, modern, plug-and-play option.</p>
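<p>To the last question: yes, Anaconda ships NumPy, SciPy, pandas and scikit-learn out of the box. A quick way to confirm from Python, as a minimal sketch:</p>
<pre><code>import numpy, scipy, pandas, sklearn
for mod in (numpy, scipy, pandas, sklearn):
    print(mod.__name__, mod.__version__)
</code></pre>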
|
python|numpy
| 0
|
376,730
| 33,055,718
|
Splitting data in Pandas/Python
|
<p>I'm new to Python and Pandas so bear with me.</p>
<p>I have a big data that looks like:</p>
<pre><code>1 E 1 NaN
2 T 2004-09-21 01:15:53 NaN
3 U 30 NaN
4 N 32 NaN
5 V 1 2004-09-14 16:26:00
6 V -1 2004-09-14 16:53:00
7 V 1 2004-09-14 17:08:00
...................................................
18 E 1 Nan
19 T 2004-10-21 02:13:43 Nan
20 U 35 Nan
21 N 40 Nan
22 V 1 2004-10-19 14:50:00
23 V 1 2004-10-20 15:31:00
24 V 1 2004-10-21 13:49:00
25 V 1 2004-10-21 20:57:00
26 V 1 2004-10-21 22:11:00
...................................................
</code></pre>
<p>How can I split this into individual little data sets, lets say <code>x(i)</code> , where <code>i=0,...,N</code>, and for example <code>x(0)</code> looks like:</p>
<pre><code> 1 E 1 NaN
2 T 2004-09-21 01:15:53 NaN
3 U 30 NaN
4 N 32 NaN
5 V 1 2004-09-14 16:26:00
6 V -1 2004-09-14 16:53:00
7 V 1 2004-09-14 17:08:00
...................................................
17 V 1 2004-09-16 12:38:01
</code></pre>
<p>I guess I should use some loop command for going from <code>E</code> to <code>E</code>, but I'm not quite sure how to divide it into individual sets.</p>
|
<p>You can use <code>groupby</code> here, using the compare-cumsum-groupby pattern (here let's say that the column with the Es is called "letter"):</p>
<pre><code>>>> grouped = df.groupby((df["letter"] == "E").cumsum())
>>> frames = [g for k,g in grouped]
>>> for frame in frames:
... print(frame)
... print("--")
...
letter
0 E
1 T
2 U
--
letter
3 E
4 M
--
letter
5 E
--
letter
6 E
--
</code></pre>
<hr>
<p>This works because we can compare everything to E, creating a Series of booleans:</p>
<pre><code>>>> df["letter"] == "E"
0 True
1 False
2 False
3 True
4 False
5 True
6 True
Name: letter, dtype: bool
</code></pre>
<p>and then if we take the cumulative sum of that we get </p>
<pre><code>>>> (df["letter"] == "E").cumsum()
0 1
1 1
2 1
3 2
4 2
5 3
6 4
Name: letter, dtype: int32
</code></pre>
<p>where each new group has its own number. Reading the <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow">split-apply-combine</a> section of the documentation is probably a good idea-- you might not even need to break everything up into subframes if the operation you want to perform on the groups is already supported.</p>
|
python|pandas|split
| 1
|
376,731
| 32,633,944
|
pandas extrapolation of polynomial
|
<p>Interpolating is easy in pandas using <code>df.interpolate()</code>. Is there a method in pandas that, with the same elegance, does something like extrapolation? I know my extrapolation should be fitted to a second-degree polynomial.</p>
|
<p>"With the same elegance" is a somewhat tall order but this can be done. As far as I'm aware you'll need to compute the extrapolated values manually. Note it is very unlikely these values will be very meaningful unless the data you are operating on actually obey a law of the form of the interpolant.</p>
<p>For example, since you requested a second degree polynomial fit:</p>
<pre><code>import numpy as np
t = df["time"]
dat = df["data"]
p = np.poly1d(np.polyfit(t, dat, 2))
</code></pre>
<p>Now p(t) is the value of the best-fit polynomial at time t.</p>
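<p>A minimal end-to-end sketch, assuming hypothetical numeric <code>time</code>/<code>data</code> columns; the fitted polynomial extrapolates in either direction:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({"time": [0, 1, 2, 3, 4],
                   "data": [1.0, 2.1, 5.2, 9.8, 17.1]})  # roughly quadratic
p = np.poly1d(np.polyfit(df["time"], df["data"], 2))
print(p(6))    # extrapolated value beyond the observed range
print(p(-1))   # extrapolation works backwards as well
</code></pre>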
|
python|numpy|pandas
| 2
|
376,732
| 32,999,650
|
Python pandas map dict keys to values
|
<p>I have a csv for input, whose row values I'd like to join into a new field. This new field is a constructed url, which will then be processed by the requests.post() method.</p>
<p>I am constructing my url correctly, but my issue is with the data object that should be passed to requests. How can I have the correct values passed to their proper keys when my dictionary is unordered? If I need to use an ordered dict, how can I properly set it up with my current format?</p>
<p>Here is what I have:</p>
<pre><code>import pandas as pd
import numpy as np
import requests
test_df = pd.read_csv('frame1.csv')
headers = {'content-type': 'application/x-www-form-urlencoded'}
test_df['FIRST_NAME'] = test_df['FIRST_NAME'].astype(str)
test_df['LAST_NAME'] = test_df['LAST_NAME'].astype(str)
test_df['ADDRESS_1'] = test_df['ADDRESS_1'].astype(str)
test_df['CITY'] = test_df['CITY'].astype(str)
test_df['req'] = 'site-url.com?' + '&FIRST_NAME=' + test_df['FIRST_NAME'] + '&LAST_NAME=' + \
test_df['LAST_NAME'] + '&ADDRESS_1=' + test_df['ADDRESS_1'] + '&CITY=' + test_df['CITY']
arr = test_df.values
d = {'FIRST_NAME':test_df['FIRST_NAME'], 'LAST_NAME':test_df['LAST_NAME'],
'ADDRESS_1':test_df['ADDRESS_1'], 'CITY':test_df['CITY']}
test_df = pd.DataFrame(arr[0:, 0:], columns=d, dtype=np.str)
data = test_df.to_dict()
data = {k: v for k, v in data.items()}
test_df['raw_result'] = test_df['req'].apply(lambda x: requests.post(x, headers=headers,
data=data).content)
test_df.to_csv('frame1_result.csv')
</code></pre>
<p>I tried to map values to keys with a dict comprehension, but the assignment of a key like <code>FIRST_NAME</code> could end up mapping to values from an arbitrary field like <code>test_df['CITY']</code>.</p>
|
<p>Not sure if I understand the problem correctly. However, you can pass an argument to the <code>to_dict</code> function, e.g.</p>
<pre><code>data = test_df.to_dict(orient='records')
</code></pre>
<p>which will give you output as follows: <code>[{'FIRST_NAME': ..., 'LAST_NAME': ...}, {'FIRST_NAME': ..., 'LAST_NAME': ...}]</code> (which will give you a list that has equal length as <code>test_df</code>). This might be one possibility to easily map it to a correct row.</p>
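<p>A tiny sketch of what that looks like, with hypothetical data (key order within each dict may vary):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'FIRST_NAME': ['Ann', 'Bob'], 'CITY': ['Rome', 'Oslo']})
print(df.to_dict(orient='records'))
# [{'CITY': 'Rome', 'FIRST_NAME': 'Ann'}, {'CITY': 'Oslo', 'FIRST_NAME': 'Bob'}]
# one dict per row, so row i of the frame maps to element i of the list
</code></pre>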
|
python|dictionary|pandas
| 0
|
376,733
| 32,896,097
|
Error in Python while doing text preprocessing
|
<p>I have written a couple of functions to work on text documents and convert them into a bag of words. Before that, I clean the text by removing stop words, tokenizing, etc., and store the cleaned text docs as a list which I intend to pass as an argument to another function that will create bag-of-words features from it. </p>
<p>Here is the functions: </p>
<pre><code>def cleaningDocs(doc, stem):  # 'S' for Stemming, 'L' for Lemmatization
    """This function cleans each doc string by doing the following:
    i)   Removing punctuation and other non-alphabetical characters
    ii)  Converting to lower case and splitting the string into words (tokenization)
    iii) Removing stop words (most frequent words)
    iv)  Doing Stemming or Lemmatization
    """
    # Removing punctuation and other non-alphabetic characters
    import re
    alphabets_only = re.sub(r'[^a-zA-Z]', " ", doc)

    # Converting to lower case and splitting into words (tokenization)
    words_lower = alphabets_only.lower().split()

    # Removing stop words (words like 'a', 'an', 'is', 'the' which don't contribute anything)
    from nltk.corpus import stopwords
    useful_words = [w for w in words_lower if not w in set(stopwords.words("english"))]

    # Doing Stemming or Lemmatization (normalising the text)
    from nltk.stem import PorterStemmer, WordNetLemmatizer
    if stem == 'S':  # Choosing between Stemming ('S') and Lemmatization ('L')
        stemmer = PorterStemmer()
        final_words = [stemmer.stem(x) for x in useful_words]
    else:
        lemma = WordNetLemmatizer()
        final_words = [lemma.lemmatize(x) for x in useful_words]

    return str(" ".join(final_words))
</code></pre>
</code></pre>
<p>Now here is a list of document strings. This is a pandas Series object. </p>
<pre><code>type(docs)
Out[53]:
pandas.core.series.Series
</code></pre>
<p>Each element within this document is a string. Basically, each element is a text document, and I want to pre-process each one (get rid of stop words, lemmatize, etc.) and save the result as a new processed list. </p>
<pre><code>type(docs[0])
Out[55]:
str
</code></pre>
<p>Ideally I want to do something like this: </p>
<pre><code>doc = []
for x in docs:
    doc.append(cleaningDocs(x, "L"))
</code></pre>
<p>So that for each string document within the docs Series, we strip out the stop words and other noise and save the result back to a list of documents. </p>
<p>The above code is giving me this error: </p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-56-61345bb4d581> in <module>()
1 doc=[]
2 for x in docs:
----> 3 doc.append(cleaningDocs(x,"L"))
4
<ipython-input-42-6e1c58274c3d> in cleaningDocs(doc, stem)
13 # Removing punctuations and other non alphabetic characters
14 import re
---> 15 alphabets_only=re.sub(r'[^a-zA-Z]'," ",doc)
16
17 # Converting to lower case and splitting the words(tokenization)
/Users/mtripathi/anaconda/lib/python2.7/re.pyc in sub(pattern, repl, string, count, flags)
153 a callable, it's passed the match object and must return
154 a replacement string to be used."""
--> 155 return _compile(pattern, flags).sub(repl, string, count)
156
157 def subn(pattern, repl, string, count=0, flags=0):
TypeError: expected string or buffer
</code></pre>
<p>Whereas if I debug it by passing just the first element of docs (the first document), adding a break to the above for loop, it works totally fine. </p>
<pre><code>doc = []
for x in docs:
    doc.append(cleaningDocs(x, "L"))
    break
doc
</code></pre>
<p>As you can see, the function does as required, removing the stop words from the original document, lemmatizing it, and then saving it back to a new processed list of documents. But it only works if I send one document at a time. When sending all documents in a for loop, it throws an error; why would that be? </p>
<p>EDIT: </p>
<p>OK, I just checked the type of each element in docs and see that something is off: some elements are being cast as floats. See below: </p>
<pre><code>for x in docs:
    print(type(x))
<type 'str'>
<type 'str'>
<type 'str'>
<type 'float'>
<type 'str'>
<type 'str'>
<type 'str'>
<type 'str'>
<type 'str'>
<type 'str'>
<type 'str'>
<type 'str'>
<type 'str'>
<type 'str'>
<type 'str'>
<type 'str'>
<type 'str'>
<type 'str'>
<type 'str'>
</code></pre>
<p>So couple of things: </p>
<p>1) I want to see those elements which are float in this doc.
2). Second if I want to convert all the elements to string I guess we can do .astype()?</p>
|
<p>So there are two main ideas here: <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow">Boolean indexing</a> and <a href="http://pandas.pydata.org/pandas-docs/stable/basics.html#function-application" rel="nofollow">function application</a></p>
<p>Boolean indexing allows you to use a true/false array to select a subset of a Series, and function application applies a single function to every item in a Series.</p>
<p>First, apply <code>isinstance</code> to determine which elements are floats, then slice your series to get the elements back.</p>
<p>Then just apply <code>str</code> and you're good.</p>
<pre><code>import pandas as pd
test = pd.Series(["Hey", "I'm", 1.0, "or", 2.0, "floats"])
# Find floats
floats = test[test.apply(lambda x: isinstance(x, float))]
# Make all strings
test_as_strings = test.apply(str)
</code></pre>
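<p>Putting both parts together for the question's <code>docs</code> Series, a sketch assuming the stray floats are missing values you want to drop (if they are real data, cast with <code>astype(str)</code> instead of dropping):</p>
<pre><code># inspect the offending float rows
print(docs[docs.apply(lambda x: isinstance(x, float))])

# drop them and cast the rest to str before cleaning
doc = [cleaningDocs(x, "L") for x in docs.dropna().astype(str)]
</code></pre>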
|
python|pandas|nltk
| 0
|
376,734
| 38,515,782
|
scipy.signal's convolve differs from calculated result
|
<p>I'd like to discuss a little bit on convolution as applied to CNNs and image filtering... If you have an RGB image (dimensions of say <code>3xIxI</code>) and <code>K</code> filters, each of size <code>3xFxF</code>, then you would end up with a <code>Kx(I - F + 1)x(I - F + 1)</code> output, assuming your stride is <code>1</code> and you only consider completely overlapping regions (no padding). </p>
<p>From all the material I've read on convolution, you're basically sliding each filter over the image, and at each stage computing a large number of dot products and then summing them up to get a single value. </p>
<p>For example:</p>
<pre><code>I -> 3x5x5 matrix
F -> 3x2x2 matrix
I * F -> 1x4x4 matrix
</code></pre>
<p>(Assume <code>*</code> is the convolution operation.)</p>
<p>Now, since both your kernel and image have the same number of channels, you are going to end up separating your 3D convolution into a number of parallel 2D convolutions, followed by a matrix summation. </p>
<p>Therefore, the above example should for all intents and purposes (assuming there is no padding and we are only considering completely overlapping regions) be the same as this:</p>
<pre><code>I -> 3x5x5 matrix
F -> 3x2x2 matrix
(I[0] * F[0]) + (I[1] * F[1]) + (I[2] * F[2]) -> 1x4x4 matrix
</code></pre>
<p>I am just separating each channel and convolving them independently. Please, look at this carefully and correct me if I'm wrong. </p>
<p>Now, on the assumption that this makes sense, I've carried out the following experiment in python.</p>
<pre><code>import scipy.signal
import numpy as np
import test
x = np.random.randint(0, 10, (3, 5, 5)).astype(np.float32)
w = np.random.randint(0, 10, (3, 2, 2)).astype(np.float32)
r1 = np.sum([scipy.signal.convolve(x[i], w[i], 'valid') for i in range(3)], axis=0).reshape(1, 4, 4)
r2 = scipy.signal.convolve(x, w, 'valid')
print r1.shape
print r1
print r2.shape
print r2
</code></pre>
<p>This gives me the following result:</p>
<pre><code>(1, 4, 4)
[[[ 268. 229. 297. 305.]
[ 256. 292. 322. 190.]
[ 173. 240. 283. 243.]
[ 291. 271. 302. 346.]]]
(1, 4, 4)
[[[ 247. 229. 291. 263.]
[ 198. 297. 342. 233.]
[ 208. 268. 268. 185.]
[ 276. 272. 280. 372.]]]
</code></pre>
<p>I'd just like to know whether this is due to:</p>
<ul>
<li>A bug in scipy (less likely)</li>
<li>A mistake in my program (more likely)</li>
<li>My misunderstanding of overlapping convolution (most likely)</li>
</ul>
<p>Or any combination of the above. Thanks for reading!</p>
|
<p>You wrote:</p>
<blockquote>
<p>... the same as this:</p>
</blockquote>
<pre><code>I -> 3x5x5 matrix
F -> 3x2x2 matrix
(I[0] * F[0]) + (I[1] * F[1]) + (I[2] * F[2]) -> 1x4x4 matrix
</code></pre>
<p>You have forgotten that convolution <em>reverses</em> one of the arguments. So the above is not true. Instead, the last line should be:</p>
<pre><code>(I[0] * F[2]) + (I[1] * F[1]) + (I[2] * F[0]) -> 1x4x4 matrix
</code></pre>
<p>For example,</p>
<pre><code>In [28]: r1 = np.sum([scipy.signal.convolve(x[i], w[2-i], 'valid') for i in range(3)], axis=0).reshape(1, 4, 4)
In [29]: r2 = scipy.signal.convolve(x, w, 'valid')
In [30]: r1
Out[30]:
array([[[ 169., 223., 277., 199.],
[ 226., 213., 206., 247.],
[ 192., 252., 332., 369.],
[ 167., 266., 321., 323.]]], dtype=float32)
In [31]: r2
Out[31]:
array([[[ 169., 223., 277., 199.],
[ 226., 213., 206., 247.],
[ 192., 252., 332., 369.],
[ 167., 266., 321., 323.]]], dtype=float32)
</code></pre>
|
python|numpy|scipy|convolution
| 4
|
376,735
| 38,576,969
|
Pct_change in python with missing data
|
<p>I have quarterly time series data that I am calculating derivatives for. The problem is, the raw data has gaps in the time series. Therefore, if I am trying to find the quarter-over-quarter percent change in a variable, there are times when it will not realize it is calculating a percent change over a period much longer than a quarter. How do I make sure the pct_change() is only applied when the preceding data point is from the previous quarter (not further back)? </p>
<p>Related to this, I am looking to calculate Year-over-year percent changes, which would have to go back 4 periods. I could use pct_change and just have it look back 4 periods rather than 1, but again, that assumes all the data is present.</p>
<p>What would be the best approach for handling this situation?</p>
<p>Below is the code I would use if the data was perfect:</p>
<pre><code>dataRGQoQ = rawdata.groupby("ticker")['revenueusd'].pct_change()
</code></pre>
<p>I have included sample data below. There are 2 points in this data to focus on: (1) with ticker 'A', the gap between '2006-09-30' and '2007-12-31'; and (2) with ABBV the gap (this time is slightly different because it has the dates and no data) between '2012-12-31' and '2013-12-31'.</p>
<pre><code>ticker,calendardate,revenueusd
A,2005-12-31,5139000000
A,2006-03-31,4817000000
A,2006-06-30,4560000000
A,2006-09-30,4325000000
A,2007-12-31,5420000000
A,2008-03-31,5533000000
A,2008-06-30,5669000000
A,2008-09-30,5739000000
AA,2005-12-31,26159000000
AA,2006-03-31,27242000000
AA,2006-06-30,28438000000
AA,2006-09-30,29503000000
AA,2006-12-31,30379000000
AA,2007-03-31,31338000000
AA,2007-06-30,31445000000
AA,2007-09-30,31201000000
AA,2007-12-31,30748000000
ABBV,2012-12-31,18380000000
ABBV,2013-03-31,
ABBV,2013-06-30,
ABBV,2013-09-30,
ABBV,2013-12-31,18790000000
ABBV,2014-03-31,19024000000
ABBV,2014-06-30,19258000000
ABBV,2014-09-30,19619000000
ABBV,2014-12-31,19960000000
ABBV,2015-03-31,20437000000
</code></pre>
|
<p>I'm going to put <code>['calendardate', 'ticker']</code> in the index to facilitate pivoting. Then <code>unstack</code> to get ticker values in the columns.</p>
<pre><code>df.set_index(['calendardate', 'ticker']).unstack().head(10)
</code></pre>
<p><a href="https://i.stack.imgur.com/2jMrS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2jMrS.png" alt="enter image description here"></a></p>
<p>With <code>calendardate</code> in the index, we can use <code>resample('Q')</code> to insert all quarters. This will ensure we get the proper <code>NaN</code>'s for missing quarters.</p>
<pre><code>df.set_index(['calendardate', 'ticker']).unstack().resample('Q').mean().head(10)
</code></pre>
<p>Assign this to <code>df1</code> and then we can do <code>pct_change</code>, <code>stack</code> back and <code>reset_index</code> to get columns back in the dataframe proper.</p>
<pre><code>df1 = df.set_index(['calendardate', 'ticker']).unstack().resample('Q').mean()
df1.pct_change().stack().reset_index()
</code></pre>
<p><a href="https://i.stack.imgur.com/NMxjv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NMxjv.png" alt="enter image description here"></a></p>
|
python|numpy|pandas
| 1
|
376,736
| 38,854,582
|
how to convert a (possibly negative) Pandas TimeDelta in minutes (float)?
|
<p>I have a dataframe like this</p>
<pre><code>df[['timestamp_utc','minute_ts','delta']].head()
Out[47]:
timestamp_utc minute_ts delta
0 2015-05-21 14:06:33.414 2015-05-21 12:06:00 -1 days +21:59:26.586000
1 2015-05-21 14:06:33.414 2015-05-21 12:07:00 -1 days +22:00:26.586000
2 2015-05-21 14:06:33.414 2015-05-21 12:08:00 -1 days +22:01:26.586000
3 2015-05-21 14:06:33.414 2015-05-21 12:09:00 -1 days +22:02:26.586000
4 2015-05-21 14:06:33.414 2015-05-21 12:10:00 -1 days +22:03:26.586000
</code></pre>
<p>Where <code>df['delta']=df.minute_ts-df.timestamp_utc</code></p>
<pre><code>timestamp_utc datetime64[ns]
minute_ts datetime64[ns]
delta timedelta64[ns]
</code></pre>
<p>The problem is, I would like to get the <strong>number of (possibly negative) minutes</strong> between <code>timestamp_utc</code> and <code>minute_ts</code>, disregarding the seconds component. </p>
<p>So for the first row I would like to get <code>-120</code>. Indeed,<code>2015-05-21 12:06:00</code> is 120 minutes before <code>2015-05-21 14:06:33.414</code>.</p>
<p>What is the most pandaesque way to do it?</p>
<p>Many thanks!</p>
|
<p>You can use:</p>
<pre><code>df['a'] = df['delta'] / np.timedelta64(1, 'm')
print (df)
timestamp_utc minute_ts delta \
0 2015-05-21 14:06:33.414 2015-05-21 12:06:00 -1 days +21:59:26.586000
1 2015-05-21 14:06:33.414 2015-05-21 12:07:00 -1 days +22:00:26.586000
2 2015-05-21 14:06:33.414 2015-05-21 12:08:00 -1 days +22:01:26.586000
3 2015-05-21 14:06:33.414 2015-05-21 12:09:00 -1 days +22:02:26.586000
4 2015-05-21 14:06:33.414 2015-05-21 12:10:00 -1 days +22:03:26.586000
a
0 -120.5569
1 -119.5569
2 -118.5569
3 -117.5569
4 -116.5569
</code></pre>
<p>And then convert <code>float</code> to <code>int</code>:</p>
<pre><code>df['a'] = (df['delta'] / np.timedelta64(1, 'm')).astype(int)
print (df)
timestamp_utc minute_ts delta a
0 2015-05-21 14:06:33.414 2015-05-21 12:06:00 -1 days +21:59:26.586000 -120
1 2015-05-21 14:06:33.414 2015-05-21 12:07:00 -1 days +22:00:26.586000 -119
2 2015-05-21 14:06:33.414 2015-05-21 12:08:00 -1 days +22:01:26.586000 -118
3 2015-05-21 14:06:33.414 2015-05-21 12:09:00 -1 days +22:02:26.586000 -117
4 2015-05-21 14:06:33.414 2015-05-21 12:10:00 -1 days +22:03:26.586000 -116
</code></pre>
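<p>An equivalent spelling uses the <code>.dt</code> accessor; note that <code>astype(int)</code> truncates towards zero, which is exactly what turns <code>-120.5569</code> into <code>-120</code> here:</p>
<pre><code>df['a'] = (df['delta'].dt.total_seconds() / 60).astype(int)
</code></pre>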
|
python|datetime|pandas
| 8
|
376,737
| 38,594,625
|
How to feature-ize timeseries data in Pandas?
|
<p>I have data that are structured as below:</p>
<pre><code>Group, ID, Time, Feat1, Feat2, Feat3
A, 1, 0, 1.52, 2.94, 3.1
A, 1, 2, 1.67, 2.99, 3.3
A, 1, 4, 1.9, 3.34, 5.6
</code></pre>
<p>In this data, there are individuals who have been measured repeatedly.</p>
<p>I'd like to restructure the data such that each feature-time combination is a unique column, as below:</p>
<pre><code>Group, ID, Feat1_Time0, Feat1_Time2, Feat1_Time4, Feat2_Time0, Feat2_Time2, Feat2_Time4, Feat3_Time0, Feat3_Time2, Feat3_Time4
A, 1, 1.52, 2.94, 3.1, 1.67, 2.99, 3.3, 1.9, 3.34, 5.6
</code></pre>
<p>Is there a simple way to handle this, without using a for-loop? I've tried accomplishing what I need with the for-loop method, but it is inelegant and clunky, and given real data of 10<sup>4</sup> columns, it would take a while as well.</p>
|
<pre><code>df = pd.DataFrame({'Group': {0: 'A', 1: 'A', 2: 'A', 3: 'A', 4: 'A', 5: 'A'},
                   'Time': {0: 0, 1: 2, 2: 4, 3: 0, 4: 2, 5: 4},
                   'ID': {0: 1, 1: 1, 2: 1, 3: 2, 4: 2, 5: 2},
                   'Feat1': {0: 1.52, 1: 1.67, 2: 1.9, 3: 1.52, 4: 1.67, 5: 1.9},
                   'Feat3': {0: 3.1, 1: 3.3, 2: 5.6, 3: 3.1, 4: 3.3, 5: 5.6},
                   'Feat2': {0: 2.94, 1: 2.99, 2: 3.34, 3: 2.94, 4: 2.99, 5: 3.34}})
df1 = df.set_index(['Group', 'ID', 'Time']).unstack()
df1
</code></pre>
<p><a href="https://i.stack.imgur.com/e8t8l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e8t8l.png" alt="enter image description here"></a></p>
<pre><code>df1.columns = df1.columns.to_series().apply(pd.Series).astype(str).T.apply('_'.join)
df1.reset_index()
</code></pre>
<p><a href="https://i.stack.imgur.com/d1T9i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d1T9i.png" alt="enter image description here"></a></p>
|
python|pandas
| 1
|
376,738
| 38,720,745
|
Setting boolean values in pandas dataframe (by date) based on column header membership in other dataframe (by date)
|
<p>I have two pandas dataframes (X and Y) and am trying to populate a third (Z) with boolean values based on interrelationships between the axes of X and the columns/constituents of Y. I could only manage to do this via nested loops and the code works on my toy example but is too slow for my actual data set.</p>
<pre><code># define X, Y and Z
idx=pd.date_range('2016-1-31',periods=3,freq='M')
codes = list('ABCD')
X = np.random.randn(3,4)
X = pd.DataFrame(X,columns=codes,index=idx)
Y = [['A','A','B'],['C','B','C'],['','C','D']]
Y = pd.DataFrame(Y,columns=idx)
Z = pd.DataFrame(columns=X.columns, index=X.index)
</code></pre>
<p>As you can see the index of X matches the columns of Y in this example. In my real example the columns of Y are a subset of the index of X.</p>
<p>Z's axes match X's. I want to populate elements of Z with True if the column header of Z is in the column of Y with header equal to the index of Z. My working code is as follows:</p>
<pre><code>for r in Y:
for c in Z:
Z.loc[r,c] = c in Y[r].values
</code></pre>
<p>The code is pretty clean and short, but it takes a LONG time to run on the larger data sets. I'm hoping there is a vectorised way to achieve the same result much faster.</p>
<p>Any help would be greatly appreciated</p>
<p>Thanks!</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow"><code>stack</code></a> method, where values of DataFrame are converted to columns and columns to values of DataFrames. Last test <code>NaN</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.notnull.html" rel="nofollow"><code>notnull</code></a>:</p>
<pre><code>print (Y.replace({'':np.nan})
        .stack()
        .reset_index(0)
        .set_index(0, append=True)
        .squeeze()
        .unstack()
        .rename_axis(None, axis=1)
        .notnull())
A B C D
2016-01-31 True False True False
2016-02-29 True True True False
2016-03-31 False True True True
</code></pre>
<p>Another solution with <code>pivot</code>:</p>
<pre><code>print (Y.replace({'':np.nan})
        .stack()
        .reset_index(name='a')
        .pivot(index='level_1', columns='a', values='level_0')
        .rename_axis(None, axis=1)
        .rename_axis(None)
        .notnull())
A B C D
2016-01-31 True False True False
2016-02-29 True True True False
2016-03-31 False True True True
</code></pre>
<p>EDIT by comment:</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow"><code>reindex</code></a> if indexes are unique and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html" rel="nofollow"><code>fillna</code></a> by <code>False</code>:</p>
<pre><code>import pandas as pd
import numpy as np
# define X, Y and Z
idx=pd.date_range('2016-1-31',periods=5,freq='M')
codes = list('ABCD')
X = np.random.randn(5,4)
X = pd.DataFrame(X,columns=codes,index=idx)
Y = [['A','A','B'],['C','B','C'],['','C','D']]
Y = pd.DataFrame(Y,columns=idx[:3])
Z = pd.DataFrame(columns=X.columns, index=X.index)
print (X)
A B C D
2016-01-31 0.810348 -0.737780 -0.523869 -0.585772
2016-02-29 -1.126655 -0.494999 -1.388351 0.460340
2016-03-31 -1.578155 0.950643 -1.699921 1.149540
2016-04-30 -2.320711 1.263740 -1.401714 0.090788
2016-05-31 1.218036 0.565395 0.172278 0.288698
print (Y)
2016-01-31 2016-02-29 2016-03-31
0 A A B
1 C B C
2 C D
print (Z)
A B C D
2016-01-31 NaN NaN NaN NaN
2016-02-29 NaN NaN NaN NaN
2016-03-31 NaN NaN NaN NaN
2016-04-30 NaN NaN NaN NaN
2016-05-31 NaN NaN NaN NaN
</code></pre>
<pre><code>Y1 = (Y.replace({'':np.nan})
        .stack()
        .reset_index(name='a')
        .pivot(index='level_1', columns='a', values='level_0')
        .rename_axis(None, axis=1)
        .rename_axis(None)
        .notnull())
print (Y1)
A B C D
2016-01-31 True False True False
2016-02-29 True True True False
2016-03-31 False True True True
print (Y1.reindex(X.index).fillna(False))
A B C D
2016-01-31 True False True False
2016-02-29 True True True False
2016-03-31 False True True True
2016-04-30 False False False False
2016-05-31 False False False False
</code></pre>
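<p>For reference, a more compact alternative built on <code>Index.isin</code>, which checks every code against each column of <code>Y</code> in one vectorised pass (a sketch):</p>
<pre><code>Z = (Y.apply(lambda col: pd.Series(X.columns.isin(col), index=X.columns))
      .T
      .reindex(X.index)
      .fillna(False))
</code></pre>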
|
python|pandas|boolean|intersection
| 1
|
376,739
| 38,901,925
|
Unexpected colors in multiple scatterplots in matplotlib
|
<p>I'm sure I'm messing up something really simple here, but can't seem to figure it out. I'm simply trying to plot groups of data as scatterplots with different colors for each group by cycling through a dataframe and repeatedly calling <code>ax.scatter</code>. A minimal example is:</p>
<pre><code>import numpy as np; import pandas as pd; import matplotlib.pyplot as plt; import seaborn as sns
%matplotlib inline
df = pd.DataFrame({"Cat":list("AAABBBCCC"), "x":np.random.rand(9), "y":np.random.rand(9)})
fig, ax = plt.subplots()
for i, cat in enumerate(df.Cat.unique()):
    print(i, cat, sns.color_palette("husl", 3)[i])
    ax.scatter(df[df.Cat==cat].x.values, df[df.Cat==cat].y.values, marker="h", s=70,
               label=cat, color=sns.color_palette("husl", 3)[i])
ax.legend(loc=2)
</code></pre>
<p>I added the <code>print</code> statement for my own sanity to confirm that I am indeed cycling through the groups and choosing different colors. The output however looks as follows:</p>
<p><a href="https://i.stack.imgur.com/EZ0NR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EZ0NR.png" alt="enter image description here"></a></p>
<p>(If this is slightly hard to see: the groups A, B, and C have three very similar blues according to the legend, however all scatterpoints have different and seemingly unrelated colors, which aren't even identical across groups)</p>
<p>What is going on here?</p>
|
<p>You could use the <a href="http://pandas.pydata.org/pandas-docs/stable/visualization.html#scatter-plot" rel="nofollow noreferrer"><code>scatter()</code></a> method of <code>pandas</code>, specifying the target <code>ax</code> and repeating the plot calls, to draw multiple column groups on a single axes, <code>ax</code>.</p>
<pre><code># set random seed
np.random.seed(42)

fig, ax = plt.subplots()
for i, label in enumerate(df['Cat'].unique()):
    # select the rows whose Cat equals the given label (other rows become NaN)
    df['X'] = df[df['Cat'] == label]['x']
    df['Y'] = df[df['Cat'] == label]['y']
    df.plot.scatter(x='X', y='Y', color=sns.color_palette("husl", 3)[i], label=label, ax=ax)
ax.legend(loc=2)
</code></pre>
<p><a href="https://i.stack.imgur.com/lGbsW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGbsW.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib
| 1
|
376,740
| 38,808,643
|
tf.contrib.layers.embedding_column from tensor flow
|
<p>I am going through a TensorFlow tutorial (<a href="https://www.tensorflow.org/versions/r0.10/tutorials/wide_and_deep/index.html#tensorflow-wide-deep-learning-tutorial" rel="noreferrer">tensorflow</a>). I would like to find a description of the following line:</p>
<pre><code>tf.contrib.layers.embedding_column
</code></pre>
<p>I wonder if it uses word2vec or anything else, or maybe I am thinking in a completely wrong direction. I tried to click around on GitHub, but found nothing. I am guessing looking on GitHub is not going to be easy, since the Python might refer to some C++ libraries. Could anybody point me in the right direction?</p>
|
<p>I've been wondering about this too. It's not really clear to me what they're doing, but this is what I found.</p>
<p>In the <a href="http://arxiv.org/pdf/1606.07792v1.pdf" rel="noreferrer">paper on wide and deep learning</a>, they describe the embedding vectors as being randomly initialized and then adjusted during training to minimize error.</p>
<p>Normally when you do embeddings, you take some arbitrary vector representation of the data (such as one-hot vectors) and then multiply it by a matrix that represents the embedding. This matrix can be found by PCA or while training by something like t-SNE or word2vec.</p>
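<p>A minimal NumPy sketch of that lookup, with hypothetical sizes: indexing the embedding matrix by id is the same as multiplying by a one-hot vector.</p>
<pre><code>import numpy as np

vocab_size, dim = 5, 3
E = np.random.randn(vocab_size, dim)   # randomly initialized embedding matrix
ids = np.array([0, 3, 3, 1])           # sparse ids instead of one-hot vectors
one_hot = np.eye(vocab_size)[ids]
assert np.allclose(one_hot.dot(E), E[ids])
</code></pre>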
<p>The actual code for the embedding_column is <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/layers/python/layers/feature_column.py" rel="noreferrer">here</a>, and it's implemented as a class called _EmbeddingColumn which is a subclass of _FeatureColumn. It stores the embedding matrix inside its sparse_id_column attribute. Then, the method to_dnn_input_layer applies this embedding matrix to produce the embeddings for the next layer.</p>
<pre><code>def to_dnn_input_layer(self,
                       input_tensor,
                       weight_collections=None,
                       trainable=True):
    output, embedding_weights = _create_embedding_lookup(
        input_tensor=self.sparse_id_column.id_tensor(input_tensor),
        weight_tensor=self.sparse_id_column.weight_tensor(input_tensor),
        vocab_size=self.length,
        dimension=self.dimension,
        weight_collections=_add_variable_collection(weight_collections),
        initializer=self.initializer,
        combiner=self.combiner,
        trainable=trainable)
<p>So as far as I can see, it seems like the embeddings are formed by applying whatever learning rule you're using (gradient descent, etc.) to the embedding matrix.</p>
|
python|tensorflow|embedding
| 6
|
376,741
| 38,570,198
|
Python pandas make new column from data in existing column and from another dataframe
|
<p>I have a DataFrame called 'mydata', and if I do</p>
<pre><code>len(mydata.loc['2015-9-2'])
</code></pre>
<p>It counts the number of rows in mydata that have that date, and returns a number like</p>
<pre><code>1067
</code></pre>
<p>I have another DataFrame called 'yourdata' which looks something like</p>
<pre><code> timestamp
51 2015-06-22
52 2015-06-23
53 2015-06-24
54 2015-06-25
43 2015-07-13
</code></pre>
<p>Now I want use each date in yourdata so instead of typing in each date</p>
<pre><code>len(mydata.loc['2015-9-2'])
</code></pre>
<p>I can iterate through 'yourdata' using them like</p>
<pre><code>len(mydata.loc[yourdata['timestamp']])
</code></pre>
<p>and produce a new DataFrame with the results or just add a new column to yourdata with the result for each date, but I'm lost as how to do this?</p>
<p>The following does not work</p>
<pre><code>yourdata['result'] = len(mydata.loc[yourdata['timestamp']])
</code></pre>
<p>neither does this</p>
<pre><code>yourdata['result'] = len(mydata.loc[yourdata.iloc[:,-3]])
</code></pre>
<p>this does work</p>
<pre><code>yourdata['result'] = len(mydata.loc['2015-9-2'])
</code></pre>
<p>but that's no good, as I want to use the date in each row, not some fixed date.</p>
<p><strong>Edit</strong>: first few rows of mydata</p>
<pre><code> timestamp BPM
0 2015-08-30 16:48:00 65
1 2015-08-30 16:48:10 65
2 2015-08-30 16:48:15 66
3 2015-08-30 16:48:20 67
4 2015-08-30 16:48:30 70
</code></pre>
|
<pre><code>import numpy as np
import pandas as pd
mydata = pd.DataFrame({'timestamp': ['2015-06-22 16:48:00']*3 +
                                    ['2015-06-23 16:48:00']*2 +
                                    ['2015-06-24 16:48:00'] +
                                    ['2015-06-25 16:48:00']*4 +
                                    ['2015-07-13 16:48:00',
                                     '2015-08-13 16:48:00'],
                       'BPM': [65]*8 + [70]*4})
mydata['timestamp'] = pd.to_datetime(mydata['timestamp'])
print(mydata)
# BPM timestamp
# 0 65 2015-06-22 16:48:00
# 1 65 2015-06-22 16:48:00
# 2 65 2015-06-22 16:48:00
# 3 65 2015-06-23 16:48:00
# 4 65 2015-06-23 16:48:00
# 5 65 2015-06-24 16:48:00
# 6 65 2015-06-25 16:48:00
# 7 65 2015-06-25 16:48:00
# 8 70 2015-06-25 16:48:00
# 9 70 2015-06-25 16:48:00
# 10 70 2015-07-13 16:48:00
# 11 70 2015-08-13 16:48:00
yourdata = pd.Series(['2015-06-22', '2015-06-23', '2015-06-24',
'2015-06-25', '2015-07-13'], name='timestamp')
yourdata = pd.to_datetime(yourdata).to_frame()
print(yourdata)
# 0 2015-06-22
# 1 2015-06-23
# 2 2015-06-24
# 3 2015-06-25
# 4 2015-07-13
result = (mydata.set_index('timestamp').resample('D')
                .size().loc[yourdata['timestamp']]
                .reset_index())
result.columns = ['timestamp', 'result']
print(result)
# timestamp result
# 0 2015-06-22 3
# 1 2015-06-23 2
# 2 2015-06-24 1
# 3 2015-06-25 4
# 4 2015-07-13 1
</code></pre>
|
python|pandas|dataframe
| 1
|
376,742
| 38,614,659
|
Converting a single pandas index into a three level MultiIndex in python
|
<p>I have some data in a pandas dataframe which looks like this:</p>
<pre><code>gene VIM
time:2|treatment:TGFb|dose:0.1 -0.158406
time:2|treatment:TGFb|dose:1 0.039158
time:2|treatment:TGFb|dose:10 -0.052608
time:24|treatment:TGFb|dose:0.1 0.157153
time:24|treatment:TGFb|dose:1 0.206030
time:24|treatment:TGFb|dose:10 0.132580
time:48|treatment:TGFb|dose:0.1 -0.144209
time:48|treatment:TGFb|dose:1 -0.093910
time:48|treatment:TGFb|dose:10 -0.166819
time:6|treatment:TGFb|dose:0.1 0.097548
time:6|treatment:TGFb|dose:1 0.026664
time:6|treatment:TGFb|dose:10 -0.008032
</code></pre>
<p>where the left is an index. This is just a subsection of the data which is actually much larger. The index is composed of three components, time, treatment and dose. I want to reorganize this data such that I can access it easily by slicing. The way to do this is to use pandas MultiIndexing but I don't know how to convert my DataFrame with one index into another with three. Does anybody know how to do this? </p>
<p>To clarify, the desired output here is the same data with a three-level index, the outer being treatment, the middle dose, and the inner time. This would be useful because then I could access the data with something like <code>df['time']['dose']</code> or <code>df[0]</code> (or something to that effect, at least). </p>
|
<p>You can first <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html" rel="nofollow"><code>replace</code></a> the unnecessary strings (the index has to be converted to a <code>Series</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.to_series.html" rel="nofollow"><code>to_series</code></a>, because <code>replace</code> doesn't work with an <code>index</code> yet) and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow"><code>split</code></a>. Finally, set the index names with <a href="http://pandas.pydata.org/pandas-docs/stable/whatsnew.html#changes-to-rename" rel="nofollow"><code>rename_axis</code></a> (new in <code>pandas</code> <code>0.18.0</code>).</p>
<pre><code>df.index = df.index.to_series().replace({'time:':'','treatment:': '','dose:':''}, regex=True)
df.index = df.index.str.split('|', expand=True)
df = df.rename_axis(('time','treatment','dose'))
print (df)
VIM
time treatment dose
2 TGFb 0.1 -0.158406
1 0.039158
10 -0.052608
24 TGFb 0.1 0.157153
1 0.206030
10 0.132580
48 TGFb 0.1 -0.144209
1 -0.093910
10 -0.166819
6 TGFb 0.1 0.097548
1 0.026664
10 -0.008032
</code></pre>
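<p>If you want treatment as the outermost level and time innermost, as described in the question, the levels can be reordered afterwards (a sketch):</p>
<pre><code>df = df.reorder_levels(['treatment', 'dose', 'time']).sort_index()
</code></pre>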
|
python|pandas|indexing|multi-index
| 1
|
376,743
| 38,561,268
|
parsing data using pandas with fixed sequence of strings
|
<p>I have data looked like below in file a.dat:</p>
<pre><code>01/Jul/2016 00:05:09 8438.2
01/Jul/2016 00:05:19 8422.4 g
</code></pre>
<p>I wish to parse them into three columns: <strong>timestamp, floating number, string (either None or g)</strong></p>
<p>I have tried: </p>
<pre><code>df=pd.read_csv('a.dat',sep=' | ',engine='python')
</code></pre>
<p>which ends up with 4 columns: date, time, float and g</p>
<pre><code>df=pd.read_csv('a.dat',sep=' | (g)',engine='python')
</code></pre>
<p>which gives 5 columns with column 1 and 4 as NaN</p>
<p>Is there any better way to create the DataFrame without any post-processing? </p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow"><code>read_csv</code></a>:</p>
<pre><code>import pandas as pd
import io
temp=u'''01/Jul/2016 00:05:09 8438.2
01/Jul/2016 00:05:19 8422.4 g'''
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp),
                 sep='\s+',
                 names=['date','time','float','string'],
                 parse_dates=[['date','time']])
print (df)
date_time float string
0 2016-07-01 00:05:09 8438.2 NaN
1 2016-07-01 00:05:19 8422.4 g
</code></pre>
<p>Or:</p>
<pre><code>import pandas as pd
import io
temp=u'''01/Jul/2016 00:05:09 8438.2
01/Jul/2016 00:05:19 8422.4 g'''
#after testing replace io.StringIO(temp) to filename
df = pd.read_csv(io.StringIO(temp),
                 delim_whitespace=True,
                 names=['date','time','float','string'],
                 parse_dates=[['date','time']])
print (df)
date_time float string
0 2016-07-01 00:05:09 8438.2 NaN
1 2016-07-01 00:05:19 8422.4 g
</code></pre>
<hr>
<p>Solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_fwf.html" rel="nofollow"><code>read_fwf</code></a>:</p>
<pre><code>import pandas as pd
import io
temp=u'''01/Jul/2016 00:05:09 8438.2
01/Jul/2016 00:05:19 8422.4 g'''
#after testing replace io.StringIO(temp) to filename
df = pd.read_fwf(io.StringIO(temp),
                 names=['date','time','float','string'],
                 parse_dates=[['date','time']])
print (df)
date_time float string
0 2016-07-01 00:05:09 8438.2 NaN
1 2016-07-01 00:05:19 8422.4 g
</code></pre>
<p>You can also specify the widths of the columns:</p>
<pre><code>df = pd.read_fwf(io.StringIO(temp),
                 widths=[12, 9, 7, 2],
                 names=['date','time','float','string'],
                 parse_dates=[['date','time']])
print (df)
date_time float string
0 2016-07-01 00:05:09 8438.2 NaN
1 2016-07-01 00:05:19 8422.4 g
</code></pre>
|
python|csv|datetime|pandas|dataframe
| 2
|
376,744
| 38,556,297
|
Standard deviation from center of mass along Numpy array axis
|
<p>I am trying to find a well-performing way to calculate the standard deviation from the center of mass/gravity along an axis of a Numpy array.</p>
<p>In formula this is (sorry for the misalignment):</p>
<p><img src="https://latex.codecogs.com/gif.latex?%5Cmu_j&space;=&space;%5Cfrac%7B%5Csum_i%7Bi&space;A_%7Bij%7D%7D%7D%7B%5Csum_i%7B&space;A_%7Bij%7D%7D%7D&space;%5Cnewline&space;%5Cnewline&space;%5Ctext%7Bvar%7D_j&space;=&space;%5Cfrac%7B%5Csum_i%7Bi%5E2&space;A_%7Bij%7D%7D%7D%7B%5Csum_i%7BA_%7Bij%7D%7D%7D&space;-&space;%5Cmu_j%5E2&space;%5Cnewline&space;%5Cnewline&space;%5Ctext%7Bstd%7D_j&space;=&space;%5Csqrt%7B%5Ctext%7Bvar%7D_j%7D" title="\mu_j = \frac{\sum_i{i A_{ij}}}{\sum_i{ A_{ij}}} \newline \newline \text{var}_j = \frac{\sum_i{i^2 A_{ij}}}{\sum_i{A_{ij}}} - \mu_j^2 \newline \newline \text{std}_j = \sqrt{\text{var}_j}" /></p>
<p>The best I could come up with is this:</p>
<pre><code>def weighted_com(A, axis, weights):
    average = np.average(A, axis=axis, weights=weights)
    return average * weights.sum() / A.sum(axis=axis).astype(float)

def weighted_std(A, axis):
    weights = np.arange(A.shape[axis])
    w1com2 = weighted_com(A, axis, weights)**2
    w2com1 = weighted_com(A, axis, weights**2)
    return np.sqrt(w2com1 - w1com2)
</code></pre>
<p>In <code>weighted_com</code>, I need to correct the normalization from sum of weights to sum of values (which is an ugly workaround, I guess). <code>weighted_std</code> is probably fine.</p>
<p>To avoid the XY problem, I still ask for what I actually want, (a better <code>weighted_std</code>) instead of a better version of my <code>weighted_com</code>.</p>
<p>The <code>.astype(float)</code> is a safety measure as I'll apply this to histograms containing ints, which caused problems due to integer division when not in Python 3 or when <code>from __future__ import division</code> is not active.</p>
|
<p>You want to take the mean, variance and standard deviation of the vector <code>[1, 2, 3, ..., n]</code> — where <code>n</code> is the dimension of the input matrix <code>A</code> along the axis of interest —, with weights given by the matrix <code>A</code> itself.</p>
<p>For concreteness, say you want to consider these center-of-mass statistics along the vertical axis (<code>axis=0</code>) — this is what corresponds to the formulas you wrote. For a fixed column <code>j</code>, you would do</p>
<pre><code>n = A.shape[0]
r = np.arange(1, n+1)
mu = np.average(r, weights=A[:,j])
var = np.average(r**2, weights=A[:,j]) - mu**2
std = np.sqrt(var)
</code></pre>
<p>In order to put all of the computations for the different columns together, you have to stack together a bunch of copies of <code>r</code> (one per column) to form a matrix (that I have called <code>R</code> in the code below). With a bit of care, you can make things work for both <code>axis=0</code> and <code>axis=1</code>.</p>
<pre><code>import numpy as np
def com_stats(A, axis=0):
    A = A.astype(float)  # if you are worried about int vs. float
    n = A.shape[axis]
    m = A.shape[(axis-1)%2]
    r = np.arange(1, n+1)
    R = np.vstack([r] * m)
    if axis == 0:
        R = R.T
    mu = np.average(R, axis=axis, weights=A)
    var = np.average(R**2, axis=axis, weights=A) - mu**2
    std = np.sqrt(var)
    return mu, var, std
</code></pre>
<p>For example,</p>
<pre><code>A = np.array([[1, 1, 0], [1, 2, 1], [1, 1, 1]])
print(A)
# [[1 1 0]
# [1 2 1]
# [1 1 1]]
print(com_stats(A))
# (array([ 2. , 2. , 2.5]), # centre-of-mass mean by column
# array([ 0.66666667, 0.5 , 0.25 ]), # centre-of-mass variance by column
# array([ 0.81649658, 0.70710678, 0.5 ])) # centre-of-mass std by column
</code></pre>
<p>EDIT:</p>
<p>One can avoid creating in-memory copies of <code>r</code> to build <code>R</code> by using <code>numpy.lib.stride_tricks</code>: swap the line</p>
<pre><code>R = np.vstack([r] * m)
</code></pre>
<p>above with</p>
<pre><code>from numpy.lib.stride_tricks import as_strided
R = as_strided(r, strides=(0, r.itemsize), shape=(m, n))
</code></pre>
<p>The resulting <code>R</code> is a (strided) <code>ndarray</code> whose underlying array is the same as <code>r</code>'s — absolutely no copying of any values occurs.</p>
<pre><code>from numpy.lib.stride_tricks import as_strided
FMT = '''\
Shape: {}
Strides: {}
Position in memory: {}
Size in memory (bytes): {}
'''
def find_base_nbytes(obj):
    if obj.base is not None:
        return find_base_nbytes(obj.base)
    return obj.nbytes

def stats(obj):
    return FMT.format(obj.shape,
                      obj.strides,
                      obj.__array_interface__['data'][0],
                      find_base_nbytes(obj))
n=10
m=1000
r = np.arange(1, n+1)
R = np.vstack([r] * m)
S = as_strided(r, strides=(0, r.itemsize), shape=(m, n))
print(stats(r))
print(stats(R))
print(stats(S))
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Shape: (10,)
Strides: (8,)
Position in memory: 4299744576
Size in memory (bytes): 80
Shape: (1000, 10)
Strides: (80, 8)
Position in memory: 4304464384
Size in memory (bytes): 80000
Shape: (1000, 10)
Strides: (0, 8)
Position in memory: 4299744576
Size in memory (bytes): 80
</code></pre>
<p>Credit to <a href="https://stackoverflow.com/questions/34637875/size-of-numpy-strided-array-broadcast-array-in-memory/34638891#34638891">this SO answer</a> and <a href="https://stackoverflow.com/questions/11264838/how-to-get-the-memory-address-of-a-numpy-array-for-c/11266170#11266170">this one</a> for explanations on how to get the memory address and size of the underlying array of a strided <code>ndarray</code>.</p>
|
python|python-2.7|numpy|standard-deviation|weighted-average
| 1
|
376,745
| 38,853,916
|
groupby/unstack on columns name
|
<p>I have a dataframe with the following structure</p>
<pre><code> idx value Formula_name
0 123456789 100 Frequency No4
1 123456789 150 Frequency No25
2 123456789 125 Frequency No27
3 123456789 0.2 Power Level No4
4 123456789 0.5 Power Level No25
5 123456789 -1.0 Power Level No27
6 123456789 32 SNR No4
7 123456789 35 SNR No25
8 123456789 37 SNR No27
9 111222333 ...
</code></pre>
<p>So the only way to relate a frequency to its corresponding metric is via the number of the frequency. I know the possible range (from 100 to 200 MHz in steps of 25 MHz), but not which frequencies (or how many) show up in the data, nor which "number" is used to relate the frequency to the metric. </p>
<p>I would like to arrive at a dataframe similar to that:</p>
<pre><code> SNR Power Level
idx 100 125 150 175 200 100 125 150 175 200
0 123456789 32 37 35 NaN NaN 0.2 -1.0 0.5 NaN NaN
1 111222333 ...
</code></pre>
<p>For only one metric, I created two dataframes, one with the frequencies, one with the metric, and merged them on the number:</p>
<pre><code> idx Formula_x value_x number Formula_y value_y
0 123456789 SNR 32 4 frequency 100
1 123456789 SNR 35 25 frequency 150
</code></pre>
<p>Then I would unstack the dataframe:</p>
<pre><code>df.groupby(['idx','value_y']).first()[['value_x']].unstack()
</code></pre>
<p>This works for one metric, but I don't really see how I can apply it to more metrics and access them with a multiindex in the columns. </p>
<p>Any ideas and suggestions would be very welcome. </p>
|
<p>You can use:</p>
<pre><code>print (df)
idx value Formula_name
0 123456789 100.0 Frequency No4
1 123456789 150.0 Frequency No25
2 123456789 125.0 Frequency No27
3 123456789 0.2 Power Level No4
4 123456789 0.5 Power Level No25
5 123456789 -1.0 Power Level No27
6 123456789 32.0 SNR No4
7 123456789 35.0 SNR No25
8 123456789 37.0 SNR No27
#create new columns from Formula_name
df[['a','b']] = df.Formula_name.str.rsplit(n=1, expand=True)
# mapping by Series column b - from No4, No25 to numbers 100, 150...
maps = df[df.a == 'Frequency'].set_index('b')['value'].astype(int)
df['b'] = df.b.map(maps)
#remove rows where is Frequency, remove column Formula_name
df1 = df[df.a != 'Frequency'].drop('Formula_name', axis=1)
print (df1)
idx value a b
3 123456789 0.2 Power Level 100
4 123456789 0.5 Power Level 150
5 123456789 -1.0 Power Level 125
6 123456789 32.0 SNR 100
7 123456789 35.0 SNR 150
8 123456789 37.0 SNR 125
</code></pre>
<p>Two solutions - with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a> and with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow noreferrer"><code>pivot_table</code></a>.</p>
<pre><code>df2 = df1.set_index(['idx','a', 'b']).unstack([1,2])
df2.columns = df2.columns.droplevel(0)
df2 = df2.rename_axis(None).rename_axis([None, None], axis=1)
print (df2)
Power Level SNR
100 150 125 100 150 125
123456789 0.2 0.5 -1.0 32.0 35.0 37.0
df3 = df1.pivot_table(index='idx', columns=['a','b'], values='value')
df3 = df3.rename_axis(None).rename_axis([None, None], axis=1)
print (df3)
Power Level SNR
100 125 150 100 125 150
123456789 0.2 -1.0 0.5 32.0 37.0 35.0
</code></pre>
|
python|pandas
| 3
|
376,746
| 38,582,127
|
How to filter data from a data frame when the number of columns are dynamic?
|
<p>I have a data frame like below </p>
<pre><code> A_Name B_Detail Value_B Value_C Value_D ......
0 AA X1 1.2 0.5 -1.3 ......
1 BB Y1 0.76 -0.7 0.8 ......
2 CC Z1 0.7 -1.3 2.5 ......
3 DD L1 0.9 -0.5 0.4 ......
4 EE M1 1.3 1.8 -1.3 ......
5 FF N1 0.7 -0.8 0.9 ......
6 GG K1 -2.4 -1.9 2.1 ......
</code></pre>
<p>This is just a sample of data frame, I can have n number of columns like (Value_A, Value_B, Value_C, ........... Value_N)</p>
<p>Now i want to filter all rows where absolute value of all columns (Value_A, Value_B, Value_C, ....) is less than 1.</p>
<p>If you have a limited number of columns, you can filter the data by simply putting an 'and' condition on the columns in the dataframe, but I am not able to figure out what to do in this case. </p>
<p>I don't know how many such columns there will be; the only thing I know is that such columns will be prefixed with 'Value'.</p>
<p>In above case output should be like </p>
<pre><code> A_Name B_Detail Value_B Value_C Value_D ......
1 BB Y1 0.76 -0.7 0.8 ......
3 DD L1 0.9 -0.5 0.4 ......
5 FF N1 0.7 -0.8 0.9 ......
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html" rel="nofollow"><code>filter</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.abs.html" rel="nofollow"><code>abs</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow"><code>all</code></a> for creating <code>mask</code> and then <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p>
<pre><code>mask = (df.filter(like='Value').abs() < 1).all(axis=1)
print (mask)
0 False
1 True
2 False
3 True
4 False
5 True
6 False
dtype: bool
print (df[mask])
A_Name B_Detail Value_B Value_C Value_D
1 BB Y1 0.76 -0.7 0.8
3 DD L1 0.90 -0.5 0.4
5 FF N1 0.70 -0.8 0.9
</code></pre>
<p>All combination in <strong>timings</strong>:</p>
<pre><code>#len df = 70k, 5 columns
df = pd.concat([df]*10000).reset_index(drop=True)
In [47]: %timeit (df[(df.filter(like='Value').abs() < 1).all(axis=1)])
100 loops, best of 3: 7.48 ms per loop
In [48]: %timeit (df[df.filter(regex=r'Value').abs().lt(1).all(1)])
100 loops, best of 3: 7.02 ms per loop
In [49]: %timeit (df[df.filter(like='Value').abs().lt(1).all(1)])
100 loops, best of 3: 7.02 ms per loop
In [50]: %timeit (df[(df.filter(regex=r'Value').abs() < 1).all(axis=1)])
100 loops, best of 3: 7.3 ms per loop
</code></pre>
<hr>
<pre><code>#len df = 70k, 5k columns
df = pd.concat([df]*10000).reset_index(drop=True)
df = pd.concat([df]*1000, axis=1)
#only for testing, create unique columns names
df.columns = df.columns.str[:-1] + [str(col) for col in list(range(df.shape[1]))]
print (df)
In [75]: %timeit ((df[(df.filter(like='Value').abs() < 1).all(axis=1)]))
1 loop, best of 3: 10.3 s per loop
In [76]: %timeit ((df[(df.filter(regex=r'Value').abs() < 1).all(axis=1)]))
1 loop, best of 3: 10.3 s per loop
In [77]: %timeit (df[df.filter(regex=r'Value').abs().lt(1).all(1)])
1 loop, best of 3: 10.4 s per loop
In [78]: %timeit (df[df.filter(like='Value').abs().lt(1).all(1)])
1 loop, best of 3: 10.1 s per loop
</code></pre>
|
python|numpy|pandas|dataframe
| 5
|
376,747
| 38,762,290
|
Pandas column name of the max cell value
|
<p>I have a df which has some codes in the leftmost column and a forward profile in the other columns (df1 below)</p>
<p>df1:</p>
<pre><code> code tp1 tp2 tp3 tp4 tp5 tp6 \
0 1111 0.000000 0.000000 0.018714 0.127218 0.070055 0.084065
1 222 0.000000 0.000000 0.000418 0.000000 0.017540 0.003015
2 333 1.146815 1.305678 0.384918 0.688284 0.000000 0.000000
3 444 0.000000 0.000000 1.838797 0.000000 0.000000 0.000000
4 555 27.190002 27.134837 24.137560 17.739465 11.990806 8.631395
5 666 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
tp7 tp8 tp9 tp10
0 0.019707 0.000000 0.000000 0.000000
1 6.594860 10.535905 15.697232 21.035824
2 0.000000 0.000000 0.000000 0.000000
3 0.000000 0.000000 0.000000 0.000000
4 7.476197 6.461532 5.570051 4.730345
5 0.000000 0.000068 0.000000 0.000000
</code></pre>
<p>I want the output to be a 3-column df (df2 below). For each code, the 2nd column (<code>max_tp</code>) should hold the name of the column with the last number (+ve or -ve) after which there are only 0s, and the 3rd column (<code>tp_with_max_num</code>) should hold the name of the column with the max value. </p>
<p>df2:</p>
<pre><code> code max_tp tp_with_max_num
0 1111 tp7 tp4
1 222 tp10 tp10
2 333 tp4 tp2
3 444 tp3 tp3
4 555 tp10 tp1
5 666 tp8 tp8
</code></pre>
<p>Using this : <a href="https://stackoverflow.com/questions/34200153/name-of-column-that-contains-the-max-value">name of column, that contains the max value</a>
I was able to get the 3rd column:</p>
<pre><code>input_df['tp_with_max_num'] = input_df.ix[0:6,1:].apply(lambda x: input_df.columns[1:][x == x.max()][0], axis=1)
</code></pre>
<p>I am unable to solve for the 2nd column in df2....</p>
|
<p>Knowing that <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html">idxmax</a> returns the index of the <em>first</em> maximum, you can use cumsum to find the column after which there are only zeros:</p>
<pre><code>df.ix[:, 'tp1':].cumsum(axis=1).idxmax(axis=1)
Out[61]:
0 tp7
1 tp10
2 tp4
3 tp3
4 tp10
5 tp8
dtype: object
</code></pre>
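<p>Putting both derived columns together, a minimal sketch (using <code>loc</code>, since <code>ix</code> is deprecated in newer pandas, and assuming the <code>tp</code> columns are contiguous and in order):</p>
<pre><code>import pandas as pd

tp = df.loc[:, 'tp1':'tp10']                       # the profile columns, in order
df2 = pd.DataFrame({'code': df['code']})
df2['max_tp'] = tp.cumsum(axis=1).idxmax(axis=1)   # last non-zero column
df2['tp_with_max_num'] = tp.idxmax(axis=1)         # column holding the max value
</code></pre>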
|
python|pandas|dataframe|max|cumsum
| 5
|
376,748
| 38,632,209
|
How to add a weekstart column in pandas
|
<p>My current table has a date column, from which I can find the weekday. Using to_timedelta I have created a week_start column, but it is not giving the correct date.
Here is the code:</p>
<pre><code>final_data['weekday'] = final_data['DateOfInvoice'].dt.weekday
final_data['Weekstart'] = final_data['DateOfInvoice'] - pd.to_timedelta(final_data['weekday'],unit='ns', box=True, coerce=True)
</code></pre>
<p>output is as:</p>
<pre><code> Date weekday weekstart
2016-07-23 5 2016-07-22
</code></pre>
|
<p>IIUC you can construct a TimedeltaIndex and subtract from the other column:</p>
<pre><code>In [152]:
df['weekstart'] = df['Date'] - pd.TimedeltaIndex(df['weekday'], unit='D')
df
Out[152]:
Date weekday weekstart
0 2016-07-23 5 2016-07-18
</code></pre>
<p>in fact the weekday column is unnecessary:</p>
<pre><code>In [153]:
df['weekstart'] = df['Date'] - pd.TimedeltaIndex(df['Date'].dt.dayofweek, unit='D')
df
Out[153]:
Date weekday weekstart
0 2016-07-23 5 2016-07-18
</code></pre>
|
python|pandas
| 1
|
376,749
| 38,753,198
|
How can I remove rows from a numpy array that have NaN as the first element?
|
<p>I have a numpy array that looks like this:</p>
<pre><code> [[nan 0 0 ..., 0.0 0.053526738 0.068421053]
[nan 0 0 ..., 0.0 0.059653990999999996 0.068421053]
[nan 0 0 ..., 1.0 0.912542592 0.068421053]
...,
[1 0 0 ..., 0.0 0.126523399 0.193548387]
[nan 0 0 ..., 0.0 0.034388807 0.068421053]
[4 0 0 ..., 0.0 0.02250561 0.068421053]]
</code></pre>
<p>How do I remove all rows from the array where nan is the first element?</p>
|
<p>If x is the original array, the following puts the valid rows into y:</p>
<pre><code>y = x[~np.isnan(x[:, 0])]
</code></pre>
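<p>A quick sanity check, as a minimal sketch: this assumes the array has a float dtype (<code>np.isnan</code> raises a <code>TypeError</code> on object arrays):</p>
<pre><code>import numpy as np

x = np.array([[np.nan, 0.1], [1.0, 0.2], [np.nan, 0.3], [4.0, 0.4]])
y = x[~np.isnan(x[:, 0])]  # keep rows whose first element is not NaN
print(y)
# [[ 1.   0.2]
#  [ 4.   0.4]]
</code></pre>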
|
python|numpy
| 3
|
376,750
| 38,586,640
|
pandas multiindex selecting...how to get the right (restricted to selection) index
|
<p>I am struggling to get the right (restricted to the selection) index when using the pandas method <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html" rel="nofollow">xs</a> to select specific data in my dataframe. Let me demonstrate what I am doing:</p>
<pre><code>print(df)
value
idx1 idx2 idx3 idx4 idx5
10 2.0 0.0010 1 2 6.0 ...
2 3 6.0 ...
...
7 8 6.0 ...
8 9 6.0 ...
20 2.0 0.0010 1 2 6.0 ...
2 3 6.0 ...
...
18 19 6.0 ...
19 20 6.0 ...
# get dataframe for idx1 = 10, idx2 = 2.0, idx3 = 0.0010
print(df.xs([10,2.0,0.0010]))
value
idx4 idx5
1 2 6.0 ...
2 3 6.0 ...
3 4 6.0 ...
4 5 6.0 ...
5 6 6.0 ...
6 7 6.0 ...
7 8 6.0 ...
8 9 6.0 ...
# get the first index list of this part of the dataframe
print(df.xs([10,2.0,0.0010]).index.levels[0])
[ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,18, 19]
</code></pre>
<p>So I do not understand why the full list of values that occur in idx4 is returned even though we restricted the dataframe to a part where idx4 only takes values from 1 to 8. Am I using the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.html" rel="nofollow">index</a> method in a wrong way?</p>
|
<p>This is a known <strong>feature</strong>, not a bug. pandas preserves all of the index information. You can determine which of the levels are expressed and at what location via the <code>labels</code> attribute.</p>
<p>If you are looking to create an index that is fresh and just contains the information relevant to the slice you just made, you can do this:</p>
<pre><code>df_new = df.xs([10,2.0,0.0010])
idx_new = pd.MultiIndex.from_tuples(df_new.index.to_series(),
names=df_new.index.names)
df_new.index = idx_new
</code></pre>
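<p>On pandas 0.20 or newer (assuming that version is available to you), <code>MultiIndex.remove_unused_levels</code> does the same thing more directly:</p>
<pre><code>df_new = df.xs([10, 2.0, 0.0010])
df_new.index = df_new.index.remove_unused_levels()
print(df_new.index.levels[0])  # now only the values 1..8
</code></pre>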
|
python|pandas|select|multi-index
| 1
|
376,751
| 38,817,357
|
Randomly select item from list of lists gives ValueError
|
<p>I have a function that sometimes gives me a list of lists where the nested lists sometimes only have one item, such as this one:</p>
<pre><code>a = [['1'], ['3'], ['w']]
</code></pre>
<p>And I want to randomly select one item from that main list <code>a</code>. If I try to use <code>np.random.choice</code> on this list, I get a <code>ValueError: a must be 1-dimensional</code>.</p>
<p>But if the list were instead:</p>
<pre><code>b = [['1'], ['3'], ['w', 'w']]
</code></pre>
<p>Then using <code>np.random.choice</code> works perfectly fine. Why is this? And how can I make it so that I can randomly select from both types of lists?</p>
|
<p>I think <code>choice</code> is first turning your list into an array.</p>
<p>In the second case, this array is a 1d array with dtype object:</p>
<pre><code>In [125]: np.array([['1'], ['3'], ['w', 'w']])
Out[125]: array([['1'], ['3'], ['w', 'w']], dtype=object)
In [126]: _.shape
Out[126]: (3,)
</code></pre>
<p>In the first case, it makes a 2d array of strings:</p>
<pre><code>In [127]: np.array([['1'], ['3'], ['w']])
Out[127]:
array([['1'],
['3'],
['w']],
dtype='<U1')
In [128]: _.shape
Out[128]: (3, 1)
</code></pre>
<p>This is an issue that comes up periodically. <code>np.array</code> tries to create as a high a dimensional array as the input allows. </p>
<p><a href="https://stackoverflow.com/questions/38774922/prevent-numpy-from-creating-a-multidimensional-array">Prevent numpy from creating a multidimensional array</a></p>
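<p>To sample uniformly from either kind of list, one workaround is to pick an index rather than letting NumPy coerce the list to an array (a sketch):</p>
<pre><code>import random
import numpy as np

a = [['1'], ['3'], ['w']]
pick = a[np.random.randint(len(a))]  # works regardless of the nesting shape
# or, with the standard library:
pick = random.choice(a)
</code></pre>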
|
python|python-2.7|numpy|nested-lists
| 3
|
376,752
| 38,876,511
|
Some operations on DataFrame
|
<p>I am working on parsing a *.csv file. Therefore I am trying to create a class which helps me simplify some operations on a DataFrame.</p>
<p>I've created two methods in order to parse a column 'z' that contains values for the 'Price' column. </p>
<pre><code>def subr(self):
isone = self.df.z == 1.0
if isone.any():
atone = self.df.Price[isone].iloc[0]
self.df.loc[self.df.z.between(0.8, 2.5), 'Benchmark'] = atone
# df.loc[(df.r >= .8) & (df.r <= 1.4), 'value'] = atone
return self.df
def obtain_z(self):
"Return a column with z for E_ref"
self.z_col = self.subr()
self.dfnew = self.df.groupby((self.df.z < self.df.z.shift()).cumsum()).apply(self.z_col)
return self.dfnew
def main():
x = ParseDataBase('data.csv')
file_content = x.read_file()
new_df = x.obtain_z()
</code></pre>
<p>I'm getting the following error:</p>
<blockquote>
<p>'DataFrame' objects are mutable, thus they cannot be hashed</p>
</blockquote>
<p>'DataFrame' objects are mutable means that we can change elements of that Frame. I'm not sure when I'm hashing. </p>
<p>I noticed the use of <code>apply(self.z_col)</code> is going wrong.</p>
<p>I also have no clue how to fix it. </p>
|
<p>You are passing the <code>DataFrame</code> <code>self.df</code> returned by <code>self.subr()</code> to <code>apply</code>, but actually <code>apply</code> only takes functions as parameters (<a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#flexible-apply" rel="nofollow noreferrer">see examples here</a>).</p>
|
python|pandas
| 0
|
376,753
| 38,668,482
|
Efficient way to find the shortest distance between two arrays?
|
<p>I am trying to find the shortest distance between two sets of arrays. The x- arrays are identical and just contain integers. Here is an example of what I am trying to do:</p>
<pre><code>import numpy as np
x1 = x2 = np.linspace(-1000, 1000, 2001)
y1 = (lambda x, a, b: a*x + b)(x1, 2, 1)
y2 = (lambda x, a, b: a*(x-2)**2 + b)(x2, 2, 10)
def dis(x1, y1, x2, y2):
return sqrt((y2-y1)**2+(x2-x1)**2)
min_distance = np.inf
for a, b in zip(x1, y1):
for c, d in zip(x2, y2):
if dis(a, b, c, d) < min_distance:
min_distance = dis(a, b, c, d)
>>> min_distance
2.2360679774997898
</code></pre>
<p>This solution works, but the problem is runtime. If x has a length of ~10,000, the solution is infeasible because the program has O(n^2) runtime. Now, I tried making some approximations to speed the program up:</p>
<pre><code>for a, b in zip(x1, y1):
cut = (x2 > a-20)*(x2 < a+20)
for c, d in zip(x2, y2):
if dis(a, b, c, d) < min_distance:
min_distance = dis(a, b, c, d)
</code></pre>
<p>But the program is still taking longer than I'd like. Now, from my understanding, it is generally inefficient to loop through a numpy array, so I'm sure there is still room for improvement. Any ideas on how to speed this program up?</p>
|
<p>Your problem could also be represented as 2d collision detection, so a <a href="https://en.wikipedia.org/wiki/Quadtree" rel="nofollow">quadtree</a> might help. Insertion and querying both run in O(log n) time, so the whole search would run in O(n log n). </p>
<p>One more suggestion, since sqrt is monotonic, you can compare the squares of distances instead of the distances themselves, which will save you n^2 square root calculations. </p>
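<p>If SciPy is an option, a k-d tree gives the nearest-neighbour search directly (a sketch, assuming <code>scipy.spatial.cKDTree</code> is available):</p>
<pre><code>import numpy as np
from scipy.spatial import cKDTree

pts1 = np.column_stack((x1, y1))
pts2 = np.column_stack((x2, y2))
tree = cKDTree(pts2)
dists, _ = tree.query(pts1)  # distance to the closest point of set 2, per point of set 1
min_distance = dists.min()
</code></pre>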
|
python|arrays|numpy|runtime|ipython
| 1
|
376,754
| 38,655,983
|
Reading text file with numpy.loadtxt
|
<p>I am getting an error when trying to read a text file. </p>
<pre><code>import numpy as np
fnam = 'file.txt'
test_fnames = np.loadtxt(fnam, dtype=None, delimiter=',')
test_fnames
</code></pre>
<p>I now get this error:</p>
<pre><code>ValueError: could not convert string to float:
</code></pre>
<p>The file content is just a comma separated list of numbers. Perhaps there is a space at the end of the file that is causing an error?</p>
<pre><code>1,2,3,4,5,6,7,7,8,9122,3,3,45,5,6
</code></pre>
<p>Thanks. The problem was the way I wrote the text file in Torch7. </p>
|
<p>You could use <code>np.genfromtxt()</code> instead of <code>np.loadtxt</code>,
because the former handles missing values:</p>
<pre><code>import numpy as np
fnam = 'file.txt'
test_fnames = np.genfromtxt(fnam, dtype=None, delimiter=',')
</code></pre>
<p>You could also try :</p>
<pre><code>import numpy as np
fnam = 'file.txt'
test_fnames = np.genfromtxt(fnam, dtype=None, delimiter=',')[:,:-1]
</code></pre>
<p>It's just an idea ^^ But if you want, upload your data file somewhere, give me the link and I will see ;)</p>
|
python|numpy|file-io
| 0
|
376,755
| 38,710,993
|
Encoding data's label for text classification
|
<p>I am doing a project in clinical text classification. In my corpus, data are already labelled by code (for example: 768.2, V13.02, V13.09, 599.0 ...). I have already separated text and labels and used word embeddings for the text. I am going to feed them into a convolutional neural network. However, the labels still need to be encoded. I read examples of sentiment text classification and MNIST, but they all used integers to classify their data; my labels are in text form, which is why I cannot use one-hot encoding like them. Could anyone suggest a way to do it?
Thanks </p>
|
<p>Discrete text labels are easily convertible to discrete numeric data by creating an enumeration mapping. For example, assuming the labels "Yes", "No" and "Maybe":</p>
<pre><code>No -> 0
Yes -> 1
Maybe -> 2
</code></pre>
<p>And now you have numeric data, which can later be converted back (as long as the algorithm treat those as discrete values and do not return 0.5 or something like that).</p>
<p>In the case each instance can have multiples labels, as you said in a comment, you can create the encoding by putting each label in a column ("one-hot encoding"). Even if some software do not implement that off-the-shelf, it is not hard to do by hand.</p>
<p>Here's a very simple (and not well-written, to be honest) example using pandas' get_dummies function:</p>
<pre><code>import numpy as np
import pandas as pd
labels = np.array(['a', 'b', 'a', 'c', 'ab', 'a', 'ac'])
df = pd.DataFrame(labels, columns=['label'])
ndf = pd.get_dummies(df)
ndf.label_a = ndf.label_a + ndf.label_ab + ndf.label_ac
ndf.label_b = ndf.label_b + ndf.label_ab
ndf.label_c = ndf.label_c + ndf.label_ac
ndf = ndf.drop(['label_ab', 'label_ac'], axis=1)
ndf
label_a label_b label_c
0 1.0 0.0 0.0
1 0.0 1.0 0.0
2 1.0 0.0 0.0
3 0.0 0.0 1.0
4 1.0 1.0 0.0
5 1.0 0.0 0.0
6 1.0 0.0 1.0
</code></pre>
<p>You can now train a multivariate model to output the values of <code>label_a</code>, <code>label_b</code> and <code>label_c</code> and then reconstruct the original labels like "ab". Just make sure the output is in the set [0, 1] (by applying softmax-layer or something like that).</p>
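<p>A more robust alternative for the multi-label case, assuming scikit-learn is available, is <code>MultiLabelBinarizer</code>, which builds the same kind of indicator matrix without the manual column arithmetic:</p>
<pre><code>from sklearn.preprocessing import MultiLabelBinarizer

labels = [['a'], ['b'], ['a'], ['c'], ['a', 'b'], ['a'], ['a', 'c']]
mlb = MultiLabelBinarizer()
encoded = mlb.fit_transform(labels)
print(mlb.classes_)  # ['a' 'b' 'c']
print(encoded)       # one row per instance, one column per label
</code></pre>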
|
python|encoding|tensorflow|text-classification
| 1
|
376,756
| 38,918,623
|
Pandas: pivot table
|
<p>I have df:</p>
<pre><code>ID,url,used_at,active_seconds,domain,search_engine,diff_time,period,code, category
08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/product/12858630?hid=91491&track=fr_same,2016-03-20 23:19:49,6,yandex.ru,None,78.0,515,100.0, Search system
08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/product/12858630?hid=91491&track=fr_same,2016-03-20 23:20:01,26,yandex.ru,None,6.0,515,100.0, Social network
08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/catalog/54726/list?hid=91491&track=pieces&gfilter=1801946%3A1871375&exc=1&regprice=9&how=dpop,2016-03-20 23:20:33,14,yandex.ru,None,6.0,515,100.0, Social network
08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/product/12858630/offers?hid=91491&grhow=shop,2016-03-20 23:20:47,2,yandex.ru,None,14.0,515,100.0, Internet shop
08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/product/12858630/offers?hid=91491&grhow=shop,2016-03-20 23:24:05,8,yandex.ru,None,196.0,515,100.0, Internet shop
08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/catalogmodels.xml?hid=91491&CAT_ID=160043&nid=54726&track=pieces,2016-03-20 23:24:13,32,yandex.ru,None,8.0,515,100.0, Search system
08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/catalog/54726/list?hid=91491&track=fr_cm_shwall&exc=1&how=dpop,2016-03-20 23:24:45,16,yandex.ru,None,32.0,515,100.0, Internet shop
08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/catalogmodels.xml?hid=91491&CAT_ID=160043&nid=54726&track=pieces,2016-03-20 23:25:01,4,yandex.ru,None,16.0,515,100.0, Search system
08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/catalog/54726/list?hid=91491&track=fr_cm_pop&exc=1&how=dpop,2016-03-20 23:25:05,10,yandex.ru,None,4.0,515,100.0, Social network
08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/product/11153512?hid=91491&track=fr_same,2016-03-21 06:52:44,2,yandex.ru,None,14.0,516,100.0, Internet shop
08cd0141663315ce71e0121e3cd8d91f,market.yandex.ru/catalog/54726/list?hid=91491&track=pieces&gfilter=1801946%3A1871375&exc=1&regprice=9&how=dpop,2016-04-04 21:08:41,24,yandex.ru,None,20.0,562,100.0, Internet shop
0bc0898d3fe2e46158621c674effb458,market.yandex.ru/product/12259780?hid=91491&show-uid=56508001849064882783001,2016-02-26 20:34:20,28,yandex.ru,yandex,10.0,1217,100.0, Social network
0bc0898d3fe2e46158621c674effb458,market.yandex.ru/product/12259780?hid=91491&show-uid=56508001849064882783001,2016-02-26 20:34:50,1,yandex.ru,None,2.0,1217,100.0, Internet shop
</code></pre>
<p>I need to build <code>pivot_table</code>.
I use</p>
<pre><code>table = pd.pivot_table(df, values='domain', index=['ID'], columns=['category'], aggfunc=np.sum)
</code></pre>
<p>The problem is that it concatenates <code>domain</code> values, but I want to count the quantity of unique domains. How can I do that?</p>
|
<p>It looks like you need:</p>
<pre><code>table = pd.pivot_table(df, values='domain',
index=['ID'],
columns=['category'],
aggfunc=lambda x: x.nunique())
print (table)
category Internet shop Search system \
ID
08cd0141663315ce71e0121e3cd8d91f 1.0 1.0
0bc0898d3fe2e46158621c674effb458 1.0 NaN
category Social network
ID
08cd0141663315ce71e0121e3cd8d91f 1.0
0bc0898d3fe2e46158621c674effb458 1.0
</code></pre>
<p>Another faster solution:</p>
<pre><code>print (df.groupby(['ID','category'])['domain'].nunique().unstack())
category Internet shop Search system \
ID
08cd0141663315ce71e0121e3cd8d91f 1.0 1.0
0bc0898d3fe2e46158621c674effb458 1.0 NaN
category Social network
ID
08cd0141663315ce71e0121e3cd8d91f 1.0
0bc0898d3fe2e46158621c674effb458 1.0
</code></pre>
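<p>Note you can also pass the method itself as the aggregator, which avoids the lambda (a small variant with the same result):</p>
<pre><code>table = pd.pivot_table(df, values='domain', index=['ID'],
                       columns=['category'], aggfunc=pd.Series.nunique)
</code></pre>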
|
python|pandas|dataframe|unique|pivot-table
| 2
|
376,757
| 38,885,935
|
slice df where column looks like [(A, 3), (-A, 1), (-C, 4)] using criteria like all rows such that A>5 etc
|
<p>I have a dataframe that has a column that looks something like the following:</p>
<pre><code>dct = {}
for x in range(0,1000000):
test = {'A': np.random.randint(1,5), '-A': np.random.randint(1,5), '-C': np.random.randint(1,5)}
dct[str(x)+'_key'] = test
df = pd.DataFrame([[d.items()] for d in dct.values()])
df.tail()
Out[208]:
0
1299995 [(A, 3), (-A, 1), (-C, 4)]
1299996 [(A, 2), (-A, 4), (-C, 1)]
1299997 [(A, 3), (-A, 1), (-C, 3)]
1299998 [(A, 2), (-A, 2), (-C, 1)]
1299999 [(A, 1), (-A, 2), (-C, 4)]
</code></pre>
<p>I have about 1.3 million rows in the dataframe. There are other columns but for this question they are not relevant. </p>
<p>In my real life situation the total sum of the count per row = 10, but I don't know how to create an example dataframe using <code>np.random.randint()</code> where the total count per row equals 10. Valid letters are any of the following: <code>(A,B,C,D,-A,-B,-C,-D)</code>. </p>
<p>So every row selects from that set with the restriction that total <code>count = 10</code>. So a row can have anything like:</p>
<pre><code>[(A, 10)]
[(B, 3), (-D, 1), (-A, 6)]
[(A, 2), (B, 1), (-C, 2),(-D,5)]
</code></pre>
<p>In any case, the above example df should suffice. </p>
<p>What I want to do is be able to slice this df using this column using criteria that resembles questions like:</p>
<pre><code>-all rows such that the number of A > 5 AND B < 0 (or not existent) AND -D > 2
</code></pre>
<p>The questions can be single or multi-conditions like the above. </p>
<p>In any case, I'm not sure how to do this efficiently, especially since each row is comprised of tuples. </p>
|
<p>If you can split the column of tuples, this should work, just replace the conditionals with your numbers. I used these for the example data:</p>
<pre><code>def f(x, var):
tup_list = list(x)
for t in tup_list:
if t[0] == var:
return t[1]
return np.NaN
df.columns = ['col']
for var in ['A', '-A', 'B', '-B', 'C', '-C', 'D', '-D']:
df[var] = df['col'].apply(lambda x: f(x, var))
# note: `df['B'] is not np.NaN` never works as a missing-value check; use .isna()/.notna()
df2 = df.loc[(df['A'] > 3) & (df['-A'] < 3) & df['B'].isna() & (df['-C'] > 2)]
</code></pre>
|
python|pandas|tuples|slice
| 1
|
376,758
| 38,696,101
|
Python: How to check the number of occurrences and top (n) values in a dataframe?
|
<p>I want to count the number of occurrences of countries in a dataframe (sample below) and also find the top 2 countries by occurrence.</p>
<pre><code> Date Location
0 09/17/1908 Virginia
1 07/12/1912 New Jersey
2 08/06/1913 Canada
3 09/09/1913 England
4 10/17/1913 Germany
5 03/05/1915 Belgium
6 09/03/1915 Germany
7 07/28/1916 Bulgeria
8 09/24/1916 England
9 10/01/1916 England
</code></pre>
<p>Result value should be something like below:</p>
<pre><code>Location Count
England 3
Germany 2
</code></pre>
|
<pre><code>countLocation = df['Location'].value_counts()
</code></pre>
<p><code>.value_counts()</code> will give you a count for each distinct item in the <code>Location</code> column of the DataFrame.</p>
<p>Also, as you mentioned you're new to Python, to get the final value:</p>
<pre><code>countLocation["England"]
</code></pre>
<p>will get the count value from the returned collection of counts, for the row with key "England".</p>
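<p>For the top-2 part of the question: <code>value_counts()</code> returns the counts sorted in descending order, so taking the head gives the result directly (a sketch using the sample's <code>Location</code> column):</p>
<pre><code>top2 = df['Location'].value_counts().head(2)
print(top2)
# England    3
# Germany    2
</code></pre>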
|
python|python-3.x|pandas
| 1
|
376,759
| 38,709,991
|
Group by hours and plot in Bokeh
|
<p>I am trying to get a plot like the stock data example in Bokeh, as in this link: <a href="http://docs.bokeh.org/en/latest/docs/gallery/stocks.html" rel="nofollow noreferrer">http://docs.bokeh.org/en/latest/docs/gallery/stocks.html</a></p>
<pre><code>2004-01-05,00:00:00,01:00:00,Mon,20504,792
2004-01-05,01:00:00,02:00:00,Mon,16553,783
2004-01-05,02:00:00,03:00:00,Mon,18944,790
2004-01-05,03:00:00,04:00:00,Mon,17534,750
2004-01-06,00:00:00,01:00:00,Tue,17262,747
2004-01-06,01:00:00,02:00:00,Tue,19072,777
2004-01-06,02:00:00,03:00:00,Tue,18275,785
</code></pre>
<p>I want to use columns 2 (startTime) and 5 (count), group by the <code>day</code> column, and sum the <code>counts</code> within the respective hours. </p>
<p>Code (does not give the desired output): </p>
<pre><code>import numpy as np
import pandas as pd
#from bokeh.layouts import gridplot
from bokeh.plotting import figure, show, output_file
data = pd.read_csv('one_hour.csv')
data.column = ['date', 'startTime', 'endTime', 'day', 'count', 'unique']
p1 = figure(x_axis_type='startTime', y_axis_type='count', title="counts per hour")
p1.grid.grid_line_alpha=0.3
p1.xaxis.axis_label = 'startTime'
p1.yaxis.axis_label = 'count'
output_file("count.html", title="time_graph.py")
show(gridplot([[p1]], plot_width=400, plot_height=400)) # open a browser
</code></pre>
<p>Reading the columns and plotting isn't a problem, but applying the group-by and sum operations on the column data is something I am not able to do. </p>
<p>I appreciate the help, thanks! </p>
|
<p>Sounds like this is what you need:</p>
<pre><code>data.groupby('startTime')['count'].sum()
</code></pre>
<p>Output:</p>
<pre><code>00:00:00 37766
01:00:00 35625
02:00:00 37219
03:00:00 17534
</code></pre>
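<p>To feed that into a Bokeh plot, a minimal sketch (it treats the hour strings as a categorical x-range; the <code>figure</code>/<code>vbar</code> calls are standard Bokeh, but the column names are assumed from your sample):</p>
<pre><code>from bokeh.plotting import figure, show, output_file

grouped = data.groupby('startTime')['count'].sum()
p = figure(x_range=list(grouped.index), title="counts per hour",
           plot_width=400, plot_height=400)
p.vbar(x=list(grouped.index), top=grouped.values, width=0.8)
output_file("count.html", title="time_graph.py")
show(p)
</code></pre>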
|
python|pandas|plot|graph|bokeh
| 1
|
376,760
| 38,726,855
|
Pandas count the number of times an event has occurred in last n days by group
|
<p>I have table of events occurring by id. How would I count the number of times in the last n days that each event type has occurred prior to the current row?</p>
<p>For example with a list of events like:</p>
<pre><code>df = pd.DataFrame([{'id': 1, 'event_day': '2016-01-01', 'event_type': 'type1'},
{'id': 1, 'event_day': '2016-01-02', 'event_type': 'type1'},
{'id': 2, 'event_day': '2016-02-01', 'event_type': 'type2'},
{'id': 2, 'event_day': '2016-02-15', 'event_type': 'type3'},
{'id': 3, 'event_day': '2016-01-06', 'event_type': 'type3'},
{'id': 3, 'event_day': '2016-03-11', 'event_type': 'type3'},])
df['event_day'] = pd.to_datetime(df['event_day'])
df = df.sort_values(['id', 'event_day'])
</code></pre>
<p>or: </p>
<pre><code> event_day event_type id
0 2016-01-01 type1 1
1 2016-01-02 type1 1
2 2016-02-01 type2 2
3 2016-02-15 type3 2
4 2016-01-06 type3 3
5 2016-03-11 type3 3
</code></pre>
<p>by <code>id</code> I want to count the number of times each <code>event_type</code> has occurred prior to the current row in the last n days. For example, in row 3 id=2, so how many times up to (but not including) that point in the event history have events types 1, 2, and 3 occurred in the last n days for id 2?</p>
<p>The desired output would look something like below:</p>
<pre><code> event_day event_type event_type1_in_last_30days event_type2_in_last_30days event_type3_in_last_30days id
0 2016-01-01 type1 0 0 0 1
1 2016-01-02 type1 1 0 0 1
2 2016-02-01 type2 0 0 0 2
3 2016-02-15 type3 0 1 0 2
4 2016-01-06 type3 0 0 0 3
5 2016-03-11 type3 0 0 0 3
</code></pre>
|
<pre><code>res = ((((df['event_day'].values >= df['event_day'].values[:, None] - pd.to_timedelta('30 days'))
& (df['event_day'].values < df['event_day'].values[:, None]))
& (df['id'].values == df['id'].values[:, None]))
.dot(pd.get_dummies(df['event_type'])))
res
Out:
array([[ 0., 0., 0.],
[ 1., 0., 0.],
[ 0., 0., 0.],
[ 0., 1., 0.],
[ 0., 0., 0.],
[ 0., 0., 0.]])
</code></pre>
<hr>
<p>The first part is to generate a matrix as follows:</p>
<pre><code>(df['event_day'].values >= df['event_day'].values[:, None] - pd.to_timedelta('30 days'))
Out:
array([[ True, True, True, True, True, True],
[ True, True, True, True, True, True],
[False, True, True, True, True, True],
[False, False, True, True, False, True],
[ True, True, True, True, True, True],
[False, False, False, True, False, True]], dtype=bool)
</code></pre>
<p>It's a 6x6 matrix and for each row it makes a comparison against the other rows. It makes use of NumPy's broadcasting for pairwise comparison (<code>.values[:, None]</code> adds another axis). To make it complete, we need to check if this row occurs sooner than the other row as well: </p>
<pre><code>(((df['event_day'].values >= df['event_day'].values[:, None] - pd.to_timedelta('30 days'))
& (df['event_day'].values < df['event_day'].values[:, None])))
Out:
array([[False, False, False, False, False, False],
[ True, False, False, False, False, False],
[False, True, False, False, True, False],
[False, False, True, False, False, False],
[ True, True, False, False, False, False],
[False, False, False, True, False, False]], dtype=bool)
</code></pre>
<p>Another condition is about the id's. Using a similar approach, you can construct a pairwise comparison matrix that shows when id's match:</p>
<pre><code>(df['id'].values == df['id'].values[:, None])
Out:
array([[ True, True, False, False, False, False],
[ True, True, False, False, False, False],
[False, False, True, True, False, False],
[False, False, True, True, False, False],
[False, False, False, False, True, True],
[False, False, False, False, True, True]], dtype=bool)
</code></pre>
<p>It becomes:</p>
<pre><code>(((df['event_day'].values >= df['event_day'].values[:, None] - pd.to_timedelta('30 days'))
& (df['event_day'].values < df['event_day'].values[:, None]))
& (df['id'].values == df['id'].values[:, None]))
Out:
array([[False, False, False, False, False, False],
[ True, False, False, False, False, False],
[False, False, False, False, False, False],
[False, False, True, False, False, False],
[False, False, False, False, False, False],
[False, False, False, False, False, False]], dtype=bool)
</code></pre>
<p>Lastly, you want to see it for each type so you can use get_dummies:</p>
<pre><code>pd.get_dummies(df['event_type'])
Out:
type1 type2 type3
0 1.0 0.0 0.0
1 1.0 0.0 0.0
2 0.0 1.0 0.0
3 0.0 0.0 1.0
4 0.0 0.0 1.0
5 0.0 0.0 1.0
</code></pre>
<p>If you multiply the resulting matrix with this one, it should give you the number of rows satisfying that condition for each type. You can pass the resulting array to a DataFrame constructor and concat:</p>
<pre><code>pd.concat([df, pd.DataFrame(res, columns = ['e1', 'e2', 'e3'])], axis=1)
Out:
event_day event_type id e1 e2 e3
0 2016-01-01 type1 1 0.0 0.0 0.0
1 2016-01-02 type1 1 1.0 0.0 0.0
2 2016-02-01 type2 2 0.0 0.0 0.0
3 2016-02-15 type3 2 0.0 1.0 0.0
4 2016-01-06 type3 3 0.0 0.0 0.0
5 2016-03-11 type3 3 0.0 0.0 0.0
</code></pre>
|
python|pandas
| 3
|
376,761
| 38,527,667
|
Appending an object to a set gives `NoneType` in Python 2.7
|
<p>I have a huge array of labels which I make unique via:</p>
<pre><code>unique_train_labels = set(train_property_labels)
</code></pre>
<p>Which prints out as <code>set([u'A', u'B', u'C'])</code>. I want to create a new set of unique labels with a new label called "no_region", and am using:</p>
<pre><code>unique_train_labels_threshold = unique_train_labels.add('no_region')
</code></pre>
<p>However, this prints out to be <code>None</code>.</p>
<p>My ultimate aim is to use these unique labels to later generate a random array of categorical labels via:</p>
<pre><code> rng = np.random.RandomState(101)
categorical_random = rng.choice(list(unique_train_labels), len(finalTestSentences))
categorical_random_threshold = rng.choice(list(unique_train_labels_threshold), len(finalTestSentences))
</code></pre>
<p>From the <a href="https://docs.python.org/2/library/sets.html" rel="nofollow">docs</a> it says that <code>set.add()</code> should generate a new set, which seems not to be the case (hence I can't later call <code>list(unique_train_labels_threshold)</code>)</p>
|
<p>As mentioned in Moses' answer, the <code>set.add</code> method mutates the original set, it does not create a new set. In Python it's conventional for methods that perform in-place mutation to return <code>None</code>; the methods of all built-in mutable types do that, and the convention is generally observed by 3rd-party libraries.</p>
<p>An alternative to using the <code>.copy</code> method is to use the <code>.union</code> method, which returns a new set that is the union of the original set and the set supplied as an argument. For sets, the <code>|</code> <em>or</em> operator invokes the <code>.union</code> method.</p>
<pre><code>a = {1, 2, 3}
b = a.union({5})
c = a | {4}
print(a, b, c)
</code></pre>
<p><strong>output</strong></p>
<pre><code>{1, 2, 3} {1, 2, 3, 5} {1, 2, 3, 4}
</code></pre>
<p>The <code>.union</code> method (like other set methods that can be invoked via operator syntax) has a slight advantage over the operator syntax: you can pass it <em>any</em> iterable for its argument; the operator version requires you to explicitly convert the argument to a set (or frozenset).</p>
<pre><code>a = {1, 2, 3}
b = a.union([5, 6])
c = a | set([7, 8])
print(a, b, c)
</code></pre>
<p><strong>output</strong> </p>
<pre><code>{1, 2, 3} {1, 2, 3, 5, 6} {1, 2, 3, 7, 8}
</code></pre>
<p>Using the explicit <code>.union</code> method is slightly more efficient here because it bypasses converting the arg to a set: internally, the method just iterates over the contents of the arg, adding them to the new set, so it doesn't care if the arg is a set, list, tuple, string, or dict.</p>
<p>From the official Python <a href="https://docs.python.org/3/library/stdtypes.html#set" rel="noreferrer">set docs</a></p>
<blockquote>
<p>Note, the non-operator versions of union(), intersection(),
difference(), and symmetric_difference(), issubset(), and issuperset()
methods will accept any iterable as an argument. In contrast, their
operator based counterparts require their arguments to be sets. This
precludes error-prone constructions like set('abc') & 'cbs' in favor
of the more readable set('abc').intersection('cbs').</p>
</blockquote>
|
python|python-2.7|numpy|random|set
| 5
|
376,762
| 38,923,943
|
Numpy array from characters in BDF file
|
<p>I have a file, font_file.bdf, and need to get the characters contained in it as numpy arrays where each element is one pixel.</p>
<p>Here's the snippet of that file which defines the '?' character:</p>
<pre><code>STARTCHAR question
ENCODING 63
SWIDTH 1000 0
DWIDTH 6 0
BBX 5 7 0 0
BITMAP
70
88
08
10
20
00
20
ENDCHAR
</code></pre>
<p>I researched .bdf files to understand how they encode data. Basically, it's a bitmap with a bit-depth of 1. I found a Pillow module, PIL.BdfFontFile, which can interpret bdf files. After experimenting with this module for a while I was able to get a PIL image for each of the characters in the font and save them to see that it is working, like so:</p>
<pre><code>from PIL.BdfFontFile import BdfFontFile
fp = open("font_file.bdf", "r")
bdf_file = BdfFontFile(fp)
bdf_file.compile()
char = '?'
_, __, bounding_box, image = bdf_file[ord(char)]
image.save(char + ".png")
</code></pre>
<p>The saved image looks like the following: <a href="http://i.stack.imgur.com/lSLeL.png" rel="nofollow">Question Mark</a>. and from looking at its properties it has a bit-depth of 1, which makes sense. (I'm not sure why it seems inverted, but I could do that kind of manipulation with numpy if still needed.)</p>
<p>Once I had that, I tried to convert to a numpy array:</p>
<pre><code>print numpy.array(image, dtype=numpy.int)
</code></pre>
<p>which gave me an array that no longer seems to represent the corresponding character:</p>
<pre><code>[[1 1 1 1 1]
[0 1 0 1 1]
[1 1 1 1 1]
[1 1 1 1 0]
[1 0 1 0 1]
[1 0 1 1 1]
[0 1 1 1 1]]
</code></pre>
<p>I was hoping for something that looked more like this:</p>
<pre><code>[[0 1 1 1 0]
[1 0 0 0 1]
[0 0 0 0 1]
[0 0 0 1 0]
[0 0 1 0 0]
[0 0 0 0 0]
[0 0 1 0 0]]
</code></pre>
<p>Worst-case scenario, I could make an algorithm myself that converts the data in the PIL image to a numpy array, but I feel like there has to be an easier way given my past experience with converting between PIL Images and numpy arrays (it's usually quite straightforward).</p>
<p>Any ideas about how to get the PIL image to convert to a numpy array properly or another solution to my problem would be appreciated.</p>
|
<p>For me to get @drake-mossman's answer to work, I had to modify the first line to read the file in byte format:</p>
<pre><code>fp = open("font_file.bdf", "rb")
</code></pre>
<p>Which unfortunately means that the BdfFontFile script currently doesn't support unicode characters (or any code points past 255).</p>
|
python|numpy|fonts|bitmap|python-imaging-library
| 0
|
376,763
| 63,281,404
|
TFLite Interpreter fails to load quantized model on Android
|
<p>I have a TFLite model. The model input is a 256x192 image, and the model is quantized to 16 bits. It was quantized with this converter:</p>
<pre class="lang-py prettyprint-override"><code>converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
</code></pre>
<p>I am trying to load it in my android app, and face the following problem while executing <code>tflite = new Interpreter(tfliteModel, tfliteOptions);</code>:</p>
<pre><code>E/AndroidRuntime: FATAL EXCEPTION: CameraBackground
Process: android.example.com.tflitecamerademo, PID: 7943
java.lang.IllegalArgumentException: Internal error: Cannot create interpreter: Unimplemented data type FLOAT16 (1) in tensor
Unimplemented data type FLOAT16 (1) in tensor
</code></pre>
<p>What can I try in order to solve it?</p>
<p>Thanks</p>
|
<p>EDIT: It looks like you should use tf.float16 starting with TF 2:
<a href="https://www.tensorflow.org/lite/convert/1x_compatibility#unsupported_apis" rel="nofollow noreferrer">https://www.tensorflow.org/lite/convert/1x_compatibility#unsupported_apis</a></p>
<p>Maybe file an issue on <a href="https://github.com/tensorflow/tensorflow/issues/new?labels=type%3Abug&template=00-bug-issue.md" rel="nofollow noreferrer">github</a>.</p>
|
android|android-studio|tensorflow|quantization|tensorflow-lite
| 0
|
376,764
| 63,142,022
|
Computing the loss (MSE) for every iteration over time in TensorFlow
|
<p>I want to use Tensorboard to plot the mean squared error (y-axis) for every iteration over a given time frame (x-axis), say 5 minutes.</p>
<p>However, I can only plot the MSE for every epoch and set a callback at 5 minutes, which does not solve my problem.</p>
<p>I have searched the internet for ways to set a maximum number of iterations rather than epochs when calling model.fit, but without luck. I know the number of iterations is the number of batches needed to complete one epoch, but since I want to tune the batch_size, I prefer to work in iterations.</p>
<p>My code currently looks like the following:</p>
<pre><code>input_size = len(train_dataset.keys())
output_size = 10
hidden_layer_size = 250
n_epochs = 3
weights_initializer = keras.initializers.GlorotUniform()
#A function that trains and validates the model and returns the MSE
def train_val_model(run_dir, hparams):
model = keras.models.Sequential([
#Layer to be used as an entry point into a Network
keras.layers.InputLayer(input_shape=[len(train_dataset.keys())]),
#Dense layer 1
keras.layers.Dense(hidden_layer_size, activation='relu',
kernel_initializer = weights_initializer,
name='Layer_1'),
#Dense layer 2
keras.layers.Dense(hidden_layer_size, activation='relu',
kernel_initializer = weights_initializer,
name='Layer_2'),
#activation function is linear since we are doing regression
keras.layers.Dense(output_size, activation='linear', name='Output_layer')
])
#Use the stochastic gradient descent optimizer but change batch_size to get BSG, SGD or MiniSGD
optimizer = tf.keras.optimizers.SGD(learning_rate=0.001, momentum=0.0,
nesterov=False)
#Compiling the model
model.compile(optimizer=optimizer,
loss='mean_squared_error', #Computes the mean of squares of errors between labels and predictions
metrics=['mean_squared_error']) #Computes the mean squared error between y_true and y_pred
# initialize TimeStopping callback
time_stopping_callback = tfa.callbacks.TimeStopping(seconds=5*60, verbose=1)
#Training the network
history = model.fit(normed_train_data, train_labels,
epochs=n_epochs,
batch_size=hparams['batch_size'],
verbose=1,
#validation_split=0.2,
callbacks=[tf.keras.callbacks.TensorBoard(run_dir + "/Keras"), time_stopping_callback])
return history
#train_val_model("logs/sample", {'batch_size': len(normed_train_data)})
train_val_model("logs/sample1", {'batch_size': 1})
</code></pre>
<pre><code>%tensorboard --logdir_spec=BSG:logs/sample,SGD:logs/sample1
</code></pre>
<p>resulting in:</p>
<p><a href="https://i.stack.imgur.com/0fBZW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0fBZW.png" alt="x-axis: epochs, y-axis: MSE" /></a></p>
<p>The desired output should look something like this:</p>
<p><a href="https://i.stack.imgur.com/2q33o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2q33o.png" alt="x-axis: minutes, y-axis: MSE" /></a></p>
|
<p>The answer was actually quite simple.</p>
<p>tf.keras.callbacks.TensorBoard has an update_freq argument allowing you to control when to write losses and metrics to tensorboard. The standard is epoch, but you can change it to batch or an integer if you want to write to tensorboard every n batches. See the documentation for more information: <a href="https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/TensorBoard" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/TensorBoard</a></p>
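<p>In your code that would look something like this (a sketch): <code>update_freq='batch'</code> writes after every batch, or you can pass an integer <code>n</code> to write every <code>n</code> batches:</p>
<pre><code>tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir=run_dir + "/Keras",
    update_freq='batch')  # or e.g. update_freq=100

history = model.fit(normed_train_data, train_labels,
                    epochs=n_epochs,
                    batch_size=hparams['batch_size'],
                    callbacks=[tensorboard_callback, time_stopping_callback])
</code></pre>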
|
python|tensorflow|machine-learning|neural-network|tensorboard
| 0
|
376,765
| 62,935,077
|
How to fix the reshape process of train and test in CNN via Python
|
<p>I have a problem fixing the reshape process of the train and test sets in a <strong>CNN</strong> in Python.</p>
<p>The train set has shape <code>(270, 660, 3)</code>, while the test set has <code>(163, 600, 3)</code>; because of this, they are not the same shape.</p>
<p>How can I fix it?</p>
<p>Here is my block shown below.</p>
<p><strong>Here is CNN</strong></p>
<pre><code>classifier = Sequential()
classifier.add(Convolution2D(filters = 32,
kernel_size=(3,3),
data_format= "channels_last",
input_shape=(270, 660, 3),
activation="relu")
)
classifier.add(MaxPooling2D(pool_size = (2,2)))
classifier.add(Convolution2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Convolution2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Flatten())
classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
</code></pre>
<p><strong>Fitting the CNN to the images</strong></p>
<pre><code>train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
</code></pre>
<p><strong>Create Training Test and Training Test</strong></p>
<pre><code>training_set = train_datagen.flow_from_directory(train_path,
target_size=(270, 660),
batch_size=32,
class_mode='binary')
test_set = test_datagen.flow_from_directory(
test_path,
target_size=(270, 660),
batch_size=32,
class_mode='binary')
</code></pre>
<p><strong>Fit the CNN to the training set and then evaluate our test set</strong></p>
<pre><code>classifier.fit_generator(
training_set,
steps_per_epoch=50,
epochs=30,
validation_data=test_set,
validation_steps=200)
</code></pre>
<p><strong>Prediction</strong></p>
<pre><code>directory = os.listdir(test_genuine_path)
print(directory[3])
print("Path : ", test_genuine_path + "/" + directory[3])
imgFGenuine = cv2.imread(test_genuine_path + "/" + directory[3])
plt.imshow(imgFGenuine)
pred = classifier.predict(np.expand_dims(imgFGenuine,0)) # ERROR
print("Probability of Genuine Signature : ", "%.2f" % (1 - pred))
</code></pre>
<p>The error :</p>
<pre><code>ValueError: Error when checking input: expected conv2d_19_input to have 4 dimensions, but got array with shape (163, 660, 3)
</code></pre>
|
<p>Here is my answer.</p>
<p>After the <code>plt.imshow(imgFGenuine)</code> line, I fixed the issue by resizing the image to the shape the model expects. Note that <code>cv2.resize</code> takes <code>(width, height)</code>, so passing <code>(660, 270)</code> yields an array of shape <code>(270, 660, 3)</code> directly; no extra <code>reshape</code> is needed, and the existing <code>np.expand_dims(imgFGenuine, 0)</code> then adds the batch dimension.</p>
<pre><code># cv2.resize takes (width, height), so this produces shape (270, 660, 3)
imgFGenuine = cv2.resize(imgFGenuine, (660, 270))
</code></pre>
|
python|numpy|keras
| 0
|
376,766
| 63,218,311
|
Locating minimum date based on column equal to specific value in pandas dataframe?
|
<p>I have a dataframe that looks something like this:</p>
<pre><code> Date Account Symbol Name Transaction type
0 2020-06-24 Vanguard Brokerage VSGAX VANGUARD SMALL CAP GROWTH INDEX ADMIRAL CL Dividend
1 2020-06-24 Vanguard Brokerage VSGAX VANGUARD SMALL CAP GROWTH INDEX ADMIRAL CL Reinvestment
2 2020-06-24 Vanguard Brokerage VTSAX VANGUARD TOTAL STOCK MARKET INDEX ADMIRAL C Dividend
3 2020-06-24 Vanguard Brokerage VTSAX VANGUARD TOTAL STOCK MARKET INDEX ADMIRAL Reinvestment
4 2020-06-19 Vanguard Brokerage VHYAX VANGUARD HIGH DIVIDEND YIELD INDEX ADMIRAL Dividend
5 2020-06-19 Vanguard Brokerage VHYAX VANGUARD HIGH DIVIDEND YIELD INDEX ADMIRAL Reinvestment
7 2020-06-16 Vanguard Brokerage VHYAX VANGUARD HIGH DIVIDEND YIELD INDEX ADMIRAL Buy
8 2020-06-16 Vanguard Brokerage VSGAX VANGUARD SMALL CAP GROWTH INDEX ADMIRAL CL Buy
9 2020-06-16 Vanguard Brokerage VTSAX VANGUARD TOTAL STOCK MARKET INDEX ADMIRAL C Buy
</code></pre>
<p>I'd like to pull the earliest date for each symbol that has the transaction type 'Buy' and put that info into a dictionary. I'm not sure if it's better to use <code>.groupby</code>, or if a for-loop is more appropriate.</p>
<p>I've currently been trying to use a loop to iterate over all the rows and pull out all transactions that equal 'Buy'. After that, I've been trying to figure out how to pull the minimum date out of that new set of data and put it in a dictionary. Here is what I currently have.</p>
<pre><code>excel_file_1 = 'Stock.Activity.xlsm'
#Putting excel files into dataframes
df_vang_brok = pd.read_excel(excel_file_1, sheet_name = 'Vanguard.Brokerage',
index=False)
df_vang_ira = pd.read_excel(excel_file_1, sheet_name = 'Vanguard.IRA',
index=False)
df_schwab_brok = pd.read_excel(excel_file_1, sheet_name = 'Schwab.Brokerage',
index=False)
#Combining data frames into one
df_all = pd.concat([df_vang_brok, df_vang_ira, df_schwab_brok])
df_early={}
for index,row in df_all.iterrows():
if row['Transaction type'] == 'Buy':
print(row['Date'],row['Symbol'],row['Amount'])
df_early = {'Date': row['Date'], 'Symbol': row['Symbol'],
'Amount': row['Amount']}
print(df_early)
</code></pre>
<p>I get the output:</p>
<pre><code>2017-07-17 00:00:00 VSGAX -678.93
2017-07-05 00:00:00 VTSAX -1915.76
2017-07-03 00:00:00 VTYAX -3022.93
{'Date': Timestamp('2017-07-03 00:00:00'), 'Symbol': 'VTYAX', 'Amount': -3022.93}
</code></pre>
<p>It successfully pulls all transactions with "Buy" from the dataframe, but how do I pull the earliest date after this and put it in my df_early dictionary?</p>
<p>Is this even the best/most efficient way to go about this?</p>
<p>Thanks!</p>
|
<p>Something like this?</p>
<pre><code>df.loc[df['Transaction type'] == 'Buy'].groupby('Symbol')['Date'].min()
</code></pre>
<p>The first part (before .groupby()) selects all rows where 'Transaction type' is 'Buy'; then you group that dataframe by 'Symbol', select the 'Date' column and apply the min() function to it. If you want all the other columns as well, you can put the above in a separate df.loc[].</p>
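<p>Since you wanted the result in a dictionary, you can chain <code>.to_dict()</code> (a sketch, assuming the column names from your sample):</p>
<pre><code>earliest = (df.loc[df['Transaction type'] == 'Buy']
              .groupby('Symbol')['Date'].min()
              .to_dict())
# e.g. {'VSGAX': Timestamp('2017-07-17'), 'VTSAX': Timestamp('2017-07-05'), ...}
</code></pre>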
<p>I am still learning so maybe I am dead wrong; it is hard to try these things, but I will play around a bit :)</p>
|
python|pandas
| 1
|
376,767
| 63,099,290
|
Remove characters from string in a column
|
<p>I have a column which contains number of months in string and int format. Need to convert it into just integers. (eg 12)</p>
<pre class="lang-py prettyprint-override"><code>df1=pd.DataFrame({'Term':["12"," ","12 Months","12months","12mthsb","12 *4months"]})
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>df1=pd.DataFrame({'Term':["12","12","12","12","12"]})
</code></pre>
<p>Tried using <code>str.replace(r'\D', '')</code> and <code>str.replace(r'[^0-9]', '')</code> but then the row with just the number (i.e. 12) gets replaced by NaN</p>
|
<p>You probably have an int/float/str mix in the column.</p>
<p>You can try to convert to str and then replace:</p>
<pre><code>df1['Term'].astype(str).str.replace(r'\D', '')
</code></pre>
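<p>If you need actual numbers afterwards (your sample includes a blank entry, which becomes an empty string after the replace), <code>pd.to_numeric</code> with <code>errors='coerce'</code> handles that (a sketch):</p>
<pre><code>cleaned = df1['Term'].astype(str).str.replace(r'\D', '', regex=True)
df1['Term'] = pd.to_numeric(cleaned, errors='coerce')  # blanks become NaN
</code></pre>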
|
python|pandas
| 1
|
376,768
| 63,153,619
|
Model is recognizing background better than objects
|
<p>I created a Siamese model with a triplet loss function.
I tested it a bit and noticed that when objects are small, say 2/5 of the image area, the model matches images with a similar background instead of the object.
Some of the pictures were taken on the same background, which I think is causing the issue.</p>
<p>Is there a way to extract the objects, train the model to recognize those objects, and ignore the background?</p>
<p>The shape of each image is (150, 150, 3).</p>
|
<p>The Siamese model actually depends on the encoded data: it simply matches two encoded feature representations, so it does not know which object you are interested in. You have to extract the object first and then do the matching between the extracted objects.</p>
<p>For example, if the model you built was for face matching,
use OpenCV to extract the faces and then do the matching you want to make.</p>
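<p>A minimal sketch of that extraction step with OpenCV's bundled Haar cascade (the file name and crop logic here are illustrative, not specific to your data):</p>
<pre><code>import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
img = cv2.imread('photo.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
crops = [img[y:y+h, x:x+w] for (x, y, w, h) in faces]  # feed these crops to the siamese model
</code></pre>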
|
python|image|tensorflow|image-processing|deep-learning
| 1
|
376,769
| 62,904,660
|
Get pandas items at selected intervals
|
<p>I am trying to find a faster way of selecting dates at varying intervals. Currently, I am looping through the data frame and finding the required interval spans using <code>iloc</code>. The performance is causing a bottleneck, though. The files are huge and there are many of them, so any help is welcome.</p>
<pre><code>#example
df = pd.DataFrame(pd.date_range(start='01/01/1980', end='01/01/2020'), columns=['DT'])
n = 5
spans = []
max_len = len(df) - n
for k in df.index:
if k < max_len:
spans.append([df.iloc[k].DT, df.iloc[k + n].DT])
</code></pre>
<p>Is there a "better" way of doing this, i.e. faster. Thanks.</p>
|
<p>Maybe you can shift <code>DT</code> column by required amount:</p>
<pre><code>df = pd.DataFrame(pd.date_range(start='01/01/2018', end='01/01/2020'), columns=['DT'])
df['DT2'] = df['DT'].shift(-5)
print(df[df['DT2'].notna()])
</code></pre>
<p>Prints:</p>
<pre><code> DT DT2
0 2018-01-01 2018-01-06
1 2018-01-02 2018-01-07
2 2018-01-03 2018-01-08
3 2018-01-04 2018-01-09
4 2018-01-05 2018-01-10
.. ... ...
721 2019-12-23 2019-12-28
722 2019-12-24 2019-12-29
723 2019-12-25 2019-12-30
724 2019-12-26 2019-12-31
725 2019-12-27 2020-01-01
[726 rows x 2 columns]
</code></pre>
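<p>If you specifically need the list-of-pairs structure from your loop, you can build it from the two columns without iterating (a sketch):</p>
<pre><code>valid = df[df['DT2'].notna()]
spans = valid[['DT', 'DT2']].values.tolist()  # [[start, end], ...]
</code></pre>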
|
python|pandas|python-2.7|dataframe
| 2
|
376,770
| 62,902,731
|
Sort the columns by unique values
|
<p>I have this data-frame:</p>
<pre><code> AAA X_980 X_100 X_990 X_1100 X_2200 X_Y_100 X_Y_2200 X_Y_990 X_Y_1100 X_Y_980 X_10_100 X_10_980 X_10_990 X_10_1100 X_10_2200 X_A X_A_B
100 6 6 6 3 4 1 7 5 1 9 9 2 7 3 7 3 8
980 2 9 5 5 9 3 6 2 1 3 1 8 2 9 4 8 4
990 8 8 7 7 9 3 5 7 3 1 5 5 6 6 1 3 4
1100 6 5 4 7 4 6 2 1 6 2 3 5 3 9 7 5 2
2200 7 4 3 2 4 5 9 1 9 4 6 5 8 7 7 7 9
</code></pre>
<p>As you can see, there are 5 unique values in column <code>AAA</code>, and 3 groups of columns: <code>X_</code>, <code>X_Y_</code>, and <code>X_10_</code>, followed by a suffix with each of the unique values. I want to change the order of the columns so each group of columns is sorted by the unique values (ascending).</p>
<p>Expected result:</p>
<pre><code> AAA X_100 X_980 X_990 X_1100 X_2200 X_Y_100 X_Y_980 X_Y_990 X_Y_1100 X_Y_2200 X_10_100 X_10_980 X_10_990 X_10_1100 X_10_2200 X_A X_A_B
100 6 6 6 3 4 1 9 5 1 7 9 2 7 3 7 3 8
980 9 2 5 5 9 3 3 2 1 6 1 8 2 9 4 8 4
990 8 8 7 7 9 3 1 7 3 5 5 5 6 6 1 3 4
1100 5 6 4 7 4 6 2 1 6 2 3 5 3 9 7 5 2
2200 4 7 3 2 4 5 4 1 9 9 6 5 8 7 7 7 9
</code></pre>
|
<p><strong>Approach #1</strong></p>
<p>With simple columns manipulation -</p>
<pre><code># df1 holds the columns in the desired sorted order (see Approach #2 for its construction)
df1 = df[[i+str(j) for i in ['X_', 'X_Y_', 'X_10_'] for j in df.AAA]]
c = df.columns.values.copy()
c1 = df1.columns
c[np.isin(c,c1)] = c1
df_out = df.loc[:,c]
</code></pre>
<p>Sample output -</p>
<pre><code>In [174]: df_out
Out[174]:
AAA X_100 X_980 X_990 X_1100 X_2200 X_Y_100 X_Y_980 X_Y_990 X_Y_1100 X_Y_2200 X_10_100 X_10_980 X_10_990 X_10_1100 X_10_2200 X_A X_A_B
0 100 6 6 6 3 4 1 9 5 1 7 9 2 7 3 7 3 8
1 980 9 2 5 5 9 3 3 2 1 6 1 8 2 9 4 8 4
2 990 8 8 7 7 9 3 1 7 3 5 5 5 6 6 1 3 4
3 1100 5 6 4 7 4 6 2 1 6 2 3 5 3 9 7 5 2
4 2200 4 7 3 2 4 5 4 1 9 9 6 5 8 7 7 7 9
</code></pre>
<p><strong>Approach #2</strong> : Pushes the new data up-front</p>
<pre><code>In [117]: df1 = df[[i+str(j) for i in ['X_', 'X_Y_', 'X_10_'] for j in df.AAA]]
In [118]: c,c1 = df.columns,df1.columns
In [119]: pd.concat(( df1, df[c[~np.isin(c,c1)]]),axis=1)
Out[119]:
X_100 X_980 X_990 X_1100 X_2200 X_Y_100 X_Y_980 X_Y_990 X_Y_1100 X_Y_2200 X_10_100 X_10_980 X_10_990 X_10_1100 X_10_2200 AAA X_A X_A_B
0 6 6 6 3 4 1 9 5 1 7 9 2 7 3 7 100 3 8
1 9 2 5 5 9 3 3 2 1 6 1 8 2 9 4 980 8 4
2 8 8 7 7 9 3 1 7 3 5 5 5 6 6 1 990 3 4
3 5 6 4 7 4 6 2 1 6 2 3 5 3 9 7 1100 5 2
4 4 7 3 2 4 5 4 1 9 9 6 5 8 7 7 2200 7 9
</code></pre>
|
python|pandas|numpy
| 4
|
376,771
| 63,204,487
|
pandas - combining datasets
|
<p>I have 3 datasets I am trying to combine with pandas.</p>
<p>The first dataset looks like this. It has multiple index values for postcode, as there are multiple restaurants in the dataframe (I am trying to give those restaurants more demographic context).</p>
<pre><code> postcode restaurants
3793 3,577
3477 21
3971 26
3222 7,519
3747 3,859
</code></pre>
<p>The second type is like this (mainly postcodes versus one or maybe two attributes; a key-to-one-value pair).</p>
<pre><code> postcode burgers
2640 38064
postcode soda
3000 23715
3002 854
3003 780
3004 35
3006     3288
</code></pre>
<p>These have been simplified.</p>
<p>When using concat or merge with pandas, I am receiving errors of</p>
<pre><code>ValueError: Plan shapes are not aligned
</code></pre>
<p>With this code</p>
<pre><code>result = pd.concat(frames,join='outer')
</code></pre>
<p>How can I simply join these datasets into one? What mistake am I making?</p>
<h3>Expected Output based on a comment</h3>
<p>Basically looking for burgers and sodas to be placed into the data frame as a value against the postcode.</p>
<p>example</p>
<pre><code> postcode pop growth burgers soda address
3793 3,577 123123 1231 AbyRoad
3793 3,577 12351 5151 northst
3971 26 6666 7777 northunder abby
</code></pre>
|
<p>First, you need to ensure that the postcode column is the (only) index for each of the dataframes; the snippet below does this for all of them.</p>
<p>Once every dataframe is indexed by postcode, put them in a list called frames (a list of dataframes) and use the following code.</p>
<pre><code>dfList = [df1, df2, df3]
frames = [df.set_index('postcode') for df in dfList]
pd.concat(frames, axis=1)
</code></pre>
<p>If that doesn't work, maybe try this:</p>
<pre><code>from functools import reduce
frames = [df.reset_index() for df in dfList] #reset the indexes and add dfs into a list
df_final = reduce(lambda left,right: pd.merge(left,right,on='postcode'), frames)
</code></pre>
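<p>One caveat: <code>pd.merge</code> defaults to an inner join, so postcodes missing from any frame are dropped; pass <code>how='outer'</code> to keep them (a sketch):</p>
<pre><code>df_final = reduce(lambda left, right: pd.merge(left, right, on='postcode', how='outer'), frames)
</code></pre>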
|
python|pandas
| 1
|
376,772
| 62,902,870
|
Python pandas vectorization comparison between 2 dataframes
|
<p>I have two dataframes of different lengths.</p>
<p>df1</p>
<pre><code> gene_name chr start stop gene
0 ARNTL chr11 13376772 13376843 gene_name
1 ARNTL chr11 13393709 13393956 gene_name
2 PPP4R1 chr18 9595015 9595151 gene_name
3 PPP4R1 chr18 9595015 9595151 gene_name
4 SLC9B1 chr4 103806204 103806485 gene_name
... ... ... ... ... ...
4640 GCDH chr19 13010281 13010813 gene_name
4641 ARL4A chr7 12727790 12730558 gene_name
4642 ARL4A chr7 12727790 12730558 gene_name
4643 SMURF1 chr7 98630659 98630744 gene_name
4644 TSTD1 chr1 161007421 161007865 gene_name
4645 rows × 5 columns
</code></pre>
<p>d3</p>
<pre><code> chr start stop exon exon_number gene gene_name
0 chr1 901877 901994 exon_number 1 gene_name PLEKHN1
1 chr1 902084 902183 exon_number 2 gene_name PLEKHN1
2 chr1 905657 905803 exon_number 3 gene_name PLEKHN1
3 chr1 905901 905981 exon_number 4 gene_name PLEKHN1
4 chr1 906066 906138 exon_number 5 gene_name PLEKHN1
... ... ... ... ... ... ... ...
243869 chrY 15526615 15526673 exon_number 5 gene_name UTY
243870 chrY 15522873 15522993 exon_number 6 gene_name UTY
243871 chrY 15508182 15508852 exon_number 7 gene_name UTY
243872 chrY 15591394 15591803 exon_number 1 gene_name UTY
243873 chrY 15590922 15591197 exon_number 2 gene_name UTY
243874 rows × 7 columns
</code></pre>
<p>I am trying to iterate through rows of df1 and make comparisons to d3</p>
<p>here is my current code.</p>
<pre><code>for r,i in df1.iterrows():
for row, items in d3.iterrows():
if i[1] == items[0] and i[2] >= items[1] and i[3] <= items[2] and i[0] == items[6]:
print(i,items)
</code></pre>
<p>This gets the job done but takes quite a while to run on so many rows.</p>
<p>I would like to vectorize this but am unsure how best to proceed.</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#database-style-dataframe-or-named-series-joining-merging" rel="nofollow noreferrer"><code>pd.merge</code></a>:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.merge(df1, d3, on=['chr', 'gene_name'])
df = df[(df.start_x >= df.start_y) & (df.stop_x <= df.stop_y)]
</code></pre>
<p>(btw, it's better to use column names, not confusing column indices)</p>
|
python|pandas
| 1
|
376,773
| 63,095,574
|
How do I save/export (as .tf or .tflite), run, or test this Tensorflow Convolutional Neural Network (CNN) which I trained as a python file?
|
<p>How do I save, run, or test this Tensorflow Convolutional Neural Network (CNN) which I trained as a python file?</p>
<p>I want to be able to export/save this model as a <code>.tf</code> and <code>.tflite</code> file as well as input images to test it.</p>
<p>Here is the code for my model:</p>
<pre><code>import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np
DATA_DIR = 'data'
NUM_STEPS = 1000
MINIBATCH_SIZE = 100
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev = 0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
def conv_later(input, shape):
W = weight_variable(shape)
b = bias_variable([shape[3]])
return tf.nn.relu(conv2d(input, W) + b)
def full_layer(input, size):
in_size = int(input.get_shape()[1])
W = weight_variable([in_size, size])
b = bias_variable([size])
return tf.matmul(input, W) + b
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
x_image = tf.reshape(x, [-1, 28, 28, 1])
conv1 = conv_later(x_image, shape=[5,5,1,32])
conv1_pool = max_pool_2x2(conv1)
conv2 = conv_later(conv1_pool, shape=[5,5,32,64])
conv2_pool = max_pool_2x2(conv2)
conv2_flat = tf.reshape(conv2_pool, [-1, 7*7*64])
full_1 = tf.nn.relu(full_layer(conv2_flat, 1024))
keep_prob = tf.placeholder(tf.float32)
full1_drop = tf.nn.dropout(full_1, keep_prob=keep_prob)
y_conv = full_layer(full1_drop, 10)
mnist = input_data.read_data_sets(DATA_DIR, one_hot=True)
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y_conv, y_))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(NUM_STEPS):
batch = mnist.train.next_batch(50)
if i % 100 == 0:
train_accuracy = sess.run(accuracy, feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
print("step {}, training accuracy {}".format(i, train_accuracy))
sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
X = mnist.test.images.reshape(10, 1000, 784)
Y = mnist.test.labels.reshape(10, 1000, 10)
test_accuracy = np.mean([sess.run(accuracy, feed_dict={x:X[i], y_:Y[i], keep_prob:1.0}) for i in range(10)])
print("test accuracy: {}".format(test_accuracy))
</code></pre>
<p>Can someone please tell me how to save/export this model as a <code>.tf</code> or <code>.tflite</code>, and test this model?</p>
|
<p>Currently, TensorFlow 2 is much more feasible to work with, so I am posting a TF2 version, replicating your model as closely as possible.</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images,
test_labels) = fashion_mnist.load_data()
train_images = train_images / 255.0
test_images = test_images / 255.0
inputs = keras.Input(shape=(28, 28, 1), name="img")
x = layers.Conv2D(32, 5, activation="relu")(inputs)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(64, 5, activation="relu")(x)
x = layers.MaxPooling2D(2)(x)
x = layers.Flatten()(x)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(rate=0.5)(x)
outputs = layers.Dense(10)(x)  # linear logits, to match from_logits=True in the loss
model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")
print(model.summary())
keras.utils.plot_model(
model, "model.png", show_shapes=True)
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam(learning_rate=1e-4),
metrics=["accuracy"],
)
history = model.fit(train_images, train_labels, batch_size=64,
epochs=2, validation_split=0.2)
# Saving Model
model.save("model.tf")
model = keras.models.load_model("model.tf")
test_scores = model.evaluate(test_images, test_labels, verbose=2)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])
# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Save the TF Lite model.
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
f.write(tflite_model)
</code></pre>
<p>Output:</p>
<pre><code>Model: "mnist_model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
img (InputLayer) [(None, 28, 28, 1)] 0
_________________________________________________________________
conv2d (Conv2D) (None, 24, 24, 32) 832
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 12, 12, 32) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 8, 8, 64) 51264
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 4, 4, 64) 0
_________________________________________________________________
flatten (Flatten) (None, 1024) 0
_________________________________________________________________
dense (Dense) (None, 1024) 1049600
_________________________________________________________________
dropout (Dropout) (None, 1024) 0
_________________________________________________________________
dense_1 (Dense) (None, 10) 10250
=================================================================
Total params: 1,111,946
Trainable params: 1,111,946
Non-trainable params: 0
_________________________________________________________________
Epoch 1/2
750/750 [==============================] - 82s 109ms/step - loss: 0.8672 - accuracy: 0.6908 - val_loss: 0.5704 - val_accuracy: 0.7868
Epoch 2/2
750/750 [==============================] - 109s 145ms/step - loss: 0.5553 - accuracy: 0.7936 - val_loss: 0.4854 - val_accuracy: 0.8205
313/313 - 14s - loss: 0.4997 - accuracy: 0.8206
Test loss: 0.4996977150440216
Test accuracy: 0.8205999732017517
</code></pre>
<p>Model Details: <a href="https://i.stack.imgur.com/erqQx.png" rel="nofollow noreferrer">LINK</a>.</p>
|
python|tensorflow|machine-learning|deep-learning|neural-network
| 1
|
376,774
| 63,006,475
|
How to solve ImportError: Keras requires TensorFlow 2.2 or higher. Install TensorFlow via `pip install tensorflow`?
|
<p>I get this error when I try to import Keras into my project.</p>
<blockquote>
<p>How to solve ImportError: Keras requires TensorFlow 2.2 or higher. Install TensorFlow via <code>pip install tensorflow</code></p>
</blockquote>
<p>I verified the versions I have installed (with pip) for everything and I have:</p>
<ul>
<li>Python 3.7.7</li>
<li>Tensorflow 2.2.0</li>
<li>keras 2.4.3</li>
</ul>
<p>I have linked a picture of the full error. There is some stuff about Dll but I'm not sure if this is what creates the error.</p>
<p><a href="https://i.stack.imgur.com/tZ7wp.png" rel="noreferrer"><img src="https://i.stack.imgur.com/tZ7wp.png" alt="Error" /></a></p>
|
<p>Tensorflow requires Python 3.5–3.8 and pip >= 19.0.</p>
<p>in order to fix it:</p>
<pre><code>sudo apt install python3-pip
pip3 install --upgrade pip
python3 -m pip install tensorflow
</code></pre>
<p>If you already had tensorflow installed, substitute the last command with this:</p>
<pre><code>pip3 install --upgrade tensorflow
</code></pre>
<p>Hope it helps.</p>
|
python|tensorflow|keras
| 8
|
376,775
| 63,098,764
|
Element-wise multiplication of a series of two lists from separate Pandas Dataframe Series in Python
|
<p>I have a dataframe where there are two series, and each contains a number of lists. I would like to perform element-wise multiplication of each list in 'List A' with the corresponding list in 'List B'.</p>
<pre><code>df = pd.DataFrame({'ref': ['A', 'B', 'C', 'D'],
'List A': [ [0,1,2], [2,3,4], [3,4,5], [4,5,6] ],
'List B': [ [0,1,2], [2,3,4], [3,4,5], [4,5,6] ] })
df['New'] = df.apply(lambda x: (a*b for a,b in zip(x['List A'], x['List B'])) )
</code></pre>
<p>The aim is to get the following output:</p>
<pre><code>print(df['New'])
0 [0, 1, 4]
1 [4, 9, 16]
2 [9, 16, 25]
3 [16, 25, 36]
Name: New, dtype: object
</code></pre>
<p>However I am getting the following error:</p>
<pre><code>KeyError: ('List A', 'occurred at index ref')
</code></pre>
|
<p>Your code is almost there. Mostly, you need to pass <code>axis=1</code> to <code>apply</code>; note also that the generator expression is wrapped in <code>list()</code>, so each cell holds an actual list rather than a generator:</p>
<pre><code>df["new"] = df.apply(lambda x: list(a*b for a,b in zip(x['List A'], x['List B'])), axis=1)
print(df)
</code></pre>
<p>The output is:</p>
<pre><code> ref List A List B new
0 A [0, 1, 2] [0, 1, 2] [0, 1, 4]
1 B [2, 3, 4] [2, 3, 4] [4, 9, 16]
2 C [3, 4, 5] [3, 4, 5] [9, 16, 25]
3 D [4, 5, 6] [4, 5, 6] [16, 25, 36]
</code></pre>
|
python|pandas|list|dataframe|multiplication
| 3
|
376,776
| 63,079,717
|
How to remove ' ' from a list in python
|
<p>I have a df column of lists. Each row looks like <code>[1,2,3,4,'',6,7]</code> or <code>[2,3,'',5,6]</code>. I want to remove the <code>''</code> in each row. I used</p>
<pre><code>df[column].apply(lambda x: x.remove(''))
</code></pre>
<p>But it didn't work. Could someone help me? Thanks</p>
<pre><code>ValueError: list.remove(x): x not in list
</code></pre>
|
<p>Make an explicit filter on it: <code>list(filter(lambda x: x != "", your_list))</code> (in Python 3 <code>filter</code> returns an iterator, hence the <code>list()</code>) or use a list comprehension: <code>[x for x in your_list if x != ""]</code>. They do the same thing; it is just a matter of preference.</p>
<p>You don't want to filter out by a boolean method because then you'd accidentally get rid of 0s because they're "falsy" in python.</p>
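<p>Applied to the dataframe column, a minimal sketch (reusing your <code>apply</code> approach, but returning the new list instead of mutating in place):</p>
<pre><code>df[column] = df[column].apply(lambda lst: [x for x in lst if x != ''])
</code></pre>
<p>Your original version failed for two reasons: <code>list.remove</code> raises when <code>''</code> is absent from a row, and it returns <code>None</code>, so the column would have been overwritten with <code>None</code> values anyway.</p>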
|
python|pandas|list
| 2
|
376,777
| 62,987,718
|
PDF hyperlink extraction and writing to a pandas dataframe
|
<p>I am using altered code from this post (my code below):</p>
<p><a href="https://stackoverflow.com/questions/27744210/extract-hyperlinks-from-pdf-in-python">Extract hyperlinks from PDF in Python</a></p>
<p>I am trying to extract hyperlinks (URLs) from a PDF. I found code from the link above which worked. However, I am using Jupyter QtConsole and there are not enough rows to grab everything coming out via the print function. I am thus trying to write each URL as a new row to a pandas dataframe so I can export to a CSV and see everything.</p>
<p>When I run the code below without the commented lines toward the bottom, it runs fine - printing each unique URL to the console. When I add the dataframe lines the code prints each URL 10ish times in QtConsole. The resultant dataframe stops after the first URL (despite the program still running and printing URLs) and it shows the first URL multiple times in the dataframe. I have added comments where I think the problems lie. I'm clearly a bit out of my depth in understanding how to create a new dataframe row for each URL (which I believe is a dictionary key). I'm also thinking my forloop length referencing "pages" is a problem but I'm a bit confused as to what to reference for the forloop length.</p>
<p>Please help.</p>
<pre><code>import pandas as pd
import PyPDF2
PDFFile = open(r'file\path.pdf','rb')
PDF = PyPDF2.PdfFileReader(PDFFile)
pages = PDF.getNumPages()
key = '/Annots'
uri = '/URI'
ank = '/A'
for page in range(pages):
print("Current Page: {}".format(page))
pageSliced = PDF.getPage(page)
pageObject = pageSliced.getObject()
if key in pageObject.keys():
ann = pageObject[key]
for a in ann:
try:
u = a.getObject()
if uri in u[ank].keys():
df = pd.DataFrame(columns=['URL']) #POSSIBLE PROBLEM AREA
for i in range(pages): #LIKELY PROBLEM AREA
df.loc[i] = (u[ank][uri]) #LIKELY PROBLEM AREA
print(u[ank][uri])
except KeyError:
pass
</code></pre>
|
<p>I figured out how to fix my own problem. This seems to happen when I post to a forum. Perhaps a forum posting is a prerequisite to discovery. Regardless...</p>
<p>All code leading up to what follows is the same.</p>
<p>I created a list (aptly named "mylist") outside the initial forloop. I then append the current "u[ank][uri]" (aka the URL in English) to my list in the nested if where I am printing the URL. I then convert my list into a pandas dataframe at the end. This is giving me the results I was hoping for. I can then write my dataframe to a CSV.</p>
<pre><code>mylist = []
for page in range(pages):
print("Current Page: {}".format(page))
pageSliced = PDF.getPage(page)
pageObject = pageSliced.getObject()
if key in pageObject.keys():
ann = pageObject[key]
for a in ann:
try:
u = a.getObject()
if uri in u[ank].keys():
mylist.append(u[ank][uri])
print(u[ank][uri])
except KeyError:
pass
df = pd.DataFrame(mylist)
df.to_csv('fileoutput.csv')
</code></pre>
|
python|pandas|pdf
| 0
|
376,778
| 63,173,294
|
Fastest way to iterate function over pandas dataframe
|
<p>I have a function which operates over lines of a csv file, adding values of different cells to dictionaries depending on whether conditions are met:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.concat([pd.read_csv(filename) for filename in args.csv], ignore_index = True)
ID_Use_Totals = {}
ID_Order_Dates = {}
ID_Received_Dates = {}
ID_Refs = {}
IDs = args.ID
def TSQs(row):
global ID_Use_Totals, ID_Order_Dates, ID_Received_Dates
if row['Stock Item'] not in IDs:
pass
else:
if row['Action'] in ['Order/Resupply', 'Cons. Purchase']:
if row['Stock Item'] not in ID_Order_Dates:
ID_Order_Dates[row['Stock Item']] = [{row['Ref']: pd.to_datetime(row['TransDate'])}]
else:
ID_Order_Dates[row['Stock Item']].append({row['Ref']: pd.to_datetime(row['TransDate'])})
elif row['Action'] == 'Received':
if row['Stock Item'] not in ID_Received_Dates:
ID_Received_Dates[row['Stock Item']] = [{row['Ref']: pd.to_datetime(row['TransDate'])}]
else:
ID_Received_Dates[row['Stock Item']].append({row['Ref']: pd.to_datetime(row['TransDate'])})
elif row['Action'] == 'Use':
if row['Stock Item'] in ID_Use_Totals:
ID_Use_Totals[row['Stock Item']].append(row['Qty'])
else:
ID_Use_Totals[row['Stock Item']] = [row['Qty']]
else:
pass
</code></pre>
<p>Currently, I am doing:</p>
<pre class="lang-py prettyprint-override"><code>for index, row in df.iterrows():
TSQs(row)
</code></pre>
<p>But <code>timer()</code> returns between 70 and 90 seconds for a 40,000 line csv file.</p>
<p>I want to know what the fastest way of implementing this is over the entire dataframe (which could potentially be hundreds of thousands of rows).</p>
|
<p>Probably fastest not to iterate at all:</p>
<pre><code># Build some boolean indices for your various conditions
idx_stock_item = df["Stock Item"].isin(IDs)
idx_purchases = df["Action"].isin(['Order/Resupply', 'Cons. Purchase'])
idx_order_dates = df["Stock Item"].isin(ID_Order_Dates)
# combine the indices to act on specific rows all at once
idx_combined = idx_stock_item & idx_purchases & ~idx_order_dates
# It looks like you were putting a single-entry dictionary in each row;
# wouldn't it make sense to just use two columns instead, i.e. take
# advantage of the DataFrame data structure? ID_Order_Dates is assumed
# here to be a DataFrame indexed by "Stock Item" rather than a dict.
ID_Order_Dates.loc[df.loc[idx_combined, "Stock Item"], "Ref"] = df.loc[idx_combined, "Ref"]
ID_Order_Dates.loc[df.loc[idx_combined, "Stock Item"], "Date"] = df.loc[idx_combined, "TransDate"]
# repeat for your other cases
# ...
</code></pre>
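<p>For instance, the 'Use' case from your function collapses to one expression (a sketch that keeps your dict-of-lists structure):</p>
<pre><code>ID_Use_Totals = (df[idx_stock_item & (df["Action"] == "Use")]
                 .groupby("Stock Item")["Qty"]
                 .apply(list)
                 .to_dict())
</code></pre>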
|
python|python-3.x|pandas|numpy
| 1
|
376,779
| 63,122,285
|
Is there any way to offload memory with TensorFlow?
|
<p>I have a method inside a class that prepares the data and trains on it in the same method. Each time the method gets called, my memory usage grows by around 200MB, which makes the script unable to train for long periods; in the best cases it trains 8-9 times before running out of memory. I tried commenting out the <code>load_weights</code> section, but that is not the source of the problem. I also tried using <code>model.fit</code> without the callbacks, but that does not seem to solve the issue either. Basically, I tried commenting out every line in this method, yet the memory usage keeps growing. In another script that trains on random numbers in a while loop, the memory does not fill up, so I am pretty sure that this method keeps adding data to memory without clearing it. I tried using <code>gc.collect()</code>, but it does not help at all.</p>
<p>Why does this happen, and how do I go about fixing it?</p>
<pre><code>def make_data(self):
if not os.path.exists("/py_stuff/BIN_API_v3/python-binance-master/"+str(self.coin)):
os.makedirs("/py_stuff/BIN_API_v3/python-binance-master/"+str(self.coin))
checkpoint_filepath ="/py_stuff/BIN_API_v3/python-binance-master/"+str(self.coin)+"/check_point"
weights_checkpoint = "/py_stuff/BIN_API_v3/python-binance-master/"+str(self.coin)
checkpoint_dir = os.path.dirname(checkpoint_filepath)
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_filepath,
save_weights_only=True,
mode='max',
save_best_only=True,
verbose=1)
dataset_train = self.df.tail(400)
training_set = dataset_train.iloc[:, 1:2].values
print (dataset_train.tail(5))
sc = MinMaxScaler(feature_range=(0,1))
training_set_scaled = sc.fit_transform(training_set)
X_train = []
y_train = []
for i in range(10, 400):
X_train.append(training_set_scaled[i-10:i, 0])
y_train.append(training_set_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
ST = time.time()
model = Sequential()
model.add(LSTM(units = 128, return_sequences = True, input_shape = (X_train.shape[1], 1)))
model.add(Dropout(0.2))
model.add(LSTM(units=128 , return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=128 , return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=128))
model.add(Dropout(0.2))
model.add(Dense(units=1 ))
model.compile(optimizer='adam', loss='mean_squared_error' , metrics=[tf.keras.metrics.RootMeanSquaredError()])
## loading weights
try:
model.load_weights(checkpoint_filepath)
print ("Weights loaded successfully $$$$$$$ ")
except:
print ("No Weights Found !!! ")
model.fit(X_train,y_train,epochs=20,batch_size=50, callbacks=[model_checkpoint_callback])
### saving model conf and weights
try:
# model.save(checkpoint_filepath)
model.save_weights(filepath=checkpoint_filepath)
print ("Saving weights and model done ")
except OSError as no_model:
print ("Error saving weights and model !!!!!!!!!!!! ")
print (time.time() - ST)
self.model = model
# tf.keras.backend.clear_session()
return
</code></pre>
|
<p>The issue here is that the model is recreated every time the function is called. Tensorflow does not release a model from memory until the session is restarted (tf < 2.0) or the script itself is rerun (any tf version).</p>
<p>You should create your model outside the function (preferably in the <code>__init__</code> method) and use it in your function for training:</p>
<pre><code>def __init__(self):
....
model = Sequential()
    # the training windows built in make_data are 10 timesteps long, so the
    # input shape is fixed here instead of referencing X_train (undefined in __init__)
    model.add(LSTM(units = 128, return_sequences = True, input_shape = (10, 1)))
model.add(Dropout(0.2))
model.add(LSTM(units=128 , return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=128 , return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=128))
model.add(Dropout(0.2))
model.add(Dense(units=1 ))
model.compile(optimizer='adam', loss='mean_squared_error' , metrics=[tf.keras.metrics.RootMeanSquaredError()])
self.model = model
def make_data(self):
....
ST = time.time()
model = self.model
## loading weights
try:
model.load_weights(checkpoint_filepath)
print ("Weights loaded successfully $$$$$$$ ")
except:
print ("No Weights Found !!! ")
....
</code></pre>
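<p>If you cannot reuse a single model, <code>tf.keras.backend.clear_session()</code> (which you already have commented out) is worth calling between rebuilds to drop state kept from previously built models, though reusing the model as above is the more reliable fix:</p>
<pre><code>import tensorflow as tf

tf.keras.backend.clear_session()  # call before building a replacement model
</code></pre>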
|
python|tensorflow
| 1
|
376,780
| 62,915,504
|
How to calculate a weighted average in Python for each unique value in two columns?
|
<p>The picture below shows a few lines of printed lists I have in Python. I would like to get: a list of unique values of boroughs, a corresponding list of unique values of years, and a list of weighted averages of "averages" with "nobs" as weights but for each borough and each year (the variable "type" indicates if there was just one, two or three types in a specific year in a borough).</p>
<p>I know how to get a weighted average using the entire lists:</p>
<pre><code>weighted_avg = np.average(average, weights=nobs)
</code></pre>
<p>But I don't know how to calculate one for each unique borough-year.</p>
<p><a href="https://i.stack.imgur.com/5d0zk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5d0zk.png" alt="enter image description here" /></a></p>
<p>I'm new to Python, please help if you know how to do it.</p>
|
<p>Assuming that the 'type' column doesn't affect your calculations, you can get the average using <code>groupby</code>. Here's the data:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'borough': ['b1', 'b2']*6, 'year': [2008, 2009, 2010, 2011]*3,
'average': np.random.randint(low=100, high=200, size=12),
'nobs': np.random.randint(low=1, high=40, size=12)})
print(df):
borough year average nobs
0 b1 2008 166 1
1 b2 2009 177 35
2 b1 2010 114 27
3 b2 2011 187 18
4 b1 2008 193 2
5 b2 2009 105 27
6 b1 2010 114 36
7 b2 2011 144 3
8 b1 2008 114 39
9 b2 2009 157 6
10 b1 2010 133 17
11 b2 2011 176 12
</code></pre>
<p>we add a new column which is the product of the average and nobs columns:</p>
<pre><code>df['average x nobs'] = df['average']*df['nobs']
newdf = pd.DataFrame({'weighted average': df.groupby(['borough', 'year']).sum()['average x nobs']/df.groupby(['borough', 'year']).sum()['nobs']})
print(newdf):
weighted average
borough year
b1 2008 119.000000
2010 118.037500
b2 2009 146.647059
2011 179.090909
</code></pre>
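<p>For reference, the same weighted averages can be computed more directly with <code>np.average</code> inside <code>groupby().apply()</code> (a minimal sketch on the synthetic <code>df</code> above):</p>
<pre><code>weighted = df.groupby(['borough', 'year']).apply(
    lambda g: np.average(g['average'], weights=g['nobs']))
print(weighted)
</code></pre>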
|
python|pandas
| 1
|
376,781
| 63,272,148
|
ValueError: 4 columns passed, passed data had 3 columns when converting python list to dataframe. How to add blank values if 3 passed?
|
<p>I have a list called 'data' that generally has lists with 3 fields but can sometimes have 4:</p>
<pre><code>[['Bob', 'DeVito', '100 Lbs'], ['Mac', 'Charles', '150 Lbs']]
</code></pre>
<p>If I try converting data to a dataframe with at least one of the lists having 4 elements, it will run fine:</p>
<pre><code>df = pd.DataFrame(data, columns=['First', 'Last', 'Weight', 'Height'])
</code></pre>
<p>but if I run it against the list seen above, it will crash saying</p>
<blockquote>
<p>ValueError: 4 columns passed, passed data had 3 columns</p>
</blockquote>
<p>How can I get it to convert to a df with blank values for the Height column without crashing when I give it a list of lists w/o the 4th column? That way the conversion will run on lists containing only len 3 elems, len 4 elems, or a combo of both .</p>
<p>Desired result:</p>
<pre><code>First Last Weight Height
Bob DeVito 100 Lbs None
Mac Charles 150 Lbs None
</code></pre>
|
<p>This should solve your problem. It builds one record per inner list, filling in np.nan whenever a height value is missing, and then creates the pandas dataframe from those records. (A per-row length check is used rather than a single try/except around the whole comprehension, because one short list would otherwise discard the heights of every row.)</p>
<pre><code>import pandas as pd
import numpy as np

data = [['Bob', 'DeVito', '100 Lbs'], ['Mac', 'Charles', '150 Lbs']]

# pad a missing 4th element with NaN, row by row
records = [{"First": a[0], "Last": a[1], "Weight": a[2],
            "Height": a[3] if len(a) > 3 else np.nan} for a in data]

df = pd.DataFrame(records)
print(df)

Output :
  First     Last   Weight  Height
0   Bob   DeVito  100 Lbs     NaN
1   Mac  Charles  150 Lbs     NaN
</code></pre>
|
python|pandas|dataframe
| 2
|
376,782
| 63,080,193
|
Passing a list to pandas loc method
|
<p>I'd like to change the values of certain columns in a pandas dataframe. But I can't seem to do it if I pass a list of columns inside <code>loc</code>.</p>
<pre><code>df = pd.DataFrame({
"ID" : [1, 2, 3, 4, 5],
"QA_needed" : [0, 1, 1, 0, 1],
"QC_needed" : [1, 0, 1, 0, 0],
"Report_needed" : [1, 1, 1, 0, 1]
})
df.loc[:, ["QA_needed", "Report_needed"]].replace({1: "True", 0: "False"}, inplace=True)
</code></pre>
<p>To do this I have to replace the values for each column individually</p>
<pre><code>df.loc[:, "QA_needed"].replace({1: "True", 0: "False"}, inplace=True)
df.loc[:, "QC_needed"].replace({1: "True", 0: "False"}, inplace=True)
</code></pre>
<p>Is there a way to pass the list <code>["QA_needed", "Report_needed"]</code> to the <code>loc</code> method?</p>
|
<p>Try <code>update</code></p>
<pre><code>df.update(df.loc[:, ["QA_needed", "Report_needed"]].replace({1: "True", 0: "False"}))
df
Out[96]:
ID QA_needed QC_needed Report_needed
0 1 False 1 True
1 2 True 0 True
2 3 True 1 True
3 4 False 0 False
4 5 True 0 True
</code></pre>
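<p>Alternatively (a minimal sketch), assign the replaced block straight back to the same columns, which avoids the separate <code>update</code> step:</p>
<pre><code>cols = ["QA_needed", "Report_needed"]
df[cols] = df[cols].replace({1: "True", 0: "False"})
</code></pre>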
|
python|pandas|dataframe|pandas-loc
| 3
|
376,783
| 62,964,298
|
How to select a specific TPU in Google Cloud?
|
<p>I'm trying to use TPUs on Google Cloud and figure out how to specify the right TPU to use. I'm trying to follow the quickstart:</p>
<p><a href="https://cloud.google.com/tpu/docs/quickstart" rel="nofollow noreferrer">https://cloud.google.com/tpu/docs/quickstart</a></p>
<p>But it doesn't say how to select a TPU, it only gives instructions to select a region.</p>
<pre><code>$ ctpu up --zone=us-central1-b \
--tf-version=2.1 \
--name=tpu-quickstart
</code></pre>
<p>I am wondering how to select a v2-32. At first I figured I should just specify <code>us-central1-a</code>, but I noticed that zones can hold more than one TPU type here:</p>
<p><a href="https://cloud.google.com/tpu/docs/types-zones" rel="nofollow noreferrer">https://cloud.google.com/tpu/docs/types-zones</a></p>
<p>For example, <code>us-central1-a</code> has both v2-128 and v2-32, so I'm not exactly sure that the zone alone can specify the TPU type. I'm sort of afraid of accidentally spinning up a paid TPU.</p>
|
<p>You can select the TPU type by using the <code>tpu-size</code> parameter, as per <a href="https://cloud.google.com/tpu/docs/creating-deleting-tpus#setup_VM_only" rel="nofollow noreferrer">the documentation</a> (also <a href="https://cloud.google.com/tpu/docs/types-zones#accelerator-type" rel="nofollow noreferrer">here</a>).</p>
<p>For example:</p>
<pre><code>ctpu up --zone=us-central1-a \
--tf-version=2.1 \
--name=tpu-quickstart \
--tpu-size=v2-32
</code></pre>
<p>Remember that only <code>v2-8</code> and <code>v3-8</code> <a href="https://cloud.google.com/tpu/pricing#pod-pricing" rel="nofollow noreferrer">are available</a> unless you have access to evaluation quota or have purchased a commitment.</p>
|
tensorflow|google-cloud-platform|google-cloud-functions|google-cloud-storage|tpu
| 3
|
376,784
| 63,011,947
|
Pandas finding average in a comma separated column
|
<p>I want to split one comma-separated column and take the mean of another column for each resulting value.</p>
<p>My file looks like this:</p>
<pre><code>ColumnA ColumnB
A, B, C 2.9
A, C 9.087
D 6.78
B, D, C 5.49
</code></pre>
<p>My output should look like this:</p>
<pre><code>A 7.4435
B 5.645
C 5.83
D 6.135
</code></pre>
<p>My code is this:</p>
<pre><code>df = pd.DataFrame(data.ColumnA.str.split(',', expand=True).stack(), columns= ['ColumnA'])
df = df.reset_index(drop = True)
df_avg = pd.DataFrame(df.groupby(by = ['ColumnA'])['ColumnB'].mean())
df_avg = df_avg.reset_index()
</code></pre>
<p>It has to be along these lines, but I can't figure it out.</p>
|
<p>Building on your solution: create the <code>index</code> from column <code>ColumnB</code> to avoid losing its values after <code>stack</code> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reset_index.html" rel="nofollow noreferrer"><code>Series.reset_index</code></a>; finally, <code>as_index=False</code> is added to keep the group key as a column after aggregation:</p>
<pre><code>df = (df.set_index('ColumnB')['ColumnA']
.str.split(',', expand=True)
.stack()
.reset_index(name='ColumnA')
.groupby('ColumnA', as_index=False)['ColumnB']
.mean())
print (df)
ColumnA ColumnB
0 A 5.993500
1 B 4.195000
2 C 5.825667
3 D 6.135000
</code></pre>
<p>Or alternative solution with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>DataFrame.explode</code></a>:</p>
<pre><code>df = (df.assign(ColumnA = df['ColumnA'].str.split(','))
.explode('ColumnA')
.groupby('ColumnA', as_index=False)['ColumnB']
.mean())
print (df)
ColumnA ColumnB
0 A 5.993500
1 B 4.195000
2 C 5.825667
3 D 6.135000
</code></pre>
|
pandas|csv|average
| 2
|
376,785
| 63,220,901
|
Python Pandas Market Calendars day count (Trading day vs Calendar Days)
|
<p>I am conducting some market research, and one of the variables I am investigating is the distribution of the time for an event to occur, modeled as a log distribution from which I create a cumulative probability density function as a function of time. (I simply convert my dates like so:</p>
<pre><code>A=datetime.strptime(UDate1[0],date_format)
B=datetime.strptime(UDate2[0],date_format)
</code></pre>
<p>and I can subtract like so:</p>
<pre><code>C=(A-B).days
</code></pre>
<p>and I am returned an integer number of days... 5, 6, 10, 11, whatever it may be).</p>
<p>My data should fit a log distribution; however, because I am currently using calendar days while my events only occur on market days, it is an unacceptable source of error and it creates empty bins within my distribution (days 6 and 7 are always zero because of weekends, plus holiday effects).</p>
<p>I cannot calculate an accurate cumulative distribution function this way, so I recently downloaded the Pandas Market Calendars package. Does anyone have experience calculating trading days vs calendar days? For example, looking at the period from July 1, 2020 to July 13, 2020: it would be 12 calendar days, but only 8 trading days.</p>
|
<p>Info on the Pandas Market Calendars is here:
<a href="https://pypi.org/project/pandas-market-calendars/" rel="nofollow noreferrer">https://pypi.org/project/pandas-market-calendars/</a></p>
<p>First, create a market data object as described in the link:</p>
<pre><code>import pandas_market_calendars as mcal
# Create a calendar
nyse = mcal.get_calendar('NYSE')
early = nyse.schedule(start_date='2012-07-01', end_date='2012-07-10')
print(mcal.date_range(early, frequency='1D'))
DatetimeIndex(['2012-07-02 20:00:00+00:00', '2012-07-03 17:00:00+00:00',
'2012-07-05 20:00:00+00:00', '2012-07-06 20:00:00+00:00',
'2012-07-09 20:00:00+00:00', '2012-07-10 20:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq=None)
</code></pre>
<p>Now, create a series of ones indexed by <em>market</em> days. Then re-index on <em>calendar</em> days and fill missing values with zeros. Compute the cumulative sum; the number of trading days between two dates is then the difference between the cumulative sums at those dates:</p>
<pre><code>import pandas as pd
bus_day_index = pd.DatetimeIndex(
['2012-07-02 20:00:00+00:00', '2012-07-03 17:00:00+00:00',
'2012-07-05 20:00:00+00:00', '2012-07-06 20:00:00+00:00',
'2012-07-09 20:00:00+00:00', '2012-07-10 20:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq=None)
bus_day_index = bus_day_index.normalize()
s = pd.Series(data=1, index=bus_day_index)
cal_day_index = pd.date_range(start=bus_day_index.min(), end=bus_day_index.max())
s = s.reindex(index=cal_day_index).fillna(0).astype(int)
s = s.cumsum()
s['2012-07-09'] - s['2012-07-03']
</code></pre>
<p>Advantage: This (inelegant) method incorporates non-trading days that fall on weekdays (Memorial Day, Labor Day, etc. in the U.S.).</p>
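<p>If your version of pandas_market_calendars exposes <code>valid_days</code> (a shorter sketch, assuming it does), the trading-day count between two dates can also be obtained directly:</p>
<pre><code>n_trading_days = len(nyse.valid_days(start_date='2020-07-01', end_date='2020-07-13'))
print(n_trading_days)  # 8 for NYSE (July 3, 2020 was a market holiday)
</code></pre>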
|
python|pandas|datetime|timedelta
| 2
|
376,786
| 63,051,666
|
Translating strings from Pandas dataframe in batches using googletrans
|
<p>I am trying to translate words from a Pandas dataframe column of ca. 200000 rows in length. It looks like this:</p>
<pre><code> df =| review | rating |
| love it | 5 |
| hate it | 1 |
| its ok | 3 |
| great | 4 |
</code></pre>
<p>I am attempting to translate this into a different language using googletrans, and I have seen some solutions using <code>df.apply</code> to apply the function to each row, however it is painfully slow in my case (roughly 16 hours needed to translate the whole column).</p>
<p>However, googletrans does support batch translation, where it takes a list of strings as an argument instead of just a single string.</p>
<p>I have been looking for a solution which takes advantage of this and my code looks like this:</p>
<pre><code>from googletrans import Translator
translator = Translator()
list1 = df.review.tolist()
translated = []
for i in range(0, len(df), 50):
translated.extend([x.text for x in translator.translate(list1[i:i+50], src='en' , dest='id')])
df['translated_review'] = translated #add back to df
</code></pre>
<p>But it is still just as slow. Could anyone shed some light on how to optimise this further?</p>
|
<p>Perhaps you could try reshaping the column of texts into fixed-size batches with <code>numpy</code> instead, i.e.:</p>
<pre><code>translated = []
for row in df.review.values.reshape((-1, 50)):
    # translate() expects a plain list of strings, so convert the numpy row;
    # collect the .text of each result, as in your original loop
    translated.extend(t.text for t in translator.translate(list(row), src='en', dest='id'))
</code></pre>
<p>Note that the length of the <code>df.review</code> series must be divisible by 50 for the <code>reshape</code> method to work. If it is not, either choose another batch size or trim the series to a length that is a multiple of 50.</p>
<p>A further improvement would be to parallelize the translations. For that you should look into parallel processing in Python, ie. <a href="https://wiki.python.org/moin/ParallelProcessing" rel="nofollow noreferrer">1</a>, <a href="https://stackoverflow.com/questions/20548628/how-to-do-parallel-programming-in-python">2</a>.</p>
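<p>As a rough illustration of the parallel route (a sketch only; the free googletrans endpoint may throttle or reject concurrent requests):</p>
<pre><code>from concurrent.futures import ThreadPoolExecutor
from googletrans import Translator

chunks = [list1[i:i + 50] for i in range(0, len(list1), 50)]

def translate_chunk(chunk):
    # one Translator per call to avoid sharing a session across threads
    return [t.text for t in Translator().translate(chunk, src='en', dest='id')]

with ThreadPoolExecutor(max_workers=4) as pool:
    translated = [text for result in pool.map(translate_chunk, chunks) for text in result]
</code></pre>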
|
python|pandas|loops|optimization
| 0
|
376,787
| 63,154,547
|
Iterating in Dataframe's Columns using column names as a List and then looping through the list in Python
|
<p>I'm trying to label-encode particular columns of a DataFrame. I have stored those column names in a list (<code>cat_features</code>).
Now I want to use a for loop to iterate through this list's elements (which are strings) and use them to access the dataframe's columns, but it says</p>
<pre><code>TypeError: argument must be a string or number
</code></pre>
<p>Since I'm accessing an element of the list, which is already a string, I don't understand why it throws that error.
Please help me understand why it doesn't work and what I can do to make it work.</p>
<pre><code>cat_features = [x for x in features if x not in features_to_scale]
from sklearn.preprocessing import LabelEncoder
for feature in cat_features:
le = LabelEncoder()
dataframe[feature] = le.fit_transform(dataframe[feature])
</code></pre>
|
<p>The error means that one or more of your columns contains a list/tuple/set or something similar. For this, you will need to convert the list/tuple to a string before you can apply a label encoder.</p>
<p>Also, instead of a loop, you can first filter your data frame down to the features you need and then use the apply function:</p>
<pre><code>df = main_df[cat_features]
df = df.astype(str)  # convert each column to string; LabelEncoder can't work on lists/tuples/sets

lb = LabelEncoder()
df = df.apply(lb.fit_transform)  # assign back: apply() does not modify df in place
</code></pre>
<p>Later you can combine this data frame with the remaining continuous features.</p>
|
python|pandas|scikit-learn|label-encoding
| 0
|
376,788
| 63,290,433
|
Merge multiple files keeping file name as column names
|
<p>I have multiple files in a directory. I want to merge them in a way such that the rows are merged together, and file names are kept as column headers.
For example, file1 looks like</p>
<pre><code>ENSG1 12
ENSG2 13
ENSG3 14
</code></pre>
<p>file2 looks like</p>
<pre><code>ENSG1 13
ENSG2 14
ENSG4 15
</code></pre>
<p>I'm looking forward to an output like</p>
<pre><code> file1 file2
ENSG1 12 13
ENSG2 13 14
ENSG3 14 0/na
ENSG4 0/na 15
</code></pre>
<p>Do you have any idea how to do this? Thank you for your time!</p>
|
<p>Here's a way to do that using <code>concat</code>:</p>
<pre><code>dfs = []
for f in ["file1", "file2"]: # iterate the relevant files here
df = pd.read_csv(f, header=None, sep = "\s+", index_col=0)
df.columns = [f]
dfs.append(df)
res = pd.concat(dfs, axis=1)
</code></pre>
<p>The output it:</p>
<pre><code> file1 file2
ENSG1 12.0 13.0
ENSG2 13.0 14.0
ENSG3 14.0 NaN
ENSG4 NaN 15.0
</code></pre>
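<p>Since you wanted <code>0</code> rather than <code>NaN</code> for the missing entries, you can finish with:</p>
<pre><code>res = res.fillna(0)
</code></pre>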
|
python|pandas
| 0
|
376,789
| 63,026,997
|
Series to dictionary
|
<p>I have the following code and output</p>
<pre><code> mean = dataframe.groupby('LABEL')['RESP'].mean()
minimum = dataframe.groupby('LABEL')['RESP'].min()
maximum = dataframe.groupby('LABEL')['RESP'].max()
std = dataframe.groupby('LABEL')['RESP'].std()
df = [mean, minimum, maximum]
</code></pre>
<p>And the following output</p>
<pre><code>[LABEL
0.0 -1.193420
1.0 0.713425
2.0 -1.066513
3.0 -0.530640
4.0 -2.130600
6.0 0.084747
7.0 1.190506
Name: RESP, dtype: float64,
LABEL
0.0 -1.396179
1.0 -0.233459
2.0 -1.631165
3.0 -1.271057
4.0 -2.543640
6.0 -0.418091
7.0 -0.004578
Name: RESP, dtype: float64,
LABEL
0.0 0.042247
1.0 0.295534
2.0 0.128233
3.0 0.243975
4.0 0.088077
6.0 0.085615
7.0 0.693196
Name: RESP, dtype: float64
]
</code></pre>
<p>However I want the output to be a dictionary as</p>
<pre><code>{label_value: [mean, min, max, std_dev]}
</code></pre>
<p>For example</p>
<pre><code>{1: [1, 0, 2, 1], 2: [0, -1, 1, 1], ... }
</code></pre>
|
<p>I'm assuming your starting Dataframe is equivalent to one I've synthesised.</p>
<ol>
<li>calculate all of the aggregate values in one call to aggregate. rounded values so output fits in this answer</li>
<li><code>reset_index()</code> on aggregate then <code>to_dict()</code></li>
<li>list comprehension to reformat <code>dict</code> to your specification</li>
</ol>
<pre><code>import random

import numpy as np
import pandas as pd

df = pd.DataFrame([[l, random.random()] for l in range(8) for k in range(500)], columns=["LABEL","RESP"])
d = df.groupby("LABEL")["RESP"].agg([np.mean, np.min, np.max, np.std]).round(4).reset_index().to_dict(orient="records")
{e["LABEL"]:[e["mean"],e["amin"],e["amax"],e["std"]] for e in d}
</code></pre>
<p><strong>output</strong></p>
<pre><code>{0: [0.5007, 0.0029, 0.997, 0.2842],
1: [0.4967, 0.0001, 0.9993, 0.2855],
2: [0.4742, 0.0003, 0.9931, 0.2799],
3: [0.5175, 0.0062, 0.9996, 0.2978],
4: [0.4909, 0.0018, 0.9952, 0.2912],
5: [0.4787, 0.0077, 0.9976, 0.291],
6: [0.4878, 0.0009, 0.9942, 0.2806],
7: [0.4989, 0.0066, 0.9982, 0.278]}
</code></pre>
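<p>An equivalent, slightly more compact route (a sketch using string aggregation names) skips the records round trip:</p>
<pre><code>out = df.groupby("LABEL")["RESP"].agg(["mean", "min", "max", "std"]).round(4)
result = {label: row.tolist() for label, row in out.iterrows()}
</code></pre>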
|
python-3.x|pandas|dataframe|dictionary|data-science
| 1
|
376,790
| 63,042,278
|
Why am I getting a syntax error on this code from the tensorflow website?
|
<p>I am learning tensorflow using the tensorflow website, and I directly copied the code from their website to test for myself. However, for some reason, I am unable to run the code due to a syntax error. What is wrong with it, given that I haven't tweaked any of the code?</p>
<pre><code>classifier = tf.estimator.DNNClassifier(
feature_columns=my_feature_columns,
# Two hidden layers of 30 and 10 nodes respectively.
hidden_units=[30, 10],
# The model must choose between 3 classes.
n_classes=3
classifier.train(
input_fn=lambda: input_fn(train, train_y, training=True),
steps=5000)
classifier.evaluate(input_fn=lambda: input_fn(test, test_y, training=False))
</code></pre>
<p><a href="https://i.stack.imgur.com/tb12F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tb12F.png" alt="Error message" /></a></p>
|
<p>You have to put a closing parenthesis after <code>n_classes=3</code> on line 45.</p>
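<p>For reference, the fixed construction looks like this:</p>
<pre><code>classifier = tf.estimator.DNNClassifier(
    feature_columns=my_feature_columns,
    # Two hidden layers of 30 and 10 nodes respectively.
    hidden_units=[30, 10],
    # The model must choose between 3 classes.
    n_classes=3)

classifier.train(
    input_fn=lambda: input_fn(train, train_y, training=True),
    steps=5000)

classifier.evaluate(input_fn=lambda: input_fn(test, test_y, training=False))
</code></pre>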
|
python|tensorflow|syntax-error
| 1
|
376,791
| 63,095,680
|
Adding new column to dataframe depending of other column value
|
<p>I have a dataframe that has two columns: DNI, Email.</p>
<p>And I have another one that has: first name, last name, num</p>
<p>This is the data structure:</p>
<p>dataframe 1:</p>
<pre><code> DNI email
. 1 Name1.lastname1@domain.com
. 525 Name2.lastname2@domain.com
. 665 Name3.lastname3@domain.com
</code></pre>
<p>dataframe 2:</p>
<pre><code> first name last name num
. name2 lastname2 8658685
. name1 lastname1 1131222
</code></pre>
<p>I want to add the num column to the first dataframe based on the email; if the first/last name combination does not exist for an email, I want to add a "0" value, so it looks like this:</p>
<pre><code> DNI email num
. 1 Name1.lastname1@domain.com 1131222
. 525 Name2.lastname2@domain.com 8658685
. 665 Name3.lastname3@domain.com 0
</code></pre>
<p>I'm not sure what the correct way to do this is... I'm thinking of using for loops, adding values to a dictionary depending on some conditionals, but that logic is inefficient with large DataFrames.</p>
<p>Any idea how to do this in a better way?</p>
<p>Thanks</p>
|
<p>You can follow these steps :</p>
<ol>
<li><p>Create a new column "email" in dataframe2 by concatenating first_name, last_name and "domain.com" .</p>
<blockquote>
<p><code>dataframe2["email"] = dataframe2["first_name"]+"."+dataframe2["last_name"]+ "@domain.com" </code></p>
</blockquote>
</li>
</ol>
<p>Make any other string changes that are required (as per your data) such that this email format exactly matches with the email in dataframe1.</p>
<ol start="2">
<li><p>Now, left join dataframe1 and dataframe2 via</p>
<blockquote>
<p><code>result = dataframe1.merge(dataframe2, on='email', how='left')</code></p>
</blockquote>
</li>
<li><p>Finally remove NaN's from the "num" column and replace it with 0.</p>
<blockquote>
<p><code>result['num'] = result['num'].fillna(0)</code></p>
</blockquote>
</li>
</ol>
<p>You can edit the query or the <code>result</code> dataframe to remove extra columns generated.</p>
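<p>Put together as a minimal sketch (column names assumed; the <code>str.lower()</code> calls handle the capitalized emails in dataframe1 versus the lowercase names in dataframe2):</p>
<pre><code>dataframe1["key"] = dataframe1["email"].str.lower()
dataframe2["key"] = (dataframe2["first_name"] + "." + dataframe2["last_name"]
                     + "@domain.com").str.lower()

result = dataframe1.merge(dataframe2[["key", "num"]], on="key", how="left").drop(columns="key")
result["num"] = result["num"].fillna(0).astype(int)
</code></pre>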
|
python|pandas|dataframe
| 1
|
376,792
| 63,074,347
|
I have installed GDAL library but am having troubles importing and using it. What should I do?
|
<p>When importing the GDAL package in python, it's raising the following error:</p>
<pre class="lang-py prettyprint-override"><code>>>> import gdal
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/akki/anaconda3/envs/py36/lib/python3.6/site-packages/gdal.py", line 2, in <module>
from osgeo.gdal import deprecation_warn
File "/home/akki/anaconda3/envs/py36/lib/python3.6/site-packages/osgeo/__init__.py", line 21, in <module>
_gdal = swig_import_helper()
File "/home/akki/anaconda3/envs/py36/lib/python3.6/site-packages/osgeo/__init__.py", line 17, in swig_import_helper
_mod = imp.load_module('_gdal', fp, pathname, description)
File "/home/akki/anaconda3/envs/py36/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/home/akki/anaconda3/envs/py36/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: libpoppler.so.71: cannot open shared object file: No such file or directory
</code></pre>
<p>I installed GDAL with the following command in my conda virtual environment:</p>
<pre><code>conda install -c conda-forge gdal
</code></pre>
|
<p>You can try importing it through the <code>osgeo</code> package instead:</p>
<p><code>from osgeo import gdal</code></p>
|
python-3.6|geospatial|gdal|geopandas
| 0
|
376,793
| 62,941,625
|
Problem with output of neural network in a cross-entropy method attempt at solving CartPole-v0
|
<p>I am trying to implement the cross-entropy policy-based method to the classic CartPole-v0 environment. I am actually reformatting a working implementation of this algorithm on the MountainCarContinuous-v0, but when I try to get the agent learning, I get this error message:</p>
<pre><code>---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
in
4
5 agent = Agent(env)
----> 6 scores = agent.learn()
7
8 # plot the scores
~/cross_entropy.py in learn(self, n_iterations, max_t, gamma, print_every, pop_size, elite_frac, sigma)
83 for i_iteration in range(1, n_iterations+1): # loop over all the training iterations
84 weights_pop = [best_weight + (sigma*np.random.randn(self.get_weights_dim())) for i in range(pop_size)] # population of the weights/policies
---> 85 rewards = np.array([self.evaluate(weights, gamma, max_t) for weights in weights_pop]) # rewards from the policies resulting from all individual weights
86
87 # get the best policies
~/cross_entropy.py in (.0)
83 for i_iteration in range(1, n_iterations+1): # loop over all the training iterations
84 weights_pop = [best_weight + (sigma*np.random.randn(self.get_weights_dim())) for i in range(pop_size)] # population of the weights/policies
---> 85 rewards = np.array([self.evaluate(weights, gamma, max_t) for weights in weights_pop]) # rewards from the policies resulting from all individual weights
86
87 # get the best policies
~/cross_entropy.py in evaluate(self, weights, gamma, max_t)
56 action = self.forward(state)
57 #action = torch.argmax(action_vals).item()
---> 58 state, reward, done, _ = self.env.step(action)
59 episode_return += reward * math.pow(gamma, t)
60 if done:
/gym/wrappers/time_limit.py in step(self, action)
14 def step(self, action):
15 assert self._elapsed_steps is not None, "Cannot call env.step() before calling reset()"
---> 16 observation, reward, done, info = self.env.step(action)
17 self._elapsed_steps += 1
18 if self._elapsed_steps >= self._max_episode_steps:
/gym/envs/classic_control/cartpole.py in step(self, action)
102 def step(self, action):
103 err_msg = "%r (%s) invalid" % (action, type(action))
--> 104 assert self.action_space.contains(action), err_msg
105
106 x, x_dot, theta, theta_dot = self.state
AssertionError: tensor([ 0.3987, 0.6013]) () invalid
</code></pre>
<p>I found this is because the MountainCarContinuous-v0 environment has an action_space of type Box(2) whereas CartPole-v0 is Discrete(2), meaning that I only want an integer as action selection.</p>
<p>I have tried working around this by applying a softmax activation function and then taking the index of the highest value as the action.</p>
<pre><code>action_vals = self.forward(state)
action = torch.argmax(action_vals).item()
</code></pre>
<p>This gets rid of the error but when I train the agent, it seems to learn incredibly fast which is kind of an indicator that something is wrong. This is my full agent class:</p>
<pre><code>class Agent(nn.Module):
def __init__(self, env, h_size=16):
super().__init__()
self.env = env
# state, hidden layer, action sizes
self.s_size = env.observation_space.shape[0]
self.h_size = h_size
self.a_size = env.action_space.n
# define layers
self.fc1 = nn.Linear(self.s_size, self.h_size)
self.fc2 = nn.Linear(self.h_size, self.a_size)
self.device = torch.device('cpu')
def set_weights(self, weights):
s_size = self.s_size
h_size = self.h_size
a_size = self.a_size
# separate the weights for each layer
fc1_end = (s_size*h_size)+h_size
fc1_W = torch.from_numpy(weights[:s_size*h_size].reshape(s_size, h_size))
fc1_b = torch.from_numpy(weights[s_size*h_size:fc1_end])
fc2_W = torch.from_numpy(weights[fc1_end:fc1_end+(h_size*a_size)].reshape(h_size, a_size))
fc2_b = torch.from_numpy(weights[fc1_end+(h_size*a_size):])
# set the weights for each layer
self.fc1.weight.data.copy_(fc1_W.view_as(self.fc1.weight.data))
self.fc1.bias.data.copy_(fc1_b.view_as(self.fc1.bias.data))
self.fc2.weight.data.copy_(fc2_W.view_as(self.fc2.weight.data))
self.fc2.bias.data.copy_(fc2_b.view_as(self.fc2.bias.data))
def get_weights_dim(self):
return (self.s_size+1)*self.h_size + (self.h_size+1)*self.a_size
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.softmax(self.fc2(x))
return x
def evaluate(self, weights, gamma=1.0, max_t=5000):
self.set_weights(weights)
episode_return = 0.0
state = self.env.reset()
for t in range(max_t):
state = torch.from_numpy(state).float().to(self.device)
action_vals = self.forward(state)
action = torch.argmax(action_vals).item()
state, reward, done, _ = self.env.step(action)
episode_return += reward * math.pow(gamma, t)
if done:
break
return episode_return
def learn(self, n_iterations=500, max_t=1000, gamma=1.0, print_every=10, pop_size=50, elite_frac=0.2, sigma=0.5):
"""PyTorch implementation of the cross-entropy method.
Params
======
n_iterations (int): maximum number of training iterations
max_t (int): maximum number of timesteps per episode
gamma (float): discount rate
print_every (int): how often to print average score (over last 100 episodes)
pop_size (int): size of population at each iteration
elite_frac (float): percentage of top performers to use in update
sigma (float): standard deviation of additive noise
"""
n_elite=int(pop_size*elite_frac) # number of elite policies from the population
scores_deque = deque(maxlen=100) # list of the past 100 scores
scores = [] # list of all the scores
best_weight = sigma*np.random.randn(self.get_weights_dim()) # initialize the first best weight randomly
for i_iteration in range(1, n_iterations+1): # loop over all the training iterations
weights_pop = [best_weight + (sigma*np.random.randn(self.get_weights_dim())) for i in range(pop_size)] # population of the weights/policies
rewards = np.array([self.evaluate(weights, gamma, max_t) for weights in weights_pop]) # rewards from the policies resulting from all individual weights
# get the best policies
##
elite_idxs = rewards.argsort()[-n_elite:]
elite_weights = [weights_pop[i] for i in elite_idxs]
##
best_weight = np.array(elite_weights).mean(axis=0) # take the average of the best weights
reward = self.evaluate(best_weight, gamma=1.0) # evaluate this new policy
scores_deque.append(reward) # append the reward
scores.append(reward) # also append the reward
torch.save(self.state_dict(), 'checkpoint.pth') # save the agent
if i_iteration % print_every == 0: # print every 100 steps
print('Episode {}\tAverage Score: {:.2f}'.format(i_iteration, np.mean(scores_deque)))
if np.mean(scores_deque)>=195.0: # print if environment is solved
print('\nEnvironment solved in {:d} iterations!\tAverage Score: {:.2f}'.format(i_iteration-100, np.mean(scores_deque)))
break
return scores
</code></pre>
<p>If anyone has an idea on how to get the agent training properly, please give me any suggestions.</p>
|
<p>Turns out all I needed was to add an act() method to the Agent class.</p>
<pre><code>from torch.distributions import Categorical

def act(self, state):
    state = state.unsqueeze(0)
    probs = self.forward(state).cpu()  # softmax output = action probabilities
    m = Categorical(probs)
    action = m.sample()                # sample an action index from the policy
    return action.item()
</code></pre>
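<p>Then, inside <code>evaluate()</code>, the action selection becomes a sample from the policy instead of a greedy argmax:</p>
<pre><code>action = self.act(state)
state, reward, done, _ = self.env.step(action)
</code></pre>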
|
python|deep-learning|pytorch|reinforcement-learning
| 0
|
376,794
| 62,958,651
|
OpenCV warpPerspective and findHomography created output on both sides of image frame
|
<p>I've been trying to figure out how to get a birds-eye view of a scene by using a homography and then warping the image. The image that I am trying to warp is linked below, with the points selected with blue circles around them. I have seen advice from similar posts that I need to make sure the points are ordered correctly, and I have tried different orderings of the object points, but the result still has the same weird error.</p>
<p>The image points were selected manually, and you can see that the floor tile is close to a square in the output, as desired.</p>
<p>I believe the problem is with the homography matrix, as when the homography matrix is applied to the corners of the image, they end up at the pixel locations that they are in the output. As all corners are in the image, the standard method of calculating the corners of the image to get the full result as described in <a href="https://stackoverflow.com/questions/6087241/opencv-warpperspective">this post</a> are not helpful.</p>
<p>Any help would be appreciated! The input image is:
<a href="https://i.stack.imgur.com/rFq7s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rFq7s.png" alt="enter image description here" /></a></p>
<p>And the output image is:
<a href="https://i.stack.imgur.com/Prlpo.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Prlpo.jpg" alt="enter image description here" /></a></p>
<p>The code used to produce this is included below.</p>
<pre><code>import cv2
import numpy as np
img = cv2.imread('test4_frame.png')
img_circled = img.copy()
im_pts = np.array([[659, 618], [779, 662], [542, 654], [661, 707]], np.float32)
for pt in im_pts:
cv2.circle(img_circled, (pt[0], pt[1]), 10, 255)
cv2_imshow(img_circled)
obj_pts = np.array([[1500, 1500], [1550, 1500], [1500, 1550], [1550, 1550]], np.float32)
H = cv2.findHomography(im_pts, obj_pts)[0]
out = cv2.warpPerspective(img, H, (4000, 4000))
cv2_imshow(out)
</code></pre>
|
<p>After another couple of hours banging my head against this problem, I noticed that the plane I am interested in can never reach the points in the image above the vanishing point. Thus, the points above the vanishing point have undefined behavior, as they cannot be projected onto the plane, which is what caused the issue.</p>
<p>I solved the issue by roughly estimating the vanishing point and cropping the image a bit below there, while adjusting the image points accordingly, the results of which are <a href="https://i.stack.imgur.com/upd4z.jpg" rel="nofollow noreferrer">here</a>.</p>
<pre><code>van_point_est = 250
img = img[van_point_est:, :]
for i in range(len(im_pts)):
im_pts[i, 1] -= van_point_est
</code></pre>
|
python|numpy|opencv|computer-vision|homography
| 0
|
376,795
| 63,029,166
|
Comparing two arrays throws a warning. Any workaround for this?
|
<p>I have 2 np.arrays as below. When I compare the two using "==", I get an output but with a deprecation warning. There is no warning when comparing two arrays of the same shape.</p>
<p>What's the workaround to still get the same result but with no warning?</p>
<p>Thank you so much!</p>
<pre><code>x = np.array([[0,1,2],[3,4,5]])
x
Out: array([[0, 1, 2],
[3, 4, 5]])
y = np.array([[6,7],[8,9],[10,11]])
y
Out: array([[ 6, 7],
[ 8, 9],
[10, 11]])
x == y
Out: False
**C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:1: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
"""Entry point for launching an IPython kernel.**
</code></pre>
|
<p>This error is telling you that the comparison you're performing doesn't really make sense, since both arrays have different shapes, hence numpy can't perform an elementwise comparison:</p>
<pre><code>x==y
</code></pre>
<blockquote>
<p>DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
x==y</p>
</blockquote>
<p>The right way to do this, would be to use <a href="https://numpy.org/doc/stable/reference/generated/numpy.array_equal.html" rel="nofollow noreferrer"><code>np.array_equal</code></a>, which checks equality of both shape and elements:</p>
<pre><code>np.array_equal(x,y)
# False
</code></pre>
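<p>If you actually wanted an elementwise comparison, the shapes have to match first; here, transposing <code>y</code> makes both arrays (2, 3) (a minimal sketch):</p>
<pre><code>x == y.T
# array([[False, False, False],
#        [False, False, False]])
</code></pre>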
|
python|python-3.x|pandas|numpy|data-science
| 2
|
376,796
| 63,188,215
|
How to plot a wind rose map with depend of color set to gas concentration
|
<pre><code>speed=[0.129438,0.0366483,0.439946,0.090253,0.19373,0.592419,0.00903306,0.520847,0.513714,1.16971,5.12548,4.37745,3.2362,2.91004,1.60186,0.115595,0.270153,0.19367,0.0865046,0.558443,0.613072,0.648203,0.0770592,0.81772,0.234523,1.04013,0.352675,0.0673293,0.492684,0.109398,0.402816,0.140199,0.998795,0.367604,0.52436,0.0968265,1.59786,2.43149,2.94133,0.940624,0.257224,0,0,0.0199409,0.125302,0,0.367911,0.259797,0.237776,0.45428,0.507738,0.389389,0.388758,0.335398,0.510133,0.180295,0.0738368,0.780367,0.925679,1.93922,1.96569,1.39523,0.824564,0.00833059,0,0.0498536,0.112622,0,0.00843256,0.0269059,0.00816307,0.0582206,0.578959,1.0171,2.24302,1.92721]
direction=[189.538,215.866,264.086,135.325,165.893,44.2853,136.158,350.437,83.2484,277.783,288.064,279.222,267.214,265.913,235.173,181.206,136.14,144.281,134.581,108.16,75.4158,22.2881,328.882,68.3736,129.256,278.097,326.581,35.7096,321.297,338.31,354.109,24.1976,38.1465,39.2318,63.8145,119.817,186.106,182.673,185.475,173.223,139.843,np.nan,np.nan,40.9179,320.081,np.nan,333.054,354.726,357.716,18.1253,355.461,286.084,319.073,324.621,339.681,313.331,346.647,84.9661,86.7814,88.5452,104.456,128.953,87.5388,72.1999,np.nan,345.5,356.68,np.nan,316.586,338.82,334.731,98.3435,85.669,25.9086,42.6986,34.4194]
gas=[1.10986,1.25806,1.50921,1.37323,1.41317,1.15709,1.16005,1.43474,1.43952,1.03368,0.246893,0.139811,0.15603,0.203752,0.177984,0.164834,0.528146,0.602864,0.809435,1.0036,1.05669,1.05348,0.988772,1.0588,1.12066,1.15746,1.23219,1.142,1.21676,1.27093,1.00094,1.16773,1.16163,1.1715,0.999969,0.863695,0.832681,0.92631,1.01416,1.02708,1.0084,1.00666,1.06311,1.32098,1.48134,1.60667,1.60324,1.58663,1.41159,1.3251,1.25114,1.24269,1.16683,1.20762,1.0616,1.21975,1.21312,1.11416,0.981076,0.707948,0.590113,0.515484,0.417111,0.436767,0.644229,0.998097,1.24321,1.45975,1.3905,1.50087,1.63685,1.53855,1.21446,1.09367,0.790929,0.693877]
</code></pre>
<p>I know it probably starts with np.meshgrid(direction, speed), but I don't know how to match the 'gas' variable to direction and speed so that I can plot a wind rose map, just like the image from a paper I read, shown below.</p>
<p><a href="https://i.stack.imgur.com/sq1i0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sq1i0.png" alt="enter image description here" /></a></p>
<p>I really appreciate for any help or suggestion.</p>
|
<p>Try polar scatter plot:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
speed=[0.129438,0.0366483,0.439946,0.090253,0.19373,0.592419,0.00903306,0.520847,0.513714,1.16971,5.12548,4.37745,3.2362,2.91004,1.60186,0.115595,0.270153,0.19367,0.0865046,0.558443,0.613072,0.648203,0.0770592,0.81772,0.234523,1.04013,0.352675,0.0673293,0.492684,0.109398,0.402816,0.140199,0.998795,0.367604,0.52436,0.0968265,1.59786,2.43149,2.94133,0.940624,0.257224,0,0,0.0199409,0.125302,0,0.367911,0.259797,0.237776,0.45428,0.507738,0.389389,0.388758,0.335398,0.510133,0.180295,0.0738368,0.780367,0.925679,1.93922,1.96569,1.39523,0.824564,0.00833059,0,0.0498536,0.112622,0,0.00843256,0.0269059,0.00816307,0.0582206,0.578959,1.0171,2.24302,1.92721]
direction=[189.538,215.866,264.086,135.325,165.893,44.2853,136.158,350.437,83.2484,277.783,288.064,279.222,267.214,265.913,235.173,181.206,136.14,144.281,134.581,108.16,75.4158,22.2881,328.882,68.3736,129.256,278.097,326.581,35.7096,321.297,338.31,354.109,24.1976,38.1465,39.2318,63.8145,119.817,186.106,182.673,185.475,173.223,139.843,np.nan,np.nan,40.9179,320.081,np.nan,333.054,354.726,357.716,18.1253,355.461,286.084,319.073,324.621,339.681,313.331,346.647,84.9661,86.7814,88.5452,104.456,128.953,87.5388,72.1999,np.nan,345.5,356.68,np.nan,316.586,338.82,334.731,98.3435,85.669,25.9086,42.6986,34.4194]
gas=[1.10986,1.25806,1.50921,1.37323,1.41317,1.15709,1.16005,1.43474,1.43952,1.03368,0.246893,0.139811,0.15603,0.203752,0.177984,0.164834,0.528146,0.602864,0.809435,1.0036,1.05669,1.05348,0.988772,1.0588,1.12066,1.15746,1.23219,1.142,1.21676,1.27093,1.00094,1.16773,1.16163,1.1715,0.999969,0.863695,0.832681,0.92631,1.01416,1.02708,1.0084,1.00666,1.06311,1.32098,1.48134,1.60667,1.60324,1.58663,1.41159,1.3251,1.25114,1.24269,1.16683,1.20762,1.0616,1.21975,1.21312,1.11416,0.981076,0.707948,0.590113,0.515484,0.417111,0.436767,0.644229,0.998097,1.24321,1.45975,1.3905,1.50087,1.63685,1.53855,1.21446,1.09367,0.790929,0.693877]
gas = [g * 100 for g in gas]   # scale gas values to usable marker sizes
theta = np.deg2rad(direction)  # polar axes expect angles in radians, not degrees
fig = plt.figure()
ax = fig.add_subplot(111, projection='polar')
# color encodes direction, marker size encodes gas concentration
c = ax.scatter(theta, speed, c=direction, s=gas, cmap='hsv', alpha=0.25)
plt.show()
</code></pre>
<p>result:</p>
<p><a href="https://i.stack.imgur.com/ES0k9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ES0k9.png" alt="enter image description here" /></a></p>
<p>I multiplied <code>gas</code> by 100 to make the points bigger, but you may adjust that factor as you need, as well as which variables are mapped to color, size, and the radial axis, and the rotation of the coordinate system. To make the color encode the gas concentration itself, as in the paper's figure, see the variant below.</p>
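<p>A minimal variant that colors by gas concentration (a sketch reusing the <code>theta</code>, <code>speed</code>, and <code>gas</code> variables from above; the fixed marker size of 50 is an arbitrary choice):</p>
<pre><code># Let color carry the gas information and keep the marker size constant
c = ax.scatter(theta, speed, c=gas, s=50, cmap='jet', alpha=0.75)
fig.colorbar(c, label='gas concentration')
plt.show()
</code></pre>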
|
python|numpy|matplotlib|plot|colors
| 1
|
376,797
| 63,083,564
|
Why does a[:,[x]] create a column vector from an array?
|
<p>Why does <code>a[:,[x]]</code> create a column vector from an array? What do the brackets <code>[ ]</code> represent here?
Could anyone explain the principle?</p>
<pre><code>a = np.random.randn(5,6)
a = a.astype(np.float32)
print(a)
c = torch.from_numpy(a[:,[1]])
</code></pre>
<pre><code>[[-1.6919796 0.3160475 0.7606999 0.16881375 1.325092 0.71536326]
[ 1.217861 0.35804042 0.0285245 0.7097111 -2.1760604 0.992101 ]
[-1.6351479 0.6607222 0.9375339 0.5308735 -1.9699149 -2.002803 ]
[-1.1895325 1.1744579 -0.5980689 -0.8906375 -0.00494479 0.51751447]
[-1.7642071 0.4681248 1.3938268 -0.7519176 0.5987852 -0.5138923 ]]
###########################################
tensor([[0.3160],
[0.3580],
[0.6607],
[1.1745],
[0.4681]])
</code></pre>
|
<p>The <code>[ ]</code> means you are indexing with a list (NumPy's "fancy" indexing), which keeps the indexed axis as a dimension of length 1 instead of dropping it. Try NumPy's <code>shape</code> attribute to see the difference (with the <code>(5, 6)</code> array from the question):</p>
<pre><code>a[:,1].shape
</code></pre>
<p>Output:</p>
<pre><code>(5,)
</code></pre>
<p>With <code>[ ]</code>:</p>
<pre><code>a[:,[1]].shape
</code></pre>
<p>Output:</p>
<pre><code>(5, 1)
</code></pre>
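<p>A minimal sketch of the principle, assuming the <code>(5, 6)</code> array from the question: a scalar index selects one column and drops that axis, while a list index selects a subset of columns and keeps the axis, so a one-element list yields a column vector. Consequently <code>torch.from_numpy(a[:,[1]])</code> gives a tensor of shape <code>(5, 1)</code> rather than <code>(5,)</code>.</p>
<pre><code>import numpy as np

a = np.random.randn(5, 6).astype(np.float32)

print(a[:, 1].shape)       # (5,)   - scalar index drops the column axis
print(a[:, [1]].shape)     # (5, 1) - one-element list keeps it: a column vector
print(a[:, [1, 3]].shape)  # (5, 2) - list indexing generalizes to several columns
print(a[:, 1:2].shape)     # (5, 1) - slicing also keeps the axis
</code></pre>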
|
python|numpy|pytorch|tensor
| 0
|
376,798
| 63,170,358
|
Get columns from excel file and plot them
|
<p>I'm new to Python, and I have an assignment to deliver soon.
I have a .xlsx file that I've imported with <code>pandas</code>. It's a file from my workplace which records the day (Mon–Sat), the time (10 am–8 pm), sales per hour, visiting customers, and customers who actually bought from the store (65 rows, 5 columns). How can I get the total sales for each of the days? I tried to get Monday's sum by slicing the rows for that day, but it wasn't accurate.</p>
<pre><code>monday = (data['Sales per hour'][1:12].sum())
</code></pre>
<p>Is there a better way to sum the data for Monday without having to hard-code the row slice <code>[1:12]</code>?</p>
<p>Here is a picture of the file I'm using. I want to get the total sum for each of the days and plot the results as a histogram. I'd also like to plot a comparison histogram of visiting customers versus buying customers.</p>
<p><a href="https://i.stack.imgur.com/OvQ9N.png" rel="nofollow noreferrer">The file</a></p>
|
<p>You can try pandas's <code>groupby</code> to resolve your issue.</p>
<p>First, rename the column to remove the blank spaces from its name:</p>
<pre><code>data.rename(columns={'Sales per hour': 'Sales_per_hour'}, inplace=True)
Daywise_Data = data.groupby('Day').Sales_per_hour.sum().reset_index()
</code></pre>
<p>This gives you the day-wise totals in a separate DataFrame, which can then be used to plot the histogram, e.g. as sketched below.</p>
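<p>A minimal sketch of the plotting step (assuming the <code>Daywise_Data</code> frame from above; a bar chart is the usual way to show per-day totals):</p>
<pre><code>import matplotlib.pyplot as plt

# Total sales per day as a bar chart
Daywise_Data.plot(kind='bar', x='Day', y='Sales_per_hour', legend=False)
plt.ylabel('Total sales')
plt.tight_layout()
plt.show()
</code></pre>
<p>The visiting-versus-buying comparison can be built the same way: group both customer columns by day and call <code>.plot(kind='bar')</code> on the resulting frame.</p>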
|
python|excel|pandas|matplotlib|plot
| 0
|
376,799
| 63,036,809
|
How do I use only numpy to apply filters onto images?
|
<p>I would like to apply a filter/kernel to an image to alter it (for instance, to perform vertical edge detection or a diagonal blur). I found this <a href="https://en.wikipedia.org/wiki/Kernel_(image_processing)" rel="nofollow noreferrer">Wikipedia page</a> with some interesting examples of kernels.</p>
<p>When I look online, filters are implemented using OpenCV or built-in matplotlib/Pillow functions. I want to be able to modify an image using only NumPy arrays and operations like matrix multiplication (NumPy doesn't appear to provide a built-in 2-D convolution; <code>np.convolve</code> only handles 1-D arrays). I've tried very hard to figure it out, but I keep making errors, and I'm also relatively new to NumPy.</p>
<p>I worked out this code to convert an image to greyscale:</p>
<pre><code>import numpy as np
from PIL import Image
img = Image.open("my_path/my_image.jpeg")
img = np.array(img.resize((180, 320)))
grey = np.zeros((320, 180))
grey_avg_array = (np.sum(img,axis=-1,keepdims=False)/3)
grey_avg_array = grey_avg_array.astype(np.uint8)
grey_image = Image.fromarray(grey_avg_array)
</code></pre>
<p>I have tried to multiply my image by the array <code>[[1, 0, -1], [1, 0, -1], [1, 0, -1]]</code> to implement vertical edge detection, but that gave me a broadcasting error. What would some sample code/useful functions that do this without errors look like?</p>
<p><em>Also: a minor problem I've faced all day is that PIL can't display (x, x, 1)-shaped arrays as images. Why is this, and how do I fix it? (np.squeeze didn't work.)</em></p>
|
<p>Note: I would highly recommend checking out OpenCV, which has a large variety of built-in image filters.</p>
<blockquote>
<p>Also: a minor problem I've faced all day is that PIL can't display (x, x, 1) shaped arrays as images. Why is this? How do I get it to fix this? (np.squeeze didn't work)</p>
</blockquote>
<p>I assume the issue here is a combination of the extra trailing axis and the float dtype. To fix it, drop the length-1 channel axis so the array is 2-D, convert the float array to <code>np.uint8</code>, and use the <code>'L'</code> (grayscale) mode in PIL.</p>
<pre><code>import numpy as np
from PIL import Image

img_arr = np.random.rand(100, 100, 1)    # Float array in (0, 1), shape (x, x, 1)
img_arr = img_arr.squeeze(axis=-1)       # Drop the trailing axis: PIL needs (x, x)
uint8_img_arr = np.uint8(img_arr * 255)  # Convert to the np.uint8 type
img = Image.fromarray(uint8_img_arr, 'L')  # Grayscale ('L' mode) PIL Image
</code></pre>
<p>As for convolutions, SciPy provides <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.convolve.html" rel="nofollow noreferrer">functions</a> for convolving arrays with kernels that you may find useful (a minimal usage sketch follows).</p>
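<p>For reference, a minimal SciPy version (a sketch assuming a 2-D grayscale array; a color image would be convolved per channel):</p>
<pre class="lang-py prettyprint-override"><code>from scipy import ndimage
import numpy as np

gray = np.random.rand(100, 100)  # 2-D grayscale array in (0, 1)
kernel = np.ones((3, 3)) / 9.0   # Box filter
out = ndimage.convolve(gray, kernel, mode='constant', cval=0.0)
</code></pre>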
<p><img src="https://i.ibb.co/QYtzq56/image.png" alt="Convolution Example" /></p>
<p>But since we're solely using NumPy, let's implement it!</p>
<p>Note: To make this as general as possible, I am adding a few extra parameters that may or may not be important to you.</p>
<pre class="lang-py prettyprint-override"><code># Assuming the image has channels as the last dimension.
# filter.shape -> (kernel_size, kernel_size, channels)
# image.shape -> (width, height, channels)
def convolve(image, filter, padding = (1, 1)):
# For this to work neatly, filter and image should have the same number of channels
# Alternatively, filter could have just 1 channel or 2 dimensions
if(image.ndim == 2):
image = np.expand_dims(image, axis=-1) # Convert 2D grayscale images to 3D
if(filter.ndim == 2):
filter = np.repeat(np.expand_dims(filter, axis=-1), image.shape[-1], axis=-1) # Same with filters
if(filter.shape[-1] == 1):
filter = np.repeat(filter, image.shape[-1], axis=-1) # Give filter the same channel count as the image
#print(filter.shape, image.shape)
assert image.shape[-1] == filter.shape[-1]
size_x, size_y = filter.shape[:2]
width, height = image.shape[:2]
output_array = np.zeros(((width - size_x + 2*padding[0]) + 1,
(height - size_y + 2*padding[1]) + 1,
image.shape[-1])) # Convolution Output: [(W−K+2P)/S]+1
padded_image = np.pad(image, [
(padding[0], padding[0]),
(padding[1], padding[1]),
(0, 0)
])
for x in range(padded_image.shape[0] - size_x + 1): # -size_x + 1 is to keep the window within the bounds of the image
for y in range(padded_image.shape[1] - size_y + 1):
# Creates the window with the same size as the filter
window = padded_image[x:x + size_x, y:y + size_y]
# Sums over the product of the filter and the window
output_values = np.sum(filter * window, axis=(0, 1))
# Places the calculated value into the output_array
output_array[x, y] = output_values
return output_array
</code></pre>
<p>Here is an example of its usage:</p>
<p>Original Image (saved as <code>original.png</code>):</p>
<p><img src="https://i.ibb.co/D81PNZz/image.png" alt="Original Image" /></p>
<pre class="lang-py prettyprint-override"><code>filter = np.array([
[1, 1, 1],
[1, 1, 1],
[1, 1, 1]
], dtype=np.float32)/9.0 # Box Filter
image = Image.open('original.png')
image_arr = np.array(image)/255.0
convolved_arr = convolve(image_arr, filter, padding=(1, 1))
convolved = Image.fromarray(np.uint8(255 * convolved_arr), 'RGB') # Convolved Image
</code></pre>
<p>Convolved Image:</p>
<p><img src="https://i.ibb.co/7C44CJ0/image.png" alt="Convolved Image" /></p>
|
python|numpy|image-processing|matrix|edge-detection
| 1
|