| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
7,100
| 49,019,241
|
Execute trained tensorflow model using linear algebra only
|
<p>I'm training the image classification model as per: <a href="https://www.tensorflow.org/tutorials/image_recognition" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/image_recognition</a></p>
<p>I aim to extract the learned weight values ( based on : <a href="https://stackoverflow.com/questions/45562755/extracting-weights-values-from-a-tensorflow-model-checkpoint">Extracting weights values from a tensorflow model checkpoint</a> ) and execute the model using linear algebra operations only.</p>
<p>The function <code>def run_inference_on_image(image)</code> (src <a href="https://github.com/tensorflow/models/blob/master/tutorials/image/imagenet/classify_image.py" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/tutorials/image/imagenet/classify_image.py</a>) classifies an image, but the linear algebra operations used to classify the image do not appear to be viewable. Is it possible to execute the model using the various matrix transformations that I assume are taking place 'under the hood' in the function <code>run_inference_on_image</code>?</p>
|
<p>If you look closely at <code>run_inference_on_image</code> and the whole <code>classify_image.py</code> script, it doesn't define a model. It is just a runner script that loads a <strong>pre-trained model</strong> from disk (see <code>create_graph</code>) and executes it according to certain conventions (<code>run_inference_on_image</code> looks for the tensor named <code>softmax:0</code>).</p>
<p>The <a href="https://www.tensorflow.org/tutorials/image_recognition#usage_with_python_api" rel="nofollow noreferrer">tutorial</a> states the same:</p>
<blockquote>
<p><code>classify_image.py</code> downloads the trained model from tensorflow.org when the program is run for the first time. </p>
</blockquote>
<p>So the exact answer to your question in fact depends on what model you actually decide to run (e.g., you <em>can</em> supply your own model). I'll focus on the default choice of this script, namely the <a href="http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz" rel="nofollow noreferrer">Inception model</a> (see the <code>DATA_URL</code> constant). By the way, there is a newer pre-trained <a href="http://download.tensorflow.org/models/image/imagenet/inception-v3-2016-03-01.tar.gz" rel="nofollow noreferrer">Inception v3 model</a> that you can use as well (<a href="https://github.com/tensorflow/tensorflow/issues/5007" rel="nofollow noreferrer">GitHub issue</a>).</p>
<p>Side note: The exact source code of <a href="http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz" rel="nofollow noreferrer"><em>this</em> implementation</a> is not published, but we can take a look at the latest implementation of the same network in <a href="https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v3.py" rel="nofollow noreferrer">tf slim</a>. The naming within a graph is a bit different, but the model is practically the same.</p>
<hr>
<p>The whole model in one picture looks something like <a href="https://stackoverflow.com/q/39352108/712995">this</a>. Essentially it's a long sequence of inception modules, consisting of convolutional layers with various filters. The variant of inception module v3 is:</p>
<p><a href="https://i.stack.imgur.com/V4smk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V4smk.png" alt="inception-module-v3"></a></p>
<p>Here each <code>a x b</code> box means a convolutional layer with filter size <code>[a, b]</code>. It looks intimidating, but if you follow the <a href="https://towardsdatascience.com/neural-network-architectures-156e5bad51ba" rel="nofollow noreferrer">history of its development</a> over the years, it starts to make sense.</p>
<p>The picture above translates into the following code (for <code>n=7</code>):</p>
<pre><code> with tf.variable_scope(end_point):
with tf.variable_scope('Branch_0'):
branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
with tf.variable_scope('Branch_1'):
branch_1 = slim.conv2d(net, depth(128), [1, 1], scope='Conv2d_0a_1x1')
branch_1 = slim.conv2d(branch_1, depth(128), [1, 7],
scope='Conv2d_0b_1x7')
branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],
scope='Conv2d_0c_7x1')
with tf.variable_scope('Branch_2'):
branch_2 = slim.conv2d(net, depth(128), [1, 1], scope='Conv2d_0a_1x1')
branch_2 = slim.conv2d(branch_2, depth(128), [7, 1],
scope='Conv2d_0b_7x1')
branch_2 = slim.conv2d(branch_2, depth(128), [1, 7],
scope='Conv2d_0c_1x7')
branch_2 = slim.conv2d(branch_2, depth(128), [7, 1],
scope='Conv2d_0d_7x1')
branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],
scope='Conv2d_0e_1x7')
with tf.variable_scope('Branch_3'):
branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
branch_3 = slim.conv2d(branch_3, depth(192), [1, 1],
scope='Conv2d_0b_1x1')
net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
</code></pre>
<p>As for your suggestion of "linear algebra operations", note that a convolutional layer is different from a linear (fully-connected) layer (see the <a href="http://cs231n.github.io/convolutional-networks/#conv" rel="nofollow noreferrer">CS231n tutorial</a> for details), though there exist efficient GPU implementations that boil down to matrix multiplications.</p>
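<p>To make the "boils down to matrix multiplications" remark concrete, here is a rough numpy sketch of the im2col idea. This is purely illustrative (my own toy example, not how TensorFlow implements it and not tied to the Inception graph):</p>
<pre><code>import numpy as np

def conv2d_as_matmul(image, kernel):
    """Lower a single-channel 2-D convolution to one matrix-vector product."""
    H, W = image.shape
    kH, kW = kernel.shape
    outH, outW = H - kH + 1, W - kW + 1
    # Stack every kH x kW patch as a row of a big matrix ("im2col") ...
    patches = np.array([image[i:i + kH, j:j + kW].ravel()
                        for i in range(outH) for j in range(outW)])
    # ... then the convolution is plain linear algebra.
    return (patches @ kernel.ravel()).reshape(outH, outW)

out = conv2d_as_matmul(np.random.rand(5, 5), np.random.rand(3, 3))
</code></pre>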
<p>As you can see, repeating the same model from scratch using only low-level operations would require <strong>a lot of code</strong> (the <a href="https://github.com/tensorflow/models/blob/master/research/slim/nets/inception_v3.py" rel="nofollow noreferrer">full source code</a> in tf slim is 600 lines, and it actually consists of high-level abstractions). If you want to retrain it yourself from the pre-trained state, it would be simpler to import the already built model like this:</p>
<pre><code>from tensorflow.contrib.slim.python.slim.nets.inception_v3 import inception_v3
...
inputs = tf.random_uniform((batch_size, height, width, 3))
logits, end_points = inception_v3(inputs, num_classes)
</code></pre>
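<p>And if your real goal is just to pull the learned weight values out as plain numpy arrays (as in the question you linked), a rough sketch along these lines should work for a checkpoint file; the path is a placeholder:</p>
<pre><code>import tensorflow as tf

reader = tf.train.NewCheckpointReader('/path/to/model.ckpt')  # placeholder path
for name in reader.get_variable_to_shape_map():
    weights = reader.get_tensor(name)   # numpy array usable in your own linear algebra
    print(name, weights.shape)
</code></pre>
<p>Note that the default model downloaded by <code>classify_image.py</code> ships as a frozen graph rather than a checkpoint, so the sketch above applies to models you trained and checkpointed yourself.</p>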
|
python|image-processing|tensorflow|neural-network|deep-learning
| 2
|
7,101
| 48,971,879
|
Python for sum operation by groupby, but exclude the non-numeric data
|
<p>How do I do a sum operation using groupby on a csv file in Python, but exclude some non-numeric data from that groupby? E.g. I have this csv file:</p>
<pre><code>id | filename | #Line_Changed
-----------------------------------------------
1 | analyze/dir_list.txt | 16
2 | metrics/metrics1.csv | 11
3 | metrics/metrics2.csv | 15
4 | analyze/dir_list.txt | =>
5 | metrics/metrics1.csv | 11
6 | metrics/metrics2.csv | bin
7 | metrics/metrics2.csv | 4
8 | analyze/dir_list.txt | 4
</code></pre>
<p>I want to group by the column filename and calculate the sum of only the rows with numeric data, excluding the non-numeric rows. The result should look like this:</p>
<pre><code> filename | SUM #Line_Changed
-----------------------------------------------
analyze/dir_list.txt | 20
metrics/metrics1.csv | 22
metrics/metrics2.csv | 19
</code></pre>
<p>What I've done so far:</p>
<pre><code>df = pd.read_csv('diffhistogram.csv')
by_fn = df.groupby('filename')
mydata = {}
for name in ['#line_changed']:
mydata['SUM ' + name] = by_fn[name].sum()
output = pd.DataFrame(mydata)
print(output)
</code></pre>
<p>but the output treats the data in the column "#Line_Changed" as strings:</p>
<pre><code> filename | SUM #Line_Changed
-----------------------------------------------
analyze/dir_list.txt | 16=>4
metrics/metrics1.csv | 1111
metrics/metrics2.csv | 15bin4
</code></pre>
<p>Is there a way I can specify which numeric data to include in the sum() operation and non-numeric data to exclude?</p>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_numeric.html" rel="nofollow noreferrer"><code>to_numeric</code></a> with the parameter <code>errors='coerce'</code> to convert non-numeric values to <code>NaN</code>s; <code>groupby</code> + <code>sum</code> then omits these rows:</p>
<pre><code>df = (pd.to_numeric(df['#Line_Changed'], errors='coerce')
.groupby(df['filename'])
.sum()
.to_frame()
.add_prefix('SUM ')
.reset_index())
print (df)
filename SUM #Line_Changed
0 analyze/dir_list.txt 20.0
1 metrics/metrics1.csv 22.0
2 metrics/metrics2.csv 19.0
</code></pre>
<p>Or assign to a new column which is then used for <code>groupby</code>:</p>
<pre><code>df['SUM #Line_Changed'] = pd.to_numeric(df['#Line_Changed'], errors='coerce')
df = df.groupby('filename', as_index=False)['SUM #Line_Changed'].sum()
print (df)
filename SUM #Line_Changed
0 analyze/dir_list.txt 20.0
1 metrics/metrics1.csv 22.0
2 metrics/metrics2.csv 19.0
</code></pre>
<p><strong>Detail</strong>:</p>
<pre><code>df['SUM #Line_Changed'] = pd.to_numeric(df['#Line_Changed'], errors='coerce')
print (df)
id filename #Line_Changed SUM #Line_Changed
0 1 analyze/dir_list.txt 16 16.0
1 2 metrics/metrics1.csv 11 11.0
2 3 metrics/metrics2.csv 15 15.0
3 4 analyze/dir_list.txt => NaN
4 5 metrics/metrics1.csv 11 11.0
5 6 metrics/metrics2.csv bin NaN
6 7 metrics/metrics2.csv 4 4.0
7 8 analyze/dir_list.txt 4 4.0
</code></pre>
<p>EDIT:</p>
<p>If you want to drop non-numeric rows from the original <code>DataFrame</code>:</p>
<pre><code>df['#Line_Changed'] = pd.to_numeric(df['#Line_Changed'], errors='coerce')
df = df.dropna(subset=['#Line_Changed'])
print (df)
id filename #Line_Changed
0 1 analyze/dir_list.txt 16.0
1 2 metrics/metrics1.csv 11.0
2 3 metrics/metrics2.csv 15.0
4 5 metrics/metrics1.csv 11.0
6 7 metrics/metrics2.csv 4.0
7 8 analyze/dir_list.txt 4.0
</code></pre>
|
python|pandas|csv|dataframe|sum
| 3
|
7,102
| 49,183,759
|
pandas groupby and update the sum of the number of times the values in one column is greater than the other column
|
<p>I have a dataset in the following format</p>
<pre><code>df = pd.DataFrame([[1, 'Label1', 0, 8, 2], [1, 'Label3', 0, 20, 5], [2, 'Label5', 1, 20, 2], [2, 'Label4', 1, 11, 0],
[5, 'Label2', 0, 0, -4],[1, 'Label2', 1, 8, 2], [2, 'Label5', 0, 20, 5], [3, 'Label2', 1, 20, 2], [4, 'Label4', 0, 1, 0],
[5, 'Label3', 0, 1, -4],[1, 'Label3', 1, 8, 2], [2, 'Label4', 0, 20, 5], [3, 'Label1', 1, 20, 2], [4, 'Label3', 0, 1, 0],
[5, 'Label4', 0, 1, -4],[1, 'Label4', 1, 8, 2], [2, 'Label3', 0, 20, 5], [3, 'Label3', 1, 20, 2], [4, 'Label5', 0, 1, 0],
[5, 'Label5', 0, 1, -4]],
columns=['ID', 'Label', 'Status', 'Coeff', 'result'])
cm = {'TP': 0,'FP': 0}
</code></pre>
<p>For each <code>ID</code> in df, I would like to find the number of times the column <code>Coeff</code> is greater than <code>Result</code> when the <code>Status</code> column is 1. If this count is greater than 3 then <code>TP</code> should be incremented by 1, and if it is less than 3, then <code>FP</code> should be incremented by 1. </p>
<p>Example: When <code>ID</code> is 1111 and <code>Status</code> is 1, if the <code>Coeff</code> column is greater than the <code>Result</code> column twice for that particular ID, then FP must be incremented by 1. </p>
<p>I tried to add a new column called count for each ID and assigned a value of 1 every time the column <code>Coeff</code> was greater than <code>Result</code>. </p>
<pre><code>for ID in df.groupby('ID'):
df.loc[(df['Coeff'] > df['Result']), 'count'] = 1
df_new = list(df[['ID','count']].groupby(df['ID']))
</code></pre>
<p>Then I thought of finding whether <code>count</code> has the number 1 in it. If it does, then increment <code>TP</code>. Otherwise, increment <code>FP</code>.</p>
<p>But I couldn't achieve it. </p>
<p>How do I get the required result?</p>
|
<p>A simple grouping operation on a masked comparison should do:</p>
<pre><code>v = df.Coeff.gt(df.result).where(df.Status.astype(bool)).groupby(df.ID).sum()
</code></pre>
<p>Or (to retain <code>dtype=int</code>, thanks piR!),</p>
<pre><code>v = df.Coeff.gt(df.result).where(df.Status.astype(bool), 0).groupby(df.ID).sum()
</code></pre>
<p></p>
<pre><code>v # second expression result
ID
1 3
2 2
3 3
4 0
5 0
dtype: int64
</code></pre>
<p>Now, </p>
<pre><code>cm['TP'] = v.gt(3).sum()
cm['FP'] = v.lt(3).sum()
</code></pre>
<p><em>Details</em><br>
<code>df.Coeff.gt(df.result)</code> returns a mask. Now, hide all those values for which <code>df.Status</code> is not 1. This is done using <code>(df.Coeff > df.result).where(df.Status.astype(bool))</code>. Finally, take this masked result, and group on <code>ID</code>, followed by a sum to get your result.</p>
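<p>The same chain, unrolled step by step (just a restatement of the first expression above):</p>
<pre><code>mask = df.Coeff.gt(df.result)                # True where Coeff > result
masked = mask.where(df.Status.astype(bool))  # rows with Status == 0 become NaN
v = masked.groupby(df.ID).sum()              # per-ID count of True values, NaNs ignored
</code></pre>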
|
python|pandas|pandas-groupby
| 1
|
7,103
| 49,250,225
|
Python: List Nested Dictionary to pandas DataFrame Issue
|
<p>I am struggling with a simple from_dict conversion. I have a dictionary of nested dictionaries of lists, as below (quite confusing to me as well).</p>
<pre><code>dict_total = {'Jane' : {'a1' : [1.1,1.3,1.4,1.9],
'a2' : [3.1,2.4,2.3,1.2],
'a3' : [4.3,2.3,1.5,5.3],
'st' : ['d','dc','sc','sc']},
'Mark' : {'a1' : [3.1,2.3,1.3,1.9],
'a2' : [1.2,2.3,9.3,1.2],
'a3' : [1.1,5.5,1.2,5.3],
'st' : ['cs','s','wc','cd']}
}
</code></pre>
<p>The above is just a simple example; my original contains more than 20,000 keys in dict_total. I want to convert this dictionary to a dataframe (hopefully using loops) like below.</p>
<pre><code>df_total =
a1 a2 a3 st
Jane 1.1 3.1 4.3 d
Jane 1.3 2.4 2.3 dc
Jane 1.4 2.3 1.5 sc
Jane 1.9 1.2 5.3 sc
Mark 3.1 1.2 1.1 cs
Mark 2.3 2.3 5.5 sc
Mark 1.3 9.3 1.2 wc
Mark 1.9 1.2 5.3 cd
</code></pre>
<p>As you can see, the keys of dict_total would be the index of the dataframe, the keys under "Jane" and "Mark" will be the column names, and the lists will be the values. </p>
<p>Hope there is a pythonic way to solve this. Thanks </p>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with a dict comprehension, and last remove the first level with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a>:</p>
<pre><code>df_total = (pd.concat({k: pd.DataFrame(v) for k, v in dict_total.items()})
.reset_index(level=1, drop=True))
print (df_total)
a1 a2 a3 st
Jane 1.1 3.1 4.3 d
Jane 1.3 2.4 2.3 dc
Jane 1.4 2.3 1.5 sc
Jane 1.9 1.2 5.3 sc
Mark 3.1 1.2 1.1 cs
Mark 2.3 2.3 5.5 s
Mark 1.3 9.3 1.2 wc
Mark 1.9 1.2 5.3 cd
</code></pre>
|
python|pandas|dictionary|dataframe|nested
| 3
|
7,104
| 58,847,542
|
Why does diff on these Pandas groupby results in Nan?
|
<p>The example dataframe I have is-</p>
<pre><code>>>> new_df
date country score
0 2018-01-01 ch 50
1 2018-01-01 es 100
2 2018-01-01 us 150
3 2018-01-02 ch 10
4 2018-01-02 gb 100
5 2018-01-02 us 125
6 2018-01-03 us 160
</code></pre>
<p>Why does <code>new_df.groupby(["date", "country"]).diff()</code> produce Nan?</p>
<pre><code>>>> new_df.groupby(["date", "country"]).diff()
score
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
</code></pre>
|
<p>As you can see, the size of each group is <strong>1</strong>, so the result of the subtraction is <code>NaN</code>: a subtraction needs both a minuend and a subtrahend, that is to say a group <strong>size of at least 2</strong>:</p>
<pre><code>df.groupby(['date','country']).size()
date country
2018-01-01 ch 1
es 1
us 1
2018-01-02 ch 1
gb 1
us 1
2018-01-03 us 1
dtype: int64
</code></pre>
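<p>For comparison, grouping by <code>country</code> alone gives groups with more than one row, so <code>diff</code> returns actual differences (a quick sketch against the sample frame above):</p>
<pre><code>new_df.groupby('country')['score'].diff()
0     NaN
1     NaN
2     NaN
3   -40.0
4     NaN
5   -25.0
6    35.0
Name: score, dtype: float64
</code></pre>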
|
python|pandas|pandas-groupby
| 2
|
7,105
| 58,864,253
|
Pandas vectorization for a multiple data frame operation
|
<p>I am looking to increase the speed of an operation within pandas and I have learned that it is generally best to do so via using vectorization. The problem I am looking for help with is vectorizing the following operation.</p>
<p>Setup:</p>
<p><code>df1 =</code> a table with a date-time column, and city column</p>
<p><code>df2 =</code> another (considerably larger) table with a date-time column, and city column</p>
<p>The Operation:</p>
<pre><code>for i, row in df2.iterrows():
for x, row2 in df1.iterrows():
if row['date-time'] - row2['date-time'] > pd.Timedelta('8 hours') and row['city'] == row2['city']:
df2.at[i, 'result'] = True
break
</code></pre>
<p>As you might imagine, this operation is insanely slow on any dataset of a decent size. I am also just beginning to learn pandas vector operations and would like some help in figuring out a more optimal way to solve this problem.</p>
|
<p>I think what you need is <code>merge()</code> with <code>numpy.where()</code> to achieve the same result. </p>
<p>Since you don't have a reproducible sample in your question, kindly consider this:</p>
<pre><code>>>> df1 = pd.DataFrame({'time':[24,20,15,10,5], 'city':['A','B','C','D','E']})
>>> df2 = pd.DataFrame({'time':[2,4,6,8,10,12,14], 'city':['A','B','C','F','G','H','D']})
>>> df1
time city
0 24 A
1 20 B
2 15 C
3 10 D
4 5 E
>>> df2
time city
0 2 A
1 4 B
2 6 C
3 8 F
4 10 G
5 12 H
6 14 D
</code></pre>
<p>From what I understand, you only need to get all the rows in your <code>df2</code> that have a matching value in the <code>city</code> column of <code>df1</code>, where the difference in the dates is greater than 8 hours. </p>
<p>To do that, we need to merge on your city column:</p>
<pre><code>>>> new_df = df2.merge(df1, how = 'inner', left_on = 'city', right_on = 'city')
>>> new_df
time_x city time_y
0 2 A 24
1 4 B 20
2 6 C 15
3 14 D 10
</code></pre>
<p><code>time_x</code> basically is the time in your <code>df2</code> dataframe, and <code>time_y</code> is from your <code>df1</code>.</p>
<p>Now we need to check the difference of those times and retain the rows where it is greater than 8, using <code>numpy.where()</code> to flag them for filtering later:</p>
<pre><code>>>> new_df['flag'] = np.where(new_df['time_y'] - new_df['time_x'] > 8, ['Retain'], ['Remove'])
>>> new_df
time_x city time_y flag
0 2 A 24 Retain
1 4 B 20 Retain
2 6 C 15 Retain
3 14 D 10 Remove
</code></pre>
<p>Now that you have that, you can simply filter your <code>new_df</code> by the flag column, removing the column in the final output as such:</p>
<pre><code>>>> final_df = new_df[new_df['flag'].isin(['Retain'])][['time_x', 'city', 'time_y']]
>>> final_df
time_x city time_y
0 2 A 24
1 4 B 20
2 6 C 15
</code></pre>
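<p>A small variation (my addition, not part of the approach above): instead of the string flags you can keep a boolean column, which also matches the <code>result</code> column you wanted to write into <code>df2</code>, and filter with it directly:</p>
<pre><code>>>> new_df['result'] = new_df['time_y'] - new_df['time_x'] > 8
>>> final_df = new_df.loc[new_df['result'], ['time_x', 'city', 'time_y']]
</code></pre>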
<p>And there you go, no looping needed. Hope this helps :D</p>
|
python|pandas
| 0
|
7,106
| 70,193,405
|
How to refer to Excel index using Python Pandas ILOC?
|
<pre><code>import pandas as pd
import re
file_name = "example.xlsx" #name of the excel file
sheet = "sheet" #name of the sheet
df = pd.read_excel(file_name, sheet_name = sheet, usecols = "A:F")
select_rows = df.iloc[516-2:] #specify rows
</code></pre>
<p>My question is: why, if I want to refer to row 516 onwards (from Excel's index), do I have to subtract 2 from the number, as stated in the code? I know the index in Pandas starts from zero, which would mean subtracting 1 and not 2.</p>
|
<p>@Samuel You are already 'minus one' because of zero-based index in Pandas. However, what isn't clear until reading the Pandas documentation for pd.read_excel is that there is a parameter called 'header' that is set to 0 by default (i.e. the first row (row 1 in Excel) is used as your header for column names). To demonstrate, try modifying the line where you create 'df' by adding an additional argument of header=None (code snippet below) and then run your code and inspect the results.</p>
<pre><code>df = pd.read_excel(file_name, sheet_name = sheet, usecols = "A:F", header=None)
</code></pre>
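<p>Put differently, with the default <code>header=0</code> the first Excel row is consumed as column names, and Pandas indexing is zero-based, so Excel row <code>N</code> ends up at positional index <code>N - 2</code>. A small sketch of the arithmetic:</p>
<pre><code>excel_row = 516
pandas_position = excel_row - 1 - 1   # one for the header row, one for zero-based indexing -> 514
select_rows = df.iloc[pandas_position:]
</code></pre>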
|
python|pandas
| 1
|
7,107
| 70,122,842
|
BERT Pre-Training MLM + NSP
|
<p>I want to pre-train BERT for the tasks MLM + NSP. When I run the code below, it throws this error:</p>
<p>RuntimeError: The size of tensor a (882) must match the size of tensor b (512) at non-singleton dimension 1
1%|▊ | 3/561 [00:02<06:13, 1.49it/s]</p>
<p>It looks like a truncation problem. But why? I just used the libraries as they are. If someone can enlighten me, I would be happy. Thanks in advance.</p>
<p>The code I run:</p>
<pre><code>from transformers import BertTokenizer
from transformers import BertConfig, BertForPreTraining
from transformers import TextDatasetForNextSentencePrediction
from transformers import Trainer, TrainingArguments
from transformers import DataCollatorForLanguageModeling
TOKENIZER_PATH = "hukuk_tokenizer"
MAX_LEN = 512
BLOCK_SIZE = 128
DATA_PATH = "data/toy_sentences_v3.removed_long_sent.txt"
OUTPUT_DIR = "/home/osahin/bert_yoktez/results/"
config = BertConfig()
if TOKENIZER_PATH == "hukuk_tokenizer":
config.update({"vocab_size":30000})
print("config: ",config)
tokenizer = BertTokenizer.from_pretrained(TOKENIZER_PATH)
tokenizer.model_max_length= MAX_LEN
print("Tokenizer: ",tokenizer)
model = BertForPreTraining(config)
dataset= TextDatasetForNextSentencePrediction(
tokenizer=tokenizer,
file_path=DATA_PATH,
block_size = BLOCK_SIZE
)
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=True,
mlm_probability= 0.15
)
training_args = TrainingArguments(
output_dir= OUTPUT_DIR,
overwrite_output_dir=True,
num_train_epochs=1,
per_device_train_batch_size= 32,
save_steps=1000,
save_on_each_node=True,
prediction_loss_only=True,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset,
)
trainer.train()
</code></pre>
<p>NOTE: For the NSP task, the input file was prepared by a sentence per line.</p>
|
<p>The error <code>The size of tensor a (882) must match the size of tensor b (512) at non-singleton dimension</code> most probably means that the maximal text size that the model supports is 512 tokens, but you try to pass a text with 882 tokens to it. To bypass this, you can enable truncation somewhere in your pipeline (most probably, at the moment of text tokenization, i.e. within <code>TextDatasetForNextSentencePrediction</code> or immediately after its creation).</p>
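<p>As a quick diagnostic (a rough sketch, not part of your original setup), you could check which lines of the input file already exceed the model limit before building the dataset, using the variables from your script:</p>
<pre><code>with open(DATA_PATH, encoding="utf-8") as f:
    for i, line in enumerate(f):
        n_tokens = len(tokenizer.encode(line.strip()))
        if n_tokens > tokenizer.model_max_length:
            print(i, n_tokens)
</code></pre>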
|
nlp|huggingface-transformers|bert-language-model
| 1
|
7,108
| 70,173,655
|
Dataframe faster cosine similarity
|
<p>I have a dataframe consisting of individual tweets (id, text, author_id, nn_list), where nn_list is a list of other tweet indices which were previously identified as potential nearest neighbours. Now I have to calculate the cosine similarity between each tweet and every single entry of its list, by looking up the indices in the tfidf matrix to compare the vectors, but with my current approach this is kind of slow. The current code looks something like this:</p>
<pre><code>for index, row in data_df.iterrows():
for candidate in row["nn_list"]:
candidate_cos = float("%.2f" % pairwise_distances(tfidf_matrix[candidate], tfidf_matrix[index], metric='cosine'))
if candidate_cos < nn_distance:
current_nn_candidate = candidate
nn_distance = candidate_cos
</code></pre>
<p>Is there a significantly faster way to calculate this?</p>
|
<p>The following code should work, assuming you do not have too large a range of IDs:</p>
<pre><code>import pandas as pd
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.metrics.pairwise import cosine_similarity
df = pd.DataFrame({"nn_list": [[1, 2], [1,2,3], [1,2,3,7], [11, 12, 13], [2,1]]})
# Data consistent with https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html
df["data"] = df["nn_list"].apply(lambda x: np.repeat(1, len(x)))
df["row"] = df.index
df["row_ind"] = df[['row', 'nn_list']].apply(lambda x: np.repeat(x[0], len(x[1])), axis=1)
df["col_ind"] = df['nn_list'].apply(lambda x: np.array(x))
m = csr_matrix(
(np.concatenate(df['data']),
(np.concatenate(df['row_ind']), np.concatenate(df['col_ind']))))
cosine_similarity(m)
</code></pre>
<p>Will return:</p>
<pre><code>array([[1. , 0.81649658, 0.70710678, 0. , 1. ],
[0.81649658, 1. , 0.8660254 , 0. , 0.81649658],
[0.70710678, 0.8660254 , 1. , 0. , 0.70710678],
[0. , 0. , 0. , 1. , 0. ],
[1. , 0.81649658, 0.70710678, 0. , 1. ]])
</code></pre>
<p>If you have a larger range of IDs I recommend to use spark or have look to <a href="https://stackoverflow.com/questions/40900608/cosine-similarity-on-large-sparse-matrix-with-numpy">cosine similarity on large sparse matrix with numpy</a>.</p>
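<p>If you prefer to keep your original loop structure, one smaller change (a sketch based on the question's variables) is to compute all candidate distances for a tweet in a single call instead of one <code>pairwise_distances</code> call per candidate:</p>
<pre><code>from sklearn.metrics.pairwise import cosine_distances

for index, row in data_df.iterrows():
    candidates = row["nn_list"]
    d = cosine_distances(tfidf_matrix[index], tfidf_matrix[candidates]).ravel()
    nn_candidate = candidates[d.argmin()]   # nearest neighbour for this tweet
    nn_distance = d.min()
</code></pre>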
|
python|pandas|dataframe|performance|loops
| 0
|
7,109
| 56,407,591
|
Preprocess n files concurrently with tf.data API
|
<p>I want to use <code>tf.data.experimental.parallel_interleave</code> to preprocess n files concurrently. The <code>cycle_length</code> argument is used for this purpose, but what is the maximum value of this argument? My CPU has 8 cores and 16 threads.</p>
|
<p>As per official docs on <a href="https://www.tensorflow.org/api_docs/python/tf/data/experimental/parallel_interleave" rel="nofollow noreferrer">tf.data.experimental.parallel_interleave</a></p>
<blockquote>
<p>Unlike tf.data.Dataset.interleave, it gets elements from cycle_length
nested datasets in parallel</p>
</blockquote>
<p>and</p>
<blockquote>
<p>cycle_length: The number of input Datasets to interleave from in
parallel.</p>
</blockquote>
<p>So basically, a reasonable value would be the number of dataset elements (here, input files) you want processed in parallel. In that sense, it has no direct relation to CPU cores/threads.</p>
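<p>For illustration, a minimal usage sketch (the file pattern is just a placeholder):</p>
<pre><code>import tensorflow as tf

files = tf.data.Dataset.list_files("/path/to/*.tfrecord")  # placeholder pattern
dataset = files.apply(
    tf.data.experimental.parallel_interleave(
        lambda filename: tf.data.TFRecordDataset(filename),
        cycle_length=4))   # 4 input files are read and interleaved concurrently
</code></pre>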
|
tensorflow|tensorflow-datasets
| 1
|
7,110
| 56,212,672
|
Create a new column by concating two string columns together
|
<p>Looking to combine two string columns into a new column in a dataframe.</p>
<p>For instance - </p>
<pre><code>>>> df = pd.DataFrame({'Primary Type':['a','b','c'],'Description':['1','2','3']})
>>> df
Primary Type Description
0 a 1
1 b 2
2 c 3
</code></pre>
<p>I'd like the output to be </p>
<pre><code> Primary Type Description combined
0 a 1 a ,1
1 b 2 b ,2
2 c 3 c ,3
</code></pre>
<p>Here's what's been tried -</p>
<pre><code>df['combined'] = df['Primary Type'] + ', ' + df['Description']
</code></pre>
<p>But that doesn't seem to work.</p>
<p>Other ideas?</p>
|
<pre><code>df['combined'] = df['Primary Type'].map(str) + ' ,' + df['Description'].map(str)
df
Primary Type Description combined
a 1 a ,1
b 2 b ,2
c 3 c ,3
</code></pre>
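<p>An equivalent spelling (optional, just another way to write it) uses <code>str.cat</code>:</p>
<pre><code>df['combined'] = df['Primary Type'].astype(str).str.cat(df['Description'].astype(str), sep=' ,')
</code></pre>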
|
python|python-3.x|pandas
| 1
|
7,111
| 56,081,166
|
Calculating difference in minutes based on 30 minute interval?
|
<p>I had a df such as </p>
<pre><code>ID | Half Hour Bucket | clock in time | clock out time | Rate
232 | 4/1/19 8:00 PM | 4/1/19 7:12 PM | 4/1/19 10:45 PM | 0.54
342 | 4/1/19 8:30 PM | 4/1/19 7:12 PM | 4/1/19 7:22 PM | 0.23
232 | 4/1/19 7:00 PM | 4/1/19 7:12 PM | 4/1/19 10:45 PM | 0.54
</code></pre>
<p>I want my output to be </p>
<pre><code> ID | Half Hour Bucket | clock in time | clock out time | Rate | Mins
232 | 4/1/19 8:00 PM | 4/1/19 7:12 PM | 4/1/19 10:45 PM | 0.54 |
342 | 4/1/19 8:30 PM | 4/1/19 7:12 PM | 4/1/19 7:22 PM | 0.23 |
232 | 4/1/19 7:00 PM | 4/1/19 7:12 PM | 4/1/19 10:45 PM | 0.54 |
</code></pre>
<p>Where minutes represents the difference between clock out time and clock in time.</p>
<p>But I can only contain the minutes value for the half hour bucket on the same row it corresponds to.</p>
<p>For example for id 342 it would be ten minutes and the 10 mins would be on that row. </p>
<p>But for ID 232 the clock in to clock out time spans 3 hours. I would only want the 30 mins for 8 to 8:30 in the first row and the 18 mins in the third row. For the minutes in half hour buckets like 8:30-9 or 9-9:30 that don't exist in the first row, I would want to create a new row in that same df that contains NaNs for everything except the half hour bucket and mins fields, for the minutes that do not exist in the original row.</p>
<p>The 30 mins from 8-8:30 would stay in the first row, but I would want 5 new rows for all the half hour buckets that aren't <strong>4/1/19 8:00 PM</strong>, with only the half hour bucket and the rate carrying over from the row. Is this possible? </p>
<p>I thank anyone for their time!</p>
|
<p>Realised my first answer probably wasn't what you wanted. This version, hopefully, is. It was a bit more involved than I first assumed!</p>
<p><strong>Create Data</strong></p>
<p>First of all create a dataframe to work with, based on that supplied in the question. The resultant formatting isn't quite the same but that would be easily fixed, so I've left it as-is here.</p>
<pre><code>import math
import numpy as np
import pandas as pd
# Create a dataframe to work with from the data provided in the question
columns = ['id', 'half_hour_bucket', 'clock_in_time', 'clock_out_time' , 'rate']
data = [[232, '4/1/19 8:00 PM', '4/1/19 7:12 PM', '4/1/19 10:45 PM', 0.54],
[342, '4/1/19 8:30 PM', '4/1/19 7:12 PM', '4/1/19 07:22 PM ', 0.23],
[232, '4/1/19 7:00 PM', '4/1/19 7:12 PM', '4/1/19 10:45 PM', 0.54]]
df = pd.DataFrame(data, columns=columns)
def convert_cols_to_dt(df):
# Convert relevant columns to datetime format
for col in df:
if col not in ['id', 'rate']:
df[col] = pd.to_datetime(df[col])
return df
df = convert_cols_to_dt(df)
# Create the mins column
df['mins'] = (df.clock_out_time - df.clock_in_time)
</code></pre>
<p>Output:</p>
<pre><code> id half_hour_bucket clock_in_time clock_out_time rate mins
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 0 days 03:33:00.000000000
1 342 2019-04-01 20:30:00 2019-04-01 19:12:00 2019-04-01 19:22:00 0.23 0 days 00:10:00.000000000
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 0 days 03:33:00.000000000
</code></pre>
<p><strong>Solution</strong></p>
<p>Next define a simple function to return a list of length equal to the number of 30-minute intervals in the <code>mins</code> column.</p>
<pre><code>def upsample_list(x):
multiplier = math.ceil(x.total_seconds() / (60 * 30))
return list(range(multiplier))
</code></pre>
<p>And apply this to the dataframe:</p>
<pre><code>df['samples'] = df.mins.apply(upsample_list)
</code></pre>
<p>Next, create a new row for each list item in the 'samples' column (using the answer provided by <a href="https://stackoverflow.com/users/1744834/roman-pekar">Roman Pekar</a> <a href="https://stackoverflow.com/a/27266225/9576876">here</a>):</p>
<pre><code>s = df.apply(lambda x: pd.Series(x['samples']),axis=1).stack().reset_index(level=1, drop=True)
s.name = 'sample'
</code></pre>
<p>Join <code>s</code> to the dataframe and clean up the extra columns:</p>
<pre><code>df = df.drop('samples', axis=1).join(s, how='inner').drop('sample', axis=1)
</code></pre>
<p>Which gives us this:</p>
<pre><code> id half_hour_bucket clock_in_time clock_out_time rate mins
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
1 342 2019-04-01 20:30:00 2019-04-01 19:12:00 2019-04-01 19:22:00 0.23 00:10:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
</code></pre>
<p>Nearly there!</p>
<p>Reset the index:</p>
<pre><code>df = df.reset_index(drop=True)
</code></pre>
<p>Set duplicate rows to <code>NaN</code>:</p>
<pre><code>df = df.mask(df.duplicated())
</code></pre>
<p>Which gives:</p>
<pre><code> id half_hour_bucket clock_in_time clock_out_time rate mins
0 232.0 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
1 NaN NaT NaT NaT NaN NaT
2 NaN NaT NaT NaT NaN NaT
3 NaN NaT NaT NaT NaN NaT
4 NaN NaT NaT NaT NaN NaT
5 NaN NaT NaT NaT NaN NaT
6 NaN NaT NaT NaT NaN NaT
7 NaN NaT NaT NaT NaN NaT
8 342.0 2019-04-01 20:30:00 2019-04-01 19:12:00 2019-04-01 19:22:00 0.23 00:10:00
9 232.0 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
10 NaN NaT NaT NaT NaN NaT
11 NaN NaT NaT NaT NaN NaT
12 NaN NaT NaT NaT NaN NaT
13 NaN NaT NaT NaT NaN NaT
14 NaN NaT NaT NaT NaN NaT
15 NaN NaT NaT NaT NaN NaT
16 NaN NaT NaT NaT NaN NaT
</code></pre>
<p>Lastly, forward fill the <code>half_hour_bucket</code> and <code>rate</code> columns.</p>
<pre><code>df[['half_hour_bucket', 'rate']] = df[['half_hour_bucket', 'rate']].ffill()
</code></pre>
<p>Final output:</p>
<pre><code> id half_hour_bucket clock_in_time clock_out_time rate mins
0 232.0 2019-04-01 20:00:00 2019-04-01_19:12:00 2019-04-01_22:45:00 0.54 03:33:00
1 NaN 2019-04-01 20:00:00 NaT NaT 0.54 NaT
2 NaN 2019-04-01 20:00:00 NaT NaT 0.54 NaT
3 NaN 2019-04-01 20:00:00 NaT NaT 0.54 NaT
4 NaN 2019-04-01 20:00:00 NaT NaT 0.54 NaT
5 NaN 2019-04-01 20:00:00 NaT NaT 0.54 NaT
6 NaN 2019-04-01 20:00:00 NaT NaT 0.54 NaT
7 NaN 2019-04-01 20:00:00 NaT NaT 0.54 NaT
8 342.0 2019-04-01 20:30:00 2019-04-01_19:12:00 2019-04-01_19:22:00 0.23 00:10:00
9 232.0 2019-04-01 19:00:00 2019-04-01_19:12:00 2019-04-01_22:45:00 0.54 03:33:00
10 NaN 2019-04-01 19:00:00 NaT NaT 0.54 NaT
11 NaN 2019-04-01 19:00:00 NaT NaT 0.54 NaT
12 NaN 2019-04-01 19:00:00 NaT NaT 0.54 NaT
13 NaN 2019-04-01 19:00:00 NaT NaT 0.54 NaT
14 NaN 2019-04-01 19:00:00 NaT NaT 0.54 NaT
15 NaN 2019-04-01 19:00:00 NaT NaT 0.54 NaT
16 NaN 2019-04-01 19:00:00 NaT NaT 0.54 NaT
</code></pre>
|
python|python-3.x|pandas|data-science|python-datetime
| 1
|
7,112
| 55,869,511
|
groupby and join result has indices and data type included in output
|
<p>The objective is to take a data frame that looks like this:</p>
<pre><code>keywords group
word1 x
word2 x
word3 x
</code></pre>
<p>with group and keywords as strings within a pandas dataframe.</p>
<p>and create a dataframe that looks like this:</p>
<pre><code>x |word1|word2|word3
</code></pre>
<p>This is my current code:</p>
<p>I have tried using a function:</p>
<pre><code>def preprocessing(dataset, group, keywords):
dataset[keywords] = dataset[keywords].replace(' ', '_', regex = True)
df = dataset.groupby(group)[keywords].apply(lambda x: ','.join(str(x).split()))
df = pd.DataFrame(df)
df[keywords] = df[keywords].replace('_', ' ', regex = True)
return(df)
</code></pre>
<p>(the .replace in there was done to make it easier to keep spaces through the .join piece)</p>
<p>and I have tried doing it like this:</p>
<pre><code>data['keywords'] = ['|%s' %i for i in data['keywords']]
x = data.groupby('group')['keywords'].apply(lambda x: ''.join(str(x).split()))
</code></pre>
<p>What I am getting as output has two significant issues.</p>
<ol>
<li>The output ends up looking as follows, with group as the index:</li>
</ol>
<pre><code>0|word1|word2|wordName:x,dtype:object
</code></pre>
<p>where the numbers appear to be the index numbers for the individual words and the final string ends with the descriptive details "Name:x,dtype:object"</p>
<ol start="2">
<li>For large datasets, it will only get the first 30 and last 30 results in the string with an ellipsis in the middle, almost like a preview.</li>
</ol>
<pre><code>27|28|29|30|...|-30|-29|-28|
</code></pre>
<p>What would be causing the weird formatting issues and the data loss? It looks to be an issue with the lambda function as every other piece works as anticipated. Is there another way of doing this that will not result in lost data?</p>
|
<p>Use:</p>
<pre><code>df.groupby('group')['keywords'].apply(lambda x: '|'+'|'.join(x))
</code></pre>
<hr>
<pre><code>group
x |word1|word2|word3
</code></pre>
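<p>If you want to keep your helper-function form, here is a sketch of how it could be rewritten; the key change is joining the group's values directly instead of <code>str(x)</code>, which stringifies the whole Series including its index, name and dtype (and truncates long Series, hence the ellipsis you saw):</p>
<pre><code>def preprocessing(dataset, group, keywords):
    # join each group's keyword strings with '|', keeping spaces intact
    out = dataset.groupby(group)[keywords].apply(lambda x: '|' + '|'.join(x))
    return out.reset_index()
</code></pre>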
|
python|pandas
| 2
|
7,113
| 56,007,553
|
How to use loop to get values out of Pandas dataframe?
|
<p>I would like to loop through each value in <code>startDayValStr</code>, <code>endDayValStr</code>, <code>startTimeValStr</code>, <code>endTimeValStr</code> and use the values as parameters in a URL string. I am using every other value as my <code>end</code> variables, i.e. <code>06:00:00</code> is a <code>startTimeValStr</code> value and <code>07:00:00</code> is an <code>endTimeValStr</code> value.</p>
<p>How do I correctly construct my loop to use these variables in <code>time_param</code>?</p>
<p>I would like for <code>time_param</code> to look like this <code>?start=05-05-2019T06:00:00Z&end=05-05-2019T07:00:00Z</code></p>
<p>Here is my current code: </p>
<pre><code>import pandas as pd
rng = pd.date_range(pd.Timestamp('2019-05-05' + ' ' + "'06:00:00'"),periods=25, freq='H')
dtSeries = pd.Series(rng.format())
ddf = dtSeries.to_frame(name='Date')
ddf['time'] = pd.to_datetime(ddf['Date'])
dateDF = ddf['dates'] = ddf['time'].dt.date
timeDF = ddf['dates'] = ddf['time'].dt.time
startDayVal= dateDF[::2]
endDayVal = dateDF[1::2]
startTimeVal= timeDF[::2]
endTimeVal = timeDF[1::2]
startDayValStr = (startDayVal.to_string())
endDayValStr =(endDayVal.to_string())
startTimeValStr = (startTimeVal.to_string())
endTimeValStr = (endTimeVal.to_string())
for startDate, endDate, startTime, endTime in zip (startDayValStr, endDayValStr, startTimeValStr, endTimeValStr):
time_param = '?start='+ startDate +'T'+startTime + 'Z' + '&end='+ endDate + endTime + 'Z'
print time_param
</code></pre>
|
<p>If using <code>Python 3.x</code>, try <a href="https://docs.python.org/3.4/library/string.html#string-formatting" rel="nofollow noreferrer"><code>string.format</code></a> method, with <a href="https://docs.python.org/2/library/datetime.html#datetime.date.strftime" rel="nofollow noreferrer"><code>date.strftime</code></a>:</p>
<pre><code>rng = pd.date_range(pd.Timestamp('2019-05-05' + ' ' + "'06:00:00'"),
periods=25, freq='H')
for start, end in zip(rng[::2], rng[1::2]):
time_param = '?start={}&end={}'.format(start.strftime('%d-%m-%YT%H:%M:%SZ'),
end.strftime('%d-%m-%YT%H:%M:%SZ'))
print(time_param)
</code></pre>
<p>[out]</p>
<pre><code>?start=05-05-2019T06:00:00Z&end=05-05-2019T07:00:00Z
?start=05-05-2019T08:00:00Z&end=05-05-2019T09:00:00Z
?start=05-05-2019T10:00:00Z&end=05-05-2019T11:00:00Z
?start=05-05-2019T12:00:00Z&end=05-05-2019T13:00:00Z
?start=05-05-2019T14:00:00Z&end=05-05-2019T15:00:00Z
?start=05-05-2019T16:00:00Z&end=05-05-2019T17:00:00Z
?start=05-05-2019T18:00:00Z&end=05-05-2019T19:00:00Z
?start=05-05-2019T20:00:00Z&end=05-05-2019T21:00:00Z
?start=05-05-2019T22:00:00Z&end=05-05-2019T23:00:00Z
?start=06-05-2019T00:00:00Z&end=06-05-2019T01:00:00Z
?start=06-05-2019T02:00:00Z&end=06-05-2019T03:00:00Z
?start=06-05-2019T04:00:00Z&end=06-05-2019T05:00:00Z
</code></pre>
|
python|pandas|datetime
| 1
|
7,114
| 64,845,318
|
How do I send a test request with an image to my deployed model on Google Cloud?
|
<p>I've uploaded to the Google Cloud Platform the model that I trained and exported on lobe.ai. Now I want to send a test request with an image to it so I can use it on my web application. How do I do this?</p>
|
<p>With your tensorflow (<em>I deduce this from your tags</em>) model, you have 2 solutions</p>
<ul>
<li>Either you <a href="https://cloud.google.com/ai-platform/prediction/docs/deploying-models#test_your_model_with_local_predictions" rel="nofollow noreferrer">test locally</a></li>
<li>Or you can <a href="https://cloud.google.com/ai-platform/prediction/docs/deploying-models#deploy_models_and_versions" rel="nofollow noreferrer">deploy your model on AI Platform in online prediction mode</a>.</li>
</ul>
<p>In both cases, you have to submit a <a href="https://cloud.google.com/ai-platform/prediction/docs/online-predict" rel="nofollow noreferrer">binary</a> + your features in a JSON instance, in accordance with your model inputs.</p>
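<p>For the online-prediction route, the request body is a JSON list of instances. A rough sketch of building one with a base64-encoded image (the input key <code>image_bytes</code> is only an assumption here; it has to match your model's serving signature):</p>
<pre><code>import base64
import json

with open("test.jpg", "rb") as f:                      # placeholder image file
    img_b64 = base64.b64encode(f.read()).decode("utf-8")

# "image_bytes" is a placeholder name; use the input name from your exported signature.
body = {"instances": [{"image_bytes": {"b64": img_b64}}]}
print(json.dumps(body)[:200])
</code></pre>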
|
tensorflow|google-cloud-platform
| 1
|
7,115
| 40,141,856
|
How can I manipulate strings in a slice of a pandas MultiIndex
|
<p>I have a <code>MultiIndex</code> like this:</p>
<pre><code> metric
sensor variable side
foo Speed Left Left speed
Right Right speed
bar Speed Left Left_Speed
Right Right_Speed
baz Speed Left speed
foo Support Left Left support
Right Right support
bar Support Left Left_support
Right Right_support
baz Support Left support
</code></pre>
<p>I'm trying to apply a string mapping to a slice of this dataframe:</p>
<pre><code>df.loc['baz',:,'Left'].metric.map(lambda s: "Left_" + s)
</code></pre>
<p>How can I apply this map to just the <code>baz-Left</code> rows, and get back the resulting <code>DataFrame</code>?</p>
<pre><code> metric
sensor variable side
foo Speed Left Left speed
Right Right speed
bar Speed Left Left_Speed
Right Right_Speed
baz Speed Left Left_speed
foo Support Left Left support
Right Right support
bar Support Left Left_support
Right Right_support
baz Support Left Left_support
</code></pre>
|
<p>I found the following method, but I think/hope there must be a more elegant way to achieve this:</p>
<pre><code>In [101]: index_saved = df.index
</code></pre>
<p>Let's sort the index in order to get rid of the <code>KeyError: 'MultiIndex Slicing requires the index to be fully lexsorted tuple len (3), lexsort depth (0)'</code> error:</p>
<pre><code>In [102]: df = df.sort_index()
In [103]: df
Out[103]:
metric
sensor variable side
bar Speed Left Left_Speed
Right Right_Speed
Support Left Left_support
Right Right_support
baz Speed Left speed
Support Left support
foo Speed Left Left speed
Right Right speed
Support Left Left support
Right Right support
In [119]: df.loc[pd.IndexSlice['baz', :, 'Left'], 'metric'] = \
...: 'AAA__' + df.loc[pd.IndexSlice['baz', :, 'Left'], 'metric']
In [120]: df
Out[120]:
metric
sensor variable side
bar Speed Left Left_Speed
Right Right_Speed
Support Left Left_support
Right Right_support
baz Speed Left AAA__speed
Support Left AAA__support
foo Speed Left Left speed
Right Right speed
Support Left Left support
Right Right support
</code></pre>
<p>Set back the old (saved) index:</p>
<pre><code>In [121]: df = df.reindex(index_saved)
In [122]: df
Out[122]:
metric
sensor variable side
foo Speed Left Left speed
Right Right speed
bar Speed Left Left_Speed
Right Right_Speed
baz Speed Left AAA__speed
foo Support Left Left support
Right Right support
bar Support Left Left_support
Right Right_support
baz Support Left AAA__support
</code></pre>
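<p>One alternative that avoids sorting and restoring the index (a sketch on the same frame) is to build a boolean mask from the index levels, which works on an unsorted <code>MultiIndex</code>:</p>
<pre><code>mask = ((df.index.get_level_values('sensor') == 'baz') &
        (df.index.get_level_values('side') == 'Left'))
df.loc[mask, 'metric'] = 'Left_' + df.loc[mask, 'metric']
</code></pre>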
|
python|pandas
| 1
|
7,116
| 40,104,946
|
How to get date after subtracting days in pandas
|
<p>I have a dataframe:</p>
<pre><code>In [15]: df
Out[15]:
date day
0 2015-10-10 23
1 2015-12-19 9
2 2016-03-05 34
3 2016-09-17 23
4 2016-04-30 2
</code></pre>
<p>I want to subtract the number of days from the date and create a new column.</p>
<pre><code>In [16]: df.dtypes
Out[16]:
date datetime64[ns]
day int64
</code></pre>
<p>Desired output something like:</p>
<pre><code>In [15]: df
Out[15]:
date day date1
0 2015-10-10 23 2015-09-17
1 2015-12-19 9 2015-12-10
2 2016-03-05 34 2016-01-29
3 2016-09-17 23 2016-08-25
4 2016-04-30 2 2016-04-28
</code></pre>
<p>I tried but this does not work:</p>
<pre><code>df['date1']=df['date']+pd.Timedelta(df['date'].dt.day-df['day'])
</code></pre>
<p>it throws this error:</p>
<blockquote>
<p>TypeError: unsupported type for timedelta days component: Series</p>
</blockquote>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_timedelta.html" rel="noreferrer"><code>to_timedelta</code></a>:</p>
<pre><code>df['date1'] = df['date'] - pd.to_timedelta(df['day'], unit='d')
print (df)
date day date1
0 2015-10-10 23 2015-09-17
1 2015-12-19 9 2015-12-10
2 2016-03-05 34 2016-01-31
3 2016-09-17 23 2016-08-25
4 2016-04-30 2 2016-04-28
</code></pre>
<p>If you need a <code>Timedelta</code>, use <code>apply</code>, but it is slower:</p>
<pre><code>df['date1'] = df['date'] - df.day.apply(lambda x: pd.Timedelta(x, unit='D'))
print (df)
date day date1
0 2015-10-10 23 2015-09-17
1 2015-12-19 9 2015-12-10
2 2016-03-05 34 2016-01-31
3 2016-09-17 23 2016-08-25
4 2016-04-30 2 2016-04-28
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>#[5000 rows x 2 columns]
df = pd.concat([df]*1000).reset_index(drop=True)
In [252]: %timeit df['date'] - df.day.apply(lambda x: pd.Timedelta(x, unit='D'))
10 loops, best of 3: 45.3 ms per loop
In [253]: %timeit df['date'] - pd.to_timedelta(df['day'], unit='d')
1000 loops, best of 3: 1.71 ms per loop
</code></pre>
|
python|pandas
| 45
|
7,117
| 44,145,297
|
tensorflow: [NOT FOUND] error in RStudio
|
<p>I tried running the following code in <code>RStudio</code>:</p>
<pre><code>library(tensorflow)
x_data <- runif(100, min=0, max=1)
y_data <- x_data * 0.1 + 0.3
W <- tf$Variable(tf$random_uniform(shape(1L), -1.0, 1.0))
</code></pre>
<p>But the last line is throwing the following error:</p>
<pre><code>Error: Python module tensorflow was not found.
Detected Python configuration:
python: /usr/bin/python
libpython: /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/config/libpython2.7.dylib
pythonhome: /System/Library/Frameworks/Python.framework/Versions/2.7:/System/Library/Frameworks/Python.framework/Versions/2.7
version: 2.7.10 (default, Oct 23 2015, 19:19:21) [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)]
numpy: /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy
numpy_version: 1.8.1
tensorflow: [NOT FOUND]
</code></pre>
<p>This is my first time attempting to incorporate <code>Python</code> in <code>RStudio</code> (for purposes of accessing Tensorflow), so I'm not sure what I should check (or where) to make sure that my settings are appropriate.</p>
|
<p>I realized that I had installed the <code>python 3</code> version of <code>tensorflow</code> instead of the <code>python 2</code> version, which is what my error message was telling me <code>RStudio</code> is using. Using the install instructions found <a href="https://www.tensorflow.org/versions/r0.10/get_started/os_setup#test_the_tensorflow_installation" rel="nofollow noreferrer">here</a>, I installed <code>tensorflow</code> for <code>python 2</code> and was able to run the above code.</p>
|
python|r|tensorflow|rstudio
| 2
|
7,118
| 69,596,473
|
Get all values to a numeric value
|
<p>As you can see, my df contains a price list with values like <code>$106.00</code> and <code>1,190.00</code>. I want to convert the values to numeric values, so I tried to replace the $ sign, but that didn't work.</p>
<pre><code>df = pd.DataFrame({'id':['A', 'B', 'C', 'D', 'E'], 'price':['$106.00', '$156.00',
'$166.00', '$106.00', '1,190.00']})
df['price'] = pd.to_numeric(df.price.str.replace("$",""))
# df['price'] = pd.to_numeric(df.price.str[1:])
# that gives me a ValueError: Unable to parse string "1,925.00" at position 7765
</code></pre>
<p>What I want at the end</p>
<pre><code>ID price
A 106.00
B 156.00
C 166.00
D 106.00
E 1,190.00
</code></pre>
|
<p>You can use a <code>regex</code> to replace <code>'\$'</code> and <code>','</code> with <code>''</code> and then convert to numeric, like below <em>(we use <code>'|'</code> to match either <code>$</code> or <code>,</code>)</em>:</p>
<pre><code>>>> df.price = pd.to_numeric(df.price.str.replace(r"\$|,","", regex=True))
>>> df
id price
0 A 106.0
1 B 156.0
2 C 166.0
3 D 106.0
4 E 1190.0
</code></pre>
|
python|pandas
| 0
|
7,119
| 69,520,266
|
How do you match strings with different values in pandas?
|
<p>I'm trying to compare the values in 2 dataframes. This is my code :</p>
<pre><code>for i in df1['Searches']:
for j in df['Tags']:
if i == j:
print(i,j)
</code></pre>
<p>The code works. However, I want to account for cases where the strings don't entirely match, due to spacing, misspelling, or punctuation, but they should match given how much they have in common.</p>
<p>For instance:</p>
<pre><code> Searches | Tags
----------------------------------
lightblue | light blue
light-blue | light blue
light blu | light blue
lite blue | light blue
liteblue | light blue
liteblu | light blue
light b l u e | light blue
light.blue | light blue
l i ght blue | light blue
</code></pre>
<p>I listed variations of possible strings that could show up under searches, and the string that it should match to under tags. Is there a way to account for those variations and still have them match?</p>
<p>Thank you for taking the time to read my question and help in any way you can.</p>
|
<p>You are getting into fuzzy string matching. One way to do that is to use a similarity metric such as <code>jaro_similarity</code> from the Natural Language Toolkit (NLTK):</p>
<pre class="lang-py prettyprint-override"><code>from nltk.metrics.distance import jaro_similarity
df['jaro_similarity'] = df.apply(lambda row: jaro_similarity(row['Searches'], row['Tags']), axis=1)
</code></pre>
<p>Result:</p>
<pre class="lang-py prettyprint-override"><code> Searches Tags jaro_similarity
lightblue light blue 0.966667
light-blue light blue 0.933333
light blu light blue 0.966667
lite blue light blue 0.896296
liteblue light blue 0.858333
liteblu light blue 0.819048
light b l u e light blue 0.923077
light.blue light blue 0.933333
l i ght blue light blue 0.877778
</code></pre>
<p>You have to pick a cut-off point by experimenting on your data. Documentation on the <code>nltk.metrics.distance</code> module: <a href="https://www.nltk.org/api/nltk.metrics.distance.html#module-nltk.metrics.distance" rel="nofollow noreferrer">https://www.nltk.org/api/nltk.metrics.distance.html#module-nltk.metrics.distance</a></p>
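<p>Once you have settled on a threshold, here is a sketch of mapping each search term to its closest canonical tag (the 0.8 cut-off below is only an illustration and should be tuned on your data):</p>
<pre class="lang-py prettyprint-override"><code>from nltk.metrics.distance import jaro_similarity

canonical_tags = df['Tags'].unique()

def best_tag(search, threshold=0.8):
    # pick the canonical tag with the highest similarity; drop it below the cut-off
    tag, score = max(((t, jaro_similarity(search, t)) for t in canonical_tags),
                     key=lambda pair: pair[1])
    return tag if score >= threshold else None

df['matched_tag'] = df['Searches'].apply(best_tag)
</code></pre>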
|
python|pandas|string|dataframe|matching
| 3
|
7,120
| 41,178,761
|
Tensorflow: DropoutWrapper leads to different output?
|
<p>I built an LSTM like this:</p>
<pre><code>lstm_cell = tf.nn.rnn_cell.LSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True, activation=tf.nn.tanh)
lstm_cell = tf.nn.rnn_cell.DropoutWrapper(lstm_cell, output_keep_prob=0.5)
lstm_cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * 3, state_is_tuple=True)
</code></pre>
<p>Then I train the model and save the variables.
The next time, I load the saved variables and skip training, and it gives me a different prediction.</p>
<p>If I change the <code>output_keep_prob</code> to 1, this model always shows me the same prediction, but if the <code>output_keep_prob</code> is less than 1, like 0.5, this model shows me a different prediction every time.</p>
<p>So I guess the <code>DropoutWrapper</code> leads to different output?
If so, how can I solve this problem?</p>
<p>Thanks</p>
|
<p>Try using the <code>seed</code> keyword argument to <code>DropoutWrapper(...)</code>:</p>
<pre><code>lstm_cell = tf.nn.rnn_cell.DropoutWrapper(lstm_cell, output_keep_prob=0.5, seed=42)
</code></pre>
<p>See the docs <a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/rnn_cell/rnn_cell_wrappers__rnncells_that_wrap_other_rnncells_#DropoutWrapper" rel="nofollow noreferrer">here</a> for <code>DropoutWrapper.__init__</code></p>
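<p>Separately (my own note, beyond what was asked): dropout is usually switched off at prediction time. A common TF1 pattern is to make the keep probability a placeholder, so it can be 0.5 while training and default to 1.0 for inference:</p>
<pre><code>keep_prob = tf.placeholder_with_default(1.0, shape=())
lstm_cell = tf.nn.rnn_cell.DropoutWrapper(lstm_cell, output_keep_prob=keep_prob)
# feed {keep_prob: 0.5} during training; leave the default 1.0 when predicting
</code></pre>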
|
python|tensorflow|lstm
| 1
|
7,121
| 54,088,999
|
Groupby and flatten lists
|
<p>I have a pandas dataframe with the following form:</p>
<pre><code>import pandas as pd
p = pd.DataFrame({"int" : [1, 1, 1, 1, 2, 2],
"cod" : [[1,1], [2,2], [1,2], [3,9], [2,2], [2,2]]})
</code></pre>
<p>I want to group by <code>int</code>, which gives me a bunch of lists. I then want to flatten these lists, so I ultimately end up with a dataframe that has this form:</p>
<pre><code>p = pd.DataFrame({"int" : [1, 2],
"cod" : [[1,1,2,2,1,2,3,9], [2,2,2,2]]})
</code></pre>
<p>Here is what I have so far:</p>
<pre><code>p.groupby("int", as_index=False)["cod"]
</code></pre>
<p>I'm stuck at how to flatten once I have grouped by <code>int</code></p>
|
<p>Use <code>sum</code>:</p>
<pre><code>df = p.groupby("int", as_index=False)["cod"].sum()
</code></pre>
<p>Or <code>list comprehension</code>:</p>
<pre><code>df = p.groupby("int")["cod"].apply(lambda x: [z for y in x for z in y]).reset_index()
</code></pre>
<hr>
<pre><code>df = p.groupby("int")["cod"].apply(lambda x: np.concatenate(x.values).tolist()).reset_index()
</code></pre>
<p>For performance if large list should be fastest:</p>
<pre><code>from itertools import chain
df = p.groupby("int")["cod"].apply(lambda x: list(chain.from_iterable(x))).reset_index()
</code></pre>
<hr>
<p>Check more information about <a href="https://stackoverflow.com/q/952914">flattening lists</a>.</p>
<hr>
<pre><code>print (df)
int cod
0 1 [1, 1, 2, 2, 1, 2, 3, 9]
1 2 [2, 2, 2, 2]
</code></pre>
|
python|pandas
| 4
|
7,122
| 53,826,377
|
Numpy apply where operation to specific index range of an image
|
<p>I have a 2000x3000 binary RGB image, and I want to get the number of pixels that match a certain color (say blue, (0,0,255)) within given coordinate points ((x,y), (xmax,ymax)).</p>
<p>I know how to calculate this for the whole image using np masks, but I am not sure how to do it on only a certain range of the array.</p>
|
<p>You can loop through your array and check:</p>
<pre><code>if (part_of_array == blue).all():
number_of_blue_pixels += 1
</code></pre>
<p><code>.all()</code> returns true when all values are equal.</p>
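<p>For a vectorized version restricted to a region (a sketch; <code>img</code> and the coordinates are the question's variables, with rows indexed first):</p>
<pre><code>import numpy as np

blue = np.array([0, 0, 255])
region = img[y:ymax, x:xmax]                         # img is an (H, W, 3) RGB array
n_blue = np.count_nonzero((region == blue).all(axis=-1))
</code></pre>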
|
python|numpy|opencv
| 0
|
7,123
| 66,210,323
|
pd DataFrame, need to add columns, parsing text string date_time into pandas year, dayOfWeek, etc in one pass
|
<p><strong>Need Statement</strong>: I have performed a cursor.fetchall Select from a SQLite database, returning 'id' and 'date_time', the latter of which is text. I want to create additional columns of year, dayOfWeek, dayOfYear, hourOfDay using pd.to_date.</p>
<p><strong>Issue:</strong> Following the example of a <a href="https://pandas.pydata.org/pandas-docs/stable/getting_started/intro_tutorials/05_add_columns.html" rel="nofollow noreferrer">no-loop column add and population approach</a>, I've tried multiple call combinations, none of which work.</p>
<p>I first tested a series of calls to confirm I could split the test date correctly:</p>
<pre><code>sr = pd.Series(['2015-02-08 20:00:00'])
sr = pd.to_datetime(sr)
#Year: Series.dt.year The year of the datetime
#Day of week: Series.dt.dayofweek The day of the week with Monday=0, Sunday=6
#Day of year: Series.dt.dayofyear The ordinal day of the year
#Hour: Series.dt.hour The hours of the datetime
print(sr)
print(sr.dt.year )
print(sr.dt.dayofweek )
print(sr.dt.dayofyear )
print(sr.dt.hour )
</code></pre>
<p>Everything came out as expected;</p>
<blockquote>
<p>0 2015-02-08 20:00:00<br />
dtype: datetime64[ns]<br />
0 2015
dtype: int64<br />
0 6<br />
dtype: int64<br />
0 39<br />
dtype: int64<br />
0 20<br />
dtype: int64</p>
</blockquote>
<p>The code I've tried works perfectly through the lines below, returning 105,861 rows x 2 columns;</p>
<pre><code>def splitDateTime():
try:
sqliteConnection = sqlite3.connect('TestElecConsump.db')
cursor = sqliteConnection.cursor()
print("Connected to SQLite")
sqlite_select_query = """SELECT id, date_time from WeatherRecord;"""
cursor.execute(sqlite_select_query)
records = cursor.fetchall()
print("Total rows are: ", len(records))
print("Printing first row:", records[0])
splitDatepd = pd.DataFrame(records, columns=['id','date_time'])
print("Dataframe shape:", splitDatepd.shape)
print("Dataframe : " , splitDatepd, sep='\n')
print ('records: ' + str(type(records)))
print ('splitDatepd: ' + str(type(splitDatepd)))
</code></pre>
<p>However, the next lines execute with no output whatsoever;</p>
<pre><code>#Add new column of Pandas datetime year
splitDatepd["pd-datetime"] = splitDatepd.to-datetime["date_time"].dt.year
print("Dataframe shape:", splitDatepd.shape)
print("Dataframe : " , splitDatepd, sep='\n')
</code></pre>
<p>So I decided to simplify matters by repeating the above, leaving off the .year parsing:</p>
<pre><code>splitDatepd["pd-datetime"] = splitDatepd.to-datetime["date_time"]
</code></pre>
<p>Still there was no change to splitDatepd.</p>
<p>When the def finalizes and returns the dataframe, a printout of it looks exactly like the original dataframe from the Select statement.</p>
<p>What am I doing wrong?</p>
|
<p>You could try with the <code>pd.to_datetime</code> function in a single column, for example:</p>
<p><code>splitDatepd["pd_datetime"] = pd.to_datetime(splitDatepd["date_time"])</code></p>
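<p>Building on that, the additional columns you mention can then be derived in one pass via the <code>.dt</code> accessor (a sketch; the column names are just the ones from your question):</p>
<pre><code>dt = splitDatepd["pd_datetime"].dt
splitDatepd = splitDatepd.assign(year=dt.year, dayOfWeek=dt.dayofweek,
                                 dayOfYear=dt.dayofyear, hourOfDay=dt.hour)
</code></pre>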
<p>PS: Remember that function names use an underscore; I mean, it is <code>pd.to_datetime</code>, not <code>pd.to-datetime</code>.</p>
|
pandas|dataframe|sqlite|datetime|to-date
| 1
|
7,124
| 65,957,307
|
Compare values of multiple pandas columns
|
<p>Let's say I have four columns with strings in each column (pandas df).
To compare whether they are all the same, I came up with something like this:</p>
<pre><code>df['same_FB'] = np.where( (df['FB_a'] == df['FB_b']) & (df['FB_a'] == df['FB_c']) & (df['FB_a'] == df['FB_d']), 1,0)
</code></pre>
<p>It works fine, but it doesn't look good, and if I had to add a fifth or sixth column it gets even uglier.
Is there another way to test if all columns are the same?
Alternatively, I would be ok with counting the distinct values in these four columns.</p>
|
<p>You can use <strong><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.eq.html" rel="nofollow noreferrer"><code>DataFrame.eq</code></a></strong> + <strong><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>DataFrame.all</code></a></strong>:</p>
<pre><code>x, *y = ['FB_a', 'FB_b', 'FB_c', 'FB_d']
df['same_FB'] = df[y].eq(df[x], axis=0).all(1).view('i1')
</code></pre>
<p>Alternatively you can use <code>nunique</code>:</p>
<pre><code>c = ['FB_a', 'FB_b', 'FB_c', 'FB_d']
df['same_FB'] = df[c].nunique(axis=1, dropna=False).eq(1).view('i1')
</code></pre>
<p>Example:</p>
<pre><code>print(df)
A B C D E
0 10 1 1 1 1
1 20 2 2 2 2
2 30 3 3 3 3
3 40 4 4 4 4
x,*y = ['B', 'C', 'D', 'E']
df['same'] = df[y].eq(df[x], axis=0).all(1).view('i1')
print(df)
A B C D E same
0 10 1 1 1 1 1
1 20 2 2 2 2 1
2 30 3 3 3 3 1
3 40 4 4 4 4 1
</code></pre>
|
python|pandas|dataframe
| 2
|
7,125
| 58,496,938
|
Automated legend creation in a matplotlib scatter plot with legend_elements()
|
<p>I have tried to change the code that is given at the following link, but have come up short. I am just a simple-minded brute trying to learn to code in an elegant manner. Please help.</p>
<p><a href="https://matplotlib.org/3.1.1/gallery/lines_bars_and_markers/scatter_with_legend.html#sphx-glr-gallery-lines-bars-and-markers-scatter-with-legend-py" rel="nofollow noreferrer">https://matplotlib.org/3.1.1/gallery/lines_bars_and_markers/scatter_with_legend.html#sphx-glr-gallery-lines-bars-and-markers-scatter-with-legend-py</a></p>
<p>I need a title, left vertical and bottom horizontal legends. After hours of fiddling with the code I am reaching out to stackoverflow for assistance. Thank you.</p>
<pre><code>import matplotlib
matplotlib.axes.Axes.scatter
matplotlib.pyplot.scatter
matplotlib.axes.Axes.legend
matplotlib.pyplot.legend
matplotlib.collections.PathCollection.legend_elements
N = 365
#s =year = [2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017]
y = colony = [27186, 35905, 2512, 41102, 6521, 84, 17802, 993, 76794, 95872, 15800, 7246, 383369, 38443, 345567, 29626, 35433, 14949, 1177, 60699, 573, 359, 5970, 163861, 106, 2717, 85387, 3183, 3603, 952, 4143, 29782, 3263, 76, 23181, 669, 2050, 1199, 27486, 15517, 3302, 6246, 5582, 906, 757, 427, 528, 1074, 355, 462, 9513, 695, 2374, 46949, 94, 1247, 3396, 127, 538, 38574, 993, 2510, 829, 305, 54707, 10635, 3656, 141902, 26403, 42656, 1707, 485133, 506729, 42030, 3029, 37, 3034, 4163, 284204, 2797, 92609, 865, 187895, 6572, 1318, 39981, 53969, 1433, 2020, 39422, 9220, 148, 1304, 30617, 8737, 51627, 29963, 7307, 1256, 15873, 25674, 22399, 1, 7, 12093, 44306, 2853, 1853, 94547, 11192, 18200, 20822, 731, 1839, 5923, 100410, 1691, 95755, 256234, 84676, 423823, 20681, 148277, 470, 37022, 2700, 1246, 423939, 4335, 4645, 1479, 2037, 183028, 401, 804, 504, 853, 3428, 781, 137066, 55882, 61246, 646, 1156, 3856, 51505, 46971, 751, 24579, 46466, 53106, 1206, 1317, 18277, 360, 11, 22954, 33093, 16970, 9387, 88509, 77674, 81356, 113406, 9228, 10852, 121915, 11549, 95243, 450107, 51415, 123, 499421, 58053, 3122, 717, 70806, 187311, 83120, 194926, 214, 134891, 27002, 4712, 44447, 1808, 1396, 31420, 27303, 9024, 21140, 23241, 1083, 19468, 1178, 1537, 36335, 38599, 24858, 458, 627, 1996, 318, 2465, 2006, 1531, 377, 15, 10027, 2289, 4799, 4087, 2381, 5026, 4189, 35214, 1243, 24961, 6882, 91535, 23061, 734, 192, 81087, 47013, 835, 39903, 71614, 2535, 84768, 289596, 79935, 36124, 697, 625897, 46365, 19318, 609111, 5812, 24995, 29167, 24893, 14700, 26722, 15720, 112502, 1414, 1357, 777, 2736, 72320, 1101, 156746, 71, 127087, 127801, 8366, 509, 2556, 13, 1599, 86, 22359, 2259, 28552, 13728, 9699, 44554, 394, 7351, 847, 15531, 2683, 841, 16189, 1192, 448, 582, 49950, 163019, 1744, 5874, 137431, 2510, 1703, 668, 375, 967, 4590, 156, 689, 869, 1513, 218, 356, 891, 49141, 555, 11972, 749, 26013, 12633, 33389, 1830, 12228, 470, 1018, 1877, 41, 40, 4172, 15, 5287, 29, 459, 11239, 20569, 16311, 21210, 11507, 2001, 1174, 1188, 761, 2340, 56080, 297, 98, 44896, 653, 430, 6519, 6987, 466, 130488, 120352, 415, 741, 175, 41, 948, 1491, 3451, 38378, 30618, 666, 2802, 765, 113, 13624, 1019, 539, 6153, 8197, 8640, 509, 1037, 10320, 5923, 5668, 12, 6, 4, 3883, 3581]
x = beekeepers = [87, 21, 13, 65, 18, 10, 9, 52, 30, 62, 99, 206, 163, 81, 126, 12, 137, 138, 88, 136, 38, 19, 18, 17, 20, 233, 89, 32, 50, 99, 74, 143, 371, 15, 16, 90, 43, 57, 79, 581, 167, 53, 127, 94, 49, 34, 10, 71, 15, 38, 29, 21, 48, 17, 19, 98, 130, 6, 43, 118, 120, 97, 71, 48, 81, 246, 45, 45, 26, 132, 152, 182, 143, 16, 5, 6, 65, 553, 26, 236, 13, 66, 95, 195, 127, 145, 109, 55, 36, 121, 112, 17, 113, 128, 32, 58, 599, 38, 48, 38, 21, 12, 1, 2, 28, 142, 71, 7, 48, 210, 11, 41, 50, 94, 74, 15, 73, 12, 30, 178, 183, 11, 128, 42, 26, 254, 39, 144, 18, 590, 150, 21, 139, 22, 59, 28, 93, 96, 92, 77, 146, 93, 56, 9, 270, 124, 103, 47, 126, 16, 674, 36, 102, 36, 3, 2, 10, 23, 67, 145, 11, 174, 241, 34, 51, 56, 93, 20, 14, 253, 173, 18, 186, 136, 140, 12, 119, 78, 150, 26, 22, 23, 165, 174, 140, 91, 78, 332, 637, 26, 21, 148, 33, 92, 65, 94, 162, 828, 407, 5, 38, 82, 58, 125, 55, 65, 6, 3, 41, 9, 66, 108, 37, 131, 39, 25, 74, 151, 44, 30, 120, 57, 15, 101, 61, 13, 205, 176, 39, 95, 34, 107, 127, 46, 212, 349, 13, 156, 275, 182, 467, 23, 70, 181, 45, 53, 165, 55, 36, 60, 74, 57, 26, 9, 13, 5, 29, 10, 4, 2, 45, 18, 15, 60, 76, 18, 106, 71, 30, 22, 46, 6, 294, 54, 25, 69, 7, 25, 9, 175, 60, 47, 83, 248, 8, 14, 33, 71, 138, 17, 30, 28, 190, 10, 21, 67, 72, 31, 113, 24, 121, 376, 73, 14, 89, 19, 50, 5, 2, 4, 4, 1, 31, 9, 21, 8, 15, 75, 69, 14, 7, 42, 36, 76, 48, 14, 41, 15, 37, 16, 21, 65, 54, 30, 144, 79, 43, 65, 11, 8, 116, 176, 303, 27, 7, 43, 7, 85, 25, 57, 57, 81, 13, 79, 218, 25, 14, 99, 61, 43, 1, 2, 1, 2, 3]
c = np.random.randint(1, 5, size=N)
s = np.random.randint(10, 220, size=N)
fig, ax = plt.subplots()
scatter = ax.scatter(x, y, c=c, s=s)
# produce a legend with the unique colors from the scatter
legend1 = ax.legend(*scatter.legend_elements(num=3),
loc="lower left", title="Ranking")
ax.add_artist(legend1)
# produce a legend with a cross section of sizes from the scatter
handles, labels = scatter.legend_elements(prop="sizes", alpha=0.6)
legend2 = ax.legend(handles, labels, loc="upper right", title="Sizes")
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/gpqnv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gpqnv.png" alt="matplotlib scatter plot"></a> </p>
<p>I can't figure out the code. I would like a title that says "United States Bee Colonies 2016", a left vertical label saying "Bee Colonies", and a bottom horizontal label saying "Beekeepers".</p>
<p>This is the error code I received:</p>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-56-99856e677b46> in <module>()
17
18 # produce a legend with the unique colors from the scatter
---> 19 legend1 = ax.legend(*scatter.legend_elements(num=3),
20 loc="lower left", title="Ranking")
21 ax.add_artist(legend1)
AttributeError: 'PathCollection' object has no attribute 'legend_elements'
</code></pre>
|
<p>I think you just have some mistakes in your initial import lines.</p>
<p>Try this as your first three lines, followed by <code>y</code> and <code>x</code> definition etc.:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
N = 365
</code></pre>
<p>The functions mentioned at the bottom of the matplotlib gallery url are for reference only and do not need to be specifically mentioned/imported. Also see their downloadable .py file.</p>
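<p>For the title and axis labels requested in the question (text taken from the question itself), something like the following could be added before <code>plt.show()</code>:</p>
<pre><code>ax.set_title("United States Bee Colonies 2016")
ax.set_xlabel("Beekeepers")
ax.set_ylabel("Bee Colonies")
</code></pre>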
|
python|pandas|matplotlib
| 0
|
7,126
| 69,229,335
|
Pandas restructuring to longform
|
<p>I have a dataframe in this format:</p>
<pre><code>{'April': {0: 3266.0, 3: 3044.0, 6: 3607.0},
'May': {0: 3767.0, 3: 3708.0, 6: 3709.0},
'June': {0: 4114.0, 3: 3539.0, 6: 4416.0},
'July': {0: 4544.0, 3: 5176.0, 6: 5298.0},
'August': {0: 4912.0, 3: 5424.0, 6: 5217.0},
'September': {0: 5358.0, 3: 5027.0, 6: 5262.0},
'October': {0: 5265.0, 3: 5163.0, 6: 5597.0},
'November': {0: 5167.0, 3: 5621.0, 6: 5457.0},
'December': {0: 3953.0, 3: 4074.0, 6: 4745.0},
'January': {0: 4235.0, 3: 5792.0, 6: 5067.0},
'February': {0: 3506.0, 3: 3861.0, 6: 3704.0},
'March': {0: 3840.0, 3: 3815.0, 6: 3679.0},
'year': {0: 2015, 3: 2016, 6: 2017}}
</code></pre>
<p>The task is to plot a time series using the year column and the respective values of each month from the dataframe. To do that I have tried converting the dataframe to long form using <code>pandas.wide_to_long</code> and <code>pandas.pivot</code>, but was unsuccessful; any help will be much appreciated.</p>
<p>Expected output:</p>
<pre><code>year, month, value
2015, January, 4235.0
2015, February, 3506.0
2015, March, 3840.0
... , ..., ...
2017, November, 5457.0
2017, December, 4745.0
</code></pre>
|
<p>You need <code>melt</code>:</p>
<pre><code>df.melt('year', var_name='month')
year month value
0 2015 April 3266.0
1 2016 April 3044.0
2 2017 April 3607.0
3 2015 May 3767.0
4 2016 May 3708.0
5 2017 May 3709.0
6 2015 June 4114.0
...
</code></pre>
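<p>From there, one possible way to get the time-series plot mentioned in the question (a sketch; the explicit month ordering and the per-year line plot are assumptions about the desired output):</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

long_df = df.melt('year', var_name='month')

# Assumed calendar ordering for the month axis
months = ['January', 'February', 'March', 'April', 'May', 'June',
          'July', 'August', 'September', 'October', 'November', 'December']
long_df['month'] = pd.Categorical(long_df['month'], categories=months, ordered=True)
long_df = long_df.sort_values(['year', 'month'])

for year, grp in long_df.groupby('year'):
    plt.plot(grp['month'].astype(str), grp['value'], marker='o', label=year)
plt.legend()
plt.show()
</code></pre>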
|
python-3.x|pandas|dataframe
| 1
|
7,127
| 44,723,608
|
Displaying data frame rows in sentence structure
|
<p>I have a simple pandas data frame with 3 columns: Num, Question, Answer:</p>
<pre><code>Num Question Answer
1 What is your favorite color? Green
2 Favorite sport? Basketball
</code></pre>
<p>Basically I just want to present each row of this dataframe in a sentence structure like the following:</p>
<pre><code>Question #1: What is your favorite color? Answer: Green
Question #2: Favorite sport? Answer: Basketball
</code></pre>
<p>How can I achieve this?</p>
|
<p>You can use <code>iterrows</code>.</p>
<pre><code>df = pd.DataFrame({"Question": ['What is your favorite color?', 'Favorite sport?'],
"Answer": ['Green', 'Basketball'],
"Num": [1, 2]})
for _, row in df.iterrows():
print("Question #{0}: {1} Answer: {2}".format(
row['Num'], row['Question'], row['Answer']))
# Output:
# Question #1: What is your favorite color? Answer: Green
# Question #2: Favorite sport? Answer: Basketball
</code></pre>
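<p>A vectorized alternative that avoids the explicit loop (assuming the same column names) could be:</p>
<pre><code>sentences = ("Question #" + df["Num"].astype(str) + ": " + df["Question"]
             + " Answer: " + df["Answer"])
print(*sentences, sep="\n")
</code></pre>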
|
python|pandas
| 1
|
7,128
| 60,927,618
|
Set row maximum to 1 and other values to 0
|
<p>I have a matrix</p>
<pre><code>x = array([[ 1, 2, 4, 6],
[ 8, 29, 11, 35],
[18, 16, 28, 25],
[26, 28, 53, 52]])
</code></pre>
<p>I want to get the maximum and minimum along rows and columns and make them 1 and the rest 0. I do it in the following way to get the max and min along a column:</p>
<pre><code>getMax = np.where(x == np.amax(x, axis=0), 1, 0)
getMin = np.where(x == np.amin(x, axis=0), 1, 0)
</code></pre>
<p>upon doing it, I get:</p>
<pre><code>array([[0, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 0, 0],
[1, 0, 1, 1]]) for maximum
</code></pre>
<p>and </p>
<pre><code>array([[1, 1, 1, 1],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]]) for minimum
</code></pre>
<p>but when I do the following to get the min and max along row</p>
<pre><code>getMax = np.where(x == np.amax(x, axis=1), 1, 0)
getMin = np.where(x == np.amin(x, axis=1), 1, 0)
</code></pre>
<p>I get this:</p>
<pre><code>array([[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 0]]) for maximum
</code></pre>
<p>and </p>
<pre><code>array([[1, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]]) for minimum
</code></pre>
<p>what is wrong in the code for min and max along row?</p>
|
<p>The axes to be compared are not aligned in the second case; you need to ensure the dimensions of both arrays are the same. For that you have <code>keepdims</code>, which is precisely aimed at preserving the input shape. Also, there's no need for <code>np.where</code>; you can just view the boolean result as integers:</p>
<pre><code>(x == np.max(x, axis=1, keepdims=True)).view('i1')
array([[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 0, 1, 0],
[0, 0, 1, 0]], dtype=int8)
</code></pre>
<p>Or we could use <code>argmax</code> with <code>np.put_along_axis</code> for a more performant approach:</p>
<pre><code>getMax = np.zeros_like(x)
np.put_along_axis(getMax,x.argmax(1)[:,None],1,axis=1)
</code></pre>
<p>Timings:</p>
<pre><code>a = np.concatenate([x]*10000, axis=0)
%timeit np.where(a == np.amax(a, axis=1, keepdims=True), 1, 0)
# 1.15 ms ± 6.64 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit (a == np.amax(a, axis=1, keepdims=True)).view('i1')
# 986 µs ± 13.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%%timeit
getMax = np.zeros_like(a)
np.put_along_axis(getMax,a.argmax(1)[:,None],1,axis=1)
# 436 µs ± 12.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
</code></pre>
<hr>
<p>Note that if you do not preserve the 2D shape, you'll get:</p>
<pre><code>np.amax(x, axis=1)
#array([ 6, 35, 28, 53])
</code></pre>
<p>Which is a 1D array, and will be compared along the last axis in <code>x</code>. This becomes clear when comparing the dimensions of both arrays:</p>
<pre><code>x.shape (2d array): 4 x 4
np.amax(x, axis=1).shape (1d array): 4
</code></pre>
<p>Whereas you really want:</p>
<pre><code>x.shape (2d array): 4 x 4
np.amax(x, axis=1, keepdims=True).shape: 4 x 1
</code></pre>
<p>So that they are compared along the first axis (rows)</p>
|
python|numpy
| 2
|
7,129
| 61,021,004
|
Add columns from an old dataset to a new one
|
<p>I have the following dataset: </p>
<pre><code>df=pd.read_csv('/path/text.csv')
</code></pre>
<p>that has columns <code>A B C D</code> (shown by using <code>print(df.columns)</code>)</p>
<p>What I have tried to do is to create new columns using columns from that file as follows: </p>
<pre><code>for index, row in df.iterrows():
parsed=urlparse(row['B'])
netloc.append(parsed.netloc) # E
paths.append(parsed.path) # F
</code></pre>
<p>What I would like to do is to manage this dataset including the new columns created (<code>E</code> and <code>F</code>) but also the old ones, and save this dataset both as a <code>data frame</code> and as a <code>csv</code> (since it is very large, it could be useful to keep a copy in memory). My expected output would be a dataset with <code>6</code> columns <code>(A B C D E F)</code>, <code>4</code> from the old dataset and <code>2</code> from the new one.</p>
<p>How could I do to include columns <code>A B C D</code> in the new dataset and save it in both formats?</p>
<p>I tried with</p>
<pre><code>dataset = pd.DataFrame({"A": a, "B" : b, "C" : c, "D" : d, "E": e, "F": f})
dataset.to_csv('path/text_1.csv', mode='w', header=True, index=False)
</code></pre>
<p>but I got error that B is not defined (<code>NameError: name 'B' is not defined</code>).</p>
<p>Any help would be greatly appreciated. </p>
<p>Thank you</p>
|
<p>I think the function merge would be helpful for you.</p>
<p>If you have two dataframes you can "join" them vertically or horizontally. Merge can help you to join horizontally. The function should be used this way:</p>
<pre><code>df1.merge(df2, left_on='lkey', right_on='rkey')
</code></pre>
<p>What you should consider is the field names of both dataframes.</p>
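<p>As an illustration with the names from the question (treating <code>netloc</code> and <code>paths</code> as the lists built in the parsing loop; this is hypothetical glue code, not tested against the real data):</p>
<pre><code># Build a second frame that shares df's index, then join horizontally
parsed_df = pd.DataFrame({"E": netloc, "F": paths}, index=df.index)
dataset = df.merge(parsed_df, left_index=True, right_index=True)
dataset.to_csv('path/text_1.csv', mode='w', header=True, index=False)
</code></pre>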
|
python|pandas|csv|dataframe
| 0
|
7,130
| 61,069,643
|
tensorflow object detection from image not from live Camera
|
<p>Hi, I want to try object detection on Android with static images rather than live camera previews, and I've seen that there is TensorFlow Lite. Sadly, the tutorials I could find are all for live camera previews and not for images. Does anyone know about a tutorial for TensorFlow Lite, or some other way to detect objects in images, that could teach me how to do basic object detection? Thanks in advance!</p>
|
<p>I think generally it's two steps:</p>
<ol>
<li>Get image (no matter from preview or photos or anything), convert it to <code>android.graphics.Bitmap</code></li>
<li>Do Object detection from <code>Bitmap</code></li>
</ol>
<p>The part (1) is more like an Android question. For (2), you could checkout <a href="https://github.com/tensorflow/examples/blob/bef9fa3b8c81260decc925ced507739a5718c7ef/lite/examples/object_detection/android/app/src/main/java/org/tensorflow/lite/examples/detection/tflite/TFLiteObjectDetectionAPIModel.java#L152" rel="nofollow noreferrer">this example</a></p>
<p>Note: To use the <code>recognizeImage</code> method, you need to resize the <code>Bitmap</code> to <code>INPUT_SIZE * INPUT_SIZE</code> by your own, which is not obvious.</p>
|
android|image|tensorflow-lite
| 0
|
7,131
| 71,525,890
|
How to replace character into multiIndex pandas
|
<p>I have a dataset with several columns containing numbers, and I need to remove the ',' thousands separator.</p>
<p>Here is an example: <code>123,456.15</code> -> <code>123456.15</code>.</p>
<p>I tried to get it done with multi-indexes the following way:</p>
<pre><code>toProcess = ['col1','col2','col3']
df[toProcess] = df[toProcess].str.replace(',','')
</code></pre>
<p>Unfortunately, the error is: <code>'Dataframe' object has no attributes 'str'</code>. DataFrames don't have a <code>str</code> attribute, but Series do.</p>
<blockquote>
<p>How can I achieve this task efficiently ?</p>
</blockquote>
<p>Here is a working way iterating over the columns:</p>
<pre><code>toProcess = ['col1','col2','col3']
for i, col in enumerate(toProcess):
df[col] = df[col].str.replace(',','')
</code></pre>
|
<p>Use:</p>
<pre><code>df[toProcess] = df[toProcess].replace(',','', regex=True)
</code></pre>
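<p>If the cleaned columns should end up numeric rather than strings, a follow-up conversion could be (assuming every cleaned value parses as a number):</p>
<pre><code>df[toProcess] = df[toProcess].apply(pd.to_numeric)
</code></pre>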
|
pandas|dataframe|multi-index
| 1
|
7,132
| 71,484,472
|
Tensorflow Probability VI: Discrete + Continuous RVs inference: gradient estimation?
|
<p>See <a href="https://github.com/tensorflow/probability/issues/1534" rel="nofollow noreferrer">this tensorflow-probability issue</a></p>
<pre class="lang-sh prettyprint-override"><code>tensorflow==2.7.0
tensorflow-probability==0.14.1
</code></pre>
<h2>TLDR</h2>
<p>To perform VI on discrete RVs, should I use:</p>
<ul>
<li>A- the REINFORCE gradient estimator</li>
<li>B- the Gumbel-Softmax reparametrization</li>
<li>C- another solution</li>
</ul>
<p>and how to implement it ?</p>
<h2>Problem statement</h2>
<p>Sorry in advance for the long issue, but I believe the problem requires some explaining.</p>
<p>I want to implement a Hierarchical Bayesian Model involving both continuous and <strong>discrete</strong> Random Variables. A minimal example is a Gaussian Mixture model:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
G = 2
p = tfd.JointDistributionNamed(
model=dict(
mu=tfd.Sample(
tfd.Normal(0., 1.),
sample_shape=(G,)
),
z=tfd.Categorical(
probs=tf.ones((G,)) / G
),
x=lambda mu, z: tfd.Normal(
loc=mu[z],
scale=1.
)
)
)
</code></pre>
<p>In this example I don't use the <code>tfd.Mixture</code> API on purpose, to expose the Categorical label. I want to perform <strong>Variational Inference</strong> in this context and, for instance, given an observed <code>x</code>, fit a <code>Categorical</code> distribution with parametric probabilities over the posterior of <code>z</code>:</p>
<pre class="lang-py prettyprint-override"><code>q_probs = tfp.util.TransformedVariable(
tf.ones((G,)) / G,
tfb.SoftmaxCentered(),
name="q_probs"
)
q_loc = tf.Variable(0., name="q_loc")
q_scale = tfp.util.TransformedVariable(
1.,
tfb.Exp(),
name="q_scale"
)
q = tfd.JointDistributionNamed(
model=dict(
mu=tfd.Normal(q_loc, q_scale),
z=tfd.Categorical(probs=q_probs)
)
)
</code></pre>
<p>The issue is: when computing the ELBO and trying to optimize for the optimal <code>q_probs</code> I cannot use the reparameterization gradient estimators: this is AFAIK because <code>z</code> is a discrete RV:</p>
<pre class="lang-py prettyprint-override"><code>
def log_prob_fn(**kwargs):
return p.log_prob(
**kwargs,
x=tf.constant([2.])
)
optimizer = tf.optimizers.SGD()
@tf.function
def fit_vi():
return tfp.vi.fit_surrogate_posterior(
target_log_prob_fn=log_prob_fn,
surrogate_posterior=q,
optimizer=optimizer,
num_steps=10,
sample_size=8
)
_ = fit_vi()
# This last line raises:
# ValueError: Distribution `surrogate_posterior` must be reparameterized, i.e.,a diffeomorphic transformation
# of a parameterless distribution. (Otherwise this function has a biased gradient.)
</code></pre>
<p>I'm looking into a way to make this work. I've identified at least 2 ways to circumvent the issue: using REINFORCE gradient estimator or the Gumbel-Softmax reparameterization.</p>
<h2>A- REINFORCE gradient</h2>
<p>cf <a href="https://www.tensorflow.org/probability/api_docs/python/tfp/vi/GradientEstimators" rel="nofollow noreferrer">this TFP API link</a> a classical result in VI is that the REINFORCE gradient can deal with a non-differentiable objective function, for instance due to discrete RVs.</p>
<p>I can use a <code>tfp.vi.GradientEstimators.SCORE_FUNCTION</code> estimator instead of the <code>tfp.vi.GradientEstimators.REPARAMETERIZATION</code> one using the lower-level <code>tfp.vi.monte_carlo_variational_loss</code> function?
Using the REINFORCE gradient, In only need the <code>log_prob</code> method of <code>q</code> to be differentiable, but the <code>sample</code> method needn't be differentiated.</p>
<p>As far as I understood it, the <code>sample</code> method for a <code>Categorical</code> distribution implies a gradient break, but the <code>log_prob</code> method does not. Am I correct to assume that this could help with my issue? Am I missing something here?</p>
<p>Also I wonder: why is this possibility not exposed in the <code>tfp.vi.fit_surrogate_posterior</code> API ? Is the performance bad, meaning is the variance of the estimator too large for practical purposes ?</p>
<h2>B- Gumbel-Softmax reparameterization</h2>
<p>cf <a href="https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/RelaxedOneHotCategorical" rel="nofollow noreferrer">this TFP API link</a> I could also reparameterize <code>z</code> as a variable <code>y = tfd.RelaxedOneHotCategorical(...)</code> . The issue is: I need to have a proper categorical label to use for the definition of <code>x</code>, so AFAIK I need to do the following:</p>
<pre class="lang-py prettyprint-override"><code>p_GS = tfd.JointDistributionNamed(
model=dict(
mu=tfd.Sample(
tfd.Normal(0., 1.),
sample_shape=(G,)
),
y=tfd.RelaxedOneHotCategorical(
temperature=1.,
probs=tf.ones((G,)) / G
),
x=lambda mu, y: tfd.Normal(
loc=mu[tf.argmax(y)],
scale=1.
)
)
)
</code></pre>
<p>...but this would just move the gradient-breaking problem to <code>tf.argmax</code>. This is where I maybe miss something. Following the Gumbel-Softmax (Jang et al., 2016) paper, I could then use the "STRAIGHT-THROUGH" (ST) strategy and "plug" the gradients of the variable <code>tf.one_hot(tf.argmax(y))</code> -the "discrete y"- onto <code>y</code> -the "continuous y".</p>
<p>But again I wonder: how to do this properly ? I don't want to mix and match the gradients by hand, and I guess an autodiff backend is precisely meant to avoid me this issue. How could I create a distribution that differentiates the forward direction (sampling a "discrete y") from the backward direction (gradient computed using the "continuous y") ? I guess this is the meant usage of the <code>tfd.RelaxedOneHotCategorical</code> distribution, but I don't see this implemented anywhere in the API.</p>
<p>Should I implement this myself ? How ? Could I use something in the lines of <code>tf.custom_gradient</code>?</p>
<h2>Actual question</h2>
<p>Which solution -A or B or another- is meant to be used in the TFP API, if any? How should I implement said solution efficiently?</p>
|
<p>So the idea was not to make this a Q&A, but I looked into this issue for a couple of days and here are my conclusions:</p>
<ul>
<li>solution A -REINFORCE- is a possibility; it doesn't introduce any bias, but as far as I understand it, it has high variance in its vanilla form, making it prohibitively slow for most real-world tasks. As detailed a bit below, control variates can help tackle the variance issue;</li>
<li>solution B, Gumbel-Softmax, exists as well in the API, but I did not find any native way to make it work for hierarchical tasks. Below is my implementation.</li>
</ul>
<p>First off, we need to reparameterize the joint distribution <code>p</code> as the KL between a <em>discrete</em> and a <em>continuous</em> distribution is ill-defined (as explained in the <code>Maddison et al. (2017)</code> paper). To not break the gradients, I implemented a simple <code>one_hot_straight_through</code> operation that converts the <em>continuous</em> RV <code>y</code> into a <em>discrete</em> RV <code>z</code>:</p>
<pre class="lang-py prettyprint-override"><code>G = 2
@tf.custom_gradient
def one_hot_straight_through(y):
depth = y.shape[-1]
z = tf.one_hot(
tf.argmax(
y,
axis=-1
),
depth=depth
)
def grad(upstream):
return upstream
return z, grad
p = tfd.JointDistributionNamed(
model=dict(
mu=tfd.Sample(
tfd.Normal(0., 1.),
sample_shape=(G,)
),
y=tfd.RelaxedOneHotCategorical(
temperature=1.,
probs=tf.ones((G,)) / G
),
x=lambda mu, y: tfd.Normal(
loc=tf.reduce_sum(
one_hot_straight_through(y)
* mu
),
scale=1.
)
)
)
</code></pre>
<p>The variational distribution <code>q</code> follows the same reparameterization and the following code bit does work:</p>
<pre class="lang-py prettyprint-override"><code>q_probs = tfp.util.TransformedVariable(
tf.ones((G,)) / G,
tfb.SoftmaxCentered(),
name="q_probs"
)
q_loc = tf.Variable(tf.zeros((2,)), name="q_loc")
q_scale = tfp.util.TransformedVariable(
1.,
tfb.Exp(),
name="q_scale"
)
q = tfd.JointDistributionNamed(
model=dict(
mu=tfd.Independent(
tfd.Normal(q_loc, q_scale),
reinterpreted_batch_ndims=1
),
y=tfd.RelaxedOneHotCategorical(
temperature=1.,
probs=q_probs
)
)
)
def log_prob_fn(**kwargs):
return p.log_prob(
**kwargs,
x=tf.constant([2.])
)
optimizer = tf.optimizers.SGD()
@tf.function
def fit_vi():
return tfp.vi.fit_surrogate_posterior(
target_log_prob_fn=log_prob_fn,
surrogate_posterior=q,
optimizer=optimizer,
num_steps=10,
sample_size=8
)
_ = fit_vi()
</code></pre>
<p>Now there are several issues with that design:</p>
<ul>
<li>first off, we needed to reparameterize not only <code>q</code> but also <code>p</code>, so we "modify our target model". This results in our models <code>p</code> and <code>q</code> not outputting <em>discrete</em> RVs as originally intended but <em>continuous</em> RVs. I think that the introduction of a <code>hard</code> option like in the <a href="https://pytorch.org/docs/stable/generated/torch.nn.functional.gumbel_softmax.html" rel="nofollow noreferrer">torch implem</a> could be a nice addition to overcome this issue;</li>
<li>second, we introduce the burden of setting up the <code>temperature</code> parameter. The latter makes the <em>continuous</em> RV <code>y</code> smoothly converge to its discrete counterpart <code>z</code>. An annealing strategy, reducing the <code>temperature</code> to lower the bias introduced by the relaxation at the cost of a higher variance, can be implemented. Or the <code>temperature</code> can be learned online, akin to an entropy regularization (see <code>Maddison et al. (2017)</code> and <code>Jang et al. (2017)</code>);</li>
<li>the gradients obtained with this estimator are biased, which is probably acceptable for most applications but is an issue in theory.</li>
</ul>
<p>Recent methods like REBAR (<code>Tucker et al. (2017)</code>) or RELAX (<code>Grathwohl et al. (2018)</code>) can instead obtain unbiased estimators with a lower variance than the original REINFORCE. But they do so at the cost of introducing -learnable- control variates with separate losses. Modifications of the <code>one_hot_straight_through</code> functions could probably implement this.</p>
<p>In conclusion my opinion is that the tensorflow probability support for discrete RVs optimization is too scarce at the moment and that the API lacks native functions and tutorials to make it easier for the user.</p>
|
tensorflow|tensorflow2.0|tensorflow-probability
| 0
|
7,133
| 71,743,482
|
Python numpy: iterate random.rand() without using loop
|
<p>I'm trying to draw random numbers between 0 and 1 until their sum is >= 1, counting how many draws that takes. This process is repeated n times. Code below:</p>
<pre><code>from random import random
def funct(n): #10_000
media = 0
for _ in range(n):
result = 0
count = 0
while result < 1:
x = random()
result += x
count += 1
if result >= 1:
break
media += count
return media / n
</code></pre>
<p>I need to optimize the above code using only the <code>numpy library</code> and numpy arrays: no Python loops, no Python ifs (numpy.where, for example). How should I do it?</p>
<p>I need to optimize my code because I'm trying to reduce the time used, and my English is not the best.</p>
|
<p>I don't think you'll find any significant improvements by trying to avoid a loop here. If you for some reason need a significantly faster version of this simple system, would probably recommend simply using a faster language than Python.</p>
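<p>That said, if a loop-free numpy sketch is still wanted, one common trick is to draw a fixed, generous number of uniforms per trial up front (the 64-column bound below is an assumption, but the chance of needing more draws is negligible) and count with a cumulative sum:</p>
<pre><code>import numpy as np

def funct_vectorized(n, max_draws=64):
    # One row per trial, max_draws uniforms each
    draws = np.random.rand(n, max_draws)
    cums = np.cumsum(draws, axis=1)
    # Draws needed = number of partial sums still below 1, plus one
    counts = (cums < 1).sum(axis=1) + 1
    return counts.mean()
</code></pre>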
|
python|arrays|python-3.x|numpy|optimization
| 0
|
7,134
| 71,521,735
|
Not able to switch off batch norm layers for faster-rcnn (PyTorch)
|
<p>I'm trying to switch off batch norm layers in a faster-rcnn model for evaluation mode.</p>
<p>I'm doing a sanity check atm:</p>
<pre><code>@torch.no_grad()
def evaluate_loss(model, data_loader, device):
val_loss = 0
model.train()
for images, targets in data_loader:
# check that all layers are in train mode
# for name, module in model.named_modules():
# if hasattr(module, 'training'):
# print('{} is training {}'.format(name, module.training))
# # set bn layers to eval
for module in model.modules():
if isinstance(module, torch.nn.BatchNorm2d):
module.eval()
# bn layers are now in eval
for name, module in model.named_modules():
if hasattr(module, 'training'):
print('{} is training {}'.format(name, module.training))
</code></pre>
<p>However, all the batch norm layers are still in training mode. When I replace it with for example Conv2d, I get the expected behaviour of <code>False</code>. Here is an example snippet of the output:</p>
<pre><code>backbone.body.layer4.0.conv1 is training True
backbone.body.layer4.0.bn1 is training True
backbone.body.layer4.0.conv2 is training True
backbone.body.layer4.0.bn2 is training True
backbone.body.layer4.0.conv3 is training True
backbone.body.layer4.0.bn3 is training True
</code></pre>
<p>Why is this happening? What can I do to switch off these layers? I have tried this with all variations of batch norm as provided by torch.nn.</p>
|
<p>So, after further investigation and after printing out all modules provided by the Faster R-CNN, it turns out that instead of <code>BatchNorm2d</code>, <code>FrozenBatchNorm2d</code> is used by the pretrained model.</p>
<p>Furthermore, unlike what's currently stated by the documentation, you must call <code>torchvision.ops.misc.FrozenBatchNorm2d</code> instead of <code>torchvision.ops.FrozenBatchNorm2d</code>.</p>
<p>Additionally, as the layers are already frozen, there is no need to "switch off" these layers thus <code>model.eval()</code> is probably not required.</p>
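<p>A quick sanity check on the pretrained model, using the import path mentioned above (a sketch):</p>
<pre><code>from torchvision.ops.misc import FrozenBatchNorm2d

for name, module in model.named_modules():
    if isinstance(module, FrozenBatchNorm2d):
        print('{} is a frozen batch norm layer'.format(name))
</code></pre>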
|
python|deep-learning|pytorch|batch-normalization|faster-rcnn
| 0
|
7,135
| 42,521,114
|
Python: element-wise comparison of array to non-array
|
<p>I'm trying to plot some complex functions using numpy. Example of some working code:</p>
<pre><code>import numpy as np
from PIL import Image
size = 1000
w = np.linspace(-10, 10, size)
x, y = np.meshgrid(w, w)
r = x + 1j*y
def f(q):
return np.angle(q)
z = f(r)
normalized = ((255/(np.amax(z) - np.amin(z)))*(z+abs(np.amin(z)))).astype(int)
data = [i for j in normalized for i in j]
img = Image.new('L', (size, size))
img.putdata(data[::-1]) #pixels are done bottom to top
img.show()
</code></pre>
<p>However, suppose I want the function f to have a simple comparison in it, like this:</p>
<pre><code>def f(q):
if np.abs(q) < 4:
return 1
else:
return 0
</code></pre>
<p>I get the error</p>
<pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>For the np.abs(q) < 4 check.</p>
<p>I did some digging and realized it's because Python is doing the operation on the entire r array, and it can't compare an array to an integer. So, I tried looking for ways to do element-wise comparisons.</p>
<p><a href="https://docs.scipy.org/doc/numpy/reference/routines.logic.html" rel="nofollow noreferrer">This page</a> looked promising: it says I can do element-wise comparisons by using np.less(a, b), so I tried</p>
<pre><code>def f(q):
if np.less(np.abs(q), 4):
return 1
else:
return 0
</code></pre>
<p>and got the same ValueError. It seems as though both arguments for np.less() need to be arrays of the same size.</p>
<p>What I want is to compare each element of my array to a <strong>single, non-array quantity.</strong> I suppose I could make a dummy array of the same size filled with identical 4's, but there has to be a more elegant way of doing this.</p>
|
<p>The key is to return an array value instead of trying to coerce an array into a single bool, which is what <code>if (some_array):</code> keeps trying to do. There being no unambiguous way to decide what single boolean <code>np.array([True, False])</code> should convert to, it doesn't even try.</p>
<p>So don't even branch:</p>
<pre><code>def f(q):
return abs(q) < 4
</code></pre>
<p>gives an array like</p>
<pre><code>>>> f(np.array([1,3,5]))
array([ True, True, False], dtype=bool)
</code></pre>
<p>which as numbers will behave like</p>
<pre><code>>>> f(np.array([1,3,5])).astype(int)
array([1, 1, 0])
</code></pre>
<p>and give</p>
<p><a href="https://i.stack.imgur.com/DHvkk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DHvkk.png" alt="circle"></a></p>
|
python|arrays|numpy|elementwise-operations
| 0
|
7,136
| 42,403,765
|
Multiply filtered rows by constant in pandas
|
<p>I checked <a href="https://stackoverflow.com/questions/33768122/python-pandas-dataframe-how-to-multiply-entire-column-with-a-scalar">this answer</a> but it only applies to entire columns.</p>
<p>I have a dataframe with 3 columns (name, date, value)</p>
<pre><code> Name Dt Value
0 aaaa 2018-01-01 100
1 bbbb 2018-07-02 200
2 aaaa 2019-01-01 300
3 aaaa 2020-01-04 500
</code></pre>
<p>I also have a dictionary of yearly scale factors. what i want to do is to multiply all value in a year by the corresponding scale factor.</p>
<p>I tried doing something like this</p>
<pre><code>df[df['Name'] == a & df['Dt'].dt.year == i]['Value'] *= dictionary[i]
</code></pre>
<p>this doesn't seem to be working. is there a way to do this without looping through every row?</p>
|
<p>Use <code>loc</code> and <code>map</code></p>
<pre><code>d = {2018: 100, 2019: 1000, 2020: 10000}
df.loc[
(df.Name == 'aaaa') & (df.Dt.dt.year == 2018), 'Value'
] *= df.Dt.dt.year.map(d)
print(df)
Name Dt Value
0 aaaa 2018-01-01 10000.0
1 bbbb 2018-07-02 200.0
2 aaaa 2019-01-01 300.0
3 aaaa 2020-01-04 500.0
</code></pre>
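<p>If the goal is to scale every row by its year's factor (not just one Name/year combination), the boolean filter can be dropped entirely, assuming every year in the data has an entry in the dictionary:</p>
<pre><code>df['Value'] = df['Value'] * df.Dt.dt.year.map(d)
</code></pre>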
|
python|pandas
| 3
|
7,137
| 42,252,878
|
Count number of occurrences of an array without overlap in another array
|
<p>I have a <code>mxn</code> matrix <code>A</code>, where <code>m%t = n%t = 0</code>, so that a smaller <code>txt</code> matrix <code>B</code> tiles the matrix without borders or overlaps. I want to check if <code>A</code> consists entirely of tiles from <code>B</code> without calculating a tiling as an intermediate step as efficiently as possible. Furthermore for my special use case, it is not necessary to know <code>B</code>. It is enough to test if <code>A</code> strictly repeats itself every <code>txt</code> tile in every direction.</p>
<p>Numeric examples:</p>
<pre><code>A = [[1, 0, 1, 0],
[0, 1, 0, 1],
[1, 0, 1, 0],
[0, 1, 0, 1]]
B.shape = [2,2]
--> True
B.shape = [1,1]
--> False
</code></pre>
<p>So far, I calculate a comparison matrix <code>C</code>, which simply is a tiling of <code>B</code> to fit the size of <code>A</code>:</p>
<pre><code>import numpy as np
x,y = B.shape
x_a, y_a = A.shape
x_t = x_a/x
y_t = y_a/y
B_dash = A[:x, :y]
C = np.tile(B_dash,(x_t, y_t))
np.count_nonzero(A-C)
</code></pre>
<p>Is there a faster way, without calculating <code>C</code>?</p>
|
<p><strong>Appproach #1 :</strong> It seems we are counting the number of occurrences of B in A as distinct blocks. So, we can use <a href="http://scikit-image.org/docs/dev/api/skimage.util.html#skimage.util.view_as_blocks" rel="nofollow noreferrer"><code>skimage.util.view_as_blocks</code></a> -</p>
<pre><code>from skimage.util import view_as_blocks as viewW
out = np.count_nonzero((viewW(A, B.shape) == B).all((2,3)))
</code></pre>
<p><strong>Appproach #2 :</strong> Staying with <code>NumPy</code>, we would have -</p>
<pre><code>m1,n1 = A.shape
m2,n2 = B.shape
out = np.count_nonzero((A.reshape(m1//m2,m2,n1//n2,n2) == B[:,None]).all((1,3)))
</code></pre>
<p>Sample runs -</p>
<pre><code>In [274]: A
Out[274]:
array([[2, 0, 2, 0],
[5, 3, 5, 1],
[3, 3, 2, 6],
[1, 0, 3, 1]])
In [275]: B
Out[275]:
array([[3, 3],
[1, 0]])
In [276]: np.count_nonzero((viewW(A, B.shape) == B).all((2,3)))
Out[276]: 1
In [278]: A
Out[278]:
array([[2, 0, 3, 3],
[5, 3, 1, 0],
[3, 3, 2, 6],
[1, 0, 3, 1]])
In [279]: B
Out[279]:
array([[3, 3],
[1, 0]])
In [280]: np.count_nonzero((viewW(A, B.shape) == B).all((2,3)))
Out[280]: 2
</code></pre>
|
python|performance|numpy
| 3
|
7,138
| 69,838,931
|
How to plot heatmap from standard deviation of binned data?
|
<p>I am trying to make a heatmap of standard deviations (stdv) from gridded data, i.e. data which I have divided up into cells like a grid. I want to obtain the stdv of each of those cells and plot its value in each cell, colour-coded as a heatmap. Below is the code I used to divide the x-y plane into 4 equal bins or cells and then let the v-values fall into those 4 bins. I print them out together with their respective stdv in each bin.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.array([-10,-2,4,12,3,6,8,14,3])
y = np.array([5,5,-6,8,-20,10,2,2,8])
v = np.array([4,-6,-10,40,22,-14,20,8,-10])
xi = x // 20+1
yi = y // 20+1
k=2
cells = [[[] for yi in range(k)] for xi in range(k)]
for ycell in range(k):
for xcell in range(k):
cells[ycell][xcell] = v[(yi == xcell) & (xi == ycell)]
for ycell in range(k):
for xcell in range(k):
this = cells[ycell][xcell]
print(ycell, xcell, len(this), this, sep='\t')
print('direct std dev is', np.std(this))
</code></pre>
<p>I now want to plot this on the x-y plane, with each of those 4 bins showing its stdv value as intensity, appearing as a heatmap like the following figure:</p>
<p><a href="https://i.stack.imgur.com/AOn7d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AOn7d.png" alt="enter image description here" /></a></p>
<p>Could someone please help me figure out how to create this sort of heatmap? Thanks!</p>
|
<p>Do you mean:</p>
<pre><code>k=2
cells = [[[] for yi in range(k)] for xi in range(k)]
stds = [[[] for yi in range(k)] for xi in range(k)]
for ycell in range(k):
for xcell in range(k):
cells[ycell][xcell] = v[(yi == xcell) & (xi == ycell)]
        # empty cells would give np.nan; use 0 for them instead
        vals = cells[ycell][xcell]
        stds[ycell][xcell] = np.std(vals) if len(vals) else 0
# import plot lib
import seaborn as sns
# heatmap
sns.heatmap(stds, cmap='Blues', annot=True)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/qTUF0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qTUF0.png" alt="enter image description here" /></a></p>
|
python|numpy|statistics|heatmap|binning
| 1
|
7,139
| 69,853,940
|
ValueError : Cannot setitem on a Categorical with a new category, set the categories first
|
<p>I have a column with the values changing from 0 to 600 and I want to group that values from 0 to 9.2 by 0.4 increments and 1 group between 9.2 and 600 values as outlier.I tried the following code ;</p>
<pre><code>bin_labels = ['0-0.4', '0.4-0.8', '0.8-1.2', '1.2-1.6',
'1.6-2.0', '2.0-2.4','2.4-2.8', '2.8-3.2',
'3.2-3.6', '3.6-4.0','4.0-4.4', '4.4-4.8',
'4.8-5.2', '5.2-5.6','5.6-6.0', '6.0-6.4',
'6.4-6.8', '6.8-7.2','7.2-7.6', '7.6-8.0',
'8.0-8.4', '8.4-8.8','8.8-9.2']
bins = np.linspace(0.0,9.2,24)
df['A_group'] = pd.cut(df['A'], bins = bins, labels = bin_labels, include_lowest = True)
</code></pre>
<p>After that I want to fill the values between 9.2 and 600 with '9.2-more' label value using following code ;</p>
<pre><code>df['A_group'] = df['A_group'].fillna('9.2-more')
</code></pre>
<p>But it says following error ;</p>
<blockquote>
<blockquote>
<p>Cannot setitem on a Categorical with a new category, set the categories first</p>
</blockquote>
</blockquote>
|
<p>You can append <code>float("inf")</code> to the <code>bins</code> and include "9.2-more" in the <code>bin_labels</code>:</p>
<pre><code>bin_labels = [ '0-0.4', '0.4-0.8', '0.8-1.2', '1.2-1.6',
'1.6-2.0', '2.0-2.4', '2.4-2.8', '2.8-3.2',
'3.2-3.6', '3.6-4.0', '4.0-4.4', '4.4-4.8',
'4.8-5.2', '5.2-5.6', '5.6-6.0', '6.0-6.4',
'6.4-6.8', '6.8-7.2', '7.2-7.6', '7.6-8.0',
               '8.0-8.4', '8.4-8.8', '8.8-9.2', '9.2-more']
bins = np.append(np.linspace(0.0, 9.2, 24), float("inf"))
df["A_group"] = pd.cut(df['A'], bins = bins, labels = bin_labels, include_lowest = True)
</code></pre>
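<p>Alternatively, you can keep the original bins and simply extend the categories before filling, which is what the error message is asking for:</p>
<pre><code>df['A_group'] = df['A_group'].cat.add_categories('9.2-more').fillna('9.2-more')
</code></pre>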
|
python|pandas|numpy|fillna|linspace
| 1
|
7,140
| 43,417,090
|
Apply multiple functions at one time to Pandas groupby object
|
<p>Variations of this question have been asked (see <a href="https://stackoverflow.com/questions/40532024/pandas-apply-multiple-functions-of-multiple-columns-to-groupby-object">this question</a>), but I haven't found a good solution for what would seem to be a common use-case of <code>groupby</code> in Pandas. </p>
<p>Say I have the dataframe <code>lasts</code> and I group by <code>user</code>: </p>
<pre><code>lasts = pd.DataFrame({'user':['a','s','d','d'],
'elapsed_time':[40000,50000,60000,90000],
'running_time':[30000,20000,30000,15000],
'num_cores':[7,8,9,4]})
</code></pre>
<p>And I have these functions I want to apply to <code>groupby_obj</code> (what the functions do isn't important and I made them up, just know that they require multiple columns from the dataframe):</p>
<pre><code>def custom_func(group):
return group.running_time.median() - group.num_cores.mean()
def custom_func2(group):
return max(group.elapsed_time) -min(group.running_time)
</code></pre>
<p>I could <code>apply</code> each of these functions separately to the dataframe and then merge the resulting dataframes, but that seems inefficient, is inelegant, and I imagine there has to be a one-line solution. </p>
<p>I haven't really found one, although this <a href="https://chrisalbon.com/python/pandas_apply_operations_to_groups.html" rel="nofollow noreferrer">blog post</a> (search for "Create a function to get the stats of a group" towards the bottom of the page) suggested wrapping the functions into one function as a dictionary thusly: </p>
<pre><code>def get_stats(group):
return {'custom_column_1': custom_func(group), 'custom_column_2':custom_func2(group)}
</code></pre>
<p>However, when I run the code <code>groupby_obj.apply(get_stats)</code>, instead of columns I get <em>a</em> column of dictionary results:</p>
<pre><code>user
a {'custom_column_1': 29993.0, 'custom_column_2'...
d {'custom_column_1': 22493.5, 'custom_column_2'...
s {'custom_column_1': 19992.0, 'custom_column_2'...
dtype: object
</code></pre>
<p>When in reality I would like to use a line of code to get something closer to this dataframe:</p>
<pre><code>user custom_column_1 custom_column_2
a 29993.0 10000
d 22493.5 75000
s 19992.0 30000
</code></pre>
<p>Suggestions on improving this workflow?</p>
|
<p>Consider the following approach:</p>
<pre><code>funcs = {
'running_time': {'rt_med':'median', 'rt_min':'min'},
'num_cores': {'nc_avg':'mean'},
'elapsed_time': {'et_max':'max'}
}
x = lasts.groupby('user').agg(funcs)
x.columns = x.columns.droplevel(0)
formulas = """
custom_column_1 = rt_med - nc_avg
custom_column_2 = et_max - rt_min
"""
res = x.eval(formulas, inplace=False).drop(x.columns, 1).reset_index()
</code></pre>
<p>Result:</p>
<pre><code>In [145]: res
Out[145]:
user custom_column_1 custom_column_2
0 a 29993.0 10000
1 d 22493.5 75000
2 s 19992.0 30000
</code></pre>
<p><strong>Explanation (step by step):</strong></p>
<pre><code>In [146]: x = lasts.groupby('user').agg(funcs)
In [147]: x
Out[147]:
running_time num_cores elapsed_time
rt_med rt_min nc_avg et_max
user
a 30000 30000 7.0 40000
d 22500 15000 6.5 90000
s 20000 20000 8.0 50000
In [148]: x.columns = x.columns.droplevel(0)
In [149]: x
Out[149]:
rt_med rt_min nc_avg et_max
user
a 30000 30000 7.0 40000
d 22500 15000 6.5 90000
s 20000 20000 8.0 50000
In [150]: x.eval(formulas, inplace=False)
Out[150]:
rt_med rt_min nc_avg et_max custom_column_1 custom_column_2
user
a 30000 30000 7.0 40000 29993.0 10000
d 22500 15000 6.5 90000 22493.5 75000
s 20000 20000 8.0 50000 19992.0 30000
In [151]: x.eval(formulas, inplace=False).drop(x.columns, 1)
Out[151]:
custom_column_1 custom_column_2
user
a 29993.0 10000
d 22493.5 75000
s 19992.0 30000
In [152]: x.eval(formulas, inplace=False).drop(x.columns, 1).reset_index()
Out[152]:
user custom_column_1 custom_column_2
0 a 29993.0 10000
1 d 22493.5 75000
2 s 19992.0 30000
</code></pre>
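<p>Note that in newer pandas versions the nested-dict form of <code>agg</code> used above has been deprecated; assuming pandas 0.25+, the same intermediate table can be built with named aggregation, for example:</p>
<pre><code>x = lasts.groupby('user').agg(
    rt_med=('running_time', 'median'),
    rt_min=('running_time', 'min'),
    nc_avg=('num_cores', 'mean'),
    et_max=('elapsed_time', 'max'),
)
</code></pre>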
|
python|pandas|dataframe|group-by
| 5
|
7,141
| 72,325,139
|
The markers in my plot are far away from where they should be in the image
|
<p>I have the following code, which plots a normal plt.plot with a marker at each value; however, the marker text is far away from the marker point, as seen in the image. Is there something I can do? Does it depend on the figure size? I also tried different figure sizes.</p>
<p><a href="https://i.stack.imgur.com/tjTIH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tjTIH.png" alt="Plot image" /></a></p>
<pre><code>import matplotlib.pyplot as plt
df.reset_index(inplace=True)
f = plt.figure()
f.set_figwidth(10)
f.set_figheight(8)
x = df["State"]
y = df["Total revenue"]
def add_value_label(x_list,y_list):
for i in range(1, len(x_list)+1):
plt.text(i,y_list[i-1],y_list[i-1])
add_value_label(x, y)
plt.plot(x, y, marker='o',color ='orange',markerfacecolor='orange',markeredgecolor='orange')
plt.show()
</code></pre>
|
<p>You're counting your X values from 1 and your Y values from 0. You can see that in the plot -- the values are right, but they're shifted one x column to the right.</p>
<pre><code>def add_value_label(x_list,y_list):
for i,xv in enumerate(x_list):
plt.text(xv,y_list[i],y_list[i])
</code></pre>
|
python|pandas|matplotlib|plot
| 1
|
7,142
| 72,232,480
|
Pandas rolling window cumsum, with incomplete series
|
<p>I have a pandas df as follows:</p>
<pre><code>YEAR MONTH USERID TRX_COUNT
2020 1 1 1
2020 2 1 2
2020 3 1 1
2020 12 1 1
2021 1 1 3
2021 2 1 3
2021 3 1 4
</code></pre>
<p>I want to <code>sum</code> the <code>TRX_COUNT</code> such that each <code>TRX_COUNT</code> is the <code>sum</code> of the <code>TRX_COUNTS</code> of the next 12 months.
So my end result would look like</p>
<pre><code>YEAR MONTH USERID TRX_COUNT TRX_COUNT_SUM
2020 1 1 1 5
2020 2 1 2 7
2020 3 1 1 8
2020 12 1 1 11
2021 1 1 3 10
2021 2 1 3 7
2021 3 1 4 4
</code></pre>
<p>For example, <code>TRX_COUNT_SUM</code> for <code>2020/1</code> is <code>1+2+1+1=5</code>, the sum over the first 12 months.
The second entry is 7, as it is the sum of <code>2+1+1+3</code>, which covers the 12 months from <code>2020/2</code>.
I do expect partials, where there is not a full year of data. These can be either summed up partially or set to zero (as I won't be using partials).
I tried various variations of <code>cumsum</code> and grouping by <code>USERID, YR, MONTH</code> but am running into errors with handling the time window.
Thanks!</p>
|
<p>Looking at the logic and expected output, you are looking for a rolling sum rather than a cumsum. You want to roll over 12 months and sum <code>TRX_COUNT</code>; <code>cumsum</code> would cumulatively add up the previous calculations.</p>
<p>Anyway, a few things are complicated in your dataset: 1. the interval is uneven, and 2. you are looking for forward rolling while typical rolling is backward.</p>
<p>To solve this, first, I would make the interval even so that I can use regular rolling.</p>
<pre class="lang-py prettyprint-override"><code>df['ym'] = pd.to_datetime([f'{x}/0{y}' if y < 10 else f'{x}/{y}' for x, y in zip(df.YEAR, df.MONTH)])
df = df.set_index('ym').resample('MS').first()
</code></pre>
<p>Then, try forward rolling. To do the forward rolling, I reverse the dataframe once and do rolling then reverse back.</p>
<pre class="lang-py prettyprint-override"><code>df['TRX_COUNT_SUM'] = (df.iloc[::-1] # Reverse to do (backward) rolling
.rolling(12, min_periods=0)
.TRX_COUNT.sum()
.iloc[::-1]) # Reverse back to original
# remove resampled records
df = df[df.YEAR > 0]
</code></pre>
<p>Result.</p>
<pre class="lang-none prettyprint-override"><code> YEAR MONTH USERID TRX_COUNT TRX_COUNT_SUM
ym
2020-01-01 2020.0 1.0 1.0 1.0 5.0
2020-02-01 2020.0 2.0 1.0 2.0 7.0
2020-03-01 2020.0 3.0 1.0 1.0 8.0
2020-12-01 2020.0 12.0 1.0 1.0 11.0
2021-01-01 2021.0 1.0 1.0 3.0 10.0
2021-02-01 2021.0 2.0 1.0 3.0 7.0
2021-03-01 2021.0 3.0 1.0 4.0 4.0
</code></pre>
|
pandas
| 1
|
7,143
| 72,258,450
|
Deep Convolutional Autoencoder for movie similarity
|
<p>I am new to Python and I have a dataset that contains movie descriptions; I am trying to create a model that can calculate movie similarity based on these descriptions.
I started by turning each movie description into a Word2Vec vector where each word has size 100. Since the longest movie description in my dataset has 213 words, each movie description is turned into a vector of size 21300.
My next step is to reduce the dimensionality of these vectors using a convolutional autoencoder.
It was recommended to me that I turn each 21300-sized vector into a 150 by 142 matrix, so I did that. My goal is to compress these matrices from 150 by 142 down to 5 by 5, which I will then flatten and use to calculate cosine similarity between different compressed movie vectors.
Here is my faulty code so far:</p>
<pre><code>encoder_input = keras.Input(shape=(21300,), name='sum')
encoded= tf.keras.layers.Reshape((150,142),input_shape=(21300,))(encoder_input)
x = tf.keras.layers.Conv1D(32, 3, activation="relu", padding="same",input_shape=(16,150,142))(encoded)
x = tf.keras.layers.MaxPooling1D(2, padding="same")(x)
x = tf.keras.layers.Conv1D(32, 3, activation="relu", padding="same")(x)
x = tf.keras.layers.MaxPooling1D(2, padding="same")(x)
x = tf.keras.layers.Conv1D(16, 3, activation="relu", padding="same")(x)
x = tf.keras.layers.MaxPooling1D(2, padding="same")(x)
x = tf.keras.layers.Conv1D(16, 3, activation="relu", padding="same")(x)
x = tf.keras.layers.MaxPooling1D(2, padding="same")(x)
x = tf.keras.layers.Conv1D(8, 3, activation="relu", padding="same")(x)
x = tf.keras.layers.MaxPooling1D(2, padding="same")(x)
x=tf.keras.layers.Flatten()(x)
encoder_output=keras.layers.Dense(units=25, activation='relu',name='encoder')(x)
x= tf.keras.layers.Reshape((5,5),input_shape=(25,))(encoder_output)
# Decoder
decoder_input=tf.keras.layers.Conv1D(8, 3, activation='relu', padding='same')(x)
x = tf.keras.layers.UpSampling1D(2)(decoder_input)
x = tf.keras.layers.Conv1D(16, 3, activation='relu')(x)
x = tf.keras.layers.UpSampling1D(2)(x)
x = tf.keras.layers.Conv1D(16, 3, activation='relu')(x)
x = tf.keras.layers.UpSampling1D(2)(x)
x = tf.keras.layers.Conv1D(32, 3, activation='relu')(x)
x = tf.keras.layers.UpSampling1D(2)(x)
x = tf.keras.layers.Conv1D(32, 3, activation='relu')(x)
x = tf.keras.layers.UpSampling1D(2)(x)
#x=tf.keras.layers.Flatten()(x)
decoder_output = keras.layers.Conv1D(1, 3, activation='relu', padding='same')(x)
opt = tf.keras.optimizers.Adam(learning_rate=0.001, decay=1e-6)
autoencoder = keras.Model(encoder_input, decoder_output, name='autoencoder')
autoencoder.compile(opt, loss='mse')
autoencoder.summary()
history = autoencoder.fit(
movies_vector,
movies_vector,
epochs=25
)
print("ENCODER READY")
#USING THE MIDDLE LAYER
encoder = keras.Model(inputs=autoencoder.input,
outputs=autoencoder.get_layer('encoder').output)
</code></pre>
<p>running this code produces the following error:</p>
<pre><code>ValueError: Dimensions must be equal, but are 100 and 21300 for '{{node mean_squared_error/SquaredDifference}} = SquaredDifference[T=DT_FLOAT](mean_squared_error/remove_squeezable_dimensions/Squeeze, IteratorGetNext:1)' with input shapes: [?,100], [?,21300].
</code></pre>
<p>How can I fix this autoencoder?</p>
|
<p>I was able to reproduce the error with dummy data. Changing the decoder model as follows will help.</p>
<pre><code>decoder_input=tf.keras.layers.Conv1D(8, 3, activation='relu', padding='same')(x)
x = tf.keras.layers.UpSampling1D(2)(decoder_input)
x = tf.keras.layers.Conv1D(16, 3, activation='relu')(x)
x = tf.keras.layers.UpSampling1D(2)(x)
x = tf.keras.layers.Conv1D(16, 3, activation='relu')(x)
x = tf.keras.layers.UpSampling1D(2)(x)
x = tf.keras.layers.Conv1D(32, 3, activation='relu')(x)
x = tf.keras.layers.UpSampling1D(2)(x)
x = tf.keras.layers.Conv1D(32, 3, activation='relu')(x)
x = tf.keras.layers.UpSampling1D(2)(x)
x=tf.keras.layers.Conv1D(213, 3, activation='relu', padding='same')(x)
decoder_output = tf.keras.layers.Flatten()(x)
</code></pre>
<p>Please find the gist <a href="https://colab.sandbox.google.com/gist/synandi/21a36e06372ad0be516f73ee7fcef8f1/untitled91.ipynb" rel="nofollow noreferrer">here</a>. Thank you.</p>
|
tensorflow|keras|deep-learning|word2vec|autoencoder
| 0
|
7,144
| 72,403,450
|
Change values of a timedelta column based on the previous row
|
<p>Let it be the following Python Panda Dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>code</th>
<th>visit_time</th>
<th>flag</th>
<th>other</th>
<th>counter</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>NaT</td>
<td>True</td>
<td>X</td>
<td>3</td>
</tr>
<tr>
<td>0</td>
<td>1 days 03:00:12</td>
<td>False</td>
<td>Y</td>
<td>1</td>
</tr>
<tr>
<td>0</td>
<td>NaT</td>
<td>False</td>
<td>X</td>
<td>3</td>
</tr>
<tr>
<td>0</td>
<td>0 days 05:00:00</td>
<td>True</td>
<td>X</td>
<td>2</td>
</tr>
<tr>
<td>1</td>
<td>NaT</td>
<td>False</td>
<td>Z</td>
<td>3</td>
</tr>
<tr>
<td>1</td>
<td>NaT</td>
<td>True</td>
<td>X</td>
<td>3</td>
</tr>
<tr>
<td>1</td>
<td>1 days 03:00:12</td>
<td>False</td>
<td>Y</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>NaT</td>
<td>True</td>
<td>X</td>
<td>3</td>
</tr>
<tr>
<td>2</td>
<td>5 days 10:01:12</td>
<td>True</td>
<td>Y</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>To solve the problem, only the columns: <code>code, visit_time</code> and <code>flag</code> are needed.</p>
<p>Each row with a value of <code>visit_time</code>, has a previous row with value <code>NaT</code>. Knowing this, I want to do next modification in the dataframe:</p>
<ul>
<li>Sets the flag of the row with non-null value of <code>visit_time</code> to the same value as its previous row.</li>
</ul>
<p>Example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>code</th>
<th>visit_time</th>
<th>flag</th>
<th>other</th>
<th>counter</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>NaT</td>
<td>True</td>
<td>X</td>
<td>3</td>
</tr>
<tr>
<td>0</td>
<td>1 days 03:00:12</td>
<td>True</td>
<td>Y</td>
<td>1</td>
</tr>
<tr>
<td>0</td>
<td>NaT</td>
<td>False</td>
<td>X</td>
<td>3</td>
</tr>
<tr>
<td>0</td>
<td>0 days 05:00:00</td>
<td>False</td>
<td>X</td>
<td>2</td>
</tr>
<tr>
<td>1</td>
<td>NaT</td>
<td>False</td>
<td>Z</td>
<td>3</td>
</tr>
<tr>
<td>1</td>
<td>NaT</td>
<td>True</td>
<td>X</td>
<td>3</td>
</tr>
<tr>
<td>1</td>
<td>1 days 03:00:12</td>
<td>True</td>
<td>Y</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>NaT</td>
<td>True</td>
<td>X</td>
<td>3</td>
</tr>
<tr>
<td>2</td>
<td>5 days 10:01:12</td>
<td>True</td>
<td>Y</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>I am grateful for the help offered in advance.</p>
|
<p>You can use <code>.mask</code> to set the <code>'flag'</code> values to the <code>.shift</code>ed version of itself where <code>'visit_time'</code> values are <code>notnull</code>.</p>
<pre class="lang-py prettyprint-override"><code>out = df.assign(
flag=df['flag'].mask(df['visit_time'].notnull(), df['flag'].shift())
)
print(out)
code visit_time flag other counter
0 0 NaT True X 3
1 0 1 days 03:00:12 True Y 1
2 0 NaT False X 3
3 0 0 days 05:00:00 False X 2
4 1 NaT False Z 3
5 1 NaT True X 3
6 1 1 days 03:00:12 True Y 1
7 2 NaT True X 3
8 2 5 days 10:01:12 True Y 0
</code></pre>
<ul>
<li><code>.mask(condition, other)</code> replaces values where condition is True with the values of <code>other</code> in this case <code>other</code> is the value from the previous row.</li>
<li><code>.assign(…)</code> is a way to update a column while returning a new <code>DataFrame</code>; this can be replaced with column assignment <code>df['flag'] = df['flag'].mask(…)</code> to modify the <code>DataFrame</code> in place.</li>
</ul>
<hr />
<p>Creating a column from a string variable.</p>
<pre class="lang-py prettyprint-override"><code>df[name] = df[name].mask(df['visit_time'].notnull(), df[name].shift()))
</code></pre>
|
python|pandas|dataframe|timedelta
| 2
|
7,145
| 45,388,800
|
Python: data argument can't be an iterator
|
<p>I'm trying to replicate the code that is provided here:
<a href="https://github.com/IdoZehori/Credit-Score/blob/master/Credit%20score.ipynb" rel="noreferrer">https://github.com/IdoZehori/Credit-Score/blob/master/Credit%20score.ipynb</a></p>
<p>The function given below fails to run and give error. Can someone help me resolving it</p>
<pre><code>def replaceOutlier(data, method = outlierVote, replace='median'):
'''replace: median (auto)
'minUpper' which is the upper bound of the outlier detection'''
vote = outlierVote(data)
x = pd.DataFrame(zip(data, vote), columns=['annual_income', 'outlier'])
if replace == 'median':
replace = x.debt.median()
elif replace == 'minUpper':
replace = min([val for (val, vote) in list(zip(data, vote)) if vote == True])
if replace < data.mean():
return 'There are outliers lower than the sample mean'
debtNew = []
for i in range(x.shape[0]):
if x.iloc[i][1] == True:
debtNew.append(replace)
else:
debtNew.append(x.iloc[i][0])
return debtNew
</code></pre>
<p>Function Call:</p>
<pre><code>incomeNew = replaceOutlier(df.annual_income, replace='minUpper')
</code></pre>
<blockquote>
<p>Error:
x = pd.DataFrame(zip(data, vote), columns=['annual_income', 'outlier'])
TypeError: data argument can't be an iterator</p>
</blockquote>
<p>PS: I understand this has been asked before, but I tried using the techniques however the error still remains</p>
|
<p><code>zip</code> returns an iterator and cannot be passed to the <code>DataFrame</code> constructor directly; convert the result to a list first:</p>
<pre><code>x = pd.DataFrame(list(zip(data, vote)), columns=['annual_income', 'outlier'])
</code></pre>
<p><strong>Edit</strong> (from <a href="https://stackoverflow.com/a/57223112/6655211">bayethierno</a>'s answer):<br>
Since pandas 0.24.0, there is no need to generate a list from the <code>zip</code> anymore; the following statement is valid:</p>
<pre><code>x = pd.DataFrame(zip(data, vote), columns=['annual_income', 'outlier'])
</code></pre>
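<p>A minimal, self-contained illustration of the fix, with hypothetical <code>data</code> and <code>vote</code> values standing in for the question's variables:</p>
<pre><code>import pandas as pd

data = [45000, 52000, 61000]   # hypothetical income values
vote = [False, True, False]    # hypothetical outlier flags

# Wrapping zip(...) in list(...) works on any pandas version
x = pd.DataFrame(list(zip(data, vote)), columns=['annual_income', 'outlier'])
print(x)
</code></pre>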
|
python|pandas|numpy
| 19
|
7,146
| 62,534,751
|
How to properly setup a data set for training a Keras model
|
<p>I am trying to create a dataset for audio recognition with a simple Keras sequential model.</p>
<p>This is the function I am using to create the model:</p>
<pre><code>def dnn_model(input_shape, output_shape):
model = keras.Sequential()
model.add(keras.Input(input_shape))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation = "relu"))
model.add(layers.Dense(output_shape, activation = "softmax"))
model.compile( optimizer='adam',
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=['acc'])
model.summary()
return model
</code></pre>
<p>And I am Generating my trainingsdata with this Generator function:</p>
<pre><code>def generator(x_dirs, y_dirs, hmm, sampling_rate, parameters):
window_size_samples = tools.sec_to_samples(parameters['window_size'], sampling_rate)
window_size_samples = 2**tools.next_pow2(window_size_samples)
hop_size_samples = tools.sec_to_samples(parameters['hop_size'],sampling_rate)
for i in range(len(x_dirs)):
features = fe.compute_features_with_context(x_dirs[i],**parameters)
praat = tools.praat_file_to_target( y_dirs[i],
sampling_rate,
window_size_samples,
hop_size_samples,
hmm)
yield features,praat
</code></pre>
<p>The variables <code>x_dirs</code> and <code>y_dirs</code> contain a list of paths to labels and audiofiles. In total I got 8623 files to train my model. This is how I train my model:</p>
<pre><code>def train_model(model, model_dir, x_dirs, y_dirs, hmm, sampling_rate, parameters, steps_per_epoch=10,epochs=10):
model.fit((generator(x_dirs, y_dirs, hmm, sampling_rate, parameters)),
epochs=epochs,
batch_size=steps_per_epoch)
return model
</code></pre>
<p>My problem now is that if I pass all 8623 files it will use all 8623 files to train the model in the first epoch and complain after the first epoch that it needs <code>steps_per_epoch * epochs</code> batches to train the model.</p>
<p>I tested this with only 10 of the 8623 files using a sliced list, but then TensorFlow complains that 100 batches are needed.</p>
<p>So how do I make my generator yield data so that this works properly? I always thought that <code>steps_per_epoch</code> just limits the data received per epoch.</p>
|
<p>The fit function is going to exhaust your generator; that is, once it has yielded all 8623 batches, it won't be able to yield any more.</p>
<p>You want to solve the issue like this:</p>
<pre class="lang-py prettyprint-override"><code>def generator(x_dirs, y_dirs, hmm, sampling_rate, parameters, epochs=1):
for epoch in range(epochs): # or while True:
window_size_samples = tools.sec_to_samples(parameters['window_size'], sampling_rate)
window_size_samples = 2**tools.next_pow2(window_size_samples)
hop_size_samples = tools.sec_to_samples(parameters['hop_size'],sampling_rate)
for i in range(len(x_dirs)):
features = fe.compute_features_with_context(x_dirs[i],**parameters)
praat = tools.praat_file_to_target( y_dirs[i],
sampling_rate,
window_size_samples,
hop_size_samples,
hmm)
yield features,praat
</code></pre>
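<p>A minimal sketch of how the training call could then line up with the generator, reusing the names from the question and assuming one batch per file:</p>
<pre class="lang-py prettyprint-override"><code>steps_per_epoch = len(x_dirs)   # one (features, praat) pair yielded per file
epochs = 10

model.fit(generator(x_dirs, y_dirs, hmm, sampling_rate, parameters, epochs=epochs),
          steps_per_epoch=steps_per_epoch,
          epochs=epochs)
</code></pre>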
|
python|tensorflow|keras
| 1
|
7,147
| 54,448,529
|
Selectig entries in a pandas data frame with an array
|
<p>I have a pandas data frame built on an object with multiple attributes. Lets call this data1. data1 has an array x which associates every entry in the data frame with a set of values and arrays y and z which do the same. My code looks as follows. </p>
<pre><code>x = (3, 5, 2, 8, 9)
y = (4, 9, 0, 2, 1)
z = (0, 3, 6, 0, 2)
cols = ['x', 'y', 'z']
data1.loc[:5, cols]
</code></pre>
<p>this returns a table with columns x, y, and z. The entries in the table correspond to the arrays above. What I need to do is use another array to select the corresponding entries in the table. For example, lets say the array is </p>
<pre><code>idx = [1, 4, 5]
</code></pre>
<p>I need to return a table exactly like the one given above but only containing the entries in the idx array. I have tried:</p>
<pre><code>cols = ['x', 'y', 'z']
data1.loc[idx, cols]
</code></pre>
<p>This returns a table that is identical to the previous one, but now the entries are scattered. Where the code should return only the entries corresponding to (1, 4, 5), it returns (1, 4, 5, 2, 3).</p>
|
<p>Something like this (remember indexing starts at 0):</p>
<pre><code>import pandas as pd
x = (3, 5, 2, 8, 9)
y = (4, 9, 0, 2, 1)
z = (0, 3, 6, 0, 2)
df = pd.DataFrame()
df['x'] = x
df['y'] = y
df['z'] = z
print(df);print()
idx = [1, 4, 2]
cols = ['x', 'y', 'z']
print(df.loc[idx, cols])
x y z
1 5 9 3
4 9 1 2
2 2 0 6
</code></pre>
|
linux|python-3.x|pandas|dataframe
| 0
|
7,148
| 54,338,133
|
Aggregate rows based on two identifiers
|
<p>I have the following data set</p>
<pre><code>df = pd.DataFrame({'A' : ['E1', 'E1', 'E1', 'E2', 'E2'],
'B' : ['R1', 'R1', 'R2', 'R2', 'R2'],
'C' : [100, 100, 300, 250, 250]})
</code></pre>
<p>I now want to aggregate the rows using <code>A</code> and <code>B</code> as the shared identifier for an observation. I then want to calculate the sum and the average of <code>C</code> and count the number of times this pair has been observed and append those values to a data frame.</p>
<pre><code>df = pd.DataFrame({'A' : ['E1', 'E1', 'E2'],
'B' : ['R1', 'R2', 'R2'],
'C_sum' : [200, 300, 500],
'C_avg' : [100, 300, 250],
'count' : [2, 1, 2]})
</code></pre>
|
<p>Using <code>groupby</code> with <code>agg</code></p>
<pre><code>df.groupby(['A','B']).C.agg(['sum','mean','count']).reset_index()
    A   B  sum  mean  count
0  E1  R1  200   100      2
1  E1  R2  300   300      1
2  E2  R2  500   250      2
</code></pre>
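<p>If you want the exact column names from the question (<code>C_sum</code>, <code>C_avg</code>, <code>count</code>), one option is named aggregation (available since pandas 0.25); a sketch:</p>
<pre><code>out = (df.groupby(['A', 'B'], as_index=False)
         .agg(C_sum=('C', 'sum'), C_avg=('C', 'mean'), count=('C', 'count')))
print(out)
</code></pre>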
|
python|pandas|dataframe|merge
| 1
|
7,149
| 54,545,054
|
Cleaner way to whiten each image in a batch using keras
|
<p>I would like to whiten each image in a batch. The code I have to do so is this:</p>
<pre><code>def whiten(self, x):
shape = x.shape
x = K.batch_flatten(x)
mn = K.mean(x, 0)
std = K.std(x, 0) + K.epsilon()
r = (x - mn) / std
r = K.reshape(x, (-1,shape[1],shape[2],shape[3]))
return r
#
</code></pre>
<p>where x is (?, 320,320,1). I am not keen on the reshape function with a -1 arg. Is there a cleaner way to do this?</p>
|
<p>Let's see what the <code>-1</code> does. From the Tensorflow documentation (Because the documentation from Keras is scarce compared to the one from Tensorflow):</p>
<blockquote>
<p>If one component of shape is the special value -1, the size of that dimension is computed so that the total size remains constant.</p>
</blockquote>
<p>So what this means:</p>
<pre><code>from keras import backend as K
X = tf.constant([1,2,3,4,5])
K.reshape(X, [-1, 5])
# Add one more dimension, the number of columns should be 5, and keep the number of elements to be constant
# [[1 2 3 4 5]]
X = tf.constant([1,2,3,4,5,6])
K.reshape(X, [-1, 3])
# Add one more dimension, the number of columns should be 3
# For the number of elements to be constant the number of rows should be 2
# [[1 2 3]
# [4 5 6]]
</code></pre>
<p>I think it is simple enough. So what happens in your code:</p>
<pre><code># Let's assume we have 5 images, 320x320 with 3 channels
X = tf.ones((5, 320, 320, 3))
shape = X.shape
# Let's flat the tensor so we can perform the rest of the computation
flatten = K.batch_flatten(X)
# What this did is: Turn a nD tensor into a 2D tensor with same 0th dimension. (Taken from the documentation directly, let's see that below)
flatten.shape
# (5, 307200)
# So all the other elements were squeezed in 1 dimension while keeping the batch_size the same
# ...The rest of the stuff in your code is executed here...
# So we did all we wanted and now we want to revert the tensor in the shape it had previously
r = K.reshape(flatten, (-1, shape[1],shape[2],shape[3]))
r.shape
# (5, 320, 320, 3)
</code></pre>
<p>Besides, I can't think of a cleaner way to do what you want to do. If you ask me, your code is already clear enough.</p>
|
tensorflow|keras
| 1
|
7,150
| 54,432,459
|
How to add rows to a dataframe based on the diff of two columns
|
<p>I am struggling with this one.</p>
<p>Let's assume a dataframe that looks like this:</p>
<pre><code>df = pd.DataFrame({'col0':['string1', 'string2'],
'col1':['some string','another string'],
'start':[100,1],
'end':[107,5]})
col0 col1 start end
0 string1 some string 100 107
1 string2 another string 1 5
</code></pre>
<p>The goal is to find the difference between <code>start</code> and <code>end</code>, add that many rows to my dataframe, <code>ffill</code> the rest of the columns, and add a cumulative count for the range between <code>start</code> and <code>end</code>. Expected output below:</p>
<pre><code>df2 = pd.DataFrame({'col0':['string1']*8,
'col1':['some string']*8,
'new_col':[x for x in range(100,108)]})
df3 = pd.DataFrame({'col0':['string2']*5,
'col1':['another string']*5,
'new_col':[x for x in range(1,6)]})
output = pd.concat([df2,df3]).reset_index(drop=True)
col0 col1 new_col
0 string1 some string 100
1 string1 some string 101
2 string1 some string 102
3 string1 some string 103
4 string1 some string 104
5 string1 some string 105
6 string1 some string 106
7 string1 some string 107
8 string2 another string 1
9 string2 another string 2
10 string2 another string 3
11 string2 another string 4
12 string2 another string 5
</code></pre>
<p>My first thought was to create a new dataframe... something like:</p>
<pre><code>vals = list(zip(df['start'], df['end']+1))
pd.concat([pd.DataFrame([i], columns=['new_col']) for val in vals for i in range(*val)])
</code></pre>
<p>but this seems rather inefficient and I am struggling to add the remaining data.</p>
|
<p>First, create the list column using a comprehension with <code>range</code>; then the problem becomes <a href="https://stackoverflow.com/questions/53218931/how-do-i-unnest-explode-a-column-in-a-pandas-dataframe/53218939#53218939">unnesting</a>.</p>
<pre><code>df['New']=[list(range(y,x+1)) for x , y in zip(df.pop('end'),df.pop('start'))]
unnesting(df,['New'])
New col0 col1
0 100 string1 some string
0 101 string1 some string
0 102 string1 some string
0 103 string1 some string
0 104 string1 some string
0 105 string1 some string
0 106 string1 some string
0 107 string1 some string
1 1 string2 another string
1 2 string2 another string
1 3 string2 another string
1 4 string2 another string
1 5 string2 another string
</code></pre>
<hr>
<p>FYI </p>
<pre><code>def unnesting(df, explode):
idx=df.index.repeat(df[explode[0]].str.len())
df1=pd.concat([pd.DataFrame({x:np.concatenate(df[x].values)} )for x in explode],axis=1)
df1.index=idx
return df1.join(df.drop(explode,1),how='left')
</code></pre>
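<p>On pandas 0.25 or newer, the custom <code>unnesting</code> helper can be replaced by the built-in <code>DataFrame.explode</code>; a sketch assuming the same <code>df</code> with the <code>'New'</code> list column already created:</p>
<pre><code>out = df.explode('New').reset_index(drop=True)
out['New'] = out['New'].astype(int)   # explode leaves object dtype, cast back if needed
</code></pre>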
|
python|python-3.x|pandas
| 2
|
7,151
| 54,283,085
|
Why am I getting strange triplication of video using Webcam and Tensorflow.js?
|
<p>I have a keras model trained and now I want to run this on the web. I thought this might be a good way to attempt testing out Tensorflow.js. I downloaded the Tesnroflow.js "Webcam-transfer-learning" tutorial and then modified it to get what I currently have. The working keras model performs emotion classification after reducing the size of the image to 48x48. Now in the keras model, I take a snapshot of the webcam, copy it and then draw my box and label. I was trying to do the same thing in tf.js, so I setup a canvas, got a reference to it and tried drawing to the canvas after my conversion to gray scale.</p>
<p>I am seeing a strange behavior where it correctly shows the grayscale image, but it displays it 3 times across, and I'm not sure what I am doing wrong. I have included below the areas where I believe the problem might reside. Should any more info be needed, I can share more. It was my hope that someone who has already tried performing something similar may see right away what I am clearly doing wrong. Any info would be helpful. Thanks!</p>
<p>Modified webcam.js by adding function </p>
<pre><code>preProc() {
return tf.tidy(() => {
// Reads the image as a Tensor from the webcam <video> element.
const webcamImage = tf.fromPixels(this.webcamElement);
//Resize to our image and get back single channel for greyscale
const croppedImage = this.cropImage(webcamImage, 1);
// Expand the outer most dimension so we have a batch size of 1.
const batchedImage = croppedImage.expandDims(0);
// Normalize the image between -1 and 1. The image comes in between 0-255,
// so we divide by 127 and subtract 1.
return batchedImage.toFloat().div(tf.scalar(127)).sub(tf.scalar(1));
});
}
/**
* Crops an image tensor so we get a square image with no white space.
* @param {Tensor4D} img An input image Tensor to crop.
*/
cropImage(img, dim=3) {
const size = Math.min(img.shape[0], img.shape[1]);
const centerHeight = img.shape[0] / 2;
const beginHeight = centerHeight - (size / 2);
const centerWidth = img.shape[1] / 2;
const beginWidth = centerWidth - (size / 2);
return img.slice([beginHeight, beginWidth, 0], [size, size, dim]);
}
</code></pre>
<p>From ui.js I am using drawFrame</p>
<pre><code>export function drawFrame(image, canvas) {
const [width, height] = [300, 165];
const ctx = canvas.getContext('2d');
const imageData = new ImageData(width, height);
const data = image.dataSync();
for (let i = 0; i < height * width; ++i) {
const j = i * 4;
imageData.data[j + 0] = (data[i * 3 + 0] + 1) * 127;
imageData.data[j + 1] = (data[i * 3 + 1] + 1) * 127;
imageData.data[j + 2] = (data[i * 3 + 2] + 1) * 127;
imageData.data[j + 3] = 255;
}
ctx.putImageData(imageData, 0, 0);
}
</code></pre>
<p>Finally in index.js, when the predict button is pressed the below handler executes</p>
<pre><code>async function predict() {
while (isPredicting) {
const predictedClass = tf.tidy(() => {
// Capture the frame from the webcam.
const imgmod = webcam.preProc();
ui.drawFrame(imgmod, grayframe);
// Returns the index with the maximum probability. This number corresponds
// to the class the model thinks is the most probable given the input.
//return predictions.as1D().argMax();
return imgmod;
});
const classId = (await predictedClass.data())[0];
predictedClass.dispose();
//ui.predictClass(classId);
await tf.nextFrame();
}
ui.donePredicting();
}
</code></pre>
<p><a href="https://i.stack.imgur.com/shcW2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/shcW2.png" alt="enter image description here"></a></p>
|
<p><code>drawFrame</code> is drawing the image three times.
It has to do with the shape of the input image and the way <code>height</code> and <code>width</code> are used to crop it. If the input image were of shape [298, 160], the canvas would not be rendered, because there would be an error when trying to access indices that are not in <code>data</code>. For instance, the size of <code>data</code> would be <code>298 * 160</code>, whereas the last iteration of the loop tries to access element <code>3 * 300 * 160</code>. Since there is no error in the code, it indicates that the size of <code>data</code> is bigger than <code>298 * 160</code>. In any case, there is a mismatch in the data dimensions. The image is drawn three times because of the three channels, which were presumably never reduced to one.</p>
<p>Instead of implementing your own way of drawing the image data, you can consider using the <code>tf.toPixels</code> method.</p>
|
python-3.x|tensorflow|keras|tensorflow.js
| 1
|
7,152
| 54,685,300
|
How to transfer the y (const y= await tf.toPixels(image)) to the webworker use webworker.postMessage?
|
<p>I want to use the webworker to deal with some tasks.</p>
<p>Main Thread:
Firstly, I use tf.loadFrozenModel() to load a pre-trained model. Secondly, I use model.predict() to predict an image (size: 512*512*4). When I use <code>const data = await tf.toPixels(image)</code> to get the image pixels, it takes a lot of time, causing the UI to jam. So I want to use a webworker to deal with this problem.</p>
<pre><code>const y=tf.tidy(() => {
......
var output=model.predict(
{[INPUT_NODE_NAME]: imageConcat}, OUTPUT_NODE_NAME);
......
return output
})
webworker.postMessage({headpackage:y});//y is the predicted image
</code></pre>
<p>In webworker:</p>
<pre><code> importScripts('https://cdn.jsdelivr.net/npm/setimmediate@1.0.5/setImmediate.min.js')
importScripts('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.10.3')
var dataMessage;
self.addEventListener('message', function (e) {
dataMessage = e.data;
a();
}, false);
async function a() {
const data = await tf.toPixels(dataMessage["headpackage"]);
//Change the value of image data
var image={
data:new Uint8Array(data),
width:512,
height:512
};
tfoutputtexture.image=image;
tfoutputtexture.flipY=true;
tfoutputtexture.needsUpdate = true;
}
</code></pre>
<p>But it failed.
<a href="https://i.stack.imgur.com/BHHOm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BHHOm.png" alt="enter image description here"></a></p>
|
<p>Instead of sending the tensor object to the webworker, you can send a typed array. </p>
<p>From version 15 onward, you can get an array with the same shape as the tensor by using <code>tensor.array</code>.</p>
<pre><code>webworker.postMessage({headpackage:await y.array()})
// Webworker
tf.toPixels(tf.tensor(dataMessage["headpackage"]));
</code></pre>
<p>If you're using a version prior to 15, you will need to pass in both the typed array and its shape.</p>
<pre><code> webworker.postMessage({headpackage:y.dataSync(), shape: y.shape})
// Webworker
tf.toPixels(tf.tensor(dataMessage["headpackage"], dataMessage["shape"]));
</code></pre>
|
web-worker|tensorflow.js
| 1
|
7,153
| 54,647,711
|
Barplot 2 categorical variables
|
<p>I have two categorical variables and I want to plot something like this:</p>
<p><a href="https://i.stack.imgur.com/saxK5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/saxK5.png" alt="Plot"></a></p>
|
<p>You've tagged your question with pandas so I'm going to assume that your data is stored in a pandas dataframe. </p>
<p>Here I'm going to make some data which may or may not resemble your data:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
detect = np.array([4e6, 5e5])
no_detect = np.array([3.75e6, 6e5])
df = pd.DataFrame(np.array([detect, no_detect]).T, columns=['Has Detections', 'No Detections'])
</code></pre>
<p>pandas has inbuilt plotting routines which make it easy to achieve the plot you'd like.</p>
<pre class="lang-py prettyprint-override"><code>fig, ax = plt.subplots(1, 1)
df.plot.bar(rot=0, ax=ax)
ax.set_ylabel('Counts')
ax.set_xlabel('Census')
</code></pre>
<p>This gave me the following figure:</p>
<p><a href="https://i.stack.imgur.com/cuf7s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cuf7s.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib
| 2
|
7,154
| 73,832,058
|
pandas dataframe column manipulation
|
<p>I have a dataframe that looks like this:</p>
<pre><code>Letter num
A 5
B 4
A 3
B 3
</code></pre>
<p>I want to add 3 if the letter is A and 2 if the letter is B.
I tried this:</p>
<pre><code>for i in df:
if df['Letter'] == A:
df['num'] = df['num'] + 3
else:
df['num'] = df['num'] + 2
</code></pre>
<p>but i get this: ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
|
<p>here is one way to do it</p>
<pre><code># dictionary of the letters and associated values to add
d = {'A' : 3, 'B':2}
# map the letter to get the value and add to the num
df['num']=df['num'] + df['Letter'].map(d)
df
</code></pre>
<pre><code> Letter num
0 A 8
1 B 6
2 A 6
3 B 5
</code></pre>
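<p>An alternative sketch using <code>numpy.where</code>, which avoids building a dictionary for a simple two-way condition:</p>
<pre><code>import numpy as np

# Add 3 where the letter is 'A', otherwise add 2
df['num'] = df['num'] + np.where(df['Letter'] == 'A', 3, 2)
</code></pre>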
|
python|pandas
| 3
|
7,155
| 73,707,529
|
iterating through index list of list using list comprehension
|
<pre><code>next_df = df.shift(-1)
next_waypt = next_df.values.tolist()
waypt=df.values.tolist()
</code></pre>
<p>So I have these 2 lists of lists from a dataframe in pandas. I want to create a new list using values from those 2 lists in a function. I do not know how to iterate over the first index however</p>
<pre><code>y = [math.sin(waypt[:1][1] - next_waypt[:1][1])*math.cos(next_waypt[:1][0]) for y in waypt]
</code></pre>
<p>For this input I get the error "list index out of range".</p>
<pre><code>y = [math.sin(waypt[y][1] - next_waypt[y][1])*math.cos(next_waypt[y][0]) for y in waypt]
</code></pre>
<p>For this input I get "list indices must be integers or slices". Any help would be greatly appreciated. I could just pull each column as a separate list, but it reduces code readability. I am relatively new to Python, so if you all think that is the best solution please let me know.</p>
|
<p>I think your problem is that when you use your list comprehension, you are iterating over the elements of the list and not the index of it</p>
<p>your code should look something like this:</p>
<pre class="lang-py prettyprint-override"><code>y = [math.sin(y[1] - next_waypt[val][1])*math.cos(next_waypt[val][0]) for val, y in enumerate(waypt)]
</code></pre>
<p>Note that <code>enumerate</code> returns an index along with each element.</p>
<p>or this other alternative:</p>
<pre><code>y = [math.sin(way[1] - next[1])*math.cos(next[0]) for way, next in zip(waypt, next_waypt)]
</code></pre>
<p>In this case, you zip both lists and access each element directly in the loop.</p>
|
python|pandas|dataframe
| 1
|
7,156
| 71,344,455
|
Custom loss functions in Keras with penalty depending on the values of y_pred and y_true
|
<p>I need a custom loss functions in Keras for a regression problem.
I have to predict two values (y1, y2) but I want to penalize the error if:</p>
<pre><code>if y1_pred > v1 and y1_true < v1:
or
if y2_pred < v2 and y2_true > v2:
</code></pre>
<p>I need something similar to:</p>
<pre><code>if y1_pred > v1 and y1_true < v1:
p = 1 + (k * (y1_pred-y1_true))
K.mean(K.square(y1_pred-y1_true) * p)
else:
K.mean(K.square(y1_pred-y1_true))
if y2_pred < v2 and y2_true > v2:
p = 1 + (k * (y2_true-y2_pred))
K.mean(K.square(y2_pred-y2_true) * p)
else:
K.mean(K.square(y2_pred-y2_true))
</code></pre>
<p>v1, v2 and k are constants.</p>
|
<p>Try <code>tf.where</code>:</p>
<pre><code>import tensorflow as tf
def custom_loss1(v1 = 0.7, v2 = 1, k =0.5):
def combined_loss(y1_true, y1_pred):
return tf.where(tf.logical_and(tf.greater(y1_pred, v1), tf.less(y1_true, v1)),
tf.reduce_mean(tf.math.square(y1_pred - y1_true) * (1 + (k * (y1_pred - y1_true)))),
tf.reduce_mean(tf.math.square(y1_pred - y1_true)))
return combined_loss
def custom_loss2(v1 = 0.7, v2 = 1, k =0.5):
def combined_loss(y2_true, y2_pred):
return tf.where(tf.logical_and(tf.less(y2_pred, v2), tf.greater(y2_true, v2)),
tf.reduce_mean(tf.math.square(y2_pred-y2_true) * (1 + (k * (y2_true - y2_pred)))),
tf.reduce_mean(tf.math.square(y2_pred-y2_true)))
return combined_loss
inputs = tf.keras.layers.Input((5,))
x = tf.keras.layers.Dense(1, activation = 'relu', name='loss1')(inputs)
y = tf.keras.layers.Dense(1, activation = 'tanh', name='loss2')(inputs)
model = tf.keras.Model(inputs, [x, y])
model.compile(optimizer='adam', loss = {'loss1': custom_loss1(), 'loss2': custom_loss2()})
model.fit(tf.random.normal((10, 5)), [tf.random.normal((10, 1)), tf.random.normal((10, 1))], batch_size=2, epochs=5)
</code></pre>
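<p>If you want the penalty applied per element before averaging (rather than comparing already-reduced values), a sketch of that variant with the same hypothetical constants <code>v1</code> and <code>k</code> could look like this:</p>
<pre><code>import tensorflow as tf

def custom_loss1(v1=0.7, k=0.5):
    def combined_loss(y_true, y_pred):
        sq_err = tf.math.square(y_pred - y_true)
        penalty = 1.0 + k * (y_pred - y_true)
        cond = tf.logical_and(tf.greater(y_pred, v1), tf.less(y_true, v1))
        # Apply the penalty only on the elements where the condition holds, then average
        return tf.reduce_mean(tf.where(cond, sq_err * penalty, sq_err))
    return combined_loss
</code></pre>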
|
python|tensorflow|keras
| 0
|
7,157
| 52,273,857
|
Counting number of first time binary indicators in a time series
|
<p>I have a dataframe that uses binary indicators to reflect whether a customer is live during a particular month. If the customer is live, there is a 1, if not there is a 0. The dataframe looks like the below:</p>
<pre><code>Customer A B C D E F G H I J
11/30/2015 1 0 1 0 0 1 1 0 0 0
12/31/2015 0 1 0 1 0 1 1 0 0 1
1/31/2016 0 0 0 0 0 1 1 0 0 1
2/29/2016 1 1 1 1 1 1 0 1 1 1
3/31/2016 1 1 0 1 1 0 1 1 0 1
4/30/2016 0 1 1 1 0 1 1 1 0 1
5/31/2016 1 1 1 1 1 1 0 1 0 1
</code></pre>
<p>When a customer first becomes live, they get a 1 for the particular month. Therefore when a particular customer has their first 1, this is the month in which they are "new".</p>
<p>I want to add a column at the end of the dataframe which counts the number of "new" customers. </p>
<p>I think the most efficient method of doing this would be to sum the values from row 0 to row i, and count the number of times the sum equals 1. When this sum is greater than 1, then the customer will have been live for 2 months and is not a new customer in the given month.</p>
<p>I have calculated this in excel using this method but I am not clear on how to go about this in Python.</p>
<p>The resulting dataframe would look like this:</p>
<pre><code>Customer A B C D E F G H I J New_Customers
11/30/2015 1 0 1 0 0 1 1 0 0 0 4
12/31/2015 0 1 0 1 0 1 1 0 0 1 3
1/31/2016 0 0 0 0 0 1 1 0 0 1 0
2/29/2016 1 1 1 1 1 1 0 1 1 1 3
3/31/2016 1 1 0 1 1 0 1 1 0 1 0
4/30/2016 0 1 1 1 0 1 1 1 0 1 0
5/31/2016 1 1 1 1 1 1 0 1 0 1 0
</code></pre>
|
<p>This can be done by defining a custom <code>new</code> function and using <code>DataFrame.expanding</code>. I'm not sure why the result of <code>expanding().apply(new)</code> requires casting from <code>float</code> to <code>int</code>, but hey, it works:</p>
<pre><code>def new(column):
return column[-1] and not any(column[:-1])
result = df.expanding().apply(new).sum(axis=1).astype(int)
print(result)
Out:
11/30/2015 4
12/31/2015 3
1/31/2016 0
2/29/2016 3
3/31/2016 0
4/30/2016 0
5/31/2016 0
dtype: int32
</code></pre>
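<p>An alternative sketch without <code>apply</code>: a customer is new in the month where its cumulative sum first reaches 1, which can be expressed with vectorized operations (assuming <code>df</code> holds only the 0/1 indicator columns):</p>
<pre><code>new_customers = (df.eq(1) & df.cumsum().eq(1)).sum(axis=1)
df['New_Customers'] = new_customers
</code></pre>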
|
python|pandas|time-series
| 1
|
7,158
| 60,394,977
|
Pandas: How to remove non-alphanumeric columns in Series
|
<p>A pandas Series can contain invalid values:</p>
<pre><code>a b c d e f g
1 "" "a3" np.nan "\n" "6" " "
</code></pre>
<pre><code>df = pd.DataFrame([{"a":1, "b":"", "c":"a3", "d":np.nan, "e":"\n", "f":"6", "g":" "}])
row = df.iloc[0]
</code></pre>
<p>I want to produce a clean Series keeping only the columns that contain a <strong>numeric value</strong> or a <strong>non-empty non-space-only alphanumeric string</strong>:</p>
<ul>
<li><code>b</code> should be dropped because it is an empty string;</li>
<li><code>d</code> because <code>np.nan</code>;</li>
<li><code>e</code> and <code>g</code> because space-only strings.</li>
</ul>
<p>The expected result:</p>
<pre><code>a c f
1 "a3" "6"
</code></pre>
<p><strong>How can I filter the columns that contain numeric or valid alphanumeric?</strong></p>
<ul>
<li><code>row.str.isalnum()</code> returns <code>NaN</code> for <code>a</code>, instead of the True I would expect.</li>
<li><code>row.astype(str).str.isalnum()</code> changes <code>d</code>'s <code>np.nan</code> to string <code>"nan"</code> and later considers it a valid string.</li>
<li><code>row.dropna()</code> of course drops only <code>d</code> (<code>np.nan</code>).</li>
</ul>
<p>I don't see so many other possibilities listed at <a href="https://pandas.pydata.org/pandas-docs/stable/reference/series.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/series.html</a></p>
<p>As a workaround I can loop on the items() checking type and content, and create a new Series from the values I want to keep, but this approach is inefficient (and ugly):</p>
<pre><code>for index, value in row.items():
print (index, value, type(value))
# a 1 <class 'numpy.int64'>
# b <class 'str'>
# c a3 <class 'str'>
# d nan <class 'numpy.float64'>
# e
# <class 'str'>
# f 6 <class 'str'>
# g <class 'str'>
</code></pre>
<p><strong>Is there any boolean filter that can help me to single out the good columns?</strong> </p>
|
<p>Convert values to strings and chain another mask by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.notna.html" rel="nofollow noreferrer"><code>Series.notna</code></a> with bitwise <code>AND</code> - <code>&</code>:</p>
<pre><code>row = row[row.astype(str).str.isalnum() & row.notna()]
print (row)
a 1
c a3
f 6
Name: 0, dtype: object
</code></pre>
|
python|pandas|dataframe|series
| 2
|
7,159
| 60,406,738
|
pandas merge dataframes move new column from the end
|
<p>I'm merging two data frames, which works fine, but the new column is placed at the end. I would like it to be the 3rd column (index 2). So far I have this, which works, but I'm wondering if there is a better way.</p>
<pre><code>overlap = overlap.merge(df_comp, how='left')
cols = overlap.columns.tolist()
cols.insert(2, cols.pop(cols.index('cr')))
overlap = overlap.reindex(columns= cols)
</code></pre>
<p>To further explain: the names and number of columns in the final dataframe will change from day to day, so the solution needs to be dynamic. Is there a clean one- or two-line way of doing this?</p>
|
<p>One quick hack is to set the first 2 columns and the last added column as the index, then reset the index, which will place them as the first 3 columns:</p>
<pre><code>import numpy as np
overlap.set_index(overlap.columns[np.r_[0:2,-1]].to_list()).reset_index()
</code></pre>
<p><code>np.r_[0:2, -1]</code> essentially takes the column names of columns 0, 1 and -1 (the last column appended).</p>
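<p>Another common pattern is to pop the merged column and insert it at the desired position; a sketch assuming the merged column is named <code>'cr'</code> as in the question:</p>
<pre><code>overlap = overlap.merge(df_comp, how='left')
cr = overlap.pop('cr')       # remove the column and keep its values
overlap.insert(2, 'cr', cr)  # re-insert it as the third column
</code></pre>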
|
python|pandas|multiple-columns
| 1
|
7,160
| 60,535,238
|
Numpy boolean indexing if number is in list
|
<p>I have the following array:</p>
<pre><code>x = np.array([
[2, 0],
[5, 0],
[1, 0],
[8, 0],
[6, 0]])
</code></pre>
<p>I've learned that you can use boolean operations to change selected values in a numpy array. If I want to change the value of the 2nd column to 1 for the rows where the 1st value is equal to 2, 5 or 8 I can do the following:</p>
<pre><code>x[x[:, 0] == 2, 1] = 1
x[x[:, 0] == 5, 1] = 1
x[x[:, 0] == 8, 1] = 1
</code></pre>
<p>Which changes the output to:</p>
<pre><code>[[2 1]
[5 1]
[1 0]
[8 1]
[6 0]]
</code></pre>
<p>If that were "normal" python code, I know I could do:</p>
<pre><code>if value in [2, 5, 8]: ...
</code></pre>
<p>Instead of:</p>
<pre><code>if value == 2 or value == 5 or value == 8: ...
</code></pre>
<p>Is there a shorthand to do something like this with numpy arrays?</p>
|
<p>You can use numpy's <code>isin</code> method:</p>
<pre><code>x[np.isin(x[:, 0], [2, 5, 8]), 1] = 1
</code></pre>
|
python|arrays|numpy|boolean-operations
| 0
|
7,161
| 72,576,712
|
Remove the intersection between two curves
|
<p>I have a curve (a parabola) from 0 to 1 on both axes, as follows:</p>
<p><a href="https://i.stack.imgur.com/O3td2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O3td2.png" alt="enter image description here" /></a></p>
<p>I generate another curve by moving the original curve along the x-axis and combine both to get the following graph:</p>
<p><a href="https://i.stack.imgur.com/bygFx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bygFx.png" alt="enter image description here" /></a></p>
<p>How can I remove the intersected section to have only the double bottoms pattern like this:</p>
<p><a href="https://i.stack.imgur.com/cpbTy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cpbTy.png" alt="enter image description here" /></a></p>
<p>The code I use for the graph:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
def get_parabol(start=-1, end=1, steps=100, normalized=True):
x = np.linspace(start, end, steps)
y = x**2
if normalized:
x = np.array(x)
x = (x - x.min())/(x.max() - x.min())
y = np.array(y)
y = (y - y.min())/(y.max() - y.min())
return x, y
def curve_after(x, y, x_ratio=1/3, y_ratio=1/2, normalized=False):
x = x*x_ratio + x.max() - x[0]*x_ratio
y = y*y_ratio + y.max() - y.max()*y_ratio
if normalized:
x = np.array(x)
x = (x - x.min())/(x.max() - x.min())
y = np.array(y)
y = (y - y.min())/(y.max() - y.min())
return x, y
def concat_arrays(*arr, axis=0, normalized=True):
arr = np.concatenate([*arr], axis=axis).tolist()
if normalized:
arr = np.array(arr)
arr = (arr - arr.min())/(arr.max() - arr.min())
return arr
x, y = get_parabol()
new_x, new_y = curve_after(x, y, x_ratio=1, y_ratio=1, normalized=False)
new_x = np.add(x, 0.5)
# new_y = np.add(y, 0.2)
xx = concat_arrays(x, new_x, normalized=True)
yy = concat_arrays(y, new_y, normalized=True)
# plt.plot(x, y, '-')
plt.plot(xx, yy, '--')
</code></pre>
<p>I'm doing a research on pattern analysis that requires me to generate patterns with mathematical functions.</p>
<p>Could you show me a way to achieve this? Thank you!</p>
|
<p>First off, I would have two different parabola functions such that:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-1, 1, 100)
y1 = np.add(x, 0.3)**2 # Parabola centered at -0.3
y2 = np.add(x, -0.3)**2 # Parabola centered at 0.3
</code></pre>
<p>You can choose your own offsets for y1 and y2 depending on your needs.</p>
<p>And then it's simply take the min of the two arrays</p>
<pre><code>y_final = np.minimum(y1, y2)
plt.plot(x, y_final, '--')
</code></pre>
|
python|numpy|matplotlib
| 3
|
7,162
| 72,607,575
|
How to create subplots of all column combinations from two dataframes
|
<p>I have made a function which plots input variables against predicted variables.</p>
<pre><code>dummy_data = pd.DataFrame(np.random.uniform(low=65.5,high=140.5,size=(50,4)), columns=list('ABCD'))
dummy_predicted = pd.DataFrame(np.random.uniform(low=15.5,high=17.5,size=(50,4)), columns=list('WXYZ'))
##Plot test input distriubtions
fig = plt.figure(figsize=(15,6))
n_rows = 1
n_cols = 4
counter = 1
for i in dummy_data.keys():
plt.subplot(n_rows, n_cols, counter)
plt.scatter(dummy_data[i], dummy_predicted['Z'])
plt.title(f'{i} vs Z')
plt.xlabel(i)
counter += 1
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/nPKQr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nPKQr.png" alt="enter image description here" /></a></p>
<p>How do I create a 4 x 4 subplot of all combinations of 'ABCD' and 'WXYZ'? I can have any number of <code>dummy_data</code> and <code>dummy_predicted</code> columns so some dynamism would be useful.</p>
|
<ul>
<li>Use <a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow noreferrer"><code>itertools.product</code></a> from the standard library, to create all combinations of column names, <code>combos</code>.</li>
<li>Use the <a href="https://docs.python.org/3/library/functions.html#len" rel="nofollow noreferrer"><code>len</code></a> of each set of columns to determine <code>nrows</code> and <code>ncols</code> for <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplots.html" rel="nofollow noreferrer"><code>plt.subplots</code></a></li>
<li>Flatten the array of <code>axes</code> to easily iterate through a 1D array instead of a 2D array.</li>
<li><a href="https://docs.python.org/3/library/functions.html#zip" rel="nofollow noreferrer"><code>zip</code></a> <code>combos</code> and <code>axes</code> to iterate through, and plot each group with a single loop.</li>
<li>See this <a href="https://stackoverflow.com/a/69228859/7758804">answer</a> in <a href="https://stackoverflow.com/q/31726643/7758804">How to plot in multiple subplots</a>.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>from itertools import product
import matplotlib.pyplot as plt
import numpy as np
# sample data
np.random.seed(2022)
dd = pd.DataFrame(np.random.uniform(low=65.5, high=140.5, size=(50, 4)), columns=list('ABCD'))
dp = pd.DataFrame(np.random.uniform(low=15.5, high=17.5, size=(50, 4)), columns=list('WXYZ'))
# create combinations of columns
combos = product(dd.columns, dp.columns)
# create subplots
fig, axes = plt.subplots(nrows=len(dd.columns), ncols=len(dp.columns), figsize=(15, 6))
# flatten axes into a 1d array
axes = axes.flat
# iterate and plot
for (x, y), ax in zip(combos, axes):
ax.scatter(dd[x], dp[y])
ax.set(title=f'{x} vs. {y}', xlabel=x, ylabel=y)
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/8BTRX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8BTRX.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib|data-visualization|subplot
| 2
|
7,163
| 72,493,319
|
How can I upload this DataFrame into an excel file?
|
<p>I am trying to write this DataFrame to an Excel file, but it keeps returning the error "could not broadcast input array from shape (50,56) into shape (50,)".
I am not sure how to change the shape, though.</p>
<p>Here is my code:</p>
<pre><code>from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd
URL = "https://www.nba.com/stats/players/traditional/?sort=PTS&dir=-1&Season=2021-22&SeasonType=Playoffs"
driver = webdriver.Chrome()
driver.maximize_window()
driver.get(URL)
soup = BeautifulSoup(driver.page_source, 'lxml')
tables = soup.find_all('table')
response_data = pd.read_html(str(tables))
driver.implicitly_wait(5)
driver.quit()
new_storage = pd.DataFrame(response_data)
writer = pd.ExcelWriter('2021 Data.xlsx')
new_storage.to_excel(writer, '2021')
writer.save()
read_file = pd.read_excel ("2021 Data.xlsx")
</code></pre>
|
<p>What you did here:</p>
<pre><code>writer = pd.ExcelWriter('2021 Data.xlsx')
new_storage.to_excel(writer, '2021')
</code></pre>
<p>is unclear; it's essentially like writing:</p>
<pre><code>new_storage.to_excel(pd.ExcelWriter('2021 Data.xlsx'), '2021')
</code></pre>
<p>which is harder to read and more than you need for writing a single sheet.</p>
<p><strong>What you should do:</strong></p>
<pre><code>new_storage = pd.DataFrame(response_data)
#a simple save:
new_storage.to_excel("output.xlsx")
#to read:
pd.read_excel('output.xlsx', index_col=0)
</code></pre>
|
python|excel|pandas|dataframe|beautifulsoup
| 0
|
7,164
| 59,870,386
|
Understanding code from official tensorflow page
|
<p>I am confused about code on <a href="https://www.tensorflow.org/tutorials/structured_data/imbalanced_data" rel="nofollow noreferrer">this page</a>.</p>
<p>question1) </p>
<p>The code block below shows output from that page. Before this step I don't see any code that trains the model using the <code>model.fit</code> function. So what is the code below? Do they show predictions using random weights?</p>
<pre><code>model.predict(train_features[:10])
array([[0.6296253 ],
[0.82509124],
[0.75135857],
[0.73724824],
[0.82174015],
[0.33519754],
[0.6719973 ],
[0.30910844],
[0.6378555 ],
[0.8381703 ]], dtype=float32)
model = make_model(output_bias = initial_bias)
model.predict(train_features[:10])
array([[0.00124893],
[0.00185736],
[0.00164955],
[0.00123761],
[0.00137692],
[0.00182851],
[0.00170887],
[0.00239349],
[0.0024704 ],
[0.00517672]], dtype=float32)
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
Loss: 0.0157
</code></pre>
<p>question2) </p>
<p>Continuing on, the code says the following. What are <code>initial_weights</code>? Are they random values?</p>
<pre><code>initial_weights = os.path.join(tempfile.mkdtemp(),'initial_weights')
model.save_weights(initial_weights)
</code></pre>
<p>question3) </p>
<p>Then they say that </p>
<pre><code>Before moving on, confirm quick that the careful bias initialization actually helped.Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
</code></pre>
<p>But I am not sure how they are assigning the initial bias.
I understand we assign a bias of 0 for the <code>zero_bias_history</code> run. But how do we assign the bias for <code>careful_bias_history</code>? Isn't it supposed to have a bias equal to <code>initial_bias</code>? How does <code>careful_bias_history</code> get the bias value? I felt that <code>careful_bias_history</code> should be created from a model that was built using <code>model = make_model(output_bias = initial_bias)</code>.</p>
<pre><code>### Confirm that the bias fix helps
Before moving on, confirm quick that the careful bias initialization actually helped.
Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
print (type(model))
#model.load_weights()
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
</code></pre>
|
<p>Answer 1: Yes, these predictions are from the model after compiling it but before training it.</p>
<p>Answer 2: Yes, they are random weights, for example, in Dense layer they are initialised using <code>glorot_uniform</code> . <a href="https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense#__init__" rel="nofollow noreferrer">tf.keras.layers.Dense</a></p>
<p>Answer 3: The model we saved above had a bias initialised using <code>np.log([pos/neg])</code>, as mentioned <a href="https://www.tensorflow.org/tutorials/structured_data/imbalanced_data#optional_set_the_correct_initial_bias" rel="nofollow noreferrer">here</a>.</p>
<p>So, in <code>zero_bias_history</code>, they initialised bias with zeroes using <code>model.layers[-1].bias.assign([0.0])</code>, and in <code>careful_bias_history</code> they just loaded the saved model which already had the initialised bias. </p>
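<p>A minimal sketch of the careful initialization the tutorial refers to, assuming <code>pos</code> and <code>neg</code> are the counts of positive and negative samples and <code>make_model</code> is the tutorial's model factory:</p>
<pre><code>import numpy as np

initial_bias = np.log([pos / neg])            # output-layer bias matching the class imbalance
model = make_model(output_bias=initial_bias)
model.save_weights(initial_weights)           # careful_bias_history later starts from these weights
</code></pre>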
|
python|tensorflow|keras|classification|tensorflow2.0
| 1
|
7,165
| 61,622,273
|
How to move paired data from one data frame column to another dataframe while keeping the same order
|
<p>I have a csv file with a list of NBA players and their average fantasy draft positions. I'm trying to add this 'ADP' value to a data frame that has all of their stats for the season. However, the players are not in the same order in both files so I must iterate through them and compare the list of players, only adding the ADP value when the player names match.
This is the loop I'm currently using:</p>
<pre><code>data['ADP'] = ""
for row in data:
for rdw in adp:
if row[0] == rdw[1]:
data["ADP"]=adp["ADP"]
else:
pass
</code></pre>
<p>This adds the ADP value from one data frame to the other, but it adds the whole column on the first match instead of adding the ADP values one at a time. Any help would be appreciated; I'm not locked into using this style of loop, I just want it to work.</p>
|
<pre><code>data = data.join(adp, on=SOME_COLUMN)
</code></pre>
<p>You might need to do a bit more than this depending on your data frames; I don't know what your column names are. </p>
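<p>A more explicit sketch using <code>merge</code>, assuming hypothetical column names <code>'Player'</code> for the player name in both frames and <code>'ADP'</code> for the draft position:</p>
<pre><code># A left join keeps every row of the stats frame and attaches the matching ADP value
data = data.merge(adp[['Player', 'ADP']], on='Player', how='left')
</code></pre>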
|
python-3.x|pandas
| 0
|
7,166
| 61,899,392
|
How to check if value from one column is equal to value in another columns data-frame
|
<p>I have two separate data frames, <code>df</code> and <code>xls</code>. <code>xls</code> is a data frame that contains unique IDs, and I would like to see how many times each of them occurs in my <code>df</code> data frame (~650,000 rows), then create an occurrence column that keeps track of how many times the unique IDs from the <code>xls</code> dataframe appear in the <code>df</code> dataframe.</p>
<pre><code>xls = {'Unique ID': ['a', 'b', 'c', 'd', 'e']}
df = {'Contingency': ['a', 'b', 'c', 'd', 'a', 'b', 'c', 'e', 'd', 'b']}
result_df = {'Contingency': ['a', 'b', 'a', 'b', 'a', 'b', 'a', 'b', 'd', 'b'], 'Occurences': [4, 5, 0, 1, 0]}
</code></pre>
<p>Ultimately, I would just like to keep a track of which Unique ID is appearing the most in DF given its unique ID.</p>
|
<p><code>df.groupby('Contingency').count()</code> should produce the Series you are looking for, without the need for the xls dataframe containing the unique IDs.</p>
<p>Edit:</p>
<p>If your 'df' dataframe only has the 'Contingency' column, you'll need a second column to apply the count() to, like this:</p>
<pre><code>df = pd.DataFrame({'Contingency': ['a', 'b', 'c', 'd', 'a', 'b', 'c', 'e', 'd', 'b']})
df['Occurances'] = 1
result = df.groupby('Contingency').count()
</code></pre>
<p>Otherwise you can just do:</p>
<pre><code>result = pd.DataFrame(df.Contingency.value_counts())
</code></pre>
<p>For the same result.</p>
<p>Then you can sort the values : <code>result.sort_values(by = 'Contingency', ascending=False)</code></p>
|
python|pandas|dataframe
| 5
|
7,167
| 61,826,649
|
Train Test Split sklearn based on group variable
|
<p>My X is as follows:
EDIT1:</p>
<pre><code>Unique ID. Exp start date. Value. Status.
001 01/01/2020. 4000. Closed
001 12/01/2019 4000. Archived
002 01/01/2020. 5000. Closed
002 12/01/2019 5000. Archived
</code></pre>
<p>I want to make sure that none of the unique IDs that were in training are included in testing. I am using sklearn test train split. Is this possible?</p>
|
<p>I believe you need <code>GroupShuffleSplit</code> (<a href="https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GroupShuffleSplit.html" rel="nofollow noreferrer">documentation here</a>).</p>
<pre><code>import numpy as np
from sklearn.model_selection import GroupShuffleSplit
X = np.ones(shape=(8, 2))
y = np.ones(shape=(8, 1))
groups = np.array([1, 1, 2, 2, 2, 3, 3, 3])
print(groups.shape)
gss = GroupShuffleSplit(n_splits=2, train_size=.7, random_state=42)
for train_idx, test_idx in gss.split(X, y, groups):
print("TRAIN:", train_idx, "TEST:", test_idx)
TRAIN: [2 3 4 5 6 7] TEST: [0 1]
TRAIN: [0 1 5 6 7] TEST: [2 3 4]
</code></pre>
<p>It can be seen from above that train/test indices are created based on the <code>groups</code> variable.</p>
<p>In your case, <code>Unique ID.</code> should be used as groups.</p>
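<p>A short sketch of how the generated indices can then be used to slice the original frame (<code>df</code> is assumed to be your full dataset):</p>
<pre><code>train_df = df.iloc[train_idx]
test_df = df.iloc[test_idx]
</code></pre>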
|
python|scikit-learn|sklearn-pandas|train-test-split
| 2
|
7,168
| 54,879,395
|
Check if a value exists in pandas dataframe
|
<p>I have a pandas dataframe which consists of 3000 latitude longitude values. I want to check if a lat-long exists in the dataframe or not.</p>
<p>The data frame looks like the following:</p>
<pre><code>lat long
31.76 77.84
31.77 77.84
31.78 77.84
32.76 77.85
</code></pre>
<p>Now, I want to check if (31.76, 77.84) exists or not in the above dataframe. If yes, then the index also.</p>
|
<p>You are working with <code>float</code>s, so you need <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.isclose.html" rel="nofollow noreferrer"><code>numpy.isclose</code></a> to check both columns; chain the conditions with <code>&</code> for bitwise <code>AND</code> and test with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.any.html" rel="nofollow noreferrer"><code>any</code></a> for at least one <code>True</code> in the boolean mask:</p>
<pre><code>tup = (31.76, 77.84)
lat, long = tup
a = (np.isclose(df['lat'], lat) & np.isclose(df['long'], long)).any()
print (a)
True
</code></pre>
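<p>To also get the index of the matching row(s), as asked in the question, a sketch reusing the same mask:</p>
<pre><code>mask = np.isclose(df['lat'], lat) & np.isclose(df['long'], long)
matching_index = df.index[mask]
print(matching_index.tolist())   # e.g. [0] for (31.76, 77.84)
</code></pre>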
|
python|pandas|dataframe
| 2
|
7,169
| 55,065,431
|
Pandas row-wise aggregation with multi-index
|
<p>I have a pandas dataframe where there's three levels of row indexing. The last level is a datetime index. There are nan values and I am trying to fill them with the average of each row at the datetime level. How can I go about doing this?</p>
<pre><code>data_df
Level 0 | Level 1 | Level 2 |
A 123 2019-01-28 17:00:00 | 3 | 1 | nan
2019-01-28 18:00:00 | 2 | nan | 1
2019-01-28 19:00:00 | nan | nan | 5
234 2019-01-28 05:00:00 | 1 | 1 | 3
2019-01-28 06:00:00 | nan | nan | nan
</code></pre>
<p>Some rows may all be nan values. In this case I want to fill the row with 0's. Some rows may have all values filled in so imputing with average isn't needed.</p>
<p>I want this the following result:</p>
<pre><code>Level 0 | Level 1 | Level 2 |
A 123 2019-01-28 17:00:00 | 3 | 1 | 2
2019-01-28 18:00:00 | 2 | 1.5 | 1
2019-01-28 19:00:00 | 5 | 5 | 5
234 2019-01-28 05:00:00 | 1 | 1 | 3
2019-01-28 06:00:00 | 0 | 0 | 0
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mask.html" rel="nofollow noreferrer"><code>DataFrame.mask</code></a> with the per-row <code>mean</code>, and finally convert the remaining all-<code>NaN</code> rows with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>DataFrame.fillna</code></a>:</p>
<pre><code>df = df.mask(df.isna(), df.mean(axis=1), axis=0).fillna(0)
print (df)
a b c
Level 0 Level 1 Level 2
A 123 2019-01-28 17:00:00 3.0 1.0 2.0
2019-01-28 18:00:00 2.0 1.5 1.0
2019-01-28 19:00:00 5.0 5.0 5.0
234 2019-01-28 05:00:00 1.0 1.0 3.0
2019-01-28 06:00:00 0.0 0.0 0.0
</code></pre>
<p>Another solution is to use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>DataFrame.fillna</code></a> for the replacement, but because <code>df.fillna(df.mean(axis=1), axis=1)</code> is not implemented, a double transpose is necessary:</p>
<pre><code>df = df.T.fillna(df.mean(axis=1)).fillna(0).T
</code></pre>
|
python|pandas
| 0
|
7,170
| 54,777,895
|
tensorflow scaler.inverse_transform ValueError: operands could not be broadcast together with shapes (342,22) (23,) (342,22)
|
<p>Looking for some help here... so stuck.. Below is my code and the error I'm getting. Thanks for all your help.</p>
<pre><code>def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
"""
Frame a time series as a supervised learning dataset.
Arguments:
data: Sequence of observations as a list or NumPy array.
n_in: Number of lag observations as input (X).
n_out: Number of observations as output (y).
dropnan: Boolean whether or not to drop rows with NaN values.
Returns:
Pandas DataFrame of series framed for supervised learning.
"""
n_vars = 1 if type(data) is list else data.shape[1]
df = DataFrame(data)
cols, names = list(), list()
# input sequence (t-n, ... t-1)
for i in range(n_in, 0, -1):
cols.append(df.shift(i))
names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
# forecast sequence (t, t+1, ... t+n)
for i in range(0, n_out):
cols.append(df.shift(-i))
if i == 0:
names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
else:
names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
# put it all together
agg = concat(cols, axis=1)
agg.columns = names
# drop rows with NaN values
if dropnan:
agg.dropna(inplace=True)
return agg
# load dataset
dataset = pd.read_csv('newdf2.csv', header=0, index_col=0)
dataset = dataset.drop('Monthday.Key', axis = 1)
dataset.head()
values = dataset.values
# integer encode direction
encoder = LabelEncoder()
values[:,4] = encoder.fit_transform(values[:,4])
# ensure all data is float
values = values.astype('float32')
# normalize features
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
# frame as supervised learning
reframed = series_to_supervised(scaled, 1, 1)
# drop columns we don't want to predict
reframed.drop(reframed.columns[[2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,20,21,22,23,24]], axis=1, inplace=True)
print(reframed.head())
# split into train and test sets
values = reframed.values
n_train_hours = round(len(dataset) *.7)
train = values[:n_train_hours, :]
test = values[n_train_hours:, :]
# split into input and outputs
train_X, train_y = train[:, :-1], train[:, -1]
test_X, test_y = test[:, :-1], test[:, -1]
# reshape input to be 3D [samples, timesteps, features]
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
#(799, 1, 22) (799,) (342, 1, 22) (342,)
# make a prediction
yhat = model.predict(test_X)
test_X = test_X.reshape((test_X.shape[0], test_X.shape[2]))
inv_yhat = concatenate((yhat, test_X[:, 1:]), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
</code></pre>
<p>This is the error I'm getting:</p>
<blockquote>
<p>ValueError Traceback (most recent call
last) in
----> 1 inv_yhat = scaler.inverse_transform(inv_yhat)</p>
<p>~\Anaconda3\lib\site-packages\sklearn\preprocessing\data.py in
inverse_transform(self, X)
383 X = check_array(X, copy=self.copy, dtype=FLOAT_DTYPES)
384
--> 385 X -= self.min_
386 X /= self.scale_
387 return X</p>
<p>ValueError: operands could not be broadcast together with shapes
(342,22) (23,) (342,22)</p>
</blockquote>
|
<p>Though this may sound weird, it did help me fix the error.</p>
<p>If you are using Excel to edit the CSV training data file and you delete a column in Excel,</p>
<p>you can end up with a blank ",," value in your CSV data, which is what caused the issue for me. I hope this helps.</p>
<p>Removing it, or making sure you do not edit the CSV data file manually, fixed the issue for me.</p>
|
python|tensorflow|keras
| 0
|
7,171
| 49,471,437
|
Custom loss function in tensorflow
|
<p>I have a seq2seq model where my inputs are short sentences like</p>
<pre><code>x = "The XYZ pub near Cafe ABC has a 5 star rating. Prices start at £30."
</code></pre>
<p>and my outputs are semantic info extracted from the input sentence like:</p>
<pre><code>y_true = name[XYZ], type[pub], price[moderate], rating[5], close_to[Cafe ABC]
</code></pre>
<p>The problem is that, although in many cases my <code>y_true</code> contains the complete semantic info, in certain cases it has missing info, like:</p>
<pre><code>y_true = name[XYZ], type[pub]
</code></pre>
<p>What I want to do is, even if the model predicts:</p>
<pre><code>y_predicted = name[XYZ], type[pub], price[moderate], rating[5], close_to[Cafe ABC]
</code></pre>
<p>and if </p>
<pre><code>y_true = name[XYZ], type[pub]
</code></pre>
<p>the loss function should also look back at the input and check whether the predicted semantic info that is not in the target is present in the input; if it is, the cost should be zero.</p>
<p>The comparison of the y_predicted and input x will be a regex matching. Is it possible to integrate a complex process like this in an loss function and particularly in tensorflow?</p>
|
<p>Absolutely, and in fact it's quite simple to do. For a single sample you are computing a vector of 5 loss values, something like <code>losses = [1.2, 0.3, 1.5, 3.3, 0.6]</code>. Note that this result is before you perform any <code>tf.reduce_mean</code> functions on your loss.</p>
<p>Now build yourself a function in tensorflow that produces a result of 1 or 0 for each of these loss values to indicate whether you want to zero it out (0) or keep it (1). You now have <code>mask = [1 1 0 0 0]</code> based on the example where you want to keep name and type and zero out the loss for the other three.</p>
<p>Now you multiply <code>final_loss = losses * mask</code> to get your final loss values, apply <code>tf.reduce_mean</code> now and pass that into your optimizer. The key point to note is that this does what you want in gradient descent because <code>1x = dx</code> and <code>0x = 0</code> when you take the derivatives, so you end up zeroing out the gradient in the cases that shouldn't apply. This is how an RNN with variable sequence lengths works when it's passed in dummy padded values, it zeros out the gradient using a mask.</p>
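<p>A minimal sketch of the masking idea in TensorFlow, with hypothetical per-slot <code>losses</code> and a <code>mask</code> built from whatever matching check you decide on:</p>
<pre><code>import tensorflow as tf

losses = tf.constant([1.2, 0.3, 1.5, 3.3, 0.6])  # per-slot loss values for one sample
mask = tf.constant([1.0, 1.0, 0.0, 0.0, 0.0])    # 1 = keep the loss, 0 = zero it out

final_loss = tf.reduce_mean(losses * mask)       # masked slots contribute no gradient
</code></pre>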
|
python|tensorflow|machine-learning
| 0
|
7,172
| 49,400,500
|
passing 1 or 2 d numpy array to c throw cython
|
<p>I am writing an extension to my python code in c and cython, by following <a href="https://github.com/cython/cython/wiki/tutorials-NumpyPointerToC" rel="nofollow noreferrer">this</a> guide.</p>
<p>my c function signature is </p>
<pre><code>void c_disloc(double *pEOutput, double *pNOutput, double *pZOutput, double *pModel, double *pECoords, double *pNCoords, double nu, int NumStat, int NumDisl)
</code></pre>
<p>and my cython function is </p>
<pre><code>cdef extern void c_disloc(double *pEOutput, double *pNOutput, double *pZOutput, double *pModel, double *pECoords, double *pNCoords, double nu, int NumStat, int NumDisl)
@cython.boundscheck(False)
@cython.wraparound(False)
def disloc(np.ndarray[double, ndim=2, mode="c"] pEOutput not None,
np.ndarray[double, ndim=2, mode="c"] pNOutput not None,
np.ndarray[double, ndim=2, mode="c"] pZOutput not None,
np.ndarray[double, ndim=1, mode="c"] pModel not None,
np.ndarray[double, ndim=2, mode="c"] pECoords not None,
np.ndarray[double, ndim=2, mode="c"] pNCoords not None,
double nu,int NumStat, int NumDisl ):
c_disloc(&pEOutput[0,0], &pNOutput[0,0], &pZOutput[0,0], &pModel[0], &pECoords[0,0], &pNCoords[0,0], nu, NumStat, NumDisl)
return None
</code></pre>
<p>Now my C function behaves the same whether the arrays it gets are 1d or 2d, but I didn't succeed in making the Cython function able to accept either 1d or 2d numpy arrays.
Of course, I could write two Cython functions, one for the 1d case and one for the 2d case, but it would be cleaner to do it with one function.
Does someone know how to do it?</p>
|
<p>I'd accept an untyped argument, check that it's a C contiguous array and then use <code>np.ravel</code> to get a flat array (this returns a view, not a copy, when passed a C contiguous array). It's easy to create that as a cdef function:</p>
<pre><code>cdef double* get_array_pointer(arr) except NULL:
assert(arr.flags.c_contiguous) # if this isn't true, ravel will make a copy
cdef double[::1] mview = arr.ravel()
return &mview[0]
</code></pre>
<p>Then you'd do</p>
<pre><code>def disloc(pEOutput,
pNOutput,
# etc...
double nu,int NumStat, int NumDisl ):
c_disloc(get_array_pointer(pEOutput), get_array_pointer(pNOutput),
# etc
nu, NumStat, NumDisl)
</code></pre>
<hr>
<p>I've removed the </p>
<pre><code>@cython.boundscheck(False)
@cython.wraparound(False)
</code></pre>
<p>since it's obvious they will gain you close to nothing. Using them without thinking about whether they do anything seems like cargo cult programming to me. </p>
|
numpy|cython|python-c-api
| 3
|
7,173
| 49,712,419
|
Reading data without headers in Python
|
<p>I would like to know how to read a .txt file with Python, so that I can plot the data.</p>
<p>The file is in this form:</p>
<pre><code>1. " Experiment 1 1 1
2. Date: 04/04/18
3. data A B C
4. 1 12.5 0 3
5. 2 13 1 4.6
6. 3 14 10 5
7. . . . .
. . . . "
</code></pre>
<p>Thanks</p>
|
<p>Try to import with pandas</p>
<pre><code>import pandas as pd
df = pd.read_csv('yourfile.txt', sep=' ', skiprows=3, names=['col1', 'col2', 'col3'])
</code></pre>
<p>If you do not want to add column names on import </p>
<pre><code>df = pd.read_csv('yourfile.txt', sep=' ', skiprows=3, header=None)
</code></pre>
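<p>Since the goal is to plot the data, a minimal follow-up might look like this (the column names are placeholders for whatever you passed to <code>names</code>; if the columns are separated by variable amounts of whitespace, <code>sep='\s+'</code> may be more robust than a single space):</p>
<pre><code>import matplotlib.pyplot as plt

df.plot(x='col1', y=['col2', 'col3'])  # line plot of col2 and col3 against col1
plt.show()
</code></pre>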
|
python|pandas|numpy|header
| 0
|
7,174
| 73,415,328
|
Why KeyError when the header is in the df - renamed header but same KeyError
|
<p>While working on a summary of data from train passages, I would like to .sum() the monthly df from a sensor. The target is the df's "Total weight (T)" column.</p>
<pre><code> Date Total weight (T) Axle passages ... Average speed (km/h)
0 2022-07-01 95652.12 6048 ... 53.31
1 2022-07-02 111260.02 6558 ... 53.16
2 2022-07-03 93814.35 5774 ... 54.5
3 2022-07-04 121471.96 7314 ... 53.26
...shortened...
</code></pre>
<p>Trying to go for <code>month_tot_weight = df.loc['Total weight (T)'].sum()</code> but ended with a KeyError because 'Total weight (T)' is not a key to be found in the df.<br />
Tried this renaming solution:</p>
<pre><code>i = 0
for col in df.columns:
print(config.summery_passage_columns_alias[i])
print(col, "-", len(col))
df. rename(columns = {col:str(config.summery_passage_columns_alias[i])}, inplace = True)
i += 1
</code></pre>
<p>My config has a list containing aliases for all column headers, and "Total weight (T)" is substituted with "Total".</p>
<pre><code>total = df.sum(axis=0)
print("Total\n", total)
print(df.head())
</code></pre>
<ul>
<li>but same problem occurs:</li>
</ul>
<pre><code>Total
Date 2022-07-012022-07-022022-07-032022-07-042022-0...
Total 3105787.5
Axle_passages 186121
Train_passages 1034
Highest_speed_(km/h) 2006.37
...and so on
</code></pre>
<p>It seems like in this .sum of all columns each column can deliver a sum, but I get a KeyError when trying to get the sum of one column...</p>
<p>Any suggestions :)</p>
|
<p>See <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.loc.html</a>.</p>
<p>You are using only one argument, thus selecting dataframe index. You want to select a column.</p>
<p>Here are possible workarounds:</p>
<pre><code>df['Total weight (T)'].sum()
</code></pre>
<p>here you select only 'Total weight (T)' column as a pd.Series, or</p>
<pre><code>df.loc[:, 'Total weight (T)'].sum()
</code></pre>
<p>here you explicitly select all rows in 'Total weight (T)' column.</p>
<p>For more information, Pandas indexing is described here: <a href="https://pandas.pydata.org/docs/user_guide/indexing.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/user_guide/indexing.html</a></p>
|
python|pandas|dataframe
| 0
|
7,175
| 67,513,818
|
how to read through multiple .tsv files located in different sub-directories
|
<p>I have multiple .tsv files, each located in a sub-directory with a different name.</p>
<p>I'm trying to read each of the .tsv files and perform this command:</p>
<pre><code>df_1 = pd.read_csv("C:/Car/0NN/car.tsv", delimiter='\t', encoding="utf-8-sig")
for node1 in df_1['#node1']:
for node2 in df_1['node2']:
if node1!=node2:
df_temp = df_1.iloc[0:1,1:2]
</code></pre>
<p>Is there a way to modify the first line so I can loop through all files named "car.tsv"? The folder "0NN" changes names, but the .tsv file itself has the same name, and the main "Car" folder has the same name. Thank you</p>
<p>Ex:</p>
<p>C:/Car/0NN/car.tsv</p>
<p>C:/Car/1AP/car.tsv</p>
|
<p>You can use python's inbuilt <code>glob</code> module to recursively read all the files inside the directory. Assuming every file is named <code>car.tsv</code> and is inside any subdirectory of <code>C:/Car/</code></p>
<pre><code>all_car_tsv_files = glob.glob("C:/Car/**/car.tsv", recursive=True)
</code></pre>
<p><code>**/</code> : This is glob star which matches everything in its path, combined with <code>/</code> matches any subdirectory.</p>
<p><code>recursive = True</code> : Makes sure to iterate over sub-directories inside sub-directories. For example, <code>C:/Car/1AP/AP/car.tsv</code> or <code>C:/Car/1AP/AP/BP/CP/car.tsv</code></p>
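<p>A sketch of the full loop, applying the same per-file logic from the question to every match (assuming each file has the <code>#node1</code>/<code>node2</code> columns):</p>
<pre><code>import glob
import pandas as pd

for path in glob.glob("C:/Car/**/car.tsv", recursive=True):
    df_1 = pd.read_csv(path, delimiter='\t', encoding="utf-8-sig")
    for node1 in df_1['#node1']:
        for node2 in df_1['node2']:
            if node1 != node2:
                df_temp = df_1.iloc[0:1, 1:2]
</code></pre>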
|
python|pandas|dataframe|glob
| 0
|
7,176
| 67,249,427
|
What do I do if ValueError: x and y must have same first dimension, but have shapes (32,) and (31, 5)?
|
<pre class="lang-py prettyprint-override"><code>csv_data = pd.read_csv("master.csv")
df = pd.DataFrame(csv_data,
columns=['year', 'suicides/100k pop', 'age', 'country', 'sex'])
us_rates = df['country'].values == 'United States'
df_us_rates = df.loc[us_rates]
teen_rates = df_us_rates['age'].values == '15-24 years'
df_teen_rates = df_us_rates.loc[teen_rates]
boy_rates = df_teen_rates['sex'].values == 'male'
df_boy_rates = df_teen_rates.loc[boy_rates]
girl_rates = df_teen_rates['sex'].values == 'female'
df_girls_rates = df_teen_rates.loc[girl_rates]
years = csv_data['year']
no_dups = []
print(df_teen_rates)
for year in years:
if year not in no_dups:
no_dups.append(year)
plt.plot(no_dups, df_boy_rates)
plt.show()
</code></pre>
|
<p>You are trying to plot:</p>
<ul>
<li><code>no_dups</code>, which is a 1D list of 32 values</li>
<li>against <code>df_boy_rates</code> which is a 2D dataframe with 31 rows and 5 columns</li>
</ul>
<p>Assuming that the column you're interested in is 'suicides/100k pop', modify your code like this:</p>
<pre><code>df_boy_rates = df_teen_rates.loc[boy_rates, 'suicides/100k pop']
</code></pre>
<p>Also, you have to check why there is one more element in <code>no_dups</code></p>
|
python-3.x|pandas|dataframe|csv
| 0
|
7,177
| 60,083,111
|
Multiclass Dataset Imbalance
|
<pre><code>from tensorflow.keras.preprocessing.image import ImageDataGenerator
import tensorflow as tf
train_path = 'Skin/Train'
test_path = 'Skin/Test'
train_gen = ImageDataGenerator(rescale=1./255)
train_generator = train_gen.flow_from_directory(train_path,target_size=
(300,300),batch_size=30,class_mode='categorical')
model = tf.keras.models.Sequential([
# Note the input shape is the desired size of the image 300x300 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(600, 450, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fifth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(9, activation='softmax')
])
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['acc'])
history = model.fit_generator(
train_generator,
steps_per_epoch=8,
epochs=15,
verbose=2, class_weight = ? )
</code></pre>
<p>I have an issue achieving accuracy. I am training on a 9-class dataset, in which classes 1, 4, and 5 have only 100, 96, and 90 images, while the remaining classes have above 500 images. Because of this I am unable to achieve higher accuracy, as the weights are skewed toward the classes with more images. I want all classes to be considered equal during training, i.e. 500. It would be appreciated if I could upsample the classes via tensorflow or any keras function, rather than manually upsampling or downsampling the images in folders.</p>
|
<p>You can, instead, use a <code>class_weight</code> argument in your fit method.
For upsampling, you need a lot of manual work, that's inevitable.</p>
<p>Assuming you have an output with shape <code>(anything, 9)</code>, and you know the totals of each class:</p>
<pre><code>totals = np.array([500,100,500,500,96,90,.......])
totalMean = totals.mean()
weights = {i: totalMean / count for i, count in enumerate(totals)}
model.fit(....., class_weight = weights)
</code></pre>
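<p>If you don't want to hard-code the totals, a sketch of deriving them from the generator itself — <code>flow_from_directory</code> exposes the class index of every sample via <code>.classes</code>:</p>
<pre><code>import numpy as np

counts = np.bincount(train_generator.classes)   # samples per class, ordered by class index
weights = {i: counts.mean() / c for i, c in enumerate(counts)}
model.fit(train_generator, epochs=15, class_weight=weights)
</code></pre>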
|
keras|tensorflow2.0|tensorflow-datasets
| 2
|
7,178
| 59,969,782
|
Pandas and seaborn plot unexpected time frame on x axis
|
<p>I'm attempting to create a simple scatter plot using pandas and seaborn. This code : </p>
<pre><code>import pandas as pd
import seaborn as sns
lst = ['2019-01-31', '2019-02-25', '2019-03-31']
lst2 = [11, 22, 33]
df = pd.DataFrame(list(zip(lst, lst2)),
columns =['date', 'count'])
df['date'] = pd.to_datetime(df['date'])
sns.scatterplot(x='date', y='count', data=df)
</code></pre>
<p>renders plot : </p>
<p><a href="https://i.stack.imgur.com/6NTmN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6NTmN.png" alt="enter image description here"></a></p>
<p>How can I remove the dates that do not have a corresponding count value ? The dataframe just contains year 2019 values but the plot renders back to 2000. </p>
|
<p>You can use <code>set_xlim</code> to manually specify the range of the x-axis, e.g.:</p>
<pre><code>f = sns.scatterplot(x='date', y='count', data=df)
f.set_xlim('2019-01-01', '2019-12-31')
</code></pre>
|
python|pandas
| 1
|
7,179
| 60,170,082
|
How to add new rows of values for a distinct column value in pandas
|
<p>I have data frame like </p>
<pre><code> ORDER STATUS DATE
23412 200 7-2-2020
23412 300 8-2-2020
23412 400 10-2-2020
91234 300 8-2-2020
91234 400 9-2-2020
671234 200 10-3-2020
</code></pre>
<p>I want to add a static row for each distinct <code>order</code> with status = 600 and <code>date = 31-12-9999</code>.</p>
<p><strong>Expected output</strong></p>
<pre><code>ORDER STATUS DATE
23412 200 7-2-2020
23412 300 8-2-2020
23412 400 10-2-2020
23412 600 31-12-9999
91234 300 8-2-2020
91234 400 9-2-2020
91234 600 31-12-9999
671234 200 10-3-2020
671234 600 31-12-9999
</code></pre>
<p>How can this be done in pandas ?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>DataFrame.drop_duplicates</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>DataFrame.assign</code></a> for new <code>DataFrame</code>, add to original by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a>, sort index values by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_index.html" rel="nofollow noreferrer"><code>DataFrame.sort_index</code></a> with only stable algo <code>mergesort</code> and last convert index to default <code>RangeIndex</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>DataFrame.reset_index</code></a> with <code>drop=True</code>:</p>
<pre><code>df1 = df.drop_duplicates('ORDER', keep='last').assign(STATUS=600, DATE='31-12-9999')
df = pd.concat([df, df1]).sort_index(kind='mergesort').reset_index(drop=True)
print (df)
ORDER STATUS DATE
0 23412 200 7-2-2020
1 23412 300 8-2-2020
2 23412 400 10-2-2020
3 23412 600 31-12-9999
4 91234 300 8-2-2020
5 91234 400 9-2-2020
6 91234 600 31-12-9999
7 671234 200 10-3-2020
8 671234 600 31-12-9999
</code></pre>
<p>There are more solutions, each different - @Quang Hoang's sorts the data (maybe a problem, maybe not), while @sammywemmy's and my solution do not sort the data. Also <code>groupby</code> is obviously slow, so if performance is important it is better to avoid it (if possible):</p>
<pre><code>#some sample data, 100krows, 10k groups
np.random.seed(123)
N = 100000
L = ['7-2-2020', '8-2-2020', '10-2-2020', '8-2-2020', '9-2-2020', '10-3-2020']
df = pd.DataFrame({'ORDER': np.random.randint(10000, size=N),
'STATUS': np.random.randint(500, size=N),
'DATE':np.random.choice(L, N)}).sort_values('ORDER').reset_index(drop=True)
print (df)
In [391]: %timeit pd.concat([df, pd.DataFrame({'ORDER':df.ORDER.unique(), 'STATUS':600,'DATE':'31-12-9999'})],ignore_index=True).sort_values(['ORDER','STATUS'])
47.9 ms ± 1.27 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [392]: %timeit pd.concat([df, df.drop_duplicates('ORDER', keep='last').assign(STATUS=600, DATE='31-12-9999')]).sort_index(kind='mergesort').reset_index(drop=True)
34.1 ms ± 543 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [393]: %timeit pd.concat([group.append({'ORDER':name,'STATUS':600, 'DATE':'31-12-9999'}, ignore_index=True) for name,group in df.groupby('ORDER')],ignore_index=True )
24 s ± 455 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
<hr>
<pre><code>#some sample data, 100krows, 100 groups
np.random.seed(123)
N = 100000
L = ['7-2-2020', '8-2-2020', '10-2-2020', '8-2-2020', '9-2-2020', '10-3-2020']
df = pd.DataFrame({'ORDER': np.random.randint(100, size=N),
'STATUS': np.random.randint(500, size=N),
'DATE':np.random.choice(L, N)}).sort_values('ORDER').reset_index(drop=True)
print (df)
In [398]: %timeit pd.concat([df, pd.DataFrame({'ORDER':df.ORDER.unique(), 'STATUS':600,'DATE':'31-12-9999'})],ignore_index=True).sort_values(['ORDER','STATUS'])
31 ms ± 1.41 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [399]: %timeit pd.concat([df, df.drop_duplicates('ORDER', keep='last').assign(STATUS=600, DATE='31-12-9999')]).sort_index(kind='mergesort').reset_index(drop=True)
28 ms ± 354 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [400]: %timeit pd.concat([group.append({'ORDER':name,'STATUS':600, 'DATE':'31-12-9999'}, ignore_index=True) for name,group in df.groupby('ORDER')],ignore_index=True )
290 ms ± 46.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
|
python-3.x|pandas|dataframe
| 2
|
7,180
| 60,252,049
|
Spacing timestamp in pandas plot using seaborn
|
<p>My code is shown below. I want to space out the timestamps, as the plot looks squeezed.</p>
<pre><code> import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('../Data/User/DataSample.csv')
dataset.head(10)
Grade Time
0 Pass 2020-02-13 13:24:56
1 Pass 2020-02-13 13:25:00
2 Pass 2020-02-13 13:25:04
3 Pass 2020-02-13 13:25:08
4 Pass 2020-02-13 13:25:13
5 Pass 2020-02-13 13:25:17
6 Pass 2020-02-13 13:25:21
7 Pass 2020-02-13 13:25:27
8 Pass 2020-02-13 13:25:31
9 Pass 2020-02-13 13:26:19
sns.scatterplot(x='Time', y='Grade', hue='Grade', data=dataset)
plt.gcf().autofmt_xdate()
</code></pre>
<p><a href="https://i.stack.imgur.com/fkeTU.png" rel="nofollow noreferrer">Result of the plot</a></p>
|
<p>Thanks for including a sample dataset.</p>
<p>Give this a shot:</p>
<pre><code>import matplotlib.dates as mdates
ax = sns.scatterplot(x='Time',y='Grade',hue='Grade',data=df)
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d %H:%M'))
ax.xaxis.set_major_locator(mdates.HourLocator())
plt.gcf().autofmt_xdate()
plt.show()
</code></pre>
|
python|pandas|plot|seaborn
| 0
|
7,181
| 65,254,938
|
Simple GAN predicts NaN in Tensorflow after 2 steps
|
<p>I'm implementing a basic GAN based on the one in the <a href="https://www.tensorflow.org/tutorials/generative/dcgan" rel="nofollow noreferrer">Tensorflow documentation</a>.</p>
<p>After 2 training steps, the prediction from the generator is all NaN.</p>
<p>I don't know why it happens, but I noticed that the gradients of the 2nd convolution layer of the discriminator are all NaN since the first step:</p>
<blockquote>
<p><tf.Tensor: shape=(5, 5, 64, 128), dtype=float32, numpy=
array([[[[nan, nan, nan, ..., nan, nan, nan], [nan, nan, nan,</p>
</blockquote>
<p>My loss functions:</p>
<pre><code>loss = BinaryCrossentropy(from_logits=True)
def generator_loss(discriminator_output):
loss_value = loss(tf.ones_like(discriminator_output), discriminator_output)
print(f"Generator loss: {loss_value}")
return loss_value
def discriminator_loss(outputs_for_reals, outputs_for_fakes):
real_loss = loss(tf.ones_like(outputs_for_reals), outputs_for_reals)
fake_loss = loss(tf.zeros_like(outputs_for_fakes), outputs_for_fakes)
print(f"Disc loss (real/fake): {real_loss}/{fake_loss}")
return real_loss + fake_loss
</code></pre>
<p>The training loop:</p>
<pre><code>def training_step(self, real_images):
noise = tf.random.normal([BATCH_SIZE, NOISE_DIM])
with tf.GradientTape() as generator_tape, tf.GradientTape() as discriminator_tape:
generated_images = self._generator(noise, training=True)
if True in tf.math.is_nan(generated_images)[0, 0, :, 0]:
print("NaN result found")
else:
print("OK Result")
results_for_real = self._discriminator(real_images, training=True)
results_for_fake = self._discriminator(generated_images, training=True)
generator_loss = self._generator_loss(results_for_fake)
discriminator_loss = self._discriminator_loss(results_for_real, results_for_fake)
generator_gradients = generator_tape.gradient(generator_loss,
self._generator.trainable_variables)
discriminator_gradients = discriminator_tape.gradient(discriminator_loss,
self._discriminator.trainable_variables)
self._generator_optimizer.apply_gradients(
zip(generator_gradients, self._generator.trainable_variables))
self._discriminator_optimizer.apply_gradients(
zip(discriminator_gradients, self._discriminator.trainable_variables))
return generator_loss, discriminator_loss, generated_images
</code></pre>
<p>I build the models exactly the same way as in the documentation.</p>
<p>Things I tried:</p>
<ul>
<li>Reducing the learning rate</li>
<li>Running the model in different <code>training</code> modes (<code>training=False</code>/<code>training=True</code>)</li>
<li>Decorating the training step with <code>tf.function</code></li>
</ul>
<p>No matter what I do, calling the generator with any input will produce exclusively <code>NaN</code> elements.
Example output:</p>
<pre><code>2020-12-11 18:15:42.783543: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: Invoking GPU asm compilation is supported on Cuda non-Windows platforms only
Relying on driver to perform ptx compilation.
Modify $PATH to customize ptxas location.
This message will be only logged once.
OK Result
Generator loss: 1.5484254360198975
Disc loss (real/fake): 1.4994642734527588/0.2869953215122223
OK Result
Generator loss: 1.2899521589279175
Disc loss (real/fake): 1.3189733028411865/0.3767632842063904
Backend Qt5Agg is interactive backend. Turning interactive mode on.
NaN result found
</code></pre>
|
<p>I was using an RTX 3080, which only works with CUDA 11, while I had CUDA 10.1 installed. With 10.1 the code runs without warnings but produces garbage/NaN values.</p>
|
python|tensorflow|keras
| 0
|
7,182
| 65,209,035
|
Renaming column names from a data set in pandas
|
<p>I am trying to rename column names from a DataFrame that have a space in the name. The DataFrame (<strong>df</strong>) consists of 45 columns and the majority have spaces in the name. For instance: <code>df.column.values [1] = 'Date Release'</code>, and the name should be changed to <code>'Date_Release'</code>. I tried <code>DataFrame.rename ()</code> and <code>DataFrame.columns.values[]</code> but neither worked. I would much appreciate it if you could help me find out what I did wrong.</p>
<pre><code>for colmns in df:
if ' ' in colmns:
colmns_new = '_'.join(colmns.split())
df = df.rename (columns = {"\"%s\"" %colmns : "\"%s\"" %colmns_new})
else:
print (colmns)
print (df)
</code></pre>
<p>or this one:</p>
<pre><code>for i in range (len(df.columns)):
old= df.columns.values[i]
if ' ' in old:
new = '_'.join(old.split())
df = df.columns.values[i] = ['%s' % new]
print ("\"%s\"" % new)
print (df)
</code></pre>
<p>Error: <strong>AttributeError: 'list' object has no attribute 'columns'</strong></p>
|
<pre><code>import pandas as pd
df.columns = [i.replace(' ','_') for i in df.columns]
</code></pre>
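<p>An equivalent vectorised form uses the string methods of the column index:</p>
<pre><code>df.columns = df.columns.str.replace(' ', '_')
</code></pre>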
|
python|pandas|rename
| 1
|
7,183
| 65,307,308
|
ValueError: Unknown layer: Functional Keras load_model
|
<pre><code>model = tf.keras.models.load_model(model_path)
</code></pre>
<p>is giving me error as,</p>
<pre><code>ValueError('Unknown ' + printable_module_name + ': ' + class_name) ValueError: Unknown layer: Functional
</code></pre>
<p>I tried setting path <code>model_path = "{}{}".format(Path().absolute(),"/model_new.h5")</code>
using tensorflow 2.2.0 and keras 2.4.3</p>
|
<blockquote>
<p>ValueError: Unknown layer: Functional</p>
</blockquote>
<p>You are getting the above error because you trained the model using <code>Tensorflow 2.2.0</code> and then tried to access the saved model (i.e. <code>model_new.h5</code>) using a different version of Tensorflow.</p>
<p>To fix the problem you can try to load the model using <code>Tensorflow 2.2.0</code>. (Or)</p>
<p>If you don't want to downgrade the tensorflow version, then you can retrain the model using the same version you will load it with.</p>
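<p>A quick sketch for checking which versions are involved before loading (for the TF 2.x versions discussed here):</p>
<pre><code>import tensorflow as tf

print(tf.__version__)        # TensorFlow version used for loading
print(tf.keras.__version__)  # bundled Keras version
model = tf.keras.models.load_model(model_path)
</code></pre>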
|
tensorflow|keras
| 0
|
7,184
| 49,929,378
|
Unpack list of dicts into list in pandas dataframe
|
<p>I have a pandas dataframe that includes a column of lists of dictionaries.</p>
<pre><code> list_dicts
id
a1 [{name:'cat'}, {name:'dog'}]
a2 [{name:'toy'}, {name:'boy'}]
a3 [{name:'jack'},{name:'jill'},{name:'sam'}]
a4 [{name:'pig'}]
</code></pre>
<p>Every key in the list of dicts is 'name'. I want to create a list of all the values associated with the 'name' keys and append the new column to the existing dataframe, as shown below.</p>
<pre><code> list_from_dict
id
a1 ['cat','dog']
a2 ['toy','boy']
a3 ['jack','jill','sam']
a4 ['pig']
</code></pre>
<p>How can I achieve this? I understand it'll probably use a lambda function, but not sure how.</p>
|
<p>You could do this with list comprehension and without a lambda function in just one line:</p>
<pre><code>df['list_from_dict'] = [[x['name'] for x in list_dict] for list_dict in df['list_dicts']]
</code></pre>
|
python|list|pandas|dictionary|dataframe
| 3
|
7,185
| 49,993,361
|
Creating a new 2 column numpy array from filtering through the first column/array
|
<p>I am trying to create a new 2 dimensional or 2 column array, which will consist of (data value <=20000) from the first column, and their associated ID values in the second column. Mathematically I am doing the following:
I am reading data from a text file. I am finding distance to all the points from the last point.</p>
<pre><code># ID M1 M2 M3 M4 R4 M5 R5 x y z
10217 11.467 11.502 13.428 13.599 432.17 13.266 281.06 34972.8 42985.9 14906
7991 11.529 11.559 13.438 13.520 435.23 13.224 272.23 8538.05 33219.8 43375.1
2100 11.526 11.573 13.478 13.490 448.97 13.356 301.27 9371.75 13734.1 43398.6
9467 11.557 11.621 13.481 13.537 449.99 13.367 303.67 33200.3 36008.9 12735.8
4002 11.454 11.530 13.502 13.583 457.34 13.327 294.53 44607.2 10410.9 9090
2971 11.475 11.563 13.506 13.558 458.77 13.391 309.43 29818.3 98.65 11718.6
1243 11.538 11.581 13.509 13.513 459.62 13.377 306.09 16238.4 11067.9 25048
9953 11.523 11.544 13.559 13.913 477.72 13.440 321.20 34589.6 42869 14878.6
7411 11.547 11.576 13.610 13.658 496.81 13.479 330.96 31436 42092.8 12307.8
1820 11.606 11.619 13.652 12.543 513.11 13.571 355.21 1758.75 15809.8 40473.6
2792 11.647 11.679 13.744 13.877 550.82 13.643 375.38 24393 6774.8 8346.35
510 11.687 11.717 13.771 13.810 562.27 13.642 375.14 22340.3 9316.4 13209.9
1721 11.602 11.646 13.821 14.139 584.37 13.770 413.84 2144.95 15769.1 40470.1
</code></pre>
<p>After I get the distances, I only want to take distances<=20,000 from my calculations and also their associated ID column. </p>
<p>So far I wrote this code to return calculated distances and IDs:</p>
<pre><code># Find nearest neighbors
import numpy as np
import matplotlib.pyplot as plt
halo = 'nntest.txt'
ID, m,r,x,y,z= np.loadtxt(halo, usecols=(0,6,7,8,9,10), unpack =True)
# selet the last point
m_mass = m[-1:]
ID_mass = ID[-1:]
r_mass = r[-1:]
x_mass = x[-1:]
y_mass = y[-1:]
z_mass = z[-1:]
#######################################
#Find distance to all points from our targeted point
nearest_neighbors = []
def neighbors(ID_mass, cx,cy,cz, ID, x, y, z):
dist = np.sqrt((cx-x)**2 + (cy-y)**2 + (cz-z)**2)
return dist, ID
for i in range(len(ID_mass)):
hist = neighbors(ID_mass[i], x_mass[i], y_mass[i], z_mass[i], ID, x, y, z)
print hist
#print all the IDs which are associated with dist<=20000
if (hist[0]<=20000):
print ID
nearest_neighbors.append(hist)
print nearest_neighbors
</code></pre>
<p>But I am having a problem returning the new array, which will only contain distances &lt;= 20000 and the associated IDs. I apologize in advance if this is not a good working example. But I will very much appreciate your suggestions for getting the desired output.</p>
|
<p>Between the question you asked, and the code you have provided, I am still somewhat unclear on what you want to accomplish. But I can at least show you where there are errors in the code, and perhaps give you the tools you need.</p>
<p>As your code is now, x, y, z are all vectors. So the result of the neighbors distance calculation,</p>
<pre><code>dist = np.sqrt((cx-x)**2 + (cy-y)**2 + (cz-z)**2)
</code></pre>
<p>will be a vector. I think this is what you intended since the other values are indexed. But this means you run into trouble with</p>
<pre><code>if (hist[0]<=20000):
print ID
</code></pre>
<p>Numpy will treat the inequality as a mask, so <code>hist[0]<=20000</code> will look something like <code>[True, False, False, ...]</code>. Used properly, I think that numpy array masks are perfect for what you want.
For example, you could try</p>
<pre><code>for i in range(len(ID_mass)):
hist = neighbors(ID_mass[i], x_mass[i], y_mass[i], z_mass[i], ID, x, y, z)
print hist
#print all the IDs which are associated with dist<=20000
print ID[hist[0]<=20000]
nearest_neighbors.extend(list(zip(hist[0][hist[0]<=20000],ID[hist[0]<=20000])))
print nearest_neighbors
</code></pre>
<p>This line where we extend the nearest_neighbors list is a bit of a mess, and I may not have fully understood what you want the output to look like. But this will make a list of tuples, where each tuple contains the distance value and the ID for all of the cases where distance was less than 20000.</p>
|
python|arrays|numpy
| 1
|
7,186
| 50,062,936
|
Select columns periodically on pandas DataFrame
|
<p>I'm working on a Dataframe with <code>1116 columns</code>, how could I select just the columns in <strong>a period of 17</strong> ?
More clearly select the 12th, 29th,46th,63rd... columns </p>
|
<p>You can use <code>range</code> syntax:</p>
<pre><code>cols = range(12, 1116, 17)
</code></pre>
<p>Then use this to feed <code>pd.DataFrame.iloc</code>:</p>
<pre><code>df = df.iloc[:, cols]
</code></pre>
<p>Just remember that Python indexing begins at 0, so the first column with index 12 will be the 13th. This can easily be adjusted as necessary.</p>
|
python|pandas|dataframe
| 1
|
7,187
| 50,205,096
|
Mean of grouped data
|
<p>I have data in a dataframe regarding salaries of employees. Each employee also has data stored about their sex, discipline, years since earning phd, and years working at the current employer. An example of the data is as follows.</p>
<pre><code> rank dsc phd srv sex salary
1 Prof B 19 18 Male 139750
2 Prof B 20 16 Male 173200
3 Asst B 4 3 Male 79750
4 Prof B 45 39 Male 115000
5 Prof B 40 41 Male 141500
6 Assoc B 6 6 Male 97000
7 Prof B 30 23 Male 175000
8 Prof B 45 45 Male 147765
9 Prof B 21 20 Male 119250
10 Prof B 18 18 Female 129000
</code></pre>
<p>What I want to access is the mean salary of all employees grouped by both sex and a range of ten years of service. For example; Males that have 0-10 years of service, females with 0-10 years of service, Males that have 11-20 years of service, etc. I can get the mean of a range of workers with ranges of years working without separating by the sexes by doing: </p>
<pre><code> serviceSalary = data.groupby(pd.cut(data['yrs.service'], np.arange(0, 70,
10)))['salary'].mean()
</code></pre>
<p>What further can I do to add a third grouping to this variable?</p>
|
<p>You can groupby multiple columns with a list as the first argument, so instead of just one:</p>
<pre><code>In [11]: df.groupby(pd.cut(df['srv'], np.arange(0, 70, 10)))['salary'].mean()
Out[11]:
srv
(0, 10] 88375.0
(10, 20] 140300.0
(20, 30] 175000.0
(30, 40] 115000.0
(40, 50] 144632.5
(50, 60] NaN
Name: salary, dtype: float64
</code></pre>
<p>can pass <code>'sex'</code> too:</p>
<pre><code>In [12]: df.groupby([pd.cut(df['srv'], np.arange(0, 70, 10)), 'sex'])['salary'].mean()
Out[12]:
srv sex
(0, 10] Male 88375.000000
(10, 20] Female 129000.000000
Male 144066.666667
(20, 30] Male 175000.000000
(30, 40] Male 115000.000000
(40, 50] Male 144632.500000
Name: salary, dtype: float64
</code></pre>
|
python|pandas|numpy
| 4
|
7,188
| 64,038,547
|
Extracting Text Value if exists in another column or array
|
<p>I have a code that extracts the string in a column that has @ before the string, like below:</p>
<pre><code>df['new_column'] = df[text].str.extract(r"@([A-Za-z]+)")
</code></pre>
<p>But what if the text column sometimes contains the string I want to extract without the @ sign in front of it? How can I account for those so I don't miss them?</p>
<pre><code>text
@bobby
@mike
why @mike
huh, @brad
@brad
cmon @Sunny_Bat
@Sunny_Bat
@But_ten
@g2
@Mikey/fj
@4242343
</code></pre>
<p>Also, can I make sure the extracted text exists in another column of the df?</p>
<p>For example when I add isin to my code above, it outputs True or False, instead of that can I just output the actual text value itself?</p>
<pre><code> df['new_column'] = df['text'].str.extract(r"@([A-Za-z]+)").isin(name_list)
</code></pre>
<p>where <em><strong>name_list</strong></em> is an array of all the unique values of the <em><strong>name_list</strong></em> column in the same df</p>
<p>This gives me a new column that says True or False when I would want the name or nan itself instead</p>
<p>Desired Output:</p>
<pre><code>text | new column
@bobby bobby
@mike mike
why @mike mike
huh, @brad brad
@brad brad
cmon @Sunny_bat Sunny_bat
@Sunny_Bat Sunny_bat
@But_ten But_ten
@g2 g2
@Mikey/fj Mikey
@4242343 NaN
</code></pre>
<p>Thanks!</p>
|
<p>The regex pattern depends on which character you want to capture. By your desired output, you need to change the pattern as follows:</p>
<pre><code>df['new_column'] = df['text'].str.extract(r"@([A-Za-z_]+([0-9]+)*)")[0]
Out[51]:
text new_column
0 @bobby bobby
1 @mike mike
2 why @mike mike
3 huh, @brad brad
4 @brad brad
5 cmon @Sunny_Bat Sunny_Bat
6 @Sunny_Bat Sunny_Bat
7 @But_ten But_ten
8 @g2 g2
9 @Mikey/fj Mikey
10 @4242343 NaN
</code></pre>
<hr />
<p><strong>Original</strong>:</p>
<p>You may try <code>where</code> with <code>lambda</code></p>
<pre><code>name_list = ['brad', 'mike']
df['text'].str.extract(r"@([A-Za-z]+)").where(lambda x: x[0].isin(name_list))
Out[1658]:
0
0 NaN
1 mike
2 mike
3 brad
4 brad
5 NaN
6 NaN
7 NaN
</code></pre>
|
python|python-3.x|regex|pandas
| 1
|
7,189
| 63,896,884
|
how to select values from multiple columns based on a condition
|
<p>I have a dataframe which has information about people with balance in their different accounts. It looks something like below.</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'name':['John', 'Jacob', 'Mary', 'Sue', 'Harry', 'Clara'],
'accnt_1':[2, np.nan, 13, np.nan, np.nan, np.nan],
'accnt_2':[32, np.nan, 12, 21, 32, np.nan],
'accnt_3':[11,21,np.nan,np.nan,2,np.nan]})
df
</code></pre>
<p><a href="https://i.stack.imgur.com/nCYCL.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nCYCL.jpg" alt="enter image description here" /></a></p>
<p>I want to get the balance for each person: if accnt_1 is not empty, that is the balance of that person. If accnt_1 is empty and accnt_2 is not, the number in accnt_2 is the balance. If both accnt_1 and accnt_2 are empty, whatever is in accnt_3 is the balance.
In the end the output should look like</p>
<pre><code>out_df = pd.DataFrame({'name':['John', 'Jacob', 'Mary', 'Sue', 'Harry', 'Clara'],
'balance':[2, 21, 13, 21, 32, np.nan]})
out_df
</code></pre>
<p><a href="https://i.stack.imgur.com/Hiefv.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hiefv.jpg" alt="enter image description here" /></a></p>
<p>I will always know the priority of the columns. I can write a simple function and apply it on this dataframe. But I was wondering whether there is a better and faster way to do this using pandas/numpy.</p>
|
<p>If balance means the first non-missing value after <code>name</code>, you can convert <code>name</code> to the index, then back fill missing values and select the first column by position:</p>
<pre><code>df = df.set_index('name').bfill(axis=1).iloc[:, 0].rename('balance').reset_index()
print (df)
name balance
0 John 2.0
1 Jacob 21.0
2 Mary 13.0
3 Sue 21.0
4 Harry 32.0
5 Clara NaN
</code></pre>
<p>If need specify columns names in order by list:</p>
<pre><code>cols = ['accnt_1','accnt_2','accnt_3']
df = df.set_index('name')[cols].bfill(axis=1).iloc[:, 0].rename('balance').reset_index()
</code></pre>
<p>Or if need filter only <code>accnt</code> columns use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.filter.html" rel="nofollow noreferrer"><code>DataFrame.filter</code></a>:</p>
<pre><code>df = df.set_index('name').filter(like='accnt').bfill(axis=1).iloc[:, 0].rename('balance').reset_index()
</code></pre>
|
python-3.x|pandas|numpy
| 0
|
7,190
| 63,809,217
|
Pandas python - Find the minimum timedelta of each group
|
<p>I want to find the min timedelta of each group.
For example, I have the following data set:</p>
<pre><code>DataSet
Name TimeFromStart
Omri 442 days
Omri 480 days
Omri 443 days
Lior 115 days
Lior 80 days
Lior 0 days
Output:
Name MinTimeDelta:
Omri 1
Lior 35
</code></pre>
<p>I assume there is a neat and clean way to do this through groupby with pandas but unfortunately I didn't manage to find how to do this with groupby.</p>
|
<p>First sort both columns by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>DataFrame.sort_values</code></a>, then use a custom lambda function with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.diff.html" rel="nofollow noreferrer"><code>Series.diff</code></a> and <code>min</code>, convert to days by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.days.html" rel="nofollow noreferrer"><code>Series.dt.days</code></a> and last convert the <code>Series</code> to a <code>DataFrame</code>:</p>
<pre><code>df1 = (df.sort_values(['Name','TimeFromStart'])
.groupby('Name')['TimeFromStart']
.apply(lambda x: x.diff().min())
.dt.days
.reset_index(name='MinTimeDelta'))
print (df1)
Name MinTimeDelta
0 Lior 35
1 Omri 1
</code></pre>
|
python|pandas|group-by
| 1
|
7,191
| 47,026,585
|
unhashable type 'list' error with get_dummies
|
<p>I have a dataframe with data like the sample data below. I'm trying to create dummy variables for the values in the categories field using get_dummies but I'm getting the error below when I run the code below. What I would like is say for example with the first record, to have one column called "Ramen" with a 1 in it and another column called "Japanese" with a 1 in it.</p>
<p>Sample Data:</p>
<pre><code> user_id business_id stars_x \
1 CxDOIDnH8gp9KXzpBHJYXw XSiqtcVEsP6dLOL7ZA9OxA 4
2 CxDOIDnH8gp9KXzpBHJYXw v95ot_TNwTk1iJ5n56dR0g 3
3 CxDOIDnH8gp9KXzpBHJYXw uloYxyRAMesZzI99mfNInA 2
4 CxDOIDnH8gp9KXzpBHJYXw gtcsOodbmk4E0TulYHnlHA 4
address attributes \
1 522 Yonge Street {u'BusinessParking': {u'garage': False, u'stre...
2 1661 Denison Street {u'BusinessParking': {u'garage': False, u'stre...
3 4101 Rutherford Road {u'BusinessParking': {u'garage': False, u'stre...
4 815 W Bloor Street {u'Alcohol': u'full_bar', u'HasTV': False, u'N...
categories city \
1 [Restaurants, Ramen, Japanese] Toronto
2 [Chinese, Seafood, Restaurants] Markham
3 [Italian, Restaurants] Woodbridge
4 [Food, Coffee & Tea, Sandwiches, Cafes, Cockta... Toronto
hours is_open latitude \
1 {u'Monday': u'11:00-22:00', u'Tuesday': u'11:0... 1 43.663689
2 {} 0 43.834295
3 {u'Monday': u'12:00-22:00', u'Tuesday': u'12:0... 1 43.823486
4 {u'Monday': u'12:00-2:00', u'Tuesday': u'12:00... 1 43.662726
longitude name neighborhood postal_code \
1 -79.384200 Kenzo Ramen Downtown Core M4Y 1X9
2 -79.305282 Vince Seafood Restaurant & BBQ Milliken L3R 6E4
3 -79.568345 Motorino Enoteca Pine Grove L4L 1A5
4 -79.422167 Northwood Bickford Park M6G 1M1
review_count stars_y state good_reviews
1 76 3.5 ON True
2 23 3.5 ON False
3 26 3.5 ON False
4 93 4.0 ON True
</code></pre>
<p>Code:</p>
<pre><code>pd.get_dummies(bus_rev['categories'])
</code></pre>
<p>Error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-23-e57eccbfbe12> in <module>()
----> 1 bus_rev_cat2 = pd.get_dummies(bus_rev['categories'])
2 #bus_revlist = pd.concat([bus_rev,bus_rev_cat2],axis=1)
3 #bus_revlist.head()
/Users/anaconda/lib/python2.7/site-packages/pandas/core/reshape.pyc in get_dummies(data, prefix, prefix_sep, dummy_na, columns, sparse, drop_first)
1102 else:
1103 result = _get_dummies_1d(data, prefix, prefix_sep, dummy_na,
-> 1104 sparse=sparse, drop_first=drop_first)
1105 return result
1106
/Users/anaconda/lib/python2.7/site-packages/pandas/core/reshape.pyc in _get_dummies_1d(data, prefix, prefix_sep, dummy_na, sparse, drop_first)
1109 sparse=False, drop_first=False):
1110 # Series avoids inconsistent NaN handling
-> 1111 codes, levels = _factorize_from_iterable(Series(data))
1112
1113 def get_empty_Frame(data, sparse):
/Users/anaconda/lib/python2.7/site-packages/pandas/core/categorical.pyc in _factorize_from_iterable(values)
2038 codes = values.codes
2039 else:
-> 2040 cat = Categorical(values, ordered=True)
2041 categories = cat.categories
2042 codes = cat.codes
/Users/anaconda/lib/python2.7/site-packages/pandas/core/categorical.pyc in __init__(self, values, categories, ordered, name, fastpath)
288 codes, categories = factorize(values, sort=True)
289 except TypeError:
--> 290 codes, categories = factorize(values, sort=False)
291 if ordered:
292 # raise, as we don't have a sortable data structure and so
/Users/anaconda/lib/python2.7/site-packages/pandas/core/algorithms.pyc in factorize(values, sort, order, na_sentinel, size_hint)
311 table = hash_klass(size_hint or len(vals))
312 uniques = vec_klass()
--> 313 labels = table.get_labels(vals, uniques, 0, na_sentinel, True)
314
315 labels = _ensure_platform_int(labels)
pandas/src/hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_labels (pandas/hashtable.c:15447)()
TypeError: unhashable type: 'list'
</code></pre>
|
<p>You can try this </p>
<pre><code>df=pd.DataFrame( {'categories':[['Restaurants', 'Ramen', 'Japanese'],['Chinese', 'Seafood', 'Restaurants']]})
pd.get_dummies(df.categories.apply(pd.Series).stack()).sum(level=0)
Out[1095]:
Chinese Japanese Ramen Restaurants Seafood
0 0 1 1 1 0
1 1 0 0 1 1
</code></pre>
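<p>If you then want those indicator columns alongside the rest of the original dataframe, one way (a sketch) is to join them back on the index:</p>
<pre><code>dummies = pd.get_dummies(df.categories.apply(pd.Series).stack()).groupby(level=0).sum()  # same result as sum(level=0)
df = df.join(dummies)
</code></pre>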
|
python|pandas
| 10
|
7,192
| 46,855,689
|
ETL process with Python and SQL Server taking a really long time to load
|
<p>I'm looking for a technique that will increase the performance of a csv file SQL Server database load process. I've attempted various approaches but nothing I do seems to be able to break the 5.5 hour barrier. That's just testing loading a year of data which is about 2 million records. I have 20 years of data to load eventually so loading data for 4 days straight isn't going to work.</p>
<p>The challenge is, the data has to be enriched on load. I have to add some columns because that information isn't native to the file. So far I've tried:</p>
<ol>
<li>Using petl to append columns to the data and then flush that to the database.</li>
<li>Using pandas to append columns to the data and then flushing the data frame to the database.</li>
<li>Using bulk load to load an intermediary staging table and then using T-SQL to populate the extra columns and then pushing that on to a final staging table.</li>
</ol>
<p>Bulk load works REALLY fast but then I have to add the data for the extra columns and we're back to row level operations which I think is the bottleneck here. I'm getting ready to try:</p>
<ol>
<li>Appending the data with Pandas.</li>
<li>Writing the data back out to a CSV.</li>
<li>Bulk loading the CSV.</li>
</ol>
<p>This bothers me because I now have two I/O operations. Read the file into pandas and write the file back out again.</p>
<p>I read somewhere that Pandas was written in C or something so it should be really fast. Flushing a dataframe to the database wasn't that fast. At this point, I'm asking if anybody has a faster approach that they use in the real world. So far what I have is below:</p>
<pre><code>import pypyodbc
conn_str = "DSN=[dsn name];"
cnxn = pypyodbc.connect(conn_str)
crsr = cnxn.cursor()
sql = "BULK INSERT pre_stage_view FROM '[file path]' WITH (FIELDTERMINATOR = ',',ROWTERMINATOR = '\n')"
crsr.execute(sql)
cnxn.commit()
crsr.close()
cnxn.close()
</code></pre>
<p>This is the stored procedure get rid of headers:</p>
<pre><code>DELETE FROM pre_stage_table WHERE Symbol = 'Symbol'
INSERT INTO stage_table(
[Symbol],
[Exchange],
[Date],
[Open],
[High],
[Low],
[Close],
[Volume],
[SourceSystem],
[RunDate]
)
SELECT
[Symbol],
@exchange, --passed in proc parameter
[Date],
[Open],
[High],
[Low],
[Close],
[Volume],
'EODData',
CURRENT_TIMESTAMP
FROM pre_stage_table
TRUNCATE TABLE pre_stage_table
</code></pre>
|
<blockquote>
<p>Bulk load works REALLY fast but then I have to add the data for the extra columns and we're back to row level operations which I think is the bottleneck here.</p>
</blockquote>
<p>Sorry but I do not understand why you have row level operations. Try:</p>
<p>1) bulk load to stage table </p>
<p>2) <a href="https://docs.microsoft.com/en-us/sql/t-sql/statements/merge-transact-sql" rel="nofollow noreferrer"><code>MERGE</code></a> stage table with target table</p>
<p>You will still get set-based approach with presumably decent performance. Remember to disable triggers (if possible on target) plus you may drop indexes, load data and rebuild them after.</p>
|
python|sql-server|pandas|csv|tsql
| 2
|
7,193
| 67,822,077
|
pandas: Removing rows from cumulative column after it reaches a threshold
|
<p>I am working with the following dataframe:</p>
<pre><code>id1 Val cum_val
3233 24 24
3233 12 36
3233 7 43
3233 6 49
3233 6 55
3233 3 58
3255 5 5
3255 44 49
3255 4 53
3255 8 61
3255 8 69
</code></pre>
<p>where the cum_val column is cumulative of Val within each group of id1 3233 and 3255.</p>
<p>I want to get the following:</p>
<pre><code>id1 Val cum_val
3233 24 24
3233 12 36
3233 7 43
3233 6 49
3233 6 55
3255 5 5
3255 44 49
3255 4 53
</code></pre>
<p>i.e. keep only the rows until the <code>cum_val</code> reaches the first value greater than 50. E.g. for id1 = 3255, I have discarded rows with <code>cum_val</code> of 61 and 69 as 53 was the first value greater than 50.</p>
<p>I am not sure how to approach this.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.shift.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.shift</code></a> for shifting values per group and test for less than <code>50</code>, so that the first row after reaching <code>50</code> is also kept:</p>
<pre><code>df = df[df.groupby('id1')['cum_val'].shift(fill_value=0).lt(50)]
print (df)
id1 Val cum_val
0 3233 24 24
1 3233 12 36
2 3233 7 43
3 3233 6 49
4 3233 6 55
6 3255 5 5
7 3255 44 49
8 3255 4 53
</code></pre>
|
python|pandas
| 2
|
7,194
| 67,715,525
|
rename the "name" value in pandas dataframe
|
<p>New to pandas</p>
<p>Here is the code that I am working on:</p>
<pre><code>import pandas as pd
import yahoo_fin.stock_info as si
def dividend(stocks):
for sName in stocks:
print(sName)
print('Dividend History: ')
df = si.get_dividends(sName, '01-01-2019').iloc[:, :1]
#df.rename(columns={'2019-02-20 00:00:00': '2019-02-20'}, inplace = True)
df.to_excel("1.xlsx")
print(df.iloc[0])
stocks = ['MSFT']
dividend(stocks)
</code></pre>
<p>the output from the excel file:
<a href="https://i.stack.imgur.com/16N9P.png" rel="nofollow noreferrer">output</a></p>
<p>I would like to get rid of the "00:00:00"s.</p>
<p>I have tried using .rename and vectorization and neither seems to work(I might be doing it wrong)</p>
<p>thanks in advance</p>
|
<p>The column with no name, looking like dates with hour part, in your
source DataFrame, is probably the <strong>index</strong> of <em>object</em> type (actually
a string).</p>
<p>To confirm it, run <code>df.index</code> and you should get something like:</p>
<pre><code>Index(['2019-02-20 00:00:00', '2019-05-15 00:00:00', '2019-08-14 00:00:00',
'2019-11-20 00:00:00', '2020-02-19 00:00:00', '2020-05-20 00:00:00'],
dtype='object')
</code></pre>
<p>Note <em>'object'</em> as <em>dtype</em>.</p>
<p>To get rid of the time part, convert the index to <em>datetime64</em> (native
<em>pandasonic</em> date / time type):</p>
<pre><code>df.index = pd.to_datetime(df.index)
</code></pre>
<p>Now when you run <code>df.index</code> and you should get something like:</p>
<pre><code>DatetimeIndex(['2019-02-20', '2019-05-15', '2019-08-14', '2019-11-20',
'2020-02-19', '2020-05-20'],
dtype='datetime64[ns]', freq=None)
</code></pre>
<p>Note that this time <em>dtype</em> is <em>'datetime64[ns]'</em>.</p>
<p>When you print <em>df</em>, you should get:</p>
<pre><code> dividend
2019-02-20 0.46
2019-05-15 0.46
2019-08-14 0.46
2019-11-20 0.51
2020-02-19 0.51
2020-05-20 0.51
</code></pre>
<p>Under the hood the index actually still has the time part, but <em>Pandas</em>
is clever enough not to print it when all time parts are zero.</p>
|
python|pandas
| 0
|
7,195
| 61,587,830
|
Is there a way to make changing DataFrame faster in a loop?
|
<pre><code> for index, row in df.iterrows():
print(index)
name = row['name']
new_name = get_name(name)
row['new_name'] = new_name
df.loc[index] = row
</code></pre>
<p>In this piece of code, my testing shows that the last line makes it quite slow, really slow. It basically inserts a new column row by row. Maybe I should store all the 'new_name' values in a list, and update the df outside of the loop?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.apply.html" rel="nofollow noreferrer"><code>Series.apply</code></a> to apply the function to each value of the column; it is faster than <code>iterrows</code>:</p>
<pre><code>df['new_name'] = df['name'].apply(get_name)
</code></pre>
<p>If you want to improve performance further, it may be necessary to change the function itself if possible, but that depends on the function.</p>
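<p>A minimal sketch of the difference (with a hypothetical <code>get_name</code>):</p>
<pre><code>import pandas as pd

def get_name(name):
    return name.strip().title()   # placeholder transformation

df = pd.DataFrame({'name': [' alice ', 'BOB']})
df['new_name'] = df['name'].apply(get_name)   # one pass over the column, no row-by-row assignment
</code></pre>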
|
python|pandas|dataframe
| 1
|
7,196
| 61,187,702
|
Difference between two date column if and only values are present in both the columns
|
<p>I have two date columns:</p>
<pre><code>PRIMARY      CHILD        diff
05-19-1945   01-13-1994   some value in years
03-01-1963
05-33-1933   03-01-1955   some value in years
05-19-1944   06-11-1967   some value in years
04-22-2020
</code></pre>
<p>I want to show the difference in years if and only if a value is present in both columns:</p>
<pre><code>(driver_data_new['ASGN_BRTH_DT_PRIMARY']-driver_data_new['ASGN_BRTH_DT_CHILD'])/np.timedelta64(1,'Y')
</code></pre>
<p>I am getting the following error:</p>
<pre><code>---> 36 driver_data_new['ASGN_BRTH_DT_PRIMARY'].dt.date
     37 driver_data_new['ASGN_BRTH_DT_CHILD'].dt.date
     38 driver_data_new['N_range']=(driver_data_new['ASGN_BRTH_DT_PRIMARY']-driver_data_new['ASGN_BRTH_DT_CHILD'])/np.timedelta64(1,'Y')

AttributeError: Can only use .dt accessor with datetimelike values
</code></pre>
|
<p>Your error <code>AttributeError: Can only use .dt accessor with datetimelike values</code> has nothing to do with only subtracting dates where both values are available. Rather, it has to do with the data types in the columns you're using. At least one of them is not a "datetimelike" object – therefore, the <code>.dt</code> accessor just isn't available. Use <code>df.dtypes</code> to see which columns are not datetime, and <code>pandas.to_datetime</code> to convert. Once you've done that, you'll see how the difference you're trying to calculate is already handled:</p>
<pre class="lang-py prettyprint-override"><code>>>> df = pd.DataFrame({'a_dt': pd.to_datetime([np.nan, '2019-01-01', '2020-02-04']), 'b_dt': pd.to_datetime([np.nan, np.nan, '2020-03-17'])})
>>> df
a_dt b_dt
0 NaT NaT
1 2019-01-01 NaT
2 2020-02-04 2020-03-17
# Both are datetime types
>>> df.dtypes
a_dt datetime64[ns]
b_dt datetime64[ns]
dtype: object
>>> df['b_dt'] - df['a_dt']
0 NaT
1 NaT
2 42 days
dtype: timedelta64[ns]
</code></pre>
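<p>Once both columns are real datetimes, an approximate year difference (which is what the question asks for) can be computed directly, and rows with a missing value on either side simply stay <code>NaN</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> (df['b_dt'] - df['a_dt']).dt.days / 365.25   # approximate difference in years
0         NaN
1         NaN
2    0.114990
dtype: float64
</code></pre>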
|
python|pandas|datetime
| 0
|
7,197
| 68,695,012
|
Pandas: Rank Games according to score
|
<p>I am fairly new to development on any platform. I am trying the basics in Python - Pandas. When trying to practise the pandas groupby function, I am getting duplicate records. Please see the data, questions and code I tried. I appreciate any suggestions.</p>
<ol>
<li>read game.csv, game_score.csv</li>
</ol>
<p>game.csv -</p>
<pre><code> id,url,genre,editors_choice,release_year,release_month,release_day
0,/games/littlebigplanet-vita/vita-98907,Platformer,Y,2012,9,12
1,/games/littlebigplanet-ps-vita-marvel-super-hero-edition/vita-20027059,Platformer,Y,2012,9,12
2,/games/splice/ipad-141070,Puzzle,N,2012,9,12
3,/games/nhl-13/xbox-360-128182,Sports,N,2012,9,11
4,/games/nhl-13/ps3-128181,Sports,N,2012,9,11
5,/games/total-war-battles-shogun/mac- 142565,Strategy,N,2012,9,11
6,/games/double-dragon-neon/xbox-360- 131320,Fighting,N,2012,9,11
7,/games/guild-wars-2/pc-896298,RPG,Y,2012,9,11
8,/games/double-dragon-neon/ps3-131321,Fighting,N,2012,9,11
9,/games/total-war-battles-shogun/pc-142564,Strategy,N,2012,9,11
10,/games/tekken-tag-tournament-2/ps3-124584,Fighting,N,2012,9,11
</code></pre>
<p>game_score.csv</p>
<pre><code>id,score_phrase,title,platform,score
0,Painful,The History Channel: Battle for the Pacific,Wii,2.5
1,Awful,The History Channel: Battle For the Pacific,PlayStation 2,3
2,Bad,The History Channel: Battle For The Pacific,PC,4.9
3,Bad,The History Channel: Battle For the Pacific,Xbox 360,4.5
4,Bad,The History Channel: Battle For the Pacific,PlayStation 3,4.5
5,Awful,Hail to the Chimp,Xbox 360,3.5
6,Awful,Hail To The Chimp,PlayStation 3,3.5
7,Okay,Spyro: Enter The Dragonfly,PlayStation 2,6
8,Okay,Spyro: Enter the Dragonfly,GameCube,6
9,Okay,007 Legends,PlayStation 2,4
10,Okay,007 Racing,GameCube,5
</code></pre>
<ol start="2">
<li>merge the 2 csv files based on "id"</li>
<li>find the mean score of each game using groupby</li>
<li>sort the values in descending order to determine the rank</li>
<li>store the result into an o/p csv file</li>
<li>the o/p csv file contains the columns title, score</li>
<li>do not include the header while writing the o/p csv file</li>
</ol>
<p>My Code -</p>
<pre><code> import pandas as pd
file_game = pd.read_csv('game.csv')
file_game_score = pd.read_csv('game_score.csv')
merged_game_file = pd.merge(file_game, file_game_score, on='id')
final_data = merged_game_file[['title', 'score']]
mean_df = final_data.groupby('title').mean()
final_df = mean_df['score'].rank(ascending=0)
print(final_df)
</code></pre>
<h1>O/P --- final_df</h1>
<pre><code> 007 Legends,4.5
007 Racing,7.0
Hail To The Chimp,2.5
Hail to the Chimp,2.5
Spyro: Enter The Dragonfly,8.5
Spyro: Enter the Dragonfly,8.5
The History Channel: Battle For The Pacific,6.0
The History Channel: Battle For the Pacific,4.5
The History Channel: Battle for the Pacific,1.0
</code></pre>
|
<p>Here is one idea...</p>
<p><strong>Try:</strong></p>
<pre><code>import pandas as pd
file_game = pd.read_csv('game.csv')
file_game_score = pd.read_csv('game_score.csv')
# make '...Battle For the Pacific' and '...Battle For The Pacific' the same
file_game_score['title'] = file_game_score['title'].str.lower()
merged_game_file = pd.merge(file_game, file_game_score, on='id')
final_data = merged_game_file[['title', 'score']]
mean_df = final_data.groupby('title').mean()
final_df = mean_df['score'].rank(ascending=0)
print(final_df)
</code></pre>
<p><strong>Outputs:</strong></p>
<pre><code>title
007 legends 3.0
007 racing 2.0
hail to the chimp 5.0
spyro: enter the dragonfly 1.0
the history channel: battle for the pacific 4.0
</code></pre>
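<p>For the remaining requirements from the question — sort descending and write the result to a CSV without a header — a sketch, here writing the ranked Series with the title as the index (the output file name is arbitrary):</p>
<pre><code>final_df.sort_values(ascending=False).to_csv('output.csv', header=False)
</code></pre>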
|
python|pandas|csv
| 0
|
7,198
| 68,585,329
|
Is there a way to get around the 250 MB limit for an AWS lambda function?
|
<p>I'm working on a Lambda Function in AWS and I tried to use Layers to load the dependencies (which are statsmodels, scikit-learn, pyLDAvis, pandas, numpy, nltk, matplotlib, joblib, gensim, and eli5), but I'm not able to add them because I get an error saying that the maximum allowed size of the code and layers together is 262144000 bytes (250 MB). I managed to cut it down to 264 MB, but it's still not small enough, and even if it was allowed, I'm not sure it would work properly.</p>
<p>Is there any way to add more space for the dependencies? Or, alternatively, is there a way for me to delete some of the subdirectories within the packages-- for example, I only need the distributions for statsmodels, so could I delete everything else?</p>
|
<blockquote>
<p>Is there any way to add more space for the dependencies?</p>
</blockquote>
<p>If you package your lambda function as a <a href="https://docs.aws.amazon.com/lambda/latest/dg/images-create.html" rel="nofollow noreferrer">container lambda image</a>, you will have <strong>10 GB</strong> for your dependencies. At runtime, your function still has only 512 MB of /tmp storage by default, though.</p>
|
amazon-web-services|numpy|amazon-s3|aws-lambda|statsmodels
| 1
|
7,199
| 53,010,465
|
Bidirectional LSTM output question in PyTorch
|
<p>Hi I have a question about how to collect the correct result from a BI-LSTM module’s output.</p>
<p>Suppose I have a 10-length sequence feeding into a single-layer LSTM module with 100 hidden units:</p>
<pre><code>lstm = nn.LSTM(5, 100, 1, bidirectional=True)
</code></pre>
<p><code>output</code> will be of shape:</p>
<pre><code>[10 (seq_length), 1 (batch), 200 (num_directions * hidden_size)]
# or according to the doc, can be viewed as
[10 (seq_length), 1 (batch), 2 (num_directions), 100 (hidden_size)]
</code></pre>
<p>If I want to get the 3rd (1-index) input’s output at both directions (two 100-dim vectors), how can I do it correctly?</p>
<p>I know <code>output[2, 0]</code> will give me a 200-dim vector. <strong>Does this 200 dim vector represent the output of 3rd input at both directions?</strong></p>
<p>A thing bothering me is that when do reverse feeding, the 3rd (1-index) output vector is calculated from the 8th(1-index) input, right?</p>
<p>Will pytorch automatically take care of this and group output considering direction?</p>
<p>Thanks!</p>
|
<p>Yes, when using a BiLSTM the hidden states of the directions are just concatenated (the second part after the middle is the hidden state for feeding in the reversed sequence). <br>So splitting up in the middle works just fine. </p>
<p>As reshaping works from the right to the left dimensions you won't have any problems in separating the two directions.</p>
<hr>
<p>Here is a small example:</p>
<pre class="lang-py prettyprint-override"><code># so these are your original hidden states for each direction
# in this case hidden size is 5, but this works for any size
direction_one_out = torch.tensor(range(5))
direction_two_out = torch.tensor(list(reversed(range(5))))
print('Direction one:')
print(direction_one_out)
print('Direction two:')
print(direction_two_out)
# before outputting they will be concatenated
# I'm adding here batch dimension and sequence length, in this case seq length is 1
hidden = torch.cat((direction_one_out, direction_two_out), dim=0).view(1, 1, -1)
print('\nYour hidden output:')
print(hidden, hidden.shape)
# trivial case, reshaping for one hidden state
hidden_reshaped = hidden.view(1, 1, 2, -1)
print('\nReshaped:')
print(hidden_reshaped, hidden_reshaped.shape)
# This works as well for arbitrary sequence lengths as you can see here
# I've set sequence length here to 5, but this will work for any other value as well
print('\nThis also works for more multiple hidden states in a tensor:')
multi_hidden = hidden.expand(5, 1, 10)
print(multi_hidden, multi_hidden.shape)
print('Directions can be split up just like this:')
multi_hidden = multi_hidden.view(5, 1, 2, 5)
print(multi_hidden, multi_hidden.shape)
</code></pre>
<p><strong>Output:</strong></p>
<pre class="lang-py prettyprint-override"><code>Direction one:
tensor([0, 1, 2, 3, 4])
Direction two:
tensor([4, 3, 2, 1, 0])
Your hidden output:
tensor([[[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]]]) torch.Size([1, 1, 10])
Reshaped:
tensor([[[[0, 1, 2, 3, 4],
[4, 3, 2, 1, 0]]]]) torch.Size([1, 1, 2, 5])
This also works for more multiple hidden states in a tensor:
tensor([[[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],
[[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],
[[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],
[[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]],
[[0, 1, 2, 3, 4, 4, 3, 2, 1, 0]]]) torch.Size([5, 1, 10])
Directions can be split up just like this:
tensor([[[[0, 1, 2, 3, 4],
[4, 3, 2, 1, 0]]],
[[[0, 1, 2, 3, 4],
[4, 3, 2, 1, 0]]],
[[[0, 1, 2, 3, 4],
[4, 3, 2, 1, 0]]],
[[[0, 1, 2, 3, 4],
[4, 3, 2, 1, 0]]],
[[[0, 1, 2, 3, 4],
[4, 3, 2, 1, 0]]]]) torch.Size([5, 1, 2, 5])
</code></pre>
<p><em>Hope this helps! :)</em></p>
|
machine-learning|neural-network|deep-learning|lstm|pytorch
| 8
|